That collaboration with the Institute left a strong impression. “It was a very well organised event, professionally put together,” Daniel recalls. Unlike many academic book launches, this one had room to experiment: the Institute team helped bring critic and podcast host Will DiGravio over from the US, handled the logistics of recording, and filled the room with more than 90 attendees, with many more joining virtually. For Daniel, it was a model of how the King's Institute for Artificial Intelligence and the Digital Futures Institute work together to support interdisciplinary, public-facing work in the arts and humanities.
Intelligent Systems for Screen Archives (ISSA)
If Cinema and Machine Vision sets out the theory, Daniel’s latest major project, Intelligent Systems for Screen Archives (ISSA), puts those ideas to work with national partners. Funded by the BFI National Lottery Innovation Challenge Fund and led by King’s Digital Humanities in collaboration with King’s Digital Lab, ISSA brings together three national and two regional film and television archives across the UK to explore how AI might help them understand, reshape and activate their growing digital collections.
Many of these archives have recently completed huge digitisation efforts, such as the BFI’s ‘Heritage 2022’ project, which rescued 100,000 at-risk videotapes. The result is an abundance of digital screen heritage material and a new question: “Now that they’re safely preserved, how do we make them accessible at scale as valuable public data?” Daniel’s team spent months talking to archivists about their hopes and concerns around AI, and about the difficulties of implementation and oversight, then distilled these conversations into four concrete use cases: semantic segmentation, place-based search, audio description, and creative reuse of archival footage.
Under the hood, ISSA is also a story about infrastructure and power. Many archives already rely on commercial vendors for storage and asset management, and those vendors are now layering AI services on top. That can be attractive in the short term, but it risks locking publicly funded institutions into opaque systems and pricing models. Daniel is adamant that part of the project’s mission is to “make the case for developing AI in a different way”, one that uses publicly funded compute infrastructure and shared tools rather than outsourcing everything to big technology companies. “We want archives to be in a position of strength to choose if and how they use AI,” he says, including the option to decide not to use it in certain contexts.
One of the most promising tools to emerge from this work is FrameSense, a command-line application for preprocessing large collections of video files. Originally built as a research tool, FrameSense standardises the laborious early stages of turning terabytes of audiovisual material into machine-readable datasets, freeing researchers to focus on analysis rather than plumbing. The team is now testing FrameSense with ISSA partners and presenting it at international digital humanities venues, with the aim of making it part of national research infrastructure so that archives and scholars can share workflows rather than reinventing them in isolation. “Shared tools, benchmarks and standards move the whole sector forward,” Daniel says. “While that’s harder to do and requires higher investment relative to purchasing a ready-made solution off the shelf, archives and the public do get to keep the knowledge and control over the technology.”
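To give a flavour of the “plumbing” such a tool takes off researchers’ hands, here is a minimal, hypothetical Python sketch of one typical preprocessing step: pulling regularly spaced frames out of a folder of videos with ffmpeg and writing a simple manifest. The function name, output layout and ffmpeg invocation are illustrative assumptions only, not FrameSense’s actual interface.

```python
# Hypothetical sketch of batch video preprocessing (not FrameSense's real API).
# Walks a directory of .mp4 files, extracts one frame per second with ffmpeg,
# and records where each file's frames were written in a CSV manifest.
import csv
import subprocess
from pathlib import Path

def preprocess_collection(video_dir, out_dir, fps=1):
    out_dir = Path(out_dir)
    out_dir.mkdir(parents=True, exist_ok=True)
    with open(out_dir / "manifest.csv", "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["video", "frames_dir"])
        for video in sorted(Path(video_dir).glob("*.mp4")):
            frames_dir = out_dir / video.stem
            frames_dir.mkdir(exist_ok=True)
            # Extract frames at the chosen rate as numbered JPEGs.
            subprocess.run(
                ["ffmpeg", "-i", str(video), "-vf", f"fps={fps}",
                 str(frames_dir / "frame_%06d.jpg")],
                check=True,
            )
            writer.writerow([video.name, str(frames_dir)])

# Example use: preprocess_collection("archive_videos/", "dataset/")
```

Standardising even this small step matters at archive scale: the same manifest format can then feed whichever analysis an archive chooses to run, or none at all.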
Alongside these large-scale systems, Daniel is also drawn to small, visually striking modelling and visualisation techniques that help people see what computational thinking does to film. His note on “Movie Barcodes” explains how entire films can be compressed into single images by sampling frames and stretching each into a thin vertical line, arranged in chronological order, producing a visual model of how a film’s colour shifts over time without any machine learning at all. He used a variation of this technique to design the cover art for Cinema and Machine Vision and is turning it into teaching materials for his students.
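The recipe is simple enough to sketch in a few lines. The snippet below is an illustrative Python version using OpenCV, assuming the common variant that averages each sampled frame to a single colour; it is not Daniel’s own code or the exact method behind the book cover.

```python
# Illustrative movie-barcode sketch: sample frames evenly across a film,
# average each frame's colour into a thin vertical strip, and lay the strips
# side by side in chronological order.
import cv2
import numpy as np

def movie_barcode(video_path, num_strips=600, height=300):
    cap = cv2.VideoCapture(video_path)
    total = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
    strips = []
    for i in range(num_strips):
        # Jump to an evenly spaced frame index across the whole film.
        cap.set(cv2.CAP_PROP_POS_FRAMES, int(i * total / num_strips))
        ok, frame = cap.read()
        if not ok:
            break
        # Average the frame's colour and repeat it as a 1-pixel-wide column.
        mean_colour = frame.mean(axis=(0, 1)).astype(np.uint8)
        strips.append(np.tile(mean_colour, (height, 1, 1)))
    cap.release()
    return np.hstack(strips)  # a height x num_strips x 3 "barcode" image

# Example use: cv2.imwrite("barcode.png", movie_barcode("film.mp4"))
```

Other variants keep each frame’s vertical colour structure instead of averaging it away, but the principle is the same: time runs left to right, and the film’s palette becomes legible at a glance.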