
Meet: Dr Daniel Chávez Heras

Dr Daniel Chávez Heras teaches machines to watch films – not to replace human viewers, but to help archives, researchers and artists understand screen culture in new ways.

An academic and technologist "specialised in the computational production and analysis of visual culture", Daniel is a Lecturer in Digital Culture and Creative Computing in the Department of Digital Humanities at King's College London. A member of the Creative AI Lab (in partnership with the Serpentine Galleries) and the Computational Humanities Group, and an affiliate of the King's Institute for Artificial Intelligence, Daniel works at a productive crossroads, linking cinema, AI, design and digital culture.

Daniel trained first as a designer, then in film studies, and finally completed a PhD in Digital Humanities. He now works across the history and theory of moving images and practical machine learning applications. That dual fluency is rare, even in a field as interdisciplinary as digital humanities, and it shapes how he thinks about AI and moving images.

"If you do high-quality computational analysis of media and culture, then you are in a better position to create meaningful generative output. Computational analysis and generation are two sides of the same coin." – Dr Daniel Chávez Heras
Daniel discussing the ideas that later became the ISSA project

A formative moment in that realisation came through the experimental documentary Made by Machine: When AI Met the Archive, a collaborative project with researchers from BBC R&D to "machine see" the archive through a series of computational techniques and assemble a full-length programme of "television by the metre". The programme aired on BBC Four, attracted nearly half a million viewers in the UK and was nominated for a Broadcast Tech Innovation award. The project was as much about probing the limits of automated television as it was about making an entirely novel type of documentary, one that raised practical questions about rights and risk, and deeper societal questions about how far we want machines to mediate cultural memory.

Those questions are at the heart of his new book, Cinema and Machine Vision: Artificial Intelligence, Aesthetics and Spectatorship (Edinburgh University Press). The book “unfolds the aesthetic, epistemic, and ideological dimensions of machine-seeing films and television using computers,” bringing together film theory and applied machine-learning research to challenge assumptions about what happens when AI systems watch and even make images on our behalf. Daniel launched the book as a live recording of The Video Essay Podcast at the King’s Festival of Artificial Intelligence, an event supported by King’s Institute for Artificial Intelligence that brought together an international expert audience.

That collaboration with the Institute left a strong impression. “It was a very well organised event, professionally put-together,” Daniel recalls. Unlike many academic book launches, the Institute team were happy to experiment: they helped bring critic and podcast host Will DiGravio over from the US, handled the logistics of recording, and filled the room with more than 90 attendees, with many more joining virtually. For Daniel, it was a model of how the King's Institute for Artificial Intelligence and the Digital Futures Institute work together to support interdisciplinary, public-facing work in the arts and humanities.

Intelligent Systems for Screen Archives (ISSA)

If Cinema and Machine Vision sets out the theory, Daniel’s latest major project, Intelligent Systems for Screen Archives (ISSA), puts those ideas to work with national partners. Funded by the BFI National Lottery Innovation Challenge Fund and led by King’s Digital Humanities in collaboration with King’s Digital Lab, ISSA brings together three national and two regional film and television archives across the UK to explore how AI might help them understand, reshape and activate their growing digital collections.

Many of these archives have recently completed huge digitisation efforts, such as the BFI’s ‘Heritage 2022’ project, which rescued 100,000 at-risk videotapes. The result is an abundance of digital screen heritage material and a new question: “Now that they’re safely preserved, how do we make them accessible at scale as valuable public data?” Daniel’s team spent months talking to archivists about their hopes and concerns around AI, difficulties in implementation and oversight, and distilled these conversations into four concrete use-cases: semantic segmentation, place-based search, audio description, and creative reuse of archival footage.

Under the hood, ISSA is also a story about infrastructure and power. Many archives already rely on commercial vendors for storage and asset management, who are now layering AI services on top. That can be attractive in the short term, but it risks locking publicly funded institutions into opaque systems and pricing models. Daniel is adamant that part of the project’s mission is to “make the case for developing AI in a different way”, one that uses publicly funded compute infrastructure and shared tools rather than outsourcing everything to big technology companies. “We want archives to be in a position of strength to choose if and how they use AI,” he says, including the option to decide not to use it in certain contexts.

One of the most promising tools to emerge from this work is FrameSense, a command-line application for preprocessing large collections of video files. Originally built as a research tool, FrameSense standardises the laborious early stages of turning terabytes of audiovisual material into machine-readable datasets, freeing researchers to focus on analysis rather than plumbing. The team is now testing FrameSense with ISSA partners and at international digital-humanities venues, with the aim of making it part of national research infrastructure so that archives and scholars can share workflows rather than reinvent them in isolation. "Shared tools, benchmarks and standards move the whole sector forward," Daniel says. "While that's harder to do and requires higher investment relative to purchasing a ready-made solution off the shelf, archives and the public do get to keep the knowledge and control over the technology."
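To give a flavour of the kind of preprocessing such a tool standardises, here is a minimal, hypothetical sketch (not FrameSense's actual code): before any frames are extracted from a video, a sampling plan of evenly spaced timestamps is computed, which downstream steps can then use to decode only the frames they need. The function name and interface are illustrative assumptions.

```python
# Hypothetical sketch of one preprocessing step: computing a sampling
# plan (timestamps in seconds) for a video of known duration, so that
# frame extraction can be done uniformly and reproducibly.
def sampling_timestamps(duration_s: float, every_s: float) -> list[float]:
    """Return evenly spaced timestamps from 0 up to duration_s."""
    if every_s <= 0:
        raise ValueError("sampling interval must be positive")
    n = int(duration_s // every_s) + 1  # include the t=0 frame
    return [round(i * every_s, 3) for i in range(n)]

# A 10-second clip sampled every 2.5 seconds:
print(sampling_timestamps(10.0, 2.5))  # -> [0.0, 2.5, 5.0, 7.5, 10.0]
```

Precomputing the plan separately from decoding is what makes such pipelines easy to batch across terabytes of material: the expensive decoding step only ever touches the listed timestamps.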

Alongside these large-scale systems, Daniel is also drawn to small, visually striking modelling and visualisation techniques that help people see what computational thinking does to film. His note on “Movie Barcodes” explains how entire films can be compressed as single images by sampling frames and stretching them into lines in chronological order, producing a visual model of colour-to-time dynamics without any machine learning at all. He used a variation of this technique to design the cover art for Cinema and Machine Vision and is turning it into teaching materials for his students.
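The barcode idea can be illustrated in a few lines. In one common variant (an assumption here, not necessarily the exact method Daniel uses), each sampled frame is reduced to its average colour and the averages are laid out left to right in chronological order; the tiny synthetic "frames" below stand in for decoded video pixels.

```python
# Illustrative sketch of the "movie barcode" idea: each frame becomes
# one column of the barcode, here reduced to its average RGB colour.
def average_colour(frame):
    """Mean RGB of one frame, given as a list of (R, G, B) pixels."""
    n = len(frame)
    return tuple(sum(px[c] for px in frame) // n for c in range(3))

def movie_barcode(frames):
    """One average-colour column per frame, in chronological order."""
    return [average_colour(f) for f in frames]

# Two fake 2-pixel frames: a dark one, then a bright one.
frames = [
    [(10, 20, 30), (30, 40, 50)],
    [(200, 210, 220), (220, 230, 240)],
]
print(movie_barcode(frames))  # -> [(20, 30, 40), (210, 220, 230)]
```

As the note observes, no machine learning is involved: the barcode is a purely statistical model of colour over time, which is exactly what makes it useful for showing students what modelling does to a film.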

"All models are wrong, as the saying goes; they are reductive, partial, and that can be frustrating at times. But modelling stimulates imagination, and I think it's empowering for students when they learn that sound data analysis is not the enemy of creativity." – Dr Daniel Chávez Heras

For students, collaborators and archives, Daniel’s work shows how critical, responsible and public-minded AI research can take us from “AI slop” to synthetic media that is more equitable and better represents the human experience over the past century. As AI becomes ever more entangled with visual culture, Daniel’s work at King’s helps ensure that the people invested in the transformational power of representation, from creative industries to screen heritage organisations, have a say in how machines learn to see.

Chains of Value & Tools of Attribution: Data Provenance in the Cultural and Creative Industries

On Friday 27 February, Daniel is participating in a landmark one-day event exploring data provenance, copyright, and licensing frameworks in the creative industries. 

Find out more about the event and register.

This event is organised with the support of the Digital Futures Institute and the Department of Digital Humanities at King's, in collaboration with TikBox.

In this story

Daniel Chávez Heras


Lecturer in Digital Culture and Creative Computing
