Seizing a once-in-a-generation opportunity
Unlike previous industrial revolutions powered by water or oil, the AI revolution is made possible by a resource that comes from all of us: our data. This presents an unprecedented opportunity. If we're generating the resource, why shouldn't we have a say in shaping how it's used?
This insight drives the Centre for Data Futures' dual focus on data empowerment at the generation stage and participatory interface design at deployment. Rather than treating people as passive data subjects whose information is extracted and processed by others, the Centre is pioneering approaches that put communities in the driver's seat.
The missing profession of the 21st century
On 23rd October, the Centre will officially launch its Data Empowerment Clinic—a groundbreaking initiative that tackles "the missing profession of the 21st century": data stewardship. The clinic will train students and professionals to support communities in developing their own data empowerment structures, from data trusts to a variety of other bottom-up frameworks.
"We're not just researching these ideas in the abstract," says Innovation Director Suha Mohamed, who leads the Clinic. "We're creating real-world pathways for people to gain agency over their data, whether they're gig workers, patient groups, or entire neighbourhoods wanting to shape their digital futures."
A critical window for different design principles
At the deployment end, the Centre is determined to make the most of what it sees as a narrow window of opportunity to influence how AI systems are designed. Most current approaches underestimate the participatory affordances built into large language models (LLMs).
Unlike traditional software with rigid interfaces, LLMs are conversational by nature. This conversational quality opens up possibilities that simply cannot be achieved through designer-centric approaches, which try to anticipate and fix parameters for every query. The Centre's research addresses a critical gap: the absence of systems that allow groups to collectively refine how AI expresses uncertainty, particularly the kind of non-quantifiable uncertainty that is vital to professional judgment.
That window is the moment to explore design principles that preserve productive uncertainty rather than optimise it away, before efficiency-maximising approaches become institutionally entrenched. The Centre is pioneering the first group-interactive AI refinement systems, which allow professional communities to collectively modify how uncertainty is expressed, essential for preventing AI systems from undermining domain expertise.
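To make the idea more concrete, here is a minimal, hypothetical sketch (in Python) of what a group-interactive refinement loop could look like: members of a professional community vote on candidate guidelines for how an assistant should phrase uncertainty, and the guidelines that reach broad agreement are compiled into instructions attached to every conversation. All names, thresholds, and data here are illustrative assumptions, not a description of the Centre's actual systems.

```python
from dataclasses import dataclass, field


@dataclass
class UncertaintyGuideline:
    """A community-proposed rule for how the assistant should express uncertainty (hypothetical)."""
    text: str
    votes_for: int = 0
    votes_against: int = 0

    def approval(self) -> float:
        # Share of votes in favour; 0.0 if no one has voted yet.
        total = self.votes_for + self.votes_against
        return self.votes_for / total if total else 0.0


@dataclass
class CommunityProfile:
    """Collects guidelines from one professional community (e.g. clinicians, caseworkers)."""
    name: str
    guidelines: list[UncertaintyGuideline] = field(default_factory=list)

    def accepted(self, threshold: float = 0.66) -> list[UncertaintyGuideline]:
        # Only guidelines with broad support shape the interface (threshold is an assumption).
        return [g for g in self.guidelines if g.approval() >= threshold]

    def system_prompt_overlay(self) -> str:
        # Compile the accepted guidelines into instructions prepended to every conversation.
        rules = "\n".join(f"- {g.text}" for g in self.accepted())
        return (
            f"When answering members of the {self.name} community, "
            f"express uncertainty according to these collectively agreed rules:\n{rules}"
        )


# Illustrative usage: a clinical community refining how hedging should sound.
clinicians = CommunityProfile(name="clinical")
clinicians.guidelines.append(
    UncertaintyGuideline("Distinguish 'the evidence is mixed' from 'I do not know'.", votes_for=8, votes_against=1)
)
clinicians.guidelines.append(
    UncertaintyGuideline("Never convert qualitative judgement calls into percentages.", votes_for=7, votes_against=2)
)
print(clinicians.system_prompt_overlay())
```

The point of the sketch is the shape of the loop rather than any particular mechanism: the rules governing how uncertainty is voiced live with the community and can be revised by it, rather than being fixed once by the interface designer.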
Revitalising democratic dialogue
Beyond professional contexts, this approach opens remarkable possibilities for democratic renewal. One strand of the Centre's research explores a specialised approach to LLM design: creating what I call "transitional conversational spaces"—interfaces specifically designed to help people rediscover the value of productive uncertainty in an age where digital platforms reward certainty over exploration.
As digital technologies increasingly mediate our conversational infrastructure, their design will inevitably shape our collective capacity for moral perception and democratic deliberation. If we design LLM interfaces primarily to provide authoritative answers or minimise controversy, we may further erode the conditions necessary for democratic renewal.
Instead, this research proposes creating specialised interfaces that support engagement with uncertainty. In an era where the 'in-between' spaces for exploring tentative ideas—the coffee shop conversations, community meetings, and informal discussions that once helped people find their way through complex issues—are increasingly displaced by polarised online debates, these interfaces offer a potential pathway for reconstructing the infrastructure that democratic renewal requires.
Beyond the binary: A third way forward
The Centre's approach challenges the typical "AI safety versus AI progress" binary. Instead of asking whether we can trust AI systems, it asks: how can we design AI systems that communities can actively shape and engage with?
The exploration of LLMs as transitional conversational spaces represents more than a technical exercise. It constitutes an opportunity to reimagine the relationship between technology and democracy.
As AI becomes increasingly central to how we work, learn, and make decisions, the Centre for Data Futures is working to ensure that this future is shaped not just by those who design the technology, but by all of us who make it possible.