Please note: this event has passed
This seminar presents early-stage findings and methodological questions from a project about ethically loaded conversations with LLMs. The project aims to move beyond designer-centric approaches to LLM optimisation toward participatory frameworks where professional communities can collectively refine how these systems express uncertainty about values, norms, dilemmas, and ethical matters. The long-term vision is to leverage the human feedback methods central to LLM development to create learning environments in which both systems and user communities evolve in their understanding and communication of non-quantifiable uncertainties.
The more immediate focus is our endeavour to create a dataset capturing students’ interactions with LLMs around ethical dilemmas. This data will eventually inform participatory refinement of uncertainty expression. We will discuss the particular challenges of this research design: securing ethics approval for what constitutes 'special category data' under GDPR (students' positions on ethical dilemmas qualify as 'philosophical beliefs'), navigating human-subjects research involving rapidly evolving AI systems, and designing questionnaires that can meaningfully capture belief change (doxastic plasticity) through AI-mediated conversations. If you're considering empirical research involving student-LLM interactions, or if you've been grappling with the ethical and legal complexities of similar research designs, this seminar offers both lessons learned and open questions for collective discussion.
We particularly welcome methodological input on several dimensions: experimental design challenges (managing within-subject versus between-subject comparisons across different models and dilemmas); annotation interface decisions (what kind of visual and verbal interface best supports subjects' understanding while avoiding framing effects?); and broader questions of reproducibility, scientific contribution, and the kind of design recommendations such research can legitimately support.
Where: King’s College London, Strand Building S3.41
When: 4 February 2026, 14.00–16.00
This seminar is both in person and online. Please register for either an in-person or online ticket. A Teams link will be sent to participants on the day.
Speakers:
Jacopo Domenicucci is a philosopher based at the Centre for Data Futures. Before that, he held a Neukom Fellowship at Dartmouth College and a Research Fellowship at the University of Cambridge, where he also received his PhD, after graduating from the École normale supérieure (Paris). His research spans moral philosophy and the philosophy of AI, grappling with the enabling and inhibiting aspects of our ecosystems of technology.
Sylvie Delacroix is the Inaugural Jeff Price Chair in Digital Law and the director of the Centre for Data Futures (King’s College London). She is also a visiting professor at Tohoku University (Japan). Her research focuses on the role played by habit within ethical agency (see Habitual Ethics?), the role of humility markers as conversation enablers, and the potential inherent in LLMs’ participatory interfaces. She also considers bottom-up data empowerment structures and the social sustainability of the data ecosystem that makes generative AI possible. The latter work led to the first data trust pilots worldwide, launched in 2022 in the context of the Data Trusts initiative (www.datatrusts.uk).
Salvatore Greco is a postdoctoral researcher at the Centre for Data Futures, King’s College London. He earned his PhD in Computer Science at Politecnico di Torino, where he also worked as a Research Associate. His research focuses on developing ethical and trustworthy Natural Language Processing (NLP) systems.
Moderator:
Claudia Aradau is Professor of International Politics and Academic Director of the Methods Centre, Faculty of Social Science and Public Policy (King’s College London). Her recent research has focused on how digital technologies reconfigure security and surveillance practices, and how algorithms and machine learning recast relations between security, democracy, and critique.
Event details
S3.41, Strand Building
Strand Campus, Strand, London, WC2R 2LS