Please note: this event has passed
Seeking confidence in chatbots for research? Machine-researcher alignment and misalignment
Chatbots have become a ‘part of the pipeline’ in a number of research methodologies in the social sciences and humanities, contributing to formatting, summarising, annotating, labelling and the generation of synthetic data. One question is how to use chatbots for such research tasks in the first place, drawing on the many best practice guides that have been shared across the research landscape. These guides set out how researchers should prompt and interact with chatbots, but they also advise that chatbots explain themselves and that researchers validate their outputs. How does one gain confidence in how chatbots work for researchers? And what should one do when the machine's findings and the researcher's findings misalign?
Just a part of the pipeline? Research-with-AI critique
To gain confidence in chatbot output, one could consider how to ground the chatbot's findings. Such moments raise a series of questions, such as when to undertake a manual check and/or a comparison across multiple chatbots. But one would also ask: how does the medium or the platform affect the data and the findings? Here is where guardrail auditing comes into the picture. How does one detect the guardrails that have been put up around chatbots so that they can interact with users without offence? How do these guardrails affect the quality of the data and the findings? The masterclass is dedicated to identifying medium and platform issues when using chatbots for research.
This event is co-organised by the Centre for Digital Culture and the Department of Digital Humanities.
Speaker:
Richard Rogers is Professor of New Media & Digital Culture in Media Studies and Director of the Digital Methods Initiative, Humanities Labs, at the University of Amsterdam.
Event details
REACH Space, 3rd Floor, Surrey Street East Wing, Strand Campus
Strand, London, WC2R 2LS