


Over the past few years, large language models such as BERT and GPT-3 have revolutionised the field of natural language processing, achieving near-human or even super-human performance in tasks ranging from text classification to text generation. In this talk, the speaker will discuss how the same approaches can be employed in the social sciences to uncover biases in large corpora.

While these ideas can be applied across many domains, the talk will focus on gender biases. In particular, it will cover how word embeddings trained on the Google Books corpus can help track the evolution of occupational gender stereotypes; how word embeddings trained on a corpus of American print media reveal biases against girls in schools; and how analysing reviews on an online platform can help explain a gender pay gap.
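The core idea behind embedding-based bias measurement can be illustrated with a minimal sketch. This is not the speaker's actual method or data: the tiny hand-made vectors below are purely illustrative (real studies train embeddings on large corpora such as Google Books), and it assumes one common operationalisation of bias, namely comparing a word's cosine similarity to gendered anchor words.

```python
import numpy as np

def cosine(u, v):
    """Cosine similarity between two vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

# Toy 4-dimensional embeddings, invented for illustration only.
emb = {
    "he":       np.array([1.0, 0.1, 0.0, 0.2]),
    "she":      np.array([0.1, 1.0, 0.0, 0.2]),
    "nurse":    np.array([0.2, 0.9, 0.1, 0.3]),
    "engineer": np.array([0.9, 0.2, 0.1, 0.3]),
}

def gender_bias(word):
    # Positive values: the word sits closer to "he" in embedding space;
    # negative values: closer to "she".
    return cosine(emb[word], emb["he"]) - cosine(emb[word], emb["she"])

for w in ["nurse", "engineer"]:
    print(w, round(gender_bias(w), 3))
```

Applied to embeddings trained on decade-by-decade slices of a historical corpus, a score like this can be tracked over time for occupation words, which is the general spirit of the stereotype-tracking work described above.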

Speaker

Dr Ivan Smirnov is a computational social scientist at the University of Mannheim (Germany). He uses AI methods to understand biases in social systems and borrows methods from social science to understand the biases of AI.