
AI-Enabled Influence Operations: The Threat to the UK General Election

Online

27 June

Recent advances in AI technology have caused many people to be concerned about its use to spread disinformation, influence voters, manipulate the outcome of an election or erode trust in democracy.

With the UK elections taking place in just a few weeks’ time, the question of whether AI-enabled influence operations are a threat is an urgent matter. With a diminishing window of opportunity, The Alan Turing Institute’s Centre for Emerging Technology and Security (CETaS) has urged regulators to tackle the threats posed by AI ahead of July’s general election to preserve trust in the democratic system.

For this discussion, the Cybersecurity Research Group at King's College London hosts Sam Stockwell of The Alan Turing Institute, who has analysed AI-enabled influence operations and their potential to undermine the upcoming UK general election, as well as other forthcoming democratic elections.

In this new study, the researchers caution against fears that AI will directly sway election results. They note that, to date, there is limited evidence of AI changing the outcome of an election relative to the expected result. Of 112 national elections taking place since January 2023 or forthcoming in 2024, just 19 showed examples of AI-enabled interference.

However, there are early signs of damage to the broader democratic system. These include confusion among the electorate over whether AI-generated content is real, which damages the integrity of online sources; deepfakes inciting online hate against political figures, which threatens their personal safety; and politicians exploiting AI disinformation for potential electoral gain. The researchers also found that currently ambiguous electoral laws on AI could enable its misuse in the upcoming general election, for example through people using generative AI systems like ChatGPT to create fake campaign endorsements, which could damage the reputation of the individuals implicated and undermine trust in the information environment.

Event Panellists:

Sam Stockwell is a Research Associate at the Centre for Emerging Technology and Security (CETaS), which sits within The Alan Turing Institute. His research interests focus on the intersection between national security and the online domain, particularly in relation to countering radicalisation and violent extremism.

Megan Hughes is a Research Associate at CETaS. Prior to joining the Turing, Megan worked as an Analyst within the Defence and Security research group at RAND Europe. She led projects on a wide range of topics from assessing the impact of emerging technologies on the information environment to identifying the implications of disinformation and conspiracy theories in Europe. Her research has informed strategy and policy at the UK Home Office, UK Ministry of Defence, the European Commission, and the United Nations Development Programme.

Dr Phil Swatton is a Data Scientist in the Applied Research Centre for Defence and Security (ARC), an applied research team at The Alan Turing Institute. He works on a range of projects on and adjacent to deep learning, including work on the effect of dataset similarity on transfer attack success and low-cost measures for pre-trained model selection.

The event will be chaired by Dr Lilly Muller, Postdoctoral Research Fellow in the War Studies Department and Deputy Director of the Cybersecurity Research Group. She is a Visiting Research Fellow in the Science and Technology Studies Department at Cornell and a non-resident research fellow at the Cornell Brooks Tech Policy Institute. Dr Muller has published widely on topics related to cybersecurity, technology, and global politics.

