
2024 will be the year of democracy - or disinformation

Poll to Poll 2024: A year of elections around the world
Resham Kotecha & Professor Elena Simperl

Open Data Institute (ODI) & King's College London

27 February 2024

In this year of elections around the world, how will AI shape or harm democracies? RESHAM KOTECHA and PROFESSOR ELENA SIMPERL explore the impact AI is already having, whether states are ready for the sheer volume of rule-breaking we might see, and why everyone should take a more critical approach to the information we consume.

With nearly 2 billion people heading to the polls this year, 2024 is being touted as the year of democracy. Key elections are being held in the UK, the US, the EU, and India, with many other countries also set to hold elections over the course of the year. Along with many organisations working with data and AI, at the Open Data Institute we’re cognisant of the vast opportunities - and significant challenges - that these technologies present in shaping, and potentially harming, our democracies.

Following the UK’s AI Safety Summit and the European Union’s AI Act, conversations about AI have taken centre stage in politics, civil society and industry circles as we navigate the way ahead. Many of these conversations tend to focus on the AI of the future - what’s been dubbed ‘Frontier AI’; we have heard about future threats, future opportunities, and a future inseparable from AI.

Real and present impact

Yet, we are already in the era of AI, and have been for over a decade. Our conversations need to focus on the very real and present impact it is already having on every part of our lives. AI is pervasive, and one of its most striking features is its extraordinary potential to influence people’s thoughts, their behaviours - and their votes. We have seen AI make campaigns less costly to run, levelling the playing field, but we have also seen the incredibly negative impact of Cambridge Analytica. The ability to generate realistic deepfakes at scale, together with the capabilities of conversational generative AI, has changed the potential reach and depth of the challenges we now face.

In the tech world, we often hear the adage “move fast and break things” - but that doesn’t bode well for us at a local, national or global level when the things we could be breaking are democracy and society. The latest wave of AI could, of course, increase political engagement. For example, generative AI tools like ChatGPT could be used to explain political systems, summarise manifesto pledges, and encourage under-represented groups to go to the ballot box. But there is increasing evidence that AI has been, and will be, used to generate realistic deepfakes, create and spread disinformation, and target voters with messaging that reinforces harmful or untruthful narratives at a scale not seen before.

Improved quality of deepfakes

In previous elections - both in Europe and abroad - data-centric technologies like AI have flooded social media platforms with personalised and targeted ads, which often contained half-truths and dubious claims. As we approach this year’s elections, with more people around the world going to the polls than ever before, we will have to contend with the use of far more advanced AI. AI is a technology that is constantly evolving and that gives anyone with access to a smartphone the ability to create and spread misinformation - should they wish to do so. Misinformation still spreads because people share it on social platforms, but there is a material difference from previous elections: the quality of deepfakes has improved, and people are more aware that these capabilities exist and have learned how to use them.

Even where there is no ill intent in the application of generative AI, there is still a risk of embedding, amplifying, and entrenching biases - biases that can exist in the data on which AI systems are trained. Vast swathes of data are harvested from social media platforms and used to train AI - but not enough is being done to ensure that data is accurate and representative. At the same time, the technology is at an inflection point: the large majority of curated data sources - archives, libraries, media content - have already been used to train the current, less-than-perfect AIs on the market. Improvements, for instance in the form of more truthful, verified content, cannot be achieved if newer releases of these AIs rely on the flood of synthetic content we already see on social media, so there is a risk that the technology will worsen if it begins to rely on AI-generated data.

We have robust electoral law in the UK, where our research and engagement is based. All of the offences, ills and evils that can be committed are already defined. However, it's possible that our institutions, and those of other states, could be overwhelmed by the sheer volume of breaches and rule-breaking. Famously, a rumour is halfway around the world before the truth has got its boots on - and that can be turbocharged in the era of generative AI. We need to ensure that our regulators and institutions receive not only guidance on how to apply current regulations to AI, but also the resources and technical expertise to understand how and when to enforce the rules. This is particularly challenging in a sector like AI, which commands impressive salaries and is already experiencing significant skills shortages.

Creating safeguards, accountability and transparency

Finding a solution to this is not something for the government alone to consider. It must involve civil society, private tech companies, citizens and consumers. Companies will need to assure their data and be more open about the data they feed their AI algorithms. Governments will need to consider innovative ways to reassure citizens and make them feel safe. As consumers and as a society, we will need to learn to be much more critical and to question the fundamental origin of information and the data on which it is based. This starts with equipping people with data and AI literacy skills and empowering them to demand that AI-generated content is labelled as such in the media and elsewhere.

Globally, we will need to consider how we can begin to build structures, institutions, regulations and technology with the values of trust, provenance and authenticity at their heart. We will need to incentivise tech companies - with carrot and stick - to build in safeguards and open up algorithms for independent assessment. The tech sector needs to continue to invest in solutions that combat fake news and tackle mis- and disinformation: this includes leveraging deep learning to detect the nefarious use of AI, but also doing more to support the wider ecosystem, which means giving researchers and innovators access to data.

Governments should also require political candidates and political campaigns to disclose their use of AI and algorithmic systems, so people know whether they are being targeted - whether they have been algorithmically selected and had information pushed their way. Increased accountability and transparency will help people feel more secure and leave them less open to manipulation. Globally, governments should track, assess, and learn from the lessons of 2024 so that we can protect our democracies and societies for years to come. Governments were slow to react to the risks social media posed to democracy - but we have the chance to get ahead of the curve with AI regulation in this space.

It's up to each person how they choose to vote in an election; they may vote with more or less information, from more or less trustworthy sources, or choose not to vote at all. That is their choice. But in the context of new and powerful technology, we should make sure it really is their own choice, after all.

This article originally appeared on PoliticsHome.com

Resham Kotecha is the Global Head of Policy at the Open Data Institute.

Professor Elena Simperl is Professor of Computer Science, Deputy Head of Informatics, and Enterprise and Engagement lead for the department at King’s College London. She is also Director of Research for the Open Data Institute.
