
Weaponising AI: Political Persuasion at an Industrial Scale

Lukasz Olejnik

Visiting Senior Research Fellow in the Department of War Studies

14 January 2026

Artificial intelligence is going to reshape political campaigning and persuasion – not through sophisticated psychological profiling or highly personalised messaging, but by enabling influence at scale. Advances in Large Language Models (LLMs) allow political actors to shape political attitudes simply by generating large volumes of plausible, authoritative-sounding information. AI-driven persuasion is already affecting electoral politics, and this, coupled with new research, raises urgent questions about democratic resilience and the ability of states and platforms to respond to influence operations.

AI will soon exert unprecedented influence over human beliefs – not by understanding human psychology or personalising messages to individuals, but by generating massive volumes of factual-sounding claims. Perfect accuracy is not needed. The appearance of substantive information suffices.

LLMs – AI systems trained to generate human-like text – are therefore perfect tools for political persuasion. A brief conversation with an AI bot can shift voter preferences more effectively than professional campaign videos, at low cost, in any language, 24/7. This isn’t speculative.

Recent experiments spanning the 2024 U.S. presidential race and the 2025 Canadian and Polish elections show that short AI-driven conversations advocating for a top candidate produced significant shifts in preferences (Lin et al., 2025, Nature). These shifts exceeded the effects typically measured for video advertising in surveys. Further research suggests that conversational AI can shift attitudes across a broad policy space, beyond any single election (Hackenburg et al., 2025, Science).

Quantity over quality

In interactive settings – dialogue, rather than one-way messaging – AI systems may dynamically adapt to users’ stated priorities in real time while maintaining coherence. The mechanism matters: when the AI is instructed to avoid factual claims, persuasive effects decline. Effective AI persuasion uses logical argumentation and evidence-based messages, not emotional manipulation or substance-free propaganda.

Surprisingly, micro-targeting (i.e. personalisation) showed limited effects in practice. Non-personalised and non-specific strategies are just as effective, suggesting that AI’s persuasive power may lie elsewhere.

Information density seems to be the key: it explained 44% of persuasive impact. The findings showed that persuasion scales with the volume of plausible (or plausible-sounding) claims, even when their accuracy declines. In other words, the information need only be plausible-sounding, not necessarily factual. Let that sink in.
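
To make the statistic concrete: “explained 44% of persuasive impact” is the language of variance explained (R²) in a predictive model. The toy Python sketch below, on synthetic data with hypothetical variable names, shows how a single claim-density predictor can account for roughly that share of variance in an attitude-shift outcome; it illustrates the statistical reading only, not the study’s actual analysis.

```python
import numpy as np

# Synthetic illustration of "44% of persuasive impact explained":
# the share of variance in attitude shift captured by one predictor.
rng = np.random.default_rng(0)
n = 10_000

claim_density = rng.normal(size=n)    # factual-sounding claims per message
noise = rng.normal(size=n)
beta = np.sqrt(0.44)                  # chosen so density explains ~44%
attitude_shift = beta * claim_density + np.sqrt(1 - 0.44) * noise

# Fit a one-predictor linear model and report variance explained (R^2).
slope, intercept = np.polyfit(claim_density, attitude_shift, 1)
residual = attitude_shift - (slope * claim_density + intercept)
r2 = 1 - residual.var() / attitude_shift.var()
print(f"variance explained by claim density: {r2:.2f}")  # ~0.44
```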

In the case of the U.S., the AI-persuasion instrument was effective across partisan lines, persuading pro-Trump, pro-Harris and unaligned voters to change their minds. Similar effects were seen in the Canadian and Polish elections. In some cases, pro-Trump and pro-Harris voters were persuaded to intensify their existing stance, too.


The always-on campaign

These effects become more concerning when deployed in automated systems. Consider a scenario where a local candidate in a tight political race deploys an AI-based system that engages voters on social media about housing policy. After a brief message exchange, the AI learns that the voter prioritises schools over transit, and reframes its housing argument around that preference, generating multiple factual-sounding claims delivered in an authoritative, polite and optimistic tone. The voter’s preference shifts significantly. Crucially, this entire operation could run on a single laptop, at minimal cost, leaving no clear attribution trail.

There are two ways to design such systems: fine-tuning, i.e. training a model on specific data, or careful prompting, i.e. crafting the instructions given to the model. Fine-tuning is likely not the most efficient route. Separately, while frontier models (the most advanced, state-of-the-art AI systems offered by big vendors) deliver the best quality, professional AI-based information operations may opt for open-weight models – publicly available for anybody to download and run on a local computer – assembled into end-to-end pipelines. In practice, persona design, such as accounts simulating human-like traits or psychological constructs (for example, a middle-aged woman with particular political preferences and a particular educational background), appears to matter more than model choice, as the sketch below illustrates.
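
For concreteness, here is a minimal Python sketch of the prompting route just described: the persona and the information-dense instruction live entirely in the system prompt, so no fine-tuning is needed, and any locally run open-weight model could slot in. The Persona fields, the build_system_prompt helper and the example values are hypothetical illustrations, not a real deployment.

```python
from dataclasses import dataclass

@dataclass
class Persona:
    age: int         # simulated demographic trait
    background: str  # e.g. field of study or profession
    stance: str      # the political position the account projects

def build_system_prompt(persona: Persona, topic: str) -> str:
    # Persona design is encoded in plain instructions, which is why it
    # can matter more than the choice of underlying model.
    return (
        f"You are a {persona.age}-year-old with a background in "
        f"{persona.background}, and you hold this view: {persona.stance}. "
        f"Discuss {topic} using many concrete, evidence-style claims."
    )

# The resulting string would be passed as the system message to a
# locally hosted open-weight model; the model call is left abstract.
prompt = build_system_prompt(
    Persona(45, "public policy", "supports the housing reform"),
    "local housing policy",
)
```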

Furthermore, when automated agents are required to make a counter-argument, something extraordinary happens: they become more convincing advocates for their positions and even more ideologically focused. Influence isn’t limited to isolated persuasion moments, but can use durable, multi-turn identities that persist across platforms and election cycles.

One finding from Lin et al. underlines the severity of such capabilities:

“The more the AI model attempted to pre-emptively address potential objections, the less persuasion occurred—although this association may be explained, in part, by the AI model being more likely to pre-empt in conversations where the participant previously raised objections (and thus was less likely to change their minds at baseline). One particularly interesting lack of predictive power was for claim accuracy, which was excluded by the model, indicating there were no substantial persuasive gains associated with attempting to mislead.” – Lin et al., 2025, Nature

Hackenburg et al. (Science, 2025) further show that persuasive power scales logarithmically with model size: larger models are reliably more persuasive than smaller ones.
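
Schematically (my rendering of the relationship, not the paper’s exact specification), a log-linear scaling law takes the form below, where N is the model’s parameter count and α, β are fitted constants:

```latex
\text{persuasiveness}(N) \approx \alpha + \beta \log N
```

The logarithm is the important part: each doubling of model size buys a roughly constant further increment of persuasiveness, so gains continue but diminish.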

Yet operational constraints may encourage actors to adopt smaller, simpler models. As I explain in my work, models hosted on third-party cloud servers (by OpenAI, Anthropic, or other providers) may be easy to detect or disrupt. For this reason, open models run locally may be preferred in practice.

The evidence points to the potential of influence campaigns optimised at the level of sustained conversational identity. This means that societies may be in serious trouble.

A changing risk calculus

This isn’t merely about better campaign tools. The combination of low cost, automated operation and persona consistency means influence campaigns can now run continuously, target micro-segments of societies and adapt messaging in real time – capabilities previously limited to well-funded state actors. The barriers between commercial PR, political expression, politics and information warfare have narrowed, or are disappearing altogether. As I explain in my work on AI propaganda factories, these effects can be turned into an always-on influence pipeline.

Persuasion techniques as tools for offensive information operations

The strongest levers identified – persuasion-focused post-training (specialising a model for the task) and information-focused prompting – are practical options for actors seeking to maximise attitude change per interaction. Effective narratives (information payloads) will be packaged as evidence-heavy, policy-flavoured arguments rather than overtly emotional appeals.

Deploying these tactics at scale is already within reach. The case is clear: Large Language Models are going to reshape both political campaigns and online propaganda operations.

What the future holds

The near-term trajectory is less about a single breakthrough model and more about wider access to modular, automated influence systems. The most effective safeguards will need to target coordination – the orchestration of artificial comment threads, inauthentic discussions and staged exchanges of messages – as well as the trade-off between persuasion and accuracy, through auditing and policy constraints. To the best of my knowledge, no state agency or platform currently identifies, systematically and at scale, active influence operations – domestic or foreign – that make significant use of AI. A simple example of a coordination signal follows below.
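
To make “targeting coordination” concrete: one elementary signal is distinct accounts posting near-identical messages across threads. The Python sketch below, with an illustrative threshold and made-up posts, flags such pairs; a real detection pipeline would be far richer, but the principle is the same.

```python
import itertools
from difflib import SequenceMatcher

# Toy feed: (account, message). The first two posts are near-duplicates
# from different accounts - the elementary coordination signal here.
posts = [
    ("acct_a", "Housing costs fell 12% after the reform, studies show."),
    ("acct_b", "Housing costs fell 12 percent after the reform, studies show."),
    ("acct_c", "I walked past the new library today, it looks great."),
]

def near_duplicate_pairs(posts, threshold=0.85):
    """Flag pairs of distinct accounts whose messages are near-identical."""
    flagged = []
    for (u1, t1), (u2, t2) in itertools.combinations(posts, 2):
        if u1 != u2 and SequenceMatcher(None, t1, t2).ratio() >= threshold:
            flagged.append((u1, u2))
    return flagged

print(near_duplicate_pairs(posts))  # [('acct_a', 'acct_b')]
```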

Beyond the mechanism outlined above, it is worth citing the anchor finding of the Lin et al. Nature paper:

“Our results unambiguously demonstrate across three different countries, with different electoral systems, that dialogues with language models can meaningfully change voter attitudes and voting intentions. This observation has implications for the future of political persuasion, political advertising and (more broadly) democracy. ... AI models are persuading potential voters by politely providing relevant facts and evidence, rather than by being skilled manipulators who leverage sophisticated psychological persuasion strategies such as social influence.” – Lin et al., 2025, Nature

Sooner or later, the use of AI for political influence will require regulation, if only to ensure electoral fairness. That said, every actor involved in elections or referendums will soon be using such tools. In such a scenario, access becomes broadly equal (unless restrictions are put in place for certain users). The differentiating factor will thus be how AI is deployed. In close contests, AI may well determine electoral outcomes – who, or what, wins.
