
Orchestrated Crowds: Rethinking Inauthentic Participation in Digital Memory Wars

Online communication today is shaped by a new and evolving set of actors. Where once digital conversations were driven mainly by identifiable individuals and institutions, we now see a fluid ecosystem: humans, bots, AI-generated profiles, hybrid accounts, coordinated clusters — all participating in shaping what looks like public discourse. OLGA LOGUNOVA and PAVEL LEBEDEV examine why the distinction between authentic and orchestrated engagement is no longer clear-cut.

The shift from authentic to orchestrated engagement online changes not only how we communicate, but how influence operates. Who is speaking? Who is listening? What counts as genuine support, consensus, or protest in a space increasingly filled with automated and semi-automated behaviour?

Political communication and disinformation today move through the same channels as any other type of content - hashtags, comment sections, algorithmic amplification. Propaganda, like memes or brand messaging, travels via the dynamics of platform logic. What was once understood as top-down messaging has become deeply entangled with the everyday behaviours of users, influencers, and coordinated digital actors.

Our research explores how inauthentic participation operates within this environment, focusing on how different types of accounts - human, automated, and hybrid - simulate public engagement and help shape politicised narratives. We examine these dynamics across two major platforms, YouTube and Twitter/X, not simply to detect 'fake' activity, but to understand how influence is manufactured, signalled, and sustained.

To make sense of this complexity, we tested our approach on two different platforms - YouTube and Twitter/X - each chosen for its distinct mode of engagement and public discourse. This allowed us to observe orchestrated behaviour in two contrasting environments and to refine a methodological framework that is both insightful and reproducible.

The result is a three-part model for detecting and interpreting inauthentic activity: message and metadata analysis, profile-level analysis and network mapping. Together, these layers help uncover how influence is not just asserted, but performed - often through subtle and scalable mechanisms.

The spectrum of inauthenticity

Terms like 'bots' or 'trolls' no longer capture the full variety of online manipulation. Today, we observe a spectrum of actors ranging from fully automated accounts to 'cyborgs' (semi-automated users with human oversight), co-ordinated troll farms, and loyalist users who, while not inauthentic themselves, become part of orchestrated campaigns.

To distinguish between these different roles, we propose understanding inauthentic participation not as a fixed category but as a continuum - a set of behaviours designed to mimic authentic user engagement while following a strategic, often political, logic. Importantly, the starting point of this continuum is authenticity itself.

Key findings

  1. Inauthentic behaviour affects performance metrics
    We found that bot-driven engagement strongly affects how content is promoted online. There is a clear correlation between video views and comment volume (R² = 0.79), which becomes even stronger when inauthentic comments are removed (R² = 0.84). Bot comments inflate engagement, making content appear more relevant to algorithmic systems - sometimes to the benefit of the content creators themselves. (A brief illustrative sketch of this comparison appears after this list.)
  2. Authenticity as a starting point
    To identify orchestrated behaviour, we must first understand what authentic engagement looks like. We observed consistent contributions from identifiable real users - including journalists, activists, and engaged viewers - across platforms. These users display diverse narrative positions and foster genuine dialogue, offering a vital baseline for detecting manipulation.
  3. The crowd is not uniform
    Rather than a single category of 'bot', our analysis reveals a stratified digital crowd. We identified five functional clusters of accounts:

 

Figure: Example of a YouTube comment network showing different types of bot behaviour and interaction patterns. The network is segmented, with isolated communities and limited cross-group engagement. Bot functions vary, from provokers to opinion leaders who intensify discussion. The structure is typical of fragmented discussions, where communication is limited to individual clusters. High modularity (0.735) highlights this separation, and the low overall density reflects weak links between all participants.
  • Agenda-setters (6.9 per cent): High-follower accounts that set narrative direction and provoke engagement.
  • Analytical core (16.6 per cent): Thoughtful commentators offering political reflection, often aligned with opposition views.
  • Amplifiers and content warriors (27.1 per cent): Highly active users engaged in sarcasm, irony, and emotional debate - including trolls.
  • Reputation guardians and selective antagonists (8.2 per cent): Accounts defending political figures and attacking critics, often via reactive comments.
  • Synthetic participants (41.2 per cent): Low-profile, often anonymous accounts that flood threads with repeated slogans, insults, or contentless noise.

This spectrum of participation reveals a flexible and adaptive system of influence.

  4. Hybrid propaganda is emotional
    The most striking phenomenon is the emergence of 'cyborg' accounts that blend human-like responsiveness with algorithmic volume. These accounts employ emotionally resonant framing - patriotism, nostalgia, moral outrage - to appear authentic. They simulate political subjectivity rather than simply amplifying slogans. In doing so, they perform what sociologist Jeffrey Alexander describes as 'cultural drama', staging national memory as organic and widely affirmed.
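
As a rough illustration of the correlation check in finding 1, the sketch below fits per-video comment volume against view counts and compares R² with and without comments flagged as inauthentic. The data, column names and the is_bot flag are hypothetical placeholders; the published values (0.79 and 0.84) come from the study's datasets, not from this toy example.

```python
# Minimal sketch with hypothetical data: compare how well video views
# predict comment volume before and after dropping flagged comments.
import pandas as pd
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score

# Hypothetical per-comment table: the video it belongs to, that video's
# view count, and a bot flag produced by some upstream classifier.
comments = pd.DataFrame({
    "video_id":    ["a", "a", "a", "b", "b", "c", "c", "c", "c", "d"],
    "video_views": [12000] * 3 + [4500] * 2 + [30000] * 4 + [900],
    "is_bot":      [False, True, True, False, False, True, True, False, True, False],
})

def views_vs_comments_r2(df: pd.DataFrame) -> float:
    """R^2 of a linear fit of per-video comment counts on view counts."""
    per_video = df.groupby("video_id").agg(
        views=("video_views", "first"),
        n_comments=("video_id", "size"),
    )
    X = per_video[["views"]].to_numpy()
    y = per_video["n_comments"].to_numpy()
    model = LinearRegression().fit(X, y)
    return r2_score(y, model.predict(X))

print(f"R² (all comments): {views_vs_comments_r2(comments):.2f}")
print(f"R² (bots removed): {views_vs_comments_r2(comments[~comments['is_bot']]):.2f}")
```

In practice, the bot flag itself would come from the profile-level and network signals described in the methodology section below.
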
Why it matters

These findings expand our understanding of computational propaganda (Woolley & Howard, 2016) and inauthentic engagement. Rather than thinking in terms of 'real' vs. 'fake,' we need to approach digital participation as a continuum shaped by affect, co-ordination and visibility.

Ultimately, orchestrated crowds don't just manipulate metrics - they reshape what we recognise as public opinion. In a media landscape where emotional resonance and algorithmic visibility matter more than fact-checking or expert knowledge, the blurred boundary between real and orchestrated engagement has significant implications for memory, identity, and democracy.

A two-platform study

Our findings are based on two datasets, drawn from YouTube and Twitter/X. By analysing these large-scale datasets across platforms, we were able to validate our methodology and propose a new model for identifying orchestrated digital behaviour.

  • YouTube: 62,528 comments across eight videos connected to Russian political and historical content, posted by opposition figures and activists. These comments came from 33,602 unique accounts.
  • Twitter/X: A dataset of 40,148 texts (tweets, replies, and quotes) involving 14,149 unique users, commenting on content by ten prominent opposition figures.

A layered methodology

We offer a multi-method approach - a three-part framework to detect and interpret inauthentic activity (illustrative sketches of these layers follow below):

  • Message and metadata analysis: Examining the structure and tone of comments, timing of engagement, and presence of repeated slogans, emotional language, or aggression markers.
  • Profile-level analysis: Assessing user account features such as age, frequency of activity, presence of profile content, and follower symmetry.
  • Network mapping: Mapping interactions between accounts to reveal dense clusters, hub structures, and signs of coordination across content threads.
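
The first two layers can be pictured as simple feature extraction over comments and accounts. The sketch below is a minimal illustration under assumed field names and placeholder lexicons; none of the features or thresholds shown are the study's actual implementation.

```python
# Minimal sketch with assumed fields: per-comment (layer 1) and
# per-account (layer 2) signals of the kind described above.
from dataclasses import dataclass
from datetime import datetime, timezone

SLOGANS = {"example slogan one", "example slogan two"}   # placeholder slogan list
AGGRESSION_MARKERS = {"traitor", "enemy", "scum"}        # placeholder aggression lexicon

def message_features(text: str) -> dict:
    """Layer 1: structure and tone of a single comment."""
    lowered = text.lower()
    tokens = [t.strip(".,!?") for t in lowered.split()]
    return {
        "length": len(tokens),
        "has_slogan": any(slogan in lowered for slogan in SLOGANS),
        "aggression_hits": sum(t in AGGRESSION_MARKERS for t in tokens),
        "exclamation_ratio": text.count("!") / max(len(text), 1),
    }

@dataclass
class Account:
    created_at: datetime
    comments_per_day: float
    has_avatar: bool
    has_bio: bool
    followers: int
    following: int

def profile_features(acct: Account, now: datetime) -> dict:
    """Layer 2: account age, activity level, profile content, follower symmetry."""
    return {
        "account_age_days": (now - acct.created_at).days,
        "comments_per_day": acct.comments_per_day,
        "empty_profile": not (acct.has_avatar or acct.has_bio),
        # A young, hyperactive, empty account whose following roughly mirrors
        # its followers is a warning sign, not proof of automation.
        "follow_symmetry": acct.following / max(acct.followers, 1),
    }

now = datetime(2024, 1, 1, tzinfo=timezone.utc)
suspect = Account(datetime(2023, 11, 20, tzinfo=timezone.utc), 45.0, False, False, 12, 11)
print(message_features("Example slogan one!!! You traitor."))
print(profile_features(suspect, now))
```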

This approach allowed us to identify not individual bots, but functionally orchestrated clusters of accounts acting in concert.
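
For the network-mapping layer, the sketch below uses hypothetical reply data to build a who-replies-to-whom graph, detect communities, and report modularity and density, the metrics quoted for the YouTube comment network illustrated above. The library choices (networkx, greedy modularity communities) and all account names are illustrative assumptions rather than the study's actual toolchain.

```python
# Minimal sketch with hypothetical reply data: build a who-replies-to-whom
# graph, detect communities, and report modularity and density.
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities, modularity

# Hypothetical interaction records: (commenter, account they replied to).
replies = [
    ("user_a", "hub_1"), ("user_b", "hub_1"), ("bot_01", "hub_1"),
    ("bot_02", "bot_01"), ("bot_03", "bot_01"), ("bot_03", "bot_02"),
    ("user_c", "hub_2"), ("user_d", "hub_2"), ("troll_1", "user_d"),
    ("troll_2", "troll_1"),
]

G = nx.Graph()
G.add_edges_from(replies)

# Dense clusters with few links between them suggest the fragmented,
# orchestrated structure described in the network illustration above.
communities = greedy_modularity_communities(G)
print(f"accounts: {G.number_of_nodes()}, interactions: {G.number_of_edges()}")
print(f"communities found: {len(communities)}")
print(f"modularity: {modularity(G, communities):.3f}   (study's YouTube network: 0.735)")
print(f"density: {nx.density(G):.3f}   (low density = weak links across the whole graph)")

# High-degree hubs are candidates for agenda-setters; flat, repetitive
# clusters of low-profile accounts for synthetic participants.
centrality = nx.degree_centrality(G)
print("highest-degree accounts:", sorted(centrality, key=centrality.get, reverse=True)[:3])
```

Dense, internally cohesive communities with few ties between them (high modularity, low overall density) are the structural signature of the fragmented, orchestrated discussions described above.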
