The shift from authentic to orchestrated engagement online changes not only how we communicate, but how influence operates. Who is speaking? Who is listening? What counts as genuine support, consensus, or protest in a space increasingly filled with automated and semi-automated behaviour?
Political communication and disinformation today move through the same channels as any other type of content - hashtags, comment sections, algorithmic amplification. Propaganda, like memes or brand messaging, spreads according to the same platform logic. What was once understood as top-down messaging has become deeply entangled with the everyday behaviours of users, influencers, and coordinated digital actors.
Our research explores how inauthentic participation operates within this environment, focusing on how different types of accounts - human, automated, and hybrid - simulate public engagement and help shape politicised narratives. We examine these dynamics across two major platforms, YouTube and Twitter/X, not simply to detect 'fake' activity, but to understand how influence is manufactured, signalled, and sustained.
To make sense of this complexity, we tested our approach on both platforms, each chosen for its distinct mode of engagement and public discourse. This allowed us to observe orchestrated behaviour in two contrasting environments and to refine a methodological framework that is both insightful and reproducible.
The result is a three-part model for detecting and interpreting inauthentic activity: message and metadata analysis, profile-level analysis and network mapping. Together, these layers help uncover how influence is not just asserted, but performed - often through subtle and scalable mechanisms.
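To make the three layers concrete, here is a minimal sketch in Python of how such a pipeline could be organised. The data fields, heuristics, weights, and function names are illustrative assumptions for this article, not the framework's actual implementation.

```python
# Illustrative sketch of a three-layer detection pipeline.
# All field names, heuristics and thresholds are hypothetical.
from collections import Counter
from dataclasses import dataclass
from typing import Dict, List, Optional, Tuple


@dataclass
class Post:
    account_id: str
    text: str
    timestamp: float              # seconds since epoch, assumed sorted per account
    replied_to: Optional[str] = None


@dataclass
class Account:
    account_id: str
    age_days: int
    followers: int
    following: int


def message_signals(posts: List[Post]) -> Dict[str, float]:
    """Layer 1: message and metadata analysis (duplicated text, posting bursts)."""
    if not posts:
        return {"duplicate_ratio": 0.0, "median_gap_s": float("inf")}
    unique_texts = len(set(p.text for p in posts))
    gaps = sorted(b.timestamp - a.timestamp for a, b in zip(posts, posts[1:]))
    median_gap = gaps[len(gaps) // 2] if gaps else float("inf")
    return {"duplicate_ratio": 1 - unique_texts / len(posts), "median_gap_s": median_gap}


def profile_signals(acc: Account) -> Dict[str, float]:
    """Layer 2: profile-level analysis (account age, follower/following balance)."""
    return {"age_days": float(acc.age_days),
            "follow_ratio": acc.following / max(acc.followers, 1)}


def network_signals(posts: List[Post]) -> Dict[str, float]:
    """Layer 3: network mapping (how concentrated an account's reply targets are)."""
    targets = Counter(p.replied_to for p in posts if p.replied_to)
    top_share = max(targets.values()) / sum(targets.values()) if targets else 0.0
    return {"reply_target_concentration": top_share}


def assess(acc: Account, posts: List[Post]) -> Tuple[float, Dict[str, float]]:
    """Combine the three layers into a single, crude suspicion score in [0, 1]."""
    s = {**message_signals(posts), **profile_signals(acc), **network_signals(posts)}
    score = (0.5 * s["duplicate_ratio"]
             + 0.3 * (1.0 if s["median_gap_s"] < 60 else 0.0)   # bursty posting
             + 0.2 * s["reply_target_concentration"])
    return score, s
```

In a real analysis each layer would likely draw on platform-specific signals (YouTube comment metadata, Twitter/X follower graphs) and feed a classifier rather than fixed weights; the point of the sketch is only the layered structure.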
The spectrum of inauthenticity
Terms like 'bots' or 'trolls' no longer capture the full variety of online manipulation. Today, we observe a spectrum of actors ranging from fully automated accounts, through 'cyborgs' (semi-automated users with human oversight) and coordinated troll farms, to loyalist users who, while not inauthentic themselves, become part of orchestrated campaigns.
To distinguish between these different roles, we propose understanding inauthentic participation not as a fixed category but as a continuum - a set of behaviours designed to mimic authentic user engagement while following a strategic, often political, logic. Importantly, the starting point of this continuum is authenticity itself.
Key findings
- Inauthentic behaviour affects performance metrics
We found that bot-driven engagement strongly affects how content is promoted online. There is a clear correlation between video views and comment volume (R² = 0.79), and it becomes even stronger when inauthentic comments are removed (R² = 0.84), indicating that bot comments add volume that does not track genuine viewership. This inflated engagement makes content appear more relevant to algorithmic systems - sometimes to the benefit of the content creators themselves.
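As an illustration of the kind of comparison behind these figures, the sketch below computes R² between view counts and comment counts, with and without a simulated bot component. The data are synthetic and will not reproduce the reported values; only the shape of the calculation is meant to carry over.

```python
# Hypothetical illustration of the views-vs-comments correlation check.
# The data are simulated; a real analysis would use collected video
# statistics and the project's own bot-comment labels.
import numpy as np


def r_squared(x: np.ndarray, y: np.ndarray) -> float:
    """R² of a simple least-squares fit of y on x."""
    slope, intercept = np.polyfit(x, y, 1)
    residuals = y - (slope * x + intercept)
    return 1 - np.sum(residuals ** 2) / np.sum((y - y.mean()) ** 2)


rng = np.random.default_rng(seed=0)
views = rng.integers(1_000, 500_000, size=200)            # video view counts
organic = np.maximum(                                      # comments that track views,
    views * 0.002 + rng.normal(0, 100, size=200), 0)       # clipped so counts stay >= 0
bot_comments = rng.integers(0, 400, size=200)              # bot volume, unrelated to views
all_comments = organic + bot_comments

print("R² with bot comments included:", round(r_squared(views, all_comments), 2))
print("R² with bot comments removed: ", round(r_squared(views, organic), 2))
```

A real analysis might also work on log-transformed counts, given how heavy-tailed view counts are; the figures reported above come from the collected data, not from this simulation.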
- Authenticity as a starting point
To identify orchestrated behaviour, we must first understand what authentic engagement looks like. We observed consistent contributions from identifiable real users - including journalists, activists, and engaged viewers - across platforms. These users display diverse narrative positions and foster genuine dialogue, offering a vital baseline for detecting manipulation.
- The crowd is not uniform
Rather than a single category of 'bot', our analysis reveals a stratified digital crowd. We identified five functional clusters of accounts: