15 June 2023

Conspiracy theory research

Responses to comments and questions

We have received a number of questions and comments on our survey on belief in conspiracies, part of which was used in a BBC podcast. We are committed to accurate measurement and discussion of this topic, and this note responds to the key comments. We believe the study provided valuable insight into a number of aspects of conspiracy belief, and useful context for the much wider investigation carried out by the BBC, whose podcast features a selection of the findings. We also recognise that some specific aspects of the survey need to be contextualised, and we have updated our write-up of the study accordingly.

In addition, the comments raise some methodological queries where new research could provide valuable insight into the effects on results, so we will be running a new survey to test some of these effects, as outlined below.

Content and focus

This survey was designed with input from a BBC podcast series focused on specific conspiracy theory media and the conspiracies they frequently cover, which informed the selection of conspiracies included. At King’s we have asked about many other conspiracies in other studies over a number of years, and will continue to explore more in the future. In particular, we are keen to explore the relationship between mainstream media consumption and conspiracy belief in our new study.

Question design in estimating stated belief in conspiracies

The questions on levels of conspiracy belief are in a format that we, and many other researchers, have used regularly, with a number of design elements to support accurate measurement. First, the scales are balanced, with equal numbers of “true” and “false” categories. Second, we ensured the question set included statements that a large majority perceive to be “true”, to encourage people to engage with the questions rather than speed through. And third, the questions were randomised so respondents saw them in different orders. The resulting estimates of levels of belief are in line with those from similar studies in related areas.
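As an illustration of these design elements, here is a minimal sketch of how such a battery might be administered. The statements, scale labels and structure are hypothetical placeholders, not the survey’s actual instrument:

    import random

    # Hypothetical statement battery: conspiracy items mixed with a
    # widely accepted "true" statement acting as an engagement check.
    STATEMENTS = [
        "Statement A (conspiracy item)",
        "Statement B (conspiracy item)",
        "Smoking increases the risk of lung cancer",  # engagement check
    ]

    # Balanced scale: equal numbers of "true" and "false" categories.
    SCALE = [
        "Definitely true",
        "Probably true",
        "Don't know",
        "Probably false",
        "Definitely false",
    ]

    def battery_for_respondent(seed: int) -> list:
        """Return the statements in a fresh random order for one respondent."""
        order = STATEMENTS[:]
        random.Random(seed).shuffle(order)
        return order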

We do understand that within this there will be a proportion of the public who may not actually believe the conspiracies but are responding for other reasons, whether deliberately answering incorrectly or simply making a mistake. One study puts this proportion at around 4%, which is important, but does not change the overall picture of surprisingly high minorities saying they believe in key conspiracy theories. Nevertheless, we’ve updated the write-up of the study on our website to emphasise this more.
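A rough worked adjustment shows why this proportion matters without changing the picture. The 25% headline figure below is hypothetical, chosen only for illustration; the 4% is the misresponse rate from the study cited above:

    # Hypothetical headline figure and the cited ~4% misresponse rate.
    observed_belief = 0.25
    misresponse_rate = 0.04

    # Even in the worst case, where every misresponder answered "true",
    # the adjusted figure remains a sizeable minority.
    worst_case_adjusted = observed_belief - misresponse_rate
    print(f"Observed: {observed_belief:.0%}; worst case: {worst_case_adjusted:.0%}")
    # Observed: 25%; worst case: 21%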

We used broad framings for some questions, such as whether Covid was seen as a “hoax”, deliberately to capture a wider sense of uncertainty, as we have in previous studies. Clearly there may be different interpretations of this term by the individuals responding, and it should be viewed in that light, but we believe it is valid and valuable to ask and report these broader questions, being clear about what exactly was asked.

We also included more specific, precisely worded questions in the survey that left less room for interpretation – for example, whether people think the following statement is true or false: “So-called ‘15-minute cities’, where all services are within a 15-minute walk of where people live, are an attempt by governments to restrict people's personal freedom and keep them under surveillance.” Responses to this question also indicated high levels of belief.

However, this is an area where further research could provide more insight, so we will develop a series of questions to test different formulations of specific and general framings across split samples, along with follow-up open-ended questions on what people were thinking of.
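A minimal sketch of the split-sample approach described here, assuming simple random assignment; the condition labels are placeholders:

    import random

    rng = random.Random(2023)  # fixed seed so the split is reproducible

    # Hypothetical respondent IDs, dealt evenly across framing conditions
    # after shuffling, so each split sample is comparable and equal-sized.
    respondent_ids = list(range(1, 2275))
    conditions = ["general framing", "specific framing"]

    rng.shuffle(respondent_ids)
    assignment = {rid: conditions[i % len(conditions)]
                  for i, rid in enumerate(respondent_ids)}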

It will also be the case that on items such as the “Great Reset” conspiracy, members of the public may answer that they believe it is true without much or any knowledge of what it is. Indeed, we included questions to try to get at that, asking whether respondents had seen or heard information that made them think it is true; large minorities said they had not. Respondents could, of course, have other reasons to think it’s true, but this does help highlight the indirect nature of these beliefs.

This is another area where further questioning could shed some light on what people have in mind, by testing understanding of specific elements of the conspiracy against overall claimed belief, again including using open-ended questions.

Sample and online surveys

The study was conducted by Savanta, with 2,274 interviews conducted using their online panel, which is a standard, accepted sample size and interviewing approach for nationally representative surveys in the UK.
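For context, the conventional margin-of-error calculation for a sample of this size is sketched below. This is the textbook formula, which treats the sample as a simple random sample; online panels strictly are not, so the figure is indicative only:

    import math

    n = 2274   # achieved sample size
    p = 0.5    # the proportion that gives the widest interval
    z = 1.96   # 95% confidence level

    # Standard margin of error for a simple random sample.
    moe = z * math.sqrt(p * (1 - p) / n)
    print(f"Margin of error: +/-{moe:.1%}")  # about +/-2.1 percentage points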

Savanta recruits panellists from a wide network of sources to improve panel diversity. They use a series of automatic and manual checks to ensure data quality: validating that respondents are who they say they are, excluding bots, and applying quality-control procedures to identify people who complete the survey too quickly, people who give the same response on the majority of scaled questions, and people who give nonsensical responses in open questions. Respondents who fail these checks are removed from the sample. Data were weighted to be representative of the UK by age, sex, region, and social grade.
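As an illustration only, here is a sketch of the kinds of automated checks described above. The thresholds and field names are hypothetical and are not Savanta’s actual procedures, which also include manual review:

    # Hypothetical thresholds for illustrative quality-control checks.
    MEDIAN_DURATION_SECS = 600
    STRAIGHT_LINE_SHARE = 0.9

    def is_speeder(duration_secs: float) -> bool:
        # Flag respondents finishing in under a third of the median time.
        return duration_secs < MEDIAN_DURATION_SECS / 3

    def is_straight_liner(scaled_answers: list) -> bool:
        # Flag respondents giving the same answer on most scaled questions.
        if not scaled_answers:
            return False
        top = max(set(scaled_answers), key=scaled_answers.count)
        return scaled_answers.count(top) / len(scaled_answers) > STRAIGHT_LINE_SHARE

    def is_nonsense_open_end(text: str) -> bool:
        # Crude gibberish check: very short, or no vowels at all.
        cleaned = text.strip().lower()
        return len(cleaned) < 3 or not any(v in cleaned for v in "aeiou")

    def passes_checks(row: dict) -> bool:
        """Respondents failing any check are removed from the sample."""
        return not (is_speeder(row["duration_secs"])
                    or is_straight_liner(row["scaled_answers"])
                    or is_nonsense_open_end(row["open_end"]))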

There will always, however, remain a challenge for online panels to fully represent the public: they necessarily rely on people opting in to be part of a panel, which will attract a certain sort of person, likely to be online more frequently. We should bear in mind that this may in turn inflate estimates related to online information exposure, although online panels remain the standard interview mode for the large majority of polls in the UK.

Estimates of readership of the Light and participation in protests or rallies

Related to that, there are two areas where the findings need contextualising and should be interpreted with caution: the high claimed levels of awareness, readership and distribution of the Light, and the claimed levels of past attendance at rallies or protests on various issues, including those related to conspiracies.

Respondents can falsely recall reading and interacting with publications, and these questions were not designed to produce accurate readership estimates that could be grossed up to population readership figures, which would require a more detailed question set. Rather, they were there to allow us to analyse the sample by this claimed behaviour. Similarly, past involvement in a “protest or rally” cannot be at the level claimed by respondents, particularly on more niche issues. This could reflect some respondents interpreting “protest” more broadly than most would, or some of the other effects outlined above of deliberately or mistakenly answering incorrectly (the low figures involved, at 7%, mean these effects could be a significant proportion of the overall response).
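A rough illustration shows why a low figure like 7% is especially sensitive to these effects, using the approximately 4% misresponse rate cited earlier; the calculation is indicative, not an estimate of the true rate:

    observed_attendance = 0.07  # share claiming past attendance at a protest/rally
    misresponse_rate = 0.04     # approximate deliberate/mistaken response rate

    # If most misresponders answered affirmatively, noise alone could
    # account for more than half of a 7% estimate, whereas the same rate
    # is a much smaller fraction of a 25% estimate.
    noise_share = misresponse_rate / observed_attendance
    print(f"Noise could be up to {noise_share:.0%} of the observed figure")
    # -> up to 57%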

We have added a note to the write-up of the survey on our website making these points, and again, this will be an area we’ll explore and test in follow-up experimental studies to understand and improve measures of claimed behaviour.
