
05 August 2019

Facebook's fact checking is only the beginning

Susannah Hume and Michael Sanders

While debunking online myths should be welcomed, earlier action is needed to prevent falsehoods from being shared in the first place.

It has been reported recently that much of the health information shared on Facebook is false. Full Fact, a fact-checking organisation appointed by Facebook itself, found that of 96 health claims investigated, only five were true, with the remainder being falsehoods, half-truths, or satire.

Some of the examples given are trivial enough that policymakers need not be concerned. One post, shared more than 100,000 times, (erroneously) discouraged pregnant women from using a particular muscle soak for fear of inducing early labour; that might concern Radox, who make the product and could lose out on sales, but it doesn’t seem to pose any actual risk. The idea that you can treat a heart attack with vigorous coughing, however, is more troublesome.

It’s certainly commendable that Facebook is trying to do something about the profusion of false information on its platform, but it’s important to be realistic about the likelihood of success. The desire to share information that we think is useful or interesting, and our tendency to believe things that our friends (even Facebook "friends") share with us, are ingrained in our psychology, and have been since our emergence as a species. Two things make Facebook different from more traditional social networks (the kind where you actually meet people, or perhaps speak to them over the phone): the presence of a central planner (Facebook itself), shaping the social environment to generate profits, and the speed with which information spreads.

Fact checking, while welcome, only allows the stable door to be shut after the horse has bolted, as Facebook knows from its own research. The company has long been aware of this tension, and of the platform’s capacity to spread misinformation rapidly. As far back as 2014, long before "fake news" entered the popular vernacular, Facebook’s own researchers studied 'rumour cascades' and whether corrections - in this case from the mythbusting website Snopes - actually stemmed the flow of misinformation.

What they found offers mixed comfort for those of us who want to hope. Someone sharing an article debunking a viral untruth reduced the likelihood that the myth would be shared more widely, and in some cases prompted the original poster to delete the post. However, this happened only in a minority of cases, and typically only after dozens or hundreds of people had already seen the fake story. Facebook’s data covers only stories where a Snopes link was eventually shared, so we have no way of telling how much misinformation circulates unchallenged, or for how long.

Truly tackling this phenomenon will require earlier action. Two options, both controversial, would go further. The first is to be proactive about what Facebook, and platforms like it, allow to be published. Some sources of information - particular websites, or posts from individuals - are known to be especially prone to falsehood, and Facebook could decide not to allow this material to be posted at all, rather than waiting for a specific lie to be uncovered. This would mean taking a firmer stance, and abandoning the illusion of free speech on these platforms, but it could be effective at preventing people from shouting fire in a crowded theatre when there is no fire.

The second option is a slower, more intensive process. Researchers, including Gordon Pennycook and David Rand, inspired by the events of the last few years, have begun investigating not just the fake news phenomenon, but what might be done to avert it. They find that people are less likely to believe fake news, or to share it, when they pause to think before doing so. Social media platforms could therefore fight misinformation by slowing down their processes and encouraging people to be more deliberative before they post or share - something Instagram is already doing. This wouldn’t work for everyone, but if it stopped some people and slowed the spread of misinformation, other approaches - like fact checking - could be more effective.

Susannah Hume and Michael Sanders are the authors of Social Butterflies: Reclaiming the positive power of social networks. Sanders is a Reader in Public Policy at the Policy Institute at King’s and Hume is Associate Director of What Works at King’s College London.
