
13 December 2019

The pollsters have had a good election

Ben Page

BEN PAGE: After some recent challenges, the polling industry performed well this time around


If you are a Conservative voter you will be happy with the general election result. You will also be happy if you are a pollster. Pollsters have had a hard time in recent years, overestimating Labour in 2015, missing the Leave victory in 2016, and not anticipating a hung parliament in 2017. More recently, things have been better. Ipsos MORI called the May EU election correctly for the Evening Standard, predicting the massive Brexit Party win and showing the vote share of every party to within, on average, 0.75% of the actual result.

In the 2019 general election nearly all the final polls forecast a Conservative majority. The final average of the pre-election polls was 43% for the Conservatives, 33% for Labour and 12% for the Lib Dems. At the time of writing, this seems to have been pretty much right on the money – although one or two companies did show a much smaller Conservative lead.

Ipsos MORI’s predictions from before the election had an average error of only 0.3% per party and predicted a large Conservative majority. Given that all polls, even when executed perfectly, have natural margins of error, no one should expect them always to match the final, true results exactly. Each poll will normally have a margin of error of around 4% – in other words, if it says the Conservatives will get 43%, the “range” in which one would expect the result to lie, had a census of all voters been taken, is between 39% and 47%. This makes the average error of under 2% in the 2019 polls reasonably impressive.
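As a rough illustration of where a figure like that comes from – and not a description of any pollster’s actual weighting scheme – the textbook sampling formula can be computed directly. The sample size of roughly 1,000 respondents below is an assumption for the sake of the example:

```python
import math

def margin_of_error(p, n, z=1.96):
    """Approximate 95% margin of error for a share p from a simple random sample of size n."""
    return z * math.sqrt(p * (1 - p) / n)

# Illustrative assumption: a national poll of roughly 1,000 respondents, Conservatives on 43%.
p, n = 0.43, 1000
moe = margin_of_error(p, n)
print(f"Sampling margin of error: ±{moe:.1%}")             # about ±3.1%
print(f"Plausible range: {p - moe:.1%} to {p + moe:.1%}")   # roughly 40% to 46%
# Published margins are usually quoted a little wider (around ±4%) to allow for
# design effects such as weighting and clustering, which this simple formula ignores.
```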

Should the pre-election polls have predicted a bigger majority than some commentators were expecting? The challenge is that most polls are not designed to predict seat share, only vote share. This matters, as 36% can give you a big majority (Blair, 2005) while 37% can give you a hung parliament (Cameron, 2010). A 1% difference in vote share can make a 50-seat difference in what you end up with. Combine this with the margin-of-error challenge and you can see why only the exit poll and the MRP models try to predict seat share.

If the excitement about MRP has passed you by, it stands for “multi-level regression with post-stratification” and is a statistical technique used to turn opinion poll results into seat predictions. It models how different constituencies may vote, rather than just what percentage of the public nationally may vote for a particular party. In 2017 YouGov produced an accurate estimate of a hung parliament based on its MRP model, which did better than its conventional poll, and earlier this week it released its final modelled seat prediction for 2019.
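To make the idea concrete, here is a deliberately simplified Python sketch of the approach: support in small demographic cells is shrunk towards the national average (a crude stand-in for the full multilevel regression), and the cell estimates are then re-weighted by each constituency’s demographic make-up (the post-stratification step). Every respondent, cell definition and constituency below is invented for illustration and is not drawn from any pollster’s real model:

```python
from collections import defaultdict

# Toy national "poll": (age group, 2016 referendum vote, party voted for). All invented.
poll = [
    ("18-34", "remain", "Lab"), ("18-34", "remain", "Lab"), ("18-34", "leave", "Con"),
    ("35-54", "leave",  "Con"), ("35-54", "remain", "Lab"), ("35-54", "leave", "Con"),
    ("55+",   "leave",  "Con"), ("55+",   "leave",  "Con"), ("55+",   "remain", "Lab"),
]
party = "Con"

# Step 1 (the "MR" part, heavily simplified): estimate support for `party` in each
# demographic cell, shrinking small cells towards the national rate instead of
# fitting a proper multilevel regression.
national = sum(vote == party for _, _, vote in poll) / len(poll)
cells = defaultdict(list)
for age, ref, vote in poll:
    cells[(age, ref)].append(vote == party)

k = 5  # shrinkage strength: cells with few respondents stay close to the national rate
support = {cell: (sum(v) + k * national) / (len(v) + k) for cell, v in cells.items()}

# Step 2 (the "P" part): post-stratify using each constituency's (invented) cell counts.
constituencies = {
    "Seat A": {("18-34", "remain"): 30_000, ("35-54", "leave"): 25_000, ("55+", "leave"): 20_000},
    "Seat B": {("18-34", "remain"): 15_000, ("35-54", "remain"): 20_000, ("55+", "leave"): 40_000},
}

for seat, counts in constituencies.items():
    total = sum(counts.values())
    share = sum(support.get(cell, national) * n for cell, n in counts.items()) / total
    print(f"{seat}: estimated {party} share {share:.1%}")
```

A real MRP model also has to estimate turnout, uses far richer demographics and much larger samples, and pools information across constituencies statistically rather than with a single shrinkage constant.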

MRP is not infallible. Much of the commentary about it ignores the 2017 Lord Ashcroft model, which used the same approach and estimated a Conservative majority of over 60. And YouGov’s final 2019 model noted that a hung parliament or a large majority was possible, but suggested around a 28-seat majority, well below what actually happened. But while the technique is by no means perfect, it does offer a good way of estimating seat share – something normal polls are not designed to provide – where 650 separate constituency surveys would be impractical or too expensive.

Finally, there is the exit poll. It works on a completely different basis to the internet and telephone polls of the campaign. As in previous years, Ipsos MORI carried out the official exit poll on behalf of the three major broadcasters who share the not-inconsiderable costs: the BBC, ITV and Sky News. It works by having teams of three interviewers at each of 144 polling stations for the entire time they are open to voters: 14 hours in total. These stations have been carefully chosen from tens of thousands across the country, covering varying degrees of marginality, and at each one a simple stratified random sample – every “one in N” of those leaving the polling station – is approached and interviewed. Despite the added complication this time of winter weather – in the first December election since 1923 – our teams battled the cold, dark and wet to collect more than 23,000 interviews.
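The “one in N” rule itself is simple to express; here is a minimal sketch, assuming a fixed interval and a random starting offset (the real exit poll’s intervals, station selection and adjustment models are considerably more sophisticated):

```python
import random

def one_in_n(voters, n, start=None):
    """Approach every n-th person leaving the polling station, from a random starting point."""
    if start is None:
        start = random.randrange(n)
    return voters[start::n]

# Illustrative assumption: 1,200 hypothetical voters leave one station over the 14 hours.
voters = [f"voter_{i}" for i in range(1_200)]
approached = one_in_n(voters, n=10)
print(f"{len(approached)} voters approached at this station")  # about 120
```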

For the third election in a row, the result of the exit poll, published as soon as polling stations closed at 10pm, predicted the final result with a spectacular degree of accuracy. It predicted that the Conservative Party would win 368 seats and Labour 191, while the Scottish National Party and Liberal Democrats would win 55 and 13 seats respectively. With 649 of 650 seats now declared, the totals for each party are 364, 203, 48 and 11.
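Put another way, the gap between the 10pm projection and the declared totals quoted above can be tallied directly:

```python
projected = {"Con": 368, "Lab": 191, "SNP": 55, "LD": 13}   # exit poll at 10pm
declared  = {"Con": 364, "Lab": 203, "SNP": 48, "LD": 11}   # with 649 of 650 seats declared

errors = {party: abs(projected[party] - declared[party]) for party in projected}
print(errors)                                # {'Con': 4, 'Lab': 12, 'SNP': 7, 'LD': 2}
print(sum(errors.values()) / len(errors))    # average absolute error of 6.25 seats per party
```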

The exit poll is a unique and challenging project, which stands alone in terms of the impact it has at 10pm on election day. It remains the very best way of predicting the result of an election. Hopefully the generally accurate pre-election polling, combined with another excellent exit poll and very accurate polling in the EU elections, means pollsters’ reputations are recovering somewhat. Professor Will Jennings of Southampton University has studied elections around the world since 1942, Britain’s included, and concluded that polling is actually no worse – but no better – than in the past. His estimates suggest an average error of around 2% for each party in each election over those 70-plus years. In 2019, Britain’s pollsters have done rather better than that.

Ben Page is Chief Executive of Ipsos MORI.
