Why can’t pollsters predict the future?

While Labour's landslide victory earlier this month was a surprise to no one, the Labour vote was considerably smaller than many pollsters expected, and the complete implosion of the Conservative party that some predicted did not quite materialise.

The media's and Twitter's addiction to polling has certainly deepened over the last decade. Constant updates on social media, celebrity pollsters, and an endless stream of new MRP models have all added to the noise and the sheer quantity of information.

During the election, the polls seemed to drive the campaign narrative more than ever before… but in the end, they missed the mark. Why did this happen, and should the public pay as close attention next time?

Polls vs reality

On vote shares overall, the pollsters predicted Labour would hoover up 38.8% of voters, while the real figure on election day came out at 33.7% - not much higher than their performance in 2019. The Conservatives were expected to see their vote share drop from 43.6% to 21.4%, but it bottomed out at a slightly less apocalyptic 23.7%.


Graph 1: How did the pollsters do on predicting vote shares?


Turning to predictions of how these vote shares would translate into seats, the average forecast among pollsters' models would have seen Labour on 441 seats (29 too many) and the Conservatives on 101 (20 too few).

Bradshaw Advisory's own model fared better on Labour's winnings - we predicted they'd go home with 417 seats - though we were further out on the Conservatives' losses, expecting them to be somewhat heavier than they turned out to be.

So where do these discrepancies come from?

Polling is more difficult than we tend to think

Polling is easy when everyone is sure of two things: a) whether they'll vote at all, and b) who they'd vote for if they made it to a polling station. This year in particular, there was a high degree of uncertainty on both counts.

Throughout 2024, pollsters reported a much higher incidence of respondents answering that they "don't know" or would "prefer not to say" when asked who they would vote for at the election.

In a number of cases, the "don't know/prefer not to say" response polled higher than the Conservatives.

Many pollsters have tried to develop models to guess who the "don't knows" would end up voting for - one common approach reallocates them in proportion to how they say they voted last time. This means that pollsters' estimates for this election were increasingly made up of (educated) guesswork rather than voters' actual views.
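As a rough illustration of how such a reallocation might work, here is a minimal sketch in Python. The respondent data and the reallocation rule are invented for illustration - this is the general idea, not any particular pollster's actual method:

```python
# Minimal sketch: reallocate "don't know" respondents to the party they
# reported voting for in 2019. Data and rule are invented for illustration.
from collections import Counter

respondents = [
    {"vote_2024": "Labour",       "vote_2019": "Labour"},
    {"vote_2024": "don't know",   "vote_2019": "Conservative"},
    {"vote_2024": "Conservative", "vote_2019": "Conservative"},
    {"vote_2024": "don't know",   "vote_2019": "Labour"},
    {"vote_2024": "don't know",   "vote_2019": "Conservative"},
]

def adjusted_shares(people):
    """Count stated 2024 votes, sending each "don't know" back to the
    party the respondent reported voting for in 2019."""
    counts = Counter(
        r["vote_2019"] if r["vote_2024"] == "don't know" else r["vote_2024"]
        for r in people
    )
    total = sum(counts.values())
    return {party: n / total for party, n in counts.items()}

print(adjusted_shares(respondents))  # {'Labour': 0.4, 'Conservative': 0.6}
```

The obvious weakness is visible even in this toy version: three of the five "votes" being counted are inferences, not stated preferences, and if the don't-knows break differently from their past vote, the adjustment misleads.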

Polling by Lord Ashcroft has shown that a massive proportion of voters only made their minds up on polling day itself or in the final few days running up to it. Worth noting here is that Labour voters appear to have made their minds up much earlier than others, with 68% deciding they'd go red at least a month before the election, compared to 55% among Tory voters. This may well help explain why Labour's polling lead was higher than their actual vote share.

In addition to making guesses about who the "don't knows" will end up voting for, pollsters also have to work out which of their respondents will actually turn up to vote come election day. Some ask respondents outright and take their word for it, while others have developed models for predicting a given respondent's turnout likelihood based on their demographics and patterns across prior elections.
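In its simplest form, such a model just weights each respondent by an estimated probability of voting. The sketch below is a hypothetical, minimal version of that weighting - the age bands, probabilities, and respondents are invented for illustration, not taken from any pollster's model:

```python
# Minimal sketch of turnout weighting: each respondent counts in
# proportion to an estimated probability that they actually vote.
# The probabilities and respondents are invented for illustration.

# Illustrative P(votes) by age band; a real model would estimate these
# from demographics and turnout patterns at prior elections.
turnout_prob = {"18-34": 0.50, "35-64": 0.65, "65+": 0.80}

respondents = [
    {"party": "Labour",       "age_band": "18-34"},
    {"party": "Labour",       "age_band": "35-64"},
    {"party": "Conservative", "age_band": "65+"},
    {"party": "Conservative", "age_band": "65+"},
]

def likely_voter_share(people, party):
    """Share of the turnout-weighted pool backing `party`."""
    total = sum(turnout_prob[r["age_band"]] for r in people)
    backing = sum(turnout_prob[r["age_band"]] for r in people
                  if r["party"] == party)
    return backing / total

print(f"Labour: {likely_voter_share(respondents, 'Labour'):.1%}")
```

Note that the raw sample splits 50/50, but the weighting pulls Labour's likely-voter share down to about 42% because older respondents are assumed likelier to vote. If those assumed probabilities stop matching reality, the adjustment itself becomes a source of error.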

This is all well and good if turnout patterns don't change significantly between elections. But again, 2024 was a strange election on this count, with overall turnout falling 7 percentage points compared to 2019, coming in at 59.9% - the lowest since 2001.

Nor was the reduction in turnout even across constituencies: it fell noticeably more sharply in Labour strongholds. Given these structural changes to turnout patterns in 2024, turnout models based on patterns seen at prior elections will still be usable, but their reliability will have diminished.

The limits of models

The polls are meant to give us an idea of what parties' final vote shares will be, whereas pollsters' modelling shows us how those votes will translate into seats. These models have been as troublesome as, if not more so than, the polls themselves.

Survation, for example (sorry if you're reading this), predicted Labour on 484 seats and the Conservatives on 64, while confidently stating that Labour had a "99% chance of beating their performance in 1997". Clearly wide of the mark.


Graph 2: How did pollsters do on predicting seat numbers?


Before discussing what went wrong, it's worth setting out how pollsters actually do their seat forecasting. The most in-vogue form of modelling this election was MRP (multilevel regression and post-stratification), which came to prominence after the 2017 general election, when the YouGov MRP model managed to get 92% of its predictions correct (it hit the same figure this year).

Pollsters put together MRP models by conducting a very large poll in which, in addition to standard questions like who they will vote for, participants are asked detailed questions about who they are - where they live, what job they do, what they ate for breakfast. This allows pollsters to develop a good idea of who a 39-year-old mechanic from the North West who had bran flakes for breakfast would likely vote for. That information can then be matched to the demographics of parliamentary constituencies, giving pollsters estimates of how each party will do in each seat.
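The second half of that process - matching cell-level estimates to constituency demographics, the "post-stratification" in MRP - can be sketched in a few lines. Everything below is hypothetical: the demographic cells, support figures, and counts are invented, standing in for what the regression stage of a real MRP model would estimate from the mega-poll and census data:

```python
# Minimal sketch of the post-stratification step in MRP. The regression
# stage would estimate P(vote Labour) per demographic cell from a large
# poll; here we use invented figures to show the weighting step.

# Hypothetical cell-level support estimates (age band, housing tenure).
cell_support = {
    ("18-34", "renter"): 0.55,
    ("18-34", "owner"):  0.44,
    ("35-64", "renter"): 0.46,
    ("35-64", "owner"):  0.33,
    ("65+",   "renter"): 0.35,
    ("65+",   "owner"):  0.24,
}

# Hypothetical census counts for one constituency: how many electors
# fall into each demographic cell.
constituency_counts = {
    ("18-34", "renter"): 14_000,
    ("18-34", "owner"):   6_000,
    ("35-64", "renter"):  9_000,
    ("35-64", "owner"):  21_000,
    ("65+",   "renter"):  4_000,
    ("65+",   "owner"):  16_000,
}

def poststratify(support, counts):
    """Weight each cell's estimated support by its share of the
    constituency's electorate to get a seat-level vote share."""
    total = sum(counts.values())
    return sum(support[cell] * n for cell, n in counts.items()) / total

share = poststratify(cell_support, constituency_counts)
print(f"Estimated Labour share in this seat: {share:.1%}")
```

This also makes the input-data problem concrete: if the cell-level support estimates are skewed by the "don't knows" or by turnout assumptions, every constituency estimate built on top of them is skewed too.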

The MRP models suffered this time around because of their input data. As discussed above, the polls published this year suffered from several issues which led them to miss the mark, be that the popularity of the ‘don’t know’ party or poor turnout. Any models using this data as input are bound to suffer from similar problems.

MRP models also struggle to pick up on local peculiarities which buck national trends - particularly strong local campaign groups, say, or especially well-liked independent candidates.

One factor which may have put pollsters' models under pressure this year was the significant uptick in tactical voting. In Scotland, we saw evidence that a number of voters who would normally back the Conservatives voted Labour in the hope of unseating SNP MPs, pushing SNP numbers down further than most pollsters predicted.

South of the border, we saw a massive surge for the Lib Dems across the commuter corridor in the South of England, undoubtedly powered by would-be Labour voters switching their support to topple local Conservative MPs. These highly local patterns of tactical voting are difficult to pick up using standard MRP methods.

What next?

There's little doubt that over the coming five years we'll see pollsters refining their methods in response to these challenges. As in the aftermath of 2015, when shy Tories threw a spanner in the works, we'll no doubt see new adjustments for the 'don't know' party and improvements to turnout models. But reality has an awful habit of changing, and the next election will no doubt see another flurry of articles like this one published.

One thing likely to prove particularly irritating to those predicting results next time will be the wafer-thin majorities in a number of seats. Of the 650 seats on offer, 255 have majorities of less than 5,000. Hendon and Poole, both Labour seats, have majorities of less than 20! With such a high preponderance of marginals at the next election, local races will become even harder to predict, putting increased pressure on parties to run strong local campaigns - something present models find extremely difficult to capture.


Graph 3: Parliamentary constituencies by size of majority in the 2024 general election


Unexpected swings in these seats could create big changes come another general election - 130 of Labour's current seats are marginals with sub-5,000 majorities. Were two thirds of those (roughly 87 seats) to swing to another party, Labour would drop below the 326 seats needed for a Commons majority, eliminating it entirely. In seats this tight, even modest prediction errors among pollsters could produce some more big misses in their forecasts.
