Why are pre-election polls important?
There are lots of reasons why people want to know the likely outcome of elections. From a democratic perspective, polls can enable people to vote strategically, which is particularly useful in the ‘first past the post’ system we have here in the UK. The main focus in the media is on the ‘horse race’ (ie who will win the election), but there are also interesting questions about what people think of the leaders, the proposed policies and how the economy would do under one party rather than another, all of which tend to frame the debate as the election gets closer.
What was the problem with the polls leading up to last year’s general election?
The opinion polls carried out the day before the 2015 general election consistently pointed to a dead heat. When the Conservatives beat Labour by a seven-point margin, concern was expressed from many quarters – MPs, journalists and the polling industry itself – at this discrepancy. Polling is a huge industry and these pre-election opinion polls are extremely high profile: they are the way in which the public most obviously connects with surveys. So the day after the election, the British Polling Council and the Market Research Society decided to establish an inquiry into what went wrong, and they asked me to chair it.
Why were you approached to chair the inquiry?
Here at Southampton we are one of the leading centres for survey statistics in the UK and internationally, with a long track record of research and training in statistical methodology for surveys.
We have established strong links with the Office for National Statistics over many years, as well as many data collection agencies in the commercial sector, which has enabled us to apply our research to real-world settings.
The first thing I did after being appointed chair was set up a team of experts to assist me, including Professor Will Jennings from Politics and International Relations, who brings a lot of expertise on polling and party systems throughout the world.
How did you investigate what went wrong?
With surveys and election polls, there is a finite number of things that can go wrong; we considered all of these in our inquiry. For example, one possible cause was ‘late swing’ – voters changing their minds at the last minute; another was ‘differential turnout misreporting’ – when supporters of one party are more likely than supporters of another to tell a pollster they will vote and then fail to turn out on the day. Even the way in which the questions were worded could affect the results. We looked at each of these potential causes in turn and found little or no evidence that any of them was the main cause of the error. That left one big potential cause standing, which we have called ‘unrepresentative samples’: the way the pollsters gather and adjust their sample data systematically over-represented Labour voters and under-represented Conservative voters.
How could this have happened?
Polling agencies gather their data using a procedure called quota sampling to make their sample represent the voting population. Respondents complete questionnaires, either online or over the phone, and the pollsters then weight the sample so that it matches the national distribution of voters by age, gender and race within each region. We have identified this as a weakness in their methodology: it is quite a strong assumption that weighting in this way will adjust for all the variables that matter.
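To make the idea concrete, here is a minimal sketch in Python of the kind of cell weighting this approach relies on. The age bands, population shares and respondent answers are invented for illustration – they are not taken from any actual poll or agency’s methodology.

```python
# A minimal sketch (not any polling agency's actual code) of weighting a raw
# sample to match known population shares for one variable (age band).
# The same idea extends to several variables at once (e.g. gender and region).

import pandas as pd

# Hypothetical raw sample of respondents: age band and stated vote intention.
sample = pd.DataFrame({
    "age_band":  ["18-34", "18-34", "35-54", "35-54", "55+", "55+", "55+", "55+"],
    "intention": ["Lab",   "Lab",   "Con",   "Lab",   "Con", "Con", "Lab", "Con"],
})

# Known population shares for the weighting variable (assumed figures for illustration).
population_share = {"18-34": 0.30, "35-54": 0.35, "55+": 0.35}

# Weight = population share / sample share, so over-represented groups are down-weighted.
sample_share = sample["age_band"].value_counts(normalize=True)
sample["weight"] = sample["age_band"].map(
    lambda band: population_share[band] / sample_share[band]
)

# Weighted vote shares: the headline figures a poll would report.
weighted = sample.groupby("intention")["weight"].sum() / sample["weight"].sum()
print(weighted.round(3))
```

The weights only correct the sample on the variables used for weighting; if the people a pollster fails to reach differ from those it does reach in some other way that matters for vote choice, weighting cannot fix that – which is the strong assumption referred to above.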
The fact that all the polls were wrong in the same way – underestimating the Conservative lead over Labour – could have something to do with a phenomenon known as ‘herding’. This is where the polls give results that are more similar to one another than would be expected given normal sampling variability. It arises when pollsters make adjustments to their raw data in ways that tend to pull their results towards a group consensus.
How could these problems be overcome?
When we publish our full report next month we will be making recommendations for how the pollsters might improve their samples and weighting procedures. But if they stay with the same kind of approach they currently use – which they are likely to do, because there aren’t many alternatives that are feasible in terms of cost and timeliness – then nothing can guarantee that this kind of error will not happen again.
So should we trust the polls?
The polls are a useful tool, but we need to be more aware of the uncertainty in the estimates they produce.
People tend to endow polls with more accuracy than they are capable of providing.
These types of estimates are subject to various kinds of error and, looking at the historical record, they are wrong reasonably often. There was an inquiry into what happened at the 1992 general election in the UK, there have been inquiries in the US, and there is one going on in Poland at the moment, so this is certainly not unique to the UK political system or polling industry; polling misses happen across the world.
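As a rough guide to how much uncertainty even an ideal poll carries, here is a simple sketch of the textbook 95% margin of error for a single estimate, assuming a simple random sample; the figures are purely illustrative, and real quota-based polls are subject to additional, harder-to-quantify errors on top of this.

```python
# Textbook 95% margin of error for a single poll estimate, assuming a simple
# random sample -- real quota samples typically carry extra error beyond this.

import math

def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
    """95% margin of error for an estimated proportion p from a sample of size n."""
    return z * math.sqrt(p * (1 - p) / n)

# Illustrative figures, not taken from any specific poll:
# a party polling at 34% in a sample of 1,000 respondents.
moe = margin_of_error(0.34, 1000)
print(f"+/- {moe:.1%}")   # roughly +/- 2.9 points, before any non-sampling error
```

Even under this idealised assumption, a lead of two or three points between parties is well within the noise, which is part of why the headline figures deserve more caution than they usually get.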
What about the EU referendum polls – is there scope for the same errors to occur?
An EU referendum is very different from a general election because it doesn’t break cleanly along party lines. Referendums are also less frequent, and each one is unique, so they are certainly harder to get right. What’s interesting about the EU referendum polls at the moment is that the phone polls are showing a substantial lead for remaining in the EU while the online polls are showing a tie, or a lead for leaving. They can’t both be right, so there is something about the methodology that will come to a head as the campaigns continue – I will be watching with interest.