Were the Opinion Polls really wrong about the result of the General Election?
A week before the election the media had two favourite stories. One was that Ed Miliband was stumbling towards No 10 with the help of the SNP. The other was that the election was on a knife edge – too close to call – and that the consequence would be another coalition government.
Since the election campaign got going after the New Year, there were three key questions that everybody was looking to the opinion polls to answer:
Could the Tories reach ‘crossover’ and establish a lead over Labour in the polls?
Would there be a ‘late swing’ that would see the Tories put daylight between themselves and Labour?
Would there be sufficient ‘daylight’ to allow David Cameron to win enough seats to return him to Downing Street?
We know the answer now, of course, and since then everyone has been asking: “Why did the polls get it so wrong?”
However, analysis of those polls indicates that the final result was there to be seen. But too few people were looking properly for it.
When robust statistical reliability is essential, we have long warned against reliance on the self-declared demographics of online panels. Many of the polls used established panels as their research methodology. There is no real check on how people describe themselves, or on whether they have more than one identity. And respondents select themselves based on their interest and the incentive payment offered.
So let us strip away the polling figures from panel surveys and concentrate on what we consider to be the more reliable telephone or face-to-face interviewing. It would appear that the ‘crossover’ from Labour to Conservative had already happened; in fact it may well have happened much earlier in the year. By a week before Polling Day, the Conservatives had a lead in these polls of nearly 3%. Significantly, this lead had grown over the previous couple of weeks, which in turn indicated the much-anticipated (by the Tories) ‘late swing’.
Which brings us to the final question: “Would it be enough?”
Was Cameron going to get a big enough lead to see him safely back to No 10? Again, all the evidence, drawing on polls for previous elections, was pointing strongly to the answer “Yes he can”.
Most political opinion polling organisations state that an incumbent candidate seeking re-election in a constituency has an inbuilt advantage over their rivals equivalent to about a 1% lead. The Conservatives’ own polling showed that they were holding about a 2% lead in key marginals; add in the incumbency factor and that effectively became a 3% lead. This also provides further evidence that ‘crossover’ had already happened.
The next key question is how you should deal with the ‘undecideds’ in the overall analysis. Figures showed that right up to the election around 1 in 4 voters had yet to make up their minds, higher than usual for a UK general election.
One method is to ignore them, on the basis that they will generally vote in the same proportions as those who have already declared their intentions. One theory, however, suggested that they were in fact ‘shy Tories’ who would all vote Conservative. Another held that they would split 3:1 to the Tories, although Lord Bell opined that the split would be 2:1 in the Tories’ favour. Whichever approach you take, most would be expected to vote Tory, possibly enough to give another 1 or 2 percentage points of advantage to the Conservatives, for a total lead of about 4% in the polls. The strong evidence for this, of course, eventually came with the exit poll (face-to-face interviewing!), where there were no undecideds.
The other ‘known unknown’ in this analysis was what would happen to the UKIP vote. The party was standing at about 15% nationally in the polls. Traditionally, smaller-party ‘protest’ votes have tended to disperse at general elections. So, despite UKIP’s win at the European Elections the previous year, its vote was unlikely to hold up in areas where there was no strong possibility of UKIP winning. Previous analysis suggested that supporters drifting away from UKIP would return to their natural political ‘homes’ in a proportion of about 3:1 to the Conservatives, equivalent to another 2 or 3 percentage points for the Tories.
So, one week before the election: a poll lead of about 2% (excluding panels), another 1% for incumbency, another 1% or 2% from the undecideds, and a final 2% or 3% from the waning UKIP vote. A forecast of a Conservative lead of between 6% and 7% was there for the taking.
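The arithmetic above can be sketched in a few lines. The component figures are those quoted in this article; the labels and the simple low/high summation are illustrative assumptions, not a formal polling model:

```python
# Back-of-the-envelope forecast of the Conservative lead over Labour,
# one week before polling day. Each component is a (low, high) range
# in percentage points, taken from the figures quoted above.
components = {
    "phone/face-to-face poll lead": (2, 2),  # 'about 2%' excluding panels
    "incumbency advantage":         (1, 1),
    "undecideds breaking Tory":     (1, 2),
    "waning UKIP vote, 3:1 drift":  (2, 3),
}

low = sum(lo for lo, _ in components.values())
high = sum(hi for _, hi in components.values())

print(f"Forecast Conservative lead: {low}% to {high}%")  # 6% to 8%
```

Summing the low ends of each range reproduces the 6% figure; taking the high ends stretches the forecast slightly beyond the 7% quoted above.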
The thirst of the media organisations for quick turnaround and fresh information to feed their 24-hour news appetites may therefore have produced the very situation that whipped those same organisations into such a frenzy. A more considered approach, looking especially at those polls using a strong methodology such as face-to-face or telephone interviewing – if only they had found the time – might have given them the scoop on the final result that they craved.
What can researchers and users of research learn from all this which will help produce better value for money from research investment?
Good telephone interviewing can produce the quality of data required, but we would caution about the increasing use of mobiles as against landlines, especially amongst the younger age groups. Inevitably, a telephone sample without access to mobile numbers will exclude a sizeable proportion of the population. Even with mobiles there is a major problem in linking a mobile phone to a specific constituency or area: a company mobile may be registered at a London head office while the actual user could be anywhere in the UK, making control of sampling and quotas a major challenge.
The more exacting the quota, the more robust the data. So the final major point is that the quota set for a survey influences the findings. A basic quota of age, gender, social grade and region will suffice for basic data, but you may well need to add ethnicity, educational background and voting history if you want to know whether you are at that 6% point!
One major advantage of face-to-face interviewing – the good old street interview – is the ability of interviewers consistently to hit stringent quotas. This was how opinion polls used to be carried out, of course, but the thirst for immediate feedback (for the 24-hour news channels) has pushed this methodology into the background. Maybe, just maybe, some good old-fashioned personal interaction might have delivered the accuracy that the news channels and the political parties craved. Getting data is one thing; analysing and understanding it is another.
Doing so might just have saved Labour spending £25k on the Edstone!