Back from the election
I’ve just got back from the BBC after working all night (you may have seen my bald spot sat just to the left of Emily Maitlis’s big touchscreen last night) and am about to go and put my feet up and have a rest – I’ll leave other thoughts on the election until later in the weekend or next week, but a few quick thoughts about the accuracy of the polls.
Clearly, they weren’t very accurate. As I write there is still one result to come, but so far the GB figures (as opposed to the UK figures!) are CON 38%, LAB 31%, LDEM 8%, UKIP 13%, GRN 4%. Ten of the final eleven polls had the Conservatives and Labour within one point of each other, so essentially everyone underestimated the Conservative lead by a significant degree. More importantly, in terms of perceptions of polling, the polls told the wrong story – when I was writing my preview of the election I wrote about how an error in the Scottish polling wouldn’t be seen so negatively because there’s not much difference between “huge landslide” and “massive landslide”. This was the opposite – there is a whole world of difference between polls showing a hung Parliament on a knife edge and polls showing a Tory majority.
Anyway, what happens now is that we go away and try and work out what went wrong. The BPC have already announced an independent inquiry to try and identify the causes of error, but I expect individual companies will be digging through their own data and trying to work out what went wrong too. For any polling company, there inevitably comes a time when you get something wrong – the political make-up, voting drivers and cleavages of society change, and how people relate to surveys changes. Methods that work at one election don’t necessarily work forever, and sooner or later you get something wrong. I’ve always thought the mark of a really good pollster is someone who puts their hands up to the error, says they’ve messed up and then goes and puts it right.
In terms of what went wrong this week, we obviously don’t know yet; certainly I wouldn’t want to rush to any hasty decisions before properly looking at all the data. There are some things I think we can probably flag up to start with, though:
The first is that there is something genuinely wrong here. For several months before the election the polls were consistently showing Labour and Conservative roughly neck-and-neck. There were individual polls showing larger Conservative or Labour leads, and some companies tended to show a small Labour or small Conservative lead, but no company consistently showed anything even approaching a seven point Conservative lead. The difference between the polls and the result was not just random sample error; something was wrong.
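To put a rough number on why this can’t just be random sample error, here is a minimal back-of-the-envelope simulation (my own sketch, not the author’s calculation – the ~34% vote shares and the sample size of 1,000 are illustrative assumptions): if the two parties really were tied, a seven point lead in any single poll would be a roughly three-sigma fluke, and every final poll landing there at once would be vanishingly unlikely.

```python
import random

# Assumption for the sketch: a true Con/Lab tie at ~34% each, and polls of
# n = 1,000 respondents. How often does pure sampling noise produce a
# Conservative lead of 7+ points?

def simulate_poll(rng, n=1000, con=0.34, lab=0.34):
    """Draw one simulated poll and return the Con-minus-Lab lead in points."""
    con_count = lab_count = 0
    for _ in range(n):
        r = rng.random()
        if r < con:
            con_count += 1
        elif r < con + lab:
            lab_count += 1
    return 100 * (con_count - lab_count) / n

rng = random.Random(2015)
leads = [simulate_poll(rng) for _ in range(2000)]
big_misses = sum(1 for lead in leads if lead >= 7)
print(f"Simulated polls showing a 7+ point Con lead: {big_misses} of {len(leads)}")
```

The standard error of the lead in a poll this size is around 2.6 points, so a seven point lead from a true tie is a rare event for one poll – and eleven independent final polls all sitting within a point of a tie, against a true seven point lead, is far beyond what sampling noise alone can explain.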
I don’t think it was a late swing either. YouGov did a re-contact survey on the day and found no significant evidence of this. I think Populus and Ashcroft did some on-the-day work too (though I don’t know if it was a call-back survey), so as the inquiry progresses other evidence may come to light, but I’d be surprised if any survey found enough people changing their minds between Wednesday and Thursday to create a seven point lead.
Mode effects don’t seem to be the cause of the error either, as the final polls conducted online and the final polls conducted by telephone produced virtually identical figures in terms of the Labour/Conservative lead (though as I said on Wednesday, they were different on UKIP). In fact, having a similar error in both telephone and online polls is evidence against some other possibilities too – unless by freakish coincidence unrelated problems with online and telephone polling produced almost identical errors, it means things that only affect one mode are unlikely to have been the cause. For example, if the problem was caused by more people using mobile phones, it shouldn’t have affected online polls. If the problem was caused by panel effect, it shouldn’t have affected phone polls.
Beyond that there are some obvious areas to look at. Given that the pre-election polls were wrong but the exit poll was right, how pollsters measure likelihood to vote is definitely worth looking at (exit polls obviously don’t have to worry about likelihood to vote – they only interview people physically leaving a polling station). I think differential response rates are worth examining (“shy voters”… though I think enthusiastic voters are just as much of a risk!), and the make-up of samples is obviously a major factor in the accuracy of any poll.
And of course, it might be something completely unrelated to these things that hasn’t crossed our minds yet. Time will tell, but first some sleep.