May 9, 2015

Are you embarrassed about your election polls?

By Paul Laughlin

What a night the 2015 general election was!

There were surprises for a number of parties, with the shock of the exit polls coming right at the start of the night. Such was the disbelief at first that Paddy Ashdown now has a hat to eat.

As results began to come in & those exit polls were vindicated, it became clear that the polling during the campaign had got it badly wrong. When as experienced a pollster as Peter Kellner of YouGov is left with ‘egg on his face’ in the BBC studio, something needs to change. This is especially true given the consistency across all the campaign polls: no agency called this one right.

Peter admitted on the night that polls during the campaign had got it badly wrong, conceding: “We are not as far out as we were in 1992, not that that is a great commendation.” As for the cause, when pushed he simply stated: “What seems to have gone wrong is that people have said one thing and they did something else in the ballot box.” But this shouldn’t be a surprise to anyone, let alone major research firms.

Over recent years most research firms have faced the challenge of behavioural psychology evidence and populist critiques like “Consumerology“. Most have managed to argue effectively that research can still be designed & interpreted to take account of unconscious irrational bias: that is, the predictable behavioural biases people exhibit and the frequent disconnect between what people say & what they do. However, this more nuanced approach to research does not appear to have been much in evidence during the 2015 election campaign polling.

Praise is due to those conducting the exit polls. Their selection of polling stations, weighting of that sample & attribution of votes to seats were clearly more accurate. Plus, as Anthony Wells of YouGov muses, there may be something to learn about capturing likelihood (exit polls are post-event, so they don’t need to worry about either likelihood to change your vote or likelihood to vote at all).
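
For anyone who wants to make that weighting step concrete, here is a minimal sketch in Python of post-stratification: adjusting a raw sample so its demographic mix matches known population proportions before estimating vote shares. The age bands, population shares and responses below are all invented for illustration; real exit poll weighting (and the attribution of votes to seats) is considerably more sophisticated than this.

```python
# A minimal sketch (not any agency's actual method) of post-stratification
# weighting: adjust a raw sample so its demographic mix matches known
# population proportions, then estimate party vote shares.
# All figures below are made up for illustration.

from collections import defaultdict

# Hypothetical respondents: (age_band, stated_vote)
respondents = [
    ("18-34", "Labour"), ("18-34", "Labour"), ("18-34", "Conservative"),
    ("35-54", "Conservative"), ("35-54", "Labour"), ("35-54", "Conservative"),
    ("55+", "Conservative"), ("55+", "Conservative"), ("55+", "Labour"),
    ("55+", "Conservative"),
]

# Hypothetical population proportions by age band (e.g. from the census)
population_share = {"18-34": 0.28, "35-54": 0.35, "55+": 0.37}

# Proportion of each age band in the raw sample
sample_counts = defaultdict(int)
for age, _ in respondents:
    sample_counts[age] += 1
sample_share = {age: n / len(respondents) for age, n in sample_counts.items()}

# Each respondent gets weight = population share / sample share for their band
weights = {age: population_share[age] / sample_share[age] for age in sample_counts}

# Weighted vote share estimate
vote_totals = defaultdict(float)
for age, vote in respondents:
    vote_totals[vote] += weights[age]
total_weight = sum(vote_totals.values())
for party, w in sorted(vote_totals.items()):
    print(f"{party}: {w / total_weight:.1%}")
```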

I’m certainly not clever enough to have the answer here & it might be more interesting to simply spark a debate amongst you Customer Insight leaders on this topic. However, I offer these possible factors for consideration by those who specialise in this area of research:

1) Learn from the exit polls: what could be improved in the attribution of research sample evidence to both votes &, crucially, the number of seats?

2) As a number of newspapers have commented, many campaign surveys did correctly capture people’s views on leadership & the economy. On both of these questions Cameron scored much higher. So part of the trick may be to design questions that capture the drivers of voting preference, rather than just asking people how they will vote, and then trust the answers to the former not the latter (see the first sketch after this list).

3) Behavioural biases must be taken into account more explicitly. A number could be at play here, including a greater reluctance to mislead when asked about an action (an exit poll on how you voted) than when merely theorising (how will you vote?). But research design & interpretation also need to consider the effects of status quo bias (towards the incumbent party, especially among the undecided), the framing of questions & responses to apparent social norms, the ‘Shy Tory’ factor (see the second sketch after this list).
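
To illustrate the second point, here is a minimal sketch (in Python, using scikit-learn, with wholly invented numbers) of what ‘trusting the drivers rather than the stated intention’ could look like: fit a simple model of actual vote on the leadership & economy ratings from a past wave, then apply it to current respondents. It is a toy example, not any agency’s method; in practice such a model would need far richer predictors and proper validation.

```python
# A minimal sketch, assuming a survey that captured both stated voting
# intention and ratings on the "driver" questions (preferred leader,
# party trusted on the economy). The idea: fit a simple model of actual
# vote on the drivers from a past election, then apply it to current
# respondents instead of trusting stated intention alone.
# All data below is invented for illustration.

import numpy as np
from sklearn.linear_model import LogisticRegression

# Past-wave respondents: [leadership rating for incumbent 0-10,
#                         economy rating for incumbent 0-10]
X_past = np.array([
    [8, 7], [7, 8], [9, 6], [6, 7], [7, 7],   # mostly voted incumbent
    [3, 4], [2, 3], [4, 2], [3, 3], [5, 4],   # mostly voted challenger
])
# 1 = actually voted for the incumbent party, 0 = did not
y_past = np.array([1, 1, 1, 1, 0, 0, 0, 0, 0, 1])

model = LogisticRegression().fit(X_past, y_past)

# Current respondents: some say they are undecided or claim they will
# vote for the challenger, yet rate the incumbent highly on the drivers.
X_now = np.array([[7, 8], [4, 3], [6, 6]])
stated_intention = ["Undecided", "Challenger", "Challenger"]

for stated, prob in zip(stated_intention, model.predict_proba(X_now)[:, 1]):
    print(f"stated: {stated:<10}  modelled P(votes incumbent) = {prob:.2f}")
```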
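
And for the third point, one well-known adjustment adopted by some pollsters after the 1992 ‘Shy Tory’ miss is to reallocate a share of “don’t know” and “refused” answers to the party the respondent recalls voting for last time. The sketch below, again with invented data and an assumed reallocation share, shows the mechanics only; it is not a recommendation of any particular share.

```python
# A minimal sketch of reallocating "don't know"/"refused" answers to the
# respondent's recalled past vote, one classic response to the 'Shy Tory'
# problem. Data and the reallocation share are invented for illustration.

from collections import Counter

# (stated intention, recalled past vote) -- invented data
responses = [
    ("Conservative", "Conservative"), ("Labour", "Labour"),
    ("Labour", "Labour"), ("Conservative", "Conservative"),
    ("Don't know", "Conservative"), ("Don't know", "Conservative"),
    ("Don't know", "Labour"), ("Refused", "Conservative"),
    ("Labour", "Liberal Democrat"), ("Conservative", "Labour"),
]

REALLOCATE_SHARE = 0.5  # assume half of non-answers revert to their past vote

headline = Counter()
for stated, past in responses:
    if stated in ("Don't know", "Refused"):
        headline[past] += REALLOCATE_SHARE   # partial reallocation to past vote
    else:
        headline[stated] += 1.0

total = sum(headline.values())
for party, score in headline.most_common():
    print(f"{party}: {score / total:.1%}")
```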

My last thought is that this endeavour matters for the research industry and for ‘in house’/client-side research teams alike. Public perception of the reliability of consumer research is affected by negative PR like that currently flooding the media. For the sake of a discipline already undervalued compared to some of the “big data“/”data scientist” snake oil on sale, we need to improve.

What are your thoughts? What else might improve this polling? Does it need to be captured in a totally different way?