Pitfalls and publicity of public research
How reliable is public research? Can you trust those government- or media-covered surveys?
In the same week as we shared some examples of public segmentations, the BBC drove plenty of media coverage on the results of its latest “Happiness Test” research. Beautifully presented within its iWonder section, it captured the public imagination, especially in terms of evidence for traditional stereotypes in some areas of the country.
On the surface there was much to praise here. At a time when research risks being neglected by those more interested in Big Data and predictive analytics, it is great to have such high-profile coverage for a large survey.
Sharing their data, and visualising it in a way that’s accessible to the general public, is the kind of data reciprocity and PR that public research really needs. It also captures the media & public interest, because it’s fun: a “where should I live?” game.
To appreciate the rest of this post, I suggest you take the test: Where in Britain would you be happiest?
The predictions made in this test are based on research by scientists at the universities of Cambridge and Helsinki. In a collaboration with BBC Lab UK in 2009, they conducted The Big Personality Test – a survey in which over half a million people completed a questionnaire about their personality traits and life satisfaction.
Promptly after this, however, a useful critique was published on the research-live site. It first praises the elements I mentioned above, but goes on to list several mistakes in the approach that are common to many well-publicised pieces of public research:
- The level of granularity at which it needs to report suffers from averaging (obscuring significant differences within regions).
- Causation is not proven (are these traits a result of where people live or do people choose to live there because of their personality?)
- Self-selection for participation will introduce bias (no evidence that factors like age, class & education have been weighted out).
- Timing may also have an effect (macro-economic & ‘wellbeing’ changes since the data were captured in 2009).
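The first of those points – averaging obscuring significant differences within regions – is easy to demonstrate. Here is a minimal sketch; the sub-area names and life-satisfaction scores are invented purely for illustration, not taken from the BBC data:

```python
# Hypothetical life-satisfaction scores (0-10) for two sub-areas of a
# single reporting region. The figures are invented for illustration.
from statistics import mean

sub_area_scores = {
    "Coastal town": [8.1, 7.9, 8.3, 8.0],   # consistently happy
    "Inner city":   [4.0, 4.2, 3.8, 4.1],   # consistently unhappy
}

# The two sub-areas differ by roughly four points...
for area, scores in sub_area_scores.items():
    print(f"{area}: {mean(scores):.2f}")

# ...but the single regional figure sits blandly in the middle,
# hiding the split entirely.
all_scores = [s for scores in sub_area_scores.values() for s in scores]
print(f"Region overall: {mean(all_scores):.2f}")
```

A region-level map coloured by that overall mean would tell you nothing useful about either sub-area, which is exactly the granularity problem the critique raises.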
These are important points to consider, and ones that are often overlooked not just in publicly shared research but also by in-house teams in a hurry. However, I share the wise conclusion of this well-written critique from Ian Carruthers: “Despite these misgivings, this is great fun. And, frankly, not enough research is. It puts insight into the hands of individuals and gets them talking about it”.
Here is his Research-Live critique in full:
Take the test: Does where we live in Britain make us happy?
Outside of being a gift to estate agents (if you’re loud and intolerant, Slough is for you), what merit does this survey have? It’ll make Smug of Tunbridge Wells smugger, Depressed of South Shields more depressed. Does it show that like-minded people are drawn to certain areas, or does living in a certain area affect your personality?
Hope that’s helpful. Do let us know of any examples you’ve seen where publicly shared research falls into similar pitfalls, or great examples which avoid them.