At our internal meetings at Edison Research, we regularly find ourselves letting loose with the same sarcastic comment: “Why do we bother doing research right when journalists seemingly cover anything that calls itself a survey the same way?”
The answer is, of course, that we only know one way to do research, and we are not willing to cut corners, employ sketchy samples, or, most especially, attempt to draw broad conclusions from unreliable numbers just to get published. We have tried, with mixed success, to educate reporters to adhere to higher journalistic standards when it comes to research releases (and here are 20 great questions you should ask whenever you report data). But even if they don't, we will still publish our data only when we know it was sampled, asked, and analyzed correctly, and that it is reported precisely.
If you see public data from an Edison study, you can be confident it was performed to the highest technical research standards.
Which brings us to the 2014 Infinite Dial study we recently released. As you can see if you download the survey graphs, the very first thing you find is a methodology statement. When you look at research, this is the first thing you should consider – just how was this research performed?
Let’s walk through our one-page statement and consider what it is trying to tell you:
· The study was ‘national’ in scope – that is, every household in America with either a landline or cell phone was eligible to participate.
· We collected data via telephone. Now some people might consider this outdated, but telephone data collection remains the ‘gold standard’ for projecting the outcome of a survey to the national population. As our study shows, there is still a sizeable portion of the population without Internet access at home. Amazingly – we see studies published about Internet usage – collected over the Internet – that don’t even consider the bias baked into that construct.
· To achieve our sample, we employed random digit dialing, to both landlines and cell phones, so as to reach listed and unlisted numbers, and cell-phone-only households. Numbers were called up to six times each.
· We interviewed in both English and Spanish. One cannot accurately reflect the American population without offering Spanish interviews.
· Once we reached a household, we randomized the respondent within the household to eliminate any bias created by who answered the phone.
· Upon completing our interviews, we weighted the data to known US Census figures for age, sex, and ethnicity. This is a standard research practice that is required to reflect the population.
· We plainly named the sponsor of the survey – in this case Triton Digital.
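The weighting step described above can be sketched in a few lines of code. This is purely an illustrative post-stratification example, not Edison's actual procedure or real figures: each respondent's weight is the population share of their demographic cell divided by that cell's share of the completed sample. All shares and counts below are made up for the illustration.

```python
# Illustrative post-stratification weighting sketch (hypothetical numbers,
# not Edison's actual code or data).
from collections import Counter

# Hypothetical census population shares by age group
population_share = {"18-34": 0.30, "35-54": 0.35, "55+": 0.35}

# Hypothetical completed interviews by age group
# (phone samples often skew older, so "55+" is overrepresented here)
respondents = ["18-34"] * 200 + ["35-54"] * 350 + ["55+"] * 450

sample_counts = Counter(respondents)
n = len(respondents)

# Weight = population share / sample share for each cell
weights = {
    group: population_share[group] / (sample_counts[group] / n)
    for group in population_share
}

for group, w in weights.items():
    print(f"{group}: weight = {w:.2f}")
# 18-34: weight = 1.50  (underrepresented, so weighted up)
# 35-54: weight = 1.00
# 55+:   weight = 0.78  (overrepresented, so weighted down)
```

In practice, weighting is done across several variables at once (age, sex, and ethnicity in our case), but the principle is the same: underrepresented groups count for more, overrepresented groups for less.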
The sample size for the Infinite Dial survey was 2,023. This is a more-than-adequate number to project the total findings to the US population. (Most pre-election national polls, which are often astonishingly accurate, have a sample size of around 1,000.) We employ this larger sample size in order to allow us to break the data down into subgroups and maintain sufficient sample sizes for projection.
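A quick back-of-the-envelope calculation shows why sample sizes in this range are adequate. For a simple random sample, the maximum margin of error at 95% confidence shrinks with the square root of the sample size (real designs with weighting and phone sampling carry somewhat larger effective margins, so treat these as idealized floors):

```python
# Maximum margin of error for a proportion (worst case p = 0.5),
# assuming a simple random sample at 95% confidence (z = 1.96).
import math

def margin_of_error(n, z=1.96):
    """Largest possible margin of error for a proportion from n interviews."""
    return z * math.sqrt(0.25 / n)

for n in (1000, 2023, 500):
    print(f"n = {n:>5}: +/- {margin_of_error(n) * 100:.1f} points")
# n =  1000: +/- 3.1 points
# n =  2023: +/- 2.2 points
# n =   500: +/- 4.4 points
```

The n = 500 row shows why the larger overall sample matters: once you break 2,023 interviews into subgroups, each subgroup's margin of error grows, and you need enough interviews in each cell to keep projections meaningful.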
Those who are not familiar with polling techniques will sometimes look at a sample size of 1,000, or even our 2,023, and think it ‘small.’ They are used to hearing about the millions of votes on American Idol or seeing huge numbers from polls posted on Web sites, and they fall into the trap of thinking that one needs these gigantic numbers to reflect a population.
In point of fact, it is not the size of the survey but the quality of the sample that matters in projecting a finding to a population. People sometimes assume that polls with enormous sample sizes simply ‘must be right.’ In fact, many Internet-based polls are wildly biased in the construction of their samples, especially if anyone can respond. These are called ‘convenience samples,’ and they should be treated with significant skepticism.
To wit, one favorite example. In 1999 Time Magazine opened up the question of who should be their ‘Person of the Century’ to a poll on their website. They emailed their subscribers and touted the poll in the magazine. The overwhelming winner was Mustafa Kemal Ataturk – founder of modern Turkey. Huh? Not Winston Churchill, not Albert Einstein, not Franklin Roosevelt…but Mustafa Kemal Ataturk? By a mile? And the ‘sample size’ was in the millions. How could it be wrong?
It turns out, of course, that the people of Turkey found out about this poll and decided to ensure that their 20th Century hero won the poll. Time was forced to rescind its ‘handing the question to the people’ promise. There are many similar examples.
But the point is that for Internet data collection, the sample frame is crucial. (And as an aside, such polls will often reference their ‘samples’ – when in fact they likely haven’t sampled at all.) To project data to a population, you need to have taken the kinds of steps we take in the Infinite Dial.
But there is also the other end of the spectrum: sample sizes that are too small. We often see researchers attempt to draw broad conclusions from vastly too-small numbers. You, as a research consumer, should demand to know the sample sizes behind any report.
And even after we do everything described above to collect and produce our information, there is the process of analyzing and reporting. Any survey yields countless ways of looking at the data, and there are, of course, subjective judgments involved in deciding which to emphasize. With the Infinite Dial, as with all our public releases, we attempt to be transparent and honest. We think we are fair in our choices, but we know that no two analysts would make exactly the same ones.
In total, thousands of Edison Research staff hours go into making the Infinite Dial, like all our public releases, as high-quality as it can possibly be. While the numbers in our reports are estimates, we believe you, as a consumer of our research, can be confident that these estimates are derived by using the highest-quality research techniques.