Company News · July 11, 2005

50,000,000 Survey Respondents Can Be Wrong!

By Rob Farbman, Senior Vice President, Edison Media Research


Many people are skeptical of surveys with small sample sizes – and rightfully so. At the same time, surveys with very large sample sizes often get a pass – with consumers assuming that with so many interviews “it has to be right”. The idea that “more is always better” is just one of many common misconceptions about polling. The fact is that the only way to evaluate the quality of a survey is to know some basic facts about how it was conducted.

The radio and music trade publications are frequent consumers of research data, regularly reporting on survey releases and data analyses. While we are glad the industry has a forum to learn and share information, many of these news stories fail to provide the information necessary to evaluate the results of the research. Often a study will be presented with only sketchy details such as “from a survey of 400 people” – with no information on who the respondents were or how they were selected and interviewed. While this lack of detail could be due to an oversight by the writer, it is more often the result of the researcher not disclosing any background information on the data being released.
The National Council on Public Polls (NCPP) is an association of polling organizations that strives to set high professional standards for public opinion pollsters. NCPP has issued a “Statement of Disclosure” that it recommends for all publicly released polls (http://www.ncpp.org/disclosure.htm). These rules are not only important for reputable researchers to follow; they are also extremely important for journalists to be aware of when reporting on surveys. The first five of these guidelines for releasing public polls are listed below, with some advice from Edison Media Research for the poll consumers out there:
1. The sponsor of the survey should always be revealed
It is a survey “fact of life” that data can be “spun” by the pollster – either through their choice of question wording or through their interpretation of the data. Biases are not always intentional; they can creep into a survey’s results even when the researcher has no obvious personal bias at all. So it is not hard to imagine how data might be skewed when the sponsor paying for and releasing the poll findings does have an agenda or a preferred conclusion in mind.
When reading a poll, make sure you understand who is paying for the survey. And for any conclusion it reaches, keep in mind whether the survey sponsor has “a horse in the race” in terms of how they want the results to be “spun”. If they do have an obvious reason to prefer a particular outcome, that doesn’t mean the survey is bad – but it does mean that the findings should be looked at with a more critical eye.
2. Dates of interviewing should be reported in the survey’s release
Survey “field times” vary widely, but since a survey’s typical goal is to provide a “snapshot” of public opinion, it is unusual to conduct interviews over a prolonged period of time.
If you see a survey released with an excessively long field period, it’s very possible that it was done that way to save on costs. Perhaps the organization releasing the study tacked questions onto the end of other research projects or music studies conducted over a long period of time, and then pieced the data together into a new “survey”. Again, this doesn’t mean the findings lack value – but it’s an important factor in evaluating the data and the survey’s conclusions.
3. The method of obtaining the interviews (telephone, in-person, Internet) needs to be explained
This may seem an obvious point, but if you start paying closer attention to press releases and news reports about surveys, you’ll be surprised how often this key bit of information is missing. Beware of polls, or news stories on polls, that fail to tell you this simple fact. While an Internet survey may be an appropriate methodology for certain goals, there are times when an online poll will have limitations – a poll consumer should be informed of the methodology used and what the data represents.
4. Explain what population was sampled
This bit of information is critical in evaluating a survey. If the population being sampled only includes people of a certain age, or members of a database, or even former music test respondents, that information is vital.
The most statistically reliable surveys use some form of random or probability sampling. Probability sampling ensures that each member of the population being surveyed has a known chance of participating – and in a simple random sample, an equal chance. Using proper sampling techniques isn’t cheap, so if a survey release fails to mention the use of these methods, there is a good chance they weren’t used.
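To make the idea concrete, here is a minimal sketch of a simple random sample drawn in Python – our illustration, not any pollster’s actual procedure, with a made-up listener list standing in for a real sampling frame:

```python
import random

# Hypothetical sampling frame: a complete list of the population the
# survey is meant to represent (here, ten thousand listener IDs).
population = [f"listener_{i}" for i in range(10_000)]

# Simple random sampling: random.sample gives every member of the
# frame the same chance of selection, with no self-selection involved.
respondents = random.sample(population, k=1000)

print(len(respondents), respondents[:3])
```

A self-selected Web poll skips this step entirely: whoever shows up is the sample, which is why it can only speak for the kind of person who volunteers.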
There are plenty of studies out there that use self-selected samples from Web sites and many of them offer a lot of great information. But these surveys are by definition not representative of anything other than people who would volunteer to participate. These self-selected respondents can also in some cases be more vocal and opinionated than the general population that the poll is intended to represent.
There are also studies that may use random sampling techniques but start from a population that is not itself random (a customer or listener database, for example). These studies can certainly have value as long as you view them in context – but the population sampled and the methodology used are critical factors in understanding the findings.
5. Report the size of the total sample and any smaller sub-samples
Using a correctly implemented probability sample, a survey of 1000 respondents can accurately reflect the opinions of the population of the United States with a sampling error of +/- 3.1%. Doubling the sample size to 2000 only lowers the potential sampling error to +/- 2.2%. So there is a point of diminishing returns in making the sample size larger and larger – which is why we see so many national media and newspaper polls with samples in the vicinity of 1000 interviews.
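For readers who want to check those figures, they follow from the standard formula for the margin of error of a proportion at the 95% confidence level, assuming the worst case of a 50/50 split. A quick sketch in Python (the helper name is ours, purely for illustration):

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """95% margin of error for a proportion p, from a simple
    random sample of n respondents (worst case p = 0.5)."""
    return z * math.sqrt(p * (1 - p) / n)

print(f"n=1000: +/- {margin_of_error(1000):.1%}")  # +/- 3.1%
print(f"n=2000: +/- {margin_of_error(2000):.1%}")  # +/- 2.2%
```

Because the error shrinks with the square root of the sample size, quadrupling the number of interviews only halves the margin of error.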
It is surveys that do not use random sampling techniques (frequently Internet surveys) where we often see very large sample sizes. When a study trumpets a sample size in the tens of thousands, the first reaction may be “with so many interviews, how can it be wrong?” But as we noted earlier, sample size alone means very little when evaluating a poll.
In addition, while a survey may have 1000 total respondents, beware when you get down to the smaller segments within the data. For example, if a survey of 1000 people shows that 2% of its respondents subscribe to satellite radio, then a follow-up question on how much they enjoy the service would be based on a sample of only 20 people. The researcher might not even report data from such a small sample. But if they did decide to publish these findings, they should note the small sample size (and the resulting larger margin of sampling error).
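Running that 20-person sub-sample through the same illustrative formula shows just how wide the error bars become:

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    # Same worst-case 95% formula as in the sketch above.
    return z * math.sqrt(p * (1 - p) / n)

# Only 2% of 1000 respondents (20 people) answer the follow-up.
print(f"n=20: +/- {margin_of_error(20):.1%}")  # roughly +/- 21.9%
```

A finding quoted from that follow-up question could be off by more than twenty points in either direction – worth a footnote at the very least.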
With more educated poll consumers evaluating surveys with a discriminating eye, the quality of the research being released – and the value of the media reports on these surveys – can only get better.
