Company News · December 3, 2004

What’s Wrong with Focus Groups?

By Tom Webster, Edison Vice President

At the NAB in San Diego this year, I had the opportunity to attend a panel entitled “Research – Luxury or Necessity.” I went for a variety of reasons, but mostly to get a sense of the kinds of questions programmers had about research and its uses. That, and the title of the panel itself—I doubt that Procter & Gamble’s $150 million consumer research budget is viewed as a “luxury item.”


One thing I noted was a continuing lack of regard for that most humble of research tools, the focus group. In the early ’90s, I spent almost half of my time as a researcher conducting focus groups, interviews, and other strictly qualitative research projects. Many of the old Pyramid Broadcasting stations, for example, not only conducted regular, scheduled focus groups, but also budgeted for two to three days of individual interviews each year, which added a further layer of insight to our understanding of each station’s brand and resonance. In that decade, I probably moderated 500 focus groups and interviews. This decade—not so much. There has been a marked shift in our industry away from professionally moderated focus groups (and qualitative research in general) and toward folding qualitative questions into quantitative projects such as strategic studies (and, to a lesser extent, music tests). This may save a little money (and in the grand scheme of things, let me emphasize a little), but it is wholly inappropriate for identifying the kinds of issues that focus groups excel at unearthing.

There are many in the industry who claim that focus groups are misleading, not actionable, or flat-out unreliable. It is true that focus groups are completely useless for answering questions like “what,” “when,” or “how many.” Only a statistically reliable quantitative survey can provide that kind of decision support. Yet strategic studies are, by themselves, categorically incapable of answering the most crucial question—why. Now wait a minute, you might argue: of course a strategic study can answer why—all I have to do is ask a “why” question, provide a number of multiple-choice answers, and the “winner” is the right answer. But where did you come up with the choices? Most of the potential responses in such questions are provided by “brainstorming” sessions with programmers, consultants and, yes, researchers. Rarely does the voice of the customer drive this process. Without the insight that only a qualitative project can provide, it is entirely possible that your team might identify an issue, devise a possible explanation, confirm that explanation in a perceptual study—and be wrong.

Let me give you an example from the consumer products industry—which, by the way, spends 70% of its research budget on qualitative research. There is an apocryphal tale still told in the hallways of Betty Crocker (well, now General Mills) about the launch of their one-step cake mix in the 1950s. The mix was launched with great fanfare…and promptly flopped. How might they (and you!) have discovered the reason why? One way might be to brainstorm with your staff, consultants, and others to come up with a list of potential reasons. I came up with these four:

1. Tastes bad
2. Didn’t turn out right when I baked it
3. Packaging wrong
4. Not interested in package mix/only bake from scratch

You might come up with others. If you were to then field a survey and ask homemakers of the ’50s why they didn’t repurchase the cake mix after an initial trial, you would probably see a lot of answers 1, 2, and 4. Answer 3 might, in fact, be correct, but that is exactly the sort of thing only additional qualitative research could really determine. Answer 4 might be a popular choice—though considering that our sample had already bought the product once, it might be a trifle dishonest, and more aspirational/idealistic than real. These 1950s housewives, with their new electric mixers and prefab homes, were interested in timesaving convenience products; after all, this was the decade that gave birth to fast food. That leaves 1 and 2 as easy alternatives—if you can’t really be specific about what troubled you with this cake mix, it is all too easy to reply “it tasted wrong” or “it didn’t taste as good as my homemade cake.” If you were the marketing executive or consultant who suspected that the cake didn’t taste “homemade” enough, and you saw this hypothesis confirmed in a quantitative survey, you would assume you had a handle on the problem and go back to the drawing board with different recipes, more expensive ingredients, or different flavors.

Luckily for Betty Crocker, they did not follow this course. Instead, they conducted a series of focus groups (some of which involved observing women as they baked the cakes) before they did any quantitative research, just to make sure that the consideration set for the “why” questions was customer-driven, not internally generated. What they learned was fascinating: women felt that the mix was too easy—by simply combining the mix with water and throwing the lot in the oven, consumers took no pride in the results—and even felt a twinge of guilt when they presented the cakes to their doting families. Betty Crocker tested this hypothesis in subsequent research, of course, but they wouldn’t have known the right questions to ask without the focus groups. It is important to reiterate here that focus groups do not produce answers—at least, not the quantitatively definitive kind that programmers need. They do, however, ensure that we ask the right questions, and they provide critical insight into quantifiable results. Based upon this crucial psychological insight into their customers’ attitudes and behaviors, Betty Crocker altered the cake mix by removing the dried egg product and “requiring” the cook to crack an egg and beat it into the mix. This subtle change increased the “workload” of the cake mix, but made the process feel more like baking.

Focus groups, when done correctly, absolutely work as advertised. They are also extremely cost-effective. So why have they fallen into disfavor in the radio industry? I can assure you that the consumer products industry still conducts thousands of group-interview projects every year. Even companies that have decreased their budgets for traditional focus groups (P&G and General Electric, for example) have done so only to transfer that budget to online qualitative studies; in other words, they are still doing plenty of “groups,” they are simply doing them on the Internet instead of in shopping malls and office parks. Radio stations, however, are doing fewer groups than ever—which is ironic, because the focus group was originally developed in 1941 by Robert Merton at Columbia University’s Office of Radio Research to test radio programming.

What has changed? For one thing, I think a misunderstanding of qualitative research has gradually led to unrealistic expectations. On more than one occasion, I have heard GMs and PDs comment after a particularly lively group that the results contradict what they have seen in their perceptual studies or in the questions they tack onto their weekly callout. This is hardly a valid indictment of the methodology. As I mentioned earlier, focus groups are for “why,” while perceptual studies are best for the other “w’s.” The same questions should rarely be asked in both; rather, the insights of one should shape the questions of the other.

The deeper reason we do focus groups is that most consumer perceptions are far more complex than we can possibly capture with a necessarily reductive quantitative survey. In 1994, Daniel Wight, a senior researcher at the University of Glasgow, studied the opinions of adolescent boys about the opposite sex. In individual interviews, the boys expressed sensitive, sympathetic opinions of girls, while in subsequent focus groups their opinions exhibited considerably more “machismo.” A second group, by contrast, began with focus groups, again expressing fairly chauvinistic views, and ended with individual interviews, in which they maintained the macho views they had expressed in the focus groups. In a purely quantitative world, both cannot be right: if one study says adolescent boys are sensitive toward women while another says they are not, one study must be wrong. There is, however, a bit of the Heisenberg principle at work in this data. Wight’s study went on to show that both conclusions were right—the issues being grappled with were far too dynamic and complex to be reduced in this fashion.

Now, you might read the study cited above and conclude that we should simply be doing more individual interviews, not focus groups—again, however, the observations of both methodologies are valid in this context, and understanding one without the other can lead to tragic results. I have also heard some in this industry (and even, more specifically, in the research industry) express a preference for individual interviews. The differences between the two are not inconsequential—and neither is superior to the other. Individual interviews are best when the goal of the research is to understand the particular individual in question. In Frederick Taylor’s early studies in the then-nascent field of management science, for instance, it was crucial for him to thoroughly understand each individual worker involved in the manufacturing process so that he could establish relevant benchmarks and optimize workflow.

Interviews, however, have a few drawbacks (as well as significant strengths). Others have told me that “you get more” out of individual interviews. It may be true that you hear more from each individual respondent, but quantity in this case may not produce as much relevant data as a focus group—particularly with “low involvement” products. I am sure that Johnson & Johnson once tried to conduct individual interviews about hand soap, but there is only so much any one person can say about hand soap. By the third repeated probe of “other reasons why you bought Dial,” I can guarantee you that the respondent is just making things up in the hopes of pleasing the interviewer (call it a research version of Stockholm syndrome). Interviews are also dramatically more dependent upon the guidance of the interviewer—opportunities to directly or indirectly bias the results are rife—while focus groups are best when the moderator inserts himself or herself as little as possible into what should be an organic discovery process. It is this process that generates 90 minutes of discussion on hand soap—or your afternoon drive jock.

Focus groups have weaknesses, like all forms of research, but as a tool for generating qualitative insight on a specified topic, they are without question a sound means of exploring complicated issues of consumer behavior. It is this latter point that I would like to end on. The issues raised in focus groups are complex. Sure, the points of view expressed about your station may be simple—even monosyllabic—but any focus group worth its salt can quickly move beyond the surface and get to the reason behind the reason. Too many groups stop at the first road sign—“variety,” for instance—without continuing to put the pieces together to get at the basic human need that drives the particular attitude or belief your listeners have constructed about your station. I can assure you that the housewives in those 1950s Betty Crocker focus groups didn’t tell the moderator that their self-esteem was damaged by serving those cakes to their families—but that is exactly what a skilled team of researchers was able to conclude by mapping the linguistic and paralinguistic cues from the focus groups against subsequent quantitative studies.

Radio focus groups vary wildly in quality, running the gamut from merely unhelpful to dangerously misleading. There is a certain “DIY” quality to focus groups (“I moderated the last one, why don’t you do this one?”) that springs from confusion with what I consider to be an entirely separate tool, the “listener panel.” (I think the latter is a great way for station personnel—particularly those who do not touch the product every day, or rarely interact with listeners—to truly capture the zeitgeist of the “product” that we as broadcasters sell every day. Listener panels, however, should be conducted in addition to, not in lieu of, rigorously sampled, methodologically sound focus groups.) Now, moderating a focus group is certainly not brain surgery, but I will tell you as a former (and current) student of consumer behavior that there is an extensive toolkit of techniques (e.g., experiential analysis, metaphorical analysis, laddering, and even hypnosis) that a trained and experienced moderator can choose from, depending on the specifics of the job at hand. Putting respondents at ease so that they feel comfortable expressing themselves is only the beginning—often, the “no bad answers” atmosphere can lower respondents’ guard, leading them to agree with opinions they do not actually share in the hopes that others in the group will extend them the same courtesy. The trained professional is able to recognize this phenomenon—and naturally reorient the group to produce more genuine responses.

In short, a proper focus group has as many moving parts as a strategic study—if not more. For your next focus group, you should insist that your moderator do more than simply provide a “list of topics” or discussion guide—you should really grill him or her on how he or she plans to crack the complicated nut that is the radio listener. I am certainly happy to speak with anyone about how to get the most out of focus groups. If you have had poor experiences with focus groups in the past, I hope you will reconsider before rejecting the methodology outright. Radio can expect many challenges in the months and years ahead. Ignoring the voice of the customer is no way to meet them.
