On Focus Groups: Anyone can convene a group, ask questions, and write up the answers

There is a great book titled “Prove It Before You Promote It” that I read a while back. It has some very sobering remarks on focus groups and how they are applied in product development.

I am reproducing in its entirety author Steve Cuno’s commentary on focus groups:

Many companies hold focus groups. They fill a room with 10 to 20 carefully selected respondents and ask them questions. That much is fine. A problem occurs only when companies mistake the resultant feedback for data—and make decisions based on what they hear.

Focus groups, with an easy-to-imitate format, are a great place for incompetents to convince themselves and unsuspecting clients that they know what they’re doing. Anyone can convene a group, ask questions, and write up the answers.

I have seen focus group reports that say things like, “Seventy percent felt the packaging was too pink” or “Eighty percent said if you open a store on the West Side, they’ll shop there.” I have seen the people running the focus groups, whose role is to remain unbiased, ask leading questions like, “Would you be more or less likely to shop at a store that advertises on violent cartoons aimed at small children?”

Amazingly and sadly, businesses actually base big decisions on these groups. They make the package less pink. They open a store on the West Side. They pull their ads from Batman cartoons. And all too often they later find that consumers don’t behave the way they said they would in the focus group.

I completely agree. I have written previously about the relevance of focus groups, but this book does a much better job of teaching us the pitfalls of misusing focus groups.

What the book says about focus groups – asking a few (leading) questions and making product decisions based on the feedback of a handful of people – is very relevant to basing product and startup strategy on interviews with a handful of customers.

When you are talking to customers, you are still forming hypotheses, not testing them. The hard part is not testing the hypotheses, but forming better hypotheses to test. Focus groups and customer interviews help us form better, testable hypotheses.

If you are in marketing, run a startup, manage a product, or do A/B tests, you should definitely read this book.

A Note on the Much Misunderstood Use of Focus Groups

Some time back I read a tweet from a Boulder-based serial entrepreneur:

“Watching a focus group going horribly wrong. Don’t ask if they will buy, ask them to buy. HUGE difference”

In general, I have heard quotes like these*:

“When we pitched the idea more than 70% in the focus group (or the customers we talked to) loved it.”

“We are not going to make decisions based on focus groups”

For your startup, be it a tech startup or a non-tech startup selling “Eyelashes for cars”, focus groups have a role to play. If we misunderstand their role, misuse them, or execute them incorrectly, we either end up with wrong decisions or conclude that focus groups are a complete waste of time.

A focus group can go horribly wrong for any of the reasons I explain below; asking “will you buy” or asking them to buy are just two of them. For the record, there is actually no difference between these two mistakes!

Here is some level setting on what a focus group is about and what you should and should not ask. Most of these points apply to customer interviews as well, the ones you do when you “get out of the building”.

What a focus group is not

  1. It is not a sales pitch and should not be treated as one. While you may introduce your idea or product prototype, you should not be selling it.
  2. It is not a platform for you to hear your own voice. In fact, you should not make any statements at all; you are only allowed to ask questions.
  3. Do not look for validation of your preconceived notions in this meeting – especially about product design, pricing, or willingness to pay.
  4. It is not a data collection process, let alone a hypothesis validation process. Do not bother counting how many took one position vs. the other. The count does not matter; what matters is that there are at least two sets of opinions.
  5. It is not a real-time data collection and digestion process. Hold your opinions and theories until at least a day after the event.

What a focus group is

  1. It is a step towards forming better, more informed hypotheses about customers: their needs, wants, pain points, buying processes, emotions, etc.
  2. It is a process for finding the range of opinions. If you find people expressing only middle-of-the-road opinions, exaggerate them and take them to the extreme. If you find only extreme opinions, reverse the process, and you get the full range of opinions.
  3. It is a source of customer (target customer) language.
  4. It is a source for finding the alternatives these people were using before your product.
  5. It is a source for designing your survey.

Who should conduct it?

  1. I understand most cannot afford the fancy focus groups conducted in rooms with one-way mirrors, ceiling cameras, etc. If the results of this step and the ensuing survey are important to your business decision, consider the cost of getting it wrong and decide whether you should hire professional help.
  2. If you are the most talkative, most pedantic, and most opinionated person on your team, you should not be conducting it. Save yourself for VC pitches, real customer visits, and deal making. You want the most compassionate, silent, and inquisitive type in your group. Take a non-techie if you can.
  3. You need someone who does not want their opinion heard, does not have something to prove, and just wants to fill the room with awful silence so the participants are forced to dispel it by talking.

What should you not ask?

  1. Do not ask if they like your product, agree with you, etc.
  2. Do not give multiple choice questions.
  3. Do not replicate a survey question, “On a scale of 0 to 10, 0 being …”
  4. Do not ask any Yes/No or either/or questions.
  5. Do not ask them to justify their position.
  6. Do not contradict or correct their statements – just listen and take notes.
  7. Do not indicate correctness or wrongness, or your agreement or disagreement, either verbally or with body language.
  8. For touchy subjects (for example, buying Eyelashes for cars), do not make it about them (make it about other people).
  9. Do not zoom in on the most talkative person at the expense of others.
  10. Do not ask questions so you can find evidence to convince yourself that Madagascar is really San Diego.

What should you ask?

  1. Open-ended questions that are exploratory.
  2. Project it onto others: “Why do you think some people might …”
  3. Engage everyone, even the silent types.
  4. Ask for lots of options, “What are some of …”
  5. Ask “What”, “Why”, “How”, “When”, and “Where” questions, but see above on how to ask them. You do not want to ask them in a way that will either put participants on the defensive or make them say what they think will please you or the rest of the participants.
  6. Just ask the questions with your silence.
  7. You can also ask me to do it and the ensuing analysis.

The next step is designing the survey based on the focus group. Subscribe to this blog and my twitter to read about it in the coming days.

Note*: Yes, there is selection bias in the quotes; fortunately, the guidelines I give are not predicated on them.

Who Makes the Hypothesis in Hypothesis Testing?

Most of my work on pricing and consumer behavior studies relies on hypothesis testing. Be it finding a difference in means between two groups, running a non-parametric test, or making a causation claim, I apply hypothesis testing explicitly or implicitly. I make overarching claims about customer willingness to pay and the factors that influence it based on hypothesis testing. The same is true for the most popular topic these days for anyone with a web page – A/B split testing. There is nothing wrong with these methods, and I bet I will continue to use them in all my other work.

We should note, however, that the use of hypotheses and the finding of statistically significant differences should not blind us to the fact that some amount of subjectivity goes into all of this. Another important distinction to note: despite the name, hypothesis testing does not test whether the hypothesis is validated but whether the data fits the hypothesis, which we take as given. More on this below.

All these tests proceed as follows:

  1. Start with the hypotheses. In fact, you always start with two; the null hypothesis is the same for any statistical test.
    The null hypothesis H0: The observed difference between subjects (or groups) is just due to randomness.
    Then you write down the hypothesis that you want to make a call on.
    The alternative hypothesis H1: The observed difference between subjects (or groups) is indeed due to one or more treatment factors that you control.
  2. Pick the statistical test you want to use from those available for your case, be it a non-parametric test like the Chi-square test, which makes no assumption about the distribution of the data (common in A/B testing), or a parametric test like the t-test, which assumes the data follows a Gaussian (i.e., normal) distribution.
  3. Select a confidence level for the test – 90%, 95%, or 99%, with 95% being the most common. This choice is completely subjective. What you are stating with this threshold is that the results are statistically significant only if randomness could produce them in less than 5% (100% − 95%) of cases. The threshold is also expressed as a p-value (probability), in this case 0.05.
  4. Perform the test with random sampling. This needs more explanation but is beyond the scope of what I want to cover here.
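To make these four steps concrete, here is a minimal sketch in Python (my own illustration, not from the original post), using SciPy’s chi-square test on a hypothetical A/B test; the conversion counts are made up.

```python
# A sketch of the four steps above applied to a hypothetical A/B test
# of two page variants. All counts are invented for illustration.
from scipy.stats import chi2_contingency

# Step 1: Write the hypotheses down first.
# H0: any difference in conversion rate between A and B is due to randomness.
# H1: the difference is due to the treatment (the page variant shown).

# Step 2: Pick the test. For count data with no distributional assumptions,
# a chi-square test on the 2x2 contingency table is a common choice.
observed = [
    [120, 4880],  # variant A: conversions, non-conversions (hypothetical)
    [150, 4850],  # variant B: conversions, non-conversions (hypothetical)
]

# Step 3: Choose the significance threshold before looking at the result.
alpha = 0.05  # corresponds to a 95% confidence level

# Step 4: Perform the test on the (randomly sampled) data.
chi2, p_value, dof, expected = chi2_contingency(observed)

print(f"chi-square = {chi2:.3f}, p-value = {p_value:.4f}")
if p_value < alpha:
    print("Reject H0: the observed difference is unlikely to be randomness alone.")
else:
    print("Fail to reject H0: the data does not rule out randomness.")
```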

As you can see, we, the analyst/decision maker, make up the hypothesis, and we treat it as given. We did the right thing by writing it down first. (A common mistake in many A/B tests and data mining exercises is writing the hypothesis after the test.)

What we are actually computing is this: given that the null hypothesis H0 (randomness) is true, what is the probability of observing data D at least as extreme as what we saw?

This is expressed as P(D|H0), the p-value. Statistical significance at the 95% level means P(D|H0) < 0.05, and on that basis we reject H0 and accept H1. Note that at no point do we compute P(H1|D), the probability that our hypothesis is true given the data; the hypothesis itself remains something we made up and take as given.

When we say we accept H1, we are really saying that H0 (randomness) is an unlikely explanation and hence H1 must be true. In doing so we ignore the possibility that the observed data can be explained by any number of alternative hypotheses. Since we wrote the original hypothesis, if we did not base it on proper qualitative analysis, we could be wrong despite the fact that our test yields statistically significant results.
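To see why “statistically significant” is not the same as “the hypothesis is validated”, here is a minimal simulation sketch (my own illustration with made-up parameters, not from the original post). It runs many A/A comparisons in which there is no real difference between the groups, yet the t-test still comes out significant in roughly 5% of them, purely through the randomness H0 describes.

```python
# Minimal sketch: run many "A/A" tests where H0 is true by construction
# (both groups come from the same distribution) and count how often the
# t-test still declares statistical significance at the 5% level.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(42)   # fixed seed for reproducibility
alpha = 0.05                      # the subjective threshold from step 3
n_experiments = 10_000
false_positives = 0

for _ in range(n_experiments):
    # Hypothetical metric (e.g., order value); both groups share the same mean.
    group_a = rng.normal(loc=100.0, scale=15.0, size=200)
    group_b = rng.normal(loc=100.0, scale=15.0, size=200)
    _, p_value = ttest_ind(group_a, group_b)
    if p_value < alpha:
        false_positives += 1

# Expect roughly 5% "significant" results even though there is nothing to find.
print(f"Significant results with no real effect: {false_positives / n_experiments:.1%}")
```

A significant p-value only tells you the data is unlikely under randomness alone; whether H1 is the right explanation still depends on how well the hypothesis was framed in the first place.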

This is why you should never launch a survey without doing focus groups and customer interviews. This is why you don’t jump into statistical testing before understanding enough about the subjects under study to frame relevant hypotheses. Otherwise you are, as some have written to me, using gut feel or pulling things out of thin air and accepting it simply because there is not enough evidence in the data to overturn the null hypothesis.

How do you come up with your hypotheses?

Look for my next article on how this is different in Bayesian statistics.