On Focus Groups: Anyone can convene a group, ask questions, and write up the answers

There is a great book titled “Prove It Before You Promote It” that I read a while back. It has some very sobering remarks on focus groups and how they are applied in product development.

I am reproducing author Steve Cuno’s commentary on focus groups in its entirety:

Many companies hold focus groups. They fill a room with 10 to 20 carefully selected respondents and ask them questions. That much is fine. A problem occurs only when companies mistake the resultant feedback for data—and make decisions based on what they hear.

Focus groups, with an easy-to-imitate format, are a great place for incompetents to convince themselves and unsuspecting clients that they know what they’re doing. Anyone can convene a group, ask questions, and write up the answers.

I have seen focus group reports that say things like, “Seventy percent felt the packaging was too pink” or “Eighty percent said if you open a store on the West Side, they’ll shop there.” I have seen the people running the focus groups, whose role is to remain unbiased, ask leading questions like, “Would you be more or less likely to shop at a store that advertises on violent cartoons aimed at small children?”

Amazingly and sadly, businesses actually base big decisions on these groups. They make the package less pink. They open a store on the West Side. They pull their ads from Batman cartoons. And all too often they later find that consumers don’t behave the way they said they would in the focus group.

I completely agree. I have written previously about the relevance of focus groups; this book does a much better job of teaching us the pitfalls of misusing focus groups.

What the book says about focus groups – asking a few (leading) questions and making product decisions based on the feedback of a handful of people – is just as relevant to basing product and startup strategy on interviews with a handful of customers.

When you are talking to customers, you are still forming hypotheses, not testing them. The hard part is not testing the hypotheses but forming better hypotheses to test. Focus groups and customer interviews help us form better, testable hypotheses.

If you are in marketing, run a startup, manage a product, or do A/B tests, you should definitely read this book.

We already know what to do, we just need to validate it with a survey …

Through my blog and Berkeley network I get periodic requests from aspiring startup founders and small business owners to do a “survey” for them. There is a general pattern to all these requests:

  1. They already have the product going or have the idea all figured out.
  2. They are skeptical of the phrase “Marketing Research” and of services that charge high fees for it.
  3. They are highly technical, sharp, and committed individuals.
  4. They are reaching out only because someone they respect asked them, “Have you checked this with your target customers?”
  5. They already know what they want to do, what the customers want, what the product offering should be, and what the pricing should be – they only need a survey to validate these.
  6. They want me to write survey questions like, “Would you prefer auto-turbo fluid motion v7.4 over nitro-fusion hydraulic v2.3, and will you pay $100 more for that?”
  7. They also want to ask a lot of essay questions, like “What are some of the problems you find with social media XYX?”
  8. They want to run the survey on their blog readers, twitter followers, facebook friends, …
  9. They are not willing to pay me anything because all I have to do is write up a survey for them.

I turn away all such requests. The primary reason is that I currently advise two small businesses, one in Boulder and one here, and I cannot take on more unpaid work.

But there is another reason as well: they are solving the wrong problem.

If these people are convinced of their path, looking only for validation, and not willing to change course based on the data, why bother with a customer survey?

You should not jump into a survey if you have not formed a few hypotheses that you want objectively tested.

You should not launch a survey that is not designed on the basis of an extensive exploratory process – one that involves multiple customer interviews and focus groups.

If you have not talked to a single potential customer you are targeting, running a survey is the least of your concerns.

A survey is ineffective if you have not uncovered the distribution of your customers’ profiles, likes, and attitudes.

Asking respondents to write an essay about their problems and likes in a survey question is simply wrong!

A survey is ineffective if you have not uncovered your “customer speak” and you continue to use your techno-jargon.

A survey is ineffective if you slapped together a few questions that don’t help answer your decision.

A survey is ineffective if you collect data from wherever it is convenient – like the drunkard searching for his lost keys under the streetlight because it was too dark to search where he actually dropped them.

A survey is ineffective if you have not identified the target population, found a way to reach them, and committed the resources to reach the right target.

If you are convinced you are in San Diego, when you could be in Madagascar, looking only for white sandy beaches to validate your conviction and not willing to seek data that will show otherwise – you do not need a survey!

6 Survey Errors You Should Know and Avoid

So you did a survey, received a great response rate, and found that a great percentage of people prefer your product. Or you see a study quoting a major survey from a renowned market research firm that finds, “customers are willing to pay a price premium for great customer service”. We can take these results at face value and act on them, or we can ask some key questions to find the errors in the way the survey was conducted or the data analyzed.

In today’s WSJ, Carl Bialik, who writes The Numbers Guy column, points out some of the common yet not-so-easy-to-recognize errors in surveys and in interpreting their results.

  1. Leaving Out Key Groups: While researchers take care to find a representative sample of people, the survey population may limit or omit a few key groups, which can skew the results. The worst form of this is using a “convenience sample” – surveying only those who are available to us rather than the target population.
  2. Respondent Honesty: There are inherent challenges in getting respondents to answer a survey honestly. Be it a sensitive subject or a simple intention to purchase, we tend to mask our responses or give the answers we think the survey taker wants to hear. The problem is worse when the survey is administered in person or over the telephone.
  3. Losing Segmentation Differences: There may not be enough representation of sub-groups to detect any segmentation differences. On the other hand, if there are enough samples, ignoring segmentation differences and treating the data only in aggregate may show a completely different aggregate result.
  4. Hidden Variables: This is the flip side of the point above. Responses to a question could show a statistically significant difference between two segments. For example, women may state a higher willingness to pay for green products than men, but that difference may be driven by hidden variables the survey does not account for.
  5. Not Asking the Right Questions: The survey simply may not have asked the right questions, be it using the correct vernacular of the target population or asking unambiguous questions.
  6. Not Seeking Data for All Hypotheses: The survey may narrowly focus on one hypothesis and seek only data that will prove or disprove that hypothesis. Data can fit any number of hypotheses; before designing the survey, all of them must be surfaced and included in it. For example, a WSJ survey of parents about their children’s performance and their WSJ subscription may fail to ask about other things the children do, or about the parents’ education and involvement.
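A toy calculation makes error #3 concrete. In the sketch below, all counts are invented for illustration: package A is preferred within every segment, yet naive aggregation makes package B look like the winner because each package was rated by a different segment mix.

```python
# Hypothetical survey counts illustrating error #3: package A beats
# package B inside every segment, yet loses in the aggregate because
# the two packages were rated by different segment mixes.
# All numbers are invented for illustration.

data = {
    # (package, segment): (positive responses, total responses)
    ("A", "urban"): (9, 10),     # 90% positive
    ("A", "rural"): (70, 100),   # 70% positive
    ("B", "urban"): (80, 100),   # 80% positive
    ("B", "rural"): (6, 10),     # 60% positive
}

def rate(package, segment=None):
    """Positive-response rate for a package, per segment or aggregated."""
    cells = [v for (p, s), v in data.items()
             if p == package and (segment is None or s == segment)]
    positives = sum(pos for pos, _ in cells)
    totals = sum(tot for _, tot in cells)
    return positives / totals

for seg in ("urban", "rural"):
    print(f"{seg}: A={rate('A', seg):.0%}  B={rate('B', seg):.0%}")
print(f"aggregate: A={rate('A'):.0%}  B={rate('B'):.0%}")
# A wins in every segment; B wins only when segments are ignored.
```

This is the classic aggregation pitfall (Simpson’s paradox): the segment mix, not the product, drives the aggregate number.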


A Note on Much Misunderstood Use of Focus Groups

Some time back I read a tweet from a Boulder-based serial entrepreneur:

“Watching a focus group going horribly wrong. Don’t ask if they will buy, ask them to buy. HUGE difference”

In general, I have heard quotes like*:

“When we pitched the idea more than 70% in the focus group (or the customers we talked to) loved it.”

“We are not going to make decisions based on focus groups”

For your startup, be it a tech startup or a non-tech startup selling “Eyelashes for cars”, focus groups have a role to play. If we misunderstand that role, misuse focus groups, or execute them incorrectly, we either end up with wrong decisions or conclude that focus groups are a complete waste of time.

A focus group can go horribly wrong for any of the reasons I explain below; asking them “will you buy” and asking them to buy are just two of them. For the record, there is actually no difference between these two mistakes!

Here is some level setting on what a focus group is about and what you should and should not ask. Most of these apply even to customer interviews, the ones you do when you “get out of the building”.

What a focus group is not

  1. It is not a sales pitch and should not be treated as one. While you may introduce your idea or product prototype, you should not be selling it.
  2. It is not a platform for you to hear your own voice. In fact, you should not make any statements at all; you are only allowed to ask questions.
  3. Do not look for validation of your preset notions in this meeting – especially product design, pricing or willingness to pay.
  4. It is not a data-collection process, let alone a hypothesis-validation process. Do not bother counting how many took one position vs. the other. It does not matter; what matters is that there are at least two sets of opinions.
  5. It is not a real-time data collection and digestion process. Hold your opinions and theories until at least a day after the event.

What a focus group is

  1. It is a step toward forming better, more informed hypotheses about their needs, wants, pain points, buying processes, emotions, etc.
  2. It is a process for finding the range of opinions. If you find people expressing only middle-of-the-road opinions, exaggerate them and take them to the extremes. If you find only extremes, reverse the exercise – either way you get the full range of opinions.
  3. It is a source of customer (target customer) language.
  4. It is a source for finding the alternatives these people employ in the absence of your product.
  5. It is a source for designing your survey.

Who should conduct it?

  1. I understand most cannot afford the fancy focus groups conducted in rooms with one-way mirrors, ceiling cameras, etc. If the results of this step and the ensuing survey are important to your business decision, consider the cost of getting it wrong and decide whether you should hire professional help.
  2. If you are the most talkative, most pedantic and opinionated person in your team, you should not be conducting it. Save yourself for VC pitches, real customer visits and deal making. You want the most compassionate, silent and inquisitive type in your group. Take a non-techie if you can.
  3. You need someone who does not want their opinion heard, does not have something to prove, and just wants to fill the room with awkward silence so the participants are forced to dispel it by talking.

What should you not ask?

  1. Do not ask if they like your product, agree with you, etc.
  2. Do not give multiple-choice questions.
  3. Do not replicate a survey question, “On a scale of 0 to 10, 0 being …”
  4. Do not ask any Yes/No or either/or questions.
  5. Do not ask them to justify their position.
  6. Do not contradict or correct their statements – just listen and take notes
  7. Do not indicate correctness/wrongness , your agreement/disagreement verbally or with body language.
  8. For touchy subjects (for example buying Eyelashes for cars) do not make it about them (make it about other people).
  9. Do not zoom in on the most talkative person at the expense of others.
  10. Do not ask questions so you can find evidence to convince yourself that Madagascar is really San Diego.

What should you ask?

  1. Open ended questions that are exploratory.
  2. Project it on others, “why do you think some people might …”
  3. Engage everyone, even the silent types.
  4. Ask for lots of options, “What are some of …”
  5. Ask “What”, “Why”, “How”, “When”, and “Where” questions, but see above on how to ask them. You do not want to ask in a way that puts them on the defensive or makes them say what they think will please you or the rest of the participants.
  6. Just ask the questions with your silence.
  7. You can engage me to do it and the ensuing analysis.

The next step is designing the survey based on the focus group. Subscribe to this blog and my twitter to read about it in the coming days.

Note*: Yes, there is selection bias in the quotes, fortunately the guidelines I give are not predicated on these.

What is Wrong with XBox Live Pricing?

Microsoft announced today that they are increasing the price of Xbox Live. In a nutshell, they started with a mistake and are trying to correct it with more mistakes.

Value Tag: This article is worth $250,000 for the Microsoft Product Manager and $25,000 for an entrepreneur.

Customers do not have an internal value meter that tells them the price to pay or whether the price they pay is fair. What they have is an internal reference price, framed by what they have usually paid for similar products in the past. If the price they pay rises above this reference price, customers feel it is “unfair”, as we have seen before with airline unbundling. For new products it is that much more important to get the initial pricing right; otherwise the marketer is either forced to forgo profits by sticking to low prices or faces customer backlash from ill-executed price increases.

In the case of new products like Xbox Live, the price customers were trained to pay from the beginning has become their reference price. Now Microsoft has realized that it was too low and decided to increase it by close to 25%. The problems? They got the initial pricing strategy wrong, the tactics wrong, and the way they are implementing the change wrong.

Initial Pricing Strategy:

  1. There is no versioning. They offered one version at a fixed price and offered a bundling discount for 3-month and one-year subscriptions. Otherwise, the subscriber gets all features all the time (unlimited access). One version that offers everything unmetered is almost always wrong, and offering a discount for a longer-term subscription is not versioning.
  2. They named this version Gold, but missed the opportunity to have Bronze and Silver. One cannot simply toss together a set of features, including the kitchen sink, and call it “Gold”. They should have done appropriate marketing research to find which benefits are relevant to which segments, then offered versions at different price points. The versions could easily have differed in hours of play, additional benefits, or both. It is very important to allocate value across the versions and price them appropriately, so that those with high willingness to pay are not tempted by the lowest-priced version.
  3. They say they have been adding new content and entertainment experiences continuously. But they kept adding these to the single version they have. They could have corrected the mistake of not versioning by introducing higher-priced versions with the new features and adding only a crippled form of the new features to the current version.

Pricing Tactics:

  1. If all you have is one version, it is not a good approach to name it “Gold” or “Deluxe”. You run out of names for higher levels.
  2. In the absence of clear segmentation differences, offering three versions as a versioning tactic would have enabled them to capture a larger share of the value created – those with higher willingness to pay, or with extremity aversion, would have self-selected into the middle version.
  3. From a behavioral pricing perspective, having a high-priced version would have helped set a higher reference price for their customers. Instead they set one low reference price and are now facing backlash from their price increase.

Implementing the price change:

  1. How the price increase is communicated – through a personal blog and using the word “increase” – is simply wrong.
  2. One cannot push a price increase with the positioning, “you are getting a lot of value from the new features we are adding and hence we are increasing the price”. Customers do not see value the way marketers see it. It has to be made explicit in terms of dollars, and customers need to be convinced of it. There was no attempt at this value communication.
  3. They ignored the reference price. Even when customers get value and see the value in the additional content, their reference price needs to be changed before any attempt to price these additions.
  4. From a consumer behavior perspective, the word “because” is missing from the statement announcing the price increase. (See the link for details.)

What should they have done differently?

Tiers based on Hours of Playtime: They should have looked at the mountain of data they have collected to determine the playtime distribution. This would have enabled them to introduce tiers based on hours of play. This is the same as what AT&T did with its data plan pricing change. As AT&T positioned it, they introduced a lower-priced option that, according to their data, would fit the needs of more than 90% of their subscribers.
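As a minimal sketch of that tier analysis: given logged hours of play per subscriber, usage percentiles tell you where tier caps would fall. The playtime figures below are invented; `statistics.quantiles` from the Python standard library does the work.

```python
# A minimal sketch of the tier analysis described above: find usage
# percentiles from (hypothetical) logged hours of play per subscriber
# so a tier cap can be placed where most subscribers fall.
import statistics

monthly_hours = [2, 3, 5, 6, 8, 10, 12, 15, 18, 20,
                 25, 30, 35, 40, 55, 70, 90, 120, 150, 200]

# Deciles of the usage distribution (9 cut points).
deciles = statistics.quantiles(monthly_hours, n=10)

# A cap at the 90th percentile covers roughly 90% of subscribers,
# leaving only the heaviest users for the unlimited tier.
cap_90 = deciles[8]
share_covered = sum(h <= cap_90 for h in monthly_hours) / len(monthly_hours)
print(f"90th-percentile cap: {cap_90:.0f} hours "
      f"(covers {share_covered:.0%} of subscribers)")
```

With real data, the caps would be chosen so each tier's limit lands just above a natural cluster in the distribution rather than at arbitrary round numbers.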

Introduce Versioning: They should have used this opportunity to introduce versioning, with new features positioned as higher value to customers, or by limiting the usage of certain features.

On top of all this, they should have managed the change with better communication and positioning.

No wonder we see a huge backlash from the price increase.

None of the mistakes listed above, nor the methods to fix them, are new, but thanks to Xbox Live this prominent example serves well to reiterate the need for an effective pricing and versioning strategy for both enterprises and startups!

Should your versions differ in quantity or benefits?

In the article titled Simple Versioning Rule, I wrote this

If you are versioning your product it must differ across at least two choice dimensions. One is price and the other is the dimension that has the most incremental net value. In the special case where the marginal cost is $0, the incremental net value is same as incremental revenue.

In fact the rule can be refined to state that there is a price dimension and a value (or utility) dimension. The value dimension has many components:

  1. Quantity: The versions all deliver identical benefits; they differ only in size/quantity. This is the classic small, medium, large versioning. The price variation is usually non-linear; that is, customers pay a lower price per unit when they choose bigger sizes.
  2. Benefits: This is the bronze, silver, gold versioning. Customers receive an additional or different feature set with each version, regardless of the quantity consumed. An example is a survey tool that provides an email marketing tie-in only with the higher-priced version. Or, more recently, JetBlue offers its “fly all you can” ticket in two versions, one with a Friday–Saturday restriction.
  3. Perceived Benefits: This is a special case of (2) above. There is absolutely no difference between the versions, and yet they are perceived differently due to branding and other marketing actions. This requires its own discussion and is beyond the scope of this article.
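The non-linear price variation in the quantity case (1) can be seen in a small sketch. The tier names, quantities, and prices below are hypothetical, in the spirit of the survey-app example:

```python
# Hypothetical quantity versions for a survey app (all names and prices
# invented) showing the usual non-linear pattern: the per-unit price
# falls as the quantity tier grows.

tiers = [
    ("small",   1_000, 10.00),   # (name, responses included, price in $)
    ("medium",  5_000, 35.00),
    ("large",  20_000, 90.00),
]

for name, qty, price in tiers:
    per_unit = price / qty * 1000   # price per 1,000 responses
    print(f"{name:>6}: ${price:6.2f} for {qty:>6} responses "
          f"(${per_unit:.2f} per 1,000)")
# Per 1,000 responses the price drops from $10.00 to $7.00 to $4.50 –
# bigger sizes cost less per unit, the non-linearity described above.
```

The steeper the per-unit drop, the stronger the nudge for heavy users to buy the larger size.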

How do you decide between the quantity and benefit dimensions, or whether to pick both? Even once you decide, how should the versions themselves be designed? For instance,

  1. You are designing a survey app, should you design the lowest version with 1000 responses at $10 price level or 2000 responses at $15, or are there other ways?
  2. You are designing a webapp,  should API integration be allowed in the lowest version and priced in or only be made available in higher level versions?
  3. How should the price vary across the versions so that those who have high willingness to pay are not tempted by your lowest priced version?
  4. How many versions?
  5. What about the versioning costs? (Product, Sales, Menu and Customer-Cognitive costs)

I do not believe there is a generalized way to answer all these questions. Even for the basic question – quantity or features – there is no one answer. It depends on your customer segments, what they value, and what maximizes your profit.
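Question 3 in the list above – pricing so that high-willingness-to-pay buyers are not tempted by the lowest version – can be illustrated with a toy two-segment model. Every number here is invented: two segments with different willingness to pay (WTP) for a low and a high version, where each buyer picks whichever version gives the most surplus.

```python
# A toy two-segment self-selection model (all numbers invented): price
# the versions so the high-WTP segment prefers the high version instead
# of settling for the cheap one.

segments = {
    # segment: (size, WTP for low version, WTP for high version)
    "casual": (100, 12.0, 15.0),
    "power":  (40, 14.0, 40.0),
}

def profit(p_low, p_high):
    total = 0.0
    for size, wtp_low, wtp_high in segments.values():
        surplus = {"low": wtp_low - p_low, "high": wtp_high - p_high}
        best = max(surplus, key=surplus.get)   # buyers pick max surplus
        if surplus[best] >= 0:                 # buy only if surplus >= 0
            total += size * (p_low if best == "low" else p_high)
    return total

# Naive: charge the power segment its full WTP ($40) for the high
# version. Power users then keep more surplus on the low version
# ($2 vs $0), so the high version sells nothing.
naive = profit(12.0, 40.0)

# Self-selecting: leave power users more surplus on the high version
# ($40 - $37 = $3 vs $2 on the low one), so they trade up voluntarily.
better = profit(12.0, 37.0)
print(naive, better)   # → 1680.0 2680.0
```

Leaving the high-WTP segment a little surplus (here $3 instead of $0) is what makes the version menu self-selecting, which is exactly what question 3 asks for.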

You can find the right versioning strategy using advanced marketing research methods like conjoint and cluster analyses. (Or I can help!)

If you are doing versioning, why not do it right?