Can you add my one question to your survey?

No sooner do you let it be known, usually inadvertently, that you are about to send out a survey to customers than the incessant requests (and commands) start coming in from your co-workers (and bosses) to add just one more question to it. Just one more question they have been dying to find the answer to, but have never gotten around to running a survey, or doing anything else, to answer.

Just one question, right? What harm can it do? Surely you are not opening the floodgates and adding everyone’s question, just one question to satisfy the HiPPO?

Maybe I am being unfair to our colleagues. It is possible it is not them asking to add one more question; it is usually we ourselves who are tempted to add just one more question to the survey we are about to send out. If survey takers are already answering a few questions, it can’t be that bad for them to answer one more, can it?

The answer is: yes, of course it can be really bad. Resist any arm-twisting, bribing, and your own temptation to add that one extra question to a carefully constructed survey. That is, I am assuming you did carefully construct the survey; if not, sure, add them all, because the answers are meaningless and unactionable anyway.

To define what a carefully constructed survey means, we need to ask, “What decision are you trying to make with the data you will collect?”

If you do not have decisions to make, if you won’t do anything different based on the data collected, or if you are committed to whatever you are doing now and are only collecting data to satisfy an itch, then you are doing it absolutely wrong. In that case, yes, please add that extra question from your boss for some brownie points.

So you do have decisions to make, and you have made sure the data you seek is not available through any other channel. Next you need to develop a few hypotheses about the decision. You do that through background exploratory research, including customer one-on-one interviews, social media search analysis, and, if possible, focus groups. We are actually paid to make better hypotheses, so take this step seriously.

For example, your decision is how to price a software offering, and your hypotheses are about the value perception of certain key features and consumption models.

Once you develop a minimal set of well-defined hypotheses to test, you design the survey to collect data to test those hypotheses. Every question in your survey must serve to test one or more of the hypotheses. On the flip side, you may not be able to test all your hypotheses in one survey, and that is okay. But if there is a question that does not serve to test any of the hypotheses, then it does not belong in that survey.
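The rule above, that every question must map to at least one hypothesis, can be expressed as a simple check. A minimal sketch in Python (the hypothesis labels and question wordings below are hypothetical examples, not from any real survey):

```python
# Map each survey question to the hypotheses it helps test.
# A question with an empty mapping does not belong in the survey.
question_to_hypotheses = {
    "How much would you pay per month for feature X?": ["H1: price sensitivity"],
    "Do you prefer subscription or perpetual license?": ["H2: consumption model"],
    "What is your favorite color?": [],  # serves no hypothesis, so cut it
}

def orphan_questions(mapping):
    """Return the questions that do not test any hypothesis."""
    return [q for q, hyps in mapping.items() if not hyps]

for q in orphan_questions(question_to_hypotheses):
    print("Cut this question:", q)
```

Running a check like this before sending the survey out makes the “just one more question” conversation easy: if the new question maps to no hypothesis, it gets cut.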

The last step is deciding the relevant target mailing list you want to send the survey to. After all, there is no point in asking the right questions of the wrong people.

Now you can see what adding that one extra question from your colleague does to your survey. It did not come from your decision process, it does not help test your hypotheses, and it is most likely not relevant to the sample set you are using.

Running a survey? Using a raffle to increase response rate?

Before you read on, take a moment to stop and think about the options. You are running a survey and have decided to use a raffle, rather than a pay-per-response method, to get your target customers to respond. The question is which raffle, for the same total prize pot, will get you the better response rate: Option A, a single $250 prize, or Option B, ten $25 prizes?

You do not have resources to run both. You are going to pick one method.

Have you made your choice? And written it down?

Let us do the math first to see which option offers better expected value to your respondents.

Assume 100 customers.

Option A: The chance of winning is 1/100, so the expected value of the $250 prize is $2.50.

Option B: There are 10 chances to win (no duplicate winners), each prize worth $25. The conditional odds improve with each drawing: 1/100 for the first, 1/99 for the second, and so on down to 1/91 for the tenth. Unconditionally, though, each drawing gives any one respondent a 1/100 chance (for the second drawing, 99/100 × 1/99 = 1/100, and so on), so the probability of winning one of the ten prizes is 10/100.

The expected value is 10/100 × $25 = $2.50, identical to Option A.
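Option B’s expected value per respondent can be sanity-checked with a quick Monte Carlo simulation. A sketch, using the numbers from the example above (100 respondents, ten distinct winners, $25 per prize):

```python
import random

def simulate_option_b(pool=100, prizes=10, prize_value=25, trials=200_000):
    """Average payout for one fixed respondent (respondent 0) when
    `prizes` distinct winners are drawn at random from `pool` entrants."""
    total = 0.0
    for _ in range(trials):
        winners = random.sample(range(pool), prizes)  # no duplicate winners
        if 0 in winners:
            total += prize_value
    return total / trials

print(f"Estimated expected value per respondent: ${simulate_option_b():.2f}")
```

The estimate converges to about $2.50, matching the closed-form calculation.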

If your respondents were presented with these two options side by side and asked to pick, they might well choose Option B for its better odds of winning.

But notice that your respondents do not get to see both options. They see either a 1/100 chance to win $250 or a 1/10 chance to win $25.

We are not good at math, and when handling probabilities we tend to focus on the magnitude of the prize pot over the chances of winning. You can see this behavior when the Powerball or another lottery jackpot rises past $50 million.

So a 1/100 chance to win $250 will look more attractive than the $25 option, even when the $25 option’s expected value is at least as high.

You will most likely receive a better response rate by giving all $250 to one lucky respondent than by splitting it over ten people.

You think otherwise? I am happy to run this test for you. I just need $500 for the prize pot and $500 for my fee.

Short Survey on your Product Perception

Please take a moment – it will take only 1 to 2 minutes – to answer this short survey.  http://www.surveygizmo.com/s3/489792/iterative-blog

I appreciate your time and response.

We already know what to do, we just need to validate it with a survey …

Through my blog and Berkeley network I get periodic requests from aspiring startup founders and small business owners to do a “survey” for them. There is a general pattern to all these requests:

  1. They already have the product going or have the idea all figured out.
  2. They are skeptical of the phrase “marketing research” and of services that charge high fees for it.
  3. They are highly technical, sharp, and committed individuals.
  4. They are reaching out only because someone they respect asked them, “Have you checked this with your target customers?”
  5. They already know what they want to do, what the customers want, what the product offering should be, and what the pricing should be; they only need a survey to validate these.
  6. They want me to write survey questions like, “Would you prefer auto-turbo fluid motion v7.4 over nitro-fusion hydraulic v2.3? And will you pay $100 more for that?”
  7. They also want to ask a lot of essay questions: “What are some of the problems you find with social media XYX?”
  8. They want to run the survey on their blog readers, Twitter followers, Facebook friends, and so on.
  9. They are not willing to pay me anything, because all I have to do is write up a survey for them.

I turn away all such requests. The primary reason is that I currently advise two small businesses, one from Boulder and one here, and I cannot take on more unpaid work.

But there is another reason as well: they are solving the wrong problem.

If these people are convinced about their path, looking only for validation and not willing to change their path based on the data, why bother with a customer survey?

You should not jump to do a survey if you have not formed a few hypotheses that you want objectively tested.

You should not launch a survey that is not designed based on extensive exploratory process – one that involves multiple customer interviews and focus groups.

If you have not talked to a single potential customer you are targeting, running a survey is the least of your concerns.

A survey is ineffective if you have not uncovered the distribution of your customers’ profiles, preferences, and attitudes.

Asking respondents to write an essay of their problems and likes in a survey question is simply wrong!

A survey is ineffective if you have not uncovered your “customer speak” and you continue to use your techno-jargon.

A survey is ineffective if you slapped together a few questions that don’t help answer your decision.

A survey is ineffective if you collect data from wherever it is convenient, like the drunkard searching for his lost keys under the streetlight because it was dark where he dropped them.

A survey is ineffective if you have not identified the target population, have not found a way to reach them, or are not willing to spend resources to reach the right target.

If you are convinced you are in San Diego when you could be in Madagascar, looking only for white sandy beaches to validate your conviction and not willing to seek data that might show otherwise, you do not need a survey!

6 Survey Errors You Should Know and Avoid

So you ran a survey, received a great response rate, and found that a great percentage of people prefer your product. Or you see a study quoting a major survey from a renowned market research firm that finds, “customers are willing to pay a price premium for great customer service.” We can take these results at face value and act on them, or we can ask some key questions to find the errors in the way the survey was conducted or the data was analyzed.

In today’s WSJ, Carl Bialik, who writes The Numbers Guy column, points out some common yet not-so-easy-to-recognize errors in running surveys and interpreting their results.

  1. Leaving Out Key Groups: While researchers take care to find a representative sample of people, the survey population may underrepresent or omit a few key groups, which can skew the results. The worst form of this is the “convenience sample”: surveying only those who are available to us rather than the target population.
  2. Respondent Honesty: There are inherent challenges in getting respondents to answer a survey honestly. Be it a sensitive subject or a simple intention to purchase, we tend to mask our responses or give answers we think the survey taker wants to hear. The problem is worse when the survey is administered in person or over the telephone.
  3. Losing Segmentation Differences: There may not be enough representation of sub-groups to detect any segmentation differences. On the other hand, if there are enough samples, ignoring segmentation differences and treating the data only in aggregate may show a completely different result.
  4. Hidden Variables: This is the flip side of the point above. Responses to a question could show a statistically significant difference between two segments. For example, women may state a higher willingness to pay for green products than men, but that difference may be driven by hidden variables the survey did not account for.
  5. Not Asking the Right Question: The survey simply may not have asked the right question, whether by missing the correct vernacular of the target population or by asking ambiguous questions.
  6. Not Seeking Data for All Hypotheses: The survey may narrowly focus on one hypothesis and seek only data that will prove or disprove it. Data can fit any number of hypotheses; before designing the survey, all of them must be surfaced and included. For example, WSJ surveying parents about their children’s performance and their WSJ subscription may fail to ask about other things the children do, or about the parents’ education and involvement.
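
Point 3, aggregation masking segment differences, is easy to demonstrate with toy numbers. In the sketch below (all figures are made up purely for illustration), concept A scores higher than concept B within every segment, yet B wins in the aggregate because the segment sizes differ:

```python
# Hypothetical satisfaction scores (0-10) for two product concepts.
# All numbers are invented to illustrate how aggregation can flip a result.
scores = {
    "enterprise": {"A": [9, 9],             "B": [8, 8, 8, 8, 8, 8]},
    "smb":        {"A": [4, 4, 4, 4, 4, 4], "B": [3, 3]},
}

def mean(xs):
    return sum(xs) / len(xs)

# Concept A wins within every segment...
for seg, data in scores.items():
    print(seg, "A:", mean(data["A"]), "B:", mean(data["B"]))

# ...but loses in the aggregate, because segment sizes differ.
agg_a = mean([x for d in scores.values() for x in d["A"]])
agg_b = mean([x for d in scores.values() for x in d["B"]])
print("aggregate A:", agg_a, "aggregate B:", agg_b)  # 5.25 vs 6.75
```

This is the classic Simpson’s paradox: whether you trust the per-segment or the aggregate view depends on whether the segments are meaningful for your decision, which is exactly why the survey must be designed around the segments up front.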

Tags: Customer Metric, Hypothesis

There is something about Kindle and $150 Sunglasses – Results from Conjoint Analysis

Let me start by stating that my previous article about Kindle positioning was most likely wrong, and Amazon’s product managers have evidently done their marketing research well. There is something about the segment that prefers $150 sunglasses and the Kindle.

I will discuss only part of the results from my recently conducted conjoint analysis. I will not provide the detailed results, differences across gender, or other analyses here.

When Amazon introduced the WiFi version of the Kindle (priced at $139), Mr. Bezos said,

“At $139, if you’re going to read by the pool, some people might spend more than that on a swimsuit and sunglasses,”

Amazon also ran TV ads that talked specifically about the $139 Kindle being cheaper than sunglasses.

I did not believe that was a valid positioning, I believed (and wrote),

What job will these segments hire the e-Reader for? The same job they hire their $150 sunglasses for – to make a statement about themselves. For that job, they might be more inclined to hire an iPad than a Kindle.

I recently conducted a conjoint analysis by surveying the Haas Berkeley MBA class of 2011 (thanks to Hrishika, MBA 2011, for providing access). I did not get enough samples from my Twitter followers, so I dropped all those samples.

A key finding of the study: those who preferred $150 sunglasses also preferred the Kindle more than they preferred the iPad, the nook, or a $99 generic eBook reader.

Among those who preferred $30 sunglasses, Kindle had a 26% market share, tied for second place with the generic eBook reader.

Among those who preferred $150 sunglasses, Kindle had a 37% market share, the highest, with a relative market share (RMS) of 1.34.

Better yet, my past claim that those who buy $150 sunglasses will also buy the iPad is wrong: the iPad had the lowest market share, 17%, in this segment.
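Relative market share here is a brand’s share divided by the share of its largest competitor. A small sketch of the calculation, with Kindle’s 37% and iPad’s 17% taken from the results above, but the remaining competitor split assumed purely for illustration:

```python
def relative_market_share(shares, brand):
    """Share of `brand` divided by the largest competing share."""
    competitors = [v for k, v in shares.items() if k != brand]
    return shares[brand] / max(competitors)

# Kindle 37% and iPad 17% come from the study above; the nook and
# generic-reader numbers are assumptions chosen so the shares sum to 100%.
shares = {"Kindle": 0.37, "nook": 0.276, "generic": 0.184, "iPad": 0.17}
print(f"Kindle RMS: {relative_market_share(shares, 'Kindle'):.2f}")
```

An RMS above 1.0 means the brand is the segment leader, which is why the 1.34 figure for the $150-sunglasses segment is notable.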

Clearly Kindle has a better chance with this segment, and Amazon’s marketing campaign is capitalizing on that. That said, the Kindle ads take on the iPad on readability. It is likely that the nook is a bigger competitor to Kindle for this segment than the iPad.