Google Customer Surveys – True Business Model Innovation, But…

Summary: Great business model innovation that points to the future of unbundled pricing. But is Google Customer Surveys an effective marketing research tool? Do not cancel your SurveyGizmo subscription yet.

Google’s new service, Customer Surveys, is truly a business model innovation. It unlocks value by creating a three-sided market:

  1. Content creators who want to monetize their content in an unbundled fashion (charge per article, charge per access, etc.)
  2. Readers who want access to paid content without having to subscribe to the entire content or muddle through micro-payments (pay per access)
  3. Brands seeking customer insights, willing to pay for them, but so far unable to find a reliable or cheap way to get them
When readers want to access premium content, they can get it by answering a question posed by one of the brands instead of paying for access. Brands create surveys using Google Customer Surveys and pay for each response they receive.

Google charges brands 10 cents per response, pays 5 cents to the content creator, and keeps the rest for enabling this three-sided market.

A business model is nothing but value creation and value capture. Business model innovation means innovation in value creation, value capture, or both. By adding a third side with its own value creation and capture, Google has orchestrated an innovative three-way exchange.
This also addresses the problems with unbundled pricing, mostly the operational challenges of micro-payments and metering.

But I cannot help but notice severe sloppiness in their product and messaging.

Sample size recommendation: Google recommends that brands sign up for 1500 responses. Their reason: “recommended for statistical significance”.
Statistical significance has no meaning for surveys unless you are doing hypothesis testing. When brands are trying to find out which diaper bag feature is important, they are not doing hypothesis testing.

What they likely mean is confidence interval (or margin of error at a certain confidence level). What is the margin of error at a 95% confidence level? With 1500 samples, assuming a population size of 200 million, it is 2.5%. But you do not need that much precision given the sampling bias you already accept by opting for Google Customer Surveys. Most would do well with just a 5% margin of error, which requires only 385 responses, or 10%, which requires only 97.
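
For those who want to check the arithmetic, here is a quick sketch of the standard sample-size formula behind these numbers. The function and its defaults are mine, using the conservative p = 0.5 assumption and a 95% confidence level (z = 1.96):

```python
import math

def sample_size(margin_of_error, z=1.96, population=200_000_000):
    """Responses needed to estimate a proportion at a given margin of error.

    Uses the conservative p = 0.5 and a finite population correction,
    which is negligible for a population of 200 million.
    """
    n0 = (z ** 2) * 0.25 / margin_of_error ** 2
    return math.ceil(n0 / (1 + (n0 - 1) / population))

print(sample_size(0.025))  # 1537 -- roughly Google's 1500 recommendation
print(sample_size(0.05))   # 385
print(sample_size(0.10))   # 97
```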

Recommending 1500 responses is at best a deliberate pricing anchor, at worst an error.

If they really mean hypothesis testing, one can use a survey tool for that, but it is not coming through in the rest of their messaging, which is all about response collection. The 1500-response suggestion is still questionable: for most statistical hypothesis testing, 385 samples are enough (Rethinking Data Analysis, published in the International Journal of Marketing Research, Vol 52, Issue 1).

Survey of one question at a time: Brands can create surveys with multiple questions in them, but respondents will only see one question at any given time.
Google says,

With Google Consumer Surveys, you can run multi-question surveys by asking people one question at a time. This results in higher response rates (~40% compared with an industry standard of 0.1 – 2%) and more accurate answers.
It is not a fair comparison regarding response rate. Besides, we cannot ignore the fact that the response may be just a mindless mouse click by a reader anxious to get to their article. For the same reason, they cannot claim “more accurate” either.

Do not cancel your SurveyGizmo subscription yet. There is a reason why marketing researchers carefully craft a multi-question survey. They want to get responses on a per-user basis, run factor analysis, segment the data using cluster analysis, or run regression analysis between survey variables.

Google says,

The system will automatically look for correlations between questions and pull out hypotheses.

I am willing to believe there is a way for them to “collate” (not correlate, as they say) the responses to multiple questions of the same survey by each user and present them as one unified response set. If you can string together responses to multiple questions on a per-user basis, you can do all the statistical analysis I mentioned above.

But I do not get what they mean by “look for correlations between questions” and definitely do not get “pull out hypotheses”. It is us, the decision makers, who make the hypothesis in hypothesis testing. We are paid to make better hypotheses that are worthy of testing.

If we accept the phrase “pull out hypotheses” to be true, then it really means we need yet another data collection process (from a completely different source) to test the hypotheses they pulled out for us, because you cannot use the very data you used to form a hypothesis to test it as well.

Net-Net, an elegant business model innovation with severe execution errors.

One right price is better than three wrong prices: SurveyGizmo Simplifies Pricing

This post is my interview with CEO of SurveyGizmo, Christian Vanek on their pricing strategy.

A few weeks back I wrote about the continuing changes to SurveyGizmo pricing. It turned out they had been A/B testing their pricing for a while and I had slipped through the cracks, seeing both offers. Last week I sat down (over the phone) with SurveyGizmo CEO Christian Vanek and their web marketing lead Kipp Chambers for a conversation on their new pricing. Christian happily shared with me the genesis and details of this simplified pricing.

The details are sure to add a new dimension to the thinking of most startups that see pricing as a simple freemium model or treat it as a tactical afterthought. Their analytical process, their understanding of their customer mix, and their willingness to go against conventional wisdom are exceptional traits that deserve to be commended.

Pricing is a lot more than an eye-candy pricing page!

What was their pricing before the change?

Take a look at their previous pricing page. The pricing options and the pricing page design look not much different from numerous other webapps out there. In fact, there are WordPress templates available for this classic three-column design with the “suggested version” highlighted.

One glaring difference: while most webapps include their free version as one of the three presented, SurveyGizmo showed its free version as a footnote.
Otherwise this is nothing more than an instance of what Hal Varian described as Goldilocks pricing.

What is the change?

Gone are the multiple editions and the pricing-page eye candy nudging customers to a specific edition. There is just one edition with all the features, including the advanced features that used to be available only in the higher priced versions. Most importantly, they used to limit the number of responses per month, and now they have eliminated that limit as well.

In the past they had a cheaper $19 plan, even though it was not prominently featured on the pricing page. Now that is gone, along with the $159 Enterprise plan that was prominently featured and highlighted in the middle of the pricing page.

After this pruning, all that is left is just one version – no name for it (like the new iPad) – offered at $50 for the first user and a flat fee of $20 per additional user.
Another point to note is that there is no non-linear pricing built into the price list. Whether it is 100 additional users or 1 additional user, the price is the same: $20 per additional user.
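
To make the linearity concrete, the whole price list reduces to a one-line function (a sketch; the function name is mine):

```python
def monthly_price(users: int) -> int:
    """$50 for the first user, a flat $20 for each additional user."""
    return 50 + 20 * (users - 1)

print(monthly_price(1))    # $50
print(monthly_price(2))    # $70
print(monthly_price(101))  # $2050 -- the 101st user costs the same $20 as the 2nd
```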

To discuss this change, the drivers behind it and how they arrived at it, I talked to SurveyGizmo’s Christian Vanek, their CEO, and Kipp Chambers. Here is what they had to say.

Why are you open to sharing this information? Isn’t pricing strategy meant to add to your competitive advantage?

“We have a company policy of no secrets,” said Vanek. He stayed true to this policy even when I later asked him about SurveyGizmo’s future product roadmap. “Regarding SurveyGizmo’s pricing, there is nothing really to be protective about. As soon as the pricing page went up our competitors likely saw it. Or they will know when your article goes up. Even before this, people were copying our pricing plans and pricing page, down to the names of our plans and their feature sets. Once they had comparable plans they were competing on price.” Vanek adds that he could either spend all his energy protecting ideas or spend it on better execution and coming up with newer ideas. The choice is clear to him.

What are the drivers for this major pricing change?

We had our $19 plan, the $49 plan and the $159 plan. We found several key things from our analysis of our customers.

  1. Very few people were opting for the $19 plan. Some of those who chose it for the price realized they did not have all the features they needed and were calling us about that. In most cases we ended up enabling the additional features for them. We are not going to tell our customer, ‘you need to pay extra just for that feature’. Some upgraded to a higher priced plan just for a brief period to use the advanced features and downgraded right away when their job was done.
  2. Those who picked the $159 plan were using only 10% of the features they got with it. We were taking a lot more money from customers who were not taking full advantage of what they were paying for.
  3. What if a customer wants only one of the features offered in the higher priced version? Why should they pay more just for that? We tried some kind of a la carte pricing for a time, but it was not the best experience for our customers.
  4. Surprisingly, customer satisfaction was low among those who chose the lowest priced plan and high among those who chose the higher priced plans. You could argue that the purchasing decision itself may have something to do with the satisfaction rating.

Considering all this, we thought there was really only one plan that served customer needs, and presenting three options was likely aggravating customer choice by adding to their cognitive costs. So we decided to test this hypothesis.

This is so different from what every other webapp startup is doing.

“Presenting three plans, any three plans, at different price points and hoping the customer will pick the one they want is a shotgun approach to customer segmentation. It became apparent to us that we should retire the shotgun and get a sniper.” (Vanek calls this his Call of Duty metaphor, “almost any business lesson can be learned from Call of Duty”, and adds The Lord of the Rings after my prompting*.)

“I think we are now seeing the end of the freemium model: signing up customers for free and then trying to up-sell. Our value is in providing both a great product and the great service to go with it, to customers who need and value our product.”

So you are giving up those customers who are willing to pay $20?

These customers were never ours to begin with. Customers who want a free survey tool or want to pay $10 or $20 a month have always been SurveyMonkey’s customers. We are okay with that. If a customer is happy with a competitor, we are okay with that. These were the customers who ended up getting the features from the higher priced plan anyway, because we did not want to say to them, that is extra.

What about the profits lost by eliminating the $159 plan?

“This was our fear as well and we discussed it internally. It would seem silly to give up on the higher priced plan. In essence you have to bring in roughly three new customers at the $50 level for every $159 customer you give up by eliminating this plan. We asked internally, can we do this? Happy to say we are doing very well after we moved to the single price plan.”

“When we discuss our features with customers, showing them how we compare feature for feature with competitors, and then show them the price, they ask, ‘okay, why such a low price? What is the catch?’ There is no catch. We don’t have to overcharge for the product.”

About the change process?

“We did lots of A/B testing. We found that the customer’s decision was easier with just one pricing option. In fact, when we split-tested the simplified plan that charged $50 for the first user and $20 for each additional user, we found customers were signing up more users than they did with three pricing options. We serve the marketing research field; we should be doing our homework before such a change. Only after a lengthy testing process and data analysis did we decide to go with this change.”

It is acceptable for a pricing geek like myself to talk about cognitive cost; how is it that you are thinking about it?

Vanek seems to believe this is common sense. A customer who has to weigh multiple plans, their features, and their price points suffers a significant cognitive cost. “We work with lots of researchers who do cognitive research and we understand the cost to the customer from choice.”

Final words?

By eliminating the three plans and going to a single plan we have narrowed the field. We are targeting only those customers who want and value the advanced features.


*Talking of The Lord of the Rings, Vanek says his superpower is that he has the voice of Saruman.

Pricing Multiple Editions – SurveyGizmo Takes a New Approach

My favorite survey platform is SurveyGizmo. In the past I have written about its pricing and how it effectively used multiple versions and visual nudges on its pricing page. SurveyGizmo has been experimenting with its editions and pricing page since then: from presenting five options, to four, and now only three when you visit their pricing page.

Before I point out the most critical change in their pricing, let us look at some of the secondary changes:

  1. What is missing in the three options? What is the one version you see on any pricing page you visit but is missing here? The free version. It is not prominently featured on SurveyGizmo’s page. It is still there, but as a footnote. It is an indication that their customer mix has changed as they move into the next phase of product adoption.
    Their current customer mix is more likely made up of enterprise customers with the willingness to pay for a survey platform and a budget to match. The focus has likely shifted from attracting freeloaders who may never convert to those who think differently about the product and have a different buying process.
  2. What do you see about the prices? The highest priced option is listed first and the middle option is prominently featured (in the middle, too). This points to the size of the organizations, or groups within organizations, they are targeting. While you may notice the two options as different, you will later see this difference essentially go away.
  3. What do you think about the unlimited number of responses in all three? Most webapps differentiate based on number of responses or an equivalent (like gigabytes of storage in the case of Dropbox, or number of events per month in the case of Kissmetrics). SurveyGizmo has done away with number of surveys or number of responses as the pricing meter. It is a very good approach, as most customers likely do not see that many responses and it does not make sense as a meter to attach pricing to.

Now all these points are for naught when you try to upgrade your free account to a paid account. Despite what the pricing page says, they have done away with any feature differences between the different editions. In essence there is just one version of the product, with all the features.

Well, the free edition comes with limitations; otherwise you would be happy with free. Beyond that there are no differences in the power of the tool, types of questions, reports, number of emails you can send, etc.

If they have done away with the differences, what is the pricing meter then? They rely on the number of users. Want access to all these features? You can get it for $50, and after that it is $20 for each additional user on the account.

Why have they done away with multiple editions? If one price is good, aren’t two better?

When you have multiple versions (editions), they should differ in at least two dimensions. The mandatory dimension is price; you choose the second based on what the customer values and is willing to pay the price difference for.

For example, take the MacBook Air. Its versions differ in three choice dimensions: price, screen size, and capacity. Clearly customers see a value difference between the 11″ and 13″ screens and are willing to pay for it.

But if customers do not see a value difference between versions, the versions serve no purpose. In fact they add to the cognitive cost customers incur in making their purchasing decision. When SurveyGizmo had Personal, Professional and Enterprise editions, they tried to limit advanced features like custom scripts to certain versions. It is likely that only a small percentage of customers cared about these, and for the rest the most essential features of the survey platform were more than enough.

Hence their decision to get rid of multiple versions/plans/editions and charge only based on number of users.

How do you decide on offering multiple versions of your product?

Related Articles:

  1. Why there is only one version of Apple TV but three versions of Roku?
  2. Why are raspberry and strawberry yogurts priced the same?
  3. Should your Versioning differ in quantity or benefits?

Note: I have used the words plans, editions, and versions interchangeably in this article.

Gurus Selling Old Knowledge Under New Brands

This is a long quote from a 1967 article published in the Journal of Industrial Economics. The paper was written as a response to Galbraith’s theory of consumer sovereignty.

The sensible manufacturer works with the environment, not against it. He tries to satisfy desires, latent and patent, the consumer already has; it is much cheaper than creating new ones.

First, he tries to identify these desires. To do this he now has all the aids of marketing research. If he only researches into which detergent the consumer considers to wash cleanest, he may miss the fact that the consumer now also wants her detergent to be pleasantly perfumed.

That is why so many of the new products even of the biggest firms fail miserably in test market. It is rarely because they are poor products technically. It is because there is something in their mix of qualities that fails to appeal to the consumer.

Once the manufacturer has found out what he thinks the public wants, he has to embody it in a product.

When the manufacturer does find an answer at a reasonable price, he still has to sell it to the public. He may think the answer will work; he may feel the price to be reasonable. He does not know whether the public will see it as he does.

If you go further back you most likely will find yet another article saying the same thing in more arcane language.

Fast forward to the present day and you have exactly the same concepts packaged in so many different ways. Every guru has a name for them; they want us to believe none of the existing methods work. They brand these ideas as their own, e.g., “Trade-off”, “Customer Development”, “Freemium”, etc.

Unfortunately, when the audience suspends its skepticism, or when the gurus are popular enough, their re-packaged ideas take root as original theses. Worse, the original ideas these new brands represent are cast aside as anachronisms.

There really is nothing new in marketing. Only new catch-phrases that fit the language of the time.

On Focus Groups: Anyone can convene a group, ask questions, and write up the answers

There is a great book titled “Prove It Before You Promote It” that I read a while back. It has some very sobering remarks on focus groups and how they are applied in product development.

I am reproducing in its entirety author Steve Cuno’s commentary on focus groups:

Many companies hold focus groups. They fill a room with 10 to 20 carefully selected respondents and ask them questions. That much is fine. A problem occurs only when companies mistake the resultant feedback for data—and make decisions based on what they hear.

Focus groups, with an easy-to-imitate format, are a great place for incompetents to convince themselves and unsuspecting clients that they know what they’re doing. Anyone can convene a group, ask questions, and write up the answers.

I have seen focus group reports that say things like, “Seventy percent felt the packaging was too pink” or “Eighty percent said if you open a store on the West Side, they’ll shop there.” I have seen the people running the focus groups, whose role is to remain unbiased, ask leading questions like, “Would you be more or less likely to shop at a store that advertises on violent cartoons aimed at small children?”

Amazingly and sadly, businesses actually base big decisions on these groups. They make the package less pink. They open a store on the West Side. They pull their ads from Batman cartoons. And all too often they later find that consumers don’t behave the way they said they would in the focus group.

I completely agree. I have written previously about the relevance of focus groups; this book does a much better job of teaching us the pitfalls of misusing focus groups.

What the book says about focus groups – asking a few (leading) questions and making product decisions based on the feedback of a handful of people – is very relevant to basing product and startup strategy on interviews with a handful of customers.

When you are talking to customers, you are still forming hypotheses, not testing them. The hard part is not testing the hypotheses but forming better hypotheses to test. Focus groups and customer interviews help us make better, testable hypotheses.

If you are in marketing, run a startup, manage a product, or do A/B tests, you should definitely read this book.

Who Makes the Hypothesis in Hypothesis Testing?

Most of my work on pricing and consumer behavior relies on hypothesis testing. Be it finding a difference in means between two groups, running a non-parametric test, or making a causation claim, explicitly or implicitly I apply hypothesis testing. I make overarching claims about customer willingness to pay, and what factors influence it, based on hypothesis testing. The same is true for the most popular topic these days for anyone with a web page: A/B split testing. There is nothing wrong with these methods and I bet I will continue to use them in all my other work.

We should note, however, that the use of hypotheses and the finding of statistically significant differences should not blind us to the fact that some amount of subjectivity goes into all of this. Another important distinction to note: despite the name, hypothesis testing does not test whether the hypothesis is valid, but whether the data fits the hypothesis, which we take as given. More on this below.

All these tests proceed as follows (a code sketch follows the list):

  1. Start with the hypothesis. In fact you always start with two; the null hypothesis is the same for any statistical test.
    The null hypothesis H0: the observed difference between subjects (or groups) is just due to randomness.
    Then you write down the hypothesis that you want to make a call on.
    The alternative hypothesis H1: the observed difference between subjects (or groups) is indeed due to one or more treatment factors that you control.
  2. Pick the statistical test you want to use among those available for your case: a non-parametric test like Chi-square that makes no assumption about the distribution of the data (common in A/B testing), or a parametric test like the t-test that assumes the data follows a Gaussian (normal) distribution.
  3. Select a significance threshold, or confidence level, for the test: 90%, 95%, or 99%, with 95% being the most common. This is completely subjective. What you are stating with this choice is that the results are statistically significant only if randomness could have produced them in less than 5% (100% − 95%) of cases. The threshold is also expressed as a p-value cutoff (a probability), in this case 0.05.
  4. Perform the test with random sampling. This needs more explanation but is beyond the scope of what I want to cover here.
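
Here is what those four steps look like for a simple A/B test, sketched in Python with made-up numbers. The counts and variable names are mine; only scipy's chi2_contingency is a real library call:

```python
from scipy.stats import chi2_contingency

# Step 1: H0 -- any difference between variants is due to randomness.
#         H1 -- the difference is due to the treatment.
# Step 2: chi-square test on the contingency table (non-parametric).
observed = [
    [120, 880],  # variant A: 120 conversions, 880 non-conversions
    [150, 850],  # variant B: 150 conversions, 850 non-conversions
]

# Step 3: the subjective choice of significance threshold.
alpha = 0.05  # 95% confidence level

# Step 4: perform the test.
chi2, p_value, dof, expected = chi2_contingency(observed)

if p_value < alpha:
    print(f"p = {p_value:.4f}: reject H0; the difference is statistically significant")
else:
    print(f"p = {p_value:.4f}: cannot rule out randomness (H0)")
```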

As you can see, we, the analysts/decision makers, make up the hypothesis and we treat it as given. We did the right thing by writing it down first. (A common mistake in many A/B tests and in data mining exercises is writing the hypothesis after the test.)

What we are testing is how likely the observed data D is under a hypothesis we take as given.

This is expressed as P(D|H). Statistical significance here means the data would be very unlikely under the null: P(D|H0) < 0.05. It does not mean P(H1|D) > 0.95; we never compute the probability of the hypothesis given the data.

When we say we accept H1, we are really saying H0 (randomness) cannot be the reason and hence H1 must be true. In doing so we ignore the possibility that the observed data can be explained by any number of alternative hypotheses. Since we wrote the original hypothesis, if we did not base it on proper qualitative analysis then we could be wrong despite the fact that our test yields statistically significant results.

This is why you should never launch a survey without doing focus groups and customer interviews. This is why you don’t jump into statistical testing before understanding enough about the subjects under study to frame relevant hypotheses. Otherwise you are, as some wrote to me, using gut feel or pulling things out of thin air and accepting them simply because there is not enough evidence in the data to overturn the null hypothesis.

How do you come up with your hypotheses?

Look for my next article on how this is different in Bayesian statistics.