Summary: A great business model innovation that points to the future of unbundled pricing. But is Google Customer Surveys an effective marketing research tool? Do not cancel your SurveyGizmo subscription yet.
Google’s new service, Customer Surveys, is truly a business model innovation. It unlocks value by creating a three-sided market:
- Content creators who want to monetize their content in an unbundled fashion (charge per article, charge per access, etc.)
- Readers who want access to paid content without having to subscribe to the entire publication or muddle through micro-payments (pay per access)
- Brands seeking customer insights, who are willing to pay for them but have been unable to find a reliable or affordable way to get them
Google charges brands 10 cents per response, pays 5 cents to the content creator, and keeps the rest for enabling this three-sided market.
A business model is nothing but value creation and value capture. Business model innovation means innovation in value creation, value capture, or both. By adding a third side with its own value creation and capture, Google has created an innovative three-way exchange to orchestrate the business model.
This also addresses the main problem with unbundled pricing: the operational challenges of micro-payments and metering.
But I cannot help but notice severe sloppiness in their product and messaging.
Sample size recommendation: Google recommends that brands sign up for 1,500 responses. Their stated reason: “recommended for statistical significance”.
Statistical significance has no meaning for surveys unless you are doing hypothesis testing. When brands are trying to find out which diaper bag feature is important, they are not testing a hypothesis.
What they likely mean is a confidence interval (a margin of error at a certain confidence level). So what is the margin of error at the 95% confidence level? With 1,500 samples, assuming a population of 200 million, it is about 2.5%. But you do not need that much precision given that you already have sampling bias from opting for Google Customer Surveys in the first place. Most brands would do fine with a 5% margin of error, which requires only 385 responses, or 10%, which requires only 97.
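The arithmetic behind those figures is the standard sample-size formula for a proportion. A minimal sketch, assuming the worst-case p = 0.5, a 95% confidence level (z = 1.96), and the 200 million population used above:

```python
import math

def sample_size(margin, z=1.96, p=0.5, population=200_000_000):
    """Responses needed for a given margin of error (worst-case p = 0.5)."""
    n0 = (z ** 2) * p * (1 - p) / margin ** 2
    # Finite population correction; negligible at this population size.
    return math.ceil(n0 / (1 + (n0 - 1) / population))

def margin_of_error(n, z=1.96, p=0.5):
    """Margin of error achieved by a sample of n responses."""
    return z * math.sqrt(p * (1 - p) / n)

print(sample_size(0.05))                 # 385 responses for +/- 5%
print(sample_size(0.10))                 # 97 responses for +/- 10%
print(round(margin_of_error(1500), 3))   # ~0.025, i.e. +/- 2.5% at n = 1,500
```

Note how quickly precision gets expensive: halving the margin of error roughly quadruples the required sample, which is why 1,500 responses buys only a modest improvement over 385.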
Recommending 1,500 responses is at best a deliberate pricing anchor, at worst an error.
If they really do mean hypothesis testing, one can use a survey tool for that, but it does not come through in the rest of their messaging, which is all about response collection. Even then, the 1,500-response suggestion is questionable: for most statistical hypothesis testing, 385 samples are enough (Rethinking Data Analysis, International Journal of Market Research, Vol. 52, Issue 1).
One question at a time: Brands can create surveys with multiple questions, but respondents only ever see one question at a time.
Google’s pitch: “With Google Consumer Surveys, you can run multi-question surveys by asking people one question at a time. This results in higher response rates (~40% compared with an industry standard of 0.1 – 2%) and more accurate answers.”
Do not cancel your SurveyGizmo subscription yet. There is a reason why marketing researchers carefully craft a multi-question survey: they want responses on a per-user basis, so they can run factor analysis, segment the data using cluster analysis, or run regression analysis between survey variables.
Google also claims: “The system will automatically look for correlations between questions and pull out hypotheses.”
I am willing to believe there is a way for them to “collate” (not correlate, as they say) the responses to multiple questions of the same survey by each user and present them as one unified response set. If you can string together responses to multiple questions on a per-user basis, you can do all the statistical analysis I mentioned above.
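The collation step itself is straightforward. A minimal sketch, assuming a hypothetical raw export with one row per (respondent, question, answer), the shape a one-question-at-a-time tool would naturally produce:

```python
from collections import defaultdict

# Hypothetical raw export: one row per (respondent, question, answer).
raw = [
    ("u1", "Q1", "Yes"), ("u1", "Q2", "Blue"),
    ("u2", "Q1", "No"),  ("u2", "Q2", "Red"),
    ("u3", "Q1", "Yes"),  # u3 dropped out before seeing Q2
]

def collate(rows):
    """Collate single-question responses into one record per respondent."""
    by_user = defaultdict(dict)
    for user, question, answer in rows:
        by_user[user][question] = answer
    return dict(by_user)

collated = collate(raw)
print(collated["u1"])  # {'Q1': 'Yes', 'Q2': 'Blue'}
```

The catch is visible in the sketch: because each question is answered independently, drop-off between questions leaves holes (u3 has no Q2), so the per-user analyses above would run on the smaller set of complete records.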
But I do not get what they mean by “look for correlations between questions”, and I definitely do not get “pull out hypotheses”. It is we, the decision makers, who make the hypotheses in hypothesis testing. We are paid to make better hypotheses that are worthy of testing.
If we take the phrase “pull out hypotheses” at face value, it means we need yet another data collection effort (from a completely different source) to test the hypotheses they pulled out for us, because you cannot use the very data that suggested a hypothesis to test it as well.
Net-net: an elegant business model innovation with severe execution errors.