Follow the yellow brick road to startup success

This is a guest post by Hubert Palan, a good friend and classmate from the Haas School of Business, UC Berkeley. Hubert (Twitter: @hpalan) is the founder and CEO of ProductBoard.com, a platform for strategic product design and management headquartered in San Francisco, California. Prior to ProductBoard, Hubert was Vice President of Product Management at GoodData, where he managed GoodData’s disruptive platform business, built the entire front-end product management team from the ground up, and established and embodied modern principles of user experience design.

An additional note – I see Hubert as the model for taking risks. He decided to launch a startup not because it was the only option available to him but when he was succeeding in his career and had multiple choices at his disposal.

Have you written your guest post yet?


yellow brick road (Photo credit: hairchaser)

Let me tell you a short story. My wife, Jenna, and I went for a run early one morning in the Oakland hills. We had an idea of where we wanted to go, but we didn’t have a map, so we didn’t know how to get there. As we ran, we asked several dog-walkers for directions along the way. Since it had rained the night before and I was running in very thin-soled running shoes, we also asked whether their recommended path would be muddy. As it turns out, different people had conflicting opinions about both the directions and the quality of the road. Eventually, after a few wrong turns, we found the right path, but of course, contrary to what people had said, it was pretty muddy.

Why am I telling you this? I recently quit my job at GoodData and started working on my own startup. Our morning run got me thinking about the challenges you face as a founder building a startup or a new product. You have an idea of where you want to arrive: your dream target audience with a great need that your perfect solution will satisfy.

You need to create a product roadmap to guide your team there. But not only do you not have a map, you don’t even know if there are any roads out there, muddy or otherwise.

So you head out and start asking for advice. You talk to advisors, investors, and one potential customer after another, trying to discover the roads and choose the best and shortest one. Advisors and investors give you conflicting advice about the best way because, even though they are great runners, they have never run quite the same route. Various potential customers suggest different features they would want, though they are not really sure they even need them. They are like the dog-walkers, who are not sure about the route either, but since you ask, they make up an answer.

So you run in circles and take many wrong turns. You hope for smooth paths, and they turn out to be muddy. Hopefully though, you end up finding the right way and reaching your goal.

My friend and mentor Arthur J. Collingsworth always said: “Persistence, persistence, persistence.” So whether you are running in the hills, building a startup, or working on a new product, persevere and keep on running.



Demand Validation – Don’t stop with what is easy, available and fits your hypothesis

As days get hotter, customers line up at Good Humor ice cream trucks only to be disappointed to find that their favorite ice cream, the Toasted Almond Bar, is no longer available. Truck after truck, customer after customer, the same story. Customers cannot believe the trucks no longer carry their favorite product. (Full story here)

What is wrong with the business that does not know its own customers and their needs?

Why are they refusing to heed the validation they get from the ice cream trucks (their distribution channel), which are outside the corporate building and with the customers?

This is not because Unilever, which owns the Good Humor brand, is not customer-centric, but because it is looking at aggregate customer demand, not just a handful of customer inputs. These stories about disappointed customers are just that, anecdotes, and they do not provide demand validation.

One, two, …, a hundred people walking up and demanding a product is not enough. When Unilever looks at its flavor mix, the hero of this story is actually the least popular, bringing in only 3% of sales. Their data shows that the almond bar is popular only in the Northeast, especially among grown-ups (see the note on segmentation below).

Talking to a handful of grown-ups from the Northeast, just because these were the only ones available (like talking to a few people in Coupa Café in Palo Alto), is not demand validation. These anecdotes can only help you frame a better hypothesis about customer needs; they are not proof of the hypothesis itself.

Even if you were to pick 100 grown-ups from the Northeast (a sample size big enough to give a 95%-confidence answer with roughly a 10% margin of error), you would still end up with the wrong answer about your customers, because you are not doing random sampling from your entire target segment. (A rough check of that margin-of-error figure is sketched below.)
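
Here is a minimal back-of-the-envelope sketch of that parenthetical claim in Python. It assumes the worst-case proportion of 0.5 and the usual normal approximation; the sample sizes are illustrative, not taken from any Good Humor data.

    import math

    # z-scores for common confidence levels (normal approximation)
    Z = {0.90: 1.645, 0.95: 1.96, 0.99: 2.576}

    def margin_of_error(n, confidence=0.95, p=0.5):
        """Worst-case margin of error for a proportion estimated from n responses."""
        return Z[confidence] * math.sqrt(p * (1 - p) / n)

    print(round(margin_of_error(100), 3))    # ~0.098 -> about a 10% margin of error
    print(round(margin_of_error(1000), 3))   # ~0.031 -> about a 3% margin of error

The arithmetic only says how precise the estimate would be under random sampling; a convenience sample of whoever happens to be nearby tells you little about the whole segment, no matter how large n gets.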

When it comes to demand validation, do get out of the building. But when you return, don’t go building almond bars just because a few grown-ups in your Northeast neighborhood (or others at a boot camp) said so. You have some serious market analysis work to do.


Note on Segmentation: ‘Grown-ups in the Northeast’ is not a segment. It is a measure of the customer mix. We still do not know why these people love this specific flavor.

What can we learn about business from Sophie?

Sophie is an international star. Sophie is no ordinary person; it is a rubber teething toy for children. It brings in $29 million a year in sales. Only six years ago it was doing less than $8 million a year. No teething trouble here. For all these big numbers, Sophie is not backed by a big company with a large marketing budget.

While Mattel and other toy makers are plowing billions into new product development and marketing, a small company in France has captured the hearts and minds of millions of parents and the gums of their children. The revenue numbers and growth trajectory of Sophie are no child’s play.

Their journey to this point, their product decisions, and their marketing methods teach us valuable lessons about running a business, especially a startup.

What can we learn about business from Sophie?

  1. Sophie is simple: Your product need not be any more complex than this remarkable children’s toy. Less is more. Cut everything possible and deliver the minimum product the customer is willing to pay for. They will bite. Set the price at $25 too.
  2. User Experience must tap into all 5 senses:
    “The CEO hired a psychotherapist, who concluded the rubber chew toy tapped into all five senses: sight with its strongly contrasting colors; hearing with its easy squeak; taste because it is easy to chomp on; and the touch and smell of the natural rubber. The toy’s petite size made it easy for babies to grip.”
    Your product’s User Experience cannot be just about the color of the buttons. Remember, with the iPad and other devices your customers touch your product. Sooner or later, with the next new iPad, they will be tasting it too.
  3. Turn customers into marketers:
    “Parents create pressure on other parents”
    Enchant your customers with a remarkable product. Delighted customers will create significant social pressure on their friends and peers and create an environment where using any other product will be a shame. Rabid fans will be recommending your product on a scale of 0 to 10.
  4. Stick to what works:
    “The manufacturing of Sophie has changed little over the years”
    Do not chase every new technology that comes around in the name of efficiency and cost reduction. Your product’s intrinsic characteristics are defined by how it is made. When you change how it is made, you are changing the product and the User Experience.
  5. Pivot: Sophie got its start as a rubber balloon used to spy on German lines during World War I. Then, as the business model changed and the company got out of the building and talked to its customers, it became the present-day adorable product. It is clear that they applied all the lean startup principles, failed fast, and pivoted by hypothesis testing.

What is your excuse for not growing your sales four-fold like Sophie did?

Gurus Selling Old Knowledge Under New Brands

This is a long quote from a 1967 article published in the Journal of Industrial Economics*. This paper was written as a response to Galbraith’s theory of Consumer Sovereignty.

The sensible manufacturer works with the environment, not against it. He tries to satisfy desires, latent and patent, the consumer already has; it is much cheaper than creating new ones.

First, he tries to identify these desires. To do this he now has all the aids of marketing research. If he only researches into which detergent the consumer considers to wash cleanest, he may miss the fact that the consumer now also wants her detergent to be pleasantly perfumed.

That is why so many of the new products even of the biggest firms fail miserably in test market. It is rarely because they are poor products technically. It is because there is something in their mix of qualities that fails to appeal to the consumer.

Once the manufacturer has found out what he thinks the public wants, he has to embody it in a product.

When the manufacturer does find an answer at a reasonable price, he still has to sell it to the public. He may think the answer will work; he may feel the price to be reasonable. He does not know whether the public will see it as he does.

If you go further back, you will most likely find yet another article saying the same thing in more arcane language.

Fast forward to the present day and you find exactly the same concepts stated above, packaged in many different ways. Every Guru has a name for it, and they want us to believe none of the existing methods work. They brand these ideas as their own, e.g., “Trade-off”, “Customer Development”, “Freemium”, etc.

Unfortunately, when the audience suspends its skepticism or the Gurus are popular enough, their re-packaged ideas take root as original theses. Worse, the original ideas these new brands represent are cast aside as anachronisms.

There really is nothing new in marketing. Only new catch-phrases that fit the language of the time.

*You can find a copy of the said paper through your local library’s EBSCOhost.

Who Makes the Hypothesis in Hypothesis Testing?

Most of my work on pricing and consumer behavior relies on hypothesis testing. Be it finding the difference in means between two groups, running a non-parametric test, or making a causation claim, explicitly or implicitly I apply hypothesis testing. I make overarching claims about customer willingness to pay, and what factors influence it, based on hypothesis testing. The same is true of the most popular topic these days for anyone with a web page: A/B split testing. There is nothing wrong with these methods, and I bet I will continue to use them in all my other work.

We should note, however, that the use of hypotheses and the finding of statistically significant differences should not blind us to the fact that some amount of subjectivity goes into all of this. Another important distinction to note: despite the name, hypothesis testing does not test whether the hypothesis is validated; it tests whether the data fits the hypothesis, which we take as given. More on this below.

All these tests proceed as follows:

  1. Start with the hypotheses. In fact, you always start with two. The null hypothesis is the same for any statistical test:
    The Null hypothesis H0: The observed difference between subjects (or groups) is just due to randomness.
    Then you write down the hypothesis that you want to make a call on.
    Alternative hypothesis H1: The observed difference between subjects (or groups) is indeed due to one or more treatment factors that you control for.
  2. Pick the statistical test you want to use among those applicable to your case: a non-parametric test like the Chi-square test, which makes no assumptions about the distribution of the data (common in A/B testing), or a parametric test like the t-test, which assumes the data follows a Gaussian (normal) distribution.
  3. Select a confidence level for the test: 90%, 95%, or 99%, with 95% being the most common. This is completely subjective. What you are stating with this choice is that the results will be called statistically significant only if a difference this large could be caused by randomness in less than 5% (100% - 95%) of cases. This threshold is also expressed as a probability, the significance level, in this case 0.05, against which the test’s p-value is compared.
  4. Perform the test with proper random sampling. This needs more explanation but is beyond the scope of what I want to cover here. (A minimal sketch of these four steps for an A/B test follows this list.)
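
To make the four steps concrete, here is a minimal sketch of a Chi-square test on a hypothetical A/B test. The conversion counts, the variant labels, and the 95% confidence level are illustrative assumptions, not results from any study discussed here.

    from scipy.stats import chi2_contingency

    # Step 1: hypotheses.
    #   H0: the difference in conversion rate between A and B is due to randomness.
    #   H1: the difference is due to the treatment (the variant shown).
    # Step 2: a non-parametric Chi-square test on the contingency table below.
    observed = [
        [120, 880],  # variant A: conversions, non-conversions (made-up numbers)
        [152, 848],  # variant B: conversions, non-conversions (made-up numbers)
    ]

    # Step 3: significance level chosen up front (subjective choice, 95% confidence).
    alpha = 0.05

    # Step 4: run the test on the (randomly sampled) observations.
    chi2, p_value, dof, expected = chi2_contingency(observed)

    print(f"chi2 = {chi2:.3f}, p-value = {p_value:.4f}")
    if p_value < alpha:
        print("Reject H0: the observed difference is unlikely to be randomness alone.")
    else:
        print("Fail to reject H0: the data does not rule out randomness.")

Note that nothing in this output tells you whether H1 was the right hypothesis to write down in the first place; the test only asks how surprising the data would be if randomness alone were at work.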

As you can see, we, the analyst/decision maker, make up the hypothesis, and the test treats that hypothesis as given. We did the right thing by writing it down first. (A common mistake in many A/B tests and data mining exercises is writing the hypothesis after the test.)

What we are really testing is this: taking the hypotheses as given, and assuming randomness (H0) is all that is at work, what is the probability of observing data D this extreme?

This probability is expressed as P(D|H0), and it is what the p-value reports. Statistical significance here means P(D|H0) < 0.05. At no point do we compute the probability that our hypothesis H1 is true given the data.

When we say we accept H1, we are really saying that H0 (randomness) cannot be the reason, and hence H1 must be true. In doing so we ignore the fact that the observed data could be explained by any number of alternative hypotheses. Since we wrote the original hypothesis, if we did not base it on proper qualitative analysis, we could be wrong despite the fact that our tests yield statistically significant results.

This is why you should never launch a survey without first doing focus groups and customer interviews. This is why you don’t jump into statistical testing before understanding enough about the subjects under study to frame a relevant hypothesis. Otherwise you are, as some wrote to me, using gut feel or pulling things out of thin air and accepting it simply because the data shows enough evidence to reject the null hypothesis.

How do you come up with your hypotheses?

Look for my next article on how this is different in Bayesian statistics.