You are more likely to retweet a HubSpot post than this …

The post is titled, “10 horrifying stats about display advertising”. In an attempt to tell stories and relate arcane data to something familiar, the author goes on to make some likelihood comparisons:

You are more likely to complete NAVY SEAL training than click a banner ad.

You are more likely to get a full house while playing poker than click on a banner ad.

You are more likely to birth twins than click a banner ad.

You are more likely to get into MIT than click a banner ad.

You are more likely to survive a plane crash than click on a banner ad.

It is pointless and simply wrong to make such comparisons based on their respective frequentist probabilities. Let us say one tenth of one percent of people who see a display ad click on it. Let us say one percent of people who sign up for Navy SEAL training complete it.

That does not mean these two are comparable, nor can you say that the chance of ANYONE completing Navy SEAL training is far better than that of clicking on a display ad. What is missing here are the hidden hypotheses we take for granted (the context).

Think of those who sign up for Navy SEAL training. Consider their drive, motivation, physical fitness, mental strength, and the initial screenings they survived to get to the training stage. That process has already weeded out people like us. If indeed 1% of those who attempt SEAL training complete it, that figure is the conditional probability

P(Complete SEAL training | Passed all screenings and have wherewithal to complete it)

This is not the probability for a random person picked off the street, which is almost certainly close to 0.
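To make the distinction concrete, here is the arithmetic in a few lines. The 0.1% click rate and the 1% completion rate come from the discussion above; the screening pass rate for a random person is an assumed figure, purely for illustration:

```python
# Illustrative numbers: the click rate and the completion rate among those
# who reach training are from the discussion above; the screening pass rate
# is an assumed figure, purely for illustration.
p_complete_given_screened = 0.01   # P(complete | passed all screenings)
p_pass_screening = 0.001           # assumed: P(random person passes screenings)
p_click_banner = 0.001             # the 0.1% display ad click rate above

# For a random person off the street, both hurdles must be cleared:
p_complete_random = p_pass_screening * p_complete_given_screened

print(p_complete_random)                    # far below the banner click rate
print(p_complete_random < p_click_banner)   # True
```

The unconditional chance is the product of both hurdles, which is why it ends up far below the banner click rate even though the conditional 1% looks larger than 0.1%.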

It is indeed horrifying that they would compare such unrelated events and their conditional probabilities to make their case about display advertising.

Now to the title of this article. Determining whether or not this is a relevant comparison of likelihoods is left as an exercise to the reader.

See also: Chances of winning Mega Millions vs. those of dating a supermodel.

How do entrepreneurs estimate the probability of picking a red ball from an urn?

Stay with me on this till the end of this article.

Suppose you asked me the probability of picking a red ball from an urn that has 10 red balls and 10 green balls. I would say the answer is 1/2. I cannot say with certainty what the next pick will be, but if you picked a ball enough times I can say 50% of those picks will be red (yes, you return the ball after each pick).
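A quick simulation shows that frequentist intuition at work: draw with replacement from the known urn and the red fraction settles near 1/2.

```python
import random

random.seed(42)
urn = ["red"] * 10 + ["green"] * 10

# Draw with replacement many times; the fraction of red draws approaches 1/2.
draws = 100_000
reds = sum(random.choice(urn) == "red" for _ in range(draws))
print(reds / draws)   # close to 0.5
```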

But what if you brought a mystery urn (of unknown size) and didn’t reveal how many red and green balls were in it? Heck, what if there were many more colors than just red and green? I would have no idea. My best approach would be to guess a number (using my gut feel or intuition) and say something like 1/1000.

But as someone with too analytical a bent I would find it hard to play this game. You didn’t tell me the total number of balls, the different colors, or whether the urn had a red ball at all. I cannot even apply Bayesian reasoning to refine my answer.

Estimating entrepreneurial outcomes, says Saras Sarasvathy, is like estimating the chances of picking red balls from such a mystery urn. According to Sarasvathy, entrepreneurial thinking is:

Whatever the initial distribution of balls in the urn, I will continue to acquire red balls and put them in the urn. I will look for other people who own red balls and induce them to become partners and add to the red balls in the urn. As time goes by, there will be so many red balls in the urn that almost every draw will obtain one. On the other hand, if I and my acquaintances have only green balls, we will put them in the urn, and when there are enough, will create a new game where green balls win

It is hard not to imagine someone you know doing exactly this – hustling to find those with red balls to add to their urn (discovering a few early adopters and building on them) or pivoting to redefine the game as picking green balls, not red.
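Sarasvathy's dynamic urn can be turned into a toy simulation; the starting mix and the pace of adding red balls below are assumed numbers chosen only to show the mechanism:

```python
import random

random.seed(7)

def chance_of_red(urn, trials=10_000):
    """Estimate the probability of drawing red, with replacement."""
    return sum(random.choice(urn) == "red" for _ in range(trials)) / trials

urn = ["green"] * 95 + ["red"] * 5     # assumed unfavorable starting mix
print(round(chance_of_red(urn), 2))    # roughly 0.05 to start

# Round after round, the entrepreneur adds red balls to the urn.
for _ in range(20):
    urn.extend(["red"] * 25)

print(round(chance_of_red(urn), 2))    # now most draws come up red
```

The initial distribution barely matters in the end; what matters is the act of changing the urn rather than passively estimating it.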

That is the clear distinction between the mind of an entrepreneur with irrational optimism and that of a rational person. And I do not use rational and irrational to say one is better than the other; rather, I treat irrationality as an equally positive trait as rationality (or rationality as an equally negative trait as irrationality), the way Dan Ariely did.

Rational people, and there are many of us, refuse to play the guessing game of picking red balls from unknown mystery urns. But if picking red balls is important to significantly improving our lives, someone has to do it. No single person, however, can keep trying with a single urn. And most run out of cash and time before they can add enough red balls to their urn.

That is why we need many, many entrepreneurs with irrational optimism to keep picking balls from their own mystery urns. Most end up picking all kinds of different balls, but a few will find urns and change them in such a way as to always draw red balls.

Where do you fall?

 

What would $100 Billion Valuation for @Evernote Look Like?

In a recent article in Inc magazine, Evernote CEO Mr. Phil Libin wrote:

“there is a good chance that it will be worth $100 billion in a few years”

You likely want to ask what “good chance” means.

Mr. Libin wrote this in the context of Evernote’s current $1 billion valuation, comparing it to the valuation of The New York Times. Mr. Libin makes a very valid point that such comparisons are pointless and that valuations are based on the expected future value of a business’ growth.

I agree.

Most public companies have relatively predictable levels of growth, so their valuations are heavily based on the current values of their businesses. In other words, few investors expect The New York Times‘s profits to grow tenfold in the next few years.

Such valuations based on future growth are valid as long as they are computed by taking into account all possible future scenarios and not just the most optimistic outcomes. In many cases, and I don’t mean it is the case with Evernote, we not only overestimate the size of positive outcomes but also overestimate the chances of those outcomes. In such cases the valuations become untethered from reality.

Back to the $100 billion valuation for Evernote. What would it look like?

Let us say it gets the same revenue multiple as Google, 5.51 (say 5 for ease of math). That would mean $20 billion in yearly revenue. Where would that come from?

From its current sources I estimate that Evernote makes $63 to $84 million a year from 34 million users (1.4 million paying subscribers). If the current business model is the only option, that would mean one of the following (or a combination):

  1. Every paying customer generates $45 a year, meaning 444 million paying customers (13 times current total users and roughly 317 times current paying subscribers)
  2. 50% of customers paying, meaning 888 million users
  3. 100 million customers (not just users), meaning $200 a year in revenue per customer – that means either their subscription price goes up or they find other ways to monetize customers. $200 a year from subscriptions alone does not make sense (The New York Times’ yearly subscription costs $195 and it did not find 100 million subscribers). As for other revenue sources, even Google and Facebook have not found a way to get $200 per customer per year.
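The arithmetic behind these three scenarios can be checked in a few lines, using the $20 billion revenue target and the user counts above:

```python
target_revenue = 20e9        # $20B yearly revenue implied by a ~5x multiple
current_users = 34e6
current_paying = 1.4e6

# Scenario 1: $45 per paying customer per year
paying_needed = target_revenue / 45
print(paying_needed / 1e6)             # ~444 million paying customers
print(paying_needed / current_users)   # ~13x current total users
print(paying_needed / current_paying)  # ~317x current paying subscribers

# Scenario 2: if 50% of customers paid, the user base needed
print(2 * paying_needed / 1e6)         # ~888 million users

# Scenario 3: 100 million paying customers
print(target_revenue / 100e6)          # $200 per customer per year
```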

Even if Evernote does more deals like the Moleskine tie-up, each generating $4-$6 million a year, it would take thousands of such deals to get to $20 billion a year in sales.

That leaves other sources of revenue not yet known from its current strategy. Which means one must assign higher uncertainty to such large outcomes, given the insufficient information.

Mr. Libin said, “there is a good chance”. Given what is known today and the uncertainties, I am not sure what “good chance” means. But given the current valuation of $1 billion, investors seem to think the expected value of the valuation (considering all good and bad chances) is $1 billion. In other words, the numeric value of “good chance” works out to at most about 1% ($1 billion divided by $100 billion), and less once the other outcomes are counted.

A question you must ask is,

Is there also a ‘good chance’ of a $200 million valuation? (See: Zynga)

Finally, I am not going to run a complete scenario analysis here as I have done for other valuations before. That is left as homework for you.

Testing 40 shades of blue – AB Testing

The title refers to the famous anecdote about Marissa Mayer testing 40 shades of blue to determine the right color for links. (Unfortunately I am colorblind; I know just one blue.)

Mayer is famous for many things at Google, but the one that always sticks out – and defines her in some ways – is the “Forty Shades of Blue” episode.

she ordered that 40 different shades of blue would be randomly shown to each 2.5% of visitors; Google would note which colour earned more clicks. And that was how the blue colour you see in Google Mail and on the Google page was chosen.

Thousands of such tests happen in the web world, with every website running multiple experiments a day. Contrary to what many in webapp development may believe, AB testing does not have its origins in the webapp world. It is simply an application of a statistical method, the Randomized Control Trial, to determine whether a ‘treatment’ made a difference in the performance of the treatment group compared to the control group.

The simplest test checks whether the observed difference between the two sample means is statistically significant. That means computing the p-value: the probability of seeing a difference at least this large purely by chance, if the treatment had no effect. If the p-value is less than a preset level we declare that the treatment made a difference.
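A minimal sketch of such a test, as a two-proportion z-test on click-through counts (the counts here are made up for illustration):

```python
from math import erf, sqrt

def two_proportion_p_value(clicks_a, n_a, clicks_b, n_b):
    """Two-sided p-value for the difference between two click-through rates."""
    p_a, p_b = clicks_a / n_a, clicks_b / n_b
    p_pool = (clicks_a + clicks_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # Convert the z statistic to a two-sided p-value via the normal CDF
    return 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))

# Made-up counts: control 200/10,000 clicks vs. treatment 260/10,000
p = two_proportion_p_value(200, 10_000, 260, 10_000)
print(p < 0.05)   # True: significant at the usual 5% level
```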

Does it matter if the results are statistically significant? See here why it does not:

“I have published about 800 papers in peer-reviewed journals and every single one of them stands and falls with the p-value. And now here I find a p-value of 0.0001, and this is, to my way of thinking, a completely nonsensical relation.”

Should you test 40 shades of blue to find the one that produces the most click-throughs or conversions? xkcd has the answer:
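The xkcd strip is about multiple comparisons: run enough tests and some will look significant by chance alone. A sketch with 40 variants that are all truly identical (the click rate and traffic numbers are assumed):

```python
import random

random.seed(1)

# 40 variants that all share the SAME true click rate: any "winner" is noise.
true_rate, visitors, variants = 0.02, 5_000, 40
se = (true_rate * (1 - true_rate) / visitors) ** 0.5

false_positives = 0
for _ in range(variants):
    clicks = sum(random.random() < true_rate for _ in range(visitors))
    observed = clicks / visitors
    # Naive check against the known baseline at ~2 standard errors
    if abs(observed - true_rate) > 1.96 * se:
        false_positives += 1

# With 40 tests at the 5% level, a couple of chance "winners" are expected.
print(false_positives)
```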

Can Ms. Mayer test her way out of Yahoo’s current condition? Remember, all this split testing is about finding low-hanging fruit, not quantum leaps. As Jim Manzi wrote in his book Uncontrolled:

Perhaps the single most important lesson I learned in commercial experimentation, and that I have since seen reinforced in one social science discipline after another, is that there is no magic. I mean this in a couple of senses. First, we are unlikely to discover some social intervention that is the moral equivalent of polio vaccine. There are probably very few such silver bullets out there to be found. And second, experimental science in these fields creates only marginal improvements. A failing company with a poor strategy cannot blindly experiment its way to success …

You can’t make up for poor strategy with incessant experimentation.

Odds only mean so much

Last week NPR’s Morning Edition had a piece on President Obama’s poll numbers after the second debate. Their discussion turned to his chances of winning based on Intrade betting numbers:

MONTAGNE: OK, so got all those numbers. Finally, we checked the betting crowd. Intrade, which bills itself as the world’s leading prediction market, runs an online betting service whose participants put the odds of Barack Obama winning the election at 65 percent over Mitt Romney at 35 percent.

GREENE: But, Renee, odds only mean so much. That same service said there was a 75 percent chance the U.S. Supreme Court would strike down a national healthcare law. And the court beat those odds.

Notice that last line – “And the court beat those odds” – and you can see the fallacy in Greene’s argument, or his lack of understanding of probabilities.

First, these are very uncertain events to predict, and the outcome can depend on many different variables. Modeling methods like Monte Carlo simulation and prediction markets like Intrade try to estimate the level of uncertainty (by placing lower and upper bounds on it). The net result is a probability distribution over the different outcomes, in this case whether or not President Obama will win this November.

When a model states that one outcome is more likely than the other and in reality the other outcome happens, one should not treat this as a failure of the model. If there were no doubt about who will win, there would be no uncertainty, and one would not need any of these tools. By definition there is uncertainty in the outcome. The modeling simply indicates one outcome (say, an Obama win) is more likely than the other.

The model is doing its job perfectly well. What it really states is, “if we were to imagine a million different ways of running the 2012 election, more of them show an Obama win than a Romney win”.
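That statement maps directly onto a quick simulation: treat the market's 65% as the win probability, run the election many times, and count. The simulated losses are part of the model working, not evidence against it.

```python
import random

random.seed(0)

p_obama = 0.65           # the market-implied probability
simulations = 100_000

# Simulate the election many times; each run is one "way it could go".
obama_wins = sum(random.random() < p_obama for _ in range(simulations))
print(obama_wins / simulations)   # close to 0.65; the other ~35% still happen
```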

Another fallacy is confusing the result of a previous model on an unrelated event with the current one. There is no relation between the Supreme Court’s decision and the election results.

To say, “Odds only mean so much”, as a way to dismiss all predictive models is just plain wrong.

50/50

I was lying back in the dentist’s chair, with my dentist busy drilling out a cavity. She told me, “I think I did all I can, but if you feel pain of any kind tomorrow, then it is root canal”.

I asked her, “What are the chances I will need a root canal procedure?”

To this she replied, “50/50”.

I said, “50/50? What kind of odds is that? Can’t you tell now whether or not you hit the nerve, and hence whether or not I need a root canal? Can’t you tell from all the different cavities you have filled how likely it is that I will need the painful procedure?”

With a kind expression that only dentists have, she replied, “It doesn’t matter about all the others. Tomorrow when you eat ice cream you either have splitting pain or you don’t. That is all that matters. So it is 50/50 for you”.

The mere fact that I will either be in pain or not (two states) was her reasoning for 50/50.

She is my dentist and I am not going to speak against her. But I will point you to the prediction by a reporter on how likely Amazon is to introduce a free Kindle. His answer, as you see above, is 50/50. The fact that there are only two outcomes does not mean they are equally probable. Take, for instance, a lottery ticket: the fact that you will either wake up a lottery winner or not does not mean your chances of winning are 50/50.

50/50? That is not odds at all. That is just someone making things up when they don’t know, and/or don’t know how one assigns probabilities to future events. So in the Kindle case, whether or not we see a free Kindle, a 50/50 prediction covers both cases. The reporter can always say, “I told you so”. Such a prediction is not actionable and does not constitute probabilistic thinking.

So how would one make probabilistic predictions? Let me take you back to the dentist case. Suppose she has seen thousands of people with conditions close to mine and knows their outcomes after she filled their cavities. Or she has access to a database of similar cases documented by many other dentists. Then, based on those outcomes (prior knowledge), she could say, “of the 1000 people who had such a cavity filled, 503 of them did not need a root canal, so your chances are 50/50”.

What does this mean for making business predictions like the Free Kindle? Let me take you back to the Pinterest article I wrote and its companion article.

Imagine enumerating all possible future scenarios of Amazon’s device strategy and computing the outcome of each. These scenarios are the equivalent of many different futures based on numerous variables like cost of production, demand, market forces, Amazon’s other opportunities, etc. Then we simply need to look at what percentage of the scenarios have a free Kindle delivering more profit to Amazon than charging for it. Only after such an analysis can we say anything about the chances of a Free Kindle.
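A toy version of such a scenario analysis can be sketched as a Monte Carlo over the uncertain variables. Every parameter range below is invented purely to show the mechanics; none is an actual estimate about Amazon.

```python
import random

random.seed(3)

# Every range below is invented purely to show the mechanics of a
# scenario analysis; none is an actual estimate about Amazon.
def profit_free_kindle():
    units = random.uniform(20e6, 60e6)       # assumed demand if the device is free
    content_margin = random.uniform(10, 40)  # assumed content profit per device/year
    device_cost = random.uniform(60, 100)    # assumed cost to build each device
    return units * (content_margin - device_cost / 2)  # cost amortized over 2 years

def profit_paid_kindle():
    units = random.uniform(5e6, 15e6)        # assumed demand at the current price
    hardware_margin = random.uniform(0, 20)
    content_margin = random.uniform(10, 40)
    return units * (hardware_margin + content_margin)

trials = 50_000
free_wins = sum(profit_free_kindle() > profit_paid_kindle() for _ in range(trials))
print(free_wins / trials)   # fraction of scenarios where a free Kindle wins
```

The single number that comes out, the fraction of futures in which free beats paid, is what a probabilistic prediction would actually look like, as opposed to a reflexive 50/50.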

Since no one has done such a scenario analysis, to say “50/50 chance of Free Kindle next year”, is simply pointless.

And by the way, next time you hear 50/50 it is perfectly okay to assume the speaker has no clue, because in 99.9999% of cases no one has done the scenario analysis needed to make such a probabilistic prediction, and those who have will likely not use an expression like 50/50.