Does Intuition Fill Data Void?


In his final letter to Groupon employees, ex-CEO Andrew Mason wrote this about his biggest regret:

My biggest regrets are the moments that I let a lack of data override my intuition on what’s best for our customers.

Decisions need to be based on data. The right and relevant data. Decisions must change when new evidence is uncovered. Decisions with the same supporting data must be reproducible and repeatable by anyone within and outside the organization. This does not mean we are assured of the results. We will never be certain of results. The role of data/information is to reduce the uncertainty.

But how do we decide when there is a lack of data? It is likely true that not all that matters is measurable, but do we give up on gathering data? Does a lack of data mean we fall back on intuition?

Decisions based on intuition are not repeatable by you or by others with their own intuitions. It is like throwing darts blindfolded: you might hit the center, but that does not mean your method is correct.

Lack of data does not mean we let intuition drive; it means we experiment to seek data that will help improve our decisions.

Your regret should never be that you did not override your intuition, but that you did not experiment enough.

Now a final word on Groupon, whose stock is trading around $2.30, down from its IPO level of $20. The problem is: who is Groupon’s customer? The deal seeker who moves from deal to deal, or the small business owner? You should read my book from four years ago to understand that.

 

90% Chance Your Prediction is Wrong

We all make predictions every day, pundits and media bloggers more so than the rest of us. Business folks in startups and enterprises are no exception – we predict the market size, our share of the market, revenue growth, the effect of a marketing campaign, etc. These predictions drive our decisions to launch a product, enter a new market or acquire a new business.

Most if not all of these predictions are just made up, with no legs in data, with no appetite for refinement, made solely to convey confidence and nothing else. Most predictions are black and white with no room for gray areas. This is because leaders are encouraged to show confidence, and what better way to show it than by assertion? And confidence, of course, is usually confused with competence.

Those who want a way out deliberately make vague predictions like, “it is not that far out when Amazon Kindle will be free”, and others line up escape clauses like, “if we execute well on the market opportunity it is all ours to take”. Worse is the ultimate copout, seemingly well balanced but wrong: “50-50 chance”.

We are all making predictions with no basis in reality, no understanding of base rates, no intention of seeking data to refine our estimates, no desire to state our predictions as likelihoods, and definitely no patience to explain why we believe our prediction is correct.

There is a better way – Probabilistic Thinking. Jason Zweig of The Wall Street Journal points us to a book called Superforecasting and the associated website Good Judgement Project that teach us how to get better at predictions and overall decision making.

“Superforecasting: The Art and Science of Prediction,” co-written with the journalist Dan Gardner, is the most important book on decision making since Daniel Kahneman’s “Thinking, Fast and Slow.”

You can cultivate these same skills by visiting GJOpen.com and joining the next round of the tournament. Or you can try refining your own thinking.

I have written here on decision making under uncertainty and how scenario analysis and likelihood assignments can help us make better decisions. The Good Judgement Project kicks it up several notches and offers us a solid framework to improve our prediction and decision making skills.

Take a look at this question at GJOpen.com on predicting the Twitter CEO situation by the end of this year.

[Screenshot: GJOpen question on who will be Twitter’s CEO at the end of the year]

Unlike the media pundits who state with confidence, “it is going to be Jack,” this question asks you to consider more options and assign probabilities to each outcome. In addition it asks you to enter a rationale explaining your choice of probabilities. You can’t get away with, “50-50 chance Jack will be appointed CEO”.

When you take what they teach in this project to predictions and decision making in your business, you are bound to improve the quality of your decisions and business output. I am 90% certain you will improve your decision making, because this method teaches you a repeatable, defensible, data-driven three-step approach:

  1. Start with the base rate – What does history support? If 90% of startups before you took 7 years to get to a $200 million annual run rate, start with the notion that yours will take as long.
  2. Ask what needs to get done to exceed the base rate – If you are going to grow faster, what needs to line up and what do you need to do? Look at those that grew faster than the base rate and see what drivers helped them. Are those endogenous drivers (actions the company took) or exogenous (external and random)? Write down your estimate of the likelihood of beating the base rate and your rationale.
  3. Take action. Collect data. Refine the estimate and repeat – Decision making is not an event, it is an iterative process. A minimal sketch of this loop is shown below.
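To make the loop concrete, here is a minimal sketch in Python. The numbers (a 10% base rate, the quarterly likelihood ratios) are hypothetical illustrations of the idea, not figures from the book or from the Good Judgement Project.

```python
# Minimal sketch of the three-step loop (all numbers hypothetical).
# Step 1: start with the base rate as your prior probability.
# Step 2: judge how much more likely each new observation is if you are on
#         track to beat the base rate than if you are not (a likelihood ratio).
# Step 3: update the estimate, act, collect more data, repeat.

def refine(prob: float, likelihood_ratio: float) -> float:
    """One Bayes update in odds form: posterior odds = prior odds * likelihood ratio."""
    odds = prob / (1.0 - prob)
    odds *= likelihood_ratio
    return odds / (1.0 + odds)

estimate = 0.10  # base rate: say 10% of comparable startups grew that fast

# Hypothetical evidence from three quarters: ratios above 1 favor beating the
# base rate, ratios below 1 favor the base-rate outcome.
for quarter, lr in enumerate([2.0, 1.5, 0.8], start=1):
    estimate = refine(estimate, lr)
    print(f"After quarter {quarter}: {estimate:.0%} chance of beating the base rate")
```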

Are you ready to check your need to speak in absolutes and make bold predictions that admit no uncertainty, and instead practice probabilistic thinking?

 

 

Estimating Wrong and Estimating the Wrong Metric

In business, large enterprise or startup, we make many estimates in our roles, be it market size, addressable market, penetration, market share growth, the effect of a marketing campaign, etc.

In the absence of clear data we make assumptions, look at past performance, compare to similar entities and try to estimate the metric we are interested in. In this process we make process errors – errors in spreadsheets, coding, data entry – errors that are avoidable with a stricter framework or can be weeded out with a review process.

More dangerous are the bias-driven errors that cannot be caught by any review process, because we all share the same views and biases.

Here are the estimation errors we are oblivious to. For a vivid illustration of these errors, see this story on museum visits.

Museums take pains to make the past come alive, but many are out of their depth when looking into the future.

When selling plans for a new building or a blockbuster exhibit, civic leaders and museum officials typically cite projected visitor counts. These numbers can be effective in securing funding and public support. As predictions, however, they often are less successful.

  1. Estimating Exact Numbers – Estimating a single exact number with certainty, because we look down on those who don’t do that as diffident.
  2. Comparison Error – Making estimates based on other similar projects but erring by choosing the most successful ones (not to mention the most available and recent ones).
  3. Atypical Scenarios – Making estimates not based on what is most common and most likely to happen but on atypical scenarios.
  4. Context Error – Ignoring the state of your business (stage in the growth cycle, etc.), market dynamics, customer segment, etc.
  5. Attribution Error – Attributing someone’s success to the wrong traits and making our own estimates because of shared traits.
  6. Using Lore – Using a very large number from the past because it is part of the organization’s shared lore, never stopping to question, “Where did we get that number?”
  7. Ignoring Externalities – Making estimates without regard to economic factors, disruptions, or what the competitors would do.
  8. Underestimation – Deliberately estimating low in order to exceed expectations.

 

Finally, to top all these errors, the biggest of them all is to estimate the completely irrelevant and wrong metric because it is easy – page views, number of retweets, user base, etc. So even if you fix all the above errors you end up with a perfectly estimated wrong metric. I have written in detail about this here.

Do you know your estimation errors?

Quality of Decisions vs. Quality of Results

Do you judge a person by the results of their action or the decision making process that led to those results?

Judging from the most overused terms on LinkedIn, we all seem to focus exclusively on results. That is likely a result of our focus on being “action oriented”. Action leads to results while (decision) analysis leads to paralysis – so the saying goes.

When you judge only by the results can you identify sheer dumb luck?

Writing in the context of poker betting, poker theorist David Sklansky writes:

What matters is the quality of your decisions, not the results that come from them.

Seems irrelevant to quote poker process to business? Not if you recognize that the essence of business is decision making under uncertainty. We make “investment decisions” every day with resources and people.

We don’t know the exact market size and which segment to target first. We do not know what prices to set. We don’t know what benefits customers value more and hence what products to build. We do not know what channels to develop and what promotions to run.

If we have a rigorous decision making process, we can estimate market sizes and pick the segment that offers the best chance of success, tease out the distribution of customer willingness to pay and set a price, use statistical models to determine the customer value distribution, and evaluate channel and promotion impact from past data to pick the one most likely to succeed.
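As one illustration of what such a step can look like, here is a minimal sketch of the pricing piece: take a sample of customer willingness-to-pay figures and pick the candidate price that maximizes expected revenue per prospect. The survey numbers are made up for illustration and are not from any real study or from the author’s own models.

```python
import numpy as np

# Hypothetical willingness-to-pay survey responses, in dollars.
wtp = np.array([5, 8, 9, 10, 12, 12, 15, 18, 22, 30])

candidate_prices = np.arange(5, 31)
# Expected revenue per prospect = price * share of prospects willing to pay it.
expected_revenue = [p * np.mean(wtp >= p) for p in candidate_prices]

best_price = candidate_prices[int(np.argmax(expected_revenue))]
print(f"Revenue-maximizing price for this sample: ${best_price}")
```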

There is a lot of uncertainty in the results. Other factors may crush your market size. Your pricing could be all wrong. Customers may reject the main value proposition due to other factors. Your channel choice may fall flat because customer buying behavior changed. Besides all these, your competition may take steps that nullify all your actions.

Think about it: the uncertainty of outcome is not that different from poker. Or is poker more like business strategy?

When you judge a person only by the results of their actions, you fail to understand whether they are capable of delivering similar results in the future, especially when conditions are radically different. Someone with a solid (and evolving) decision making process, but with failed projects in the past, nevertheless has the wherewithal to operate under uncertainty in the future.

It is like drafting a quarterback who threw a few game-winning Hail Marys vs. one who plays the long game.

A person’s results do not tell the full story, or tell the wrong story. Their decision making process, however, tells you the real story. Let me rephrase the poker quote:

It is the quality and repeatability of the decision making process that matters more than the results.

 

Asking the right questions. Seeking the relevant information.

Let us play a card guessing game. I have a standard pack of 52 cards. I pick one at random and ask you for the chances it is a Jack. That is not that difficult. It is 1/13.

But it is not any standard pack of cards. We do not know how many cards are in it. In fact we do not even know if there are any Jacks in it. It is like those card dispenser contraptions that spit out a card, except of unknown size. An acceptable answer is, “I don’t know”, because this is not a frequentist probability question. The problem space has switched from risk to uncertainty.

But we can’t end it there. What if we need to find out to help with a business decision? After all, our output as a leader is decision making – make that informed decision making under uncertainty. So we have to push forward. What if we can reduce the uncertainty by asking questions? What if I made available volumes of varied data (BigData) about the card?

Sidebar: if you ever followed Jonah Lehrer, renowned Bob Dylan scholar who also wrote books on creativity, or if you are the hustling valley entrepreneur type, you might answer the question with,

“Questions? I won’t ask questions. I will force your hand and turn the card over to see”.

But let us stay realistic here and continue. What questions would you ask? What relevant data would you seek in the BigData?

You could ask: What color is the card?
But is that relevant, and does it help to reduce uncertainty?

BigData could say: In 200,000 card pickings there were 103,568 red cards and the rest were black.
Is that BigData relevant?

You could ask: Is that a picture card?
This helps to reduce uncertainty. If the answer is no, you are done. If the answer is yes you still have work to do, but you are in a better state than before.

BigData could say: In the past 200,000 card pickings a few picture cards were sighted.
This helps too, but it is not as effective as you actively seeking the information. (The sketch below shows, for the standard deck, why one question moves the estimate and the other does not.)
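Here is that worked sketch for the ordinary 52-card deck, where the arithmetic is known. Knowing the color leaves the estimate exactly where it started, while the picture-card answer moves it substantially.

```python
from fractions import Fraction

# Prior: 4 Jacks in 52 cards.
p_jack = Fraction(4, 52)

# "What color is the card?" -- 2 of the 26 red cards are Jacks: no change.
p_jack_given_red = Fraction(2, 26)

# "Is it a picture card?" -- 4 of the 12 picture cards (J, Q, K) are Jacks,
# and none of the remaining 40 cards are.
p_jack_given_picture = Fraction(4, 12)
p_jack_given_not_picture = Fraction(0, 40)

print(f"P(Jack)                    = {p_jack}")                   # 1/13
print(f"P(Jack | red)              = {p_jack_given_red}")         # still 1/13
print(f"P(Jack | picture card)     = {p_jack_given_picture}")     # 1/3
print(f"P(Jack | not picture card) = {p_jack_given_not_picture}") # 0
```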

The role of information is to reduce uncertainty. If it does not help reduce uncertainty in decision making, it is useless regardless of its volume and variety (or however many Vs you add). BigData is not a substitute for application of mind. You, the decision maker, need to ask the right questions for the decision at hand and not let it spew interesting findings at you.

Do you ask the right questions?

Not everything that counts can be counted … but did you try?


Most people attribute this quote to Einstein,

Not everything that counts can be counted, and not everything that can be counted counts

It is repeated with deference by most. There are, however, alternative theories about whether or not this quote should be attributed to Einstein. That is not the argument here.

The argument is how statements like

“not everything can be quantified”
“not everything can be measured”
“you have to look at this holistically”
“we have been in this business long, we know”

get thrown around to support decisions made with nothing more than lots of hand waving and posturing. Anyone pushing back to ask for data is told about this famous quote by Einstein – “surely you are not smarter than Einstein” is the implied message, I guess.

Let us treat that famous quote as given – there are indeed factors relevant to a decision that cannot be quantified. But…

  1. How many business decisions depend on such non-countable factors? Consider the many product, marketing, strategy, pricing, and customer acquisition decisions you need to make in your job. Do all of them depend on factors that cannot be counted?
  2. Considering the many factors that go into making a decision, what is the weight of the non-countable factors? Surely you cannot say 100% of any decision depends on non-countable factors.
  3. How many times have you tried to push through decisions based entirely on non-countable factors (stories?) with not an iota of data?
  4. Finally, have you stopped to check whether what you label as non-countable is indeed so, and not so only because of your lack of trying? Are you casting those factors as non-countable either because you are not aware of the methods to count them or because you are taking shortcuts? If you make an attempt you will find most things are countable (How to Measure Anything), be it segment size, willingness to pay or brand influence.

I cannot speak for what Einstein (or someone else) meant by that quote about non-countable things that count. But I am going to interpret it for our present-day decision making.

For any decision there are factors that are inputs to it. Some of those factors can be measured with certainty, some with quantifiable uncertainty* and others with unquantifiable uncertainty*. You must cover the first, have a model for the second, and put in place methods to learn more and reduce the third.
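As a minimal sketch of what “have a model for the second” can mean, here is one way to carry a quantifiably uncertain input through a decision: represent the click-through rate from the footnote below as a range rather than a point, simulate, and look at the spread of outcomes. The choice of a lognormal, and the traffic and value-per-click figures, are hypothetical assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(7)

# Quantifiably uncertain input: 90% confident the click-through rate is
# between 1% and 8%. A lognormal with those values as the 5th and 95th
# percentiles is one simple way to encode that belief.
low, high = 0.01, 0.08
mu = (np.log(low) + np.log(high)) / 2
sigma = (np.log(high) - np.log(low)) / (2 * 1.645)  # 90% interval ~ +/-1.645 sd
ctr = rng.lognormal(mean=mu, sigma=sigma, size=100_000)

visitors, value_per_click = 50_000, 2.0  # hypothetical campaign assumptions
revenue = visitors * ctr * value_per_click

print(f"Median revenue estimate: ${np.median(revenue):,.0f}")
print(f"90% interval: ${np.percentile(revenue, 5):,.0f} to ${np.percentile(revenue, 95):,.0f}")
```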

Without this understanding, if you try to invoke Einstein you are only cheating yourself (and hurting your business in the process).

What is your decision making process?

 


* Quantifiable Uncertainty means you can express the uncertainty within some known boundaries. Like saying, “I don’t know what the click-through rate is, but I am 90% confident it is between 1% and 8%.”

* Unquantifiable Uncertainty means those factors which are unknown and cannot even be expressed in terms of odds (or probabilities).