Belief that Entrepreneurship is risky fosters risky ventures

You have seen my attempt to analyze data provided by a VC firm on how they decide to invest in startups. Contrary to what they thought they were doing, just one factor decided their investment decisions.

Do VCs make informed, evidence-based decisions by meticulously rating startups, as the data we saw led us to believe? Answering that requires a meta-analysis across VC firms, and someone just did that.

These are some quotes from an article by Stanford GSB Professor Jeffrey Pfeffer (http://goo.gl/WivWG) on the need for Evidence-Based Management in entrepreneurial environments:

  1.  … it has become conventional wisdom, accepted by all the parties ranging from entrepreneurs to those who provide them financing, that a high rate of failure is an inevitable consequence of doing new things, inventing new technologies, and opening up new markets—activities which are inherently risky and uncertain because they involve doing things that have not been successfully done before. Because this conventional wisdom suggests that a high failure rate is inevitable, there is often little effort expended trying to improve decision-making in new venture activity.
    (In other words, people start ventures without trying to validate customer demand and VCs invest based on all kinds of criteria but validity.)
  2. Many of the VC firms do what they do without much introspection or reflection, partly as a result of the egos and self-confidence of the VC partners. People who have survived and prospered in the venture industry have obviously done well, and those VC’s who don’t do well generally don’t last. Therefore, it is axiomatic that most fund managers (those who survived and prospered) believe they are much above average in their abilities and in their decision making.
    (Hey, smart people succeed. If not they wouldn’t have succeeded, would they?)
  3. Positive qualities get attributed to the people, groups, or companies that enjoy those good outcomes whether or not these qualities are true or causal. This means that high-performing VC’s will be perceived as having individual skill as a consequence of their performance, whether or not such skill actually exists.
    (No wonder we bow at the altar of success. This finding was first stated decades ago by John Kenneth Galbraith in his Conventional Wisdom essay.)
  4. Entrepreneurs, too, mostly have strong egos, which is what is required to take on something new where the risks of failure are high. But this overconfidence among entrepreneurs and those that back them makes it difficult for people involved in creating new businesses to question things and to learn from setbacks and other experience.
    (Everyone is killing it! Disrupting status quo! I hope they would stop at that and not write seemingly erudite articles on brain science.)
  5. Most venture capitalists and entrepreneurs believe that outstanding individual people make the difference, leading them to focus on finding and recruiting stars and to eschew much attention to process, including decision making processes.
    (If you are already successful you are perceived to be outstanding and thought of as having success potential)
  6. Few of the participants in entrepreneurial activity suffer significant consequences from unsuccessful decisions, and therefore many players have less incentive than one might expect to improve their decision-making – VCs get guaranteed principal and entrepreneurs often, although not always, are working with other people’s money, so their financial downside, except in terms of the opportunity costs of their time, is also limited.
    (Because failure is most often seen as an unavoidable risk of being an entrepreneur, there are few if any career risks for starting something that doesn’t work out!)

The Hippo Problem in Business Advice

Which animal do you think Hippos are related to?

My guess was elephant, because in my native language the hippo is called water-elephant. Hippos are big, grey herbivores and spend most of their time in water, so they must be related to elephants – or so my culture thought.

Early naturalists, meanwhile, thought hippos must be related to pigs. I guess they used other physical cues – looks, teeth, the fact that both wallow in mud – and decided on the pig relation.

Not until scientists looked at relevant evidence – not just what is convenient, available, fits a story, and supports one’s preconceived notions – did we find out the real relation.

Take hippos, for example. Early naturalists thought hippos must be related to pigs. After all they look somewhat alike and have similar teeth. But fossils and genetic studies showed that hippos’ closest living relatives are actually dolphins and whales. (NPR)

No one who looks only at the superficial symptoms and what is overt could have come to this conclusion.

We have a Hippo problem in business advice. More like a Hippo crisis, with bigger ramifications than getting the Hippo-Dolphin connection wrong.

Look at successful businesses. Look at their seven traits. If only you had them, your business would be successful too.
Look at the magical Net Enchantment Scores of highly successful companies. If only you got your score to that level, you would be insanely profitable too.
Look at Brand V’s successful social media campaign. You had better get on Twitter and start conversations with your customers.

Hundreds of management/marketing/business gurus, thousands of books, and hundreds of thousands of articles bombard us with advice on how we should run a business, based only on what they saw as a relation between Hippo and Pig.

And millions of fans have suspended skepticism to embrace a Guru’s preaching and spread it around, taking solace in the numbers. After all, millions of people who read the blog articles and pass them around can’t be wrong, and we are not alone in following the Guru’s footsteps.

The Hippo problem in business advice is not just the fault of self-confident Gurus, without an iota of self-doubt, pushing their snake oil with no repercussions. We who accept and embrace these prescriptions without asking difficult questions are a bigger part of the problem. Questions like,

  1. Is my business like the Hippo the Guru is talking about?
  2. What evidence would prove the Guru’s advice wrong? Sure, I run a cupcake store – does that mean my customers want engagement and not just cupcakes?
  3. What is the opportunity cost of following in the Guru’s footsteps and getting it wrong? Should I adopt the razor-blade model because the Guru says that is the future?

In the recent past, before the explosion of self-anointed Gurus, we had a framework for making decisions. We did not make decisions because someone else did it that way; we looked at our goals, our customers, market dynamics, marketing channels, sales channels and our ability to compete.

Now everyone is a Business Guru. There is no need to look for evidence; the Gurus know. They know from just the few anecdotes they have seen – be it a farmers market vendor, the Grateful Dead or a Harry Potter movie. They have their prescriptions for us on how to run a business, do marketing and price products.

The problem is not going to go away because Gurus reach self-realization (pun intended). I think one cannot be a Guru peddling snake-oil prescriptions unless one loses all self-doubt and strongly believes in what one is selling. These people are selling to a segment with a need for magical prescriptions – like engaging in social media, telling stories or being remarkable.

Until we the consumers of business advice stop worshipping Gurus and seek relevant evidence, the Hippo problem in business advice is not going away.

Are you ready to stop following your Guru and start asking tough questions?

Not everything that counts can be counted … but did you try?

Photo Courtesy: Quote Investigator

Most people attribute this quote to Einstein,

Not everything that counts can be counted, and not everything that can be counted counts

It is repeated with deference by most. There are, however, alternative theories about whether this quote should be attributed to Einstein at all. That is not the argument here.

The argument is how statements like

“not everything can be quantified”
“not everything can be measured”
“you have to look at this holistically”
“we have been in this business long, we know”

get thrown around to support decisions made only with lots of hand waving and posturing. Anyone pushing back to ask for data is told about this famous quote by Einstein – “surely you are not smarter than Einstein” is the implied message, I guess.

Let us treat that famous quote as given – there are indeed factors relevant to a decision that cannot be quantified. But …

  1. How many business decisions depend on such non-countable factors? Consider the many product, marketing, strategy, pricing and customer acquisition decisions you need to make in your job. Do all of them depend on factors that cannot be counted?
  2. Considering the many factors that go into making a decision, what is the weight of non-countable factors in any decision? Rarely does 100% of a decision depend on non-countable factors.
  3. How many times have you tried to push through decisions based entirely on non-countable factors (stories?) with not an iota of data?
  4. Finally, have you stopped to check whether what you label as non-countable is indeed so, and is not so merely for lack of trying? Are you casting those factors as non-countable either because you are not aware of the methods to count them or because you are taking shortcuts? If you made an attempt you would find most things are countable (How to Measure Anything), be it segment size, willingness to pay or brand influence.

I cannot speak for what Einstein (or someone else) meant with that quote about non-countable things that count. But I am going to interpret it for our present-day decision making.

For any decision there are factors that serve as inputs to it. Some of those factors can be measured with certainty, some with quantifiable uncertainty* and others with unquantifiable uncertainty*. You must measure the first, have a model for the second and put in place methods to learn more and reduce the third.
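A hedged sketch of how the second category can be handled: the visitor count and the 1%–8% click-through range below are assumed figures, and modeling a 90% confidence interval as a uniform draw is a deliberate simplification.

```python
import random

def simulate_clicks(n=10_000, seed=7):
    """Combine a factor measured with certainty (visitors) with one
    known only with quantifiable uncertainty (click-through rate)."""
    rng = random.Random(seed)
    visitors = 50_000  # assumed: measured with certainty
    outcomes = []
    for _ in range(n):
        # Belief: 90% confident the CTR is between 1% and 8%.
        # Modeled here as a uniform draw for simplicity.
        ctr = rng.uniform(0.01, 0.08)
        outcomes.append(visitors * ctr)
    outcomes.sort()
    # Median and a 90% interval of the modeled click volume.
    return outcomes[n // 2], outcomes[int(0.05 * n)], outcomes[int(0.95 * n)]
```

Run this way, a “non-countable” click-through rate becomes an explicit range of outcomes you can carry into the decision.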

Without this understanding, if you try to invoke Einstein you are only cheating yourself (and hurting your business in the process).

What is your decision making process?

 


*Quantifiable uncertainty means you can express the uncertainty within some known boundaries, as in: “I don’t know what the click-through rate is, but I am 90% confident it is between 1% and 8%.”

*Unquantifiable uncertainty means those factors that are unknown and cannot even be expressed in terms of odds (or probabilities).

It costs 6-7 times more to acquire new customers over retaining existing ones …

No, my account was not hacked (not yet, at least). I deliberately let this commonly repeated statement be the title without qualifying it. Of course statements like these (this particular one made famous by The Loyalty Effect) cannot stand by themselves, regardless of how popular the Guru who said them is.

Let us look at this closely

  1. Let us assume this statement is true. Shall we then fire our sales team, shut down all marketing spend, stop product innovation and get rid of business development? After all, the statement says new customers are far more expensive – so why bother?
  2. Let us take it to the extreme. Shall we stop after the first customer?
  3. Extending this even further: say you acquired the first customer at a cost of $1 and the second at a cost of $7. By this logic, does it cost $49, $343, and so on to acquire the third and fourth customers?
  4. What if you are essentially in a transactional business where you really need new customers every day because the current ones won’t be there tomorrow?
  5. How do you know it is 6-7 times, or only 6-7 times? What data and metrics were used? Was it based on an experimental study?
  6. How generally applicable is this to your business – small or large, early stage vs. mature? Is the cost the same for all businesses?
  7. What about profits from new customers – are those 6-7 times as well?

You can see how ridiculous the statement sounds now. Here is a further breakdown of the problems with this retention vs. acquisition cost statement.

First, it is framed around cost and not around marginal benefit and opportunity cost. I also doubt that the proponents know how cost accounting is done; most likely they are allocating all kinds of fixed-cost shares to new customers. You need a costing system that captures only the truly incremental costs of both acquiring and retaining. Simply distributing all costs across all customers won’t cut it.

Second, it suffers from sunk-cost bias. The fact that you spent money to acquire a customer in the past does not matter in the decision to do everything to retain them. If you cannot recover the acquisition cost, it is sunk. You should only look at future, unearned marginal profit from each customer – existing or new. When deciding whether to spend capital on retention vs. acquisition, you need to compute the opportunity cost and the truly incremental profit of each path, unencumbered by the money you have already spent on existing customers.
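That comparison fits in a few lines. Every number below is hypothetical, and “profit” means future incremental profit only, so past acquisition spend never enters the calculation:

```python
def better_use_of_budget(budget,
                         retain_cost, retain_profit,
                         acquire_cost, acquire_profit):
    """Compare spending a budget on retention vs. acquisition using
    only truly incremental future profit; sunk costs are ignored."""
    # Customers reached on each path, given the per-customer cost.
    retain_gain = (budget / retain_cost) * retain_profit - budget
    acquire_gain = (budget / acquire_cost) * acquire_profit - budget
    if retain_gain >= acquire_gain:
        return "retain", retain_gain
    return "acquire", acquire_gain

# Even at 7x the per-customer cost, higher future profit per new
# customer can make acquisition the better path:
path, gain = better_use_of_budget(7_000, retain_cost=1, retain_profit=1.5,
                                  acquire_cost=7, acquire_profit=15)
# path == "acquire", gain == 8000.0
```

The decision flips on future marginal profit, not on which per-customer cost is 6-7 times the other.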

Third, if the cost of acquisition is indeed high, don’t you think you have a marketing problem? Is it likely that you are targeting the wrong customers in the wrong places with the wrong products, versions, messaging and prices, and hence low-value customers are self-selecting into your service? Don’t you want to spend your resources fixing this strategic problem instead of worrying about retention?

Lastly, the Innovator’s Dilemma. What if current customers are NOT representative of the future? By focusing your resources on them instead of new customers, do you lose sight of new market opportunities, of how customer needs are evolving and of how their choices for the job to be done are shaped by market trends and innovations?

Does the retention vs. acquisition pronouncement sound as profound as it did before?  I hope not.

How do you make business decisions?

Pig or a Dog – Which is Smarter?: Metric, Data, Analytics and Errors

How do you determine which interview candidate to hire? How do you evaluate the candidate you decided you want to hire? (or decided you want to flush?)

How do you make a call on which group is performing better? How do you hold one group accountable for (or explain away) a bad quarter vs. another?

How do you determine future revenue potential of a target company you decided you want to acquire? (or decided you don’t want to acquire)?

What metrics do you use? What data do you collect? And how do you analyze that to make a call?

Here is a summary of an episode of Fetch! With Ruff Ruffman, a PBS Kids TV show:

Ruff’s peer, Blossom the cat, informs him that pigs are smarter than dogs. Not believing her and determined to prove her wrong, Ruff sends two very smart kids to test the claim. The two kids go to a farm with a dog and a pig. They decide that the time taken to traverse a maze is the metric they will use to determine who is smarter. They design three different mazes:

  1. A really simple straight line (a very good choice, as this will serve as a baseline)
  2. A maze with turn but no dead-ends (increasing complexity)
  3. A maze with two dead-ends

Then they run three experiments, letting the animals traverse the maze one at a time and measuring the time for each run. The dog comes out ahead taking less than ten seconds in each case while the pig consistently takes more than a minute.

Let me interrupt here to say that the kids did not really want Ruff to win the argument. But the data seemed to show otherwise. So one of the kids changes the definition on the fly:

“Maybe we should re-run the third maze experiment. If the pig remembered the dead-ends and avoids them, it will show the pig is smarter, because the pig is learning.”

And they do. The dog takes ~7 seconds, compared to the 5.6 seconds it took in the first run. The pig does it in 35 seconds, half the time of its previous run.

They write up their results. The dog’s performance worsened while the pig’s improved. So the pig clearly showed learning and the dog didn’t. The pig indeed was smarter.

We are not here to critique the kids. This is not about them. This is about us, leaders, managers and marketers who have to make such calls in our jobs. The errors we make are not that different from the ones we see in the Pigs vs. Dogs study.

Are we even aware we are making such errors? Here are five errors to watch out for in our decision making:

  1. Preconceived notion: There is a difference between a hypothesis you want to test and a preconceived notion you want to prove.

    A hypothesis is, “Dogs are smarter than pigs.” So is, “The social media campaign helped increase sales.”

    A preconceived notion is, “Let us prove dogs are smarter than pigs.” So is, “Let us prove that the viral video of a man on a horse helped increase sales.”

  2. Using the right metric: What defines success and what “better” means must be defined in advance and must be relevant to the hypothesis you are testing.
    Time to traverse a maze is a reasonable metric, but is it the right one to determine which animal is smarter? Smart or not, dogs have an advantage over pigs – they respond to a trainer’s call and move in that direction. Pigs respond only to the presence of food. That seems unfair already.
    Measuring the presence of a candidate may be good, but is it the right metric for the position you are hiring for? Measuring the number of views on your viral video is good, but is it relevant to performance?
    It is usually a bad choice to pick a single metric. You need a basket of metrics that, taken together, point to which option is better.
  3. Data collection: Are you collecting all the relevant data, or just what is convenient and available? If you want to prove Madagascar is San Diego, you will only look for white sandy beaches. If you stop after finding a single data point that fits your preconceived notion, you will end up taking a $9B write-down on that acquisition.
    Was it enough to test one dog and one pig to make a general claim about dogs and pigs?
    Was one run of each experiment enough to provide relevant data?
  4. Changing definitions midstream: Once you decide on the hypothesis to test, the metrics and the experimental procedure, you should stick to them for the scope of the study and not change them when it appears the results won’t go your way.
    There is nothing wrong with changing a definition, but then you have to start over and be consistent.
  5. Analytics errors: Can you make sweeping conclusions about performance without regard to variation?
    Did the dog really worsen, or the pig really improve, or was it simply regression to the mean?
    Does the 49ers’ backup quarterback really have a hot hand that justifies benching Alex Smith? What you see as a sales jump from your social media campaign could easily be due to usual variation in sales performance. Did you check whether the uplift is beyond the usual variation by measuring against a comparable baseline?
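The regression-to-the-mean question is easy to probe with a simulation. A sketch under assumed numbers – a fixed “true” maze time plus run-to-run noise, with no learning at all:

```python
import random

def slower_after_fast_run(true_time=7.0, noise=1.5, runs=100_000, seed=42):
    """With no learning, how often is an unusually fast first run
    followed by a slower second run? Pure regression to the mean."""
    rng = random.Random(seed)
    fast_firsts = regressed = 0
    for _ in range(runs):
        first = true_time + rng.gauss(0, noise)
        second = true_time + rng.gauss(0, noise)
        if first < true_time - noise:  # first run looked impressively fast
            fast_firsts += 1
            if second > first:         # next run looks "worse" by itself
                regressed += 1
    return regressed / fast_firsts
```

Most of the time the second run looks worse with no change in the animal at all – exactly what a 5.6-second first run followed by a ~7-second re-run would show.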

How do you make decisions? How do you define your metrics, collect data and do your analysis?

Note: It appears from a different controlled experiment that pigs are indeed smarter. But if they are indeed so smart how did they end up as lunch?

What is worse than a meaningless Social Media metric?

The answer is: a meaningless social media metric presented in the form of an infographic. And it comes to us via MediaBistro from – not Social Media Gurus, no, not witch doctors, and if you guessed Social Media Scientists that would be a good guess but not correct – SocialBakers.

If you have not figured out the flaw in the metric (shown left), or you are one of those 311 people who tweeted the link, I will try to point out the problems with the bakers’ recipe.

The bakers give us a simplistic formula for what they arbitrarily define as Average Engagement Rate. This is not a model derived from theory, data collection and experimentation, but simply a formula thrown together from a mix of arithmetic operations. Disregard for a minute how it is computed (which is beyond ridiculous) and ask the following questions:

  1. What does this really mean in the context of your marketing spend?
  2. Why is this important?
  3. What is the marginal benefit from a change in this metric?
  4. More broadly, what does the curve Revenue = f(Average Engagement Rate) look like?
  5. What is the cost of moving the AER, if at all one could?
  6. Are the AERs of Facebook, Twitter, etc. additive?

Well, keep asking, for you are not going to find any such answers in the eye-candy infographic.

Now to the computation.

First, the units of this metric. It is stated as a percentage. And percentages are? Dimensionless. The numerator at best is a dimensionless ratio and at worst has a complicated unit of interactions per activity. The denominator has units of number of people – fans, followers, etc. In other words, the ratio is not a dimensionless quantity, because you are dividing a quantity that is NOT a number of people by a quantity that is a number of people. So how can anyone simply multiply this number by 100 and state it as a percentage? Like the Baker’s Dozen, they should call this the Baker’s Percentage.

Second, the ratio is specifically designed to show as low a value as possible, and hence room for improvement and potential sales. The numerator is divided by the total number of followers (or fans) a brand has, so the larger the number of followers, the smaller the magical AER. The thermometer in the infographic tells us the maximum any brand currently has is 0.7%. This also explains why they chose to multiply by 100 and call it a percentage: otherwise, 0.007 would look too awful for anyone to pay attention to.
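To see the dilution built into the ratio, here is a sketch of an AER-style calculation. The real SocialBakers formula is more convoluted; this assumed simplification – interactions divided by fan count, times 100 – is enough to show the effect:

```python
def aer_style_ratio(interactions, fans):
    """AER-style 'percentage': engagement divided by audience size.
    The numerator is not a count of people, so calling the result
    a percentage is already suspect."""
    return interactions / fans * 100

# Identical absolute engagement, ten times the fan base:
small_brand = aer_style_ratio(700, 100_000)     # ~0.7
big_brand = aer_style_ratio(700, 1_000_000)     # ~0.07
```

The bigger brand’s engagement has not changed at all, yet its “rate” collapses tenfold – and those are precisely the brands with the budget to “fix” it.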

Third, it is indeed a genius move in targeting: brands with millions of followers likely have higher marketing spend, and hence are likely to divert some of it to improving their measly AER.

There you have it: yet another pointless and wrong social media engagement metric, presented as a stunning infographic that, not surprisingly, found many takers.

Final note: if you are tempted by any of the social media engagement metrics that talk about anything except dollar values, you can cure that temptation by reading Ron Shevlin‘s book Snarketing 2.0.