How do you determine which interview candidate to hire? How do you evaluate the candidate you decided you want to hire (or decided you want to flush)?
How do you make a call on which group is performing better? How do you hold accountable (or explain away) bad performance in a quarter for one group vs. the other?
How do you determine the future revenue potential of a target company you decided you want to acquire (or decided you don't want to acquire)?
What metrics do you use? What data do you collect? And how do you analyze that to make a call?
Here is a summary of an episode of Fetch! with Ruff Ruffman, a PBS Kids TV show:
Ruff's peer, Blossom the cat, informs him that pigs are smarter than dogs. Not believing her and determined to prove her wrong, Ruff sends two very smart kids to put the claim to the test. The two kids go to a farm with a dog and a pig. They pick time taken to traverse a maze as the metric they will use to determine who is smarter. They design three different mazes:
- A real simple straight line (a very good choice, as this will serve as the baseline)
- A maze with turns but no dead-ends (increasing complexity)
- A maze with two dead-ends
Then they run three experiments, letting the animals traverse each maze one at a time and measuring the time for each run. The dog comes out ahead, taking less than ten seconds in each case, while the pig consistently takes more than a minute.
Let me interrupt here to say that the kids did not really want Ruff to win the argument. But the data seemed to show otherwise. So one of the kids changes the definition on the fly:
“Maybe we should re-run the third maze experiment. If the pig remembered the dead-ends and avoids them, it will show the pig is smarter, because the pig is learning.”
And they do. The dog takes about 7 seconds, compared to the 5.6 seconds it took in the first run. The pig does it in 35 seconds, half the time of its previous run.
They write up their results. The dog's performance worsened while the pig's improved. So the pig clearly showed learning and the dog didn't. The pig indeed was smarter.
We are not here to critique the kids. This is not about them. This is about us: the leaders, managers and marketers who have to make such calls in our jobs. The errors we make are not that different from the ones we see in the Pigs vs. Dogs study.
Are we even aware we are making such errors? Here are five errors to watch out for in our decision making:
- Preconceived notion: There is a difference between a hypothesis you want to test and proving a preconceived notion.
A hypothesis is, “Dogs are smarter than pigs”. So is, “The social media campaign helped increase sales”.
A preconceived notion is, “Let us prove dogs are smarter than pigs”. So is, “Let us prove that the viral video of a man on a horse helped increase sales”.
- Using the right metric: What defines success, and what “better” means, must be defined in advance and should be relevant to the hypothesis you are testing.
Time to traverse a maze is a good metric, but is it the right one for determining which animal is smarter? Smart or not, dogs have an advantage over pigs: they respond to a trainer's call and move in that direction, while pigs respond only to the presence of food. That seems unfair already.
Measuring a candidate's presence may be good, but is it the right metric for the position you are hiring for? Measuring the number of views on your viral video is good, but is that relevant to performance?
It is usually a bad choice to pick a single metric. You need a basket of metrics that, taken together, point to which option is better.
- Data collection: Are you collecting all the relevant data, or only what is convenient and available? If you want to prove Madagascar is San Diego, you will look only for white sandy beaches. If you stop after finding a single data point that fits your preconceived notion, you will end up taking a $9B write-down on that acquisition.
Was it enough to test one dog and one pig to make a general claim about dogs and pigs?
Was one run of each experiment enough to provide relevant data? (The first simulation after this list sketches how little a single run can tell you.)
- Changing definitions midstream: Once you decide on the hypothesis to test, the metrics and the experimental procedure, you should stick to them for the scope of the study and not change them when it appears the results won't go your way.
There is nothing wrong with changing a definition, but then you have to start over and be consistent.
- Analytics errors: Can you make sweeping conclusions about performance without regard to variations?
Did the dog really worsen, did the pig really improve, or was it simply regression to the mean?
Does the 49ers' backup quarterback really have a hot hand that justifies benching Alex Smith? What you see as a sales jump from your social media campaign could easily be due to the usual variations in sales performance. Did you measure whether the uplift is beyond those usual variations by comparing against a comparable baseline? (Both questions are sketched in the code below.)
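To make the regression-to-the-mean point concrete, here is a minimal simulation sketch in Python. The means, noise level and trial count are made-up numbers, not data from the show: it gives two equally "smart" animals noisy run times, crowns a winner on run 1, and checks what happens on run 2.

```python
import random

random.seed(42)

TRUE_MEAN = 30.0  # seconds; the SAME for both animals, i.e. no real skill gap
NOISE_SD = 8.0    # run-to-run variation in seconds
TRIALS = 10_000

winner_worsened = 0  # run-1 winner posts a slower time on run 2
loser_improved = 0   # run-1 loser posts a faster time on run 2

for _ in range(TRIALS):
    run1 = [random.gauss(TRUE_MEAN, NOISE_SD) for _ in range(2)]
    run2 = [random.gauss(TRUE_MEAN, NOISE_SD) for _ in range(2)]
    fast = 0 if run1[0] < run1[1] else 1  # the "dog" of this trial
    slow = 1 - fast                       # the "pig" of this trial
    if run2[fast] > run1[fast]:
        winner_worsened += 1
    if run2[slow] < run1[slow]:
        loser_improved += 1

print(f"run-1 winner got worse on run 2:  {winner_worsened / TRIALS:.0%}")
print(f"run-1 loser got better on run 2:  {loser_improved / TRIALS:.0%}")
```

Both percentages come out well above 50% even though neither animal learned anything: whoever happened to post the lucky first run tends to slip back, and whoever posted the unlucky one tends to "improve". One run per animal simply cannot separate skill from luck.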
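And here is an equally minimal sketch of the baseline check for the campaign question (all sales figures are invented for illustration). Before crediting the campaign, compare the post-campaign number against the ordinary week-to-week spread of the baseline:

```python
import statistics

baseline_weeks = [102, 97, 110, 95, 104, 99, 108, 93, 101, 106]  # weekly sales, pre-campaign
campaign_week = 112                                              # the "jump" we want to explain

mean = statistics.mean(baseline_weeks)
sd = statistics.stdev(baseline_weeks)
z = (campaign_week - mean) / sd  # how many baseline standard deviations above normal?

print(f"baseline mean = {mean:.1f}, sd = {sd:.1f}")
print(f"campaign week z-score = {z:.2f}")
```

On these invented numbers the z-score lands under 2, well within the range of an ordinary good week, so the data alone would not justify crediting the campaign. A real analysis would want more post-campaign weeks, a control region or a holdout group, but the discipline is the same: measure the uplift against the variation, not against zero.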
How do you make decisions? How do you define your metrics, collect data and do your analysis?
Note: It appears from a different, controlled experiment that pigs are indeed smarter. But if they are so smart, how did they end up as lunch?