What can businesses learn from …

This Monday you will read at least one article in this mold. A Google search will yield dozens of them. From popular management/business/startup gurus to anyone who knows how to open a WordPress blog, many will feel compelled to educate us masses about lessons that were hitherto unknown, that came into existence only after the final whistle blew in the World Cup final, and that are visible only to these writers, who find it in their grace to share that knowledge with us.

The articles will no doubt be very nicely written, like a listicle: key lessons called out, with some supporting text to go with each. If you do not feel our collective intelligence is insulted by that drivel, you will share the article around. I bet both versions are being written, if not already written. When the final score comes through, one only needs to hit Publish on one of the two.

Say Germany wins,

What can businesses learn from Germany’s World Cup victory?

Attack – Don’t wait for opportunity to come to you.

Plan-B – Always have a Plan-B when attack does not work.

Teamwork – They know what teamwork is. All-around teamwork, no individual star.

Hustle – Show mental toughness, tactical prowess, always be moving.

Plan for long term – This is the team that knows all 90+ minutes matter.

Rockstars don’t matter – Messi was a non-factor because Argentina lacked teamwork …

On the other hand if Argentina wins,

What can Argentina’s World Cup win teach us about business?

A small team of A+ players can run circles around a large team of B and C players – The problem with Germany’s team was that there was no single A+ player. They all played well, but none rose to the level of Messi or Higuain.

Don’t rest on your half-time laurels – The problem with Germany was their big win against Brazil.

Teamwork – Messi was the superstar, yet every time he was cornered no one was better than him at passing the ball to a teammate.

Be Hungry – The German uniform has three shades of red for their three past victories. They were not hungry anymore.

Whichever way it goes, we already know what we will see.

Can you look past this drivel come Monday morning?


Jack Reacher the Bayesian

A frequentist is one who thinks of probability as countable – that is, count the number of all possible outcomes and find the probability of a specific event as a fraction of those possibilities. Like 50% for heads in a coin flip. Talking of coin flips, I came across this passage in a Jack Reacher novel by Lee Child:

      First time: heads or tails? Fifty-fifty, obviously.
      So, second time: heads or tails? Exactly fifty-fifty again. And the third time, and the fourth time. Each flip was a separate event all its own, with identical odds, statistically independent of anything that came before. Always fifty-fifty, every single time.

Then the hero Jack Reacher computes the probability of four heads in a row, a simple multiplication of 1/2 × 1/2 × 1/2 × 1/2 = 1/16, as he correctly states that each coin flip is an independent event. Our hero then goes on to calculate the probabilities of events that matter to him:

    And Reacher needed four heads in a row. As in: Would Susan Turner get a new lawyer that afternoon? Answer: either yes or no. Fifty-fifty. Like heads or tails, like flipping a coin. Then: Would that new lawyer be a white male? Answer: either yes or no. Fifty-fifty. And then: Would first Major Sullivan or subsequently Captain Edmonds be in the building at the same time as Susan Turner’s new lawyer? Assuming she got one? Answer: either yes or no. Fifty-fifty. And finally: Would all three lawyers have come in through the same gate as each other? Answer: either yes or no. Fifty-fifty.

Now we have a few problems. Can you spot them?

First, the fact that there are only two outcomes should not be confused with their chances of occurring. For example, if you bought a Powerball ticket for tonight’s drawing, the next day you will wake up either a millionaire or not. So is your chance of winning 50-50? The same reasoning applies to the four outcomes Reacher is worried about – they are not 50-50 just because each has a yes-or-no answer.

The second problem is that not all four of the events Reacher is concerned about can be modeled as countable events. For the first, third, and fourth events described above, the relevant definition of probability is a measure of uncertainty; they should not be modeled as countable events.

The second outcome, “Would that new lawyer be a white male?”, can indeed be modeled as countable rather than as a measure of uncertainty, but it is still not 50-50.

But Lee Child, the author, is likely misleading us deliberately here. Starting with 50-50 in the absence of any prior information, and given only two outcomes, is actually not a bad start – provided you understand it is an uninformed estimate and make an effort to refine it with new information.

That is, you are moving from being a frequentist to being a Bayesian. And Jack Reacher does seem to be a Bayesian:

Statistics were cold and indifferent. Which the real world wasn’t, necessarily. The army was an imperfect institution. Even in noncombatant roles like the JAG Corps, it wasn’t perfectly gender-neutral, for instance. Senior ranks favoured men. And a senior rank would be seen as necessary, for the defence of an MP major on a corruption charge. Therefore the gender of Susan Turner’s new lawyer wasn’t exactly a fifty-fifty proposition. Probably closer to seventy-thirty, in the desired direction. Moorcroft had been male, after all. And white. Black people were well represented in the military, but in no greater proportion than the population as a whole, which was about one in eight. About eighty-seven to thirteen, right there.

And so on and so forth Reacher goes on refining the probabilities of the four outcomes with new information, as a Bayesian would.
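Here is a minimal sketch of that arithmetic, assuming the four events are independent. Strictly speaking this is a base-rate refinement of point estimates rather than a full Bayes’ rule update, but it captures the spirit. Only the 70/30 (male) and 87/13 (white) figures come from the excerpt; the remaining fifty-fifties are left as placeholders because the excerpt does not re-estimate them.

    # Reacher's naive estimate vs. a partially refined one, assuming independence.
    naive = [0.5, 0.5, 0.5, 0.5]      # "either yes or no, fifty-fifty"
    refined = [
        0.5,         # Turner gets a new lawyer that afternoon (not refined in the excerpt)
        0.7 * 0.87,  # the new lawyer is male (~70/30) and white (~87/13), roughly 0.61
        0.5,         # overlap with Sullivan or Edmonds (not refined in the excerpt)
        0.5,         # all three lawyers use the same gate (not refined in the excerpt)
    ]

    def joint(probabilities):
        result = 1.0
        for p in probabilities:
            result *= p
        return result

    print(f"naive joint probability:   {joint(naive):.4f}")    # 0.0625, i.e. 1/16
    print(f"refined joint probability: {joint(refined):.4f}")  # about 0.076

Refining just one of the four estimates with base rates already shifts the answer; refining the rest would shift it further, which is the whole point of updating instead of stopping at fifty-fifty.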

I am impressed.

Do you know your p(A), p(B) and p(C)s?


You better sit down for this

Most journal papers go unnoticed by the general public until a version of them, sans any of the caveats mentioned in the study, pops up in popular media. There is one such article from Quartz that gives us a dire warning about our sedentary work life. You better sit down for this, errr, stand up:

Why not even exercise will undo the harm of sitting all day

Quoting from a recently published meta-analysis of observational studies, the article says (emphasis mine),

Sitting can be fatal.

It’s been linked to cancer, diabetes, and cardiovascular disease. In this latest meta-analysis, Daniela Schmid and Michael F. Leitzmann of the University of Regensburg in Germany analyzed 43 observational studies, amounting to more than 4 million people’s answers to questions about their sitting behavior and cancer incidences. The researchers examined close to 70,000 cancer cases and found that sitting is associated with a 24% increased risk of colon cancer, a 32% increased risk of endometrial cancer, and a 21% increased risk of lung cancer.

Before we look at the numbers, let me tell you that I spend most of my workday standing up. I am fortunate enough to have a standing desk, and even when I did not have one I rigged one up by placing the monitor on a bookshelf and propping up boxes on the work desk. I should be happy to jump on this evidence and tout it as validation of my behavior. A part of me did that. But however favorable a piece of evidence looks, you need to question the method by which it was arrived at.

While the reported study was enough for some to make a case for standing up, some of us have the undesirable job of calling into question the hypothesis formation, the data, and the method.

Here are the problems I see in adopting this report:

  1. All the studies included in the meta-analysis are observational studies, not controlled experiments – which would be very hard, if not impossible, to run for sitting versus standing.
  2. The studies are based on self-reporting by participants, not on observations by experimenters – so it is hard to verify whether the subjects exercised as they reported.
  3. The meta-analysis clearly states this is just correlation – a point that is lost on Quartz. When you see a statement like “sitting is associated with a 24% increased risk of colon cancer”, you must ask, “is it likely that people are sitting down because of the illness?”
  4. Finally, what does the 24% increased risk mean? It is the relative risk that is increased. If people who sit 0 hours are at x% risk of getting colon cancer, those who sit 8 hours are at a risk of 1.24x%. Colon cancer is indeed the second leading cause of cancer-related deaths in the US, but you should note that its incidence rate is about 1 in 20, or 5%. And individual factors affect that rate far more.

So at 1.24 times the relative risk, if you are sitting down for 8 hours of work the incidence rate goes from 5% to 6.2%. But what other factors with higher relative risk should you be worrying about instead?
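To make the relative-versus-absolute distinction concrete, here is a back-of-the-envelope sketch, taking the roughly 5% incidence quoted above as the baseline (an assumption for illustration only).

    # Relative risk vs. absolute risk for prolonged sitting.
    baseline_risk = 0.05      # roughly 1 in 20
    relative_risk = 1.24      # reported association for prolonged sitting

    adjusted_risk = baseline_risk * relative_risk
    print(f"adjusted risk:     {adjusted_risk:.1%}")                  # 6.2%
    print(f"absolute increase: {adjusted_risk - baseline_risk:.1%}")  # 1.2 percentage points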

If you want a standing desk, get it. But don’t settle for correlational studies to make a case for it.

The message is that nobody can be trusted

Social media did not make us more gullible, did not create more of us who are gullible, nor did it create more of those who want us to be gullible. It merely made it easier for those who want us to be gullible to find us, and to find us in really large numbers. It helped them amplify our existing gullibility and comforted us that we are not alone in our following or in our failure to ask questions.

That is why we are bombarded every day with articles like these:

  1. Sleep your way to creativity
  2. Science(?) of retweets
  3. What can Spurs teach us
  4. The Innovator’s DNA
  5. Marketing lessons from Grateful Dead
  6. The list goes on – just check any of the blogs or the marketing self-help book sections

I want to recommend an antidote, something that will help us not instinctively embrace our gullibility – questions. What kind of questions? In a recent NPR Intelligence Squared podcast, arguing against the motion “Death Is Not Final”, Prof. Sean Carroll tells us this:

I hope it’s not insulting. It’s certainly not ad hominem because the message is that nobody can be trusted.

I think that that is part of what science has taught us, that if someone makes an extraordinary claim, the very first questions we should be asking ourselves are, number one, is there a different, simpler alternative explanation? And number two, how would we know if our purported explanation were false?

If you are not asking for an alternative explanation or seeking data that would falsify what you read, you are continuing to feed the gullibility machine. Asking questions makes the gullibility machine uncomfortable. It handles questions by labeling you theoretical, a non-doer, a professor who knows nothing about business, a lizard brain, …

Don’t feed the gullibility machine!


5 Things Woefully Wrong in Google Shopping Express Survey

Google Shopping Express is a same-day (or next-day) shopping service from Google. If you have not seen it in your city, that is because you are not in the handful of cities it is being tested in. If you are in the Valley, it needs no further introduction. It is offered free for at least six months – that is, no delivery charges, even when you order just a can of chickpeas or order four different cans of chickpeas from four different stores.

I believed this was all a big experiment by Google – something more than new business development. Maybe an experimental platform for them to learn more about us and our buying behavior, or to test their logistics algorithms, etc. It turns out they really are seeing Google Shopping Express as a line of business, something they want to make a profit from.

I know this because of the recent survey they sent out, asking about willingness to pay, likelihood of picking different versions, purchasing behavior, etc. And they offered $10 for taking the survey. What a survey it is! After such a huge investment to test the service, likely spending large sums on operating expenses, and being known for having a supreme data collection and analytics engine, they sent out a supremely mediocre survey to help decide something as important as the next billion-dollar business for Google.

I guess after all the infrastructure costs, driver salaries, and $10 bills for responses, they ran out of cash to do a professional survey. Here are five things I find wrong with the survey – things your business should learn from and avoid, whether DIY or when hiring a professional. (There are actually 25 things wrong with the survey, but I hear that if I split them into five different articles I get more page views.)

  1. Don’t ask respondents what prices they will pay
    We the customers do not know the value – the value of a single delivery, or the sum of the values of many deliveries over the year – well enough to name a price. Asking for four different prices, especially the price at which we would consider the service too cheap, is silly at best.
  2. Do not ask in a survey what you can find from secondary sources
    Last I checked, we have been shopping for fresh produce and dairy for decades. Such data is already readily available – in volume, velocity, and variety (take that, #bigdata). Seeking data that is readily available wastes a survey question, and the answers will pale in comparison to the POS, US retail, and US Census data already out there.
  3. Keep the questions on a page related and in context
    Three unrelated questions – the third one there only to prevent machines from filling out the survey to collect the $10 reward. Besides, they repeat the mistake of asking about customer intent and expected frequency of purchase.
  4. Do not impose a significant cognitive cost on respondents – that is, keep it simple so we can grok it with no effort
    It may look like there are only four options, but pay attention to what they are asking us to do. We are expected to compare different delivery fees, delivery windows, etc., all at once. This is very likely meant to be a conjoint analysis question. Conjoint done wrong! (See the sketch after this list for what a cleaner choice task could look like.)
  5. Do not make free-form text questions mandatory
    I understand you want to give us the option to write an essay about whatever we think you didn’t consider asking. But please make that optional, and do not force respondents to type an answer. What do you think most people are going to type – especially after a really long, tiring survey like this one? asdf?
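For contrast, here is a minimal sketch of how a choice-based conjoint task is usually framed – a few small “pick one” questions rather than one dense grid. The attribute names and levels below are illustrative assumptions, not Google’s actual options.

    # A sketch of a choice-based conjoint setup with assumed attributes and levels.
    import itertools
    import random

    attributes = {
        "delivery fee":    ["$0", "$2.99", "$4.99"],
        "delivery window": ["2-hour window", "4-hour window", "anytime today"],
        "speed":           ["same-day", "next-day"],
    }

    # Full factorial of candidate profiles; a real study would prune this down
    # to a smaller fractional design before fielding it.
    profiles = [dict(zip(attributes, combo))
                for combo in itertools.product(*attributes.values())]

    random.seed(7)

    # Each task shows a small set of profiles and asks for one choice, instead of
    # asking respondents to weigh every fee/window combination in a single grid.
    for task_number in range(1, 4):
        options = random.sample(profiles, 3)
        print(f"Task {task_number}: which ONE would you choose?")
        for option in options:
            print("  - " + ", ".join(f"{name}: {level}" for name, level in option.items()))

Answers to a handful of such tasks are what let you estimate how much each attribute level is worth to respondents, rather than asking them to do that math in their heads.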

You wonder how a data-driven company can resort to what looks like a DIY survey to decide the fate of what could be the biggest competition to Amazon and deliver billions in new revenue.

If the outcome is important to you, shouldn’t you take the time to do it right?

A few #fitness hypotheses I want to test with @MyFitnessPal data

I have been using the MyFitnessPal app for the past 14 or so months – using it every day to meticulously log every food intake and exercise. It is almost a habit now to do the data entry just before digging in or right after.

The app has been a tremendous help in making me aware of how easy it is to overeat – there is calorie-rich food everywhere, and the calorie density (calories per volume) is highest in the foods you least expect and most like – like those Specialty’s cookies and Panera sandwiches. Having to log every bite I eat gave me visibility into these foods and eventually led me to reduce and avoid them. Note that I did not say anything about good vs. bad calories; the app does not help you there yet, and I will write more on that later.

And the result you have been anxiously waiting for? I did lose somewhere between 15 and 20 lbs since I started using the app. But I am not assigning it causation, nor should you. My journey started six months before I started using MyFitnessPal. The initial motivation, drive, and discipline existed before the app. You could say that precondition is what led me to search for and use an app like MyFitnessPal. Besides using the app, I made several changes to my daily routine, cuisine, etc. Having clarified that, I still believe this is a service I would gladly pay a monthly subscription price for.

I am not the only one entering intake data; millions of people are doing the same. Everyone is entering, almost every day,

  1. Mix of breakfast, lunch, snacks and dinner foods
  2. Exercises they do
  3. Their weekly weights

So MyFitnessPal is sitting on a treasure trove of data (dare I say #bigdata?) that I believe can be put to good use in the name of fitness science. I do not mean data dredging through it to tease out interesting correlations, but starting with specific, informed hypotheses and testing them with random sampling. All I need is 300-400 randomly sampled user profiles to test the following hypotheses (a sketch of one such test follows the list):

Fitness Hypotheses to Test

  1. When people exceed the calorie limit they set themselves for the day, they exceed it by a significant margin (20-40%) and definitely not by a mere 1-10%
  2. A limited, specific set of food categories constitutes a significant portion of the daily calorie budget
  3. For those who exercise regularly (as shown by their logs), the days they skip exercise are also the days they eat extremely poorly
  4. Eating within the limit is a far better predictor of weight loss than exercise
  5. Among those who lost weight, a larger portion of calorie intake comes from homemade food and less from packaged, processed food.
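Here is a minimal sketch of what a test of the first hypothesis could look like on a random sample of roughly 350 profiles. The numbers below are simulated placeholders, since I obviously do not have MyFitnessPal’s logs; with real data, each value would be a user’s average overshoot fraction on days they exceeded their goal.

    # Sketch: is the typical overshoot well above 10%, in the 20-40% band?
    import random
    import statistics

    random.seed(42)

    # Placeholder data: one average overshoot fraction per sampled user (n ~ 350).
    overshoot = [max(0.0, random.gauss(0.30, 0.15)) for _ in range(350)]

    mean_overshoot = statistics.mean(overshoot)

    # Bootstrap a 95% confidence interval for the mean overshoot.
    boot_means = sorted(
        statistics.mean(random.choices(overshoot, k=len(overshoot)))
        for _ in range(2000)
    )
    low, high = boot_means[50], boot_means[1950]

    print(f"mean overshoot: {mean_overshoot:.1%} (95% CI {low:.1%} to {high:.1%})")
    # Hypothesis 1 survives only if the interval sits well above 10%.
    print("consistent with hypothesis 1:", low > 0.10)

The same pattern – state the threshold up front, then check whether the sampled estimate clears it – applies to the other four hypotheses as well.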

That is it for now. Next I will write about a few product enhancements their product management team should consider that could lead them to monetization through a subscription service. I sincerely hope their business model involves adding value to users and getting paid for it in the form of subscriptions, rather than selling their data to marketers.