## Because I am all about the base rate

’Tis the season for predictions. If one has an audience, one seems compelled to make predictions. You are better off reading the book Superforecasting than this article. The book explains in depth the simplest elements you need to make predictions and forecasts.

It starts with the base rate: how frequently the event in question happens in general, relative to all other events. For example:

1. What percentage of tweets are retweeted?
2. What percentage of people are named Bill?
3. What percentage of startups achieve \$1B valuation?
4. What are the chances of you winning Survivor when you start the season with 19 others?

The next step is an iterative process that refines this prior knowledge: you seek new information and update your estimate. The result is the posterior probability.
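These two steps can be sketched in a few lines of code. A minimal sketch, using entirely made-up numbers: a 1% base rate of startups reaching a \$1B valuation, and a hypothetical signal observed in 30% of eventual unicorns but only 5% of the rest.

```python
def posterior(prior, p_evidence_given_true, p_evidence_given_false):
    """Bayes' rule: refine a base rate (prior) with new information."""
    numerator = prior * p_evidence_given_true
    denominator = numerator + (1 - prior) * p_evidence_given_false
    return numerator / denominator

# Base rate (prior): say 1% of startups reach a $1B valuation.
# New information: a signal seen in 30% of unicorns, 5% of the rest.
estimate = posterior(0.01, 0.30, 0.05)
print(round(estimate, 4))  # 0.0571 -- refined upward, but still far from likely
```

Each new piece of evidence feeds the previous posterior back in as the next prior, which is the iterative refinement the book describes.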

Most likely you won’t read the book, so I present these two concepts here, set to the tune of Meghan Trainor’s song.

Because you know I’m all about that base-rate

‘Bout that base-rate, no tails

‘Bout that base-rate, no tails

‘Bout that base-rate, no tails

‘Bout that base-rate… base-rate… base-rate… base-rate

Yeah, it’s pretty clear, I ain’t no sigma two

But I can predict it, predict it, like I’m supposed to do

‘Cause I got that Bayesian that all the gurus chase

And all the right tunables in all the right places

I see the magazine workin’ that Crystalball

We know that shit ain’t real, come on now, make it stop

If you got logic, logic, just raise ’em up

‘Cause every inch of you is curious from the bottom to the top

Yeah, my mama she told me “don’t worry about your data size”

(Shoo wop wop, sha-ooh wop wop)

She says, “Bayesians like a little more posterior to hold at night”

(That booty, uh, that booty booty)

You know I am wont to be stick figure xkcd comic doll

So if that’s what you’re into, then go ‘head and move along

## What you can’t change, don’t measure

How many times have you heard this phrase thrown at you by anyone and their guru? No, I am not asking you to keep a measure (count) of that. Whether it was a consulting firm or someone doing a time-and-motion study that popularized it, I don’t know its origins. The statement has become a standard quip from someone, usually someone in power and position with a nice title, trying to sound data-driven. What that has led to is a world where we collect anything and everything that moves – from measuring sleep patterns with wearable fitness devices to up-to-the-minute updates of the sales pipeline.
We now have a new product positioning for this craze – Fitbit for xyz, named after the wearable pedometer that tracks the number of steps you take and the number of minutes you stay restless in your sleep. With a gauge for everything, collecting data all the time, the craze has escalated into utter madness – Big Data.

So I urge you, before you put a Fitbit on your wrist or a Fitbit for customer experience on your business, to stop and ask a few basic questions:

1. Can I change what I seek to measure?
2. Even if I can change it, can I change it at the same cadence at which I seek to measure it?
3. If I can change it fast enough, will the change have a meaningful impact on the true measures of business performance? Sure, you can change the number of retweets, blog mentions, video views, “how likely to recommend on a 0-to-10 scale”, etc., but does it matter to revenue, marketing ROI, pricing effectiveness and profit?
4. Can I measure it for less than the change is worth?
5. Even if the answer is yes to all the questions above, you need to ask: what other metrics could I be collecting with my limited resources?

What do you measure? I hope the answer is, “I measure what I can change to make a meaningful positive impact on my business objectives.”
What you can’t change, don’t measure.

## You’re stuck in an elevator with someone who read a tweet about a study on women on boards

Let me start by repeating what I wrote a while back on the faulty analysis by Credit Suisse:

Here is an undeniable fact – considering that ~50% of the population is women while corporate boards are nowhere near 50% women, we can safely say women are underrepresented on corporate boards. Now let us return to the reported study in question, which makes a faulty case for adding women board members. My arguments are only with the errors in the research methods and its application of faulty logic. Nothing more.

I will add to this my case against those who keep quoting such studies as evidence for their side.

In that article Ms. LaFrance makes a point-by-point argument against what seem to be silly questions from a clueless elevator companion who fell in love with Ms. Lacy’s post. For one such question, Ms. LaFrance quotes as evidence the research by Thomson Reuters:

You should check out this study from earlier this year that showed how diverse corporate boards outperform those with no women. You’d think that a company like Twitter would put its business interests first.

She isn’t alone in quoting this study; almost everyone taking Mr. Vivek Wadhwa‘s side uses it. I am not sure how many have read the report or looked at its methods and caveats. Let me do that in this article.

Here is the link to the said research report.

1. Does the board matter?: The study starts with the unverified assumption that a company’s board matters to its performance and then goes on to look for differences in performance between boards. If the hidden hypothesis you took for granted is false, it does not matter what your stated hypothesis is.
What the study does is: if A = TRUE, then A(with women) > A(without women).
You can see that if A = FALSE, the rest does not matter. You might want to stop here, as nothing else matters after this error.
2. Control variables: When you want to study the influence of a single variable, you want to make sure all other variables are held constant. But when you read this report, it is clear that they had no way to do that. They started with the composition of company boards in 2007 and then compared the performance of groups of companies over a period. There are too many uncontrolled variables during this period (tech trends, market trends, industry verticals, etc.), and these affected different companies differently.
3. Error in comparing averages?: The comparisons are done on averages. There is a group of companies with mixed boards and another with no women on their boards. The two groups are compared against a third group, the benchmark, which consists of companies of both kinds.

The report says companies with women on their boards did marginally better than or the same as the benchmark, while those with no women on their boards did 10-15% worse than the benchmark.

First you notice that the difference in performance is not as significant as those who quote the study suggest. Next you want to ask a simple clarifying question: if the benchmark contains both types of companies and one subset underperforms by 10-15%, shouldn’t the other subset outperform by 10-15% to bring it back to the benchmark average?

One explanation is that the average hides details here. There may be a few companies on each side that differ significantly from the arithmetic mean of their group and account for the difference. If you leave out these samples and compare again, the difference will likely vanish.
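There is also a simpler arithmetic possibility, sketched below with made-up weights (the report does not state group sizes this way): the benchmark is a weighted mixture, so if the no-women group is small, the mixed-board group only needs to outperform slightly to pull the benchmark back to average.

```python
# Hypothetical split, not from the report: 80% mixed boards, 20% no women.
w_mixed, w_none = 0.80, 0.20
r_none = -0.12  # the no-women group runs 12% below the benchmark

# The benchmark is the weighted average of the two groups, so its excess
# return is zero: w_mixed * r_mixed + w_none * r_none = 0. Solve for r_mixed:
r_mixed = -w_none * r_none / w_mixed
print(f"{r_mixed:.1%}")  # 3.0% -- "marginally better", consistent with the report
```

Unequal group sizes alone can produce a big underperformance on one side and only a marginal outperformance on the other.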

Now to the question of what to do when you are stuck in an elevator with someone who merely heard about the Thomson Reuters study.

Just smile and nod.

## Comparing Tails Vs. Comparing Means

Here is data posted at the local YMCA for the top 30 performers for the month of August. The images show the total weights lifted by the top 30 performers in the men’s and women’s categories.

Do you think you can compare these two sets of data and go on to make a generalized statement about men’s or women’s strength?

Be careful here.

First, this is data only from those who chose to go to the YMCA to lift weights and chose to have their results recorded.

Second, these are the top 30 people – the right tail. You can’t compare extremes and make a generalized statement about the population. For that you have to compare means.
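A small simulation, with made-up numbers and no connection to the actual YMCA data, shows why the right tail misleads: two populations with the same mean but different spreads still show a large gap among their top 30.

```python
import random

random.seed(42)
# Two populations with IDENTICAL means (100) but different spreads.
pop_a = [random.gauss(100, 25) for _ in range(10_000)]
pop_b = [random.gauss(100, 10) for _ in range(10_000)]

def mean(xs):
    return sum(xs) / len(xs)

def top30(xs):
    return sorted(xs, reverse=True)[:30]

print(round(mean(pop_a) - mean(pop_b), 1))                # near 0: means barely differ
print(round(mean(top30(pop_a)) - mean(top30(pop_b)), 1))  # big gap, from variance alone
```

Comparing the top 30 tells you about the tails (the spread), not about the typical member of either population.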

How many such measurement errors do we commit when comparing the effects of marketing campaigns and sales programs?

## You are smart but can you make money?

It may sound like children’s taunting; make no mistake, it is. We have all heard it before, directly or indirectly. It is the childish taunt,

“Well if you are so smart, why aren’t you making more money than I do?”

Some researchers asked exactly that question. Well, not exactly, and definitely not of their fellow researchers with more citations but less income than themselves. They asked,

“Does intelligence drive future career success as measured by income?”

Instead of doing just one study, they did a meta-analysis – a statistical summary of 85 existing data sets collected by other similar studies – and concluded that

“There is only a 20% correlation between intelligence and future income, showing very low predictability.”

Not to mention that correlation is not an indication of causation.

Well, it still does not answer the original question we face, but at least we can explain why only 4% of the variation in income can be explained by changes in intelligence: variance explained is the square of the correlation, and 0.2² = 0.04.
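A quick simulation, with made-up data unrelated to the actual meta-analysis, shows how weak an r of 0.2 really is:

```python
import random

random.seed(0)
n = 50_000
# Standardized "intelligence" scores, and incomes built so that the true
# correlation is 0.2 (a made-up generative model, for illustration only).
iq = [random.gauss(0, 1) for _ in range(n)]
income = [0.2 * x + random.gauss(0, (1 - 0.04) ** 0.5) for x in iq]

def corr(xs, ys):
    m = len(xs)
    mx, my = sum(xs) / m, sum(ys) / m
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / m
    sx = (sum((x - mx) ** 2 for x in xs) / m) ** 0.5
    sy = (sum((y - my) ** 2 for y in ys) / m) ** 0.5
    return cov / (sx * sy)

r = corr(iq, income)
print(round(r, 2), round(r * r, 2))  # roughly 0.2 and 0.04
```

An r of 0.2 means intelligence accounts for about 4% of the variance in income; the other 96% is everything else.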

One more thing: the next time you hear advice about “hard work, do your best, etc.”, think of this statistic. There is more to getting ahead than just intelligence.

## 3 Articles on Statistics This Week That are Worth Your Time

Here are three articles this week that use or quote statistics wisely.

1. Connection between women wearing red and fertility: Slate asks the right questions about representativeness bias and how fertility was measured. It points out the major flaw that the study looked for the hypothesis that best fit the data ex post. That is like looking for more white sand to confirm that Madagascar is indeed San Diego.
2. Phil Mickelson’s regression analysis: Mickelson thinks putting distance alone determines the chances of making a putt – not the variance in the field, slope, weather, etc., just the distance. That is his prediction model. However, the truth is unknown unless someone has looked at his data in aggregate.
3. Causal link between stop-and-frisk and murder rates: Ira Glasser, ex-Executive Director of the ACLU, writes this letter to The Journal questioning the causal link.