Too Loud!

Filter Out That Noise!
If you’ve been following this blog for a while, you know I’m a big proponent of control charts. Why? Because they help filter out noise and identify signals in a data set that contains variation. And that’s just about every data set I’ve ever seen since leaving engineering school, because the real world is noisy.

Briefly, “signals” are either:

  1. Outliers – much higher or lower than typical (as determined by the data itself, not some arbitrary goal set by management)
  2. Trends – 6 or more points in a row either all increasing or all decreasing, or
  3. Shifts – 9 or more points in a row either all above or all below the mean.
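For the code-curious, here’s a rough sketch of those three rules in Python. It’s a toy helper of my own, not pulled from any SPC package, and it uses the textbook individuals-chart trick of setting the limits from the average moving range:

```python
import statistics

def find_signals(values, trend_len=6, shift_len=9):
    """Flag outliers, trends, and shifts in a series of measurements,
    individuals-chart style (limits derived from the average moving range)."""
    mean = statistics.mean(values)
    # Average moving range / 1.128 is the usual sigma estimate for an XmR chart
    moving_ranges = [abs(b - a) for a, b in zip(values, values[1:])]
    sigma = statistics.mean(moving_ranges) / 1.128
    ucl, lcl = mean + 3 * sigma, mean - 3 * sigma  # upper/lower control limits

    signals = []

    # Rule 1: outliers, single points beyond the control limits
    for i, v in enumerate(values):
        if v > ucl or v < lcl:
            signals.append((i, "outlier"))

    # Rule 2: trends, trend_len or more points in a row all rising or all falling
    for i in range(len(values) - trend_len + 1):
        window = values[i:i + trend_len]
        diffs = [b - a for a, b in zip(window, window[1:])]
        if all(d > 0 for d in diffs) or all(d < 0 for d in diffs):
            signals.append((i, "trend"))

    # Rule 3: shifts, shift_len or more points in a row on one side of the mean
    for i in range(len(values) - shift_len + 1):
        window = values[i:i + shift_len]
        if all(v > mean for v in window) or all(v < mean for v in window):
            signals.append((i, "shift"))

    return signals
```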

Special vs. Common Cause Variation
Because these events aren’t very likely to appear in a randomly generated data set, they are flagged as “special”. Everything else is just noise, or “common cause” variation. You can imagine what would happen if we reacted to every bit of noise. And who would want to miss a true signal in the data?

People in manufacturing love control charts (or at least the smart ones do) because signals tell them right away if something has changed on the shop floor. And catching these changes early can help the company avoid a ton of scrap. I’ve seen it happen and it’s a beautiful thing.

For the Board Room?
But what about sales data? Could this tool apply to one of the most important figures a company tracks? A sales rep’s commission check depends on hitting sales growth targets, which are almost always given in terms of percent change (e.g., sell 15% more than you sold last year or last quarter). Whether or not the change is statistically significant hardly matters to a rep. It’s not about stats, it’s about dollars in the rep’s pocket. I get that.

But could there be times when the way a rep is measured and compensated violates the signal vs. noise principle?
Could some reps benefit from pure noise, and others be punished in spite of clear signals?

The following (fictitious) example shows just how it could happen:
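Say two reps put up these (completely made-up) monthly numbers, and we check them two ways: the percent-change math the comp plan uses, and the outlier test from the control chart:

```python
import statistics

# Entirely made-up monthly sales for two reps, last year vs. this year.
# Rep A grows nicely all year, then falls off a cliff in December.
# Rep B just wobbles around his usual mean (pure noise).
rep_a_last = [100, 102, 101, 103, 100, 104, 102, 101, 103, 102, 101, 100]
rep_a_this = [110, 112, 111, 113, 110, 114, 112, 111, 113, 112, 111, 55]
rep_b_last = [100, 108, 95, 104, 99, 106, 97, 103, 100, 105, 96, 102]
rep_b_this = [97, 100, 104, 95, 103, 98, 105, 99, 102, 96, 104, 98]

def pct_change(last, this):
    """Year-over-year percent change: the number a rep gets paid on."""
    return 100 * (sum(this) - sum(last)) / sum(last)

def outliers(values):
    """Months beyond the 3-sigma limits of an individuals chart."""
    mean = statistics.mean(values)
    sigma = statistics.mean([abs(b - a) for a, b in zip(values, values[1:])]) / 1.128
    return [(i + 1, v) for i, v in enumerate(values) if abs(v - mean) > 3 * sigma]

print(f"Rep A: {pct_change(rep_a_last, rep_a_this):+.1f}% YoY, signals: {outliers(rep_a_this)}")
print(f"Rep B: {pct_change(rep_b_last, rep_b_this):+.1f}% YoY, signals: {outliers(rep_b_this)}")
# Rep A: +5.3% YoY, signals: [(12, 55)]  (a real December drop, hidden by a healthy annual number)
# Rep B: -1.2% YoY, signals: []          (a "down" year that is nothing but noise)
```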

Did you see how that worked? 
The real drop in sales only occurred in the last month of the year, and it came from the rep whose sales had actually increased on a year-over-year percent-change basis. The other rep may not have been anyone’s hero, but all he did was deliver his usual performance, at least statistically speaking. And put those totals on a bar chart with a non-zero y-axis, and an insignificant change looks significant: an “optical delusion”.

Of course, I doubt the sales analytics world will run out and purchase an SPC package and start uploading their data. But imagine if they did: rather than waiting until the end of the quarter (or the year!) to learn that a certain rep or key account needs attention, they’d know right when the change happened.

Time Between Sales
Instead of simply counting units sold during a particular period (month, quarter, year), they could track time between sales as a key metric. The account might be ordering less and less often, but not enough to be reflected in the quarterly sales figures yet. If a company has hundreds or thousands of accounts, how are they supposed to know which ones are “on the move” and which ones are “business as usual”?
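Here’s a rough sketch of what that could look like, assuming all we can get for each account is a list of order dates. The baseline-average comparison below is my stand-in for proper control limits on the gaps, but the idea is the same shift rule as before:

```python
import statistics
from datetime import date, timedelta

def days_between(order_dates):
    """Gaps, in days, between consecutive orders."""
    d = sorted(order_dates)
    return [(b - a).days for a, b in zip(d, d[1:])]

def slipping(order_dates, baseline=6, shift_len=9):
    """Apply the shift rule to order gaps: flag an account when its last
    shift_len gaps are all longer than its baseline average gap."""
    gaps = days_between(order_dates)
    if len(gaps) < baseline + shift_len:
        return False  # not enough history to judge
    baseline_mean = statistics.mean(gaps[:baseline])
    return all(g > baseline_mean for g in gaps[-shift_len:])

# Hypothetical account: roughly monthly orders that slowly stretch out
orders = [date(2023, 1, 5)]
for gap in [30, 29, 31, 30, 28, 31, 33, 34, 36, 38, 41, 44, 47, 50, 53]:
    orders.append(orders[-1] + timedelta(days=gap))

print("gaps:", days_between(orders))   # the drift is easy to miss by eye
print("slipping?", slipping(orders))   # True: flag this account now, not next quarter
```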

If they only look at percent change, they are sure to be fooled, at least some of the time.

These are tricks the gearheads running the production line figured out a long time ago. I wonder if they’ll make their way upstairs to the board room one day. Maybe they already have.

Thanks for reading!
Ben