I suffer from cluster headaches. Every year and a half or so I live through a month or two of daily episodes of excruciating pain that calls for gobs of Excedrin, quite a lot of Sumatriptan, gratitude for the existence of automatic ice makers, and the inescapable sensation that I’m taking dumb pills.

No, I’m not looking for remedies, empathy, or sympathy. Let’s skip directly to pity.

Or skip it altogether. Now that I have your attention, let’s move on to pain’s relevance to your day-to-day working life.

Pain evolved to get our attention when something is wrong that needs fixing. Which is why migraines, cluster headaches, and their kindred maladies are so annoying: The only thing that’s wrong is the headache itself.

Still don’t see the relevance?

Biologically speaking, pain is an indicator. It’s like a blinking light on a control panel that tells the brain something isn’t working the way it’s supposed to work. The brain needs this mechanism because the body is way too complicated for the brain to directly monitor all of its components.

So instead animal physiology includes receptors scattered throughout a critter’s anatomy. The brain doesn’t have to monitor. What requires attention calls for attention.

Only it’s easy to ignore a blinking light. Pain is designed to be hard to ignore. Pain says something isn’t working the way it’s supposed to work, so please do something about it RIGHT NOW!

KPIs (key performance indicators) and their metrics-based brethren are, for managers, what pain is for the brain. They’re a way for managers to know something needs attention without their having to directly monitor everything in their organization.

But (here’s the big tie-in! Drum roll please) … KPIs share both the blinking light’s and migraine’s limitations.

They’re blinking lights in the sense that when a KPI is out of bounds, it’s just a number on a report, easy to ignore in the press of making sure work gets out the door when it’s supposed to.

There’s nothing attention-getting about a KPI. It’s just a blinking light. Unless, that is, a manager’s boss decides to inflict some pain … perhaps “discomfort” would be in better taste … when a KPI is out of bounds.

KPIs can also be migraines, though, in the sense that it isn’t uncommon for a KPI to be out of spec without anything at all being actually wrong.

Migraine KPIs can happen for any number of reasons. Among the most important is the reason the first, and arguably best, quality improvement methodology was called “statistical process control.”

Many KPIs are, that is, subject to stochastic variability, stochastic being a word every process manager should be able to use correctly in a sentence without first having to look it up on Wikipedia.

Sometimes a KPI is out of range because the effect it’s supposed to measure is the consequence of one or more causal factors that vary more or less randomly. Usually their variance is within a close enough range that the KPI is reasonably reliable.

But, stochasticity being what it is, not always. If the KPI looks bad because of simple random variation, the worst thing a process manager can do is try to fix an underlying problem that isn’t actually there.

The fixes can and often do push the KPI in question, or a different, causally connected KPI, out of range when process inputs return to normal.
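The statistical-process-control idea can be sketched in a few lines: derive control limits from a KPI’s own history, and investigate only the points that fall outside them. The thresholds, window, and sample numbers below are illustrative assumptions, not a prescription:

```python
from statistics import mean, stdev

def control_limits(history, sigmas=3):
    """Shewhart-style limits: mean +/- N standard deviations.
    Points inside the limits are treated as ordinary stochastic
    variation; only points outside warrant investigation."""
    m = mean(history)
    s = stdev(history)
    return m - sigmas * s, m + sigmas * s

def out_of_control(history, value, sigmas=3):
    """True when a new reading falls outside the control limits."""
    lo, hi = control_limits(history, sigmas)
    return value < lo or value > hi

# Hypothetical KPI history: average ticket-resolution times (hours)
baseline = [4.1, 3.8, 4.3, 4.0, 3.9, 4.2, 4.1, 3.7, 4.4, 4.0]
print(out_of_control(baseline, 4.5))  # inside the limits: just noise
print(out_of_control(baseline, 9.0))  # outside the limits: worth a look
```

The point of the sketch is the asymmetry: a reading like 4.5 is noise to be left alone, while 9.0 is a signal. Reacting to the noise is exactly the “fixing” that pushes the process out of range later.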

As long as we’re on the subject of pain, you don’t have to have any for something to be wrong with you, which is why most of us have a medical check-up every so often, even when we feel just fine.

KPIs can be like this too. The IT trade is replete with managers who meet every service level they’ve agreed to and as a result think everything is fine when in fact it’s falling apart. Help desks are particularly prone to this, because of a phenomenon the users who contact an offending help desk know about but the help desk manager doesn’t: Because they’re usually measured on ticket closures, help desk staff close tickets whether or not they’ve actually solved a user’s problem.

It’s the first rule of metrics: You get what you measure. That’s the risk you take.

Metrics are less useful than you’ve been told. Even the best are just ratios that tell you whether you’re making progress toward a well-defined goal.

But not why, how, or what to do if you aren’t. As last week’s KJR pointed out, not only are metrics not explanatory on their own, but in most cases a change in a metric won’t have a single root cause. If, for example, you’re losing market share, you might have:

  • Missed a complete marketplace shift.
  • Lousy advertising.
  • No advertising, lousy or otherwise.
  • Poor quality products.
  • Deplorably ugly products.
  • Products that lack key features competitors have.
  • Hapless distributors.
  • Hapful distributors who like your competitors better.
  • A customer disservice hotline.

To list just a few possible causes, none of which are mutually exclusive.

Which is to say, root cause analysis is a multivariate affair, which is why analytics is, or at least should be, the new metrics.

But while multivariatism is an important complicating factor when business decision-makers have to decide what to do when success isn’t happening the way it should, it isn’t the only complicating factor.

Far more difficult to understand in any quantitative fashion is the nasty habit many business effects have of causing themselves.

Many cause-and-effect relationships are, that is, loops.

These feedback loops come in more than one flavor. There are vicious and virtuous cycles, and there are positive and negative feedback loops.

In business, the cycles you want are the virtuous ones. They’re where success breeds more success. Apple under Steve Jobs was, for example, a successful fanbody fosterer. (Don’t like “fanbody”? Send me a better gender-neutral alternative).

The more fanbodies Apple has, the cooler its products are, making it more likely the next electronics consumer will become another Apple fanbody.

These loops work in reverse, too: Start to lose market share and a vicious cycle often ensues. Corporate IT pays close attention to this effect: When choosing corporate technology standards, products that are failing in the marketplace are undesirable no matter how strong they might be technically. Why? Because products that are losing share are less likely to get new features and other forms of support than competing products.

So IT doesn’t buy them, and so the companies that sell them have less money to invest in making them competitive and attractive, and so IT doesn’t buy them.

A frequently misunderstood nicety: virtuous and vicious cycles are both positive feedback loops. In both cases an effect causes more of itself.

Negative feedback loops aren’t like that. Negative feedback as the term is properly used is corrective. With negative feedback loops, an effect makes itself less likely than it was before.
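The two loop flavors can be made concrete with a toy simulation. The starting values, gains, and correction rates below are made up purely for illustration; the shape of the trajectories is what matters:

```python
def positive_feedback(x0, gain, steps):
    """Each period's outcome amplifies the next: the same loop is
    virtuous (gain > 1) or vicious (gain < 1) depending on direction."""
    xs = [x0]
    for _ in range(steps):
        xs.append(xs[-1] * gain)
    return xs

def negative_feedback(x0, setpoint, correction, steps):
    """Each period corrects part of the gap back toward a norm:
    deviations make themselves less likely, not more."""
    xs = [x0]
    for _ in range(steps):
        xs.append(xs[-1] + correction * (setpoint - xs[-1]))
    return xs

# Market share starting at 10%: compounding up vs. compounding down.
print(positive_feedback(10.0, 1.2, 5))  # virtuous cycle: grows each period
print(positive_feedback(10.0, 0.8, 5))  # vicious cycle: shrinks each period
# A deviation from a behavioral norm of 100, half-corrected each period:
print(negative_feedback(130.0, 100.0, 0.5, 5))  # settles back toward 100
```

Note that both virtuous and vicious cycles come out of the same `positive_feedback` function; only the gain differs. The corrective, norm-seeking behavior lives in `negative_feedback`, which is the mathematical version of co-workers frowning someone back into line.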

Take business culture. It’s self-reinforcing. When someone strays from accepted behavioral norms, their co-workers disapprove in ways that are clear and punitive.

Want an example? Of course you do. In many companies, employees are known to complain about management. Not necessarily any particular manager, but about management.

An employee who, in conversation, makes complimentary statements about management is likely to be ostracized, no matter how justified the statements might be.

Symmetry requires negative feedback loops to have unfortunate as well as fortunate outcomes, just as positive feedback loops do. Here’s a well-known one: Analysis paralysis. It’s what happens when corrective feedback overwhelms all other decision criteria.

Where does all this go?

The idea behind “if you can’t measure you can’t manage” is well-intentioned. Underneath it is an important idea — that you should prefer to base your decisions on data and logic, rather than your mood and digestive condition.

The point here is that those who lead large organizations need to kick it up a notch. Measurement isn’t the point, and it isn’t the be-all and end-all of decision-making. It’s just a part of something much bigger and more important: Leaders and managers need to understand how their organizations work. That includes understanding the simple cause-and-effect relationships metrics tend to be associated with, and the multivariate causal relationships multivariate analytics can help you understand.

And, you should add to that at least a qualitative understanding of the various feedback loops that drive success or failure in your line of work.

A quantitative understanding would be better. It’s just not often possible.

Qualitative might be inferior to quantitative, but it’s much better than ignoring something important, just because you can’t put a number to it.

As Einstein … by all accounts a bright guy … put it, “Not everything that can be counted counts, and not everything that counts can be counted.”