Metrics are less useful than you’ve been told. Even the best are just ratios that tell you whether you’re making progress toward a well-defined goal.

But not why, how, or what to do if you aren’t. As last week’s KJR pointed out, not only aren’t metrics explanatory on their own, but in most cases a change in a metric won’t have a single root cause. If, for example, you’re losing market share, you might have:

  • Missed a complete marketplace shift.
  • Lousy advertising.
  • No advertising, lousy or otherwise.
  • Poor quality products.
  • Deplorably ugly products.
  • Products that lack key features competitors have.
  • Hapless distributors.
  • Hapful distributors who like your competitors better.
  • A customer disservice hotline.

To list just a few possible causes, none of which are mutually exclusive.

Which is to say, root cause analysis is a multivariate affair, which is why analytics is, or at least should be, the new metrics.

But while multivariatism is an important complicating factor when business decision-makers have to figure out what to do because success isn’t happening the way it should, it isn’t the only one.

Far more difficult to understand in any quantitative fashion is the nasty habit many business effects have of causing themselves.

Many cause-and-effect relationships are, that is, loops.

These feedback loops come in more than one flavor. There are vicious and virtuous cycles, and there are positive and negative feedback loops.

In business, the cycles you want are the virtuous ones. They’re where success breeds more success. Apple under Steve Jobs was, for example, a successful fanbody fosterer. (Don’t like “fanbody”? Send me a better gender-neutral alternative.)

The more fanbodies Apple has, the cooler its products are, making it more likely the next electronics consumer will become another Apple fanbody.

These loops work in reverse, too: Start to lose market share and a vicious cycle often ensues. Corporate IT pays close attention to this effect: When choosing corporate technology standards, products that are failing in the marketplace are undesirable no matter how strong they might be technically. Why? Because products that are losing share are less likely to get new features and other forms of support than competing products.

So IT doesn’t buy them, and so the companies that sell them have less money to invest in making them competitive and attractive, and so IT doesn’t buy them.

A frequently misunderstood nicety: virtuous and vicious cycles are both positive feedback loops. In both cases an effect causes more of itself.

Negative feedback loops aren’t like that. Negative feedback, as the term is properly used, is corrective. With negative feedback loops, an effect makes itself less likely than it was before.
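
If a toy model helps, here’s a minimal sketch of the distinction, in Python, with made-up numbers and hypothetical parameter names (gain, target, correction). In the positive-feedback function each period’s change is proportional to the current level, so the effect feeds on itself, virtuously or viciously depending on the sign of the gain. In the negative-feedback function any deviation from a target is partially corrected each period.

```python
# Illustrative toy only: invented numbers, not a model of any real business.

def positive_feedback(level, gain=0.10, periods=10):
    """Each period's change is proportional to the current level."""
    history = [level]
    for _ in range(periods):
        level += gain * level              # the effect causes more of itself
        history.append(round(level, 2))
    return history

def negative_feedback(level, target=100.0, correction=0.30, periods=10):
    """Each period corrects part of the deviation from a target."""
    history = [level]
    for _ in range(periods):
        level += correction * (target - level)   # deviation gets damped
        history.append(round(level, 2))
    return history

print(positive_feedback(100.0, gain=0.10))    # grows without bound: virtuous
print(positive_feedback(100.0, gain=-0.10))   # shrinks toward zero: vicious
print(negative_feedback(140.0))               # settles back toward the target
```

The first two calls run the same arithmetic; only the sign of the gain differs, which is the point above: virtuous and vicious cycles are the same kind of loop.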

Take business culture. It’s self-reinforcing. When someone strays from accepted behavioral norms, their co-workers disapprove in ways that are clear and punitive.

Want an example? Of course you do. In many companies, employees are known to complain about management. Not necessarily any particular manager, but about management.

An employee who, in conversation, makes complimentary statements about management is likely to be ostracized, no matter how justified the statements might be.

Symmetry requires negative feedback loops to have unfortunate as well as fortunate outcomes, just as positive feedback loops do. Here’s a well-known one: Analysis paralysis. It’s what happens when corrective feedback overwhelms all other decision criteria.

Where does all this go?

The saying “if you can’t measure you can’t manage” is well-intentioned. Underneath it is an important idea: that you should prefer to base your decisions on data and logic rather than on your mood and digestive condition.

The point here is that those who lead large organizations need to kick it up a notch. Measurement isn’t the point, and it isn’t the be-all and end-all of decision-making. It’s just a part of something much bigger and more important: Leaders and managers need to understand how their organizations work. That includes understanding the simple cause-and-effect relationships metrics tend to be associated with, and the multivariate causal relationships multivariate analytics can help you understand.

And you should add to that at least a qualitative understanding of the various feedback loops that drive success or failure in your line of work.

A quantitative understanding would be better. It’s just not often possible.

Qualitative might be inferior to quantitative, but it’s much better than ignoring something important just because you can’t put a number to it.

As Einstein … by all accounts a bright guy … put it, “Not everything that can be counted counts, and not everything that counts can be counted.”

Were there a posthumous prize for history’s most important little-known scientist, the first recipient should surely be Sir Ronald Fisher (1890–1962).

It was Fisher who merged Mendel’s genetics with Darwin’s natural selection, creating our modern understanding of how evolution works.

In his spare time he invented modern multivariate statistics, including the analysis of variance, which, as it happens, closely resembles natural selection.

Multivariatism is, I’m starting to think, a significant reason to embrace a principle espoused in The Cognitive Enterprise: Analytics are the new metrics.

Start with measurement. Measurement is the raw data. Ignore raw data’s detractors. It can have direct value, as, for example, your gas gauge telling you you’re low on fuel.

The popular “If you can’t measure you can’t manage” was never entirely true. You could, for example, drive forever without any of the instruments on your dashboard. You would, however, find yourself tanking up more often than you have to, just in case. You’d change your oil more often than necessary for much the same reason.

With nothing to estimate your speed from beyond your perception of how fast the landscape is rushing by in the other direction, you’d probably drive more slowly than your speedometer-enabled habits allow.

If you can measure, you can drive more effectively … less punchy but more accurate.

Measurements are numbers. Metrics are ratios. If you’re a car owner, your most important metrics are probably miles per gallon, miles per hour, cost per mile, ethanol as a percent of blood volume (I hope not), and other ratios that tell you how you and your car are performing.

If you’re obsessive about car care you’ll chart your automotive metrics over time to see if there are any trends. Changes to your mileage or operating costs over time might let you know of developing problems. So, for that matter, might your top speed (miles per hour), if you could gauge it safely.

Except that this is how metrics-obsessed business managers can get into trouble. Metrics report what. They don’t explain why, but not all managers care about subtleties like this. Something is wrong, which means we have to hold someone accountable.

For these fine managers, if their car’s mileage is deteriorating, their spouse, teenage offspring, or both are subjecting it to jackrabbit starts, driving way over the speed limit, or putting cheap gas in the tank.

Or, putting automotive analogies aside, if IT spending per employee is going up, IT must be buying technology for technology’s sake, business managers must be asking for new laptops for employees before they need to be retired, or the company should be negotiating harder with its IT vendors.

Or, sales are down (the metric: revenue per employee). These are all outstanding examples of a principle I just made up: Metrics expand your opportunities to mismanage.

Because metrics report symptoms. They don’t diagnose.

To manage you don’t need to measure and you don’t need metrics. You need demonstrable causal relationships (hence “root causes”). What you care about are the buttons and levers you can push and pull to change the metrics that matter for the better.

That’s buttons and levers in the plural. Very few business metrics change due to a single root cause. More often, several different factors interact to cause the problem.

Most businesses are complex systems that operate in marketplaces that are also complex systems.

Which means business success and failure are multivariate affairs, which is why Sir Ronald earned a mention at the top of this column. Metrics tell you what’s changed. Understanding why, and what you can do about it, calls for high-quality data, and lots of it, plus multivariate analysis to untangle the multiple factors at work.
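
As a purely illustrative sketch of what that looks like in practice, here’s a toy multiple regression in Python. Everything in it is invented: the candidate drivers (ad_spend, defect_rate, price_delta), the coefficients, the noise. The point is only that the candidate causes are estimated jointly, which is what lets the analysis apportion a change in the metric among several levers instead of crediting or blaming whichever one happens to correlate first.

```python
# Toy sketch: fit several candidate drivers of a metric at once.
# All numbers and column names are invented for illustration.
import numpy as np

rng = np.random.default_rng(0)
n = 200

# Hypothetical "buttons and levers"
ad_spend    = rng.normal(100, 15, n)
defect_rate = rng.normal(2.0, 0.5, n)
price_delta = rng.normal(0.0, 1.0, n)

# Made-up ground truth: the metric responds to all three drivers, plus noise.
market_share = (30 + 0.05 * ad_spend - 2.0 * defect_rate
                - 1.5 * price_delta + rng.normal(0, 1.0, n))

# Ordinary least squares with an intercept, all drivers estimated jointly.
X = np.column_stack([np.ones(n), ad_spend, defect_rate, price_delta])
coefs, *_ = np.linalg.lstsq(X, market_share, rcond=None)

for name, c in zip(["intercept", "ad_spend", "defect_rate", "price_delta"], coefs):
    print(f"{name:12s} {c:+.3f}")
```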

Not that this will deliver definitive results. Analytics give you correlations, which, as we all know, don’t prove causation.

Except they do, sort of, although “prove” is too strong a word.

If there’s a statistically significant correlation between A and B, the smart money says one of three conclusions is true: A might cause B, B might cause A, or there’s a C out there somewhere that causes both A and B.
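
That third case, the lurking C, is easy to demonstrate with invented data: generate a C, derive A and B from it independently, and A and B will correlate strongly even though neither has any effect on the other.

```python
# Toy illustration of the "C causes both A and B" case. All data invented.
import numpy as np

rng = np.random.default_rng(1)
n = 1000

C = rng.normal(0, 1, n)                # the lurking common cause
A = 2.0 * C + rng.normal(0, 0.5, n)    # A depends on C, not on B
B = -1.5 * C + rng.normal(0, 0.5, n)   # B depends on C, not on A

print(f"corr(A, B) = {np.corrcoef(A, B)[0, 1]:+.2f}")   # strong, yet not causal
print(f"corr(A, C) = {np.corrcoef(A, C)[0, 1]:+.2f}")
print(f"corr(B, C) = {np.corrcoef(B, C)[0, 1]:+.2f}")
```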

Only it’s multivariate, so if G is headed in the wrong direction, and A, C, and F are all positively correlated with it … see A and B, above, only more so.

Likewise if G is headed in the right direction, except that no matter how strongly A, C, and F are correlated with G, DON’T TOUCH ANYTHING!

The root causes of success are just as hard to determine as the reasons for failure. Addressing them, on the other hand, is considerably more risky.

Not to mention less urgent.