Metrics are less useful than you’ve been told. Even the best are just ratios that tell you whether you’re making progress toward a well-defined goal.

But not why, how, or what to do if you aren’t. As last week’s KJR pointed out, not only aren’t metrics explanatory on their own, in most cases a metrics change won’t have a single root cause. If, for example, you’re losing marketshare, you might have:

  • Missed a complete marketplace shift.
  • Lousy advertising.
  • No advertising, lousy or otherwise.
  • Poor quality products.
  • Deplorably ugly products.
  • Products that lack key features competitors have.
  • Hapless distributors.
  • Hapful distributors who like your competitors better.
  • A customer disservice hotline.

To list just a few possible causes, none of which are mutually exclusive.

Which is to say, root cause analysis is a multivariate affair, which is why analytics is, or at least should be, the new metrics.

But while multivariatism is an important complicating factor when business decision-makers have to decide what to do when success isn’t happening the way it should, it isn’t the only complicating factor.

Far more difficult to understand in any quantitative fashion is the nasty habit many business effects have of causing themselves.

Many cause-and-effect relationships are, that is, loops.

These feedback loops come in more than one flavor. There are vicious and virtuous cycles, and there are positive and negative feedback loops.

In business, the cycles you want are the virtuous ones. They’re where success breeds more success. Apple under Steve Jobs was, for example, a successful fanbody fosterer. (Don’t like “fanbody”? Send me a better gender-neutral alternative).

The more fanbodies Apple has, the cooler its products are, making it more likely the next electronics consumer will become another Apple fanbody.

These loops work in reverse, too: Start to lose marketshare and a vicious cycle often ensues. Corporate IT pays close attention to this effect: When choosing corporate technology standards, products that are failing in the marketplace are undesirable no matter how strong they might be technically. Why? Because products that are losing share are less likely to get new features and other forms of support than competing products.

So IT doesn’t buy them, and so the companies that sell them have less money to invest in making them competitive and attractive, and so IT doesn’t buy them.

A frequently misunderstood nicety: virtuous and vicious cycles are both positive feedback loops. In both cases an effect causes more of itself.
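The arithmetic of a positive feedback loop is simple enough to sketch. Here's a toy simulation (every number in it is invented for illustration) of a single self-reinforcing loop. Whether it plays out as virtuous or vicious depends only on which side of break-even you start.

```python
def simulate_share(share, gain=0.3, periods=8):
    """Toy positive-feedback loop: each period, the change in market
    share is proportional to how far the product sits above or below
    break-even (50%). Gain and period count are invented for
    illustration."""
    history = [share]
    for _ in range(periods):
        share += gain * (share - 0.5)       # success breeds success (and failure, failure)
        share = min(max(share, 0.0), 1.0)   # clamp to 0%..100%
        history.append(share)
    return history

# Same loop, opposite fates: start slightly ahead and you pull away;
# start slightly behind and you slide.
winner = simulate_share(0.55)
loser = simulate_share(0.45)
```

The point of the sketch: there's only one mechanism. "Virtuous" and "vicious" describe where it takes you, not how it works.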

Negative feedback loops aren’t like that. Negative feedback, as the term is properly used, is corrective. With negative feedback loops, an effect makes itself less likely than it was before.
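The household thermostat is the standard illustration: deviation from a set point triggers a correction that shrinks the deviation. A toy sketch, with invented parameters:

```python
def correct_toward(value, setpoint=0.5, gain=0.4, periods=8):
    """Toy negative-feedback loop: each period applies a correction
    proportional to the deviation from the set point, so the effect
    makes itself smaller. All parameters are invented."""
    history = [value]
    for _ in range(periods):
        value -= gain * (value - setpoint)  # the deviation triggers its own correction
        history.append(value)
    return history

# Start high or start low; either way the loop pulls you back.
high = correct_toward(0.9)
low = correct_toward(0.1)
```

Run it from any starting point and the value drifts back toward the set point. That's what makes negative feedback corrective rather than cumulative.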

Take business culture. It’s self-reinforcing. When someone strays from accepted behavioral norms, their co-workers disapprove in ways that are clear and punitive.

Want an example? Of course you do. In many companies, employees are known to complain about management. Not necessarily any particular manager, but about management.

An employee who, in conversation, makes complimentary statements about management is likely to be ostracized, no matter how justified the statements might be.

Symmetry requires negative feedback loops to have unfortunate as well as fortunate outcomes, just as positive feedback loops do. Here’s a well-known one: Analysis paralysis. It’s what happens when corrective feedback overwhelms all other decision criteria.

Where does all this go?

The idea behind “if you can’t measure it, you can’t manage it” is well-intentioned. Underneath it is an important idea — that you should prefer to base your decisions on data and logic rather than on your mood and digestive condition.

The point here is that those who lead large organizations need to kick it up a notch. Measurement isn’t the point, and it isn’t the be-all and end-all of decision-making. It’s just a part of something much bigger and more important: Leaders and managers need to understand how their organizations work. That includes understanding the simple cause-and-effect relationships metrics tend to be associated with, and the multivariate causal relationships multivariate analytics can help you understand.

And, you should add to that at least a qualitative understanding of the various feedback loops that drive success or failure in your line of work.

A quantitative understanding would be better. It’s just not often possible.

Qualitative might be inferior to quantitative, but it’s much better than ignoring something important, just because you can’t put a number to it.

As Einstein … by all accounts a bright guy … put it, “Not everything that can be counted counts, and not everything that counts can be counted.”

It’s pop quiz time. The quiz has one question: Which application development methodology is gaining the most popularity?

If you answered “Agile,” Blaaaaaaat! Wrong answer, bucko.

If you tried to demonstrate your more in-depth knowledge of the app dev landscape by answering “Scrum,” Blaaaaaaat! Nice try, but wrongo.

If you answered Test Driven Development (TDD) or one of its variants, Acceptance Test Driven Development (ATDD) or Behavior Driven Development (BDD), you’re just showing off. And still, Blaaaaaaat! TDD might be a technician’s paradise, and for that matter it might be a very good idea, but it isn’t what’s gaining the most acceptance.

Want one more guess? I’ll give you a hint: What do you get when you combine a change in process with the same old attitudes?

Now you’ve got it. The app dev methodology that’s sweeping the world is (drumroll) … Scrummerfall!

Scrummerfall (not an original term) is what you get when you stitch a Waterfall head onto a Scrum body. It’s what happens when you do want iteration and incrementalism, but for one reason or another want developers to do nothing but write code to specifications — you have no interest in their understanding the context, business purpose, or user predilections.

To be fair (an excruciating exercise but I’ll try) there are good reasons for going this route. In particular, if you’re willing to trade off Agile’s high levels of team engagement, enthusiasm and commitment for the large savings in raw labor rates you get from sending work offshore, Scrummerfall might be the right choice for you.

This is especially true in organizations that consider financial measures to be the only measures that matter, because from a purely financial perspective, it’s iteration and incrementalism that drain most of the risk out of Waterfall, which pairs long-range plans with only short-range planning accuracy. If all you do is wait as long as possible before making design decisions, that by itself will increase your project success rate.

What do you have to lose?

Quite a lot, as it happens. The problem is, what you lose by settling for Scrummerfall is much harder to quantify, because with Scrummerfall, what you keep is form but what you lose is essence.

Another way of saying it: Scrummerfall is an excellent example of what goes wrong when you mistake a business practice for a business process. For the difference, see “Is it a Process, or just a process?” (KJR, 5/17/1999), although when I wrote it I used lowercase “process” where “practice” is now my preferred vocabulary.

In any event, with a true process, following the right steps in the right order gets you to the desired result. They’re repeatable and all that. The assembly line is your model.

That isn’t true with a practice. Following the right steps in the right order is just the ante that lets you play the game.

With a process, the steps are the essence. With a practice, they’re separate, and following the steps while losing the essence means the steps generally degenerate into nothing more than a bunch of check boxes people follow because they have to, not because they add any value to the proceedings.

And so to the differences between Agile and Scrummerfall. Start with the basics: Writing user stories and estimating them by assigning story points. (If you’re unfamiliar with these terms, user stories are the Agile equivalent of requirements; story points are vaguely equivalent to function points only they’re heuristic, not algorithmic.)

With Agile, the whole team writes the stories and assigns the story points, which means the whole team understands all of the requirements and commits to their estimated difficulty.

With Scrummerfall, business analysts write the stories and assign the story points. Team members only understand the user stories assigned to them for development, and instead of assigning story points … estimates of relative difficulty … the business analysts estimate the time that should be needed for development.
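For concreteness, here's a toy contrast (the planning-poker scale is conventional; every other name and number is invented): with whole-team estimation, the estimate is an aggregate of the developers' own votes in relative story points; with Scrummerfall, it's one analyst's hours, handed down.

```python
from statistics import median

PLANNING_POKER_SCALE = [1, 2, 3, 5, 8, 13]  # conventional Fibonacci-ish values

def team_estimate(votes):
    """Whole-team story-point estimate: take the median of the team's
    votes, then round up to the nearest value on the scale. (A real
    team would discuss the outliers and re-vote; this toy just
    aggregates.)"""
    return next(p for p in PLANNING_POKER_SCALE if p >= median(votes))

# Agile: the people doing the work estimate relative difficulty,
# and every vote carries that voter's commitment.
story_points = team_estimate([3, 5, 5, 8])

# Scrummerfall: a business analyst hands down hours instead.
# (Invented figure; there's no team commitment behind it.)
analyst_hours = 16
```

The number isn't the point; who produces it is. `team_estimate` only has a value because the whole team voted.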

Anyone who’s been on either side of any exercise in delegation knows the difference between me telling you how much time you should need to achieve your assignment and you telling me how much time you’ll need.

What’s the financial impact of the difference? I can envision what the research needed to answer a question like this might look like, but I certainly can’t imagine who might pay for it, let alone any business leaders making decisions based on it.

There’s one more piece of this puzzle to mention right now, and that’s the core model for The Cognitive Enterprise — that cognitive enterprises replace the old people/process/technology model with customers, communities, and capabilities.

With true Agile, developers and business stakeholders form a community.

With Scrummerfall, they’re just cogs in a machine.