Blame Lord Kelvin, who once said, “When you can measure what you are speaking about, and express it in numbers, you know something about it; but when you cannot measure it, when you cannot express it in numbers, your knowledge is of a meagre and unsatisfactory kind.”

It’s a philosophy that works well in physics and engineering — fields where what matters can, for the most part, be unambiguously observed, counted, or otherwise quantified.

The further you get from physics and engineering, the more you might wish Lord Kelvin had added, “Of course, just because you can attach a number to something, that doesn’t mean you understand anything about it.”

So if you accidentally touch a bare copper wire, it’s fair to consider how loudly you yell “Ouch!” to be an inferior metric to how many volts and amperes you were exposed to.

But on the other side of the metrics divide, imagine you’re researching headaches and want to rank them in order of decreasing agony.

You think cluster headaches are the worst (they get my vote), followed by migraines, sinus, tension, and faking it to get sympathy. But really, how can you tell?

There’s the well-known pain scale. It does a pretty good job of assigning a number by assessing how debilitating the pain is.

But debilitation is an index, not a direct measure. It passes most of the seven C’s, but not all of them. In particular, its calibration is imperfect at best — some people seem to tolerate the same pain better than others, although there’s really no way of knowing whether they actually tolerate pain better or whether the same stimulus simply isn’t as painful an experience for them as it would be for someone else.

Which insights we now need to pivot to something that helps you run your part of the enterprise better.

Consider it done.

Start with the difference between leadership and management. If people report to you, you lead (or should). If you’re responsible for producing results, you manage. With infrequent exceptions, leaders are also managers and vice versa.

Metrics are natural tools for managing. What they do for managers is help them assess whether the results they’re responsible for producing are what they’re supposed to be. The results in question are about the process (or practice) characteristics that matter:

> Fixed cost — the cost of turning the lights on before any work gets done.

> Incremental cost — the cost of processing one more item.

> Cycle time — how much time it takes to process one item from start to finish.

> Throughput — how much work the function churns out in a unit of time … its capacity, in other words.

> Quality — adherence to specifications and the absence of defects in work products.

> Excellence — flexibility, the ability to tailor to individual needs, and to deliver high-value product characteristics.

When it comes to managing whatever process or practice it is you manage, pick the three most important of these six dimensions of potential optimization, establish metrics and measurement systems to report them, and use the results to (1) make sure things are running like they’re supposed to; (2) let you know if you’re improving the situation or not; and (3) let employees know if they’re improving the situation or not.
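If it helps to see how little machinery two of these take, here’s a minimal sketch, assuming a hypothetical log of work items with start and finish timestamps, of how cycle time and throughput might be computed. The WorkItem record and its field names are illustrative assumptions, not anything prescribed here.

```python
# Minimal sketch: cycle time and throughput from a hypothetical work-item log.
# WorkItem and its fields are illustrative assumptions, not a prescribed design.
from dataclasses import dataclass
from datetime import datetime, timedelta
from statistics import mean


@dataclass
class WorkItem:
    item_id: str
    started: datetime
    finished: datetime


def cycle_time(items: list[WorkItem]) -> timedelta:
    """Average elapsed time to process one item from start to finish."""
    return timedelta(
        seconds=mean((i.finished - i.started).total_seconds() for i in items)
    )


def throughput(items: list[WorkItem], window: timedelta) -> float:
    """Items completed per hour during the window -- the function's capacity."""
    return len(items) / (window.total_seconds() / 3600)


items = [
    WorkItem("A-1", datetime(2024, 1, 8, 9, 0), datetime(2024, 1, 8, 11, 30)),
    WorkItem("A-2", datetime(2024, 1, 8, 9, 15), datetime(2024, 1, 8, 13, 0)),
    WorkItem("A-3", datetime(2024, 1, 8, 10, 0), datetime(2024, 1, 8, 12, 45)),
]
print("cycle time:", cycle_time(items))  # 3:00:00
print("throughput:", round(throughput(items, timedelta(hours=8)), 2), "items/hour")  # 0.38
```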

You only get to pick three because except when a process is a mess — at which point you can probably improve all six optimization dimensions — improvements result in trade-offs. For example, if you want to improve quality, one popular tactic is simplifying process outputs and disallowing tailoring and customization. More quality means less excellence and vice versa.

If it turns out you aren’t getting what you’re supposed to get, that means your process has bottlenecks. You’ll want to establish some temporary metrics to keep track of the bottlenecks until you’ve fixed them.

I say temporary because once you’ve cleared out one bottleneck you’ll move on to clearing out the next one. Letting metrics accumulate can be more confusing than illuminating. Also, as pointed out last week, metrics are expensive. Letting them accumulate means increasingly complex reporting systems that are costly to maintain and keep current.

Given the value metrics provide for effective management, lots of organizations try to use them as a leadership tool as well. The result is the dreaded employee satisfaction survey.

In Leading IT I established eight tasks of leadership: Setting direction, delegation, decision-making, staffing, motivation, team dynamics, establishing culture, and communicating. A system of leadership metrics should assess how well these are accomplished by a company’s collective leadership.

Which gets us to this week’s KJR Challenge: Define metrics for these that can survive the seven C’s.

Useful metrics have to satisfy the seven C’s.

Until two weeks ago it was the six C’s (Keep the Joint Running: A Manifesto for 21st Century Information Technology, Bob Lewis, IS Survivor Publishing, 2006). That’s when I found myself constructing a metric to assess the health of the integration layer as part of rationalizing clients’ application portfolios.

In case you haven’t yet read the Manifesto (and if you haven’t, what are you waiting for?), metrics must be connected, consistent, calibrated, complete, communicated, and current. That is, they’re:

> Connected to important goals or outcomes.

> Consistent — they always go in one direction when the situation improves and in the opposite direction when it deteriorates.

> Calibrated — no matter who takes the measurement, they report the same number.

> Complete, to avoid the third metrics fallacy — anything you don’t measure you don’t get.

> Communicated, because the biggest benefit of establishing metrics is that they shape behavior. Don’t communicate them and you get no benefit.

> Current — when goals change, your metrics had better change too, or they’ll make sure you get your old goals, not your current ones.

The six C’s seemed to do the job quite well, right up until I got serious about establishing application integration health metrics. That’s when I discovered that (1) just satisfying these six turned out to be pretty tough; and (2) six didn’t quite do the job.

To give you a sense of the challenge, consider what makes an application’s integration healthy or unhealthy. There are two factors at work.

The first is the integration technique. At one extreme we have swivel-chairing, also known as integration by manual re-keying. Less bad but still bad are custom, batch point-to-point interfaces.

At the other extreme are integration platforms like enterprise application integration (EAI), enterprise service buses (ESB), and integration platform as a service (iPaaS) that provide for synchronization and access by way of single, well-engineered connectors.

Less good but still pretty good are unified data stores (UDS).

The second factor is the integration count — the more interfaces needed to keep an application’s data synchronized to every other application’s data, the worse the integration score.
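Purely to make the shape of the problem concrete, here’s a toy sketch that weights each of an application’s interfaces by how bad its technique is and adds the weights up, so the score grows with both the interface count and the sloppiness of the techniques. The weights are made up for illustration; this is emphatically not the metric described below.

```python
# Toy illustration only -- not the actual metric, just the two factors at work:
# integration technique (worse technique, bigger weight) and interface count.
TECHNIQUE_WEIGHTS = {
    "swivel-chair": 10,    # manual re-keying
    "point-to-point": 5,   # custom batch interfaces
    "uds": 2,              # unified data store
    "platform": 1,         # single EAI / ESB / iPaaS connector
}


def integration_score(interfaces: list[str]) -> int:
    """Higher is worse: more interfaces and worse techniques both raise it."""
    return sum(TECHNIQUE_WEIGHTS[technique] for technique in interfaces)


# An application kept in sync by two point-to-point batch feeds plus manual re-keying...
print(integration_score(["point-to-point", "point-to-point", "swivel-chair"]))  # 20
# ...versus the same application reached through one platform connector.
print(integration_score(["platform"]))  # 1
```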

Here’s where it gets tricky.

The biggest challenge turned out to be crafting a Consistent metric. Without taking you through all the ins and outs of how I eventually solved the problem (sorry — there is some consulting IP I do need to charge for), I did arrive at a metric that reliably got smaller with better integration engineering and bigger with an integration tangle.

The metric did well at establishing better and worse. But it failed to establish good vs bad. I needed a seventh C.

Well, to be entirely honest about it, I needed an “R” (range), but since “Seven C’s” sounds much cooler than “Six C’s and an R,” Continuum won the naming challenge.

What it means: Good metrics have to be placed on a well-defined continuum whose poles are the worst possible reading on one end and the best possible reading on the other.

When it comes to integration, the best possible situation is a single connector to an ESB or equivalent integration platform.

The worst possible situation is a bit more interesting to define, but with some ingenuity I was able to do this, too. Rather than detail it out here I’ll leave it as an exercise for my fellow KJR metrics nerds. The Comments await you.
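For fellow nerds who want a starting point, here’s one more minimal sketch, under the assumption that you’ve settled on numbers for both poles: it rescales a raw score onto the Continuum, with best_possible being the single-platform-connector case and worst_possible a placeholder for whatever worst case you derive in the exercise.

```python
# Sketch under assumptions: map a raw integration score onto the Continuum.
# best_possible is the single-connector-to-a-platform pole; worst_possible is
# a placeholder for whatever worst case you derive -- both values are yours to pick.
def on_continuum(raw: float, best_possible: float, worst_possible: float) -> float:
    """Rescale a raw score to 0.0 (worst possible) through 1.0 (best possible)."""
    return (worst_possible - raw) / (worst_possible - best_possible)


print(on_continuum(raw=20, best_possible=1, worst_possible=50))  # ~0.61
print(on_continuum(raw=1, best_possible=1, worst_possible=50))   # 1.0 -- the ideal
```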

The point

The point of this week’s exercise isn’t how to measure the health of your enterprise architecture’s integration layer.

It also isn’t to introduce the seventh C, although I’m delighted to do so.

The point is how much thought and effort went into constructing this one metric, which is just one of twenty or so characteristics of application health that need measurement.

Application and integration health are, in turn, two of five contributors to the health of a company’s overall enterprise technical architecture; the enterprise technical architecture is one of four factors that determine IT’s overall organizational health; and IT health is one of ten dimensions that make up the health of the overall enterprise.

Which, at last, gets to the key issue.

If you agree with the proposition that you can’t manage if you can’t measure, then everything that must be managed must be measured.

Count up everything in the enterprise that has to be managed, consider just how hard it is to construct metrics that can sail the seven C’s …

… is it more likely your company is managed well through well-constructed metrics, or managed wrong by being afflicted with poorly designed ones?

It’s Lewis’s metrics corollary: You get what you measure. That’s the risk you take.