Blame Lord Kelvin, who once said, “When you can measure what you are speaking about, and express it in numbers, you know something about it; but when you cannot measure it, when you cannot express it in numbers, your knowledge is of a meagre and unsatisfactory kind.”

It’s a philosophy that works well in physics and engineering — fields where what matters can, for the most part, be unambiguously observed, counted, or otherwise quantified.

The further you get from physics and engineering, the more you might wish Lord Kelvin had added, “Of course, just because you can attach a number to something, that doesn’t mean you understand anything about it.”

So if you accidentally touch a bare copper wire, it’s fair to consider how loudly you yell “Ouch!” to be an inferior metric to how many volts and amperes you were exposed to.

But on the other side of the metrics divide, imagine you’re researching headaches and want to rank them in order of decreasing agony.

You think cluster headaches are the worst (they get my vote), followed by migraines, sinus, tension, and faking it to get sympathy. But really, how can you tell?

There’s the well-known pain scale. It does a pretty good job of assigning a number by assessing how debilitating the pain is.

But debilitation is an index, not a direct measure. It passes most of the seven Cs, but not all of them. In particular, its calibration is imperfect at best — some people seem to tolerate the same pain better than others, although there’s really no way of knowing whether they actually tolerate pain better or whether the same stimulus simply produces a less painful experience for them than it does for someone else.

Which insights we now need to pivot toward something that helps you run your part of the enterprise better.

Consider it done.

Start with the difference between leadership and management. If people report to you, you lead (or should). If you’re responsible for producing results, you manage. With infrequent exceptions, leaders are also managers and vice versa.

Metrics are natural tools for managing. What they do for managers is help them assess whether the results they’re responsible for producing are what they’re supposed to be. The results in question are about the process (or practice) characteristics that matter:

> Fixed cost — the cost of turning the lights on before any work gets done.

> Incremental cost — the cost of processing one more item.

> Cycle time — how much time elapses while processing one item from start to finish.

> Throughput — how much work the function churns out in a unit of time … its capacity, in other words.

> Quality — adherence to specifications and the absence of defects in work products.

> Excellence — flexibility, the ability to tailor to individual needs, and to deliver high-value product characteristics.

When it comes to managing whatever process or practice it is you manage, pick the three most important of these six dimensions of potential optimization, establish metrics and measurement systems to report them, and use the results to (1) make sure things are running like they’re supposed to; (2) let you know if you’re improving the situation or not; and (3) let employees know if they’re improving the situation or not.
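To make the measurement side concrete, here is a minimal sketch of how two of the six dimensions — cycle time and throughput — might be computed from a log of work-item start and finish timestamps. The data and variable names are illustrative assumptions, not anything from the column itself; the point is only that these two metrics fall straight out of timestamps you probably already have.

```python
from datetime import datetime

# Hypothetical work-item log: (start, finish) timestamps for items
# completed by some process. Purely illustrative data.
items = [
    (datetime(2024, 1, 1, 9, 0),  datetime(2024, 1, 1, 11, 0)),
    (datetime(2024, 1, 1, 9, 30), datetime(2024, 1, 1, 12, 30)),
    (datetime(2024, 1, 1, 10, 0), datetime(2024, 1, 1, 13, 0)),
]

# Cycle time: elapsed time per item, start to finish (in hours).
cycle_times_h = [(end - start).total_seconds() / 3600 for start, end in items]
avg_cycle_time_h = sum(cycle_times_h) / len(cycle_times_h)

# Throughput: items completed per unit of elapsed wall-clock time,
# measured over the window from the earliest start to the latest finish.
window_h = (max(end for _, end in items)
            - min(start for start, _ in items)).total_seconds() / 3600
throughput_per_h = len(items) / window_h

print(f"average cycle time: {avg_cycle_time_h:.2f} h")  # 2.67 h
print(f"throughput: {throughput_per_h:.2f} items/h")    # 0.75 items/h
```

Note that the two metrics answer different questions — cycle time is the customer’s experience of one item, throughput is the function’s capacity — which is exactly why optimizing one can degrade the other.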

You only get to pick three because except when a process is a mess — at which point you can probably improve all six optimization dimensions — improvements result in trade-offs. For example, if you want to improve quality, one popular tactic is simplifying process outputs and disallowing tailoring and customization. More quality means less excellence and vice versa.

If it turns out you aren’t getting what you’re supposed to get, that means your process has bottlenecks. You’ll want to establish some temporary metrics to keep track of the bottlenecks until you’ve fixed them.

I say temporary because once you’ve cleared out one bottleneck you’ll move on to clearing out the next one. Letting metrics accumulate can be more confusing than illuminating. Also, as pointed out last week, metrics are expensive. Letting them accumulate means increasingly complex reporting systems that are costly to maintain and keep current.

Given the value metrics provide for effective management, lots of organizations try to use them as a leadership tool as well. The result is the dreaded employee satisfaction survey.

In Leading IT I established eight tasks of leadership: setting direction, delegation, decision-making, staffing, motivation, team dynamics, establishing culture, and communicating. A system of leadership metrics should assess how well these are accomplished by a company’s collective leadership.

Which gets us to this week’s KJR Challenge: Define metrics for these that can survive the seven Cs.