Useful metrics have to satisfy the seven C’s.
Until two weeks ago it was the six C’s (Keep the Joint Running: A Manifesto for 21st Century Information Technology, Bob Lewis, IS Survivor Publishing, 2006). That’s when I found myself constructing a metric to assess the health of the integration layer as part of rationalizing clients’ application portfolios.
In case you haven’t yet read the Manifesto (and if you haven’t, what are you waiting for?), metrics must be connected, consistent, calibrated, complete, communicated, and current. That is, they’re:
> Connected to important goals or outcomes.
> Consistent — they always go in one direction when the situation improves and in the opposite direction when it deteriorates.
> Calibrated — no matter who takes the measurement, they report the same number.
> Complete, to avoid the third metrics fallacy — anything you don’t measure you don’t get.
> Communicated, because the biggest benefit of establishing metrics is that they shape behavior. Don’t communicate them and you get no benefit.
> Current — when goals change, your metrics had better change too, or they’ll make sure you get your old goals, not your current ones.
The six C’s seemed to do the job quite well, right up until I got serious about establishing application integration health metrics. That’s when I discovered that (1) just satisfying these six turned out to be pretty tough; and (2) six didn’t quite do the job.
To give you a sense of the challenge, consider what makes an application’s integration healthy or unhealthy. There are two factors at work.
The first is the integration technique. At one extreme we have swivel-chairing, also known as integration by manual re-keying. Less bad but still bad are custom, batch point-to-point interfaces.
At the other extreme are integration platforms like enterprise application integration (EAI), enterprise service buses (ESB), and integration platform as a service (iPaaS), which provide for synchronization and access by way of single, well-engineered connectors.
Less good but still pretty good are unified data stores (UDS).
The second factor is the integration count — the more interfaces needed to keep an application’s data synchronized to every other application’s data, the worse the integration score.
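To make the two factors concrete, here’s a minimal sketch of how a raw score might combine them. The technique weights and names are invented for illustration; they are not the metric I actually use:

```python
# Illustrative sketch only: the technique weights and names are
# invented for this example, not the actual consulting metric.

# Lower weight = better integration engineering.
TECHNIQUE_WEIGHTS = {
    "esb_connector": 1,       # single connector to an integration platform
    "unified_data_store": 2,  # less good but still pretty good
    "point_to_point": 5,      # custom batch point-to-point interface
    "swivel_chair": 10,       # manual re-keying: the worst technique
}

def integration_score(interfaces):
    """Score one application's integration health.

    `interfaces` is a list of (technique, count) pairs. The raw
    score grows with both the badness of each technique and the
    number of interfaces, so it gets bigger as the tangle grows.
    """
    return sum(TECHNIQUE_WEIGHTS[technique] * count
               for technique, count in interfaces)

# One app with a single ESB connector ...
print(integration_score([("esb_connector", 1)]))    # 1 (best case)
# ... versus one wired point-to-point to a dozen other apps.
print(integration_score([("point_to_point", 12)]))  # 60
```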
Here’s where it gets tricky.
The biggest challenge turned out to be crafting a Consistent metric. Without taking you through all the ins and outs of how I eventually solved the problem (sorry — there is some consulting IP I do need to charge for), I did arrive at a metric that reliably got smaller with better integration engineering and bigger with an integration tangle.
The metric did well at establishing better and worse. But it failed to establish good vs. bad. I needed a seventh C.
Well, to be entirely honest about it, I needed an “R” (range), but since “Seven C’s” sounds much cooler than “Six C’s and an R,” Continuum won the naming challenge.
What it means: Good metrics have to be placed on a well-defined continuum whose poles are the worst possible reading on one end and the best possible reading on the other.
When it comes to integration, the best possible situation is a single connector to an ESB or equivalent integration platform.
The worst possible situation is a bit more interesting to define, but with some ingenuity I was able to do that, too. Rather than detail it here, I’ll leave it as an exercise for my fellow KJR metrics nerds. The Comments await you.
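For metrics nerds who want a head start, here’s a minimal sketch of the normalization step, assuming both poles have already been pinned down. The pole values in the example are placeholders carried over from the earlier sketch, not the answer to the exercise:

```python
def to_continuum(raw, best, worst):
    """Map a raw score onto a 0-to-1 continuum.

    `best` and `worst` are the scores at the two poles. Until both
    are defined, the metric can only say "better" or "worse,"
    never "good" or "bad." Returns 0.0 at the best pole and 1.0
    at the worst.
    """
    if worst == best:
        raise ValueError("the two poles must differ")
    normalized = (raw - best) / (worst - best)
    # Clamp so out-of-range readings can't escape the continuum.
    return min(1.0, max(0.0, normalized))

# Placeholder poles: best = 1 (single ESB connector), worst = 60
# (the point-to-point tangle from the earlier sketch).
print(to_continuum(1, best=1, worst=60))   # 0.0   -> good
print(to_continuum(60, best=1, worst=60))  # 1.0   -> bad
print(to_continuum(30, best=1, worst=60))  # ~0.49 -> middling
```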
The point
The point of this week’s exercise isn’t how to measure the health of your enterprise architecture’s integration layer.
It also isn’t to introduce the seventh C, although I’m delighted to do so.
The point is how much thought and effort went into constructing this one metric, which is just one of twenty or so characteristics of application health that need measurement.
Application and integration health are, in turn, two of five contributors to the health of a company’s overall enterprise technical architecture; the enterprise technical architecture is one of four factors that determine IT’s overall organizational health; and IT health is one of ten dimensions that make up the health of the enterprise as a whole.
Which, at last, gets to the key issue.
If you agree with the proposition that you can’t manage what you can’t measure, then everything that must be managed must be measured.
Count up everything in the enterprise that has to be managed, and consider just how hard it is to construct metrics that can sail the seven C’s …
… is it more likely your company is managed well through well-constructed metrics, or managed wrong by being afflicted with poorly designed ones?
It’s Lewis’s metrics corollary: You get what you measure. That’s the risk you take.
Minor correction to Lewis’s metrics corollary: You get ONLY what you measure.
I’m not so sure. You may get things you don’t measure. You just would not be aware of those things, most likely to your detriment…
You might, as an accidental consequence. The issue is that once you define a metric, employees move the metric whether or not what they did to move it actually made things better. That includes ignoring or even actively damaging other factors you don’t measure.
So I suppose your formulation is just as accurate as mine: What you don’t measure might change, but it probably won’t change well.
Gary, you have a valid point. Sometimes it’s just not possible to measure what you actually want, so you devise a metric that you believe is correlated with what you want, intending (hoping?) to get the desired behaviors. Whether or not there’s an actual cause/effect relationship between the measured behavior and the desired behavior is usually a matter of faith or wishful thinking.

IBM is very proud of its consistent leadership in the number of patents filed every year. Based on its recent financial performance, there’s a serious question as to whether that’s a useful metric for achieving, in the short term, the only objective a for-profit enterprise pursues; the long-term result is speculative, at best.

The same might be said for all those employee satisfaction polls that seem to be performed annually (and then, in the opinion of the employees, completely ignored). It’s likely that the actual result is negative, except for the senior-executive “feel good.”
Great column. Thank you.
While I largely agree with this post, I want to take issue with the notion that management always requires measurement. There is a blog I read, Trusted Advisor, whose author argues that there are many important things that cannot be measured, and further that trying to measure some of them is counterproductive. He says that most efforts to measure trust diminish it. For that matter, how do you measure the quality of the relationships between a business and its customers? How do you measure the relationships between a business and its employees? A factor that you often write about is company culture. How do you measure that?
And even things that can be measured are often misunderstood. Eli Goldratt wrote about the misuse of cost accounting. The failure to include overhead in the cost of purchased materials has often led to make/buy decisions that increase an organization’s costs in the expectation of saving them.
Take issue? You’re helping make the point, which is over-reliance on metrics as the one-and-only all-important tool for managing everything.
Thanks!
Some of the folks at Home Depot like to comment about the MBAs who are busy computing profit per cubic inch. The result is that some products that customers have come to depend upon are no longer carried, because they take up too much space, so the customers go elsewhere and discover other sources. Oops.
Great example of why “complete” matters – it’s how to avoid Metrics Fallacy #3: Anything you don’t measure you don’t get.
As you point out, when you do measure profit per cubic inch and don’t measure customer mindshare and walletshare, you’ll get the former and not the latter.
I see three Cs missing. (Or if I were texting “I c 3 cs msing”). Whenever I deal with a requirements gathering scenario, my request is for the four Cs (I prefer the term “C4”): Clear, Complete, Correct, Concise.
Well, that’s requirements. And Complete is one of the 7 C’s. Not that I disagree with you.
The presence of Complete in your 7Cs is why I suggest three are missing, not four.
Oh … right. Missed that.
Not to be controversial, but just having C4 could be an explosive combination of factors.
It would be interesting to see if Bob could add two more C’s, and then the current set would be Seven of Nine.
20 characteristics times 5 contributors times 4 factors times 10 dimensions = 4,000 measurements.
And someone needs to understand what those 4,000 measurements mean, and communicate that understanding to the company decision makers.
Ahoy, sailor! Love the image of swivel-chairing. It seems so friendly and so smooth. But in design, it’s so primitive that it makes a kludge look good by comparison. I was going to say the worst possible situation is a kludge (built by the wrong committee) but I’m rethinking. Maybe something extravagantly messy, like Rube Goldberg, with a nautical twist? “If you don’t know how to tie any good knots, tie a lot of them.”
I posted a link to this article from my LinkedIn account, and received one comment: “I like metrics…it’s just easier to use.”
I have never facepalmed so hard in my life.
It does lead to the question, easier to use than what? Thanks for sharing this. Very strange.