Useful metrics have to satisfy the seven C’s.

Until two weeks ago it was the six C’s (Keep the Joint Running: A Manifesto for 21st Century Information Technology, Bob Lewis, IS Survivor Publishing, 2006). That’s when I found myself constructing a metric to assess the health of the integration layer as part of rationalizing clients’ application portfolios.

In case you haven’t yet read the Manifesto (and if you haven’t, what are you waiting for?), metrics must be connected, consistent, calibrated, complete, communicated, and current. That is, they’re:

> Connected to important goals or outcomes.

> Consistent — they always go in one direction when the situation improves and in the opposite direction when it deteriorates.

> Calibrated — no matter who takes the measurement, they report the same number.

> Complete, to avoid the third metrics fallacy — anything you don’t measure you don’t get.

> Communicated, because the biggest benefit of establishing metrics is that they shape behavior. Don’t communicate them and you get no benefit.

> Current — when goals change, your metrics had better change too, or they'll make sure you get your old goals, not your current ones.

The six C’s seemed to do the job quite well, right up until I got serious about establishing application integration health metrics. That’s when I discovered that (1) just satisfying these six turned out to be pretty tough; and (2) six didn’t quite do the job.

To give you a sense of the challenge, consider what makes an application’s integration healthy or unhealthy. There are two factors at work.

The first is the integration technique. At one extreme we have swivel-chairing, also known as integration by manual re-keying. Less bad but still bad are custom, batch point-to-point interfaces.

At the other extreme are integration platforms like enterprise application integration (EAI), enterprise service buses (ESBs), and integration platform as a service (iPaaS) that provide for synchronization and access by way of single, well-engineered connectors.

Less good but still pretty good are unified data stores (UDS).

The second factor is the integration count — the more interfaces needed to keep an application’s data synchronized to every other application’s data, the worse the integration score.
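To make the two factors concrete, here's a deliberately naive sketch in Python. The technique names, the weights, and the formula are illustrative assumptions of mine, not the metric this column goes on to describe; treat it as a starting point for your own tinkering, nothing more.

```python
from typing import List

# Weights are illustrative assumptions, not a proprietary metric:
# cleaner techniques get smaller weights, so lower totals are better.
TECHNIQUE_WEIGHTS = {
    "integration_platform": 1,   # single EAI/ESB/iPaaS connector
    "unified_data_store": 2,     # UDS
    "point_to_point_batch": 5,   # custom batch interface
    "swivel_chair": 10,          # manual re-keying
}

def naive_integration_score(interfaces: List[str]) -> int:
    """Sum technique weights across an application's interfaces."""
    return sum(TECHNIQUE_WEIGHTS[technique] for technique in interfaces)

# One clean platform connector vs. a tangle of six batch interfaces.
print(naive_integration_score(["integration_platform"]))      # 1
print(naive_integration_score(["point_to_point_batch"] * 6))  # 30
```

Notice that a score like this already captures both factors at once: a worse technique or a higher interface count each pushes the number up.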

Here’s where it gets tricky.

The biggest challenge turned out to be crafting a Consistent metric. Without taking you through all the ins and outs of how I eventually solved the problem (sorry, but there's some consulting IP I do need to charge for), I did arrive at a metric that reliably got smaller with better integration engineering and bigger with an integration tangle.

The metric did well at establishing better and worse. But it failed to establish good vs bad. I needed a seventh C.

Well, to be entirely honest about it, I needed an “R” (range), but since “Seven C’s” sounds much cooler than “Six C’s and an R,” Continuum won the naming challenge.

What it means: Good metrics have to be placed on a well-defined continuum whose poles are the worst possible reading on one end and the best possible reading on the other.

When it comes to integration, the best possible situation is a single connector to an ESB or equivalent integration platform.

The worst possible situation is a bit more interesting to define, but with some ingenuity I was able to do this, too. Rather than detail it here, I'll leave it as an exercise for my fellow KJR metrics nerds. The Comments await you.
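That said, the Continuum requirement itself is easy to sketch once you've settled on your poles. Here's a minimal Python version, assuming a raw score where smaller is better; the poles are inputs, and the hypothetical worst-case value in the example is a placeholder, not a derivation.

```python
def to_continuum(raw: float, best: float, worst: float) -> float:
    """Place a raw score on a 0-to-1 continuum.

    0.0 is the best possible reading, 1.0 the worst. The poles are
    inputs here; deriving the worst pole is the exercise above.
    """
    span = worst - best
    if span <= 0:
        raise ValueError("worst pole must exceed best pole")
    position = (raw - best) / span
    return min(max(position, 0.0), 1.0)  # clamp readings to the poles

# Example with assumed poles: best = 1 (a single platform connector),
# hypothetical worst = 30. A raw score of 6 lands near the good end.
print(to_continuum(raw=6, best=1, worst=30))  # ~0.17
```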

The point

The point of this week’s exercise isn’t how to measure the health of your enterprise architecture’s integration layer.

It also isn’t to introduce the 7th C, although I’m delighted to do so.

The point is how much thought and effort went into constructing this one metric, which is just one of twenty or so characteristics of application health that need measurement.

Application and integration health are, in turn, two of five contributors to the health of a company's overall enterprise technical architecture; the enterprise technical architecture is one of four factors that determine IT's overall organizational health; and IT health is one of ten dimensions that make up the health of the enterprise as a whole.

Which, at last, gets to the key issue.

If you agree with the proposition that you can't manage if you can't measure, then everything that must be managed must be measured.

Count up everything in the enterprise that has to be managed, consider just how hard it is to construct metrics that can sail the seven C's …

… is it more likely your company is managed well through well-constructed metrics, or managed wrong by being afflicted with poorly designed ones?

It’s Lewis’s metrics corollary: You get what you measure. That’s the risk you take.

We consultants live and die on methodologies. Just as double-blind therapeutic trials are what make modern doctors more reliable than shamans for preventing and curing diseases, the methodologies we consultants use are what make our analyses and recommendations more reliable than an executive's gut feel.

Take, for example, the methodology I use for application, application portfolio, and application integration rationalization (AR/APR/AIR).

It starts with collecting data about more than twenty indicators of application health, redundancy, and integration for each application in the portfolio. It’s by analyzing this health data that my colleagues and I are in a position to reliably and provably recommend programs and strategies for improving the enterprise technical architecture’s application layer, along with the information and platform layers the applications rely on.

For large application portfolios the process is intimidating, not to mention invasive and expensive. Fortunately for you, and unfortunately for me when I'm trying to persuade clients to engage our services, there is a more frugal alternative. In most situations it's as reliable a guide to AR/APR/AIR priorities as our sophisticated methodology, while costing quite a lot less.

Call it the TYE methodology, TYE standing for “Trust Your Experts.”

But first, before we get to TYE, take the time to clean up your integration architecture.

Maybe the techniques you use to keep redundant data synchronized and present it for business use through systematic APIs are clean and elegant. If so, you can skip this step on the grounds that you’ve already taken it. Also, congratulate everyone involved. As near as I can tell you’re in the minority, and good for you.

Otherwise, you need to do this first for two big reasons: (1) it’s probably the single biggest architecture-related opportunity you have for immediate business and IT benefit; and (2) it creates a “transition architecture” that will let you bring new application replacements in without hugely disrupting the business areas that currently rely on the old ones.

And now … here’s how TYE works: Ask your experts which applications are the biggest messes. Who are your experts? Everyone — your IT staff who maintain and enhance the applications used by the rest of the business, and the business users who know what using the applications is like.

And a point often missed, no matter the methodology: Make sure to include the applications used by IT to support the work it does. IT is just as much a business department as any other part of the enterprise. Its supporting applications deserve just as much attention.

What do you ask your experts? Ask them two questions. #1: List the five worst applications you use personally or know about, in descending order of awfulness. #2: What’s the worst characteristic of each application on your list?

Question #1 is for tabulation. Whichever applications rank worst get the earliest attention.

Question #2 is for qualification. Not all question #1 votes are created equal, and you’re allowed to toss out ballots cast by those who can produce no good reason for their opinions.
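If you'd like to tabulate the ballots mechanically, here's a minimal sketch. The 5-4-3-2-1 weighting and the application names are my assumptions for illustration; any scheme that scores worst-first will serve.

```python
from collections import Counter

def tabulate_tye(ballots):
    """Tally 'five worst applications' ballots, worst first.

    Each ballot lists up to five application names in descending
    order of awfulness. The 5-4-3-2-1 point scheme is an assumption;
    toss out unqualified ballots before calling this.
    """
    scores = Counter()
    for ballot in ballots:
        for position, app in enumerate(ballot[:5]):
            scores[app] += 5 - position  # worst-ranked app earns 5 points
    return scores.most_common()  # highest score gets earliest attention

# Hypothetical ballots from three experts.
ballots = [
    ["OrderTracker", "LegacyHR", "TimeSheets"],
    ["LegacyHR", "OrderTracker"],
    ["LegacyHR", "TimeSheets", "OrderTracker"],
]
print(tabulate_tye(ballots))
# [('LegacyHR', 14), ('OrderTracker', 12), ('TimeSheets', 7)]
```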

Once you’ve tabulated the results, pick the three worst applications and figure out what you want to do about them — the term of art is to determine their “dispositions.”

Charter projects to implement their dispositions and you’re off and running. Once you’ve disposed of one of the bottom three, determine the disposition of what had been the fourth worst application; repeat for the fifth.

After five it will probably be a good idea to re-survey your experts, as enough of the world will have changed that the old survey’s results might no longer apply.

You can use the basic TYE framework for much more than improving the company’s technical architecture. In fact, you can use it just about any time you need to figure out where the organization is less effective than it ought to be, and what to do about it.

It’s been the foundation of most of my consulting work, not to mention being a key ingredient in Undercover Boss.

TYE does rely on an assumption that’s of overwhelming importance: That you’ve hired people worth listening to. If you have, they’re closer to the action than anyone else, and know what needs fixing better than anyone else.

And if the assumption is false … if you haven’t hired people worth listening to, what on earth were you thinking?