And now some admiring words about Mitt Romney.

No, no, no, no, no. I’m not referring to any recent votes he might have cast in the Senate. I’m referring to his recent, well-publicized 72nd birthday and the parallels he and his staff established for achieving business and IT agility. The standard they set for business and IT thought leadership rivals anything Romney achieved during his years at Bain Capital.

Start with his staff’s innovative multi-Twinkie cake architecture.

Most birthday cakes are layer cakes and result from waterfall design and production techniques. The baker starts with a recipe — a complete specification for the cake itself, coupled with a detailed work breakdown structure for creating it.

Many cake makers achieve excellent levels of success using these waterfall techniques, and I’d be unlikely to reject their work products.

But … personally, I’d be likely to concentrate my gustatory efforts on the icing. It isn’t that I dislike the cake component of the finished product. It’s that the cake component dilutes the flavor of the frosting, which I enjoy quite a lot more.

In business/technical terms, layer cakes aren’t modular, and deliver unnecessary features and functionality. Twinkie cakes are, in contrast, modular. Each component Twinkie is a complete, integrated whole.

Also: A layer cake is an all-or-none proposition. The baker decides how big a cake to make and that’s that. If unexpected guests show up, well, that’s just too bad. Either everyone gets less dessert, or the new guests do without.

Traditional cake-baking doesn’t scale. Because Twinkie cakes are modular they scale easily: Just add more Twinkies, frost them, and everyone’s happy.

Another aspect of the Twinkie cake deserves mention: It evokes the value of an important technical architecture design principle: buy when you can, build when you have to.

Layer-cake bakers start with raw ingredients and baking infrastructure (the oven and other paraphernalia) and engage in actions equivalent to application development.

Twinkie-cake-makers start with a pile of commercially manufactured Twinkies. They do then make and apply their own frosting, but that step is more analogous to application configuration and integration than to application development.

Our final step in beating the metaphor to death (as opposed to beating the eggs that go into many layer cakes) is testing.

Bake a layer cake and the only way to test it is to mar the cake by cutting a slice out of it. Sure, you can reserve some of the cake mix to bake a mini-cake instead, but small cakes bake more quickly than full-size ones so the baker can never be sure the test cake tastes the same as the production version.

Compare that to a Twinkie cake. Want to test it? Eat a Twinkie. Not sure? Eat another one.

No problem.

The Twinkie cake architecture was innovative and interesting. But just as there’s no such thing as an IT project — it’s always about doing business differently and better or what’s the point? — so Romney himself deserves credit for the “business innovation” of using Agile techniques to blow out his cake’s candles.

Traditionally, candle blowing has been just as waterfall-oriented as cake baking: The birthday celebrator attempts to blow out all of the candles in one great whoof.

As is the case with waterfall project management, this is rarely successful, due to another waterfall parallel: Just as the risk of failure rises in direct proportion to the size of a project, the older the candle-blower, and therefore the more candles there are to extinguish, the less likely it is that anyone could nail all the candles in one breath.

Not to mention the unpleasant thought that, in an attempt to blow out all those candles, some of the blower’s saliva must inevitably end up on the cake.

I’ll leave it to you to figure out parallels to application development or business change. And please do feel free to share your analogies in the Comments.

In any event, Romney used an Agile technique — iteration — to dodge the challenges of traditional candle out-blowing: He removed each candle from the cake and blew it out separately.

Kudos especially for explaining that, this way, each candle was another wish.

The candles, that is, were his birthday backlog. And he dealt with them as all Agile teams deal with items in the backlog: One at a time, with little stress, and a very high level of success.

And, in the end, a spit-free cake.

Useful metrics have to satisfy the seven C’s.

Until two weeks ago it was the six C’s (Keep the Joint Running: A Manifesto for 21st Century Information Technology, Bob Lewis, IS Survivor Publishing, 2006). That’s when I found myself constructing a metric to assess the health of the integration layer as part of rationalizing clients’ application portfolios.

In case you haven’t yet read the Manifesto (and if you haven’t, what are you waiting for?), metrics must be connected, consistent, calibrated, complete, communicated, and current. That is, they’re:

> Connected to important goals or outcomes.

> Consistent — they always go in one direction when the situation improves and in the opposite direction when it deteriorates.

> Calibrated — no matter who takes the measurement, they report the same number.

> Complete, to avoid the third metrics fallacy — anything you don’t measure you don’t get.

> Communicated, because the biggest benefit of establishing metrics is that they shape behavior. Don’t communicate them and you get no benefit.

> Current — when goals change, your metrics had better change too, or they’ll make sure you get your old goals, not your current ones.
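The six C’s above amount to an audit checklist for any proposed metric. As a minimal sketch (the class and field comments are my own paraphrase, not language from the Manifesto), it might look like this:

```python
# Illustrative sketch only: a checklist for auditing whether a
# proposed metric satisfies the six C's. Names are assumptions,
# not an API from the Manifesto.
from dataclasses import dataclass, fields

@dataclass
class MetricChecklist:
    connected: bool     # tied to an important goal or outcome
    consistent: bool    # always moves one way as things improve
    calibrated: bool    # same reading no matter who measures
    complete: bool      # covers everything you need to get
    communicated: bool  # published, so it can shape behavior
    current: bool       # updated whenever the goals change

    def failures(self):
        """Names of the C's this metric doesn't satisfy."""
        return [f.name for f in fields(self) if not getattr(self, f.name)]

# Example: a server-uptime metric that ignores everything else IT does.
uptime = MetricChecklist(connected=True, consistent=True, calibrated=True,
                         complete=False, communicated=True, current=True)
print(uptime.failures())  # ['complete']
```

A metric that fails even one check invites the behavior the fallacies warn about, which is why satisfying all six at once turns out to be tough.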

The six C’s seemed to do the job quite well, right up until I got serious about establishing application integration health metrics. That’s when I discovered that (1) just satisfying these six turned out to be pretty tough; and (2) six didn’t quite do the job.

To give you a sense of the challenge, consider what makes an application’s integration healthy or unhealthy. There are two factors at work.

The first is the integration technique. At one extreme we have swivel-chairing, also known as integration by manual re-keying. Less bad but still bad are custom, batch point-to-point interfaces.

At the other extreme are integration platforms like enterprise application integration (EAI), enterprise service buses (ESBs), and integration platform as a service (iPaaS), which provide synchronization and access by way of single, well-engineered connectors.

Less good but still pretty good are unified data stores (UDS).

The second factor is the integration count — the more interfaces needed to keep an application’s data synchronized to every other application’s data, the worse the integration score.
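To make the two factors concrete, here is one plausible shape for such a score. To be clear: the weights and function below are my own hypothetical illustration, not the metric described in the column, which isn’t published.

```python
# Hypothetical sketch only -- NOT the column's actual metric. It just
# illustrates how a score could combine the two factors: integration
# technique and interface count. Weights are assumptions.

# Worse techniques get bigger multipliers, so the score grows as the
# integration picture deteriorates (the Consistent property).
TECHNIQUE_WEIGHTS = {
    "platform_connector": 1,      # single connector to an ESB/EAI/iPaaS
    "unified_data_store": 2,      # less good but still pretty good
    "custom_point_to_point": 5,   # less bad but still bad
    "manual_rekeying": 10,        # "swivel-chairing"
}

def integration_score(interfaces):
    """Sum the weight of each interface; lower is better.

    `interfaces` is a list of technique names, one entry per interface
    the application needs to keep its data synchronized.
    """
    return sum(TECHNIQUE_WEIGHTS[t] for t in interfaces)

# A well-engineered app: one connector to the integration platform.
print(integration_score(["platform_connector"]))           # 1
# A tangle: three point-to-point feeds plus manual re-keying.
print(integration_score(["custom_point_to_point"] * 3
                        + ["manual_rekeying"]))            # 25
```

Note that this toy version already honors Consistent (it only gets bigger as things get worse) and Calibrated (anyone counting the same interfaces gets the same number).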

Here’s where it gets tricky.

The biggest challenge turned out to be crafting a Consistent metric. Without taking you through all the ins and outs of how I eventually solved the problem (sorry — there is some consulting IP I do need to charge for) I did arrive at a metric that reliably got smaller with better integration engineering and bigger with an integration tangle.

The metric did well at establishing better and worse. But it failed to establish good vs. bad. I needed a seventh C.

Well, to be entirely honest about it, I needed an “R” (range), but since “Seven C’s” sounds much cooler than “Six C’s and an R,” Continuum won the naming challenge.

What it means: Good metrics have to be placed on a well-defined continuum whose poles are the worst possible reading on one end and the best possible reading on the other.

When it comes to integration, the best possible situation is a single connector to an ESB or equivalent integration platform.

The worst possible situation is a bit more interesting to define, but with some ingenuity I was able to do this, too. Rather than detail it out here I’ll leave it as an exercise for my fellow KJR metrics nerds. The Comments await you.
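Mechanically, placing a raw reading on such a continuum is just a normalization between the two poles. A minimal sketch, assuming the poles have already been defined (the function name and the example pole values are mine, purely for illustration):

```python
def to_continuum(raw, best, worst):
    """Map a raw metric reading onto a 0-to-1 continuum (the seventh C).

    `best` and `worst` are the poles of the range: the reading for the
    best possible situation and for the worst possible one. 0.0 means
    best, 1.0 means worst, so "good vs. bad" gets an absolute meaning
    instead of just "better vs. worse".
    """
    span = worst - best
    # Clamp in case a real-world reading drifts outside the defined poles.
    return max(0.0, min(1.0, (raw - best) / span))

# Suppose the best pole is a raw score of 1 (a single platform
# connector) and the worst pole, however it's defined, works out to 50.
print(to_continuum(1, best=1, worst=50))     # 0.0 -- best possible
print(to_continuum(25.5, best=1, worst=50))  # 0.5 -- halfway along
```

The hard part, as the column says, isn’t this arithmetic; it’s defining the worst-possible pole so the continuum means something.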

The point

The point of this week’s exercise isn’t how to measure the health of your enterprise architecture’s integration layer.

It also isn’t to introduce the seventh C, although I’m delighted to do so.

The point is how much thought and effort went into constructing this one metric, which is just one of twenty or so characteristics of application health that need measurement.

Application and integration health are, in turn, two of five contributors to the health of a company’s overall enterprise technical architecture; the enterprise technical architecture is one of four factors that determine IT’s overall organizational health; and IT health is one of ten dimensions that make up the health of the overall enterprise.

Which, at last, gets to the key issue.

If you agree with the proposition that you can’t manage what you can’t measure, then everything that must be managed must be measured.

Count up everything in the enterprise that has to be managed, and considering just how hard it is to construct metrics that can sail the seven C’s …

… is it more likely your company is managed well through well-constructed metrics, or managed wrong by being afflicted with poorly designed ones?

It’s Lewis’s metrics corollary: You get what you measure. That’s the risk you take.