
Optimizing benchmarks


To optimize the whole you usually have to sub-optimize the parts.

This is a fundamental reality of design. It isn’t, however, an original discovery. I first encountered the concept in Arno Penzias’ Ideas and Information (Simon and Schuster, 1989): “When each activity focuses on providing ‘quality service’ according to its own metrics, important efficiencies get overlooked.”

Are you going to argue with a Nobel Prize-winning physicist? Not me. But I wonder how many currently popular business trends are the result of ignoring this reality. Take the commonplace activity of benchmarking. Whenever you compare your organization’s performance to a best-practices benchmark, you’re making the unstated assumption that your company is trying to optimize the function in question.

No, it’s worse than that. A typical business function juggles six basic parameters: cycle time, throughput, overhead cost, unit cost, excellence (how Spartan or complete it is), and quality (adherence to specifications). It’s the traditional quicker/cheaper/better, but with the recognition that none of the three is as simple as it seems.

Here’s how this relates: When you use a benchmark you make two assumptions. The first is that you’re trying to optimize the individual function in the first place; the second is that both you and the benchmark are trying to optimize the same parameters, in the same priority ranking.

In any real business, neither assumption is likely to be true.
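
To make that second assumption concrete, here’s a minimal sketch. The six parameters come from the column; the two priority rankings are hypothetical, invented purely for illustration:

```python
# Illustrative sketch only: both priority rankings below are hypothetical
# (highest priority first), not drawn from any real benchmark or business.
benchmark_shop = ["unit cost", "throughput", "cycle time",
                  "quality", "excellence", "overhead cost"]
hospital_it = ["quality", "excellence", "cycle time",
               "throughput", "overhead cost", "unit cost"]

# A single-metric comparison is only apples-to-apples if both shops rank
# the six parameters the same way. Here they don't, so comparing one
# number across them compares two different optimization problems.
if benchmark_shop != hospital_it:
    print("Different priority rankings: the benchmark comparison misleads.")
```

The point isn’t the code, of course. It’s that the comparison pits two different optimization problems against each other and pretends the answer means something.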

Let’s imagine you run the data center. One of the most basic measures of old-school data center management is cost per MIPS (millions of instructions per second). So you go out and buy some industry benchmarks. Lo and behold, your cost per MIPS is 20% lower than the industry’s. It’s a mitzvah!

Well, no. Actually, it isn’t, and you’re a schlemiel, because you run the data center for a hospital, where five-nines reliability (99.999%) is the minimum acceptable level. You’re only achieving nine fives. Sure, patients are dropping like pins on open lane night at the Alley Cat Bowl, but you’re beating the benchmark and that’s what matters — optimizing your area of responsibility.
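
In case the arithmetic behind those availability figures isn’t intuitive, here’s a quick sketch of what they mean in downtime per year. Reading “nine fives” as 55.5555555% is my interpretation of the joke:

```python
# Annual downtime implied by an availability figure. Illustrative
# arithmetic only; "nine fives" (55.5555555%) is one reading of the joke.
MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600

def annual_downtime_minutes(availability: float) -> float:
    """Minutes per year the system is down at the given availability."""
    return MINUTES_PER_YEAR * (1.0 - availability)

five_nines = annual_downtime_minutes(0.99999)      # ~5.3 minutes/year
nine_fives = annual_downtime_minutes(0.555555555)  # ~233,600 minutes/year

print(f"Five nines: {five_nines:.1f} minutes of downtime per year")
print(f"Nine fives: {nine_fives / (24 * 60):.0f} days of downtime per year")
```

That’s roughly five minutes of downtime a year versus roughly five months of it, which is about the difference between a hospital data center and the Alley Cat Bowl.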

Nobody would do that, of course. Certainly not you. First comes reliability (adherence to specification); cost comes next. So you get to five-nines, then check your cost per MIPS and find you’re 25% over the best-practices benchmark. Schmuck.

That benchmark. Is it derived from businesses with similar goals and drivers? Probably not: The hospital you support has embarked on an ambitious program that implements electronic medical charts connected through wireless networks. It’s an industry-leading program that makes use of state-of-the-art technology, which is to say technology for which the bugs haven’t yet been worked out. Your ability to maintain reliability in the face of this program has had a significant impact on IT operating costs.

Which is to say, you don’t get to optimize your business function, nor does anyone else. Nor should they. It’s a fundamental problem with benchmarking that no amount of Yiddish can fix. Whether the subject is supply chain management, retail loss prevention or the IT help desk, benchmarks are useless, because they’re based on “best practices” designed to optimize the individual function. Well-run businesses, though, compromise every single business function to support the whole.

This isn’t just theory, either. I doubt there’s a reader of this column who hasn’t worked in a company in which executives and managers focus on taking care of their slice of the company, regardless of the consequences for the business as a whole. Chances are their bonuses depend on it. It’s organizational design predicated on the false premise that if you optimize the parts you optimize the whole. Whether you call it operating in silos, overemphasizing the organizational chart or a highly politicized organization, it derives from the same root cause: The self-defeating attempt to optimize the whole by optimizing the parts.

Are benchmarks completely useless? No. There are industries in which some functions are standardized, generally through external regulation. Aircraft maintenance would be an example. Under these circumstances, the regulator establishes all functional requirements, and a benchmark measuring the cost of operating that function does mean something.

But even here, it doesn’t mean much, because an airline with a brand-new fleet of identical aircraft has different operating characteristics from an airline with a mixed fleet.

There’s a bare chance you might find a function in your IT organization for which some external benchmark, like cost per function point, might be meaningful. It isn’t impossible. After all, in an infinite universe, everything that’s allowed by the laws of physics must happen somewhere, sometime.

But in your IT department? I doubt it.