When my eldest daughter Kimberly was three, she got her first ruler. She happily wandered around the house, measuring everything in sight. She did a good job, too. For example, she established beyond the slightest doubt that I weighed exactly 39 inches, 39 being the biggest number she knew.

As we discussed last week, plenty of business measures similarly apply precise numbers and inappropriate units to the wrong concepts. Doing it right matters a lot, because once you establish performance and quality measures, employees will move the measures in the desired direction.

They may drive your organization off a cliff while doing it, but they’ll move those measures. You can count on it. And you can’t finesse the problem by using weasel words like, “These measures are important, but use your common sense.” What your employees just heard is, “We know we’re supposed to establish measures, but we’re really running this company according to our daily whims.” If, for example, you don’t always want waste to go down, don’t measure waste — measure something else for which you can establish consistent performance goals. (By the way, a lot of consultants use the term “metrics,” probably for the same reason we call records “rows” or “tuples” when they’re in a relational database. That is to say, for no good reason.)

Enough carping (or trouting, if you like classier fish) – what’s the process for establishing good measures? Here are some guidelines you may find useful:

1. Decide what’s important: List the most important products and services you deliver. The list should have no more than seven entries (more and you’ll confuse people instead of enlightening them). It should be specific and in plain language, should exclude adjectives and adverbs, and should refer to results as end-users define them. For example, you don’t deliver “reliable access to corporate computing resources.” “Reliable” is an adjective that will translate into a measure later on. “Corporate computing resources” is vague and abstract. Substitute “terminal emulation services to corporate mainframe computers.”

Some IS managers like the idea of “Function Points.” Function points may be a really nifty tool for evaluating your effectiveness in developing applications, but in this context all they do for end-users is obfuscate, and that means they give you verbiage to hide behind. If you deliver data-entry screens, reports, and batch maintenance programs, say so.

2. Define, in end-user terms, goals for those results: We’re not ready for numbers yet. We’re being slow and methodical, and for a reason – to force ourselves to think through the issues. The common goals tend to invoke the gods of reliability, performance, and cost. They apply often, but not always, so make sure they’re what you actually care about.

3. Turn your goals into preliminary measures: Here you translate English to Math. Don’t be clever, and above all don’t use indicators – it’s easy to improve an indicator without improving your business. If, for example, you think your end-users value reliable fileservers, establish “Percent Availability” as a measure. Psychologically, positive measures are probably superior to negative ones, so availability has an edge over downtime.
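
To make the translation from English to Math concrete, here is a minimal sketch in Python of how a “Percent Availability” figure might be computed from logged downtime. The function name and the sample numbers are invented for illustration; any real calculation would come from your own availability logs.

    # Illustrative only: availability as a percentage of scheduled service time.
    def percent_availability(scheduled_hours, downtime_hours):
        if scheduled_hours <= 0:
            raise ValueError("scheduled_hours must be positive")
        uptime_hours = scheduled_hours - downtime_hours
        return 100.0 * uptime_hours / scheduled_hours

    # A month with 720 scheduled hours and 3.5 hours of downtime:
    print(round(percent_availability(720, 3.5), 2))   # prints 99.51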

4. Test your measures: Make sure each measure behaves properly. In other words, imagine every situation you can, and make sure the measure always goes up when things improve, and goes down when they get worse.

Usually, you’ll have to do some fiddling. Imagine a lawn-mower manufacturer that has decided to reduce the number of defects. Up goes a chart showing the percentage of defect-free mowers shipped each week. Week after week, the percentage edges higher as employees drive down the number of defects. That’s good, isn’t it?

Not really. Turns out, mowers exhibit two kinds of defect: bad paint jobs and defective blades that shatter, amputating customers’ feet. Because they can fix the paint problem easily, employees pay close attention to it. The blade problem, which is harder to fix and matters far more, gets quietly ignored, yet the chart on the wall keeps improving.

That’s the kind of tuning you’ll have to do. In this case you’d categorize defects and assign each a weight based on its relative importance.
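
If it helps to see the arithmetic, here is a rough Python sketch of one such weighting scheme. The categories, weights, and weekly counts are all invented for illustration; in practice you’d set the weights to reflect what each kind of defect actually costs you and your customers.

    # Illustrative only: a shattered blade counts for far more than a paint blemish,
    # so driving down the easy cosmetic defects alone can't game the measure.
    DEFECT_WEIGHTS = {"paint": 1, "blade": 1000}

    def weighted_defects_per_thousand(defect_counts, mowers_shipped):
        score = sum(DEFECT_WEIGHTS[kind] * count for kind, count in defect_counts.items())
        return 1000.0 * score / mowers_shipped

    # Two hypothetical weeks: paint defects drop sharply, blade defects creep up.
    print(weighted_defects_per_thousand({"paint": 40, "blade": 2}, 5000))  # 408.0
    print(weighted_defects_per_thousand({"paint": 10, "blade": 3}, 5000))  # 602.0
    # Lower is better here; to honor the earlier preference for positive measures,
    # you might publish it inverted, as a weighted quality index.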

5. Publicize the results: Make sure everyone sees how you’re doing. Show graphs on the wall. Share results with key customers or end-users. Don’t forget your boss. And by all means, make a big fuss with your staff when the measures improve. They’ll deserve your praise.

Measurement can be a powerful tool in the IS management toolbox. Use it well and performance will improve. Use it poorly and only the measure will improve.

“If you can’t measure you can’t manage.”

Hear this much? Probably. What’s even more amazing, it’s a true statement. More people misinterpret it, though, than you can shake a stick at, assuming you’re in the habit of shaking sticks at people who misinterpret management theory.

That’s because most people assume the inverse – “If you can measure, you can manage” – while never having heard Lewis’s Corollary (not surprising, since I just made it up): “What you measure wrong, you manage wrong.”

Fact is, establishing the wrong measures will lead to far worse results than establishing no measures at all.

When I was in graduate school studying the behavior of electric fish (I’m not making this up!), I read a book by the respected behavioral scientist Donald Griffin. In it he described the four stages of scientific inquiry:

  1. List every variable that can affect a system.
  2. Study every variable that’s easy to measure.
  3. Declare these to be the only important variables affecting the system.
  4. Proclaim the other variables don’t really exist.

In establishing quality and performance measures, many managers follow a similar sequence. They end up measuring what’s convenient, not what’s important.

Here’s a simple example, and you’ve been on the receiving end. Often. I guarantee it.

Lots of your suppliers have customer-service call centers. These call centers have a gadget called an “Automated Call Distributor” (ACD) running the telephones. Since the call center manager can’t manage without measuring, and productivity is a Good Thing (to use the technical term), he or she merrily starts measuring productivity.

Now productivity can be a slippery thing to define – we’ll get into the Productivity Paradox in a future column. Unaware of the conceptual banana peels littering the floor of productivity-definition, our hero starts graphing productivity. The convenient definition: calls per hour. ACDs spit out this statistic sliced and diced in hundreds of different dimensions, so the cost of data collection and reporting is zilch.

So’s the value.

This brings us to the First Law of Performance Measurement: Employees will improve whatever measure you establish.

Perceptive readers will recognize this as a special case of the more general dictum, “Be careful what you ask for … you may get it.”

How might you, a high-performing customer-service representative, improve your productivity? Why, you’ll be as abrupt as you can, getting rid of callers as fast as possible. Never mind whether you did anything useful for them. You’re Maximizing Productivity!

Our call center manager now does what any self-respecting manager would do. He or she starts holding staff meetings, chewing everyone out for being so rude and unhelpful. The productivity graphs, though, stay up on the wall.

Our mythical but all-too-real manager would have avoided this mess by following the Second Law of Performance Measurement: Measure What’s Important. The right place to begin, then, in establishing performance measures is to figure out what you care about – what you’re trying to achieve.

For a customer service center, what’s important? Solving customer problems. What’s a good performance measure? Problems Solved per Hour.

ACDs don’t report the number of problems solved per hour, and for that I’m really and truly sorry. On the other hand, a good problem management system ought to be able to handle this one. All it needs is a field identifying repeat calls.

Well, that’s not all it needs. I lied. If we start graphing this statistic, customer service reps will do their best to pass on the hard problems to Someone Else, because they don’t want to ruin their statistics. Even worse, they may “accidentally” forget to log the hard ones for the same reason.

You need to gauge the relative difficulty of different kinds of problems, and use a weighted average. Now, when a customer service rep helps a caller use italics in MS Word, that counts for one problem solved. When someone else helps a caller fix his SYSTEM.INI file, she gets 10,000 points and a trip to Bermuda.
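
As a rough sketch of how that might look in code, assume a problem management system that logs each call with a category, a solved flag, and a repeat-call flag. The field names and weights below are hypothetical, with the weights deliberately echoing the numbers above.

    # Illustrative only: weighted "problems solved per hour" for one rep.
    PROBLEM_WEIGHTS = {
        "word_italics": 1,           # easy how-to question
        "system_ini_repair": 10000,  # hard problem; Bermuda not included
    }

    def weighted_problems_per_hour(calls, hours_worked):
        # Repeat calls don't count: the problem wasn't really solved the first time.
        solved = sum(
            PROBLEM_WEIGHTS[call["category"]]
            for call in calls
            if call["solved"] and not call["repeat_call"]
        )
        return solved / hours_worked

    calls = [
        {"category": "word_italics", "solved": True, "repeat_call": False},
        {"category": "word_italics", "solved": True, "repeat_call": True},   # excluded
        {"category": "system_ini_repair", "solved": True, "repeat_call": False},
    ]
    print(weighted_problems_per_hour(calls, 8.0))   # prints 1250.125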

Now there’s a measure worth putting on the wall.