“If you can’t measure you can’t manage.”

Hear this much? Probably. What’s even more amazing is that it’s a true statement. More people misinterpret it, though, than you can shake a stick at, assuming you’re in the habit of shaking sticks at people who misinterpret management theory.

That’s because while most people assume the inverse (“If you can measure, you can manage.”), they’ve never heard Lewis’s Corollary (not surprising, since I just made it up): “What you measure wrong, you manage wrong.”

Fact is, establishing the wrong measures will lead to far worse results than establishing no measures at all.

When I was in graduate school studying the behavior of electric fish (I’m not making this up!), I read a book by the respected behavioral scientist Donald Griffin. In it he described the four stages of scientific inquiry:

  1. List every variable that can affect a system.
  2. Study every variable that’s easy to measure.
  3. Declare these to be the only important variables affecting the system.
  4. Proclaim the other variables don’t really exist.

In establishing quality and performance measures, many managers follow a similar sequence. They end up measuring what’s convenient, not what’s important.

Here’s a simple example, and you’ve been on the receiving end. Often. I guarantee it.

Lots of your suppliers have customer-service call centers. These call centers have a gadget called an “Automatic Call Distributor” (ACD) running the telephones. Since the call center manager can’t manage without measuring, and productivity is a Good Thing (to use the technical term), he or she merrily starts measuring productivity.

Now productivity can be a slippery thing to define – we’ll get into the Productivity Paradox in a future column. Unaware of the conceptual banana peels littering the floor of productivity-definition, our hero starts graphing productivity. The convenient definition: calls per hour. ACDs spit out this statistic sliced and diced in hundreds of different dimensions, so the cost of data collection and reporting is zilch.

So’s the value.

This brings us to the First Law of Performance Measurement: Employees will improve whatever measure you establish.

Perceptive readers will recognize this as a special case of the more general dictum, “Be careful what you ask for … you may get it.”

How might you, a high-performing customer-service representative, improve your productivity? Why, you’ll be as abrupt as you can, getting rid of callers just as fast as possible. Never mind if you did anything useful for them. You’re Maximizing Productivity!

Our call center manager now does what any self-respecting manager would do. He or she starts holding staff meetings, chewing everyone out for being so rude and unhelpful. The productivity graphs, though, stay up on the wall.

Our mythical but all-too-real manager would have avoided this mess by following the Second Law of Performance Measurement: Measure What’s Important. The right place to begin in establishing performance measures, then, is to figure out what you care about, which is to say what you’re trying to achieve.

For a customer service center, what’s important? Solving customer problems. What’s a good performance measure? Problems Solved per Hour.

ACDs don’t report the number of problems solved per hour, and for that I’m really and truly sorry. On the other hand, a good problem management system ought to be able to handle this one. All it needs is a field identifying repeat calls.

Well, that’s not all it needs. I lied. If we start graphing this statistic, customer service reps will do their best to pass on the hard problems to Someone Else, because they don’t want to ruin their statistics. Even worse, they may “accidentally” forget to log the hard ones for the same reason.

You need to gauge the relative difficulty of different kinds of problems and use a weighted average. Now, when a customer service rep helps a caller use italics in MS Word, that counts as one problem solved. When someone else helps a caller fix his SYSTEM.INI file, she gets 10,000 points and a trip to Bermuda.
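If your problem management system can export a log of closed calls, the weighted measure takes only a few lines of code to compute. Here’s a minimal sketch in Python; the problem categories, the weights, and the log format are all illustrative assumptions, not anything a real ACD or ticketing package emits:

    # A sketch of difficulty-weighted "problems solved per hour".
    # Assumed inputs: a list of closed calls, each tagged with a problem
    # category and a repeat-call flag, plus a weight table you maintain.

    DIFFICULTY_WEIGHTS = {
        "word-italics": 1,      # trivial how-to question
        "printer-driver": 5,    # routine troubleshooting
        "system-ini": 25,       # the trip-to-Bermuda class of problem
    }

    def weighted_problems_per_hour(closed_calls, hours_worked):
        """Score a rep by difficulty-weighted problems solved, not raw call count.

        closed_calls -- list of (category, was_repeat_call) tuples
        hours_worked -- hours the rep spent on the phones
        """
        score = 0.0
        for category, was_repeat_call in closed_calls:
            if was_repeat_call:
                continue  # a repeat call means the problem wasn't really solved
            score += DIFFICULTY_WEIGHTS.get(category, 1)
        return score / hours_worked

    # Same shift length, very different contributions:
    rep_a = [("word-italics", False)] * 8                      # eight easy ones
    rep_b = [("system-ini", False), ("printer-driver", True)]  # one hard one

    print(weighted_problems_per_hour(rep_a, 8.0))   # 1.0
    print(weighted_problems_per_hour(rep_b, 8.0))   # 3.125

The arithmetic is trivial; the value is that the weight table forces you to decide, explicitly and in advance, which problems you actually care about.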

Now there’s a measure worth putting on the wall.

“Oh, &$@%#, not another &%^ing RFP!”

Requests for Proposal (RFPs) and runners share two characteristics. First, you see a lot of both of them. Second, nobody ever seems to actually enjoy either one. (To the runners I just offended: how come I never see you smiling?)

Clearly, we’ve become a nation of masochists.

But how, other than with an RFP, can you evaluate vendors and products? Form Follows Function: your method of evaluation depends on the circumstances.

You generally face one of three situations:

  1. You fully understand your requirements and the market, and you need equivalent information from all suppliers.
  2. You understand your business, have a general understanding that technology can improve it, and want open-ended suggestions on how different products can help improve or transform your organization.
  3. You need to choose a product from a well-defined category, and anything that’s good enough will do.

These situations call for different approaches.
When You Know Your Requirements

Here’s when you should write an RFP. Quite a few books (including my own Telecommunications for Every Business, Bonus Books, Chicago, 1992) provide detailed guidance. Three principles are worth mentioning here.

First, specify your design goals, not the means by which vendors should address them. For example, if you need a fault-tolerant database server, don’t say you need a system with redundant power supplies, backplanes, CPUs, and network interface cards. If you do, you’ll get what you asked for (in this case, a system that frequently fails from software bugs). Instead, ask how the vendor ensures fault tolerance. Then you may learn that one of the vendors provides mirrored servers with shared RAID storage for a lower overall cost and higher reliability.

Second, don’t withhold information. If you’re a Windows 95 shop, for example, don’t pretend to be open to other solutions. Just say so in your RFP. You’ll save both your vendors and yourself a lot of work.

And finally, if any vendor offers to “help you write your RFP,” just laugh gently, compliment them on their sense of humor, and go on to the next vendor (who will make the same offer). Don’t take offense; they’re just doing their job. Don’t take them up on the offer, either.
Looking for Help

Sometimes, you don’t know all the questions. You know you want to phase out your nationwide SNA network, for example, but have an open mind regarding the best replacement strategy.

You can hire a consultant to help you write an RFP, I suppose … or you can hold extensive conversations with a variety of vendors to learn what each has to offer. By doing so you’ll get a broader look at the market, and you’ll also get a wonderful education in the strengths (from each vendor) and weaknesses (from their competitors) of each approach currently on the market.

In this example, you may find yourself talking to two frame relay vendors, a Transparent LAN Service provider, AT&T and Novell regarding their NetWare Connect Services, and an independent systems integrator. You’ll benefit from an unstructured dialog in which each vendor can assess your situation in depth and describe a scenario of how their approach would work for your company.
When Good Enough Will Do

Let’s imagine you’ve been asked to select a new standard Ethernet network interface card (NIC). You could write an RFP or hold extensive conversations with sales reps, but why? Read a few reviews, ask a few basic questions, insist on a few evaluation units (to make sure they work and to learn about any installation glitches), and pick one. Flip a coin if you have to. It’s a low-impact decision.

Oh yeah, just one more thing: very few of us make decisions based on logic. Salespeople know we make emotional decisions, then construct logical arguments to justify them. Don’t fall into this trap: recognize your emotional preference up front, figure out how much weight you should give it, and keep it from dominating your process.
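One way to keep that preference honest is to give it a line of its own. The sketch below illustrates the idea with a simple weighted scoring matrix; every criterion, weight, and score in it is invented for the example, and the one rule that matters is that you fix the weight of “gut feel” before you ever meet the vendors:

    # A vendor-scoring sketch: gut feel gets a row, but a capped one.
    # All criteria, weights, and scores below are made up for illustration.

    WEIGHTS = {
        "meets requirements": 0.40,
        "total cost":         0.25,
        "vendor viability":   0.20,
        "gut feel":           0.15,  # fixed BEFORE evaluating any vendor
    }

    vendors = {
        "Vendor A": {"meets requirements": 9, "total cost": 6,
                     "vendor viability": 8, "gut feel": 4},
        "Vendor B": {"meets requirements": 6, "total cost": 8,
                     "vendor viability": 7, "gut feel": 9},
    }

    for name, scores in vendors.items():
        total = sum(WEIGHTS[criterion] * scores[criterion] for criterion in WEIGHTS)
        print(f"{name}: {total:.2f}")

    # Vendor A: 7.30, Vendor B: 7.15. B's charm isn't enough to outweigh
    # A's better fit to the requirements, which is exactly the point.

The numbers don’t make the decision for you; they just keep the emotional 15 percent from quietly becoming 80.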