Dear Car Companies,

I won’t be filling out your satisfaction surveys today.

It isn’t that I don’t want you to know whether or not I’m satisfied, and why. Car Company #1, I’d love to explain that your service department did a fine job taking care of my car, and even washed and waxed it while I waited, and that I didn’t mind all that much having had to wait an extra few minutes because the guy who was supposed to tell me my car was ready got stuck on a phone call.

Car Company #2, I’d be happy to explain that your salesman did his best, but needed more training on your new models … and that the training you did give him about making sure we knew how to handle the car’s nifty electronics resulted in his taking more of our time than we really would have liked.

But I can’t. Your sales representative (CC2) and service representative (CC1) explained the rules to me clearly: Either I give them a perfect score, or you give them a failing grade.

Why would I participate in a sham like this?

Look, there are four “metrics fallacies” — four ways using metrics can make a business worse. You must employ an army of analysts. Haven’t any of them explained these to you?

I usually charge good money for this, but just this once I’ll explain them to you for nothing, if you promise to pay attention. The fallacies:

  • Measuring things wrong.
  • Measuring the wrong things, whether you measure them right or wrong.
  • Failing to measure something important.
  • Extending your metrics to individual employees.

You botched #4. But then, a lot of companies botch #4 because a lot of business executives seem to assume that if something goes wrong, it must be someone’s fault.

I guess it’s good they’re measuring all cases, not just the problem ones, because that implies they’re also assuming that if something goes well, someone must deserve the credit.

But they botch it because (I guess) they think their employees are so dim they can’t figure out how to game the metrics to their advantage … like, for example, letting customers know that anything less than a perfect score will land them in a world of hurt.

Here’s a hint: If they are that dim, you’re hiring dim employees, which is a seriously bad idea, especially for the employees you’re putting in front of your customers.

Don’t get me wrong. It isn’t that I think you shouldn’t pay attention to how your employees are doing. Quite the opposite.

The employees you decide to hire — how you choose them, how you train them, how you do your best to keep them, motivate them, and promote the best of them — they’re the single most important determinant of your success. I’m confident of this because I’ve watched outstanding employees succeed in spite of bad processes, substandard tools, and execrable managers, just as I’ve watched disgruntled employees get mediocre results in spite of having the best process designs and tools at their disposal (I didn’t add “great managers” because if they had great managers they wouldn’t have been disgruntled).

What you have to understand is that metrics don’t report root causes. They report symptoms. Unless, that is, you have a predefined list of potential root causes and monitor them all.

But that isn’t what your customer satisfaction survey is doing. The poor schmuck in the service department you’ve asked me to evaluate didn’t do anything wrong. He was stuck, having to choose which of two customers he had to dissatisfy — the one on the phone or me. Why would I give you any ammunition to shoot him with, when the problem, assuming this counts as a problem, was that you had the same person answering the phone and dealing with in-person customers?

And why would I give you ammunition to shoot the salesman with, when the problem was with your sales training program?

Tell you what. Why don’t you send me a new survey? This one would assess my satisfaction with your customer satisfaction assessment process. I’d be happy to fill it out. You could use the results as the ammunition you need to shoot yourself.

Okay, that was mean. It’s just that I’ve written books about this, I’ve given speeches about this, and (now pay attention — this is important) I consult about this, which means that if you had been paying attention, you wouldn’t have made this mistake and my bank balance would be higher.

It’s an outcome that’s known in some circles as a “win/win.”

Sincerely,

 

Robert Lewis

President, IT Catalysts, Inc.

Flackery ain’t what it used to be.

I promised to publish Deloitte’s response to last week’s critique of its Center for the Edge’s Shift Index.

Once upon a time, public relations professionals made sure no potential image challenge went unaddressed. And yet, even though I contacted Deloitte, no response has been forthcoming.

From Deloitte, that is. Steve Denning, author of Forbes’ excellent Radical Management blog, posted a critique of my critique (and others as well; he addresses points I didn’t and wouldn’t have made).

(Endorsement: While Steve and I don’t completely agree on this particular topic, his thoughts on business leadership and management are innovative and interesting … well worth your time and attention.)

Denning defends the use of declining ROA as an indicator of economic decline, and points out that Deloitte’s 2010 report considered alternative metrics, like return on equity and return on invested capital. All three reported equivalent trends.

But the fact that three metrics report similar trends simply means they’re correlated, which is unsurprising given how computationally similar they are.
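To see why the three ratios have to move together, consider their standard definitions: ROA is net income over total assets, ROE is net income over shareholders’ equity, and ROIC is after-tax operating profit (NOPAT) over invested capital. All three put a profit figure on top of a slice of the balance sheet, so if profits trend down economy-wide while balance sheets stay roughly stable, all three ratios trend down together. A minimal sketch, using hypothetical numbers for a toy firm:

```python
# The three return ratios share the same structure: profit / balance-sheet slice.
def roa(net_income, total_assets):
    return net_income / total_assets

def roe(net_income, shareholders_equity):
    return net_income / shareholders_equity

def roic(nopat, invested_capital):
    return nopat / invested_capital

# A hypothetical firm, two years apart: the balance sheet holds steady
# while profit halves.
year1 = {"net_income": 100.0, "nopat": 110.0,
         "assets": 1000.0, "equity": 400.0, "invested_capital": 700.0}
year2 = {"net_income": 50.0, "nopat": 55.0,
         "assets": 1000.0, "equity": 400.0, "invested_capital": 700.0}

for y in (year1, year2):
    print(roa(y["net_income"], y["assets"]),
          roe(y["net_income"], y["equity"]),
          roic(y["nopat"], y["invested_capital"]))
# All three ratios halve in lockstep: shared numerators, correlated trends.
```

Since the three ratios differ only in their denominators, observing that they all decline together is close to observing the same thing three times; it doesn’t independently corroborate the diagnosis.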

It’s how Deloitte misused them that matters to you, because understanding this misuse can help you as you sort through your own information overload, trying to make sense of things.

Misuse #1 — starting with the metric: This is a fundamental fallacy that’s distressingly common in the world of business. When someone starts by asking, “What metrics should we use?” nothing good will come of it.

The better question is, “What are we trying to accomplish?” It very well might be that as a matter of macroeconomic policy, the United States should be trying to maximize the return on aggregate business investment across all industries (ROA, ROE and ROIC all measure return on overall investment).

If that’s the question, say so. Deloitte does not. It simply takes a commonly reported financial ratio that some but not all professional investors consider to be a gauge of management performance, and that all professional investors understand should not be compared across industries, and aggregates it across all industries.

My guess: Deloitte chose its metrics based on what data were readily available that had some association with business performance.

Misuse #2 — working backward from a predetermined conclusion: Deloitte’s Center for the Edge is committed to its Big Shift view of the world. That’s a problem.

To summarize and oversimplify, the Big Shift consists of increased competitive intensity, knowledge flow that matters more than knowledge assets, and businesses’ failure to redefine their practices in response to these two changes.

Imagine ROA is a perfect measure of business management performance. Its 47-year decline might mean what Deloitte says it means … that the caliber of management throughout the U.S. economy has deteriorated in some way, in particular by failing to adapt to the “Big Shift.”

Or, it might mean something completely different. As I pointed out last week, the U.S. shift in industry emphasis … from manufacturing to finance, services and entertainment … also accounts for the ROA decline.

In private correspondence, Denning made the point that my explanation doesn’t change anything, because even if this is the case, it still points to increasing economic weakness.

But it does matter, because there’s a huge difference between mismanaging the economy and mismanaging the individual corporations that make up the economy.

Just because someone finds data that supports their preferred interpretation doesn’t mean they’re right. And if they get the root cause wrong, they’ll only improve the situation by accident.

Root cause analysis is a scientific practice, not a search for ammunition. What Deloitte failed to do was to clearly formulate the different hypotheses that might explain the decline in ROA, identify the evidence that would disprove each of them, and then collect that evidence.

As it happens, I agree that Deloitte’s “Big Shift” trends matter. It’s a truism that the Internet has made agility more important than size, increasing competition. As for “knowledge flow,” I’m pretty sure it’s important as well, although I prefer a more prosaic formulation … that a business’s success is tied to how well employees collaborate, both with each other and with employees in its network of suppliers, partners, and customers.

These trends matter to you, because IT lives in the middle of both of them. If you agree, ask yourself what you’re doing to help your company exploit them.

And, ask what evidence you have that supports your opinion. It might not be very good, but that’s okay. In business, you don’t always have conclusive evidence in time to support the decisions you have to make.

Often, evidence arrives just too late to do any good. All I ask is that you don’t have more confidence in your conclusions than the evidence allows.