How did benchmarking and best practices ever get lumped together? They’re entirely different subjects, linked only by their common use of outside information.

Regular readers know that I’m no fan of benchmarking. Think of benchmarks as Barbie, and the typical corporate response as the equivalent of implants, liposuction, and anorexia. Benchmarks can drive organizations into obsessive monitoring of measures irrelevant to their situations, and mindless pursuit of results developed for someone else.

Benchmarks are, for the most part, a waste of time. Baselines are an entirely different story. If you’ve chosen the right measures, baselines are unfailingly useful. Baselines tell you if you’re improving, and by how much. Without baselines, you’re just guessing.

Baselines and best practices go together like green eggs and ham. You may not like green eggs and ham (you may not like them, Sam-I-am), but once you try them you will find they’ll help you in your daily grind. Best practices are what you use to improve upon your baseline measures.

I don’t get many questions about developing appropriate PC support measures, or about PC support best practices. What I’m asked most often is the right ratio of PC support staff to end-users.

This is a benchmarking question. The answer is similar to Abe Lincoln’s response to the boy who asked how long his legs should be (“Long enough to reach the ground.”). Don’t even ask, because some other company’s answer – or, even worse, the average of hundreds of other companies’ answers – is meaningless for you.

Too many variables affect the answer: Laptops used by travelers are harder to support than desktops used in the home office. Engineers need a wider range of software than financial analysts (and their problems are generally harder to understand, too). A network of small sales offices poses support challenges not found in a centralized facility. And you may need to provide different service levels than “the average” company does.

Here’s a good way to set your company’s support ratio: Use current service levels and staffing as a baseline. Set a goal of improving your service levels by 10 percent a year without changing your ratio of analysts to end-users. Then stop worrying about whether you have “the right” ratio. There isn’t one.

If that isn’t good enough, visit your company’s call center manager and ask for help. Call centers have the same problem you do, only they’ve solved it. The solution isn’t just for the Help Desk, which is (of course!) a call center.

A call center calculates its optimal workforce by modeling a queue. Callers enter the queue at a predictable rate, and agents remove callers from the queue at intervals determined by the number of agents and the average time each agent spends on a call. All these quantities are subject to statistical variation, of course, and calls can move from one queue to another.

Sounds like your PC support function, doesn’t it?

The call center manager almost certainly has call center workforce planning software you can use. Plug in your data – number of requests for assistance, average time needed to service different types of request, and the number of analysts available to handle requests – and read out the answer.

You do have the data, don’t you?
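If you’re curious what’s inside that workforce-planning software, the arithmetic isn’t mysterious. Here’s a minimal sketch, in Python, of the classic Erlang C (M/M/c) staffing calculation most of these tools are built on. The function names and the sample figures – 40 requests an hour, a 12-minute average handling time, a goal of answering 80 percent of requests within five minutes – are mine, purely for illustration.

```python
import math


def erlang_c(agents: int, offered_load: float) -> float:
    """Probability that an incoming request has to wait (Erlang C formula)."""
    if agents <= offered_load:
        return 1.0  # demand exceeds capacity; the queue never clears
    partial_sum = 0.0  # running sum of a^k / k! for k = 0 .. agents-1
    term = 1.0         # a^0 / 0!
    for k in range(agents):
        partial_sum += term
        term *= offered_load / (k + 1)
    # term is now a^agents / agents!
    waiting_term = term / (1.0 - offered_load / agents)
    return waiting_term / (partial_sum + waiting_term)


def analysts_needed(requests_per_hour: float, avg_handle_minutes: float,
                    target_seconds: float = 300.0,
                    target_fraction: float = 0.8) -> int:
    """Smallest staff count that answers `target_fraction` of requests
    within `target_seconds`, under simple M/M/c queueing assumptions."""
    handle_seconds = avg_handle_minutes * 60.0
    # Offered load in Erlangs: arrival rate (per second) times handle time.
    offered_load = (requests_per_hour / 3600.0) * handle_seconds
    agents = max(1, math.ceil(offered_load))
    while True:
        p_wait = erlang_c(agents, offered_load)
        # Fraction of requests picked up within the target time.
        service_level = 1.0 - p_wait * math.exp(
            -(agents - offered_load) * target_seconds / handle_seconds)
        if service_level >= target_fraction:
            return agents
        agents += 1


if __name__ == "__main__":
    # Illustrative numbers only: 40 requests an hour, 12-minute average
    # handling time, 80 percent answered within five minutes.
    print(analysts_needed(requests_per_hour=40, avg_handle_minutes=12))
```

On those made-up numbers the sketch comes back with ten analysts. The point isn’t the specific answer; it’s that the answer falls out of your own demand and service-level data, not out of anyone else’s ratio.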

Definitions get me into a lot of trouble.

Early in my career, I was asked to perform a “feasibility study.”

“What’s the subject?” I asked.

“An inventory system,” my boss answered.

“OK, it’s feasible,” I told him. “I guarantee it. Lots of other companies keep track of their inventories on computers, so it must be.”

More patient than most of the managers I’ve reported to in my career, he explained to me that in IS, a feasibility study doesn’t determine whether something is feasible. It determines whether it’s a good idea or not.

It turned out to be a good idea (a tremendous surprise), so next we analyzed requirements. You know what’s coming: A senior analyst asked me if the requirements were from before or after the negotiation.

“What negotiation?” I asked. “These are requirements. They’re required.”

This is how I learned that we do feasibility studies and requirements analyses in part to test the validity of the requests we receive. The process would be unnecessary if we believed end-users were our customers.

At the supermarket, nobody says to a customer, “Those fried pork rinds aren’t an acceptable part of your diet!” or, “Prove you need that ice cream!” At the supermarket, wanting something and being able to pay for it are all that matter.

In IS we used to view end-users as our (internal) customers, and we figured the relationship followed from the role: If they’re our customers, our job is (as Tom Peters would say) to delight them.

End-users aren’t our customers, though. They’re our consumers – they consume our products and services but don’t make buying decisions about them. But does that really change anything, or is it just a useless distinction?

It does change things. “Customer” defines both a role and a relationship. What does “consumer” say about a relationship? Nothing. Or at best, very little.

“Consumer” defines only a role, and in the context of organizational design, role is a process concept, whereas relationship is a cultural one. (Definitions: Processes describe the steps employees follow to accomplish a result. Culture describes their attitudes and the behavior they exhibit in response to their environment.)

What should the relationship between IS and the rest of the business look like? It’s one of the most persistently controversial issues in our industry, and when you view it as a clash between process design and cultural cues, it becomes clearer why our discussions of it are so jumbled.

Defining the rest of the business as our consumers frees us to define whatever relationship works best. As with my inventory system, every highly successful result I’ve ever seen in IS has come from open collaboration between IS and end-users, with authority shared and the dividing line between the two groups obliterated.

Yet many of my colleagues and much of the correspondence I receive on the subject still advocate a hard dividing line between the two groups, with formally documented “requirements” defined by a small group of end-users and the authority for “technical” decisions reserved for IS.

Of course, purely technical decisions are few and far between these days. What brand of NIC should you buy? OK, that’s a technical decision, but even something as seemingly technical as choosing a server OS places constraints on business choice. Or, looking at the opposite end of the process, selecting a business application limits IS’s choice of operating system, sometimes to a single one.

Trying to partition responsibilities to preserve the prerogatives of one group or the other leads to nothing but bad results. “You have to do this for me because I’m your customer,” and “You can’t do that because it violates our standards,” are two sides of the same counterfeit coin.

There are few purely technical or purely business decisions anymore. Since form follows function, you should strive for a relationship that recognizes this fact. What kind of relationship should you have with your consumers? One that emphasizes joint problem-solving and decision-making, and working together in a collaborative environment.

Or, in a word, “partnership.”