Bob Lewis’s IS Survival Guide (Macmillan Computer Publishing) will be on the shelves by the time this column appears. For everyone who reads this column and wonders whether it’s worth buying, I have only one thing to say: My kids need new shoes.

If you do buy the book and like it, tell your friends. If you don’t like it … pretend I’m your boss. Act like it’s the greatest thing ever, even though you know better. I have plenty of reality in my life already, and lots of friends and colleagues who minimize any risk of ego-inflation.

Of everything in the book, the chapter on measurement was the hardest to write. Measurement doesn’t lend itself to lively prose under the best of circumstances, and even among IS professionals the plague of innumeracy is rampant.

Worst of all, the state of the art in IT measurement is dismal. A conference in which I recently participated reinforced that conclusion.

The good-news part of the story is that we know how to understand the performance of data center operations. We have well-developed measures to help us understand how reliable our systems are, how well they perform, and how much they cost to operate.
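
To make “well-developed measures” concrete, here’s a minimal sketch of two of the simplest: availability as uptime over scheduled time, and cost per transaction as operating cost over transaction volume. The numbers and the Python are mine, invented for illustration, not anybody’s standard:

```python
# Illustrative only: the figures are invented, and real operations
# measurement involves far more than two ratios.

def availability(uptime_hours: float, scheduled_hours: float) -> float:
    """Fraction of scheduled time the system was actually up."""
    return uptime_hours / scheduled_hours

def cost_per_transaction(operating_cost: float, transactions: int) -> float:
    """Operating cost spread across the work the system actually did."""
    return operating_cost / transactions

# A hypothetical month for one data center:
print(f"Availability: {availability(718.5, 720.0):.3%}")
print(f"Cost per transaction: ${cost_per_transaction(250_000, 4_200_000):.4f}")
```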

Not only do we know how to measure operations; several professional benchmarking companies also maintain extensive performance databases, so you can compare yourself to the rest of your industry, or to business as a whole. If you’re substandard, you can set up improvement programs to make yourself better. If, on the other hand, you’re ahead of industry averages you can … well, you can still establish improvement programs, because you always want to improve, don’t you?

Unlike baselining, though, benchmarking really doesn’t do a lot for you. There are only two reasons for benchmarking, both of them social. The first is to defend yourself against executive-suite attacks (“We’ve just undertaken a benchmarking exercise and are ahead of industry averages, so QUIT YOUR GRIPING, FRED!”). The second is to break through internal resistance to change. It’s as common in IS as anywhere else for employees to figure they’ve already done as much as possible, so a benchmarking study that demonstrates substandard performance can help break through that resistance. (So can establishing a baseline and saying to employees, “I don’t care if we’re good or bad, we’re going to be better next year than this year.”)

Internally, we know how to measure operating costs. How about our contribution to the rest of the business? Well …

We do know how to measure how much process-improvement projects increase productivity. If anyone is willing to make the effort, they can perform a before-and-after productivity analysis of the process being improved.
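
The arithmetic behind such an analysis is the easy part; gathering honest before-and-after numbers is what hurts. Here’s a minimal sketch of the calculation, with figures I made up:

```python
# Illustrative sketch with made-up numbers; in practice the hard part is
# collecting comparable "before" and "after" data, not the arithmetic.

def productivity(units_of_output: float, labor_hours: float) -> float:
    """Output produced per hour of labor spent on the process."""
    return units_of_output / labor_hours

before = productivity(units_of_output=12_000, labor_hours=1_600)  # old process
after = productivity(units_of_output=15_000, labor_hours=1_500)   # redesigned process

gain = (after - before) / before
print(f"Before: {before:.1f} units/hour; after: {after:.1f} units/hour")
print(f"Productivity gain: {gain:.1%}")
```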

This doesn’t answer the question we’re asking. Process-improvement projects include not only new technology, but also process redesign, culture change, usually a new business model, and sometimes a new philosophy of leadership. What part of the productivity increase comes from information technology? It isn’t a meaningful question — technology is integral to the new process, not a bolted-on automator of activities you’d otherwise do manually.

Assessing the contribution of technology to productivity is what we’re best at, and even there we don’t have an adequate framework, just a way to measure the impact of a specific process-improvement project. We have no idea at all how to measure the value information technology creates. Instead, silly notions like the Gartner Group’s Total Cost of Ownership and weak analyses like Paul Strassmann’s The Squandered Computer (both critiqued extensively in this space) still get a lot of attention.

It’s time for us to get a handle on this issue. If measurement of the value we create is important, it’s time to get on with it. If not, it’s time to formulate a clear-headed debunking of the whole concept.

Either way, we need to do better. Next week, we’ll start exploring how.

Technical people, such as programmers, engineers, and scientists, have gained a reputation among nontechnical folk as poor communicators. Most of the problem arises not from poor communications skills but from an excess of them. Tech-folk — the real ones, not the jargon-meisters who substitute neologisms designed to impress the rubes for actual knowledge — assign precise meanings to precise terms to avoid the ambiguity that marketing professionals (for example) not only embrace, but sometimes insist on.

Sometimes precision requires complex mathematics, because English, evolved to handle prosaic tasks like describing the weather, explaining how to harvest crops, and insulting that ugly guy from the next village, isn’t quite up to the task of explicating the nature of a 10-dimensional universe. That’s why many physicists communicate poorly with the innumerate.

Other times precision simply requires a willingness to make distinctions. Take, for example, the words “theory” and “hypothesis.” Most people use theory to mean “an impractical idea,” and hypothesis to mean “a brilliant insight” (“hypothesis” has more syllables so it must be more important). Scientists, in contrast, know that theories have been subjected to extensive testing, and can be used to address real-world problems. It’s hypotheses that are simply interesting notions worthy of discussion and (maybe) testing.

This is a distinction worthy of a manager’s attention, since a lot of our responsibilities boil down to brokering ideas: figuring out which ones to sponsor or apply and which to reject or ignore. Last week’s column dealt with how you can assess relatively untested ideas (business hypotheses, if you like). This week we’ll cover the harder question of how to deal with well-worn ideas that, while popular, may still be poor choices for your department, and may even be downright wrong, no matter how widely used.

Your first step in assessing an idea that’s in wide use (assuming it’s applicable to one of your priority issues) is to show some respect. Keep your ego out of it. Most of us have an ego-driven tendency toward what scientists would call Type 1 and Type 2 errors.

We make Type 1 errors — rejecting good ideas — through our unwillingness to admit that someone could think of something we can’t instantly understand. Remember, lots of smart people have applied these ideas, so they’re unlikely to be examples of mass stupidity. If the idea may apply to your situation, make sure you understand it — don’t reject it through the Argument from Personal Incredulity (a term borrowed from the evolutionary biologist Richard Dawkins and discussed at length last week).

Our egos also lead us to the opposite problem, by the way. We commit Type 2 errors — accepting bad ideas — through our desire to be the one to find and sponsor something new and useful.
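
If it helps to see the two errors side by side, here’s a toy score-keeper that uses this column’s definitions of them; the decision log is entirely hypothetical, and in real life you learn which rejected ideas would have worked only in hindsight, if at all:

```python
# Toy score-keeper using this column's definitions of the two errors:
# Type 1 = a good idea we rejected, Type 2 = a bad idea we embraced.
# The decision log below is entirely hypothetical.

decisions = [
    # (idea, we_adopted_it, it_would_have_worked)
    ("object databases everywhere", True,  False),
    ("structured code inspections", False, True),
    ("annual benchmarking study",   True,  True),
]

type_1 = sum(1 for _, adopted, worked in decisions if not adopted and worked)
type_2 = sum(1 for _, adopted, worked in decisions if adopted and not worked)

print(f"Type 1 errors (good ideas rejected): {type_1}")
print(f"Type 2 errors (bad ideas accepted):  {type_2}")
```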

Next step: Make sure the idea has been tested and not simply used a lot. Businesses survive lots of screwy notions. Using and surviving an idea doesn’t mean it led to valuable results. Look for business outcomes, not warm fuzzies. (In the world of science, psychotherapy has received extensive criticism on the same grounds.)

Your last step is to look at the idea’s original scope. Well-tested scientific theories are rarely invalidated. Instead, as with Newtonian physics (which doesn’t work in quantum or relativistic situations), scientists discover the boundaries outside which a theory doesn’t apply. Well-tested business ideas also may fail when applied outside their scope. As an example, Total Quality Management (TQM) is unsurpassed at perfecting manufacturing processes, where quality consists of adherence to measurable specifications. TQM’s successes outside the factory, however, have been spotty.

One more thought: Have enough self-confidence to respect your own expertise. Doing something because the experts say so is as miserable an excuse as “I was just obeying orders.”

Don’t worry — if you need an expert to back up the course of action you’ve chosen you can always find a tame consultant willing to recommend it … for a small fee, of course.