Here’s an alarming statistic that I read recently: 81 percent of everyone surveyed thinks their IS organization is average or below average. If “below average” translates to “below the mean,” only 19 percent of us are in the top 50 percent.

Since human perception is a pretty dull scalpel, “average or below average” may not be quite as precisely defined as “worse than or equal to exactly half the total number.” Let’s try a different interpretation. Figure anything within one standard deviation of the mean counts as average. In round numbers, about two-thirds of any sample falls inside one standard deviation. The remaining third splits in half, so one sixth of any sample is above average. The remainder, five sixths or just over 83 percent, is average or below.
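If you’d rather check that arithmetic than take my word for it, here’s a minimal sketch, assuming perceptions follow a roughly normal distribution; the round numbers above are just the back-of-the-envelope version of the same calculation:

    # Back-of-the-envelope check of the "average or below" arithmetic,
    # assuming survey perceptions follow a roughly normal distribution.
    from math import erf, sqrt

    def normal_cdf(z):
        # Probability that a standard normal variable falls below z
        return 0.5 * (1.0 + erf(z / sqrt(2.0)))

    within_one_sd = normal_cdf(1.0) - normal_cdf(-1.0)   # the "average" band
    above_average = 1.0 - normal_cdf(1.0)                 # better than +1 SD
    average_or_below = 1.0 - above_average

    print(f"within one SD (average): {within_one_sd:.1%}")    # about 68.3%
    print(f"above average (> +1 SD): {above_average:.1%}")    # about 15.9%
    print(f"average or below:        {average_or_below:.1%}") # about 84.1%

Run it and the exact figure comes out near 84 percent, a hair above the five-sixths round number, which only strengthens the point.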

Mystery solved! The 81 percent who figure their IS departments are average or worse are almost exactly the number who ought to think so according to the inviolable laws of statistical sampling.

The authors of the paper reporting this statistic made no further comment, so we don’t know if its absurdity escaped them or not. That three college professors who specialize in business metrics resorted to this kind of number, though, speaks poorly of the state of the art in IS measurement. I certainly didn’t do anything like that in my new book, Bob Lewis’s IS Survival Guide from Macmillan Computer Publishing (nor would I ever stoop to shamelessly plugging it in this column).

As we found last week, we have plenty of measures to choose from, all internal ones that tell us how good our processes are compared to internal baselines or external benchmarks. What we lack are external measures that assess the value we create for the enterprise. The measures we have tell us, to borrow a phrase, whether we’re doing things right, but not whether we’re doing the right things.

We do have one external measure at our disposal. The cost of technology is depressingly easy to measure, and our detractors gleefully proclaim it during budget season. But the value we create? That’s a lot tougher.

The purpose of any measurement system is improvement (ignoring its important use in political self-defense). The point of calculating the value we deliver to the enterprise is to help us increase it. How do we create useful measures of value? It’s tough. At the highest level, the formula for calculating value is Bang per Buck. We know how to measure the buck part, which leaves the bang as the part we need to measure. Start by listing the major categories of benefit we provide:

  • Capabilities needed so the company can achieve its strategic goals.
  • Capabilities needed for effective marketing efforts.
  • Fully automated (and therefore high-efficiency) processes.
  • Capabilities needed by redesigned processes.
  • Capabilities for improving communications with customers and suppliers.
  • Capabilities for improving internal communications.
  • Capabilities that allow individual employees to be more effective in their jobs.

See a trend? Except for the rare situation that allows for complete process automation, the value we deliver is capabilities. They’re enablers – necessary but not sufficient conditions for success. To measure the value we deliver, we need to understand how to measure the value of a capability when that capability may or may not be used effectively.

How will we go about that? In principle, we need to list every contributor to success in each of these categories, then assign a weighting factor to each of them that reflects its relative importance or contribution.
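To make the arithmetic concrete, here’s a minimal sketch of what that weighting scheme might look like. Every name, weight, score, and cost in it is a hypothetical placeholder, not a measurement; the scheme is simply the bang-per-buck formula with the bang expanded into weighted contributors:

    # Toy illustration of "bang per buck": weight each contributor to value,
    # score it, and divide the weighted total by what it cost.
    # All names, weights, scores, and costs below are hypothetical.

    contributors = [
        # (contributor, relative weight, delivered score 0-10)
        ("Capabilities for strategic goals",      0.30, 7),
        ("Capabilities for redesigned processes", 0.25, 6),
        ("Customer/supplier communications",      0.20, 8),
        ("Internal communications",               0.15, 5),
        ("Individual employee effectiveness",     0.10, 6),
    ]

    spend_in_millions = 4.2  # the "buck" part: easy to measure

    bang = sum(weight * score for _, weight, score in contributors)
    value = bang / spend_in_millions

    print(f"weighted bang:  {bang:.2f}")
    print(f"bang per buck:  {value:.2f} per $1M spent")

The hard part, of course, isn’t the arithmetic. It’s deciding what the weights and scores ought to be when what we deliver are capabilities that may or may not be used effectively.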

Great theory. Can we turn it into practice?

Oh, gee, we’re about out of space. Too bad … you’ll have to tune in next week to read the next installment.

Technical people, such as programmers, engineers, and scientists, have gained a reputation among nontechnical folk as poor communicators. Most of the problem arises not from poor communications skills but from an excess of them. Tech-folk — the real ones, not the jargon-meisters who substitute neologisms designed to impress the rubes for actual knowledge — assign precise meanings to precise terms to avoid the ambiguity that marketing professionals (for example) not only embrace, but sometimes insist on.

Sometimes precision requires complex mathematics, because English, evolved to handle prosaic tasks like describing the weather, explaining how to harvest crops, and insulting that ugly guy from the next village, isn’t quite up to the task of explicating the nature of a 10-dimensional universe. That’s why many physicists communicate poorly with the innumerate.

Other times precision simply requires a willingness to make distinctions. Take, for example, the words “theory” and “hypothesis.” Most people use theory to mean “an impractical idea,” and hypothesis to mean “a brilliant insight” (“hypothesis” has more syllables so it must be more important). Scientists, in contrast, know that theories have been subjected to extensive testing, and can be used to address real-world problems. It’s hypotheses that are simply interesting notions worthy of discussion and (maybe) testing.

This is a distinction worthy of a manager’s attention, since a lot of our responsibilities boil down to being a broker of ideas, figuring out which ones to sponsor or apply and which to reject or ignore. Last week’s column dealt with how you can assess relatively untested ideas — business hypotheses, if you like. This week we’ll cover the harder question of how to deal with some of the well-worn thoughts that, while popular, may still be poor choices for your department, and may even be downright wrong, no matter how widely used.

Your first step in assessing an idea that’s in wide use (assuming it’s applicable to one of your priority issues) is to show some respect. Keep your ego out of it. Most of us have an ego-driven tendency toward what scientists would call Type 1 and Type 2 errors.

We make Type 1 errors — rejecting good ideas — through our unwillingness to admit that someone could think of something we can’t instantly understand. Remember, lots of smart people have applied these ideas, so they’re unlikely to be an example of mass stupidity. If the idea may apply to your situation, make sure you understand it — don’t reject it through Argument from Personal Incredulity (a term borrowed from the evolutionary biologist Richard Dawkins and discussed at length last week).

Our egos also lead us to the opposite problem, by the way. We commit Type 2 errors — accepting bad ideas — through our desire to be the one to find and sponsor something new and useful.

Next step: Make sure the idea has been tested and not simply used a lot. Businesses survive lots of screwy notions. Using and surviving an idea doesn’t mean it led to valuable results. Look for business outcomes, not warm fuzzies. (In the world of science, psychotherapy has received extensive criticism on the same grounds.)

Your last step is to look at the idea’s original scope. Well-tested scientific theories are rarely invalidated. Instead, as with Newtonian physics (which doesn’t work in quantum or relativistic situations), scientists discover boundaries outside which they don’t apply. Well-tested business ideas also may fail when applied outside their scope. As an example, Total Quality Management (TQM) is unsurpassed at perfecting manufacturing processes, where quality consists of adherence to measurable specifications. TQM’s successes outside the factory, however, have been spotty.

One more thought: Have enough self-confidence to respect your own expertise. Doing something because the experts say so is as miserable an excuse as “I was just obeying orders.”

Don’t worry — if you need an expert to back up the course of action you’ve chosen you can always find a tame consultant willing to recommend it … for a small fee, of course.