Bob Lewis’s IS Survival Guide (Macmillan Computer Publishing) will be on the shelves by the time this column appears. For everyone who reads this column and wonders whether it’s worth buying I have only one thing to say: My kids need new shoes.
If you do buy the book and like it, tell your friends. If you don’t like it … pretend I’m your boss. Act like it’s the greatest thing ever, even though you know better. I have plenty of reality in my life already, and lots of friends and colleagues who minimize any risk of ego-inflation.
Of everything in the book, the chapter on measurement was the hardest to write. Measurement doesn’t lend itself to lively prose under the best of circumstances, and even among IS professionals the plague of innumeracy is rampant.
Worst of all, the state of the art in IT measurement is dismal. A conference in which I recently participated reinforced that conclusion.
The good-news part of the story is that we know how to understand the performance of data center operations. We have well-developed measures to help us understand how reliable our systems are, how well they perform, and how much they cost to operate.
Not only do we know how to measure operations, several professional benchmarking companies have extensive performance databases, so you can compare yourself to the rest of your industry, or to business as a whole. If you’re sub-standard, you can set up improvement programs to make yourself better. If, on the other hand, you’re ahead of industry averages you can … well, you can still establish improvement programs, because you always want to improve, don’t you?
Unlike baselining, benchmarking really doesn’t do a lot for you. There are only two reasons for benchmarking, both of them social. The first is to defend yourself against executive-suite attacks (“We’ve just undertaken a benchmarking exercise and are ahead of industry averages, so QUIT YOUR GRIPING, FRED!”). The second is to break through internal resistance to change. It’s as common in IS as anywhere else for employees to figure they’ve already done as much as possible, so a benchmarking study that demonstrates sub-standard performance can help break through this resistance. (So can establishing a baseline and saying to employees, “I don’t care if we’re good or bad, we’re going to be better next year than this year.”)
Internally, we know how to measure operating costs. How about our contribution to the rest of the business? Well …
We do know how to measure how much process-improvement projects increase productivity. If anyone is willing to make the effort, they can perform a before-and-after productivity analysis of the process being improved.
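The arithmetic behind such an analysis is simple enough to sketch. Here’s a minimal illustration, with entirely hypothetical figures, of measuring a process’s output per labor hour before and after an improvement project (a real study would also have to control for changes in quality, mix, and the other confounding factors discussed below):

```python
# Hypothetical before-and-after productivity analysis.
# All figures are invented for illustration only.

def productivity(units_processed, labor_hours):
    """Output per labor hour for the process under study."""
    return units_processed / labor_hours

# Before the project: 12,000 claims handled in 4,800 labor hours.
before = productivity(12_000, 4_800)

# After the project: 15,000 claims handled in 4,000 labor hours.
after = productivity(15_000, 4_000)

pct_gain = (after - before) * 100 / before

print(f"Before: {before:.2f} claims/hour")   # 2.50 claims/hour
print(f"After:  {after:.2f} claims/hour")    # 3.75 claims/hour
print(f"Gain:   {pct_gain:.0f}%")            # 50%
```

The measurement itself is easy; as the next paragraph argues, attributing the gain to any one ingredient of the project is the hard part.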
This doesn’t answer the question we’re asking. Process-improvement projects include not only new technology, but also process redesign, culture change, usually a new business model, and sometimes a new philosophy of leadership. What part of the productivity increase comes from information technology? It isn’t a meaningful question — technology is integral to the new process, not a bolted-on automator of activities you’d otherwise do manually.
Assessing the contribution of technology to productivity is what we’re best at, yet even there we have no adequate framework — only a way to measure the impact of a specific process-improvement project. We have no idea at all how to measure the value information technology creates. Instead, silly notions like the Gartner Group’s Total Cost of Ownership and weak analyses like Paul Strassmann’s The Squandered Computer (both critiqued extensively in this space) still get a lot of attention.
It’s time for us to get a handle on this issue. If measurement of the value we create is important, it’s time to get on with it. If not, it’s time to formulate a clear-headed debunking of the whole concept.
Either way, we need to do better. Next week, we’ll start exploring how.