People get all excited about the darndest things.

I know otherwise normal people who froth at the mouth when they hear me say the millennium starts Jan. 1, 2000. No, they insist angrily, it begins Jan. 1, 2001. Don’t I know anything?

Well yes, I do. I know it’s more a matter of opinion than the hard-liners think. Why? Let’s begin with a startling realization: Decades and centuries don’t line up!

Decades are named for their first year, and their years run from 0 through 9, so the 1990s begin with the year 1990 and end Dec. 31, 1999. Very few people claim the year 2000 is part of the 1990s.

Centuries number from 1 through 100. That makes sense — this is, after all, the 20th century, so the year 2000 had better be a part of it.
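If you like your arithmetic executable, here's a minimal sketch of the mismatch (a toy illustration of the two naming conventions above, nothing more):

```python
# Toy illustration: decade and century labels don't roll over together.

def decade(year: int) -> int:
    """Decades are named for their first year: 1990 through 1999 are 'the 1990s'."""
    return year - year % 10

def century(year: int) -> int:
    """Centuries run from year 1 through year 100: 1901-2000 is the 20th."""
    return (year - 1) // 100 + 1

for y in (1999, 2000, 2001):
    print(f"{y} -> the {decade(y)}s, century {century(y)}")
# 1999 -> the 1990s, century 20
# 2000 -> the 2000s, century 20   (new decade, same old century)
# 2001 -> the 2000s, century 21
```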

The question of when the millennium begins, then, all boils down to this: Does it begin with a new decade or a new century? I say it starts with the new decade, in 2000. You’re free to wait until the new century begins, but I’m guessing you’ll miss an awesome party on Dec. 31, 1999.

And you won’t get to attend one the following year, because the world will, of course, end in the year 2000, destroyed by ubiquitous computer failures.

Just kidding. As it always does, the world will muddle through, saved by a mixture of planning, hard work, and improvisation.

I call this column the IS Survival Guide because survival is quite an accomplishment for the working CIO. Surviving the year 2000 will be an even bigger one.

Two big myths surround the year-2000 problem. The first is that it’s a bug. The second is that it’s a mess because somehow the end of the millennium snuck up on unwary CIOs all over the world.

Let’s explode these myths right now so you can focus on solving the problem instead of avoiding the blame.

The way we encode dates, or at least used to encode dates, was an intelligent design decision back in the 1960s and 1970s when in-house and commercial programmers wrote most of our legacy systems. Storage — both RAM (we called it “core memory” back then) and disk — cost lots of money, and the best programmers were those who could squeeze the most performance into the smallest computing footprint. Saving 2 bytes per date field made all kinds of business sense, and nobody figured these systems would have to last three decades or more.
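To make that design decision concrete, here's a hedged sketch of the convention (illustrative only, not any particular system's code): a six-character YYMMDD field saves two bytes over YYYYMMDD, and comparing those fields as strings works perfectly right up until the century rolls over.

```python
# A sketch of the legacy two-digit-year convention, not any real system's code.
from datetime import date

def encode_yymmdd(d: date) -> str:
    """Six-character date field: two bytes cheaper per field than YYYYMMDD."""
    return d.strftime("%y%m%d")

issued = encode_yymmdd(date(1999, 6, 15))  # "990615"
due    = encode_yymmdd(date(2000, 6, 15))  # "000615"

# String comparison sorted dates correctly for three decades...
print(issued < due)  # False: "000615" sorts before "990615"
```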

Those legacy systems are still running, either because we failed at our grandiose replacement projects (I’ve seen several of these) or because there simply has been no compelling business reason to replace systems that work just fine.

That is, it really is a feature, not a bug, and it proves once again that no good deed ever goes unpunished.

Here’s who will be punished: You, for not starting to fix the problem several years ago. And it isn’t entirely your fault.

I remember asking in 1994, when just a few worriers had started writing about the subject, whether we had any year-2000 problems. It didn’t matter. We had a tight budget, had just reduced staffing 10 percent to help the company improve its short-term profitability, and had the usual laundry list of urgent projects. The millennium would just have to wait a year or two until it became urgent.

Business has a short-term focus because Wall Street drives business strategy, and Wall Street insists on quarter-by-quarter earnings improvement. Fixing year-2000 software problems adds no new value, so until the problem reached crisis proportions last year, few companies bothered to spare any resources to fix it.

There’s plenty of blame to spread around, but let’s not. Instead, next week, we’ll look at some lessons we can learn from this fiasco.

All manner of experts claim to know better than people who do real work. Consultants such as myself have broader exposure to ideas and practices. Executives see the big picture more clearly. And accountants understand the financial realities far better than factory workers, who only know that without a new forklift work will stop when the old one breaks down.

Well, consultants really do have broader exposure, executives do see the big picture more clearly, and accountants do understand the numbers. And we all ask the people who do real work to respect our knowledge, expertise, and perspective.

So why do so few of us return the favor?

We’re going to spend one more week critiquing Paul Strassmann’s thesis that spending on IT hasn’t generated any economic returns. Strassmann, you’ll recall, has amassed a daunting array of financial statistics, which he’s sliced and diced more ways than a Vegematic can handle potatoes, all without finding any correlation between financial improvement and IT spending.

I’ve been contending that he’s the kid looking for his lost quarter under the streetlight even though he lost it a block away, because “the light is better here.”

Here’s one place Strassmann hasn’t looked: He hasn’t talked to people who actually use computers to do their jobs. I wonder why not?

People who use computers to do their jobs aren’t misguided children or poor deluded fools. They’re smart people. If they tell you they do their jobs better with a particular technology than without it, they’re more likely to be right than a number-cruncher who’s never talked to them.

Here’s another problem with Strassmann’s analytical approach: He tries to correlate “IT spending” with various measures of financial return. “IT spending” is an undifferentiated blob. Let’s break it down into its components.

In typical organizations, 70% of IT spending goes to the data center and systems maintenance. These don’t deliver new value. They maintain value you’ve already gained. So by definition only 30% of IT spending has any chance of yielding further financial improvements.

It’s well known that only 30% of all IS/IT projects are satisfactorily completed. That means that while 30% of IT spending tries to create new value, only 30% of that 30%, or 9% of the total, has the chance. I wonder what conclusions Strassmann would reach if he re-analyzed his data using only the IT spending expected to deliver value.
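The back-of-the-envelope arithmetic, using the column's own figures (a sketch, assuming the 70% and 30% estimates hold):

```python
# Back-of-the-envelope arithmetic from the figures above.
new_value_share      = 0.30  # 70% of IT spending maintains existing value
project_success_rate = 0.30  # only 30% of projects satisfactorily completed

print(f"{new_value_share * project_success_rate:.0%}")  # 9%
```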

Because so little spending delivers value in the end, Strassmann claims we’re aiming at the wrong targets, providing technology for technology’s sake. I’d contend the problem is with our technique, not our aim. It’s in the hard work of project management and system implementation that we goof up, not in our ability to link projects to business strategy.

Now here’s something remarkable. I re-analyzed some of Strassmann’s data, reconstructing the numbers as best I could from a graph plotting IT spending per employee against return on equity (ROE) for 20 companies in “the food industry.” As expected, the regression analysis showed no correlation.

Then I tossed out four outliers — data points clearly outside the pack. The new regression showed a strong, statistically significant correlation between IT spending and ROE. For you statisticians, R² = 0.28 at a 0.033 level of significance. The slope (increase in ROE per $1,000 spent per employee) is about 1.5%.
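For the curious, here's a sketch of that re-analysis procedure. I can't reprint the underlying figures, so the 20 data points below are synthetic stand-ins; only the method (fit, drop the four largest residuals, refit) mirrors what I did:

```python
# Method sketch only: the data are synthetic stand-ins, not Strassmann's
# figures, which I can't reproduce here from the original graph.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
spend = rng.uniform(1, 10, 20)            # IT spending, $K per employee (synthetic)
roe = 1.5 * spend + rng.normal(0, 3, 20)  # ROE, %, loosely linear (synthetic)
roe[[3, 7, 12, 18]] += 30                 # plant four obvious outliers

def summarize(x, y):
    fit = stats.linregress(x, y)
    return fit.slope, fit.rvalue ** 2, fit.pvalue

print("all 20 points:    slope=%.2f  R2=%.2f  p=%.3f" % summarize(spend, roe))

# Drop the four points with the largest residuals from the first fit, then
# refit -- the step that turned "no correlation" into a significant one.
first = stats.linregress(spend, roe)
residuals = np.abs(roe - (first.intercept + first.slope * spend))
keep = np.argsort(residuals)[:-4]
print("outliers removed: slope=%.2f  R2=%.2f  p=%.3f" % summarize(spend[keep], roe[keep]))
```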

This new analysis no more proves that IT spending provides value than Strassmann’s analysis disproves it. Since the original analysis lumps together supermarkets, agribusiness conglomerates, pet food suppliers, and a tobacco company, the whole exercise is just a tad dubious.

But when the inclusion or exclusion of just four companies makes the difference between no correlation and strong correlation, Strassmann’s conclusions must be taken with at least a grain of salt.

Or pepper, if you like spicier stories.