From the IS Survival Mailbag …

My recent column on the Year 2000 raised both the ire and scholarship of the IS Survivalist community.

Quite a few readers took me to task for proposing that both the “decadist” camp (the millennium ends on the decade boundary … December 31, 1999) and the “centurist” camp (December 31, 2000) have legitimate claims.

A few disagreed with my fundamental premise, insisting 1990 was the last year of the 1980s. To them I respond, “thlppp!” Never let it be said I stray from the high road.

Others explained that decades don’t matter — since there was no Year 0, 2,000 years won’t have passed until midnight, December 31, 2000. Jim Carls wrote to explain why they’re wrong:

“If you look it up in your history book, do the math and assume that historical accuracy is of some importance in defining the start of the Third Millennium Anno Domini, the latest point at which the millennium could start was in 1997 (Herod the Great died in 4 BC). And, according to Stephen Jay Gould (interviewed last week on PBS), the latest possible date was October 23rd. Let’s all bring that up in the next planning meeting!”

I say we start celebrating December 31, 1999 and don’t stop until January 1, 2001. Which means the real Year 2000 crisis will be a severe grape shortage. Vineyards … start planting!

Another group of correspondents took issue with the idea that the two-digit year was a feature, rather than a bug, and that it made good business sense at the time. Quite a few e-mails pointed out that a two-byte integer field, storing a count of days, could have held about 179 years’ worth of dates (65,536 days), avoiding the problem for some time to come. Others questioned how much money programmers saved by not using a four-position year.

The first group would be right if storage were the only issue. Backtrack 25 years, though, and figure out how many iterations of Moore’s Law we have to undo. Computers had, I’d guess, about 1% of today’s processing power. The computation time needed to convert dates to and from integer format would have greatly extended batch processing times, which would have been very expensive. Tim Oxler invites everyone to visit a Web page he put up to discuss this in more detail: http://www.i1.net/~troxler/html/space.html.
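
To put rough numbers on both halves of that tradeoff, here’s a minimal sketch in Python (the 1900 epoch, the day-count layout, and the function names are my own illustration, not anything the correspondents or Oxler proposed): a 16-bit day count really does cover about 179 years, but every record read or written in calendar form then pays for a conversion of exactly the kind those old batch runs could ill afford.

```python
# Minimal sketch, for illustration only: a 16-bit day count covers about
# 179 years, but calendar-format input and output then pay for conversions.
from datetime import date, timedelta

EPOCH = date(1900, 1, 1)  # hypothetical epoch, chosen only for this example

def encode(d: date) -> int:
    """Calendar date to days since EPOCH; fits in an unsigned 16-bit field."""
    days = (d - EPOCH).days
    assert 0 <= days < 2**16, "outside the roughly 179-year window"
    return days

def decode(days: int) -> date:
    """Days since EPOCH back to a calendar date (the per-record work a
    two-digit character year never needed)."""
    return EPOCH + timedelta(days=days)

print(2**16 / 365.2425)            # about 179.4 years of range in two bytes
print(encode(date(1999, 12, 31)))  # 36523
print(decode(36524))               # 2000-01-01, no two-digit rollover
```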

The second group raises an interesting question. Leon Kappelman & Phil Scott answer it at http://comlinks.com/mag/accr.htm. Short version: The savings have been huge, far in excess of even the largest Year 2000 cost estimates.

And then there’s the other point — my contention that the world will muddle through as usual, neither blowing up nor sailing through unscathed. Robert Nee wrote to formulate this more precisely. He points out that the basic laws of supply and demand in a market-based economy predict that for every company that goes bankrupt due to Year 2000 problems there will be others that pick up the slack, both in terms of supplying goods and services, and in terms of employment.

This is a wonderful insight. Yes, lots of companies will fail. Yes, lawyers will file trillions of dollars worth of lawsuits, bayoneting the wounded to make sure as few companies recover as possible. (To the gathering flock of vultures now soliciting Year 2000 whistleblowers I’d like to make a simple comment. I’d like to, but I’m not sure libel laws permit it.)

In the end, though, demand will drive supply. So long as whole industries don’t fail, suppliers that are Year 2000 compliant will buy the bloody remains of those that aren’t, providing enough supply to satisfy demand and enough employment to keep everyone working as they do so.

Which, in turn, hearkens back to another point made frequently here: Many of the best investments in IT are those focused on your company’s survival, whether they’ll deliver measurable returns or not.

Isaac Asimov once told the tale of the world’s greatest surfer, a legend in his own mind, if nowhere else. Tired of hearing him brag, his audience challenged him to demonstrate his skills. So, taking surfboard in hand, he ran to the water’s edge where he stood still, gazing over the waves.

“Why don’t you go in?” taunted the crowd.

His response: “We also surf who only stand and wait.”

Identifying the next big wave is a big challenge in our own industry, too, as is knowing when to start swimming. I alluded to this problem in my Jan. 12 column, talking about the need for CIOs to identify new and promising technologies and to actively search for their potential business impact. (See “If you wait for business needs to drive technology buys, you will fall behind.”) This, I think, is at least as important as responding to requests from business leaders.

This is an important idea. It isn’t, however, as original as I’d thought. I found this out by reading Clayton Christensen’s new book The Innovator’s Dilemma: When New Technologies Cause Great Firms to Fail, which told it first and better.

I find books like this annoying. Christensen came up with my idea years before I did, and had the nerve to research it extensively and develop it into a well-thought-out program for developing and implementing corporate strategy.

How’s a poor columnist supposed to maintain his reputation for original thinking, anyway?

Christensen divides innovation into two categories, sustaining and disruptive. Sustaining innovation improves the products and services delivered to existing markets. Disruptive innovation, in contrast, is initially irrelevant to those markets, but it improves faster than market requirements grow until it can invade a market from below. For example:

Mainframe computers experienced sustaining innovation for years, steadily improving their price-performance characteristics. Minicomputers, less capable, were a disruptive innovation. Completely incapable of handling mainframe chores at first, they found entirely new markets — in scientific computing, shop floor automation, and departmental applications. Companies like Digital and Data General got their start not by competing with IBM (IBM asked, and its customers had no interest in minicomputers at the time) but by finding new markets for their products, markets too small for IBM to care about.

Minicomputers never did overtake mainframes in capacity. They did, however, overtake the requirements of much of the mainframe marketplace, invading from below and draining away a significant share of the market.

Companies miss the opportunities presented by disruptive technologies because they listen to their customers and deliver what those customers want. Disruptive technologies appeal to entirely different (and much smaller) marketplaces at first, so listening to customers is exactly the wrong thing to do.

Now think about how IS organizations deal with disruptive technologies. That’s right, this isn’t just an academic question. This is your problem we’re talking about.

Remember when PCs started floating into the organization? The average CIO saw business executives as IS’s “customers” and delivered what they asked for. PCs held no appeal for those “customers.” PCs were useful to analysts, clerks, and secretaries — an entirely different market, one too clout-free to be visible to the CIO until it was too late.

Eventually, networks of PCs did start solving more traditional information processing tasks, and IS knew less about them than the end-user community.

Right now you’re faced with quite a few potentially disruptive technologies — personal digital assistants, intranets, and computer-telephony integration, to name just three. How do you plan to deal with them?

Here’s one plan, based on ideas from The Innovator’s Dilemma: Charter one or two small, independent groups of innovators. Detach them from IS so they aren’t sidetracked into mega-projects.

Tell them to start small and find ways to make these new technologies beneficial to the company.

And then, most importantly … leave them alone.