Technology … all successful technology … follows a predictable life cycle: Hype, Disillusionment, Application.

Some academic type or other hatches a nifty idea in a university lab and industry pundits explain why it will never fly (it’s impossible in the first place, it won’t scale up, it’s technology-driven instead of a response to customer demand … you know the predictable litany of nay-saying foolishness).

When it flies anyway, the Wall Street Journal runs an article proclaiming it to be real, and everyone starts hyping the daylights out of it, creating hysterical promises of its wonders.

Driven by piles of money, early adopters glom onto the technology and figure out how to make it work outside the lab. For some reason, people express surprise at how complicated it turns out to be, and become disillusioned that it didn’t get us to Mars, cure cancer, and repel sharks without costing more than a dime.

As this disillusionment reaches a crescendo of I-told-you-so-ism, led by headline-grabbing cost-accountants brandishing wildly inflated cost estimates, unimpressed professionals figure out what the technology is really good for, and make solid returns on their investments in it.

Client/server technology has just entered the disillusionment phase. I have proof – a growing collection of recent articles proclaiming the imminent demise of client/server computing. Performance problems and cost overruns are killing it, we’re told, but Intranets will save it.

Perfect: a technology hitting its stride in the Hype phase will rescue its predecessor from Disillusionment.

What a bunch of malarkey.

It’s absolutely true that far too many client/server development projects run way over the originally estimated cost. It’s also true that most client/server implementations experience performance problems.

Big deal. Here’s a fact: most information systems projects, regardless of platform, experience cost overruns, implementation delays, and initial performance problems, if they ever get finished at all. Neither the problem nor the solution has anything to do with technology – look, instead, to ancient and poorly conceived development methodologies, poor project management, and a bad job of managing expectations.

I’m hearing industry “experts” talk about costs three to six times greater than for comparable mainframe systems – and these are people who ought to know better.

I have yet to see a mainframe system that’s remotely comparable to a client/server system. If anyone bothered to create a client/server application that used character-mode screens to provide the user-hostile interface typical of mainframe systems, the cost comparison would look very different. The cost of GUI design and coding is being assigned to the client/server architecture, leading to a lot of unnecessary confusion. But of course, a headline reading, “GUIs Cost More than 3278 Screens!” wouldn’t grab much attention.

And this points us to the key issue: the client/server environment isn’t just a different kind of mainframe. It’s a different kind of environment with different strengths, weaknesses, and characteristics. Client/server projects get into the worst trouble when developers ignore those differences.

Client/server systems do interactive processing very well. Big batch runs tend to create challenges. Mainframes are optimized for batch, with industrial-strength scheduling systems and screamingly fast block I/O processing. They’re not as good, though, at on-line interactive work.

You can interface client/server systems to anything at all with relative ease. You interface with mainframe systems by emulating a terminal and “screen-scraping,” by buying hyper-expensive middleware gateways (I wonder how much of the typical client/server cost overrun comes from the need for interfaces with legacy systems?), or by wrestling with the arcane business of setting up LU6.2 program-to-program communication.

And of course, the development tools available for client/server development make those available for mainframes look sickly. Here’s a question for you to ponder: Delphi, PowerBuilder, and Visual Basic all make a programmer easily 100 times more productive than Cobol does. So why aren’t we building the same size systems today with 1/100th the staff?

The answer is left as an exercise for the reader.

Bob Metcalfe has been predicting the imminent collapse of the Internet in these pages. Since your employer looks to you for technical expertise and advice, and since Dr. Metcalfe is a Recognized Industry Pundit (RIP), you’re probably worried about having recommended building that big Web site.
I’ve decided to offer a different perspective on the problem so you can trot out a second RIP to counter the effects of the first. (Also, if Dr. Metcalfe and I quibble in print, you get to gripe about the incestuous nature of the press in Ed Foster’s gripe line, post items in our Forums on InfoWorld Electric (www.infoworld.com), and otherwise feed the liberal media conspiracy.)

Anyhow … the Internet scares people. Commonly described as an anarchic agglomeration of unplanned interconnections, it makes no sense to those who believe central planning is the key to quality.

Many of those same people, Dr. Metcalfe included, also say they believe in the power of laissez-faire capitalist economics. In other words, they believe in the power of Adam Smith’s “invisible hand” that uses market forces to regulate the interplay of independent agents.

From the perspective of general systems theory, this is nothing more than the use of negative feedback loops to create stable systems. (If you’re not familiar with the concept, it just means that inputs listen to outputs, adjusting themselves when the output drifts off course.)

Laissez-faire capitalism says shortages lead to higher prices, which reduce demand, eliminating the shortage. Higher prices also motivate an increase in production capacity, increasing supply, which then reduces prices, increasing demand. The result: A self-regulating system with no need for external controls.
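
To make that feedback loop concrete, here’s a toy simulation of my own, not anything from the column itself: demand falls as the price rises, supply rises as the price rises, and the price listens to the shortage. Every function and number in it is invented purely for illustration.

```python
# A toy negative feedback loop: the price listens to the shortage and
# adjusts until the shortage disappears. All functions and numbers are
# invented for illustration only.

def demand(price):
    return 100.0 - 4.0 * price       # higher prices reduce demand

def supply(price):
    return 20.0 + 6.0 * price        # higher prices attract more capacity

price = 2.0                          # start well below the market-clearing price
for step in range(20):
    shortage = demand(price) - supply(price)   # the "output" the loop listens to
    price += 0.05 * shortage                   # a shortage nudges the price up,
                                               # a surplus nudges it back down
    print(f"step {step:2d}: price = {price:5.2f}, shortage = {shortage:6.2f}")

# The shortage shrinks by half at every step; the price settles at 8.00,
# exactly where supply meets demand, with no external controller in sight.
```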

Why does Dr. Metcalfe, who believes in this kind of self-regulation for the economy, not believe it will work for the Internet? After all, money comes in along with increased demand. Increased demand leads to supply shortages (poor response time). These shortages certainly can result in higher prices. They also can result in more companies getting into the business, and in existing Internet providers increasing the bandwidth they make available. It’s a pretty basic example of the very same kind of self-regulated economic system most cherished by the all-government-regulation-is-bad crowd.

This doesn’t mean the Internet won’t catastrophically fail this year. Laissez-faire capitalism breaks down in several different circumstances. Here are two:

Any time individuals or organizations compete for a common resource, market forces just plain don’t work.

This is called “First pigs to the trough.” It’s also known as the tragedy of the commons. In merry olde Englande, farmers grazed their cattle on public grazing land – the commons. After a while, some farmers figured out that the more cattle they grazed on public land, the more they profited. When all the farmers figured it out, the cattle overgrazed the commons, ruining it.

Market forces don’t regulate use of a commons – market forces ruin it, leading to the need for external regulation by, for example, the government. Regulation isn’t always a bad thing, despite current political cant.

Another very interesting way negative feedback loops (including pure free-enterprise economics) lead to unstable results comes from feedback delays. Bring up your spreadsheet and model the “logistic” equation (a very simple negative feedback system): v(t+1) = k*v(t)*(1-v(t)). Plot it for a hundred values or so, starting with k=1.1 and v=.01. You’ll see a smooth S-shaped curve.

Change k. Push it past 3 and the curve stops settling down, oscillating between two values forever, and the oscillation grows more complicated as k climbs. From roughly 3.6 up to 4 it becomes chaotic, jumping around with no discernible pattern. Push k past 4 and it eventually crashes to extinction. The lesson: Once feedback isn’t immediate, the value of a constant changes not just the scale of a system but its very nature. The results are unpredictable.
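
If you’d rather skip the spreadsheet, here’s the same experiment as a short Python sketch of my own, not something from the column; the particular k values are just convenient illustrations of the regimes described above.

```python
# A spreadsheet-free version of the experiment above: iterate the logistic
# map v(t+1) = k*v(t)*(1 - v(t)) and print where it ends up for a few
# illustrative values of k.

def logistic(k, v=0.01, steps=100):
    """Return the sequence v(0), v(1), ..., v(steps) for the logistic map."""
    values = [v]
    for _ in range(steps):
        v = k * v * (1.0 - v)
        values.append(v)
    return values

for k in (1.1, 3.2, 3.9):
    tail = [round(x, 4) for x in logistic(k)[-5:]]   # the last few iterates
    print(f"k = {k}: last five values {tail}")

# k = 1.1 -> a single steady value (the plateau of the smooth S-curve)
# k = 3.2 -> bounces between two values forever
# k = 3.9 -> chaotic: it jumps around with no repeating pattern
# Push k past 4 and, sooner or later, an iterate overshoots 1 and the
# sequence dives toward minus infinity: the crash to extinction.
```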

(You’ll find other fascinating tidbits like this in the excellent book, A Mathematician Reads the Newspaper by John Allen Paulos.)

So Dr. Metcalfe may be right – the Internet could turn out to be an unstable, chaotic system.

But I doubt it. I have more faith in free enterprise than that.