Not too long ago, some expert or other claimed that overcoming the market dominance of an initial entrant is virtually impossible. Interesting theory.

As I recall, at least a half-dozen colleagues quoted this “first mover” theory as an important business insight. It was pretty convincing. The only reason I’m skeptical is that I can’t think of a single example where the theory holds, while the entire history of our industry consists of ugly little facts that disprove it.

The early dominators – the Apple II computer, CP/M operating system, Macintosh GUI, Epson printer, WordStar word processor, VisiCalc spreadsheet, dBase II database – have all either vanished completely or retreated to being niche players. Often, their immediate successors have fallen by the wayside as well. Only NetWare has so far managed to maintain its early market dominance, and its continued success is far from a sure thing.

As my grandmother said in a different context, what you say is less important than how you say it. This explains the influence of many experts and self-anointed visionaries.

As your company’s technology leader, you’re supposed to be its technology visionary (in your spare time, when you’re done tuning network performance). Today, I’m going to reveal the secret to success: read some old science fiction. It doesn’t have to be good science fiction. It just has to be science fiction – not fantasy, or some other related genre. Want proof?

Wearable computers: I’ve read the occasional pundit sound tremendously visionary by describing these gadgets. The main unit may be flexible and adhere to the skin, or it may be a pouch on the belt. The user interface is either a direct jack into the brain, or it’s a heads-up display built into a pair of glasses. Whatever the case, the pundit describes it as the Next Big Deal.

Big Deal. I first read about wearable computers sometime around 1966 in a short story whose plot was thoroughly forgettable. The human received output through bone conduction and input by subvocalizing – pretty good, practical ideas even now.

Wearable computers are, then, a 30-year-old idea.

Information Everywhere: We all know of at least one industry leader who presents this as a daring vision. Too bad Poul Anderson described the same notion over 20 years ago. In this case, the plot revolved around a society in which all but the dispossessed enjoyed the services of a portable, AI-based interface to a wireless ubiquitous information network. Society’s untouchables couldn’t afford the technology, and were forever relegated to poverty and the disdain of the higher classes.

The idea of information haves and have-nots isn’t, then, exactly new.

Replicators: About 30 years ago, I read a series of short stories about a space station run by some wizard-caliber engineers. One of their inventions was a replicator, similar to that shown on Star Trek around the same time.

What was remarkable was the author’s description of complete economic collapse resulting from the invention. Since a replicator could replicate replicators, the device itself cost next to nothing. And with replicators, once someone created anything of value, an infinite number of additional copies immediately spread through society without any compensation going to the inventor.

This, to me, sounds a lot like our information economy. Replication technology, in the form of the COPY command, is available to huge numbers of individuals beyond any realistic hope of copyright enforcement.

Why do you need to be a technology visionary? That’s a topic for a future column. For now, I’ll just present that as an opinion. If you agree and want help, you can get it from high-priced “experts”, or you can read the literature of the future … the literature in which Robert Heinlein, in the 1930s, predicted a post-World War II nuclear stalemate between the United States and Russia.

And science fiction generally has, as a fringe bonus, a plot.

Back when I studied electric fish I learned a valuable lesson from my research advisor: not all criticism in my professional life would be phrased with concern for my ego’s ongoing desire to inflate. The following episode, which also provided a second invaluable insight, led me to this conclusion.

My research involved a quantitative analysis of behavior. After some nasty FORTRAN programming and some manual charting and graphing, I showed it to my advisor.

“What’s it mean?” he asked. My answer involved the mathematics of information theory, capped by a precise measure of the communication between two interacting fish. Nobel Prize material.

“But what’s it mean?” he asked again, impatiently. When he failed to hear a satisfactory answer he told me to stop bothering him until I could explain it in English. I knew the number, but not what it meant.

Just for giggles, let’s take the same potshot at a basic number used in the specs for information systems: response time.

Oh, the concept is fine. Unfortunately, we usually measure it in units of time, which demonstrates our lack of insight into its meaning.

“The benchmark transaction completed in 0.782 seconds,” we might say. Or, in comparing brands and models of personal computer, “Model A completed our benchmark task in 1 minute 34 seconds, clearly outperforming Model B which completed it in 1 minute 51 seconds.”

Precise measures, yes, but they mean very little.

The reason? Look from a customer-value perspective, remembering that “Customers define value” is the first of the three rules of management. Response time isn’t a continuous numeric quantity. Customer-valued response time (for most of us) has only six values, and none of them is a number. Those values are Eye-blink, Quick, Pause, Delay, Break, and Overnight. To elaborate:

An eye-blink is, from a customer perspective, instantaneous. Never mind that sufficiently sensitive instruments can measure it. It takes place faster than a person’s ability to notice. Eye-blink response time is perfect. You can’t improve on it.

Quick processes happen fast enough that users don’t pay attention to them. The wait is noticeable, but not obtrusive. When we talk about “sub-second response time” we’re really saying we need it to be quick.

A pause breaks our rhythm, but doesn’t let us do anything else useful. Lots of computer processes cause us to pause. Printing a one-page memo results in a pause. So can saving a long document to disk or downloading a small, well-constructed Web page from an adequately powered server via modem. Likewise database updates when response time is bad. Pauses are awfully annoying. Every pause requires the exercise of patience, and each of us has only a fixed amount of patience available for our use each day.

Delays take longer than pauses – enough longer to do something useful. We can take a gulp of coffee during a delay. Sharpen a pencil. Dial a telephone number. File a document. In many respects a delay, while longer than a pause, tries our patience less. Loading nearly any application in Windows causes a delay. So do most Web pages.

Breaks are even better than delays. You can refill your coffee mug during a break. Ask your boss a quick question. File several documents. Booting your computer gives you a break. So does faxing a document through your fax/modem, or running a report off a medium-sized database.

Anything longer than a break may as well wait until you leave for the night. Start it up, go home, see the result the next morning. If you want to be daring, start it up before you leave for a meeting instead.

That’s it. Six possible response times. You haven’t improved response time until you cross a boundary from one category to another.
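The six-category scale above can be sketched in code. Note that the numeric thresholds here are illustrative assumptions of my own; the column deliberately defines each category by what a person can do during the wait, not by any clock value.

```python
# Map a measured response time (in seconds) to one of the six
# customer-perceived categories. The boundary values are assumed for
# illustration only -- the real boundaries are behavioral, not numeric.

def perceived_response(seconds: float) -> str:
    if seconds < 0.1:        # faster than a person can notice
        return "Eye-blink"
    if seconds < 1.0:        # noticeable, but not obtrusive
        return "Quick"
    if seconds < 10.0:       # breaks your rhythm; too short to do anything else
        return "Pause"
    if seconds < 60.0:       # long enough to sip coffee or sharpen a pencil
        return "Delay"
    if seconds < 15 * 60:    # long enough to refill the mug or ask a question
        return "Break"
    return "Overnight"       # start it, go home, check it in the morning

print(perceived_response(0.782))  # the benchmark transaction from the text
```

By this measure, shaving the benchmark transaction from 0.782 to 0.5 seconds changes nothing the customer values; both land in the same category.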

And that’s not always an improvement, because sometimes, you just need a break.