My first professional contact with InfoWorld was in 1993. I’d just installed a user-friendly front end to CompuServe when along came the Gartner Group’s total cost of ownership (TCO) model — at the time, well over $8,000 per year for a single PC.

Inflamed with righteous anger and (more to the point) itching to use my new CompuServe software, I wrote a less-than-diplomatic guest column for InfoWorld that began, “Does anyone else find the Gartner Group annoying?” and finished, “The definition of an expert here in Minnesota is ‘a guy from the East Coast with slides.’ So I’m expecting Gartner to win without even a chance to debate the issues.”

InfoWorld Opinions Editor Rachel Parker added a thoroughly inflammatory headline, and a few weeks later, the Gartner Group offered me, manager of a 1,000-node network, a chance to debate the issues after all — at its annual symposium, in front of about 600 CIOs and other assorted dignitaries.

Modesty forbids my reporting the one-sided results. Oh, OK. I used Gartner’s accounting methods to calculate the TCO for a day planner. As I recall, it came to well over $4,000 per year. The whole thing was hilarious and left no doubt in the minds of the 600 attendees who was on the side of truth and the American Way.

A week later our chief financial officer, unaware of my newfound status as industry pundit, asked if I’d seen the Wall Street Journal article showing that PCs really cost more than $8,000 per year.

A lot has changed since then. Rachel has become a good friend. I’m writing regularly for InfoWorld and have moved to the consulting side of the industry. And the TCO estimates have inflated even further. Earlier this year I promised to provide an alternative. Starting this week I’m going to do just that.

The most important step in arriving at the right answer, Albert Einstein once pointed out, is asking the right question. Most sources describe Einstein as a pretty bright guy, so we’re going to take his advice.

The question answered by TCO is the aggregate PC/LAN cost to an average company. Not only is this the wrong question, it’s wrong in two different dimensions.

Here’s the first: It measures cost. That’s not a very intelligent thing to measure, as you know if you invest. Do you worry about cost? No, you worry about return.

Businesses don’t spend, they invest. They invest in salaries, benefits, office space, raw materials, and, yes, personal computers and LANs. Businesses expect a return on all of these investments better than what they’d earn by putting the same money in an indexed mutual fund (or some similar measure).

Companies — smart ones at least — don’t cut costs. They cut avoidable costs, just as you do. They cut costs that don’t deliver enough return, just as you sell stocks that don’t perform. And they find lower-cost methods, just like you move to a discount brokerage to reduce trading fees.

The other problem with the TCO question is more subtle. PCs and LANs are a means to multiple ends, and an incomplete means to most of them. TCO lumps together some, but not all, of the costs of three very different kinds of process:

  • Personal productivity and effectiveness: You use your PC to write memos, letters, and reports; develop financial models; do research; maintain your calendar; keep track of contacts; and file and retrieve all kinds of documents.
  • Communications: You use your PC to exchange information with people both inside and outside your company.
  • Company core processes: PCs and LANs are part (but not all) of the computing platform on which you run production systems. These production systems define the work of many employees. This distinguishes them from word processors and spreadsheets, tools for which employees define the work.

The result of adding three partial costs isn’t a useful insight.

It’s just a number.

My kids (Kimberly and Erin, to properly identify the guilty parties) regularly sponsor an event they call Drive Dad Nuts Night (D2N2).

All kids do things that drive their parents nuts, of course. It’s their cunning revenge for all the things parents do to drive them crazy. My kids are no different, except that as a perfect parent I give them no cause for D2N2.

High on the list of things that drive Dad nuts is the need to repeat the same information over and over before it penetrates their consciousness. It’s my own fault: I’ve taken an important principle of communications theory (injecting redundancy into a signal so it can penetrate noise) and inappropriately replaced it with the data design principle of eliminating redundant data.

Repetition is important in a noisy channel, and few channels are noisier than print communications, what with advertisements, news articles, and other columns standing between you and my opinions. Which, of course, is my excuse for revisiting an earlier topic this week.

The subject is the difference between the idea of client/server computing — a software design concept — and distributed computing, which deals with hardware issues.

Most writers in the trade press don’t seem to worry about this distinction — even in these hallowed pages. And it isn’t a mere semantic nicety. It gets to the heart of every current hot issue in computing. If you get it wrong you’ll make bad decisions about important, practical, day-to-day problems.

Here’s a quick recap for readers who missed my previous columns on the subject. (See “Only circular reasoning proves C/S systems cost less than mainframes,” Feb. 10, page 62.) “Client/server” refers to software designs that partition applications into two or more independent, communicating modules. Modern designs use at least three partitions: a presentation module that handles all user-interface logic, a business logic module that takes care of data processing and integration issues, and a DBMS that handles all details of data management. Three-partition designs — three-tier architectures — are themselves giving way to n-tier layered architectures as software designers gain experience and design theory gains subtlety.
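To make the partitioning concrete, here’s a minimal sketch in Python. All of the class and method names are hypothetical, invented for illustration; the point is only that each partition talks to the next through an interface, so any partition can move to different hardware without the others changing.

```python
# A minimal sketch of a three-tier partition (hypothetical names throughout).
# Each tier knows only the interface of the tier below it.

class DataTier:
    """DBMS partition: owns all details of data management."""
    def __init__(self):
        self._rows = {}

    def save(self, key, record):
        self._rows[key] = record

    def load(self, key):
        return self._rows.get(key)


class BusinessTier:
    """Business-logic partition: data processing and integration rules."""
    def __init__(self, data):
        self._data = data

    def open_account(self, account_id, opening_balance):
        if opening_balance < 0:
            raise ValueError("opening balance cannot be negative")
        self._data.save(account_id, {"balance": opening_balance})

    def balance(self, account_id):
        return self._data.load(account_id)["balance"]


class PresentationTier:
    """Presentation partition: all user-interface logic."""
    def __init__(self, logic):
        self._logic = logic

    def show_balance(self, account_id):
        return f"Account {account_id}: ${self._logic.balance(account_id):,.2f}"


# Wiring the tiers together. Nothing here says where each tier runs:
# one process, one mainframe, or three separate machines all work.
ui = PresentationTier(BusinessTier(DataTier()))
```

Notice that the wiring line is the only place the tiers meet; swap the constructor arguments for network proxies and the same code distributes across machines.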

A client/server architecture doesn’t care where each partition executes; the best architectures go further and make each partition portable. Which is why the “mainframe-vs.-client/server” controversy is so nonsensical: It’s easier to create n-tier client/server applications in which every partition executes on the mainframe than it is to build them with each partition executing on a separate piece of hardware.

“Distributed computing,” in contrast, refers to hardware designs that facilitate spreading the computing load over multiple communicating computers. Client/server applications are easier to distribute, of course, than software monoliths, but it’s certainly as possible (although not yet commercially viable) to deploy symmetrical multiprocessing across a LAN as it is to deploy it across a system bus.

Think about your business goals for client/server and distributed architectures. Lots of us, blurring these two concepts, expected client/server systems to cost less than mainframes by running on cheaper hardware. Since client/server doesn’t speak to hardware, this isn’t a meaningful goal. The point of client/server architectures is to reduce costs by maximizing code reuse.

It’s distributed computing that ought to reduce hardware costs, and it can, if a distributed design fits your application load better than the alternatives.

Let’s apply the distinction between client/server computing and distributed architectures to Web-based systems. You often hear people describe the browser as a “paper-thin client” when it isn’t a client at all. The supposed client’s thinness is described as a “good thing.” Why? It’s portable! And you don’t have messy software installations to perform on the desktop! And it’s … well, it’s thin!

Regarding portability: 3270 emulators are portable, too (and thin). So what? And software installations don’t have to be messy if you’re careful. Thinness? You have processing power to burn on the desktop.

Browsers are incomplete clients. They format the screen and accept keystrokes, but beyond simple controls such as drop-down lists they can’t handle the rest of presentation logic: screen sequencing and data validation.

And that’s why Web-based forms are so slow and irritating to use. You’re waiting for a host computer to notice you, process a whole screenful of input, and send a response.
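The validation a complete client would handle locally is not complicated. Here’s a minimal sketch in Python (field names and rules are hypothetical): the user gets an answer immediately, with no round trip to the host.

```python
# Hypothetical sketch of presentation-logic data validation,
# the kind of check a complete client performs locally before
# anything crosses the wire to the host.

def validate_order_form(fields):
    """Return a list of errors the user can fix immediately."""
    errors = []

    # Required-field check.
    if not fields.get("customer_id", "").strip():
        errors.append("customer_id is required")

    # Type and range check.
    try:
        quantity = int(fields.get("quantity", ""))
        if quantity <= 0:
            errors.append("quantity must be a positive integer")
    except ValueError:
        errors.append("quantity must be a positive integer")

    return errors
```

With a browser of this era, every one of those checks instead waits on the server, which is exactly the sluggishness described above.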

We ought to know better.