My kids (Kimberly and Erin, to properly identify the guilty parties) regularly sponsor an event they call Drive Dad Nuts Night (D2N2).

All kids do things that drive their parents nuts, of course. It’s their cunning revenge for all the things parents do to drive them crazy. My kids are no different, except that as a perfect parent I give them no cause for D2N2.

High on the list of things that drive Dad nuts is the need to repeat the same information over and over before it penetrates their consciousness. It’s my own fault: I’ve taken an important principle of communications theory – injecting redundancy into a signal so it can penetrate noise – and inappropriately replaced it with the data design principle of eliminating redundant data.

Repetition is important in a noisy channel, and few channels are noisier than print communications, what with advertisements, news articles, and other columns standing between you and my opinions. Which, of course, is my excuse for revisiting an earlier topic this week.

The subject is the difference between the idea of client/server computing — a software design concept — and distributed computing, which deals with hardware issues.

Most writers in the trade press don’t seem to worry about this distinction — even in these hallowed pages. And it isn’t a mere semantic nicety. It gets to the heart of every current hot issue in computing. If you get it wrong you’ll make bad decisions about important, practical, day-to-day problems.

Here’s a quick recap for readers who missed my previous columns on the subject. (See “Only circular reasoning proves C/S systems cost less than mainframes,” Feb. 10, page 62.) “Client/server” refers to software designs that partition applications into two or more independent, communicating modules. Modern designs use at least three partitions: a presentation module that handles all user-interface logic, a business logic module that takes care of data processing and integration issues, and a DBMS that handles all details of data management. Three-partition designs — three-tier architectures — are themselves giving way to n-tier layered architectures as software designers gain experience and design theory gains subtlety.
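
If the partitioning sounds abstract in prose, here’s a minimal sketch of it, written in TypeScript for convenience; every class and name is mine, invented for illustration, not drawn from any particular product.

    // Data-management partition: stands in for whatever DBMS you use.
    class OrderStore {
      private orders: { customer: string; quantity: number }[] = [];
      insert(customer: string, quantity: number): number {
        this.orders.push({ customer, quantity });
        return this.orders.length; // order number
      }
    }

    // Business-logic partition: data-processing and integration rules live here.
    class OrderLogic {
      constructor(private store: OrderStore) {}
      placeOrder(customer: string, quantity: number): number {
        if (!Number.isInteger(quantity) || quantity <= 0) {
          throw new Error("Quantity must be a positive whole number.");
        }
        return this.store.insert(customer, quantity);
      }
    }

    // Presentation partition: all user-interface logic and nothing else.
    class OrderScreen {
      constructor(private logic: OrderLogic) {}
      submit(customer: string, quantity: number): void {
        const orderNumber = this.logic.placeOrder(customer, quantity);
        console.log(`Order ${orderNumber} accepted for ${customer}.`);
      }
    }

    // Wire the partitions together and take an order.
    new OrderScreen(new OrderLogic(new OrderStore())).submit("A. Reader", 3);

Notice that the sketch says nothing about where any partition runs.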

Not only does a client/server architecture not care where each partition executes, but the best architectures make each partition portable. Which is why the “mainframe-vs.-client/server” controversy is so nonsensical: It’s easier to create n-tier client/server applications in which every partition executes on the mainframe than it is to build them with each partition executing on a separate piece of hardware.

“Distributed computing,” in contrast, refers to hardware designs that facilitate spreading the computing load over multiple communicating computers. Client/server applications are easier to distribute, of course, than software monoliths, but it’s certainly as possible (although not yet commercially viable) to deploy symmetrical multiprocessing across a LAN as it is to deploy it across a system bus.

Think about your business goals for client/server and distributed architectures. Lots of us, blurring these two concepts, expected client/server systems to cost less than mainframes by running on cheaper hardware. Since client/server doesn’t speak to hardware, this isn’t a meaningful goal. The point of client/server architectures is to reduce costs by maximizing code reuse.

It’s distributed computing that ought to reduce hardware costs, and it can, if a distributed design fits your application load better than the alternatives.

Let’s apply the distinction between client/server computing and distributed architectures to Web-based systems. You often hear people describe the browser as a “paper-thin client” when it isn’t a client at all. The supposed client’s thinness is described as a “good thing.” Why? It’s portable! And you don’t have messy software installations to perform on the desktop! And it’s … well, it’s thin!

Regarding portability: 3278 emulators are portable, too (and thin). So what? And software installations don’t have to be messy if you’re careful. Thinness? You have processing power to burn on the desktop.

Browsers are incomplete clients. They do format the screen and accept keystrokes, but aside from drop-down lists they can’t handle the rest of the presentation logic, such as screen sequencing and data validation.
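
For the record, here’s the kind of check a complete client performs right at the keyboard, sketched in TypeScript with hypothetical fields and rules of my own devising.

    // Presentation-layer validation a complete client would run locally before
    // anything crosses the wire; the fields and rules are hypothetical.
    interface OrderInput {
      customerId: string;
      quantity: string; // raw keystrokes, not yet a number
    }

    function validateOrder(input: OrderInput): string[] {
      const errors: string[] = [];
      if (input.customerId.trim() === "") {
        errors.push("Customer ID is required.");
      }
      const quantity = Number(input.quantity);
      if (!Number.isInteger(quantity) || quantity <= 0) {
        errors.push("Quantity must be a positive whole number.");
      }
      return errors; // complain immediately; send only a clean form to the host
    }

    // Example: two bad fields caught without bothering the host.
    console.log(validateOrder({ customerId: "", quantity: "minus three" }));

A browser that can’t do this has to ship the whole screenful to the host instead.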

And that’s why Web-based forms are so slow and irritating to use. You’re waiting for a host computer to notice you, process a whole screenful of input, and send a response.

We ought to know better.

ManagementSpeak: I see you involved your peers in developing your proposal.
Translation: One person couldn’t possibly come up with something this stupid.
This week’s contributor prefers to let you wonder if he or she is the person smiling at you from the next cubicle.