I spent five long years studying the behavior of electric fish in graduate school before becoming a professional programmer in the world of commerce. Not a day of my fish research was wasted – I’ve reused nearly everything I learned in graduate school in my business career.

You’re probably expecting a segue from reuse to object technology. Nope. Not this week. We’re going to apply part of the philosophy of science to your day-to-day business decision-making.

My colleagues and I had many a fine discussion about which theories had scientific value and which ones provided bull-fodder when mixed with a few mugs o’ brew. The short version: only theories that have both explanatory and predictive power are scientifically useful, because theories that explain but don’t predict can’t be tested.

Businesses deal with theories all the time. To their misfortune, businesses have only one way to test a theory: Try it and see what happens. Sadly, the results still don’t tell us much. Businesses want to make money, not test theories, so they don’t apply the kinds of experimental and statistical controls that lead to confidence in the results. One perfectly valid business theory may be associated with marketplace failure (perhaps due to poor execution) while another – a really stupid idea – ends up looking brilliant because the company that followed it did enough other things right to thrive.

While business theories are rarely as certain as, say, the laws of thermodynamics, they’re often good enough to be worth applying – provided they’re useful and not merely interesting. And useful, for a business theory, means it provides guidance when you’re making decisions.

And that takes us to last week’s column on the difference between client/server computing and distributed processing. “Client/server”, you’ll recall, refers to a software partitioning model that separates applications into independent communicating modules. The test of client/server isn’t where the modules execute, it’s their separation and independence.

Distributed processing is a hardware and network architecture in which multiple, physically independent computers cooperate to accomplish a processing task.

You can certainly implement client/server computing on a distributed architecture – they go together naturally – but they’re not the same thing.
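To make the distinction concrete, here is a minimal Python sketch (the module names and the little JSON message format are my own inventions, not anything from the column). It shows two partitions that know each other only through messages; they happen to run in one process here, but the same strings could just as easily cross a network.

```python
# A minimal sketch of "client/server" as a software partition, not a
# hardware layout. OrderService, OrderClient, and the JSON protocol are
# hypothetical examples, not any real product's design.
import json

class OrderService:
    """Server partition: business logic only, no user-interface code."""
    PRICES = {"widget": 3.50, "gadget": 7.25}

    def handle(self, request: str) -> str:
        msg = json.loads(request)
        total = self.PRICES.get(msg["item"], 0.0) * msg["qty"]
        return json.dumps({"total": total})

class OrderClient:
    """Client partition: presentation-side code that only knows the protocol."""
    def __init__(self, send):
        self.send = send  # any transport that takes a string and returns a string

    def quote(self, item: str, qty: int) -> float:
        reply = self.send(json.dumps({"item": item, "qty": qty}))
        return json.loads(reply)["total"]

# Both partitions can execute in one process on one machine...
service = OrderService()
client = OrderClient(send=service.handle)
print(client.quote("widget", 4))   # 14.0

# ...or the same strings could travel over a socket to another box.
# The partition -- and therefore the client/server design -- is unchanged.
```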

While writing the column I could almost hear some readers saying, “Oh, come on. That’s just semantics,” but the distinction matters. In other words (we’re there!) we’re dealing with a useful business theory.

One of our teams used it recently while helping a client sort through some product claims. One vendor touted its “paper-thin client” – it uses X-Windows on the desktop – as one of its desirable design features. A thin-client design was just what we were looking for, because we wanted to reuse a lot of the core system’s business and integration logic in new front-end applications.

Looking at the product more closely, we discovered something wonderful. The vendor hadn’t implemented a thin client at all. It had built a fat client that mixed presentation and business processing together, but executed it on the server. Its system used paper-thin desktops, not paper-thin clients.

Thin desktops may be just what you’re looking for. They reduce the cost of system management (fewer desktop software installations) and can give you highly portable applications. They come at a price, though – an impoverished interface and a much higher processor load on the server, to name two.

We weren’t looking for thin desktops. We wanted to reuse a lot of the application logic built into the system, and that meant disqualifying this particular product.
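For readers who like to see it in code, here is a hedged Python sketch of the difference we ran into. The names and the discount rule are invented; it is not the vendor’s software. It simply shows why tangling presentation with business logic blocks reuse, even when everything runs on a server behind a thin desktop.

```python
# A hypothetical illustration of "thin desktop" vs. "thin client".

# What the vendor had built: a fat client -- presentation and business logic
# tangled in one routine. Run it on the server and export only the display
# (X-Windows) and the *desktop* is thin, but the business rule is still
# unreachable from any new front end.
def fat_client_discount_screen(order_total: float) -> None:
    disc = 0.1 * order_total if order_total > 100 else 0.0   # business rule
    print(f"Discount: {disc:.2f}")                           # presentation

# What we were shopping for: a thin client, where the rule lives in its own
# module and any front end -- GUI, Web form, batch job -- can call it.
def discount(order_total: float) -> float:
    """Business-logic partition: reusable, interface-free."""
    return 0.1 * order_total if order_total > 100 else 0.0

def thin_client_discount_screen(order_total: float) -> None:
    """Presentation partition: formatting only."""
    print(f"Discount: {discount(order_total):.2f}")

thin_client_discount_screen(250.0)   # the same discount() also serves new apps
```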

Take a minute to think about some of the claims you’ve read about the network computer (NC). Ever hear someone refer to it as a thin-client architecture? I have, but it isn’t any kind of client architecture. It’s a distributed computing architecture. Whether the applications you run on an NC use thin clients, fat clients, or just terminal emulators depends on how you or the vendor partition the application logic and where you execute the various modules that make up the application.

Think the distinction between distributed processing and client/server computing is “just a theory” or “just semantics”? Think again: it’s central to your technical architecture.

My kids (Kimberly and Erin, to properly identify the guilty parties) regularly sponsor an event they call Drive Dad Nuts Night (D2N2).

All kids do things that drive their parents nuts, of course. It’s their cunning revenge for all the things parents do to drive them crazy. My kids are no different, except that as a perfect parent I give them no cause for D2N2.

High on the list of things that drive Dad nuts is the need to repeat the same information over and over before it penetrates their consciousness. It’s my own fault: I’ve taken an important principle of communications theory – injecting redundancy into a signal so it can penetrate noise – and inappropriately replaced it with the data design principle of eliminating redundant data.

Repetition is important in a noisy channel, and few channels are noisier than print communications, what with advertisements, news articles, and other columns standing between you and my opinions. Which, of course, is my excuse for revisiting an earlier topic this week.

The subject is the difference between the idea of client/server computing — a software design concept — and distributed computing, which deals with hardware issues.

Most writers in the trade press don’t seem to worry about this distinction — even in these hallowed pages. And it isn’t a mere semantic nicety. It gets to the heart of every current hot issue in computing. If you get it wrong, you’ll make bad decisions about important, practical, day-to-day problems.

Here’s a quick recap for readers who missed my previous columns on the subject. (See “Only circular reasoning proves C/S systems cost less than mainframes,” Feb. 10, page 62.) “Client/server” refers to software designs that partition applications into two or more independent, communicating modules. Modern designs use at least three partitions: a presentation module that handles all user-interface logic, a business logic module that takes care of data processing and integration issues, and a DBMS that handles all details of data management. Three-partition designs — three-tier architectures — are themselves giving way to n-tier layered architectures as software designers gain experience and design theory gains subtlety.
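A minimal sketch of that three-partition idea might look like the following Python, with an invented table and rule, and SQLite standing in for whatever DBMS you actually run.

```python
# A hypothetical three-partition (three-tier) sketch: presentation,
# business logic, and data management kept in separate modules.
import sqlite3

# --- Data-management partition: the DBMS owns all storage details --------
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE orders (customer TEXT, amount REAL)")
db.executemany("INSERT INTO orders VALUES (?, ?)",
               [("Acme", 120.0), ("Acme", 30.0), ("Bix", 75.0)])

# --- Business-logic partition: data processing and integration rules -----
def customer_total(customer: str) -> float:
    row = db.execute("SELECT SUM(amount) FROM orders WHERE customer = ?",
                     (customer,)).fetchone()
    return row[0] or 0.0

# --- Presentation partition: user-interface logic only -------------------
def show_total(customer: str) -> None:
    print(f"{customer}: ${customer_total(customer):,.2f}")

show_total("Acme")
# All three partitions execute in one process here, which is exactly the
# point: the partitioning, not the placement, is the client/server design.
```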

Not only doesn’t a client/server architecture care about where each partition executes, but the best architectures make each partition portable. Which is why the “mainframe-vs.-client/server” controversy is so nonsensical: It’s easier to create n-tier client/server applications in which every partition executes on the mainframe than it is to build them with each partition executing on a separate piece of hardware.

“Distributed computing,” in contrast, refers to hardware designs that facilitate spreading the computing load over multiple communicating computers. Client/server applications are easier to distribute than software monoliths, of course, but it’s just as possible (although not yet commercially viable) to deploy symmetric multiprocessing across a LAN as it is to deploy it across a system bus.

Think about your business goals for client/server and distributed architectures. Lots of us, blurring these two concepts, expected client/server systems to cost less than mainframes by running on cheaper hardware. Since client/server doesn’t speak to hardware, this isn’t a meaningful goal. The point of client/server architectures is to reduce costs by maximizing code reuse.

It’s distributed computing that ought to reduce hardware costs, and it can, if a distributed design fits your application load better than the alternatives.

Let’s apply the distinction between client/server computing and distributed architectures to Web-based systems. You often hear people describe the browser as a “paper-thin client” when it isn’t a client at all. The supposed client’s thinness is described as a “good thing.” Why? It’s portable! And you don’t have messy software installations to perform on the desktop! And it’s … well, it’s thin!

Regarding portability: 3278 emulators are portable, too (and thin). So what? And software installations don’t have to be messy if you’re careful. Thinness? You have processing power to burn on the desktop.

Browsers are incomplete clients. They do format the screen and accept keystrokes, and they can manage drop-down lists, but they can’t handle the rest of the presentation logic, such as screen sequencing and data validation.

And that’s why Web-based forms are so slow and irritating to use. You’re waiting for a host computer to notice you, process a whole screenful of input, and send a response.
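To make that concrete, here is a small, purely illustrative Python sketch (field names and rules invented) of the kind of presentation logic a genuine client performs locally, sparing the user that round trip.

```python
# A hypothetical example of client-side data validation: catch the obvious
# errors at the presentation layer instead of shipping a whole screenful of
# input to the host and waiting for the rejection.

def validate_order_form(fields: dict) -> list[str]:
    """Presentation-logic check run locally, before any trip to the host."""
    errors = []
    if not fields.get("customer"):
        errors.append("Customer name is required.")
    try:
        if int(fields.get("qty", "")) <= 0:
            errors.append("Quantity must be a positive whole number.")
    except ValueError:
        errors.append("Quantity must be a positive whole number.")
    return errors

# With a true (thin) client, the user hears about both problems instantly.
print(validate_order_form({"customer": "", "qty": "-3"}))
# With a bare browser form, the same screenful goes to the host first --
# hence the wait.
```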

We ought to know better.