I spent five long years studying the behavior of electric fish in graduate school before becoming a professional programmer in the world of commerce. Not a day of my fish research was wasted – I’ve reused nearly everything I learned in graduate school in my business career.

You’re probably expecting a segue from reuse to object technology. Nope. Not this week. We’re going to apply part of the philosophy of science to your day-to-day business decision-making.

My colleagues and I had many a fine discussion, over a few mugs o’ brew, about which theories had scientific value and which provided mere bull-fodder. The short version: only theories with both explanatory and predictive power are scientifically useful, because a theory that explains but doesn’t predict can’t be tested.

Businesses deal with theories all the time. To their misfortune, businesses have only one way to test a theory: try it and see what happens. Even then, the results don’t tell us much. Businesses want to make money, not test theories, so they don’t apply the experimental and statistical controls that lead to confidence in the results. One perfectly valid business theory may be associated with marketplace failure (perhaps due to poor execution), while another – a really stupid idea – ends up looking brilliant because the company that followed it did enough other things right to thrive.

While business theories are rarely as certain as, say, the laws of thermodynamics, they’re often good enough to be worth using – provided they’re useful and not just interesting. A good business theory must provide guidance when you’re making decisions.

And that takes us to last week’s column on the difference between client/server computing and distributed processing. “Client/server”, you’ll recall, refers to a software partitioning model that separates applications into independent communicating modules. The test of client/server isn’t where the modules execute, it’s their separation and independence.

Distributed processing is a hardware and network architecture in which multiple, physically independent computers cooperate to accomplish a processing task.

You can certainly implement client/server computing on a distributed architecture – they go together naturally – but they’re not the same thing.
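To make the distinction concrete, here’s a minimal sketch (names and the toy discount rule are my own, purely illustrative). The "server" module owns business logic; the "client" module owns presentation. They talk only through a narrow message interface over a socket, so either module could be moved to another machine – or both could run on one box – without changing a line of the other. That separation, not the hardware layout, is what makes it client/server.

```python
# Illustrative sketch: client/server is a software partition, not a
# hardware one. Both modules happen to run on one machine here, yet the
# design is still client/server because the modules are independent and
# communicate only through messages.
import json
import socket
import threading

def business_server(listener):
    """Server module: business logic only, no presentation."""
    conn, _ = listener.accept()
    with conn:
        request = json.loads(conn.recv(1024).decode())
        # Toy business rule (invented for this example):
        # apply a 10% discount to orders over 100.
        total = request["amount"]
        if total > 100:
            total *= 0.9
        conn.sendall(json.dumps({"total": total}).encode())

def presentation_client(port, amount):
    """Client module: formats the request, formats the reply for display."""
    with socket.create_connection(("127.0.0.1", port)) as conn:
        conn.sendall(json.dumps({"amount": amount}).encode())
        reply = json.loads(conn.recv(1024).decode())
    return f"Amount due: {reply['total']:.2f}"

listener = socket.socket()
listener.bind(("127.0.0.1", 0))   # let the OS pick any free port
listener.listen(1)
port = listener.getsockname()[1]

server = threading.Thread(target=business_server, args=(listener,))
server.start()
result = presentation_client(port, 200)
server.join()
listener.close()
print(result)  # Amount due: 180.00
```

Swap the loopback address for a remote host name and you have distributed processing too – but nothing about the partitioning itself required it.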

While writing that column, I could almost hear some readers saying, “Oh, come on. That’s just semantics.” But the distinction matters. In other words (we’re there!) we’re dealing with a useful business theory.

One of our teams used it recently while helping a client sort through some product claims. One vendor touted its “paper-thin client” – it uses the X Window System on the desktop – as one of its desirable design features. A thin-client design was just what we were looking for, because we wanted to reuse much of the core system’s business and integration logic in new front-end applications.

Looking at the product more closely, we discovered something wonderful. The vendor hadn’t implemented a thin client at all. It had built a fat client that mixed presentation and business processing together, but executed it on the server. Its system used paper-thin desktops, not paper-thin clients.

Thin desktops may be just what you’re looking for. They reduce the cost of system management (fewer desktop software installations) and can give you highly portable applications. They come at a price, though – an impoverished interface and a much higher processor load on the server, to name two.

We weren’t looking for thin desktops. We wanted to reuse a lot of the application logic built into the system, and that meant disqualifying this particular product.

Take a minute to think about some of the claims you’ve read about the network computer (NC). Ever hear someone refer to it as a thin-client architecture? I have, but it isn’t any kind of client architecture. It’s a distributed computing architecture. Whether the applications you run on an NC use thin clients, fat clients, or just terminal emulators depends on how you or the vendor partition the application logic and where you execute the various modules that make up the application.

Think the distinction between distributed processing and client/server computing is “just a theory” or “just semantics”? Think again: it’s central to your technical architecture.