I spent five long years studying the behavior of electric fish in graduate school before becoming a professional programmer in the world of commerce. Not a day of my fish research was wasted – I’ve reused nearly everything I learned in graduate school in my business career.

You’re probably expecting a segue from reuse to object technology. Nope. Not this week. We’re going to apply part of the philosophy of science to your day-to-day business decision-making.

My colleagues and I had many a fine discussion about which theories had scientific value and which ones provided bull-fodder when mixed with a few mugs o’ brew. The short version: only theories that have both explanatory and predictive power are scientifically useful, because theories that explain but don’t predict can’t be tested.

Businesses deal with theories all the time. To their misfortune, businesses have only one way to test a theory: try it and see what happens. Even then, the results don’t tell us much. Businesses want to make money, not test theories, so they don’t apply the kinds of experimental and statistical controls that lead to confidence in the results. One perfectly valid business theory may be associated with marketplace failure (perhaps due to poor execution), while another – a really stupid idea – may end up looking brilliant because the company that followed it did enough other things right to thrive.

While business theories are rarely as certain as, say, the laws of thermodynamics, they’re often good enough to be worth using – provided they’re useful and not merely interesting. A useful business theory is one that gives you guidance when you have to make a decision.

And that takes us to last week’s column on the difference between client/server computing and distributed processing. “Client/server”, you’ll recall, refers to a software partitioning model that separates applications into independent communicating modules. The test of client/server isn’t where the modules execute; it’s the separation and independence of the modules.

Distributed processing is a hardware and network architecture in which multiple, physically independent computers cooperate to accomplish a processing task.

You can certainly implement client/server computing on a distributed architecture – they go together naturally – but they’re not the same thing.
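
To make the partition concrete, here’s a minimal sketch in Python. The price-lookup service, host, and port are my own illustration, not anything from the column. The client module and the server module know nothing about each other’s internals; they talk only through a small request/response protocol. Run them in one process, on one machine, or on two computers across a network; the client/server model is the same either way, and only the distribution changes.

    # A minimal sketch, not a production design. The price-lookup service, host,
    # and port below are illustrative assumptions; they don't come from the column.
    import json
    import socket
    import threading

    HOST, PORT = "127.0.0.1", 5055
    PRICES = {"widget": 9.95, "gadget": 24.50}   # stand-in for the server's business data

    def serve_one_request(listener):
        """Server module: owns the business logic, knows nothing about presentation."""
        conn, _ = listener.accept()
        with conn:
            request = json.loads(conn.recv(1024).decode())
            reply = {"item": request["item"], "price": PRICES.get(request["item"])}
            conn.sendall(json.dumps(reply).encode())

    def client_lookup(item):
        """Client module: handles presentation, knows nothing about how prices are stored."""
        with socket.create_connection((HOST, PORT)) as conn:
            conn.sendall(json.dumps({"item": item}).encode())
            return json.loads(conn.recv(1024).decode())

    if __name__ == "__main__":
        listener = socket.socket()
        listener.bind((HOST, PORT))
        listener.listen(1)
        threading.Thread(target=serve_one_request, args=(listener,), daemon=True).start()
        print(client_lookup("widget"))   # same partition whether the server is local or remote
        listener.close()

Swap the loopback address for a remote host and you have distributed processing; the client/server partition hasn’t changed a bit.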

While writing the column I could almost hear some readers saying, “Oh, come on. That’s just semantics.” But the distinction matters. In other words (we’re there!), we’re dealing with a useful business theory.

One of our teams used it recently while helping a client sort through some product claims. One vendor touted its “paper-thin client” – it uses X-Windows on the desktop – as one of its desirable design features. A thin-client design was just what we were looking for, because we wanted to reuse a lot of the core system’s business and integration logic in new front-end applications.

Looking at the product more closely, we discovered something wonderful. The vendor hadn’t implemented a thin client at all. It had built a fat client that mixed presentation and business processing together, but executed it on the server. Its system used paper-thin desktops, not paper-thin clients.
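
Here’s a rough sketch of what that mixing looks like and why it frustrated our reuse plans; the invoice example and the tax rate are mine, purely for illustration. The first routine is a fat client: the business rule and the screen formatting live in one place, so moving it onto a server changes where it runs, not what it is. The second pair keeps them separate, which is what lets a new front end reuse the business module.

    # Illustrative only; the invoice math is a made-up business rule.

    def fat_client_show_invoice(quantity, unit_price):
        """Fat client: business processing and presentation tangled in one routine."""
        total = quantity * unit_price * 1.08      # business rule (8% tax, assumed)
        print(f"Invoice total: ${total:,.2f}")    # presentation baked into the same code

    def invoice_total(quantity, unit_price, tax_rate=0.08):
        """Business module: makes no assumptions about how the result is displayed."""
        return quantity * unit_price * (1 + tax_rate)

    def show_invoice(total):
        """Presentation module: formatting only, reusable with any front end."""
        print(f"Invoice total: ${total:,.2f}")

    if __name__ == "__main__":
        fat_client_show_invoice(10, 9.95)        # can't reuse the rule without the screen
        show_invoice(invoice_total(10, 9.95))    # a new front end calls invoice_total directly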

Thin desktops may be just what you’re looking for: they reduce the cost of system management (fewer desktop software installations) and can give you highly portable applications. They come at a price, though – an impoverished interface and a much higher processor load on the server, to name two.

We weren’t looking for thin desktops. We wanted to reuse a lot of the application logic built into the system, and that meant disqualifying this particular product.

Take a minute to think about some of the claims you’ve read about the network computer (NC). Ever hear someone refer to it as a thin-client architecture? I have, but it isn’t any kind of client architecture. It’s a distributed computing architecture. Whether the applications you run on an NC use thin clients, fat clients, or just terminal emulators depends on how you or the vendor partition the application logic and where you execute the various modules that make up the application.

Think the distinction between distributed processing and client/server computing is “just a theory” or “just semantics”? Think again: it’s central to your technical architecture.

When I lived in Washington, DC, I wanted to write a book about a Russian invasion. Troops and tanks surround the city, then enter to take the capital.

Eight weeks later, exhausted, out of fuel, low on rations and hopelessly lost, the Russians surrender.

The Washington street system is ridiculously complicated, even if you ignore the potential confusion between I Street and Eye Street. There are those who think PCs, too, are far too complicated.

I don’t.

A few months back I wrote about one reason PCs seem hard to use: no matter how simple each function may be, PCs provide so much capability that just keeping track of all the different easy things you can do is tough. People gripe about this when they talk about “feature bloat” – a ridiculous complaint, equivalent to griping about the menu at a Chinese restaurant because all the choices make it hard to decide what to eat.

PCs seem complicated for a second, more subtle reason: they simplify tasks that are intrinsically complex.

Yes, that’s right. The PC’s ability to simplify complex tasks makes it seem hard to use. What’s really going on is that the PC reveals our own lack of knowledge.

I learned this a long time ago training end-users in Lotus 1-2-3, back when DOS was king and the Xerox Star ran the world’s first commercial GUI (but nobody cared). “Here’s how you calculate a percent,” I’d explain. “What’s a percent?” someone in the class would inevitably ask.

So I’d explain percentages, but I knew most of the students left figuring Lotus was just too hard to learn. They were wrong, of course. The software had nothing to do with their ignorance of basic arithmetic.

This problem recurs in every software category. Electronic spreadsheets make mathematical modeling relatively easy. They don’t, however, make mathematics easy – mathematics and mathematical modeling are intrinsically hard.

Word processors make the mechanics of document creation and formatting pretty simple. They don’t, however, simplify the fundamental process of organizing thoughts and translating them into coherent explanations.

End-user databases highlight this even more: Access, Paradox, and Approach all make it easy to define databases, create entry screens, and format reports. They don’t, however, teach you the business problem you’re trying to solve, redesign your processes to take advantage of automation, or create third-normal-form data designs for you.
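
For readers who haven’t met the term, here’s a small sketch of what a third-normal-form design involves. The order-entry tables are my own example, not from the column, and any of the packages named above could build either version with equal ease; the hard part is knowing why the second design is the right one.

    # Illustrative only: a flat design versus one closer to third normal form.
    import sqlite3

    con = sqlite3.connect(":memory:")

    # The easy-but-wrong design: customer details repeated on every order row,
    # so one customer's city lives in many places and can drift out of sync.
    con.execute("""
        CREATE TABLE orders_flat (
            order_id      INTEGER,
            item          TEXT,
            customer_name TEXT,
            customer_city TEXT
        )""")

    # Closer to third normal form: every non-key column depends on its table's key,
    # the whole key, and nothing but the key.
    con.execute("""
        CREATE TABLE customers (
            customer_id INTEGER PRIMARY KEY,
            name        TEXT,
            city        TEXT
        )""")
    con.execute("""
        CREATE TABLE orders (
            order_id    INTEGER PRIMARY KEY,
            item        TEXT,
            customer_id INTEGER REFERENCES customers(customer_id)
        )""")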

Don’t think of this as an overwhelming problem that makes end-user education impossible. Think of it as the design specification for your PC training program.

Create two parallel curricula. One, for end-users who know the subject, teaches the mechanics of the software. The other teaches business skills using the PC.

Here’s your new course list:

  • Basic Business Writing using MS Word: Memos and Letters
  • Advanced Business Writing using MS Word: Reports and White Papers
  • Business Math using Quattro Pro: The Basics
  • Business Math using Quattro Pro: Introduction to Mathematical Modeling
  • Introduction to Data Design using Paradox
  • Business Automation using Paradox
  • Creating Efficient Work Processes using Lotus Notes
  • …Get the idea?

Don’t, by the way, fall into the “snooty waitron” trap (“Sorry, that’s not my table.”). Far too many companies artificially divide employee knowledge into technical skills and business skills, with separate training organizations for each. You have only two choices: either help end-users succeed or teach irrelevant material.

Listen closely when end-users have trouble using their computers. If they aren’t complaining about an installation problem (not an ease-of-use issue at all), you’ll find every complaint falls into one of two categories. Your end-users may be complaining because they can’t find a feature, or because they don’t know to look for the feature in the first place. Emphasize how to find features, and create “cheat sheets” built around common end-user tasks rather than around the software menus.

Or their task may be intrinsically complex – a tax-adjusted return-on-investment analysis, for example. For those, build a subject-based curriculum like the one just outlined.

Build your training programs to solve these problems and you’re far more likely to deliver real value.