Neuroscientists use a nifty technique called positron emission tomography (PET) to map which parts of the human brain process different kinds of thoughts and sensations. I'd bet that if we PET-scanned some religious fanatics, serious football fans, and the authors of the flames I received in response to my follow-up article on Network Computers a few weeks ago, they'd all be using the same cerebral structures.

Larry Ellison of Oracle coined the term "network computer," and Oracle has an NC reference specification. This is the gadget I argued against in recent columns. Citrix WinFrame may be fabulous. The HDS @workStation may be just the ticket. Last I looked, though, neither was built to the Oracle reference spec.

You can call anything you want an NC – it’s a free country (expensive, but free). The companies that took advantage of free publicity by calling their various stuff “NCs” have to take the good with the bad.

One question: since Microsoft's new license terms let you run MS applications only on MS operating systems, are you sure what you're doing is legal? It's debatable whether an NC running an MS application remotely is kosher, and Microsoft has better lawyers than God.

Speaking of definitions, I’ll bet lots of readers got excited over my exit line last week: that the opposite of “client/server” is “bad programming”. Got your attention, didn’t I?

Applications are client/server when the developer breaks out different pieces of program logic into independent, portable executables. It isn’t fundamentally different from what we’ve been doing all along with CICS, VTAM and so on, but you may want to draw a distinction. That’s cool: let’s call it client/server only when application partitioning goes beyond operating system and database management utilities to involve at least presentation logic, and maybe business rules and processes as well.
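To make that distinction concrete, here's a minimal sketch of the kind of partitioning I mean, written in present-day Python rather than anything from an actual project: the server half owns a made-up business rule (a credit-limit check), the client half owns nothing but presentation, and an HTTP call stands in for the boundary between independently deployed executables. Everything in it, from the rule to the port number, is a hypothetical illustration.

# A deliberately tiny sketch of application partitioning, not a real system:
# business rules live on the "server" side, presentation lives on the "client"
# side. In production these would be separate executables on separate boxes;
# here one process and a background thread stand in for that split.

import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import Request, urlopen

# ---- server side: business rules only, no presentation ----
CREDIT_LIMIT = 5_000  # hypothetical business-rule parameter

class OrderHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers["Content-Length"])
        order = json.loads(self.rfile.read(length))
        approved = order["amount"] <= CREDIT_LIMIT  # the business rule itself
        body = json.dumps({"approved": approved}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep the demo quiet
        pass

# ---- client side: presentation only, no business rules ----
def present_order(amount, url):
    req = Request(url, data=json.dumps({"amount": amount}).encode(),
                  headers={"Content-Type": "application/json"})
    with urlopen(req) as resp:
        verdict = json.loads(resp.read())
    print(f"Order for ${amount:,}: "
          + ("approved" if verdict["approved"] else "rejected"))

if __name__ == "__main__":
    server = HTTPServer(("127.0.0.1", 8808), OrderHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    present_order(1_200, "http://127.0.0.1:8808")  # approved
    present_order(9_999, "http://127.0.0.1:8808")  # rejected
    server.shutdown()

The plumbing isn't the point. The point is that the approval logic could move to a bigger box, or the print statement could be swapped for a GUI, without either half knowing the other changed.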

We've been breaking these into independently compiled subroutines for years, so why does it suddenly start costing more when we call it "client/server" and make the pieces portable? Answer: we're confusing several separate issues:

Building to a Platform: COBOL/CICS/3278 programmers build to an existing, stable environment. They’re just writing applications. Lots of client/server projects sink because the team has to build their ship while they’re trying to sail it. Of course it’s going to leak.

Scaling: The IBM mainframe hardware/software architecture has been optimized and refined over the years to handle high-volume batch processing. Lots of client/server projects include a goal of unplugging the mainframe in favor of cheaper MIPS. This is a great goal, and you should go for it if your system won’t include big batch runs. If it will, you’ll have to build in all sorts of nasty workarounds and kludges, and these will inflate project costs unreasonably.

You won’t win the Indy 500 with a freight train, but you also won’t economically haul grain with a fleet of Porsches.

User Interface: We used to build character-based monochrome interfaces that required users to learn both the business and the technology. Remember training call-center agents on hundreds of transaction codes?

Employees learn how good an interface can be at their local PC software retailer. They rightfully hold IS to a higher standard now. Surprise! Building GUIs, with lots of interface objects, windowing, and extensive business intelligence, takes more time than building 3278 screens.

Programmer Training: We hire trained COBOL programmers. They learn in trade school, or we just say "3 years of COBOL/CICS experience" in the ad. We ask client/server development teams to learn their tools as they build applications. C'mon, folks, what do you expect: perfection on the first try?

So …

When I was studying fish behavior many years ago, I presented some serious statistics to my research advisor. He said, "This is fine, but what does it mean?"

Ask this question whenever you hear silly average-cost statistics from self-styled industry pundits … except, of course, from yours truly.