You’ll find this hard to believe, but I have an unfortunate tendency to wisecrack. Sometimes, it gets out of hand, becoming the conversational goal. Function – communication – gets lost in the quest for form (and you’ll recall that “Form Follows Function” is one of the three great laws of management).

I victimized a perfectly good argument this way a few columns ago. Comparing the costs of mainframe and client/server computing, I described a small system – few users, low transaction volume, small table size – and then asked which would cost more, the mainframe or client/server version. Instead of just making the point, I tried to be clever (I said, “Hint: the word ‘COBOL’ shouldn’t appear in your answer”), successfully inverting my meaning.

I hate it when that happens.

This was a minor gaffe. Worse was another recent column in which I Salingered InfoWorld‘s readers. (To Salinger – v. transitive: to state as authoritative knowledge information gleaned from secondary sources, thereby perpetuating unfounded rumor. From Pierre Salinger, who repeated an unfounded Internet rumor as fact, trying to precipitate a major scandal.)

I’d read that the license terms for Microsoft Office 97 only allowed licensees to run it on Microsoft operating systems, and, trusting the source, I used this “fact” in my column.

For the record, the license terms do not – repeat, not – include this restriction. My sources of information were wrong. I should have read the license terms myself instead of relying on secondary sources. With too little time to do so, I took a chance instead.

Usually, when someone makes a mistake in print, they run a correction, say, “we regret the error,” and move on.

I got to thinking, though. We all have too little time to absorb too much information, make sense of it, and decide on courses of action. That means we have to take shortcuts, relying on news articles, opinion pieces summarizing news articles, and even opinion pieces summarizing other opinion pieces.

Usually, this amounts to efficiently using limited time. It can, however, lead to embarrassing mistakes.

The Internet magnifies this dilemma greatly. It’s the nature of print media that you can make an informed judgment regarding the trustworthiness of what you read. It’s the nature of the Internet that you can’t. Any damned fool can create Web pages that look just as official and authoritative as InfoWorld Electric. The Internet may be the greatest source of information ever, but figuring out what constitutes information and what constitutes deception, rumor-mongering, or just plain trouble-making isn’t all that easy.

Here’s what’s needed: Some independent authority should establish a certification program for information providers. The program would define minimum standards for news gathering and editorial practice. Sites that qualify would be allowed to display the “TIP” (Trusted Information Provider) logo. Consumers of information could then look for the TIP logo before accepting what they read.

This – the ISO 9000 of publishing – would be of awesomely high value for every information consumer on the planet. It wouldn’t guarantee perfection. It would let you know whether your source is worth attending to.

No TIP certification program exists today, and in its absence you have to make your own decisions regarding how to be sure what you’ve read reflects reality. As stated before in this space, you need a finely tuned BS detector.

This isn’t a new problem. There’s a branch of philosophy – epistemology – that deals with how we know what we know. It teaches that there’s no such thing as absolute certainty, just relative confidence.

Which, I guess, means we all have to gather information to the depth we think is appropriate, draw the best conclusions we can, and hope to get it right when it counts.

And to always acknowledge the possibility that we got it wrong.

Neuroscientists use a nifty technique called positron emission tomography (PET) to map which parts of the human brain process different kinds of thoughts and sensations. I’d bet that if we PET-scanned some religious fanatics, serious football fans, and the authors of the flames I received in response to my follow-up article on Network Computers a few weeks ago, they’d all be using the same cerebral structures.

Larry Ellison of Oracle coined the term “network computer,” and Oracle has an NC reference specification. This is the gadget I argued against in recent columns. The Citrix Winframe may be fabulous. The HDS @workStation may be just the ticket. Last I looked, though, neither was built to the Oracle reference spec.

You can call anything you want an NC – it’s a free country (expensive, but free). The companies that took advantage of free publicity by calling their various stuff “NCs” have to take the good with the bad.

One question: if Microsoft’s license terms really did let you run MS applications only on MS operating systems, would you be sure what you’re doing is legal? It’s debatable whether an NC running an MS application remotely is kosher, and Microsoft has better lawyers than God.

Speaking of definitions, I’ll bet lots of readers got excited over my exit line last week: that the opposite of “client/server” is “bad programming”. Got your attention, didn’t I?

Applications are client/server when the developer breaks out different pieces of program logic into independent, portable executables. It isn’t fundamentally different from what we’ve been doing all along with CICS, VTAM and so on, but you may want to draw a distinction. That’s cool: let’s call it client/server only when application partitioning goes beyond operating system and database management utilities to involve at least presentation logic, and maybe business rules and processes as well.
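To make the partition concrete, here’s a minimal sketch. It’s in Python (not a language this column discusses), and the credit-check rule, the names, and the port number are all mine, invented purely for illustration. The server side owns one business rule; the client side owns nothing but presentation.

```python
# Server side: owns the business rule, knows nothing about screens.
# (Hypothetical example; check_credit and the 1,000 limit are made up.)
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def check_credit(balance, requested):
    """Business rule: approve only if the new balance stays at or under 1,000."""
    return balance + requested <= 1000

class RuleHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers["Content-Length"])
        request = json.loads(self.rfile.read(length))
        answer = {"approved": check_credit(request["balance"], request["requested"])}
        body = json.dumps(answer).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("localhost", 8000), RuleHandler).serve_forever()
```

```python
# Client side: presentation logic only. Gather input, ask the server, show the result.
import json
from urllib.request import Request, urlopen

payload = json.dumps({"balance": 400, "requested": 250}).encode()
request = Request("http://localhost:8000/", data=payload,
                  headers={"Content-Type": "application/json"})
with urlopen(request) as response:
    answer = json.loads(response.read())
print("Approved" if answer["approved"] else "Declined")
```

Move check_credit into the client and you still have a GUI talking to a database, but by the definition above you’ve lost the partition that made it client/server.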

We’ve been breaking these into independently compiled subroutines for years, so why would it suddenly start costing more when we call it “client/server” and make the pieces portable? Answer: we’re confusing several separate issues:

Building to a Platform: COBOL/CICS/3278 programmers build to an existing, stable environment. They’re just writing applications. Lots of client/server projects sink because the team has to build its ship while trying to sail it. Of course it’s going to leak.

Scaling: The IBM mainframe hardware/software architecture has been optimized and refined over the years to handle high-volume batch processing. Lots of client/server projects include a goal of unplugging the mainframe in favor of cheaper MIPS. This is a great goal, and you should go for it if your system won’t include big batch runs. If it will, you’ll have to build in all sorts of nasty workarounds and kludges, and these will inflate project costs unreasonably.

You won’t win the Indy 500 with a freight train, but you also won’t economically haul grain with a fleet of Porsches.

User Interface: We used to build character-based, monochrome interfaces that required users to learn both the business and the technology. Remember training call-center agents on hundreds of transaction codes?

Employees learn at their local PC software retailer just how good an interface can be, and they rightfully hold IS to a higher standard now. Surprise! Building GUIs, with lots of interface objects, windowing, and extensive business intelligence, takes more time than building 3278 screens.

Programmer Training: We hire trained COBOL programmers. They learn in trade school or we just say, “3 years of COBOL/CICS experience” in the ad. We ask client/server development teams to learn their tools as they build applications. C’mon folks, what do you expect – perfection on the first try?

So …

When I was studying fish behavior many years ago, I presented some serious statistics to my research advisor. He said, “This is fine, but what does it mean?”

Ask this question whenever you hear silly average-cost statistics from self-styled industry pundits … except, of course, from yours truly.