I’ve told my share of “dumb user” stories. Whiteout-on-the-screen is a popular entry. I’m fond of the using-a-bulk-tape-eraser-as-a-diskette-bookend story. My all-time favorite has the punch line, “Well, your first problem is, that’s not a modem, it’s an answering machine.”

You don’t hear as many “dumb IS analyst” stories. Here’s one: “We don’t have time to do it for you, and we won’t give you the tools to do it yourself.” Another favorite: “I don’t care if you’ve solved your business problem – your data model isn’t in third normal form!”

The all-time classic goes like this: “No I haven’t been on the factory floor. Why would I want to do that?”

This New Year I resolved to eschew dumb-user stories altogether. They have too much in common with ethnic humor – even if the gag is funny, it’s generally in poor taste, and it locks your thinking into stupid stereotypes.

For example (you were wondering when I’d get to the actual topic, weren’t you?), the dumb-user stereotype renders computer training programs completely ineffective. Start with a dumb-user premise and you’ll design boring, basic, pointless computer classes that convey so little information that attendees wander away muttering about their wasted time.

When you’re teaching (and I’ve done a fair amount of it in my career) your audience believes what you tell them. Tell your class that computers are complicated and they’ll believe you. If, on the other hand, you tell them the truth – that computers greatly simplify many complex tasks – they’ll believe that instead.

How did the myth arise that computers are hard to use? I hosted an InfoWorld Electric Forum on this subject a while back, and the consensus was remarkable. Computers have become increasingly hard to set up and maintain, in lockstep with a trend toward extraordinary ease of use. In this they have a lot in common with automobiles. Very few of us have the specialized knowledge needed even to tune a modern engine. Driving, however, has become easier: push on the gas to go, push on the brake to stop, turn the wheel to steer. Cars no longer have the manual chokes, standard transmissions, or hand-crank starters that used to complicate learning to drive.

Hmmm … push and steer. Sounds a lot like “point and click” doesn’t it?

Computers seem hard to use for two basic reasons. We’ll address one of them this week and save the other for next time.

Computers make such a huge number of different things easy to do that just keeping track of them all is daunting. Want to change fonts? Easy. Bullets and numbering? Easy. Standard deviations? Same answer. And on and on and on.

In fact, computers and the Internet have this in common – the hardest part of using them is finding what you’re looking for among all the other stuff. The actual operation is simple. And even here, there are so many different routes to each operation (menus, button bars, the right mouse button) that you can generally figure things out without much difficulty.

When you teach, emphasize that every single task is easy, and establish three goals for every class: (1) Make sure to clarify the concepts (folders are like their paper equivalents – you use them to organize your files). (2) Help everyone succeed at the actual operation a few times, so they know they’re capable of it. (3) Make sure everyone knows how to look for the functions they need, so they have the confidence to poke around among the menus.

And give them this bit of great advice: with each project, add precisely one new technique to your bag o’ tricks. (In a very short period of time, they’ll master an awesome assortment of skills with very low stress.)

This teaching style will go a long way toward making your end users self-sufficient. Of course, there’s a downside to all of this: you’ll have far fewer dumb-user stories to swap with your friends.

Neuroscientists use a nifty technique called positron emission tomography (PET) to map which parts of the human brain process different kinds of thoughts and sensations. I’d bet that if we PET-scanned some religious fanatics, serious football fans, and the authors of the flames I received in response to my follow-up article on Network Computers a few weeks ago, they’d all be using the same cerebral structures.

Larry Ellison of Oracle coined the term “network computer,” and Oracle has an NC reference specification. That’s the gadget I argued against in recent columns. Citrix WinFrame may be fabulous. The HDS @workStation may be just the ticket. Last I looked, though, neither was built to the Oracle reference spec.

You can call anything you want an NC – it’s a free country (expensive, but free). But the companies that took advantage of free publicity by calling their various products “NCs” have to take the good with the bad.

One question: since Microsoft’s new license terms let you run MS applications only on MS operating systems, are you sure what you’re doing is legal? It’s debatable whether an NC running an MS application remotely is kosher, and Microsoft has better lawyers than God.

Speaking of definitions, I’ll bet lots of readers got excited over my exit line last week: that the opposite of “client/server” is “bad programming”. Got your attention, didn’t I?

Applications are client/server when the developer breaks out different pieces of program logic into independent, portable executables. It isn’t fundamentally different from what we’ve been doing all along with CICS, VTAM and so on, but you may want to draw a distinction. That’s cool: let’s call it client/server only when application partitioning goes beyond operating system and database management utilities to involve at least presentation logic, and maybe business rules and processes as well.
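Since definitions invite hair-splitting, here’s a minimal sketch of that distinction. It’s written in Python purely for illustration – the pricing rule and every name in it are hypothetical, not anything any real shop shipped. The business rules run as one independent, portable executable; the presentation logic runs as another, talking to it over the network:

    # business_rules.py – hypothetical business-rules tier: an independent
    # executable that exposes one rule over the network via XML-RPC.
    from xmlrpc.server import SimpleXMLRPCServer

    def quote_price(quantity):
        # Invented rule, for illustration only: volume discount at 100 units.
        unit_price = 9.95 if quantity >= 100 else 12.50
        return round(quantity * unit_price, 2)

    if __name__ == "__main__":
        server = SimpleXMLRPCServer(("localhost", 8000))
        server.register_function(quote_price)
        server.serve_forever()

    # presentation.py – hypothetical presentation tier: a separate executable
    # that knows nothing about pricing; it just asks the business-rules server.
    import xmlrpc.client

    if __name__ == "__main__":
        proxy = xmlrpc.client.ServerProxy("http://localhost:8000/")
        quantity = int(input("Quantity: "))
        print(f"Quoted price: ${proxy.quote_price(quantity):.2f}")

Swap the second executable for a GUI and neither the business rule nor the wire protocol changes. That separation – not any particular product – is what earns the client/server label.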

We’ve been breaking program logic into independently compiled subroutines for years, so why would costs suddenly balloon when we call the result “client/server” and make the pieces portable? Answer: we’re confusing several separate issues:

Building to a Platform: COBOL/CICS/3278 programmers build to an existing, stable environment. They’re just writing applications. Lots of client/server projects sink because the team has to build their ship while they’re trying to sail it. Of course it’s going to leak.

Scaling: The IBM mainframe hardware/software architecture has been optimized and refined over the years to handle high-volume batch processing. Lots of client/server projects include a goal of unplugging the mainframe in favor of cheaper MIPS. This is a great goal, and you should go for it if your system won’t include big batch runs. If it will, you’ll have to build in all sorts of nasty workarounds and kludges, and these will inflate project costs unreasonably.

You won’t win the Indy 500 with a freight train, but you also won’t economically haul grain with a fleet of Porsches.

User Interface: We used to build character-based monochrome interfaces that required users to learn both the business and the technology. Remember training call-center agents on hundreds of transaction codes?

Employees learn how good an interface can be at their local PC software retailer. They rightfully hold IS to a higher standard now. Surprise! Building GUIs, with lots of interface objects, windowing, and extensive built-in business intelligence, takes more time than building 3278 screens.

Programmer Training: We hire trained COBOL programmers. They learned in trade school, or we just put “3 years of COBOL/CICS experience” in the ad. We ask client/server development teams to learn their tools while they build applications. C’mon, folks, what do you expect – perfection on the first try?

So …

When I was studying fish behavior many years ago, I presented some serious statistics to my research advisor. He said, “This is fine, but what does it mean?”

Ask this question whenever you hear silly average-cost statistics from self-styled industry pundits … except, of course, from yours truly.