Steven Wright once asked if shaving cream does anything at all beyond helping you keep track.

It’s a good question.

It isn’t just shaving cream whose only role is helping you keep track. Sometimes, that’s the role of the “repeatable, predictable processes” that so many of us highfalutin’ consultants promote as the solution to every business problem.

Before we had process redesign we had Taylor’s “scientific management” and its time-and-motion studies, which tried to turn industrial processes into precisely defined repetitive motions. Beginning with the assumption that business works best when human brains aren’t involved in running it, scientific management led inevitably to repetitive stress disorder. Oops.

We’ve replaced scientific management with process redesign. According to the process perspective, “everything is a process,” a phrase I’ve heard often enough to make me want to argue, just out of spite. “My desk isn’t a process,” I hear myself retort cleverly while I watch ’em fold like pawn-shop accordions. “Neither is my car. Or my …”

“No, no!” they sputter, nonplussed. “We meant to say, everything you do is a process, because everything you do is a series of steps that gets you to the end result.”

Which is absolutely true — everything you do is a process. Everything you do isn’t, however, a Process, a distinction process design consultants often fail to make in their zeal to craft high-quality-producing methods for achieving results. There are three big differences between processes and Processes:

1. Most of the intelligence needed to create the desired results has been built into Processes. In contrast, most of the intelligence needed to successfully follow a process is in the minds of the individuals following it.

2. The products of Processes have well-defined specifications; quality is defined as adherence to those specifications and can be objectively measured. A Process generates either large numbers or a continuous flow of its product. A process also creates an output. That output may be unique or a custom item; often its specifications aren’t known in advance.

3. People fulfill roles in Processes — the Process is at the center. It’s the other way around with processes: People use them to make sure they do things in the right order without forgetting anything. Lower-case processes play a role in employees’ success.

Don’t buy it yet? Think of the difference between the Process of manufacturing a car and the process of creating advertising. You can specify the steps for building a car so precisely that industrial robots can handle it — all of the intelligence is in the Process. Every last detail of the product has exact specifications and tolerances. If you follow the Process exactly, you must end up with a high-quality car.

You can also specify the steps needed to create advertising — you may analyze the marketplace, determine the product’s tangible and emotional benefits for each market segment, and so on. When you’re done, you’ll never end up with a process that can be handled by industrial robots (although many advertisements certainly look as if they were authored by automata). There’s no tight specification for distinguishing good ads from bad ones until you test-market to find out which ones make the cash register ring.

In our quest to make systems development and integration repeatable, predictable, and, most important, an activity we can reliably budget, we keep trying to turn it into a Process.

Systems development should follow a well-defined process, if for no other reason than to make sure we don’t leave anything out.

But a Process? Nope.

A great system is a work of art, both internally and in use. The processes used to create it help programmers focus on getting the job done instead of figuring out what the job is. Following the methodology facilitates great results. Only talented designers and programmers can cause them.

Here’s the wonderful irony of it all: Process redesign consultants don’t follow a Process. Only a process.

I was sitting with Moe, Larry, and Curly at lunch the other day (not their real names but I feel an obligation to protect the guilty) when the conversation turned to information technology.

My colleagues (we’ll call them S3 for short) recently left the military, so their perspective on IT is a bit broader than that of most IS professionals. Moe led off with a mention of genetic algorithms. Here’s how these amazing things work: You feed the computer any old airplane wing design (for example) and a definition of what it means for a wing to be optimal. Let the computer churn for a day or two, and just as an automatic bread-maker magically produces bread, it will pop out an aerodynamically perfect wing design.

The algorithm is called “genetic” because it mimics evolution, randomly mutating the design in small increments and accepting those mutations that improve the design. Very cool stuff. If you support an engineering design group, this technology is in your future.
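The mutate-and-keep-improvements loop described above can be sketched in a few lines. This is a toy illustration, not a real aerodynamics tool: the “wing design” is just a list of numbers, and “optimal” is stubbed in as matching a made-up target profile.

```python
import random

# Hypothetical "perfect wing" profile -- an illustrative stand-in for a
# real definition of what makes a wing optimal.
TARGET = [0.2, 0.5, 0.9, 0.5, 0.2]

def fitness(design):
    # Higher is better: negative sum of squared errors against the target.
    return -sum((d - t) ** 2 for d, t in zip(design, TARGET))

def evolve(generations=5000, seed=42):
    rng = random.Random(seed)
    # Start from "any old design": random numbers.
    design = [rng.random() for _ in TARGET]
    for _ in range(generations):
        # Randomly mutate the design in small increments...
        mutant = [d + rng.gauss(0, 0.05) for d in design]
        # ...and accept only mutations that improve it.
        if fitness(mutant) > fitness(design):
            design = mutant
    return design

best = evolve()
```

Let it churn long enough and `best` converges on the target — the bread-maker pops out the loaf.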

From there, Curly somehow got to artificial intelligence, and in particular the AI golf caddy. Apparently, these little robots actually exist, following you around the golf course and recommending the perfect club for every shot. Larry pointed out the hazards of combining the AI caddy with Y2K: “Carnage on the course,” he called it.

If you haven’t noticed, people are doing amazing things with computers these days. So why is it that most IS departments, in most projects, can’t seem to design a database, create data-entry and transaction-entry screens for it, design and code a bunch of useful reports, and hook it all to the legacy environment without the project going in the ditch?

When I started in this business, a typical big project needed 25 people for three years and was completed about a year after the deadline — if it got completed at all. Compared with the simple compilers we had when I started programming, our integrated development environments should easily make us 100 times more productive. So why is it that as I write this column, a typical big project needs 25 people for three years and is completed about a year after the deadline — if at all?

Do the math, people. One programmer should complete everything in nine months. What’s the problem?
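For anyone who wants to check the arithmetic, here it is using the column’s own numbers — 25 people, three years, and a claimed 100-fold productivity gain:

```python
# Back-of-the-envelope math from the column's figures.
people, years = 25, 3
person_months = people * years * 12   # 900 person-months of effort
speedup = 100                         # the claimed IDE-era productivity gain

print(person_months / speedup)        # 9.0 -> one programmer, nine months
```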

It isn’t, of course, quite that simple. It also isn’t that complicated. Try this: Start with a small but useful subset of the problem. Then, understand the data and design the database. Create edit programs for each table. Work with end-users to jointly figure out what the update transactions are, and design transaction entry screens for each of them. Design a navigation screen that gets you to the edit and transaction screens. Build a simple batch interface to the legacy environment. Do it as fast as you can. Don’t worry about being sloppy — you’re building Quonset huts, not skyscrapers.

Put it all into production with a pilot group of end-users for a month. Turn your programming team into end-users for that period so they experience their system in action first-hand. At the end of the month, start over and do it all again, this time building the system around how the pilot group wants to work. After a month with the new system they’ll have all kinds of ideas on what a system should do for them.

Build Version 2 more carefully, but not too much more carefully because you’re going to loop through the process one more time before you’re done. In parallel with Version 2, though, start building the infrastructure — real-time legacy interfaces, partitioned business logic and so on — that you’ll need for Version 3, the production application that needs a solid n-tier internal architecture and production-grade code.
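The three-pass loop the last few paragraphs describe can be sketched as pseudocode. Everything here is illustrative — the function names, the “care” labels, and the stubbed pilot feedback are hypothetical, not a real methodology API:

```python
# Hypothetical sketch of the three-version approach: build fast, pilot for
# a month, rebuild around what the pilot group learned, repeat.
def develop(passes=("Quonset hut", "more careful", "production-grade")):
    feedback, versions = None, []
    for n, care in enumerate(passes, start=1):
        system = {"version": n, "care": care, "built_from": feedback}
        if n == 2:
            # In parallel with Version 2, start the real infrastructure:
            # real-time legacy interfaces, partitioned business logic, etc.
            system["infrastructure_started"] = True
        # Month-long pilot with end-users (stubbed); its lessons feed
        # the next pass.
        feedback = f"pilot-group feedback on v{n}"
        versions.append(system)
    return versions

for v in develop():
    print(v["version"], v["care"])
```

Each version is built from the previous pilot’s feedback, which is the whole point: the requirements emerge from use, not from a specification written in advance.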

Does this process work? It has to — it’s just a manual version of a genetic algorithm. I’ve used it on small-scale projects, where it’s been very successful, but I haven’t yet found anyone willing to risk it on something bigger. Given the risks of traditional methodologies, though (by most estimates, more than 70 percent of all IS projects fail), it almost has to be an improvement.