I was sitting with Moe, Larry, and Curly at lunch the other day (not their real names but I feel an obligation to protect the guilty) when the conversation turned to information technology.

My colleagues (we’ll call them S3 for short) recently left the military, so their perspective on IT is a bit broader than that of most IS professionals. Moe led off with a mention of genetic algorithms. Here’s how these amazing things work: You feed the computer any old airplane wing design (for example) and a definition of what it means for a wing to be optimal. Let the computer churn for a day or two, and just as an automatic bread-maker magically produces bread, it will pop out an aerodynamically perfect wing design.

The algorithm is called “genetic” because it mimics evolution, randomly mutating the design in small increments and accepting those mutations that improve the design. Very cool stuff. If you support an engineering design group, this technology is in your future.
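If you've never seen one, here's a minimal sketch of the loop Moe described: nudge a design at random, keep the change only when it scores better. Everything in it is invented for illustration; the fitness function is a toy stand-in for "what it means for a wing to be optimal," and a real genetic algorithm would breed a whole population with crossover rather than nursing a single design.

```python
import random

def optimize(design, fitness, steps=100_000, scale=0.01):
    """Mutate-and-keep-improvements loop, in the spirit of a genetic algorithm."""
    best = list(design)
    best_score = fitness(best)
    for _ in range(steps):
        candidate = [x + random.gauss(0, scale) for x in best]  # small random mutation
        score = fitness(candidate)
        if score > best_score:  # accept only mutations that improve the design
            best, best_score = candidate, score
    return best, best_score

if __name__ == "__main__":
    # "Any old wing design": five made-up parameters. The toy definition of
    # optimal is simply "as close to all zeros as possible."
    wing = [random.uniform(-1, 1) for _ in range(5)]
    result, score = optimize(wing, lambda d: -sum(x * x for x in d))
    print(result, score)
```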

From there, Curly somehow got to artificial intelligence, and in particular the AI golf caddy. Apparently, these little robots actually exist, following you around the golf course and recommending the perfect club for every shot. Larry pointed out the hazards of combining the AI caddy with Y2K: “Carnage on the course,” he called it.

If you haven’t noticed, people are doing amazing things with computers these days. So why is it that most IS departments, in most projects, can’t seem to design a database, create data-entry and transaction-entry screens for it, design and code a bunch of useful reports, and hook it all to the legacy environment without the project going in the ditch?

When I started in this business, a typical big project needed 25 people for three years and was completed about a year after the deadline — if it got completed at all. Compared with the simple compilers we had when I started programming, our integrated development environments should easily make us 100 times more productive. So why is it that as I write this column, a typical big project needs 25 people for three years and is completed about a year after the deadline — if at all?

Do the math, people. Twenty-five people for three years is 75 person-years; divide by 100 and one programmer should complete everything in about nine months. What's the problem?

It isn’t, of course, quite that simple. It also isn’t that complicated. Try this: Start with a small but useful subset of the problem. Then, understand the data and design the database. Create edit programs for each table. Work with end-users to jointly figure out what the update transactions are, and design transaction entry screens for each of them. Design a navigation screen that gets you to the edit and transaction screens. Build a simple batch interface to the legacy environment. Do it as fast as you can. Don’t worry about being sloppy — you’re building Quonset huts, not skyscrapers.

Put it all into production with a pilot group of end-users for a month. Turn your programming team into end-users for that period so they experience their system in action first-hand. At the end of the month, start over and do it all again, this time building the system around how the pilot group wants to work. After a month with the new system they’ll have all kinds of ideas on what a system should do for them.

Build Version 2 more carefully, but not too much more carefully because you’re going to loop through the process one more time before you’re done. In parallel with Version 2, though, start building the infrastructure — real-time legacy interfaces, partitioned business logic and so on — that you’ll need for Version 3, the production application that needs a solid n-tier internal architecture and production-grade code.

Does this process work? It has to — it’s just a manual version of a genetic algorithm. I’ve used it on small-scale projects where it’s been very successful, but haven’t yet found anyone willing to risk it on something bigger. Given the risks of traditional methodologies, though (by most estimates, more than 70 percent of all IS projects fail) it almost has to be an improvement.

Here’s a common question: “Should we be using Linux?”

I have a standard answer: “I have no idea.”

As described last week, vendor/product decisions are social beasts, best addressed through the When/Who/How/Why/What formula. Determine when you have to make the decision, who should be involved, and how you’re going to make it. If everyone commits to “How” — that is, to the process you’ll employ in making the decision — then you’ll be able to reach consensus on why it’s important and what the organization should do.

“How” — the decision process itself — is the real work of the formula and this week’s subject. Since Linux triggered this discussion, our focus will be on platform-layer decisions. Application-layer decisions are similar, although with much heavier involvement on the part of the end-user community. It’s an eight-step process.

Step 1: List about five candidate products and vendors. It should be a diverse list, including both mainstream and wild-card candidates. (Avoid the popular practice of creating a rigged list so you can just go through the motions.)

Step 2: Determine the important features. In the platform layer, this mostly translates to applications and services. Make this list generic (“directory service”) rather than product-specific (“NDS”). Include price as a feature.

Step 3: Establish integration requirements. For database servers, you may specify ODBC compliance. For servers, you may require compatibility with your data center management system. Just as new employees must be able to fit into the team, so products must be able to fit into the network.

Step 4: Specify vendor requirements. List what matters to you, not the vendor’s internal characteristics. The availability of experts and third-party enhancements matters. The potential for a product to become an orphan matters. So do the quality of support and the vendor’s announced plans for the product. The vendor’s financial strength and the product’s market presence matter only insofar as they lead to one of these; by themselves they don’t.

Step 5: Establish scoring criteria. For example, you listed key applications and services in generic terms; if support for a specific application or product matters, give higher scores to candidates that provide it. Built-in services may rate higher scores than third-party solutions. And so on. The key: Know how you’ll score features, integration requirements, and vendor characteristics before you do any research, not after.

Step 6: Blend requirements into a single list and assign weightings to each item. Use a three-point scale: 3 is a deal-breaker, 2 is important, 1 is useful. If the group is small, assign the weightings through group consensus. If it’s a big group, consensus is too unwieldy so you’ll just have to vote on it: Give everyone involved a fixed number of 3s, 2s and 1s to vote with and tabulate the results.

Step 7: Gather data. Read product comparisons, talk to the vendors, issue a request for proposal if you must — and use it to score the products.

Step 8: Pick the winner. Remember, though, this scoring system isn’t precise. If the top product scores 132 and the next in line scores 128, treat them as tied. Also remember, vendors have been known to exaggerate, so run a pilot with the top-scoring candidate (or the top two if they’re close) before making a final decision. (A rough sketch of the tabulation follows these steps.)
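To make Steps 6 through 8 concrete, here's one way the tabulation might look. The requirement names, weights, and scores below are all invented; the only rules borrowed from the steps above are the 3/2/1 weightings, a weighted total per candidate, and the idea that a small gap between the top two is a tie that sends both to a pilot.

```python
# Hypothetical tabulation for Steps 6-8. Weights use the 3/2/1 scale from Step 6;
# scores are whatever your research in Step 7 produced (0-10 here, all invented).
weights = {"directory service": 3, "data-center mgmt integration": 3,
           "third-party support": 2, "price": 2, "vendor roadmap": 1}

scores = {
    "Product A": {"directory service": 9, "data-center mgmt integration": 7,
                  "third-party support": 8, "price": 5, "vendor roadmap": 6},
    "Product B": {"directory service": 7, "data-center mgmt integration": 8,
                  "third-party support": 7, "price": 7, "vendor roadmap": 5},
}

def total(candidate):
    # Weighted sum across every requirement on the blended list.
    return sum(weights[req] * scores[candidate][req] for req in weights)

totals = {name: total(name) for name in scores}
ranked = sorted(totals.items(), key=lambda item: item[1], reverse=True)
print(ranked)

# Step 8: the scoring system isn't precise, so treat a small gap as a tie.
(top, top_score), (runner_up, runner_score) = ranked[0], ranked[1]
if top_score - runner_score <= 0.05 * top_score:  # arbitrary "close enough" threshold
    print(f"Treat {top} and {runner_up} as tied; pilot both.")
else:
    print(f"Pilot {top} before committing.")
```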

Now … how does Linux fit into this process?

Very nicely, if you handled Step 4 properly. With an open-source product, support comes entirely from third parties (true of many traditional products as well), the product evolves by duplicating the successful innovations of commercial competitors, and the personal commitment of its advocates is part of the orphan-hood question. The vendor issues that matter to you, though, are the same.

Product and vendor decisions easily become emotional issues. The way to combat this is for everyone involved to attach their emotions to the selection process instead. Do it by the numbers and you’ll make a good decision. Most important of all, it will be a decision everyone can support.