I recently had the pleasure of reading Richard Dawkins’ River Out of Eden. I haven’t spent time with Dawkins since reading his influential The Selfish Gene two decades ago. In both books, Dawkins explores the ramifications of a DNA-centric view of natural selection. Begin with the premise that bodies are just DNA’s way of making more DNA, and the consequences are plentiful, fascinating — and very helpful in understanding how businesses operate, a subject we’ll explore in future columns.

Dawkins is, among other things, a modern Thomas Henry Huxley, who joyfully and convincingly demolishes the arguments of those who reject evolutionary theory. In Eden he takes particular pleasure in skewering the intellectual sin of what he calls “Argument from Personal Incredulity” (API).

API begins with an accurate statement: “I don’t see how that could be possible.” The implication — that because you don’t see how it can work, it can’t work — replaces logic with a sizable dose of arrogance.

Arrogance? ’Fraid so. Applied to evolution, API means your inability to figure it out outweighs lifetimes of hard work and deep thought by thousands of geniuses who have researched, modified, refined and extended Darwin’s work over more than a century. Ah, what did they know, anyway?

Natural selection is one thing. If you don’t feel like accepting this thoroughly researched scientific theory, that’s your privilege. The problem is, plenty of managers apply API to their day-to-day decision-making. How about you?

Business is as filled with interesting ideas as a Greek restaurant is with savory vittles. Should you augment financial statements with a balanced scorecard? Perhaps you should start calculating “Economic Value Added” (EVA). On a technical note, there’s the potential for use-case analysis to replace traditional methodologies.

You walk a fine line when you evaluate new ideas. Accept them all and you’re following the fad of the month. Reject them all and you invite stagnation.

It’s tempting to apply API, embracing what fits your biases while rejecting the rest as unworkable. Ever say, “It doesn’t work that way in this company”? That’s API: you’ve decided it can’t work because you don’t personally understand how it can. Then there’s the popular, “It’s a great theory, but …” Ever wonder what would have happened if Franklin Delano Roosevelt had said that to Albert Einstein?

Okay, both API and automatic acceptance of the experts are wrong. What’s right?

The first step in resolving this dilemma is simply to match new ideas to your top priorities. At any given moment, a good leader will be sponsoring between one and three high-level goals: significant changes that will make a real difference to the company. Unless an idea can help you achieve one of those current goals, screen it out as interesting but unimportant, or maybe file it away for future use.

Next, assess how widely each idea has been tested.

This assessment shouldn’t drive your choice, just your method of evaluation. A new and untested idea, for example, may be just what you need. Analyze it closely, though. Great ideas live or die in the details, and in the absence of wide real-world use you’ll have to figure them out yourself.

Many of the most highly hyped ideas have been applied in only one, or maybe just a few, companies. In these cases it’s the glowing descriptions of success that call for scrutiny. Sometimes what looks like success on the surface is really a rosy story of how great everything is going to be someday. Or the success is real enough, but the great idea isn’t what caused it. Or you may be reading a history written by the survivors. Regardless, make sure you understand the circumstances of each success before you decide to replicate it yourself.

Then there are ideas that have been widely deployed and are generally accepted. Should you just accept them too and put them into practice?

Since this column challenges popular, widely accepted ideas on a regular basis, that clearly isn’t the right answer. But what is?

Tune in next week to find out.

I was sitting with Moe, Larry, and Curly at lunch the other day (not their real names, but I feel an obligation to protect the guilty) when the conversation turned to information technology.

My colleagues (we’ll call them S3 for short) recently left the military, so their perspective on IT is a bit broader than that of most IS professionals. Moe led off with a mention of genetic algorithms. Here’s how these amazing things work: You feed the computer any old airplane wing design (for example) and a definition of what it means for a wing to be optimal. Let the computer churn for a day or two, and just as an automatic bread-maker magically produces bread, it will pop out a highly optimized wing design.

The algorithm is called “genetic” because it mimics evolution, randomly mutating the design in small increments and accepting those mutations that improve the design. Very cool stuff. If you support an engineering design group, this technology is in your future.
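Here’s a minimal sketch of the idea in Python. The toy fitness function stands in for a real aerodynamics simulation, and strictly speaking, mutate-and-keep-the-winners is the simplest possible evolutionary strategy; a full genetic algorithm adds a population of designs and crossover between them.

```python
import random

# Toy stand-in for the aerodynamics model: fitness peaks when every
# parameter of the "wing" equals 0.5. A real evaluator would be a
# simulation that takes hours, not a one-liner.
def fitness(design):
    return -sum((x - 0.5) ** 2 for x in design)

def mutate(design, step=0.05):
    # Randomly perturb one parameter by a small increment.
    child = design[:]
    i = random.randrange(len(child))
    child[i] += random.uniform(-step, step)
    return child

# Start with "any old wing design" and let it churn, keeping only
# the mutations that improve the design.
design = [random.random() for _ in range(8)]
for _ in range(100_000):
    candidate = mutate(design)
    if fitness(candidate) > fitness(design):
        design = candidate

print([round(x, 3) for x in design])  # creeps toward all 0.5s
```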

From there, Curly somehow got to artificial intelligence, and in particular the AI golf caddy. Apparently, these little robots actually exist, following you around the golf course and recommending the perfect club for every shot. Larry pointed out the hazards of combining the AI caddy with Y2K: “Carnage on the course,” he called it.

If you haven’t noticed, people are doing amazing things with computers these days. So why is it that most IS departments, in most projects, can’t seem to design a database, create data-entry and transaction-entry screens for it, design and code a bunch of useful reports, and hook it all to the legacy environment without the project going in the ditch?

When I started in this business, a typical big project needed 25 people for three years and was completed about a year after the deadline — if it got completed at all. Compared with the simple compilers we had when I started programming, our integrated development environments should easily make us 100 times more productive. So why is it that as I write this column, a typical big project needs 25 people for three years and is completed about a year after the deadline — if at all?

Do the math, people: 25 programmers for three years is 75 person-years, and a hundredfold productivity gain shrinks that to nine person-months. One programmer should complete everything in nine months. What’s the problem?

It isn’t, of course, quite that simple. It also isn’t that complicated. Try this: Start with a small but useful subset of the problem. Then, understand the data and design the database. Create edit programs for each table. Work with end-users to jointly figure out what the update transactions are, and design transaction entry screens for each of them. Design a navigation screen that gets you to the edit and transaction screens. Build a simple batch interface to the legacy environment. Do it as fast as you can. Don’t worry about being sloppy — you’re building Quonset huts, not skyscrapers.
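To make the Quonset-hut spirit concrete, here’s what one of those throwaway edit programs might look like in Python with sqlite3. The customer table is a hypothetical example; the real point is that every table gets something this crude, built in an afternoon.

```python
import sqlite3

# One sloppy edit program per table, plus a menu loop that doubles as
# the navigation screen. Speed matters here; polish doesn't.
db = sqlite3.connect("pilot.db")
db.execute("""CREATE TABLE IF NOT EXISTS customer (
    id INTEGER PRIMARY KEY, name TEXT, phone TEXT)""")

def list_rows():
    for row in db.execute("SELECT id, name, phone FROM customer"):
        print(row)

def add_row():
    db.execute("INSERT INTO customer (name, phone) VALUES (?, ?)",
               (input("name: "), input("phone: ")))
    db.commit()

def edit_row():
    rowid = input("id to change: ")
    db.execute("UPDATE customer SET name = ?, phone = ? WHERE id = ?",
               (input("new name: "), input("new phone: "), rowid))
    db.commit()

while True:
    choice = input("[l]ist, [a]dd, [e]dit, [q]uit: ").strip().lower()
    if choice == "l":
        list_rows()
    elif choice == "a":
        add_row()
    elif choice == "e":
        edit_row()
    elif choice == "q":
        break
```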

Put it all into production with a pilot group of end-users for a month. Turn your programming team into end-users for that period so they experience their system in action first-hand. At the end of the month, start over and do it all again, this time building the system around how the pilot group wants to work. After a month with the new system they’ll have all kinds of ideas on what a system should do for them.

Build Version 2 more carefully, but not too much more carefully because you’re going to loop through the process one more time before you’re done. In parallel with Version 2, though, start building the infrastructure — real-time legacy interfaces, partitioned business logic and so on — that you’ll need for Version 3, the production application that needs a solid n-tier internal architecture and production-grade code.
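What does “partitioned business logic” buy you? Here’s a rough sketch, reusing the hypothetical customer table from above. By Version 3 the screens no longer touch the database directly, so you can swap the console for a GUI, or the batch legacy interface for a real-time one, without rewriting the rules.

```python
import sqlite3

# Data tier: persistence and nothing else.
class CustomerStore:
    def __init__(self, db):
        self.db = db

    def save(self, name, phone):
        self.db.execute("INSERT INTO customer (name, phone) VALUES (?, ?)",
                        (name, phone))
        self.db.commit()

# Business tier: the rules live here, not in the screens.
class CustomerService:
    def __init__(self, store):
        self.store = store

    def register(self, name, phone):
        if not name.strip():
            raise ValueError("name is required")
        self.store.save(name.strip(), phone.strip())

# Presentation tier: screens call the service, never the database.
def add_customer_screen(service):
    service.register(input("name: "), input("phone: "))

db = sqlite3.connect("pilot.db")
db.execute("""CREATE TABLE IF NOT EXISTS customer (
    id INTEGER PRIMARY KEY, name TEXT, phone TEXT)""")
add_customer_screen(CustomerService(CustomerStore(db)))
```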

Does this process work? It has to: it’s just a manual version of a genetic algorithm. I’ve used it on small-scale projects, where it’s been very successful, but haven’t yet found anyone willing to risk it on something bigger. Given the risks of traditional methodologies, though (by most estimates, more than 70 percent of all IS projects fail), it almost has to be an improvement.