Bob’s article is a classic on pilot projects. This rerun pairs well with our previous topics.

I was sitting with Moe, Larry, and Curly at lunch the other day (not their real names but I feel an obligation to protect the guilty) when the conversation turned to information technology.

My colleagues (we’ll call them S3 for short) recently left the military, so their perspective on IT is a bit broader than that of most IS professionals. Moe led off with a mention of genetic algorithms. Here’s how these amazing things work: You feed the computer any old airplane wing design (for example) and a definition of what it means for a wing to be optimal. Let the computer churn for a day or two, and just as an automatic bread-maker magically produces bread, it will pop out an aerodynamically perfect wing design.

The algorithm is called “genetic” because it mimics evolution, randomly mutating the design in small increments and accepting those mutations that improve the design. Very cool stuff. If you support an engineering design group, this technology is in your future.
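The loop described above — randomly mutate, keep only the improvements, repeat — can be sketched in a few lines. This is a toy illustration, not a real wing optimizer: the fitness function, mutation step size, and generation count are all made-up assumptions standing in for the engineers’ “definition of what it means for a wing to be optimal.”

```python
import random

def fitness(design):
    # Toy stand-in for "aerodynamic quality": best when every
    # parameter equals 1.0 (higher fitness is better).
    return -sum((x - 1.0) ** 2 for x in design)

def evolve(design, generations=5000, step=0.05, seed=0):
    rng = random.Random(seed)
    best, best_fit = list(design), fitness(design)
    for _ in range(generations):
        # Randomly mutate one parameter by a small increment...
        mutant = list(best)
        i = rng.randrange(len(mutant))
        mutant[i] += rng.uniform(-step, step)
        # ...and accept the mutation only if it improves the design.
        mutant_fit = fitness(mutant)
        if mutant_fit > best_fit:
            best, best_fit = mutant, mutant_fit
    return best

# Feed it "any old" starting design and let it churn.
wing = evolve([0.0, 0.0, 0.0])
```

After enough generations, `wing` drifts toward the optimum — the same churn-for-a-day-and-out-pops-bread idea, minus the aerodynamics.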

From there, Curly somehow got to artificial intelligence, and in particular the AI golf caddy. Apparently, these little robots actually exist, following you around the golf course and recommending the perfect club for every shot. Larry pointed out the hazards of combining the AI caddy with Y2K: “Carnage on the course,” he called it.

If you haven’t noticed, people are doing amazing things with computers these days. So why is it that most IS departments, in most projects, can’t seem to design a database, create data-entry and transaction-entry screens for it, design and code a bunch of useful reports, and hook it all to the legacy environment without the project going in the ditch?

When I started in this business, a typical big project needed 25 people for three years and was completed about a year after the deadline — if it got completed at all. Compared with the simple compilers we had when I started programming, our integrated development environments should easily make us 100 times more productive. So why is it that as I write this column, a typical big project needs 25 people for three years and is completed about a year after the deadline — if at all?

Do the math, people. One programmer should complete everything in nine months. What’s the problem?

It isn’t, of course, quite that simple. It also isn’t that complicated. Try this: Start with a small but useful subset of the problem. Then, understand the data and design the database. Create edit programs for each table. Work with end-users to jointly figure out what the update transactions are, and design transaction entry screens for each of them. Design a navigation screen that gets you to the edit and transaction screens. Build a simple batch interface to the legacy environment. Do it as fast as you can. Don’t worry about being sloppy — you’re building Quonset huts, not skyscrapers.

Put it all into production with a pilot group of end-users for a month. Turn your programming team into end-users for that period so they experience their system in action first-hand. At the end of the month, start over and do it all again, this time building the system around how the pilot group wants to work. After a month with the new system they’ll have all kinds of ideas on what a system should do for them.

Build Version 2 more carefully, but not too much more carefully because you’re going to loop through the process one more time before you’re done. In parallel with Version 2, though, start building the infrastructure — real-time legacy interfaces, partitioned business logic and so on — that you’ll need for Version 3, the production application that needs a solid n-tier internal architecture and production-grade code.

Does this process work? It has to — it’s just a manual version of a genetic algorithm. I’ve used it on small-scale projects where it’s been very successful, but haven’t yet found anyone willing to risk it on something bigger. Given the risks of traditional methodologies, though (by most estimates, more than 70 percent of all IS projects fail) it almost has to be an improvement.

 

By way of introduction for some of our readers, Memorial Day is the unofficial start to the American summer holidays.  Kids are out of school, families may go on vacations, and there will certainly be some cookouts and beverages.   We will circle back to these traditions in a moment.

For many readers of this column, it is also the day to remember those who died in service to our country. Let’s remember one person — an IT leader, empowered and trusted by the business — whom you may not have heard about: Major General Harold Greene.

I met him when he was still Colonel Greene, in charge of reinventing a large part of the Army’s intelligence-related command systems. Imagine an ERP-like system, more than 20 years ago: fed by low-bandwidth devices, with no edge computing capabilities, trying to manage troop movements, imagery, and communications while integrating with logistics, health and safety, and more.

Yeah, it was complicated.

Colonel Greene brought wisdom, humor, and optimistic yet healthy expectations to the design and deployment of these systems. His combination of engineering acumen and genuine leadership earned him the trust of “The Business,” and he built fair, demanding, and genuine relationships with contractors and engineers. He was, to put it simply, the kind of CIO we should all be lucky enough to work for, or aspire to be.

MG Greene was the highest-ranking US serviceperson to die in Afghanistan. He was married with two children, and was serving there to help train and develop the Afghan security forces. That fit him well: he cared deeply about education and about helping those around him learn.

Brevity being the soul of blogging, it is an excellent day to think about MG Greene, and perhaps find a little inspiration for our work.   Have a cookout, eat a hot dog, and be with your friends and family.  I think MG Greene would approve.