Ambrose Bierce, the 19th-century cynic, told of the inventor who built a flying machine. When the inventor started the machine, it promptly bored a hole straight to the center of the earth. Leaping free, the inventor was heard to remark: “My invention was perfect in every detail. The problems were merely basic and fundamental.”

Big projects are like that – they dig us all into deep holes, rarely fly, and even when they’re perfect in every detail, they often turn out to be flawed in basic and fundamental ways – the usual consequence of scheduling implementation years after conception.

For some reason, everyone expresses surprise when IS projects come in late, over budget, and with fewer features than promised, even though that’s by far the most common outcome. Since most new IS projects are based on client/server technology, we of course blame the technology, even though, as we saw last week, client/server projects fail neither more nor less often than traditional mainframe systems.

The complexity of project management increases exponentially with the size of the project. This means big projects need exceptional project managers. Unfortunately, exceptional project managers are hard to come by, and they deservedly command salaries that can make IS executives uncomfortable.

Next week, we’ll look at the basic principles of managing big projects. This week we’ll talk about how to avoid them in the first place, because most can be prevented. Here’s what to strive for:

  • Small Teams: Don’t put more than five people on a project team. Small teams mean low overhead.
  • Quick Delivery: Define projects no more than six months long. When a product is due in 180 days, the team feels a sense of urgency in the first team meeting. The project deliverable, by the way, should provide tangible value, not just something tangible.
  • Restricted User Involvement: End-users should define business processes and system functionality, not system design details. Get agreement on this point up front, and then have frequent, informal contacts rather than formal interviews. Be highly interactive, and learn their business from them.
  • Staged Releases: Make your first release as small as you can. Set up two teams working on staggered schedules. Team One freezes its design three months after starting. Team Two starts designing the next release while Team One codes. Team Two freezes its design three months later, while Team One installs its release. The benefit? You can successfully freeze the design, because it’s easy to add new features to the next release. This gets you out of the trap of “scope creep” that kills so many projects.
  • High-Productivity Tools: Delphi, Powerbuilder, Visual Basic and their competitors all increase programmer productivity by a huge multiplier compared to Cobol or C++. Only use procedural languages when you have no other choice.
  • Simple User Interfaces: GUIs tempt programmers into showing off by building in lots of overlapping pop-up windows with cool interface widgets and heavy mouse action. Make your programmers experts in clean interface design.
Usually, when you’re faced with a big, intimidating project, you can break it up into a series of overlapping, independent, small, manageable projects that match the above characteristics. When you do, you’ll experience several key benefits. Your projects will come in on time. You’ll be able to track changing business requirements.
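One way to see how the staggered schedule described above plays out is to lay both teams’ milestones on a calendar. Here’s a minimal sketch; the start date, the three-month phase length, and the function name are illustrative assumptions, not anything specified in the column:

```python
from datetime import date, timedelta

PHASE = timedelta(days=90)  # assumed three-month design/code/install phases


def staggered_milestones(start, releases=3):
    """Return (team, release, design-freeze date, install date) tuples
    for two teams working staggered three-month schedules."""
    schedule = []
    for r in range(releases):
        team = "Team One" if r % 2 == 0 else "Team Two"
        design_start = start + r * PHASE  # each release starts one phase later
        freeze = design_start + PHASE     # design frozen three months in
        install = freeze + PHASE          # coding done, release installed
        schedule.append((team, r + 1, freeze, install))
    return schedule


for team, rel, freeze, install in staggered_milestones(date(1996, 1, 1)):
    print(f"{team}: release {rel} freezes {freeze}, installs {install}")
```

Note how the arithmetic enforces the overlap: each team’s install date coincides with the other team’s design freeze, so while one team codes, the other designs – which is exactly what makes it painless to defer a new feature to the next release.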

You’ll also find yourself able to respond to changing company priorities, because you won’t have committed all of your development resources to a single project for a long period of time.

Think small.
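The small-teams rule above has an arithmetic backbone: coordination overhead grows with the number of pairwise communication channels in a team, n(n-1)/2 – the observation Fred Brooks popularized in The Mythical Man-Month. A quick sketch of the numbers (the function name is mine):

```python
def channels(n):
    """Pairwise communication channels in a team of n people."""
    return n * (n - 1) // 2


for size in (5, 10, 20):
    print(f"{size} people: {channels(size)} channels")
# a 5-person team has 10 channels; a 20-person team has 190
```

Quadrupling the team from 5 to 20 people multiplies the coordination channels nineteen-fold, which is why small teams mean low overhead.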

Technology … all successful technology … follows a predictable life cycle: Hype, Disillusionment, Application.

Some academic type or other hatches a nifty idea in a university lab, and industry pundits explain why it will never fly (it’s impossible in the first place, it won’t scale up, it’s technology-driven instead of a response to customer demand … you know the predictable litany of nay-saying foolishness).

When it flies anyway, the Wall Street Journal runs an article proclaiming it to be real, and everyone starts hyping the daylights out of it, creating hysterical promises of its wonders.

Driven by piles of money, early adopters glom onto the technology and figure out how to make it work outside the lab. For some reason, people express surprise at how complicated it turns out to be, and become disillusioned that it didn’t get us to Mars, cure cancer, or repel sharks without costing more than a dime.

As this disillusionment reaches a crescendo of I-told-you-so-ism, led by headline-grabbing cost accountants brandishing wildly inflated cost estimates, unimpressed professionals figure out what the technology is really good for, and make solid returns on their investments in it.

Client/server technology has just entered the disillusionment phase. I have proof – a growing collection of recent articles proclaiming the imminent demise of client/server computing. Performance problems and cost overruns are killing it, we’re told, but Intranets will save it.

Perfect: a technology hitting its stride in the Hype phase will rescue its predecessor from Disillusionment.

What a bunch of malarkey.

It’s absolutely true that far too many client/server development projects run way over the originally estimated cost. It’s also true that most client/server implementations experience performance problems.

Big deal. Here’s a fact: most information systems projects, regardless of platform, experience cost overruns, implementation delays, and initial performance problems, if they ever get finished at all. Neither the problem nor the solution has anything to do with technology – look, instead, to ancient and poorly conceived development methodologies, poor project management, and a bad job of managing expectations.

I’m hearing industry “experts” talk about costs three to six times greater than for comparable mainframe systems – and these are people who ought to know better.

I have yet to see a mainframe system that’s remotely comparable to a client/server system. If anyone bothered to create a client/server application that used character-mode screens to provide the user-hostile interface typical of mainframe systems, the cost comparison would look very different. The cost of GUI design and coding is being assigned to the client/server architecture, leading to a lot of unnecessary confusion. But of course, a headline reading, “GUIs Cost More than 3278 Screens!” wouldn’t grab much attention.

And this points us to the key issue: the client/server environment isn’t just a different kind of mainframe. It’s a different kind of environment with different strengths, weaknesses, and characteristics. Client/server projects get into the worst trouble when developers ignore those differences.

Client/server systems do interactive processing very well. Big batch runs tend to create challenges. Mainframes are optimized for batch, with industrial-strength scheduling systems and screamingly fast block I/O processing. They’re not as good, though, at on-line interactive work.

You can interface client/server systems to anything at all with relative ease. You interface with mainframe systems either by emulating a terminal and “screen-scraping,” by buying hyper-expensive middleware gateways (I wonder how much of the typical client/server cost overrun comes from the need for interfaces with legacy systems?), or by wrestling with the arcane issues of setting up and interfacing with LU6.2 program-to-program communication.

And of course, the development tools available for client/server development make those available for mainframes look sickly. Here’s a question for you to ponder: Delphi, Powerbuilder and Visual Basic all make a programmer easily 100 times more productive than languages like Cobol. So why aren’t we building the same size systems today with 1/100th the staff?

The answer is left as an exercise for the reader.