Ambrose Bierce, the 19th-century cynic, told of the inventor who built a flying machine. When the inventor started the machine, it promptly bored a hole straight to the center of the earth. Leaping free, the inventor was heard to remark: “My invention was perfect in every detail. The problems were merely basic and fundamental.”

Big projects are like that – they dig us all into deep holes, rarely fly, and even when they’re perfect in every detail, they often turn out to be flawed in basic and fundamental ways – the usual consequence of having implementation scheduled years after conception.

For some reason, everyone expresses surprise when IS projects come in late, over budget and with fewer features than promised, even though that’s by far the most common outcome. Since most new IS projects are based on client/server technology, we of course blame the technology, even though, as we saw last week, client/server projects fail neither more nor less often than traditional mainframe systems.

The complexity of project management increases exponentially with the size of the project. This means big projects need exceptional project managers. Unfortunately, exceptional project managers are hard to come by, and they deservedly command salaries that can make IS executives uncomfortable.
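
The arithmetic behind that claim is easy to sketch: with n people on a team there are n(n-1)/2 pairs who may need to coordinate, so communication paths alone grow with the square of head count – before you count any other overhead. The little sketch below is purely my illustration of that formula; nothing in it comes from any particular methodology.

    # Pairwise communication paths on a team of n people: n * (n - 1) / 2.
    def communication_paths(n: int) -> int:
        return n * (n - 1) // 2

    for team_size in (5, 10, 25, 50):
        print(f"{team_size:2d} people: {communication_paths(team_size):4d} paths")

    #  5 people:   10 paths
    # 10 people:   45 paths
    # 25 people:  300 paths
    # 50 people: 1225 paths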

Next week, we’ll look at the basic principles of managing big projects. This week we’ll talk about how to avoid them in the first place, because most can be prevented. Here’s what to strive for:

  • Small Teams: Don’t put more than five people on a project team. Small teams mean low overhead.
  • Quick Delivery: Define projects no more than six months long. When a product is due in 180 days, the team feels a sense of urgency in the first team meeting. The project deliverable, by the way, should provide tangible value, not just something tangible.
  • Restricted User Involvement: End-users should define business processes and system functionality, not system design details. Get agreement on this point up front, and then have frequent, informal contacts rather than formal interviews. Be highly interactive, and learn their business from them.
  • Staged Releases: Make your first release as small as you can. Set up two teams working on staggered schedules. Team One freezes its design three months after starting. Team Two starts designing the next release while Team One codes. Team Two freezes its design three months later, while Team One installs its release (see the sketch after this list). The benefit? You can successfully freeze the design, because it’s easy to add new features to the next release. This gets you out of the trap of “scope creep” that kills so many projects.
  • High-Productivity Tools: Delphi, PowerBuilder, Visual Basic and their competitors all increase programmer productivity by a huge multiplier compared to Cobol or C++. Use procedural languages only when you have no other choice.
  • Simple User Interfaces: GUIs tempt programmers into showing off by building in lots of overlapping pop-up windows with cool interface widgets and heavy mouse action. Make your programmers experts in clean interface design.
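
Here is the staged-release sketch promised above – a minimal illustration of what the staggered calendar might look like. The start date, the three-month stages, and the function names are mine, purely for illustration:

    # Hypothetical staggered two-team schedule: each release freezes its design
    # three months after kickoff and installs three months after that; the other
    # team kicks off the next release at each design freeze.
    from datetime import date

    def add_months(d: date, months: int) -> date:
        month = d.month - 1 + months
        return date(d.year + month // 12, month % 12 + 1, d.day)

    def staggered_schedule(start: date, releases: int = 4, stage_months: int = 3) -> None:
        for r in range(releases):
            team = "One" if r % 2 == 0 else "Two"
            kickoff = add_months(start, r * stage_months)
            print(f"Release {r + 1} (Team {team}): kickoff {kickoff}, "
                  f"design freeze {add_months(kickoff, stage_months)}, "
                  f"installed {add_months(kickoff, 2 * stage_months)}")

    staggered_schedule(date(1996, 1, 1))

Every release runs six months from kickoff to installation, and after the first one something new goes into production every three months.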

Usually, when you’re faced with a big, intimidating project, you can break it up into a series of overlapping, independent, small, manageable projects that match the above characteristics. When you do, you’ll experience several key benefits. Your projects will come in on time. You’ll be able to track changing business requirements.

You’ll also find yourself able to respond to changing company priorities, because you won’t have committed all of your development resources to a single project for a long period of time.

Think small.

An ongoing debate fostered by Stewart Alsop rages over when we’ll unplug the last mainframe. (Does this mean there are debates alsoped by Ed Foster? Inquiring minds want to know.)

Back in the good old days, microcomputers processed eight bits, minicomputers sixteen, and mainframes thirty-two. Then progress happened. The laptop computer I’m using to write this column has more raw processing power than the IBM 360/158 I used in 1980, and even with lowly Windows 95 it crashes less often.

Stewart has concluded we’ll never unplug the last mainframe. I’m forced to agree, because mainframe isn’t a class of technology, it’s a state of mind. The mainframe mentality – central control – has gained renewed popularity.

Sherman, set the Wayback Machine for 1980. Apple Computer dominates the fledgling personal computer market with a 6502 microprocessor, a 40-column screen, and VisiCalc. Accountants flock to this puppy. Why? Because it makes them independent of Data Processing, that’s why.

Well, progress has overtaken us:

  • Various forms of .ini files have made it impossible for end-users to be self-supporting, just as fuel injection spelled the end of home car care.
  • Local Area Networks mean our formerly independent systems now plug into a shared resource, and we may even load software from central file servers.
  • Electronic Mail and shared directories mean we ship files back and forth, which in turn means we have to agree on common file formats.

Progress is just dandy. In this case it means more powerful systems that are easier to use and provide more value than ever before. The price?

The combination of interconnectedness and maintenance complexity has given central IS a logical reason to regain the control it lost when PCs hit their growth curve in the mid-1980s.

Many IS departments now forbid end-users from loading software onto their PCs – only IS-approved standards may be used. That’s fine if IS has a standard – if your employer uses WordPerfect, why should you insist on using WordPro? – but it makes no sense when IS provides no tool and forces users to do without.

Another example of the trend: Not all that long ago, I heard several senior IS executives talk about the importance of getting control over all the “hidden code” that had come into being over the past ten years in their enterprises. The code in question? Formulas in spreadsheets.

Yes, these people seriously believed it would be in their companies’ best interests if IS gained control over the formulas embedded in the various and sundry spreadsheet models employees had created to help them do their jobs.

Why? Two reasons. First, some spreadsheets go into production, serving as crude database management systems that keep track of departmental information. Second, IS supposedly has a far better understanding of how to create consistent “business rules” in ways that encourage code re-use and logical consistency than the end-users who keep on re-inventing the wheel in the various spreadsheets they build.

While clearly absurd (why IS should have any more to say about the contents of an electronic spreadsheet than it does over one created with graph paper, pencils and calculators is beyond me), the trend back to central control is gaining force.

Yes, it’s absolutely true that end-users use spreadsheets to manage databases, using the wrong tool for the job and creating maintenance headaches downstream. I use a screwdriver to open paint cans, for that matter. There are no “Paint Can Tool Police” to stop me, and if I bend the screwdriver, that’s my business.

Duplication of effort is a price companies pay for empowered employees who act independently. Inconsistent spreadsheet formulas are simply the electronic consequence of diverse perspectives about the business.
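
To make that concrete, here’s an invented example – the figures and formulas below are hypothetical, not from any company I’ve seen – of two departments each computing “gross margin,” each convinced its formula is the business rule:

    # Hypothetical: two departments' spreadsheet formulas for "gross margin."
    revenue, cost_of_goods, freight = 100_000.0, 62_000.0, 3_000.0

    sales_margin = (revenue - cost_of_goods) / revenue              # Sales leaves freight out
    finance_margin = (revenue - cost_of_goods - freight) / revenue  # Finance nets it out

    print(f"Sales: {sales_margin:.1%}   Finance: {finance_margin:.1%}")
    # Sales: 38.0%   Finance: 35.0%

Neither formula is wrong; they encode different, legitimate views of the business – which is exactly the point.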

And IS isn’t all that good at consistency itself. It manages multiple databases in which equivalent fields usually have different formats, inconsistent values and, often, subtle differences in the semantics of their definitions.
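
A hypothetical illustration – the field names, codes and systems are invented – of what that looks like when the “same” customer-status fact lives in three IS-managed systems:

    # Hypothetical: one customer-status fact, stored three ways in three systems.
    billing = {"cust_status": "A"}        # char(1): "A" = active, "I" = inactive
    orders = {"customer_sts": 10}         # numeric: 10 = active, 20 = closed
    marketing = {"status": "CURRENT"}     # text: "CURRENT" also covers dormant accounts

    # Reconciling them takes a mapping somebody has to maintain -- and argue over.
    ACTIVE = {"cust_status": {"A"}, "customer_sts": {10}, "status": {"CURRENT"}}

    def is_active(record: dict) -> bool:
        return any(record.get(field) in values for field, values in ACTIVE.items())

    print(is_active(billing), is_active(orders), is_active(marketing))  # True True True

If IS can’t keep its own status codes consistent, lecturing end-users about their spreadsheet formulas rings a little hollow.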

The personal computer was a key enabler of employee empowerment. Resist the trend back to mainframes. Give end-users as much freedom as you can.