
Not-so-extreme programming


eXtreme programming is a shame.

Understand, there’s a lot to be said for it. I have nothing against it. Smart people have extolled its benefits in print, and in person. It undoubtedly works very well.

But it’s still a shame, because it was carefully packaged to scare the living daylights out of a typical CIO.

When you think of eXtreme programming, what comes to mind first? See? A CIO’s first thought is almost certainly, “Two programmers at one keyboard? There’s no way on earth I can afford to literally cut programmer productivity in half. What’s next on the agenda?”

Or, the CIO will hear the word “extreme” and immediately tune out everything else, because extreme means risk and risk means waiting until other companies make it mainstream.

But doubling up programmers is, while interesting, a nit. Here’s why eXtreme programming, or some other “adaptive methodology,” should be an easy sell:

If you ask business executives what IT does worst, the most common answer is probably project completion. Ask them what IT does best, and you hear about application maintenance and small enhancements — responsibilities most IT organizations address with great competence.

What adaptive methodologies have done is to turn big-bang application development into development by continuous enhancement. They start by building something small that works and adding to it until there’s something big that works. They play, that is, to IT’s greatest strength. That should make sense to even the most curmudgeonly of CIOs.

As with everything else on this planet, the great strength of adaptive methodologies is also the cause of their biggest weaknesses, weaknesses they share with old-fashioned application enhancement.

The first is the risk of accidental architecture. To address this issue, adaptive methodologies rely heavily on “refactoring,” which sounds an awful lot like changing the plumbing after you’ve finished the building.

You can reduce the need for refactoring by beginning with a "functional design" effort that publishes an architectural view of the business as well as the overall technology plan. It's also important to make sure the development effort starts with the components that constitute a logical architectural hub, rather than (for example) taping a list of the functional modules to a wall and throwing a dart at it.

The second risk is colliding requirements. With ongoing enhancements to mature applications, there's a risk that this month's enhancement is logically inconsistent with a different enhancement put into production three years ago. With adaptive methodologies, the time frame is closer to three weeks ago, but the same potential exists: To a certain extent they replace up-front requirements and specifications with features-as-they-occur-to-someone. It's efficient, but not a sure route to consistency.

How can you deal with colliding requirements? Once again, take a page from how you handle (or should be handling) system enhancements. In most situations, you’re better off bundling enhancements into scheduled releases than putting them into production one at a time. This gives you a fighting chance of spotting colliding requirements. As a fringe benefit it amortizes the cost of your change control process across a collection of enhancements. (Here’s an off-the-topic tip: If your developers like your change control process you need to improve your change control process. But I digress.)
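The bundling idea can be sketched in a few lines. Assume each enhancement in a scheduled release records which modules it touches (the enhancement IDs and module names here are invented for illustration); any two enhancements that touch the same module are collision candidates worth reviewing together before the release ships.

```python
from itertools import combinations

# A hypothetical release bundle: (enhancement ID, modules it touches).
release_bundle = [
    ("ENH-101", {"pricing", "orders"}),
    ("ENH-102", {"reporting"}),
    ("ENH-103", {"pricing", "billing"}),
]

# Two enhancements touching the same module are collision candidates.
# Reviewing them together is the "fighting chance" the bundle buys you.
collisions = [
    (a, b)
    for (a, mods_a), (b, mods_b) in combinations(release_bundle, 2)
    if mods_a & mods_b
]
print(collisions)  # [('ENH-101', 'ENH-103')] -- both touch pricing
```

A one-at-a-time deployment process never sees ENH-101 and ENH-103 side by side; the bundle makes the overlap visible before production does.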

The same principle applies to adaptive methodologies. As a very smart application development manager explained it to me, “My goal isn’t to have frequent releases. The business couldn’t handle that anyway. What I want is to have frequent releasable builds.”
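That distinction, releasable versus released, is easy to encode. A minimal sketch (the checklist items and function names are hypothetical stand-ins for whatever your shop's release criteria are):

```python
# "Releasable" means the build could ship today, not that it will.
def build_is_releasable(tests_pass, docs_current, migrations_ready):
    return tests_pass and docs_current and migrations_ready

# Shipping is a separate, business-driven decision.
def should_ship(releasable, on_release_schedule):
    return releasable and on_release_schedule

nightly = build_is_releasable(True, True, True)
print(should_ship(nightly, on_release_schedule=False))  # False
```

Every build clears the full release checklist; only the scheduled ones go out the door.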

Yeah, but who cares? As last week’s column argued so persuasively (how’s that for being humble?) most IT shops purchase and integrate, rarely developing internal applications, and integration methodologies aren’t the same as development methodologies. Are there adaptive integration methodologies?

It’s a good question for which the answer is still emerging. Right now, it’s “kinda.” The starting point is so obvious it’s barely worth printing: Implement big packages one module at a time. If the package isn’t organized into modules, buy a competing package that is.

Which leads to the question of which module to implement first. The wrong answer is to implement the module with the biggest business benefit. The right answer is to start with the application’s architectural hub. That will minimize the need to build ad hoc interfaces.
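One way to find that hub is to count connections: the module that the most other modules depend on (or that touches the most others) is the one whose absence forces the most ad hoc interfaces. A sketch, using an invented module map rather than any real package's structure:

```python
# Hypothetical module dependency map: module -> modules it depends on.
dependencies = {
    "customer_master": [],
    "orders":          ["customer_master"],
    "inventory":       ["customer_master"],
    "billing":         ["orders", "customer_master"],
    "reporting":       ["orders", "billing", "customer_master"],
}

# Connectivity = dependencies a module has + modules that depend on it.
def connectivity(module):
    outbound = len(dependencies[module])
    inbound = sum(module in deps for deps in dependencies.values())
    return outbound + inbound

hub = max(dependencies, key=connectivity)
print(hub)  # customer_master -- nearly everything references it
```

Implement the hub first and the later modules plug into something that already exists, instead of into throwaway interfaces.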

Taking these steps doesn’t make your integration methodology adaptive. The chunks are still too big for that.

But it’s a start.