What do you do when someone else’s evidence and your logic collide? You might:

  • Use ad hominem “logic” to cast aspersions on whoever offered the evidence.
  • Not waste time bothering with ad hominem logic – just deny the other side’s evidence.
  • Create a strawman version of what you’re trying to refute – something that resembles it, only with a few fatal flaws added to let you “prove” it can’t work.
  • Embrace your confirmation bias. Find someone on the internet … anyone on the internet … who asserts evidence supporting whatever position you like best. Cite them as the only authority worth paying attention to.
  • Redefine the criteria for being right.
  • Find a historical case of a scientist whose colleagues ridiculed their theories, only to be vindicated in the end. Claim they’re your ideological brethren. Whether the historical scientist was actually subjected to ridicule or not doesn’t matter.
  • Or, reverse the last stratagem: Find a theory that was popular once upon a time, only to be proven wrong in the end. Claim that those who disagree with you would have agreed with it.

Which brings us to the sad, sad case of why Waterfall methodologies persist.

In case you’re unfamiliar with the term, a Waterfall methodology is one that designs a project plan as a linear progression of phases. For application development the progression typically starts with a feasibility study, followed by requirements definition, specifications, coding, testing, training, and deployment.

With Waterfall methodologies, design has to be complete before development can begin. If anything has been left out, or a design assumption has changed (as in, “Oh, you wanted wings on that airplane?”), whoever is championing the change goes through a change control process, which includes the project manager rippling the change through the entire project plan.

Agile, in contrast, is a family of methodologies, not a methodology. What its variants have in common is that they build software iteratively and incrementally. The list of Things the Application is Supposed to Do is called the backlog.

Projects still start with some form of feasibility study, one of whose work products is the initial backlog; another is a definition of the “minimum viable product” – the irreducible core of the planned application that everything else hooks onto.

From that point forward there is a process for deciding which items in the backlog have the highest priority. As for anything that was overlooked, or any design assumption that changes, it gets pushed into the backlog, where it’s subjected to the same priority-setting process as everything else.
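
If it helps to see the mechanics, here’s a minimal sketch of that priority-setting process in Python. Everything in it – the item names, the numeric priorities, the Backlog class – is illustrative, not part of any particular Agile variant:

```python
import heapq
from dataclasses import dataclass, field

@dataclass(order=True)
class BacklogItem:
    priority: int                       # lower number = higher priority
    description: str = field(compare=False)

class Backlog:
    """The list of Things the Application Is Supposed to Do."""
    def __init__(self):
        self._items = []

    def push(self, priority, description):
        # New features, overlooked items, and changed design assumptions
        # all enter the same way and compete on equal terms.
        heapq.heappush(self._items, BacklogItem(priority, description))

    def next_item(self):
        # Whatever currently matters most is what gets built next.
        return heapq.heappop(self._items)

backlog = Backlog()
backlog.push(1, "Minimum viable product: core booking workflow")
backlog.push(3, "Nice-to-have: export history to PDF")
backlog.push(2, "Changed assumption: the airplane needs wings")
print(backlog.next_item().description)   # the MVP core comes out first
```

The point the sketch makes: a changed design assumption isn’t a crisis to be rippled through a plan; it’s just one more item competing for priority.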

This will do as a barebones sketch. If you want more, please refer to chapter 3 of There’s No Such Thing as an IT Project, “Fixing Agile.”

The best available evidence, from the widely respected Standish Group, reports that Agile projects are fully successful at nearly twice the rate of Waterfall projects, and that Waterfall projects fail completely about two and a half times as often as their Agile counterparts.

Case closed? If only.

Some Waterfall proponents counter with one or more of the denial strategies that started this article. For example, a popular strawman holds that Agile can’t work because optimizing the whole requires suboptimizing the parts – which supposedly can’t happen in Agile, because each developer does whatever he or she wants, building capabilities that accrete onto the accumulating application.

It’s a strawman because Agile projects build to a well-defined architecture, to user-interface standards, and to an overall coherent plan: anything added to the backlog has to be consistent with the rest of the backlog.

Meanwhile, the implication is that in Waterfall projects, designers can optimize the whole. This assertion is, in a way, accurate. Designers can optimize the whole, so long as the target “the whole” is aimed at doesn’t change over the life of the project.

Good luck with that. The specifics of what a business needs to succeed within a given application’s domain change all the time.

So by the time a Waterfall-developed application is done, the criteria for “done” have changed.

Bob’s last word: The specifics of what a business needs to succeed within a given application’s domain change all the time in successful businesses. Leaders of those businesses know that their success depends on continuous adaptation to changing circumstances.

Success, that is, depends on agility.

Bob’s sales pitch: “To optimize the whole you must sub-optimize the parts” is a well-recognized principle here in KJR-land. If you’d like some guidance on how to apply it to your own real-world situations, you’ll find it in chapter 2 of Keep the Joint Running: A Manifesto for 21st Century Information Technology, which explores this concept in depth.

I’m working on a (probably) three-part sequence on technical architecture, to be part of the IT Management 101 series I’m writing for CIO.com. As a famous person once said about health care, who knew architecture could be so complicated?

What follows isn’t a substitute for that series. It’s more along the lines of stray thoughts you might find helpful in assessing and managing technical architecture in your own organization.

Beware the seductive siren call of metaphor. The parallels between technical architecture and what professional building designers do are limited at best, and dangerous at worst.

The work of professional architects begins with a sketch and ends with blueprints. Technical architects don’t create blueprints, and if they did they would be embracing Waterfall methodologies.

Agile methodologies don’t rely on blueprints of any kind. They often do rely on the equivalent of a sketch, but if so it’s the business analyst / internal consultant who draws it.

Crowdsourcing is a dicey way to gather data. Given how much information you’re going to want about each component in your portfolios, crowdsourcing it … sending out questionnaires to subject matter experts … is tempting.

Given that many enterprises can have a thousand or more components across all of their portfolios, crowdsourcing might not just be tempting – it might be unavoidable.

So if you do crowdsource your data-gathering, make sure you educate all of your information sources in the nuances of what you’re looking for.

And, assuming they do complete your questionnaires, curate the daylights out of the information they provide.

Version is data. Currency is information. Your technical architecture database should record how current each component is, where “current” ranges from matching what the vendor currently ships (fully current) down, through the intermediate possibilities, to having fallen out of support entirely (obsolete).

Keeping track of which version of a component is deployed in production is relatively straightforward – just make sure that any time the responsible team installs an update, they know to update the architecture database to match.

But what you care about is how current the component is, and you can only determine that if you know the product’s full version history, so you can match your production version to its position in that history.
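
To make that concrete, here’s a minimal sketch of a currency score in Python. It assumes you can obtain an ordered version history for each product; the statuses, version numbers, and function name are all illustrative:

```python
from typing import List

def currency_status(deployed: str, history: List[str], oldest_supported: str) -> str:
    """history lists a product's versions oldest-to-newest;
    the last entry is what the vendor currently ships."""
    if deployed not in history or oldest_supported not in history:
        return "unknown"                    # bad data – curate before scoring
    position = history.index(deployed)
    if position < history.index(oldest_supported):
        return "obsolete"                   # fallen out of vendor support
    if position == len(history) - 1:
        return "fully current"              # matches what the vendor ships
    return f"{len(history) - 1 - position} release(s) behind"

history = ["9.0", "10.0", "11.0", "12.0"]   # hypothetical version history
print(currency_status("10.0", history, oldest_supported="10.0"))
# -> "2 release(s) behind": still supported, but not what the vendor ships
```

Note that the deployed version by itself tells you nothing – the score only exists relative to the history.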

Currency scores are, of course, perishable. They change each time a vendor issues a new release, so someone needs to keep track of releases for every commercial product in every portfolio in your architecture.

It isn’t just your technology that has to stay current. You have to keep every piece of information you collect about each component in your architecture current, too.

You collect information about each component of your technical architecture. Some of it is constant, but quite a lot changes over time. For example, you’ll probably want to know how well each application supports the business functions it’s associated with. Business functions change, which means an application’s business-function support score changes along with them.

So your information-gathering process has to operate on a cadence that balances the sheer effort required with the rate of decay of information accuracy.
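
One back-of-the-envelope way to think about that balance – purely an illustrative heuristic, not anything from a standard framework – is to treat each attribute’s accuracy as decaying over time, and refresh it just before it drops below whatever accuracy you can tolerate:

```python
import math

def refresh_interval_months(half_life_months: float, min_accuracy: float) -> float:
    """Months until an attribute's accuracy (starting at 1.0) decays to
    min_accuracy, assuming exponential decay with the given half-life."""
    return half_life_months * math.log(1 / min_accuracy, 2)

# A stable attribute (say, hosting platform) vs. a volatile one
# (business-function support score); the half-lives are made-up numbers.
print(round(refresh_interval_months(half_life_months=36, min_accuracy=0.9), 1))  # ~5.5
print(round(refresh_interval_months(half_life_months=6, min_accuracy=0.9), 1))   # ~0.9
```

The numbers are invented; the shape of the tradeoff is what matters – volatile attributes are expensive to keep honest.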

Bob’s last word: Speaking of balancing effort and information, it’s tempting to collect a lot of data about each component in the architecture. Tempting, that is, until you pivot from collecting it the first time to updating it on a regular cadence, over and over again.

In the framework I use, I’ve identified about 30 attributes just for the application layer of the architecture. That’s a starting point. An important part of the process is whittling them down to the essentials.

Because 30 is too big a number. Ten will usually do the trick.

Bob’s sales pitch: I’m still whittling down the CIO.com architecture articles to their essentials. I’ll let you know when they’re available for your reading enjoyment.