What do you do when someone else’s evidence and your logic collide? You might:

  • Use ad hominem “logic” to cast aspersions on that someone else.
  • Not waste time bothering with ad hominem logic – just deny the other side’s evidence.
  • Create a strawman version of what you’re trying to refute – something that resembles it, only with a few fatal flaws added to let you “prove” it can’t work.
  • Embrace your confirmation bias. Find someone on the internet … anyone on the internet … who asserts evidence supporting whatever position you like best. Cite them as the only authority worth paying attention to.
  • Redefine the criteria for being right.
  • Find a historical case of a scientist whose colleagues ridiculed their theories, only to be vindicated in the end. Claim they’re your ideological brethren. Whether the historical scientist was actually subjected to ridicule or not doesn’t matter.
  • Or, reverse the last stratagem: Find a theory that was popular once upon a time, only to ultimately be proven wrong. Those who disagree with you would have agreed with it.

Which brings us to the sad, sad case of why Waterfall methodologies persist.

In case you’re unfamiliar with the term, a Waterfall methodology is one that designs a project plan in terms of a linear progression. For application development these would typically start with a feasibility study, followed by requirements definition, specifications, coding, testing, training, and deployment.

With Waterfall methodologies, design has to be complete before development can begin. If anything has been left out, or a design assumption has changed (as in, “Oh, you wanted wings on that airplane?”), whoever is championing the change goes through a change control process, which includes the project manager rippling the change through the entire project plan.

Agile, in contrast, is a family of methodologies, not a methodology. What its variants have in common is that they build software iteratively and incrementally. The list of Things the Application is Supposed to Do is called the backlog.

Projects still start with some form of feasibility study, one of whose work products is the initial backlog; another is the definition of the “minimum viable product” – the irreducible core of the planned application that everything else hooks onto.

From that point forward there is a process for deciding which items in the backlog have the highest priority. Anything left out of the backlog, and any change in design assumptions, gets pushed into the backlog, where it’s subjected to the same priority-setting process as everything else.
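
To make the backlog mechanics concrete, here’s a minimal sketch in Python. It isn’t any particular Agile tool’s data model; the items, scores, and value-over-effort ordering are illustrative assumptions. The point is that new work and changed assumptions enter as ordinary backlog items and compete for priority like everything else.

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class BacklogItem:
        title: str
        business_value: int   # rough 1-10 estimate from the product owner
        effort: int           # rough 1-10 estimate from the team

        @property
        def priority(self) -> float:
            # Illustrative scoring only: higher value and lower effort float to the top.
            return self.business_value / self.effort

    @dataclass
    class Backlog:
        items: List[BacklogItem] = field(default_factory=list)

        def add(self, item: BacklogItem) -> None:
            # A changed design assumption isn't a change order; it's just
            # another backlog item competing on the same terms as the rest.
            self.items.append(item)

        def ordered(self) -> List[BacklogItem]:
            return sorted(self.items, key=lambda i: i.priority, reverse=True)

    backlog = Backlog()
    backlog.add(BacklogItem("Minimum viable product core", business_value=10, effort=5))
    backlog.add(BacklogItem("Oh, you wanted wings on that airplane?", business_value=9, effort=6))
    backlog.add(BacklogItem("Nice-to-have report", business_value=2, effort=5))

    for item in backlog.ordered():
        print(f"{item.priority:.2f}  {item.title}")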

This will do as a barebones sketch. If you want more, please refer to chapter 3, “Fixing Agile,” of There’s no such thing as an IT project.

The best available evidence, from the widely respected Standish Group, reports that Agile projects are fully successful at nearly twice the rate of Waterfall projects, which in turn fail completely about two and a half times as often as their Agile counterparts.

Case closed? If only.

Some Waterfall proponents counter with one or more of the denial strategies that started this article. A popular example: Agile can’t work because in order to optimize the whole you have to suboptimize the parts, which supposedly can’t happen because in Agile, each developer does whatever he or she wants to build a capability that accretes onto the accumulating application.

This is a strawman. Agile projects build to a well-defined architecture, to user-interface standards, and to an overall coherent plan: Anything added to the backlog has to be consistent with the rest of the backlog.

Meanwhile, the implication is that in Waterfall projects, designers can optimize the whole. This assertion is, in a way, accurate. Designers can optimize the whole, so long as the target “the whole” is shooting at doesn’t change over the life of the project.

Good luck with that. The specifics of what a business needs to succeed within a given application’s domain change all the time.

So by the time a Waterfall-developed application is done, the criteria for “done” have changed.

Bob’s last word: The specifics of what a business needs to succeed within a given application’s domain change all the time in successful businesses. Leaders of successful businesses know that their success depends on continuous adaptation to changing circumstances.

Success, that is, depends on agility.

Bob’s sales pitch: “To optimize the whole you must sub-optimize the parts” is a well-recognized principle here in KJR-land. If you’d like some guidance on how to apply it to your own real-world situations, you’ll find it in chapter 2 of Keep the Joint Running: A Manifesto for 21st Century Information Technology, which explores this concept in depth.

In the beginning there was dBase II, designated “II” by Ashton-Tate, its publisher, to convey a level of maturity beyond its actual virtues. It was followed in quick succession by Paradox, Delphi, and Microsoft Access, all of which overcame most of dBase II’s (and III’s, and especially IV’s) numerous limitations.

Compared to traditional COBOL coding, these platforms increased developer productivity by approximately 10,000% – they let me get about a day’s worth of COBOL coding done in five minutes or so (call a day 480 minutes; 480 ÷ 5 is roughly a hundredfold, which is to say 10,000%, improvement).

This history was current events more than twenty years ago, and yet IT shops still write code and enshrine the practice with various methodologies (Scrum, Kanban, DevOps, add-your-favorite-here) intended to improve IT’s overall app dev effectiveness.

Speaking of deja vu, the pundits who track such things write about no-code/low-code (NC/LC) development environments as if they’re something new and different – vuja de, like nothing they’ve seen before – when in fact they offer little their 1990s-vintage predecessors weren’t capable of way back when.

Should NC/LC be in your future? Gartner says yes, predicting that by 2024, “… low-code application development will be responsible for more than 65% of application development activity.”

They make it so easy … to nitpick, that is. Is that 65% of all lines of code, produced using No Code tools? Probably not, as No Code tools by definition produce no code.

Function points? Maybe, except that nobody uses function points any more.

Probably, Gartner means 65% of all developer hours will be spent using NC/LC tools.

Which is simply wrong, on the grounds that most IT shops license when they can and only build when they have to. In my unscientific experience, looking at total application functionality as the metric, maybe 75% comes from COTS implementations (commercial, off-the-shelf software, which includes but isn’t limited to SaaS packages). Maybe 25% comes from in-house-developed custom applications, and that’s being generous.

As NC/LC platforms don’t touch COTS/SaaS functionality, it’s doubtful that work on 25% of the application portfolio can occupy 65% of all developer hours.
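
Here’s the back-of-the-envelope arithmetic, in Python for concreteness. The 75/25 split is the unscientific estimate above; the labor-intensity multiplier – how many more developer hours a unit of custom functionality takes than a unit of COTS configuration and integration – is an assumption added for illustration, not anything Gartner published.

    # Back-of-the-envelope check on the 65% claim. The 75/25 functionality split
    # is the unscientific estimate above; the labor-intensity multiplier is an
    # added assumption, not Gartner's.

    COTS_SHARE = 0.75      # share of application functionality from COTS/SaaS
    CUSTOM_SHARE = 0.25    # share from in-house custom development

    def custom_hours_share(labor_multiplier: float) -> float:
        """Custom development's share of total developer hours, assuming each
        unit of custom functionality takes `labor_multiplier` times the hours
        of a unit of COTS configuration and integration."""
        custom_hours = CUSTOM_SHARE * labor_multiplier
        return custom_hours / (COTS_SHARE + custom_hours)

    for multiplier in (1, 3, 6):
        print(f"custom work {multiplier}x as labor-intensive -> "
              f"{custom_hours_share(multiplier):.0%} of developer hours")

Even if custom work were six times as labor-intensive per unit of functionality as COTS work – a generous assumption – it would account for only about two-thirds of developer hours, and NC/LC tools would then have to absorb essentially all of it to make Gartner’s 65% come true.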

But I digress. The question isn’t whether Gartner has done it again. The question is how much attention IT should pay to this platform category.

Answer: If coding and unit testing are enough of a development bottleneck to care about, then yes. When it comes to optimizing any function, removing bottlenecks is generally a good idea.

Second answer: If, in your company, DIY application development is a source of a lot of application functionality, then selecting an NC/LC standard, integrating it with your application portfolio’s systems-of-record APIs, and providing training in its use will save everyone involved from a lot of headaches, while removing a source of friction and conflict between IT and the rest of the business. (More on what that integration can look like in a moment.)

Third answer: Most COTS/SaaS applications have some sort of no-code or low-code toolkit built into them. These should be IT’s starting point for moving in the NC/LC direction, and for that matter for any new application functionality.
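
About that systems-of-record integration in the second answer: most NC/LC platforms can consume a plain REST endpoint, so one common pattern – just one of several – is a thin, IT-owned, read-only facade in front of the system of record that the NC/LC tool calls as a data source. The sketch below is hypothetical: the URL, token handling, and field names are invented for illustration, and Flask and requests are stand-ins for whatever your shop already uses.

    # A hypothetical, read-only facade over a system of record, published for
    # NC/LC tools and citizen developers. The URL, token handling, and field
    # names are invented for illustration; the point is the shape, not the specifics.
    import os

    import requests
    from flask import Flask, jsonify

    app = Flask(__name__)
    ERP_BASE_URL = os.environ.get("ERP_BASE_URL", "https://erp.example.com/api/v1")
    ERP_TOKEN = os.environ.get("ERP_TOKEN", "")

    @app.route("/customers/<customer_id>")
    def get_customer(customer_id: str):
        # IT controls authentication, field exposure, and timeouts here, so the
        # low-code side only ever sees a curated, stable contract.
        response = requests.get(
            f"{ERP_BASE_URL}/customers/{customer_id}",
            headers={"Authorization": f"Bearer {ERP_TOKEN}"},
            timeout=10,
        )
        response.raise_for_status()
        record = response.json()
        return jsonify({
            "id": record.get("id"),
            "name": record.get("name"),
            "credit_status": record.get("credit_status"),
        })

    if __name__ == "__main__":
        app.run(port=8080)

The design point is that IT keeps authentication, field exposure, and error handling on its side of the line, so citizen developers build against a curated contract instead of poking at the system of record directly – which is most of what removes the friction mentioned above.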

Bob’s last word: It’s easy to fall into the trap of answering the question someone asks. “Are NC/LC tools useful and ready for prime time?” is an example, and shows why Dr. Yeahbut makes frequent appearances in this space.

The answer to the question is, in fact, “Yeah, but.” NC/LC development should, I think, be part of the IT app dev toolkit. But mastering the tools needed to integrate, configure, enhance, and extend the company’s COTS application suites has, for most IT organizations, far more impact on overall IT app dev effectiveness than anything in the way of app dev tools.

Bob’s sales pitch: As a member of the KJR community you might enjoy my most recent contribution to CIO.com, and a podcast I was interviewed for.

The CIO.com article is titled “The hard truth about business-IT alignment.” You’ll find it here.

The interview was for Greg Mader’s Open and Resilient podcast and covered a number of KJR sorts of topics. You’ll find it here.