As everyone knows, when compared to waterfall application development, Agile speeds things up, and DevOps speeds them up even more.
There’s just one little gap in what everyone knows: exactly what these things are that Agile speeds up.
When it comes to optimizing business processes and practices, speed has two very different and independent meanings: cycle time, the time that elapses between an average item entering the function as an input and its exiting as an output; and throughput, the number of completed items that exit the function in a given unit of time.
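If the distinction seems abstract, here's a minimal sketch in Python (the work items and dates are invented for illustration) showing how both numbers fall out of the same data:

```python
from datetime import datetime, timedelta

# Made-up work items: (time the request entered the function,
# time the finished item exited it).
items = [
    (datetime(2024, 1, 2), datetime(2024, 1, 20)),
    (datetime(2024, 1, 5), datetime(2024, 2, 10)),
    (datetime(2024, 1, 9), datetime(2024, 1, 30)),
]

# Cycle time: average elapsed time from entry to exit, per item.
cycle_time = sum((done - start for start, done in items), timedelta()) / len(items)

# Throughput: completed items per week, over the whole observation window.
window = max(done for _, done in items) - min(start for start, _ in items)
throughput = len(items) / (window.days / 7)

print(f"Average cycle time: {cycle_time.days} days")
print(f"Throughput: {throughput:.2f} items/week")
```

The point of the sketch: the same three items yield both numbers, and nothing forces the two to move together.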
For application development, I’m guessing most businesses care more about cycle time than throughput, but that’s just a guess. I’ve never seen any survey data to confirm it.
On the other hand, what managers who want something from IT mostly ask about, and then gripe about, is when they’ll get it. They care about cycle time and want it to be small.
So for the sake of argument, accept that the goal of supplanting waterfall development with Agile is to improve cycle time.
But as cycle time measures the time that elapses between something entering a function and exiting it, we’re back to the question: what are these things of which we speak?
Back in the days when waterfall held sway, the closest anyone came to nailing this down was the function point. Function points were (and are — the discipline still has adherents) supposed to correspond to business functionality, and so they do, in the sense of corresponding to software functionality business people use.
So we could ask the musical question, do Agile methodologies speed up the delivery of function points?
And we’d have our answer, which is the same as the answer to the question, “What’s the difference between ignorance and apathy?” which is, “I don’t know and I don’t care.”
That’s because Agile methodologies don’t deliver function points. They deliver user stories, each of which is assigned a degree-of-difficulty weighting factor, typically expressed in story points.
So on the subject of velocity we now find ourselves asking which delivers a user story more quickly — waterfall or Agile. But as waterfall deals in function points, not user stories, aren’t we still stuck with incomparables?
Well, yes, but not insurmountably, because a user story is, if you squint a bit and don’t worry overmuch about the details, pretty close to what in waterfall terms we’d call a requirement.
Enfin, nous arrivons! as a Parisian shopkeeper said to me many years ago as I was attempting, in my very limited French, to explain what I needed and he was attempting to make sense of my near gibberish.
At last, we’ve arrived: To compare the speed of waterfall and Agile, “all” we need to do is compare how much time elapses between the first articulation of an average requirement and its appearing in production some time later.
Interestingly enough, Agile doesn’t measure this. It measures throughput: story points delivered per week, or some similar metric. Why? Probably because throughput is what’s easy to measure, never mind what matters most.
Superficially, cycle time doesn’t seem hard to measure. Except that with waterfall methodologies the early steps aren’t atomic: Business Analysts talk with a bunch of people (ConsultantSpeak: Stakeholders and subject matter experts), try to make sense of it all, and write up the result in a requirements document.
Average cycle time for this step: Total step duration (first interview through requirements publication and ratification) divided by the number of requirements described in the document, weighting each requirement by its degree of difficulty.
Agile equivalent: Time needed to rephrase someone’s requirement as a user story and add it to the backlog.
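Assuming you buy that weighted-average definition, here’s what the waterfall side of the arithmetic might look like, with hypothetical dates and difficulty weights:

```python
from datetime import date

# Hypothetical waterfall requirements phase: first interview on
# phase_start, requirements document ratified on phase_end.
phase_start = date(2024, 1, 2)
phase_end = date(2024, 4, 12)
difficulty_weights = [1, 3, 2, 5, 1, 8, 3]  # one entry per requirement

phase_days = (phase_end - phase_start).days

# Spread the phase's duration across the requirements in
# proportion to their difficulty weights.
days_per_weight_unit = phase_days / sum(difficulty_weights)

print(f"Phase duration: {phase_days} days for {len(difficulty_weights)} requirements")
print(f"Unweighted average: {phase_days / len(difficulty_weights):.1f} days/requirement")
print(f"Weighted: {days_per_weight_unit:.1f} days per unit of difficulty")
```

The Agile equivalent is so short it hardly needs code.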
This isn’t an entirely fair comparison, though: waterfall business analysts are expected to filter out low-value requirements. With Agile, those requirements just sit in the backlog, never important enough to be worked on, either forever or until someone decides it’s time to clear out all the dead items.
Which means with Agile, cycle time will have to be, shall we say, dynamically recalibrated from time to time to remove Worthless Items Never Worked On from the calculation.
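A sketch of what that recalibration might look like, with an invented backlog and an arbitrary 180-day staleness threshold:

```python
from datetime import date, timedelta

today = date(2024, 6, 1)
stale_after = timedelta(days=180)  # assumed staleness threshold

# (date added to backlog, date completed) -- None means never worked on.
backlog = [
    (date(2023, 1, 10), date(2023, 3, 1)),
    (date(2022, 11, 1), None),   # dead weight: old and never worked on
    (date(2024, 4, 15), None),   # recent, still plausibly live
    (date(2023, 9, 5), date(2024, 2, 20)),
]

def avg_days(items):
    # Open items count their age so far; completed items, their full cycle.
    ages = [((done or today) - added).days for added, done in items]
    return sum(ages) / len(ages)

# Recalibration: drop items that have sat untouched past the threshold,
# so Worthless Items Never Worked On don't inflate the average.
pruned = [
    (added, done) for added, done in backlog
    if done is not None or today - added <= stale_after
]

print(f"Naive average:  {avg_days(backlog):.0f} days")
print(f"Recalibrated:   {avg_days(pruned):.0f} days")
```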
Clearly, app dev cycle time measurement isn’t for the faint of heart. And we haven’t even begun to explore how to account for project failures.
Given the Standish Group’s current statistics — that Agile projects enjoy roughly three times the success rate of waterfall projects — accounting for this is an important piece of the puzzle.
Our App Dev speed is measured by when the project is due, without regard to the level of difficulty or the need for new infrastructure. For example, an acquisition takes place; customers added to our portfolio in that acquisition expect us to provide all the same services, reporting, and so on that they enjoyed with the acquired company, even when what we have is similar but doesn’t use quite the same lingo. The customers are assured the features will be added, and a go-live date is given without consulting App Dev for a level-of-effort (LOE) estimate.
This is not a joke. We really have scenarios like this. I’ve made very modest inroads into the use of work items and tracking with Team Foundation Server, but we’re a long way from any kind of organized methodology, whether it be waterfall or some variation of Agile.
As always, thanks for your insight on topics important to me.
Demonstrating once again that estimation is the untamed frontier.
Agile actually can solve this (assuming “much better” constitutes solving it). But that requires a couple of things: (1) the people doing the estimating now have to want to solve it; and (2) everyone has to be patient while the organization accumulates enough experience to do a decent job of describing what’s needed in terms of user stories, and to have confidence in teams’ story point estimation.