Metrics seems like such a good idea. If you can’t measure, after all, you can’t manage, or so Peter Drucker asserted a long time ago. He’d be horrified at how this dictum has been misapplied.

Drucker was talking about business processes, where if you can’t measure you really can’t know when something has broken down and needs attention. That’s the proper scope of Drucker’s Metrics Dictum. It applies to business processes. Too many business managers think it applies to everything.

We’ve been talking about DevOps, whose name comes from the inclusion of Ops members in Dev teams, but whose most interesting characteristic is that it replaces waterfall’s big-bang deployments, and even Agile’s large-scale releases, with a continuous stream of small deployments that are subjected to automated testing and then automatically promoted to production without so much as a rubber stamp from a Change Advisory Board.
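The promote-on-green discipline just described can be sketched in a few lines. This is a minimal illustration, not any particular CI/CD product; the function names and the change's shape are my own invention:

```python
# Sketch of a DevOps-style deployment gate: each small change is promoted
# to production automatically if, and only if, its automated tests pass.
# No Change Advisory Board in the loop -- the test suite IS the approval.

def run_automated_tests(change):
    """Stand-in for a real test runner; True when every test passes."""
    return all(test(change) for test in change["tests"])

def deploy(change, promote_to_production):
    """Promote a passing change; reject a failing one. Returns the outcome."""
    if run_automated_tests(change):
        promote_to_production(change)
        return "deployed"
    return "rejected"
```

The point of the sketch is what's missing: there is no manual sign-off step anywhere between "tests pass" and "it's in production."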

The metrics question of the day: Do Agile and DevOps speed things up?

The answer is that there is no easy answer. To understand the challenge, answer this question instead: Which is faster, the Internet or a truck?

Figure the truck has a 4,000 cubic foot capacity. You fill it with SanDisk SD cards, each holding 64GB of data. The truck drives 1,000 miles at an average of 50 mph. If I’ve done my arithmetic right, the truck delivers more than 35 million terabits of data in 20 hours, yielding a bandwidth of just under 500 terabits per second.
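For the skeptical, the arithmetic checks out. The sketch below assumes full-size SD cards at their standard dimensions of roughly 32 × 24 × 2.1 mm (the card format is my assumption; the article's figures work out for full-size cards, not microSD):

```python
# Checking the truck's arithmetic. Assumption: full-size SD cards,
# 32 x 24 x 2.1 mm each, holding 64 GB apiece.

CARD_MM3 = 32 * 24 * 2.1      # volume of one full-size SD card, mm^3
TRUCK_FT3 = 4_000             # truck capacity, cubic feet
MM3_PER_FT3 = 28_316_846.6    # cubic millimeters per cubic foot

cards = TRUCK_FT3 * MM3_PER_FT3 / CARD_MM3   # roughly 70 million cards
terabits = cards * 64 * 8 / 1000             # 64 GB -> 0.512 terabits each
hours = 1000 / 50                            # 1,000 miles at 50 mph
tbps = terabits / (hours * 3600)

print(f"{terabits / 1e6:.0f} million terabits in {hours:.0f} hours, "
      f"about {tbps:.0f} Tb/s")
```

Roughly 36 million terabits over 20 hours, or just shy of 500 terabits per second, as claimed.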

Imagine your MPLS monthly bill for that kind of bandwidth. The truck wins by, you’ll pardon the expression, a mile.

Fortunately, network engineers recognize that bandwidth only tells half the speed story. The other critical network metric is latency. Here, the truck doesn’t fare so well. The first data packet doesn’t arrive until 20 hours after it’s transmitted, compared to a typical Internet ping of maybe 15 milliseconds.

So which is faster, the Internet or the truck? Answer: It depends on what you need more, high bandwidth or short latency.
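"It depends" can be made precise: total delivery time is latency plus payload divided by bandwidth. A small payload is all latency; a huge one is all bandwidth. The sketch below uses the article's truck figures; the 1 Gb/s Internet link speed is my assumption:

```python
# Delivery time = latency + payload / bandwidth.
# Truck: 20-hour latency, ~500 Tb/s effective bandwidth.
# Internet: 15 ms latency, an assumed 1 Gb/s (0.001 Tb/s) link.

def delivery_seconds(payload_tb, latency_s, bandwidth_tbps):
    """Total time to deliver a payload, in seconds."""
    return latency_s + payload_tb / bandwidth_tbps

TRUCK = {"latency_s": 20 * 3600, "bandwidth_tbps": 500}
NET = {"latency_s": 0.015, "bandwidth_tbps": 0.001}
```

A few megabits (a web page) arrives in milliseconds over the wire but waits 20 hours on the truck; a datacenter's worth of bits takes the truck a few days and the wire a few millennia.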

Waterfall methodologies are the trucks of application development. They deliver a truckload of software features in one big delivery. The business waits for six months or more before anything goes into production, but when the wait is over it gets a lot of new functionality all at once.

Agile methodologies, and DevOps even more so, are the Internet of app dev, delivering new functionality more frequently but in smaller increments.

Where the metaphor breaks down is that with our truck vs Internet comparison we had a useful standard unit of data delivery – the bit.

Comparing app dev methodologies, we don’t. For a while we had function points, but they never really caught on, mostly because they’re too durned complicated to properly count. Also, they’re useless for Agile because function point analysis has deep connections to waterfall’s up-front specifications, which is one reason Agile replaced them with user stories and story points (estimated degree of difficulty).

So trying to compare waterfall and Agile for the speed of software delivery just isn’t going to happen in any reliable way. Even comparing Agile and DevOps is dicey. Agile delivers user stories weighted by story points. Increasingly, DevOps delivers microservices, not full user stories.

Try to apply Drucker’s Metrics Dictum to application development and you’ll find you’re trying to answer the question (to change metaphors mid-stream): Which is better for beating the other team — a great passing game, or the knuckleball?

And oh, by the way, these things have strong connections to the type of business change you need to achieve. Standard Agile practices are just the ticket when your goal is continuous improvement. Waterfall actually can work well, assuming you’re implementing a new business process you’re able to specify with precision and that will still be relevant when the multi-year initiative wraps up.

When designing a good metric turns into an intellectual quagmire, the problem is probably that we’re asking the wrong question. IT’s goal isn’t software delivery, after all. It’s supporting the achievement of intentional business change.

That being the case, what we should be looking at is whether, for a given desired business change, IT’s app dev methodology is a major source of friction.

Increasingly, business leaders care more about the organization’s ability to change direction quickly to address threats and pursue opportunities, and less about organizing and implementing large-scale strategic change.

With this change in the style of business change there’s no longer much doubt. The Agile/DevOps spectrum of methodologies is far more likely to keep IT from being sand in the company’s gears of progress.

I’m writing this the day after Thanksgiving. Yesterday, I was thankful when Da Bears implausibly beat the Packers. This led to a spiritual reflection on the limitations of the so-called golden rule, namely, that it only applies to like-minded individuals.

I reached this conclusion because of my wife’s family, which hails from within hailing distance of Lambeau Field and didn’t appreciate my enthusiastic response to the game’s outcome. Nor, to be fair, have I always appreciated theirs when faced with the more common result of Bear/Packer encounters.

What’s this have to do with DevOps, the subject we’ve been exploring in this space the last couple of weeks?

Not much. Except that applying DevOps to internal IT has golden-rule-like flaws (okay, it’s a stretch): As has been mentioned in this space from time to time, the similarities between developing commercial software or customer-facing websites and what IT needs to do are quite limited.

The big difference, as if you don’t already know what’s coming: Both waterfall and any of the popular Agile variants — Kanban, Scrum, Extreme Programming, Test-Driven Development, and the strangely acronymed Lean Software Development — are designed to develop software.

And DevOps, in case you aren’t already aware of this, is built on top of one of these Agile variants, most often Scrum.

DevOps is a fine way to create software products, as Microsoft reportedly does. For that matter it’s a fine way for advanced retailers to constantly test new selling approaches on their websites. But … while the number of deploys per day is a frequently touted benefit in articles extolling the virtues of DevOps in retail, what these deploys are for is usually left to the imagination.

Which gets us to the questions raised last week about DevOps inside the enterprise, and an Agile methodology mentioned in this space several times but … and I apologize for this … never fully explained: Conference Room Pilot (CRP).

The question: Can DevOps be based on CRP instead of Scrum, and if so, what would the result look like?

But first: What is CRP and why does it matter?

Answer: CRP is the only Agile variant designed from the ground up to implement commercial off-the-shelf software (COTS) and, by extension, Software as a Service (SaaS) solutions as well.

Here’s how it works.

First, IT installs the new COTS package to create the development environment. If the COTS system is supposed to replace one or more existing legacy systems, as is often the case, IT also converts the legacy data — a logically waterfall effort that shouldn’t be made Agile because what would be the point?

Next, whoever is in the best position to do so collects a few hundred or thousand test transactions, in the form of actual business conducted using the legacy systems over the past few weeks or months. These are staged as paper or electronic forms, whichever makes the most sense for use in exercising the new system.

One more preparatory step: IT trains a few developers in the new application — the ones it plans to turn into its gurus, because IT shouldn’t ever implement any COTS package without developing gurus for it — along with a training professional.

Now it’s time to lock the team, composed of business managers and users plus the newly anointed COTS gurus, in a conference room, to pilot the new system (hence the name).

Locked? Metaphorically — bathroom breaks are allowed, and pizza and beverages (caffeinated) are provided on demand.

The business users enter randomly chosen transactions into the new system. They’ll experience one of two outcomes: The new system will either:

  • Handle the transaction cleanly. Result — add it to the system’s automated test suite, for use later on to make sure changes don’t break what’s fixed.
  • Handle the transaction clumsily or not at all. Result — discuss it with one of the gurus, designing enhancements that don’t violate the integrity of the COTS system and do handle the transaction smoothly and efficiently. When the enhancements are finished and satisfactory, the transaction is added to the automated test suite.

Note that neither of the outcomes is “handles the transaction the way it’s currently handled.” There’s no intrinsic value to that, which is a critical point every team member is made familiar with before being locked in the conference room.

By the time the team has plowed through the complete stack of prepared transactions and the resulting system passes the accumulated automated test suite, it’s ready for deployment.
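The CRP loop just described reduces to a simple invariant: every prepared transaction either runs clean or drives an enhancement until it does, and either way it joins the regression suite. A minimal sketch, with all names my own rather than any CRP standard:

```python
# Sketch of the Conference Room Pilot loop. Each staged transaction either
# passes cleanly and joins the automated test suite, or drives an
# enhancement (guru + users design a clean fix) and then joins the suite.

def conference_room_pilot(transactions, system, enhance):
    """Run every prepared transaction; return the accumulated test suite."""
    test_suite = []
    for txn in transactions:
        while not system.handles(txn):
            enhance(system, txn)   # design an enhancement, don't replicate legacy
        test_suite.append(txn)     # guards against future regressions
    return test_suite

def ready_to_deploy(system, test_suite):
    """Deployment readiness: every accumulated transaction still runs clean."""
    return all(system.handles(txn) for txn in test_suite)
```

The accumulated suite is what makes the later marriage to DevOps plausible: it is exactly the automated test gate continuous deployment needs.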

Now it’s time for the magic question: What would the marriage of CRP and DevOps look like?

* * *

Sadly, we’re out of space, which means you’ll have to wait until next week for the magic answer.