
Captain Obvious presents: DevOps!


When my children were young, I offered them two alternatives. They could either make the same mistakes I made growing up, or they could learn from my mistakes and make a bunch of new ones instead. I’ll let you guess. Heck, I can only guess myself.

But I don’t have to guess about DevOps, where some practitioners are making mistakes IT learned to avoid decades ago.

In particular, we learned IT shouldn’t release application changes into the wild without: (1) conducting comprehensive regression tests if the application change in any way alters system integrations; and (2) providing at least delta communication, and in many cases delta training, if the user interface changes.

Wait! Why am I wasting your time with 30+ year old wisdom?

I know it looks like I’m changing my name to Captain Obvious. But while I know you know better than to engage in these IT worst practices, that doesn’t mean your technology vendors do too.

If you’ve read anything about DevOps, you know CI/CD is a key element. But many vendors, having invested significant money and effort into adopting DevOps as How We Do Things Around Here, only got three of the four letters right: Continuous, Integration, and Continuous.

But they read somewhere that the “D” stands for Deployment, and, with the enthusiasm of the converted, gave the matter no further thought.

As a regular KJR reader you know the difference between a release and a releasable build, and with that the difference between Continuous Integration/Continuous Deployment … appropriate for eCommerce applications where what changes is the customer’s shopping experience … and Continuous Integration/Continuous Delivery, the model that works for applications whose purpose is to support internal processes and practices.

Just in case: The difference between delivery and deployment is simple: Delivery installs to the staging environment; deployment installs to production.
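
If it helps to make that concrete, here’s a rough sketch in Python. None of it is any real CI tool’s API; every name is a made-up stand-in. The point is simply where the pipeline stops on its own (staging) and where a human decision is required (production).

```python
"""A minimal sketch of the delivery-versus-deployment distinction.

Every name here is a hypothetical stand-in, not any real CI tool's API.
Delivery: the pipeline stops on its own at staging. Deployment: production
changes only when the customer says so.
"""

def run_regression_suite(build: str) -> bool:
    # Stand-in for the automated regression tests discussed below.
    return True

def install(build: str, environment: str) -> None:
    print(f"Installed {build} to {environment}")

def continuous_delivery(build: str) -> bool:
    """Delivery: every releasable build lands in staging, and stops there."""
    if not run_regression_suite(build):
        return False
    install(build, "staging")
    return True

def deploy_to_production(build: str, customer_approved: bool) -> None:
    """Deployment: production changes only with the customer's knowledge and consent."""
    if not customer_approved:
        print(f"{build} waits in staging until the customer schedules it")
        return
    install(build, "production")

if __name__ == "__main__":
    if continuous_delivery("release-2.4.1"):
        deploy_to_production("release-2.4.1", customer_approved=False)
```

The approval gate in deploy_to_production is exactly the step that CI/CDeployment, applied to your internal systems, skips.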

It’s past time to make sure your vendors deliver to staging and don’t make you vulnerable to vendor-DevOps-driven unmanaged deployments.

Start with your contracts. Do they prohibit your vendors from deploying changes to the UI and application interfaces without your permission? If not, start negotiating the requisite contract changes.

SaaS providers are particularly notorious for unmanaged deployments, because your defenses can’t block changes they install on their own servers. But increasingly, providers of customer-installed COTS are adopting DevOps practices too.

This doesn’t make it okay to go back to … or, in many cases, to persist in … the outdated practice of staying on stable versions as long as possible. Staying current or nearly current is no longer one choice among many. In an age of state-sponsored and organized-crime-sponsored assaults on your information systems, staying current or nearly current is now the choice, not a choice.

So let your vendors off the hook, and accept a deadline for implementing new releases. Your vendors should give you ample time to test. You should let them retire ancient versions.

This doesn’t just apply to the IT vendors you have, either. Every time you go through a solution selection, make sure you include your requirement that the vendor doesn’t push changes into your production environment without your knowledge and consent.

Second: It’s time to automate regression testing. Yes, setting it up is painful and expensive. For that matter, maintaining the automated test plan is no picnic either.

The alternative, though? There’s an old, old rule in IT, which is that you always test. Professionals test before putting software in production. Amateurs test by putting it into production.

And we’re well past the time when just grabbing a bunch of end-users and having them bang on their keyboards for an hour or two will give IT a passing grade.
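
For what it’s worth, even a bare-bones automated suite beats keyboard-banging. Here’s a minimal, purely hypothetical example in pytest style; calc_order_total stands in for whatever interface logic a vendor release might silently change. The point is that expected behavior gets re-verified on every release instead of rediscovered by end users in production.

```python
"""A bare-bones automated regression check, pytest style.

Everything here is a hypothetical stand-in, not anyone's real application:
calc_order_total plays the part of the interface logic a vendor release
could silently change.
"""

from decimal import Decimal


def calc_order_total(lines):
    """Stand-in for the application/interface logic under test."""
    return sum(qty * price for _, qty, price in lines)


def test_order_total_unchanged_after_vendor_patch():
    lines = [("widget", 3, Decimal("9.99")), ("gadget", 1, Decimal("4.05"))]
    assert calc_order_total(lines) == Decimal("34.02")


def test_empty_order_totals_zero():
    assert calc_order_total([]) == Decimal("0")
```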

Third: Make #2 less expensive by cleaning up your interface tangle once and for all. While reliable industry statistics are hard to come by (strike that — they’re impossible to come by), anecdotal and conversational evidence suggests that the ongoing cost of maintaining an ad hoc collection of point-to-point interfaces can, over time, overwhelm IT’s ability to implement new applications and application changes.
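
To see why the tangle outgrows the portfolio, run the arithmetic: with full point-to-point connectivity, the number of possible interfaces grows with the square of the number of applications, while routing everything through a shared integration layer (one assumption about what “cleaning up” looks like) grows linearly. A quick, purely illustrative sketch:

```python
# Back-of-the-envelope arithmetic, not industry statistics: point-to-point
# interfaces grow roughly with the square of the application count, while a
# shared integration layer needs one adapter per application.

def point_to_point_interfaces(apps: int) -> int:
    """Worst case: every application talks directly to every other one."""
    return apps * (apps - 1) // 2

def hub_adapters(apps: int) -> int:
    """Each application needs one adapter to a shared integration layer."""
    return apps

for n in (10, 25, 50):
    print(n, point_to_point_interfaces(n), hub_adapters(n))
# 10 apps -> up to 45 interfaces vs 10 adapters
# 25 apps -> up to 300 vs 25
# 50 apps -> up to 1,225 vs 50
```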

To put a bow on it: How could DevOps proponents make such an elementary mistake as conflating delivery and deployment? It’s sadly easy to understand. As a member in good standing of the KJR community, you understand there’s no such thing as an IT project.

But we’re still in the minority. The IT pundit class thinks moving from projects to products is an exciting transformation.

If the job is done when the software runs, CI/CDeployment is fine and dandy.

If it isn’t … choose the wrong goal and wrong practices are inevitable.

Comments (8)

  • Very timely. Just updated our ERP by applying thousands of patches (no version change). We had a couple of CRPs with end users from all areas of the organization. It went okay but we noticed some custom reports were not tested and currently don’t work as they did before.

    Question: How do we test the ERP patches? (3rd-party ERP, on-prem.) We are an SMB that can’t afford an IT QA group. What do you suggest other than spending a day or two testing with end users (which we call a CRP)?

    FS

    • I don’t pretend to have enough knowledge of your specifics to do more than provide some general principles.

      Actually, one principle: Don’t accumulate patches like that. While applying them didn’t constitute a version change, the principle is the same – you were far behind current.

      The good news about being an SMB is that most businesses your size can tolerate a bit of disruption. What I’d suggest is putting a basic test suite together to use with patches as they come in. You want a test suite good enough to be confident your systems won’t crash and burn as a result of applying the patch.

      Any patch that passes your test suite but breaks something or other isn’t going to break so much that you’ll have a hard time recovering.

      Key to this working: Communicate to everyone that the patch is going to go in, and let them know who to report any problems to.

      That’s the best I have. Anyone else care to weigh in?

      – Bob

  • Great article as usual, Bob. Your points (especially about delta training) could also apply to Microsoft Office and Windows in the Steve Sinofsky era, when adopters of Office 2007 and Windows 8.0 found themselves spending several months to re-learn how to do what they had been doing for the past 10 years. (The irony is that he was fired not for causing agony to millions of users, but because of a power clash with Ballmer.)

  • Microsoft hasn’t learned the first thing about testing and issuing changes, either! Look at their recent track record with Windows 10 updates.

  • Here is the tough part of this, everyone (and I have spoken with Bob about this): many SaaS providers aren’t giving companies a choice. You either take continuous “improvements” or you don’t use their products. Our current HR vendor is locked into monthly releases that I believe are also tied to Scrum sprints. They miss their releases fairly often, and they also have to patch weekly after the releases create new problems. You can imagine how much HR teams enjoy dealing with this. Sadly, this appears to be the wave of the future, and I have argued with their product managers to no avail. The kool-aid has been drunk.

  • This is going straight to the Pool Room…. cracker article.

  • On target as usual. It must be time to retire because I keep seeing history repeat itself in people rebranding tried and true techniques and methodologies, then discarding them, then making the same mistakes as their predecessors, and reinventing the same again with a new acronym for the already-established lesson learned. Don’t know why certification hasn’t helped to break this awful cycle.
