“I like TCO — Total Cost of Ownership. I feel like it’s a much more accurate economic model of price. Granted, it’s just price, not usefulness, but as long as you know that, then it’s a highly useful metric, much better than manufacturer’s suggested retail price.”

So said Mike M in a comment he posted to a recent KJR, which took some courage given how often I ridicule … uh … critique TCO in this space.

Credit where it’s due, he’s quite correct. TCO is a more accurate measure of spending than MSRP. That doesn’t fix any of TCO’s intrinsic flaws. Quite the opposite, it puts a spotlight on a flaw I ignored in my last missive on the subject: From a 7 C’s perspective, neither MSRP nor TCO is Connected to any important goal.

Wait, wait, wait, wait, wait! I can hear you shouting at your screen. Yes, reducing costs can be painful. It’s still an important goal in most or all businesses, isn’t it?

Well … no, or at least it shouldn’t be.

Increasing value is an important goal. Cost-cutting can be a useful way to increase value, but, as I’ve pointed out enough times to make your eyeballs roll, only when the organization can cut costs without a commensurate reduction in benefits.

As I haven’t pointed out enough times (yet) to make your eyeballs roll, reducing TCO can drive a short-term perspective that can, over time, prove calamitous.

For example …

I have, over the years, run into a handful of companies that (I’m not making this up) wrote their own development languages, transaction processing handlers, and file management software. In some cases these companies used their proprietary platforms to write proprietary applications that underpinned their go-to-market services.

No question — they reduced TCO quite a lot compared to competitors that had to license COBOL, CICS, and VSAM from IBM, not to mention licensing applications instead of relying on their home-grown ones. They passed this reduction along to their clients in the form of lower prices that helped them win and retain business.

What’s not to like?

Let’s start with staffing. Someone has to maintain these proprietary platforms. The folks who wrote them decades ago either have retired or will retire soon. Recruiting programmers who are both qualified for and interested in this sort of work is, in this day and age, pretty close to impossible.

But if you can’t recruit, why not just freeze the platforms in place? They all work, after all.

But that assumes the next IBM mainframe they buy, running whatever operating system IBM still offers and maintains, will run proprietary platforms written before IBM renamed MVS to z/OS.

So … never mind all that. Nothing lasts forever. It’s time to convert the application to a more modern platform.

A fine idea, made even better by the fact that the only other alternative that would work is shuttering the business.

One problem with the conversion strategy: decades of enhancements to applications that are directly visible to customers mean either spending a lot of time and effort adapting a commercial package to satisfy contractual obligations, or committing the very large investment of capital and effort needed to rewrite the application on a modern platform.

One more challenge: As mentioned, companies like these won and retained business by offering more attractive pricing than their competitors, made possible by avoiding the costs of licensing COTS applications and commercially available development and operating platforms.

No matter what these companies convert the applications to, they’ll be paying non-trivial license fees they’ll have to pass along to their customers in the form of higher prices.

They are, to turn a phrase, borrowing from the future.

Businesses borrow all the time. When it’s money, your average banker will work with companies to restructure debt to improve the odds of being repaid. The future isn’t like that. When the time comes, it demands repayment, often at usurious interest rates, and with mafia-like collection practices.

No argument — this week’s example of TCO reduction gone wild is extreme, and by now increasingly uncommon.

But while your IT shop probably doesn’t rely on proprietary platforms, other forms of technical debt — the term we use in IT for borrowing from the future — are distressingly common just as funding to repay them is distressingly uncommon.

Even TCO’s strongest advocates will agree that accurately calculating it ranges from difficult to Full Employment for Accountants.

But compared to the challenge of accurately measuring and reporting technical debt, TCO calculations look easy. Perhaps that’s why you never see technical debt and other forms of future-debt on company balance sheets.

Or maybe it’s just because reporting future-debt isn’t required, and would make the books look worse than ignoring it.

When my children were young, I offered them two alternatives. They could either make the same mistakes I made growing up, or they could learn from my mistakes and make a bunch of new ones instead. I’ll let you guess. Heck, I can only guess myself.

But I don’t have to guess about DevOps, where some practitioners are making mistakes IT learned to avoid decades ago.

In particular, we learned IT shouldn’t release application changes into the wild without: (1) conducting comprehensive regression tests if the application change in any way alters system integrations; and (2) providing at least delta communication, and in many cases delta training, if the user interface changes.

Wait! Why am I wasting your time with 30+ year old wisdom?

I know it looks like I’m changing my name to Captain Obvious. But while I know you know better than to engage in these IT worst practices, that doesn’t mean your technology vendors do too.

If you’ve read anything about DevOps, you know CI/CD is a key element. But many vendors, having invested significant money and effort into adopting DevOps as How We Do Things Around Here, only got three of the four letters right: Continuous, Integration, and Continuous.

But they read somewhere that the “D” stands for Deployment, and, with the enthusiasm of the converted, gave the matter no further thought.

As a regular KJR reader you know the difference between a release and a releasable build, and with that the difference between Continuous Integration/Continuous Deployment … appropriate for eCommerce applications where what changes is the customer’s shopping experience … and Continuous Integration/Continuous Delivery, the model that works for applications whose purpose is to support internal processes and practices.

Just in case: The difference between delivery and deployment is simple: Delivery installs to the staging environment; deployment installs to production.
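That one-word difference is easy to sketch. Here’s a minimal, hypothetical pipeline stage in Python — the function names (`install_to`, `final_stage`) and the approval flag are my illustrative assumptions, not any particular CI tool’s API:

```python
def install_to(environment, build):
    # Stand-in for the real installer (a deploy script, CD tool, etc.).
    print(f"Installing build {build} to {environment}")
    return environment

def final_stage(build, model, deployment_approved=False):
    """Last stage of a hypothetical pipeline.

    Continuous Delivery stops at staging; Continuous Deployment
    pushes every releasable build straight to production.
    """
    if model == "delivery":
        # Delivery: the releasable build lands in staging and waits
        # for a human (change management) decision.
        target = install_to("staging", build)
        if deployment_approved:
            target = install_to("production", build)
    elif model == "deployment":
        # Deployment: no staging gate; production gets it immediately.
        target = install_to("production", build)
    else:
        raise ValueError(f"unknown model: {model}")
    return target
```

The whole argument of this column lives in that one `if deployment_approved:` line — with Continuous Delivery, deploying to production remains your decision, not the vendor’s.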

It’s past time to make sure your vendors deliver to staging and don’t make you vulnerable to vendor-DevOps-driven unmanaged deployments.

Start with your contracts. Do they prohibit your vendors from deploying changes to the UI and application interfaces without your permission? If not, start negotiating the requisite contract changes.

SaaS providers are particularly notorious for unmanaged deployments, because your defenses can’t block changes they install on their own servers. But increasingly, providers of customer-installed COTS are adopting DevOps practices too.

This doesn’t make it okay to go back to … or, in many cases, to persist in … the outdated practice of staying on stable versions as long as possible. Staying current or nearly current is no longer one choice among many. In an age of state-sponsored and organized-crime-sponsored assaults on your information systems, staying current or nearly current is now the choice, not a choice.

So let your vendors off the hook, and accept a deadline for implementing new releases. Your vendors should give you ample time to test. You should let them retire ancient versions.

This doesn’t just apply to the IT vendors you have, either. Every time you go through a solution selection, make sure you include the requirement that the vendor not push changes into your production environment without your knowledge and consent.

Second: It’s time to automate regression testing. Yes, setting it up is painful and expensive. For that matter, maintaining the automated test plan is no picnic either.

The alternative, though? There’s an old, old rule in IT, which is that you always test. Professionals test before putting software in production. Amateurs test by putting it into production.

And we’re well past the time when just grabbing a bunch of end-users and having them bang on their keyboards for an hour or two will give IT a passing grade.
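What does automated regression testing look like at its simplest? A sketch, assuming a hypothetical business function and a hand-captured “golden” set of expected results (in real life you’d use a framework like pytest and a maintained golden file, not an inline list):

```python
def calculate_invoice_total(subtotal, tax_rate, discount):
    """Hypothetical business function under test."""
    return round(subtotal * (1 - discount) * (1 + tax_rate), 2)

# Golden cases captured from current production behavior.
# Any code change that alters these outputs fails the suite --
# which is exactly the point of a regression test.
GOLDEN_CASES = [
    ({"subtotal": 100.0, "tax_rate": 0.07, "discount": 0.00}, 107.00),
    ({"subtotal": 250.0, "tax_rate": 0.07, "discount": 0.10}, 240.75),
    ({"subtotal": 0.0,   "tax_rate": 0.07, "discount": 0.00}, 0.00),
]

def run_regression_suite():
    """Return a list of (inputs, expected, actual) failures; empty means pass."""
    failures = []
    for inputs, expected in GOLDEN_CASES:
        actual = calculate_invoice_total(**inputs)
        if actual != expected:
            failures.append((inputs, expected, actual))
    return failures
```

The painful, expensive part isn’t writing this once; it’s keeping the golden cases honest every time the business logic legitimately changes.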

Third: Make #2 less expensive by cleaning up your interface tangle once and for all. While reliable industry statistics are hard to come by (strike that — they’re impossible to come by), anecdotal and conversational evidence suggests that the ongoing cost of maintaining an ad hoc collection of point-to-point interfaces can, over time, overwhelm IT’s ability to implement new applications and application changes.
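The arithmetic behind the tangle is worth a quick back-of-the-envelope check. With n applications, point-to-point wiring can require up to n(n−1)/2 interfaces, while routing everything through a hub (an integration platform, for instance) needs only n connections — a sketch of the worst case, not a claim about any particular shop:

```python
def point_to_point_interfaces(n_apps):
    # Worst case: every application talks directly to every other one.
    return n_apps * (n_apps - 1) // 2

def hub_interfaces(n_apps):
    # Hub-and-spoke: each application connects once, to the hub.
    return n_apps

for n in (10, 50, 100):
    print(f"{n} apps: up to {point_to_point_interfaces(n)} "
          f"point-to-point interfaces vs {hub_interfaces(n)} via a hub")
```

At 100 applications that’s up to 4,950 interfaces to maintain versus 100 — which is why the ad hoc tangle, left alone, eventually eats the budget for everything else.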

To put a bow on it: How could DevOps proponents make such an elementary mistake as conflating delivery and deployment? It’s sadly easy to understand. As a member in good standing of the KJR community, you understand there’s no such thing as an IT project.

But we’re still in the minority. The IT pundit class thinks moving from projects to products is an exciting transformation.

If the job is done when the software runs, CI/CDeployment is fine and dandy.

If it isn’t … choose the wrong goal and wrong practices are inevitable.