Is Chronodebt ever a good idea?

Chronodebt is the accumulated cost of remediating all IT assets that aren’t what engineering standards say they should be.

It’s what most of us have been calling “technical debt,” and I would too, except that Ward Cunningham and his fellows at the Agile Alliance have already claimed that term to mean “the implied cost of additional rework caused by choosing an easy solution now instead of using a better approach that would take longer.”

Anyway, the short answer is yes. Taking on Chronodebt is, in many circumstances, exactly the right choice. The conundrum preceded the advent of business computing by at least a century: during construction of the transcontinental railroad, the Southern Pacific Railroad was faced with a shortage of hardwood railroad ties.

And so, instead of waiting for enough hardwood ties to continue construction, it took on massive Chronodebt in the form of ties made of cheaper and plentiful cottonwood (“Cottonwood now or hardwood too late?”, KJR, 4/28/2003).

The cottonwood ties would only last a few years, but with them the company could generate enough revenue to replace them. Without them it would never have completed enough track to sell a single ticket.

The significant difference between this and most modern companies’ Chronodebt is that the Southern Pacific Railroad paid its Chronodebt off.

Chronodebt, like most other forms of debt, is neither good nor bad as an absolute. As with most other decisions, context matters. In the case of Chronodebt it all depends on what stage of concept development you’re in.

Imagine your “concept” barely deserves the term — it’s really more of a notion. You think it has promise, but you don’t have much evidence to support it.

It’s time to bet the farm!

And it is, but only if you’re betting someone else’s farm, if “someone else” is someone whose friendship you don’t value very much, and if you’ve checked with your lawyers to confirm you aren’t at risk when that someone else finds themselves farmless.

If it’s your kids’ college fund it’s time to launch Excel, or maybe Access, or an ISP’s generic eCommerce development kit.

If it isn’t all about you … if we’re talking about a corporate setting and it’s a proposal to try something new and different for which there isn’t and can’t be much data to bring to bear on the decision … then it’s still time for Excel, or maybe Access. Or, because it’s a corporation, perhaps SharePoint, or some SaaS product whose licensing terms aren’t too expensive and onerous.

It’s time, that is, for Chronodebt, because doing things the so-called “right way” probably means missing the opportunity altogether. And in fact we might not be talking about Chronodebt at all. Chronodebt in this situation comes from the danger of success, because it only has to be paid off if the idea pans out. Success is, to push the metaphor to the breaking point, the usurious interest rate charged for underinvestment, which wasn’t underinvestment until success happened.

Chronodebt is a good idea during the exploratory phase of innovation management. It’s a bad idea when innovations start to prove out. That’s when it’s time to replace the kludges and prototypes you built the new concept on with more robust and scalable alternatives … time, that is, to pay down the debt, which means investing in sustainability.

That isn’t the whole story, though.

There are times when a company’s whole business model starts to approach its use-by date. Imagine, for example, you’re CEO of a metropolitan daily newspaper and your presses are a major source of corporate Chronodebt. Time to pay it off by replacing them with something more modern?

Probably not. Like it or not (I don’t), newspaper print circulation has been steadily declining for decades and the more important metric — advertising revenue — is in even sharper decline. The best and most advanced presses money can buy won’t sell a single additional newspaper, or, more importantly, attract more advertisers.

As CEO, you tell the god Chronos to take a hike.

If, on the other hand, your on-line news site or mobile app is Chronodebt-bound, that’s another story entirely.

None of this is particularly complicated. And yet, especially in IT circles, we do have a tendency to consider engineering excellence to be an unalloyed and immutable good.

Sometimes, prototypes and kludges are exactly what the situation calls for.

And sometimes the right answer, although painful, is limping along on your ancient legacy systems until they crumble into dust.

I’m not sure what follows belongs in KJR, and if it does whether it offers anything new and insightful to what’s being published about the subject elsewhere.

Please share your opinion on both fronts, preferably in the Comments.

Thanks. — Bob

# # #

In the game of evolution by natural selection there are no rules. Anything a gene can do to insert more copies of itself in succeeding generations is considered fair play, not that the players have any sense they’re playing a game; not that the concept of “fair” plays any part in their thinking; not that thinking plays any part in most of the players’ lives.

Among the ways of dividing the world into two types of people … no, not “those who divide the world into two types of people and those who don’t” …

Where was I? Some of those in leadership roles figure rules are part of the game, and there’s really no point in winning without following them.

That’s in contrast to a different sort of leader — those who consider rules to be soft boundaries, to be followed when convenient, or when the risk of being caught violating them, multiplied by the penalties likely to be incurred as a result of the violation, is excessive.
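That risk-times-penalty calculus is just an expected-value comparison. Here’s a minimal sketch of it in Python — an illustrative model of the reasoning, not anything from the column, with all the numbers invented:

```python
def expected_penalty(p_caught: float, penalty: float) -> float:
    """Expected cost of breaking a rule: the probability of being
    caught times the penalty incurred if caught."""
    return p_caught * penalty

def breaks_rule(benefit: float, p_caught: float, penalty: float) -> bool:
    """A rules-as-soft-boundaries leader breaks the rule whenever
    the benefit exceeds the expected penalty."""
    return benefit > expected_penalty(p_caught, penalty)

# Hypothetical numbers: a $1M upside, a 10% chance of being caught,
# and a $5M fine make the expected penalty $500K -- so the rule goes.
print(breaks_rule(1_000_000, 0.10, 5_000_000))  # → True
```

Raise the odds of being caught to 50% and the expected penalty becomes $2.5M, and even this sort of leader follows the rule — which is exactly why enforcement, not rulemaking, is what changes their behavior.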

For this class of leader, the only rule is that there are no rules. Winning is all that matters.

Which gets us to a subject covered here a couple of weeks ago — the confluence of increasingly sophisticated artificial intelligence and simulation technologies, and their potential for abuse.

Before reading further, take a few minutes to watch a terrifying demonstration of just how easy it now is for a political candidate to, as described last week, “… use this technology to make it appear that their opponent gave a speech encouraging everyone to, say, embrace Satan as their lord and master.”

And thanks to Jon Payton for bringing this to our attention in the Comments.

Nor will this sort of thing be limited to unscrupulous politicians. Does anyone reading these words doubt that some CEO, in pursuit of profits, will put a doctored video on YouTube showing a competitor’s CEO explaining, to his board of directors, “Sure our products kill our customers! Who cares? We can conceal the evidence where no one will ever find it, and in the meantime our profits are much higher than they’d be if we bore the time and expense of making our products safe!”

Easy to make, hard to trace, and even harder to counter with the truth.

Once upon a time our vision of rogue AI depended on robots that autonomously selected human targets to obliterate.

Now? Skynet seems almost utopian. Its threat is physical and tangible.

Where we’re headed is, I think, even more dangerous.

The technology used to create “Deepfake” videos depends on one branch of artificial intelligence technology. Combine it with text generation that writes the script and we’re at the point where AI passes the well-known Turing test.

Reality itself is under siege, and Virtual is winning. Just as counterfeit money devalues real currency, so counterfeit reality devalues actual facts.

We can take limited comfort in knowing that, at least for now, researchers haven’t made AI self-directed. If, for example, a deepfake pornographic video shows up in which a controversial politician appears to have a starring role, we can be confident a human directed tame AIs to create and publicize it.

And here I have to apologize, on two fronts.

The first: KJR’s purpose is to give you ideas you can put to immediate, practical use. This isn’t that.

The second: As the old management adage has it, I’m supposed to provide solutions, not problems.

The best I have in the way of solutions is an AI arms race, where machine-learning AIs tuned to be deepfake detectors become part of our anti-malware standard kit. Or, if you’re a more militant sort, built to engage in deepfake search-and-destroy missions.

That’s in addition to the Shut the ‘Bots Up Act of 2019 I proposed last week, which would limit First Amendment rights to actual human beings.

It’s weak, but it’s the best I have.

How about you?