Turns out, the speed of light isn’t the universe’s limiting velocity. As evidence, I offer the SolarWinds security breach, which exited the news faster than any photon could follow.

Among the more interesting bits and pieces of the SolarWinds security fiasco was how it familiarized us with the phrase “supply chain” as a cloud computing consideration.

But first, in the interest of burying the lede …

The business case for cloud computing – we’re talking about public cloud providers like AWS, Azure, and GCP – has always been a bit fuzzy. For example:

Economics: The cloud saves companies money … except when it doesn’t. If the demand for computing resources is unpredictable, provisioning in the cloud is just the ticket, because the cloud lets you add and shed resources on demand.

That’s in contrast to on-premises provisioning, where you provision for a specified level of demand. If you can accurately predict demand and your negotiating skills are any good, you can probably buy enough computing resources to satisfy that demand for less than a cloud provider can rent them to you.
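If the arithmetic behind that tradeoff isn’t obvious, here’s a back-of-the-envelope sketch in Python. Every number in it is invented purely for illustration; the point is the shape of the comparison, not the figures.

```python
# Hypothetical comparison: fixed on-premises provisioning vs. pay-per-use
# cloud, under variable demand. All numbers are invented for illustration.

monthly_demand = [40, 55, 30, 90, 45, 60, 35, 120, 50, 42, 38, 70]  # server-months

# On-prem: you provision for peak demand, at a (well-negotiated) lower unit cost.
onprem_unit_cost = 300  # dollars per server-month, owned
onprem_total = max(monthly_demand) * onprem_unit_cost * len(monthly_demand)

# Cloud: higher unit cost, but you pay only for what you actually use each month.
cloud_unit_cost = 500  # dollars per server-month, rented
cloud_total = cloud_unit_cost * sum(monthly_demand)

print(f"On-prem, provisioned for peak: ${onprem_total:,}")  # $432,000
print(f"Cloud, pay per use:            ${cloud_total:,}")   # $337,500
```

Make the demand curve in that list flat and predictable and the comparison flips, which is the whole argument in twenty lines.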

Engineering: Modern computing platforms and infrastructure are complex, with a lot of (metaphorically) moving parts. In the ancient days, IT dealt with this by buying its infrastructure from a single-vendor supply chain that pre-packaged it (IBM, if you’re too annoyingly youthful to remember such things).

With the advent of distributed computing and multivendor environments, IT had to bring its infrastructure engineering expertise in-house, partially offsetting distributed systems’ lower prices while supplanting the single-link supply chain with one that has more links than a chain mail tunic.

Meanwhile, the requirements of multivendor supply chain management made the complexities of infrastructure engineering seem simple when compared to the complexities of service-provider contract negotiations. And, even worse, the complexities of multi-layer license agreements.

And, even worse than that, the aggravations of multivendor bickering and mutual finger-pointing whenever something goes wrong.

The rise of PaaS providers promised to reverse this trend – not completely, but enough that IT figured it could reduce both its vendor management and engineering burdens.

Security: In the early days of cloud computing, security was where the cloud value proposition seemed most dubious. Putting a company’s valuable data and business logic in the public cloud where IT had no control or oversight over how it was secured struck most CIOs and CSOs as a risky business at best.

But those were the good old days of basement-dwelling hobbyist hackers. Over the past decade or so these quaint relics of a bygone age have been replaced by malicious state actors and organized crime.

Meanwhile, working with a cloud provider has more and more in common with renting space in an office building: You’re relying on the architect who designed it and the construction firm that built it to select suppliers of concrete and girders that provide quality materials, and to hire a workforce that won’t plant concealed weaknesses in the structure.

You could, of course, hire your own architect, project manager, and construction workers and build your own office building.

But probably not. Unmetaphorically speaking, whether you manage your own data center and computing infrastructure or outsource it to a cloud services provider, you’re dealing with a complex, multi-layer supply chain.

The major cloud providers have economies of scale that let them evaluate suppliers and detect sophisticated incursions better than all but their largest customers can afford.

But on the other side of the Bitcoin, the major cloud providers are far more interesting targets for state- and organized-crime-scale intruders than you are.

Bob’s last word: Sometimes, making decisions is like dining at a gourmet buffet, where our choices are all good and the limiting factor is the size of our plates and appetites.

Other times, changing metaphors (again), the best we can do is, as Tony Mendez says in Argo, choose “the best bad plan we have.”

Right now, when it comes to cybersecurity, our situation is more Argo than buffet.

Bob’s sales pitch: Nope. I don’t consult on security. So I can’t help you there. But in the meantime, if you’re looking for reading material, I’m your guy. Help support KJR by buying some.

# # #

I tried to write a column based on Ruth Bader Ginsburg and how her passing affects us all.

I couldn’t do it.

Please accept my apologies.

# # #

Prepare for a double-eye-glazer. The subjects are metrics and application portfolio rationalization and management (APR/APM). We’re carrying on from last week, which covered some APR/APM fundamentals.

If, as you’re undoubtedly tired of reading, you can’t manage if you can’t measure, APM provides an object lesson in “no, that can’t be right.”

It can’t be right because constructing an objective metric that differentiates between well-managed and poorly managed application portfolios is, if not impossible, an esoteric enough challenge that most managers wouldn’t bother, anticipating the easily anticipated conversation with company decision-makers that would ensue:

Application Portfolio Manager: “As you can see, making these investments in the applications portfolio would result in the APR index rising by eleven percent.”

Company Decision Maker: “Let me guess. I can either trust you that the APR index means something, or I’ll have to suffer through an hour-long explanation, and even then I’d need to remember my high school trigonometry class to make sense of it. Right?”

Application Portfolio Manager: “Well …”

What makes this exercise so challenging?

Start with where most CIOs finish: Total Cost of Ownership — the ever-popular TCO, which unwary CIOs expect to be lower for well-managed application portfolios than for poorly managed ones.

They’re right that managing an applications portfolio better sometimes reduces TCO. Sadly, sometimes so does bad portfolio management, as when the portfolio manager decommissions every application that composes it.

Oh, and by the way, sometimes managing an applications portfolio better can increase TCO, as when IT implements applications that automate previously manual tasks, or that attract business on the Internet currently lost to competitors that already sell and support customers through web and mobile apps.

How about benefits minus costs — value?

Well, sure. If we define benefits properly, well-managed portfolios should always deliver more value than poorly managed ones, shouldn’t they?

Not to nitpick or nuthin’, but no: not because delivering value is a bad thing, but because, for the most part, information technology doesn’t deliver value. It enables it.

You probably don’t remember, but we covered how to measure the value of an enabler back in 2003. To jog your memory, it went like this:

1. Calculate the total cost of every business process (TCBP) IT supports.

2. Design the best possible alternative processes (BPAP) that use no technology more complicated than a hand calculator, and calculate their total cost.

3. BPAP - TCBP is the value provided by IT. (BPAP - TCBP)/TCBP is the return on IT investment — astronomical in nearly every case, I suspect, although possibly not as astronomical as the cost of actually going through the exercise.
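Here’s that calculation as a toy example. Both figures are hypothetical; real ones would come from costing actual business processes.

```python
# Toy illustration of the enabler-value arithmetic above.
# Both inputs are hypothetical placeholders.

tcbp = 10_000_000  # annual total cost of the business processes IT supports
bpap = 60_000_000  # annual cost of the best hand-calculator-only alternative

it_value = bpap - tcbp  # value provided by IT
roi = it_value / tcbp   # return on the IT investment

print(f"Value provided by IT: ${it_value:,}")  # $50,000,000
print(f"Return on IT investment: {roi:.0%}")   # 500%, astronomical as advertised
```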

It appears outcome metrics like cost and value won’t get us to where we need to go. How about something structural?

Start with the decisions application portfolio managers have to make (or, if they’re wiser, facilitate). Boil it all down and there are just two: (1) what is an application’s disposition — keep as is, extend and enhance, replace, retire, and so on — and (2) what is the priority for implementing these dispositions across the whole portfolio.

Disposition is a non-numeric metric — a metric in the same sense that “orange” is a metric. It depends on such factors as whether the application’s data are properly normalized, whether it’s built on company-standard platforms, and whether it’s a custom application when superior alternatives are now available in the marketplace.

Disposition is about what needs to be done. Priority is about when to do it. As such it depends on how big the risk is of not implementing the disposition, to what extent the application’s deficiencies impair business functioning, and, conversely, how big the opportunities are for implementing the dispositions … minus the disposition’s cost.

Priority, that is, is a reflection of each application’s health.
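If you insisted on writing those two decisions down as a data structure, a minimal sketch might look like the following. The applications, ratings, and scoring rule are all hypothetical, and every 1-to-5 input is, notably, a judgment call rather than a measurement.

```python
from dataclasses import dataclass
from enum import Enum

class Disposition(Enum):
    KEEP_AS_IS = "keep as is"
    EXTEND_AND_ENHANCE = "extend and enhance"
    REPLACE = "replace"
    RETIRE = "retire"

@dataclass
class Application:
    name: str
    disposition: Disposition
    risk_of_inaction: int     # 1-5: risk of not implementing the disposition
    business_impairment: int  # 1-5: how much its deficiencies impair the business
    opportunity: int          # 1-5: upside of implementing the disposition
    cost: int                 # 1-5: cost of implementing the disposition

    def priority(self) -> int:
        # Risk, impairment, and opportunity argue for acting sooner;
        # cost argues for acting later.
        return (self.risk_of_inaction + self.business_impairment
                + self.opportunity - self.cost)

# A hypothetical portfolio, rated by someone exercising judgment.
portfolio = [
    Application("Order entry", Disposition.REPLACE, 4, 5, 4, 3),
    Application("HR self-service", Disposition.KEEP_AS_IS, 1, 1, 1, 1),
    Application("Legacy billing", Disposition.RETIRE, 5, 3, 2, 4),
]

for app in sorted(portfolio, key=lambda a: a.priority(), reverse=True):
    print(f"{app.name}: {app.disposition.value}, priority score {app.priority()}")
```

The arithmetic is trivial. It’s the ratings feeding it that do the real work.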

Which gets us to the point of this week’s exercise: Most of what an application portfolio manager needs to know to decide on dispositions and priorities is subjective. In some cases the needed measures are subjective because making them objective requires too much effort, like having to map business processes in detail to identify where applications cause process bottlenecks.

Sometimes they’re just subjective, as when the question is about the risk that an application vendor will lose its standing in the applications marketplace.

All of which gets us to this: “If you can’t measure you can’t manage” had better not be true, because as often as not managers can’t measure.

But they have to manage anyway.