# # #

I tried to write a column based on Ruth Bader Ginsburg and how her passing affects us all.

I couldn’t do it.

Please accept my apologies.

# # #

Prepare for a double-eye-glazer. The subjects are metrics and application portfolio rationalization and management (APR/APM). We’re carrying on from last week, which covered some APR/APM fundamentals.

If, as you’re undoubtedly tired of reading, you can’t manage if you can’t measure, APM provides an object lesson in “no, that can’t be right.”

It can’t be right because constructing an objective metric that differentiates between well-managed and poorly managed application portfolios is, if not impossible, an esoteric enough challenge that most managers wouldn’t bother, anticipating the easily anticipated conversation with company decision-makers that would ensue:

Application Portfolio Manager: “As you can see, making these investments in the applications portfolio would result in the APR index rising by eleven percent.”

Company Decision Maker: “Let me guess. I can either trust you that the APR index means something, or I’ll have to suffer through an hour-long explanation, and even then I’d need to remember my high school trigonometry class to make sense of it. Right?”

Application Portfolio Manager: “Well …”

What makes this exercise so challenging?

Start with where most CIOs finish: Total Cost of Ownership — the ever-popular TCO, which unwary CIOs expect to be lower for well-managed application portfolios than for poorly managed ones.

They’re right that managing an applications portfolio better sometimes reduces TCO. Sadly, so, sometimes, does bad portfolio management, as when the portfolio manager decommissions every application in it.

Oh, and by the way, sometimes managing an applications portfolio better can increase TCO, as when IT implements applications that automate previously manual tasks, or that attract business on the Internet currently lost to competitors that already sell and support customers through web and mobile apps.

How about benefits minus costs — value?

Well, sure. If we define benefits properly, well-managed portfolios should always deliver more value than poorly managed ones, shouldn’t they?

Not to nitpick or nuthin’, but no. Not because delivering value is a bad thing, but because, for the most part, information technology doesn’t deliver value. It enables it.

You probably don’t remember, but we covered how to measure the value of an enabler back in 2003. To jog your memory, it went like this:

1. Calculate the total cost of every business process (TCBP) IT supports.

2. Design the best possible alternative processes (BPAP) that use no technology more complicated than a hand calculator.

3. BPAP - TCBP is the value provided by IT. (BPAP - TCBP)/TCBP is the return on IT investment: astronomical in nearly every case, I suspect, although possibly not as astronomical as the cost of actually going through the exercise.
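To make the arithmetic concrete, here’s a minimal sketch with invented numbers. They’re illustrations, not benchmarks:

```python
# Hypothetical illustration of the enabler-value arithmetic above.
# All figures are invented for the example; they're not benchmarks.

tcbp = 40_000_000   # total cost of the business processes IT supports, as-is
bpap = 180_000_000  # cost of the best possible alternative processes,
                    # using nothing fancier than a hand calculator

it_value = bpap - tcbp   # value enabled by IT
roi = it_value / tcbp    # return, per the formula above

print(f"Value enabled by IT: ${it_value:,.0f}")
print(f"Return on IT investment: {roi:.0%}")   # astronomical, as predicted
```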

It appears outcome metrics like cost and value won’t get us to where we need to go. How about something structural?

Start with the decisions application portfolio managers have to make (or, if they’re wiser, facilitate). Boil it all down and there are just two: (1) what is an application’s disposition — keep as is, extend and enhance, replace, retire, and so on — and (2) what is the priority for implementing these dispositions across the whole portfolio.

Disposition is a non-numeric metric — a metric in the same sense that “orange” is a metric. It depends on such factors as whether the application’s data are properly normalized, whether it’s built on company-standard platforms, and whether it’s a custom application when superior alternatives are now available in the marketplace.

Disposition is about what needs to be done. Priority is about when to do it. As such it depends on how big the risk is of not implementing the disposition, to what extent the application’s deficiencies impair business functioning, and, conversely, how big the opportunities are for implementing the dispositions … minus the disposition’s cost.

Priority, that is, is a reflection of each application’s health.
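If you want to see how that might roll up in practice, here’s a hypothetical sketch: subjective 1-to-5 ratings for risk, impairment, and opportunity, less the disposition’s cost, turned into a ranked list. The application names, ratings, and equal weighting are all assumptions for illustration, not a prescribed method:

```python
# A minimal, hypothetical sketch: rolling up subjective 1-5 ratings into a
# priority score per application. Names, ratings, and weights are invented.

applications = {
    # name: (risk of inaction, business impairment, opportunity, cost of disposition)
    "Order entry":    (4, 5, 3, 2),
    "HR self-serve":  (2, 2, 1, 1),
    "Legacy billing": (5, 4, 4, 5),
}

def priority_score(risk, impairment, opportunity, cost):
    # Benefit-side ratings add; the disposition's cost subtracts, as above.
    return risk + impairment + opportunity - cost

ranked = sorted(applications.items(),
                key=lambda kv: priority_score(*kv[1]),
                reverse=True)

for name, ratings in ranked:
    print(f"{name}: priority {priority_score(*ratings)}")
```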

Which gets us to the point of this week’s exercise: Most of what an application portfolio manager needs to know to decide on dispositions and priorities is subjective. In some cases the needed measures are subjective because making them objective requires too much effort, like having to map business processes in detail to identify where applications cause process bottlenecks.

Sometimes they’re just subjective, as when the question is about the risk that an application vendor will lose its standing in the applications marketplace.

All of which gets us to this: “If you can’t measure you can’t manage” had better not be true, because as often as not managers can’t measure.

But they have to manage anyway.

# # #

Rank the most-reported aspects of COVID-19, in descending order of worst-explained-ness. Modeling is, if not at the top, close to it.

Which is a shame, because beyond improving our public policy discussions, better coverage would also help all of us who think in terms of business strategy and tactics think more deeply and, perhaps, more usefully about the role modeling might play in business planning.

For those interested in the public-health dimension, “COVID-19 Models: Can They Tell Us What We Want to Know?” by Josh Michaud, Jennifer Kates, and Larry Levitt (KFF, the Kaiser Family Foundation, April 16, 2020) provides a useful summary. It discusses three types of model that, translated into business planning terms, we might call actuarial, simulation, and multivariate-statistical.

Actuarial models divide a population into groups (cohorts) and move numbers of members of each cohort to other cohorts based on a defined set of rules. If you run an insurance company that needs to price risk (there’s no other kind), actuarial models are a useful alternative to throwing darts.
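Here’s a toy sketch of the mechanics, with invented cohorts and transition rates. It isn’t how any actual actuary prices anything, but it shows the shape of the technique:

```python
# A toy cohort-transition (actuarial-style) model: each period, defined
# fractions of each cohort move to other cohorts. Names and rates invented.

cohorts = {"active": 10_000.0, "lapsed": 0.0, "claimed": 0.0}

# Per-period transition rules: (from, to, fraction of 'from' that moves)
rules = [
    ("active", "lapsed", 0.08),   # 8% of active policies lapse each period
    ("active", "claimed", 0.02),  # 2% file a claim each period
]

def step(cohorts, rules):
    moves = {name: 0.0 for name in cohorts}
    for src, dst, frac in rules:
        moved = cohorts[src] * frac
        moves[src] -= moved
        moves[dst] += moved
    return {name: cohorts[name] + moves[name] for name in cohorts}

for period in range(1, 6):
    cohorts = step(cohorts, rules)
    print(period, {k: round(v) for k, v in cohorts.items()})
```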

Imagine that instead you’re responsible for managing a business process of some kind. A common mistake process designers make is describing processes as collections of interconnected boxes.

It’s a mistake because most business processes consist of queues, not boxes. Take a six-step process, where each step takes an hour to execute. Add the steps and the cycle time should be six hours.

Measure cycle time and it’s more likely to be six days. That’s because each item tossed into a queue has to wait its turn before anyone starts to work on it.

Think of these queues as actuarial cohorts and you stand a much better chance of accurately forecasting process cycle time and throughput — an outcome process managers presumably might find useful.
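In case you’d like to see the effect rather than take my word for it, here’s a small simulation sketch: six steps, an hour of touch time each, one person working each step’s queue. The arrival rate and staffing are assumptions chosen for illustration. Run it and average cycle time lands noticeably above the six hours of touch time:

```python
# A minimal sketch of why queue-to-queue beats box-to-box: six steps, each
# taking one hour of actual work, but each step is a FIFO queue worked by
# one person. Arrival pattern and staffing are invented for illustration.

import random

random.seed(1)

N_ITEMS = 40
N_STEPS = 6
WORK_HOURS = 1.0          # touch time per step
ARRIVALS_PER_HOUR = 0.9   # just under each step's capacity of 1 per hour

# Random arrival times, one item at a time
arrival, arrivals = 0.0, []
for _ in range(N_ITEMS):
    arrival += random.expovariate(ARRIVALS_PER_HOUR)
    arrivals.append(arrival)

cycle_times = []
free_at = [0.0] * N_STEPS          # when each step's worker is next available
for t0 in arrivals:
    t = t0
    for s in range(N_STEPS):
        start = max(t, free_at[s])  # wait in the queue if the worker is busy
        t = start + WORK_HOURS
        free_at[s] = t
    cycle_times.append(t - t0)

print(f"Touch time per item: {N_STEPS * WORK_HOURS:.0f} hours")
print(f"Average cycle time:  {sum(cycle_times) / len(cycle_times):.1f} hours")
```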

Truth in advertising: I don’t know if anyone has ever tried applying actuarial techniques to process analysis. But queue-to-queue vs box-to-box process analysis? It’s one of Lean’s most important contributions.

Simulation models are as the name implies. They define a collection of “agents” that behave like entities in the situation being simulated. The more accurately they estimate the numbers of each type of agent, the probability distributions of each type’s behaviors, and the outcomes of those behaviors, including the outcomes of encounters among agents, the more accurate the model’s predictions.

For years, business strategists have talked about a company’s “business model.” These have mostly been narratives rather than true models. That is, they’ve been qualitative accounts of the buttons and levers business managers can push and pull to get the outcomes they want.

There’s no reason to think sophisticated modelers couldn’t develop equivalent simulation models to forecast the impact of different business strategies and tactics on, say, customer retention, mindshare, and walletshare.
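For flavor, here’s a toy agent-based sketch along those lines, simulating customer retention under a single made-up lever (service quality). Every name, probability, and behavior in it is an assumption, there to show the shape of the thing rather than to model any actual business:

```python
# A toy agent-based sketch: each customer "agent" decides monthly whether to
# stay, influenced by a service-quality lever the business can set.
# All names and probabilities are invented for illustration.

import random

random.seed(7)

def simulate_retention(service_quality, n_customers=1000, months=12):
    """service_quality in [0, 1]; higher means fewer defections."""
    customers = [True] * n_customers          # True = still a customer
    for _ in range(months):
        for i, active in enumerate(customers):
            if not active:
                continue
            churn_prob = 0.05 * (1.0 - service_quality) + 0.01
            if random.random() < churn_prob:
                customers[i] = False
    return sum(customers) / n_customers

for quality in (0.2, 0.5, 0.9):
    print(f"service quality {quality}: "
          f"{simulate_retention(quality):.0%} retained after a year")
```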

If one of your modeling goals is understanding how something works, simulation is just the ticket.

The third type of model, multivariate-statistical, applies such techniques as multiple regression analysis, analysis of variance, and multidimensional scaling to large datasets to determine how strongly different hypothesized input factors correlate with the outputs that matter. For COVID-19, input factors are such well-known variables as adherence to social distancing, use of masks and gloves, and not pressuring a cohabiter to join you in your kale and beet salad diet. Outputs are correlations to rates of infection and strangulation.

In business, multivariate-statistical modeling is how most analytics gets done. It’s also more or less how neural-network-based machine learning works. It works better for interpolation than extrapolation, and depends on figuring out which way the arrow of causality points when an analysis discovers a correlation.
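If you want the flavor of the technique, here’s a minimal sketch: an ordinary least-squares fit on synthetic data using numpy, with the usual caveats about interpolation, extrapolation, and causality in the comments. The factors and data are invented:

```python
# A minimal multivariate-statistical sketch: fit a multiple regression on
# synthetic data and inspect how strongly each hypothesized input factor
# relates to the output. Data and factor meanings are invented.

import numpy as np

rng = np.random.default_rng(0)
n = 500

# Hypothesized input factors (e.g., ad spend, discount depth, support wait time)
X = rng.normal(size=(n, 3))
true_coefs = np.array([2.0, 0.5, -1.5])
y = X @ true_coefs + rng.normal(scale=1.0, size=n)   # the output that matters

# Ordinary least squares with an intercept term
X1 = np.column_stack([np.ones(n), X])
coefs, *_ = np.linalg.lstsq(X1, y, rcond=None)

print("intercept and fitted coefficients:", np.round(coefs, 2))
# Caution from the text: this interpolates well within the range of the data,
# says nothing by itself about causality, and extrapolates at your own risk.
```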

As with all programming, model value depends on testing, although model testing is more about consistency and calibration than defect detection. And COVID-19 models have brought the impact of data limitations on model outputs into sharp focus.

For clarity’s sake: Models are consistent when output metrics improve and get worse in step with reality. They’re calibrated when the output metrics match real-world measurements.
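A small sketch of the difference, with invented numbers (Python 3.10 or later, for statistics.correlation):

```python
# "Consistent" means model outputs move in step with reality (they improve and
# worsen together); "calibrated" means they match reality's level. Invented data.

import statistics

reality = [100, 120, 150, 140, 180]
model_a = [ 55,  66,  83,  77,  99]   # consistent (moves in step) but not calibrated
model_b = [101, 118, 152, 139, 181]   # consistent and (roughly) calibrated

def consistency(model, actual):
    return statistics.correlation(model, actual)

def calibration_error(model, actual):
    return statistics.mean(abs(m - a) for m, a in zip(model, actual))

for name, model in (("A", model_a), ("B", model_b)):
    print(f"model {name}: consistency={consistency(model, reality):.2f}, "
          f"mean absolute error={calibration_error(model, reality):.1f}")
```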

With COVID-19, testers have to balance clinical and statistical needs. Clinically, testing is how physicians determine which disease they’re treating, leading to the exact opposite of random sampling. With non-random samples, testing for consistency is possible, but calibration testing is, at best, contorted.

Lacking enough testing capacity to satisfy both clinical and statistical demands (and clinical needs, for most of us, must come first as an ethical necessity), modelers are left to de-bias their non-random datasets, an inexact practice at best that limits their ability to calibrate their models. That different models yield different forecasts is unsurprising.
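For what it’s worth, here’s a sketch of one common de-biasing move, post-stratification reweighting, under the generous assumption that we at least know each stratum’s true share of the population. The strata, counts, and rates are invented:

```python
# A sketch of post-stratification reweighting. Strata, shares, and counts
# are invented for illustration.

sample = {
    # stratum: (number tested, number positive) -- a clinically driven,
    # decidedly non-random sample that over-represents the symptomatic
    "symptomatic":  (800, 240),
    "asymptomatic": (200,  10),
}

population_share = {"symptomatic": 0.10, "asymptomatic": 0.90}

# Naive positivity rate, ignoring the sampling bias
naive = sum(pos for _, pos in sample.values()) / sum(n for n, _ in sample.values())

# Reweighted estimate: weight each stratum's rate by its true population share
reweighted = sum(population_share[s] * pos / n for s, (n, pos) in sample.items())

print(f"naive estimate:      {naive:.1%}")
print(f"reweighted estimate: {reweighted:.1%}")
# Still inexact: it assumes the within-stratum rates are representative,
# which is exactly what a non-random sample can't guarantee.
```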

And guess what: Your own data scientists face a similar challenge: Their datasets are piles of business transactions that are, by their very nature, far from random.

Exercise suitable caution.