You’re managing a project. What can go wrong?

Well …

Scenario #1: It’s hard to overcome the CEO

A friend managed a business transformation tied to a large software suite (I’m not allowed to be more specific). Her client was a multinational concern that wasn’t wise in the ways of project management. She had a strong team, the right work breakdown structure, a good working relationship with the vendor, and a committed executive sponsor.

But then the CEO happened, reorganizing the company not once but several times. All by itself, just one restructuring would have driven quite a bit of re-work into the project. Multiples multiplied the impact.

And that’s before the ritual layoff of several people the project couldn’t do without. As The Mythical Man-Month makes clear, replacing key staff slowed things down even more: the effort needed to acclimate their replacements to the project and their responsibilities exceeded any benefit the newcomers could provide.

My friend got the project done, but it got done much uglier than it needed to be.

Lesson learned: Include a list of specific critical personnel in your project Risks and Issues reporting, and make sure that reporting is visible at least one layer higher than the project’s sponsor. It won’t completely prevent the chaos, but it might reduce it.

Scenario #2: A filter that should be a conduit

So you say your executive sponsor cares deeply about your project’s success. You say he’s assigned the right people to the core team, and has let everyone else know they should support the project when their support is called for.

And … he’s a busy guy, so he’s delegated day-to-day sponsorship to a trusted member of his team, who is to be your primary Point of Contact. His busyness also means he has no time for regular face-to-face updates.

But not to worry. Your PoC meets with him weekly, and will keep him informed.

As the project progresses, unexpected discoveries drive a number of course corrections. Taken one at a time, none seem particularly controversial, so you and your PoC make the decisions and move on.

A couple of months later, though, with a major milestone approaching, you bring the sponsor in for a briefing. That’s when you discover that what seemed minor to you seems less minor to your sponsor, and that the decisions you and your PoC made to resolve the issues weren’t the solutions the sponsor would have chosen.

This is when you find out your PoC either hasn’t embraced the “bad news doesn’t improve with age” dictum, or, like you, didn’t think the issues in question were important enough to mention in his weekly updates.

And, it’s when you first figure out the sponsor defines “handled correctly” as “how I would have handled it,” and “handled wrong” as “all other ways of handling it.”

So now you have an irritated sponsor and a project schedule that’s in recovery mode.

You can’t entirely avoid this. What might at least help: before each of your PoC’s weekly meetings with the project sponsor, rehearse the topics to be covered in the project update.

Scenario #3: EPMO — enabler or bureaucracy?

Congratulations! As a result of your many well-managed projects and the value they delivered, you’ve been promoted to the Enterprise Program Management Office — the EPMO. In your new role you’re responsible for ensuring all project investments are worthwhile, and providing oversight to make sure they’re well-managed by project managers who aren’t you.

And so, guided by “industry best practices,” you establish a governance process to screen out proposals that don’t make the grade.

Then you start to hear those governed by the EPMO use the B-word in your general direction. No, not that B-word. Bureaucrat.

Which, if you think your job is to screen out bad proposals, you’ve become.

First and worst, a bureaucrat evaluates proposals. A leader evaluates the ideas behind the proposals.

Second and almost as bad, if you expect to see dumb ideas you’ll see dumb ideas, because most people, most of the time, see what they expect to see. And anyway, if what you do is screen out dumb ideas, you’ll pass the proposals that don’t give you a reason to screen them out, not those that give you a reason to keep them in.

So take the B out of your job. Starting tomorrow, the EPMO’s job is to help good ideas succeed.

Followed by your stretch goal: to help turn good ideas into great ones.

Rank the most-reported aspects of COVID-19, in descending order of worst-explained-ness. Modeling is, if not at the top, close to it.

Which is a shame, because beyond improving our public policy discussions, better coverage would also help all of us who think in terms of business strategy and tactics think more deeply and, perhaps, more usefully about the role modeling might play in business planning.

For those interested in the public-health dimension, “COVID-19 Models: Can They Tell Us What We Want to Know?” by Josh Michaud, Jennifer Kates, and Larry Levitt (KFF, the Kaiser Family Foundation, Apr 16, 2020) provides a useful summary. It discusses three types of model that, translated to business planning terms, we might call actuarial, simulation, and multivariate-statistical.

Actuarial models divide a population into groups (cohorts) and move numbers of members of each cohort to other cohorts based on a defined set of rules. If you run an insurance company that needs to price risk (there’s no other kind), actuarial models are a useful alternative to throwing darts.
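If it helps to see the mechanics, here’s a minimal sketch in Python. The cohorts, transition rates, and twelve-period horizon are all invented for illustration, not taken from any real actuarial model:

```python
# A toy cohort model: each period, fixed fractions of each cohort
# move to other cohorts according to a defined set of rules.
# All numbers here are hypothetical.

cohorts = {"active": 10_000, "claimant": 0, "lapsed": 0}

rules = [
    ("active", "claimant", 0.02),   # 2% of active policyholders file a claim
    ("active", "lapsed",   0.05),   # 5% let their policies lapse
    ("claimant", "active", 0.80),   # most claimants return to active status
]

def step(cohorts, rules):
    """Apply every transition rule once; return next period's cohort sizes."""
    net_change = {name: 0.0 for name in cohorts}
    for source, target, rate in rules:
        moved = cohorts[source] * rate
        net_change[source] -= moved
        net_change[target] += moved
    return {name: cohorts[name] + net_change[name] for name in cohorts}

for _ in range(12):   # project twelve periods forward
    cohorts = step(cohorts, rules)

print({name: round(count) for name, count in cohorts.items()})
```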

Imagine that instead you’re responsible for managing a business process of some kind. A common mistake process designers make is describing processes as collections of interconnected boxes.

It’s a mistake because most business processes consist of queues, not boxes. Take a six-step process, where each step takes an hour to execute. Add the steps and the cycle time should be six hours.

Measure cycle time and it’s more likely to be six days. That’s because each item tossed into a queue has to wait its turn before anyone starts to work on it.

Think of these queues as actuarial cohorts and you stand a much better chance of accurately forecasting process cycle time and throughput, an outcome process managers would presumably find useful.
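To see how big the gap can get, here’s a rough simulation. The arrival pattern, utilization, and run length are illustrative assumptions; the point is only that cycle time dwarfs the six hours of actual work:

```python
import random
from collections import deque

random.seed(1)

STEPS = 6          # six-step process, one hour of touch time per step
HOURS = 5000       # long run so the queues settle into a pattern

queues = [deque() for _ in range(STEPS)]
cycle_times = []

for hour in range(HOURS):
    # Work arrives in lumps (an assumption): two items at once, averaging
    # about 95% of the process's capacity.
    if random.random() < 0.475:
        queues[0].extend([hour, hour])

    # Each step finishes at most one item per hour. Working from the last
    # step backward keeps an item from flowing through every step in one hour.
    for step in reversed(range(STEPS)):
        if queues[step]:
            arrived = queues[step].popleft()
            if step + 1 < STEPS:
                queues[step + 1].append(arrived)
            else:
                cycle_times.append(hour + 1 - arrived)  # finished at end of hour

print("touch time: 6 hours")
print(f"average cycle time: {sum(cycle_times) / len(cycle_times):.0f} hours")
```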

Truth in advertising: I don’t know if anyone has ever tried applying actuarial techniques to process analysis. But queue-to-queue vs box-to-box process analysis? It’s one of Lean’s most important contributions.

Simulation models are what the name implies: they define a collection of “agents” that behave like entities in the situation being simulated. The more accurately they estimate the number of agents of each type, the probability distributions of each type’s behaviors, and the outcomes of those behaviors … including the outcomes of encounters among agents … the more accurate the model’s predictions.
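For a flavor of what that might look like in a business setting, here’s a deliberately tiny agent-based sketch of customer retention. The behavior rules, probabilities, and “service quality” knob are all made up for the example:

```python
import random

random.seed(42)

class Customer:
    """A hypothetical agent: keeps a satisfaction score, may churn."""
    def __init__(self):
        self.satisfaction = random.uniform(0.4, 0.9)
        self.active = True

    def experience_service(self, service_quality):
        # Each interaction nudges satisfaction toward the service quality.
        self.satisfaction += 0.3 * (service_quality - self.satisfaction)

    def decide_to_stay(self):
        # Lower satisfaction means a higher chance of churning (invented rate).
        if random.random() < 0.2 * (1 - self.satisfaction):
            self.active = False

def simulate(service_quality, n_customers=1_000, periods=12):
    agents = [Customer() for _ in range(n_customers)]
    for _ in range(periods):
        for agent in agents:
            if agent.active:
                agent.experience_service(service_quality)
                agent.decide_to_stay()
    return sum(a.active for a in agents) / n_customers

print("retention at service quality 0.7:", simulate(0.7))
print("retention at service quality 0.9:", simulate(0.9))
```

Swap in more realistic behaviors and encounter rules and you have the beginnings of a model that predicts outcomes instead of merely narrating them.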

For years, business strategists have talked about a company’s “business model.” These have mostly been narratives rather than true models. That is, they’ve been qualitative accounts of the buttons and levers business managers can push and pull to get the outcomes they want.

There’s no reason to think sophisticated modelers couldn’t develop equivalent simulation models to forecast the impact of different business strategies and tactics on, say, customer retention, mindshare, and walletshare.

If one of your modeling goals is understanding how something works, simulation is just the ticket.

The third type of model, multivariate-statistical, applies such techniques as multiple regression analysis, analysis of variance, and multidimensional scaling to large datasets to determine how strongly different hypothesized input factors correlate with the outputs that matter. For COVID-19, input factors are such well-known variables as adherence to social distancing, use of masks and gloves, and not pressuring a cohabiter to join you in your kale and beet salad diet. Outputs are correlations to rates of infection and strangulation.

In business, multivariate-statistical modeling is how most analytics gets done. It’s also more or less how neural-network-based machine learning works. It works better for interpolation than extrapolation, and depends on figuring out which way the arrow of causality points when an analysis discovers a correlation.
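A toy version of the idea, with made-up transactions standing in for a real dataset (the “true” relationship is baked in so the regression has something to find):

```python
import numpy as np

rng = np.random.default_rng(0)

# Invented data: two hypothesized input factors and one outcome.
n = 500
ad_spend = rng.uniform(0, 100, n)     # dollars, made up
discount = rng.uniform(0, 0.3, n)     # fraction, made up
revenue = 50 + 2.0 * ad_spend - 40.0 * discount + rng.normal(0, 10, n)

# Multiple regression: how strongly does each factor relate to the outcome?
X = np.column_stack([np.ones(n), ad_spend, discount])
coefficients, *_ = np.linalg.lstsq(X, revenue, rcond=None)
print("intercept, ad_spend, discount:", np.round(coefficients, 2))

# Interpolation (predicting inside the observed range) is reasonably safe.
# Extrapolating to ad_spend = 1,000 assumes the fitted relationship still
# holds far outside anything the data ever showed, which it may not.
```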

As with all programming, model value depends on testing, although model testing is more about consistency and calibration than defect detection. And COVID-19 models have brought the impact of data limitations on model outputs into sharp focus.

For clarity’s sake: Models are consistent when output metrics improve and get worse in step with reality. They’re calibrated when the output metrics match real-world measurements.
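In code, using invented numbers, the distinction looks something like this:

```python
# Invented numbers: Model A tracks reality's ups and downs but at half the
# magnitude (consistent, not calibrated); Model B also matches the magnitudes.
reality = [100, 120, 150, 140, 180]
model_a = [ 50,  60,  75,  70,  90]
model_b = [105, 118, 149, 143, 178]

def is_consistent(model, reality):
    """Consistent: the model rises and falls whenever reality does."""
    return all(
        (m2 > m1) == (r2 > r1)
        for m1, m2, r1, r2 in zip(model, model[1:], reality, reality[1:])
    )

def average_error(model, reality):
    """Calibrated: the model's outputs match real-world measurements."""
    return sum(abs(m - r) for m, r in zip(model, reality)) / len(reality)

for name, model in (("Model A", model_a), ("Model B", model_b)):
    print(name, "consistent:", is_consistent(model, reality),
          "average error:", average_error(model, reality))
```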

With COVID-19, testers have to balance clinical and statistical needs. Clinically, testing is how physicians determine which disease they’re treating, leading to the exact opposite of random sampling. With non-random samples, testing for consistency is possible, but calibration testing is, at best, contorted.

There isn’t enough testing capacity to satisfy clinical demands, which for most of us must come first as an ethical necessity. Modelers are left to de-bias their non-random datasets, an inexact practice at best that limits their ability to calibrate their models. That different models yield different forecasts is unsurprising.
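To make “de-bias” less abstract, here’s one simple reweighting sketch. The groups, shares, and test results are invented, and real de-biasing is considerably messier:

```python
# A non-random sample: mostly people tested because they had symptoms,
# which overstates how common positives are in the broader population.
sample = [
    {"group": "symptomatic", "positive": 1},
    {"group": "symptomatic", "positive": 1},
    {"group": "symptomatic", "positive": 0},
    {"group": "screening",   "positive": 0},
]

# Assumed (not measured) shares of each group in the real population.
population_share = {"symptomatic": 0.1, "screening": 0.9}

sample_share = {
    g: sum(r["group"] == g for r in sample) / len(sample)
    for g in population_share
}
weights = {g: population_share[g] / sample_share[g] for g in population_share}

naive = sum(r["positive"] for r in sample) / len(sample)
reweighted = (
    sum(r["positive"] * weights[r["group"]] for r in sample)
    / sum(weights[r["group"]] for r in sample)
)
print(f"naive positivity: {naive:.0%}; reweighted: {reweighted:.0%}")
```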

And guess what: Your own data scientists face a similar challenge: Their datasets are piles of business transactions that are, by their very nature, far from random.

Exercise suitable caution.