Projects should have a positive return on investment – wisdom shared so often that our extra-ocular musculature has probably thrown in the towel by now.

Those less schooled in the mysteries of management decision-making might be forgiven for thinking this means projects should return more money to the corporate coffers than the company invests in them.

Those with a bit more financial sophistication add opportunity cost to the calculation. Projects, in this more-robust view, should return not only the initial investment, but also the dividends and interest that would have been earned on that money had it been invested in a financial instrument of some kind.

This threshold is called the hurdle rate. Not the hurl rate, although many discussions about project desirability contribute to this. Project governance mavens insist that proposed projects promise to clear a set rate of return – a hurdle in the run-fast-and-jump-high-enough sense of the word.
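In case the arithmetic helps, here's a minimal sketch of the hurdle-rate test – the cash flows and the 12% rate are made up for illustration, not a recommendation:

```python
# A minimal sketch of the hurdle-rate test. The cash flows and the 12%
# hurdle rate below are made up for illustration.

def npv(rate: float, cash_flows: list[float]) -> float:
    """Net present value: cash_flows[0] is the up-front investment
    (negative); later entries are the returns in each following year."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

hurdle_rate = 0.12  # the return the money could have earned invested elsewhere
project = [-100_000, 30_000, 40_000, 50_000, 20_000]

# The project "clears the hurdle" if it beats investing the money at the
# hurdle rate, which is to say, if its NPV at that rate is positive.
print(f"NPV at hurdle rate: {npv(hurdle_rate, project):,.2f}")
print("Clears the hurdle" if npv(hurdle_rate, project) > 0 else "Vetoed")
```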

It’s a superficially plausible criterion that isn’t so much wrong as it is, as someone once observed, insufficiently right. Why it’s insufficiently right is something any chess player who has progressed beyond the novice level of play would recognize.

Novice chess players are schooled in ROI-based decision-making. Each chess piece is, according to this model, worth a given number of points: one for a pawn, three for a knight or bishop, five for a rook, nine for the queen. Why those values? Don’t worry about it unless you’re just curious – they’re convention, not anything the rules of chess define.

Anyway, ROI-based chess players will cheerfully trade any piece for an opponent’s piece or pieces that are worth more in total than the piece they’re sacrificing – trades, that is, that have a positive chess-piece-point-count ROI.
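To see just how mechanical this is, here's a sketch of the novice's trade calculator – the point values are the conventional ones above; the function and example are invented for illustration:

```python
# The novice's "chess ROI" calculator. The point values are the
# conventional ones taught to beginners; everything else is illustrative.
PIECE_VALUES = {"pawn": 1, "knight": 3, "bishop": 3, "rook": 5, "queen": 9}

def trade_roi(given: list[str], taken: list[str]) -> int:
    """Point-count profit of a trade: positive means 'take the deal'."""
    return (sum(PIECE_VALUES[p] for p in taken)
            - sum(PIECE_VALUES[p] for p in given))

# A knight for a rook nets +2 points, so the ROI-based player accepts,
# whether or not the trade wrecks their position.
print(trade_roi(given=["knight"], taken=["rook"]))  # 2
```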

It’s a formula that’s as plausible and wrong for chess-playing as ROI-based decision-making is for project governance decisions.

The fault in ROI-driven decision-making logic stems from this characteristic of business (and chess): Strategies don’t have ROIs.

In chess, strategic decisions are based on whether a move will increase the likelihood of beating the opponent. Removing an opponent’s most powerful pieces certainly can contribute to this, but so can other moves.

In business, strategic decisions should, in similar fashion, be rooted in beating opponents – in a word (okay, in two words) – competitive advantage.

This is, by the way, the flaw in stock buy-backs. When a board of directors decides to buy back stock it’s spending money that could have been used to make products more appealing or customer-care more loyalty-building. Instead, the board reduces the number of stock shares profits are allocated to, artificially … and temporarily … inflating the company’s earnings-per-share calculation.

Nothing about this analysis makes a focus on ROI wrong. Sure, a project that delivers untold wealth to the corporate coffers is, more often than not, a good idea.

But not always. A project that, for example, makes a colossal profit by posting a few million more cat videos to YouTube is sufficiently horrific that it should be vetoed by all right-thinking (and, for that matter, left-thinking) individuals, ROI or no ROI.

But I digress. Getting back to the point, strategy doesn’t have an ROI. It might seem to – you’d sure think competitive advantage should generate countable currency – but that’s rarely the case. One reason is something that, in evolutionary theory, is called the Red Queen hypothesis. It proposes that newly evolved adaptive advantages don’t always confer lasting results, because a species that evolves an advantage prompts its predators, prey, or competitors to counter it with now-advantageous adaptations of their own.

Bob’s last word: I trust the business parallel is clear. But we need to take this one step further: As with so many instances of organizational dysfunction, the insistence on ROI stems from an unhealthy emphasis on measurement.

ROI makes value measurable. Not really, but it looks like it. Competitive advantage, for example, generates a financial return, but the size of that return can’t be predicted in advance. It isn’t just that anyone who tries to predict future customer behavior is about as reliable a source as Nostradamus (although they are).

It’s also that predicting how competitors will respond to a company’s strategy is almost as hard, and arguably more important.

Bob’s sales pitch: About once a month I publish a piece on CIO.com under the heading “CIO Survival Guide.” They’re a bit longer than KJR. And as the title implies, they have a more overt CIO focus. You can see them all at Bob Lewis | CIO.

New on CIO.com’s CIO Survival Guide: “Why IT communications fail to communicate.” The point? Never confuse documentation with communication. The purpose of documentation is to remind, not to communicate.

# # #

I tried to write a column based on Ruth Bader Ginsburg and how her passing affects us all.

I couldn’t do it.

Please accept my apologies.

# # #

Prepare for a double-eye-glazer. The subjects are metrics and application portfolio rationalization and management (APR/APM). We’re carrying on from last week, which covered some APR/APM fundamentals.

If, as you’re undoubtedly tired of reading, “you can’t manage if you can’t measure,” APM provides an object lesson in “no, that can’t be right.”

It can’t be right because constructing an objective metric that differentiates between well-managed and poorly managed application portfolios is, if not impossible, an esoteric enough challenge that most managers wouldn’t bother, anticipating the easily anticipated conversation with company decision-makers that would ensue:

Application Portfolio Manager: “As you can see, making these investments in the applications portfolio would result in the APR index rising by eleven percent.”

Company Decision Maker: “Let me guess. I can either trust you that the APR index means something, or I’ll have to suffer through an hour-long explanation, and even then I’d need to remember my high school trigonometry class to make sense of it. Right?”

Application Portfolio Manager: “Well …”

What makes this exercise so challenging?

Start with where most CIOs finish: Total Cost of Ownership — the ever-popular TCO, which unwary CIOs expect to be lower for well-managed application portfolios than for poorly managed ones.

They’re right that managing an applications portfolio better sometimes reduces TCO. Sadly, sometimes so does bad portfolio management, as when the portfolio manager decommissions every application in it.

Oh, and by the way, sometimes managing an applications portfolio better can increase TCO, as when IT implements applications that automate previously manual tasks, or that attract business on the Internet currently lost to competitors that already sell and support customers through web and mobile apps.

How about benefits minus costs — value?

Well, sure. If we define benefits properly, well-managed portfolios should always deliver more value than poorly managed ones, shouldn’t they?

Not to nitpick or nuthin’, but no. Not because delivering value is a bad thing, but because, for the most part, information technology doesn’t deliver value. It enables it.

You probably don’t remember, but we covered how to measure the value of an enabler back in 2003. To jog your memory, it went like this:

1. Calculate the total cost of every business process (TCBP) IT supports.

2. Design the best possible alternative processes that use no technology more complicated than a hand calculator, and calculate their total cost (BPAP).

3. BPAP - TCBP is the value provided by IT. (BPAP - TCBP) / TCBP is the return on IT investment – astronomical in nearly every case, I suspect, although possibly not as astronomical as the cost of actually going through the exercise.
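For the morbidly curious, here's that arithmetic as a sketch – the dollar figures are, of course, pure invention:

```python
# The enabler-value arithmetic from the steps above, with made-up figures.
tcbp = 40_000_000    # annual cost of every IT-supported business process
bpap = 250_000_000   # annual cost of the best hand-calculator-only alternatives

it_value = bpap - tcbp   # what IT enables the business to avoid spending
roi = it_value / tcbp    # return on IT investment
print(f"IT value: ${it_value:,} per year; ROI: {roi:.0%}")
# IT value: $210,000,000 per year; ROI: 525%
```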

It appears outcome metrics like cost and value won’t get us to where we need to go. How about something structural?

Start with the decisions application portfolio managers have to make (or, if they’re wiser, facilitate). Boil it all down and there are just two: (1) what is an application’s disposition — keep as is, extend and enhance, replace, retire, and so on — and (2) what is the priority for implementing these dispositions across the whole portfolio.

Disposition is a non-numeric metric — a metric in the same sense that “orange” is a metric. It depends on such factors as whether the application’s data are properly normalized, whether it’s built on company-standard platforms, and whether it’s a custom application when superior alternatives are now available in the marketplace.

Disposition is about what needs to be done. Priority is about when to do it. As such it depends on how big the risk is of not implementing the disposition, to what extent the application’s deficiencies impair business functioning, and, conversely, how big the opportunities are for implementing the dispositions … minus the disposition’s cost.

Priority, that is, is a reflection of each application’s health.
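If it helps to make this concrete, here's one illustrative way to structure the two decisions – the fields, the 1-to-5 ratings, and the scoring formula are my inventions for the example, not a standard model:

```python
# One illustrative way to structure the two APM decisions: disposition
# (what to do) and priority (when to do it). The ratings and the scoring
# formula are invented for the example.
from dataclasses import dataclass
from enum import Enum

class Disposition(Enum):
    KEEP_AS_IS = "keep as is"
    EXTEND = "extend and enhance"
    REPLACE = "replace"
    RETIRE = "retire"

@dataclass
class Application:
    name: str
    disposition: Disposition     # what needs to be done (non-numeric)
    risk_of_inaction: int        # subjective rating, 1 (low) to 5 (high)
    business_impairment: int     # how much its deficiencies impair the business
    opportunity: int             # upside of implementing the disposition
    disposition_cost: int        # cost of acting, same 1-to-5 scale

    def priority(self) -> int:
        """When to act: risk + impairment + opportunity, minus cost."""
        return (self.risk_of_inaction + self.business_impairment
                + self.opportunity - self.disposition_cost)

portfolio = [
    Application("OrderEntry", Disposition.REPLACE, 4, 5, 4, 3),
    Application("HRPortal", Disposition.KEEP_AS_IS, 1, 1, 1, 1),
]
for app in sorted(portfolio, key=Application.priority, reverse=True):
    print(app.name, "|", app.disposition.value, "| priority", app.priority())
```

Note that every number feeding the priority score is a subjective rating – which is the point of this week's exercise.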

Which gets us to the point of this week’s exercise: Most of what an application portfolio manager needs to know to decide on dispositions and priorities is subjective. In some cases the needed measures are subjective because making them objective requires too much effort, like having to map business processes in detail to identify where applications cause process bottlenecks.

Sometimes they’re just subjective, as when the question is about the risk that an application vendor will lose its standing in the applications marketplace.

All of which gets us to this: “If you can’t measure you can’t manage” had better not be true, because as often as not managers can’t measure.

But they have to manage anyway.