Projects should have a positive return on investment – wisdom shared so often that our extra-ocular musculature has probably thrown in the towel by now.

Those less schooled in the mysteries of management decision-making might be forgiven for thinking this means projects should return more money to the corporate coffers than the company invests in them.

Those with a bit more financial sophistication add opportunity cost to the calculation. Projects, in this more-robust view, should return not only the initial investment, but also the dividends and interest that would have been earned on that money had it been invested in a financial instrument of some kind.

This threshold is called the hurdle rate. Not the hurl rate, although many discussions about project desirability contribute to this. Project governance mavens insist that proposed projects promise to clear a set rate of return – a hurdle in the run-fast-and-jump-high-enough sense of the word.
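In arithmetic terms, clearing the hurdle means the project’s future cash flows, discounted at the hurdle rate, are worth more than the money put in. A minimal sketch of that arithmetic – the $1M project, the $300K annual returns, and the 8% hurdle rate are all invented for illustration:

```python
# Minimal sketch of hurdle-rate arithmetic (illustrative numbers only).
# A project "clears the hurdle" if its cash flows, discounted at the
# hurdle rate, are worth more than the up-front investment.

def net_present_value(investment, cash_flows, hurdle_rate):
    """Discount each year's cash flow back to today; subtract the investment."""
    discounted = sum(
        cash / (1 + hurdle_rate) ** year
        for year, cash in enumerate(cash_flows, start=1)
    )
    return discounted - investment

# Hypothetical project: $1M up front, $300K/year for five years, 8% hurdle.
npv = net_present_value(1_000_000, [300_000] * 5, 0.08)
print(f"NPV: ${npv:,.0f} -> {'clears' if npv > 0 else 'misses'} the hurdle")
```

Positive NPV at the hurdle rate, in this view, equals a green light.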

It’s a superficially plausible criterion that isn’t so much wrong as it is, as someone once observed, insufficiently right. Why it’s insufficiently right is something any chess player who has progressed beyond the novice level of play would recognize.

Novice chess players are schooled in ROI-based decision-making. Each chess piece is, according to this model, worth a given number of points. Why does it work that way? Don’t worry about it unless you’re just curious.

Anyway, ROI-based chess players will cheerfully trade any piece for an opponent’s piece or pieces that are worth more in total than the piece they’re sacrificing – trades, that is, that have a positive chess-piece-point-count ROI.
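In code, the novice’s rule of thumb might look like this minimal sketch – the piece values are the conventional ones (pawn 1, knight 3, bishop 3, rook 5, queen 9); everything else is illustrative:

```python
# Minimal sketch of the novice's point-count trade logic, using the
# conventional piece values. Position, tempo, and strategy don't appear
# anywhere in the calculation -- which is exactly the problem.

PIECE_VALUES = {"pawn": 1, "knight": 3, "bishop": 3, "rook": 5, "queen": 9}

def trade_roi(pieces_given, pieces_taken):
    """Return the point-count 'ROI' of a trade: positive means 'take it.'"""
    gained = sum(PIECE_VALUES[p] for p in pieces_taken)
    lost = sum(PIECE_VALUES[p] for p in pieces_given)
    return gained - lost

# Give up a knight to capture a rook: +2 points, so the novice trades --
# even if the exchange wrecks an otherwise winning position.
print(trade_roi(["knight"], ["rook"]))  # 2
```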

It’s a formula that’s as plausible and wrong for chess-playing as ROI-based decision-making is for project governance decisions.

The fault in ROI-driven decision-making logic stems from this characteristic of business (and chess): Strategies don’t have ROIs.

In chess, strategic decisions are based on whether a move will increase the likelihood of beating the opponent. Removing an opponent’s most powerful pieces certainly can contribute to this, but so can other moves.

In business, strategic decisions should, in similar fashion, be rooted in beating opponents – in a word (okay, in two words) – competitive advantage.

This is, by the way, the flaw in stock buy-backs. When a board of directors decides to buy back stock it’s spending money that could have been used to make products more appealing or customer-care more loyalty-building. Instead, the board reduces the number of stock shares profits are allocated to, artificially … and temporarily … inflating the company’s earnings-per-share calculation.

Nothing about this analysis makes a focus on ROI wrong. Sure, a project that delivers untold wealth to the corporate coffers is, more often than not, a good idea.

But not always. A project that, for example, makes a colossal profit by posting a few million more cat videos to YouTube is sufficiently horrific that it should be vetoed by all right-thinking (and, for that matter, left-thinking) individuals, ROI or no ROI.

But I digress. Getting back to the point, strategy doesn’t have an ROI. It might seem to – you’d sure think competitive advantage should generate countable currency – but that’s rarely the case. One reason is something that, in evolutionary theory, is called the Red Queen hypothesis. It proposes that newly evolved adaptive advantages don’t always confer lasting results: when a species evolves an advantage, its predators, prey, and competitors evolve counter-adaptations of their own, and the race resets.

Bob’s last word: I trust the business parallel is clear. But we need to take this one step further: As with so many instances of organizational dysfunction, the insistence on ROI stems from an unhealthy emphasis on measurement.

ROI makes value measurable. Not really, but it looks like it. Competitive advantage, for example, generates a financial return, but the size of the financial return can’t be predicted in advance. It isn’t just that anyone who tries to predict future customer behavior is about as reliable a source as Nostradamus, although they are.

It’s also that predicting how competitors will respond to a company’s strategy is almost as hard, and arguably more important.

Bob’s sales pitch: About once a month I publish a piece on CIO.com under the heading “CIO Survival Guide.” They’re a bit longer than KJR. And as the title implies they have a more overt CIO focus. You can see them all at Bob Lewis | CIO.

New on CIO.com’s CIO Survival Guide: “Why IT communications fail to communicate.” The point? Never confuse documentation with communication. The purpose of documentation is to remind, not to communicate.

Prometheus brought fire (metaphorically, knowledge about how to do stuff) to humanity, making him a mythical hero.

Lucifer (light-bringer) brought knowledge (of good and evil, no less) to humanity, earning him the mantle of most villainous of all our mythical villains.

Go figure.

Now we have ChatGPT which, in case you’ve been living in a cave the past few months and missed all the excitement, seems to be passing the Turing, Prometheus, and Lucifer tests while making the whole notion of knowledge obsolete.

You can ask ChatGPT a question and it will generate an answer that reads like something a real, live human being might have written [Turing].

And just like dealing with real, live human beings you’d have no way of knowing whether the answer was … what’s the word I’m looking for? … help me out, ChatGPT … oh, yeah, that’s the word … “right” [Prometheus] or false [Lucifer].

And a disclaimer: I’m not going to try to differentiate what ChatGPT and allied AI technologies are capable of as of this writing from what they’ll obviously and quickly evolve into.

Quite the opposite – what follows is both speculative and, I think, inevitable, in a short enough planning window that we need to start thinking about the ramifications right now. Here are the harbingers:

Siri and Watson: When Apple introduced Siri, its mistakes were amusing but its potential was clear – technology capable of understanding a question, sifting through information sources to figure out the answer, and expressing the answer in an easily understood voice.

Watson won Jeopardy! the same way.

The sophistication of research-capable AIs will only continue to improve, especially the sifting-through-data-sources algorithms.

Synthesizers: It’s one thing to engage in research to find the answer to a question. It’s quite another to be told what the right answer is and formulate a plausible argument for it.

Trust me on this – as a professional management consultant I’ve lost track of how often a client has told me the answer they want and asked me to find it.

So there’s no reason to expect that an AI, armed with techniques for cherry-picking some data and forging the rest, would resist the temptation. Because while I’ve read quite a lot about where AI is going and how it’s evolving, I’ve read of no research into the development of an Ethics Engine or, its close cousin, an integrity API.

Deep fakes: Imagine a deep-faked TED Talk whose presenter doesn’t actually exist here in what we optimistically call the “real world” but that speaks and gestures in ways that push our this-person-is-an-authority-on-the-subject buttons to persuade us that a purely falsified answer is, in fact, how things are.

Or, even more unsavory, imagine the possibilities for character assassination to be had by pasting a political opponent’s or business rival’s face onto … well, I’ll leave the possibilities as an exercise for the reader.

Persuasion: Among the algorithms we can count on will be several that engage in meme promotion – that know how to disseminate an idea so as to maximize the number of people who encounter and believe it.

Recursion: It’s loop-closing time. You ask your helpful AI (we’ll name it “Keejer” – I trust the etymology isn’t too mysterious?) a question: “Hey, Keejer, how old is the universe?”

Keejer searches and sifts through what’s available on the subject, synthesizes the answer (by averaging the values it finds, be they theological or astrophysical), and writes a persuasive essay presenting its findings – that our universe is 67,455 years old.
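For the morbidly curious, a minimal sketch of that synthesis step – Keejer is, of course, hypothetical, and the source figures below are invented for illustration:

```python
# Minimal sketch of Keejer's (hypothetical) synthesis step: average every
# age-of-the-universe figure it finds, with no weighting for credibility.
# The source values below are invented for illustration.

def synthesize(source_values):
    """Naive synthesis: treat every source as equally trustworthy."""
    return sum(source_values) / len(source_values)

# A mix of theological and astrophysical figures (in years), plus whatever
# persuasive AIs have seeded the meme stack with:
sources = [6_000, 7_500, 13_800_000_000, 5_780, 10_000]
print(f"The universe is {synthesize(sources):,.0f} years old")  # nonsense
```

Averaging sources that disagree by six orders of magnitude produces an answer that is authoritative-sounding and meaningless in equal measure.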

But many of the sources Keejer discovers are falsifications created and promoted by highly persuasive AIs, and Keejer lacks a skepticism algorithm.

And so Keejer gives you the wrong answer. Worse, Keejer’s analysis is added to the Internet’s meme stack to further mislead the next research AI.

Bob’s last word: Science fictioneers, writing about dangerous robots and AIs, gravitate to Skynet scenarios, where androids engage in murderous rampages to exterminate humanity.

The scarier territory – rogue ‘bots attempting to wipe out not humanity but reality itself – has gone largely unexplored.

But putting the literary dimension of the problem aside, it’s time to put as much R&D into Artificial Skepticism as we’ve put into AI itself.

There is a precedent: Starting in the very early days of PCs, as malicious actors started to push computer viruses out onto the hard drives of the world, a whole anti-malware industry came into being.

It’s time we all recognize that disinformation is a form of malware that deserves just as much attention.

Bob’s sales pitch: Not for anything of mine this time, but for a brilliant piece everyone on earth ought to read. It’s titled “40 Useful Concepts You Should Know,” by someone who goes by the handle “Gurwinder.”

All 40 concepts are useful, and you should review them all.

On CIO.com’s CIO Survival Guide: “Brilliance: The CIO’s most seductive career-limiting trait.” It’s about why, for CIOs, brokering great ideas is better than having them.