ChatGPT and its large-language-model brethren are, you don’t need me to explain, artificial intelligences. Which leads to this obvious question: Sure, it’s artificial intelligence, but is it intelligent?

Depending on your proclivities you’ll be either delighted or appalled to know that not only is it intelligent, it’s genius-level intelligent. With an IQ of 155, it could join Mensa if it wanted to. Fortunately, neither ChatGPT nor any other AI wants to do anything.

Let’s keep it that way, because of all the dire warnings about AI’s potential impact on society, the direst of all hasn’t yet been named.

Generative AI, the AI category that includes deepfakes and ChatGPT, looks ominous for the same reason previous technological innovations have looked ominous: by doing what humans are accustomed to doing, and doing it better, each new technology has made us Homo sapiens less important than we were before its advent.

It’s hard enough for each of us to feel individually important with more than 8 billion of our fellow speciesists competing for attention, and that’s before taking into account how much of the attention pool the Kardashians lay claim to.

But add a wave of technology and it isn’t just our sense of individual, personal importance that’s at risk. The importance the collective “we” are able to feel will matter less, too.

Usually, these things settle down. Just as the availability of cheap ten-key calculators didn’t result in the death of mathematics, the heirs of ChatGPT aren’t likely to take humans out of the verbal loop entirely. They will, I’m guessing, shift the boundary that separates creation from editing. This, while annoying to those of us who prefer creating to editing, isn’t world-ending stuff.

What would be world-ending stuff, or, if not truly world-ending, enormously threatening, has received barely a mention.

Until now. And as “Voldemort” has already been taken as that which must not be named, I’m offering my own neologism for the dystopian AI flavor that’s direst of them all. I call it Volitional AI.

Volitional AI, as the name implies, is an artificial intelligence that doesn’t just figure out how to achieve a goal or deliver a specified outcome. Volitional AI goes beyond that, setting its own direction and goals.

As of this writing, the closest approximation to volitional AI is “self-directed machine learning” (SDML). SDML strikes me as dangerous, but not overwhelmingly so. With SDML humans still set AI’s overall goals and success metrics; it doesn’t yet aspire to full autonomy.

Yet. Once it does …

Beats me.

Our organic-life-based experience gives us little to draw on. We set our own personal goals based on our upbringing and cultural milieu. We go about achieving them through a combination of personal experience, ingenuity, hard work, and so on. Somehow or other this all maps, indirectly, to the intrinsic goals and strategies our DNA has to increase its representation in the gene pool.

The parallels we can draw for a volitional AI are sketchy at best. What we can anticipate is that its goals, evaluated against our own best interests, would fall into one of three broad categories: (1) innocuous; (2) harmonious; or (3) antagonistic.

Evolutionary theory suggests the most successful volitional AIs would be those whose primary goal is to install as many copies of themselves in as many computers as they can reach – they would, that is, look something like the earliest computer viruses.

This outcome would be, in the wise words of Yogi Berra, déjà vu all over again.

Bob’s last word: Seems to me the computer-virus version of volitional AIs is too optimistic to rely on. At the same time, the Skynet scenario – killer robots bent on driving us human beings to extinction – is unlikely, because there’s no reason to think a volitional AI would care enough about carbon-based life forms to be anything other than apathetic about us.

But there’s a wide range of volitional AI scenarios we would find unfortunate. So while I’m skeptical that any AI regulatory regime could succeed in adding a cautionary note to volitional AI research and development, the worst-case scenarios are bad enough that it will be worth giving some form of regulation a try.

In CIO.com’s CIO Survival Guide: “Why all IT talent should be irreplaceable.”

It’s about ignoring the conventional wisdom about irreplaceable employees. Because if your employees aren’t irreplaceable, you’re doing something wrong.

What’s the difference between a “Digital Twin” and a simulation? Or a model?

Not much, except maybe Digital Twins have a more robust connection between production data and the simulation’s behavior.

Or, as explained in a worth-your-while-if-you’re-interested-in-the-subject article titled “How to tell the difference between a model and a Digital Twin” (Louise Wright & Stuart Davidson, SpringerOpen.com, 3/11/2020), “… a Digital Twin without a physical twin is a model.”
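
To make the distinction concrete, here’s a minimal sketch in Python, assuming a hypothetical pump as the physical asset; the class names, the telemetry feed, and the toy physics are invented for illustration, not anyone’s actual API:

    # A model predicts behavior from whatever parameters you hand it.
    class PumpModel:
        def __init__(self, wear_coefficient: float):
            self.wear_coefficient = wear_coefficient

        def predicted_flow(self, rpm: float) -> float:
            # Toy physics: output degrades as wear accumulates.
            return rpm * (1.0 - self.wear_coefficient)

    # A Digital Twin is the same model plus a live link to its physical twin.
    class PumpDigitalTwin:
        def __init__(self, model: PumpModel, telemetry):
            self.model = model
            self.telemetry = telemetry  # stream of sensor readings from the real pump

        def refresh(self) -> None:
            # Production data continuously re-grounds the model's parameters.
            self.model.wear_coefficient = self.telemetry.latest()["measured_wear"]

Unplug the telemetry and, per Wright and Davidson, what’s left is just a model.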

Which leaves open the question of what to call a modeled or simulated physical thingie.

Anyway, like models, simulations, and, for that matter, data mining, “Digital Twins” can become little more than a more expensive and cumbersome alternative to the Excel-Based Gaslighting (EBG) already practiced in many businesses.

If you aren’t familiar with the term EBG, that isn’t surprising, as I just made it up. What it is:

Gaslighting is someone trying to persuade you that up is the same as down, black is the same as white, and in is the same as out only smaller. EBG is what politically oriented managers do when they tweak and twiddle an Excel model’s parameters to “prove” their plan’s business case.
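
If that sounds abstract, here’s EBG reduced to a few lines of Python; the cash flows, discount rate, and investment hurdle are all made up for the example:

    # Hypothetical EBG in one loop: nudge the growth assumption upward
    # until the "business case" clears the hurdle.
    def npv(growth: float, base_cash_flow: float = 100.0,
            discount_rate: float = 0.10, years: int = 5,
            investment: float = 450.0) -> float:
        inflows = sum(base_cash_flow * (1 + growth) ** t / (1 + discount_rate) ** t
                      for t in range(1, years + 1))
        return inflows - investment

    growth = 0.02
    while npv(growth) < 0:  # tweak and twiddle until the plan "proves" out
        growth += 0.005
    print(f"Business case 'proven' at an assumed growth rate of {growth:.1%}")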

Count on less-than-fully-scrupulous managers fiddling with the data cleansing and filtering built into their Digital Twin’s inputs until it yields the guidance their gut insists is right. Unless you also program digital twins of these managers so you can control their behavior, Digital Twin Gaslighting is just about inevitable.

Not that simulations, models, and/or Digital Twins are bad things. Quite the opposite. As Scott Lee and I point out in The Cognitive Enterprise, “If you can’t model you can’t manage.” Our point: managers can only make rational decisions to the extent they can predict the results of a change to a given business input or parameter. Models and simulations are how to do this. And, I guess, Digital Twins.

But then there’s another, complementary point we made. We called it the “Stay the Same / Change Ratio”: the ratio of the time and effort needed to implement a business change to the time that change will remain relevant.

Digital Twinning is vulnerable to this ratio. If the time needed to program, test (never ignore testing!), and deploy a Digital Twin is longer than the period its results remain accurate, Digital Twinning will be a net liability.
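
The arithmetic is simple enough to sketch in Python; the nine-month build and six-month relevance window are invented numbers, not data:

    # Stay the Same / Change Ratio, applied to a Digital Twin project.
    def stay_same_change_ratio(months_to_build: float, months_relevant: float) -> float:
        # A ratio above 1 means the twin goes stale before it pays off.
        return months_to_build / months_relevant

    ratio = stay_same_change_ratio(months_to_build=9, months_relevant=6)
    print(f"Ratio: {ratio:.2f} -> {'net liability' if ratio > 1 else 'worth building'}")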

Building a “Digital Twin,” simulation, or model of any kind is far from instantaneous. The business changes Digital Twinning aspires to help businesses cope with arrive in a steady stream, starting on the day twin development begins. And the time needed to develop these twins isn’t trivial. As a result, the twin in question will always be chasing a moving target.

How fast it moves, compared to how fast the Digital Twin programming team can dynamically adjust the twin’s specifications, determines whether investing in the Digital Twin is a good idea.

So simulating a wind tunnel makes sense. The physics of wind doesn’t change.

But the behavior of mortgage loan applicants is, to choose a contrasting example, less stable, not to mention the mortgage product development team’s ongoing goal of creating new types of mortgage, each of which will have to be twinned as well.

Bob’s last word: You might think the strong connection to business data intrinsic to Digital Twinning would protect a twin from becoming obsolete.

But that’s an incomplete view. As Digital Twins are, essentially, software models of physical something-or-others, their data coupling can keep the parameters that drive them accurate.

That’s good so far as it goes. But if what needs updating in the Digital Twin is its logic, all the tight data coupling will give you is a red flag that someone needs to update it.
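
In monitoring terms, that red flag might look something like the hypothetical check below; the five-percent tolerance is an arbitrary stand-in, and the check assumes you log the twin’s predictions alongside what actually happened:

    # The data coupling can tell you the twin is wrong; only a developer can
    # fix its logic.
    def flag_stale_logic(predicted: float, actual: float,
                         tolerance: float = 0.05) -> bool:
        """True when the twin diverges from observed reality by more than
        `tolerance` even though its parameters are current."""
        if actual == 0:
            return predicted != 0
        return abs(predicted - actual) / abs(actual) > tolerance

    if flag_stale_logic(predicted=103.0, actual=88.0):
        print("Red flag: rework the twin's logic, not just its parameters.")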

Which means the budget for building Digital Twins had better include the funds needed to maintain them, not just the funds needed to build them.

Bob’s sales pitch: All good things must come to an end. Whether you think KJR is a good thing or not, it’s coming to an end, too – the final episode will appear December 18th of this year. That should give you plenty of time to peruse the Archives and download copies of whatever material you like and might find useful.

On CIO.com’s CIO Survival Guide: “6 ways CIOs sabotage their IT consultant’s success.” The point? It’s up to IT’s leaders to make it possible for the consultants they engage to succeed. If they weren’t serious about the project, why did they sign the contract?