What’s the difference between a “Digital Twin” and a simulation? Or a model?

Not much, except maybe Digital Twins have a more robust connection between production data and the simulation’s behavior.

Or, as explained in a worth-your-while-if-you’re-interested-in-the-subject article titled “How to tell the difference between a model and a Digital Twin” (Louise Wright & Stuart Davidson, SpringerOpen.com, 3/11/2020), “… a Digital Twin without a physical twin is a model.”

Which leaves open the question of what to call a modeled or simulated physical thingie.

Anyway, like models, simulations, and, for that matter, data mining, “Digital Twins” can become little more than a more expensive and cumbersome alternative to the Excel-Based Gaslighting (EBG) already practiced in many businesses.

If you aren’t familiar with the term EBG, that isn’t surprising, as I just made it up. What it is:

Gaslighting is someone trying to persuade you that up is the same as down, black is the same as white, and in is the same as out only smaller. EBG is what politically-oriented managers do when they tweak and twiddle an Excel model’s parameters to “prove” their plan’s business case.
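To make EBG concrete, here’s a minimal sketch with invented numbers: a hypothetical five-year business case that’s underwater at the growth rate the data supports, until a single percentage point of twiddling “proves” it.

```python
# Hypothetical illustration of Excel-Based Gaslighting (EBG). The "model,"
# the numbers, and the growth assumptions are all invented for this sketch.

def npv(rate, cash_flows):
    """Net present value: discount each year's cash flow back to year zero."""
    return sum(cf / (1 + rate) ** year for year, cf in enumerate(cash_flows))

def business_case(growth):
    """A year-0 investment followed by five years of (assumed) growing returns."""
    return [-1_000_000] + [250_000 * (1 + growth) ** y for y in range(5)]

print(round(npv(0.10, business_case(0.02))))  # about -17,000: the honest answer

# The EBG move: twiddle the growth "assumption" until the plan "proves" itself.
for growth in (0.03, 0.04, 0.05, 0.06):
    if npv(0.10, business_case(growth)) > 0:
        print(f"Business case 'proven' at {growth:.0%} growth")  # fires at 3%
        break
```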

Count on less-than-fully-scrupulous managers fiddling with the data cleansing and filtering built into their Digital Twin’s inputs so it yields the guidance the manager in question’s gut insists is right. Unless you also program digital twins of these managers so you can control their behavior, Digital Twin Gaslighting is just about inevitable.

Not that simulations, models, and/or Digital Twins are bad things. Quite the opposite. As Scott Lee and I point out in The Cognitive Enterprise, “If you can’t model you can’t manage.” Our point: managers can only make rational decisions to the extent they can predict the results of a change to a given business input or parameter. Models and simulations are how to do this. And, I guess, Digital Twins.

But then there’s another, complementary point we made. We called it the “Stay the Same / Change Ratio.” It’s the ratio of the time and effort needed to implement a business change to the time that change will remain relevant.

Digital Twinning is vulnerable to this ratio. If the time needed to program, test (never ignore testing!), and deploy a Digital Twin is longer than the period through which its results remain accurate, Digital Twinning will be a net liability.
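In back-of-the-envelope form (the numbers here are illustrative, not from any real project):

```python
# Stay the Same / Change Ratio, as arithmetic. Thresholds and numbers are
# illustrative assumptions, not figures from The Cognitive Enterprise.

def stay_same_change_ratio(months_results_stay_accurate, months_to_build):
    return months_results_stay_accurate / months_to_build

for label, accurate, build in [
    ("wind tunnel twin", 120, 12),      # stable physics: ratio >> 1
    ("mortgage applicant twin", 6, 9),  # shifting behavior: ratio < 1
]:
    ratio = stay_same_change_ratio(accurate, build)
    verdict = "worth building" if ratio > 1 else "net liability"
    print(f"{label}: ratio {ratio:.1f} -> {verdict}")
```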

Building a “Digital Twin,” simulation, or model of any kind is far from instantaneous. The business changes Digital Twinning aspires to help businesses cope with will arrive in a steady stream, starting on the day twin development begins. And the time needed to develop these twins isn’t trivial. As a result, the twin in question will always be a moving target.

How fast it moves, compared to how fast the Digital Twin programming team can dynamically adjust the twin’s specifications, determines whether investing in the Digital Twin is a good idea.

So simulating a wind tunnel makes sense. The physics of wind doesn’t change.

But the behavior of mortgage loan applicants is, to choose a contrasting example, less stable, not to mention the mortgage product development team’s ongoing goal of creating new types of mortgages, each of which will have to be twinned as well.

Bob’s last word: You might think the strong connection to business data intrinsic to Digital Twinning would protect a twin from becoming obsolete.

But that’s an incomplete view. As Digital Twins are, essentially, software models of physical something-or-others, their data coupling can keep the parameters that drive them accurate.

That’s good so far as it goes. But if what needs updating in the Digital Twin is its logic, all the tight data coupling will give you is a red flag that someone needs to update it.
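A hypothetical sketch of the distinction, borrowing the mortgage example (nothing here describes a real twin):

```python
# Hypothetical twin: the data coupling refreshes the parameter automatically,
# but the logic is frozen at build time. Invented for illustration.
from dataclasses import dataclass

@dataclass
class Feed:
    """Stand-in for the twin's live connection to production data."""
    default_rate: float  # kept current by the data coupling

@dataclass
class Portfolio:
    balance: float

class MortgageTwin:
    def __init__(self, feed: Feed):
        self.default_rate = feed.default_rate  # parameters: updated for free

    def expected_losses(self, p: Portfolio) -> float:
        # Logic: a frozen assumption that losses scale linearly with balance.
        # If applicant behavior changes shape, fresh data can only flag the
        # error; a programmer has to change this line.
        return p.balance * self.default_rate

print(MortgageTwin(Feed(0.03)).expected_losses(Portfolio(1_000_000)))  # 30000.0
```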

Which means the budget for building Digital Twins had better include the funds needed to maintain them, not just the funds needed to build them.

Bob’s sales pitch: All good things must come to an end. Whether you think KJR is a good thing or not, it’s coming to an end, too – the final episode will appear December 18th of this year. That should give you plenty of time to peruse the Archives and download copies of whatever material you like and might find useful.

On CIO.com’s CIO Survival Guide: “6 ways CIOs sabotage their IT consultant’s success.” The point? It’s up to IT’s leaders to make it possible for the consultants they engage to succeed. If they weren’t serious about the project, why did they sign the contract?

Prometheus brought fire (metaphorically, knowledge about how to do stuff) to humanity, making him a mythical hero.

Lucifer (light-bringer) brought knowledge (of good and evil, no less), to humanity, earning him the mantle of most villainous of all our mythical villains.

Go figure.

Now we have ChatGPT which, in case you’ve been living in a cave the past few months and missed all the excitement, seems to be passing the Turing, Prometheus, and Lucifer tests while making the whole notion of knowledge obsolete.

You can ask ChatGPT a question and it will generate an answer that reads like something a real, live human being might have written [Turing].

And just like dealing with real, live human beings you’d have no way of knowing whether the answer was … what’s the word I’m looking for? … help me out, ChatGPT … oh, yeah, that’s the word … “right” [Prometheus] or false [Lucifer].

And a disclaimer: I’m not going to try to differentiate what ChatGPT and allied AI technologies are capable of as of this writing from what they’ll obviously and quickly evolve into.

Quite the opposite – what follows is both speculative and, I think, inevitable, in a short enough planning window that we need to start thinking about the ramifications right now. Here are the harbingers:

Siri and Watson: When Apple introduced Siri, its mistakes were amusing but its potential was clear – technology capable of understanding a question, sifting through information sources to figure out the answer, and expressing the answer in an easily understood voice.

Watson won Jeopardy! the same way.

The sophistication of research-capable AIs will only continue to improve, especially the sifting-through-data-sources algorithms.

Synthesizers: It’s one thing to engage in research to find the answer to a question. It’s quite another to be told what the right answer is and formulate a plausible argument for it.

Trust me on this – as a professional management consultant I’ve lost track of how often a client has told me the answer they want and asked me to find it.

So there’s no reason to expect an AI, armed with techniques for cherry-picking some data and forging the rest, to resist the temptation. Because while I’ve read quite a lot about where AI is going and how it’s evolving, I’ve read of no research into the development of an Ethics Engine or, its close cousin, an integrity API.

Deep fakes: Imagine a deep-faked TED Talk whose presenter doesn’t actually exist here in what we optimistically call the “real world” but that speaks and gestures in ways that push our this-person-is-an-authority-on-the-subject buttons to persuade us that a purely falsified answer is, in fact, how things are.

Or, even more unsavory, imagine the possibilities for character assassination to be had by pasting a political opponent’s or business rival’s face onto … well, I’ll leave the possibilities as an exercise for the reader.

Persuasion: Among the algorithms we can count on will be several that engage in meme promotion – that know how to disseminate an idea so as to maximize the number of people who encounter and believe it.

Recursion: It’s loop-closing time – you ask your helpful AI (we’ll name it “Keejer” – I trust the etymology isn’t too mysterious?) a question: “Hey, Keejer, how old is the universe?”

Keejer searches and sifts through what’s available on the subject, synthesizes the answer (by averaging the values it finds, be they theological or astrophysical), and writes a persuasive essay presenting its findings – that our universe is 67,455 years old.

But many of the sources Keejer discovers are falsifications created and promoted by highly persuasive AIs, and Keejer lacks a skepticism algorithm.

And so Keejer gives you the wrong answer. Worse, Keejer’s analysis is added to the Internet’s meme stack to further mislead the next research AI.
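A toy sketch of Keejer’s synthesis step, with invented source counts (the mechanism is what matters here, not my particular numbers):

```python
# Toy model of Keejer's "average the sources" synthesis. All counts and
# values are invented; neither statistic describes any real system.
from statistics import mean, median

astrophysics = [13_800_000_000] * 500   # measured ages, in years
meme_stack   = [6_000] * 100_000        # AI-promoted falsifications

sources = astrophysics + meme_stack

print(f"Naive average: {mean(sources):,.0f} years")    # ~68.7 million: nonsense
print(f"Median:        {median(sources):,.0f} years")  # 6,000: the meme wins

# Once promoted falsifications outnumber honest sources, even a "robust"
# statistic converges on the lie: popularity is not a skepticism algorithm.
```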

Bob’s last word: Science fictioneers, writing about dangerous robots and AIs, gravitate to Skynet scenarios, where androids engage in murderous rampages to exterminate humanity.

The unexplored territory – rogue ‘bots attempting to wipe out reality itself – hasn’t received the same attention.

But putting the literary dimension of the problem aside, it’s time to put as much R&D into Artificial Skepticism as we’ve put into AI itself.

There is a precedent: Starting in the very early days of PCs, as malicious actors started to push computer viruses out onto the hard drives of the world, a whole anti-malware industry came into being.

It’s time we all recognize that disinformation is a form of malware that deserves just as much attention.

Bob’s sales pitch: Not for anything of mine this time, but for a brilliant piece everyone on earth ought to read. It’s titled “40 Useful Concepts You Should Know,” by someone who goes by the handle “Gurwinder.”

All 40 concepts are useful, and you should review them all.

On CIO.com’s CIO Survival Guide: “Brilliance: The CIO’s most seductive career-limiting trait.” It’s about why, for CIOs, brokering great ideas is better than having them.