What’s the difference between a “Digital Twin” and a simulation? Or a model?

Not much, except maybe Digital Twins have a more robust connection between production data and the simulation’s behavior.

Or, as explained in a worth-your-while-if-you’re-interested-in-the-subject article titled “How to tell the difference between a model and a Digital Twin” (Louise Wright & Stuart Davidson, SpringerOpen.com, 3/11/2020), “… a Digital Twin without a physical twin is a model.”

Which leaves open the question of what to call a modeled or simulated physical thingie.

Anyway, like models, simulations, and, for that matter, data mining, “Digital Twins” can become little more than a more expensive and cumbersome alternative to the Excel-Based Gaslighting (EBG) already practiced in many businesses.

If you aren’t familiar with the term EBG, that isn’t surprising, as I just made it up. What it is:

Gaslighting is someone trying to persuade you that up is the same as down, black is the same as white, and in is the same as out only smaller. EBG is what politically-oriented managers do when they tweak and twiddle an Excel model’s parameters to “prove” their plan’s business case.

Count on less-than-fully-scrupulous managers fiddling with the data cleansing and filtering built into their Digital Twin’s inputs so it yields the guidance their gut insists is right. Unless you also program digital twins of these managers so you can control their behavior, Digital Twin Gaslighting is just about inevitable.

Not that simulations, models, and/or Digital Twins are bad things. Quite the opposite. As Scott Lee and I point out in The Cognitive Enterprise, “If you can’t model you can’t manage.” Our point: managers can only make rational decisions to the extent they can predict the results of a change to a given business input or parameter. Models and simulations are how to do this. And, I guess, Digital Twins.

But then there’s another, complementary point we made. We called it the “Stay the Same / Change Ratio.” It compares the time and effort needed to implement a business change with how long that change will remain relevant.

Digital Twinning is vulnerable to this ratio. If the time needed to program, test (never ignore testing!) and deploy a Digital Twin is longer than the period of time through which its results remain accurate, Digital Twinning will be a net liability.
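Here’s a minimal sketch of that go/no-go test expressed as arithmetic. The function name and the month figures are my own illustrative assumptions, not numbers from The Cognitive Enterprise:

```python
# A hypothetical illustration of the Stay the Same / Change Ratio.
# Function name and figures are assumptions for illustration only.

def stay_same_change_ratio(months_to_implement: float, months_relevant: float) -> float:
    """How long the change stays relevant, divided by how long it takes to deliver."""
    return months_relevant / months_to_implement

# A twin that takes 9 months to build and stays accurate for 24:
print(stay_same_change_ratio(9, 24))   # ~2.7 > 1: worth building

# A twin that takes 12 months to build but whose results hold up for only 6:
print(stay_same_change_ratio(12, 6))   # 0.5 < 1: a net liability
```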

Building a “Digital Twin,” simulation, or model of any kind is far from instantaneous. The business changes Digital Twinning aspires to help businesses cope with will arrive in a steady stream, starting on the day twin development begins. And the time needed to develop these twins isn’t trivial. As a result, the twin in question will always be a moving target.

How fast it moves, compared to how fast the Digital Twin programming team can dynamically adjust the twin’s specifications, determines whether investing in the Digital Twin is a good idea.

So simulating a wind tunnel makes sense. The physics of wind doesn’t change.

But the behavior of mortgage loan applicants is, to choose a contrasting example, less stable, not to mention the mortgage product development team’s ongoing goal of creating new types of mortgages, each of which will have to be twinned as well.

Bob’s last word: You might think the strong connection to business data intrinsic to Digital Twinning would protect a twin from becoming obsolete.

But that’s an incomplete view. As Digital Twins are, essentially, software models of physical something-or-others, their data coupling can keep the parameters that drive them accurate.

That’s good so far as it goes. But if what needs updating in the Digital Twin is its logic, all the tight data coupling will give you is a red flag that someone needs to update it.

Which means the budget for building Digital Twins had better include the funds needed to maintain them, not just the funds needed to build them.

Bob’s sales pitch: All good things must come to an end. Whether you think KJR is a good thing or not, it’s coming to an end, too – the final episode will appear December 18th of this year. That should give you plenty of time to peruse the Archives and download copies of whatever material you like and might find useful.

On CIO.com’s CIO Survival Guide: “6 ways CIOs sabotage their IT consultant’s success.” The point? It’s up to IT’s leaders to make it possible for the consultants they engage to succeed. If they weren’t serious about the project, why did they sign the contract?

In case you missed the news, Israeli scientists have taught a goldfish how to drive.

Well, not exactly. They placed it in a bowl rigged with various sensors and actuators, and it learned to correlate its initially random movements with the ones that moved it toward food.

The goldfish, that is, figured out how to drive the way DeepMind figured out how to win at Atari games.
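If you want to see the shape of that trial-and-error loop in code, here’s a toy sketch: tabular Q-learning in a one-dimensional “tank” with food at one end. Everything in it (the grid, the reward, the parameters) is an illustrative assumption; it’s neither the Israeli team’s rig nor DeepMind’s DQN.

```python
# Toy sketch of learning-by-trial-and-error: tabular Q-learning on a
# one-dimensional "tank." The fish starts at position 0; food sits at 9.
import random

N = 10                  # positions 0..9; the food sits at position 9
ACTIONS = (-1, +1)      # move left or move right
q = {(s, a): 0.0 for s in range(N) for a in ACTIONS}
alpha, gamma, epsilon = 0.5, 0.9, 0.1

def greedy(s):
    """Best-known action, breaking ties randomly (the 'initially random' part)."""
    best = max(q[(s, a)] for a in ACTIONS)
    return random.choice([a for a in ACTIONS if q[(s, a)] == best])

for _ in range(500):                        # 500 feeding sessions
    s = 0
    for _ in range(1000):                   # step cap per session
        a = random.choice(ACTIONS) if random.random() < epsilon else greedy(s)
        s2 = min(max(s + a, 0), N - 1)
        r = 1.0 if s2 == N - 1 else 0.0     # reward only on reaching the food
        q[(s, a)] += alpha * (r + gamma * max(q[(s2, b)] for b in ACTIONS) - q[(s, a)])
        s = s2
        if s == N - 1:
            break

# The learned policy now "drives" straight toward the food from every position:
print([greedy(s) for s in range(N - 1)])    # expect all +1
```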

This is the technology – machine-learning AI – whose proponents advocate using for business decision-making.

I say we should turn over business decision-making to goldfish, not machine learning AIs. They cost less and ask for nothing except food flakes and an occasional aquarium cleaning. They’ll even reproduce, creating new business decision-makers far more cheaply than any manufactured neural network.

And with what we’re learning about epigenetic heritability, it’s even possible their offspring will be pre-trained when they hatch.

It’s just the future we’ve all dreamed of: If we have jobs at all we’ll find ourselves studying ichthyology to get better at “managing up.” Meanwhile, our various piscine overseers will vie for the best corner koi ponds.

Which brings us to a subject I can’t believe I haven’t written about before: the Human/Machine Relationship Index, or HMRI, which Scott Lee and I introduced in The Cognitive Enterprise (Meghan-Kiffer Press, 2015). It’s a metric useful for planning where and how to incorporate artificial intelligence technologies, including but not limited to machine learning, into the enterprise.

The HMRI ranges from +2 to -2. The more positive the number, the more humans remain in control.

And no, just because somewhere back in the technology’s history a programmer was involved, that doesn’t mean the HMRI = +2. The HMRI describes the technology in action, not in development. To give you a sense of how it works:

+2: Humans are in charge. Examples: industrial robots, da Vinci surgical robots.

+1: Humans can choose to obey or ignore the technology. Examples: GPS navigation, cruise control.

0: Technology provides information and other capabilities to humans. Examples: Traditional information systems, like ERP and CRM suites.

-1: Humans must obey. Machines tell humans what they must do. Examples: Automated Call Distributors, Business Process Automation.

-2: All humans within the AI’s domain must obey. Machines set their own agenda, decide what’s needed to achieve it, and, if humans are needed, tell them what to do and when to do it. Potential examples: AI-based medical diagnostics and prescribed therapies, AIs added to boards of directors, Skynet.
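If it helps to picture the scale as something you could drop into a portfolio review, here’s a minimal sketch that encodes it. The dictionary restates the scale above; the sample portfolio and its scores are my own illustrative assumptions:

```python
# A minimal encoding of the HMRI scale for tagging systems in a portfolio.
# The scale is from The Cognitive Enterprise; the sample systems and
# scores below are assumptions for illustration only.
HMRI_SCALE = {
    +2: "Humans are in charge",
    +1: "Humans can choose to obey or ignore the technology",
     0: "Technology provides information and other capabilities to humans",
    -1: "Humans must obey",
    -2: "All humans within the AI's domain must obey",
}

portfolio = {                      # hypothetical systems and their HMRI scores
    "GPS navigation": +1,
    "CRM suite": 0,
    "Automated call distributor": -1,
}

for system, score in sorted(portfolio.items(), key=lambda kv: -kv[1]):
    print(f"{system}: HMRI {score:+d} ({HMRI_SCALE[score]})")
```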

A lot of what I’ve read over the years regarding AI’s potential in the enterprise talks about freeing up humans to “do what humans do best.”

The theory, if I might use the term “theory” in its “please believe this utterly preposterous propaganda” sense, is that humans are intrinsically better than machines with respect to some sorts of capabilities. Common examples are judgment, innovation, and the ability to deal with exceptions.

But judgment is exactly what machine learning’s proponents are working hard to get machines to do – to find patterns in masses of data that will help business leaders prevent the bad judgment of employees they don’t, if we’re being honest with each other, trust very much.

As for innovation, what fraction of the workforce is encouraged to innovate, and is in a position to do so and to make its innovations real? The answer: almost none, because even if an employee comes up with an innovative idea, there’s no budget to support it, no time in their schedule to work on it, and lots of political infighting to navigate.

That leaves exceptions. But the most acceptable way of handling exceptions is to massage them into a form the established business processes … now executed by automation … can handle. Oh, well.

Bob’s last word: Back in the 20th century I contrasted mainframe and personal computing systems architectures: Mainframe architectures place technology at the core and human beings at the periphery, feeding and caring for it so it keeps on keeping on. Personal computing, in contrast, puts a human being in the middle and serves as a gateway to a universe of resources.

Machine learning is a replay. We can either put machines at the heart of things, relegating to humans only what machines can’t master, or we can think in terms of computer-enhanced humanity – something we experience every day with GPS and Wikipedia.

Yes, computer-enhanced humanity is messier. But given a choice, I’d like our collective HMRI to be a positive number.

Bob’s sales pitch: CIO.com is running the most recent addition to my IT 101 series. It’s titled “The savvy CIO’s secret weapon: Your IT team.”