In my defense, I was much younger then, and maybe less skeptical about consultants’ recommendations.

Also in my defense, I lacked the political capital to challenge the idea anyway – it would have happened with or without me.

And, still in my defense, when I found myself, as a consultant, leading a client’s IT reorganization, I didn’t commit the same crime.

Which was having employees apply for the jobs they’d been doing since long before we came on the scene.

Let’s start by going back a step or two, to the difference between a reorganization and a restructuring. Sometimes, the difference is that “restructuring” sounds fancier than “reorganization.” Going for the snazzier word can be seductive, even when it’s at the expense of accuracy. With that in mind, a reorganization leaves the work intact, along with the workgroups that do it and who lives in each workgroup. What it changes is who reports to whom.

A restructuring, in contrast, changes how work gets done – it divvies it up into different pieces, and by extension, which workgroup does each piece.

Which gets us to IT: Except, perhaps, for shops transitioning from waterfall methodologies to one of the Agile variants, most of the work that has to get done in IT doesn’t lend itself to restructuring. Programming, software quality assurance, systems administration, and the rest don’t change in ways fundamental enough to alter the job titles needed to get IT’s work done.

The buried lede

A correspondent related their situation: IT is “restructuring,” but is really reorganizing, and everyone in it will have the “opportunity” (in scare quotes for obvious reasons) to apply for a job in the new organization.

In a true restructuring this might make sense. After all, if many of the jobs in an organization are going to change in fundamental ways, it might not be obvious who should hold each of them.

But in a reorganization the jobs don’t change in fundamental ways. And if they don’t, IT’s leaders need to ask themselves a question that, once asked, is self-answering: Which is more likely to put the right people in jobs that aren’t going to change: making employees apply and compete for the positions they already hold? Or basing assignments on the deep knowledge managers should already have of how each IT employee performs?

Bob’s last word: If it isn’t already clear why having IT’s current employees apply for positions in the new org chart is inferior to appointing them, just ask yourself how good you are … how good anyone is … at basing hiring decisions on how well each applicant interviews.

Depending on your source (mine is a study by Leadership IQ), about half of all new hires fail within a year and a half.

My advice: Slot employees into jobs based on what you know about what they are and aren’t good at, not on having them apply for internal jobs as if they’re unknown quantities.

Bob’s sales pitch: My friend Thomas Bertels and his co-author David Henkin have written an engaging business fable about how to improve the employee experience and, by improving it, how to make a business more effective and competitive.

It’s titled Fixing Work, and it does a fine job of focusing on the authors’ goal: connecting the dots between improving how work gets done and making that work better for both employees and their employers.

On CIO.com’s CIO Survival Guide: “The ‘IT Business Office’: Doing IT’s admin work right.” It’s a prosaic piece on how to handle IT administrivia.

What’s the difference between a “Digital Twin” and a simulation? Or a model?

Not much, except that Digital Twins may have a more robust connection between production data and the simulation’s behavior.

Or, as explained in a worth-your-while-if-you’re-interested-in-the-subject article titled “How to tell the difference between a model and a Digital Twin” (Louise Wright & Stuart Davidson, SpringerOpen.com, 3/11/2020), “… a Digital Twin without a physical twin is a model.”

Which leaves open the question of what to call a modeled or simulated physical thingie.
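If it helps to see the distinction in code rather than prose, here’s a minimal sketch (the pump, its names, and its numbers are all hypothetical): a model runs on assumed parameters, while a Digital Twin is the same model with its parameters continuously refreshed from the physical twin’s sensor data.

```python
from dataclasses import dataclass

@dataclass
class PumpModel:
    """A model: behavior driven by assumed, hand-entered parameters."""
    flow_rate: float    # liters per minute, from the spec sheet
    wear_factor: float  # an engineer's estimate

    def predicted_output(self, minutes: float) -> float:
        return self.flow_rate * self.wear_factor * minutes

class PumpTwin(PumpModel):
    """A Digital Twin: the same model, but with its parameters
    refreshed from the physical pump's production sensor feed."""

    def sync(self, sensor_reading: dict) -> None:
        # Remove this link to the physical twin and, per Wright and
        # Davidson, what's left is just a model.
        self.flow_rate = sensor_reading["measured_flow_rate"]
        self.wear_factor = sensor_reading["estimated_wear"]
```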

Anyway, like models, simulations, and, for that matter, data mining, “Digital Twins” can become little more than an expensive and cumbersome alternative to the Excel-Based Gaslighting (EBG) already practiced in many businesses.

If you aren’t familiar with the term EBG, that isn’t surprising, as I just made it up. What it is:

Gaslighting is someone trying to persuade you that up is the same as down, black is the same as white, and in is the same as out, only smaller. EBG is what politically oriented managers do when they tweak and twiddle an Excel model’s parameters to “prove” their plan’s business case.

Count on less-than-fully-scrupulous managers fiddling with the data cleansing and filtering built into their Digital Twin’s inputs so it yields the guidance their gut insists is right. Unless you also program Digital Twins of these managers so you can control their behavior, Digital Twin Gaslighting is just about inevitable.

Not that simulations, models, and/or Digital Twins are bad things. Quite the opposite. As Scott Lee and I point out in The Cognitive Enterprise, “If you can’t model you can’t manage.” Our point: managers can only make rational decisions to the extent they can predict the results of a change to a given business input or parameter. Models and simulations are how to do this. And, I guess, Digital Twins.

But then there’s another, complementary point we made. We called it the “Stay the Same / Change Ratio.” It’s the ratio of the time and effort needed to implement a business change to the length of time that change will remain relevant.

Digital Twinning is vulnerable to this ratio. If the time needed to program, test (never ignore testing!) and deploy a Digital Twin is longer than the period of time through which its results remain accurate, Digital Twinning will be a net liability.
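In back-of-the-envelope terms (my arithmetic here, not a formula from the book), assuming we can estimate both durations in months:

```python
def stay_same_change_ratio(months_to_build_twin: float,
                           months_results_stay_accurate: float) -> float:
    """Ratio of implementation time to relevance window.
    Above 1.0, the twin goes obsolete before it pays off."""
    return months_to_build_twin / months_results_stay_accurate

# A twin that takes nine months to program, test, and deploy,
# but whose results hold up for only six, is a net liability:
print(stay_same_change_ratio(9, 6))  # 1.5 -- don't build it
```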

Building a “Digital Twin,” simulation, or model of any kind is far from instantaneous. The business changes Digital Twinning aspires to help businesses cope with arrive in a steady stream, starting on the day twin development begins. And the time needed to develop these twins isn’t trivial. As a result, the twin will always be chasing a moving target.

How fast it moves, compared to how fast the Digital Twin programming team can dynamically adjust the twin’s specifications, determines whether investing in the Digital Twin is a good idea.

So simulating a wind tunnel makes sense. The physics of wind doesn’t change.

But the behavior of mortgage loan applicants is, to choose a contrasting example, less stable, not to mention the mortgage product development team’s ongoing goal of creating new types of mortgages, each of which will have to be twinned as well.

Bob’s last word: You might think the strong connection to business data intrinsic to Digital Twinning would protect a twin from becoming obsolete.

But that’s an incomplete view. As Digital Twins are, essentially, software models of physical something-or-others, their data coupling can keep the parameters that drive them accurate.

That’s good so far as it goes. But if what needs updating in the Digital Twin is its logic, all the tight data coupling will give you is a red flag that someone needs to update it.
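A sketch of the difference, with everything here hypothetical (the twin, the threshold, the field names): data coupling can keep a parameter like an observed default rate current automatically, but no data feed will rewrite the approval rules when the business changes them.

```python
from dataclasses import dataclass

@dataclass
class Applicant:
    credit_score: int
    debt_ratio: float

class MortgageTwin:
    """Hypothetical Digital Twin of a mortgage approval process."""

    def __init__(self, observed_default_rate: float):
        # Parameter: tight data coupling can refresh this from
        # production data automatically, no code change required.
        self.observed_default_rate = observed_default_rate

    def approve(self, applicant: Applicant) -> bool:
        # Logic: no data feed rewrites these rules. When the business
        # changes how it decides, someone has to come back and edit
        # this code by hand.
        return applicant.credit_score > 640 and applicant.debt_ratio < 0.43
```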

Which means the budget for building Digital Twins had better include the funds needed to maintain them, not just the funds needed to build them.

Bob’s sales pitch: All good things must come to an end. Whether you think KJR is a good thing or not, it’s coming to an end, too – the final episode will appear December 18th of this year. That should give you plenty of time to peruse the Archives and download copies of whatever material you like and might find useful.

On CIO.com’s CIO Survival Guide: “6 ways CIOs sabotage their IT consultant’s success.” The point? It’s up to IT’s leaders to make it possible for the consultants they engage to succeed. If they weren’t serious about the project, why did they sign the contract?