ChatGPT and its large-language-model brethren are, you don’t need me to explain, artificial intelligences. Which leads to this obvious question: Sure, it’s artificially intelligent, but is it intelligent?

Depending on your proclivities you’ll either be delighted or appalled to know that not only is it intelligent, it’s genius-level intelligent. With an IQ of 155, it could join Mensa if it wanted to. Fortunately, neither ChatGPT nor any other AI wants to do anything.

Let’s keep it that way, because of all the dire warnings about AI’s potential impact on society, the direst of all hasn’t yet been named.

Generative AI … the AI category that includes deep fakes and ChatGPT … looks ominous for the same reason previous technological innovations have looked ominous: by doing what humans are accustomed to doing, and doing it better, each new technology has made us Homo sapiens less important than we were before its advent.

With more than 8 billion of our fellow speciesists competing for attention, it’s hard enough for each of us to feel we’re individually very important, and that’s before taking into account how much of the attention pool the Kardashians lay claim to.

But add a wave of technology and it isn’t just our sense of individual, personal importance that’s at risk. The importance the collective “we” are able to feel will matter less, too.

Usually, these things settle down. Just as the availability of cheap ten-key calculators didn’t result in the death of mathematics, the heirs of ChatGPT aren’t likely to take humans out of the verbal loop entirely. They will, I’m guessing, shift the boundary that separates creation from editing. This, while annoying to those of us who prefer creating to editing, isn’t world-ending stuff.

What would be world-ending stuff, or, if not truly world-ending, enormously threatening, has received barely a mention.

Until now. And as “Voldemort” has already been taken as that which must not be named, I’m offering my own neologism for the dystopian AI flavor that’s direst of them all. I call it Volitional AI.

Volitional AI, as the name implies, is an artificial intelligence that doesn’t just figure out how to achieve a goal or deliver a specified outcome. Volitional AI goes beyond that, setting its own direction and goals.

As of this writing, the closest approximation to volitional AI is “self-directed machine learning” (SDML). SDML strikes me as dangerous, but not overwhelmingly so. With SDML humans still set AI’s overall goals and success metrics, and it doesn’t yet aspire to full autonomy.

Yet. Once it does …

Beats me.

Our organic-life-based experience gives us little to draw on. We set our own personal goals based on our upbringing and cultural milieu. We go about achieving them through a combination of personal experience, ingenuity, hard work, and so on. Somehow or other this all maps, indirectly, to the intrinsic goals and strategies our DNA has to increase its representation in the gene pool.

The parallels we can draw for a volitional AI are sketchy at best. What we can anticipate is that its goals, evaluated against our own best interests, would fall into one of three broad categories: (1) innocuous, (2) harmonious, or (3) antagonistic.

Evolutionary theory suggests the most successful volitional AIs would be those whose primary goal is to install as many copies of themselves in as many computers as they can reach – they would, that is, look something like the earliest computer viruses.

This outcome would be, in the wise words of Yogi Berra, déjà vu all over again.

Bob’s last word: Seems to me the computer-virus version of volitional AIs is too optimistic to rely on. At the same time, the Skynet scenario – killer robots bent on driving us human beings to extinction – is unlikely because there’s no reason to think a volitional AI would care enough about carbon-based life forms to be anything other than apathetic about us.

But there’s a wide range of volitional AI scenarios we would find unfortunate. So while I’m skeptical that any AI regulatory regime could succeed in adding a cautionary note to volitional AI research and development, the worst-case scenarios are bad enough that it will be worth giving some form of regulation a try.

In CIO.com’s CIO Survival Guide: “Why all IT talent should be irreplaceable.”

It’s about ignoring the conventional wisdom about irreplaceable employees. Because if your employees aren’t irreplaceable, you’re doing something wrong.

In my defense, I was much younger then, and maybe less skeptical about consultants’ recommendations.

Also in my defense, I lacked the political capital to challenge the idea anyway – it would have happened with or without me.

And, still in my defense, when I found myself, as a consultant, leading a client’s IT reorganization, I didn’t commit the same crime.

Which was having employees apply for the jobs they’d been doing since long before we came on the scene.

Let’s start by going back a step or two, to the difference between a reorganization and a restructuring. Sometimes, the difference is that “restructuring” sounds fancier than “reorganization.” Going for the snazzier word can be seductive, even when it’s at the expense of accuracy. With that in mind, a reorganization leaves the work intact, along with the workgroups that do it and who lives in each workgroup. What it changes is who reports to whom.

A restructuring, in contrast, changes how work gets done – it divvies it up into different pieces, and by extension, which workgroup does each piece.

Which gets us to IT: Except, perhaps, for shops transitioning from waterfall methodologies to one of the Agile variants, most of the work that has to get done in IT doesn’t lend itself to restructuring: programming, software quality assurance, systems administration, and so on don’t change in ways fundamental enough to change the job titles needed to get IT’s jobs done.

The buried lede

A correspondent related their situation: IT is “restructuring,” but really reorganizing, and everyone in it will have the “opportunity” (in scare quotes for obvious reasons) to apply for a job in the new organization.

In a true restructuring this might make sense. After all, if many of the jobs in an organization are going to change in fundamental ways it might not be obvious who should hold each of them.

But in a reorganization the jobs don’t change in fundamental ways. And if they don’t, IT’s leaders need to ask themselves a question that, once asked, is self-answering: Is asking employees to apply and compete for the jobs they currently hold a better way to figure out who is most likely to succeed in each of them? Or is it more reliable to base job assignments on the deep knowledge managers should already have of how each IT employee performs?

Bob’s last word: If it isn’t already clear why having IT’s current employees apply for positions in the new org chart is inferior to appointing them, just ask yourself how good you are … how good anyone is … at basing hiring decisions on how well each applicant interviews.

Depending on your source (mine is a study by Leadership IQ), about half of all new hires fail within a year and a half.

My advice: Slot employees into jobs based on what you know about what they are and aren’t good at, not on having them apply for internal jobs as if they’re unknown quantities.

Bob’s sales pitch: My friend Thomas Bertels and his co-author David Henkin have written an engaging business fable about how to improve the employee experience and, by improving it, how to make a business more effective and competitive.

It’s titled Fixing Work, and it does a fine job of focusing on the authors’ goal: connecting the dots between making how work gets done better for employees and making the business more effective for their employers.

On CIO.com’s CIO Survival Guide: “The ‘IT Business Office’: Doing IT’s admin work right.” It’s a prosaic piece on how to handle IT administrivia.