ChatGPT and its large-language-model brethren are, you don’t need me to explain, artificial intelligences. Which leads to this obvious question: Sure, it’s artificially intelligent, but is it intelligent?

Depending on your proclivities you’ll either be delighted or appalled to know that not only is it intelligent, it’s genius-level intelligent. With an IQ of 155, it could join Mensa if it wanted to. Fortunately, neither ChatGPT nor any other AI wants to do anything.

Let’s keep it that way, because, of all the dire warnings about AI’s potential impact on society, the direst hasn’t yet been named.

Generative AI … the AI category that includes deep fakes and ChatGPT … looks ominous for the same reason previous technological innovations have looked ominous: By doing what humans have been accustomed to doing and doing it better, each new technology has made us Homo sapiens less important than we were before its advent.

It’s bad enough that, with more than 8 billion of our fellow speciesists competing for attention, it’s hard for each of us to feel we’re individually very important … and that’s before taking into account how much of the attention pool the Kardashians lay claim to.

But add a wave of technology and it isn’t just our sense of individual, personal importance that’s at risk. The importance the collective “we” are able to feel diminishes, too.

Usually, these things settle down. Just as the availability of cheap ten-key calculators didn’t result in the death of mathematics, the heirs of ChatGPT aren’t likely to take humans out of the verbal loop entirely. They will, I’m guessing, shift the boundary that separates creation from editing. This, while annoying to those of us who prefer creating to editing, isn’t world-ending stuff.

What would be world-ending stuff, or, if not truly world-ending, enormously threatening, has received barely a mention.

Until now. And as “Voldemort” has already been taken as that which must not be named, I’m offering my own neologism for the dystopian AI flavor that’s direst of them all. I call it Volitional AI.

Volitional AI, as the name implies, is an artificial intelligence that doesn’t just figure out how to achieve a goal or deliver a specified outcome. Volitional AI goes beyond that, setting its own direction and goals.

As of this writing, the closest approximation to volitional AI is “self-directed machine learning” (SDML). SDML strikes me as dangerous, but not overwhelmingly so: with SDML humans still set AI’s overall goals and success metrics, and it doesn’t yet aspire to full autonomy.

Yet. Once it does …

Beats me.

Our organic-life-based experience gives us little to draw on. We set our own personal goals based on our upbringing and cultural milieu. We go about achieving them through a combination of personal experience, ingenuity, hard work, and so on. Somehow or other this all maps, indirectly, to our DNA’s intrinsic goal of increasing its representation in the gene pool.

The parallels we can draw for a volitional AI are sketchy at best. What we can anticipate is that its goals, evaluated against our own best interests, would fall into one of three broad categories: (1) innocuous, (2) harmonious, or (3) antagonistic.

Evolutionary theory suggests the most successful volitional AIs would be those whose primary goal is to install as many copies of themselves in as many computers as they can reach – they would, that is, look something like the earliest computer viruses.

This outcome would be, in the wise words of Yogi Berra, déjà vu all over again.

Bob’s last word: Seems to me the computer-virus version of volitional AI is too optimistic to rely on. At the same time, the Skynet scenario – killer robots bent on driving us human beings to extinction – is unlikely, because there’s no reason to think a volitional AI would care enough about carbon-based life forms to be anything other than apathetic about us.

But there’s a wide range of volitional AI scenarios we would find unfortunate. So while I’m skeptical that any AI regulatory regime could succeed in adding a cautionary note to volitional AI research and development, the worst-case scenarios are bad enough that it will be worth giving some form of regulation a try.

In CIO.com’s CIO Survival Guide: “Why all IT talent should be irreplaceable.”

It’s about ignoring the conventional wisdom about irreplaceable employees. Because if your employees aren’t irreplaceable, you’re doing something wrong.

Dear Bob …

I need some project management advice.

I’ve read Bare Bones Project Management … and thank you for writing it! … but my issues aren’t about a project I’m managing. I’m just part of the project team, and the project manager doesn’t seem to be following your guidelines.

Which would be okay if my fellow project team members were strong players. But they aren’t – most of them are, to use a phrase I’ve borrowed from you, hiding behind the herd.

And I’m the herd.

Okay, that isn’t fair. My team does have some competent members. That’s the plot twist: The productive team members are the ones supplied by our client. They get their work done on time and in accordance with the project plan. They’re a pleasure to work with.

A pleasure, that is, except for the conversations in which I have to make excuses for my colleagues. I’m running out. That’s one place I need your advice.

Another challenge is our embarrassing weekly status meetings – embarrassing in that the project manager – not the project’s team members but the project manager – presents the project’s status. His version always shades the facts just enough to make it look like the project has made progress, while concealing that whatever progress has been made came either from one of the client’s staff or from me.

One more? While it’s too soon to say the project will fail completely – there’s still a chance we’ll find a way to muddle through – it certainly won’t be something to brag about. I need some ways to be recognized for how I helped keep the project from failing completely.

Or, if you don’t have any magic formulas for that, can you at least suggest ways I can keep my name from being connected to the mess?

Sincerely,

Vulnerable

Dear Vulnerable …

Based on your description it’s clear the project manager doesn’t know how to manage a project. If there were no other evidence, the status meetings would be proof enough: a PM who informs the team about the project’s status, instead of asking team members to tell the PM, and each other, where their work stands, betrays a complete misunderstanding of what status meetings are for – namely, applying peer pressure to underperforming team members to get them to pick up the tempo.

But you didn’t need me to tell you this.

Here’s what you do need me to tell you: You can’t fix this project. Don’t try.

Fixing the project means improving the PM’s skills. But no matter your intentions, and no matter how you go about it, if you were to try to fix the PM all you’d do is add hostility and defensiveness to the PM’s current list of failings.

If the PM were interested in your ideas about how to manage projects more effectively, they’d ask.

In the meantime, you should get out of the habit of making excuses for anyone. Instead, direct the question back to the PM, as in, “I’m not in a position to speak to that – it’s something you’ll need to ask the PM.”

This also applies to your under-performing colleagues. Sure, if they ask you for help, and the help they’re asking for is coaching on how to do something rather than getting you to do it for them, that’s well within the scope of healthy team interactions.

If they haven’t asked you for help, offering it anyway is a sure path to alienation.

Beyond that, it’s up to the PM to recognize under-performing team members and do something about it.

You can’t fix the project. Your job now is self-protection.

I’m guessing that in your company billable employees have two managers – a project manager, whose limitations we’ve been discussing, and an administrative manager (AM), responsible for helping you plan your career, conducting your performance reviews, and otherwise helping you navigate organizational challenges.

Your AM is your first stop in vulnerability management. Schedule enough time to provide an accurate rendition of the situation and ask them for help.

Help might include documenting things and getting more in-depth advice than what I’m providing here. More important is letting the sales lead for the project know there’s a problem. You can’t do this yourself … see “You can’t fix this project,” above. If you were to approach the sales lead directly it would look like backstabbing. But if your AM approaches the sales lead, it’s an appropriate way to keep the company out of trouble and, more important, to keep the company’s revenue generator out of trouble.

And one more thing: Keep your AM apprised as the project situation evolves.

Bob’s last word: And one more thing – if you don’t think your AM has the political chops to help you with the situation, you should still familiarize them with it.

But don’t ask them for help. They wouldn’t be able to give you much anyway.