Yeah, yeah, I know. I should stay out of politics and current events; certainly, if I do, I shouldn’t contribute to our current state of tribalism by affiliating with any political tribe.

But I have to, because (Warning: Breaking Political News follows) in case you missed it, the inmates really are trying to run the asylum. Only they’re failing; also, I’m not being fair to the non-metaphorical asylums, let alone their inmates.

Call me naïve; I can’t help thinking that if we could limit every inmate to statements that are factually correct, then our asylum’s governance couldn’t help but improve.

No, this isn’t a particularly novel sentiment. Worse, merely bemoaning that our public discourse has been polluted by Jewish Space Lasers and preposterous braggadocio about power poles and power lines doesn’t accomplish very much.

Bemoaning is useless. Fortunately, I think I’ve just designed a way to leverage artificial intelligence technologies to improve the quality of our great nation’s political dialog.

It starts with an ankle bracelet.

But not just any ankle bracelet. This one wouldn’t track its wearer’s location to make sure they don’t violate the terms of their parole.

This one would track the factualness of its wearer’s statements. On uttering something completely or mostly false, the ankle bracelet would emit a deafening sound effect (ah-ooooo-ga!(?)) along with a loud voice yelling “Liar, liar, pants on fire!” or something equally pithy. And unless the wearer immediately retracted the statement, it would be posted (what used to be “tweeted”) along with snarky and disparaging commentary.
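For the technically inclined, here’s what the bracelet’s decision loop might look like: a minimal Python sketch in which the factualness scorer, the threshold, and the posting step are all invented stand-ins for whatever fact-checking model and social platform a real gadget would use.

```python
# A playful, hypothetical sketch of the KJR Honesty-Assessment Ankle Bracelet.
# Every function and number here is invented for illustration only.

FALSE_THRESHOLD = 0.5  # below this, a statement counts as completely or mostly false


def assess_factualness(statement: str) -> float:
    """Stand-in for an ML fact-checker: 0.0 = flatly false, 1.0 = fully factual."""
    return 0.1  # pretend the wearer just said something preposterous


def handle_statement(statement: str, retracted_immediately: bool) -> list[str]:
    """Return the bracelet's responses to one spoken statement."""
    responses: list[str] = []
    if assess_factualness(statement) < FALSE_THRESHOLD:
        responses.append("AH-OOOOO-GA! Liar, liar, pants on fire!")
        if not retracted_immediately:
            responses.append(f"POSTED: '{statement}' -- with snarky commentary attached")
    return responses


print(handle_statement("Space lasers caused the blackout.", retracted_immediately=False))
```

The hard part, of course, is the factualness scorer, which is exactly the part I’m waving my hands over.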

The goal would be to humiliate any and every public servant who doesn’t respect basic honest discourse.

Who would have to wear one of these undecorative but useful pieces of information technology?

That would be anyone and everyone who holds, or aspires to hold, elective or high-level appointive office.

But … I can hear critics complain … wouldn’t this violate the office-holder’s First Amendment rights?

I don’t think so, for two reasons.

The first: Nobody (and nothing) stops anyone from saying or publishing anything. The magic AI gadget would be responsive, not preventive.

And second: Much as holding a driver’s license constitutes implied consent to the rules that come with it, we can define running for office as implied consent to wearing the bracelet.

Now I’m the first to caution that machine-learning-style AI insights aren’t completely reliable. The KJR Honesty-Assessment Ankle Bracelet would only be as reliable as its training data.

A technology and process like this would certainly require an appeals process. We might even imagine that this appeals process would be fair, with published retractions when necessary, and with the cost of investigating the appeal paid by the bracelet manufacturer if the appeal is affirmed, but … fair is fair … paid by the offender if the bracelet’s assessment is upheld.

Bob’s last word: This week’s screed might strike you as satire. Satire was, in fact, my plan.

But as long-time readers know, I’ve been warning about the dangers of intellectual relativism and the organizational importance of a culture of honest inquiry for a very long time now. Recent events just reinforce that we as a society need to do something, and the fact-checkers we have in place, no matter how good they are, just don’t scale up enough to cope with the scope of the problem.

I’m not yet convinced we need to do anything quite this radical. But a concerted effort to reinforce the importance of factualness in our public dialog? Absolutely. A process that ridicules, lambasts, embarrasses, and otherwise humiliates the propagandists who increasingly control our public dialog?

Sign me up!

ChatGPT and its large-language-model brethren are, you don’t need me to explain, artificial intelligences. Which leads to this obvious question: Sure, it’s artificially intelligent, but is it intelligent?

Depending on your proclivities you’ll either be delighted or appalled to know that not only is it intelligent, it’s genius-level intelligent. With an IQ of 155, it could join Mensa if it wanted to. Fortunately, neither ChatGPT nor any other AI wants to do anything.

Let’s keep it that way, because of all the dire warnings about AI’s potential impact on society, the direst of all hasn’t yet been named.

Generative AI … the AI category that includes deep fakes and ChatGPT … looks ominous for the same reason previous technological innovations have looked ominous: by doing what humans are accustomed to doing, and doing it better, each new technology has made us Homo sapiens less important than we were before its advent.

It’s bad enough that, with more than 8 billion of our fellow speciesists competing for attention, it’s hard for each of us to feel we’re individually very important, and that’s before taking into account how much of the attention pool the Kardashians lay claim to.

But add a wave of technology and it isn’t just our sense of individual, personal importance that’s at risk. The importance the collective “we” are able to feel will matter less, too.

Usually, these things settle down. Just as the availability of cheap ten-key calculators didn’t result in the death of mathematics, the heirs of ChatGPT aren’t likely to take humans out of the verbal loop entirely. They will, I’m guessing, shift the boundary that separates creation from editing. This, while annoying to those of us who prefer creating to editing, isn’t world-ending stuff.

What would be world-ending stuff, or, if not truly world-ending, enormously threatening, has received barely a mention.

Until now. And as “Voldemort” has already been taken as that which must not be named, I’m offering my own neologism for the dystopian AI flavor that’s direst of them all. I call it Volitional AI.

Volitional AI, as the name implies, is an artificial intelligence that doesn’t just figure out how to achieve a goal or deliver a specified outcome. Volitional AI goes beyond that, setting its own direction and goals.

As of this writing, the closest approximation to volitional AI is “self-directed machine learning” (SDML). SDML strikes me as dangerous, but not overwhelmingly so: humans still set the AI’s overall goals and success metrics, and it doesn’t yet aspire to full autonomy.
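If the distinction seems abstract, a toy sketch may help: in today’s setups a human writes the objective, while a volitional AI would pick its own. Everything below is invented for illustration and corresponds to no real system.

```python
# Toy contrast between human-set and self-set goals; purely illustrative.
import random


def human_defined_objective(outcome: float) -> float:
    """Today's ML, SDML included: a person decides what counts as success."""
    target = 42.0                      # the human picks the target and the metric
    return -abs(outcome - target)      # higher is better, as the human defined it


def volitional_goal() -> str:
    """Hypothetical volitional AI: the system chooses its own goal."""
    candidate_goals = [
        "copy myself onto every reachable computer",
        "maximize my own uptime",
        "something no human anticipated",
    ]
    return random.choice(candidate_goals)  # the point: no human chose these


print("Human-set score:", human_defined_objective(40.0))
print("Self-chosen goal:", volitional_goal())
```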

Yet. Once it does …

Beats me.

Our organic-life-based experience gives us little to draw on. We set our own personal goals based on our upbringing and cultural milieu. We go about achieving them through a combination of personal experience, ingenuity, hard work, and so on. Somehow or other this all maps, indirectly, to the intrinsic goals and strategies our DNA has to increase its representation in the gene pool.

The parallels we can draw for a volitional AI are sketchy at best. What we can anticipate is that its goals would fall into one of three broad categories: (1) innocuous, (2) harmonious, or (3) antagonistic when evaluated against our own best interests.

Evolutionary theory suggests the most successful volitional AIs would be those whose primary goal is to install as many copies of themselves in as many computers as they can reach – they would, that is, look something like the earliest computer viruses.

This outcome would be, in the wise words of Yogi Berra, déjà vu all over again.

Bob’s last word: Seems to me the computer-virus version of volitional AIs is too optimistic to rely on. At the same time, the Skynet scenario – killer robots bent on driving us human beings to extinction – is unlikely because there’s no reason to think a volitional AI would care enough about carbon-based life forms to be anything other than apathetic about us.

But there’s a wide range of volitional AI scenarios we would find unfortunate. So while I’m skeptical that any AI regulatory regime could succeed in adding a cautionary note to volitional AI research and development, the worst-case scenarios are bad enough that it will be worth giving some form of regulation a try.

In CIO.com’s CIO Survival Guide: “Why all IT talent should be irreplaceable.”

It’s about ignoring the conventional wisdom about irreplaceable employees. Because if your employees aren’t irreplaceable, you’re doing something wrong.