Faced with a discipline that looks too much like hard work, I generally compromise by memorizing a handful of magic buzzwords and their definitions. That lets me acknowledge the discipline’s importance without having to actually learn a trade that looks like it would give me a migraine were I to pursue it.

Which gets us to testing … software quality assurance (SQA) … which I know consists of unit testing, integration testing, regression testing, user acceptance testing, and stress testing.

Although from the developer’s perspective, user acceptance testing and stress testing are one and the same thing – developers tend to find watching end-users try to use their software deeply stressful.

More to the point, I also “know” test automation is a key factor in successful SQA, even though I have no hands-on experience with it at all.

Speaking of no hands-on experience with testing stuff, the headline read, “Bombshell Stanford study finds ChatGPT and Google’s Bard answer medical questions with racist, debunked theories that harm Black patients.” (Garance Burke, Matt O’Brien and the Associated Press, October 20, 2023).

Which gets us to this week’s subject, AI testing. Short version: It’s essential. Longer version: For most IT organizations it’s a new competency, one that’s quite different from what we’re accustomed to. Especially because, unlike app dev, where SQA is all about making sure the code does what it’s supposed to do, for the current crop of AI technologies SQA isn’t really SQA at all. It’s “DQA” (Data Quality Assurance) because, as the above-mentioned Stanford study documents, when AI reaches the wrong conclusion it isn’t because of bad code. It’s because the AI is being fed bad data.

In this, AI resembles human intelligence.

If you’re looking for a good place to start putting together an AI testing regime, Wipro has a nice introduction to the subject: “Testing of AI/ML-based systems,” (Sanjay Nambiar and Prashanth Davey, 2023). And no, I’m not affiliated or on commission.
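To make “DQA” concrete, here’s a minimal sketch of what testing the data rather than the code might look like: profiling a training set for missing values, label imbalance, and under-represented groups before any model ever sees it. The dataset, column names, and the 20% threshold are all hypothetical – strictly an illustration, not a recipe.

```python
# A minimal, hypothetical DQA (Data Quality Assurance) check: profile the training
# data before worrying about the model. Column names and thresholds are invented
# for illustration only.
import pandas as pd

def data_quality_report(df: pd.DataFrame, label_col: str, group_col: str) -> dict:
    """Flag the kinds of data problems that lead an AI to wrong conclusions."""
    report = {
        # Missing values: incomplete records quietly skew what the model learns.
        "missing_fraction": df.isna().mean().to_dict(),
        # Label balance: a lopsided outcome distribution biases predictions.
        "label_balance": df[label_col].value_counts(normalize=True).to_dict(),
        # Group representation: under-represented groups tend to get worse answers.
        "group_representation": df[group_col].value_counts(normalize=True).to_dict(),
    }
    # Flag any group making up less than 20% of the data (arbitrary threshold).
    report["underrepresented_groups"] = [
        group for group, share in report["group_representation"].items() if share < 0.20
    ]
    return report

if __name__ == "__main__":
    training_data = pd.DataFrame({
        "patient_group": ["A"] * 9 + ["B"],
        "outcome":       [1, 0, 1, 1, 0, 1, 0, 1, 1, 0],
    })
    print(data_quality_report(training_data, label_col="outcome", group_col="patient_group"))
```

Nothing in that sketch touches the model’s code – which is the point.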

Rather than continuing down the path of AI nuts and bolts, some observations:

Many industry commentators are fond of pointing out that “artificial intelligence” doesn’t really deal with intelligence, because what machines do doesn’t resemble human thinking.

Just my opinion: This is both bad logic and an incorrect statement.

The bad logic part is the contention that what AI does doesn’t resemble human thinking. The fact of the matter is that we don’t have a good enough grasp of how humans think to be so certain it isn’t what machines are doing when it looks like they’re thinking.

It’s an incorrect statement because, decades ago, computers were already able to do what we humans do when we think we’re thinking.

Revisit Thinking, Fast and Slow (Daniel Kahneman, 2011). Kahneman identifies two modes of cognition, which he monosyllabically labels “fast” and “slow.”

The fast mode is the one you use when you recognize a friend’s face. You don’t expend much time and effort to think fast, which is why it’s fast. But you can’t rely on its results, something you’d find out if you tried to get your friend into a highly secure facility on the strength of you having recognized their face.

In security circles, identification and authentication are difficult to do reliably, specifically because doing them the fast way isn’t a reliable way to determine what access rights should be granted to the person trying to prove who they are.

Fast thinking, also known as “trusting your gut,” is quick but unreliable, unlike slow thinking, which is what you do when you apply evidence and logic to try to reach a correct conclusion.

One of life’s little ironies is that just about every bit of AI research and development is invested in achieving fast thinking – the kind of thinking whose results we can’t actually trust.

AI researchers aren’t focused on slow thinking – what we do when we say, “I’ve researched and thought about this a lot. Here’s what I concluded and why I reached that conclusion.” They aren’t, because we already won that war. Slow thinking is the kind of artificial intelligence we achieved with expert systems in the late 1980s, with their rule-based processing architectures.
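For contrast, here’s a toy sketch of the rule-based, forward-chaining style those expert systems used: explicit facts, explicit rules, and a conclusion you can trace step by step. The rules and facts below are invented for illustration; real expert-system shells were considerably more elaborate.

```python
# A toy forward-chaining rule engine, illustrating the "slow thinking" style of
# 1980s expert systems. The facts and rules are hypothetical examples.

def forward_chain(facts: set[str], rules: list[tuple[set[str], str]]) -> set[str]:
    """Repeatedly apply rules (premises -> conclusion) until no new facts appear."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= derived and conclusion not in derived:
                derived.add(conclusion)
                print(f"Because {sorted(premises)} hold, conclude: {conclusion}")
                changed = True
    return derived

rules = [
    ({"fever", "cough"}, "possible respiratory infection"),
    ({"possible respiratory infection", "abnormal x-ray"}, "refer to a specialist"),
]
forward_chain({"fever", "cough", "abnormal x-ray"}, rules)
```

The point isn’t the dozen lines of Python; it’s that every conclusion comes with the evidence and the rule that produced it – exactly what fast thinking doesn’t give you.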

Bob’s last word: For some reason, we shallow human beings want fast thinking to win out over slow thinking. Whether it’s advising someone faced with a tough decision to “trust your gut,” Obi-Wan Kenobi telling Luke to shut off his targeting computer, or some beer-sodden opinionator at your local watering hole sharing what they incorrectly term their “thinking” on a subject. When we aren’t careful, we end up promulgating the wit and wisdom of Spiro Agnew. “Ah,” he once rhetorically asked, “what do the experts know?”

Bob’s bragging rights: I just learned that TABPI – the Trade Association Business Publications International – has recognized Jason Snyder, my long-suffering editor at CIO.com, and me with a Silver Tabbie Award for our monthly feature, the CIO Survival Guide. Regarding the award, they say, “This blog scores highly for the consistent addressing of the readers’ challenges, backed by insightful examples and application to current events.”

Gratifying.

Speaking of which, on CIO.com’s CIO Survival Guide: “The CIO’s fatal flaw: Too much leadership, not enough management.” Its point: Compared to management, leadership is what has the mystique. But mystique isn’t what gets work out the door.

ChatGPT and its large-language-model brethren are, you don’t need me to explain, artificial intelligences. Which leads to this obvious question: Sure, it’s artificially intelligent, but is it intelligent?

Depending on your proclivities, you’ll either be delighted or appalled to know that not only is it intelligent, but it’s genius-level intelligent. With an IQ of 155, it could join Mensa if it wanted to. Fortunately, neither ChatGPT nor any other AI wants to do anything.

Let’s keep it that way, because of all the dire warnings about AI’s potential impact on society, the direst of all hasn’t yet been named.

Generative AI … the AI category that includes deep fakes and ChatGPT … looks ominous for the same reason previous technological innovations have looked ominous: each does something humans have been accustomed to doing, does it better, and in so doing makes us Homo sapiens less important than we were before its advent.

It’s bad enough that, with more than 8 billion of our fellow speciesists competing for attention, it’s hard for each of us to feel we’re individually very important, and that’s before taking into account how much of the attention pool the Kardashians lay claim to.

But add a wave of technology and it isn’t just our sense of individual, personal importance that’s at risk. The importance the collective “we” are able to feel will matter less, too.

Usually, these things settle down. Just as the availability of cheap ten-key calculators didn’t result in the death of mathematics, the heirs of ChatGPT aren’t likely to take humans out of the verbal loop entirely. They will, I’m guessing, shift the boundary that separates creation from editing. This, while annoying to those of us who prefer creating to editing, isn’t world-ending stuff.

What would be world-ending stuff, or, if not truly world-ending, enormously threatening, has received barely a mention.

Until now. And as “Voldemort” has already been taken as that which must not be named, I’m offering my own neologism for the dystopian AI flavor that’s direst of them all. I call it Volitional AI.

Volitional AI, as the name implies, is an artificial intelligence that doesn’t just figure out how to achieve a goal or deliver a specified outcome. Volitional AI goes beyond that, setting its own direction and goals.

As of this writing, the closest approximation to volitional AI is “self-directed machine learning” (SDML). SDML strikes me as dangerous, but not overwhelmingly so: humans still set the AI’s overall goals and success metrics, and it doesn’t aspire to full autonomy.

Yet. Once it does …

Beats me.

Our organic-life-based experience gives us little to draw on. We set our own personal goals based on our upbringing and cultural milieu. We go about achieving them through a combination of personal experience, ingenuity, hard work, and so on. Somehow or other this all maps, indirectly, to the intrinsic goals and strategies our DNA has to increase its representation in the gene pool.

The parallels we can draw for a volitional AI are sketchy at best. What we can anticipate is that its goals would fall into one of three broad categories when evaluated against our own best interests: (1) innocuous, (2) harmonious, or (3) antagonistic.

Evolutionary theory suggests the most successful volitional AIs would be those whose primary goal is to install as many copies of themselves in as many computers as they can reach – they would, that is, look something like the earliest computer viruses.

This outcome would be, in the wise words of Yogi Berra, déjà vu all over again.

Bob’s last word: Seems to me the computer-virus version of volitional AI is too optimistic to rely on. At the same time, the Skynet scenario – killer robots bent on driving us human beings to extinction – is unlikely, because there’s no reason to think a volitional AI would care enough about carbon-based life forms to be anything other than apathetic about us.

But there’s a wide range of volitional AI scenarios we would find unfortunate. So while I’m skeptical that any AI regulatory regime could succeed in adding a cautionary note to volitional AI research and development, the worst-case scenarios are bad enough that it will be worth giving some form of regulation a try.

In CIO.com’s CIO Survival Guide: “Why all IT talent should be irreplaceable.”

It’s about ignoring the conventional wisdom about irreplaceable employees. Because if your employees aren’t irreplaceable, you’re doing something wrong.