Writers obsess about word choice.

No, that isn’t precisely true: Writers pay attention to word choice.

No again. That’s a generalization. “Writers” is too big a group to generalize from. It’s wordsmiths I’m writing about, and not all wordsmiths – just the best ones.

Word maestros choose words the way a cuisinier chooses spices.

Does this mean that if you aren’t a professional writer then it’s okay to rely on “thing” as a general-purpose noun, to be hauled out in place of the word that means what you’re trying to talk about?

In a word, no.

Nor is precision the only issue at stake when you decide how much to care about choosing the optimal term. How you say what you say affects you just as much as it conveys meaning to those you’re speaking to.

There was, for example, the colleague who, in a conversation about office politics, referred to a mutual acquaintance as his “enemy.”

Enemy. Out of every word available to him in his lexicographic warehouse … opponent, adversary, rival, antagonist … he chose the most extreme item in his inventory.

So far as intentions are concerned, I’m confident my associate was merely too lazy to select a less extreme alternative. He wasn’t a bad person.

But we all know what the road to hell is paved with. And calling someone an enemy legitimizes forms of political weaponry more vicious and unsavory than what labeling them your “rival” would suggest are acceptable.

Calling them your enemy, that is, makes them deserve to be your victim.

In a business setting, if you hear anyone among your direct or indirect reports refer to anyone as their enemy, take the opportunity to school them in how inappropriate it is, not to mention organizationally damaging.

That’s different from hearing expressions of rivalry, something that can, pointed in a productive direction, be useful. Do too much to suppress feelings of rivalry and you’ll find that you’ve discouraged smart people from pointing out the flaws in unfortunate ideas, or from suggesting potentially superior alternatives.

Sure, I know you’re busy. And yes, I understand that attending to word choice slows you down.

But allow me to suggest a reframing that might change your attitude about such matters: Choosing the right superlative instead of mindlessly typing “g-r-e-a-t” … or, on the other end of the semantic continuum, finding a term of disparagement more potent than the ever-present “b-a-d” … can be fun.

I might almost suggest that as hobbies go, this one is outstanding.

Bob’s last word: In our national dialog (multilog?) I’ve read lots of opinion pieces that try to explain how it’s all become so toxic and what to do about it.

One I haven’t run across is lazy word choice.

Once upon a time, Grover Norquist famously introduced the Taxpayer Protection Pledge. It had an outsized impact on fiscal policy.

So in that vein, might I suggest some enterprising reader should create the Vocabulary Protection Pledge? Sample phrasing: “Whenever I’m speaking where anyone might hear, I will carefully choose only the most precise words when explaining my ideas.”

It might not stop Empty Green from blathering about Jewish Space Lasers, but as is the case with chicken soup to treat assorted maladies, it wouldn’t hurt.

And anyway, if Jews really did have space lasers, I know whose posterior would be first in line to get zapped.

Bob’s bragging rights: In case you missed the news last week, I’m proud to tell you my long-suffering CIO.com editor, Jason Snyder, and I have been awarded a Silver Tabbie award from Trade Association Business Publications International for my monthly feature, the CIO Survival Guide. Regarding the award, they say, “This blog scores highly for the consistent addressing of the readers’ challenges, backed by insightful examples and application to current events.”

Speaking of which, this week on the (ahem) award-winning CIO Survival Guide: “The CIO’s fatal flaw: Too much leadership, not enough management.” Its point: Compared to management, leadership is what has the mystique. But mystique isn’t what gets work out the door.

Faced with a discipline that looks too much like hard work, I generally compromise by memorizing a handful of magic buzzwords and their definitions. That lets me acknowledge the discipline’s importance without having to actually learn a trade that looks like it would give me a migraine were I to pursue it.

Which gets us to testing … software quality assurance (SQA) … which I know consists of unit testing, integration testing, regression testing, user acceptance testing, and stress testing.

Although from the developer’s perspective, user acceptance testing and stress testing are one and the same thing – developers tend to find watching end-users try to use their software deeply stressful.
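For the record, here’s what the bottom rungs of that ladder look like in practice — a minimal sketch using Python’s built-in unittest module. The function under test and its test cases are invented for illustration, not drawn from any real system.

```python
import unittest

def apply_discount(price: float, pct: float) -> float:
    """Return price reduced by pct percent (hypothetical function under test)."""
    if not 0 <= pct <= 100:
        raise ValueError("pct must be between 0 and 100")
    return round(price * (1 - pct / 100), 2)

class TestApplyDiscount(unittest.TestCase):
    # Unit testing: does one unit of code do what it claims?
    def test_basic_discount(self):
        self.assertEqual(apply_discount(100.0, 25), 75.0)

    # Regression testing: pin a previously fixed bug down so it stays fixed.
    def test_zero_percent_is_identity(self):
        self.assertEqual(apply_discount(19.99, 0), 19.99)

    # Input validation: bad inputs should fail loudly, not quietly.
    def test_rejects_bad_percentage(self):
        with self.assertRaises(ValueError):
            apply_discount(100.0, 150)

# Run with: python -m unittest <this_file>.py
```

Test automation, at its simplest, is just this: the checks live in code, so a build server can rerun every one of them on every change.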

More to the point, I also “know” test automation is a key factor in successful SQA, even though I have no hands-on experience with it at all.

Speaking of no hands-on experience with testing stuff, the headline read, “Bombshell Stanford study finds ChatGPT and Google’s Bard answer medical questions with racist, debunked theories that harm Black patients.” (Garance Burke, Matt O’Brien and the Associated Press, October 20, 2023).

Which gets us to this week’s subject, AI testing. Short version: It’s essential. Longer version: For most IT organizations it’s a new competency, one that’s quite different from what we’re accustomed to. In particular, unlike app dev, where SQA is all about making sure the code does what it’s supposed to do, for the current crop of AI technologies SQA isn’t really SQA at all. It’s “DQA” (data quality assurance) because, as the above-mentioned Stanford study documents, when AI reaches the wrong conclusion it isn’t because of bad code. It’s because the AI is being fed bad data.

In this, AI resembles human intelligence.
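To make the DQA idea concrete, here’s a minimal, hypothetical sketch: before trusting a model, profile the data it was fed. The schema (a “group” field and a binary “label”) and the sample rows are invented for illustration; a real audit would cover far more than one statistic.

```python
from collections import defaultdict

def audit_training_data(rows):
    """rows: list of dicts with 'group' and 'label' keys (hypothetical schema).
    Returns per-group counts and positive-label rates, so skewed
    representation is visible before the model bakes it in."""
    counts = defaultdict(lambda: {"n": 0, "positive": 0})
    for r in rows:
        g = counts[r["group"]]
        g["n"] += 1
        g["positive"] += 1 if r["label"] else 0
    return {
        group: {"n": c["n"], "positive_rate": c["positive"] / c["n"]}
        for group, c in counts.items()
    }

sample = [
    {"group": "A", "label": 1}, {"group": "A", "label": 0},
    {"group": "B", "label": 0}, {"group": "B", "label": 0},
]
print(audit_training_data(sample))
# A glaring gap between groups is a data problem, not a code bug.
```

The point isn’t this particular check. It’s that the test harness interrogates the data, not the code.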

If you’re looking for a good place to start putting together an AI testing regime, Wipro has a nice introduction to the subject: “Testing of AI/ML-based systems,” (Sanjay Nambiar and Prashanth Davey, 2023). And no, I’m not affiliated or on commission.

Rather than continuing down the path of AI nuts and bolts, some observations:

Many industry commentators are fond of pointing out that “artificial intelligence” doesn’t really deal with intelligence, because what machines do doesn’t resemble human thinking.

Just my opinion: This is both bad logic and an incorrect statement.

The bad logic part is the contention that what AI does doesn’t resemble human thinking. The fact of the matter is that we don’t have a good enough grasp of how humans think to be so certain it isn’t what machines are doing when it looks like they’re thinking.

It’s an incorrect statement because decades ago, computers were able to do what we humans do when we think we’re thinking.

Revisit Thinking, Fast and Slow (Daniel Kahneman, 2011). Kahneman identifies two modes of cognition, which he monosyllabically labels “fast” and “slow.”

The fast mode is the one you use when you recognize a friend’s face. You don’t expend much time and effort to think fast, which is why it’s fast. But you can’t rely on its results, something you’d find out if you tried to get your friend into a highly secure facility on the strength of you having recognized their face.

In security circles, identification and authentication are difficult to do reliably, specifically because doing them the fast way isn’t a reliable way to determine what access rights should be granted to the person trying to prove who they are.

Fast thinking, also known as “trusting your gut,” is quick but unreliable, unlike slow thinking, which is what you do when you apply evidence and logic to try to reach a correct conclusion.

One of life’s little ironies is that just about every bit of AI research and development is invested in achieving fast thinking – the kind of thinking whose results we can’t actually trust.

AI researchers aren’t focused on slow thinking – what we do when we say, “I’ve researched and thought about this a lot. Here’s what I concluded and why I reached that conclusion.” They aren’t, because we already won that war. Slow thinking is the kind of artificial intelligence we achieved with expert systems in the late 1980s, with their rule-based processing architectures.
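That rule-based flavor of “slow thinking” is simple enough to sketch in a few lines. This toy forward-chaining engine is illustrative only — the rules and facts are invented, not taken from any real expert system.

```python
# Each rule: if all conditions are among the known facts, add the conclusion.
RULES = [
    ({"has_fever", "has_cough"}, "possible_flu"),
    ({"possible_flu", "short_of_breath"}, "see_doctor"),
]

def forward_chain(facts):
    """Apply rules repeatedly until no new conclusions fire."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in RULES:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

print(forward_chain({"has_fever", "has_cough", "short_of_breath"}))
```

Note the defining virtue of the approach: every conclusion can be traced back to the exact rules that produced it — the “why I reached that conclusion” part that fast-thinking AI can’t offer.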

Bob’s last word: For some reason, we shallow human beings want fast thinking to win out over slow thinking, whether it’s advising someone faced with a tough decision to “trust your gut,” Obi-Wan Kenobi telling Luke to shut off his targeting computer, or some beer-sodden opinionator at your local watering hole sharing what they incorrectly term their “thinking” on a subject. When we aren’t careful we end up promulgating the wit and wisdom of Spiro Agnew, who once rhetorically asked, “What do the experts know?”
