Faced with a discipline that looks too much like hard work, I generally compromise by memorizing a handful of magic buzzwords and their definitions. That lets me acknowledge the discipline’s importance without having to actually learn a trade that looks like it would give me a migraine were I to pursue it.

Which gets us to testing … software quality assurance (SQA) … which I know consists of unit testing, integration testing, regression testing, user acceptance testing, and stress testing.

Although from the developer’s perspective, user acceptance testing and stress testing are one and the same thing – developers tend to find watching end-users try to use their software deeply stressful.

More to the point, I also “know” test automation is a key factor in successful SQA, even though I have no hands-on experience with it at all.

Speaking of no hands-on experience with testing stuff, the headline read, “Bombshell Stanford study finds ChatGPT and Google’s Bard answer medical questions with racist, debunked theories that harm Black patients.” (Garance Burke, Matt O’Brien and the Associated Press, October 20, 2023).

Which gets us to this week’s subject, AI testing. Short version: It’s essential. Longer version: For most IT organizations it’s a new competency, one that’s quite different from what we’re accustomed to. In particular, unlike app dev, where SQA is all about making sure the code does what it’s supposed to do, for the current crop of AI technologies SQA isn’t really SQA at all. It’s “DQA” (Data Quality Assurance) because, as the above-mentioned Stanford study documents, when AI reaches the wrong conclusion it isn’t because of bad code. It’s because the AI is being fed bad data.
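To make “DQA” a little more concrete, here’s a minimal sketch of the kind of checks it implies: testing the data a model is fed, before anyone tests the model itself. The column names, thresholds, and the assumption that the data lives in a pandas DataFrame are mine, invented for illustration, not anything taken from the Stanford study.

```python
# A minimal, hypothetical sketch of data quality assurance (DQA):
# validate the data before it ever reaches the model.
import pandas as pd

def data_quality_report(df: pd.DataFrame) -> dict:
    """Run basic data-quality checks. Failures here, not code bugs,
    are what typically produce bad AI conclusions."""
    report = {}

    # Completeness: fraction of missing values per column
    report["missing_rate"] = df.isna().mean().to_dict()

    # Uniqueness: duplicate records silently overweight some cases
    report["duplicate_rows"] = int(df.duplicated().sum())

    # Representativeness: is any key group badly under-represented?
    # ("patient_group" is a hypothetical column name)
    if "patient_group" in df.columns:
        report["group_balance"] = (
            df["patient_group"].value_counts(normalize=True).to_dict()
        )

    return report

# Usage sketch: gate the training pipeline on the report instead of eyeballing it.
# df = pd.read_csv("training_data.csv")
# report = data_quality_report(df)
# assert report["duplicate_rows"] == 0, "Fix duplicates before training"
```

The point of the sketch isn’t the specific checks. It’s that the checks run against the data, which is where this generation of AI goes wrong.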

In this, AI resembles human intelligence.

If you’re looking for a good place to start putting together an AI testing regime, Wipro has a nice introduction to the subject: “Testing of AI/ML-based systems,” (Sanjay Nambiar and Prashanth Davey, 2023). And no, I’m not affiliated or on commission.

Rather than continuing down the path of AI nuts and bolts, some observations:

Many industry commentators are fond of pointing out that “artificial intelligence” doesn’t really deal with intelligence, because what machines do doesn’t resemble human thinking.

Just my opinion: This is both bad logic and an incorrect statement.

The bad logic part is the contention that what AI does doesn’t resemble human thinking. The fact of the matter is that we don’t have a good enough grasp of how humans think to be so certain it isn’t what machines are doing when it looks like they’re thinking.

It’s an incorrect statement because decades ago, computers were able to do what we humans do when we think we’re thinking.

Revisit Thinking, Fast and Slow (Daniel Kahneman, 2011). Kahneman identifies two modes of cognition, which he monosyllabically labels “fast” and “slow.”

The fast mode is the one you use when you recognize a friend’s face. You don’t expend much time and effort to think fast, which is why it’s fast. But you can’t rely on its results, something you’d find out if you tried to get your friend into a highly secure facility on the strength of you having recognized their face.

In security circles, identification and authentication are difficult to do reliably, specifically because doing them the fast way isn’t a reliable way to determine what access rights should be granted to the person trying to prove who they are.

Fast thinking, also known as “trusting your gut,” is quick but unreliable, unlike slow thinking, which is what you do when you apply evidence and logic to try to reach a correct conclusion.

One of life’s little ironies is that just about every bit of AI research and development is invested in achieving fast thinking – the kind of thinking whose results we can’t actually trust.

AI researchers aren’t focused on slow thinking – what we do when we say, “I’ve researched and thought about this a lot. Here’s what I concluded and why I reached that conclusion.” They aren’t because we already won that war. Slow thinking is the kind of artificial intelligence we achieved with expert systems in the late 1980s with their rule-based processing architectures.
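For readers who never bumped into those 1980s systems, here’s a toy sketch of the idea, not a faithful reproduction of any actual expert-system shell: a forward-chaining engine applies explicit if/then rules and can report the chain of reasoning that led to its conclusion. The rules and facts below are invented for illustration.

```python
# Toy forward-chaining rule engine: the "slow thinking" style of AI.
def forward_chain(facts: set, rules: list):
    """Apply if/then rules until no new facts emerge.
    Returns the final facts plus a reasoning trace, i.e. the
    'here's why I reached that conclusion' part."""
    trace = []
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts = facts | {conclusion}
                trace.append(f"{' and '.join(sorted(conditions))} -> {conclusion}")
                changed = True
    return facts, trace

# Hypothetical rules: (set of required facts, fact to conclude)
rules = [
    ({"fever", "cough"}, "possible_flu"),
    ({"possible_flu", "high_risk_patient"}, "recommend_clinic_visit"),
]

facts, trace = forward_chain({"fever", "cough", "high_risk_patient"}, rules)
print(facts)   # includes 'recommend_clinic_visit'
print(trace)   # the explicit, inspectable chain of reasoning
```

Notice what the trace gives you: evidence and logic you can audit, which is exactly what fast-thinking AI doesn’t provide.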

Bob’s last word: For some reason, we shallow human beings want fast thinking to win out over slow thinking. Whether it’s advising someone faced with a tough decision to “trust your gut,” Obi-Wan Kenobi telling Luke to shut off his targeting computer, or some beer-sodden opinionator at your local watering hole sharing what they incorrectly term their “thinking” on a subject, when we aren’t careful we end up promulgating the wit and wisdom of Spiro Agnew. “Ah,” he once rhetorically asked, “What do the experts know?”

Bob’s bragging rights: I just learned that TABPI – the Trade Association Business Publications International – has recognized Jason Snyder, my long-suffering editor at CIO.com, and me with a Silver Tabbie Award for our monthly feature, the CIO Survival Guide. Regarding the award, they say, “This blog scores highly for the consistent addressing of the readers’ challenges, backed by insightful examples and application to current events.”

Gratifying.

Speaking of which, on CIO.com’s CIO Survival Guide: “The CIO’s fatal flaw: Too much leadership, not enough management.” Its point: Compared to management, leadership is what has the mystique. But mystique isn’t what gets work out the door.

Dear Bob …

I need some project management advice.

I’ve read Bare Bones Project Management … and thank you for writing it! … but my issues aren’t about a project I’m managing. I’m just part of the project team, and the project manager doesn’t seem to be following your guidelines.

Which would be okay if my fellow project team members were strong players. But they aren’t – most of them are, to use a phrase I’ve borrowed from you, hiding behind the herd.

And I’m the herd.

Okay, that isn’t fair. My team does have some competent members. That’s the plot twist: The productive team members are the ones supplied by our client. They get their work done on time and in accordance with the project plan. They’re a pleasure to work with.

A pleasure, that is, except for the conversations in which I have to make excuses for my colleagues. I’m running out. That’s one place I need your advice.

Another challenge is our embarrassing weekly status meetings – embarrassing in that the project manager – not the project’s team members but the project manager – presents the project’s status. His version always shades the facts just enough to make it look like the project has made progress, while concealing that whatever progress has been made was either made by one of the client’s staff, or by me.

One more? While it’s too soon to say the project will fail completely – there’s still a chance we’ll find a way to muddle through – it certainly won’t be something to brag about. I need some ways to be recognized for how I helped keep the project from failing completely.

Or, if you don’t have any magic formulas for that, can you at least suggest ways I can keep my name from being connected to the mess?

Sincerely,

Vulnerable

Dear Vulnerable …

Based on your description it’s clear the project manager doesn’t know how to manage a project. If for no other reason, conducting status meetings in which the project manager informs the team of the project’s status, instead of asking team members to tell the PM, and each other, where their work stands, betrays a complete misunderstanding of what status meetings are for: applying peer pressure to underperforming team members so they pick up the tempo.

But you didn’t need me to tell you this.

Here’s what you do need me to tell you: You can’t fix this project. Don’t try.

Fixing the project means improving the PM’s skills. But no matter your intentions, and no matter how you go about it, if you were to try to fix the PM all you’d do is add hostility and defensiveness to the PM’s current list of failings.

If the PM were interested in your ideas about how to manage projects more effectively, they’d ask.

In the meantime, you should get out of the habit of making excuses for anyone. Instead, direct the question back to the PM, as in, “I’m not in a position to speak to that – it’s something you’ll need to ask the PM.”

This also applies to your under-performing colleagues. Sure, if they ask you for help and the help they’re asking for is coaching on how to do something, not to get you to do it for them, that’s well within the scope of healthy team interactions.

If they haven’t asked you for help, offering it anyway is a sure path to alienation.

Beyond that, it’s up to the PM to recognize under-performing team members and do something about it.

You can’t fix the project. Your job now is self-protection.

I’m guessing that in your company billable employees have two managers – a project manager, whose limitations we’ve been discussing, and an administrative manager (AM), responsible for helping you plan your career, conducting your performance reviews, and otherwise helping you navigate organizational challenges.

Your AM is your first stop in vulnerability management. Schedule enough time to provide an accurate rendition of the situation and ask them for help.

Help might include documenting things and getting more in-depth advice than what I’m providing here. More important is letting the sales lead for the project know there’s a problem. You can’t do this … see “You can’t fix this project,” above. If you were to approach the sales lead directly it would look like backstabbing. But if your AM approaches the sales lead, it’s an appropriate way to keep the company out of trouble and, more important, to keep the company’s revenue generator out of trouble.

And one more thing: Keep your AM apprised as the project situation evolves.

Bob’s last word: And one more thing – if you don’t think your AM has the political chops to help you with the situation, you should still familiarize them with it.

But don’t ask them for help. They wouldn’t be able to give you much anyway.