In the early days of business computing, stupid computer tricks appeared frequently in the popular press … stories like the company that sent out dunning notices for customers who owed $0 on their accounts. (Resolution: customers mailed them checks for $0 to cover what they owed.)

Somewhere in most of these stories was an obligatory explanation that computers weren’t really the culprits. Behind any mistake a computer made was a programmer who did something wrong to make the computer do it.

Years of bug fixes, better testing regimes, and cultural acclimatization pretty much dried up the supply of stories like these. But we’re about to experience a resurgence, the result of the increasing popularity of artificial intelligence.

This week’s missive covers two artificial-intelligence-themed tales of woe.

The first happened as I was driving to a regular destination from an unfamiliar direction. My GPS brought me close. Then it announced, “Your destination is on your right.”

Which it was, except that to take advantage of that intelligence I’d have had to make a 90-degree turn, driving off the shoulder of the highway and up a steep grassy slope, at which point I could only hope I’d have enough momentum to knock down the chain-link fence at the top.

Dumb GPS. Uh … oops. Dumb user, as it turned out, because I’d been too lazy to look up my client’s street address. Instead I’d entered a nearby intersection and forgotten that’s what I’d done. So AI lesson #1 is that even the smartest AI will have a hard time overcoming dumb human beings.

The more infuriating tale of AI woe leads to my making an exception to a long-standing KJR practice. Usually, I avoid naming companies guilty of whatever business infraction I’m critiquing, on the grounds that naming the perpetrator lets lots of other just-as-guilty perpetrators off the hook.

But I’m making an exception because really, how many global on-line booksellers that have author pages as part of their web presence are there?

I was about to point a new client to my Amazon author’s page, as he’d expressed interest, when I noticed an unfamiliar title on my list of books published: The Feminist Lie by Bob Lewis.

If you’ve read much of anything I’ve written over the past 21 years, you’d know this isn’t a book I would have written. Among the many reasons, I figure men shouldn’t write books criticizing feminism, any more than feminists should write books that explain male motivations, Jews should write books critiquing Catholicism and vice versa, or Latvians should publish patronizing nastiness about Albanians.

Minnesotans about Iowans? Maybe.

But I distrust pretty much any critique of any tribe that’s written by someone who isn’t a member of that tribe and who feels aggrieved by that tribe.

But some other Bob Lewis proudly wrote a book with this title, and somehow I was being given credit for it. Well, “credit” isn’t the right word, but saying I was being given debit for it might be puzzling.

In any event, I don’t think all of us named “Bob Lewis” constitute a tribe, and I want no responsibility for the actions of all the other Bob Lewises who are making their way through the world.

And yet, somehow I was listed as the author of this little screed.

Oh, well. No problem. Amazon’s Author Central lets me add books I’ve written to my author page. Surely there’s a button to delete any I don’t want on the list.

Nope. Authors can add and they can edit, but they can’t delete.

Turns out, an author’s only recourse is to send a form-based email to the folks who run Author Central to request a deletion. A couple of tries and a week-and-a-half later, the offending title was finally removed from my list.

And, I got an answer to the question of how this happened in the first place. To quote Amazon’s explanation: “Books are added by the Artificial Intelligence system Amazon has in our catalog when the system determines it matches with the author name for the first time.”

Artificial what? Oh, right.

Which leads to one more prediction. While as of this writing “artificial intelligence” has some actual, useful definitions, within two years the phrase will be about as meaningful as “cloud,” because any and all business applications will be described as AI, no matter how limited the logic.

And, as in this case, no matter how lacking in intelligence.

The problem with quadrant charts isn’t that they have two axes and four boxes. It’s the magic part — why their contents are what they are.

Well, okay, that’s one of the problems. Another is that once you (you being me, that is) get in the quadrant habit, new ones pop into your head all the time.

Like, for example, this little puppy that came to me while I was watching Kong: Skull Island as my Gogo inflight movie.

It’s a new, Gartnerized test of actorhood. Preposterousness is the vertical axis. Convincing portrayal of a character is the horizontal. In Kong, Samuel L. Jackson, Tom Hiddleston, and John C. Reilly made the upper right. I leave it to KJR’s readers to label the quadrants.

While this might not be the best example, quadrant charts can be useful for visualizing how a bunch of stuff compares. Take, for example, my new Opinionization Quadrant. It visualizes the different types of thinking you and I run across all the time … and, if we’re honest with each other, the ones we ourselves engage in as well.

It’s all about evidence and certainty. No matter the subject, more and better evidence is what defines expertise and should be the source of confident opinion.

Less and worse evidence should lead to skepticism, along with a desire to obtain more and better evidence unless apathy prevails.

When more and better evidence doesn’t overcome skepticism, that’s just as bad as prejudice and as unfounded as belief. It’s where denial happens — someone unwilling, in the face of overwhelming evidence, to change their position on a subject.

Rationality happens when knowledge and certainty positively correlate. Except there’s so much known about so many subjects that, with the possible exception of Professor Irwin Corey (the world’s foremost authority), we should all be completely skeptical about just about everything.

So we need to allow for once-removed evidence — reporting about those subjects we lack the time or, in some cases, the genius to become experts in ourselves.

No question, once-removed evidence — journalism, to give it a name — does have a few pitfalls.

The first happens when we … okay, I start my quest for an opinion in the Belief/Prejudice quadrant. My self-knowledge extends to knowing I’m too ignorant about the subject to have a strongly held opinion, but not to acknowledging to myself that my strongly held opinion might be wrong.

And so off I go, energetically Googling for ammunition rather than illumination. This being the age of the Internet and all, someone will have written exactly what I want to read, convincingly enough to stay within the boundaries set by my confirmation bias.

This isn’t, of course, actual journalism but it can look a lot like it to the unwary.

The second need for care is understanding the nature and limits of reportage.

Start here: Journalism is a profession. Journalists have to learn their trade. And like most professions it’s an affinity group. Members in good standing care about the respect and approval of other members in good standing.

So when it comes to reporting on, say, social or political matters, professional reporters might have liberal or conservative inclinations, but they’re less likely to root their reporting in their political affinity than you or I would be.

Their affinity, when reporting, is to their profession, not to where they sit on the political spectrum. Given a choice between supporting politicians they agree with and publishing an exclusive story damaging to those same politicians, they’ll go with the scoop every time.

IT journalism isn’t all that different, except that instead of being accused of liberal or conservative bias, IT writers are accused of being Apple, Microsoft, Oracle, or open-source fanboys.

Also: As with political writing, there’s a difference between professional reporters and opinionators. In both politics and tech, opinionators are much more likely to be aligned to one camp or another than reporters. Me too, although I try to keep a grip on it.

And in tech publishing the line separating reporting and opinion isn’t as bright and clear as with political reporting. It can’t be. With tech, true expertise often requires deep knowledge of a specific product line, so affinity bias is hard to avoid. Also, many of us who write in the tech field aren’t degreed journalists. We’re pretty good writers who know the territory, so our journalistic affinity is more limited.

There’s also tech pseudojournalism, where those who are reporting and opinionating (and, for that matter, quadrant-izing) work for firms that receive significant sums from those being reported on.

As Groucho said so long ago, “Love goes out the door when money comes innuendo.”