In the early days of business computing, stupid computer tricks appeared frequently in the popular press … stories like the company that sent dunning notices to customers who owed $0 on their accounts. (Resolution: customers mailed them checks for $0 to cover what they owed.)

Somewhere in most of these stories was an obligatory explanation that computers weren’t really the culprits. Behind any mistake a computer made was a programmer who did something wrong to make it do it.

Years of bug fixes, better testing regimes, and cultural acclimatization pretty much dried up the supply of stories like these. But we’re about to experience a resurgence, the result of the increasing popularity of artificial intelligence.

This week’s missive covers two artificial-intelligence-themed tales of woe.

The first happened as I was driving to a regular destination from an unfamiliar direction. My GPS brought me close. Then it announced, “Your destination is on your right.”

Which it was. Only to take advantage of that intelligence, I’d have had to make a 90-degree turn that would have sent me off the shoulder of the highway and up a steep grassy slope, at which point I could only hope I’d have enough momentum to knock down the chain-link fence at the top.

Dumb GPS. Uh … oops. Dumb user, as it turned out, because I’d been too lazy to look up my client’s street address. Instead I’d entered a nearby intersection and forgotten that’s what I’d done. So AI lesson #1 is that even the smartest AI will have a hard time overcoming dumb human beings.

The more infuriating tale of AI woe leads to my making an exception to a long-standing KJR practice. Usually, I avoid naming companies guilty of whatever business infraction I’m critiquing, on the grounds that naming the perpetrator lets lots of other just-as-guilty perpetrators off the hook.

But I’m making an exception because really, how many global online booksellers with author pages as part of their web presence are there?

I was about to point a new client to my Amazon author page, as he’d expressed interest, when I noticed an unfamiliar title on my list of published books: The Feminist Lie by Bob Lewis.

If you’ve read much of anything I’ve written over the past 21 years, you’d know this isn’t a book I would have written. Among the many reasons, I figure men shouldn’t write books criticizing feminism, any more than feminists should write books that explain male motivations, Jews should write books critiquing Catholicism and vice versa, or Latvians should publish patronizing nastiness about Albanians.

Minnesotans about Iowans? Maybe.

But I distrust pretty much any critique of any tribe that’s written by someone who isn’t a member of that tribe and who feels aggrieved by that tribe.

But some other Bob Lewis proudly wrote a book with this title, and somehow I was being given credit for it. Well, “credit” isn’t the right word, but saying I was being given debit for it might be puzzling.

In any event, I don’t think all of us named “Bob Lewis” constitute a tribe, and I want no responsibility for the actions of all the other Bob Lewises who are making their way through the world.

And yet, somehow I was listed as the author of this little screed.

Oh, well. No problem. Amazon’s Author Central lets me add books I’ve written to my author page. Surely there’s a button to delete any I don’t want on the list.

Nope. Authors can add and they can edit, but they can’t delete.

Turns out, an author’s only recourse is to send a form-based email to the folks who run Author Central to request a deletion. A couple of tries and a week-and-a-half later, the offending title was finally removed from my list.

And, I got an answer to the question of how this happened in the first place. To quote Amazon’s explanation: “Books are added by the Artificial Intelligence system Amazon has in our catalog when the system determines it matches with the author name for the first time.”

Artificial what? Oh, right.

Which leads to one more prediction: as of this writing, “artificial intelligence” has some actual, useful definitions, but within two years the phrase will be about as meaningful as “cloud,” because any and all business applications will be described as AI, no matter how limited the logic.

And, as in this case, no matter how lacking in intelligence.

I don’t get it.

I just read Lucas Carlson’s excellent overview of microservices architecture in InfoWorld. If you want an introduction to the subject you could do far worse, although I confess microservices architecture appears to violate what I consider one of the fundamentals of good architecture. (It’s contest time: the first winning guess, defined as the one I agree with most, will receive a hearty public virtual handshake from yours truly.)

My concern isn’t the architectural value of microservices versus its predecessors. It’s that by focusing so much attention on it, IT ignores what it spends most of its time and effort doing.

Microservices, and DevOps, to which it’s tied at the hip, and almost all variants of Agile, to which DevOps is tied at the ankle, and Waterfall, whose deficiencies are what have led to Agile’s popularity, all focus on application development.

WAKE UP!!!!! IT only develops applications when it has no choice. Internal IT mostly buys when it can and only builds when it has to. Knowing how to design, engineer and build microservices won’t help you implement SAP, Salesforce, or Workday, to pick three examples out of a hat. Kanban and Scrum might be a bit more helpful, but not all that much. The reasons range from obvious to abstruse.

On the obvious end of the continuum: when you build your own solutions you have complete control over the application and information architecture. When you buy solutions you have control over neither.

Sure, you can require a microservices foundation in your RFPs. Good luck with that: the best you can successfully insist on is full access to functionality via a RESTful (or SOAPy, or JSON-and-the-Argonauts) API.
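Here’s a minimal sketch of what that looks like from the buyer’s side. The endpoint, token, and record shape are all invented for illustration; the point is that the vendor’s internal architecture is invisible from out here:

```python
# A sketch only: vendor.example.com, the token, and the /employees resource
# are hypothetical stand-ins for whatever your vendor actually exposes.
import json
import urllib.request

BASE_URL = "https://vendor.example.com/api/v2"  # hypothetical endpoint
API_TOKEN = "replace-with-your-token"           # hypothetical credential

def get_employee(employee_id: str) -> dict:
    """Fetch one employee record, in whatever shape the vendor chose."""
    req = urllib.request.Request(
        f"{BASE_URL}/employees/{employee_id}",
        headers={
            "Authorization": f"Bearer {API_TOKEN}",
            "Accept": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

# You can call the functionality, but whether a microservice or a monolith
# answers on the other end is the vendor's business, not yours.
```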

Halfway between obvious and abstruse lies the difference in cadence between programming and configuration, and its practical consequences.

Peel away a few layers of any Agile onion and you’ll find a hidden assumption about the ratio of time and effort needed to specify functionality … to write an average-complexity user story … and the time needed to program and test it. The hidden assumption is that programming takes a lot longer than specification. It’s a valid assumption when you’re writing Angular, or PHP, or Python, or C# code.

It’s less valid when you’re using a COTS package’s built-in configuration tools, which are designed to let you tweak what the package does with maximum efficiency and minimum risk that the tweak will blow up production. The specify-to-build ratio is much closer to 1 than when a team is developing software from scratch, which means Scrum, with its user-story writing and splitting, backlog management, and sprint planning, imposes more overhead than needed.
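A back-of-the-envelope sketch makes the point. All the hours below are invented; substitute your own team’s numbers:

```python
# All hours invented for illustration; the shape of the arithmetic is the point.

def ceremony_share(specify_hrs: float, build_hrs: float, ceremony_hrs: float) -> float:
    """Scrum ceremony as a fraction of total effort for one user story."""
    return ceremony_hrs / (specify_hrs + build_hrs + ceremony_hrs)

# From-scratch development: programming dwarfs specification, so the
# ceremony amortizes nicely.
print(ceremony_share(specify_hrs=2, build_hrs=20, ceremony_hrs=3))  # 0.12

# COTS configuration: the specify-to-build ratio is close to 1, so the same
# ceremony swallows roughly three times the share of total effort.
print(ceremony_share(specify_hrs=2, build_hrs=3, ceremony_hrs=3))   # 0.375
```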

And that ignores the question of whether each affected business area would be more effective adopting the process built into the COTS package instead of spending time and effort adapting the package to the processes it uses at the moment.

At the fully abstruse end of the continuum lies systems integration, waiting in the weeds to nail your unwary implementation teams.

To understand the problem, go back to Edgar Codd and his “twelve rules” for relational databases (there are actually thirteen; his numbering starts at zero). Codd’s framework for data normalization is still the touchstone for IT frameworks and methodologies of all kinds; just about all of them come up short by comparison.

Compare the process we go through to design a relational database with the process we go through to integrate and synchronize the data fields that overlap among the multiple COTS and SaaS packages your average enterprise needs to get everything done that needs to get done.
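To make the first half of that comparison concrete, here’s a minimal sketch of the disciplined side, with an invented two-table schema:

```python
# An invented, minimal schema. The point is the discipline, not the details:
# each fact lives in exactly one place, and the design process that gets you
# here is well understood.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE department (
        dept_id   INTEGER PRIMARY KEY,
        dept_name TEXT NOT NULL UNIQUE
    );
    CREATE TABLE employee (
        emp_id    INTEGER PRIMARY KEY,
        full_name TEXT NOT NULL,
        hire_date TEXT NOT NULL,  -- one agreed format (ISO 8601)
        dept_id   INTEGER NOT NULL REFERENCES department(dept_id)
    );
""")
conn.close()
```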

As a veteran of the software wars explained to me a long time ago, software is just an opinion. Which means that if you have three different packages that manage employee data, you bought three conflicting opinions of what’s important to know about employees and how to represent it.

Which in turn means synchronizing employee data among these packages isn’t as simple as “create a metadata map” sounds when you write the phrase on a PowerPoint slide.
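Here’s a sketch of why not, with three invented records describing the same employee, one per hypothetical package:

```python
# Three invented records, three opinions about the same employee. The field
# names, formats, and status codes are made up, but the conflicts are the
# kind you actually hit.
hris_record    = {"emp_no": "00417", "last": "Lewis", "first": "Bob",
                  "hired": "03/15/2011", "status": "A"}           # "A" = active
payroll_record = {"employee_id": 417, "name": "Lewis, Bob",
                  "start_date": "2011-03-15", "pay_status": "CURRENT"}
badge_record   = {"badge": "BL-417", "display_name": "Bob Lewis",
                  "active": True}

def naive_metadata_map(hris: dict) -> dict:
    """The PowerPoint version: rename fields and declare victory."""
    return {
        "employee_id": int(hris["emp_no"]),
        "name": f'{hris["last"]}, {hris["first"]}',
        "start_date": hris["hired"],   # wrong date format for payroll
        "pay_status": hris["status"],  # "A" is not "CURRENT"
    }

# The renames are the easy part. Date formats, status vocabularies, and which
# system wins when two disagree are the conflicting opinions, and no
# field-to-field map settles an argument between opinions.
```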

To the best of my knowledge, nobody has yet created an integration architecture design methodology.

Which shouldn’t be all that surprising: Creating one would mean creating a way to reconcile differing opinions.

And that’s a shame, because a methodology for resolving disagreements would have a lot more uses than just systems integration.