Irony fans rejoice. AI has entered the fray.

More specifically, the branch of artificial intelligence known as self-learning AI, better known as machine learning, and in particular its neural network sub-branch, is taking us into truly delicious territory.

Before getting to the punchline, a bit of background.

“Artificial Intelligence” isn’t a thing. It’s a collection of techniques mostly dedicated to making computers good at tasks humans accomplish without very much effort — tasks like: recognizing cats; identifying patterns; understanding the meaning of text (what you’re doing right now); turning speech into text, after which see previous entry (what you’d be doing if you were listening to this as a podcast, which would be surprising because I no longer do podcasts); and applying a set of rules or guidelines to a situation so as to recommend a decision or course of action, like, for example, determining the best next move in a game of chess or go.

Where machine learning comes in is making use of feedback loops to improve the accuracy or efficacy of the algorithms used to recognize cats and so on.
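If “feedback loop” sounds abstract, here’s a toy sketch of the idea, entirely of my own construction and about a thousand simplifications removed from anything in production: the model guesses, the error feeds back, the parameters adjust, repeat.

```python
# Made-up data: a single "cat-ness" feature paired with is-it-a-cat labels.
data = [(0.2, 0), (0.4, 0), (0.6, 1), (0.9, 1)]

weight, bias, learning_rate = 0.0, 0.0, 0.5

for epoch in range(1000):
    for x, label in data:
        prediction = weight * x + bias       # the model's current guess
        error = prediction - label           # the feedback
        weight -= learning_rate * error * x  # nudge parameters to shrink it
        bias -= learning_rate * error

print(f"learned weight={weight:.2f}, bias={bias:.2f}")
```

That loop, run at industrial scale over far more interesting models, is most of what “machine learning” means.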

Along the way we seem to be teaching computers to commit sins of logic, like, for example, the well-known fallacy of mistaking correlation for causation.

Take, for example, a fascinating piece of research from the Pew Research Center that compared how often men and women appear in Google image searches for various job categories with the equivalent U.S. Department of Labor percentages (“Searching for images of CEOs or managers? The results almost always show men,” Andrew Van Dam, The Washington Post’s Wonkblog, 1/3/2019).

It isn’t only CEOs and managers, either. The research showed that, “…In 57 percent of occupations, image searches indicate the jobs are more male-dominated than they actually are.”

While we don’t know exactly how Google image search works, somewhere behind all of this its AI must have discovered some sort of correlation between images of people working and the job categories those images typify. The correlation led to the inference that male-ness causes CEO-ness; also, strangely, bartender-ness and claims-adjuster-ness, to name a few other misfires.
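To see how a mere correlation hardens into a causal-looking rule, here’s a deliberately crude sketch of my own; Google’s actual machinery is more sophisticated and mostly secret, but the failure mode is the same.

```python
from collections import Counter

# Hypothetical training photos in which 90 percent of CEO images happen
# to show men. The skew lives in the data, not in any rule anyone wrote.
training_photos = (
    [("male", "CEO")] * 9 + [("female", "CEO")] * 1
    + [("male", "bartender")] * 7 + [("female", "bartender")] * 3
)

counts = Counter(training_photos)

def most_typical_image(occupation):
    # "Learning" here is nothing but co-occurrence counting...
    tallies = {g: counts[(g, occupation)] for g in ("male", "female")}
    return max(tallies, key=tallies.get)

# ...yet the output behaves like a causal rule: CEO-ness implies male-ness.
print(most_typical_image("CEO"))        # male
print(most_typical_image("bartender"))  # male
```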

Skewed Google occupation image search results are, if not benign, probably quite low on the list of social ills that need correcting.

But it isn’t much of a stretch to imagine law-enforcement agencies adopting similar AI techniques, resulting in correlation-implies-causation driven racial, ethnic, and gender-based profiling.

Or, closer to home, to imagine your marketing department relying on equivalent demographic or psychographic correlations, leading to marketing misfires when targeting messages to specific customer segments.

I said the Google image results must have come from some sort of correlation technique, but that isn’t entirely true. It’s just as possible Google is making use of neural network technology, so called because it roughly emulates how AI researchers imagine the human brain learns.

I say “roughly emulates” as a shorthand for seriously esoteric discussions as to exactly how it all actually works. I’ll leave it at that on the grounds that (1) for our purposes it doesn’t matter; (2) neural network technology is what it is whether or not it emulates the human brain; and (3) I don’t understand the specifics well enough to go into them here.

What does matter about this is that when a neural network … the technical variety, not the organic version … learns something or recommends a course of action, there doesn’t seem to be any way of getting a read-out as to how it reached its conclusion.

Put simply, if a neural network says, “That’s a photo of a cat,” there’s no way to ask it “Why do you think so?”

Okay, okay, if you want to be precise, it’s quite easy to ask it the question. What you won’t get is an answer, just as you won’t get an answer if it recommends, say, a chess move or an algorithmic trade.
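To make the point concrete, here’s a toy network of my own devising … weights picked at random rather than trained, which for present purposes makes no difference … whose complete and entire “explanation” is a pile of numbers.

```python
import random

random.seed(42)

# A toy two-layer network; the weights stand in for whatever a real
# training run would have produced.
hidden_weights = [[random.uniform(-1, 1) for _ in range(4)] for _ in range(3)]
output_weights = [random.uniform(-1, 1) for _ in range(3)]

def classify(pixels):
    hidden = [max(0.0, sum(w * p for w, p in zip(row, pixels)))
              for row in hidden_weights]
    score = sum(w * h for w, h in zip(output_weights, hidden))
    return "cat" if score > 0 else "not a cat"

print(classify([0.9, 0.1, 0.4, 0.7]))  # a verdict, confidently delivered

# And here, in its entirety, is the network's answer to "Why do you think so?"
print(hidden_weights)
print(output_weights)
```

A brain scan is an explanation in roughly the same sense.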

Which gets us to AI’s entry into the 2019 irony sweepstakes.

Start with big data and advanced analytics. Their purpose is supposed to be to move an organization’s decision-making beyond someone in authority “trusting their gut” and toward relying on evidence and logic instead.

We’re now on the cusp of hooking machine-learning neural networks up to our big data repositories so they can discover patterns and recommend courses of action through more sophisticated means than even the smartest data scientists can achieve.

Only we won’t be able to know why the AI is making its recommendations.

Apparently, we’ll just have to trust its guts.

I’m not entirely sure that counts as progress.

Customer Elimination Management … CEM … is CRM’s evil twin.

We all have memories of companies doing their utmost to drive us away. If you’re like me, my family offers its sympathies to your family.

No, wait, that wasn’t it. If you’re like me you might have wondered just when the first instance of CEM took place.

Wonder no more. While it might not have been the first, science has pushed the date of the earliest known gripe back to 1782 BCE. That’s the approximate date of a clay tablet found in the ruins of the Sumerian city-state of Ur …

On the clay tablet, a man named Nanni whined to the merchant Ea-nasir about being delivered the wrong grade of copper ore. “How have you treated me for that copper?” he wrote. “You have withheld my money bag from me in enemy territory; it is now up to you to restore [my money] to me in full.” (“World’s Oldest Customer Complaint Goes Viral,” Christina Zhao, Newsweek, 8/24/2018.)

Even with the best efforts of digital technology, I doubt your calls to customer service, recorded as they are for training and improvement purposes, will be discovered for translation by even the most diligent of 5918’s archeologists.

In the meantime we’re left to wonder if Nanni received a response that began, “Your clay tablet is important to us …”

We’re also left to wonder, with a bit more relevance to the world of modern commerce, if Digital technologies and practices (no no no no no, not “best practices!”) can, as promised, transform customer service.

But we aren’t left to wonder very long, because the answer is obvious. For companies already dedicated to providing outstanding customer service, Digital technologies won’t transform it, but they will undoubtedly improve it.

For companies that didn’t give an infinitesimal damn before Digital strategies and technologies became the Next Big Thing, Digitization will make their already awful customer service even worse.

In theory, business intelligence technologies, applied to masses of data gleaned from social media, might make a persuasive executive-suite case that current service is putrid, that customers are defecting in droves because of it, and that the defectors are blackening the offending company’s reputation among those who, without the benefit of Yelp, might have given it a shot.

In theory, these same technologies, combined with the near-future capability to interpret telephone conversations for both substance and emotional content, might give that same company’s decision-makers … who couldn’t enter the Clue Store with a plutonium American Express card and leave with any merchandise … the clues they need to figure out why their cost of sales is so much higher than their competitors’ while their customer retention and walletshare continue to plummet.

But in the wise words of Benjamin Brewster, an 1882 Yale student: in theory there’s no difference between theory and practice, while in practice there is.

The service a company provides its customers is an inextricable component of the overall value they receive when they buy its products and services. Digitize a business whose leaders don’t personally and intrinsically care about it … who care only about the impact bad customer service has on their annual bonuses and options awards … and the result will be the same bad service, available through more channels.

We’re entering a post-Turing world of chat ’bots, email autoresponders, and, very soon, AIs with synthetic voices, all poised to correctly interpret what we’re saying or writing so as to accurately diagnose their products’ defects and scour their databases of successful resolutions to find the one that precisely fits our situation.
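As a sketch of what “scour their databases of successful resolutions” might amount to … mine, not any vendor’s, and naive by design … word overlap stands in here for the much richer matching real systems attempt.

```python
# Hypothetical resolutions database: known problem -> canned fix.
RESOLUTIONS = {
    "router blinking red": "Power-cycle the router and wait two minutes.",
    "password reset loop": "Clear your browser cache, then reset again.",
    "invoice charged twice": "Open a duplicate-charge ticket with billing.",
}

def best_resolution(complaint):
    words = set(complaint.lower().split())

    def overlap(problem):
        # Score a known problem by how many words it shares with the complaint.
        return len(words & set(problem.split()))

    best = max(RESOLUTIONS, key=overlap)
    # Zero overlap is where useless non-solutions come from.
    return RESOLUTIONS[best] if overlap(best) else "Have you tried rebooting?"

print(best_resolution("my router keeps blinking red after the update"))
```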

More often than not, though, what these capabilities will give customers are the same useless non-solutions to the problems they contacted the service channel to complain about, delivered through a wider variety of more convenient channels but providing no more useful information.

Only now, the IT organization’s name will be on whatever complaints do filter through to top management. Which in turn suggests it isn’t too early to think about the brave new world of software quality assurance. Because in addition to the litany of tests IT already applies to its software … unit, integration, regression, stress, and end-user acceptance being the most prominent … we’ll need to add another.

Call it AIIQ testing. Its purpose will be to determine if the artificial intelligences we’re deploying to support buyers of the company’s products and services are just too stupid to expose to the outside world.
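Here’s one way an AIIQ harness might look, hypothetical names throughout; `support_bot` stands in for whatever AI you’re about to inflict on your customers.

```python
# Canned complaints paired with a phrase any non-stupid answer must contain.
TEST_CASES = [
    ("My router keeps blinking red", "power-cycle"),
    ("I was billed twice this month", "duplicate-charge"),
]

def aiiq_test(support_bot, passing_score=0.9):
    passed = sum(
        1 for complaint, required_phrase in TEST_CASES
        if required_phrase in support_bot(complaint).lower()
    )
    score = passed / len(TEST_CASES)
    # Below passing_score: too stupid to expose to the outside world.
    return score, score >= passing_score

score, deployable = aiiq_test(lambda complaint: "Have you tried rebooting?")
print(score, deployable)  # 0.0 False -- back to the lab
```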

Maybe we can figure out how to use artificial intelligence technology to automate the testing.