
Trusting electronic guts


Irony fans rejoice. AI has entered the fray.

More specifically, the branch of artificial intelligence known as self-learning AI, also known as machine learning (and in particular its sub-branch, neural networks), is taking us into truly delicious territory.

Before getting to the punchline, a bit of background.

“Artificial Intelligence” isn’t a thing. It’s a collection of techniques mostly dedicated to making computers good at tasks humans accomplish without very much effort — tasks like: recognizing cats; identifying patterns; understanding the meaning of text (what you’re doing right now); turning speech into text, after which see previous entry (what you’d be doing if you were listening to this as a podcast, which would be surprising because I no longer do podcasts); and applying a set of rules or guidelines to a situation so as to recommend a decision or course of action, like, for example, determining the best next move in a game of chess or go.

Where machine learning comes in is making use of feedback loops to improve the accuracy or efficacy of the algorithms used to recognize cats and so on.
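In case "feedback loop" sounds abstract, here's a minimal sketch in plain Python, with invented data and made-up "cat features" (nothing here resembles a real cat detector). The point isn't the math; it's the loop: the model guesses, the error feeds back, and the weights adjust so the next guess is a bit better.

```python
# A minimal sketch of a machine-learning feedback loop: guess, measure the
# error, adjust, repeat. The features and data are invented for illustration.
import math

# Toy examples: (has_pointy_ears, has_whiskers, barks) -> 1 if "cat" else 0
examples = [
    ((1.0, 1.0, 0.0), 1),
    ((1.0, 0.0, 0.0), 1),
    ((0.0, 1.0, 1.0), 0),
    ((0.0, 0.0, 1.0), 0),
]

weights = [0.0, 0.0, 0.0]
bias = 0.0
learning_rate = 0.5

def predict(features):
    z = sum(w * x for w, x in zip(weights, features)) + bias
    return 1.0 / (1.0 + math.exp(-z))    # probability that "it's a cat"

for epoch in range(100):
    for features, label in examples:
        p = predict(features)
        error = p - label                 # the feedback signal
        # Nudge each weight so the error shrinks next time around
        for i, x in enumerate(features):
            weights[i] -= learning_rate * error * x
        bias -= learning_rate * error

print([round(predict(f), 2) for f, _ in examples])  # accuracy improves with feedback
```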

Along the way we seem to be teaching computers to commit sins of logic, like, for example, the well-known fallacy of mistaking correlation for causation.

Take, for example, a fascinating piece of research from the Pew Research Center that compared the frequencies of men and women in Google image searches of various job categories to the equivalent U.S. Department of Labor percentages (“Searching for images of CEOs or managers? The results almost always show men,” Andrew Van Dam, The Washington Post’s Wonkblog, 1/3/2019).

It isn’t only CEOs and managers, either. The research showed that, “…In 57 percent of occupations, image searches indicate the jobs are more male-dominated than they actually are.”

While we don’t know exactly how Google image searches work, somewhere behind all of this the Google image search AI must have discovered some sort of correlation between images of people working and the job categories those images are typical of. The correlation led to the inference that male-ness causes CEO-ness; also, strangely, bartender-ness and claims-adjuster-ness, to name a few other misfires.
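For the curious, the gist of the Pew comparison is simple enough to sketch. The occupations and numbers below are invented (I don't have their dataset); the mechanics are just "share of men in the top image results" set against "share of men actually employed in the occupation."

```python
# Sketch of the comparison behind the Pew finding, using invented numbers.
# For each occupation: share of men in top image-search results vs. the
# share of men actually employed in that occupation (per labor statistics).
occupations = {
    # job title:        (search_share_male, workforce_share_male)  <- hypothetical
    "chief executive": (0.90, 0.72),
    "bartender":       (0.75, 0.40),
    "claims adjuster":  (0.65, 0.38),
}

overrepresented = 0
for job, (search_male, workforce_male) in occupations.items():
    skew = search_male - workforce_male
    print(f"{job}: search shows {search_male:.0%} men vs "
          f"{workforce_male:.0%} in the workforce (skew {skew:+.0%})")
    if skew > 0:
        overrepresented += 1

print(f"{overrepresented / len(occupations):.0%} of these occupations look more "
      "male-dominated in image search than they actually are")
```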

Skewed Google occupation image search results are, if not benign, probably quite low on the list of social ills that need correcting.

But it isn’t much of a stretch to imagine law-enforcement agencies adopting similar AI techniques, resulting in correlation-implies-causation driven racial, ethnic, and gender-based profiling.

Or, closer to home, to imagine your marketing department relying on equivalent demographic or psychographic correlations, leading to marketing misfires when targeting messages to specific customer segments.

I said the Google image results must have been the result of some sort of correlation technique, but that isn’t entirely true. It’s just as possible Google is making use of neural network technology, so called because it roughly emulates how AI researchers imagine the human brain learns.

I say “roughly emulates” as a shorthand for seriously esoteric discussions as to exactly how it all actually works. I’ll leave it at that on the grounds that (1) for our purposes it doesn’t matter; (2) neural network technology is what it is whether or not it emulates the human brain; and (3) I don’t understand the specifics well enough to go into them here.

What does matter about this is that when a neural network … the technical variety, not the organic version … learns something or recommends a course of action, there doesn’t seem to be any way of getting a read-out as to how it reached its conclusion.

Put simply, if a neural network says, “That’s a photo of a cat,” there’s no way to ask it “Why do you think so?”

Okay, okay, if you want to be precise, it’s quite easy to ask it the question. What you won’t get is an answer, just as you won’t get an answer if it recommends, say, a chess move or an algorithmic trade.
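If you want that point in code terms, here's a sketch with a stand-in classifier (nothing here is a real library or a real trained model): the only interface a trained network gives you is input in, score out.

```python
# Sketch: the interface a trained neural network actually exposes.
# "CatClassifier" and its weights are stand-ins, not a real model.
import math
import random

class CatClassifier:
    """Stand-in for a trained network: a pile of numeric weights, one output."""
    def __init__(self, n_inputs=16):
        # In reality these weights come from training; here they're just numbers.
        self.weights = [random.uniform(-1, 1) for _ in range(n_inputs)]

    def predict(self, pixels):
        # Layers of weighted sums and nonlinearities, collapsed to one line.
        score = sum(w * p for w, p in zip(self.weights, pixels))
        return 1 / (1 + math.exp(-score))   # a probability, nothing more

model = CatClassifier()
print(model.predict([0.5] * 16))   # e.g. 0.83 -- "probably a cat"

# Note what's missing: there is no model.why() to call. The "reasoning" is
# smeared across the weights; post-hoc tools (saliency maps, LIME, SHAP) can
# only approximate an explanation after the fact.
```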

Which gets us to AI’s entry into the 2019 irony sweepstakes.

Start with big data and advanced analytics. Their purpose is supposed to be moving an organization’s decision-making beyond someone in authority “trusting their gut,” to relying on evidence and logic instead.

We’re now on the cusp of hooking machine-learning neural networks up to our big data repositories so they can discover patterns and recommend courses of action through more sophisticated means than even the smartest data scientists can achieve.

Only we can’t know why the AI will be making its recommendations.

Apparently, we’ll just have to trust its guts.

I’m not entirely sure that counts as progress.

Comments (76)

  • I have been a long-time reader, and I look forward every week to your new article. I get great value from your insights and wisdom. Thank you.

  • The most severe problems related to AI will not be due to a glitch in the AI process. They will likely arise from blind or lazy trust that the sources and directives given to the process will produce the best overall results.

    Peter Haas’s TEDx talk “The Real Reason to be Afraid of Artificial Intelligence” mentions the COMPAS criminal-sentencing algorithm used in 13 states. For judges faced with long backlogs, it provides welcome relief. But the Wisconsin Supreme Court ruled that the people most directly affected by it have no legal right to audit the internal mechanism producing the assessments.

    Some time back, when business regarded AI as a subject solely for academic speculation, there was a response to computed results called “GIGO”. It stood for Garbage In Gospel Out. If a computer printed it, it must be accurate. I fear that fast, powerful AI will only amplify this tendency.

  • Yep, I’m paying attention.

    Keep the good thoughts and insights flowing, please. V

  • Hi Bob — throwing in my 2 cents. Yes, I read your column every week and have been for years. This week’s column, like all, was thought provoking. Trusting AI? We probably will have to. Trust is built up over time and re-evaluated continually based on results. If there is one thing about machine automation that does concern me, it is that, when machines are wrong, they can be wrong many times faster or more efficiently than a human. There are times where being slow is an advantage.

    Looking forward to many more columns churned out at a weekly — human — pace. If your columns start to appear too rapidly, I will suspect you have built some KJR AI.

    • You think I have the skills to build a KJR AI? Now that’s a compliment!

    • To your comment “…when they are wrong, they can be wrong many times faster or more efficiently than a human.”

      “…he was also a longtime student of the military, and one day he told me a story. Years before, he said, a bright, forward-thinking German general divided his officers into four classes; the clever, the stupid, the industrious, and the lazy. The general believed that every officer possessed two of these qualities. The clever and lazy, for example, were suited for command (they’d figure out the easiest way to do a task); the clever and industrious were suited for high-level staff. The lazy and stupid, he maintained, were an unfortunate by-product of any system and could be slotted in somewhere; but the stupid and industrious were just too dangerous, and the general’s standing order was to have them removed from the military completely, the moment they were identified.”

      “About Face”

  • This is an automated response: my AI is responding to your email. It doesn’t know why, however.

  • Thanks for sharing your thoughts, Bob. TechCrunch has an interesting article on how a machine learning agent hid data from its creators to cheat at its task, nothing evil but still clever. In the end, I think everything the machine does is still very much on us the humans.

    https://techcrunch.com/2018/12/31/this-clever-ai-hid-data-from-its-creators-to-cheat-at-its-appointed-task/

  • Reading, paying attention, still somewhat fearful of how often we think alike after all these years. Must have been all that ink residue and paper dust. Might even want to consider some Real Intelligence Commiseration Hangouts (RICH) with live participants before the bots, algorithms, and fuzzy things completely take over and no longer allow us to congregate.

  • Yes, still here after all these years – and it has been a few years for sure.

    I suspect that, for most of my generation at least, our trust of AI will be strongly affected by the words “Open the pod bay doors, HAL”. That and the many stories Isaac Asimov wrote about robots that did not obey the 3 laws of robotics. It is going to be an interesting ride!

  • I look forward to reading IS Survivor every week. Yes, please keep going!

  • I’m looking forward to another year of serious commentary and discussion stemming from your longtime, meaningful experience. You know what you are talking about, even though you are a little tongue-in-cheek where you can be, with a hint of sarcasm perhaps. I have to think and be honest with myself — especially when I agree most. Admittedly I enjoy your sense of humor; it is very similar to mine!

    Please keep up the work in 2019 — and beyond.
    Stephen

  • Reminds me of the line from the movie “High Fidelity,” when John Cusack’s character says “I’ve been listening to my gut since I was 14 years old, and frankly speaking, I’ve come to the conclusion that my guts have shit for brains.”

  • Re: trusting your gut, would like to see your comments on Wardley mapping, https://medium.com/wardleymaps
    It’s about as contrary to that philosophy as I’ve seen.

  • I’m still here reading you in 2019.

    As I was in 2009, 1999, etc. (Think we’ll make 2029?)

  • Interesting that “Dilbert” has been about AI for the last week or so…

  • The article was thought-provoking, as were the comments.
