
Trusting electronic guts


Irony fans, rejoice. AI has entered the fray.

More specifically, the branch of artificial intelligence known as self-learning AI, also known as machine learning (specifically its sub-branch, neural networks), is taking us into truly delicious territory.

Before getting to the punchline, a bit of background.

“Artificial Intelligence” isn’t a thing. It’s a collection of techniques mostly dedicated to making computers good at tasks humans accomplish without very much effort — tasks like: recognizing cats; identifying patterns; understanding the meaning of text (what you’re doing right now); turning speech into text, after which see previous entry (what you’d be doing if you were listening to this as a podcast, which would be surprising because I no longer do podcasts); and applying a set of rules or guidelines to a situation so as to recommend a decision or course of action, like, for example, determining the best next move in a game of chess or go.

Where machine learning comes in is in using feedback loops to improve the accuracy or efficacy of the algorithms used to recognize cats and so on.
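
To make that feedback-loop idea concrete, here's a minimal sketch in Python, with made-up data and a deliberately tiny model. It isn't how Google or anyone else actually does it; it's just the guess-measure-adjust cycle in its simplest form:

```python
# A toy feedback loop: guess, measure how wrong the guess was, adjust, repeat.
# Invented data and a three-weight "model," for illustration only.
import numpy as np

rng = np.random.default_rng(42)
X = rng.normal(size=(200, 3))                            # 200 examples, 3 made-up features
y = (X @ np.array([1.5, -2.0, 0.5]) > 0).astype(float)   # pretend 1.0 means "cat"

w = np.zeros(3)                                          # the whole model: three weights
for step in range(1000):
    p = 1.0 / (1.0 + np.exp(-(X @ w)))                   # guess: probability of "cat"
    gradient = X.T @ (p - y) / len(y)                    # feedback: how wrong, and which way
    w -= 0.1 * gradient                                  # adjust, then go around again

print("learned weights:", w)
```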

Along the way we seem to be teaching computers to commit sins of logic, like, for example, the well-known fallacy of mistaking correlation for causation.

Take, for example, a fascinating piece of research from the Pew Research Center that compared the frequencies of men and women in Google image searches of various job categories to the equivalent U.S. Department of Labor percentages (“Searching for images of CEOs or managers? The results almost always show men,” Andrew Van Dam, The Washington Post’s Wonkblog, 1/3/2019).

It isn’t only CEOs and managers, either. The research showed that, “…In 57 percent of occupations, image searches indicate the jobs are more male-dominated than they actually are.”

While we don’t know exactly how Google image searches work, somewhere behind all of this the Google image search AI must have discovered some sort of correlation between images of people working and the job categories those images are typical of. The correlation led to the inference that male-ness causes CEO-ness; also, strangely, bartender-ness and claims-adjuster-ness, to name a few other misfires.
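
Here's a deliberately crude sketch of that misfire, with invented numbers: if the labeled photos a model learns from over-represent men among "CEO" images, the model will faithfully learn the skew, no causation required.

```python
# Invented numbers only. The training sample's "CEO" photos skew male as a
# sampling artifact, not as a fact about who can be a CEO.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(7)
n = 10_000
is_male = rng.random(n) < 0.5
# Biased labeling: P(labeled "CEO" | male) = 0.15, P(labeled "CEO" | female) = 0.05
labeled_ceo = rng.random(n) < np.where(is_male, 0.15, 0.05)

X = is_male.reshape(-1, 1).astype(float)
model = LogisticRegression().fit(X, labeled_ceo)

# The model now rates a photo of a man roughly three times as likely to be a
# "CEO" photo, because that's what the skewed sample told it.
print("P(ceo | male):   %.3f" % model.predict_proba([[1.0]])[0, 1])
print("P(ceo | female): %.3f" % model.predict_proba([[0.0]])[0, 1])
```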

Skewed Google occupation image search results are, if not benign, probably quite low on the list of social ills that need correcting.

But it isn’t much of a stretch to imagine law-enforcement agencies adopting similar AI techniques, resulting in correlation-implies-causation driven racial, ethnic, and gender-based profiling.

Or, closer to home, to imagine your marketing department relying on equivalent demographic or psychographic correlations, leading to marketing misfires when targeting messages to specific customer segments.

I said the Google image results must have been the result of some sort of correlation technique, but that isn’t entirely true. It’s just as possible Google is making use of neural network technology, so called because it roughly emulates how AI researchers imagine the human brain learns.

I say “roughly emulates” as a shorthand for seriously esoteric discussions as to exactly how it all actually works. I’ll leave it at that on the grounds that (1) for our purposes it doesn’t matter; (2) neural network technology is what it is whether or not it emulates the human brain; and (3) I don’t understand the specifics well enough to go into them here.

What does matter about this is that when a neural network … the technical variety, not the organic version … learns something or recommends a course of action, there doesn’t seem to be any way of getting a read-out as to how it reached its conclusion.

Put simply, if a neural network says, “That’s a photo of a cat,” there’s no way to ask it “Why do you think so?”

Okay, okay, if you want to be precise, it’s quite easy to ask it the question. What you won’t get is an answer, just as you won’t get an answer if it recommends, say, a chess move or an algorithmic trade.
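
To make the point literal, here's a tiny hypothetical sketch using scikit-learn. The trained network hands over its verdict readily enough, but the closest thing it has to a "why" is a pile of learned weight matrices, none of which means anything you can interrogate:

```python
# A small neural network answers "cat or not," but its "reasoning" is just
# arrays of learned numbers. Toy data, purely for illustration.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 20))                # pretend these are image features
y = (X[:, 3] * X[:, 7] > 0).astype(int)        # pretend 1 means "cat"

net = MLPClassifier(hidden_layer_sizes=(32, 32), max_iter=2000).fit(X, y)

print("verdict:", net.predict(X[:1]))          # "That's a cat" (or not)
for i, w in enumerate(net.coefs_):             # the closest thing to a "why"
    print(f"layer {i}: weight matrix of shape {w.shape}")
```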

Which gets us to AI’s entry into the 2019 irony sweepstakes.

Start with big data and advanced analytics. Their purpose is supposed to be moving an organization’s decision-making beyond someone in authority “trusting their gut,” to relying on evidence and logic instead.

We’re now on the cusp of hooking machine-learning neural networks up to our big data repositories so they can discover patterns and recommend courses of action through more sophisticated means than even the smartest data scientists can achieve.

Only we can’t know why the AI will be making its recommendations.

Apparently, we’ll just have to trust its guts.

I’m not entirely sure that counts as progress.

Comments (76)

  • Thank you for another great year of IT insights!

    As to “Why do you think so?”, my experience has been that asking humans that question rarely returns a useful reply. Neural networks…seem to act as poorly as humans 🙂 My take on AI has always been that while it doesn’t work, we call it “AI;” when it does what’s expected, we call it “programming.” But I agree with your secondary point — we’re beginning to trust our world to the product of processes we don’t understand and can’t interrogate. I’m not sure that’s much different from trusting the U.S. Congress…

    • “We aren’t dealing with ordinary machines here. These are highly complicated pieces of equipment. Almost as complicated as living organisms. In some cases, they have been designed by other computers. We don’t know exactly how they work.”

      “Westworld” (1973)

  • Keep up the good work, Bob.

  • Bob – you are correct that ML bias matters. A concrete example is the Amazon resume-filtering system, which was de-emphasizing women: https://www.theguardian.com/technology/2018/oct/10/amazon-hiring-ai-gender-bias-recruiting-engine

    But there are lots of tools to help illuminate what complex neural network models are doing. See http://www.heatmapping.org/ to light up the pixels the model is using to characterize your picture as ‘cat’, DALEX (http://smarterpoland.pl/index.php/2018/02/dalex-which-variables-influence-this-single-prediction/), SHAP (https://github.com/slundberg/shap), Anchors (http://sameersingh.org/files/papers/anchors-aaai18.pdf), and LIME (https://homes.cs.washington.edu/~marcotcr/blog/lime/). There’s a rough sketch of the LIME mechanics below.

    HIH,
    Andy
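
    A minimal, hypothetical sketch of the kind of interrogation Andy describes, using the lime package linked above; the model and tabular data are invented, so this shows only the plumbing, not a real image pipeline:

    ```python
    # Ask a model "why do you think so?" about one prediction, LIME-style.
    # Toy tabular data and model; the feature names are made up.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from lime.lime_tabular import LimeTabularExplainer

    rng = np.random.default_rng(0)
    X = rng.normal(size=(500, 4))                          # four invented features
    y = (X[:, 0] + 0.5 * X[:, 2] > 0).astype(int)          # label driven by features 0 and 2

    model = RandomForestClassifier(n_estimators=100).fit(X, y)

    explainer = LimeTabularExplainer(
        X, feature_names=["f0", "f1", "f2", "f3"], class_names=["no", "yes"]
    )
    explanation = explainer.explain_instance(X[0], model.predict_proba, num_features=4)
    print(explanation.as_list())   # (feature condition, weight) pairs for this one prediction
    ```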

    • While I don’t pretend to have sufficient sophistication to discuss this in depth and with authority, it seems to me we’re dealing with the difference between accessing the neural net’s logic and being able to validate it. It’s the difference between “these are the neurons that are firing” and “Here’s why you should trust that those are the right neurons to fire.”

      This doesn’t make the neural network inferior to human logic. Quite the opposite – try explaining how it is, when you look at a picture of a cat … or, for that matter, at a cat … how you know that what you’re looking at is, in fact, a cat.

  • Even though I retired from my computer managerial/supervisor/technician responsibilities over two years ago, I still enjoy reading your musings. Please keep up the good work!

  • Bob, keep up the good work for at least another year.

    BTW, when I clicked the comments link in your email, it sent me to the comments section of your last column of 2018. Perhaps some neural network somewhere decided that I wanted that column instead of this one, since I posted two comments for that one!

  • You have to have access, and great skill, to be able to decide whether to trust an algorithm.

  • Hi Bob,

    A very insightful article. Even though I was aware of the impenetrable nature of ML logic, I wouldn’t have thought to frame it as the machine trusting its gut. But that’s a wonderfully apt analogy. At best, we can try to reassure ourselves that we’re trusting a gut more reliable than any (or at least, most) human’s.

    Looking forward to another year,
    Jeff

  • Thanks again for thoughtful commentary, Bob.
    Looking at the progress of AI-driven cars: they may be up to the level of a beginning teenager, with maybe more progress this year.
    The problem is AI doesn’t know what to do in weird situations like bad weather or pedestrians acting unpredictably, and can’t anticipate what happens in blind spots behind other vehicles. Is AI smart enough to know to stay home in bad weather?

  • I really enjoy your posts, Bob. It’s one of the very few blogs I read on a regular basis.

  • DARPA and others are looking into Explainable AI (XAI) algorithms that can be interrogated. That plus rigorous independent testing should help some, but it’s difficult to stop people from trusting computed answers too much.

    I really enjoy your column and insights, and look forward to another year. Thank you

  • Hi Bob, I read your weekly musings religiously. If I don’t learn something at least I know I will be aptly entertained. Keep it up!

  • I would like to continue to read your ideas. They do get me to think and question what is trustworthy. It may keep me a bit more on my toes.

  • Keep up the good work. I’ve been reading you for as long as I’ve worked in IT (18+ years). There is nothing else that has stayed with me for so long, work-wise at least!

  • As requested, here’s a note to let you know I’m paying attention.

  • We already have this problem with black-box algorithms that are used to help judges in sentencing by offering estimates of the likelihood that a person will re-offend. These appear to systematically demonstrate a racial bias.

  • Thanks Bob. I always look forward to your columns (including the rebroadcasts 🙂 ). I think AI is just another tool to be used by a person to help make decisions. The real trap here is a person’s gut trusting the AI’s guts…

    -Dave

  • Hi Bob,

    I don’t usually leave comments but I look forward to reading your insights so please keep them coming! Here’s to a great 2019!

    Maria

  • Hi, Bob! You asked if I’m paying attention, and I am. Long-time reader, keep up the good work!

  • This finally clears up the Railroad Conductor male/female controversy.

    OK, on a more serious note. From my personal experience, you are spot on Bob. AI/Predictive Modeling has been the hot buzzword. Heck, I learned LISP in college in 1983 and was told that AI would take over the world.

    My personal experiences with Predictive Modeling have been less than exciting. First, you must have ALL of the data points and then admit what percentage of accuracy you can live with. Of course where you get that data is also important since your own data may be skewed. Then you need to convince humans to follow the model.

    In my industry, companies are already starting to bail on models that have led them astray. Problem is, it took years to figure that out.

    On the AI front, I keep thinking autonomous driving – recognizing the cat before running it over. Cars are attempting to see what humans do, and many now include automatic adjustments for lane departure. The latest BMW X5 has an oops: in certain situations it will veer hard the wrong way, and BMW is not alone.

    All of this stuff may well get better, but business seems to be driven by FOMO more than ever.

    Happy New Year Bob!

  • I look forward to your IS Survivor newsletter each week! Thank you so much for your effort–I appreciate your insights!

    Regards,
    Mike

  • Bob,

    AI is genuine stupidity.

    we need IA = Intelligence Amplification.

    I can see putting AI in charge of something important and it screwing things up terribly.

    Already biz is using correlation to infer causation, which harms the innocent person who is mislabelled.

    E.g., buy something at the store and they ASSume that it tells something about you. Doesn’t matter if you bought it for someone else. Google for something medical and they ASSume that you have that affliction and will jack up your insurance rates.

  • I’ve been following you since the early Infoworld days. Always interesting and thought-provoking. Keep the joint running! Thanks, Nick

  • Still enjoying reading your wisdom after all these years.

  • Much thanks for another great year. Your column (and Prince Valiant) are my weekly “must reads.”

  • I like your comments and would like you to continue.

  • Bob,

    One of your best. Thanks.

    Jim

  • I’m still listening.

    Thanks, Bob.

  • I am paying attention.

  • Heh, irony indeed. Trusting the guts of a machine. Enjoyed the column.

  • Not yet time to throw in the towel, Bob. I look forward to your musings as they challenge me to think more critically about the challenges I face and about the flood of information that crashes over us constantly. Thank you for your efforts.

  • Hi Bob

    I read each and every one of your columns, don’t always agree but always something to think on. Please keep it up.

  • I have been paying attention for years – and hope to receive your thoughts and comments for many more moons. Thank you.

  • Thanks again for your thought provoking articles.

    Have a great New Year!

  • We’re still here Bob and so glad you are too!

  • Keep it up. It is the highlight of my week to read your columns. Especially the ones that deal with balancing keeping the joint running with developing new capabilities for the business. In a one-man IT environment that becomes more than a little tricky. The number of people in “small” IT wearing many hats greatly outnumber those in large settings.

  • Hi Bob, I have been studying AI since even before you started KJR.

    I think we make a mistake when we expect AI to do what humans do. What it can do is fill in for typical human weaknesses, and recommend things that humans have missed.

    For example, we humans are guided by our past experience and our fixed opinions, and this interferes with what we see before us. Doctors unconsciously try to fit symptoms into the syndrome they expect, and do not see or consider other symptoms that might lead to a different diagnosis. AI, on the other hand, cannot see the patient or consider things that are not recorded in its database. Even a neural network AI is useful in this situation, though, if it reports that the data it has been given suggests a disease or a treatment that the doctor had not considered, or had not known about. I do not believe it is advisable to leap to letting AI make decisions in these cases, although it seems to me that the medical and pharmaceutical establishment is eager to do so. In this case, doctors should make the decision, just as ditch-diggers make the decision on when to use the machine (the excavator) and when to dig or shore up.

    One of the early applications of “expert systems” was a project at GE, which had an essential train-service expert retiring. They tried to capture his knowledge by following him through his diagnostic process, identifying what he looked at and the possible things he might find, then following the paths to diagnose and repair the electric train engine. Of course this was a very structured field, but that project appealed to me because I could see someone working with this service person saying “O.K., what are you looking at? Why are we looking right here now? How would you know whether this part is o.k. or not? What are the ways it could need to be fixed?” As you can see, this would be a very long process. I learned this when I tried to develop a similar system to capture the knowledge of a person that we would soon lose. He and I were very disappointed that the process turned out to be too hard to implement in the time we had.

  • Hi Bob, just a note to let you know I am still reading, and appreciated the article this week. Keep ’em coming!

  • Bob, definitely keep up the good work. I’ve been a faithful reader for many years!

    Apropos of AI and guts, the issues you write about this time are on point in many applications/fields. For instance, courts are applying AI routines in growing areas, such as recommending bail/release and online dispute resolution of smaller cases. The risk of bias is real, and a definite concern.

  • Hi Bob, I’ve been reading your column/blog for 20 plus years. Please keep them coming. I always find insight in your systems view of IT and life. I attribute a good bit of that to your spending time in graduate school around electric fish and prairie chickens rather than becoming a narrow expert of a particular technology solution.

  • I am currently reading Cathy O’Neil’s “Weapons of Math Destruction”, which provides many examples of overuse and over-confidence in the results of AI and mathematical models in general. The broad theme is as Bob describes, that hidden biases skew the results of any model or algorithm.

    I recommend it.

  • Okay, I’m definitely reading! I’ve been out of IT for 8 years now (thank goodness!) but your management advice applies just as well to other professions. Besides, I still have to look after IT for my own company — just not full time — and every now and then you give some good technical advice along with the management stuff (and it’s not Microsoft-centric like too many blogs are)!

  • Bob, I am a first-time reader, recommended by a friend. Wonderful column! I recently wrote a paper on problems with using AI-based tools in US trial courts and am presenting it at a couple of conferences. I could have saved myself a lot of work and just read or handed out copies of your column.
    I urge you to keep writing columns with this comfortable style and brevity.

    • I suspect that, in spite of the wonderful compliment, your presentations and paper had quite a lot more depth than my paean to AI irony.

  • Bob – When I forward your columns on to friends and colleagues (which I do quite often), I say something like, “I liked this guy’s ideas well before I found out we share an alma mater.”

  • I need to know what you’re thinking about and I love the way you can put ideas on “paper”. As long as you keep writing it, I will keep reading it.

    Thanks so very much!

  • I celebrated 1.5 years of retirement yesterday, Bob, and still read your columns. I also share them with my less chronologically blessed former colleagues, while encouraging them to subscribe on their own. Keep on keepin’ on, Bob.

  • AI making management decisions just doesn’t make any sense to me. Big data can be good at ferreting out obscure, but important patterns. Yet determining which of these patterns deserve a closer look for a given organization will always be an art, depending on the resources, people, goals, and possibilities available at the time of decision making.

    I have to wonder how many people have understood that Deep Blue’s victory over chess champ Garry Kasparov was because it had access not only to all of the moves made by the strongest grandmasters of the last century, but also to all of the consequences of all of those moves. It wasn’t looking for a possible pattern of unknown value; it was looking for the best of the moves that were known to succeed.

    I have no reason to believe that this kind of database could ever be constructed for a human organization.

    As to your writing efforts, it seems that the people have spoken. Please, continue.

  • Bob – left the IT arena 15 years ago, and have been retired 7 of those, but I still look forward to and read your KJR articles. Keep up the great work.

  • Bob, I greatly appreciate your unique insights, sharp wit, and willingness to speak your mind. I may not totally agree with you on some points but I always feel enriched by your commentary. I look forward to continuing to read your insights for many years to come.
    In short, please keep up the good work.
    Best Regards, SP

  • Bob, keep it up.

  • Hi Bob, I have always been an avid reader and admirer of yours… Neural networks are not god after all? Oh dear, what will we do now? Expert systems can be a help, of course (after all, they are really a database of knowledge, nothing more). But an impenetrable and inanimate artificial intelligence ‘engine’ does not seem at all trustworthy to me.
