Experts don’t just know stuff. Well, yes, they do know stuff, but more than that, they’re immersed in it. True experts, that is, live in the stuff’s gestalt.
Which gets us to David Brooks’ take on the subject of artificial intelligence. It matters to you, not because Brooks is misinformed, but because he lacks the deep background … the gestalt … of computing in general, not to mention the ability to spot what those of us who have toiled in IT’s trenches over the years recognize as familiar misconceptions.
No, I take that back. Brooks’ take on the subject is hazardous to business IT’s health not only because he lacks the gestalt, but because he also has the ear of business executives – often, more so than the CIO and IT’s expert staff.
Start here, where he cites the Canadian scholar Michael Ignatieff regarding human intelligence: “What we do is not processing. It is not computation. It is not data analysis. It is a distinctively, incorrigibly human activity that is a complex combination of conscious and unconscious, rational and intuitive, logical and emotional reflection.”
Now I don’t mean to be snide or nuthin’, but explaining that human thinking is an “incorrigibly human activity” isn’t an explanation at all. It’s just repetition.
Then there’s this: “Sometimes I hear tech people saying they are building machines that think like people. Then I report this ambition to neuroscientists and their response is: That would be a neat trick, because we don’t know how people think.”
Clever. But the most active realm of AI research and development is built on a foundation of neural networks, which were devised to mimic a model of human neural functioning.
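For readers who haven’t looked under the hood, the artificial “neuron” those networks are built from is nothing mysterious: a weighted sum of inputs passed through an activation rule, loosely modeled on a biological neuron’s all-or-nothing firing. Here’s a minimal sketch of that idea (the function name, weights, and threshold are my own illustrative choices, not anyone’s production code):

```python
# A single artificial neuron in the McCulloch-Pitts / perceptron mold:
# sum the weighted inputs and "fire" when the total reaches a threshold,
# loosely mimicking a biological neuron's all-or-nothing firing.

def neuron(inputs, weights, threshold):
    """Return 1 (fire) if the weighted sum of inputs meets the threshold."""
    activation = sum(x * w for x, w in zip(inputs, weights))
    return 1 if activation >= threshold else 0

# With these hand-picked weights the neuron computes logical AND:
for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", neuron([a, b], [0.6, 0.6], 1.0))
```

Stack enough of these, let an algorithm pick the weights instead of a human, and you have the foundation of modern deep learning – a crude model of neural functioning, but a model of it nonetheless.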
Which leads directly to one of the most important aspects of artificial intelligence – one Brooks misses entirely: for AI to be useful it should do just about anything but mimic human intelligence. Read Daniel Kahneman’s mind-blowing Thinking, Fast and Slow and you’ll understand that the Venn diagram circles showing “What humans do” and “Get a useful answer” have so little overlap that it’s only because we humans are so numerous that there’s any hope of us getting right results of any kind.
Then there’s this: “A.I. can impersonate human thought because it can take all the ideas that human beings have produced and synthesize them into strings of words or collages of images that make sense to us. But that doesn’t mean the A.I. ‘mind’ is like the human mind.”
No? Taking ideas other humans have produced and synthesizing them into new forms sounds a whole lot like how humans think … certainly, how experts like David Brooks (and myself, for that matter) arrive at many of the ideas we share. As someone once said, stealing an idea from one person is plagiarism; stealing from three is research.
Brooks is less wrong than, as someone else once said, “insufficiently right.” What he gets right (in my awesomely humble opinion) is that AI is, and will increasingly be, a tool that makes humans more effective. What he misses is that the most optimistic expectations about AI envision humans as cyborgs – as computer-enhanced humanity, with AI playing a far broader role in this duality than that of a mere tool.
But where Brooks’ essay scores an F- is his prediction that A.I. isn’t “… going to be as powerful as many of its evangelists think it will be.”
What’s unfortunate, and is up to you to fix, is that when the business executives who comprise your executive leadership team want to understand AI, they’re more likely to encounter and buy into something written by David Brooks than by, say, someone like Geoffrey Hinton. They’re more likely, that is, to buy into plausible-sounding content from a well-known personality than from someone less arresting but with deeper exposure to the subject at hand.
AI is built on information technology. Those of us who live in IT’s gestalt know, deep in our bones, that IT’s capabilities increase in R&D time, which is to say, fast and accelerating.
Human capabilities, in contrast, are increasing at evolutionary rates of change, which is to say, much slower.
Unless we achieve a state of computer-enhanced humanity, AI can’t help but surpass us. The question that matters isn’t whether.
It’s when.
Well, my first comment is gone. It was a nice essay! But you are right, Brooks holds too much sway, and he writes well enough to fool the average person or executive. Enjoy the Fall!
Just as you write this, Bob, Gartner proclaims parts of AI as “doomed”. The same Gartner that was talking about two-year paybacks a month ago.
https://www.theregister.com/2024/09/10/brute_force_ai_era_gartner/
For the first time in my career, I am with the Gartner guy. Yes, I have been drug tested.
Like you, I am not all-in with what Brooks writes. But I remain an AI skeptic. The current brute force AI is not really AI. Other “AI” remains in early stages, just ask my self-driving car that nearly killed me on a freeway (or the numerous Teslas that have crashed).
Once Nvidia stock crashes and all of the huge planet warming data centers are powered off, maybe we will start to see some actual creative AI. I just know my LISP programming skills will be needed someday!
As I think I pointed out somewhere or other, the problem we all have with this sort of thing is that we don’t have useful definitions for “artificial” or “intelligence.” Once upon a time, a goal of AI was finding objects in photographs. Now computers can find objects in photographs. Likewise speech recognition and figuring out what a chunk of text might mean. Whether a computer figures these things out the way humans would figure them out really doesn’t matter.
Somewhere in all this we need to take quantum computing into account and how it will change the techniques used to solve particular AI goals, too.
Not to mention asking the world’s punditocracy to remember that once upon a time “expert systems” were considered to be AI. Once they remember, your LISP skills will recover their value!
The explanations and definitions will not fit on a two-page, three-bullets-per-page PowerPoint presentation crafted for executives.
My favorite CTO said that when you’re briefing senior folks and you say, “this is a cheap way to eliminate Lake Michigan’s ability to provide drinking water to the city of Chicago,” the only word they’re going to hear is CHEAP. They will hear absolutely nothing else in the two-hour meeting you have on making sure people have drinking water. If they’re polite, they will sit there, tolerate whatever it is you’re saying, and totally tune you out; if they’re efficiency freaks, they’ll say “meeting over” and walk out. In my job a bunch of the CTOs and CIOs constantly ask me to explain to the executives that AI doesn’t exist, so that they can stop wasting time on it and deal with more serious problems like information security.
I’m thinking of modifying the Turing test to say that if AI can tell me, based on my personal preferences, whether I will enjoy a chili dog when I walk up to a hot dog vendor, we will have a discussion. Of course I could find a four-year-old to do it for free, but the venture capitalists aren’t going to accept that.