Experts don’t just know stuff. Well, yes, they do know stuff, but more than that, they’re immersed in it. True experts, that is, live in the stuff’s gestalt.
Which gets us to David Brooks’ take on the subject of artificial intelligence. It matters to you not because Brooks is misinformed, but because he lacks the deep background … the gestalt … of computing in general, and so he repeats what those of us who have toiled in IT’s trenches over the years recognize as familiar misconceptions.
No, I take that back. Brooks’ take on the subject is hazardous to business IT’s health because he not only lacks the gestalt but also has the ear of business executives – often, more so than the CIO and IT’s expert staff.
Start here, where he cites the Canadian scholar Michael Ignatieff regarding human intelligence: “What we do is not processing. It is not computation. It is not data analysis. It is a distinctively, incorrigibly human activity that is a complex combination of conscious and unconscious, rational and intuitive, logical and emotional reflection.”
Now I don’t mean to be snide or nuthin’, but explaining that human thinking is an “incorrigibly human activity” isn’t an explanation at all. It’s just repetition.
Then there’s this: “Sometimes I hear tech people saying they are building machines that think like people. Then I report this ambition to neuroscientists and their response is: That would be a neat trick, because we don’t know how people think.”
Clever. But the most active realm of AI research and development is built on a foundation of neural networks, which were devised to mimic a model of human neural functioning.
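To make the point concrete: the artificial “neuron” underlying those networks is a deliberately crude mimic of the biological model, a weighted sum of inputs squashed through a nonlinear activation. Here’s a minimal sketch (the weights and inputs are made-up illustration values, not anything from Brooks’ essay or any real network):

```python
import math

def neuron(inputs, weights, bias):
    """One artificial neuron: a crude mimic of the biological model."""
    # Weighted sum of incoming signals, loosely analogous to
    # signals arriving at a biological neuron's dendrites
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    # Sigmoid activation squashes the result into (0, 1),
    # loosely analogous to the neuron's firing strength
    return 1 / (1 + math.exp(-total))

# Made-up example values
print(neuron([1.0, 0.5], [0.8, -0.4], 0.1))  # ~0.668
```

Stack enough of these in layers and train the weights, and you get the networks powering today’s AI – which is exactly why “we don’t know how people think” is a weaker gotcha than it sounds.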
Which leads directly to one of the most important aspects of artificial intelligence – one Brooks misses entirely: for AI to be useful it should do just about anything but mimic human intelligence. Read Daniel Kahneman’s mind-blowing Thinking, Fast and Slow and you’ll understand that the Venn diagram circles for “What humans do” and “What yields a useful answer” overlap so little that it’s only because we humans are so numerous that any of us get right answers at all.
Then there’s this: “A.I. can impersonate human thought because it can take all the ideas that human beings have produced and synthesize them into strings of words or collages of images that make sense to us. But that doesn’t mean the A.I. ‘mind’ is like the human mind.”
No? Taking ideas other humans have produced and synthesizing them into new forms sounds a whole lot like how humans think … certainly, how experts like David Brooks (and myself, for that matter) arrive at many of the ideas we share. As someone once said, stealing an idea from one person is plagiarism; stealing from three is research.
Brooks isn’t so much wrong as, in someone else’s memorable phrase, “insufficiently right.” What he gets right (in my awesomely humble opinion) is that AI is, and will increasingly be, a tool that makes humans more effective. What he misses is that the most optimistic expectation for AI envisions humans as cyborgs – as computer-enhanced humanity, with AI playing a far broader role in this duality than a mere tool.
But where Brooks’ essay scores an F- is in his prediction that A.I. isn’t “… going to be as powerful as many of its evangelists think it will be.”
What’s unfortunate, and is up to you to fix, is this: when the business executives who make up your executive leadership team want to understand AI, they’re more likely to encounter, and buy into, something written by David Brooks than something written by, say, Geoffrey Hinton. They’re more likely to accept plausible-sounding content from a well-known personality than from someone less arresting but with deeper exposure to the subject at hand.
AI is built on information technology. Those of us who live in IT’s gestalt know, deep in our bones, that IT’s capabilities increase in R&D time, which is to say, fast and accelerating.
Human capabilities, in contrast, are increasing at evolutionary rates of change, which is to say, much slower.
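The arithmetic behind that contrast is unforgiving. A toy illustration (the growth rates below are made up for illustration; the column’s point doesn’t depend on the exact numbers, only on compounding versus creeping):

```python
def compounding(start, rate, years):
    """Capability that compounds annually, like R&D-driven tech."""
    return start * (1 + rate) ** years

def linear(start, step, years):
    """Capability that inches along, like evolutionary change."""
    return start + step * years

# Machines start far behind (1 vs. 10) but compound at a
# made-up 40%/year while human capability barely moves.
for y in (0, 10, 20, 30):
    print(y, round(compounding(1.0, 0.40, y), 1),
             round(linear(10.0, 0.05, y), 1))
```

Whatever rates you plug in, any compounding curve eventually crosses any linear one. The only thing the rates change is the year it happens.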
Unless we achieve a state of computer-enhanced humanity, AI can’t help but surpass us. The question that matters isn’t whether.
It’s when.