Experts don’t just know stuff. Well, yes, they do know stuff, but more than that, they’re immersed in it. True experts, that is, live in the stuff’s gestalt.

Which gets us to David Brooks’ take on the subject of artificial intelligence. It matters to you, not because Brooks is misinformed, but because he lacks the deep background … the gestalt … of computing in general, let alone an awareness of what those of us who have toiled in IT’s trenches over the years recognize as familiar misconceptions.

No, I take that back. Brooks’ take on the subject is hazardous to business IT’s health because he not only lacks the gestalt but also has the ear of business executives – often, more so than the CIO and IT’s expert staff do.

Start here, where he cites the Canadian scholar Michael Ignatieff regarding human intelligence: “What we do is not processing. It is not computation. It is not data analysis. It is a distinctively, incorrigibly human activity that is a complex combination of conscious and unconscious, rational and intuitive, logical and emotional reflection.”

Now I don’t mean to be snide or nuthin’, but explaining that human thinking is an “incorrigibly human activity” isn’t an explanation at all. It’s just repetition.

Then there’s this: “Sometimes I hear tech people saying they are building machines that think like people. Then I report this ambition to neuroscientists and their response is: That would be a neat trick, because we don’t know how people think.”

Clever. But the most active realm of AI research and development is built on a foundation of neural networks, which were devised to mimic a model of human neural functioning.

Which leads directly to one of the most important aspects of artificial intelligence – one Brooks misses entirely: for AI to be useful it should do just about anything but mimic human intelligence. Read Daniel Kahneman’s mind-blowing Thinking, Fast and Slow and you’ll understand that the Venn diagram circles showing “What humans do” and “Get a useful answer” have so little overlap that it’s only because we humans are so numerous that there’s any hope of us getting right results of any kind.

Then there’s this: “A.I. can impersonate human thought because it can take all the ideas that human beings have produced and synthesize them into strings of words or collages of images that make sense to us. But that doesn’t mean the A.I. ‘mind’ is like the human mind.”

No? Taking ideas other humans have produced and synthesizing them into new forms sounds a whole lot like how humans think … certainly, how experts like David Brooks (and myself, for that matter) arrive at many of the ideas we share. As someone once said, stealing an idea from one person is plagiarism; stealing from three is research.

Brooks isn’t so much wrong as, to borrow someone else’s phrase, “insufficiently right.” What he gets right (in my awesomely humble opinion) is that AI is, and will be even more, a tool that makes humans more effective. What he misses is that the most optimistic expectation about AI envisions humans as cyborgs – as computer-enhanced humanity, with AI taking on a role in this duality that’s far broader than a mere tool’s.

But where Brooks’ essay scores an F- is in his prediction that A.I. isn’t “… going to be as powerful as many of its evangelists think it will be.”

What’s unfortunate, and is up to you to fix, is that when the business executives who comprise your executive leadership team want to understand AI, they’re more likely to encounter and buy into something written by David Brooks than by, say, someone like Geoffrey Hinton. They’re more likely, that is, to buy into plausible-sounding content from a well-known personality than from someone less arresting but with deeper exposure to whatever the subject happens to be.

AI is built on information technology. Those of us who live in IT’s gestalt know, deep in our bones, that IT’s capabilities increase in R&D time, which is to say, fast and accelerating.

Human capabilities, in contrast, are increasing at evolutionary rates of change, which is to say, much slower.

Unless we achieve a state of computer-enhanced humanity, AI can’t help but surpass us. The question that matters isn’t whether.

It’s when.

Greg Says:

SAP is trying to build an open source community, the article reads. Considering how many open source communities there are, and how much they propel software, infrastructure, operating systems, and management tools, I can’t say I’m surprised that SAP is trying to build one. In fact, SAP open sourced its core database decades ago, and doesn’t have a bad record in fostering collaborative communities.

The story goes, however, that SAP is struggling to generate interest. I think the basic problem all open source communities face is balancing self-interest (“What’s in it for me?”) against free riders of varying degrees of ethical chutzpah. These problems have been around since the beginning of the movement, and there isn’t an easy fix.

In short: everyone who productively contributes to an open source community feels a little taken advantage of, at least in the short run.

Bob Says:

I’m old enough to … well, to know better than to start my side of a dialog with “I’m old enough to … ” And yet, I am. Old enough, that is, to remember when the reaction to open source fell somewhere on a continuum with “being part of something important” at one end and “a bunch of commies” at the other.

While the two sides were busy disparaging each other, those with a more business-like mentality figured out that Gillette had long ago paved the way to open source prosperity with its “give away the razor and sell the blades” business model.

But to have razors to give away, the world of IT needed communities to create them – communities large enough to be self-reinforcing, but not so large that incompetent developers could degrade the product.

Which gets me to a point about free riders: Look at them with glass-colored glasses and it’s hard to differentiate between a free rider and a customer.

One more point about communities: Sure, they’re a collection of roles people take on to build and enhance the product. But they also create a sense of belonging. They are, in a sense, a tribe.

Which gets me to a point: It’s unsurprising that SAP is finding it hard to charter yet another open source community. There are already so many in play that I’m guessing anyone wanting to sign up will have to bow out of a community they’re already part of.


Greg Says:

I love your point about free riders actually being customers. And this is where the open source world struggles: the price may or may not be free, but the value is significant, or the solution wouldn’t exist.

“What’s in it for me?” really should be thought of, in the open source sense, as “Does the work I do in this project offer me more value than if I didn’t participate?” rather than “Are there others who will benefit from my work?”

And for the average customer, who may just want to download a great extension that somebody else created and gifted to the community, this is a pretty easy question.

Getting back to SAP (and any other software publisher that wishes to build an engaged, active community): their marketing team has its work cut out for it, demonstrating friendship, gratitude, and respect, however the community is constituted.