Bob says:

Now I’m not claiming to be original in what follows, but to define “artificial intelligence” we need to agree on (1) what “artificial” means; and (2) what “intelligence” means.

“Intelligence” first. The problem I see with using human behavior as the benchmark comes from Daniel Kahneman’s Thinking, Fast and Slow. Thinking fast is how humans recognize faces. Thinking slow is how humans work out 34 × 17. The irony here is that thinking slow is the reliable way to make a decision, but thinking fast is what neural networks do. It’s intrinsically unreliable, made worse by its tendency to equate correlation with causation.
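To make the contrast concrete, here’s a toy sketch – the “memory” and numbers are entirely made up – of thinking slow as exact computation versus thinking fast as answering from the nearest remembered example:

```python
# "Thinking slow": deliberate, exact computation. Reliable every time.
def think_slow(a, b):
    return a * b                      # 34 * 17 -> 578

# "Thinking fast": answer from the nearest remembered example, which is roughly
# what a trained network does. Fine when the new case resembles past experience,
# unreliable when it doesn't. (This "memory" is invented for the illustration.)
memory = {(30, 20): 600, (35, 15): 525, (40, 17): 680}

def think_fast(a, b):
    nearest = min(memory, key=lambda k: abs(k[0] - a) + abs(k[1] - b))
    return memory[nearest]            # for (34, 17) this returns 525: plausible, and wrong

print(think_slow(34, 17))   # 578
print(think_fast(34, 17))   # 525
```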

To finish our definition of AI we need to define “artificial” – something that’s less obvious than you might think. “Built by humans” is a good start, but a question: is a hydroponic tomato artificial? Then there’s AI’s current dependence on large language models. They’re convincing, but whoever loads the large language model shapes its responses to a significant extent. In that sense it’s really just a different way to program.

 

Greg says:

AI’s defining feature (IMHO) is that we are training a piece of software to make decisions more or less the way we would make them, using probability models to do so. (It is a machine, after all.)
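A minimal sketch of what that means in practice – the model estimates a probability and a decision rule turns it into an action. The features, weights, and threshold below are hypothetical:

```python
import math

# Hypothetical probability model: how likely is it that the popcorn is done,
# given the seconds since the last pop and the bag temperature?
def probability_done(seconds_since_last_pop, bag_temp_c):
    score = 1.8 * seconds_since_last_pop + 0.05 * (bag_temp_c - 100) - 4.0
    return 1 / (1 + math.exp(-score))   # logistic squash into a probability

# The decision rule: the machine doesn't "know" the answer, it plays the odds.
def decide(p, threshold=0.9):
    return "stop heating" if p >= threshold else "keep heating"

p = probability_done(seconds_since_last_pop=3.0, bag_temp_c=180)
print(round(p, 3), decide(p))   # roughly 0.995 -> "stop heating"
```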

AI has different levels of intelligence and gradations of capability. A “Smart” microwave oven that can learn the optimal power level for popping popcorn isn’t the same thing as what a radiologist might use for automatic feature extraction in cancer detection, but they both might use self-learning heuristics to get smarter. Speaking of self-learning:

Self-learning software isn’t necessarily AI, and AI isn’t necessarily self-learning. If you want a flashback to Proto-AI from 40 years ago, be my guest here. This is fun, but trust me, she isn’t learning anything.
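To make the distinction concrete, here’s a toy sketch of the difference: a self-learning popcorn heuristic that adjusts its preferred power level from feedback, next to an ELIZA-style script that never updates anything. Every name and number here is invented for illustration:

```python
import random

power_levels = [5, 6, 7, 8, 9, 10]
scores = {p: 0.0 for p in power_levels}   # running average "pop quality" per level
counts = {p: 0 for p in power_levels}

def choose_power(explore=0.1):
    # Mostly reuse the best level found so far; occasionally try something else.
    if random.random() < explore or not any(counts.values()):
        return random.choice(power_levels)
    return max(power_levels, key=lambda p: scores[p])

def learn(power, pop_quality):
    # Fold the latest result into the running average. This update step is
    # what makes the microwave "self-learning."
    counts[power] += 1
    scores[power] += (pop_quality - scores[power]) / counts[power]

# A Proto-AI script, by contrast, is a fixed lookup table. It can chat all day
# and will know exactly as much tomorrow as it knows now.
eliza_rules = {"popcorn": "Why do you mention popcorn?",
               "learning": "Tell me more about learning."}
```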

 

Bob says:

I think you win the prize for best AI use case with your microwave popcorn example. I could have used this once upon a time, when I trusted my microwave’s Popcorn setting. Getting rid of the over-nuked popcorn smell took at least a week.

I won’t quibble about your definition of AI. Where I will quibble is the question of whether teaching computers to make decisions as we humans do is a good idea. I mean … we already have human beings to do that. When we train a neural network to do the same thing we’re just telling the computer to “trust its gut” – a practice whose value has been debunked over and over again when humans do it.

Having computers figure out new ways to make decisions, on the other hand, would be a truly interesting feat. If we find ways to meld AI and quantum computing, we might make some progress on this front.

Or else I’m just being Fully Buzzword Compliant.

 

Greg says:

You hit on the big question of whether it’s a good idea or not, and, to sound like Bob Lewis for a second, I think the answer is “It Depends.”

If we are using AI tools that know how to make human-type decisions – for feature extraction from fire department imagery, say, or 911 call center dispatching – but faster and better, the answer is clearly “Yes!”

In these cases, we are gaining a teammate who can help us resolve ambiguity and make better decisions.

To test this, I was thinking about a disabled relative of mine – confined to a wheelchair, with some big limits on quality of life. Used well, AI has the potential to enable this loved one to lead a much more fulfilling life by coming alongside them.

But if we are using AI that encourages our inner sloth and our decline toward Idiocracy, we will couch it as “trusting our computer gut” and suffer the outcomes.

Used poorly, further enabling our collective dopamine addictions? No thanks – we have enough of that.

 

Bob says:

And so, a challenge. If I’m prone to asserting “it depends,” and AI is all about getting computers to behave the way we humans behave, what has to happen so AIs answer most questions with “it depends,” given that this is the most accurate answer to most questions?

A hint: the starting point is what’s known as “explainable AI,” whose purpose is to get AIs to answer the question, “Why do you think so?” That’s useful, but it’s far from the finish line, because “it depends” is about context, not algorithms.
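For what it’s worth, here’s a toy sketch of what an “it depends” answer might look like – the required context items and the decision rule are invented for the example:

```python
def answer(question, context):
    # Hypothetical context the answer depends on.
    required = {"budget", "deadline", "team_size"}
    missing = required - context.keys()
    if missing:
        return ("It depends.",
                f"Why: the answer changes with {sorted(missing)}, which I don't know yet.")
    verdict = "Yes" if context["budget"] > 100_000 and context["team_size"] >= 3 else "No"
    return (verdict, "Why: based on the budget and team size you gave me.")

# With incomplete context, the honest answer is "it depends" -- plus the why.
print(answer("Should we build this in-house?", {"deadline": "Q3"}))
```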