
Trying to get our arms around AI – Bob and Greg


Bob says:

Now I’m not claiming to be original in what follows, but to define “artificial intelligence” we need to agree on (1) what “artificial” means; and (2) what “intelligence” means.

“Intelligence” first. The problem I see with using human behavior as the benchmark comes straight out of Daniel Kahneman’s Thinking, Fast and Slow. Thinking fast is how humans recognize faces. Thinking slow is how humans solve x = 34 × 17. The irony is that thinking slow is the reliable way to make a decision, while thinking fast is what neural networks do. It’s intrinsically unreliable, made worse by its reliance on equating correlation with causation.
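To make that last point concrete, here’s a toy sketch – mine, purely illustrative, assuming Python with NumPy and scikit-learn handy, and with every number invented. A model that learns from a feature that merely correlates with the outcome looks smart right up until the correlation breaks:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

# Training world: "cause" drives the label, and a bystander feature
# happens to track the cause 90% of the time. The model only sees the bystander.
cause = rng.integers(0, 2, size=n)
label = cause
bystander = np.where(rng.random(n) < 0.9, cause, 1 - cause)

model = LogisticRegression().fit(bystander.reshape(-1, 1), label)
print("accuracy while the correlation holds:",
      model.score(bystander.reshape(-1, 1), label))          # ~0.90

# Deployment world: someone breaks the correlation -- the bystander is now
# set independently of the cause -- and the learned "knowledge" evaporates.
new_cause = rng.integers(0, 2, size=n)
new_label = new_cause
new_bystander = rng.integers(0, 2, size=n)
print("accuracy once the correlation breaks:",
      model.score(new_bystander.reshape(-1, 1), new_label))  # ~0.50
```

The model never knew the real cause existed; it just rode the correlation, which is about all “thinking fast” amounts to.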

To finish our definition of AI we need to define “Artificial” – something that’s less obvious than you might think. “Built by humans” is a good start, but it raises a question: is a hydroponic tomato artificial? Then there’s AI’s current dependence on Large Language Models. They’re convincing, but to a significant extent whoever loads the model shapes its responses – which makes an LLM really just a different way to program.

 

Greg says:

AI’s defining feature (IMHO) is that we are training a piece of software to make decisions more or less the way we would make the same decision, using probability models to do so. (It is a machine, after all.)

AI has different levels of intelligence and gradations of capability. A “smart” microwave oven that can learn the optimal power level for popping popcorn isn’t the same thing as what a radiologist might use for automatic feature extraction in cancer detection, but both might use self-learning heuristics to get smarter. Speaking of self-learning:
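Here’s a toy sketch (purely illustrative – plain Python, every number invented) of the kind of self-learning heuristic that microwave might use: try power levels, keep score, and drift toward whatever has worked best so far.

```python
import random

random.seed(42)

# Hypothetical odds that a batch at each power level comes out well
# rather than scorched or under-popped (numbers invented for illustration).
TRUE_SUCCESS = {5: 0.55, 6: 0.70, 7: 0.90, 8: 0.75, 9: 0.40}

tries = {p: 0 for p in TRUE_SUCCESS}
wins = {p: 0 for p in TRUE_SUCCESS}

def pick_power(epsilon=0.1):
    """Epsilon-greedy: usually reuse the best level so far, occasionally explore."""
    if random.random() < epsilon or not any(tries.values()):
        return random.choice(list(TRUE_SUCCESS))
    return max(tries, key=lambda p: wins[p] / tries[p] if tries[p] else 0.0)

for _ in range(500):          # 500 simulated batches of popcorn
    p = pick_power()
    tries[p] += 1
    wins[p] += random.random() < TRUE_SUCCESS[p]

print("batches per power level:", tries)
print("settled on:", max(tries, key=tries.get))
```

Nothing fancy – just bookkeeping – but at the low end of that gradient, bookkeeping is all “getting smarter” needs to mean.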

Self-learning software isn’t necessarily AI, and AI isn’t necessarily self-learning. If you want a flashback to proto-AI from 40 years ago, be my guest here. It’s fun, but trust me, she isn’t learning anything.
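For contrast, the proto-AI in that flashback boils down to something like the fixed pattern-and-response rules below (my rough approximation, not the actual program). The rule list never changes, so no amount of conversation teaches it anything:

```python
import re

# A few canned rules in the old chatbot tradition. Nothing here updates
# with experience -- the program "answers" the same way forever.
RULES = [
    (re.compile(r"\bi feel (.+)", re.I), "Why do you feel {0}?"),
    (re.compile(r"\bi am (.+)", re.I), "How long have you been {0}?"),
    (re.compile(r"\bbecause (.+)", re.I), "Is that the real reason?"),
]

def respond(line: str) -> str:
    for pattern, template in RULES:
        match = pattern.search(line)
        if match:
            return template.format(match.group(1))
    return "Tell me more."

print(respond("I feel stuck on this project"))   # Why do you feel stuck on this project?
```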

 

Bob says:

I think you win the prize for best AI use case with your microwave popcorn example. I could have used this once upon a time, when I trusted my microwave’s Popcorn setting. Getting rid of the over-nuked popcorn smell took at least a week.

I won’t quibble about your definition of AI. Where I will quibble is the question of whether teaching computers to make decisions as we humans do is a good idea. I mean … we already have human beings to do that. When we train a neural network to do the same thing we’re just telling the computer to “trust its gut” – a practice whose value has been debunked over and over again when humans do it.

Having computers figure out new ways to make decisions, on the other hand, would be a truly interesting feat. Maybe if we find ways to meld AI and quantum computing we’ll make some progress on this front.

Or else I’m just being Fully Buzzword Compliant.

 

Greg says:

You hit on the big question of whether it’s a good idea or not, and, to sound like Bob Lewis for a second, I think the answer is “It depends.”

If we are using AI tools that know how to make human-style decisions – feature extraction from fire department imagery, say, or 911 call center dispatching – but faster and better, the answer is clearly “Yes!”

In these cases we are gaining a teammate who can help us resolve ambiguity and make better decisions.

To test this, I was thinking about a disabled relative of mine – confined to a wheelchair, with some big limits on quality of life. Used well, AI has the potential to enable this loved one to lead a much more fulfilling life by coming alongside them.

But if we are using AI that encourages our inner sloth and our decline toward Idiocracy, we will couch it as “trusting our computer gut” and suffer the outcomes.

Used poorly – further enabling our collective dopamine addictions – no thanks, we have enough of that.

 

Bob says:

And so, a challenge. If I’m prone to asserting “it depends,” and AI is all about getting computers to behave the way we humans behave, what has to happen so AIs answer most questions with “it depends,” given that this is the most accurate answer to most questions?

A hint: the place to start is what’s known as “explainable AI,” whose purpose is to get AIs to answer the question, “Why do you think so?” That’s useful, but it’s far from the finish line, because “it depends” is about context, not algorithms.
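For a taste of what that starting point looks like in practice, here’s a small illustration – mine, using a stock dataset and scikit-learn, not anything from a production system. A shallow decision tree can at least print the rules behind its answers, which is a primitive “Why do you think so?”; it still says nothing about whether the answer should depend on context:

```python
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

# A deliberately shallow tree on a stock dataset, so its reasoning fits on a screen.
iris = load_iris()
tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(iris.data, iris.target)

# export_text prints the learned if/then rules: a crude answer to
# "Why do you think so?", though not to "Does it depend?"
print(export_text(tree, feature_names=list(iris.feature_names)))
```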

Comments (6)

  • While I’m rather positive about the opportunities where AI – I actually still prefer the term ML – can be beneficial, I get a sense of déjà vu from all the other “hypes” I’ve come across in my professional life: 15 years ago top management was trying to solve everything by “having an app” (of which 95% were garbage), 25 years ago we had dot-com, in between we were kept busy looking at whether DLT could solve our business problems, and now they want AI everywhere. Bottom line: despite the indisputable long-run advancements, we are likely to see a lot of disappointment and mishaps in the short and medium term.

  • Love this format!!

  • For me, it’s not about making decisions, it’s about plowing through (gee, I want to use an expletive here) tons of data from lots of sources to seek out correlations. Once we’ve reviewed those, we can make decisions about what to do. Automation comes later, when we believe those correlations are right and proper and repeatable (that last one is what I worry about most with GenAI tools). Because I want it to tell me why before we start guessing what action(s) to take…

  • Some of us have been doing this for a long, long time, and we have a much simpler approach. It’s only “artificial intelligence” until we figure out how to program it. After that, it’s just “clever code.” 🙂

    From another perspective, what we now call AI is just another step in using computers to help humans. I remember when we replaced manual installment loan accounting (people updating physical loan ledger sheets with payment data) with punched card payment books, with the payee sending in a 59-column piece of the card with their payment. The cards were then read by…a computer, which then updated the loan balance on a record kept on tape. A couple of decades later we were developing “smart systems” that assisted a customer service representative in responding to a customer complaint letter by asking the rep a few questions and invoking a decision tree behind the scenes. And I remember the magic day when I paid a small fortune for an IBM card that allowed me to dictate to my PC and have speech converted to text.

    I’m sure you have similar experiences, all part of increasingly sophisticated computer assistance for humans. Underneath it all, though, it’s just programming!

  • AI is currently massive back-end databases – petabytes of STOLEN IP – coupled to a poorly written front-end query engine.

    The stolen IP (and boy, is it ever blatantly stolen), which developers laughably claim is ‘fair use’, may be their undoing.

    Although, based on recent court decisions, big wealthy corporations can do no wrong and are legally not liable for anything.

    Should that hold up, I suggest the people who generate worthwhile, quality IP will simply stop doing so except in private, and AI’s output will sink further into the utter mediocrity and stale content where it does its very ‘best’ work.

    So far, all I see is a naked greed money grab by Sam Altman et al. You know the segment is garbage when Elon Musk wants in and Microsoft deploys blatant theft of user data as an ‘AI upgrade’ to their bloated mess of an OS and Skype.

    It’s Clippy without the warmth and charm.

    • Not sure I buy into the “theft” aspect of this, and I say that as someone whose content has been borrowed from time to time. The challenge: IP creators can copyright the specific form of something, but as a general rule we can’t protect our knowledge.

      That, at least, is my understanding, having had (for example) the meaning of my slides incorporated into work being performed by other consultancies. From what I’ve seen, much of the concern about IP misappropriation falls into the realm of learning from published knowledge, not theft of copyrightable works.

