Bob says:

Now I’m not claiming to be original in what follows, but to define “artificial intelligence” we need to agree on (1) what “artificial” means; and (2) what “intelligence” means.

“Intelligence” first. The problem I see with using human behavior as the benchmark comes from Daniel Kahneman’s Thinking, Fast and Slow. Thinking fast is how humans recognize faces. Thinking slow is how humans solve x=34*17. The irony is that thinking slow is the reliable way to make a decision, yet thinking fast is what neural networks do. It’s intrinsically unreliable, made worse by its tendency to equate correlation with causation.

To finish our definition of AI we need to define “artificial,” which is less obvious than you might think. “Built by humans” is a good start, but a question: Is a hydroponic tomato artificial? Then there’s AI’s current dependence on Large Language Models. They’re convincing, but whoever loads the large language model shapes its responses to a significant extent. In that sense it’s really just a different way to program.

 

Greg says:

AI’s defining feature (IMHO) is that we are training a piece of software to make a decision more or less the way we would make the same decision, using probability models to do so. (It is a machine, after all.)
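To make “using probability models to make a decision” concrete, here is a minimal sketch in Python. The loan-style features, the weights, and the 0.5 cutoff are all invented for illustration, not anyone’s real model; a production system would learn the weights from data. The point is simply that the decision falls out of a probability rather than a hand-written rule.

```python
import math

# Hypothetical weights; a real system would learn these from training data.
WEIGHTS = {"income_ratio": 2.0, "late_payments": -1.5, "years_at_job": 0.3}
BIAS = -0.5

def approve_probability(features: dict) -> float:
    """Toy logistic model: weighted evidence squashed into a probability."""
    score = BIAS + sum(w * features.get(name, 0.0) for name, w in WEIGHTS.items())
    return 1.0 / (1.0 + math.exp(-score))

def decide(features: dict) -> str:
    p = approve_probability(features)
    return f"approve (p={p:.2f})" if p >= 0.5 else f"decline (p={p:.2f})"

print(decide({"income_ratio": 1.2, "late_payments": 0, "years_at_job": 4}))
print(decide({"income_ratio": 0.4, "late_payments": 3, "years_at_job": 1}))
```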

AI has different levels of intelligence and gradations of capability. A “smart” microwave oven that can learn the optimal power level for popping popcorn isn’t the same thing as what a radiologist might use for automatic feature extraction in cancer detection, but they both might use self-learning heuristics to get smarter. Speaking of self-learning …

Self-learning software isn’t necessarily AI, and AI isn’t necessarily self-learning. If you want a flashback to proto-AI from 40 years ago, be my guest here. This is fun, but trust me, she isn’t learning anything.
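The microwave, on the other hand, could be learning. Here is a minimal sketch of the kind of self-learning heuristic the popcorn example implies: simple trial and error over power levels. The power settings, the feedback function, and the update rule are all stand-ins I’ve invented; a real appliance would take its feedback from sensors (pop timing, steam) rather than from a scripted function.

```python
import random

POWER_LEVELS = [6, 7, 8, 9, 10]              # candidate power settings
estimates = {p: 0.0 for p in POWER_LEVELS}   # learned score for each setting
counts = {p: 0 for p in POWER_LEVELS}

def popcorn_feedback(power: int) -> float:
    """Stand-in for the real world: quality peaks at level 8 and drops off around it."""
    return 1.0 - 0.4 * abs(power - 8) + random.uniform(-0.1, 0.1)

for batch in range(200):
    # Epsilon-greedy: mostly reuse the best-known setting, occasionally explore.
    if random.random() < 0.2:
        power = random.choice(POWER_LEVELS)
    else:
        power = max(estimates, key=estimates.get)
    reward = popcorn_feedback(power)
    counts[power] += 1
    # Nudge the running average for this setting toward the observed result.
    estimates[power] += (reward - estimates[power]) / counts[power]

print("Learned best power level:", max(estimates, key=estimates.get))
```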

 

Bob says:

I think you win the prize for best AI use case with your microwave popcorn example. I could have used this once upon a time, when I trusted my microwave’s Popcorn setting. Getting rid of the over-nuked popcorn smell took at least a week.

I won’t quibble about your definition of AI. Where I will quibble is the question of whether teaching computers to make decisions as we humans do is a good idea. I mean … we already have human beings to do that. When we train a neural network to do the same thing we’re just telling the computer to “trust its gut,” a practice that has been debunked over and over again when humans do it.

Having computers figure out new ways to make decisions, on the other hand, would be a truly interesting feat. Maybe if we find ways to meld AI and quantum computing we might make some progress on this front.

Or else I’m just being Fully Buzzword Compliant.

 

Greg says:

You hit on the big question of whether it is a good idea or not, and, to sound like Bob Lewis for a second, I think the answer is “It depends.”

If we are using AI tools that make human-type decisions, only faster and better, for feature extraction from fire department imagery or 911 call center dispatching, the answer is clearly “Yes!”

In these cases, we are gaining a teammate who can help us resolve ambiguity and make better decisions.

To test this, I was thinking about a disabled relative of mine, confined to a wheelchair and with some big limits on quality of life. Used well, AI has the potential to enable this loved one to lead a much more fulfilling life by coming alongside them.

But if we are using AI that encourages our inner sloth and our decline toward Idiocracy, we will couch it as “trusting our computer gut” and suffer the outcomes.

Used poorly, further enabling our collective dopamine addictions? No thanks, we have enough of that.

 

Bob says:

And so, a challenge. If I’m prone to asserting “it depends,” and AI is all about getting computers to behave the way we humans behave, what has to happen so AIs answer most questions with “it depends,” given that this is the most accurate answer to most questions?

A hint: The starting point is what’s known as “explanatory AI.” Its purpose is to get AIs to answer the question, “Why do you think so?” That’s a useful starting point, but it’s far from the finish line, because “it depends” is about context, not algorithms.
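Here is a minimal sketch of what I mean, with hypothetical project-risk factors and weights: a model that can report each factor’s contribution when asked “Why do you think so?” and that answers “it depends” when the context it needs isn’t there.

```python
def assess(features: dict) -> tuple:
    """Toy explanatory model: a verdict, plus the reasons behind it."""
    weights = {"budget_overrun": -0.6, "sponsor_engaged": 0.8, "scope_stable": 0.5}

    missing = [name for name in weights if name not in features]
    if missing:
        # "It depends" is the honest answer when the deciding context is absent.
        return "it depends", [f"missing context: {name}" for name in missing]

    contributions = {name: w * features[name] for name, w in weights.items()}
    verdict = "likely to succeed" if sum(contributions.values()) > 0 else "at risk"
    # The per-factor contributions are the answer to "Why do you think so?"
    reasons = [f"{name}: {value:+.2f}" for name, value in contributions.items()]
    return verdict, reasons

print(assess({"budget_overrun": 1.0, "sponsor_engaged": 1.0, "scope_stable": 1.0}))
print(assess({"budget_overrun": 1.0}))  # not enough context -> "it depends"
```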

Enterprise software companies all promise a better future. The absolute core of their business is Marketing. They all want to offer you a game-changing software solution that, although very expensive, is worth every penny to the customer. (Whether the change succeeds is up to us. We are the ones who make sure the software is implemented well and delivers one or more of the six possible optimizations that produce meaningful results for the organization.)

These companies must continuously innovate to stay ahead of each other. They are reading the trends, trying to stay ahead of what you may ask for and of what they fear competitors might tout at a Gartner conference. Good marketing is a vital input into product planning, always anticipating what buyers will want next.

There is a bit of “creative imitation” in this, but most of the time it works to the buyer’s benefit. Consider native cloud hosting of applications: not that long ago, the concept was pretty foreign to most organizations. Now, I don’t think there are more than a handful of companies left that would host their own email or e-commerce servers.

For enterprise software companies, AI is the new Cloud (or the new NoSQL, or consumerization, or SaaS, and so on), still high on the hype cycle and promising lower long-term costs and better results. In their marketing efforts, they are trying to convince CIOs and other executives to sell leadership on why and how a new technology, whether it’s a big upgrade, a platform change, or a new application, is going to solve important, existential challenges. As one tech leader says, his goal is to use marketing to position his product as “the reflex response for a CIO who is replacing legacy technology for the functional area of the asset.”

Something happened that completely surprised me, however—Salesforce reported a big slowdown in new deals, even with all of the AI hype. In fact, all Enterprise software companies seem to be struggling a bit right now.

In thinking about it, I’ve concluded that AI has the same marketing problem the Cloud had 10 years ago: security and privacy.

With AI, the unspoken concerns are worse, because whether we can articulate it or not, we are worried not just about sensitive data, breaches, and so forth, but about the security and privacy of our insights.

We take the software company’s word (and legal documents) that they won’t share our customer or product data. That is step one in a basic agreement, and the infrastructure in a multi-tenant architecture has proven safe enough to be trusted.

However, we can see clearly that our data and, more importantly, our questions, prompts, and refinements are being used to make the AI smarter and more useful, not just for us but also for competitors, snooping governments, and potential bad actors.

Software companies need to address these concerns head-on (again, even if we are not saying this out loud yet). Organizations need to understand what ideas and insights are being shared between instances of these systems, as well as what is being exposed externally. My concern is that the software companies themselves may not know the answers to these questions.
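One concrete step an organization can take while it waits for those answers is to control what leaves the building in the first place. Here is a minimal sketch of a redaction gate in front of an external AI service. The sensitive terms, the send_to_ai stub, and the policy itself are hypothetical examples of mine, and a real control would also have to cover contracts, logging, and the vendor’s own retention and training practices.

```python
import re

# Hypothetical list of terms the organization considers sensitive.
SENSITIVE_TERMS = ["Project Falcon", "Q3 margin target", "acme-internal"]

def redact(prompt: str) -> str:
    """Strip labeled-sensitive terms before the prompt leaves the organization."""
    for term in SENSITIVE_TERMS:
        prompt = re.sub(re.escape(term), "[REDACTED]", prompt, flags=re.IGNORECASE)
    return prompt

def send_to_ai(prompt: str) -> str:
    # Stand-in for the actual call to an external AI vendor's API.
    return f"(sent to vendor) {prompt}"

raw = "Summarize the Q3 margin target risks for Project Falcon."
print(send_to_ai(redact(raw)))
# -> (sent to vendor) Summarize the [REDACTED] risks for [REDACTED].
```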

Helping CIOs and their colleagues gain confidence that the intelligent “soul” they are inviting into the organization can keep secrets is the marketing leap that’s needed. Keep your eyes open for whose Marketing department figures this out first.