People are beginning to trust machines (more precisely, an AI) for answers instead of another person.

A remarkable change is happening in front of us, and it will immensely affect how we do our jobs as tech leaders. This change isn’t entirely new, but it seems to be accelerating rapidly, and people are looking to us tech leaders for answers (at least for now, until machines replace us as well).

Let’s look at where we are coming from:

Over the last few decades, we have come to expect that we can rely on the “wisdom of the internet,” with search engines helping us find the most relevant human experiences and opinions on how best to set up a router, program a macro, build a derivative formula in Excel, or make cioppino. These human experiences were ranked on relevance and utility but were ultimately still very human in origin. We live in a glorious age of being able to learn about anything we wish. (Wanna learn how to make a handmade nail? Here you go.)

But we are not the only ones learning from all of this human experience. We have been training AIs to learn from us as well, and they are getting to the point where they know what they are talking about. (Wanna see how an AI will tell you to make a handmade nail? Here you go.)

When you compare the two answers, is it clear which one was human-generated and which was synthesized by an AI? In this case, yes!

Which answer is better? I know that David (the person in the first link) grew up blacksmithing and learned from his father and grandfather. I also know that the AI has never actually made a single nail in the entire time its datacenter has been running. Which answer is better? Does the origin of this information matter to most people? I can’t begin to tell you.

Based on this, I think I have figured out where AI will first emerge as utterly disruptive to companies: Marketing, Marketing automation, and Search Engine Optimization (SEO).

The whole Marketing industry is based on the idea of helping others become aware of products and services that are relevant to their needs. Marketing automation is about scaling that awareness, and (done ethically) it helps more people connect with something they want. SEO (again, ideally and ethically) is helping others find important and relevant information, written by others, to solve their problems. (I hate having to use all of these qualifiers, but as one Marketing executive told me, “Marketers ruin everything.”)

What happens to a company that has invested scrillions of dollars in carefully created and curated content to position its solution accurately, gone through the effort of researching how others learn about that solution, and made sure it was in the right place to be found, only to discover that a somewhat shambolic, hallucinating AI decides to answer users’ questions with whatever it scraped from a variety of sources? The name for this condition is a “zero-click search,” where the searcher gets their answer without any further clicks because the search tool “answers” the question directly in some manner.

Marketing departments and Marketing software companies are facing the biggest challenge to their work in decades, with no understanding of how to make sure their message is delivered effectively, much less accurately or truthfully. To make a prediction, I think the Marketing industry will fight back. To make another prediction, I think it needs to: pharmaceuticals, politics, and consumer-safety situations demand some basic accountability in communication, and I believe we will find ways to make that happen.

Where do we go from here? Let’s talk about what is happening in Marketing Tech in an upcoming post.

Bob says:

Now I’m not claiming to be original in what follows, but to define “artificial intelligence” we need to agree on (1) what “artificial” means; and (2) what “intelligence” means.

“Intelligence” first. The problem I see with defining it using human behavior as a benchmark is Daniel Kahneman’s Thinking, Fast and Slow. Thinking fast is how humans recognize faces. Thinking slow is how humans solve x = 34 * 17. The irony here is that thinking slow is the reliable way to make a decision, but thinking fast is what neural networks do. It’s intrinsically unreliable, made worse by its tendency to equate correlation with causation.

To finish our definition of AI we need to define “artificial,” which is less obvious than you might think. “Built by humans” is a good start, but a question: is a hydroponic tomato artificial? Then there’s AI’s current dependence on Large Language Models. They’re convincing, but whoever loads the large language model shapes its responses to a significant extent. In that sense it’s really just a different way to program.
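To make that last point concrete, here’s a purely hypothetical sketch; the chat() function below is a stand-in for whatever interface a given model exposes, not any real vendor’s API. The point is simply that the same model, loaded with different instructions, can be steered toward different answers:

```python
# Purely hypothetical sketch: chat() is a placeholder, not any real vendor's API.
# The point from the text above: the same model, loaded with different
# instructions, gives different answers, so shaping the prompt is a way to program.

def chat(system_prompt: str, user_prompt: str) -> str:
    """Placeholder for a real LLM call; it just shows how the framing shapes the reply."""
    return f"[answer shaped by: {system_prompt!r}] ... response to {user_prompt!r}"

question = "Should we rewrite our billing system?"
print(chat("You are a cautious auditor. Emphasize risk.", question))
print(chat("You are an enthusiastic vendor. Emphasize upside.", question))
```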

 

Greg says:

AI’s defining feature (IMHO) is that we are training a piece of software to make decisions more or less the way we would make the same decision, using probability models to do so. (It is a machine, after all.)
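As a minimal sketch of what “making a decision with a probability model” can look like; the ticket-escalation scenario and the hand-set weights here are invented for illustration, not drawn from any real system:

```python
import math

# Hypothetical illustration: decide whether to escalate a support ticket the way
# a person might, but via a simple probability model (a logistic model with
# made-up, hand-set weights).

def p_escalate(features):
    """Return the modeled probability that a ticket should be escalated."""
    weights = {"is_outage": 2.5, "vip_customer": 1.2, "negative_sentiment": 0.8}
    bias = -1.5
    score = bias + sum(weights[name] * value for name, value in features.items())
    return 1 / (1 + math.exp(-score))  # squash the score into a 0..1 probability

ticket = {"is_outage": 1, "vip_customer": 0, "negative_sentiment": 1}
probability = p_escalate(ticket)
decision = "escalate" if probability > 0.5 else "handle normally"
print(f"p(escalate) = {probability:.2f} -> {decision}")
```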

AI has different levels of intelligence and gradations of capability. A “smart” microwave oven that can learn the optimal power level for popping popcorn isn’t the same thing as what a radiologist might use for automatic feature extraction in cancer detection, but they both might use self-learning heuristics to get smarter. Speaking of self-learning:

Self-learning software isn’t necessarily AI, and AI isn’t necessarily self-learning. If you want to have a flashback to Proto-AI from 40 years ago, be my guest here. This is fun, but trust me, she isn’t learning anything.
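By contrast, here’s a toy sketch of the kind of self-learning heuristic the “smart” microwave above might use: try power levels, keep a running score for each, and drift toward whichever one pops the most kernels. The power levels, the scoring, and the simulated batches are all invented for illustration, not pulled from any real appliance.

```python
import random

# Toy self-learning heuristic: an epsilon-greedy loop over power levels.
POWER_LEVELS = [6, 7, 8, 9, 10]
avg_popped = {p: 0.0 for p in POWER_LEVELS}
batches_run = {p: 0 for p in POWER_LEVELS}

def simulate_batch(power):
    """Pretend measurement: fraction of kernels popped, best around power 8."""
    return max(0.0, 1.0 - 0.1 * abs(power - 8) + random.uniform(-0.05, 0.05))

for _ in range(50):
    # Mostly reuse the best-known setting, occasionally explore another one.
    if random.random() < 0.2:
        power = random.choice(POWER_LEVELS)
    else:
        power = max(POWER_LEVELS, key=lambda p: avg_popped[p])
    popped = simulate_batch(power)
    batches_run[power] += 1
    avg_popped[power] += (popped - avg_popped[power]) / batches_run[power]  # running average

print("Learned best power level:", max(POWER_LEVELS, key=lambda p: avg_popped[p]))
```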

 

Bob says:

I think you win the prize for best AI use case with your microwave popcorn example. I could have used this once upon a time, when I trusted my microwave’s Popcorn setting. Getting rid of the over-nuked popcorn smell took at least a week.

I won’t quibble about your definition of AI. Where I will quibble is the question of whether teaching computers to make decisions as we humans do is a good idea. I mean … we already have human beings to do that. When we train a neural network to do the same thing, we’re just telling the computer to “trust its gut,” a practice whose value has been debunked over and over again when humans do it.

Having computers figure out new ways to make decisions, on the other hand, would be a truly interesting feat. If we find ways to meld AI and quantum computing, we might make some progress on this front.

Or else I’m just being Fully Buzzword Compliant.

 

Greg says:

You hit on the big question of whether it is a good idea or not, and, to sound like Bob Lewis for a second, I think the answer is “It depends.”

If we are using AI tools that know how to make human-type decisions, but faster and better, for feature extraction from fire department imagery or 911 call center dispatching, the answer is clearly “Yes!”

In these cases, we are gaining a teammate who can help us resolve ambiguity and make better decisions.

To test this, I was thinking about a disabled relative of mine, confined to a wheelchair and facing some big limits on quality of life. Used well, AI has the potential to enable this loved one to lead a much more fulfilling life by coming alongside them.

But if we are using AI that encourages our inner sloth and our decline toward Idiocracy, we will couch it as “trusting our computer gut” and we will suffer the outcomes.

Used poorly, further enabling our collective dopamine addictions? No thanks, we have enough of that.

 

Bob says:

And so, a challenge. If I’m prone to asserting “it depends,” and AI is all about getting computers to behave the way we humans behave, what has to happen so AIs answer most questions with “it depends,” given that this is the most accurate answer to most questions?

A hint: The starting point is what’s known as “explainable AI.” Its purpose is to get AIs to answer the question, “Why do you think so?” That’s a useful starting point, but it’s far from the finish line, as “it depends” is about context, not algorithms.