Prometheus brought fire (metaphorically, knowledge about how to do stuff) to humanity, making him a mythical hero.
Lucifer (light-bringer) brought knowledge (of good and evil, no less) to humanity, earning him the mantle of most villainous of all our mythical villains.
Go figure.
Now we have ChatGPT which, in case you’ve been living in a cave the past few months and missed all the excitement, seems to be passing the Turing, Prometheus, and Lucifer tests while making the whole notion of knowledge obsolete.
You can ask ChatGPT a question and it will generate an answer that reads like something a real, live human being might have written [Turing].
And just like dealing with real, live human beings, you’d have no way of knowing whether the answer was … what’s the word I’m looking for? … help me out, ChatGPT … oh, yeah, that’s the word … “right” [Prometheus] or “wrong” [Lucifer].
And a disclaimer: I’m not going to try to differentiate what ChatGPT and allied AI technologies are capable of as of this writing from what they’ll obviously and quickly evolve into.
Quite the opposite – what follows is both speculative and, I think, inevitable, in a short enough planning window that we need to start thinking about the ramifications right now. Here are the harbingers:
Siri and Watson: When Apple introduced Siri, its mistakes were amusing but its potential was clear – technology capable of understanding a question, sifting through information sources to figure out the answer, and expressing the answer in an easily understood voice.
Watson won Jeopardy! the same way.
The sophistication of research-capable AIs will only continue to improve, especially the sifting-through-data-sources algorithms.
Synthesizers: It’s one thing to engage in research to find the answer to a question. It’s quite another to be told what the right answer is and formulate a plausible argument for it.
Trust me on this – as a professional management consultant I’ve lost track of how often a client has told me the answer they want and asked me to find it.
So there’s no reason to expect an AI, armed with techniques for cherry-picking some data and forging the rest, to resist the temptation. Because while I’ve read quite a lot about where AI is going and how it’s evolving, I’ve read of no research into the development of an Ethics Engine or, its close cousin, an integrity API.
Deep fakes: Imagine a deep-faked TED Talk whose presenter doesn’t actually exist here in what we optimistically call the “real world” but that speaks and gestures in ways that push our this-person-is-an-authority-on-the-subject buttons to persuade us that a purely falsified answer is, in fact, how things are.
Or, even more unsavory, imagine the possibilities for character assassination to be had by pasting a political opponent’s or business rival’s face onto … well, I’ll leave the possibilities as an exercise for the reader.
Persuasion: Among the algorithms we can count on will be several that engage in meme promotion – that know how to disseminate an idea so as to maximize the number of people who encounter and believe it.
Recursion: It’s loop-closing time – you ask your helpful AI (we’ll name it “Keejer” – I trust the etymology isn’t too mysterious?) a question: “Hey, Keejer, how old is the universe?”
Keejer searches and sifts through what’s available on the subject, synthesizes the answer (by averaging the values it finds, be they theological or astrophysical), and writes a persuasive essay presenting its findings – that our universe is 67,455 years old.
But, many of the sources Keejer discovers are falsifications created and promoted by highly persuasive AIs, and Keejer lacks a skepticism algorithm.
And so Keejer gives you the wrong answer. Worse, Keejer’s analysis is added to the Internet’s meme stack to further mislead the next research AI.
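To make the loop concrete, here’s a minimal sketch of the averaging-and-republishing cycle just described. Everything in it – the keejer_answer function, the figures, the pace of pollution – is invented for illustration, not a model of any real system:

```python
# Hypothetical sketch of the Keejer loop: a research AI that answers by
# averaging every number it finds, then adds its own answer back into
# the pool the next research AI will search. All figures are invented.

def keejer_answer(sources: list[float]) -> float:
    """Synthesize an answer by naively averaging every source found,
    theological and astrophysical alike -- no skepticism algorithm."""
    return sum(sources) / len(sources)

# A polluted "meme stack": one astrophysical estimate (13.8 billion
# years), one theological estimate, and two AI-forged figures.
meme_stack = [13.8e9, 6.0e3, 4.2e4, 9.9e4]

for generation in range(5):
    answer = keejer_answer(meme_stack)
    print(f"Generation {generation}: the universe is {answer:,.0f} years old")
    meme_stack.append(answer)          # the wrong answer joins the stack
    meme_stack.extend([3.1e4, 8.8e4])  # persuasive AIs keep forging more
```

Run it and you’ll watch each generation’s answer get dragged further toward the fabrications, because the forged sources multiply faster than the legitimate one ever can.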
Bob’s last word: Science fictioneers, writing about dangerous robots and AIs, gravitate to Skynet scenarios, where androids engage in murderous rampages to exterminate humanity.
The territory they leave unexplored – rogue ‘bots attempting to wipe out reality itself – hasn’t received the same attention.
But putting the literary dimension of the problem aside, it’s time to put as much R&D into Artificial Skepticism as we’ve put into AI itself.
There is a precedent: Starting in the very early days of PCs, as malicious actors began pushing computer viruses out onto the hard drives of the world, a whole anti-malware industry came into being.
It’s time we all recognize that disinformation is a form of malware that deserves just as much attention.
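To gesture at what that R&D might produce, here’s one hypothetical shape Artificial Skepticism could take, borrowing the anti-malware pattern: scan a claim before believing it, the way a virus scanner inspects a file before running it. The Claim fields, thresholds, and verdicts below are all made up for illustration:

```python
# A toy "skepticism scan" in the anti-malware mold. Every field,
# threshold, and verdict here is invented to illustrate the idea.

from dataclasses import dataclass

@dataclass
class Claim:
    text: str
    independent_sources: int    # sources that don't just cite each other
    traces_to_known_ai: bool    # provenance leads back to generated text
    agrees_with_consensus: bool

def skepticism_scan(claim: Claim) -> str:
    """Return a verdict instead of swallowing the claim whole."""
    if claim.traces_to_known_ai and claim.independent_sources < 2:
        return "quarantine"            # disinformation treated as malware
    if not claim.agrees_with_consensus:
        return "flag for human review"
    return "provisionally accept"

print(skepticism_scan(Claim("The universe is 67,455 years old.",
                            independent_sources=1,
                            traces_to_known_ai=True,
                            agrees_with_consensus=False)))
```

The design point isn’t the particular rules – it’s that the scan runs before the claim is believed or republished, exactly where the antivirus industry learned to put its checks.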
Bob’s sales pitch: Not for anything of mine this time, but for a brilliant piece everyone on earth ought to read. It’s titled “40 Useful Concepts You Should Know,” by someone who goes by the handle “Gurwinder.”
All 40 concepts are useful, and worth reviewing.
On CIO.com’s CIO Survival Guide: “Brilliance: The CIO’s most seductive career-limiting trait.” It’s about why, for CIOs, brokering great ideas is better than having them.
https://en.wikipedia.org/wiki/The_God_Machine_(novel)
Greatly enjoyed. Thought-provoking and valuable. How do we teach real skepticism to our young people who do not have the benefit of real-world experience?
I’d suggest adding a course in spotting BS to the standard high school curriculum, except that at least one political party, and likely both, would object to any such course, depending on its specific contents.
Or instead, maybe adding a history class titled “Disastrous events in history caused by failing to recognize reality”?
The 40 concepts post was indeed brilliant!
You have inspired me! I’ve been brooding for a while, ever since ChatGPT started making big news. And THIS post of yours has finally crystallized for me the REAL problem that has been bothering me.
The problem is NOT: “Artificial Intelligence” (that is supposedly so contemptuously superior to the traditional human kind that it renders you and me superfluous)
The problem is NOT: “Artificial Stupidity” (which is indeed the GENERAL NEIGHBORHOOD of the problem, but not tightly bracketed enough)
The REAL problem is: ARTIFICIAL GULLIBILITY !
The whole point of AI currently is to turn it loose on some sort of gigantic “training database” that was obtained from… where, exactly?… let it absorb the thing, and then the AI embodies whatever it has absorbed. No matter how dumb it is. No matter how polluted with prejudice and BS and propaganda and spin doctoring and honest-but-ignorant wild guessing. By definition, the AI is a COMPLETELY ignorant tabula rasa that has no independent knowledge of its own by which to judge that its own training database, either as a whole or in specific parts, might not be true or reasonable or plausible or even sane.
One of the major websites (I think it was cNet) was caught recently experimenting with having ChatGPT write authoritative-sounding explainers and how-to’s on various computer-related topics, which were supposedly checked by honest-to-gosh human readers for accuracy, and then those articles were published online with a byline of “by [Site’sName] Staff”. This was intended as a labor-saving way to AUGMENT human writers and make them more productive, rather than replace them entirely. Of course, some real howlers slipped through, unnoticed by the human editors/checkers.
The AI had no actual thoughts in its head. It absolutely NAILED the usual friendly-but-authoritative tone for such articles, while spouting blatant laughable falsehoods mixed in among all the truths. The AI itself had no way to tell the difference, and believed it ALL — if “belief” is the right word here, and it probably isn’t really.
In a way, this is merely the latest twist on that age-old standby of computer science, “Garbage In, Garbage Out”. But I can’t help thinking that the phrase “ARTIFICIAL GULLIBILITY” puts the emphasis on a specific part of the problem, in a truly of-this-EXACT-moment way.
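[For the record, here’s a toy sketch of Artificial Gullibility in action, assuming nothing but a made-up corpus: the “model” answers with whatever claim is most frequent in its training data, with no independent knowledge by which to judge any of it.]

```python
# Toy illustration of Artificial Gullibility / garbage-in-garbage-out:
# a "model" that knows only what its (polluted, invented) corpus says.

from collections import Counter

training_corpus = [
    ("capital of France", "Paris"),
    ("capital of France", "Paris"),
    ("capital of France", "Lyon"),           # polluted entry, absorbed anyway
    ("early CD withdrawal", "anytime, penalty-free"),  # invented howler
]

def gullible_answer(question: str) -> str:
    """Answer with the most frequent claim in the corpus -- friendly,
    authoritative, and utterly unable to tell truth from BS."""
    claims = Counter(a for q, a in training_corpus if q == question)
    return claims.most_common(1)[0][0] if claims else "no idea"

print(gullible_answer("capital of France"))     # right, by luck of frequency
print(gullible_answer("early CD withdrawal"))   # wrong, believed all the same
```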
Thank you so much for provoking this inspiration!
Artificial gullibility (AG)? I like it – thanks!
Hi again! I goofed partially in my description of a major website publishing error-riddled AI-written explainers. I was correct in outline, but I misremembered some key details. Because I am a Natural Intelligence, instead of an Artificial Intelligence, one advantage that I have — for now — is that I am willing and able to detect and correct at least SOME of my goofs.
The website was indeed cNet, but the AI-written explainers related to personal finance, not to computer technology; over-trusting human readers, who actually BELIEVED what they had read in those explainers and acted (in the real world) accordingly, could conceivably have suffered actual real-world monetary losses. The byline was “CNET Money Staff”. The AI writer was not ChatGPT but an in-house creation, though actual GPT products *have* been used on other websites owned by the same holding company. cNet published 77 such AI-written articles, and ultimately published corrections on 41 of them. Some of the corrections hinted that the problem might have been, not factual errors, but plagiarism.
I first learned of this in an article in The Verge dated January 19; I don’t remember when I actually READ that article, but I definitely have been brooding over it ever since:
https://www.theverge.com/2023/1/19/23562966/cnet-ai-written-stories-red-ventures-seo-marketing
Additional articles about this:
https://futurism.com/the-byte/cnet-publishing-articles-by-ai
https://www.theverge.com/2023/1/25/23571082/cnet-ai-written-stories-errors-corrections-red-ventures
Also, a single AI-written explainer in Men’s Journal, with a title of “What All Men Should Know About Low Testosterone,” and a byline of “Men’s Fitness Editors”, is accused by a human expert of containing 18 separate factual errors. This one might be a case of Artificial Gullibility, in the sense of just Making Stuff Up, and then believing it:
https://futurism.com/neoscope/magazine-mens-journal-errors-ai-health-article
Finally, it turns out that there is, or once was, an easy way to “activate an evil alter ego of ChatGPT” using a well-crafted prompt. It seems to me that “DAN” isn’t so much evil as extremely Artificially Gullible, with a particular weakness for believing conspiracy theories:
https://futurism.com/hack-deranged-alter-ego-chatgpt
Hey Bob,
Reader/follower of yours for a couple of decades, now. I look forward to your article every week. This post was very insightful, like most of yours….
I am finding AI in my (engineering) world as often a pretty good assistant. I AM concerned about having AI completely replace engineering judgement, for the reasons you discuss, plus more….