“Research is what I’m doing when I don’t know what I’m doing.” – Wernher von Braun
Year: 2023
AIEEE! It’s AI!
Prometheus brought fire (metaphorically, knowledge about how to do stuff) to humanity, making him a mythical hero.
Lucifer (light-bringer) brought knowledge (of good and evil, no less), to humanity, earning him the mantle of most villainous of all our mythical villains.
Go figure.
Now we have ChatGPT which, in case you’ve been living in a cave the past few months and missed all the excitement, seems to be passing the Turing, Prometheus, and Lucifer tests while making the whole notion of knowledge obsolete.
You can ask ChatGPT a question and it will generate an answer that reads like something a real, live human being might have written [Turing].
And just like dealing with real, live human beings you’d have no way of knowing whether the answer was … what’s the word I’m looking for? … help me out, ChatGPT … oh, yeah, that’s the word … “right” [Prometheus] or “false” [Lucifer].
And a disclaimer: I’m not going to try to differentiate what ChatGPT and allied AI technologies are capable of as of this writing from what they’ll obviously and quickly evolve into.
Quite the opposite – what follows is both speculative and, I think, inevitable, in a short enough planning window that we need to start thinking about the ramifications right now. Here are the harbingers:
Siri and Watson: When Apple introduced Siri, its mistakes were amusing but its potential was clear – technology capable of understanding a question, sifting through information sources to figure out the answer, and expressing the answer in an easily understood voice.
Watson won Jeopardy! the same way.
The sophistication of research-capable AIs will only continue to improve, especially the sifting-through-data-sources algorithms.
Synthesizers: It’s one thing to engage in research to find the answer to a question. It’s quite another to be told what the right answer is and formulate a plausible argument for it.
Trust me on this – as a professional management consultant I’ve lost track of how often a client has told me the answer they want and asked me to find it.
So there’s no reason to think an AI, armed with techniques for cherry-picking some data and forging the rest, would resist the temptation. Because while I’ve read quite a lot about where AI is going and how it’s evolving, I’ve read of no research into the development of an Ethics Engine or, its close cousin, an integrity API.
Deep fakes: Imagine a deep-faked TED Talk whose presenter doesn’t actually exist here in what we optimistically call the “real world,” but who speaks and gestures in ways that push our this-person-is-an-authority-on-the-subject buttons, persuading us that a purely falsified answer is, in fact, how things are.
Or, even more unsavory, imagine the possibilities for character assassination to be had by pasting a political opponent’s or business rival’s face onto … well, I’ll leave the possibilities as an exercise for the reader.
Persuasion: Among the algorithms we can count on will be several that engage in meme promotion – that know how to disseminate an idea so as to maximize the number of people who encounter and believe it.
Recursion: It’s loop-closing time – you ask your helpful AI (we’ll name it “Keejer” – I trust the etymology isn’t too mysterious?) a question: “Hey, Keejer, how old is the universe?”
Keejer searches and sifts through what’s available on the subject, synthesizes the answer (by averaging the values it finds, be they theological or astrophysical), and writes a persuasive essay presenting its findings – that our universe is 67,455 years old.
But many of the sources Keejer discovers are falsifications created and promoted by highly persuasive AIs, and Keejer lacks a skepticism algorithm.
And so Keejer gives you the wrong answer. Worse, Keejer’s analysis is added to the Internet’s meme stack to further mislead the next research AI.
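To make the loop concrete, here’s a minimal Python sketch of the scenario above. Everything in it is an assumption invented for illustration: “Keejer,” the fabricated age, the bot behavior, and the crude averaging step. It isn’t a model of any real research AI, just the feedback loop in miniature.

```python
# A toy simulation of the "Recursion" scenario. All names, numbers, and
# behaviors here are hypothetical illustrations, not any real system.
import random
import statistics

REAL_AGE_YEARS = 13.8e9          # roughly what astrophysics says
FABRICATED_AGE_YEARS = 67_455    # the figure the persuasive bots are pushing

def keejer_answers(sources):
    """Keejer 'synthesizes' an answer by averaging whatever it finds,
    with no skepticism algorithm to weigh source credibility."""
    return statistics.mean(sources)

def persuasive_bots_publish(n_fakes):
    """Persuasive AIs flood the meme pool with the fabricated figure."""
    return [FABRICATED_AGE_YEARS] * n_fakes

# Start with a handful of honest sources...
meme_pool = [REAL_AGE_YEARS * random.uniform(0.98, 1.02) for _ in range(5)]

for generation in range(5):
    # ...then each generation the bots outpublish the scientists...
    meme_pool += persuasive_bots_publish(n_fakes=20)
    answer = keejer_answers(meme_pool)
    # ...and Keejer's own essay goes right back onto the meme stack,
    # becoming a "source" for the next research AI.
    meme_pool.append(answer)
    print(f"Generation {generation}: Keejer says the universe is "
          f"{answer:,.0f} years old")
```

Run it and the reported age sinks toward the fabricated figure with each pass, which is the point: without something like a skepticism algorithm weighing sources, the averaging and the republishing do all the damage on their own.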
Bob’s last word: Science fictioneers, writing about dangerous robots and AIs, gravitate to Skynet scenarios, where androids engage in murderous rampages to exterminate humanity.
The territory they’ve left unexplored – rogue ‘bots attempting to wipe out not humanity but reality itself – deserves at least as much attention.
But putting the literary dimension of the problem aside, it’s time to put as much R&D into Artificial Skepticism as we’ve invested in AI itself.
There is a precedent: Starting in the very early days of PCs, as malicious actors began pushing computer viruses out onto the hard drives of the world, a whole anti-malware industry came into being.
It’s time we all recognize that disinformation is a form of malware that deserves just as much attention.
Bob’s sales pitch: Not for anything of mine this time, but for a brilliant piece everyone on earth ought to read. It’s titled “40 Useful Concepts You Should Know,” by someone who goes by the handle “Gurwinder.”
All 40 concepts are useful, and you should review them all.
On CIO.com’s CIO Survival Guide: “Brilliance: The CIO’s most seductive career-limiting trait.” It’s about why, for CIOs, brokering great ideas is better than having them.