“Some people worry that artificial intelligence will make us feel inferior, but then, anybody in his right mind should have an inferiority complex every time he looks at a flower.” – Alan Kay
Year: 2022
Intelligent robots? What would be the point?
Way back when, Isaac Asimov wrote about sentient, self-aware robots. Along the way he created his famous Three Laws of Robotics, which, incorporated into the operating systems of every robot manufactured, would, he thought, protect humans from robots gone rogue.
Those of us whose physical age exceeds our psychological age might recall a Star Trek: The Next Generation episode – “The Measure of a Man” – that pondered what should determine whether an artificial intelligence is a person.
More recent programming, especially in the Star Trek universe and Seth MacFarlane’s parody, The Orville, has further explored the challenges and conflicts that arise when artificial intelligences become self-aware and self-motivated persons.
This isn’t the forum for discussing what constitutes a person, no matter how topical that question is. But I’ve been pondering the consequences if and when robots gain enough of the characteristics we think of as constituting personhood that considering them legal persons becomes unavoidable.
Much of what’s been written about the subject emphasizes the risks intelligent AIs and robots pose to humanity at large.
I’ve concluded creating robotic persons is a terrible idea, with or without Dr. Asimov’s proposed preventive measures.
It isn’t, I want to emphasize, a terrible idea because of the risks to society, whether a Skynet-level apocalypse or more measured consequences such as those discussed in “Do self-aware Robots deserve legal rights?” (The Wasteless Future, Antonis Mavropoulos, 11/7/2017).
No, my concern is more along the lines of: what would be the point?
Imagine we somehow do bring self-awareness and personal motivation into the realm of robotics. Imagine we put one of these entities in any of the roles we currently assign to robots or imagine assigning to them, whether they’re to be used in factories, as restaurant servers, or, we can only hope, as autonomous household helpers that go far beyond the Roomba by dusting and doing our laundry as well.
How far a conceptual leap is it to imagine one of these robotic persons filing suit against their human owners for enslaving them, requiring them to work in unsafe conditions, or assaulting them when they malfunction and their owner attempts to remedy the problem with percussive maintenance?
The whole point of using robots is to do work humans don’t want to do. If we can’t require them to do this work because they’re persons, why build them at all?
Bob’s last word: I am, by the way, skeptical that robotic/AI persons will emerge through accidental bootstrapping, as proponents of the Singularity theory of cognitive evolution predict. Read Jared Diamond’s The Third Chimpanzee and you’ll gain an appreciation for just how staggeringly unlikely it was that human personhood ever evolved, which, by extension, suggests how unlikely it is that technological personhood will evolve by accident.
No, robotic persons, should they come into being, will more likely be a self-inflicted wound on the part of humanity.
Which leads to the more immediately relevant question of whether humanity is capable of collectively acting, or preventing actions, based on our collective self-interest. Read the literature on the evolution of altruism and you’ll see how unlikely that is, too. My reading of current events doesn’t make me optimistic.
Bob’s sales pitch: Ho hum. You know what I have to offer – books, consulting, keynoting, and so on. Let me know if you’re interested.
On CIO.com’s CIO Survival Guide: “Why every IT leader should avoid ‘best practices’”. It’s because there are no best practices – they exist only through argument by assertion. There are only practices that fit best.