I’m not sure what follows belongs in KJR, or, if it does, whether it offers anything new and insightful beyond what’s being published about the subject elsewhere.
Please share your opinion on both fronts, preferably in the Comments.
Thanks. — Bob
# # #
In the game of evolution by natural selection there are no rules. Anything a gene can do to insert more copies of itself in succeeding generations is considered fair play, not that the players have any sense they’re playing a game; not that the concept of “fair” plays any part in their thinking; not that thinking plays any part in most of the players’ lives.
Among the ways of dividing the world into two types of people … no, not “those who divide the world into two types of people and those who don’t” …
Where was I? Some of those in leadership roles figure rules are part of the game, and there’s really no point in winning without following them.
That’s in contrast to a different sort of leader — those who treat rules as soft boundaries, to be followed when convenient, or when the risk of being caught violating them, multiplied by the penalties likely to be incurred as a result, is excessive.
For this class of leader, the only rule is that there are no rules. Winning is all that matters.
Which gets us to a subject covered here a couple of weeks ago — the confluence of increasingly sophisticated artificial intelligence and simulation technologies, and their potential for abuse.
Before reading further, take a few minutes to watch a terrifying demonstration of just how easy it now is for a political candidate to, as described last week, “… use this technology to make it appear that their opponent gave a speech encouraging everyone to, say, embrace Satan as their lord and master.”
And thanks to Jon Payton for bringing this to our attention in the Comments.
Nor will this sort of thing be limited to unscrupulous politicians. Does anyone reading these words doubt that some CEO, in pursuit of profits, will put a doctored video on YouTube showing a competitor’s CEO explaining, to his board of directors, “Sure our products kill our customers! Who cares? We can conceal the evidence where no one will ever find it, and in the meantime our profits are much higher than they’d be if we bore the time and expense of making our products safe!”
Easy to make, hard to trace, and even harder to counter with the truth.
Once upon a time our vision of rogue AI depended on robots that autonomously selected human targets to obliterate.
Now? Skynet seems almost utopian: its threat, at least, was physical and tangible.
Where we’re headed is, I think, even more dangerous.
The technology used to create “deepfake” videos depends on one branch of artificial intelligence. Combine it with text generation that writes the script, and we’re at the point where AI passes the well-known Turing test.
Reality itself is under siege, and Virtual is winning. Just as counterfeit money devalues real currency, so counterfeit reality devalues actual facts.
We can take limited comfort in knowing that, at least for now, researchers haven’t made AI self-directed. If, for example, a deepfake pornographic video shows up in which a controversial politician appears to have a starring role, we can be confident a human directed tame AIs to create and publicize it.
And here I have to apologize, on two fronts.
The first: KJR’s purpose is to give you ideas you can put to immediate, practical use. This isn’t that.
The second: As the old management adage has it, I’m supposed to provide solutions, not problems.
The best I have in the way of solutions is an AI arms race, where machine-learning AIs tuned to be deepfake detectors become part of our anti-malware standard kit. Or, if you’re a more militant sort, built to engage in deepfake search-and-destroy missions.
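To make the arms-race idea concrete, here is a minimal, hypothetical sketch of the detector half: a toy binary classifier trained on video frames labeled real or fake. The directory layout (frames/real, frames/fake), the tiny network, and the hyperparameters are illustrative assumptions only; production detectors are far larger and also exploit temporal and audio inconsistencies rather than single frames.

```python
# Toy sketch of the "deepfake detector" idea: a binary classifier over video
# frames labeled real vs. fake. Paths, model size, and hyperparameters are
# illustrative assumptions, not a production design.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

# Assumed (hypothetical) layout: frames/real/*.png and frames/fake/*.png
preprocess = transforms.Compose([
    transforms.Resize((128, 128)),
    transforms.ToTensor(),
])
train_data = datasets.ImageFolder("frames", transform=preprocess)
loader = DataLoader(train_data, batch_size=32, shuffle=True)

# A deliberately small CNN; real detectors use far larger models.
model = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(32 * 32 * 32, 2),   # two classes: real, fake
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(3):            # token training loop
    for frames, labels in loader:
        optimizer.zero_grad()
        loss = loss_fn(model(frames), labels)
        loss.backward()
        optimizer.step()

# Scoring a new frame: probability that it is synthetic.
# fake_prob = torch.softmax(model(frame_tensor.unsqueeze(0)), dim=1)[0, 1]
```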
That’s in addition to the Shut the ‘Bots Up Act of 2019 I proposed last week, which would limit First Amendment rights to actual human beings.
It’s weak, but it’s the best I have.
How about you?
What *is* reality anyway?
Aren’t we just 7-dimensional holograms being beamed in from another existence?
Perhaps some sort of higher-tech video game, or the movie of the evening for whatever takes the place of TV there, wherever they are.
As to here, wherever we really are, look for the Dems to do whatever it takes to win, using last-minute fakes of all sorts along with the disinformation methods already infesting TV, internet discussions, and the newspapers.
Really? Within a week of the Republicans posting and disseminating doctored videos of Nancy Pelosi, and months after Donald Trump uttered his 10,000th lie (according to the Washington Post’s scrupulously agnostic Fact Checker) you’re accusing the Democrats of being the party of disinformation?
I was reading “Dark Money: The Hidden History of the Billionaires Behind the Rise of the Radical Right” today and came to the same conclusion about the First Amendment.
“That’s in addition to the Shut the ‘Bots Up Act of 2019 I proposed last week, which would limit First Amendment rights to actual human beings.”
If it should ever become a political benefit to a party in power to extend First Amendment rights to non-humans, that’s more than likely what will happen. Can giving non-humans the vote be far behind?
The sane have lost control over the process, which is now controlled by the stupidest people in society – politicians.
Completely off-topic and completely appropriate. It will be tough to manage corporate reality if you’re no longer sure what is real. Fake videos can be challenged, but so will “real” videos. Ultimately, just as any document can be faked and any photo can be manipulated, we’ll adjust to any video being inherently untrustworthy. I’m not worried about the future, but there will be a period of adjustment as we learn to be skeptical of video documentation.
Not relevant to the overall discussion, but according to Dawkins in “The Selfish Gene,” humans and all other organisms are just genes’ way of making more genes.
absolutely brilliant article. thanks for the thought provocation.
You hardly touched on the most worrisome aspect of all this – people who have made up their minds are impervious to facts. https://www.newyorker.com/magazine/2017/02/27/why-facts-dont-change-our-minds?
As long as we have people who believe the earth is flat and that dinosaurs were on the Ark and that Clinton ran a child trafficking ring out of the basement of a building that had no basement, politics is doomed to be influenced by Deep Fakes. And when that results in the election of people who don’t even know a fact from fiction, it is too easy to imagine a world war that no one wins.
(And it is fine to have a digression from your standards for posts once in a while.)
On the other hand, we can look at the points you make and figure deepfakes really don’t matter. If I see one that supports my confirmation bias I’ll buy it without question. And now, anything I see that runs counter to what I want to be true I can conveniently claim is just a deepfake and not evidence I’m wrong.
Sort of like crying “fake news” whenever reporters discover something inconvenient.
Sadly, I’m afraid that the scenario you described will become increasingly common – whether something is deepfake won’t matter to many people… and for those for whom deepfakes DO matter, there won’t be much they can do about it.
An article as brilliant as it is disturbing.
As I see it, we nerds invented this tool, so we nerds have to figure out what to do about what we created, which is not straightforward.
Nerds invented the atomic bomb, but it took nerds + biologists + physicians + psychologists to understand the A-bomb’s direct consequences. Similarly, we nerds invented this technology, but it will take us nerds + psychologists + neuroscientists + neurologists + endocrinologists to begin to understand its direct consequences on those who see these confusing videos.
In the hands of demagogues, it can and does destabilize the climate any particular business operates in and the options available to its customers, not to mention its potential toxicity to internal synergy and morale.
My stereotype about us is that most of us nerds are not generalists, yet we will need to collaborate, with open ears, with other disciplines, since we will be the people in our companies who understand this tool best and whom the company (and perhaps the country) will need to look to for guidance and leadership against this threat.
Seems to me, this is a responsibility for everyone in IT, not just management.
Respectfully, I disagree that there is little we can do about it. It is my sense that this is a war situation. With Pearl Harbor, we didn’t really expect an attack, but our naval commanders absolutely understood the weapons, tactics, and strategies of the Japanese. And they had the full support and resources of the entire United States.
However, our situation seems more like that of native peoples being attacked, conquered, and colonized by the Europeans, in that the native peoples neither recognized nor sufficiently understood the weapons, tactics, and strategies of the Europeans in a timely manner.
We geeks actually do understand the weapons technologies, since we invented and mostly implemented them. What these technologies do is shape human perception. Who knows the most about human perception? Neuroscientists (see Sleights of Mind, https://www.amazon.com/Sleights-Mind-Neuroscience-Everyday-Deceptions/dp/0312611676), psychologists, hypnotherapists, and magicians (illusionists).
Most of us know at least one person who has the relevant human perception expertise that we could collaborate with. It is a daunting situation, but if we give up, where does that leave our organizations and society?