I’m not sure what follows belongs in KJR, and, if it does, whether it offers anything new or insightful beyond what’s being published about the subject elsewhere.

Please share your opinion on both fronts, preferably in the Comments.

Thanks. — Bob

# # #

In the game of evolution by natural selection there are no rules. Anything a gene can do to insert more copies of itself in succeeding generations is considered fair play, not that the players have any sense they’re playing a game; not that the concept of “fair” plays any part in their thinking; not that thinking plays any part in most of the players’ lives.

Among the ways of dividing the world into two types of people … no, not “those who divide the world into two types of people and those who don’t” …

Where was I? Some of those in leadership roles figure rules are part of the game, and there’s really no point in winning without following them.

That’s in contrast to a different sort of leader: those who treat rules as soft boundaries, to be followed when convenient, or when the risk of being caught violating them, multiplied by the penalties likely to be incurred, is excessive.

For this class of leader, the only rule is that there are no rules. Winning is all that matters.

Which gets us to a subject covered here a couple of weeks ago — the confluence of increasingly sophisticated artificial intelligence and simulation technologies, and their potential for abuse.

Before reading further, take a few minutes to watch a terrifying demonstration of just how easy it now is for a political candidate to, as described last week, “… use this technology to make it appear that their opponent gave a speech encouraging everyone to, say, embrace Satan as their lord and master.”

And thanks to Jon Payton for bringing this to our attention in the Comments.

Nor will this sort of thing be limited to unscrupulous politicians. Does anyone reading these words doubt that some CEO, in pursuit of profits, will put a doctored video on YouTube showing a competitor’s CEO explaining, to his board of directors, “Sure our products kill our customers! Who cares? We can conceal the evidence where no one will ever find it, and in the meantime our profits are much higher than they’d be if we bore the time and expense of making our products safe!”

Easy to make, hard to trace, and even harder to counter with the truth.

Once upon a time our vision of rogue AI depended on robots that autonomously selected human targets to obliterate.

Now? Skynet seems almost utopian. At least its threat is physical and tangible.

Where we’re headed is, I think, even more dangerous.

The technology used to create “deepfake” videos depends on one branch of artificial intelligence. Combine it with text generation that writes the script, and we’re at the point where AI passes the well-known Turing test.

Reality itself is under siege, and Virtual is winning. Just as counterfeit money devalues real currency, so counterfeit reality devalues actual facts.

We can take limited comfort in knowing that, at least for now, researchers haven’t made AI self-directed. If, for example, a deepfake pornographic video shows up in which a controversial politician appears to have a starring role, we can be confident a human directed tame AIs to create and publicize it.

And here I have to apologize, on two fronts.

The first: KJR’s purpose is to give you ideas you can put to immediate, practical use. This isn’t that.

The second: As the old management adage has it, I’m supposed to provide solutions, not problems.

The best I have in the way of solutions is an AI arms race, in which machine-learning AIs tuned to be deepfake detectors become part of our standard anti-malware kit. Or, if you’re of a more militant bent, AIs built to engage in deepfake search-and-destroy missions.
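
What such a detector might look like, boiled down to a toy, is sketched below. It’s a minimal, hypothetical example only: it assumes PyTorch is installed and uses random tensors as stand-ins for a labeled corpus of authentic and synthetic video frames, which is the genuinely hard part to assemble.

```python
# A minimal, hypothetical sketch of a deepfake-frame detector.
# Assumes PyTorch is installed; the random tensors below stand in for a real,
# labeled corpus of authentic and synthetic video frames.
import torch
import torch.nn as nn

class FrameClassifier(nn.Module):
    """Tiny CNN that scores a 64x64 video frame as real (0) or fake (1)."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 16 * 16, 1),  # 64x64 input halved twice -> 16x16
        )

    def forward(self, x):
        return self.classifier(self.features(x))

model = FrameClassifier()
frames = torch.rand(8, 3, 64, 64)              # stand-in batch of frames
labels = torch.randint(0, 2, (8, 1)).float()   # stand-in labels: 0=real, 1=fake

loss_fn = nn.BCEWithLogitsLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# One illustrative training step; a real detector needs real data, many epochs,
# and far more careful evaluation than this.
optimizer.zero_grad()
loss = loss_fn(model(frames), labels)
loss.backward()
optimizer.step()
print(f"toy-batch training loss: {loss.item():.3f}")
```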

That’s in addition to the Shut the ‘Bots Up Act of 2019 I proposed last week, which would limit First Amendment rights to actual human beings.

It’s weak, but it’s the best I have.

How about you?

If you’re interested in machine learning, and especially if you have any involvement in big data, analytics, and related matters, before today is over you must read “Why scientific findings by AI can’t always be trusted” (Maria Temming, Science News, Vol. 195, No. 7, 4/13/2019).

It describes research by Genevera Allen, a data scientist at Rice University, that attempts to answer a question asked in this space not long ago: With neural networks, which can’t explain their logic when presenting a conclusion, aren’t we just substituting trust in a machine’s gut for trust in our own?

Allen’s conclusion: Yes, we are, and no, we shouldn’t.

Machine learning can, she says, be useful for providing preliminary results humans can later validate. “More exploratory algorithms that poke around datasets to find previously unknown patterns or relationships are very hard to verify,” she explains. “Deferring judgment to such autonomous systems may lead to faulty conclusions.”

Reinforcing the parallel with humans and their guts, Allen points out one of the more important limitations of machine learning: “… data-mining algorithms are designed to draw conclusions with no uncertainty.”

The people I know who trust their guts also seem to lack uncertainty.
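
To make the “no uncertainty” point concrete, here’s a small, hypothetical sketch (it assumes NumPy and scikit-learn are installed): feed k-means pure random noise and it will still carve the data into tidy, confident clusters, with nothing in its output to signal that the “patterns” aren’t real.

```python
# Sketch: a data-mining algorithm drawing confident conclusions from pure noise.
# Assumes NumPy and scikit-learn are installed; the data here is random by design.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(seed=0)
noise = rng.normal(size=(500, 10))   # 500 "observations" with no real structure

kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(noise)

# KMeans reports three tidy clusters either way. There is no "I'm not sure"
# in its output, which is exactly Allen's point about deferring judgment.
print("cluster sizes:", np.bincount(kmeans.labels_))
```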

Among those who should be less certain are those who figure the so-called “technological singularity” represents the biggest risk AI poses to humanity at large. The singularity — runaway AI where automated improvement cycles beget ever-more-advanced non-biological superintelligences — is the least of our concerns, for the simple reason that intelligence and motivation have little to do with each other.

To choose a banal example, Watson beat all human opponents at Jeopardy. We didn’t see a bunch of autonomous Watsons vying to become the next game-show contestants. Watson provided the ability; IBM’s researchers provided the motivation.

If we shouldn’t worry about the Singularity, what should concern us?

The answer: GPT-2 and, more broadly, the emerging technology of AI text generation.

And as is so often the case, the danger doesn’t come from the technology itself. It comes from us pesky human beings, who will, inevitably, use it for nefarious purposes.

This isn’t science fiction. The risk is now. Assuming you haven’t been living in a cave for the past couple of years, you know that Russian operatives deployed thousands of ‘bots across social media to influence the 2016 election by creating a Twitter echo chamber for opinions they wanted spread to audiences they considered vulnerable.

Now … add sophisticated text generation to these ‘bots’ capabilities.
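
To get a feel for how low the bar already is, here’s a minimal, hypothetical sketch using the publicly released GPT-2 model through the Hugging Face transformers library (which you’d need installed, along with a backend such as PyTorch); the prompt is purely illustrative.

```python
# Sketch: cheap, fluent text at scale with the publicly released GPT-2 model.
# Assumes the Hugging Face "transformers" library and a PyTorch backend are
# installed; the prompt is illustrative only.
from transformers import pipeline, set_seed

set_seed(42)  # make the illustration repeatable
generator = pipeline("text-generation", model="gpt2")

prompt = "Voters in this district are starting to realize that"
outputs = generator(
    prompt,
    max_length=60,
    do_sample=True,          # sample so the three variants differ
    num_return_sequences=3,
)

for i, out in enumerate(outputs, start=1):
    print(f"--- variant {i} ---")
    print(out["generated_text"])
```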

You thought Photoshop was dangerous? Take it a step further: We already have the technology to convincingly CGI the faces of dead people onto living actors. What’s to stop a political campaign from using this technology to make it appear that their opponent gave a speech encouraging everyone to, say, embrace Satan as their lord and master?

Oh, and, by the way, as one of those who is or soon will be responsible for making your company more “Digital,” it likely won’t be long before you find yourself figuring out whether, in this brave new world, it is more blessed to give than to receive. Because, while it’s less politically alarming, do you doubt your Marketing Department will insist on not being the last one on the block to get these new toys to play with?

The same technologies our geopolitical opponents have used, and will use again, to sell us their preferred candidates for office will undoubtedly help marketeers everywhere sell us their products and services.

How to solve this?

It’s quite certain prevention isn’t an option. Still, as advocated in this space once or twice, we might hope for legislation restricting First Amendment rights to actual human persons rather than their technological agents and, beyond that, explicitly limiting the subjects non-humans are allowed to speak about while requiring all non-human messengers to clearly identify themselves as such.

We might also hope that, unlike the currently pitiful enforcement of the Do-Not-Call Implementation Act of 2003, enforcement of the Shut the ‘Bots Up Act of 2019 would be more vigorous.

Don’t hold your breath.

What might help, at least a bit, would be the development of AI defenses against AI offenses.

Way back in 1997 I proposed that some independent authority should establish a Trusted Information Provider (TIP) certification that information consumers could use to decide which sources to rely on.

What we need now is like that, only using the same amplification techniques the bad guys are using. We need something a lot like spam filters and malware protection — products that use AI techniques to identify and warn users about ‘bot-authored content.
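
For a rough sense of the shape such a product might take, here’s a minimal, hypothetical sketch in the spam-filter mold. It assumes scikit-learn is installed and uses four toy examples where a real filter would need a large, carefully labeled corpus of human- and ‘bot-authored text.

```python
# Sketch of a spam-filter-style 'bot-content classifier, assuming scikit-learn.
# The four training examples are toy stand-ins; a real filter needs a large,
# carefully labeled corpus of human- and 'bot-authored text.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = [
    "Had a great time at the family reunion this weekend.",       # human
    "My commute was terrible again; the trains were packed.",     # human
    "SHARE NOW!!! The TRUTH they don't want you to see!!!",        # 'bot-ish
    "RT if you agree! Patriots know what's really going on!!!",    # 'bot-ish
]
labels = [0, 0, 1, 1]  # 0 = human-authored, 1 = 'bot-authored

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, labels)

suspect = "SHARE THIS NOW before they delete it!!!"
print("probability 'bot-authored:", model.predict_proba([suspect])[0][1])
```

The classifier is the easy part. Getting trustworthy labels, and keeping up as the ‘bots adapt, is where the arms race actually lives.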

Of course, we’d then need some way to distinguish legitimate ‘bot-blocking software from phony alternatives.

Think of it as full employment for epistemologists.