Can you win?

When I was growing up (or at least older), many conversations fell into the category of Battle o’ Wits, although in the cruel light of accurate remembrance, Battle o’ Half-wits was probably the more accurate description.

Which is why, asked which threesome was funniest, my kindred spirits and I would unhesitatingly choose the Marx Brothers over the Three Stooges. Given a choice between becoming the next Groucho and the next Chuck Norris, we’d have chosen Groucho in a heartbeat.

But … Marx and Norris had this in common: It was always, for them and for us, about winning. Groucho’s “The next time I see you, remind me not to talk to you,” was, psychologically, exactly equivalent to Chuck breaking an opponent’s nose.

What brought this to mind was an interchange in the Comments to last week’s column in response to my having said, “Bigots who aren’t violent and don’t incite violence aren’t dangerous. They’re merely annoying.”

The commenter’s points are that (1) verbal bigotry can do direct damage to its targets and (2) it can encourage discrimination even when it falls well short of incitement.

They’re points that deserve attention.

And so …

First and foremost, before anything else, in case this wasn’t entirely clear last week, the workplace has no place for any expression of bigotry of any kind. If you think this represents a triumph of political correctness, go ahead and think it.

But if you want to gripe about it … in the workplace … all you’re doing is announcing that you want to say something bigoted and would if you were allowed to. Which isn’t very different from saying the bigoted thing in the first place, except that you’re making us guess who you’re bigoted against.

This includes, by the way, bias against White Supremacists, a group I personally find detestable, but whose perspectives are just as legitimate and important to its devotees as my own are to me. In the workplace I’m just as responsible for keeping my views about them to myself as they are for keeping their views to themselves about … well, statistically speaking, most of this planet’s inhabitants.

Outside the workplace is another matter, where, faced with someone spouting off about one or more of the usual targets, we each have to decide how to deal with the situation.

If I’m the target, I maintain now what I maintained last week: Non-violent bigotry, and I include all bigotry that doesn’t incite, is a mere annoyance. It has to be, because if I give it any more significance than that, I’m giving the bigot power over me.

The bigot wins, and as a Groucho-ist in good standing, that would be just plain unacceptable.

That leads to the next, more uncomfortable question: Does the bigot have to lose the encounter, or is their not winning a satisfactory outcome?

Here’s where it gets complicated.

If it’s just the two of us, a Groucho-grade put-down might be personally satisfying, but it isn’t likely to cause the bigot to break down and beg me not to nail him with another one.

Quite the opposite, all I’d have accomplished is to escalate the situation. Worse, the less-verbally-skilled my opponent might be, the more likely escalation to physical violence would be, and I have nothing in common with Chuck Norris.

If the two of us have an audience, I have to weigh the possibility that humiliating my opponent could win the audience over to my side against the equally likely possibility that they’re already on my opponent’s side, at which point escalation would likely be quite unfortunate.

Here’s where I am, personally. Your mileage may vary:

Neither you nor I will persuade a single white supremacist to change his or her worldview, any more than you’ll persuade a dedicated Waterfall-oriented project manager that really, anyone who hasn’t gone full DevOps is a dinosaur who should be put out to pasture … a herbivorous dinosaur, that is, because as any Jurassic Park-goer knows, Tyrannosaurs and velociraptors don’t remain pasture-bound.

Persuasion won’t get us anywhere. Lecturing won’t get us anywhere. Neither will self-righteous indignation. What will?

Opinion: The Blues Brothers and Blazing Saddles did more to combat bigotry than all the speeches in the world. They did so by ridiculing the whole system of beliefs and its vocal proponents, making the whole business socially unacceptable.

Ridicule. We need more ridicule.

Groucho, where are you when we need you?

If you’re interested in machine learning, or, especially, if you have any involvement in big data, analytics, and related matters, before today is over you must read “Why scientific findings by AI can’t always be trusted,” (Maria Temming, Science News, Vol. 195, No. 7, 4/13/2019).

It describes research by Genevera Allen, a data scientist at Rice University, that attempts to answer a question asked in this space not long ago: With neural networks, which can’t explain their logic when presenting a conclusion, aren’t we just substituting trust in a machine’s gut for trust in our own?

Allen’s conclusion: Yes, we are, and no, we shouldn’t.

Machine learning can, she says, be useful for providing preliminary results humans can later validate. “More exploratory algorithms that poke around datasets to find previously unknown patterns or relationships are very hard to verify,” she explains. “Deferring judgment to such autonomous systems may lead to faulty conclusions.”

Reinforcing the parallel with humans and their guts, Allen points out one of the more important limitations of machine learning: “… data-mining algorithms are designed to draw conclusions with no uncertainty.”

The people I know who trust their guts also seem to lack uncertainty.
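A toy sketch makes Allen’s point concrete. This is not her methodology, just an invented nearest-centroid classifier: like many data-mining algorithms, it always returns an answer, with no notion of “I don’t know,” even when the input looks nothing like anything it was trained on.

```python
import math

def centroid(points):
    # Average the coordinates of a list of 2-D points.
    n = len(points)
    return tuple(sum(p[i] for p in points) / n for i in range(2))

def classify(point, centroids):
    # Always picks the nearest centroid -- there is no "uncertain" option.
    return min(centroids, key=lambda label: math.dist(point, centroids[label]))

cluster_a = [(0.0, 0.1), (0.2, 0.0), (0.1, 0.2)]   # tight cluster near the origin
cluster_b = [(5.0, 5.1), (5.2, 5.0), (5.1, 5.2)]   # tight cluster near (5, 5)
centroids = {"A": centroid(cluster_a), "B": centroid(cluster_b)}

classify((0.1, 0.1), centroids)        # in-distribution: "A"
classify((1000.0, -900.0), centroids)  # nonsense input: still gets a confident label
```

The second call is the trouble spot: the point is absurdly far from both clusters, yet the algorithm hands back a label as confidently as it did for the first.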

Among those who should be less certain are those who figure the so-called “technological singularity” represents the biggest risk AI poses to humanity at large. The singularity — runaway AI where automated improvement cycles beget ever-more-advanced non-biological superintelligences — is the least of our concerns, for the simple reason that intelligence and motivation have little to do with each other.

To choose a banal example, Watson beat all human opponents at Jeopardy. We didn’t see a bunch of autonomous Watsons vying to become the next game-show contestants. Watson provided the ability; IBM’s researchers provided the motivation.

If we shouldn’t worry about the Singularity, what should concern us?

The answer: GPT-2 and, more broadly, the emerging technology of AI text generation.

As is so often the case, the danger doesn’t come from the technology itself. It comes from us pesky human beings who will, inevitably, use it for nefarious purposes.

This isn’t science fiction. The risk is now. Assuming you haven’t been living in a cave the past couple of years, you know that Russian operatives deployed thousands of ‘bots across social media to influence the 2016 election by creating a Twitter echo chamber for opinions they wanted spread to audiences they considered vulnerable.

Now … add sophisticated text generation to these ‘bots’ capabilities.

You thought Photoshop was dangerous? Take it a step further: We already have the technology to convincingly CGI the faces of dead people onto living actors. What’s to stop a political campaign from using this technology to make it appear that their opponent gave a speech encouraging everyone to, say, embrace Satan as their lord and master?

Oh, and, by the way: As one of those who is, or soon will be, responsible for making your company more “Digital,” it likely won’t be long before you find yourself figuring out whether, in this brave new world, it is more blessed to give than to receive. Because, while less politically alarming, do you really think your Marketing Department wants to be the last one on its block to have these new toys to play with?

The same technologies our geopolitical opponents have and will use to sell us their preferred candidates for office will undoubtedly help marketeers everywhere sell us their products and services.

How to solve this?

It’s quite certain prevention isn’t an option. Still, as advocated in this space once or twice, we might hope for legislation that restricts First Amendment rights to actual human persons rather than their technological agents, explicitly limits the subjects non-humans are allowed to speak about, and requires all non-human messengers to clearly identify themselves as such.

We might also hope that, unlike the currently pitiful enforcement of the Do-Not-Call Implementation Act of 2003, enforcement of the Shut the ‘Bots Up Act of 2019 would be more vigorous.

Don’t hold your breath.

What might help at least a bit would be development of AI defenses for AI offenses.

Way back in 1997 I proposed that some independent authority should establish a Trusted Information Provider (TIP) certification that information consumers could use to decide which sources to rely on.

What we need now is like that, only using the same amplification techniques the bad guys are using. We need something a lot like spam filters and malware protection — products that use AI techniques to identify and warn users about ‘bot-authored content.
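To make the spam-filter analogy concrete, here’s a toy sketch of such a filter: a Naive Bayes classifier of the kind early spam filters used, retargeted at flagging ‘bot-authored text. The training samples, labels, and threshold are all invented for illustration; a real detector would need vastly richer features than word counts.

```python
import math
from collections import Counter

# Invented training data: short snippets labeled 'bot vs. human.
BOT_SAMPLES = ["click here amazing deal", "share share retweet now",
               "amazing deal retweet now"]
HUMAN_SAMPLES = ["saw your column last week", "thanks for the thoughtful reply",
                 "your column made me think"]

def train(samples):
    # Laplace-smoothed log-probabilities over the training vocabulary,
    # plus a fallback log-probability for unseen words.
    counts = Counter(word for text in samples for word in text.split())
    total, vocab = sum(counts.values()), set(counts)
    probs = {w: math.log((counts[w] + 1) / (total + len(vocab))) for w in vocab}
    return probs, math.log(1 / (total + len(vocab)))

def score(text, model):
    # Log-likelihood of the text under a trained word-frequency model.
    probs, unseen = model
    return sum(probs.get(word, unseen) for word in text.split())

bot_model, human_model = train(BOT_SAMPLES), train(HUMAN_SAMPLES)

def looks_bot_authored(text):
    # Flag text that is more likely under the 'bot model than the human one.
    return score(text, bot_model) > score(text, human_model)
```

The same structure (score under two competing models, warn when the wrong one wins) is how spam and malware filters already work; the hard part is keeping the models ahead of adversaries who can read them too.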

Of course, we’d then need some way to distinguish legitimate ‘bot-blocking software from phony alternatives.

Think of it as full employment for epistemologists.