If you’re interested in machine learning, and especially if you have any involvement in big data, analytics, and related matters, before today is over you must read “Why scientific findings by AI can’t always be trusted” (Maria Temming, Science News, Vol. 195, No. 7, 4/13/2019).

It describes research by Genevera Allen, a data scientist at Rice University, that attempts to answer a question asked in this space not long ago: With neural networks, which can’t explain their logic when presenting a conclusion, aren’t we just substituting trust in a machine’s gut for trust in our own?

Allen’s conclusion: Yes, we are, and no, we shouldn’t.

Machine learning can, she says, be useful for providing preliminary results humans can later validate. “More exploratory algorithms that poke around datasets to find previously unknown patterns or relationships are very hard to verify,” she explains. “Deferring judgment to such autonomous systems may lead to faulty conclusions.”

Reinforcing the parallel with humans and their guts, Allen points out one of the more important limitations of machine learning: “… data-mining algorithms are designed to draw conclusions with no uncertainty.”

The people I know who trust their guts also seem to lack uncertainty.

Among those who should be less certain are those who figure the so-called “technological singularity” represents the biggest risk AI poses to humanity at large. The singularity — runaway AI where automated improvement cycles beget ever-more-advanced non-biological superintelligences — is the least of our concerns, for the simple reason that intelligence and motivation have little to do with each other.

To choose a banal example, Watson beat all human opponents at Jeopardy! We didn’t then see a bunch of autonomous Watsons vying to become the next game-show contestants. Watson provided the ability; IBM’s researchers provided the motivation.

If we shouldn’t worry about the Singularity, what should concern us?

The answer: GPT-2 and, more broadly, the emerging technology of AI text generation.

As is so often the case, the danger doesn’t come from the technology itself. It comes from us pesky human beings who will, inevitably, use it for nefarious purposes.

This isn’t science fiction. The risk is now. Assuming you haven’t been living in a cave the past couple of years, you know that Russian operatives deployed thousands of ‘bots across social media to influence the 2016 election by creating a Twitter echo chamber for opinions they wanted spread to audiences they considered vulnerable.

Now … add sophisticated text generation to these ‘bots’ capabilities.

You thought Photoshop was dangerous? Take it a step further: We already have the technology to convincingly CGI the faces of dead people onto living actors. What’s to stop a political campaign from using this technology to make it appear that their opponent gave a speech encouraging everyone to, say, embrace Satan as their lord and master?

Oh, and, by the way: as one of those who is or soon will be responsible for making your company more “Digital,” it likely won’t be long before you find yourself figuring out whether, in this brave new world, it is more blessed to give than to receive. Because while it’s less politically alarming, do you really think your Marketing Department wants to be the last one on its block to have these new toys to play with?

The same technologies our geopolitical opponents have used and will use to sell us their preferred candidates for office will undoubtedly help marketeers everywhere sell us their products and services.

How to solve this?

It’s quite certain prevention isn’t an option, although, as advocated in this space once or twice, we might hope for legislation restricting First Amendment rights to actual human persons rather than their technological agents. Beyond that, such legislation could explicitly limit the subjects non-humans are allowed to speak about while requiring all non-human messengers to clearly identify themselves as such.

We might also hope that, unlike the currently pitiful enforcement of the Do-Not-Call Implementation Act of 2003, enforcement of the Shut the ‘Bots Up Act of 2019 would be more vigorous.

Don’t hold your breath.

What might help at least a bit would be development of AI defenses for AI offenses.

Way back in 1997 I proposed that some independent authority should establish a Trusted Information Provider (TIP) certification that information consumers could use to decide which sources to rely on.

What we need now is like that, only using the same amplification techniques the bad guys are using. We need something a lot like spam filters and malware protection — products that use AI techniques to identify and warn users about ‘bot-authored content.
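At its core, a filter like that is a text classifier trained on labeled examples of human- and ‘bot-authored content, much as spam filters are. Here is a minimal sketch in plain Python — a toy Naive Bayes classifier with invented training examples, illustrating the idea rather than any production detector:

```python
import math
from collections import Counter

def train(examples):
    """examples: list of (text, label) pairs.
    Returns per-label word counts and per-label document totals."""
    counts = {"bot": Counter(), "human": Counter()}
    totals = Counter()
    for text, label in examples:
        counts[label].update(text.lower().split())
        totals[label] += 1
    return counts, totals

def classify(text, counts, totals):
    """Naive Bayes with add-one smoothing; returns the more likely label."""
    vocab = set(counts["bot"]) | set(counts["human"])
    scores = {}
    for label in counts:
        # Start with the log prior for this label.
        score = math.log(totals[label] / sum(totals.values()))
        n = sum(counts[label].values())
        for word in text.lower().split():
            # Add-one smoothing keeps unseen words from zeroing the score.
            score += math.log((counts[label][word] + 1) / (n + len(vocab)))
        scores[label] = score
    return max(scores, key=scores.get)

# Toy, invented training data -- a real detector would need a large
# labeled corpus and far richer features than bare word counts.
examples = [
    ("click here amazing deal buy now", "bot"),
    ("retweet this shocking truth now", "bot"),
    ("had a nice lunch with my sister today", "human"),
    ("the meeting ran long but we finished the budget", "human"),
]
counts, totals = train(examples)
print(classify("amazing deal click now", counts, totals))  # flags as "bot"
```

The hard part, of course, isn’t the arithmetic; it’s assembling trustworthy labeled data, which is exactly where the epistemologists come in.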

Of course, we’d then need some way to distinguish legitimate ‘bot-blocking software from phony alternatives.

Think of it as full employment for epistemologists.

If you can’t resolve a thorny conundrum, try asking the question backward.

In the United States we have an ongoing, unresolved question: What are society’s obligations to the poor? Nearly 90 years after FDR’s New Deal we’re still arguing about it, with plenty of programs but little in the way of a national consensus.

What if we asked the question backward — instead of asking what obligations, if any, we have to the poor, let’s ask what privileges should accompany wealth?

We might imagine a continuum. On one end are certainties: Wealth should entitle those who have it to more and cooler toys. Tastier meals. Freedom from having to pick up after themselves, vacuum their floors, and scrub their plumbing fixtures.

Terry Pratchett once pointed out that “privilege” means “private law.” On the other end of the continuum from better toys, food, and household hygiene I think most of us would agree that wealth shouldn’t entitle its owners to private laws, whether in the form of legislation passed to benefit the favored few, or better judicial outcomes because that’s what you get when you can afford the best lawyers.

For that matter, instead of asking if the poor should be entitled to free healthcare, question inversion leads us to instead ask if wealth should confer better health and longer lifespans for those who, through luck or skill, have more of it.

Keep the Joint Running isn’t the place for this conversation, although I’d be delighted if you decide to have it, whether in the Comments, at your dinner table, or in a local tavern accompanied by conversational lubricants.

What does fit KJR’s charter is a very different business question that looks much the same when you invert it.

The question: How can business leaders keep their organizations from turning into stifling, choking bureaucracies?

The inversion: Must all rules apply, all the time, to everyone, regardless of their performance, contribution to the bottom line, or where they rank on the organizational chart?

For example:

> In your sales force is a rainmaker — someone who’s exceptional at designing and closing big, profitable deals. He also has a volatile disposition and a huge temper, which he aims at whoever is convenient whenever he feels frustrated. The question: Should his direct financial contributions earn him a more flexible and nuanced response from HR than another employee with a similar temperament, but a far smaller contribution to the company’s success, would get?

> Your company has a well-structured governance practice for defining, evaluating, and deciding which capital projects to undertake.

Your CFO is championing a major capital project. While it seems to make sense, she hasn’t run it through the process. Instead, she’s schmoozed it through the committee, whose members trust her judgment … or might want her to return the favor come budget season.

The question: Should the CFO and her executive peers be allowed to skip procedural steps that lower-level managers are required to follow?

> Your company’s recruiting function has established specific procedures for filling all open positions. The CEO recently brought in a new CIO to straighten out the company’s IT organization, and the CIO wants to bring in “his team” — three managers he’s worked with in the past. He knows they share his approach to running IT and are the right people to lead the company’s IT turnaround.

Should he be allowed to bypass Recruiting’s procedures?

For most of us the instinctive answer is yes — the rules apply to everyone.

Except for most entrepreneurs, who tend to see what’s unique about each situation as readily as what’s similar.

Take the case of the CFO and her capital project. Companies institute governance frameworks and procedures for reviewing capital proposals to reduce the risk of making poor investments. The CFO applies these frameworks in her sleep. Dragging her proposal through the procedure wastes a lot of her valuable time with little additional risk reduction.

On the other hand, insisting everyone follow these rules, from the top of the organization to the bottom, helps establish an egalitarian perspective that says nobody gets special privileges. It also ensures the company’s executives don’t get sloppy, mistaking arrogance for good judgment.

But on yet another hand, if everyone in the organization had the CFO’s level of financial sophistication, there might never have been a need for the rules in the first place.

“There are reasons we have rules,” is a phrase you’ve probably heard from time to time.

And I agree. There are reasons we have rules. And if we took the time to remember those reasons we’d all be better off.