If you can’t resolve a thorny conundrum, try asking the question backward.

In the United States we have an ongoing, unresolved question: What are society’s obligations to the poor? Nearly 90 years after FDR’s New Deal we’re still arguing about this, with plenty of programs but little in the way of a national consensus.

What if we asked the question backward? Instead of asking what obligations, if any, we have to the poor, let’s ask what privileges should accompany wealth.

We might imagine a continuum. On one end are certainties: Wealth should entitle those who have it to more and cooler toys. Tastier meals. Freedom from having to pick up after themselves, vacuum their floors, and scrub their plumbing fixtures.

Terry Pratchett once pointed out that “privilege” means “private law.” At the other end of the continuum from better toys, food, and household hygiene, I think most of us would agree that wealth shouldn’t entitle its owners to private laws, whether in the form of legislation passed to benefit the favored few or better judicial outcomes because that’s what you get when you can afford the best lawyers.

For that matter, instead of asking whether the poor should be entitled to free healthcare, question inversion leads us to ask whether wealth should confer better health and longer lifespans on those who, through luck or skill, have more of it.

Keep the Joint Running isn’t the place for this conversation, although I’d be delighted if you decide to have it, whether in the Comments, at your dinner table, or in a local tavern accompanied by conversational lubricants.

What does fit KJR’s charter is a very different business question that looks much the same when you invert it.

The question: How can business leaders keep their organizations from turning into stifling, choking bureaucracies?

The inversion: Must all rules apply, all the time, to everyone, regardless of their performance, contribution to the bottom line, or where they rank on the organizational chart?

For example:

> In your sales force is a rainmaker: someone who’s exceptional at designing and closing big, profitable deals. He also has a volatile disposition and a huge temper, which he aims at whoever is convenient whenever he feels frustrated. The question: Should his direct financial contributions result in, shall we say, a more flexible and nuanced response from HR than the one an employee with a similar temperament who contributes far less to the company’s success would get?

> Your company has a well-structured governance practice for defining, evaluating, and deciding which capital projects to undertake.

> Your CFO is championing a major capital project. While it seems to make sense, she hasn’t run it through the process. Instead she’s schmoozed it through the committee, whose members trust her judgment … or might want her to return the favor come budget season.

> The question: Should the CFO and her executive peers be allowed to skip procedural steps that lower-level managers are required to follow?

> Your company’s recruiting function has established specific procedures for filling all open positions. The CEO recently brought in a new CIO to straighten out the company’s IT organization, and the CIO wants to bring in “his team” — three managers he’s worked with in the past. He knows they share his approach to running IT and are the right people to lead the company’s IT turnaround.

> Should he be allowed to bypass Recruiting’s procedures?

For most of us the instinctive answer is yes, they must: the rules apply to everyone.

Except for most entrepreneurs, who tend to see the uniqueness of each situation as well as its similarities to others.

Take the case of the CFO and her capital project. Companies institute governance frameworks and procedures for reviewing capital proposals to reduce the risk of making poor investments. The CFO could apply these frameworks in her sleep. Dragging her proposal through the procedure wastes a lot of her valuable time while delivering little additional risk reduction.

On the other hand, insisting everyone follow these rules, from the top of the organization to the bottom, helps establish an egalitarian perspective that says nobody gets special privileges. It also helps ensure the company’s executives don’t get sloppy, mistaking arrogance for good judgment.

But on yet another hand, if everyone in the organization had the CFO’s level of financial sophistication, there might never have been a need for the rules in the first place.

“There are reasons we have rules” is a phrase you’ve probably heard from time to time.

And I agree. There are reasons we have rules. And if we took the time to remember those reasons we’d all be better off.

Irony fans, rejoice. AI has entered the fray.

More specifically, the branch of artificial intelligence known as machine learning (also called self-learning AI), and in particular its neural network sub-branch, is taking us into truly delicious territory.

Before getting to the punchline, a bit of background.

“Artificial Intelligence” isn’t a thing. It’s a collection of techniques mostly dedicated to making computers good at tasks humans accomplish without very much effort — tasks like: recognizing cats; identifying patterns; understanding the meaning of text (what you’re doing right now); turning speech into text, after which see previous entry (what you’d be doing if you were listening to this as a podcast, which would be surprising because I no longer do podcasts); and applying a set of rules or guidelines to a situation so as to recommend a decision or course of action, like, for example, determining the best next move in a game of chess or go.

Where machine learning comes in is making use of feedback loops to improve the accuracy or efficacy of the algorithms used to recognize cats and so on.
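To make “feedback loop” concrete, here’s a minimal sketch in Python. The toy data and single-weight “model” are my own inventions for illustration, not anyone’s production system: the loop guesses, measures its error, nudges the weight to shrink that error, and repeats.

```python
# A toy "model" with one adjustable weight, learning y = 2x from feedback.
# Illustrative only: real machine learning runs this same loop over
# millions of parameters and training examples.

data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # (input, desired output) pairs

weight = 0.0          # the model's single parameter, initially a bad guess
learning_rate = 0.05  # how strongly each error nudges the weight

for epoch in range(100):
    for x, target in data:
        prediction = weight * x
        error = prediction - target          # the feedback signal
        weight -= learning_rate * error * x  # adjust to reduce the error

print(f"learned weight: {weight:.3f}")  # converges toward 2.0
```

That’s the whole trick. Scale it up to millions of parameters and examples and you have modern machine learning.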

Along the way we seem to be teaching computers to commit sins of logic, like, for example, the well-known fallacy of mistaking correlation for causation.

Take, for example, a fascinating piece of research from the Pew Research Center that compared how often men and women appear in Google image searches for various job categories with the equivalent U.S. Department of Labor percentages (“Searching for images of CEOs or managers? The results almost always show men,” Andrew Van Dam, The Washington Post’s Wonkblog, 1/3/2019).

It isn’t only CEOs and managers, either. The research found that “…In 57 percent of occupations, image searches indicate the jobs are more male-dominated than they actually are.”

While we don’t know exactly how Google image search works, somewhere behind all of this its AI must have discovered some sort of correlation between images of people working and the job categories those images typify. That correlation led to the inference that male-ness causes CEO-ness; also, strangely, bartender-ness and claims-adjuster-ness, to name a few other misfires.
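To see how little it takes to manufacture a misfire like this, here’s a deliberately naive sketch. The counts are invented for illustration, and this is emphatically not how Google’s systems work, but the failure mode is the same: a model that memorizes which gender co-occurred most often with a job label will happily “learn” that CEOs are male.

```python
# A deliberately naive "classifier" that learns a spurious correlation
# from a skewed training set. The counts below are invented for illustration.

from collections import Counter

# Imagine a training set where 90% of images labeled "CEO" happen to show
# men, even though the real-world workforce is far less skewed.
training_labels = ["male"] * 90 + ["female"] * 10

gender_counts = Counter(training_labels)

def predict_ceo_gender():
    # Return whatever correlated most strongly with "CEO" in training.
    # Correlation, not causation; the model can't tell the difference.
    return gender_counts.most_common(1)[0][0]

print(predict_ceo_gender())  # "male", every single time
```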

Skewed Google occupation image search results are, if not benign, probably quite low on the list of social ills that need correcting.

But it isn’t much of a stretch to imagine law-enforcement agencies adopting similar AI techniques, resulting in correlation-implies-causation driven racial, ethnic, and gender-based profiling.

Or, closer to home, to imagine your marketing department relying on equivalent demographic or psychographic correlations, leading to marketing misfires when targeting messages to specific customer segments.

I said the Google image results must have come from some sort of correlation technique, but that isn’t entirely true. It’s just as possible Google is making use of neural network technology, so called because it roughly emulates how AI researchers imagine the human brain learns.

I say “roughly emulates” as a shorthand for seriously esoteric discussions as to exactly how it all actually works. I’ll leave it at that on the grounds that (1) for our purposes it doesn’t matter; (2) neural network technology is what it is whether or not it emulates the human brain; and (3) I don’t understand the specifics well enough to go into them here.

What does matter about this is that when a neural network (the technical variety, not the organic version) learns something or recommends a course of action, there doesn’t seem to be any way of getting a read-out of how it reached its conclusion.

Put simply, if a neural network says, “That’s a photo of a cat,” there’s no way to ask it “Why do you think so?”

Okay, okay, if you want to be precise, it’s quite easy to ask it the question. What you won’t get is an answer, just as you won’t get an answer if it recommends, say, a chess move or an algorithmic trade.
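Here’s a minimal sketch of why, assuming nothing beyond Python’s standard library. The tiny network and its random weights stand in for a trained model; the point is that its answer is just arithmetic over opaque numbers, with no rationale attached.

```python
# A tiny feedforward "cat detector" with opaque weights. It answers, but
# there is no rationale to extract, only arithmetic. The random weights
# here stand in for whatever training would have left behind.

import math
import random

random.seed(42)

# Three "pixel" inputs, one hidden layer of four units, one output.
w_hidden = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(4)]
w_output = [random.uniform(-1, 1) for _ in range(4)]

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def p_cat(pixels):
    hidden = [sigmoid(sum(w * p for w, p in zip(row, pixels)))
              for row in w_hidden]
    return sigmoid(sum(w * h for w, h in zip(w_output, hidden)))

print(f"P(cat) = {p_cat([0.2, 0.7, 0.1]):.2f}")
# Ask the network "why?" and the best you can do is print the weight
# matrices, which tell a human reader precisely nothing.
```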

Which gets us to AI’s entry into the 2019 irony sweepstakes.

Start with big data and advanced analytics. Their purpose is supposed to be moving an organization’s decision-making beyond someone in authority “trusting their gut” and toward relying on evidence and logic instead.

We’re now on the cusp of hooking machine-learning neural networks up to our big data repositories so they can discover patterns and recommend courses of action through more sophisticated means than even the smartest data scientists can achieve.

Only we can’t know why the AI will be making its recommendations.

Apparently, we’ll just have to trust its guts.

I’m not entirely sure that counts as progress.