Every system IT deploys must be given a clean bill of health by your Compliance department. They’re the folks who make sure you don’t run afoul of the federal, state, county, and city statutes and regulations that establish boundaries and set requirements for organizations doing business within their jurisdictions.

If your company is multinational, multiply by the number of nations within which you do business.

And don’t complain … not because I want to convince you that regulation as public policy is a good thing.

Don’t complain, because what good would it do? As a leader, complaining will do you no good at all. Quite the opposite: it will cause harm by demoralizing the employees who have to make compliance happen.

So figure out the good idea that’s at the core of most compliance requirements, make sure everyone understands that underlying good idea, never mind the cumbersome implementation requirements, and move on.

Move on to what?

To Facebook, and its emerging status as an independent government, as intriguingly explained in “Facebook has declared sovereignty” (Molly Roberts, The Washington Post, 1/31/2019).

Is Facebook-as-nation real, or is it metaphor? That’s a surprisingly hard call.

If Rocket J. Squirrel lives in a private residence at 246 Freon Drive, Frostbite Falls, MN 56537, his home ownership and property rights and privileges are defined and protected by various U.S. governmental entities.

But Mr. Squirrel also has a virtual life. He goes online and it’s Facebook that provides the real estate in which he resides … his home page … and just as surely provides the foundations on which the social media society in which he lives has been built.

There’s more: Facebook must defend itself from intruders with malicious intent — it needs a department of defense — and also must help its citizens protect themselves from smaller-scale intruders: It needs a police force. Calling the two InfoSec doesn’t change their functions, only their names.

That isn’t the end of it: Many on-line businesses let you make use of your Facebook credentials instead of establishing a separate login ID and password. Facebook issues passports or, if you prefer, these other sites award visas to people who possess Facebook passports.

Facebook-as-nation leads to all sorts of questions, like, when its citizens are living their virtual, as opposed to their physical lives, does Facebook have a role to play when the governing entity for its citizens’ physical location wants to independently impose rules restricting their on-line behavior?

Some countries, for example, recognize sedition as a felony, unlike the U.S., which long ago declared such laws unconstitutional. So …

A Dutch national posts content that insults King Willem-Alexander Claus George Ferdinand, which can be and is read by various and sundry citizens of the Netherlands.

This is, in Holland, a crime (who knew?). The Dutch government, reasonably enough, would probably like (not Like) Facebook to enforce its laws when functioning in the Netherlands — to take down offending posts and reveal the criminals’ identities to the proper authorities.

But … the criminal responsible for posting this content might not, as it turns out, have posted it while in the Netherlands. Jaywalking might be a misdemeanor in New York City, but that doesn’t mean I’ve violated New York City law when I jaywalk in Minneapolis.

It was, the miscreant might argue, posted in Facebookland, not the Netherlands.

And … it gets even more complicated from there.

All things considered, a declaration of national sovereignty on Facebook’s part might actually simplify things. Its offices become embassies, and all of the complexities of enforcing local laws in Facebookland are dealt with by negotiated treaties.

Interesting or not, this might not appear to be relevant to you in your role in corporate life.

Except for this: Your business undoubtedly has its own social media presence — on Facebook, and Twitter, and Instagram, and all the rest. That means your business is a citizen of Facebook, subject to its laws and regulations just as it’s subject to the laws and regulations of every governing entity within which it does business.

I suspect that right now, responsibility for complying with this new regulatory landscape isn’t clearly defined.

Which leads to this week’s suggestions for Things You Can Do Right Now to Protect Yourself from Harm:

1: For any project you’re involved in that might be affected by social media laws and regulations — especially but not limited to Facebook — make sure someone is responsible for defining these constraints.

2: Make sure that person isn’t you.

3: Suggest to whoever is responsible that the Compliance Department might be a good place to start.

4: Duck.

Irony fans, rejoice. AI has entered the fray.

More specifically, the branch of artificial intelligence known as self-learning AI, also known as machine learning, sub-branch neural networks, is taking us into truly delicious territory.

Before getting to the punchline, a bit of background.

“Artificial Intelligence” isn’t a thing. It’s a collection of techniques mostly dedicated to making computers good at tasks humans accomplish without very much effort — tasks like: recognizing cats; identifying patterns; understanding the meaning of text (what you’re doing right now); turning speech into text, after which see previous entry (what you’d be doing if you were listening to this as a podcast, which would be surprising because I no longer do podcasts); and applying a set of rules or guidelines to a situation so as to recommend a decision or course of action, like, for example, determining the best next move in a game of chess or go.

Where machine learning comes in is making use of feedback loops to improve the accuracy or efficacy of the algorithms used to recognize cats and so on.
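The feedback loop is the whole trick, and it fits in a few lines. Here’s a minimal, illustrative sketch (a toy perceptron on invented data, nothing like a production cat-recognizer): predict, compare the prediction against the right answer, and nudge the model’s weights whenever it’s wrong, so accuracy improves with each pass.

```python
# Toy machine-learning feedback loop: a perceptron adjusts its weights
# whenever its prediction is wrong, improving with each pass over the data.
# (Illustrative only; real systems use far larger models and datasets.)

def train_perceptron(samples, labels, epochs=10, lr=0.1):
    w, b = [0.0] * len(samples[0]), 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):  # y is +1 or -1
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else -1
            if pred != y:  # the feedback loop: learn from the mistake
                w = [wi + lr * y * xi for wi, xi in zip(w, x)]
                b += lr * y
    return w, b

# Invented, linearly separable toy data: label is +1 when x0 > x1.
data = [(0.9, 0.1), (0.8, 0.3), (0.2, 0.7), (0.1, 0.9)]
labels = [1, 1, -1, -1]
w, b = train_perceptron(data, labels)
```

After a few passes the weights separate the two classes; nothing in the loop "understands" the data, it just keeps correcting itself.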

Along the way we seem to be teaching computers to commit sins of logic, like, for example, the well-known fallacy of mistaking correlation for causation.

Take, for example, a fascinating piece of research from the Pew Research Center that compared the frequencies of men and women in Google image searches of various job categories to the equivalent U.S. Department of Labor percentages (“Searching for images of CEOs or managers? The results almost always show men,” Andrew Van Dam, The Washington Post’s Wonkblog, 1/3/2019).

It isn’t only CEOs and managers, either. The research showed that, “…In 57 percent of occupations, image searches indicate the jobs are more male-dominated than they actually are.”

While we don’t know exactly how Google image searches work, somewhere behind all of this the Google image search AI must have discovered some sort of correlation between images of people working and the job categories those images typify. The correlation led to the inference that male-ness causes CEO-ness; also, strangely, bartender-ness and claims-adjuster-ness, to name a few other misfires.

Skewed Google occupation image search results are, if not benign, probably quite low on the list of social ills that need correcting.

But it isn’t much of a stretch to imagine law-enforcement agencies adopting similar AI techniques, resulting in correlation-implies-causation driven racial, ethnic, and gender-based profiling.

Or, closer to home, to imagine your marketing department relying on equivalent demographic or psychographic correlations, leading to marketing misfires when targeting messages to specific customer segments.

I said the Google image results must have come from some sort of correlation technique, but that isn’t entirely true. It’s just as possible Google is making use of neural network technology, so called because it roughly emulates how AI researchers imagine the human brain learns.

I say “roughly emulates” as a shorthand for seriously esoteric discussions as to exactly how it all actually works. I’ll leave it at that on the grounds that (1) for our purposes it doesn’t matter; (2) neural network technology is what it is whether or not it emulates the human brain; and (3) I don’t understand the specifics well enough to go into them here.

What does matter about this is that when a neural network … the technical variety, not the organic version … learns something or recommends a course of action, there doesn’t seem to be any way of getting a read-out as to how it reached its conclusion.

Put simply, if a neural network says, “That’s a photo of a cat,” there’s no way to ask it “Why do you think so?”

Okay, okay, if you want to be precise, it’s quite easy to ask it the question. What you won’t get is an answer, just as you won’t get an answer if it recommends, say, a chess move or an algorithmic trade.
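To make the point concrete, here’s a toy network that computes exclusive-or. The weights are hand-picked so the example is deterministic (a trained network’s weights would be even less legible), but even with every number visible, the only “explanation” on offer is arithmetic:

```python
# A toy neural network computing XOR. Even with everything laid bare,
# the network's "reasoning" is just weighted sums: you can read the
# weights, but they don't answer "why do you think so?"
# (Weights hand-picked for determinism; trained weights are murkier still.)

def step(x):
    return 1 if x > 0 else 0

def neuron(inputs, weights, bias):
    return step(sum(w * i for w, i in zip(weights, inputs)) + bias)

def xor_net(a, b):
    h1 = neuron((a, b), (1, 1), -0.5)       # hidden unit 1
    h2 = neuron((a, b), (1, 1), -1.5)       # hidden unit 2
    return neuron((h1, h2), (1, -1), -0.5)  # output unit

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", xor_net(a, b))
```

Ask this network why (1, 0) maps to 1 and all you can inspect is (1, 1), -0.5, (1, 1), -1.5, (1, -1), -0.5. Scale that up to millions of weights and "why do you think so?" has no usable answer.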

Which gets us to AI’s entry into the 2019 irony sweepstakes.

Start with big data and advanced analytics. Their purpose is supposed to be moving an organization’s decision-making beyond someone in authority “trusting their gut,” to relying on evidence and logic instead.

We’re now on the cusp of hooking machine-learning neural networks up to our big data repositories so they can discover patterns and recommend courses of action through more sophisticated means than even the smartest data scientists can achieve.

Only we can’t know why the AI will be making its recommendations.

Apparently, we’ll just have to trust its guts.

I’m not entirely sure that counts as progress.