No, don’t thank me.

Last year I proposed what I’m now calling the Shut the ‘Bots Up Act of 2019 — legislation limiting the First Amendment right to free speech to organic humans only.

California has just passed legislation that, while not quite so all-encompassing, still takes an important step, making it “… unlawful for any person to use a ‘bot to communicate or interact with another person in California online with the intent to mislead the other person about its artificial identity.”

You’re welcome.

# # #

Speaking of rights, I wonder if it’s time to recognize that states’ rights interfere, with greater or lesser seriousness, with your average company’s ability to do business.

For example: Every state in the union has its very own regulatory rules and regimes governing the same sorts of services. Fifty separate sets of rules place a significant compliance burden on companies that would have been happier complying with any one set of rules — any one — than with all fifty of them.

Except, that is, for the mega-providers, who can afford a platoon-ful of lawyers to contend with these regulations. For them, 50 public utility commissions (PUCs) are a formidable barrier to entry for would-be competitors.

Getting back to California, it will have to figure out whether and how it can enforce its new Truth in ‘Botfulness Act against offenders whose ‘bots aren’t located in California. The federal government, should it decide to make foreign interference in our elections illegal (wait … it is illegal!), would face the same problem: enforcing a ban on foreign ‘bot-based interference is hard when the ‘bots are located outside the 50 states.

# # #

More about free speech: Once upon a time I figured if you prevent stupid people from speaking, others might think they’re idiots, but if you let them speak they’ll remove all doubt.

That was when I lived in the Chicago suburbs and the ACLU defended the right of the National Socialist Party (aka the Nazis) to hold a march in Skokie.

But back then it was safe, because the Nazis were a harmless fringe group.

Sadly, that’s no longer the case and I’m a bit more cognizant that the line dividing speech from incitement is, although fuzzy, critical.

Nazis are still idiots, but their fringiness is steadily decreasing. Enough of our citizens are embracing their ideology that we need to aggressively keep its promoters on the speech side of the speech/incitement dividing line.

# # #

The question of defending Nazis’ freedom of speech versus preventing them from inciting violence brings up an esoteric but still important notion. Call it the Principles Scaling Rule. What it is: Take any fundamental principle that’s near and dear to your heart. My guess is that when you think of one of these principles it’s in the context of a small, close-to-home example.

For example, in the Good Old Days the local butcher knew that Mr. Phillips loved pork chops. He used that knowledge to recommend them to Mrs. Phillips when she entered his shop. It was early CRM.

My principles about companies knowing their customers’ preferences are built on this model. The digital model that scales it up to millions of customers, whose online behavior companies mine to their sales and marketing advantage? That’s more complicated.

The Principles Scaling Rule states that when you scale a principle, nuances and complexities start to matter.

# # #

Closer to home, where do you think the line is that separates your right to free speech from your employer’s right to restrict it so you don’t say or publish something that might embarrass it?

There’s more to this question than the speech itself. There’s also the question of how employers might find out about infractions.

So before we get to freeing speech from employer interference, we need to establish the line that separates an employer’s right to surveil its employees from those employees’ right to insist that whatever they say in their private lives is none of their employer’s business.

It’s an old topic, made new again by the digital technologies businesses use to mine social media so their marketeers know what customers are saying about their products and services.

HR can use the exact same technologies to track down employee-generated content and evaluate its impact.
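To make the mechanics concrete, here’s a minimal sketch of the kind of keyword-and-sentiment scan such tools perform. Everything in it (the company name, the word lists, the sample posts) is hypothetical, and real monitoring platforms use far more sophisticated language analysis, but the principle is the same whether marketing points it at customers or HR points it at employees.

```python
import re

# Hypothetical word lists -- a real tool would use trained sentiment models.
POSITIVE = {"marvelous", "love", "great", "recommend"}
NEGATIVE = {"awful", "embarrassing", "avoid", "broken"}

def scan_posts(posts, company):
    """Return (author, text, score) for every post that mentions the company."""
    hits = []
    for author, text in posts:
        words = set(re.findall(r"[a-z]+", text.lower()))
        if company.lower() not in words:
            continue  # post doesn't mention the company; ignore it
        score = len(words & POSITIVE) - len(words & NEGATIVE)
        hits.append((author, text, score))
    return hits

# Hypothetical posts -- imagine them scraped from a public feed.
posts = [
    ("employee_42", "Acme is an utterly marvelous place to work."),
    ("customer_17", "Acme support was awful. Avoid."),
]

for author, text, score in scan_posts(posts, "Acme"):
    verdict = "flag for review" if score < 0 else "no action"
    print(f"{author}: sentiment {score:+d} -> {verdict}")
```

Swap the marketing department’s brand keywords for a roster of employee handles and the same scan becomes a surveillance tool, which is exactly the point.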

And by the way, I’m hardly unbiased on this topic, given that I’m gainfully employed by an utterly marvelous technology services firm while also publishing books and articles whose content has little to do with the company’s official stances.

Utterly marvelous. Hey, you — the HR ‘bot — did you catch that? It’s a compliment! You don’t need to flag it. It can be our little secret.

Nooooooooo!

# # #

Call it plausible blame.

A frequent correspondent (who wasn’t, by the way, endorsing it) brought an interview with Thomas Sowell in The Federalist to my attention. In it, Sowell says:

… just the other day I came across an article about how employers setting up new factories in the United States have been deliberately locating those factories away from concentrations of black populations because they find it costlier to hire blacks than to hire whites with the same qualifications. The reason is that the way civil rights laws are interpreted, it is so easy to start a discrimination lawsuit which can go on for years and cost millions of dollars regardless of the outcome.

Shall we deconstruct it?

Start with Sowell’s evidence: he “came across an article.” That isn’t evidence. It’s an unsubstantiated assertion once removed. And … uh oh … I came across an article too. Turns out, fewer than half of all EEOC filings are based on race or color; for claims where the plaintiff wins, the average settlement is $160,000. That isn’t a small number, but at best it’s a tenth of Sowell’s claimed “millions of dollars.”

Oh, and presumably some of the plaintiff wins were due to actual harassment or discrimination.

And the “evidence” is stronger than the rest of Sowell’s claim. If you’ve ever been involved even slightly in business decisions like where to locate a factory, you know the process is far too complicated to give discrimination-lawsuit-prevention-by-avoiding-populations-with-too-many-potential-lawsuit-filers a determining role.

Or, for that matter, any role at all.

The underlying message, though, is pretty clear: government programs to correct social ills backfire, so those who propose them are misguided.

Only there’s no evidence that the problem even exists, and its purported root cause doesn’t stand up to even the slightest scrutiny.

That’s why I call it “plausible blame”: The stated problem isn’t real, but plausibly could be. The blame for the problem is plausibly ascribed to a group the blamer wants to disparage, with “plausibly” defined as “sufficient to support confirmation bias.”

Which brings us to Shadow IT, as you knew it would.

I’ve been reading about Shadow IT and its enormous risks. Why, just a few weekends ago, Shadow IT took down Target’s point-of-sale terminals in 1,900 or so stores.

Oh, wait, that wasn’t Shadow IT. At least, it probably wasn’t. We don’t know because all Target has divulged about the outage is that its cause was an “internal technology problem” that didn’t result in a data breach.

That’s unlike Target’s massive 2013 data breach, which was due to Shadow IT.

It wasn’t? Sorry. Bad memory.

In case you’re unfamiliar with the term, “Shadow IT” is Professional IT’s term for unsanctioned do-it-yourself IT projects taken on by business departments without the benefit of the IT organization’s expertise. With all the bad press Shadow IT gets, I figured it must have been the root cause of at least one major outage or data loss event.

But google “data breach” and you’ll find a rich vein of newsworthy events, none of which had anything to do with Shadow IT.

This is plausible blame too. The problem hasn’t been documented as real, and fault for the undocumented problem is assigned based on superficially sound logic that doesn’t stand up to close scrutiny.

Plausible blame is a handy way to make us despise some group or other and direct our anger at it. Shadow IT’s undocumented perils, for example, lead IT professionals already predisposed to disrespect end users (see “Wite-Out® on the screen”) to sneer at the clueless business managers who encourage it.

And it is plausible: Information Security professionals know what to look for in assessing the vulnerability of potential IT implementations — a lot more than do-it-yourselfers do. Sometimes they know so much that applying that knowledge cripples creativity and initiative.

Make no mistake, Shadow IT does entail real risk. But stamping it out ignores the even greater risks associated with manual methods. Risks? Yes. Few IT organizations have the bandwidth to attend to every automation opportunity in the enterprise. Insisting on nothing but manual methods for everything IT can’t get to means operating far less efficiently and effectively than is possible.

Logic says Shadow IT entails some risk. The evidence says professional IT is, in its own ways, just as risky. Plausible blame says Information Security should focus its attention on Shadow IT.

My conclusion: plausible blame is riskier.