No, don’t thank me.

Last year I proposed what I’m now calling the Shut the ‘Bots Up Act of 2019 — legislation limiting the First Amendment right to free speech to organic humans only.

California has just passed legislation that, while not quite so all-encompassing, still takes an important step, making it “… unlawful for any person to use a ‘bot to communicate or interact with another person in California online with the intent to mislead the other person about its artificial identity.”

You’re welcome.

# # #

Speaking of rights, I wonder if it’s time to recognize that states’ rights interfere, with greater or lesser seriousness, with your average company’s ability to do business.

For example: Every state in the union has its very own regulatory rules and regimes governing the same sorts of services. Fifty separate sets of rules place a significant compliance burden on companies that would be happier complying with any one set of rules — any one — than with all of them.

Except, that is, for the mega-providers who can afford a platoon-ful of lawyers to contend with these regulations. For them, 50 public utility commissions (PUCs) are a formidable barrier to entry for new competitors.

Getting back to California, it will have to figure out whether and how it can enforce its new Truth in ‘Botfulness Act against offenders whose ‘bots aren’t located in California. It’s the same problem the federal government would face should it decide to make foreign interference in our elections illegal (wait … it is illegal!): a ban on foreign ‘bot-based interference is hard to enforce when the interference comes from ‘bots located outside the 50 states.

# # #

More about free speech: Once upon a time I figured that if you prevent stupid people from speaking, others can only suspect they’re idiots, but if you let them speak they’ll remove all doubt.

That was when I lived in the Chicago suburbs and the ACLU defended the right of the National Socialist Party (aka the Nazis) to hold a march in Skokie.

But back then it was safe, because the Nazis were a harmless fringe group.

Sadly, that’s no longer the case and I’m a bit more cognizant that the line dividing speech from incitement is, although fuzzy, critical.

Nazis are still idiots, but their fringiness is steadily decreasing. Enough of our citizens are embracing their ideology that we need to aggressively keep its promoters on the speech side of the speech/incitement dividing line.

# # #

The question of defending Nazis’ freedom of speech versus preventing them from inciting violence brings up an esoteric but still important notion. Call it the Principles Scaling Rule. What it is: Take any fundamental principle that’s near and dear to your heart. My guess is that when you think of one of these principles it’s in the context of a small, close to home example.

For example, in the Good Old Days the local butcher knew that Mr. Phillips loved pork chops. He used that knowledge to recommend them to Mrs. Phillips when she entered his shop. It was early CRM.

My principles about companies knowing their customers’ preferences are built on this model. The digital model that scales this up to millions of customers whose on-line behavior companies mine to their sales and marketing advantage? That’s more complicated.

The Principles Scaling Rule states that when you scale a principle, nuances and complexities start to matter.

# # #

Closer to home, where do you think the line is that separates your right to free speech from your employer’s right to restrict it so you don’t say or publish something that might embarrass it?

There’s more to this question than the speech itself. There’s also the question of how employers might find out about infractions.

So before we get to freeing speech from employer interference, we need to establish the line that separates an employer’s right to surveil its employees from those employees’ right to insist that whatever they say in their private lives is none of their employer’s business.

It’s an old topic, made new again by the digital technologies businesses use to mine social media so their marketeers know what customers are saying about their products and services.

HR can use the exact same technologies to track down employee-generated content and evaluate its impact.

And by the way, I’m hardly unbiased on this topic, given that I’m gainfully employed by an utterly marvelous technology services firm while also publishing books and articles whose content has little to do with the company’s official stances.

Utterly marvelous. Hey, you — the HR ‘bot — did you catch that? It’s a compliment! You don’t need to flag it. It can be our little secret.


I’m not sure what follows belongs in KJR, and, if it does, whether it offers anything new and insightful beyond what’s being published about the subject elsewhere.

Please share your opinion on both fronts, preferably in the Comments.

Thanks. — Bob

# # #

In the game of evolution by natural selection there are no rules. Anything a gene can do to insert more copies of itself in succeeding generations is considered fair play, not that the players have any sense they’re playing a game; not that the concept of “fair” plays any part in their thinking; not that thinking plays any part in most of the players’ lives.

Among the ways of dividing the world into two types of people … no, not “those who divide the world into two types of people and those who don’t” …

Where was I? Some of those in leadership roles figure rules are part of the game, and there’s really no point in winning without following them.

That’s in contrast to a different sort of leader — those who regard rules as soft boundaries, to be followed only when convenient, or when the risk of being caught violating them, multiplied by the penalties likely to be incurred, is excessive.

For this class of leader, the only rule is that there are no rules. Winning is all that matters.

Which gets us to a subject covered here a couple of weeks ago — the confluence of increasingly sophisticated artificial intelligence and simulation technologies, and their potential for abuse.

Before reading further, take a few minutes to watch a terrifying demonstration of just how easy it now is for a political candidate to, as described last week, “… use this technology to make it appear that their opponent gave a speech encouraging everyone to, say, embrace Satan as their lord and master.”

And thanks to Jon Payton for bringing this to our attention in the Comments.

Nor will this sort of thing be limited to unscrupulous politicians. Does anyone reading these words doubt that some CEO, in pursuit of profits, will put a doctored video on YouTube showing a competitor’s CEO explaining, to his board of directors, “Sure our products kill our customers! Who cares? We can conceal the evidence where no one will ever find it, and in the meantime our profits are much higher than they’d be if we bore the time and expense of making our products safe!”

Easy to make, hard to trace, and even harder to counter with the truth.

Once upon a time our vision of rogue AI depended on robots that autonomously selected human targets to obliterate.

Now? Skynet seems almost utopian. Its threat, at least, was physical and tangible.

Where we’re headed is, I think, even more dangerous.

The technology used to create “deepfake” videos depends on one branch of artificial intelligence. Combine it with text generation that writes the script, and we’re at the point where AI passes the well-known Turing test.

Reality itself is under siege, and Virtual is winning. Just as counterfeit money devalues real currency, so counterfeit reality devalues actual facts.

We can take limited comfort in knowing that, at least for now, researchers haven’t made AI self-directed. If, for example, a deepfake pornographic video shows up in which a controversial politician appears to have a starring role, we can be confident a human directed tame AIs to create and publicize it.

And here I have to apologize, on two fronts.

The first: KJR’s purpose is to give you ideas you can put to immediate, practical use. This isn’t that.

The second: As the old management adage has it, I’m supposed to provide solutions, not problems.

The best I have in the way of solutions is an AI arms race, in which machine-learning AIs tuned to be deepfake detectors become part of our standard anti-malware kit. Or, if you’re the more militant sort, are built to engage in deepfake search-and-destroy missions.
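The arms-race idea reduces, at its core, to a classification problem: score each clip, flag the ones that look synthetic. Here’s a deliberately toy Python sketch of that shape — the features, numbers, and threshold are all invented for illustration and bear no resemblance to production deepfake detection:

```python
import random

# Toy premise (invented): real footage yields "artifact scores" centered
# near 0; generated footage leaves artifacts that push the scores higher.

def frame_features(is_fake, n=16):
    """Simulate per-frame artifact scores for one clip."""
    center = 0.6 if is_fake else 0.0
    return [random.gauss(center, 0.3) for _ in range(n)]

def detect(features, threshold=0.3):
    """Flag a clip as fake when its mean artifact score crosses the line."""
    return sum(features) / len(features) > threshold

random.seed(42)
clips = [(frame_features(fake), fake) for fake in [True, False] * 50]
correct = sum(detect(f) == fake for f, fake in clips)
print(f"flagged correctly: {correct}/{len(clips)}")
```

The arms-race part is what the sketch leaves out: as soon as a detector like this works, the fakers tune their generators to shrink the artifact gap, and the threshold stops separating anything — which is why detection and generation keep escalating together.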

That’s in addition to the Shut the ‘Bots Up Act of 2019 I proposed last week, which would limit First Amendment rights to actual human beings.

It’s weak, but it’s the best I have.

How about you?