Three threads, one conclusion:

Thread #1: In a recent advertorial (“Stop Using Excel, Finance Chiefs Tell Staffs,” Tatyana Shumsky, 3/31/2018), The Wall Street Journal proved once again that, as someone once said, if you ignore the lessons of history you’re doomed to repeat the 7th grade.

Dan Bricklin invented the electronic spreadsheet back in 1979. It was immediately and wildly popular, for some very simple reasons: It was incredibly versatile; you could use it to think something through by literally visualizing it; and, when IT responded as it usually does to requests for small solutions — not a good enough business case — users could ignore IT and solve their own problems, right now.

The Wall Street Journal’s story tells the usual tales of spreadsheets gone wild, with their high error rates and difficulties in consolidating information. What were those fools thinking, using Excel for <insert Excel-nightmare-case here>!?!

I was nowhere near the place and I can tell you exactly what they were thinking. They were thinking they had a job to do and the alternatives were (1) Excel, and (2) … uh, Excel.

The business case for the solutions extolled in The Wall Street Journal story was that the Excel-based solutions caused problems. Had users not solved their problems with Excel first, they’d still have no business case.

When Excel is the problem you can be sure the pre-Excel problem was much bigger.

Thread #2: One of my current consulting areas is application portfolio rationalization. It’s usually about enterprise applications that number in the hundreds, but sometimes clients want to consolidate desktop applications that, in large enterprises, easily number in the thousands, not including all of the applications masquerading as Excel spreadsheets.

It’s a shocking statistic, and a support nightmare!

Only it isn’t a shocking statistic at all. A typical Fortune 500 corporation might have 50,000 or more employees. With 50,000 employees, what are the odds there aren’t at least a couple of thousand different processes that might be improved through automation IT will never get around to?

It isn’t a support nightmare either. For the most part the applications in question are used by a dozen or fewer employees who are almost entirely self-supporting.

Support isn’t the problem. Lack of control is the problem. And, in highly regulated industries, lack of control is a real problem corporate compliance needs to solve. It needs to document not only that a given business function’s outputs are correct, but that its processes and supporting tools ensure they’re correct.

On top of which, information security needs to ensure applications with gaping holes are kept off the network, and that applications stay properly patched so that new vulnerabilities are addressed as they're detected.

All of this is certainly harder when each business function solves its own problems, but it’s hardly impossible.

And it’s much easier when IT is an active partner that helps business functions solve their own problems.

Thread #3: Once upon a time I was part of a team that redesigned our company’s CapEx governance process. We hit upon a novel idea: that our job wasn’t to prevent bad ideas from leaking through. It was to recognize good ideas and help them succeed.

It turned out we were on target. What we found was that bad ideas that needed screening out were few and far between. Good ideas explained badly? We saw plenty of those.

Tying the threads together: Large enterprises have lots of moving parts, which means small problems are real, worth solving, and too numerous for IT to handle on its own. Users engage in “rogue IT” to make their part of the business more effective, because they can and they should. IT ought to find a way to help their good ideas succeed instead of assuming they’re all pursuing bad ideas that have to be stopped.

The KJR solution: create a Certified Power User program (CPU — catchy, isn’t it?). Certified Power Users will understand the basics of normalized design so they can use MS Access instead of spreadsheets when they have a database problem to solve. They’ll know how to evaluate solutions professionally, so they don’t buy whatever looked flashy at a trade show. They’ll also know how to keep solutions patched, to minimize vulnerabilities.

And, they’ll keep an inventory of the small solutions they create and share it with IT.

In exchange, they’ll have administrative privileges for their PCs, and those of the users they support.

When you’re trying to persuade, “Let us help” is a more powerful message than “No you can’t.”

We need reliable bot detectors.

The irresistible subject is Facebook, about which you’ve probably read more than enough to make your eyes glaze over. Nonetheless, bear with me, because, as pointed out in this space not long ago, the central problem isn’t data privacy or policy enforcement failures.

No, bots, not Facebook’s data usage policies and violations thereof, are the central problem here. The reason they’re the central problem is that bots scale. Human beings don’t.

And just as Twitter’s failure to implement and deploy bot detectors directly led to zillions of bot-amplified tweet-storms during the 2016 election, so bots are the reason 50 million Facebook subscribers were caught up in the latest fiasco.

Bots and their detection and prevention are the big under-reported issue here, because until it’s addressed, even the most ingenious terms-of-use policies will have all the impact of an eye dropper in a forest fire.

Even if you don’t use Facebook, you and the business you support might nonetheless be on the front lines of the war against the bot apocalypse.

Bots scale. Humans don’t. That’s at the core of Facebook’s data breach. That’s because the initial breach wasn’t a breach at all. A researcher paid a group of Facebook users to take a personality test and to share personal information … a perfectly legal transaction.

Then came the bots, in the form of a crawler that, starting with this list of identified Facebook users, navigated their networks so as to harvest information from 50 million users who hadn’t given their permission.

This is the nature of social networks: They are networks, which means that from any node you can navigate to any other node.

If the aforementioned researcher were to personally try to harvest data from 50 million connected Facebook subscribers, my back-of-the-envelope calculations say Facebook would have ceased to exist centuries before he finished the job.
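The back-of-the-envelope arithmetic is easy to reproduce. A minimal sketch, assuming a human needs about five minutes per profile (my assumption, not a figure from the column):

```python
# Rough check of the "centuries" claim: how long would a single human
# need to harvest 50 million profiles by hand?
profiles = 50_000_000
minutes_per_profile = 5  # assumed; opening, reading, and recording each profile

total_minutes = profiles * minutes_per_profile
years = total_minutes / (60 * 24 * 365)  # minutes -> years, no breaks, no sleep
print(f"{years:.0f} years")  # → 476 years
```

Even at one minute per profile, working around the clock, the job takes roughly a century.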

But add bots to the mix and you get a very different result. They can crawl the nodes of a network orders of magnitude more quickly than a human can. That’s how they’re able to harvest personal information from millions after only receiving permission from a relative handful. Facebook purportedly disabled the ability to harvest friend data from its API in 2015. All this means is that instead of invoking the API, bots have to screen-scrape instead, which in turn means the bot is impersonating a human being.
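The crawl itself is nothing exotic: it's a breadth-first traversal of the friend graph, starting from the users who gave permission. A minimal sketch, in which `get_friends` and `get_profile` are stand-ins for whatever the bot actually uses (an API call or a human-impersonating screen-scrape), not Facebook's real interfaces:

```python
from collections import deque

def harvest(seed_users, get_friends, get_profile, limit=50_000_000):
    """Breadth-first crawl: start from users who consented, then walk
    the friend graph harvesting everyone reachable from them."""
    seen = set(seed_users)
    queue = deque(seed_users)
    harvested = {}
    while queue and len(harvested) < limit:
        user = queue.popleft()
        harvested[user] = get_profile(user)  # data most of these users never agreed to share
        for friend in get_friends(user):
            if friend not in seen:
                seen.add(friend)
                queue.append(friend)
    return harvested

# Toy friend graph: one permitted seed ("a") reaches all four users.
graph = {"a": ["b", "c"], "b": ["d"], "c": [], "d": []}
data = harvest(["a"], graph.get, lambda u: {"name": u})
print(sorted(data))  # → ['a', 'b', 'c', 'd']
```

The point of the sketch is the ratio: one consenting seed, four harvested profiles — and on a real social graph that fan-out compounds with every hop.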

Add this: Like it or not, we’re rapidly mastering the discipline predicted in Isaac Asimov’s Foundation Trilogy. He called it “psychohistory,” and its practitioners knew so much about human psychology that they could manipulate people to do just about anything. Asimov optimistically made psychohistorians a secret, benevolent group. Unsurprisingly, our actual psychohistorians are using their techniques to create robotic human impersonators that manipulate actual humans more for power and profit than the greater good.

Why would we expect anything else?

If you’re wearing your business/IT priority-setter hat right now, my best advice is, sadly enough, don’t unilaterally disarm. Your competitors are taking, or soon will take, advantage of these techniques and technologies to sell more products and services. From this perspective you’re in an arms race. If you aren’t actively monitoring developments in these areas and working with the business strategy team to see how you can profit from them, it won’t be long before you’re replaced by someone who understands these matters.

But if you’re wearing your human-who-doesn’t-want-the-bot-apocalypse hat, you might wonder why Facebook, which is investing heavily in artificial intelligence research and development, doesn’t devote more of its R&D budget to bot detection … like, for example, any of it?

My guess: Facebook is investing heavily in human impersonation. It’s in the bot business … chatbot technology, for example … so why would it also develop bot detection technology?

Especially when its customers … businesses … see direct financial benefits from being able to deploy convincing chatbots and other human impersonations and no obvious profit from detecting such things.

Because make no mistake about it, you might be a Facebook user, but you aren’t a Facebook customer. Facebook follows the standard media business model. As pointed out in this space back in 2002 in the context of newspapers and television, when it comes to most media, you aren’t the customer. You’re the product. And in the world of business, it’s the customer who’s always right.

Products like us enjoy no such privileges.