Technically, they’re right.

I’m talking about Gartner and its new forecast, that “… by 2021, CIOs Will Be as Responsible for Culture Change as Chief HR Officers.”

They’re technically correct. Chief HR Officers aren’t responsible for culture change right now, and won’t be in 2021. Chief Information Officers also won’t be responsible for it in 2021, making them exactly as responsible as CHROs.

Why am I so sure? It’s because of the nature of culture, discussed here many times and codified in both Leading IT and Bare Bones Change Management. Culture is the learned behavior people exhibit in response to their environment. Among employees, most of the environment each employee responds to is the behavior of the employees with whom they work.

And not all of these employees have an equal impact. Those who supervise and manage have more impact than those who don’t.

So today, tomorrow, and in 2021, employees’ managers will be the ones who have to change the culture, accomplishing this by changing their own behavior. Not HR, not the CIO. Every manager in the company.

Gartner’s forecast begins with the proposition that, “Successful Digital Transformation Initiatives Must Be Accompanied by Culture Changes.”

Which isn’t wrong. No matter how you define “digital,” it can’t succeed without a radical change to most business cultures.

The illogic starts shortly thereafter, with an assertion that the mission and values of an organization usually fall into the remit of HR.

There’s only one counterargument, but it’s compelling: WHAT?!?!

HR often does take charge of the dreaded Mission Statement. But were you to take a random sample of corporate mission statements and compare them to those corporations’ actual missions, you’d find the correlation between the two is at best a minuscule statistical artifact, nothing more.

Asserting that HR is responsible for the corporate mission disqualifies Gartner as an advisor regarding How Things Work. (If you’re looking for a qualified advisor you know who to call …)

As for HR owning culture change, yes, smart CEOs, having superior CHROs, will consult with them and involve them in operationalizing the digital culture change. And increasingly, assuming they’ve also hired superior CIOs, they’ll consult with them, involving them in defining what a digital culture looks and feels like.

But consultation and involvement aren’t the same thing as delegation and authority, and any CEO willing to delegate the business culture to anyone else is misguided — misguided because it abrogates their single most important responsibility.

And more misguided because it can’t be done. Business culture is the learned behavior employees exhibit in response to their environment and in particular in response to their line manager’s behavior.

The company’s management culture is the learned behavior line managers exhibit in response to their environment, and in particular in response to their managers’ behavior.

Which in turn is a response to middle management behavior, which is connected to the ankle bone, which is connected to the thigh bone, a chain of ossium that inexorably ends up in the CEO’s office for the same reason that when you fall, the direction you go is down:

That’s how the world is put together.

But the fallacy starts upstream from there, with the culture change needed most for digital transformations to succeed. It’s in the executive suite, as I recently explained (he modestly pointed out) on CIO.com (“Digital transformation’s dark secret,” 10/31/2018). Neither the CIO (nor the Chief Digital Officer, if your company has one) nor the CHRO is going to lead an executive suite culture change.

Who is? Gartner needs to pick up the clue phone about this, because (it’s time for a blinding flash of the obvious) that’s the CEO’s job.

What’s the essence of the executive suite culture change? That, of course, depends on the organization in question and its current situation. One place to look is something we discussed last week: the lack of respect given to what are usually called “intangible benefits.”

Hidden among the benefits of digital strategies and transformations is a radical change in management thinking. In the industrial age of business, tangible benefits, which is to say direct financial benefits, usually in the form of cost-cutting, were what mattered. Everything else was a means to that end.

Digital strategies, in contrast, focus, or at least should focus, on competitive advantage and what gives it to you. While in the end tangible financial benefits do happen, they’re a byproduct, nothing more.

So here’s the scorecard: Gartner is right about digital transformations requiring a change in corporate culture. I’m happy for Gartner that its analysts finally figured this out.

As for how to make it happen? Maybe, if its analysts start to read KJR, they’ll figure that out too someday.

They’ll probably take credit for it when they do.

Irony fans, rejoice. AI has entered the fray.

More specifically, the branch of artificial intelligence known as self-learning AI, also known as machine learning, sub-branch neural networks, is taking us into truly delicious territory.

Before getting to the punchline, a bit of background.

“Artificial Intelligence” isn’t a thing. It’s a collection of techniques mostly dedicated to making computers good at tasks humans accomplish without very much effort. Tasks like:

- Recognizing cats.
- Identifying patterns.
- Understanding the meaning of text (what you’re doing right now).
- Turning speech into text, after which see previous entry (what you’d be doing if you were listening to this as a podcast, which would be surprising because I no longer do podcasts).
- Applying a set of rules or guidelines to a situation so as to recommend a decision or course of action, like, for example, determining the best next move in a game of chess or go.

Where machine learning comes in is making use of feedback loops to improve the accuracy or efficacy of the algorithms used to recognize cats and so on.
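To make “feedback loop” less abstract, here’s a toy sketch in Python. It’s illustrative only, with made-up data and nothing resembling production-scale machine learning: the program guesses, measures how wrong the guess was, and nudges its parameters so the next guess is less wrong. Repeat a few thousand times and the algorithm has “learned.”

```python
# A minimal sketch of the machine-learning feedback loop: predict,
# measure the error, adjust the model to shrink it, repeat.

# Toy training data: inputs and the "right answers" (y = 2x + 1).
data = [(x, 2 * x + 1) for x in range(10)]

w, b = 0.0, 0.0        # model parameters, initially ignorant
learning_rate = 0.01

for epoch in range(1000):
    for x, target in data:
        prediction = w * x + b          # 1. predict
        error = prediction - target     # 2. compare to the feedback
        w -= learning_rate * error * x  # 3. adjust to reduce the error
        b -= learning_rate * error

print(f"learned w={w:.2f}, b={b:.2f}")  # converges toward w=2, b=1
```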

Along the way we seem to be teaching computers to commit sins of logic, like, for example, the well-known fallacy of mistaking correlation for causation.

Take, for example, a fascinating piece of research from the Pew Research Center that compared how often men and women appear in Google image searches for various job categories with the equivalent U.S. Department of Labor percentages (“Searching for images of CEOs or managers? The results almost always show men,” Andrew Van Dam, The Washington Post’s Wonkblog, 1/3/2019).

It isn’t only CEOs and managers, either. The research showed that, “…In 57 percent of occupations, image searches indicate the jobs are more male-dominated than they actually are.”

While we don’t know exactly how Google image searches work, somewhere behind all of this the Google image search AI must have discovered some sort of correlation between images of people working and the job categories those images typify. The correlation led to the inference that male-ness causes CEO-ness; also, strangely, bartender-ness and claims-adjuster-ness, to name a few other misfires.
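Here’s the problem in miniature, with hypothetical numbers I’ve invented for illustration: if the images a crawler finds for “CEO” skew male, a ranking model that simply maximizes the odds of matching its training data won’t just reproduce the skew, it will amplify it toward 100 percent.

```python
from collections import Counter

# Hypothetical figures for illustration only: suppose 72% of the CEO
# photos a crawler finds show men, whatever the real workforce looks
# like.
training_labels = ["man"] * 72 + ["woman"] * 28

# The accuracy-maximizing single guess is the majority label, so a
# 72/28 correlation in the data becomes a near-100/0 output. The model
# has quietly converted "correlated with" into "is".
majority = Counter(training_labels).most_common(1)[0][0]
print(f"top-ranked result for 'CEO': {majority}")  # -> man
```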

Skewed Google occupation image search results are, if not benign, probably quite low on the list of social ills that need correcting.

But it isn’t much of a stretch to imagine law-enforcement agencies adopting similar AI techniques, resulting in correlation-implies-causation driven racial, ethnic, and gender-based profiling.

Or, closer to home, to imagine your marketing department relying on equivalent demographic or psychographic correlations, leading to marketing misfires when targeting messages to specific customer segments.

I said the Google image results must have come from some sort of correlation technique, but that isn’t entirely true. It’s just as possible Google is making use of neural network technology, so called because it roughly emulates how AI researchers imagine the human brain learns.

I say “roughly emulates” as a shorthand for seriously esoteric discussions as to exactly how it all actually works. I’ll leave it at that on the grounds that (1) for our purposes it doesn’t matter; (2) neural network technology is what it is whether or not it emulates the human brain; and (3) I don’t understand the specifics well enough to go into them here.

What does matter about this is that when a neural network (the technical variety, not the organic version) learns something or recommends a course of action, there doesn’t seem to be any way of getting a read-out as to how it reached its conclusion.

Put simply, if a neural network says, “That’s a photo of a cat,” there’s no way to ask it “Why do you think so?”

Okay, okay, if you want to be precise, it’s quite easy to ask it the question. What you won’t get is an answer, just as you won’t get an answer if it recommends, say, a chess move or an algorithmic trade.
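If you want to see why the question has no answer, here’s a deliberately tiny neural network in Python with random stand-in weights (a real one has millions, tuned by training rather than invented for a column). Its verdict is nothing but arithmetic over those weights, so the only “explanation” on offer is the weights themselves:

```python
import math
import random

random.seed(0)

# A deliberately tiny "trained" network: 3 inputs, 4 hidden units,
# 1 output. Real networks differ mainly in having millions of weights.
w1 = [[random.uniform(-1, 1) for _ in range(4)] for _ in range(3)]
w2 = [random.uniform(-1, 1) for _ in range(4)]

def sigmoid(z):
    return 1 / (1 + math.exp(-z))

def predict(features):
    # Hidden layer: weighted sums of the inputs, squashed to (0, 1).
    hidden = [sigmoid(sum(f * w for f, w in zip(features, col)))
              for col in zip(*w1)]
    # Output: a single number. That's the network's entire verdict.
    return sigmoid(sum(h * w for h, w in zip(hidden, w2)))

score = predict([0.2, 0.7, 0.1])  # stand-in for image features
print(f"cat-ness: {score:.2f}")
# Ask "why?" and all it can offer is w1 and w2: the reasoning is
# smeared across the weights, with no human-readable rationale.
```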

Which gets us to AI’s entry into the 2019 irony sweepstakes.

Start with big data and advanced analytics. Their purpose is supposed to be moving an organization’s decision-making beyond someone in authority “trusting their gut,” to relying on evidence and logic instead.

We’re now on the cusp of hooking machine-learning neural networks up to our big data repositories so they can discover patterns and recommend courses of action through more sophisticated means than even the smartest data scientists can achieve.

Only we can’t know why the AI will be making its recommendations.

Apparently, we’ll just have to trust its guts.

I’m not entirely sure that counts as progress.