We took a break from the Minnesota winter, so I decided to take a break from KJR, too. It was a timely opportunity: Vox recently published “Intellectual humility: the importance of knowing you might be wrong” (Brian Resnick, 1/4/2019), which is well worth your time and attention.

It’s also an opportunity to say, once again with feeling, that I told you so! Yes, it’s ungraceful. Still, I did get there first, with the piece that follows, first published in 2007, along with this follow-up article that focused more on what you can do about it. – Bob

—————————

Learn from your mistakes.

It’s barely adequate advice. You can fail in a thousand ways. Learn from one and, like bottles of beer on a wall, you still have 999 left.

Compare that to what you can discover from success. Learning how to avoid one route to failure leaves you many ways to fail again. Learn how to succeed and you succeed.

Learning from mistakes matters. Learning from successes is vital.

So here are two questions to ponder: Why are most organizations more than willing to repeat their mistakes? And why are they so unwilling to learn from their successes?

These are two entirely different questions.

One reason organizations refuse to learn from their mistakes is well-known and obvious: To learn from a mistake, the organization’s decision-makers first have to acknowledge it. That’s a problem in our winning-is-the-only-thing, hold-people-accountable, lean-and-mean (really, famished and feeble) business culture.

We might encourage risk-taking, but that doesn’t mean we’re willing to accept a little failure now and then. That we have to redefine “risk” to mean “sure-things-only” is a small price to pay.

As is making the same mistake over and over.

Another reason organizations refuse to learn from their mistakes is more subtle: Very often, those who make the mistakes are also those who define the metrics that measure success. This might not seem to be a problem, but it is, because of the three fallacies of business measurement: Measure the right things wrong and you’ll get the wrong results; measure the wrong things, right or wrong, and you’ll get the wrong results; and anything you don’t measure you don’t get.

Example: a CIO presided over an enterprise with four business units and established measures of success for the four service desks that supported them. Unsurprisingly, one of his key metrics was productivity, defined as the number of incidents resolved per technician.

One of the service desk managers seriously underperformed the others — productivity was truly awful. Here’s what he did wrong: He established a very effective program of end-user education. Because it was so effective, end-users in his business unit reported many fewer incidents.

The CIO held him accountable for his failure and praised the other service desk managers. His metrics defined failure as success, ensuring the perpetuation of a mistake — failing to educate the end-user community.

This really happened. It probably has really happened in company after company. It wouldn’t surprise me a bit to learn that someone has enshrined “maximizing technician productivity in service desk environments” as a best practice.
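To see how the arithmetic turns a success into a failure, here’s a back-of-the-envelope sketch with invented numbers (not the CIO’s actual figures): the unit that educated its users reports far fewer incidents, so its resolved-per-technician count craters even though its users are better off.

```python
# Invented numbers illustrating "incidents resolved per technician" as a metric.
units = {
    # unit: (incidents reported, incidents resolved, technicians)
    "A": (1200, 1150, 10),
    "B": (1100, 1060, 10),
    "C": (1250, 1190, 10),
    "D": (400, 395, 10),   # effective end-user education, so far fewer incidents
}

for name, (reported, resolved, techs) in units.items():
    productivity = resolved / techs        # the metric the CIO rewarded
    resolution_rate = resolved / reported  # a signal the metric never sees
    print(f"Unit {name}: {productivity:.0f} resolved per tech, "
          f"{resolution_rate:.0%} of reported incidents resolved")

# Unit D scores worst on the chosen metric despite serving its users best.
```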

This example also illustrates one reason businesses sometimes fail to learn from their successes: Metrics that define failure as success also define success as failure (if they don’t just ignore it completely).

For more than a decade, the business punditocracy has blathered incessantly about success being the creation of shareholder value. There’s a problem with shareholder value as a measure: It’s hard to know whether today’s rise in the price of a share of stock is a blip that’s due to actions that will harm a company’s long-term competitiveness, or is the result of a real improvement in the health of the enterprise.

Even worse, it isn’t clear that it matters. I created shareholder value today. Next year, or the year after that, is Someone Else’s Problem.

Just an opinion: The proper definition of business success is that it is sustainable. Never mind that sustainability is hard to measure. Never mind that it’s hard to recognize. It’s the only goal that matters.

If knowing what success looks like is hard, connecting actions to results is even harder. The actions that lead to sustainable success rarely produce immediate, dramatic results. Important change takes time and patience. By the time the impact of successful effort is visible, many business leaders will have given up on the effort.

Then there is the most common reason businesses refuse to learn from success: The Not Invented Here Syndrome (NIHS).

Very few enterprises reward managers for sharing their formulas for success with their peers. They don’t reward managers for emulating the practices of other managers either. Nor does emulating a peer do much to feed the average ego.

Being the first to spot a useful idea from outside the company looks and feels a lot like creativity. But if I borrow an idea from you, you get more credit and I get none. Where’s the value and satisfaction in that?

Why do businesses so rarely learn? The barriers are immense.

The miracle is that, occasionally, they do.

Irony fans, rejoice. AI has entered the fray.

More specifically, the branch of artificial intelligence known as machine learning (also called self-learning AI), and in particular its sub-branch, neural networks, is taking us into truly delicious territory.

Before getting to the punchline, a bit of background.

“Artificial Intelligence” isn’t a thing. It’s a collection of techniques mostly dedicated to making computers good at tasks humans accomplish without very much effort — tasks like: recognizing cats; identifying patterns; understanding the meaning of text (what you’re doing right now); turning speech into text, after which see previous entry (what you’d be doing if you were listening to this as a podcast, which would be surprising because I no longer do podcasts); and applying a set of rules or guidelines to a situation so as to recommend a decision or course of action, like, for example, determining the best next move in a game of chess or go.

Where machine learning comes in is in using feedback loops to improve the accuracy and efficacy of the algorithms that recognize cats and so on.
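To put a little flesh on “feedback loop,” here’s a deliberately tiny sketch: a one-parameter toy model (nothing remotely Google-scale) that predicts, measures how wrong it was, and nudges itself accordingly, over and over, until the error shrinks.

```python
# A toy model: prediction = weight * input. The loop below is the feedback:
# compute the error on known examples, adjust the weight to reduce it, repeat.
data = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2), (4.0, 7.8)]  # (input, observed output)
weight = 0.0
learning_rate = 0.01

for step in range(1000):
    # Average slope of the squared error with respect to the weight
    gradient = sum(2 * (weight * x - y) * x for x, y in data) / len(data)
    weight -= learning_rate * gradient  # nudge the model toward smaller error

print(f"learned weight: {weight:.2f}")  # ends up near 2, the pattern hidden in the data
```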

Along the way we seem to be teaching computers to commit sins of logic, like, for example, the well-known fallacy of mistaking correlation for causation.

Take, for example, a fascinating piece of research from the Pew Research Center that compared the frequencies of men and women in Google image searches of various job categories to the equivalent U.S. Department of Labor percentages (“Searching for images of CEOs or managers? The results almost always show men,” Andrew Van Dam, The Washington Post’s Wonkblog, 1/3/2019).

It isn’t only CEOs and managers, either. The research showed that, “…In 57 percent of occupations, image searches indicate the jobs are more male-dominated than they actually are.”

While we don’t know exactly how Google image searches work, somewhere behind all of this the Google image search AI must have discovered some sort of correlation between images of people at work and the job categories those images typify. The correlation led to the inference that male-ness causes CEO-ness; also, strangely, bartender-ness and claims-adjuster-ness, to name a few other misfires.
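Nobody outside Google knows what its pipeline actually does, so treat the following as a crude sketch of the general mechanism rather than a description of the real system: if a frequency found in the training data gets folded into the ranking, the skew doesn’t just survive, it tends to get amplified in the top results.

```python
from collections import Counter

# Invented data: labeled training images for a job-title query, 60/40 men to women.
training_labels = ["man"] * 60 + ["woman"] * 40
prior = Counter(training_labels)
total = sum(prior.values())

# Candidate images with roughly equal content-relevance scores.
candidates = [("man", 0.70), ("woman", 0.69), ("man", 0.68),
              ("woman", 0.67), ("man", 0.66), ("woman", 0.65)]

# Rank by relevance weighted by how often the training data associated the
# subject with the job: the correlation quietly becomes part of the ranking.
ranked = sorted(candidates, key=lambda c: c[1] * prior[c[0]] / total, reverse=True)

print([subject for subject, _ in ranked[:4]])  # ['man', 'man', 'man', 'woman']
```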

Skewed Google occupation image search results are, if not benign, probably quite low on the list of social ills that need correcting.

But it isn’t much of a stretch to imagine law-enforcement agencies adopting similar AI techniques, resulting in correlation-implies-causation driven racial, ethnic, and gender-based profiling.

Or, closer to home, to imagine your marketing department relying on equivalent demographic or psychographic correlations, leading to marketing misfires when targeting messages to specific customer segments.

I said the Google image results must have been the result of some sort of correlation technique, but that isn’t entirely true. It’s just as possible Google is making use of neural network technology, so called because it roughly emulates how AI researchers imagine the human brain learns.

I say “roughly emulates” as a shorthand for seriously esoteric discussions as to exactly how it all actually works. I’ll leave it at that on the grounds that (1) for our purposes it doesn’t matter; (2) neural network technology is what it is whether or not it emulates the human brain; and (3) I don’t understand the specifics well enough to go into them here.

What does matter about this is that when a neural network (the technical variety, not the organic version) learns something or recommends a course of action, there doesn’t seem to be any way of getting a read-out as to how it reached its conclusion.

Put simply, if a neural network says, “That’s a photo of a cat,” there’s no way to ask it “Why do you think so?”

Okay, okay, if you want to be precise, it’s quite easy to ask it the question. What you won’t get is an answer, just as you won’t get an answer if it recommends, say, a chess move or an algorithmic trade.
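If you want to see just how little “why” there is to extract, here’s a toy network in a few lines of Python, standing in for the real thing: its entire basis for calling something a cat is a set of numeric weights, and printing them is as close to an explanation as you can get.

```python
import random

random.seed(0)
inputs, hidden = 4, 3  # pretend the inputs are pixel-ish features of an image

# Random weights stand in for a trained network; training would change the
# numbers, not their inscrutability.
w_hidden = [[random.uniform(-1, 1) for _ in range(inputs)] for _ in range(hidden)]
w_output = [random.uniform(-1, 1) for _ in range(hidden)]

def classify(features):
    activations = [max(0.0, sum(w * f for w, f in zip(row, features)))
                   for row in w_hidden]
    score = sum(w * a for w, a in zip(w_output, activations))
    return "cat" if score > 0 else "not a cat"

print(classify([0.2, 0.9, 0.4, 0.1]))
print(w_hidden, w_output)  # the "explanation," such as it is
```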

Which gets us to AI’s entry into the 2019 irony sweepstakes.

Start with big data and advanced analytics. Their purpose is supposed to be moving an organization’s decision-making beyond someone in authority “trusting their gut,” to relying on evidence and logic instead.

We’re now on the cusp of hooking machine-learning neural networks up to our big data repositories so they can discover patterns and recommend courses of action through more sophisticated means than even the smartest data scientists can achieve.

Only we can’t know why the AI will be making its recommendations.

Apparently, we’ll just have to trust its guts.

I’m not entirely sure that counts as progress.