I suspect you’re no more in the mood this week to read about business strategy, IT strategy, the intersection of business and IT … my usual stuff … than I was to write about it. And I vowed not to talk about anyone named Trump, Pence, Biden, or Harris, on the grounds that the odds of my having anything new to say are vanishingly small.

So I took one from the vault, originally published three years ago. For one reason or another it seems fitting this week. Hope you enjoy it.

# # #

The problem with quadrant charts isn’t that they have two axes and four boxes. It’s the magic part — why their contents are what they are.

Well, okay, that’s one of the problems. Another is that once you (you being me, that is) get in the quadrant habit, new ones pop into your head all the time.

Like, for example, this little puppy that came to me while I was watching Kong: Skull Island as my Gogo inflight movie.

It’s a new, Gartnerized test of actorhood. Preposterousness is the vertical axis. Convincing portrayal of a character is the horizontal. In Kong, Samuel L. Jackson, Tom Hiddleston, and John C. Reilly made the upper right. I leave it to KJR’s readers to label the quadrants.

While this might not be the best example, quadrant charts can be useful for visualizing how a bunch of stuff compares. Take, for example, my new Opinionization Quadrant. It visualizes the different types of thinking you and I run across all the time … and, if we’re honest with each other, the ones we ourselves engage in as well.

It’s all about evidence and certainty. No matter the subject, more and better evidence is what defines expertise and should be the source of confident opinion.

Less and worse evidence should lead to skepticism, along with a desire to obtain more and better evidence unless apathy prevails.

When more and better evidence doesn’t overcome skepticism, that’s just as bad as prejudice and as unfounded as belief. It’s where denial happens: refusing to change your position on a subject in the face of overwhelming evidence.

Rationality happens when knowledge and certainty positively correlate. Except there’s so much known about so many subjects that, with the possible exception of Professor Irwin Corey (the world’s foremost authority), we should all be completely skeptical about just about everything.
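If you want to draw the Opinionization Quadrant yourself, a few lines of Python will do it. Treat this as an illustrative sketch only: the quadrant labels follow the descriptions above, while the choice of matplotlib, the axis ranges, and the label placement are arbitrary choices on my part.

```python
# An illustrative sketch of the Opinionization Quadrant: evidence on the
# horizontal axis, certainty on the vertical. Quadrant names follow the
# text above; ranges and placement are arbitrary.
import matplotlib.pyplot as plt

fig, ax = plt.subplots(figsize=(6, 6))

# The two axis lines that split the chart into four boxes.
ax.axhline(0.5, color="black", linewidth=1)
ax.axvline(0.5, color="black", linewidth=1)

# Label each quadrant as described in the text.
quadrants = {
    (0.25, 0.75): "Belief /\nPrejudice",  # weak evidence, high certainty
    (0.75, 0.75): "Expertise",            # strong evidence, high certainty
    (0.25, 0.25): "Skepticism",           # weak evidence, low certainty
    (0.75, 0.25): "Denial",               # strong evidence, still unconvinced
}
for (x, y), label in quadrants.items():
    ax.text(x, y, label, ha="center", va="center", fontsize=12)

ax.set_xlabel("Evidence (less and worse to more and better)")
ax.set_ylabel("Certainty (skeptical to confident)")
ax.set_xticks([])
ax.set_yticks([])
ax.set_xlim(0, 1)
ax.set_ylim(0, 1)
ax.set_title("Opinionization Quadrant")
plt.show()
```

Swap in your own axes and labels and you’ve got the actorhood chart from Kong, too.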

So we need to allow for once-removed evidence — reporting about those subjects we lack the time or, in some cases, the genius to become experts in ourselves.

No question, once-removed evidence — journalism, to give it a name — does have a few pitfalls.

The first happens when we … okay, I start my quest for an opinion in the Belief/Prejudice quadrant. My self-knowledge extends to knowing I’m too ignorant about the subject to have a strongly held opinion, but not to acknowledging to myself that my strongly held opinion might be wrong.

And so off I go, energetically Googling for ammunition rather than illumination. This being the age of the Internet and all, someone will have written exactly what I want to read, convincingly enough to stay within the boundaries set by my confirmation bias.

This isn’t, of course, actual journalism but it can look a lot like it to the unwary.

The second pitfall is failing to understand the nature and limits of reportage.

Start here: Journalism is a profession. Journalists have to learn their trade. And like most professions it’s an affinity group. Members in good standing care about the respect and approval of other members in good standing.

So when it comes to reporting on, say, social or political matters, professional reporters might have liberal or conservative inclinations, but they’re less likely to root their reporting in their political affinity than you or I would be.

Their affinity, when reporting, is to their profession, not to where they sit on the political spectrum. Given a choice between supporting politicians they agree with and publishing an exclusive story damaging to those same politicians, they’ll go with the scoop every time.

IT journalism isn’t all that different, except that instead of being accused of liberal or conservative bias, IT writers are accused of being Apple, Microsoft, Oracle, or open-source fanboys.

Also: As with political writing, there’s a difference between professional reporters and opinionators. In both politics and tech, opinionators are much more likely to be aligned to one camp or another than reporters. Me too, although I try to keep a grip on it.

And in tech publishing the line separating reporting and opinion isn’t as bright and clear as with political reporting. It can’t be. With tech, true expertise often requires deep knowledge of a specific product line, so affinity bias is hard to avoid. Also, many of us who write in the tech field aren’t degreed journalists. We’re pretty good writers who know the territory, so our journalistic affinity is more limited.

There’s also tech pseudojournalism, where those who are reporting and opinionating (and, for that matter, quadrant-izing) work for firms that receive significant sums from those being reported on.

As Groucho said so long ago, “Love goes out the door when money comes innuendo.”

# # #

Is your organization performing as well as it should? As it could?

Do you know? Can you know?

Random notions on the subject:

Notion #1: If you’re confident your organization is performing as well as it could, you’re right by definition. Neither you nor anyone reporting to you will try to improve it because why would you?

If, on the other hand, you’re confident it could be better and you’re wrong, you might do some damage, because if your organization is already doing as well as possible, the best any change can achieve is neutrality. Every other outcome leaves you worse off than where you started.

Notion #2: Benchmarks were popular because an executive could use them to “prove” a recalcitrant manager wasn’t performing as well as possible. They were flawed because they rarely avoided the sin of apples-to-basket-of-randomly-assembled-fruit comparisons.

“Best practices” have replaced them as the flogging tool of choice for those whose closest level of descent is 50,000 feet (15,240 meters if you’ve adopted altitude-measurement best practices).

Best practices are popular because what they prescribe rarely matches how we do things around here. Which means the manager responsible for following less-than-best practices surely deserves a whuppin’.

True story: I once saw a consultant’s PowerPoint slide that promised to “… institute best practices followed by a program of continuous improvement.”

Ahem. If the practices are best they can’t be improved. If they can be improved, continuously or otherwise, they aren’t best yet.

As the KJR Manifesto pointed out, there are no best practices, only practices that fit best. Most so-called best practices are one-size-fits-no-one off-the-rack pants. They’re too small for your waist and too short for your inseam, but your boss insists you wear them anyway.

Notion #3: Fixing the root cause isn’t always the best way to deal with a problem.

Imagine, for example, that you, like me, suffer from cluster headaches. Your research determines the root cause is spontaneous activation of nociceptive pathways.

So what. We can’t do anything about the root cause. I don’t even know what the root cause means.

What we can do is take Sumatriptan as soon as a headache starts and wait 15 minutes or so for it to take effect.

Sometimes, suppressing symptoms is the best alternative. Not a good alternative, mind you, but the best one available.

Notion #4: A common and pernicious barrier to organizational change is the Assumption of the Present. It’s the Assumption of the Present when employees are sure a proposed change will fail because otherwise it would have already happened.

The Assumption of the Present is a close cousin of “We tried that and it didn’t work,” except that with that one you can at least suggest, “Maybe we did it wrong.”

The Assumption of the Present, in contrast, is circular. And being circular there’s no entry point you can use to rebut it.

Notion #5: Agile isn’t a methodology. It isn’t a family of methodologies. Well, it is, but more importantly it’s a way of thinking about how to accomplish things.

It’s the practical application of Gall’s Law: “A complex system that works is invariably found to have evolved from a simple system that worked. A complex system designed from scratch never works and cannot be patched up to make it work. You have to start over with a working simple system.”

What it means to you: If you want to try to improve how your organization functions and don’t want to risk doing more harm than good, figure out ways to improve it one small increment at a time. As you do, consider that each increment should be:

  • Easy to explain: If it’s complicated it isn’t incremental.
  • Easy to integrate: The increment shouldn’t disrupt how the rest of the work gets done, or at least it shouldn’t disrupt it badly.
  • Contained: Its scope should be limited to your organization. Processes have inputs, outputs, and methods. Incremental changes should focus on methods, unless a source of your inputs or consumer of your outputs wants to collaborate.
  • Non-limiting: To the extent you can tell, implementing the increment shouldn’t close off potentially desirable future changes.
  • Reversible: If it doesn’t work out, you should be able to stop doing it without difficulty.

Last Notion: Some managers are good at operations — at keeping the joint running. Others are good at making change happen — at making tomorrow look different from yesterday.

Neither skill is good enough by itself.

Managers who excel at operations but can’t make change happen will lead a long, slow slide into obsolescence. But those who excel at change without being competent at operations have the opposite problem.

They won’t survive until the future gets here.