The problem with quadrant charts isn’t that they have two axes and four boxes. It’s the magic part — why their contents are what they are.

Well, okay, that’s one of the problems. Another is that once you (you being me, that is) get in the quadrant habit, new ones pop into your head all the time.

Like, for example, this little puppy that came to me while I was watching Kong: Skull Island as my Gogo inflight movie.

It’s a new, Gartnerized test of actorhood. Preposterousness is the vertical axis. Convincing portrayal of a character is the horizontal. In Kong, Samuel L. Jackson, Tom Hiddleston, and John C. Reilly made the upper right. I leave it to KJR’s readers to label the quadrants.

While this might not be the best example, quadrant charts can be useful for visualizing how a bunch of stuff compares. Take, for example, my new Opinionization Quadrant. It visualizes the different types of thinking you and I run across all the time … and, if we’re honest with each other, the ones we ourselves engage in as well.

It’s all about evidence and certainty. No matter the subject, more and better evidence is what defines expertise and should be the source of confident opinion.

Less and worse evidence should lead to skepticism, along with a desire to obtain more and better evidence unless apathy prevails.

When more and better evidence doesn’t overcome skepticism, that’s just as unfounded as belief and just as bad as prejudice. It’s where denial happens: confronted with overwhelming evidence, someone still refuses to change their position on a subject.

Rationality happens when knowledge and certainty positively correlate. Except there’s so much known about so many subjects that, with the possible exception of Professor Irwin Corey (the world’s foremost authority), we should all be completely skeptical about just about everything.

So we need to allow for once-removed evidence: reporting about those subjects we lack the time or, in some cases, the genius to become experts in ourselves.

No question, once-removed evidence — journalism, to give it a name — does have a few pitfalls.

The first happens when we … okay, I start my quest for an opinion in the Belief/Prejudice quadrant. My self-knowledge extends to knowing I’m too ignorant about the subject to have a strongly held opinion, but not to acknowledging to myself that my strongly held opinion might be wrong.

And so off I go, energetically Googling for ammunition rather than illumination. This being the age of the Internet and all, someone will have written exactly what I want to read, convincingly enough to stay within the boundaries set by my confirmation bias.

This isn’t, of course, actual journalism but it can look a lot like it to the unwary.

The second pitfall is misunderstanding the nature and limits of reportage.

Start here: Journalism is a profession. Journalists have to learn their trade. And like most professions it’s an affinity group. Members in good standing care about the respect and approval of other members in good standing.

So when it comes to reporting on, say, social or political matters, professional reporters might have liberal or conservative inclinations, but they’re less likely to root their reporting in their political affinity than you or I would be.

Their affinity, when reporting, is to their profession, not to where they sit on the political spectrum. Given a choice between supporting politicians they agree with and publishing an exclusive story damaging to those same politicians, they’ll go with the scoop every time.

IT journalism isn’t all that different, except that instead of being accused of liberal or conservative bias, IT writers are accused of being Apple or Microsoft (or Oracle, or open source) fanboys.

Also: As with political writing, there’s a difference between professional reporters and opinionators. In both politics and tech, opinionators are much more likely to be aligned to one camp or another than reporters. Me too, although I try to keep a grip on it.

And in tech publishing the line separating reporting and opinion isn’t as bright and clear as with political reporting. It can’t be. With tech, true expertise often requires deep knowledge of a specific product line, so affinity bias is hard to avoid. Also, many of us who write in the tech field aren’t degreed journalists. We’re pretty good writers who know the territory, so our journalistic affinity is more limited.

There’s also tech pseudojournalism, where those who are reporting and opinionating (and, for that matter, quadrant-izing) work for firms that receive significant sums from those being reported on.

As Groucho said so long ago, “Love goes out the door when money comes innuendo.”

Busy weekend – too busy to write a new KJR this week. So it’s re-run time once again. I don’t know if this one is timely or relevant, but I like it, which pretty much describes the entire governance process used to select something from the archives for you. – Bob


Evolutionary theory has to account for all the bizarre complexity of the natural world: the tail feathers of peacocks; the mating rituals of praying mantises; the popularity of Beavis and Butthead. One interesting question: Why do prey animals herd?

Herds are easy targets for predators. So why do animals join them?

One ingenious theory has it that even though the herd as a whole makes an easy target, each individual member is less likely to get eaten — they can hide behind the herd. One critter — usually old or infirm — gets eaten and the rest escape. When you’re solitary, your risk goes up.

Predators hunt in packs for entirely different reasons. Human beings, as omnivores, appear to have the instincts of both predators and prey: We hunt in packs, herd when in danger.

Which explains the popularity of “research reports” showing how many of our peers are adopting some technology or other. These reports show us how big our herd is and where it seems to be going. Infused with this knowledge we can stay in the middle of our herd, safely out of trouble.

And so it was that last week I found myself reading an “executive report” containing several dozen bar charts. A typical chart segmented respondents into five categories, and showed how many of the twenty or so “yes” responses fell into each one.

Academic journals impose a discipline – peer review – which usually catches egregious statistical nonsense. But while academic publication requires peer review, business publication requires only a printing press.

Which led to this report’s distribution to a large number of CIOs. I wonder how many of them looked at the bar charts, murmured, “No error bars,” to themselves, and tossed this information-free report into the trash.

We read over and over again about information glut. I sometimes wonder if what we really have is nonsense glut, with no more actual new information each year than a century ago.

Bar charts without error bars — those pesky black lines that show how uncertain we are about each bar’s true value — are only one symptom of the larger epidemic. We’re inundated with nonsense because we not only tolerate it, we embrace it.
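If you’re curious what honest error bars would do to a chart like the one in that report, here’s a minimal sketch in Python. The counts are hypothetical, chosen only to mimic roughly twenty responses spread across five categories:

```python
import numpy as np
import matplotlib.pyplot as plt

# Hypothetical counts in the spirit of the report described above:
# about twenty "yes" responses split across five categories.
categories = ["A", "B", "C", "D", "E"]
counts = np.array([7, 5, 4, 3, 1])
n = counts.sum()  # 20 respondents in all

# Estimated proportion per category, with a normal-approximation
# standard error for each. Crude, but enough to show the problem.
p = counts / n
se = np.sqrt(p * (1 - p) / n)

fig, ax = plt.subplots()
ax.bar(categories, p, yerr=1.96 * se, capsize=4)  # ~95% error bars
ax.set_ylabel("Share of responses")
ax.set_title("With n = 20, the uncertainty swallows the differences")
plt.show()
```

Run it and the intervals on neighboring bars overlap almost completely: with samples that small, none of the bar-to-bar differences mean a thing.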

Don’t believe me? Here’s a question: faced with a report like this and a critique by one of your analysts pointing out its deficiencies, would you say, “Thanks for the analysis,” as you shred the offending pages, or would you say, “Well, any information is better than none at all”?

Thomas Jefferson once said, “Ignorance is preferable to error,” and as usual, Tom is worth listening to. Next time you’re faced with some analysis or other take the time to read it critically. Look for sample sizes so small that comparisons are meaningless, like the bar charts I’ve been complaining about.

Also look for leading questions, like, “Would you prefer a delicious, flame-broiled hamburger, or a greasy, nasty looking fried chunk of cow?” (If your source has an axe to grind and doesn’t tell you the exact question asked, you can be pretty sure of the phrasing.)

Look for graphs presenting “data” with no hint as to how items were scored. How many graphs have you seen that divide the known universe into quadrants? You know the ones: every company is given a dot, the dots are all over the landscape, the upper right quadrant is “good”, and you have no clue why each dot landed where it did because the two axes both represent matters of opinion (“vendor stability” or “industry presence”).

Readers David Cassell and Tony Olsen, both statisticians, recently acquainted me with two measures, Data Density and the Data-Ink Ratio, from Edward Tufte’s wonderful book, The Visual Display of Quantitative Information.

To calculate the Data Density, divide the number of data points by the total graph area. You express the result in dpsi (data per square inch).

You calculate the Data-Ink Ratio by dividing the amount of ink used to display non-redundant data by the total ink used to print the graph. Use care when scraping the ink off the page — one sneeze and you’re out of luck.
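For the arithmetic-minded, here’s the same idea sketched in Python, no ink-scraping required. The numbers are made up; in practice you’d have to estimate them from the graphic itself:

```python
# Back-of-the-envelope versions of Tufte's two measures.

def data_density(n_data_points: int, graph_area_sq_in: float) -> float:
    """Data Density in dpsi: data points per square inch of graph."""
    return n_data_points / graph_area_sq_in

def data_ink_ratio(data_ink: float, total_ink: float) -> float:
    """Fraction of the graph's ink spent on non-redundant data."""
    return data_ink / total_ink

# One bar chart from the hypothetical report: five bars on a 3" x 4" panel.
print(f"{data_density(5, 3 * 4):.2f} dpsi")  # 0.42 -- not much data
# Suppose borders, gridlines, and shading account for most of the ink.
print(f"{data_ink_ratio(0.2, 1.0):.2f}")     # 0.20 -- mostly chartjunk
```

Either way you compute it, charts like the ones I’ve been complaining about score badly: lots of ink, hardly any data.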