Pop quiz!

Question #1: In the past 20 years, the proportion of the world population living in extreme poverty has (A) almost doubled; (B) remained more or less the same; (C) almost halved.

Question #2: Worldwide, 30-year-old men have spent 10 years in school. How many years have women of the same age spent in school? (A) 9 years; (B) 6 years; (C) 3 years.

The correct answers are C and A. If you got them wrong, you have a lot of company. Across a wide variety of groups worldwide, faced with these and many more questions with factual answers, people do far worse than they would by choosing responses at random.

Which brings us to the next addition to your KJR bookshelf: Factfulness: Ten Reasons We’re Wrong About the World — and Why Things Are Better Than You Think (Hans Rosling with Ola Rosling and Anna Rosling Rönnlund, Flatiron Books 2018). Unlike books that rely on cognitive science to explain why we’re all so illogical so often, Rosling focuses on the how of it. Factfulness is about the mistakes we make when data are available to guide us but, for one reason or another, we don’t consult them to form our opinions. Viewed through this lens, it appears we’re all prone to these ten bad mental habits:

  1. Gaps: We expect to find chasms separating one group from another. Most of the time the data show a continuum. Our category boundaries are arbitrary.
  2. Negativity: We expect news, and especially trends, to be bad.
  3. Extrapolation: We expect trend lines to be straight. Most real-world trends are S-shaped, asymptotic, or exponential.
  4. Fear: What we’re afraid of and what the most important risks actually are often don’t line up.
  5. Size: We often fall for numbers that seem alarmingly big or small, but for which we’re given no scale. In particular, we fall for quantities that are better expressed as ratios.
  6. Generalization: We often use categories to inappropriately lump unlike things together and fail to lump like things together. Likewise, we use them to imagine that an anecdote or individual is representative of a category we’ve more or less arbitrarily assigned them to, when it’s just as reasonable to consider them members of an entirely different group.
  7. Destiny: It’s easy to think people are in the circumstances they’re in because it’s inevitable. In KJR-land we’ve called this the Assumption of the Present.
  8. Single Perspective: Beware the hammer-and-nail error, although right-thinking KJR members know the correct formulation is “If all you have are thumbs, every hammer looks like a problem.” Rosling’s advice: Make sure you have a toolbox, not just one tool.
  9. Blame: For most people, most of the time, assigning it is our favorite form of root-cause analysis.
  10. Urgency: The sales rep’s favorite. In most situations we have time to think, if we only have the presence of mind to use it. While analysis paralysis can certainly be deadly, mistaking reasonable due diligence for analysis paralysis is at least as problematic.

The book certainly isn’t perfect. There were times when, adopting my Mr. Yeahbut persona, I wanted to strangle the author, or at least have the opportunity for a heated argument. Example:

Question #3: In 1996, tigers, giant pandas, and black rhinos were all listed as endangered. How many of these three species are more critically endangered today? (A) Two of them; (B) One of them; (C) None of them.

The answer is C — none are more critically endangered, which might lead an unwary reader to conclude we’re making progress on mass species extinction. It made me wonder why Rosling chose these three species and not, say, hawksbill sea turtles, Sumatran orangutans, and African elephants, all of which are more endangered than they were twenty years ago.

Yeahbut, this seems like a deliberate generalization error to me, especially as, in contrast to the book’s many data-supported trends, it provides no species-loss trend analysis.

But enough griping. Factfulness is worth reading just because it’s interesting, and surprisingly engaging given how hard it is to write about statistical trends without a soporific result.

It also illustrates well why big data, analytics, and business intelligence matter, providing cautionary tales of the mistakes we make when we don’t rely on data to inform our opinions.

I’ll finish with a Factfulness suggestion that would substantially improve our world, if only everyone would adopt it: In the absence of data it’s downright relaxing to not form, let alone express, strongly held opinions.

Not having to listen to them? Even more relaxing.

“Could you please stop saying there’s no such thing as an IT project?” a reader politely asked. “When I have switches/routers/servers that are out of support from their vendors and need to be replaced with no business changes I have to call these IT projects.”

I get this a lot, and understand why folks on the IT infrastructure side of things might find the phrase irritating.

And I agree that projects related to IT infrastructure, properly executed, result in no visible business change.

But (you did know “but” was hanging in the air, didn’t you?) … but, these projects actually do result in significant business change.

It’s risk prevention. These projects reduce the likelihood of bad things happening to the business: not being able to license and run software that’s essential to operating it, not being able to purchase and use hardware that’s compatible with strategic applications, and so on.

It’s important for business executives to recognize this category of business change project, if for no other reason than that none of us wants a recurrence of what happened to IT’s reputation following our successful prevention of Y2K fiascoes. Remember? Everyone outside IT decided nothing important or interesting had happened, and that’s if they didn’t conclude we were just making the whole thing up.

Successful prevention is, we discovered, indistinguishable from the absence of risk. So we need to put a spotlight on the business risks we’re preventing so everyone recognizes our successes when we have them.

Not to mention the need for everyone to be willing to fund them.

Which leads to a quick segue to IT architecture. Depending on the exact framework and source, it divides the IT stack into information systems architecture, subdivided into applications and data; and technology architecture, subdivided into platforms, infrastructure, and facilities.

Switches and routers, along with everything else related to networking, are infrastructure. With the exception of performance engineering, infrastructure changes ought to be invisible to everyone other than the IT infrastructure team responsible for their care and feeding.

Servers, though, belong to the platform sub-layer, along with operating systems, virtualization technology, development environments, database management systems … all of the stuff needed to build, integrate, and run the applications that are so highly visible to the rest of the business.
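If seeing the taxonomy laid out helps, here’s a minimal sketch in Python. The layer and sub-layer names come from the framework above; the example entries and the lookup function are mine, for illustration only.

    # A minimal sketch of the layering described above. The layer and
    # sub-layer names follow the column; the example entries are
    # illustrative assumptions, not a definitive catalog.
    IT_STACK = {
        "information systems architecture": {
            "applications": ["ERP", "CRM"],
            "data": ["customer master", "order history"],
        },
        "technology architecture": {
            "platforms": ["operating system", "DBMS", "virtualization",
                          "development environment"],
            "infrastructure": ["switches", "routers", "networking"],
            "facilities": ["data center"],
        },
    }

    def sub_layer_of(item: str) -> str:
        """Return the sub-layer an item belongs to."""
        for sub_layers in IT_STACK.values():
            for sub_layer, items in sub_layers.items():
                if item in items:
                    return sub_layer
        raise KeyError(item)

    print(sub_layer_of("switches"))  # infrastructure
    print(sub_layer_of("DBMS"))      # platforms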

The teams responsible for platform updates know from painful experience that while in theory layered architectures insulate business users from platform changes, in fact it often turns out that:

  • Code written for one version of a development environment won’t run in the new version.
  • The vendors of licensed COTS applications haven’t finished adapting their software to make it compatible with the latest OS or DBMS version.
  • Especially in the case of cloud migrations, which frequently lead to platform, infrastructure, and facilities changes, performance engineering becomes a major challenge. And as everyone who has ever worked in IT infrastructure management knows, poor application performance is terribly, terribly visible to the business.

Et cetera.

Not that these platform update challenges are always problems. They can also be opportunities for clearing out the application underbrush. Part of the protocol for platform updates is making sure all application “owners” (really, stewards) aren’t just informed of the change but are actively involved in the regression testing and remediation needed to make sure the platform change doesn’t break anything.

The opportunity: If nobody signs up as the steward for a particular application, retiring it shouldn’t be a problem.
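A minimal sketch of that logic, with an invented inventory (the application names and stewards are hypothetical):

    # Hypothetical platform-update protocol sketch: applications with a
    # steward get pulled into regression testing; steward-less
    # applications become retirement candidates.
    app_stewards = {
        "order-entry": "pat@example.com",
        "legacy-reporting": None,   # nobody has claimed stewardship
        "payroll": "chris@example.com",
    }

    to_test = sorted(app for app, steward in app_stewards.items() if steward)
    to_retire = sorted(app for app, steward in app_stewards.items() if not steward)

    print("Involve stewards in regression testing:", to_test)
    print("Retirement candidates:", to_retire)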

On a related topic, regular readers will recall that the only IT infrastructure metric that matters is the Invisibility Index. Its logic: Nobody notices the IT infrastructure unless and until something goes wrong.

Invisibility = success. Being noticed = failure.
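There’s no official formula for the Invisibility Index, but if you want one, a plausible computation (an assumption on my part, not an established definition) is the fraction of days nobody noticed the infrastructure:

    # Assumed definition: the Invisibility Index as the fraction of days
    # with no business-visible infrastructure incidents. 1.0 means
    # perfectly invisible, which is to say, success.
    def invisibility_index(days_in_period: int, days_noticed: int) -> float:
        return 1 - days_noticed / days_in_period

    print(f"{invisibility_index(365, 3):.3f}")  # 0.992: noticed three days all year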

Something else regular readers will recognize is that Total Cost of Ownership (TCO) is a dreadful metric, violating at least three of the 7 C’s of good metrics. TCO isn’t consistent, complete, or on a continuum: It doesn’t always go one way when things improve and the other when they deteriorate; it measures costs but not benefits; and it has no defined scale, so there’s no way to determine whether a given product’s TCO is good or bad.

But perhaps we should introduce a related metric. Call it TCI — the Total Cost of Invisibility. It’s how much of its operating budget a business needs to devote so those responsible for the IT infrastructure can continue to keep it invisible.
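The arithmetic is simple: divide invisibility spending by the operating budget. A quick sketch, with made-up figures:

    # TCI as defined above: the share of the operating budget spent
    # keeping the IT infrastructure invisible. The dollar figures are
    # invented for illustration.
    operating_budget = 250_000_000    # total annual operating budget
    invisibility_spend = 7_500_000    # lifecycle refreshes, patching, capacity work

    tci = invisibility_spend / operating_budget
    print(f"TCI = {tci:.1%}")         # TCI = 3.0%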

They’ll keep it invisible by running what aren’t IT projects, but are quite technical nonetheless.