Pop quiz!

Question #1: In the past 20 years, the proportion of the world population living in extreme poverty has (A) almost doubled; (B) remained more or less the same; (C) almost halved.

Question #2: Worldwide, 30-year-old men have spent an average of 10 years in school. How many years have women of the same age spent in school? (A) 9 years; (B) 6 years; (C) 3 years.

The correct answers are C and A. If you got them wrong, you have a lot of company. Across a wide variety of groups worldwide, faced with these and many more questions with factual answers, people do far worse than they would by choosing responses at random.
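To make the "worse than random" baseline concrete: a three-option question answered by a coin-flip equivalent yields about 33% correct, and Rosling's survey respondents typically scored below that. A minimal simulation of the random-guess baseline (hypothetical illustration, not from the book):

```python
import random

# Simulate guessing blindly on a three-option question.
# Expected accuracy is 1/3 -- the bar that survey respondents
# in Rosling's data often failed to clear.
trials = 100_000
correct = sum(random.choice("ABC") == "C" for _ in range(trials))
print(f"Random-guess accuracy: {correct / trials:.2%}")
```

Scoring below this baseline isn't ignorance in the sense of missing information; it's systematically wrong intuition, which is exactly the book's point.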

Which brings us to the next addition to your KJR bookshelf: Factfulness: Ten Reasons We’re Wrong About the World — and Why Things Are Better Than You Think (Hans Rosling with Ola Rosling and Anna Rosling Rönnlund, Flatiron Books 2018). Unlike books that rely on cognitive science to explain why we’re all so illogical so often, Rosling focuses on the how of it. Factfulness is about the mistakes we make when data are available to guide us but, for one reason or another, we don’t consult them to form our opinions. Viewed through this lens, it appears we’re all prone to these ten bad mental habits:

  1. Gaps: We expect to find chasms separating one group from another. Most of the time the data show a continuum. Our category boundaries are arbitrary.
  2. Negativity: We expect news, and especially trends, to be bad.
  3. Extrapolation: We expect trend lines to be straight. Most real-world trends are S-shaped, asymptotic, or exponential.
  4. Fear: What we’re afraid of and what the most important risks actually are often don’t line up.
  5. Size: We often fall for numbers that seem alarmingly big or small, but for which we’re given no scale. Especially, we fall for quantities that are better expressed as ratios.
  6. Generalization: We often use categories to lump unlike things together inappropriately, and fail to lump like things together. Likewise, we imagine an anecdote or individual is representative of a category we’ve more or less arbitrarily assigned them to, when it would be just as reasonable to consider them a member of an entirely different group.
  7. Destiny: It’s easy to think people are in the circumstances they’re in because it’s inevitable. In KJR-land we’ve called this the Assumption of the Present.
  8. Single Perspective: Beware the hammer and nail error, although right-thinking KJR members know the correct formulation is “If all you have are thumbs, every hammer looks like a problem.” Rosling’s advice: Make sure you have a toolbox, not just one tool.
  9. Blame: For most people, most of the time, assigning it is our favorite form of root-cause analysis.
  10. Urgency: The sales rep’s favorite. In most situations we have time to think, if we’d only have the presence of mind to use it. While analysis paralysis can certainly be deadly, mistaking reasonable due diligence for analysis paralysis is at least as problematic.

The book certainly isn’t perfect. There were times that, adopting my Mr. Yeahbut persona, I wanted to strangle the author, or at least have the opportunity for a heated argument. Example:

Question #3: In 1996, tigers, giant pandas, and black rhinos were all listed as endangered. How many of these three species are more critically endangered today? (A) Two of them; (B) One of them; (C) None of them.

The answer is C — none are more critically endangered, which might lead an unwary reader to conclude we’re making progress on mass species extinction. It made me wonder why Rosling chose these three species and not, say, hawksbill sea turtles, Sumatran orangutans, and African elephants, all of which are more endangered than they were twenty years ago.

Yeahbut, this seems like a deliberate generalization error to me, especially as, in contrast to the book’s many data-supported trends, it provides no species loss trend analysis.

But enough griping. Factfulness is worth reading just because it’s interesting, and surprisingly engaging given how hard it is to write about statistical trends without a soporific result.

It also illustrates well why big data, analytics, and business intelligence matter, providing cautionary tales of the mistakes we make when we don’t rely on data to inform our opinions.

I’ll finish with a Factfulness suggestion that would substantially improve our world, if only everyone would adopt it: In the absence of data it’s downright relaxing to not form, let alone express, strongly held opinions.

Not having to listen to them? Even more relaxing.

All IT organizations test. Some test software before it’s put into production. The rest test it by putting it into production.

Testing before deployment is widely regarded as a “best practice” — a phrase that, as defined here, translates to “the minimum standard of basic professionalism.”

Which brings us to organizational change management (OCM), something else all organizations do, but only some do prior to deployment.

There is, you’ll recall, no such thing as an IT project, a drum I’ll continue to beat up to and beyond the anticipated publication date of There’s No Such Thing as an IT Project sometime in September of this year.

Which brings us to a self-evident difference between testing, aka software quality assurance (SQA), and OCM: SQA is about the software; OCM is about the business change that needs the new software.

As we (Dave Kaiser and I) point out in the upcoming book, organizational changes mostly fall into three major buckets: process, user experience, and decision-making. Process change illustrates the SQA parallel well.

Probably the most common process change goal is cost reduction, and more specifically reducing the incremental cost of processing one more unit.

As a practical matter, cost reduction usually means layoffs, especially in companies that aren’t rapidly growing. For those that are growing rapidly it means employees involved in the process will have to handle their share of item processing more quickly.

In a word, employees will have to increase their productivity.

Some unenlightened managers still think the famous I Love Lucy chocolate factory episode illustrates the right way to accomplish this increase. But for the most part even the least sophisticated management understands that doing things the exact same way only faster rapidly reaches the point of diminishing returns.

Serious process change generally results in different, and probably fewer, distinct tasks in the process flow, performed by fewer employees because there are fewer tasks and those that remain will be more highly automated.

Which brings us back to OCM and when it happens in the deployment sequence.

Managers don’t need a whole lot of OCM know-how to understand the need to re-train employees. But many still blow it, teaching employees how to operate the new software: Click here and this happens; click there and that happens.

Training shouldn’t be about how to operate software at all. It should be about how employees should do their changed jobs using the new software.

But training is just the starting point. What’s often also lost in translation are all the other organizational changes employees have to adjust to at the same time. Three among many:

> Realignments: Employees often find themselves reporting to new managers. This, in turn, usually leads to a severe case of heads-down-ism until employees figure out whether spotlighting problems in the new process flow will be welcomed, or whether a new manager’s style is more along the lines of messenger-shooting.

> Metrics: With new processes often come new process optimization goals, which in turn should mean new process metrics, but too often doesn’t.

The first rule of business metrics is that you get what you measure — that’s the risk you take. So if a company changes a process without changing its metrics, employees will do their best to continue using the old process, as this is what’s being measured.

> Colleagues: Some work that had been performed by employees in a different city, building, floor, or cubicle down the hall (folks who, by the way, used to know each other by name) might now be performed by total strangers who live in a different country and time zone, and speak a different native language.

Just adapting to different accents can be challenging enough. Add cultural and time-zone differences to the mix, make everyone involved unknown to each other, and the opportunity for process traffic jams increases, not by increments but by multiples.

No matter what the intended change, for it to be successful all these factors, and others, will have to be addressed.

Change leaders can address them before instituting the change, helping the organization and everyone in it prepare. Or, they can leave it up to everyone to muddle through.

Muddling through does have one advantage: Change leaders can blame anything and everything that goes wrong on change resistance.

Given a choice between effective planning and blaming the victims … well, it’s hardly even a choice, is it?