In grad school, at an animal behavior conference (long-time readers might recall my early history researching the behavior of electric fish), I attended a workshop on quantitative ethology, the focal point of my own work (which might explain my ongoing fascination with business metrics).

A primatologist conducted the workshop. He was researching the social organization of an ape species. He’d tried a wide variety of statistical techniques; one after another showed a dominance hierarchy, not the distinct social classes he expected. But he persisted, and finally, using multidimensional scaling in a single dimension, was able to demonstrate the classes he was certain existed.

I didn’t know it was valid for a researcher to shop around for analyses until one agreed with his preferred hypothesis, so I asked. The primates he studied might or might not have social classes, but ethologists definitely do: his answer was less than respectful; a fellow fish researcher came to my defense; and a free-for-all between the fish and primate social classes ensued.

Last week’s column started a discussion of a disquieting trend in American business, and in American society more generally: the rise of Intellectual Relativism. Intellectual Relativism is a democratic (as opposed to Democratic) approach to understanding. Given a choice among multiple explanations, an Intellectual Relativist equates popularity with validity.

Intellectual Relativism happens when people start with what they want to be true. From that wish they work backward. Sometimes they deliberately mislead others. Sometimes they start by fooling themselves. The story of the primatologist points out that scientists are just as human as everyone else.

Because scientists are human, and can be seduced by what they want to be true, the scientific community has, since the Renaissance, developed a set of processes and criteria for assuring the integrity of scientific findings. They’re the benchmark. While you can’t easily apply them all in a business setting, they’re still worth understanding. Compare your confidence in the ideas and evidence available to you with how scientists achieve confidence in the theories and evidence they use.

The short version:

  • Other scientists must review the research.
  • Independent laboratories must reproduce the results.
  • Occam’s razor picks the winner. When two explanations account for all the known data, the simpler of the two — the one that assumes the fewest “entities” — is preferred.
  • Evidence isn’t fact; conclusions aren’t evidence. “The National Weather Service measured the temperature at the Minneapolis/St. Paul airport on October 8, 2005 at noon,” is a fact. “The temperature was 45 degrees Fahrenheit,” is evidence. “The earth was warmer in the 1990s than the 1980s,” is a conclusion based on the evidence.

    Evidence is subject to sampling error, flaws in experimental design or apparatus, and the scientist’s desire to support a preferred hypothesis, all sources of doubt that make evidence less reliable than fact. Conclusions are susceptible to all the problems with evidence, plus flaws in logic. That evidence isn’t fact, and that conclusions aren’t evidence, are important reasons independent laboratories must reproduce results before the scientific community accepts them.
  • Evidence can only disprove explanations, by testing their predictions. It can’t prove them.

    To illustrate this point, attributable to the philosopher Karl Popper: Einstein’s theory of general relativity says that the bending of space-time by mass causes gravity. It makes specific predictions about what (for example) astronomers should find when observing massive objects. When astronomers observe what the theory predicts, they haven’t proved general relativity right. They’ve failed to disprove it.

    By now, scientists have falsified every other explanation they can think of, so they’re confident general relativity should be trusted. Confident, not certain.

    To the extent scientific explanations are to be trusted more than those derived from other ways of knowing, it’s because scientists deal in doubt, not certainty. The next observation could reveal a hole in the theory, or someone could construct a new explanation that accounts for everything observed so far, perhaps reducing general relativity to a special case, just as general relativity reduced Newtonian physics to one.

    No good scientist who’s being careful about phrasing ever expresses certainty about anything. It violates the protocol.

The philosophy of science is a pretty dry topic, and scientists deal with a different challenge from business managers. Scientists are paid to find the best explanations of what’s going on in the universe. Business managers are paid to make the best decisions they can and still make their deadlines. So the applicability of all this might not be clear.

We’ll get to practical techniques next week. For now: While peer review and independent reproduction of results are unrealistic in most business settings, you can certainly apply Occam’s razor. You can recognize the difference between facts, evidence, and conclusions.

Most important of all, you can take this lesson to the bank: When you have no doubt, you’re certainly misleading yourself. And if you mislead yourself, you won’t lead anyone else very well.