Don’t make data-driven decisions. Make data-informed decisions, or so my friend Hank Childers advises.

It’s a nice distinction that recognizes both the value of evidence and its limitations.

For those who like to rely on evidence, we live in tricky times. The increasing availability of evidence about nearly any topic is accompanied by an at-least-equally increasing supply of disinformation, produced in direct proportion to the profit to be made by biasing things in the profit-maker’s favor.

Which is one reason I’m skeptical of the long-term reliability of IBM’s Watson as a medical diagnostician.

There’s a program called Resumix. It scans resumes to find the best skill-to-task matches among the applicants. It’s popular among a certain class of recruiter because reading resumes is an eye-blearing chore.

Worse, the recruiter might, through fatigue, miss something on a resume. Worse still, the recruiter might inadvertently practice alphabetical discrimination if, for example, the resumes are sorted in name order: inevitably, those at the front and back of the stack will receive more attention than those in the middle.

But on the other side of the Resumix coin is this: most applicants know how to play the Resumix game. Using techniques much like those website authors use to get the attention of search engines, job-seekers make sure Resumix sees what it’s supposed to see in their resumes.
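To see why the game is so winnable, consider a minimal sketch of the kind of keyword matching such screening tools perform. This is an illustration of the general technique, not Resumix’s actual algorithm, which isn’t public; the skill list and resumes are hypothetical.

```python
# Sketch of keyword-based resume screening (illustrative only; not
# Resumix's real logic). Skills and resume text are made up.

REQUIRED_SKILLS = {"python", "sql", "etl", "data modeling"}  # hypothetical posting

def score_resume(text: str) -> int:
    """Count how many required skills appear anywhere in the resume."""
    normalized = text.lower()
    return sum(1 for skill in REQUIRED_SKILLS if skill in normalized)

honest = "Built ETL pipelines in Python; some SQL reporting."
gamed = ("Skills: Python, SQL, ETL, data modeling "
         "(keywords listed to satisfy the scanner).")

print(score_resume(honest))  # 3 -- misses 'data modeling'
print(score_resume(gamed))   # 4 -- the stuffed resume scores a perfect match
```

Anything this mechanical rewards whoever knows the word list, which is exactly the dynamic the column is describing.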

If Watson becomes the diagnostician of choice, do you think there’s any chance at all that those who stand to profit from the “right” diagnosis won’t figure out how to insert what Watson is looking for in the text of the research papers they underwrite and pharmaceutical ads they run?

It’s one thing for those who developed and continue to refine IBM’s Watson … for whom, by the way, I have immense respect … to teach it to read and understand medical journals and such. That task is merely incredibly difficult.

But teaching it to recognize and discard utter horse pucky as it does so? Once we try to move beyond the exclamation (“That’s a pile of horse pucky!”) to an actual definition, it isn’t easy to find one that isn’t synonymous with “I don’t want that to be true!”

Well, no, that isn’t right. A useful definition is easy: horse pucky is a plausible-sounding narrative built on a foundation of misinformation and bad logic.

Defining horse pucky? Easy. Demonstrating that something is horse pucky, especially in an age of increasingly voluminous disinformation? Very, very hard. It’s much easier to declare that something is horse pucky and move on … easier, but intellectually bankrupt.

So imagine you’re leading a team that has to make an important decision. You want the team to make a data-informed decision — one that’s as free from individual biases as possible; one that’s the result of discussion (solving shared problems), not argument (one side winning; the other losing).

Is this even possible given the known human frailties that come into play when it comes to evaluating evidence?

No, if your goal is perfection. Absolutely, if your goal is improvement.

While it’s fashionable to disparage the goal of objective inquiry because of “what we now know to be true about how humans think,” those doing the disparaging are relying on evidence built on the philosophical foundations of objective inquiry … and are drawing the wrong conclusions from that evidence.

Here’s the secret of evidence-informed decision-making: Don’t start by gathering evidence.

Evidence does, of course, play a key role in evidence-informed decision-making (and what would you do without KJR to give you profound insights like this?). But it isn’t where you start, especially when a team is involved.

Starting with evidence-gathering ensures you’ll be presiding over an argument — a contest with winners and losers — when what you want is collaboration to solve a shared problem.

Evidence-gathering follows two essential prerequisite steps. The first is to reach a consensus on the problem you’re trying to solve or opportunity you’re trying to chase. Without this, nothing useful will happen. With it, everyone agrees on what success will look like when and if it eventually happens.

The second prerequisite step is consensus on the process and decision-making framework the team will use to make its decision. This means thinking through the criteria that matter for comparing the available alternatives, and how to apply evidence to evaluate each alternative for each of the criteria.
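One common way to make that framework concrete is a weighted decision matrix: the team agrees on criteria and weights first, and fills in evidence-based scores only later. A minimal sketch, with hypothetical criteria, weights, and alternatives:

```python
# A weighted decision matrix, one common decision-making framework.
# Criteria, weights, and alternatives here are hypothetical; the team
# agrees on criteria and weights *before* any evidence is gathered.

CRITERIA_WEIGHTS = {"cost": 0.4, "time_to_value": 0.35, "risk": 0.25}

# Scores (1-10, higher is better) get filled in during the later
# evidence-gathering step; these numbers are placeholders.
alternatives = {
    "Buy":   {"cost": 6, "time_to_value": 8, "risk": 7},
    "Build": {"cost": 4, "time_to_value": 3, "risk": 5},
}

def weighted_score(scores: dict) -> float:
    """Weighted sum of an alternative's scores across all criteria."""
    return sum(CRITERIA_WEIGHTS[c] * s for c, s in scores.items())

for name, scores in alternatives.items():
    print(f"{name}: {weighted_score(scores):.2f}")
# Buy:   0.4*6 + 0.35*8 + 0.25*7 = 6.95
# Build: 0.4*4 + 0.35*3 + 0.25*5 = 3.90
```

Settling the weights before anyone sees a score is what keeps the exercise a discussion about a shared problem rather than an argument over whose favorite wins.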

Only then should the team start gathering evidence.

Informed decisions take hard, detailed work. So before you start, it’s worth asking yourself — is this decision worth the time and effort?

Or is a coin-toss good enough?