Geezer alert! Geezer alert! Hide, hide, hide, hide, hide!

When I was a lad in high school we all took the SATs, and our scores had some bearing on our academic potential.

Now, we have SAT study guides, and SAT scores mostly reveal how hard a student studied for the SATs. As SAT scores have become more important they’ve become less reliable, and it’s cause and effect.

Sound like most professional certifications?

In the world of measurement, gauging someone’s potential is one of the three great unsolved, and very likely unsolvable, challenges (the other two are customer loyalty and employee performance). We’ll save customer loyalty and actual employee performance for other days. Today …

So you’re trying to decide which of two applicants to hire for a project management position. One has a PMP. The other one doesn’t. Which one do you hire?

The answer is, whichever one:

  • Has brought more, and more difficult, projects to a successful conclusion.
  • Speaks intelligently and in enough depth about the projects they list to convince you they really did manage them and they really did reach a successful conclusion.
  • Leads you to conclude, from your conversation, that their “personal culture” will be compatible with your company’s business culture, and their personality will mesh with the people they’ll be working with.
  • Their potential peers think will be the stronger addition to the team after they’ve had a chance to talk with both applicants.

Or, even better, whichever one proves to be the better project manager in your organization after you’ve contracted with each of them to manage an actual project and they’ve either run their project to a successful conclusion or run it into the ground.

Understand, the problem isn’t with the certification itself, and in fact, to its credit, the Project Management Institute includes successful project management experience in its PMP requirements.

The problem is that using the certification … using any certification … to evaluate applicants is an example of the observer effect.

The observer effect, in case you aren’t familiar with it, is the scientific principle that all acts of observation affect whatever is being observed. Sometimes the effect is trivial … for example, the act of looking at a comet through a telescope doesn’t change the comet’s orbit in more than a quantum way.

But here, the more companies that use certifications in hiring decisions, the more the people who seek the certifications just want the piece of paper. Gaining actual competence becomes secondary at best.

This is true for professional certifications. It’s increasingly true for college degrees.

And it isn’t limited to individual certifications either.

Take, for example, ISO 9000 and its associated certifications. What they’re intended to be is evidence that a company has strong quality management practices. What they too-often are is evidence that companies need ISO 9000 credentials on the corporate resume and have learned how to tell a good quality story.

An actual commitment to quality on the part of its executives and managers? That’s optional. The International Organization for Standardization lacks the resources to investigate applicants in enough depth to ensure they truly qualify — just as well, many cures being worse than the diseases they treat.

What’s the solution? Here’s one: Every certifying organization forbids the use of their certifications for hiring or vendor selection.

That’ll happen. Just not here on Earth. Still, there are ways to improve the situation. What they have in common is moving beyond short-haul thinking.

Take medicine. There’s a reason most doctors are fundamentally competent, and it isn’t their getting a degree. To become a doctor you have to go through a residency … you have to practice medicine under the watchful eye of practicing doctors. It’s a long-haul, labor-intensive process, for which we should all be grateful.

With most certifications, both certifiers and those certified want a process for verifying competence that’s quick and cheap. Since you only get what you pay for if you’re lucky, the outcome is predictable.

Awhile back I wrote a column predicting a business failure (“Business failure in progress,” KJR 12/12/2011). The company is, in fact, gone. I mention it because its founders and leaders won an entrepreneurship award right around the time I wrote the column.

Demonstrating, I guess, that business awards are even less reliable than business certifications.

Deep in my dark past I was a teaching assistant for a course titled “Ecology for Physicists and Engineers.” A quiz I graded included a question along the lines of “If 84% of the population has brown eyes and 16% has blue eyes, and the gene for brown eyes is dominant, what is the frequency of the genes for blue and brown eyes in the population?”

The solution hinges on very basic population genetics, namely, knowing the binomial formula: x² + 2xy + y² = 1, where x and y are the frequencies of the genes for blue and brown eyes in the population. (Don’t worry — we’ll get to the punch line and how this connects to you in just a couple more paragraphs.)

Because the gene for blue eyes is recessive, x² percent of the population has blue eyes and 2xy + y² percent of the population has brown eyes. x² = 16% (0.16), so x = 0.4, leaving y to equal 1 − x … 0.6.
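For the numerically inclined, that arithmetic fits in a few lines. This is just a sketch of the calculation above; the variable names are mine:

```python
import math

# Hardy-Weinberg: x^2 + 2xy + y^2 = 1, where x is the frequency of the
# recessive (blue-eye) allele and y the frequency of the dominant
# (brown-eye) allele.
blue_eyed_fraction = 0.16  # the 16% of the population with blue eyes

# Only homozygous-recessive individuals have blue eyes, so x^2 = 0.16.
x = math.sqrt(blue_eyed_fraction)  # frequency of the blue-eye allele
y = 1 - x                          # frequency of the brown-eye allele

print(x, y)  # x is about 0.4, y about 0.6
```

No quadratic formula required — taking a square root is the whole job.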

But these were physicists and engineers, or at least they were students of those disciplines. They knew a lot of math, which is probably why much of the class, spotting the squared terms in the equation, immediately hauled out the quadratic formula to solve the problem instead: x = [−b ± √(b² − 4ac)]/2a … never mind that this solves equations of the form ax² + bx + c = 0, an entirely different class of problem.

Some, needless to say, wanted partial credit for their answer.

Last week I proposed that we need more engineers in management, defining “engineer” as anyone who knows you can’t cool off your kitchen by leaving the refrigerator door open.

I proposed it on the theory that engineers know the difference between addressing a symptom and fixing the problem.

Regrettably, it isn’t really that simple (as if anything is). To understand the complexity, look at the interview question I suggested you add when interviewing candidates for managerial positions: “If you open the refrigerator door, how much will it cool off your kitchen?”

Like the engineers whose answers I graded in Ecology for Physicists and Engineers, some candidates would almost inevitably answer this question by estimating the volume of air in the refrigerator, the volume of air in the kitchen, and the temperature differential (in kelvins), using them to compute the temperature of the mixed air volume.

This answer would, in fact, be accurate if you first unplugged the refrigerator, which is to say it would be correct in every detail, even though it would be completely wrong.
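For the record, here’s the arithmetic those candidates would be doing — a minimal sketch, with made-up illustrative volumes and temperatures of my own choosing:

```python
# Mixing two air volumes at different temperatures. This is valid ONLY if
# the refrigerator is unplugged; a running compressor pumps the heat right
# back into the kitchen, plus the waste heat of running the motor.
fridge_volume_m3 = 0.5    # assumed interior volume
kitchen_volume_m3 = 40.0  # assumed kitchen air volume
fridge_temp_k = 277.0     # about 4 degrees C
kitchen_temp_k = 295.0    # about 22 degrees C

# Volume-weighted average, treating air density and heat capacity
# as constant across this small temperature range.
mixed_temp_k = (
    fridge_volume_m3 * fridge_temp_k + kitchen_volume_m3 * kitchen_temp_k
) / (fridge_volume_m3 + kitchen_volume_m3)

print(f"Kitchen cools by about {kitchen_temp_k - mixed_temp_k:.2f} K")
```

With these numbers the kitchen cools by roughly a fifth of a kelvin — correct in every detail, and completely wrong the moment the compressor kicks in.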

Lesson: No matter what type of human being we’re dealing with, we all have an unfortunate tendency to see the world through the blinders imposed by what we’re particularly good at doing. It’s the hammer/nail syndrome (if you know how to swing a hammer, every problem looks like a nail).

So if I’m a manager and I’m particularly good at extracting meaning from numbers, I’m likely to rely excessively on metrics and computer printouts, ignoring all the nuances of real-life in my department. But if I’m a more sociable type I’ll ignore the reports entirely, instead wandering around, chatting with the people I’m most comfortable with and allowing them to manipulate my perceptions to match whatever story they want me to believe.

Which gets us to the best answer to the refrigerator question — the one that tells you you’re dealing with someone who understands there’s always a backstory to the story and variations on a theme. And it isn’t an answer at all. It’s a question: “Is the refrigerator plugged in?”

There’s another trait, common among engineers, that can make those afflicted with it unsuitable for management. It’s an almost-inevitable concomitant of the problem-solving orientation that last week seemed so admirable. It’s the tendency to always see an even better alternative — to be seemingly unable to recognize when their design, code, or what-have-you has reached the exalted state of good enough.

It’s a tendency easily confused with the well-known analysis paralysis problem, or with simple perfectionism, but it’s distinct. Analysis paralysis comes from fear of making the wrong decision, while perfectionism stems from an excessive aversion to defects.

But if you think like an engineer, spotting the even-better alternative is a matter of artistry … of design elegance … which up to a point is actually very good business.

Not all engineers recognize when they’ve passed that point, though. Which is the source of this well-known aphorism:

“There comes a point in every project when you have to shoot the engineers and put the product into production.”