Sometimes logic takes you places you’d rather not go.
Take, for example, the four fallacies of metrics described in the KJR Manifesto: Measuring the right things wrong, measuring the wrong things (right or wrong), failing to measure important things, and measuring employees.
I’ve been pondering the connection between fallacy #4 and the financial meltdown. It’s a pretty good connection. Fixing it, though, takes us into strange territory. Here goes:
Measuring employees is a bad idea because employees have a remarkable talent for gaming the system. They can: Work the system so the numbers look good; behave in ways that make the numbers look good while circumstances deteriorate; or just falsify the data outright.
I pointed this out once in an executive meeting, and one of the participants recommended firing any employee who would behave this way.
Interesting concept, as I was referring to a very large number of American CEOs.
Every time a CEO instructs managers to delay expenses just a bit so they fall into the next fiscal year, lays off employees to impress Wall Street, or indulges in full-blown Enron-style accounting, that CEO is working the system to look good: metrics fallacy #4 at its finest, because by working the system the CEO gets to keep the corner office and collect the big bonus.
Those lousy CEOs. We should fire them all!
And we should. Right along with the IT professionals who are supposed to fill out their timesheets accurately, including proper use of the entry “Doing nothing.”
When we measure employees, at any level from the CEO to the janitor and all points between, they’ll bend the data to their advantage. The only question is by how much.
We can try to tighten up the metrics so well that no loopholes remain. Or …
And it’s the “or” that has me worried, because I can’t escape a distressing conclusion: We should stop measuring employees on how well they achieve goals we set for them. The alternative? Measure them instead on whether they do their work properly … on their technique.
Up to and including the CEO.
It stands everything we think we know about assessing performance on its head. The standard model, after all, is to measure employees on goals and coach on technique.
It would be as if, in major league baseball, managers no longer assessed batters on their batting averages and RBI totals and instead assessed them on the quality of their swings and how often they correctly anticipated the pitches thrown at them.
It would be as if, in the practice of medicine, we no longer assessed physicians on the health of their patients … oh, wait, never mind. We stopped doing that years ago. Let me start over: It would be as if, in the practice of medicine, we no longer assessed physicians on their profitability and instead assessed them on how well they matched diagnosis to symptoms and treatment to diagnosis.
It would be as if, instead of assessing teachers on how well their students performed on standardized tests, we assessed them on their classroom technique, the nature of their homework assignments, and how appropriately they graded student results.
It’s a radical notion, especially in the winning-is-the-only-thing culture we’ve cultivated in the United States: Miss Congeniality is not what beauty pageant contestants aspire to.
Nor is it clear that it would be an improvement. Batters probably should be assessed on their batting averages and RBI totals, for example. One reason it works in the batter’s box: There’s no way a batter can fudge the data.
For “Management By Technique” to work, there’s a prerequisite, and in most organizations it’s something to aspire to, not an achieved competence: We need to understand How Things Work with a great deal of clarity.
That, in fact, is the heart of the concept: If you know how your organization works … the buttons you can push and the levers everyone else needs to pull in order to turn events into success … then you can assess employees on their technique, confident that if each one executes well then the organization will be successful.
If you don’t have this understanding, you’re best off setting goals and hoping for the best.
Hoping would be an apt description, too, because realistic goals are a result of knowing your business, not an alternative to it.
Should it be pursued as an alternative to goal-setting (I am in the middle of setting goals for 2009, looking at how to phrase them for success)? Absolutely.
Can it spread, given the current and future cynicism generated by the bailouts of people who did have a hand in the collapse (oh yes, many brokers/analysts/execs are absolute angels who fled from any “perception of evil”)? A bit of a struggle to implement, in my opinion.
One of the people here at work says “Do not confuse me with facts and logic!”
Where on the time sheet is “doing nothing”? Heck, there generally isn’t even a blank for all of the “doing somethings” that people are supposed to be doing. There is a bit of a stink over here because a name-brand chain fired an employee for falsifying his time sheet. He had been in the parking lot doing his work when an accident occurred and someone was SERIOUSLY injured. He spent the next hour and a half coordinating immediate first aid for the victim, calling the fire department, assisting the fire department (including making sure people in the store stayed clear), and then roping off the area so that nobody else was hurt once the fire department cleared it. He just listed this hour and a half or so as part of his cart-retrieval time.
Hmmm. When my daughter began school, we had a choice of two first-grade teachers. An “experienced” mother made a strong recommendation, with the caveat that if we visited the classrooms, we would probably disagree. One teacher’s classroom was neat, quiet, and orderly, children working diligently at their desks. Everything seemed under control. The other was somewhat chaotic: kids talking, moving around, teacher somewhat harried. Our friend recommended the second, and my wife, who spent much time in the school, came to agree. I suppose it matters what technique one is looking for… (but you already said that).
Bob, spring needs to arrive in the upper midwest soon. It sounds like you have an advanced case of cabin fever.
We (you and I) present ourselves as consultants. We both have exquisite style and technique, and we’re suave . . . but if we don’t produce some outcomes for our clients, then we are not likely to produce any outcomes for ourselves and our families (aka fees).
Of course we can’t hang on by fudging the data either.
Seems like the only alternative is to produce valuable outcomes. Lesson for GM, Chrysler, AIG, et al.: Produce valuable outcomes for your clients and the odds of needing a bailout are lowered.
John Blair
Bob,
I think you would get a lot out of Tom Davenport’s article on What HR Analysts Can Learn from Basketball and the NY Times article it is derived from.
In particular, the idea of using analytics to determine the performance of a team through a plus/minus metric (what happens when the person isn’t there?) is fascinating to me.
Just another option for measuring performance…
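The plus/minus idea mentioned above can be sketched in a few lines of code. This is a simplified illustration, not the analytics Davenport describes; the player names and game log are made up. The point is that the metric credits a player with the team's net result while that player is present, so nobody fills out a form about themselves and the "fudge the data" problem largely disappears.

```python
from collections import defaultdict

def plus_minus(stints):
    """Compute a plus/minus score per player.

    `stints` is a list of (players_on_floor, point_differential) pairs:
    for each stretch of play, the players on the floor and the team's
    net score change while they were out there. Each player's score is
    the sum of the differentials for the stints they appeared in.
    """
    totals = defaultdict(int)
    for players, differential in stints:
        for player in players:
            totals[player] += differential
    return dict(totals)

# Hypothetical game log: three stints with different lineups.
stints = [
    (("Alice", "Bob"), +6),    # team outscored opponents by 6
    (("Alice", "Carol"), -2),  # team was outscored by 2
    (("Bob", "Carol"), +3),
]
print(plus_minus(stints))  # {'Alice': 4, 'Bob': 9, 'Carol': 1}
```

The appeal for performance measurement is that the raw inputs (the score, who was present) are recorded by the system, not self-reported by the people being measured.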
The incentives need to be fixed. If people’s bonuses are linked to profits in such a way that they take huge risks to gain those profits, and they still get the bonuses when the firm fails or loses a lot of money, you are rewarding thieves and looters. If the bonuses are instead kept in an account for future disbursement, say in two to five years, people will focus on steady, long-term earnings, since the payoff isn’t immediate.
If doctors were paid more to keep their patients out of the hospital or to minimize their stays, and patients had to meet certain metrics in order to leave the hospital, then treatments might be more aggressive. However, with doctors, a lot of treatments are usually aimed at getting the body to heal itself. Cancer treatments still haven’t improved significantly except for a few cancers. Cancer screenings are better: the sooner one catches the malignancy before it metastasizes, the easier it is to treat.
So, it comes down to both the employees’ motivations and the incentive structure. It wouldn’t hurt though to teach people how to spot liars. One might not catch the professional ones, but they can be caught by a careful analysis of their claims versus their actual results.
Management by technique suggests an analytic framework used in Monitoring and Evaluation in public health. It describes input, output, process, and impact metrics that define precursors to an outcome. Perhaps more granular performance evaluations would be useful.
Bob, your analogies lead me to think that some of the metrics people automatically want to use and score people by are actually aimed at the wrong group of people.
Viz., your struggle with doctors: they should be scored by the quality of life of their patients. Teachers, by the improvement they make to their pupils. Batting averages meet the same goal.
For instance, our group has been given the aim of closing all calls within a target time of five days, irrespective of a call’s content or the volume of calls incoming.
We can’t guarantee this goal, so we immediately declare it irrelevant and don’t shoot for it. What happens next, as you’ve noted, depends on circumstance.
However, the goal belongs to my manager and should not be passed down. Viz., if the department is breaking the five-day goal, then the manager has to figure out why and make corrections (the calls may be too difficult, or there may be too many for the team), be that breaking calls into smaller units, hiring more staff, or, worse, refusing certain calls; but he should handle it appropriately.
The mistake would be if the 5 day goal is then tied to my manager’s bonus.
Chris.
I spent a while doing teacher training (in the UK) and it was interesting to note that there was a historical cycle of alternating between assessing pupils and teachers via metrics and leaving teachers to get on with the job as they see fit. The fashion is gradually moving away from metrics again now due to the limitations you mention. I suspect this is not restricted to teaching.
My theory, for what little it’s worth, is that you’ll always find people who are adept at abusing either type of system. But since the “abusers” tend to be different types of people, continuous evolution is probably the best of a bad bunch of options for managing people, contrary beings that we are…
Bob,
Technique and performance may not be separable.
Using your baseball example: It seems to me that ball players are taught by their coaches how to swing correctly and how to anticipate pitches so that they can improve their averages and RBIs. If their technique begins to slip, as demonstrated by a lower average and fewer RBIs, then they receive some more coaching.
Might something similar be true in management if we had a better understanding of what “good performance” actually meant?
Ray
Like the saying: ‘Quality (or: result) is free if you do everything else right.’
The ‘Balanced Scorecard’ came to mind as a blending of measurable goals and techniques (at least in the situations I saw it being used.) Unfortunately, the gaming still occurred. Which goes back to a leadership issue – it takes a special kind of leader to see what is going on, and to put a stop to the unhealthy gaming, while fostering creativity, as well as effectiveness.
I do agree with the idea that good technique across the board should lead to better performance overall, and I think it is very worthwhile to pursue this train of thought. I also believe that some of the metrics used now are overly simplistic, easy to manipulate, and do not always highlight the effects that matter, be they negative or positive.
It is easy to see the problems with current measurements by looking at spectacular failures and then wondering why current measurements didn’t show the problem (case in point: Enron). The difficult part in changing the system is coming up with easy ways to measure what we value. Current systems tend to look at easy, ready-made measurements such as income for a doctor, RBIs for a hitter, etc. These are cut-and-dried measurements and require very little competence to compile, evaluate, or analyze.

But think about measurements of technique. How does one easily look at every batter and grade his technique? In fact, if you took two or three evaluators of swing style and fundamentals, would they even agree that a given batter has a “7” technique on a scale of 1 to 10? Then you’ll always have the outliers. Take golf swings. Some pro golfers have those so-called perfect swings with the right pace and a well-defined swing plane, yadda, yadda, yet hit short shots and put the ball out of bounds as often as not. Then along comes some guy who makes a big figure 8 with a hitch at the top and hits it 300 yards right down the fairway with accuracy almost every time.
So the difficulty is in finding more objective measurements of things that are probably more often considered to be subjective. In addition, you have to be able to do this repeatedly, consistently, and in large numbers. And how do you weed out such things as personal, racial, and social bias?
Joel Spolsky of Fog Creek Software has project-scheduling software that works with two caveats. One is that you charge all time to a task: meetings, coffee breaks, and other wastes of time are folded into the task times (brilliant, because that’s how it works in real life). The other is measuring how far off individuals’ guesstimates are for the time needed for new tasks (always underestimated). They don’t train employees to guess correctly; they measure how far off each employee usually is and simply factor that in.
It seems to be one of the ways to manage performance without trying to modify it as you measure it. He has an excellent article here:
http://www.joelonsoftware.com/items/2007/10/26.html
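The "factor it in" step works roughly like this sketch. This is a simplified illustration in the spirit of evidence-based scheduling, not Fog Creek's actual implementation; the function names, the sample track record, and the new estimates are all made up. Each past task yields a "velocity" (estimated hours divided by actual hours), and new estimates are repeatedly divided by randomly sampled velocities to project a distribution of likely totals, rather than a single point forecast.

```python
import random

def velocities(history):
    """Historical velocity per past task: estimated / actual hours.
    A velocity of 1.0 is a perfect estimate; below 1.0 means the
    task took longer than estimated."""
    return [est / actual for est, actual in history]

def simulate_totals(history, new_estimates, trials=10000, seed=42):
    """Monte Carlo projection: for each trial, divide every new
    estimate by a randomly chosen historical velocity and sum.
    Returns the median and 90th-percentile projected totals."""
    rng = random.Random(seed)
    vels = velocities(history)
    totals = sorted(
        sum(est / rng.choice(vels) for est in new_estimates)
        for _ in range(trials)
    )
    return totals[trials // 2], totals[int(trials * 0.9)]

# Hypothetical track record: (estimated hours, actual hours) per task.
history = [(4, 8), (6, 6), (2, 5), (8, 10)]
median, p90 = simulate_totals(history, [5, 3, 8])
print(f"median ~ {median:.1f}h, 90th percentile ~ {p90:.1f}h")
```

The design point matches the comment above: nobody is asked to estimate "correctly," and a chronic underestimator doesn't even need to know they are being corrected for; their track record does the adjusting.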
So if we don’t measure intended outcomes (such as student test scores, to use one of your analogies), how do we judge if classroom techniques (the processes) are, or are not, successful? Couldn’t multiple techniques all be “appropriate” with certain techniques being more optimal/successful than others?
Standardized test scores might not be (okay, are not) the best measures of successfully taught students. But having some baseline metric of a successful outcome (not process) is required for the process to be optimized over time.
Like it or not, there have been some really ugly swings in baseball that were far more effective at getting on base than their “pretty” counterparts.
I agree fully with you that evaluation is difficult. But I have to wonder about the benefit of dichotomizing performance/technique. Outcomes are important, and the means of obtaining outcomes are important. The rest of my response is going to be somewhat oblique. Make of it what you will.
I am a beneficiary of Circuit City’s mass layoff of two years ago. And no, that’s not just sarcasm. Everything you wrote about in your article rings true.
Here are some pointy-hair boss things I experienced:
1. For a while, we were told to record our hours (as exempt employees) up to forty hours per week, but nothing beyond. I’m sure there was more than one “benefit” to management of this manner of reporting.
2. Some managers believed that if there was any way a number could be attached to something, that automatically made it scientific.
3. I sat in on telephone interviews of candidates for a consultant position. One candidate had all of the “right stuff” on his resume, but in the telephone interview it was clear that he had a poor command of English. He misunderstood questions, and then would talk on and on. The manager who was conducting the interview would shake his head, throw his hands up, and in other ways indicate his frustration with the candidate’s monologues. Yet, at the end of the interview, he said, “This is the perfect person!” Never mind that we interviewed other candidates who also had strong resumes and who communicated well over the telephone. The manager made his decision based only on the resume, discounting the interview experience. By the way, his notion of behavioral interviewing was to ask questions about performance, driven by what could be seen on the resume. (What was your biggest success? What was your biggest failure? How did you handle this situation? Etc.) Now, would you be at all surprised to learn that this consultant was not able to do the work that was assigned to him?
4. I know a lot of people who have experienced this next one. Consider a professional who has ten plus years of experience and is well-regarded. How much sense does it make for a manager who does not understand this profession to tell this person repeatedly that his or her knowledge and insights gained from ten plus years of experience don’t count for anything? The basic message is “Shut up and do what you’re told,” though usually not said so bluntly.
5. A number of years ago, IT management created an employee of the month award to motivate employees. The first award was given to a person who spent a gazillion hours over a weekend to repair a crashed system. The next month’s award was given to a person who spent a gazillion hours repairing a crashed system. The third month’s award…(I think you got the picture). None of the large number of highly talented people who built systems within forty hour work weeks that didn’t crash got an award.
I enjoy your columns. Keep it up.
Best regards,
Archie
Hi Bob,
I really like this topic.
A really big problem that we have is that many times you can’t tell luck from skill. The history of our financial sector is the poster child for that.
All the Wall St types who have been bringing home the big paychecks and bonuses did so based on the principle that they were doing something better than the rest of us. Then they found out that any idiot can ride a bubble on the way up.
The book Fooled by Randomness: The Hidden Role of Chance in Life and in the Markets is a really good exposition of exactly this idea.
Separating process from results is more difficult culturally, though. Oil companies have gone a long way in that direction with managing exploration programs. Each individual exploration well is likely to fail, so they have pushed the accountability for results up the chain. People doing the technical work are more accountable to a group of peer evaluators who understand the technical inputs.
If a well is successful, there is often some bonus given out, but we have largely gotten past the idea of the “oil-finder,” which actually made sense at one time, but not so much anymore.
Mr. Lewis, you have come across a great point, but I don’t think it’s new, radical, or revolutionary.
Barring special circumstances or environmental factors, the right processes or techniques will yield the correct results.
Dovetailing with the lean discussion thread in February, it’s a principle of mentoring.
Doing this helps greatly reduce ‘gaming’ the outputs since people realize they are being evaluated & coached on how and why they do their work versus the results.
I imagine it’s a model most of us have no problem employing in parenting but for some reason can’t get our heads around in the corporate world.
We mold our children by concentrating on teaching/coaching them how to think and react to events in life knowing if we succeed a good output will result; a thinking, competent, capable, well adjusted member of society (well, at least after the teenage years are over).
While Bob said nothing earth-shattering or totally new, as usual, he said what many of us have been generally contemplating – but brought it together as a succinct concept.
Our school district has started to reward principals for the performance of their schools on the state tests and a few other quantifiable items. The district also scores itself on those same standardized test scores. Of course, all anybody is allowed to teach nowadays has to be directly aimed towards those tests and test taking skills. They are now going to find a way to evaluate teachers on those same criteria.
Of course this means that teachers have to follow the scripts given and there is no way to follow-up on that teachable moment if it falls outside the guidelines. As Mr. Pardee said above – the teacher with the chaotic classroom where everyone learns will now be penalized for teaching students to think and take responsibility rather than teach “cookie-cutting to fit the mold.”
And, as with industry, those teachers that know how to suck up and “game” the system and administrators will do best on their evaluations – and get the promotions. Those that think too much will probably not look good to the inflexible evaluation metrics.