Speaking of reading — last week’s topic, as in why so many IT professionals don’t do it and what you can do so they do …
Many, many moons ago (1974, to be precise), Edwin Newman wrote Strictly Speaking: Will America Be the Death of English?, an eloquent diatribe about … well, the title says it all, doesn’t it?
Newman held out “hopefully” as a particularly noxious step on the road to linguistic perdition. It means “in a hopeful fashion,” but it’s most commonly used in place of “I hope,” presumably because the speaker (or writer) doesn’t want to specify who is doing the hoping.
I hope you’ll join me this week in burying an even worse usage, worse because it not only combines bad English with bad math but wraps them in a sports metaphor.
The phrase in question: “We have to give 110%!”
Why I mention it: Several correspondents wrote to point out that making reading mandatory, no matter how it’s done, will likely crash into the all-too-common practice of oversubscribing staff on the grounds that, for exempt employees, “… this isn’t a 40-hour-a-week job, you know.”
When employees are already oversubscribed, adding a low-urgency task to the pile of work they already have won’t endear you to them, no matter how noble your intentions.
The flaw, though, isn’t with last week’s suggestions. It’s with the practice of considering 100% (or 110%) staff utilization to be a sign of efficient management.
It isn’t. It’s a sign of bad engineering.
To illustrate the point, consider an airport. Any airport has a theoretical limit to its capacity. Modeling it would entail something along the lines of dividing 1,440 (the number of minutes in a day) by the average number of minutes needed for one airplane to take off or land, then multiplying by the number of runways that are usable in average conditions.
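A back-of-the-envelope sketch of that model, with every number an assumption chosen only to make the arithmetic concrete:

    # Hypothetical airport capacity model; all figures are assumptions.
    MINUTES_PER_DAY = 1440
    minutes_per_operation = 2   # assumed average for one takeoff or landing
    usable_runways = 3          # assumed

    daily_capacity = (MINUTES_PER_DAY / minutes_per_operation) * usable_runways
    print(f"Theoretical daily operations: {daily_capacity:.0f}")  # 2160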
Imagine the airline industry were foolish enough to try to asymptotically approach this limit. Then imagine something disrupted the schedule — say, a flight can’t take off because something in the cockpit broke. The only alternative would be to cancel the flight and rebook its passengers into open seats on other flights to the same destination, because if the airport is operating at capacity there would be no available time slots in which to reschedule the flight.
The general principle: Systems need enough unused capacity to absorb shocks — unplanned situations that require some of their capacity.
That is, if everyone is giving everything they have already, they’ll have nothing left for handling a crisis. And if their management thinks they do have enough left to handle a crisis, that just means they aren’t operating at capacity, and so should be given even more work assignments.
It’s sloppy thinking, imagining management can make 110% of capacity the new 100%.
Now I’m not so semantically intolerant that I don’t know what a head coach means when he insists players must give 110%. It isn’t that the coach wants everyone to flunk math. It’s that the players are capable of more than they think they are.
Which works just fine when players have time to rest and recuperate after a game. In a retail business it can work well enough when the challenge is handling a spike in pick-pack-and-ship warehouse demand because Cyber Monday sales were off the charts, assuming that by Cyber Thursday everyone can get back to a more reasonable workload.
It doesn’t work so well when programmers are enjoined to go above and beyond when they don’t get time to rest and recuperate after a long week of coding, any more than it makes sense to tell marathoners to try to sprint the entire race.
While we’re on the subject of time management, a word of advice on a related subject: multitasking. The advice: don’t do it.
As, thankfully, no sports metaphors occur to me, let’s talk about virtual memory: temporarily swapping some RAM contents out to disk so as to load a different computing task into the just-vacated RAM for a few moments of processing, then rinsing and repeating for all the other active computing jobs.
This works so long as the number of concurrent jobs doesn’t result in task-switching time becoming a significant fraction of total capacity.
What’s true for computers is true for those who program and use them: when we’re forced to multitask, the impact of our own switching time is very real.
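To make the arithmetic concrete, here’s a minimal sketch; the 15-minute refocus cost and the 480-minute workday are assumptions, not measurements:

    # How context switches eat the workday; both constants are assumptions.
    REFOCUS_COST = 15      # minutes lost regaining context after each switch
    WORKDAY = 480          # minutes in an eight-hour day

    def productive_fraction(switches_per_day: int) -> float:
        """Fraction of the day left for real work after the switching tax."""
        overhead = switches_per_day * REFOCUS_COST
        return max(WORKDAY - overhead, 0) / WORKDAY

    for n in (0, 4, 8, 16):
        print(f"{n:2d} switches/day -> {productive_fraction(n):.0%} productive")
    # 0 -> 100%, 4 -> 88%, 8 -> 75%, 16 -> 50%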
Which leads to the [stunningly obvious] moral of this story: Don’t undertake workloads that are beyond your organization’s capacity.
“Do more with less” has become “Do more with nothing.”
Amen!
In my time as a software consultant, I saw clients struggle time and time again with doing their “day job” while helping design and implement new systems. Management seemed to think that adding a project requiring 20+ hours per week to a person’s, or a group’s, workload would be successful. I wonder whether more projects would succeed if managers thought of their people as computers with finite capacity and acted accordingly.
Unfortunately, the manager who adds the workload gets the improvement (at least on paper), along with a bonus and a promotion.
The poor slob who takes his place is let go when he has to address the damage left behind by his predecessor.
Another analogy for overutilization of capacity is any highway at rush hour. When traffic is at design capacity, it flows smoothly. As we have all seen, once demand exceeds that capacity, the slightest problem (a breakdown, a slow-moving vehicle, even an unexpected lane change that causes drivers to brake suddenly) causes traffic flow to slow dramatically and often back up for miles.
Regarding programmer demand vs capacity, I remember when I was a coder 20+ years ago. I found that once my day approached 12 hours, I started making more mistakes than I was correcting, and it was time to go home no matter how soon the deadline was. I remember late one night rewriting a section of code only to realize after two hours exactly why I had done it that way in the first place. I was very thankful for version control software.
Coincidentally, I recently read that batteries in iPhones and electric cars degrade if regularly charged to 100%, and that optimum battery life is apparently achieved by charging to 80% …
Slack in the system is necessary for many reasons. I’m worried about folks trying to decrease the 40% “waste” in our food system. If that’s decreased at all, I imagine food prices will rise. If we were truly efficient, I suppose we could have zero waste in our food system, but that would put us at risk of famine from any disruption. The alternative would be for everyone to eat preserved (frozen/dried/canned) foods: we’d preserve all fresh food that way and always have a two-year supply on hand.
The most bothersome thing about these initiatives is that the stated reason for cutting down on food production is global warming: reducing emissions. 1) Emissions aren’t the problem; fossil carbon is the problem (we drill 90 million barrels of oil a day). 2) Global warming is going to mess up weather patterns, and weather is crucial for growing food.
It’s going to get harder to grow food in the future. Let’s not make our problems worse by getting “efficient.”
This is why I like reading your columns. Thank you.
Bob,
Your point can actually be proven using basic queuing theory: if one tries to utilize more than 75% or 80% of a shared resource serving a random stream of service requests, response time will explode, and many jobs will be deferred indefinitely. This is a mathematical property that holds whatever the nature of the shared resource and the services provided.
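A minimal sketch of that blow-up, using the textbook M/M/1 result that mean response time is W = 1 / (μ × (1 − ρ)), where ρ is utilization; the unit service rate is an arbitrary assumption:

    # M/M/1 mean response time as utilization approaches 100%.
    def mean_response_time(rho: float, mu: float = 1.0) -> float:
        """Mean time in system for an M/M/1 queue, valid for 0 <= rho < 1."""
        return 1.0 / (mu * (1.0 - rho))

    for rho in (0.50, 0.75, 0.80, 0.90, 0.95, 0.99):
        print(f"utilization {rho:.0%}: response time x{mean_response_time(rho):.0f}")
    # 50% -> x2, 75% -> x4, 80% -> x5, 90% -> x10, 95% -> x20, 99% -> x100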
True conversation:
Boss: When will you be done?
Me: End of day tomorrow.
Boss: What?? Why so long??
Me: In case anything unexpected happens.
Boss: Like what??
Me: I do not know.
Boss: What??? Why don’t you know!!!???
Me: If I knew, it would not be “unexpected.”
Brings back “fond” memories of having to defend “contingency” as a line item for a capital project. Pretty much the same conversation, only with the CFO.