ManagementSpeak: We’ve kind of opened a can of worms.
Translation: It’s a 55-gallon drum of live, angry rattlesnakes. You get to put the lid back on.
KJR Club member Ed Glasheen helps us differentiate between legless critters.
Year: 2006
Truth or unintended consequences
The “law of unintended consequences” isn’t a statute whose violation carries a stiff fine, although some managers behave as if it were just such a regulation.
When it comes to this so-called law, intellectual laziness imposes a double whammy. The first is that people hear the phrase and immediately assume they understand what’s behind it, without any actual research or thinking. The second is that most unintended consequences are the result of taking action without much actual research or thinking.
Someone once estimated the computation required for the perfect game of chess. He used the theoretical minimum time and energy required to perform a computation, and a reasonable estimate of the number of possible different permutations of chess moves (the number is astronomical but finite). He concluded that even a computer built from all the mass and energy of the universe couldn’t have computed the perfect chess game in the time that has passed since the big bang.
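For the curious, the arithmetic is easy to sketch. Here’s a rough back-of-envelope version in Python, using round figures that aren’t from the column itself: the Shannon number (roughly 10^120 possible chess games), Bremermann’s limit (roughly 1.36 × 10^50 operations per second per kilogram of matter), and an assumed cost per game evaluated.

```python
# Back-of-envelope check of the perfect-chess argument. All figures
# are rough, assumed constants; none come from the column itself.

SHANNON_GAMES = 1e120        # ~number of possible chess games (Shannon's estimate)
BREMERMANN_LIMIT = 1.36e50   # max operations per second per kg of matter
UNIVERSE_MASS_KG = 1e53      # rough mass of the observable universe
UNIVERSE_AGE_S = 4.3e17      # ~13.8 billion years, in seconds
OPS_PER_GAME = 1e3           # assumed cost to generate and score one game

total_ops = BREMERMANN_LIMIT * UNIVERSE_MASS_KG * UNIVERSE_AGE_S
games_evaluated = total_ops / OPS_PER_GAME

print(f"Operations since the big bang: ~{total_ops:.1e}")
print(f"Games that could be evaluated: ~{games_evaluated:.1e}")
print(f"Games to evaluate:             ~{SHANNON_GAMES:.1e}")
print("Enough time?", games_evaluated >= SHANNON_GAMES)
```

Even granting the universe-sized computer every physical advantage, the budget comes up short, and the margin is narrow enough that the verdict turns entirely on the assumptions. That’s rather the point: astronomical but finite.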
Most figure the law of unintended consequences refers to this sort of situation, where the intrinsic complexity of most real-world physical and social systems makes predicting the outcomes of even simple actions impossible. And so, goes this line of thinking, there’s really no point in trying. Just do what you want to do and don’t worry about it.
It’s an excuse. Most consequences, while unintended, are actually quite easy to predict. In fact, they usually were predicted, only nobody wanted to listen to the inconvenient predictions.
An example from my sordid past: About 15 years ago, a couple of PC-based COBOL compilers appeared. They weren’t cheap, nor were the high-end PCs required to run them.
Another manager decided they were just the ticket for offloading the mainframe during the application development process. When I asked whether the product shipped with a CICS emulator, VSAM emulator, and emulators for the other bits and pieces of our mainframe environment needed for a program to actually do anything, I got hand-waving in response.
Sadly, at the time I lacked the character, manners, and political judgment to avoid saying I-told-you-so when the workstations turned out to be useless. The manager, on the other hand, waved off the fiasco by chalking it up to something we couldn’t have foreseen without giving it a try. Waving, it appeared, was his core competency.
It turns out that the law of unintended consequences never did mean what most people think it means. The American sociologist Robert Merton explored the subject in depth back in 1936 (I’m relying on Rob Norton’s excellent synopsis in the Concise Encyclopedia of Economics). Merton’s analysis recognized five causes of unintended consequences. They are (I’m paraphrasing): bad logic, simple ignorance, willful ignorance, basic values, and self-preventing prophecy.
The definition of bad logic is self-evident, as is the connection between it and unintended consequences. The rest call for explanation.
The difference between willful ignorance and simple ignorance is that willful ignorance refers to the deliberate wearing of conceptual blinders, ignoring whatever evidence and lines of reasoning don’t lead to the desired conclusion. Those who are simply ignorant have little or no evidence of any kind to draw on.
“Basic values” is akin to what ethicists call deontological thinking. It means applying your values to the action itself, rather than its consequences — so of course the consequences will be unintended. Merton took this a step further, reflecting on situations where applying values leads to their violation. He used the example of “… the Protestant ethic of hard work and asceticism.” Since this results in the accumulation of wealth, it ends up leading to its own abandonment.
And finally, there’s the one I don’t think fits: self-preventing prophecy. The whole purpose of forecasting disaster — whether the forecast was Bob Metcalfe’s prediction of the Internet’s collapse or climate scientists’ warnings of global warming — is to prevent the problem’s occurrence. Calling the results unintended would seem to miss the point.
Which leads, at last, to you. You have two choices. You can do your best to think things through. Or you can rely on the chess-game definition of the “law” and use it as an excuse. But that’s all it is, because chess players don’t talk about the law of unintended consequences. They talk about establishing strong positions, and about thinking further ahead than their opponents.
It’s like this: Most unintended consequences are the result of conscious decisions — to not think, to not consider evidence, or both. It isn’t actually a paradox, although it sounds like one: The consequences might be unintentional, but their cause is deliberate.