Yesterday, I took a deep dive into different levels of the idea of unintended consequences. Here, I want to focus on the implications of different types of unintended consequences. Specifically, I’m looking at how we should apply these ideas to policymakers, and the policies they enact in order to achieve specific goals.
The simplest level is consequences that are anticipated even though they are not strictly intended. I used the example of a medication that is known to cause drowsiness as a side effect. When you take that medication, you don’t do it with the intent of becoming drowsy, but you can nonetheless anticipate it will occur. In this kind of situation, the usual dictum is to ensure that the treatment isn’t worse than the disease. In the world of policy, the dictum is to be sure that the benefits of the policy will outweigh the costs.
Consequences which are unintended as well as unanticipated are harder to judge. Part of how we evaluate them turns on whether the unanticipated consequences are deemed to have been predictable (or at least reasonably so). That’s why the phrase “you should have seen that coming” usually denotes blame, while the phrase “there’s no way you could have known that would happen” is considered exculpatory. I think hindsight bias leads us to overestimate how predictable a specific unanticipated consequence should have been. Descriptions of how programs have backfired due to unanticipated consequences are often tinged with a sort of schadenfreude, or a barely concealed smugness about how these fools didn’t see how things would turn out even though it’s so obvious. I’m not suggesting it’s never true that an adverse outcome could have been reasonably predicted. But I do think we should be cautious in saying so, given that we have the advantage of being in the future and are looking back on the outcome. Just because something seems obvious to you in hindsight doesn’t mean it would have been as obvious to you prospectively.
However, my charity only extends so far. While I’m willing to grant that many (perhaps most) adverse outcomes probably couldn’t have been specifically predicted, I’m less forgiving on the meta-level issue of the predictability of predictability. For me, most of the time, the seemingly exculpatory statement “There’s no way you could have predicted this outcome” is immediately followed by the indictment “and you should have known that.” The more complex a system is, the less predictable the outcomes of our interventions will be. Martin Gurri put it well in his book The Revolt of the Public:
Our species tends to think in terms of narrowly defined problems, and usually pays little attention to the most important feature of those problems: the wider context in which they are embedded. When we think we are solving the problem, we are in fact disrupting the context. Most consequences will then be unintended.
To which I would add, not merely unintended, but also unanticipated in a way that was predictably unpredictable.
Here, however, one might suggest that just because there will be unanticipated and unpredictable outcomes, that doesn’t mean these outcomes will be deleterious. Perhaps they will be salutary instead? Albert Hirschman suggests this in his book The Rhetoric of Reaction, arguing “it is obvious that there are many unintended consequences or side effects of human actions that are welcome rather than the opposite.” What should we make of this possibility?
While it’s possible that unanticipated consequences might turn out to be beneficial, this doesn’t do much to recommend them. More importantly, the more complex, dynamic, and interwoven a system is, the less likely it is that unanticipated consequences of an intervention will be beneficial. It’s an unfortunate fact that there are more ways to make things worse than to make things better.
The human body is one example. Our biology is highly complicated and still not fully understood. This system is, by virtue of its complexity, also very delicate. There are far more ways to injure or sicken someone than to heal them. Without detailed knowledge of human physiology, most interventions will be damaging, if not outright fatal. Only recently have doctors begun to understand the human body well enough to provide genuine and consistent improvements to health.
The biosphere is another example. We simply don’t understand the natural order well enough to carry out targeted interventions in a way that brings about specific results. When Australian authorities decided to reduce the beetle population by introducing the cane toad into the local ecology, there was no way they could have anticipated how much the ecological equilibrium would be disrupted. But the fact that they couldn’t have anticipated the outcome is itself something they should have anticipated. Altering the ecosystem is an area where the political left tends to be very sympathetic to arguments about complexity and negative unintended consequences. They freely grant, at least on this topic, that intervening in a system that is highly complex and only partially understood is far more likely to do damage than good. If you granted that we don’t understand ecology well enough to fully predict the outcomes of our interventions, but followed up by suggesting that this shouldn’t discourage us from intervening because maybe those unexpected outcomes will actually be improvements, almost nobody would find that compelling.
Those of us with a strong prior against intervention in the market order see things in the same way. Indeed, it’s very common for libertarians and classical liberals to explicitly describe both the market order and social order as an ecosystem – that is, a complex adaptive system that can’t be fully understood, predicted, or reliably controlled or steered by targeted interventions. In that kind of system, interventions made with limited and partial understanding are far more likely to cause more overall harm than good. Those who bring about this harm are properly blameworthy, because of their fatal conceit.
Kevin Corcoran is a Marine Corps veteran and a consultant in healthcare economics and analytics and holds a Bachelor of Science in Economics from George Mason University.