Simple moral theories are almost always easy to refute with simple hypotheticals. Yet in the real world, right and wrong rarely seem ambiguous to me. The reason isn’t that I think that consequences don’t count. I take consequences seriously. My moral judgments are clear-cut largely thanks to what I’ll call the Five-Organ Hypothetical, also known as the “Cut Up Chuck” Case:
The “Cut Up Chuck” Case: A homeless guy named Chuck comes into the ER with a treatable leg wound. But instead of treating his wound, Dan the ER doctor realizes that Chuck’s heart, lungs, liver, and kidneys are all healthy and in fact are all matches for five patients upstairs who are at death’s door and in need of donor organs. So Dr. Dan cuts up Chuck, passes out his organs, and saves five people who otherwise would have died. Question: Did Dr. Dan do the right thing?
Like almost everyone, I’m convinced that Dr. Dan did wrong. What’s distinctive about my position is that I leverage this hypothetical to resolve real-life moral dilemmas involving murder, slavery, theft, dishonesty, and other no-nos of common sense morality. (And from there, a strong libertarian presumption readily follows.)
Admittedly, if you raise the stakes above 5:1, I might change my mind. Most people do. And I don’t doubt that benefits far in excess of 5:1 occasionally occur. But that’s mostly in 20/20 hindsight. In the real world, one can rarely reasonably expect the benefits of violating common sense morality to exceed the 5:1 threshold. The world’s just too uncertain.
My colleague Garett Jones suggested that the Five-Organ Hypothetical works for murder, but not much else. I agree that murder is the most clear-cut case – with strong implications for the moral permissibility of warfare. But the same intuition works for many other moral constraints. Stealing a car to save the world is morally laudable. But almost everyone can see that stealing a car to slightly raise social utility is wrong – even if you return the car to its original owner with a full tank of gas. When is car theft morally justified? It’s hard to give a precise answer, but a 5:1 minimum cost-benefit threshold is quite plausible.
What about lying? Social psychologists document that most people lie without guilt on a regular basis. Part of me wants to just condemn mankind’s low regard for truth, but on reflection, we still seem to be in the 5:1 ballpark. Most people self-righteously lie if they think that the social benefits are considerably greater than the costs: lying under duress, little white lies, etc.* Lying to make the world a slightly better place? Not cool.
Of course, bullet-biting consequentialist philosophers will either ignore these intuitions or reinterpret them as concern for less obvious consequences. But their favorite trick is to act as if we have to choose between their bizarre view and the even more bizarre Kantian view that it’s wrong to lie to save your kids from an ax murderer. In reality, these two bizarre positions are merely endpoints on a moral continuum – and the plausible positions are somewhere in the middle.
* Indeed, even my case for stricter honesty comes down to, “There are effective rhetorical substitutes for lying, so you have much less moral excuse for lying than you think.”
P.S. I’m off for GenCon. If you see me there, please introduce yourself. And by the way, there’s still one available seat at “Capgras Conspiracy,” one of the three games I’ll be running.
READER COMMENTS
Lee Kelly
Aug 3 2010 at 2:50pm
The traditional problems of moral philosophy concern how people justify their actions or beliefs. Students are taught to seek some apodictic foundation or self-evident first principle, and then construct a morality anew by pure ratiocination. Hayek called this style of thinking “rational constructivism.”
Justification–at least the kind pursued by philosophers–is impossible, by its own standards. If one starts from the position that justification is neither possible, desirable, nor necessary, then unjustifiability cannot function as a rational criticism, because it cannot distinguish between alternatives.
A more interesting problem for moral philosophy, in my view, is something like: how can we live peaceably, despite our conflicting interests, ends, and beliefs? The answer to this question begins by acknowledging that we are doing a pretty good job already. From there, we are more concerned with institutions and norms.
The point here is not to justify anything by some first principle, but merely to solve a problem. That some people will not share this problem is unfortunate, but irrelevant. Moral philosophy should not start by declaring that utilitarian- or Kantian-like rules are better or worse, but should discover which are. I suspect the truth lies somewhere in the middle.
Steve Z
Aug 3 2010 at 4:09pm
Bryan,
Are you then really resolving moral questions, or merely coming up with heuristics that approximate a resolution to moral questions?
If the hypothetical is that you know to a certainty that murdering A will mean that five people with a utility function exactly equal to A will live, and nothing else, is it wrong to murder A? If yes, then why would changing the numbers around matter?
James
Aug 3 2010 at 7:42pm
Bryan: I value a dollar five and a half times as much as you. See where this winds up?
ChrisA
Aug 3 2010 at 8:43pm
Why are people still trying to construct consistent moral theories, given what we now know about how the brain works? It would be surprising if the moral structures in the brain were consistent beyond what is needed to keep small groups of technologically unsophisticated individuals cooperating. Morals and ethics are, without any doubt, tools developed by the human brain through evolution to allow big-brained apes to work together in groups; when you generalize to examples outside of this scenario, you get inconsistencies, and your moral structures become confused. Bryan rarely sees a problem in real day-to-day life because glaring violations of the moral rules in our brains are quickly perceived and corrected. Of course, subtle ones persist all the time and are rationalized away – a classic example is starving kids in Africa versus the next Starbucks.
Lee Kelly
Aug 3 2010 at 9:04pm
“Why are people still trying to construct consistent moral theories given what we now know on how the brain works?”
Erm, because if we are smart enough to “know how the brain works” (an incredible achievement, really), then surely we are also smart enough to construct a consistent moral theory.
And if we are not capable of constructing a consistent moral theory, then I seriously doubt that we are smart enough to “know how the brain works.”
Of course, that a moral theory is consistent doesn’t mean it’s any good.
Hyena
Aug 3 2010 at 10:04pm
This is a problem with theories of morality in which you have a singular value like utility or “adherence to duty.” However, if you have multiple values, you don’t get this problem, and moral dilemmas that are real in theory and that mimic how we feel about ethics in practice start to pop up.
BZ
Aug 3 2010 at 11:31pm
The “Cut Up Chuck” example is perfect for Dr. Caplan’s case. But all the elucidations of utilitarianism lately have clarified for me personally why I can’t buy it. Non-hypothetically, right now, you could make a case that killing 1 out of every X thousand people could save those among the thousands who need organs. Since your own prison time is insignificant in this utility function, I would suggest that the utilitarians among you grab a gun and a knife and get to work!
Or perhaps, you’ll have a moment of moral insight and become one of those who walk away from Omelas….
http://harelbarzilai.org/words/omelas.txt
@ChrisA:
“Why are people still trying to construct consistent moral theories given what we now know on how the brain works?”
Knowing how the brain works won’t tell you whether you should nurture your offspring or eat them. It will only tell you which way your passions lie. Such thinking attempts to evade moral questions, not answer them.
Doc Merlin
Aug 4 2010 at 12:45am
Only 5+? Really? Wow…
You could really make the case that a poor person who works 80-hour weeks at McDonald’s values money at least 5 times more than a rich man who got his money from a trust fund. From there you can quickly get to forced redistribution (or possibly legalizing theft from the rich), and neither you nor I would be comfortable with that.
Gian
Aug 4 2010 at 1:26am
Mr Caplan,
Your numerology is odd, and odder still is that you seem to think murder is justified if N people benefit, where N >> 5.
Is this moral theory your own invention or does it have a name?
I thought you were a moral absolutist. I took that to mean a believer in the existence of moral propositions such as “Do not murder.” Have you disowned your previous beliefs?
ChrisA
Aug 4 2010 at 2:22am
Lee
You said;
“Erm, because if we are smart enough to “know how the brain works” (an incredible achievement, really), then surely we are also smart enough to construct a consistent moral theory”.
Why does that follow? We seem to have made pretty good progress in the last couple of centuries in understanding how complex biological organisms work and evolved (I agree, an incredible achievement, and one I surely don’t need to reference for you). I don’t see any progress at all in arriving at a moral theory that “works” and is consistent, i.e., one that can’t fail the kind of test Bryan proposes. We are still re-hashing the ancient Greeks.
To put my argument another way – most people are not moral philosophers who have developed their moral senses through reasoning, yet most people have a keen sense of right and wrong, even four- or five-year-olds. Where could this come from if it is not genetic? If it is genetic, then it is a product of evolution and manifested in structures in the brain. What was the “purpose” of evolution putting these structures in our brains to tell right from wrong? I (and many others) suggest it is so that we can work together as groups rather than as individuals. That is not unusual – chimpanzees and many other animals clearly have genetic programming to work together in larger groups to improve their survival (I assume no one thinks that chimps reason their way to this position). It also makes sense given what we know of human history.
So our moral sense is a kludge or algorithm – it works well enough to allow us to interact in small groups of hunter-gatherers, but there is no reason for it to be consistent in a wider situation, and guess what, that’s exactly what we find.
roman adhikari
Aug 4 2010 at 2:38am
We need to consider every aspect of a game before implementing it in society. Society does not value only one factor and leave all others behind. It is obvious that we human beings seek benefits. We should care about mankind, but for mankind’s sake we cannot practice things whose negatives outweigh their positives. Culture, ethnicity, cost, and morality cannot be judged on one scale. Yes, we can save the lives of five people by killing one man, but I am not sure in which direction this practice leads human rational thinking.
Human life is the most essential thing, not by numbers but by norms and values. The overall effect of any action should be morally justifiable before it is implemented, rather than justified by the postulates of some theory. Cost is not always the issue; we should balance cost against sound moral ethics and values.
sean
Aug 4 2010 at 3:54am
Why do all these hypotheticals so often include a homeless man? I find that the most interesting thing about our moral intuitions.
floccina
Aug 4 2010 at 11:46am
An interesting application of this is preemptive bans due to environmental or other dangers that are far greater than the perpetrator’s ability to compensate. This came up in the debate over BP’s spill and the ban on new deep-water drilling, when the damage was thought to be much greater than it now looks like it will be.
It also comes up in the area of gun control. I have no problem with my neighbor having an assault rifle, but a bomb that can take out the whole neighborhood in one shot? No.
azmyth
Aug 4 2010 at 1:15pm
I disagree with the notion that the benefits must be 5 times greater than the costs. Any action where the benefits are greater than the costs is morally acceptable, but one must include all costs, including opportunity costs. Opportunity cost is based on alternate courses of action. The best alternative to murdering homeless people for their organs is not letting people die who need transplants – it’s allowing a free market in organs. I don’t think you ever need fancy ratios, just a clear-headed analysis of all the costs and benefits of an action and an honest appraisal of alternate courses of action.
Pandaemoni
Aug 4 2010 at 4:32pm
I do not understand this in one respect: How are you distinguishing “little white lies” from lies that “make the world a slightly better place”?
It seems to me that the essence of a little white lie is that it makes (and is limited to making) the world a slightly better place. I have never told a little white lie that had a truly significant positive impact. Usually they bypass minor unpleasantness or give people a minor boost in self-esteem.
Peter
Aug 4 2010 at 7:19pm
Just an amusing note on honesty .. from the current USAA poll:
What’s your favorite kind of advice?
The good kind 33.8%
The free kind 24.6%
The honest kind 19.9%
All of the above 21.7%
Nice to see honesty loses 🙂
roversaurus
Aug 5 2010 at 1:03pm
Gencon!???
I’m there! I’ll be looking for you!