
I just completed Tyler Cowen’s thought-provoking new book entitled “Stubborn Attachments”. In the book, he explains his views on the appropriate ethical perspective to use when thinking about public policy issues. Here Tyler discusses one common complaint about utilitarianism:
Are you willing to value the interests of others on par with your own, or those of your family or friends? If you buy into the standard utilitarian logic of beneficence, a mother might have to abandon or sell her baby in order to raise money to send food to the babies of others. At this point most people balk at the argument and search for some moral principle that limits our obligations to the very poor.
One problem is that the needs of the suffering are so enormous that only a few able or wealthy individuals would be able to carry out individual life projects of their own choosing. Most people would instead become a kind of utility slave, serving only the interests of others and feeding themselves just enough to survive. The result is that utilitarianism—or many forms of consequentialism, for that matter—is often seen as an excessively demanding moral philosophy. People fall into two camps: those who reject utilitarianism for its extreme and unacceptable implications, and those, like the early Peter Singer, who trumpet the call for greater sacrifice and pursue the utilitarian logic to a consistent extreme.
There is no single argument capable of rebutting this critique of utilitarianism. Nor is there any pair of arguments. There are three arguments, however, that in combination do rebut it, and restore utilitarianism to its rightful place at the core of any sound moral theory. I’ll take these three arguments one at a time:
1. The selfish gene.
2. What do you mean we, white man?
3. Think globally, act locally.
Let’s start with selfishness. What does it mean to say a moral philosophy is “excessively demanding”? Christianity calls for “turning the other cheek”. How many people are capable of doing this? Does that make Christian morality wrong?
Nature has hard-wired us to be somewhat selfish, favoring our own interests over those of the people closest to us, and their interests over those of strangers. There are evolutionary models that predict this sort of selfishness. But the fact that people are naturally inclined to occasionally be selfish, or violent, or racist, or sexist, or cruel, or lazy, or dishonest, has no bearing on whether those traits should be considered good. I have no problem admitting that I fall far short of being an ideal person, or that Mother Teresa was much more worthy of admiration than I am. I’m much more selfish than I would be if I strictly adhered to the utilitarian moral code.
The world would probably be a better place if I wrote a check for $10,000, gave it to Tyler, and asked him to pass it on to the Ethiopian family that he plans to support with the royalties from his book project. But I don’t plan to do so. It would be comforting for me to dream up some sort of moral theory that exalted selfishness; but that would be dishonest, as deep down I would not believe it.
On the other hand, I’m not completely selfish. People I speak with are occasionally surprised when I say I favor the recent tax changes that hurt California residents. “You live in California now, why don’t you support unlimited deductions for state and local taxes?” Umm, because I’m not that selfish. Utilitarianism is a sort of lodestar, like the teachings of Jesus. Sensible people understand that most of us will fall well short of perfection, as we are highly fallible human beings. But that doesn’t mean that what seems good is not actually good, just because it’s difficult to achieve. My suggestion to people contemplating Peter Singer’s advice? Just do the best you can.
The second item on my list is the punch line from a joke referencing the famous radio show featuring the Lone Ranger and his trusty sidekick Tonto (who was a Native American). When Tyler implies that most of us recoil from a moral theory that demands that affluent Westerners give a big chunk of their income to poor residents of developing countries, I wonder whom he means by “most people”.
I say “Tyler implies”, as he’s too smart to actually come out and make that precise argument; rather, he uses an even more extreme example involving the sale of babies, which I’ll address in the third point. But others often do make this argument about charitable giving, so I’d like to deal with it here. It’s quite likely that if you polled all 7.3 billion humans, a majority might support large transfers of wealth from the rich countries to the not-so-rich countries. So it’s not at all clear that “most people” find this utilitarian implication to be unpalatable. On the other hand, the sale of babies might be a bridge too far, even for poor people in developing countries. I’d guess that most would not recommend that American moms sell their kids to raise money to deliver food to Guatemala.
One of the strengths of Tyler’s book is that he explains how the demands of utilitarianism are actually not as great as they seem:
I’ll instead focus on the broader conceptual question of whether growth or redistribution—in the public or private sector—is a more effective means of helping the poor. When framed in this manner, we’ll see that there are some strong and strict limits on our obligations to redistribute wealth, even if we accept the full utilitarian framework. (emphasis added).
People are most effective when they focus on helping their own children, not kids who live 10,000 miles away, about whom they know little or nothing. Think globally, act locally. That doesn’t mean that all charity is ineffective—not at all—rather that one should not naively assume that $1000 can be transferred from the rich to the poor at anything close to zero cost. This is a common mistake made by moral philosophers, who tend to underestimate the importance of economic incentives. I’ll explore how Tyler addresses these issues (the most interesting part of his book) in a future post.
These three arguments need to be taken jointly, not one at a time. Starting with the third argument, the demands of utilitarianism are large, but not as large as suggested in thought experiments by moral philosophers. If the thought experiment seems nightmarish, say selling babies, there’s a good chance that’s because the policy would not in fact make the world a happier place. Any time you see a nightmarish utilitarian thought experiment that makes you recoil, remember that any truly utilitarian policy will make the world a happier place.
Here you might be thinking, “Of course Sumner would claim that utilitarian policies make for a happier world, as he’s a utilitarian.” No, I’m not “claiming” that utilitarian policies make for a happier world; that’s the definition of utilitarian policies.
Once we understand that utilitarianism calls for some sacrifices, but ones nowhere near as draconian as those suggested by some moral philosophers, we might still be confronted with the fact that many affluent people will recoil from the implied obligations. But the purpose of moral philosophy is not to describe what’s best for a small minority of affluent people; it’s to describe what’s best for humanity as a whole. It’s not at all implausible that the world would be better off if more money were donated to poor people in Ethiopia. And the fact that selfish people like me don’t want to do so doesn’t change the fact that this would make the world a better place (unless the disincentive effects of charity are even greater than I assume).
When a thought experiment seems to reach a “repugnant conclusion”, ask yourself if you are thinking about the experiment in the correct way. Here’s Tyler:
Parfit’s repugnant conclusion compares two population scenarios. The first outcome has a very large, very fulfilled, and very happy population. The world also has many ideal goods, such as beauty, virtue, and justice. The second outcome has a much larger population, but few, if any, ideal goods. Parfit asks us to conceive of a world of “Muzak and potatoes.” Nonetheless, the lives in this scenario are still worth living, although perhaps by only the tiniest of margins. Parfit points out that if the population of the second scenario is large enough, the second scenario could welfare-dominate the first.
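Before responding, it may help to spell out the arithmetic behind “welfare-dominate”. Here is a minimal formalization; the notation (N for population, u-bar for average utility, W for total welfare) is my own shorthand, not Parfit’s or Tyler’s:

```latex
% Total welfare of a scenario = population size times average utility.
W_A = N_A \, \bar{u}_A \qquad \text{(small, very happy population)}
W_B = N_B \, \bar{u}_B \qquad \text{(huge population, lives barely worth living)}

% Scenario B ``welfare-dominates'' scenario A whenever
N_B \, \bar{u}_B \;>\; N_A \, \bar{u}_A ,

% which holds for ANY average utility \bar{u}_B > 0, however tiny,
% once the population is large enough:
N_B \;>\; N_A \, \frac{\bar{u}_A}{\bar{u}_B} .

% Illustrative numbers (my assumption): with N_A = 10^6 and \bar{u}_A = 100,
% a ``Muzak and potatoes'' world with \bar{u}_B = 1 dominates once N_B > 10^8.
```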
In my view, Derek Parfit’s thought experiment, like dozens of other similar anti-utilitarian examples, is nothing more than a cognitive illusion, artfully presented to lead the reader astray. In this case, the reader is tricked into thinking about the example from a sort of “veil of ignorance” perspective. Which society would you rather live in? But that’s not the question. The question is not which society is preferable to live in; rather, it’s whether you’d prefer living in the poor one, or having a 0.01% chance of living in the nice one (and a 99.99% chance of never existing at all).
Most people want to live.
Let’s reframe this thought experiment. Imagine a kingdom in central Asia with 1000 people living in a beautiful town in the mountains in the far north of the country. They are a powerful clan that is entitled to receive all of the oil revenues of the kingdom. They enjoy the arts and have an enviable moral code based on their Buddhist religion. It’s a lovely place. This mountain town is surrounded by a vast dry hinterland with 25 million poor, uneducated peasants. An asteroid is approaching Earth, and seems likely to destroy the hinterland of this kingdom, while sparing the affluent mountain town of 1000. NASA can divert the asteroid to save the hinterland, but only by destroying the mountain town. Should it do so? I say yes. Those who feel Parfit’s thought experiment is repugnant might say no. After all, the repugnant conclusion argument is used to suggest that the world would be a better place with the 1000 lucky residents of the mountain town, rather than the 25 million downtrodden residents of the hinterland.
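To put rough numbers on this (the utility weights below are purely illustrative assumptions of mine, not anything from Tyler’s book): even if we grant each resident of the mountain town a life one hundred times better than a peasant’s, the totals are not close:

```latex
% Assumed weights: each town life = 100 units, each peasant life = 1 unit.
W_{\text{town}}       = 1{,}000 \times 100 = 1.0 \times 10^{5}
W_{\text{hinterland}} = 25{,}000{,}000 \times 1 = 2.5 \times 10^{7}

% The hinterland outweighs the town by a factor of 250, so on any
% utilitarian accounting NASA should divert the asteroid, even after
% granting an extreme quality-of-life premium to the town.
```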
Notice that this is another example of overlooking “Tonto’s perspective”.
PS. Just to head off some familiar comments, I’m a “rules utilitarian” who believes that the world is better off with a set of laws banning censorship, murder, and certain other bad things, even if there are some public utterances, and some people, that the world would be better off without, judged individually on a strict utilitarian framework. So my policy views end up pretty close to those of some natural rights advocates, except I don’t think the rights are “natural”, but rather a “useful fiction”.
READER COMMENTS
Alan Goldhammer
Nov 20 2018 at 4:11pm
Good post. I read Tyler’s book when it was still a PDF pre-print over a year ago. I don’t know whether it falls into the ‘utilitarian’ category but there is the Giving Pledge that a number of high wealth individuals have signed onto. Some of the pledged money has already been given. Michael Bloomberg donated $1.8B to Johns Hopkins, his alma mater, so that no qualified student would ever have to worry about costs if they are admitted.
Hopaulius
Nov 25 2018 at 10:37am
“no qualified student would ever have to worry about costs if they are admitted.” That is quite the filter! This year JHU accepted 2,894 undergraduates, of whom 1,319 enrolled. Now you have to look at how many of those are legacies, and how many come from truly impoverished backgrounds. I can’t see how this is utilitarian according to Scott’s definition. It’s more a matter of helping mini-Bloombergs.
Mark Z
Nov 20 2018 at 5:18pm
I think Tyler’s point about people’s apprehension regarding selflessness might be more morally relevant than you give it credit for in the context of rule consequentialism. It is very likely that a set of rules or norms that permits or even encourages people to enjoy the fruits of their labor, rather than donating most of them to charity, is the better one from a utilitarian standpoint. Not only do people tend to be less productive when they are forced to give much of what they produce to others, but even if people felt that if they did earn significantly more, they would be morally obligated to give more to those poorer than themselves, they might feel disinclined to work long hours at a very difficult job knowing they couldn’t, in good conscience, enjoy what they earn from it.
In other words, it is possible that a set of norms and rules based on Randian self-interest is the optimal one from a utilitarian perspective and, ironically, encouraging Singerian altruism could actually be harmful from a utilitarian perspective.
(A final note: I wouldn’t presume that people who find full-fledged Singerian utilitarianism revolting find it so for merely selfish reasons; speaking as a non-utilitarian myself, I think most such people find it perhaps even equally revolting to think that others are morally obligated to self-sacrifice to such an extent for their sake).
Benjamin Cole
Nov 20 2018 at 7:43pm
Perhaps utilitarianism is a lot like monetary policy.
We can all agree on the results we want: in the case of monetary policy, solid growth and modest inflation. But the long daggers come out if one suggests a different pathway to that desired result.
I want the world to be a happier place too. I probably even agree with Sumner on most methods to achieve that end. But I can see that other people would disagree on how to achieve that end, or even if that end is achievable through public policies.
I did, however, always admire a good utility infielder.
Lawrence D'Anna
Nov 20 2018 at 9:43pm
“The selfish gene” refers to genes being selfish, not people. It’s the title of a book that argues that genes are the primary unit being selected by evolution, rather than organisms or groups. It doesn’t mean that humans are (or aren’t) genetically programmed to be selfish.
Philo
Nov 21 2018 at 12:12am
“The world would probably be a better place if I wrote a check for $10,000, gave it to Tyler, and asked him to pass it on to the Ethiopian family that he plans to support with the royalties from his book project.” I see absolutely no reason to accept this probability estimate. I suspect that, rather than using it for your own consumption, you would invest your marginal $10,000, thereby increasing future production of goods for consumption. On the other hand, the Ethiopian family would come closer to using the $10,000 for immediate consumption. By so doing they would make consumption increase in the short run, which is good in itself; but in the long run consumption would decrease, which is bad in itself. And the world’s being a better or worse place is a matter of the long run.
BC
Nov 21 2018 at 12:23am
It seems like there are two types of utilitarianism. The first is where a central authority or third-person observer tries to imagine how happy he (or she) would be if he were in everyone else’s shoes in each state of the world and that total utility is what is to be maximized. We might call that centralized utilitarianism. The second type is where we try to maximize the sum total of all individuals’ happiness as each individual perceives their own happiness. We might call that distributed utilitarianism, recognizing that the information required to determine total happiness is distributed among all individuals, not necessarily known to any central authority. It seems like distributed utilitarianism is closer to the ideal of “making a happier world” as it concerns actual people’s happiness rather than the hypothetical happiness that a central authority would have experienced had he been in each person’s shoes. However, it seems like lots of utilitarian puzzles (trolley problem, redistribution, etc.) are approached from a centralized utilitarian perspective.
For example, a Coasean approach to the NASA asteroid problem might involve asking whether the 1000 affluent would pay more than the 25M peasants to have NASA divert the asteroid. That would be distributed utilitarian. The centralized utilitarian approach, which seems more common, is to assume that most third-person observers would place an equal value on each person’s life and, hence, direct the asteroid to the mountains. But, we know that people do not all put the same value on their own lives because some people are willing to do risky jobs like coal mining while others are not. For example, we might be able to hire 25M peasants to work in coal mines for less total money than it would take to hire 1000 affluent mountain villagers. Similarly, it’s not obvious to me that an American mother would pay less to save her own baby than many poor African mothers pay in total to feed their children.
On redistribution, most people think it’s obvious that a poor person would gain more utility from $100 than a rich person would. But, maybe that’s just because we’re biased. Suppose the poor person would spend the $100 to buy blankets to keep his family warm in the winter while the rich person would buy caviar. Because it’s easier for us to imagine how much utility blankets would bring us if we didn’t have any than to imagine how much we would enjoy caviar if we already had more blankets (and other stuff) than we ever needed, we empathize more with the poor person. Our current or imagined utility preferences more closely align with the poor person’s. Objectively, though, if I try to craft a property right, the trading of which would reveal that the poor person values blankets more than the rich person values caviar, I’m not sure how to do it. If I ask the poor person how much I would have to pay him to accept some hypothetical externality that would destroy his blankets, he will answer $100 — the same amount that I would have to pay the rich person to accept an externality that would destroy his caviar. I only know that the poor person values his blankets more than anything else he can buy for $100, and the rich person *also* values his caviar more than anything else he can buy for $100. Maybe the lesson is that we all have a cognitive bias that causes us to instinctively favor more redistribution than purely (distributed) utilitarian concerns would otherwise merit.
There might be a close alignment between natural rights and distributed utilitarianism if the natural rights, and exercise thereof, help reveal individuals’ distributed utility preferences.
ChrisA
Nov 21 2018 at 1:58am
As probably one of those regular commentators, I wish I had more time to debate this; it is a fascinating conversation. But my regular day job intrudes too much. So I will confine myself to a perhaps snarky comment: what Scott (and maybe Tyler) propose doesn’t sound much like utilitarianism. It sounds more like what I propose, in that you start with a conclusion (I should support my family a lot and provide less and less support to people I don’t know) that is probably driven by genetics and then argue why this is correct. I personally believe we can’t do anything other than this in terms of morality. I don’t believe in any over-arching moral law, but I behave perfectly well according to the usual moral norms of Western society. Not because I reasoned my way to behave this way, but because I am made this way. And I suspect most everyone else behaves in the same way for this reason. Just to illustrate the point: Do five-year-olds know what is right and wrong? Generally yes, according to society’s rules, but I am sure they didn’t reason their way there. Do wolves act in a pack-like manner because they have come up with this by reason as the best way to hunt? No, it is instinct. Etc., etc.
RPLong
Nov 21 2018 at 8:34am
In my opinion, utilitarianism works best when it’s used as a moral standard, rather than a moral system.
It is highly effective to consider the utilitarian consequences of a particular moral decision. Doing so gives you insight into the costs and benefits of the decision, who benefits and by how much. This kind of utilitarianism is obviously wonderful.
Another way to apply utilitarianism is to try to become some sort of monomaniacal utility robot, applying a utility evaluation to absolutely everything that you do: is it ethical to leave the water running while you brush your teeth, is it ethical to adopt a family dog – okay then what if the dog would otherwise be euthanized? – is it ethical to eat meat more than X times per week, is it ethical to buy my toddler a toy when with the same money I can buy three mosquito nets for children in the malaria zone, etc., etc.
I don’t think this latter kind of utilitarianism leads to anything productive.
Mark
Nov 21 2018 at 11:59am
From a utilitarian perspective, it is very important but very hard to draw a line between giving people an incentive to invest, by letting them and people they care about keep the fruits of their investment, and alleviating conditions for the poorest. Clearly things like socialism erred too strongly in the direction of the latter and ended up reducing overall utility.
One potential clear guideline that I think works in most cases is “do no harm.” People are not obligated to help others, but nor may they take affirmative actions to disadvantage others. This allows for likely utility-enhancing rules like taking care of your own kids before having to feed others, while ruling out likely utility-destroying ones like passing protectionist laws. Obviously there will be exceptions, but if you want a clear rule that many people can follow evenly, this is probably better than any alternative.
Scott Sumner
Nov 21 2018 at 7:04pm
Mark, You said:
“Not only do people tend to be less productive when they are forced to give much of what they produce to others, but even if people felt that if they did earn significantly more, they would be morally obligated to give more to those poorer than themselves, they might feel disinclined to work long hours at a very difficult job knowing they couldn’t, in good conscience, enjoy what they earn from it.”
That’s a utilitarian argument, and one that I find quite plausible.
Lawrence, I understand that distinction, but I do think that people are genetically programmed to be somewhat selfish. As I said in my post, people also have altruistic motives—it’s complicated.
Philo, I should have made it clearer that my thought experiment involved diverting $10,000 in consumption to the Ethiopian family.
BC, Don’t confuse means and ends. Utilitarianism is about the goals of policy. Centralization vs. decentralization is about the means. But I agree that the decentralized approach is generally best.
ChrisA, I think you misunderstood my argument. In my view, the amount of redistribution should be determined by utilitarian considerations, not family ties.
Ram
Nov 21 2018 at 7:28pm
I used to be a textbook utilitarian, but these days I’m more sympathetic to a kind of weighted utilitarianism.
Individuals belong to hierarchies of communities (nuclear family, extended family/close friends, neighbors/business associates, locals, fellow countrymen, humans, sentient critters, …), and communities engender social bonds between members. Communities vary in the strength of the bonds they foster. Moral obligations, in turn, derive from these bonds.
Utilitarianism seems to me to get the content of these obligations right–namely, to pursue the general interest of the community, which is a delicate balance of efficiency and distribution objectives. When such obligations to different communities conflict, however, I’m inclined to think that priority ought to be given to communities characterized by stronger bonds. Critically, this means not weighting everyone equally, which has implications for optimal distribution.
Not only does this modification seem correct to me at the level of fundamental principles, but it also seems to resolve a number of puzzling aspects of textbook utilitarianism, such as obligations to non-human animals, obligations to our future selves, obligations to extremely distant descendants, the repugnant conclusion, and much besides.
Many philosophy-types react negatively to the idea that, in brief, some folks count more than others, but my claim is not that this holds in some objective sense, but from the perspective of each individual. *Your* kids count more to you than someone else’s kids, for example, even though *my* kids count more to me than yours (I don’t have kids, but you get the idea).
The other difficulty is accounting for where these communities come from, and how we come to belong to them, and why some feature stronger bonds than others. I don’t have a good theory of that, but I also think it is undeniable that each of us does belong to some such communities, that this frequently isn’t a matter of choice, that some of these communities result in tighter connections than others, and that this matters for how we think about our obligations.
What I’m asserting is simply that these things are not unfortunate human biases to be corrected by rational progress in moral understanding, but are in fact the basic constituents of morality itself. Everyone counts, yes, but how much X counts to Y is not necessarily how much X counts to Z.
Bedarz Iliachi
Nov 22 2018 at 2:12am
Your difficulty about the origin of the communities that you feel an obligation to is curious. Man is an embodied creature, located in a particular time and place, and born into a particular family which is itself embedded in a particular political community (that may itself be embedded into yet another larger political community).
You are correct that morality must take account of these fundamental facts. These, for instance, generate the neighbor-stranger distinction: neighbors are those you share the political community with, and strangers are all others. As we are not disembodied intellects but flesh-and-blood creatures, these accidents of time and place exert legitimate demands upon us, in particular as our minds are largely formed by the political community we are born into.
Scott Sumner
Nov 22 2018 at 11:33am
I agree that we naturally tend to feel emotionally closer to those people who are physically closer to us.
Hugh D'Andrade
Nov 21 2018 at 8:57pm
Moral principles arrived at intellectually will never be generally accepted, even intellectually. Suppose a train is going to hit a car and kill its two occupants, but I could throw a switch to send it down a track where it would hit a car occupied only by my mother. Who would want to live in a world where people would willingly, as volunteers, kill their own mothers to save two strangers?
ChrisA
Nov 22 2018 at 12:06am
But Scott, don’t you think your conclusion (that it is most utilitarian to support your family first) is suspiciously similar to what a genetically based morality would determine? Again, I don’t think you can simply redefine utilitarianism to be the same as whatever feels right to you. If it means anything, it means somehow measuring the short-term, medium-term, and long-term consequences of your actions and deciding on the action that has the probability-weighted least harm or most improvement in the utility of total humanity integrated across time, perhaps applying some discount rate. Of course this is literally impossible – we can’t predict even a range of potential futures, we can’t decide on what utility really is or how it can change depending on a particular scenario, and we cannot agree on a suitable discount rate. Most importantly, we will encounter lots of repugnant conclusions if we apply this rule. Even worse, we might have to include animal utility in the analysis now! Let’s face it, the task is impossible. So I am actually totally fine with your approach to just do what feels right to you at the time, based on relatively short-term analysis of the consequences. But just don’t call it utilitarianism.
ChrisA
Nov 22 2018 at 7:35am
Relevant:
https://www.smbc-comics.com/comics/1542801487-20181121%20(2).png
Scott Sumner
Nov 22 2018 at 11:31am
Ram, That sounds similar to my “Think globally, act locally” point.
Chris, My claim that the world would be better off if I sent a check for $10,000 to an Ethiopian family is an explicit rejection of the genetic impulse to focus on one’s family. If the two approaches were identical, I’d agree with you, but that’s not what I’m saying.
Almost any two moral theories will overlap on a few points—if one focuses on where they disagree you get a better sense of their differences. Utilitarianism says focus on the local only to the extent that it improves global welfare.
Thaomas
Nov 22 2018 at 11:45am
I think this is a right (I won’t say “the” right) way to think about private morality. The trade off between growth and equity, the metaphor of the leaky bucket, is an important consideration.
But another consideration is that some part of the utility that we gain from material goods is positional. I (I do not think I’m unique) will feel less harmed if everyone’s income went down by 10% than if only mine did. So there will be less collective loss of utility if a poverty-alleviating tax is levied than if the same amount is collected through private charity, although decentralized charity might well better direct the expenditure of any given amount of funds.
Ram
Nov 22 2018 at 12:37pm
Scott,
The distinction I would like to make is between prioritizing the interests of those with whom we have stronger social bonds (for whatever reasons) as being the optimal way to pursue everyone’s interests equally, and prioritizing those interests simply because they matter more. I agree that in practice we end up in the same place, and I suspect most utilitarians (besides Peter Singer) would agree with you, but I’m contending that whether or not prioritization is optimal from an unweighted utilitarian viewpoint is irrelevant. We care more about some than others because they actually matter more, morally speaking, to us. I think unweighted utilitarianism gets the reason for prioritization wrong, because if we imagine a world in which such prioritization is not optimal from a global aggregate utility standpoint, I would still think it morally required that we prioritize.
Scott Sumner
Nov 24 2018 at 1:49pm
Ram, I agree those close to us matter more to us subjectively, but that doesn’t mean it’s right. Our instinctive morality may be biased, and non-optimal. Think of societies where morality requires you take revenge for harm done to your relatives. Just because some societies have that morality doesn’t make it right.
Nepotism in doling out government jobs meets your definition of morality, but not mine. Or have I misunderstood you?
Mark Z
Nov 25 2018 at 12:01am
The fact that our relatives matter more to us subjectively doesn’t necessarily mean that it’s immoral or suboptimal for people to care more about their families than strangers. One might just as well characterize our interactions with others as a form of consumption, and our concern for others reflects our valuation of others’ consumption (e.g., we get ‘utility’ out of other people’s well-being). But how much utility I get out of one person’s well-being may differ from how much I get out of another’s. It makes no more sense to say my greater valuation of one person’s well-being is irrational than to say that my valuing a particular good more than others identical to it, for sentimental reasons, is irrational, because utility is subjective.
In other words, if we employ a broad definition of self-interest, one’s greater concern for those one is close to (even if that closeness is a product of circumstance) is simply self-interest, and expecting people to care as much about strangers as their family or friends is akin to asking them to care as much about strangers as themselves.
James
Nov 22 2018 at 11:36pm
The comparison to Christianity shows poor understanding of the position being rebutted. When critics of utilitarianism point out that utilitarianism requires forgoing all leisure until the last starving child has been fed, the problem is not that utilitarians are hypocrites who fail to live up to that standard. The objection is that such a requirement is unreasonably demanding in the first place. No such duty exists and any theory that implies this duty exists is false.
Aside from that, rule utilitarianism is no fix for any of the problems with regular utilitarianism. For example there may, in principle, be societies where the public satisfaction from mob justice is so great that utility is maximized by adopting rules which permit vigilante justice. No utilitarian can deny this but once the possibility is admitted, the utilitarian has no way to tell that his own society is not a society in which the rules ought to permit mob justice.
Scott Sumner
Nov 24 2018 at 1:45pm
James, I’m sure that there are societies where mob justice is optimal.
You said:
“When critics of utilitarianism point out that utilitarianism requires forgoing all leisure until the last starving child has been fed”
My claim is that this is not true. Utilitarianism does not require this great a sacrifice. You are describing something closer to the Rawlsian view.
TravisV
Nov 22 2018 at 11:36pm
Prof. Sumner,
Great stuff, thanks! It’s spelled “lodestar,” by the way……. 🙂
Scott Sumner
Nov 24 2018 at 1:41pm
Thanks, that’s the second time I’ve done that. I’m hopeless at spelling.
Michael Rulle
Nov 23 2018 at 11:01am
Such a complex subject. Your presentation is impressive. You did make one comment that reminded me of a recent thought I had. “Utilitarianism is a lodestar like the teachings of Jesus”. I was raised as a Christian and while I could hardly be a worse Christian, I still identify as one. I keep trying to determine how we are supposed to live if we followed Jesus’ teachings. I really don’t know and I will spare you from various ways to think about it. But your analogy is still a good one and I like it, even as I cannot interpret it in true life. I know you are not saying Utilitarianism is the same as Christianity, (although there are de facto overlaps) but the analogy is useful.
One element of Utilitarianism which I find……hard to pick the best phrase or word…….distasteful might do……are these thought experiments of the Sophie’s choice variety. Most of them are just fantasies and cannot exist in real life. But some, like Sophie’s choice, are quite plausible, although so rare as to be considered an outlier to the core of Utilitarianism. For all practical purposes they should be ignored when thinking through actions. If Jesus were Sophie, what would he do? Don’t know. My guess is not what she did.
I do think of Utilitarianism as a Philosophy that has a broad range of potential applications. It cannot be one “set of rules”. Life is too complex. I prefer to think of its usefulness in small doses, when one starts with a certain set of moral priors in a situation (for example, say in crafting a law) and then seeks to optimize based on that (itself hard to define). Utilitarianism as the moral prior for some reason is “distasteful” to me even though it can be part of the moral prior (think Sophie’s Choice). I probably am just playing with words. Maybe I associate it with Atheism, which to me means “all is permitted”. It is probably Jeremy Bentham’s fault that I think that way. His arithmetic version of happiness was absurd——so I tend to have this bias against the word itself.
Enough of this. I liked your essay quite a bit.
JasonL
Nov 23 2018 at 1:14pm
The world being a happier place is a loaded claim. We don’t know how to aggregate happiness or even if happiness is particularly relevant to utility. I have utilitarian instincts, but I do have the constant feeling that values are smuggled in the back door in the definition of the good. Utilitarian evaluation tells us to look for “are we making things worse” but it doesn’t tell us what worse means. Parfit may be right under one view of the good and Scott may be right under another view.
Brian Slesinsky
Nov 23 2018 at 2:41pm
Re: “one should not naively assume that $1000 can be transferred from the rich to the poor at anything close to zero cost.”
What is your opinion of GiveDirectly? I’m under the impression that they’re pretty efficient.
Scott Sumner
Nov 24 2018 at 1:43pm
I’m not just referring to transactions costs, but also the impact on incentives.
Swami
Nov 24 2018 at 1:44pm
A few points…
I think utilitarian aid is easily exploited by selfishness in any universal application. It effectively funds its own demise and is thus self-defeating and thus non-utilitarian. It would be better to restrict utilitarian aid to fellow utilitarians. Thus to gain the benefits of a utilitarian world you need to adopt the utilitarian ethos (convincingly). This would replace the negative, self-defeating dynamic of exploitation with one where everyone wanted to join the utilitarian social pact, thus self-amplifying and further increasing the benefits of joining (and not being a parasite).
Utilitarianism suffers from the knowledge problem. Which use of my time, energy or resources is best? The best solution to this is a decentralized system where everyone focuses from the center out in mutually beneficial ways. Ask: what can I do to make my life better in ways that are positive sum (making others’ lives better too, such as in most honest economic transactions)? Once we each take into account ourselves, we expand our circle of empathy to family, friends, neighbors and so on (again in a mutually beneficial, positive-sum way).
F. E. Guerra-Pujol
Nov 24 2018 at 10:55pm
Geesh! How can we reconcile consequentialism with Kantian duties? Cowen’s approach is based on his own unfalsifiable intuitions, and different persons (like Sumner’s “Tonto”) will have different intuitions about where to draw this line.
andy
Nov 25 2018 at 7:26pm
I’m personally in favour of ‘human-rights first, utilitarianism second’. I.e. try to follow human-rights (i.e. classical liberalism), unless you get to an extreme, then use utilitarianism. Every ideology fails at extremes.
Human rights are based on respect for other people. Utilitarianism is not. If we want to live in a moral society, a society based on respect for other people, we should strive to do so. Even if it costs us some ‘amount’ of theoretical utility. Given that interpersonal utility comparison is generally impossible, we should reserve utilitarianism for situations that are really extreme, where we would say ‘OK, some things are more important than absolute respect for other people’.
That’s what I find unconvincing about Singer; his drowning-child example is an extreme. I could very well defend the obligation to help based on the idea that it is an extreme, and refuse his extrapolation to an obligation to help people in poor countries, as you cannot extrapolate from extremes.