Part 4 (“Ethics”) of Knowledge, Reality, and Value contains four chapters that seem extremely reasonable to me, and one that continues to strike me as deeply wrong. As a result, I’m going to split the discussion into two parts. This week: the extremely reasonable Chapters 13-16. Next week: The deeply wrong Chapter 17.
As usual, I will focus almost entirely on my disagreements with Huemer’s careful, enlightening, and inspiring book.
Chapter 13: Metaethics
When defending moral realism, Huemer places a fair amount of weight on linguistic evidence:
The most obvious problem with non-cognitivism is that moral statements act exactly like proposition-asserting statements in all known respects. They do not act like interjections (like “Ouch!”), commands (like “Pass the tequila”), or any other non-assertive sentences.
While I agree with Huemer’s conclusion, I find this evidence less probative than he does. Why? Because human beings often frame non-assertions as assertions for rhetorical effect. “Yay for the Dodgers!” is almost equivalent in meaning to “The Dodgers rule!” Yes, grammatically you can say “‘The Dodgers rule’ is false,” but not “‘Yay for the Dodgers’ is false.” But at least for baseball fans, the former is almost equivalent to “Boo on the Dodgers!”
The introspective evidence against non-cognitivism is much stronger. While many people treat ethics as a team sport rather than an intellectual endeavor, almost no one will admit to doing so. Why? Because almost everyone thinks that moral reasoning, unlike sports fandom, is supposed to be a search for moral truth, not a celebration of identity.
Okay, I think this might be what is really motivating nihilists and other anti-realists: Objective values are weird. In fact, one famous argument against moral realism is officially named “the argument from queerness”. If there are objective values, they are very different from all the things that science studies…
Maybe weirdness just amounts to being very different from other things. But then, lots of things are weird in that sense. Matter, space, time, numbers, fields, and consciousness are all weird (different from other things). Why should we believe that weird things don’t exist? This is just a very lame argument.
The underlying idea, I think, is that STEM contains the totality of “real knowledge” and everything else is just poetry (or garbage). Thus, I’ve known quite a few engineers who scoffed at the idea of “social science.” Why did they scoff? The problem was not merely that social science has been intellectually subpar so far; the problem is that social science is just too imprecise/subjective/whatever to ever be intellectually satisfactory. Of course, engineers have social science views, too. At the meta-level, though, they think these discussions are inherently phony. And if that goes for social science, obviously it will go for ethics as well.
Huemer and I would agree that this view is absurd, but that’s what we’re up against.
Chapter 14: Ethical Theory, 1: Utilitarianism
Huemer’s chapter on utilitarianism is great. My only notable disagreement comes near the end:
That being said, utilitarianism is not a crazy view (pace some of its opponents). I grow more sympathetic to it as time passes.
I say utilitarianism is utterly crazy. After all, as Huemer previously told us:
It’s worth taking a moment to appreciate how extreme the demands of utilitarianism really are. If you have a reasonably comfortable life, the utilitarian would say that you’re obligated to give away most of your money. Not so much that you would starve, of course (because if you literally starve, that’ll prevent you from giving away any more!). But you should give up any non-necessary goods that you’re buying, so you can donate the money to help people whose basic needs are not met. There are always plenty of such people. To a first approximation, you have to give until there is no one who needs your money more than you do.
If that’s not crazy, what is?
And yes, the right answer to the Trolley Problem is that you may not murder one man to save five. (Even if you think otherwise, however, don’t miss the Trial by Trolley card game!)
Chapter 15: Ethical Theory, 2: Deontology
Here’s another example, which Kant actually discusses: Say you’re sailing a cargo ship. Your ship has cargo that belongs to someone else, which you promised to deliver to its destination. The ship runs into a storm, and it is in danger of sinking unless some weight is thrown overboard. According to Kant, it would be wrong to throw any of the cargo overboard, since that would involve breaking your promise and intentionally destroying someone else’s property. So you just have to take your chances. Maybe the ship will sink, destroying the cargo and killing everyone aboard, but at least you would not have intentionally destroyed it.
As you’ve probably noticed, that’s also crazy. I think all this is much crazier than utilitarianism.
I’m tempted to say “equally crazy,” but Kant makes the added mistake of forgetting implicit and hypothetical contracts. Namely: Most customers wouldn’t want the crew to take this level of care, because they’d have to pay a markedly higher price to purchase this ultra-premium service. Even on his own terms, then, Kant should only condemn the crew for destroying cargo if customers had paid an explicit upcharge for a crew willing to die before destroying cargo.
Though the chapter is great, Huemer is oddly ambivalent at the end:
Finally, moderate deontology requires drawing seemingly arbitrary lines, and it also seems to create the possibility of cases in which two or more actions are each wrong, and yet the combination of them is morally okay.
Overall, I judge the problems for moderate deontology to be the least bad.
There is little reason for “arbitrary line” problems to bother an intuitionist. Should you murder one innocent to save X lives? We have clear intuitions for X<5, which, given the uncertainty of the world, resolves almost all cases. And for the “two or more actions” problems, what’s wrong with implicit or hypothetical consent, which Huemer analyzes in depth in The Problem of Political Authority?
Is moderate deontology fully intellectually satisfactory? No. But why the doleful “least bad” rather than the hopeful “rather good”?
Chapter 16: Applied Ethics, 1: The Duty of Charity
After a fantastic discussion of the Drowning Child argument, Huemer veers into social science:
It is true that there is a correlation between fertility (birth rates) and poverty – the countries with high fertility tend to also be poor. This is not, however, because fertility causes poverty. It’s the reverse: Poverty causes people to have more children. When people’s income goes up, they do not generally increase the number of children they have; they decrease it.
In terms of raw averages, Huemer is right: If you regress fertility on income alone, higher income predicts lower fertility. However, if you add more variables, the picture changes. At least in the US, for example, the highest-fertility people have high income combined with low education. (A few paragraphs later, Huemer discusses the fertility-education connection, but as far as I can tell he doesn’t treat this as a competing hypothesis).
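The statistical point here is a standard omitted-variable story: a coefficient estimated in a simple regression can flip sign once a correlated control is added. A toy simulation sketches the mechanism (all numbers are purely illustrative assumptions, not real fertility data):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Illustrative assumptions: education lowers fertility and raises income,
# so income *alone* picks up education's negative effect.
education = rng.normal(size=n)
income = 1.0 * education + rng.normal(size=n)
fertility = 2.0 - 1.0 * education + 0.3 * income + rng.normal(size=n)

def ols(predictors, y):
    """Least-squares coefficients, with an intercept prepended."""
    X = np.column_stack([np.ones(len(y))] + list(predictors))
    return np.linalg.lstsq(X, y, rcond=None)[0]

b_simple = ols([income], fertility)[1]            # income only
b_multi = ols([income, education], fertility)[1]  # income, controlling for education

print(f"income coefficient, income alone:      {b_simple:+.2f}")  # negative
print(f"income coefficient, education held fixed: {b_multi:+.2f}")  # positive
```

Under these made-up parameters, income alone predicts lower fertility, yet holding education fixed, more income predicts more children, which is the shape of the US pattern described above.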
How can this be? When people get more money, they can afford to have more children, so why doesn’t their fertility increase? The answer is basically that children take up your time, and, in wealthy nations, people have other things they’d like to do with their time. For one thing, if you have lots of kids, that can interfere with your career; so the better your career prospects are, the greater the deterrent to having kids.
Lots of economists say the same, but on reflection this is hardly adequate. Taking vacations “interferes with your career,” too, but richer people take more vacation time, not less. What’s really going on is quite puzzling. I spend most of chapter 5 of Selfish Reasons to Have More Kids trying to figure it out.
Taking all this into account, if we can alleviate world poverty, this would actually reduce population growth. We’ll get a smaller population, living at a higher standard of living.
If true, this is probably the best consequentialist argument against alleviating world poverty. After all, most poor people are happy to be alive. If alleviating the poverty of the world’s most miserable causes their total population to greatly shrink, how is that a win?
Let me close with one of the best gems in this portion of the book:
Intuition is just like reason, observation, and memory in this respect: You can’t check its reliability without using it. You probably don’t think (and very few moral anti-realists think) that we should ignore reason, observation, and memory; therefore, you also shouldn’t ignore intuition merely because it can’t be checked without using intuition itself.
The analogy to memory is especially compelling. Memory is highly fallible, and it varies greatly from person to person. Yet we can’t do without it. Even math relies on memory! After all, at every stage of a mathematical proof, you are relying on your memory that the previous steps were accurate.
The same applies at least as strongly to natural science. Unless you’re directly staring at something, natural science is based not on observation and experimentation, but on what we remember about past observation and experimentation.
Fortunately, that’s OK.
READER COMMENTS
KevinDC
Jul 6 2021 at 11:44am
The more I’ve discussed moral philosophy with people, the more I suspect lots of people are just intuitionists in denial. Scott Alexander, who calls himself a utilitarian, was admirably upfront about this when he wrote:
Which raises the obvious question – why not just say you’re an intuitionist then and save yourself all the effort? A lot of brainpower is spent by very smart people trying to find ways to hack their ethical theories in the way Scott describes. Most of these efforts just seem like obvious attempts at reverse engineering an explanation to get to the desired and intuitive conclusion, and the obviousness of this backwards reasoning only further discredits that ethical theory to anyone who doesn’t share it. But even if the arguments were elegantly put, it would still be a waste of time and effort. If something isn’t worth doing, it’s not worth doing well. Creating complicated arguments to justify the obvious is definitely not worth doing.
One other point I’d make in favor of the use of intuition. A few sections ago, there was a discussion about what “knowledge” means and various attempts at technical definitions. We have an intuitive understanding of what it means to “know” something, but, so far, no unassailable technical definition of “knowledge.” But our intuitive understanding is better than any technical definition could be. After all, all that’s required to refute an attempted definition of “knowledge” is that it clashes with our intuition! That’s the whole point of Gettier cases, after all – showing a situation where someone meets the technical definition of holding a “justified true belief” about something, but it’s still intuitively obvious they don’t actually know it. Given that our intuitive understanding of “knowledge” trumps any technical definition, what’s to be gained by forming a perfect definition immune to all counterexamples? It wouldn’t tell us anything we didn’t already know, after all.
Liam
Jul 6 2021 at 11:54am
It’s hard to see how demandingness is a particularly strong objection to utilitarianism. All moral theories can be extremely demanding in certain circumstances. Here is an example:
John’s child is dying from cancer. He cannot afford to fly her to a country offering pioneering treatment. Let’s assume it has a reasonably high chance of saving her life. Can John hack into Bill Gates’ bank account and steal the money he needs?
I take it that Caplan thinks this would be wrong; and yet what could possibly be more demanding than a moral principle which requires you to step back and watch your own child die from cancer? In fact, utilitarianism would probably be less demanding in this case, since you may well increase overall utility by stealing the money.
What this suggests is that demandingness is not enough by itself to reject utilitarianism. It’s the facts of the world which place such strong demands on us. After all, I’m sure it was extremely hard for many slave-owners to give up their slaves and the lifestyle the institution supported. Some of them may have been plunged into poverty after slavery was abolished. Tough! Slavery was wrong and it had to be abolished, regardless of how difficult the change was for the slave-owners.
KevinDC
Jul 6 2021 at 2:09pm
I think the issue of scale matters regarding demandingness. You correctly point out that strict deontology can create high demands for particular people in extreme circumstances. Another example that makes the same point – suppose I got lost in the woods in very cold weather. Fortunately, I stumble across someone’s hunting cabin. They aren’t around – maybe they’ve left the cabin for the season. If I break in, I can warm myself and stay alive in hopes of rescue. According to a strict, property-rights-supporting deontologist, however, I am obligated to stay outside and freeze to death rather than use the owner’s property without permission. You could probably get Murray Rothbard or Walter Block to bite that bullet. However, I think that Caplan and Huemer would both agree that in this case it would be permissible to break into the cabin – mild deontologists, and certainly intuitionists, are much less demanding in that regard.
But strict utilitarianism isn’t demanding merely to particular people in highly specific and extreme circumstances. It makes extremely high demands of almost all of us, in basically all circumstances. According to strict utilitarianism, I would be obligated to take the highest paying job I could manage, and work as many hours as I can, to make as much money as possible (maybe being an Uber driver on the side for even more cash) and then give away everything, beyond what I need to live at subsistence.
Also, I think you misjudge what utilitarianism would actually require in your thought experiment. You ask:
Utilitarianism would not, in fact, advocate you steal the money and then use it to save your child from cancer. Instead of spending that money on expensive cancer treatments to save the life of one child, a strict utilitarian would donate the money to a GiveWell charity where it could save the lives of dozens of other children around the world. This would be true, incidentally, whether you stole the money from Bill Gates or the money was your own. The money and resources needed to save one child from cancer could also be used to save the lives of multiple other children from other causes. So to a strict utilitarian, even if you were Bill Gates and had no need to steal money to afford your child’s treatment, you should still “step back and watch your own child die from cancer” and instead use those resources to save the lives of multiple other children.
Of course, we can avoid that by advocating a more moderate utilitarianism, but we can also avoid the issues of strict deontology by advocating moderate deontology too.
Liam
Jul 6 2021 at 4:27pm
But it’s not just strict deontology which forbids you from stealing the money. My impression is that both Huemer and Caplan think it would be wrong to steal from Gates. That is extremely demanding on the person who has to watch their child die.
You are right that utilitarianism places strong demands on us all the time, but you have to remember that we live in a weird world with stark inequalities. As someone born in Britain, I have to acknowledge that I am part of the modern equivalent of the 18th century aristocracy. Indeed, I have luxuries that Louis XIV could never have dreamed of. It’s not clear to me that I’m morally entitled to keep all that money when I know that there are mothers crying themselves to sleep with their dead children in their arms every night. It’s an obscene world we are living in; it’s not a world to which our moral intuitions are well-adjusted.
KevinDC
Jul 6 2021 at 5:23pm
Maybe, maybe not. I’m not sure what Caplan would say, but I do know that Huemer does think that theft can be justified in extreme circumstances. He even made a note of this in one of his previous replies in the club earlier:
So Huemer would say (in fact, has said) it’s not wrong to steal to prevent yourself from dying of starvation – and I’ve no doubt he’d say the same about stealing to prevent your child from starving to death. So it’s not as obvious to me as it seems to be to you that he’d say what you think about your hypothetical. Especially given the arguments he lays out in chapter 16 of his book, where he concludes that there is in fact a moral obligation to make positive efforts to alleviate the suffering of the poor and to engage in charitable giving.
But that still doesn’t address the other issue I mentioned – according to utilitarianism, you’d still have to watch your child die of cancer, because the resources you’d expend to save their life from cancer could also be used to save the lives of more children from other causes of death, which results in more children’s lives saved and fewer grieving parents. And this remains true regardless of whether you acquired the money by stealing it or earning it honestly. If it’s unduly demanding for deontology to forbid you from stealing other people’s resources to save your child, it’s even more unduly demanding for utilitarianism to forbid you to use your own resources to save your child.
Luckily, however, it’s not like we need to choose between deontology or utilitarianism, which is good, because both of them are false.
Liam
Jul 6 2021 at 5:41pm
Suppose you have been washed up on a desert island with your child after a plane crash. There are several other survivors. Your child needs medicine, but so do the other survivors. In fact, your child needs an unusually large dose. Suppose that dose is enough to save twenty of the other survivors. Can you keep the medicine for your child and let the other people die? It’s not clear that you can.
Singer will then point out that we face a similar dilemma in terms of global poverty. So maybe John should give Gates’ money to charity instead?
KevinDC
Jul 6 2021 at 9:02pm
It seems like you’ve at least tacitly moved away from using that example as indicating the demandingness of deontology and acknowledged that utilitarianism is more demanding still. So I think that’s some progress.
Still, you do raise a worthwhile question with your hypothetical, and I don’t want to leave that unengaged, so at the risk of boring all the other Econlog readers with my numerous comments, I’ll keep responding. But before looking at your hypothetical up close, I think it’s worth backing up a bit first.
First, while I’m not a consequentialist of any kind, and therefore also not a utilitarian (which is just a subset of consequentialism), I do not deny that consequences matter. Common sense morality would agree that consequences do matter. Consequentialism, on the other hand, is the stronger (and therefore less likely) claim that only consequences matter. Utilitarianism is the even stronger (and even less likely) claim that one particular kind of consequence is the only thing that matters. And utilitarians disagree among themselves if the relevant consequence is maximizing average utility, or total utility, or equalizing utility, or whatever, which means we end up arguing that one particular aggregate measurement of one particular consequence is the only thing that matters. One would need really strong arguments to justify such a strong claim, and the arguments utilitarians have given fall wildly short of that.
Some deontologists deny that consequences matter. I think this is foolish. I agree that consequences matter, but they are not the only thing that matters. Sufficiently large consequences can overrule other considerations. But consequentialists would say that any slight improvement in consequences overrules every other consideration. With that distinction in mind, what of your hypothetical where the resources needed to keep your child alive could save more lives elsewhere? You ask about a 20 to 1 ratio. I’ll make it more extreme – what if the resources needed for John to keep his child alive were so vast it would result in the death of half of North America? In that case, the consequences are simply too much. But what if the resources John needed to save his child could instead be used to save just two others? Would you say that, in the name of consequences, John should let his child die to save the lives of two strangers? In this case, I’m perfectly okay saying John is not only justified in saving his own child, he’s obligated to. If John said he left his child to drown in a shallow pond so he could run across the field to save two strangers instead, we’d rightly consider John a horrible person for that.
But consequentialism would be even more extreme than this. If you could either save the life of your own child, or, with the same resources, save the life of one stranger who was slightly less sick and then use the leftover resources to make one other stranger’s life marginally better – the “best consequences” are to let your child die in order to save one life and slightly improve another life. And this is the problem I find with consequentialism/utilitarianism. It’s not enough to merely point out that with sufficiently extreme consequences you can overrule other concerns, as you attempt to do with your 20 to 1 ratio. You need to argue that even the slightest improvement in outcomes is enough to overrule everything else.
Philo
Jul 6 2021 at 1:02pm
On utilitarianism, Huemer writes: “To a first approximation, you have to give until there is no one who needs your money more than you do.” But, no, this is not a consequence of utilitarianism. Huemer is overlooking the fact that in utilitarianism future people count just as much as present people. If I invest my surplus wealth, I will benefit future people, probably doing more good than if I gave it away to people who will consume it immediately. (Also, given the inherent self-centeredness of virtually all people, the expectation of give-aways would undermine the recipients’ incentives to be productive.)
KevinDC
Jul 6 2021 at 2:57pm
I don’t think this is true. Utilitarianism is about the criterion of what is right or good or moral – in the utilitarian view, the answer is, well, utility. The question of to whom that criterion applies, and to what degree it applies (like in the case of future generations), is a separate question from the validity of utilitarianism as a moral theory. Some people believe we should consider future generations exactly as important as people alive today, others argue that future generations deserve some consideration but not full consideration, others argue that future generations have precisely zero moral value. You can find utilitarians (or consequentialists more generally) all along that spectrum. None of those positions contradicts utilitarianism as a moral system – all they do is change the outcome of the utilitarian calculus. Stephen Bickham outlined this debate among utilitarians in his paper Future Generations and Contemporary Ethical Theory.
I can imagine all kinds of ways people might argue for or against those different perspectives (I have no strong opinions about it myself), but anyone having that debate isn’t debating about utilitarianism. It’s a separate question.
Lance Bush
Jul 6 2021 at 2:39pm
Huemer claims that moral statements act like proposition-asserting statements “in all known respects.” His evidence for this is very thin, though. It seems to consist primarily of appeals to his own sense of how people use language, rather than a systematic analysis of how actual people actually use moral statements in the real world.
In other words, what people mean when they make moral statements is an empirical question. And without empirical data, Huemer is not appealing to linguistic or introspective evidence in any systematic and rigorous way: he’s simply appealing to his intuitions about how language works, and his own introspection. This is hardly a robust basis to determine a question as sweeping in scope as the existence of objective moral facts.
For one thing, we cannot simply ignore people who do not share the impression that moral language is generally committed to realism. I also study metaethics, am familiar with moral language, have had many discussions about ethics and metaethics, and have taught courses on the topic. Huemer’s judgment on the matter is hardly more privileged than mine, and I do not share his (or other realists’) impression that moral realism captures how the majority of people speak and think about morality.
I don’t find the introspective evidence any stronger. The introspections he appeals to are his own, and perhaps those of a handful of people who agree with him. My introspections yield a completely antirealist picture of the world, as do the introspections of at least some other moral antirealists. The evidence here isn’t one-sided. So if Huemer’s introspections are evidence for realism, our introspections should be evidence against it. It’s not as though everyone who introspects reaches the same conclusions. So if we’re going to settle the matter, Huemer and other realists who rely on introspective data should be able to explain why their introspections are accurate and ours aren’t. I’ve read what moral realists have to say on the topic, and I don’t find it at all convincing.
But neither his nor my personal experiences, introspective reports, or intuitions are very good evidence of whether most people are moral realists. If you wanted to know whether people use moral statements as imperatives or to express non-propositional attitudes, you’d have to go and study how people use moral language. It’s a very strong claim to say that most people are moral realists. You would need a well-developed body of cross-cultural empirical research to determine whether that was the case, and we have nothing even approaching that. What little we do have does not support the notion that most people are moral realists.
You say: “Because almost everyone thinks that moral reasoning, unlike sports fandom, is supposed to be a search for moral truth, not a celebration of identity.”
Once again, what almost everyone thinks is an empirical question. Unfortunately, what empirical evidence we do have does not support the claim that everyone thinks morality is a search for moral truth, which I interpret as the notion that most people are moral realists (or at least think other people speak and think like moral realists).
Early work on lay metaethical belief has a host of methodological problems. More recent studies still suffer significant methodological shortcomings, but the most current and best-designed studies find that, at least among the populations studied, a majority tend to favor antirealist responses to questions about morality, in both concrete and abstract cases, to a greater extent than they do realist responses:
Pölzler, T., & Cole Wright, J. (2020). An empirical argument against moral non-cognitivism. Inquiry, 1-29.
Pölzler, T., & Wright, J. C. (2020). Anti-realist pluralism: A new approach to folk metaethics. Review of Philosophy and Psychology, 11(1), 53-82.
Davis, T. (2021). Beyond objectivism: new methods for studying metaethical intuitions. Philosophical Psychology, 34(1), 125-153.
These studies find that a significant majority of respondents favor moral antirealist responses, including subjectivism and noncognitivism, across a range of paradigms. Even the earlier studies found equivocal evidence, with most participants expressing inconsistent and mixed metaethical standards.
There are undoubtedly shortcomings with these studies. I myself have identified many, and published on their weaknesses. However, what we do not have is a robust body of empirical data that favors the notion that almost everyone is a moral realist. This is a speculative hypothesis that has virtually no empirical support.
I specifically study what ordinary people think about realism and antirealism, and my own assessment of the empirical literature is that moral realism is not a stance most ordinary people hold or even understand, and that the way people speak about morality does not best fit with a realist analysis. On the contrary, people seem to evidence mixed and inconsistent metaethical standards that may not fit with any conventional uniform metaethical accounts. That is, the evidence more closely reflects what Gill and Loeb argue for:
Gill, M. B. (2009). Indeterminacy and variability in meta-ethics. Philosophical Studies, 145(2), 215-234.
Loeb, D. (2008). Moral incoherentism: How to pull a metaphysical rabbit out of a semantic hat. In W. Sinnott-Armstrong (Ed.), Moral psychology: The cognitive science of morality (Vol. 2) (pp. 355-386). Cambridge, MA: The MIT Press.
I have yet to see Huemer or any other proponents of moral realism take this empirical research seriously and provide a convincing response to it. I’m not sure if they’re aware of it, or if so, what they make of it, but the longstanding presumption that ordinary people are moral realists enjoys very little support from efforts to actually empirically evaluate what ordinary people think. Finally, there are what I regard as fairly convincing general philosophical objections to the presumption in favor of folk realism:
Sinclair, N. (2012). Moral realism, face-values and presumptions. Analytic Philosophy, 53(2).
For many years, moral realists have presumed that their position is widely shared among nonphilosophers. Yet they have simply not done the work to show that this is the case. To me, it seems as though they’ve just repeated this so often people have come to take it for granted, and have not seriously considered the possibility that it simply isn’t true.
Alexander Davis
Jul 6 2021 at 7:51pm
All good points. I think Huemer is mentally simulating the “boo sportsball!” and “murder is wrong” types of statements in different scenarios and noticing when they seem correct. I mostly agree with his assessments, but perhaps that agreement is limited to us. To you, does “if murder is wrong, then assassination is wrong” really seem just as well formed as “if boo Dodgers, then go Giants”? Or “if shut the door, then yay cheese”?
I’m happy that you brought the recent research on how people in general use moral language to my attention. If the findings hold up, we could weaken the claim into “some people use moral language in a realist way”, which does seem true, as evidenced by Huemer. But in thinking about this, I grow suspicious of the idea that the way people actually use language tells us much about the nature of the things being talked about.
Would we look for an answer to scientific realism by checking how people in fact use the language of science? I think not, because most people probably have just not thought that much about it. Why would it be any different for moral language? My impression is that most people have just not thought that much about it either. And as you mention, most people seem to have inconsistent views on the topic. I take this to be further evidence that they haven’t thought that much about it. So why should we find their usage compelling?
If everyone went full post-modern and began to see science as just an exercise of power with no meaningful connection to reality, that wouldn’t make scientific realism false. So we can say something similar about moral discourse: the way people use it probably doesn’t affect whether or not moral realism is true.
What I can still take from the argument is that applying logical connectives to moral statements makes more sense to me than applying them to emotional expressions or imperative statements. Maybe this shows that it’s a coherent way of speaking, but so is fantasy, so it is not sufficient for full realism.
Lance Bush
Jul 6 2021 at 11:41pm
I agree that Huemer is doing that. But as I’ve said elsewhere, his method just seems like bad psychology: he’s welcome to simulate how he’s using these terms, but he’s just doing psychology with a sample size of one. I have a completely different reaction to “boo sportsball” and “murder is wrong” type remarks.
No, but I’m not a noncognitivist. And this would only stand as at best an objection to very old and flat-footed noncognitivist accounts. Even some of those could probably handle this fairly well, but contemporary expressivist accounts can handle the semantics of moral language even better, so considerations like these don’t put much of a dent in them.
I don’t find cases like this very persuasive, personally. Moral claims are clearly capable of being ratcheted into a kind of logical structure if one wishes to do so. But the intent or psychological states prompting a moral utterance could employ seemingly assertoric semantics without the person actually intending to make a propositional claim. If I say “Sportsball is garbage!” I don’t actually take myself to be making a propositional claim, I take myself to be expressing a con-attitude. One way of expressing this attitude in English uses seemingly assertoric language. In practice, people’s moral judgments may be very similar; they may serve to convey emotional states, or imperatives, or serve other functions. What people mean isn’t fully determined by the surface semantics of their utterances.
If we actually observe people’s moral utterances in practice, I wouldn’t be surprised to find many instances where a person would seem to be expressing something closer to my use of “Sportsball is garbage!”, which does not express anything truth-apt. Of course, I could take this very same type of utterance and convert it into a well-formed phrase that would imply cognitivism: “Sportsball is garbage, so football is garbage.” But the fact that this particular remark, abstracted from my actual usage in real-world contexts and considered in the cold light of reason, seems to fit well with a cognitivist analysis does not mean that that is what I in fact meant when I made the remark in an actual context in which I sought to express that “Sportsball is garbage!”
And if utterances in those real world contexts, where I am expressing a noncognitive con-attitude, are just as central to moral thought and practice as instances of people making propositional moral claims, then I see no reason why we should regard noncognitivist moral utterances as parasitic, or non-central, or aberrant, or otherwise non-genuine instances of moral talk.
Such utterances may seem to be less paradigmatic of moral thought and language. But think about the contexts in which one would make such remarks: actual real-world contexts that prompt an emotional reaction. When Huemer or other philosophers are sitting around simulating these events, they are emotionally distant from the real world events that would prompt the relevant kind of pro- and con- attitudes.
Furthermore, real-world judgments typically don’t concern abstract moral considerations. Most people don’t go around thinking or talking about whether “murder,” considered in the abstract, is immoral, nor do they typically go around identifying and expressing logical relations between their moral values. Quite the contrary: most people rarely if ever consider these sorts of things. Instead, most of their judgments concern real, concrete events. Events that happen in their lives, or happen on the news. They don’t go around thinking about murder being wrong. They think about how terrible it was that a mugger murdered Mrs. Smith, the woman down the street, with a knife. They may say “I hope that scumbag gets what he deserves!”
Such utterances are plausible candidates for paradigmatic instances of moral thought and discourse, but they are not the kinds of cases philosophers consider, and, even if philosophers did try to consider them, they may not do such a good job. Philosophers’ simulation capacities are limited. I can try to simulate eating cake, but it’s not as good as the real thing.
Worse still, philosophers are a self-selected body of people who tend to think in more abstract and rational ways about philosophical topics. The very way in which they think about morality may not reflect how ordinary people think about morality, as a result of the natural disposition and training of philosophers.
So what we end up with is a group of people who are self-selected to be psychologically unrepresentative of ordinary people, and who undergo training that develops unconventional modes of thinking, specifically favoring a rational, emotionally suppressed, intellectual approach that makes them even less representative. They also do their theorizing under circumstances very different from the contexts in which the matters they are considering actually occur: they think about moral judgments, which may arise on the battlefield, in the hospital, or in a prison cell, from an armchair or a classroom. And on top of all that, most come from similar cultural and socioeconomic backgrounds, so their intuitions and judgments are shaped by a largely shared history; when they speculate about how other people think, they draw mostly on experiences with people from their own culture, who speak their own language, and so on.
Given all this, I have very little confidence their simulation capacities are up to the task of simulating how other people think.
Regarding the latter points, the philosophical relevance of what ordinary people mean when they engage in moral discourse is probably too complicated for me to get into much here. At least one consideration worth noting is that moral realists have often appealed to the claim that most people are moral realists as a kind of presumptive argument in favor of realism. If it turns out most people are not moral realists, then they cannot appeal to this claim to support a presumption in favor of realism.
Aside from this, matters get more complicated. Here’s one paper worth checking out that pushes the argument that moral realism does not require a semantic claim:
https://philpapers.org/rec/KAHMMR
I’m not sure I’m convinced, but it seems consistent with the concerns you raise.
Alexander Davis
Jul 13 2021 at 2:19am
I really appreciate your thoughtful response, Lance! The time you spent writing it has gifted me with much to think about.
“What people mean isn’t fully determined by the surface semantics of their utterances.”
Putting it so concisely makes the thought easier to grasp, and it seems very true. The “Sportsball is garbage” example actually seems like it could go either way, depending on the context. It could just be an expression of sportsball hatred, or if the conversation was about which sports are best for betting, it could represent the proposition that betting on sportsball is ineffective.
“When Huemer or other philosophers are sitting around simulating these events, they are emotionally distant from the real world events that would prompt the relevant kind of pro- and con- attitudes.”
I can think of a variety of examples from my own life. At one point, I was intellectually reflecting on factory farming and dietary choices from a utilitarian perspective. I then wondered whether the average person causes a great deal of animal suffering through their diet, more than their life cancels out in happiness. It occurred to me that since most of them could not be convinced otherwise, the only way to prevent this might be to kill them. Worst of all, my whole family is likely part of this demographic. I tried to reason myself out of it by arguing that I couldn’t get away with it, but still, it’s possible that I could get away with at least one murder. This line of reasoning made me miserable for about a day, until I admitted that my utter disgust at the idea of murdering my own family would have to be sufficient reason not to do so, utility be damned! It seems that in this case, my ethics were more concrete and expressivist.
Yet at other times, my moral thought does seem to reside more firmly in the abstract cognitivist camp. Among a group of people watching a documentary on factory farming, I shed the fewest tears (zero; they thought I was a psychopath lol), yet I seem to be one of the few whose lifestyle changes lasted more than a few weeks. I also give to developing-world charities, seemingly mostly because of the moral arguments: I don’t feel particularly strong emotions about it, other than an occasional mild sense of having done a good deed. Honestly, it would be nice to feel more strongly about it; I’d appreciate a greater reward 😛
As you point out, it seems like not everyone uses moral discourse the same way. Hell, it seems that despite having thought a decent bit in the philosopher’s way about it, I don’t even use moral discourse in a singular way, and I’m just one person! I may have to agree with you that treating one style as the bastard cousin of the other is incorrect. Given the differences in temperament and situation of the people involved, it is ludicrous to let the philosophers speak for the rest of humanity. I have lowered my confidence in their ability to do so, despite my sympathy for their manner of speaking.
And thank you for that paper.
Jon Leonard
Jul 6 2021 at 3:50pm
I’m less confident that the Trolley Problem is a useful critique of utilitarianism. It’s quite far outside the normal range of human experience, and in practice we solve it with safety measures: it’s the responsibility of the people setting up and operating the rail system to ensure that lethal accidents are very rare. Or more explicitly, the “right answer” to the Trolley Problem is to not leave such dangerous things lying around. Approximations closer to ordinary experience include soldiers jumping on grenades or police facing armed suspects, and the answers there tend to differ from the Trolley case.
Elliott Thornley
Jul 8 2021 at 11:51am
Given the world as it is, utilitarianism makes some pretty stringent active demands on citizens of wealthy countries. But any moral theory that doesn’t make these demands thereby makes even more stringent passive demands on those in poverty.
Suppose, for example, that I can (a) buy a new car or (b) save the lives of ten children by donating to an anti-malaria charity. I really want the new car, so utilitarianism’s requirement to donate places a heavy burden on me. Common-sense morality permits me to buy the car, but it thereby places an even heavier burden on those ten children. They have to die of a preventable disease.
Once we understand that demands can be both active and passive, we see that utilitarianism is the least demanding of all moral theories.