I’m enjoying some reading for a conference I’m going to later this week. Two of the readings are by Matt Ridley. I loved the first; I don’t love the second. Here’s a quote about the Ultimatum Game from the second reading, Chapter 3 of The Rational Optimist:
The first player is given some money and told to divide it with the second player. The second player is told he can accept or refuse the offer, but not change it. If he accepts, he receives the money; if he refuses, neither he nor the first player gets a penny. The question is, how much money should the first player offer the second player? Rationally, he should tender almost nothing, and the second player should accept it, because however small the sum, refusal will only make the second player worse off than acceptance. But in practice, people usually offer close to half the money. Generosity seems to come naturally, or rather, ungenerous behaviour is irrationally foolish, because the second player will–and does–consider a derisory offer worth rejecting, if only to punish the selfishness of the first player.
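For readers who want the payoff logic spelled out, here is a minimal sketch in Python. The $100 pot and the responder's acceptance threshold are illustrative assumptions, not parameters from any actual experiment: a strictly money-maximizing responder corresponds to a threshold of zero, while the behavior Ridley describes corresponds to a positive threshold.

```python
# Minimal sketch of the ultimatum game's payoff logic.
# Pot size and thresholds are assumed for illustration only.

def play_round(pot, offer, responder_min_share):
    """Return (proposer_payoff, responder_payoff) for one round.

    pot                 -- total money provided for the round
    offer               -- amount the proposer offers the responder
    responder_min_share -- smallest fraction of the pot the responder will accept
    """
    if offer >= responder_min_share * pot:
        return pot - offer, offer   # offer accepted: the pot is split as proposed
    return 0, 0                     # offer rejected: neither player gets anything

# A strictly money-maximizing responder (threshold 0) accepts even a one-dollar offer...
print(play_round(100, 1, 0.0))    # (99, 1)
# ...while a responder who punishes "derisory" offers leaves both players with nothing.
print(play_round(100, 1, 0.3))    # (0, 0)
print(play_round(100, 50, 0.3))   # (50, 50)
```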
When I first read of these kinds of results more than a decade ago, it warmed the cockles of my heart. But then I read Steven Landsburg’s fundamental critique. I posted about it here. Nothing I have seen since has answered that critique.
Here’s the critique, in my words. The experimenters seem to think there are only 2 parties with wealth at stake: the first player and the second player. But this game does not create wealth. It redistributes wealth from the experimenters to the players. The overall game, ignoring the cost of time, is zero-sum, no matter what Player 1 and Player 2 choose.
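To make that accounting concrete, here is a small sketch, assuming a hypothetical $100 pot, that tallies the change in wealth for all three parties, experimenters included. Whatever the two players choose, the total change is zero.

```python
# Three-party accounting for a hypothetical $100 round, experimenters included.
# The only point: the total never changes, so the game redistributes wealth
# rather than creating it.

POT = 100   # hypothetical pot size

def wealth_changes(offer, accepted):
    """Change in wealth for (proposer, responder, experimenters) in one round."""
    if accepted:
        return POT - offer, offer, -POT   # the pot moves from the experimenters to the players
    return 0, 0, 0                        # rejection: the pot stays with the experimenters

for offer, accepted in [(1, True), (1, False), (50, True)]:
    change = wealth_changes(offer, accepted)
    print(offer, accepted, change, "total:", sum(change))   # the total is always 0
```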
You could argue that, of course, neither player will take account of this loss to the experimenters. They can tell themselves that the experimenters want to spend this money so that it’s not really a loss. Ok, but even if that’s true, here’s the problem with that. Presumably the experimenters have a fixed budget. When they run out, the experiment stops. If the players in a particular round act so that neither gets any money, this leaves more money for future players. So somehow we’re supposed to think that people are considerate of the stranger they’re playing with but not of other strangers. Maybe that’s true. But then it certainly doesn’t imply what the experimenters say it implies. It suggests that the players focus on who’s playing with them and don’t care about anyone else.
And if we’re supposed to assume that the players don’t think about where the wealth comes from, then that means we’re supposed to assume that the players think there’s a free lunch. That’s hardly grounds for optimism.
Fortunately, the rest of Ridley’s chapter has actual examples of people in the real world cooperating with strangers to create wealth, not just redistribute it.
UPDATE: Here’s a link to one of the Landsburg pieces.
READER COMMENTS
Thaomas
Apr 4 2017 at 4:13pm
Whether or not I think the ultimatum game is a fallacy, if I were a member of the 0.1% watching a decades-long stagnation in median incomes, I'd be worried, maybe enough to want to call off the class war. Never mind whether it is just: is it wise to take health insurance away from people so my taxes can be reduced? Is it wise to insist on large tax decreases for myself as part of tax "reform"?
Michael Albert
Apr 4 2017 at 4:17pm
Perhaps I’m missing something but are you suggesting that the players would act differently if the experimenters committed to burning the money in the event of a refusal? That seems unlikely to me.
MikeP
Apr 4 2017 at 4:27pm
Isn’t the entire explanation of players’ “generosity” that the appearance in the real world of anything like these games almost always happens in iterative scenarios?
Not only do players not want to be thought a jerk over what in these experiments is a trivial amount of money*, but they realize rationally or emotionally that the next time they play their reputation may precede them.
* As it concerns a trivial amount of money, it is entirely rational for the players not to worry themselves over where the money comes from. I doubt it should enter their consciousness at all.
Hazel Meade
Apr 4 2017 at 4:53pm
The biggest problem is that the money is a windfall: it is not "earned". So using it as a model to argue that equal distribution is more generally "fair" is fallacious. People don't respond to unequal distribution with cries of unfairness when the unequal distribution is the result of unequal effort or merit. There is plenty of research showing that people (over the age of about six) consider unequal distribution more fair when different amounts of effort are contributed by different players.
So the Ultimatum Game can’t model real economic distribution problems. It can only model windfall gains, and then only if the windfall gain is contingent on a situation like the one in the game. This really doesn’t fit many real-world economic situations.
However, what IS interesting is the way it reveals apparently “irrational” behavior in human subjects. For some reason, most players reject the unequal offers, and most players make equal offers. It’s exposing some behavioral traits that aren’t particularly rational in the context of the game, but may make sense in an evolutionary context. This is the area where economics and behavioral psychology meet. Game theory allows us to conduct these little experiments that let us poke at how the human brain is wired.
robc
Apr 4 2017 at 5:20pm
What stagnation?
David R. Henderson
Apr 4 2017 at 6:38pm
@Thaomas,
This blog is not there for you to opine on other issues. If you want to address the issue you raised, wait until I post on it.
Charlie
Apr 4 2017 at 11:26pm
I'm not sure I understand Landsburg's point in reference to the ultimatum game. In the narrow view of "rationality" about which you posted, you are supposed to take the money because it makes you better off. That is the only thing in your objective function.
If you decline the money to punish the other player or to benefit future players you’ll never meet, it still violates this narrow view of rationality.
David Chandler Thomas
Apr 5 2017 at 6:36am
A group of my experimental-economics students carried out a game-theory experiment last semester that I have named "the entitlement game." The game works exactly like the ultimatum game except that the students are told the other player voluntarily donated the money to be divided up. The results are that, with very few exceptions, the split is accepted even when no money was shared. The students will be presenting a paper on the results next week at the conference.
Tim Worstall
Apr 5 2017 at 8:12am
By far the most interesting result of the ultimatum game is that results differ across societies. The standard results are reached by playing it with undergraduates at expensive American universities. That is, among those who have benefited from high trust free market economies.
Play the same game among Bolivian altiplano peasants (yes, it’s been done) and you get the 99/1 results that we don’t see among the undergraduates.
Sure, all of Landsburg's critiques could well be correct, but they wouldn't explain that difference. An answer to the question "Does willingness to punish cheaters cause markets to work, or does markets working cause the punishment of cheaters?" might explain it.
Or, at base, which came first, the chicken or the egg?
Jon Murphy
Apr 5 2017 at 8:30am
Maybe I’ve missed something, but why does it matter that the experimenters start out with the money? They’re not part of the experiment (nor are future players of the game). I’ve always understood the initial distribution of funds to be the endowment; it shouldn’t really matter that it came from the experimenters, should it?
David R. Henderson
Apr 5 2017 at 9:56am
@MikeP,
As it concerns a trivial amount of money, it is entirely rational for the players not to worry themselves over where the money comes from.
How do you decide that it's trivial enough that they don't need to worry about where it comes from, but substantial enough that they do need to worry about themselves or the other player?
@Jon Murphy,
I’ve always understood the initial distribution of funds to be the endowment; it shouldn’t really matter that it came from the experimenters, should it?
It’s not the endowment. Look at the rules again. If Player B turns you down, neither of you gets anything and the money reverts to the experimenters.
Jon Murphy
Apr 5 2017 at 10:08am
@David Henderson
Oh! I get what you’re saying now. Thanks!
JFA
Apr 5 2017 at 10:17am
There have been ultimatum game experiments done in which Player 1 first performs some task to “earn” the money, and then he is asked to split it with player 2. The references escape me (I’m sure google will tell you something), but if I recall correctly, the splits that are accepted are less generous than in the basic ultimatum game experiments (you will need to fact check this). This gets at David’s/Landsburg’s concern (somewhat) over the game not taking into account where wealth comes from.
From a different perspective, I don't think I ever interpreted the ultimatum games as a measure of generosity (whatever your philosophical issues with the experiment). Rather, they are a measure of spitefulness (and the players' beliefs about spitefulness) in the population from which the players are pulled (see Joe Henrich's stuff on performing the experiments with people other than college students from the West).
It’s basically Bob Frank’s idea of developing personality traits as signalling devices (i.e. being a non-money maximizer can leave you with a higher payoff than if you solely maximized the amount of money). If the person who is assigned player 1’s position knows that the average person in the population would reject an offer less than, say, 30% of the prize (because of spite), then the equilibrium is that 30% of the prize is offered. But as you can see, it doesn’t necessarily have anything to do with generosity; it’s just based on people’s belief about the level of spite in a community.
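As a rough sketch of that logic, the snippet below finds the offer that maximizes the proposer's expected payoff given an assumed, entirely invented distribution of minimum acceptable offers in the population. With these made-up numbers the best offer is 30 out of 100, and nothing in the calculation involves generosity, only beliefs about spite.

```python
POT = 100   # hypothetical prize

# Assumed (invented) beliefs: fraction of the population whose minimum
# acceptable offer is at most the given amount.
acceptance_cdf = {0: 0.05, 10: 0.15, 20: 0.40, 30: 1.00}

def acceptance_probability(offer):
    """Probability the offer is accepted under the assumed beliefs."""
    return max((p for threshold, p in acceptance_cdf.items() if offer >= threshold),
               default=0.0)

def expected_payoff(offer):
    """Expected money the proposer keeps when making this offer."""
    return acceptance_probability(offer) * (POT - offer)

best_offer = max(range(POT + 1), key=expected_payoff)
print(best_offer, expected_payoff(best_offer))   # 30 70.0 with these assumed numbers
```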
MikeP
Apr 5 2017 at 4:09pm
How do you decide that it's trivial enough that they don't need to worry about where it comes from, but substantial enough that they do need to worry about themselves or the other player?
Two people who don’t know each other walk towards each other. Midway between them on the sidewalk is a quarter they both see at the same time.
Here are my estimated probabilities of what can happen next:
0.40 at least one of the two says to the other, “You take it.”
0.30 they both pass it by.
0.20 at least one of them avoids eye contact, reaches down, and picks it up, and the other one does not stop him.
0.08 one of them races to be the one to pick it up.
0.01 the two of them both race to be the one to pick it up.
0.01 at least one of them drops a larger amount of money next to the quarter to provide the punchline of an economics joke.
0.00 at least one of them makes an effort to figure out who dropped the quarter.
People do not like being known as unpleasant, even by people they will never meet again.
David R. Henderson
Apr 5 2017 at 4:35pm
@MikeP,
You didn’t answer my question.
MikeP
Apr 5 2017 at 5:01pm
How do you decide that it's trivial enough that they don't need to worry about where it comes from…
The players reasonably recognize that this is a small amount of money to them, to the other player, and to the experimenter. They further reasonably recognize that this is coming out of even deeper pockets of the experimenters' employer, probably through some grant for giving out money exactly like this. If they don't take it, it will be given to some other participant in some other similar experiment. They are expected to treat it as a windfall, so they treat it as a windfall.
…but substantial enough that they do need to worry about themselves or the other player?
Most people feel bad when they cut off another person in traffic or in a grocery line. If there is a low cost — e.g., if a dollar amount to be split is small — then people choose being perceived as pleasant over fighting to win.
Luke Juarez
Apr 5 2017 at 6:34pm
Scott Sumner says, “Never reason from a price change.”
I say, never reason from a social science experiment.
Pedro Romero
Apr 5 2017 at 11:51pm
An everyday example of an ultimatum-like situation is a posted-price exchange between an anonymous seller and an anonymous customer: if the customer does not take the offer, she gets nothing, and neither does the seller. But if she buys the item, then the surplus is shared between her and the seller. How fairly that surplus is divided is an empirical/experimental question.
Another one I can think of is when two parties inherit what could be a huge amount of money from a relative they barely knew, and the two parties may also be unfamiliar with each other or even entirely unrelated.
But what worries me about your argument is the critique based on the presence of the experimenter and the fact that it is her money that is being redistributed, or I should say shared. If you follow the logic of this, then most if not all experiments suffer from this problem, and therefore are invalid. And this is why I do not agree with you.
Even in the case of the ultimatum game, the money is just there to make sure there is saliency in people's decisions. There are sound critiques you could raise against the UG, but the fact that the money comes from the experimenter is not one of them.
David R. Henderson
Apr 6 2017 at 5:29am
@MikeP,
They further reasonably recognize that this is coming out of even deeper pockets of the experimenters' employer, probably through some grant for giving out money exactly like this. If they don't take it, it will be given to some other participant in some other similar experiment.
Exactly. That’s the problem I wrote about above.
@Pedro Romero,
Another one I can think of is when two parties inherit what could be a huge amount of money from a relative they barely knew, and the two parties may also be unfamiliar with each other or even entirely unrelated.
This is not anything like the ultimatum game. The two heirs do end up getting, in total, the whole thing.
If you follow the logic of this, then most if not all experiments suffer from this problem, and therefore are invalid. And this is why I do not agree with you.
That’s not a good reason to disagree. If someone points out a problem with X and you start thinking, “Wow, if that’s a problem with X, then it’s a problem with Y and Z,” it does not follow that it’s not a problem with X.
Miguel Madeira
Apr 6 2017 at 6:45am
I can't understand the point of Landsburg's argument.
The initial vision of the Ultimatum game is that person A prefers {person A: 0 dollars; person B: 0 dollars} to {person A: 1 dollar; person B: 99 dollars}. If I understand correctly, Landsburg's argument is that person A prefers {person A: 0 dollars; person B: 0 dollars; person C: 100 dollars} to {person A: 1 dollar; person B: 99 dollars; person C: 0 dollars}. And? Given that person C is the one willing to give away 100 dollars, and person B is the one who refuses to share those 100 dollars with A, it makes some sense (from a moralist, not a utilitarian, point of view) that A prefers that C keep the 100 dollars rather than have 99 of them go to B.
Pedro Romero
Apr 6 2017 at 12:24pm
@David Henderson, on your point:
"Wow, if that's a problem with X, then it's a problem with Y and Z," it does not follow that it's not a problem with X.
Well, just think about a prisoner's dilemma game where, again, the experimenter incentivizes both players with her money; the nicest thing would be for both players to decide not to play the game so that the 'real' owner keeps it. As a subject, how can I know whether the money is from the experimenter or belongs to the university? And if during the experiment there is a clue that the research is funded by a private NGO, is the money then not the university's either, or perhaps not even the taxpayers'? So the nicest thing is to keep asking who the real owner is.
In the inheritance example, think of situations in which there are feuds because the parties are unhappy and, after a long legal dispute, they do not actually get to enjoy the money.
Swimmy
Apr 6 2017 at 2:42pm
I think Michael Albert got to the heart of the issue in the second comment:
Perhaps I'm missing something but are you suggesting that the players would act differently if the experimenters committed to burning the money in the event of a refusal? That seems unlikely to me.
But that doesn't seem unlikely to me. I can totally imagine player B responding in a different way. They might think something like, "It's a shame if absolutely nobody got the money–a total waste. So destroying it is wrong, even if player A is a jerk." I would probably think this way.
Then again, player B might think, “Destroying the money raises the value of everyone else’s money slightly, including mine, so it’s fine.” (Unlikely, I think.) Or player B might not think about it at all. (Fairly likely.) I genuinely don’t know what the average person would do.
Unfortunately it would be difficult to run this experiment in the US, since burning money is technically illegal and you could only draw conclusions if nobody got the money.