(Forthcoming in the Oxford Handbook of Philosophy of Economics)
Introduction. This article surveys some of the philosophical issues raised by recent experimental work on so-called social preferences. More broadly, my focus is on experimental explorations of the conditions under which people behave co-operatively or in a prosocial way or, alternatively, fail to do so. These experiments raise a number of fascinating methodological and interpretive issues that are of central importance both to economics and to social and political philosophy. It is commonly claimed that the experiments demonstrate that (at least some) people not only have selfish preferences concerning their own material pay-offs, but also have preferences concerning the well-being of others—that is, social preferences. (More concretely, it is claimed that some subjects have well-behaved utility functions in which monetary pay-offs to others, as well as to themselves, occur as arguments.) Moreover, the contention is not just that some subjects have such social preferences, but that these can have large and systematic effects on behavior, both in the experiments under discussion and in real-life contexts outside the laboratory.
These experimental results are thus taken to show the falsity or limited applicability of the standard homo economicus model of human behavior as entirely self-interested. For example, Henrich et al. write, in the opening paragraph of the “Overview and Synthesis” chapter of their (2004):
The 1980’s and 1990’s have seen an important shift in the model of human motives used in economics and allied rational actor disciplines. … In the past, the assumption that actors were rational was typically linked to what we call the selfish axiom—the assumption that individuals seek to maximize their own material gains in these interactions and expect others to do the same. However, experimental economists and others have uncovered large and consistent deviations from the predictions of the textbook representation of Homo economicus … Literally hundreds of experiments in dozens of countries using a variety of experimental protocols suggest that, in addition to their own material payoffs, people have social preferences: subjects care about fairness and reciprocity, are willing to change the distribution of material outcomes among others at a personal cost to themselves, and reward those who act in a pro-social manner while punishing those who do not, even when these actions are costly. Initial skepticism about the experimental evidence has waned as subsequent experiments with high stakes and with ample opportunity for learning failed to substantially modify the initial conclusions. (p. 8)
Other economists have challenged this interpretation of the experimental results, contending that they may be accounted for entirely in terms of selfish preferences and conventional game theory assumptions. In addition, even among those who agree that the experimental results cannot be fully accounted for just in terms of selfish preferences, some deny that the invocation of social preferences provides an illuminating explanation of behavior. They urge instead that the experimental results should be accounted for in some other way—e.g., by appeal to social norms. In support of this position, it is observed that the behavior in the games which is taken to be evidence for social preferences (and a fortiori the preferences themselves) often seems to be highly context-dependent and non-robust, in the sense that a large number of different changes in the experimental set-up lead to different behavioral results. Economists who invoke the notion of social preferences typically assume (or argue) that such preferences are not only well-behaved in the sense of satisfying the usual axioms of revealed preference theory, but also that they are sufficiently stable that we can use them to predict behavior across some interesting range of contexts. (This assumption is more or less explicit among those who think of experimental games as ways of measuring social preferences.) If this stability assumption is not true, one might well wonder whether whatever accounts for non-selfish behavior is usefully conceptualized as a social preference rather than in some alternative way. This issue will also be explored below.
If it is true that the behavior exhibited in the games discussed below cannot be fully accounted for by selfish preferences, then, whatever the positive explanation for the behavior may be, a number of other questions arise. What is the evolutionary history of such prosocial behaviors (as we will call them) and the preferences/motivations that underlie them? To what extent are these behaviors and motivations “innate” or genetically specified, and to what extent do they reflect the influence of learning and culture? How much variability with respect to prosocial behavior/motivations is there among people within particular societies or groups, and how much variation exists across groups? To the extent that people exhibit prosocial behavior and motivations, what is the content of these—are (many) people unconditional altruists, conditional co-operators or reciprocators of one or another kind, norm followers, or some mixture of all of these? Experimental investigations of social preferences have little directly to say about the first of these questions but are at least suggestive about many of the others.
My plan is to proceed as follows. I begin with an overview of some of the experimental results (Section 2) and then turn to issues about their robustness and the implications thereof (Section 3). Section 4 explores the possible role of neurobiological evidence in addressing issues of robustness and discriminating among alternative explanations of experimental results. I next turn to a more systematic comparison of different approaches to explaining the experimental results, considering in turn explanations that appeal to social preferences (Section 5), explanations that appeal to selfish preferences, and explanations that appeal to norms (Section 6). I will conclude with some very brief remarks about the implications of all of this for normative social and political theory.
Some Experimental Results. In an ordinary ultimatum game (UG) a proposer (P) proposes a division of a monetary stake to a responder (R). That is, if the stake is $n, P may propose any amount $x up to $n for himself, with $n-x going to R. R may then either accept or reject this offer. If R accepts, both players get the proposed division. If R rejects, both players get nothing. The identities of P and R are unknown to one another. If the game is one-shot and both players have entirely selfish preferences, the subgame-perfect equilibrium is for P to offer R the smallest possible positive amount of money (e.g., one cent if the stake is divisible down to pennies) and for R to accept. This is not what is observed experimentally in any population. In most populations in developed countries, Ps offer an average of 0.3-0.4 of the total stake, and offers under 0.2 are rejected about half the time. Offers of 0.5 are also common.
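The pay-off structure and the selfish-preferences prediction just described can be sketched in a few lines of code. This is only an illustrative model of the game as defined above; the function names and the $10 stake are my own choices, not part of any experimental protocol.

```python
# Ultimatum game pay-offs: the proposer keeps stake - offer if the
# responder accepts; both get nothing on rejection. Amounts in cents.

def ultimatum_payoffs(stake, offer_to_responder, accepted):
    """Return (proposer, responder) pay-offs for one play of the UG."""
    if accepted:
        return stake - offer_to_responder, offer_to_responder
    return 0, 0

def selfish_responder_accepts(offer_to_responder):
    # A purely selfish responder prefers any positive amount to nothing.
    return offer_to_responder > 0

# Subgame-perfect play with selfish preferences: offer the smallest
# positive unit (one cent of a $10 stake) and the responder accepts.
stake, offer = 1000, 1
accepted = selfish_responder_accepts(offer)
print(ultimatum_payoffs(stake, offer, accepted))  # (999, 1)
```

The experimental finding, of course, is that real responders reject such minimal offers roughly half the time, which this selfish-preferences model cannot reproduce.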
Dictator games (DGs) are like UGs except that the responder has no opportunity to reject: the proposer (dictator) unilaterally decides on the allocation. If the dictator has only self-interested preferences, he will allocate the entire amount to himself. Instead, in DGs in populations in developed countries the mean allocation is about 0.2 of the total stake, although there is considerable variance, with many allocations of 0 and also many of 0.5.
In a public goods game, each of N players can contribute an amount ci of their choosing from an initial endowment which is the same for each player. The total amount contributed by all players is multiplied by some factor m (with 1 < m < N) and divided equally among all of the players, regardless of how much they contribute. In other words, each player i's pay-off changes by −ci + (m/N)(c1 + … + cN). Since m/N < 1, contributing is individually costly, but since m > 1, contributions increase the group's total pay-off. In this game, if players care only about their own monetary pay-offs, the dominant strategy is to contribute nothing—that is, to free ride on the contributions of the other players. In one-shot public goods games in developed countries, subjects contribute on average about half of their endowment, although again there is a great deal of variation, with a number of subjects contributing nothing.
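The social-dilemma structure of these pay-offs can be made concrete with a small numerical sketch. The endowment of 20, four players, and multiplier m = 2 are illustrative assumptions satisfying the condition 1 < m < N; nothing here is tied to a particular experiment.

```python
# Public goods pay-offs: each player i pays in c_i; the pot is
# multiplied by m and split equally among the n players.

def public_goods_payoffs(endowment, contributions, m):
    n = len(contributions)
    share = m * sum(contributions) / n  # equal share of the multiplied pot
    return [endowment - c + share for c in contributions]

# Four players, endowment 20, m = 2: each contributed unit returns only
# m/n = 0.5 to the contributor, so free riding is the dominant strategy,
# even though universal contribution doubles everyone's pay-off.
print(public_goods_payoffs(20, [0, 0, 0, 0], 2))      # [20.0, 20.0, 20.0, 20.0]
print(public_goods_payoffs(20, [20, 20, 20, 20], 2))  # [40.0, 40.0, 40.0, 40.0]
print(public_goods_payoffs(20, [0, 20, 20, 20], 2))   # [50.0, 30.0, 30.0, 30.0]
```

The third line shows the dilemma in miniature: the lone free rider earns 50 while each contributor earns 30, yet all four would earn 40 under full contribution.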
In repeated public goods games, subjects begin with a substantial mean contribution, which then declines significantly under repetition. If a costly punishment option is introduced which allows subjects to punish non-contributors at a cost to themselves, a number will do so, even in the final round, in which punishment cannot influence future behavior. Introduction of this option prevents the decline in contributions with repeated play. Allowing discussion also boosts contributions.
In trust games, the trustor has the opportunity to transfer some amount X (from an initial stake) of her own choosing to a second party (the trustee). This amount is increased by the experimenter by some multiple k > 1 (e.g., X may be tripled). The trustee then has the opportunity to transfer some portion of this new amount kX back to the trustor. In a one-shot game, a purely self-interested trustee will return nothing to the trustor and, recognizing this, the trustor will transfer nothing to the trustee in the initial step. Instead, trustors in developed societies tend to transfer around 0.4-0.6 of their stake, and the rate of return to trustors is around zero—that is, trustees return approximately the amount transferred but no more.
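The trust-game pay-offs described above can likewise be sketched as follows. The stake of 10 and the tripling multiplier are illustrative assumptions; the function simply encodes the transfers defined in the text.

```python
# Trust game pay-offs: the trustor sends X from her stake, the
# experimenter multiplies it by k > 1, and the trustee sends back
# some amount Y with 0 <= Y <= k*X.

def trust_payoffs(stake, sent, k, returned):
    assert 0 <= sent <= stake and 0 <= returned <= k * sent
    trustor = stake - sent + returned
    trustee = k * sent - returned
    return trustor, trustee

# Backward induction with selfish preferences: the trustee returns
# nothing, so the trustor sends nothing.
print(trust_payoffs(10, 0, 3, 0))  # (10, 0)

# The typical experimental pattern: send about half the stake and
# receive back roughly the amount sent (a rate of return near zero
# for the trustor), leaving the trustee better off.
print(trust_payoffs(10, 5, 3, 5))  # (10, 10)
```

The second call illustrates why a near-zero rate of return still departs from the selfish prediction: the trustee gives back 5 of the 15 received, which a purely self-interested trustee would never do.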