Abstract
Social dilemmas are situations in which collective interests are at odds with private interests: pollution, depletion of natural resources, and intergroup conflicts are, at their core, social dilemmas. Because of their multidisciplinary nature and their importance, social dilemmas have been studied by economists, biologists, psychologists, sociologists, and political scientists. These studies typically explain the tendency to cooperate by dividing people into proself and prosocial types, by appealing to forms of external control or, in iterated social dilemmas, to long-term strategies. But recent experiments have shown that cooperation is possible even in one-shot social dilemmas without any form of external control, and that the rate of cooperation typically depends on the payoffs. This makes a predictive division between proself and prosocial people impossible and shows that people have a natural attitude toward cooperation. The key innovation of this article is in fact to postulate that humans have a natural attitude toward cooperation and consequently do not act a priori as single agents, as assumed by standard economic models; rather, they forecast how a social dilemma would evolve if they formed coalitions and then act according to their most optimistic forecast. Formalizing this idea, we propose the first predictive model of human cooperation able to organize a number of different experimental findings that are not explained by the standard model. We also show that the model makes satisfactorily accurate quantitative predictions of population average behavior in one-shot social dilemmas.
Citation: Capraro V (2013) A Model of Human Cooperation in Social Dilemmas. PLoS ONE 8(8): e72427. https://doi.org/10.1371/journal.pone.0072427
Editor: Attila Szolnoki, Hungarian Academy of Sciences, Hungary
Received: April 19, 2013; Accepted: July 10, 2013; Published: August 29, 2013
Copyright: © 2013 Valerio Capraro. This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
Funding: No current external funding sources for this study.
Competing interests: The author has declared that no competing interests exist.
Introduction
Social dilemmas are situations in which collective interests are at odds with private interests [1]. In other words, they describe situations in which fully selfish and rational behavior leads to an outcome worse than the one individuals would obtain if they acted collectively. Social dilemmas thus create a tension between private and public interests, between selfishness and cooperation. Classically, several different social dilemmas have been distinguished, including the Prisoner’s dilemma, Chicken, Assurance, Public Goods, the Tragedy of the Commons [2], and, more recently, the Traveler’s dilemma [3], [4]. Each of these games has been studied by researchers from different disciplines, such as economists, biologists, psychologists, sociologists, and political scientists, both because of the intrinsic philosophical interest in understanding human nature and because many concrete and important situations, such as pollution, depletion of natural resources, and intergroup conflict, can be modelled as social dilemmas.
The classical approaches explain the tendency to cooperate by dividing people into proself and prosocial types [5], [6], [7], [8], [9], by appealing to forms of external control [10], [11], [12], or by invoking long-term strategies in iterated social dilemmas [13]. However, over the years many experiments have accumulated showing cooperation even in one-shot social dilemmas without external control [14], [15], [16], [17], [18], [19], [20]. These and other earlier experiments [21], [22], [23], [24] have also shown that the rate of cooperation in the same game depends on the particular payoffs, suggesting that humans are most likely engaged in some sort of indirect reciprocity [25], [26] and that the same person may behave more or less cooperatively depending on the payoffs. Consequently, the problem of making a predictive division into proself and prosocial types becomes extremely difficult, if not impossible.
From these experiments we can draw two conclusions: first, the observation of cooperation in one-shot social dilemmas without external control suggests that the origin of cooperation lies in human nature; second, the fact that the rate of cooperation depends on the payoffs suggests that it could be computed, at least approximately, using only the payoffs. The word approximately reflects the fact that numerous experimental studies have shown that cooperation depends on a number of factors, such as family history, age, culture, gender, even university course [27], religious beliefs [19], and decision time [28]. Therefore, we cannot expect a theory that, given only the payoffs, predicts the individual-level rate of cooperation in a social dilemma. We can instead expect a model that predicts population average behaviour quite accurately, using the mean values of parameters that could in principle be updated at the individual level.
In this article we make the first step in this direction: (1) we develop the first predictive model of cooperation; (2) we show that it explains a number of puzzling experimental findings that are not explained by the standard economic model, such as the fact that the rate of cooperation in the Prisoner’s dilemma increases when the cost-benefit ratio decreases, the rate of cooperation in the Traveler’s dilemma increases when the bonus/penalty decreases, the rate of cooperation in the Public Goods game increases when the per-capita marginal return increases, and the rate of cooperation in the Chicken game is larger than the rate of cooperation in the Prisoner’s dilemma with similar payoffs; (3) we show that it makes satisfactorily accurate quantitative predictions of population average behaviour in social dilemmas.
We mention that there are many other models that can be applied to explain deviations towards cooperation in social dilemmas, including the cognitive hierarchy model [29], the quantal level-k theory [30], the level-k theory [31], the quantal response equilibrium [32], the inequity aversion models [33], [34], and the noisy introspection model [35]. Nevertheless, all these models use free parameters and so they are descriptive rather than predictive.
The key idea behind the model is very simple: since experimental data suggest that humans have a natural attitude toward cooperation, we formalize the intuition that people do not act a priori as single agents; rather, they forecast how the game would be played if they formed coalitions and then act according to their most optimistic forecast.
We anticipate that forecasts will be defined by comparing the incentive and the risk for an agent to deviate from the collective interest. This comparison leads to associating a probability with the event “agent defects”. As mentioned, we will show that this procedure works satisfactorily well in predicting population average behavior. The problem in passing to individual-level predictions is that the event “player defects”, given only the payoffs, is not measurable at the individual level in any universal and objective sense; the dream is to use the factors mentioned above (family history, age, culture, incentives, iterations, etc.) to define parameters that update the measure of the event “player defects” at the individual level. In fact, an attempt to extend the present model to iterated social dilemmas has been made in [36], leading to promising results: predictions tend to get closer to experimental data as the number of iterations increases.
Even though our model is very general and can be applied to every symmetric game, we explicitly treat only four particularly relevant and widely studied social dilemmas: the Prisoner’s dilemma, the Traveler’s dilemma, the Public Goods game, and the Tragedy of the Commons. We begin with a short review of these games.
Prisoner’s Dilemma
Two players can choose to either “Cooperate” or “Defect”. If both players cooperate, they both receive the reward payoff $R$ for cooperating. If one player defects and the other cooperates, then the defector receives the temptation payoff $T$, while the cooperator receives the sucker payoff $S$. If both players defect, they both receive the punishment payoff $P$. Payoffs are subject to the condition $T > R > P > S$.
Traveler’s Dilemma
Fix a bonus/penalty $b \ge 2$. Two travelers have to claim a reimbursement between 180 and 300 monetary units for their (identical) luggage, which has been lost by the same airline. The airline wants to prevent the travelers from asking for unreasonably high reimbursements and so it adopts the following rule: the traveler who makes the lower claim, say $m$, gets a reimbursement of $m + b$ monetary units, and the other one gets a reimbursement of only $m - b$ monetary units. If both travelers claim the same amount $m$, then they both get reimbursed $m$ monetary units.
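To make the reimbursement rule concrete, here is a minimal Python sketch of the payoff function; the symbols $m$ and $b$ follow the reconstruction above, and the function name is ours.

```python
def traveler_payoffs(claim1, claim2, b):
    """Payoffs of the Traveler's dilemma for two claims in [180, 300] and bonus/penalty b >= 2."""
    if claim1 == claim2:
        return claim1, claim2          # equal claims: both reimbursed the common amount
    low = min(claim1, claim2)
    if claim1 < claim2:
        return low + b, low - b        # player 1 claimed less: gets the bonus, player 2 the penalty
    return low - b, low + b            # player 2 claimed less


# Example: with b = 2, claiming 299 against 300 yields 301 for the low claimant.
print(traveler_payoffs(299, 300, 2))   # (301, 297)
```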
Public Goods Game
$N$ agents receive an initial endowment of $y$ monetary units and simultaneously choose an amount $c_i \in [0, y]$ to contribute to a public pool. The total amount in the pool is multiplied by a factor $\lambda$ and then divided equally among all group members. So agent $i$ receives a payoff of $u_i = y - c_i + \alpha \sum_{j=1}^{N} c_j$, where $\alpha = \lambda / N$. The number $\alpha$ is assumed to belong to the interval $(1/N, 1)$ and is called the constant marginal return.
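The payoff rule can be spelled out in a few lines of Python; the formula below follows the reconstruction of the payoff given above (endowment $y$, contributions $c_i$, constant marginal return $\alpha$) and is meant as an illustration.

```python
def public_goods_payoffs(contributions, y, alpha):
    """Payoffs of an N-player Public Goods game.

    contributions: list of contributions c_i, each in [0, y]
    alpha: constant marginal return, assumed to lie in (1/N, 1)
    """
    pool_share = alpha * sum(contributions)        # each agent's share of the multiplied pool
    return [y - c + pool_share for c in contributions]


# Example: two agents with endowment 1; one contributes fully, one free-rides.
print(public_goods_payoffs([1.0, 0.0], y=1.0, alpha=0.8))   # [0.8, 1.8]
```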
Tragedy of the Commons
Consider a village with $N$ farmers that has limited grassland. Each of the farmers has the option to keep a sheep or not. Let the monetary utility of milk and wool from one sheep be $g$ and let the monetary damage to the environment from one sheep grazing over the grassland be $d$. Assume $d > g > d/N$, so that keeping a sheep is individually profitable but collectively harmful, and assume the damage is shared equally among the farmers. Let $x_i$ be a variable that takes the value 1 or 0 according to whether farmer $i$ keeps the sheep or not. The payoff of farmer $i$ is $u_i = g\,x_i - \frac{d}{N}\sum_{j=1}^{N} x_j$.
All these games share the same feature: selfish and rational behavior leads to suboptimal outcomes. In the Prisoner’s dilemma, the unique Nash equilibrium is to defect, while both players would be better off if they both cooperated; in the Traveler’s dilemma, the unique Nash equilibrium is to claim the lowest possible amount, producing an outcome smaller than the one the travelers would obtain if they both claimed the largest possible amount; in the Public Goods game, the unique Nash equilibrium is to contribute nothing, while all players would be better off if they all contributed everything; in the Tragedy of the Commons, the unique Nash equilibrium is to keep the sheep, while all farmers would be better off if they all agreed not to keep sheep.
An Informal Description of the Model
Before introducing the model in general, we describe it informally in a particular case. Consider the Prisoner’s dilemma (recently tested on MTurk in [20]) with monetary outcomes (expressed in dollars) $R = 0.15$, $S = 0$, $T = 0.20$, $P = 0.05$. The idea is that players forecast how the game would be played if they formed coalitions. In a two-player game there are only two possible coalition structures: in the selfish coalition structure $p_s$ players are supposed to follow their private interests, and in the cooperative coalition structure $p_c$ they are supposed to follow the collective interest. The analysis of these two coalition structures proceeds as follows:
- In $p_s$ players follow their private interest and therefore, by definition, they play the Nash equilibrium (Defect, Defect). Since there is no incentive to deviate from a Nash equilibrium, each player gets 0.05 for sure and we say that the value of $p_s$ is 0.05 and write $e(p_s) = 0.05$.
- To define the value of $p_c$ we argue as follows. If the players follow the collective interest, their largest possible payoff is 0.15, in correspondence of the profile of strategies (Cooperate, Cooperate). Since this profile of strategies is not stable (i.e., each player has a non-zero incentive to deviate from it), we introduce a probability to measure how likely such deviations are. To define this probability, we observe that:
- the incentive to deviate from the collective interest is $0.20 - 0.15 = 0.05$, since each player can get 0.20, instead of 0.15, if she defects and the other cooperates;
- the risk of deviating from the collective interest is $0.15 - 0.05 = 0.10$, since each player can get only 0.05 instead of 0.15 if she follows her private interest but the other one does the same.
We define the prior probability that a player abandons the coalition structure $p_c$ by making a sort of proportion between incentive and risk. Specifically, we define the probability that a player abandons $p_c$ to be $\frac{0.05}{0.05 + 0.10} = \frac{1}{3}$. Now, note that the smallest payoff achievable by a player when she follows $p_c$ but the other player does not is the sucker payoff $S = 0$. Therefore, we define $e(p_c) = \frac{2}{3}\cdot 0.15 + \frac{1}{3}\cdot 0 = 0.10$.
The numbers $e(p_s) = 0.05$ and $e(p_c) = 0.10$ are interpreted as forecasts of the expected payoff for an agent playing according to $p_s$ and $p_c$, respectively. Since $0.10 > 0.05$, the most optimistic forecast is in correspondence of the cooperative coalition structure $p_c$. We use this best forecast to generate common beliefs or, in other words, to make a tacit binding between the players: to play only strategies which give a payoff of at least 0.10 to both players. More formally, we restrict the set of profiles of strategies and we allow only profiles $\sigma = (\sigma_1, \sigma_2)$ such that $u_i(\sigma) \ge 0.10$, for all $i$. We define the cooperative equilibrium to be the unique Nash equilibrium of this restricted game.
From Fig. 1, it is clear that the cooperative equilibrium is in correspondence of the point in the red set that is closest to the Nash equilibrium (Defect, Defect). This point can be computed directly by finding the smallest probability of cooperation $q$ such that the symmetric profile in which both players cooperate with probability $q$ gives each of them a payoff of at least 0.10, that is, $0.05 + 0.10\,q \ge 0.10$, giving $q = \frac{1}{2}$. Consequently, the cooperative equilibrium of this variant of the Prisoner’s dilemma is to cooperate with probability $\frac{1}{2}$ for both players.
Figure 1. The red set represents the set of allowed profiles of strategies in the restricted game.
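To make the computation above easy to reproduce, here is a minimal Python sketch of the informal procedure, assuming the payoffs of this example and the incentive/(incentive + risk) proportion described above; the variable names are ours.

```python
# Prisoner's dilemma of the example (in dollars): R (both cooperate), S (sucker),
# T (temptation), P (both defect).
R, S, T, P = 0.15, 0.00, 0.20, 0.05

e_selfish = P                          # value of the selfish coalition structure: the Nash payoff
incentive = T - R                      # gain from defecting while the other cooperates: 0.05
risk = R - P                           # loss if both end up defecting instead of cooperating: 0.10
alpha = incentive / (incentive + risk) # prior probability that the other player abandons p_c: 1/3
e_coop = (1 - alpha) * R + alpha * S   # value of the cooperative coalition structure: 0.10

def u(p1, p2):
    """Expected payoff of player 1 when the players cooperate with probabilities p1, p2."""
    return p1 * p2 * R + p1 * (1 - p2) * S + (1 - p1) * p2 * T + (1 - p1) * (1 - p2) * P

# Cooperative equilibrium: smallest symmetric cooperation probability q whose expected
# payoff reaches the best forecast e_coop (small tolerance to absorb rounding).
q = next(k / 1000 for k in range(1001) if u(k / 1000, k / 1000) >= e_coop - 1e-12)
print(e_selfish, round(e_coop, 3), q)  # 0.05 0.1 0.5
```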
The Model
We now describe the general model. We recall that, motivated by the observation that the attitude toward cooperation seems intrinsic to human nature, our main idea is to assume that players do not act a priori as single agents; rather, they forecast how the game would be played if they formed coalitions and then play according to their most optimistic forecast. The only technical difficulty in formalizing this idea is defining the forecasts. Following the example described in the previous section, they will be defined by assigning to each player $i$ and to each partition $p$ of the player set, interpreted as a possible coalition structure, a number $e_i(p)$ which represents the expected payoff of player $i$ when she plays according to the coalition structure $p$. This value will indeed be defined as an average
$$e_i(p) = \sum_{J} \alpha_{i,J}(p)\, m_{i,J}(p),$$
where $\alpha_{i,J}(p)$ represents the prior probability that player $i$ assigns to the event “the players in $J$ abandon the coalition structure $p$” and $m_{i,J}(p)$ is the infimum of the payoffs of player $i$ when she plays according to the coalition structure $p$ and the players in $J$ abandon the coalition.
This idea is very general and, indeed, in a long-term working paper we are developing the theory for every normal form game [37]. In the case of the classical social dilemmas under consideration the theory is much easier, because of their symmetry.
- Symmetry. All players have the same set of strategies and, for each player $i$, for each permutation $\pi$ of the set of players, and for each profile of strategies $\sigma = (\sigma_1, \ldots, \sigma_N)$, one has
$$u_i(\sigma_1, \ldots, \sigma_N) = u_{\pi(i)}\bigl(\sigma_{\pi^{-1}(1)}, \ldots, \sigma_{\pi^{-1}(N)}\bigr). \qquad (1)$$
Coming to the description of the model, let $G$ be a symmetric game and denote by $N$ the set of players, each of which has pure strategy set $S_i$, mixed strategy set $\Delta(S_i)$, and payoff function $u_i$. We start by assuming, for simplicity, that there are only two players, and we will explain, at the end of this section, how the model generalizes to $N$-player games.
A coalition structure $p$ is a partition of the set of players, that is, a collection of pairwise disjoint subsets of $N$ whose union covers $N$. Every set in the partition is called a coalition. Given a coalition structure $p$, we denote by $G_p$ the game associated to $p$, in which players in the same coalition play as a single player whose payoff is the sum of the payoffs of the players belonging to that coalition. Call $\mathrm{Nash}(G_p)$ the set of Nash equilibria of the game $G_p$. Now fix a player $i$ and let $j$ denote the other player. We denote by $I_i(p)$ the maximal gain that player $i$ can obtain by leaving the coalition structure $p$, relative to what she gets inside it. Formally,
$$I_i(p) = \max_{\tau_i \in \Delta(S_i)}\; \max_{\sigma \in \mathrm{Nash}(G_p)} u_i(\tau_i, \sigma_j) \;-\; \min_{\sigma \in \mathrm{Nash}(G_p)} u_i(\sigma). \qquad (2)$$
$I_i(p)$ will be called the incentive of player $i$ to abandon the coalition structure $p$.
Given a profile of strategies $\sigma$, a strategy $\tau_j \in \Delta(S_j)$ is called a profitable deviation of player $j$ from $\sigma$ if $u_j(\sigma_i, \tau_j) > u_j(\sigma)$.
We denote by $R_i(p)$ the maximal loss that player $i$ can incur if she decides to leave the coalition structure $p$ to try to achieve her maximal possible gain, but also player $j$ deviates from the coalition structure, either to follow her selfish interest or to anticipate player $i$'s deviation. Formally,
$$R_i(p) = \max_{\sigma, \tau_i, \tau_j} \bigl[u_i(\sigma) - u_i(\tau_i, \tau_j)\bigr], \qquad (3)$$
where $\sigma$ runs over the set of Nash equilibria of $G_p$ and, for each such $\sigma$, $\tau_i$ runs over the set of strategies such that $u_i(\tau_i, \sigma_j)$ is maximized and $\tau_j$ runs over the strategies that are profitable deviations of player $j$ from either $\sigma$ or $(\tau_i, \sigma_j)$. $R_i(p)$ is called the risk for player $i$ in abandoning the coalition structure $p$.
We define the probability of deviating from the coalition structure $p$ by comparing incentive and risk. There are certainly many ways to make such a comparison. In this paper we use a quite intuitive and seemingly natural one and, in future research, it would be important to investigate others. Specifically, we define
$$\alpha_{i,j}(p) = \frac{I_j(p)}{I_j(p) + R_j(p)} \qquad (4)$$
and we interpret this number as the prior probability that player $i$ assigns to the event “player $j$ abandons the coalition structure $p$”. Therefore $1 - \alpha_{i,j}(p)$ is interpreted as the prior probability that nobody abandons the coalition structure $p$. Now, let $m_i(p)$ be the infimum of the payoffs of player $i$ if nobody abandons the coalition structure $p$, that is, the infimum of the payoffs of player $i$ when each player plays according to a Nash equilibrium of $G_p$, and let $s_i(p)$ be the infimum of the payoffs of player $i$ when she plays according to a Nash equilibrium of $G_p$ and $j$ plays a profitable deviation from a Nash equilibrium of $G_p$. The value for player $i$ of the coalition structure $p$ is by definition
$$e_i(p) = \bigl(1 - \alpha_{i,j}(p)\bigr)\, m_i(p) + \alpha_{i,j}(p)\, s_i(p). \qquad (5)$$
Symmetry implies that $e_1(p) = e_2(p)$, for all $p$. Consequently, there is a coalition structure $\bar p$ (independent of $i$) which maximizes $e_i$. We use the number $e_i(\bar p)$ to define common beliefs or, in other words, to make a tacit binding among the players.
Definition 0.1
The induced game is the same game as $G$ except for the set of allowed profiles of strategies: in the induced game only profiles of strategies $\sigma$ such that $u_i(\sigma) \ge e_i(\bar p)$, for all $i$, are allowed.
Observe that the induced game does not depend on the maximizing coalition structure: in case of multiple coalition structures maximizing the value, one can choose any of them to define the induced game, and the resulting game does not depend on that choice.
Since the set of allowed profiles of strategies in the induced game is convex, compact, and non-empty, one can compute Nash equilibria of the induced game.
Definition 0.2
A cooperative equilibrium for $G$ is a Nash equilibrium of the induced game.
Observe that this model implicitly assumes that it is common knowledge that both players apply the same method of reasoning, that is, each player knows that the other player thinks in terms of coalitions when making her decision. As we elaborate in the Conclusions, we believe that this assumption is not unreasonable and may provide a realistic picture of the mental processes that real subjects perform during the game.
In the case of $N$-player games the idea is to define the probability of abandoning a coalition structure for every single player, as above, and then use the law of total probability to extend this measure to a probability measure on the sets of potential deviators. To use the law of total probability we need to know the probabilities that two or more given players deviate from the coalition structure. This is easy in situations of perfect anonymity: one can just assume that the events “player $i$ deviates” and “player $j$ deviates” are independent and then multiply the respective probabilities. The situation where a player may influence the choice of another player is much more interesting and worthy of being explored.
Finally, we observe that the N-person classical social dilemmas under consideration are computationally very simple, since it is enough to study only the fully selfish coalition structure $p_s$ (in which all players play according to a Nash equilibrium of the original game) and the fully cooperative coalition structure $p_c$ (in which all players play collectively). More formally, given a coalition structure $p$, one has $e_i(p) \le \max\{e_i(p_s), e_i(p_c)\}$. Therefore, in order to find a coalition structure that maximizes the value, it is enough to know the values $e_i(p_s)$ and $e_i(p_c)$.
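As an illustration of this reduction and of the independence assumption, the following Python sketch computes the value of the fully cooperative coalition structure for a Public Goods game with endowment 1, averaging over the possible sets of deviators. The expressions used for the incentive (1 − α) and the risk (αN − 1), as well as the closed form α(N + 1) − 1, are our own derivation under the reconstruction above and should be read as a sketch, not as formulas quoted from the text.

```python
from itertools import combinations
from math import prod

def coalition_value(num_others, p_dev, inf_payoff):
    """Value of a coalition structure for one player, assuming the other players
    abandon it independently with probability p_dev (perfect anonymity).

    inf_payoff(k) returns the infimum of the player's payoffs when exactly k of
    the others abandon the coalition.
    """
    value = 0.0
    for k in range(num_others + 1):
        # Probability that exactly k of the others deviate: sum over all subsets of k
        # specific deviators (law of total probability under independence).
        weight = sum(
            prod([p_dev] * k + [1 - p_dev] * (num_others - k))
            for _ in combinations(range(num_others), k)
        )
        value += weight * inf_payoff(k)
    return value

# Public Goods game with endowment y = 1 and marginal return alpha (sketch).
N, alpha = 4, 0.5
p_dev = (1 - alpha) / ((1 - alpha) + (alpha * N - 1))      # incentive / (incentive + risk)
e_coop = coalition_value(N - 1, p_dev, lambda k: alpha * (N - k))
print(round(e_coop, 3), round(alpha * (N + 1) - 1, 3))     # both 1.5
```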
Predictions of the Model
Prisoner’s Dilemma
We compute the cooperative equilibrium of the Prisoner’s dilemma in two variants, starting from the one already discussed in the section “An Informal Description of the Model”, with monetary outcomes (expressed in dollars) $R = 0.15$, $S = 0$, $T = 0.20$, $P = 0.05$. In this case, the reader can easily check, following the computation sketched in that section, that the cooperative equilibrium is to cooperate with probability $\frac{1}{2}$ for both players. Notice that in [20] it has been reported that players cooperated with probability 58 per cent in one treatment and 65 per cent in another treatment, and the over-cooperation in the second experiment was explained in terms of a framing effect due to the different ways in which the same game was presented.
Similar results can be obtained by comparing the experimental data reported in [19] on a one-shot Prisoner’s dilemma with the corresponding cooperative equilibrium: 37 per cent of subjects cooperated in the laboratory, a rate close to the prediction of the cooperative equilibrium for those payoffs. We mention that the same experiment was repeated on MTurk with ten times smaller outcomes, giving a slightly larger percentage of cooperation (47 per cent); nevertheless, it was shown in [19] that this difference was not statistically significant.
Now we consider a parametric Prisoner’s dilemma. Fix a benefit $b$ and a cost $c$ with $b > c > 0$ and consider the following monetary outcomes: cooperating means paying the cost $c$ to deliver the benefit $b$ to the other player, so that $R = b - c$, $S = -c$, $T = b$, and $P = 0$. Intuition suggests that people should be perfectly selfish when $b/c$ is close to 1, they should get more cooperative as $b/c$ increases, and they should tend to be perfectly cooperative as $b/c$ approaches infinity. This qualitative behavior was indeed observed in iterated treatments in [18].
We show that this is in fact the behavior of the cooperative equilibrium. Indeed, one obtains that the cooperative equilibrium coincides with the Nash equilibrium for $b/c \le 2$, while, for $b/c > 2$, it is to cooperate with probability
$$q^* = \frac{b - 2c}{b - c},$$
which moves continuously and monotonically from defection to cooperation as $b/c$ increases and tends to full cooperation as $b/c$ tends to infinity. Note that the fact that the cooperative equilibrium coincides with the Nash equilibrium for $b/c \le 2$ also shows that the Nash equilibrium and the cooperative equilibrium are not disjoint solution concepts. Colloquially speaking, players get selfish when they understand that cooperating is not fruitful.
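For concreteness, here is a minimal Python sketch of this parametric computation, assuming the benefit-cost payoffs $R = b - c$, $S = -c$, $T = b$, $P = 0$ and the incentive/(incentive + risk) proportion used above; the closed form it returns is our derivation and is meant as an illustration.

```python
def cooperative_equilibrium_pd(b, c):
    """Cooperation probability predicted for the benefit-cost Prisoner's dilemma
    (R = b - c, S = -c, T = b, P = 0), under the reconstructed procedure."""
    R, S, T, P = b - c, -c, b, 0.0
    incentive = T - R                      # gain from defecting on a cooperator: c
    risk = R - P                           # loss if both defect instead of cooperating: b - c
    alpha = incentive / (incentive + risk) # probability the other player abandons cooperation
    e_coop = (1 - alpha) * R + alpha * S   # value of the cooperative coalition structure: b - 2c
    e_selfish = P                          # value of the selfish coalition structure: 0
    if e_coop <= e_selfish:                # b/c <= 2: defection, as in the Nash equilibrium
        return 0.0
    # Smallest symmetric cooperation probability whose payoff q*(b - c) reaches e_coop.
    return (b - 2 * c) / (b - c)

for ratio in (1.5, 2, 3, 5, 10):
    print(ratio, round(cooperative_equilibrium_pd(ratio, 1.0), 3))
# 1.5 -> 0.0, 2 -> 0.0, 3 -> 0.5, 5 -> 0.75, 10 -> 0.889
```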
Traveler’s Dilemma
One has $e(p_s) = 180$, since (180,180) is the unique Nash equilibrium of the Traveler’s dilemma. As for the cooperative coalition structure, the unique Nash equilibrium of $G_{p_c}$ is (300,300); the incentive to abandon it is attained in correspondence of the profile (300,299), the risk in correspondence of profiles such as (298,299), the infimum of payoffs when the other traveler deviates in correspondence of her best deviation from (300,300), and clearly the infimum of payoffs when nobody deviates is 300, in correspondence of (300,300). All these quantities, and hence $e(p_c)$, depend on the bonus/penalty $b$.
Consequently the cooperative equilibrium strongly depends on $b$: the predicted claims get smaller as $b$ gets larger. In other words, cooperation gets harder as the bonus/penalty increases. This behaviour has indeed been qualitatively observed both in one-shot and iterated games [38], [16], [17], and [39]. We are aware of only two experimental studies devoted to one-shot Traveler’s dilemmas. For these experiments, the predictions of the cooperative equilibrium are even quantitatively close. Indeed, (1) for $b = 5$ one finds that the unique cooperative equilibrium is a suitable convex combination of the strategies 296 and 297; this meets the experimental data reported in [16], where about 80 per cent of the subjects played a strategy between 290 and 300, with an average of 295. (2) For $b = 180$, the cooperative equilibrium coincides with the Nash equilibrium; this matches the experimental data reported in [16], where about 80 per cent of the players played the Nash equilibrium. (3) For $b = 2$ and claims between 2 and 100, it has been reported in [17] that 38 out of 45 game theorists chose a strategy between 90 and 100 and 28 of them chose a strategy between 97 and 100; in this case the cooperative equilibrium is close to the pure strategy 99.
Public Goods Game
The unique Nash equilibrium is $c_i = 0$, for all $i$, in correspondence of which each player gets $y$. Consequently $e_i(p_s) = y$. On the other hand, one has $e_i(p_c) = y\bigl(\alpha(N+1) - 1\bigr)$.
Therefore, $e_i(p_c) \ge e_i(p_s)$ if and only if $\alpha \ge \frac{2}{N+1}$. In other words, when $\alpha$ is small - recall that $\alpha$ is assumed to belong to the interval $(1/N, 1)$ - the cooperative equilibrium reduces to the Nash equilibrium, and the larger $\alpha$ is, the larger is the rate of cooperation predicted by the cooperative equilibrium. The fact that human behavior depends on $\alpha$ in this way has indeed been observed several times (see, e.g., [14], [40]). As a quantitative comparison, we consider the experimental data reported in [41]. We normalize $y$ to be equal to 1. In this case the cooperative equilibrium is supported between 0.66 and 0.67. In [41] it has been reported that the average contribution was 0.50, but the mode was 0.60 (6 out of 32 times) followed by 0.80 (5 out of 32 times).
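The following short Python sketch reproduces this kind of computation under the same reconstruction; the parameters N = 2 and α = 0.8 in the example are purely illustrative (they are not taken from [41]) and happen to yield a prediction in the 0.66-0.67 range mentioned above.

```python
def pgg_prediction(N, alpha, y=1.0):
    """Predicted symmetric contribution level in a Public Goods game, under the
    reconstructed cooperative-equilibrium computation (illustrative sketch)."""
    e_selfish = y                                   # everybody contributes 0
    incentive = y * (1 - alpha)                     # gain from free-riding on full contributors
    risk = y * (alpha * N - 1)                      # loss if everybody free-rides instead
    p_dev = incentive / (incentive + risk)          # probability each other player abandons p_c
    e_coop = alpha * y * (N - (N - 1) * p_dev)      # expected payoff of the cooperative structure
    if e_coop <= e_selfish:                         # below the threshold: back to the Nash equilibrium
        return 0.0
    # Smallest symmetric contribution c with payoff y + c*(alpha*N - 1) >= e_coop.
    return (e_coop - y) / (alpha * N - 1)

print(round(pgg_prediction(N=2, alpha=0.8), 3))     # 0.667 (illustrative parameters)
print(round(pgg_prediction(N=4, alpha=0.35), 3))    # 0.0   (alpha below 2/(N+1) = 0.4)
```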
Tragedy of the Commons
One easily sees that the Tragedy of the Commons and the Public Goods game represent the same strategic situation: not keeping a sheep plays the role of contributing to the public pool, at an effective cost of having a sheep determined by $g$ and $d$. In particular, one finds that the cooperative equilibrium departs from the Nash equilibrium if and only if an analogous threshold condition on $g$, $d$, and $N$ holds.
Comparison between the Prisoner’s Dilemma and Chicken
We recall that the Chicken game is basically the same as the Prisoner’s dilemma except for the fact that payoffs are subject to the condition $T > R > S > P$. The Chicken game has two pure Nash equilibria, (Cooperate, Defect) and (Defect, Cooperate), and a symmetric evolutionarily stable mixed Nash equilibrium which depends on the payoffs. Observe that $e_i(p_s) = S$, since it is the infimum of the payoffs of player $i$ when each player plays according to a Nash equilibrium of the original game; such infimum is attained in correspondence of the pure equilibrium in which player $i$ cooperates and the other player defects.
It has been observed in [42] that the rate of cooperation in the iterated Prisoner’s dilemma is significantly lower than the rate of cooperation in the iterated Chicken game with similar payoffs, that is, with payoffs such that the average payoff across outcomes is the same in both games.
We now show that this behavior is predicted by the cooperative equilibrium in one-shot games, giving a qualitative explanation of why we observe more cooperation in the iterated Chicken game than in the iterated Prisoner’s dilemma. The expression qualitative explanation reflects the fact that, of course, a direct comparison between iterated and one-shot games cannot be made, since the former have a much richer set of strategies. Nevertheless, we find it quite remarkable that this difference in behavior observed in iterated treatments is predicted for one-shot treatments: we believe that this connection is not coincidental and deserves to be investigated further.
Using the payoffs adopted in [42] for the Prisoner’s dilemma and for the Chicken game, one finds that the cooperative equilibrium of this variant of the Prisoner’s dilemma involves a lower rate of cooperation, while the cooperative equilibrium of this variant of the Chicken game coincides with the evolutionarily stable mixed strategy. So the rate of cooperation predicted by the cooperative equilibrium is significantly higher in the Chicken game.
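Since the cooperative equilibrium of the Chicken variant coincides with the evolutionarily stable mixed strategy, the following small Python sketch computes that mixed equilibrium for a generic Chicken game; the payoff values in the example are purely illustrative (they satisfy T > R > S > P) and are not the ones used in [42].

```python
def chicken_mixed_equilibrium(T, R, S, P):
    """Cooperation probability in the symmetric mixed Nash equilibrium (the
    evolutionarily stable strategy) of a Chicken game with T > R > S > P.
    Derived from the indifference condition q*R + (1-q)*S = q*T + (1-q)*P."""
    return (S - P) / ((S - P) + (T - R))

# Illustrative payoffs only (not those of the cited experiment).
print(round(chicken_mixed_equilibrium(T=5, R=3, S=1, P=0), 3))  # 0.333
```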
Conclusions
Many experiments over the years have shown that humans may act cooperatively even in one-shot social dilemmas without any form of external control, and that the rate of cooperation depends on the payoffs. This suggests that humans have a natural attitude toward cooperation and therefore do not act a priori as single players, as typically assumed in economics; rather, they forecast how the game would be played if they formed coalitions and then play according to their most optimistic forecast.
We have formalized this idea by assuming that each player evaluates the probability that another player abandons the collective interest to follow her private interest. This probability is defined by comparing the incentive and the risk of deviating from the collective interest, and it gives rise to common beliefs that, mathematically, correspond to a suitable restriction of the original game. On the one hand, this procedure seems qualitatively reasonable and we believe it provides a realistic picture of the mental processes that real subjects perform during the game. On the other hand, its formalization, that is, the definitions of the risk, the incentive, the probabilities, and the induced game, is mathematically simple and seemingly natural but certainly deserves to be investigated further and possibly improved in future research.
However, the current model makes us optimistic about this direction of research, being the first predictive model able to: (1) make satisfactorily accurate predictions of population average behavior in social dilemmas; (2) explain a number of experimental findings, such as the fact that the rate of cooperation in the Prisoner’s dilemma increases when the cost-benefit ratio decreases, the rate of cooperation in the Traveler’s dilemma increases when the bonus/penalty decreases, the rate of cooperation in the Public Goods game increases when the per-capita marginal return increases, and the rate of cooperation in the Chicken game is larger than the rate of cooperation in the Prisoner’s dilemma with similar payoffs.
The dream is to incorporate other components (such as family history, age, culture, incentives, iterations, etc.) into the model in order to make individual-level predictions.
Author Contributions
Wrote the paper: VC.
References
- 1. Kerr N (1983) Motivation losses in small groups: A social dilemma analysis. Journal of Personality and Social Psychology 45: 819–828.
- 2. Kollock P (1998) Social dilemmas: The anatomy of cooperation. Annual Review of Sociology 24: 183–214.
- 3. Basu K (1994) The traveler’s dilemma: Paradoxes of rationality in game theory. The American Economic Review 84: 391–395.
- 4. Manapat M, Rand D, Pawlowitsch C, Nowak M (2012) Stochastic evolutionary dynamics resolve the traveler’s dilemma. J Theo Biol 303: 119–127.
- 5. Liebrand W (1984) The effect of social motives, communication and group size on behavior in an n-person multi-stage mixed-motive game. Eur J Soc Psychol.
- 6. Liebrand W, Wilke H, Vogel R, Wolters F (1986) Value orientation and conformity in three types of social dilemma games. J Conflict Resolut 30: 77–97.
- 7. Kramer K, McClintock C, Messick D (1986) Social values and cooperative response to a simulated resource conservation crisis. J Pers 54: 576–591.
- 8. Kuhlman D, Camac C, Cunha D (1986) Individual differences in social orientation. In: Wilke HAM, Messick DM, Rutte C, editors. Experimental Social Dilemmas. Frankfurt: Verlag Peter Lang. 151–174.
- 9. McClintock C, Liebrand W (1988) Role of interdependence structure, individual value orientation, and another’s strategy in social decision making: a transformational analysis. J Pers Soc Psychol 55: 396–409.
- 10. Olson M (1965) The Logic of Collective Action: Public Goods and the Theory of Groups. Cambridge, MA: Harvard Univ. Press.
- 11. Hardin G (1968) The tragedy of the commons. Science 162: 1243–1248.
- 12. Dawes R (1980) Social dilemmas. Annu Rev Psychol 31: 169–193.
- 13. Axelrod R (1984) The Evolution of Cooperation. New York: Basic Books.
- 14. Isaac M, Walker J (1988) Group size effects in public goods provision: The voluntary contribution mechanism. Quarterly Journal of Economics 103: 179–200.
- 15. Cooper R, DeJong D, Forsythe R, Ross T (1996) Cooperation without reputation: Experimental evidence from prisoner’s dilemma games. Games and Economic Behavior 12: 187–218.
- 16. Goeree J, Holt C (2001) Ten little treasures of game theory and ten intuitive contradictions. American Economic Review 91: 1402–1422.
- 17. Becker T, Carter M, Naeve J (2006) Experts playing the traveler’s dilemma. Working Paper 252, Institute for Economics, Hohenheim University.
- 18. Dreber A, Rand D, Fudenberg D, Nowak M (2008) Winners don’t punish. Nature 452: 348–351.
- 19. Horton J, Rand D, Zeckhauser R (2011) The online laboratory: conducting experiments in a real labor market. Experimental Economics 14: 399–425.
- 20. Dreber A, Ellingsen T, Johannesson M, Rand D (2013) Do people care about social context? Framing effects in dictator games. Experimental Economics. In press.
- 21. Kelley H, Grzelak J (1972) Conflict between individual and common interest in an n-person relationship. J Pers Soc Psychol 21: 190–197.
- 22. Bonacich P, Shure G, Kahan J, Meeker R (1976) Cooperation and group size in the n-person prisoner’s dilemma. J Conflict Resolution 20: 687–706.
- 23. Komorita S, Sweeney J, Kravitz D (1980) Cooperative choice in the n-person dilemma situation. J Pers Soc Psychol 38: 504–516.
- 24. Isaac R, Walker J, Thomas S (1984) Divergent evidence on free riding: an experimental examination of possible explanations. Public Choice 43: 113–149.
- 25. Nowak M, Sigmund K (1998) Evolution of indirect reciprocity by image scoring. Nature 393: 573–577.
- 26. Nowak M (2006) Five rules for the evolution of cooperation. Science 314: 1560–1563.
- 27. Marwell G, Ames R (1981) Economists free ride, does anyone else? Journal of Public Economics 15: 295–310.
- 28. Rand D, Greene J, Nowak M (2012) Spontaneous giving and calculated greed. Nature 489: 427–430.
- 29. Camerer C, Ho T, Chong J (2004) A cognitive hierarchy model of games. Quarterly J of Economics 119: 861–898.
- 30. Stahl D, Wilson P (1994) Experimental evidence on players’ models of other players. J Economic Behavior and Organization 25: 309–327.
- 31. Costa-Gomes M, Crawford V, Broseta B (2001) Cognition and behavior in normal form games: An experimental study. Econometrica 69: 1193–1235.
- 32. McKelvey R, Palfrey T (1995) Quantal response equilibria for normal form games. Games and Economic Behavior 10: 6–38.
- 33. Fehr E, Schmidt K (1999) A theory of fairness, competition and cooperation. Quarterly Journal of Economics 114: 817–868.
- 34. Bolton G, Ockenfels A (2000) ERC: A theory of equity, reciprocity and competition. The American Economic Review 90: 166–193.
- 35. Goeree J, Holt C (2004) A model of noisy introspection. Games and Economic Behavior 46: 365–382.
- 36. Capraro V, Venanzi M, Polukarov M, Jennings N (2013) Cooperative equilibria in iterated social dilemmas. In: Proceedings of the 6th International Symposium on Algorithmic Game Theory. In press.
- 37. Capraro V (2013) A solution concept for games with altruism and cooperation. Available: http://arxiv.org/pdf/1302.3988.pdf.
- 38. Capra M, Goeree J, Gomez R, Holt C (1999) Anomalous behavior in a traveler’s dilemma? The American Economic Review 89: 678–690.
- 39. Basu K, Becchetti L, Stanca L (2011) Experiments with the traveler’s dilemma: welfare, strategic choice and implicit collusion. Social Choice and Welfare 37: 575–595.
- 40. Gunnthorsdottir A, Houser D, McCabe K (2007) Disposition, history and contributions in public goods experiments. Journal of Economic Behavior and Organization 62: 304–315.
- 41. Goeree J, Holt C, Laury S (2002) Private costs and public benefits: Unraveling the effects of altruism and noisy behavior. Journal of Public Economics 83: 255–276.
- 42. Kümmerli R, Colliard C, Fiechter N, Petitpierre B, Russier F, et al. (2007) Human cooperation in social dilemmas: comparing the snowdrift game with the prisoner’s dilemma. Proc R Soc B 274: 2965–2970.