On the evolutionary origins of equity

Equity, defined as reward according to contribution, is considered a central aspect of human fairness in both philosophical debates and scientific research. Despite a large body of research on the evolutionary origins of fairness, the evolutionary rationale behind equity remains unknown. Here, we investigate how equity can be understood in the context of the cooperative environment in which humans evolved. We model a population of individuals who cooperate to produce and divide a resource, and who choose their cooperative partners based on how those partners are willing to divide the resource. Agent-based simulations, an analytical model, and extended simulations using neural networks provide converging evidence that equity is the best evolutionary strategy in such an environment: individuals maximize their fitness by dividing benefits in proportion to their own and their partners' relative contributions. The need to be chosen as a cooperative partner thus creates a selection pressure strong enough to explain the evolution of preferences for equity. We discuss the limitations of our model, the discrepancies between its predictions and empirical data, and how interindividual and intercultural variability fit within this framework.

Having a higher productivity is only one way to contribute more to a cooperative interaction.

Another natural way is to spend more time amassing resources. To test the robustness of our partner choice mechanism, we thus created a third set of simulations in which there are no longer differences of productivity between individuals, but one of the two individuals in a cooperating dyad has to invest m times more time than her partner. We thus model the possibility that one cooperative role is more time-consuming than the other.

Analytical model. We developed an analytical model of the situation where individuals differ in their productivity (but not in their effort), and where only two productivities coexist in the population. When individuals refuse an interaction, however, they are forced to postpone their social interaction to a later encounter.

We assume that this entails an explicit cost expressed as a discounting factor δ (0 ≤ δ < 1). If we call G_i the average payoff of an individual of productivity i, then δG_i will be the average expected payoff in the next interaction after rejecting an offer. When δ equals 1, refusing an interaction carries no cost; when δ equals 0, refusing an interaction will result in zero payoff from the next interaction. In practice, we will neglect the case where δ equals 1, as it leads to artefactual results (see below).
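The acceptance rule implied by this discounting can be sketched as follows. This is a minimal illustration, not the paper's code: the function name is ours, and we assume only what is stated above, namely that refusing an offer leaves the decision maker with the discounted continuation value δG_i.

```python
def should_accept(offer, G_i, delta):
    """Accept an offer iff it is at least as good as waiting: refusing
    postpones the interaction, whose expected payoff is delta * G_i."""
    assert 0 <= delta < 1, "the model restricts delta to [0, 1)"
    return offer >= delta * G_i

# with delta = 0.8 and average payoff G_i = 1.0, offers below 0.8 are refused
print(should_accept(0.5, 1.0, 0.8))  # False
print(should_accept(0.9, 1.0, 0.8))  # True
```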

The assumption that only partners can decide on the division in our model is necessary so that the evolution of fairness is not explained trivially. When only one individual can decide, natural selection favors selfishness [1]. This is easy to understand.

Whatever her productivity i, a decision maker's expected payoff in the next interaction (if the current interaction is refused) will be δG_i. As a consequence, a decision maker should never refuse a reward that is above the corresponding δG_i, but should always refuse rewards that are below this level. At the equilibrium, because rewards from partners should evolve toward the minimum that decision makers will accept, individuals will always demand and accept exactly δG_i, no matter whom they are interacting with (regardless of their partner's productivity). We thus have:

3. Partners give their decision makers what they want at the evolutionary equilibrium.

Knowing (1) and (2), the average payoff of a LP individual in this population can then be written down.
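The claim that offers evolve toward exactly δG_i can be checked numerically: among all offers a decision maker would accept, the proposer's payoff is maximized by the smallest one. The sketch below uses illustrative numbers and one assumption not spelled out in this excerpt, namely that the proposer keeps whatever she does not offer out of a unit benefit.

```python
delta, G_i = 0.5, 1.0          # illustrative values
threshold = delta * G_i        # decision makers refuse anything below this

offers = [i / 100 for i in range(101)]                 # candidate offers in [0, 1]
# proposer keeps (1 - offer) when accepted, gets nothing from a refused round
payoffs = [(1 - o) if o >= threshold else 0.0 for o in offers]

best_offer = offers[payoffs.index(max(payoffs))]
print(best_offer)  # 0.5 -- exactly delta * G_i
```

Any offer above the threshold is accepted but wastes surplus; any offer below it is refused and earns nothing, which is why selection pins offers to the threshold itself.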

Similarly, the payoff of a HP individual in this population can be written in the same way. Solving this system gives us an expression for G_HP and G_LP as a function of x and δ at the evolutionary equilibrium (5).

In this case, the average payoffs of LP and HP individuals respectively can again be written as a system of two equations; solving this system gives us an expression for G_HP and G_LP as a function of x and δ at the evolutionary equilibrium (6).

The solutions of (5) and (6) also give the condition that must be satisfied for each situation to be possible at the evolutionary equilibrium; it is then straightforward to show that, given our parameter values (0 ≤ x ≤ 1, 0 ≤ δ < 1), this condition can never be satisfied.

[Case-by-case expressions for Situation A, Situation B, Situation C, and Situation A & C.]

As explained in the previous section, the verification that it is not possible for some (but not all) of these situations to hold at the evolutionary equilibrium proceeds in the same way.

It is not necessarily the case that neural networks will produce proportional offers for the whole range of inputs they are exposed to. Imagine an individual who offers proportional rewards only to the best producers in the population, while offering less-than-proportional rewards to other individuals. At the evolutionary equilibrium, our model predicts that these unfair rewards will be rejected. But as long as finding a new partner is not costly, being rejected does not lead to a loss of fitness. As a consequence, any individual can offer less-than-proportional rewards to a fraction of the population, as long as another fraction still accepts the rewards she makes.
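This argument can be illustrated with a toy simulation. All numbers are hypothetical (the function name, the fair share of 0.4, and the round count are ours); the one modelling assumption taken from the text is that re-matching after a rejection is free.

```python
import random

def payoff_per_completed_round(p_lowballed, fair_share=0.4, rounds=1000, seed=0):
    """A proposer low-balls a fraction p of partners; those offers are rejected
    and she redraws a partner at no cost, so only fair offers ever complete."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(rounds):
        while rng.random() < p_lowballed:   # rejection: costless re-matching
            pass
        total += fair_share                 # proportional offer accepted
    return total / rounds

# fitness per completed round is identical whether she low-balls nobody or half
# the population -- rejection carries no cost in this setting
print(payoff_per_completed_round(0.0), payoff_per_completed_round(0.5))
```

The payoff is independent of the low-balled fraction precisely because rejections never enter the accounting, which is the loophole the text describes.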