
Evolution reinforces cooperation with the emergence of self-recognition mechanisms: An empirical study of strategies in the Moran process for the iterated prisoner’s dilemma

  • Vincent Knight ,

    Contributed equally to this work with: Vincent Knight, Marc Harper

    Roles Conceptualization, Data curation, Formal analysis, Methodology, Software, Validation, Visualization, Writing – original draft, Writing – review & editing

    knightva@cardiff.ac.uk

    Affiliation Cardiff University, School of Mathematics, Cardiff, United Kingdom

  • Marc Harper ,

    Contributed equally to this work with: Vincent Knight, Marc Harper

    Roles Conceptualization, Methodology, Software, Writing – original draft, Writing – review & editing

    Affiliation Google Inc., Mountain View, CA, United States of America

  • Nikoleta E. Glynatsi ,

    Roles Visualization, Writing – original draft, Writing – review & editing

    ‡ These authors also contributed equally to this work.

    Affiliation Cardiff University, School of Mathematics, Cardiff, United Kingdom

  • Owen Campbell

    Roles Software, Writing – review & editing

    ‡ These authors also contributed equally to this work.

    Affiliation Independent Researcher, Chester, United Kingdom

Abstract

We present insights and empirical results from an extensive numerical study of the evolutionary dynamics of the iterated prisoner’s dilemma. Fixation probabilities for Moran processes are obtained for all pairs of 164 different strategies including classics such as TitForTat, zero determinant strategies, and many more sophisticated strategies. Players with long memories and sophisticated behaviours outperform many strategies that perform well in a two player setting. Moreover we introduce several strategies trained with evolutionary algorithms to excel at the Moran process. These strategies are excellent invaders and resistors of invasion and in some cases naturally evolve handshaking mechanisms to resist invasion. The best invaders were those trained to maximize total payoff while the best resistors invoke handshake mechanisms. This suggests that while maximizing individual payoff can lead to the evolution of cooperation through invasion, the relatively weak invasion resistance of payoff maximizing strategies are not as evolutionarily stable as strategies employing handshake mechanisms.

Introduction

The Prisoner’s Dilemma (PD) [1] is a fundamental two player game used to model a variety of strategic interactions. Each player chooses simultaneously and independently between cooperation (C) and defection (D). The payoffs of the game are defined by the matrix \begin{pmatrix} R & S \\ T & P \end{pmatrix}, where T > R > P > S and 2R > T + S. The PD is a one round game, but it is commonly studied in a repeated setting in which prior outcomes matter. This repeated form is called the Iterated Prisoner’s Dilemma (IPD). As described in [2–4], a number of strategies have been developed to take advantage of the history of play. Strategies referred to as zero determinant (ZD) strategies [4] can manipulate some players through extortionate mechanisms.
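To make the payoff structure concrete, the following sketch scores a pair of action histories using the conventional values (R, S, T, P) = (3, 0, 5, 1); these particular numbers are an assumption for illustration, since the text above only fixes the ordering constraints.

# Conventional PD payoff values satisfying T > R > P > S and 2R > T + S.
# (R, S, T, P) = (3, 0, 5, 1) is an assumed choice; the paper only requires the ordering.
PAYOFFS = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
           ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def score(history_one, history_two):
    """Total IPD scores of two players given their sequences of moves."""
    totals = [0, 0]
    for move_one, move_two in zip(history_one, history_two):
        payoff_one, payoff_two = PAYOFFS[(move_one, move_two)]
        totals[0] += payoff_one
        totals[1] += payoff_two
    return totals

# For example, TitForTat-like play against an alternating opponent over five rounds:
print(score("CCDCD", "CDCDC"))  # [13, 13]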

The Moran Process [5] is a model of evolutionary population dynamics that has been used to gain insights about evolutionary stability in a number of settings. Several earlier works have studied iterated games in the context of the prisoner’s dilemma [6, 7]; however, these often make simplifying assumptions or are limited to restricted classes of strategies, such as memory-one strategies that use only the previous round of play.

This manuscript provides a detailed numerical analysis of agent-based simulations of 164 complex and adaptive strategies for the IPD. This is made possible by the Axelrod library [8], an effort to provide software for reproducible research on the IPD. The library now contains over 186 parameterized strategies including classics like TitForTat and WinStayLoseShift, as well as recent variants such as OmegaTFT, zero determinant and other memory one strategies, strategies based on finite state machines, lookup tables, neural networks, and other machine learning based strategies, and a collection of novel strategies. Not all strategies have been considered for this study: excluded are those that make use of knowledge of the number of turns in a match and others that have a high computational run time. This large number of strategies is available thanks to the open source nature of the project, which has over 50 contributors from around the world, including programmers and researchers [3]. Three of the considered strategies are finite state machines trained specifically for Moran processes (described further in the Methods section).

In addition to providing a large collection of strategies, the Axelrod library can conduct matches, tournaments and population dynamics with variations including noise and spatial structure. The strategies and simulation frameworks are automatically tested to an extraordinarily high degree of coverage in accordance with best research software practices.
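As a brief illustration of how such simulations can be run, the snippet below uses the library’s Match and MoranProcess interfaces; this is a sketch based on the documented interface at the time of writing, and argument and attribute names may differ between versions.

import axelrod as axl

# A 200-turn match between two classic strategies.
players = (axl.TitForTat(), axl.Defector())
match = axl.Match(players, turns=200)
match.play()
print(match.final_score())  # total payoffs of the two players

# A small Moran process played to fixation, starting from two individuals of each type.
population = [axl.TitForTat(), axl.TitForTat(), axl.Defector(), axl.Defector()]
mp = axl.MoranProcess(population, turns=200)
mp.play()
print(mp.winning_strategy_name)  # name of the strategy that fixated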

Using the Axelrod library and the many strategies it contains, we obtain the probability with which a given strategy takes over a population (referred to as the fixation probability) for all pairs of strategies, identifying those that are effective invaders and those resistant to invasion, for population sizes N = 2 to N = 14. Moreover we present 16 strategies that were created via reinforcement learning algorithms (evolutionary and particle swarm algorithms) that are among the best invaders and resistors of invasion known to date, and show that handshaking mechanisms naturally arise from these processes as an invasion-resistance mechanism.

It has been argued that agent-based simulations can provide insights into evolutionary game theory that are not available via direct mathematical analysis [9]. The results and insights contained in this paper would be difficult to derive analytically.

In particular the following questions are addressed:

  1. What strategies are good invaders?
  2. What strategies are good at resisting invasion?
  3. How does the population size affect these findings?

While the results agree with some of the published literature, it is found that:

  1. Zero determinant strategies are not effective invaders or defenders for N > 2.
  2. Complex strategies can be effective, and in fact can naturally evolve through evolutionary processes to outperform designed strategies.
  3. The strongest resistors specifically evolve or possess a handshake mechanism.
  4. Strong invaders are generally cooperative strategies that do not defect first but retaliate to varying degrees of intensity against strategies that defect.
  5. Strategies evolved to maximize their total payoff can be strong invaders and achieve mutual cooperation with many other strategies.

The notion of a handshake has been described previously in [10] and corresponds to the idea that an individual exhibits behaviour that starts with a recognisable pattern. This in turn allows them to identify individuals of their own type (who would exhibit the same pattern). This is analogous to the biological notion of ‘kin recognition’ where individuals have the ability to recognise the phenotype of their own kin [11].

Materials and methods

The Moran process

A Moran process is a stochastic birth-death process on a finite population in which the population size stays constant over time. Individuals are selected for reproduction according to a given fitness landscape; in this work fitness is defined as the total utility obtained against all other individuals in the population. Once an individual is selected, a copy of it is added to the population and another individual, chosen uniformly at random, is removed. This is shown diagrammatically in Fig 1. In some settings mutation is also considered, but without mutation (the case considered in this work) this process arrives at an absorbing state in which the population is entirely made up of players of one strategy. The probability with which a given strategy, starting from a single individual, takes over the population is called the fixation probability. A more detailed analytic description is given later. In our simulations offspring do not inherit any knowledge or history from their parents.
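A single birth-death update of this process can be sketched as follows; this is a minimal illustration in plain Python, not the library’s implementation, and total_payoff is a hypothetical callable returning an individual’s total utility against the rest of the population.

import random

def moran_step(population, total_payoff):
    """One birth-death update: choose a parent with probability proportional to
    its total payoff against everyone else, copy it, and replace a uniformly
    chosen member of the population."""
    fitness = [total_payoff(i, population) for i in range(len(population))]
    parent = random.choices(range(len(population)), weights=fitness)[0]
    victim = random.randrange(len(population))
    population[victim] = population[parent]
    return population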

The Moran process was initially introduced in [5]. It has since been used in a variety of settings, including the study of the spread of cooperative and non-cooperative behaviour such as cancer [12] and the emergence of cooperative behaviour in spatial topologies [13]. However, these works mainly consider relatively simple strategies. A few works have examined the evolutionary stability of agent-based strategies within the Prisoner’s Dilemma [14], but in terms of infinite population stability rather than in the more widely used setting of the Moran process. In [15] Moran processes are studied in a theoretical framework for a small subset of strategies. The subset comprised memory one strategies: strategies that recall only the events of the previous round.

Of particular interest are the zero determinant strategies introduced in [4]. It was argued in [7] that generous ZD strategies are robust against invading strategies. However, in [16], a strategy using machine learning techniques was capable of resisting invasion and also able to invade any memory one strategy. More recently, [17] investigated the effect of memory length on strategy performance and the emergence of cooperation, but not in a Moran process context and only for specific cases of memory two strategies. In [18] it was recognised that many zero determinant strategies do not fare well against themselves. This is a disadvantage in the Moran process, where the best strategies cooperate well with other players using the same strategy.

This work uses pair-wise Moran processes in a similar way to matches in the many IPD tournaments published since Axelrod’s original work [2]. A population-based perspective is given which adds additional evolutionary components to the IPD, namely the evolutionary dynamics of invasion and resistance.

Strategies considered

To carry out this numerical experiment, 164 strategies, listed (with their properties) in the Appendix, are used from the Axelrod library. The appendix also includes citations to the original description of each strategy. There are 43 stochastic and 121 deterministic strategies. Their memory depth, defined as the number of rounds of history used by the strategy each round, is shown in Table 1. The memory depth is infinite if the strategy uses the entire history of play (whatever its length). For example, a strategy that utilizes a handshaking mechanism, where the opponent’s actions in the first few rounds of play determine the strategy’s subsequent behavior, would have infinite memory depth.
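To illustrate why a handshaking strategy has infinite memory depth, the following hypothetical player (an illustration only, and not one of the library’s strategies or its player interface) opens with a fixed sequence and conditions all later play on the opponent’s first three moves.

class HandshakeCCD:
    """Illustrative handshake player: open with C, C, D and cooperate thereafter
    only if the opponent produced the same opening; otherwise defect forever.
    Its behaviour depends on the first three rounds however long the match runs,
    so its memory depth is infinite."""

    handshake = ["C", "C", "D"]

    def next_move(self, my_history, opponent_history):
        turn = len(my_history)
        if turn < len(self.handshake):
            return self.handshake[turn]
        if list(opponent_history[: len(self.handshake)]) == self.handshake:
            return "C"
        return "D"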

Using families of strategies that depend on given parameters, it is possible to find specific parameter values through training processes based on reinforcement learning. A detailed description of the various types considered is given in [19].

A number of these strategies have been trained this way (see [19]) prior to this study and not specifically for the Moran process. For example:

  • Evolved ANN: a neural network based strategy;
  • Evolved LookerUp: a lookup table based strategy;
  • PSO Gambler: a stochastic version of the lookup table based strategy;
  • Evolved HMM: a hidden Markov model based strategy.

Apart from the PSO Gambler strategy, which was trained using a particle swarm optimisation algorithm, these strategies are trained with an evolutionary algorithm that perturbs strategy parameters and optimizes the mean total score against all other opponents [20]. They were trained to win IPD tournaments by maximizing their mean total payoffs against a variety of opponents. Variation is introduced via mutation and crossover of parameters, and the best performing strategies are carried to the next generation along with new variants. Similar methods appear in the literature [21]. There has also been some work on strategies using an evolutionary algorithm in real time: in [22] an evolutionary algorithm is used to build a model of the opponent and attempt to exploit any potential weakness. In this work all strategies resulting from evolutionary algorithms are pre-trained.
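The following sketch outlines the shape of such an evolutionary search; the helper callables (random_candidate, mutate, crossover, objective) are hypothetical stand-ins for the strategy-specific operations used in the actual training code [20, 23].

import random

def evolve(random_candidate, mutate, crossover, objective,
           population_size=20, generations=100, elite=5):
    """Generate candidates, score them with the objective (e.g. mean payoff
    against a fixed set of opponents), keep the best, and refill the population
    with mutated crossovers of the survivors."""
    population = [random_candidate() for _ in range(population_size)]
    for _ in range(generations):
        ranked = sorted(population, key=objective, reverse=True)
        parents = ranked[:elite]
        children = []
        while len(parents) + len(children) < population_size:
            a, b = random.sample(parents, 2)
            children.append(mutate(crossover(a, b)))
        population = parents + children
    return max(population, key=objective)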

More information about each player can be obtained in the documentation for [8] and a detailed description of the performance of these strategies in IPD tournaments is described in [19].

All of the training code is archived at [23]. This software is (similarly to the Axelrod library) available on GitHub (https://github.com/Axelrod-Python/axelrod-dojo) with documentation to train new strategies easily. Training typically takes less than 100 generations and can be completed within several hours on commodity hardware.

One particular family of strategies that has been studied in the literature is that of finite state machines. These mathematical models consist of a set of states together with transition rules that, given the current state and an observed action, specify the next state and a response. In the context of the IPD, a finite state machine maps the current state and the opponent’s action (cooperation or defection) to a next state and an action to play (cooperation or defection). For further details, the reader is referred to [19, 21, 24, 25].
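A minimal sketch of such a player is given below; the two-state transition table is a toy example for illustration (it happens to behave like TitForTat) and is not one of the trained machines.

# Transition table: (current state, opponent's last action) -> (next state, action to play).
TRANSITIONS = {
    (0, "C"): (0, "C"),  # remain cooperative while the opponent cooperates
    (0, "D"): (1, "D"),  # a defection moves the machine to a punishing state
    (1, "C"): (0, "C"),  # forgive once the opponent cooperates again
    (1, "D"): (1, "D"),
}

def fsm_next_action(opponent_history, initial_state=0, initial_action="C"):
    """Replay the opponent's history through the machine and return the next action."""
    state, action = initial_state, initial_action
    for opponent_action in opponent_history:
        state, action = TRANSITIONS[(state, opponent_action)]
    return action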

There are three further strategies trained specifically for this study: Trained FSM 1, 2, and 3 (TF1–TF3). These are finite state machines of 16, 16, and 8 states respectively. They are shown in Figs 2, 3 and 4, using the notation common in the literature where A1/A2 denotes the opponent’s action A1 and the player’s response A2, with arrows corresponding to changes of state.

Fig 2. TF1: A 16 state finite state machine with a handshake leading to mutual cooperation at state 4.

https://doi.org/10.1371/journal.pone.0204981.g002

Fig 3. TF2: A 16 state finite state machine with a handshake leading to mutual cooperation at state 16.

https://doi.org/10.1371/journal.pone.0204981.g003

As opposed to the previously described strategies [19], these strategies were trained with an objective function of the mean fixation probability over Moran processes, starting from initial populations consisting of N/2 individuals of the training candidate and N/2 individuals of an opponent strategy, drawn from a selection of 150 opponents from the Axelrod library:

  • TF1 N = 12, 0% noise.
  • TF2 N = 10, 0% noise.
  • TF3 N = 8, 1% noise.

The training algorithms were run for fewer than 50 generations. Training data for this is available at [26].

TF1 has an initial handshake of CCD and cooperates if the opponent matches it. However, if the opponent later defects, TF1 will respond in kind, so the handshake is not permanent. Only one player (Prober 4 [27]) manages to achieve cooperation with TF1, after about 20 rounds of play. TF1 is functionally very similar to a strategy known as “Collective Strategy”, which has a handshake of CD, cooperates with opponents that match the handshake, and defects thereafter if the opponent ever defects [28]. Collective Strategy was specifically designed for evolutionary processes.

TF2 always starts with CD and will defect against opponents that start with DD. It plays CDD against itself and then cooperates thereafter; Fortress3 and Fortress4 also use a similar handshake and cooperate with TF2. Cooperation can be rescued after a failed handshake by a complex sequence of plays which sometimes results in mutual cooperation with Firm but Fair, Grofman, and GTFT, and a few others with low probability. TF2 defects against all other players in the study, barring unusual cases arising from particular randomizations. Fig 3 shows all 16 states of the strategy (states 6 and 7 are not reachable).

TF3 cooperates and defects with various cycles depending on the opponent’s actions. TF3 will mutually cooperate with any strategy and only tolerates a few defections before defecting for the rest of the match. It is similar to but not exactly the same as Fool Me Once, a strategy that cooperates until the opponent has defected twice (not necessarily consecutively), and defects indefinitely thereafter. Though a product of training with a Moran objective, it differs from TF1 and TF2 in that it lacks a handshake mechanism. Fig 4 shows all 8 states of the strategy produced by the training process (states 3 and 8 are not reachable).

For both TF1 and TF2 a handshake mechanism naturally emerges from the structure of the underlying finite state machine. This behavior is an outcome of the evolutionary process and is in no way hard-coded or included via an additional mechanism.

Data collection

Each strategy pair is run for 1000 repetitions of the Moran process to fixation with starting population distributions of (1, N − 1), (N/2, N/2) and (N − 1, 1), for N from 2 through 14. The fixation probability is then empirically computed for each combination of starting distribution and value of N. The Axelrod library can carry out exact simulations of the Moran process. Since some of the strategies have a high computational cost or are stochastic, samples are taken from a large number of 200 turn match outcomes for the pairs of players for use in computing fitnesses in the Moran process (i.e. a stochastic cache of matches is used). This approach was verified to agree with unsampled calculations to a high degree of accuracy in specific cases. This is described in Algorithms 1 and 2.

Algorithm 1 Data Collection

1: for player_one in players_list do

2:  for player_two in (players_list — player_one) do

3:   pair ← (player_one, player_two)

4:   for starting_distribution in [(1, N − 1), (N/2, N/2), (N − 1, 1)] do

5:    while repetitions ≤ 1000 do

6:     simulate moran process (pair, starting distribution)

7:    end while

8:    yield fixation probabilities

9:   end for

10:  end for

11: end for

Algorithm 2 Moran process

1: initial population ← (pair, starting distribution)

2: population ← initial population

3: while population not uniform do

4:  for player in population do

5:   for opponent in (population—player) do

6:    match ← (player, opponent)

7:    results ← stochastic_cache (200 round match)

8:   end for

9:  end for

10:  population ← sorted(results)

11:  parent ← selected randomly in proportion to its total match payoffs

12:  offspring ← parent

13:  kill off ← uniformly random player from population

14:  population ← offspring replaces kill off

15: end while
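Putting Algorithms 1 and 2 together, the estimation of a single fixation probability can be sketched as follows; cached_score is a hypothetical stand-in for the stochastic cache, returning a sampled total payoff for one 200-turn match between two named strategies.

import random

def estimate_fixation(pair, starting_counts, cached_score, repetitions=1000):
    """Empirical fixation probability of pair[0] from the given starting counts."""
    fixations = 0
    for _ in range(repetitions):
        population = [pair[0]] * starting_counts[0] + [pair[1]] * starting_counts[1]
        while len(set(population)) > 1:
            # total payoff of each individual against every other individual
            fitness = [sum(cached_score(p, q) for j, q in enumerate(population) if i != j)
                       for i, p in enumerate(population)]
            parent = random.choices(range(len(population)), weights=fitness)[0]
            victim = random.randrange(len(population))
            population[victim] = population[parent]
        fixations += population[0] == pair[0]
    return fixations / repetitions

# For example, x_1 for a population of size N = 6:
# estimate_fixation(("TF1", "TitForTat"), (1, 5), cached_score)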

The next section will further validate the methodology by comparing simulated results to analytical results in a few selected cases. The main results of this manuscript will present a detailed analysis of all the data generated. Finally, a discussion and conclusion will offer future avenues for the work presented here.

Results

Validation

As described in [6], consider the payoff matrix:

\begin{pmatrix} a & b \\ c & d \end{pmatrix}    (1)

The expected payoffs of the two types, when there are i players of the first type and N − i players of the second type, are given by:

f_i = \frac{a(i - 1) + b(N - i)}{N - 1}    (2)

g_i = \frac{c i + d(N - i - 1)}{N - 1}    (3)

The transitions within the birth death process that underpins the Moran process are then given by:

p_{i, i+1} = \frac{i f_i}{i f_i + (N - i) g_i} \cdot \frac{N - i}{N}    (4)

p_{i, i-1} = \frac{(N - i) g_i}{i f_i + (N - i) g_i} \cdot \frac{i}{N}    (5)

p_{i, i} = 1 - p_{i, i+1} - p_{i, i-1}    (6)

Using this, the fixation probability of the first strategy in a population of i individuals of the first type and N − i individuals of the second is given by [13]:

x_i = \frac{1 + \sum_{j=1}^{i-1} \prod_{k=1}^{j} \gamma_k}{1 + \sum_{j=1}^{N-1} \prod_{k=1}^{j} \gamma_k}    (7)

where:

\gamma_k = \frac{p_{k, k-1}}{p_{k, k+1}} = \frac{g_k}{f_k}

A neutral strategy will have fixation probability x_i = i/N.
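Equation (7) can be evaluated directly; the short function below computes x_i from the entries of the payoff matrix in (1) and reduces to the neutral value i/N when a = b = c = d (a sketch mirroring the formulas above).

def fixation_probability(a, b, c, d, N, i=1):
    """Analytic fixation probability x_i of the first type under the Moran process."""
    def f(k):  # expected payoff of a first-type individual when k of them are present
        return (a * (k - 1) + b * (N - k)) / (N - 1)
    def g(k):  # expected payoff of a second-type individual
        return (c * k + d * (N - k - 1)) / (N - 1)
    # gamma_k = p_{k,k-1} / p_{k,k+1} = g_k / f_k
    gammas = [g(k) / f(k) for k in range(1, N)]
    partial_products, running = [], 1.0
    for gamma in gammas:
        running *= gamma
        partial_products.append(running)
    return (1 + sum(partial_products[: i - 1])) / (1 + sum(partial_products))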

Comparisons of x1, xN/2 and xN−1 are shown in Fig 5 for Alternator and Cooperator (a 5% confidence interval computed using an asymptotic normal approximation is also included [29]). The points represent the simulated values and the lines show the theoretical values. Note that these are deterministic strategies and that there is a good match between the expected value from (7) and the actual Moran process for all strategy pairs. The means have been compared using a t-test; the p values, shown in Table 2, confirm that the theoretic and simulated values are a good match.
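The comparison reported in Table 2 can be reproduced in outline with a one-sample t-test on the recorded absorption outcomes; the values below are placeholders, not the study’s data.

from scipy import stats

outcomes = [1, 0, 1, 1, 0]      # placeholder: 1 if the first strategy fixated in a run, else 0
theoretical_xi = 0.5            # placeholder: the analytic value from Eq (7)
statistic, p_value = stats.ttest_1samp(outcomes, popmean=theoretical_xi)
print(p_value)                  # a large p value indicates no detectable discrepancy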

Fig 5. Comparison of theoretic and actual Moran process fixation probabilities for deterministic strategies: Alternator and Cooperator.

5% confidence intervals calculated using an asymptotic normal approximation. The top most line on all figures (in red and using a circle) corresponds to xN−1, the middle line (in green and using a cross) corresponds to xN/2 and the bottom line (in blue and using an x) corresponds to x1.

https://doi.org/10.1371/journal.pone.0204981.g005

Table 2. p values resulting from a t test comparing the theoretic value with the simulated value of the Moran process fixation probabilities for deterministic strategies: Alternator and Cooperator.

https://doi.org/10.1371/journal.pone.0204981.t002

Fig 6 shows the fixation probabilities for stochastic strategies: Calculator and Arrogant Q Learner. These are no longer a good match (confirmed with a t-test in Table 3). This demonstrates that the assumption that a given interaction between two IPD strategies can be summarised by a set of utilities, as in (1), is not correct. For any given pair of strategies it is possible to obtain p_{i, i-1}, p_{i, i+1} and p_{i, i} exactly (as opposed to the approximations offered by (4), (5) and (6)). Obtaining these requires particular analysis for a given pair and can be quite a complex endeavour for stochastic strategies with long memory; this is not necessary for the purposes of this work. All data generated for this validation exercise can be found at [26].

Fig 6. Comparison of theoretic and actual Moran process fixation probabilities for stochastic strategies: Calculator and Arrogant Q Learner.

5% confidence intervals calculated using an asymptotic normal approximation. The top most line on all figures (in red and using a circle) corresponds to xN−1, the middle line (in green and using a cross) corresponds to xN/2 and the bottom line (in blue and using an x) corresponds to x1.

https://doi.org/10.1371/journal.pone.0204981.g006

Table 3. p values resulting from a t-test comparing the theoretic value with the simulated value of the Moran process fixation probabilities for stochastic strategies: Calculator and Arrogant Q Learner.

https://doi.org/10.1371/journal.pone.0204981.t003

Empirical results

This section outlines the data analysis carried out; all data for this study is available at [26].

  • First the specific case of N = 2 is considered.
  • The effect of population size on the ability of a strategy to invade another population is investigated. This will highlight how complex strategies with long memories outperform simpler strategies.
  • Then a similar investigation of the ability to defend against an invasion is given.
  • Finally, the relationship between performance for differing population sizes is analysed, with a closer look at the zero determinant strategies [4].

The special case of N = 2.

When N = 2 the fixation probabilities of the Moran process are effectively measures of the distribution of relative mean payoffs over all possible matches between two players. The strategy that scores higher than the other more often will fixate more often. For N = 2 the two cases of x1 and xN−1 coincide, but will be considered separately for larger N in the following sections. The top 16 (10%) strategies are shown in Table 4 and figures showing the performance of all strategies are available in the appendix. The top five ranking strategies are:

  1. The top strategy is the Collective Strategy (CS) which has a simple handshake mechanism described above.
  2. Defector: it always defects. Since it has no interactions with other defectors (recall that N = 2), its aggressiveness is rewarded.
  3. Aggravater, which plays like Grudger (responding to any defection with unconditional defection throughout) but starts by playing three defections.
  4. Predator, a finite state machine described in [21].
  5. Handshake, a slightly less aggressive version of the Collective Strategy [10]. As long as the opponent plays the initial handshake sequence, it cooperates. Thus it will do well in a population consisting of many members of itself, just as the Collective Strategy does. The difference is that CS will defect after the handshake if the opponent defects, while Handshake will not.
Table 4. Top strategies for N = 2 (neutral fixation is p = 0.5).

https://doi.org/10.1371/journal.pone.0204981.t004

It is also noted that TF1, TF2 and TF3 all perform well for this case of N = 2. This is also the only value of N for which a zero determinant strategy appears in the top 10% of ranking strategies: ZD-Extort-4. The performance of zero determinant strategies will be examined more closely.

As will be demonstrated, the results for N = 2 differ from those for larger N. Hence these results do not concur with the literature suggesting that zero determinant strategies should be effective for larger population sizes [7]; those analyses consider stationary behaviour, while this work uses matches of a fixed number of rounds. The stationarity assumption allows for a deterministic payoff matrix, leading to conclusions about zero determinant strategies in the space of memory-one strategies that do not generalize to this context.

Strong invaders.

In this section the focus is on the ability of a mutant strategy to invade: the probability of one individual of a given type successfully fixating in a population of N−1 other individuals, denoted by x1. The ranks of each strategy for all considered values of N according to mean x1 are shown in Fig 7.

Fig 7. Invasion: Ranks of all strategies according to x1 for different population sizes.

https://doi.org/10.1371/journal.pone.0204981.g007

The top 16 strategies are given in Tables 5, 6 and 7. A variety of figures showing the performance of all strategies is available in the supporting information.

It can be seen that, apart from CS, none of the top strategies for N = 2 shown in Table 4 perform well for N ∈ {3, 7, 14}. The new top performing strategies are:

  • Grudger (which only performs well for N = 3), starts by cooperating but will defect if at any point the opponent has defected.
  • MEM2, an infinite memory strategy that switches between TFT, TF2T, and Defector [14].
  • TF3, the finite state machine trained specifically for Moran processes described.
  • Prober 4, a strategy which starts with a specific 20 move sequence of cooperations and defections [27]. This initial sequence serves as an approximate handshake.
  • PSO Gambler and Evolved LookerUp 2 2 2: strategies that make use of a lookup table mapping the first 2 moves of the opponent as well as the last 2 moves of both players to an action. PSO Gambler is a stochastic version of LookerUp, mapping those states to probabilities of cooperating. LookerUp was described in [3].
  • The Evolved ANN strategies are neural networks that map a number of attributes (first move, number of cooperations, last move, etc.) to an action. Both of these have been trained using an evolutionary algorithm.
  • Evolved FSM 16 is a 16 state finite state machine trained to perform well in tournaments.

Only one of the above strategies is stochastic although close inspection of the source code of PSO Gambler shows that it makes stochastic decisions rarely, and is functionally very similar to its deterministic cousin Evolved Looker Up. PSO Gambler Mem1 is a stochastic memory one strategy that has been trained to maximise its utility and does perform well. Apart from TF3, the finite state machines trained specifically for Moran processes do not appear in the top 5, while strategies trained for tournaments do. This is due to the nature of invasion: most of the opponents will initially be different strategies. The next section will consider the converse situation.

Strong resistors.

In addition to identifying good invaders, strategies resistant to invasion by other strategies are identified by examining the distribution of xN−1 for each strategy. The ranks of each strategy for all considered values of N according to mean xN−1 are shown in Fig 8.

Fig 8. Resistance: Ranks of all strategies according to xN−1 for different population sizes.

https://doi.org/10.1371/journal.pone.0204981.g008

Tables 8, 9 and 10 show the top strategies when ranked according to xN−1 for N ∈ {3, 7, 14} and figures showing results for all strategies are available in the supplementary materials. Once again none of the short memory strategies previously discussed perform well for high N.

Interestingly none of these strategies are stochastic: this is explained by the value of not provoking typically cooperative opponent strategies with speculative defections. This includes opponents using the same strategy. Acting stochastically increases the chance of reducing the score of individuals of the same type in a Moran process. However it is possible to design a strategy with a stochastic or error-correcting handshake that is an excellent resistor even in noisy environments [16].

There are only two new strategies that appear in the top ranks for xN−1: TF1 and TF2. These two strategies, together with CS, are the strongest resistors. They all have handshakes, and whilst the handshakes of CS and Handshake (which ranks highly for the smaller values of N) were programmed, the handshakes of TF1 and TF2 evolved without any priming.

As described previously, the strategies trained with the payoff maximizing objective are among the best invaders in the library; however, they are not as resistant to invasion as the strategies trained using a Moran objective function. The former include trained finite state machine strategies, but these do not appear to have handshaking mechanisms. Therefore it is reasonable to conclude that the choice of objective function drives the emergence of handshaking mechanisms. More specifically, TF1 and TF2 evolved handshakes for high invasion resistance. TF3 is a better total payoff maximizer, which makes it a better invader (along with the strategies trained to maximize total payoff), since success under fitness proportionate selection is necessary for invasion. Training with an initial population mix other than (N/2, N/2) may favor invasion or resistance.

The payoff maximizing strategies typically will not defect before the opponent’s first defection, possibly because the training strategy collection contains some strategies such as Grudger and Fool Me Once that retaliate harshly by defecting for the remainder of the match if the opponent has more than a small number of cumulative defections. Paradoxically for handshaking strategies it is advantageous to defect (as a signal) in order to achieve mutual cooperation with opponents using the same strategy but not with other opponents. Nevertheless an evolutionary process is able to tunnel through the costs and risks associated with early defections to find more optimal solutions, so it is not surprising in hindsight that handshaking strategies emerge from the evolutionary training process.

A handshake requires at least one defection and there is selective pressure to defect as few times as possible to achieve the self-recognition mechanism. It is also unwise to defect on the first move as some strategies additionally retaliate in response to first round defections. So the handshakes used by TF1, TF2, and CS are in some sense optimal.

It is evident from the work presented that the performance of a strategy depends not only on the initial population distribution but also on whether or not N > 2. This will be explored further in the next section, looking not only at x1 and xN−1 but also considering xN/2.

The effect of population size.

Fig 9 complements Figs 7 and 8, showing the ranks of each strategy according to mean xN/2 for all considered even values of N.

Fig 9. Fixation ranks of all strategies according to xN/2 for different population sizes.

https://doi.org/10.1371/journal.pone.0204981.g009

Tables 11, 12 and 13 show the ranks for a selection of strategies:

  • The strategies that ranked highly for N = 2;
  • The strategies that ranked highly for N = 14;
  • The zero determinant strategies.
Table 11. Invasion: Fixation ranks of a few selected strategies according to x1 for different population sizes.

https://doi.org/10.1371/journal.pone.0204981.t011

Table 12. Resistance: Fixation ranks of a few selected strategies according to xN−1 for different population sizes.

https://doi.org/10.1371/journal.pone.0204981.t012

Table 13. Ranks of a few selected strategies according to xN/2 for different population sizes.

https://doi.org/10.1371/journal.pone.0204981.t013

The results for xN/2 show similarities to the results for xN−1; in particular TF1, TF2 and TF3 rank first, third and eighth respectively. This is to be expected since, as described previously, these strategies were trained with an initial population of (N/2, N/2) individuals.

For all starting populations i ∈ {1, N/2, N − 1} the ranks of strategies are relatively stable across the different values of N > 2; for N = 2, however, there is a distinct difference. This highlights that little can be inferred about the evolutionary performance of a strategy in a large population from its performance in a small one. This is confirmed by the performance of the zero determinant strategies: while some do rank relatively highly for N = 2 (ZD-Extort-4 has rank 16), this does not translate to larger populations.

Fig 10 shows the correlation coefficients of the ranks of strategies for differing population sizes. How well a strategy performs in any Moran process for N > 2 has low correlation with its performance for N = 2. This illustrates why the strong performance of zero determinant strategies predicted in [4] does not extend to larger populations. This was discussed theoretically in [18] and observed empirically in these simulations.
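A sketch of this rank comparison is given below: one column of strategy ranks per population size, correlated pairwise (the values are placeholders, not the study’s data).

import pandas as pd

ranks = pd.DataFrame({
    "N=2": [1, 2, 3, 4],
    "N=3": [3, 1, 2, 4],
    "N=14": [4, 1, 2, 3],
})
# Spearman correlation is appropriate here since the columns are ranks.
print(ranks.corr(method="spearman"))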

Fig 10. Heatmap of correlation coefficients of rankings by population size.

https://doi.org/10.1371/journal.pone.0204981.g010

Discussion

Training strategies to excel at the Moran process leads to the evolution of cooperation, but only with like individuals in the case of TF1 and TF2. This may have significant implications for various biological and social phenomena such as human social interactions, particularly the evolution of ingroup/outgroup mechanisms and other sometimes costly rituals that reinforce group behavior.

While TF1 and TF2 are competent invaders, the best invaders in the study do not appear to employ strict handshakes, and are generally cooperative strategies. TF3, which does not use a handshake, is a better invader than TF1 and TF2 but not as good a resistor. Nevertheless it was the result of the same kind of training process and is a better combined invader-resistor than the invaders that were previously trained to maximize payoff. It is of interest to note that when trained with a payoff maximising criterion the finite state machines do not evolve a handshake; this highlights the importance of the training objective for the emergence of this mechanism.

The strategies trained to maximize payoff in head-to-head matches are generally cooperative and are effective invaders. Combined with the fact that handshaking strategies are stronger resistors, this suggests that while maximizing individual payoff can lead to the evolution of cooperation, these strategies are not the most evolutionarily stable in the long run. A strategy with a handshaking mechanism is still capable of invading and is more resistant to subsequent invasions. Moreover, the best resistor among the payoff-maximizing trained strategies (EvolvedLookerUp1_1_1), which always defects if the opponent defects in the first round, is effectively employing a one-shot handshake of C. Similarly, Grudger (also known as Grim), which emerged from training memory one strategies for the Moran process, effectively employs a handshake of always cooperating, as it defects for the remainder of the match if the opponent ever defects.

The insights that payoff maximizers are better invaders and that handshakers are better resistors suggest that a strategy aware of the population distribution could choose to become a handshaker at a critical threshold and use a strategy better suited to invasion when in the minority. Information about the population distribution was not available to our strategies. Previous work has shown that strategies able to retain memory across matches can infer the population distribution and act in such a manner, resulting in a strategy effective at both invasion and resistance [16].

We did not attempt other objective functions that may serve to select for both invasion and resistance better than training at a starting population of (N/2, N/2). Nevertheless our results suggest that there is not much room for improvement. Any handshake more sophisticated than always cooperate necessarily involves a defection. (A strategy with a handshake consisting of a long sequence of cooperations is effectively a grudger.) For TF3 or EvolvedLookerUp1_1_1 to become better resistors they need a longer or more strict handshake. But if this handshake involves a defection then likely the invasion ability is diminished for N > 2: the top invaders for larger N are nice strategies that do not defect before their opponents. This is because good invaders need to maximize match payoff to benefit from fitness proportionate selection, and so in the absence of a handshake mechanism, knowledge of the population distribution, or some identifying label on the opponent, a strategy must be generally cooperative. Aggressive strategies are only effective invaders for the smallest N, dropping dramatically in rank as the population size increases.

We did, however, attempt to evolve CS using finite state machines and lookup table based players, which resulted in some very similar strategies. In particular we evolved a lookup strategy that had a handshake of DC and played TFT with other players after a correct handshake while defecting otherwise, which is quite close in function to CS (full grudging is not possible with a lookup table of limited depth).

Finally we note that it may be possible to achieve similar results with smaller capacity finite state machine players.

Conclusion

A detailed empirical analysis of 164 strategies for the IPD within a pairwise Moran process has been carried out. All possible ordered pairs of strategies have been placed in a Moran process with different starting population distributions, allowing each strategy to attempt to invade the other. This is the largest such experiment carried out to date and has led to many insights.

When studying evolutionary processes it is vital to consider N > 2, since results for N = 2 cannot be used to extrapolate performance in larger populations. This was shown both observationally and by considering the correlation of the ranks across different population sizes.

Memory one strategies do not perform as well as longer memory strategies in general in this study. Several longer memory strategies were high performers for invasion, particularly the strategies which have been trained using a number of reinforcement learning algorithms. Interestingly they have been trained to perform well in tournaments and not Moran processes specifically. In some cases these strategies utilize all the history of play (the neural network strategies and the lookup table strategies, the latter using the first round and some number of trailing rounds).

There are no memory one strategies in the top 5 performing strategies for N > 3. Training memory-one strategies specifically for the Moran process typically led to Grudger / Grim, a memory-one strategy with four-vector (1, 0, 0, 0). It appears to be the best resistor of the memory-one strategies. The highest performing memory-one strategy for invasion is PSO Gambler Mem 1, trained to maximize total payoff, which has four-vector (1, 0.52173487, 0, 0.12050939). For comparison, training for maximum score difference between the player and the opponent resulted in a strategy nearly the same as Grudger, with four-vector (0.9459, 0, 0, 0) (not included in the study).
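For reference, a memory-one strategy such as those above can be written as a four-vector of cooperation probabilities conditioned on the previous round; the sketch below shows how such a player chooses its next move (an illustration, not the library’s implementation).

import random

def memory_one_move(four_vector, last_round):
    """four_vector = (p_CC, p_CD, p_DC, p_DD): the probability of cooperating given
    (own previous move, opponent's previous move). For example (1, 0, 0, 0) gives the
    Grudger/Grim-like player and (1, 0.52, 0, 0.12) approximates PSO Gambler Mem 1."""
    index = {("C", "C"): 0, ("C", "D"): 1, ("D", "C"): 2, ("D", "D"): 3}[last_round]
    return "C" if random.random() < four_vector[index] else "D"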

One of the major findings discussed is the ability of strategies with a handshake mechanism to resist invasion. This was revealed not only for CS (a human designed strategy) but also for two FSM strategies (TF1 and TF2) specifically trained through an evolutionary process. In these two cases, the handshake mechanism was a product of the evolutionary process. Fig 11 shows the cooperation rate of TF1, TF2, TF3 and CS for each round of a match against all the opponents in this study. This corresponds to the fraction of cooperations played by that strategy in a given round (out of the first 15), where each matchup is repeated 10000 times to obtain the mean.

Fig 11. Cooperation rate per round (over 10000 repetitions).

Rows correspond to all the strategies considered in this work (ordered alphabetically by name). Columns correspond to round of an IPD match.

https://doi.org/10.1371/journal.pone.0204981.g011

While TF3 does not have a strict handshake mechanism, it is clear that all these strategies start a match by cooperating. It is also evident that TF3 cooperates more than the other strategies, thus explaining the difference in performance. It is also clear that CS only cooperates with itself and Handshake: it is a very aggressive strategy.

These findings are important for the ongoing understanding of population dynamics and offer evidence for some of the shortcomings of low memory strategies, which have started to be recognised by the community [17].

All source code for this work has been written in a sustainable manner: it is open source, under version control and tested, which ensures that all results can be reproduced [30–32]. The raw data as well as the processed data has also been properly archived and can be found at [26].

There are many opportunities to build on this work. In particular, an analysis of the effect of noise should offer insights regarding the stability of the findings, particularly for the handshaking strategies. They may be less dominant for larger amounts of noise since the handshaking mechanisms may become brittle. There are many other variations to explore including populations with more than one type, spatial structure, and mutation.

One final point to recognise: the large set of strategies used here does not in itself constitute an authoritative set. Whilst it is not only large but also very diverse, the results (and rankings) presented might change given a different set of strategies. A further piece of work could look at subgroups of strategies and how they fare against other subgroups. Note that because of the open nature of the work here (not only is the source code archived but so is the data), this and any other further analysis is possible to carry out.

Appendix: List of players

  1. ϕ—DeterministicMemory depth: ∞. [8]
  2. π—DeterministicMemory depth: ∞. [8]
  3. e—DeterministicMemory depth: ∞. [8]
  4. ALLCorALLD—StochasticMemory depth: 1. [8]
  5. Adaptive—DeterministicMemory depth: ∞. [36]
  6. Adaptive Pavlov 2006—DeterministicMemory depth: ∞. [37]
  7. Adaptive Pavlov 2011—DeterministicMemory depth: ∞. [33]
  8. Adaptive Tit For Tat: 0.5—DeterministicMemory depth: ∞. [38]
  9. Aggravater—DeterministicMemory depth: ∞. [8]
  10. Alternator—DeterministicMemory depth: 1. [39, 40]
  11. Alternator Hunter—DeterministicMemory depth: ∞. [8]
  12. Anti Tit For Tat—DeterministicMemory depth: 1. [41]
  13. AntiCycler—DeterministicMemory depth: ∞. [8]
  14. Appeaser—DeterministicMemory depth: ∞. [8]
  15. Arrogant QLearner—StochasticMemory depth: ∞. [8]
  16. Average Copier—StochasticMemory depth: ∞. [8]
  17. Better and Better—StochasticMemory depth: ∞. [27]
  18. Bully—DeterministicMemory depth: 1. [42]
  19. Calculator—StochasticMemory depth: ∞. [27]
  20. Cautious QLearner—StochasticMemory depth: ∞. [8]
  21. CollectiveStrategy(CS)—DeterministicMemory depth: ∞. [28]
  22. Contrite Tit For Tat(CTfT)—DeterministicMemory depth: 3. [43]
  23. Cooperator—DeterministicMemory depth: 0. [4, 39, 40]
  24. Cooperator Hunter—DeterministicMemory depth: ∞. [8]
  25. Cycle Hunter—DeterministicMemory depth: ∞. [8]
  26. Cycler CCCCCD—DeterministicMemory depth: 5. [8]
  27. Cycler CCCD—DeterministicMemory depth: 3. [8]
  28. Cycler CCCDCD—DeterministicMemory depth: 5. [8]
  29. Cycler CCD—DeterministicMemory depth: 2. [40]
  30. Cycler DC—DeterministicMemory depth: 1. [8]
  31. Cycler DDC—DeterministicMemory depth: 2. [40]
  32. Davis: 10—DeterministicMemory depth: ∞. [2]
  33. Defector—DeterministicMemory depth: 0. [4, 39, 40]
  34. Defector Hunter—DeterministicMemory depth: ∞. [8]
  35. Desperate—StochasticMemory depth: 1. [44]
  36. Doubler—DeterministicMemory depth: ∞. [27]
  37. EasyGo—DeterministicMemory depth: ∞. [27, 36]
  38. Eatherley—StochasticMemory depth: ∞. [45]
  39. Eventual Cycle Hunter—DeterministicMemory depth: ∞. [8]
  40. Evolved ANN—DeterministicMemory depth: ∞. [8]
  41. Evolved ANN 5—DeterministicMemory depth: ∞. [8]
  42. Evolved ANN 5 Noise 05—DeterministicMemory depth: ∞. [8]
  43. Evolved FSM 16—DeterministicMemory depth: 16—Number of states: 14. [8]
  44. Evolved FSM 16 Noise 05—DeterministicMemory depth: 16—Number of states: 14. [8]
  45. Evolved FSM 4—DeterministicMemory depth: 4—Number of states: 4. [8]
  46. Evolved HMM 5—StochasticMemory depth: 5. [8]
  47. EvolvedLookerUp1_1_1—DeterministicMemory depth: ∞. [8]
  48. EvolvedLookerUp2_2_2—DeterministicMemory depth: ∞. [8]
  49. FSM Player: [(0, ‘C’, 0, ‘C’), (0, ‘D’, 3, ‘C’), (1, ‘C’, 5, ‘D’), (1, ‘D’, 0, ‘C’), (2, ‘C’, 3, ‘C’), (2, ‘D’, 2, ‘D’), (3, ‘C’, 4, ‘D’), (3, ‘D’, 6, ‘D’), (4, ‘C’, 3, ‘C’), (4, ‘D’, 1, ‘D’), (5, ‘C’, 6, ‘C’), (5, ‘D’, 3, ‘D’), (6, ‘C’, 6, ‘D’), (6, ‘D’, 6, ‘D’), (7, ‘C’, 7, ‘D’), (7, ‘D’, 5, ‘C’)], 0, C(TF3)—DeterministicMemory depth: ∞—Number of states: 8.
  50. FSM Player: [(0, ‘C’, 13, ‘D’), (0, ‘D’, 12, ‘D’), (1, ‘C’, 3, ‘D’), (1, ‘D’, 4, ‘D’), (2, ‘C’, 14, ‘D’), (2, ‘D’, 9, ‘D’), (3, ‘C’, 0, ‘C’), (3, ‘D’, 1, ‘D’), (4, ‘C’, 1, ‘D’), (4, ‘D’, 2, ‘D’), (5, ‘C’, 12, ‘C’), (5, ‘D’, 6, ‘C’), (6, ‘C’, 1, ‘C’), (6, ‘D’, 14, ‘D’), (7, ‘C’, 12, ‘D’), (7, ‘D’, 2, ‘D’), (8, ‘C’, 7, ‘D’), (8, ‘D’, 9, ‘D’), (9, ‘C’, 8, ‘D’), (9, ‘D’, 0, ‘D’), (10, ‘C’, 2, ‘C’), (10, ‘D’, 15, ‘C’), (11, ‘C’, 7, ‘D’), (11, ‘D’, 13, ‘D’), (12, ‘C’, 3, ‘C’), (12, ‘D’, 8, ‘D’), (13, ‘C’, 7, ‘C’), (13, ‘D’, 10, ‘D’), (14, ‘C’, 10, ‘D’), (14, ‘D’, 7, ‘D’), (15, ‘C’, 15, ‘C’), (15, ‘D’, 11, ‘D’)], 0, C(TF2)—DeterministicMemory depth: ∞—Number of states: 16.
  51. FSM Player: [(0, ‘C’, 7, ‘C’), (0, ‘D’, 1, ‘C’), (1, ‘C’, 11, ‘D’), (1, ‘D’, 11, ‘D’), (2, ‘C’, 8, ‘D’), (2, ‘D’, 8, ‘C’), (3, ‘C’, 3, ‘C’), (3, ‘D’, 12, ‘D’), (4, ‘C’, 6, ‘C’), (4, ‘D’, 3, ‘C’), (5, ‘C’, 11, ‘C’), (5, ‘D’, 8, ‘D’), (6, ‘C’, 13, ‘D’), (6, ‘D’, 14, ‘C’), (7, ‘C’, 4, ‘D’), (7, ‘D’, 2, ‘D’), (8, ‘C’, 14, ‘D’), (8, ‘D’, 8, ‘D’), (9, ‘C’, 0, ‘C’), (9, ‘D’, 10, ‘D’), (10, ‘C’, 8, ‘C’), (10, ‘D’, 15, ‘C’), (11, ‘C’, 6, ‘D’), (11, ‘D’, 5, ‘D’), (12, ‘C’, 6, ‘D’), (12, ‘D’, 9, ‘D’), (13, ‘C’, 9, ‘D’), (13, ‘D’, 8, ‘D’), (14, ‘C’, 8, ‘D’), (14, ‘D’, 13, ‘D’), (15, ‘C’, 4, ‘C’), (15, ‘D’, 5, ‘C’)], 0, C(TF1)—DeterministicMemory depth: ∞—Number of states: 16.
  52. Feld: 1.0, 0.5, 200—StochasticMemory depth: 200. [2]
  53. Firm But Fair—StochasticMemory depth: 1. [46]
  54. Fool Me Forever—DeterministicMemory depth: ∞. [8]
  55. Fool Me Once—DeterministicMemory depth: ∞. [8]
  56. Forgetful Fool Me Once: 0.05—StochasticMemory depth: ∞. [8]
  57. Forgetful Grudger—DeterministicMemory depth: 10. [8]
  58. Forgiver—DeterministicMemory depth: ∞. [8]
  59. Forgiving Tit For Tat(FTfT)—DeterministicMemory depth: ∞. [8]
  60. Fortress3—DeterministicMemory depth: 3—Number of states: 3. [21]
  61. Fortress4—DeterministicMemory depth: 4—Number of states: 4. [21]
  62. GTFT: 0.33—StochasticMemory depth: 1. [22, 47]
  63. General Soft Grudger: n = 1,d = 4,c = 2—DeterministicMemory depth: ∞. [8]
  64. Gradual—DeterministicMemory depth: ∞. [48]
  65. Gradual Killer: (‘D’, ‘D’, ‘D’, ‘D’, ‘D’, ‘C’, ‘C’)—DeterministicMemory depth: ∞. [27]
  66. Grofman—StochasticMemory depth: ∞. [2]
  67. Grudger—DeterministicMemory depth: 1. [2, 36, 44, 48, 49]
  68. GrudgerAlternator—DeterministicMemory depth: ∞. [27]
  69. Grumpy: Nice, 10, -10—DeterministicMemory depth: ∞. [8]
  70. Handshake—DeterministicMemory depth: ∞. [10]
  71. Hard Go By Majority—DeterministicMemory depth: ∞. [40]
  72. Hard Go By Majority: 10—DeterministicMemory depth: 10. [8]
  73. Hard Go By Majority: 20—DeterministicMemory depth: 20. [8]
  74. Hard Go By Majority: 40—DeterministicMemory depth: 40. [8]
  75. Hard Go By Majority: 5—DeterministicMemory depth: 5. [8]
  76. Hard Prober—DeterministicMemory depth: ∞. [27]
  77. Hard Tit For 2 Tats(HTf2T)—DeterministicMemory depth: 3. [50]
  78. Hard Tit For Tat(HTfT)—DeterministicMemory depth: 3. [51]
  79. Hesitant QLearner—StochasticMemory depth: ∞. [8]
  80. Hopeless—StochasticMemory depth: 1. [44]
  81. Inverse—StochasticMemory depth: ∞. [8]
  82. Inverse Punisher—DeterministicMemory depth: ∞. [8]
  83. Joss: 0.9—StochasticMemory depth: 1. [2, 50]
  84. Level Punisher—DeterministicMemory depth: ∞. [52]
  85. Limited Retaliate 2: 0.08, 15—DeterministicMemory depth: ∞. [8]
  86. Limited Retaliate 3: 0.05, 20—DeterministicMemory depth: ∞. [8]
  87. Limited Retaliate: 0.1, 20—DeterministicMemory depth: ∞. [8]
  88. MEM2—DeterministicMemory depth: ∞. [14]
  89. Math Constant Hunter—DeterministicMemory depth: ∞. [8]
  90. Meta Hunter Aggressive: 7 players—DeterministicMemory depth: ∞. [8]
  91. Meta Hunter: 6 players—DeterministicMemory depth: ∞. [8]
  92. Naive Prober: 0.1—StochasticMemory depth: 1. [36]
  93. Negation—StochasticMemory depth: 1. [51]
  94. Nice Average Copier—StochasticMemory depth: ∞. [8]
  95. Nydegger—DeterministicMemory depth: 3. [2]
  96. Omega TFT: 3, 8—DeterministicMemory depth: ∞. [37]
  97. Once Bitten—DeterministicMemory depth: 12. [8]
  98. Opposite Grudger—DeterministicMemory depth: ∞. [8]
  99. PSO Gambler 1_1_1—StochasticMemory depth: ∞. [8]
  100. PSO Gambler 2_2_2—StochasticMemory depth: ∞. [8]
  101. PSO Gambler 2_2_2 Noise 05—StochasticMemory depth: ∞. [8]
  102. PSO Gambler Mem1—StochasticMemory depth: 1. [8]
  103. Predator—DeterministicMemory depth: 9—Number of states: 9. [21]
  104. Prober—DeterministicMemory depth: ∞. [36]
  105. Prober 2—DeterministicMemory depth: ∞. [27]
  106. Prober 3—DeterministicMemory depth: ∞. [27]
  107. Prober 4—DeterministicMemory depth: ∞. [27]
  108. Pun1—DeterministicMemory depth: 2—Number of states: 2. [21]
  109. Punisher—DeterministicMemory depth: ∞. [8]
  110. Raider—DeterministicMemory depth: 3—Number of states: 4. [53]
  111. Random Hunter—DeterministicMemory depth: ∞. [8]
  112. Random: 0.5—StochasticMemory depth: 0. [2, 38]
  113. Remorseful Prober: 0.1—StochasticMemory depth: 2. [36]
  114. Resurrection—DeterministicMemory depth: 1. [52]
  115. Retaliate 2: 0.08—DeterministicMemory depth: ∞. [8]
  116. Retaliate 3: 0.05—DeterministicMemory depth: ∞. [8]
  117. Retaliate: 0.1—DeterministicMemory depth: ∞. [8]
  118. Revised Downing: True—DeterministicMemory depth: ∞. [2]
  119. Ripoff—DeterministicMemory depth: 2—Number of states: 3. [54]
  120. Risky QLearner—StochasticMemory depth: ∞. [8]
  121. SelfSteem—StochasticMemory depth: ∞. [55]
  122. ShortMem—DeterministicMemory depth: 10. [55]
  123. Shubik—DeterministicMemory depth: ∞. [2]
  124. Slow Tit For Two Tats—DeterministicMemory depth: 2. [8]
  125. Slow Tit For Two Tats 2—DeterministicMemory depth: 2. [27]
  126. Sneaky Tit For Tat—DeterministicMemory depth: ∞. [8]
  127. Soft Go By Majority—DeterministicMemory depth: ∞. [39, 40]
  128. Soft Go By Majority: 10—DeterministicMemory depth: 10. [8]
  129. Soft Go By Majority: 20—DeterministicMemory depth: 20. [8]
  130. Soft Go By Majority: 40—DeterministicMemory depth: 40. [8]
  131. Soft Go By Majority: 5—DeterministicMemory depth: 5. [8]
  132. Soft Grudger—DeterministicMemory depth: 6. [36]
  133. Soft Joss: 0.9—StochasticMemory depth: 1. [27]
  134. SolutionB1—DeterministicMemory depth: 3—Number of states: 3. [56]
  135. SolutionB5—DeterministicMemory depth: 5—Number of states: 6. [56]
  136. Spiteful Tit For Tat—DeterministicMemory depth: ∞. [27]
  137. Stochastic Cooperator—StochasticMemory depth: 1. [18]
  138. Stochastic WSLS: 0.05—StochasticMemory depth: 1. [8]
  139. Suspicious Tit For Tat—DeterministicMemory depth: 1. [41, 48]
  140. Tester—DeterministicMemory depth: ∞. [45]
  141. ThueMorse—DeterministicMemory depth: ∞. [8]
  142. ThueMorseInverse—DeterministicMemory depth: ∞. [8]
  143. Thumper—DeterministicMemory depth: 2—Number of states: 2. [54]
  144. Tit For 2 Tats(Tf2T)—DeterministicMemory depth: 2. [39]
  145. Tit For Tat(TfT)—DeterministicMemory depth: 1. [2]
  146. Tricky Cooperator—DeterministicMemory depth: 10. [8]
  147. Tricky Defector—DeterministicMemory depth: ∞. [8]
  148. Tullock: 11—StochasticMemory depth: 11. [2]
  149. Two Tits For Tat(2TfT)—DeterministicMemory depth: 2. [39]
  150. VeryBad—DeterministicMemory depth: ∞. [55]
  151. Willing—StochasticMemory depth: 1. [44]
  152. Win-Shift Lose-Stay: D(WShLSt)—DeterministicMemory depth: 1. [36]
  153. Win-Stay Lose-Shift: C(WSLS)—DeterministicMemory depth: 1. [47, 50, 57]
  154. Winner12—DeterministicMemory depth: 2. [58]
  155. Winner21—DeterministicMemory depth: 2. [58]
  156. Worse and Worse—StochasticMemory depth: ∞. [27]
  157. Worse and Worse 2—StochasticMemory depth: ∞. [27]
  158. Worse and Worse 3—StochasticMemory depth: ∞. [27]
  159. ZD-Extort-2 v2: 0.125, 0.5, 1—StochasticMemory depth: 1. [59]
  160. ZD-Extort-2: 0.1111111111111111, 0.5—StochasticMemory depth: 1. [50]
  161. ZD-Extort-4: 0.23529411764705882, 0.25, 1—StochasticMemory depth: 1. [8]
  162. ZD-GEN-2: 0.125, 0.5, 3—StochasticMemory depth: 1. [59]
  163. ZD-GTFT-2: 0.25, 0.5—StochasticMemory depth: 1. [50]
  164. ZD-SET-2: 0.25, 0.0, 2—StochasticMemory depth: 1. [59]

Supporting information

S1 Fig. The fixation probabilities x1 for N = 3.

https://doi.org/10.1371/journal.pone.0204981.s001

(EPS)

S2 Fig. The fixation probabilities x1 for N = 4.

https://doi.org/10.1371/journal.pone.0204981.s002

(EPS)

S3 Fig. The fixation probabilities x1 for N = 5.

https://doi.org/10.1371/journal.pone.0204981.s003

(EPS)

S4 Fig. The fixation probabilities x1 for N = 6.

https://doi.org/10.1371/journal.pone.0204981.s004

(EPS)

S5 Fig. The fixation probabilities x1 for N = 7.

https://doi.org/10.1371/journal.pone.0204981.s005

(EPS)

S6 Fig. The fixation probabilities x1 for N = 8.

https://doi.org/10.1371/journal.pone.0204981.s006

(EPS)

S7 Fig. The fixation probabilities x1 for N = 9.

https://doi.org/10.1371/journal.pone.0204981.s007

(EPS)

S8 Fig. The fixation probabilities x1 for N = 10.

https://doi.org/10.1371/journal.pone.0204981.s008

(EPS)

S9 Fig. The fixation probabilities x1 for N = 11.

https://doi.org/10.1371/journal.pone.0204981.s009

(EPS)

S10 Fig. The fixation probabilities x1 for N = 12.

https://doi.org/10.1371/journal.pone.0204981.s010

(EPS)

S11 Fig. The fixation probabilities x1 for N = 13.

https://doi.org/10.1371/journal.pone.0204981.s011

(EPS)

S12 Fig. The fixation probabilities x1 for N = 14.

https://doi.org/10.1371/journal.pone.0204981.s012

(EPS)

S13 Fig. The fixation probabilities xN−1 for N = 3.

https://doi.org/10.1371/journal.pone.0204981.s013

(EPS)

S14 Fig. The fixation probabilities xN−1 for N = 4.

https://doi.org/10.1371/journal.pone.0204981.s014

(EPS)

S15 Fig. The fixation probabilities xN−1 for N = 5.

https://doi.org/10.1371/journal.pone.0204981.s015

(EPS)

S16 Fig. The fixation probabilities xN−1 for N = 6.

https://doi.org/10.1371/journal.pone.0204981.s016

(EPS)

S17 Fig. The fixation probabilities xN−1 for N = 7.

https://doi.org/10.1371/journal.pone.0204981.s017

(EPS)

S18 Fig. The fixation probabilities xN−1 for N = 8.

https://doi.org/10.1371/journal.pone.0204981.s018

(EPS)

S19 Fig. The fixation probabilities xN−1 for N = 9.

https://doi.org/10.1371/journal.pone.0204981.s019

(EPS)

S20 Fig. The fixation probabilities xN−1 for N = 10.

https://doi.org/10.1371/journal.pone.0204981.s020

(EPS)

S21 Fig. The fixation probabilities xN−1 for N = 11.

https://doi.org/10.1371/journal.pone.0204981.s021

(EPS)

S22 Fig. The fixation probabilities xN−1 for N = 12.

https://doi.org/10.1371/journal.pone.0204981.s022

(EPS)

S23 Fig. The fixation probabilities xN−1 for N = 13.

https://doi.org/10.1371/journal.pone.0204981.s023

(EPS)

S24 Fig. The fixation probabilities xN−1 for N = 14.

https://doi.org/10.1371/journal.pone.0204981.s024

(EPS)

Acknowledgments

This work was performed using the computational facilities of the Advanced Research Computing @ Cardiff (ARCCA) Division, Cardiff University.

A variety of software libraries were used in this work; a minimal usage sketch follows the list:

  • The Axelrod library (IPD strategies and Moran processes) [8].
  • The matplotlib library (visualisation) [33].
  • The pandas and numpy libraries (data manipulation) [34, 35].
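
The sketch below shows how a small Moran process over a few of the listed strategies might be run with the Axelrod library [8]. It is a minimal illustrative example only: the choice of strategies, population size, and number of turns per match are not those used to generate the reported results.

```python
import axelrod as axl

# An illustrative population mixing a few of the strategies from the list above.
players = [axl.TitForTat(), axl.WinStayLoseShift(), axl.Defector(), axl.Cooperator()]

# A Moran process in which each interaction is a 200-turn match.
mp = axl.MoranProcess(players, turns=200)
populations = mp.play()  # run until a single strategy takes over the population

print(mp.winning_strategy_name)  # the strategy that reached fixation
print(len(populations))          # number of recorded population states (generations)
```

Fixation probabilities such as x1 and xN−1 in the supporting figures can, in principle, be estimated by repeating runs of this kind many times and recording how often the initial mutant takes over.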

References

  1. Flood M. Some Experimental Games, 1958.
  2. Axelrod R. Effective choice in the prisoner’s dilemma. Journal of Conflict Resolution, 24(1):3–25, 1980.
  3. Knight V, Campbell O, Harper M, Langner K, Campbell J, Campbell T, et al. An Open Framework for the Reproducible Study of the Iterated Prisoner’s Dilemma. 2016.
  4. Press W and Dyson F. Iterated Prisoner’s Dilemma contains strategies that dominate any evolutionary opponent. Proceedings of the National Academy of Sciences of the United States of America, 109(26):10409–13, 2012. pmid:22615375
  5. Moran P. Random Processes in Genetics. Mathematical Proceedings of the Cambridge Philosophical Society, 54(1):60–71, 1958.
  6. Nowak M. Evolutionary Dynamics: Exploring the Equations of Life. Cambridge, MA: Harvard University Press, 2006.
  7. Stewart A and Plotkin J. From extortion to generosity, evolution in the iterated prisoner’s dilemma. Proceedings of the National Academy of Sciences, 110(38):15348–15353, 2013.
  8. The Axelrod project developers. Axelrod: v2.9.0. http://dx.doi.org/10.5281/zenodo.499122, April 2016.
  9. Adami C, Schossau J, and Hintze A. Evolutionary game theory using agent-based methods. Physics of Life Reviews, 19(Supplement C):1–26, 2016. pmid:27617905
  10. Robson A. Efficiency in evolutionary games: Darwin, Nash and the secret handshake. Journal of Theoretical Biology, 144(3):379–396, 1990. pmid:2395377
  11. Mateo J. Kin recognition in ground squirrels and other rodents. Journal of Mammalogy, 84(4):1163–1181, 2003.
  12. West J, Hasnain Z, Mason J, and Newton P. The prisoner’s dilemma as a cancer model. Convergent Science Physical Oncology, 2(3):035002, 2016. pmid:29177084
  13. Allen B, Lippner G, Chen Y, Fotouhi B, Momeni N, Yau S, et al. Evolutionary dynamics on any population structure. Nature, 544:227–230, 2017.
  14. Li J and Kendall G. The effect of memory size on the evolutionary stability of strategies in iterated prisoner’s dilemma. IEEE Transactions on Evolutionary Computation, 18(6):819–826, 2014.
  15. Baek S, Jeong H, Hilbe C, and Nowak M. Comparing reactive and memory-one strategies of direct reciprocity. Scientific Reports, 6:1–13, 2016.
  16. Lee C, Harper M, and Fryer D. The Art of War: Beyond Memory-one Strategies in Population Games. PLOS ONE, 10(3):e0120625, 2015. pmid:25803576
  17. Hilbe C, Martinez-Vaquero L, Chatterjee K, and Nowak M. Memory-n strategies of direct reciprocity. Proceedings of the National Academy of Sciences, page 201621239, 2017.
  18. Adami C and Hintze A. Evolutionary instability of zero-determinant strategies demonstrates that winning is not everything. Nature Communications, 4(1):2193, 2013. pmid:23903782
  19. Harper M, Knight V, Jones M, Koutsovoulos G, Glynatsi N, and Campbell O. Reinforcement Learning Produces Dominant Strategies for the Iterated Prisoner’s Dilemma. PLOS ONE, 12(12), 2017.
  20. Affenzeller M, Wagner S, Winkler S, and Beham A. Genetic Algorithms and Genetic Programming: Modern Concepts and Practical Applications. Numerical Insights. CRC Press, 2009.
  21. Ashlock W and Ashlock D. Changes in prisoner’s dilemma strategies over evolutionary time with different population sizes. In Evolutionary Computation, 2006. CEC 2006. IEEE Congress on, pages 297–304. IEEE, 2006.
  22. Gaudesi M, Piccolo E, Squillero G, and Tonda A. Exploiting evolutionary modeling to prevail in iterated prisoner’s dilemma tournaments. IEEE Transactions on Computational Intelligence and AI in Games, 8(3):288–300, 2016.
  23. Harper M, Knight V, and Jones M. Axelrod-python/axelrod-dojo: v0.0.1. https://doi.org/10.5281/zenodo.824264, July 2017.
  24. Ashlock D. Evolutionary computation for modeling and optimization. Springer Science & Business Media, 2006.
  25. Ashlock W. Why some representations are more cooperative than others for prisoner’s dilemma. In Foundations of Computational Intelligence, 2007. FOCI 2007. IEEE Symposium on, pages 314–321. IEEE, 2007.
  26. Knight V, Harper M, and Glynatsi N. Data for: Evolution Reinforces Cooperation with the Emergence of Self-Recognition Mechanisms: an empirical study of the Moran process for the iterated Prisoner’s dilemma using reinforcement learning. https://doi.org/10.5281/zenodo.1040129, July 2017.
  27. LIFL. Prison. http://www.lifl.fr/IPD/ipd.frame.html, 2008.
  28. Li J and Kendall G. A strategy with novel evolutionary features for the iterated prisoner’s dilemma. Evolutionary Computation, 17(2):257–274, 2009. pmid:19413490
  29. Brown L, Cai T, and DasGupta A. Interval estimation for a binomial proportion. Statistical Science, pages 101–117, 2001.
  30. Prlić A and Procter J. Ten Simple Rules for the Open Development of Scientific Software. PLOS Computational Biology, 8(12):1–3, 2012.
  31. Sandve G, Nekrutenko A, Taylor J, and Hovig E. Ten Simple Rules for Reproducible Computational Research. PLoS Computational Biology, 9(10):1–4, 2013.
  32. Wilson G, Aruliah D, Brown T, Chue Hong N, Davis M, Guy R, et al. Best Practices for Scientific Computing. PLOS Biology, 12(1):1–7, 2014.
  33. Hunter J. Matplotlib: A 2D graphics environment. Computing In Science & Engineering, 9(3):90–95, 2007.
  34. McKinney W et al. Data structures for statistical computing in python. In Proceedings of the 9th Python in Science Conference, volume 445, pages 51–56. van der Walt S, Millman J, 2010.
  35. van der Walt S, Colbert C, and Varoquaux G. The NumPy array: a structure for efficient numerical computation. Computing in Science & Engineering, 13(2):22–30, 2011.
  36. Li J, Hingston P, and Kendall G. Engineering Design of Strategies for Winning Iterated Prisoner’s Dilemma Competitions. IEEE Transactions on Computational Intelligence and AI in Games, 3(4):348–360, 2011.
  37. Kendall G, Yao X, and Chong S. The Iterated Prisoners’ Dilemma: 20 Years on. Advances in natural computation. World Scientific, 2007.
  38. Tzafestas E. Toward adaptive cooperative behavior. From Animals to Animats: Proceedings of the 6th International Conference on the Simulation of Adaptive Behavior (SAB-2000), 2:334–340, 2000.
  39. Axelrod R. The Evolution of Cooperation. Basic Books, 1984.
  40. Mittal S and Deb K. Optimal strategies of the iterated prisoner’s dilemma problem for multiple conflicting objectives. IEEE Transactions on Evolutionary Computation, 13(3):554–565, 2009.
  41. Hilbe C, Nowak M, and Traulsen A. Adaptive dynamics of extortion and compliance. PLoS ONE, 8(11):e77886, 2013. pmid:24223739
  42. Nachbar J. Evolution in the finitely repeated prisoner’s dilemma. Journal of Economic Behavior & Organization, 19(3):307–326, 1992.
  43. Wu J and Axelrod R. How to cope with noise in the iterated prisoner’s dilemma. Journal of Conflict Resolution, 39(1):183–189, 1995.
  44. Van den Berg P and Weissing F. The importance of mechanisms for the evolution of cooperation. In Proc. R. Soc. B, volume 282, page 20151382. The Royal Society, 2015.
  45. Axelrod R. More Effective Choice in the Prisoner’s Dilemma. Journal of Conflict Resolution, 24(3):379–403, 1980.
  46. Frean M. The prisoner’s dilemma without synchrony. Proceedings of the Royal Society of London B: Biological Sciences, 257(1348):75–79, 1994.
  47. Nowak M and Sigmund K. A strategy of win-stay, lose-shift that outperforms tit-for-tat in the Prisoner’s Dilemma game. Nature, 364(6432):56–58, 1993. pmid:8316296
  48. Beaufils B, Delahaye J, and Mathieu P. Our meeting with gradual, a good strategy for the iterated prisoner’s dilemma. In Proceedings of the Fifth International Workshop on the Synthesis and Simulation of Living Systems, pages 202–209, 1997.
  49. Banks J and Sundaram R. Repeated games, finite automata, and complexity. Games and Economic Behavior, 2(2):97–117, 1990.
  50. Stewart A and Plotkin J. Extortion and cooperation in the Prisoner’s Dilemma. Proceedings of the National Academy of Sciences, 109(26):10134–10135, 2012.
  51. Unknown. www.prisoners-dilemma.com. http://www.prisoners-dilemma.com/, 2017.
  52. Eckhart A. Coopsim v0.9.9 beta 6. https://github.com/jecki/CoopSim/, 2015.
  53. Ashlock W, Tsang J, and Ashlock D. The evolution of exploitation. In Foundations of Computational Intelligence (FOCI), 2014 IEEE Symposium on, pages 135–142. IEEE, 2014.
  54. Ashlock D and Kim E. Fingerprinting: Visualization and automatic analysis of prisoner’s dilemma strategies. IEEE Transactions on Evolutionary Computation, 12(5):647–659, 2008.
  55. Carvalho A, Rocha H, Amaral F, and Guimaraes F. Iterated Prisoner’s Dilemma: An extended analysis, pages 1–6, 2013.
  56. Ashlock D, Brown J, and Hingston P. Multiple Opponent Optimization of Prisoner’s Dilemma Playing Agents. IEEE Transactions on Computational Intelligence and AI in Games, 7(1):53–65, 2015.
  57. Kraines D and Kraines V. Pavlov and the prisoner’s dilemma. Theory and Decision, 26(1):47–79, 1989.
  58. Mathieu P and Delahaye J. New Winning Strategies for the Iterated Prisoner’s Dilemma (Extended Abstract). 14th International Conference on Autonomous Agents and Multiagent Systems (AAMAS 2015), pages 1665–1666, 2015.
  59. Kuhn S. Prisoner’s dilemma. In Edward N. Zalta, editor, The Stanford Encyclopedia of Philosophy. Metaphysics Research Lab, Stanford University, spring 2017 edition, 2017.