Abstract
The interaction of strategies in evolutionary games is studied analytically in a well-mixed population using a Markov chain method. By establishing a correspondence between an evolutionary game and a Markov chain, we show that results obtained from the fundamental matrix method in Markov chain dynamics are equivalent to the corresponding ones in the evolutionary game. In the conventional fundamental matrix method, quantities like the fixation probability and the fixation time are calculable. Using a theorem in the fundamental matrix method, the conditional fixation time in an absorbing Markov chain is also calculable, and in an ergodic Markov chain the stationary probability distribution that describes the chain's stationary state can be obtained analytically. Finally, the rock-scissors-paper evolutionary game is evaluated as an example, and the results of the analytical method and simulations are compared. This analytical method saves time and computational resources compared to prevalent simulation methods.
Citation: Hajihashemi M, Aghababaei Samani K (2022) Multi-strategy evolutionary games: A Markov chain approach. PLoS ONE 17(2): e0263979. https://doi.org/10.1371/journal.pone.0263979
Editor: Jun Tanimoto, Kyushu Daigaku, JAPAN
Received: December 27, 2021; Accepted: February 1, 2022; Published: February 17, 2022
Copyright: © 2022 Hajihashemi, Aghababaei Samani. This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
Data Availability: Codes related to this article are hosted on GitHub at https://github.com/mehdiphy/rock-scissors-paper-evolutionary-game.
Funding: This research is financially supported by Iran National Science Foundation (INSF) under Postdoctoral Research Grant number 99007738.
Competing interests: The authors have declared that no competing interests exist.
Introduction
Today, evolutionary game theory (EGT) is a progressive topic in many branches of science, from economics to biology [1–10]. EGT provides powerful tools for problems in which the system's dynamics depend on interactions between agents. These interactions between strategies are often described by evolutionary games. The performance of strategies in an evolutionary game is determined by the game's payoff matrix, which sets each strategy's spread rate: a greater payoff leads to a stronger tendency to spread in the population. In an infinite well-mixed population, the dynamics of the system are governed by a deterministic equation called the replicator equation [11, 12], but in a finite population the dynamics are stochastic [13–21].
In a stochastic evolutionary game, the population is divided among several strategies and individuals interact with each other based on their strategies. The process advances in discrete time steps. In each time step, the frequency of each strategy changes by one or remains unchanged. The game's payoff matrix and the frequency of each strategy determine the probabilities of the possible events at each time step. Another factor that influences the dynamics of the population is the update rule, which specifies how the payoff matrix and the frequencies distribute the probabilities of events in each time step. Depending on the update rule, the evolutionary game either stops when one strategy overcomes all others (fixation) or continues forever. The structure of the population can also affect its dynamics; the unfolding of evolutionary games in graph-structured populations is the subject of many investigations [13, 22–29]. Cooperative behavior in games like the public goods game or the prisoner's dilemma is another appealing topic in evolutionary game studies [30–36].
In stochastic evolutionary games, the fixation of a strategy is a favorite subject. Numerical simulation in finite populations is the subject of many studies [37–40]; there are also many investigations that evaluate the dynamics of evolutionary games analytically [18, 41–46]. In analytical treatments, the evolutionary process is often considered a generalization of the Moran process [47], and this has been done for games with two strategies. The most famous analytical method for analyzing evolutionary games is the recursive equation method [48, 49], in which two interesting quantities, the fixation probability and the fixation time, are obtained in terms of finite series. Evolutionary games with more than two strategies have not been studied analytically so far.
When individual mutation is considered, the population's dynamics are governed by an evolutionary game with no fixation strategy. After many time steps, the configuration of the population then reaches a stable state, described by a stationary probability distribution that determines how probable each configuration of the population is after a long run. In both cases (games with and without fixation strategies), as the number of strategies increases, simulating the evolutionary game requires more time and computational resources, so an analytical method for evaluating evolutionary games with more than two strategies is helpful. This study aims to provide an analytical method for obtaining quantities in evolutionary games that would otherwise take a long time and extensive computational resources to obtain by simulation.
The Markov chain method has been used successfully for analyzing evolutionary games [50–52], but it has never been used in an organized and systematic way. In this paper we establish the Markov chain method as a reliable method for evaluating evolutionary games. In this method, a Markov chain is introduced corresponding to each evolutionary game. Essential concepts in evolutionary games, such as the fixation probability, the conditional fixation time, and the stationary probability distribution, are related to concepts in the Markov chain. Using the fundamental matrix method in the equivalent Markov chain, we can calculate the essential quantities of the Markov chain, which in turn gives the essential quantities of the evolutionary game. Although this method is designed for discrete-time systems, it can be used for continuous-time systems with some approximation.
The organization of the paper is as follows. In the General method section we review the Markov chain method and explain a practical theorem for obtaining conditional fixation times, which is proven in the Appendix. In the Evolutionary games section we establish the correspondence between evolutionary games and Markov chains and clarify how the essential concepts of evolutionary games can be obtained from the fundamental matrix method. In Results and discussion we apply our approach to an evolutionary game with three strategies: the famous rock-scissors-paper game, for which the results of the analytical method and simulations are compared. The Conclusion is devoted to a summary and concluding remarks.
General method
Markov chain and fundamental matrix method
In this section, we briefly review the fundamental matrix method in Markov chains and obtain a formula for calculating the conditional absorption time. In the next section, by establishing a correspondence between the states of a Markov chain and the states of an evolutionary game, this theorem will provide handy information about the dynamics of the evolutionary process along the fixation path.
A Markov chain is described by a set of states S = {s1, s2, s3, …} and a process that starts in one of these states and moves successively between them. If the chain is currently in state si, it moves to state sj with a probability denoted by pij. The point is that this probability depends only on the current state si and the target state sj, not on the states the chain visited before si. The probabilities pij form the transition matrix P. If vi is a vector giving the probability distribution at step i, then the probability distribution at step i + 1 is vi+1 = vi P. States that are impossible to leave are called absorbing states, and a Markov chain containing them is called an absorbing Markov chain. If i is an absorbing state, then pii = 1, and when the chain reaches this state, it ends there. States that are not absorbing are called transient. Three valuable concepts are associated with an absorbing Markov chain. The first is the probability bij that a chain starting from transient state i will be absorbed in absorbing state j. The second is the absorption time ti, the expected number of steps before the chain is absorbed in one of the absorbing states, given that it starts from state i. The last is the conditional absorption time τij, the expected number of steps before the chain is absorbed in absorbing state sj, given that it starts in transient state i. It is necessary to emphasize that the absorption time differs from the conditional absorption time: the absorption time is a weighted average of the conditional absorption times over the different absorbing states. A helpful method for calculating absorption probabilities and absorption times is the fundamental matrix method. In this method, the transition matrix is first written in the canonical form
\[ P = \begin{pmatrix} Q & R \\ 0 & I \end{pmatrix} \qquad (1) \]
In other words, in the canonical form the states are relabeled so that the absorbing states come last: Q collects the transitions among transient states and R the transitions from transient to absorbing states. The so-called fundamental matrix is defined as N = (I − Q)−1 and is used to obtain the absorption probabilities and absorption times. Let ti be the (average) absorption time of the Markov chain starting from state i, and bia1, bia2, … the absorption probabilities corresponding to absorbing states a1, a2, … starting from state i. Following the approach of Ref. [53], these quantities can be collected in matrix notation as t = (t1, …, tT)t and B = (biaj), where T is the number of transient states. Using the fundamental matrix method, one obtains the absorption probabilities and times as
\[ t = Nc, \qquad B = NR \qquad (2) \]
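As a quick illustration of Eq (2), the following minimal Python sketch computes N, t, and B for a hypothetical five-state fair gambler's-ruin chain (states 0 and 4 absorbing); the chain itself is an illustrative choice, not one taken from the paper.

```python
import numpy as np

# Hypothetical 5-state fair gambler's-ruin chain: states 0 and 4 are
# absorbing, states 1-3 are transient, and each step moves left or
# right with probability 1/2.  Canonical-form blocks:
Q = np.array([[0.0, 0.5, 0.0],
              [0.5, 0.0, 0.5],
              [0.0, 0.5, 0.0]])          # transient -> transient
R = np.array([[0.5, 0.0],
              [0.0, 0.0],
              [0.0, 0.5]])               # transient -> absorbing (0 and 4)

N = np.linalg.inv(np.eye(3) - Q)         # fundamental matrix N = (I - Q)^-1
t = N @ np.ones(3)                       # absorption times,         t = Nc
B = N @ R                                # absorption probabilities, B = NR

print(t)        # [3. 4. 3.]
print(B[:, 0])  # [0.75 0.5  0.25]  (probability of ending in state 0)
```

The rows of B sum to one, reflecting that an absorbing chain is eventually absorbed with certainty.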
where c = (1, 1, ⋯, 1)t. If there is no absorbing state in a Markov chain, the chain is called ergodic. In an ergodic Markov chain, it is possible to go from every state to every other state in a finite number of steps. If P is the transition matrix of an ergodic Markov chain, then as n → ∞, Pn approaches a limiting matrix W whose rows are all the same vector w, called the fixed row vector of P. This means that after a long run the Markov chain reaches an equilibrium in which the probability that the chain is in state j is given by wj. Obviously wP = w, i.e., w is the left null vector of the matrix P − I:
\[ w(P - I) = 0 \qquad (3) \]
In other words, the fixed row vector of P is the left eigenvector of P with eigenvalue one. The fundamental matrix method does not provide a recipe for calculating the conditional fixation time. We now state a theorem that gives the conditional fixation time for any absorbing state by adding some details to the fundamental matrix method.
Theorem: Let τia be the conditional fixation time for absorption in absorbing state a, given that the Markov chain starts from transient state i. In matrix notation,
\[ \tau^a = D_a^{-1} N\, b^a \qquad (4) \]
where in the above equation ba = (b1a, b2a, …, bTa)t is the column of B corresponding to the absorbing state a, Da = diag(b1a, …, bTa), and T is the number of transient states. Componentwise, τia = (1/bia) Σj Nij bja.
The proof of this theorem is presented in the Appendix. There is also a proof with different notation in Ref. [54].
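The theorem can be checked numerically on the same kind of toy chain. The sketch below (again a hypothetical fair gambler's-ruin chain, not the paper's code) evaluates Eq (4) componentwise as τia = (N ba)i / bia:

```python
import numpy as np

# Same hypothetical 5-state gambler's-ruin chain as before; here we
# evaluate the theorem componentwise: tau_ia = (N b^a)_i / b_ia.
Q = np.array([[0.0, 0.5, 0.0],
              [0.5, 0.0, 0.5],
              [0.0, 0.5, 0.0]])
R = np.array([[0.5, 0.0],
              [0.0, 0.0],
              [0.0, 0.5]])

N = np.linalg.inv(np.eye(3) - Q)   # fundamental matrix
B = N @ R                          # absorption probabilities

b_a = B[:, 1]                      # probabilities of absorbing in state 4
tau_a = (N @ b_a) / b_a            # conditional absorption times, Eq (4)

print(tau_a)                       # conditional times: 5, 4, and 7/3 steps
```

Note that the conditional times differ from the unconditional absorption times (3, 4, 3 for this chain), as the text emphasizes.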
Evolutionary games and corresponding Markov chains
This section develops a method based on correspondence between Markov chain dynamics and evolutionary game dynamics. This correspondence provides a sound mathematical device for analyzing evolutionary games.
Consider a population of size N in which n strategies interact with each other according to an n × n payoff matrix A = (aij), where aij is the payoff that strategy i receives from an interaction with strategy j.
In each time step, the expected payoff of each strategy is obtained in terms of the frequencies of the strategies and the payoff matrix as
\[ \pi(i) = \sum_{j=1}^{n} a_{ij} f_j \qquad (5) \]
where π(i) is the expected payoff of strategy i and fj is the frequency of strategy j. The expected payoff is generally interpreted as the fitness of a strategy in evolutionary game theory; in other words, strategies spread at rates proportional to their expected payoffs. There are many ways to obtain the fitness of a strategy from its expected payoff, such as an exponential payoff-to-fitness mapping. Depending on the update rule of the dynamics, the evolutionary process may lead to the fixation of a strategy, meaning that one strategy overcomes all others and occupies the whole population forever. In evolutionary games with fixation strategies, three concepts are noteworthy: the fixation probability, the probability that a strategy fixes in the population; the fixation time, the average number of time steps before the evolutionary process fixes to one of its fixation strategies; and the conditional fixation time, the average number of time steps before the evolutionary game fixes to a specific strategy. The update rule can also be such that no strategy can overcome the others forever. In that situation, after a long run of many time steps, the population reaches a stable condition, meaning that the probability of finding the evolutionary process in each state approaches a stationary value.
In the evolutionary process, the state of the population is described by the frequencies of the strategies, {fs1, fs2, fs3, …}, with fs1 + fs2 + … + fsn = N. A direct calculation shows that the number of states is
\[ \binom{N+n-1}{n-1} = \frac{(N+n-1)!}{N!\,(n-1)!} \qquad (6) \]
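The count in Eq (6) is easy to verify by brute force. The following Python snippet (an illustrative check, with the hypothetical helper name `count_states`) enumerates all frequency vectors summing to N and compares the count with the stars-and-bars formula:

```python
from itertools import product
from math import comb

# Brute-force count of population states {f_1, ..., f_n} with
# f_1 + ... + f_n = N, compared against the stars-and-bars
# count of Eq (6).
def count_states(N, n):
    return sum(1 for f in product(range(N + 1), repeat=n) if sum(f) == N)

assert count_states(10, 3) == comb(12, 2) == 66   # N = 10, n = 3
assert count_states(7, 2) == comb(8, 1) == 8      # N = 7,  n = 2
```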
In each time step, one strategy is chosen for reproduction and its offspring replaces a member of another strategy. In other words, in each step the frequency of one strategy increases by one, the frequency of another decreases by one, and the state of the evolutionary game changes. The update rule of the evolutionary game determines which strategy has a higher probability of reproduction and which has a higher probability of being replaced. The strategy chosen for reproduction and the strategy being replaced may also be the same, in which case the state of the evolutionary game remains unchanged.
Corresponding to each evolutionary game with l fixation strategies there is a Markov chain with l absorbing states, and corresponding to each evolutionary game with no fixation strategy there is an ergodic Markov chain. The states of the evolutionary game dynamics can be considered states of the Markov chain, and the transition matrix of the corresponding Markov chain is obtained from the update rule of the evolutionary game.
The fixation probability, fixation time, and conditional fixation time in the evolutionary game correspond to the absorption probability, absorption time, and conditional absorption time in the Markov chain. Since the fundamental matrix method is available in Markov chain theory, this duality between Markov chain dynamics and evolutionary game dynamics is very helpful for analyzing evolutionary games. In games with fixation strategies, using the theorem of the previous section, one can obtain the conditional fixation time for each strategy, and in evolutionary games with no fixation strategy, the stationary probability distribution of strategies is obtained by calculating the left null vector of the matrix P − I. In the next section, we use this correspondence to analyze the rock-scissors-paper game.
Results and discussion
As an example, in this section we analyze the most famous game with three strategies, the rock-scissors-paper (RSP) game [4, 55–67]. In the RSP game, each strategy overcomes the next one cyclically.
In the real world, the coexistence of species can arise from three competing species interacting like in the rock-paper-scissors game. According to the predictions of some models, the coexistence of all three competitors is possible if the interaction between them is local. In Ref. [68], the coexistence of three populations of Escherichia coli was studied empirically: coexistence is preserved when the interaction between species is localized, while when dispersal and interaction are nonlocal, diversity is lost and one species occupies the whole population. Another example of a rock-paper-scissors evolutionary game in biology is the change in the frequencies of adult side-blotched lizard morphs. In Ref. [69], the authors studied the frequencies of three side-blotched lizard morphs from 1990 to 1995. According to their observations, the fitness of each morph depends on the other morphs. They proposed an evolutionarily stable strategy model that predicts the frequency of each morph; estimating the parameters of the payoff matrix of the RSP game from field data, the model predicted the oscillation frequencies of the morphs.
Without loss of generality, the payoff matrix of the RSP game, with the strategies ordered as rock, scissors, paper, can be written as
\[ A = \begin{pmatrix} 0 & a_3 & -b_2 \\ -b_3 & 0 & a_1 \\ a_2 & -b_1 & 0 \end{pmatrix} \]
where ai, bi > 0. At first, we set the update rule so that the evolutionary process ends when the whole population is occupied by one strategy; the corresponding Markov chain is therefore an absorbing Markov chain. By changing the update rule of the evolutionary game, we then introduce the possibility of mutation, which means that when a strategy goes extinct, there is a probability that other strategies mutate into the extinct strategy, so it appears in the population again. In this situation, the evolutionary process never ends, but after a long run it reaches a stable condition, and the corresponding Markov chain is an ergodic Markov chain.
RSP game with absorbing states
Consider a population of size N in which each member can be one of three types: rock, scissors, or paper. We denote the strategies rock, scissors, and paper by 1, 2, and 3, respectively. The evolutionary process runs under a birth-death update rule: at each time step, one member of the population is chosen for reproduction, and the chosen member randomly selects another member of the population to be replaced by its offspring. The probabilities of being selected for reproduction and of being replaced are proportional to the strategies' frequencies. The expected payoff of each strategy enters the update rule via the Fermi distribution function [70]. The probability that, at a given time step, an offspring of strategy k replaces a member of strategy l is
\[ p_{k \to l} = \frac{f_k f_l}{N^2}\, F(\pi_k - \pi_l) \qquad (7) \]
where fk and fl are the frequencies of strategies k and l, respectively, and F is the Fermi function, defined as
\[ F(x) = \frac{1}{1 + e^{-\beta x}} \qquad (8) \]
where β > 0 is a constant. The expected payoffs πk of the strategies, for k = 1, 2, 3, are
\[ \pi_1 = a_3 f_2 - b_2 f_3, \qquad \pi_2 = a_1 f_3 - b_3 f_1, \qquad \pi_3 = a_2 f_1 - b_1 f_2 \qquad (9) \]
According to Eq (7), once a strategy goes extinct, there is no possibility that it reappears in the population, so sooner or later the whole population is occupied by one of the strategies. This means the corresponding Markov chain is an absorbing Markov chain. According to Eq (6), the number of states in this Markov chain is \(\binom{N+2}{2} = (N+1)(N+2)/2\). The states of the Markov chain can be arranged in an equilateral triangle. Fig 1 shows the Markov chain corresponding to the evolutionary game with this specific update rule for N = 10; arrows show the allowed transitions between states. Fig 1 can also be considered a simplex that determines the states of the evolutionary game. The vertices of the triangle are the absorbing states, corresponding to the fixation strategies of the evolutionary game. When the Markov chain is on one of the triangle's sides, it cannot return to the interior, because under this update rule an extinct strategy never comes back; the chain is then absorbed in one of the two vertices of that side. We are interested in obtaining the fixation probability and the conditional fixation time for every state in the simplex. After constructing the transition matrix using Eq (7) and calculating the fundamental matrix, one can obtain the fixation probability of every state of the simplex for each of the three absorbing states.
The total number of states is 55, and the arrows indicate the allowed transitions between states. Some arrows are two-way and some are one-way. Inside the simplex, all states are transient, and transitions between them are two-way. Transitions between the interior of the simplex and its sides are one-way: once the Markov chain is in a state on a side, it never goes back inside the simplex. In other words, once a strategy goes extinct, it never reappears in the population. Transitions between states on the sides are also two-way, except transitions between the absorbing states and their neighbors, which are one-way.
After finding the fixation probabilities of the states, using the theorem of the General method section, we can obtain the conditional fixation time for any state of the simplex.
To observe the footprint of the RSP game, we set the elements of the payoff matrix for both the neutral case and the strong-selection case. In the neutral case, the elements of the payoff matrix are a1 = a2 = a3 = 1, b1 = b2 = b3 = 2. In the strong-selection case, we set the elements of the payoff matrix extremely in favor of the paper strategy and to the detriment of the rock strategy: a1 = a3 = 1, a2 = 300, b1 = b3 = 0, b2 = 300. Figs 2–4 show the fixation probabilities of the paper, scissors, and rock strategies, respectively, when the process begins in each state of the simplex. In the neutral case, as the distance between the starting state and an absorbing state decreases, the probability of absorption into that state increases. After changing the payoff matrix in favor of the paper strategy, the probability of absorption into the paper strategy increases for all states inside the simplex; in this case, even states far from the fixation state R = 0, S = 0, P = N have a high probability of being absorbed into it.
The simulation (a) and analytical (b) results for the fixation probability of the paper strategy in an RSP game with neutral selection. The population size is 50. States close to the absorbing state P = N, S = 0, R = 0 have a higher chance of being absorbed into it. Panels (c) and (d) show the same results with strong selection in favor of the paper strategy and to the detriment of the rock strategy. Compared to the neutral case, many states have a higher chance of absorption into the paper strategy.
The simulation (a) and analytical (b) results for the conditional fixation time of the paper strategy in an RSP game with neutral selection. The population size is 50. Unsurprisingly, states close to the absorbing state P = N, S = 0, R = 0 reach it in fewer steps. Panels (c) and (d) show the same results for strong selection in favor of the paper strategy and to the detriment of the rock strategy. Compared to the neutral case, the number of steps needed to reach P = N, S = 0, R = 0 is reduced due to the strong selection in favor of the paper strategy.
The simulation (a) and analytical (b) results for the fixation probability of the rock strategy in an RSP game with neutral selection. The population size is 50. Panels (c) and (d) show the same results with strong selection in favor of the paper strategy and to the detriment of the rock strategy. Since the payoff matrix is set to the detriment of the rock strategy, many states, even those close to the absorbing state P = 0, S = 0, R = N, have a low chance of being absorbed into it.
There are also states that have a high probability of absorption into the scissors strategy in the neutral case but a high probability of absorption into the paper strategy in the strong-selection case, because the payoff matrix was changed in favor of the paper strategy. Similarly, some states have a high probability of absorption into the rock strategy in the neutral case but a high probability of absorption into the scissors strategy in the strong-selection case, because the payoff matrix was changed to the detriment of the rock strategy. In the strong-selection case, there are fewer states with a high probability of absorption into the rock strategy. Changing the payoff matrix affects the conditional fixation times too. Figs 5–7 show the conditional fixation times of the paper, scissors, and rock strategies, respectively, when the process begins in each state of the simplex.
The simulation (a) and analytical (b) results for the fixation probability of the scissors strategy in an RSP game with neutral selection. The population size is 50. States close to the absorbing state P = 0, S = N, R = 0 have a higher chance of being absorbed into it. Panels (c) and (d) show the same results with strong selection in favor of the paper strategy and to the detriment of the rock strategy. Compared to the neutral case, some states close to P = 0, S = 0, R = N have a high chance of being absorbed into P = 0, S = N, R = 0, a consequence of imposing strong selection to the detriment of the rock strategy.
The simulation (a) and analytical (b) results for the conditional fixation time of the rock strategy in an RSP game with neutral selection. The population size is 50. Panels (c) and (d) show the same results for strong selection in favor of the paper strategy and to the detriment of the rock strategy. Due to the strong selection against the rock strategy, the conditional fixation time increases for all states of the simplex.
Comparing the conditional fixation times in the neutral and strong-selection cases shows that absorption into the paper strategy happens in a shorter time under strong selection. As shown in Fig 6, states close to the fixation state R = N, S = 0, P = 0 are absorbed into the scissors strategy in a shorter time in the strong-selection case. The conditional fixation time for absorption into the rock strategy, on the other hand, increases under strong selection for all simplex states; the reason, again, is the change of the payoff matrix to the detriment of the rock strategy. In all figures, the results of the analytical approach and the simulations are compared. In most of them the simulation results coincide with the analytical results, but in Figs 6 and 7, in the strong-selection panels, the agreement is less obvious. Since in some states the probability of absorption into the rock strategy is very low in the strong-selection case, a large number of realizations of the evolutionary game is needed to obtain even a limited number of realizations that end in the rock strategy; the simulation must therefore be repeated many more times to obtain an accurate result. The same is true for the conditional fixation time of the scissors strategy. The difficulty of obtaining simulation results under such conditions shows the necessity of an analytical method.
RSP game without absorbing states
One may set the update rule in such a way that none of the strategies fixes forever. In this situation, the corresponding Markov chain is ergodic. To compare our final result with the numerical results obtained in previous works, we use the update rule of Ref. [39]. According to this update rule, the probability that, in a given time step, one member of the population switches from strategy l to strategy k is proportional to Tl→k = ε + W(πk − πl), where ε is a positive constant that guarantees mutation in the process and W(x) is zero for negative arguments and acts as the identity for non-negative arguments. The elements of the transition matrix can be calculated as
\[ p_{ij} = \frac{T_{l \to k}}{\sum_{i \to j'} T_{l' \to k'}} \qquad (10) \]
where the transition i → j corresponds to one member switching from strategy l to strategy k, and the normalization runs over all allowed transitions out of each state of the Markov chain. Fig 8 shows the Markov chain corresponding to this update rule. Unlike the previous update rule, whenever the Markov chain is in a state, there is a nonzero probability of leaving it, and therefore there is no absorbing state. The limiting probability distribution of the evolutionary game can be obtained by calculating the left null vector of the matrix P − I. Fig 9 shows the analytical and simulation results for the limiting probability distribution after a long run (100 million steps). The simulation and analytical results agree with each other. As a double check, one can compare the results with the simulation results obtained with the same update rule in Ref. [39].
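Obtaining the stationary distribution of an ergodic chain as the left null vector of P − I amounts to a single eigenvector computation. Here is a minimal sketch on a hypothetical three-state ergodic chain (an illustrative example, not the RSP chain itself):

```python
import numpy as np

# Stationary distribution of an ergodic chain: the left eigenvector of P
# with eigenvalue 1, i.e. the left null vector of P - I.
P = np.array([[0.5, 0.3, 0.2],
              [0.1, 0.6, 0.3],
              [0.2, 0.2, 0.6]])

vals, vecs = np.linalg.eig(P.T)                  # left eigenvectors of P
w = np.real(vecs[:, np.argmin(np.abs(vals - 1.0))])
w /= w.sum()                                     # normalize to probabilities

assert np.allclose(w @ P, w)                     # w is the fixed row vector
print(w)                                         # here: (10/43, 16/43, 17/43)
```

For larger chains, solving w(P − I) = 0 with a sparse eigensolver is usually preferable to forming the dense eigendecomposition.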
The total number of states is 55, and the arrows indicate the allowed transitions between states. All arrows are two-way, which means that whenever the Markov chain is in a state, there is a nonzero probability of escaping from it.
The update rule follows Eq (10). The payoff matrix in (a) and (b) is ai = 1, bi = 0.5; in (c) and (d), ai = 1, bi = 1; and in (e) and (f), ai = 1, bi = 2. To evaluate non-neutral selection, in (g) and (h) the payoff matrix is set to ai = 1, b1 = 0.5, b2 = 2/3, b3 = 3.
Conclusion
This paper introduced the Markov chain method as an accurate analytical method for analyzing evolutionary game dynamics. Previously, the Markov chain method had been used for studying two-strategy evolutionary games or the Moran process, but using the theorem explained in the General method section, it can be applied to any evolutionary game with any number of strategies. The method is flexible with respect to changes in the update rule of the evolutionary game. For update rules that admit fixation strategies, the fixation probability of each strategy and the fixation time are calculable by the standard Markov chain method, and by the theorem of the General method section one can also obtain the conditional fixation time for each strategy. As an example, RSP games were evaluated with two update rules. Under the first update rule, each of the three strategies can fix; using the fundamental matrix method, the fixation probabilities and conditional fixation times obtained were consistent with the simulation results. Under the second update rule, mutation is possible in the evolutionary game and there is no fixation strategy; taking the left null vector of the matrix P − I yields the limiting probability distribution, in agreement with the simulation results. This method can also be applied to evolutionary games with more than three strategies.
There is a wide range of possible applications of the Markov chain method beyond the RSP game. In Refs. [50, 51], we used the Markov chain method to evaluate the Moran process. In many situations, social dilemmas are represented by the Prisoner's Dilemma, Chicken, or Stag Hunt games [71, 72]; applying this method to the archetypal 2 × 2 symmetric games should therefore lead to significant results.
Codes related to this article are hosted on GitHub at https://github.com/mehdiphy/rock-scissors-paper-evolutionary-game.
Appendix
In this appendix, we prove the theorem of the General method section. The theorem concerns the calculation of the conditional absorption time in an absorbing Markov chain. It has already been proven [53] that in an absorbing Markov chain the fundamental matrix N = (I − Q)−1 exists and can be written as an infinite series
\[ N = (I - Q)^{-1} = I + Q + Q^2 + \cdots = \sum_{k=0}^{\infty} Q^k \qquad (11) \]
Let si and sj be two transient states, and assume that the chain starts in state si. Let X(k) be a random variable that equals 1 if the chain is in state sj after k steps and 0 otherwise. Let Ea denote the event that the chain is absorbed in the absorbing state sa. To obtain the conditional absorption time τia, we need to calculate P(X(k) = 1 | Ea). To this end, we use the following relation for conditional probabilities:
\[ P(X(k)=1 \mid E_a) = \frac{P(E_a \mid X(k)=1)\, P(X(k)=1)}{P(E_a)} \qquad (12) \]
Clearly P(X(k) = 1) = (Qk)ij and P(Ea) = bia. Now, using the Markov property,
\[ P(E_a \mid X(k)=1) = b_{ja} \qquad (13) \]
we arrive at
\[ P(X(k)=1 \mid E_a) = \frac{(Q^k)_{ij}\, b_{ja}}{b_{ia}} \qquad (14) \]
The expected number of times the chain is in state sj in the first m steps, given that it is absorbed in state sa and starts in state si, is
\[ \sum_{k=0}^{m} \frac{(Q^k)_{ij}\, b_{ja}}{b_{ia}} \qquad (15) \]
and when m goes to infinity, using Eq (11), this becomes
\[ \frac{N_{ij}\, b_{ja}}{b_{ia}} \qquad (16) \]
Using these conditional expected numbers of visits, summed over all transient states sj, we can calculate the conditional absorption time as
\[ \tau_{ia} = \frac{1}{b_{ia}} \sum_{j=1}^{T} N_{ij}\, b_{ja} \]
where T is the number of transient states.
In this way, one can obtain the average conditional absorption time for the processes that are eventually absorbed in any given absorbing state.
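One way to sanity-check this result numerically is to build the chain conditioned on absorption in sa explicitly (a Doob h-transform with hi = bia, which is not used in the paper but is equivalent) and verify that its unconditional absorption time reproduces τia. The sketch below does this for a hypothetical fair gambler's-ruin chain:

```python
import numpy as np

# Deterministic check of the theorem on a hypothetical fair
# gambler's-ruin chain: conditioning on absorption at state a is a
# Doob h-transform with h_i = b_ia, and the absorption time of the
# transformed chain must reproduce tau_ia = (N b^a)_i / b_ia.
Q = np.array([[0.0, 0.5, 0.0],
              [0.5, 0.0, 0.5],
              [0.0, 0.5, 0.0]])
R = np.array([[0.5, 0.0],
              [0.0, 0.0],
              [0.0, 0.5]])
N = np.linalg.inv(np.eye(3) - Q)       # fundamental matrix
B = N @ R
b = B[:, 1]                            # absorption at the right vertex

tau = (N @ b) / b                      # the theorem

Qh = Q * b[None, :] / b[:, None]       # h-transformed transient block
tau_h = np.linalg.solve(np.eye(3) - Qh, np.ones(3))

assert np.allclose(tau, tau_h)         # both give 5, 4, 7/3
```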
Acknowledgments
The authors would like to thank Arne Traulsen for reading the first draft of this paper and for useful comments and suggestions.
References
- 1. Smith JM, Price GR. The logic of animal conflict. Nature. 1973 Nov;246(5427):15–8.
- 2. Hofbauer J, Sigmund K. The theory of evolution and dynamical systems: mathematical aspects of selection. 1988.
- 3. Weibull JW. Evolutionary game theory. MIT press; 1997.
- 4. Szolnoki A, Mobilia M, Jiang LL, Szczesny B, Rucklidge AM, Perc M. Cyclic dominance in evolutionary games: a review. Journal of the Royal Society Interface. 2014 Nov 6;11(100):20140735. pmid:25232048
- 5. Taylor C, Fudenberg D, Sasaki A, Nowak MA. Evolutionary game dynamics in finite populations. Bulletin of mathematical biology. 2004 Nov;66(6):1621–44. pmid:15522348
- 6. Perc M, Grigolini P. Collective behavior and evolutionary games-an introduction. Chaos, Solitons & Fractals. 2013;56:1–5.
- 7. Hofbauer J, Sigmund K. Evolutionary game dynamics. Bulletin of the American mathematical society. 2003;40(4):479–519.
- 8. Amaral MA, Wardil L, Perc M, da Silva JK. Evolutionary mixed games in structured populations: Cooperation and the benefits of heterogeneity. Physical Review E. 2016 Apr 6;93(4):042304. pmid:27176309
- 9. Traulsen A, Nowak MA, Pacheco JM. Stochastic dynamics of invasion and fixation. Physical Review E. 2006 Jul 17;74(1):011909. pmid:16907129
- 10. Perc M, Szolnoki A. Coevolutionary games—a mini review. BioSystems. 2010 Feb 1;99(2):109–25. pmid:19837129
- 11. Zeeman EC. Population dynamics from game theory. In Global theory of dynamical systems 1980 (pp. 471–497). Springer, Berlin, Heidelberg.
- 12. Ohtsuki H, Nowak MA. The replicator equation on graphs. Journal of theoretical biology. 2006 Nov 7;243(1):86–97. pmid:16860343
- 13. Lieberman E, Hauert C, Nowak MA. Evolutionary dynamics on graphs. Nature. 2005 Jan;433(7023):312–6. pmid:15662424
- 14. Nowak MA, Sasaki A, Taylor C, Fudenberg D. Emergence of cooperation and evolutionary stability in finite populations. Nature. 2004 Apr;428(6983):646–50. pmid:15071593
- 15. Li X, Hao G, Wang H, Xia C, Perc M. Reputation preferences resolve social dilemmas in spatial multigames. Journal of Statistical Mechanics: Theory and Experiment. 2021 Jan 18;2021(1):013403.
- 16. Traulsen A, Claussen JC, Hauert C. Coevolutionary dynamics: from finite to infinite populations. Physical review letters. 2005 Dec 2;95(23):238701. pmid:16384353
- 17. Black AJ, McKane AJ. Stochastic formulation of ecological models and their applications. Trends in ecology and evolution. 2012 Jun 1;27(6):337–45. pmid:22406194
- 18. Broom M, Rychtář J. An analysis of the fixation probability of a mutant on special classes of non-directed graphs. Proceedings of the Royal Society A: Mathematical, Physical and Engineering Sciences. 2008 Oct 8;464(2098):2609–27.
- 19. Altrock PM, Gokhale CS, Traulsen A. Stochastic slowdown in evolutionary processes. Physical Review E. 2010 Jul 28;82(1):011925. pmid:20866666
- 20. Hilbe C. Local replicator dynamics: a simple link between deterministic and stochastic models of evolutionary game theory. Bulletin of mathematical biology. 2011 Sep;73(9):2068–87. pmid:21181502
- 21. Park JI, Kim BJ, Park HJ. Stochastic resonance of abundance fluctuations and mean time to extinction in an ecological community. Physical Review E. 2021 Aug 26;104(2):024133. pmid:34525626
- 22. Nowak MA, May RM. Evolutionary games and spatial chaos. Nature. 1992 Oct;359(6398):826–9.
- 23. Szabó G, Fath G. Evolutionary games on graphs. Physics reports. 2007 Jul 1;446(4-6):97–216.
- 24. Taylor PD, Day T, Wild G. Evolution of cooperation in a finite homogeneous graph. Nature. 2007 May;447(7143):469–72. pmid:17522682
- 25. Débarre F, Hauert C, Doebeli M. Social evolution in structured populations. Nature Communications. 2014 Mar 6;5(1):1–7. pmid:24598979
- 26. Ohtsuki H, Hauert C, Lieberman E, Nowak MA. A simple rule for the evolution of cooperation on graphs and social networks. Nature. 2006 May;441(7092):502–5. pmid:16724065
- 27. Duh M, Gosak M, Perc M. Public goods games on random hyperbolic graphs with mixing. Chaos, Solitons, Fractals. 2021 Mar 1;144:110720.
- 28. Poncela J, Gómez-Gardeñes J, Traulsen A, Moreno Y. Evolutionary game dynamics in a growing structured population. New Journal of Physics. 2009 Aug 24;11(8):083031.
- 29. Dehghani MA, Darooneh AH, Kohandel M. The network structure affects the fixation probability when it couples to the birth-death dynamics in finite population. PLoS computational biology. 2021 Oct 27;17(10):e1009537. pmid:34705822
- 30. Brush E, Brännström Å, Dieckmann U. Indirect reciprocity with negative assortment and limited information can promote cooperation. Journal of theoretical biology. 2018 Apr 14;443:56–65. pmid:29337264
- 31. Traulsen A, Hauert C, De Silva H, Nowak MA, Sigmund K. Exploration dynamics in evolutionary games. Proceedings of the National Academy of Sciences. 2009 Jan 20;106(3):709–12. pmid:19124771
- 32. Wu B, Park HJ, Wu L, Zhou D. Evolution of cooperation driven by self-recommendation. Physical Review E. 2019 Oct 14;100(4):042303. pmid:31770974
- 33. Li Y, Wang H, Du W, Perc M, Cao X, Zhang J. Resonance-like cooperation due to transaction costs in the prisoner’s dilemma game. Physica A: Statistical Mechanics and its Applications. 2019 May 1;521:248–57.
- 34. Hilbe C, Wu B, Traulsen A, Nowak MA. Cooperation and control in multiplayer social dilemmas. Proceedings of the National Academy of Sciences. 2014 Nov 18;111(46):16425–30. pmid:25349400
- 35. Shen C, Chu C, Shi L, Perc M, Wang Z. Aspiration-based coevolution of link weight promotes cooperation in the spatial prisoner’s dilemma game. Royal Society open science. 2018 May 2;5(5):180199. pmid:29892454
- 36. Liu Y, Chen X, Zhang L, Wang L, Perc M. Win-stay-lose-learn promotes cooperation in the spatial prisoner’s dilemma game. PloS one. 2012 Feb 17;7(2):e30689. pmid:22363470
- 37. Wang X, Chen X, Wang L. Evolution of egalitarian social norm by resource management. PloS one. 2020 Jan 30;15(1):e0227902. pmid:31999744
- 38. Huberman BA, Glance NS. Evolutionary games and computer simulations. Proceedings of the National Academy of Sciences. 1993 Aug 15;90(16):7716–8. pmid:8356075
- 39. Yu Q, Fang D, Zhang X, Jin C, Ren Q. Stochastic evolution dynamic of the rock–scissors–paper game based on a quasi birth and death process. Scientific reports. 2016 Jun 27;6(1):1–9. pmid:27346701
- 40. Askari M, Miraghaei ZM, Samani KA. The effect of hubs and shortcuts on fixation time in evolutionary graphs. Journal of Statistical Mechanics: Theory and Experiment. 2017 Jul 28;2017(7):073501.
- 41. Frean M, Rainey PB, Traulsen A. The effect of population structure on the rate of evolution. Proceedings of the Royal Society B: Biological Sciences. 2013 Jul 7;280(1762):20130211. pmid:23677339
- 42. Hindersin L, Möller M, Traulsen A, Bauer B. Exact numerical calculation of fixation probability and time on graphs. Biosystems. 2016 Dec 1;150:87–91. pmid:27555086
- 43. Broom M, Rychtář J, Stadler BT. Evolutionary dynamics on graphs-the effect of graph structure and initial placement on mutant spread. Journal of Statistical Theory and Practice. 2011 Sep 1;5(3):369–81.
- 44. Antal T, Scheuring I. Fixation of strategies for an evolutionary game in finite populations. Bulletin of mathematical biology. 2006 Nov;68(8):1923–44. pmid:17086490
- 45. Altrock PM, Traulsen A, Nowak MA. Evolutionary games on cycles with strong selection. Physical Review E. 2017 Feb 13;95(2):022407. pmid:28297871
- 46. Hindersin L, Traulsen A. Most undirected random graphs are amplifiers of selection for birth-death dynamics, but suppressors of selection for death-birth dynamics. PLoS computational biology. 2015 Nov 6;11(11):e1004437. pmid:26544962
- 47. Moran PA. Random processes in genetics. In Mathematical proceedings of the cambridge philosophical society 1958 Jan (Vol. 54, No. 1, pp. 60–71). Cambridge University Press.
- 48. Askari M, Samani KA. Analytical calculation of average fixation time in evolutionary graphs. Physical Review E. 2015 Oct 13;92(4):042707. pmid:26565272
- 49. Shakarian P, Roos P, Johnson A. A review of evolutionary graph theory with applications to game theory. Biosystems. 2012 Feb 1;107(2):66–80. pmid:22020107
- 50. Hajihashemi M, Samani KA. Path to fixation of evolutionary processes in graph-structured populations. The European Physical Journal B. 2021 Feb;94(2):1–9.
- 51. Hajihashemi M, Samani KA. Fixation time in evolutionary graphs: A mean-field approach. Physical Review E. 2019 Apr 12;99(4):042304. pmid:31108590
- 52. Vasconcelos VV, Santos FP, Santos FC, Pacheco JM. Stochastic dynamics through hierarchically embedded Markov chains. Physical Review Letters. 2017 Feb 1;118(5):058301. pmid:28211729
- 53. Grinstead CM, Snell JL. Introduction to probability. American Mathematical Soc.; 1997.
- 54. Ewens WJ. Mathematical population genetics: theoretical introduction. New York: Springer; 2004 Jan 9.
- 55. Frean M, Abraham ER. Rock-paper-scissors and the survival of the weakest. Proceedings of the Royal Society B: Biological Sciences. 2001;268:1323–7. pmid:11429130
- 56. Cheng H, Yao N, Huang ZG, Park J, Do Y, Lai YC. Mesoscopic interactions and species coexistence in evolutionary game dynamics of cyclic competitions. Scientific reports. 2014 Dec 15;4(1):1–7. pmid:25501627
- 57. Reichenbach T, Mobilia M, Frey E. Mobility promotes and jeopardizes biodiversity in rock–paper–scissors games. Nature. 2007 Aug;448(7157):1046–9. pmid:17728757
- 58. Szolnoki A, Perc M. Zealots tame oscillations in the spatial rock-paper-scissors game. Physical Review E. 2016 Jun 10;93(6):062307. pmid:27415280
- 59. Jiang LL, Zhou T, Perc M, Wang BH. Effects of competition on pattern formation in the rock-paper-scissors game. Physical Review E. 2011 Aug 8;84(2):021912. pmid:21929025
- 60. Park HJ, Pichugin Y, Traulsen A. Why is cyclic dominance so rare?. Elife. 2020 Sep 4;9:e57857. pmid:32886604
- 61. Kabir KA, Tanimoto J. The role of pairwise nonlinear evolutionary dynamics in the rock–paper–scissors game with noise. Applied Mathematics and Computation. 2021 Apr 1;394:125767.
- 62. Yoshida T, Mizoguchi T, Hatsugai Y. Chiral edge modes in evolutionary game theory: A kagome network of rock-paper-scissors cycles. Physical Review E. 2021 Aug 3;104(2):025003. pmid:34525642
- 63. Verma T, Gupta AK. Evolutionary dynamics of rock-paper-scissors game in the patchy network with mutations. Chaos, Solitons and Fractals. 2021 Dec 1;153:111538.
- 64. Mobilia M. Oscillatory dynamics in rock–paper–scissors games with mutations. Journal of Theoretical Biology. 2010 May 7;264(1):1–10. pmid:20083126
- 65. Fisher L. Rock, paper, scissors: game theory in everyday life. Basic Books; 2008 Nov 4.
- 66. Szolnoki A, Perc M. Biodiversity in models of cyclic dominance is preserved by heterogeneity in site-specific invasion rates. Scientific Reports. 2016 Dec 5;6(1):1–9. pmid:27917952
- 67. Xu B, Zhou HJ, Wang Z. Cycle frequency in standard rock–paper–scissors games: evidence from experimental economics. Physica A: Statistical Mechanics and its Applications. 2013 Oct 15;392(20):4997–5005.
- 68. Kerr B, Riley MA, Feldman MW, Bohannan BJ. Local dispersal promotes biodiversity in a real-life game of rock–paper–scissors. Nature. 2002 Jul;418(6894):171–4. pmid:12110887
- 69. Sinervo B, Lively CM. The rock–paper–scissors game and the evolution of alternative male strategies. Nature. 1996 Mar;380(6571):240–3.
- 70. Szabó G, Tőke C. Evolutionary prisoner’s dilemma game on a square lattice. Physical Review E. 1998 Jul 1;58(1):69.
- 71. Tanimoto J. Evolutionary games with sociophysics. Evolutionary Economics. 2019.
- 72. Tanimoto J. Sociophysics Approach to Epidemics. Springer Nature; 2021.