Fig 1.
The absorbing Markov chain corresponding to an evolutionary RSP game with population size N = 10.
The total number of states is 55, and the arrows indicate the allowed transitions between states; some arrows are two-way and some are one-way. All states in the interior of the simplex are transient, and transitions between them are two-way. Transitions from the interior of the simplex to its sides are one-way: once the Markov chain reaches a state on a side, it never returns to the interior. In other words, once a strategy goes extinct, it never reappears in the population. Transitions between states on the sides are also two-way, except for transitions between the absorbing states and their neighbors, which are one-way.
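This transition structure can be verified directly. The sketch below (our own illustration, not the paper's code) enumerates the states (R, S, P) with R + S + P = N and checks, for a Moran-type step in which one individual reproduces and one dies, that the boundary is one-way and the three corners are absorbing:

```python
from itertools import product

N = 10

# All population states (r, s, p) with r + s + p = N.
states = [(r, s, N - r - s) for r in range(N + 1) for s in range(N + 1 - r)]

def neighbors(state):
    """One Moran step: an individual of strategy i replaces one of strategy j.
    A strategy can only gain a copy if at least one copy is already present."""
    counts = list(state)
    out = []
    for i, j in product(range(3), repeat=2):
        if i != j and counts[i] > 0 and counts[j] > 0:
            nxt = list(counts)
            nxt[i] += 1
            nxt[j] -= 1
            out.append(tuple(nxt))
    return out

# Once a strategy is extinct it never reappears: no neighbor of a state
# with r = 0 has r > 0 (and likewise for s and p by symmetry).
edge_states = [st for st in states if st[0] == 0]
assert all(nb[0] == 0 for st in edge_states for nb in neighbors(st))

# The three monomorphic corners are absorbing: no outgoing moves at all.
corners = [(N, 0, 0), (0, N, 0), (0, 0, N)]
assert all(neighbors(c) == [] for c in corners)
```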
Fig 2.
The simulation (a) and analytical (b) results for the fixation probability of strategy paper in an RSP game with neutral selection. The population size is 50. States close to the absorbing state P = N, S = 0, R = 0 have a higher chance of being absorbed into it. Panels (c) and (d) show the same results under strong selection in favor of the paper strategy and against the rock strategy. Compared to the neutral case, many states have a higher chance of absorption into the paper strategy.
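The analytical fixation probabilities follow from the fundamental matrix of the absorbing chain, B = (I − Q)⁻¹R, where Q holds transitions among transient states and R transitions into the absorbing corners. A minimal sketch for a small population (N = 6; the neutral Moran update here is our simplifying assumption, not necessarily the paper's Eq 10):

```python
import numpy as np
from itertools import product

N = 6  # small population so the linear system stays tiny
states = [(r, s, N - r - s) for r in range(N + 1) for s in range(N + 1 - r)]
absorbing = [(N, 0, 0), (0, N, 0), (0, 0, N)]
transient = [st for st in states if st not in absorbing]
t_idx = {st: k for k, st in enumerate(transient)}
a_idx = {st: k for k, st in enumerate(absorbing)}

n_t = len(transient)
Q = np.zeros((n_t, n_t))          # transient -> transient
R = np.zeros((n_t, 3))            # transient -> absorbing
for st in transient:
    for parent, dead in product(range(3), repeat=2):
        # Neutral Moran step: parent and dying individual chosen uniformly.
        p = (st[parent] / N) * (st[dead] / N)
        if p == 0:
            continue
        nxt = list(st)
        nxt[parent] += 1
        nxt[dead] -= 1
        nxt = tuple(nxt)
        if nxt in t_idx:
            Q[t_idx[st], t_idx[nxt]] += p
        else:
            R[t_idx[st], a_idx[nxt]] += p

# Fixation (absorption) probabilities: B = (I - Q)^(-1) R.
B = np.linalg.solve(np.eye(n_t) - Q, R)
assert np.allclose(B.sum(axis=1), 1)  # the chain is absorbed somewhere

# Sanity check: under neutral selection the fixation probability of a
# strategy equals its initial frequency (the count is a martingale).
assert np.isclose(B[t_idx[(2, 2, 2)], a_idx[(0, 0, N)]], 2 / N)
```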
Fig 3.
The simulation (a) and analytical (b) results for the conditional fixation time of strategy paper in an RSP game with neutral selection. The population size is 50. Unsurprisingly, states close to the absorbing state P = N, S = 0, R = 0 reach it in fewer steps. Panels (c) and (d) show the same results under strong selection in favor of the paper strategy and against the rock strategy. Compared to the neutral case, the number of steps needed to reach P = N, S = 0, R = 0 is reduced by the strong selection in favor of strategy P.
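The conditional fixation time can also be obtained analytically. Writing b for the vector of fixation probabilities of paper, the vector w = (I − Q)⁻¹b gives w_i = E[steps × 1{paper fixes}], so the conditional time is t_i = w_i / b_i wherever b_i > 0. A self-contained sketch under the same illustrative neutral-Moran assumption (small N, not necessarily the paper's update rule):

```python
import numpy as np
from itertools import product

N = 6
states = [(r, s, N - r - s) for r in range(N + 1) for s in range(N + 1 - r)]
absorbing = [(N, 0, 0), (0, N, 0), (0, 0, N)]
transient = [st for st in states if st not in absorbing]
t_idx = {st: k for k, st in enumerate(transient)}
a_idx = {st: k for k, st in enumerate(absorbing)}

n_t = len(transient)
Q = np.zeros((n_t, n_t))
R = np.zeros((n_t, 3))
for st in transient:
    for parent, dead in product(range(3), repeat=2):
        p = (st[parent] / N) * (st[dead] / N)  # neutral Moran step
        if p == 0:
            continue
        nxt = list(st)
        nxt[parent] += 1
        nxt[dead] -= 1
        nxt = tuple(nxt)
        if nxt in t_idx:
            Q[t_idx[st], t_idx[nxt]] += p
        else:
            R[t_idx[st], a_idx[nxt]] += p

I_minus_Q = np.eye(n_t) - Q
B = np.linalg.solve(I_minus_Q, R)     # fixation probabilities
b = B[:, a_idx[(0, 0, N)]]            # fixation probability of paper
w = np.linalg.solve(I_minus_Q, b)     # w_i = E[steps * 1{paper fixes}]

# Conditional fixation time t_i = w_i / b_i, defined where b_i > 0.
t = np.full(n_t, np.nan)
mask = b > 0
t[mask] = w[mask] / b[mask]

# States nearer the paper corner reach it in fewer expected steps.
assert t[t_idx[(0, 1, N - 1)]] < t[t_idx[(0, N - 1, 1)]]
```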
Fig 4.
The simulation (a) and analytical (b) results for the fixation probability of strategy rock in an RSP game with neutral selection. The population size is 50. Panels (c) and (d) show the same results under strong selection in favor of the paper strategy and against the rock strategy. Since the payoff matrix disfavors the rock strategy, many states, even those close to the absorbing state P = 0, S = 0, R = N, have a smaller chance of being absorbed into P = 0, S = 0, R = N.
Fig 5.
The simulation (a) and analytical (b) results for the fixation probability of strategy paper in an RSP game with neutral selection. The population size is 50. States close to the absorbing state P = N, S = 0, R = 0 have a higher chance of being absorbed into it. Panels (c) and (d) show the same results under strong selection in favor of the paper strategy and against the rock strategy. Compared to the neutral case, many states have a higher chance of absorption into the paper strategy.
Fig 6.
The simulation (a) and analytical (b) results for the fixation probability of strategy scissors in an RSP game with neutral selection. The population size is 50. States close to the absorbing state P = 0, S = N, R = 0 have a higher chance of being absorbed into it. Panels (c) and (d) show the same results under strong selection in favor of the paper strategy and against the rock strategy. Compared to the neutral case, some states close to P = 0, S = 0, R = N nevertheless have a high chance of being absorbed into P = 0, S = N, R = 0, because of the strong selection imposed against the rock strategy.
Fig 7.
The simulation (a) and analytical (b) results for the conditional fixation time of strategy rock in an RSP game with neutral selection. The population size is 50. Panels (c) and (d) show the same results under strong selection in favor of the paper strategy and against the rock strategy. Due to the strong selection against strategy rock, the conditional fixation time increases for all states of the simplex.
Fig 8.
The Markov chain corresponding to an ergodic RSP game for N = 10.
The total number of states is 55, and the arrows indicate the allowed transitions between states. All arrows are two-way, which means that whenever the Markov chain is in a state, there is a non-zero probability of escaping from it.
Fig 9.
Stationary probability distribution of an RSP game with population size 50.
The update rule is given by Eq 10. The payoff matrix is ai = 1, bi = 0.5 in (a) and (b); ai = 1, bi = 1 in (c) and (d); and ai = 1, bi = 2 in (e) and (f). To evaluate non-neutral selection, in (g) and (h) the payoff matrix is set to ai = 1, b1 = 0.5, b2 = 2/3, b3 = 3.
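For an ergodic chain, the stationary distribution is the left fixed point of the transition matrix. The sketch below is our own minimal illustration, assuming (as one common modeling choice, not necessarily the paper's Eq 10) that ergodicity comes from a small mutation rate μ added to a neutral Moran step, so that extinct strategies can reappear:

```python
import numpy as np

N = 5      # small population for illustration
mu = 0.05  # mutation rate (assumption: two-way boundary arrows come from mutation)

states = [(r, s, N - r - s) for r in range(N + 1) for s in range(N + 1 - r)]
index = {st: k for k, st in enumerate(states)}
M = len(states)

T = np.zeros((M, M))
for st in states:
    k = index[st]
    for parent in range(3):
        for dead in range(3):
            # Neutral choice of parent and dying individual.
            p_pick = (st[parent] / N) * (st[dead] / N)
            if p_pick == 0:
                continue
            for child in range(3):
                # Offspring copies the parent, or mutates to one of the
                # other two strategies with probability mu/2 each.
                p_child = (1 - mu) if child == parent else mu / 2
                nxt = list(st)
                nxt[child] += 1
                nxt[dead] -= 1
                T[k, index[tuple(nxt)]] += p_pick * p_child

# Power iteration for the stationary distribution (unique: chain is ergodic).
pi = np.ones(M) / M
for _ in range(20000):
    pi = pi @ T
pi /= pi.sum()

assert np.allclose(pi @ T, pi)  # stationarity
assert (pi > 0).all()           # every state has positive probability
# Neutral payoffs and symmetric mutation give a permutation-symmetric pi.
assert np.isclose(pi[index[(3, 1, 1)]], pi[index[(1, 3, 1)]])
```

With neutral payoffs the distribution is symmetric under relabeling the three strategies; a non-neutral payoff matrix such as the one in panels (g) and (h) would break this symmetry.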