
Autocratic strategies in Cournot oligopoly game

  • Masahiko Ueda ,

    Roles Conceptualization, Formal analysis, Funding acquisition, Investigation, Methodology, Project administration, Validation, Writing – original draft, Writing – review & editing

    m.ueda@yamaguchi-u.ac.jp

    Affiliation Graduate School of Sciences and Technology for Innovation, Yamaguchi University, Yamaguchi, Japan

  • Shoma Yagi,

    Roles Data curation, Investigation, Methodology, Resources, Software, Visualization

    Affiliation Department of Mathematical and Systems Engineering, Shizuoka University, Hamamatsu, Japan

  • Genki Ichinose

    Roles Formal analysis, Funding acquisition, Investigation, Methodology, Project administration, Supervision, Validation, Visualization, Writing – original draft, Writing – review & editing

    Affiliation Department of Mathematical and Systems Engineering, Shizuoka University, Hamamatsu, Japan

Abstract

An oligopoly is a market in which the price of goods is controlled by a few firms. Cournot introduced the simplest game-theoretic model of oligopoly, in which the profit-maximizing behavior of each firm results in market failure. Furthermore, when the Cournot oligopoly game is infinitely repeated, firms can tacitly collude to monopolize the market. Such tacit collusion is realized by the same mechanism as direct reciprocity in the repeated prisoner’s dilemma game, where mutual cooperation can be realized even though defection is favorable for both prisoners in the one-shot game. Recently, in the repeated prisoner’s dilemma game, a class of strategies called zero-determinant strategies has attracted much attention in the context of direct reciprocity. Zero-determinant strategies are autocratic strategies that unilaterally control the payoffs of players by enforcing linear relationships between payoffs. There have been many attempts to find zero-determinant strategies in other games and to extend them to broader situations. In this paper, we first show that zero-determinant strategies exist even in the repeated Cournot oligopoly game, and that they are quite different from those in the repeated prisoner’s dilemma game. In particular, we prove that a fair zero-determinant strategy exists, which is guaranteed to obtain the average payoff of its opponents. Second, we numerically show that the fair zero-determinant strategy can promote collusion when it is used against one adaptively learning player, whereas it cannot promote collusion when it is used against two adaptively learning players. Our findings elucidate a possible negative impact of zero-determinant strategies in the oligopoly market.

Author summary

Repeated games have been used to analyze the rational decision-making of multiple agents in long-term interdependent relationships. Recently, a class of autocratic strategies, called zero-determinant strategies, was discovered in repeated games; these strategies unilaterally control the payoffs of players by enforcing linear relations between payoffs. So far, the properties of zero-determinant strategies in social dilemma situations have been extensively investigated, and it has been shown that some zero-determinant strategies promote cooperation. Moreover, zero-determinant strategies have been found in several games. However, it has not been known whether zero-determinant strategies exist in oligopoly games. In this paper, we investigate zero-determinant strategies in the repeated Cournot oligopoly game, the simplest mathematical model of oligopoly. We prove the existence of zero-determinant strategies that unilaterally enforce linear relations between the payoff of the player and the average payoff of the opponents. Furthermore, we numerically show that a zero-determinant strategy can promote collusion in a duopoly case, although it cannot promote collusion in a triopoly case. Our results imply that zero-determinant strategies can be used to promote cooperation between firms even in an oligopoly market.

1 Introduction

Oligopoly is one of the simplest game-theoretic situations in economics, in which the price of goods is controlled by a few firms. Cournot introduced a simple model of oligopoly, in which multiple firms produce the same goods and the price of the goods decreases as the total amount of the goods increases [1,2]. Each firm wants to maximize its profit, but needs to choose its production while considering the production of the other firms. When all firms are rational, the Cournot-Nash equilibrium is realized, in which the production of each firm is the best response to the production of the other firms. Because total production in this equilibrium is smaller than that in perfect competition, oligopoly results in market failure.

When the Cournot oligopoly game is infinitely repeated, the situation becomes worse for consumers. Since firms take total future profits into account, they can tacitly collude to monopolize the market [3], even though the Cournot-Nash equilibrium production is the only rational behavior of each firm in the one-shot game. This is an example of the folk theorem in repeated games. This result is similar to direct reciprocity in the repeated prisoner’s dilemma game [4]. In the prisoner’s dilemma game, two prisoners independently choose cooperation or defection. Defection is the best action in terms of the payoff of each prisoner regardless of the action of the other prisoner, and mutual defection is realized as a result of rational behavior. Because mutual cooperation improves the payoffs of both prisoners relative to mutual defection, this game describes a kind of social dilemma. However, when the game is infinitely repeated, mutual cooperation can be realized as a result of long-term perspectives, similarly to tacit collusion in the repeated Cournot oligopoly game.

In 2012, a novel class of strategies, called zero-determinant (ZD) strategies, was discovered in the repeated prisoner’s dilemma game [5]. Counterintuitively, ZD strategies unilaterally control the payoffs of players by enforcing linear relations between payoffs. ZD strategies can be regarded as a generalization of the equalizer strategy [6], which unilaterally sets the payoff of the opponent. ZD strategies also contain the extortionate ZD strategy, by which a player never obtains a payoff lower than that of the opponent, and the generous ZD strategy [7], by which a player never obtains a payoff higher than that of the opponent but promotes cooperation. So far, ZD strategies have been discovered in the prisoner’s dilemma game [5,7,8], the public goods game [9,10], the continuous donation game [11], the asymmetric prisoner’s dilemma game [12,13], two-player potential games [14,15], and symmetric games with no generalized rock-paper-scissors cycles [16,17]. In particular, ZD strategies in games with infinite action sets are called autocratic strategies [11]. Furthermore, ZD strategies have been extended to broader situations, such as games with a discount factor [11,18–20], games with observation errors [21–23], asynchronous games [24], and stochastic games [25,26]. The roles of ZD strategies in the context of the evolution of cooperation have also been extensively investigated [7,8,27–33].

Although the Cournot oligopoly game contains a social dilemma structure as noted above, it is not exactly the same as typical social dilemma games. This is because the action sets in the Cournot oligopoly game are infinite, whereas the action sets in typical social dilemma games contain only two actions, namely cooperation and defection. Probably for this reason, ZD strategies have not previously been discovered in the Cournot oligopoly game, and it has been unclear whether players can autocratically control payoffs in the game.

If such unilateral payoff control is possible in the Cournot oligopoly game, a firm can lead the other firms to collusion. In fact, such control is possible with ZD strategies in the repeated prisoner’s dilemma game [5]. If the payoffs of two players are positively correlated through a linear relation enforced by a ZD strategy, an increase in the payoff of the opponent implies an increase in the payoff of the player using the ZD strategy. Therefore, if the opponent gradually improves its payoff by learning, the payoff of the player using the ZD strategy also increases, up to some local optimum. Because ZD strategies do not assume rationality of the other players, they are useful even if some players are boundedly rational [34], in contrast to traditional strategies in classical game theory [18].

In this paper, we make two key contributions. First, we prove that ZD strategies (that is, autocratic strategies) also exist in the repeated Cournot oligopoly game. Concretely, we show that the game admits ZD strategies pinning their own payoffs, positively correlated ZD strategies [35,36] (the class containing the extortionate and generous ZD strategies), and negatively correlated ZD strategies. In particular, we find that a fair ZD strategy exists, by which a player unilaterally obtains the average payoff of its opponents. Second, we characterize the performance of the fair ZD strategy in the evolution of collusion. We numerically investigate whether collusion is achieved when the fair ZD strategy is used against adaptively learning players [5,35,36]. We find that, in the two-player case, the fair ZD strategy promotes collusion of a learning player, whereas in the three-player case it cannot promote collusion.

This paper is organized as follows. In Sect 2, we introduce the Cournot oligopoly game and its repeated version. In Sect 3, we introduce autocratic strategies and explain properties related to their existence. In Sect 4, we prove that several types of autocratic strategies exist in the repeated Cournot oligopoly game. In Sect 5, we provide numerical results. Sect 6 is devoted to discussion. The proofs of all theoretical results are provided in Sect 7.

2 Model

We consider the Cournot oligopoly game with $N$ ($\geq 2$) players [1,2]. In this game, multiple firms produce the same goods at the same cost. The model assumes that the price of the goods is determined by consumer demand, and the price decreases as the total amount of the goods increases. Each firm needs to choose its production while considering the production of the other firms. If the production of each firm is small, each firm obtains a small positive profit. However, if the production of a firm is too large, all firms may obtain negative profits. Therefore, the situation is game-theoretic. In the simplest model, the price linearly decreases as the total amount of the goods increases.

This oligopoly game is mathematically described as follows. The action space of player (firm) $j$ is given by $A_j := [0, \bar{x}_j]$ with $\bar{x}_j > 0$ for all $j$. The action of player $j$ is described as $x_j \in A_j$, which represents the production of the goods by player $j$. We collectively write the action profile as $x := (x_1, \ldots, x_N)$. We also use the notation $x_{-j} := (x_1, \ldots, x_{j-1}, x_{j+1}, \ldots, x_N)$, $A := \prod_{j=1}^{N} A_j$, and $A_{-j} := \prod_{k \neq j} A_k$. When we want to emphasize $x_j$ in $x$, we write $x = (x_j, x_{-j})$. The payoff of player $j$ when the action profile is $x$ corresponds to the profit (the difference between sales and the total production cost) described as

(1)  $s_j(x) := \left( a - b \sum_{k=1}^{N} x_k \right) \theta\!\left( a - b \sum_{k=1}^{N} x_k \right) x_j - c\, x_j,$

where $a > c > 0$, $b > 0$, and $\theta(\cdot)$ is the step function ($\theta(z) = 1$ for $z \geq 0$ and $\theta(z) = 0$ for $z < 0$). That is, $\left( a - b \sum_{k=1}^{N} x_k \right) \theta\!\left( a - b \sum_{k=1}^{N} x_k \right)$ represents the price of the goods, and $c$ represents the cost of production. The price is a non-negative decreasing function of the total production $\sum_{k=1}^{N} x_k$, and becomes zero when the total production is too large: $\sum_{k=1}^{N} x_k > a/b$. In this paper, we assume that .

It is known that the Nash equilibrium of this game is $x_j^{\mathrm{Nash}} = \frac{a-c}{b(N+1)}$ and $s_j^{\mathrm{Nash}} = \frac{(a-c)^2}{b(N+1)^2}$ for all $j$. On the other hand, if all firms collude to monopolize the market and share the payoffs equally, the realized state is $x_j^{\mathrm{C}} = \frac{a-c}{2bN}$ and $s_j^{\mathrm{C}} = \frac{(a-c)^2}{4bN}$ for all $j$. Because $s_j^{\mathrm{C}} > s_j^{\mathrm{Nash}}$ for $N \geq 2$, the game can be regarded as one example of social dilemmas. In addition, the Walras equilibrium, which corresponds to perfect competition, is $x_j^{\mathrm{W}} = \frac{a-c}{bN}$ and $s_j^{\mathrm{W}} = 0$ for all $j$. In perfect competition, firms cannot influence the price, and the price is equal to the marginal cost $c$. In the limit $N \to \infty$, the total production in the Nash equilibrium converges to $\frac{a-c}{b}$, which is the total production in perfect competition. This result implies that perfect competition is realized if the number of firms is very large, and is known as the Cournot limit theorem.
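These benchmark values can be checked directly. The sketch below is our own, assuming the standard closed forms for the symmetric linear model, $x^{\mathrm{Nash}} = \frac{a-c}{b(N+1)}$, $x^{\mathrm{C}} = \frac{a-c}{2bN}$, and $x^{\mathrm{W}} = \frac{a-c}{bN}$; it verifies consistency with the one-shot payoff, the social-dilemma inequality, and the Cournot limit theorem:

```python
def payoff(x, j, a=2.0, b=1.0, c=1.0):
    """One-shot profit of firm j: price (floored at zero by the step
    function) times own quantity, minus the production cost."""
    price = max(a - b * sum(x), 0.0)
    return price * x[j] - c * x[j]

def benchmarks(N, a=2.0, b=1.0, c=1.0):
    """Symmetric-equilibrium quantities and payoffs of the linear model."""
    x_nash = (a - c) / (b * (N + 1))   # Cournot-Nash production
    x_coll = (a - c) / (2 * b * N)     # collusive (shared-monopoly) production
    x_walras = (a - c) / (b * N)       # perfect-competition (Walras) share
    s_nash = (a - c) ** 2 / (b * (N + 1) ** 2)
    s_coll = (a - c) ** 2 / (4 * b * N)
    return x_nash, x_coll, x_walras, s_nash, s_coll

x_nash, x_coll, x_walras, s_nash, s_coll = benchmarks(N=2)
assert abs(payoff([x_nash, x_nash], 0) - s_nash) < 1e-12  # consistent with (1)
assert s_coll > s_nash                                    # social-dilemma structure
# Cournot limit theorem: total Nash production approaches (a - c)/b = 1
assert abs(1000 * benchmarks(1000)[0] - 1.0) < 1e-2
```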

We consider a repeated version of the Cournot oligopoly game. We write the probability measure of the action profile at time $t$ as $P_t$. The payoff of player $j$ in the repeated game is given by

(2)  $\mathcal{S}_j := \mathbb{E}^{*}\left[ s_j(x) \right],$

where $\mathbb{E}^{*}[\,\cdot\,]$ is the expected value of the quantity with respect to the limit probability of an action profile

(3)  $P^{*} := (1 - \delta) \sum_{t=0}^{\infty} \delta^{t} P_t,$

and $\delta$ is the discount factor satisfying $0 \leq \delta < 1$. If the limit exists, we obtain

(4)  $\mathcal{S}_j \xrightarrow{\;\delta \to 1\;} \lim_{T \to \infty} \frac{1}{T} \sum_{t=0}^{T-1} \mathbb{E}\left[ s_j(x(t)) \right].$

In this paper, we focus only on the undiscounted case $\delta \to 1$.
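The relation between the discounted payoff and the undiscounted time average can be illustrated numerically; as the discount factor tends to one, the normalized discounted sum approaches the long-run time average. The period-two payoff stream below is our own toy example, not taken from the paper:

```python
def discounted_average(stream, delta, T=200000):
    """(1 - delta) * sum_t delta^t * s_t, truncated at a large horizon T
    (the geometric tail beyond T is negligible for this delta)."""
    total, w = 0.0, 1.0
    for t in range(T):
        total += w * stream(t)
        w *= delta
    return (1.0 - delta) * total

payoffs = lambda t: float(t % 2)   # alternates 0, 1, 0, 1, ...; time average 1/2
val = discounted_average(payoffs, delta=0.999)
assert abs(val - 0.5) < 1e-3       # close to the delta -> 1 time average
```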

3 Preliminaries

The memory-one strategy of player $j$ is described as the conditional probability measure $T_j(x_j \mid x')$ of action $x_j$ when the action profile in the previous round is $x'$. We introduce the notation . For the case of bounded payoffs, McAvoy and Hauert introduced the concept of autocratic strategies as an extension of zero-determinant (ZD) strategies.

Definition 1. [11] A memory-one strategy of player $j$ is an autocratic strategy when its strategy $T_j$ can be written in the form

(5)  $\int_{A_j} \psi(x_j')\, dT_j(x_j' \mid x) - \psi(x_j) = \alpha s_j(x) + \sum_{k \neq j} \beta_k s_k(x) + \gamma \quad (\forall x \in A),$

with some coefficients $\alpha, \beta_k, \gamma$ and some bounded function $\psi: A_j \to \mathbb{R}$.

Surprisingly, the autocratic strategy (5) unilaterally enforces a linear relation between payoffs:

(6)  $\alpha \mathcal{S}_j + \sum_{k \neq j} \beta_k \mathcal{S}_k + \gamma = 0.$

Therefore, autocratic strategies can be used to control payoffs unilaterally. The left-hand side of Eq (5) is the difference between the integral of the function $\psi$ with respect to the memory-one strategy $T_j$ and the integral of $\psi$ with respect to the Repeat strategy [8]. The general meaning of the function $\psi$ is not clear, and it has been obtained heuristically [15,16]. For two-player potential games, it is related to potential functions [14]. Recently, for stochastic games, a method for finding $\psi$ numerically was proposed [37]. It is also noteworthy that autocratic strategies are equally powerful in multi-player games, although they were originally introduced in two-player games [11], because the proof does not depend on the number of players. In this paper, we use the two terms “ZD strategy” and “autocratic strategy” interchangeably.

Below we write $B(x) := \alpha s_j(x) + \sum_{k \neq j} \beta_k s_k(x) + \gamma$. We write the Dirac measure concentrated on $x_j$ as $\delta_{x_j}$. McAvoy and Hauert proved the following proposition.

Proposition 1. [11] Suppose that there exist two actions $\underline{x}_j, \overline{x}_j \in A_j$ and $W > 0$ such that

(7)  $0 \leq B\left( \underline{x}_j, x_{-j} \right) \leq W$ and $-W \leq B\left( \overline{x}_j, x_{-j} \right) \leq 0$ for all $x_{-j} \in A_{-j}$.

Then, when we restrict the action set of player $j$ from $A_j$ to $\left\{ \underline{x}_j, \overline{x}_j \right\}$, the memory-one strategy of player $j$

(8)  $T_j(\cdot \mid x) = p(x)\, \delta_{\overline{x}_j} + \left( 1 - p(x) \right) \delta_{\underline{x}_j}$

with

(9)  $p(x) = \frac{B(x) + \psi(x_j)}{W}$

is an autocratic strategy unilaterally enforcing Eq (6).

Indeed, we can find

(10)  $\int_{A_j} \psi(x_j')\, dT_j(x_j' \mid x) - \psi(x_j) = W p(x) - \psi(x_j) = B(x)$

with $\psi\left( \underline{x}_j \right) = 0$ and $\psi\left( \overline{x}_j \right) = W$. McAvoy and Hauert called such autocratic strategies two-point autocratic strategies, because such an autocratic strategy uses only two actions. This proposition on two-point autocratic strategies is useful, since we can easily construct an autocratic strategy merely by specifying the two actions $\underline{x}_j$ and $\overline{x}_j$.
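Proposition 1 can be illustrated on a toy example. Assuming the standard two-point construction, in which $\psi$ takes the values $0$ and $W$ on the two actions and the probability of playing the “upper” action after outcome $x$ is $p(x) = (B(x) + \psi(x_j))/W$, the running average of $B$ telescopes to zero because $\psi$ is bounded; this is exactly the enforced relation. The specific $B$ below is an arbitrary toy function satisfying the sign condition, not the paper's:

```python
import random

random.seed(0)
W = 1.0
# toy B with B(lower, y) = y in [0, W] and B(upper, y) = y - 1 in [-W, 0],
# so the two-point sign condition holds for every opponent action y in [0, 1]
B = lambda up, y: (y - 1.0) if up else y
psi = lambda up: W if up else 0.0

up, total_B, T = False, 0.0, 200000
for _ in range(T):
    y = random.random()               # opponent plays arbitrarily
    total_B += B(up, y)
    p = (B(up, y) + psi(up)) / W      # probability of the upper action next round
    up = random.random() < p

assert abs(total_B / T) < 0.01        # time average of B is driven to zero
```

Because $\sum_t B(x(t)) = \psi(x_j(T)) - \psi(x_j(0))$ plus bounded martingale noise, the average shrinks like $1/\sqrt{T}$ no matter how the opponent behaves.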

Although they found that the condition (7) is a sufficient condition for the existence of general autocratic strategies, Ueda proved that it is also a necessary condition when the number of actions of each player is finite [16]. However, it is not known whether it is also a necessary condition when the number of actions of some player is infinite. At this stage, we can only prove the following proposition about two-point autocratic strategies.

Proposition 2. If a two-point autocratic strategy (which uses only two actions) of player $j$ controlling $B$ exists, then there exist two actions $\underline{x}_j, \overline{x}_j \in A_j$ such that

(11)  $B\left( \overline{x}_j, x_{-j} \right) \leq 0 \leq B\left( \underline{x}_j, x_{-j} \right)$ for all $x_{-j} \in A_{-j}$.

See Sect 7.1 for the proof. Because $B$ is bounded, the condition in Proposition 2 is equivalent to that in Proposition 1. Therefore, specifying two actions $\underline{x}_j$ and $\overline{x}_j$ satisfying the condition (7) is equivalent to specifying a two-point autocratic strategy. These propositions are frequently used below in order to specify necessary and sufficient conditions for the existence of several types of two-point autocratic strategies.

4 Theoretical results

In this section, we investigate the existence of autocratic strategies in the Cournot oligopoly game. We remark that, in the Cournot oligopoly game, the relation

(12)  $\sum_{k \neq j} s_k(x) = \left[ \left( a - b \sum_{k=1}^{N} x_k \right) \theta\!\left( a - b \sum_{k=1}^{N} x_k \right) - c \right] \sum_{k \neq j} x_k$

holds. Therefore, from the viewpoint of player $j$, the opponents $-j$ can be essentially regarded as one player with the action $\sum_{k \neq j} x_k$ and the payoff $\sum_{k \neq j} s_k$. However, the domain of the action of “player” $-j$ is different from that of player $j$. Taking this difference into account, as far as we focus on the relation between $s_j$ and $\sum_{k \neq j} s_k$, we identify $x_{-j}$ with $\sum_{k \neq j} x_k$ below.

4.1 Equalizer strategy

First, we seek equalizer strategies of player $j$ [5,6], that is, strategies unilaterally enforcing $\bar{\mathcal{S}}_{-j} = r$ with some constant $r$, where $\bar{\mathcal{S}}_{-j} := \frac{1}{N-1} \sum_{k \neq j} \mathcal{S}_k$ is the average payoff of the opponents. For such a case, we need to set $B$ as $B = \frac{1}{N-1} \sum_{k \neq j} s_k - r$.

Theorem 1. Two-point equalizer strategies in the Cournot oligopoly game do not exist for any r.

See Sect 7.2 for the proof. Therefore, players cannot unilaterally set the opponents’ payoffs by two-point autocratic strategies. This result is quite different from that in the repeated prisoner’s dilemma game, where two-point equalizer strategies do exist. (It should be noted that all ZD strategies in two-action games are two-point autocratic strategies.)

4.2 Strategy pinning its own payoff

Next, we seek autocratic strategies of player $j$ pinning its own payoff, that is, strategies unilaterally enforcing $\mathcal{S}_j = r$ with some constant $r$. For such a case, we need to set $B$ as $B = s_j - r$.

Theorem 2. Two-point autocratic strategies pinning its own payoff in the Cournot oligopoly game exist only for $r \leq 0$.

See Sect 7.3 for the proof; the proof also specifies the two actions $\underline{x}_j$ and $\overline{x}_j$ used when such two-point autocratic strategies exist.

This result is also quite different from that in the repeated prisoner’s dilemma game, because ZD strategies pinning its own payoff do not exist in that game [5]. We also remark that the result for r = 0 is intuitive, because it is realized by producing nothing (xj = 0) in all rounds. Theorem 2 claims that such control of its own payoff is possible even for $r < 0$. This comes from the fact that the one-shot payoff (1) can be unilaterally controlled to a negative value if the production of the player is too large: $s_j(x) = -c x_j < 0$ for $x_j > a/b$. However, players have no incentive to adopt such autocratic strategies, because they may obtain positive payoffs by using other strategies.
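The overproduction argument can be made concrete: once a firm's own production exceeds $a/b$, the price is zero regardless of the opponents' actions, so its one-shot payoff is $-c x_j < 0$. A minimal check, using the parameter choice $a = 2$, $b = 1$, $c = 1$ also used in Sect 5:

```python
def payoff(x, j, a=2.0, b=1.0, c=1.0):
    """One-shot profit (1): floored price times quantity minus cost."""
    price = max(a - b * sum(x), 0.0)
    return price * x[j] - c * x[j]

# producing x_j = 3 > a/b = 2 forces the price to zero for any opponent profile,
# so the payoff is -c * x_j = -3 regardless of the others' production
for others in ([0.0], [0.5], [10.0]):
    assert payoff([3.0] + others, 0) == -3.0
```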

4.3 Positively correlated strategies

Here, we seek positively correlated autocratic strategies of player $j$ [35,36], which unilaterally enforce

(13)  $\mathcal{S}_j - \kappa = \chi \left( \bar{\mathcal{S}}_{-j} - \kappa \right)$

with a slope $\chi > 0$ and a baseline $\kappa$. For such a case, we need to set $B$ as

(14)  $B = s_j - \chi \bar{s}_{-j} - (1 - \chi) \kappa,$

where $\bar{s}_{-j} := \frac{1}{N-1} \sum_{k \neq j} s_k$.

Theorem 3. Two-point positively correlated autocratic strategies exist only for (i) or (ii) and .

See Sect 7.4 for the proof; the proof also specifies the two actions used when such autocratic strategies exist.

This result is also different from that in the repeated prisoner’s dilemma game, because two-point positively correlated ZD strategies exist only for in that game [5,7]. For the case , if , Eq (13) implies . Therefore, setting and leads to low-risk low-return autocratic strategies. We also remark that, when , the two-point positively correlated autocratic strategy is reduced to the autocratic strategy in Theorem 2. Moreover, when $\chi = 1$, the two-point positively correlated autocratic strategy can be regarded as a fair autocratic strategy [9], since it unilaterally enforces

(15)  $\mathcal{S}_j = \bar{\mathcal{S}}_{-j};$

that is, it is guaranteed to obtain the average payoff of the opponents. When N = 2, it is reduced to the autocratic strategy which unilaterally equalizes the payoffs of the two players [14,16]. Such a strategy is contained in the class of unbeatable strategies [38,39] (or rival strategies [18]), which always obtain payoffs no less than the opponent’s, irrespective of the opponent’s strategy, in two-player symmetric games. For $N > 2$, the two-point positively correlated autocratic strategy with $\chi = 1$ can be interpreted as an unbeatable strategy with respect to the group average. Note that one of the two actions used by the fair strategy is the Walras equilibrium action. It has been known that the Walras equilibrium action is an unbeatable action against any monomorphic opponents [40]. The fair autocratic strategy uses this property of the Walras equilibrium action to control payoffs in the repeated game.

4.4 Negatively correlated strategies

Here, we seek negatively correlated autocratic strategies of player $j$, which unilaterally enforce

(16)  $\mathcal{S}_j - \kappa = \chi \left( \bar{\mathcal{S}}_{-j} - \kappa \right)$

with a slope $\chi < 0$ and a baseline $\kappa$. For such a case, we again need to set $B$ as

(17)  $B = s_j - \chi \bar{s}_{-j} - (1 - \chi) \kappa.$

Theorem 4. Two-point negatively correlated autocratic strategies exist only for and .

See Sect 7.5 for the proof; the proof also specifies the two actions used when such two-point autocratic strategies exist.

It should be noted that when , the two-point negatively correlated autocratic strategy is also reduced to the autocratic strategy in Theorem 2. When , must hold, and the two-point negatively correlated autocratic strategy is similar to the ZD strategy pinning the sum of payoffs of two players in the prisoner’s dilemma game [18,41]. However, for the Cournot oligopoly game, must hold, and the use of such autocratic strategies may be limited.

5 Numerical results

In this section, we numerically investigate the performance of the autocratic strategies derived in the previous section. We set the model parameters a = 2.0, b = 1.0, c = 1.0, and . Because B in Eqs (14) and (17) for the two-point autocratic strategies satisfies

(18)

we set

(19)

We approximately calculate the payoffs in the repeated game by using the time average over one sample path: $\mathcal{S}_l \approx \frac{1}{T} \sum_{t=1}^{T} s_l(x(t))$. We remark that the limit (4) exists in all simulations below because, as the opponents’ strategies, we use fixed finite-memory strategies or adaptive memory-one strategies that appear to converge.

5.1 Autocratic strategies against fixed memory-zero opponents

First, we consider situations where player j uses the two-point autocratic strategies with various parameters and all opponents repeat fixed actions. We set N = 10 and $T = 10^6$. We assume that all other players use the same memory-zero strategy

(20)

with , and change the integer in the range . The procedure is summarized in Algorithm 1.

Algorithm 1 Autocratic strategies against fixed memory-zero opponents.

Input: Parameters of models , total time T

Input: Feasible parameters of two-point autocratic strategies

Output: Time-averaged payoffs Sl for all l

1: Set and

2: for to 200 do

3:   Initial condition:

4:   for do

5:   

6:   end for

7:   Initialize time-averaged payoffs for all l

8:   Calculate B from Eq (14) or (17)

9:   for t = 1 to T do

10:    Update xj by transition probability (8) and (9)

11:    Calculate payoffs (1) for all players

12:    Update total payoffs for all l

13:    Calculate B from Eq (14) or (17)

14:   end for

15:   Output time-averaged payoffs Sl/T for all l

16: end for

In Fig 1, we display the relations between $\mathcal{S}_j$ and $\bar{\mathcal{S}}_{-j}$ for various parameter values.

Fig 1. Linear relations between $\mathcal{S}_j$ and $\bar{\mathcal{S}}_{-j}$ when player j uses the two-point autocratic strategies with various parameters and all opponents repeat fixed actions.

https://doi.org/10.1371/journal.pcsy.0000081.g001

We find that a linear relation (13) or (16) is indeed enforced for each parameter set. We also find that both payoffs $\mathcal{S}_j$ and $\bar{\mathcal{S}}_{-j}$ become positive only in restricted regions, such as at the right ends of the lines. This implies that the applicability of these autocratic strategies is very limited, considering the incentive to adopt such strategies. Below we focus on the fair autocratic strategy.

5.2 Autocratic strategies against random memory-one opponents

Next, we consider the situation where player j uses the fair two-point autocratic strategy ($\chi = 1$), and the opponents adopt random memory-one strategies. We assume that each opponent takes either the cooperative action $x^{\mathrm{C}}$ or the Nash equilibrium action $x^{\mathrm{Nash}}$, which are the two most representative actions in the Cournot oligopoly game. A memory-one strategy of player k is written as

(21)  $T_k(\cdot \mid x) = q_k(x)\, \delta_{x^{\mathrm{C}}} + \left( 1 - q_k(x) \right) \delta_{x^{\mathrm{Nash}}},$

where $q_k(x)$ corresponds to the probability to use $x^{\mathrm{C}}$ when the action profile in the previous round was $x$. For simplicity, we use shorthand symbols for $x^{\mathrm{C}}$ and $x^{\mathrm{Nash}}$ below. We represent the memory-one strategies by a vector [9]

(22)

The entries denote the probability of taking $x^{\mathrm{C}}$ in the next round, given that the autocratic player previously played one of its two actions, player k previously played $x^{\mathrm{C}}$ or $x^{\mathrm{Nash}}$, and i of the other N–2 players took $x^{\mathrm{C}}$. We randomly generate 200 memory-one strategy profiles from a uniform distribution. The procedure is summarized in Algorithm 2. We again set N = 10 and $T = 10^6$.
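The strategy vector (22) can be indexed by the triple (previous action of the autocratic player, previous action of player k, number i of the other N−2 players that took $x^{\mathrm{C}}$). A sketch of sampling from such a vector, assuming this indexing and a vector of length 2 × 2 × (N − 1) (the concrete layout is our assumption, not the paper's):

```python
import random

def entry_index(auto_coop, self_coop, i, N):
    """Position of the entry for (autocratic player's previous action,
    own previous action, i cooperating others) in a vector of length 4*(N-1);
    i ranges over 0..N-2."""
    return (2 * int(auto_coop) + int(self_coop)) * (N - 1) + i

def sample_action(q, auto_coop, self_coop, i, N, rng=random):
    """True means playing the cooperative action x^C in the next round."""
    return rng.random() < q[entry_index(auto_coop, self_coop, i, N)]

N = 10
q_all_coop = [1.0] * (4 * (N - 1))    # always cooperate, whatever happened
assert entry_index(True, True, N - 2, N) == 4 * (N - 1) - 1   # last entry
assert sample_action(q_all_coop, False, True, 3, N)           # probability one
```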

Algorithm 2 Autocratic strategies against random memory-one opponents.

Input: Parameters of models , total time T

Input: Feasible parameters of two-point autocratic strategies

Output: Time-averaged payoffs Sl for all l

1: Set and

2: Set and

3: for to 200 do

4:   Initial condition:

5:   for do

6:    Initialize each normal agent’s strategy vector

  randomly

7:    Initial condition:

8:   end for

9:   Initialize time-averaged payoffs for all l

10:   Calculate B from Eq (14) or (17)

11:   for t = 1 to T do

12:    Update xj by transition probability (8) and (9)

13:    for do

14:     Update xk by transition probability (22)

15:    end for

16:    Calculate payoffs (1) for all players

17:    Update total payoffs for all l

18:    Calculate B from Eq (14) or (17)

19:   end for

20:   Output time-averaged payoffs Sl/T for all l

21: end for

Fig 2 shows the relationship between the payoff of the autocratic player (horizontal axis) and the average payoff of the other N–1 players (vertical axis). We find that the autocratic strategy indeed unilaterally enforces a linear relationship between payoffs even against memory-one strategies. As shown in Figs 1 and 2, the two-point autocratic strategy enforces the intended linear relation regardless of whether the opponents use fixed actions or randomized memory-one rules. Furthermore, both $\mathcal{S}_j$ and $\bar{\mathcal{S}}_{-j}$ are positive in this case. This result suggests that a player may have an incentive to adopt the fair autocratic strategy as long as the opponents use the two representative actions $x^{\mathrm{C}}$ and $x^{\mathrm{Nash}}$.

Fig 2. A linear relation between $\mathcal{S}_j$ and $\bar{\mathcal{S}}_{-j}$ when player j uses the fair two-point autocratic strategy against random memory-one strategies.

https://doi.org/10.1371/journal.pcsy.0000081.g002

5.3 Autocratic strategies against adaptive memory-one opponents

Finally, we investigate the scenario in which each memory-one opponent independently attempts to maximize its own payoff against the other players. We refer to such a player as an adaptive learning player. In the repeated prisoner’s dilemma game, it has been shown that positively correlated ZD strategies can force an adaptive learning player to cooperate unconditionally [5,35,36]. Here we again assume that the memory-one strategies of the opponents take the form (22), and that the autocratic player adopts the fair autocratic strategy.

To numerically implement this setting, we first initialize the actions of all agents and the opponents’ memory-one strategies with random values. The opponents then adaptively and independently update their memory-one strategies using a greedy method. Specifically, at each round, an opponent selects one of its strategy parameters q (corresponding to a given previous action profile) and perturbs it by either +0.01 or –0.01. Both the original q and the perturbed q′ are used to determine the actions xk and xk′, respectively. Then, the payoff of each action is calculated against the other players. If the perturbation leads to a higher payoff, the change is accepted; otherwise, it is rejected. This process is repeated iteratively, allowing each opponent to gradually improve its strategy against the other players. The detailed algorithm is provided in Algorithm 3. We set N = 2 for Fig 3 and N = 3 for Fig 4. Although $T = 10^8$ was used, plotting was terminated once no further payoff improvements were observed.
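The greedy update above can be sketched for a single strategy entry. The sketch below is a drastically simplified version of Algorithm 3 (one learner, one entry, a fixed opponent quantity rather than the autocratic player); the quantities $x^{\mathrm{C}} = 0.25$ and $x^{\mathrm{Nash}} = 1/3$ are the symmetric duopoly benchmarks for the parameters a = 2, b = 1, c = 1:

```python
import random

def one_shot_payoff(xj, x_other, a=2.0, b=1.0, c=1.0):
    price = max(a - b * (xj + x_other), 0.0)
    return price * xj - c * xj

def greedy_step(q, actions, x_other, rng, eps=0.01):
    """One update of a single-entry strategy q = P(play actions[0]):
    perturb by +/-eps, realize both the original and the perturbed choice,
    and keep the perturbation only if its realized payoff is higher."""
    q2 = min(1.0, max(0.0, q + rng.choice((eps, -eps))))
    act = actions[0] if rng.random() < q else actions[1]
    act2 = actions[0] if rng.random() < q2 else actions[1]
    return q2 if one_shot_payoff(act2, x_other) > one_shot_payoff(act, x_other) else q

rng = random.Random(0)
x_coop, x_nash = 0.25, 1.0 / 3.0   # duopoly benchmarks for a=2, b=1, c=1
q = 0.5
for _ in range(20000):
    q = greedy_step(q, (x_coop, x_nash), x_other=x_coop, rng=rng)
assert 0.0 <= q <= 1.0             # the entry always stays a valid probability
```

Against a fixed cooperating opponent, the Nash quantity earns the higher one-shot payoff, so this noisy update drifts toward playing $x^{\mathrm{Nash}}$; against the autocratic player, the enforced relation changes what is learned.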

Fig 3. Numerical results for the fair autocratic strategy in the duopoly case (N = 2).

(a) Strategy adaptation of an adaptive memory-one player against the fair autocratic strategy. (b) Time-averaged payoffs of a fixed autocratic player and an adaptive memory-one player. Data points are plotted every 1000 time steps.

https://doi.org/10.1371/journal.pcsy.0000081.g003

Fig 4. Numerical results for the fair autocratic strategy in the triopoly case (N = 3).

(a) Strategy adaptation of two adaptive memory-one players against the fair autocratic strategy. (b) Time-averaged payoffs of a fixed autocratic player and two adaptive memory-one players. Data points are plotted every 1000 time steps.

https://doi.org/10.1371/journal.pcsy.0000081.g004

Algorithm 3 Autocratic strategies against adaptive memory-one opponents.

Input: Parameters of models , total time T

Input: Feasible parameters of two-point autocratic strategies

Output: Time-averaged payoffs Sl for all l and average strategies

1: Set and

2: Set and

3: Initial condition: randomly choose or

4: for do

5:   Initialize each normal agent’s strategy vector randomly

6:   Initial condition: randomly choose or

7: end for

8: Initialize time-averaged payoffs for all l

9: Calculate B from Eq (14) or (17)

10: for t = 1 to T do

11:   for do

12:    Sample action xk with probability corresponding to previous action profile

13:    Create perturbed strategy with

14:    Sample alternative action using

15:    Calculate payoffs sk (from xk) and (from )

16:    if then

17:     Update , adopt

18:    else

19:     Keep and xk

20:    end if

21:   end for

22:   Update xj by transition probability (8) and (9)

23:   Calculate payoffs (1) for all players

24:   Update total payoffs for all l

25:   Calculate B from Eq (14) or (17)

26:   Output time-averaged payoffs Sl/t for all l and average strategies

27: end for

Fig 3a shows the time evolution of the strategy used by the adaptive memory-one player against the fixed autocratic strategy in the case of N = 2, while Fig 3b shows the time evolution of the time-averaged payoffs of both players. From Fig 3a, we observe that the memory-one strategy converges so that the adaptive memory-one player chooses the cooperative action $x^{\mathrm{C}}$ with probability one in all subsequent rounds. In addition, as time progresses, the adaptive player’s time-averaged payoff increases and gradually approaches the theoretical expected payoff of 0.05405 obtained when it consistently chooses $x^{\mathrm{C}}$ (Fig 3b; see Sect A in S1 Appendix for the derivation). Furthermore, in order to check the validity of this numerical result, we also performed 100 numerical simulations and found that 85 runs converge to a result similar to Fig 3 (see Sect B in S1 Appendix). In the remaining 15 runs, the relevant strategy components did not reach 1, most likely because the total number of time steps ($10^8$) was insufficient. In sum, due to the linear payoff relationship between the two players unilaterally enforced by the autocratic strategy, an adaptive memory-one player attempting to maximize its own payoff is inevitably driven to adopt the cooperative action $x^{\mathrm{C}}$, similarly to the case of the prisoner’s dilemma game [5,35,36].

Fig 4a shows the time evolution of the strategies used by the adaptive memory-one players against the fixed autocratic strategy in the case of N = 3, while Fig 4b shows the time evolution of the time-averaged payoffs of all players. According to Fig 4b, in the case of N = 3, the payoffs of the two adaptive memory-one players do not continue to increase but rather converge around 0.04015. Notably, this value is significantly lower than the theoretical expected payoff of 0.05039, which would be achieved if both adaptive players consistently chose $x^{\mathrm{C}}$ (see Sect A in S1 Appendix). This result indicates that, in the case of N = 3, the adaptive players do not evolve their strategies toward cooperation. Indeed, as shown in Fig 4a, the strategies of the two adaptive players converge in a synchronized manner. In other words, rather than attempting to maximize their payoffs against the fixed autocratic strategy, the adaptive memory-one players appear to have evolved strategies that optimize their payoffs against each other. Again, in order to check the validity of this numerical result, we performed 100 numerical simulations and found that all 100 runs converge to a result similar to Fig 4 (see Sect B in S1 Appendix). Furthermore, we also tested Grim-Trigger-like initial strategies for the two adaptive players. We performed ten simulations, and in all cases, the adaptive players’ strategies converged to a result similar to Fig 4 (see Sect B in S1 Appendix). These results show that the result in Fig 4 does not depend on the initial conditions or the realizations of random variables.

The above results show that the fair autocratic strategy can promote collusion of adaptively learning players in the duopoly (N = 2) case, but not in the triopoly (N = 3) case. The result in the duopoly case is similar to that in the repeated prisoner’s dilemma game [5,35,36]. In the prisoner’s dilemma game, the extortionate ZD strategy can force an adaptively learning player (without a theory of mind) to cooperate unconditionally. This follows from a property of positively correlated ZD strategies: an increase in the opponent’s payoff implies an increase in the payoff of the player using the ZD strategy. In the triopoly case, however, the fair autocratic strategy cannot force the adaptive opponents to collude. This means that enforcing the linear relation (15) is not sufficient to directly control all the other opponents; the fair autocratic strategy controls only the average payoff of the opponents.

The failure of control by the fair autocratic strategy in the triopoly case may stem from the interaction between the two adaptive agents. We provide the values of the one-shot payoffs for N = 2 and N = 3 in Table 1 and Table 2, respectively. When the two adaptive agents regard the autocratic agent as an environment [42], they selfishly learn their best response , since brings an agent a larger one-shot payoff than as long as the autocratic agent takes . (It should be noted that, under our parameters, the autocratic agent frequently takes and rarely takes .) Concretely, in Table 2, when player 1 is the autocratic agent, the payoff of player 2 satisfies and . Therefore, the adaptive agents come to repeat for most rounds. Only when the state of the “environment” xj switches to do they synchronously take , since and in Table 2. It is as if they synchronously take in order to bring the state of the “environment” back from to , because transitions between states of the “environment” occur with probability proportional to . In Sect C in S1 Appendix, we provide a numerical result for a situation in which only the two adaptive agents exist. Both adaptive agents finally learned to repeat the Nash equilibrium action . The failure of control in the triopoly case seems to arise for the same reason. It should be noted that a similar phenomenon occurs in the repeated public goods game [43], where a cooperation-enforcing strategy is adopted against two independent agents using reinforcement learning. The cooperation-enforcing strategy is known to be an unbeatable ZD strategy [17]. The robustness of our numerical result against other learning algorithms should be investigated in future work. In addition, we must remark that our numerical simulations were performed for only one set of payoff parameters . For other parameter sets, the fair autocratic strategy may induce collusion even in the triopoly case.
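The one-shot payoff orderings underlying Tables 1 and 2 (mutual collusion beats mutual Nash play, yet unilateral deviation from collusion pays) can be checked in the standard linear Cournot model. The parameters a, b, c below are hypothetical and do not reproduce the paper's actual payoff values.

```python
# Sketch of one-shot Cournot payoffs under the standard linear model
# u_j = (a - b * sum(x) - c) * x_j, with hypothetical parameters.
A, B, C = 1.0, 1.0, 0.0   # demand intercept, demand slope, marginal cost

def payoff(x, j):
    """One-shot payoff of firm j given the quantity profile x."""
    return (A - B * sum(x) - C) * x[j]

def nash_q(N):
    """Symmetric Cournot-Nash quantity for N firms."""
    return (A - C) / ((N + 1) * B)

def collusive_q(N):
    """Joint-monopoly quantity per firm (monopoly output split evenly)."""
    return (A - C) / (2 * N * B)

N = 3
xN, xC = nash_q(N), collusive_q(N)
u_nash = payoff([xN] * N, 0)   # everyone plays the Nash quantity
u_coll = payoff([xC] * N, 0)   # everyone colludes
# Best response when the other N-1 firms collude:
x_dev = (A - C - B * (N - 1) * xC) / (2 * B)
u_dev = payoff([x_dev] + [xC] * (N - 1), 0)
assert u_coll > u_nash   # collusion beats mutual Nash play...
assert u_dev > u_coll    # ...but unilateral deviation beats collusion
```

This social-dilemma structure is exactly what lets the adaptive agents, treating the autocratic agent as a fixed environment, profit by learning the deviating action.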

Table 1. Payoffs for N = 2. Player 1 is an autocratic agent, and player 2 takes either or .

https://doi.org/10.1371/journal.pcsy.0000081.t001

Table 2. Payoffs for N = 3. Player 1 is an autocratic agent, and players 2 and 3 take either or .

https://doi.org/10.1371/journal.pcsy.0000081.t002

Although the fair autocratic strategy cannot promote collusion of adaptive agents for N = 3 (and probably for ), our result for N = 2 suggests another route to collusion. As a special property of the Cournot oligopoly game, players –j can be regarded as one effective player, as shown in Eq (12). Therefore, when players –j use a ZD alliance [9], which is a ZD strategy implemented by a group of players, they can lead the opponent j to collusion. In Sect D in S1 Appendix, we prove the existence of a fair ZD alliance in the Cournot oligopoly game. We expect that a fair ZD alliance formed by the N–1 players can promote collusion by the remaining player.
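The "one effective player" property invoked above holds because, under linear inverse demand, a firm's payoff depends on its opponents only through their total quantity. A minimal check, again assuming the standard linear model with hypothetical parameters (the paper's Eq (12) setting may differ):

```python
# Firm j's Cournot payoff depends on opponents only via their total output,
# so players -j act as a single effective player. Linear model with
# hypothetical parameters: u_j = (a - b * sum(x) - c) * x_j.
def payoff(x, j, a=1.0, b=1.0, c=0.1):
    return (a - b * sum(x) - c) * x[j]

u1 = payoff([0.2, 0.1, 0.3], 0)   # opponents produce 0.1 + 0.3 = 0.4
u2 = payoff([0.2, 0.4, 0.0], 0)   # same opponent total, split differently
assert abs(u1 - u2) < 1e-12       # firm 0's payoff is unchanged
```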

6 Discussion

In this paper, we proved the existence of several autocratic strategies in the repeated Cournot oligopoly game, which unilaterally enforce linear relations, with both positive and negative slopes, between the payoff of the autocratic player and the average payoff of the opponents. In particular, we found that the fair autocratic strategy is useful because it can be used to enforce positive payoffs. Furthermore, we numerically showed that a fair autocratic strategy can promote collusion in the duopoly case, although it cannot promote collusion in the triopoly case.

Autocratic strategies are useful because they unilaterally control payoffs irrespective of the rationality of the other players [37,44]. This is quite different from previous results in classical game theory. For example, the folk theorem [3] assumes that players are rational (in other words, that they want to maximize their own payoffs), and it has not been clear how the equilibrium strategies are obtained. In contrast, by using positively correlated autocratic strategies, the autocratic player can force the opponents to optimize the payoff of the autocratic player [5]. This result is not limited to the prisoner’s dilemma game; indeed, in this paper, we found that such enforcement is possible even in a duopoly game. Even if there are more firms, once the opponents recognize an autocratic party, they may stop optimizing their payoffs independently and start to collude. Investigating such multi-firm situations is a subject for future work.

Additionally, we emphasize that the results in this paper do not simply mimic those in the prisoner’s dilemma game. As noted in Sect 2, when the action sets of all players are restricted to and N = 2, the Cournot oligopoly game reduces to the prisoner’s dilemma game, where and correspond to cooperation and defection, respectively (see Sect C in S1 Appendix). However, when the action sets are , the resulting autocratic strategies are quite different from those in the prisoner’s dilemma game. In the prisoner’s dilemma case, two-point equalizer strategies exist [5,6], in contrast to our Theorem 1. (It should be noted that all ZD strategies in two-action games are two-point autocratic strategies.) Similarly, although autocratic strategies pinning the player’s own payoff do not exist in the prisoner’s dilemma game [5], they do exist in the Cournot oligopoly game (Theorem 2). Furthermore, whereas our Theorem 3 claims that two-point positively correlated autocratic strategies exist only for , such strategies exist for in the prisoner’s dilemma game [5,7]. These differences persist even if we compare the Cournot oligopoly game with the public goods game [9,10], which is an N-player version of the prisoner’s dilemma game. They seem to come from the nature of the payoff function of the Cournot oligopoly game.

Admittedly, these differences in equalizer strategies and positively correlated autocratic strategies may come from the fact that we consider only two-point autocratic strategies. However, it is known that, when the action sets of all players are finite, the existence condition of two-point zero-determinant strategies is equivalent to that of general zero-determinant strategies [15,16]. We expect that this equivalence also holds when the action set of some player is infinite. Specifying a necessary and sufficient condition for the existence of zero-determinant strategies when the action sets of some players are infinite is an important problem for future work.

7 Methods

7.1 Proof of Proposition 2

If a two-point autocratic strategy Tj of player j controlling B exists, then it satisfies

(23)

with some set and some bounded function . Because consists of two actions and ψ is bounded, there exist

(24)

and

(25)

Due to the normalization condition of the probability distribution, we can rewrite Eq (23) as

(26)(27)

Then we find

(28)

and

(29)

because Tj is a probability. Therefore, we can regard and as and , respectively.

7.2 Proof of Theorem 1

According to the propositions above, two actions satisfying

(30)

are necessary for the existence of two-point equalizer strategies of player j. For r > 0, when we choose xj = 0, we find

(31)

for any xj. Therefore, does not exist, and the existence condition is never satisfied. Similarly, for r < 0, when we choose xj = 0, we find

(32)

for any xj. Therefore, does not exist, and the existence condition is never satisfied. For r = 0, when we choose xj = a/b, we find

(33)

for any xj. Therefore, does not exist, and the existence condition is never satisfied.

7.3 Proof of Theorem 2

According to the propositions above, two actions satisfying

(34)

are necessary for the existence of the two-point autocratic strategies of player j. For r > 0, when we choose xj = a/b, we find

(35)

for any xj. Therefore, does not exist, and the existence condition is never satisfied.

We remark that for any . Therefore, the two-point autocratic strategies with do not exist. When , we find that

(36)

Therefore, according to Proposition 1, we can construct two-point autocratic strategies pinning its own payoff by using and .

7.4 Proof of Theorem 3

According to the propositions above, two actions satisfying

(37)

are necessary for the existence of two-point positively correlated autocratic strategies of player j.

We first rewrite the payoffs as

(38)

Then, B is rewritten as

(39)

We find that

(40)

The first term on the right-hand side is always non-negative. It should be noted that, when , the second term satisfies

(41)

Therefore, we obtain

(42)

Furthermore,

(43)

We also calculate

(44)

The first term on the right-hand side is always non-positive. It should be noted that, when , the second term satisfies

(45)

Therefore, we obtain

(46)

We first consider the case . When , according to Eq (42), we find

(47)

Furthermore, by using in Eq (43), we obtain

(48)

where we have used . Therefore, according to Proposition 1 with and , two-point positively correlated autocratic strategies exist for and . When , according to Eq (46), we find

(49)

for any xj. Therefore, does not exist, and the existence condition is never satisfied.

Second, we consider the case . For this case, B does not depend on . According to Eqs (42) and (43), we obtain

(50)(51)

Therefore, according to Proposition 1 with and , two-point positively correlated autocratic strategies exist for .

Finally, we consider the case . When , from Eq (39), we find

(52)

for any xj. Therefore, does not exist, and the existence condition is never satisfied. When , according to Eq (46), we obtain

(53)

for any xj. Therefore, does not exist, and the existence condition is never satisfied.

7.5 Proof of Theorem 4

According to the propositions above, two actions satisfying

(54)

are necessary for the existence of two-point negatively correlated autocratic strategies of player j. We remark that B can be explicitly written as

(55)

When with any , we find

(56)

for all xj. Therefore, does not exist.

When and , we find

(57)

for all xj. Therefore, does not exist.

When and , we find

(58)

Therefore, we can use xj = 0 as . Furthermore, we also find

(59)

Therefore, we can use as . According to Proposition 1, two-point negatively correlated autocratic strategies exist for this parameter region.

Finally, when and , we find

(60)

for all xj. Therefore, does not exist.

Supporting information

S1 Appendix. This appendix contains four sections, “Calculation of expected payoffs when the opponents use ”, “Additional numerical results”, “Numerical results for only two adaptive agents”, and “Fair zero-determinant alliance”.

https://doi.org/10.1371/journal.pcsy.0000081.s001

(PDF)

References

  1. Fudenberg D, Tirole J. Game theory. Massachusetts: MIT Press; 1991.
  2. Osborne MJ, Rubinstein A. A course in game theory. Massachusetts: MIT Press; 1994.
  3. Gibbons R. Game theory for applied economists. Princeton University Press; 1992.
  4. Nowak MA. Five rules for the evolution of cooperation. Science. 2006;314(5805):1560–3. pmid:17158317
  5. Press WH, Dyson FJ. Iterated Prisoner’s Dilemma contains strategies that dominate any evolutionary opponent. Proc Natl Acad Sci U S A. 2012;109(26):10409–13. pmid:22615375
  6. Boerlijst MC, Nowak MA, Sigmund K. Equal pay for all prisoners. The American Mathematical Monthly. 1997;104(4):303–5.
  7. Stewart AJ, Plotkin JB. From extortion to generosity, evolution in the Iterated Prisoner’s Dilemma. Proc Natl Acad Sci U S A. 2013;110(38):15348–53. pmid:24003115
  8. Akin E. The iterated prisoner’s dilemma: good strategies and their dynamics. Ergodic Theory, Advances in Dynamical Systems. 2016:77–107.
  9. Hilbe C, Wu B, Traulsen A, Nowak MA. Cooperation and control in multiplayer social dilemmas. Proc Natl Acad Sci U S A. 2014;111(46):16425–30. pmid:25349400
  10. Pan L, Hao D, Rong Z, Zhou T. Zero-determinant strategies in iterated public goods game. Sci Rep. 2015;5:13096. pmid:26293589
  11. McAvoy A, Hauert C. Autocratic strategies for iterated games with arbitrary action spaces. Proc Natl Acad Sci U S A. 2016;113(13):3573–8. pmid:26976578
  12. Taha MA, Ghoneim A. Zero-determinant strategies in repeated asymmetric games. Applied Mathematics and Computation. 2020;369:124862.
  13. Kang K, Tian J, Zhang B. Cooperation and control in asymmetric repeated games. Applied Mathematics and Computation. 2024;470:128589.
  14. Ueda M. Unbeatable tit-for-tat as a zero-determinant strategy. J Phys Soc Jpn. 2022;91(5):054804.
  15. Ueda M. On the implementation of zero-determinant strategies in repeated games. Applied Mathematics and Computation. 2025;489:129179.
  16. Ueda M. Necessary and sufficient condition for the existence of zero-determinant strategies in repeated games. J Phys Soc Jpn. 2022;91(8).
  17. Ueda M. Unexploitable games and unbeatable strategies. IEEE Access. 2023;11:5062–8.
  18. Hilbe C, Traulsen A, Sigmund K. Partners or rivals? Strategies for the iterated prisoner’s dilemma. Games Econ Behav. 2015;92:41–52. pmid:26339123
  19. Ichinose G, Masuda N. Zero-determinant strategies in finitely repeated games. J Theor Biol. 2018;438:61–77. pmid:29154776
  20. Govaert A, Cao M. Zero-determinant strategies in repeated multiplayer social dilemmas with discounted payoffs. IEEE Trans Automat Contr. 2021;66(10):4575–88.
  21. Hao D, Rong Z, Zhou T. Extortion under uncertainty: zero-determinant strategies in noisy games. Phys Rev E Stat Nonlin Soft Matter Phys. 2015;91(5):052803. pmid:26066208
  22. Mamiya A, Ichinose G. Zero-determinant strategies under observation errors in repeated games. Phys Rev E. 2020;102(3–1):032115. pmid:33075945
  23. Mamiya A, Miyagawa D, Ichinose G. Conditions for the existence of zero-determinant strategies under observation errors in repeated games. J Theor Biol. 2021;526:110810. pmid:34119498
  24. McAvoy A, Hauert C. Autocratic strategies for alternating games. Theor Popul Biol. 2017;113:13–22. pmid:27693412
  25. Deng C, Rong Z, Wang L, Wang X. Modeling replicator dynamics in stochastic games using Markov chain method. In: Proceedings of the 20th International Conference on Autonomous Agents and Multiagent Systems. 2021. p. 420–8.
  26. Liu F, Wu B. Environmental quality and population welfare in Markovian eco-evolutionary dynamics. Applied Mathematics and Computation. 2022;431:127309.
  27. Hilbe C, Nowak MA, Sigmund K. Evolution of extortion in Iterated Prisoner’s Dilemma games. Proc Natl Acad Sci U S A. 2013;110(17):6913–8. pmid:23572576
  28. Adami C, Hintze A. Evolutionary instability of zero-determinant strategies demonstrates that winning is not everything. Nat Commun. 2013;4:2193. pmid:23903782
  29. Hilbe C, Nowak MA, Traulsen A. Adaptive dynamics of extortion and compliance. PLoS One. 2013;8(11):e77886. pmid:24223739
  30. Szolnoki A, Perc M. Evolution of extortion in structured populations. Phys Rev E Stat Nonlin Soft Matter Phys. 2014;89(2):022804. pmid:25353531
  31. Hilbe C, Wu B, Traulsen A, Nowak MA. Evolutionary performance of zero-determinant strategies in multiplayer games. J Theor Biol. 2015;374:115–24. pmid:25843220
  32. Chen X, Wang L, Fu F. The intricate geometry of zero-determinant strategies underlying evolutionary adaptation from extortion to generosity. New J Phys. 2022;24(10):103001.
  33. Chen X, Fu F. Outlearning extortioners: unbending strategies can foster reciprocal fairness and cooperation. PNAS Nexus. 2023;2(6):pgad176. pmid:37287707
  34. Bischi GI, Naimzada A. Global analysis of a dynamic duopoly game with bounded rationality. In: Advances in Dynamic Games and Applications. Birkhäuser Boston; 2000. p. 361–85. https://doi.org/10.1007/978-1-4612-1336-9_20
  35. Chen J, Zinger A. The robustness of zero-determinant strategies in Iterated Prisoner’s Dilemma games. J Theor Biol. 2014;357:46–54. pmid:24819462
  36. Miyagawa D, Mamiya A, Ichinose G. Adapting paths against zero-determinant strategies in repeated prisoner’s dilemma games. J Theor Biol. 2022;549:111211. pmid:35810777
  37. McAvoy A, Madhushani Sehwag U, Hilbe C, Chatterjee K, Barfuss W, Su Q, et al. Unilateral incentive alignment in two-agent stochastic games. Proc Natl Acad Sci U S A. 2025;122(25):e2319927121. pmid:40523172
  38. Duersch P, Oechssler J, Schipper BC. Unbeatable imitation. Games and Economic Behavior. 2012;76(1):88–96.
  39. Duersch P, Oechssler J, Schipper BC. When is tit-for-tat unbeatable? Int J Game Theory. 2013;43(1):25–36.
  40. Vega-Redondo F. The evolution of Walrasian behavior. Econometrica. 1997;65(2):375.
  41. Ueda M. Memory-two zero-determinant strategies in repeated games. R Soc Open Sci. 2021;8(5):202186. pmid:34084544
  42. Hilbe C, Šimsa Š, Chatterjee K, Nowak MA. Evolution of cooperation in stochastic games. Nature. 2018;559(7713):246–9. pmid:29973718
  43. Li K, Hao D. Cooperation enforcement and collusion resistance in repeated public goods games. AAAI. 2019;33(01):2085–92.
  44. Hilbe C, Chatterjee K, Nowak MA. Partners and rivals in direct reciprocity. Nat Hum Behav. 2018;2(7):469–77. pmid:31097794