
The reconstruction on the game networks with binary-state and multi-state dynamics

  • Junfang Wang,

    Roles Data curation, Formal analysis, Software, Validation, Writing – original draft, Writing – review & editing

Affiliations Business School, University of Shanghai Science & Technology, Shanghai, China, School of Mathematics & Statistics, North China University of Water Resources & Electric Power, Zhengzhou, China

  • Jin-Li Guo

    Roles Writing – review & editing

    phd5816@163.com, wangjunfang@ncwu.edu.cn

    Affiliation Business school, University of Shanghai Science & Technology, Shanghai, China

Abstract

Network reconstruction aims to infer the relationships among nodes from observational data, which helps to reveal the properties and functions of complex systems. To address the low reconstruction accuracy obtained from small data sets and the subjectivity of the threshold used to infer the adjacency matrix, this paper proposes two models: quadratic compressive sensing (QCS) and integer compressive sensing (ICS). A combined method (CCS) is then built from QCS and ICS, which can be applied to both binary-state and multi-state dynamics. CCS is usually superior to compressive sensing and LASSO on several networks with different structures and scales, and it can correctly infer larger-degree nodes than the other two methods. This work is conducive to revealing hidden relationships from small data so as to understand, predict and control vast, intricate systems.

1 Introduction

Objects in many social, economic, engineering and scientific fields are often abstracted into complex networks, such as electric power networks, transportation networks, social networks, game networks, economic networks, protein interaction networks in biological systems, and so on. Much research has focused on network topology, network control and the dynamic behavior of complex networks [1–5]. Yet mastering the network structure is a prerequisite for understanding, predicting and controlling the system. Studies have shown that the adjacency information of a group is often unknown and difficult to acquire directly [6], as with synaptic connections between neurons in the brain [7]. It is therefore necessary to reconstruct it from other observable data, which we call “dynamic network reconstruction”. Methods based on continuous and discrete dynamic systems have been proposed and applied, such as Pearson or Spearman correlation [8], Bayesian inference [9,10], mutual information [11], Granger causality [12], optimal causation entropy [13,14], compressive sensing (CS) [15–19], LASSO [20], noise-driven methods [21,22], maximum likelihood [23–30], deep learning [31] and so on. Methods for dynamic networks have also been proposed [32]. More comprehensive overviews of network reconstruction can be found in the literature [33–36]. Among these, CS is a method for solving problems brought by high-dimensional data. For a sparse adjacency matrix, CS minimizes its L1 norm subject to linear equations to reconstruct the adjacency matrix. It has been used in signal processing, numerical computation, computer vision, neuroscience, and other fields.

During reconstruction, we may encounter the following problems. The first, contradictory inference, occurs in undirected and unweighted networks. To overcome it, Ma et al. [37] proposed the CBM, which resolves the contradiction based on conflict frequency. Huang [2] proposed a constrained compressed sensing model. Zhang et al. [38] identified large-degree nodes through clustering and adjusted their inferences via compressive sensing to resolve conflicts. Second, inferring the structure node by node demands considerable time for large networks. Shi et al. proposed an iteratively thresholded ridge regression screener for dimension reduction, and then employed the LASSO method to recover the network structure [39]. Furthermore, if a connection probability is used to infer whether two nodes are adjacent, a threshold must be given, which is subjective. Moreover, the available data are often insufficient for reconstructing a large network, especially around large-degree nodes. How, then, can the network be inferred with high precision from small data?

Here, two reconstruction models are proposed for game networks: a compressive sensing model based on 0–1 programming (ICS) and a compressive sensing model based on quadratic programming (QCS). We also give a combined method built from ICS and QCS and discuss its performance. The main contributions of this article are as follows.

  1. For evolutionary game systems, we propose an effective index to identify large-degree nodes and the network’s type.
  2. Two reconstruction models, QCS and ICS, are proposed; the ICS model does not require a threshold to be determined.
  3. A combined method for binary-state and multi-state systems is proposed, which achieves higher performance, especially on multi-state dynamics. The method is not limited to the reconstruction of game networks.

The rest of the article is organized as follows. Section 2 gives the evolutionary game mechanism and the evaluation indexes. In Section 3, the ICS and QCS models are given. In Section 4, the combined model of ICS and QCS is proposed for binary-state dynamics, and its performance is compared with CS and LASSO. Section 5 carries out a series of experiments to validate the performance of the proposed algorithm on multi-state dynamics, including on some real networks. Finally, conclusions and discussion are given in Section 6.

2. Game theory and reconstruction evaluation standard index

2.1 Evolutionary game theory

In a group, assume individuals repeatedly play the prisoner’s dilemma game in pairs. They can adopt the strategies C (unconditional cooperation), D (unconditional defection), ZD (zero-determinant strategy) [40], TFT (tit-for-tat) and WSLS (win-stay, lose-shift) [41], with a score matrix A between strategies as illustrated in Table 1.

thumbnail
Table 1. The scores for players X and Y in a single play of prisoner’s dilemma.

https://doi.org/10.1371/journal.pone.0263939.t001

In round τ, individual i plays games according to Table 1 with her immediate neighbors, and she obtains the accumulated payoff

yi(τ) = Σ_{j∈Γ(i)} u(i,j), (1)

where Γ(i) is the neighbor set of individual i, and u(i,j) is the payoff of individual i from the game with neighbor j.

To optimize her behavior, her strategy si is replaced by the strategy of one randomly chosen neighbor, say j (with strategy sj), with the Fermi probability

W(si ← sj) = 1 / (1 + exp[(yi − yj)/a]), (2)

where a represents her rational degree [42]. In the following, let a = 0.1.
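As a concrete illustration of the game mechanism above, the following sketch simulates one strategy update on a toy network using the accumulated payoff of Eq (1) and the Fermi rule of Eq (2). The payoff entries (a donation-game convention with b = 1.5, c = 1) and the ring network are assumptions for illustration; the paper's exact Table 1 scores may differ.

```python
import math
import random

# Hypothetical 2x2 payoff matrix for strategies C (0) and D (1):
# row = focal player's strategy, column = neighbour's strategy.
# A donation-game convention with b = 1.5, c = 1 is assumed here.
b, c = 1.5, 1.0
PAYOFF = [[b - c, -c],   # C vs C, C vs D
          [b,     0.0]]  # D vs C, D vs D

def accumulated_payoff(i, neighbors, strategy):
    """Eq (1): sum of pairwise game payoffs over i's neighbourhood."""
    return sum(PAYOFF[strategy[i]][strategy[j]] for j in neighbors[i])

def fermi_update(i, neighbors, strategy, a=0.1):
    """Eq (2): copy a random neighbour's strategy with Fermi probability."""
    j = random.choice(neighbors[i])
    yi = accumulated_payoff(i, neighbors, strategy)
    yj = accumulated_payoff(j, neighbors, strategy)
    w = 1.0 / (1.0 + math.exp((yi - yj) / a))  # a = rational degree
    if random.random() < w:
        strategy[i] = strategy[j]

# Toy 4-node ring network with alternating C/D strategies
neighbors = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
strategy = {0: 0, 1: 1, 2: 0, 3: 1}  # C, D, C, D
for i in neighbors:
    fermi_update(i, neighbors, strategy)
```

Note that with a small rational degree (a = 0.1), the update is nearly deterministic: a node almost surely imitates a neighbor who earned more.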

2.2 Threshold model

The elements of the adjacency matrix should be binary; however, the estimators we obtain will not be exactly 0 or 1. For node i, suppose the estimated adjacency values with the other nodes are kj, j = 1,2,⋯,N, j≠i. In a similar way to what is done in Ref [26], we use a threshold value (3) to separate the actual links from the nonexistent ones.

2.3 Three evaluation standard indexes

By comparing the inferred results with the true adjacency matrix, candidate links can be classified as true positive (TP), false positive (FP), true negative (TN), or false negative (FN). To evaluate the reconstruction performance, we take three standard indexes: the area under the receiver operating characteristic curve (AUROC), the area under the precision-recall curve (AUPR) and the success rate (SR), the proportion of correctly classified entries,

SR = (TP + TN) / (TP + FP + TN + FN). (4)
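The link-level confusion counts and the success rate can be sketched as follows for a single node's inferred neighbor vector. The exact SR formula is not given in the text; the share of correctly classified entries is assumed here, consistent with the "average success rate of all nodes" used later.

```python
# Sketch: confusion counts (TP/FP/TN/FN) over candidate links and the
# success rate, assumed to be the share of correctly classified entries.
def confusion(true_links, inferred_links):
    tp = sum(1 for t, p in zip(true_links, inferred_links) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(true_links, inferred_links) if t == 0 and p == 1)
    tn = sum(1 for t, p in zip(true_links, inferred_links) if t == 0 and p == 0)
    fn = sum(1 for t, p in zip(true_links, inferred_links) if t == 1 and p == 0)
    return tp, fp, tn, fn

def success_rate(true_links, inferred_links):
    tp, fp, tn, fn = confusion(true_links, inferred_links)
    return (tp + tn) / (tp + fp + tn + fn)

true_x     = [1, 0, 1, 0, 0]   # node's true adjacency entries
inferred_x = [1, 0, 0, 0, 1]   # one false negative, one false positive
sr = success_rate(true_x, inferred_x)  # 3 of 5 entries correct -> 0.6
```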

3 ICS and QCS models on game networks

For an undirected and unweighted network with N nodes, let xij = 1 if nodes i and j are connected, and xij = 0 otherwise. Supposing that the strategy of each node is known in each round, the accumulated payoff of node i from time periods 1 to M can be expressed as

yi(t) = Σ_{j≠i} Sij(t)xij, t = 1,2,⋯,M, (5)

where Sij(t) is known and represents the score player i gains from playing with player j in round t. Let Y = (yi(1), yi(2), ⋯, yi(M))^T, X = (xij)_{j≠i} and matrix

F = (Sij(t))_{M×(N−1)}. (6)

For node i, the adjacency vector X satisfies the equation

FX = Y. (7)

Since the elements of X are either 0 or 1, and X is a sparse vector, it can be solved by the following model (I), which minimizes the number of non-zero elements of vector X under constraint (7):

(I) min ‖X‖0 s.t. FX = Y, xij ∈ {0,1}, (8)

where ‖X‖0 is the L0 norm of vector X.

On the other hand, as x(1−x) approaches 0, x approaches 0 or 1. If the elements of X are not limited to 0 and 1, we can minimize xij(1−xij) to push each xij toward 0 or 1, which also removes the need to determine a threshold. We therefore propose the second reconstruction model

(II) min Σ_{j≠i} xij(1−xij) s.t. FX = Y, 0 ≤ xij ≤ 1. (9)

In model (I) the elements xij are integers, so we call it ICS (integer compressed sensing). In model (II) we minimize the quadratic function Σ xij(1−xij), so we call it QCS (quadratic compressed sensing). Model (I) can be solved with CPLEX and model (II) with the MATLAB function “quadprog”.
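The core idea of the ICS model can be sketched on a toy instance: among all binary vectors X satisfying FX = Y (Eq 7), pick the sparsest one (minimum L0 norm, model I). Exhaustive enumeration stands in for the CPLEX 0–1 solver here and is only feasible at toy sizes; the F matrix below is illustrative, not from the paper.

```python
from itertools import product

# Brute-force stand-in for the ICS 0-1 program (model I): enumerate all
# binary X, keep those satisfying F X = Y, and return the sparsest.
def ics_bruteforce(F, Y, tol=1e-9):
    n = len(F[0])
    best = None
    for bits in product((0, 1), repeat=n):
        residual = max(abs(sum(f * x for f, x in zip(row, bits)) - y)
                       for row, y in zip(F, Y))
        if residual < tol and (best is None or sum(bits) < sum(best)):
            best = bits
    return best

# One node's score matrix over M = 3 rounds against N-1 = 4 potential
# neighbours (fabricated values), and the payoffs Y it would generate.
F = [[1.0, 0.5, 2.0, 0.0],
     [0.5, 1.5, 0.0, 1.0],
     [2.0, 0.0, 1.0, 0.5]]
true_x = (1, 0, 1, 0)
Y = [sum(f * x for f, x in zip(row, true_x)) for row in F]
x_hat = ics_bruteforce(F, Y)  # recovers the true adjacency (1, 0, 1, 0)
```

In practice the feasible region is searched by a branch-and-bound integer solver rather than enumeration, which is why the paper caps the iteration count.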

4. A combined compressive sensing model with binary-state dynamics

4.1 The influence of degree and sample size on reconstruction accuracy

To characterize ICS and QCS, we analyze their performance from the perspective of node degree and sample size. Suppose individuals play games on a scale-free network with 500 nodes and an average degree of 6 (unless noted otherwise, the following networks are similar), and their strategies are C and D only. The payoffs between the strategies are listed in Table 1 with b = 1.5, c = 1. The following notes apply to the reconstruction.

(1) The game with strategies C and D approaches the steady state quickly, and once it reaches the steady state the data hardly change any more. To demonstrate this phenomenon, we simulate it on networks of different scales, counting in each round the frequency of nodes whose strategies change. The results are shown in Fig 1(A) and 1(C). The strategies of all nodes no longer change after 4 rounds regardless of network scale, and neither do their payoffs. Furthermore, with four strategies in the games, although the strategies remain unstable for a long time (see Fig 1(B)), the matrix F in Eq (7) no longer changes. For example, the revenue of strategy D is always 0 whether its co-player is D or ZD. Moreover, the payoffs also stabilize quickly (see Fig 1(D)). Hence, to obtain diverse data, we make the nodes re-adopt strategies at random every 3 rounds.

thumbnail
Fig 1.

The frequencies of nodes who change strategies in each round in games with {C, D} strategy (Fig 1A) and {C, D, WSLS, ZD} strategy (Fig 1B). The frequencies of nodes whose accumulated payoffs are changed in each round in games with {C, D} strategy (Fig 1C) and {C, D, WSLS, ZD} strategy (Fig 1D).

https://doi.org/10.1371/journal.pone.0263939.g001

(2) Since the feasible region of ICS is discrete, solving it may take a long time. We terminate the solution once the number of iterations exceeds 1×10^6, and that node’s accuracy is then counted as 0.

(3) In the experiments, the nodes are sorted by degree. The top 15% of nodes form the large group, and the rest form the small group. It is worth noting that the division need not be too strict.

Under the above settings, the experiments are repeated 10 times. The results of ICS and QCS on the two groups are shown in Table 2, with the sample size M (rounds of the games) ranging from 10 to 40; the evaluation index is the average success rate over all nodes.

thumbnail
Table 2. Accuracy of QCS and ICS models on large and small group.

https://doi.org/10.1371/journal.pone.0263939.t002

QCS always has the highest accuracy on both the large and the small group when data are scarce (10 samples), whereas ICS is always better than QCS when samples are sufficient (more than 25 samples). Only when the sample size is 15 or 20 (insufficient) are the results mixed: QCS is better on the large group, and ICS is better on the small group. Therefore, if samples are very scarce, it is better to reconstruct the whole network with QCS; if samples are sufficient, we prefer ICS. In the remaining case, we should combine them: QCS for large nodes, ICS for small nodes.

From the above analysis, we should choose a model according to the sample size and the node degree. For scale-free networks, we therefore divide the sample size M into three levels: scarce (M∈(0, M1)), insufficient (M∈[M1, M2]) and sufficient (M∈(M2, +∞)). For WS and ER networks, we divide the samples into two grades (M∈(0, M3) and M∈[M3, +∞)), since these networks have almost no large-degree nodes. Next, we analyze the sample-size thresholds according to the type of network.

4.2 Judgment of sample capacity

Obviously, the sample-size thresholds are related to the scale and density of the network. From Table 2, we conclude that QCS is better than ICS on the majority of nodes when samples are scarce. Hence define the threshold (10)

In the same way, let (11), where I(∙) is the indicator function and SR(∙) is the success rate.

To find the regularity of these thresholds, we simulate on scale-free networks with 100–3000 nodes and average degrees 5.97–12. Some intervals [M1, M2] are shown in Fig 2(A); the intervals exhibit strong statistical regularity. Hence, we establish polynomial regression models of M1 and M2 in N and the average degree <k>, with goodness of fit 0.924 and 0.925, respectively. They are (12) and (13)

thumbnail
Fig 2.

(a)The interval [M1, M2] on various networks. (b)The estimation of average degree with 500 nodes.

https://doi.org/10.1371/journal.pone.0263939.g002

For the networks without large nodes, let (14)

If the predicted value is not an integer, we take the smallest integer not less than it. We can use the above equations to assess the sample capacity so as to choose a proper reconstruction model, but we also note that Eqs (12) and (13) require the average degree <k> of the network.

In fact, the average degree of the network can be estimated with ICS alone: we use the ICS model to make an initial inference of each node’s neighbors and thereby estimate the average degree of the whole network. We simulate this on networks of 500 nodes with real average degrees 5.97, 7.96, 9.94 and 11.91 (treated as unknown), estimating them with 15, 20, 25 and 30 samples, respectively. The results are shown in Fig 2(B). The estimate fluctuates around the true value. Even when the estimation deviation reaches 1, it follows from Eqs (12) and (13) that the resulting deviation in the thresholds M1 and M2 is only about 1.5, which hardly affects the evaluation of sample capacity.

Next, we use the above method to predict M1 and M2 and thereby judge the sample capacity on scale-free networks of 500 and 1000 nodes with 20 samples. The results are reported in Table 3. Because the estimate of the average degree is fairly accurate, the estimate of M1 is only 1 or 2 below the real value, and the estimate of M2 is even better, matching the real value exactly on the network with 1000 nodes. This shows that Eqs (12) and (13) are effective for estimating M1 and M2. In a word, we can evaluate the sample capacity through ICS and the number of nodes.

4.3 Identification of large nodes

For a scale-free network with insufficient samples (M∈[M1, M2]), we need to know which nodes are large so as to choose the proper model for reconstructing their neighbors. How can the large nodes be identified?

For an undirected network, inferring each node’s neighbors one by one may lead to contradictions. CBM [37] holds that, with CS, the accuracy on a large node is lower than on a small node, so a large node has a larger contradiction number; CBM therefore identifies large nodes through their contradiction numbers. For game networks, we believe that the variance of a node’s revenue sequence (VR) can also reflect its degree. This is because a node’s revenue changes as its neighbors’ strategies change, and a large node has more neighbors, so its revenue fluctuates more than a small node’s. For node i, define

VR(i) = (1/M) Σ_{t=1}^{M} (yi(t) − ȳi)^2, (15)

where M is the length of the revenue sequence and ȳi is the average revenue. We sort the VRs of all nodes and identify the large nodes by their order.
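The VR index of Eq (15) amounts to a population variance of each node's payoff sequence, after which nodes are ranked. A minimal sketch, with fabricated payoff sequences for illustration:

```python
# Sketch: the VR index (variance of a node's revenue sequence) used to
# rank nodes by unknown degree. Payoff sequences are made up.
def vr(revenues):
    m = len(revenues)
    mean = sum(revenues) / m
    return sum((y - mean) ** 2 for y in revenues) / m

payoff_seq = {
    "hub":  [12.0, 3.5, 9.0, 1.0, 14.0],  # many neighbours -> big swings
    "leaf": [1.5, 1.0, 1.5, 1.0, 1.5],    # few neighbours -> small swings
}
# Sort nodes by descending VR; top entries are flagged as large nodes.
ranked = sorted(payoff_seq, key=lambda n: vr(payoff_seq[n]), reverse=True)
```

Here `ranked[0]` is the hub: its payoff variance dwarfs the leaf's, mirroring the 20–100× gap reported for BA networks below.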

In the following, we simulate on BA, WS and ER networks with sample size 15 (insufficient samples). The VR of each node is shown in Fig 3(A). The VRs of the large nodes (on the right) are 20–100 times those of the small nodes (on the left) in the BA network, while the difference in the other two networks is not obvious. Therefore, VR can be used to infer which nodes are large; in addition, we can judge the type of network with it.

thumbnail
Fig 3.

(a) The variance of each node’s payoff sequence (VR) on BA, WS and ER with 100 nodes and average degree 6, and the degree of the node on the right is larger than that on the left. (b) The correlation coefficient between VR (CBM) and the degree as the relative sample (M/N) increases. (c) The accuracy of identifying the top 10% large nodes with VR and CBM on the scale-free network with 100,300,500 nodes and 15, 20, 25 samples, respectively.

https://doi.org/10.1371/journal.pone.0263939.g003

Does the order of VR really reflect the order of node degrees? We calculate the correlation coefficient between VR and degree and compare it with CBM. Fig 3(B) shows the results of the two methods as the relative sample size (M/N) increases. The correlation of VR is clearly higher than that of CBM, approaching 1 as the sample size increases. Even with a sample size of only 5, the correlation coefficient exceeds 0.85, so VR can serve as a proxy for degree. We use it to identify the top 10% of large nodes on scale-free networks with 100, 300 and 500 nodes and sample sizes 15, 20 and 25, respectively (insufficient in all cases), and do the same with CBM. The results are shown in Fig 3(C). The accuracy of VR is clearly higher than that of CBM: it is no less than 0.9 in all cases, whereas CBM never exceeds 0.5. VR is thus an effective index for identifying large nodes, which lays the foundation for model selection with insufficient data.

4.4 The combined compressive sensing model based on ICS and QCS

Based on the features of ICS and QCS, we propose a combined model to reconstruct the whole network. The implementation steps are as follows:

Algorithm: Proposed method for reconstructing the whole network from evolutionary game data via ICS and QCS.

  Input:

  Strategy matrix S and accumulated payoff matrix Y of each agent from time periods 1 to M.

Output:

  The identified structure of the network.

  Step 1: For i = 1:N

    Extract the revenue vector Yi and Fi = (Sij(t)) of node i

    Use the ICS model (cplexbilp) to estimate the neighbor set x0(i) of node i preliminarily.

    End For

    Calculate the estimated average degree <k> of the network from the preliminary inferences.

  Step 2: For i = 1:N

    Compute VR(i) from node i’s revenue sequence via Eq (15).

   End For

  Identify the type of network through the scatter plot of VR. If it is a scale-free network, go to Step 3; otherwise go to Step 4.

  Step 3:

   Substitute <k> and the time period M into Eqs (12) and (13) to estimate M1 and M2. If the sample size M<M1, go to Step 3.1; if M>M2, go to Step 3.2; otherwise go to Step 3.3.

   Step 3.1: Use the QCS model (quadprog) to estimate the neighbors of each node.

   Step 3.2: Use the ICS model (cplexbilp) to estimate the neighbors of each node if the number of iterations is no more than 1×10^6; otherwise use QCS.

   Step 3.3: After finding the large nodes according to their VR values, use QCS to reconstruct their neighbors, and reconstruct the other nodes with ICS (if the number of iterations is no more than 1×10^6).

  Step 4:

   Substitute <k> and M into Eq (14) to estimate M3. If M<M3, go to Step 3.1; otherwise go to Step 3.2.

End

Since the approach above combines ICS and QCS according to the scale of network and the sample size, it is called the combined compressed sensing method (CCS).
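The per-node model-selection logic of the algorithm above can be condensed into a small dispatch function. The threshold values M1, M2 below are placeholders (in the paper they come from the regression Eqs 12–13), and the large-node flag would come from the VR ranking.

```python
# Sketch of the CCS decision rule: given sample size M, thresholds M1/M2,
# and whether the node is large (by VR rank), choose QCS or ICS.
# Threshold values here are hypothetical placeholders.
def choose_model(M, M1, M2, is_large_node, scale_free=True):
    if not scale_free:
        # WS/ER networks: only a scarce/sufficient split (threshold M3)
        return "QCS" if M < M1 else "ICS"
    if M < M1:                  # scarce samples: QCS everywhere
        return "QCS"
    if M > M2:                  # sufficient samples: ICS everywhere
        return "ICS"
    # Insufficient samples: QCS on large nodes, ICS on small nodes
    return "QCS" if is_large_node else "ICS"

M1, M2 = 14, 24  # hypothetical thresholds for one network
assert choose_model(10, M1, M2, is_large_node=False) == "QCS"
assert choose_model(30, M1, M2, is_large_node=True) == "ICS"
assert choose_model(20, M1, M2, is_large_node=True) == "QCS"
assert choose_model(20, M1, M2, is_large_node=False) == "ICS"
```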

4.5 Reconstruction with CCS on networks of different types and sizes

First, we simulate the performance of CCS and compare it with CS and LASSO on scale-free, small-world and random networks of the same size and average degree. The results are shown in Table 4. The accuracy of CCS is almost always the highest regardless of network type, except on the scale-free network with 10 samples. And CCS is always better than LASSO. This means the advantages of CCS are hardly affected by the type of network.

thumbnail
Table 4. The success rate of three methods on different types of networks (N = 500, <k> = 6).

https://doi.org/10.1371/journal.pone.0263939.t004

Next, we increase the scale to 2000 and 5000 nodes with 40 and 60 samples, respectively. The results are shown in Fig 4. CCS has obvious advantages over CS on two of the three evaluation indexes (AUPR and SR), and there is little difference between CCS and CS on the third (AUROC). Moreover, CCS is always better than LASSO on all three indexes.

thumbnail
Fig 4. The performance of CCS, CS and LASSO on the scale-free networks with 2000, 5000 nodes, and sample sizes are 40, 60, respectively.

https://doi.org/10.1371/journal.pone.0263939.g004

Synthesizing the simulations of the three methods on networks of different types and scales, we conclude that CCS is a superior method.

5 The combined compressive sensing model on multi-state dynamics

5.1 The influence of strategy on the CCS model

In the following, we discuss the impact of the game strategies and their number on reconstruction accuracy. There are 5 strategies: C, D, TFT, WSLS and ZD, from which we design 8 groups. The success rate (SR) with 20 samples on the scale-free network with 500 nodes is shown in Table 5. The accuracy of CCS is the best of the three methods in every group except {C, TFT}; for group (8), CCS is even 26–37% higher than the other two methods. This means the strategies present in the system strongly affect CCS.

thumbnail
Table 5. The success rate (SR) of three methods under different strategy combinations.

https://doi.org/10.1371/journal.pone.0263939.t005

To uncover the deeper reasons for the high accuracy of CCS, we introduce two influencing factors: the number of distinct elements in the payoff matrix (NDP) of each group, and the maximum frequency of these elements in the matrix F (MF). For example, in group (1), strategies C and TFT take part in the games. There are only two distinct elements {b−c, (b−c)/2} in the payoff matrix (see Table 1), and over the 10 experiments, the frequency of the element b−c in matrix F is 81.93%. In the following, we analyze how and why these two factors affect the accuracy of CCS through correlation and regression analysis of SR on NDP and MF. The results are given in Table 6.

thumbnail
Table 6. The multiple linear regression analysis of SR on NDP and MF.

https://doi.org/10.1371/journal.pone.0263939.t006

First, the Pearson correlation coefficient between SR and NDP is 0.743, meaning that the larger the NDP, the better CCS performs. The correlation coefficient between MF and SR is −0.905, a strong negative correlation. A regression equation is obtained with a high goodness of fit (R² = 0.933) (16)

According to the standardized coefficients (0.382, −0.711), MF influences SR more strongly than NDP. The results show that the NDP and MF of matrix F strongly affect the accuracy of CCS, especially MF. What do these two indicators mean, and why do they affect the accuracy of CCS?

As is well known, the accuracy of CCS is decided by identifying the locations of the 0s and 1s in vector X, which in turn depends on the diversity of matrix F in Eq (7). The more diverse the matrix F, the easier it is to identify those locations and the higher the reconstruction accuracy. In the extreme case where all elements of matrix F are identical, the locations of the 1s are not unique. The diversity of matrix F is reflected in two aspects: the number of distinct elements and the uniformity of their distribution. The more distinct elements and the more uniform their distribution, the higher the accuracy of CCS. If the maximum frequency is too high, the elements of matrix F are distributed non-uniformly, which leads to low precision.

Put another way, the diversity of matrix F depends on the number of strategies: the more strategies, the better CCS performs. For instance, the accuracy of the three-strategy groups (groups 3, 4, 5, 7) is higher than that of the two-strategy groups (groups 1, 2) but lower than that of the four-strategy group (8). A multi-strategy game system is therefore friendly to CCS and helps it infer the structure.

Moreover, the diversity of matrix F also depends on which strategies are in the game. If one strategy quickly drives the others out, so that in the extreme only one strategy is left, then the elements of matrix F are identical across many rows. Little information is then available from the data, making it difficult to identify the true neighbors (the 1s in vector X). Hence strategies such as WSLS and ZD, which protect cooperators and coexist with other strategies, help acquire diverse samples and thus reconstruct the network accurately. For example, unlike strategy D, which always eliminates strategy C, the ZD strategy is a catalyst for cooperation: it makes the ratio of C to ZD more even while their payoffs differ, so the group {C, ZD} is more diverse than the group {C, D}. In the group {ZD, D, C}, however, ZD and D both receive payoff 0 when they meet, so the diversity of elements in this group is greater than in the group {C, D} but less than in the group {C, ZD}. As a matter of course, SR_{C,D} < SR_{C,D,ZD} < SR_{C,ZD}.

In summary, besides the number of initial strategies, their type also directly affects the reconstruction performance.

5.2 The performance of CCS on scale-free network

To study the performance of CCS on the multi-strategy game network, we design the following two groups:

  1. Three strategies {C, D, WSLS} in the games
  2. Four strategies {C, D, WSLS, ZD} in the games.

We compare the reconstruction performance of CCS, CS and LASSO for different sample sizes. The results are shown in Fig 5. CCS is almost always the best in AUPR and success rate in both groups regardless of sample size. Although it is not the best in AUROC, its advantage gradually increases when the ZD strategy is involved (Fig 5.b1). On the whole, CCS stands out more when samples are insufficient. For example, with M = 20, the success rate of CCS exceeds 0.9, whereas LASSO is only about 0.55 and CS no more than 0.69. Moreover, the advantage of CCS is more obvious on four-strategy networks than on three-strategy ones.

thumbnail
Fig 5.

The performance of three methods as the sample size increases from 10 to 40. (a1-a3): Group {C, D, WSLS}. (b1-b3): Group {C, D, WSLS, ZD}.

https://doi.org/10.1371/journal.pone.0263939.g005

5.3 Noise environment experiments

In the real world, practical environments are not as clean as experimental ones. Limited by the accuracy and cost of observation, the observed data are not clean and pure but contain a certain degree of noise. We test the performance of CCS in a noisy environment. Here, in the network with the four strategies {C, D, WSLS, ZD}, we assume the samples are subject to observation noise, with u% of them contaminated. In Table 7, the variance of the noise is set to σN = 0.3, and u is set to 0.5 and 1, respectively. The results demonstrate that the proposed method can cope with noise to a certain extent. Specifically:

thumbnail
Table 7. The success rate of CS, CCS, LASSO in a noisy environment.

https://doi.org/10.1371/journal.pone.0263939.t007

First, although a high rate of noise pollution is adverse to CCS, it still achieves the highest accuracy when samples are insufficient, as in the noise-free case. Second, the bigger the network, the better for CCS: increasing the network scale from 200 to 1000 nodes, CCS maintains its advantage in more cases (from M≤20 to M≤30). It should be noted, however, that LASSO is more robust to noise pollution with large samples. Therefore, CCS is robust to noise with small samples, and LASSO with large samples.

5.4 The results of CCS on real networks

Compared with generated small-world, random and scale-free networks, real networks may not have such clear-cut characteristics; we therefore choose 4 real networks to test the generalization of the proposed method: the football, dolphin, elegans and social networks, with 115, 62, 453 and 1858 nodes and average degrees 10.66, 5.13, 8.9 and 13.45, respectively.

5.4.1 The macro-analysis of the reconstruction accuracy of the network.

Suppose the nodes can adopt four strategies: {C, D, ZD, WSLS}. The accuracy of the three methods is shown in Fig 6. CCS always has the highest AUPR and success rate, except for an occasional case with 10 samples. Furthermore, its reconstruction accuracy increases rapidly with the number of samples, especially on the football network. It is worth mentioning that CCS is not always the best in AUROC. The sparser the network, the higher the reconstruction accuracy, meaning that sparse networks suit CCS better. This validates that CCS maintains similar inference features on real networks.

thumbnail
Fig 6. The performance of CCS, CS and LASSO is shown with three evaluation indexes on football, dolphin, elegans and social networks.

https://doi.org/10.1371/journal.pone.0263939.g006

5.4.2 The influence of node degree on completely correct frequency.

As we know, not all nodes can be reconstructed correctly with insufficient samples. Which nodes are most likely to be reconstructed correctly, and is this related to their degree? Define the completely correct frequency of node i as

fr(i) = ni/n, (17)

where n is the number of experiments and ni is the number of experiments in which node i is reconstructed completely correctly. For example, if node i is reconstructed with a 100% success rate in 9 out of 10 experiments, then fr(i) = 0.9. In the following, the three reconstruction methods are compared to analyze which is most affected by degree. Assuming a sample size of 50, each node is reconstructed 10 times. We calculate the completely correct frequency of each node and plot it against degree in Fig 7.

thumbnail
Fig 7. The completely correct frequency in 10 experiments on the elegans network under 50 samples.

https://doi.org/10.1371/journal.pone.0263939.g007

As expected, the larger the degree, the lower the accuracy: the completely correct frequencies of all the large nodes are 0. To analyze the smaller nodes, we zoom into the region where the degree runs from 1 to 80; the difference among the three methods is then easy to see. Define the minimum degree of the nodes whose completely correct frequency is less than 1:

dmin = min{ki : fr(i) < 1}. (18)

For CCS it is 16 (the degree at the blue line); in other words, nodes with degree no more than 15 are always reconstructed correctly. In contrast, CS and LASSO give 2 and 5, respectively (green and red lines). This means CCS can correctly reconstruct larger nodes than CS and LASSO, even though none of the methods is very friendly to large nodes. CCS is therefore more tolerant of larger nodes.
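The two quantities of Eqs (17) and (18) are straightforward to compute from a table of per-experiment outcomes. A minimal sketch, with fabricated per-node results for illustration:

```python
# Sketch: completely correct frequency fr(i) = n_i / n (Eq 17) and the
# minimum degree among imperfectly reconstructed nodes (Eq 18).
# The per-experiment results below are fabricated.
def fr(results_per_node):
    """results_per_node: list of booleans, one per experiment (True =
    node reconstructed completely correctly in that experiment)."""
    return sum(results_per_node) / len(results_per_node)

degree = {"a": 3, "b": 16, "c": 40}
results = {"a": [True] * 10,           # always perfect -> fr = 1.0
           "b": [True] * 9 + [False],  # fr = 0.9
           "c": [False] * 10}          # fr = 0.0

# Eq (18): smallest degree among nodes with fr(i) < 1
d_min = min(degree[i] for i in degree if fr(results[i]) < 1)
```

With these toy numbers `d_min` is 16: every node of degree below 16 is always reconstructed correctly.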

5.4.3 Coupled oscillations dynamics.

To further verify the generality of the proposed structure identification method, a coupled-oscillator network is introduced and explored. Consider the networked Kuramoto model dθi/dt = ωi + Σj aij sin(θj − θi), (19) where θi and ωi are the phase and the natural frequency of the ith oscillator. Assuming that (20)
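A minimal simulation of the dynamics in Eq (19) using a plain Euler integrator on a small random network; all parameters below are illustrative choices, not the paper's N = 100, <k> = 6 setup:

```python
import math
import random

def kuramoto_step(theta, omega, A, dt=0.01):
    """One Euler step of Eq (19): dtheta_i/dt = omega_i + sum_j a_ij sin(theta_j - theta_i)."""
    n = len(theta)
    dtheta = [
        omega[i] + sum(A[i][j] * math.sin(theta[j] - theta[i]) for j in range(n))
        for i in range(n)
    ]
    return [theta[i] + dt * dtheta[i] for i in range(n)]

# small random undirected network with random natural frequencies and phases
random.seed(0)
n = 5
A = [[0] * n for _ in range(n)]
for i in range(n):
    for j in range(i + 1, n):
        if random.random() < 0.5:
            A[i][j] = A[j][i] = 1
omega = [random.uniform(-1.0, 1.0) for _ in range(n)]
theta = [random.uniform(0.0, 2.0 * math.pi) for _ in range(n)]

series = [theta]
for _ in range(100):
    theta = kuramoto_step(theta, omega, A)
    series.append(theta)
print(len(series))  # 101
```

The recorded phase series plays the role of the observation data from which the structure matrix is to be inferred.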

Our target is to identify the network structure matrix A = (aij). The identification results are shown in Fig 8. The proposed method is clearly the best of the three, no matter which evaluation index is used. In particular, with only 10 observations its success rate is close to 1, well above that of the other two methods. This means that CCS can also be used in real systems other than evolutionary game systems.
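The identification step treats each node's row of A as the unknown in a linear system: for node i, stacking observations of dθi/dt against the regressors [1, sin(θj − θi)] gives y = Φx with x = (ωi, ai1, …). The sketch below recovers one row by ordinary least squares on exact derivatives; the paper's CCS/CS/LASSO solvers would replace the plain solve, and all names and parameters here are illustrative assumptions.

```python
import math
import random

def solve(M, b):
    """Gaussian elimination with partial pivoting for small dense systems."""
    n = len(M)
    aug = [row[:] + [b[r]] for r, row in enumerate(M)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(aug[r][c]))
        aug[c], aug[p] = aug[p], aug[c]
        for r in range(c + 1, n):
            f = aug[r][c] / aug[c][c]
            for k in range(c, n + 1):
                aug[r][k] -= f * aug[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (aug[r][n] - sum(aug[r][k] * x[k] for k in range(r + 1, n))) / aug[r][r]
    return x

random.seed(1)
n = 3
A_true = [[0, 1, 0], [1, 0, 1], [0, 1, 0]]  # toy ground-truth adjacency
omega = [0.3, -0.2, 0.5]
i = 0  # reconstruct row 0: unknowns x = [omega_0, a_01, a_02]

Phi, y = [], []
for _ in range(50):
    theta = [random.uniform(0.0, 2.0 * math.pi) for _ in range(n)]
    # in practice dtheta_i/dt is estimated from the time series; here it is exact
    tdot = omega[i] + sum(A_true[i][j] * math.sin(theta[j] - theta[i]) for j in range(n))
    Phi.append([1.0] + [math.sin(theta[j] - theta[i]) for j in range(n) if j != i])
    y.append(tdot)

# ordinary least squares via the normal equations (a sparse solver would go here instead)
m = len(Phi[0])
G = [[sum(Phi[t][a] * Phi[t][b] for t in range(len(Phi))) for b in range(m)] for a in range(m)]
rhs = [sum(Phi[t][a] * y[t] for t in range(len(Phi))) for a in range(m)]
x = solve(G, rhs)
print([round(v, 6) for v in x])  # close to [0.3, 1.0, 0.0]
```

With noisy, finite-difference derivative estimates the plain least-squares solve degrades quickly, which is exactly where the sparse solvers compared in Fig 8 differ.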

Fig 8. The performance of CCS, CS and LASSO is shown with three evaluation indexes on the coupled-oscillator dynamics. Network size N = 100, average degree <k> = 6.

https://doi.org/10.1371/journal.pone.0263939.g008

6 Conclusions and discussions

In this paper, we first propose two compressive sensing models, ICS and QCS, and then a combined model, CCS, which selects between ICS and QCS according to the sample size. It has been shown that the combined model usually achieves higher accuracy than CS and LASSO on networks of different types and scales. In multi-strategy systems its performance is even better: the more strategies, the better the performance. Moreover, the participation of strategies that improve the group cooperation level, such as WSLS and ZD, is helpful for reconstruction with less data. In addition, CCS can correctly reconstruct larger-degree nodes than CS and LASSO, and it is robust to noise to a certain extent.

It is worth noting that, although the paper discusses reconstruction with evolutionary game data, the method is not limited to game networks: as long as a linear constraint equation can be derived from the system, the network can be reconstructed with our method. In addition, the samples do not have to be time series data. The combined compressive sensing method could still be improved in several respects. For example, one sometimes has to choose between ICS and QCS according to the samples, and ICS can be time-consuming. How to automate this choice and how to speed up ICS are worth attention in the future.

References

  1. Agliari E, Barra A, Galluzzi A, Guerra F, Moauro F. Multitasking associative networks. Physical Review Letters. 2011; 109(26): 268101.
  2. Huang K, Wang Z, Jusup M. Incorporating latent constraints to enhance inference of network structure. IEEE Transactions on Network Science and Engineering. 2020; 7(1): 466–475.
  3. Perc M, Jordan J J, Rand D G, Wang Z, Boccaletti S, Szolnoki A. Statistical physics of human cooperation. Physics Reports. 2017; 687(8): 1–51.
  4. Lu R Q, Yu W W, Lu J H, Xue A K. Synchronization on complex networks of networks. IEEE Transactions on Neural Networks and Learning Systems. 2014; 25(11): 2110–2118. pmid:25330433
  5. Dong H, Hou N, Wang Z, Ren W. Variance-constrained state estimation for complex networks with randomly varying topologies. IEEE Transactions on Neural Networks and Learning Systems. 2018; 29(7): 2757–2768. pmid:28541916
  6. Hempel S, Koseska A, Kurths J, Nikoloski Z. Inner composition alignment for inferring directed networks from short time series. Physical Review Letters. 2011; 107(5): 3214–3219. pmid:21867072
  7. Ganguli S, Sompolinsky H. Compressed sensing, sparsity, and dimensionality in neuronal information processing and data analysis. Annual Review of Neuroscience. 2012; 35: 485–508. pmid:22483042
  8. Eisen M B, Spellman P T, Brown P O, Botstein D. Cluster analysis and display of genome-wide expression patterns. Proceedings of the National Academy of Sciences of the United States of America. 1998; 95(25): 14863–14868. pmid:9843981
  9. Golan E H. A Bayesian networks approach for predicting protein-protein interactions from genomic data. Science. 2003; 302(5644): 449–453. pmid:14564010
  10. Zhang Y, Li Y, Deng W, Huang K, Yang C. Complex networks identification using Bayesian model with independent Laplace prior. Chaos. 2021; 31: 013107. pmid:33754749
  11. Margolin A A, Nemenman I, Basso K, Klein U, Wiggins C, Stolovitzky G, et al. ARACNE: An algorithm for the reconstruction of gene regulatory networks in a mammalian cellular context. BMC Bioinformatics. 2006; 7(SUPPL.1): 1–15.
  12. Guo S, Wu J, Ding M, Feng J, Friston K J. Uncovering interactions in the frequency domain. PLoS Computational Biology. 2008; 4(5): e1000087. pmid:18516243
  13. Yuan Y, Li C T, Windram O. Directed partial correlation: inferring large-scale gene regulatory network through induced topology disruptions. PLoS ONE. 2011; 6(4): e16835. pmid:21494330
  14. Thomas A H, Lucy J C, Robert S, Burkhard R, Chris S, Debora S M. Three-dimensional structures of membrane proteins from genomic sequencing. Cell. 2012; 149(7): 1607–1621. pmid:22579045
  15. Wang W X, Lai Y C, Grebogi C, Ye J. Network reconstruction based on evolutionary-game data via compressive sensing. Physical Review X. 2011; 1(2): 021021.
  16. Wang W X, Yang R, Lai Y C, Kovanis V, Grebogi C. Predicting catastrophes in nonlinear dynamical systems by compressive sensing. Physical Review Letters. 2011; 106(15): 154101. pmid:21568562
  17. Su R Q, Lai Y C, Wang X, Do Y. Uncovering hidden nodes in complex networks in the presence of noise. Scientific Reports. 2014; 4: 3944. pmid:24487720
  18. Shen Z, Wang W X, Fan Y, Di Z, Lai Y C. Reconstructing propagation networks with natural diversity and identifying hidden sources. Nature Communications. 2014; 5: 4323. pmid:25014310
  19. Han X, Shen Z, Wang W X, Zeng R. Robust reconstruction of complex networks from sparse data. Physical Review Letters. 2015; 114(2): 028701. pmid:25635568
  20. Han X, Shen Z, Wang W X, Lai Y C, Grebogi C. Reconstructing direct and indirect interactions in networked public goods game. Scientific Reports. 2016; 6: 30241. pmid:27444774
  21. Chen Y, Zhang Z, Chen T, Wang S, Hu G. Reconstruction of noise-driven nonlinear networks from node outputs by using high-order correlations. Scientific Reports. 2017; 7: 44639. pmid:28322230
  22. Ching E S C, Tam H C. Reconstructing links in directed networks from noisy dynamics. Physical Review E. 2017; 95(1): 010301. pmid:28208378
  23. Wang W X, Lai Y C, Grebogi C. Data based identification and prediction of nonlinear and complex dynamical systems. Physics Reports. 2016; 644: 1–76.
  24. Li J, Shen Z, Wang W X, Grebogi C, Lai Y C. Universal data-based method for reconstructing complex networks with binary-state dynamics. Physical Review E. 2017; 95(3): 032303. pmid:28415181
  25. Ma C, Zhang H F, Lai Y C. Reconstructing complex networks without time series. Physical Review E. 2017; 96(2): 022320. pmid:28950596
  26. Ma C, Chen H S, Lai Y C, Zhang H F. Statistical inference approach to structural reconstruction of complex networks from binary time series. Physical Review E. 2018; 97(2): 022301. pmid:29548109
  27. Xiang B B, Ma C, Chen H S, Zhang H F. Reconstructing signed networks via Ising dynamics. Chaos. 2018; 28(12): 123117. pmid:30599526
  28. Liu Q M, Ma C, Xiang B B, Zhang H F. Inferring network structure and estimating dynamical process from binary-state data via logistic regression. IEEE Transactions on Systems, Man, and Cybernetics: Systems. 2019; PP(99): 1–11.
  29. Ma C. Reconstruction prediction and mining of structures in complex networks. Anhui: Anhui University, 2019.
  30. Zhang H F, Xu F, Bao Z K, Ma C. Reconstructing of networks with binary-state dynamics via generalized statistical inference. IEEE Transactions on Circuits and Systems I: Regular Papers. 2019; 66(4): 1608–1619.
  31. Huang K K, Wu S J, Li F B, Yang C H, Gui W H. Fault diagnosis of hydraulic systems based on deep learning model with multirate data samples. IEEE Transactions on Neural Networks and Learning Systems. 2021; 70: 1–14. pmid:34111001
  32. Zhang Y, Yang C, Huang K, Zhou C, Li Y G. Robust structure identification of industrial cyber-physical system from sparse data: A network science perspective. IEEE Transactions on Automation Science and Engineering. 2021; 99: 1–15.
  33. Ma C, Chen H S, Li X, Liu Y C, H F. Data based reconstruction of duplex networks. SIAM Journal on Applied Dynamical Systems. 2020; 19(1): 124–150.
  34. Zhang C Y, Chen Y, Mi Y Y, Hu G. From data to network structure—Reconstruction of dynamic networks. Sci Sin-Phys Mech Astron. 2020; 50: 010502.
  35. Zhang H F, Wang W X. Complex system reconstruction. Acta Phys. Sin. 2020; 69(8): 088906.
  36. Timme M, Casadiego J. Revealing networks from dynamics: An introduction. J. Phys. A: Math. Theor. 2014; 47(34): 343001.
  37. Ma L, Han X, Shen Z, Wang W X, Di Z R. Efficient reconstruction of heterogeneous networks from time series via compressed sensing. PLOS ONE. 2015; 10(11): e0142837. pmid:26588832
  38. Zhang Y, Yang C, Huang K, M, Li X. Reconstructing heterogeneous networks via compressive sensing and clustering. IEEE Transactions on Emerging Topics in Computational Intelligence. 2021; 5: 920–930.
  39. Shi L, Shen C, Shi Q, Wang Z, Zhao J H, Li X L, Boccaletti S. Recovering network structures based on evolutionary game dynamics via secure dimensional reduction. IEEE Transactions on Network Science and Engineering. 2020; 7(3): 2027–2036.
  40. Press W H, Dyson F J. Iterated prisoner’s dilemma contains strategies that dominate any evolutionary opponent. Proceedings of the National Academy of Sciences. 2012; 109(26): 10409–10413.
  41. Imhof L A, Fudenberg D, Nowak M A. Tit-for-tat or win-stay, lose-shift? Journal of Theoretical Biology. 2007; 247(3): 574–580. pmid:17481667
  42. Xu X R, Rong Z H, Tse C K. Bounded rationality optimizes the performance of networked systems in prisoner’s dilemma game. In: Proceedings of the IEEE International Symposium on Circuits and Systems; 2018.