
Ranking Competitors Using Degree-Neutralized Random Walks

  • Seungkyu Shin,

    Affiliation Graduate School of Culture Technology, Korea Advanced Institute of Science & Technology, Daejeon, Republic of Korea

  • Sebastian E. Ahnert,

Affiliation Theory of Condensed Matter, Cavendish Laboratory, CB3 0HE, Cambridge, United Kingdom

  • Juyong Park

    Affiliation Graduate School of Culture Technology, Korea Advanced Institute of Science & Technology, Daejeon, Republic of Korea



Abstract

Competition is ubiquitous in many complex biological, social, and technological systems, playing an integral role in their evolutionary dynamics. It is often useful to determine the dominance hierarchy or the rankings of the components of a system that compete for survival and success, based on the outcomes of the competitions between them. Here we propose a ranking method based on a random walk on the network that represents the competitors as nodes and the competitions as directed edges with asymmetric weights. We use the edge weights and node degrees to define a gradient on each edge that guides the random walker towards the weaker (or the stronger) node, which enables us to interpret the steady-state occupancy as a measure of the node's weakness (or strength) that is free of unwarranted degree-induced bias. We apply our method to two real-world competition networks and explore the issues of ranking stabilization and prediction accuracy, finding that our method outperforms other methods, including the baseline win–loss differential method, in sparse networks.


Introduction

Competition is one of the most essential mechanisms for the survival and evolution of species or components in a complex system, be it from the biological, the social, or the technological realm [1]–[3]. Therefore, in many complex systems a robust and effective “rating” or “ranking” method for determining the most successful or superior components can be essential for understanding the system's dynamics [4]–[8]. It can also contribute to a system's success and credibility: in an enterprise, for instance, a fair competition-and-reward mechanism is the basis for earning the confidence of its employees. The same argument applies to an economic or financial institution; confidence in the fairness of its rating system by participants such as investors and customers is crucial for its sustainability and development.

Here we propose a ranking method in which the competing species are represented as nodes of a competition network. The most familiar example of a competition network can be found in sports, where the nodes represent competing teams and the edges the competitions or games played between them [5]. The ranking of the competitors (teams) in a sport is an issue of much interest, as it functions as the basis of many events (e.g., playoffs) and decisions (e.g., marketing) that can determine its popularity and success. While ranking methods differ from sport to sport, they are almost always some generalization of the simplest scheme of counting wins and losses, which is often insufficient for producing satisfactory and useful rankings [9].

The ranking of nodes in a network has a long and rich history of development [10]–[13]. In the network context it is most often formulated as a problem of “centrality,” i.e. a measure of a node's prominence or importance deduced from the network structure. Of the many popular centralities in existence (Google's PageRank [14] is perhaps the best-known modern example, to be discussed below), in Fig. 1 (a) we show three fundamental ones that often serve as bases for more elaborate ones [13], [15]. The first is the degree centrality, or simply the degree, which is the number of nodes connected to a node, rendering the node in the middle the most central. The second is the eigenvector centrality, the leading positive eigenvector of the adjacency matrix. It generalizes the degree by taking into account the “quality” of a connection, thereby differentiating the two shaded nodes with the same degree (1) in the picture. The third is the betweenness or Freeman centrality, which measures how often a node sits on the shortest path(s) between two nodes [16]. Thus the shaded node in the center is the most central by betweenness, although its degree is smaller than those of its neighbors.
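The eigenvector centrality described here can be computed by power iteration on the adjacency matrix. The following is a generic illustration, not code from the paper; the star-graph example is our own, chosen because the hub should come out most central:

```python
import numpy as np

def eigenvector_centrality(A, iters=200):
    """Leading positive eigenvector of the adjacency matrix by power iteration.

    Iterating with (A + I) keeps the eigenvectors of A but shifts the
    eigenvalues, which avoids the oscillation plain power iteration
    exhibits on bipartite graphs such as a star.
    """
    A = np.asarray(A, dtype=float)
    x = np.ones(A.shape[0])
    for _ in range(iters):
        x = A @ x + x              # one (A + I) power-iteration step
        x /= np.linalg.norm(x)     # keep the vector normalized
    return x

# A 4-node star: node 0 (the hub) is connected to nodes 1, 2, 3.
star = np.zeros((4, 4))
star[0, 1:] = star[1:, 0] = 1
centrality = eigenvector_centrality(star)
```

For the star, the hub's centrality exceeds each leaf's by a factor of sqrt(3), the leading eigenvalue of the star's adjacency matrix.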

Figure 1. Basic concept of network centralities and our ranking method.

(a) Three basic network centralities. The Degree is the number of a node's neighbors; the shaded node is the most central. The Eigenvector Centrality considers the quality of a connection, so that being connected to a central node raises one's centrality in turn; the larger shaded node is more central than the smaller shaded node, although their degrees are equal. The Betweenness quantifies a node's role as an intermediary between nodes by measuring how often it sits on the geodesic (shortest) paths between two nodes; the shaded node, even though its degree is low, is the most central. (b) In PageRank, with probability α a random walker follows a randomly chosen outgoing link (solid lines) to travel to another node, and with probability 1 − α makes a random jump to any node in the network (red dotted lines). The nodes are ranked by their stationary occupation probability. (c) A competition network is a directed network with weighted directional edges, where the weights can represent the number of wins or the points scored by one node against another. Our ranking method is based on a random walk where the edge weights define a gradient between nodes. We use the stationary occupation probability as the measure of a node's strength or weakness, depending on the defined directionality of the gradient. (d) A high degree can unfairly favor or penalize a node, necessitating a degree-neutralizing procedure.

Fig. 1 (b) shows how PageRank works. Devised for ranking webpages in the World Wide Web, with pages as nodes and hyperlinks as directed edges of a network, it employs the concept of a random walk where at each time step the walker moves from a node to another by following a randomly chosen outgoing link with probability α (called the “Google alpha”), or jumps to a randomly chosen node (regardless of connection) with probability 1 − α. Interpreting an incoming edge (i.e. being cited by a webpage) as indicating a node's significance, PageRank is then given by the stationary occupation probability of each node under the random walk. The idea of the random walk is also used in our proposed method explained below, where we discuss the differences between the two methods.
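A generic sketch of this procedure (an illustration, not the paper's code): the stationary probabilities are obtained by iterating the full transition matrix that mixes link-following with the random jump.

```python
import numpy as np

def pagerank(A, alpha=0.85, tol=1e-12):
    """Stationary occupation probabilities of the PageRank random walk.

    A[i][j] = 1 if page i links to page j. With probability alpha the
    walker follows a uniformly random outgoing link; with probability
    1 - alpha it jumps to a node chosen uniformly at random. Dangling
    nodes (no outgoing links) are treated as linking to every node.
    """
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    M = np.empty((n, n))
    for i in range(n):
        out = A[i].sum()
        M[i] = A[i] / out if out > 0 else 1.0 / n
    G = alpha * M + (1 - alpha) / n          # the full ("Google") matrix
    p = np.full(n, 1.0 / n)
    while True:                              # power iteration to the fixed point
        p_next = p @ G
        if np.abs(p_next - p).sum() < tol:
            return p_next
        p = p_next
```

On a directed cycle every node ends up with the same probability, while a node that is reachable only via the random jump ends up lowest.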


Methods

A competition network can be represented as a weighted directed network, as shown in Fig. 1 (c). We call w_ij the weight of the edge to node j from node i, which can be the points scored by i against j in a sports match, the number of times that i beat j in a series of encounters, etc. [4], [5] (This is a matter of convention. In ecology, for instance, it is more common to let an edge point from the prey (loser) to its predator (winner).) Therefore Fig. 1 (c) might represent the result of a soccer game in which team i (left) beat team j by the score of 2:1 (i.e. w_ij = 2 and w_ji = 1).

To determine the global ranking of the nodes from the strongest to the weakest based on the weights {w_ij}, we picture a random walker who travels indefinitely from node to node along the network edges, which have a slope (gradient) defined by the weights. Let us, for the time being, assume a downward slope from the winner to the loser; this will cause the walker to visit the weaker nodes more often, so that we can rank the nodes in order of increasing occupation probability p_1 ≤ p_2 ≤ … ≤ p_n, where n is the number of nodes in the network and Σ_i p_i = 1. We allow, however, the walker to travel up the slope as well as down it, only not as easily. There are two reasons for this. First, since the outcome of a real competition event is inherently stochastic, it may well be the case that a truly weaker node has defeated a stronger opponent, which is called an “upset” in sports parlance. Second, such bidirectional travel prevents the pathological case where the random walker gets stuck at a node with no exits (i.e. a node that has lost all contests).

A possible form for the transition probability between connected nodes i and j (i.e. the adjacency matrix element A_ij = 1) is

    T′_ji = (w_ij + c) / (w_ij + w_ji + 2c),   (1)

where T′_ji denotes the probability of stepping from node i to node j. We put c > 0 to ensure that T′_ji ≠ 0 even when w_ij = 0, e.g. after a scoreless tie in a game.

We need to make one more consideration before presenting the final form of the transition matrix, in light of the case depicted in Fig. 1 (d). Here one could argue that the relationship between the three teams – the left tied with the center, and the center tied with the right – ought to drive the teams' rankings to be equal by the transitive property. We see, however, that the two edges incident upon the center node penalize it by acting as two pathways into it, raising its occupation probability in comparison with the other two teams. We correct for this bias via the following final form for the Markov transition matrix T:

    T_ji = (1 / Z_i) (1 / k_j) (w_ij + c) / (w_ij + w_ji + 2c).   (2)

Here the probability of entering a node j is discounted by its degree k_j, and Z_i is the normalizing factor ensuring that Σ_j T_ji = 1. Finally, the stationary occupation probability vector p is the leading eigenvector of T with eigenvalue 1, i.e. Tp = p [17].

For completeness, we note that we could have equally let the walker prefer to move to the stronger node, in which case the occupation probability would represent the node's strength. This can be achieved by introducing a different transition matrix S, obtained by interchanging w_ij and w_ji in Eq. (1). Its occupation probability vector, which we label q, satisfies Sq = q. Finally, since there is no a priori reason to favor one picture over the other, we combine them into s = q − p to use as the final strength measure.
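A minimal end-to-end sketch of the method as we read it. The assumptions here (anything not stated explicitly in the text): the gradient takes the form (w_ij + c) / (w_ij + w_ji + 2c), entry into a node j is discounted by its degree k_j before normalization, the combined measure is the reversed-gradient occupancy minus the forward-gradient one, and the three-team toy data are our own.

```python
import numpy as np

def stationary(W, A, c=1.0, tol=1e-12):
    """Stationary occupancies of the degree-neutralized random walk.

    W[i][j] is the cumulative score of i against j; A is the symmetric
    schedule adjacency matrix (assumed connected, every node with at
    least one game). Assumed form: gradient (w_ij + c)/(w_ij + w_ji + 2c)
    toward the weaker node, discounted by the target's degree k_j and
    normalized over the targets j.
    """
    W = np.asarray(W, dtype=float)
    A = np.asarray(A, dtype=float)
    k = A.sum(axis=1)                        # node degrees
    g = (W + c) / (W + W.T + 2 * c)          # gradient toward the weaker node
    T = A * g / k[None, :]                   # discount entry into j by its degree
    T /= T.sum(axis=1, keepdims=True)        # each row sums to one
    p = np.full(len(k), 1.0 / len(k))
    while True:                              # power iteration for the
        p_next = p @ T                       # leading left eigenvector of T
        if np.abs(p_next - p).sum() < tol:
            return p_next
        p = p_next

def strength(W, A, c=1.0):
    """Combined measure: reversed-gradient occupancy (strength) minus
    forward-gradient occupancy (weakness); higher means stronger."""
    W = np.asarray(W, dtype=float)
    return stationary(W.T, A, c) - stationary(W, A, c)
```

On a three-team toy season in which team 0 beats both others and team 1 beats team 2, the resulting strengths come out in the order 0 > 1 > 2, matching the win–loss intuition.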

Our method has a number of differences from PageRank, Fig. 1 (b). First, our method allows a bidirectional walk with a gradient defined by the scores, Eq. (1), rendering the random-jump component of PageRank unnecessary as long as the network is connected. Second, the transition probability in our method (Eq. (2)) neutralizes the degree of the potential target node, unlike in PageRank, where having a large indegree generally leads to a higher centrality.

Results and Discussion

To gauge the performance of our method and to better understand its implications, we apply it to two real-world competition networks found in sports: the National Football League (NFL) of the USA and the English Premier League (EPL). The schedules for the NFL (year 2013) and the EPL (year 2012–13) are shown in Fig. 2 as undirected networks. The NFL consists of 32 teams divided into the American Football Conference (AFC) and the National Football Conference (NFC), each further divided into four divisions of 4 teams. Annually they play 256 regular-season games (16 games for each team), the outcomes of which act as the basis of the playoffs that culminate in the championship game (called the Super Bowl). The EPL consists of 20 teams, each playing every other team twice during the season for a total of 38 games per team. Such a full network is called “complete” or a “round-robin.” When two teams play multiple times (in the EPL this happens between every pair of teams, and in the NFL between teams belonging to the same division) we let w_ij and w_ji represent the cumulative points, and update the gradients accordingly. The only necessary condition on c in Eq. (1) is that it be positive.
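How sparse each schedule is can be quantified by the connectance of Eq. (3), the fraction of the n(n−1)/2 possible pairings that actually meet. A quick numerical check (a sketch; the NFL count of 13 distinct opponents per team is our reading of the standard schedule format, not a figure from the text):

```python
def connectance(n_teams, n_distinct_pairings):
    """Connectance rho of Eq. (3): distinct competing pairs divided by the
    n * (n - 1) / 2 possible pairs."""
    return n_distinct_pairings / (n_teams * (n_teams - 1) / 2)

# EPL: 20 teams, every pair meets (twice), so all C(20,2) = 190 pairs are edges.
epl_rho = connectance(20, 190)  # -> 1.0, a complete round-robin
# NFL (assumed standard schedule): each of 32 teams meets 13 distinct
# opponents (divisional rivals play twice), i.e. 32 * 13 / 2 = 208 of 496 pairs.
nfl_rho = connectance(32, 208)
```

The NFL network is thus far from complete, which is where the difference between ranking methods shows up most clearly below.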

Figure 2. The schedule networks for (a) the National Football League (NFL) in 2013 and (b) the English Premier League (EPL) of 2012–2013.

The NFL consists of 32 teams divided equally into American Football Conference and National Football Conference, further divided into four divisions (shaped differently) corresponding to regions in the country. The EPL consists of 20 teams, forming a complete network or a round-robin.

The final ratings and the rankings of the teams are shown in Fig. 3, with the error bars obtained using the jackknife method [18], [19]. We used the 2012 and 2013 regular-season data for the NFL, and the 2012–13 and 2011–12 data for the EPL. We first discuss the results from the NFL (top). In 2013 (top left), the Seattle Seahawks show a noticeably high score in comparison with the other teams, reflecting the dominance they showed during the regular season; as a matter of fact, they proceeded to defeat every opponent in the postseason and capture the championship. The score of 43–8 against the Denver Broncos in the Super Bowl was the most lopsided in NFL history. This is not always the case, though, as the error bars suggest. In 2012 (top right) it was the 14th-ranked Baltimore Ravens that won the championship after entering the postseason as the lowest-seeded team. The result was a surprise to many, as their progress to the championship was considered a series of “upsets” – a lower-ranked team defeating a higher-ranked team. The EPL (bottom) lacks a postseason, but our method agreed with the official EPL ranking system in choosing the four top teams that get to represent the EPL in the UEFA (Union of European Football Associations) Champions League, an annual European competition played between clubs.
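The jackknife error bars reported above can be reproduced generically: recompute the quantity of interest with one game left out at a time and combine the spread of the leave-one-out estimates [18], [19]. A sketch with a toy estimator (in the paper the estimator would be the rating itself):

```python
import math

def jackknife_error(samples, estimator):
    """Leave-one-out jackknife error of a scalar estimator.

    For N samples, compute the estimator on each (N-1)-sample subset and
    return sqrt((N-1)/N * sum((t_i - t_mean)^2)) over the leave-one-out
    estimates t_i.
    """
    n = len(samples)
    loo = [estimator(samples[:i] + samples[i + 1:]) for i in range(n)]
    mean = sum(loo) / n
    return math.sqrt((n - 1) / n * sum((t - mean) ** 2 for t in loo))
```

As a sanity check, applying it to the sample mean reproduces the usual standard error of the mean.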

Figure 3. The final regular season ratings and rankings of the teams for two professional sports, NFL (top) and EPL (bottom).

The errors were estimated using the jackknife method [18], [19]. In NFL 2013 the top-ranked team (the Seattle Seahawks) enjoyed an exceptionally strong regular season and won the Super Bowl championship in dominant fashion.

The lack of upsets – and the resulting stability of the ranking – in the EPL is likely due to its larger connectance ρ, defined as

    ρ = m / (n(n − 1)/2),   (3)

where m is the number of edges, i.e. the number of distinct pairs that have actually played. Since in the EPL two games are played between every pair of teams, ρ = 1, meaning that we have more information on which to base our final predictions. In general, since our ranking method produces a set of ratings of the teams based on data (i.e. past performance), how well it functions as a predictor of future outcomes is an interesting problem to look at. We explore this by studying the weekly prediction accuracies of our method as the season progressed, given by the number of correctly predicted wins divided by the total number of games played (a tied game was considered half correct). They are shown in Fig. 4 as a function of connectance ρ. For reference, we compare our method with others. As it would be infeasible to perform a comprehensive comparison encompassing all existing methods, it is important to select those that are practically impactful or scientifically illustrative. We therefore chose the following four:

Figure 4. The weekly prediction accuracies of our method and four other methods compared.

Predictions were made based on cumulative data. In the case of the NFL (top panels) our method shows a noticeably higher prediction accuracy in comparison with the others, while for the EPL (bottom panels) the methods exhibit smaller differences, mainly due to its significantly higher connectance ρ.

  1. Win–loss differential with tie breaker. This predicts the team with the higher win–loss margin to win. In case the margins are tied, a “tie breaker” is employed: the team that has scored more net points during the season is predicted to win. This is the official ranking system of the EPL.
  2. Park-Newman network ranking method. Developed by Park and Newman in 2005 [5], this method ranks teams according to their generalized wins–losses that take into account the number of indirect paths between nodes. They showed that it corresponds to a directional version of Katz centrality [11].
  3. Colley's matrix method. Devised by W. N. Colley, this ranks nodes by rating scores calculated from an iterative scheme. It is notable for being an official computational method used in US college football, and one of the few whose details are made public [20].
  4. PageRank. In accordance with our definition of the directed edge pointing from the winner to the loser of a game, we interpret the occupation probability as indicating a team's weakness [14].
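The accuracy measure used in this comparison (the fraction of games whose winner is predicted by the current ratings, a tied game counting as half correct) can be sketched as follows; the (i, j, points_i, points_j) tuple format for games is our own:

```python
def prediction_accuracy(ratings, games):
    """Fraction of games correctly predicted by the ratings.

    `games` is a list of (i, j, points_i, points_j) tuples; the
    higher-rated team is predicted to win, and a tied game counts as
    half correct, as in the text.
    """
    score = 0.0
    for i, j, pts_i, pts_j in games:
        if pts_i == pts_j:
            score += 0.5                     # tie: half credit regardless of ratings
        elif (pts_i > pts_j) == (ratings[i] > ratings[j]):
            score += 1.0                     # the predicted winner actually won
    return score / len(games)
```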

The changes in the weekly prediction accuracies are given in Fig. 4 for the five methods. We see that our method consistently outperformed the others in the NFL, while the methods were more or less on par in the EPL (except for PageRank, which noticeably underperformed); the aggregate prediction accuracies over the full seasons (NFL 2013, NFL 2012, EPL 2012–13, and EPL 2011–12) bear this out. While some methods may perform slightly better than ours in the early stages of an EPL season, our method performs equally well or better as the season progresses, reaching a higher final accuracy. The comparison results in Fig. 4 appear to indicate the effectiveness of the aspects of our method that are absent in the other methods, namely the score-based gradients for the random walk and the degree neutralization. PageRank underperforms the other methods noticeably, which we believe can be attributed to its random-jump mechanism acting as an indiscriminate equalizer of nodes.

We now study how quickly the rankings stabilize and converge towards the final ones as a function of ρ. A ranking produced when ρ < 1 is often called a partial ranking. As the season progresses and more games are played, more information becomes available (for us, in the form of updated w_ij and A_ij), and the rankings are likely to stabilize, experiencing fewer and fewer changes. In Fig. 5 we show the weekly rankings of the teams based on cumulative records, connected by colored lines to show more clearly how the rankings changed over time. As expected, the rankings stabilize with the progress of the season, evidenced by the decreasing number of line crossings (switchings of rankings between teams). The numbers of line crossings appear to follow an exponential decay for both sports, with a faster rate of decrease in the beginning than near the end. We can also observe this from the Spearman Rank Correlations (SRC), with jackknife errors, of the weekly rankings against the final ones (so that SRC = 1 when the season ends). The SRC reaches 0.9 in the 11th week for NFL 2013, the 8th week for NFL 2012, the 14th week for EPL 2012–13, and the 11th week for EPL 2011–12, corresponding to only 64.7%, 47.1%, 36.8%, and 28.9% of the full seasons, respectively. Therefore, a ranking substantially similar to the final one is produced before two thirds – often before half – of the season has been played. This also implies that in the later stages the line crossings occur between teams with closer rankings. We see that line crossings are most common and persistent in the middle tier, showing that it is the most competitive: in the EPL the #1 and #2 teams remain stable past midseason (2012–13) or from the very early stages (2011–12), and in the NFL we see similar (albeit slightly weaker) behavior in the top tier. This also suggests another possible reason for the prediction performances we see in Fig. 4 for the EPL: with connectance ρ = 1 (which is very unusual for real-world networks), enough information is available for even the simplest of schemes, and therefore two ranking methods may not differentiate themselves as readily. This tells us that our method is more effective on sparser networks, in this case the NFL. This is in line with the general characteristic of network centralities (some of which are shown in Fig. 1 (a)) that they become less effective as a network becomes denser and the topology more uniform around each node.
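The Spearman Rank Correlation between a given week's ranking and the final one can be computed with the standard formula (a sketch assuming no tied ranks):

```python
def spearman_rank_correlation(rank_a, rank_b):
    """Spearman rank correlation of two rankings without ties:
    rho = 1 - 6 * sum(d_i^2) / (n * (n^2 - 1)), d_i the rank differences."""
    n = len(rank_a)
    d_squared = sum((a - b) ** 2 for a, b in zip(rank_a, rank_b))
    return 1 - 6 * d_squared / (n * (n * n - 1))
```

It equals 1 for identical rankings and −1 for fully reversed ones, so the 0.9 threshold used above marks near-final agreement.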

Figure 5. The stabilization of rankings and convergence towards the final ranking.

The colored lines (top) track the weekly rankings of the teams, which show large fluctuations in the early stages that attenuate as the seasons progress. The fluctuations are quantified by the numbers of line crossings (middle), which show an exponential decrease. This is also reflected in the Spearman Rank Correlation of the weekly rankings with the final ones (bottom), which reaches 0.9 when fewer than 50% of the games have been played, with the exception of NFL 2013, where 64.7% of the games had to be played.


Conclusions

In this paper, we have introduced a random-walk-based ranking method for competition networks. Our method possesses two properties that render it generally applicable to any network: first, walks are allowed in both directions, governed by gradients defined by the edge weights; second, the effect of a high degree is neutralized, eliminating the unwarranted advantage or disadvantage it would otherwise cause when the steady-state occupancy is used as the measure of a node's strength or weakness.

We applied our method to two popular sports, the National Football League and the English Premier League, to explore its performance and potential uses. We compared the prediction accuracy of our method with four other methods, including the win–loss scheme with a net-points-based tie breaker, finding that ours outperforms the others significantly in the NFL and is on par with them in the EPL. We also studied in detail the convergence behavior of the rankings, finding large early-stage instabilities that give way to smaller-scale fluctuations between mid-range teams.

We found that the connectance was an important factor in these behaviors, and that our method was more effective when the network was sparser than the EPL network with ρ = 1. This does not lessen the need for sophisticated methods such as ours, however: since most known real-world networks are sparse, the enhanced performance in such cases is an indicator of the value of such methods.

The strength of our method is that it is generally applicable to any system that can be represented as a network of competitions (edges) between components (nodes) in which a ranking of the nodes is necessary or useful. We have explored only two networks out of the many that could be studied, and we hope to see our method applied to more systems in the future, including networks with many-body (not merely two-body, as in our examples) competitions and those from other practical areas of application, such as product recommendation systems that treat customer reviews as competitions.

Supporting Information

S1 File.

The 2013 regular season game result data for the NFL.


S2 File.

The 2012 regular season game result data for the NFL.


S3 File.

The 2012–13 regular season game result data for the EPL.


S4 File.

The 2011–12 regular season game result data for the EPL.


Author Contributions

Conceived and designed the experiments: SS SEA JP. Performed the experiments: SS JP. Analyzed the data: SS JP. Contributed reagents/materials/analysis tools: SS SEA JP. Wrote the paper: SS SEA JP.


References

  1. Darwin C (2013) On the Origin of Species. London: Arcturus.
  2. Kauffman SA (1993) The origins of order: Self-organization and selection in evolution. Oxford: Oxford University Press.
  3. Drossel B (2001) Biological evolution and statistical physics. Adv Phys 50:209–295.
  4. Williams RJ, Martinez ND (2000) Simple rules yield complex food webs. Nature 404:180–183.
  5. Park J, Newman MEJ (2005) A network-based ranking system for US college football. J Stat Mech 2005:P10014.
  6. Motegi S, Masuda N (2012) A network-based dynamical ranking system for competitive sports. Sci Rep 2:904.
  7. Park J, Yook S (2014) Bayesian inference of natural rankings in competition networks. Sci Rep 4:6212.
  8. Balinski ML, Laraki R (2010) Majority judgment: measuring, ranking, and electing. Cambridge: MIT Press.
  9. Stefani RT (1997) Survey of the major world sports rating systems. J Appl Stat 24:635–646.
  10. Freeman LC (2004) The development of social network analysis: A study in the sociology of science. Vancouver: Empirical Press.
  11. Katz L (1953) A new status index derived from sociometric analysis. Psychometrika 18:39–43.
  12. Freeman LC (1979) Centrality in social networks: conceptual clarification. Soc Networks 1:215–239.
  13. Newman MEJ (2010) Networks: An Introduction. New York: Oxford University Press.
  14. Brin S, Page L (1998) The anatomy of a large-scale hypertextual web search engine. Comput Networks ISDN 30:107–117.
  15. Wasserman S, Faust K (1994) Social network analysis: Methods and applications. Cambridge: Cambridge University Press.
  16. Freeman LC (1977) A set of measures of centrality based on betweenness. Sociometry 40:35–41.
  17. Brémaud P (1999) Markov chains: Gibbs fields, Monte Carlo simulation, and queues. New York: Springer.
  18. Efron B (1979) Computers and the theory of statistics: thinking the unthinkable. SIAM Rev 21:460–480.
  19. Newman MEJ, Barkema GT (1999) Monte Carlo Methods in Statistical Physics. New York: Oxford University Press.
  20. Colley WN (2002) Colley's bias free college football ranking method: the Colley matrix explained. Ph.D. thesis, Princeton University.