
Optimal Allocation of Node Capacity in Cascade-Robustness Networks

  • Zhen Chen,

    Affiliations School of Electronic and Information Engineering, Beihang University, Beijing 100191, People’s Republic of China, Beijing Key Laboratory for Network-based Cooperative Air Traffic Management, Beijing 100191, People’s Republic of China

  • Jun Zhang ,

    buaazhangjun@vip.sina.com (JZ); wenbodu@buaa.edu.cn (WBD)

    Affiliations School of Electronic and Information Engineering, Beihang University, Beijing 100191, People’s Republic of China, Beijing Key Laboratory for Network-based Cooperative Air Traffic Management, Beijing 100191, People’s Republic of China

  • Wen-Bo Du ,

    buaazhangjun@vip.sina.com (JZ); wenbodu@buaa.edu.cn (WBD)

    Affiliations School of Electronic and Information Engineering, Beihang University, Beijing 100191, People’s Republic of China, Beijing Key Laboratory for Network-based Cooperative Air Traffic Management, Beijing 100191, People’s Republic of China, School of Engineering & IT, University of New South Wales at the Australian Defence Force Academy, Canberra, Australia

  • Oriol Lordan,

    Affiliation Universitat Politècnica de Catalunya-BarcelonaTech, C/Colom no. 11, Terrassa 08222, Spain

  • Jiangjun Tang

    Affiliation School of Engineering & IT, University of New South Wales at the Australian Defence Force Academy, Canberra, Australia

Abstract

The robustness of large-scale critical infrastructures, which can be modeled as complex networks, is of great significance. One of the most important means to enhance robustness is to optimize the allocation of resources. Traditional allocation of resources is mainly based on topology information, which is neither realistic nor systematic. In this paper, we build a framework for searching for the most favorable pattern of node capacity allocation that reduces the vulnerability to cascading failures at a low cost. A nonlinear, multi-objective optimization model is proposed and tackled using a particle swarm optimization (PSO) algorithm. It is found that the network becomes more robust and economical when less spare capacity is left on the heavily loaded nodes, and that the optimized network is better at resisting noise. Our work is helpful in designing robust and economical networks.

Introduction

Modern human societies depend heavily on large-scale critical infrastructures to deliver resources and services to consumers and businesses. These infrastructures are complex systems of structural and functional elements. Such systems work reliably in our everyday life. However, as some rare occurrences in the past have shown, they are still vulnerable to major outages, such as telecommunication outages [1], blackouts in power grids [2–4] and financial bankruptcy [5].

The theory of complex networks has emerged in recent years and has proved to be a valid tool to describe, model and quantify complex systems, in particular with respect to system robustness and cascading failures [6–16]. In this field, most researchers focus on two important issues: how to design man-made networks with higher robustness to cascading failures, and how to reduce the cost of maintaining that robustness. High robustness and low cost seem to be contradictory, and it is difficult to achieve both in most cases. For instance, dense networks are more likely to be robust to cascading failures, but more edges always mean more resources and a higher cost. In short, building a robust network is very expensive.

Some models and strategies have been proposed to balance robustness and cost. Motter et al. [17] first introduced a homogeneous capacity-load relationship model which is widely used today. In this model, the capacity of a node is proportional to its initial load, which is based on the flow along the shortest paths. Wang et al. [18] introduced a model in which the capacity of a vertex is assigned as a nonlinear, monotonically increasing function of the load. It aims to enhance the robustness of the network at a lower cost by protecting only the highly loaded vertices.

However, most previous works allocate the capacity resources simply according to network topology information, such as degree and betweenness. Kim et al. [19, 20] argued that these models are unrealistic, because empirical networks always show a nonlinear capacity-load relationship and heavily loaded nodes usually have relatively less unoccupied capacity. In essence, resource allocation is an optimization problem, but the cascading failure process is hard to express in analytic form, rendering traditional optimization methods powerless. Only very recently have intelligent optimization algorithms, which have proven valid for practical optimization problems without closed-form objectives, been applied to network optimization [21–25]. Huang et al. [21] proposed a multi-objective simulated annealing algorithm to optimize the network topology for packet routing. Zhou et al. [22] introduced a memetic algorithm to optimize the structure of networks in order to enhance the robustness of scale-free networks against cascading failures. Fang et al. [23] applied the non-dominated sorting binary differential evolution algorithm to the power generation allocation of the existing buses in the 400 kV French power transmission network. It turns out that optimization algorithms work well in optimizing the robustness of complex networks.

Motivated by these works, in this paper we formulate the problem of resource allocation within a large-scale, nonlinear and multi-objective optimization framework and construct an optimization model for allocating capacity resources. To solve it, an effective algorithm, particle swarm optimization (PSO), is utilized. We investigate the cost-efficiency, capacity-load and vulnerability-cost relationships and the effect of noise.

Materials and Methods

In this section, we describe (1) the network and cascade model, (2) the optimization model of capacity allocation, and (3) the PSO-based solution. For clarity, the terms “node” and “vertex” are used interchangeably in this paper.

Network and cascade model

To capture the heterogeneity of many real-world networks, the Barabási-Albert (BA) scale-free network [26] is adopted. The BA network is generated by starting from a small number m0 of fully connected nodes. The network grows by adding a new node at each time step. This new node is connected preferentially to m (m ≤ m0) old nodes, in such a way that the probability of connecting to an existing node is proportional to that node’s degree. The BA scale-free network exhibits a power-law degree distribution P(k) ∼ k^{−γ}. In the following, the networks are generated with N = 500, m0 = 2 and m = 2.
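For readers who want to reproduce the setup, a minimal Python sketch (illustrative code, not from the original study) generates such a network with the networkx library; note that networkx's generator uses its own initial seed graph rather than m0 fully connected nodes, so it only approximates the construction described above.

```python
import networkx as nx

N, m = 500, 2  # network size and number of edges attached per new node

# Preferential-attachment (BA) scale-free network; heavy-tailed degree distribution
G = nx.barabasi_albert_graph(N, m, seed=42)

print(G.number_of_nodes(), G.number_of_edges())  # 500 nodes, 996 edges
```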

In the cascade model, the node load is estimated through node betweenness if the traffic flow travels along the shortest paths. As in Refs. [17, 27], we adopt the data-transport model, which uses node betweenness to denote the node load:

(1) $L_i = \sum_{s \neq t} \frac{\theta_{st}(i)}{\theta_{st}}$,

where $\theta_{st}(i)$ is the number of shortest paths between nodes s and t that run through node i, and $\theta_{st}$ denotes the total number of available shortest paths from s to t. The capacity of a node is the maximum load that the node can handle. Following Refs. [17, 18], the capacity of node i is assigned as

(2) $C_i = (1 + \alpha_i)\,L_i^0$,

where $L_i^0$ is the initial load and $\alpha_i$ ($\alpha_i \ge 0$) is the tolerance parameter.

The cascading failure is triggered by removing the highest-load node in the network, resulting in a global redistribution of the flow and node loads [17, 18]. If a surviving node is overloaded, i.e. Li > Ci, it fails, leading to further load redistribution. This process continues recursively until no node is overloaded. Moreover, it is assumed that only the nodes in the giant component remain functional.
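A minimal Python sketch of this cascade process is given below; it is our own illustration, not the authors' code. The function and variable names are ours, and loads are taken from networkx's unnormalized betweenness, which is proportional to the load of Eq (1).

```python
import networkx as nx

def node_loads(G):
    """Node load as shortest-path betweenness (proportional to Eq (1))."""
    return nx.betweenness_centrality(G, normalized=False)

def simulate_cascade(G, alpha):
    """Remove the highest-load node, then iteratively remove overloaded nodes.

    alpha: dict mapping node -> tolerance parameter alpha_i >= 0.
    Returns the fraction of failed nodes (the vulnerability V).
    """
    L0 = node_loads(G)
    capacity = {i: (1.0 + alpha[i]) * L0[i] for i in G}   # Eq (2)

    H = G.copy()
    trigger = max(L0, key=L0.get)                          # highest-load node
    H.remove_node(trigger)

    while True:
        if H.number_of_nodes() == 0:
            break
        # only nodes in the giant component remain functional
        giant = max(nx.connected_components(H), key=len)
        H.remove_nodes_from(set(H) - giant)

        loads = node_loads(H)
        overloaded = [i for i in H if loads[i] > capacity[i]]
        if not overloaded:
            break
        H.remove_nodes_from(overloaded)

    return 1.0 - H.number_of_nodes() / G.number_of_nodes()

# Example: uniform ML-style tolerance c = 0.3 on a small BA network
G = nx.barabasi_albert_graph(100, 2, seed=1)
alpha = {i: 0.3 for i in G}
print("V =", simulate_cascade(G, alpha))
```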

Optimal model of capacity allocation

Methods of resource allocation are generally framed to reduce the vulnerability (enhance the robustness) of networks against cascading failures within a limited cost. The variables to be optimized are defined as the tolerance parameter vector $\vec\alpha = (\alpha_1, \alpha_2, \ldots, \alpha_N)$, a vector of network size N. The Motter-Lai (ML) model [17] is the most basic and widely used model; in it, all $\alpha_i$ take the same value c (c > 0). Wang et al. [18] proposed a model in which $\alpha_i$ is a two-valued function built from Θ(x), where Θ(x) = 0 for x < 0 and 1 for x > 0, so that only the highly loaded vertices receive extra capacity. The latter model performs better than the former, so it is plausible that a pattern of $\vec\alpha$ yielding both lower vulnerability and lower cost can be found by letting $\alpha_i$ vary from node to node.

The vulnerability of a network is defined as

(3) $V(\vec\alpha) = \frac{N_f(\vec\alpha)}{N}$,

where $N_f(\vec\alpha)$ is the number of failed vertices under the tolerance vector $\vec\alpha$ and N is the total number of vertices in the network. Obviously, the smaller V is, the more robust the network will be. The cost of the whole-network resources is defined, in accordance with previous works [18], as

(4) $E(\vec\alpha) = \frac{(\vec L^0)^{T}\vec\alpha}{(\vec L^0)^{T}\mathbf{1}} = \frac{\sum_i \alpha_i L_i^0}{\sum_i L_i^0}$,

where $\mathbf{1}$ is a column vector of ones. The larger E is, the more resources the network costs.

The cascade-robustness network is a network with redundant vertex capacity. However, redundancy usually means extra cost. Robustness and cost are expected to be in opposition, requiring trade-offs. Therefore, we define the overall objective function of a network by aggregating vulnerability and cost through an adjusting parameter ω (0 < ω ≤ 1) that balances the importance of vulnerability and cost:

(5) $F(\vec\alpha) = \omega\, V(\vec\alpha) + (1 - \omega)\, E(\vec\alpha)$.

When ω is small, we pay more attention to controlling the cost rather than reducing the vulnerability. On the contrary, when ω is large, we focus more on enhancing the network robustness and care less about the cost. It is easily found that the smaller the objective function is, the better the results are.

Maximum cost is always a constraint in real-world projects, which are subject to a budget. Thus, it is necessary to set the whole-network maximum cost as a constraint. Since the ML model is the fundamental model, without loss of generality we set the whole-network cost of the initial ML pattern as the maximum cost λ.

Then the node capacity allocation optimization model can be formulated as

(6) $\min_{\vec\alpha} F(\vec\alpha) = \omega V(\vec\alpha) + (1-\omega)E(\vec\alpha)$, subject to $E(\vec\alpha) \le \lambda$ and $\alpha_i \ge 0$, $i = 1, \ldots, N$.

The dimension of the variable $\vec\alpha$ is N, the number of vertices in the network. $F(\vec\alpha)$ is the objective function, a linear combination of vulnerability and cost that has no closed-form analytic expression. λ is the upper bound of the total node capacity cost. A value of $\vec\alpha$ is a feasible solution if it satisfies all the constraints; otherwise it is an infeasible solution. The initial ML pattern itself is one feasible solution, which means feasible solutions do exist.
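As an illustration of how a candidate tolerance vector would be evaluated in this model, the sketch below is our own code (names and signatures are ours): it uses the load-weighted cost of Eq (4) and a caller-supplied vulnerability function, so it stays independent of any particular cascade implementation.

```python
from typing import Callable, Sequence

def cost(alpha: Sequence[float], L0: Sequence[float]) -> float:
    """Whole-network cost E, Eq (4): extra capacity normalized by total initial load."""
    return sum(a * l for a, l in zip(alpha, L0)) / sum(L0)

def objective(alpha: Sequence[float], L0: Sequence[float],
              vulnerability: Callable[[Sequence[float]], float],
              omega: float = 0.8) -> float:
    """Aggregated objective F = omega * V + (1 - omega) * E, Eq (5)."""
    return omega * vulnerability(alpha) + (1.0 - omega) * cost(alpha, L0)

def is_feasible(alpha: Sequence[float], L0: Sequence[float], lam: float) -> bool:
    """Constraints of Eq (6): non-negative tolerances and the cost budget lambda."""
    return all(a >= 0.0 for a in alpha) and cost(alpha, L0) <= lam
```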

Particle swarm optimization based solution

Particle swarm optimization is an effective optimization algorithm proposed by Kennedy and Eberhart [28], inspired by the social behavior of swarms such as fish schooling or bird flocking [29]. In PSO, to search for the global optimum, a flock of particles moves in a constrained parameter space, interacting with each other and updating their positions and velocities. Owing to its simplicity, effectiveness and low computational cost, PSO is widely used in solving practical optimization problems [30, 31] and a number of variations exist [32–36].

For our capacity allocation optimization problem with N variables and an objective function $F(\vec\alpha)$, the PSO algorithm represents the potential solutions with a flock of particles. Each particle i has a position $\vec{x}_i$ and a velocity $\vec{v}_i$ in the N-dimensional space. The position represents the tolerance parameter vector $\vec\alpha$ and the velocity represents the variation of $\vec\alpha$. The goal is to find a position of some particle i that makes the objective function minimum. Initially the particles’ positions and velocities are generated randomly within the constraint. Then, at each iteration, each particle updates its velocity and position according to the following equations [28]:

(7) $\vec{v}_i \leftarrow \eta\left[\vec{v}_i + \vec{U}[0, c_1] \otimes (\vec{p}_i - \vec{x}_i) + \vec{U}[0, c_2] \otimes (\vec{g}_i - \vec{x}_i)\right]$

(8) $\vec{x}_i \leftarrow \vec{x}_i + \vec{v}_i$

where $\eta = 2 \big/ \left|2 - \varphi - \sqrt{\varphi^2 - 4\varphi}\right|$ and φ = c1 + c2 > 4. Here $\vec{p}_i$ is the best historical position of particle i, $\vec{g}_i$ is the best historical position found by i’s neighbors, and c1 and c2 are the acceleration coefficients. $\vec{U}[a, b]$ is a vector of random numbers drawn at each iteration from the uniform distribution on [a, b], and ⊗ denotes component-wise multiplication. Therefore, c1 and c2 balance the impacts of each particle’s own and its neighbors’ experiences, and η indicates the learning rate. Based on previous extensive analysis [37], we choose the setting c1 = c2 = 2.05 and η = 0.7298. The number of particles is 50 and the number of iterations is 1000.
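A compact numpy sketch of the constriction-factor update in Eqs (7) and (8) follows. This is illustrative code with our own names; the neighborhood topology and constraint handling are omitted, and positions are simply clipped to keep the tolerances non-negative, which is our own choice for the sketch.

```python
import numpy as np

C1, C2 = 2.05, 2.05
PHI = C1 + C2
ETA = 2.0 / abs(2.0 - PHI - np.sqrt(PHI**2 - 4.0 * PHI))   # constriction factor, = 0.7298...

def pso_step(x, v, p_best, g_best, rng):
    """One constriction-factor PSO update, Eqs (7) and (8).

    x, v: (n_particles, N) position and velocity arrays.
    p_best: each particle's own best position; g_best: best position among its neighbors.
    """
    r1 = rng.uniform(0.0, C1, size=x.shape)   # U[0, c1], drawn component-wise
    r2 = rng.uniform(0.0, C2, size=x.shape)   # U[0, c2]
    v = ETA * (v + r1 * (p_best - x) + r2 * (g_best - x))   # Eq (7)
    x = x + v                                               # Eq (8)
    return np.clip(x, 0.0, None), v           # keep tolerances non-negative

# Example: 50 particles in an N = 500 dimensional space
rng = np.random.default_rng(0)
x = rng.uniform(0.0, 1.0, size=(50, 500))
v = np.zeros_like(x)
x, v = pso_step(x, v, p_best=x.copy(), g_best=x[0], rng=rng)
```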

To handle the constraints, we follow Ref. [38]:

1) A violation function $\Phi(\vec\alpha)$ is assigned to make up the final fitness function together with $F(\vec\alpha)$. The violation function measures the degree of constraint violation:

(9) $\Phi(\vec\alpha) = \max\{0,\ E(\vec\alpha) - \lambda\}$

2) A better solution is chosen by the tournament selection operator. The following three criteria are applied during selection:
    1. when two feasible solutions are compared, the one with the smaller value of $F(\vec\alpha)$ is chosen;
    2. when two infeasible solutions are compared, the one with the smaller value of $\Phi(\vec\alpha)$ is chosen;
    3. when one feasible and one infeasible solution are compared, the infeasible solution is chosen only if its $F(\vec\alpha)$ is smaller and its $\Phi(\vec\alpha)$ is less than a constant ε; otherwise the feasible solution is chosen.

3) The constant ε is used to limit the number of infeasible solutions. To keep the proportion of infeasible solutions at a fixed value l, ε is defined as:

(10)

It is worth noting that a handful of infeasible solutions do exist in the population. They are helpful because the optimal solution sometimes lies on the boundary of the feasible region.
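For concreteness, the violation function and the three comparison criteria can be written as a small routine; the sketch below is our own illustration (the adaptive ε of Eq (10) is not reproduced and is simply passed in as a parameter).

```python
def violation(E: float, lam: float) -> float:
    """Constraint violation, Eq (9): how far the cost exceeds the budget lambda."""
    return max(0.0, E - lam)

def better(sol_a, sol_b, eps: float):
    """Tournament comparison of two solutions, each given as a (F, Phi) pair.

    F is the objective value, Phi the constraint violation (0 means feasible).
    Returns the preferred solution according to the three criteria.
    """
    (f_a, phi_a), (f_b, phi_b) = sol_a, sol_b
    if phi_a == 0.0 and phi_b == 0.0:            # (i) both feasible: smaller F wins
        return sol_a if f_a <= f_b else sol_b
    if phi_a > 0.0 and phi_b > 0.0:              # (ii) both infeasible: smaller Phi wins
        return sol_a if phi_a <= phi_b else sol_b
    feas, infeas = (sol_a, sol_b) if phi_a == 0.0 else (sol_b, sol_a)
    # (iii) infeasible wins only if it has smaller F and its violation is below eps
    if infeas[0] < feas[0] and infeas[1] < eps:
        return infeas
    return feas

# Example: a slightly infeasible but better solution may still be kept
print(better((0.30, 0.0), (0.25, 0.01), eps=0.05))   # -> (0.25, 0.01)
```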

Results and Discussion

Solving the optimization model of resource allocation with the PSO algorithm described in the previous section, we analyze (1) the optimization performance of the network, (2) the cost-efficiency, (3) capacity-load and (4) vulnerability-cost relationships, and (5) the effect of noise.

Optimization performance of the network

We start by analyzing the optimization performance on BA scale-free networks. The basic Motter-Lai model [17] is taken as the initial model for comparison. As reducing the vulnerability is the main purpose of real network design, we set the adjusting parameter ω = 0.8; in our optimization model this is also a well-considered trade-off, which is discussed below in the subsection on the vulnerability-cost relationship. Fig 1A shows that the objective function is much smaller after optimization for all values of λ: the optimized capacity allocation pattern does perform better than the initial model pattern. To give a complete picture, Fig 1B and 1C show the vulnerability and cost of the network, respectively. The network’s vulnerability decreases monotonically as the maximum cost increases in both the optimized and non-optimized situations, which agrees with the intuition that more redundancy results in lower vulnerability. The optimized networks are always less vulnerable than the initial ones. Regarding the cost, the optimized pattern costs only approximately 70% of the initial pattern. Consequently, the optimized networks are more robust at a lower cost.

Fig 1. Performance of the optimized pattern and the initial model pattern under different maximum costs.

(a) Objective function. (b) Vulnerability of the network. (c) Cost of the network. Each point is averaged from 20 different networks. The adjusting parameter is set to ω = 0.8. The algorithm runs for 1,000 iterations each time.

https://doi.org/10.1371/journal.pone.0141360.g001

Cost-efficiency relationship of the network

In order to further investigate the robustness and cost of the network, we set up a cost-efficiency indicator:

(11) $H = \frac{1 - V(\vec\alpha)}{E(\vec\alpha)}$

Here H represents the average robustness gained per unit of cost, which is akin to the marginal utility in economics. Fig 2 shows that the cost-efficiency of the optimized pattern is higher than that of the initial pattern, especially when the maximum cost λ is small. In general terms, if the resources are allocated according to the optimized pattern, each unit of cost produces a much larger robustness benefit, and more nodes are able to survive a cascading failure. Besides, as λ increases, the contribution of each additional unit of cost to the network’s robustness becomes smaller. This observation resembles the law of diminishing marginal utility. It shows that it is not wise to invest too much extra cost in protecting facilities when designing a system.

Fig 2. The cost-efficiency indicator H of different maximum costs.

Each point is averaged from 20 different networks. The adjusting parameter is set to ω = 0.8. The algorithm runs for 1,000 iterations each time.

https://doi.org/10.1371/journal.pone.0141360.g002

Capacity-load relationship of the network

Now we investigate the relationship between the optimized node capacity and the initial node load, reported in Fig 3. The capacity-load relationship in the initial model is linear and the tolerance parameter αi is the same for all nodes. On the contrary, the relationship in the optimized model is nonlinear and αi varies from node to node. What is more, Fig 3 shows that nodes with large initial loads tend to have less unoccupied capacity in the optimized model. This interesting phenomenon is due to the fact that the flow fluctuations at heavily loaded nodes are relatively small when cascading failures occur and flow is redistributed. Assigning excessive unoccupied capacity to heavily loaded nodes is therefore a waste of resources and does not greatly reinforce the robustness of the network. Assigning more resources to some lightly loaded but fragile nodes, or simply removing this redundant capacity, is a better alternative. It is worth noting that this finding is consistent with empirical observations in systems such as airport networks, power grids and the Internet router network [19, 20].

Fig 3. Capacity-load relationship of the network.

(a) Node capacity as a function of node initial load. (b) Node tolerance parameter α as a function of node initial load. The adjusting parameter is set to ω = 0.8. The maximum cost λ = 1.0. The algorithm runs for 1,000 iterations each time.

https://doi.org/10.1371/journal.pone.0141360.g003

Vulnerability-cost relationship of the network

There is a critical parameter in the optimization model of resource allocation, ω, which balances vulnerability against cost in the objective function. Fig 4 shows the Pareto frontiers for ω ranging from 0.01 to 1 (since the vulnerability of the network must be taken into account, ω cannot be equal to 0). As we can see, all the solutions of the initial model pattern are dominated by the solutions of the optimized pattern, which means that no matter what the adjusting parameter ω is, the results of the optimized model are always better. When ω is relatively small, the solutions mainly gather in the upper-left corner: networks that follow these node capacity allocation patterns possess a lower cost as well as a higher vulnerability. When ω is relatively large, the solutions tend to distribute more evenly: networks following these allocation patterns are more robust against cascading failures at a relatively low cost. Therefore, the parameter ω can be chosen in accordance with the actual demand. It should also be noted that the solutions for ω = 1 do not lie on the frontier; they are dominated by other solutions, indicating that enhancing robustness at all costs is not a good strategy. ω = 0.6 and ω = 0.8 are well-considered trade-offs between vulnerability and cost.

Fig 4. The Pareto frontiers of various values for parameter ω.

Each point is averaged from 20 different networks. The algorithm runs for 1,000 iterations each time.

https://doi.org/10.1371/journal.pone.0141360.g004

Noise effect

In reality, node capacity inevitably fluctuates as a result of outside interference such as noise. Thus, we are interested in how the presence of noise impacts the cascade behavior within our optimized model. Following Ref. [18], the effect of noise is introduced as an erroneous assignment of the capacity function. In detail, at a given error probability q (qO and qI for the optimized and initial models, respectively), node i’s capacity is changed from Ci to C′i:

(12) $C_i' = (1 + \tau)\, C_i$,

where τ (τ ∈ [−0.2, 0.2]) obeys a uniform distribution with zero mean. This is plausible in real-world applications because ignorance of the true value of each node’s load often results in an erroneous assignment of that node’s capacity. Fig 5 shows that the vulnerability of both the optimized network and the initial network increases as the error probabilities qO and qI increase, indicating the negative effect of noise. For the same error probability, although the redundant capacity is reduced (see the inset of Fig 5), the optimized network performs better, especially when λ is small. The optimized pattern of node capacity allocation thus has a stronger ability to resist noise.
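A short numpy sketch of this noise model follows; it is our own illustration (names are ours) and assumes the multiplicative perturbation of Eq (12).

```python
import numpy as np

def add_capacity_noise(capacity, q, rng, spread=0.2):
    """With probability q, perturb a node's capacity: C_i -> (1 + tau) * C_i,
    where tau is uniform on [-spread, spread] (zero mean), as in Eq (12)."""
    capacity = np.asarray(capacity, dtype=float)
    hit = rng.random(capacity.shape) < q                  # nodes with erroneous assignment
    tau = rng.uniform(-spread, spread, size=capacity.shape)
    return np.where(hit, (1.0 + tau) * capacity, capacity)

rng = np.random.default_rng(0)
noisy = add_capacity_noise([10.0, 5.0, 2.0], q=0.5, rng=rng)
```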

Fig 5. The effect of noise.

The dotted lines represent the initial networks and the solid lines represent the optimized networks. Each point is averaged from 20 different networks. The adjusting parameter is set to ω = 0.8.

https://doi.org/10.1371/journal.pone.0141360.g005

Conclusions

In summary, we propose a framework for searching for the most favorable pattern of node capacity allocation. It is modeled as a large-scale, nonlinear and multi-objective problem and solved by particle swarm optimization. We demonstrate that the optimized pattern does indeed make the network less vulnerable while at the same time reducing the cost of assigning capacities. We also introduce an indicator H to measure the cost-efficiency of the network. It is found that the cost-efficiency increases substantially through optimization and approximately follows the law of diminishing marginal utility. Besides, we find that the capacity-load relationship of the optimized network is nonlinear: heavily loaded nodes tend to be assigned less spare capacity. Finally, the effect of noise is investigated, and the optimized pattern of capacity allocation shows a stronger ability to resist noise.

The optimization framework we propose here is systematic and generally applicable; it can easily be adapted to other circumstances, with different constraints and objectives. Our results show that it is not wise to invest too much extra cost in protecting facilities when designing a network or a system, because the cost-efficiency is very small at a large maximum cost. Moreover, assigning less unoccupied capacity to the heavily loaded nodes and more extra capacity to some lightly loaded nodes makes the network more robust at a lower cost. We believe that our work should be helpful in designing infrastructure networks from an economic point of view.

Author Contributions

Conceived and designed the experiments: ZC JZ WBD OL JJT. Performed the experiments: ZC JZ WBD. Analyzed the data: ZC JZ WBD OL JJT. Contributed reagents/materials/analysis tools: ZC JZ WBD OL. Wrote the paper: ZC JZ WBD OL JJT.

References

  1. Newman ME, Forrest S, Balthrop J. Email networks and the spread of computer viruses. Phys. Rev. E 2002; 66: 035101.
  2. Final report on the August 14, 2003 blackout in the United States and Canada: Causes and recommendations. Tech. Rep. 2004.
  3. Final report system disturbance on 4 Nov. 2006. Tech. Rep. 2007.
  4. Romero JJ. Blackouts illuminate India’s power problems. IEEE Spectr. 2012; 49: 11–12.
  5. Battiston S, Gatti DD, Gallegati M, Greenwald BCN, Stiglitz JE. Credit chains and bankruptcy propagation in production networks. J. Econom. Dyn. Control 2007; 31: 2061–2084.
  6. Crucitti P, Latora V, Marchiori M. Model for cascading failures in complex networks. Phys. Rev. E 2004; 69: 045104.
  7. Mirzasoleiman B, Babaei M, Jalili M, Safasi M. Cascade failures in weighted networks. Phys. Rev. E 2011; 84: 046114.
  8. Moreira AA, Andrade JS Jr, Herrmann HJ, Indekeu JO. How to make a fragile network robust and vice versa. Phys. Rev. Lett. 2009; 102: 018701. pmid:19257248
  9. Zhao L, Park K, Lai Y-C, Ye N. Tolerance of scale-free networks against attack-induced cascades. Phys. Rev. E 2005; 70: 035101.
  10. Simonsen I, Buzna L, Peters K, Bornholdt S, Helbing D. Transient dynamics increasing network vulnerability to cascading failures. Phys. Rev. Lett. 2008; 100: 218701. pmid:18518644
  11. Albert R, Jeong H, Barabási AL. Error and attack tolerance of complex networks. Nature 2000; 406: 378–382. pmid:10935628
  12. Lordan O, Sallan JM, Simo P, Gonzalez-Prieto D. Robustness of the air transport network. Transp. Res. Part E Logist. Transp. Rev. 2014; 68: 155–163.
  13. Cohen R, Erez K, Ben-Avraham D, Havlin S. Breakdown of the Internet under intentional attack. Phys. Rev. Lett. 2001; 86: 3682. pmid:11328053
  14. Holme P, Kim BJ, Yoon CN, Han SK. Attack vulnerability of complex networks. Phys. Rev. E 2002; 65: 056109.
  15. Wang B, Tang HW, Guo CH, Xiu ZL. Entropy optimization of scale-free networks’ robustness to random failures. Physica A 2006; 363: 591–596.
  16. Lordan O, Sallan JM, Simo P, Gonzalez-Prieto D. Robustness of airline alliance route networks. Commun. Nonlinear Sci. Numer. Simul. 2015; 22: 587–595.
  17. Motter AE, Lai YC. Cascade-based attacks on complex networks. Phys. Rev. E 2002; 66: 065102.
  18. Wang B, Kim BJ. A high-robustness and low-cost model for cascading failures. Europhys. Lett. 2007; 78: 48001.
  19. Kim DH, Motter AE. Fluctuation-driven capacity distribution in complex networks. New J. Phys. 2008; 10: 053022.
  20. Kim DH, Motter AE. Resource allocation pattern in infrastructure networks. J. Phys. A: Math. Theor. 2008; 41: 224019.
  21. Huang W, Chow TWS. Network topological optimization for packet routing using multi-objective simulated annealing method. Physica A 2010; 389: 871–880.
  22. Zhou MX, Liu J. A memetic algorithm for enhancing the robustness of scale-free networks against malicious attacks. Physica A 2014; 410: 131–143.
  23. Fang YP, Zio E. Optimal production facility allocation for failure resilient critical infrastructures. Proc. 22nd ESREL Annu. Conf. 2013: 2605–2612.
  24. Xia YX, Hill D. Optimal capacity distribution on complex networks. Europhys. Lett. 2010; 89: 58004.
  25. Liu HY, Xia YX. Optimal resource allocation in complex communication network. IEEE Trans. Circuits Syst. II: Express Briefs 2015; 62: 706–710.
  26. Barabási AL, Albert R. Emergence of scaling in random networks. Science 1999; 286: 509–512. pmid:10521342
  27. Tan F, Xia YX, Zhang WP, Jin XY. Cascading failures of loads in interconnected networks under intentional attack. Europhys. Lett. 2013; 102: 28009.
  28. Kennedy J, Eberhart R. Particle swarm optimization. Proceedings of the International Conference on Neural Networks 1995: 1942–1948.
  29. Vicsek T, Czirók A, Ben-Jacob E, Cohen I, Shochet O. Novel type of phase transition in a system of self-driven particles. Phys. Rev. Lett. 1995; 75: 1226–1229.
  30. Selvan SS, Xavier CC, Karssemeijer N. Parameter estimation in stochastic mammogram model by heuristic optimization techniques. IEEE Trans. Inf. Technol. Biomed. 2006; 10: 685–695.
  31. Gaing ZL. Particle swarm optimization to solving the economic dispatch considering the generator constraints. IEEE Trans. Power Syst. 2003; 18: 1187–1195.
  32. Mendes R, Kennedy J, Neves J. Watch thy neighbor or how the swarm can learn from its environment. IEEE Proc. Swarm Intell. Symp. 2003: 88–94.
  33. Li CH, Yang SX, Trung TN. A self-learning particle swarm optimizer for global optimization problems. IEEE Trans. Syst., Man, Cybern. B, Cybern. 2012; 42: 627–646.
  34. Liu C, Du WB, Wang WX. Particle swarm optimization with scale-free interaction. PLoS ONE 2014; 9: e97822. pmid:24859007
  35. Gao Y, Du WB, Yan G. Selectively-informed particle swarm optimization. Sci. Rep. 2015; 5: 9295. pmid:25787315
  36. Du WB, Gao Y, Liu C, Zheng Z, Wang Z. Adequate is better: particle swarm optimization with limited-information. Appl. Math. Comput. 2015; 268: 832–838.
  37. Clerc M, Kennedy J. The particle swarm-explosion, stability, and convergence in a multidimensional complex space. IEEE Trans. Evol. Comput. 2002; 6: 58–73.
  38. Deb K, Agrawal S. A niched-penalty approach for constraint handling in genetic algorithms. Artificial Neural Nets and Genetic Algorithms 1999: 235–243.