SCM: A method to improve network service layout efficiency with network evolution

Network services are an important component of the Internet and are used by third-party developers to extend network functions. Network function virtualization (NFV) can improve the speed and flexibility of network service deployment. However, as the network evolves, the network service layout may become inefficient. To address this problem, this paper proposes a service chain migration (SCM) method within the framework of "software defined network + network function virtualization" (SDN+NFV), which migrates service chains to adapt to network evolution and improves the efficiency of the network service layout. SCM is modeled as an integer linear programming problem and solved via particle swarm optimization. An SCM prototype system is designed based on an SDN controller. Experiments demonstrate that SCM can reduce the network traffic cost and energy consumption effectively.


Introduction
Middleboxes [1], hardware-based network services, are widely deployed in the Internet and are recognized as important network components; examples include firewalls, intrusion detection/prevention systems (IDS/IPS), load balancers, proxies, network address translators (NAT), and wide area network (WAN) optimizers. Service chains [2] are created by combining these service instances according to network policies and user requirements. Service chains satisfy various needs of users and provide value-added services to networks.
The flexibility and extensibility of service deployment can be improved by network function virtualization (NFV) [3][4][5]. In NFV, special-purpose hardware middleboxes are replaced by virtual services, so service chains can be customized and created dynamically. In addition, software defined networking (SDN) [6][7][8] is widely used to orchestrate network services according to policies and to steer traffic through specific service chains [9][10][11][12]. The "SDN+NFV" network paradigm [13] provides a flexible, scalable, and adaptable architecture for deploying virtual services and creates new opportunities for service chain management.
Based on the "SDN+NFV" architecture, there has been extensive research on service chain deployment [12][14][15][16][17]. In these studies, service chains are deployed with consideration of quality of service, resource allocation, and network security. However, these placement approaches only consider the static network status at deployment time and neglect subsequent changes of the network state. As the network evolves, some old flows disappear and new flows arise. The network status therefore changes, and the service chain deployment may lose optimality, resulting in a waste of network resources and energy. Moreover, the quality of service will decrease due to the degraded service chain layout.
This problem is difficult to resolve at deployment time because the evolution of the network state can hardly be anticipated. However, if the deployment of service chains is kept dynamic throughout their life cycles, the network resource allocation can be adjusted to adapt to the changing network status. In this paper, a service chain migration (SCM) framework is proposed to address this problem, in which service chains are migrated dynamically to adapt to network evolution. SCM satisfies policy demands and improves the effectiveness of the network service instance layout. We model SCM as an integer linear programming (ILP) problem and use particle swarm optimization (PSO) [18] to solve it.
The contributions of this work are as follows.
1. Two scenarios are shown to illustrate that the efficiency of the network resource layout decreases with network evolution and that service chain migration can improve the situation.
2. An SCM framework is proposed to optimize the network resource layout. We model the SCM as an ILP problem, and a modified PSO algorithm is implemented to provide the optimal solution under certain network resource constraints.
3. An SCM prototype system is designed based on an SDN controller [19,20], and the efficiency of the SCM is corroborated by several numerical simulations.

Related work
In traditional networks, services are realized by middleboxes, which are a major way for third-party developers to extend network functions. Extensive research has been conducted on middlebox management. StEERING [12], a flexible and extensible middlebox management framework, can efficiently steer network traffic through middleboxes. SIMPLE [14], a service policy enforcement layer, can instantiate network service policies into service chains, considering load balance and network costs during service chain construction. Stratos [9], a network service orchestration framework, considers changes in network load and constructs service chains efficiently. These three frameworks focus on service chain construction to optimize traffic or load balance. However, they do not consider network evolution, which may limit the effectiveness of the network. Similar to the work above, SCM is also designed to manage network services, but its goal is to optimize service layouts under network evolution. SCM dynamically adjusts the deployed service chains, and the network resource allocation is optimized according to the network state. In addition, SCM brings further benefits such as traffic optimization, energy saving, and lower network cost.
Virtual machine migration has been proven to increase the resource utilization rate and reduce energy consumption. A dynamic virtual machine migration method is proposed in [21], where migration traffic, available bandwidth, and balance of bandwidth capacity are considered during migration. In [22], virtual machine migration and network routing optimization are utilized to reduce the network energy consumption in the data center. Nidhi et al. [23] proposed an energy-aware virtual machine migration method in a cloud environment to improve the efficiency and decrease the energy consumption of the network.
For energy saving, a novel placement selection policy for live virtual machine (VM) migration named PS-ES [24] has been proposed. This method combines the particle swarm optimization algorithm with simulated annealing to obtain the placement selection policy for live VM migration. From the viewpoint of VM control, Wen et al. propose LS-STR [25], an adaptive controller for data centers based on the least-squares self-tuning regulator. The method can adjust VM resources dynamically and reduce the energy cost. Instead of migrating a single virtual machine or service, as in the studies above, we migrate service chains to further optimize traffic cost and energy consumption. More specifically, SCM migrates multiple virtual machines periodically, and the best migration strategy is determined by solving an ILP problem.

Motivating scenarios
For convenience, the abbreviations used throughout this paper are listed in Table 1.
Since the demands of users are dynamic, new policies are added in the network and outdated policies are deleted. With the network evolution, the fixed layout of service instances may lower the utilization efficiency of network resources. However, by migrating service chains, the efficiency of resource layout may be improved. In this paper, two scenarios are shown where service chain migration can reduce traffic cost and energy consumption, respectively.
1. To optimize service layout and reduce traffic cost: The network state evolution may lower the efficiency of the service layout. As shown in Fig 1(A), there are two service instances e_1 and e_2 in the network, and the traffic cost on each link is shown in the figure. Two policies are deployed successively in the network. Policy 1: the communication between hosts h_1 and h_2 needs the two service instances e_1 and e_2; the traffic cost is Cost(h_1,h_2) = 9. Policy 2: hosts h_3 and h_4 need service instance e_2 during communication; the traffic cost is Cost(h_3,h_4) = 19, which is rather high. If service instance e_2 is migrated to the server connected with s_7, the service chains will be migrated accordingly. The paths h_1 → h_2 and h_3 → h_4 will be adjusted as shown in Fig 1(B). In this case, Cost(h_1,h_2) = 10 and Cost(h_3,h_4) = 11, so the total traffic cost decreases from 28 to 21.

2. To reduce the number of running servers and save energy: As the network evolves over time, one server may carry only one or very few running service instances, which leads to a waste of energy. As shown in Fig 2(A), the current routing paths h_1 → h_2 and h_1 → h_4 are marked by dashed arrows, and both policies need the two service instances e_1 and e_2. Server server_1 carries only one service instance e_1, while server_2 still has sufficient resources to hold one more service instance. Obviously, energy consumption will be reduced if we migrate e_1 to server_2 (Fig 2(B)) and turn off server_1.

Modeling
To solve the problem of inefficient resource layout caused by network evolution, service chains are migrated periodically under constraints. Service chain migration means moving the service instances of deployed service chains from one server to another without violating the policy demands. We first present the relevant definitions, then propose the service chain migration model, and finally present an intelligent optimization algorithm to solve the model.

Basic definitions
Service instance: A service instance is an instantiated piece of service equipment, represented as a 3-tuple e = <ID, T, V>, where ID is the unique identification of the service instance, T is the type of service, and V is the amount of data to be transferred when migrating this service instance. Let E = {e_1, e_2, …, e_h} denote the set of all service instances deployed in the network.
Service chain: A service chain is a group of sequential service instances that satisfies a user's specific communication demands. Let E_p = {e_1, …, e_j, …, e_|p|} denote a service chain.
Policy: A policy is defined as p ≜ (O_p, E_p, D_p), where O_p denotes the policy target, i.e., the specific traffic processed by policy p; E_p denotes the service chain that processes traffic O_p; and D_p denotes the policy demands, where d_p^b denotes the bandwidth demand and d_p^c denotes the path cost demand. The number of service instances in E_p is denoted by |p|.
Server: A server is a running platform for virtual service instances, which provides storage and bandwidth resources for multiple service instances. Let Z = {z_1, z_2, …, z_k} be the set of all servers in the network. For z_s ∈ Z, w_s denotes the maximum number of service instances that z_s can carry, and t_s denotes the maximum bandwidth that z_s can process.
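For concreteness, the definitions above can be sketched as plain data structures. A minimal Python sketch follows; the class and field names are ours for illustration and are not part of the SCM prototype:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class ServiceInstance:            # e = <ID, T, V>
    id: int                       # ID: unique identification
    type: str                     # T: service type (e.g., "firewall", "NAT")
    volume: float                 # V: data to transfer when migrating this instance

@dataclass
class Policy:                     # p = (O_p, E_p, D_p)
    target: str                   # O_p: the traffic processed by this policy
    chain: List[ServiceInstance]  # E_p: ordered service chain
    bw_demand: float              # d_p^b: bandwidth demand
    cost_demand: float            # d_p^c: path cost demand

@dataclass
class Server:                     # z_s
    id: int
    capacity: int                 # w_s: max number of instances it can carry
    bandwidth: float              # t_s: max bandwidth it can process
    instances: List[ServiceInstance] = field(default_factory=list)

# Example: a two-service chain for traffic between two hosts
fw = ServiceInstance(1, "firewall", 2.0)
ids = ServiceInstance(2, "IDS", 3.0)
p = Policy("h1->h2", [fw, ids], bw_demand=5.0, cost_demand=20.0)
```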

SCM model
Compared with traditional virtual machine migration, the SCM model is more complex for two reasons. First, policy demands such as bandwidth and path cost are taken into account in SCM. Second, the migration of one service instance in a service chain may affect the migration of other instances in the chain because of the correlations between them. We intend to obtain the best migration strategy under multiple factors to minimize the network cost. We define a utility value as the measurement of the network cost of a migration and seek the minimum utility value under certain constraints; SCM can therefore be regarded as a minimization problem. As described below, the objective function and constraints are linear in our study, and the solutions of SCM belong to the integer domain. Therefore, we model SCM as an ILP problem.
For a migration M, the optimization goal of SCM is shown in Eq (1), where Utility is the utility value of migration M; U is the traffic cost of all network policies; R is the number of running servers; Q is the migration cost of the service instances; α_1, α_2, α_3 are the weights of U, R, and Q, respectively, whose sum is 1; and β_1, β_2, β_3 are parameters that unify the dimensions of U, R, and Q.
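From the description above, the objective can be written as follows; this is our reconstruction, consistent with the listed terms, and the exact normalization in the paper's Eq (1) may differ:

```latex
\min_{M}\; Utility(M) = \alpha_1 \beta_1 U + \alpha_2 \beta_2 R + \alpha_3 \beta_3 Q,
\qquad \alpha_1 + \alpha_2 + \alpha_3 = 1 .
```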
The whole-network traffic cost is calculated by Eq (3). The boolean variable y_i^s indicates whether service instance e_i is on server z_s after migration. Cost(s′,s) is the minimum path cost between servers z_s′ and z_s. f(p,i′,i) is the traffic injected into the subsequent service e_i from service e_i′. E_p[0] denotes the traffic source of policy p. The migration of a service instance consumes network bandwidth resources. Eq (4) gives the bandwidth consumption caused by a service instance migration. The boolean variable x_i^s indicates whether service instance e_i is on server z_s in the original network; the boolean variable m_i^s indicates whether e_i is migrated to z_s; w_i is the volume of e_i; and l_ss′ is the shortest distance between servers z_s and z_s′.
A server with no service running on it will be turned off to reduce energy consumption. Eq (5) gives the number of running servers after migration, where y_i^s indicates whether e_i is on z_s after migration.
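The quantities in Eqs (4) and (5) can be illustrated with boolean placement matrices. The following is a minimal Python sketch under the definitions in the text (x, m, y, instance volumes, and shortest distances as described); the helper function names are ours:

```python
def after_migration(x, m):
    """Position matrix y after migration: if e_i migrates anywhere,
    take its row of m; otherwise keep its original placement from x."""
    return [mi[:] if any(mi) else xi[:] for xi, mi in zip(x, m)]

def migration_cost(x, m, vol, dist):
    """Eq (4)-style cost: the volume of each migrated instance times the
    shortest distance between its old and new servers."""
    q = 0.0
    for i, (xi, mi) in enumerate(zip(x, m)):
        for s, on_s in enumerate(xi):
            for s2, to_s2 in enumerate(mi):
                q += on_s * to_s2 * vol[i] * dist[s][s2]
    return q

def running_servers(y):
    """Eq (5)-style count: a server is running iff it hosts >= 1 instance."""
    return sum(any(y[i][s] for i in range(len(y))) for s in range(len(y[0])))

# Two instances, three servers: e_0 on z_0 migrates to z_2, e_1 stays on z_1
x = [[1, 0, 0], [0, 1, 0]]
m = [[0, 0, 1], [0, 0, 0]]
vol = [2.0, 3.0]
dist = [[0, 1, 2], [1, 0, 1], [2, 1, 0]]
y = after_migration(x, m)
```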
Based on the flow conservation principle, Eq (6) states that the outflow of one service equals the inflow of the subsequent service. Network services may affect traffic (for example, a firewall may drop part of the traffic in a flow). In SCM, traffic influence factors are introduced to model the traffic change caused by services. Let g_p^j be the traffic influence factor of the j-th service of policy p, calculated by g_p^j = outflow/inflow. For policy p, the traffic from the source to the first service is the bandwidth demand d_p^b, as shown in Eq (7). For ∀p ∈ P, ∀j ∈ [2,|p|], Eq (8) describes the relationship between x_i^s and m_i^s: x_i^s(1 − m_i^s) = 1 indicates that service instance e_i is running on server z_s and has not been migrated; (1 − x_i^s)m_i^s = 1 indicates that e_i is not running on z_s originally but has been migrated to z_s; x_i^s and m_i^s cannot both be 1 at the same time. Eq (9) adds the constraint that service instance e_i cannot be migrated to server z_s from z_s itself.
Policy p demands that the cost from the source to the destination should not exceed d_p^c. Eq (10) indicates that the service chain should satisfy the cost demand after migration.
For ∀z_s ∈ Z, its bandwidth is finite. Eq (11) indicates that the traffic injected from the network into server z_s should not exceed its maximum bandwidth t_s.
For ∀z_s ∈ Z, the number of service instances running on server z_s is restrained by the capacity of z_s, as shown in Eq (12).
In one migration, a service instance can be migrated to at most one server, as indicated by Eq (13). Eq (14) shows the range of the variable m_i^s.
Resolving algorithm
SCM(M) is a typical ILP problem, which is called the SCM problem in this paper. An exhaustive algorithm could solve the SCM problem by traversing the solution space, but the time consumption would be huge due to the large size of the problem. An alternative is a heuristic search approach such as an evolutionary algorithm (EA). EAs have been studied extensively [26][27][28] and have been shown to be suitable for real-world optimization problems with large solution spaces in multiple fields [29][30][31][32]. We use the PSO algorithm, a particularly interesting member of the EA family, to solve the SCM problem because of its advantage in convergence rate. Specifically, we adopt binary PSO, since the solution space of the SCM problem is binary. The position and velocity parameters of the binary PSO are defined as follows.

Position (M):
The boolean variable m_i^s denotes whether service instance e_i is migrated to server z_s; the vector (m_i^1, m_i^2, …, m_i^|Z|) describes the migration of e_i (e_i ∈ E). M_g is the position vector of the g-th particle in the particle swarm, which is defined as one migration of all service instances, as shown in Eq (15), where |Z| is the number of servers and |E| is the number of service instances. Let m_gd be the d-th component of M_g.
Velocity (V): The velocity vector (v_i^1, v_i^2, …, v_i^|Z|) is used to drive the migration of service instance e_i toward a better solution. In Eq (16), the vector V_g is the velocity of the g-th particle in the particle swarm. Based on the component v_i^s in V_g, the probability that the component m_i^s in M_g equals 1 can be determined. Let v_gd be the d-th component of the velocity vector V_g.
In the traditional binary PSO, the particle velocity and position updating equations are shown in Eqs (17)-(20). In Eq (17), c_1 and c_2 are constants; ξ and η are two independent random variables uniformly distributed on [0,1]; pBest_gd is the d-th component of the historical optimal solution pBest_g of particle g (i.e., the local optimum); and gBest_d is the d-th component of the optimal solution gBest among all particles (i.e., the global optimum). ω(t) is the inertia weight in PSO. m_gd is the d-th component of M_g and v_gd is the d-th component of V_g. In Eq (18), the random variable random is uniformly distributed on [0,1]. In Eq (20), ω_max and ω_min are the maximum and minimum values of ω(t), respectively, t is the current iteration number, and MI is the maximum number of iterations.
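The standard binary PSO update described above takes the following form; this is our reconstruction of Eqs (17)-(20) from the definitions in the text:

```latex
\begin{aligned}
v_{gd}(t+1) &= \omega(t)\,v_{gd}(t) + c_1\xi\bigl(pBest_{gd}-m_{gd}(t)\bigr)
              + c_2\eta\bigl(gBest_d-m_{gd}(t)\bigr) && (17)\\
m_{gd}(t+1) &= \begin{cases}1, & random < Sig\bigl(v_{gd}(t+1)\bigr)\\
                            0, & \text{otherwise}\end{cases} && (18)\\
Sig(v) &= \frac{1}{1+e^{-v}} && (19)\\
\omega(t) &= \omega_{\max} - (\omega_{\max}-\omega_{\min})\,\frac{t}{MI} && (20)
\end{aligned}
```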
From Eq (18), the value of the boolean variable m_gd is random. Therefore, the position update of one particle may cause one service instance to be migrated to multiple servers, which violates the constraint in Eq (13) and makes it difficult to find feasible solutions. To solve this problem, the position updating equation Eq (18) is modified with consideration of the constraint in Eq (13).
In the velocity vector V_g, the |Z| components (v_i^1, v_i^2, …, v_i^|Z|) correspond to the migration of service instance e_i, and Sig(v_i^s) represents the probability that m_i^s equals 1 among the |Z| components (m_i^1, m_i^2, …, m_i^|Z|) of the position vector M_g. Let this probability be denoted λ_i^s, as defined in Eq (21). Eq (22) then gives the probabilities that service instance e_i is migrated to each server in Z. Since a service instance e_i can be migrated to only one server, at most one component in (m_i^1, m_i^2, …, m_i^|Z|) equals 1; the probability that only m_i^s equals 1 is calculated by Eq (23). If no component in (m_i^1, m_i^2, …, m_i^|Z|) equals 1, i.e., all components are 0, the probability is denoted λ_i^only(0) and calculated by Eq (24).
There are |Z| + 1 cases for updating the position vector (m_i^1, m_i^2, …, m_i^|Z|) under the constraint in Eq (13): one component equals 1 while the others are zero (|Z| cases), or all components are 0 (1 case). The probability of each case can be calculated by Eq (25); obviously, these probabilities sum to 1 over k ∈ [0,|Z|]. A probabilistic search procedure [33] is used to select one of the cases; if k = 0 is selected, all components in (m_i^1, m_i^2, …, m_i^|Z|) are set to 0. To reduce the number of running servers, the service instances on small-load servers should be migrated with a high probability. Thus, Eq (21) is modified into Eq (27) by adding an encouragement factor, where Threshold is the server load threshold and Enc(t) is an encouragement function, as shown in Eq (28).
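The constrained (|Z|+1)-case position update can be sketched as follows. This is our illustration of the mechanism described above, without the encouragement factor of Eq (27); the function names are hypothetical:

```python
import math, random

def sig(v):
    """Sigmoid Sig(v) = 1 / (1 + e^{-v}), as in binary PSO."""
    return 1.0 / (1.0 + math.exp(-v))

def update_instance_position(v_i, rng=random):
    """Constrained position update for one instance e_i with |Z| velocity
    components: exactly one of |Z|+1 mutually exclusive cases is drawn,
    so e_i is migrated to at most one server (the Eq (13) constraint)."""
    probs = [sig(v) for v in v_i]
    # Probability that only server s is selected (Eq (23)-style product form)
    only = [p * math.prod(1.0 - q for j, q in enumerate(probs) if j != s)
            for s, p in enumerate(probs)]
    # Probability that no server is selected (Eq (24))
    only.append(math.prod(1.0 - p for p in probs))
    total = sum(only)
    r = rng.random() * total          # probabilistic case selection
    acc = 0.0
    for k, pk in enumerate(only):
        acc += pk
        if r <= acc:
            break
    pos = [0] * len(v_i)
    if k < len(v_i):                  # k == len(v_i) means "migrate nowhere"
        pos[k] = 1
    return pos
```

With a strongly positive velocity toward one server, the sampled position deterministically selects that server; with all velocities strongly negative, the instance stays put.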
In Eq (28), Enc(t) changes with the iteration number t and the parameter θ, as shown in Fig 3; we set θ = 1000. When the number of iterations is small, instance migration is not encouraged, which enables PSO to escape from local optima and converge toward the global optimum. When the number of iterations is large, SCM tends to migrate the instances on small-load servers to reduce the number of running servers.
To solve the SCM problem, the objective in Eq (1) is used as the fitness function of the modified binary PSO.

Design of SCM Framework and Experiments
In this section, a design of the SCM framework is elaborated. Then, three metrics are proposed to evaluate the performance and cost of SCM. Finally, the performance and cost of SCM are compared and analyzed under different parameters.

SCM framework
We designed an SCM prototype system, FlowMover, based on the SDN controller. As shown in Fig 4, FlowMover consists of three planes: a policy plane, a control plane, and a data plane. The policy plane and the control plane are built on the SDN controller. The Policy manager of the policy plane sends policies to the network. The control plane deploys the service instances according to the service chain policies and network state, and installs flows through the Flow manager (via the southbound API). The FlowMover module in the control plane migrates the service chains periodically through the service chain interface API (SC API).
In FlowMover, the traffic tagging method proposed in [14] is applied: specific traffic is tagged to pass through a certain service chain according to the policy. There are many methods available for online virtual machine migration without interrupting services [34][35][36], and the virtual machine migration method proposed in [34] is adopted in SCM.

Evaluation metrics
To evaluate the effectiveness and cost of SCM, we define three evaluation metrics: the optimized rate of traffic cost OE_U, the optimized rate of the number of running servers OE_R, and the migration rate of service instances OE_Q, calculated by Eqs (29), (30), and (31), respectively. U and U′ denote the whole-network traffic cost before and after service chain migration; R and R′ denote the numbers of running servers before and after service chain migration; and Q and Q′ denote the numbers of all and migrated service instances in one service chain migration, respectively.
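Given these definitions, the three metrics take a simple ratio form; this is our reconstruction of Eqs (29)-(31), with lower OE_U and OE_R indicating stronger optimization:

```latex
OE_U = \frac{U'}{U} \quad (29), \qquad
OE_R = \frac{R'}{R} \quad (30), \qquad
OE_Q = \frac{Q'}{Q} \quad (31).
```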
Performance and cost assessment
Simulations of network evolution and service chain migration are implemented in MATLAB. BRITE [37], a network topology generator, is used to generate a random topology with 100 nodes under the Waxman model (α = 0.2, β = 0.15). In the experiment topology, the delay of each link is taken as the cost of the link. Each network node is connected with a server, and all servers have the same capacity w_s and bandwidth resource t_s; t_s is set to 50 units of traffic per unit time, and the threshold Threshold is set to ⌈w_s/3⌉. The length of the service chain of a policy is uniformly distributed on [1,15], and the bandwidth demand of a policy is uniformly distributed on [1,5]. Policy requests follow a Poisson process with an average of 4 requests per 100 time units, and the policy life cycle follows an exponential distribution with an expectation of 1000 time units. In the optimization goal of Eq (1), U* and R* denote the traffic cost and the number of running servers before service chain migration, and Q* represents the sum of service instance volumes multiplied by the average link length. For the service p[j] in policy p, we set g_p^j = 1. To simulate the evolution of the network state, the service chain deployment algorithm proposed in [38] is used in the experiments, and service chains are deleted from the network when policies become outdated. The minimum cost between service instances is calculated by the Dijkstra algorithm. The parameters set in PSO are listed in Table 2. The simulation program runs on a computing platform with an Intel Core i7 @ 3.10 GHz and 8 GB RAM.
1) Performance. In this section, the service chain deployment methods SCI [38] and Stratos [9] are compared with SCM in terms of traffic cost and number of running servers.
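The simulated workload described above (Poisson policy arrivals, exponential lifetimes, uniform chain lengths and bandwidth demands) can be sketched in a few lines. This is our Python illustration of the arrival model, not the authors' MATLAB code; the function and field names are ours:

```python
import random

def generate_policies(horizon, rate=4 / 100.0, mean_life=1000.0, seed=0):
    """Policy requests arrive as a Poisson process (exponential inter-arrival
    times, `rate` requests per time unit); each policy lives for an
    exponentially distributed period (mean `mean_life`). Chain length ~
    U[1,15] and bandwidth demand ~ U[1,5], as in the experiments."""
    rng = random.Random(seed)
    t, policies = 0.0, []
    while True:
        t += rng.expovariate(rate)          # next arrival
        if t >= horizon:
            break
        policies.append({
            "arrival": t,
            "departure": t + rng.expovariate(1.0 / mean_life),
            "chain_len": rng.randint(1, 15),
            "bw_demand": rng.uniform(1, 5),
        })
    return policies

ps = generate_policies(5000)
```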
In the experiments, the migration period is set to 500 time units, and the policy cost demand d_p^c is defined as the maximum path cost in the network. We set α_1 = 0.5, α_2 = 0.3, and α_3 = 0.2. The comparison of the traffic cost of the three methods is shown in Fig 5(A). In all three cases, the traffic cost of the network increases in the beginning, which can be explained by the increasing number of policies deployed in the network. The traffic cost of SCI is the highest among the three methods, since the traffic cost is not considered in SCI. By contrast, Stratos minimizes the traffic cost when service chains are deployed, and the traffic cost is decreased effectively. However, in Stratos, the service chains cannot be moved once they are deployed, so the layout of service chains degenerates with the evolution of the network. SCM, by contrast, moves the service chains along with the evolution of the network and keeps the traffic cost low. Therefore, the traffic cost of SCM is the lowest, about 20% lower than SCI and 10% lower than Stratos on average. The number of running servers over time for the three methods is shown in Fig 5(B). SCM has the lowest number of running servers, because SCM moves the service chains to reduce the number of running servers for energy saving. As can be seen, almost all the servers are running after 3000 time units when SCI or Stratos is used. The number of running servers is decreased by about 15% in SCM.
2) Optimization of traffic cost. The service chains are periodically migrated to optimize the layout of service instances. The migration period is 500 time units, and the policy cost demand d_p^c is defined as the maximum path cost in the network. Fig 6 shows the curves of OE_U over time under different server capacities. With the network evolution, the number of policies deployed in the network rises at first and then stays stable. In the initial period of the network (Fig 6(A)), the ability to optimize the traffic cost is limited, and OE_U is close to 1 due to the small number of deployed policies. OE_U decreases as the number of policies in the network increases, because SCM migrates the service chains along with network evolution to decrease the traffic cost. The higher the weight of traffic cost α_1 in the utility function (Eq (1)) is, the more significantly SCM optimizes the traffic cost; therefore, OE_U is minimized when α_1 = 1 and maximized when α_1 = 0.2. As shown in Fig 6(A) and 6(B), when the weights are the same, OE_U at w_s = 10 is smaller than that at w_s = 5. The reason is that when the capacity of the servers is enhanced, there are more resources in the network for service chain migration, so better results can be achieved. When w_s = 15, a large amount of server capacity sits idle; the layout of the service instances is already optimized at deployment time, so OE_U is not significantly reduced compared with w_s = 10.
3) Optimization of energy consumption. The optimized rate of the number of running servers OE_R is used to evaluate the optimization of energy consumption. When there is no service instance running on a server, the server is turned off to save energy. The service chains are migrated with a period of 500 time units, and the cost demand d_p^c is defined as the maximum path cost in the network. Fig 7 shows the curves of OE_R under different server loads.
As shown in Fig 7(A), at the initial period, the number of deployed policies is small and the server resources are sufficient; therefore, the value of OE_R is low. As the number of policies increases, OE_R increases, since it becomes difficult to gather all the service instances onto a small number of servers. The higher the weight α_2 in the utility function is, the more significantly SCM optimizes the number of running servers; therefore, OE_R is minimized at α_2 = 1 and maximized at α_2 = 0.2. With the enhancement of server capability, more service instances can be held by each server, so fewer servers can hold the same number of policies. Therefore, a larger w_s leads to a smaller OE_R, as shown in Fig 7.
4) Migration cost. With the increase of the number of deployed policies, the amount of idle resources in the servers declines. Because of resource competition, the migration of a single service instance may lead to the re-layout of multiple service instances, leading to an increase of OE_Q. When α_3 in Eq (1) increases, the effect of Q on Utility is enhanced; therefore, OE_Q tends to decrease as α_3 increases. Thus, OE_Q is maximized at α_3 = 0.1 and minimized at α_3 = 0.5, as shown in Fig 8(A), 8(B) and 8(C). When the capabilities of the servers are enhanced, the amount of idle server resources increases and the resource competition is less intense, resulting in a decrease of OE_Q. Thus, when the weights are equal, OE_Q decreases with an increase of w_s, as shown in Fig 8.
5) Efficiency of PSO. In PSO, solutions that do not satisfy the constraints may exist. Eq (26) is used in SCM to update the particles' positions and decrease the number of infeasible solutions. Let N_I be the number of infeasible solutions generated in the optimization; the occurrence probability of infeasible solutions can be calculated by Eq (32).
The comparison of P(N_I) between the traditional PSO and our modified PSO is shown in Fig 9, where the x-axis represents the network operation time and the y-axis represents P(N_I). The weights in the optimization target are α_1 = 0.5, α_2 = 0.3, α_3 = 0.2. As shown in Fig 9, it is difficult for traditional PSO to find feasible solutions, because the randomness in updating particle positions makes it hard to obtain solutions satisfying the constraint in Eq (13). SCM avoids the generation of numerous infeasible solutions by using the modified PSO, which improves the optimization efficiency. At the early stage, a small number of policies are deployed in the network, so there are enough server resources for deploying service chains; hence, the constraints are easily satisfied and the occurrence probability of infeasible solutions is low. As the number of policies in the network increases, server resources become tight and the occurrence probability of infeasible solutions increases. A larger w_s makes the server capacity constraint easier to satisfy; therefore, the occurrence probability of infeasible solutions decreases as w_s increases.
6) Efficiency of the encouragement function. The encouragement function tends to migrate the instances on low-load servers to reduce the number of running servers. Let R_m and R_m′ be the numbers of running servers after migration with and without the encouragement function, respectively. The reduction rate of the number of running servers, rRate, is calculated by Eq (33).
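With R_m and R_m′ as defined above, the reduction rate can be written as follows; this is our sketch of Eq (33), under which positive values mean the encouragement function reduced the server count:

```latex
rRate = \frac{R_m' - R_m}{R_m'} \quad (33).
```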
In this experiment, we deploy 40 policies in the network. The weights in the utility function are α_1 = 0.5, α_2 = 0.3, α_3 = 0.2. Fig 10 shows the influence of the encouragement function on the number of running servers, where the x-axis represents the server capacity and the y-axis represents rRate. Clearly, the number of running servers decreases when the encouragement function is used (rRate > 0). The encouragement function encourages service instances to migrate out of small-load servers; thus, these instances tend to be gathered onto fewer servers, and the number of running servers decreases. The larger w_s is, the more the number of running servers is reduced. This can be explained by the fact that the server capacity is the bottleneck against reducing the number of running servers when w_s is small; as w_s increases, the optimization brought by the encouragement function becomes more significant. When w_s ≥ 10, the encouragement function brings about a 20% reduction in the number of running servers.

Conclusions and prospects
In a network deployed with services, the service layout may become inefficient with network evolution, resulting in a waste of network resources and energy. In this paper, we show two typical scenarios to illustrate the inefficiency of network resource deployment under network evolution. To address this problem, the SCM method is proposed to optimize the deployment of network resources. SCM migrates service chains and adjusts network resource allocation with consideration of network performance, resource constraints, and instance migration cost. We model SCM as an ILP problem and solve it via a modified PSO. An SCM prototype system, FlowMover, is designed based on the SDN controller. Our experiments show that SCM can reduce the traffic cost and energy consumption effectively. SCM is designed to increase the efficiency of the service instance layout, but the service type is ignored. However, different types of services have different requirements on networks; for example, IPTV services are sensitive to network delay, while file transmission services are sensitive to the bit error rate. In future work, the service type will be considered in service chain migration for better service quality.