
Load balance-aware dynamic cloud-edge-end collaborative offloading strategy

Abstract

Cloud-edge-end (CEE) computing is a hybrid computing paradigm that converges the principles of edge and cloud computing. In the design of CEE systems, a crucial challenge is to develop efficient offloading strategies to achieve the collaboration of edge and cloud offloading. Although CEE offloading problems have been widely studied under various backgrounds and methodologies, load balance, which is an indispensable scheme in CEE systems to ensure the full utilization of edge resources, is still a factor that has not yet been accounted for. To fill this research gap, we are devoted to developing a dynamic load balance-aware CEE offloading strategy. First, we propose a load evolution model to characterize the influences of offloading strategies on the system load dynamics and, on this basis, establish a latency model as a performance metric of different offloading strategies. Then, we formulate an optimal control model to seek the optimal offloading strategy that minimizes the latency. Second, we analyze the feasibility of typical optimal control numerical methods in solving our proposed model, and develop a numerical method based on the framework of genetic algorithm. Third, through a series of numerical experiments, we verify our proposed method. Results show that our method is effective.

1 Introduction

Due to the fast advancement of micro-computer technology, Internet-of-Things (IoT) devices have gained widespread adoption for data collection [1]. However, because of their limitations in energy and computation capability, IoT devices face great challenges in supporting resource-intensive applications. In this context, edge computing, a computation paradigm that enables IoT tasks to be processed at the edge of the Internet, has been suggested for effective computation offloading and has become a foundational technology in IoT architectures [2, 3].

Because of many objective factors such as construction costs, the power of edge servers is generally far inferior to that of cloud centers. In this context, though edge computing can process IoT tasks with low latency, the constrained capability of edge servers is still an Achilles heel when edge computing has to tackle immense amounts of data. To address this issue, a hybrid computation paradigm, known as cloud-edge-end (CEE) computing [4–6], has been proposed in recent years.

As illustrated in Fig 1, a CEE system is built on a hierarchical architecture and is generally composed of a resource-rich cloud center, numerous capability-constrained edge servers, and a huge number of end devices. From a bottom-up view, end devices are connected to edge servers through the core network, and edge servers are connected to the cloud center through the Internet. A typical workflow of the CEE system can be described as follows. First, computation tasks are constantly produced by end devices, and end devices will offload these tasks to edge servers through the core network under the control of an edge offloading scheme. Then, edge servers receive the arriving tasks and push them into a built-in queue. Next, in each edge server, a proportion of on-queue tasks are processed locally, while the remaining tasks will be either migrated to other edge servers under the control of a load balance scheme, or further offloaded to the cloud center for remote assistance under the control of a cloud offloading scheme. Finally, after being processed, task results are returned to end devices from edge servers or the cloud center. A detailed sequence chart describing the above steps is given in Fig 2.

From a macroscopic perspective, CEE computing converges the design principles of cloud computing and edge computing—end devices and edge servers together compose a typical edge computing system, while edge servers and the cloud center compose a cloud computing system. On one hand, compared with edge computing, CEE computing integrates sufficient resources from the cloud center, so that when edge servers are overloaded, a proportion of tasks on edge servers can be further offloaded to the cloud center for load reduction; on the other hand, compared with cloud computing, CEE computing provides low-latency services for end devices through numerous edge servers. Thus, CEE computing is a promising combination of edge and cloud computing [7, 8].

1.1 Research statement

Computation offloading, the manner in which resource-intensive tasks are migrated from a resource-constrained device to a resource-rich infrastructure, is a crucial design issue in CEE systems [8]. In this paper, we are devoted to developing efficient CEE offloading strategies that are aware of load balance. More specifically, we consider the following problem:

  1. Load balance-aware offloading (LBAO) problem: Consider a CEE system composed of a cloud center, multiple edge servers, and multiple end-device collections. Suppose the task producing rate of each end-device collection is known or predictable. Suppose load balance is available in the CEE system. Then, for a finite time horizon, how to collaboratively and dynamically determine the cloud offloading rates of edge servers and the many-to-many edge offloading proportions between end-device collections and edge servers, such that the total task latency is minimized?

1.2 Contribution

To address the LBAO problem, our contributions are as follows.

  • On mathematical modeling, we propose a load evolution model to characterize the influences of different CEE offloading strategies on a CEE system, establish a latency model as the performance criterion of CEE offloading strategies, and reduce the LBAO problem to an optimal control model.
  • On problem solving, we analyze the feasibility of typical optimal control numerical methods in solving the LBAO problem. Then, based on the framework of genetic algorithm (GA) [9], we develop a numerical algorithm called the LBAO algorithm to solve the LBAO problem, and make a rough analysis of the time complexity of the LBAO algorithm.
  • On simulations, we perform a series of numerical experiments to verify the proposed LBAO algorithm. First, we discuss the parameter setting of our experiments. Second, we investigate the optimal configuration of the LBAO algorithm. Third, by comparing the LBAO algorithm with other commonly used methods, we examine the performance of the LBAO algorithm. Finally, we give insight into the influence of load balance in CEE collaborative offloading.

The remainder of this paper is organized as follows. Section 2 reviews the related work. Section 3 presents the mathematical modeling of the LBAO problem. Section 4 discusses the solution to the LBAO problem. Section 5 shows numerical experiments. Finally, Section 6 concludes this paper.

2 Related work

In this section, we review the literature related to our work, and highlight the novelty of our work.

Typically, computation offloading is defined as the process in which computation tasks are migrated from a capability-constrained device to a resource-sufficient infrastructure to obtain remote assistance [10]. As computation offloading is an indispensable feature of edge computing, developing efficient edge offloading strategies is a crucial issue in the design of edge computing [11]. In the past years, edge offloading problems have been comprehensively investigated under various backgrounds and methodologies [12–15]. Below, some representative examples are sketched. In [12], offloading issues are studied under an edge-fog hierarchical network, and an incentive mechanism is designed to shift selfish users’ preference from the edge layer to the fog layer according to user delay tolerance. In [13], task-dependent offloading issues are investigated, and offloading strategies are jointly optimized based on a directed acyclic graph model that represents the dependencies between tasks. Also, the literature [14] offers insight into the offloading problems for wireless powered edge networks and develops a reinforcement learning algorithm to attain effective binary offloading decisions. Besides, the research [15] proposes a lightweight mobility prediction and offloading framework by using a machine learning method, aiming to jointly handle computation offloading and mobility management issues in edge computing.

Although edge offloading problems have been extensively studied, the research on CEE offloading issues is still in an early stage. Different from the cases in edge computing, CEE offloading problems focus more on the collaboration of the edge offloading between end devices and edge servers and the remote cloud assistance between edge servers and cloud centers. In this context, the existing edge offloading strategies cannot be directly applied to solve CEE offloading problems, because the system architecture of edge computing rarely includes the remote connection between cloud centers and edge servers, which is exactly a core component of CEE systems. Thus, it is necessary to develop new schemes for CEE systems based on the existing edge offloading techniques. Towards this direction, a large number of CEE offloading strategies have been proposed [4, 6, 8, 16–23]. Below, some representative examples are sketched. In [4], efficient CEE offloading schemes are investigated in consideration of the joint optimization with resource allocation mechanisms. In [16, 18], hierarchical and horizontal CEE computing architectures are discussed, and CEE offloading problems are addressed through game theory. In [19], offloading issues are formulated as a multi-objective optimization problem and solved by using the Non-dominated Sorting Genetic Algorithm III (NSGA-III). In addition, the literature [23] focuses on the joint optimization of task offloading and service deployment for sequential tasks.

Unfortunately, as far as we know, CEE offloading strategies in the existing research are generally developed in consideration of transmission/propagation latency, energy consumption, task dropping rate, channel congestion, computation efficiency and financial payment, while load balance in CEE systems has rarely been accounted for. Load balance is an indispensable mechanism in CEE systems to guarantee full utilization of the resources of edge servers [24–26]. Typically, load balance is implemented based on a technique known as computation migration [27]; that is, when an edge server becomes overloaded due to the flash crowd caused by end devices (i.e., the network congestion which occurs when a huge number of end devices request the edge server simultaneously [28]), tasks on the queue of that edge server (or containers or virtual machines of the tasks) can be migrated to another edge server through a centralized management platform (e.g., software defined networking [29]) for load balance. Because there is such a non-negligible load balance mechanism in reality, the existing CEE offloading strategies may produce certain errors when estimating the load distribution of edge servers and may not be able to make the most efficient decisions. Therefore, it is necessary to propose a load balance-aware CEE offloading strategy.

To fill the above research gap, in this paper we are devoted to developing a load balance-aware CEE offloading strategy. The novelty of our work is sketched as follows. First, we propose a novel load evolution model to characterize the influences of different offloading strategies on the load balance of a CEE system. Second, we optimize a time-varying offloading strategy based on the system load distribution and overall latency. To the best of our knowledge, this is the first work to make such attempts.

3 Problem formulation

In this section, we present the mathematical formulation of the LBAO problem. First, we formalize the mathematical expression of CEE offloading strategies. Second, we propose a load evolution model to capture the effects of a CEE offloading strategy on a CEE system. Third, we establish a latency model to evaluate different CEE offloading strategies. Finally, we reduce the LBAO problem to a continuous-time optimal control model, with the CEE offloading strategy as the control variable, the latency model as the objective functional, and the load evolution model as a constraint.

3.1 Basic terms and notations

Let us consider a CEE system composed of a remote cloud center, M edge servers, and N collections of end devices, as illustrated in Fig 3. Denote the set of the M edge servers by S = {s1, …, sM} and the set of the N end-device collections by D = {d1, …, dN}.

Consider a finite time horizon [0, T]. At any time t ∈ [0, T], denote αi(t) as the task producing rate of the end-device collection di. Denote xij(t) as the proportion of tasks offloaded from the end-device collection di to the edge server sj at time t. By definition, $\sum_{j=1}^{M} x_{ij}(t) = 1$ holds for all i and t. We refer to the function x(t) = [xij(t)]N×M, 0 ≤ t ≤ T, as the edge offloading strategy. At time t, denote lj(t) as the load (i.e., the length of the task queue) of the edge server sj and l0(t) as the load of the cloud center. Then, we refer to the function l(t) = (l0(t), l1(t), …, lM(t)), 0 ≤ t ≤ T, as the load evolution trajectory.

Once tasks have arrived at the queue of an edge server, they will be processed locally, migrated to other edge servers through a load balance scheme, or further offloaded to the cloud center for remote assistance. Suppose that, at any time, the task executing speed of a server is relatively stable. Thus, denote βj as the average task processing rate of the edge server sj and β0 as that of the cloud center. Also, denote the load balance scheme by f such that at time t, the edge server si will migrate its own tasks to the edge server sj at the rate of f(i, j, l(t), t). Because there is no need for an edge server to migrate its own tasks to itself, let f(i, i, l(t), t) = 0 for all i. Moreover, denote yj(t) as the cloud offloading rate of the edge server sj at time t. Then, we refer to the function y(t) = (y1(t), …, yM(t)), 0 ≤ t ≤ T, as the cloud offloading strategy. Denote ymax as the common maximum of cloud offloading rates.

Based on the above discussions, we combine together the edge offloading strategy x and cloud offloading strategy y, and refer to the function pair (x(t), y(t)), 0 ≤ t ≤ T, as the CEE offloading strategy. Clearly, the feasible set of CEE offloading strategies is

$$\Omega = \Big\{ (x, y) \;\Big|\; 0 \le x_{ij}(t) \le 1,\; \sum_{j=1}^{M} x_{ij}(t) = 1,\; 0 \le y_j(t) \le y_{\max},\; \forall i, j,\; 0 \le t \le T \Big\}. \tag{1}$$
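To make the feasible set (1) concrete, the following Python sketch checks whether a time-sampled CEE offloading strategy satisfies the three constraints: box bounds on x, row sums of x equal to one, and bounds on y. The array shapes and the function name `is_feasible` are our own illustrative choices, not from the paper.

```python
import numpy as np

def is_feasible(x, y, y_max, tol=1e-9):
    """Check a time-sampled CEE offloading strategy against the feasible set (1).

    x: array of shape (K, N, M) -- x[k, i, j] is x_ij at the k-th time sample.
    y: array of shape (K, M)    -- y[k, j] is y_j at the k-th time sample.
    """
    x, y = np.asarray(x, float), np.asarray(y, float)
    in_unit = np.all((x >= -tol) & (x <= 1 + tol))            # 0 <= x_ij(t) <= 1
    rows_sum_one = np.allclose(x.sum(axis=2), 1.0, atol=tol)  # sum_j x_ij(t) = 1
    y_bounded = np.all((y >= -tol) & (y <= y_max + tol))      # 0 <= y_j(t) <= y_max
    return bool(in_unit and rows_sum_one and y_bounded)
```

A decoding step of the GA (Section 4.2) is responsible for guaranteeing that every candidate strategy passes such a check.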

Remark 1 In this paper, the user mobility in edge computing is represented by the time-varying task producing rates αi(t). To explain this, let us introduce a numerical example as follows. Denote $\bar{\alpha}$ as the average task producing rate of a single end device at any time. Suppose there are ni and nj devices within the collections di and dj at time t, respectively. Then, the task producing rates at time t are $\alpha_i(t) = n_i \bar{\alpha}$ and $\alpha_j(t) = n_j \bar{\alpha}$. If there are Δn devices moving from di to dj during the time interval [t, t + Δt], then the task producing rates at time t + Δt will be $\alpha_i(t + \Delta t) = (n_i - \Delta n)\bar{\alpha}$ and $\alpha_j(t + \Delta t) = (n_j + \Delta n)\bar{\alpha}$. A diagram illustrating the above scenario is shown in Fig 4.

3.2 Load evolution model

Next, let us investigate the relationship between the CEE offloading strategy (x, y) and the load evolution trajectory l.

Suppose each server has a task queue of infinite length. In this paper, the term “load dynamics” refers to the trajectories of load changes on all servers in a CEE system. Below, we analyze the load dynamics of a CEE system.

According to the discussions in the previous subsection, the load of an edge server is determined by the following four aspects. Firstly, if the queue of the server is not empty, the load of the server will constantly decrease because tasks on the queue are being processed by the server. Secondly, the load of the server will increase because the server is receiving tasks from end devices. Thirdly, due to the load balance scheme, the server will migrate its own tasks to other servers or receive tasks from other servers. Finally, due to cloud offloading, the load of the server will decrease if its queue is not empty.

Different from the case of edge servers, the load of the cloud center is only determined by the following two aspects. Firstly, if the queue of the cloud center is not empty, the load of the cloud center will decrease because of task executing. Secondly, because of cloud offloading, the load of the cloud center will increase.

Based on the above analysis, we derive a dynamical system in Theorem 1 to characterize the load dynamics of edge servers and the cloud center.

Theorem 1 Denote the initial load distribution of the CEE system by L0. Then, the load evolution trajectory l under the control of the CEE offloading strategy (x, y) satisfies the dynamical system (2):

$$\begin{cases} \dfrac{\mathrm{d}l_0(t)}{\mathrm{d}t} = \sum_{j=1}^{M} \Gamma(l_j(t))\, y_j(t) - \Gamma(l_0(t))\, \beta_0, \\[1ex] \dfrac{\mathrm{d}l_j(t)}{\mathrm{d}t} = \sum_{i=1}^{N} \alpha_i(t)\, x_{ij}(t) - \Gamma(l_j(t))\, \beta_j - \Gamma(l_j(t))\, y_j(t) + \sum_{i=1}^{M} f(i, j, l(t), t) - \sum_{i=1}^{M} f(j, i, l(t), t), \quad j = 1, \ldots, M, \\[1ex] l(0) = L_0, \end{cases} \tag{2}$$

where Γ(0) = 0, Γ(x) = 1, ∀x > 0.

Proof 1 Let Δt be a variation of time. Consider the tiny time interval [t, t + Δt]. In this time interval, the load of the cloud center will increase by $\sum_{j=1}^{M} \Gamma(l_j(t))\, y_j(t) \Delta t$ because of cloud offloading and meanwhile decrease by Γ(l0(t))β0Δt because of task processing. Thus, the load of the cloud center at time t + Δt will be

$$l_0(t + \Delta t) = l_0(t) + \sum_{j=1}^{M} \Gamma(l_j(t))\, y_j(t)\, \Delta t - \Gamma(l_0(t))\, \beta_0\, \Delta t + o(\Delta t). \tag{3}$$

Because

$$\frac{\mathrm{d}l_0(t)}{\mathrm{d}t} = \lim_{\Delta t \to 0} \frac{l_0(t + \Delta t) - l_0(t)}{\Delta t}, \tag{4}$$

the first equation in the dynamical system (2) is obtained by direct calculation.

Similarly, during the time interval [t, t + Δt], the load of the edge server sj will increase by $\sum_{i=1}^{N} \alpha_i(t)\, x_{ij}(t) \Delta t$ due to edge offloading, decrease by Γ(lj(t))βjΔt and Γ(lj(t))yj(t)Δt due to task consumption and cloud offloading, respectively, and change by $\big[\sum_{i=1}^{M} f(i, j, l(t), t) - \sum_{i=1}^{M} f(j, i, l(t), t)\big] \Delta t$ due to load balance. Because

$$l_j(t + \Delta t) = l_j(t) + \sum_{i=1}^{N} \alpha_i(t)\, x_{ij}(t)\, \Delta t - \Gamma(l_j(t))\big(\beta_j + y_j(t)\big) \Delta t + \Bigg[\sum_{i=1}^{M} f(i, j, l(t), t) - \sum_{i=1}^{M} f(j, i, l(t), t)\Bigg] \Delta t + o(\Delta t) \tag{5}$$

and

$$\frac{\mathrm{d}l_j(t)}{\mathrm{d}t} = \lim_{\Delta t \to 0} \frac{l_j(t + \Delta t) - l_j(t)}{\Delta t}, \tag{6}$$

the remaining equations in the dynamical system (2) are obtained from direct calculation. The proof is complete.

The dynamical system (2) reveals the relationship between the CEE offloading strategy (x, y) and the load evolution trajectory l, with which we can predict the load dynamics of the CEE system under an arbitrary CEE offloading strategy. Thus, we refer to this dynamical system as the load evolution model.
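The load evolution model (2) can be integrated numerically. Below is a minimal forward-Euler sketch in Python; the function signature, array shapes, and the 0-indexed server convention are our own assumptions, and a production solver would likely use an adaptive ODE integrator instead.

```python
import numpy as np

def simulate_loads(l0_init, l_init, alpha, x, y, beta0, beta, f, T, steps):
    """Forward-Euler integration of the load evolution model (2).

    alpha(t) -> length-N array; x(t) -> (N, M) array; y(t) -> length-M array;
    f(i, j, l, t) -> migration rate from edge server i to edge server j
    (servers are 0-indexed here; l is the tuple (l0, l1, ..., lM)).
    """
    gamma = lambda v: 1.0 if v > 0 else 0.0   # Gamma(0) = 0, Gamma(v) = 1 for v > 0
    M = len(l_init)
    dt = T / steps
    l0, l = float(l0_init), np.array(l_init, float)
    for s in range(steps):
        t = s * dt
        a, X, Y = alpha(t), x(t), y(t)
        mig = np.array([[f(i, j, (l0, *l), t) for j in range(M)]
                        for i in range(M)])   # mig[i, j] = f(i, j, l(t), t)
        # dl0/dt: cloud offloading inflow minus cloud processing.
        dl0 = sum(gamma(l[j]) * Y[j] for j in range(M)) - gamma(l0) * beta0
        # dlj/dt: edge arrivals - processing - cloud offloading + net migration.
        dl = (a @ X) \
             - np.array([gamma(l[j]) * (beta[j] + Y[j]) for j in range(M)]) \
             + mig.sum(axis=0) - mig.sum(axis=1)
        l0, l = l0 + dl0 * dt, l + dl * dt
    return l0, l
```

With a small step size, this sketch can reproduce the worked example of Fig 5 below.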

To help understand the proposed load evolution model, we provide a simple numerical example as follows. Consider a tiny CEE system composed of only a cloud center, two edge servers and two device collections, as shown in Fig 5. Let one second be the unit of time. Suppose the task executing rates are β0 = β1 = β2 = 10 tasks per second (tps). Suppose the task producing rates are α1(t) = α2(t) ≡ 100 tps. Suppose the initial load distribution is L0 = (l0(0), l1(0), l2(0)) = (100, 200, 300) tasks. Suppose the edge offloading strategy is x11(t) = x21(t) ≡ 0.2, x12(t) = x22(t) ≡ 0.8. Suppose the cloud offloading strategy is y1(t) ≡ 10, y2(t) ≡ 0 tps. Suppose the maximum task migration rate is fmax = 10 tps, and the load balance scheme is given by the function

$$f(i, j, l(t), t) = \begin{cases} f_{\max}, & l_i(t) > l_j(t), \\ 0, & \text{otherwise}. \end{cases} \tag{7}$$

Fig 5. Simple numerical example for load evolution model.

Then, during the time interval [0, 1], there are α1(0)x11(0) = 20 tasks offloaded from the device collection d1 to the edge server s1, α1(0)x12(0) = 80 tasks from d1 to s2, α2(0)x21(0) = 20 tasks from d2 to s1, and α2(0)x22(0) = 80 tasks from d2 to s2. Meanwhile, because the edge server s2 has a higher load than s1, i.e., l2(0) > l1(0), there are f(2, 1, l(0), 0) = 10 tasks migrated from s2 to s1 for load balance. Besides, there are y1(0) = 10 tasks offloaded from the edge server s1 to the cloud center. In addition, there are β1 = 10 tasks processed by the edge server s1 and β2 = 10 tasks by s2.

As a consequence, during the time interval [0, 1], the load of the edge server s1 will change by

$$\Delta l_1 = 20 + 20 + 10 - 10 - 10 = +30 \text{ tasks}, \tag{8}$$

the load of the edge server s2 will change by

$$\Delta l_2 = 80 + 80 - 10 - 10 = +140 \text{ tasks}, \tag{9}$$

and the load of the cloud center will change by

$$\Delta l_0 = 10 - 10 = 0 \text{ tasks}. \tag{10}$$
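The arithmetic in Eqs (8)-(10) can be replayed directly. The following Python snippet recomputes the three one-second load changes; the variable names are our own.

```python
# Hand-check of the one-second load changes in the Fig 5 example.
beta0 = beta1 = beta2 = 10.0          # processing rates (tps)
alpha1 = alpha2 = 100.0               # task producing rates (tps)
x11 = x21 = 0.2; x12 = x22 = 0.8      # edge offloading proportions
y1, y2 = 10.0, 0.0                    # cloud offloading rates (tps)
f21 = 10.0                            # migration s2 -> s1, since l2(0) > l1(0)

dl1 = alpha1*x11 + alpha2*x21 + f21 - beta1 - y1   # cf. Eq (8)
dl2 = alpha1*x12 + alpha2*x22 - f21 - beta2 - y2   # cf. Eq (9)
dl0 = y1 + y2 - beta0                              # cf. Eq (10)
print(dl1, dl2, dl0)   # 30.0 140.0 0.0
```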

3.3 Latency model

Next, we need to establish a performance model to evaluate different CEE offloading strategies, so that we can have a criterion to select the optimal strategy from its feasible set.

In this paper, we consider the overall latency of the CEE system as a performance metric of CEE offloading strategies, as low latency is the most important feature of edge computing [30]. Specifically, we consider the following two types of latency: (i) the task queuing latency incurred at edge servers and the cloud center; (ii) the task propagation latency caused by edge offloading, cloud offloading, and the task migration for load balance.

To achieve that, let us consider the simplified communication model shown in Fig 6. Suppose each (cloud or edge) server contains a task queue and a business module responsible for handling tasks. Tasks arriving at a server will first be stored in the task queue and then be processed by the business module. Suppose the time interval [0, T] is short enough such that the network environment in this period is relatively stable. Denote Dd as the average propagation delay for offloading one task from an end device to an edge server through the access network, i.e., the time within which a task is transmitted from an end device to the task queue of an edge server. Denote $D^{\mathrm{m}}_{ij}$ as the average propagation delay for migrating one task from the edge server si to the edge server sj through the core network, i.e., the time within which a task is transmitted from the task queue of the server si to the task queue of the server sj. Denote $D^{\mathrm{c}}_{j}$ as the average propagation delay for offloading one task from the edge server sj to the cloud center through the Internet, i.e., the time within which a task is transmitted from the task queue of the server sj to the task queue of the cloud server. Then, the overall latency of the CEE system is calculated in Theorem 2.

Theorem 2 Given a CEE offloading strategy (x, y), the overall latency of the CEE system is

$$J(x, y) = \int_0^T \Bigg[ \sum_{j=0}^{M} l_j(t) + D_d \sum_{i=1}^{N} \sum_{j=1}^{M} \alpha_i(t)\, x_{ij}(t) + \sum_{j=1}^{M} D^{\mathrm{c}}_{j}\, \Gamma(l_j(t))\, y_j(t) + \sum_{i=1}^{M} \sum_{j=1}^{M} D^{\mathrm{m}}_{ij}\, f(i, j, l(t), t) \Bigg] \mathrm{d}t. \tag{11}$$

Proof 2 Firstly, we quantify the task queuing latency. It is known that at time t, the queue length of the edge server sj is lj(t) and that of the cloud center is l0(t). Thus, during the time horizon [0, T], the total time of all tasks staying in the queue of the server sj is $\int_0^T l_j(t)\,\mathrm{d}t$ and that of the cloud center is $\int_0^T l_0(t)\,\mathrm{d}t$. Therefore, the overall queuing latency of the CEE system in [0, T] is

$$\int_0^T \sum_{j=0}^{M} l_j(t)\, \mathrm{d}t. \tag{12}$$

Secondly, we quantify the task propagation latency. At time t, tasks are offloaded from end devices to edge servers at the total rate of $\sum_{i=1}^{N} \sum_{j=1}^{M} \alpha_i(t)\, x_{ij}(t)$. Therefore, during the time interval [0, T], the total latency of edge offloading is

$$D_d \int_0^T \sum_{i=1}^{N} \sum_{j=1}^{M} \alpha_i(t)\, x_{ij}(t)\, \mathrm{d}t. \tag{13}$$

Also, at time t, tasks are offloaded from the edge server sj to the cloud center at the rate of Γ(lj(t))yj(t). Hence, in the time interval [0, T], the total latency of cloud offloading is

$$\int_0^T \sum_{j=1}^{M} D^{\mathrm{c}}_{j}\, \Gamma(l_j(t))\, y_j(t)\, \mathrm{d}t. \tag{14}$$

In addition, at time t, tasks in the edge server si are migrated to the server sj at the rate of f(i, j, l(t), t). Accordingly, in the time interval [0, T], the total latency of task migration is

$$\int_0^T \sum_{i=1}^{M} \sum_{j=1}^{M} D^{\mathrm{m}}_{ij}\, f(i, j, l(t), t)\, \mathrm{d}t. \tag{15}$$

Combining the above discussions, we obtain the result by directly calculating

$$J(x, y) = \int_0^T \Bigg[ \sum_{j=0}^{M} l_j(t) + D_d \sum_{i=1}^{N} \sum_{j=1}^{M} \alpha_i(t)\, x_{ij}(t) + \sum_{j=1}^{M} D^{\mathrm{c}}_{j}\, \Gamma(l_j(t))\, y_j(t) + \sum_{i=1}^{M} \sum_{j=1}^{M} D^{\mathrm{m}}_{ij}\, f(i, j, l(t), t) \Bigg] \mathrm{d}t. \tag{16}$$

The proof is complete.

We refer to Eq (11) as the latency model. A better CEE offloading strategy should result in less latency.
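Given sampled load and rate trajectories, the latency model (11) can be evaluated with a simple quadrature rule. The sketch below uses the trapezoidal rule; all names, array shapes, and the sampling convention are our own illustrative assumptions.

```python
import numpy as np

def total_latency(t, loads, edge_rate, cloud_rate, mig_rate, D_d, D_m, D_c):
    """Numerical evaluation of the latency model (11) on sampled trajectories.

    t:          length-K sample times in [0, T]
    loads:      (K, M+1) queue lengths; column 0 is the cloud center
    edge_rate:  length-K samples of sum_i sum_j alpha_i(t) x_ij(t)
    cloud_rate: (K, M) samples of Gamma(l_j(t)) y_j(t)
    mig_rate:   (K, M, M) samples of f(i, j, l(t), t)
    D_d (scalar), D_m ((M, M) matrix), D_c (length M): propagation delays
    """
    dens = (loads.sum(axis=1)                       # queuing latency density
            + D_d * edge_rate                       # edge offloading latency density
            + (cloud_rate * D_c).sum(axis=1)        # cloud offloading latency density
            + (mig_rate * D_m).sum(axis=(1, 2)))    # migration latency density
    # Trapezoidal rule over the sample times.
    return float(np.sum((dens[1:] + dens[:-1]) / 2 * np.diff(t)))
```

Such a routine also serves as the core of the fitness evaluation in Section 4.4, since the fitness of a strategy is defined through the latency it causes.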

3.4 Optimal control problem

Now, we are ready to formulate an optimization model to describe the LBAO problem, with the CEE offloading strategy as the decision variable, the latency model (11) as the objective functional, and the load evolution model (2) as a constraint:

$$\begin{aligned} \min_{(x, y) \in \Omega} \quad & J(x, y) \\ \text{s.t.} \quad & \text{the load evolution model (2)}. \end{aligned} \tag{17}$$

The above optimization problem is a continuous-time optimal control problem [31]. After solving it, we will attain the CEE offloading strategy that minimizes the overall latency of the CEE system.

4 Solution

In this section, we discuss the solution to the optimal control model (17). First, we analyze the feasibility of some typical numerical methods in solving the model (17). Second, we apply GA to solve the model (17). Third, we make a rough analysis of the time complexity of GA in solving the model (17).

4.1 Analysis of feasible numerical methods

Recall that the problem (17) is a continuous-time optimal control model. As the survey [32] reports, numerical methods to solve a continuous-time optimal control problem can be divided into two categories: indirect methods and direct methods. In an indirect method, a set of necessary conditions of optimality is derived, and the original optimal control problem is equivalently transformed into a multi-point boundary value (MPBV) problem [33]. On this basis, a proper iterative algorithm is developed to solve the MPBV problem, and a candidate optimal solution satisfying all the known necessary conditions of optimality is attained. When the original optimal control problem is extremely simple, indirect methods can perform well with regard to solution accuracy and algorithm complexity. However, if the original optimal control problem is high-dimensional and complex, it would be very difficult to guess a proper initial solution for iteratively solving the MPBV problem. Thus, indirect methods are less common than direct methods [34].

Contrary to indirect methods, direct methods suggest treating optimal control problems as nonlinear optimization problems. In a direct method, time-varying decision variables are represented (or approximated) by some kind of static parameters, and then the original optimal control problem is addressed by using well-known optimization techniques. Compared with indirect methods, direct methods do not rely on the necessary conditions of optimality of the original optimal control problem and are therefore more versatile in practical applications. Thus, in this paper, we intend to address the optimal control model (17) through a direct method.

The survey [32] also points out that direct methods can be further classified into two categories: nonlinear programming (NLP)-based methods and heuristic-based methods. For simple optimal control models, or cases where the control and state variables are low-dimensional, NLP-based methods can achieve good performance. However, NLP-based methods may not be suitable for solving the optimal control model (17) due to their heavy dependence on function approximation. Specifically, with regard to the optimal control model (17), NLP-based methods require four interpolation polynomials to approximate the strategies x(t) and y(t), the differential equations of the load evolution trajectory l(t), and the definite integral of the latency model J. Because polynomial interpolation inevitably produces errors, the solution of the approximate problem would deviate from that of the original optimal control problem (17). Thus, an appropriate method should reduce the dependence on function approximation as much as possible.

Fortunately, heuristic-based methods, which require only the approximation of the decision functions x(t) and y(t), provide a promising way to solve the LBAO problem. Therefore, in this paper we solve the optimal control model (17) through a heuristic-based method. GA [9] is a commonly used meta-heuristic method for finding satisfactory solutions to complex optimization problems. As the performance of GA has been widely proven, in the subsequent subsections, we apply GA to solve the optimal control problem (17). Specifically, we proceed in the following four steps. First, design an encoding scheme to represent CEE offloading strategies by chromosomes (i.e., sets of parameters), and design a decoding scheme to safely map chromosomes to their corresponding CEE offloading strategies, so as to ensure that the resulting CEE offloading strategies satisfy the feasible set (1). Second, design an initialization scheme to generate random feasible chromosomes. Third, design a fitness evaluation scheme to select high-quality chromosomes. Finally, design a crossover-mutation operator to produce better chromosomes from the existing ones. The details are as follows.

4.2 Encoding and decoding

An encoding scheme is responsible for transforming a CEE offloading strategy into a set of controllable parameters called a chromosome. In this paper, we use the widely adopted Legendre-Gauss (LG) method [35], which has been proven to be an accurate and efficient discretization technique. Specifically, we proceed in the following three steps.

First, we map the time interval t ∈ [0, T] to τ ∈ [−1, 1] by

$$\tau = \frac{2t}{T} - 1. \tag{18}$$

Second, we determine an integer n and select n collocation points [35] by finding the roots of the equation Pn(τ) = 0, where Pn(τ) is the Legendre polynomial of degree n [36]. Denote the resulting n collocation points by (τ1, …, τn).

Third, we approximate the CEE offloading strategy (x, y) by

$$X_{ij}(\tau) = \sum_{k=1}^{n} x_{ij}(\tau_k)\, L_k(\tau), \qquad Y_j(\tau) = \sum_{k=1}^{n} y_j(\tau_k)\, L_k(\tau), \tag{19}$$

where Lk(τ) are the Lagrange polynomials [37]

$$L_k(\tau) = \prod_{m=1,\, m \neq k}^{n} \frac{\tau - \tau_m}{\tau_k - \tau_m}. \tag{20}$$

It follows from observation that xij(τ) = Xij(τ) and yj(τ) = Yj(τ) hold true for all the collocation points τ = τ1, …, τn. This way, the approximate CEE offloading strategy (X, Y) is determined by all the fixed points xij(τk) and yj(τk), where i = 1, …, N, j = 1, …, M, k = 1, …, n.
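The LG discretization above can be sketched in a few lines of Python using NumPy's Gauss-Legendre routine: the collocation points are the roots of Pn, and the Lagrange interpolant reproduces the nodal values exactly. The sample control (a sine function) is only a stand-in for some yj(τ).

```python
import numpy as np

# Legendre-Gauss collocation points: the n roots of the degree-n Legendre polynomial.
n = 5
tau, _ = np.polynomial.legendre.leggauss(n)   # roots of P_n, all inside (-1, 1)

def lagrange_basis(k, t, nodes):
    """L_k(t) = prod_{m != k} (t - nodes[m]) / (nodes[k] - nodes[m]), cf. Eq (20)."""
    L = 1.0
    for m in range(len(nodes)):
        if m != k:
            L *= (t - nodes[m]) / (nodes[k] - nodes[m])
    return L

# Interpolate a sample control from its values at the collocation points, cf. Eq (19).
vals = np.sin(tau)                     # stand-in for the nodal values y_j(tau_k)
Y = lambda t: sum(vals[k] * lagrange_basis(k, t, tau) for k in range(n))

# The interpolant reproduces the nodal values exactly: Y(tau_k) = y(tau_k).
assert all(abs(Y(tau[k]) - vals[k]) < 1e-12 for k in range(n))
```

This nodal-exactness property is exactly why the chromosome only needs to store the values at the collocation points.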

Therefore, we define a chromosome as an [(N + 1) × (M × n)]-dimensional matrix

$$c = \begin{bmatrix} x_{11}(\tau_1) & \cdots & x_{11}(\tau_n) & \cdots & x_{1M}(\tau_1) & \cdots & x_{1M}(\tau_n) \\ \vdots & & \vdots & & \vdots & & \vdots \\ x_{N1}(\tau_1) & \cdots & x_{N1}(\tau_n) & \cdots & x_{NM}(\tau_1) & \cdots & x_{NM}(\tau_n) \\ y_{1}(\tau_1) & \cdots & y_{1}(\tau_n) & \cdots & y_{M}(\tau_1) & \cdots & y_{M}(\tau_n) \end{bmatrix}. \tag{21}$$

After designing the encoding scheme, we need a decoding scheme to map a chromosome to its corresponding CEE offloading strategy. As the encoding scheme is an approximate mapping, a challenge in decoding is to ensure that the resulting CEE offloading strategy lies in the feasible set (1). To this end, a heuristic decoding scheme is shown in Algorithm 1. The main idea of this heuristic algorithm is straightforward. First, we clamp the parts of x and y that exceed their upper and lower bounds to those bounds. Second, for the parts of x that do not satisfy the conditions $\sum_{j=1}^{M} x_{ij}(t) = 1$, we modify these parts through normalization.

Algorithm 1 Decoding

Input: A chromosome c.
Output: A CEE offloading strategy (x, y).

1: // Below, calculate the approximate functions.
2: Calculate (X, Y) with the chromosome c from Eqs (19) and (21).
3: // Below, ensure the conditions 0 ≤ xij(t) ≤ 1, ∀i, j.
4: for i ← 1: N do
5:  for j ← 1: M do
6:   xij(t) ← min{1, max{0, Xij(t)}}, 0 ≤ t ≤ T.
7: // Below, ensure the conditions ∑j xij(t) = 1, for all i.
8: for i ← 1: N do
9:  if ∑j xij(t) > 0 then
10:   for j ← 1: M do
11:    xij(t) ← xij(t)/∑m xim(t), 0 ≤ t ≤ T.
12:  else
13:   for j ← 1: M do
14:    xij(t) ← 1/M, 0 ≤ t ≤ T.
15: // Below, ensure the conditions 0 ≤ yj(t) ≤ ymax, for all j.
16: for j ← 1: M do
17:  yj(t) ← min{ymax, max{0, Yj(t)}}, 0 ≤ t ≤ T.
18: // This way, (x, y) satisfies the feasible set (1). Thus, return it as the result.
19: return (x, y).
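A vectorized Python sketch of Algorithm 1 follows, assuming the interpolants X and Y have already been sampled on a time grid; the array shapes and the function name are our own convention. Degenerate all-zero rows are spread evenly, mirroring the else branch of the pseudocode.

```python
import numpy as np

def decode(X, Y, y_max):
    """Project sampled approximate strategies onto the feasible set (1).

    X: (K, N, M) samples of the interpolants X_ij; Y: (K, M) samples of Y_j.
    """
    x = np.clip(X, 0.0, 1.0)             # enforce 0 <= x_ij <= 1
    s = x.sum(axis=2, keepdims=True)     # row sums sum_j x_ij
    M = x.shape[2]
    # Normalize rows with positive mass; spread all-zero rows evenly (1/M each).
    x = np.where(s > 0, x / np.where(s > 0, s, 1.0), 1.0 / M)
    y = np.clip(Y, 0.0, y_max)           # enforce 0 <= y_j <= y_max
    return x, y
```

Working on clipped-then-normalized rows keeps the projection cheap; a more faithful (but costlier) alternative would be a Euclidean projection onto the probability simplex.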

4.3 Initialization

An initialization scheme is responsible for generating random chromosomes. As [38] points out, the most important consideration in this step is to maintain good diversity among the generated chromosomes to prevent premature convergence. Thus, we need to make the generated chromosomes uniformly distributed throughout their feasible set. To this end, an initialization scheme is displayed in Algorithm 2.

Algorithm 2 Initialization

Input: Not applicable.
Output: A random chromosome c.

1: // Below, consider the expression of c in Eq (21).
2: for k ← 1: n do
3:  for i ← 1: N do
4:   Generate an M-dim random vector from [0, 1]^M. Denote it by r.
5:   for j ← 1: M do
6:    xij(τk) ← rj/∑m rm.
7:  Generate an M-dim random vector from [0, ymax]^M. Denote it by r.
8:  for j ← 1: M do
9:   yj(τk) ← rj.
10: return c.
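A compact Python sketch of Algorithm 2, drawing the x-part row-wise and normalizing each row so that it sums to one; the row normalization keeps the initial chromosomes feasible, and the function name and array shapes are our own choices.

```python
import numpy as np

def init_chromosome(N, M, n, y_max, rng=np.random.default_rng()):
    """Random feasible chromosome: the x-part rows are normalized to sum to 1,
    and the y-part is drawn uniformly from [0, y_max]."""
    x = rng.random((n, N, M))             # x[k, i, j] stands for x_ij(tau_k)
    x /= x.sum(axis=2, keepdims=True)     # enforce sum_j x_ij(tau_k) = 1
    y = rng.uniform(0.0, y_max, (n, M))   # enforce 0 <= y_j(tau_k) <= y_max
    return x, y
```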

4.4 Fitness evaluation

A fitness evaluation scheme aims to measure the quality of different chromosomes. The higher the fitness, the better the chromosome. In this paper, we simply define the fitness of a chromosome as the negative of the latency it causes. The details of the fitness evaluation scheme are given in Algorithm 3, whose main idea is straightforward.

Algorithm 3 Fitness

Input: A chromosome c.

Output: The fitness value of the input chromosome.

1: (x, y) ← Decoding(c).

2: Calculate l with (x, y) based on the load evolution model (2).

3: Calculate the latency J(x, y) by the latency model (11).

4: return −J(x, y).

4.5 Crossover and mutation

A crossover-mutation operator is responsible for producing chromosomes with higher quality from the existing ones. In this paper, we adopt the standard uniform crossover method [39] and the standard uniform mutation method [39]. A pseudo-code for this step is given in Algorithm 4.

Algorithm 4 Evolution

Input: A pair of chromosomes (c(1), c(2)), crossover probability pc, and mutation probability pm.

Output: Two new chromosomes.

1: for i ← 1: (N + 1) do

2:  for j ← 1: (M × n) do

3:   // Crossover

4:   Generate a random number r from [0, 1].

5:   if r < pc

6:    Swap c(1)i,j and c(2)i,j.

7:   // Mutation.

8:   for q ← 1: 2 do

9:    Generate a random number r from [0, 1].

10:    if r < pm

11:     if i = N + 1

12:      Set c(q)i,j to a random value from [0, ymax].

13:     else

14:      Set c(q)i,j to a random value from [0, 1].

15: // Check whether xi1(τk) + ⋯ + xiM(τk) = 1 holds for every i and k. If not, modify the related elements.

16: for q ← 1: 2 do

17:  for i ← 1: N do

18:   for k ← 1: n do

19:    if x(q)i1(τk) + ⋯ + x(q)iM(τk) ≠ 1

20:     for j ← 1: M do

21:      x(q)ij(τk) ← x(q)ij(τk)/(x(q)i1(τk) + ⋯ + x(q)iM(τk)).

22: return (c(1), c(2)).
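The standard uniform crossover and uniform mutation steps [39] can be sketched on flat chromosomes. The flat gene layout and the single bound pair are simplifications of the two gene ranges used above ([0, 1] for x-genes, [0, ymax] for y-genes):

```python
import random

def evolve(c1, c2, pc, pm, lo=0.0, hi=1.0):
    """Uniform crossover (swap each gene with probability pc) followed
    by uniform mutation (resample each gene with probability pm)."""
    c1, c2 = list(c1), list(c2)
    for g in range(len(c1)):
        if random.random() < pc:
            c1[g], c2[g] = c2[g], c1[g]        # crossover: swap the gene
        for c in (c1, c2):
            if random.random() < pm:
                c[g] = random.uniform(lo, hi)  # mutation: resample
    return c1, c2
```

With pc = 1 and pm = 0, the call reduces to a full swap of the two parents, which is a convenient sanity check.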

4.6 Overview of LBAO algorithm

According to the above discussions, the overall procedure of our GA-based method is presented in Algorithm 5. We refer to it as the LBAO algorithm. A flow chart illustrating the LBAO algorithm is shown in Fig 7. In addition, the source code of the LBAO algorithm has been uploaded to GitHub; the link is given in [40]. Through the LBAO algorithm, we can obtain a satisfactory CEE offloading strategy.

Algorithm 5 LBAO

Input: Population size NP, crossover probability pc, mutation probability pm, and maximal iteration step Q.

Output: A satisfactory CEE offloading strategy (x*, y*).

1: // Population initialization and fitness evaluation

2: for i ← 1: NP do

3:  c(i) ← Initialization().

4:  F(c(i)) ← Fitness(c(i)).

5: // Population evolution

6: for q ← 1: Q do

7:  // Select chromosomes using the standard roulette-wheel operator [39].

8:  (c(1), …, c(NP)) ← RouletteWheel(F1, …, FNP).

9:  // Crossover and mutation

10:  for i ← 1: 2: NP do

11:   (c(i), c(i+1)) ← Evolution(c(i), c(i+1), pc, pm).

12: // Fitness evaluation for the new generation of population.

13:  for i ← 1: NP do

14:   F(c(i)) ← Fitness(c(i)).

15: // Find the best chromosome from the population, and return its strategy

16: c* ← arg maxi F(c(i)).

17: (x*, y*) ← Decoding(c*).

18: return (x*, y*).
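Algorithm 5 relies on the standard roulette-wheel operator [39]. Since fitness here is a negative latency, a concrete implementation must map fitness values to nonnegative selection weights; the shift used below is an implementation assumption of this sketch, not a detail from the paper:

```python
import random

def roulette_wheel(population, fitnesses, rng=random):
    """Fitness-proportionate (roulette-wheel) selection. Fitness values
    are negative latencies, so shift them to nonnegative weights first
    (the shift is an assumption of this sketch)."""
    shift = min(fitnesses)
    weights = [f - shift + 1e-12 for f in fitnesses]
    total = sum(weights)
    selected = []
    for _ in range(len(population)):
        r = rng.uniform(0.0, total)
        acc = 0.0
        for ind, w in zip(population, weights):
            acc += w
            if r <= acc:
                selected.append(ind)
                break
        else:
            selected.append(population[-1])  # float-rounding fallback
    return selected
```

Chromosomes with lower latency (higher fitness) receive proportionally larger slices of the wheel and are thus more likely to survive into the next generation.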

4.7 Time complexity of LBAO algorithm

Next, let us make a rough analysis of the time complexity of the proposed LBAO algorithm. Denote the average time consumption of the initialization, fitness evaluation, and evolution operators by CI, CF, and CE, respectively. Then, by directly observing the LBAO algorithm shown in Algorithm 5, we can obtain that the total time consumption is approximately equal to

Ctotal ≈ NP(CI + CF) + Q[NPCF + (NP/2)CE]. (22)

In practice, the initialization, fitness evaluation, and evolution operators are all implemented by matrix-based operations (e.g., matrix multiplication). Thus, we define matrix-based operations as basic operations and denote by C the average time consumption of a basic operation. Then, by direct observation, we can obtain that CI ≈ C and CE ≈ 2C. Next, let us estimate the time consumption of the fitness evaluation operator. As shown in Algorithm 3, it mainly contains the calculation of the load evolution model (2) and the latency model (11), which are essentially differential equations and a function integration, respectively. In this paper, we use the Euler method [41] (an efficient numerical algorithm for solving differential equations and function integrations) to calculate the load evolution model and latency model. When the Euler method is applied, the total time horizon [0, T] is uniformly divided into NT small time intervals, and for each time interval, the derivative of every function is calculated through only one matrix-based operation. Because the load evolution model contains (M + 1) functions lj and the latency model contains just one function J, the time consumption of the fitness evaluation operator is CF = [(M + 1) + 1]NTC = (M + 2)NTC.
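The Euler-based evaluation can be illustrated on a toy scalar problem (the actual models (2) and (11) are vector-valued): integrate dl/dt = f(l, t) forward with step h = T/NT while accumulating the latency integral with the same step. The function names below are illustrative:

```python
def euler_latency(f, g, l0, T, n_steps):
    """Integrate dl/dt = f(l, t) with the explicit Euler method while
    accumulating the integral J = ∫_0^T g(l(t)) dt with the same step."""
    h = T / n_steps
    l, t, J = l0, 0.0, 0.0
    for _ in range(n_steps):
        J += h * g(l)          # left-endpoint rule for the integral
        l = l + h * f(l, t)    # explicit Euler step
        t += h
    return l, J

# Toy check: dl/dt = -l, l(0) = 1 gives l(T) = e^{-T} and ∫ l dt = 1 - e^{-T}.
lT, J = euler_latency(lambda l, t: -l, lambda l: l, 1.0, 1.0, 10000)
```

Each Euler step costs one derivative evaluation per tracked function, which is exactly why CF grows linearly in both (M + 2) and NT.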

Combining the above discussions, it follows that

Ctotal ≈ NP[(M + 2)NT + 1]C + QNP[(M + 2)NT + 1]C = (Q + 1)NP[(M + 2)NT + 1]C. (23)

From Eq (23), we can see that the time consumption of the LBAO algorithm is determined by the population size NP, the maximum iteration step Q, and the discrete precision NT, provided that the number of edge servers M is given by real-world conditions. In practice, the population size NP and the maximum iteration step Q affect the optimality of the GA-based LBAO algorithm, whereas the discrete precision NT affects the precision of the Euler-based numerical result. Thus, if we intend to obtain an offloading strategy with better quality, we may set the parameters NP, Q, NT to large values, but the total time consumption of running the LBAO algorithm will increase. Conversely, if we intend to apply the LBAO algorithm to real-time scenarios, we may set the parameters NP, Q, NT to small values such that the time consumption is low enough (though doing so may sacrifice some quality in the results).

5 Numerical experiment

In the previous section, we proposed the LBAO algorithm based on the GA framework. In this section, we are devoted to verifying the LBAO algorithm. First, we describe the parameters used in our numerical experiments. Second, we investigate the optimal configuration of the LBAO algorithm. Third, we examine the performance of the LBAO algorithm. Finally, we give insight into the influence of load balance in CEE offloading.

5.1 Parameter setting

In our numerical experiments, we consider a CEE system composed of a remote cloud server, two edge servers, and three end-device collections; thus, let M = 2 and N = 3. Let T = 6 seconds. As the propagation delay between end devices and edge servers can range from 30 milliseconds (ms) to 40 ms [42], we set Dd = 35 ms, the average value. In addition, as the propagation delay between edge servers can range from 0.1 ms [43] to 5 ms [44], we set each such delay randomly within [0.1, 5] ms. Besides, we assume that the propagation delay between edge servers and cloud centers can range from 100 ms to 500 ms (without network congestion), and we set each such delay randomly within [100, 500] ms.

Next, we suppose the load balance scheme is supported by the Round Robin (RR) algorithm [45], one of the most widely used load balance algorithms. In the RR scheme, tasks from an edge server are dispatched equally to all the idle servers. Suppose the maximum task migration rate is fmax = 200 tasks per second (tps). Following [25], the load balance function f can then be defined as in Eq (24).
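The exact form of Eq (24) follows [25] and is not reproduced in this excerpt; the sketch below only illustrates the RR idea stated above, i.e., an overloaded server's migration budget fmax is split equally among the idle servers. The function signature, the idleness threshold, and all names are assumptions of this sketch:

```python
F_MAX = 200.0  # maximal migration rate in tasks per second (Sec. 5.1)

def rr_migration_rate(loads, j, busy_threshold):
    """Hypothetical RR-style balance function: a busy server j sheds
    load at rate F_MAX split equally among the currently idle servers;
    idle servers shed nothing."""
    idle = [k for k, l in enumerate(loads) if l < busy_threshold]
    if j in idle or not idle:
        return 0.0                 # idle servers do not migrate tasks
    return F_MAX / len(idle)       # equal split among idle servers
```

For example, with one overloaded server and two idle peers, the overloaded server migrates tasks to each idle peer at fmax/2 = 100 tps.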

Moreover, we suppose the task processing rate of each edge server can range from 2000 to 3000 tps and that of the cloud center is 3000 tps. Thus, we set β0 = 3000 and set each βj, j = 1, …, M, randomly within the range [2000, 3000]. To present a case of flash crowd (i.e., the network congestion that occurs when a huge number of end devices request an edge server simultaneously), we set the edge offloading rates as αi(t) = 6000i tps. Besides, let the maximum cloud offloading rate be ymax = 200 tps.

5.2 Optimal configuration of LBAO algorithm

Having described the parameters used in our experiments, let us investigate the optimal configuration of the LBAO algorithm. Because the LBAO algorithm is based on the GA framework, there are three crucial parameters that can affect the quality of the resulting solution: the population size NP, the crossover probability pc, and the mutation probability pm. Below, we determine these three parameters by setting them to different values and examining the resulting algorithm performance.

Let NP ∈ {120, 240, 360}, pc ∈ {0.05, 0.10, …, 0.50}, pm ∈ {0.025, 0.050, …, 0.100}. Then, we run the LBAO algorithm for each parameter combination with the same number of iteration steps and compare the results. Denote by J* the optimal latency yielded by the LBAO algorithm. Tables 1–3 display the algorithm performance under different parameter combinations for NP = 120, NP = 240, and NP = 360, respectively. From these three tables, it is seen that when NP = 360, pc = 0.50, pm = 0.075, the LBAO algorithm achieves the best performance, with an optimal latency of J* = 182.839 thousand seconds. Thus, we recommend NP = 360, pc = 0.50, pm = 0.075 as the optimal configuration of the LBAO algorithm for the case we examine.
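The tuning procedure is an exhaustive grid search over the three parameters. A minimal sketch follows; `run_lbao` is a stand-in for one LBAO execution, and the toy objective is constructed purely for illustration (it happens to place the optimum at the recommended configuration):

```python
from itertools import product

def grid_search(run_lbao, NPs, pcs, pms):
    """Return (J*, NP, pc, pm) for the combination with lowest latency."""
    best = None
    for NP, pc, pm in product(NPs, pcs, pms):
        J = run_lbao(NP, pc, pm)
        if best is None or J < best[0]:
            best = (J, NP, pc, pm)
    return best

# Toy stand-in objective (NOT the real LBAO run):
best = grid_search(
    lambda NP, pc, pm: 1000 - NP - 100 * pc + abs(pm - 0.075),
    [120, 240, 360], [0.05, 0.25, 0.50], [0.025, 0.05, 0.075, 0.1])
```

In practice each grid point would invoke the full LBAO algorithm, so the search cost is the per-run cost of Eq (23) multiplied by the grid size.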

thumbnail
Table 1. Optimal latency under different crossover and mutation probabilities when NP = 120.

https://doi.org/10.1371/journal.pone.0296897.t001

thumbnail
Table 2. Optimal latency under different crossover and mutation probabilities when NP = 240.

https://doi.org/10.1371/journal.pone.0296897.t002

thumbnail
Table 3. Optimal latency under different crossover and mutation probabilities when NP = 360.

https://doi.org/10.1371/journal.pone.0296897.t003

5.3 Effectiveness of LBAO algorithm

Though the GA framework is considered effective in tackling most optimization problems [39], there is still a need to verify whether the solution obtained from the GA-based LBAO algorithm achieves satisfactory performance. To this end, we compare the LBAO solution with a large number of randomly generated CEE offloading strategies and verify whether the LBAO solution outperforms the best of all the random strategies. The reason for this experimental approach is straightforward. According to Monte Carlo theory [46], as the number of randomly generated CEE offloading strategies increases, the best performance among all the random strategies gradually approaches the global optimal value. Thus, when the number of random strategies is large enough and the LBAO solution is better than all of them, it is reasonable to recommend the LBAO solution as a satisfactory result.

Let NR denote the number of randomly generated CEE offloading strategies. Then, a comparison between the LBAO algorithm and the Monte Carlo method is shown in Table 4. From this table, it is seen that the optimal latency of the random CEE offloading strategies decreases as the number of random strategies increases, which agrees with Monte Carlo theory. However, even when the number of random strategies is as large as NR = 10,000,000, the optimal latency of the random strategies is much higher than that of the LBAO solution. Besides, all the numerical experiments are conducted on the same PC with an AMD 5800X CPU and 32 GB of memory, and the LBAO algorithm proves much more efficient than the Monte Carlo method, as its runtime is far less. Thus, the LBAO algorithm is effective.
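The Monte Carlo baseline amounts to sampling feasible strategies and keeping the best one. A minimal sketch with a toy one-dimensional objective (the names and the stand-in objective are illustrative):

```python
import random

def monte_carlo_best(evaluate, sample_strategy, n_samples, seed=0):
    """Sample n_samples random feasible strategies and keep the one
    with the lowest latency."""
    rng = random.Random(seed)
    best_J, best_s = float("inf"), None
    for _ in range(n_samples):
        s = sample_strategy(rng)
        J = evaluate(s)
        if J < best_J:
            best_J, best_s = J, s
    return best_J, best_s

# Toy check: minimizing (s - 0.3)^2 over uniform samples approaches 0.
J, s = monte_carlo_best(lambda s: (s - 0.3) ** 2, lambda r: r.random(), 20000)
```

The best sampled value improves slowly (roughly with the inverse of the sample count in one dimension, and far slower in high dimensions), which is why the GA-based search dominates this baseline at equal budget.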

thumbnail
Table 4. Comparison between LBAO algorithm and Monte Carlo (MC) method.

https://doi.org/10.1371/journal.pone.0296897.t004

In addition, we compare the LBAO algorithm with other methods, including but not limited to those applied to industrial IoT scenarios. Let us introduce several baseline methods as follows.

  • Cloud Horizon (ClHo) scheme [6]. At any time, each end device selects a random edge server to offload all computational tasks, and each edge server always performs cloud offloading at the maximum capability. With respect to our load evolution model, the offloading strategy is configured by xij(t) ≡ 1 if j is the selected server and xij(t) ≡ 0 otherwise, ∀i, j, with yj(t) ≡ ymax, ∀j.
  • First Come First Service (FCFS) scheme [19]. At any time, each task produced by end devices is offloaded to the edge servers in sequence, following the first-come-first-service principle; similarly, each task in the queue of an edge server is offloaded to the cloud center in sequence, following the same principle.
  • Edge Processing Only (EPS) scheme [47]. At any time, each end device can only select a random edge server to offload all computational tasks, and cloud offloading is unavailable. With respect to our load evolution model, the offloading strategy is configured as in the ClHo scheme for xij(t), ∀i, j, with yj(t) ≡ 0, ∀j.
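The ClHo and EPS baselines admit simple generators. The one-hot x-configuration and the constant-in-time rates below are inferred from the prose descriptions above; the names and shapes are illustrative:

```python
import random

M = 2          # number of edge servers (Sec. 5.1)
Y_MAX = 200.0  # maximal cloud offloading rate (Sec. 5.1)

def clho_strategy():
    """Cloud Horizon: each device sends all tasks to one random edge
    server; every server offloads to the cloud at the maximum rate."""
    j_star = random.randrange(M)
    x = [1.0 if j == j_star else 0.0 for j in range(M)]
    y = [Y_MAX] * M
    return x, y

def eps_strategy():
    """Edge Processing Only: as ClHo, but cloud offloading disabled."""
    x, _ = clho_strategy()
    return x, [0.0] * M
```

Both baselines are static in time, which is exactly what the comparison in Table 5 exploits: LBAO can instead shape the offloading rates over the whole horizon [0, T].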

A performance comparison of the ClHo, FCFS, EPS, MC, and LBAO methods is shown in Table 5, from which it is seen that the proposed LBAO algorithm outperforms the four baseline methods in terms of total task latency. In addition, denote the load evolution trajectories under the ClHo, FCFS, and EPS schemes by lClHo, lFCFS, and lEPS, respectively. Then, the load evolution trajectories under the LBAO algorithm and the ClHo, FCFS, EPS methods are compared in Fig 8. From this figure, it is seen that the cloud load under the LBAO algorithm is much lower than those under the baseline methods. This result implies that in the current network environment, the propagation delay for cloud offloading may be relatively high. Thus, the LBAO strategy does not recommend excessive use of cloud offloading, but rather suggests fully utilizing the potential capability brought by load balance between edge servers.

thumbnail
Table 5. Comparison of different methods in terms of total latency.

https://doi.org/10.1371/journal.pone.0296897.t005

thumbnail
Fig 8. Comparison between the load evolution trajectories lClHo, lFCFS, lEPS and l*.

https://doi.org/10.1371/journal.pone.0296897.g008

Next, let us investigate the relationship between the time consumption and the result quality of the LBAO algorithm. Recall that the time complexity of the LBAO algorithm is mainly determined by the population size NP, the maximum iteration step Q, and the discrete precision NT. Let Q = 100, NP ∈ {40, 80, 120}, NT ∈ {20, 40, 60, 80, 100}. Then, we run the LBAO algorithm for each parameter combination and record the corresponding total time consumption and task latency. The results are shown in Fig 9, from which the following observations can be made:

  • With the increase of the discrete precision NT, the total time consumption of the LBAO algorithm increases in a roughly linear manner, whereas the total latency decreases more and more slowly. This observation implies that it is not cost-efficient to blindly increase the precision NT, because the resulting improvement in solution quality would be very limited. For example, when NP = 120, enlarging the precision NT from 20 to 100 yields only a marginal quality improvement, while the time consumption increases severalfold. Conversely, if the precision is reduced, the algorithm time consumption decreases quickly, and the LBAO algorithm can be applied to real-time scenarios with only a little quality reduction.
  • With the increase of the population size NP, the total time consumption of the LBAO algorithm increases dramatically while the total latency changes only slightly. This observation also suggests not blindly increasing the population size NP for better algorithm quality. Thus, we recommend setting the population size NP to a small value in practical applications.
  • From the above two observations, setting the population size NP and the discrete precision NT to small values is a cost-effective way to solve the LBAO problem. Also, from Fig 9, it is seen that even when these two parameters are as small as NP = 40 and NT = 20, the result quality is acceptable and the time consumption is only about one second, which is much less than the considered time horizon T = 6 seconds. Thus, it is reasonable to consider that the LBAO algorithm has the potential to be applied in real-time applications.
thumbnail
Fig 9. Relationship between time consumption and result quality of LBAO algorithm.

https://doi.org/10.1371/journal.pone.0296897.g009

5.4 Influence of load balance in CEE offloading

Finally, let us investigate the influence of load balance in CEE collaborative offloading. To this end, we perform the following four experimental steps:

  1. Under the actual load balance scheme f shown in (24), calculate the optimal strategy (x*, y*) through the LBAO algorithm and denote the corresponding load evolution trajectory by l*. The strategy (x*, y*) is the optimal decision that is aware of load balance.
  2. Let the load balance scheme be f(i, j, l(t), t) ≡ 0 and calculate the optimal strategy through the LBAO algorithm. This strategy is the erroneous optimal decision that ignores the effect of load balance.
  3. Under the actual load balance scheme f shown in (24), calculate the actual load evolution trajectory induced by the erroneous strategy.
  4. Compare the load evolution trajectory l* with that of the erroneous strategy. The result is shown in Fig 10.
thumbnail
Fig 10. Comparison between the load evolution trajectories under the correct and erroneous optimal strategies.

https://doi.org/10.1371/journal.pone.0296897.g010

From Fig 10, it is seen that the influence of load balance on the optimal decision of CEE offloading strategies is non-negligible. More specifically, it follows from Fig 10A that the erroneous optimal strategy tends to adopt higher cloud offloading rates because it ignores the influence of load balance and underestimates the capacity of edge servers, leading to higher load in the cloud center; on the contrary, the correct optimal strategy (x*, y*), which has accounted for the load balance scheme, tends to adopt lower cloud offloading rates because it perceives that the edge servers are able to accommodate more tasks through load balance. In addition, Fig 10B and 10C show that the load balance scheme can effectively reduce the load difference between the two edge servers. Thus, load balance is a crucial factor in determining a proper CEE offloading strategy.

6 Conclusion and future work

In this paper, we have addressed the LBAO problem. First, we have proposed a novel load evolution model to characterize the influences of different CEE offloading strategies on the load dynamics of a CEE system. On this basis, we have established a latency model to evaluate different CEE offloading strategies and formulated an optimal control model to describe the LBAO problem. Second, we have analyzed the feasibility of typical optimal control numerical methods in solving the LBAO problem, developed a numerical method (the LBAO algorithm) based on the GA framework to solve the LBAO problem, and made a rough analysis of the algorithm time complexity. Third, through a series of numerical experiments, we have verified the effectiveness of the LBAO algorithm.

In our research, we have shown that load balance is a crucial factor in designing CEE offloading strategies. If a strategy ignores the influence of load balance between edge servers, the load dynamics of the edge servers will be estimated with bias, and the offloading performance may not be well improved. Thus, developing a load-balance-aware offloading strategy is necessary.

Still, there exist some limitations in our work. First, as discussed earlier, it is a dilemma to simultaneously increase the result quality and decrease the time consumption of our proposed LBAO algorithm. Thus, in future work, it would be valuable to study an improved method to address or mitigate this issue. Second, we notice that artificial intelligence (AI)-based algorithms, such as reinforcement learning [48] and adaptive dynamic programming [49], are an emerging type of numerical method for solving optimal control problems. Thus, in future work, it is worth investigating the feasibility of AI-based methods in solving the LBAO problem. Further, if AI-based methods prove applicable to the LBAO problem, it would be significant to compare their performance with our proposed GA-based algorithm. Third, in our work, the network environment in a CEE system is supposed to be relatively stable over a short time interval. In future work, it is worth studying offloading strategies under an unstable network environment, and we may extend our work by introducing noise into the mathematical formulation of the LBAO problem.

References

  1. Laghari AA, Wu K, Laghari RA, Ali M, Khan AA. A review and state of art of Internet of Things (IoT). Archives of Computational Methods in Engineering. 2021; p. 1–19.
  2. Khan LU, Yaqoob I, Tran NH, Kazmi SA, Dang TN, Hong CS. Edge-computing-enabled smart cities: A comprehensive survey. IEEE Internet of Things Journal. 2020;7(10):10200–10232.
  3. Qiu T, Chi J, Zhou X, Ning Z, Atiquzzaman M, Wu DO. Edge computing in industrial internet of things: Architecture, advances and challenges. IEEE Communications Surveys & Tutorials. 2020;22(4):2462–2488.
  4. Kai C, Zhou H, Yi Y, Huang W. Collaborative cloud-edge-end task offloading in mobile-edge computing networks with limited communication capability. IEEE Transactions on Cognitive Communications and Networking. 2020;7(2):624–634.
  5. Yang Z, Liang B, Ji W. An intelligent end–edge–cloud architecture for visual IoT-assisted healthcare systems. IEEE Internet of Things Journal. 2021;8(23):16779–16786.
  6. Ding Y, Li K, Liu C, Li K. A potential game theoretic approach to computation offloading strategy optimization in end-edge-cloud computing. IEEE Transactions on Parallel and Distributed Systems. 2021;33(6):1503–1519.
  7. Duan S, Wang D, Ren J, Lyu F, Zhang Y, Wu H, et al. Distributed artificial intelligence empowered by end-edge-cloud computing: A survey. IEEE Communications Surveys & Tutorials. 2022;25(1):591–624.
  8. Wang B, Wang C, Huang W, Song Y, Qin X. A survey and taxonomy on task offloading for edge-cloud computing. IEEE Access. 2020;8:186080–186101.
  9. Katoch S, Chauhan SS, Kumar V. A review on genetic algorithm: past, present, and future. Multimedia Tools and Applications. 2021;80:8091–8126. pmid:33162782
  10. Lin H, Zeadally S, Chen Z, Labiod H, Wang L. A survey on computation offloading modeling for edge computing. Journal of Network and Computer Applications. 2020;169:102781.
  11. Feng C, Han P, Zhang X, Yang B, Liu Y, Guo L. Computation offloading in mobile edge computing networks: A survey. Journal of Network and Computer Applications. 2022;202:103366.
  12. Diamanti M, Charatsaris P, Tsiropoulou EE, Papavassiliou S. Incentive mechanism and resource allocation for edge-fog networks driven by multi-dimensional contract and game theories. IEEE Open Journal of the Communications Society. 2022;3:435–452.
  13. Maray M, Mustafa E, Shuja J, Bilal M. Dependent task offloading with deadline-aware scheduling in mobile edge networks. Internet of Things. 2023;23:100868.
  14. Mustafa E, Shuja J, Bilal K, Mustafa S, Maqsood T, Rehman F, et al. Reinforcement learning for intelligent online computation offloading in wireless powered edge networks. Cluster Computing. 2023;26(2):1053–1062.
  15. Zaman SKu, Jehangiri AI, Maqsood T, Haq Nu, Umar AI, Shuja J, et al. LiMPO: Lightweight mobility prediction and offloading framework using machine learning for mobile edge computing. Cluster Computing. 2023;26(1):99–117.
  16. Sun C, Li H, Li X, Wen J, Xiong Q, Wang X, et al. Task offloading for end-edge-cloud orchestrated computing in mobile networks. In: 2020 IEEE Wireless Communications and Networking Conference (WCNC). IEEE; 2020. p. 1–6.
  17. Peng K, Huang H, Wan S, Leung VC. End-edge-cloud collaborative computation offloading for multiple mobile users in heterogeneous edge-server environment. Wireless Networks. 2020; p. 1–12.
  18. Chen Y, Zhao J, Wu Y, Huang J, Shen XS. QoE-aware decentralized task offloading and resource allocation for end-edge-cloud systems: A game-theoretical approach. IEEE Transactions on Mobile Computing. 2022; p. 1–17.
  19. Peng K, Huang H, Zhao B, Jolfaei A, Xu X, Bilal M. Intelligent computation offloading and resource allocation in IIoT with end-edge-cloud computing using NSGA-III. IEEE Transactions on Network Science and Engineering. 2022;10(5):3032–3046.
  20. Dai B, Niu J, Ren T, Atiquzzaman M. Toward mobility-aware computation offloading and resource allocation in end–edge–cloud orchestrated computing. IEEE Internet of Things Journal. 2022;9(19):19450–19462.
  21. Du R, Liu C, Gao Y, Hao P, Wang Z. Collaborative cloud-edge-end task offloading in NOMA-enabled mobile edge computing using deep learning. Journal of Grid Computing. 2022;20(2):14.
  22. Tang T, Li C, Liu F. Collaborative cloud-edge-end task offloading with task dependency based on deep reinforcement learning. Computer Communications. 2023.
  23. Teng M, Li X, Zhu K. Joint optimization of sequential task offloading and service deployment in end-edge-cloud system for energy efficiency. IEEE Transactions on Sustainable Computing. 2023.
  24. Ma Z, Shao S, Guo S, Wang Z, Qi F, Xiong A. Container migration mechanism for load balancing in edge network under power Internet of Things. IEEE Access. 2020;8:118405–118416.
  25. Li T. Optimal cloud assistance policy of end-edge-cloud ecosystem for mitigating edge distributed denial of service attacks. Journal of Cloud Computing. 2021;10(1):1–17.
  26. Kashani MH, Mahdipour E. Load balancing algorithms in fog computing. IEEE Transactions on Services Computing. 2022;16(2):1505–1521.
  27. Yuan Q, Li J, Zhou H, Lin T, Luo G, Shen X. A joint service migration and mobility optimization approach for vehicular edge computing. IEEE Transactions on Vehicular Technology. 2020;69(8):9041–9052.
  28. David J, Thomas C. Discriminating flash crowds from DDoS attacks using efficient thresholding algorithm. Journal of Parallel and Distributed Computing. 2021;152:79–87.
  29. Rafique W, Qi L, Yaqoob I, Imran M, Rasool RU, Dou W. Complementing IoT services through software defined networking and edge computing: A comprehensive survey. IEEE Communications Surveys & Tutorials. 2020;22(3):1761–1804.
  30. Bai T, Pan C, Deng Y, Elkashlan M, Nallanathan A, Hanzo L. Latency minimization for intelligent reflecting surface aided mobile edge computing. IEEE Journal on Selected Areas in Communications. 2020;38(11):2666–2682.
  31. Teo KL, Li B, Yu C, Rehbock V. Applied and computational optimal control. Optimization and Its Applications. 2021.
  32. Rao AV. A survey of numerical methods for optimal control. Advances in the Astronautical Sciences. 2009;135(1):497–528.
  33. Liu B, Zhao Z. A note on multi-point boundary value problems. Nonlinear Analysis: Theory, Methods & Applications. 2007;67(9):2680–2689.
  34. Dal Bianco N, Bertolazzi E, Biral F, Massaro M. Comparison of direct and indirect methods for minimum lap time optimal control problems. Vehicle System Dynamics. 2019;57(5):665–696.
  35. Guo By, Wang Zq. Legendre–Gauss collocation methods for ordinary differential equations. Advances in Computational Mathematics. 2009;30:249–280.
  36. Dattoli G, Ricci PE, Cesarano C. A note on Legendre polynomials. International Journal of Nonlinear Sciences and Numerical Simulation. 2001;2(4):365–370.
  37. Erkuş E, Altın A. A note on the Lagrange polynomials in several variables. Journal of Mathematical Analysis and Applications. 2005;310(1):338–341.
  38. Kazimipour B, Li X, Qin AK. A review of population initialization techniques for evolutionary algorithms. In: 2014 IEEE Congress on Evolutionary Computation (CEC). IEEE; 2014. p. 2585–2592.
  39. Mirjalili S, Mirjalili S. Genetic algorithm. Evolutionary Algorithms and Neural Networks: Theory and Applications. 2019; p. 43–55.
  40. https://github.com/fanyueqi/LBAO-algorithm.git.
  41. Biswas B, Chatterjee S, Mukherjee S, Pal S. A discussion on Euler method: A review. Electronic Journal of Mathematical Analysis and Applications. 2013;1(2):2090–2792.
  42. Chen X, Wang Q, Mi Y, Guo L. Transmission delay simulation for edge computing network with service integrating mode. In: 12th International Conference on Quality, Reliability, Risk, Maintenance, and Safety Engineering (QR2MSE 2022). IET; 2022. p. 1052–1056.
  43. Rodrigues TG, Suto K, Nishiyama H, Kato N. Hybrid method for minimizing service delay in edge cloud computing through VM migration and transmission power control. IEEE Transactions on Computers. 2016;66(5):810–819.
  44. Tajiri K, Kawahara R, Matsuo Y. Optimizing edge-cloud cooperation for machine learning accuracy considering transmission latency and bandwidth congestion. IEICE Transactions on Communications. 2023;106(9):827–836.
  45. Rasmussen RV, Trick MA. Round robin scheduling–a survey. European Journal of Operational Research. 2008;188(3):617–636.
  46. Kroese DP, Brereton T, Taimre T, Botev ZI. Why the Monte Carlo method is so important today. Wiley Interdisciplinary Reviews: Computational Statistics. 2014;6(6):386–392.
  47. Sun Z, Yang H, Li C, Yao Q, Wang D, Zhang J, et al. Cloud-edge collaboration in industrial internet of things: A joint offloading scheme based on resource prediction. IEEE Internet of Things Journal. 2021;9(18):17014–17025.
  48. Perrusquía A, Yu W. Identification and optimal control of nonlinear systems using recurrent neural networks and reinforcement learning: An overview. Neurocomputing. 2021;438:145–154.
  49. Liu D, Xue S, Zhao B, Luo B, Wei Q. Adaptive dynamic programming for control: A survey and recent advances. IEEE Transactions on Systems, Man, and Cybernetics: Systems. 2020;51(1):142–160.