
An evolutionary algorithm based on approximation method and related techniques for solving bilevel programming problems

Abstract

In the engineering and economic management fields, optimisation models frequently involve multiple decision-making levels; these are known as multi-level optimisation problems. Because the decision-making process of such problems is hierarchical, they are also called hierarchical optimisation problems. When the problem involves only two decision-making levels, the corresponding optimisation model is referred to as a bilevel programming problem (BLPP). To address complex nonlinear bilevel programming problems, in this study we design an evolutionary algorithm embedded with a surrogate model (an approximation method) and correlation coefficients. First, the isodata method is used to group the initial population, and the correlation coefficients of the individuals in each group are determined based on the ranks of the leader and follower objective functions. Second, for the offspring produced by the evolutionary operators, the surrogate model is used to approximate the solution of the follower's programming problem, during which the points in the population are screened using the correlation coefficients. Finally, a new crossover operator is designed using the spherical search method, which diversifies the generated offspring. Simulation results demonstrate that the proposed algorithm can effectively obtain an optimal solution.

Introduction

Problem models

BLPP is a typical representative of multilevel hierarchical optimisation problems. In contrast to multi-objective optimisation, the decision makers in a BLPP are at two different levels. This hierarchical structure often makes the corresponding problem neither convex nor differentiable, and the problem is also strongly NP-hard. The BLPP model is expressed as follows:

min_x F(x, y)
s.t. G(x, y) ≤ 0,
where, for each fixed x, y solves
min_y f(x, y)
s.t. g(x, y) ≤ 0. (1)

Here, x = (x1, …, xn) and y = (y1, …, ym) are the leader's and follower's variables, respectively. F: R^(n+m) → R and f: R^(n+m) → R are the leader's and follower's objective functions. G: R^(n+m) → R and g: R^(n+m) → R are the leader's and follower's constraints, respectively.

It can be seen from (1) that a BLPP is an interactive optimisation model between two decision makers, each with their own objective. The decision-making procedure is executed as follows: the leader, located at the upper level, makes a decision by selecting a variable value x. The follower then observes the leader's selection and responds by optimising his/her own objective, providing an optimal solution y. The point pair (x, y) is called a feasible point of the BLPP. Bilevel optimisation aims to select a value x that optimises the leader's objective among all feasible points.

During the optimisation procedure, the leader influences the follower by fixing a value x as a parameter. The follower's response y in turn alters the values of the objective and constraints in the leader's problem. When the optimal solution (x, y) is obtained, the optimisation process stops.

The nested nature of BLPP poses a number of additional challenges compared with conventional single-level optimization problems. In particular:

  1. The leader's problem of a bilevel model may behave nonlinearly, even if the problems at both levels are linear. It has been proved that the BLPP is an NP-hard problem.
  2. Theoretically, a leader’s solution is considered valid/feasible only if the corresponding follower’s variables are the true global optimum of the follower problem. Global optimality can only be assured in very limited cases, such as convex and linear problems. However, for most nonlinear and black-box problems, it is not possible to ensure global optimality.
  3. In the deceptive case, incorrect follower’s optimal values may cause the objective value to be better than the leader’s true optimal value, which poses a severe challenge to the ranking strategy used in evolutionary optimization technology.
  4. Because every leader variable value requires solving the follower's programming problem to obtain a feasible solution of the BLPP, the computational cost is significantly high.
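To make the cost described in point 4 concrete, the following toy sketch (the problem and all numbers are invented for illustration, not taken from the paper) counts how many follower optimisations a naive nested search performs: one full follower solve per leader trial point.

```python
# Toy bilevel problem: leader minimises F(x, y) = (x - 1)**2 + y**2,
# follower minimises f(x, y) = (y - x)**2, whose optimum is y*(x) = x.
# Every leader trial point triggers a full follower optimisation.

follower_solves = 0

def solve_follower(x):
    """Grid-search the follower's problem for a fixed leader value x."""
    global follower_solves
    follower_solves += 1
    ys = [i / 100.0 for i in range(-200, 201)]
    return min(ys, key=lambda y: (y - x) ** 2)

def leader_objective(x):
    y = solve_follower(x)  # one follower optimisation per leader point
    return (x - 1) ** 2 + y ** 2

xs = [i / 100.0 for i in range(-100, 201)]  # 301 leader trial points
x_best = min(xs, key=leader_objective)
print(x_best, follower_solves)  # -> 0.5 301
```

Even this tiny grid search needed 301 follower optimisations, which is why the paper invests in surrogate models and correlation-based screening to cut down follower evaluations.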

In particular, if the problem is mathematically not suited to exact techniques, evolutionary or hybrid techniques are used.

Related work

With advancements in science and technology, increasingly complex optimisation models have emerged, particularly in the fields of equipment installation, task scheduling, production planning, line optimisation, software design, and tariff setting. How these models can be solved effectively has therefore become an important research topic in the optimisation field. Over the past few decades, optimisation research has mainly focused on single-objective and multi-objective models, whereas relatively few studies have addressed hierarchical optimisation models.

Leveraging one-time decision theory to produce multiple short-life-cycle products, Zhu and Guo [1] studied BLPP applications with follower problem operators for manufacturers and used classical optimisation methods to solve them. Nasrolahpour et al. [2] developed an energy storage system for merchant pricing based on a two-tier complementary model that can determine the most favourable transaction behaviour. Under a multi-objective framework, Ahmad et al. [3] proposed a simple multi-objective bilevel linear programming method, which treated reservoir managers and multiple water-use departments as a hierarchical structure to optimally allocate limited water resources. More applications of optimisation models can be found in the literature [4–13].

Many practical applications have promoted theoretical research on bilevel programming problems, such as developing efficient algorithms and obtaining optimality conditions. However, owing to the computational complexity of the BLPP itself, it is often very challenging to adopt traditional gradient-based optimisation methods for such problems. Currently, only special classes of bilevel programs, such as linear and convex quadratic programs, can be solved to optimality through optimality conditions. For other types of bilevel programming problems, research has focused on designing swarm intelligence and hybrid algorithms, which are currently the more effective algorithmic frameworks and have achieved good solutions on certain test problems. Existing methods for solving the BLPP can be grouped into the following categories:

  A) Classical approaches
    Some classic methods for BLPP include the simplex method [14], branch-and-bound method [15], gradient descent method [16], and penalty function methods [17–19]. Classical methods typically apply the optimality conditions of the follower's problem to convert the BLPP into a single-level problem. Dempe [14] used the simplex method to propose an algorithm for solving linear BLPPs. The algorithm introduces slack variables to find a basis as a feasible solution of the BLPP through a simplex-based iterative method. Although this method is effective for solving small-scale linear BLPPs, it cannot be directly extended to nonlinear problems. Susanne et al. [16] replaced the follower problem with its Karush-Kuhn-Tucker conditions and applied the optimal value method to transform the original problem into a single-level problem. This approach can satisfy the complementarity conditions of mathematical programs in Banach spaces and introduces M-stationarity. Using the duality gap as a penalty function, White [17] proposed an effective algorithm for solving the BLPP; it adopts a newly designed exact penalty function method to obtain the global optimal solution in the linear case and provides a novel theoretical analysis. In [18, 19], weak linear BLPPs were dealt with via penalty functions and Karush-Kuhn-Tucker conditions.
  B) Evolutionary approaches
    Evolutionary algorithms, as representatives of swarm intelligence optimisation technology, have been widely used to solve various BLPPs over the past few decades. An early evolutionary algorithm was proposed by Mathieu et al. [20], who applied linear programming methods to handle the follower's problems and used a genetic algorithm to explore the search space of the leader's problem. Focusing on BLPPs whose follower's problem is a convex program, Wang [21] proposed an evolutionary algorithm with an embedded constraint-processing method. This method applies the Karush-Kuhn-Tucker conditions to transform the follower's problems, turning the original model into a single-level programming model; in addition, a constraint-processing method was designed to obtain sufficient feasible solutions. Based on similar optimality conditions, Li [22] presented a genetic algorithm that solves nonlinear/linear fractional BLPPs. Another study, on the presence of so-called pseudo-feasible solutions in evolutionary bilevel optimisation (QBCA-2), focused on determining how pseudo-feasible solutions can affect the performance of an evolutionary algorithm; moreover, a novel and scalable set of test problems with characterised pseudo-feasible solutions was introduced in [23]. In [24], a co-evolutionary algorithm was proposed to solve the BLPP, in which the follower problem was solved in two stages during the evolution process. Aboelnaga et al. [25] proposed an improved genetic algorithm with a chaotic search method. Goshu et al. [26] proposed a metaheuristic algorithm for stochastic BLPPs. An evolutionary algorithm for solving nonlinear bilevel programming problems was presented in [27]; the algorithm is designed by reflecting the optimal solution of the follower problem back to the leader problem. To ensure the quality of each iteration, the algorithm adaptively changes the population size during the evolution process and generates individuals using the tabu search method.
  C) Hybrid approaches
    The hybrid algorithm [28] is a common approach to solving BLPPs. Abo-Elnaga et al. [29] proposed a multi-sine-cosine algorithm to solve nonlinear BLPPs, presenting sine-cosine algorithms based on three different populations. The first population deals with the leader programming problem, while the second addresses the follower programming problem. In addition, the Karush-Kuhn-Tucker conditions are applied to transform the initial problem into a constrained optimisation problem, which is solved using the third population. If the objective function value equals zero, the solution obtained by solving the leader and follower problems is deemed feasible. Wang [30] proposed a particle swarm optimisation algorithm with an embedded estimation of distribution algorithm to solve nonlinear BLPPs. Before executing the speed and location update rules of Particle Swarm Optimization, a Gaussian distribution is applied to generate offspring that replace some inferior individuals (particles) in the current population.
  D) EA based on approximate methods
    To avoid lengthy calculation processes, some evolutionary algorithms utilise approximation techniques to improve the efficiency of intermediate calculations. A bilevel covariance matrix adaptation evolution strategy (BLCMAES) was proposed in [31]. The method designs a sharing mechanism so that prior knowledge of the follower problem can be extracted from the leader optimiser, reducing the number of evaluations of the follower problem. Furthermore, an optimisation-based elite retention mechanism is proposed to keep track of elites and avoid incorrect solutions. Sinha et al. [32] proposed an evolutionary algorithm that uses approximate functions to address BLPPs; for offspring generated by evolutionary operators, the approximation method reduces the number of evaluations of the follower objective function. Using approximate Karush-Kuhn-Tucker conditions, Sinha et al. [33, 34] transformed the BLPP into a single-level optimisation problem and then applied an evolutionary algorithm embedded with the idea of neighbourhood measurements to the transformed model. Sinha et al. [35] presented an evolutionary optimisation algorithm (BLEAQ-2) that fits an extreme-value mapping using the relationship between the leader and follower variables. Islam et al. [36] introduced an evolutionary algorithm that employs three types of surrogate models to approximate the optimal solution of the follower problem. Based on the linear programming optimality conditions, Li [37] proposed a genetic algorithm for solving linear BLPPs, in which the follower problem is adopted as the search object of the evolutionary algorithm and the follower variables in the leader problem are replaced by possible solution functions; this process locally optimises the leader variables.

Research motivation

In the field of engineering and economic management, optimization models involving different decision-making levels often appear; such problems are called multi-level optimization problems. Since the decision-making process is hierarchical, they are also called hierarchical optimization problems. BLPP is a typical representative of multi-level hierarchical optimization problems and has become an important research field due to its extensive practical application background and algorithmic challenges. Unlike multi-objective optimization, the decision makers of a BLPP are at two different levels, and this hierarchical structure often leads to problems that are non-convex and non-differentiable. Based on these characteristics, it is often difficult for traditional gradient-based optimization algorithms to find the global optimal solution of a BLPP. Evolutionary algorithms are increasingly used to solve BLPPs because they possess global convergence characteristics and impose no requirement that the functions be convex or differentiable.

In this paper, driven by the correlation coefficient and surrogate model techniques, an evolutionary algorithm (TCEA) is proposed to solve complex nonlinear BLPPs. The algorithm has the following characteristics. First, the isodata clustering method [38] is used to group the initial population; the correlation coefficients of the leader and follower objective functions in each group are then determined based on the ranks of the leader (follower) objective function, after which some offspring points in each group are selected and updated based on the group's correlation coefficient value. Second, for the offspring individuals produced by the crossover and mutation operators, the surrogate model is used to approximate the solution of the follower programming problem, thereby reducing the number of evaluations of the follower problem.

The remainder of this paper is organized as follows. The basic concepts of BLPP are described in the next section. The correlation coefficients and surrogate models are presented in Section 3. A new evolutionary algorithm based on these methods is stated in Section 4. Experimental results and comparisons are provided in Section 5. We conclude in Section 6.

Basic concepts

Some basic definitions for problem (1) are summarized as follows:

  1. Constraint region: S = {(x, y): G(x, y) ≤ 0, g(x, y) ≤ 0};
  2. Follower's feasible set for fixed x: S(x) = {y: g(x, y) ≤ 0};
  3. Projection of S onto the leader's decision space: S(X) = {x: there exists y such that (x, y) ∈ S};
  4. Follower's rational reaction set for each x ∈ S(X): M(x) = {y: y ∈ argmin{f(x, y′): y′ ∈ S(x)}};
  5. Inducible region: IR = {(x, y): (x, y) ∈ S, y ∈ M(x)}.

In terms of the aforementioned definitions, problem (1) can also be written as min{F(x, y): (x, y) ∈ IR}.

In order to ensure that problem (1) is well posed, in the remainder, we always assume that

(A1) S is nonempty and compact;

(A2) For every decision taken by the leader, the follower has some room to react, that is, S(x) ≠ ∅;

(A3) The follower's problem has a unique optimal solution for each fixed x.
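The five sets defined above can be made concrete on a tiny discretised example. The following sketch (problem data invented for illustration) enumerates the constraint region S, the follower's feasible set S(x), the rational reaction set M(x), and the inducible region IR on an integer grid:

```python
# Toy data: G(x, y) = x + y - 3 <= 0, g(x, y) = -y <= 0,
# follower objective f(x, y) = (y - x)**2, over integer points 0..3.

points = [(x, y) for x in range(4) for y in range(4)]

# Constraint region S: all points satisfying both levels' constraints.
S = [(x, y) for (x, y) in points if x + y - 3 <= 0 and -y <= 0]

def S_x(x):
    """Follower's feasible set for fixed x."""
    return [y for (xx, y) in S if xx == x]

def M(x):
    """Follower's rational reaction set: minimisers of f(x, .) over S(x)."""
    ys = S_x(x)
    best = min((y - x) ** 2 for y in ys)
    return [y for y in ys if (y - x) ** 2 == best]

# Inducible region IR: feasible points whose y is a rational reaction.
IR = [(x, y) for (x, y) in S if y in M(x)]
print(IR)  # -> [(0, 0), (1, 1), (2, 1), (3, 0)]
```

Note how (2, 1) lies in IR even though the follower's unconstrained optimum y = 2 is cut off by the constraint x + y ≤ 3; the bilevel problem then reduces to minimising F over IR, exactly as in the single-level reformulation above.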

Main improvement schemes

Correlation coefficients

The challenge in solving BLPPs is that evaluating the follower programming problem involves a large amount of calculation. Therefore, to reduce the number of follower optimisations, we update only part of the population points by exploiting the relationship between the objective functions of the leader and follower (called the correlation coefficient). The correlation coefficient is defined as follows:

Given z points, we acquire the leader and follower objective function values F(xi, yi) and f(xi, yi), i = 1, 2, …, z. Then, we sort each set of objective values, and the resulting sequence numbers (ranks) of point i under F and f are denoted r_i^F and r_i^f, i = 1, 2, …, z, respectively.

The sequence number difference between the sorted leader and follower objectives is obtained as follows:

d_i = r_i^F − r_i^f, i = 1, 2, …, z. (2)

The correlation coefficient is defined as follows:

ρ = 1 − 6 Σ_{i=1}^{z} d_i² / (z(z² − 1)). (3)

The larger the value of ρ, the more similar the changing trends of the leader's and follower's objective functions; conversely, the smaller the value, the more different the trends. In particular, if ρ = 1, the objective functions of the leader and follower exhibit exactly the same trend, and when ρ = −1, they exhibit opposite trends. For example, take z = 5 and set

F(x1, y1) = 2.5, F(x2, y2) = 2.3, F(x3, y3) = 1.4, F(x4, y4) = 1.6, F(x5, y5) = 5.8.

After sorting:

F(x3, y3) = 1.4, F(x4, y4) = 1.6, F(x2, y2) = 2.3, F(x1, y1) = 2.5, F(x5, y5) = 5.8.

The corresponding ranks are r^F = (4, 3, 1, 2, 5).

And given

f(x1, y1) = 5.5, f(x2, y2) = 2.7, f(x3, y3) = 1.6, f(x4, y4) = 3.8, f(x5, y5) = 6.6.

After sorting:

f(x3, y3) = 1.6, f(x2, y2) = 2.7, f(x4, y4) = 3.8, f(x1, y1) = 5.5, f(x5, y5) = 6.6.

The corresponding ranks are r^f = (4, 2, 1, 3, 5).

Then d = (0, 1, 0, −1, 0), Σ d_i² = 2, and the correlation coefficient is ρ = 1 − 6 × 2/(5 × (5² − 1)) = 0.9.

Since ρ = 0.9 is close to 1, the changing trends of the leader's and follower's objective functions are quite similar.
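The worked example can be verified in a few lines. Here we assume a Spearman-style rank formula, ρ = 1 − 6Σd_i²/(z(z² − 1)), which is consistent with the stated properties (ρ ranges over [−1, 1], equals 1 for identical orderings and −1 for opposite orderings):

```python
# Rank-based correlation coefficient for the worked example above,
# assuming the Spearman-style formula rho = 1 - 6*sum(d_i^2)/(z*(z^2 - 1)).

def rank(values):
    """1-based rank of each value (smaller value -> smaller rank)."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0] * len(values)
    for pos, i in enumerate(order):
        r[i] = pos + 1
    return r

def correlation(F, f):
    z = len(F)
    rF, rf = rank(F), rank(f)
    d2 = sum((a - b) ** 2 for a, b in zip(rF, rf))
    return 1 - 6 * d2 / (z * (z * z - 1))

F = [2.5, 2.3, 1.4, 1.6, 5.8]  # leader objective values from the example
f = [5.5, 2.7, 1.6, 3.8, 6.6]  # follower objective values from the example
print(correlation(F, f))  # -> 0.9
```

This reproduces ρ = 0.9 for the five example points, matching the "similar trends" interpretation in the text.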

Surrogate models

In a BLPP, the procedure of finding a feasible solution results in a significant amount of computation, particularly when the problem is large. Moreover, the optimal solutions of the follower's problem are determined by the leader's variables; that is, the optimal solution of the follower problem is a function of the leader's variables. However, this function is often implicit and cannot be obtained analytically. In the proposed approach, we use polynomial fitting as the surrogate model [39] to estimate the optimal solutions of the follower's problems.

Polynomial fitting performs well in approximating unknown functions and can efficiently decrease the number of computations of the follower problems. Notably, for the fitting points, each follower's variable value must be optimal when the leader's components are fixed. In the proposed algorithm, the polynomial fitting is generated as follows. First, an initial population of N points xi, i = 1, 2, …, N is generated, and the optimal solutions of the follower problem are denoted yi, i = 1, 2, …, N; thus N point pairs (xi, yi) can be obtained. These point pairs are used as fitting nodes to generate a polynomial curve

yj(x) = a0 + a1 x + a2 x² + … + ak x^k, (4)

i.e., each yj, j = 1, …, m, is a function of x, and y(x) = (y1, y2, …, ym), (5)

where k is the highest degree of the polynomial and a0, a1, a2, …, ak are the undetermined coefficients, calculated by least squares over the fitting nodes:

min over a0, …, ak of Σ_{i=1}^{N} (yj(xi) − yij)². (6)

According to the above-mentioned method, we can obtain the approximate optimal solutions to the follower problem.
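A minimal 1-D sketch of the surrogate idea follows: fit a polynomial (here of degree 1 for brevity; the paper uses degree k) through point pairs (xi, yi), where yi is the follower's optimal response to xi, then use the fitted curve to predict responses for new leader points without re-optimising the follower. The follower response y*(x) = 2x + 1 is a made-up example, not from the paper.

```python
# Degree-1 least-squares surrogate for the follower's optimal response.

def fit_line(xs, ys):
    """Least-squares fit y = a0 + a1*x via the closed-form normal equations."""
    n = len(xs)
    sx, sy = sum(xs), sum(ys)
    sxx = sum(x * x for x in xs)
    sxy = sum(x * y for x, y in zip(xs, ys))
    a1 = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    a0 = (sy - a1 * sx) / n
    return a0, a1

xs = [0.0, 1.0, 2.0, 3.0]
ys = [2 * x + 1 for x in xs]    # "optimal" follower responses (assumed)
a0, a1 = fit_line(xs, ys)
y_pred = a0 + a1 * 1.5          # surrogate prediction at a new leader point
print(round(a0, 6), round(a1, 6), round(y_pred, 6))  # -> 1.0 2.0 4.0
```

Once fitted, each surrogate evaluation is a cheap polynomial evaluation instead of a full optimisation of the follower's problem.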

Proposed algorithm

In this manuscript, an evolutionary algorithm based on surrogate models and correlation coefficients, denoted by TCEA, is developed to solve BLPP. Fig 1 gives the flowchart of TCEA.

The detailed procedure of the proposed algorithm can be described as follows:

Step 1 (Initial population)

The idea of uniform design [40] is adopted to produce N points xi, i = 1, …, N, resulting in an initial population pop(0) of size N. Set gen = 0 and D = ∅.

Step 2 (Fitness evaluation)

For each xi, solve the follower problem and obtain the optimal solution yi, i = 1, …, N. These points are put into D. The values of the leader objective are F(xi, yi), i = 1, …, N. Construct the polynomial fitting (surrogate model) as in Section 3.2. Use the isodata method to divide the generated points into p groups, denoted I1, I2, …, Ip. Then apply the correlation coefficient method of Section 3.1 to acquire the value of ρ in each group, denoted ρ1, ρ2, …, ρp.
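ISODATA itself is not reproduced here; as a stand-in sketch of the grouping in Step 2, the following runs a few iterations of plain k-means on 1-D values (both the sample values and the substitution of k-means for ISODATA are assumptions for illustration; ISODATA additionally splits and merges clusters adaptively).

```python
# Simple 1-D k-means as a stand-in for the ISODATA grouping of Step 2.

def kmeans_1d(values, centers, iters=10):
    """Assign each value to its nearest center, then recompute centers."""
    groups = [[] for _ in centers]
    for _ in range(iters):
        groups = [[] for _ in centers]
        for v in values:
            j = min(range(len(centers)), key=lambda c: abs(v - centers[c]))
            groups[j].append(v)
        # Recompute each center as its group's mean (keep old center if empty).
        centers = [sum(g) / len(g) if g else centers[j]
                   for j, g in enumerate(groups)]
    return centers, groups

centers, groups = kmeans_1d([1.0, 1.2, 0.9, 8.0, 8.5, 7.9], [0.0, 10.0])
print(centers, [len(g) for g in groups])
```

In TCEA each resulting group I1, …, Ip then gets its own correlation coefficient ρτ, computed from the ranks of the leader and follower objectives of the group's members.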

Step 3 (Crossover)

For each crossover parent individual xi, take a best individual, and perform the following crossover operator using the spherical search method:

Set the search radius as follows, where α ∈ (0, 1) is a constant called the shrinkage rate of the radius: (7) Here, ui and li are the upper and lower bounds of xi. Take uniformly distributed values θ1, θ2, …, θJ ∈ [0, 2π] and β1, β2, …, βJ ∈ (−π/2, π/2), then use the spherical search method to generate the crossover offspring as follows (n is the dimension of the leader variable): (8) (9)
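The exact form of Eqs (7)-(9) is not reproduced above; the following is one plausible reading, sketched for a 3-dimensional leader variable: offspring are sampled on a sphere of radius r = α·‖u − l‖ around the best individual, using angles θ ∈ [0, 2π) and β ∈ (−π/2, π/2) as spherical coordinates. The function name, the clipping step, and the restriction to 3 dimensions are all assumptions for illustration.

```python
import math
import random

def spherical_crossover(x_best, lower, upper, alpha=0.5, rng=random):
    """Sample one offspring on a sphere of shrinking radius around x_best."""
    r = alpha * math.dist(upper, lower)            # shrinking search radius
    theta = rng.uniform(0.0, 2.0 * math.pi)
    beta = rng.uniform(-math.pi / 2, math.pi / 2)
    direction = [math.cos(beta) * math.cos(theta),  # unit vector in
                 math.cos(beta) * math.sin(theta),  # spherical coordinates
                 math.sin(beta)]
    child = [xb + r * d for xb, d in zip(x_best, direction)]
    # Clip the offspring back into the box [lower, upper].
    return [min(max(c, lo), up) for c, lo, up in zip(child, lower, upper)]

random.seed(0)
child = spherical_crossover([0.0, 0.0, 0.0], [-5.0] * 3, [5.0] * 3)
print(child)
```

Sampling on a sphere around the best individual, with a radius that shrinks via α, gives the diversification effect the paper attributes to this operator: early offspring explore widely, later ones concentrate near the incumbent.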

Step 4 (Mutation)

Gaussian mutation is adopted. Suppose that an individual is chosen for mutation; then its offspring is generated as follows: (10)
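Eq (10) is not reproduced above. A common form of Gaussian mutation, used here as an assumed sketch, perturbs each component of the parent with zero-mean Gaussian noise and clips the result back into the variable bounds:

```python
import random

def gaussian_mutation(x, sigma, lower, upper, rng=random):
    """Add N(0, sigma^2) noise to each component, then clip to [lower, upper]."""
    child = [xi + rng.gauss(0.0, sigma) for xi in x]
    return [min(max(c, lo), up) for c, lo, up in zip(child, lower, upper)]

random.seed(1)
print(gaussian_mutation([1.0, 2.0], 0.1, [0.0, 0.0], [5.0, 5.0]))
```

The standard deviation sigma plays the usual role of a mutation step size; the paper's exact parameterisation in Eq (10) may differ.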

Step 5 (Offspring population pop′(gen))

For the offspring set (xo1, xo2, …, xoλ) generated through the crossover and mutation operations, a surrogate model (the polynomial fitting) is used to obtain approximate solutions of the follower's program. We update only some of the offspring points, based on the values of ρ1, ρ2, …, ρp in each group, as follows:

Case 1: If in group τ, τ = 1, 2, …, p, the value of ρτ is greater than a given threshold μ > 0 (chosen according to experimental results), that is, the value is near 1, then the leader's and follower's objectives have the same changing trend. If the leader objective function at point (xoi, yoi) satisfies F(xoi, yoi) < F* (a predetermined threshold), the leader's objective may become smaller when the follower's objective is optimised; as a result, point (xoi, yoi) is updated, that is, the follower problem is solved and the solution to the follower's problem is updated. If F(xoi, yoi) ≥ F* at point (xoi, yoi), the point is unpromising even if the follower's solution is updated; as a result, point (xoi, yoi) is not updated.

Case 2: If in group τ, τ = 1, 2, …, p, the value of ρτ is less than −μ (chosen according to experimental results), i.e. the value is near −1, then the leader and follower objective functions have opposite trends. In this case, the leader's objective may become worse when the follower's objective is minimised. However, it is expected that the worsened objective values are still better than the predetermined threshold F* (a bilevel feasible objective value); this means that points with small objective values have the potential to be refined. The smallest objective value is denoted Fbest (possibly infeasible), and points with objective values in [Fbest, F*] should be further updated in a probabilistic sense. At point (xoi, yoi), if the objective satisfies F(xoi, yoi) < F* and F(xoi, yoi) ≥ Fbest, then the point is updated with probability prob(oi). (11)

Obviously, 0 ≤ prob(oi) ≤ 1.

Then, an offspring set pop′(gen) with size η, η ≤ λ, is obtained, and the values of the leader objective functions are F(xoi, yoi), i = 1, 2, …, η. Put these η solutions into archive set D.
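Since Eq (11) is not reproduced above, the following sketch assumes a linear ramp prob(oi) = (F* − F(xoi, yoi))/(F* − Fbest), which is consistent with the surrounding text (probability 1 at Fbest, falling to 0 at F*); the actual formula in the paper may differ.

```python
import random

def update_probability(F_val, F_star, F_best):
    """Assumed Eq (11): linear ramp from 1 at F_best down to 0 at F_star."""
    return (F_star - F_val) / (F_star - F_best)

def should_update(F_val, F_star, F_best, rng=random):
    """Case-2 decision: re-solve the follower problem for this offspring?"""
    if F_val >= F_star:          # unpromising point: never update
        return False
    return rng.random() < update_probability(F_val, F_star, F_best)

print(update_probability(2.0, 4.0, 1.0))  # -> 0.6666666666666666
```

Under this reading, offspring whose leader objective is close to Fbest are almost always refined by a true follower solve, while those near the threshold F* mostly keep their cheap surrogate solution.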

Step 6 (Selection)

Select the best N individuals from set pop(gen)⋃D to form the next generation of population pop(gen + 1);

Step 7 (Termination condition)

If the stopping criterion is satisfied, then stop and output the best one in set D; otherwise, set gen = gen + 1, go to Step 3.

Simulation results

Test examples

To demonstrate the feasibility and efficiency of the proposed algorithm TCEA, it was compared with three existing algorithms [23, 35, 31] developed for the BLPP. Furthermore, we tested TCEA on six examples taken from the literature [23]. The six examples are presented as follows:

Example F01

Example F02

Example F03

Example F04

Example F05

Example F06

Parameter settings

In order to compare with the experimental data in the literature, the selected parameters are consistent with those in [23]. When the leader’s and follower’s variables are 5-dimensional, the parameters are chosen as follows:

Leader's variable dimension: n = 2; Follower's variable dimension: m = 3;

Population size: N = 25; Maximum number of generations: Gmax = 50;

Mutation probability: Pm = 0.1; Crossover probability: Pc = 0.8;

Number of runs: Numrun = 10; k = 1.

When the leader’s and follower’s variables are 10-dimensional, the parameters are chosen as follows:

Leader's variable dimension: n = 5; Follower's variable dimension: m = 5;

Population size: N = 50; Maximum number of generations: Gmax = 50;

Mutation probability: Pm = 0.1; Crossover probability: Pc = 0.8;

Number of runs: Numrun = 10; k = 2.

When the leader’s and follower’s variables are 20-dimensional, the parameters are chosen as follows:

Leader's variable dimension: n = 10; Follower's variable dimension: m = 10;

Population size: N = 100; Maximum number of generations: Gmax = 100;

Mutation probability: Pm = 0.1; Crossover probability: Pc = 0.8;

Number of runs: Numrun = 10; k = 5.

The optimal solution obtained by the algorithm TCEA is recorded as (x*, y*); the leader's and follower's objective function values are denoted F(x*, y*) and f(x*, y*), respectively.

Result analysis

We executed the algorithm on a computer (Intel(R) Core(TM) i5-8250U CPU @ 1.60 GHz, 1.80 GHz) using MATLAB. For all six examples, TCEA was compared with the three algorithms in the literature [23, 31, 35]. Tables 1-3 show the average optimal results over 10 independent runs for all examples, computed for dimensions 5, 10, and 20. To facilitate comparison, TCEA uses the same stopping criterion as in the literature: the number of evaluations of the leader's objective function. When the variable dimensions are 5 and 10, the numbers of evaluations of the leader's objective function are 2500 and 3500, respectively. The optimal solutions obtained are listed in Tables 1-3.

Table 1. Comparison between the results obtained by TCEA and the real objective values in the case of 5 dimensions.

https://doi.org/10.1371/journal.pone.0273564.t001

Table 2. Comparison between the results obtained by TCEA and the real objective values in the case of 10 dimensions.

https://doi.org/10.1371/journal.pone.0273564.t002

Table 3. Comparison between the results obtained by TCEA and the real objective values in the case of 20 dimensions.

https://doi.org/10.1371/journal.pone.0273564.t003

When the problem dimension is 5, we can see from Table 1 that TCEA finds the same optimal value as the analytical solution in all test cases. When the variable dimension is 10, as can be seen from Table 2, TCEA obtains the same optimal value as the analytical solution for cases F02, F04, and F05; in cases F01 and F06, TCEA attains the optimal value with a small error relative to the analytical solution. When the variable dimension is 20, in order to shorten the search time we shrank the search space to half of the original, with the aim of testing the computational savings of the algorithm on medium-scale problems within a short period. From Table 3, as far as the leader's objective value is concerned, in examples F01, F03, F04, and F05 the optimal leader objective value can be found with a small error relative to the analytical solution. However, for examples F02 and F06, the error relative to the analytical solution is relatively large, which means that TCEA requires more generations.

In addition, to illustrate the performance of the proposed algorithm, the success rate of runs, a performance measurement index, is introduced. If the difference between the leader's objective value obtained by TCEA and the known analytic solution in one run is less than ε1 (ε1 = 1 × 10−2), the run is considered successful. The value of ε1 is consistent with the literature [23], and the success rate is defined as the number of successful runs divided by the total number of independent runs.

Table 4 shows the results of 31 independent runs. We recorded the success rate of finding the optimal solutions of the leader objective functions. It can be seen from Table 4 that when the dimension is 5, the results for examples F04 and F06 are better than those of the other three algorithms, and when the dimension is 10, the success rate for example F06 is also superior to the algorithms in the literature. In addition, TCEA was also tested in the 20-dimensional case, where it is successful on all tested cases except F02 and F06.

Table 4. Success rates of QBCA-2, BLCMAES, BLEAQ-2 and TCEA on 5, 10 and 20-dimensional test problems.

https://doi.org/10.1371/journal.pone.0273564.t004

The symbols “+”, “–” and “≈” indicate that the computational result is better than, worse than, and almost equal to that obtained by our algorithm, respectively. The best results are highlighted in bold in Table 4.

Tables 5-7 show the median and standard deviation (Std) over 10 runs of TCEA when the variable dimensions are 5, 10, and 20. Meanwhile, the computational results are compared with those in the literature [23] for the 5-dimensional and 10-dimensional cases. UL and LL accuracy statistics stand for the objective values obtained at the leader's and follower's levels, respectively.

Table 5. UL and LL accuracy statistics from 31 independent runs by QBCA2, BLCMAES, BLEAQ-2 and TCEA on 5-dimensional test problems.

https://doi.org/10.1371/journal.pone.0273564.t005

Table 6. UL and LL accuracy statistics from 31 independent runs by QBCA2, BLCMAES, BLEAQ-2 and TCEA on 10-dimensional test problems.

https://doi.org/10.1371/journal.pone.0273564.t006

Table 7. UL and LL accuracy statistics from 31 independent runs by TCEA on 20-dimensional test problems.

https://doi.org/10.1371/journal.pone.0273564.t007

To facilitate comparison and illustrate the effectiveness of TCEA, for the 5- and 10-dimensional cases we multiplied both the median and standard deviation (Std) by 1 × 10^5 in Tables 5 and 6, and stipulated that generated values greater than or equal to 100 are assigned the value 100. Figs 2 and 3 show the histograms corresponding to the median and Std in Table 5 for the 5-dimensional case, and Figs 4 and 5 display the histograms corresponding to the median and Std in Table 6 for the 10-dimensional case, respectively.

Fig 2. Histogram of the median values on 5-dimensional problems.

https://doi.org/10.1371/journal.pone.0273564.g002

Fig 3. Histogram of the Std values on 5-dimensional problems.

https://doi.org/10.1371/journal.pone.0273564.g003

Fig 4. Histogram of the median values on 10-dimensional problems.

https://doi.org/10.1371/journal.pone.0273564.g004

Fig 5. Histogram of the Std values on 10-dimensional problems.

https://doi.org/10.1371/journal.pone.0273564.g005

It can be seen from Fig 2 that in the 5-dimensional case, the medians corresponding to the leader's and follower's objective values are better than those of the other methods on F01-03 and F05. Meanwhile, as can be seen from Fig 3, the Std of the leader and follower objective function values of our algorithm on F03 and F05 is equivalent to that of BLCMAES but superior to the other two algorithms. On problem F06, our algorithm is better than the other methods, and the advantage is obvious on problems F03 and F05.

As seen in Fig 4, for the problems with 10-dimensional variables, the medians of the leader's and follower's objectives obtained by TCEA are better than those of the other methods on F02, F03, and F06. As shown in Fig 5, the Std values of the leader and follower objective functions obtained by TCEA are better than those of the compared algorithms on F01, F02, F04 and F06.

Solving the BLPP is quite difficult because the hierarchy of the problem must be taken into account, so the amount of computation required is very large. In the proposed algorithm, both an approximate computation scheme and correlation coefficients are adopted to reduce the computational cost caused by the follower's optimization. To illustrate the efficiency of the proposed algorithm on the above computational examples, we executed it with two follower-solution strategies: one updates all offspring exactly, whereas the other updates only a part of them. Both versions stop once the optimal solutions are found. For comparison, the CPU times for the 5-, 10- and 20-dimensional cases are recorded in Table 8.

Table 8. Comparison of CPU time on 5, 10 and 20-dimensional test problems.

https://doi.org/10.1371/journal.pone.0273564.t008

OMCPU denotes the CPU time needed by the method that updates all offspring, whereas CPU denotes the computational time of the method that updates only a part of the individuals. Table 8 reveals that the proposed algorithm saves computational cost effectively on every example, which indicates that the proposed approximate scheme is efficient.
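The saving reported in Table 8 comes from solving the follower's problem exactly only for part of the offspring. A minimal Python sketch of this selective-update idea, with placeholder follower solvers and a hypothetical correlation threshold (none of these names or values come from the paper):

```python
import time

def lower_level_solve(x):
    # Stand-in for the expensive exact follower optimization.
    time.sleep(0.001)  # mimic the cost of a full lower-level search
    return x * x       # hypothetical follower response

def surrogate_predict(x):
    # Cheap surrogate approximation of the follower's optimal response.
    return x * x

def update_offspring(offspring, corr, threshold=0.8, update_all=False):
    """Return a follower response for each offspring individual.

    When update_all is False, only offspring whose group correlation
    coefficient falls below `threshold` (where the surrogate is assumed
    unreliable) trigger an exact lower-level solve; the others reuse
    the surrogate prediction. The threshold value is illustrative.
    """
    responses = []
    for x, c in zip(offspring, corr):
        if update_all or c < threshold:
            responses.append(lower_level_solve(x))  # exact, expensive
        else:
            responses.append(surrogate_predict(x))  # approximate, cheap
    return responses

# Timing the two strategies, analogous to the OMCPU vs CPU comparison
offspring = [0.1 * i for i in range(50)]
corr = [0.9 if i % 2 == 0 else 0.3 for i in range(50)]
t0 = time.perf_counter()
update_offspring(offspring, corr, update_all=True)
omcpu = time.perf_counter() - t0
t0 = time.perf_counter()
update_offspring(offspring, corr, update_all=False)
cpu = time.perf_counter() - t0
print(f"update all: {omcpu:.3f}s, partial update: {cpu:.3f}s")
```

With half of the offspring screened out by the correlation test, the partial-update run performs half as many exact solves, which is the source of the CPU-time gap in Table 8.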

Conclusion

BLPP is one of the hardest optimization models because its hierarchical structure compounds the computational complexity, and solving nonlinear BLPPs is more challenging than solving linear ones. In this paper, we study a class of nonlinear BLPPs that pose difficulties in both the accuracy of the solutions and the amount of computation. Three efficient techniques are embedded in the proposed algorithm to improve solution accuracy and reduce computational cost. The first is the correlation-coefficient method used to select the offspring obtained through crossover and mutation; this technique reduces the computational burden of solving the follower's problem and thus saves a substantial amount of CPU time. The second is the surrogate model, which efficiently reduces the cost of obtaining bilevel feasible solutions. The third is the designed crossover and mutation operators, which steer the offspring in a better direction and thereby improve the accuracy of the solutions. The simulation results on six computational examples show the efficiency of the proposed algorithm.

In future research, the proposed algorithm will be adjusted and applied to solve real-world problems that can be modeled as BLPPs.

Acknowledgments

We thank the editors and the anonymous reviewers for their professional and valuable suggestions.

References

  1. Zhu XD, Guo PJ (2019) Bilevel programming approaches to production planning for multiple products with short life cycles. 4OR Quarterly Journal of the Belgian, French and Italian Operations Research Societies 2:1–25.
  2. Nasrolahpour E, Kazempour J, Zareipour H, et al. (2018) A bilevel model for participation of a storage system in energy and reserve markets. IEEE Transactions on Sustainable Energy 9(2):582–598.
  3. Ahmad I, Zhang F, Liu JG, et al. (2018) A linear bilevel multi-objective program for optimal allocation of water resources. PLOS ONE 13(2):1–25.
  4. Li DW, Song YC, Chen Q (2020) Bilevel programming for traffic signal coordinated control considering pedestrian crossing. Journal of Advanced Transportation 2020(5):1–18.
  5. Baskan O (2019) A multiobjective bilevel programming model for environmentally friendly traffic signal timings. Advances in Civil Engineering 1:1–13.
  6. Jensen TV, Kazempour J, Pinson P (2018) Cost-optimal ATCs in zonal electricity markets. IEEE Transactions on Power Systems 33(4): 3624–3633.
  7. Dupin R, Michiorri A, Kariniotakis G (2019) Optimal dynamic line rating forecasts selection based on ampacity probabilistic forecasting and network operators' risk aversion. IEEE Transactions on Power Systems 34(4): 2836–2845.
  8. Bostian MB, Barnhart BL, Kurkalova LA, et al. (2021) Bilevel optimization of conservation practices for agricultural production. Journal of Cleaner Production 300(1): 1–16.
  9. Kostarelou E, Kozanidis G (2021) Bilevel programming solution algorithms for optimal price-bidding of energy producers in multi-period day-ahead electricity markets with non-convexities. Optimization and Engineering 22(1): 449–484.
  10. Grimm V, Orlinskaya G, Schewe L, et al. (2021) Optimal design of retailer-prosumer electricity tariffs using bilevel optimization. Omega 102(1): 1–17.
  11. Milicka P, Sucha P, Vanhoucke M, et al. (2021) The bilevel optimisation of a multi-agent project scheduling and staffing problem. European Journal of Operational Research 1:1–34.
  12. Wang HD, Jin YC (2020) A random forest-assisted evolutionary algorithm for data-driven constrained multiobjective combinatorial optimization of trauma systems. IEEE Transactions on Cybernetics 50(2): 536–549. pmid:30273180
  13. Ting CK, Wang TC, Liaw RT, et al. (2017) Genetic algorithm with a structure-based representation for genetic-fuzzy data mining. Soft Computing 21(11): 2871–2882.
  14. Dempe S (1987) A simple algorithm for the linear bilevel programming problem. Optimization 18(1): 373–385.
  15. Liu SN, Wang MZ, Kong N, et al. (2020) An enhanced branch-and-bound algorithm for bilevel integer linear programming. European Journal of Operational Research 291(1): 661–679.
  16. Susanne F, Patrick M, Maria P (2017) Optimality conditions for the simple convex bilevel programming problem in Banach spaces. Optimization 67(4): 1–32.
  17. White DJ, Anandalingam G (1993) A penalty function approach for solving bilevel linear programs. Journal of Global Optimization 3(4): 397–419.
  18. Tuo Q, Lan HY (2019) New exact penalty function methods with ε-approximation and perturbation convergence for solving nonlinear bilevel programming problems. Journal of Computational Analysis and Applications 26(3): 449–458.
  19. Liu J, Zhang T, Fan YX, et al. (2018) An objective penalty method for optimistic bilevel programming problems. Journal of the Operations Research Society of China 8(1): 177–187.
  20. Mathieu R, Pittard L, Anandalingam G (1994) A robust method for linear and nonlinear optimization based on genetic algorithm. RAIRO Recherche Opérationnelle 28(1): 1–21.
  21. Wang YP, Jiao YC, Li H (2005) An evolutionary algorithm for solving nonlinear bilevel programming based on a new constraint-handling scheme. IEEE Transactions on Systems Man and Cybernetics: Applications and Reviews 35(2): 221–232.
  22. Li HC (2015) A genetic algorithm using a finite search space for solving nonlinear/linear fractional bilevel programming problems. Ann Oper Res 235(1): 543–558.
  23. Jesús AM, Efrén MM, Porfirio TH (2015) Pseudo-feasible solutions in evolutionary bilevel optimization: test problems and performance assessment. Transactions on Cybernetics 1:1–17.
  24. Li HC, Fang L (2014) Co-evolutionary algorithm: an efficient approach for bilevel programming problems. Engineering Optimization 46(3): 361–376.
  25. Aboelnaga Y, Nasr S (2020) Modified evolutionary algorithm and chaotic search for bilevel programming problems. Symmetry 12(5): 767–796.
  26. Goshu NN, Kassa SM (2020) A systematic sampling evolutionary (SSE) method for stochastic bilevel programming problems. Computers and Operations Research 120:1–14.
  27. Ma LM, Wang GM (2020) A solving algorithm for nonlinear bilevel programing problems based on human evolutionary model. Algorithms 13(10): 260–272.
  28. Ramírez C, Selene M, Vallejo C, et al. (2017) Solving the p-median bilevel problem with order through a hybrid heuristic. Applied Soft Computing 60:73–86.
  29. Abo-Elnaga Y, El-Shorbagy MA (2020) Multi-sine cosine algorithm for solving nonlinear bilevel programming problems. International Journal of Computational Intelligence Systems 13(1): 1–12.
  30. Wang GM, Ma LM (2020) The estimation of particle swarm distribution algorithm with sensitivity analysis for solving nonlinear bilevel programming problems. IEEE Access 1:1–24.
  31. He XY, Zhou YR, Chen ZF (2018) Evolutionary bilevel optimization based on covariance matrix adaptation. IEEE Transactions on Evolutionary Computation 23(2): 258–272.
  32. Sinha A, Malo P, Deb K (2017) Evolutionary algorithm for bilevel optimization using approximations of the lower level optimal solution mapping. European Journal of Operational Research 257(2): 395–411.
  33. Sinha A, Soun T, Deb K (2019) Using Karush-Kuhn-Tucker proximity measure for solving bilevel optimization problems. Swarm and Evolutionary Computation 44(1): 496–510.
  34. Sinha A, Malo P, Deb K (2017) Approximated set-valued mapping approach for handling multi-objective bilevel problems. Computers and Operation Research 1(1): 1–43.
  35. Sinha A, Lu Z, Deb K, et al. (2020) Bilevel optimization based on iterative approximation of multiple mappings. Journal of Heuristics 26(2): 151–185.
  36. Islam MM, Singh HK, Ray T (2017) A surrogate assisted approach for single-objective bilevel optimization. IEEE Transactions on Evolutionary Computation 5(1): 1–16.
  37. Li HC, Wang YP (2010) A genetic algorithm based on optimality conditions for nonlinear bilevel programming problems. Journal of Applied Mathematics and Informatics 28(3): 597–610.
  38. Ball GH, Hall J (1965) A novel method of data analysis and pattern classification.
  39. Wang HD, Jin YC, Sun CL, et al. (2019) Offline data-driven evolutionary optimization using selective surrogate ensembles. IEEE Transactions on Evolutionary Computation 23(2): 203–216.
  40. Fang KT, Bentler WPM (1994) Some applications of number-theoretic methods in statistics. Statistical Science 9(3): 416–428.