
Many-objective African vulture optimization algorithm: A novel approach for many-objective problems

  • Heba Askr,

    Roles Methodology, Software, Writing – original draft

    Affiliations Faculty of Computers and Artificial Intelligence, Information Systems Department, University of Sadat City, ‎Sadat City, Egypt, Scientific Research Group in Egypt (SRGE), Nasr City, Egypt

  • M. A. Farag,

    Roles Methodology, Software, Validation, Writing – original draft

    Affiliations Scientific Research Group in Egypt (SRGE), Nasr City, Egypt, Faculty of Engineering, Department of Basic Engineering Science, Menoufia University, Shebin El-Kom, ‎Egypt

  • Aboul Ella Hassanien ,

    Roles Conceptualization, Formal analysis, Methodology, Writing – review & editing

    aboitcairo@cu.edu.eg

    Affiliations Scientific Research Group in Egypt (SRGE), Nasr City, Egypt, Faculty of Computers and Artificial Intelligence, Cairo University, Giza, Egypt

  • Václav Snášel,

    Roles Project administration, Supervision

    Affiliation Faculty of Electrical Engineering and Computer Science, VŠB-Technical University of Ostrava, Poruba-Ostrava, Czech Republic

  • Tamer Ahmed Farrag

    Roles Software, Validation, Visualization, Writing – original draft

    Affiliation Department of Computer Engineering, MISR Higher Institute for Engineering and Technology, Mansoura, Egypt

Abstract

Several optimization problems can be abstracted into many-objective optimization problems (MaOPs). The key to solving MaOPs is designing an effective algorithm that balances exploration and exploitation. This paper proposes a novel many-objective African vulture optimization algorithm (MaAVOA) that simulates the African vultures’ foraging and navigation behaviours to solve MaOPs. MaAVOA is an updated version of the recently proposed African Vulture Optimization Algorithm (AVOA), adapted to handle MaOPs. A new social leader vulture selection process is introduced and integrated into the proposed model. In addition, an environmental selection mechanism based on an alternative pool is adopted to improve the selection process and maintain diversity for approximating different parts of the whole Pareto Front (PF). The best non-dominated solutions are saved in an external archive based on the Fitness Assignment Method (FAM) during the population evolution. FAM relies on a convergence measure that promotes convergence and a density measure that promotes variety. In addition, a Reproduction of Archive Solutions (RAS) procedure is developed to improve the quality of archived solutions; RAS is designed to help reach the regions of the PF that the vultures easily miss. Two experiments are conducted to verify and validate the suggested MaAVOA’s efficacy. First, MaAVOA is applied to the DTLZ functions, and its performance is compared to that of several popular many-objective algorithms. According to the results, MaAVOA outperforms the competitor algorithms in terms of the inverted generational distance and hypervolume performance measures and shows a beneficial adaptation ability in terms of both convergence and diversity. Statistical tests are also implemented to demonstrate the statistical significance of the suggested algorithm. Second, MaAVOA is applied to two real-life constrained engineering MaOP applications, namely, the series-parallel system and the overspeed protection for gas turbine problems. The experiments show that the suggested algorithm can tackle many-objective real-world applications and provide promising choices for decision-makers.

1. Introduction

MaOPs are optimization problems with more than three objectives that must be optimized simultaneously [1]. Many real-world applications have four or more conflicting objective functions and are mathematically modelled as MaOPs. Some of these applications include automotive engineering, aerospace engineering, the many-objective simplified nurse scheduling problem, the five-objective water resource management problem, the ten-objective general aviation aircraft design problem, the many-objective space trajectory design problem, many-objective software refactoring, the hybrid car controller optimization problem with six objectives, the optimization of three centrifugal design problems with six to nine objectives, the many-objective 0/1 knapsack problem, heuristic learning, the Travelling Salesman Problem (TSP), job shop scheduling, flight control systems, supersonic wing design, the six-objective design of a factory-shed truss [2], big data applications that need sophisticated architectures with inherent capabilities for scaling and optimization [3], NP-hard workflow allocation problems in cloud systems [4], multicore embedded computing systems [5], and, recently, the Internet of Everything (IoE) [6]. The difficulty of MaOPs stems from the increase in problem scale: as the number of objectives grows, the number of non-dominated solutions grows exponentially [1]. Solving MaOPs is harder for several reasons: the high computational cost of PF approximation due to the increased number of points to evaluate, the inability of existing evolutionary multi-objective algorithms to solve MaOPs, and the difficulty of visualizing the PF with more than four objectives [2].

The difficulties that Multi-objective Evolutionary Algorithms (MOEAs) experience in solving MaOPs have raised the demand for the development and deployment of evolutionary algorithms dedicated to MaOPs. MOEAs are not scalable enough and have problems addressing MaOPs. These problems are summarized by [11] as follows: (1) as the number of objective functions grows, a growing proportion of the obtained solutions become non-dominated; (2) as the size of the objective space grows, the conflict between diversity and convergence grows; (3) for computational efficiency, the population size must remain small; (4) computational complexity grows exponentially with the number of objectives (for example, hypervolume calculation); (5) balancing diversity and convergence becomes more complicated; and (6) due to the vast dimensionality, visualizing the Pareto-optimal front is difficult. Because of these challenging issues, MaOPs are more complex and need to be handled using more effective and scalable evolutionary algorithms.

Various ways to solve MaOPs have been proposed as the MOEAs community pays more attention. These approaches can be roughly divided into four categories [2].

1.1. Decomposition-based approaches

These non-Pareto-based methods combine the objectives into a scalar function. The weight vector is a weighted coefficient that represents the relevance of each objective. A MaOP is split into numerous single-objective sub-problems that can be optimized simultaneously using a set of weighting vectors.

Scalarization techniques also balance the diversity and convergence of solutions in the objective space. For dealing with MaOPs, [1] presented a new reference direction-based density estimator, a new FAM, and a new environmental selection algorithm. To increase the diversity of decomposition-based evolutionary algorithms (EAs), [7] adopted a dynamic decomposition technique. Reference vectors were employed by [8] to break down the original MaOP into several single-objective subproblems and to clarify user preferences so as to target a preferred subset of the entire PF. The reference points are automatically selected from the solutions and matched to the PF pattern. As a result, these reference points can provide a diversified range of possibilities for guiding the population to explore new areas.

Recently, [9] suggested an adaptive decomposition-based EA (MaOEA/ADEI) based on environmental information. The environmental information determines the penalty factor of the penalty boundary intersection decomposition and includes population and weight vector distribution information. In addition, a weight vector adaptation approach is employed when dealing with problems involving scaled objectives.

1.2. The indicator-based approach

The value of a performance indicator is used to direct the search process in this approach. Algorithms in this category use the performance indicator instead of fitness to select individuals. For example, [10] introduced a hypervolume estimation algorithm in which the exact Hypervolume Values (HV) were approximated using Monte Carlo simulation and the solutions were ranked using the HV indicator. An indicator-based MOEA with reference point adaptation (AR-MOEA) was presented by [11]. For MaOPs, [12] presented a two-stage R2 indicator-based EA (TS-R2EA). The primary selection is based on an R2 indicator-based achievement scalarizing function; after that, the reference vector guided objective space partition approach is applied as the second selection strategy. The two-stage selection technique yields a good mix of convergence and diversity. In addition, several efficient and effective indicator-based MOEAs [13–15] have been presented in the context of these performance metrics.

1.3. Pareto-dominance approach

This is the most popular class of approaches for MaOPs. In these approaches, solutions with improved Pareto ranks are chosen using dominance-based selection criteria, and a diversity-related method is then used to ensure that the Pareto optimal solutions are distributed evenly. Grid dominance and grid difference were utilized to strengthen the selection pressure in the grid-based many-objective evolutionary algorithm (GrEA) of [16]. To introduce a fuzzy mechanism into Pareto dominance, the authors in [17] employed a continuous function to quantify the degree of non-dominance between two solutions, so that solutions with a higher non-dominance degree can be selected. In addition, a novel dominance relation [18] and a strengthened dominance relation [19] were presented to classify more precisely the best convergent solutions as non-dominated, hence speeding up population convergence. Various other efficient and effective Pareto-dominance strategies [20–22] for solving MaOPs have recently been published.

1.4. Preference-based approach

This category has three types: a priori, interactive, and a posteriori. In the a priori class, the preference information is supplied before the search. In the interactive class, the decision-maker is expected to offer preference information interactively during the search. In the a posteriori class, the preference information is introduced after the search. Several efficient and effective preference-based EA approaches [23–25] have been proposed to solve MaOPs.

The authors in [26] presented a new nature-inspired metaheuristic algorithm called AVOA in 2021, and it has since been used in several real-world engineering applications. AVOA was created to simulate and model African vultures’ foraging behaviour and living habits. Compared to state-of-the-art optimization techniques, AVOA was found to be very promising and powerful. In addition, this technique is substantially faster than comparable algorithms in terms of computational complexity and running time, and it works well in large-scale applications. The population of African vultures is divided into three groups based on their habits. The first group contains the best feasible solution among all vultures, the second group contains the second-best feasible solution, and the final group is made up of the remaining vultures. The rationale for the division is that each group of vultures has a different ability to locate and consume food. The worst vultures are assumed to be the weakest and hungriest, while the best vultures are the strongest and have the most food at present. The strongest and best vultures are the two best solutions in AVOA, while the other vultures try to approach them.

This paper presents a modified version of AVOA, called MaAVOA, to handle MaOPs. AVOA requires the two best vultures to guide the other vultures toward the best solution; a new selection process suited to MaOPs is therefore introduced and integrated into the proposed model. In addition, an environmental selection mechanism based on an alternative pool is adopted to improve the selection pressure and maintain diversity for approximating different parts of the whole PF. An external archive based on the FAM is also set up to keep track of the best non-dominated solutions as the population evolves. The FAM is based on a convergence measure that promotes convergence and a density measure that promotes variety. Furthermore, a RAS procedure is developed to improve the quality of archived solutions; it helps reach the regions of the PF that the vultures easily miss.

The main contributions of this paper are summarized as follows:

  • The proposed MaAVOA is a novel algorithm for solving many-objective problems that achieves promising solutions, promoting diversity and fast convergence.
  • The proposed MaAVOA is compared to five current best-practice algorithms and achieves superior results over them: the unified evolutionary optimization algorithm (U-NSGAIII) [27], the reference-point-based many-objective evolutionary algorithm based on NSGA-II (NSGA-III) [28], the multi-objective evolutionary algorithm based on decomposition (MOEA/D) [29], the constrained two-archive evolutionary algorithm (CTAEA) [30], and the adaptive geometry estimation based MOEA (AGE-MOEA) [31].
  • The performance of the proposed MaAVOA is evaluated using the DTLZ benchmark test suites with the number of objectives ranging from three to fifteen.
  • In addition, MaAVOA is applied to two real-life engineering applications to validate its ability to tackle many-objective real-world problems.

The rest of the paper is organized as follows. The MaOPs and AVOA are presented in Section 2. The proposed algorithm’s framework is illustrated in Section 3. The implementation methodology of the proposed framework is presented and the results are discussed in Section 4. In Section 5, two engineering applications are introduced. The paper’s conclusions and future research directions are presented in Section 6.

2. Preliminaries

2.1. Many-objective optimization problem

Many-objective optimization problems (MaOPs) can be stated as follows:

minimize F(x) = (f1(x), f2(x), …, fm(x))^T  (1)

subject to x ∈ Ω,

where F: Ω→Rm is a vector of m conflicting objective functions (m≥4), Ω⊆Rn is the n-dimensional decision space, x = (x1, x2,…,xn)∈Ω is a vector of n decision variables (a candidate solution), and Rm is called the objective space [27].

Definition 1. (Pareto-dominance). A solution xp is said to dominate another solution xq (denoted xp ≺ xq) if and only if

fi(xp) ≤ fi(xq) for all i ∈ {1, 2,…,m}, and fj(xp) < fj(xq) for at least one j ∈ {1, 2,…,m}.  (2)

Definition 2. (Pareto-optimal). A solution xp ∈ Ω is Pareto optimal if and only if there is no other solution xq ∈ Ω such that xq ≺ xp.

Definition 3. (Pareto-optimal set (POS)). The set of non-dominated solutions POS includes all solutions that balance the objectives in a unique and optimal manner:

POS = {x ∈ Ω | ¬∃ x′ ∈ Ω: x′ ≺ x}.  (3)

Definition 4. (Pareto-optimal front (POF)). The POF is the set of objective vectors corresponding to the Pareto-optimal solutions in the POS:

POF = {F(x) | x ∈ POS}.  (4)

The dimension of the POF is expected to be m−1, and the POF becomes more complex as the number of objective functions increases, which is the main challenge of many-objective optimization problems [7].
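To make Definitions 1–4 concrete, the following minimal Python sketch (assuming all objectives are minimized and solutions are represented as NumPy objective vectors; the array values are purely illustrative) checks Pareto dominance and extracts the non-dominated subset of a small population.

```python
import numpy as np

def dominates(fp, fq):
    """True if objective vector fp Pareto-dominates fq (all objectives minimized)."""
    fp, fq = np.asarray(fp), np.asarray(fq)
    return bool(np.all(fp <= fq) and np.any(fp < fq))

def non_dominated(F):
    """Indices of the non-dominated rows of the objective matrix F (a POF sample)."""
    F = np.asarray(F)
    return [i for i in range(len(F))
            if not any(dominates(F[j], F[i]) for j in range(len(F)) if j != i)]

# toy example with m = 3 objectives
F = np.array([[0.2, 0.5, 0.3],
              [0.4, 0.6, 0.4],   # dominated by the first row
              [0.1, 0.7, 0.2]])
print(non_dominated(F))          # -> [0, 2]
```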

2.2. Standard African vulture’s optimization algorithm

The authors in [26] introduced AVOA, a novel nature-inspired metaheuristic algorithm that has been used to solve several engineering applications [32]. AVOA was developed by simulating and modelling African vultures’ foraging behaviour and living habits.

To simulate the biological life of the vultures, AVOA makes four assumptions:

  • The African vulture population contains NpopF vultures. Each vulture’s position is n-dimensional, and the maximum number of iterations is MaxIter. The position of each vulture i (1≤i≤NpopF) is tracked at every iteration t (1≤t≤MaxIter).
  • The population of African vultures is classified into three groups based on their life habits. The first group contains the best feasible solution among all vultures, the second group contains the second-best feasible solution, and the final group is made up of the remaining vultures.
  • The rationale for this division is that each group of vultures has a different ability to locate and consume food.
  • The worst vultures are assumed to be the weakest and hungriest, while the best vultures are the strongest and have the most food. The strongest and best vultures in AVOA are the two best solutions, and the other vultures aim to approach them.

3. The proposed many-objective African vulture optimization algorithm (MaAVOA)

This paper presents a modified version of AVOA, called MaAVOA, to handle MaOPs. Initially, NpopF vultures are randomly generated in the decision space using a uniform distribution. The vultures are then evaluated according to the fitness functions, and the non-dominated solutions are identified using the Pareto dominance of NSGA-III [31] and stored in the external archive (ARC). The ARC is based on the FAM and is created to keep track of the best solutions as the population evolves. AVOA requires the two best vultures to guide the other vultures toward the best solution; the proposed algorithm instead uses a set of social leader vultures to guide solutions in the search space. Some of these social leader vultures are chosen from the ARC to lead the other vultures in the population. The proposed algorithm uses the FAM in [33], which focuses on convergence and diversity, to select the first social leader vultures from the ARC. The FAM is employed with two objectives to enforce preferences among these potential leaders and learn more about them. MaAVOA iteratively performs a series of steps, the most important of which are: (1) obtaining the social leaders for the vultures and moving the solutions in the decision space using AVOA; (2) applying polynomial mutation to 10% of the vulture positions (candidate solutions) to enhance diversity while avoiding premature convergence; (3) performing the environmental selection using the alternative pool to select the best NpopF vultures for the next generation; and (4) updating the external archive to contain only the non-dominated solutions, i.e., based on the dominance relation on all objectives. The non-dominated solutions in the alternative pool and the old archive are stored in the archive. These steps are repeated until MaxIter is reached. The parameters of the proposed algorithm are shown in Table 1. The MaAVOA framework is shown in Fig 1 and Algorithm (1), and it is explained in greater depth in the following subsections.

Algorithm 1: MaAVOA

Input: population size NpopF, MaxIter, and the related parameters.

Output: The position of best vultures and their fitness value

Processing:

 Initialize a random population of vultures Xv(v = 1,2,…,NpopF)

 Use Pareto dominance of NSGA-III to identify non-dominated solutions.

 Save all non-dominated individuals in the archive (ARC)

While (stopping criteria are not met) do

 • For v = 1: NpopF

 • Select social leader Vultures (Algorithm 4)

 • pop = the new position of vultures after updating their position by AVOA

 • pop = polynomial mutation to 10% of pop

 • Evaluate the objective values for each individual in pop

 • Combine the old and new offspring populations, denoted as px = pop ∪ Xv

 • pop = sorting px by a non-dominated sorting technique of NSGA-III and choosing NpopF solutions from px

 • pop = Environment Selection from the alternative pool (Algorithm 5)

 • Update the ARC by the nondominated solution in the alternative pool.

end while

 return the position of best vultures and their fitness value from ARC

3.1. Fitness Assignment Method (FAM)

The MaAVOA’s FAM is presented in Algorithm (2) and is based on a convergence measure that promotes convergence and a density measure that promotes variety. The method uses a set of reference points to calculate both metrics. These points are utilized to cluster the solutions and, as a result, estimate their density in the objective space. They are also used to push solutions closer to the PF.

MaAVOA uses a collection of reference points to find solutions that are well distributed and near the PF. A method for obtaining this set of points was proposed by [34]. The procedure produces a set of evenly spaced reference points on a hyperplane in the objective space; this hyperplane lies in the first quadrant and intersects each axis at position one, and Nr divisions are considered along each axis. As a result, the total number of reference points is nRef = C(m + Nr − 1, Nr), where m is the number of objectives.
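The sketch below (an illustrative, simplified Das–Dennis construction rather than the authors’ implementation) enumerates the evenly spaced reference points for m objectives and Nr divisions and confirms that their count equals C(m + Nr − 1, Nr), e.g., 35 points for m = 4 and Nr = 4.

```python
from itertools import combinations_with_replacement
from math import comb
import numpy as np

def das_dennis(m, nr):
    """Evenly spaced reference points on the hyperplane sum(rp) = 1 with spacing 1/nr."""
    points = []
    for combo in combinations_with_replacement(range(m), nr):
        p = np.zeros(m)
        for axis in combo:          # each multiset of nr axis choices is one lattice point
            p[axis] += 1.0 / nr
        points.append(p)
    return np.array(points)

rp = das_dennis(4, 4)
print(len(rp), comb(4 + 4 - 1, 4))  # -> 35 35
```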

Algorithm 2: FAM

Input: a population of vultures Xv(v = 1,2,…,NpopF)

Output: the convergence and density measures

Processing:

• Calculate the fitness vector of each vulture.

• Determine the set of reference points RP = {rp1, rp2,…,rpnRef}

• Compute the approximated ideal point Pideal

• Compute the new extreme points from ∪ ARC

• Compute the hyperplane from extreme points.

• Compute the density measure and the convergence measure of each solution

The basic steps for calculating the density measure and the convergence of the solutions in ARC are illustrated in the following steps.

  • Step 1: Generate a set of reference points RP = {rp1, rp2,…,rpnRef} using the method proposed by [33], where nRef is the total number of reference points. For example, if there are m = 4 objective functions, the reference points are created on the hyperplane with vertices at (1, 0, 0, 0), (0, 1, 0, 0), (0, 0, 1, 0), and (0, 0, 0, 1); considering four divisions (Nr = 4), 35 reference points are generated.
  • Step 2: Identify the ideal point Pideal, whose ith component is the minimum value of objective i found so far; then each objective function of each solution in ARC is translated to F̃ by subtracting Pideal from F(X), i.e., the translated objective i is obtained as F̃i(X) = Fi(X) − Pideal,i.
  • Step 3: Compute the set of extreme solutions from all solutions found in ARC up to the current iteration t of the algorithm. A solution is an extreme solution for objective n if it minimizes the achievement scalarizing function (AS) defined as follows.
(5)

where rpn = {rpn1, rpn2,…,rpnm} is a unitary vector corresponding to the direction of axis n, that is, rpnj = 0 if j ≠ n and rpnn = 1, with n ∈ {1, 2, …, m}. Using this method, all the solutions found in the ARC so far are used to update the set of extreme solutions. Then, the m objective vectors of these extreme solutions are used to build a hyperplane in the objective space extending through these m objective vectors. The intercept di of the i-th objective axis with this linear hyperplane is then obtained by calculating the distance from the interception point to the origin, and this value is used to normalize the objective functions.

(6)
  • Step 4: Associate the solutions in ARC with the reference points. For this purpose, each reference point on the hyperplane is joined with the origin to construct a reference line. The perpendicular distance from each solution in ARC to each reference line is computed, and each solution is associated with the reference point whose reference line is closest to it in the normalized objective space. As a result, each reference point has a set (cluster) of solutions, and the density around each reference point can be estimated by counting the number of ARC solutions linked to it. Therefore, the density measure (Dmj) of each solution in ARC is equal to the size of the cluster it belongs to. For example, if the solutions {x, w, y, z} form the cluster ωi of a reference point rpi, then the Dmj of each of these solutions is equal to 4.
  • Step 5: Compute the convergence measure (convj) to promote convergence. For each solution in ARC, the AS function with respect to its associated reference point is calculated; this value is the convergence measure of the solution and is denoted by convj. For each reference point rp, the AS of the solutions from the external archive associated with that reference point is calculated with respect to it (using Eq (5)). Mathematically, the convergence measure of a solution j (convj) is calculated as follows.
(7)
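The following Python sketch condenses Steps 1–5 under the assumption that the archive objectives have already been translated by the ideal point and normalized by the hyperplane intercepts. The helper names (`perpendicular_distance`, `asf`, `fam`) are ours, and the max-ratio achievement scalarizing form is assumed to correspond to Eq (5).

```python
import numpy as np

def perpendicular_distance(f, rp):
    """Distance from the normalized objective vector f to the reference line through rp."""
    w = rp / np.linalg.norm(rp)
    return np.linalg.norm(f - np.dot(f, w) * w)

def asf(f, rp, eps=1e-6):
    """Achievement scalarizing value of f along reference direction rp (assumed Eq (5) form)."""
    return float(np.max(f / np.maximum(rp, eps)))

def fam(F_norm, RP):
    """Return (density Dm_j, convergence conv_j) for each row of the normalized archive."""
    assoc = np.array([np.argmin([perpendicular_distance(f, rp) for rp in RP])
                      for f in F_norm])                 # nearest reference line per solution
    cluster_size = np.bincount(assoc, minlength=len(RP))
    density = cluster_size[assoc]                       # size of the cluster each solution belongs to
    conv = np.array([asf(F_norm[j], RP[assoc[j]]) for j in range(len(F_norm))])
    return density, conv
```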

Based on the above four assumptions of AVOA in Section 2.2, and to simulate the diverse vulture behaviours in the foraging stages, MaAVOA is divided into five phases. The first phase is the social vultures’ selection, the second is the vultures’ rate of hunger, the exploration and exploitation phases are the third and fourth phases, respectively, and finally the environmental selection phase selects the best NpopF vultures for the next generation. The flowchart simulating the various vulture behaviours in the foraging stages is shown in Fig 2 and presented in more detail in the next subsections.

Fig 2. The flowchart simulating the various vulture behaviours in the foraging stages.

https://doi.org/10.1371/journal.pone.0284110.g002

3.2. The social leader vultures selection

MaAVOA needs two social leader vultures to guide the other vultures in the population. For MaOPs, there is no single best solution for the investigated problem; instead, there is a set of non-dominated solutions, so two sets of social leader vultures are selected. In the proposed MaAVOA, the social leader vultures are divided into two sets: the first social leader vultures (FSLV) and the second social leader vultures (SSLV). The FSLV set contains all non-dominated solutions in the ARC. For each vulture in the population, the first social leader is chosen from the FSLV by using the density and convergence measures of the FAM to rank the ARC’s solutions, so that the best solutions are chosen according to these criteria. A tournament selection procedure is used to assign the first social leader vulture (fslv) from the FSLV to each vulture v: a solution i ∈ ARC is preferred to a solution j ∈ ARC if its density measure is lower (Dmi < Dmj), and solution j is preferred if Dmj < Dmi. If the two solutions have the same density, the one with the smaller convergence measure is chosen.

For the second social leader vultures (SSLV) set, the guiding vultures are the best solutions corresponding to each objective function among all vultures in the population. This selection procedure aims to find the best solutions that are closest to the PF; as a result, each iteration’s hyperplane is pushed closer to the PF, improving convergence. The set of second social leader vulture positions SSLV = {sl1, sl2,…,slm} consists of m best solutions, one for each objective. Thus, each vulture in SSLV is dedicated to bringing the new vultures closer to the PF’s ideal point. The second social leader sslv of a vulture v is assigned from SSLV by random selection.

The selection process of the set of SSLV is given in Algorithm (3), and the social leader vulture selection process (fslv and sslv) for each vulture in the population is given in Algorithm (4).

Algorithm 3: Second-Social vulture set selection for each vulture

Input: population of vultures

Output: SSLV

Processing:

 • Compute the objective functions for each vulture.

 • Assign the minimum objective function for each objective

 • Define the vulture corresponding to each objective (i.e., v1 corresponds to the minimum of f1, v2 to the minimum of f2, …, vm to the minimum of fm)

 • Output SSLV = {sl1, sl2,…,slm}

Algorithm 4: Social leaders vultures’ selection

for v = 1: NpopF

  [ARC(i), ARC(j)] = tournament selection (ARC)

  Compute the density measure (DM) and the convergence measure(conv) of each ARC(i) and ARC(j) (Algorithm 2)

  If Dmi<Dmj

   fslv = ARC(i)

elseif Dmj<Dmi

   fslv = ARC(j)

elseif Dmj = Dmi

    if convi<convj

     fslv = ARC(i)

    else

      fslv = ARC(j)

    end

end

s = random[1, m]

sslv = SSLV(s)

end
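For illustration, Algorithms 3 and 4 can be rendered in Python roughly as follows. The density and convergence arrays are assumed to come from a FAM computation such as the sketch in Section 3.1, and all toy inputs are hypothetical.

```python
import numpy as np

def second_social_leaders(F_pop):
    """Algorithm 3: index of the best (minimum) vulture for each objective."""
    return np.argmin(F_pop, axis=0)

def first_social_leader(density, conv, rng):
    """Algorithm 4: binary tournament over the archive, lower density first, then lower convergence."""
    i, j = rng.choice(len(density), size=2, replace=False)
    if density[i] < density[j]:
        return i
    if density[j] < density[i]:
        return j
    return i if conv[i] < conv[j] else j

rng = np.random.default_rng(0)
F_pop = rng.random((50, 4))                                # toy objectives: 50 vultures, 4 objectives
density, conv = rng.integers(1, 10, 20), rng.random(20)    # toy FAM outputs for a 20-member archive
sslv_set = second_social_leaders(F_pop)                    # one leader index per objective
fslv = first_social_leader(density, conv, rng)
sslv = sslv_set[rng.integers(len(sslv_set))]               # sslv picked uniformly at random from SSLV
```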

3.3. Vultures’ hungry rate

The vulture has the strength to fly and search for food if it is not hungry. If the vulture is very hungry, it lacks the strength to fly large distances; hungry vultures therefore stay near the vultures that have food rather than searching for food on their own. The exploration and exploitation stages of the vultures are formed based on this behaviour, and the degree of hunger indicates when vultures transition from the exploration to the exploitation stage. The ith vulture’s hunger degree at the tth iteration can be calculated by (8) (9) where the first coefficient is a random number between 0 and 1, zt is a random number between -1 and 1, ht is a random number between -2 and 2, and k is a parameter set in advance that denotes the likelihood of the vulture carrying out the exploitation stage.

When the hunger degree is greater than 1, vultures enter the exploration stage and search for new food in various locations. When it is less than 1, vultures enter the exploitation stage and look for better food in the immediate vicinity.

3.4. Exploration stage

Vultures in AVOA can investigate different random locations using two alternative tactics, which are selected using a parameter called p1. This parameter is given at the algorithm’s initialization and lies in the range [0,1]. The exploration stage of the vulture can be expressed as (10) (11) where the updated value is the ith vulture’s position at the (t+1)th iteration, pr is a random number uniformly distributed in the range [0,1], and the social leader vulture is fsl ∈ FSLV or ssl ∈ SSLV, as chosen for vulture i in Algorithm (4). s1 and s2 are parameters set in advance, with values ranging from 0 to 1 and summing to one. The hunger degree is calculated according to Eq (8), ub and lb represent the upper and lower bounds of the solutions, and the distance between the vulture and the current optimal vulture is calculated by: (12) where C models the random movement of vultures to protect food from other vultures.

3.5. Exploitation stage at medium level

If the hunger degree is less than 1, AVOA enters the exploitation phase, which is divided into two levels (medium and later), each with two alternative strategies.

3.5.1. Competition for food.

The weaker vultures try to exhaust the healthier vultures and get food from them by congregating around them and provoking minor confrontations. Based on this behaviour, the vultures’ position is updated and the updated formula can be expressed as: (13) (14)

3.5.2. Rotating flight of vultures.

When a vulture is full and active, it will not only compete for food but also hover at high altitudes, according to AVOA’s spiral model. The updated formula can be expressed as: (15) (16) (17)

3.6. Exploitation stage at later level

When the hunger degree falls below 0.5, almost all vultures in the population are full, but after a long period of time the best two vultures become hungry and feeble. Vultures will attack food at this time, and several different vultures will congregate around the same food source.

3.6.1. Aggregation behaviour.

Vultures have digested a large portion of the food during the late stages of AVOA. Where there is food, many vultures will congregate, and competition will ensue. At this point, the vulture position update formula is as follows: (18) (19) (20)

3.6.2. Attack behaviour.

When AVOA is in its last stages, the vultures flock to the best vulture to scavenge the remaining food. The vultures’ position update formula can be expressed at this point as in Eq (21). (21) where dim represents the solution’s dimension and Levy(dim) represents the Levy flight [26], whose calculation formula is given by the following equation. (22) where r1 and r2 are uniformly distributed random numbers in the range [0,1], δ is a constant that is usually set to 1.5, and the calculation formula of σ is given by the next equation. (23) where Γ(x) = (x−1)!
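A sketch of the Levy-flight step in Python is shown below; it follows the common formulation consistent with the description of Eqs (22) and (23) (r1 and r2 uniform in [0,1], δ = 1.5, σ built from the gamma function), so treat the exact constants as assumptions rather than the authors’ verbatim implementation.

```python
import numpy as np
from math import gamma, sin, pi

def levy_flight(dim, delta=1.5):
    """One Levy-flight step per decision variable (common formulation, assumed to match Eqs (22)-(23))."""
    sigma = (gamma(1 + delta) * sin(pi * delta / 2) /
             (gamma((1 + delta) / 2) * delta * 2 ** ((delta - 1) / 2))) ** (1 / delta)
    r1 = np.random.rand(dim)                       # uniform in [0, 1)
    r2 = np.random.rand(dim)
    return 0.01 * (r1 * sigma) / (np.abs(r2) ** (1 / delta) + 1e-12)  # small eps avoids division by zero

step = levy_flight(10)   # one Levy step per decision variable of a 10-dimensional solution
```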

3.7. Environmental selection operator

The ARC stores the non-dominated solutions found by the algorithm during the search process until the algorithm terminates. The archive stores the non-dominated solutions from all vultures for information sharing. A vulture may have very poor values on some objectives when the number of objectives increases; these poorly performing objectives need the solutions of the ARC for information sharing, so that the vulture is pushed to converge to the PF. Although the MaAVOA’s environmental selection operator achieves a reasonably balanced performance in terms of convergence and diversity, the new offspring of MaAVOA may have a diversity problem with respect to the other solutions. The idea of an alternative pool is introduced to address this challenge: the offspring generated by the genetic operators are combined with the solutions in the ARC to construct an alternative pool containing both the new offspring generated by the MaAVOA operator and the archive offspring generated by the reproduction of archive solutions, from which the best NpopF vultures are selected according to the dominance relation on all objectives. Under the pressure of the alternative pool, the algorithm ensures that the operators work together to find more widely spread alternative solutions during the population’s evolutionary process. As a result of the alternative pool, the algorithm’s overall evolutionary efficiency improves, and population convergence and distribution are ensured by the environmental selection operator. Some ideas and schemes from [35] were used to develop this environmental selection.

Reproduction of Archive Solutions (RAS) is applied to 50% of the ARC solutions. In RAS, the crossover and mutation processes inherit different dimensions from different solutions. Some parents from the archive are selected randomly, simulated binary crossover (SBX) and polynomial mutation (PM) are performed, and the new solutions are added to the alternative pool; the best NpopF individuals are then chosen according to the dominance relation on all objectives.
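The RAS step can be sketched as follows: a self-contained NumPy version using textbook SBX and polynomial mutation on a real-coded archive with scalar bounds. The 50% sampling and the distribution indices of 20 follow the text; everything else is an illustrative assumption, not the authors’ code.

```python
import numpy as np

def sbx_pair(p1, p2, eta=20, rng=None):
    """Simulated binary crossover between two real-coded parents."""
    rng = rng or np.random.default_rng()
    u = rng.random(p1.shape)
    beta = np.where(u <= 0.5, (2 * u) ** (1 / (eta + 1)),
                    (1 / (2 * (1 - u))) ** (1 / (eta + 1)))
    return (0.5 * ((1 + beta) * p1 + (1 - beta) * p2),
            0.5 * ((1 - beta) * p1 + (1 + beta) * p2))

def poly_mutation(x, lb, ub, eta=20, rng=None):
    """Polynomial mutation applied gene-wise with probability 1/D (scalar bounds lb, ub)."""
    rng = rng or np.random.default_rng()
    u = rng.random(x.shape)
    delta = np.where(u < 0.5, (2 * u) ** (1 / (eta + 1)) - 1,
                     1 - (2 * (1 - u)) ** (1 / (eta + 1)))
    mask = rng.random(x.shape) < 1.0 / x.shape[-1]
    return np.clip(np.where(mask, x + delta * (ub - lb), x), lb, ub)

def reproduce_archive(arc, lb, ub, rng=None):
    """RAS sketch: recombine a random 50% of the archive with SBX, then mutate the children."""
    rng = rng or np.random.default_rng()
    half = arc[rng.choice(len(arc), size=max(2, len(arc) // 2), replace=False)]
    children = []
    for a, b in zip(half[::2], half[1::2]):
        for c in sbx_pair(a, b, rng=rng):
            children.append(poly_mutation(c, lb, ub, rng=rng))
    return np.array(children)
```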

Algorithm 5: Environment Selection (RAS, pop) Operator.

Input: pop (offspring generated by the MaAVOA), ARC (solutions in archive)

Output: pop (new generation of vultures).

Processing:

PAS = choose a random 50% of vultures from ARC

RAS = apply genetic operators (SBX and PM) to PAS

for i = 1 to |RAS|

for j = 1 to |pop|

  Judge the dominance relation between RAS(i) and pop(j);

  if the nondominated solution is located in pop

   Retain the corresponding nondominated pop solutions;

  end if

  if the nondominated solution lies in RAS

  add the corresponding nondominated RAS(i) to pop;

  end if

end for

end for

if |pop| > NpopF

 Compute the fitness values using the FAM method; (Algorithm 2)

 Remove some solutions with the worst fitness values;

end if

Output the pop with size NpopF for the next generation.

end

3.8. Updating the external archive

Because the social leader vultures are chosen from the ARC, good administration of this archive is crucial and significantly impacts the algorithm’s performance. The external archive is updated at each iteration: each non-dominated solution among the vultures obtained after environmental selection is tried for insertion into the external archive. If any archive solution dominates the candidate solution, the candidate is discarded. Otherwise, the solution is saved to the external archive, and the solutions dominated by this new non-dominated solution are deleted from the archive.
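A minimal sketch of this archive-update rule is given below (illustrative only; it reuses a `dominates` helper like the one sketched in Section 2.1 and stores each archive member as a (decision vector, objective vector) pair).

```python
import numpy as np

def dominates(fp, fq):
    return bool(np.all(fp <= fq) and np.any(fp < fq))

def update_archive(archive, candidates):
    """Try to insert each candidate (x, F(x)); drop it if dominated by the archive,
    otherwise add it and remove every archive member that it dominates."""
    for x, f in candidates:
        if any(dominates(fa, f) for _, fa in archive):
            continue                                        # dominated by the archive: neglected
        archive = [(xa, fa) for xa, fa in archive if not dominates(f, fa)]
        archive.append((x, f))
    return archive
```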

4. MaAVOA implementation

Five state-of-the-art algorithms are compared to the proposed algorithm, namely the unified evolutionary optimization algorithm (U-NSGAIII) [27], the reference-point-based many-objective evolutionary algorithm based on NSGA-II (NSGA-III) [28], the multi-objective evolutionary algorithm based on decomposition (MOEA/D) [29], the constrained two-archive evolutionary algorithm (CTAEA) [30], and the adaptive geometry estimation based MOEA (AGE-MOEA) [31]. These algorithms have been developed to solve MaOPs. The proposed and the state-of-the-art algorithms have been implemented and added to the modern multi-objective optimization package Pymoo. To evaluate the performance of the proposed algorithm, it is applied to both benchmark problems (DTLZ1-DTLZ7) and two engineering applications, the series-parallel system problem and overspeed protection for gas turbine [36], as case studies. The Wilcoxon test statistic has been applied to all experiments.

All experiments are tested on a machine with the following specifications: CPU: Core i5 Processor 2.5 GHz /16GB RAM /500GB SSD, GPU: NVIDIA GeForce GTX1050 4GB, compute capability 6.1.

4.1. Benchmark problems

In the proposed work, we used the DTLZ1-DTLZ7 benchmark problems, which are commonly used because of their scalability to any number of objective functions. This widespread test suite was conceived for MaOPs with scalable fitness dimensions [37]. All the problems in this test set are scalable in the fitness dimension and are continuous n-dimensional many-objective problems. The decision space has a dimension of k + m − 1, where m is the number of objectives, with k = 5 for DTLZ1, k = 10 for DTLZ2-6, and k = 20 for DTLZ7, as proposed in [37]. Table 2 lists the properties of the decision space and the PF for each problem.
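Since the algorithms were implemented on top of the Pymoo package, the DTLZ problems can be instantiated roughly as follows (the import path of `get_problem` varies across Pymoo releases, so treat it as an assumption; `n_var` follows the k + m − 1 convention above).

```python
from pymoo.problems import get_problem   # older releases expose this via pymoo.factory

m = 3                                    # number of objectives
k = {"dtlz1": 5, "dtlz2": 10, "dtlz7": 20}
problems = {name: get_problem(name, n_var=kk + m - 1, n_obj=m) for name, kk in k.items()}
print({name: p.n_var for name, p in problems.items()})  # -> {'dtlz1': 7, 'dtlz2': 12, 'dtlz7': 22}
```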

4.2. Parameter settings

Concerning the recommended parameter settings for the compared algorithms, the crossover and mutation probabilities are set to 1 and 1/D, respectively, and the mutation and crossover distribution indices are set to 20. The population size of all algorithms is set to be the same to make a fair comparison. Table 3 shows the number of reference points (nRef) for problems with different numbers of objectives. We set the population size (popsize) equal to nRef for both the state-of-the-art algorithms and MaAVOA, using the same settings as [38]. One layer of reference points is used for three- and five-objective problems and two layers of reference points for eight-, ten-, and fifteen-objective problems, according to [39]. The reference points (or popsize) are set according to the parameters Nr1 and Nr2 for the different numbers of objectives, where Nr1 and Nr2 control nRef along the boundary and the inside of the Pareto optimal front, respectively (they are used in the calculation of the number of reference points given above).

Table 3. Settings of the reference points and population size.

https://doi.org/10.1371/journal.pone.0284110.t003
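With Pymoo, the one- and two-layer reference sets can be generated along the following lines. This is a hedged sketch: the partition counts and the inner-layer scaling of 0.5 are illustrative choices, not necessarily the exact values behind Table 3.

```python
from pymoo.util.ref_dirs import get_reference_directions

# one layer for a 3-objective problem (Nr1 partitions only, Nr2 = 0)
ref_3 = get_reference_directions("das-dennis", 3, n_partitions=12)

# two layers for a 10-objective problem: a boundary layer plus a scaled inner layer
ref_10 = get_reference_directions(
    "multi-layer",
    get_reference_directions("das-dennis", 10, n_partitions=3, scaling=1.0),
    get_reference_directions("das-dennis", 10, n_partitions=2, scaling=0.5),
)
print(len(ref_3), len(ref_10))   # the resulting counts are used as both nRef and popsize
```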

For a fair comparison, each state-of-the-art algorithm is applied to solve the DTLZ benchmark functions under three different cases or scenarios to analyse the proposed algorithm’s performance and discuss its weak points. The first case terminates the algorithms after 500 generations for each test problem. The second case terminates the algorithms after 100000 function evaluations per run. The third case sets the computational time of each run to 30 seconds for all the algorithms. In each of the three scenarios, each algorithm is run 20 times separately on each test problem.

4.3. Performance indicators

Three widely used performance metrics are utilized to evaluate the performance of the algorithms in this paper: Generational Distance (GD), Inverted Generational Distance (IGD), and hypervolume (HV). All of them serve as comprehensive performance measures indicating the convergence and distribution of a solution set [40].

  • Inverted Generational Distance (IGD) and Generational Distance (GD) are two measurement indicators used to validate the results. The GD performance indicator measures the solutions’ distance to the PF. Let the points found by our algorithm be the objective vector set A = {a1, a2,…,a|A|} and let Z = {z1, z2,…,z|Z|} be the set of evenly sampled solutions from the true Pareto optimal front; then

GD(A) = (1/|A|) ( Σ_{i=1}^{|A|} d_i^p )^{1/p}  (24)

where di represents the Euclidean distance (p = 2) from ai to its nearest reference point in Z. Basically, this result is the average distance from any point of A to the closest point of the PF.

  • The IGD performance indicator inverts the generational distance and measures the distance from each point in Z to the closest point in A:

IGD(A) = (1/|Z|) ( Σ_{i=1}^{|Z|} d̂_i^p )^{1/p}  (25)

where d̂_i represents the Euclidean distance (p = 2) from zi to its nearest point in A.

  • Hypervolume (HV): the volume covered by the obtained PF in the objective region, defined as the HV between the front surface and the reference vector. As a result, the HV reflects the distribution of the obtained PF’s solutions. To calculate HV, we set the reference point of HV to p = (1,1,…,1)T [41]. To guarantee that the individuals in the population contribute to the HV as much as possible, the objective values are normalized by 1.1 times the nadir point of the PF. The HV metric is calculated exactly when the number of objectives is less than 5; when the number of objectives is greater than 5, the Monte Carlo method with 10^6 sample points is adopted to calculate HV for a more accurate result.
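The three indicators can be computed as in the sketch below: plain NumPy for GD and IGD (with p = 2 averaging as in Eqs (24) and (25)), while for HV the Pymoo indicator class is assumed to be available (its import path may differ between versions).

```python
import numpy as np

def gd(A, Z, p=2):
    """Generational distance: distances from each point of A to its nearest point of Z."""
    d = np.min(np.linalg.norm(A[:, None, :] - Z[None, :, :], axis=-1), axis=1)
    return (np.sum(d ** p) ** (1 / p)) / len(A)

def igd(A, Z, p=2):
    """Inverted generational distance: swap the roles of A (approximation) and Z (reference)."""
    return gd(Z, A, p)

# hypervolume via Pymoo (assumed API), with reference point (1, ..., 1)
# from pymoo.indicators.hv import HV
# hv_value = HV(ref_point=np.ones(A.shape[1]))(A)
```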

4.4. Results and discussion

This section analyses all the outcomes acquired from various experiments conducted throughout the implementation phase in this paper.

4.4.1 Convergence analysis.

The ability of the global search method to converge is a critical performance criterion for MaOPs. This part examines the MaAVOA’s convergence as a function of the number of iterations using the IGD measure. The convergence trajectories, chosen randomly from 30 runs of MaAVOA and the other five algorithms on DTLZ1-4 with three and ten objectives, are shown in Fig 3.

Fig 3. Convergence trajectories of seven algorithms on DTLZ1-4 with 3 and 10 objectives.

https://doi.org/10.1371/journal.pone.0284110.g003

In the case of DTLZ1 and DTLZ4 with 3 and 10 objectives, all algorithms exhibit a similar and strong ability to converge to the PF, except CTAEA and AGE-MOEA, which have the worst convergence on all test problems. In addition, the convergence of the proposed MaAVOA towards the PF is better than that of NSGA-III on most problems. This is because MaAVOA uses an external archive in which the non-dominated solutions found during the search are stored, whereas NSGA-III works only on the updated population. The solutions in the external archive are used to lead the other solutions in the population, and MaAVOA uses the FAM with two objectives, with the simultaneous goal of imposing preferences among these potential social leaders. The proposed approach uses Pareto dominance and information about density and proximity to push the vultures towards the PF, which constitutes a significant difference between the proposed MaAVOA and NSGA-III.

Although MaAVOA and U-NSGAIII have approximately the same convergence, MaAVOA is still better in convergence on all the problems except DTLZ with 3 objectives. In addition, MOEA/D shows a decrease in convergence in the case of DTLZ1 and DTLZ4 with 3 objectives and DTLZ3 and DTLZ4 with 10 objectives. Furthermore, MaAVOA shows great performance on DTLZ1 with 3 or 10 objectives, DTLZ3 with 3 or 10 objectives, and DTLZ2 and DTLZ4 with 10 objectives, which demonstrates its strong ability to solve MaOPs with concave PFs. We can also observe that the proposed algorithm performs considerably worse on DTLZ2 and DTLZ4 with 3 objectives than on their 10-objective versions; this is due to MaAVOA’s new strategy for choosing the social leader vultures that guide the other vultures to the PF. On several tests, the algorithm demonstrates good scalability in terms of the number of decision variables, and it is concluded that the suggested algorithm has a promising ability to converge to the PF.

The other obtained PFs for all DTLZ problems can be found at https://github.com/tfarrag2000/MaAVOA. We can clearly observe the convergence and diversity of the MaAVOA solutions for the high-dimensional MaOPs.

In Figs 4 and 5, the approximate PFs obtained by the six competing algorithms on DTLZ3 and DTLZ7 with 3, 4, and 10 objectives are presented to further explain the results.

Fig 4. The parallel coordinates of the non-dominated front obtained by each algorithm (used in the comparison) on DTLZ3 and DTLZ7 with 3 objectives.

https://doi.org/10.1371/journal.pone.0284110.g004

Fig 5. The parallel coordinates of the non-dominated front obtained by each algorithm (used in the comparison) on DTLZ3 and DTLZ7 with 10 objectives.

https://doi.org/10.1371/journal.pone.0284110.g005

As shown in Fig 4, NSGA-III, CTAEA, MOEA/D, and MaAVOA have a good distribution, indicating that they performed well on DTLZ3 with three objectives, whereas U-NSGA-III is unable to maintain the convergence and distribution of the solutions and AGE-MOEA fails to converge to the true PF. When the algorithms are tested on DTLZ7 with 3 objectives, MaAVOA and U-NSGA-III show superior performance to the other algorithms. It is clearly observed that the proposed MaAVOA has great diversity and convergence, whereas NSGA-III, CTAEA, AGE-MOEA, and MOEA/D cannot converge to the true PF on DTLZ7 with 3 objectives.

As shown in Fig 5, NSGA-III, U-NSGA-III, and MaAVOA demonstrate a good dispersion, displaying their excellent performance on DTLZ3 and DTLZ7 with 10 objectives, while CTAEA, AGE-MOEA, and MOEA/D have poor convergence and diversity. Overall, Figs 4 and 5 show that MaAVOA has a good dispersion, demonstrating its superior performance.

4.4.2 Results for GD, IGD, and hypervolume.

Case 1: the termination condition is set to be 500 generations.

The IGD results of the six algorithms on the seven DTLZ problems with 3, 5, 8, and 10 objectives are presented in Table 4, while Table 5 shows the GD results and Table 6 shows the HV results.

Table 4. Performance comparison between the proposed MaAVOA and other algorithms in terms of IGD.

https://doi.org/10.1371/journal.pone.0284110.t004

Table 5. Performance comparison between MaAVOA and other algorithms in terms of GD value on DTLZs.

https://doi.org/10.1371/journal.pone.0284110.t005

Table 6. Performance comparison between MaAVOA and other algorithms in terms of HV value on DTLZs.

https://doi.org/10.1371/journal.pone.0284110.t006

The results of Tables 4–6 show that the proposed MaAVOA achieves competitive performance on most test cases, indicating that MaAVOA can balance convergence and diversity better than the five comparative algorithms.

Case 2: the termination condition is set to be 100000 function evaluations.

Tables 7–9 give the values of the three metrics IGD, GD, and HV for the six algorithms on the seven DTLZ problems with 3, 5, 8, and 10 objectives when the termination condition of all algorithms is set to 100000 function evaluations.

Table 7. The performance metrics comparison between MaAVOA and other algorithms in terms of IGD value on DTLZs in the case of the 100000 function evaluations.

https://doi.org/10.1371/journal.pone.0284110.t007

Table 8. The performance metrics comparison between MaAVOA and other algorithms in terms of GD value on DTLZs in the case of the 100000 function evaluations.

https://doi.org/10.1371/journal.pone.0284110.t008

Table 9. The performance metrics comparison between MaAVOA and other algorithms in terms of HV value on DTLZs in the case of the 100000 function evaluations.

https://doi.org/10.1371/journal.pone.0284110.t009

As seen from Tables 7–9, MaAVOA achieves competitive performance on most test problems.

Case 3: the termination condition is set to a computational time of 30 seconds.

Tables 10–12 give the values of the three metrics IGD, GD, and HV for the different algorithms on the seven DTLZ problems with 3, 5, 8, and 10 objectives when the termination condition of all algorithms is set to 30 seconds. In addition, Table 13 compares the proposed MaAVOA with the other algorithms in terms of the number of generations and number of function evaluations on the DTLZ problems for the 30-second computational budget.

Table 10. The performance metrics comparison between MaAVOA and other algorithms in terms of IGD value on DTLZs in case of the computational time is 30 seconds.

https://doi.org/10.1371/journal.pone.0284110.t010

Table 11. The performance metrics comparison between MaAVOA and other algorithms in terms of GD value on DTLZs in case of the computational time is 30 seconds.

https://doi.org/10.1371/journal.pone.0284110.t011

Table 12. The performance metrics comparison between MaAVOA and other algorithms in terms of HV value on DTLZs in case of the computational time is 30 seconds.

https://doi.org/10.1371/journal.pone.0284110.t012

Table 13. Comparison between MaAVOA and other algorithms in terms of the number of generation and number of function evaluations on DTLZs in case of the computational time is 30 seconds.

https://doi.org/10.1371/journal.pone.0284110.t013

As shown in Table 13, when all algorithms end after 30 seconds, MaAVOA achieves competitive performance on most test problems, the same as in the other two cases. On the other hand, Table 13 shows that, within the 30 seconds, the proposed algorithm executes a number of generations smaller than NSGA-III, U-NSGA-III, and AGE-MOEA, and the number of function evaluations of MaAVOA is also smaller than that of NSGA-III and U-NSGA-III.

5. MaAVOA for engineering applications

In this section, the performance of the proposed MaAVOA is tested on two real-life engineering applications, namely the series-parallel system and overspeed protection for gas turbine problems. These applications are used to show the efficiency and effectiveness of the proposed MaAVOA on real-life problems. Since the optimal Pareto front is not known for real-life applications, a reference set was used for computing the IGD and HV; it was formed from the non-dominated solutions resulting from the union of all the approximation sets of the PF obtained by each algorithm at the end of every run.

5.1 Series-parallel system problem

The series-parallel system has subsystems in series and parallel combinations. Fig 6 shows an example of a series-parallel system with five subsystems, where the final reliability function is divided into two parts. The first part contains subsystems 1 and 2, and the second part contains subsystems 3, 4, and 5. For the first part, the subsystems are in series, so the product of R1 and R2 is used. For the second part, R3 and R4 are in parallel, so their combined reliability is R3 + R4 − R3R4; this combination is in series with R5, so the product of (R3 + R4 − R3R4) and R5 is used in the final function, as shown in Eq (26). Volume and weight increase with extra components under permissible limits and restrictions. In Eq (27), wi represents the weight and vi the volume of component i, with ni redundant components. As shown in Eq (28), the system cost Cs contains two additional factors, where the first one represents the cost of a single component of the ith subsystem and the second one, exp(0.25ni), is due to the cost of interconnecting hardware. In Eq (29), for the system weight Ws, there is an extra factor exp(0.25ni) for the interconnecting hardware. The mathematical formulation of the problem is a nonlinear mixed-integer programming problem given as follows: (26) (27) (28) (29) where, for the ith subsystem, αi and βi are constants representing the physical characteristics of each component at stage i.
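For concreteness, the reliability part of Eq (26) can be written as the sketch below. It assumes the usual active-redundancy model, Ri = 1 − (1 − ri)^ni, and combines the two branches of Fig 6 in parallel as in the classic series-parallel formulation; both assumptions should be checked against Eq (26), and the cost and weight constraints of Eqs (27)–(29) are omitted.

```python
import numpy as np

def subsystem_reliability(r, n):
    """Reliability of subsystem i with n_i redundant components of reliability r_i (assumed model)."""
    return 1.0 - (1.0 - np.asarray(r, dtype=float)) ** np.asarray(n)

def series_parallel_reliability(r, n):
    """System reliability of the five-subsystem example in Fig 6."""
    R = subsystem_reliability(r, n)
    branch1 = R[0] * R[1]                           # subsystems 1 and 2 in series
    branch2 = (R[2] + R[3] - R[2] * R[3]) * R[4]    # (3 parallel 4) in series with 5
    # assumption: the two branches are themselves combined in parallel
    return 1.0 - (1.0 - branch1) * (1.0 - branch2)

print(series_parallel_reliability([0.95, 0.96, 0.97, 0.97, 0.98], [2, 2, 1, 1, 3]))  # toy inputs
```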

Table 14 provides the input data for a series-parallel system where ri, αi and βi are uniformly generated from the ranges [0.95,1.0], [6,10], [1,5], and [11,20] respectively.

The algorithms are terminated after 250, 500, 1000, 2000, 4000, and 5000 generations. The engineering problem has 4 objective functions. Accordingly, the population size is chosen to be 969 (Nr1 = 16, Nr2 = 0, and nRef = 969).

In Table 15, the values of the performance measures for the series-parallel system with five subsystems are presented. NSGA-III and U-NSGA-III perform better in terms of GD and IGD, while MaAVOA is better in terms of HV. CTAEA and AGE-MOEA have the worst performance on all metrics. Fig 7 shows the final solution set obtained by all algorithms.

Fig 7. Final solution set for a series-parallel system with five subsystems (termination condition is 500 iterations).

https://doi.org/10.1371/journal.pone.0284110.g007

Table 15. The performance measures values of series-parallel system.

https://doi.org/10.1371/journal.pone.0284110.t015

In Fig 7, the approximated PF obtained by the competing algorithms for the series-parallel system is presented to further explain the results.

5.2 Overspeed protection for gas turbine problem

This system comprises a gas turbine supplied with fuel through various valves. Fig 8 depicts a four-valve overspeed protection system for gas turbines. The valves regulate the fuel flow when overspeed is detected. The problem can be expressed mathematically in the following way: (30) (31) (32) (33)

αi and βi are constants representing the physical features of each item at stage i, and T is the operating time during which the item should not fail. Table 16 provides the input data for the overspeed protection for gas turbine system.

Table 16. Data used in Overspeed protection for gas turbine system.

https://doi.org/10.1371/journal.pone.0284110.t016

The algorithms are terminated after 250, 500, 1000, 2000, 4000, 5000, and 10000 generations. The engineering problem has 4 objective functions. Accordingly, the population size is chosen to be 969 (Nr1 = 16, Nr2 = 0, and nRef = 969).

The results for the overspeed protection for gas turbine problem are given in Table 17 and Fig 9.

Fig 9. Final solution set for the overspeed protection for gas turbine problem (termination condition is 500 iterations).

https://doi.org/10.1371/journal.pone.0284110.g009

Table 17. The performance measures values of Overspeed protection for gas turbine.

https://doi.org/10.1371/journal.pone.0284110.t017

As observed in Table 17, the MaAVOA-based solution approach performs better in terms of IGD, GD, and HV. The performance measures are drawn as histograms in Fig 9, which shows the final solution set obtained by all algorithms with a termination condition of 500 iterations. It is concluded that the proposed MaAVOA provides very competitive results compared to five well-known optimization algorithms when solving the investigated real-life engineering applications.

6. Conclusion and future research directions

A novel many-objective African vulture optimization algorithm, named MaAVOA, is proposed in this paper. MaAVOA is an updated version of AVOA designed to handle MaOPs. It integrates a new social leader vulture selection process. In addition, an environmental selection mechanism based on an alternative pool is adapted to improve the selection pressure and maintain diversity for approximating different parts of the whole PF. An external archive based on the FAM is established to save the best non-dominated solutions during the population evolution. A RAS procedure is also developed to improve the quality of archived solutions and to help reach the regions of the PF that the vultures easily miss. The proposed MaAVOA was evaluated using well-known benchmark functions. Comparing the MaAVOA results to five state-of-the-art algorithms showed that MaAVOA outperformed them in terms of IGD, GD, and HV on most of the benchmark test functions, both when all algorithms terminated after a fixed number of function evaluations and when they terminated after a maximum number of generations. To verify the performance of the proposed MaAVOA on real-life many-objective applications, it was applied and tested on two real-life constrained engineering problems. The findings show that, among all the compared algorithms, MaAVOA has promising and competitive performance.

There are several directions of research that can be recommended for future work to handle the limitations of the proposed approach. The variation in the operators of the proposed MaAVOA can motivate future work to minimize its execution time. Extending this algorithm to solve more constrained engineering many-objective optimization problems is another point for future research. In addition, the computational time of the proposed algorithm is greater than that of both the NSGA-III and U-NSGA-III algorithms, which can also be addressed in future work. Furthermore, breaking out of local optima is still difficult, so we suggest using a clustering strategy in the future to help with this.
