
Multi‑strategy Equilibrium Optimizer: An improved meta-heuristic tested on numerical optimization and engineering problems

  • Yu Li,

    Roles Conceptualization, Data curation, Formal analysis

    Affiliation Institute of Management Science and Engineering, Henan University, Kaifeng, China

  • Xiao Liang,

    Roles Writing – original draft

    Affiliation School of Business, Henan University, Kaifeng, China

  • Jingsen Liu,

    Roles Methodology, Software, Supervision

    ljs@henu.edu.cn

    Affiliation Institute of Intelligent Network Systems, and Software School, Henan University, Kaifeng, China

  • Huan Zhou

    Roles Resources, Supervision, Visualization

    Affiliation School of Business, Henan University, Kaifeng, China

Abstract

The Equilibrium Optimizer (EO) is a recently proposed intelligent optimization algorithm based on the mass balance equation, and it offers a novel principle for global optimization. However, when solving complex numerical optimization and engineering problems, the algorithm tends to get stuck in local optima and lose accuracy. To address this issue, an improved Equilibrium Optimizer (IEO) based on multi-strategy optimization is proposed. First, Tent mapping is used to generate the initial locations of the particle population, which distributes the particles evenly and lays the foundation for a diversified global search. Moreover, a nonlinear time parameter is used in the position update equation, which dynamically balances the exploration and exploitation phases of the improved algorithm. Finally, Lens Opposition-based Learning (LOBL) is introduced, which avoids local optima by improving the population diversity of the algorithm. Simulation experiments are carried out on 23 classical functions, the IEEE CEC2017 problems, and the IEEE CEC2019 problems, and the stability of the algorithm is further analyzed with the Friedman statistical test and box plots. The experimental results show that the algorithm achieves good solution accuracy and robustness. Additionally, six engineering design problems are solved, and the results show that the improved algorithm has high optimization efficiency and achieves cost minimization.

1. Introduction

In recent years, optimization problems have become an important topic in the modern management field. They help provide optimal solutions for application problems in various areas of society. After comprehensively accounting for all relevant constraints, practical application problems are abstracted as objective functions and solved. With the development of science and technology, many optimization problems have become increasingly complex, and highly flexible meta-heuristic algorithms have therefore attracted the attention of a large number of researchers [1]. Meta-heuristic algorithms offer a set of new research ideas and solutions, some for modeling and simulating complex systems, others for analyzing complex decisions and solving optimization problems. They solve optimization problems by simulating biological or physical phenomena and can be divided into four categories: evolutionary algorithms, physics-based algorithms, human-based algorithms, and swarm intelligence algorithms [2].

Evolutionary algorithms realize the overall progress of the population and achieve the optimal solution by simulating the evolution law of survival of the fittest in nature. The representative ones include the genetic algorithm (GA) that searches for the optimal solution by simulating the natural evolution process [3], and the differential evolution (DE) that simulates the crossover and mutation mechanism in heredity [4]. Similarly, population-based incremental learning (PBIL) is proposed based on two strategies: genetic search and competitive learning [5]. Biogeography-based optimization (BBO) is proposed based on the biogeography theory of species’ migration and drift between geographical regions in nature [6].

Physics-based algorithms are inspired by the laws of physics in nature. Representative algorithms mainly include simulated annealing (SA) [7], whose inspiration comes from the annealing process of solid materials. In addition, big-bang big-crunch (BBBC) is inspired by the Big Bang and contraction theory [8]. Gravitational search algorithm (GSA) is based on Newton’s law of universal gravity to guide the motion of each particle to search for the optimal solution [9]. Inspired by the theory of physics kinematics, central force optimization (CFO) realizes the update of the optimal solution by updating the acceleration [10].

Human-based algorithms are modeled after human behaviors, such as teaching and social behaviors. Representative algorithms are listed below. Tabu search (TS) is a search process guided by memory; it simulates the human intelligence process and is a manifestation of artificial intelligence [11]. Teaching-learning-based optimization (TLBO) performs search optimization by simulating how humans teach and learn [12]. The inspiration of harmony search (HS) comes from human musical performance [13]: by repeatedly adjusting the solution variables in the memory bank, the algorithm makes the function value converge as the number of iterations increases.

The swarm intelligence algorithm comes from the simulation of biological groups’ behavior process. The individuals in the biological group follow the cooperative mode of aggregation, division of labor, collision avoidance, and convergence, until the swarm intelligence emerges. Representative swarm intelligence algorithms include: particle swarm optimization (PSO), which simulates the foraging behavior of birds [14]; ant colony optimization (ACO), which simulates the foraging path of ants by secretion concentration [15]; artificial bee colony (ABC), which simulates the honey gathering behavior of bees [16], and whale optimization algorithm (WOA), which is inspired by the feeding behavior of whales in the ocean [17].

The Equilibrium Optimizer (EO) is a physics-based meta-heuristic algorithm proposed in 2020 [18]. The algorithm is inspired by control volume mass balance equations in physics, from which the dynamic and equilibrium states of the particles can be estimated. In EO, the important components are the equilibrium pool ($\vec{C}_{eq,pool}$), the exponential term ($\vec{F}$), and the generation rate ($\vec{G}$). During optimization, each search agent randomly updates its concentration (position) with respect to particles drawn from the equilibrium pool, eventually reaching an equilibrium state (the best result). The unique update mechanism of EO gives it fast convergence and good precision. Several studies of the EO algorithm are listed here. Gupta et al. [19] introduced Gaussian mutation and a new exploratory search mechanism on top of EO to improve the diversity of solutions. Abdel-Basset et al. [20] introduced an additional equation and a Gaussian mutation strategy into EO's position update to improve the exploration and exploitation capability of the algorithm. Jia et al. [21] combined EO with thermal exchange optimization (TEO) to improve EO's optimization accuracy. Fan et al. [1] proposed an improved EO algorithm based on opposition-based learning and new update rules, which improves the exploration ability of the algorithm and avoids falling into local optima.

EO has a competitive advantage over other intelligent optimization algorithms. Still, it suffers from slow convergence, low solution accuracy, and a tendency to fall into local optima when solving complex functions and engineering problems. These defects are mainly caused by the following: the low quality of the randomly generated initial particle population, unbalanced exploration and exploitation abilities during iterations, and the loss of population diversity caused by particle aggregation in the later iterations. Therefore, an improved Equilibrium Optimizer (IEO) based on multi-strategy optimization is proposed. It includes three improvements. First, random initialization is replaced by a Tent chaotic map, so the particles are distributed as evenly as possible in the search space and the quality of the initial solution is improved. Second, a dynamic control parameter strategy is proposed to promote the balance between the exploration and exploitation phases of the algorithm through the dynamic change of the parameter. Finally, the Lens Opposition-based Learning (LOBL) strategy is introduced in the late iterations; it prevents the algorithm from falling into local optima by generating new candidate solutions that uncover other valuable search areas. In the simulation experiments, three test sets of different complexity are optimized, namely the 23 benchmark functions and the IEEE CEC2017 and IEEE CEC2019 test sets. When all the experimental results are compared with six meta-heuristic algorithms, the improved algorithm IEO shows significant advantages in convergence accuracy and effectiveness. Among statistical tests, the Friedman test and the Wilcoxon rank sum test confirm that IEO performs well on each test set. In addition, convergence curves and boxplot analysis are presented to show the stability of IEO from different perspectives. Finally, the improved algorithm IEO is applied to six engineering design problems: the pressure vessel problem, the welded beam problem, the tension/compression spring problem, the three-bar truss problem, the speed reducer problem, and the optimal design of an industrial refrigeration system. Experimental results show that IEO has good optimization efficiency in solving practical application problems. In light of the above discussion, the innovations of this paper are as follows.

  • To improve the quality of the initial particle population, Tent chaotic mapping is introduced to increase the diversity of initial solutions through the map’s uniform distribution.
  • A new dynamic control parameter strategy is proposed to facilitate an effective transition from exploration to exploitation, to improve search efficiency and avoid premature convergence.
  • To avoid falling into a local optimal solution due to population aggregation phenomenon at the later stage of iterations, a LOBL strategy is introduced to expand the search space and improve the convergence speed.
  • Experiments are carried out on three function test sets of different difficulty, namely 23 benchmark functions, IEEE CEC2017 and IEEE CEC2019. IEO has the advantages of high solution accuracy, fast convergence speed and strong robustness when optimizing complex functions.
  • The effectiveness and optimization efficiency of IEO are tested on six engineering problems of different complexity.
  • This paper includes two different types of experiments, the first is about the analysis of numerical experimental results, and the second is about the application of engineering problems. Two different types of tests prove that the proposed IEO algorithm has excellent performance.

The remaining structure of this article is as follows: Section 2 introduces the research methods, including the basic Equilibrium Optimizer and the improved Equilibrium Optimizer. In Section 3, three function test suites are selected for simulation experiments, namely the 23 classical functions, IEEE CEC2017, and IEEE CEC2019. In Section 4, the IEO algorithm is applied to six engineering design problems. Finally, Section 5 summarizes the work of this paper.

2. Method

In this section, the Equilibrium Optimizer (EO) and the improved Equilibrium Optimizer (IEO) are described in detail. The improved Equilibrium Optimizer includes three innovations, and each is introduced in turn to show what makes the improved algorithm distinctive.

2.1. Equilibrium Optimizer

The Equilibrium Optimizer (EO) [18] is a new intelligent algorithm proposed by Faramarzi et al., inspired by the mass balance equation in physics. The mass balance equation reflects the physical process of mass entering, leaving, and being generated in a control volume. In EO, the concentration of each particle is updated in a random way until it reaches equilibrium. The EO algorithm is built from three mathematical models: (1) the initialization phase; (2) the equilibrium pool and candidates; (3) updating the concentration. The specific description is as follows:

Step1. Initialization phase

Similar to most meta-heuristic algorithms, EO starts the optimization process by initializing the population. The initial concentrations are constructed by randomly placing the particles in the D-dimensional search space. The initial concentration of each particle is:

(1) $C_i = Lb + rand_i \cdot (Ub - Lb), \quad i = 1, 2, \ldots, n$

where $C_i$ is the initial concentration of the i-th particle, Ub and Lb represent the upper and lower bounds of the search space, $rand_i$ is a random vector in the range [0,1], and n represents the number of particles.

Step2. Equilibrium pool and candidates ($\vec{C}_{eq,pool}$)

In order to improve the global search ability of the algorithm and avoid falling into low-quality local optima, after the initialization phase is completed, the concentrations of the generated particles are evaluated and the four particles with the best fitness values are selected to form the basis of the equilibrium pool.

The equilibrium pool provides candidate solutions during the optimization process. It consists of the four best particles found so far plus their average particle. The mathematical definition is as follows:

(2) $\vec{C}_{eq,ave} = \dfrac{\vec{C}_{eq1} + \vec{C}_{eq2} + \vec{C}_{eq3} + \vec{C}_{eq4}}{4}$

(3) $\vec{C}_{eq,pool} = \left\{ \vec{C}_{eq1}, \vec{C}_{eq2}, \vec{C}_{eq3}, \vec{C}_{eq4}, \vec{C}_{eq,ave} \right\}$

Among them, $\vec{C}_{eq1}, \vec{C}_{eq2}, \vec{C}_{eq3}, \vec{C}_{eq4}$ are the four best particles selected after initialization, $\vec{C}_{eq,ave}$ is the average particle, and $\vec{C}_{eq,pool}$ is the equilibrium pool. In the equilibrium pool, the four best particles contribute to the exploration of the algorithm, while the average particle plays an important role in the exploitation phase.

In the iterative process of the algorithm, each particle selects, with equal probability, one of the five candidate particles in the equilibrium pool, which contributes to the generation of the global optimal solution.

Step3. Updating the concentration

The exponential term $\vec{F}$ is an important quantity for balancing the exploration and exploitation capability of the EO algorithm. It is calculated as follows:

(4) $\vec{F} = a_1 \cdot \mathrm{sign}(\vec{r} - 0.5) \left( e^{-\vec{\lambda} t} - 1 \right)$

where $a_1$ is a constant that controls the exploration ability of the algorithm, $\mathrm{sign}(\vec{r} - 0.5)$ determines the direction of exploration and exploitation, $\vec{r}$ and $\vec{\lambda}$ are random vectors in the interval [0,1], and t is a coefficient updated with the number of iterations, calculated as follows:

(5) $t = \left( 1 - \dfrac{Iter}{Max\_iter} \right)^{\left( a_2 \frac{Iter}{Max\_iter} \right)}$

where Iter is the current iteration number and Max_iter is the maximum number of iterations. $a_2$ is a constant that controls the exploitation ability of the algorithm. According to the experimental data in [18], EO performs best with $a_1 = 2$ and $a_2 = 1$. To further improve the exploitation capability of EO, an equally important quantity is the generation rate ($\vec{G}$), defined as follows:

(6) $\vec{G} = \vec{G}_0 \cdot \vec{F}, \quad \vec{G}_0 = \vec{GCP} \left( \vec{C}_{eq} - \vec{\lambda}\vec{C} \right)$

(7) $\vec{GCP} = \begin{cases} 0.5\, r_1, & r_2 \ge GP \\ 0, & r_2 < GP \end{cases}$

In these formulas, $\vec{GCP}$ is the control parameter vector of the generation rate, $\vec{C}$ is the current particle concentration, $r_1$ and $r_2$ are random numbers in the interval [0,1], and GP is a constant with value 0.5. To sum up, the update formula of each particle in the concentration update phase of EO is:

(8) $\vec{C} = \vec{C}_{eq} + \left( \vec{C} - \vec{C}_{eq} \right) \cdot \vec{F} + \dfrac{\vec{G}}{\vec{\lambda} V} \left( 1 - \vec{F} \right)$

where V is taken as unity (V = 1).

According to the above description, the updating rule of EO is: construct the initial concentration of each particle in the initialization phase; select the four best particles and form an equilibrium pool together with their average particle, which provides candidate solutions for the iterations; then update the concentration of each particle using the two key quantities, the exponential term ($\vec{F}$) and the generation rate ($\vec{G}$).
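To make this update rule concrete, the following NumPy sketch performs one concentration update per Eqs (4)-(8), assuming the standard EO formulation with V = 1; the function and variable names are illustrative, not from the paper.

```python
import numpy as np

def eo_update(C, pool, it, max_it, a1=2.0, a2=1.0, GP=0.5, V=1.0):
    """One EO concentration update for a single particle (Eqs 4-8)."""
    d = C.shape[0]
    t = (1 - it / max_it) ** (a2 * it / max_it)          # Eq (5)
    C_eq = pool[np.random.randint(len(pool))]            # equal-probability pick
    lam = np.random.rand(d)                              # turnover rate vector
    r = np.random.rand(d)
    F = a1 * np.sign(r - 0.5) * (np.exp(-lam * t) - 1)   # Eq (4)
    r1, r2 = np.random.rand(), np.random.rand()
    GCP = 0.5 * r1 if r2 >= GP else 0.0                  # Eq (7)
    G = GCP * (C_eq - lam * C) * F                       # Eq (6): G = G0 * F
    return C_eq + (C - C_eq) * F + (G / (lam * V)) * (1 - F)  # Eq (8)

# Example: update one 30-dimensional particle against a dummy equilibrium pool
pool = [np.random.uniform(-100, 100, 30) for _ in range(5)]
new_C = eo_update(np.random.uniform(-100, 100, 30), pool, it=10, max_it=500)
```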

2.2. The improved Equilibrium Optimizer

In this paper, an improved Equilibrium Optimizer (IEO) is proposed. IEO contains three improved strategies. First, the algorithm is initialized using a Tent chaotic map instead of a randomly generated initial population; the uniform initial population generated by Tent mapping improves the quality of the final solution. Second, a nonlinear dynamic control parameter is introduced to maintain the balance between the exploration and exploitation phases of the algorithm. Third, Lens Opposition-based Learning (LOBL) is used to compute the opposite population in each iteration, which expands the search space of the algorithm and improves the accuracy of the solution.

2.2.1. Tent chaotic sequence initialization.

In the initialization phase, the basic EO algorithm determines the initial solutions by random generation, which cannot guarantee that they are evenly distributed in the search space. Therefore, in order to improve the quality of the initial solutions, the Tent chaotic map is introduced [22]. Its mathematical expression is:

(9) $x_{i+1} = \begin{cases} 2x_i, & 0 \le x_i \le 0.5 \\ 2\left( 1 - x_i \right), & 0.5 < x_i \le 1 \end{cases}$

where $x_i$ is the chaotic variable of the i-th particle, $i \in [1, n]$.

Tent chaotic mapping has a rich dynamic space, which is a nonlinear phenomenon between determinism and randomness, and has neither periodicity nor convergence. The randomness and ergodic characteristics of Tent chaotic mapping enable the search individual to experience all states without repetition. In EO algorithm, Tent chaotic mapping is introduced to disperse the population as much as possible in the initialization phase, so as to maintain the diversity of the population and improve the global search ability of the algorithm.

In IEO, the Tent chaotic map is used to replace the random distribution to increase the diversity of the population and accelerate the convergence rate of the algorithm. The number of particles is set as n and the dimension is set as d. The basic steps of initializing particles by using Tent chaotic map within the search range are as follows:

Step1: In the search range, the number of particles is set as n, and a group of 1×d vectors are randomly generated, which are taken as the position information of the first particle.

Step2: Using Eq (9), the position information of the remaining n-1 particles is calculated to form a chaotic sequence.

Step3: The resulting chaotic sequence is mapped into the search space by Eq (10):

(10) $C_i = Lb + x_i \cdot (Ub - Lb)$
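A minimal sketch of the three initialization steps is given below, assuming the Tent map of Eq (9) and the mapping of Eq (10); the re-seeding guard against the map's fixed point at 0 is a common safeguard added here, not part of the paper.

```python
import numpy as np

def tent_init(n, d, lb, ub):
    """Initialize n particles in [lb, ub]^d via a Tent chaotic sequence."""
    x = np.empty((n, d))
    x[0] = np.random.rand(d)                  # Step 1: random first particle
    for i in range(1, n):                     # Step 2: iterate Eq (9)
        p = x[i - 1]
        x[i] = np.where(p <= 0.5, 2 * p, 2 * (1 - p))
        x[i][x[i] == 0] = np.random.rand()    # re-seed the fixed point at 0
    return lb + x * (ub - lb)                 # Step 3: map via Eq (10)

population = tent_init(30, 30, -100.0, 100.0)   # 30 particles for the Sphere test
```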

In order to verify the rationality of this method, the Tent chaotic sequence is compared with the random initialization of the EO algorithm. The particle number is set to 30, and the Sphere function (from the 23 classical benchmark functions) and the F15 function (from CEC2017) are taken as examples. The details are shown in Figs 1 and 2.

Fig 1. The population distribution of the Sphere function.

https://doi.org/10.1371/journal.pone.0276210.g001

Each point in the figure represents a search individual. As shown in Fig 1, in the search space [–100,100] where Sphere function is located, compared with the population generated by random initialization in Figure (a), the initial population generated by Tent chaotic mapping in Figure (b) is more evenly distributed within the search space. In Fig 2, the F15 function in the CEC2017 test suite is selected for the experiment. Within the search range of [–100,100], it can be clearly seen that the population initialized through the Tent mapping chaotic sequence has a relatively uniform distribution. This improves IEO’s global search capabilities.

2.2.2. Dynamic parameter strategy.

In EO, the exponential term is an important index that balances the exploration and exploitation capability of the algorithm. According to Eq (4), the exponential term is affected by the time parameter t. In addition, according to Eq (5), the expression of t contains the constant $a_2$, so the change of t largely determines the performance of EO. In EO, the time parameter t decreases nonlinearly from 1 to 0 as the number of iterations increases.

According to the literature [1], the parameter t is redefined and expressed in a new way, as follows:

(11) $t = t_{end} + \left( t_{start} - t_{end} \right) \cdot \dfrac{1 + \cos\left( \pi \cdot Iter / Max\_iter \right)}{2}$

(12) $t = \dfrac{1}{2} \left( 1 + \cos\left( \dfrac{\pi \cdot Iter}{Max\_iter} \right) \right)$

Among them, $t_{start} = 1$ and $t_{end} = 0$, Iter indicates the iteration number, and Max_iter represents the maximum number of iterations. The change curves of the time parameter t under the nonlinear dynamic parameter strategy, the original EO algorithm [18], and the linear decline strategy [23] are shown in Fig 3. The dynamic control parameter strategy proposed in this paper decreases slowly in the early iterations, which avoids premature convergence during particle updates and lets the particles search the space thoroughly and globally. In addition, in the late iterations the decrease of t slows down again, so that the particles can search the space precisely and reach the equilibrium state more effectively.
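The sketch below compares the original schedule of Eq (5) with the nonlinear schedule as reconstructed in Eqs (11)-(12); the cosine form is an assumption based on the decay behavior described above.

```python
import numpy as np

def t_original(it, max_it, a2=1.0):
    return (1 - it / max_it) ** (a2 * it / max_it)       # Eq (5)

def t_nonlinear(it, max_it, t_start=1.0, t_end=0.0):
    # Eqs (11)-(12) as reconstructed: slow decay at the start and end of the run
    return t_end + (t_start - t_end) * 0.5 * (1 + np.cos(np.pi * it / max_it))

for it in (0, 125, 250, 375, 500):                       # a few checkpoints
    print(it, round(t_original(it, 500), 3), round(t_nonlinear(it, 500), 3))
```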

2.2.3. Lens Opposition‑based Learning.

The original EO algorithm often exhibits population aggregation in the late iterations, which makes the algorithm fall into local extrema due to the loss of population diversity. In order to strengthen the global search ability of the algorithm and improve the solution accuracy, the Lens Opposition-based Learning (LOBL) strategy is applied to the EO algorithm. LOBL computes the opposite solutions of candidate solutions during optimization; by expanding into the opposite region of the candidate solutions, it enhances the population diversity of the algorithm in the iterative process.

The LOBL strategy is a combination of Opposition-based Learning (OBL) strategy and lens imaging principle [24]. When the distance between the object and the convex lens is set to be more than two focal lengths, the process of particles searching for opposite solutions in the search space can be regarded as the process of lens imaging, as shown in Fig 4.

In Fig 4, a convex lens of focal length r is placed at the origin O (taken as (Ub+Lb)/2 in this paper). An object of height h is placed at distance x from the point O, where x is more than two focal lengths. By the lens imaging principle, an image of height h′ is generated at point x′ on the other side. In other words, the point x takes O as the base point to obtain the corresponding reverse point x′, and the mathematical relationship is:

(13) $\dfrac{(Ub + Lb)/2 - x}{x' - (Ub + Lb)/2} = \dfrac{h}{h'}$

In the above equation, $k = h / h'$, and the scaling factor k represents the scaling relationship between the object and the corresponding real image. Therefore, Eq (13) can be transformed into the formula for the opposite solution x′:

(14) $x' = \dfrac{Ub + Lb}{2} + \dfrac{Ub + Lb}{2k} - \dfrac{x}{k}$

Since the above equation only applies in one-dimensional space, when the optimization problem is multi-dimensional the LOBL strategy takes the form:

(15) $x'_i = \dfrac{Ub_i + Lb_i}{2} + \dfrac{Ub_i + Lb_i}{2k} - \dfrac{x_i}{k}$

where $x'_i$ is the opposite solution generated by the LOBL strategy in the i-th dimension, and $Lb_i$ and $Ub_i$ are the lower and upper bounds of the i-th dimension of the search range.

The Sphere function among the 23 benchmark functions and the F15 function in the IEEE CEC2017 test suite are taken as examples, and the positions of the particles generated by the LOBL strategy are shown in Figs 5 and 6. In the figures, the blue points are the particle positions generated by the original EO algorithm when optimizing the function, and the red points are the positions generated by the LOBL strategy. As shown in Fig 5, the original EO algorithm falls into a local optimum when optimizing the Sphere function, and the particle positions overlap heavily; the magnified detail appears in the lower left corner of Fig 5. Likewise, Fig 6 shows that the original EO algorithm easily falls into local optima, producing a high degree of positional overlap and an increasingly narrow search space. By introducing the LOBL strategy and generating opposite solutions, the search space of each particle is significantly expanded and the algorithm avoids falling into local optima.

Fig 5. The position of Sphere function generated by LOBL.

https://doi.org/10.1371/journal.pone.0276210.g005

In this paper, the LOBL strategy is used to calculate the opposite solution of the candidate solution generated in each iteration. If the fitness value of the opposite solution is better than the candidate solution, the opposite solution is used to replace the candidate solution to become the current optimal solution, and the iterative operation continues. Therefore, the LOBL strategy significantly improves the ability of particles to escape from the extreme region, which effectively avoids the algorithm falling into the local optimal solution and makes the particles reach the balance state better.
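A sketch of this LOBL step follows, covering Eq (15) plus the greedy replacement just described; the scaling factor k = 1000 is an illustrative choice (the paper does not fix its value here), and f is any fitness function to minimize.

```python
import numpy as np

def lobl_opposite(x, lb, ub, k=1000.0):
    """Eq (15): lens opposition-based opposite of x, dimension-wise."""
    mid = (lb + ub) / 2.0
    return mid + mid / k - x / k

def lobl_select(x, f, lb, ub, k=1000.0):
    """Keep the better of the candidate and its LOBL opposite (greedy)."""
    x_opp = np.clip(lobl_opposite(x, lb, ub, k), lb, ub)   # stay inside bounds
    return x_opp if f(x_opp) < f(x) else x

sphere = lambda v: float(np.sum(v ** 2))
x = np.random.uniform(-100, 100, 30)
x = lobl_select(x, sphere, np.full(30, -100.0), np.full(30, 100.0))
```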

In IEO, firstly, the Tent chaotic sequence is used to initialize the particle concentration, so that the initial solution is evenly distributed in the search space as far as possible, and the solving efficiency is improved. Secondly, the new nonlinear dynamic parameter strategy can better balance the exploration and exploitation phases. Finally, the LOBL strategy is used to calculate the opposite solution of the candidate solution generated by each iteration. This strategy avoids falling into the local optimal by increasing the population diversity. These three improved methods can effectively improve the solving speed and accuracy of the algorithm. The pseudocode of the IEO algorithm is illustrated in Algorithm 1. The flow chart of IEO is shown in Fig 7.

Algorithm 1 Improved Equilibrium Optimizer (IEO)

01. Initialize the particles' populations via Eq (10)
02. Assign equilibrium candidates' fitness a large number
03. Assign parameter values a1 = 2; a2 = 1; GP = 0.5
04. while (Iter < Max_iter)
05.   for each particle i
06.     Calculate fitness of the i-th particle Ci
07.     if f(Ci) < f(Ceq1)
08.       Replace Ceq1 with Ci and f(Ceq1) with f(Ci)
09.     elseif f(Ci) > f(Ceq1) and f(Ci) < f(Ceq2)
10.       Replace Ceq2 with Ci and f(Ceq2) with f(Ci)
11.     elseif f(Ci) > f(Ceq1) and f(Ci) > f(Ceq2) and f(Ci) < f(Ceq3)
12.       Replace Ceq3 with Ci and f(Ceq3) with f(Ci)
13.     elseif f(Ci) > f(Ceq1) and f(Ci) > f(Ceq2) and f(Ci) > f(Ceq3) and f(Ci) < f(Ceq4)
14.       Replace Ceq4 with Ci and f(Ceq4) with f(Ci)
15.     end if
16.   end for
17.   Calculate the average particle Ceq_ave via Eq (2)
18.   Construct the equilibrium pool Ceq,pool via Eq (3)
19.   Implement the memory saving
20.   Calculate the value of t via Eq (12)
21.   for each particle i
22.     Choose a random candidate from the equilibrium pool
23.     Calculate the vectors F and G via Eq (4), Eq (6) and Eq (7)
24.     Update the concentration of the particle via Eq (8)
25.     Calculate the opposite position of the particle via Eq (15)
26.     Choose the better one as the next initial solution
27.   end for
28.   Iter = Iter + 1
29. end while
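Putting the pieces together, a compact, self-contained sketch of the IEO loop is given below. It follows Algorithm 1 under the same assumptions as the earlier sketches (cosine form for Eq (12), V = 1, illustrative k = 1000) and omits the memory-saving step for brevity.

```python
import numpy as np

def ieo(f, lb, ub, n=30, d=30, max_it=500, a1=2.0, GP=0.5, k=1000.0):
    lb, ub = np.full(d, float(lb)), np.full(d, float(ub))
    x = np.empty((n, d)); x[0] = np.random.rand(d)          # Tent init, Eqs (9)-(10)
    for i in range(1, n):
        p = x[i - 1]; x[i] = np.where(p <= 0.5, 2 * p, 2 * (1 - p))
    C = lb + x * (ub - lb)
    pool, pool_f = [None] * 4, [np.inf] * 4                 # four best particles
    mid = (lb + ub) / 2.0
    for it in range(max_it):
        for i in range(n):                                  # lines 05-16
            fi = f(C[i])
            for j in range(4):
                if fi < pool_f[j]:
                    pool.insert(j, C[i].copy()); pool_f.insert(j, fi)
                    pool.pop(); pool_f.pop(); break
        cand = pool + [np.mean(pool, axis=0)]               # Eqs (2)-(3)
        t = 0.5 * (1 + np.cos(np.pi * it / max_it))         # Eq (12), reconstructed
        for i in range(n):                                  # lines 21-27
            C_eq = cand[np.random.randint(5)]
            lam, r = np.random.rand(d), np.random.rand(d)
            F = a1 * np.sign(r - 0.5) * (np.exp(-lam * t) - 1)            # Eq (4)
            GCP = 0.5 * np.random.rand() if np.random.rand() >= GP else 0.0
            G = GCP * (C_eq - lam * C[i]) * F                             # Eqs (6)-(7)
            new = np.clip(C_eq + (C[i] - C_eq) * F + G / lam * (1 - F), lb, ub)  # Eq (8)
            opp = np.clip(mid + mid / k - new / k, lb, ub)                # Eq (15)
            C[i] = opp if f(opp) < f(new) else new                        # greedy LOBL
    return pool[0], pool_f[0]

best_x, best_f = ieo(lambda v: float(np.sum(v ** 2)), -100, 100)  # Sphere test
print(best_f)
```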

2.2.4. Computational complexity.

Time complexity is one of the criteria for checking algorithm performance. In this article, big-O notation is used to represent complexity [1]. The computational complexity of the algorithm includes three main parts: initialization, fitness evaluation and population updating mechanism. The complexity calculations for the original EO and the improved IEO are as follows.

The original EO initializes the concentrations of the particles in O(N×D) time, where N is the number of particles and D is the dimension of the problem. The fitness evaluation of the particles requires O(N) time, and the selection of the best particles requires O(N) time. The concentration update mechanism in the original EO requires O(N×D) time. Thus, over Max_iter iterations, the total computational complexity of the original EO is O(N×D×Max_iter).

The initialization of particle concentrations in IEO requires O(N×D) time, where N is the number of particles and D is the dimension of the problem. The fitness evaluation of the particles requires O(N) time, and the selection of the best particles requires O(N) time. The concentration update mechanism in IEO requires O(N×D) time. The Tent chaos strategy takes O(N) time, and the Lens Opposition-based Learning strategy requires O(N) time. Thus, over Max_iter iterations, the total computational complexity of IEO is O(N×D×Max_iter). Consequently, the original EO and the proposed IEO are identical in terms of time complexity.

3. Numerical experiment results and analysis

In this section, the 23 classical benchmark functions and two more complex test suites, IEEE CEC2017 and IEEE CEC2019, are selected for simulation experiments. The CEC2017 and CEC2019 test suites contain different functions, divided into several categories: basic functions, hybrid functions, and composition functions. The CEC2017 suite includes 30 complex composite functions, and the CEC2019 suite includes 10 functions. These functions have different rotation matrices; each matrix is generated from standard normally distributed entries by Gram-Schmidt orthonormalization with condition number c equal to 1 or 2. Therefore, these functions behave consistently in numerical experiments and can reveal the optimization performance of the tested algorithms.

In the experiment, the performance of IEO is analyzed and compared with six well-known meta-heuristic algorithms: Equilibrium Optimizer (EO) [18], salp swarm algorithm (SSA) [25], sine cosine algorithm (SCA) [23], butterfly optimization algorithm (BOA) [26], particle swarm optimization (PSO) [14], and bat algorithm (BA) [27]. Among them, PSO and BA are classical swarm intelligence algorithms; in recent years they have been widely used not only in algorithm optimization [28] but also in hot fields such as neural networks [29] and artificial intelligence [30]. SSA, SCA, and BOA are representative intelligent algorithms that have emerged in recent years; all three have good global optimization ability and are widely used in function optimization problems [31]. The experimental parameter values of all algorithms are shown in Table 1 and are taken from the original paper of each algorithm.

For fair comparison, three standards of mean value (mean), standard deviation (std) and running time (time) are used as the evaluation indexes. The average value intuitively shows the results of function optimization of each algorithm, and the standard deviation reflects the dispersion degree of optimization data. The smaller the standard deviation is, the higher the stability of the algorithm is. And the running time directly shows the convergence speed of the algorithm. The following three experiments describe in detail the optimization of the improved algorithm IEO for different problems.

3.1. 23 classical benchmark functions

In this section, the 23 classical benchmark functions are selected for simulation experiments [17]. These functions are divided into unimodal and multimodal functions. Among them, F1-F7 are unimodal functions with only one global optimum, mainly used to test the optimization accuracy of the algorithm. The multimodal functions F8-F23 have multiple optima and easily trap algorithms in local optima; they are often used to test exploration ability and the ability to avoid local optima. In addition, F14-F23 are fixed-dimension multimodal functions, whose dimensions are lower and fixed, so they have fewer local optima.

In order to ensure the fairness and comparability of the experiment, this paper sets the population size of the seven algorithms to 30, the maximum number of iterations to 500, and the dimension to 30. Each algorithm runs each test function 30 times independently, and the mean value (mean), standard deviation (std), and running time (time) of the experimental data are recorded. The optimization results of all algorithms on the 23 classical functions are shown in Table 2. Note that the bold values in the table represent the best results for each function.

Table 2. The comparison results of different algorithms on 23 benchmark functions with D=30.

https://doi.org/10.1371/journal.pone.0276210.t002

As can be seen from Table 2, IEO improves the accuracy by several orders of magnitude for most benchmark functions. For the unimodal functions F1-F7, the optimization efficiency of IEO is greatly improved, which indicates that IEO has strong exploitation ability and can find the global optimum. For the multimodal functions F8-F23, the average fitness obtained by IEO is the best on ten functions: F9-F12, F15-F19, and F23. This shows that the improved IEO has strong exploration ability and robustness when dealing with multimodal functions. In general, the three improvement strategies enhance the optimization performance of the algorithm and accelerate its convergence. The improved Equilibrium Optimizer combines their complementary advantages to strengthen the global search ability, so that IEO can find accurate solutions.

The convergence speed of each algorithm is reflected by the running time on each function. For the 30-dimensional functions F1-F13, except for F1, F7, and F13, the running time of IEO is relatively short; that is, IEO converges quickly when solving unimodal and multimodal functions. For the low-dimensional multimodal functions F14-F23, which are 2-, 4-, and 6-dimensional, the optimization effect of IEO is not outstanding. In general, the functions F1-F13 are tested in 30 dimensions, where IEO improves the solution accuracy and avoids local optima. However, for the low-dimensional functions F14-F23, IEO tends to fall into local optima, which greatly reduces its convergence speed and solution efficiency.

In order to analyze the optimization results of IEO and the other algorithms more rigorously, the Friedman test is selected as a further evaluation index. The ordering is based on the mean and standard deviation of all algorithms: the smaller the mean, the smaller the rank obtained in the Friedman test. In the experiment, IBM SPSS 24 is used to compute the test, and the ranking results are shown in Table 2. IEO's average rank is 2.18, first among the seven algorithms. In addition, to view the average rank of each algorithm more clearly, Fig 8 is drawn, which shows that IEO ranks better than the other algorithms. The Friedman results show that IEO outperforms the original EO algorithm and PSO: the improved Equilibrium Optimizer not only has strong robustness and stability but also faster convergence and higher accuracy.
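For readers reproducing this ranking, the Friedman test can also be run directly in Python; the sketch below uses scipy.stats.friedmanchisquare on illustrative (synthetic) per-function means for three algorithms, not the paper's actual data.

```python
import numpy as np
from scipy.stats import friedmanchisquare

rng = np.random.default_rng(0)                 # illustrative data, 23 functions
ieo = rng.random(23); eo = ieo + 0.1; pso = ieo + 0.2
stat, p = friedmanchisquare(ieo, eo, pso)
print(f"chi-square = {stat:.3f}, p = {p:.4g}") # small p: rankings differ

# Mean rank per algorithm across functions (lower is better), as in Table 2
scores = np.vstack([ieo, eo, pso])
ranks = scores.argsort(axis=0).argsort(axis=0) + 1
print(ranks.mean(axis=1))                      # here IEO ranks 1 on every function
```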

Fig 8. Mean rank of Friedman test on 23 benchmark functions.

https://doi.org/10.1371/journal.pone.0276210.g008

3.2. IEEE CEC2017 functions

The conditions of the CEC2017 test suite are more complex and challenging than unconstrained functions, so CEC2017 is selected to evaluate the IEO algorithm. The CEC2017 test suite [32] contains four types of problems: F1-F3 are unimodal rotated and shifted functions, usually used to evaluate the convergence rate and accuracy of an algorithm; F4-F10 are multimodal rotated and shifted functions, usually used to assess the ability to avoid local optima; F11-F20 are hybrid functions; and F21-F30 are composition functions. It is difficult for most algorithms to reach the global optimum of the hybrid and composition functions. F2 is excluded from the function list due to its instability. Therefore, this section conducts simulation experiments on the remaining 29 functions; the specific information is shown in Table 3.

In the experiment, the above six meta-heuristic algorithms are selected for comparison. In order to maintain the fairness of experimental data, the population number is uniformly set as 30, the maximum number of iterations is set as 500, and the dimension of the function is 30. All functions are independently run for 30 times and their average value (mean), standard deviation (std) and running time (time) are recorded. Details of the experimental data are shown in Table 4.

Table 4. The comparison results of different algorithms on CEC2017 functions with D=30.

https://doi.org/10.1371/journal.pone.0276210.t004

It is clear from Table 4 that the algorithm IEO is superior to the other methods. Specifically, the average fitness obtained by IEO reaches the minimum on 18 functions (F3-F5, F10-F12, F14-F19, F21-F24, F26, and F30). The original EO algorithm achieves the lowest average fitness on 10 functions (F6-F9, F13, F20, F25, F27-F29). From the CEC2017 data, IEO is more efficient than the original EO algorithm, especially on the hybrid and composition functions, which indicates that IEO can avoid premature convergence and effectively solve function optimization problems. Moreover, the running times of IEO and EO are of the same order of magnitude, which shows that IEO remains efficient while optimizing complex functions. For the complex and challenging CEC2017 set, IEO avoids falling into local optima, so its convergence accuracy and running time are significantly improved.

As shown in Fig 9, in the Friedman statistical test, IEO's mean rank is 1.38, which is much better than that of the original EO algorithm. According to the Friedman results in Fig 9, IEO ranks best, followed by EO, SSA, BA, PSO, SCA, and BOA, which indicates that the improved Equilibrium Optimizer has strong stability and feasibility. These results also indicate that IEO achieves higher convergence accuracy on the CEC2017 test suite. The Tent chaotic map and the LOBL strategy provide strong global search ability, while the dynamic parameter strategy provides strong local search ability, so the improved algorithm can effectively balance global and local search to find the optimal solution.

3.3. IEEE CEC2019 functions

In this section, the more challenging IEEE CEC2019 test suite [33] is selected to evaluate the algorithms. These test functions are minimization problems and are scalable. As shown in Table 5, the functions F1, F2, and F3 are 9-, 16-, and 18-dimensional problems, respectively, with different value ranges, while the functions F4-F10 are all 10-dimensional problems with the same search range [-100,100] but different rotation matrices. In the experiment, IEO is compared with the Equilibrium Optimizer (EO) [18], salp swarm algorithm (SSA) [25], sine cosine algorithm (SCA) [23], butterfly optimization algorithm (BOA) [26], particle swarm optimization (PSO) [14], and bat algorithm (BA) [27]. For all algorithms, the population size is set to 30 and the maximum iteration number to 500; each algorithm is run independently on each function 30 times, and the average value (mean), standard deviation (std), and running time (time) of the fitness values are recorded. The results are shown in Table 6.

Table 6. The comparison results of different algorithms on CEC2019 functions.

https://doi.org/10.1371/journal.pone.0276210.t006

As shown in Table 6, IEO obtains the minimum average fitness on all ten functions F1-F10. In particular, compared with the original EO algorithm, the optimization accuracy of IEO on the CEC2019 functions is improved to varying degrees; in other words, IEO is significantly better than the other algorithms at optimizing the CEC2019 functions. Moreover, the running times of IEO and EO are of the same order of magnitude, with almost no difference between the two.

In addition, IEO has a mean rank of 1.1 in the Friedman statistical test, smaller than the other six comparison algorithms. Fig 10 plots the average rank of each algorithm obtained in the Friedman test: IEO ranks first, while EO, SSA, BA, PSO, SCA, and BOA rank second to seventh. Overall, both the function optimization results and the Friedman test show that IEO performs well on CEC2019. For this suite with different rotation matrices, IEO improves the convergence accuracy, which further shows that IEO performs excellently on complex functions.

In conclusion, the experimental results show that adding the three strategies to the Equilibrium Optimizer improves its robustness and solution accuracy and significantly enhances the performance of EO. Compared with the other algorithms, IEO solves numerical optimization problems with better accuracy and speed.

3.4. Analysis of convergence curve

The convergence speed and accuracy of each algorithm can be seen in detail from its convergence curve. The CEC2017 test suite contains functions of different complexity levels, which best represent the optimization performance of an algorithm. Therefore, this section analyzes the convergence behavior of each algorithm by plotting its convergence curves on the CEC2017 test suite. CEC2017 can be divided into three types of functions: shifted and rotated functions (F1, F3-F10), hybrid functions (F11-F20), and composition functions (F21-F30). Accordingly, three figures are drawn: Fig 11 shows the convergence curves of the shifted and rotated functions (F1, F3-F10), Fig 12 those of the hybrid functions (F11-F20), and Fig 13 those of the composition functions (F21-F30). Figs 11-13 describe the convergence of the IEO, EO, SSA, SCA, BOA, PSO, and BA algorithms for dimension D=30. In each plot, the horizontal axis represents the iteration number (up to the maximum of 500), and the vertical axis represents the logarithm of the fitness value obtained during optimization.

Fig 11. Convergence curve of CEC2017 functions: Shifted and rotated functions (F1, F3-F10).

https://doi.org/10.1371/journal.pone.0276210.g011

Fig 12. Convergence curve of CEC2017 functions: Hybrid functions (F11-F20).

https://doi.org/10.1371/journal.pone.0276210.g012

Fig 13. Convergence curve of CEC2017 functions: Composition functions (F21-F30).

https://doi.org/10.1371/journal.pone.0276210.g013

It is clear from the figures that IEO improves significantly on most functions. Among the other algorithms, EO converges best, followed by SSA, PSO, BA, SCA, and BOA.

For the shifted and rotated functions (F1, F3-F10), the convergence speed and accuracy of IEO are clearly better than the other algorithms except on F1, F5, and F9. The convergence curves show that IEO converges quickly and reaches the minimum fitness value early in the iterations. In Fig 11, the convergence curve of SCA is somewhat atypical on several functions: SCA iterates slowly and easily falls into local optima, so it cannot reach the optimal value within a short iteration period, and its convergence curve fluctuates markedly. This shows that SCA is not effective at solving complex functions.

For the hybrid functions (F11-F20), although the convergence curve of IEO differs little from those of the other algorithms, it is relatively smooth and converges quickly within 100 iterations, showing good convergence speed. The improved algorithm has a strong global search ability for finding better optima.

For the composition functions (F21-F30), the convergence curve of IEO algorithm is better than other algorithms except for the functions F21 and F26. In addition, the step size of IEO algorithm is obviously smaller than that of other algorithms, and it can always converge to the optimal solution at the beginning of iteration, while other algorithms tend to fall into the local optimal solution, resulting in slower convergence speed and lower convergence accuracy.

In general, the convergence curves of the seven algorithms show that IEO converges significantly better when optimizing the CEC2017 functions. When dealing with hybrid and composition functions, IEO has faster convergence speed and higher convergence accuracy; notably, on some functions IEO reaches the optimal value within 100 iterations. The convergence curves of CEC2017 also confirm the results in Table 4.

3.5. Stability analysis

In this section, boxplots are used to show the distribution of the results of 30 independent runs on the CEC2017 test suite. A boxplot describes the stability and optimization performance of the experimental data through five statistics: the maximum, the minimum, the upper quartile, the lower quartile, and the median [34]. A boxplot not only reflects the spread of the data through the height of the box but also shows stability through the number of outliers. Figs 14-16 show the boxplots for dimension 30, population size 30, and a maximum of 500 iterations: Fig 14 covers the shifted and rotated functions (F1, F3-F10), Fig 15 the hybrid functions (F11-F20), and Fig 16 the composition functions (F21-F30). In each plot, the horizontal axis lists the compared algorithms, and the vertical axis shows the range of fitness values obtained on the function.
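A boxplot of this kind can be generated with matplotlib; the sketch below uses illustrative run data (not the paper's results) for three algorithms on a single function.

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(1)                       # 30 independent runs each
runs = {"IEO": rng.normal(100, 5, 30),
        "EO": rng.normal(120, 15, 30),
        "PSO": rng.normal(150, 30, 30)}
plt.boxplot(list(runs.values()), labels=list(runs.keys()))
plt.ylabel("Fitness value over 30 runs")             # narrower box = more stable
plt.show()
```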

Fig 14. The boxplot of CEC2017 functions: Shifted and rotated functions (F1, F3-F10).

https://doi.org/10.1371/journal.pone.0276210.g014

Fig 15. The boxplot of CEC2017 functions: Hybrid functions(F11-F20).

https://doi.org/10.1371/journal.pone.0276210.g015

Fig 16. The boxplot of CEC2017 functions: Composition functions(F21-F30).

https://doi.org/10.1371/journal.pone.0276210.g016

For the unimodal and multimodal functions (F1, F3-F10), except for F6, F7, and F9, the spreads among the boxplot statistics (maximum, minimum, and median) are smallest for IEO, and there are no outliers, which indicates that IEO is highly stable when optimizing unimodal and multimodal functions.

By observing the boxplot of hybrid functions (F11-F20), it can be seen that the data difference of IEO algorithm is small, and the number of outliers is also small, which indicates that IEO has strong stability when processing the hybrid functions. For function F20, although the difference between BOA’s maximum value and minimum value is the smallest, BOA produces a large number of outliers in the optimization process, while IEO does not produce outliers when processing function F20, and its median value is the smallest among seven algorithms.

Compared with other algorithms, the IEO algorithm is the most stable when optimizing the composition functions (F21-F30). Specifically, in the process of running IEO algorithm for 30 times independently, the data difference is the smallest, and the median value is also the smallest among the seven algorithms, which indicates that IEO algorithm has superior optimization performance.

Overall, the boxplots show that IEO performs better. In the initialization phase of IEO, Tent chaotic mapping improves the quality of the initial solutions, and the dynamic control parameter strategy maintains the balance between the exploration and exploitation phases during iterations, so that the particles reach the equilibrium state more effectively. In addition, the LOBL strategy computes the opposite solution of the candidate solution in each iteration, which improves population diversity. Therefore, compared with the original EO algorithm, the stability and optimization performance of the improved IEO algorithm are greatly improved.

3.6. Wilcoxon rank sum test analysis

In order to further analyze the significance of the experimental data from a statistical perspective, the Wilcoxon rank sum test is conducted at the 5% significance level in this section [35]. Through statistical analysis of each pair of sample groups, the resulting p-value and h-value are used to evaluate whether the difference between algorithms is statistically significant. In the experiment, p-value < 0.05 and h = 1 mean that the two groups of data are statistically significantly different [36]. For the three groups of functions with different characteristics (the 23 classical functions, IEEE CEC2017, and IEEE CEC2019), the comparison results of IEO against the other six algorithms are shown in Tables 7-9, respectively.
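The rank sum test itself is a two-sample comparison of the 30 run results; a minimal sketch with scipy.stats.ranksums and illustrative data follows.

```python
import numpy as np
from scipy.stats import ranksums

rng = np.random.default_rng(2)
ieo_runs = rng.normal(1.0, 0.1, 30)    # 30 runs of IEO on one function (illustrative)
eo_runs = rng.normal(1.2, 0.1, 30)     # 30 runs of EO
stat, p = ranksums(ieo_runs, eo_runs)
h = int(p < 0.05)                      # h = 1: significant at the 5% level
print(f"p = {p:.4g}, h = {h}")
```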

Table 7. The results of Wilcoxon rank sum test on 23 benchmark functions with D=30.

https://doi.org/10.1371/journal.pone.0276210.t007

Table 8. The results of Wilcoxon rank sum test on CEC2017 functions with D=30.

https://doi.org/10.1371/journal.pone.0276210.t008

Table 9. The results of Wilcoxon rank sum test on CEC2019 functions.

https://doi.org/10.1371/journal.pone.0276210.t009

As can be seen from Table 7, in the Wilcoxon rank sum test on functions F1-F13, IEO is significantly different from EO; however, IEO's performance is not outstanding on the fixed-dimension multimodal functions F14-F23. In the statistical comparison between IEO and SCA, all test functions are significantly different. Since BOA cannot optimize F17, that comparison result is reported as NaN; compared with BOA, IEO shows significant differences on the other 22 functions. Compared with PSO, except for F9, F11, and F16, the other 20 functions show statistically significant differences. In the Wilcoxon rank sum test between IEO and BA, 17 functions show significant differences, the exceptions being the six fixed-dimension functions F16-F21.

As can be seen from the statistical test results in Table 8, in the Wilcoxon rank sum test of IEO and EO, the two algorithms have significant differences for 13 functions. When IEO performs statistical tests with SSA, SCA and BOA algorithms, all functions except F28 have significant differences. Compared with PSO algorithm, the differences of other functions except F17, F20 and F29 are statistically significant. When comparing the experimental data of IEO and BA, all functions have significant statistical differences. In general, IEO is statistically significantly different from other algorithms, which indicates that IEO algorithm has higher convergence accuracy.

As can be seen from Table 9, except for function F1, IEO shows significant differences on the other 9 functions when compared with SSA, SCA, BOA, and BA. Compared with the classical PSO algorithm, IEO shows statistically significant differences on five functions. In the statistical test between IEO and EO, both algorithms converge to the theoretical optimum on function F1, so the p-value is reported as NaN and the h-value as 0; in addition, the two algorithms show significant differences on four of the other functions. In general, when optimizing the IEEE CEC2019 suite, the improved algorithm IEO is statistically significantly different from the other algorithms.

Wilcoxon rank sum test is performed on three function test sets to verify the significance of IEO algorithm from a statistical point of view, and further shows that IEO has higher convergence accuracy.

4. Engineering design problems

In this section, the efficiency of the IEO algorithm in solving practical application problems is tested on six engineering problems. Solving an engineering optimization problem means giving the optimal design scheme while satisfying multiple constraints. In the experiments, the population size is set to 30 and the maximum number of iterations to 500. IEO is compared with various meta-heuristic algorithms, and the optimal solution of each problem is shown in bold in the corresponding table.

4.1. Pressure vessel design problem

The optimization objective of the pressure vessel design problem [37] is to minimize the total cost of a cylindrical pressure vessel. A schematic diagram of this problem is shown in Fig 17, where four key optimization variables are involved: the thickness of the head (Th), the thickness of the shell (Ts), the inner radius (R), and the length of the cylindrical section without considering the head (L). The mathematical expression of this problem is as follows [38]:

Consider $\vec{x} = [x_1\ x_2\ x_3\ x_4] = [T_s\ T_h\ R\ L]$,

Minimize $f(\vec{x}) = 0.6224 x_1 x_3 x_4 + 1.7781 x_2 x_3^2 + 3.1661 x_1^2 x_4 + 19.84 x_1^2 x_3$,

Subject to $g_1(\vec{x}) = -x_1 + 0.0193 x_3 \le 0$,

$g_2(\vec{x}) = -x_2 + 0.00954 x_3 \le 0$,

$g_3(\vec{x}) = -\pi x_3^2 x_4 - \frac{4}{3}\pi x_3^3 + 1296000 \le 0$,

$g_4(\vec{x}) = x_4 - 240 \le 0$,

Variable range 0≤x1≤99, 0≤x2≤99, 10≤x3≤200, 10≤x4≤200
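When this problem is solved with an unconstrained optimizer such as IEO, the constraints are commonly folded into the objective with a penalty term; the sketch below does this for the formulation above (the penalty weight 1e6 and the test design are illustrative choices, not from the paper).

```python
import numpy as np

def pressure_vessel_cost(x):
    """Pressure vessel objective with a static penalty for violated constraints."""
    x1, x2, x3, x4 = x                                   # Ts, Th, R, L
    cost = (0.6224 * x1 * x3 * x4 + 1.7781 * x2 * x3**2
            + 3.1661 * x1**2 * x4 + 19.84 * x1**2 * x3)
    g = [-x1 + 0.0193 * x3,                              # g1
         -x2 + 0.00954 * x3,                             # g2
         -np.pi * x3**2 * x4 - (4.0 / 3.0) * np.pi * x3**3 + 1296000.0,  # g3
         x4 - 240.0]                                     # g4
    return cost + 1e6 * sum(max(0.0, gi) ** 2 for gi in g)

# A feasible (but not optimal) design; reported optima are near 6059.7
print(pressure_vessel_cost([1.0, 0.5, 50.0, 150.0]))    # about 8357.5
```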

The four key variables of the pressure vessel problem are optimized by the IEO algorithm, and the results are compared with the data of 12 algorithms that have solved this problem. Table 10 shows the lowest costs and the associated variable values obtained by each algorithm. To reflect the optimal cost of each algorithm more clearly, Fig 18 is drawn.

Fig 18. The optimal cost of pressure vessel design problem.

https://doi.org/10.1371/journal.pone.0276210.g018

Table 10. Comparison of result on pressure vessel design problem.

https://doi.org/10.1371/journal.pone.0276210.t010

As can be seen from Table 10, IEO algorithm can obtain the minimum cost when solving the pressure vessel problem, and the values of four related parameters are relatively good, which effectively saves the engineering design cost.

4.2. Welded beam design problem

The objective of the welded beam design problem is to minimize the manufacturing cost of the welded beam [46], as shown in Fig 19. Four design variables are optimized: the thickness of the weld (h), the length of the attached part of the bar (l), the height of the bar (t), and the thickness of the bar (b).

The mathematical model of this problem is expressed as follows:

Consider $\vec{x} = [x_1\ x_2\ x_3\ x_4] = [h\ l\ t\ b]$,

Minimize $f(\vec{x}) = 1.10471 x_1^2 x_2 + 0.04811 x_3 x_4 (14.0 + x_2)$,

Subject to $g_1(\vec{x}) = \tau(\vec{x}) - \tau_{max} \le 0$,

$g_2(\vec{x}) = \sigma(\vec{x}) - \sigma_{max} \le 0$,

$g_3(\vec{x}) = \delta(\vec{x}) - \delta_{max} \le 0$,

$g_4(\vec{x}) = x_1 - x_4 \le 0$,

$g_5(\vec{x}) = P - P_c(\vec{x}) \le 0$,

$g_6(\vec{x}) = 0.125 - x_1 \le 0$,

$g_7(\vec{x}) = 1.10471 x_1^2 + 0.04811 x_3 x_4 (14.0 + x_2) - 5.0 \le 0$,

Variable range 0.1≤x1≤2, 0.1≤x2≤10,

                                0.1≤x3≤10, 0.1≤x4≤2

where $\tau(\vec{x}) = \sqrt{(\tau')^2 + 2\tau'\tau''\frac{x_2}{2R} + (\tau'')^2}$, $\tau' = \dfrac{P}{\sqrt{2} x_1 x_2}$, $\tau'' = \dfrac{MR}{J}$,

                                $M = P\left( L + \dfrac{x_2}{2} \right)$, $R = \sqrt{\dfrac{x_2^2}{4} + \left( \dfrac{x_1 + x_3}{2} \right)^2}$,

                                $J = 2\left\{ \sqrt{2} x_1 x_2 \left[ \dfrac{x_2^2}{4} + \left( \dfrac{x_1 + x_3}{2} \right)^2 \right] \right\}$,

                                $\sigma(\vec{x}) = \dfrac{6PL}{x_4 x_3^2}$, $\delta(\vec{x}) = \dfrac{4PL^3}{E x_3^3 x_4}$,

                                $P_c(\vec{x}) = \dfrac{4.013 E \sqrt{x_3^2 x_4^6 / 36}}{L^2} \left( 1 - \dfrac{x_3}{2L} \sqrt{\dfrac{E}{4G}} \right)$,

                                P = 6000 lb, L = 14 in., δmax = 0.25 in.,

                                E = 30×10⁶ psi, G = 12×10⁶ psi,

                                τmax = 13,600 psi, σmax = 30,000 psi

The welded beam problem is optimized by IEO and the original EO algorithm, and the experimental results are compared with 13 algorithms from the literature. Table 11 shows the results of the different algorithms and the values of the four related parameters. To reflect the optimal cost of IEO more clearly, Fig 20 is drawn.

Fig 20. The optimal cost of the welded beam design problem.

https://doi.org/10.1371/journal.pone.0276210.g020

Table 11. Comparison of results on the welded beam design problem.

https://doi.org/10.1371/journal.pone.0276210.t011

It can be seen from Table 11 that the IEO algorithm achieves the smallest manufacturing cost on the welded beam problem: when the values of the four key parameters are 0.20573, 3.4703, 9.0372, and 0.20573, the manufacturing cost of the welded beam is 1.7249. This shows that the performance of IEO is better than that of the other algorithms; it not only improves the optimization efficiency but also reduces the cost of the welded beam design.

4.3. Tension/Compression spring design problem

The tension/compression spring problem is a classic structural engineering design problem [57] whose purpose is to minimize the weight of a tension/compression spring. The problem involves three core variables: the wire diameter (d), the mean coil diameter (D), and the number of active coils (N) [58]. The details of the spring and the three parameters are shown in Fig 21.

The mathematical model of this problem is as follows:

Consider x = (x1, x2, x3) = (d, D, N)

Minimize f(x) = (x3 + 2)x2x1²

Subject to g1(x) = 1 − x2³x3/(71785x1⁴) ≤ 0

g2(x) = (4x2² − x1x2)/(12566(x2x1³ − x1⁴)) + 1/(5108x1²) − 1 ≤ 0

g3(x) = 1 − 140.45x1/(x2²x3) ≤ 0

g4(x) = (x1 + x2)/1.5 − 1 ≤ 0

Variable range 0.05≤x1≤2, 0.25≤x2≤1.30, 2≤x3≤15
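A minimal sketch of the spring model above (function names are ours):

def spring_weight(x):
    """Spring weight; x = (d, D, N)."""
    d, D, N = x
    return (N + 2.0) * D * d**2

def spring_constraints(x):
    """Values of g1..g4; each must be <= 0."""
    d, D, N = x
    return [
        1.0 - D**3 * N / (71785.0 * d**4),                   # minimum deflection
        (4.0 * D**2 - d * D) / (12566.0 * (D * d**3 - d**4))
            + 1.0 / (5108.0 * d**2) - 1.0,                   # shear stress
        1.0 - 140.45 * d / (D**2 * N),                       # surge frequency
        (d + D) / 1.5 - 1.0,                                 # outer diameter
    ]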

Using IEO and EO, the tension/compression spring problem is optimized and the values of the relevant parameters are obtained. The optimization results are compared with 10 algorithms from the literature; the detailed information is shown in Table 12, and the optimal costs of IEO and the comparison algorithms are shown in Fig 22.

Fig 22. The optimal cost of tension/compression spring design problem.

https://doi.org/10.1371/journal.pone.0276210.g022

Table 12. Comparison of results on the tension/compression spring design problem.

https://doi.org/10.1371/journal.pone.0276210.t012

As can be seen from Table 12, the spring weight obtained by the IEO algorithm is 0.012665, the lowest among the compared algorithms. In general, the IEO algorithm can effectively obtain the optimal solution of this engineering problem together with the best parameter values.

4.4. Three-bar truss design problem

The three-bar truss problem is a common application problem in the civil engineering field [64], and its optimization purpose is to minimize the weight of the three-bar truss. This engineering problem includes two core parameters: the cross-sectional areas A1 and A2 [65]. The details are shown in Fig 23.

The mathematical formula of this problem is expressed as follows:

Consider x = (x1, x2) = (A1, A2)

Minimize f(x) = (2√2 x1 + x2)l

Subject to g1(x) = ((√2 x1 + x2)/(√2 x1² + 2x1x2))P − σ ≤ 0

g2(x) = (x2/(√2 x1² + 2x1x2))P − σ ≤ 0

g3(x) = (1/(x1 + √2 x2))P − σ ≤ 0

Variable range 0≤x1, x2≤1

where l = 100 cm, P = 2 kN/cm², σ = 2 kN/cm²
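A minimal sketch of the truss model above (function names are ours):

import math

l, P, sigma = 100.0, 2.0, 2.0   # cm, kN/cm^2, kN/cm^2

def truss_weight(x):
    """Truss weight; x = (A1, A2)."""
    a1, a2 = x
    return (2.0 * math.sqrt(2.0) * a1 + a2) * l

def truss_constraints(x):
    """Stress constraints g1..g3; each must be <= 0."""
    a1, a2 = x
    denom = math.sqrt(2.0) * a1**2 + 2.0 * a1 * a2
    return [
        (math.sqrt(2.0) * a1 + a2) / denom * P - sigma,
        a2 / denom * P - sigma,
        1.0 / (a1 + math.sqrt(2.0) * a2) * P - sigma,
    ]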

The IEO algorithm is applied to the three-bar truss problem, and the experimental results are compared with eight algorithms from the literature. The results are shown in Table 13. Compared with the other algorithms, IEO clearly obtains the optimal solution of the three-bar truss problem, and Fig 24 shows that IEO also performs well on this practical engineering problem.

Fig 24. The optimal cost of three-bar truss design problem.

https://doi.org/10.1371/journal.pone.0276210.g024

Table 13. Comparison of results on the three-bar truss design problem.

https://doi.org/10.1371/journal.pone.0276210.t013

4.5. Speed reducer design problem

The speed reducer problem is an engineering problem with complex constraints, and its optimization purpose is to minimize the weight of the speed reducer itself. The design variables and constraints are illustrated in Fig 25.

The mathematical model of speed reducer is as follows:

Consider x = (x1, x2, x3, x4, x5, x6, x7) = (b, m, z, l1, l2, d1, d2)

Minimize f(x) = 0.7854x1x2²(3.3333x3² + 14.9334x3 − 43.0934) − 1.508x1(x6² + x7²) + 7.4777(x6³ + x7³) + 0.7854(x4x6² + x5x7²)

Subject to g1(x) = 27/(x1x2²x3) − 1 ≤ 0

g2(x) = 397.5/(x1x2²x3²) − 1 ≤ 0

g3(x) = 1.93x4³/(x2x3x6⁴) − 1 ≤ 0

g4(x) = 1.93x5³/(x2x3x7⁴) − 1 ≤ 0

g5(x) = √((745x4/(x2x3))² + 16.9×10⁶)/(110x6³) − 1 ≤ 0

g6(x) = √((745x5/(x2x3))² + 157.5×10⁶)/(85x7³) − 1 ≤ 0

g7(x) = x2x3/40 − 1 ≤ 0

g8(x) = 5x2/x1 − 1 ≤ 0

g9(x) = x1/(12x2) − 1 ≤ 0

g10(x) = (1.5x6 + 1.9)/x4 − 1 ≤ 0

g11(x) = (1.1x7 + 1.9)/x5 − 1 ≤ 0

Variable range 2.6≤x1≤3.6, 0.7≤x2≤0.8, 17≤x3≤28, 7.3≤x4, x5≤8.3, 2.9≤x6≤3.9, 5≤x7≤5.5
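A minimal sketch of the speed reducer model above, using the standard interpretation of the seven variables as face width, tooth module, number of teeth, shaft lengths between bearings, and shaft diameters (function names are ours):

import math

def reducer_weight(x):
    """Speed reducer weight; x = (b, m, z, l1, l2, d1, d2)."""
    x1, x2, x3, x4, x5, x6, x7 = x
    return (0.7854 * x1 * x2**2 * (3.3333 * x3**2 + 14.9334 * x3 - 43.0934)
            - 1.508 * x1 * (x6**2 + x7**2)
            + 7.4777 * (x6**3 + x7**3)
            + 0.7854 * (x4 * x6**2 + x5 * x7**2))

def reducer_constraints(x):
    """Values of g1..g11; each must be <= 0."""
    x1, x2, x3, x4, x5, x6, x7 = x
    return [
        27.0 / (x1 * x2**2 * x3) - 1.0,            # tooth bending stress
        397.5 / (x1 * x2**2 * x3**2) - 1.0,        # tooth surface stress
        1.93 * x4**3 / (x2 * x3 * x6**4) - 1.0,    # deflection of shaft 1
        1.93 * x5**3 / (x2 * x3 * x7**4) - 1.0,    # deflection of shaft 2
        math.sqrt((745.0 * x4 / (x2 * x3))**2 + 16.9e6) / (110.0 * x6**3) - 1.0,
        math.sqrt((745.0 * x5 / (x2 * x3))**2 + 157.5e6) / (85.0 * x7**3) - 1.0,
        x2 * x3 / 40.0 - 1.0,
        5.0 * x2 / x1 - 1.0,
        x1 / (12.0 * x2) - 1.0,
        (1.5 * x6 + 1.9) / x4 - 1.0,               # space for shaft 1
        (1.1 * x7 + 1.9) / x5 - 1.0,               # space for shaft 2
    ]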

Using the improved algorithm IEO, the speed reducer problem is optimized and the values of the relevant parameters are obtained. The optimization results are compared with seven algorithms from the literature; the details are shown in Table 14. In addition, Fig 26 presents the optimal costs of IEO and the comparison algorithms for the speed reducer design problem.

Fig 26. The optimal cost of speed reducer design problem.

https://doi.org/10.1371/journal.pone.0276210.g026

Table 14. Comparison of results on the speed reducer design problem.

https://doi.org/10.1371/journal.pone.0276210.t014

Compared with the other algorithms, the improved IEO algorithm achieves higher accuracy on the speed reducer engineering problem. In other words, the IEO algorithm finds the best values of the seven design variables to minimize the weight of the speed reducer.

4.6. Optimal design of industrial refrigeration system

At present, energy saving and emission reduction have become a focus in many fields. Industrial refrigeration systems account for a large proportion of energy consumption, so it is necessary to optimize and control them. The optimal design of an industrial refrigeration system is an extremely complex engineering design problem with fourteen design variables and fifteen constraints. The structure of its mathematical model is as follows:

Consider x = (x1, x2, …, x14)

Minimize f(x), the total cost of the refrigeration system

Subject to gi(x) ≤ 0, i = 1, …, 15

Variable range 0.001≤xi≤5, i = 1,…,14

The fourteen key variables of the industrial refrigeration system design are optimized by the IEO algorithm to obtain the optimal values, and the results are compared with other meta-heuristic algorithms. Table 15 shows the lowest cost of each algorithm and the values of the related variables. Fig 27 shows the optimal costs of the five algorithms on this problem.

Fig 27. The optimal cost of the optimal design of the industrial refrigeration system.

https://doi.org/10.1371/journal.pone.0276210.g027

Table 15. Comparison of results on the optimal design of the industrial refrigeration system.

https://doi.org/10.1371/journal.pone.0276210.t015

As can be seen from the optimization results in Table 15, the IEO algorithm still performs well on this highly complex engineering design problem. Compared with the original EO algorithm, the optimal value obtained by IEO is improved by several orders of magnitude. In general, even for an engineering problem with fourteen design variables and fifteen constraints, the IEO algorithm can obtain the optimal value.

In this section, six constrained engineering optimization problems (pressure vessel, welded beam, tension/compression spring, three-bar truss, speed reducer, and the optimal design of an industrial refrigeration system) are solved by the IEO algorithm, and the design schemes given by IEO are compared with those proposed by algorithms in the existing literature. The comparison results show that the design cost obtained by IEO is lower than that of the original EO algorithm and the other comparison algorithms, so IEO can effectively solve engineering optimization problems. At the same time, the good optimization results show that IEO has high optimization efficiency and strong performance in practical applications.

Each engineering optimization problem has different variables and constraints, so the six problems cover different levels of complexity, and the mathematical model of each problem corresponds to an actual engineering problem in real life. Solving these problems with IEO demonstrates the robustness of the algorithm and indicates that IEO can be applied to practical engineering problems in the future.

The above numerical experiments and the six engineering applications show that the IEO algorithm has excellent performance. Specifically, the effectiveness of IEO is measured by three criteria: mean value, standard deviation, and running time. Functions F14-F23 of the 23 classical benchmark functions are low-dimensional (2, 4, and 6 dimensions), and the advantage of IEO on them is not outstanding. However, for the 30-dimensional functions F1-F13, the solution efficiency of IEO is clearly improved, and on the complex CEC2017 and CEC2019 test sets the convergence accuracy of IEO is also improved. In addition, the Friedman test and the Wilcoxon rank sum test confirm the significance of the IEO results from a statistical perspective. In Section 4, IEO is used to solve six engineering problems of different complexity; comparing the optimal costs with those of other algorithms demonstrates the practicability of IEO on engineering problems and further verifies its effectiveness. Overall, the superiority of IEO is analyzed from different perspectives through complex numerical experiments and the application of the algorithm to engineering problems.

5. Conclusion

In this paper, a multi-strategy improved Equilibrium Optimizer (IEO) is proposed to solve numerical optimization and engineering problems. Tent mapping is used to initialize the population and produce initial solutions with rich diversity, which lays a good foundation for the global search of the population in space. A nonlinear time parameter strategy is introduced into the update equation of the algorithm, which dynamically coordinates the exploration and exploitation phases of IEO. The Lens Opposition-based Learning (LOBL) strategy is adopted in the late iterations of the algorithm to improve the diversity of the population and prevent the algorithm from falling into local optima. Simulation experiments are carried out on 23 classical functions, IEEE CEC2017 and IEEE CEC2019. The experimental results show that, compared with six other meta-heuristic algorithms, the improved IEO algorithm has obvious advantages in solution accuracy and convergence speed. In addition, the stability and effectiveness of IEO are demonstrated from different perspectives by the Friedman statistical test and the Wilcoxon rank sum test. Finally, IEO is applied to six engineering design problems: the pressure vessel problem, the welded beam problem, the tension/compression spring problem, the three-bar truss problem, the speed reducer problem, and the optimal design of an industrial refrigeration system. The results show that the improved IEO algorithm has good optimization efficiency when solving practical application problems. In the future, we will try to combine IEO with other meta-heuristic algorithms to further improve its performance, implement IEO on complex real-world application problems such as feature selection and robot path planning, and apply it to multi-objective problems and more complex practical engineering problems.

References

  1. Fan Q., Huang H., Yang K., Zhang S., Yao L., & Xiong Q. (2021). A modified equilibrium optimizer using opposition-based learning and novel update rules. Expert Systems with Applications, 170, 114575.
  2. Heidari A. A., Mirjalili S., Faris H., Aljarah I., Mafarja M., & Chen H. (2019). Harris hawks optimization: Algorithm and applications. Future Generation Computer Systems, 97, 849–872.
  3. Metawa N., Hassan M. K., & Elhoseny M. (2017). Genetic algorithm based model for optimizing bank lending decisions. Expert Systems with Applications, 80, 75–82.
  4. Storn R., & Price K. (1997). Differential evolution–a simple and efficient heuristic for global optimization over continuous spaces. Journal of Global Optimization, 11(4), 341–359.
  5. Baluja S. (1994). Population-based incremental learning: a method for integrating genetic search based function optimization and competitive learning. Carnegie Mellon University, Pittsburgh, PA, Dept. of Computer Science.
  6. Simon D. (2008). Biogeography-based optimization. IEEE Transactions on Evolutionary Computation, 12(6), 702–713.
  7. Kirkpatrick S., Gelatt C. D., & Vecchi M. P. (1983). Optimization by simulated annealing. Science, 220(4598), 671–680. pmid:17813860
  8. Erol O. K., & Eksin I. (2006). A new optimization method: big bang–big crunch. Advances in Engineering Software, 37(2), 106–111.
  9. Rashedi E., Nezamabadi-pour H., & Saryazdi S. (2009). GSA: a gravitational search algorithm. Information Sciences, 179(13), 2232–2248.
  10. Formato R. A. (2007). Central force optimization. Progress in Electromagnetics Research, 77(1), 425–491.
  11. Glover F. (1989). Tabu search—part I. ORSA Journal on Computing, 1(3), 190–206.
  12. Rao R. V., Savsani V. J., & Vakharia D. P. (2012). Teaching–learning-based optimization: an optimization method for continuous non-linear large scale problems. Information Sciences, 183(1), 1–15.
  13. Geem Z. W., Kim J. H., & Loganathan G. V. (2001). A new heuristic optimization algorithm: harmony search. Simulation, 76(2), 60–68.
  14. Kennedy J., & Eberhart R. C. (1997). A discrete binary version of the particle swarm algorithm. In 1997 IEEE International Conference on Systems, Man, and Cybernetics. Computational Cybernetics and Simulation (Vol. 5, pp. 4104–4108). IEEE.
  15. Dorigo M., Maniezzo V., & Colorni A. (1996). Ant system: optimization by a colony of cooperating agents. IEEE Transactions on Systems, Man, and Cybernetics, Part B (Cybernetics), 26(1), 29–41. pmid:18263004
  16. Karaboga D., & Basturk B. (2007). A powerful and efficient algorithm for numerical function optimization: artificial bee colony (ABC) algorithm. Journal of Global Optimization, 39(3), 459–471.
  17. Mirjalili S., & Lewis A. (2016). The whale optimization algorithm. Advances in Engineering Software, 95, 51–67.
  18. Faramarzi A., Heidarinejad M., Stephens B., & Mirjalili S. (2020). Equilibrium optimizer: A novel optimization algorithm. Knowledge-Based Systems, 191, 105190.
  19. Gupta S., Deep K., & Mirjalili S. (2020). An efficient equilibrium optimizer with mutation strategy for numerical optimization. Applied Soft Computing, 96, 106542.
  20. Abdel-Basset M., Mohamed R., Mirjalili S., Chakrabortty R. K., & Ryan M. J. (2021). MOEO-EED: A multi-objective equilibrium optimizer with exploration–exploitation dominance strategy. Knowledge-Based Systems, 214, 106717.
  21. Jia H., & Peng X. (2021). High equilibrium optimizer for global optimization. Journal of Intelligent & Fuzzy Systems, (Preprint), 1–12.
  22. Gandomi A. H., Yang X. S., Talatahari S., & Alavi A. H. (2013). Firefly algorithm with chaos. Communications in Nonlinear Science and Numerical Simulation, 18(1), 89–98.
  23. Mirjalili S. (2016). SCA: a sine cosine algorithm for solving optimization problems. Knowledge-Based Systems, 96, 120–133.
  24. Yu F., Li Y. X., Wei B., Xu X., & Zhao Z. Y. (2014). The application of a novel OBL based on lens imaging principle in PSO. Acta Electronica Sinica, 42(2), 230.
  25. Mirjalili S., Gandomi A. H., Mirjalili S. Z., Saremi S., Faris H., & Mirjalili S. M. (2017). Salp Swarm Algorithm: A bio-inspired optimizer for engineering design problems. Advances in Engineering Software, 114, 163–191.
  26. Arora S., & Singh S. (2019). Butterfly optimization algorithm: a novel approach for global optimization. Soft Computing, 23(3), 715–734.
  27. Yang X. S., & Gandomi A. H. (2012). Bat algorithm: a novel approach for global engineering optimization. Engineering Computations.
  28. Wang D., Tan D., & Liu L. (2018). Particle swarm optimization algorithm: an overview. Soft Computing, 22(2), 387–408.
  29. Garro B. A., & Vázquez R. A. (2015). Designing artificial neural networks using particle swarm optimization algorithms. Computational Intelligence and Neuroscience, 2015.
  30. Bui D. T., Bui Q. T., Nguyen Q. P., Pradhan B., Nampak H., & Trinh P. T. (2017). A hybrid artificial intelligence approach using GIS-based neural-fuzzy inference system and particle swarm optimization for forest fire susceptibility modeling at a tropical area. Agricultural and Forest Meteorology, 233, 32–44.
  31. Zhang H., Cai Z., Ye X., Wang M., Kuang F., Chen H., et al. (2020). A multi-strategy enhanced salp swarm algorithm for global optimization. Engineering with Computers, 1–27.
  32. Awad N. H., Ali M. Z., Liang J. J., Qu B. Y., & Suganthan P. N. (2016). Problem definitions and evaluation criteria for the CEC 2017 special session and competition on single objective real-parameter numerical optimization. Technical report.
  33. Price K., Awad N., Ali M., & Suganthan P. (2018). Problem definitions and evaluation criteria for the 100-digit challenge special session and competition on single objective numerical optimization. Technical report, Nanyang Technological University.
  34. Liu Y., Fang Z., Cheung M. H., Cai W., & Huang J. (2022). An incentive mechanism for sustainable blockchain storage. IEEE/ACM Transactions on Networking, 1–14.
  35. Derrac J., García S., Molina D., & Herrera F. (2011). A practical tutorial on the use of nonparametric statistical tests as a methodology for comparing evolutionary and swarm intelligence algorithms. Swarm and Evolutionary Computation, 1(1), 3–18.
  36. Liu Y., Fang Z., Cheung M. H., Cai W., & Huang J. (2020). Economics of blockchain storage. In ICC 2020 - 2020 IEEE International Conference on Communications (ICC) (pp. 1–6). IEEE.
  37. Neshat M., Mirjalili S., Sergiienko N. Y., Esmaeilzadeh S., Amini E., Heydari A., et al. (2022). Layout optimisation of offshore wave energy converters using a novel multi-swarm cooperative algorithm with backtracking strategy: A case study from coasts of Australia. Energy, 239, 122463.
  38. Van den Bergh F., & Engelbrecht A. P. (2004). A cooperative approach to particle swarm optimization. IEEE Transactions on Evolutionary Computation, 8(3), 225–239.
  39. Fan Q., Chen Z., Zhang W., & Fang X. (2020). ESSAWOA: enhanced whale optimization algorithm integrated with salp swarm algorithm for global optimization. Engineering with Computers, 1–18.
  40. Kamboj V. K., Nandi A., Bhadoria A., & Sehgal S. (2020). An intensify Harris Hawks optimizer for numerical and engineering optimization problems. Applied Soft Computing, 89, 106018.
  41. Hameed I. A., Bye R. T., & Osen O. L. (2016). Grey wolf optimizer (GWO) for automated offshore crane design. In 2016 IEEE Symposium Series on Computational Intelligence (SSCI) (pp. 1–6). IEEE.
  42. Li Y., Zhao Y., & Liu J. (2021). Dimension by dimension dynamic sine cosine algorithm for global optimization problems. Applied Soft Computing, 98, 106933.
  43. Mezura-Montes E., & Coello C. A. C. (2008). An empirical study about the usefulness of evolution strategies to solve constrained optimization problems. International Journal of General Systems, 37(4), 443–473.
  44. Baykasoğlu A., & Ozsoydan F. B. (2015). Adaptive firefly algorithm with chaos for mechanical design optimization problems. Applied Soft Computing, 36, 152–164.
  45. Lu Y., Zhou Y., & Wu X. (2017). A hybrid lightning search algorithm-simplex method for global optimization. Discrete Dynamics in Nature and Society, 2017.
  46. Coello C. A. C. (2000). Use of a self-adaptive penalty approach for engineering optimization problems. Computers in Industry, 41(2), 113–127.
  47. Mirjalili S., Mirjalili S. M., & Lewis A. (2014). Grey wolf optimizer. Advances in Engineering Software, 69, 46–61.
  48. Ragsdell K. M., & Phillips D. T. (1976). Optimal design of a class of welded structures using geometric programming.
  49. Huang F. Z., Wang L., & He Q. (2007). An effective co-evolutionary differential evolution for constrained optimization. Applied Mathematics and Computation, 186(1), 340–356.
  50. Kaveh A., & Talatahari S. (2010). An improved ant colony optimization for constrained engineering design problems. Engineering Computations.
  51. Kaveh A., & Khayatazad M. (2012). A new meta-heuristic method: ray optimization. Computers & Structures, 112, 283–294.
  52. Muthusamy H., Ravindran S., Yaacob S., & Polat K. (2021). An improved elephant herding optimization using sine–cosine mechanism and opposition based learning for global optimization problems. Expert Systems with Applications, 172, 114607.
  53. Guo W., Li W., Zhang Q., Wang L., Wu Q., & Ren H. (2014). Biogeography-based particle swarm optimization with fuzzy elitism and its applications to constrained engineering problems. Engineering Optimization, 46(11), 1465–1484.
  54. He S., Prempain E., & Wu Q. H. (2004). An improved particle swarm optimizer for mechanical design optimization problems. Engineering Optimization, 36(5), 585–605.
  55. Babalik A., Cinar A. C., & Kiran M. S. (2018). A modification of tree-seed algorithm using Deb's rules for constrained optimization. Applied Soft Computing, 63, 289–305.
  56. Liu J. L. (2005). Novel orthogonal simulated annealing with fractional factorial analysis to solve global optimization problems. Engineering Optimization, 37(5), 499–519.
  57. Sadollah A., Bahreininejad A., Eskandar H., & Hamdi M. (2013). Mine blast algorithm: A new population based algorithm for solving constrained engineering optimization problems. Applied Soft Computing, 13(5), 2592–2612.
  58. Wansasueb K., Panmanee S., Panagant N., Pholdee N., Bureerat S., & Yildiz A. R. (2022). Hybridised differential evolution and equilibrium optimiser with learning parameters for mechanical and aircraft wing design. Knowledge-Based Systems, 239, 107955.
  59. Hsieh T. J. (2014). A bacterial gene recombination algorithm for solving constrained optimization problems. Applied Mathematics and Computation, 231, 187–204.
  60. Zahara E., & Kao Y. T. (2009). Hybrid Nelder–Mead simplex search and particle swarm optimization for constrained engineering design problems. Expert Systems with Applications, 36(2), 3880–3886.
  61. Mahdavi M., Fesanghary M., & Damangir E. (2007). An improved harmony search algorithm for solving optimization problems. Applied Mathematics and Computation, 188(2), 1567–1579.
  62. Chen H., Xu Y., Wang M., & Zhao X. (2019). A balanced whale optimization algorithm for constrained engineering design problems. Applied Mathematical Modelling, 71, 45–59.
  63. Arora J. (2004). Introduction to Optimum Design. Elsevier.
  64. Ray T., & Saini P. (2001). Engineering design optimization using a swarm with an intelligent information sharing among individuals. Engineering Optimization, 33(6), 735–748.
  65. Gabr A. R., Roy B., Kaloop M. R., Kumar D., Arisha A., Shiha M., et al. (2021). A novel approach for resilient modulus prediction using extreme learning machine-equilibrium optimiser techniques. International Journal of Pavement Engineering, 1–11.
  66. Gupta S., & Deep K. (2019). A hybrid self-adaptive sine cosine algorithm with opposition based learning. Expert Systems with Applications, 119, 210–230.
  67. Gandomi A. H., Yang X. S., & Alavi A. H. (2013). Cuckoo search algorithm: a metaheuristic approach to solve structural optimization problems. Engineering with Computers, 29(1), 17–35.
  68. Tsai J. F. (2005). Global optimization of nonlinear fractional programming problems in engineering design. Engineering Optimization, 37(4), 399–409.
  69. Akay B., & Karaboga D. (2012). Artificial bee colony algorithm for large-scale problems and engineering design optimization. Journal of Intelligent Manufacturing, 23(4), 1001–1014.