
A novel hybrid PSO based on levy flight and wavelet mutation for global optimization

  • Yong Gao ,

    Contributed equally to this work with: Yong Gao, Hao Zhang

    Roles Methodology, Software, Writing – original draft, Writing – review & editing

    Affiliation Department of Electronic Engineering, Ocean University of China, Qingdao, China

  • Hao Zhang ,

    Contributed equally to this work with: Yong Gao, Hao Zhang

    Roles Conceptualization, Methodology, Supervision, Writing – review & editing

    zhanghao@ouc.edu.cn

    Affiliations Department of Electronic Engineering, Ocean University of China, Qingdao, China, Department of Electrical and Computer Engineering, University of Victoria, Victoria, BC, Canada

  • Yingying Duan ,

    Roles Data curation, Visualization

    ‡YD and HZ also contributed equally to this work.

    Affiliation Department of Electronic Engineering, Ocean University of China, Qingdao, China

  • Huaifeng Zhang

    Roles Data curation, Visualization

    ‡YD and HZ also contributed equally to this work.

    Affiliation Department of Electronic Engineering, Ocean University of China, Qingdao, China

Abstract

Particle swarm optimization (PSO) owes its wide use in many fields to its concise concept and good optimization performance. However, when solving complex multimodal optimization problems, it easily falls into premature convergence, and the rapid loss of population diversity is one of the main causes. This paper therefore combines PSO with wavelet theory and Levy flight theory to propose a new hybrid algorithm called PSOLFWM. It applies the random walk of Levy flight and the mutation operation of wavelet theory to enhance the population diversity and search performance of PSO, so that it searches the solution space more efficiently and obtains higher-quality solutions. A series of classical test functions and 19 optimization algorithms proposed in recent years are used to evaluate the optimization accuracy of the proposed method. The experimental results show that the proposed algorithm is superior to the comparison methods in terms of convergence speed and convergence accuracy. The success of the high-dimensional function test and the dynamic shift performance test further verifies that the proposed algorithm has higher search stability and anti-interference performance than the comparison algorithms. Moreover, both t-Test and Wilcoxon's rank-sum test statistical analyses were carried out. The results show that, at the significance level α = 0.05, the proposed algorithm differs significantly from the other comparison algorithms and performs better.

Introduction

Optimization problems are widely found in signal processing, image processing, automatic control, and many other fields. With the emergence of complex combinatorial optimization problems, traditional optimization methods cannot complete the search in a short time and are prone to the phenomenon of "combinatorial explosion". Swarm intelligence, a computational technique based on the behavioral laws of biological groups, provides ideas for solving such complex distributed problems.

In recent years, researchers have proposed many metaheuristic algorithms based on animal behavior in nature: the particle swarm optimization algorithm (PSO) [1], which simulates the foraging process of a flock of birds; the ant colony optimization algorithm (ACO) [2], which simulates ants finding the shortest path from a food source to the nest; the grey wolf optimizer (GWO) [3], which imitates the social hierarchy and hunting mechanism of grey wolves; the whale optimization algorithm (WOA) [4], inspired by the bubble-net foraging of humpback whales; the ant-lion optimizer (ALO) [5], which imitates the hunting mechanism of antlions in nature; the salp swarm algorithm (SSA) [6], primarily inspired by the swarming behavior of salps as they navigate and feed in the ocean; the artificial gorilla troops optimizer (AGTO) [7], inspired by the social intelligence of gorilla troops; the African vultures optimization algorithm (AVOA) [8], which simulates the foraging and navigation behavior of African vultures; the bald eagle search algorithm (BES) [9], which simulates the hunting strategy and intelligent social behavior of bald eagles when searching for fish; the moth-flame optimizer (MFO) [10], which mimics the navigation method of moths in nature; the sperm swarm optimization algorithm (SSO) [11], inspired by the motility of sperm swarms toward the egg cell; and the dragonfly algorithm (DA) [12], inspired by the static and dynamic swarming behaviors of dragonflies in nature.
Researchers have also drawn inspiration from physical and mathematical phenomena. The sine cosine algorithm (SCA) [13] uses a mathematical model based on sine and cosine functions that fluctuate outward or toward the optimal solution, emphasizing exploration of the search space at different stages of optimization. The arithmetic optimization algorithm (AOA) [14] exploits the distribution behavior of the main arithmetic operators in mathematics (multiplication (M), division (D), subtraction (S), and addition (A)). The gravitational search algorithm (GSA) [15] is based on the law of universal gravitation and mass interaction. The multi-verse optimizer (MVO) [16] draws its main inspiration from the concepts of white holes, black holes, and wormholes in cosmology, using their mathematical models for exploration, exploitation, and local search, respectively. Further examples include the cooperative search algorithm (CSA) [17], inspired by the cooperative behavior of modern corporate teams; the hybrid algorithm AGSA-PS [18], which combines the adaptive gravitational search algorithm (AGSA) with pattern search (PS); and the hybrid algorithm HSSOGSA [19], which combines the gravitational search algorithm (GSA) and sperm swarm optimization (SSO) to exploit the advantages of both.

As mentioned above, almost all metaheuristic algorithms mimic selection and adaptation processes that already exist in nature and model them as two search behaviors: exploitation and exploration. Among the known metaheuristic algorithms, the swarm-intelligence-based PSO algorithm proposed by Kennedy et al. in 1995 has been applied in many fields because of its simple structure and easy implementation. However, when solving high-dimensional complex problems, PSO suffers from premature convergence into local optima and from slow convergence when approaching or entering the optimal region. To overcome these problems, the literature shows that researchers have focused mainly on two improvement directions. The first is to improve the optimization performance by modifying the parameter update methods in the particle swarm update formulas. For example, [20] proposes a time-varying acceleration factor that lets the algorithm focus on learning from Pbest in the early stages of evolution and from Gbest in the later stages. By eliminating the velocity term of traditional PSO, [21] completes the particle position update via Gaussian sampling, making the algorithm structure more concise and easier to operate. To make particle swarms more applicable to dynamic environments, updating the inertia weight has become a main improvement direction: [22] proposes a stochastic inertia weight update method; [23] proposes a linear inertia weight to balance the need for global search in the early stage and local search in the later stage; and [24] treats the success rate as a feedback coefficient and proposes an adaptive inertia weighting technique.

The second direction is to enhance the search capability by judiciously combining PSO with other optimization algorithms to form new hybrid algorithms. For example, [25] combines PSO with the DE algorithm and proposes a hybrid DE and PSO (DEPSO) algorithm to solve the economic scheduling problem. A hybrid of PSO and GSA is proposed in [26], and its performance is verified and analyzed on benchmark functions. In [27], Levy flight is combined into PSO to update the position equations and enhance the global search capability. In [28], a new hybrid particle swarm optimization method is proposed that incorporates mutation operations based on wavelet theory to enhance exploration of the solution space. Similarly, [29] uses a Gaussian distribution to update the PSO position formula without parameter adjustment. In recent years, many other combinations have been used to improve PSO search performance, such as PSO with GA [30], PSO with ACO [31, 32], PSO with SCA [33], PSO with SCA and Levy flight [34], PSO with BFO [35], PSO with WOA [36], PSO with MFB [37], and so on.

In this paper, a PSOLFWM algorithm is proposed to address the early convergence of PSO and to adapt to nonlinear, complex, multidimensional problems. The algorithm combines the Levy distribution and wavelet theory to update the position formula of PSO. In this method, PSO moves in the direction of the improved vector, and the random walk characteristic of Levy flight enlarges the search space of PSO. Wavelet theory is used as a mutation operator to modify the decision vector, increase convergence stability, and improve the quality of the solution space. The proposed algorithm has been tested on well-known mathematical test functions. The results show the accuracy and robustness of the method on complex optimization problems; it is a very effective search algorithm.

The remainder of this paper is organized as follows. Section 2 describes the preliminaries of PSO, wavelet theory, and Levy flight. Section 3 introduces the proposed optimization technique. Section 4 presents the experimental results and discussion. The conclusions are given in Section 5.

Preliminaries

In this section, we briefly introduce the basic framework of PSO, Levy flight, Wavelet Mutation, and some basic concepts.

Particle swarm optimization

Particle swarm optimization (PSO) is a random search algorithm derived from the study of bird predation behavior. It relies on two concepts. The first is exploration, meaning that particles leave the original search track to some extent and search in a new direction, reflecting the ability to explore unknown areas. The second is exploitation, meaning that particles continue to search with a finer step along the original search trajectory, mainly referring to further examination of the area found during exploration. All particles adjust their motion in real time according to the velocity and position updates [38] of Eqs 1 and 2:

(1) Vid(t+1) = w·Vid(t) + c1·r1·(Pid − Xid(t)) + c2·r2·(Pgd − Xid(t))

(2) Xid(t+1) = Xid(t) + Vid(t+1)

where Xid and Vid are the position and velocity components of the ith particle in the dth dimension, r1 and r2 are uniform random numbers in [0, 1], and Pid and Pgd are the local optimal solution of the ith particle and the global optimal solution of the population, respectively. c1 and c2 are the acceleration factors. w is the inertia weight, which determines how much of the particle's current velocity is inherited; it is updated by Eq 3.

(3) w = wmax − (wmax − wmin)·iter / Max_iteration

Subsequently, Clerc et al. [39] expanded the search space and improved solution quality by adding a constriction factor to the velocity update formula. The velocity update formula of the compression factor method is:

(4) Vid(t+1) = λ·[Vid(t) + c1·r1·(Pid − Xid(t)) + c2·r2·(Pgd − Xid(t))]

(5) λ = 2 / |2 − ζ − √(ζ² − 4ζ)|

where λ is the compression factor and ζ = c1 + c2, ζ > 4.
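The constriction-factor update of Eqs 4 and 5 can be sketched in Python. This is a minimal sketch with assumed array shapes and default c1 = c2 = 2.05 (so that ζ > 4), not the authors' implementation:

```python
import numpy as np

def pso_step(X, V, pbest, gbest, w, c1=2.05, c2=2.05):
    """One velocity/position update per Eqs 1-2 with Clerc's
    compression factor (Eqs 4-5). X, V, pbest: (n_particles, dim);
    gbest: (dim,)."""
    n, d = X.shape
    zeta = c1 + c2                      # zeta must exceed 4 for Eq 5
    lam = 2.0 / abs(2.0 - zeta - np.sqrt(zeta ** 2 - 4.0 * zeta))
    r1, r2 = np.random.rand(n, d), np.random.rand(n, d)
    V_new = lam * (w * V + c1 * r1 * (pbest - X) + c2 * r2 * (gbest - X))
    X_new = X + V_new
    return X_new, V_new
```

With c1 = c2 = 2.05, λ evaluates to roughly 0.73, the classic constriction value.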

Wavelet mutation (WM)

Wavelet analysis is a rapidly growing field in applied mathematics and engineering. After years of exploration and research, it has been established as a formal mathematical system with a solid theoretical foundation. The wavelet transform can carry out a multi-scale, detailed analysis of functions or signals through scaling and translation operations, which solves many problems that the Fourier transform cannot. A particle swarm optimization algorithm based on wavelet mutation was proposed in [28], whose mutation function fine-tunes the particles. Assuming that xi(k) is the ith particle at the kth iteration, that xiχ(k) is the χth dimension (1 ≤ χ ≤ d) of that particle, and that ub and lb are the upper and lower bounds of the search space, the update formula is:

(6) x̄iχ(k) = xiχ(k) + σ·(ub − xiχ(k)) if σ > 0; x̄iχ(k) = xiχ(k) + σ·(xiχ(k) − lb) if σ ≤ 0

where x̄iχ(k) represents the value after the mutation and σ is the wavelet function value. When σ is close to 1, the mutated particle value is close to the upper bound ub; when σ is close to −1, it is close to the lower bound lb. Thus σ determines the size of the search space to some extent. The Morlet wavelet function is selected here, calculated as:

(7) σ = (1/√a)·exp(−(φ/a)²/2)·cos(5φ/a)

where φ is a pseudo-random number in the interval [−2.5a, 2.5a]. The quantity a is called the scale parameter and is calculated as:

(8) a = exp(−ln(g)·(1 − k/K)^ζwm + ln(g))

where ζwm is a monotonically increasing shape parameter, K is the maximum number of iterations, and g is the upper limit of the parameter a. The mutation operation based on wavelet theory improves the stability of the algorithm while retaining good convergence ability. In this paper, we set g = 10000 and ζwm = 5.
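The wavelet mutation of Eqs 6-8 can be sketched as follows. This is a minimal Python sketch following the HPSOWM-style formulation of [28]; variable names are illustrative:

```python
import numpy as np

def wavelet_mutation(x, lb, ub, it, max_it, zeta_wm=5.0, g=10000.0):
    """Wavelet mutation (Eqs 6-8): the Morlet wavelet value sigma
    fine-tunes x toward ub when sigma > 0 and toward lb otherwise."""
    # Eq 8: the scale parameter a grows with the iteration count,
    # shrinking the mutation range as the search progresses.
    a = np.exp(-np.log(g) * (1.0 - it / max_it) ** zeta_wm + np.log(g))
    phi = np.random.uniform(-2.5 * a, 2.5 * a)
    # Eq 7: Morlet mother wavelet evaluated at phi / a
    sigma = (1.0 / np.sqrt(a)) * np.exp(-((phi / a) ** 2) / 2.0) * np.cos(5.0 * phi / a)
    # Eq 6: push toward ub for positive sigma, toward lb otherwise
    return x + sigma * (ub - x) if sigma > 0 else x + sigma * (x - lb)
```

Because |σ| ≤ 1/√a ≤ 1 for a ≥ 1, the mutated value always stays within [lb, ub].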

Levy flight

Levy flight is a random walk whose step length follows the Levy distribution. The Levy distribution differs from the normal and Cauchy distributions in that it is heavy-tailed: many short steps are occasionally interleaved with very long jumps. Under the same conditions, Levy flight therefore covers a much larger search area than Brownian motion under a uniform distribution. The Levy distribution can be expressed as:

(9) L(s, γ, μ) = √(γ/(2π)) · exp(−γ/(2(s − μ))) / (s − μ)^(3/2), 0 < μ < s < ∞

where μ and s are the transition parameter and the sample, respectively, and γ is a scale parameter. According to Mantegna's algorithm, the step length of Levy flight can be calculated as:

(10) s = u / |v|^(1/β)

where the variables u and v follow normal distributions,

(11) u ~ N(0, σu²), v ~ N(0, σv²)

and β is a fixed parameter. Here σv = 1, and σu is updated by the following equation:

(12) σu = { Γ(1 + β)·sin(πβ/2) / [Γ((1 + β)/2)·β·2^((β−1)/2)] }^(1/β)

In this paper, we set scale = 0.01, so the step size in the search space is stepsize = 0.01·s.
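Mantegna's step-length computation (Eqs 10-12) can be sketched as a short Python function; the default β = 1.5 and scale = 0.01 follow the settings stated above:

```python
import numpy as np
from math import gamma, sin, pi

def levy_step(dim, beta=1.5, scale=0.01):
    """Levy-distributed step lengths via Mantegna's algorithm
    (Eqs 10-12). Returns a vector of `dim` scaled step lengths."""
    # Eq 12: sigma_u from the fixed parameter beta (sigma_v = 1)
    sigma_u = (gamma(1 + beta) * sin(pi * beta / 2)
               / (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = np.random.normal(0.0, sigma_u, dim)   # Eq 11
    v = np.random.normal(0.0, 1.0, dim)
    s = u / np.abs(v) ** (1 / beta)           # Eq 10
    return scale * s                          # stepsize = 0.01 * s
```

The heavy tail shows up as occasional entries of `s` that are far larger than the typical step, which is what enlarges the explored area relative to Brownian motion.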

Hybrid PSO based on levy flight and wavelet mutation

Among metaheuristic algorithms, particle swarm optimization is a population-based algorithm with fast computation, few parameters, and simple implementation. However, two defects, premature convergence and entrapment in local minima, limit its wide application to a certain extent. In view of this, many researchers have made improvements to the particle swarm algorithm. Because Levy flight allows a more efficient search of the search space, the researchers in [27, 40, 41] improved the PSO algorithm by combining its velocity and position equations with Levy flight.

As shown in Section 2.1, the compression factor PSO expands the search space to some extent. To further increase the quality of the solution space, we combine the compression factor PSO with Levy flight. The velocity and position update formulas are as follows. (13) (14) In the above equations, Levy(Xid(t), dim) is the expression for the combination of Levy flight and PSO, calculated as in [27]. (15) (16) where dim is the dimension of the optimization problem, ⊕ is element-wise multiplication, and w is calculated from Eq 3.

As described in Section 2.2, wavelet mutation improves the algorithm's stability and provides good convergence ability. For example, in [28, 42], researchers fine-tune the particle swarm by combining its position update equation with the mutation function of the wavelet transform. Here, to generate more solutions of different schemes across the whole design space, improve the quality of the solution space, and increase the fine-tuning stability of the algorithm, Levy flight is added to the wavelet mutation. The update formula is as follows. (17) In the above equation, σ is calculated from Eq 7.

The pseudo-code of the proposed PSOLFWM algorithm is given in Algorithm 1, where SearchAgents_no is the population size, Max_iteration is the maximum number of iterations, dim is the dimension of the optimization problem, ub and lb are the upper and lower limits of the particle search space, Vmax, Vmin, wmax, and wmin are the upper and lower limits of the flight speed and of the inertia weight during the particle search, and φ1 and φ2 are scaling factors. After initialization, the inertia weight and the compression factor used in the algorithm are updated by Eqs 3 and 5. Following the trial-counter scheme of [27], the position update method is selected according to the range in which trial falls. A penalty mechanism increases diversity by reinitializing the population when trial reaches 10*limit and then updating the position according to Eq 13.

Algorithm 1 Pseudo code of the PSOLFWM algorithm.

Initialize:

1: Initialize parameters: SearchAgents_no, Max_iteration, dim, ub, lb,

2:  Vmax, Vmin, wmax, wmin, φ1, φ2, trial, limit

3: Initialize populations: Positions, velocity

4: Initialize individual & population optima: Pbest, Gbest

Main Loop:

5: while iter < Max_iteration do

6:  for j = 1 : SearchAgents_no do

7:   Update w with Eq 3

8:   Update λ with Eq 5

9:   if (trial < limit) then

10:    Update Positions(j, :), velocity(j, :) with Eq 13, Eq 14

11:   else if (limit ≤ trial < 10*limit) then

12:    if σ > 0 then

13:     Update Positions(j, :) with Eq 17

14:    else

15:     Update Positions(j, :) with Eq 15

16:    end if

17:   else

18:    ReInitial velocity(j, :) and Positions(j, :)

19:    ReUpdate velocity(j, :) and Positions(j, :) with Eq 13, Eq 2

20:   end if

21:   Boundary processing

22:   if (fobj(Positions(j, :)) < Pbest(j)) then

23:    trial = 0

24:    p(j, :) = Positions(j, :)

25:    Pbest(j) = fobj(Positions(j, :))

26:   else

27:    trial = trial + 1

28:   end if

29:   Update global optima(Gbest)

30:  end for

31:  Record the Gbest solution

32:  iter = iter + 1

33: end while

34: Output the global best (Gbest) solution

As the algorithm runs repeatedly, the particles continuously update their velocities and positions. The individual optimum of each particle is selected by comparing the fitness value fobj(Positions(j, :)) with the individual optimum Pbest(j). Subsequently, the global optimum Gbest is updated by comparing it with the current individual optima Pbest. The program stops when the maximum number of iterations is reached or when the set optimal fitness value is obtained; the global optimum is then output and the fitness-value convergence curve is plotted as required.
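The trial-based branching of Algorithm 1 can be sketched as control flow. The callables in `updates` are hypothetical stand-ins for the paper's Eqs 13-17 (which are not reproduced here), so this is a structural sketch rather than the authors' code:

```python
import numpy as np

def select_update(x, v, trial, limit, sigma, updates, lb, ub):
    """Branching of Algorithm 1: pick the position update for one
    particle according to the stagnation counter `trial`."""
    if trial < limit:
        x, v = updates["levy_pso"](x, v)          # Eqs 13 and 14
    elif trial < 10 * limit:
        if sigma > 0:
            x = updates["wavelet_levy"](x)        # Eq 17
        else:
            x = updates["levy"](x)                # Eq 15
    else:
        # Penalty mechanism: reinitialize to restore diversity,
        # then update the position again.
        x = np.random.uniform(lb, ub, size=np.shape(x))
        v = np.zeros_like(v)
        x, v = updates["levy_pso"](x, v)          # Eq 13
    return np.clip(x, lb, ub), v                  # boundary processing
```

The `trial` counter itself is reset to 0 whenever a particle improves its Pbest and incremented otherwise, as in lines 22-28 of Algorithm 1.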

Experimental results and discussion

In this section, the performance of the proposed PSOLFWM algorithm is evaluated. For comparison, two types of algorithms are reproduced. The first type consists of improved algorithms of the particle swarm family: the particle swarm optimization algorithm (PSO), the standard particle swarm algorithm (SPSO), the hybrid PSO with mutation operator (HPSOM), the hybrid PSO with wavelet mutation (HPSOWM), the bare-bones particle swarm algorithm (BBPSO), the Levy flight particle swarm algorithm (PSOLF), the Levy flight sine cosine particle swarm algorithm (PSOSCALF), and the grey wolf particle swarm algorithm (PSOGWO). The second type consists of other meta-heuristic optimization algorithms: the grey wolf optimizer (GWO), the differential evolution algorithm (DE), the sine cosine algorithm (SCA), the whale optimization algorithm (WOA), the ant-lion optimizer (ALO), the salp swarm algorithm (SSA), the dragonfly algorithm (DA), the moth-flame optimizer (MFO), the bald eagle search algorithm (BES), the cooperative search algorithm (CSA), and the sperm swarm optimization algorithm (SSO). Given the stochastic nature of these optimization algorithms, their performance is evaluated using a set of benchmark functions with different characteristics. All algorithms were run on Windows 10 using an Intel Core i5, 3.3 GHz, with 16 GB RAM.

Parameter settings

Table 1 lists the empirically obtained control parameters affecting the performance of the PSOLFWM method. The control parameters of the particle swarm family algorithms and of the other meta-heuristic optimization algorithms used for comparison are shown in Table 2.

Benchmark functions

In this paper, 21 benchmark functions in three categories, given in S1 Appendix, are used to evaluate the proposed PSOLFWM algorithm. The first category comprises unimodal functions with a single optimal solution (F1-F7), the second comprises multimodal functions with multiple local optima (F8-F13), and the third comprises fixed-dimension multimodal functions (F14-F21). The mathematical expression (column 1), dimension (column 2), search space and initialization range (column 3), and global optimal solution (column 4) of each function are given in S1 Appendix.
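For illustration, F1 in benchmark suites of this kind is commonly the Sphere function; the paper's exact definitions are given in S1 Appendix, so the following is an assumed example rather than the paper's own listing:

```python
import numpy as np

def sphere(x):
    """Sphere function: unimodal and separable, with global minimum
    f(0, ..., 0) = 0. A common choice for F1 in suites like this one."""
    x = np.asarray(x, dtype=float)
    return float(np.sum(x ** 2))
```

An optimizer is judged by how close its best-found `sphere(x)` value gets to 0 within the iteration budget.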

Comparison of algorithm accuracy

Solution accuracy is an important index for measuring an algorithm. In this subsection, the convergence accuracy of the proposed algorithm is measured on the 21 benchmark test functions. To better verify its superiority, the optimization performance of the proposed algorithm is compared with that of eight particle swarm optimization algorithms and eleven meta-heuristic optimization algorithms on the same test functions.

Comparison of particle swarm family algorithms.

Since the particle swarm optimization algorithm was proposed, it has been improved by many researchers, eventually forming a family of particle swarm optimization algorithms. This subsection compares the proposed algorithm with eight reproduced particle swarm family algorithms: PSO [38], SPSO [45], HPSOM [46], HPSOWM [28], BBPSO [21], PSOLF [41], PSOGWO [47], and PSOSCALF [34].

The optimization tests are carried out on the 21 benchmark functions with a population size of 50, a maximum of 200 iterations per run, and 30 independent runs. S2-S4 Appendices give the indicator data collected during the optimization process for each algorithm, including the mean, standard deviation, median, optimal value, worst value, average running time, and algorithm rank. Algorithms are ranked in ascending order of mean and then standard deviation. The best value among the comparison algorithms and the algorithms ranked 1 are shown in bold, and the shortest average running time is italicized, bolded, and underlined.

As shown in S2 Appendix, for the unimodal benchmark functions F1-F7, the proposed PSOLFWM algorithm is able to find the optimal values quickly. For the five functions F1, F2, F3, F4, and F7 it achieves the best mean, standard deviation, median, optimal value, and worst value. For F5 it achieves the best mean, standard deviation, median, and worst value, though not the best optimal value; the best standard deviation shows that it has better search stability than HPSOWM. The PSOLFWM algorithm therefore ranks first overall and is more robust than the other algorithms. For function F6, PSOLFWM's search ability is slightly weaker than HPSOWM's, and the algorithm ranks second. From the average running time over 30 runs, PSO is the fastest, BBPSO takes the longest, and PSOLFWM's average running time lies in the middle, within the acceptable range. The convergence curves of the fitness values in Fig 1 show that PSOLFWM has the best convergence speed and accuracy on these functions. The search process for F6 is very smooth: the global search works well in the early stage, but in the later stage it falls into a local minimum very close to 0, so the final optimal value is slightly worse than that of HPSOWM.

Fig 1. Convergence curve of unimodal functions compared with the PSO family algorithms.

(a) F1, (b) F2, (c) F3, (d) F4, (e) F5, (f) F6, and (g) F7.

https://doi.org/10.1371/journal.pone.0279572.g001

In the multimodal function tests of F8-F13, although multimodal functions have multiple local optima, the proposed algorithm can still complete the optimization. From S3 Appendix, the BBPSO algorithm performs well in optimizing the F8 and F11 functions, ranking first, and the HPSOWM algorithm performs well on F12 and F13 and ranks first. The PSOLF algorithm and the proposed PSOLFWM algorithm achieve the best values of all five performance metrics when optimizing the multimodal functions F9, F10, and F11, and rank first. The proposed algorithm achieves suboptimal values on the F8, F12, and F13 functions, ranking second. As seen from the convergence curves in Fig 2, the proposed algorithm is slightly weaker on F8, F12, and F13; however, it still ranks second in standard deviation and other indicators, and its search process is relatively stable. Therefore, when optimizing multimodal functions, the convergence speed and accuracy of the proposed algorithm are better than those of the other algorithms to a certain extent, and its average running time is at the average level.

Fig 2. Convergence curve of multimodal functions compared with the PSO family algorithms.

(a) F8, (b) F9, (c) F10, (d) F11, (e) F12, and (f) F13.

https://doi.org/10.1371/journal.pone.0279572.g002

As shown in Fig 3, for the F14-F21 test functions, the proposed algorithm completes the global optimization within 200 iterations. In the ranking that uses standard deviation as the criterion, however, only F14 ranks first, and F15 and F21 rank second; the standard deviation of the proposed algorithm is unremarkable on the F16-F20 test functions. A careful look at S4 Appendix shows that the proposed algorithm obtains the same optimal values as the other algorithms, with faster convergence speed and better convergence accuracy. Regarding average running time, the PSO algorithm is the fastest to complete 200 iterations, which also reflects, to some extent, its low complexity.

Fig 3. Convergence curve of the other functions compared with the PSO family algorithms.

(a) F14, (b) F15, (c) F16, (d) F17, (e) F18, (f) F19, (g) F20, and (h) F21.

https://doi.org/10.1371/journal.pone.0279572.g003

Comparison with other meta-heuristics.

After the comparison with the particle swarm family, to further verify the effectiveness of the algorithm, this paper also compares eleven meta-heuristic algorithms: GWO [3], DE [48], SCA [13], WOA [4], ALO [5], SSA [6], DA [12], MFO [10], BES [9], CSA [17], and SSO [49]. These algorithms differ from particle swarm optimization and have also been popular swarm intelligence algorithms among researchers in recent years.

The comparison method is the same as for the particle swarm family. For all algorithms, the maximum number of iterations is set to 200, the number of runs to 30, and the population size to 50. The initial values in the search space are randomly generated from the initial population. S5-S7 Appendices give the performance index results of the proposed algorithm and the eleven meta-heuristic algorithms on the 21 benchmark functions. As in S2-S4 Appendices, the best solution in each table is shown in bold, and the shortest average running time is shown in italics with bold underline.

S5 Appendix shows the optimization results of the above algorithms on the unimodal functions F1-F7. From the table, the proposed algorithm performs exceptionally well in terms of mean, standard deviation, median, best, and worst values for F1, F2, F3, F4, and F5, ranking first. With standard deviation as the ranking criterion, the BES algorithm ranks first on the functions F1, F2, F3, F4, F6, and F7. However, comparing the average running times of PSOLFWM and BES shows that the proposed algorithm obtains the optimal value in the shortest time and converges earlier. The proposed PSOLFWM is weaker than the BES and CSA algorithms at obtaining the optimal value on the F6 and F7 functions; yet for F7 the gap to the BES and SSA algorithms is on the order of 1e-5, and its convergence toward zero is acceptable. Fig 4 shows that, compared with the other meta-heuristic algorithms, the convergence curves of the proposed algorithm on F1-F7 converge quickly and efficiently to the function extremum 0.

Fig 4. Convergence curve of unimodal functions compared with the meta-heuristic algorithms.

(a) F1, (b) F2, (c) F3, (d) F4, (e) F5, (f) F6, and (g) F7.

https://doi.org/10.1371/journal.pone.0279572.g004

Similarly, for the multimodal functions, the proposed algorithm still maintains optimal solutions compared with the other meta-heuristic algorithms. S6 Appendix shows that the proposed algorithm has excellent optimization ability on F9-F13 (except F12), ranking first. For function F8, the WOA algorithm converges to the global optimum earliest and ranks first, with the PSOLFWM algorithm second. PSOLFWM, BES, CSA, and SSO achieve the same global optimal values in all metrics on F9 and F11 and share first rank; however, in terms of average running time, PSOLFWM takes less time and converges to the global optimum faster. In the global search on F12, PSOLFWM converges to within an order of magnitude of 1e-4 of the global optimum 0 within 200 iterations, although it is weaker than the BES and CSA algorithms and ranks third. The WOA algorithm takes the shortest time in the search from F8 to F13, indicating the simplicity of its structure but also its tendency toward premature convergence. According to Fig 5, the proposed algorithm exhibits the best convergence speed and accuracy on F8-F13.

Fig 5. Convergence curve of multimodal functions compared with the meta-heuristic algorithms.

(a) F8, (b) F9, (c) F10, (d) F11, (e) F12, and (f) F13.

https://doi.org/10.1371/journal.pone.0279572.g005

Fig 6 gives the convergence curves for the test functions F14-F21. As can be seen from the figure, all optimization algorithms complete the search for the global optimum within a finite number of iterations. Further analysis of the search values given in S7 Appendix shows that the proposed algorithm performs better on the test functions F14, F15, and F21. Sorted by the standard deviation at the global optimum, the DE algorithm ranks first on F14, F16, F17, F18, and F20; MFO ranks first on F17 and F19; BES ranks first on F17; and CSA ranks first on F17 and F21. Although the proposed algorithm ranks lower on F16, F17, F18, F19, and F20, its results differ little from the global optimum, and it obtains globally optimal or suboptimal values in all cases. The lower ranking reflects the choice of our ranking rules and does not precisely represent the relative superiority of the algorithms. As shown in Fig 6, the proposed algorithm searches steadily throughout the process. For the average running time, the table shows that the WOA algorithm takes the least time, i.e., it completes the 200 search iterations fastest.

Fig 6. Convergence curve of others functions compared with the meta-heuristic algorithms.

(a) F14, (b) F15, (c) F16, (d) F17, (e) F18, (f) F19, (g) F20, and (h) F21.

https://doi.org/10.1371/journal.pone.0279572.g006

Statistical analysis tests

The t-Test [43] is a common statistical method used to distinguish the relative performance of two algorithms. If the resulting t value is positive, the optimization performance of algorithm x is better than that of algorithm y; conversely, a negative t value indicates that algorithm y performs better than algorithm x.

t = (x̄ − ȳ) / √(sx²/n + sy²/m) (18)

In Eq 18, x̄ and ȳ denote the sample means of algorithm x and algorithm y, respectively, sx and sy are the sample standard deviations of algorithm x and algorithm y, and n and m denote the sample sizes. It can be seen from Tables 3 and 4 that most of the t-values of the proposed PSOLFWM algorithm, whether compared with the particle swarm family algorithms or the meta-heuristic algorithms, are greater than zero, indicating good optimization performance. A NaN entry indicates that both compared algorithms reach the optimal value within a finite number of iterations, so the t-Test is not applicable.
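The statistic of Eq 18 can be computed directly. The sketch below (the helper name `welch_t` is ours, not from the paper) evaluates the unequal-variance two-sample t statistic from two lists of recorded optimization results, returning NaN when both samples are constant, which corresponds to the NaN entries in Tables 3 and 4:

```python
import math
from statistics import mean, variance  # variance() is the sample variance (ddof = 1)

def welch_t(x, y):
    """Two-sample t statistic with unequal variances (Eq 18).

    The sign of t indicates which sample mean is larger; the paper
    interprets a positive t as algorithm x outperforming algorithm y.
    Returns NaN when both samples are constant (test not applicable).
    """
    n, m = len(x), len(y)
    denom = math.sqrt(variance(x) / n + variance(y) / m)
    if denom == 0.0:  # both algorithms reached the same value every run
        return float("nan")
    return (mean(x) - mean(y)) / denom
```

For example, `welch_t([1.0, 2.0, 3.0], [1.0, 1.0, 1.0])` evaluates to √3, since the second sample contributes no variance.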

Table 3. Comparison of PSOLFWM with PSO families algorithms using t-Test.

https://doi.org/10.1371/journal.pone.0279572.t003

Table 4. Comparison of PSOLFWM with other meta-heuristic algorithms using t-Test.

https://doi.org/10.1371/journal.pone.0279572.t004

To further determine whether the PSOLFWM results obtained in the previous sections are statistically significant compared with those of the other optimization algorithms, Wilcoxon's rank sum test is added [50]. For this purpose, we performed a nonparametric test on the results of 30 optimization runs at the significance level α = 0.05. S8 Appendix reports the p, h, and z values of Wilcoxon's rank sum test for PSOLFWM against the other optimization algorithms. As with the t-Test, the NaN entries in the table indicate that the test is not applicable to that pair of optimization algorithms. Analysis of the results shows that for F9, F10, and F11 among the multimodal functions, NaN terms appear in the statistical results, meaning that all the algorithms found the optimal value during their runs, so the statistical test is unavailable. From the p-values in S8 Appendix, for the unimodal functions F1-F7, PSOLFWM outperforms the other comparison algorithms (except the PSOLF algorithm on F7). For the multimodal functions F8-F13, PSOLFWM outperforms the other comparison algorithms (except the WOA algorithm). This also means that there is a significant difference between the performance of PSOLFWM and that of the other comparative optimization algorithms. For F14, F20, and F21 among the fixed-dimensional multimodal test functions, the proposed method does not show a significant performance improvement over the other algorithms. Over all tested functions F1-F21, the performance of PSOLFWM is significantly better than that of the PSO, DE, and SSO algorithms. Therefore, the statistical analysis shows that the results of the proposed algorithm are significantly better than those of the comparative optimization algorithms cited in this paper.
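Readers wishing to reproduce the (p, h, z) columns of S8 Appendix can use a standard large-sample normal approximation of the rank sum test. The sketch below is our own minimal implementation (it omits the tie correction to the variance that full statistical packages apply, and the function name is ours); it flags the degenerate all-identical case as NaN, matching the NaN entries described above:

```python
import math

def rank_sum_test(x, y, alpha=0.05):
    """Wilcoxon rank sum test via the normal approximation.

    Returns (z, p, h): the z statistic, the two-sided p-value, and
    h = 1 if the null hypothesis is rejected at the given alpha.
    If every recorded value is identical, the test is not applicable
    and (NaN, NaN, 0) is returned.
    """
    n, m = len(x), len(y)
    combined = list(x) + list(y)
    if len(set(combined)) == 1:  # all runs hit the same value
        return float("nan"), float("nan"), 0
    order = sorted(range(n + m), key=lambda i: combined[i])
    ranks = [0.0] * (n + m)
    i = 0
    while i < n + m:  # assign average ranks to tied groups
        j = i
        while j + 1 < n + m and combined[order[j + 1]] == combined[order[i]]:
            j += 1
        avg = (i + j) / 2.0 + 1.0  # mean of 1-based ranks i+1 .. j+1
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    w = sum(ranks[:n])                       # rank sum of sample x
    mu = n * (n + m + 1) / 2.0               # mean of w under H0
    sigma = math.sqrt(n * m * (n + m + 1) / 12.0)
    z = (w - mu) / sigma
    p = math.erfc(abs(z) / math.sqrt(2.0))   # two-sided normal p-value
    h = 1 if p < alpha else 0
    return z, p, h
```

For instance, two clearly separated samples such as `[10, 20, 30, 40]` versus `[1, 2, 3, 4]` yield h = 1 at α = 0.05.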

Comparison of high-dimensional function tests

In the face of large and complex problems, traditional intelligent optimization algorithms often need to traverse the whole search space, leading to combinatorial explosion and failure to complete the search within the specified time. Many engineering optimization problems, such as the design and tuning of complex controllers, have large search spaces and strict requirements on computation speed and convergence. Therefore, the ability to optimize high-dimensional functions must be considered when designing an optimization algorithm. To verify the performance of the proposed algorithm on high-dimensional functions, six typical benchmark functions from the three categories, F1, F3, F7, F9, F12, and F15, with 200, 500, 800, 1000, 2000, and 3000 dimensions, are selected in this subsection. The test performance metrics are the average number of iterations, the average convergence time, and the average convergence rate within the convergence threshold, defined so that the obtained optimized value satisfies Eqs 19 and 20.

Property 1. |Fbest − F*| ≤ Vaccept (19)

Property 2. convergence rate = Ns / N (20)

where F* is the global optimum, Fbest is the optimum obtained in a single iteration, Vaccept is the designed convergence threshold, and Ns is the number of successful convergences in N runs. The algorithm is designed to exit the current iteration automatically once it reaches the convergence threshold. The maximum number of iterations is set to 500 and the number of test runs to 50. The selected high-dimensional functions and their convergence thresholds are shown in Table 5.
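Under these definitions, the success condition of Eq 19 and the average convergence rate of Eq 20 amount to a simple count over independent runs. A minimal sketch (the helper name and argument names are ours):

```python
def convergence_stats(run_best_values, f_star, v_accept):
    """Success count and average convergence rate (Eqs 19 and 20).

    run_best_values: best objective value found in each of N independent runs.
    A run succeeds when |F_best - F*| <= V_accept (Eq 19); the average
    convergence rate is Ns / N (Eq 20).
    """
    n_s = sum(1 for fb in run_best_values if abs(fb - f_star) <= v_accept)
    return n_s, n_s / len(run_best_values)
```

For example, with a threshold of 1e-8 around a global optimum of 0, the runs `[1e-12, 0.5, 1e-9, 2.0]` give Ns = 2 and a convergence rate of 0.5.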

As can be seen from the data in Table 6, the six selected high-dimensional test functions all converge to an acceptable range with a modest number of iterations. As seen in Fig 7, the average number of iterations of the proposed PSOLFWM algorithm does not increase significantly with the dimensionality. Even for the ultra-high-dimensional case with dim = 3000, the search completes within almost the same number of iterations, indicating that the proposed algorithm has a clear advantage on high-dimensional functions, with high per-iteration search efficiency. As the number of dimensions increases, the optimization runtime increases accordingly, because higher dimensionality raises the computational complexity and thus the search time; the time cost nevertheless remains acceptable. These experiments verify that PSOLFWM has a strong search capability for high-dimensional systems.

Fig 7. Relationship curve between iteration times and function dimension.

https://doi.org/10.1371/journal.pone.0279572.g007

Table 6. High-dimensional function test results (50 tests).

https://doi.org/10.1371/journal.pone.0279572.t006

To further demonstrate the excellent high-dimensional optimization performance of the PSOLFWM algorithm, three optimization algorithms, PSO, PSOLF, and HPSOWM, are selected as comparison algorithms and tested with dimensions of 500, 1500, and 3000. The test functions are the same six classical benchmarks: F1, F3, F7, F9, F12, and F15. Table 7 gives the performance indexes of the four optimization algorithms on the high-dimensional test functions, with the algorithms graded by the best standard deviation. From the table, PSO, PSOLF, HPSOWM, and PSOLFWM can all obtain the optimal value in the high-dimensional tests of F1, F3, F7, and F9. Among them, the PSOLFWM algorithm ranks first, and its ranking does not change as the function dimension increases. In the high-dimensional tests of F12 and F15, the optimization performance of the PSOLF algorithm degrades with increasing dimension, while PSOLFWM still obtains the global optimum quickly and is graded first. These experiments verify that PSOLFWM has a strong search capability for high-dimensional systems.

Table 7. The comparison test results of PSOLFWM and PSO, PSOLF, HPSOWM algorithms for high-dimensional functions (50 tests).

https://doi.org/10.1371/journal.pone.0279572.t007

Comparison of dynamic shift performance tests

To avoid the situation where the standard test functions place the global optimum at the same fixed coordinates, shift functions are chosen to test the dynamic optimization performance of the algorithm. The shift function is constructed according to the formula in the literature [51], where F(x) is the newly synthesized function, f(x) is the original function, Oold is the global optimum of the original function, and Onew is the global optimum of the newly synthesized function. The following six dynamic functions are selected for the experiments in this subsection; their expressions are shown in Table 8. The maximum number of iterations per run is set to 200, with 30 independent runs.
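One way to realize the shift construction described above is to translate the function's argument so that the optimum moves from Oold to Onew. The sketch below is our own illustration, using F(x) = f(x − (Onew − Oold)); it may differ in detail from the exact formula in [51], and the helper names are ours:

```python
def make_shifted(f, o_old, o_new):
    """Build F(x) = f(x - (O_new - O_old)).

    f attains its global optimum at o_old; the shifted F attains the
    same optimal value at o_new, so an optimizer cannot exploit a
    fixed optimum location. (Illustrative sketch of the construction
    in [51]; vectors are plain Python lists.)
    """
    delta = [b - a for a, b in zip(o_old, o_new)]
    def shifted(x):
        return f([xi - di for xi, di in zip(x, delta)])
    return shifted

# Example: move the sphere function's optimum from the origin to (1, 2).
sphere = lambda x: sum(v * v for v in x)
F = make_shifted(sphere, o_old=[0.0, 0.0], o_new=[1.0, 2.0])
```

Evaluating `F([1.0, 2.0])` now returns the optimal value 0, while the original optimum location `[0.0, 0.0]` no longer does.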

It can be seen from Fig 8 that the proposed algorithm performs excellently on the shifted dynamic function problems, converging faster than the other 19 optimization algorithms.

Fig 8. Convergence curve of shift function.

(a) F1, (b) F3, (c) F7, (d) F9, (e) F10, and (f) F11.

https://doi.org/10.1371/journal.pone.0279572.g008

Table 9 shows the performance results of the dynamic shift test. It can be seen from the table that the proposed algorithm is not affected by the dynamic shift and obtains the global optimal solution within the allotted iteration steps, demonstrating strong anti-interference performance. The t-Test results show that PSOLFWM has better dynamic optimization ability than the other optimization algorithms on the test functions F1, F3, F7, F9, F10, and F11. As mentioned above, a NaN term means that the t-Test is not applicable to the two algorithms compared. The proposed algorithm owes its good dynamic performance to the adjustments that the Levy flight distribution and the wavelet function make to the particle swarm optimization algorithm, which dynamically balance global and local search ability. Therefore, the proposed algorithm can track environmental changes while remaining stable, making it suitable for solving complex nonlinear dynamic problems.

Conclusion

In this paper, wavelet theory and Levy flight are combined with the PSO algorithm for the first time, successfully improving the population diversity and global search capability of PSO. Comparing PSOLFWM with the particle swarm family algorithms and other recently proposed metaheuristic algorithms on a set of classical test functions shows that PSOLFWM finds the global optimal solution more easily than the other comparative algorithms on most test functions and converges faster. The statistical analyses further reveal that, at the significance level α = 0.05, PSOLFWM differs significantly from the other comparison algorithms and its optimization performance is better. Subsequent high-dimensional function and dynamic shift comparison tests show that PSOLFWM has a stronger search ability than the comparison algorithms; when solving complex nonlinear dynamic problems, it can track environmental changes while maintaining good stability. In the future, this method will be applied to complex nonlinear dynamic problems such as the parameter tuning of complex controllers.

Supporting information

S1 Appendix. Benchmark functions.

The 21 benchmark functions in three categories used to evaluate the proposed PSOLFWM algorithm are given.

https://doi.org/10.1371/journal.pone.0279572.s001

(PDF)

S2 Appendix. PSO family F1—F7.

The numerical results of the proposed algorithm and the particle swarm family algorithms are given for the optimization of the unimodal benchmark test functions of F1-F7.

https://doi.org/10.1371/journal.pone.0279572.s002

(PDF)

S3 Appendix. PSO family F8—F13.

The numerical results of the proposed algorithm and the eight particle swarm family algorithms are given for the optimization of the multimodal benchmark test functions of F8-F13.

https://doi.org/10.1371/journal.pone.0279572.s003

(PDF)

S4 Appendix. PSO family F14—F21.

The numerical results of the proposed algorithm and the eight particle swarm family algorithms are given for the optimization of the other benchmark test functions F14-F21.

https://doi.org/10.1371/journal.pone.0279572.s004

(PDF)

S5 Appendix. Meta-heuristic F1—F7.

The numerical results of the proposed algorithm and the eight meta-heuristic algorithms are given for the optimization of the unimodal benchmark test functions of F1-F7.

https://doi.org/10.1371/journal.pone.0279572.s005

(PDF)

S6 Appendix. Meta-heuristic F8—F13.

The numerical results of the proposed algorithm and the eight meta-heuristic algorithms are given for the optimization of the multimodal benchmark test functions of F8-F13.

https://doi.org/10.1371/journal.pone.0279572.s006

(PDF)

S7 Appendix. Meta-heuristic F14—F21.

The numerical results of the proposed algorithm and the eight meta-heuristic algorithms are given for the optimization of the other benchmark test functions F14-F21.

https://doi.org/10.1371/journal.pone.0279572.s007

(PDF)

S8 Appendix. Comparison of PSOLFWM with other algorithms using Wilcoxon’s rank sum test.

Numerical results of Wilcoxon’s rank sum test for PSOLFWM and other optimization algorithms in the optimization search process for benchmark test functions F1-F21 are given.

https://doi.org/10.1371/journal.pone.0279572.s008

(PDF)

Acknowledgments

Thanks to Mengting Guan and Tingting Lu for their help on this project.

References

  1. Eberhart RC, Shi Y, Kennedy J. Swarm intelligence. Elsevier; 2001.
  2. Dorigo M, Di Caro G. Ant colony optimization: a new meta-heuristic. In: Proceedings of the 1999 congress on evolutionary computation-CEC99 (Cat. No. 99TH8406). vol. 2. IEEE; 1999. p. 1470–1477.
  3. Mirjalili S, Mirjalili SM, Lewis A. Grey wolf optimizer. Advances in engineering software. 2014;69:46–61.
  4. Mirjalili S, Lewis A. The whale optimization algorithm. Advances in engineering software. 2016;95:51–67.
  5. Mirjalili S. The ant lion optimizer. Advances in engineering software. 2015;83:80–98.
  6. Mirjalili S, Gandomi AH, Mirjalili SZ, Saremi S, Faris H, Mirjalili SM. Salp Swarm Algorithm: A bio-inspired optimizer for engineering design problems. Advances in Engineering Software. 2017;114:163–191.
  7. Abdollahzadeh B, Soleimanian Gharehchopogh F, Mirjalili S. Artificial gorilla troops optimizer: A new nature-inspired metaheuristic algorithm for global optimization problems. International Journal of Intelligent Systems. 2021;36(10):5887–5958.
  8. Abdollahzadeh B, Gharehchopogh FS, Mirjalili S. African vultures optimization algorithm: A new nature-inspired metaheuristic algorithm for global optimization problems. Computers & Industrial Engineering. 2021;158:107408.
  9. Alsattar HA, Zaidan A, Zaidan B. Novel meta-heuristic bald eagle search optimisation algorithm. Artificial Intelligence Review. 2020;53(3):2237–2264.
  10. Mirjalili S. Moth-flame optimization algorithm: A novel nature-inspired heuristic paradigm. Knowledge-based systems. 2015;89:228–249.
  11. Shehadeh HA, Ahmedy I, Idris MYI. Empirical study of sperm swarm optimization algorithm. In: Proceedings of SAI Intelligent Systems Conference. Springer; 2018. p. 1082–1104.
  12. Mirjalili S. Dragonfly algorithm: a new meta-heuristic optimization technique for solving single-objective, discrete, and multi-objective problems. Neural computing and applications. 2016;27(4):1053–1073.
  13. Mirjalili S. SCA: a sine cosine algorithm for solving optimization problems. Knowledge-based systems. 2016;96:120–133.
  14. Abualigah L, Diabat A, Mirjalili S, Abd Elaziz M, Gandomi AH. The arithmetic optimization algorithm. Computer methods in applied mechanics and engineering. 2021;376:113609.
  15. Rashedi E, Nezamabadi-Pour H, Saryazdi S. GSA: a gravitational search algorithm. Information sciences. 2009;179(13):2232–2248.
  16. Mirjalili S, Mirjalili SM, Hatamlou A. Multi-verse optimizer: a nature-inspired algorithm for global optimization. Neural Computing and Applications. 2016;27(2):495–513.
  17. Feng Zk, Niu Wj, Liu S. Cooperation search algorithm: A novel metaheuristic evolutionary intelligence algorithm for numerical optimization and engineering optimization problems. Applied Soft Computing. 2021;98:106734.
  18. Khajehzadeh M, Taha MR, Eslami M. Multi-objective optimisation of retaining walls using hybrid adaptive gravitational search algorithm. Civil Engineering and Environmental Systems. 2014;31(3):229–242.
  19. Shehadeh HA. A hybrid sperm swarm optimization and gravitational search algorithm (HSSOGSA) for global optimization. Neural Computing and Applications. 2021;33(18):11739–11752.
  20. Ratnaweera A, Halgamuge SK, Watson HC. Self-organizing hierarchical particle swarm optimizer with time-varying acceleration coefficients. IEEE Transactions on Evolutionary Computation. 2004;8(3):240–255.
  21. Kennedy J. Bare bones particle swarms. In: Proceedings of the 2003 IEEE Swarm Intelligence Symposium. SIS'03 (Cat. No. 03EX706). IEEE; 2003. p. 80–87.
  22. Li M, Chen H, Shi X, Liu S, Zhang M, Lu S. A multi-information fusion "triple variables with iteration" inertia weight PSO algorithm and its application. Applied Soft Computing. 2019;84:105677.
  23. Zheng Yl, Ma Lh, Zhang Ly, Qian Jx. Empirical study of particle swarm optimizer with an increasing inertia weight. In: The 2003 Congress on Evolutionary Computation, 2003. CEC'03. vol. 1. IEEE; 2003. p. 221–226.
  24. Nickabadi A, Ebadzadeh MM, Safabakhsh R. A novel particle swarm optimization algorithm with adaptive inertia weight. Applied Soft Computing. 2011;11(4):3658–3670.
  25. Sayah S, Hamouda A. A hybrid differential evolution algorithm based on particle swarm optimization for nonconvex economic dispatch problems. Applied Soft Computing. 2013;13(4):1608–1619.
  26. Jiang S, Ji Z, Shen Y. A novel hybrid particle swarm optimization and gravitational search algorithm for solving economic emission load dispatch problems with various practical constraints. International Journal of Electrical Power & Energy Systems. 2014;55:628–644.
  27. Haklı H, Uğuz H. A novel particle swarm optimization algorithm with Levy flight. Applied Soft Computing. 2014;23:333–345.
  28. Ling SH, Iu HH, Chan KY, Lam HK, Yeung BC, Leung FH. Hybrid particle swarm optimization with wavelet mutation and its industrial applications. IEEE Transactions on Systems, Man, and Cybernetics, Part B (Cybernetics). 2008;38(3):743–763. pmid:18558539
  29. Lee JW, Lee JJ. Gaussian-distributed particle swarm optimization: A novel Gaussian particle swarm optimization. In: 2013 IEEE International Conference on Industrial Technology (ICIT). IEEE; 2013. p. 1122–1127.
  30. Garg H. A hybrid PSO-GA algorithm for constrained optimization problems. Applied Mathematics and Computation. 2016;274:292–305.
  31. Holden NP, Freitas AA. A hybrid PSO/ACO algorithm for classification. In: Proceedings of the 9th annual conference companion on Genetic and evolutionary computation; 2007. p. 2745–2750.
  32. Şenel FA, Gökçe F, Yüksel AS, Yiğit T. A novel hybrid PSO–GWO algorithm for optimization problems. Engineering with Computers. 2019;35(4):1359–1373.
  33. Issa M, Hassanien AE, Oliva D, Helmi A, Ziedan I, Alzohairy A. ASCA-PSO: Adaptive sine cosine optimization algorithm integrated with particle swarm for pairwise local sequence alignment. Expert Systems with Applications. 2018;99:56–70.
  34. Chegini SN, Bagheri A, Najafi F. PSOSCALF: A new hybrid PSO based on Sine Cosine Algorithm and Levy flight for solving optimization problems. Applied Soft Computing. 2018;73:697–726.
  35. Raju M, Gupta MK, Bhanot N, Sharma VS. A hybrid PSO–BFO evolutionary algorithm for optimization of fused deposition modelling process parameters. Journal of Intelligent Manufacturing. 2019;30(7):2743–2758.
  36. Trivedi IN, Jangir P, Kumar A, Jangir N, Totlani R. A novel hybrid PSO–WOA algorithm for global numerical functions optimization. In: Advances in computer and computational sciences. Springer; 2018. p. 53–60.
  37. Ajeil FH, Ibraheem IK, Sahib MA, Humaidi AJ. Multi-objective path planning of an autonomous mobile robot using hybrid PSO-MFB optimization algorithm. Applied Soft Computing. 2020;89:106076.
  38. Shi Y, Eberhart R. A modified particle swarm optimizer. In: 1998 IEEE international conference on evolutionary computation proceedings. IEEE world congress on computational intelligence (Cat. No. 98TH8360). IEEE; 1998. p. 69–73.
  39. Clerc M, Kennedy J. The particle swarm-explosion, stability, and convergence in a multidimensional complex space. IEEE Transactions on Evolutionary Computation. 2002;6(1):58–73.
  40. Yan B, Zhao Z, Zhou Y, Yuan W, Li J, Wu J, et al. A particle swarm optimization algorithm with random learning mechanism and Levy flight for optimization of atomic clusters. Computer Physics Communications. 2017;219:79–86.
  41. Jensi R, Jiji GW. An enhanced particle swarm optimization with levy flight for global optimization. Applied Soft Computing. 2016;43:248–261.
  42. Tian Y, Gao D, Li X. Improved particle swarm optimization with wavelet-based mutation operation. In: International Conference in Swarm Intelligence. Springer; 2012. p. 116–124.
  43. Eberhart R, Kennedy J. A new optimizer using particle swarm theory. In: MHS'95. Proceedings of the sixth international symposium on micro machine and human science. IEEE; 1995. p. 39–43.
  44. Chakraborty UK. Advances in differential evolution. vol. 143. Springer; 2008.
  45. Eberhart RC, Shi Y. Comparing inertia weights and constriction factors in particle swarm optimization. In: Proceedings of the 2000 congress on evolutionary computation. CEC00 (Cat. No. 00TH8512). vol. 1. IEEE; 2000. p. 84–88.
  46. Esmin AA, Lambert-Torres G, De Souza AZ. A hybrid particle swarm optimization applied to loss power minimization. IEEE Transactions on Power Systems. 2005;20(2):859–866.
  47. Kamboj VK. A novel hybrid PSO–GWO approach for unit commitment problem. Neural Computing and Applications. 2016;27(6):1643–1655.
  48. Tasoulis DK, Pavlidis NG, Plagianakos VP, Vrahatis MN. Parallel differential evolution. In: Proceedings of the 2004 congress on evolutionary computation (IEEE Cat. No. 04TH8753). vol. 2. IEEE; 2004. p. 2023–2029.
  49. Shehadeh HA, Ahmedy I, Idris MYI. Sperm swarm optimization algorithm for optimizing wireless sensor network challenges. In: Proceedings of the 6th international conference on communications and broadband networking; 2018. p. 53–59.
  50. Bridge PD, Sawilowsky SS. Increasing physicians' awareness of the impact of statistics on research outcomes: comparative power of the t-test and Wilcoxon rank-sum test in small samples applied research. Journal of Clinical Epidemiology. 1999;52(3):229–235. pmid:10210240
  51. Liang JJ, Suganthan PN, Deb K. Novel composition test functions for numerical global optimization. In: Proceedings 2005 IEEE Swarm Intelligence Symposium, 2005. SIS 2005. IEEE; 2005. p. 68–75.