
An improved poor and rich optimization algorithm

Abstract

The poor and rich optimization algorithm (PRO) is a recent bio-inspired meta-heuristic algorithm based on the behavior of the poor and the rich. PRO suffers from low convergence speed and premature convergence, and easily becomes trapped in local optima when solving very complex function optimization problems. To overcome these limitations, this study proposes an improved poor and rich optimization (IPRO) algorithm. First, to meet the convergence speed and swarm diversity requirements across different evolutionary stages of the algorithm, the population is dynamically divided into poor and rich sub-populations. Second, for the rich sub-population, this study designs a novel individual updating mechanism that learns simultaneously from the evolution information of the global optimum individual and that of the poor sub-population, to further accelerate convergence and minimize the loss of swarm diversity. Third, for the poor sub-population, this study designs a novel individual updating mechanism that improves evolution information by learning alternately from the rich and from a Gaussian distribution, gradually improves evolutionary genes, and maintains swarm diversity. IPRO is then compared with four state-of-the-art swarm evolutionary algorithms with various characteristics on the CEC 2013 test suite. Experimental results demonstrate the competitive advantages of IPRO in convergence precision and speed when solving function optimization problems.

1. Introduction

Swarm intelligence evolutionary algorithms are meta-heuristic optimization algorithms inspired by natural swarm intelligence phenomena. When solving optimization problems, these algorithms can obtain the optimal solution without requiring the objective function to be continuously differentiable in the search space, and they offer robustness, simplicity, and extensibility, among other advantages. Given these characteristics, swarm intelligence evolutionary algorithms have been widely used in various engineering fields, such as community detection for complex networks [1], wireless sensor networks [2], pattern recognition [3], parameter estimation [4], image processing [5], and target tracking [6]. To further improve the application effectiveness of these algorithms in engineering, scholars have attempted to improve their optimization performance by designing many new swarm intelligence algorithms and by improving the performance of existing algorithms.

At present, many swarm intelligence evolutionary algorithms have been proposed; they can be roughly divided into three categories according to their heuristic principle. First, swarm heuristic algorithms are proposed by simulating the living habits of organisms in nature, such as the particle swarm optimization (PSO) algorithm [7], artificial fish swarm algorithm (AFSA) [8], ant colony optimization (ACO) algorithm [9], bacterial foraging algorithm (BFA) [10], artificial bee colony (ABC) algorithm [11], glowworm swarm optimization (GSO) algorithm [12], firefly algorithm (FA) [13], cat swarm optimization (CSO) [14], cockroach swarm algorithm (CSA) [15], monkey algorithm (MA) [16], bat algorithm (BA) [17], zombie survival optimization (ZSO) [18], krill herd (KH) algorithm [19], migrating birds optimization (MBO) algorithm [20], dolphin echolocation (DE) algorithm [21], social spider optimization (SSO) algorithm [22], gray wolf optimization (GWO) algorithm [23], chicken swarm optimization (CSO) algorithm [24], moth flame optimization (MFO) algorithm [25], monarch butterfly optimization (MBO) algorithm [26], dragonfly algorithm (DA) [27], dolphin swarm algorithm (DSA) [28], elephant swarm water search algorithm (ESWS) [29], whale optimization algorithm (WOA) [30], circular structures of puffer fish algorithm (CSOPF) [31], and poor and rich optimization algorithm (PRO) [32]. Second, some swarm intelligence evolutionary algorithms are inspired by physical phenomena. For instance, Birbil et al. proposed the electromagnetism-like meta-heuristic (EM) algorithm [33]; Menser et al. proposed the particle swirl algorithm (PSA) inspired by vortex motion [34]; Erol et al. proposed the big bang-big crunch (BB-BC) algorithm according to big bang theory and contraction theory [35]; Rashedi et al. proposed the gravitational search algorithm (GSA) based on Newton's law of universal gravitation [36]; Ying Tan et al. proposed the fireworks algorithm (FWA) by simulating the explosion and lighting process of real fireworks in the night sky [37]; A. Kaveh et al. proposed the water evaporation optimization (WEO) algorithm [38], inspired by the evaporation of a few molecules on solid surfaces with different wettability; Javidy et al. proposed the ion motion optimization (IMO) algorithm according to the laws of motion and transformation of particles in the liquid and solid states [39]; and Fatma A. Hashim et al. proposed the Archimedes optimization algorithm (AOA) [40], devised with inspiration from an interesting law of physics, Archimedes' principle. Third, some swarm intelligence evolutionary algorithms are inspired by the genetic evolution process, such as the famous genetic algorithm (GA), differential evolution (DE), the clone selection algorithm (CSA) inspired by the clonal selection mechanism, the immune algorithm (IA), social cognitive optimization (SCO) [41], the free search (FS) algorithm [42], the harmony search (HS) algorithm [43], the biogeography-based optimization (BBO) algorithm [44] proposed by Simon in IEEE Transactions on Evolutionary Computation, brain storm optimization (BSO), the teaching-learning-based optimization (TLBO) algorithm [45], the symbiotic organisms search (SOS) algorithm [46], and the animal migration optimization (AMO) algorithm [47].

To further improve the performance of various swarm intelligence evolutionary algorithms, scholars have proposed many improved evolutionary algorithms. For instance, Wei Sun et al. proposed an all-dimension-neighborhood-based PSO with a randomly selected neighbors learning strategy (AND-RSN-PSO) [48]. At the early stage of PSO, they adopted the randomly selected neighbors (RSN) learning strategy to enhance swarm diversity, whereas at the later stage, they used the all-dimension neighborhood (ADN) strategy to accelerate the convergence rate. Meng Wang et al. proposed a modified sine cosine algorithm (MSCA) [49], which introduces a linear searching path and an empirical parameter to improve the search path of the original SCA. Deng Xianli et al. proposed a multi-population based self-adaptive migration PSO (MSMPSO) [50], which integrates two common neighbor topologies into the particle's social-learning part. Mahmoud M. Saafan et al. proposed a hybrid improved whale optimization salp swarm algorithm (IWOSSA) [51], which combines an improved whale optimization algorithm and the salp swarm algorithm. Yaping Xiao et al. designed a variety of efficient update operators, redesigned the update strategies of branches, and proposed an improved artificial tree algorithm with two populations (IATTP) [52]. Rehab Ali Ibrahim et al. proposed an improved version of the gray wolf optimizer, named chaotic opposition-based grey wolf optimization based on differential evolution and disruption optimization (COGWO2D) [53]. Seyed Mostafa Bozorgi et al. proposed an improved whale optimization algorithm (REWOA), which combines the exploitation of WOA with the exploration of DE and therefore provides promising candidate solutions [54]. Omid Trakhaneh et al. proposed an improved differential evolution algorithm using an Archimedean spiral and a neighborhood search based mutation approach (ADENS) [55]. Emine Bas et al. added two new techniques to the original social spider algorithm to propose an improved social spider algorithm (ISSA) [56].

Among the above swarm intelligence algorithms, the poor and rich optimization algorithm (PRO) was recently proposed by V.K. Bardsiri et al., taking inspiration from the behavior of the poor and the rich in acquiring wealth to improve their economic conditions. Many experiments have confirmed that PRO has significantly better optimization performance than PSO, ABC, and SCA, which suggests that PRO also has excellent application potential in various engineering fields. Unfortunately, similar to other swarm intelligence evolutionary algorithms, PRO demonstrates low convergence speed and precision when solving very complex optimization problems, and no study thus far has attempted to improve its performance. To improve the effectiveness of PRO in engineering applications, this study designs an improved PRO (IPRO) algorithm. The motivation and main contributions of this paper are as follows. First, the population is divided into poor and rich sub-populations dynamically, replacing the original fixed allocation model, in order to meet the convergence speed and population diversity requirements across different evolutionary stages. Second, a novel individual updating method for the rich is proposed to further accelerate convergence and prevent excessive losses in population diversity. In this method, the rich learn from the poor and from the top best individuals, and the proportion of learning from the poor gradually increases as the iterations progress. Third, a novel individual updating method for the poor is designed to obtain reliable evolutionary information and maintain population diversity. Here, a new poor individual is obtained by combining the original individual and an offspring individual according to a crossover probability, where the offspring is generated by alternately applying Gaussian mutation and learning from rich individuals.
Results from the CEC 2013 test suite show that IPRO significantly outperforms four state-of-the-art optimization algorithms in terms of convergence precision and speed when solving function optimization problems.

The rest of this paper is organized as follows. Section 2 introduces the preliminaries, including the related optimization approaches and terminology, and the principle of the original PRO algorithm. Section 3 describes the innovations, principle, procedures, and detailed operations of the proposed IPRO algorithm. Section 4 tests the proposed algorithm and analyzes the results. Finally, the last section summarizes the paper.

2. Preliminaries

2.1 Terminology

The unconstrained optimization problem can be represented mathematically as

min F(X), X = (x1, x2, …, xD), (1)

where X is a vector that represents an independent variable set, and F represents the objective function to be optimized.
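As a concrete illustration of problem (1), the sketch below defines the well-known sphere function in Python; the bounds and dimension chosen here are hypothetical examples, not values prescribed by this paper.

```python
import numpy as np

# Hypothetical instance of problem (1): minimize F(X) = sum of x_j^2
# over the box L <= x_j <= U. The global optimum is F(0) = 0.
def sphere(x):
    return float(np.sum(np.square(x)))

L, U, D = -100.0, 100.0, 30   # illustrative search bounds and dimension
best_value = sphere(np.zeros(D))   # the optimum value at the origin
```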

Note that this paper only studies employing swarm intelligence optimization algorithms to solve the above unconstrained optimization problem. The following terminology will be used throughout this paper.

Chromosome: Chromosomes can also be called individuals. A certain number of individuals form a population, and the number of individuals in the population is called the population size.

Gene: Genes are the elements of a string and are used to represent individual characteristics. For example, in a string S = 1 0 1 1, the four elements 1, 0, 1, and 1 are each called genes. Their values are called alleles.

Gene locus: The gene locus represents the position of a gene in the string, and is also called the gene position. The gene position is counted from the left of the string to the right. For example, in the string S = 1 0 1 1, the gene position of 0 is 2.

Fitness: The degree of adaptation of each individual to its environment is called fitness. To reflect the adaptability of chromosomes, a function that can evaluate each chromosome in the problem is introduced, called the fitness function.

2.2 Poor and rich optimization algorithm

According to their wealth, people in society can be divided into the poor and the rich, and these two groups try to improve their economic situations in different ways. V.K. Bardsiri et al. simulated such behavior in 2019 to design the poor and rich optimization algorithm (PRO) [32]. In this algorithm, people correspond to individuals, and their economic status corresponds to fitness values. The poor improve their economic situation by narrowing the economic gap with the rich, whereas the rich increase their economic advantage by widening the gap with the poor. The economic status of the entire population can be improved effectively by improving both the poor and the rich.

The pseudo code of PRO is shown in Algorithm 1. The detailed procedure of the original PRO algorithm is as follows. First, initialize the parameters N, D, U, L, T, and Pmut, where N represents the number of individuals, D represents the dimension of the optimization problem, U and L denote the upper and lower limits of the search range, respectively, T denotes the maximum number of iterations, and Pmut denotes the mutation probability. Afterwards, randomly generate the initial population X and calculate the fitness value of each individual X(i). Second, the population X is divided into two sub-populations according to fitness values: the half of the individuals with better fitness values form the rich sub-population X_rich, and the others form the poor sub-population X_poor. Third, update each individual of X_rich according to Eq (2) to form a new rich population X_richnew, and then apply the following mutation operation to X_richnew. For each individual X_richnew(i), generate a random number rand; if rand is smaller than Pmut, then X_richnew(i) is replaced by X_richnew(i) plus a value drawn from a normal distribution with mean 0 and variance 1. Afterwards, compare X_richnew(i) with X(i): if X_richnew(i) is better than X(i), then X(i) is replaced by X_richnew(i). Fourth, update each individual of X_poor according to Eq (3) to form the new poor population X_poornew, and then apply the same mutation operation to X_poornew. Fifth, combine X_richnew and X_poornew to form X, and find the best individual of X. If the number of iterations reaches the maximum, the algorithm terminates; otherwise, go to the second step for another round of iteration.

Algorithm 1 PRO

Input: N, D, U, L, T, Pmut

Output: Best solution xbest and its fitness

01 Parameter initialization (N, D, U, L, T, Pmut)

02 X←Generate the initial population at random

03 f (Xi), i ∈ (1,N) ← Calculate the fitness value of each individual

04 t = 0

05  While t <= T do

06   The population X is divided into two sub-populations according to fitness value: the rich population X_rich and the poor population X_poor

07   X_richnew ← Perform the individual updating method on X_rich according to Section 2.2.1

08   X_richnew ← Perform the following mutation operation on X_richnew

  for i = 1:N/2

   if rand < Pmut

    X_richnew(i) = X_richnew (i) + randn, // randn is the value of a normal distribution with a mean of 0 and a variance of 1

   end if

   if f(X_richnew(i)) > f(X_rich(i))

    X_richnew(i) = X_rich(i)

   end If

  end For

09  X_poornew ← Perform the individual updating method on X_poor according to Section 2.2.2

10  X_poornew ← Perform the mutation operation on X_poornew

11  X ← X_richnew ∪ X_poornew

12  xbest ← best(X) // assign current best

13  t = t+1

14 end While

15 Return xbest

2.2.1 Individual updating method for the rich.

Each individual in the rich sub-population is updated as

X_richnew(i) = X_richold(i) + r × (X_richold(i) − X_poorold(best)), (2)

where X_richnew(i) and X_richold(i) represent the i-th individual in the new and original rich sub-populations, respectively, r is a random number between 0 and 1, and X_poorold(best) is the individual with the best fitness value in the poor sub-population.

2.2.2 Individual updating method for the poor.

Each individual in the poor sub-population is updated as

X_poornew(i) = X_poorold(i) + r × (Pattern − X_poorold(i)), (3)

where X_poornew(i) and X_poorold(i) represent the i-th individual in the new and original poor sub-populations, respectively, and Pattern is calculated as

Pattern = (X_richold(best) + X_richold(mean) + X_richold(worst)) / 3, (4)

where X_richold(best), X_richold(worst), and X_richold(mean) represent the individual with the best fitness value, the individual with the worst fitness value, and the mean of all individuals in the rich sub-population, respectively.
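Putting the rich and poor update rules together with the mutation and greedy replacement of Algorithm 1, the PRO loop can be sketched in Python as below. This is an illustrative reading of the procedure, not the authors' implementation: the update forms follow the verbal descriptions above (the rich widen the gap to the best poor individual; the poor move toward a pattern averaging the best, worst, and mean of the rich), and the helper name `pro` and its defaults are hypothetical.

```python
import numpy as np

def pro(f, D, L, U, N=40, T=200, p_mut=0.06, seed=0):
    """Illustrative sketch of the PRO loop (Algorithm 1), for minimization."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(L, U, size=(N, D))
    fit = np.array([f(x) for x in X])
    for _ in range(T):
        order = np.argsort(fit)                 # ascending: smaller is better
        rich, poor = order[:N // 2], order[N // 2:]
        best_poor = X[poor[0]]                  # best individual of the poor
        pattern = (X[rich[0]] + X[rich[-1]] + X[rich].mean(axis=0)) / 3.0
        new = X.copy()
        for i in rich:                          # rich widen the gap to the best poor
            new[i] = X[i] + rng.random(D) * (X[i] - best_poor)
        for i in poor:                          # poor move toward the pattern
            new[i] = X[i] + rng.random(D) * (pattern - X[i])
        mutate = rng.random(N) < p_mut          # Gaussian mutation, mean 0, variance 1
        new[mutate] += rng.standard_normal((int(mutate.sum()), D))
        new = np.clip(new, L, U)
        new_fit = np.array([f(x) for x in new])
        better = new_fit < fit                  # greedy parent-offspring selection
        X[better], fit[better] = new[better], new_fit[better]
    best = int(np.argmin(fit))
    return X[best], fit[best]
```

For example, `pro(lambda x: float(np.sum(x * x)), D=5, L=-5.0, U=5.0)` drives the sphere objective toward zero; the greedy replacement guarantees the best fitness never worsens across iterations.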

3. Proposed algorithm

Reference [32] confirmed experimentally that PRO can obtain the optimum solution for less complex function optimization problems. Unfortunately, for highly complex optimization problems, PRO has a slow convergence speed and easily falls into local optima. To address these issues, this paper designs an improved poor and rich optimization (IPRO) algorithm, whose pseudo code is presented in Algorithm 2.

Algorithm 2: IPRO

Input: N, D, U, L, MaxFEs

Output: xbest, f(xbest) //best solution and its fitness value

01 Parameter initialization (N, D, U, L, MaxFEs, Pmut)

02 X ← Generate the initial population randomly

03 f(X(i)), i ∈ (1,N) ← Calculate the fitness value of each individual X(i) in X

04 FEs = 0

05 While FEs <= MaxFEs do

06  X_rich, X_poor ← perform dynamic population division method on X according to Section 3.1

07  X_richnew ← perform the individual updating method on X_rich according to Section 3.2

08  X_poornew ← perform the individual updating method on X_poor according to Section 3.3

09  Xnew ← X_richnew ∪ X_poornew

10  X ← perform the mutation operation similar to the method of Algorithm 1 on Xnew

   for i = 1 to N

    if rand < Pmut

     Xnew(i) = Xnew(i) + randn

    end if

    if f(Xnew(i)) < f(X(i))

     X(i) = Xnew (i)

    end If

   end for

11  xbest ← best(X) // assign current best

12  update FEs

13 End While

14 return xbest and its fitness value f(xbest)

The detailed step-by-step procedure for Algorithm 2 is described as follows:

  1. Step 1: Initialize various parameters including N, D, U, L, MaxFEs, and Pmut. The parameters N, D, U, L, and Pmut are similar to those used in Algorithm 1, whereas MaxFEs denotes the maximum number of function evaluations.
  2. Step 2: Randomly generate the initial population X, and calculate the fitness value of each individual in X.
  3. Step 3: Determine the size of the rich sub-population X_rich and the poor sub-population X_poor according to Eq (5) (i.e. |X_rich| = z and |X_poor| = Nz). Select the top z best individuals of X to form X_rich, and the others in X consist of X_poor. Further details are presented in Section 3.1.
  4. Step 4: Apply the individual updating method on X_rich and X_poor as described in Sections 3.2 and 3.3, respectively, to form the new rich sub-population X_richnew and the new poor sub-population X_poornew. Afterward, combine X_richnew and X_poornew to form a new population Xnew.
  5. Step 5: Perform the mutation operation on Xnew as follows. For each individual Xnew(i), generate a random number rand. If rand is smaller than Pmut, then Xnew(i, j) is replaced by Xnew(i, j) plus a value drawn from a normal distribution with mean 0 and variance 1. Afterward, compare Xnew(i) with X(i): if Xnew(i) is better than X(i), then X(i) is replaced by Xnew(i). Thus, the population X is updated. It is worth noting that fitness evaluations are performed only in this step.
  6. Step 6: Find the best individual of X, i.e., xbest.
  7. Step 7: Use the maximum number of function evaluations (MaxFEs) as the termination condition. If the number of function evaluations (FEs) exceeds MaxFEs, the algorithm terminates. Otherwise, go to Step 3 for another round of iteration.
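Step 5 (mutation followed by greedy replacement) can be sketched as follows. This is an illustrative Python rendering rather than the authors' code: the per-gene notation Xnew(i, j) is abstracted here to a whole-vector Gaussian perturbation, and the helper name `mutate_and_select` is hypothetical.

```python
import numpy as np

def mutate_and_select(X, Xnew, f, p_mut=0.06, rng=None):
    """Sketch of Step 5: with probability p_mut perturb an offspring with
    Gaussian noise (mean 0, variance 1), then keep it only if it improves
    on the parent. Fitness is evaluated only in this step, as the text notes."""
    rng = rng or np.random.default_rng()
    X = X.copy()
    for i in range(len(Xnew)):
        cand = Xnew[i].copy()
        if rng.random() < p_mut:
            cand = cand + rng.standard_normal(cand.shape)
        if f(cand) < f(X[i]):       # minimization: smaller fitness wins
            X[i] = cand
    return X
```

Because a candidate replaces its parent only when strictly better, no individual's fitness can worsen in this step.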

3.1 Sub-population division method

In PRO, the whole population is divided into the poor and rich sub-populations, where the former comprises the half of the individuals with the worse fitness values in the entire population. The poor have a slower convergence speed than the rich, but by providing abundant evolutionary information they can help the latter explore more new solutions and move quickly to the optimum solution. In sum, the poor sub-population focuses on maintaining population diversity, whereas the rich sub-population is mainly responsible for search and exploration.

Generally, each evolutionary stage has different requirements for algorithm performance. At the initial stage, evolutionary algorithms are usually expected to move quickly to the region containing the optimal solution, given the favorable swarm diversity. As evolution proceeds, the difference in fitness values between individuals continuously decreases; that is, the individuals become increasingly similar, gradually reducing population diversity. In this case, the IPRO algorithm should increase population diversity to escape local optima. To allow the algorithm to satisfy the requirements of different evolutionary stages, at the initial stage of evolution the size of the poor sub-population should be appropriately reduced and that of the rich sub-population expanded, whereas at the final stage of evolution the size of the poor sub-population should be expanded and that of the rich sub-population reduced.

Following the aforementioned ideas, this section proposes the following dynamic division method. In each iteration, the individuals of the offspring population are sorted in ascending order of their fitness values for minimization problems. The z individuals with the smallest fitness values are moved to the rich sub-population, where z is calculated by Eq (5), whereas the other N − z individuals are moved to the poor sub-population. (5) where ⌊ ⌋ represents the floor operation, z represents the size of the rich sub-population (so the size of the poor sub-population is N − z), t and T represent the current and maximum number of iterations, and zmax and zmin represent the maximum and minimum sizes of the rich sub-population, respectively. When zmax and zmin are set to 0.6N and 0.4N, respectively, IPRO can obtain satisfactory results.
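A minimal sketch of the dynamic division schedule follows, assuming the linear form implied by the text (the rich share shrinks from zmax·N to zmin·N as t goes from 0 to T); the exact expression of Eq (5) may differ, so treat the formula inside `rich_size` as an assumption.

```python
import math

def rich_size(t, T, N, z_max=0.6, z_min=0.4):
    """Assumed form of Eq (5): the rich sub-population shrinks linearly
    from z_max*N to z_min*N over the run; the poor size is N - z."""
    frac = z_max - (z_max - z_min) * (t / T)
    return int(math.floor(frac * N + 1e-9))   # small guard against float error
```

With N = 100 and T = 10, this schedule gives a rich sub-population of 60 at the start and 40 at the end, decreasing monotonically in between.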

3.2 Improved individual updating mechanism for the rich

Eq (2) shows that each rich individual in the PRO algorithm is improved in the following way: the rich sub-population increases its difference from the poor sub-population by learning from the individual in the poor sub-population with the best fitness value. Unfortunately, experiments suggest that this individual updating mechanism for the rich has a slow convergence speed and can easily fall into local optima, for two reasons. First, all individuals in the rich sub-population learn from the same individual in the poor sub-population. Despite being the best in its sub-population, the evolutionary information of this individual is not the best in the current population. The individuals in the rich sub-population therefore have a low probability of finding better solutions, which slows down convergence to some extent. Second, because all individuals in the rich sub-population learn from the same individual in the poor sub-population, they become increasingly similar to each other, which in turn increases the possibility of the PRO algorithm falling into a local optimum. Moreover, if the optimal individual in the poor sub-population remains unchanged over many iterations, the rich sub-population will converge extremely slowly, and the PRO algorithm cannot easily escape the local optimum in this case.

Given that the excellent individuals in the rich sub-population carry the best evolutionary information of the current population, the other members of the same sub-population can converge quickly by learning from these excellent individuals. To further accelerate convergence and maintain population diversity, the rich should learn both from the poor and from the better individuals in their own sub-population. Given that the PRO algorithm has different requirements for convergence speed and population diversity at different stages of the iteration, at the initial stage of evolution the individuals in the rich sub-population should learn more from the excellent individuals in the same sub-population than from the poor sub-population, whereas at the final stage of evolution these individuals should learn more from the poor sub-population. Following these ideas, this study designs the individual updating mechanism for the rich sub-population given in Eq (6). (6) where X_richold(k1) represents an individual randomly selected from the s best individuals in the rich sub-population (a good result can generally be obtained when s equals 10), X_poorold(k2) represents an individual randomly selected from the poor sub-population, r1 and r2 represent matrices with one row and D columns composed of random numbers between 0 and 1, and the parameter ωt controls the learning proportion and changes adaptively along with the iteration according to Eq (7). (7) where ωu and ωl are the upper and lower bounds of ωt, respectively. A good result can generally be obtained when ωu and ωl are set to 1 and 0.2, respectively.
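Since the images of Eqs (6) and (7) are not reproduced here, the sketch below encodes one plausible reading of the mechanism described in the text: each rich individual moves toward a random member of the s best rich with weight ωt, and toward a random poor individual with weight 1 − ωt, where ωt decays linearly from ωu to ωl. Both formulas are assumptions consistent with the prose, not the published equations, and `update_rich` is a hypothetical helper name.

```python
import numpy as np

def update_rich(X_rich, X_poor, t, T, s=10, w_u=1.0, w_l=0.2, rng=None):
    """Sketch of the improved rich update; the combination form and the
    linear decay of w_t are assumptions consistent with the text."""
    rng = rng or np.random.default_rng()
    w_t = w_u - (w_u - w_l) * t / T             # assumed linear decay (Eq 7)
    Z, D = X_rich.shape
    new = np.empty_like(X_rich)
    for i in range(Z):
        elite = X_rich[rng.integers(min(s, Z))]   # X_rich assumed sorted by fitness
        mate = X_poor[rng.integers(len(X_poor))]
        r1, r2 = rng.random(D), rng.random(D)     # per-dimension random weights
        new[i] = (X_rich[i]
                  + w_t * r1 * (elite - X_rich[i])          # learn from rich elite
                  + (1.0 - w_t) * r2 * (mate - X_rich[i]))  # learn from the poor
    return new
```

Early in the run (t small, ωt near ωu) the elite term dominates, accelerating convergence; late in the run the poor term dominates, injecting diversity, matching the staged behavior the section argues for.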

In sum, this section proposes a novel individual updating mechanism for the rich sub-population. This mechanism uses the original rich and poor sub-populations as its inputs and generates a new rich sub-population as its output. Each individual in the new rich sub-population is generated as follows. First, an individual is randomly selected from the s best individuals in the original rich sub-population. Second, an individual is randomly selected from the original poor sub-population. Third, ωt is calculated using Eq (7). Fourth, an individual in the new rich sub-population is generated using Eq (6). It is worth noting that the individuals in the new rich sub-population need not be evaluated in this step.

3.3 Improved individual updating mechanism for the poor

As shown in Eq (3), each individual in the poor sub-population learns from the same individual, which is calculated from the rich, in every generation. As the iterations progress, the genes of individuals in the poor sub-population become increasingly similar to those of individuals in the rich sub-population, thereby reducing population diversity. The poor then offer less help to the rich in escaping local optima, because they can no longer easily provide new evolutionary information. To reduce the possibility of falling into local optima while maintaining a fast convergence speed, abundant and excellent evolutionary information must be provided for the rich.

Following the above ideas, an improved individual updating mechanism for the poor is proposed as follows. First, the individuals in the poor sub-population perform a Gaussian mutation operation on themselves using Eq (8). Second, every p iterations, the individuals in the poor sub-population learn from the rich once according to Eq (10). Third, a new individual is generated according to Eq (11). (8) where the parameter q is calculated as (9) where f(X_poorold(i)) and f(X_poorold(k4)) represent the fitness values of the current individual and of another randomly selected individual in the poor sub-population, respectively. (10) where X_richold(k3) is an individual randomly selected from the original rich sub-population, and r is a random number between R1 and R2. A good result is generally obtained when R1 and R2 are set to 0.2 and 0.8, respectively. (11) where X_poornew(i) and X_poorold(i) represent the new individual generated by Eq (8) or (10) and the original individual, respectively, j represents the j-th dimension, and cr is a crossover probability. A good result is generally obtained when cr is set to 0.4.

Algorithm 3 gives the pseudo code of the proposed improved individual updating mechanism for the poor sub-population. It is worth noting that the new individuals need not be evaluated in this mechanism.

Algorithm 3: the improved individual updating mechanism for the poor sub-population

Input: X_richold, X_poorold, p, t // X_richold and X_poorold represent the original rich sub-population and the original poor sub-population, respectively.

Output: X_poornew// the new poor sub-population

01 for i = 1 to N-Z

02  if mod(t, p) == 0

03   k3 ← select an individual randomly from the original rich sub-population

04   X_poornew (i) ← generate a new individual according to Eq (10)

05  else

06   q ← obtain according to Eq (9)

07   X_poornew(i) ← generate a new individual according to Eq (8)

08  end if

09 X_poornew (i) ← perform Eq (11) on X_poornew (i) and X_poorold (i)

10 end for

11 return X_poornew
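Algorithm 3 can be sketched in Python as below. Because the images of Eqs (8)-(10) are not reproduced here, the fitness-scaled Gaussian mutation, the move toward a random rich individual, and the per-gene crossover are assumed forms consistent with the surrounding prose, and `update_poor` is a hypothetical helper name.

```python
import numpy as np

def update_poor(X_rich, X_poor, fit_poor, t, p=10, R1=0.2, R2=0.8, cr=0.4, rng=None):
    """Sketch of Algorithm 3; the concrete forms of Eqs (8)-(11) are assumptions."""
    rng = rng or np.random.default_rng()
    M, D = X_poor.shape
    new = np.empty_like(X_poor)
    for i in range(M):
        if t % p == 0:                            # learn from a random rich (Eq 10)
            k3 = rng.integers(len(X_rich))
            r = rng.uniform(R1, R2)
            cand = X_poor[i] + r * (X_rich[k3] - X_poor[i])
        else:                                     # fitness-scaled Gaussian mutation (Eqs 8-9)
            k4 = rng.integers(M)
            q = abs(fit_poor[i] - fit_poor[k4]) / (abs(fit_poor[i]) + abs(fit_poor[k4]) + 1e-12)
            cand = X_poor[i] + q * rng.standard_normal(D)
        keep = rng.random(D) >= cr                # per-gene crossover with the parent (Eq 11)
        cand[keep] = X_poor[i][keep]
        new[i] = cand
    return new
```

With cr = 0.4, roughly 60% of each parent's genes survive on average, which matches the text's point that the crossover mechanism retains most of the original evolutionary information.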

In sum, the proposed improved individual updating method for the poor sub-population has the following two advantages. On the one hand, the individuals in the poor sub-population learn from the rich every p iterations rather than in each iteration, which not only improves the evolutionary information of the poor sub-population to some extent, but also minimizes losses in its population diversity. Therefore, the poor can provide more excellent evolutionary information to the rich, helping them move quickly to the region of the optimal solution. On the other hand, the poor perform a Gaussian mutation operation in most iterations, which makes them search around themselves. In addition, most of the original evolutionary information can be retained through the crossover mechanism shown in Eq (11). These operations further maintain population diversity. Therefore, even if the algorithm falls into a local optimum, it has an increased possibility of jumping out.

4. Experimental results and discussions

In this section, the following four experiments were performed to investigate the performance of the proposed IPRO algorithm: (1) sensitivity analysis of the IPRO parameters; (2) verification of the effectiveness of each improved strategy proposed in Sections 3.1 to 3.3; (3) optimization performance comparison between IPRO and the original PRO; and (4) performance comparison between IPRO and other superior swarm intelligence algorithms. The experiments were performed on the CEC 2013 test suite, which contains 28 functions. According to their characteristics, these functions can be divided into unimodal (F1-F5), multimodal (F6-F20), and composition functions (F21-F28). Details of the test suite can be found in [32]. The experiments were carried out in MATLAB 2016.

4.1 Sensitivity analysis with the IPRO parameters

Compared with the original PRO algorithm, the proposed IPRO algorithm adds the parameters zmin, zmax, ωl, ωu, R1, R2, cr, s, and p. A sensitivity analysis was performed to evaluate the influence of these parameters on the performance of the IPRO algorithm. When analyzing the sensitivity of each parameter, only the value of that parameter was changed, while the other parameters remained unchanged. In each experiment, the population size N was set to 100, the problem dimension D was set to 30, the maximum number of evaluations MaxFEs was set to 10000×D = 300000, and Pmut was set to 0.06, as in the original PRO algorithm.

  1. In the sensitivity analysis with zmin and zmax, the parameters were set as follows. zmin and zmax correspond to three groups of values, namely, zmin = 0.4 and zmax = 0.6, zmin = 0.3 and zmax = 0.7, zmin = 0.2 and zmax = 0.8. The other parameters in IPRO, except for zmin and zmax, were set as ωl = 0.2, ωu = 1, R1 = 0.2, R2 = 0.8, cr = 0.4, s = 10, and p = 10.
  2. In the sensitivity analysis with ωl and ωu, the parameters were set as follows. ωl and ωu correspond to three groups of values, namely, ωl = 0.2 and ωu = 1, ωl = 0.2 and ωu = 0.8, ωl = 0.4 and ωu = 0.6. The other parameters in IPRO, except for ωl and ωu, are set as follows: zmin = 0.4, zmax = 0.6, R1 = 0.2, R2 = 0.8, cr = 0.4, s = 10, and p = 10.
  3. In the sensitivity analysis with cr, the values of cr were set to 0.2,0.4,0.6, respectively, and the other parameters were set as ωl = 0.2, ωu = 1, zmin = 0.4, zmax = 0.6, R1 = 0.2, R2 = 0.8, s = 10, and p = 10.
  4. In the sensitivity analysis with R1 and R2, the parameters were set as follows. R1 and R2 correspond to three groups of values, namely, R1 = 0.2 and R2 = 1, R1 = 0.2 and R2 = 0.8, and R1 = 0.4 and R2 = 0.6. The other parameters in IPRO, except for R1 and R2, were set as follows: ωl = 0.2, ωu = 1, zmin = 0.4, zmax = 0.6, cr = 0.4, s = 10, and p = 10.
  5. In the sensitivity analysis with s, the values of s were set to 10, 50, 100, respectively, and the other parameters were set as follows: ωl = 0.2, ωu = 1, zmin = 0.4, zmax = 0.6, R1 = 0.2, R2 = 0.8, cr = 0.4, and p = 10.
  6. In the sensitivity analysis with p, the values of p were set to 10, 50, 100, respectively, and the other parameters were set as follows: ωl = 0.2, ωu = 1, zmin = 0.4, zmax = 0.6, R1 = 0.2, R2 = 0.8, cr = 0.4, and s = 10.
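The one-at-a-time protocol above can be sketched in a few lines of Python. The baseline values mirror the settings just listed, while the `configurations` generator itself is only an illustrative helper, not code from the paper:

```python
# One-at-a-time sensitivity protocol: vary one parameter (or coupled pair)
# at a time while holding every other parameter at its baseline value.
baseline = {"z_min": 0.4, "z_max": 0.6, "w_l": 0.2, "w_u": 1.0,
            "R1": 0.2, "R2": 0.8, "cr": 0.4, "s": 10, "p": 10}

# Each entry maps the varied parameter group to its candidate settings,
# matching experiments (1)-(6) above.
groups = {
    ("z_min", "z_max"): [(0.4, 0.6), (0.3, 0.7), (0.2, 0.8)],
    ("w_l", "w_u"):     [(0.2, 1.0), (0.2, 0.8), (0.4, 0.6)],
    ("cr",):            [(0.2,), (0.4,), (0.6,)],
    ("R1", "R2"):       [(0.2, 1.0), (0.2, 0.8), (0.4, 0.6)],
    ("s",):             [(10,), (50,), (100,)],
    ("p",):             [(10,), (50,), (100,)],
}

def configurations():
    """Yield (varied_names, full_parameter_dict), one per sensitivity setting."""
    for names, settings in groups.items():
        for values in settings:
            cfg = dict(baseline)
            cfg.update(zip(names, values))
            yield names, cfg
```

Each yielded dictionary is a complete IPRO parameterization, so the 18 settings (6 groups × 3 candidates) can be run under the same N, D, and MaxFEs budget.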

The IPRO algorithm with each of the above parameter settings was run independently 30 times on the CEC2013 test suite. Tables 1 and 2 present the average optimal values obtained over these runs with the different parameter settings when the same preset MaxFEs is reached. The last line of each table presents the number of functions on which the algorithm with the corresponding parameter settings achieves the best optimization effect. The IPRO with zmin = 0.2 and zmax = 0.8 obtained the best result on 14 out of 28 functions, the IPRO with zmin = 0.3 and zmax = 0.7 on 12 out of 28 functions, and the IPRO with zmin = 0.4 and zmax = 0.6 on 22 out of 28 functions. In sum, the performance of IPRO is sensitive to zmin and zmax. Meanwhile, the other data in Tables 1 and 2 show that the performance of IPRO is sensitive to ωl, ωu, cr, and s, and less sensitive to R1, R2, and p.

Table 1. Sensitivity analysis with zmin, zmax, ωl, ωu, cr.

https://doi.org/10.1371/journal.pone.0267633.t001

4.2 Verification of the effectiveness of various improvement measures

To verify the effectiveness of each improvement measure described in Sections 3.1 to 3.3, the following experiments were conducted. First, the PRO algorithm was combined with each of the improved strategies proposed in Sections 3.1 to 3.3, which yielded three new algorithms: PRO combined with the dynamic population division method proposed in Section 3.1 (PRO1), PRO combined with the individual updating mechanism for the rich proposed in Section 3.2 (PRO2), and PRO combined with the individual updating mechanism for the poor proposed in Section 3.3 (PRO3). These new algorithms were then tested on the CEC2013 test suite. To ensure the fairness of comparison, in each experiment, the population size N was set to 100, the problem dimension D was set to 30, the maximum number of evaluations MaxFEs was set to 10000 × D = 300000, and the other parameters were set as follows: Pmut = 0.06, zmin = 0.4, zmax = 0.6, ωl = 0.2, ωu = 1, R1 = 0.2, R2 = 0.8, cr = 0.4, s = 10, and p = 10. To avoid a single chance run adversely affecting the evaluation of the algorithms, each algorithm was run independently on each function 30 times.

Table 3 presents the mean and standard deviation of the optimal values obtained by PRO1, PRO2, PRO3, and PRO on each function over the 30 experiments. The values before and after the symbol “±” represent the mean and standard deviation, respectively. To confirm that the results achieved by the above algorithms were not obtained by chance, the results shown in Table 3 were subjected to a Wilcoxon rank sum test. The test results were then used to determine whether significant differences were present (alternative hypothesis) or absent (null hypothesis) among these algorithms at the 0.05 significance level on each optimization function. The Wilcoxon rank sum test was used in this work because it makes fewer distributional assumptions than parametric procedures such as the t-test. Table 4 presents the Wilcoxon rank sum test results. A p-value of less than 0.05 indicates a significant difference between the corresponding improved variant and PRO on the current function; otherwise, there is no significant difference between them on the current function. The symbols ‘+’, ‘=’, and ‘-’ in the last line of Table 4 indicate the number of functions on which the improved variant significantly outperforms PRO, on which it shows no significant difference from PRO, and on which PRO significantly outperforms it, respectively.
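The per-function comparison described here can be reproduced with a rank-sum test. The sketch below implements the two-sided test via the standard normal approximation, which is appropriate for the 30-vs-30 run samples used in this work; the two arrays passed in would be the 30 optimal values recorded for each algorithm:

```python
import math

def rank_sum_test(a, b):
    """Two-sided Wilcoxon rank-sum test via the normal approximation.

    a, b: the two samples (e.g. 30 final optimal values per algorithm).
    Returns (z, p); tied values receive their average rank.
    """
    combined = sorted((v, 0 if i < len(a) else 1)
                      for i, v in enumerate(list(a) + list(b)))
    ranks = [0.0] * len(combined)
    i = 0
    while i < len(combined):
        j = i
        # Extend j over a run of tied values, then assign the average rank.
        while j + 1 < len(combined) and combined[j + 1][0] == combined[i][0]:
            j += 1
        avg = (i + j) / 2.0 + 1.0
        for k in range(i, j + 1):
            ranks[k] = avg
        i = j + 1
    n1, n2 = len(a), len(b)
    w = sum(r for r, (_, grp) in zip(ranks, combined) if grp == 0)
    mu = n1 * (n1 + n2 + 1) / 2.0                    # mean of W under H0
    sigma = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12.0)
    z = (w - mu) / sigma
    p = math.erfc(abs(z) / math.sqrt(2.0))           # two-sided p-value
    return z, p
```

A p-value below 0.05 then maps to ‘+’ or ‘-’ depending on which sample is better, and to ‘=’ otherwise.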

Table 3. Comparison results of each improved strategy on the 30-dimensional CEC2013 test suite.

https://doi.org/10.1371/journal.pone.0267633.t003

Table 4. Wilcoxon rank sum test between PRO and each improved strategy.

https://doi.org/10.1371/journal.pone.0267633.t004

Compared with PRO, PRO1 demonstrated the same performance on 26 functions, a significantly better performance on the multimodal function F14 and the composition function F20, and a significantly inferior performance on no function. Meanwhile, PRO2 obtained smaller mean values on 27 functions, the same performance on F3, F8, and F16, and a significantly better performance on 25 functions, while PRO3 demonstrated a significantly inferior performance on 6 functions and a significantly better performance on 18 test functions.

In sum, the strategies proposed in Sections 3.1 to 3.3 all contributed to the improvements in algorithm accuracy, with the strategy proposed in Section 3.2 showing the most obvious improvement effect.

4.3 Performance comparison between the IPRO and PRO

To compare the performance of our proposed IPRO and the original PRO, both were tested on the 10- and 30-dimensional CEC2013 test suites. To ensure the fairness of comparison, the population size N was set to 100, and the maximum number of evaluations MaxFEs was set to 300000. The other parameters were set following those described in Section 4.1. Table 5 presents the results of 30 independent experiments conducted on the 10- and 30-dimensional CEC2013 test suites, where “mean” and “std” indicate the mean and standard deviation, and “p-value (IPRO vs. PRO)” in the fourth and seventh columns indicates the Wilcoxon rank sum test results of the algorithms on the 10- and 30-dimensional functions, respectively.

Table 5. Comparison results of IPRO and PRO on CEC2013 test suite.

https://doi.org/10.1371/journal.pone.0267633.t005

For the 10-dimensional test functions, compared with PRO, our proposed IPRO demonstrated a significantly inferior performance on only 3 functions (F2, F4, and F22), no significant differences on 8 functions (F3, F6, F7, F9, F11, F15, F16, and F27), and a significantly better performance on 17 functions. For the 30-dimensional test functions, our proposed IPRO demonstrated an inferior performance only on F16, a similar performance on three functions (F3, F8, and F15), and a significantly better performance on the other 24 functions.

In sum, compared with the original PRO, our proposed IPRO shows a significant advantage in convergence accuracy. This advantage becomes more obvious as the dimension of the optimization problem increases.

4.4 Performance comparison between IPRO and other algorithms

To verify its excellence, our proposed IPRO algorithm was compared with four state-of-the-art algorithms on the CEC2013 test suite, namely, MSMPSO [50], MSCA [49], ADN-RSN-PSO [48], and IATTP [52]. To ensure the fairness of comparison, the population size was set to N = 100, the problem dimension was set to D = 30, and the maximum number of evaluations was set to MaxFEs = 10000 × D = 300000. The values of the other parameters are presented in Table 6.

Table 7 presents the results of 30 independent experiments conducted for each algorithm on the 30-dimensional CEC2013 test suite. To further investigate the gap between IPRO and each state-of-the-art algorithm, these algorithms were subjected to a Wilcoxon rank sum test with a significance level of 0.05, as shown in Table 8. The number of functions on which each algorithm significantly outperformed IPRO, on which it showed no significant difference from IPRO, and on which IPRO significantly outperformed it, was counted. For a better understanding of the overall performance of these algorithms on scalable test functions, Table 9 shows the Friedman test results for the data presented in Table 7. An algorithm with a lower value is given a higher ranking. Further details on the Friedman test can be found in [34].
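The Friedman procedure ranks the algorithms on each function and then averages the ranks, so the overall ordering in Table 9 follows from a simple computation. The sketch below is a minimal pure-Python version; the `results` layout (one row of per-algorithm means per function) is an assumption for illustration:

```python
def friedman_ranks(results):
    """results[f]: list of mean values, one per algorithm, for function f.

    Returns (average_ranks, chi_square). Lower average rank = better
    algorithm; ties within a function receive their average rank.
    """
    n = len(results)        # number of functions (blocks)
    k = len(results[0])     # number of algorithms
    rank_sums = [0.0] * k
    for row in results:
        order = sorted(range(k), key=lambda j: row[j])
        ranks = [0.0] * k
        i = 0
        while i < k:
            j = i
            while j + 1 < k and row[order[j + 1]] == row[order[i]]:
                j += 1
            avg = (i + j) / 2.0 + 1.0
            for t in range(i, j + 1):
                ranks[order[t]] = avg
            i = j + 1
        for idx in range(k):
            rank_sums[idx] += ranks[idx]
    avg_ranks = [r / n for r in rank_sums]
    # Friedman statistic: chi^2 = 12n/(k(k+1)) * sum(R_bar_j^2) - 3n(k+1)
    chi2 = (12.0 * n / (k * (k + 1))) * sum(r * r for r in avg_ranks) \
        - 3.0 * n * (k + 1)
    return avg_ranks, chi2
```

Feeding the 28 rows of Table 7 means into this function yields the average rank of each of the five algorithms, which determines the Table 9 ordering.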

Table 7. Comparison results of 5 algorithms on the 30-dimensional CEC2013 test suite.

https://doi.org/10.1371/journal.pone.0267633.t007

Table 8. Wilcoxon rank sum test between IPRO and each comparison algorithm on CEC2013 test suite.

https://doi.org/10.1371/journal.pone.0267633.t008

Table 7 shows that for the 30-dimensional optimization problems, MSCA does not obtain the global optimal value of any function, MSMPSO obtains the global optimal value on F3 and F9, ADN-RSN-PSO obtains the global optimal value on F3 and F11, IATTP obtains the global optimal value on 7 functions (F3, F7, F9, F11, F12, F13, and F20), and our proposed IPRO obtains the global optimal value on 10 functions (F1, F2, F3, F5, F7, F9, F11, F12, F13, and F20). In addition, the numbers of best results obtained by MSMPSO, MSCA, ADN-RSN-PSO, IATTP, and IPRO were 4, 0, 6, 11, and 18, respectively. Meanwhile, Table 8 shows that compared with MSMPSO, IPRO demonstrates a similar performance on 5 functions (F3, F8, F9, F11, and F16), a significantly inferior performance on F22 and F26, and a significantly better performance on 21 functions. IPRO significantly outperforms MSCA on all 28 functions. Compared with ADN-RSN-PSO, IPRO demonstrates a significantly worse performance on 4 functions (F17, F22, F23, and F27), a similar performance on 2 functions (F3 and F11), and a significantly better performance on 22 functions. Compared with IATTP, IPRO demonstrates a significantly worse performance on 3 functions (F4, F15, and F26), a similar performance on 12 functions, and a significantly better performance on 13 functions. Table 9 shows that among the above five algorithms, MSCA demonstrates the poorest performance, ADN-RSN-PSO and MSMPSO are slightly better than MSCA, IATTP is better than the three aforementioned algorithms, and our proposed IPRO demonstrates the best overall performance.

To compare the convergence speed of these algorithms more intuitively, Figs 1–28 illustrate the results of one randomly selected run of each algorithm on each function. The horizontal and vertical coordinates of these figures represent the number of function evaluations and the logarithm of the function value obtained by each algorithm at the corresponding number of evaluations, respectively.
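Each such convergence curve amounts to a running minimum of the raw function values, plotted on a log scale against the evaluation count. As a sketch (the `eval_log` input format is an assumption for illustration):

```python
import math

def convergence_history(eval_log):
    """Build the (x, y) data behind one convergence curve.

    eval_log: sequence of (evaluations_used, raw_function_value) pairs in
    evaluation order. Returns x (evaluation counts) and y (log10 of the
    best-so-far value), matching the axes described above.
    """
    x, y, best = [], [], float("inf")
    for fes, val in eval_log:
        best = min(best, val)       # running minimum: curves never rise
        x.append(fes)
        y.append(math.log10(best))
    return x, y
```

Plotting the returned arrays for all algorithms on one set of axes reproduces one panel of the comparison figures.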

Figs 1–28 show that, for the unimodal functions F1, F2, F3, and F5, IPRO shows better convergence than the other algorithms. For F1, IPRO rapidly converges to the global optimum, whereas the other algorithms fall into the local optimum. For F3, only PRO falls into the local optimum, whereas the other algorithms rapidly converge to the global optimum; specifically, IPRO, IATTP, and MSMPSO rapidly converge to the global optimum at the initial stage. For F2, F4, and F5, all algorithms fall into the local optimum. For F4, IPRO shows better convergence than the other algorithms except IATTP, which converges faster. Among the 15 multimodal functions (F6 to F20), IPRO quickly obtains the global optimum of six functions (F7, F9, F11, F12, F13, and F20) at the beginning. For F7, F9, F12, F13, and F20, IPRO shows better convergence than the other algorithms. For F11, IPRO converges more slowly than MSCA, yet more quickly than the other algorithms. For F6, F10, F18, and F19, IPRO, like the other algorithms, falls into the local optimum, but shows the fastest convergence speed. For F8, IPRO shows better convergence than PRO, MSMPSO, MSCA, and ADN-RSN-PSO at the initial stage, whereas in the end IPRO only shows better convergence than PRO. For F14, all algorithms, including IPRO, PRO, MSMPSO, MSCA, IATTP, and ADN-RSN-PSO, fall into the local optimum in the early stage of convergence, and only ADN-RSN-PSO escapes the local optimum in the end. For F15, F16, and F17, IPRO shows the fastest convergence speed at the initial stage. As evolution proceeds, each algorithm falls into the local optimum on F15 and F16, with ADN-RSN-PSO obtaining the highest-precision solutions, while on F17 ADN-RSN-PSO escapes the local optimum.
For the composition functions (F21 to F28), each algorithm shows rapid convergence in the initial stage but falls into the local optimum in the end. For F21, F24, and F28, IPRO shows the fastest convergence speed and the best-quality solutions. For F23, F25, and F26, IPRO is outperformed in solution quality by only one algorithm each, namely ADN-RSN-PSO, IATTP, and MSCA, respectively. For F22 and F27, IPRO shows worse convergence than MSCA and ADN-RSN-PSO. In sum, IPRO performs well in terms of convergence speed compared with the other four algorithms.

In sum, compared with the other four algorithms, IPRO shows advantages in terms of convergence accuracy and convergence speed. In addition, our proposed IPRO achieved the global optimum of three out of five unimodal functions and six out of fifteen multimodal functions, which indicates that, compared with the composition functions, IPRO is particularly effective on unimodal and multimodal functions.

5. Conclusions

This paper designed the IPRO algorithm to further improve the convergence speed and accuracy of the recently developed population-based PRO algorithm. IPRO differs from PRO in three ways. First, a different approach was used to divide the population into the poor and rich sub-populations. At the early stage of convergence, the individuals with the better fitness values were included in the rich sub-population, whereas all the others were included in the poor sub-population, resulting in a rich sub-population larger than the poor sub-population; at the final stage of convergence, the poor sub-population was larger than the rich sub-population. Second, the individual updating mechanism of the rich was strengthened by allowing each individual in the rich sub-population to learn from the poor sub-population and the global best individual simultaneously, instead of only from the best individual in the poor sub-population. This procedure increased the convergence speed of IPRO and minimized losses in swarm diversity. Third, the individual updating mechanism of the poor was strengthened based on Gauss mutation, a crossover strategy, and a new evolution strategy, which maintain the swarm diversity to some extent. The performance of the IPRO algorithm was verified on the 28 benchmark functions of the CEC2013 test suite, compared with that of four state-of-the-art meta-heuristic optimization algorithms.
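The dynamic division summarized above can be sketched in a few lines. This is only an illustration, assuming the rich fraction decays linearly from zmax to zmin over the evaluation budget; the exact schedule is defined in Section 3.1, not here:

```python
def split_population(pop_fitness, fes, max_fes, z_min=0.4, z_max=0.6):
    """Return (rich_indices, poor_indices) for a minimization problem.

    The rich fraction z shrinks from z_max to z_min as evaluations are
    spent (linear schedule assumed for illustration), so the rich
    sub-population is larger early on and the poor sub-population is
    larger at the end, as described above.
    """
    z = z_max - (z_max - z_min) * (fes / max_fes)  # assumed linear decay
    order = sorted(range(len(pop_fitness)), key=lambda i: pop_fitness[i])
    n_rich = max(1, round(z * len(pop_fitness)))
    return order[:n_rich], order[n_rich:]
```

The individuals with the best fitness always land in the rich sub-population; only the size of that group changes with the evaluation count.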

Given its employment of new search strategies, IPRO has a slightly higher time complexity than the original PRO algorithm. Therefore, subsequent work will aim to reduce the time complexity and improve the optimization efficiency of IPRO. Future research on the proposed IPRO may explore its application to other real-world problems to further validate its flexibility in generating optimum solutions to a wide variety of optimization problems.

Acknowledgments

The authors would like to thank the anonymous reviewers for their critical and constructive comments; their thoughtful suggestions have helped improve this paper substantially.

References

  1. Guendouz M, Amine A, Hamou R M. A discrete modified fireworks algorithm for community detection in complex networks. Applied Intelligence. 2017, 46(2): 373–385.
  2. Jia D, Zhu H, Zou S, Hu P. Dynamic cluster head selection method for wireless sensor network. IEEE Sensors Journal. 2016, 16(8): 2746–2754.
  3. Couto D, Zipfel C. Regulation of pattern recognition receptor signalling in plants. Nature Reviews Immunology. 2016, 16(9): 537–552. pmid:27477127
  4. Chen X, Xu B, Mei C, Ding Y H, Li K J. Teaching–learning–based artificial bee colony for solar photovoltaic parameter estimation. Applied Energy. 2018, 212: 1578–1588.
  5. Ahmadi M, Kazemi K, Aarabi A, Niknam T, Helfroush M S. Image segmentation using multilevel thresholding based on modified bird mating optimization. Multimedia Tools and Applications. 2019, 78(16): 23003–23027.
  6. Li S, Feng X, Deng Z, Pan F, Ge S. Minimum error entropy based multiple model estimation for multisensor hybrid uncertain target tracking systems. IET Signal Processing. 2020, 14(3).
  7. Zhao Q, Li C W. Two-stage multi-swarm particle swarm optimizer for unconstrained and constrained global optimization. IEEE Access. 2020, 8: 124905–124927.
  8. Lei X J, Yang X Q, Wu F X. Artificial fish swarm optimization based method to identify essential proteins. IEEE Transactions on Computational Biology and Bioinformatics. 2018. pmid:30113899
  9. Liao T J, Krzysztof S, Marco A M, et al. Ant colony optimization for mixed-variable optimization problems. IEEE Transactions on Evolutionary Computation. 2014, 18(4): 503–518.
  10. Verma O P, Parihar A S. An optimal fuzzy system for edge detection in color images using bacterial foraging algorithm. IEEE Transactions on Fuzzy Systems. 2015.
  11. Wang L, Zhang X, Zhang X. Antenna array design by artificial bee colony algorithm with similarity induced search method. IEEE Transactions on Magnetics. 2019, 55(6): 1–4.
  12. Zhang J J, Cui Z H, Wang Y C, et al. A coupling approach with GSO-BFOA for many-objective optimization. IEEE Access. 2019, 7: 120248–120261.
  13. Huang Y P, Huang M Y, Ye C E. A fusion firefly algorithm with simplified propagation for photovoltaic MPPT under partial shading conditions. IEEE Transactions on Sustainable Energy. 2020, 11(4): 846–862.
  14. Yan D P, Cao H, Yu Y J, et al. Single-objective/multiobjective cat swarm optimization clustering analysis for data partition. IEEE Transactions on Automation Science and Engineering. 2020, 7(3): 1633–1646.
  15. Chen Z H. A modified cockroach swarm optimization. Energy Procedia. 2011.
  16. Cui Y J. Application of the improved chaotic self-adapting monkey algorithm into radar systems of Internet of Things. IEEE Access. 2018, 6: 54270–54281.
  17. Jenthilnath J, Kulkarni S, Benediktsson J A. A novel approach for multispectral satellite image classification based on the bat algorithm. IEEE Geoscience and Remote Sensing Letters. 2016, 13(4): 599–603.
  18. Hoang T N, Bir B. Zombie survival optimization: a swarm intelligence algorithm inspired by zombie foraging. 21st International Conference on Pattern Recognition. 2012: 987–990.
  19. Li Z H, Cao Q, Zhao Y H. Krill herd algorithm for signal optimization of cooperative control with traffic supply and demand. IEEE Access. 2019, 7: 10776–10786.
  20. Wang P, Sang H, Tao Q, et al. Improved migrating birds optimization algorithm to solve hybrid flowshop scheduling problem with lot-streaming. IEEE Access. 2020, 8: 89782–89792.
  21. Qing X, Liu S Z, Qiao G, et al. Acoustic propagation investigation of a dolphin echolocation pulse at water-sediment interface using finite element model. 2018 OCEANS-MTS/IEEE Kobe Techno-Oceans (OTO). 2018.
  22. Lai Z L, F X, Yu H Q, et al. A parallel social spider optimization algorithm based on emotional learning. IEEE Transactions on Systems, Man, and Cybernetics: Systems. 2021, 51(2): 797–808.
  23. Routray A, Singh R K, Mahanty R. Harmonic reduction in hybrid cascaded multilevel inverter using modified grey wolf optimization. IEEE Transactions on Industry Applications. 2020, 56(2): 1827–1838.
  24. Cristin D R, Suresh K D K, Anbhazhagan D P. Severity level classification of brain tumor based on MRI images using fractional-chicken swarm optimization algorithm. The Computer Journal. 2021, 64(10): 1514–1530.
  25. Korshy A, Kamel S, Alquthami T, et al. Optimal coordination of standard and non-standard direction overcurrent relays using an improved moth-flame optimization. IEEE Access. 2020, 8(2): 87378–87392.
  26. Amia B, Mat B. A hybridization of differential evolution and monarch butterfly optimization for solving systems of nonlinear equations. Journal of Computational Design and Engineering. 2019, 6(3): 354–367.
  27. Hammouri A I, Mafarja M, Al-Betar M A, et al. An improved dragonfly algorithm for feature selection. Knowledge-Based Systems. 2020, 203: 106131.
  28. Yong W, Tao W, Cheng-Zhi Z, et al. A new stochastic optimization approach: dolphin swarm optimization algorithm. International Journal of Computational Intelligence & Applications. 2016: 1650011.
  29. Mandal S, Saha G, Pal R K. Recurrent neural network-based modeling of gene regulatory network using elephant swarm water search algorithm. Journal of Bioinformatics & Computational Biology. 2017: 1750016. pmid:28659000
  30. Zhang Q, Liu L. Whale optimization algorithm based on Lamarckian learning for global optimization problems. IEEE Access. 2019, 7(1): 36642–36666.
  31. Mehmet C C, Arif G. Circular structures of puffer fish: a new metaheuristic optimization algorithm. 2018 Third International Conference on Electrical and Biomedical Engineering, Clean Energy and Green Computing. 2018: 1–5.
  32. Moosavi S, Bardsiri V K. Poor and rich optimization algorithm: a new human-based and multi populations algorithm. Engineering Applications of Artificial Intelligence. 2019, 86: 165–181.
  33. Birbil S I, Fang S H. An electromagnetism-like mechanism for global optimization. Journal of Global Optimization. 2003, 25(1): 263–282.
  34. Menser S, Hereford J. A new optimization technique. Proceedings of the IEEE SoutheastCon 2006. 2006: 250–255.
  35. Erol O K, Eksin I. A new optimization method: Big Bang-Big Crunch. Advances in Engineering Software. 2006, 37(2): 106–111.
  36. Rashedi E, Nezamabadi-pour H, Saryzdi S. GSA: a gravitational search algorithm. Information Sciences. 2009, 179(13): 2232–2248.
  37. Ying T, Zhu Y C. Fireworks algorithm for optimization. International Conference in Swarm Intelligence. 2010: 355–364.
  38. Kaveh A, Bakhshpoori T. Water evaporation optimization: a novel physically inspired optimization algorithm. Computers and Structures. 2016, 167(15): 69–85.
  39. Javidy B, Hatamlou A, Mirjalili S. Ions motion algorithm for solving optimization problems. Applied Soft Computing. 2015, 32(1): 72–79.
  40. Hashim F A, Hussain K, Houssein E H, et al. Archimedes optimization algorithm: a new metaheuristic algorithm for solving optimization problems. Applied Intelligence. 2021, 51(3): 1531–1551.
  41. Xie X F, Zhang W J, Yang Z L. Social cognitive optimization for nonlinear programming problems. Proceedings of the First International Conference on Machine Learning and Cybernetics. 2002: 779–783.
  42. Penev K, Littlefair G. Free search: a comparative analysis. Information Sciences. 2005, 172(1–2): 173–193.
  43. Zong W G, Kim J H, Loganathan G V. A new heuristic optimization algorithm: harmony search. Simulation. 2001, 76(2): 60–68.
  44. Simon D. Biogeography-based optimization. IEEE Transactions on Evolutionary Computation. 2009, 12(6): 702–713.
  45. Rao R V, Patel V. Teaching-learning-based optimization: a novel method for constrained mechanical design optimization problems. Computer-Aided Design. 2011, 43(3): 303–315.
  46. Cheng M Y, Prayogo D. Symbiotic organisms search: a new metaheuristic optimization algorithm. Computers & Structures. 2014, 139: 98–112.
  47. Li X, Zhang J, Yin M. Animal migration optimization: an optimization algorithm inspired by animal migration behavior. Neural Computing and Applications. 2014, 24(7–8): 1867–1877.
  48. Sun W, Lin A P, Yu H S, Liang Q K, Wu G H. All-dimension neighborhood based particle swarm optimization with randomly selected neighbors. Information Sciences. 2017, 405: 141–156.
  49. Wang M, Lu G Z. A modified sine cosine algorithm for solving optimization problems. IEEE Access. 2021, 9: 27434–27450.
  50. Deng X L, Wei B, Zeng H, Ling G, Xia W X. A multi-population based self-adaptive migration PSO. Acta Electronica Sinica. 2018, 46(8): 1858–1865.
  51. Mahmoud M S, Eman M E. IWOSSA: an improved whale optimization salp swarm algorithm for solving optimization problems. Expert Systems with Applications. 2021, 176(2): 114901.
  52. Xiao Y, Chi H, Li Q. An improved artificial tree algorithm with two populations (IATTP). Engineering Applications of Artificial Intelligence. 2021, 104: 104324.
  53. Rehab A I, Mohamed A E, Lu S F. Chaotic opposition-based grey wolf optimization algorithm based on differential evolution and disruption operator for global optimization. Expert Systems with Applications. 2018, 108: 1–27.
  54. Jin Q B, Xu Z H, Cai W. An improved whale optimization algorithm with random evolution and special reinforcement dual-operation strategy collaboration. 2021, 13(2): 238.
  55. Omid T, Irene M. An improved differential evolution algorithm using Archimedean spiral and neighborhood search based mutation approach for cluster analysis. Future Generation Computer Systems. 2019, 101(1): 921–939.
  56. Emine B, Erkan U. Improved social spider algorithm for large scale optimization. Artificial Intelligence Review. 2020.