Abstract
Particle swarm optimization and genetic algorithms are two popular classes of heuristic algorithms frequently used for solving complex multi-dimensional mathematical optimization problems, each with its own advantages and shortcomings. Particle swarm optimization is known to favor exploitation over exploration, and as a result it often converges rapidly to local optima other than the global optimum. The genetic algorithm is able to overcome local extrema throughout the optimization process, but it often suffers from slow convergence rates. This paper proposes a new hybrid algorithm that nests particle swarm optimization operations in the genetic algorithm, providing the general population with the exploration prowess of the genetic algorithm and a sub-population with the high exploitation capabilities of particle swarm optimization. The effectiveness of the proposed algorithm is demonstrated through solutions of several continuous optimization problems, as well as discrete (traveling salesman) problems. It is found that the new hybrid algorithm provides a better balance between exploration and exploitation than both parent algorithms, as well as existing hybrid algorithms, achieving consistently accurate results at relatively small computational cost.
Citation: Aivaliotis-Apostolopoulos P, Loukidis D (2022) Swarming genetic algorithm: A nested fully coupled hybrid of genetic algorithm and particle swarm optimization. PLoS ONE 17(9): e0275094. https://doi.org/10.1371/journal.pone.0275094
Editor: Seyedali Mirjalili, Torrens University Australia, AUSTRALIA
Received: September 2, 2021; Accepted: September 11, 2022; Published: September 23, 2022
Copyright: © 2022 Aivaliotis-Apostolopoulos, Loukidis. This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
Data Availability: All relevant data are within the manuscript and its Supporting Information files. All codes are available at a GitHub repository at https://github.com/Aivaliotis/SGA.
Funding: The author(s) received no specific funding for this work.
Competing interests: The authors have declared that no competing interests exist.
1. Introduction
In recent years, there has been an increasing need for swift and accurate solutions to complex problems in many fields, such as science, engineering, manufacturing and finance, which has led to the development and implementation of a variety of modern mathematical optimization algorithms. With the constant rise in the computational power of CPUs, the use of heuristic algorithms has become increasingly popular in recent decades. Heuristic optimization algorithms are particularly suitable for complex and computationally heavy multivariate problems that exhibit a plethora of local extrema. This is because, unlike gradient algorithms, they rely on the use of a random population of candidate solutions ("individuals" or "particles") inside the search space. The goal of finding the best solution (global optimum) is pursued through an iterative procedure that considers a variety of interactions within the population. An effective heuristic algorithm is expected to ensure quick and repeatable convergence to a satisfactory solution. As such, the creation of an efficient heuristic algorithm requires balancing two major factors: exploration and exploitation. Exploitation is the algorithm's ability to converge quickly to a solution, whereas exploration is its ability to overcome local extrema successfully, thus reaching the global optimum.
Various heuristic algorithms have been proposed so far in the scientific literature, such as the Genetic Algorithm [1], Simulated Annealing [2], Tabu search [3], Particle Swarm Optimization [4], Ant Colony Optimization [5], Differential Evolution [6], Shuffled Frog-Leaping Algorithm [7], Invasive Weed Optimization [8], Artificial Bee Colony [9], Gravitational Search Algorithm [10], Grey Wolf Optimizer [11, 12], MTDE [13], NMPA [14]. Researchers have also combined different algorithms to produce hybrid versions possessing improved performance, e.g. ANGEL [15], PSOGSA [16], SFLA-IWO [17], GGWO [18], WOA [19], AOA [20], m-SCBOA [21], FRCSA [22]. The present study focuses on the Genetic Algorithm and the Particle Swarm Optimization, and their combination into hybrid algorithms.
Evolutionary algorithms are a sub-set of heuristic algorithms, with the Genetic Algorithm (GA) introduced by Holland [1] being one of the most used and well known. GA is based on the principles of natural evolution and the concept of the survival of the fittest. The sets of optimization variables are treated as chromosomes that are subjected to the three main genetic operators, namely selection, crossover and mutation, in order to achieve a better solution with each subsequent generation. GA favors exploration over exploitation and, as a consequence, its convergence is often slow [23].
Particle Swarm Optimization (PSO) is a social evolutionary algorithm developed by Kennedy & Eberhart [4] based on swarm intelligence. It was inspired by the social behavior of flocks of birds and schools of fish. The sets of optimization variables are treated as particles in the search space, where the position of a particle is a solution to the studied problem. Each particle moves through the search space iteration by iteration according to its score, i.e. the corresponding value of the objective function (personal aspect), its position relative to other particles (social aspect), and its inertia (change of location in the previous iteration). PSO is a fast-converging algorithm favoring exploitation over exploration, but as a result the algorithm is susceptible to getting trapped in local extrema, thus failing to yield the true (global) optimal solution.
In order to alleviate the inherent shortcomings of each of the individual methods, researchers have proposed several hybrid algorithms combining GA and PSO operations, with the basic idea being the achievement of a better balance between exploration and exploitation [16, 23–28]. In these algorithms, GA is usually responsible for the exploration, while PSO focuses on providing improved convergence speed. The hybrid algorithms differ from each other in the details of the coupling of GA and PSO, e.g. whether one method is nested inside the other (and which one), or whether they are conducted in parallel, and how they interact (i.e. how the outcome of one method influences the computations of the other). For example, [25] proposed a hybrid form of PSO incorporating GA's mutation property, thus providing PSO with an escape route from local extrema. A different adaptation was proposed in [28] with coupled parallel GA and PSO operations. Jeong et al. [23] applied a similar parallel combination to multi-objective optimization. Herein, a new hybrid approach is proposed combining GA and PSO. Its novelty lies in the fact that PSO is nested inside the GA and applied to only a part of the GA population, aiming to achieve a better balance between exploration and exploitation compared to existing hybrid methods. The performance of the new approach is tested and compared with that of the parent algorithms and two existing hybrid algorithms for two sets of continuous benchmark problems, as well as a set of discrete (traveling salesman) problems.
2. Algorithms
In this section, we first provide the framework for GA and PSO in the form they are employed in this study. Subsequently, we present the proposed hybrid algorithm, along with two existing hybrid algorithms from which inspiration was drawn [25, 26].
2.1 Genetic algorithm
Being among the earliest heuristic methods, genetic algorithms have seen widespread application in many scientific fields. In GA, each candidate solution xj = {x1,j,x2,j,…,xD,j} to the problem is termed an "individual" or "chromosome", with the free (optimization) scalar variables xi being considered "genes". The size M of the population of individuals to be considered in the search space at the start of the solution process is set by the user. The algorithm begins by randomly creating sets of genes for the M individuals and then evaluating the objective function f(xj) for each individual. One of the individuals (the global best) yields the best current solution f(xGB), i.e. min{f(x1),…,f(xM)} for minimization problems or max{f(x1),…,f(xM)} for maximization problems.
Since the basic concept of the algorithm is the “survival of the fittest”, a fitness level LFT is assigned to each individual j, representing the suitability of the solution compared to its peers:
(1)
Before the next iteration of the algorithm, a group of individuals (either randomly selected or targeted) is subjected to random mutations of the chromosomes and crossover breeding.
The mutations are instrumental for avoiding convergence to a sub-optimal solution by keeping the individuals uncluttered. For continuous problems, mutations can be done by adding to (or subtracting from) xi a random percentage of the width Wi of the search space in the corresponding (ith) dimension as follows:
$x_i^{mu} = x_i + rand_{mu} \cdot R \cdot W_i \qquad (2)$
where randmu are random real numbers between -1 and 1, and R is the maximum possible percentage change, usually a parameter selected by the user. Eq (2) applies provided that the mutated individual remains inside the search space and any optimization constraints are satisfied. In case the mutated value computed using Eq (2) falls outside the search space, it is set equal to the closest limiting value (i.e. max xi or min xi).
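The mutation of Eq (2) with boundary clamping can be sketched as follows. This is a minimal illustrative Python sketch, not the authors' Matlab implementation; the function name and arguments are assumptions.

```python
import numpy as np

def mutate(x, lo, hi, R, rng):
    """Mutate one individual per Eq (2): add a random fraction
    (between -R and +R) of the search-space width W_i in each
    dimension, then clamp back to the nearest bound if the
    mutated value leaves the search space."""
    W = hi - lo                                    # width per dimension
    rand_mu = rng.uniform(-1.0, 1.0, size=x.shape) # randmu in Eq (2)
    x_mu = x + rand_mu * R * W
    return np.clip(x_mu, lo, hi)                   # closest limiting value
```

The clamp in the last line implements the rule stated above for mutations that fall outside the search space.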
Crossover provides the means for convergence by combining two individuals to create a new one (offspring) to be used in the next iteration of the algorithm. For continuous problems, an offspring may be created through arithmetic crossover [29] as a linear interpolation (it can be seen also as a weighted average) between two existing solutions (parents), as follows:
$\mathbf{x}_{off} = rand_{cr} \odot \mathbf{x}_{par,1} + (1 - rand_{cr}) \odot \mathbf{x}_{par,2} \qquad (3)$
where randcr is a vector of random numbers between 0 and 1. In order to completely replace the two parents before the next iteration, a second offspring is created by swapping the weight factors randcr and (1-randcr).
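The arithmetic crossover of Eq (3), with the second offspring created by swapping the weight factors, can be sketched as follows (an illustrative Python sketch under the assumption of per-dimension random weights; not the authors' code):

```python
import numpy as np

def arithmetic_crossover(p1, p2, rng):
    """Create two offspring per Eq (3): linear interpolations between
    parents p1 and p2 with random weights rand_cr and (1 - rand_cr)."""
    r = rng.uniform(0.0, 1.0, size=p1.shape)  # rand_cr in Eq (3)
    child1 = r * p1 + (1.0 - r) * p2
    child2 = (1.0 - r) * p1 + r * p2          # swapped weights
    return child1, child2
```

Note that the two offspring sum to the sum of the parents, so the pair fully replaces both parents without shifting the population centroid.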
Elitist selection is often employed in GA, i.e. a proportion of the best individuals of the population (elite group) is carried over to the next generation. More recently, in order to enhance the exploitation capabilities of GA, elitist crossover has also been considered, i.e. crossover is focused on a group of best performing individuals [30, 31]. Fig 1 shows schematically the aforementioned mutation and crossover processes.
The above procedure is repeated iteratively until a set of specified stopping criteria are met (e.g. exceedance of a maximum number of iterations or the global best remains constant for a given number of iterations). The sequence of these steps is shown in the form of flow chart in Fig 2.
2.2 Particle swarm optimization
PSO operates by moving the individuals (particles) through the multi-dimensional search space in a geometric manner. The method starts by generating a preset number (M) of particles at random positions. Each particle j has a vector indicating its current position (xj) and a vector representing its spatial velocity vj, i.e. the change of particle position between successive iterations. Every particle also has a "memory" of its best position (xPB,j), which is kept along with the global best position (xGB) of the entire population. Both xPB,j and xGB act as attractors for particle j. At the start of the algorithm, the particles' initial velocities are set equal to zero. At subsequent steps, each particle's velocity is set to be directly proportional to 1) its previous velocity (inertial component), 2) the distance of the current particle position from the global best (social component) and 3) the distance from its personal best (personal component), according to the following equation:
$\mathbf{v}_j^{t+1} = w\,\mathbf{v}_j^{t} + c_1\,rand_1 \odot (\mathbf{x}_{GB} - \mathbf{x}_j^{t}) + c_2\,rand_2 \odot (\mathbf{x}_{PB,j} - \mathbf{x}_j^{t}) \qquad (4)$
where rand1 and rand2 are vectors of random numbers between 0 and 1, and c1 and c2 are constants scaling the "social" and "personal" gravitational components, respectively. The factor w is an "inertia" coefficient controlling the influence of the current velocity on that of the next iteration. Once the new velocity is established using Eq (4), the new particle position is calculated as
$\mathbf{x}_j^{t+1} = \mathbf{x}_j^{t} + \mathbf{v}_j^{t+1} \qquad (5)$
Fig 3 shows a schematic representation of the inertial, social and personal velocity components.
This procedure is repeated iteratively until a set of specified convergence and stopping criteria are met (Fig 4). Common practice is to set the inertial factor w to decrease (e.g. linearly or exponentially) as the iteration number increases, a technique often referred to as "damping", in order to facilitate convergence [32]. Herein, we set w to decrease linearly with the number of iterations:
$w = w_{max} - (w_{max} - w_{min}) \cdot \dfrac{iter}{maxiter} \qquad (6)$
where wmax and wmin are the maximum (initial) and minimum values of the inertial factor, and maxiter is the maximum allowed number of iterations. For PSO to be able to achieve convergence, wmin needs to be well below 1.0.
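Eqs (4)–(6) can be combined into a single update step. The sketch below is illustrative (Python rather than the Matlab used in this study), with c1 scaling the social component and c2 the personal component, as stated above; the function names are assumptions.

```python
import numpy as np

def inertia(it, max_iter, w_max=1.0, w_min=0.001):
    """Linearly damped inertia factor per Eq (6)."""
    return w_max - (w_max - w_min) * it / max_iter

def pso_step(x, v, x_pb, x_gb, w, c1=2.0, c2=2.0, rng=None):
    """One velocity/position update per Eqs (4)-(5) for all particles.
    x, v, x_pb are (M, D) arrays; x_gb broadcasts over particles."""
    if rng is None:
        rng = np.random.default_rng()
    r1 = rng.uniform(size=x.shape)  # rand1 in Eq (4)
    r2 = rng.uniform(size=x.shape)  # rand2 in Eq (4)
    v_new = w * v + c1 * r1 * (x_gb - x) + c2 * r2 * (x_pb - x)
    x_new = x + v_new               # Eq (5)
    return x_new, v_new
```

As the text notes, a particle sitting at both its personal and the global best with zero velocity remains stationary, which is what makes eventual convergence possible when w is below 1.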
2.3 Hybrid algorithms
2.3.1 HPSOM.
The standard PSO is formulated in such a way that convergence to a solution is guaranteed, provided that w is set initially, or eventually becomes, less than unity. This is because the particles tend to converge with successive iterations to a global best, where both the social and personal velocity components become zero. However, there is no guarantee that this solution is the global optimum and not a local one. To alleviate this major shortcoming and boost the exploration prowess of the PSO algorithm, Esmin et al. [26] proposed the incorporation of particle mutations. In their algorithm, called the Hybrid Particle Swarm Optimizer with Mutation (HPSOM), in each iteration, once the PSO calculations are completed, a group of particles is randomly chosen to undergo mutation (Eq 2). HPSOM can be seen as connecting the PSO and GA processes in series, with PSO preceding GA and without crossovers.
2.3.2 PGPHEA.
Shi et al. [28] proposed an algorithm combining PSO and GA in parallel, named the PSO-GA parallel hybrid evolutionary algorithm (PGPHEA). The algorithm starts by dividing an initial population into two random sub-populations, one undergoing GA computations and the other PSO. For a user-defined number of iterations (ne), each sub-population undergoes its own optimization algorithm without any interaction. Every ne iterations, a fixed proportion of randomly selected individuals is exchanged between the two sub-populations. After each exchange, the global bests are redefined in each group and the process is repeated until the set termination criteria are reached (Fig 5). Herein, the two sub-populations are set to be of equal size, following [23]. Moreover, once the exchanged individuals enter PSO, their personal bests are set equal to their current positions and the inertial velocity component is set equal to zero.
3. Swarming genetic algorithm
The main idea of the proposed hybrid algorithm is to incorporate the fast convergence capabilities of PSO in the strongly exploratory GA. The hybrid algorithm formulated herein, named the Swarming Genetic Algorithm (SGA), nests PSO loops inside external loops of GA, as shown in Fig 6 and in the form of pseudo-code (Fig 7). Every ng iterations of GA, a small sub-group (MP) of the global population (MG) is randomly selected to undergo PSO calculations for a specified number of iterations np. Each time a block of np PSO iterations starts, the initial particle velocities are set equal to zero and the personal bests are reset to the current positions of the corresponding particles, while the global best is re-evaluated based on the outcome of the preceding GA calculations for the current PSO sub-population. In other words, PSO does not retain a memory of the previous block of PSO calculations. After np iterations, the PSO sub-group is re-introduced to the overall population, which then undergoes new GA operations of mutations and crossovers. The sub-group that undergoes PSO usually converges rapidly to a solution, but given that the rest of the population is unaffected by the nested PSO, the algorithm maintains the strong exploratory character of the standard GA. Computations terminate once certain criteria are met, e.g. the cumulative number of evaluations of the objective function neval reaches a preset limit nmax. From Figs 6 and 7, it can be seen that the added complexity of nesting PSO calculations in GA is minimal and, as will be shown in the following section, the proposed hybrid algorithm is competitive regarding its computational cost.
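The nesting described above can be sketched in compact form as follows. This is a simplified illustration under assumptions of its own (survivor-selection scheme, operator percentages, fixed inertia) that differ from the tuned settings reported in Section 4; it is not the authors' implementation, only a sketch of the GA-outer/PSO-inner structure.

```python
import numpy as np

def sga_minimize(f, lo, hi, D, MG=40, MP=8, ng=1, np_iters=20,
                 n_max=4000, R=0.1, c1=2.0, c2=2.0, w=0.5, seed=0):
    """Sketch of SGA: GA generations form the outer loop; every ng
    generations a random sub-group of MP individuals runs np_iters
    PSO iterations with velocities zeroed and personal bests reset,
    then rejoins the population (no PSO memory across blocks)."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(lo, hi, size=(MG, D))
    scores = np.apply_along_axis(f, 1, X)
    n_eval, gen = MG, 0
    while n_eval < n_max:
        gen += 1
        # --- GA generation: keep the best half, refill by arithmetic
        # crossover of random pairs of survivors, mutate some offspring.
        order = np.argsort(scores)
        elite = X[order[:MG // 2]]
        pairs = rng.integers(0, MG // 2, size=(MG - MG // 2, 2))
        r = rng.uniform(size=(MG - MG // 2, D))
        children = r * elite[pairs[:, 0]] + (1 - r) * elite[pairs[:, 1]]
        mut = rng.uniform(size=children.shape) < 0.1
        children = np.clip(children + mut * rng.uniform(-1, 1, children.shape)
                           * R * (hi - lo), lo, hi)
        X = np.vstack([elite, children])
        scores = np.apply_along_axis(f, 1, X); n_eval += MG
        # --- Nested PSO block on a random sub-group, memory reset.
        if gen % ng == 0:
            idx = rng.choice(MG, MP, replace=False)
            P, V = X[idx].copy(), np.zeros((MP, D))
            pb, pb_s = P.copy(), scores[idx].copy()
            gb = P[np.argmin(pb_s)].copy()
            for _ in range(np_iters):
                r1, r2 = rng.uniform(size=(2, MP, D))
                V = w * V + c1 * r1 * (gb - P) + c2 * r2 * (pb - P)
                P = np.clip(P + V, lo, hi)
                s = np.apply_along_axis(f, 1, P); n_eval += MP
                better = s < pb_s
                pb[better], pb_s[better] = P[better], s[better]
                gb = pb[np.argmin(pb_s)]
            X[idx] = P  # re-introduce the swarm into the GA population
            scores[idx] = np.apply_along_axis(f, 1, P); n_eval += MP
    return X[np.argmin(scores)], scores.min()
```

The budget counter n_eval plays the role of the termination criterion neval ≤ nmax described above.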
4. Benchmark tests
In this section, the performance of SGA is compared against GA [1], PSO [4], HPSOM [26] and PGPHEA [28] for continuous as well as discrete (traveling salesman) single-objective optimization problems, both in terms of effectiveness in finding the global optimum and computational time efficiency.
4.1 Continuous problems
Two sets of continuous single-objective optimization problems are considered herein. The first (Set A) contains 14 standard optimization functions with dimensions D (i.e. number of optimization variables) in the 2 to 30 range. The second set (Set B) contains the single-objective optimization problems of the CEC 2017 benchmark suite [33, 34] with D set equal to 50. All the problems of the second set are constrained and more demanding than those of the first set. Due to the probabilistic nature of the heuristic algorithms, each time a problem is solved with a given algorithm a different result may be produced. Hence, each algorithm was tested 100 consecutive times for each optimization problem. The performance of the algorithms is compared based on various metrics, such as the average and maximum errors and the Overall Effectiveness (OE) [35], and then ranked accordingly. The results are also analyzed using two statistical tests, i.e. the Wilcoxon signed-rank test [16, 35], with the hypothesis that SGA provides the smallest error, and the Friedman test.
The overall population size for all methods was equal to 100. The algorithmic parameters for PSO are set c1 = c2 = 2, wmax = 1 and wmin = 0.001. Moreover, a maximum limit was set to the particle velocity equal to 50% of the width of the search space. For the hybrid algorithms, the algorithmic parameters for the PSO component are the same as above, except in PGPHEA where they were increased to wmax = 2 and wmin = 0.01, as trials showed that this improves the performance of PGPHEA (S1 Table in S1 File). For the standard (non-hybridized) GA, an elite group of 30% of the population is isolated in order to undergo elite crossover, while 10% of the population undergoes mutation and 60% undergoes crossover, both randomly selected. In the GA component of the hybrid algorithms, 20% of the population undergoes mutations, 60% is derived from crossovers, and 20% is isolated for elite crossover operations. For PGPHEA, the number of iterations for a population exchange to happen is ne = 100 steps. In SGA, in every iteration (ng = 1) of GA (external loop) 20% of the population is randomly selected to undergo np = 100 PSO iterations (internal loop). For HPSOM, 20% of the population is set to undergo mutation, as it was found that higher percentages caused the algorithm to become unstable. The parameter maxiter in Eq (6) of PSO operations was set equal to np and ne for SGA and PGPHEA, respectively. For PSO and HPSOM, maxiter = 2000.
The above algorithmic parameters and settings are considered as optimal and were established via several preliminary trial runs for the whole sets of continuous test functions that are considered herein. All calculations were performed on an Intel® i7-11800H processor using scripts programmed in Matlab.
4.1.1 Set A.
The continuous problems of Set A are standard benchmarking optimization functions (f1-f14) [28, 36, 37] and are shown in Table 1, along with their search spaces and the number D of optimization variables (dimensions).
The global optima for the above benchmark problems are shown in Table 2. Table 1 contains mostly unconstrained minimization problems, with the exception of f7 and f8 [28], which pertain to constrained maximization. In order to convert the latter to unconstrained minimization problems (to make the studied cases more homogeneous) with global optimum equal to zero, the following conversion was implemented:
(7A)
(7B)
(8A)
(8B)
where pen7 and pen8 are penalty terms introducing into the objective functions the enforcement of the respective constraints in the form used in [28], and < > are the Macaulay brackets (i.e. <x> = x for x≥0, otherwise = 0). With this transformation, the search range for the optimization variables of the modified functions f7* and f8* is 0 to 100.
For all algorithms, computations are terminated once the cumulative number of evaluations of the objective function exceeds 40020 (termination criterion). The average error and maximum error from the 100 consecutive runs are shown in Tables 3 and 4, respectively, where grey shading marks the best performing algorithms in each case. These results are also compared graphically in Fig 8.
Comparison of algorithmic performance in solving the continuous problems of Set A: a) average error, b) maximum error in predicting the global optimum.
It can be seen that SGA exhibits the best OE (64.29%), while there are significant statistical differences based on the Friedman test (p-value = 7.6828e-04). Nonetheless, it should be noted that the Wilcoxon test (Table 5) did not find significant differences between SGA and PGPHEA at the specified significance level. Moreover, PGPHEA yields smaller errors than SGA in problems f10 and f14. In most of the cases (most notably f1 and f4), PSO gets trapped in local optima due to its limited exploration capabilities. Between PGPHEA and HPSOM, PGPHEA appears to achieve better convergence (Table 6). It is also interesting to note that in certain cases (most notably f2, f7, f8, f9) HPSOM yields larger errors than PSO. This indicates that the frequent mutation imposed on the PSO particles makes the algorithm less efficient for the specific functions.
The observation that SGA provided better results for a given number of objective function evaluations does not necessarily mean that it is the most computationally efficient. This is because the other calculations in each algorithm (e.g. generation of random numbers, algebraic operations of GA crossovers and PSO velocities) and their relative proportions result in differences in computational cost among the algorithms. Moreover, an algorithm, although not achieving the minimum error, may approach the global optimum to a practically adequate degree faster than the algorithm that yields the least error. To investigate this aspect, the convergence speed, i.e., the evolution of the average fitness (of the 100 runs) with CPU time (as reported by Matlab), is plotted in Fig 9. The fitness is defined herein as
(9)
The perfect fitness is equal to 100, meaning that the error is equal to zero.
In the unconstrained optimization problems f2 and f5 and the constrained problems f7 and f8, SGA reaches fitness larger than 99 faster than the rest of the algorithms. GA is in general much slower, but its convergence rate is steady (e.g. f3, f7, f10 and f13). Conversely, PSO generally attains high fitness values much faster than GA once a breakthrough from local minima is made (e.g. f2, f5, f9, f11, f13). It is interesting to note that PGPHEA and SGA have fitness evolution patterns (curve shapes) similar to those of PSO, but in most cases they overcome local minima and reach high fitness levels in a shorter CPU time.
4.1.2 Set B.
The second set comprises the 29 single-objective continuous problems (F1, F3-F30) of the CEC 2017 benchmark suite [33, 34]. This suite contains unimodal, multimodal, hybrid, and composition functions. The optimization algorithms are tested with the dimension D for all benchmarks set equal to 50. The population size and algorithmic parameters are the same as in the case of the Set A problems presented in the previous section, with the difference that the termination criterion is set to 500100 objective function evaluations (5000 iterations for a population of 100).
The average error and maximum error from the 100 runs are shown in Tables 7 and 8, respectively, where grey shading marks the best performing algorithms in each case. The results are also compared graphically in Fig 10.
Comparison of algorithmic performance in solving continuous problems of Set B (CEC 2017): a) average error, b) maximum error in predicting the global optimum.
It can be seen that SGA exhibits the best performance, followed by PGPHEA (Table 9). SGA achieves the smallest average and maximum errors for 22 and 21 out of the 29 functions, respectively (Tables 7 and 8). Moreover, the Wilcoxon test (Table 10) as well as the Friedman test (with p-value = 6.8896e-20) indicate that there are significant statistical differences in the performance of the examined algorithms. In the case of the CEC 2017 suite, the parent algorithms GA and PSO cannot provide better results than the hybrid algorithms SGA and PGPHEA in any of the benchmark functions. The evolution of the global best value (average of 100 runs) with CPU time for the 29 benchmark problems can be found in S1 Fig in S1 File. It can be seen that in a number of problems (e.g. F14, F18, F26, F27) SGA is ahead of the other algorithms from the beginning of the solution process. However, in most of the cases, SGA gains an edge in later stages, often with a sharp improvement of the global best, indicating an escape from a sub-optimal solution (e.g. F5, F7, F8, F10, F16, F17, F20, F21, F23, F24).
4.2 Discrete problems
The algorithms were tested on 9 cases of the traveling salesman problem (TSP) from the library TSPLIB [38]. In order to render the PSO algorithm able to solve discrete problems, a PSO adaptation is necessary, e.g. [39–45]. In this paper, we adopt the mapping proposed in [41], according to which each feasible solution (permutation) is linked to a vector (particle position) x = {x1,x2,…,xD} whose elements are numbers signifying the priorities of the 1,…,D cities. For example, if we have a six-variable permutation problem [A B C D E F], each letter matches the corresponding position, i.e. x1 is city A's priority. A particle vector {0.91, 0.72, 0.87, 0.12, 0.61, 0.89} would result in the permutation [A F C B E D].
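The priority-based decoding described above can be sketched as follows (an illustrative Python sketch; the function name is an assumption):

```python
import numpy as np

def decode_priorities(x, cities):
    """Map a particle position (priority vector) to a tour:
    cities are visited in order of decreasing priority."""
    order = np.argsort(-np.asarray(x))   # indices of highest priority first
    return [cities[i] for i in order]
```

Applied to the worked example above, the vector {0.91, 0.72, 0.87, 0.12, 0.61, 0.89} decodes to the tour [A F C B E D].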
Unlike in the continuous problems (where arithmetic crossover was employed), for the crossover operations of the GA (either as a standalone algorithm or as a component in the hybrid algorithms), the order crossover approach of [46] was used. Each new individual follows the 'travel' order of one parent up to a random point, from which it starts following the second parent's 'travel' order, skipping the already visited cities. For example, if the travel order of the first parent is [A C B F D E] and that of the second parent is [E C B A D F], with the crossover point being the 4th city, the offspring's 'travel' order is [A C B F] + [E D] = [A C B F E D]. The mutation, on the other hand, is a simple swap of two cities' priority order. For an individual's order [A F C B E D], a mutation (1,5) would result in the new order [E F C B A D].
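The two discrete operators just described can be sketched as follows (illustrative Python, reproducing the worked examples above; function names are assumptions):

```python
def order_crossover(p1, p2, point):
    """Order crossover [46]: copy p1 up to the crossover point,
    then append p2's cities in p2's order, skipping those visited."""
    head = p1[:point]
    tail = [c for c in p2 if c not in head]
    return head + tail

def swap_mutation(tour, i, j):
    """Swap mutation: exchange the cities at (1-based) positions i and j."""
    t = list(tour)
    t[i - 1], t[j - 1] = t[j - 1], t[i - 1]
    return t
```

With the parents [A C B F D E] and [E C B A D F] and crossover point 4, order_crossover returns [A C B F E D]; a (1,5) swap_mutation of [A F C B E D] returns [E F C B A D], matching the examples in the text.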
The parameters for each algorithm were optimized through trial runs of the nine TSP benchmarks. The parameters c1 and c2 (Eq 4) are equal to 2 in all algorithms involving PSO operations. The inertial factor w is set equal to a fixed value (no “damping”) of 0.01 for SGA, PGPHEA and HPSOM, while for PSO it is set to decrease linearly (Eq 6) with wmax = 1 and wmin = 0.4. The overall population size was M = 20 for all algorithms. For GA (either as standalone algorithm or as a component in the hybrid algorithms), an elite group of 20% of the population is isolated in order to undergo elite crossover, 30% undergoes mutation and 50% crossover. For PGPHEA, it was found that its best performance is achieved with ne = 1, i.e. the exchange between PSO and GA sub-populations happens after every iteration. For HPSOM, 40% of the population is set to undergo mutation. In SGA, every ng = 2 iterations of GA (external loop), 25% of the population is randomly selected to undergo np = 5 PSO iterations (internal loop). As a termination criterion, a maximum number of the route distance calculations equal to 40020 is considered.
Each problem was solved 100 times with each algorithm, and the average and maximum relative errors in the calculated optimal distance are shown in Tables 11 and 12, respectively, and in Fig 11.
Comparison of algorithmic performance in solving TSP: a) average relative error, b) maximum relative error in predicting the optimal distance.
It can be seen that, in the case of TSPs, the GA and PSO algorithms along with the three hybrids have difficulty in finding the global optimum at the prescribed maximum number of objective function evaluations (termination criterion). Nonetheless, a distinct pattern emerges, different than that in the continuous benchmark problems. Overall, PSO appears to be the worst performing algorithm, by a significant margin in many cases, and its hybridization improves performance to varying degrees (Fig 11). GA underperforms particularly for the problems containing a large number of cities. Nonetheless, it is worth noting that GA is the best performing algorithm for 3 out of the 9 TSP benchmarks. SGA outperforms the other algorithms in 5 out of 9, while HPSOM does so in 1 out of 9 (Table 11, Fig 11A), with a Friedman test p-value equal to 9.86E-05, indicating significant differences across all algorithms. Although SGA ranks first on average (Table 14), according to the Wilcoxon test (Table 13) it cannot be claimed that SGA provides better results than GA at the 0.05 significance level. Nonetheless, it should be noted that, in terms of maximum relative error, HPSOM seems to have an edge over SGA (Table 12, Fig 11B). Finally, it is worth mentioning that HPSOM provides better results than PGPHEA, in contrast to what was observed in the case of the continuous problems (Table 14).
5. Conclusions
A new hybrid heuristic algorithm, named the Swarming Genetic Algorithm (SGA), was proposed, nesting Particle Swarm Optimization (PSO) loops inside the Genetic Algorithm (GA), in order to enhance the efficiency of the latter in quickly attaining the global optimum. The aim was to counter each algorithm's shortcomings by taking advantage of the exploitation capabilities of PSO and combining them with the well-known exploration prowess of GA. In order to assess the performance of the new hybrid algorithm, two sets of continuous and one set of discrete (traveling salesman) benchmark problems were examined and comparisons were made against GA, PSO and two existing hybrid algorithms. The results showed that SGA has overall significantly better performance than PSO and GA in terms of accuracy, in both continuous and discrete problems.
Supporting information
S2 File. Data of plots of Fig 9: Fitness vs CPU time.
https://doi.org/10.1371/journal.pone.0275094.s002
(XLSX)
S3 File. Data of plots of S1 Fig: Global best value vs CPU time.
https://doi.org/10.1371/journal.pone.0275094.s003
(XLSX)
References
- 1. Holland JH. Adaptation in Natural and Artificial Systems. Ann Arbor: MIT Press; 1975.
- 2. Kirkpatrick S, Gelatt CD Jr, Vecchi MP. Optimization by simulated annealing. Science. 1983; 220(4598): 671–80. pmid:17813860
- 3. Glover F. Tabu search—part I. ORSA Journal on Computing. 1989; 1(3): 190–206.
- 4. Kennedy J, Eberhart R. Particle swarm optimization. In: Proceedings of the 4th IEEE International Conference on Neural Networks; 1995. p. 1942–1948.
- 5. Dorigo M, Maniezzo V, Colorni A. Ant system: optimization by a colony of cooperating agents. IEEE Transactions on Systems, Man, and Cybernetics, Part B. 1996; 26(1): 29–41. pmid:18263004
- 6. Storn R, Price K. Differential evolution–a simple and efficient heuristic for global optimization over continuous spaces. Journal of global optimization. 1997; 11(4): 341–59.
- 7. Eusuff M, Lansey K, Pasha F. Shuffled frog-leaping algorithm: a memetic meta-heuristic for discrete optimization. Engineering Optimization. 2006;38(2): 129–54.
- 8. Mehrabian AR, Lucas C. A novel numerical optimization algorithm inspired from weed colonization. Ecological Informatics. 2006;1(4): 355–66.
- 9. Karaboga D, Basturk B. A powerful and efficient algorithm for numerical function optimization: artificial bee colony (ABC) algorithm. Journal of Global Optimization. 2007; 39(3): 459–71.
- 10. Rashedi E, Nezamabadi-Pour H, Saryazdi S. GSA: a gravitational search algorithm. Information Sciences. 2009;179(13): 2232–48.
- 11. Mirjalili S, Mirjalili SM, Lewis A. Grey wolf optimizer. Advances in Engineering Software. 2014; 69: 46–61.
- 12. Nadimi-Shahraki MH, Taghian S, Mirjalili S, An improved grey wolf optimizer for solving engineering problems, Expert Systems with Applications. 2021; 166,
- 13. Nadimi-Shahraki MH, Taghian S, Mirjalili S, Faris H. MTDE: An effective multi-trial vector-based differential evolution algorithm and its applications for engineering design problems. Applied Soft Computing. 2020; 97: 106761.
- 14. Sadiq A. S., Dehkordi A. A., Mirjalili S., Pham Q.-V., ‘Nonlinear marine predator algorithm: A cost-effective optimizer for fair power allocation in NOMA-VLC-B5G networks’, Expert Systems with Applications, vol. 203, p. 117395, Oct. 2022,
- 15. Csébfalvi A. Angel method for discrete optimization problems. Periodica Polytechnica Civil Engineering. 2007; 51(2): 37–46.
- 16.
Mirjalili S, Hashim SZM. A new hybrid PSOGSA algorithm for function optimization, in 2010 International Conference on Computer and Information Application, pp. 374–377.
- 17. Kaveh A, Talatahari S, Khodadadi N. Hybrid invasive weed optimization-shuffled frog-leaping algorithm for optimal design of truss structures. Iranian Journal of Science and Technology, Transactions of Civil Engineering. 2020; 44(2): 405–20.
- 18. Nadimi-Shahraki M. H., Taghian S., Mirjalili S., Zamani H., Bahreininejad A., ‘GGWO: Gaze cues learning-based grey wolf optimizer and its applications for solving engineering problems’, Journal of Computational Science, vol. 61, p. 101636, May 2022,
- 19. Nadimi-Shahraki M. H., Zamani H., Mirjalili S., ‘Enhanced whale optimization algorithm for medical feature selection: A COVID-19 case study’, Computers in Biology and Medicine, vol. 148, p. 105858, Sep. 2022, pmid:35868045
- 20. Abualigah L, Almotairi KH, Al-Qaness MA, Ewees AA, Yousri D, Abd Elaziz M, et al. Efficient text document clustering approach using multi-search Arithmetic Optimization Algorithm. Knowledge-Based Systems. 2022;19;248:108833.
- 21. Sharma S., Saha A. K., Roy S., Mirjalili S., Nama S., ‘A mixed sine cosine butterfly optimization algorithm for global optimization and its application’, Cluster Comput, Aug. 2022,
- 22. Ramachandran M., Mirjalili S., Malli Ramalingam M., Charles Gnanakkan C. A. R., Parvathysankar D. S., Sundaram A., ‘A ranking-based fuzzy adaptive hybrid crow search algorithm for combined heat and power economic dispatch’, Expert Systems with Applications, vol. 197, p. 116625, Jul. 2022,
- 23.
Jeong S, Hasegawa S, Shimoyama K, Obayashi S. Development and investigation of efficient GA/PSO-hybrid algorithm applicable to real-world design optimization. In: 2009 IEEE Congress on Evolutionary Computation; 2009. p. 777–784.
- 24.
Løvbjerg M, Rasmussen TK, Krink T. Hybrid particle swarm optimizer with breeding and subpopulations. In: Proc. Genet. Evol. Comput. Conf., vol. 1; San Francisco, CA; 2001. p. 469–476.
- 25. Shi XH, Liang YC, Lee HP, Lu C, Wang LM. An improved GA and a novel PSO-GA-based hybrid algorithm. Information Processing Letters. 2005; 93(55): 255–261.
- 26. Esmin AA, Lambert-Torres G, De Souza AZ. A hybrid particle swarm optimization applied to loss power minimization. IEEE Transactions on power systems. 2005; 20(2): 859–866.
- 27. Dong N, Wu C-H, Ip W-H, Chen Z-Q, Chan C-Y, Yung K-L. An opposition-based chaotic GA/PSO hybrid algorithm and its application in circle detection. Computers & Mathematics with Applications. 2012; 64(6): 1886–1902.
- 28.
Shi XH, Lu YH, Zhou CG, Lee HP, Lin WZ, Liang YC. Hybrid evolutionary algorithms based on PSO and GA. In: 2003 Congress on Evolutionary Computation, CEC ‘03, vol.4; 2003. p. 2393–2399.
- 29.
Michalewicz Z. Genetic algorithms + data structures = evolution programs. Springer Science & Business Media; 2013.
- 30. Juang CF. A hybrid of genetic algorithm and particle swarm optimization for recurrent network design. IEEE Transactions on Systems, Man, and Cybernetics, Part B (Cybernetics). 2004; 34(2): 997–1006. pmid:15376846
- 31.
Zhou Y, Han RP. A genetic algorithm with elite crossover and dynastic change strategies. In: International Conference on Natural Computation. Springer; 2005. p. 269–278.
- 32.
Shi YH, Eberhart RC. Parameter selection in particle swarm optimization. In: Proc. 7th Ann. Conf. on Evolutionary Programming; San Diego, CA; 1998.
- 33.
Wu G., Mallipeddi R., Suganthan P. N., ‘Problem Definitions and Evaluation Criteria for the CEC 2017 Competition on Constrained Real- Parameter Optimization’, p. 17
- 34.
‘GitHub—P-N-Suganthan/CEC2017-BoundContrained’. https://github.com/P-N-Suganthan/CEC2017-BoundContrained/blob/master/codes.rar.
- 35. Zamani H, Nadimi-Shahraki MH, and Gandomi AH. QANA: Quantum-based avian navigation optimizer algorithm, Engineering Applications of Artificial Intelligence. 2021; 104: p. 104314,
- 36. Yao X, Liu Y, Lin G. Evolutionary programming made faster. IEEE Transactions on Evolutionary Computation. 1999; 3(2): 82–102.
- 37. Surjanovic S, Bingham D. Virtual Library of Simulation Experiments: Test Functions and Datasets. 2013 [accessed 2020 Jan 26]. Available from: http://www.sfu.ca/~ssurjano.
- 38.
Reinelt G. The Traveling Salesman: Computational Solutions for TSP Applications. Springer; 2003.
- 39.
Zhi XH, Xing XL, Wang QX, Zhang LH, Yang XW, Zhou CG, et al. A discrete PSO method for generalized TSP problem. In: Proceedings of 2004 International Conference on Machine Learning and Cybernetics (IEEE Cat. No.04EX826). Shanghai, China: IEEE; 2004. p. 2378–2383.
- 40.
Pang W, Wang KP, Zhou CG, Dong LJ, Liu M, Zhang HY, et al. Modified particle swarm optimization based on space transformation for solving traveling salesman problem. In: Proceedings of 2004 International Conference on Machine Learning and Cybernetics (IEEE Cat. No.04EX826), vol.4; 2004. p. 2342–2346.
- 41.
Liu B, Wang L, Jin Y, Huang D. An Effective PSO-Based Memetic Algorithm for TSP. In: Huang D-S, Li K, Irwin GWeditors. Intelligent Computing in Signal Processing and Pattern Recognition, Lecture Notes in Control and Information Sciences. Berlin: Springer; 2006. p. 1151–1156.
- 42.
Yuan Z Yang L, Wu Y, Liao L, Li G. Chaotic Particle Swarm Optimization Algorithm for Traveling Salesman Problem. In: 2007 IEEE International Conference on Automation and Logistics. Jinan, China: IEEE; 2007. p. 1121–1124.
- 43.
Zhang J, Si W. Improved Enhanced Self-Tentative PSO algorithm for TSP. In: 2010 Sixth International Conference on Natural Computation. Yantai, China: IEEE; 2010. p. 2638–2641.
- 44. Elloumi W, El Abed H, Abraham A, Alimi AM. 2014, A comparative study of the improvement of performance using a PSO modified by ACO applied to TSP. Applied Soft Computing. 2014; 25: 234–241.
- 45.
Seah MS, Tung WL, Banks T. A Novel Discrete Particle Swarm Optimization approach to large-scale survey planning. In: 11th International Conference on Natural Computation (ICNC); 2015. p. 261–268.
- 46.
Davis L. Applying Adaptive Algorithms to Epistactic Domains. In: Proceedings of the International Joint Conference on Artificial Intelligence–IJCAI85, vol. 1; 1985. p. 162–164.