
A hybrid differential evolution based on gaining-sharing knowledge algorithm and Harris hawks optimization

Abstract

Differential evolution (DE) is favored by scholars for its simplicity and efficiency, but its ability to balance exploration and exploitation needs to be enhanced. In this paper, a hybrid differential evolution with the gaining-sharing knowledge algorithm (GSK) and Harris hawks optimization (HHO), abbreviated as DEGH, is proposed. Its main contributions are as follows. First, a hybrid mutation operator is constructed in DEGH, in which the two-phase strategy of GSK, the classical mutation operator “rand/1” of DE and the soft besiege rule of HHO are used and improved, forming a double-insurance mechanism for the balance between exploration and exploitation. Second, a novel crossover probability self-adaption strategy is proposed to strengthen the internal relations among the mutation, crossover and selection operations of DE. On this basis, the crossover probability and scaling factor jointly affect the evolution of each individual, enabling the proposed algorithm to better adapt to various optimization problems. In addition, DEGH is compared with eight state-of-the-art DE algorithms on 32 benchmark functions. Experimental results show that the proposed DEGH algorithm is significantly superior to the compared algorithms.

1 Introduction

Whether in the field of science or engineering, problem optimization is a hot topic. Many researchers are keen to use meta-heuristic algorithms to solve optimization problems, leading to the emergence of various meta-heuristic algorithms, such as evolution strategies (ES) [1], genetic algorithm (GA) [2], differential evolution [3], particle swarm optimization (PSO) [4], artificial bee colony (ABC) [5], gravitational search algorithm (GSA) [6], teaching–learning-based optimization (TLBO) [7], moth-flame optimization (MFO) [8], whale optimization algorithm (WOA) [9], Harris hawks optimization (HHO) [10] and the gaining-sharing knowledge algorithm (GSK) [11].

Since its inception, differential evolution (DE) has become one of the most commonly used meta-heuristic algorithms for solving optimization problems [12]. Many scholars have improved DE and applied it in diverse fields, such as clinical medicine [13], text classification [14], optics [15], energy [16] and neural networks [17]. Studies improving DE can be divided into two broad categories: (1) changes to DE components, which enhance the performance of the original DE by improving the mutation, crossover and selection operations and adjusting the control parameters; and (2) hybridizations of DE with other meta-heuristic algorithms, which improve performance by combining their respective advantages.

At each generation, the evolution of individuals in differential evolution mainly goes through three stages: mutation, crossover and selection. These stages are the critical targets for improvements to DE components, among which the mutation operation is the most important. Zhang and Sanderson [18] proposed the famous “DE/current-to-pbest/1” mutation operator in their adaptive DE algorithm (JADE), which improved mutation by using the top 100p% individuals and an external archive containing suboptimal individuals. Wang et al. [19] presented a self-adaptive differential evolution algorithm with improved mutation mode (IMMSADE), which improved the classic mutation operator “DE/rand/1” by attaching a benchmark factor to the base vector. Zheng et al. [20] proposed a collective information-powered DE (CIPDE), whose mutation operator contains a collective individual formed as a linear combination of the m individuals with the best fitness values. Mohamed et al. [21] proposed two enhanced DE variants (EBDE and EDE), in which three different individuals were ranked to participate in mutation; the difference is that in the former the individuals were randomly selected from the top p individuals and from the entire population, while in the latter all three individuals were randomly chosen from the population. Li et al. [22] presented an improved differential evolution algorithm with dual mutation strategies collaboration (DMCDE), which applied improved DE/rand/2 and DE/best/2 operators based on an elite guidance mechanism. Ghosh et al. [23] proposed a switched parameter DE, in which each individual randomly selected either the binary crossover operator or the BLX-α-β crossover operator. Tian et al. [24] presented a DE with improved individual-based parameter setting and selection strategy (IDEI), which developed a diversity selection strategy based on a newly defined weighted fitness value. Cheng et al. [25] proposed an improved DE with a fitness- and diversity-ranking-based mutation operator (FDDE), which judged the contribution of the individuals participating in the “DE/rand/1” mutation strategy to population diversity according to their fitness values, and rearranged the positions of the three random individuals based on the ranking information of individual diversity and fitness values.

The control parameters of DE include the population size NP, the scaling factor F and the crossover probability CR, and adjusting them is the other direction for improving DE components. Tanabe and Fukunaga [26] proposed a success-history based parameter adaptation for DE (SHADE). By establishing the historical memories Mcr and Mf, values of CR and F that performed well in the past were preserved, and new parameter pairs were sampled from them. Shortly afterwards, Tanabe and Fukunaga [27] presented an enhanced version that added a population size reduction rule to SHADE (LSHADE): after each generation, the population size of the next generation was reduced by a linear function. Poláková et al. [28] described a new mechanism of population size adaptation for DE, which evaluated the current population diversity based on Euclidean distance and adjusted NP according to the evaluation results. Meng et al. [29] put forward a DE variant with novel control parameter adaptation (PaDE), which included a grouping strategy for adjusting F and CR and a parabolic reduction rule for changing NP. Li et al. [30] proposed an enhanced adaptive DE algorithm (EJADE), which introduced a crossover probability sorting mechanism and a dynamic population reduction strategy based on JADE. Wang et al. [31] proposed a self-adaptive ensemble-based DE (SAEDE), which set the control parameters of each generation through self-adaption and ensemble mechanisms, reducing the need for user settings. Xue and Chen [32] introduced an adaptive compact DE (ACDE), in which F and CR obey the Cauchy distribution and the uniform distribution, respectively, and are adaptively adjusted according to their respective weighted Lehmer means.

Compared with component improvement, studies that hybridize DE with other algorithms to improve its performance are more novel, as they often integrate the advantages of DE and different meta-heuristic algorithms. Guo et al. [33] presented an enhanced self-adaptive DE (ESADE) that incorporated simulated annealing in the selection stage. A comparison of ESADE with the version without simulated annealing showed that ESADE with simulated annealing had better global search capability. Jadon et al. [34] proposed a hybrid artificial bee colony with DE (HABCDE), which applied DE to the onlooker bee stage of the ABC algorithm for faster convergence. Mohamed et al. [35] introduced a semi-parametric adaptation method in LSHADE hybridized with the covariance matrix adaptation evolution strategy (LSHADE-SPACMA), where the crossover operation of DE was applied to the covariance matrix adaptation evolution strategy to improve the exploration capability. Zhao et al. [36] proposed a hybrid algorithm based on a self-adaptive gravitational search algorithm and DE (SGSADE), which introduced the mutation and crossover of DE into GSA, improved the local search ability and prevented the rapid loss of population diversity. A hybrid algorithm of DE and particle swarm optimization (DEPSO) was proposed by Wang et al. [37]; at each generation of DEPSO, a selection factor determined whether each individual adopted the improved rand/1 mutation operator or the PSO mutation operator. Luo and Shi [38] mixed a modified DE with the whale optimization algorithm (MDE-WOA), which took advantage of the modified DE's strong search ability to keep WOA from falling into local optima and increased the population diversity. Li et al. [39] proposed a hybrid adaptive teaching–learning-based optimization with DE (ATLDE), which embedded DE into the learning stage of TLBO. The population of a hybrid symbiotic DE moth-flame optimization algorithm (HSDE-MFO) proposed by Wu et al. [40] was divided into two groups, used for an exploration-oriented DE strategy and an exploitation-oriented MFO strategy, respectively. Taking advantage of the easy implementation of the Boltzmann annealing algorithm [41] and the good solution diversity and effective iteration process of DE, Li et al. [42] proposed a modified Boltzmann annealing differential evolution algorithm (BADE). Ahmadianfar et al. [43] proposed an adaptive DE with PSO (A-DEPSO), which utilized PSO to improve the mutation operation of DE, promoting the global search ability and accelerating convergence, and introduced a crossover probability adaptation rate in the crossover operation of DE to increase the local search capability.

Although the performance of DE has been enhanced by the methods mentioned above, some inherent problems are still worth pondering. First of all, whether for improvements of DE components or hybridizations of DE with other meta-heuristic algorithms, the mutation, crossover and selection steps of these methods are relatively independent, and the internal cohesion of the DE framework is weak. Secondly, most hybrid improvements combine DE with a single meta-heuristic algorithm: not only are the meta-heuristic algorithms used often not novel enough, but the balance between exploration and exploitation is also addressed in only one way. Therefore, a hybrid differential evolution algorithm based on the gaining-sharing knowledge algorithm and Harris hawks optimization (DEGH) is proposed in this paper.

The rest of this paper is structured as follows. Section 2 covers the basics of differential evolution (DE), the gaining-sharing knowledge algorithm (GSK) and Harris hawks optimization (HHO). Section 3 introduces the proposed DEGH in detail. Section 4 presents a series of experimental results and analyses. Section 5 summarizes the whole paper and outlines future research directions.

2 Preliminaries

This section describes the basic principles of differential evolution (DE), the gaining-sharing knowledge algorithm (GSK) and Harris hawks optimization (HHO).

2.1. Differential evolution

The DE framework mainly includes four stages: initialization, mutation, crossover and selection, of which the last three constitute the population-based cyclic evolution process.

2.1.1 Initialization.

For a minimization problem min f(X), the population Pg in DE can be defined as:

Pg = {X1,g, X2,g, ⋯, XNP,g}, Xi,g = (x1i,g, x2i,g, ⋯, xDi,g), g = 0, 1, ⋯, G (1)

where g and G denote the current and the maximum generation number, NP is the population size, and D represents the dimension of the problem. Xmax and Xmin are the upper and lower boundaries of the solution space, respectively. The original population P0 is generated by uniform random initialization in the solution space, and then the following cyclic evolution process is performed.
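As an illustrative sketch (function and parameter names are my own, not from the paper), the random initialization of Eq (1) can be written as:

```python
import numpy as np

def initialize(NP, D, lower, upper, rng=None):
    """Random initialization of the population P0 within the solution
    space (Eq (1))."""
    rng = np.random.default_rng() if rng is None else rng
    # NP individuals, each a D-dimensional point drawn uniformly
    # from [lower, upper].
    return lower + rng.random((NP, D)) * (upper - lower)
```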

2.1.2 Mutation.

At generation g, a mutation individual Vi,g+1 is generated for each target individual Xi,g, most commonly as follows:

Vi,g+1 = Xr1,g + F·(Xr2,g − Xr3,g) (2)

where Xr1,g, Xr2,g and Xr3,g are mutually distinct individuals randomly selected from the population Pg, with r1, r2, r3 ∈ {1, 2, ⋯, NP}. The scaling factor F controls the amplification of the difference vector (Xr2,g − Xr3,g).
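A minimal sketch of the "DE/rand/1" operator of Eq (2); the helper name is illustrative, and the three indices are drawn distinct from each other and from the target index i:

```python
import numpy as np

def de_rand_1(P, i, F, rng):
    """Classic "DE/rand/1" mutation (Eq (2)): V = Xr1 + F*(Xr2 - Xr3)."""
    NP = len(P)
    candidates = [r for r in range(NP) if r != i]
    # three mutually distinct indices, all different from i
    r1, r2, r3 = rng.choice(candidates, size=3, replace=False)
    return P[r1] + F * (P[r2] - P[r3])
```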

2.1.3 Crossover.

By means of binomial crossover, components are extracted from the target individual Xi,g and the mutation individual Vi,g+1 to form the trial individual Ui,g+1:

uji,g+1 = vji,g+1, if randj ≤ CR or j = jrand; otherwise uji,g+1 = xji,g (3)

where randj is a real random number in [0,1] and jrand is a random integer in [1, D]. The crossover probability CR determines how many components are copied from the mutation individual Vi,g+1.
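Eq (3) can be sketched as follows; the mask construction makes the jrand guarantee (at least one component inherited from V) explicit:

```python
import numpy as np

def binomial_crossover(X, V, CR, rng):
    """Binomial crossover (Eq (3)): take the j-th component from V when
    randj <= CR or j == jrand, otherwise keep it from X."""
    D = len(X)
    jrand = rng.integers(D)        # guarantees one component comes from V
    mask = rng.random(D) <= CR
    mask[jrand] = True
    return np.where(mask, V, X)
```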

2.1.4 Selection.

After evaluating the fitness of the target individual and the trial individual, the winner survives into the next generation:

Xi,g+1 = Ui,g+1, if f(Ui,g+1) ≤ f(Xi,g); otherwise Xi,g+1 = Xi,g (4)
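The greedy selection of Eq (4) is a one-liner; the sketch below assumes a minimization objective f:

```python
def select(X, U, f):
    """Greedy selection (Eq (4)): the trial individual U replaces X only
    if it is at least as good on the minimization objective f."""
    return U if f(U) <= f(X) else X
```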

2.2 Gaining‑sharing knowledge algorithm

The gaining-sharing knowledge algorithm (GSK) [11] is a nature-inspired algorithm that mimics the process of gaining and sharing knowledge throughout human life, comprising a junior gaining-sharing phase and a senior gaining-sharing phase. In GSK, Djunior dimensions are randomly selected from each individual to adopt the junior scheme, and the remaining Dsenior = D − Djunior dimensions use the senior scheme. D is the dimension of the problem, and Djunior is determined by the following formula:

Djunior = D·(1 − g/G)^k (5)

where the knowledge rate k is a constant, and g and G represent the current and the maximum generation number.
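Assuming the formula of Eq (5) as stated, the junior/senior split can be computed as follows (rounding to an integer dimension count is an implementation choice of this sketch):

```python
def junior_dimensions(D, g, G, k):
    """Number of dimensions updated by the junior scheme (Eq (5)):
    D_junior = D * (1 - g/G)**k, shrinking toward 0 as g approaches G,
    so later generations rely increasingly on the senior scheme."""
    return round(D * (1.0 - g / G) ** k)
```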

2.2.1 Junior gaining-sharing phase.

In this phase, all individuals are arranged in ascending order of fitness values: Xbest,g, ⋯, Xi−1,g, Xi,g, Xi+1,g, ⋯, Xworst,g. When the knowledge ratio kr > randj (a random number in [0,1]), the jth dimension of each individual remains unchanged. Otherwise, it is updated as follows:

xji,g+1 = xji,g + kf·[(xji−1,g − xji+1,g) + (xjr,g − xji,g)], if f(Xi,g) > f(Xr,g)
xji,g+1 = xji,g + kf·[(xji−1,g − xji+1,g) + (xji,g − xjr,g)], otherwise (6)

where the knowledge factor kf is a real number greater than zero. xji,g and xji,g+1 represent the jth dimension of Xi at the current generation and the next generation, respectively. xji−1,g, xji+1,g and xjr,g are the jth dimensional components of individuals Xi−1,g, Xi+1,g and Xr,g, respectively. f(Xi,g) and f(Xr,g) denote the fitness values of Xi,g and Xr,g, respectively.
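A sketch of one junior-phase update per Eq (6), following the sign convention of the original GSK paper (move toward the shared individual when it is fitter, away from it otherwise); names are illustrative:

```python
import numpy as np

def gsk_junior(x_prev, x_i, x_next, x_r, f_i, f_r, kf):
    """Junior gaining-sharing update (Eq (6)): gain from the nearest
    better/worse neighbours (x_prev, x_next) and share with a random
    individual x_r."""
    if f_i > f_r:   # x_r is better (minimization): move toward it
        return x_i + kf * ((x_prev - x_next) + (x_r - x_i))
    else:           # x_r is worse: move away from it
        return x_i + kf * ((x_prev - x_next) + (x_i - x_r))
```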

2.2.2 Senior gaining-sharing phase.

At this stage, after sorting by fitness values, all individuals are divided into three groups: best people {Xpb,g}, middle people {Xm,g} and worst people {Xpw,g}, with sizes of 100p%, N − (2·100p%) and 100p% of the population, respectively. Similarly, the jth dimension of each individual remains unchanged when kr > randj; otherwise it is updated as follows:

xji,g+1 = xji,g + kf·[(xjrpb,g − xjrpw,g) + (xjrm,g − xji,g)], if f(Xi,g) > f(Xrm,g)
xji,g+1 = xji,g + kf·[(xjrpb,g − xjrpw,g) + (xji,g − xjrm,g)], otherwise (7)

where xjrpb,g, xjrpw,g and xjrm,g represent the jth dimension of the individuals Xrpb,g, Xrpw,g and Xrm,g, which are randomly selected from groups {Xpb,g}, {Xpw,g} and {Xm,g}, respectively.
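Analogously, a sketch of one senior-phase update per Eq (7) under the same convention (names are illustrative):

```python
import numpy as np

def gsk_senior(x_i, x_pb, x_m, x_pw, f_i, f_m, kf):
    """Senior gaining-sharing update (Eq (7)): exploit the gap between a
    best-group individual x_pb and a worst-group individual x_pw, and
    gain from or share with a middle individual x_m depending on which
    of x_i and x_m is fitter."""
    if f_i > f_m:   # the middle individual is better: move toward it
        return x_i + kf * ((x_pb - x_pw) + (x_m - x_i))
    else:           # the current individual is better: move away from x_m
        return x_i + kf * ((x_pb - x_pw) + (x_i - x_m))
```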

2.3 Harris hawks optimization

Harris hawks optimization (HHO) is a novel swarm-based algorithm proposed by Heidari et al. [10], which imitates the cooperative behavior and chasing patterns of Harris hawks during hunting. HHO has three primary phases: exploration, the transition from exploration to exploitation, and exploitation.

2.3.1 Exploration phase.

At this phase, the hawks use the following two strategies to find prey:

Xi,g+1 = Xrand,g − r1·|Xrand,g − 2·r2·Xi,g|, if q ≥ 0.5
Xi,g+1 = (Xrabbit,g − Xmean,g) − r3·(LB + r4·(UB − LB)), if q < 0.5 (8)

Xmean,g = (1/NP)·Σ(i=1..NP) Xi,g (9)

where Xmean,g and Xi,g denote the mean and current location vectors of the Harris hawks at the current generation g, and Xrand,g and Xrabbit,g are the positions of a randomly selected hawk and the prey. Xi,g+1 indicates the location vector of the hawk at the next generation g+1. r1, r2, r3, r4 and q are real random numbers in [0,1], and UB and LB are the upper and lower bounds, respectively.
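A sketch of the exploration rules of Eqs (8)-(9), following the standard HHO formulation (names are illustrative):

```python
import numpy as np

def hho_explore(P, i, x_rabbit, lb, ub, rng):
    """Exploration phase (Eqs (8)-(9)): with probability 0.5 perch
    relative to a random hawk, otherwise perch relative to the prey
    and the mean hawk position."""
    r1, r2, r3, r4, q = rng.random(5)
    if q >= 0.5:
        x_rand = P[rng.integers(len(P))]
        return x_rand - r1 * np.abs(x_rand - 2.0 * r2 * P[i])
    x_mean = P.mean(axis=0)        # Eq (9)
    return (x_rabbit - x_mean) - r3 * (lb + r4 * (ub - lb))
```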

2.3.2 Transition from exploration to exploitation.

Through the rabbit’s escaping energy E, the HHO algorithm realizes the transition from exploration to exploitation. The escaping energy E is formulated as:

E = 2·E0·(1 − g/G) (10)

where g and G indicate the current and the maximum generation number, and E0 is the initial energy, a random number in (−1,1).
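Eq (10) in code, showing the linear decay that drives the exploration-to-exploitation switch:

```python
def escaping_energy(E0, g, G):
    """Rabbit's escaping energy (Eq (10)): E = 2*E0*(1 - g/G).
    |E| decays linearly, so HHO shifts from exploration (|E| >= 1)
    toward exploitation (|E| < 1) as the generations pass."""
    return 2.0 * E0 * (1.0 - g / G)
```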

2.3.3 Exploitation phase.

According to the escaping energy E and the successful escaping chance r of the prey, diverse exploitative behaviors are adopted, such as soft besiege, hard besiege, soft besiege with progressive rapid dives and hard besiege with progressive rapid dives. The successful escaping chance r is a real random number in [0,1].

  • Soft besiege (r ≥ 0.5 and |E| ≥ 0.5). The Harris hawks softly encircle the prey, modelled as follows:

Xi,g+1 = ΔXi,g − E·|J·Xrabbit,g − Xi,g| (11)
ΔXi,g = Xrabbit,g − Xi,g (12)

where J = 2·(1 − r5) indicates the random jump strength of the prey, and r5 is a real random number in [0,1].

  • Hard besiege (r ≥ 0.5 and |E| < 0.5). The Harris hawks encircle the prey tightly, and their positions are updated as follows:

Xi,g+1 = Xrabbit,g − E·|ΔXi,g| (13)

where ΔXi,g is the difference between the positions of the rabbit and the current hawk, as defined in Eq (12).

  • Soft besiege with progressive rapid dives (r < 0.5 and |E| ≥ 0.5). The prey still has enough energy to escape, and the Harris hawks respond as follows:

Y = Xrabbit,g − E·|J·Xrabbit,g − Xi,g| (14)
Z = Y + S × LF(D) (15)
Xi,g+1 = Y, if f(Y) < f(Xi,g); Z, if f(Z) < f(Xi,g) (16)

where f(Y) and f(Z) represent the fitness values of Y and Z, respectively, S is a 1×D random vector, and D denotes the dimension of the problem. LF(D) is the Lévy flight step, obtained through the following formula:

LF(D) = 0.01·(u·σ)/|v|^(1/β), σ = [Γ(1+β)·sin(πβ/2) / (Γ((1+β)/2)·β·2^((β−1)/2))]^(1/β) (17)

where u and v are random numbers in [0,1], and β is a constant with value 1.5.

  • Hard besiege with progressive rapid dives (r < 0.5 and |E| < 0.5). In contrast to the previous behavior, the rabbit’s escaping energy is insufficient, and the behavior of the Harris hawks is modelled as follows:

Y = Xrabbit,g − E·|J·Xrabbit,g − Xmean,g| (18)
Z = Y + S × LF(D) (19)
Xi,g+1 = Y, if f(Y) < f(Xi,g); Z, if f(Z) < f(Xi,g) (20)

where Xmean,g is the average position calculated by Eq (9).
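The soft besiege update of Eqs (11)-(12) and the Lévy flight step of Eq (17) can be sketched as follows, following the standard HHO formulation (helper names are illustrative):

```python
import math
import numpy as np

def soft_besiege(x_i, x_rabbit, E, rng):
    """Soft besiege (Eqs (11)-(12)): X(g+1) = dX - E*|J*X_rabbit - X|,
    with dX = X_rabbit - X and jump strength J = 2*(1 - r5)."""
    J = 2.0 * (1.0 - rng.random())       # r5 in [0, 1)
    dX = x_rabbit - x_i                  # Eq (12)
    return dX - E * np.abs(J * x_rabbit - x_i)   # Eq (11)

def levy_flight(D, beta=1.5, rng=None):
    """Levy flight step (Eq (17)) with the Mantegna scale sigma."""
    rng = np.random.default_rng() if rng is None else rng
    sigma = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
             / (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))
             ) ** (1 / beta)
    u, v = rng.random(D), rng.random(D)
    return 0.01 * u * sigma / np.abs(v) ** (1 / beta)
```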

3 The proposed algorithm

This section gives a detailed introduction to the proposed algorithm, including its motivation, hybrid mutation operator and crossover probability self-adaption.

3.1 Motivations

According to the above introduction, both changes to DE components and hybridization with other meta-heuristic algorithms can improve the performance of DE. As for the GSK algorithm, its two-phase model has been shown to balance exploration and exploitation effectively [11]. On this basis, the mutation strategy “DE/rand/1”, with its global exploration ability, and HHO’s soft besiege strategy, with its exploitation ability, are also considered. By applying these four strategies to the mutation operation, a double-insurance mechanism for balancing exploration and exploitation is formed.

Besides, for most DE variants, the operations of mutation, crossover and selection are relatively independent. In DEGH, these operations are linked together by the control parameters F and CR and a binary variable h that records the historical evolution state, making the connections within the whole DE framework tighter.

3.2 Hybrid mutation operator

In order to achieve a better balance between exploration and exploitation, DEGH adopts a double-insurance mechanism in the mutation operation, which contains four mutation strategies. First, the junior-phase (Eq (6)) and senior-phase (Eq (7)) strategies of GSK are introduced and streamlined, which helps maintain a sufficient balance between global exploration and local exploitation capabilities during the search [44]. These two strategies are abbreviated as GSK/J-mutation and GSK/S-mutation. Second, in order to further strengthen this balance, DE’s classic mutation strategy “DE/rand/1” and the soft besiege rule from the exploitation phase of HHO are added to the hybrid mutation operator; they are called “DE/rand/1-mutation” and “HHO/SB-mutation”, respectively. Thus, GSK/J-mutation and GSK/S-mutation, combined with DE/rand/1-mutation and HHO/SB-mutation, form a hybrid mutation operator that acts as a double-insurance mechanism for balancing global exploration and local exploitation.

Before the mutation operation, all individuals are sorted by fitness value to form a new population Pg = {Xbest,g, X2,g, ⋯, XNP−1,g, Xworst,g}, which is grouped into best people {Xpb,g}, middle people {Xm,g} and worst people {Xpw,g}, as shown in Fig 1. The population sorting and grouping strategy of DEGH is the same as that of GSK. On this basis, two uniformly distributed random numbers R1i,g and R2i,g, together with the control parameters F and CRi,g, determine the mutation strategy adopted by each individual. Among them, R1i,g, R2i,g and CRi,g are maintained at the individual level.

Fig 1. Schematic of population sequencing and grouping in DEGH.

https://doi.org/10.1371/journal.pone.0250951.g001

3.2.1 GSK/J-mutation.

When R1i,g ≥ F and R2i,g < CRi,g, the junior-phase strategy of GSK (Eq (6)) is improved: the scaling factor F is substituted for the knowledge factor kf, and the mutation individual Vi,g+1 is generated as follows:

Vi,g+1 = Xi,g + F·[(Xi−1,g − Xi+1,g) + (Xr,g − Xi,g)], if f(Xi,g) > f(Xr,g)
Vi,g+1 = Xi,g + F·[(Xi−1,g − Xi+1,g) + (Xi,g − Xr,g)], otherwise (21)

where Xi−1,g and Xi+1,g are the nearest better and worse individuals of the target individual Xi,g. If Xi,g is Xbest,g, then Xi−1,g and Xi+1,g are X2,g and X3,g; if Xi,g is Xworst,g, then Xi−1,g and Xi+1,g are XNP−2,g and XNP−1,g. Xr,g denotes a randomly selected individual in the new population Pg.

3.2.2 GSK/S-mutation.

When R1i,g < F and R2i,g ≥ CRi,g, the senior-phase strategy of GSK (Eq (7)) is similarly changed, and the mutation individual Vi,g+1 is generated in the following way:

Vi,g+1 = Xi,g + F·[(Xrpb,g − Xrpw,g) + (Xrm,g − Xi,g)], if f(Xi,g) > f(Xrm,g)
Vi,g+1 = Xi,g + F·[(Xrpb,g − Xrpw,g) + (Xi,g − Xrm,g)], otherwise (22)

where Xrpb,g, Xrpw,g and Xrm,g are randomly chosen individuals from the best people {Xpb,g}, worst people {Xpw,g} and middle people {Xm,g}, respectively.

3.2.3 DE/rand/1-mutation.

When R1i,g ≥ F and R2i,g ≥ CRi,g, the mutation individual Vi,g+1 is produced by the classic mutation operator of DE in Eq (2), which is famous for its strong global search capability.

3.2.4 HHO/SB-mutation.

When R1i,g < F and R2i,g < CRi,g, according to an enhanced version of the soft besiege rule from the exploitation phase of HHO, the mutation individual Vi,g+1 is obtained as follows.

(23)(24)
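The four cases of Sections 3.2.1-3.2.4 amount to a 2x2 dispatch on (R1 vs. F, R2 vs. CR); a sketch:

```python
def pick_mutation(R1, R2, F, CR):
    """2x2 dispatch of the hybrid mutation operator (Sections 3.2.1-3.2.4):
    the random numbers R1, R2 are compared against F and CR to choose one
    of the four mutation strategies for the current individual."""
    if R1 >= F:
        return "GSK/J" if R2 < CR else "DE/rand/1"
    else:
        return "HHO/SB" if R2 < CR else "GSK/S"
```

This makes explicit why F and CR jointly steer each individual: F splits the population between the GSK-style and the DE/HHO-style updates, while CR splits each half again.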

3.3 Crossover probability self-adaption

As shown in the mutation operation above, the crossover probability CR affects which mutation operator each individual adopts. In order to link the internal phases of DE more closely, the adjustment of CR is associated with the mutation and selection operations.

At each generation of DEGH, the usage frequencies of GSK/J-mutation, GSK/S-mutation, DE/rand/1-mutation and HHO/SB-mutation are counted and denoted as anum, bnum, cnum and dnum, respectively. At the same time, the mutation strategy adopted by each individual is labelled with flag: individuals using GSK/J-mutation have flag = 1; individuals using GSK/S-mutation have flag = 2; individuals using DE/rand/1-mutation have flag = 3; individuals using HHO/SB-mutation have flag = 4. Besides, in the selection operation of DEGH, a binary variable h recording the evolutionary status of the trial individual is introduced to participate in the adjustment of CR. If the trial individual fails to evolve, hi,g+1 is set to 0 and CR is assigned a random number in [0,1]. Otherwise, hi,g+1 is set to 1 and CR is adjusted adaptively as follows. (25) where flagi records the mutation strategy applied by individual Xi,g and NP is the population size.
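The bookkeeping described above can be sketched as follows; note that `adapt_cr` is a hypothetical stand-in for Eq (25), whose exact form is given in the paper (here it simply biases CR toward the usage frequency of the successful strategy):

```python
import random

def update_cr(CR, evolved, flag, counts, NP):
    """Sketch of the CR self-adaption bookkeeping: counts[flag] holds
    anum..dnum, the per-generation usage frequencies of the four mutation
    strategies, and NP is the population size."""
    if not evolved:
        # trial failed: h = 0 and CR is re-drawn uniformly from [0, 1]
        return 0, random.random()
    # trial succeeded: h = 1 and CR follows the adaptive rule of Eq (25)
    return 1, adapt_cr(CR, flag, counts, NP)

def adapt_cr(CR, flag, counts, NP):
    # hypothetical stand-in for Eq (25): bias CR toward the relative
    # frequency of the strategy that just produced a successful trial
    return 0.5 * CR + 0.5 * counts[flag] / NP
```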

3.4 Pseudocode of the proposed algorithm

Based on the above description, the pseudo-code of the proposed DEGH algorithm is reported in Fig 2, where the hybrid mutation operator appears in lines 11–27 and the crossover probability self-adaption strategy in lines 28–34.

3.5 Computational complexity

The computational complexity of DEGH depends on the following aspects: initialization, sorting, evaluation, mutation, crossover and selection. Compared with the original DE, DEGH only adds the cost of sorting. The computational complexity of the original DE is O(NP·D·G), while sorting costs O(NP·log NP) per generation, which is dominated by the other operations. In general, therefore, the computational complexity of DEGH remains the same as that of the original DE, namely O(NP·D·G).

4 Experimental results and analysis

The performance of the proposed DEGH is evaluated on 32 well-known benchmark functions [45, 46] listed in Table 1, in which f1~f14 are unimodal functions and f15~f32 are multimodal functions. Besides, DEGH is compared with eight enhanced DE algorithms, including IMMSADE [19], CIPDE [20], EBDE [21], EDE [21], EJADE [30], LSHADE-SPACMA [35], DEPSO [37] and ATLDE [39], at D = 30, 50 and 100. The former five algorithms are based on changes to DE components, while the latter three are hybrids of DE and other meta-heuristic algorithms.

4.1 Experimental setting

In the following experiments, to ensure a fair comparison, the common parameters of all algorithms are set the same: the maximum generation number G is set to 1000, the population size NP is set to 100, and 30 independent runs are conducted. Other parameter settings of each algorithm are shown in Table 2.

4.2 Parameter study

In this section, the sensitivity of DEGH to the population size NP and the scaling factor F, as well as the effectiveness of the crossover probability self-adaption, are studied through relevant experiments.

4.2.1 Sensitivity analysis to population size.

As one of the control parameters of DE, the influence of the population size NP on the performance of DEGH is studied on the 32 benchmark functions at D = 30. DEGH variants with NP = 50, 150, 200, 250 are compared with the standard DEGH with NP = 100, and the optimization results are evaluated by the Friedman, Kruskal-Wallis and Wilcoxon’s rank-sum tests [47]. The statistical test results are shown in Fig 3 and Table 3.

Fig 3. The results of Friedman and Kruskal-Wallis tests for DEGHs with different population size.

https://doi.org/10.1371/journal.pone.0250951.g003

Table 3. The results of Wilcoxon’s rank-sum test between DEGH with NP = 250 and DEGHs with different population size.

https://doi.org/10.1371/journal.pone.0250951.t003

As can be seen from Fig 3, the performance of DEGH improves as NP increases, performing best at NP = 250. However, the data listed in Table 3 show no significant difference in the performance of DEGH under different NP values; that is, DEGH is not sensitive to the population size NP. For generality, the population size NP is therefore set to 100 in the following experiments.

4.2.2 Sensitivity analysis to scaling factor.

In DEGH, the scaling factor F plays a vital role in the mutation operation. By varying F over [0.1, 0.9] in steps of 0.1, a series of experiments is conducted to analyze the sensitivity to the scaling factor. Three nonparametric statistical tests are used to analyze the optimization results on the 30-dimensional problems with different F values, which are recorded in Fig 4 and Table 4, respectively.

Fig 4. The results of the Friedman and Kruskal-Wallis tests for DEGHs with different F values.

https://doi.org/10.1371/journal.pone.0250951.g004

Table 4. The results of Wilcoxon’s rank-sum test between DEGH with F = 0.3 and DEGHs with different F values.

https://doi.org/10.1371/journal.pone.0250951.t004

From Fig 4, it is clear that the performance of DEGH is best at F = 0.3. From Table 4, it can be seen that DEGH is insensitive to F except at F = 0.1. Therefore, F = 0.3 can be considered a suitable value for the subsequent experiments.

4.2.3 Efficiency analysis to crossover probability.

In order to investigate the effectiveness of the crossover probability self-adaption strategy in DEGH, variants with fixed CR = 0.2, 0.4, 0.8 and CR = rand are compared with the proposed DEGH, where rand represents a random real number in [0,1]. The results of the non-parametric statistical tests on these DEGH variants are shown in Fig 5 and Table 5.

Fig 5. The results of Friedman and Kruskal-Wallis tests for DEGH and its variants with different CR values.

https://doi.org/10.1371/journal.pone.0250951.g005

Table 5. The results of Wilcoxon’s rank-sum test between DEGH and its variants with different CR values.

https://doi.org/10.1371/journal.pone.0250951.t005

From Fig 5, it is evident that the proposed DEGH is the best and that DEGH with CR = rand is the second best. It can be concluded from Table 5 that, except for DEGH with CR = 0.2, there is no significant performance difference between DEGH and its variants. In other words, the crossover probability self-adaption is effective, but DEGH is not highly sensitive to the crossover probability.

4.3 Comparison with eight state-of-the-art DE variants

In order to comprehensively evaluate the performance of the proposed algorithm, the optimization results and convergence properties of DEGH and the eight enhanced DE algorithms on the 32 benchmark functions at D = 30, 50 and 100 are compared and analyzed.

4.3.1 Optimization results.

The optimization results of each algorithm at D = 30, D = 50 and D = 100 are listed in Tables 6–8, respectively, where Mean and STD refer to the average and standard deviation of the function error value over 30 independent runs. Besides, the Wilcoxon signed-rank test results for each dimension are shown in Table 9, where the symbols “+/−/≈” indicate that the performance of DEGH is “better than/worse than/similar to” that of the compared algorithm.

Table 6. Mean and STD obtained by eight enhanced DEs and DEGH on benchmark functions at 30D.

https://doi.org/10.1371/journal.pone.0250951.t006

Table 7. Mean and STD obtained by eight enhanced DEs and DEGH on benchmark functions at 50D.

https://doi.org/10.1371/journal.pone.0250951.t007

Table 8. Mean and STD obtained by eight enhanced DEs and DEGH on benchmark functions at 100D.

https://doi.org/10.1371/journal.pone.0250951.t008

Table 9. The results of Wilcoxon’s signed-rank test at the 0.05 significance level between DEGH and eight DE variants.

https://doi.org/10.1371/journal.pone.0250951.t009

At D = 30, from Table 6, the proposed DEGH attains the global optimal solution on functions f1~f12, f16~f24, f27, f28 and f30. For the step function f13, CIPDE, EBDE, EJADE and LSHADE-SPACMA obtain the global optimum. For the noise function f14, ATLDE is the best. EJADE, CIPDE, EDE and CIPDE give the optimal solutions for the multimodal functions f15, f25, f26 and f29, respectively. For f31, CIPDE, EBDE, EDE and LSHADE-SPACMA are best. For f32, LSHADE-SPACMA finds the best solution. As can be seen from the Wilcoxon signed-rank test results for D = 30 in Table 9, DEGH is superior to IMMSADE, CIPDE, EBDE, EDE, EJADE, LSHADE-SPACMA, DEPSO and ATLDE on 26, 19, 24, 21, 24, 21, 26 and 25 out of 32 functions, respectively.

At D = 50, it can also be seen from Table 7 that DEGH attains the global minimum on f1~f12, f16~f24, f27, f28 and f30. For f13 and f25, CIPDE is the best. ATLDE, EJADE, IMMSADE, EBDE and LSHADE-SPACMA obtain the optimal solutions of f14, f15, f26, f29 and f32, respectively. CIPDE and LSHADE-SPACMA perform better than the other algorithms on f31. According to the test results for the 50-dimensional problems in Table 9, DEGH is better than IMMSADE, CIPDE, EBDE, EDE, EJADE, LSHADE-SPACMA, DEPSO and ATLDE on 27, 24, 26, 26, 24, 25 and 25 of the 32 functions, respectively.

At D = 100, from Table 8, DEGH is best except on two unimodal functions f13, f14 and four multimodal functions f25, f29, f31, f32. For f13, f31 and f32, CIPDE obtains the optimal solutions. ATLDE, EDE and LSHADE-SPACMA find the best solutions on f14, f25 and f29, respectively. From Table 9, DEGH outperforms IMMSADE, CIPDE, EBDE, EDE, EJADE, LSHADE-SPACMA, DEPSO and ATLDE on 29, 27, 29, 27, 29, 27, 24 and 24 functions, respectively.

Furthermore, three non-parametric statistical tests are used to analyze these optimization results. The Friedman and Kruskal-Wallis test results drawn in Fig 6 show that DEGH is the best in all dimensions. The Wilcoxon’s rank-sum test results in Table 10 show that all positive rank sums R+ obtained are far larger than the negative rank sums R−, regardless of the dimension or the compared algorithm. Moreover, whether the significance level is set to 0.05 or 0.01, all p-values obtained fall far below it. In other words, the Wilcoxon’s rank-sum test also confirms that DEGH is significantly superior to the other compared algorithms.

Fig 6. The results of the Friedman and Kruskal-Wallis tests for all algorithms at D = 30, 50, 100.

https://doi.org/10.1371/journal.pone.0250951.g006

Table 10. The Wilcoxon’s rank-sum test results for all algorithms at D = 30, 50, 100.

https://doi.org/10.1371/journal.pone.0250951.t010

4.3.2 Convergence properties.

The convergence properties can be summarized into the following four types, which are depicted in Fig 7.

Fig 7. Convergence curves of the mean function error values for f1, f17, f25 and f31 at D = 30, 50, 100.

The horizontal axis shows the generation number; the vertical axis shows the mean function error value over 30 independent runs.

https://doi.org/10.1371/journal.pone.0250951.g007

  1. The convergence curves of f1~f12, f16, f22 and f27 are similar, as shown in Fig 7(A). In this type, DEGH shows no apparent advantage at the beginning of evolution, but it is the first to converge to the global minimum.
  2. The convergence curves of f17~f21, f23, f24, f28 and f30 form a second class, as shown in Fig 7(B). In this type, DEGH shows an absolute advantage from the beginning: its curve has the steepest slope and quickly converges to the global minimum, while the other algorithms evolve slowly or stall.
  3. Fig 7(C) plots the convergence curve of f25, which is similar to those of f14, f15, f26 and f29. On these functions, all algorithms suffer varying degrees of evolutionary stagnation or slow evolution.
  4. The evolutionary trends of f13, f31 and f32 are similar, as shown in Fig 7(D). Here, some algorithms fall into evolutionary stagnation, whereas DEGH continues to evolve downward.
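Curves of the kind plotted in Fig 7 are produced by recording the best function error in each generation and averaging over the independent runs. The sketch below illustrates this with a plain DE/rand/1/bin loop on the sphere function; it is a minimal stand-in, not the paper's DEGH implementation, and uses 5 runs instead of 30 for speed:

```python
import numpy as np

rng = np.random.default_rng(1)

def run_de(func, dim=10, pop_size=30, gens=200, F=0.5, CR=0.9):
    """One run of plain DE/rand/1/bin; returns the best error per generation."""
    pop = rng.uniform(-5.0, 5.0, size=(pop_size, dim))
    fit = np.array([func(x) for x in pop])
    best_per_gen = []
    for _ in range(gens):
        for i in range(pop_size):
            # DE/rand/1 mutation with three distinct random indices, all != i
            r1, r2, r3 = rng.choice([j for j in range(pop_size) if j != i],
                                    size=3, replace=False)
            v = pop[r1] + F * (pop[r2] - pop[r3])
            # Binomial crossover, forcing at least one mutant component
            mask = rng.random(dim) < CR
            mask[rng.integers(dim)] = True
            u = np.where(mask, v, pop[i])
            fu = func(u)
            if fu <= fit[i]:  # greedy selection
                pop[i], fit[i] = u, fu
        best_per_gen.append(fit.min())
    return np.array(best_per_gen)

sphere = lambda x: float(np.dot(x, x))  # global minimum 0 at the origin

# Each curve in Fig 7 is such a mean-error trajectory over the runs.
curves = np.array([run_de(sphere) for _ in range(5)])
mean_curve = curves.mean(axis=0)
```

Plotting `mean_curve` on a log scale against the generation index yields a convergence curve of the type discussed above.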

4.4 Discussion on results

The above experiments demonstrate the remarkable superiority of the proposed DEGH. The reasons for DEGH's outstanding performance are summarized as follows. (1) DEGH, built on the GSK and HHO algorithms, improves and hybridizes the DE framework. On the one hand, the GSK/J-mutation and GSK/S-mutation operators strike a good balance between global exploration and local exploitation. On the other hand, the DE/rand/1-mutation and HHO/SB-mutation operators provide another powerful guarantee of this balance. These two aspects cooperate with each other, forming a dual-safeguard mechanism for the balance between exploration and exploitation. (2) The crossover probability self-adaption strategy of DEGH strengthens the internal connection between the mutation, crossover and selection stages and makes the whole framework more harmonious. On this basis, the crossover probability and scaling factor dynamically adjust the evolution strategy of each individual, making the proposed algorithm better suited to various problems.
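To make the classical ingredients of this hybrid concrete, the sketch below implements two published building blocks that DEGH draws on, the DE/rand/1 mutation [3] and the HHO soft-besiege rule [10], together with standard binomial crossover. It is a minimal illustration of those base operators only; DEGH's GSK-derived operators, its improved variants of these rules, and its crossover probability self-adaption are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(2)

def rand1_mutation(pop, i, F=0.5):
    """Classical DE/rand/1: v = x_r1 + F * (x_r2 - x_r3), with r1, r2, r3 != i."""
    r1, r2, r3 = rng.choice([j for j in range(len(pop)) if j != i],
                            size=3, replace=False)
    return pop[r1] + F * (pop[r2] - pop[r3])

def soft_besiege(x, best, E=0.4):
    """HHO soft besiege: move relative to the prey (best solution) with
    escaping energy E and a random jump strength J in (0, 2]."""
    J = 2.0 * (1.0 - rng.random())
    return (best - x) - E * np.abs(J * best - x)

def binomial_crossover(x, v, CR=0.9):
    """Standard DE binomial crossover; at least one dimension comes from v."""
    mask = rng.random(len(x)) < CR
    mask[rng.integers(len(x))] = True
    return np.where(mask, v, x)

# Usage on a toy 5-individual, 3-dimensional population.
pop = rng.standard_normal((5, 3))
best = pop[np.argmin((pop ** 2).sum(axis=1))]
v1 = rand1_mutation(pop, i=0)     # exploration-leaning trial direction
v2 = soft_besiege(pop[0], best)   # exploitation-leaning move toward the best
u = binomial_crossover(pop[0], v1)
```

The difference vector in `rand1_mutation` scatters trial points across the population (exploration), while `soft_besiege` pulls them toward the incumbent best (exploitation), which is the complementary pairing the dual-safeguard mechanism exploits.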

5 Conclusions

This paper proposes a hybrid differential evolution algorithm based on the gaining-sharing knowledge algorithm and harris hawks optimization (DEGH), which achieves excellent performance even with a fixed scaling factor. Through a series of experiments, the effectiveness and parameter sensitivity of DEGH are investigated. The performance of DEGH is evaluated by comparing it with eight state-of-the-art DE variants, namely IMMSADE [19], CIPDE [20], EBDE [21], EDE [21], EJADE [30], LSHADE-SPACMA [35], DEPSO [37] and ATLDE [39], on 32 benchmark functions at D = 30, 50 and 100. Experimental results show that: 1) DEGH is not sensitive to the population size NP; 2) DEGH is insensitive to F except when F = 0.1; 3) among all compared DE variants, DEGH has the best overall performance.

As an extension of this research work, the following aspects are future research directions: 1) a binary version of DEGH and its application in flight sequencing systems; 2) applying DEGH to the optimization of neural network parameters and, further, to flight trajectory prediction; 3) hybridizing DE with other emerging meta-heuristic algorithms.

References

  1. Beyer HG, Schwefel HP. Evolution strategies—A comprehensive introduction. Natural Computing. 2002; 1(1):3–52.
  2. Holland JH. Adaptation in Natural and Artificial Systems (second ed.). MIT Press, 1975.
  3. Storn R, Price K. Differential Evolution—A simple and efficient heuristic for global optimization over continuous spaces. Journal of Global Optimization. 1997; 11(4):341–359.
  4. Kennedy J, Eberhart RC. Particle swarm optimization. 1995 IEEE International Conference on Neural Networks. 1995; 1942–1948.
  5. Karaboga D, Basturk B. A powerful and efficient algorithm for numerical function optimization: artificial bee colony (ABC) algorithm. Journal of Global Optimization. 2007; 39(3):459–471.
  6. Rashedi E, Nezamabadi-pour H, Saryazdi S. GSA: A Gravitational Search Algorithm. Information Sciences. 2009; 179(13):2232–2248.
  7. Rao RV, Savsani VJ, Vakharia DP. Teaching-learning-based optimization: A novel method for constrained mechanical design optimization problems. Computer Aided Design. 2011; 43(3):303–315.
  8. Mirjalili S. Moth-flame optimization algorithm: A novel nature-inspired heuristic paradigm. Knowledge-Based Systems. 2015; 89:228–249.
  9. Mirjalili S, Lewis A. The Whale Optimization Algorithm. Advances in Engineering Software. 2016; 95:51–67.
  10. Heidari AA, Mirjalili S, Faris H, Aljarah I, Mafarja M, Chen HL. Harris hawks optimization: Algorithm and applications. Future Generation Computer Systems. 2019; 97:849–872.
  11. Mohamed AW, Hadi AA, Mohamed AK. Gaining-sharing knowledge based algorithm for solving optimization problems: a novel nature-inspired algorithm. International Journal of Machine Learning and Cybernetics. 2020; 11(7):1501–1529.
  12. Bilal, Pant M, Zaheer H, et al. Differential Evolution: A review of more than two decades of research. Engineering Applications of Artificial Intelligence. 2020; 90:103479.
  13. Luo XB, Wan Y, He XJ, Mori K. Observation-driven adaptive differential evolution and its application to accurate and smooth bronchoscope three-dimensional motion tracking. Medical Image Analysis. 2015; 24(1):282–296. pmid:25660001
  14. Diab MD, Hindi KME. Using differential evolution for fine tuning naive Bayesian classifiers and its application for text classification. Applied Soft Computing. 2017; 54:183–199.
  15. Pishchalnikov R. Application of the Differential Evolution for simulation of the linear optical response of photosynthetic pigments. Journal of Computational Physics. 2018; 372:603–615.
  16. Ikeda S, Ooka R. Application of differential evolution-based constrained optimization methods to district energy optimization and comparison with dynamic programming. Applied Energy. 2019; 254:113670.
  17. Troumbis IA, Tsekouras GE, Tsimikas J, Kalloniatis C, Haralambopoulos D. A Chebyshev polynomial feedforward neural network trained by differential evolution and its application in environmental case studies. Environmental Modelling and Software. 2020; 126:104663.
  18. Zhang JQ, Sanderson AC. JADE: Adaptive Differential Evolution with Optional External Archive. IEEE Transactions on Evolutionary Computation. 2009; 13(5):945–958.
  19. Wang SH, Li YZ, Yang HY. Self-adaptive differential evolution algorithm with improved mutation mode. Applied Intelligence. 2017; 47(3):644–658.
  20. Zheng LM, Zhang SX, Tang KS, Zheng SY. Differential Evolution Powered by Collective Information. Information Sciences. 2017; 399:13–29.
  21. Mohamed AW, Hadi AA, Jambi KM. Novel mutation strategy for enhancing SHADE and LSHADE algorithms for global numerical optimization. Swarm and Evolutionary Computation. 2019; 50:100455.
  22. Li YZ, Wang SH, Yang B. An improved differential evolution algorithm with dual mutation strategies collaboration. Expert Systems with Applications. 2020; 153:113451.
  23. Ghosh A, Das S, Mullick SS, Mallipeddi R, Das AK. A Switched Parameter Differential Evolution with Optional Blending Crossover for Scalable Numerical Optimization. Applied Soft Computing. 2017; 57:329–352.
  24. Tian MN, Gao XB, Dai C. Differential evolution with improved individual-based parameter setting and selection strategy. Applied Soft Computing. 2017; 56:286–297.
  25. Cheng JC, Pan ZB, Liang H, Gao ZQ, Gao JH. Differential evolution algorithm with fitness and diversity ranking-based mutation operator. Swarm and Evolutionary Computation. 2021; 61:100816.
  26. Tanabe R, Fukunaga A. Success-history based parameter adaptation for Differential Evolution. 2013 IEEE Congress on Evolutionary Computation. 2013; 71–78.
  27. Tanabe R, Fukunaga AS. Improving the search performance of SHADE using linear population size reduction. 2014 IEEE Congress on Evolutionary Computation. 2014; 1658–1665.
  28. Poláková R, Tvrdík J, Bujok P. Differential evolution with adaptive mechanism of population size according to current population diversity. Swarm and Evolutionary Computation. 2019; 50:100519.
  29. Meng ZY, Pan JS, Tseng KK. PaDE: An enhanced Differential Evolution algorithm with novel control parameter adaptation schemes for numerical optimization. Knowledge-Based Systems. 2019; 168(15):80–99.
  30. Li SJ, Gu Q, Gong WY, Ning B. An enhanced adaptive differential evolution algorithm for parameter extraction of photovoltaic models. Energy Conversion and Management. 2020; 205:112443.
  31. Wang SL, Morsidi F, Ng TF, Budiman H, Neoh SC. Insights into the Effects of Control Parameters and Mutation Strategy on Self-adaptive Ensemble-based Differential Evolution. Information Sciences. 2020; 514:203–233.
  32. Xue XS, Chen JF. Matching biomedical ontologies through Compact Differential Evolution algorithm with compact adaption schemes on control parameters. Neurocomputing. 2020.
  33. Guo HX, Li YN, Li JL, S H, Wang DY, Chen XH. Differential evolution improved with self-adaptive control parameters based on simulated annealing. Swarm and Evolutionary Computation. 2014; 19:52–67.
  34. Jadon SS, Tiwari R, Sharma H, Bansal JC. Hybrid Artificial Bee Colony Algorithm with Differential Evolution. Applied Soft Computing. 2017; 58:11–24.
  35. Mohamed AW, Hadi AA, Fattouh AM, Jambi KM. LSHADE with semi-parameter adaptation hybrid with CMA-ES for solving CEC 2017 benchmark problems. 2017 IEEE Congress on Evolutionary Computation (CEC). 2017; 145–152.
  36. Zhao FQ, Xue FL, Zhang Y, Ma WM, Zhang C, Song HB. A hybrid algorithm based on self-adaptive gravitational search algorithm and differential evolution. Expert Systems with Applications. 2018; 113:515–530.
  37. Wang SH, Li YZ, Yang HY. Self-adaptive mutation differential evolution algorithm based on particle swarm optimization. Applied Soft Computing. 2019; 81:105496.
  38. Luo J, Shi BY. A hybrid whale optimization algorithm based on modified differential evolution for global optimization problems. Applied Intelligence. 2019; 49(5):1982–2000.
  39. Li SJ, Gong WY, Wang L, Yan XS, Hu CY. A hybrid adaptive teaching-learning-based optimization and differential evolution for parameter identification of photovoltaic models. Energy Conversion and Management. 2020; 225:113474.
  40. Wu YF, Chen RL, Li CQ, Zhang LYY, Cui ZL. Hybrid Symbiotic Differential Evolution Moth-Flame Optimization Algorithm for Estimating Parameters of Photovoltaic Models. IEEE Access. 2020; 8:156328–156346.
  41. Aarts EHL, Korst JHM. Boltzmann machines as a model for parallel annealing. Algorithmica. 1991; 6(3):437–465.
  42. Li H, Wang H, Wang L, Zhou XT. A modified Boltzmann Annealing Differential Evolution algorithm for inversion of directional resistivity logging-while-drilling measurements. Journal of Petroleum Science and Engineering. 2020; 188:106916.
  43. Ahmadianfar I, Kheyrandish A, Jamei M, Gharabaghi B. Optimizing operating rules for multi-reservoir hydropower generation systems: An adaptive hybrid differential evolution algorithm. Renewable Energy. 2021; 167:774–790.
  44. Agrawal P, Ganesh T, Mohamed AW. A novel binary gaining-sharing knowledge-based optimization algorithm for feature selection. Neural Computing and Applications. 2020.
  45. Liang JJ, Qu BY, Suganthan PN, Chen Q. Problem definition and evaluation criteria for the CEC 2015 competition on learning-based real-parameter single objective optimization. 2015.
  46. Awad NH, Ali MZ, Suganthan PN, Liang JJ, Qu BY. Problem definitions and evaluation criteria for the CEC 2017 Special Session and Competition on Single Objective Real-Parameter Numerical Optimization. 2016.
  47. Derrac J, García S, Molina D, Herrera F. A practical tutorial on the use of nonparametric statistical tests as a methodology for comparing evolutionary and swarm intelligence algorithms. Swarm and Evolutionary Computation. 2011; 1(1):3–18.