
A Novel Flexible Inertia Weight Particle Swarm Optimization Algorithm

Abstract

Particle swarm optimization (PSO) is an evolutionary computing method inspired by the intelligent collective behavior of some animals. It is easy to implement and has few parameters to adjust. The performance of the PSO algorithm depends greatly on appropriate strategies for fine-tuning its parameters. The inertia weight (IW) is one of these parameters and is used to balance the exploration and exploitation characteristics of PSO. This paper proposes a new nonlinear strategy for selecting the inertia weight, named the Flexible Exponential Inertia Weight (FEIW) strategy, because for each problem an increasing or decreasing inertia weight schedule can be constructed through suitable parameter selection. The efficacy and efficiency of the PSO algorithm with the FEIW strategy (FEPSO) are validated on a suite of benchmark problems with different dimensions. FEIW is also compared with the best time-varying, adaptive, constant and random inertia weights. Experimental results and statistical analysis show that FEIW improves search performance in terms of both solution quality and convergence rate.

1 Introduction

Swarm intelligence is an exciting research field still in its infancy compared to other paradigms in artificial intelligence [1]. One of the research areas within computational swarm intelligence is particle swarm optimization (PSO), which was developed by Eberhart and Kennedy in 1995 [2, 3], inspired by the intelligent collective behavior of animals such as flocks of birds or schools of fish. In PSO, each individual represents a potential solution and is termed a “particle”, and the flock of particles, called the “swarm”, represents the population of individuals; a population of potential solutions is thus evolved through successive iterations. The most important advantages of PSO, compared to other optimization strategies, lie in its speedy convergence towards the global optimum, easily implementable code, freedom from complex computations, and few parameters to adjust.

Accelerating convergence speed and avoiding local optima have become the two most important and appealing goals in PSO research, and a number of PSO variants have been proposed to achieve them [4, 5]. It appears difficult to achieve both goals simultaneously; for example, the comprehensive-learning PSO in [5] focuses on avoiding local optima but brings in slower convergence as a result. Therefore, despite having several attractive features and being a potential global optimizer, PSO, like several other population-based search algorithms, has certain drawbacks. To overcome the drawbacks caused by “stagnation of particles”, several attempts have been made to enhance the performance of PSO, and the improved variants have superseded the standard one. These include proposing an inertia weight (IW) [6, 7], introducing constriction-factor-based PSO [8], weighting a particle’s own experience and its neighbors’ experience [9], fine-tuning various PSO parameters [10], and proposing different interaction methods among PSO particles [11, 12]. Moreover, PSO has been hybridized [13] with concepts borrowed from other heuristic and deterministic algorithms to improve its searching ability and enhance its convergence towards the global optimum.

As is known, the IW balances the proportion of global search ability and local exploration ability: when its value is larger, the algorithm has stronger global search ability and poorer local exploration ability, and when its value is smaller, the reverse holds. In other words, the IW controls the particle’s momentum, and many strategies have been proposed in previous studies to choose a suitable IW that maintains the exploration–exploitation trade-off throughout the search process. In this paper we propose a flexible exponential inertia weight (FEIW) PSO algorithm (FEPSO) for optimization problems. This work differs from the existing time-varying IW strategies in at least two aspects: first, it proposes a flexible IW which can adapt to each problem, i.e., for a given optimization problem, a special IW strategy with the best performance for solving it can be obtained through suitable parameter selection; second, it compares the best time-varying, adaptive and primitive IW strategies with FEIW and shows that FEPSO is more efficacious for optimization problems.

The rest of this paper is organized as follows: Section 2 presents the principles of the particle swarm optimization algorithm. A review of inertia weight strategies is given in Section 3. The proposed inertia weight and its properties are discussed in Section 4. In Section 5, parameter settings and performance evaluation criteria are introduced. The numerical analysis, statistical tests and discussion of results are presented in Section 6, and conclusions are given in Section 7.

2 The Principles of Particle Swarm Optimization Algorithm

The basic idea of the PSO algorithm is to search for the optimum value through collaboration and information sharing between individuals, where each particle’s quality is measured by its fitness value. First, the positions and velocities of a group of particles are initialized randomly, and the optimal solution is then searched for by updating generations in the search space. Suppose that the size of the swarm is M and the search space is D-dimensional. The position of the ith particle is presented as xi = (xi1, xi2, …, xiD), where xid ∈ [ld, ud], d ∈ [1, D], and ld and ud are the lower and upper bounds of the dth dimension of the search space. The velocity of each particle is represented as a vector; the ith particle’s velocity is vi = (vi1, vi2, …, viD). At each time step, the position and velocity of the particles are updated according to the following equations [2]:
(1) vid(t+1) = vid(t) + c1 r1id (pbestid − xid(t)) + c2 r2id (gbestd − xid(t))
(2) xid(t+1) = xid(t) + vid(t+1)
where r1id, r2id are two distinct random numbers [2], generated uniformly from the range [0,1], the acceleration coefficients c1, c2 are two positive constants [3], and t is the current iteration. The best previous position found so far by the ith particle is denoted pbesti = (pi1, pi2, …, piD), and the best previous position discovered by the whole swarm is denoted gbest = (g1, g2, …, gD). The velocity of each particle is constrained to [vmin, vmax]^D.

The balance between global and local search throughout the course of a run is critical to the success of an optimization algorithm [14]. Almost all evolutionary algorithms utilize some mechanism to achieve this goal. To bring about a balance between the exploration and exploitation characteristics of PSO, Shi and Eberhart proposed a PSO based on an inertia weight (ω) in which the velocity of each particle is updated according to the following equation [15]:
(3) vid(t+1) = ω vid(t) + c1 r1id (pbestid − xid(t)) + c2 r2id (gbestd − xid(t))

They claimed that a large IW facilitates a global search while a small IW facilitates a local search. By changing the IW dynamically, the search capability is dynamically adjusted. This is a general statement about the impact of ω on PSO’s search behavior shared by many other researchers. However, there are situations where this rule cannot be applied successfully [16].

The PSO procedure can be divided into the following steps (a minimal Python sketch follows the list):

  1. Initialize the positions and velocities of the particle swarm;
  2. Calculate the fitness value of each particle;
  3. For each particle, compare its fitness value with that of its pbest; if the current value is better, replace pbest with the current position and update its fitness value accordingly;
  4. Determine the particle with the best fitness value in the swarm; if this fitness value is better than that of gbest, update gbest and its fitness value with this particle’s position;
  5. Check the termination criterion; if it is satisfied, stop the iteration;
  6. Otherwise, update the positions and velocities of the particle swarm and return to step 2.
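The sketch below illustrates steps 1–6 together with the updates of Eqs (1)–(3). It is an illustration only, not the authors’ MATLAB implementation; the sphere objective, the random seed and the default linearly decreasing inertia weight are our assumptions.

```python
# Minimal global-best PSO with a pluggable inertia weight schedule.
import numpy as np

def pso(f, lb, ub, M=50, I_max=1000, c1=2.0, c2=2.0,
        iw=lambda t, I_max: 0.9 - 0.5 * t / I_max):
    """`iw` maps (t, I_max) to the inertia weight at iteration t."""
    D = len(lb)
    rng = np.random.default_rng(0)
    x = rng.uniform(lb, ub, (M, D))           # step 1: positions
    v_max = 0.1 * (ub - lb)                   # velocity clamp (cf. Section 5.1)
    v = rng.uniform(-v_max, v_max, (M, D))    # step 1: velocities
    pbest = x.copy()
    pbest_f = np.apply_along_axis(f, 1, x)    # step 2: fitness
    g = pbest_f.argmin()
    gbest, gbest_f = pbest[g].copy(), pbest_f[g]
    for t in range(I_max):                    # step 5: termination by I_max
        w = iw(t, I_max)
        r1, r2 = rng.random((M, D)), rng.random((M, D))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)  # Eq (3)
        v = np.clip(v, -v_max, v_max)
        x = np.clip(x + v, lb, ub)            # Eq (2), step 6
        fx = np.apply_along_axis(f, 1, x)     # step 2
        better = fx < pbest_f                 # step 3: update pbest
        pbest[better], pbest_f[better] = x[better], fx[better]
        g = pbest_f.argmin()                  # step 4: update gbest
        if pbest_f[g] < gbest_f:
            gbest, gbest_f = pbest[g].copy(), pbest_f[g]
    return gbest, gbest_f

# Example on the 10-dimensional sphere function.
sphere = lambda z: float(np.sum(z * z))
best_x, best_f = pso(sphere, np.full(10, -100.0), np.full(10, 100.0))
```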

3 Review on Inertia Weight Strategies

Since the initial development of PSO, several variants of this algorithm have been proposed. The basic PSO, presented by Kennedy and Eberhart in 1995 [2], has no IW. The first modification of PSO was the introduction of an IW parameter into the velocity update equation, resulting in Eq (3), a model which is now accepted as the global best PSO algorithm [15]. In this section, the various IW strategies are categorized into three classes. The “primitive class” contains strategies in which the value of the IW is constant during the search or is determined randomly; none of these methods uses any feedback. The “adaptive class” contains methods which use a feedback parameter to monitor the state of the algorithm and adjust the value of the IW. In the “time-varying class”, the IW is defined as a function of time or iteration number.

3.1 Primitive class

The IW parameter was originally introduced by Shi and Eberhart in [15]. They used a range of constant IW (CIW) values,
(4) ω(t) = c,
and showed that with large values of ω, i.e. ω > 1.2, PSO performs only a weak exploration, while with low values of this parameter, i.e. ω < 0.8, PSO tends to get trapped in local optima. They suggested that with ω within the range [0.8, 1.2], PSO finds the global optimum in a reasonable number of iterations. Shi and Eberhart analyzed the impact of the IW and maximum velocity on the performance of the PSO in [6]. In [17], a random IW (RIW) is used to enable the PSO to track the optima in a dynamic environment:
(5) ω(t) = 0.5 + Rand()/2
where Rand() is a random number in [0,1]; ω is then a uniform random variable in the range [0.5,1].
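As an illustration, the two primitive schedules can be written as plug-in functions for the pso() sketch above; this packaging is ours, not code from the paper.

```python
# Primitive IW strategies of Section 3.1 as (t, I_max) -> w schedules.
import numpy as np

rng = np.random.default_rng()

def make_ciw(c=0.7):
    """Constant inertia weight, Eq (4): the same value at every iteration."""
    return lambda t, I_max: c

def riw(t, I_max):
    """Random inertia weight, Eq (5): a uniform draw from [0.5, 1]."""
    return 0.5 + rng.random() / 2.0

# Usage: pso(sphere, lb, ub, iw=make_ciw(0.7)) or pso(sphere, lb, ub, iw=riw).
```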

3.2 Adaptive class

Adaptive IW strategies monitor the search situation and adapt the IW value based on one or more feedback parameters. In [18], Arumugam and Rao use the ratio of the global best fitness to the average of the local best fitnesses of the particles to determine the IW at each iteration with (6) where f(·) is the fitness function. The inertia weight in (6) is termed the global-average local best IW (GLBIW). Clerc [19] proposes an adaptive inertia weight (AIW) approach in which the amount of change in the inertia value is proportional to the relative improvement of the swarm. Let xi(t) denote the position of particle i in the search space at time step t. The inertia weight is adjusted according to (7) where the relative improvement, mi, is estimated as (8) with ω(Imax) ≈ 0.5 and ω(0) < 1.
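A sketch of a feedback-driven weight in the spirit of GLBIW follows: the weight shrinks as the global best fitness approaches the average personal best fitness. The exact form and constants of the paper’s Eq (6) could not be recovered here; the 1.1 offset below is an assumption often quoted for the GLbest IW and should be checked against [18].

```python
# Adaptive IW sketch (Section 3.2): weight driven by a fitness-ratio feedback.
import numpy as np

def glbiw(gbest_f, pbest_f):
    """Small weight when the global best dominates the average personal best.
    The 1.1 offset is an assumption, not taken from the paper's Eq (6)."""
    return 1.1 - gbest_f / np.mean(pbest_f)
```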

3.3 Time-varying class

Most PSO variants use time-varying IW strategies, in which the value of the IW is determined based on the iteration number; such strategies still have important applications in various fields [20, 21]. These methods can be linear or non-linear, and increasing or decreasing. In [8], a linear decreasing IW (LDIW) was introduced and shown to be effective in improving the fine-tuning characteristic of the PSO. In this method, the value of ω is linearly decreased from an initial value (ωmax) to a final value (ωmin) according to the following equation:
(9) ω(t) = ωmax − (ωmax − ωmin) t/Imax
where t and Imax are the current iteration and the maximum number of iterations, respectively. This strategy is very common, and most PSO algorithms adjust the value of the IW using this updating scheme.

Accepting the general idea of decreasing the IW over the iterations, some researchers proposed nonlinear decreasing strategies. Chatterjee and Siarry [22] propose a nonlinear decreasing variant of the IW in which, at each iteration, ω is determined by
(10) ω(t) = ωmin + (ωmax − ωmin) ((Imax − t)/Imax)^n
where n is the nonlinear modulation index. Different values of n result in different variations of the IW, all of which start from ωmax and end at ωmin. Feng et al. [23, 24] use a chaotic IW (CHIW) in which a chaotic term is added to the LDIW:
(11) ω(t) = (ω1 − ω2) (Imax − t)/Imax + ω2 z
where ω1 and ω2 are the initial and final values of the IW and z is updated by the logistic map z ← 4z(1 − z), with the initial value of z selected randomly within the range (0,1). Chen et al. [25] propose a natural exponential inertia weight (NEIW) strategy according to the following equation:
(12) ω(t) = ωmin + (ωmax − ωmin) e^(−t/(Imax/10))
where ωmin = 0.4 and ωmax = 0.9, which is found to be very effective for NEIWPSO.
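The time-varying schedules of Eqs (9)–(12) can be sketched in the same (t, I_max) → ω form used above. The formulas follow the reconstructions given in the text and should be checked against [22–25]; the NEIW time constant Imax/10 in particular is an assumption based on the commonly cited form.

```python
# Time-varying IW schedules (Section 3.3) for the pso() sketch above.
import numpy as np

def ldiw(t, I_max, w_max=0.9, w_min=0.4):
    """Eq (9): linear decrease from w_max to w_min."""
    return w_max - (w_max - w_min) * t / I_max

def nldiw(t, I_max, w_max=0.9, w_min=0.4, n=1.2):
    """Eq (10): nonlinear decrease with modulation index n [22]."""
    return w_min + (w_max - w_min) * ((I_max - t) / I_max) ** n

def make_chiw(w1=0.9, w2=0.4, z0=0.37):
    """Eq (11): LDIW plus a logistic-map chaotic term [23, 24]."""
    z = z0
    def chiw(t, I_max):
        nonlocal z
        z = 4.0 * z * (1.0 - z)          # logistic map z <- 4z(1 - z)
        return (w1 - w2) * (I_max - t) / I_max + w2 * z
    return chiw

def neiw(t, I_max, w_max=0.9, w_min=0.4):
    """Eq (12): natural exponential decrease [25]; Imax/10 is assumed."""
    return w_min + (w_max - w_min) * np.exp(-t / (I_max / 10.0))
```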

Li and Gao [26] give a kind of exponent decreasing inertia weight (EDIW):
(13) ω(t) = (ωmax − ωmin − d1) exp(1/(1 + d2 t/Imax))

Extensive experiments indicate that algorithm performance is greatly enhanced when ωmin = 0.4, ωmax = 0.95, d1 = 0.2 and d2 = 7. In [27], Bansal et al. carried out a comparative study of fifteen IW strategies to select the best ones. With c = 0.7 for CIW, ωmin = 0.4, ωmax = 0.9 for LDIW and ω1 = 0.9, ω2 = 0.4 for CHIW, they concluded that CHIW is the best strategy for accuracy and RIW is the best for efficiency; it was also shown that CIW and LDIW are the best inertia weights in terms of minimum error. Arasomwan and Adewumi [28] established that LDIW is very efficient if its parameters are properly set, showing that with a good experimental setting LDIW performs competitively with similar variants. Thus in this paper, for comparative studies, we use CIW, RIW, LDIW, CHIW, NEIW, EDIW, GLBIW and AIW as eight well-known primitive, time-varying and adaptive IW strategies.
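For completeness, a sketch of Eq (13) with the reported parameter values; the functional form follows the reconstruction above (as quoted in the survey [27]) and should be checked against [26]. Note that with these values ω(0) ≈ 0.95 and ω(Imax) ≈ 0.40, consistent with the reported endpoints.

```python
# EDIW, Eq (13), with the parameter values reported by Li and Gao [26].
import numpy as np

def ediw(t, I_max, w_max=0.95, w_min=0.4, d1=0.2, d2=7.0):
    return (w_max - w_min - d1) * np.exp(1.0 / (1.0 + d2 * t / I_max))
```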

4 Proposed Inertia Weight and Its Properties

To overcome the premature convergence, low efficiency or low accuracy of the other IW strategies, we introduce a novel IW strategy for improving the performance of PSO. In this section, the new IW is first introduced and its properties are then analyzed; finally, we introduce the parameters of the IW strategy.

4.1 Proposed inertia weight strategy

Definition. Suppose ω1, ω2 and ψ are positive real numbers. We define an inertia weight strategy by
(14) ω(t) = α1 e^(−ψt/Imax) + α2 e^(ψt/Imax)
where
(15) α1 = e^ψ (ω2 − ω1 e^ψ) / (1 − e^(2ψ))
(16) α2 = (ω1 − ω2 e^ψ) / (1 − e^(2ψ))
and t ∈ [0,Imax] is an integer. In this strategy, t and Imax are the current iteration and the maximum number of iterations, respectively. The parameters ω1 and ω2 are the inertia weight at the start and at the end of a given run, respectively. In other words,
(17) ω(0) = α1 + α2 = ω1
and
(18) ω(Imax) = α1 e^(−ψ) + α2 e^(ψ) = ω2.

We call ω(t) the Flexible Exponential Inertia Weight (FEIW) strategy because it can adapt to each problem, i.e., with suitable parameter selection, we can construct many increasing or decreasing inertia weights, or even strategies with one global minimum in [0,Imax]; thus FEIW encompasses a wide range of IW strategies. There is a trade-off between the accuracy and efficiency of the PSO algorithm, and one of the most important applications of FEIW is that, for each problem, one can easily change the parameters ω1, ω2 and ψ to achieve better accuracy, better efficiency, or both. Fig 1 shows the flow-chart for PSO based on the FEIW technique used in this paper. A code sketch of this schedule follows.
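The sketch below implements the FEIW schedule as reconstructed in Eqs (14)–(16). The coefficient algebra is derived from the endpoint conditions ω(0) = ω1 and ω(Imax) = ω2 and should match the paper’s Eqs (15)–(16); the asserted endpoint check makes this verifiable.

```python
# FEIW schedule, Eqs (14)-(16), as a (t, I_max) -> w factory.
import numpy as np

def make_feiw(w1, w2, psi):
    e = np.exp(psi)
    a1 = e * (w2 - w1 * e) / (1.0 - e * e)   # Eq (15)
    a2 = (w1 - w2 * e) / (1.0 - e * e)       # Eq (16)
    def feiw(t, I_max):
        # Eq (14): two exponentials pinned to w(0) = w1 and w(I_max) = w2.
        return a1 * np.exp(-psi * t / I_max) + a2 * np.exp(psi * t / I_max)
    return feiw

# Endpoint check: w(0) == w1 and w(I_max) == w2 (Eqs (17) and (18)).
w = make_feiw(w1=1.0, w2=0.1, psi=2.6)
assert abs(w(0, 1000) - 1.0) < 1e-12 and abs(w(1000, 1000) - 0.1) < 1e-12
```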

4.2 Flexible exponential inertia weight analysis

Before using FEIW, we should have some information about its behavior. In particular, to select its parameters, we need a careful analysis of the function ω(t). In this subsection, for a mathematical analysis of FEIW, suppose that t ∈ [0,Imax] is a real number rather than an integer. We define a new function by
(19) Tψ(a,b) = b − a e^ψ
and call it the “check function”. The notation sgn(·) denotes the sign function: sgn(x) = 1 if x > 0, sgn(x) = 0 if x = 0, and sgn(x) = −1 if x < 0.

Lemma 1. The check function has the following properties:
(20) sgn(α1) = −sgn(Tψ(ω1,ω2))
and
(21) sgn(α2) = −sgn(Tψ(ω2,ω1)).

Proof. According to the definition of FEIW, ψ > 0, thus 1 − e^(2ψ) < 0; therefore, based on Eq (15), sgn(α1) = −sgn(ω2 − ω1 e^ψ) = −sgn(Tψ(ω1,ω2)).

Similarly one can prove the other relation.

Lemma 2. The equation ω(t) = 0 has at most one root. This equation has a root if and only if
(22) sgn(Tψ(ω1,ω2) · Tψ(ω2,ω1)) = −1.

In addition, this only root, if it exists, is at t* = (Imax/(2ψ)) ln(−α1/α2). Also t* ∈ [0,Imax] if and only if
(23) 1 ≤ −α1/α2 ≤ e^(2ψ).

Proof. By relation (14), ω(t) = 0 if and only if e^(2ψt/Imax) = −α1/α2, which has at most one solution, namely t* = (Imax/(2ψ)) ln(−α1/α2); this solution exists if and only if α1α2 < 0. From Lemma 1 and relation (22), we can conclude α1α2 < 0, hence the proof of the first part is complete. On the other hand, ψ > 0 and Imax > 0, thus t* ∈ [0,Imax] if and only if 0 ≤ ln(−α1/α2) ≤ 2ψ.

Using Eqs (15), (16) and (19), this is equivalent to 1 ≤ −α1/α2 ≤ e^(2ψ).

Corollary 1. For all t ∈ [0,Imax], ω(t) ≥ 0.

Proof. Suppose, to the contrary, that there exists t0 ∈ [0,Imax] with ω(t0) < 0. First note that, based on relations (17) and (18), the endpoints of the curve of ω(t) have positive values. Since ω(t) is a continuous function, it therefore has at least two roots, a contradiction, because according to Lemma 2 the equation ω(t) = 0 has at most one root.

Corollary 2. If sgn(Tψ(ω1,ω2) * Tψ(ω2,ω1)) = 1 then Tψ(ω1,ω2) < 0 and Tψ(ω2,ω1) < 0.

Proof. Suppose, to the contrary, that Tψ(ω1,ω2) > 0 and Tψ(ω2,ω1) > 0. From Lemma 1 it follows that α1 < 0 and α2 < 0; hence from relation (14) we conclude that ∀t, ω(t) < 0, a contradiction, because according to Corollary 1, ∀t ∈ [0,Imax], ω(t) ≥ 0.

Theorem 1. The function ω(t) has an extremum if and only if
(24) sgn(Tψ(ω1,ω2) · Tψ(ω2,ω1)) = 1.

In addition, this only extremum, if it exists, is a global minimum at t** = (Imax/(2ψ)) ln(α1/α2). Also t** ∈ [0,Imax] if and only if
(25) 1 ≤ α1/α2 ≤ e^(2ψ).

Proof. We first calculate ω′(t) and ω″(t) as follows:
(26) ω′(t) = (ψ/Imax) (α2 e^(ψt/Imax) − α1 e^(−ψt/Imax))
(27) ω″(t) = (ψ/Imax)^2 (α1 e^(−ψt/Imax) + α2 e^(ψt/Imax))

To find the critical numbers of the differentiable function ω(t), we set its derivative equal to 0. The equation ω′(t) = 0 implies e^(2ψt/Imax) = α1/α2. Thus we should have α1/α2 > 0, or α1α2 > 0. Using Lemma 1 and Corollary 2, this is equivalent to sgn(Tψ(ω1,ω2) · Tψ(ω2,ω1)) = 1, with both check functions negative, so α1 > 0 and α2 > 0. To use the second derivative test, we evaluate ω″(t) at this critical number: ω″(t**) = 2(ψ/Imax)^2 √(α1α2) > 0.

Because ω″(t**) > 0, ω(t) has a local minimum at t**; but α1 > 0 and α2 > 0, thus ω″(t) > 0 for all t, so ω(t) is convex and t** is the global minimum of the differentiable function ω(t). The proof of the second part of this Theorem is similar to that of Lemma 2.

Theorem 2. If
(28) Tψ(ω1,ω2) > 0 and Tψ(ω2,ω1) < 0
then ω(t) is increasing on [0,Imax]; it is decreasing on [0,Imax] if
(29) Tψ(ω1,ω2) < 0 and Tψ(ω2,ω1) > 0.

Proof. From Lemma 1 and relation (28), we have α1 < 0 and α2 > 0, so −α1 e^(−ψt/Imax) > 0 and α2 e^(ψt/Imax) > 0.

Thus, this implies
(30) ω′(t) = (ψ/Imax) (α2 e^(ψt/Imax) − α1 e^(−ψt/Imax)) > 0.

Therefore ω(t) is increasing on [0,Imax]. The proof for the decreasing case is similar.

Lemma 3. If Tψ(ω1,ω2) = 0 and ω1 < ω2 then ω(t) is increasing. Also, if Tψ(ω2,ω1) = 0 and ω1 > ω2 then ω(t) is decreasing.

Proof. If Tψ(ω1,ω2) = 0 then α1 = 0 and ω2 − ω1 e^ψ = 0. This implies ψ = ln(ω2/ω1), and ψ > 0 because ω1 < ω2. In this case, we can conclude from Eq (16) that α2 = ω1; thus, using Eq (14),
(31) ω(t) = ω1 e^(ψt/Imax).

Therefore ω′(t) > 0 and ω(t) is increasing. Now suppose Tψ(ω2,ω1) = 0; thus α2 = 0 and ω1 − ω2 e^ψ = 0. This implies ψ = ln(ω1/ω2), and ψ > 0 because ω1 > ω2. Also α1 = ω1, and from Eq (14),
(32) ω(t) = ω1 e^(−ψt/Imax).

Therefore ω′(t) < 0 and ω(t) is decreasing.

Corollary 3. For all t ∈ [0,Imax], ω(t) > 0.

Proof. By Corollary 1, ∀ t ∈ [0,Imax], ω(t) ≥ 0. Suppose that ∃t* ∈ [0,Imax], ω(t*) = 0. Using Lemma 2, we have sgn(Tψ(ω1,ω2) * Tψ(ω2,ω1)) = −1. By Theorem 2, ω(t) is increasing or decreasing. Thus according to relations (17) and (18), ∀t ∈ [0,Imax], ω(t) ≠ 0, a contradiction. Therefore ∀ t ∈ [0,Imax], ω(t) > 0.

Corollary 4. If ω1 = ω2 then ω(t) takes its global minimum in [0,Imax] at t = Imax/2.

Proof. Suppose that ω1 = ω2 = Ω. From Eqs (15) and (16), we have α1 = e^ψ α2; thus, using Eq (14), it is concluded that
(33) ω(t) = Ω (e^(ψ(1 − t/Imax)) + e^(ψt/Imax)) / (1 + e^ψ).

In this special case, the check functions are as follows:
(34) Tψ(Ω,Ω) = Ω(1 − e^ψ) < 0.

By Theorem 1, ω(t) has a global minimum at t** = (Imax/(2ψ)) ln(α1/α2) = (Imax/(2ψ)) ln(e^ψ) = Imax/2.

Thus t** ∈ [0,Imax] and ω(t**) = 2Ω e^(ψ/2) / (1 + e^ψ).

Lemma 4. As ψ approaches 0 from the right, the FEIW function approaches a linear inertia weight function. If ω1 > ω2, this linear function is decreasing, while if ω1 < ω2, it is increasing.

Proof. Differentiating ω(t) with respect to t, from Eqs (14)–(16) we get
(35) ω′(t) = (ψ/Imax) (α2 e^(ψt/Imax) − α1 e^(−ψt/Imax))
so
(36) lim_{ψ→0+} ω′(t) = (ω2 − ω1)/Imax = m
where m is the slope of the line through (0,ω1) and (Imax,ω2). Thus the limit of the FEIW function as ψ approaches 0 from the right is the linear function
(37) lim_{ψ→0+} ω(t) = ω1 + m t.

Since Imax > 0, relation (37) implies the limiting function is decreasing if ω1 > ω2 and increasing if ω1 < ω2.

All of the above results are summarized in Table 1.

4.3 Flexible exponential inertia weight parameters

Extensive experiments indicate that the performance of the proposed algorithm is greatly enhanced for most problems when ω1 ≈ 0, ω2 ≈ 1, ψ ≈ 2.6 for an increasing FEIW; ω1 ≈ 1, ω2 ≈ 0, ψ ≈ 2.6 for a decreasing FEIW; and ψ ≈ 5 for the cases with ω1 ≈ ω2. In this paper, the parameters of the different variations of the FEIW strategy are selected so as to cover all the different situations, such as increasing (decreasing) functions and functions with a global minimum. According to Table 1, we experimentally select three values for ψ as follows: (38)
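As an illustration of the guideline above, representative variants can be built with the make_feiw() sketch from Section 4.1. The exact (ω1, ω2, ψ) triples of Table 2 are not reproduced here; the small nonzero values standing in for “≈ 0” are our assumptions (FEIW requires ω1, ω2 > 0).

```python
# Illustrative FEIW parameterizations following the Section 4.3 guideline.
feiw_increasing = make_feiw(w1=0.05, w2=1.0, psi=2.6)   # increasing schedule
feiw_decreasing = make_feiw(w1=1.0, w2=0.05, psi=2.6)   # decreasing schedule
feiw_ushaped    = make_feiw(w1=0.9, w2=0.9,  psi=5.0)   # w1 = w2: minimum at I_max/2
```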

Six pairs of positive numbers are also selected for (ω1,ω2). These variations of the FEIW strategy, listed in Table 2, will be used for comparison with the four best IW strategies [27], i.e., CIW, RIW, LDIW and CHIW, and four well-known strategies, i.e., NEIW, EDIW, GLBIW and AIW. As shown in Fig 2, unlike the other inertia weights, the FEIW strategies may be increasing functions, decreasing functions, or neither.

Table 2. The parameters and properties of six variations of FEIW.

https://doi.org/10.1371/journal.pone.0161558.t002

Fig 2. Six variations of Flexible Exponential Inertia Weight (FEIW) strategy.

(A) FEIW-1. (B) FEIW-2. (C) FEIW-3. (D) FEIW-4. (E) FEIW-5. (F) FEIW-6.

https://doi.org/10.1371/journal.pone.0161558.g002

5 Parameter Settings and Performance Evaluation Criteria

From the standard set of benchmark problems available in the literature, twenty-six problems are selected to test the efficacy and accuracy of the proposed variants against other existing variants. These problems have continuous variables and different degrees of complexity and multimodality. The functions are shown in Tables 3 and 4 along with their ranges of search space.

5.1 Parameter settings

To implement these fourteen strategies in PSO, code was developed in MATLAB® 2014. For a fair comparison, all fourteen variants are run with the same parameter settings and in the same computing environment. Each PSO variant is run 100 times with a random initial population, using the following settings (collected in a code sketch after the list):

  • Swarm size: M = 5 × D.
  • Problem size: D = 10, 50.
  • Acceleration coefficients: c1 = c2 = 2.
  • Maximum velocity: vmax = 0.1 × (xmax − xmin).
  • Maximum number of iterations allowed: Imax = 500, 1000.
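The settings above can be collected in a small configuration object; the values are verbatim from the list, while the dataclass packaging is ours.

```python
# Experimental settings of Section 5.1 as a configuration sketch.
from dataclasses import dataclass

@dataclass
class PSOSettings:
    D: int                      # problem dimension: 10 or 50
    I_max: int                  # 500 or 1000
    c1: float = 2.0
    c2: float = 2.0
    runs: int = 100             # independent runs per variant

    @property
    def M(self) -> int:         # swarm size is tied to the dimension
        return 5 * self.D

    def v_max(self, x_min: float, x_max: float) -> float:
        return 0.1 * (x_max - x_min)
```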

5.2 Performance evaluation criteria (PEC)

According to the “no free lunch theorem” [34], one optimization algorithm cannot offer better performance than all the others on every aspect or on every kind of problem. Thus the efficiency and accuracy of all algorithms are tested against a set of well-known standard benchmark unimodal and multimodal functions given in Tables 3 and 4. We also use different evaluation criteria to obtain valid results. A run in which the algorithm finds a solution satisfying |fout − fmin| < ε, where fout is the best solution found when the algorithm terminates and fmin is the known global minimum of the problem, is considered successful; here ε is the error tolerance of the algorithm. In order to evaluate the performance of the different IW strategies, we need different termination conditions for the PSO algorithm, so the termination criterion for all considered PSO variants is one of the following:

  • Condition 1: reaching Imax.
  • Condition 2: reaching Imax or approaching the known optimum within ε accuracy, whichever occurs earlier.

For each method and problem, the following are recorded (a sketch of these computations follows the list):

  1. Success rate (SR): the number of successful runs (Srun) divided by the total number of runs (Trun), (39) SR = Srun / Trun.
  2. Average number of iterations of successful runs (ANS).
  3. Minimum number of iterations of successful runs (MNS).
  4. Average error (AE) over 100 runs, (40) AE = (1/Trun) Σ_{i=1}^{Trun} |fout_i − fmin|.
  5. Minimum error (ME) over 100 runs.
  6. Standard deviation (STD) of error over 100 runs.
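The criteria can be computed from the per-run outcomes of one variant on one problem as in the sketch below; the array names and their provenance are assumptions, and Eq (40) follows the reconstruction above.

```python
# Performance evaluation criteria of Section 5.2.
import numpy as np

def pec(f_out, iters, f_min, eps):
    """f_out, iters: best objective value and iteration count per run."""
    err = np.abs(f_out - f_min)
    ok = err < eps                                   # successful runs
    return {
        "SR": ok.mean(),                             # Eq (39)
        "ANS": iters[ok].mean() if ok.any() else np.nan,
        "MNS": iters[ok].min() if ok.any() else np.nan,
        "AE": err.mean(),                            # Eq (40)
        "ME": err.min(),
        "STD": err.std(),
    }
```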

6 Results, Analysis and Discussions

6.1 Numerical results

In this subsection, a comprehensive comparative study of the fourteen IW strategies is carried out. The computational results for the considered set of benchmark functions using all the PSO variants comprise results for all the mentioned performance evaluation criteria (PEC) over 100 runs. The numerical results are shown in Tables 5–14.

Table 5. Comparison of success rate, average and minimum number of iterations of successful runs for considered PSO variants with condition 2, Imax = 1000, D = 10, ε = 10−1 for f2, f3, f4, f10 functions and ε = 10−10 for others (υ > Imax).

https://doi.org/10.1371/journal.pone.0161558.t005

Table 6. Comparison of success rate, average and minimum number of iterations of successful runs for considered PSO variants with condition 2, Imax = 1000, D = 10, ε = 5 for f15 and f20 functions, ε = 10−1 for f19, f21, f24, f25 functions and ε = 10−10 for others (υ > Imax).

https://doi.org/10.1371/journal.pone.0161558.t006

Table 7. Comparison of average, minimum and standard deviation of error for considered PSO variants with condition 1, Imax = 1000 and D = 10.

https://doi.org/10.1371/journal.pone.0161558.t007

Table 8. Comparison of average, minimum and standard deviation of error for considered PSO variants with condition 1, Imax = 1000 and D = 10.

https://doi.org/10.1371/journal.pone.0161558.t008

Table 9. Comparison of average, minimum and standard deviation of error for considered PSO variants with condition 1, Imax = 1000 and D = 10.

https://doi.org/10.1371/journal.pone.0161558.t009

Table 10. Comparison of average, minimum and standard deviation of error for considered PSO variants with condition 1, Imax = 1000 and D = 10.

https://doi.org/10.1371/journal.pone.0161558.t010

Table 11. Comparison of average, minimum and standard deviation of error for considered PSO variants with condition 1, Imax = 500 and D = 50.

https://doi.org/10.1371/journal.pone.0161558.t011

Table 12. Comparison of average, minimum and standard deviation of error for considered PSO variants with condition 1, Imax = 500 and D = 50.

https://doi.org/10.1371/journal.pone.0161558.t012

Table 13. Comparison of average, minimum and standard deviation of error for considered PSO variants with condition 1, Imax = 1000 and D = 50.

https://doi.org/10.1371/journal.pone.0161558.t013

Table 14. Comparison of average, minimum and standard deviation of error for considered PSO variants with condition 1, Imax = 1000 and D = 50.

https://doi.org/10.1371/journal.pone.0161558.t014

6.2 Comparison Analysis of IW Strategies

According to the numerical results obtained from this study (Tables 5–14), we can compare the IW strategies with each other on any benchmark function. For each problem and each PEC, the best and worst IW strategies are reported in Tables 15–22. The following notation is used in these tables:

Table 15. Best and worst IW strategies for each benchmark function in terms of success rate, average and minimum number of iterations of successful runs according to Table 5.

https://doi.org/10.1371/journal.pone.0161558.t015

Table 16. Best and worst IW strategies for each benchmark function in terms of success rate, average and minimum number of iterations of successful runs according to Table 6.

https://doi.org/10.1371/journal.pone.0161558.t016

Table 17. Best and worst IW strategies for each benchmark function in terms of success rate, average and minimum number of iterations of successful runs according to Table 6.

https://doi.org/10.1371/journal.pone.0161558.t017

Table 18. Best and worst IW strategies for each benchmark function in terms of average, minimum and standard deviation of error according to Tables 7 and 8.

https://doi.org/10.1371/journal.pone.0161558.t018

Table 19. Best and worst IW strategies for each benchmark function in terms of average, minimum and standard deviation of error according to Table 9.

https://doi.org/10.1371/journal.pone.0161558.t019

Table 20. Best and worst IW strategies for each benchmark function in terms of average, minimum and standard deviation of error according to Table 10.

https://doi.org/10.1371/journal.pone.0161558.t020

Table 21. Best and worst IW strategies for each benchmark function in terms of average, minimum and standard deviation of error according to Tables 11 and 12.

https://doi.org/10.1371/journal.pone.0161558.t021

Table 22. Best and worst IW strategies for each benchmark function in terms of average, minimum and standard deviation of error according to Tables 13 and 14.

https://doi.org/10.1371/journal.pone.0161558.t022

S-IW indicates several inertia weights excluding variations of FEIW, while S-FEIW indicates several inertia weights including some variations of FEIW. For example, in Table 17 the worst IW strategies for the Pinter function (f23) in terms of ANS are GLBIW and AIW, and in Table 20 the best IW strategies for the Quintic function (f22) in terms of AE are FEIW-3, FEIW-5 and NEIW; thus the notations S-IW and S-FEIW are used in the f23 and f22 columns of Tables 17 and 20, respectively. It can be seen from Tables 15–22 that variations of FEIW emerge as the best performers. Let b_T(PEC) be the number of benchmark functions in table T (15 ≤ T ≤ 22) that achieve the best result with variations of the FEIW strategy in terms of a given PEC, and let n_T be the total number of benchmark functions in table T. If we define s_T(PEC) = b_T(PEC)/n_T, then s_T(PEC) is the percentage of successful FEIW strategies in terms of that PEC among all benchmark functions in table T. Using this definition, we can summarize Tables 15–22 in Table 23. For example, a value of 90% for average error in this table means that 90% of the IW strategies that provide the best average-error performance for the benchmark functions are variations of FEIW. From Table 23, it can be concluded that FEPSO is more efficient and has good convergence compared to the other IW strategies. In the next subsection, we will show that statistical tests confirm that the variations of FEIW significantly improve the results.

6.3 Statistical analysis of numerical results

In this section, the numerical results obtained using the FEIW strategy and the other strategies are statistically analyzed based on non-parametric tests: the Wilcoxon test, the Friedman test and the Bonferroni-Dunn test [35–37]. The Wilcoxon test performs pairwise comparisons of the variants, while the Bonferroni-Dunn test detects significant differences among all variants. Because of the nature of the numerical results, the logarithmic scale of the average, minimum and standard deviation of error is used for the statistical tests.

6.3.1 Wilcoxon signed rank test.

The Wilcoxon signed rank test is a nonparametric statistical hypothesis test which can be used as an alternative to the paired t-test when the results cannot be assumed to be normally distributed. The results of Wilcoxon’s test are summarized as R+ and R−, which represent the sums of positive and negative ranks of an algorithm in comparison to the other algorithm in the column. In the statistical analysis of Table 5, we considered two performance criteria, the average and minimum number of iterations of successful runs, which evaluate the convergence speed of a given algorithm. Table 24 comprises the results of the Wilcoxon signed rank test for these two performance criteria with Imax = 1000 and D = 10. Table 24 shows that the variations of FEIW win over the other strategies in 23 of 24 tests in terms of the average number of iterations of successful runs, and the p-value in most cases is less than 0.01. Thus, in terms of the average number of iterations of successful runs, all six variations of FEIW are significantly better than CIW, RIW, LDIW and CHIW; according to Table 24, the same holds for the minimum number of iterations of successful runs. Therefore the Wilcoxon signed rank test on Table 5 clearly demonstrates the superiority of FEIW over the other IW models in terms of convergence speed. Table 25 shows the results of the Wilcoxon signed rank test for the average and minimum number of iterations of successful runs according to Table 6. It shows that FEIW-1, FEIW-5 and FEIW-6 win over GLBIW, AIW, NEIW and EDIW in all cases with p-values less than 0.01, and thus these three variations of FEIW are significantly better than the other IW strategies in terms of convergence speed. By applying statistical analysis to Tables 7 and 8, we can evaluate the solution precision of the FEPSO algorithm. Table 26 comprises the results of the Wilcoxon signed rank test for the average and minimum error with Imax = 1000 and D = 10. It shows that, except for FEIW-4, the variations of FEIW win over the other strategies in most cases with p-value < 0.05. Thus, in terms of average and minimum error, FEIW is significantly better than CIW, RIW, LDIW and CHIW, and the Wilcoxon signed rank test on Tables 7 and 8 clearly demonstrates the superiority of FEIW over the other IW models in terms of solution precision. Table 27 shows the results of the Wilcoxon signed rank test for the average and minimum error according to Tables 9 and 10. The results in Table 27 confirm that FEIW-1 wins in all cases with p-values less than 0.05 and is significantly better than GLBIW, AIW, NEIW and EDIW. Using the Wilcoxon signed rank test on Tables 11 and 12, the solution precision of the FEPSO algorithm for Imax = 500 and D = 50 can be evaluated; Table 28 contains the results of this test for the average and minimum error. In terms of average error, all variations of FEIW win over the CIW, RIW and LDIW strategies in all cases with p-value < 0.05, and FEIW-2 also wins over the CHIW strategy in all cases with p-value < 0.05. In terms of minimum error, all variations of FEIW win over the CIW, RIW and LDIW strategies in all cases with p-value < 0.05, and FEIW-1, FEIW-2 and FEIW-6 also win over the CHIW strategy in all cases with p-value < 0.05. Thus, in terms of average and minimum error, FEIW is significantly better than CIW, RIW, LDIW and CHIW, and the Wilcoxon signed rank test on Tables 11 and 12 confirms the superiority of FEIW over the other IW strategies in terms of solution precision.

By applying the Wilcoxon signed rank test to Tables 13 and 14, the solution precision of the FEPSO algorithm for Imax = 1000 and D = 50 can be evaluated; Table 29 contains the results of this test for the average and minimum error. In terms of average error, all variations of FEIW win over the CIW and RIW strategies in all cases with p-value < 0.05; FEIW-4 also wins over the LDIW strategy, and FEIW-1 and FEIW-6 win over the CHIW strategy, in all cases with p-value < 0.05. In terms of minimum error, all variations of FEIW win over the CIW, RIW and LDIW strategies in all cases with p-value < 0.05, and FEIW-1, FEIW-2 and FEIW-6 win over the CHIW strategy in all cases with p-value < 0.05. Thus, in terms of average and minimum error, FEIW is significantly better than CIW, RIW, LDIW and CHIW, and the Wilcoxon signed rank test on Tables 13 and 14 confirms the superiority of FEIW over the other IW strategies in terms of solution precision.
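A sketch of the pairwise comparison follows, using scipy’s implementation of the Wilcoxon signed rank test. The arrays hold one PEC value per benchmark function for two strategies; the numbers shown are hypothetical placeholders, not the paper’s data.

```python
# Pairwise Wilcoxon signed rank test (Section 6.3.1), via scipy.
import numpy as np
from scipy.stats import wilcoxon

a = np.array([0.12, 0.30, 0.05, 0.22, 0.18, 0.40, 0.09, 0.27])  # e.g., FEIW-1
b = np.array([0.20, 0.41, 0.07, 0.30, 0.25, 0.52, 0.15, 0.33])  # e.g., LDIW
stat, p = wilcoxon(a, b)
print(f"W = {stat:.3f}, p = {p:.4f}")  # p < 0.05: significant difference
```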

Table 24. Wilcoxon-ranks and p-value on the average and minimum number of iterations of successful runs according to Table 5.

https://doi.org/10.1371/journal.pone.0161558.t024

Table 25. Wilcoxon-ranks and p-value on the average and minimum number of iterations of successful runs according to Table 6.

https://doi.org/10.1371/journal.pone.0161558.t025

Table 26. Wilcoxon-ranks and p-value on the average and minimum error according to Tables 7 and 8.

https://doi.org/10.1371/journal.pone.0161558.t026

Table 27. Wilcoxon-ranks and p-value on the average and minimum error according to Tables 9 and 10.

https://doi.org/10.1371/journal.pone.0161558.t027

Table 28. Wilcoxon-ranks and p-value on the average and minimum error according to Tables 11 and 12.

https://doi.org/10.1371/journal.pone.0161558.t028

Table 29. Wilcoxon-ranks and p-value on the average and minimum error according to Tables 13 and 14.

https://doi.org/10.1371/journal.pone.0161558.t029

6.3.2 Friedman test.

The Friedman test is a non-parametric statistical test developed by Friedman [38, 39]. The goal of this test is to determine whether there are significant differences among the algorithms considered over given sets of data. The Friedman test determines the ranks of the algorithms for each individual data set; in minimization problems, the best performing algorithm gets the minimum rank. The outcomes of the Friedman test on Tables 5–14 are shown in Tables 30–35 and are used to observe whether there is an overall difference among the IW strategies. In all tables the p-value of the Friedman test is lower than the considered levels of significance, α = 0.05 and α = 0.01; thus there are significant differences among the observed results. The speed in obtaining the global optimum is a salient yardstick for measuring algorithm performance. From Table 30, FEIW-1 has the best performance among all IW strategies in terms of the average and minimum number of iterations, and FEIW-5 has the highest rank for success rate. Similarly, Table 31 shows that FEIW-1 has the best rank among all IW strategies in terms of success rate and the average and minimum number of iterations. Thus, with condition 2, Imax = 1000 and D = 10, the Friedman test demonstrates the advantage of FEIW-1 and FEIW-5 over the other IW strategies in terms of convergence speed and solution precision. From Table 32, FEIW-6 and FEIW-1 have the best performance among all IW strategies in terms of average and minimum error, respectively, and Table 33 shows that FEIW-1 has the best rank in terms of average and minimum error. Thus, with condition 1, Imax = 1000 and D = 10, the Friedman test shows that FEIW-6 and FEIW-1 are the best strategies for accuracy. Under condition 1, Imax = 500 and D = 50, Table 34 shows that FEIW-1 and FEIW-6 have the highest performance, since these strategies have the minimum rank in terms of average and minimum error, respectively. With condition 1, Imax = 1000 and D = 50, Table 35 shows that FEIW-1 is the best IW strategy in both the average and minimum error tests. Therefore, by the Friedman test, FEPSO significantly outperforms CIWPSO, RIWPSO, LDIWPSO, CHIWPSO, GLBIWPSO, AIWPSO, NEIWPSO and EDIWPSO in terms of solution quality and convergence rate.
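The omnibus test can be sketched with scipy as below; each argument holds one strategy’s PEC values over the benchmark functions. The data and the restriction to three strategies are hypothetical, for brevity.

```python
# Friedman test across IW strategies (Section 6.3.2), via scipy.
from scipy.stats import friedmanchisquare

feiw1 = [0.12, 0.30, 0.05, 0.22, 0.18, 0.40]  # hypothetical per-function errors
ldiw  = [0.20, 0.41, 0.07, 0.30, 0.25, 0.52]
ciw   = [0.25, 0.39, 0.10, 0.28, 0.30, 0.55]
stat, p = friedmanchisquare(feiw1, ldiw, ciw)
print(f"chi2 = {stat:.3f}, p = {p:.4f}")  # p < 0.05: ranks differ significantly
```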

6.3.3 Bonferroni-Dunn test.

Here we employ the Bonferroni-Dunn test [40] to detect significant differences among the considered variants. The Bonferroni-Dunn test is used to compare an IW strategy with all the other strategies. The performance of two strategies is significantly different if the corresponding mean ranks differ by at least the critical difference (CD):
(41) CD = qα √(Ni(Ni + 1) / (6 Nf))
where Ni and Nf are the numbers of IW strategies and benchmark functions, respectively. The critical values qα at probability level α are given in [35] as follows: (42)

Using Eqs (41) and (42), the critical difference for the Bonferroni-Dunn test after the Friedman test is as follows: (43)
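Eq (41) can be evaluated for this study’s setup (Ni = 14 strategies, Nf = 26 functions) as in the sketch below; qα must be read from the table in [35], so the value used here is a placeholder assumption.

```python
# Bonferroni-Dunn critical difference, Eq (41).
import math

def critical_difference(q_alpha, n_strategies, n_functions):
    return q_alpha * math.sqrt(n_strategies * (n_strategies + 1) / (6.0 * n_functions))

q_alpha = 2.9  # placeholder: look up the two-tailed value for 14 strategies in [35]
cd = critical_difference(q_alpha, n_strategies=14, n_functions=26)
```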

The difference among the mean rankings of the PSO variants is illustrated by the Bonferroni-Dunn graphs in Figs 3–5. In each Bonferroni-Dunn graph, we have drawn a horizontal star-line which represents the threshold for the best performing algorithm (the one with the lowest ranking bar in minimization problems) to ease comparison of the variants. A line is drawn for each level of significance considered in this study, at a height equal to the sum of the minimum ranking and the corresponding CD computed by the Bonferroni-Dunn method. Bars exceeding these lines correspond to algorithms with significantly worse performance. In Fig 3, the Bonferroni-Dunn bar charts for average and minimum iterations show that FEIW-1 has the best speed in obtaining the global optimum among all considered IW strategies, while CIW, RIW, LDIW, CHIW, GLBIW, AIW, NEIW, EDIW, FEIW-2 and FEIW-4 have the worst convergence speed. For the success rate criterion, RIW and GLBIW come out as the worst performers and FEIW-1 and FEIW-5 emerge as the best performers. Based on Figs 4 and 5, the other analytical observations are as follows:

For the average error criterion, CIW, RIW, LDIW, CHIW, GLBIW, AIW, FEIW-4 and FEIW-5 emerge as the worst performers and FEIW-1 and FEIW-6 as the best; for the minimum error criterion, CIW, RIW, LDIW, CHIW, GLBIW, AIW, FEIW-3 and FEIW-4 come out as the worst performers and FEIW-1 and FEIW-6 as the best; and for the standard deviation criterion, CIW, RIW, LDIW, GLBIW, AIW, FEIW-3, FEIW-4 and FEIW-5 emerge as the worst performers and FEIW-1 and FEIW-2 as the best. Therefore, in general, the Bonferroni-Dunn bar charts show that the FEIW-1 strategy has the best performance among all considered strategies.

Fig 3. Bonferroni-Dunn bar chart.

(A) Average iterations based on Table 5. (B) Average iterations based on Table 6. (C) Minimum iterations based on Table 5. (D) Minimum iterations based on Table 6. (E) Success rate based on Table 5. (F) Success rate based on Table 6.

https://doi.org/10.1371/journal.pone.0161558.g003

Fig 4. Bonferroni-Dunn bar chart.

(A) Average error based on Tables 7 and 8. (B) Average error based on Tables 9 and 10. (C) Minimum error based on Tables 7 and 8. (D) Minimum error based on Tables 9 and 10. (E) Standard deviation of error based on Tables 7 and 8. (F) Standard deviation of error based on Tables 9 and 10.

https://doi.org/10.1371/journal.pone.0161558.g004

Fig 5. Bonferroni-Dunn bar chart.

(A) Average error based on Tables 11 and 12. (B) Average error based on Tables 13 and 14. (C) Minimum error based on Tables 11 and 12. (D) Minimum error based on Tables 13 and 14. (E) Standard deviation of error based on Tables 11 and 12. (F) Standard deviation of error based on Tables 13 and 14.

https://doi.org/10.1371/journal.pone.0161558.g005

6.3.4 Boxplot.

In addition to the statistical tests, boxplot analysis of the performance of the considered PSO variants is performed for the benchmark functions and shown in Figs 6–8. In Fig 6, the boxplots of average and minimum iterations show that the medians of FEIW-1, FEIW-5 and FEIW-6 are smaller than those of the others; thus these boxplots show that FEPSO is faster than CIWPSO, RIWPSO, LDIWPSO, CHIWPSO, GLBIWPSO, AIWPSO, NEIWPSO and EDIWPSO. The boxplots of average and minimum error in Figs 7 and 8 indicate the superiority of the FEIW-1, FEIW-5 and FEIW-6 strategies over the other approaches in terms of accuracy. These boxplots confirm that FEIW is a reliable IW strategy with better performance than the other considered IW strategies.

Fig 6. Boxplots of considered PSO variants.

(A) Average iterations based on Table 5. (B) Average iterations based on Table 6. (C) Minimum iterations based on Table 5. (D) Minimum iterations based on Table 6. (E) Success rate based on Table 5. (F) Success rate based on Table 6.

https://doi.org/10.1371/journal.pone.0161558.g006

Fig 7. Boxplots of considered PSO variants.

(A) Average error based on Tables 7 and 8. (B) Average error based on Tables 9 and 10. (C) Minimum error based on Tables 7 and 8. (D) Minimum error based on Tables 9 and 10. (E) Standard deviation of error based on Tables 7 and 8. (F) Standard deviation of error based on Tables 9 and 10.

https://doi.org/10.1371/journal.pone.0161558.g007

Fig 8. Boxplots of considered PSO variants.

(A) Average error based on Tables 11 and 12. (B) Average error based on Tables 13 and 14. (C) Minimum error based on Tables 11 and 12. (D) Minimum error based on Tables 13 and 14. (E) Standard deviation of error based on Tables 11 and 12. (F) Standard deviation of error based on Tables 13 and 14.

https://doi.org/10.1371/journal.pone.0161558.g008

6.4 Convergence graph

The convergence graphs for FEIW-1, FEIW-3, FEIW-5 and FEIW-6 are shown in Fig 9. The termination criterion for these graphs is condition 2, with D = 10 and Imax = 30000. From the convergence graphs, we can see that the convergence rate of the mentioned IW strategies is clearly faster than that of the other strategies on the benchmark functions. At the same time, the best solution obtained by FEPSO is better than those obtained by CIWPSO, RIWPSO, LDIWPSO, CHIWPSO, GLBIWPSO, AIWPSO, NEIWPSO and EDIWPSO.

Fig 9. Convergence graph for some PSO variants.

(A) Sphere Function with ε = 10−20. (B) Griewank Function with ε = 10−1. (C) Ackley Function with ε = 10−15. (D) Zakharov Function with ε = 10−200. (E) Schwefel's Problem 2.22 with ε = 10−20. (F) Weierstrass Function with ε = 10−30.

https://doi.org/10.1371/journal.pone.0161558.g009

7 Conclusion

Many modifications have been made to the standard PSO algorithm. Some of these modifications introduce new inertia weight strategies that are tuned by trial and error. Suitable selection of the inertia weight provides a balance between global and local searching. This paper proposed a new flexible exponential time-varying inertia weight (FEIW) strategy to improve the performance of PSO; the resulting algorithm is named FEPSO. We confirmed FEPSO's validity in terms of convergence speed and solution precision by testing it on a suite of well-known standard benchmark unimodal and multimodal functions and by comparing the obtained results with eight of the best time-varying, adaptive and primitive inertia weight strategies. The comparisons are made in terms of convergence speed and solution accuracy, and the results are tabulated and plotted for dimensions 10 and 50 separately. Statistical tests show that the novel strategy converges faster than the others during the early stage of the search process and provides better results. The experimental results thus clearly demonstrate the superiority of the proposed model over other inertia weight models. Future work includes applying FEPSO to a complex real-world problem, such as brain MR image segmentation, to compare its efficiency with other recent optimization techniques.

Acknowledgments

We would like to thank Ms. Arezou Jamaly for assistance in editing and preparation of tables and figures.

Author Contributions

  1. Conceptualization: MJA.
  2. Data curation: MJA.
  3. Formal analysis: MJA MS MHS.
  4. Funding acquisition: MJA MS MHS.
  5. Investigation: MJA MS MHS.
  6. Methodology: MJA.
  7. Project administration: MJA.
  8. Resources: MJA.
  9. Software: MJA.
  10. Supervision: MS MHS.
  11. Validation: MJA MS MHS.
  12. Visualization: MJA.
  13. Writing – original draft: MJA.
  14. Writing – review & editing: MJA MS MHS.

References

  1. Ab Wahab MN, Nefti-Meziani S, Atyabi A. A comprehensive review of swarm optimization algorithms. PLoS One. 2015;10(5):e0122827. pmid:25992655
  2. Kennedy J, Eberhart RC. Particle swarm optimization. In: Proceedings of the IEEE international joint conference on neural networks. 1995:1942–8.
  3. Eberhart RC, Kennedy J, editors. A new optimizer using particle swarm theory. Proceedings of the sixth international symposium on micro machine and human science; 1995: New York, NY.
  4. Ciuprina G, Ioan D, Munteanu I. Use of intelligent-particle swarm optimization in electromagnetics. IEEE Transactions on Magnetics. 2002;38(2):1037–40.
  5. Liang JJ, Qin AK, Suganthan PN, Baskar S. Comprehensive learning particle swarm optimizer for global optimization of multimodal functions. IEEE Transactions on Evolutionary Computation. 2006;10(3):281–95.
  6. Shi Y, Eberhart RC, editors. Parameter selection in particle swarm optimization. Evolutionary Programming VII; 1998: Springer.
  7. Jiao B, Lian Z, Gu X. A dynamic inertia weight particle swarm optimization algorithm. Chaos, Solitons & Fractals. 2008;37(3):698–705.
  8. Eberhart RC, Shi Y, editors. Comparing inertia weights and constriction factors in particle swarm optimization. Proceedings of the 2000 Congress on Evolutionary Computation; 2000: IEEE.
  9. Ratnaweera A, Halgamuge SK, Watson HC, editors. Particle swarm optimization with self-adaptive acceleration coefficients. FSKD; 2002.
  10. Trelea IC. The particle swarm optimization algorithm: convergence analysis and parameter selection. Information Processing Letters. 2003;85(6):317–25.
  11. Kennedy J, Mendes R, editors. Population structure and particle performance. Proceedings of the IEEE Congress on Evolutionary Computation, Honolulu, HI, USA; 2002.
  12. Liu C, Du W-B, Wang W-X. Particle swarm optimization with scale-free interactions. PLoS One. 2014;9(5):e97822. pmid:24859007
  13. Alfi A. PSO with adaptive mutation and inertia weight and its application in parameter estimation of dynamic systems. Acta Automatica Sinica. 2011;37(5):541–9.
  14. Shi Y, Eberhart RC, editors. Fuzzy adaptive particle swarm optimization. Proceedings of the 2001 Congress on Evolutionary Computation; 2001: IEEE.
  15. Shi Y, Eberhart RC, editors. A modified particle swarm optimizer. Proceedings of the 1998 IEEE International Conference on Evolutionary Computation, IEEE World Congress on Computational Intelligence; 1998: IEEE.
  16. Nickabadi A, Ebadzadeh MM, Safabakhsh R. A novel particle swarm optimization algorithm with adaptive inertia weight. Applied Soft Computing. 2011;11(4):3658–70.
  17. Eberhart RC, Shi Y, editors. Tracking and optimizing dynamic systems with particle swarms. Proceedings of the 2001 Congress on Evolutionary Computation; 2001: IEEE.
  18. Arumugam MS, Rao M. On the improved performances of the particle swarm optimization algorithms with adaptive parameters, cross-over operators and root mean square (RMS) variants for computing optimal control of a class of hybrid systems. Applied Soft Computing. 2008;8(1):324–36.
  19. Engelbrecht AP. Computational intelligence: an introduction. John Wiley & Sons; 2007.
  20. Kumar M, Sasamal T, editors. Design of FIR filter using PSO with CFA and inertia weight approach. 2015 International Conference on Computing, Communication & Automation (ICCCA); 2015: IEEE.
  21. Xun Z, Juelong L, Jianchun X, Ping W, Qiliang Y, editors. The impact of parameter adjustment strategies on the performance of particle swarm optimization algorithm. 2015 27th Chinese Control and Decision Conference (CCDC); 2015: IEEE.
  22. Chatterjee A, Siarry P. Nonlinear inertia weight variation for dynamic adaptation in particle swarm optimization. Computers & Operations Research. 2006;33(3):859–71.
  23. Feng Y, Teng G, Wang A, Yao Y, editors. Chaotic inertia weight in particle swarm optimization. Second International Conference on Innovative Computing, Information and Control (ICICIC'07); 2007: IEEE.
  24. Feng Y, Yao Y, Wang A, editors. Comparing with chaotic inertia weights in particle swarm optimization. 2007 International Conference on Machine Learning and Cybernetics; 2007: IEEE.
  25. Chen G, Huang X, Jia J, Min Z, editors. Natural exponential inertia weight strategy in particle swarm optimization. Sixth World Congress on Intelligent Control and Automation (WCICA 2006); 2006: IEEE.
  26. Li H-R, Gao Y-L, editors. Particle swarm optimization algorithm with exponent decreasing inertia weight and stochastic mutation. Second International Conference on Information and Computing Science (ICIC'09); 2009: IEEE.
  27. Bansal J, Singh P, Saraswat M, Verma A, Jadon SS, Abraham A, editors. Inertia weight strategies in particle swarm optimization. Third World Congress on Nature and Biologically Inspired Computing (NaBIC); 2011: IEEE.
  28. Arasomwan MA, Adewumi AO. On the performance of linear decreasing inertia weight particle swarm optimization for global optimization. The Scientific World Journal. 2013;2013.
  29. Jamil M, Yang X-S. A literature survey of benchmark functions for global optimisation problems. International Journal of Mathematical Modelling and Numerical Optimisation. 2013;4(2):150–94.
  30. Molga M, Smutnicki C. Test functions for optimization needs. 2005. Available from: http://www.zsd.ict.pwr.wroc.pl/files/docs/functions.pdf.
  31. Yao X, Liu Y, Lin G. Evolutionary programming made faster. IEEE Transactions on Evolutionary Computation. 1999;3(2):82–102.
  32. Liang J, Qu B, Suganthan P, Chen Q. Problem definitions and evaluation criteria for the CEC 2015 competition on learning-based real-parameter single objective optimization. Technical Report 201411A, Computational Intelligence Laboratory, Zhengzhou University, Zhengzhou, China, and Technical Report, Nanyang Technological University, Singapore. 2014.
  33. Suganthan PN, Hansen N, Liang JJ, Deb K, Chen Y-P, Auger A, et al. Problem definitions and evaluation criteria for the CEC 2005 special session on real-parameter optimization. KanGAL Report. 2005;2005005:2005.
  34. Wolpert DH, Macready WG. No free lunch theorems for optimization. IEEE Transactions on Evolutionary Computation. 1997;1(1):67–82.
  35. Demšar J. Statistical comparisons of classifiers over multiple data sets. The Journal of Machine Learning Research. 2006;7:1–30.
  36. Derrac J, García S, Molina D, Herrera F. A practical tutorial on the use of nonparametric statistical tests as a methodology for comparing evolutionary and swarm intelligence algorithms. Swarm and Evolutionary Computation. 2011;1(1):3–18.
  37. Kumar P, Pant M, editors. Enhanced mutation strategy for differential evolution. 2012 IEEE Congress on Evolutionary Computation (CEC); 2012: IEEE.
  38. Friedman M. The use of ranks to avoid the assumption of normality implicit in the analysis of variance. Journal of the American Statistical Association. 1937;32(200):675–701.
  39. Friedman M. A comparison of alternative tests of significance for the problem of m rankings. The Annals of Mathematical Statistics. 1940;11(1):86–92.
  40. Dunn OJ. Multiple comparisons among means. Journal of the American Statistical Association. 1961;56(293):52–64.