
Fractional-order quantum particle swarm optimization

  • Lai Xu,

    Roles Conceptualization, Data curation, Formal analysis, Methodology, Software, Visualization, Writing – original draft

    Affiliation School of Computer Science, Sichuan University, Chengdu, Sichuan Province, China

  • Aamir Muhammad,

    Roles Data curation, Formal analysis, Software, Validation

    Affiliation School of Computer Science, Sichuan University, Chengdu, Sichuan Province, China

  • Yifei Pu,

    Roles Conceptualization, Supervision

    Affiliation School of Computer Science, Sichuan University, Chengdu, Sichuan Province, China

  • Jiliu Zhou,

    Roles Data curation, Funding acquisition, Supervision, Validation

    Affiliation School of Computer Science, Sichuan University, Chengdu, Sichuan Province, China

  • Yi Zhang

    Roles Funding acquisition, Project administration, Supervision, Validation, Writing – review & editing

    yzhang@scu.edu.cn

    Affiliation School of Computer Science, Sichuan University, Chengdu, Sichuan Province, China

Abstract

Motivated by the concepts of quantum mechanics and particle swarm optimization (PSO), quantum-behaved particle swarm optimization (QPSO) was developed to achieve better global search ability. This paper proposes a new method to improve the global search ability of QPSO with fractional calculus (FC). Based on one of the most frequently used fractional differential definitions, the Grünwald-Letnikov (GL) definition, we introduce its discrete expression into the position update of QPSO. Extensive experiments on well-known benchmark functions were performed to evaluate the performance of the proposed fractional-order quantum particle swarm optimization (FQPSO). The experimental results demonstrate its superior ability to reach optimal solutions on several different optimization problems.

Introduction

Particle swarm optimization (PSO) [1], inspired by animal social behaviors such as bird flocking, was first proposed by Kennedy and Eberhart as a population-based optimization technique. In PSO, the potential solutions, called particles, move through the solution space guided by their own experience and by the current best particle. PSO is competitive with the classical genetic algorithm (GA) [2], evolutionary programming (EP) [3], evolution strategies (ES) [4], genetic programming (GP) [5] and other classic algorithms. It has attracted increasing attention in recent years thanks to its effectiveness on different optimization problems [6][7][8].

The quantum computer [9] was proposed over 30 years ago, and its formal definition was given in the late 1980s. Since quantum computers have shown their potential on several special problems [10], many efforts have been dedicated to this field. Several well-known algorithms have been proposed, among which Shor's quantum factoring algorithm is the most famous [11].

Inspired by a similar idea, quantum-behaved particle swarm optimization (QPSO) [12] was introduced in 2004 by Sun et al. to improve the convergence of classical PSO. In quantum space, particles search the complete solution space, and convergence to the global optimum is guaranteed in probability. In recent decades, fractional calculus has drawn increasing interest and become an important branch of mathematical analysis. Furthermore, the random variables in a physical process can be regarded as substitutes for real stochastic motion, so fractional calculus can be introduced to analyze the physical states and procedures of objects in Euclidean space. Fractional differential functions have two notable features: the fractional derivative of a power function is again a power function, while for other primary functions it takes the form of an iterative sum or product of specific functions. Meanwhile, it has been shown that many fractional-order models are more suitable for describing natural phenomena. Based on these observations, fractional calculus has been introduced into many fields, such as viscoelastic theory [13], diffusion processing [14] and stochastic fractal dynamics [15]. Most research on fractional-order applications focuses on the transient states of physical changes; the evolutionary procedures of systems are rarely considered.

In recent years, QPSO has attracted great attention from many researchers. To balance the global and local search abilities, Xi et al. proposed a weighted QPSO (WQPSO) [16]. Jiao et al. proposed a dynamic-context cooperative quantum-behaved particle swarm optimization (CQPSO) [17] for medical image segmentation. Although QPSO and its variants perform better in some aspects, they do not make full use of the state information accumulated during the search and are therefore inefficient at finding the global optimum. In this paper, a novel quantum particle swarm optimization with a fractional-order position is proposed. Owing to the nonlinear, non-causal and non-stationary characteristics of fractional calculus, the search for the global optimum can be significantly accelerated [18][19].

The rest of this paper is organized as follows: Section 2 introduces the mathematical background of fractional calculus. Section 3 presents the basic ideas of PSO and QPSO, together with the proposed method. Section 4 demonstrates the experimental results of the proposed method. Finally, Section 5 concludes the paper.

Background theory for fractional calculus

The Grünwald-Letnikov (GL) [20], Riemann-Liouville (RL) [21], and Caputo [22] definitions are three different definitions of fractional calculus in Euclidean space. Due to its convenient computational form, the GL definition of the fractional derivative is commonly used in engineering problems.

The GL derivative of order v of a function f(x) is defined as:

(1) \( {}_{a}D_{x}^{v} f(x) = \lim_{N \to \infty} \frac{\left(\frac{x-a}{N}\right)^{-v}}{\Gamma(-v)} \sum_{k=0}^{N-1} \frac{\Gamma(k-v)}{\Gamma(k+1)}\, f\!\left(x - k\,\frac{x-a}{N}\right) \)

where f(x) is a differintegrable function, [a,x] is the function duration, and Γ is the gamma function. Here, \({}_{a}D_{x}^{v}\) denotes the GL fractional differential operator.

In (1), when N is large enough, the limit symbol can be dropped and (1) can be rewritten as:

(2) \( {}_{a}D_{x}^{v} f(x) \approx \frac{\left(\frac{x-a}{N}\right)^{-v}}{\Gamma(-v)} \sum_{k=0}^{N-1} \frac{\Gamma(k-v)}{\Gamma(k+1)}\, f\!\left(x - k\,\frac{x-a}{N}\right) \)

which is an approximate form that replaces the fractional derivative with multiplication and addition operations [12]. For a uniformly spaced 1-D signal, it has the following expression:

(3) \( D^{v} f(x) \approx f(x) + (-v)\,f(x-1) + \frac{(-v)(-v+1)}{2}\,f(x-2) + \cdots + \frac{(-1)^{n}\,\Gamma(v+1)}{n!\,\Gamma(v-n+1)}\,f(x-n) \)
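The truncated expansion in (3) can be computed directly, since each coefficient follows from the previous one by a simple recurrence. A minimal numerical sketch (the function names are ours, not from the paper):

```python
def gl_coefficients(v, n_terms):
    """First n_terms coefficients of the discrete GL expansion in (3):
    c_0 = 1, c_1 = -v, c_2 = (-v)(-v+1)/2, ... built with the
    recurrence c_k = c_{k-1} * (k - 1 - v) / k."""
    coeffs = [1.0]
    for k in range(1, n_terms):
        coeffs.append(coeffs[-1] * (k - 1 - v) / k)
    return coeffs

def gl_derivative(signal, v, n_terms=4):
    """Approximate order-v GL derivative at the newest sample of a
    uniformly spaced 1-D signal (signal[-1] is f(x), signal[-2] is
    f(x-1), and so on)."""
    coeffs = gl_coefficients(v, min(n_terms, len(signal)))
    return sum(c * signal[-1 - k] for k, c in enumerate(coeffs))

# For v = 1 the coefficients collapse to 1, -1, 0, 0, ..., so the GL
# "derivative" reduces to the backward difference f(x) - f(x-1).
```

For v between 0 and 1 the later coefficients are small but nonzero, which is exactly the long-memory effect exploited later in the paper.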

Particle swarm optimization with fractional-order position

Quantum particle swarm optimization

Trajectory analyses in [23] demonstrated that each particle should converge to its corresponding attractor Ci, which is given as follows:

(4) \( C_{id}(t) = a \cdot pb_{id}(t) + (1-a) \cdot gb_{d}(t) \)

where a = c1r1/(c1r1+c2r2). It can be seen that the local attractor is a stochastic attractor of particle i that lies in a hyper-rectangle with pbid and gbd being the two ends of its diagonal.
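In code, the attractor of (4) reduces to a per-dimension convex combination of the personal best and global best; a sketch with our own (hypothetical) names:

```python
import random

def local_attractor(pbest_i, gbest, c1=2.0, c2=2.0):
    """Stochastic attractor C_i of Eq. (4): each coordinate is a random
    convex combination of pbest and gbest, so C_i lies in the
    hyper-rectangle whose diagonal joins the two points."""
    attractor = []
    for pb, gb in zip(pbest_i, gbest):
        r1 = 1.0 - random.random()  # in (0, 1], keeps the denominator positive
        r2 = 1.0 - random.random()
        a = c1 * r1 / (c1 * r1 + c2 * r2)
        attractor.append(a * pb + (1.0 - a) * gb)
    return attractor
```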

Based on the convergence analysis of PSO [24] and inspired by the theory of quantum physics, Sun et al. studied the convergence behavior of PSO and proposed a novel PSO model derived from quantum mechanics, abbreviated as QPSO [25]. Based on the Delta potential well model, the quantum behavior of particles is considered. In the framework of quantum time-space, the quantum state of a particle is described by a wave function ψ(x,t). In 3-D space, the probability Q of measuring the particle's location satisfies

(5) \( |\psi(\mathbf{x},t)|^{2}\,dx\,dy\,dz = Q\,dx\,dy\,dz \)

Since |ψ|² is a probability density function, we have

(6) \( \int_{-\infty}^{\infty}\!\int_{-\infty}^{\infty}\!\int_{-\infty}^{\infty} |\psi(\mathbf{x},t)|^{2}\,dx\,dy\,dz = 1 \)

For a particle in a one-dimensional Delta potential well centered at point c, the normalized version of ψ can be given as:

(7) \( \psi(y) = \frac{1}{\sqrt{L}}\,e^{-|y|/L} \)

where y = x − c and L determines the search scope of the particle.

As a result, Q and the corresponding distribution function F can be obtained as:

(8) \( Q(y) = |\psi(y)|^{2} = \frac{1}{L}\,e^{-2|y|/L} \)

and

(9) \( F(y) = e^{-2|y|/L_{id}(t)} \)

where Lid(t) denotes the standard deviation of the distribution, which describes the search range of each particle. The position of the particle can be obtained by the Monte Carlo method with the following formula:

(10) \( s = \frac{1}{L}\,u = \frac{1}{L}\,e^{-2|y|/L} \)

where s denotes a random value uniformly distributed on (0, 1/L), and u is uniformly distributed on (0, 1).

Then, u = e−2|y|/L. Letting y = x − c, we have:

(11) \( x = c \pm \frac{L}{2}\,\ln(1/u) \)

The convergence condition of PSO is given by:

(12) \( \lim_{t \to \infty} L_{id}(t) = 0 \)

which guarantees that each particle converges to its local attractor.

Letting L be a function of time, we have:

(13) \( L_{id}(t) = 2\beta\,|C_{id}(t) - x_{id}(t)| \)

With (13), we have the iterative version of the i-th multidimensional particle as follows:

(14) \( x_{id}(t+1) = C_{id}(t) \pm \beta\,|C_{id}(t) - x_{id}(t)|\,\ln(1/u) \)

A global point called the mean best position is introduced to evaluate Lid(t). This point, denoted mbest, is computed as the mean of the pbest positions of all M particles:

(15) \( mbest_{d} = \frac{1}{M}\sum_{i=1}^{M} pb_{id} \)

The value of Lid(t) is then calculated by:

(16) \( L_{id}(t) = 2\beta\,|mbest_{d} - x_{id}(t)| \)

Finally, the position update can be given by:

(17) \( x_{id}(t+1) = C_{id}(t) \pm \beta\,|mbest_{d} - x_{id}(t)|\,\ln(1/u) \)

where the parameter β is the step size (contraction-expansion coefficient), which is utilized to control the convergence speed, and rand is a random number in the range 0 to 1, which decides the sign "±" in (17).

Table 1 illustrates the main steps of QPSO.
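The steps summarized in Table 1 can be sketched as one complete loop. This is our own minimal reading of Eqs. (15)-(17) with a fixed β and c1 = c2; it is not the authors' implementation:

```python
import math
import random

def qpso(f, dim, pop=20, iters=1000, lo=-100.0, hi=100.0, beta=0.75):
    """Minimize f over [lo, hi]^dim with quantum-behaved PSO,
    following the mean-best formulation of Eqs. (15)-(17)."""
    X = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(pop)]
    pbest = [x[:] for x in X]                 # personal best positions
    pcost = [f(x) for x in X]                 # personal best costs
    g = min(range(pop), key=lambda i: pcost[i])
    gbest, gcost = pbest[g][:], pcost[g]      # global best
    for _ in range(iters):
        # mean best position over the swarm (Eq. 15)
        mbest = [sum(p[d] for p in pbest) / pop for d in range(dim)]
        for i in range(pop):
            for d in range(dim):
                r1 = 1.0 - random.random()    # in (0, 1]
                r2 = 1.0 - random.random()
                # local attractor (Eq. 4 with c1 = c2)
                C = (r1 * pbest[i][d] + r2 * gbest[d]) / (r1 + r2)
                u = 1.0 - random.random()     # in (0, 1], avoids log(1/0)
                step = beta * abs(mbest[d] - X[i][d]) * math.log(1.0 / u)
                # position update (Eq. 17), sign chosen at random
                X[i][d] = C + step if random.random() > 0.5 else C - step
            cost = f(X[i])
            if cost < pcost[i]:
                pbest[i], pcost[i] = X[i][:], cost
                if cost < gcost:
                    gbest, gcost = X[i][:], cost
    return gbest, gcost
```

For instance, minimizing the 2-D Sphere function: `best, cost = qpso(lambda x: sum(v * v for v in x), dim=2, pop=10, iters=200, lo=-10.0, hi=10.0)`.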

QPSO with the fractional-order position

It is well known that fractional calculus has a remarkable long-term memory characteristic [26]. From the Grünwald-Letnikov definition in (1), it can be seen that the fractional derivative is computed from all historical states, which makes it naturally suitable for the iterative procedures of intelligent optimization algorithms. For example, Pires et al. introduced fractional calculus into the velocity update formula of the particle swarm optimization algorithm [27].

To further improve the convergence speed and accuracy of QPSO, this section details the proposed QPSO with a fractional-order position. First, the original position update is rearranged to expose the order of the position derivative. Splitting (17) into its two sign branches gives:

(18) \( x_{id}(t+1) = C_{id}(t) + \beta\,|mbest_{d} - x_{id}(t)|\,\ln(1/u) \)

(19) \( x_{id}(t+1) = C_{id}(t) - \beta\,|mbest_{d} - x_{id}(t)|\,\ln(1/u) \)

Subtracting xid(t) from both sides yields:

(20) \( x_{id}(t+1) - x_{id}(t) = C_{id}(t) - x_{id}(t) + \beta\,|mbest_{d} - x_{id}(t)|\,\ln(1/u) \)

(21) \( x_{id}(t+1) - x_{id}(t) = C_{id}(t) - x_{id}(t) - \beta\,|mbest_{d} - x_{id}(t)|\,\ln(1/u) \)

(20) and (21) can be uniformly rewritten as:

(22) \( x_{id}(t+1) - x_{id}(t) = C_{id}(t) - x_{id}(t) \pm \beta\,|mbest_{d} - x_{id}(t)|\,\ln(1/u) \)

The left side of (22) is the discrete version of the derivative with α = 1, so (22) can be extended to a generalized version, leading to the following fractional-order expression:

(23) \( D^{\alpha}\!\left[x_{id}(t+1)\right] = C_{id}(t) - x_{id}(t) + \beta\,|mbest_{d} - x_{id}(t)|\,\ln(1/u) \)

when rand > 0.5 and mbestd > Xid(t), or rand < 0.5 and mbestd < Xid(t).

Similarly, for rand > 0.5 and mbestd < Xid(t), or rand < 0.5 and mbestd > Xid(t), we have:

(24) \( D^{\alpha}\!\left[x_{id}(t+1)\right] = C_{id}(t) - x_{id}(t) - \beta\,|mbest_{d} - x_{id}(t)|\,\ln(1/u) \)

Previous studies have demonstrated that when the order α of the derivative is set within [0,1], it introduces a smoother variation and a prolonged memory effect, which may lead to better performance than the original integer-order method [12][13]. To study the behavior of the proposed fractional-order strategy, a set of functions was tested with the order α ranging from 0 to 1 in steps of Δα = 0.1. To reduce the computational complexity, we truncate (3) and keep only the first four terms, so we have:

(25) \( D^{\alpha}\!\left[x_{id}(t+1)\right] \approx x_{id}(t+1) - \alpha\,x_{id}(t) - \frac{\alpha(1-\alpha)}{2}\,x_{id}(t-1) - \frac{\alpha(1-\alpha)(2-\alpha)}{6}\,x_{id}(t-2) \)

Then, (23) can be modified to:

(26) \( x_{id}(t+1) = \alpha\,x_{id}(t) + \frac{\alpha(1-\alpha)}{2}\,x_{id}(t-1) + \frac{\alpha(1-\alpha)(2-\alpha)}{6}\,x_{id}(t-2) + C_{id}(t) - x_{id}(t) + \Delta_{id}(t) \)

and (24) can likewise be rewritten as:

(27) \( x_{id}(t+1) = \alpha\,x_{id}(t) + \frac{\alpha(1-\alpha)}{2}\,x_{id}(t-1) + \frac{\alpha(1-\alpha)(2-\alpha)}{6}\,x_{id}(t-2) + C_{id}(t) - x_{id}(t) - \Delta_{id}(t) \)

where

(28) \( \Delta_{id}(t) = \beta\,|mbest_{d} - x_{id}(t)|\,\ln(1/u) \)

It can be seen from (23) and (24) that the position update of a particle depends not only on its previous position but also on its historical positions at different points in time. The position update is thus the result of long-term memory, which helps preserve the population distribution and diversity to a certain extent. The main steps of the proposed quantum-behaved particle swarm optimization with fractional-order position (FQPSO) are shown in Table 2.
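Under our reading of Eqs. (26)-(28), a single fractional-order position update keeps a short history of each particle's positions. A one-dimensional sketch (the names and argument layout are ours):

```python
import math

def fractional_update(history, C, mbest_d, beta, alpha, u, plus):
    """One-dimensional FQPSO position update following Eqs. (26)-(28):
    the four-term GL memory of Eq. (25) replaces the first-order
    difference on the left side of Eq. (22).
    `history` holds [x(t), x(t-1), x(t-2)], newest first."""
    x_t = history[0]
    memory = (alpha * history[0]
              + 0.5 * alpha * (1.0 - alpha) * history[1]
              + alpha * (1.0 - alpha) * (2.0 - alpha) / 6.0 * history[2])
    delta = beta * abs(mbest_d - x_t) * math.log(1.0 / u)  # Eq. (28)
    return memory + C - x_t + (delta if plus else -delta)

# With alpha = 1 the memory term collapses to x(t) and the update
# reduces to the integer-order QPSO rule x(t+1) = C +/- delta.
```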

Experiments

Experimental setup

To validate the performance of the proposed FQPSO, 8 benchmark functions [28–30] listed in Table 3 were used to compare FQPSO with PSO and QPSO under the same maximum number of function evaluations (FEs). For FQPSO, the order α ranged from 0.1 to 0.9 in steps of 0.1. First, to investigate the impact of the fractional-order position, FQPSO with different fractional orders was compared against QPSO. Then, the best results of FQPSO were compared with other variants of PSO, including PSO [31], QPSO, PSO with both chaotic sequences and crossover operation (CCPSO) [32], naive PSO (NPSO) [33], and moderate-random-search strategy PSO (MRPSO) [34].

The parameters of the compared algorithms were set as recommended in the original references. Since the impact of population size on the performance of PSO-based methods is of minimal significance [35], all experiments in this research were performed with a population size of 20. β is computed according to the following formula:

(29) \( \beta = \beta_{1} + (\beta_{0} - \beta_{1})\,\frac{t_{max} - t}{t_{max}} \)

where β0 = 0.8, β1 = 0.6, t is the current number of iterations, and tmax is the maximum number of iterations [36].
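Assuming (29) is the usual linear decrease of the contraction-expansion coefficient from β0 to β1 over the run (as in [36]), it can be sketched as:

```python
def beta_schedule(t, t_max, beta0=0.8, beta1=0.6):
    """Contraction-expansion coefficient decreasing linearly from
    beta0 at t = 0 to beta1 at t = t_max (our reading of Eq. 29)."""
    return beta1 + (beta0 - beta1) * (t_max - t) / t_max
```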

Testing FQPSO with different fractional orders

Since QPSO is a stochastic algorithm, each run yields a different convergence trajectory. Therefore, the simulations were performed 50 times for each value in the parameter set α = {0,0.1,0.2,…,1}. In Figs 1 and 2, the results are given for the adopted optimization functions fj, j = 1,2,…,8. To show the gains achieved by the proposed algorithm, two groups of experiments were performed. In the unimodal function (f1-f5, Group 1) and multimodal function (f6-f8, Group 2) tests, the maximum numbers of FEs were set to 10000, 30000 and 100000 for the 10-D, 30-D and 100-D problems, respectively. We report both the best and the mean results. The final results over 50 runs of FQPSO are summarized in Tables 4–7.
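The 50-run protocol can be expressed as a generic harness; `optimizer` stands for a single run of any of the stochastic algorithms compared here, and the names are ours:

```python
import statistics

def evaluate(optimizer, runs=50):
    """Run a stochastic optimizer `runs` times and report the best and
    mean final costs, matching the statistics reported in Tables 4-7."""
    costs = [optimizer() for _ in range(runs)]
    return {"best": min(costs), "mean": statistics.mean(costs)}
```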

Fig 1. Comparison of FQPSO with different fractional orders on Group 1.

(a) f1, (b) f2, (c) f3, (d) f4, (e) f5.

https://doi.org/10.1371/journal.pone.0218285.g001

Fig 2. Comparison of FQPSO with different fractional orders on Group 2.

(a) f6, (b) f7, (c) f8.

https://doi.org/10.1371/journal.pone.0218285.g002

Table 4. Comparison of FQPSO with different fractional orders on functions 1–2.

https://doi.org/10.1371/journal.pone.0218285.t004

Table 5. Comparison of FQPSO with different fractional orders on functions 3–4.

https://doi.org/10.1371/journal.pone.0218285.t005

Table 6. Comparison of FQPSO with different fractional orders on functions 5–6.

https://doi.org/10.1371/journal.pone.0218285.t006

Table 7. Comparison of FQPSO with different fractional orders on functions 7–8.

https://doi.org/10.1371/journal.pone.0218285.t007

Fig 1 shows the performance of FQPSO with different fractional orders on Group 1. f1, the Sphere function, is the most widely used unimodal test function. Compared with the algorithms using integer-order positions, FQPSO shows the best results for this function. Similar results were obtained for the other unimodal functions. The improvements achieved by FQPSO on these unimodal functions suggest that fractional-order methods are better at fine-grained search than integer-order ones. However, it is also worth noting that FQPSO with orders 0.1 and 0.2 did not outperform the integer-order method. The reason is that (25) is only an approximation of Dα, and the approximation accuracy of D0.1 and D0.2 is not good enough. From Fig 1, we can see that the convergence accuracy of most FQPSO variants is better than that of QPSO. For the 10-D and 30-D problems on functions 1, 2 and 3, shown in Fig 1A, 1B and 1C and Tables 5 and 6, the convergence accuracies are better than QPSO when 0.3≤α≤0.9. For the 10-D and 30-D problems on function 4, the convergence accuracies are better than QPSO when 0.4≤α≤0.9. For the 10-D and 30-D problems on function 5, the convergence accuracies are better than QPSO when 0.2≤α≤0.9. Tables 5–7 also show that for the 100-D problems, the convergence accuracies are better than QPSO on functions 1–5 when 0.7≤α≤0.9.

In general, we can always find an appropriate fractional order such that the convergence accuracy of the algorithm is better than that of the integer-order algorithm on Group 1.

In Fig 2, for f6, f7 and f8, the number of local minima increases dramatically as the dimension of the function rises. In this part, we mainly investigated the capability of global search. f6 is the generalized Rastrigin function, one of the most widely used multimodal test functions for PSO algorithms, which tends to trap optimizers in local minima. By considering more orders when searching the solution space, FQPSO obtains more favorable results than the compared algorithms. f7 is the Ackley function; according to the results in Table 8, the performance of FQPSO changes little with the variation of dimension and achieves the best results on each dimension. Function f8 is the Weierstrass function, which is continuous everywhere but differentiable nowhere. In short, FQPSO reaches the global optimum on 10 and 30 dimensions. In Fig 2 and Tables 6 and 7, it can be observed that, except for orders 0.1 and 0.2, FQPSO always achieves better results than QPSO. Meanwhile, for functions 6, 7 and 8, the convergence accuracies are better than QPSO when 0.3≤α≤0.9.

In summary, FQPSO has a superior ability to tackle multimodal functions compared with the other algorithms. We can always find an appropriate fractional order for which the algorithm has better convergence accuracy than the integer-order one on Group 2.

Table 8 shows the time consumption (in seconds) of FQPSO and QPSO in solving the function optimization problems. The experimental results also confirm that the fractional-order method consumes only slightly more time per iteration and does not introduce significant overhead.

Comparison with other variants of PSO

In this experiment, the best results of the FQPSO methods were used for comparison with other variants of PSO, including PSO, QPSO, CCPSO, NPSO and MRPSO. The parameters of the compared algorithms were set according to the recommendations in their original papers. The maximum numbers of FEs were set to 10000 and 30000 for the 10-D and 30-D problems, respectively. All experiments were performed with a population size of 20.

Tables 9 and 10 show the statistical results of the different algorithms on the unimodal functions. From the results in the last subsection, we can see that FQPSO with D0.8 obtained the best results on functions 1–3 and FQPSO with D0.7 achieved the best results on functions 4–5. We fixed these orders to compare the results with the other variants of PSO. The results of the different algorithms on these five unimodal functions suggest that FQPSO is better at fine-grained search than all the other algorithms. The rapid convergence of FQPSO shown in Fig 3 can be seen as evidence for this observation. In summary, FQPSO performs best among all the algorithms in solving unimodal functions. Tables 10 and 11 and Fig 4 show the performances of the different algorithms on Group 2.

Fig 3. Comparison of different PSO algorithms on Group 1.

(a) f1, (b) f2, (c) f3, (d) f4, (e) f5.

https://doi.org/10.1371/journal.pone.0218285.g003

Fig 4. Comparison of different PSO algorithms on Group 2.

(a) f6, (b) f7, (c) f8.

https://doi.org/10.1371/journal.pone.0218285.g004

Table 9. Comparison of different PSO algorithms on functions 1–3.

https://doi.org/10.1371/journal.pone.0218285.t009

Table 10. Comparison of different PSO algorithms on functions 4–6.

https://doi.org/10.1371/journal.pone.0218285.t010


Table 11. Comparison of different PSO algorithms on functions 7–8.

https://doi.org/10.1371/journal.pone.0218285.t011

From the results in the last subsection, it can be noticed that FQPSO with D0.9 obtained the best result on function 6, FQPSO with D0.7 on function 7, and FQPSO with D0.9 on function 8. We again fixed these orders to compare the results with the other variants of PSO. It can be seen that FQPSO obtains the global optimum on 10 and 30 dimensions and handles multimodal functions better than the other algorithms.

The results of the different PSOs on 30 dimensions also support our conclusion that FQPSO is well suited to multimodal functions. In summary, FQPSO performs best among all the algorithms in solving both unimodal and multimodal functions.

Conclusion

Inspired by the properties of fractional calculus, we presented a novel QPSO algorithm incorporating a fractional calculus strategy, which exploits the long-term memory and non-locality of fractional calculus. The goal is not only to accelerate convergence but also to avoid local optima. Since the fractional-order memory enables the quantum particles in FQPSO to appear anywhere in the search space during the iterations, it significantly improves the global search ability. Furthermore, FQPSO also increases the convergence rate of the quantum particles. As a result, the proposed FQPSO method achieves more favorable results than all the compared algorithms.

References

  1. Eberhart R.; Kennedy J. A new optimizer using particle swarm theory. In Proceedings of the International Symposium on Micro Machine and Human Science.
  2. Goldberg D.E. Genetic Algorithms in Search, Optimization and Machine Learning. 1989, 7, 2104–2116.
  3. Yuryevich J.; Wong K.P. Evolutionary programming based optimal power flow algorithm. IEEE Trans Power Syst. 1999, 14, 1245–1250.
  4. Mezuramontes E.; Coello C.A.C. A simple multimembered evolution strategy to solve constrained optimization problems. IEEE Transactions on Evolutionary Computation. 2005, 9, 1–17.
  5. Nordin P. Genetic Programming III—Darwinian Invention and Problem Solving. IEEE Transactions on Evolutionary Computation. 2002, 3, 251–253.
  6. Bansal J.C.; Deep K. A Modified Binary Particle Swarm Optimization for Knapsack Problems. Applied Mathematics & Computation. 2012, 218, 11042–11061.
  7. Wang K.P.; Huang L.; Zhou C.G. Particle swarm optimization for traveling salesman problem. Acta Scientiarum Naturalium Universitatis Jilinensis. 2003, 3, 1583–1585.
  8. Omran M.; Engelbrecht A.P.; Salman A. Particle Swarm Optimization Method for Image Clustering. International Journal of Pattern Recognition & Artificial Intelligence. 2005, 19, 297–321.
  9. Benioff P. The computer as a physical system: A microscopic quantum mechanical Hamiltonian model of computers as represented by Turing machines. Journal of Statistical Physics. 1980, 22, 563–593.
  10. Kane B.E. A silicon-based nuclear spin quantum computer. Nature. 1998, 393, 133–137.
  11. Steffen M.; Vandersypen L.; Breyta G. Experimental Realization of Shor's quantum factoring algorithm. American Physical Society. 2002, 6866.
  12. Sun J.; Feng B.; Xu W. Particle swarm optimization with particles having quantum behavior. In Proceedings of the Congress on Evolutionary Computation. 2004.
  13. Koeller R.C. Applications of Fractional Calculus to the Theory of Viscoelasticity. Transactions of the ASME, Journal of Applied Mechanics. 1984, 51, 299–307.
  14. Ciesielski M.; Leszczynski J. Numerical treatment of an initial-boundary value problem for fractional partial differential equations. Signal Processing. 2006, 86, 2619–2631.
  15. Buyukkilic F.; Bayrakdar Z.O.; Demirhan D. Investigation of the cumulative diminution process using the Fibonacci method and fractional calculus. Physica A: Statistical Mechanics and Its Applications. 2016, 444, 336–344.
  16. Xi M.; Sun J.; Xu W. An improved quantum-behaved particle swarm optimization algorithm with weighted mean best position. Advanced Materials Research. 2008, 591–593, 376–1380.
  17. Jiao L.; Stolkin R.; Shang R. Dynamic-context cooperative quantum-behaved particle swarm optimization based on multilevel thresholding applied to medical image segmentation. Information Sciences. 2015, 294, 408–422.
  18. Pu Y.F. Fractional-Order Euler-Lagrange Equation for Fractional-Order Variational Method: A Necessary Condition for Fractional-Order Fixed Boundary Optimization Problems in Signal Processing and Image Processing. IEEE Access. 2016, 99, 1–1.
  19. Pu Y.F.; Siarry P.; Chatterjee A.; Wang Z.N.; Zhang Y.; Liu Y.G.; Zhou J.L.; Wang Y. A Fractional-Order Variational Framework for Retinex: Fractional-Order Partial Differential Equation-Based Formulation for Multi-Scale Nonlocal Contrast Enhancement with Texture Preserving. IEEE Transactions on Image Processing. 2017, 27, 1214–1229. pmid:29990194
  20. Scherer R.; Kalla S.L.; Tang Y. The Grünwald-Letnikov method for fractional differential equations. Computers & Mathematics with Applications. 2011, 62, 902–917.
  21. Abbas S.; Benchohra M. Nonlinear Fractional Order Riemann-Liouville Volterra-Stieltjes Partial Integral Equations on Unbounded Domains. Communications in Mathematical Analysis. 2013, 14, 104–117.
  22. Abdeljawad T. On Riemann and Caputo fractional differences. Computers & Mathematics with Applications. 2011, 62, 1602–1611.
  23. Clerc M.; Kennedy J. The particle swarm: explosion, stability, and convergence in a multi-dimensional complex space. IEEE Transactions on Evolutionary Computation. 2002, 6, 58–73.
  24. Bonyadi M.R.; Michalewicz Z. Analysis of Stability, Local Convergence, and Transformation Sensitivity of a Variant of the Particle Swarm Optimization Algorithm. IEEE Transactions on Evolutionary Computation. 2016, 20, 370–385.
  25. Sun J.; Feng B.; Xu W.B. Particle swarm optimization with particles having quantum behavior. In Proceedings of the Congress on Evolutionary Computation. 2004.
  26. Pu Y.F.; Zhou J.L.; Yuan X. Fractional Differential Mask: A Fractional Differential-Based Approach for Multiscale Texture Enhancement. IEEE Transactions on Image Processing. 2010, 19, 491–511. pmid:19933015
  27. Pires E.J.S.; Machado J.A.T.; Oliveira P.B.D.M. Particle swarm optimization with fractional-order velocity. Nonlinear Dynamics. 2010, 61, 295–301.
  28. Kiranyaz S.; Ince T.; Yildirim A. Fractional Particle Swarm Optimization in Multidimensional Search Space. IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics. 2010, 40, 298–319.
  29. Liang J.J.; Suganthan P.N.; Deb K. Novel composition test functions for global numerical optimization. In Proceedings of the Swarm Intelligence Symposium. 2015.
  30. Yao X.; Liu Y.; Lin G. Evolutionary programming made faster. IEEE Transactions on Evolutionary Computation. 1996, 3, 82–102.
  31. Shi Y.; Eberhart R. A modified particle swarm optimizer. In Proceedings of Advances in Natural Computation.
  32. Park J.B.; Jeong Y.W.; Shin J.R. An Improved Particle Swarm Optimization for Nonconvex Economic Dispatch Problems. IEEE Transactions on Power Systems. 2010, 25, 156–166.
  33. Qin J.; Liang Z. A naive Particle Swarm Optimization. In Proceedings of Evolutionary Computation, IEEE.
  34. Gao H.; Xu W. A New Particle Swarm Algorithm and Its Globally Convergent Modifications. IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics. 2011, 41, 1334.
  35. Bergh F.V.D.; Engelbrecht A.P. Effect of swarm size on cooperative particle swarm optimizers. In Proceedings of the 3rd Annual Conference on Genetic and Evolutionary Computation (GECCO). 2001, 892–899.
  36. Sun J.; Wu X.; Palade V.; Fang W.; Lai C.H.; Xu W.B. Convergence analysis and improvements of quantum-behaved particle swarm optimization. Information Sciences. 2012, 193, 81–103.