Simple gravitational particle swarm algorithm for multimodal optimization problems

In real-world situations, decision makers prefer to have multiple optimal solutions before making a final decision. Aiming to help decision makers even if they are non-experts in optimization algorithms, this study proposes a new and simple multimodal optimization (MMO) algorithm called the gravitational particle swarm algorithm (GPSA). Our GPSA is developed based on the concept of “particle clustering in the absence of clustering procedures”. Specifically, it simply replaces the global feedback term in classical particle swarm optimization (PSO) with an inverse-square gravitational force term between the particles. The gravitational force mutually attracts and repels the particles, enabling them to autonomously and dynamically generate sub-swarms in the absence of algorithmic clustering procedures. Most of the sub-swarms gather at the nearby global optima, but a small number of particles reach the distant optima. The niching behavior of our GPSA was tested first on simple MMO problems, and then on twenty MMO benchmark functions. The performance indices (peak ratio and success rate) of our GPSA were compared with those of existing niching PSOs (ring-topology PSO and fitness Euclidean-distance ratio PSO). The basic performance of our GPSA was comparable to that of the existing methods. Furthermore, an improved GPSA with a dynamic parameter delivered significantly superior results to the existing methods on at least 60% of the tested benchmark functions.


Introduction
Multimodal optimization (MMO) algorithms [1] (also known as niching methods or techniques) can locate multiple global optima in a single run, which is essential for solving many scientific and engineering optimization problems, e.g., the Toyota paradox [2], motor design [3], clustering validity functions [4], network modeling [5], truss-structure design [6], overlay networks [7], multi-robot cooperation [8], wireless sensor networks [9], object detection [10], and honeycomb core design [11]. Compared with unimodal optimization (UMO) algorithms, which locate just a single global optimum, MMO algorithms have several benefits in some real-world problems [1] in which some factors, e.g., the degree of difficulty in manufacturing, can be difficult to model mathematically. Specifically, having multiple solutions of similar quality gives a decision maker more options for consideration, accounting for factors that are not captured in the mathematical model. Finding multiple solutions may also help to reveal hidden properties of the problem.

The remainder of this paper is organized as follows. Section 2 discusses MMO problems and the drawbacks of the existing methods [22-29, 31, 32]. Section 3 introduces our GPSA, describes its search mechanism, and demonstrates its niching capability on one- and two-dimensional functions. Section 4 evaluates the performance of our GPSA on the CEC 2013 niching test problems, and confirms the comparable performances of our GPSA and the existing one-stage methods. In Section 5, our GPSA is improved by assigning a dynamic parameter. The superiority of the improved version over the existing methods is demonstrated in this section. Section 6 concludes the paper.

MMO problems
Consider an optimization problem of the form

maximize f(x), x ∈ R^d, (1)

where x is a d-dimensional vector and f : R^d → R is a real-valued function. This study focuses on the case where multiple global optima satisfy (1), i.e.,

f(x*_k) = max_{x ∈ R^d} f(x), k = 1, …, N_go, (2)

where x*_k is the kth global optimum and N_go is the number of these global optima. Such problems are said to be multimodal, namely, they are MMO problems [1]. Conversely, problems with a single global optimum (N_go = 1) are known as UMO problems.
Eq (1) can be solved by particle-swarm-based approaches by considering the motion of a swarm of N candidate solutions, X(t) := {x_1(t), x_2(t), …, x_N(t)}, x_i(t) ∈ R^d, where t is discrete time. The candidate solutions (called particles) explore the d-dimensional domain R^d, seeking either multiple global optima (in MMO problems) or a unique optimum (in UMO problems). The performance of such approaches therefore depends on the swarm movements.

Classical PSO
PSO was originally proposed in [19] and [20], and numerous improved variants have since been proposed. In a typical classical PSO [21], the motion of the ith particle x_i(t) is recursively described as [37]:

v_i(t + 1) = ω v_i(t) + c_1 r_1i(t) ⊙ (pb_i(t) − x_i(t)) + c_2 r_2i(t) ⊙ a_i(t), (4a)
x_i(t + 1) = x_i(t) + v_i(t + 1), (4b)

with

a_i(t) = gb(t) − x_i(t), (5)

where ⊙ denotes component-wise multiplication, v_i(t) is the velocity of the ith particle, and r_1i(t) and r_2i(t) are independent white random-process vectors whose components are uniformly distributed over [0, 1]. The personal best, denoted by pb_i(t) ∈ R^d, is the highest-cost position found by each particle thus far (up to time t) among x_i(0), …, x_i(t). The global best gb(t) ∈ R^d is the best solution found by any particle up to time t. (ω, c_1, c_2) ∈ R^3 are tuning parameters. Eqs (4) and (5) imply that only a single solution is found, as the global feedback term (5) drives all particles toward a unique limit. This problem has already been demonstrated by [25].
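The classical update (4)–(5) can be sketched as a single vectorized step; the array shapes and the random generator handling here are illustrative, not the paper's reference implementation.

```python
import numpy as np

def pso_step(x, v, pb, gb, omega=0.729, c1=1.49445, c2=1.49445, rng=None):
    """One classical PSO update, Eqs. (4)-(5): each velocity is pulled toward
    the particle's personal best pb_i and the swarm's global best gb."""
    rng = np.random.default_rng() if rng is None else rng
    r1 = rng.random(x.shape)   # component-wise uniform over [0, 1)
    r2 = rng.random(x.shape)
    a = gb - x                                         # global feedback term (5)
    v = omega * v + c1 * r1 * (pb - x) + c2 * r2 * a   # velocity update (4a)
    x = x + v                                          # position update (4b)
    return x, v
```

Because every particle shares the same attractor gb, iterating this step contracts the swarm toward a single point, which is exactly the limitation discussed above.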

Two-stage MMO algorithms
To overcome the problem of classical PSO, researchers have proposed two extended PSO methods: the niching migratory multi-swarm optimizer (NMMSO) [26] and multi-charged particle swarm optimization (mCPSO) [31]. These methods, called two-stage niching methods [28], separate the particles into sub-swarms by a clustering algorithm in the first stage, and perform sub-swarming searches of the optimum by the classical PSO in the second stage. Alternative two-stage methods, which utilize an evolutionary strategy in the second stage, include covariance matrix self-adaptation with repelling subpopulations (RS-CMSA) [27], the Hill-Valley evolutionary algorithm (HillVallEA) [28], and its improved variant (HillVallEA19) [29]. These two-stage methods also utilize additional sub-procedures, namely, a restart scheme in NMMSO, mCPSO, RS-CMSA, and HillVallEA, and a taboo archive in RS-CMSA.
Unfortunately, the complexity of these two-stage methods is daunting to engineers wishing to implement search algorithms in their production items. Although the source codes of these methods are available, they might not be directly implementable in the production items of a firm. In such situations, engineers should understand and implement the algorithms by themselves, as reported in [30].

One-stage MMO algorithms
Simpler niching methods than the two-stage methods are also available. Some of these algorithms are based on PSO; specifically, the ring-topology PSO (RPSO) [25], fitness Euclidean-distance ratio PSO (FERPSO) [22], locally informed particle swarm (LIPS) [24], and species-based PSO (SPSO) [23]. Another niching method, called the niching gravitational search algorithm (NGSA) [32], uses gravitational forces. In the present study, these methods are called one-stage methods because they include no clustering algorithms. Table 1 summarizes the characteristics (computational complexity, number of tuning parameters, and algorithm type) of these one-stage methods.
Focusing on the computational complexity, these existing methods can be broadly classified into two groups: RPSO, with complexity O(N), the same as the classical PSO; and FERPSO, LIPS, SPSO, and NGSA, with complexity O(N² + α), where α = N or N log N and N is the number of particles (Table 1; the first four methods are based on PSO, whereas NGSA uses gravitational force; https://doi.org/10.1371/journal.pone.0248470.t001). In the method of the former group (RPSO), the complexity is equivalent to that of the classical PSO (4). The methods of the latter group, which calculate the Euclidean norm between particles, require a complexity of at least O(N²). In addition, FERPSO adds a complexity of O(N) for particle selection, whereas LIPS, SPSO, and NGSA add a complexity of O(N log N) for particle sorting.
With the exception of RPSO and FERPSO, the number of tuning parameters is one or two higher in these methods than in the classical PSO. The additional parameters are required for sorting the particles.
The comparison confirms RPSO as the simplest niching method, but RPSO is known to deliver poorer performance than the other methods [25].
The above discussion highlights the drawbacks of the existing high-performance niching methods, namely, the difficulties in understanding and implementing two-stage methods, and the high computational complexity and large number of tuning parameters in one-stage methods.

GPSA
To resolve the above drawbacks, the present study proposes a new and simple niching method called GPSA. The basic idea is that, if the particles in PSO attract each other, they can autonomously organize into sub-swarms and search for the multiple global optima without any additional procedures. Specifically, the classical global feedback term (5) is replaced with a term that introduces inverse-square gravitational forces between the particles, i.e.,

a_i(t) = Σ_{k=1, k≠i}^{N} u_ki(t) / ||d_ki(t)||², (6)

where ||·|| denotes the Euclidean norm, d_ki(t) = x_k(t) − x_i(t) is the displacement vector between the positions of the ith and kth particles, and u_ki(t) = d_ki(t)/||d_ki(t)|| is the normalized vector of d_ki(t). Algorithm 1 shows the pseudocode of our GPSA. The novelty of our GPSA is summarized below.
• Our method is a purely dynamical one-stage method, i.e., the particles are updated only by a dynamical system with no additional procedures. In contrast, the existing two-stage methods require a clustering algorithm, a restart scheme, or a taboo archive [26][27][28][29]31].
• Unlike the existing one-stage methods [22-24, 32], our method requires no algorithmic procedure for selecting the social or nearest best(s).

Algorithm 1 GPSA

Initialize the positions and velocities of the N particles, and set the personal bests via pb_i(t) = x_i(t)
while t < t_max do
  for i = 1, …, N do
    Update the particle's position and velocity via (4) with (6)
    if f(x_i(t + 1)) > f(pb_i(t)) then  ▷ Update personal best
      pb_i(t + 1) = x_i(t + 1)
    else
      pb_i(t + 1) = pb_i(t)

Therefore, our GPSA has the advantage of simplicity: it can be easily understood and implemented by non-experts in optimization algorithms. Fig 1 compares the characteristics (computational complexity and number of tuning parameters) of our GPSA with those of the existing one-stage methods. As evidenced in the figure, our GPSA has the same number of tuning parameters (ω, c_1, c_2) ∈ R^3 as RPSO and FERPSO, and fewer parameters than the other one-stage methods. In addition, the complexity of our GPSA is O(N²), obtained by summing the complexity O(N(N − 1)) of (6) and O(N) of (4). This complexity is higher than that of the O(N) method, but lower than those of the O(N² + α) methods, where α = N or N log N. Therefore, in terms of the number of tuning parameters and complexity, our GPSA is simpler than these existing one-stage methods except RPSO.
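Algorithm 1 can be sketched in Python as follows. This is a minimal sketch rather than the authors' reference implementation: the objective `f`, the domain bounds `lo`/`hi`, and the `eps` safeguard against division by zero are illustrative assumptions.

```python
import numpy as np

def gravity(X, eps=1e-12):
    """Inverse-square gravitational term (6): for particle i, the sum over
    k != i of u_ki / ||d_ki||^2, with d_ki = x_k - x_i and u_ki = d_ki/||d_ki||.
    Since u_ki/||d_ki||^2 = d_ki/||d_ki||^3, we divide displacements by r^3."""
    D = X[None, :, :] - X[:, None, :]        # D[i, k] = x_k - x_i
    r = np.linalg.norm(D, axis=2)            # pairwise distances, (N, N)
    np.fill_diagonal(r, np.inf)              # no self-interaction
    return (D / (r[:, :, None] ** 3 + eps)).sum(axis=1)

def gpsa(f, dim, n=50, t_max=200, omega=0.729, c1=1.49445, c2=0.05,
         lo=-5.0, hi=5.0, seed=0):
    """Minimal GPSA (Algorithm 1) for maximizing f over [lo, hi]^dim."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(lo, hi, (n, dim))
    V = np.zeros_like(X)
    pb, pb_val = X.copy(), np.array([f(x) for x in X])
    for _ in range(t_max):
        r1, r2 = rng.random(X.shape), rng.random(X.shape)
        V = omega * V + c1 * r1 * (pb - X) + c2 * r2 * gravity(X)  # (4a) with (6)
        X = X + V                                                  # (4b)
        val = np.array([f(x) for x in X])
        better = val > pb_val                 # maximization, as in the paper
        pb[better], pb_val[better] = X[better], val[better]
    return pb, pb_val
```

On a multimodal objective, distinct clusters of personal bests emerge without any clustering sub-procedure, which is the niching behavior examined in the following examples.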
Although our GPSA nominally ranks below RPSO in Fig 1 owing to its higher computational complexity, the actual performances of GPSA and RPSO are comparable. Furthermore, after minor improvements, our GPSA significantly outperforms RPSO. This performance will be demonstrated after the following examples.

One-dimensional examples
Our proposed GPSA dynamically organizes the niching behavior of the sub-swarms. This mechanism is clarified through illustrative examples in the present and following sub-sections. Fig 2 shows the search motion produced by a two-particle GPSA solving a one-dimensional UMO problem, f(x) = −x² (an inverted sphere). Here the GPSA parameters were set to ω = 0.729 and c_1 = 1.49445. These values are known to prevent particle divergence in the classical PSO [38], and are also used in the existing niching PSOs [22,25]. The newly introduced parameter c_2 was set to 0.05.
During the initial phase (0 ≤ t ≤ 23), the particles were mutually attracted by gravity. At some close enough distance (t = 23), they appeared to repel one another. Such particle repulsion behavior, called a repulsive flip here, is considered as a gravity-induced motion. Roughly speaking, ignoring the randomness and setting ω = c_1 = 0 in (4a), the one-dimensional gravitational force (6) becomes c_2/d(t)², where d(t) is the distance between the two particles, so the positions of the two close particles are updated such that

d(t + 1) = |d(t)³ − 2c_2| / d(t)². (13)

If d(t) = ∛c_2, the particles are exactly swapped, i.e., d(t + 1) = d(t), as shown in Fig 3(A). In addition, when ∛c_2 < d(t) < ∛(2c_2), there is what we call an attractive flip (Fig 3(B)), because d(t + 1) < d(t). In contrast,

d(t) < ∛c_2 (14)

generates a repulsive flip because d(t + 1) > ∛c_2 > d(t) (Fig 3(C)). From (13), we established that d(t + 1) = |d(t)³ − 2c_2| / d(t)² → ∞ as d(t) → 0. In other words, the repulsive flip strengthens as d(t) approaches 0.
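The simplified two-particle distance dynamics of Eq. (13) can be checked numerically; the sketch below iterates the map under the paper's simplification ω = c_1 = 0 with randomness ignored.

```python
def next_distance(d, c2=0.05):
    """Simplified two-particle distance update, Eq. (13): with omega = c1 = 0
    and randomness ignored, each particle moves c2/d^2 toward the other,
    so d(t+1) = |d(t)^3 - 2*c2| / d(t)^2."""
    return abs(d ** 3 - 2 * c2) / d ** 2

c2 = 0.05
d_star = c2 ** (1 / 3)  # flip threshold: the cube root of c2

# exactly at the threshold the particles are swapped: d(t+1) = d(t)
swap = next_distance(d_star, c2)

# below the threshold the flip is repulsive: the particles end up farther apart
repulsed = next_distance(0.5 * d_star, c2)
print(swap, repulsed > d_star)  # swap ~= d_star, and True
```

Plugging in d = 0.5∛c_2 gives d(t+1) = 7.5∛c_2, illustrating how sharply the repulsion grows as the particles approach each other.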
After the repulsive flip at t = 23 (Fig 2(A)), each particle underwent different damped oscillations around pb_i(t), because the second term in (4a) imposes a linear restoring force. After the second repulsive flip at t = 40, repeated mutual repulsive flips of the particles were followed by individual damped oscillations (Fig 2(B)). During this time, the particles' personal bests, pb_1(t) and pb_2(t), rapidly converged to the problem optimum x = 0 (Fig 2(C)), confirming that our GPSA can solve the UMO problem.
The search motions of our GPSA also effectively solve MMO problems, as illustrated by the one-dimensional example in Fig 4. Initially, our GPSA autonomously and dynamically generated multiple sub-swarms near two of the optima (Fig 4(A)). A repulsive flip between x_5(t) and x_6(t) at t = 11 then repelled x_6(t), which underwent a large-amplitude damped oscillation. This repulsion-and-oscillation process helped the personal best pb_6(t) to find the distant optimum x = 0 (Fig 4(B)).

Two-dimensional example
Our proposed GPSA can also solve two-dimensional MMO problems, demonstrated here on the two-dimensional Himmelblau function [33]. Fig 5 shows snapshots of a GPSA simulation run, starting from particles that were randomly placed in the right half-plane (Fig 5(A)). This initial spatial bias was designed to challenge the search for the global optima in the left half-plane.
As clarified in Fig 5, the particles again autonomously and dynamically formed multiple sub-swarms. While most of the particles remained in the right half-plane, some escaped to the left half-plane through the repulsive flips. In this run, the first repulsive flip occurred immediately (at t = 1) between x_10(t) and x_14(t), causing pb_14(t) to approach one of the distant global optima by t = 10 (Fig 5(B)). The personal bests pb_23(t) and pb_11(t) similarly reached distant optima in later iterations. Because our GPSA dynamics include random fluctuations, not all global optima are guaranteed to be found within a finite amount of time (see Fig 6). Nevertheless, as demonstrated in the following sections, the performance of our GPSA was comparable to that of the existing methods, and that of our GPSA with minor improvements was considerably superior.

Experimental setup
This section evaluates our GPSA on the twenty benchmark functions of the CEC 2013 niching-method competition [33]. Although these benchmark functions were proposed in 2013, they remain the standard ones, having been utilized by the CEC and GECCO niching competitions held between 2013 and 2020. The benchmark functions are listed in Table 2. The terms d and N_go were introduced in (1) and (2). Among the functions are the Vincent (f_7, f_9) and the Modified Rastrigin (f_10) functions. These functions over domain D are formally defined in [33]. Outside the domain D, the function value was set to the penalty value f_i(x) = −10^10 for all x ∉ D, i = 1, …, 20, because our GPSA's particles can exceed the domain D due to the repulsive flip.
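The penalty handling can be sketched as a thin wrapper around a benchmark function; the bounds `lo`/`hi` stand in for the domain D of the specific benchmark and are illustrative.

```python
import numpy as np

def penalized(f, lo, hi, penalty=-1e10):
    """Wrap an objective f so that any point outside the box domain
    D = [lo, hi]^d receives the penalty value -10^10, since GPSA particles
    can leave D via repulsive flips."""
    def g(x):
        x = np.asarray(x, dtype=float)
        if np.any(x < lo) or np.any(x > hi):
            return penalty
        return f(x)
    return g

g = penalized(lambda x: -np.sum(x ** 2), lo=-5.0, hi=5.0)
print(g([0.0, 0.0]))  # inside D: ordinary function value, 0.0
print(g([6.0, 0.0]))  # outside D: penalty, -1e10
```

Because the penalty is far below any in-domain value of the listed benchmarks, a personal best is never updated to an out-of-domain point.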
Following [33], the number of function evaluations in a single run was set to N fes as listed in Table 2. The total number of particles was set to N = 500 for f 8 and f 9 , which have many global optima, and to N = 50 for the remaining functions, as adopted in the existing one-stage methods, RPSO [25] and FERPSO [22]. Each of these particles was updated until t max = N fes /N. The number of runs was set to N run = 100, at least double the number of runs in previous studies [22,24,25,33].

Performance metrics
The performance was evaluated by two metrics used in [33]. The first was the peak ratio (PR), which measures the average fraction of the global optima found per run and is given by

PR = Σ_{k=1}^{N_run} n_go^k / (N_go · N_run), (15)

where n_go^k is the number of global optima found during the kth run. The n_go^k was determined by the standard method provided in [33], which considers a new global optimum to be detected when pb_i(t) in (4) satisfies the following two conditions. First, pb_i(t) should be farther than the distance specified in [33] from any of the already discovered global optima. Second, the difference between the fitness value of pb_i(t) and that of the global optima provided in [33] should be less than the accuracy level ε. Following [33], the accuracy levels in this study were varied as ε ∈ {10⁻¹, 10⁻², 10⁻³, 10⁻⁴, 10⁻⁵}.
The second metric was the success rate (SR), which measures the probability of finding all global optima during the same run. The measure is given by

SR = N_succ / N_run, (16)

where N_succ is the number of runs in which all the global optima are found. The benchmark functions and performance evaluation were implemented with the code provided by the CEC 2013 competition organizers, obtainable from https://github.com/mikeagn/CEC2013.
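The two metrics (15) and (16) reduce to a few lines of code once the per-run counts n_go^k are available; the run data in the usage example is illustrative.

```python
def peak_ratio(n_found, n_go):
    """PR, Eq. (15): average fraction of the N_go global optima found,
    where n_found[k] is the number of optima located in run k."""
    return sum(n_found) / (n_go * len(n_found))

def success_rate(n_found, n_go):
    """SR, Eq. (16): fraction of runs in which all N_go optima were found."""
    return sum(1 for n in n_found if n == n_go) / len(n_found)

# e.g., 4 runs on a hypothetical problem with 3 global optima
runs = [3, 2, 3, 1]
print(peak_ratio(runs, 3))    # 9/12 = 0.75
print(success_rate(runs, 3))  # 2/4 = 0.5
```

Note that SR is strictly harder to score on than PR: a run contributes to N_succ only when every optimum is found.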

Effects of the parameter c 2
The performance sensitivity of our GPSA to the parameter c_2 was investigated while the other parameters were fixed at ω = 0.729 and c_1 = 1.49445. Table 3 lists the PR-values obtained across all benchmark functions and accuracy levels for c_2 = 10⁻², 10⁻⁴, and 10⁻⁶; D denotes the function domain listed in Table 2. The best values for each function and accuracy level are highlighted in bold. The bottom section of the table summarizes the averaged PR, obtained by summing the PR-values of all functions and accuracy levels and dividing by the total number of values.
At the accuracy level ε = 10⁻⁵, the PR of the functions with small domains (f_1, f_2, f_3, f_5, and f_10) increased with decreasing c_2, indicating that the exploitation performance of our GPSA improved as c_2 decreased. Note that as the accuracy level ε decreases, a stricter requirement is imposed on the exploitation performance [39].
In contrast, at the accuracy level ε = 10⁻¹, the PR-values of the large-domain functions (f_4, f_6, f_7, f_8, f_9, and f_11, …, f_16) increased with c_2, indicating that the exploration performance improved as c_2 increased.
The mechanism of these improvements can be explained by the effect of the gravitational force described in the one-dimensional example. Increasing c_2 increases the inter-particle distance at which a repulsive flip is generated (the threshold ∛c_2 in (14)), elevating the frequency of repulsive flips and reducing that of the attractive motions. These results clearly demonstrate that the balance between the exploitation and exploration performance of our GPSA can be controlled by c_2; specifically, a large c_2 is suitable for global searching over a large domain, whereas a small c_2 favors local searching over a small domain.

Performance comparison with the existing one-stage methods
Next, the performance of our GPSA was compared with those of the existing methods, namely RPSO and FERPSO. These methods were selected for the following reasons. First, they are one-stage methods like our GPSA. Second, they replace the global feedback term (5) of the classical PSO with another term, similarly to our GPSA. Finally, they have the same number of tuning parameters as our GPSA (see Fig 1).
In RPSO, (5) is replaced by

a_i(t) = lb_i(t) − x_i(t), (17)

where lb_i(t) ∈ R^d is the local best of the ith particle at iteration t. The local best is the highest-cost position found by the particle or one of its ring neighbors, i.e., the best among the personal bests pb_{i−1}(t), pb_i(t), and pb_{i+1}(t). In FERPSO, (5) is replaced with

a_i(t) = nb_i(t) − x_i(t), (18)

where nb_i(t) ∈ R^d again represents a neighborhood personal best, but selected by maximizing the fitness Euclidean-distance ratio (FER):

FER_(j,i) = α · (f(pb_j(t)) − f(pb_i(t))) / ||pb_j(t) − pb_i(t)||, (19)

where α = ||s||/(f(gb(t)) − f(x_w(t))) is a scaling factor, ||s|| is the size of the search space [22], x_w(t) is the least-fit particle in the current population, and pb_i(t) and pb_j(t) are the personal bests of the ith and jth particles, respectively. In this study, the tuning parameters of RPSO and FERPSO were set to ω = 0.729 and c_1 = c_2 = 1.49445, consistent with [22,25]. The other parameters and experimental conditions were consistent with those in the previous section. Table 4 lists the SR-values in (16) obtained by our GPSA and the compared methods. The best and averaged values are indicated and summarized, as described in Table 3.
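The FER selection in (19) can be sketched as follows. This is a simplified sketch: f(gb(t)) is approximated by the best personal-best value, and the caller supplies `s_norm` (||s||) and `worst_val` (f(x_w(t))) directly.

```python
import numpy as np

def fer_best(pb, pb_val, s_norm, worst_val):
    """For each particle i, select the neighborhood best nb_i of Eq. (18)
    as the personal best pb_j that maximizes the FER of Eq. (19)."""
    alpha = s_norm / (pb_val.max() - worst_val)   # scaling factor alpha
    nb = np.empty_like(pb)
    for i in range(len(pb)):
        dist = np.linalg.norm(pb - pb[i], axis=1)
        dist[i] = np.inf                          # exclude j = i
        fer = alpha * (pb_val - pb_val[i]) / dist
        nb[i] = pb[np.argmax(fer)]
    return nb
```

Dividing the fitness gain by the distance makes each particle prefer a fitter personal best that is also nearby, which is what steers FERPSO's sub-swarms toward different peaks; note the O(N²) distance computations discussed in Table 1.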
On the small-domain function f_10, our GPSA obtained remarkably higher SR-values than the compared methods. In particular, for ε = 10⁻¹, 10⁻², 10⁻³, and 10⁻⁴ and c_2 = 10⁻⁶, the SR-value of our GPSA exceeded 0.4, versus SR = 0 in the compared methods. On the other small-domain functions f_2 and f_3, the SR-values of our GPSA with c_2 = 10⁻⁶ either outperformed or equaled those of the compared methods. These results show that when c_2 is relatively small, the exploitation performance of our GPSA is comparable to those of the other methods.
On the large-domain functions f_4 and f_5 with ε = 10⁻¹ and 10⁻² and c_2 = 10⁻², our GPSA performed at least as well as the compared methods. These results indicate that when c_2 is relatively large, the exploration performance of our GPSA is comparable to those of the other methods.
However, in terms of the averaged SR values, the other methods outperformed our GPSA. This degradation was primarily attributed to the dilemma of choosing between the exploitation and exploration performance in our GPSA.

Improvement of GPSA with a dynamic c 2
To resolve the exploration-exploitation dilemma of our GPSA, this section introduces a dynamic c 2 that further improves its performance.

Dynamic c 2 for our GPSA
(Table 4: the best values for each function and accuracy level among RPSO, FERPSO, and our GPSA with c_2 = 10⁻², 10⁻⁴, and 10⁻⁶ are highlighted in bold. https://doi.org/10.1371/journal.pone.0248470.t004)

Given our observations in the previous section, it is inferred that our GPSA's iteration should initially start with a large c_2 to enhance the exploration performance, and that c_2 should decrease as the iteration increments to improve the exploitation performance. Therefore, the present study proposes a method with a dynamic c_2, in which c_2 is provided as a nonlinear function of the iteration t, based on the dynamic ω used successfully in the classical PSOs [21,40,41], as follows:

c_2(t) = c_2^ini (1 − t/t_max)^n, (20)

where c_2^ini is the initial c_2 at the beginning of a run and n is the nonlinear modulation index. In this study, these parameters were set to c_2^ini = 10² and n = 20 to cover the c_2 range analyzed in the previous section. Fig 7 shows the dynamic c_2 as a function of t for t_max = 1000, where the left graph is a linear plot and the right a semi-log plot. The c_2 gradually decreases to zero as t increases, covering the c_2 values used in Table 3. In this study, this modified GPSA is called a dynamic GPSA (DGPSA).
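The dynamic schedule (20) can be sketched as below; the closed form c_2(t) = c_2^ini (1 − t/t_max)^n is assumed here from the stated behavior (start at c_2^ini, nonlinear decay to zero with modulation index n).

```python
def dynamic_c2(t, t_max, c2_ini=1e2, n=20):
    """Dynamic c2 schedule, Eq. (20) (form assumed): starts at c2_ini to
    favor exploration and decays nonlinearly to zero to favor exploitation."""
    return c2_ini * (1.0 - t / t_max) ** n

t_max = 1000
print(dynamic_c2(0, t_max))        # 100.0 : large c2, global search phase
print(dynamic_c2(t_max, t_max))    # 0.0   : c2 vanishes at the end of the run
```

With c_2^ini = 10² and n = 20, the schedule sweeps through the static values 10⁻², 10⁻⁴, and 10⁻⁶ examined in Table 3 during a single run, rather than committing to one of them.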
Our DGPSA is more complicated than our GPSA; however, it is still simpler than the existing one-stage methods in Fig 1 other than RPSO (i.e., FERPSO, LIPS, SPSO, and NGSA). The reason is that the computational complexity of our DGPSA, O(N² + 1), obtained by summing the complexity O(N²) of the original GPSA and O(1) of (20), is lower than those of FERPSO, LIPS, SPSO, and NGSA. In addition, the number of tuning parameters of our DGPSA, (ω, c_1, c_2^ini, n) ∈ R⁴, is fewer than or equal to those of LIPS, SPSO, and NGSA.

Table 5 shows the PR-values obtained by our DGPSA and the compared methods. First, these values were compared with our GPSA results in Table 3. On f_6, f_8, and f_12, …, f_20, our DGPSA obtained higher PR-values at all accuracy levels than our GPSA. Even on f_1, f_2, f_4, f_5, f_7, f_9, f_10, and f_11, our DGPSA had higher PR-values at the strictest accuracy level ε = 10⁻⁵. Furthermore, the averaged PR-value increased by more than 2.3 times. These results clearly demonstrate that our DGPSA successfully resolves the dilemma found in our GPSA.

Performance of our DGPSA
Furthermore, our DGPSA obtained higher PR-values than the compared methods on f_2, f_4, f_6, f_7, f_9, …, f_16, and f_19. In particular, on f_6 and f_7, the PR-values of our DGPSA were at least 1.9 times higher than those of the compared methods. The averaged PR-values also indicated the superiority of our DGPSA. Table 6 shows the SR-values obtained by our DGPSA, all of which were superior or equal to those of the compared methods shown in Table 4. The results above show that our DGPSA outperformed the compared methods in terms of both PR and SR.

Statistical tests
Statistical significance tests were not conducted on the results shown above. Here, the number of functions for which our DGPSA significantly outperformed the others is counted, as was done in [39]. Consider the sets

N_i(ε) := {n_go^k(f_i, ε) | k = 1, …, N_run},

where n_go^k(f_i, ε) is the n_go^k in (15) obtained by our DGPSA on the function f_i (i = 1, …, 20) at the accuracy level ε; the corresponding quantities of the compared method are denoted by n̄_go^k(f_i, ε) and N̄_i(ε). Then, to classify the f_i, the following sets were introduced:

F⁺(ε) := {f_i | T_i = 1 and m_i > m̄_i}, (23)
F⁻(ε) := {f_i | T_i = 1 and m_i < m̄_i}, (24)
F⁰(ε) := {f_i | T_i = 0},

where m_i and m̄_i indicate the medians of N_i(ε) and N̄_i(ε), respectively, and T_i becomes unity when there is a significant difference between m_i and m̄_i, and zero otherwise. This significant difference was determined by the Wilcoxon rank-sum test [42] with a significance level of 0.05, as used in [39]. F⁺(ε) and F⁻(ε) are the sets of f_i on which our DGPSA was significantly superior and inferior to the compared method, respectively, and F⁰(ε) is the set of f_i for which there was no significant difference between our DGPSA and the compared method. Table 7 shows the resulting numbers of functions #F⁺(ε), #F⁻(ε), and #F⁰(ε), where #A denotes the number of elements of a set A, and Sum(ε) indicates the sum of #F⁺(ε), #F⁻(ε), and #F⁰(ε). The values in parentheses are the ratios of each number to Sum(ε). Focusing on the comparison between RPSO and our DGPSA in the top of Table 7, #F⁺(ε) was equal to 12 for all ε, giving a composition ratio of 60% (= #F⁺(ε)/Sum(ε) × 100%). From the definition (23), these results show that our DGPSA was significantly superior to RPSO on 60% of the benchmark functions. On the other hand, #F⁻(ε) was equal to or less than 4, with composition ratios of 20% or less. From the definition (24), these results show that our DGPSA was significantly inferior to RPSO on at most 20% of the benchmark functions.
Next, focusing on the comparison between FERPSO and our DGPSA in the bottom of the table, #F⁺(ε) was equal to 16 (a composition ratio of 80%), while #F⁻(ε) was equal to 1 (a composition ratio of 5%). This indicates that our DGPSA was significantly superior and inferior to FERPSO on 80% and 5% of the benchmark functions, respectively. Therefore, these results clearly demonstrate that our DGPSA was significantly superior to the compared one-stage methods on at least 60% of the benchmark functions.

(Runtime table note: statistically significant differences with p-value < 10⁻⁴ between our DGPSA and the compared method, tested by the Wilcoxon rank-sum test [42], are marked. All algorithms were implemented in Python and executed on the same physical computer, a Lenovo ThinkPad T480s with an Intel Core i5 processor and 24 GB of memory.)
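The classification into F⁺, F⁻, and F⁰ can be sketched in pure Python as follows. The normal approximation to the rank-sum null distribution is used here instead of exact tables, and the run data in the usage example is illustrative.

```python
from math import sqrt, erf
from statistics import median

def rank_sum_test(x, y, alpha=0.05):
    """Two-sided Wilcoxon rank-sum test via the normal approximation,
    with average ranks for ties. Returns (significant, p_value)."""
    pooled = sorted(x + y)
    rank = {}                      # value -> average rank over its tie group
    i = 0
    while i < len(pooled):
        j = i
        while j < len(pooled) and pooled[j] == pooled[i]:
            j += 1
        for k in range(i, j):
            rank[pooled[k]] = (i + 1 + j) / 2    # average of ranks i+1 .. j
        i = j
    w = sum(rank[v] for v in x)    # rank sum of sample x
    n1, n2 = len(x), len(y)
    mean = n1 * (n1 + n2 + 1) / 2
    sd = sqrt(n1 * n2 * (n1 + n2 + 1) / 12)
    z = (w - mean) / sd
    p = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))   # two-sided p-value
    return p < alpha, p

def classify(our_runs, their_runs, alpha=0.05):
    """Assign one benchmark function to F+, F-, or F0: a significant
    rank-sum difference is resolved by comparing the medians m_i and m̄_i."""
    sig, _ = rank_sum_test(our_runs, their_runs, alpha)
    if not sig:
        return "F0"
    return "F+" if median(our_runs) > median(their_runs) else "F-"
```

Running `classify` once per function f_i and accuracy level ε, then counting the labels, reproduces the #F⁺(ε), #F⁻(ε), and #F⁰(ε) tallies of Table 7.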

Runtime comparison
Focusing on the runtime comparison between our DGPSA and FERPSO, the runtime of our DGPSA was statistically significantly shorter than that of FERPSO. These results can be explained by the computational complexities described in Fig 1. Focusing on the comparison between our DGPSA and RPSO, our DGPSA was considerably slower than RPSO on the one-dimensional function f_1. On the other hand, on the high-dimensional functions f_16, f_18, and f_20, our DGPSA showed statistically significantly shorter runtimes than RPSO.
These results demonstrate that although our DGPSA has a higher computational complexity than RPSO (see Fig 1), its execution time was significantly shorter than that of RPSO on the high-dimensional functions.

Performance comparison with the existing two-stage methods
The performance of our DGPSA was also compared with those of the existing two-stage methods, namely RS-CMSA and HillVallEA19, the winners of the competitions on niching methods held at GECCO 2017 and 2019, respectively. Table 8 compares the PR-values of our DGPSA for ε = 10⁻⁵ (reproduced from Table 5) with those of the two-stage methods in [29,43], where the benchmark problems and simulation conditions were consistent with those of this study. Table 8 also shows the statistical test results between our DGPSA and the existing two-stage methods, conducted similarly to the previous section (Statistical tests). The original results of RS-CMSA and HillVallEA19 were obtained from the repository provided by the competition organizers: https://github.com/mikeagn/CEC2013. The symbol (0) indicates that there was no statistically significant difference between our DGPSA and the compared method; the symbol (−) indicates that our DGPSA was significantly inferior to the compared method at a significance level of 0.05.
Our DGPSA performed comparably to the existing two-stage methods on functions f_1, f_2, f_3, f_4, and f_5, with no statistical difference. Although the PR-values on the other functions were significantly lower for our DGPSA than for the two-stage methods, our one-stage DGPSA is suitable for non-experts in optimization algorithms who wish to implement an MMO algorithm in their production items (as described in the GPSA section).

Conclusion

The findings of this study are summarized as follows.

• Although our DGPSA is simpler than the existing one-stage methods, it outperformed the compared one-stage methods in both PR and SR.
• The well-known statistical test method, namely, the Wilcoxon rank-sum test, confirmed the superiority of our DGPSA over the compared one-stage methods on at least 60% of the benchmark functions.
• In runtime comparisons on high-dimensional functions, our DGPSA was significantly superior to the compared one-stage methods.
• Although our DGPSA was statistically inferior to the existing two-stage methods on high-dimensional functions, its simplicity enables its implementation as an MMO algorithm by non-experts.
Clearly, our proposed DGPSA resolves the shortcomings of the existing methods by virtue of its simple and purely dynamical algorithm, which outperforms the existing one-stage methods (FERPSO and RPSO). Therefore, we believe that the proposed DGPSA is a more appropriate algorithm than the existing methods for situations in which non-experts in optimization algorithms must understand and implement an MMO algorithm to solve real-world problems.
In the future, we plan to investigate the applicability of our GPSA to real-world optimization problems, including optimal structure design [6,11] and the training of neural networks. We will also consider ways of optimizing the model parameters and applying dynamic inertia weight.