Abstract
The firefly algorithm (FA) is a heuristic algorithm inspired by natural phenomena. The FA has attracted a lot of attention due to its effectiveness in dealing with various global optimization problems. However, it can easily fall into a local optimum or suffer from low accuracy when solving high-dimensional optimization problems. To improve the performance of the FA, this paper adds a self-adaptive logarithmic inertia weight to the updating formula of the FA and introduces a minimum attractiveness of a firefly, which greatly improves the convergence speed and balances the global exploration and local exploitation capabilities of the FA. Additionally, a step-size decreasing factor is introduced to dynamically adjust the random step-size term: when the search dimension is high, the random step size becomes very small. This strategy enables the FA to explore solutions more accurately. The improved FA (LWFA) was evaluated with ten benchmark test functions under different dimensions (D = 10, 30, and 100) and with standard IEEE CEC 2010 benchmark functions. Simulation results show that the performance of the improved FA is superior to that of the standard FA and other algorithms, i.e., particle swarm optimization, the cuckoo search algorithm, the flower pollination algorithm, the sine cosine algorithm, and other modified FAs. The LWFA also shows high performance and optimal efficiency on a number of optimization problems.
Citation: Li Y, Zhao Y, Shang Y, Liu J (2021) An improved firefly algorithm with dynamic self-adaptive adjustment. PLoS ONE 16(10): e0255951. https://doi.org/10.1371/journal.pone.0255951
Editor: Yogendra Arya, J.C. Bose University of Science and Technology, YMCA, INDIA
Received: May 25, 2021; Accepted: July 27, 2021; Published: October 7, 2021
Copyright: © 2021 Li et al. This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
Data Availability: All relevant data are within the manuscript.
Funding: This work is supported by the National Natural Science Foundation of China (No. 71601071) and the Science & Technology Department of Henan Province, China (Nos. 182102310886 and 162102110109). The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.
Competing interests: The authors have declared that no competing interests exist.
1 Introduction
Inspired by various biological systems in nature, many scholars have proposed effective methods that simulate natural evolution to solve complex optimization problems. One of the earliest was the genetic algorithm proposed by Professor Holland [1]. Researchers then turned to the foraging behaviour of groups of animals, such as the ant colony optimization (ACO) algorithm that simulates the behaviour of ants [2]. Kennedy and Eberhart proposed particle swarm optimization (PSO) based on bird predation behaviour [3]. In recent years, new heuristic algorithms have been proposed. To name a few, Yang and Deb proposed the cuckoo search (CS) algorithm based on the breeding and spawning behaviour of the cuckoo [4]; the bat algorithm (BA) was proposed based on echolocation behaviour in bats [5]; the whale optimization algorithm (WOA) was inspired by the hunting behaviour of humpback whales [6]; and there are also the grey wolf optimizer [7], the sine cosine algorithm (SCA) [8], the polar bear optimization algorithm (PBO) [9], etc. More swarm intelligence algorithms were created mainly because of the no free lunch (NFL) theorem [10], which states that a single swarm intelligence optimization algorithm cannot solve all optimization problems. The key point of the NFL theorem is that it is meaningless to assess whether an algorithm is good without a real problem; the performance of an algorithm must be verified on specific problems. Therefore, it is of great significance to study the performance of swarm intelligence algorithms in different application fields [11, 12].
The firefly algorithm (FA) was proposed in 2008 by Xin-She Yang, a Cambridge scholar. The algorithm is based on the luminous characteristics and attraction behaviour of individual fireflies [13]. Compared with other intelligent algorithms, the FA has the advantages of a simple model, a clear concept, few parameters to adjust, and strong searchability. Like other algorithms, however, it can easily fall into a local optimum, resulting in slow convergence speed and low convergence accuracy. As a result, many scholars have made improvements to the standard FA. Yang first introduced the Levy flight into the random part of the location updating formula of the FA and developed an FA with Levy flight characteristics [14]. Subsequently, Yang improved the quality of the FA by introducing chaos into the standard FA and increased its accuracy by dynamically adjusting its parameters [15]. Sharma introduced the inertia weight into the FA; this strategy can overcome the tendency to fall into local optima, though it converges slowly on some optimization problems [16]. Farahani and other scholars proposed a Gaussian-distribution FA with an adaptive step size, which improves the overall position of the firefly population through a Gaussian distribution [17]; this FA based on parameter adjustment outperforms PSO in solving dynamic optimization problems. Sh. M. Farahani and other scholars introduced an automatic learning machine into the standard FA to adjust the algorithm's parameters, so that the algorithm can adjust parameter values at any time according to the environment [18]. Adiland and other scholars improved the self-adaptation of the search mechanism and parameters of individual fireflies and embedded chaotic mapping to solve a mechanical design optimization problem [19]. Carbas also used the FA to solve a steel construction design problem [20].
Tahereh and other scholars used fuzzy coefficients to adjust the parameters of the FA and balanced the local and global search capabilities of the FA [21]. A number of optimization problems of test functions and engineering optimization problems have been verified for performance of the improved FA [22]. The FA is widely used in many fields, especially in computer and engineering fields, such as routine optimization [23, 24], robot path planning [25] and image processing [26–28]. Many scholars around the world have conducted in-depth studies on the theory and application of the FA and are continuously expanding its application fields.
To improve the performance of the FA, this paper proposes an improved FA based on a self-adaptive logarithmic inertia weight and a dynamic step-size adjustment factor (LWFA). The self-adaptive logarithmic inertia weight is introduced into the updating formula; this strategy effectively balances the exploration and exploitation capabilities and also improves the convergence speed of the algorithm. A step-size adjustment factor is also added to the LWFA to dynamically shrink the algorithm's random step size, preventing the FA from falling into a local optimum.
Section 2 discusses the standard FA. The proposed LWFA is introduced in Section 3. In Section 4, complexity and convergence analyses are conducted to assess the stability and validity of the LWFA. In Section 5, ten benchmark optimization test functions and the IEEE CEC2010 test functions are used to evaluate the performance of the proposed algorithm, and the experimental convergence curves are presented. Lastly, the work is summarized in Section 6.
2 Standard firefly algorithm
The firefly algorithm (FA) was proposed by a Cambridge scholar Xinshe Yang in 2008 [13]. The FA is a random search algorithm based on swarm intelligence that simulates the attraction mechanism between individual fireflies in nature. To idealize certain characteristics of fireflies when constructing mathematical models of FA, the following idealization criteria are used:
- Fireflies, both male and female, are attracted only by light intensity between groups regardless of gender;
- The attraction of fireflies to each other is proportional to the brightness of the light;
- The brightness of fireflies is related to the objective function value that is to be optimized.
2.1 Mathematical description and application
In the firefly algorithm, fireflies have a unique lighting mechanism and behaviour, and the light they emit can only be perceived by other individual fireflies within a certain range, for two reasons: the light intensity I is inversely proportional to the distance r from the light source, and the light is absorbed by the air. The basic principle of the standard FA is that, in a randomly distributed firefly population, fireflies with high brightness attract fireflies with low brightness toward them; each firefly approaches a firefly with higher absolute brightness in the solution space, updates its position, completes an iteration of position updates, and finds an optimal position to achieve optimization. The brightness of fireflies is related to the objective function value: fireflies with higher brightness occupy better positions and attract more fireflies, while the brightest firefly moves randomly because it is not attracted to any other firefly. A relationship between fireflies and the objective function value must therefore be established. The absolute brightness is expressed by the objective function value; that is, the absolute brightness Ii of a firefly i located at xi equals the objective function value at that location:

Ii = f(xi)    (1)
Supposing a brighter firefly i attracts a firefly j with lower brightness, the relative brightness between the two fireflies is

Iij(rij) = Ii·exp(−γ·rij^2)    (2)

where Ii is the absolute brightness of firefly i, γ is the light absorption coefficient (generally set as a constant), and rij is the Cartesian distance from firefly i to firefly j.
Supposing that the attraction between fireflies is proportional to their brightness, the attraction βij is

βij = β0·exp(−γ·rij^2)    (3)

where β0 is the largest attraction (at rij = 0), and generally β0 = 1.
The updating formula of the algorithm is as follows:

xi(t+1) = xi(t) + βij·(xj(t) − xi(t)) + α·δi    (4)

where t is the number of iterations, δi is a random number drawn from a uniform, Gaussian, or other distribution, and α is the coefficient of the random term.
The flow chart of the standard firefly algorithm (FA) is shown in Fig 1:
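For concreteness, Eqs (1)–(4) can be sketched as a single iteration of the standard FA. This is a minimal illustration, not the paper's implementation: the function name `fa_step`, the default parameter values, and the uniform random term are assumptions.

```python
import numpy as np

def fa_step(X, f, beta0=1.0, gamma=1.0, alpha=0.2, rng=None):
    """One iteration of the standard FA (Eqs 1-4): every firefly moves
    toward every brighter firefly; brightness is the objective value
    (minimization, so a lower f means a brighter firefly)."""
    rng = np.random.default_rng() if rng is None else rng
    N, D = X.shape
    I = np.array([f(x) for x in X])          # absolute brightness, Eq (1)
    X_new = X.copy()
    for i in range(N):
        for j in range(N):
            if I[j] < I[i]:                  # firefly j is brighter than i
                r2 = np.sum((X_new[i] - X[j]) ** 2)
                beta = beta0 * np.exp(-gamma * r2)           # Eq (3)
                X_new[i] += beta * (X[j] - X_new[i]) \
                    + alpha * (rng.random(D) - 0.5)          # Eq (4)
    return X_new
```

With `alpha = 0`, a dimmer firefly moves strictly toward a brighter one while the brightest firefly stays put, matching the behaviour described above.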
3 The proposed firefly algorithm
Yang applied the FA to a test function with singularities and achieved good optimization performance. The results show that the FA can effectively solve this kind of global optimization problem, and the FA was successfully applied to the global optimization problem of pressure piping design. However, the parameters of the standard FA are set in advance, which can lead to premature convergence, or to a failure to converge when the parameters are set improperly. Hence, the standard FA needs to be improved to achieve better optimization performance.
3.1 Parameter analysis of firefly algorithm
The standard FA updates positions based on the attraction between high-brightness fireflies and low-brightness fireflies. Since the visual range of fireflies is limited, low-brightness fireflies can only find brighter fireflies within their visual range. It should be noted that the light absorption coefficient γ is an important parameter affecting the degree of attraction between fireflies. When γ→0, the attraction β→β0; the attraction between fireflies does not change with distance, that is to say, a flashing firefly can be seen from anywhere. When γ→∞, β→0; the attraction between fireflies is almost zero and individual fireflies move randomly, that is to say, individual fireflies are myopic, as if flying in fog, and cannot see each other. The FA operates between these two extremes. When the light absorption coefficient takes a suitable value (usually 1.0), the distance between two fireflies is what determines the attraction between them: if the distance rij→∞ between two fireflies that attract each other, then the attraction β→0. Therefore, the basic conditions of the FA in the optimization process are as follows:
- The distance between attracted low-brightness fireflies and actively high-brightness fireflies should not be too large;
- There must be a difference in brightness between individual fireflies, with at least one firefly having a higher brightness to attract other fireflies for location updates. There is greater randomness in the distance between individual fireflies that are attracted to each other.
3.2 The improvement of firefly algorithm
3.2.1 The minimum attraction.
By analysing the parameters of the FA: when the light absorption coefficient γ is too large, or when γ is fixed and the optimization interval becomes too large, the attraction between individual fireflies easily approaches zero and individual fireflies lose their attraction to each other. To prevent the attraction from vanishing when the distance between individual fireflies is too great, this paper introduces a minimum attraction into the standard FA, so that individual fireflies do not simply move randomly. At this time,

βij = (β0 − βmin)·exp(−γ·rij^2) + βmin    (5)

with β0 = 1.0 and βmin ∈ [0,1]. So even if the distance between fireflies is very large (rij → ∞), the attraction between them remains at least βmin.
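Eq (5) can be written as a small helper; the default `beta_min = 0.2` below is an illustrative assumption.

```python
import numpy as np

def attraction(r, beta0=1.0, beta_min=0.2, gamma=1.0):
    """Eq (5): attractiveness floored at beta_min, so that distant
    fireflies still attract each other instead of drifting randomly."""
    return (beta0 - beta_min) * np.exp(-gamma * r ** 2) + beta_min
```

At r = 0 the attraction equals β0, and as r grows it decays toward βmin rather than toward zero.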
To validate that the minimum attraction strategy can overcome this shortcoming of the firefly algorithm, several test functions (D = 30) are selected to compare the FA with a variant that applies only the minimum attraction strategy (LWFA-1). The comparison results for the best values are shown in Table 1. This paper also selects two benchmark functions for experiments and draws some conclusions: Fig 2 shows the convergence curves of the unimodal function f2, and Fig 3 shows the convergence curves of the multimodal function f7. The population size (N) is 30, and the maximum number of iterations (T) is 1000. All algorithms were run 30 times independently on the test functions.
As Table 1 shows, the error of LWFA-1 is greatly reduced: compared with the FA, LWFA-1 performs better on the optimization problems, which illustrates that the minimum attraction strategy is effective. From Figs 2 and 3, the convergence curve of the FA is essentially a straight line and does not change as the number of iterations increases, which indicates that the FA cannot jump out of the local optimal solution. By contrast, the convergence curve of LWFA-1 is smoother.
Therefore, this strategy can help the original firefly algorithm jump out of the local optimal solution and improve the convergence accuracy of the FA.
3.2.2 Self-Adaptive inertia weight based on logarithmic decrement.
The setting of the inertia weight w can, to some extent, balance the global search ability and the local optimization ability during the iterative process of the FA. Initially, the inertia weight was generally set as a linearly decreasing weight. Based on the motion rule of a firefly in the FA, when a low-brightness firefly is attracted by a brighter individual, its early motion amplitude is large, giving strong global searchability, so that it can enter the local optimization stage at a faster speed. As the iteration count increases and a firefly gets closer to the optimal value, its moving speed should decrease to improve its local optimization ability and allow it to converge quickly while avoiding oscillation in the vicinity of the optimal value, thus improving the optimization ability of the FA.
As shown in Fig 4, compared with the linearly decreasing inertia weight [29], the sinusoidally adjusted inertia weight [30], and the Gaussian decreasing inertia weight [31], the logarithmic decreasing self-adaptive inertia weight decreases rapidly in the early stages of iteration, so that the local search stage starts quickly and the search speed increases. Compared with the Gaussian decreasing strategy, in the late stages of iteration the value of the inertia weight changes more slowly, and individual fireflies can perform local searches with less oscillation. However, the Gaussian decreasing inertia weight is almost unchanged after 500 iterations: if a firefly falls into a local optimum at this point, it cannot easily jump out of that position. Therefore, the logarithmic self-adaptive inertia weight strategy is more suitable for individual firefly movement in the FA.
In this paper, a logarithmic decreasing self-adaptive inertia weight [32] is used to meet the above requirements. Its formula is as follows:

w(t) = w1 + (w2 − w1)·log10(b + 10t/T)    (6)

where b is a logarithmic adjustment factor, w1 is the initial value of the inertia weight, w2 is the final value of the inertia weight, t is the current number of iterations, and T is the maximum number of iterations.
The variation range of the inertia weight coefficient may affect the result of function optimization. In this paper, based on the literature and simulation experiments, the parameters are set to b = 1.0, w1 = 0.9, and w2 = 0.4, with which the optimization performance of the firefly algorithm is better.
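With these settings, the schedule can be sketched as follows, assuming the common logarithmic decreasing form w(t) = w1 + (w2 − w1)·log10(b + 10t/T); the function name and the exact form are assumptions consistent with the parameters stated above (b = 1.0, w1 = 0.9, w2 = 0.4).

```python
import math

def log_inertia_weight(t, T, w1=0.9, w2=0.4, b=1.0):
    """Logarithmic decreasing inertia weight: starts at w1, falls
    quickly in early iterations (fast switch to local search) and
    flattens out near w2 late in the run (less oscillation)."""
    return w1 + (w2 - w1) * math.log10(b + 10.0 * t / T)
```

At t = 0 the weight equals w1 and it decreases monotonically over the run.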
To prove that the inertia weight can also address the shortcomings of the firefly algorithm, several test functions (D = 30) are selected to compare the FA with a variant that applies only the inertia weight (LWFA-2). The comparison results for the best values are shown in Table 2. This paper also selects two benchmark functions for experiments and draws some conclusions: Fig 5 shows the convergence curves of the unimodal function f2, and Fig 6 shows the convergence curves of the multimodal function f7. The parameter settings are the same as in Section 3.2.1.
As Table 2 shows, compared with the FA, LWFA-2 performs better on the optimization problems. From Figs 5 and 6, the convergence curve of LWFA-2 is smoother. Therefore, this strategy can also help the original firefly algorithm jump out of local optima and improve its convergence accuracy.
3.2.3 Self-adaptive and dynamic step-size.
The random step in the updating formula of the standard FA is generally a random number vector drawn from a Gaussian, uniform, or other distribution. Yang introduced a Levy flight into the random part of the updating formula of the FA, which to a certain extent reduces the possibility of individual fireflies falling into local optima and improves the search performance of the standard FA. At the same time, an increase in the search dimension of an individual firefly decreases the optimization accuracy of the standard FA: for high-dimensional test functions, random turbulence easily occurs, the convergence speed decreases, and the algorithm cannot converge to an optimal value. Inspired by references [17, 33], this paper introduces the step adjustment factor c.
T is the maximum number of iterations, t is the current iteration, D is the dimension of an individual firefly, and θ ∈ [0,1]; in this paper, θ = 0.1. As the algorithm runs, the random step size decreases as the search dimension of an individual firefly or the iteration number increases. This ensures that the random step of an individual firefly changes little in a high-dimensional environment, so that accurate exploitation can be carried out in a small range to find the optimal value with high precision. To prove that the dynamic step-size strategy can improve the effectiveness of the FA in different dimensions, three dimensions (D = 10/30/100) are used for comparative experiments. This paper selects the unimodal function f2 and the multimodal function f7 for simulation tests in the three dimensions; the population size (N) is 30, and the maximum number of iterations (T) is 1000. Experimental results show that the step adjustment factor greatly improves the optimization accuracy of the FA. The experimental results are shown in Fig 7.
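The exact formula for c is not reproduced in this copy of the article; as an illustration only, one factor with the stated properties (it shrinks as the iteration t and the dimension D grow, scaled by θ = 0.1) could be:

```python
def step_factor(t, T, D, theta=0.1):
    """Illustrative step-size adjustment factor (an assumption, not the
    paper's exact equation): decreases with both the iteration count t
    and the search dimension D, as the text requires."""
    return theta * (1.0 - t / T) / D
```

Any factor that is monotonically decreasing in both t and D would realize the behaviour described above.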
3.3 The procedures for the realization of LWFA
- In the initialization stage, set the parameters γ, β0, βmin, and α, the firefly population number N, and the maximum iteration number T, and randomly generate the initial positions xi of the fireflies in the optimization interval;
- Substitute the position vectors xi of the fireflies into the objective function f(x) to obtain the initial brightness Ii of each firefly, and compare and analyze the results to obtain the current global optimal brightness Ibest and the individual optimal positions.
- Update the fireflies' locations. Compute the attraction βij between individual fireflies according to Eq (9).
- Update the positions of the fireflies: a firefly with high initial brightness attracts a firefly with low initial brightness to complete the position update. The updated formulas after improvement are as follows:

xi(t+1) = w(t)·xi(t) + βij·(xj(t) − xi(t)) + α·c·δij    (8)

βij = (β0 − βmin)·exp(−γ·rij^2) + βmin    (9)
- After updating its position, a firefly generates a new position vector that replaces the old position vector in an objective function, completes the brightness update, and reorders its current brightness to obtain the current global optimal value.
- Update wt, c, and t. Generally, the stop condition for the iteration in the FA is that the number of iterations reaches a predefined number or that the global optimal result reaches the required accuracy.
The flow chart of the LWFA is shown in Fig 8:
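Putting the steps above together, a minimal end-to-end sketch of the LWFA might look as follows. The values of beta_min and alpha, the search bounds, and the exact step-factor form are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def lwfa(f, D, N=30, T=1000, beta0=1.0, beta_min=0.2, gamma=1.0,
         alpha=0.2, w1=0.9, w2=0.4, b=1.0, theta=0.1,
         lb=-10.0, ub=10.0, seed=0):
    """Sketch of the LWFA loop: minimum attraction, logarithmic inertia
    weight, and a decreasing step factor (the factor's form is assumed)."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(lb, ub, (N, D))            # step 1: random initial positions
    I = np.array([f(x) for x in X])            # step 2: brightness = objective value
    best_x, best_f = X[I.argmin()].copy(), I.min()
    for t in range(T):
        w = w1 + (w2 - w1) * np.log10(b + 10.0 * t / T)   # inertia weight
        c = theta * (1.0 - t / T) / D                     # step factor (assumed form)
        for i in range(N):
            for j in range(N):
                if I[j] < I[i]:                # brighter firefly j attracts i
                    r2 = np.sum((X[i] - X[j]) ** 2)
                    beta = (beta0 - beta_min) * np.exp(-gamma * r2) + beta_min
                    X[i] = w * X[i] + beta * (X[j] - X[i]) \
                        + alpha * c * (rng.random(D) - 0.5)
        X = np.clip(X, lb, ub)                 # keep fireflies inside the interval
        I = np.array([f(x) for x in X])        # brightness update and re-ranking
        if I.min() < best_f:
            best_f, best_x = I.min(), X[I.argmin()].copy()
    return best_x, best_f
```

On a smooth test function such as the sphere, this sketch drives the population toward the origin within a few dozen iterations.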
4 Theoretical analysis of improved firefly algorithm
4.1 The complexity analysis
The complexity of a swarm intelligence algorithm reflects its performance from another aspect: an algorithm that needs an unbounded amount of time to find the global optimal solution has no practical availability. Therefore, the computational complexity of the algorithm is significant and crucial. Suppose the population size is N, the maximum number of iterations is T, and the dimension of the decision variable is D.
In the standard FA, the computational complexity of the initialization phase is O(N), the position update of the fireflies is O(T·N·D), and checking for firefly positions outside the search space is O(T·N). Hence, the computational complexity of the FA for the global search for an optimal position is O(N) + O(T·N·D) + O(T·N) = O(T·N·D).
Similarly, the computational complexity of the improved FA in the initialization phase is O(N). The minimum-attraction improvement is O(T·N), the inertia weight based on logarithmic decrement w(t) is O(T·N), and the self-adaptive dynamic step size is O(T·N·D). The position update is also O(T·N·D), and checking for positions outside the search space is O(T·N). Therefore, the computational complexity of the LWFA to reach an optimal position is O(N) + O(3·T·N) + O(2·T·N·D) = O(T·N·D).
Although the computational cost of the improved FA (LWFA) is higher than that of the standard FA by a constant factor, both are of the same order of magnitude.
4.2 The convergence analysis
Convergence analysis is a key factor to evaluate the convergence of swarm intelligence algorithms. As the number of iterations increases, errors between optimal results and theoretical optimal values become minute; eventually, swarm intelligence algorithms approach a fixed value. In the FA, the position update phase of fireflies determines whether the algorithm can converge to a global optimal. Many scholars have used different methods to analyze the convergence of swarm intelligence algorithms. In this paper, the LWFA is analyzed using a second-order nonhomogeneous differential equation.
Using the updating mechanism of the LWFA, we can easily see that the position update of the FA is conducted dimension by dimension, and each dimension is independent. For convenience, this paper simplifies the analysis to one dimension. In updating formula (8), the weight w is treated as a constant coefficient a, βij as a constant coefficient b, and α·δij as a constant coefficient d; xj(t), the current optimal position of a firefly, is denoted gb, and the optimal position of the next iteration is denoted pb. The updating formula (8) thus reduces to:

x(t+1) = a·x(t) + b·(gb − x(t)) + d    (10)

x(t+2) = a·x(t+1) + b·(pb − x(t+1)) + d    (11)
Subtracting Eq (10) from Eq (11) and letting r = a − b, Formulations (10–11) can be combined as:

x(t+2) − x(t+1) = r·(x(t+1) − x(t)) + b·(pb − gb)    (12)

x(t+2) − (1+r)·x(t+1) + r·x(t) = b·(pb − gb)    (13)
Its characteristic equation is λ^2 − (1+r)·λ + r = 0, with Δ = (1+r)^2 − 4r = (r−1)^2 ≥ 0.
Therefore, the convergence process of LWFA needs to consider the following two conditions:
1) When Δ = 0, the characteristic equation has two equal real roots, λ1 = λ2 = (1+r)/2, and x(t) = (A0 + A1·t)·λ^t, where A0 and A1 are undetermined coefficients.
2) When Δ > 0, the characteristic equation has two distinct real roots, λ1 = ((1+r) + √Δ)/2 and λ2 = ((1+r) − √Δ)/2, and x(t) = A0 + A1·λ1^t + A2·λ2^t, where A0, A1, and A2 are undetermined coefficients.
Based on the convergence analysis, if the LWFA converges iteratively, the following two conditions must be met [34].
- ➀. If t→∞, x(t) tends to a finite value.
- ➁. ‖λ1‖<1, and ‖λ2‖<1
The calculated results are as follows:
When Δ = 0, the convergent domain is r = 1;
When Δ>0, the convergent domain is −1 < r < 1; thus a − b ∈ (−1, 1). According to Eqs (5), (6), and (8), b ∈ (0,1), so a ∈ (−1,2) ⇒ w(t) ∈ (−1,2).
In the whole iteration process of the improved algorithm, the inertia weight region is between 0.4 and 0.9. Therefore, the range of inertia weight conforms to the convergent domain (−1,2), thus the LWFA can converge to a global optimal solution in the iteration.
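The convergence condition can be checked numerically. Under the reduction used above (w → a, βij → b, α·δij → d, attractor gb), the one-dimensional recursion x(t+1) = a·x(t) + b·(gb − x(t)) + d contracts whenever |r| = |a − b| < 1, settling at the fixed point (b·gb + d)/(1 − r). The coefficient values below are assumptions chosen for illustration.

```python
def iterate(a, b, d, gb, x0, steps=200):
    """Simplified one-dimensional LWFA recursion from the convergence
    analysis: x(t+1) = a*x(t) + b*(gb - x(t)) + d."""
    x = x0
    for _ in range(steps):
        x = a * x + b * (gb - x) + d
    return x

# Example inside the convergent domain: r = a - b = -0.2, |r| < 1.
a, b, d, gb = 0.7, 0.9, 0.05, 2.0
fixed_point = (b * gb + d) / (1.0 - (a - b))
```

Starting far from the fixed point, the iterates shrink toward it geometrically at rate |r|.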
5 Application of improved firefly algorithm in function optimization
To evaluate whether the improved FA has advantages in various applications, two sets of benchmark test functions are used: ten classical benchmark functions with different dimensions (D = 10, 30, and 100) and the standard IEEE CEC2010 functions. In this paper, PSO, the CS, the SCA, the FPA, the FA, other improved FAs, and the LWFA are used to conduct simulation experiments, and the experimental results are compared. Based on the experimental results, the improved FA proposed in this paper shows good optimization performance in terms of optimization accuracy and convergence speed.
5.1 The benchmark test functions set 1
In this section, ten classical benchmark functions are used as test functions to evaluate the performance of the improved FA algorithm [35]. These functions represent different optimization problems. The specific test functions are as follows:
- Schaffer function:

f1(x) = 0.5 + (sin^2(√(x1^2 + x2^2)) − 0.5) / (1 + 0.001·(x1^2 + x2^2))^2    (14)

This function's theoretical optimal value is 0, and the optimal position is at (0,…,0). This function is continuous, differentiable, non-separable, non-scalable, and multimodal.
- Sphere function:

f2(x) = Σ_{i=1}^{D} xi^2    (15)

This function's minimum value is 0. It is a classical high-dimensional unimodal function; its optimal position (0,…,0) lies at the centre of a smooth bowl, and the relative exploration area is very limited.
- Rastrigin function:

f3(x) = Σ_{i=1}^{D} (xi^2 − 10·cos(2πxi) + 10)    (16)

This function obtains its optimum value 0 at (0,…,0). It is a multimodal function with a large number of local minima; its peaks fluctuate strongly, making the global search rather difficult.
- Griewank function:

f4(x) = (1/4000)·Σ_{i=1}^{D} xi^2 − Π_{i=1}^{D} cos(xi/√i) + 1    (17)

This function obtains its minimum value 0 at (0,…,0). This function is continuous, differentiable, non-separable, scalable, and multimodal.
- Ackley function:

f5(x) = −20·exp(−0.2·√((1/D)·Σ_{i=1}^{D} xi^2)) − exp((1/D)·Σ_{i=1}^{D} cos(2πxi)) + 20 + e    (18)

This function obtains its minimum value 0 at (0,…,0). This function is continuous, differentiable, non-separable, scalable, and multimodal. It has several local optima in its search space.
- Sum-Squares function:

f6(x) = Σ_{i=1}^{D} i·xi^2    (19)

This function's theoretical optimal value is 0, and its optimal position is at (0,…,0). This function is continuous, differentiable, separable, scalable, and unimodal.
- Zakharov function:

f7(x) = Σ_{i=1}^{D} xi^2 + (Σ_{i=1}^{D} 0.5·i·xi)^2 + (Σ_{i=1}^{D} 0.5·i·xi)^4    (20)

This function obtains its minimum value 0 at (0,…,0). This function is continuous, differentiable, non-separable, scalable, and multimodal.
- Schwefel's problem 1.2 function:

f8(x) = Σ_{i=1}^{D} (Σ_{j=1}^{i} xj)^2    (21)

This function obtains its minimum value 0 at (0,…,0). This function is continuous, differentiable, non-separable, scalable, and unimodal.
- Schwefel's problem 2.21 function:

f9(x) = max_{1≤i≤D} |xi|    (22)

This function obtains its minimum value 0 at (0,…,0). This function is continuous, non-differentiable, separable, scalable, and unimodal.
- Schwefel's problem 2.22 function:

f10(x) = Σ_{i=1}^{D} |xi| + Π_{i=1}^{D} |xi|    (23)

This function obtains its minimum value 0 at (0,…,0). This function is continuous, differentiable, non-separable, scalable, and unimodal.
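A few of the benchmark functions above, written out in their standard textbook forms (assumed to match the paper's definitions); each attains its minimum value 0 at the origin.

```python
import numpy as np

def sphere(x):                      # f2, Eq (15)
    return float(np.sum(x ** 2))

def rastrigin(x):                   # f3, Eq (16)
    return float(np.sum(x ** 2 - 10 * np.cos(2 * np.pi * x) + 10))

def griewank(x):                    # f4, Eq (17)
    i = np.arange(1, x.size + 1)
    return float(np.sum(x ** 2) / 4000 - np.prod(np.cos(x / np.sqrt(i))) + 1)

def ackley(x):                      # f5, Eq (18)
    D = x.size
    return float(-20 * np.exp(-0.2 * np.sqrt(np.sum(x ** 2) / D))
                 - np.exp(np.sum(np.cos(2 * np.pi * x)) / D) + 20 + np.e)
```

These implementations are dimension-agnostic, matching the D = 10/30/100 settings used in the experiments.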
5.2 Comparison with other improved firefly algorithms on set 1
In this section, several improved firefly algorithms are selected to verify the optimization ability of the LWFA. The results of the other improved FAs are taken from the original literature. The comparison results of the LWFA, VVSFA [36], IWFA [37], and CLFA [38] are shown in Table 3.
Table 3 reports the average values. In the same dimension, D = 100, the accuracy of the LWFA is clearly higher than that of the other improved algorithms. When solving high-dimensional optimization problems, compared with the other FA variants, the LWFA mitigates the curse of dimensionality. Therefore, the LWFA is competitive in solving optimization problems.
5.3 Comparison with other state-of-art algorithms on set 1
For the ten test functions, f2, f6, f8, f9, and f10 are unimodal functions, and f1, f3, f4, f5, and f7 are multimodal functions. The dimensions of the test functions are set to 10, 30, and 100, respectively. The test results are obtained using PSO, CS algorithm, SCA, FPA, FA, and LWFA.
The experimental environment is MATLAB 2014a; the operating system is Windows 7 Ultimate with 4.00 GB of memory and an Intel(R) Core(TM) i3-2350M CPU @ 2.30 GHz processor. The parameters of the six algorithms, PSO [33], the CS [34], the FPA [35], the SCA [8], the FA [36], and the LWFA, are shown in Table 4.
To ensure fairness and comparability in the simulation experiments, the same experimental parameters are defined for all the algorithms. Each algorithm will run 30 times to solve each test function, and the test results will be recorded; the optimal solution of the objective function value, the worst solution, the average value, and standard deviation are obtained in this dimension, as shown in Tables 5–7.
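The four statistics reported in Tables 5–7 can be computed from the recorded runs as follows; the helper name is illustrative.

```python
import numpy as np

def summarize(results):
    """Best, worst, mean, and (sample) standard deviation over the
    independent runs of one algorithm on one test function."""
    r = np.asarray(results, dtype=float)
    return {"best": float(r.min()), "worst": float(r.max()),
            "mean": float(r.mean()), "std": float(r.std(ddof=1))}
```

For a minimization problem, "best" is the smallest objective value found across the 30 runs and "worst" the largest.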
Based on Tables 5–7, for D = 10/30/100 the LWFA significantly improves convergence accuracy. For both unimodal functions and multimodal functions with multiple extreme points, the LWFA obtains better optimization performance as the dimension increases; at D = 100, the accuracy of the LWFA is higher than at D = 10 by more than 100 orders of magnitude. The LWFA also solves the optimization problems with a small standard deviation, indicating that the improved algorithm is more robust on high-dimensional problems.
By comparing Tables 5–7, we find that when D = 10, the LWFA can obtain the optimal value 0 for functions f1, f3, and f4, while the optimization accuracy of the other four algorithms (PSO, the CS, the FPA, and the FA) is not high. For functions f2 and f5–f10, the LWFA proposed in this paper is more accurate than PSO, the CS, and the FA in terms of optimal values and standard deviation; it also has higher convergence accuracy than the SCA except for functions f2 and f5–f6. When D = 30, the standard CS fails to converge many times on f6 and f10. Moreover, when D = 100, PSO cannot converge on function f7, and the CS also fails to converge on functions f3, f6, f8, and f10. When D = 100, the LWFA can quickly reach the optimal value 0 for functions f1, f3, f4, and f5. For the other functions, the optimization accuracy and stability of PSO, the CS, the FPA, and the FA are lower than those of the LWFA proposed in this paper. In particular, the performance of the SCA decreases rapidly as the dimension increases, whereas the LWFA retains a faster convergence speed and higher accuracy over 30 independent high-dimensional runs (D = 100).
5.4 Experiments and results analysis on set 2
In this section, to further analyze the performance of the improved FA, the IEEE CEC 2010 functions [39] are used in the simulation experiments. PSO, the SCA, the FPA, and the standard FA are used as baselines for the LWFA. The simulation environment is the same as for set 1, and the parameters of the five algorithms are given in Table 4. To obtain statistically meaningful results, each function is run 30 times independently, and all the simulation experiments are conducted under the same conditions. The solutions of all the algorithms are shown in Table 8.
The mean and standard deviation of the results are recorded in Table 8. Based on Table 8, the LWFA clearly outperformed the standard FA. Compared with the other algorithms (PSO, the FPA, FA, and the SCA), the improved FA achieved superior optimization performance.
5.5 Convergence curve analysis
To clearly show the convergence speed and accuracy of the LWFA, this paper plots the convergence curves of the LWFA and the five other algorithms (PSO, the CS, the FPA, the SCA, and FA) with D = 10 on the ten benchmark functions, as shown in Fig 9. For dimensions D = 30/100, the convergence curves of functions f2, f5, and f6 are shown in Fig 10. In these graphs, the horizontal axis represents the iteration count (T = 1000), and the vertical axis shows the best objective value found, illustrating the robustness of each algorithm.
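A convergence curve of this kind records, at each iteration, the best objective value found so far, so the curve is monotonically non-increasing. A minimal sketch (the helper name is our own):

```python
def convergence_trace(per_iter_best):
    """Turn the best objective value observed at each iteration into a
    best-so-far trace, the quantity plotted in convergence curves."""
    trace = []
    best = float("inf")
    for v in per_iter_best:
        best = min(best, v)  # the running optimum can only improve
        trace.append(best)
    return trace

values = [9.0, 7.5, 8.2, 3.1, 4.0, 0.6]
trace = convergence_trace(values)
print(trace)  # [9.0, 7.5, 7.5, 3.1, 3.1, 0.6]
```

Because the traced values often span many orders of magnitude, such curves are usually drawn with a logarithmic vertical axis, which makes a "nearly vertical" descent correspond to a rapid gain in accuracy.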
Fig 9 shows the performance of the LWFA. The other algorithms (PSO, the CS, the FPA, and FA) yield similar solutions according to the convergence curves. Compared with the SCA, the LWFA obtains higher accuracy for functions f2, f5, and f6. However, the performance of the SCA drops rapidly as the dimension increases, and Fig 10 shows that the LWFA reaches a better optimal value than the SCA.
Overall, based on the comparison results above, the improved FA proposed in this paper outperforms the other algorithms in the same dimension. When the dimension is 30/100, the convergence curve of the LWFA falls faster than those of the other algorithms on the same function, showing a nearly vertical descent. As the dimension increases, the optimization accuracy of PSO, the CS, the FPA, the SCA, and FA decreases, whereas the LWFA maintains, and even gradually improves, its optimization accuracy, showing good optimization performance on every function as the dimension grows.
6 Conclusions
In this paper, the biological principle and mathematical description of the standard FA are presented, and improvements to the FA and research on its application fields are summarized. An FA based on a self-adaptive logarithmic inertia weight and a minimum attractiveness is proposed, and a step adjustment factor is added to the random step term to adjust it dynamically, greatly improving the optimization performance of the FA. To evaluate the performance of the LWFA, multiple simulation experiments were conducted: ten benchmark test functions and the IEEE CEC 2010 standard test functions were used to compare the LWFA with state-of-the-art algorithms. The comparisons with PSO, the FPA, the SCA, and the CS show that the improved FA increases the accuracy of the solutions and significantly enhances their robustness on the test functions, particularly the high-dimensional ones. The experimental results convincingly confirm that the LWFA achieves high performance in solving optimization problems. Applying the improved FA to real-world engineering problems and multi-objective optimization is left for future work.
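As a rough illustration of the three modifications summarized above, a single LWFA-style move of a firefly toward a brighter one might look like the following sketch. The specific inertia-weight, attractiveness, and step-size formulas used here are plausible assumptions for illustration only, not the paper's exact equations:

```python
import math
import random

rng = random.Random(0)

def lwfa_move(x_i, x_j, t, T, gamma=1.0, beta0=1.0, beta_min=0.2,
              w_max=0.9, w_min=0.4, alpha0=0.5):
    """Illustrative LWFA-style update of firefly i toward a brighter
    firefly j at iteration t of T. All formulas below are assumed forms,
    not the paper's equations."""
    d = len(x_i)
    r2 = sum((a - b) ** 2 for a, b in zip(x_i, x_j))
    # Self-adaptive, logarithmically decreasing inertia weight (assumed form):
    # decays from w_max at t = 0 to w_min at t = T.
    w = w_max - (w_max - w_min) * math.log1p((math.e - 1) * t / T)
    # Attractiveness bounded below by beta_min, so even distant, brighter
    # fireflies still exert a pull.
    beta = beta_min + (beta0 - beta_min) * math.exp(-gamma * r2)
    # Random step shrunk by a decreasing factor and by the dimension
    # (assumed form), so high-dimensional searches take smaller random steps.
    alpha = alpha0 * (1 - t / T) / d
    return [w * xi + beta * (xj - xi) + alpha * (rng.random() - 0.5)
            for xi, xj in zip(x_i, x_j)]
```

The intent of the three terms matches the description above: the shrinking inertia weight shifts the balance from global exploration to local exploitation, the attractiveness floor keeps distant fireflies interacting, and the decaying, dimension-scaled random step sharpens late-stage accuracy in high dimensions.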
References
- 1. Holland J H. (1973). Genetic Algorithms and the Optimal Allocation of Trials. SIAM Journal on Computing, 2(2):88–105.
- 2. Dorigo M, Maniezzo V, Colorni A. (1996). Ant system: Optimization by a colony of cooperating agents. IEEE Transactions on Systems, Man, and Cybernetics, Part B, 26(1):29–41.
- 3. Kennedy J, Eberhart R C. (1995). Particle swarm optimization. Proceedings of the IEEE International Conference on Neural Networks, pp.1942–1948.
- 4. Yang X, Deb S. (2009). Cuckoo Search via Lévy flights. Nature and biologically inspired computing, pp.210–214.
- 5. Pelta D A, Krasnogor N. (2009). Nature-inspired cooperative strategies for optimization. Nature inspired cooperative strategies for optimization, pp.982–989.
- 6. Mirjalili S, Lewis A. (2016). The Whale Optimization Algorithm. Advances in Engineering Software, 95:51–67.
- 7. Mirjalili S, Mirjalili S M and Lewis A. (2014) Grey wolf optimizer. Advances in Engineering Software, 69: 46–61.
- 8. Mirjalili S. (2016). SCA: A Sine Cosine Algorithm for solving optimization problems. Knowledge-Based Systems, 96:120–133.
- 9. Połap D, Woźniak M. (2017). Polar Bear Optimization Algorithm: Meta-Heuristic with Fast Population Movement and Dynamic Birth and Death Mechanism. Symmetry, 9(10):203–223.
- 10. Wolpert D H, Macready W G. (1997). No free lunch theorems for optimization. IEEE Transactions on Evolutionary Computation, pp.67–82.
- 11. Husnain G, Anwar S. (2021). An intelligent cluster optimization algorithm based on Whale Optimization Algorithm for VANETs (WOACNET). PLOS ONE,16(4): e0250271. pmid:33882105
- 12. Li Y, Zhao Y, Liu J. (2021). Dynamic sine cosine algorithm for large-scale global optimization problems. Expert Systems with Applications 177:114950.
- 13. Yang X. (2009). Firefly algorithms for multimodal optimization. International conference on stochastic algorithms foundations and applications, pp.169–178.
- 14. Yang X. (2010). Firefly Algorithm, Levy Flights and Global Optimization. arXiv: Optimization and Control, pp.209–218.
- 15. Yang X. (2011). Chaos-Enhanced Firefly Algorithm with Automatic Parameter Tuning. International Journal of Swarm Intelligence Research, 2(4):1–11.
- 16. Sharma S, Jain P and Saxena A. (2020). Adaptive Inertia-Weighted Firefly Algorithm. Intelligent Computing Techniques for Smart Energy Systems, pp.495–503.
- 17. Farahani S M, Abshouri A A, Nasiri B, et al. (2011). A Gaussian Firefly Algorithm. International Journal of Machine Learning and Computing, pp.448–453.
- 18. Farahani S M, Abshouri A A, Nasiri B, et al. (2012). Some Hybrid models to Improve Firefly Algorithm Performance. International journal of artificial intelligence, pp.97–117.
- 19. Baykasoglu A, Ozsoydan F B. (2015). Adaptive firefly algorithm with chaos for mechanical design optimization problems. Applied Soft Computing, pp.152–164.
- 20. Carbas S. (2020). Enhanced Firefly Algorithm for Optimum Steel Construction Design. Applications of Firefly Algorithm and its Variants. Springer, pp.119–146.
- 21. Hassanzadeh T, Meybodi M R and Shahramirad M. (2017). A New Fuzzy Firefly Algorithm with Adaptive Parameters. International Journal of Computational Intelligence and Applications, 16(03):1750017.
- 22. Liu J, Mao Y, Liu X, et al. (2020). A dynamic adaptive firefly algorithm with globally orientation. Mathematics and Computers in Simulation, pp.76–101.
- 23. Chandrawati T B, Sari R F. (2018). A Review of Firefly Algorithms for Path Planning, Vehicle Routing and Traveling Salesman Problems. International conference on electrical engineering and informatics, pp.30–35.
- 24. Rajmohan S, Natarajan R. (2019). Group influence based improved firefly algorithm for Design Space Exploration of Datapath resource allocation. Applied Intelligence, 49(6):2084–2100.
- 25. Altabeeb A M, Mohsen A M and Ghallab A. (2019). An improved hybrid firefly algorithm for capacitated vehicle routing problem. Applied Soft Computing, pp.105728.
- 26. Fan T, Wang J, Feng M, et al. (2019). Application of multi-objective firefly algorithm based on archive learning in robot path planning. International Journal of Intelligent Information and Database Systems, 12(3):199–211.
- 27. Mondal A, Dey N and Ashour S A. (2019). Digital Image Processing based Cuckoo Search Algorithm and its Variants: A Comprehensive Review. Applications of Cuckoo Search Algorithm and its Variants.
- 28. Rajagopalan A, Modale D R, Senthilkumar R. (2020). Optimal Scheduling of Tasks in Cloud Computing Using Hybrid Firefly-Genetic Algorithm. Advances in Decision Sciences, Image Processing, Security and Computer Vision Springer, pp.678–687.
- 29. Wang F, Ye C. (2016). Location of emergency rescue center in urban areas based on improved firefly algorithm. Industrial engineering and management, pp.21–24.
- 30. Jiang C, Zhao S, Shen S, et al. (2012). Particle swarm optimization algorithm with sinusoidal changing inertia weight. Computer Engineering and Applications, 48(8):40–42.
- 31. Zhang X, Wang P, Xing J, et al. (2012). Particle swarm optimization algorithms with decreasing inertia weight based on Gaussian function. Application Research of Computers, 29(10): 3710–3712.
- 32. Wenzhi D, Xinle Y. (2015). Particle swarm optimization algorithm based on inertia weight logarithmic decreasing. Computer engineering and applications, pp.99.
- 33. Hassanzadeh T, Meybodi M R, and Shahramirad M. (2017). A New Fuzzy Firefly Algorithm with Adaptive Parameters. International Journal of Computational Intelligence and Applications, 16(03):1750017.
- 34. Liu J, Xing Y and Li Y. (2018). A Gravitational Search Algorithm with Adaptive Mixed Mutation for Function Optimization. International journal of performability engineering, 14(4):681–690.
- 35. Jamil M, Yang X. (2013). A literature survey of benchmark functions for global optimization problems. International Journal of Mathematical Modelling & Numerical Optimisation, 4(2):150–194.
- 36. Yu S, Zhu S, Ma Y, et al. (2015). A variable step size firefly algorithm for numerical optimization. Applied Mathematics & Computation, 263(C):214–220.
- 37. Zhu Q G, Xiao Y K, Chen W D, Ni C X, Chen Y. (2016). Research on the improved mobile robot localization approach based on firefly algorithm. Chinese Journal of Scientific Instrument, 37(2):323–329.
- 38. Kaveh A, Javadi S M. (2019). Chaos-based firefly algorithms for optimization of cyclically large-size braced steel domes with multiple frequency constraints. Computers and Structures, 214:28–39.
- 39. Tang K, Yao X, Suganthan P, et al. (2010). Benchmark Functions for the CEC 2010 Special Session and Competition on Large Scale Global Optimization. University of Science and Technology of China (USTC), School of Computer Science and Technology, Nature Inspired Computation and Applications Laboratory (NICAL), Hefei, Anhui, China. Tech. Rep.