
Advanced arithmetic optimization algorithm for solving mechanical engineering design problems

Abstract

The distributive power of the arithmetic operators (multiplication, division, addition, and subtraction) gives the arithmetic optimization algorithm (AOA) its unique ability to find the global optimum for the optimization problems used to test its performance. Several other mathematical operators with the same or better distributive properties exist, and these can be exploited to enhance the performance of the newly proposed AOA. In this paper, we propose an improved version of the AOA, called the nAOA algorithm, which uses the high-density values that the natural logarithm and exponential operators can generate to enhance the exploratory ability of the AOA. The addition and subtraction operators carry out the exploitation. The candidate solutions are initialized using the beta distribution, and the random variables and adaptations used in the algorithm follow the beta distribution. We test the performance of the proposed nAOA on 30 benchmark functions (20 classical and 10 composite test functions) and three engineering design benchmarks. The performance of nAOA is compared with that of the original AOA and nine other state-of-the-art algorithms. The nAOA shows efficient performance on the benchmark functions and was second only to GWO for the welded beam design (WBD), compression spring design (CSD), and pressure vessel design (PVD) problems.

1. Introduction

Optimization techniques are popular for solving real-world problems. Finding solutions to these complex, nonlinear, and multimodal real-world problems usually requires reliable optimization techniques, such as metaheuristic algorithms, which have proved dependable for such problems. The popularity of metaheuristic algorithms hinges on their ease of use and implementation, their being gradient-free, and their ability to bypass local optima. Metaheuristic algorithms have been successfully applied to solve problems in engineering, medicine, and many other areas.

Nature has inspired many metaheuristic algorithms; they solve optimization problems by mimicking natural phenomena. These phenomena cover a range of natural processes from such areas as biology, physics, chemistry, and swarms (population-based) [1,2]. The bio-inspired metaheuristic algorithms are frequently inspired by the laws of natural evolution: the randomly generated search agents are evolved by combining the best individuals after every iteration of the search process. Examples of these bio-inspired metaheuristic algorithms include genetic algorithms (GA) [3], the artificial algae algorithm (AAA) [4], and the evolution strategy (ES) [5]. The physics- and chemistry-based metaheuristic algorithms mimic physical rules in the universe, for example simulated annealing (SA) [6], the gravitational search algorithm (GSA) [7], and the artificial chemical reaction optimization algorithm (ACROA) [8]. The swarm-based algorithms are population-based; they mimic the social behavior of animals in groups. Popular swarm algorithms include particle swarm optimization (PSO) [9] and ant colony optimization (ACO) [10].

The arithmetic optimization algorithm (AOA) is a recently proposed population-based metaheuristic algorithm. It is based on the distributive behavior of the arithmetic operators multiplication (M), division (D), subtraction (S), and addition (A). The performance of the AOA was investigated using twenty-three benchmark functions, six hybrid composite functions, and several real-world engineering design problems, and its experimental results were promising when compared against those of eleven other well-known optimization algorithms [11].

The distributive power of the arithmetic operators gives the AOA its unique ability to find the global optimum for the optimization problems used to test its performance. However, several other mathematical operators with the same or better distributive properties exist, which could be exploited to enhance the performance of the AOA. This motivated us to use the high-density values that the natural logarithm and exponential operators can generate to enhance the exploratory ability of the AOA. The addition and subtraction operators are still used for exploitation. The major contributions of our work can be summarized as follows:

  • We propose a new advanced arithmetic optimization algorithm, which we refer to as the nAOA.
  • The nAOA improves the exploratory ability of the original AOA by using the high-density numbers generated by the natural logarithm and exponential operators.
  • The candidate solutions are initialized using the beta distribution instead of the default random number initialization scheme.
  • The random variables and adaptations used in the algorithm follow the beta distribution.

The rest of the paper is organized as follows. In Section 2, the literature is reviewed and discussed. We present the proposed algorithm in Section 3. Section 4 covers the experimental setup, results, and discussion. Finally, Section 5 presents the concluding remarks and suggests future research directions.

2. Literature review

The sine cosine algorithm (SCA) uses a mathematical model based on sine and cosine functions to achieve an optimal solution [12]. Research results proved the algorithm’s ability to explore different search space regions, to avoid being stuck in local optima, and to converge towards the global optimum. Furthermore, the SCA algorithm showed promising abilities in solving real-world problems by obtaining a smooth shape for the airfoil problem with very low drag.

A comparative study of recent algorithms, including the arithmetic optimization algorithm (AOA), the salp swarm algorithm (SSA), the slime mould optimization algorithm (SMA), and the marine predators algorithm, was carried out [13]. Based on the study, a new hybrid of the slime mould algorithm and the simulated annealing algorithm (HSMA-SA) was proposed to strengthen the exploitation and exploration abilities of the hybrid algorithm. The hybrid was applied to structural engineering design problems, where it showed promising results.

The arithmetic optimization algorithm was also used to boost an artificial neural network in the proposed IANN-AOA, which was applied to the damage quantification problem [14]. The main idea is for the improved indicator to eliminate the healthy elements from the numerical model. The damage index data for the damaged elements, collected from the improved indicator, is used as input, with the damage level as output. The results for the IANN-AOA showed that the damaged elements are predicted with higher precision using the improved indicator. The same holds for damage quantification, where the results for IANN-AOA are more accurate than those for IANN-BCMO.

Premkumar et al. [15] proposed a multi-objective version of the arithmetic optimization algorithm (MOAOA). The algorithm was used for solving real-world constrained multi-objective optimization problems (RWMOPs) found in mechanical engineering, chemical engineering processes and syntheses, and power electronics systems. The performance of the MOAOA was tested on a set of 35 constrained RWMOPs and five ZDT unconstrained problems and compared with four other state-of-the-art multi-objective algorithms. The superiority of the MOAOA over the other algorithms considered is confirmed by its high accuracy and coverage across all objectives [15].

An improved arithmetic optimization algorithm (dAOA) was proposed, which used a modified version of the extreme learning machine (ELM) model for the identification of proton exchange membrane fuel cells (PEMFCs) [16]. The configurations of the ELM were optimized by the improved algorithm, which in turn minimized the sum of squared errors between the output voltage of the real PEMFC data and the model's estimated output voltage. The simulations showed that the proposed dAOA provided accurate parameters of the PEMFC stack system.

3. The proposed nAOA

In our proposed improvement of the AOA, the optimization process starts by initializing the candidate solutions using the beta distribution. This distribution was chosen because many authors have used distributions other than the default uniform random numbers to generate the initial population, with varying levels of success [17–20]. The candidate solutions are improved after every iteration according to the optimization rules. Stochastic processes are used to find optimal solutions, so the probability of obtaining the optimal solution increases with multiple runs.
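As an illustration of this initialization scheme, the sketch below (in Python rather than the MATLAB used for the paper's experiments) draws beta-distributed samples and maps them to the search-space bounds; the shape parameters a = b = 2 are our own assumption, as the paper does not state them here.

```python
import numpy as np

def beta_init(n_agents, dim, lb, ub, a=2.0, b=2.0, seed=None):
    """Draw an initial population from Beta(a, b) and scale it to [lb, ub]."""
    rng = np.random.default_rng(seed)
    unit = rng.beta(a, b, size=(n_agents, dim))  # beta-distributed samples in [0, 1]
    return lb + unit * (ub - lb)                 # map to the search-space bounds

# Example: 50 candidate solutions in a 30-dimensional space bounded by [-100, 100].
population = beta_init(50, 30, lb=-100.0, ub=100.0, seed=42)
```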

The optimization process goes through two phases: exploration and exploitation. Exploration refers to scouring a new area/region of the search space for an optimal solution, whereas exploitation refers to scouring the neighborhood of already visited areas. A good balance between exploration and exploitation can guarantee an optimal solution. In our proposed nAOA, the natural logarithm (L, 'ln') and exponential (E, 'e') operators are used for exploration, while the addition (A, '+') and subtraction (S, '−') operators are used for exploitation.

3.1. Motivation

Arithmetic is an elementary branch of mathematics, and one of the oldest. It deals with the study of numbers and the properties of operators applied to them. The traditional operators are addition, subtraction, division, and multiplication, but arithmetic also involves advanced operators such as logarithmic functions, exponentiation, computation of percentages, and square roots [21]. Abualigah et al. [11] used addition, subtraction, division, and multiplication for optimization in the AOA. The success of the AOA as an optimizer greatly motivated us to consider using other advanced arithmetic operators in our proposed nAOA. The logarithm and exponential functions are used in the exploration phase to update the candidate solutions, and addition and subtraction are used in the exploitation phase. The behavior of the optimization operators during the optimization process is shown in Fig 1.

3.2. Optimization process

After the candidate solutions have been initialized, the optimizer needs to decide which optimization phase to enter. The value of the math optimization accelerator (MOA) function, defined in Eq 1, determines that phase. The exploration phases used by our proposed algorithm are shown in Fig 2, and a detailed description of the phases is given in the next subsection.

$$\mathrm{MOA}(C_i) = Min + C_i \times \left(\frac{Max - Min}{M\_iter}\right) \tag{1}$$

where C_i is the current iteration, Max and Min are, respectively, the maximum and minimum values of the accelerator function, M_iter is the maximum number of iterations, and MOA(C_i) is the value of the accelerator function at the ith iteration.
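A direct transcription of Eq 1, assuming the accelerator bounds Min = 0.2 and Max = 0.9 quoted later in Section 3.5:

```python
def moa(c_iter, max_iter, mn=0.2, mx=0.9):
    """Math optimization accelerator (Eq 1): grows linearly from mn to mx."""
    return mn + c_iter * (mx - mn) / max_iter
```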

The exponential function is everywhere continuous and increasing, is asymptotic to the x-axis, is one-to-one, and maps onto ℝ⁺. The logarithmic function, its inverse, is likewise continuous and increasing everywhere. The ranges of the exponential and logarithmic functions are used to set the directions given in Fig 1, which greatly influences the exploration ability of the proposed nAOA.

3.3. Exploration phase

The value of MOA is compared with a randomly generated beta-distributed number (b1); this determines the phase nAOA enters. Exploration in nAOA is carried out by the natural logarithm and exponential operators, whose behavior can be seen in Fig 1. The candidate solutions are updated using these two operators in this phase. The high dispersion of the values generated by the operators makes them ideal for exploration: they can search new regions of the search space for an optimal solution. However, they are unable to converge to the optimal solution, unlike the addition and subtraction operators. In essence, the ln and e operators are complementary.

The nAOA exploration phase is based on the model given in Eq 2. If b1 > MOA, the exploration phase is activated, executing either the ln or the e operator. A second beta-distributed random number (b2) is generated; if b2 < 0.5, the ln operator is executed and the e operator is ignored. If b2 ≥ 0.5, the e operator is executed and the ln operator is ignored. We used a stochastic scaling coefficient (μ) to increase the diversity of the exponential or logarithmic values so as to explore more diverse regions of the search space. This helps nAOA avoid getting stuck in local optima. Fig 3 models how the candidate solutions are updated using the simple arithmetic rule shown in Eq 2. The math optimization probability (MOP) is given in Eq 3.

$$X_{new}(i,j) = \begin{cases} \ln\big(|best(j)| + \epsilon\big) \times \mathrm{MOP} \times \big((UB_j - LB_j) \times \mu + LB_j\big), & b_2 < 0.5 \\ e^{\,best(j)} \times \mathrm{MOP} \times \big((UB_j - LB_j) \times \mu + LB_j\big), & b_2 \ge 0.5 \end{cases} \tag{2}$$

$$\mathrm{MOP}(C_i) = 1 - \left(\frac{C_i}{M\_iter}\right)^{1/\alpha} \tag{3}$$

where X_new(i,j) is the new solution to be computed, best(j) is the jth component of the best solution from the previous iteration, ϵ is a very small positive number, and UB_j and LB_j are, respectively, the upper and lower bounds. μ = 0.5 and α = 5 [11] are, respectively, the stochastic scaling factor and the exploitation accuracy over the iterations.
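A sketch of the exploration update as read from Eq 2 and Eq 3 above. The absolute value inside the logarithm and the clipping of the exponent are numerical guards we add ourselves, since ln is undefined for non-positive arguments and e overflows for large ones; they are not part of the paper's formulation.

```python
import numpy as np

def mop(c_iter, max_iter, alpha=5.0):
    """Math optimization probability (Eq 3)."""
    return 1.0 - (c_iter / max_iter) ** (1.0 / alpha)

def explore(best_j, lb_j, ub_j, b2, mop_val, mu=0.5, eps=1e-12):
    """Exploration update (Eq 2): ln operator if b2 < 0.5, e operator otherwise."""
    scale = (ub_j - lb_j) * mu + lb_j
    if b2 < 0.5:
        # Natural-log operator; |best_j| + eps guards against ln of non-positive values.
        return np.log(np.abs(best_j) + eps) * mop_val * scale
    # Exponential operator; the exponent is clipped to avoid floating-point overflow.
    return np.exp(np.clip(best_j, -50.0, 50.0)) * mop_val * scale
```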

3.4. Exploitation phase

If b1 ≤ MOA, the exploitation phase is activated, executing either the '+' or the '−' operator. The candidate solutions are updated using these two operators, which are modeled in Eq 4. The high density and low dispersion of the values generated by these operators make them ideal for exploitation: they search the neighborhood of already visited regions of the search space for the optimal solution. They are able to converge to the optimal solution, unlike the ln and e operators. In essence, the '+' and '−' operators are complementary.

A third beta-distributed random number (b3) is generated; if b3 < 0.5, the subtraction operator is executed and the addition operator is ignored. If b3 ≥ 0.5, the addition operator is executed and the subtraction operator is ignored. We used the stochastic scaling coefficient (μ) to increase the diversity of the addition or subtraction values so as to search the neighborhood of visited regions more thoroughly. This helps nAOA avoid getting stuck in local optima. Fig 3 shows how the candidate solutions are updated using the simple arithmetic rule shown in Eq 4.

$$X_{new}(i,j) = \begin{cases} best(j) - \mathrm{MOP} \times \big((UB_j - LB_j) \times \mu + LB_j\big), & b_3 < 0.5 \\ best(j) + \mathrm{MOP} \times \big((UB_j - LB_j) \times \mu + LB_j\big), & b_3 \ge 0.5 \end{cases} \tag{4}$$
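The exploitation update of Eq 4, which carries over unchanged from the original AOA [11], might look as follows:

```python
def exploit(best_j, lb_j, ub_j, b3, mop_val, mu=0.5):
    """Exploitation update (Eq 4): subtraction if b3 < 0.5, addition otherwise."""
    step = mop_val * ((ub_j - lb_j) * mu + lb_j)
    return best_j - step if b3 < 0.5 else best_j + step
```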

3.5. Pseudocode and computational complexity of nAOA

In this section, we summarize the proposed improved arithmetic optimization algorithm. The optimization process randomly executes the natural logarithm (ln), exponential (e), addition (+), and subtraction (−) operators. The value of MOA is set between 0.2 and 0.9 and determines which phase the algorithm enters. The algorithm avoids converging prematurely towards a near-optimal solution whenever b1 > MOA and eventually stops after reaching the termination criterion, as shown in the pseudocode below. Algorithm 1 shows the steps of our proposed algorithm, and the flow chart is given in Fig 4.

The main optimization processes of the algorithm are initialization, evaluation of the fitness function, and updating of the candidate solutions. The population size is N; updating the candidate solutions depends on the number of iterations (I) and the number of optimization problem parameters (P). Therefore, the computational complexity of nAOA is O(N × (I × P + 1)).

Algorithm 1. Pseudocode of the nAOA

Set the values for α and μ.
Initialize the candidate solutions' positions using the beta distribution (i = 1,…,N).
Calculate the fitness of each solution.
Determine the best solution so far.
while (t < Maximum Iteration) do
    Compute the MOA (Eq 1) and the MOP (Eq 3).
    for (i = 1 to number of solutions) do
        for (j = 1 to problem dimension) do
            Generate b1, b2, b3 (beta-distributed random values in [0, 1]).
            if b1 > MOA then                    (Exploration phase)
                if b2 < 0.5 then
                    Update the ith solution's position using the ln operator (Eq 2).
                else
                    Update the ith solution's position using the e operator (Eq 2).
                end if
            else                                (Exploitation phase)
                if b3 < 0.5 then
                    Update the ith solution's position using the subtraction operator (S '−') in Eq 4.
                else
                    Update the ith solution's position using the addition operator (A '+') in Eq 4.
                end if
            end if
        end for
    end for
    Calculate the fitness of each solution.
    Determine the best solution so far.
    t = t + 1
end while
Return the best solution (x).
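Putting Algorithm 1 together, the following is a minimal runnable sketch of the main loop, reusing the beta_init, moa, mop, explore, and exploit helpers sketched above; the beta shape parameters and the clipping of agents to the bounds are our own assumptions, not details fixed by the paper.

```python
import numpy as np

def naoa(fitness, dim, lb, ub, n_agents=50, max_iter=1000,
         mu=0.5, alpha=5.0, a=2.0, b=2.0, seed=None):
    """Minimal sketch of the nAOA main loop (Algorithm 1); minimizes `fitness`."""
    rng = np.random.default_rng(seed)
    X = beta_init(n_agents, dim, lb, ub, a, b, seed)          # beta-distributed init
    fit = np.apply_along_axis(fitness, 1, X)
    best, best_fit = X[np.argmin(fit)].copy(), fit.min()

    for t in range(1, max_iter + 1):
        moa_t, mop_t = moa(t, max_iter), mop(t, max_iter, alpha)  # Eq 1 and Eq 3
        for i in range(n_agents):
            for j in range(dim):
                b1, b2, b3 = rng.beta(a, b, size=3)           # beta-distributed randoms
                if b1 > moa_t:                                # exploration: ln / e (Eq 2)
                    X[i, j] = explore(best[j], lb, ub, b2, mop_t, mu)
                else:                                         # exploitation: - / + (Eq 4)
                    X[i, j] = exploit(best[j], lb, ub, b3, mop_t, mu)
            X[i] = np.clip(X[i], lb, ub)                      # keep agents inside the box
        fit = np.apply_along_axis(fitness, 1, X)
        if fit.min() < best_fit:                              # track the best-so-far
            best, best_fit = X[np.argmin(fit)].copy(), fit.min()
    return best, best_fit

# Example on the sphere function (F1):
best_x, best_f = naoa(lambda x: float(np.sum(x**2)), dim=30,
                      lb=-100.0, ub=100.0, n_agents=20, max_iter=200, seed=1)
```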

4. Results and discussion

In this section, we present the results of the experiments conducted to evaluate the performance of nAOA. We used 20 classical benchmark test functions, 10 CEC 2020 test functions, and three engineering problems, and we compared the results of nAOA with those of the original AOA and the following algorithms:

  • Constriction coefficient-based PSO and GSA (CPSOGSA) [22]
  • Gravitational search algorithm (GSA) [7]
  • Particle swarm optimization (PSO) [9]
  • Biogeography-based optimization (BBO) [23]
  • Differential evolution (DE) [24]
  • Ant colony optimization (ACO) [10]
  • Salp swarm algorithm (SSA) [25]
  • Sine cosine algorithm (SCA) [12]
  • Grey wolf optimizer (GWO) [26]

The algorithms and engineering design problems were implemented in MATLAB R2020b and run on Windows 10, with an Intel Core i7-7700 @ 3.60 GHz CPU and 16 GB RAM. The number of function evaluations was set at 50,000, and the number of independent runs was set at 30. The source codes are publicly available from the respective references. For a fair comparison, all the algorithms were executed using 1000 iterations and a population size of 50. The controlling parameters of the algorithms considered are given in Table 1. The test functions used for our experiments are presented in Tables 2 and 3. The results are presented using five performance indicators: best, worst, average, standard deviation (SD), and median. The algorithms are compared using the mean, standard deviation, Friedman ranking test, and Wilcoxon signed-rank test.
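For reference, both statistical comparisons can be reproduced with standard SciPy routines; the sketch below uses hypothetical numbers, not the paper's data:

```python
import numpy as np
from scipy.stats import friedmanchisquare, wilcoxon

# Hypothetical mean results on five benchmark functions for three algorithms.
naoa_res = np.array([1.2e-8, 3.4e-6, 2.0e-9, 2.1e-4, 5.0e-3])
aoa_res  = np.array([4.5e-6, 8.9e-5, 1.2e-7, 6.3e-3, 9.1e-2])
gwo_res  = np.array([2.2e-7, 1.1e-5, 3.0e-8, 9.8e-4, 1.2e-2])

# Friedman rank test across all algorithms (H0: identical result distributions).
stat, p = friedmanchisquare(naoa_res, aoa_res, gwo_res)
print(f"Friedman: stat={stat:.3f}, p={p:.4f}")   # p < 0.05 -> reject H0

# Pairwise Wilcoxon signed-rank test, e.g. nAOA vs the original AOA.
w, wp = wilcoxon(naoa_res, aoa_res)
print(f"Wilcoxon nAOA vs AOA: stat={w:.3f}, p={wp:.4f}")
```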

Table 1. Controlling parameters for algorithms considered.

https://doi.org/10.1371/journal.pone.0255703.t001

4.1. Results for benchmark functions

The numerical efficiency of the proposed nAOA algorithm was tested by solving 30 mathematical optimization problems. The first 20 are classical benchmark functions, while the remaining 10 are composite benchmark functions from the CEC 2020 test suite, frequently used in the optimization literature. The benchmark functions can be divided into unimodal, multimodal, fixed-dimension multimodal, and composite functions. The major difference between the multimodal and fixed-dimension multimodal functions is that the number of design variables can be tuned for the former but is fixed for the latter. The composite test functions, by contrast, make finding the global optimum challenging by shifting it to random positions. Tables 4 and 5 give the results for the classical and composite test functions, respectively.

4.1.1. Evaluation of exploitation capability.

We discuss the exploitation ability of nAOA using the unimodal functions F1–F7, since they have only one global optimum. It can be seen from Table 4 that nAOA outperformed the original AOA and the nine other state-of-the-art algorithms considered for these functions. Although all the algorithms were able to find the optimal solution as their best result, the superiority and stability of nAOA are confirmed by the standard deviation, the mean, and the result of Friedman's test. The mean values were used for Friedman's test, which returned a p-value of 0.00; since this is less than the tolerance level of 0.05, we reject the null hypothesis (that the distributions of the obtained results are the same for all the algorithms considered). The nAOA returned the lowest mean rank, which means it performed best when compared with the ten other algorithms. This result also confirms nAOA's exploitation ability.

4.1.2. Evaluation of exploration capability.

The multimodal functions have many local optima and so provide a good test of the exploration capability of optimizers. Functions F8–F20 are multimodal and fixed-dimension multimodal functions; the number of local optima of each increases exponentially with the number of problem design variables. We can see from Table 4 that nAOA performed well and, in most cases, returned the lowest mean value and standard deviation. The stability of nAOA is also confirmed by the standard deviation and the result of Friedman's test, which returned a p-value of 0.00; since this is less than the tolerance level of 0.05, we reject the null hypothesis (that the distributions of the obtained results are the same for all the algorithms considered). Again, nAOA returned the lowest mean rank in Friedman's test, indicating that it performed best when compared with the ten other algorithms. This also indicates that nAOA has good exploration capability.

4.1.3. Ability to escape from local minima.

We used the composite functions of the CEC 2020 suite to evaluate the ability of nAOA to escape local minima; a proper balance of exploration and exploitation guarantees the avoidance of local optima. The results presented in Table 5 show that nAOA outperformed the original AOA and the nine other algorithms considered for all the functions, returning the lowest mean and standard deviation. The stability of nAOA is also confirmed by the standard deviation and the result of Friedman's test, which returned a p-value of 0.00; since this is less than the tolerance level of 0.05, we reject the null hypothesis (that the distributions of the obtained results are the same for all the algorithms considered). Once more, nAOA returned the lowest mean rank, which means it performed best when compared with the ten other algorithms. This shows that nAOA has a good balance of exploration and exploitation, which can be attributed to the update mechanism used by the proposed algorithm.

4.1.4. Convergence behavior.

The convergence behavior of nAOA is compared with that of the original AOA and the nine other state-of-the-art algorithms in Fig 5. It can be seen that nAOA tends to extensively search the areas most likely to contain the global optimum. For F1–F4, the algorithms did not converge abruptly to the earliest found best solutions; this behavior guarantees exploration and eventual convergence after multiple iterations. We can also see that nAOA converged to the optimal solution for these functions. The second noticeable behavior is that, as the iterations increase, the algorithms tend to accelerate quickly towards the best solution found so far. The adaptive mechanism of the algorithms ensures they look for regions with a high likelihood of containing the optimal solution and, as such, converge rapidly towards the optimum early in the iterations; this behavior is evident in F5, F12, F13, and F18. A third behavior is noticeable in F16, F17, and F19–F20, where convergence occurs towards the final iterations. This can be attributed to the algorithm's efforts to avoid local optima, so the search process continues until the end. The convergence curves for the composite functions F21–F30 clearly confirm nAOA's ability to escape local minima. Overall, nAOA obtained superior and highly competitive results, converging towards the best result for all functions.

Fig 5. Convergence behavior of classical and composite benchmark test functions.

https://doi.org/10.1371/journal.pone.0255703.g005

4.2. Application to engineering problems

Applying optimization techniques to engineering problems is primarily intended to minimize the values of the design parameters and hence the overall design cost. The nAOA was applied to three mechanical engineering design problems: the welded beam design problem (WBD), the compression spring design problem (CSD), and the pressure vessel design problem (PVD). The penalty method was adopted for constraint handling, whereby the algorithm is penalized for any constraint violation; simple scalar penalty functions were used for this experiment.
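A minimal sketch of such a scalar penalty, assuming constraints in the standard g(x) ≤ 0 form; the penalty weight rho is an illustrative choice, not a value reported in the paper:

```python
def penalized(objective, constraints, x, rho=1e6):
    """Static scalar penalty: f(x) plus rho times the squared constraint violations."""
    violation = sum(max(0.0, g(x)) ** 2 for g in constraints)
    return objective(x) + rho * violation
```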

The results obtained by applying nAOA to the engineering problems were compared with those of 10 other metaheuristic algorithms: CPSOGSA, GSA, PSO, BBO, DE, ACO, GWO, SCA, SSA, and AOA. For a fair comparison, the experimental setup was the same as for the benchmark functions: MATLAB R2020b on Windows 10 (Intel Core i7-7700 @ 3.60 GHz CPU, 16 GB RAM), 50,000 function evaluations, 30 independent runs, 1000 iterations, and a population size of 50, with the source codes publicly available from the respective references. The results for each engineering problem are presented using five performance indicators: best, worst, average, standard deviation (SD), and median. In addition, the algorithms are compared using the mean, standard deviation, and Wilcoxon signed-rank test.

4.2.1. The welded beam design problem.

The welded beam design problem is a minimization problem in which we used nAOA, along with the 10 other algorithms, to reduce the manufacturing cost of the design [27]. Fig 6 gives an illustration of the WBD. The WBD constraints are the shear stress (τ), the beam bending stress (θ), the bar buckling load (Pc), the beam end deflection (δ), and the side constraints.

The design variables for WBD are the weld thickness h = x1, the length of the welded joint l = x2, the beam height t = x3, and the beam thickness b = x4, so that x = (x1, x2, x3, x4) = (h, l, t, b).

The WBD problem is formulated mathematically as follows [27]:

$$\min f(x) = 1.10471\,x_1^2 x_2 + 0.04811\,x_3 x_4 (14.0 + x_2) \tag{5}$$

subject to

$$g_1(x) = \tau(x) - \tau_{max} \le 0,\quad g_2(x) = \theta(x) - \theta_{max} \le 0,\quad g_3(x) = x_1 - x_4 \le 0,$$
$$g_4(x) = 0.10471\,x_1^2 + 0.04811\,x_3 x_4 (14.0 + x_2) - 5.0 \le 0,\quad g_5(x) = 0.125 - x_1 \le 0,$$
$$g_6(x) = \delta(x) - \delta_{max} \le 0,\quad g_7(x) = P - P_c(x) \le 0.$$

The intervals for the design variables are 0.1 ≤ x1 ≤ 2, 0.1 ≤ x2 ≤ 10, 0.1 ≤ x3 ≤ 10, and 0.1 ≤ x4 ≤ 2, where τ(x), θ(x), δ(x), and Pc(x) are the standard shear stress, bending stress, deflection, and buckling load expressions given in [27].

The parameters for WBD are set as follows: P = 6000 lb, L = 14 in, E = 30 × 10⁶ psi, G = 12 × 10⁶ psi, τ_max = 13,600 psi, θ_max = 30,000 psi, and δ_max = 0.25 in.

The results of the experiment conducted for WBD are given in Table 6, which compares nAOA with the 10 other algorithms (GSA, PSO, BBO, DE, ACO, CPSOGSA, GWO, SCA, SSA, and AOA). The results indicate that nAOA outperformed the original AOA on the cost function of the WBD problem, returning smaller mean and SD values than AOA. Looking at the values of the design variables h, t, and b, we see that nAOA returned optimal values for all three. However, the overall best-performing algorithm in terms of the average and standard deviation is GWO. Nevertheless, our proposed algorithm was very competitive: it returned the same best cost value as GWO and came second to GWO for the mean and standard deviation. The Wilcoxon signed-rank test indicates that the PSO, BBO, DE, and ACO simulation results are not statistically significant because they have p-values greater than 0.05, whereas those of SSA, SCA, GWO, nAOA, AOA, CPSOGSA, and GSA are significant because they have p-values less than 0.05.

The convergence curves at the 100th and 1000th iterations for nAOA and the 10 other algorithms used in the comparative analysis are shown in Fig 7. We used these two curves to evaluate the algorithms' behavior at both the early and later stages of the iterations. The figure shows that nAOA has regular values at the start of the iterations, as do the other algorithms. Since the algorithms were all able to find good results early in the iteration process, they converged towards the best result and remained stable until the end of the optimization iterations. The similar results at the different iteration phases show insensitivity to the initialization scheme used for the initial candidate solutions. On the one hand, the convergence curves of nAOA, AOA, GSA, GWO, SCA, SSA, and CPSOGSA lie close to each other, as they have nearly equal values of the cost function. On the other hand, the convergence curves of PSO, BBO, and ACO lie together at the top of the figure because they all have large values for the average, SD, and median, which translate into sub-optimal results for the cost. DE stands alone in the middle of the curves: although it returned a suboptimal result, it was still better than PSO, BBO, and ACO.

Fig 7. Convergence curves for WBD.

Note: results at the 100th and 1000th iterations.

https://doi.org/10.1371/journal.pone.0255703.g007

4.2.2. Compression spring design problem.

The compression spring design problem (CSD), shown in Fig 8, is a continuous constrained optimization problem. The goal is to minimize the volume V of a coil spring under a constant tension/compression load. There are three design variables:

  • the number of the spring's active coils P = x1 ∈ [2, 15]
  • the winding diameter D = x2 ∈ [0.25, 1.3]
  • the wire diameter d = x3 ∈ [0.05, 2]

The mathematical formulation of the CSD problem, in the variable order given above, is as follows [29]:

$$\min f(x) = (x_1 + 2)\,x_2\,x_3^2 \tag{6}$$

subject to

$$g_1(x) = 1 - \frac{x_2^3 x_1}{71785\,x_3^4} \le 0,\qquad g_2(x) = \frac{4x_2^2 - x_3 x_2}{12566\,(x_2 x_3^3 - x_3^4)} + \frac{1}{5108\,x_3^2} - 1 \le 0,$$
$$g_3(x) = 1 - \frac{140.45\,x_3}{x_2^2 x_1} \le 0,\qquad g_4(x) = \frac{x_2 + x_3}{1.5} - 1 \le 0.$$

The intervals for the design variables are 2 ≤ x1 ≤ 15, 0.25 ≤ x2 ≤ 1.3, and 0.05 ≤ x3 ≤ 2.
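In code, the CSD fitness used by the optimizer might combine Eq 6 and the constraints above with the penalty helper sketched earlier; the variable order x = [P, D, d] follows the paper's bullets:

```python
def csd_cost(x):
    """CSD objective (Eq 6) with x = [active coils P, winding diameter D, wire diameter d]."""
    x1, x2, x3 = x
    return (x1 + 2.0) * x2 * x3**2

csd_constraints = [
    lambda x: 1.0 - (x[1]**3 * x[0]) / (71785.0 * x[2]**4),
    lambda x: (4.0 * x[1]**2 - x[2] * x[1]) / (12566.0 * (x[1] * x[2]**3 - x[2]**4))
              + 1.0 / (5108.0 * x[2]**2) - 1.0,
    lambda x: 1.0 - (140.45 * x[2]) / (x[1]**2 * x[0]),
    lambda x: (x[1] + x[2]) / 1.5 - 1.0,
]

# Penalized fitness, suitable as the `fitness` argument of the nAOA sketch.
csd_fitness = lambda x: penalized(csd_cost, csd_constraints, x)
```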

The results used for our comparative analysis of the CSD are shown in Table 7 for nAOA and the 10 other algorithms (GSA, PSO, BBO, DE, ACO, CPSOGSA, GWO, SCA, SSA, and AOA). It can be seen that nAOA outperformed the original AOA on the cost function of the CSD problem, returning smaller values for both the mean and the SD; the same can be observed for the design variables d, D, and P. However, the overall best-performing algorithm in terms of the best, average, and standard deviation is again GWO. Once again, our proposed algorithm was very competitive, returning the same best cost value as GWO and coming second to GWO for the mean and standard deviation. PSO, BBO, ACO, and DE have large values for the average, SD, and median, translating into sub-optimal results for the cost. The Wilcoxon signed-rank test indicates that the PSO, BBO, DE, and ACO simulation results are not statistically significant because they have p-values greater than 0.05, whereas the results of SSA, SCA, GWO, nAOA, AOA, CPSOGSA, and GSA are significant because they have p-values less than 0.05.

The convergence curves for nAOA and the 10 other algorithms used in the comparative analysis of the compression spring design are shown in Fig 9 at the 100th and 1000th iterations, in order to evaluate the algorithms' behavior at both the early and later stages of the iterations. The curves of all the algorithms show irregular values at the start of the iterations, indicating that nAOA behaves similarly to the other algorithms. Since the algorithms were unable to find good results early in the iteration process, they searched the space for the optimal solution and were then able to converge towards the best result and remain stable until the end of the optimization iterations. The dissimilar behavior of the curves at the different iteration phases shows sensitivity to the initialization scheme used for the initial candidate solutions. Moreover, we see the efficient performance of nAOA, AOA, GSA, GWO, SCA, and SSA, whose curves lie together at the bottom of the figure. By contrast, we note the sub-optimal performance of PSO, BBO, GA, and DE, whose curves lie together at the top of the figure because of their large cost function values.

Fig 9. Convergence curves of CSD.

Note: results at the 100th and 1000th iterations.

https://doi.org/10.1371/journal.pone.0255703.g009

4.2.3. Pressure vessel design problem.

A pressure vessel design (PVD) model is shown in Fig 10. The four decision variables are defined as follows: x1 is the thickness of the pressure vessel, Ts; x2 is the thickness of the head, Th; x3 is the inner radius of the vessel, R; and x4 is the length of the cylindrical section of the vessel, excluding the head, L.

The PVD can be formulated mathematically as follows [30]:

$$\min f(x) = 0.6224\,x_1 x_3 x_4 + 1.7781\,x_2 x_3^2 + 3.1661\,x_1^2 x_4 + 19.84\,x_1^2 x_3 \tag{7}$$

subject to

$$g_1(x) = -x_1 + 0.0193\,x_3 \le 0,\qquad g_2(x) = -x_2 + 0.00954\,x_3 \le 0,$$
$$g_3(x) = -\pi x_3^2 x_4 - \frac{4}{3}\pi x_3^3 + 1{,}296{,}000 \le 0,\qquad g_4(x) = x_4 - 240 \le 0.$$

The intervals of the design variables are 0 ≤ x1 ≤ 99, 0 ≤ x2 ≤ 99, 10 ≤ x3 ≤ 200, and 10 ≤ x4 ≤ 200.
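The PVD objective and constraints translate the same way into a penalized fitness, with x = [Ts, Th, R, L]:

```python
import numpy as np

def pvd_cost(x):
    """PVD objective (Eq 7) with x = [Ts, Th, R, L]."""
    x1, x2, x3, x4 = x
    return (0.6224 * x1 * x3 * x4 + 1.7781 * x2 * x3**2
            + 3.1661 * x1**2 * x4 + 19.84 * x1**2 * x3)

pvd_constraints = [
    lambda x: -x[0] + 0.0193 * x[2],
    lambda x: -x[1] + 0.00954 * x[2],
    lambda x: -np.pi * x[2]**2 * x[3] - (4.0 / 3.0) * np.pi * x[2]**3 + 1296000.0,
    lambda x: x[3] - 240.0,
]

pvd_fitness = lambda x: penalized(pvd_cost, pvd_constraints, x)
```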

The results of the experiments for PVD are shown in Table 8, which presents the comparative analysis of all the optimization techniques for the PVD problem: nAOA and the 10 other algorithms (GSA, PSO, BBO, DE, ACO, CPSOGSA, GWO, SCA, SSA, and AOA). The results show that nAOA outperformed the original AOA on the cost function of the PVD problem, returning smaller values for the mean and SD; the same can be observed for the design variables Ts, Th, R, and L. However, the overall best-performing algorithm in terms of the best, average, and standard deviation is again GWO. Our proposed algorithm was nevertheless very competitive, as it returned the same best cost value as GWO and came second to GWO for the mean and standard deviation. PSO, BBO, ACO, and DE have large values for the average, SD, and median, indicating sub-optimal results for the cost. The Wilcoxon signed-rank test indicates that the PSO, BBO, DE, and ACO simulation results are not statistically significant because they have p-values greater than 0.05, whereas the results of SSA, SCA, GWO, nAOA, AOA, CPSOGSA, and GSA are significant because they have p-values less than 0.05.

The comparative analysis at the 100th and 1000th iterations for nAOA and the 10 other algorithms is shown by the convergence curves in Fig 11; the pair of curves evaluates the algorithms' behavior at the early and later stages of the iterations. The figure shows that nAOA has irregular values at the start of the iterations, as do the other algorithms. The irregular values imply that the algorithms were unable to find good results early on, although, as the iterations progressed, the algorithms converged towards the best result and remained stable until the end of the optimization iterations. This dissimilarity at the different iteration phases indicates the algorithms' sensitivity to the initialization scheme used for the initial candidate solutions. As can be seen, the convergence curves of nAOA, AOA, GSA, GWO, SCA, SSA, and CPSOGSA lie close to each other at the bottom of the figure because their values of the cost function are close; their position at the bottom of the figure indicates their efficient performance. The convergence curves of PSO, BBO, DE, and ACO also lie together, but above those of the rest of the algorithms, where their large values for the average, SD, and median indicate sub-optimal results for the cost.

Fig 11. Convergence curves for PVD.

Note: results at the 100th and 1000th iterations.

https://doi.org/10.1371/journal.pone.0255703.g011

4.3. Overall discussion of the simulation results

This section gives an overall analysis of the simulation results of all 11 algorithms used in our experiments: the proposed nAOA, the original AOA, and nine other state-of-the-art algorithms (CPSOGSA, GSA, PSO, BBO, DE, ACO, GWO, SCA, and SSA). The best-performing algorithm overall is GWO, because it returned the best values for the WBD, CSD, and PVD problems, while nAOA provided near-optimal values for the fitness functions of WBD, CSD, and PVD (second only to GWO). In addition, the simulation results on the classical and composite (CEC 2020) benchmark functions convey that nAOA performed strongly, as can be seen from the statistical results for the average and SD values, which are very close to the global minimum.

The WBD problem results, shown in Table 6, indicate that nAOA, GWO, and CPSOGSA returned best results between 1.6957 and 1.6976, near the optimal cost value for WBD (1.69). Their respective average results are 1.7731, 1.6976, and 1.8545, which are also close to the global optimum for WBD. GSA, AOA, and SSA have an average value of 2.8718, far from the best value (1.69). The algorithms PSO, BBO, GA, and DE all showed sub-optimal results.

Furthermore, Table 7 shows the results for the CSD problem. These results convey that nAOA returned a best result of 3.6619, the same as that returned by GWO, SSA, and CPSOGSA. This result is better than those of GSA (3.7502), PSO (409.7), BBO (409.7), GA (409.7), DE (409.7), and ACO (209.9). The average results and standard deviations of nAOA, GWO, SSA, and CPSOGSA show that these algorithms could find near-optimal results early in the iteration process and quickly converge towards their best results. The best results of the remaining algorithms were still not as good as those of nAOA, GWO, SSA, and CPSOGSA.

The results shown in Table 8 indicate that, for the PVD problem, nAOA, GWO, and CPSOGSA all returned 2302.6 as their best result. Their respective average results are 3303.1, 2556.8, and 4113.4. The performance of nAOA is second only to that of GWO, which performed best. The algorithms GSA, AOA, and SSA have average values between 3858.4 and 4440.8, which are not close to that of the best-performing algorithm. The algorithms PSO, BBO, GA, and DE showed sub-optimal results.

This overall analysis of our experimental results conveys that nAOA showed promising results in optimizing the classical and composite (CEC 2020) benchmark functions: it clearly outperformed the original AOA and was very competitive with the other nine algorithms. The same conclusion holds for optimizing the fitness functions and design parameters of the three mechanical engineering problems considered here. We also observe that nAOA, AOA, GSA, GWO, SCA, SSA, and CPSOGSA provide better statistical results for the fitness functions of the mechanical engineering design problems, whereas the performance of PSO, BBO, GA, and ACO is suboptimal for all three engineering benchmarks.

5. Conclusion and future directions

In this paper, we proposed an improved nAOA algorithm that uses the high-density values that the natural logarithm and exponential operators can generate to enhance the exploratory ability of the AOA. The addition and subtraction operators still carry out the exploitation phase. We tested the performance of the nAOA on 30 benchmark functions and three engineering design benchmarks. The nAOA showed efficient performance on the benchmark functions and was second only to GWO for the WBD, CSD, and PVD problems.

This research opens future research directions; it will be interesting to see how researchers can overcome the drawbacks of premature convergence and sensitivity to randomization. Researchers could use the stochasticity, ergodicity, and complex nonlinear motion properties of chaotic maps to overcome these drawbacks. In addition, nAOA could be applied to many other real-world problems, such as economic load dispatch problems in power systems. Furthermore, nAOA has considerable potential for hybridization with other state-of-the-art algorithms.

References

  1. Ezugwu A. E., "Nature-inspired metaheuristic techniques for automatic clustering: a survey and performance study," SN Applied Sciences, 2(2), 273, 2020.
  2. Ezugwu A. E., Shukla A. K., Nath R., Akinyelu A. A., Agushaka J. O., Chiroma H., et al., "Metaheuristics: a comprehensive overview and classification along with bibliometric analysis," Artificial Intelligence Review, 1–80, 2021.
  3. Holland J. H., Adaptation in Natural and Artificial Systems. University of Michigan Press, 1975 (second edition: MIT Press, 1992).
  4. Uymaz S. A., Tezel G. and Yel E., "Artificial algae algorithm (AAA) for nonlinear global optimization," Applied Soft Computing, 31, 153–171, 2015.
  5. Rechenberg I., "Evolutionary strategies. In simulation methods in medicine and biology," Springer, Berlin, Heidelberg, 83–114, 1978.
  6. Kirkpatrick S., Gelatt C. D. and Vecchi M. P., "Optimization by simulated annealing," Science, 220(4598), 671–680, 1983. pmid:17813860
  7. Rashedi E., Nezamabadi-Pour H. and Saryazdi S., "GSA: a gravitational search algorithm," Information Sciences, 179(13), 2232–2248, 2009.
  8. Alatas B., "ACROA: artificial chemical reaction optimization algorithm for global optimization," Expert Systems with Applications, 38(10), 13170–13180, 2011.
  9. Kennedy J. and Eberhart R., "Particle swarm optimization," in Proceedings of ICNN'95 International Conference on Neural Networks (Vol. 4), 1995.
  10. Dorigo M. and Di Caro G., "Ant colony optimization: a new meta-heuristic," in Proceedings of the 1999 Congress on Evolutionary Computation (CEC99) (Cat. No. 99TH8406) (Vol. 2), 1999.
  11. Abualigah L., Diabat A., Mirjalili S., Abd Elaziz M. and Gandomi A. H., "The arithmetic optimization algorithm," Computer Methods in Applied Mechanics and Engineering, 376, 113609, 2021.
  12. Mirjalili S., "SCA: a sine cosine algorithm for solving optimization problems," Knowledge-Based Systems, 96, 120–133, 2016.
  13. Gürses D., Bureerat S., Sait S. M. and Yıldız A. R., "Comparison of the arithmetic optimization algorithm, the slime mold optimization algorithm, the marine predators algorithm, the salp swarm algorithm for real-world engineering applications," Materials Testing, 63(5), 448–452, 2021.
  14. Khatir S., Tiachacht S., Le Thanh C., Ghandourah E., Mirjalili S. and Wahab M. A., "An improved Artificial Neural Network using Arithmetic Optimization Algorithm for damage assessment in FGM composite plates," Composite Structures, 114287, 2021.
  15. Premkumar M., Jangir P., Kumar B. S., Sowmya R., Alhelou H. H., Abualigah L., et al., "A New Arithmetic Optimization Algorithm for Solving Real-World Multiobjective CEC-2021 Constrained Optimization Problems: Diversity Analysis and Validations," IEEE Access, 2021.
  16. Xu Y. P., Tan J. W., Zhu D. J., Ouyang P. and Taheri B., "Model identification of the Proton Exchange Membrane Fuel Cells by Extreme Learning Machine and a developed version of Arithmetic Optimization Algorithm," Energy Reports, 7, 2332–2342, 2021.
  17. Agushaka J. and Ezugwu A., "Influence of Initializing Krill Herd Algorithm with Low-Discrepancy Sequences," IEEE Access, 8, 210886–210909, 2020.
  18. Covic N. and Lacevic B., "Wingsuit Flying Search: A Novel Global Optimization Algorithm," IEEE Access, 8, 53883–53900, 2020.
  19. Ivorra B., Mohammadi B. and Ramos A. M., "A multi-layer line search method to improve the initialization of optimization algorithms," European Journal of Operational Research, 247(3), 711–720, 2015.
  20. Pant M., Thangaraj R., Grosan C. and Abraham A., "Improved particle swarm optimization with low-discrepancy sequences," in 2008 IEEE Congress on Evolutionary Computation (IEEE World Congress on Computational Intelligence), 2008.
  21. Cunnington S., The Story of Arithmetic: A Short History of Its Origin and Development, London: S. Sonnenschein, 1904.
  22. Rather S. and Bala P., "Hybridization of constriction coefficient based particle swarm optimization and gravitational search algorithm for function optimization," in International Conference on Advances in Electronics, Electrical, and Computational Intelligence (ICAEEC 2019), 2019.
  23. Simon D., "Biogeography-based optimization," IEEE Transactions on Evolutionary Computation, 12(6), 702–713, 2008.
  24. Storn R. and Price K., "Differential evolution: a simple and efficient heuristic for global optimization over continuous spaces," Journal of Global Optimization, 11(4), 341–359, 1997.
  25. Mirjalili S., Gandomi A., Mirjalili S., Saremi S., Faris H. and Mirjalili S., "Salp swarm algorithm: a bio-inspired optimizer for engineering design problems," Advances in Engineering Software, 114, 163–191, 2017.
  26. Mirjalili S., Mirjalili S. M. and Lewis A., "Grey wolf optimizer," Advances in Engineering Software, 69, 46–61, 2014.
  27. Coello C., "Use of a self-adaptive penalty approach for engineering optimization problems," Computers in Industry, 41(2), 113–127, 2000.
  28. Ragsdell K. and Phillips D., "Optimal design of a class of welded structures using geometric programming," ASME Journal of Engineering for Industry, 98(3), 1021–1025, 1976.
  29. Kazemzadeh-Parsi M. J., "A modified firefly algorithm for engineering design optimization problems," Iranian Journal of Science and Technology, Transactions of Mechanical Engineering, 38(M2), 403, 2014.
  30. Sandgren E., "NIDP in mechanical design optimization," Journal of Mechanical Design, 112(2), 223–229, 1990.