Abstract
This paper analyzes the shortcomings of the traditional Whale Optimization Algorithm (WOA), mainly including the tendency to fall into local optima, slow convergence speed, and insufficient global search ability for high-dimensional and complex optimization problems. An improved Whale Optimization Algorithm (GWOA) is proposed to overcome these issues. By integrating several improvement strategies, such as adaptive parameter adjustment, enhanced prey encircling, and sine-cosine search strategies, GWOA significantly enhances global search ability and convergence efficiency. However, GWOA increases computational complexity, which may lead to longer computation times when handling large-scale problems. It may also fall into local optima in high-dimensional cases. Several experiments were conducted to verify the effectiveness of GWOA. First, 23 classic benchmark functions were tested, covering unimodal, multimodal, and compositional optimization problems. GWOA was compared with other basic metaheuristic algorithms, high-performing WOA variants, and recently proposed algorithms. Then, a comparative scalability experiment was performed on GWOA. The experimental results showed that GWOA achieved better convergence speed and solution accuracy than other algorithms in most test functions, especially in multimodal and compositional optimization problems, with an Overall Efficiency (OE) value of 74.46%. In engineering optimization problems, such as pressure vessel design and spring design, GWOA effectively reduced costs and met constraints, demonstrating stronger stability and optimization ability. In conclusion, GWOA significantly improves the global search ability, convergence speed, and solution stability through multi-strategy integration. It shows great potential in solving complex optimization problems and provides an efficient tool for engineering optimization applications.
Citation: Gu Y, Wei J, Li Z, Lu B, Pan S, Cheong N (2025) GWOA: A multi-strategy enhanced whale optimization algorithm for engineering design optimization. PLoS One 20(9): e0322494. https://doi.org/10.1371/journal.pone.0322494
Editor: Seyedali Mirjalili, Torrens University Australia, AUSTRALIA
Received: January 7, 2025; Accepted: March 22, 2025; Published: September 3, 2025
Copyright: © 2025 Gu et al. This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
Data Availability: All relevant data are within the manuscript and its Supporting Information files.
Funding: This study was supported by grants from the Macao Polytechnic University (MPU Grant no. RP/FCA-06/2022 to NC, YG, and JW) and from the Macao Science and Technology Development Fund (FDCT Grant no. 0044/2023/ITP2 to NC, YG, and JW).
Competing interests: The authors have declared that no competing interests exist.
1 Introduction
Metaheuristic algorithms are computational methods used to solve optimization problems, especially large-scale, complex, and high-dimensional ones. Unlike traditional exact algorithms, metaheuristic algorithms do not rely on a specific mathematical model or gradient information of the objective function. Instead, the problem is solved by mimicking the ’heuristics’ of natural or social phenomena. Metaheuristic algorithms have powerful global search capabilities, exploring a wide range of solutions to avoid local optima. The core idea is to design an efficient, practical algorithm that provides high-quality solutions in most cases [1]. Among these high-quality solutions, some may approach the optimal solution. Although there is no guarantee of optimality, it is possible to find approximate optimal solutions in a large search space. Thus, these algorithms are able to find a balance between global and local search. Combining stochasticity with heuristic strategies leads to improved solution efficiency and optimization performance. Metaheuristic algorithms are widely used in a variety of optimization problems, including those with irregular, discrete, or noisy characteristics. There are many metaheuristic algorithms, with typical examples including the Genetic Algorithm (GA) [2], Artificial Bee Colony (ABC) [3], Particle Swarm Optimization (PSO) [4], Dung Beetle Optimization (DBO) [5], Grey Wolf Optimizer (GWO) [6], Harris Hawk Optimization (HHO) [7], Sine-Cosine Algorithm (SCA) [8], Whale Optimization Algorithm (WOA) [9], and others. When solving real-world problems, both modeling and optimization are usually required. Modeling evaluates the objective function by using a correct mathematical model, while optimization is used to achieve the optimal configuration of the design parameters. Therefore, the key to optimization is the algorithm itself. Based on this, this paper will focus on the improvement of optimization algorithms.
Metaheuristic algorithms originated in the 1970s. The earliest algorithms, such as Simulated Annealing (SA) [10] and GA, were inspired by nature's annealing process and evolution. They have powerful global search capabilities but incur large computational costs. From the 1990s to the 2000s, population-based algorithms like Particle Swarm Optimization (PSO) and Ant Colony Optimization (ACO) [11] were proposed and gradually applied to dynamic optimization problems. In the 2010s, with the rise of deep learning and big data, newer algorithms such as Grey Wolf Optimization (GWO), the Bat Algorithm (BA) [12], and the Elk Herd Optimizer (EHO) [13] emerged. The application of hybrid algorithms has also increased, improving the performance of the algorithms in the search space. Currently, researchers are focused on improving newer algorithms and experimenting with hybrid approaches. Chen et al. [14] proposed a method for trajectory planning and time optimization of woodworking manipulators using 3-5-3 segmented polynomial interpolation and a modified Particle Swarm Optimization algorithm (GoldS-PSO). The method significantly improves operational efficiency, stability, and smoothness in motion control. Yang et al. proposed a structural optimization method for lightweight adhesive modules using simulation, surrogate models, and the Dung Beetle Optimizer (DBO) [15]. A weight reduction of 11.7% was achieved while maintaining adhesive stability and load capacity. Nadimi-Shahraki et al. proposed an Improved Grey Wolf Optimization (I-GWO) algorithm [16]. The problems of the traditional GWO algorithm in terms of population diversity, the balance between local and global search, and premature convergence were effectively solved by introducing a dimension-learning-based hunting strategy (DLH). Wang et al.
proposed an improved hybrid optimization algorithm (IHAOHHO), which enhances the global exploration and local exploitation capabilities of the AO and HHO algorithms by combining nonlinear escape energy parameters and random opposition-based learning strategies [17]. Tawhid et al. proposed a new and efficient multi-objective optimization algorithm, the Multi-Objective Sine-Cosine Algorithm (MO-SCA), which obtains different levels of non-domination and maintains the diversity of solution sets through elitist non-dominated sorting and crowding distance methods [18]. Chakraborty et al. proposed an improved Whale Optimization Algorithm (WOAmM) by combining the mutualism phase of the Symbiotic Organisms Search algorithm with WOA [19]. The problem of premature convergence is mitigated so that the search space can be explored more efficiently while avoiding the overuse of computational resources. Wei et al. proposed an Adaptive Position Updating PSO (IPSO), which integrates inertia weight and an adaptive position update strategy to improve the convergence speed of PSO and avoid premature convergence [20]. However, hybrid optimization algorithms risk overfitting, as they may overly focus on task-specific engineering constraints. This can result in good performance on specific tasks but poor generalization. Additionally, hybrid algorithms often require more computational resources, which can lead to imbalanced resource utilization in large-scale problems.
The Whale Optimization Algorithm (WOA) studied in this paper is a metaheuristic optimization algorithm based on group behavior. It is inspired by the unique feeding behavior of humpback whales and was proposed by S. Mirjalili et al. in 2016 [21]. Its core is to achieve the global optimization objective by simulating the predatory behavior of humpback whales, including encircling the prey, spiraling around the prey, and searching for it. WOA is widely used in function optimization, engineering problems, and image processing due to its simplicity and high efficiency. However, it often gets trapped in local optima when dealing with complex problems. Especially in large-scale search spaces, the algorithm may not be able to effectively jump out of a local optimal point, resulting in unsatisfactory final results. Additionally, WOA converges slowly in some cases, especially for high-dimensional problems; its convergence is often slower than that of other optimization methods such as Particle Swarm Optimization (PSO) or Genetic Algorithms (GA). As iterations progress, the population diversity in WOA may gradually diminish, limiting its ability to explore potentially better solutions and degrading the optimization results. The balance between exploration and convergence is also problematic: although the algorithm performs a broad global search during the initial stages, premature convergence in later iterations tends to limit its ability to keep discovering better solutions. For high-dimensional problems, WOA's performance is notably worse due to low search efficiency and high time complexity. To address these limitations, several WOA variants have been proposed. For example, in 2019, Chen et al.
proposed an improved whale optimization algorithm (BWOA) by introducing Lévy flight (LF) and a chaotic local search strategy (CLS) into the whale optimization algorithm (WOA), improving the balance between the global exploration capability and the local search capability of the traditional WOA. It was validated on classical engineering design optimization problems such as tension/compression springs, pressure vessel design, and three-bar truss design [22]. In 2021, Chakraborty et al. proposed the improved WOAmM algorithm by combining the mutualism phase of the Symbiotic Organisms Search (SOS) with the Whale Optimization Algorithm (WOA) [19]. This enables it to enhance the exploration of the search space, thus avoiding the waste of computational resources due to overexploitation. In 2022, Yang et al. proposed the Multi-Strategy Whale Optimization Algorithm (MSWOA) by introducing chaotic mapping, adaptive weights, a Lévy flight mechanism, and an evolutionary population dynamics mechanism, and combining it with the Semi-Supervised Extreme Learning Machine (SSELM) [23]. This addresses the whale optimization algorithm's tendency to fall into local optimal solutions and its slow convergence. Optimizing the parameter selection significantly improves the classification accuracy and performance in engineering applications. In the same year, Chakraborty et al. proposed an Improved Whale Optimization Algorithm (ImWOA) by adjusting the random selection process in the search-for-prey phase and introducing a cooperative hunting strategy for whales [24]. It divides its iterative process into two phases: exploration and exploitation. Thus, it increases the diversity of the solutions, avoids local optimal solutions, and improves the accuracy and convergence speed of the solution.
However, the variants of algorithms proposed by researchers generally increase the difficulty of algorithm design, implementation, and parameter tuning. Since each variant has different hyperparameters, selecting and adjusting them becomes challenging. Additionally, improper parameter settings for a specific sub-algorithm may affect the overall performance. Furthermore, the convergence of variant algorithms may become unstable, especially when the characteristics of different algorithms vary greatly. Some algorithms may converge quickly, while others may lead to local optima or slow convergence, thus reducing overall efficiency.
To address this, this paper proposes the GWOA, which accelerates the convergence process by providing a high-quality starting point through the Good Nodes Set initialization. The Growth-based Encircling Prey strategy enhances the balance and stability of the search; the Synergetic Search-for-Prey strategy prevents over-reliance on a single individual, improving the systematic nature of the search process; the Adaptive Sine-Cosine strategy effectively regulates the relationship between global and local searches, reducing the risk of local optima; and the improved Cauchy Mutation strategy based on DE further improves the diversity of the search, preventing the algorithm from getting trapped in local optima. Through the integration of these strategies, the improved GWOA demonstrates stronger robustness, global search capability, and convergence speed in various complex optimization problems. The introduction of adaptive mechanisms and dynamic adjustments reduces dependence on parameters, making the algorithm more efficient and easier to implement across various applications.
2 Related research on engineering design optimization challenges
Engineering optimization aims to find an optimal solution under specific constraints, typically by minimizing or maximizing an objective function such as cost, time, resource consumption, or performance. These problems often involve discontinuous, non-differentiable functions, non-convex surfaces, multimodal landscapes, or noisy functions, making them difficult to solve efficiently with traditional methods. Additionally, deterministic methods have high computational costs, especially for large solution spaces requiring exhaustive searches. In practical optimization, near-optimal solutions are generally acceptable, as engineers often prefer to quickly find a satisfactory solution rather than spend excessive time pursuing a marginally better one [25]. With advances in technology and computational power, optimization algorithms have become indispensable tools in various engineering fields. Engineering optimization problems can be classified into linear, nonlinear, integer, compositional, multi-objective, and dynamic categories. This paper focuses on enhancing the Whale Optimization Algorithm (WOA) for nonlinear engineering optimization problems, such as Pressure Vessel design [26], Gear Train design [27], Corrugated Bulkhead design [28] and Speed Reducer design [29].
In recent years, engineering design optimization of Pressure Vessels, Tension/Compression Springs, Piston Levers and Speed Reducers has shifted from traditional mathematical models to simulation-based intelligent optimization techniques. Metaheuristic algorithms like Grey Wolf Optimization (GWO), Whale Optimization (WOA), and Particle Swarm Optimization (PSO) are widely used in these areas. These algorithms do not rely on precise mathematical models and can handle complex design spaces, performing well in multi-objective and multi-constraint optimization problems. Jun et al. proposed the Cauchy Grey Wolf Optimization Algorithm (CGWO), which improves convergence speed, solution accuracy, and robustness through techniques like Cauchy distribution initialization, dynamic inertia weighting, and a greedy strategy [30]. In terms of global search capability, the balance of exploration and exploitation, and the avoidance of early convergence, it outperforms traditional methods in engineering applications. Eleonora et al. optimized the design of tension/compression springs using a variety of popular swarm intelligence algorithms, aiming to minimize the weight of the springs [31]. Hu et al. proposed an energy-feedback suspension system that combines magnetorheological dampers (MRDs) with a whale optimization algorithm-proportional integral differential (WOA-PID) control algorithm, and demonstrated that the system possesses a certain energy recovery capability in suspension control [32]. Zhou et al. proposed an improved whale optimization algorithm (LWOA) based on Lévy flight [33]. By enhancing the ability to jump out of local optima, its superiority over WOA was verified on the Speed Reducer problem. This paper proposes an improved algorithm, GWOA, to address WOA's limitations.
Meanwhile, this paper will verify that GWOA outperforms the original WOA and other algorithms in terms of convergence speed, robustness, and stability in the above four engineering optimization problems.
3 Whale optimization algorithm
The Whale Optimization Algorithm (WOA) is a metaheuristic algorithm proposed by Mirjalili et al. in 2016, inspired by the hunting behavior of humpback whales [21]. The WOA simulates the spiral updating strategy and encircling prey strategy exhibited by humpback whales during predation.
3.1 Initialization
During initialization, the population distribution and parameter settings are defined. The population is initialized by setting the individual positions Xi, where each individual represents a solution and the population consists of candidate solutions. The fitness (objective function value) for each individual is then calculated, and the individual with the best fitness becomes the initial global best solution X*. Each whale's position is randomly initialized in the search space. For the j-th dimension of the search space, the initial position of the i-th individual is given below:

Xi,j = lb + Rand × (ub − lb)
where ub and lb are the upper and lower bounds of the decision variables; and Rand is a random value between 0 and 1.
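This initialization can be sketched in a few lines of NumPy. The function name and array layout (one row per whale) are illustrative choices, not the paper's:

```python
import numpy as np

def init_population(n_agents, dim, lb, ub, seed=None):
    """Random initialization: X_i = lb + Rand * (ub - lb), Rand ~ U(0, 1)."""
    rng = np.random.default_rng(seed)
    lb = np.broadcast_to(np.asarray(lb, dtype=float), (dim,))
    ub = np.broadcast_to(np.asarray(ub, dtype=float), (dim,))
    return lb + rng.random((n_agents, dim)) * (ub - lb)
```

Scalar or per-dimension bounds both work, since they are broadcast to the full dimension count.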
3.2 Encircling prey
The WOA search space is the global solution space, with the prey's position determined first. Since the location of the prey in the solution space is not known a priori, WOA assumes the current best solution is the target, and other whale individuals approach the current best solution. When |A| < 1, whales perform a shrinking encircling action, gradually approaching the target and focusing on local exploitation near the best solution. This helps improve convergence and optimization accuracy but may lead to premature trapping in local optima. The specific position update formula is as follows:

D = |C · X*(t) − X(t)|
X(t+1) = X*(t) − A · D
where X* is the current global best solution; X is the current whale position; and A and C are coefficients used to control the shrinkage and exploration, influencing the attraction or repulsion of whales to the prey. These coefficients are calculated as follows:

A = 2a · r − a
C = 2r
a = 2(1 − t/T)

where parameter a starts at 2 and linearly decreases to 0 with iterations; r is a random value between [0, 1]; T is the maximum number of iterations; and t is the current iteration.
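As a brief sketch, the encircling update and its coefficients can be implemented as follows; sampling A and C per dimension (rather than as scalars) is a common implementation choice:

```python
import numpy as np

def encircle(X, X_best, t, T, rng):
    """Shrinking encircling: D = |C * X_best - X|, X_new = X_best - A * D."""
    a = 2.0 * (1.0 - t / T)                  # a decreases linearly from 2 to 0
    A = 2.0 * a * rng.random(X.shape) - a    # A lies in [-a, a]
    C = 2.0 * rng.random(X.shape)            # C lies in [0, 2]
    D = np.abs(C * X_best - X)
    return X_best - A * D
```

At the final iteration a = 0, so A = 0 and every whale collapses exactly onto the leader.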
3.3 Bubble-net attacking method
Spiral updating is a unique bubble-net hunting behavior of humpback whales, which gradually approach the prey by spiraling around it. It is a local exploitation strategy. When p ≥ 0.5, where p is a random number in [0, 1] that selects the hunting mode, the spiral updating formula is as follows:

X(t+1) = D′ · e^(bl) · cos(2πl) + X*(t)

where D′ = |X*(t) − X(t)| represents the distance between the current individual position X and the leader X*; b is a constant defining the shape of the spiral (usually set to 1); and the spiral coefficient l ranges between [–2, 1].
The exponential decay and periodic movement are used to simulate the whale’s path toward the prey, while maintaining some randomness to enhance the global search ability and avoid local convergence. As the whale approaches the prey, the spiral movement becomes tighter. However, in complex search spaces, WOA’s position updates rely heavily on randomness and are entirely based on the current global best solution. This approach can be effective in the early stages but tends to converge near local optima in the later stages, struggling to escape local optima and achieve a balance between global and local search strategies.
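A minimal sketch of the spiral update, using the l range [–2, 1] stated above; sampling l per dimension is an implementation assumption:

```python
import numpy as np

def spiral_update(X, X_best, b=1.0, rng=None):
    """Bubble-net spiral: X_new = D' * exp(b*l) * cos(2*pi*l) + X_best."""
    if rng is None:
        rng = np.random.default_rng()
    l = rng.uniform(-2.0, 1.0, size=X.shape)   # spiral coefficient in [-2, 1]
    D = np.abs(X_best - X)                     # distance D' to the leader
    return D * np.exp(b * l) * np.cos(2.0 * np.pi * l) + X_best
```

Negative l shrinks the exponential factor, so the spiral tightens toward the leader, matching the behavior described above.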
3.4 Search-for-prey
When whales have not yet located the prey, to ensure all whales fully explore the solution space, one whale is randomly selected to search for the prey. This random search enlarges the exploration range and prevents trapping in local optima. When |A| ≥ 1, the update formulas are as follows:

D = |C · Xrand − X(t)|
X(t+1) = Xrand − A · D

where Xrand is a random individual from the current population; and A controls how far the individual moves away from the reference whale to ensure global search capability.
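The exploration step can be sketched as below; drawing one random partner per whale in a single vectorized pass is an implementation assumption:

```python
import numpy as np

def search_for_prey(X, t, T, rng):
    """Exploration phase (|A| >= 1): move relative to a random whale, not the best."""
    a = 2.0 * (1.0 - t / T)
    A = 2.0 * a * rng.random(X.shape) - a
    C = 2.0 * rng.random(X.shape)
    X_rand = X[rng.integers(0, X.shape[0], size=X.shape[0])]  # one partner per whale
    D = np.abs(C * X_rand - X)
    return X_rand - A * D
```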
3.5 Advantages and disadvantages of WOA
Firstly, WOA randomly initializes the population position vectors and sets the maximum number of iterations and the parameter a. Then, the global best solution is determined by evaluating the fitness. The hunting behavior of whales is simulated by updating the target position through encircling the prey and using random search strategies to select prey. The population positions are updated based on the hunting results, and the process continues until the maximum number of iterations or a convergence condition is met, outputting the global best solution. This algorithm approximates the optimal solution using bionic principles, offering excellent search performance and convergence. It is simple, easy to implement, and applicable to various optimization problems. Compared to complex algorithms, WOA is more computationally efficient. However, in high-dimensional, multimodal problems, the population may focus too much on local exploitation, reducing global search capability. The algorithm's performance is sensitive to parameter settings, and improper adjustment of parameters can reduce search effectiveness. As iterations progress, the population diversity gradually diminishes, affecting the search capability. Additionally, increasing the number of iterations increases computational costs. To address these issues, this paper introduces an enhanced whale optimization algorithm with multiple strategies, GWOA, which uses Good Nodes Set initialization to generate a uniformly distributed population, incorporates a Growth-based Encircling Prey strategy with a newly designed inertia weight ω, and adds a novel Adaptive Sine-Cosine strategy, a Synergetic Search-for-Prey strategy, and an improved Cauchy Mutation strategy based on DE. Meanwhile, the updating method of the parameter a was redesigned to better balance exploration and exploitation. These enhancements comprehensively improve the global search capability, convergence speed, population diversity, and robustness of the WOA.
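Putting the three phases together, a minimal standard-WOA loop (the baseline, not GWOA) might look like the following sketch; the population size, iteration budget, l range of [–1, 1], and boundary clipping are illustrative choices:

```python
import numpy as np

def woa(fobj, dim, lb, ub, n_agents=30, T=200, seed=0):
    """Minimal WOA loop: explore (|A| >= 1), encircle (|A| < 1), or spiral (p >= 0.5)."""
    rng = np.random.default_rng(seed)
    X = lb + rng.random((n_agents, dim)) * (ub - lb)
    fit = np.array([fobj(x) for x in X])
    best, best_f = X[fit.argmin()].copy(), fit.min()
    for t in range(T):
        a = 2.0 * (1.0 - t / T)                    # a decreases linearly 2 -> 0
        for i in range(n_agents):
            p = rng.random()
            A = 2.0 * a * rng.random(dim) - a
            C = 2.0 * rng.random(dim)
            if p < 0.5:
                if np.all(np.abs(A) < 1):          # exploitation: encircle the leader
                    X[i] = best - A * np.abs(C * best - X[i])
                else:                              # exploration: follow a random whale
                    Xr = X[rng.integers(n_agents)]
                    X[i] = Xr - A * np.abs(C * Xr - X[i])
            else:                                  # bubble-net spiral
                l = rng.uniform(-1.0, 1.0, dim)
                X[i] = np.abs(best - X[i]) * np.exp(l) * np.cos(2 * np.pi * l) + best
            X[i] = np.clip(X[i], lb, ub)
            f = fobj(X[i])
            if f < best_f:
                best, best_f = X[i].copy(), f
    return best, best_f
```

On a simple sphere function this loop converges rapidly toward the origin, illustrating the strong exploitation behavior discussed above.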
4 GWOA
4.1 Good nodes set initialization
The core idea behind Good Nodes Set Initialization is to select a set of high-quality initial nodes (i.e., "good nodes") to improve global search efficiency and optimization solution quality. The basic principle is to select several excellent nodes as the initial solutions or starting nodes for the population at the beginning of the algorithm. These nodes are usually determined through heuristic methods, statistics, or other techniques, rather than being generated purely randomly. This provides the optimization algorithm with a set of starting nodes that are closer to the global optimal solution. The traditional WOA algorithm uses pseudo-random number generation to generate the population. This method is simple and exhibits strong randomness, but it also has the problem of uneven population distribution. Randomly generated populations may be overly dense in some areas and sparse in others, resulting in uneven coverage of the search space, which negatively affects the efficiency and performance of the WOA algorithm during the search process. As shown in Fig 1, GWOA uses Good Nodes Set initialization to generate a uniformly distributed population, improving the quality of the population.
Let Dn be the unit cube of an n-dimensional Euclidean space, in which there exists a point set

PM(k) = { ({r1 · k}, {r2 · k}, …, {rn · k}), k = 1, 2, …, M }

whose deviation satisfies φ(M) = C(r, ε) · M^(−1+ε), where C(r, ε) is a constant that depends only on r and ε (ε > 0), and {·} denotes the fractional part. Then PM is called the Good Points Set, and r is a good point.
The value of r is calculated as follows:

rj = 2cos(2πj/p), 1 ≤ j ≤ n

where p is the smallest prime number satisfying (p − 3)/2 ≥ n. The Good Nodes Set is mapped to the actual search space using the following mapping formula:

xi,j = lbj + {rj · i} · (ubj − lbj)

where ubj and lbj represent the upper and lower bounds of the j-th dimension.
By providing high-quality initial nodes, Good Nodes Set initialization can accelerate the convergence speed of the algorithm, reduce unnecessary computations, and improve the quality of the solutions. It effectively avoids local optima issues caused by random initialization, especially in complex multi-modal optimization problems. Furthermore, it ensures uniform coverage of the search space, enhancing global search ability and preventing bias, thus improving the optimization algorithm’s performance and efficiency. This advantage is not only evident in two-dimensional spaces but also shows in high-dimensional spaces, as the construction of the Good Nodes Set itself is independent of the dimension.
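One common construction of a Good Nodes Set uses the cosine-based good point rj = 2cos(2πj/p) and fractional parts; this particular choice of good point is an assumption drawn from the good-point-set literature, sketched below:

```python
import numpy as np

def good_nodes_init(M, dim, lb, ub):
    """Good Nodes Set: x_ij = lb_j + {r_j * i} * (ub_j - lb_j), r_j = 2*cos(2*pi*j/p)."""
    def is_prime(q):
        return q > 1 and all(q % d for d in range(2, int(q ** 0.5) + 1))
    p = 2 * dim + 3                      # smallest prime with (p - 3) / 2 >= dim
    while not is_prime(p):
        p += 1
    j = np.arange(1, dim + 1)
    r = 2.0 * np.cos(2.0 * np.pi * j / p)
    i = np.arange(1, M + 1)[:, None]
    frac = np.mod(i * r, 1.0)            # {r_j * i}: fractional part lies in [0, 1)
    return lb + frac * (ub - lb)
```

Because the construction is deterministic, the resulting population covers the search space far more evenly than pseudo-random sampling, in any dimension.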
4.2 Growth-based encircling prey
In the original prey encirclement strategy, position updates are based on a fixed linear relationship, making the search process overly dependent on the best solution. The search range remains unchanged throughout the process, which may cause the algorithm to over-explore in the early stages or fall into local optima during the exploitation phase. Shi et al. were the first to introduce the concept of inertia weight into the PSO algorithm, which led to significant performance improvement [34]. Inspired by inertia weights, this paper proposes an inertia weight update mechanism based on the Sigmoid function. The inertia weight ω is calculated as follows:
where ω is an inertia weight that decreases from 0.9 to 0; t represents the current iteration; and T is the maximum number of iterations.
The following are the common calculation formulas for ω:
As shown in Fig 2, the major advantage of the proposed inertia weight lies in its smooth decrease. In the early stages, a larger ω gives the search agents strong exploration ability, allowing them to search the solution space widely. As iterations proceed, the reduction in ω reduces the dependence on the optimal solution for position updates, shifting from global exploration to local refinement. This mechanism balances exploration and exploitation at different stages, preventing premature convergence to a local optimum. Near the global optimal solution, it prevents large steps from causing excessive deviations from the optimal solution. It also makes position updates more flexible and avoids over-reliance on leaders.
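The paper's exact sigmoid expression is not reproduced here; the sketch below shows one plausible sigmoid-shaped schedule that decreases smoothly from about 0.9 toward 0, with the steepness k = 10 and the exact form being illustrative assumptions:

```python
import numpy as np

def sigmoid_weight(t, T, w_max=0.9, k=10.0):
    """Sigmoid-shaped inertia weight: near w_max early, decaying smoothly toward 0.
    The steepness k and this exact expression are illustrative assumptions."""
    return w_max / (1.0 + np.exp(k * (t / T - 0.5)))
```

Unlike a linear schedule, this curve stays high during early exploration and drops quickly only around the midpoint of the run, matching the smooth-decrease behavior described above.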
The Growth-based Encircling Prey strategy is defined as follows:
where ω is the proposed inertia weight; X* is the current global best solution; A is a coefficient vector; and D′ represents the distance between the current individual position X and the leader X*, as shown in Eq 7.
4.3 Adaptive sine-cosine strategy
The Sine-Cosine Algorithm (SCA) is a population-based global optimization algorithm proposed by Seyedali Mirjalili [35]. It is inspired by the oscillatory behavior of the sine and cosine functions. The core idea of the SCA is to simulate the oscillatory behavior of the sine and cosine functions to find the global optimum. The algorithm is simple, easy to implement, and requires few parameters, making it suitable for various optimization problems. The optimization process of SCA is based on two main update mechanisms: the convergence property of the sine function and the oscillatory property of the cosine function. This allows SCA to explore the search space effectively and avoid local optima. To enhance the global search ability of WOA, we have introduced an adaptive sine-cosine strategy inspired by SCA. The adaptive sine-cosine strategy incorporates the oscillatory behavior of the sine function and the spiral updating behavior of the cosine function, enabling the algorithm to adaptively explore in the early stages and avoid local optima while balancing exploration and exploitation. The adaptive sine-cosine strategy is modeled as follows:
where r is a random number between 0 and 1; l is a random number that controls the whale's movement pattern when searching for prey, as shown in Eq 10; D′ is the distance between the current individual X and the global best solution X*, as shown in Eq 7; b is a spiral factor that adjusts the speed of the exponential function; r1 is a dynamic factor that controls the range of the sine strategy; t represents the current iteration; T is the maximum number of iterations; r2 is a random angle between 0 and 2π; r3 is a random factor between 0 and 2; and r4 controls the periodic behavior of the update position.
The adaptive sine-cosine strategy introduces sine and cosine functions to randomly adjust the search direction and uses the random factor r3 to control the amplitude of the update. This provides more search dimensions and dynamic position updates, preventing the algorithm from sticking to a single hunting pattern that could cause it to get stuck in local optima. In the original WOA algorithm, the global exploration ability is strong in the early stages of a search, but as iterations progress, the search range shrinks, potentially slowing down the search speed in later stages. To address this, the Adaptive Sine-Cosine strategy introduces adaptive shrinking control parameters r4 and r1, allowing broad exploration in the early stages and gradual narrowing toward the target, reducing oscillations and deviations, which helps accelerate convergence. Additionally, the strategy uses the random factor r to control the selection between the sine and cosine hunting methods, allowing whale agents to perform both global and local searches. This enables dynamic control of the search phase and increases the flexibility of the hunting behavior. Furthermore, the introduction of the random angle r2 and the diversified range r4 enhances diversity and jumping ability, improving the chances of escaping local optima. The non-linear form of the sine-cosine strategy effectively explores the complex fitness function's search space, improving the algorithm's adaptability to high-dimensional problems. This strategy enhances global search capability, avoids local optima, and helps the algorithm find the optimal solution faster and more efficiently. It is especially beneficial for complex, high-dimensional optimization problems, where it can find the global optimum in fewer iterations while exhibiting stronger robustness and adaptability.
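For background, the classic SCA update that inspired this strategy (not GWOA's exact adaptive variant) can be sketched as:

```python
import numpy as np

def sca_update(X, P, t, T, a=2.0, rng=None):
    """Classic SCA step toward target P: amplitude r1 shrinks with iterations,
    and each agent randomly uses the sine or the cosine branch."""
    if rng is None:
        rng = np.random.default_rng()
    r1 = a * (1.0 - t / T)                       # shrinking amplitude
    r2 = rng.uniform(0.0, 2.0 * np.pi, X.shape)  # random angle in [0, 2*pi]
    r3 = rng.uniform(0.0, 2.0, X.shape)          # random weight on the target
    r4 = rng.random(X.shape)                     # sine/cosine switch
    step = np.where(r4 < 0.5, np.sin(r2), np.cos(r2))
    return X + r1 * step * np.abs(r3 * P - X)
```

As r1 shrinks toward 0, the oscillations around the target P die out, which is the convergence mechanism the adaptive strategy builds upon.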
4.4 Synergetic search-for-prey
In the original Search-for-Prey strategy, a random whale is selected each time for the search. This may cause unnecessary fluctuations in the solution space, particularly in the middle stages of the algorithm, potentially leading to an unstable search process or even jumping out of the global optimum region. The strategy is overly reliant on the randomly chosen individual, which can cause the algorithm to get stuck in local optima, especially in complex solution spaces. To reduce the excessive randomness of this strategy, we propose a new Synergetic Search-for-Prey strategy. In the Synergetic Search-for-Prey strategy, position updates are made by referencing both the global optimal solution and the average position of the current whale agents, focusing the search near the current optimal solution. This enhances local search ability, helps explore the current region more effectively, and avoids unnecessary long-distance jumps. The Synergetic Search-for-Prey strategy also avoids over-reliance on a single individual, improving the diversity of the search. By introducing both the global optimal solution and the average position of all whale agents, together with a random disturbance term, the position update formula better combines the overall information of the whale population. This enables the algorithm to escape local optima, enhancing its global search ability. The Synergetic Search-for-Prey strategy is modeled as follows:
where Xm represents the average position of the whale agents, calculated in Eq 27; r is a random number uniformly distributed in the interval [0, 1]; X is the position of the current whale agent; X* is the position of the current global optimal solution.
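A minimal sketch of this update is given below. The blend weights between the global best and the population mean, and the Gaussian disturbance term, are illustrative assumptions, since the paper's exact equation is not reproduced in this text.

```python
import numpy as np

def synergetic_search(positions, X_best, rng):
    """Synergetic search-for-prey sketch: move each agent toward a blend of
    the global best and the population mean, plus a random disturbance.
    The blend weights and disturbance scale are illustrative assumptions."""
    Xm = positions.mean(axis=0)                     # average position (cf. Eq 27)
    r = rng.uniform(0, 1, positions.shape)          # r ~ U[0, 1]
    disturbance = rng.normal(0, 0.1, positions.shape)
    return (positions
            + r * (X_best - positions)              # pull toward global best
            + (1 - r) * (Xm - positions)            # pull toward population mean
            + disturbance)                          # random perturbation
```

Because every agent references both X_best and Xm, no single randomly chosen individual dominates the update, which is the point of the synergetic design.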
4.5 Improved cauchy mutation strategy based on differential evolution
The standard position update strategy in WOA relies on biomimetic principles like prey encirclement and spiral updates. These strategies generally focus on searching near the current optimal solution, particularly in the later stages when whale agents converge on a specific area, reducing the exploration of the search space. To enhance search diversity, a mutation strategy can be introduced after position updates to perturb the current solution and help whale agents escape local optima, allowing for broader exploration. This is particularly useful when solving complex problems with multiple local optima, as the mutation strategy helps agents avoid premature convergence.
Therefore, this paper proposes an improved Cauchy Mutation Strategy based on Differential Evolution. This strategy combines the mutation mechanism of the Cauchy distribution with the differential evolution approach, enhancing the global search capability of the algorithm. It can effectively avoid stagnation near local optima and improve the exploration and convergence properties in complex optimization problems.
First, a new intermediate solution is generated using the differential evolution strategy:
where Xi represents the position of the current ith agent; X* is the position of the current best individual; X1, X2, X3 are the positions of three randomly selected, distinct individuals from the population; and F is a factor controlling the scaling of the differential vector, calculated as below:
where Rand is a random number between 0 and 1.
Next, Cauchy Mutation is applied to the intermediate solution:
where cauchy(0,1,1,dim) represents the perturbation generated by the Cauchy distribution.
Finally, boundary checks and adjustments are made to avoid population degradation in WOA, as shown in Eq 31:
If the fitness of the mutated solution is better than that of the original solution Xi, the mutated solution replaces Xi.
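The mutation-selection loop described in this subsection can be sketched as follows. The specific DE mutant combination (a current-to-best/1-style form) and the schedule for F are illustrative assumptions, since the paper's exact equations are not reproduced in this text; the greedy replacement and boundary clipping follow the description above.

```python
import numpy as np

def de_cauchy_mutation(positions, fitness, X_best, fobj, lb, ub, rng):
    """Improved Cauchy mutation based on differential evolution (sketch).
    Builds a DE-style intermediate solution, adds a cauchy(0, 1) perturbation,
    clips to the bounds (cf. Eq 31), and keeps the mutant only if it improves."""
    n, dim = positions.shape
    new_positions = positions.copy()
    new_fitness = fitness.copy()
    for i in range(n):
        # three distinct random individuals, all different from agent i
        idx = rng.choice([j for j in range(n) if j != i], size=3, replace=False)
        X1, X2, X3 = positions[idx]          # X3 unused in this simplified mutant
        F = 0.5 * (1 + rng.uniform())        # assumed random scaling in [0.5, 1]
        V = positions[i] + F * (X_best - positions[i]) + F * (X1 - X2)
        V = V + rng.standard_cauchy(dim)     # heavy-tailed Cauchy perturbation
        V = np.clip(V, lb, ub)               # boundary check and adjustment
        fV = fobj(V)
        if fV < fitness[i]:                  # greedy selection: replace only if better
            new_positions[i] = V
            new_fitness[i] = fV
    return new_positions, new_fitness
```

The heavy tails of the Cauchy distribution occasionally produce long jumps, which is what lets agents escape local optima, while the greedy selection guarantees the population never degrades.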
4.7 Computational complexity analysis
4.7.1 Time complexity analysis.
The time complexity mainly depends on the computational effort required during each iteration. We need to analyze the key steps involved in each iteration.
Time Complexity of WOA:
- The initialization of WOA requires generating a random matrix of size SearchAgentsno × dim, so the time complexity is O(SearchAgentsno × dim);
- The objective function calculation involves running the outer loop for Maxiter iterations. The inner loop operates on SearchAgentsno search agents. The operations include checking boundary conditions and calculating the objective function value fitness = fobj(·), which has a time complexity of O(dim) because the objective function is computed over each position dimension. The leader update compares the objective function values of each search agent, which has a time complexity of O(1). Therefore, the overall objective function calculation time complexity is O(Maxiter × SearchAgentsno × dim);
- In each iteration, the positions of all search agents are updated based on certain mathematical formulas. Each search agent needs to traverse dim dimensions, so the time complexity for updating each agent's position is O(dim). Each position update involves multiple formulas (e.g., updating A, C, D), most of which have a time complexity of O(1). Therefore, the time complexity for position updates is O(SearchAgentsno × dim).
Algorithm 2. GWOA
Thus, the overall time complexity of WOA is O(Maxiter × SearchAgentsno × dim).
GWOA introduces some improvements compared to WOA, but its time complexity is similar to WOA.
Time Complexity of GWOA:
- The initialization process is the same as in WOA, with a time complexity of O(SearchAgentsno × dim);
- The objective function calculation follows the same procedure as WOA. The outer loop runs Maxiter times, and the inner loop iterates over SearchAgentsno search agents. The time complexity for calculating the objective function and updating the leader is O(dim) for each agent;
- GWOA uses more complex strategies, including inertia weight updates, cosine-sine strategies, and multi-differential Cauchy mutation strategies. These increase the computational effort, and each update involves operations over dim dimensions, which has a time complexity of O(dim).
Therefore, the time complexity of GWOA is O(Maxiter × SearchAgentsno × dim) + O(Maxiter × SearchAgentsno × dim). The total time complexity is still O(Maxiter × SearchAgentsno × dim), but due to additional operations (such as Cauchy mutation and cosine-sine strategies), the actual computational effort is slightly higher.
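The nested loop structure behind the O(Maxiter × SearchAgentsno × dim) bound can be made concrete with a small counting sketch:

```python
def woa_style_op_count(T, N, dim):
    """Count the per-dimension position-update operations in a WOA-style loop,
    illustrating the O(Maxiter * SearchAgentsno * dim) time complexity:
    an outer loop over iterations, an inner loop over agents, and O(dim)
    work per agent for the update and fitness evaluation."""
    ops = 0
    for _ in range(T):          # Maxiter outer loop
        for _ in range(N):      # SearchAgentsno inner loop
            ops += dim          # O(dim) per-agent update / evaluation
    return ops
```

With the settings used throughout the experiments (T=500, N=30, Dim=30), this gives 450,000 per-dimension update operations per run, which is why the extra constant-factor work of GWOA's strategies does not change the asymptotic bound.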
4.7.2 Space complexity analysis.
The space complexity is primarily determined by storing the search agent position matrix and other variables.
Space Complexity of WOA:
- The position matrix Positions stores position data of size SearchAgentsno × dim, with a space complexity of O(SearchAgentsno × dim);
- The leader's position and score only require storing one-dimensional leader information, with a space complexity of O(dim). Thus, the space complexity of WOA is O(SearchAgentsno × dim).
Space Complexity of GWOA:
- GWOA stores the position matrix Positions in the same way as WOA, with a space complexity of O(SearchAgentsno × dim);
- Additionally, GWOA introduces extra variables (for the cosine-sine and Cauchy mutation strategies) to store intermediate values such as E, r1, r2, etc. While these variables require little space, they still contribute to additional space consumption; their space complexity is O(SearchAgentsno × dim). Thus, the space complexity of GWOA is O(SearchAgentsno × dim).
Both WOA and GWOA have a space complexity of O(SearchAgentsno × dim) because both need to store the position matrix of the search agents. GWOA also needs to store some additional intermediate variables, but their space requirements are minimal. Therefore, the space complexity of GWOA is the same as that of WOA.
Moreover, when analyzing WOA and GWOA, it is observed that the time complexity of both algorithms is generally the same. Both depend on the number of search agents, the problem dimension, and the number of iterations. Although GWOA improves convergence speed through strategy enhancements, its time complexity does not fundamentally change. The space complexity is directly related to the number of search agents and the problem dimension, and is generally the same for both algorithms. In most engineering design challenges, the additional computational complexity is minimal and can be ignored.
5 Experiments
To verify the performance and effectiveness of the GWOA algorithm, this paper uses 23 classical benchmark functions for testing. As shown in Table 1, the benchmark set includes functions F1 to F23, which are primarily used to test and evaluate an optimization algorithm's performance on unimodal, multimodal, and compositional functions [36]. Among these, F1-F7 are unimodal functions, F8-F15 are multimodal functions, and F16-F23 are compositional functions. Each function tests different algorithm characteristics. The experiments in this paper are set up as follows, with the engineering design optimization experiments presented in Chapter 6. The hardware environment for the experiments is shown in Table 2 below:
- Perform an ablation study by removing five improvement strategies from GWOA and testing them on selected benchmark functions;
- Perform a qualitative analysis experiment on the benchmark functions for GWOA;
- Test GWOA, basic metaheuristic algorithms, and other excellent current metaheuristic algorithms on the benchmark functions;
- Test the scalability of GWOA and other metaheuristic algorithms on the benchmark functions.
The parameter settings for each algorithm are shown in Table 3:
5.1 Ablation study
In this section, we remove five improvement strategies from GWOA: GWOA without the Good Nodes Set initialization is named GWOA1; GWOA with the Growth-based Prey Encirclement Strategy replaced by the original WOA prey encirclement mechanism is named GWOA2; GWOA with the Synergetic Search-for-Prey strategy replaced by the original WOA prey search strategy is named GWOA3; GWOA with the Adaptive Sine-Cosine (ASC) strategy replaced by the original WOA spiral updating is named GWOA4; and GWOA without the Differential Evolution-based Enhanced Cauchy Mutation strategy is named GWOA5. The uniform settings include the maximum iteration T=500 and population size N=30. Each algorithm runs 30 times on the 23 benchmark functions for performance analysis. The iteration curves are shown in Fig 3.
From the figure, it is observed that the initialization of the Good Nodes Set generates a uniformly distributed whale population. This uniform distribution helps the algorithm explore the solution space more effectively and quickly, overcoming the poor optimization efficiency that may arise from improper initial population selection in WOA. As a result, the convergence speed and accuracy are improved on functions F5, F15, and F21-F23. The Growth-based Encircling Prey strategy enhances the dynamic balance between exploration and exploitation, overcoming the issues of local optima and slow convergence speed that may occur with fixed parameter settings in WOA. In particular, it focuses more on local exploitation in the later stages, which is advantageous when handling functions F1-F6 and F9-F11. The Synergetic Search-for-Prey strategy increases dependence on the optimal solution and introduces randomness to provide more search directions for the whale individuals. This improvement reduces the insufficient exploration that may arise in WOA on complex problems. The random perturbation further enhances the global search ability of the algorithm, demonstrating better performance on functions F1-F4, F7, and F10-F11. The Adaptive Sine-Cosine (ASC) strategy considers the distance between the whale's current position and the current best solution, thereby strengthening the exploration ability in the early stages. This strategy effectively reduces the risk of WOA getting trapped in local optima in high-dimensional and complex optimization problems, improving the global search capability. It enables the algorithm to converge quickly to the optimal solution for functions like F1-F6 and F12. On F1-F6 and F21-F22, the improved Cauchy Mutation strategy based on DE introduces a new perturbation mechanism, effectively helping the whale individuals escape from the current local optimal area and avoid early convergence to suboptimal solutions.
This strategy helps the algorithm maintain population diversity during the search process, preventing the solution aggregation phenomenon that may occur in WOA in some cases.
Through these improvement strategies, GWOA shows significant improvements in global search ability, convergence speed, and solution accuracy compared to the traditional WOA, making it more effective in solving complex multi-constrained optimization problems.
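For reference, one common construction of a good point (good nodes) set initialization is sketched below. The prime-based direction vector r_j = 2·cos(2πj/p) is a standard variant from the literature and is assumed here, since the paper's exact construction is not reproduced in this excerpt.

```python
import numpy as np

def good_point_set_init(n, dim, lb, ub):
    """Good point set initialization sketch: deterministic low-discrepancy
    points {i * r_j mod 1} with r_j = 2*cos(2*pi*j/p) for a prime
    p >= 2*dim + 3 (one common construction; the paper's exact variant
    is an assumption). Returns an (n, dim) matrix scaled to [lb, ub]."""
    p = 2 * dim + 3
    while not all(p % k for k in range(2, int(p ** 0.5) + 1)):
        p += 1                                   # smallest prime >= 2*dim + 3
    j = np.arange(1, dim + 1)
    r = 2 * np.cos(2 * np.pi * j / p)            # direction vector
    i = np.arange(1, n + 1).reshape(-1, 1)
    frac = np.mod(i * r, 1.0)                    # fractional parts in [0, 1)
    return lb + frac * (ub - lb)                 # scale to the search bounds
```

Unlike uniform random sampling, these points cover the search space evenly for any population size, which is why the ablation study attributes faster early convergence to this initialization.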
5.2 Qualitative analysis experiment
In the qualitative analysis, we record the individual search history of GWOA on the benchmark functions, the exploration-exploitation ratio during the iteration process, and the population diversity. This allows us to comprehensively evaluate the improvement effects of GWOA. The improvement points of GWOA over WOA are summarized in Fig 4. The experimental setup includes a maximum iteration count of T=500, function dimension Dim=30, and a population size of N=30. The results of the experiment are shown in Figs 5-7 below:
From the population search history in the figure, it can be observed that GWOA, benefiting from the Good Points Set strategy, explores most of the area in unimodal functions F1-F7 and quickly converges to the global optimum. In multimodal and compositional functions F8-F23, the trajectory covers multiple potential optimal areas before converging and focusing on the neighborhood of the optimal solution. This demonstrates a good balance between global exploration and local exploitation in GWOA. In functions F20-F23, local optima were found, indicating that GWOA, when facing complex function problems, may increase computation time and converge to a local optimum.
Further analysis of the changes in the exploration-exploitation ratio during the iteration process shows that a higher exploration ratio helps escape local optima in the early stages, while a higher exploitation ratio accelerates convergence in the later stages. The Growth-based Encircling Prey strategy introduces an adaptive parameter adjustment mechanism to dynamically adjust key parameters in the algorithm. As a result, on functions F5-F6, F8, F11-F14, and F17-F23, GWOA's exploration-exploitation ratio exhibits a favorable state. This strategy enhances the algorithm's search capability at different stages, with initial exploration focusing on discovery and later-stage exploitation driving convergence. Its dynamic balancing capability reflects the strong adaptability of the algorithm. On functions F1-F4, F7, F9-F10, and F15-F16, the Synergetic Search-for-Prey strategy and the Adaptive Sine-Cosine (ASC) strategy contribute to fast convergence in the early stages, demonstrating GWOA's ability to quickly find the optimal solution. GWOA incorporates the improved Cauchy mutation strategy based on Differential Evolution to enhance search diversity. On complex functions such as F14-F23, GWOA maintains higher population diversity, preserving global search ability and avoiding premature convergence. However, on functions F1-F13, rapid convergence leads to a sharp decline in population diversity.
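Population diversity of the kind tracked in these figures is commonly measured as the mean distance of agents to the population centroid; the minimal sketch below uses that measure as an assumption, since the paper's exact diversity metric is not reproduced here.

```python
import numpy as np

def population_diversity(positions):
    """Mean Euclidean distance of agents to the population centroid,
    a common diversity measure for qualitative analysis of swarm algorithms.
    positions: (n_agents, dim) matrix of agent positions."""
    centroid = positions.mean(axis=0)
    return np.linalg.norm(positions - centroid, axis=1).mean()
```

Logging this value once per iteration produces the diversity curves discussed above: it stays high while agents are spread over several basins and collapses toward zero as they cluster around one solution.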
In summary, the performance of GWOA reflects its ability to balance exploration and exploitation, maintain diversity, and achieve convergence precision. GWOA converges quickly and stably on unimodal functions, indicating strong exploitation capability. On multimodal and compositional functions, GWOA maintains diversity and eventually converges, demonstrating enhanced global search ability. However, on some functions, an imbalance between exploration and exploitation may lead to premature convergence.
5.3 Comparison of different metaheuristic algorithms
To further verify the superiority of GWOA, we selected the Dung Beetle Optimization Algorithm (DBO), Grey Wolf Optimizer (GWO), Harris Hawks Optimization Algorithm (HHO), Sine Cosine Algorithm (SCA), Whale Optimization Algorithm (WOA), an enhanced whale optimization algorithm (eWOA) [37], a Modified Whale Optimization Algorithm (MWOA) [38], a multi-strategy WOA (MSWOA) [39], the Attraction Repulsion Optimization Algorithm (AROA) [40], and an improved version of the sand cat swarm optimization algorithm (ISCSO) [41] as comparisons. The detailed information on the algorithms is shown in Table A1, and their detailed parameter settings are shown in Table A2. The algorithms are tested on selected benchmark functions with the following unified settings: maximum iteration T=500, population size N=30, and function dimension Dim=30. Each algorithm runs 30 times on the 23 classical benchmark functions to record the average fitness (Ave), standard deviation (Std), p-values of the Wilcoxon rank-sum test, and Friedman values for performance analysis.
5.3.1 Parametric analysis.
As shown in Fig 8 and Table 4, GWOA performs significantly better than the other algorithms on F1-F7, with both the Avg and Std being 0. The Avg and Std of the other algorithms on these functions are notably higher than those of GWOA. This indicates that GWOA is able to find the optimal solution and performs very stably. The convergence speed and stability of GWOA allow it to quickly find the global optimum in unimodal function problems. GWOA also shows superior performance on F9, F11-F13, and F15, suggesting that it performs well on these multimodal functions, benefiting from the Synergetic Search-for-Prey strategy and its global search ability. It is better able to escape local minima and find the global optimum. The smoothness of its curve demonstrates GWOA's robustness and strong adaptability and consistency across different types of optimization problems. GWOA also performs excellently on F8 and F14, thanks to the Good Points Set initialization. In the early stages, GWOA can quickly narrow the search space, making the solution space more regular, allowing it to find the global optimum quickly and stably. On F10 and F16-F19, the GWOA curve is similar to those of the other algorithms because most algorithms can find very close solutions, showing minimal performance differences. On F20-F23, GWOA performs well but the results are similar to those of some other algorithms, because these algorithms converge to near-optimal solutions with no significant differences. However, GWOA has a smaller Std, indicating stronger stability. By integrating various improvement strategies, GWOA possesses powerful global search ability and high stability, performing outstandingly on both unimodal and multimodal functions. On complex compositional functions, the results of GWOA are similar to those of other excellent algorithms like MSWOA and eWOA, as these algorithms can also quickly find the global optimum or a near-optimal solution in these cases.
However, in the performance evaluation of optimization algorithms, Avg and Std are often used to measure convergence and stability, but they may not directly reflect an algorithm's superiority. Relying solely on these two metrics has limitations when comparing different algorithms. Therefore, non-parametric statistical methods, such as the Wilcoxon rank-sum test and the Friedman test, are often used for more in-depth analysis and more reliable performance validation.
5.3.2 Non-parametric Wilcoxon rank-sum test and non-parametric Friedman test.
The Friedman test is a non-parametric statistical method used to compare differences between three or more related samples. It serves as a non-parametric alternative to analysis of variance (ANOVA) and is applicable in situations where multiple experiments are conducted on the same dataset using different algorithms. This test can assess statistical differences between algorithms and identify whether significant differences exist. The performance of multiple algorithms across different datasets or test environments is ranked. The ranks are then summed, and the significance of the inter-group differences is determined using a chi-squared distribution. The Friedman test effectively reduces bias between samples, thereby enabling a fairer comparison of algorithms. As shown in the Table 5, there is a clear difference between GWOA and other excellent algorithms. GWOA’s average Friedman value is 2.3021, ranking first.
The Wilcoxon rank-sum test is a non-parametric statistical method designed to assess whether there are significant differences in the distributions of two independent samples. The core idea is to merge the two datasets, rank them, and calculate the statistic based on the ranks to infer distribution differences. Unlike traditional methods that rely on mean and variance, the Wilcoxon test is not affected by outliers, providing more robust results, particularly when handling non-normally distributed data. In the comparison of optimization algorithms, the Wilcoxon test is an effective tool for determining whether there is a real difference in the performance of two algorithms. If the p-value of the test is less than the preset significance level (typically 0.05), the performance difference between the two algorithms can be considered statistically significant, rather than due to random fluctuations. As shown in the Table 5 below, “+” represents a significant difference, “=” indicates equal results, and “–” denotes no obvious difference. GWOA shows significant differences compared to most algorithms, while there are more ties when compared to the excellent variant MWOA, as both ultimately converge to the optimal solution.
5.4 Scalability comparison experiment on different metaheuristic algorithms
In the benchmark functions, F1-F13 are expandable functions. To explore the performance of GWOA in different dimensions, the experiments extend F1-F13 to 50 and 100 dimensions for further analysis. The algorithm’s parameter settings remain unchanged, with a maximum number of iterations set to T=500 and a population size of N=30. Then, the algorithm is run 30 times on each function, and the p-values of the Wilcoxon rank-sum test and the Friedman value are recorded. The experimental results are shown in Table 6:
The results show that GWOA maintains good performance in high-dimensional environments, consistently ranking first in the Friedman Rank. In the Wilcoxon rank-sum test, it exhibits significant differences compared to other comparative algorithms. This is sufficient to demonstrate that GWOA has a strong competitive advantage over other optimization algorithms.
5.5 Overall effectiveness of GWOA
To further validate the performance of GWOA, this study uses the Overall Efficiency (OE) metric [42] to summarize the performance results of GWOA and the other algorithms. In Table 7 below, w denotes wins, t denotes ties, and l denotes losses. The OE of each algorithm is calculated using Eq 32:
where N is the total number of tests; L is the total number of failed tests for each algorithm.
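Assuming Eq 32 takes the usual form from [42], OE = (N − L)/N × 100%, the metric can be computed from win/tie/loss counts as follows:

```python
def overall_efficiency(wins, ties, losses):
    """Overall Efficiency (OE) sketch: OE = (N - L) / N * 100, where N is the
    total number of tests and L the number of failed (lost) tests, as defined
    in the text. The exact form of Eq 32 is assumed from reference [42]."""
    N = wins + ties + losses   # total number of tests
    L = losses                 # total number of failed tests
    return (N - L) / N * 100.0
```

Under this definition, ties count toward an algorithm's efficiency, so an algorithm that never loses scores 100% even when other algorithms match it.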
Table 7 summarizes the performance results of GWOA and the other algorithms. The results show that although the performance of GWOA decreases at high dimensions, its overall efficiency is 74.46%, making it the most effective algorithm. GWOA demonstrates excellent performance on classical benchmark functions and shows significant differences from the selected comparison algorithms. This quantitatively validates GWOA's effectiveness.
6 Engineering design optimization
Optimization algorithms play a key role in solving complex design and decision-making problems in engineering [43,44]. Their objectives usually include minimizing costs, maximizing performance, or improving efficiency. These algorithms offer effective solutions, particularly in design, planning, and scheduling, contributing to efficient, economical, and sustainable system designs. This paper evaluates the performance of GWOA on engineering optimization problems through four case studies: Pressure Vessel Design, Tension/Compression Spring Design, Piston Lever Design, and Speed Reducer Design. GWOA is compared with other algorithms using a consistent setup of 500 iterations and a population size of 30. Each algorithm is run 30 times on the design problems, and performance is analyzed based on the average (Avg) and standard deviation (Std).
We applied the Penalty Function Method to manage optimization constraints. The Penalty Function Method is a widely recognized and effective technique for constraint handling. This method converts constrained optimization problems into unconstrained ones by adding a penalty term to the objective function. When a variable violates a constraint, the penalty function imposes a significant penalty. This encourages the algorithm to favor solutions that satisfy the constraints, simplifying the optimization process.
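A minimal static-penalty wrapper illustrating this conversion is sketched below; the penalty coefficient and the squared-violation form are illustrative assumptions, as the paper does not specify them in this excerpt.

```python
def penalized(fobj, constraints, penalty=1e6):
    """Static penalty method sketch: convert a constrained problem into an
    unconstrained one by adding penalty * sum of squared violations of
    constraints given in g_i(x) <= 0 form. The coefficient is an assumption."""
    def wrapped(x):
        violation = sum(max(0.0, g(x)) ** 2 for g in constraints)
        return fobj(x) + penalty * violation
    return wrapped
```

For example, wrapping an objective with the constraint g(x) = 1 − x[0] ≤ 0 leaves feasible points untouched while heavily penalizing any x with x[0] < 1, steering the search toward the feasible region.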
6.1 Pressure vessel design
The Pressure Vessel design problem is a classic engineering optimization problem, where the goal is to minimize the manufacturing cost of a pressure vessel. x1, x2, x3 and x4 are the design parameters representing the thickness of the vessel shell, the thickness of the vessel head, the inner diameter of the vessel, and the length of the vessel (excluding the head), respectively. The structure of a pressure vessel is shown in Fig 9, and the Pressure Vessel design problem is modeled as follows.
Variable:
Objective function:
Subject to:
Variable range:
Where:
The objective function is subject to individual constraints, g1 is the constraint on the ratio between the thickness and the radius; g2 represents the linear relationship between the inner diameter and the radius; g3 corresponds to the volume requirement for the container design; g4 is the length requirement for the container design.
In practice, the selection of suitable materials is often constrained by suppliers, cost, and physical properties such as corrosion resistance and thermal stability. Additionally, the manufacturing process of pressure vessels is complex and may involve processes such as welding and casting, which present challenges for design accuracy and cost.
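For reference, the widely used literature formulation of this problem is sketched below. Since the paper's equations are not reproduced in this excerpt, the coefficients here are the standard ones from the benchmark literature and are assumptions rather than the paper's exact model.

```python
import math

def pressure_vessel_cost(x):
    """Standard pressure-vessel cost function from the optimization literature
    (an assumption; the paper's own equations are not reproduced here).
    x = [x1, x2, x3, x4]: shell thickness, head thickness, inner radius, length."""
    x1, x2, x3, x4 = x
    return (0.6224 * x1 * x3 * x4 + 1.7781 * x2 * x3 ** 2
            + 3.1661 * x1 ** 2 * x4 + 19.84 * x1 ** 2 * x3)

def pressure_vessel_constraints(x):
    """Constraint values g_i(x) <= 0 in the standard formulation."""
    x1, x2, x3, x4 = x
    return [
        -x1 + 0.0193 * x3,                                   # g1: thickness vs radius
        -x2 + 0.00954 * x3,                                  # g2: head thickness vs radius
        -math.pi * x3 ** 2 * x4
            - (4.0 / 3.0) * math.pi * x3 ** 3 + 1296000,     # g3: volume requirement
        x4 - 240,                                            # g4: length limit
    ]
```

Combined with a penalty wrapper, this gives an unconstrained objective that any of the compared metaheuristics can minimize directly.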
6.2 Tension/compression spring design
In the Tension/Compression Spring design problem, the goal is to find the optimal spring parameter combination that meets performance requirements, typically minimizing the spring's volume or weight. The Tension/Compression Spring design is shown in Fig 11. x1 is the spring wire diameter, ranging from 0.05 to 2.0, controlling the spring's strength and stiffness. x2 is the spring outer diameter, ranging from 0.25 to 1.3, determining the spring's spatial size. x3 is the number of active coils, ranging from 2.0 to 15.0, influencing the spring's deformation ability. The Tension/Compression Spring design is modeled as follows.
Variable:
Objective function:
Subject to:
Variable range:
Where:
where g1 represents the design requirements for the stress and dimensions of the spring material; g2 is the optimized design of the spring’s material strength and dimensions, ensuring the spring’s safety and stability; g3 refers to the spring’s elastic modulus and response, ensuring the spring is neither too stiff nor too soft under working conditions; g4 pertains to the spring’s dimensional ratio, ensuring a reasonable overall size ratio of the spring.
Spring design typically requires very precise manufacturing processes, as even slight deviations can lead to performance instability. Springs may face fatigue failure issues over time, especially under high load and frequent operation conditions.
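As with the pressure vessel, the standard literature formulation of the spring problem can serve as a reference. The weight function and constraint coefficients below follow the common benchmark form (with x2 as the mean coil diameter) and are assumptions, since the paper's equations are not reproduced in this excerpt.

```python
def spring_weight(x):
    """Standard tension/compression spring weight: (N + 2) * D * d^2, with
    x = [d, D, N] = wire diameter, coil diameter, active coils.
    This is the common literature formulation, assumed here."""
    d, D, N = x
    return (N + 2) * D * d ** 2

def spring_constraints(x):
    """Constraint values g_i(x) <= 0 in the standard formulation."""
    d, D, N = x
    return [
        1 - (D ** 3 * N) / (71785 * d ** 4),                 # g1: minimum deflection
        (4 * D ** 2 - d * D) / (12566 * (D * d ** 3 - d ** 4))
            + 1 / (5108 * d ** 2) - 1,                       # g2: shear stress
        1 - 140.45 * d / (D ** 2 * N),                       # g3: surge frequency
        (d + D) / 1.5 - 1,                                   # g4: outer diameter limit
    ]
```

The four constraints mirror the g1-g4 roles described above: material stress and dimensions, strength, elastic response, and the overall size ratio of the spring.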
6.3 Piston lever design
The Piston Lever design problem revolves around optimizing the geometric parameters of the piston lever to meet design objectives and constraints while improving performance. The structure of a piston lever is shown in Fig 13. The key variables typically involved are x1 for the horizontal distance of the lever, x2 for the vertical distance, x3 for the piston lever diameter, and x4 for the length of the piston lever. The goal is to minimize the material volume, thus reducing the piston lever's weight and cost. The Piston Lever design problem can be described as:
Variable:
Objective function:
Subject to:
Variable range:
where
Where g1 refers to the torque balance design, ensuring that the piston system generates sufficient torque to balance the external load; g2 is the maximum torque limitation design, which represents the maximum torque generated by the piston rod during operation; g3 pertains to the dimensional design requirements, ensuring the structural safety and stability of the system; g4 is the geometric design of the piston rod, ensuring its mechanical performance during operation.
The manufacturing precision of gears significantly impacts the overall system efficiency and noise levels, thus requiring high-precision processing techniques. The efficiency of the gear transmission system is often affected by factors such as friction, lubrication, and the quality of the gear surface, which necessitates the consideration of lubrication methods and material selection during the design process.
6.4 Speed reducer design
The Speed Reducer design problem aims to optimize the structural parameters of the speed reducer to minimize the system’s weight. The structure of a speed reducer is shown in Fig 15. The design involves 7 key variables: the gear diameter x1, the center distance x2, gear-related parameters x3, the diameter of the gear shaft x4, the diameter of another gear shaft x5, the thickness of the gear shaft x6, the thickness of another gear shaft x7. The Speed Reducer design problem is defined below:
Variable:
Objective function:
Subject to:
Variable range:
Where:
where g1 refers to the power transmission efficiency design; g2 is the strength of the transmission system; g3 and g4 are the strength of the bearings and components; g5 and g6 pertain to the strength and stiffness of the shafts; g7 is the load capacity design; g8 refers to the gear size ratio design; g9 is the geometric ratio design of the gears; g10 and g11 are the relative size designs of the shafts and gears.
6.5 Results and analysis
As the experimental results in Fig 10, Fig 12, Fig 14, Fig 16 and Table 8 show, GWOA outperforms the other algorithms on engineering design problems by virtue of its excellent stability and optimization accuracy. It is able to converge to the optimal solution quickly and accurately.
7 Conclusion
This paper proposes an improved Whale Optimization Algorithm (GWOA). First, GWOA introduces the Good Nodes Set Initialization during the initialization phase. Then, several improvement strategies are applied, including the Growth-based Encircling Prey strategy, Synergetic Search-for-Prey strategy, Adaptive Sine-Cosine strategy, and Improved Cauchy Mutation Strategy based on Differential Evolution. To validate its effectiveness, we tested GWOA using benchmark functions and engineering optimization problems. GWOA was compared with the latest improved algorithms and other classic algorithms. The experimental results show that GWOA outperforms other algorithms in terms of convergence speed, accuracy, and stability. In engineering optimization problems like pressure vessel design and spring design, GWOA also performs excellently, effectively minimizing costs while satisfying constraint conditions.
GWOA incorporates an enhanced Cauchy Mutation based on Differential Evolution, which significantly improves its global search ability, thereby optimizing its performance in numerical optimization tasks. However, this enhancement comes with a substantial increase in computational complexity, especially when dealing with large-scale complex optimization problems. While GWOA outperforms traditional WOA in standard numerical optimization problems, the growth in computation time becomes noticeable when handling large problems. Specifically, the Differential Evolution-based enhanced Cauchy Mutation introduces additional computational steps during the exploration of the solution space, which directly impacts the algorithm's execution efficiency. Therefore, as computational complexity increases, the feasibility of GWOA in real-time application scenarios, particularly in engineering design tasks that require fast decision-making and immediate feedback, may be limited. Since large-scale optimization problems often require results within a short time frame, the high computational cost of GWOA restricts its application in real-time optimization tasks, especially in parameter tuning tasks where rapid response times are crucial.
In summary, GWOA significantly enhances the global search capability, convergence speed, and solution accuracy of the optimization algorithm through the integration of various improvement strategies. Comparative experiments demonstrate that GWOA has advantages in solving complex, multi-constrained optimization problems, offering new insights for future engineering optimization applications. Moving forward, we plan to test the algorithm with mechanical part prototypes, verify it in real-world scenarios, and further optimize GWOA for more reliable mechanical designs. Additionally, we aim to explore GWOA’s optimization applications in other fields, such as education and healthcare, to expand its range of applicability.
Details of the benchmark functions
To support the experimental study presented in this paper, we utilized the Standard Benchmark Functions. The relevant data have been uploaded to Figshare, and the link to the specific modeling of the Standard Benchmark Functions (Dim=30) is available here: https://figshare.com/s/aea70ae3f8877f7c8461. This provides reference material for further analysis by interested readers.
Appendix Table
Table A1 provides details of the metaheuristic algorithms.
Table A2 lists the detailed parameters for the engineering design problems.
Acknowledgments
I sincerely appreciate the contributions of Junhao Wei, Zikun Li, Baili Lu, and Shirou Pan to this paper. Special thanks to Ngai Cheong, the corresponding author of this project, for the guidance provided throughout the process and for proofreading the manuscript.
References
- 1. Gandomi AH, Yang XS, Talatahari S, Alavi AH, editors. Metaheuristic algorithms in modeling and optimization. In: Metaheuristic applications in structures and infrastructures. Amsterdam, The Netherlands: Elsevier; 2013. p. 1–24.
- 2. Karaboga D, Basturk B. A powerful and efficient algorithm for numerical function optimization: artificial bee colony (ABC) algorithm. J Glob Optim. 2007;39(3):459–71.
- 3. Holland JH. Genetic algorithms. Sci Am. 1992;267(1):66–72.
- 4. Kennedy J, Eberhart R. Particle swarm optimization. In: Proceedings of ICNN'95 - International Conference on Neural Networks. Vol. 4. Perth, WA, Australia; 1995. p. 1942–8. https://doi.org/10.1109/ICNN.1995.488968
- 5. Xue J, Shen B. Dung beetle optimizer: a new meta-heuristic algorithm for global optimization. J Supercomput. 2022;79(7):7305–36.
- 6. Mirjalili S, Mirjalili SM, Lewis A. Grey wolf optimizer. Adv Eng Softw. 2014;69:46–61.
- 7. Heidari AA, Mirjalili S, Faris H, Aljarah I, Mafarja M, Chen H. Harris hawks optimization: algorithm and applications. Future Gener Comput Syst. 2019;97:849–72.
- 8. Wang M, Lu G. A modified sine cosine algorithm for solving optimization problems. IEEE Access. 2021;9:27434–50.
- 9. Mostafa Bozorgi S, Yazdani S. IWOA: an improved whale optimization algorithm for optimization problems. J Comput Des Eng. 2019;6(3):243–59.
- 10. Kirkpatrick S, Gelatt CD Jr, Vecchi MP. Optimization by simulated annealing. Science. 1983;220(4598):671–80. pmid:17813860
- 11. Colorni A, Dorigo M, Maniezzo V, et al. Distributed optimization by ant colonies. In: Proceedings of the first European conference on artificial life. Vol. 42. 1991. p. 134–42.
- 12. Yang XS. A new metaheuristic bat-inspired algorithm. In: González JR, Pelta DA, Cruz C, Terrazas G, Krasnogor N, editors. Nature Inspired Cooperative Strategies for Optimization (NICSO 2010). Studies in Computational Intelligence. Vol. 284. Berlin, Heidelberg: Springer; 2010. p. 65–74. https://doi.org/10.1007/978-3-642-12538-6_6
- 13. Al-Betar MA, Awadallah MA, Braik MS, Makhadmeh S, Doush IA. Elk herd optimizer: a novel nature-inspired metaheuristic algorithm. Artif Intell Rev. 2024;57(3):48.
- 14. Chen S, Zhang C, Yi J. Time-optimal trajectory planning for woodworking manipulators using an improved PSO algorithm. Appl Sci. 2023;13(18):10482.
- 15. Yang P, Sun L, Zhang M, Chen H. A lightweight optimal design method for magnetic adhesion module of wall-climbing robot based on surrogate model and DBO algorithm. J Mech Sci Technol. 2024;38(4):2041–53.
- 16. Nadimi-Shahraki MH, Taghian S, Mirjalili S. An improved grey wolf optimizer for solving engineering problems. Expert Syst Appl. 2021;166:113917.
- 17. Wang S, Jia H, Abualigah L, Liu Q, Zheng R. An improved hybrid Aquila optimizer and Harris Hawks algorithm for solving industrial engineering optimization problems. Processes. 2021;9(9):1551.
- 18. Tawhid MA, Savsani V. Multi-objective sine-cosine algorithm (MO-SCA) for multi-objective engineering design problems. Neural Comput Appl. 2017;31(S2):915–29.
- 19. Chakraborty S, Kumar Saha A, Sharma S, Mirjalili S, Chakraborty R. A novel enhanced whale optimization algorithm for global optimization. Comput Ind Eng. 2021;153:107086.
- 20. Wei J, Gu Y, Law KLE, Cheong N. Adaptive position updating particle swarm optimization for UAV path planning. In: 2024 22nd International Symposium on Modeling and Optimization in Mobile, Ad Hoc, and Wireless Networks (WiOpt). Seoul, Korea, Republic of; 2024. p. 124–31.
- 21. Mirjalili S, Lewis A. The whale optimization algorithm. Adv Eng Softw. 2016;95:51–67.
- 22. Chen H, Xu Y, Wang M, Zhao X. A balanced whale optimization algorithm for constrained engineering design problems. Appl Math Model. 2019;71:45–59.
- 23. Yang W, Xia K, Fan S, Wang L, Li T, Zhang J, et al. A multi-strategy whale optimization algorithm and its application. Eng Appl Artif Intell. 2022;108:104558.
- 24. Chakraborty S, Sharma S, Saha AK, Saha A. A novel improved whale optimization algorithm to solve numerical optimization and real-world applications. Artif Intell Rev. 2022;55:4605–716.
- 25. Leite JPB, Topping BHV. Improved genetic operators for structural engineering optimization. Adv Eng Softw. 1998;29(7–9):529–62.
- 26. Arunkumar S, Eshwara Moorthy PR, Karthik N. Design optimization of horizontal pressure vessel. Material Today Proceed. 2020;26:1526–31.
- 27. Cadet G, Paredes M. Optimized dimensioning of helical compression springs. Eur J Mech A/Solids. 2024;107:105385.
- 28. Kim P, Lee J. An integrated method of particle swarm optimization and differential evolution. J Mech Sci Technol. 2009;23(2):426–34.
- 29. Lin MH, Tsai JF, Hu NZ, Chang SC. Design optimization of a speed reducer using deterministic techniques. Math Probl Eng. 2013;2013:419043.
- 30. Li J, Sun K. Pressure vessel design problem using improved gray wolf optimizer based on cauchy distribution. Appl Sci. 2023;13(22):12290.
- 31. Đurđev M, Desnica E, Pekez J, Milošević M, Lukić D, Novaković B, et al. Modern swarm-based algorithms for the tension/compression spring design optimization problem. Ann Fac Eng Hunedoara. 2021;19(2):55–8.
- 32. Hu W, Xiao P, Zhai R, Pan J, Sun Y. Research on energy-regenerative suspension based on WOA-PID algorithm. J Intell Material Syst Struct. 2022;34(5):536–50.
- 33. Zhou Y, Ling Y, Luo Q. Lévy flight trajectory-based whale optimization algorithm for engineering optimization. Eng Comput. 2018;35(7):2406–28.
- 34. Shi Y, Eberhart R. A modified particle swarm optimizer. In: 1998 IEEE International Conference on Evolutionary Computation Proceedings, IEEE World Congress on Computational Intelligence (Cat. No.98TH8360). 1998. p. 69–73. https://doi.org/10.1109/icec.1998.699146
- 35. Mirjalili S. SCA: a sine cosine algorithm for solving optimization problems. Knowl-Based Syst. 2016;96:120–33.
- 36. Suganthan PN, Hansen N, Liang JJ, Deb K, Chen Y-P, Auger A, et al. Problem definitions and evaluation criteria for the CEC 2005 special session on real-parameter optimization. Technical Report, Nanyang Technological University, Singapore, May 2005; and KanGAL Report #2005005, IIT Kanpur, India.
- 37. Chakraborty S, Saha AK, Chakraborty R, Saha M. An enhanced whale optimization algorithm for large scale optimization problems. Knowl-Based Syst. 2021;233:107543.
- 38. Jain R, Gupta D, Khanna A. Usability feature optimization using MWOA. In: Proceedings of the International Conference on Innovative Computing and Communications: ICICC 2018. Vol. 2. 2019. p. 453–62.
- 39. Zhou R, Zhang Y, Sun X, Liu H, Cai Y. MSWOA: multi-strategy whale optimization algorithm for engineering applications. Eng Lett. 2024;32:8.
- 40. Cymerys K, Oszust M. Attraction–repulsion optimization algorithm for global optimization problems. Swarm Evol Comput. 2024;84:101459.
- 41. Niu Y, Yan X, Wang Y, Niu Y. An improved sand cat swarm optimization for moving target search by UAV. Expert Syst Appl. 2024;238:122189.
- 42. Wei J, Gu Y, Yan Y, Li Z, Lu B, Pan S, et al. LSEWOA: an enhanced whale optimization algorithm with multi-strategy for numerical and engineering design optimization problems. Sensors (Basel). 2025;25(7):2054. pmid:40218567
- 43. Lin X, Yu X, Li W. A heuristic whale optimization algorithm with niching strategy for global multi-dimensional engineering optimization. Comput Ind Eng. 2022;171:108361.
- 44. Liu M, Yao X, Li Y. Hybrid whale optimization algorithm enhanced with Lévy flight and differential evolution for job shop scheduling problems. Appl Soft Comput. 2020;87:105954.