
RWOA: A novel enhanced whale optimization algorithm with multi-strategy for numerical optimization and engineering design problems

  • Junhao Wei,

    Roles Conceptualization, Data curation, Formal analysis, Investigation, Methodology, Project administration, Resources, Software, Supervision, Validation, Visualization, Writing – original draft, Writing – review & editing

    Affiliation Faculty of Applied Sciences, Macao Polytechnic University, Macao, China

  • Yanzhao Gu,

    Roles Software, Validation, Visualization

    Affiliation Faculty of Applied Sciences, Macao Polytechnic University, Macao, China

  • Baili Lu,

    Roles Formal analysis, Investigation, Methodology

    Affiliation College of Animal Science and Technology, Zhongkai University of Agriculture and Engineering, Guangzhou, China

  • Ngai Cheong

    Roles Funding acquisition, Supervision, Writing – review & editing

    ncheong@mpu.edu.mo

    Affiliation Faculty of Applied Sciences, Macao Polytechnic University, Macao, China

Abstract

Whale Optimization Algorithm (WOA) is a biologically inspired metaheuristic algorithm with a simple structure and ease of implementation. However, WOA suffers from issues such as slow convergence speed, low convergence accuracy, reduced population diversity in the later stages of iteration, and an imbalance between exploration and exploitation. To address these drawbacks, this paper proposed an enhanced Whale Optimization Algorithm (RWOA). RWOA utilized the Good Nodes Set method to generate evenly distributed whale individuals and incorporated a Hybrid Collaborative Exploration strategy, a Spiral Encircling Prey strategy, and an Enhanced Spiral Updating strategy integrated with Levy flight. Additionally, an Enhanced Cauchy Mutation based on Differential Evolution was employed. Furthermore, we redesigned the update method for the parameter a to better balance exploration and exploitation. The proposed RWOA was evaluated using 23 classical benchmark functions, and the impact of six improvement strategies was analyzed. We also conducted a quantitative analysis of RWOA and compared its performance with other state-of-the-art (SOTA) metaheuristic algorithms. Finally, RWOA was applied to nine engineering design optimization problems to validate its ability to solve real-world optimization challenges. The experimental results demonstrated that RWOA outperformed other algorithms and effectively addressed the shortcomings of the canonical WOA.

1 Introduction

In recent decades, meta-heuristic algorithms have been widely developed, studied, and applied. Due to the complexity and diversity of many real-world problems, traditional exact algorithms often struggle to find the optimal solution in a reasonable amount of time. Meta-heuristic algorithms, on the other hand, find approximately optimal solutions with modest computational resources by drawing on problem-specific structure and domain knowledge. They perform well on complex problems such as large-scale, nonlinear, and multi-modal problems, especially when complete information or an accurate analytical description of the problem is unavailable. The following heuristic algorithms have gained great popularity in recent years: the Particle Swarm Algorithm (PSO) [1], the Sparrow Search Algorithm (SSA) [2], the Whale Optimization Algorithm (WOA) [3], the Harris Hawk Optimization Algorithm (HHO) [4], Genetic Algorithms (GA) [5], Simulated Annealing Algorithms (SA) [6], Dung Beetle Optimization Algorithms (DBO) [7], the Grey Wolf Optimizer (GWO) [8], Ant Colony Optimization Algorithm (ACO) [9], Artificial Bee Colony Algorithm (ABC) [10], and so on. Nowadays, meta-heuristic algorithms are widely used to solve real-world problems such as path planning [11], neural network parameter optimization [12], feature selection [13], optimal scheduling of power systems [14], node coverage optimization of WSN networks [15], shop floor scheduling [16], advanced planning and scheduling (APS) [17], tension/compression spring design [18], welded beam design [19], hydraulic thrust bearing design [20], and antenna design [21], to name a few.

Early engineering design optimization methods mainly include Linear Programming (LP), Nonlinear Programming (NLP), and Integer Programming (IP), focusing on the optimization of a single discipline and usually aiming to solve specific objectives or problems. In these traditional methods, designers mainly rely on experience and intuition to make decisions, using relatively simple strategies to address the problem. As a result, they often achieve a locally optimal solution rather than a globally optimal one. Moreover, traditional design methods typically rely on manual calculations and relatively simple tools, requiring designers to manually adjust parameters. The problem-solving process is cumbersome and time-consuming, and it can only handle simpler models and smaller-scale design spaces. In contrast, modern engineering design optimization emphasizes the use of heuristic algorithms and focuses on the integration of multiple disciplines and global optimization. Modern design methods not only consider a single objective but also take into account the interrelationships between various disciplines and objectives. Through collaborative optimization, Multi-Objective Optimization (MOO), and Multidisciplinary Design Optimization (MDO), designers can achieve more comprehensive and integrated optimization results, explore a broader design space, and seek globally optimal solutions. This paper focuses on the optimization of chemical engineering design using meta-heuristic algorithms, specifically addressing corrugated bulkhead design, industrial refrigeration systems, reactor network design, and piston lever optimization, in order to enhance chemical plant productivity, safety, and product quality.

However, meta-heuristic algorithms have some limitations. Particle Swarm Optimization (PSO) faces the risk of premature convergence when handling complex or multi-modal optimization problems, often converging early to local optima [1]. Genetic Algorithm (GA), while versatile, exhibits weak local exploitation capabilities and requires tuning of numerous parameters, increasing the complexity of parameter adjustment [5]. The Grey Wolf Optimizer (GWO) is simple in structure and easy to implement, yet in complex or multi-modal problems, GWO may also fall into local optima during later iterations [8]. Harris Hawks Optimization (HHO) is known for its strong exploration and exploitation capabilities, but the complexity of its parameter settings makes it challenging to tune, and its effectiveness on complex problems is not guaranteed [4]. Ant Colony Optimization (ACO) performs well in solving discrete and combinatorial optimization problems, yet the pheromone update mechanism in ACO can lead to premature convergence to local optima [9]. Additionally, ACO requires careful tuning of multiple parameters, such as the pheromone evaporation rate and heuristic factors, and the interactions between these parameters are often complex. The No Free Lunch theorem suggests that the superiority of an optimization algorithm on a specific set of problems does not guarantee its effectiveness across other problem domains. Improving the balance between exploration and exploitation, enhancing the efficiency of the exploration phase, increasing the accuracy of the exploitation phase, and maintaining population diversity in the later stages of iteration have become the major challenges in enhancing the performance of metaheuristic algorithms.

In recent years, many scholars have made various attempts to improve metaheuristic algorithms. In 2021, Ugur Guvenc et al. proposed AGDE [22]. AGDE introduced a mutation operator, an adaptive crossover rate CR, and a Fitness-Distance Balance strategy, simulating a more efficient selection mechanism in nature. Its aim was to enhance the balanced search capability and diversity of Differential Evolution (DE) [23]. In 2022, Ma C et al. proposed the Grey Wolf Optimizer based on the Aquila Exploration Method (AGWO). AGWO drew inspiration from the Aquila Optimizer (AO), enabling some wolves to possess flying capabilities, thereby expanding the search range and improving global search performance. This modification effectively reduced the risk of getting trapped in local optima. In 2023, Elsisi M et al. proposed the Improved Bald Eagle Search algorithm with dimension learning-based hunting (I-BES), designed to overcome the slow convergence, local optima trapping, and early loss of diversity that were common issues in the Bald Eagle Search (BES) algorithm [25]. I-BES effectively overcame the tuning issues in model predictive control (MPC) for autonomous vehicles (AVs), including vision dynamics. In 2024, Yang Z et al. proposed the Competing leaders Grey Wolf Optimizer (CGWO) [26]. CGWO benefited from a novel mechanism of competing leaders that provided a flexible wolf pack leadership hierarchy to avoid stagnation in local optima and accelerate convergence. In addition, a population diversity-enhanced initialization method was designed to help improve the efficiency of the competing-leaders mechanism. These improved metaheuristic algorithms integrated various novel improvement strategies, offering new insights for enhancing the performance of metaheuristic algorithms.

Mirjalili et al. proposed the Whale Optimization Algorithm (WOA) in 2016 [3]. WOA is a meta-heuristic algorithm that mimics the feeding strategy of whales, characterized by strong global optimization ability and a simple structure. However, WOA also has certain drawbacks, such as a tendency to fall into local optima, low convergence accuracy, and difficulty in balancing global and local exploration. In recent years, scholars have made various attempts to improve WOA. In 2020, Rahnema N et al. proposed ABCWOA [27]. ABCWOA introduces Random Memory (RM) and Elite Memory (EM) to enhance both convergence and exploration capabilities. In 2020, Ruiye Jiang et al. proposed an improved Whale Optimization Algorithm (WAROA), specifically designed to address complex, large-scale, and constrained optimization problems [28]. The core innovation of WAROA lies in two main adjustments that improve the efficiency and applicability of WOA: first, strategic adjustments of key parameters and the establishment of the basic principles of the original optimization algorithm; second, the introduction of an armed force scheme, which classifies the searching whales and promotes efficient cooperation between different categories. In 2023, Shen Y et al. proposed an improved Whale Optimization Algorithm based on multi-population evolution (MEWOA) [29]. MEWOA divides the population into three subpopulations based on individuals’ fitness: an exploration subpopulation, an exploitation subpopulation, and a moderation subpopulation, with different search strategies assigned to each. This multi-population co-evolution strategy effectively enhances the search capability of WOA. In 2024, Gharehchopogh F S et al. proposed a new hybrid Whale Optimization Algorithm and Golden Jackal Optimization (WOAGJO) aimed at addressing the issue of WOA getting trapped in local optima [30].

2 Organization of the paper

Chapter 3 briefly overviewed the major contributions of this research. Chapter 4 primarily analysed the current research works on engineering design. Chapter 5 provided a detailed explanation of the principles of the WOA, along with its advantages and disadvantages. Chapter 6 introduced the RWOA proposed in this paper. Chapter 7 evaluated the performance of RWOA through a series of experiments. Chapter 8 involved testing various metaheuristic algorithms and RWOA on different engineering design optimization problems to validate the practicality and robustness of RWOA.

3 Major contributions

The structure of WOA is relatively simple, making it easy to understand and implement. However, WOA struggles to balance exploration and exploitation, and the population quality tends to deteriorate significantly over iterations, leading to insufficient global exploration and premature convergence to local optima. Although the studies mentioned in Chapter 1 have improved the performance of WOA to some extent, most of them fail to simultaneously balance exploration and exploitation, enhance convergence speed and accuracy, effectively escape local optima, and maintain a high level of population diversity in the later stages of iteration. Considering that WOA performs poorly in engineering design optimization, we introduced an enhanced whale algorithm with multi-strategy (RWOA). RWOA aimed to make up for the shortcomings of WOA to a certain extent and explore the potential of WOA as an excellent optimizer for engineering design optimization.

RWOA introduced Good Nodes Set Initialization to generate uniformly distributed populations, employed a newly designed Hybrid Collaborative Exploration (HCE) strategy to enhance global exploration, incorporated a Spiral Encircling Prey strategy that integrates spiral flight, utilized an Enhanced Spiral Updating strategy with Levy flight, introduced an Enhanced Cauchy Mutation based on Differential Evolution, and introduced a new update mechanism for the parameter a to better balance exploration and exploitation. Experiments showed that RWOA effectively addresses the drawbacks of WOA. Furthermore, compared to the classical WOA and other state-of-the-art (SOTA) metaheuristic algorithms, RWOA demonstrated significant advantages in both numerical optimization and real-world optimization problems.

4 Research works on engineering design

Based on the complexity and characteristics of optimization problems, optimization methods can generally be divided into two main categories: traditional methods and modern methods. Traditional optimization methods include Linear Programming (LP) and Nonlinear Programming (NLP) [31], which are suitable for problems with small scales and relatively simple objective functions and constraints. Dynamic programming is primarily used for problems involving sequential decision processes, particularly those with time series or multi-stage decisions. Integer Programming (IP) [32] is used for discrete optimization problems, especially when design variables are integers or belong to a finite set.

With the increasing complexity of engineering design, traditional single-discipline optimization methods can no longer meet the demands of modern engineering, thus leading to the emergence of Multi-Disciplinary Design Optimization (MDO). Herskovits et al. proposed numerical models for Simultaneous Analysis and Design Optimization (SAND) and Multi-Disciplinary Design Optimization (MDO), solved using numerical techniques based on the Feasible Arc Interior Point Algorithm (FAIPA) [33]. Even for large-scale optimization problems, this approach significantly reduces computational effort and integrates well with existing engineering simulation codes. The MDO method aims to optimize multiple interdependent disciplines or subsystems simultaneously to achieve more comprehensive and efficient design solutions. In many engineering design problems, multiple conflicting objective functions, such as minimizing weight while maximizing strength, must often be considered. To address these types of problems, Multi-Objective Optimization (MOO) methods have been developed. Yi et al. proposed a method that integrates multi-domain performance criteria into MOO, aiming to provide designers with an integrated optimization solution to improve the overall performance of buildings [34]. MOO methods can simultaneously optimize multiple objectives and provide a set of Pareto optimal solutions.

In recent years, the rapid development of Artificial Intelligence (AI) and Machine Learning (ML) technologies has greatly promoted their application in engineering design optimization. For example, Pablo N. Pizarro et al. used deep neural networks to predict wall thickness and length based on previous architectural and engineering projects, improving design efficiency and reducing trial-and-error processes [35]. Fang et al. proposed a deep reinforcement learning (DRL)-based wind turbine rotor speed optimization method, considering rain intensity and wind speed conditions to reduce rain erosion-induced blade coating fatigue damage [36]. Furthermore, data-driven optimization, using big data technology, extracts valuable information from historical data, learns optimization patterns, and provides scientific data support for the design process. The application of AI and ML technologies not only enhances the efficiency and accuracy of the optimization process but also provides new momentum and perspectives for engineering design innovation.

Compared to traditional methods, modern optimization methods rely more on heuristic algorithms to tackle complex, multi-modal, and highly constrained problems. Typical modern optimization methods include Genetic Algorithms (GA), which are based on principles of biological evolution and are especially effective for complex and nonlinear problems [5]; Particle Swarm Optimization (PSO), inspired by the foraging behavior of bird flocks, with strong global search capabilities [1]; Simulated Annealing (SA), which mimics the physical annealing process to avoid local optima, widely used in complex system optimization [6]; and Ant Colony Optimization (ACO), which simulates the foraging behavior of ants and is particularly useful for path optimization problems [9]. Additionally, topology optimization is widely applied in structural design to optimize the shape and material distribution of structures, thereby improving design efficiency and performance.

5 WOA

The WOA is a new meta-heuristic algorithm proposed by Mirjalili et al. from Griffith University, Australia, which mimics the behaviour of whales in searching for, encircling, and capturing their prey for the purpose of solving a complex optimization problem [3].

5.1 Initialization

As shown in Fig 2 on the left, like most meta-heuristic algorithms, WOA uses pseudo-random number initialization for population initialization.

X = lb + Rand × (ub − lb) (1)

where X is the whale population initialized by pseudo-random numbers; ub and lb are the upper and lower limits of the problem; Rand is a random number between 0 and 1.
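As a concrete illustration (not the authors' code), Eq 1 can be sketched in Python, assuming scalar bounds shared by all dimensions for brevity:

```python
import random

def init_population(n, dim, lb, ub):
    """Pseudo-random initialization (Eq 1): X = lb + Rand * (ub - lb),
    drawn independently for each of the n whales and each dimension."""
    return [[lb + random.random() * (ub - lb) for _ in range(dim)]
            for _ in range(n)]

# Example: 30 whales in a 10-dimensional search space bounded by [-100, 100].
pop = init_population(30, 10, -100.0, 100.0)
```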

5.2 Encircling prey

Humpback whales recognize the location of their prey and encircle them. Since the location of the optimal design in the search space is unknown, WOA assumes that the current optimal candidate solution is the target prey or a near-optimal solution. After defining the optimal search agent, other search agents will try to update their positions to the optimal search agent. This behavior is represented by Eqs 2 and 3.

D = |C · X*(t) − X(t)| (2)

X(t+1) = X*(t) − A · D (3)

where t is the current iteration; A and C are coefficient vectors; X*(t) is the position of the current best solution; X is the position of the whale.

If an iteration produces a better solution, i.e. a position whose fitness value is smaller than that of the current best solution, then that position vector is set as the new X*.

The formulae for the vectors A and C are given below:

A = 2a · r − a (4)

C = 2r (5)

a = 2 − 2t/T (6)

where r is a vector of random numbers from 0 to 1; a decreases from 2 to 0 during the iteration, as shown in Fig 1; t is the current number of iterations; T is the maximum number of iterations.

Fig 1. The update method of parameter a in the original WOA.

https://doi.org/10.1371/journal.pone.0320913.g001
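A minimal Python sketch of the encircling-prey update (Eqs 2-6), applied per dimension with fresh random numbers as in the vector formulation; the helper name `encircle` is ours:

```python
import random

def encircle(x, x_best, a):
    """Encircling prey (Eqs 2-5), per dimension:
    A = 2*a*r - a, C = 2*r, D = |C*x_best - x|, x_new = x_best - A*D."""
    new = []
    for xj, bj in zip(x, x_best):
        r = random.random()
        A, C = 2 * a * r - a, 2 * r
        D = abs(C * bj - xj)          # Eq 2
        new.append(bj - A * D)        # Eq 3
    return new

# With a = 0 (end of the run, Eq 6), A = 0 and the whale lands on the best.
x_new = encircle([1.0, 2.0], [3.0, 4.0], a=0.0)
```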

5.3 Bubble-net attacking method

In addition to encircling their prey, whales also use the bubble net attack method to attack their prey. They trap their prey by blowing round bubbles in the water to create a water net. At around 15 meters underwater, the whales swim upwards in a spiral position and spit out bubbles of varying sizes to encircle their prey and bring them closer to a central area. The whale then swallows the prey in the bubble net in a near vertical position. Therefore, the bubble net attack method is divided into the following two strategies:

  1. Shrinking encircling: This behaviour is achieved by decreasing the value of a in Eq 6, which shrinks the range of A in Eq 4. The position is updated as shown in Eq 3.
  2. Spiral updating position: The method first calculates the straight-line distance between the whale position X and the prey position X* as shown in Eq 7, and then creates a spiral equation to simulate the whale spiraling upwards to encircle the prey as shown in Eq 8.

     D' = |X*(t) − X(t)| (7)

     X(t+1) = D' · e^(bl) · cos(2πl) + X*(t) (8)

     a2 = −1 − t/T (9)

     l = (a2 − 1) · Rand + 1 (10)

     where b is a constant defining the shape of the logarithmic helix, usually set to 1; a2 is a parameter varying linearly over [−2, −1]; t is the current number of iterations; T is the maximum number of iterations; Rand is a random number between 0 and 1; the spiral coefficient l takes values in [−2, 1].

WOA assigns a 50% probability to each of the shrinking encircling mechanism and the spiral updating position mechanism when the whale updates its position, as follows:

X(t+1) = X*(t) − A · D, if p < 0.5
X(t+1) = D' · e^(bl) · cos(2πl) + X*(t), if p ≥ 0.5 (11)

where p is a random number between 0 and 1.
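The 50/50 choice of Eq 11 between shrinking encircling (Eq 3) and spiral updating (Eq 8) can be sketched as follows; `bubble_net` is an illustrative helper, with b = 1 by default:

```python
import math
import random

def bubble_net(x, x_best, a, a2, b=1.0):
    """Bubble-net attack (Eq 11): with probability 0.5 shrink-encircle
    (Eqs 2-4), otherwise spiral update (Eqs 7, 8, 10)."""
    p = random.random()
    new = []
    for xj, bj in zip(x, x_best):
        if p < 0.5:
            r = random.random()
            A, C = 2 * a * r - a, 2 * r
            new.append(bj - A * abs(C * bj - xj))          # Eq 3
        else:
            l = (a2 - 1) * random.random() + 1             # Eq 10
            D = abs(bj - xj)                               # Eq 7
            new.append(D * math.exp(b * l) * math.cos(2 * math.pi * l) + bj)
    return new

out = bubble_net([1.0, 2.0], [3.0, 4.0], a=1.0, a2=-1.5)
```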

5.4 Search for prey

If the whale moves beyond the position where the prey exists, then the whale will abandon the previous moving direction and randomly search for other prey in other directions to avoid falling into a local optimum. The modeling of the whale searching for prey is as follows:

D = |C · X_rand − X| (12)

X(t+1) = X_rand − A · D (13)

where X_rand is a random whale chosen from the current population; A and C are given by Eq 4 and Eq 5.
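The exploration update of Eqs 12-13 differs from Eqs 2-3 only in that a random whale replaces the best solution as the reference point; a sketch (helper name `search_prey` is ours):

```python
import random

def search_prey(x, x_rand, a):
    """Search for prey (Eqs 12-13): follow a randomly chosen whale x_rand
    instead of the best, which drives exploration when |A| >= 1."""
    new = []
    for xj, rj in zip(x, x_rand):
        r = random.random()
        A, C = 2 * a * r - a, 2 * r
        new.append(rj - A * abs(C * rj - xj))
    return new

# With a = 0 the update collapses onto the random reference whale.
x_new = search_prey([0.0, 0.0], [5.0, 6.0], a=0.0)
```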

5.5 Advantages and disadvantages of WOA

The pseudo-code of the WOA is shown in Algorithm 1.

Algorithm 1: WOA

Begin
  (Pseudo-random number initialization)
  Initialize the population using the pseudo-random number method;
  Initialize the parameters (T, N, p, etc.);
  Calculate the fitness of each search agent;
  Set the best search agent X*;
  while t < T
    for each search agent
      Update a, A, C, l, and p;
      if p < 0.5
        if |A| < 1
          (Encircling prey)
          Update the position of the current search agent by Eq 3;
        else
          (Search for prey)
          Update the position of the current search agent by Eq 13;
        end if
      else
        (Spiral updating)
        Update the position of the current search agent by Eq 8;
      end if
    end for
    Check if any search agent goes beyond the search space and amend it;
    Calculate the fitness of each search agent;
    Update X* if there is a better solution;
    t = t + 1;
  end while
  return X*
End

As Algorithm 1 shows, in the late iterations the traditional WOA suffers from weak global search ability, slow convergence, low search accuracy, and a tendency to fall into local optima caused by the reduction of population diversity. Its exploration and exploitation abilities are weak, and its parameters make it difficult to balance global exploration against local exploitation. In addition, the traditional WOA does not take into account possible differences in how strongly the prey guides each whale's position update. WOA therefore has much room for improvement, and we proposed RWOA to address these shortcomings.
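To make Algorithm 1 concrete, here is a minimal, runnable Python sketch of the canonical WOA (an illustration, not the authors' code), assuming scalar bounds, b = 1, and per-dimension parameter updates following Eqs 3, 6, 8-10, and 13, demonstrated on the sphere function:

```python
import math
import random

def sphere(x):
    return sum(v * v for v in x)

def woa(f, dim, lb, ub, n=30, T=500, seed=1):
    """Minimal WOA sketch of Algorithm 1 (scalar bounds for brevity)."""
    rng = random.Random(seed)
    X = [[lb + rng.random() * (ub - lb) for _ in range(dim)] for _ in range(n)]
    best = min(X, key=f)[:]
    fbest = f(best)
    for t in range(T):
        a = 2 - 2 * t / T                     # Eq 6: a decreases from 2 to 0
        a2 = -1 - t / T                       # Eq 9: a2 decreases from -1 to -2
        for i in range(n):
            p, x = rng.random(), X[i]
            xr = X[rng.randrange(n)]          # random whale for exploration
            new = []
            for j in range(dim):
                r = rng.random()
                A, C = 2 * a * r - a, 2 * r
                if p < 0.5:
                    # |A| < 1: encircle the best (Eq 3); else explore (Eq 13)
                    ref = best[j] if abs(A) < 1 else xr[j]
                    v = ref - A * abs(C * ref - x[j])
                else:
                    l = (a2 - 1) * rng.random() + 1          # Eq 10
                    v = (abs(best[j] - x[j]) * math.exp(l)
                         * math.cos(2 * math.pi * l) + best[j])  # Eq 8, b = 1
                new.append(min(max(v, lb), ub))              # clamp to bounds
            X[i] = new
            fi = f(new)
            if fi < fbest:
                best, fbest = new[:], fi
    return best, fbest

best, fbest = woa(sphere, dim=10, lb=-100.0, ub=100.0)
```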

6 RWOA

6.1 Good nodes set

The traditional WOA uses pseudo-random number initialization to generate the population. This approach is simple, direct, and random, but the resulting population is not uniformly distributed across the solution space: it is denser in some regions and sparser in others, as shown in Fig 2 on the left. The low quality of such a population makes the WOA search process inefficient. Therefore, RWOA uses Good Nodes Set Initialization to generate uniformly distributed populations.

Fig 2. Pseudo-random number initialization vs Good Nodes Set initialization.

https://doi.org/10.1371/journal.pone.0320913.g002

The concept of the Good Nodes Set was first proposed by the Chinese mathematician Loo-keng Hua. This method constructs a set of nodes that are uniformly distributed in the space, which provides a significant advantage over random initialization, especially in higher-dimensional spaces. The key intuition behind this approach is that the construction of the Good Nodes Set is independent of the number of dimensions, which ensures uniformity in any dimensional space. In contrast, random initialization often suffers from clustering of nodes in some regions and large gaps in others, especially as the dimensionality increases. The uniform distribution provided by the Good Nodes Set improves the search efficiency by avoiding such issues and facilitating better global exploration.

Mathematically, the Good Nodes Set is defined as a sequence of nodes in the unit hypercube of D-dimensional Euclidean space, where the nodes are spaced according to a specific non-random, deterministic pattern. The uniformity of the distribution can be understood through the fractional parts {k · r}, which ensure that the nodes are spread evenly. This deterministic nature contrasts with the random nature of pseudo-random number initialization, where the nodes are arbitrarily distributed, leading to inefficiencies.

In this sense, Good Nodes Set Initialization enhances the population quality by providing a more structured and evenly distributed set of candidate solutions, improving both the exploration and exploitation abilities of the algorithm, as illustrated in Fig 2 on the right [37]. Good Nodes Set in D-dimensional space is described by Eq 14 as follows:

P(k) = { ({r_1 · k}, {r_2 · k}, …, {r_D · k}) | k = 1, 2, …, M } (14)

where {x} represents the fractional part of x; M is the number of points; r = (r_1, r_2, …, r_D) is the generating good point; the deviation of the set satisfies φ(M) = C(r, ε) · M^(−1+ε), where C(r, ε) is a constant depending only on r and ε, and ε is an arbitrary constant greater than zero.

This set is called the Good Nodes Set and each node p(k) in it is called a Good Node. Assume that the upper and lower bounds of the i-th dimension of the search space are ub_i and lb_i; the formula for mapping the Good Nodes Set to the actual search space is:

X_k^i = lb_i + p_i(k) · (ub_i − lb_i) (15)
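A runnable sketch of Good Nodes Set initialization (Eqs 14-15). The paper does not state which generating point r it uses; this sketch assumes the common choice r_j = 2·cos(2πj/p) with a prime p ≥ 2D + 3, taking fractional parts and mapping them to [lb, ub]:

```python
import math

def good_nodes_set(M, dim, lb, ub):
    """Good Nodes Set initialization (Eqs 14-15). Assumption: generating
    point r_j = 2*cos(2*pi*j/p) for the smallest prime p >= 2*dim + 3,
    since the paper does not specify its construction of r."""
    def is_prime(n):
        return n > 1 and all(n % k for k in range(2, int(n ** 0.5) + 1))

    p = 2 * dim + 3
    while not is_prime(p):
        p += 1
    r = [2 * math.cos(2 * math.pi * (j + 1) / p) for j in range(dim)]
    # Eq 14: fractional parts {k * r_j}; Python's % 1.0 maps into [0, 1).
    pts = [[(k * rj) % 1.0 for rj in r] for k in range(1, M + 1)]
    # Eq 15: map each good node from the unit hypercube to [lb, ub].
    return [[lb + g * (ub - lb) for g in pt] for pt in pts]

pop = good_nodes_set(100, 2, -100.0, 100.0)
```

Unlike pseudo-random sampling, the construction is deterministic, so repeated runs start from the same evenly spread population.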

6.2 Hybrid collaborative exploration (HCE) strategy

Red-billed Blue Magpie Optimization algorithm (RBMO) is a new meta-heuristic algorithm proposed by Shengwei Fu et al. in 2024 [38], which has a powerful search capability by simulating the hunting and food storage behaviour of red-billed blue magpies. Red-billed blue magpies use two strategies to search for food, and they usually act in small groups of 2-5 individuals or in groups of more than 10 individuals to improve the search efficiency.

Location updates were made using Eq 16 when red-billed blue magpies were searching for food in small groups:

(16)

where t represents the current number of iterations; X represents the location of the search agent; p represents the number of red-billed blue magpies in 2-5 cliques randomly selected from all search individuals; represents the randomly selected individual; represents the randomly selected search agent in the current iteration.

Location updates were made using Eq 17 when red-billed blue magpies were searching for food as a group:

(17)

where q represents the number of red-billed blue magpies in a randomly selected flock of 10 to n individuals, and n is the population size.

In the search-for-prey phase, WOA uses only the position of a single randomly selected whale, which may limit the diversity of search paths, especially in high-dimensional search spaces, and may lead to insufficient search capability. Moreover, although WOA updates positions by randomly selecting a reference individual, this single source of randomness may not be sufficient to help WOA explore regions far away from the current optimal solution, which reduces its global search capability. Therefore, the Hybrid Collaborative Exploration mechanism in RWOA incorporates the food-searching idea of the RBMO algorithm: a small group of 2-5 whales or a flock of 10 to n whales is randomly selected from the whale population to search for food, as shown in Fig 3. By referencing the positions of 2-5 or 10 to n whales, RWOA can combine the information of multiple individuals when updating positions. This means more directions and paths are considered in each position update, avoiding the limitations of a single-reference update and allowing the algorithm to jump and explore over a larger area. This strategy helps enhance the global search capability of WOA, enabling the algorithm to better discover the global optimal solution. In addition, the strategy takes into account the position of the current best solution, effectively guiding the whales towards better solutions. As the number of iterations increases, the reliance on the current best solution's position gradually decreases: in the early stages, position updates are strongly guided by the global best solution, while in later iterations the search range expands and greater emphasis is placed on diversity, thereby preventing premature convergence to local optima. This multi-reference scheme also reduces the instability caused by relying on a single random individual, improving the robustness of the algorithm and making WOA perform more stably across various optimization problems.

Fig 3. Hybrid collaborative exploration mechanism.

In the search for prey, whales adopt the cooperative strategy of the red-billed blue magpies, randomly selecting 2-5 whales or 10-n whales to search for prey.

https://doi.org/10.1371/journal.pone.0320913.g003

The formula for Hybrid Collaborative Exploration (HCE) strategy of RWOA is shown in Eq 18:

(18)

where A is the coefficient vector; X* is the position of the current best solution; p and q represent the number of whales in a randomly selected small group of 2-5 or flock of 10 to n, respectively; Rand denotes a random number from 0 to 1.
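Since Eq 18 builds on symbols defined above, the following Python sketch only illustrates the described behaviour, under clearly labelled assumptions: a hypothetical helper `hce_step`, a uniform 50/50 choice between a small group (2-5 whales) and a flock (10 to n whales), the group mean as the combined multi-whale information, and a linearly decaying pull toward the best solution. It is not the paper's exact update rule.

```python
import random

def hce_step(X, best, t, T, rng=random):
    """Hypothetical illustration of the HCE idea (NOT the paper's Eq 18):
    combine a randomly selected small group (2-5 whales) or flock (10..n
    whales) via their mean position, with the pull toward the best solution
    decaying linearly over the iterations t = 0..T."""
    n, dim = len(X), len(X[0])
    if rng.random() < 0.5:
        k = rng.randint(2, min(5, n))       # small group of 2-5 whales
    else:
        k = rng.randint(min(10, n), n)      # flock of 10 to n whales
    group = rng.sample(X, k)
    mean = [sum(w[j] for w in group) / k for j in range(dim)]
    w_best = 1 - t / T                      # reliance on the best decays
    r = rng.random()
    return [mean[j] + r * (w_best * best[j] - mean[j]) for j in range(dim)]
```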

6.3 Spiral encircling prey strategy

As shown in Eq 3, the original WOA uses an encircling prey strategy, where the whale’s position is updated based on the distance to the prey. While effective in approaching the optimal solution, this strategy can lead to premature convergence and local optima, particularly in complex, multi-modal problems. The simplicity of the strategy limits exploration during local search and makes the algorithm sensitive to initial positions.

Spiral flight is a continuous and systematic search pattern that differs from traditional linear or local search methods, as shown in Fig 4. It allows the whales to generate more varied trajectories while encircling the prey. This spiral motion not only enhances the encircling capability towards the target but also helps avoid local convergence during the search process. The introduction of spiral motion improves the whales’ precision while encircling the prey, particularly when approaching the optimal solution. The nonlinear spiral trajectory makes the search trajectory more flexible, ensuring that the whale maintains ongoing exploration of the target while preventing quick convergence to a particular location, which may cause it to miss other potential optimal solutions. By incorporating the spiral strategy, the whales’ movement direction and distance become more diverse, enabling exploration of the solution space around the prey in multiple directions, thus enhancing the diversity of the search process. Fig 5 is a schematic diagram of the Spiral Encircling Prey mechanism.

thumbnail
Fig 5. Spiral encircling prey mechanism.

The whales encircle the prey in a spiral pattern.

https://doi.org/10.1371/journal.pone.0320913.g005

(19)(20)(21)

where r is a random number from 0 to 1; Z and L are spiral coefficients.

6.4 Enhanced spiral updating strategy

While the original spiral updating strategy of WOA aids in local search, the lack of randomness and perturbation mechanisms may cause whale individuals to converge around the leader (current best solution) in a short time, limiting the ability to escape local optima. To address this, this paper proposed an Enhanced Spiral Updating strategy by introducing Levy flight into the Spiral Updating strategy.

Levy flight is a random process whose distribution characteristics lie between Brownian motion and more extreme jumping behaviors. The concept originated from the work of the mathematician Paul Levy in the 1920s. Inspired by foraging behavior in nature and jumping phenomena in complex systems, Levy flight combines short-range exploration with long-distance jumps, resembling the foraging paths of predators such as sharks, birds, and insects. Fig 6 illustrates a simulation of Levy flight. It is a random walk model based on the Levy distribution, which has a higher probability of producing longer step sizes, enabling whales to explore vast distances through jumps. In global search scenarios, Levy flight allows for large jumps, which helps avoid getting trapped in local optima, while still maintaining some local search capability through shorter steps. By incorporating unequal probabilities for both long and short jumps, Levy flight enhances the diversity of the search process, making it better suited to adapt to more complex solution space structures. The step size L(s) of Levy flight is calculated as follows:

(22)

where u and ν are normally distributed; β=1.5.

(23)(24)

The calculation of is given by:

(25)
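The step-size construction above is the standard Mantegna algorithm for sampling Levy-distributed steps; a minimal sketch (the helper name `levy_step` is ours):

```python
import math
import numpy as np

def levy_step(dim, beta=1.5, rng=None):
    """Sample a Levy-flight step via Mantegna's algorithm, the standard
    construction behind Eqs 22-25: L(s) = u / |v|**(1/beta) with
    u ~ N(0, sigma_u**2) and v ~ N(0, 1)."""
    if rng is None:
        rng = np.random.default_rng()
    sigma_u = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
               / (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.normal(0.0, sigma_u, size=dim)  # numerator sample
    v = rng.normal(0.0, 1.0, size=dim)      # denominator sample
    return u / np.abs(v) ** (1 / beta)      # heavy-tailed step
```

The heavy tail of the resulting distribution is what produces the occasional long jumps alongside many short local steps.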

By introducing Levy flight, the long step sizes inherent in the mechanism help individuals leap freely across the entire search space. This significantly boosts the jumping capability of the Spiral Updating strategy, increasing the chance of escaping local optima and enhancing global search ability. With Levy flight, whales not only follow the spiral trajectory for local search but also make larger jumps for global exploration, which enhances the diversity and flexibility of whale movements and effectively prevents premature convergence, particularly when the search space is large and the solution space is complex. The Enhanced Spiral Updating strategy is modeled as follows:

(26)

where represents the position of the current best solution; L(s) is the step size of the Levy flight; b is a constant that defines the shape of the logarithmic spiral, usually set to 1; the spiral coefficient l takes values in [−2, 1] and is calculated in Eq 10; X is the whale's position; is the linearly changing parameter in [−2, −1], calculated in Eq 9; Rand is a random number between 0 and 1.

6.5 Enhanced cauchy mutation based on differential evolution

The standard position update strategies of WOA mainly rely on bionic principles such as prey encirclement and spiral updating. These strategies focus the search in the vicinity of the current optimal solution; especially in the later stages of iteration, WOA may gradually converge to a particular region, reducing its ability to explore the search space. Introducing a mutation strategy after the position update injects a new perturbation into the solution space, allowing individual whales to jump out of the current local optimal region and explore a wider solution space. Moreover, for algorithms like WOA that rely on the best solutions in the population to guide the search, a mutation strategy helps individuals escape locked regions so that the solution does not fall into a local optimum, especially when dealing with complex problems with multiple local optima. By incorporating a mutation strategy into WOA, the possible limitations of its late convergence phase can be compensated for, ensuring that the algorithm continues to explore rather than settling on a sub-optimal solution early. We therefore incorporate a novel Cauchy mutation strategy based on Differential Evolution into RWOA.

First, we generate an intermediate solution by Differential Evolution.

(27)

where represents the current position of the individual; represents the position of the current best individual; , and represent the positions of three different individuals randomly selected from the population; F is the factor controlling the scaling of the difference vector, calculated as shown in Eq 28.

(28)

where Rand denotes a random number between 0 and 1.

Subsequently, Cauchy mutation is executed on the generated intermediate solution :

(29)

where δ denotes the perturbation term sampled from the Cauchy distribution:

(30)

Then, after applying the mutation operation to the intermediate solution, boundary checking and adjustment are required; otherwise, population degradation may occur in WOA. Boundary checking is performed as follows:

(31)

where ub and lb represent the upper and lower bounds of the problem respectively.

Finally, if the mutated solution has a better fitness than the original solution , the original solution is replaced with .
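The whole mutation pipeline can be sketched as follows. This is a hedged illustration, not the paper's exact Eqs 27-30: we assume a DE/current-to-best/1 style intermediate solution, a random scaling factor F in [0, 1), the standard Cauchy sample tan(π(rand − 0.5)), boundary clamping, and greedy selection; the function name is ours.

```python
import numpy as np

def de_cauchy_mutation(X, fitness, f_obj, best, lb, ub, rng):
    """Sketch of the Enhanced Cauchy Mutation based on Differential
    Evolution: DE intermediate solution, Cauchy perturbation, boundary
    check (Eq 31), then greedy selection."""
    N, D = X.shape
    X_new, fit_new = X.copy(), fitness.copy()
    for i in range(N):
        r1, r2 = rng.choice([j for j in range(N) if j != i], size=2, replace=False)
        F = rng.random()                                    # random scaling factor
        V = X[i] + F * (best - X[i]) + F * (X[r1] - X[r2])  # DE intermediate solution
        delta = np.tan(np.pi * (rng.random(D) - 0.5))       # Cauchy-distributed perturbation
        U = np.clip(V + delta * V, lb, ub)                  # Cauchy mutation + boundary check
        fu = f_obj(U)
        if fu < fit_new[i]:                                 # keep the mutant only if better
            X_new[i], fit_new[i] = U, fu
    return X_new, fit_new
```

Because of the greedy selection in the last step, the mutation can never worsen an individual's fitness, so the perturbation adds exploration at no cost to convergence.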

6.6 Redesign of parameter a

Although integrating these strategies improves WOA’s performance, the traditional linear update of parameter a from 2 to 0 no longer meets RWOA’s needs. The linear method results in a smooth behavior change, especially at later stages, where the minimal reduction slows convergence. To address this, a Sigmoid-based parameter a update was proposed to balance global exploration and local exploitation, as shown in Eq 32. Fig 7 compares the proposed Sigmoid method with the original linear one.

thumbnail
Fig 7. Comparison between the update methods of the original parameter a and the proposed a (β=10, β=15, β=20 and β=25).

https://doi.org/10.1371/journal.pone.0320913.g007

The Sigmoid update reduces a slowly at first, rapidly in the middle, and slowly again in the later stages. This nonlinear schedule enables distinct convergence behaviors: the slow early reduction supports broad exploration, the rapid mid-stage decrease accelerates convergence, and the slower late-stage reduction balances exploration and exploitation for fine local search. This strategy improves adaptability and final solution accuracy, making the algorithm more effective and reliable for complex problems.

(32)

where β is the scaling factor of the Sigmoid function and value of β was tested in the parameter sensitivity analysis experiment; t is the current iteration number; T is the maximum number of iterations.
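One plausible form of such a schedule, matching the described behavior (the paper's exact Eq 32 is not reproduced; this is an assumed shape), is:

```python
import math

def sigmoid_a(t, T, beta=25.0):
    """Assumed Sigmoid-based schedule for parameter a: a falls from ~2 to
    ~0, slowly at the start and end and rapidly mid-run; beta controls
    the steepness of the mid-run drop."""
    return 2.0 / (1.0 + math.exp(beta * (t / T - 0.5)))
```

With β = 25 and T = 500, a stays near 2 for the early iterations, passes through 1 exactly at mid-run, and is nearly 0 at the end, which reproduces the slow-fast-slow decrease described above.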

The pseudo-code of the complete RWOA is provided in Algorithm 2.

Algorithm 2: RWOA

Begin

  (Good Nodes Set Initialization)

  Initialize the population using Good Nodes Set method;

  Initialize the parameters (T, N, p, etc.);

  Calculate the fitness of each search agent;

  The best search agent is ;

    while t < T

     for each search agent

      Update a, A, C, l, and p;

      if p < 0.5

       if |A| < 1

        (Spiral Encircling Prey)

       Update the position of the current search agent by Eq 19;

       else

        (Hybrid Collaborative Exploration)

       Update the position of the current search agent by Eq 18;

       end if

      else

        (Enhanced Spiral Updating)

       Update the position of the current search agent by Eq 26;

      end if

     end for

        (Enhanced Cauchy Mutation based on Differential Evolution)

     for each search agent

      Generate an intermediate solution by DE through Eq 27;

      Execute Cauchy mutation to generate another intermediate solution by Eq 29;

       if is within the boundary

        = ;

       else

        if exceeds the upper boundary ub

        set it to the upper boundary ub;

        else if exceeds the lower boundary lb

        set it to the lower boundary lb;

        end if

       end if

     end for

     Calculate the fitness of each search agent;

     Update if there is a better solution;

     t = t + 1

    end while

  return

  End

6.7 Time complexity analysis

Assume that the time complexity of pseudo-random number initialization in WOA is O(ND), where N is the population size and D is the problem dimension. During each iteration, the total time complexity of position updates is O(ND), so the complexity per iteration is O(ND). If the algorithm iterates T times, the total time complexity of WOA is calculated as:

Total Time Complexity 1 = Initialization + T × (complexity per iteration) = O(ND) + T × O(ND) = O(T × ND)

Assume that the time complexity of Good Nodes Set Initialization in RWOA is O(ND). During each iteration, the total time complexity of position updates is O(ND) and that of the Enhanced Cauchy Mutation based on Differential Evolution is O(ND), so the complexity per iteration remains O(ND). If the algorithm iterates T times, the total time complexity of RWOA is calculated as:

Total Time Complexity 2 = Initialization + T × (complexity per iteration) = O(ND) + T × O(ND) = O(T × ND)

In summary, the time complexities of RWOA and WOA are the same: both are O(T × ND).

7 Simulation experiments and analysis

The experimental environment was Windows 11 (64-bit) with an Intel(R) Core(TM) i5-8300H CPU @ 2.30 GHz and 8 GB of RAM; the simulation platform was Matlab R2023a.

In order to validate the performance and effectiveness of the RWOA, the following four experiments are designed to test the algorithms on selected classical benchmark functions [39]:

  • Experiment 1: Each of the six improvement strategies was removed from RWOA respectively and an ablation study was performed on the 23 classical benchmark functions in Table 2;
  • Experiment 2: A parameter sensitivity analysis experiment was performed, and four values of the scaling factor β in Eq 32 were tested on the benchmark functions to determine the value of β that best balanced the exploration and exploitation capabilities of RWOA;
  • Experiment 3: A qualitative analysis experiment was performed by applying RWOA on the benchmark functions to comprehensively evaluate the performance, robustness and exploration-exploitation balance of RWOA in different types of problems, by assessing search behavior, exploration-exploitation capability and population diversity;
  • Experiment 4: RWOA was tested against other state-of-the-art metaheuristic algorithms (basic metaheuristic algorithms and enhanced metaheuristic algorithms) on the classical benchmark functions, to verify the superiority of RWOA.
thumbnail
Table 1. Current research on improved metaheuristic algorithms.

https://doi.org/10.1371/journal.pone.0320913.t001

7.1 Ablation study

In this ablation study, we excluded each of the six improvement strategies from the RWOA:

  • RWOA1: RWOA without Good Nodes Set Initialization;
  • RWOA2: RWOA without Hybrid Collaborative Exploration strategy;
  • RWOA3: RWOA without Spiral Encircling Prey strategy;
  • RWOA4: RWOA without Enhanced Spiral Updating strategy;
  • RWOA5: RWOA without Enhanced Cauchy Mutation based on Differential Evolution;
  • RWOA6: RWOA with the original update method of parameter a;

The number of iterations T=500 and the population size N=30 were set uniformly. Each algorithm was run 30 times independently on the 23 classical benchmark functions for performance analysis. The Friedman values of the algorithms were recorded, and the iteration curves are shown in Fig 8.

thumbnail
Fig 8. Iteration curves of the algorithms in ablation study.

https://doi.org/10.1371/journal.pone.0320913.g008

As shown in Fig 8, the Good Nodes Set Initialization generated whale individuals that were more uniformly distributed in the solution space. This high-quality population provided a significant advantage when addressing problems such as F5-F6, F12-F15, and F21-F23. The Hybrid Collaborative Exploration strategy integrated RBMO’s cluster hunting strategy, allowing RWOA to update positions by incorporating the positional information of multiple individual whales. This avoided the limitations associated with single-position updates, enabling RWOA to explore a larger search range. Consequently, the global search capability of WOA was enhanced, strengthening its ability to solve complex functions. The Spiral Encircling Prey Strategy, incorporating a nonlinear step size through Spiral flight, increased the randomness and introduced nonlinear fluctuations in position updates. This addition brought both periodicity and unpredictability to the algorithm, allowing it to consistently avoid local optima and prevent premature convergence, particularly when tackling complex problems such as F5-F6 and F15-F17. The Enhanced Spiral Updating strategy, which introduced Levy flight, allowed whale individuals to effectively escape local optima while spiraling upward. The incorporation of step size of Levy flight enhanced the algorithm’s global exploration ability and significantly improved its convergence speed. This strategy is particularly advantageous when dealing with functions such as F1-F4 and F9-F11. The Enhanced Cauchy Mutation based on Differential Evolution integrated the concepts of Differential Evolution and Cauchy mutation, helping the algorithm to generate superior solutions after position updates. The new update method for parameter a based on the Sigmoid function, effectively balanced exploration and exploitation. This novel convergence behavior improved the algorithm’s adaptability and optimization precision. 
As shown in Table 3, the average Friedman value of RWOA is 2.9217, ranking first. This indicates that RWOA was the optimal choice.

thumbnail
Table 3. Friedman values of the algorithms in the ablation study.

https://doi.org/10.1371/journal.pone.0320913.t003

7.2 Parameter sensitivity analysis experiment

The purpose of the parameter sensitivity analysis experiment is to observe changes in algorithm performance when the scaling factor β is adjusted, and thereby select the most suitable parameter value. In Eq 32, β controls the update rate of parameter a, which determines the shape of the Sigmoid function. This parameter affects the rate at which a decreases from 2 to 0, directly controlling the balance between exploration and exploitation during the iterative process. The larger the value of β, the more abrupt the change of a from 2 to 0; conversely, the smaller the value of β, the smoother the change of a. The main significance of this experiment is to optimize the performance of the algorithm, enhance its adaptability, and ensure that it performs well across different optimization problems. By adjusting β, the algorithm's balance between global search in the early stages and local search in the later stages can be better controlled, thereby affecting the final optimization outcome. In this experiment, β values of 10, 15, 20, and 25 were selected for testing. The number of iterations T was set to 500, and the population size N to 30. Each algorithm was run 30 times individually on the 23 classical benchmark functions for performance analysis, and the Friedman values were recorded. As shown in Table 4, the RWOA with β = 25 achieved the smallest average Friedman value and ranked first. Therefore, β = 25 was chosen for this study.

7.3 Qualitative analysis experiment

In the qualitative analysis experiment, we set the number of iterations to T = 500 and the population size to N = 30, and ran RWOA independently on the 23 benchmark functions in Table 2 to analyze its search history, exploration-exploitation ratio, and population diversity. In addition, we provide the landscapes of the benchmark functions and the iteration curves for reference. The results of the qualitative analysis are shown in Figs 9, 10 and 11, which include:

  • Landscapes of benchmark functions;
  • Search history of the whale population;
  • Exploration-exploitation ratio;
  • Population diversity curves;
  • Iteration curves.
thumbnail
Fig 9. Results of qualitative analysis experiment (F1-F8).

https://doi.org/10.1371/journal.pone.0320913.g009

thumbnail
Fig 10. Results of qualitative analysis experiment (F9-F16).

https://doi.org/10.1371/journal.pone.0320913.g010

thumbnail
Fig 11. Results of qualitative analysis experiment (F17-F23).

https://doi.org/10.1371/journal.pone.0320913.g011

The search history represents the positions and distribution of the whale individuals. In the search history graphs, the red circles indicate the global optimum position, while the blue circles represent the search history of the whale individuals. Notably, RWOA effectively explored the entire search space. In uni-modal functions such as F1-F4, RWOA exhibited fast convergence, with whale individuals finding the optimal solution within a limited number of iterations, leading to a concentrated distribution of individuals in the solution space. In the case of complex multi-modal functions like F8 and F17-F23, where many local optima exist, RWOA first performed rapid global exploration and then refined its search in promising regions. As a result, the whale individuals traversed a large portion of the solution space, and the search history was mainly concentrated around the optimal solution. In terms of balancing exploration and exploitation, RWOA performed excellently, effectively managing this trade-off. When dealing with functions like F1-F4 and F9-F11, RWOA showed a rapid increase in the exploitation ratio during the early iterations, demonstrating its fast convergence. In the case of functions like F14-F23, RWOA exhibited a higher exploitation ratio in the early iterations, with a slow decline thereafter. This highlights its robust global exploration ability alongside its local exploitation capability. For functions such as F5-F8 and F14-F23, the population diversity curve of RWOA consistently fluctuated and maintained high values. This indicates that RWOA can maintain high population diversity when handling complex multi-modal functions, effectively preventing premature convergence caused by population clustering in certain areas.

7.4 Comparative experiment of different algorithms

To further validate the superiority of RWOA, we selected Zebra Optimization Algorithm (ZOA) [40], Rime Optimization Algorithm (RIME) [41], Improved Sand Cat Swarm Optimization Algorithm (ISCSO) [42], Grey Wolf Optimizer (GWO) [8], Harris Hawks Optimization (HHO) [4], Attraction-Repulsion Optimization Algorithm (AROA) [43], MWOA [44], MSWOA [45], and Whale Optimization Algorithm (WOA) [3] for comparison, and tested them on the benchmark functions listed in Table 2. The parameter settings for each algorithm are shown in Table 5. The number of iterations was uniformly set to T=500, and the population size to N=30. Each algorithm was independently run 30 times on 23 benchmark functions, and the average fitness (Ave), standard deviation (Std), p-values of the Wilcoxon rank-sum test, and Friedman values were recorded for performance analysis. The experimental results are shown in Fig 12, Tables 7 and 8.

thumbnail
Table 7. Comparative results of each algorithm in comparative experiment.

https://doi.org/10.1371/journal.pone.0320913.t007

thumbnail
Fig 12. Iteration curves of different algorithms in comparison experiment.

https://doi.org/10.1371/journal.pone.0320913.g012

As can be seen from Fig 12 and Tables 7 and 8, the average fitness (Ave) and standard deviation (Std) of RWOA are superior to those of the other algorithms on F1-F13 and F15-F23. This shows that RWOA had better robustness, adaptability and stability than the other algorithms when dealing with uni-modal problems and most complex multi-modal problems. On F1 and F3, both MWOA and RWOA found the optimal solution within a limited number of iterations. On F9-F11, RWOA performed as well as ZOA, ISCSO, HHO and MWOA, all achieving the best results, which further demonstrates that RWOA can find the optimal solution of complex problems without losing to other algorithms. However, on F14, although RWOA reached the optimal solution in many runs, its overall average fitness and standard deviation were slightly worse than those of RIME, indicating that RWOA was less stable than RIME when solving problems like F14. This confirms the well-known 'No Free Lunch' (NFL) theorem: RWOA is not perfect; it performs well on the vast majority of benchmark functions but may perform slightly worse than individual algorithms on a few functions.

However, in the performance evaluation of optimization algorithms, the convergence and stability of an algorithm are usually measured by average fitness and standard deviation, which cannot establish whether the observed differences are statistically significant. Comparing algorithms by average fitness and standard deviation alone therefore has limitations, so non-parametric tests are often introduced: the Wilcoxon rank-sum test and the Friedman test.

The Wilcoxon rank-sum test is a non-parametric test used to assess the difference between the medians of two independent samples. When comparing two optimization algorithms, the Wilcoxon rank-sum test helps determine whether the difference between their results is statistically significant. If the p-value is less than a chosen significance level (typically 0.05), the performance difference between the two algorithms can be considered significant rather than due to random error. As can be seen from Table 8, RWOA differed significantly from RIME, GWO and MWOA on all benchmark functions.

The Friedman test is a non-parametric analysis of variance used to compare the performance of multiple algorithms across different test problems and to identify whether statistically significant differences exist between them. By comparing multiple algorithms over multiple data sets or test environments, the Friedman test reduces bias between samples, provides a fairer comparison, and avoids the multiple-comparison problem that arises when algorithms are compared pairwise. As shown in Table 8, the average Friedman value of RWOA is 1.6833, ranking first among the selected SOTA algorithms.
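Both tests are available off the shelf; the sketch below applies them to synthetic final-fitness samples (illustrative data only, not the paper's results) using SciPy's `ranksums` and `friedmanchisquare`:

```python
import numpy as np
from scipy.stats import ranksums, friedmanchisquare

# Hypothetical final-fitness samples of three algorithms over 30 runs.
# algo_a stands in for RWOA; the values are synthetic.
rng = np.random.default_rng(0)
algo_a = rng.normal(0.01, 0.005, 30)
algo_b = rng.normal(0.50, 0.100, 30)
algo_c = rng.normal(0.60, 0.150, 30)

# Wilcoxon rank-sum test: pairwise comparison, significant if p < 0.05
stat, p = ranksums(algo_a, algo_b)

# Friedman test: simultaneous comparison of three or more algorithms
chi2, p_f = friedmanchisquare(algo_a, algo_b, algo_c)
print(f"rank-sum p = {p:.3g}, Friedman p = {p_f:.3g}")
```

With clearly separated samples such as these, both p-values fall far below the 0.05 threshold, so the null hypothesis of equal performance is rejected.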

Table 9 summarizes the performance of RWOA and the other algorithms using the overall effectiveness (OE) metric. In Table 9, w indicates a win, t a tie and l a loss. The OE of each algorithm is computed by Eq 33 [46].

(33)

where N is the total number of tests and L is the total number of losing tests for each algorithm.
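Given those definitions, Eq 33 reduces to the fraction of non-losing tests expressed as a percentage:

```python
def overall_effectiveness(n_tests, n_losses):
    """Overall effectiveness as defined in Eq 33: OE = (N - L) / N * 100%."""
    return (n_tests - n_losses) / n_tests * 100.0

# An algorithm losing 0 of 23 benchmark tests scores 100.00%,
# while one losing 5 of 23 scores about 78.26%.
```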

thumbnail
Table 8. Result of non-parametric tests of different algorithms.

https://doi.org/10.1371/journal.pone.0320913.t008

RWOA, with an overall effectiveness of 100.00%, was the most effective algorithm and was highly competitive with the other SOTA algorithms on the benchmark functions. These results reveal the ability of RWOA to handle optimization problems. In summary, considering average fitness and standard deviation, the p-values of the Wilcoxon rank-sum test, and the Friedman values, RWOA performed best, which demonstrates its superiority.

8 Engineering design optimization

In the engineering design optimization simulations, we employed the Penalty Function Method to handle the constraints of the optimization problems. The Penalty Function Method is a widely used and effective constraint-handling technique. It transforms a constrained optimization problem into an unconstrained one by incorporating a penalty term into the objective function, simplifying the solution process. When a variable violates a constraint, the penalty function imposes a significant penalty, thereby guiding the algorithm to favor solutions that satisfy the constraints.
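A minimal sketch of a static penalty formulation, assuming inequality constraints of the form g(x) ≤ 0 and an assumed penalty weight `rho` (the exact weight used in the experiments is not stated):

```python
import numpy as np

def penalized(f, constraints, x, rho=1e6):
    """Static penalty method: each violated inequality constraint
    g(x) <= 0 adds rho * max(0, g(x))**2 to the objective, turning the
    constrained problem into an unconstrained one."""
    violation = sum(max(0.0, g(x)) ** 2 for g in constraints)
    return f(x) + rho * violation

# toy example: minimize x^2 subject to x >= 1, i.e. g(x) = 1 - x <= 0
f = lambda x: float(x[0] ** 2)
g = lambda x: 1.0 - x[0]
```

Inside the feasible region the penalty term vanishes and the original objective is returned; outside it, the large weight makes infeasible solutions unattractive to the optimizer.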

A comparative study was conducted between RWOA and other algorithms including Zebra Optimization Algorithm (ZOA) [40], Rime Optimization Algorithm (RIME) [41], Improved Sand Cat Swarm Optimization Algorithm (ISCSO) [42], Grey Wolf Optimizer (GWO) [8], Harris Hawks Optimization (HHO) [4], Attraction-Repulsion Optimization Algorithm (AROA) [43], MWOA [44], MSWOA [45] and Whale Optimization Algorithm (WOA) [3]. Parameter settings for each algorithm are shown in Table 5, with the maximum number of iterations T=500 and population size N=30 uniformly. Each algorithm was run independently 30 times, recording the average fitness value (Ave) and standard deviation (Std) for performance analysis. The experimental results were presented in Figs 14, 16, 18, 20, 22, 24, 26, 28, 30 and Table 10.

thumbnail
Table 9. Effectiveness of RWOA and other SOTA algorithms.

https://doi.org/10.1371/journal.pone.0320913.t009

thumbnail
Table 10. Average fitness and standard deviation of each algorithm across the seven engineering design problems.

https://doi.org/10.1371/journal.pone.0320913.t010

8.1 Three-bar truss

The three-bar truss is a simple structural system consisting of three members, as shown in Fig 13. It is commonly used to support concentrated loads and is widely applied in engineering fields such as bridges, buildings, and aerospace. The three-bar truss design problem is a classic structural optimization problem, often used to study the mechanical behavior of simple structures under external loading conditions. In the three-bar truss design problem, the objective is to optimize the cross-sectional areas of the truss members to minimize material usage while ensuring that the structure meets the required mechanical performance.

This optimization problem involves a nonlinear objective function, three nonlinear inequality constraints, and two continuous decision variables and . The objective function for the three-bar truss design problem can be described as follows:

Variable:

Minimize:

(34)

Subject to:

(35)(36)(37)

Where:

Variable range:
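For reference, the widely used benchmark formulation of this problem can be sketched as follows. The constants (bar length l = 100 cm, load P = 2 kN/cm², allowable stress σ = 2 kN/cm², 0 ≤ x₁, x₂ ≤ 1) are those of the common literature version and are assumed to match Eqs 34-37:

```python
import math

# Assumed constants of the standard three-bar truss benchmark
L_BAR, P, SIGMA = 100.0, 2.0, 2.0

def truss_objective(x1, x2):
    """Structure volume to be minimized (x1, x2 are cross-sectional areas)."""
    return (2.0 * math.sqrt(2.0) * x1 + x2) * L_BAR

def truss_constraints(x1, x2):
    """Stress constraints; the design is feasible iff all values <= 0."""
    s2 = math.sqrt(2.0)
    g1 = (s2 * x1 + x2) / (s2 * x1**2 + 2.0 * x1 * x2) * P - SIGMA
    g2 = x2 / (s2 * x1**2 + 2.0 * x1 * x2) * P - SIGMA
    g3 = 1.0 / (s2 * x2 + x1) * P - SIGMA
    return g1, g2, g3
```

At the well-known near-optimal design x ≈ (0.7887, 0.4082) the first stress constraint is active and the volume is about 263.9.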

The experimental results are shown in Fig 14 and Table 10. From Table 10, it can be observed that RWOA significantly outperformed other algorithms in terms of stability, with the best optimization accuracy among all algorithms. This indicates that RWOA had a significant advantage when handling such optimization problems.

thumbnail
Fig 14. Iteration curves of the algorithms in three-bar truss design.

https://doi.org/10.1371/journal.pone.0320913.g014

8.2 Tension/compression spring

The tension/compression spring, as shown in Fig 15, plays a crucial role in modern industry, with widespread applications in fields such as automotive, home appliances, and electronics. Its design optimization not only helps improve product performance and extend service life, but also reduces costs and enhances manufacturing efficiency. Through sound design optimization, the spring can achieve optimal performance in dynamic working environments and meet various stringent requirements. The optimization objective of the tension/compression spring design problem is to minimize the spring's mass, subject to constraints on shear stress, deflection, surge frequency, and outer diameter. There are three design variables: wire diameter d, mean coil diameter D, and number of active coils N. There are also four constraints, to . The mathematical model of the problem is as follows:

Variable:

Minimize:

(38)

Subject to:

(39)(40)(41)(42)

Where:

Variable range:
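As a reference, the commonly used benchmark formulation of this problem (assumed to match Eqs 38-42; the constants are those of the standard literature version, with x₁ = d, x₂ = D, x₃ = N) can be sketched as:

```python
def spring_objective(x1, x2, x3):
    """Spring weight to be minimized: x1 = wire diameter d, x2 = mean
    coil diameter D, x3 = number of active coils N."""
    return (x3 + 2.0) * x2 * x1**2

def spring_constraints(x1, x2, x3):
    """Deflection, shear-stress, surge-frequency and outer-diameter
    constraints; the design is feasible iff all values <= 0."""
    g1 = 1.0 - (x2**3 * x3) / (71785.0 * x1**4)
    g2 = ((4.0 * x2**2 - x1 * x2) / (12566.0 * (x2 * x1**3 - x1**4))
          + 1.0 / (5108.0 * x1**2) - 1.0)
    g3 = 1.0 - 140.45 * x1 / (x2**2 * x3)
    g4 = (x1 + x2) / 1.5 - 1.0
    return g1, g2, g3, g4
```

At the well-known reference design x ≈ (0.051689, 0.356718, 11.289) the weight is about 0.012665 and the deflection and shear constraints are active.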

The experimental results are presented in Fig 16, and Table 10. As shown in Table 10, the stability of RWOA in the tension/compression spring design problem significantly surpassed other algorithms, and it achieved the highest optimization accuracy among all algorithms. This indicates that RWOA had a significant advantage in handling such optimization problems.

thumbnail
Fig 16. Iteration curves of the algorithms in tension/compression spring design.

https://doi.org/10.1371/journal.pone.0320913.g016

8.3 Speed reducer

A speed reducer is a mechanical transmission device and one of the key components of a gearbox, shown in Fig 17. It is primarily used to reduce the rotational speed of an electric motor or other power sources while increasing the output torque. The reducer achieves this speed reduction through gears, worm gears, or other transmission mechanisms. It is typically applied in situations where there is a need to decrease the rotational speed, increase torque, or adjust the direction of motion.

In the optimization design of a reducer, the goal is to minimize the weight of the reducer. This problem involves seven variables, which are as follows: the width of the gear teeth , the gear module , the number of teeth on the small gear , the length of the first shaft between the bearings , the length of the second shaft between the bearings , the diameter of the first shaft , and the diameter of the second shaft . Furthermore, this problem also involves eleven constraints, to . The mathematical formulation of the problem is as follows,

Variable:

Minimize:

(43)

Subject to:

(44)(45)(46)(47)(48)(49)(50)(51)(52)(53)(54)

Where:

Variable range:

The experimental results are presented in Fig 18, and Table 10. As shown in Table 10, the stability of RWOA in the Speed Reducer design problem significantly surpassed other algorithms, and it achieved the highest optimization accuracy among all algorithms. This indicates that RWOA had a significant advantage in handling such optimization problems.

thumbnail
Fig 18. Iteration curves of the algorithms in speed reducer design.

https://doi.org/10.1371/journal.pone.0320913.g018

8.4 Cantilever beam

A cantilever beam is a common structural form, fixed at one end and free at the other, as shown in Fig 19. The cantilever beam design problem is a classic engineering structural optimization problem, with the objective of minimizing material usage or beam weight while satisfying constraints on strength, stability, and other factors. This optimization problem is widely used in civil engineering, mechanical design, and aerospace fields.

The cantilever beam consists of five hollow square cross-section units. As shown in Fig 19, each unit is defined by one variable, and the thickness is constant. Therefore, the design problem includes five structural parameters, which correspond to five decision variables, denoted as , , , , . The objective function for the cantilever beam design problem can be expressed as:

Variable:

Minimize:

(55)

Subject to:

(56)

Where:

Variable range:
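The commonly used benchmark formulation of this problem (assumed to match Eqs 55-56: the weight is proportional to the sum of the five section parameters, with a single deflection constraint) can be sketched as:

```python
def cantilever_objective(x):
    """Beam weight, proportional to the sum of the five section parameters."""
    return 0.0624 * sum(x)

def cantilever_constraint(x):
    """Single deflection constraint; the design is feasible iff <= 0."""
    x1, x2, x3, x4, x5 = x
    return (61.0 / x1**3 + 37.0 / x2**3 + 19.0 / x3**3
            + 7.0 / x4**3 + 1.0 / x5**3 - 1.0)
```

At the well-known reference design x ≈ (6.016, 5.309, 4.494, 3.501, 2.153) the constraint is active and the weight is about 1.340.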

The experimental results are presented in Fig 20 and Table 10. As shown in Table 10, the stability of RWOA in the Cantilever Beam design problem significantly surpassed other algorithms, and it achieved the highest optimization accuracy among all algorithms. This indicates that RWOA had a significant advantage in handling such optimization problems.

thumbnail
Fig 20. Iteration curves of the algorithms in cantilever beam design.

https://doi.org/10.1371/journal.pone.0320913.g020

8.5 Pressure vessel

A pressure vessel is a common mechanical structure used in fields such as chemical engineering, aerospace, and medical applications. The pressure vessel design problem is a classic structural optimization problem, where the goal is to minimize the manufacturing cost of the pressure vessel, including material, forming, and welding costs. The design of the pressure vessel is shown in Fig 21, with caps sealing both ends of the vessel; the cap at one end is hemispherical. and represent the wall thickness of the cylindrical section and the head, respectively, while is the inner diameter of the cylindrical section, and is the length of the cylindrical section, excluding the head. Thus, , , , and are the four optimization variables of the pressure vessel design problem. The objective function and four optimization constraints are represented as follows:

Variable:

Minimize:

(57)

Subject to:

(58)(59)(60)(61)

Where:

Variable range:
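As a hedged sketch, the widely used pressure vessel formulation from the benchmark literature can be written in Python as below; the cost coefficients and constraint constants are the commonly cited ones and are assumed, not verified against Eqs (57)–(61) of this paper:

```python
import math

def pv_cost(x):
    """Manufacturing cost in the classic benchmark formulation."""
    Ts, Th, R, L = x
    return (0.6224 * Ts * R * L + 1.7781 * Th * R**2
            + 3.1661 * Ts**2 * L + 19.84 * Ts**2 * R)

def pv_constraints(x):
    """Each g_i(x) <= 0 when the design is feasible."""
    Ts, Th, R, L = x
    return [
        -Ts + 0.0193 * R,                                       # g1: shell thickness
        -Th + 0.00954 * R,                                      # g2: head thickness
        -math.pi * R**2 * L - (4/3) * math.pi * R**3 + 1296000,  # g3: minimum volume
        L - 240,                                                 # g4: length limit
    ]

# A frequently reported near-optimal design (approximate):
x_ref = [0.8125, 0.4375, 42.0984, 176.6366]
print(round(pv_cost(x_ref), 1))  # cost close to the widely cited 6059.7
```

Note that g1 and g3 are essentially active at this design, which is typical of the reported optima for this problem.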

The experimental results are presented in Fig 22 and Table 10. As shown in Table 10, the stability of RWOA on the pressure vessel design problem significantly surpassed that of the other algorithms, and it achieved the highest optimization accuracy among all algorithms. This indicates that RWOA had a significant advantage in handling such optimization problems.

Fig 22. Iteration curves of the algorithms in pressure vessel design.

https://doi.org/10.1371/journal.pone.0320913.g022

8.6 I-beam

An I-beam, named for its cross-sectional shape resembling the letter I, is a type of steel section with high strength and low self-weight, widely used in various engineering structures. Its superior mechanical properties make it applicable in multiple fields, particularly in structures subjected to bending moments and axial forces. The objective of I-beam design optimization is to select the geometric parameters of the I-beam (such as width, height, and thickness) so as to maximize its performance. This typically involves maximizing its load-bearing capacity, minimizing material usage, controlling structural deformations, and reducing costs. Optimizing I-beam design can enhance the safety, economy, and efficiency of structures. As shown in Fig 23, the I-beam design optimization problem involves four variables (h, b, tw, and tf) and two constraints (g1 and g2). h, b, tw, and tf represent the web height, flange width, web thickness, and flange thickness of the I-beam, respectively. The objective function for the I-beam design problem can be described as:

Variable:

Maximize:

(62)

Subject to:

(63)(64)

Where:

Variable range:
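A common variant of this benchmark in the literature minimizes the beam's vertical deflection under a 5000-unit load (equivalently, maximizes stiffness) subject to a 300 cm² cross-section area cap. The sketch below uses that widely cited formulation and is an assumption, not a copy of Eqs (62)–(64):

```python
def ibeam_deflection(x):
    """Vertical deflection f = 5000 / I, where I is the cross-section's
    second moment of area about the bending axis."""
    h, b, tw, tf = x  # web height, flange width, web thickness, flange thickness
    inertia = (tw * (h - 2*tf)**3) / 12 \
              + (b * tf**3) / 6 \
              + 2 * b * tf * ((h - tf) / 2)**2
    return 5000.0 / inertia

def ibeam_area(x):
    """Cross-sectional area: two flanges plus the web; the usual
    constraint caps it at 300."""
    h, b, tw, tf = x
    return 2 * b * tf + tw * (h - 2*tf)

# A frequently reported near-optimal design (approximate):
x_ref = [80.0, 50.0, 0.9, 2.32]
```

At this design the area constraint is essentially active (about 299.8 of 300), which matches how the reported optima behave.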

The experimental results are shown in Fig 24 and Table 10. As shown in Table 10, RWOA significantly outperformed the other algorithms in terms of both optimization accuracy and stability on the I-beam design problem, demonstrating its superior solving capability for this type of problem.

Fig 24. Iteration curves of the algorithms in I-beam design.

https://doi.org/10.1371/journal.pone.0320913.g024

8.7 Piston lever

In a factory, piston levers are critical components of various important pieces of equipment, such as pumps and compressors, where their performance directly impacts the reliability and efficiency of the entire system. Fig 25 shows the structure of a piston lever. The piston lever design problem focuses on determining the optimal piston dimensions and material selection to ensure maximum performance in transmission systems. This design influences the efficiency, stability, and durability of mechanical systems. The optimization task seeks to balance dimensions, weight, material choice, and manufacturing cost in order to find the optimal piston size and material that maximize system efficiency and cost-effectiveness. Optimizing these key components is therefore essential to ensuring optimal system performance. The optimization involves four variables: the piston length x1, the piston diameter x2, the material property x3, and the transmission rod length x4, which affect the system's mechanical properties, dynamic performance, and cost. The piston lever design problem can be described as:

Variable:

Minimize:

(65)

Subject to:

(66)(67)(68)(69)

Variable range:

Where:
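Whatever the exact piston lever model, constrained designs like this are typically passed to WOA-style optimizers through a penalty function that folds constraint violations into the fitness value. A minimal sketch, with an illustrative penalty weight and a hypothetical toy constraint (neither is taken from this paper):

```python
def penalized(objective, constraints, weight=1e6):
    """Wrap a constrained minimization problem as an unconstrained one:
    each violated constraint g_i(x) > 0 adds weight * g_i(x)^2 to the objective."""
    def fitness(x):
        violation = sum(max(0.0, g(x))**2 for g in constraints)
        return objective(x) + weight * violation
    return fitness

# Hypothetical toy problem: minimize x0 + x1 subject to x0 * x1 >= 4,
# written in the standard g(x) <= 0 form as 4 - x0*x1 <= 0.
f = penalized(lambda x: x[0] + x[1], [lambda x: 4.0 - x[0] * x[1]])
print(f([2.0, 2.0]))  # -> 4.0 (feasible, no penalty)
```

A static quadratic penalty like this is the simplest option; adaptive or feasibility-rule schemes are common alternatives when constraints are hard to satisfy.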

The experimental results are shown in Fig 26 and Table 10. As shown in Table 10, RWOA significantly outperformed the other algorithms in terms of both optimization accuracy and stability on the piston lever design problem, demonstrating its superior solving capability for this type of problem.

Fig 26. Iteration curves of the algorithms in piston lever design.

https://doi.org/10.1371/journal.pone.0320913.g026

8.8 Reactor network

Fig 27 shows the structure of a reactor network. The reactor network design problem aims to optimize the configuration of chemical reactors in chemical plants in order to achieve a more efficient chemical reaction process. This involves selecting reactor types, configuring their arrangements, and allocating fluid flow rates, with the objective of maximizing product concentration. Optimization is achieved by adjusting the configuration and operating conditions of the reactors to improve reaction efficiency, reduce energy consumption, or enhance product quality. The problem includes four variables representing the concentrations at different reaction stages: x1 for the reactant concentration in the first reactor, x2 for the product concentration in the first reactor, x3 for the reactant concentration in the second reactor, and x4 for the final product concentration. The constraints g1 through g4 are defined as follows: g1 represents the balance of reactants and products in the first reactor, g2 enforces mass conservation between the first and second reactors, g3 maintains equilibrium for the reactant concentration between the reactors, and g4 ensures mass conservation between intermediate and final products. The reactor network design problem is modeled below.

Variable:

Minimize:

(70)

Subject to:

(71)(72)(73)(74)(75)

Where:

Variable range:

The experimental results are shown in Fig 28 and Table 10. As shown in Table 10, RWOA significantly outperformed the other algorithms in terms of both optimization accuracy and stability on the reactor network design problem, demonstrating its superior solving capability for this type of problem.

Fig 28. Iteration curves of the algorithms in reactor network design.

https://doi.org/10.1371/journal.pone.0320913.g028

8.9 Gas transmission system

The gas transmission system, as shown in Fig 29, is a crucial component of the modern energy supply chain, widely used in various industries, urban natural gas supply, and multinational energy transportation. Since the transportation of natural gas relies on gas transmission compressors and pipeline networks, the design optimization of these devices is essential to ensuring energy transmission efficiency and reducing energy waste. The objective of the gas transmission compressor optimization problem is to design and optimize the parameters of the natural gas transmission compressor so that it delivers optimal performance under different working conditions, reduces energy consumption, extends service life, and minimizes costs. The problem involves four design variables and one constraint. The meanings of the variables x1 to x4 are: x1 indicates the length between compressor stations; x2 indicates the compression ratio (the pressure ratio across the compressor); x3 indicates the pipe inside diameter; x4 indicates the gas speed on the output side. The mathematical modeling of the gas transmission compressor optimization problem is as follows:

Variable:

Minimize:

(76)

Subject to:

(77)

Where:

Variable range:
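As a hedged sketch, the classic gas transmission compressor cost function from the benchmark literature can be coded as below; the coefficients are the commonly cited ones and are assumed here, not verified against Eqs (76)–(77) of this paper:

```python
def gas_cost(x):
    """Total cost in the classic gas transmission compressor benchmark."""
    x1, x2, x3, x4 = x
    return (8.61e5 * x1**0.5 * x2 * x3**(-2/3) * x4**(-0.5)
            + 3.69e4 * x3
            + 7.72e8 * x2**0.219 / x1
            - 765.43e6 / x1)

def gas_constraint(x):
    """Feasible when g(x) <= 0: x4 / x2^2 + 1 / x2^2 - 1."""
    _, x2, _, x4 = x
    return x4 / x2**2 + 1.0 / x2**2 - 1.0

# A frequently reported near-optimal design (approximate):
x_ref = [50.0, 1.178, 24.59, 0.389]
```

Evaluating `gas_cost` at this reference design reproduces the cost value of roughly 2.965e6 that is widely reported for this problem, which is a useful sanity check on any implementation.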

The experimental results are shown in Fig 30 and Table 10. As shown in Table 10, RWOA significantly outperformed the other algorithms in terms of both optimization accuracy and stability on the gas transmission system design problem, demonstrating its superior solving capability for this type of problem.

Fig 30. Iteration curves of the algorithms in gas transmission system design.

https://doi.org/10.1371/journal.pone.0320913.g030

9 Conclusion and future work

RWOA introduced the Good Nodes Set method to generate uniformly distributed populations. It incorporated newly designed strategies, including the Hybrid Collaborative Exploration strategy, Spiral Encircling Prey strategy, Enhanced Spiral Updating strategy, and Enhanced Cauchy Mutation based on Differential Evolution. RWOA also incorporated a newly designed update method for parameter a, which helped balance global exploration and local exploitation.

To validate the effectiveness of the RWOA improvements, four experiments were conducted:

  1. We systematically removed six of the improvements in RWOA and compared the results with the complete RWOA to evaluate the effectiveness of each individual improvement strategy;
  2. A Parameter Sensitivity Analysis experiment was conducted to select the most suitable scaling factor β for RWOA;
  3. A Qualitative Analysis experiment was performed, and the results showed that RWOA effectively balanced exploration and exploitation during optimization, achieving high convergence accuracy, fast convergence speed, and maintaining population diversity;
  4. RWOA was compared with state-of-the-art (SOTA) metaheuristic algorithms on 23 classical benchmark functions to verify its superiority.

Furthermore, RWOA was applied to nine engineering design optimization problems, demonstrating its feasibility in real-world engineering design optimization. RWOA provided a novel approach for the application of WOA in engineering design.

RWOA effectively addressed the shortcomings of the classic WOA, such as premature convergence, low population diversity in later iterations, slow convergence speed, low convergence accuracy, and the imbalance between exploration and exploitation. Compared with other metaheuristic algorithms, RWOA showed strong competitiveness. Although RWOA provided competitive results for numerical optimization and engineering design tasks, its time consumption remained comparable to that of WOA on common numerical optimization problems, even with the added Enhanced Cauchy Mutation based on Differential Evolution. However, in large-scale complex numerical optimization, the Enhanced Cauchy Mutation based on Differential Evolution led to increased computational time for RWOA. Therefore, RWOA was not suitable for solving large-scale real-time problems.

Future work will involve rigorous testing on manufactured prototypes, validation in real-world scenarios, and consideration of real-world constraints to enhance the reliability and effectiveness of the optimization process, aiming to achieve more reliable and efficient mechanical designs. We recommend RWOA as a tool for design, simulation, and manufacturing that meets the needs of contemporary industry. Furthermore, future research will focus on exploring RWOA's applications in advanced planning and scheduling (APS), data clustering, path planning, and neural network parameter optimization.

Appendix A. Details of the benchmark functions

To support the experimental study in this paper, we used the standard benchmark functions. The relevant data have been uploaded to Figshare; the specific definitions of the standard benchmark functions are available at https://figshare.com/account/items/28440863/edit for reference and further analysis by readers.

Acknowledgments

I sincerely appreciate the contributions of Yanzhao Gu and Baili Lu to this paper. I also thank Ngai Cheong for the guidance provided throughout the process, and especially for proofreading the manuscript as the corresponding author of this project. Additionally, the support provided by Macao Polytechnic University and the Macao Science and Technology Development Fund enabled us to conduct data collection, analysis, and interpretation, as well as cover expenses related to research materials and participant recruitment. The investment of MPU and FDCT in our work has significantly contributed to the quality and impact of our research findings.

References

  1. 1. Kennedy J, Eberhart R. Particle swarm optimization. In: Proceedings of ICNN’95-international conference on neural networks, vol. 4. IEEE; 1995. p. 1942–8
  2. 2. Xue J, Shen B. A novel swarm intelligence optimization approach: Sparrow search algorithm. Syst Sci Control Eng. 2020;8(1):22–34.
  3. 3. Mirjalili S, Lewis A. The whale optimization algorithm. Adv Eng Softw. 2016;95:51–67.
  4. 4. Heidari AA, Mirjalili S, Faris H, Aljarah I, Mafarja M, Chen H. Harris hawks optimization: Algorithm and applications. Fut Gen Comput Syst. 2019;97:849–72.
  5. 5. Holland JH. Genetic algorithms. Scientific Am 1992;267(1):66–72.
  6. 6. Van Laarhoven PJM, Aarts EHL, and Van Laarhoven, PJM. Simulated annealing. Springer Netherlands; 1987 P. J. M. Van Laarhoven, E. H. L. Aarts, and P. J. M. Van Laarhoven. Simulated annealing. Springer Netherlands, 1987.
  7. 7. Xue J, Shen B. Dung beetle optimizer: A new meta-heuristic algorithm for global optimization. J Supercomput 2022;79(7):7305–36.
  8. 8. Mirjalili S, Mirjalili SM, Lewis A. Grey wolf optimizer. Adv Eng Softw. 2014;69:46–61.
  9. 9. Dorigo M, Birattari M, Stutzle T. Ant colony optimization. IEEE Comput Intell Mag. 2006;1(4):28–39.
  10. 10. Cai J, Wan H, Sun Y, et al. Artificial bee colony algorithm-based self-optimization of base station antenna azimuth and down-tilt angle. Telecommun Sci. 2021;1:69–75.
  11. 11. Wei J, Gu Y, Law KLE. Adaptive position updating particle swarm optimization for UAV path planning. In: 2024 22nd international symposium on modeling and optimization in mobile, ad hoc, and wireless networks (WiOpt). IEEE; 2024. .
  12. 12. Cook D, Ragsdale CT, Major RL. Combining a neural network with a genetic algorithm for process parameter optimization. Eng Applic Artif Intell. 2000;13(4):391–6.
  13. 13. Wang X, Yang J, Teng X. Feature selection based on rough sets and particle swarm optimization. Pattern Recog Lett. 2007;28(4):459–71.
  14. 14. Yuan X, Wang L, Yuan Y. Application of enhanced PSO approach to optimal scheduling of hydro system. Energy Convers Manage 2008;49(11):2966–72.
  15. 15. Wang J, Gao Y, Liu W, Sangaiah AK, Kim H-J. An improved routing schema with special clustering using pso algorithm for heterogeneous wireless sensor network. Sensors (Basel) 2019;19(3):671. pmid:30736392
  16. 16. Käschel J, Teich T, Zacher B. Real-time dynamic shop floor scheduling using Evolutionary Algorithms. Int J Prod Econ 2002;79(2):113–20.
  17. 17. Dayou L, Pu Y, Ji Y. Development of a multiobjective GA for advanced planning and scheduling problem. Int J Adv Manuf Technol. 2008;42(9–10):974–92.
  18. 18. He S, Prempain E, Wu QH. An improved particle swarm optimizer for mechanical design optimization problems. Eng Optim 2004;36(5):585–605.
  19. 19. Yildiz AR. Comparison of evolutionary-based optimization algorithms for structural design optimization. Eng Applic Artif Intell 2013;26(1):327–33.
  20. 20. Şahin İ, Dörterler M, Gokce H. Optimization of hydrostatic thrust bearing using enhanced grey wolf optimizer. Mechanics 2019;25(6):480–6.
  21. 21. Wei J, Gu Y, Yan Y, et al. MRBMO: An enhanced RBMO algorithm for solving numerical optimization challenges; 2025.
  22. 22. Gharehchopogh FS, Gholizadeh H. A comprehensive survey: Whale optimization algorithm and its applications. Swarm Evolut Comput. 2019;48(1):1–24.
  23. 23. Guvenc U, Duman S, Kahraman HT. Fitness-distance balance based adaptive guided differential evolution algorithm for security-constrained optimal power flow problem incorporating renewable energy sources. Appl Soft Comput. 2021;108:107421.
  24. 24. Ma C, Huang H, Fan Q. Grey wolf optimizer based on Aquila exploration method. Expert Syst Applic. 2022;205:117629.
  25. 25. Elsisi M, Essa ME-SM. Improved bald eagle search algorithm with dimension learning-based hunting for autonomous vehicle including vision dynamics. Appl Intell 2022;53(10):11997–2014.
  26. 26. Yang Z. Competing leaders grey wolf optimizer and its application for training multi-layer perceptron classifier. Expert Syst Applic. 2024;239:122349.
  27. 27. Rahnema N, Gharehchopogh FS. An improved artificial bee colony algorithm based on whale optimization algorithm for data clustering. Multimed Tools Appl. 2020;79(43–44):32169–94.
  28. 28. Jiang R, Yang M, Wang S, et al. An improved whale optimization algorithm with armed force program and strategic adjustment. Appl Math Model. 2020;81:603–23.
  29. 29. Shen Y, Zhang C, Soleimanian Gharehchopogh F, Mirjalili S. An improved whale optimization algorithm based on multi-population evolution for global optimization and engineering design problems. Expert Syst Applic. 2023;215:119269.
  30. 30. Gharehchopogh FS, Mirjalili S, Isik G. A new hybrid whale optimization algorithm and golden jackal optimization for data clustering. In: Handbook of whale optimization algorithm. Academic Press; 2024. .
  31. 31. Quesada I, Grossmann IE. An LP/NLP based branch and bound algorithm for convex MINLP optimization problems. Comput Chem Eng. 1992;16(10–11):937–47.
  32. 32. Fu JF, Fenton RG, Cleghorn WL. A mixed integer-discrete-continuous programming method and its application to engineering design optimization. Eng Optim. 191;17(4):263–80.
  33. 33. Herskovits J, Mappa P, Goulart E. Mathematical programming models and algorithms for engineering design optimization. Comput Methods Appl Mech Eng. 2005;194(30–33):3244–68
  34. 34. Yi Y, K TA, Park J. Multi-objective optimization (MOO) of a skylight roof system for structure integrity, daylight, and material cost. J Build Eng. 2021;34:102056.
  35. 35. Pizarro PN, Massone LM. Structural design of reinforced concrete buildings based on deep neural networks. Eng Struct. 2021;241:112377
  36. 36. Fang J, Hu W, Liu Z, Chen W, Tan J, Jiang Z, et al. Wind turbine rotor speed design optimization considering rain erosion based on deep reinforcement learning. Renew Sustain Energy Rev. 2022;168:112788.
  37. 37. Chixin Xiao, Zixing Cai, Yong Wang. A good nodes set evolution strategy for constrained optimization. In: 2007 IEEE congress on evolutionary computation; 2007 p. 943–50. https://doi.org/10.1109/cec.2007.4424571
  38. 38. Fu S, Li K, Huang H. Red-billed blue magpie optimizer: a novel metaheuristic algorithm for 2D/3D UAV path planning and engineering design problems. Artif Intell Rev. 2024;57(6):1–89.
  39. 39. Suganthan PN, Hansen N, Liang JJ. Problem definitions and evaluation criteria for the CEC 2005 special session on real-parameter optimization. KanGAL report; 2005. p. 2005005.
  40. 40. Trojovská E, Dehghani M, Trojovský P. Zebra optimization algorithm: A new bio-inspired optimization algorithm for solving optimization algorithm. IEEE Access. 2022;10:49445–73.
  41. 41. Su H, Zhao D, Heidari AA, Liu L, Zhang X, Mafarja M, et al. RIME: A physics-based optimization. Neurocomputing. 2023;532:183–214.
  42. 42. Li Y, Zhao L, Wang Y, et al. Improved sand cat swarm optimization algorithm for enhancing coverage of wireless sensor networks. Measurement. 2024;233:114649.
  43. 43. Cymerys K, Oszust M. Attraction–repulsion optimization algorithm for global optimization problems. Swarm Evolut Comput. 2024;84:101459.
  44. 44. Yang W, Xia K, Fan S, Wang L, Li T, Zhang J, et al. A multi-strategy whale optimization algorithm and its application. Eng Applic Artif Intell. 2022;108:104558.
  45. 45. Yang W, Xia K, Fan S. A multi-strategy whale optimization algorithm and its application. Eng Applic Artif Intell. 2022;108:104558.
  46. 46. Nadimi-Shahraki MH, Taghian S, Mirjalili S, Faris H. MTDE: An effective multi-trial vector-based differential evolution algorithm and its applications for engineering design problems. Appl Soft Comput. 2020;97:106761.