Abstract
This study develops an Augmented Secretary Bird Optimization Algorithm (ASBOA) based on the original Secretary Bird Optimization Algorithm (SBOA), aiming to further improve solution accuracy and convergence speed for wireless sensor network (WSN) deployment and engineering optimization problems. First, a differential collaborative search mechanism is introduced in the exploration phase to reduce the risk of the algorithm falling into local optima. Additionally, an optimal boundary control mechanism is employed to prevent ineffective exploration and enhance convergence speed. Finally, an information retention control mechanism is utilized to update the population: individuals that fail to update have a small probability of being retained in the next generation, while the current global best solution is guaranteed to remain unchanged, thereby accelerating the algorithm's convergence. ASBOA was evaluated using the CEC2017 and CEC2022 benchmark test functions and compared with other algorithms (such as PSO, GWO, DBO, and CPO). The results show that ASBOA performed best on 23 out of 30 functions in the CEC2017 30-dimensional case, on 26 out of 30 functions in the CEC2017 100-dimensional case, and on 9 out of 12 functions in the CEC2022 20-dimensional case. Furthermore, the convergence curves and boxplot results indicate that ASBOA converges faster and is more robust. Finally, ASBOA was applied to WSN deployment and three engineering design problems (three-bar truss, tension/compression spring, and cantilever beam design). In the engineering problems, ASBOA consistently outperformed competing methods, while in the WSN deployment scenario it achieved a coverage rate of 88.32%, an improvement of 1.12% over the standard SBOA.
These results demonstrate that the proposed ASBOA has strong overall performance and significant potential for solving complex optimization problems. Although ASBOA performs well on these problems, its performance on high-dimensional multimodal problems and complex constrained optimization is not yet stable, and the introduced strategies add some complexity. Additionally, different parameter settings may lead to varying results, and the sensitivity to these parameters differs across problems; the settings therefore need to be adjusted to the specific problem at hand in order to further refine the algorithm and achieve a more stable version.
Citation: Meng Q, Kuang X, Yu Z, He M, Cui H (2025) Augmented secretary bird optimization algorithm for wireless sensor network deployment and engineering problem. PLoS One 20(8): e0329705. https://doi.org/10.1371/journal.pone.0329705
Editor: Yirui Wang, Ningbo University, CHINA
Received: December 18, 2024; Accepted: July 16, 2025; Published: August 8, 2025
Copyright: © 2025 Meng et al. This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
Data Availability: All relevant data are within the manuscript.
Funding: The author(s) received no specific funding for this work.
Competing interests: The authors have declared that no competing interests exist.
1. Introduction
With the continuous advancement of Wireless Sensor Network (WSN) technology and the expansion of its application scenarios, network coverage optimization has become a core research topic in the fields of the Internet of Things (IoT) and communication systems [1,2]. In modern smart cities, environmental monitoring, industrial automation, and similar domains, the deployment and management of WSNs play a critical role in determining network performance and efficiency. Optimizing network coverage not only enhances network reliability, ensuring stable and real-time data transmission, but also improves the accuracy and comprehensiveness of data collection. This is essential for achieving precise monitoring, timely responses, and efficient management [3,4].
By deploying sensor nodes strategically in space, the coverage area and signal quality of the sensor network can be significantly improved, thereby enhancing the precision and efficiency of task execution [5,6]. For instance, in environmental monitoring, an optimal node arrangement ensures broader area coverage and a more sensitive response to dynamic changes, improving the system’s real-time feedback capability. This, in turn, strengthens environmental monitoring and early warning mechanisms [7,8]. Moreover, optimized network layouts help reduce redundant data and communication conflicts, improving data collection accuracy and boosting the efficiency of decision-support systems. Rational network deployment not only increases resource utilization efficiency but also significantly extends the lifecycle of wireless sensor networks [9,10].
However, the optimization of WSNs faces numerous challenges. First, energy consumption of nodes is a critical issue, especially in large-scale networks where energy constraints often become a key factor affecting network performance. Additionally, environmental uncertainties, such as weather changes, obstacles, and node failures, further increase the complexity of network coverage optimization [11,12]. Traditional optimization methods, such as numerical optimization based on mathematical models and discrete search techniques, are effective in some simple scenarios. However, they often struggle when addressing complex global optimization problems. The high computational cost and limited adaptability of traditional approaches to dynamic environments create significant bottlenecks in practical applications [13].
Many researchers have conducted systematic studies on Wireless Sensor Networks (WSNs), leading to the development of various bio-inspired algorithms to optimize routing from member nodes to the sink node, with the goal of reducing energy consumption and prolonging network lifetime. For instance, Priyadarshi conducted a comprehensive study on WSN routing and clustering mechanisms, focusing on innovative optimization methods and offering a panoramic, in-depth analysis incorporating AI technologies [14]. Rawat proposed a cluster-based energy-efficient protocol for heterogeneous networks, which systematically utilizes sensor energy for cluster management, significantly enhancing network lifetime and reducing energy consumption [15]. To further extend network longevity, Rahul introduced a three-tier heterogeneous clustering scheme. This approach first classifies sensor nodes into three differentiated groups based on energy levels, then selects the optimal cluster head (CH) by considering energy thresholds and node efficiency metrics [16]. Intrusion Detection Systems (IDS) in WSNs largely rely on effective feature selection (FS) to improve performance. Nguyen proposed a novel method named Genetic Sacrificing Whale Optimization (GSWO) to overcome the limitations of traditional approaches [17]. Priyadarshi also introduced an efficient cluster head formation technique that significantly optimizes energy utilization, thereby achieving superior network lifetime performance [18]. Addressing the critical issue of limited battery life in WSN nodes, Rahul and colleagues developed a novel and efficient cluster head selection mechanism—an Energy-Dependent Clustering Framework (EDCF) for heterogeneous WSNs—designed to significantly extend network lifespan [19]. To tackle challenges such as high deployment costs and insufficient effective coverage in WSNs, Qu et al. proposed a coverage optimization method based on an Improved Multi-Strategy Grey Wolf Optimizer (IGWO-MS) [20]. 
Bharat Gupta and colleagues suggested enhancing network coverage through slight node repositioning: they accurately identified blind spots within the monitored area and calculated optimal new positions for mobile nodes [21]. Raj Vikram introduced an Improved Triangular-Based Localization Scheme (MTBLS) aimed at enhancing the performance of traditional Midpoint-Based Localization Scheme (MBLS) and Triangular-Based Localization Scheme (TBLS), thereby better meeting the communication needs of intelligent distribution automation systems [22].
To address these challenges, metaheuristic algorithms have emerged. These algorithms, by simulating processes such as biological evolution, collective behavior, or physical phenomena in nature, exhibit powerful global search capabilities [23,24]. Compared to traditional optimization methods, metaheuristic algorithms are more effective in handling nonlinear and high-dimensional optimization problems, making them particularly suitable for optimization tasks in complex and uncertain environments [25]. Piyush proposed a clustering protocol called the Efficient Cluster Head Selection Scheme (ECSS), which enhances overall network lifetime and performance by preferentially selecting high energy-efficient cluster heads (CHs) [26]. Rahul Priyadarshi introduced a cube-based three-dimensional coverage model and deployment framework. By establishing a quantitative relationship between the sensor’s radius and its coverage area, the model calculates the minimum number of nodes required to achieve full coverage [27]. In WSN coverage optimization, metaheuristic algorithms such as Genetic Algorithm (GA) [28], Particle Swarm Optimization (PSO) [29], and Ant Colony Optimization (ACO) [30] have been widely applied and have achieved significant results [24,31]. By simulating the adaptive evolution process of biological populations, these algorithms not only enable global search but also facilitate rapid escape from local optima. As a result, they are better able to tackle the various complex problems in WSNs, offering high optimization efficiency and strong practical value [32]. Furthermore, to address the sensitivity of current algorithms to hyperparameters, Blan investigated adaptive control and self-configuration mechanisms to enhance robustness and reduce reliance on manual parameter tuning [33].
Generally, metaheuristic algorithms can be categorized into four types [34,35]: Evolutionary Algorithms (EA), Physics-based Algorithms (PhA), Human-based Algorithms (HB), and Swarm Intelligence (SI)-based Algorithms. Evolutionary Algorithms include Differential Evolution (DE) [36] and Genetic Algorithms (GA) [28], among others. Physics-based Algorithms include Simulated Annealing (SA) [37] and the Gravitational Search Algorithm (GSA) [38]. Human-based Algorithms include the Teaching–Learning-based Optimization Algorithm (TLBO) [39], Social Evolution and Learning Optimization (SELO) [40], Love Evolution Algorithm (LEA) [41], and Gold Rush Optimization algorithm (GRO) [42], among others. Swarm Intelligence-based Algorithms include the Bat Algorithm (BA) [43], Grey Wolf Optimizer (GWO) [44], Harris Hawks Optimization (HHO) [45], Golden Jackal Optimization (GJO) [46], Quantum Avian Navigation Algorithm (QANA) [47], Black Widow Optimization (BWO) [48], Red-beaked Magpie Optimization (RBMO) [49], Golden Eagle Optimizer (GEO) [50], Genghis Khan Shark Optimizer (GKSO) [51], and Goose Optimization Algorithm (GOOSE) [52], among others. These organisms demonstrate remarkable problem-solving capabilities through decentralized, self-organized interactions among individuals. Researchers in the field of Swarm Intelligence (SI) aim to understand and replicate such characteristics in artificial systems, thereby developing algorithms and methods capable of effectively addressing a wide range of optimization challenges. The fundamental concept of SI systems is emergent intelligence: simple entities following local rules collectively exhibit complex global behaviors. This provides an effective approach for solving various complex optimization problems [53].
Secretary Bird Optimization Algorithm (SBOA) [54] is a recently proposed swarm intelligence-based metaheuristic algorithm. Compared to traditional optimization algorithms, SBOA offers significant advantages in optimization performance, standing out for its simple structure and efficient solving ability. However, like most metaheuristic algorithms, SBOA still faces potential challenges such as slow convergence and the risk of falling into local optima. To mitigate the risk of SBOA getting trapped in local optima while improving its convergence speed, this paper proposes three improvement strategies: the Differential Cooperative Search Mechanism, the Optimal Boundary Control Mechanism, and the Information Retention Control Mechanism. Based on these strategies, a new Augmented Secretary Bird Optimization Algorithm (ASBOA) is introduced and applied to Wireless Sensor Network (WSN) problems and engineering problems. The main contributions of this paper are as follows:
- (1). Three improvement strategies are proposed and applied to SBOA, resulting in an enhanced version called ASBOA;
- (2). Comparative experiments are conducted with eight benchmark algorithms using the CEC2017 benchmark suite, verifying the optimization performance of ASBOA;
- (3). ASBOA is applied to the optimization of Wireless Sensor Network layout problems, further demonstrating its effectiveness in solving real-world engineering problems.
The remaining sections of this paper are organized as follows: Section 2 provides an introduction and modeling of the Wireless Sensor Network (WSN) model. Section 3 discusses the standard SBOA algorithm and the improved ASBOA version. Section 4 presents a detailed analysis of the experimental results. Section 5 offers a summary of the paper and discusses future prospects.
2. WSN mathematical model
2.1 Basic concepts of WSN
We examine the sensor's sensing model depicted in Fig 1, where two concentric circles are drawn around the sensor. The inner circle represents the sensing area, a disk of radius R_s centred on the sensor. The outer circle indicates the communication range, of radius R_c. We assume that R_c ≥ 2R_s. It has been demonstrated that if the communication range is at least twice the sensing range, complete coverage of a convex region is adequate to guarantee connectivity among the active nodes [11]. However, this assumption of omnidirectional sensing capability does not apply to certain types of sensor nodes, such as cameras or ultrasonic sensors with directional sensing areas. As shown in Fig 2, if the circular sensing area is transformed into a square, the circle's diameter becomes the diagonal of the square [11,31,55].
The sensor is powered by a battery, so it can only operate for a limited period. Therefore, energy-efficient coverage protocols are required to extend the battery's lifetime. The probability of event detection is inversely related to the Euclidean distance between the sensor and the event. In a Wireless Sensor Network (WSN), two sensor detection models are used to identify effective coverage: the binary detection model, which assumes no uncertainty, and the probabilistic sensing model with random detection. The probabilistic coverage model offers a more accurate representation of real sensor performance in an environment. On the other hand, the binary sensing model, which is the most basic and commonly studied coverage model, assumes that a sensor detects all events within its sensing range [23]. In this paper, we adopt the binary detection model, as expressed in Equation (1):

P(s_i, p) = 1, if d(s_i, p) ≤ R_s; 0, otherwise    (1)

Here, P(s_i, p) is the probability that an event occurring at position p is detected by a sensor located at position s_i, and d(s_i, p) represents the Euclidean distance between the sensor and the event.
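The binary detection model of Equation (1) can be sketched in a few lines of Python (the function name and tuple-based coordinates are our illustrative choices, not from the paper):

```python
import math

def binary_detection(sensor, point, r_sense):
    """Binary detection model (Equation (1)): an event at `point` is detected
    with probability 1 if it lies within the sensing radius of `sensor`,
    and with probability 0 otherwise."""
    d = math.dist(sensor, point)  # Euclidean distance between sensor and event
    return 1 if d <= r_sense else 0

# A sensor at the origin with sensing radius 5 detects an event at (3, 4)
# (distance exactly 5) but not one at (4, 4) (distance ~5.66).
```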
2.2 Constraints and objective functions of WSN
The objective function proposed in this study primarily considers coverage. First, we assume the sensor nodes are denoted as S = {s_1, s_2, ..., s_N}, and the deployment space is a two-dimensional area of size L × W, discretized into grid points. The coverage radius of each sensor is R_s, and the points to be covered are p_j. The coverage equation is given by Equation (2):

P(s_i, p_j) = 1, if d(s_i, p_j) ≤ R_s; 0, otherwise    (2)

Here, d(s_i, p_j) denotes the Euclidean distance from the sensor s_i to the point p_j, which is calculated using Equation (3):

d(s_i, p_j) = sqrt((x_i − x_j)^2 + (y_i − y_j)^2)    (3)

Therefore, the coverage probability of the deployment area is given by Equation (4):

Cov = (1 / (L × W)) × Σ_j P(S, p_j),  where P(S, p_j) = 1 − Π_{i=1}^{N} (1 − P(s_i, p_j))    (4)

In this equation, L and W are the length and width of the deployment space. The objective of this problem is to maximize the network coverage, which is a maximization problem. Therefore, its fitness function is given by Equation (5):

f = max Cov    (5)
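The coverage objective of Equations (2)–(5) can be sketched as follows, assuming the deployment area is discretized into unit grid cells evaluated at their centres (a common convention; the paper does not fix the grid resolution):

```python
import math

def coverage_rate(sensors, r_sense, length, width):
    """Fraction of grid-cell centres in an L x W deployment area covered by
    at least one sensor, under the binary detection model of Equation (2)."""
    covered = 0
    for x in range(length):
        for y in range(width):
            point = (x + 0.5, y + 0.5)  # centre of each unit grid cell
            if any(math.dist(s, point) <= r_sense for s in sensors):
                covered += 1
    return covered / (length * width)

# One sensor in the middle of a 10 x 10 area with R_s = 3 covers
# 32 of the 100 cell centres, i.e. a coverage rate of 0.32.
rate = coverage_rate([(5.0, 5.0)], r_sense=3.0, length=10, width=10)
```

An optimizer such as ASBOA would treat the sensor coordinates as decision variables and maximize this rate.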
3. Secretary bird optimization algorithm and augmented secretary bird optimization algorithm
3.1 Secretary bird optimization algorithm
3.1.1 Inspiration of Secretary bird optimization algorithm.
The inspiration for the Secretary Bird Optimization Algorithm (SBOA) comes from the natural survival behavior of the secretary bird, primarily simulating its hunting strategy and escape behavior from predators or enemies. The three stages of the secretary bird’s hunting behavior correspond to the exploration phase of the algorithm, while the two strategies for escaping predators and enemies correspond to the exploitation phase of the algorithm [56,57].
3.1.2 Mathematical model of Secretary bird optimization algorithm.
The mathematical model of the Secretary Bird Optimization Algorithm (SBOA) is established in three stages: the initialization phase, the exploration phase, and the exploitation phase. The mathematical models for each stage are presented as follows:
Initialization Phase: To solve the optimization problem, an initial solution for the search needs to be determined, which is initialized using Equation (6):

X_{i,j} = lb_j + r × (ub_j − lb_j)    (6)

where X_{i,j} represents the initial value of the j-th decision variable of the i-th candidate solution; ub_j and lb_j represent the upper and lower bounds, respectively; and r is a random number in the range (0, 1).
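The initialization of Equation (6) amounts to uniform sampling inside the box constraints. An illustrative Python sketch (function name is ours):

```python
import numpy as np

def initialize_population(n_pop, dim, lb, ub, rng=None):
    """Random initialization (Equation (6)):
    X[i, j] = lb_j + r * (ub_j - lb_j), r ~ U(0, 1) per decision variable."""
    if rng is None:
        rng = np.random.default_rng()
    lb = np.asarray(lb, dtype=float)
    ub = np.asarray(ub, dtype=float)
    return lb + rng.random((n_pop, dim)) * (ub - lb)

# 30 candidate solutions in 10 dimensions, bounded by [-100, 100]
pop = initialize_population(30, 10, lb=-100.0, ub=100.0)
```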
Exploration Phase: This phase is divided into three stages: searching for prey (t < 1/3 T), exhausting prey (1/3 T < t < 2/3 T), and attacking prey (t > 2/3 T). In the "searching for prey" stage, the secretary bird looks for potential prey. Once prey is identified, it moves into the "exhausting prey" stage, where it consumes the prey's energy. The bird, with sharp judgment of the prey's movements, leisurely wanders, jumps, and provokes near the prey, gradually depleting the prey's energy. When the prey's stamina is nearly exhausted, the bird attacks. This process is modeled using Equations (7) and (8) [54]:

X_i^{new,P1} = X_i + (X_r1 − X_r2) × R1,                               if t < 1/3 T
X_i^{new,P1} = X_best + exp((t/T)^4) × (RB − 0.5) × (X_best − X_i),    if 1/3 T < t < 2/3 T
X_i^{new,P1} = X_best + (1 − t/T)^(2t/T) × X_i × RL,                   if t > 2/3 T    (7)

X_i = X_i^{new,P1}, if F(X_i^{new,P1}) < F(X_i); otherwise X_i remains unchanged    (8)

with RL = 0.5 × Levy(Dim). In this context, t represents the current iteration number and T denotes the maximum number of iterations. X_i^{new,P1} represents the new state of the i-th secretary bird in the first phase. X_r1 and X_r2 are random candidate solutions in the first-stage iteration, and R1 is a random array of dimension 1 × Dim generated from the interval [0, 1]. x_{i,j} represents the position information of the j-th dimension of the i-th solution, and F indicates the fitness value of the objective function for that solution. RB represents an array of dimension 1 × Dim randomly generated from the standard normal distribution (mean = 0, standard deviation = 1). X_best denotes the global best solution, and Levy(Dim) refers to the Lévy flight function, which is calculated using Equation (9):

Levy(Dim) = s × (u × σ) / |v|^(1/β),  σ = [Γ(1 + β) × sin(πβ/2) / (Γ((1 + β)/2) × β × 2^((β−1)/2))]^(1/β)    (9)

In this equation, s and β are fixed constants, with β = 1.5; u and v are random numbers generated within the interval [0, 1]; and Γ(·) represents the Gamma function.
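The Lévy flight of Equation (9) can be sketched in Python. This is a hedged illustration: the scale s = 0.01 is a value commonly used in Lévy-flight implementations (an assumption on our part), and u, v are drawn from [0, 1] as the paper states (the classic Mantegna variant draws them from normal distributions instead):

```python
import math
import numpy as np

def levy_flight(dim, beta=1.5, s=0.01, rng=None):
    """Levy flight step (Equation (9)): step = s * u * sigma / |v|**(1/beta),
    with sigma given by the standard Mantegna-style formula and beta = 1.5."""
    if rng is None:
        rng = np.random.default_rng()
    sigma = ((math.gamma(1 + beta) * math.sin(math.pi * beta / 2))
             / (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.random(dim)  # paper: u, v uniform in [0, 1]
    v = rng.random(dim)  # (Mantegna's classic variant uses normals instead)
    return s * u * sigma / np.abs(v) ** (1 / beta)
```

The heavy-tailed step sizes let the attacking stage make occasional long jumps, which helps the search escape flat regions.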
Exploitation Phase: During this phase, the secretary bird may encounter attacks from predators or competitors trying to steal its food. It is highly intelligent and typically adopts various evasive strategies to protect itself or its food. These strategies are primarily divided into two types: one is to escape by flying or running, and the other is camouflage, where the secretary bird uses environmental colors or structures to blend in and make itself harder for predators to detect [54]. This process is modeled using Equations (10) and (11):

X_i^{new,P2} = X_best + (2 × RB − 1) × (1 − t/T)^2 × X_i    (camouflage)    (10)

X_i^{new,P2} = X_i + R2 × (X_random − K × X_i)    (fly or run away)    (11)

In this context, RB and R2 represent arrays of dimension 1 × Dim randomly generated from a normal distribution. X_random is a random candidate solution for the current iteration, X_best represents the global best solution, and K is a randomly selected integer, either 1 or 2.
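The two escape strategies can be sketched as one update step. This is an illustrative interpretation of Equations (10)–(11) under stated assumptions (the 50/50 branch choice and the exact random factors follow common SBOA formulations; function and variable names are ours):

```python
import numpy as np

def escape_step(x_i, x_best, x_rand, t, T, rng=None):
    """Sketch of the SBOA exploitation phase (Equations (10)-(11)):
    camouflage perturbs around the global best with a shrinking (1 - t/T)^2
    factor; fleeing moves relative to a random candidate with K in {1, 2}."""
    if rng is None:
        rng = np.random.default_rng()
    dim = x_i.size
    if rng.random() < 0.5:                    # camouflage by environment
        rb = rng.standard_normal(dim)         # normal random array RB
        return x_best + (2 * rb - 1) * (1 - t / T) ** 2 * x_i
    k = int(rng.integers(1, 3))               # K is randomly 1 or 2
    r2 = rng.standard_normal(dim)             # normal random array R2
    return x_i + r2 * (x_rand - k * x_i)      # fly or run away
```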
3.2 Augmented secretary bird optimization algorithm
3.2.1 Differential cooperative search mechanism.
By facing the differences among individuals, understanding the underlying causes of these variations, and learning from them, individual growth can be greatly enhanced, leading to faster convergence of the algorithm. However, in the attacking-prey stage of SBOA's exploration phase, relying solely on the information from the best individual can cause the algorithm to get stuck in local optima and fail to further explore the solution space in search of better solutions. To address this issue, a differential collaborative search mechanism is introduced into SBOA's attacking-prey stage. This allows every individual to potentially participate in the computation, ensuring that valuable information surrounding the global best region is preserved [58,59]. In each gap, individuals from adjacent quality levels consistently engage in the operations, thereby introducing diverse evolutionary information into the population. However, the last gap does not adhere to this pattern, intentionally introducing an element of uncertainty into the evolutionary process. Specifically, the learning operator in ASBOA utilizes four gaps designed to approximate the fitness landscape. These gaps are outlined in Equation (12):

Gap_1 = X_best − X_elite
Gap_2 = X_best − X_worst
Gap_3 = X_elite − X_worst
Gap_4 = X_r1 − X_r2    (12)

Here, X_best represents the global best solution, while X_elite denotes the second-best solution, often referred to as the elite solution. X_worst represents the worst solution, and X_r1 and X_r2 are random individuals. Gap_k represents the gap between two individuals, allowing the learner to fully comprehend the differences between them and leverage these differences for improvement.
To account for this variability, a learning factor LF_k is introduced for each of the four differential measures. For the i-th individual, it weights the k-th group gap, and is modeled as Equation (13):

LF_k = ||Gap_k|| / Σ_{j=1}^{4} ||Gap_j||    (13)

where LF_k represents the normalized ratio of the Euclidean distance for the k-th group gap Gap_k, with a range of [0, 1]. When the k-th gap is larger, LF_k also increases, indicating that the i-th individual will learn more from the k-th gap.

During the search process, secretary birds at different positions have different perspectives on themselves. The secretary bird uses SF_i to evaluate its own acceptable range of knowledge. A larger SF_i indicates that the individual needs to learn more to improve itself. SF_i is modeled using Equation (14):

SF_i = F(X_i) / F(X_worst)    (14)

where F(X_i) represents the fitness value of the i-th individual, and F(X_worst) represents the fitness value of the worst individual. Generally, a smaller F(X_i) means that the individual is better at extracting and absorbing the essence of knowledge. Therefore, the individual should have a smaller SF_i, which biases it towards local exploitation. When SF_i is larger, it indicates that the individual is underperforming and needs to bridge the knowledge gap. In this case, the individual should have a larger SF_i, which biases it towards global exploration.
Knowledge acquisition and transformation are inherently lossy processes. For the gap vector Gap_k, the i-th individual absorbs only a portion of the knowledge, which is referred to as the knowledge gain KA_k. For the i-th individual, KA_k is obtained by applying the SF_i and LF_k factors sequentially to the k-th group gap vector. This process is described by Equation (15):

KA_k = SF_i × LF_k × Gap_k    (15)

Here, KA_k represents the knowledge gained by the i-th individual from the k-th gap. SF_i is the self-evaluation of the individual, while LF_k is the evaluation of the external situation. Through the combined effect of these two evaluations, the i-th individual identifies the knowledge it requires from the k-th gap, thereby completing the learning process. By assimilating the knowledge gaps between different individuals, the i-th individual accumulates a wealth of knowledge. The detailed learning process is given by Equation (16), which models the improved prey-attack phase:

X_i^{new} = X_i + Σ_{k=1}^{4} KA_k    (16)
This mechanism is a collaborative search strategy that employs five vectors, each contributing one of four types of information related to the convergence direction. By balancing these four directional pieces of information based on the distance between vectors and fitness values, the algorithm adaptively determines the current search direction. As shown in Fig 3, the contributions of five individuals generate four search directions, which are interdependent. This effectively reduces the risk of the ASBOA algorithm becoming trapped in local optima.
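The differential cooperative learning step can be sketched in Python. This is a hedged illustration of the mechanism described above: the gap construction (best, elite, worst, two random individuals), the distance-based learning factors, and the fitness-based self-evaluation follow the section's description, but the exact normalizations are our assumptions (fitness values are assumed positive, minimization is assumed):

```python
import numpy as np

def learning_update(pop, fitness, i, rng=None):
    """Sketch of the differential cooperative search (Equations (12)-(16)):
    four gap vectors built from the best, elite (second-best), worst, and two
    random individuals; each gap is weighted by a normalized Euclidean-distance
    learning factor LF_k and a fitness-based self-evaluation factor SF_i."""
    if rng is None:
        rng = np.random.default_rng()
    order = np.argsort(fitness)                       # minimization: best first
    best, elite, worst = pop[order[0]], pop[order[1]], pop[order[-1]]
    r1, r2 = pop[rng.choice(len(pop), 2, replace=False)]
    gaps = [best - elite, best - worst, elite - worst, r1 - r2]
    norms = np.array([np.linalg.norm(g) for g in gaps])
    lf = norms / (norms.sum() + 1e-12)                # learning factors, Eq. (13)
    sf = fitness[i] / (fitness[order[-1]] + 1e-12)    # self-evaluation, Eq. (14)
    ka = sum(sf * lf[k] * gaps[k] for k in range(4))  # knowledge gains, Eq. (15)
    return pop[i] + ka                                # learning step, Eq. (16)
```

Because the four gaps pull in complementary directions, a poorly performing individual (large SF) takes a larger composite step, while a near-best individual barely moves.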
Therefore, the improved ASBOA exploration phase is updated using Equation (17):

X_i^{new,P1} = X_i + (X_r1 − X_r2) × R1,                               if t < 1/3 T
X_i^{new,P1} = X_best + exp((t/T)^4) × (RB − 0.5) × (X_best − X_i),    if 1/3 T < t < 2/3 T
X_i^{new,P1} = X_i + Σ_{k=1}^{4} KA_k,                                 if t > 2/3 T    (17)
3.2.2 Optimal boundary control mechanism.
During the iterative process of optimization algorithms, some individuals often exceed the predefined search boundaries. In the standard SBOA algorithm, this is typically handled by directly setting individuals that exceed the boundaries to the upper or lower limits of the search space. However, this method fails to effectively utilize the positional information of individuals, potentially resulting in the loss of their original movement trends and valuable information. In the search process of the optimization algorithm, the entire population should focus on exploring new potential positions, with the current global best solution providing critical clues for identifying these new positions. Therefore, to maximize the use of positional information, an optimal boundary control mechanism based on the current global best solution’s position is proposed [60]. By incorporating the global best solution, this mechanism adjusts the position of individuals that exceed the boundaries, as described by equation (18).
As shown in Fig 4, during the iterative process, the search agents continuously move towards the global best individual, causing the boundaries of the entire search space to shrink towards the optimal individual. This prevents the algorithm from performing ineffective exploration, allowing it to converge more quickly to the vicinity of the optimal solution.
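One plausible implementation of a best-anchored boundary repair is sketched below. This is explicitly a hypothetical interpretation: the paper defines the exact form in Equation (18), and here we only mirror its stated idea of relocating out-of-bounds coordinates to a random point between the global best and the violated bound, so repaired agents stay near the promising region instead of piling up on the bounds. All names are ours:

```python
import numpy as np

def boundary_repair(x, x_best, lb, ub, rng=None):
    """Hypothetical best-anchored boundary repair in the spirit of Eq. (18):
    a coordinate above ub is resampled between the best solution and ub;
    a coordinate below lb is resampled between lb and the best solution."""
    if rng is None:
        rng = np.random.default_rng()
    x = np.array(x, dtype=float)                              # writable copy
    lb = np.broadcast_to(np.asarray(lb, dtype=float), x.shape)
    ub = np.broadcast_to(np.asarray(ub, dtype=float), x.shape)
    r = rng.random(x.shape)
    high, low = x > ub, x < lb
    x[high] = x_best[high] + r[high] * (ub[high] - x_best[high])
    x[low] = x_best[low] - r[low] * (x_best[low] - lb[low])
    return x
```

Compared with hard clamping to lb/ub, this keeps the repaired agent's position correlated with the current best, which matches the shrinking-boundary behavior described for Fig 4.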
3.2.3 Information retention control mechanism.
Following the adjustment of each individual's candidate solution during the learning process, its quality may either increase or decrease. Consequently, it is essential to confirm whether actual progress has been achieved. If progress is made, the fitness value of the i-th individual will decrease, resulting in an improvement in its ranking. If the i-th individual experiences a regression, it may discard some of the knowledge it has acquired. Nonetheless, due to the time and effort involved in the learning process, there remains a small probability that the acquired knowledge may still be preserved [61]. To manage this retention probability, an information retention control mechanism is introduced in ASBOA, with a retention probability P = 0.001. This process is described by Equation (19):

X_i^{t+1} = X_i^{new},  if F(X_i^{new}) < F(X_i^t)
X_i^{t+1} = X_i^{new},  if F(X_i^{new}) ≥ F(X_i^t), r1 < P, and X_i^t ≠ X_best
X_i^{t+1} = X_i^t,      otherwise    (19)

Here, r1 is a random number uniformly distributed in the range [0, 1], and P is used to decide whether the newly acquired knowledge of the i-th individual should be retained when the individual fails to update. This implies that when an individual fails to update, there is a 0.001 chance that it will still be carried over to the next generation of the population. Additionally, the mechanism ensures that the current global best solution remains unchanged, thereby accelerating the convergence of the algorithm.
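The retention rule of Equation (19) reduces to a small acceptance test per individual. A minimal sketch, assuming minimization and our own function name:

```python
import numpy as np

def retain(x_old, f_old, x_new, f_new, is_global_best, p=0.001, rng=None):
    """Information retention control (Equation (19)): an improved candidate is
    always accepted; a worse candidate is normally rejected, but survives with
    a small probability p = 0.001 -- except when the individual holds the
    current global best, which is never replaced by a worse solution."""
    if rng is None:
        rng = np.random.default_rng()
    if f_new < f_old:
        return x_new, f_new          # genuine progress: accept
    if not is_global_best and rng.random() < p:
        return x_new, f_new          # rare retention of a failed update
    return x_old, f_old              # otherwise keep the old individual
```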
In summary, the pseudocode for ASBOA is depicted as Algorithm 1.
Algorithm 1. Pseudo-Code of ASBOA.
1: Initialize the secretary bird population by Equation (6)
2: Initialize the parameters P and T
3: Calculate the fitness of each secretary bird
4: while t < T do
5:   for i = 1 to N do
6:     Hunting behavior (exploration):
7:     Update the position of the current search agent by Equation (17)
8:     Apply the boundary adjustment of Equation (18)
9:     Calculate the fitness and update the individual by Equation (19)
10:    Escape strategy (exploitation):
11:    Update the position of the current individual by Equations (10) and (11)
12:    Apply the boundary adjustment of Equation (18)
13:    Calculate the fitness and update the individual by Equation (19)
14:  End for
15:  Update the best solution found so far, X_best
16: End while
3.2.4 Computational complexity analysis.
The time required by different algorithms to solve the same optimization problem may vary. Evaluating the computational complexity of an algorithm is a crucial metric for assessing its execution efficiency and practical feasibility [62]. In this study, we conducted a detailed analysis of the time complexity of the SBOA algorithm using Big-O notation. Assuming the population size of the Secretary Bird Optimization Algorithm (SBOA) is N, the problem dimension is D, and the maximum number of iterations is T, the algorithm's primary steps can be decomposed and analyzed based on the definition of time complexity and the computation rules of Big-O notation. First, the random initialization of the population, as the initial step of the algorithm, involves generating initial values for N solutions, resulting in a time complexity of O(N × D). Next, during the solution update process, two main operations are performed: (1) locating the position of the current optimal solution and (2) updating the positions of all solutions. These operations have time complexities of O(T × N) and O(T × N × D), respectively. Overall, the total time complexity of the SBOA algorithm can be summarized as O(N × D + T × N + T × N × D) ≈ O(T × N × D). For the improved algorithm proposed in this study, ASBOA, no additional iterative operations or higher-complexity computations were introduced. Therefore, its time complexity remains consistent with the original SBOA, still O(T × N × D).
3.2.5 Strategy effectiveness analysis.
This section delves into the impact of three enhancement strategies on the SBOA algorithm: the Differential Cooperative Search Mechanism, Optimal Boundary Control Mechanism, and Information Retention Control Mechanism. Based on these strategies, three variants of SBOA were constructed: DCSMSBOA, OBCMSBOA, and IRCMSBOA. According to the experimental results shown in Fig 5, each strategy significantly improved the convergence accuracy and speed of SBOA, with the ASBOA, which combines all three strategies, demonstrating the best performance. Specifically, for both unimodal and multimodal functions, OBCMSBOA and IRCMSBOA exhibited similar effects, effectively enhancing the convergence speed and accuracy of the algorithm. Among them, the improvement in DCSMSBOA was particularly remarkable, significantly accelerating the convergence speed. Furthermore, when dealing with complex mixed-modal functions, IRCMSBOA showed some decline in performance, whereas OBCMSBOA and DCSMSBOA demonstrated stronger adaptability. Overall, the ASBOA, which integrates all three strategies, achieved excellent performance on most functions.
In conclusion, ASBOA successfully overcame the issues of slow convergence and premature convergence, achieving satisfactory results on multiple benchmark functions, thanks to the effective integration of these three strategies.
4. Experimental results and analysis
4.1 Test function and compare algorithms parameter Settings
The experiment compares the performance of ASBOA with the standard SBOA, classical algorithms, and recently proposed algorithms on the CEC2017 benchmark suite (dim = 30/100) [12]. Table 1 shows the algorithm information and related parameter settings used in the experiment. The classical algorithms include the Grey Wolf Optimizer (GWO) [44], Whale Optimization Algorithm (WOA) [63], Particle Swarm Optimization (PSO) [29], and African Vultures Optimization Algorithm (AVOA) [64]. The recently proposed algorithms include the Crested Porcupine Optimizer (CPO) [65], Black-winged Kite Algorithm (BKA) [66], Dung Beetle Optimizer (DBO) [60], and Secretary Bird Optimization Algorithm (SBOA) [54]. To ensure fairness and eliminate the influence of randomness, all algorithms in this experiment were set with a population size of 30 and a maximum of 500 iterations, and each algorithm was run independently 30 times. The results were statistically analyzed for mean (Ave), standard deviation (Std), and average ranking, with the best result for each function highlighted in bold. All experiments were performed in an environment running Windows 11, with a system featuring an Intel(R) Core(TM) i5-13400 processor at 2.5 GHz, 16 GB of RAM, and MATLAB 2024a software.
4.2 Assessing performance with CEC2017 and CEC2022 test suite
This section uses the CEC2017 test suite [12] (with dimensions of 30 and 100) and the CEC2022 test suite [67] (with a dimension of 20) to evaluate the effectiveness of ASBOA. Tables 2–4 summarize the results of the 8 comparison algorithms and ASBOA, reporting the mean and standard deviation on the 30 functions of the CEC2017 suite (in both dimensions) and the 12 functions of the CEC2022 suite.
For the unimodal functions CEC2017 F1–F3 and CEC2022 F1-F5, ASBOA demonstrates the ability to converge to a solution close to the global optimum for both the 30-dimensional and 100-dimensional versions of the F1 function. SBOA and CPO only converge to the global optimum in the 30-dimensional case, while other comparison algorithms fail to find better solutions. The CEC2017-F2 function induces significant instability in most algorithms as the dimensionality increases. However, ASBOA effectively addresses this issue, showing notably superior performance. For the CEC2017-F3 function, the global optimum is located within a large, smooth region, which leads to a rapid decline in convergence speed for most algorithms. In contrast, ASBOA excels by avoiding incorrect convergence directions and enhancing convergence speed. Furthermore, while the performance of all algorithms deteriorates with higher dimensions, ASBOA is less affected by dimensionality increases. This is due to the three enhancement mechanisms within ASBOA, which enable the algorithm to better adapt to the search space and deliver improved results.
As the problem dimension grows, the performance of all algorithms in identifying the optimal solution diminishes for the simple multimodal functions CEC2017 F4-F10 and CEC2022 F6-F8. Both ASBOA and SBOA demonstrate superior optimization performance in lower dimensions. However, ASBOA manages to maintain consistent stability and delivers high-quality results across various dimensions. On the other hand, the performance of the standard SBOA gradually worsens as the dimension increases, highlighting the increasing advantages of ASBOA.
For the hybrid functions CEC2017 F11-F20 and CEC2022 F9-F12, ASBOA outperforms the other algorithms in terms of results. In particular, ASBOA maintains its dominance on functions CEC2017-F12, F15, and F18. SBOA also has some advantages, particularly on functions CEC2017-F16, F17, and F20. However, for the majority of the functions, ASBOA’s superiority is the most pronounced.
For the composite functions CEC2017 F21–F30, ASBOA still demonstrates dominant performance on these problems. SBOA and CPO only show advantages in a few cases, while other comparison algorithms exhibit no significant advantage and fail to solve these problems effectively in both dimensions. In summary, the three improvement mechanisms of ASBOA effectively iterated the solutions, leading to competitive performance.
To better illustrate the performance of each algorithm, the ranking distribution across all functions is shown in Fig 6. For CEC2017 (dim = 30), ASBOA ranks best on 23 functions and second on 7 functions. For CEC2017 (dim = 100), ASBOA achieves the best performance on 26 functions and ranks second on 4 functions. For the CEC2022 benchmark (dim = 20), ASBOA ranks first on 9 of the 12 test functions and second on 2, consistently performing well without any worst-case rankings. From the ranking perspective, ASBOA places in the top two on all but one of the test cases across the 42 functions, demonstrating its stability. Notably, ASBOA excels on unimodal and composite functions, while the standard SBOA achieves the best result on only 6 functions (dim = 30) and 4 functions (dim = 100). CPO achieves only 1 best result at dim = 30, and the other algorithms fail to achieve the best result on any function. This highlights the significant improvement of ASBOA over the standard SBOA.
4.3 Convergence analysis
To explore the convergence patterns of the algorithms throughout the iteration process and their responsiveness to different functions and dimensions, this study conducted experiments on representative functions from the CEC2017 and CEC2022 benchmark suites. These include the unimodal functions CEC2017-F1 and CEC2022 F1–F3, the multimodal functions CEC2017 F7 and F10 and CEC2022-F7, the hybrid functions CEC2017-F18 and CEC2022-F11, and the composite functions CEC2017 F22 and F30. The experiments were carried out at dim = 30 and dim = 100. The comprehensive convergence results are shown in Fig 7.
For the convergence curves of the unimodal functions CEC2017-F1 and CEC2022 F1 and F2, ASBOA has an advantage in both convergence speed and accuracy, and it notably reduces the sensitivity of SBOA to parameter variations. Furthermore, the performance advantage of ASBOA becomes especially pronounced on F2 as the problem dimension grows.
For the convergence curves of the multimodal functions CEC2017 F7 and F10 and CEC2022-F7, ASBOA exhibits the fastest convergence rate and the highest accuracy in reaching the optimal solution. In contrast, the other algorithms tend to get trapped in local optima as the number of iterations increases, owing to the multimodal landscape of these problems, which prevents them from obtaining better solutions. Furthermore, as the dimension increases, ASBOA's convergence curve stands out among all the algorithms. This demonstrates the effectiveness and competitiveness of the three strategies integrated into ASBOA.
For the hybrid functions CEC2017-F18 and CEC2022-F11 convergence curves, ASBOA also provides the best results. Despite the complex topology of these functions, ASBOA’s convergence curve continues to decrease until the end of the iterations, indicating that it consistently improves the quality of the current solution. This demonstrates ASBOA’s ability to effectively navigate through challenging optimization landscapes.
For the convergence curves of the composite functions CEC2017 F22 and F30, ASBOA does not show a significant advantage on the high-dimensional F22, but it still performs relatively well compared to the other algorithms. On F30, the results of SBOA and ASBOA are similar, though ASBOA retains a noticeable edge. In conclusion, ASBOA exhibits good convergence behavior on these problems, particularly in maintaining stability and finding competitive solutions.
4.5 Robustness analysis
To further confirm the robustness and stability of the algorithm, boxplots illustrating the performance of each algorithm on representative functions were generated, as shown in Fig 8. The figure clearly reveals that the data distribution for ASBOA is generally more concentrated, indicating its robust and stable search capability. This can be attributed to the three enhancement mechanisms embedded within ASBOA, which continuously refine the search process and mitigate the impact of complex problems. Additionally, while the data distributions for GWO, PSO, CPO, and SBOA are relatively stable, their boxplots are positioned higher, indirectly suggesting that these algorithms may not offer a distinct advantage, with their robustness and stability falling short compared to ASBOA.
4.6 Wilcoxon rank sum test
To comprehensively demonstrate the superiority of the proposed algorithm, this section uses the Wilcoxon rank-sum test to compare ASBOA against each competing algorithm in every experiment at a significance level of 0.05. The null hypothesis is that there is no significant difference between the two algorithms. When the p-value is less than 0.05, the null hypothesis is rejected, indicating a significant difference between the two algorithms; when the p-value is greater than 0.05, the null hypothesis is not rejected, suggesting that the two algorithms perform similarly. The differences between the algorithms are presented in tabular form, with entries whose p-value is greater than 0.05 highlighted. The test results are shown in Tables 5–7.
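As a minimal sketch of this procedure, the pure-Python function below implements the two-sided Wilcoxon rank-sum test with the common normal approximation and average ranks for ties; statistical packages used in practice may apply additional corrections, so this is illustrative rather than the exact implementation behind Tables 5–7:

```python
import math

def rank_sum_test(xs, ys):
    """Two-sided Wilcoxon rank-sum test using the normal approximation
    and average ranks for ties; returns (z statistic, p-value)."""
    pooled = sorted((v, i) for i, v in enumerate(xs + ys))
    ranks = [0.0] * len(pooled)
    i = 0
    while i < len(pooled):
        j = i
        while j + 1 < len(pooled) and pooled[j + 1][0] == pooled[i][0]:
            j += 1
        avg = (i + j) / 2 + 1          # average rank of a tied group
        for k in range(i, j + 1):
            ranks[pooled[k][1]] = avg
        i = j + 1
    n1, n2 = len(xs), len(ys)
    w = sum(ranks[:n1])                 # rank sum of the first sample
    mu = n1 * (n1 + n2 + 1) / 2
    sigma = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12.0)
    z = (w - mu) / sigma
    p = math.erfc(abs(z) / math.sqrt(2))  # two-sided p-value
    return z, p
```

Feeding in the 30 best-fitness values from two algorithms' independent runs then yields the p-value compared against 0.05, exactly as described above.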
As shown in the tables, the highlighted entries are sparse; that is, there are only a few cases in which the difference is not significant, indicating that the newly proposed ASBOA is clearly distinguished from the compared metaheuristic algorithms. ASBOA therefore exhibits excellent overall performance among the metaheuristics considered, suggesting that the three introduced strategies effectively enhance the algorithm's convergence speed and solution accuracy.
4.7 Time cost comparison between ASBOA and SBOA
Based on the previous research results, the improved ASBOA significantly outperforms the traditional SBOA in terms of overall performance. This section will focus on analyzing the differences in computational time between the two algorithms. To ensure fairness, the parameter settings for both ASBOA and SBOA are kept consistent with those used earlier. Additionally, the average runtime of the algorithms over 30 independent experiments is recorded. Fig 9 illustrates the average computational time (in seconds) required by each algorithm to solve every test function.
The experimental results on the CEC2017 benchmark set (dim = 30) show that for unimodal functions and some relatively simple multimodal functions, the execution times of ASBOA and SBOA are essentially the same. However, on the more complex hybrid functions, ASBOA typically requires more computational time than SBOA, reflecting the additional overhead introduced by the three enhancement strategies. In exchange, ASBOA employs a more effective search strategy and performs better in terms of global search ability and local convergence speed. Overall, ASBOA achieves higher solution accuracy than SBOA on most test functions, and the slight increase in runtime is negligible.
4.8 Exploration pattern analysis
In this section, the optimization performance of ASBOA during the search process is analyzed using search paths, average fitness values, search trajectories, and convergence trend plots. The convergence trend reflects the best fitness value achieved at each iteration, while the average fitness value represents the mean fitness level of all individuals at each iteration. The trajectory curve records the dynamic changes in the first dimension during the search process, and the search path intuitively presents the distribution of visited locations throughout the optimization process.
The second column of Fig 10 illustrates the evolution curves of the average fitness values for SBOA and ASBOA, highlighting the competitive advantage of ASBOA. Compared to SBOA, ASBOA identifies solutions progressively closer to the optimal solution of the test functions as iterations proceed. The third column depicts the search trajectories of ASBOA in the first dimension, demonstrating its ability to initially explore potential high-quality regions during the exploration phase, followed by transitioning into an exploitation phase that focuses on refining the identified superior regions to enhance solution precision.
The fourth column shows the convergence curves of ASBOA across various test functions, displaying a rapid decline trend that indicates its efficiency in approaching the optimal solutions. The fifth column visualizes the positional changes of particles during the optimization process. It can be observed that ASBOA exhibits a flexible search pattern across different stages: initially performing a comprehensive exploration of the entire search space, then gradually concentrating on smaller regions for in-depth exploitation, and ultimately locating solutions that are closest to the optimal.
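The diagnostics plotted in Fig 10 (convergence trend, average population fitness, first-dimension trajectory, and visited positions) reduce to simple per-iteration bookkeeping. The toy perturbation search below is a hedged stand-in for ASBOA, used only to show where each curve comes from:

```python
import random

def track_search(f, dim=2, pop=20, iters=100, seed=1):
    """Run a toy random-perturbation search (with elitism) while
    recording the per-iteration best fitness (convergence trend), the
    population mean fitness, and the best individual's first-dimension
    trajectory."""
    rng = random.Random(seed)
    xs = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(pop)]
    best_curve, mean_curve, traj = [], [], []
    for t in range(iters):
        fit = [f(x) for x in xs]
        b = min(range(pop), key=lambda i: fit[i])
        best_curve.append(fit[b])            # convergence trend
        mean_curve.append(sum(fit) / pop)    # average fitness level
        traj.append(xs[b][0])                # first-dimension trajectory
        step = 1.0 * (1 - t / iters)         # shrink perturbations over time
        xs = [[v + rng.uniform(-step, step) for v in x] if i != b else x
              for i, x in enumerate(xs)]
    return best_curve, mean_curve, traj
```

Because the best individual is preserved between iterations, the best-fitness curve is non-increasing, which is the monotone decline visible in the fourth column of Fig 10.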
4.9 Applied ASBOA for Wireless Sensor Network Deployment
In the experiment, 30 wireless sensor nodes are randomly deployed in a square monitoring area, and two different sensor communication radii are tested (the specific area size and radius values are given in the experimental setup). To compare the performance of the algorithms, the parameters of all algorithms in the fitness function are set to be identical, as shown in Table 1. The population size for all algorithms is set to 30, and the number of iterations is set to 500. To ensure fairness, 30 independent runs were conducted with the same environment and parameter settings to eliminate randomness, and the relevant experimental results were recorded. The minimum (Min), maximum (Max), median (Median), mean (Mean), standard deviation (Std), and number of failed nodes (FN) for each comparison algorithm were statistically analyzed, with the optimal values highlighted in bold in Table 8. In this study, the faulty nodes are randomly selected to simulate unpredictable node failures in real-world wireless sensor networks caused by energy depletion or environmental factors. The convergence curves of the different algorithms on the WSN problem are shown in Fig 11, and the node deployment optimized by each algorithm is shown in Fig 12.
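The coverage objective in such deployments is typically the fraction of the monitored region lying within the sensing radius of at least one node. The sketch below uses a binary perception model on a discretized square; the 50 × 50 area and 100 × 100 grid are illustrative assumptions, not the paper's actual settings:

```python
import math

def coverage_rate(nodes, radius, area=50.0, grid=100):
    """Fraction of grid cells in an area x area square whose center is
    covered by at least one sensor disk (binary perception model).
    `nodes` is a list of (x, y) sensor positions."""
    covered = 0
    for i in range(grid):
        for j in range(grid):
            px = (i + 0.5) * area / grid   # cell-center coordinates
            py = (j + 0.5) * area / grid
            if any(math.hypot(px - x, py - y) <= radius
                   for x, y in nodes):
                covered += 1
    return covered / (grid * grid)
```

A metaheuristic such as ASBOA then searches over the 2N node coordinates to maximize this rate, which is how the 88.32% figure reported below is obtained.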
As observed from Table 8, the performance of the proposed ASBOA on the wireless sensor network (WSN) problem is significantly better than that of the other comparison algorithms in terms of the minimum (Min), maximum (Max), median (Median), and mean (Mean) values. ASBOA attains better coverage, with a maximum coverage rate of 88.32%, compared to 87.20% for the standard SBOA, an improvement of 1.12%. Furthermore, the average coverage rate of ASBOA is also higher than that of the other comparison algorithms, which suggests that the proposed ASBOA exhibits strong robustness on this problem. In terms of the number of failed nodes, ASBOA performs best with only 2 failed nodes out of a total of 30 nodes. In comparison, the standard SBOA and the recently proposed CPO each have 4 failed nodes. WOA exhibits the highest number of failed nodes at 7, followed by PSO, DBO, and BKA, each with 6. These results indicate that ASBOA is better suited to WSN problems than the other algorithms and confirm that it offers outstanding optimization performance in WSNs, providing a powerful tool for solving the node deployment problem.
As shown in Fig 11, although ASBOA does not converge as quickly as GWO, WOA, BKA, AVOA, PSO, and CPO in the early iterations, it continues to improve as the iterations proceed and ultimately achieves better results, whereas the other algorithms become trapped in local optima and fail to find better solutions. This suggests that the proposed ASBOA has excellent potential for the WSN node deployment optimization problem. As seen in Fig 12, compared to the deployments produced by the other algorithms, the distribution optimized by ASBOA is more reasonable, with less overlap, fewer blank areas, and more evenly placed nodes. In contrast, the deployments produced by WOA, CPO, and BKA show significant overlap between nodes, which is clearly suboptimal. ASBOA's more rational deployment yields benefits in energy efficiency, coverage, data accuracy, network reliability, cost, and delay, making the entire network more sustainable, reliable, and cost-effective. From another perspective, this further validates the effectiveness of ASBOA.
4.10 Engineering optimization problem
4.10.1 Three-bar Truss Design Problem.
The three-bar truss design problem is a classic optimization problem widely applied in the field of civil engineering. The primary objective of this problem is to minimize the overall structural weight by optimizing design parameters, thereby improving material efficiency and the economic performance of the engineering structure. The design involves the adjustment of two key parameter variables, which directly affect the geometry and dimensions of the truss, ultimately determining the structure’s mass and performance. The specific structural form of the three-bar truss is shown in Fig 13. The mathematical model of this problem is given by equation (20).
Where
Table 9 presents the optimization results of ASBOA compared with other algorithms for the three-bar truss design problem. As can be seen, ASBOA achieves the best optimization result, with a value of 2.6390E+02.
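Equation (20) is not reproduced here; as a hedged sketch, the code below uses the standard three-bar truss formulation common in the literature (bar length L = 100, load P = 2, stress limit σ = 2, assumed here rather than taken from the paper), under which the known best weight is about 2.6390E+02. A static penalty turns the constrained problem into the unconstrained form that a metaheuristic minimizes:

```python
import math

L, P, SIGMA = 100.0, 2.0, 2.0   # standard constants for this benchmark

def truss_weight(x1, x2):
    """Structural weight as a function of the two cross-section areas."""
    return (2 * math.sqrt(2) * x1 + x2) * L

def truss_constraints(x1, x2):
    """Stress constraints g_i <= 0 of the standard formulation."""
    s2 = math.sqrt(2)
    g1 = (s2 * x1 + x2) / (s2 * x1**2 + 2 * x1 * x2) * P - SIGMA
    g2 = x2 / (s2 * x1**2 + 2 * x1 * x2) * P - SIGMA
    g3 = 1 / (x1 + s2 * x2) * P - SIGMA
    return [g1, g2, g3]

def penalized(x1, x2, rho=1e6):
    """Objective plus a static penalty on constraint violations, the
    usual way such problems are handed to a metaheuristic."""
    viol = sum(max(0.0, g) ** 2 for g in truss_constraints(x1, x2))
    return truss_weight(x1, x2) + rho * viol
```

At the well-known near-optimal design (x1 ≈ 0.7887, x2 ≈ 0.4082) all constraints are satisfied and the weight matches the value reported in Table 9.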
4.10.2 Tension/Compression Spring Design Problem.
This design problem falls within the field of mechanical design optimization, with the objective of achieving lightweight design by minimizing the weight of the tension/compression spring. The optimization task focuses on three key parameters of the spring: wire diameter (d), coil diameter (D), and number of coils (N). These parameters not only directly affect the weight of the spring but also determine its mechanical performance and service life under tension and compression conditions. By optimizing these variables in a reasonable manner, material consumption and production costs can be significantly reduced while meeting the design strength and functional requirements. The geometric structure and working principle of this engineering problem are shown in Fig 14, with the detailed mathematical description provided by equation (21).
Table 10 presents the optimization results of ASBOA compared with other algorithms for the tension/compression spring design problem. As shown, the optimization results of ASBOA outperform those of the other comparison algorithms, with the optimal value being 1.2668E-02.
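Equation (21) is likewise not reproduced here; the sketch below uses the standard spring-design formulation from the literature (minimize (N + 2)·D·d² subject to four deflection, shear, surge, and geometry constraints), an assumption consistent with the optimum of about 1.2668E-02 reported in Table 10:

```python
def spring_weight(d, D, N):
    """Spring weight: wire diameter d, coil diameter D, N active coils."""
    return (N + 2) * D * d**2

def spring_constraints(d, D, N):
    """Constraints g_i <= 0 of the standard formulation."""
    g1 = 1 - D**3 * N / (71785 * d**4)                  # deflection
    g2 = ((4 * D**2 - d * D) / (12566 * (D * d**3 - d**4))
          + 1 / (5108 * d**2) - 1)                      # shear stress
    g3 = 1 - 140.45 * d / (D**2 * N)                    # surge frequency
    g4 = (D + d) / 1.5 - 1                              # outer diameter
    return [g1, g2, g3, g4]
```

At the widely cited near-optimal design (d ≈ 0.05169, D ≈ 0.35675, N ≈ 11.287), the weight evaluates to about 0.012665 with all constraints satisfied up to numerical tolerance.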
4.10.3 Cantilever beam design problem.
The cantilever beam design problem is a typical engineering optimization problem, with the goal of optimizing the design of a five-cube hollow structure. The core task of this problem is to adjust key parameters in such a way that the cost is minimized while meeting strength, stiffness, and other specific design constraints. The optimization process requires balancing material efficiency, manufacturing complexity, and structural performance to achieve a cost-effective design solution. The structure of this problem is shown in Fig 15, which visually illustrates the geometry and design characteristics of the cantilever beam. The specific optimization objectives and constraints are given by equation (22).
Table 11 shows the performance of ASBOA and the other comparison algorithms on the cantilever beam design (CBD) problem, where ASBOA achieves the best objective function value of 1.3400E+00, outperforming the other comparison algorithms.
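Equation (22) is not reproduced here either; the sketch below uses the standard cantilever-beam formulation from the literature (five hollow square sections, weight 0.0624·Σxᵢ with a single deflection constraint), an assumption consistent with the optimum of about 1.3400E+00 reported in Table 11:

```python
def beam_weight(x):
    """Total weight of the five hollow square sections (standard model);
    x is the list of the five section heights."""
    return 0.0624 * sum(x)

def beam_constraint(x):
    """Deflection constraint g <= 0 of the standard formulation."""
    x1, x2, x3, x4, x5 = x
    return (61 / x1**3 + 37 / x2**3 + 19 / x3**3
            + 7 / x4**3 + 1 / x5**3 - 1)
```

At the widely cited near-optimal design x ≈ (6.016, 5.309, 4.494, 3.502, 2.153), the weight evaluates to about 1.3400 and the constraint is active (approximately zero), matching Table 11.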
The experimental results from the three engineering optimization design problems demonstrate that the proposed ASBOA shows significant advantages in addressing real-world engineering optimization problems. Compared to other comparison algorithms, ASBOA not only finds better solutions but also effectively reduces engineering costs, showcasing its efficiency and practical value in real-world applications.
5. Summary and prospect
This paper proposes an improved version of SBOA aimed at enhancing the convergence speed and solution accuracy of the standard SBOA while reducing the risk of the algorithm getting trapped in local optima. First, a Differential Cooperative Search Mechanism is employed to reduce the risk of premature convergence to local optima, and an Optimal Boundary Control Mechanism is used to avoid ineffective exploration, improving convergence speed. Additionally, an Information Retention Control Mechanism is applied to update the population while ensuring that the current global optimal solution is always retained, further accelerating convergence.
The ASBOA algorithm is tested on the CEC2017 and CEC2022 benchmark functions. The results show that for CEC2017 (dimensions 30 and 100) and CEC2022 (dimension 20), ASBOA achieves the best ranking on 76.67% (23 of 30), 86.67% (26 of 30), and 75% (9 of 12) of the functions, respectively. It is also applied to the WSN problem and three engineering problems, where it outperforms all comparison algorithms. In the WSN problem, the coverage rate reaches 88.32%, an improvement of 1.12% over the standard SBOA. These results demonstrate that the proposed ASBOA has strong overall performance and holds great promise for optimization applications.
Experimental observations on the CEC2017 and CEC2022 test set show that ASBOA can quickly converge to a global optimal solution in most cases. However, there is still a risk of getting trapped in local optima (e.g., in CEC2017-F26, F27, and F29, and CEC2022-F10). Furthermore, the coverage rate in the WSN problem remains insufficient and requires further improvement. Due to a lack of population diversity, stagnation may occur. Other potential issues still need to be further tested and validated.
Although ASBOA has demonstrated excellent performance in wireless sensor network (WSN) node deployment optimization and engineering optimization problems, there remain areas for improvement. First, while ASBOA outperforms other algorithms in terms of coverage and optimization results, its convergence speed in the early stages is relatively slow, which may reduce its efficiency in real-world applications requiring rapid responses. Second, although ASBOA exhibits strong robustness, its performance and stability under larger-scale problems or more complex constraints have not been fully validated. Additionally, the sensitivity of its parameters and its adaptability to different problems require further investigation.
Future research can focus on several promising directions. First, the introduction of dynamic parameter adjustment mechanisms or more efficient search strategies could enhance the algorithm’s convergence speed during early iterations, allowing it to achieve high-quality solutions faster. For larger-scale networks or problems with complex constraints, designing more adaptive strategies or hybrid algorithms will be essential to improve the method’s flexibility and broaden its applicability. At the same time, efforts to optimize the algorithm’s structure could significantly reduce computational complexity, making it more suitable for applications requiring high real-time responsiveness. Another intriguing direction lies in exploring the integration of machine learning or deep learning techniques, which can leverage data-driven approaches to optimize parameter settings or improve the modeling of complex problems. Furthermore, in-depth theoretical analyses are needed to better understand the algorithm’s convergence properties and computational complexity, along with the development of automated parameter adjustment methods to minimize the need for manual tuning. Additionally, future work may extend to the design of deployment strategies specifically for wireless sensor networks (WSNs), addressing challenges such as k-coverage and k-connectivity while simultaneously minimizing network costs and maximizing network lifespan. Developing multi-objective and binary versions of ASBOA could also expand its capability to meet diverse problem requirements. These advancements collectively hold the potential to further enhance the efficiency and versatility of ASBOA, establishing it as a robust and general-purpose tool for tackling complex optimization challenges.
References
- 1. Anil Kumar N, Sukhi Y, Preetha M, Sivakumar KJ. Ant Colony Optimization with Levy-Based Unequal Clustering and Routing (ACO-UCR) technique for wireless sensor networks. Systems and Computers. 2024;33(3).
- 2. Priyadarshi R, Kumar RR, Ying Z. Techniques employed in distributed cognitive radio networks: a survey on routing intelligence. Multimedia Tools and Applications. 2025;84(9):5741–92.
- 3. Ab Aziz NAB, Mohemmed AW, Alias MY. A wireless sensor network coverage optimization algorithm based on particle swarm optimization and Voronoi diagram. In: International Conference on Networking, Sensing and Control, Okayama, Japan; 2009. p. 596.
- 4. Amaldi E, Capone A, Malucelli F. Radio planning and coverage optimization of 3G cellular networks. Wireless Networks. 2008;14(4):435–47.
- 5. Dinesh K, Svn SKJP. GWO-SMSLO: Grey wolf optimization based clustering with secured modified Sea Lion optimization routing algorithm in wireless sensor networks. Applications. 2024;17(2):585–611.
- 6. Priyadarshi R, Gupta B, Anurag A. Wireless sensor networks deployment: A result oriented analysis. Wireless Personal Communications. 2020;113(2):843–66.
- 7. Shwetha G, Murthy SJ. A combined approach based on antlion optimizer with particle swarm optimization for enhanced localization performance in wireless sensor networks. Journal of Advances in Information Technology. 2024;15(1).
- 8. Rathee M, Kumar S, Dilip K, Dohare U. Towards energy balancing optimization in wireless sensor networks: a novel quantum inspired genetic algorithm based sinks deployment approach. Ad Hoc Networks. 2024;153:103350.
- 9. Ab Aziz NAB, Mohemmed AW, Sagar BSD. Particle swarm optimization and Voronoi diagram for wireless sensor networks coverage optimization. In: International Conference on Intelligent and Advanced Systems, Kuala Lumpur, Malaysia; 2007. p. 961–5.
- 10. Priyadarshi R, Gupta B, Anurag A. Deployment techniques in wireless sensor networks: a survey, classification, challenges, and future research issues. The Journal of Supercomputing. 2020;76(9):7333–73.
- 11. El Khamlichi Y, Tahiri A, Abtoy A, Medina-Bulo I, Palomo-Lozano F. A hybrid algorithm for optimal wireless sensor network deployment with the minimum number of sensor nodes. 2017;10(3):80.
- 12. Wu G, Mallipeddi R, Suganthan P. Problem definitions and evaluation criteria for the CEC 2017 competition and special session on constrained single objective real-parameter optimization. Technical report; 2016.
- 13. Egwuche OS, Singh A, Ezugwu AE, Greeff J, Olusanya MO, Abualigah L. Machine learning for coverage optimization in wireless sensor networks: a comprehensive review. Annals of Operations Research. 2023.
- 14. Priyadarshi R. Energy-efficient routing in wireless sensor networks: a meta-heuristic and artificial intelligence-based approach: a comprehensive review. Archives of Computational Methods in Engineering. 2024;31(4):2109–37.
- 15. Rawat P, Chauhan S, Priyadarshi R. A novel heterogeneous clustering protocol for lifetime maximization of wireless sensor network. Wireless Personal Communications. 2021;117(2):825–41.
- 16. Priyadarshi R, Rawat P, Nath V, Acharya B, Shylashree N. Three level heterogeneous clustering protocol for wireless sensor network. Microsystem Technologies. 2020;26(12):3855–64.
- 17. Nguyen TM, Vo HH-P, Yoo M. Enhancing intrusion detection in wireless sensor networks using a GSWO-catboost approach. Sensors (Basel). 2024;24(11):3339. pmid:38894128
- 18. Priyadarshi R, Soni SK, Nath V. Energy efficient cluster head formation in wireless sensor network. Microsystem Technologies. 2018;24(12):4775–84.
- 19. Priyadarshi R, Rawat P, Nath V. Energy dependent cluster formation in heterogeneous wireless sensor network. Microsystem Technologies. 2019;25(6):2313–21.
- 20. Ou Y, Qin F, Zhou K-Q, Yin PF, Mo LP, Mohd Zain A. An improved grey wolf optimizer with multi-strategies coverage in wireless sensor networks. Symmetry. 2024;16(3):286.
- 21. Priyadarshi R, Gupta B. Coverage area enhancement in wireless sensor network. Microsystem Technologies. 2020;26(5):1417–26.
- 22. Priyadarshi R, Vikram R. A Triangle-Based Localization Scheme in Wireless Multimedia Sensor Network. Wireless Pers Commun. 2023;133(1):525–46.
- 23. Wang W, Lyu L. Adaptive Tasmanian devil optimizer for global optimization and application in wireless sensor network deployment. IEEE Access. 2024.
- 24. Ahmad R, Alhasan W, Wazirali R, Aleisa N. Optimization algorithms for wireless sensor networks node localization: an overview. IEEE Access. 2024.
- 25. Qiu Y, Ma L, Priyadarshi R. Deep learning challenges and prospects in wireless sensor network deployment. Archives of Computational Methods in Engineering. 2024;31(6):3231–54.
- 26. Rawat P, Chauhan S, Priyadarshi R. Energy-efficient clusterhead selection scheme in heterogeneous wireless sensor network. Energy. 2020;29(13):2050204.
- 27. Priyadarshi R, Gupta B. Area coverage optimization in three-dimensional wireless sensor network. Wireless Personal Communications. 2021;117(2):843–65.
- 28. Holland JH. Genetic algorithms. Scientific American. 1992;267(1):66–73.
- 29. Kennedy J, Eberhart R. Particle swarm optimization. In: Proceedings of ICNN'95 – International Conference on Neural Networks; 1995. p. 1942–8.
- 30. Dorigo M, Birattari M, Stützle T. Ant Colony Optimization. Comput Intell Mag. 2006;1:28–39.
- 31. Ben Amor O, Chelly Dagdia Z, Bechikh S, Ben Said L. Many-objective optimization of wireless sensor network deployment. Evolutionary Intelligence. 2024;17(2):1047–63.
- 32. Miao Z, Yuan XF, Zhou FY, Qiu XJ, Song Y, Chen K. Grey wolf optimizer with an enhanced hierarchy and its application to the wireless sensor network coverage optimization problem. Applied Soft Computing. 2020;96:Art. no. 106602.
- 33. Bian K, Priyadarshi R. Machine learning optimization techniques: A survey, classification, challenges, and future research issues. Archives of Computational Methods in Engineering. 2024;31(7):4209–33.
- 34. Li K, Huang H, Fu S, Ma C, Fan Q, Zhu Y. A multi-strategy enhanced northern goshawk optimization algorithm for global optimization and engineering design problems. Computer Methods in Applied Mechanics and Engineering. 2023;415:116199.
- 35. Su H, et al. RIME: A physics-based optimization. Neurocomputing. 2023;532:183–214.
- 36. Storn R, Price K. Differential evolution–a simple and efficient heuristic for global optimization over continuous spaces. Journal of global optimization. 1997;11(4):341–59.
- 37. Kirkpatrick S, Gelatt CD, Vecchi MP. Optimization by simulated annealing. Science. 1983;220(4598):671–80.
- 38. Rashedi E, Nezamabadi-Pour H, Saryazdi S. GSA: a gravitational search algorithm. Information Sciences. 2009;179(13):2232–48.
- 39. Rao RV, Savsani VJ, Vakharia D. Teaching–learning-based optimization: a novel method for constrained mechanical design optimization problems. Computer-aided design. 2011;43(3):303–15.
- 40. Kumar M, Kulkarni AJ, Satapathy SC. Socio evolution & learning optimization algorithm: A socio-inspired optimization methodology. Future Generation Computer Systems. 2018;81:252–72.
- 41. Gao Y, Zhang J, Wang Y, Wang J, Qin L. Love Evolution Algorithm: a stimulus–value–role theory-inspired evolutionary algorithm for global optimization. The Journal of Supercomputing. 2024.
- 42. Zolf K. Gold rush optimizer: a new population-based metaheuristic algorithm. Operations Research and Decisions. 2023;33(1).
- 43. Yang XS, He X. Bat algorithm: literature review and applications. IJBIC. 2013;5(3):141.
- 44. Mirjalili S, Mirjalili SM, Lewis A. Grey Wolf Optimizer. Advances in Engineering Software. 2014;69:46–61.
- 45. Heidari AA, Mirjalili S, Faris H, Aljarah I, Mafarja M, Chen H. Harris hawks optimization: Algorithm and applications. Future Generation Computer Systems. 2019;97:849–72.
- 46. Chopra N, Mohsin Ansari M. Golden jackal optimization: A novel nature-inspired optimizer for engineering applications. Expert Systems with Applications. 2022;198:116924.
- 47. Zamani H, Nadimi-Shahraki MH, Gandomi AH. QANA: Quantum-based avian navigation optimizer algorithm. Engineering Applications of Artificial Intelligence. 2021;104:104314.
- 48. Hayyolalam V, Kazem AAP. Black widow optimization algorithm: a novel meta-heuristic approach for solving engineering optimization problems. Engineering Applications of Artificial Intelligence. 2020;87:103249.
- 49. Fu S, Li K, Huang H, Ma C, Fan Q, Zhu Y. Red-billed blue magpie optimizer: a novel metaheuristic algorithm for 2D/3D UAV path planning and engineering design problems. Artificial Intelligence Review. 2024;57(6).
- 50. Mohammadi-Balani A, Dehghan Nayeri M, Azar A, Taghizadeh-Yazdi M. Golden eagle optimizer: A nature-inspired metaheuristic algorithm. Computers & Industrial Engineering. 2021;152:107050.
- 51. Hu G, Guo Y, Wei G, Abualigah L. Genghis Khan shark optimizer: A novel nature-inspired algorithm for engineering optimization. Advanced Engineering Informatics. 2023;58:102210.
- 52. Hamad RK, Rashid TA. GOOSE algorithm: a powerful optimization tool for real-world engineering challenges and beyond. Evolving Systems. 2024:1–26.
- 53. Priyadarshi R, Kumar RR. Evolution of swarm intelligence: A systematic review of particle swarm and ant colony optimization approaches in modern research. Archives of Computational Methods in Engineering. 2025.
- 54. Fu Y, Liu D, Chen J, He L. Secretary bird optimization algorithm: a new metaheuristic for solving global optimization problems. Artificial Intelligence Review. 2024;57(5):123.
- 55. Xu G, Shen W, Wang X. Applications of wireless sensor networks in marine environment monitoring: a survey. Sensors (Basel). 2014;14(9):16932–54. pmid:25215942
- 56. Hofmeyr SD, Symes CT, Underhill LG. Secretarybird Sagittarius serpentarius population trends and ecology: insights from South African citizen science data. PLoS One. 2014;9(5):e96772. pmid:24816839
- 57. De Swardt DH. Late-summer breeding record for secretarybirds Sagittarius serpentarius in the Free State. Gabar. 2011;22:31–3.
- 58. Hu G, Cheng M, Houssein EH, Hussien AG, Abualigah L. SDO: A novel sled dog-inspired optimizer for solving engineering problems. Advanced Engineering Informatics. 2024;62:102783.
- 59. Gao H, Zhang Q, Bu X, Zhang H. Quadruple parameter adaptation growth optimizer with integrated distribution, confrontation, and balance features for optimization. Expert Systems with Applications. 2024;235:121218.
- 60. Xue J, Shen B. Dung beetle optimizer: a new meta-heuristic algorithm for global optimization. The Journal of Supercomputing. 2022.
- 61. Zhang Q, Gao H, Zhan ZH, Li J, Zhang H. Growth Optimizer: a powerful metaheuristic algorithm for solving continuous and discrete global optimization problems. Knowledge-Based Systems. 2023;261:110206.
- 62. Fu S, et al. Modified LSHADE-SPACMA with new mutation strategy and external archive mechanism for numerical optimization and point cloud registration. Artificial Intelligence Review. 2025;58(3):72.
- 63. Mirjalili S, Lewis A. The whale optimization algorithm. Advances in Engineering Software. 2016;95:51–67.
- 64. Abdollahzadeh B, Gharehchopogh FS, Mirjalili S. African vultures optimization algorithm: a new nature-inspired metaheuristic algorithm for global optimization problems. Computers & Industrial Engineering. 2021;158:107408.
- 65. Abdel-Basset M, Mohamed R, Abouhawwash M. Crested porcupine optimizer: A new nature-inspired metaheuristic. Knowledge-Based Systems. 2024;284:111257.
- 66. Wang J, Wang WC, Hu XX, Qiu L, Zang HF. Black-winged kite algorithm: a nature-inspired meta-heuristic for solving benchmark functions and engineering problems. Artificial Intelligence Review. 2024;57(4):1–53.
- 67. Luo W, Lin X, Li C, Yang S, Shi Y. Benchmark functions for CEC 2022 competition on seeking multiple optima in dynamic environments. arXiv preprint. 2022.