
Zebra optimization algorithm incorporating opposition-based learning and dynamic elite-pooling strategies and its applications

  • Tengfei Ma,

    Roles Conceptualization, Data curation, Methodology, Software, Validation, Visualization, Writing – original draft

    Affiliations School of Automation and Electrical Engineering, Tianjin University of Technology and Education, Tianjin, China, Tianjin Key Laboratory of Information Sensing & Intelligent Control, Tianjin, China

  • Guangda Lu,

    Roles Funding acquisition, Supervision, Writing – review & editing

    Affiliations School of Automation and Electrical Engineering, Tianjin University of Technology and Education, Tianjin, China, Tianjin Key Laboratory of Information Sensing & Intelligent Control, Tianjin, China

  • Zhuanping Qin ,

    Roles Methodology, Supervision, Writing – review & editing

    * qinzhuanping@126.com

    Affiliations School of Automation and Electrical Engineering, Tianjin University of Technology and Education, Tianjin, China, Tianjin Key Laboratory of Information Sensing & Intelligent Control, Tianjin, China

  • Tinghang Guo,

    Roles Software, Writing – review & editing

    Affiliations School of Automation and Electrical Engineering, Tianjin University of Technology and Education, Tianjin, China, Tianjin Key Laboratory of Information Sensing & Intelligent Control, Tianjin, China

  • Zheng Li,

    Roles Methodology, Validation, Writing – review & editing

    Affiliations School of Automation and Electrical Engineering, Tianjin University of Technology and Education, Tianjin, China, Tianjin Key Laboratory of Information Sensing & Intelligent Control, Tianjin, China

  • Changli Zhao

    Roles Formal analysis, Supervision

    Affiliation Tianjin Sino-German University of Applied Sciences, Tianjin, China

Abstract

To address the limitations of the Zebra Optimization Algorithm (ZOA), including insufficient late-stage optimization search capability, susceptibility to local optima, slow convergence, and inadequate exploration, this paper proposes an enhanced Zebra Optimization Algorithm integrating opposition-based learning and a dynamic elite-pooling strategy (OP-ZOA: Opposition-Based Learning Dynamic Elite-Pooling Zebra Optimization Algorithm). The proposed search algorithm employs a good point set-elite opposition-based learning mechanism to initialize the population, enhancing diversity and facilitating escape from local optima. Additionally, a real-time information synchronization mechanism is incorporated into the position update process, enabling the exchange of position and state information between the optimal individual (Xbest) and the vigilante agent (Xworse). This eliminates information silos, thereby improving global search capability and convergence speed. Furthermore, a dynamic elite-pooling strategy is introduced, incorporating three distinct fitness factors. The optimal individual’s position is updated by randomly selecting from these factors, enhancing the algorithm’s ability to attain the global optimum and increasing its overall robustness. During experimental evaluation, the efficiency of OP-ZOA was verified using the CEC2017 test functions, demonstrating superior performance compared to seven recently proposed meta-heuristic algorithms (Bloodsucking Leech Algorithm (BSLO), Parrot Optimization Algorithm (PO), Polar Lights Algorithm (PLO), Red-tailed Hawk Optimization Algorithm (RTH), Bitterling Fish Optimization Algorithm (BFO), Spider Wasp Optimization Algorithm (SWO) and Zebra Optimization Algorithm (ZOA)). Finally, OP-ZOA exhibits distinct advantages in optimizing the APF (artificial potential field) method to address local optimum convergence issues.
Specifically, it achieves faster iteration speeds across four different environments, with the planned path length after escaping local optima being shortened by an average of 7.55175 m (16.291%) compared to other optimization algorithms. These results confirm OP-ZOA’s enhanced optimization capability, significantly improving both escape efficiency from local optima and solution reliability.

Introduction

In science and engineering design, numerous complex optimization problems exist, characterized by non-convexity, high nonlinearity, multi-peak behavior, and multi-variable interactions. Metaheuristic algorithms have emerged as a research hotspot for addressing such challenges in engineering applications due to their advantages of simple implementation, operational flexibility, and efficient optimization search capabilities [1]. These algorithms have been successfully applied to solve real-world problems, including neural network optimization [2], resource allocation [3], and target tracking [4].

Multi-objective complex optimization problems are prevalent in production and daily life, where conflicting objectives must be optimized simultaneously [5,6]. However, obtaining optimal solutions for such problems remains challenging. To address this, multi-objective evolutionary algorithms, which can derive multiple solutions in a single learning process, have been extensively studied and applied in recent years [7-9]. Among these, swarm intelligence optimization algorithms exhibit distinct advantages, including minimal parameter requirements, straightforward implementation, and independence from gradient information. These algorithms can efficiently identify optimal solutions for multi-objective problems under highly complex constraints and within reasonable computational time [10-12]. Notable examples include Particle Swarm Optimization (PSO) [13], Artificial Bee Colony (ABC) [14], Philoponella Prominens Optimizer (PPO) [15], Harris Hawk Optimization (HHO) [16], and Whale Optimization Algorithm (WOA) [17]. Evolutionary algorithms, typically inspired by biological evolution, offer strong global search capabilities and adaptability to high-dimensional, nonlinear problems. Representative examples include the Differential Evolution Algorithm (DE) [18] and the Love Evolution Algorithm (LEA) [19], among others.

In the context of ongoing technological advancements, significant attention has been devoted to the development and refinement of swarm intelligence algorithms across various applications, including positioning computation [20], traveler path planning [21], support vector machine optimization [22], robot pathfinding [23], power system control [24], and Internet of Things (IoT) routing protocol optimization [25,26]. For instance, the integration of the swarm intelligence Boids model with Deep Reinforcement Learning (DRL) enhances the efficiency of UAVs in pursuit-evasion tasks [27]. Similarly, a hybrid approach combining an Artificial Neural Network (ANN) and the Hybrid Cuckoo Search Algorithm (HCS) addresses complex fluid dynamics problems, solves Partial Differential Equations (PDEs), and analyzes convective heat transfer in straight-ribbed sheets with temperature-dependent thermal conductivity [28-30]. The Multi-Objective Particle Swarm Optimization (MOPSO) algorithm has been applied to solve multi-warehouse vehicle routing problems in refinery oil distribution [31], while the Cuckoo Search (CS) algorithm, combined with a dual-file strategy, optimizes classical truss structures [32]. Additionally, metaheuristic-based algorithms have been implemented to identify small fixed-wing unmanned aerial vehicle (UAV) systems [33]. These improved algorithms demonstrate high efficiency in locating optimal solutions for large-scale problems. Despite their advantages, such as strong adaptability in automatic search strategy adjustment, high robustness in handling complex engineering problems, suitability for distributed computation, and ease of implementation, some of these methods still face challenges related to premature convergence to local optima.

The Zebra Optimization Algorithm (ZOA) is a heuristic optimization algorithm inspired by the collective behavior of zebra groups in nature, proposed by Trojovská et al. [34] in 2022. It offers advantages such as a simple principle and ease of implementation, and since its introduction, numerous scholars have developed improvements to the algorithm. Although ZOA demonstrates certain optimization advantages compared to most metaheuristic algorithms, it still suffers from limitations such as low convergence accuracy and susceptibility to local optima. For instance, Hazem M. El-Hageen et al. [35] proposed the Chaotic Zebra Optimization Algorithm (CZOA), which integrates chaotic mapping with ZOA to enhance the lifetime of wireless sensor networks (WSNs). The chaotic mapping increases the algorithm’s randomness and search range, thereby improving search diversity and reducing the risk of premature convergence to local optima. Similarly, Mahmoud M. Elymany et al. [36] introduced a hybrid optimization approach combining ZOA with an artificial gorilla troop optimizer (GTO) strategy. This synergy enhances the maximum power point tracking (MPPT) process in photovoltaic (PV) and wind power systems, ensuring maximum energy output under varying weather conditions. Additionally, Sarada Mohapatra et al. [37] combined ZOA with an Adaptive Network-based Fuzzy Inference System (ANFIS), using ZOA to optimize ANFIS parameters for more accurate MPPT control in hybrid microgrids.

Despite the improvements to the zebra algorithm’s optimization accuracy and speed achieved by the aforementioned research, the following issues persist because the improvement methods are incomplete: (1) The initial population, prior to the algorithm’s iterative updating, exhibits a significant dependency on the initial conditions, resulting in inadequate robustness. (2) The strategy for interactively updating population individuals is uniform and mechanical, cannot be targeted to the search state, and lacks a reasonable balance between global and local search. (3) The algorithm remains susceptible to local optimum traps, resulting in poor convergence accuracy. (4) The position update method for population individuals is not sufficiently refined. Based on the above issues, the primary contributions of this study can be outlined as follows:

In this study, we propose a new zebra optimization algorithm (OP-ZOA, Opposition-Based Learning Dynamic Elite-Pooling Zebra Optimization Algorithm) that integrates opposition-based learning and dynamic elite-pooling strategies. The proposed approach implements three key innovations: First, a good point set-elite opposition-based learning mechanism initializes the population to enhance diversity. Second, a real-time information synchronization mechanism updates searcher positions by coordinating optimal individuals and vigilantes, enabling more effective defense strategy adaptation and risk management while boosting global search capability and convergence speed. Third, a dynamic elite-pooling strategy introduces three distinct fitness factors (mean fitness, sub-fitness, and elite fitness) for optimal individual position updates through randomized selection, significantly improving global optimization accuracy and solution quality. The algorithm’s superiority is rigorously validated through comprehensive benchmark testing on standard functions. Comparative analysis of engineering applications demonstrates OP-ZOA’s superior feasibility and performance relative to existing optimization methods.

Traditional Zebra Optimization Algorithm

  A. Initialization process

The starting position of the zebra within the search area was chosen randomly. The ZOA population matrix is shown in Eq 1.

X = [x_{i,j}]_{N×m}, i = 1, 2, …, N; j = 1, 2, …, m  (1)

where X is the N × m population matrix, m is the problem dimension, N is the population size, X_i is a vector representing the ith candidate solution, and x_{i,j} is the jth variable of that solution.

ZOA members are updated using two natural behaviors of zebras. The first of these two behavioral patterns is foraging and the second is a predator defense mechanism. As a result, individuals in the ZOA community are renewed twice in each iteration.

The Zebra Optimization Algorithm (ZOA) [34] performs an optimization search by simulating the behavior of zebras to achieve position updating and solve the problem to be optimized, which is divided into two main phases: foraging strategy and defense strategy.

  B. Foraging strategy

The foraging strategy of ZOA is mainly modeling the role of the pioneer zebra [38,39], i.e., the best member of the population is regarded as the pioneer zebra, and the pioneer zebra leads the other members of the population towards its position in the search space so that all the members find their own positions.

Therefore, zebra’s position update during the foraging phase can be mathematically modeled using (2) and (3).

x_{i,j}^{new,P1} = x_{i,j} + r · (PZ_j − I · x_{i,j})  (2)

X_i = X_i^{new,P1} if F_i^{new,P1} < F_i, otherwise X_i  (3)

where X_i^{new,P1} is the new state of the ith zebra, x_{i,j}^{new,P1} is its jth dimension value, F_i^{new,P1} is the value of its objective function, PZ is the pioneer zebra (the best member), PZ_j is its jth dimension, r is a random number in the interval [0,1], and I = round(1 + rand), where rand is a random number in the interval [0,1]. Thus I ∈ {1, 2}, and if the parameter I = 2, the variation in population mobility is much larger.
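As an illustration, the foraging update of Eqs 2 and 3 can be sketched in NumPy; `foraging_update` is a hypothetical helper written for this sketch, not the authors' MATLAB implementation, with `pz` denoting the pioneer zebra's position:

```python
import numpy as np

def foraging_update(x_i, pz, rng=None):
    """One foraging step (Eq 2): move zebra i towards the pioneer zebra PZ."""
    rng = np.random.default_rng() if rng is None else rng
    r = rng.random(x_i.shape)          # r ~ U[0, 1], drawn per dimension
    I = np.round(1 + rng.random())     # I = round(1 + rand), so I is 1 or 2
    return x_i + r * (pz - I * x_i)    # candidate position x_i^{new,P1}
```

The greedy acceptance of Eq 3 then keeps this candidate only if it improves the zebra's objective function value.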

  C. Defense strategy

The defense strategy of ZOA mainly simulates zebras’ defenses against predator attacks [40,41], thus updating the positions of ZOA group members in the search space. The defense strategy consists of two scenarios: (1) escape and (2) launching an attack. The escape strategy mainly simulates zebras using zigzag escape routes and random side-turns, which can be mathematically modeled by mode S1 in Eq 4; the attack strategy mainly simulates that when a certain zebra is attacked, the other zebras move towards the attacked zebra and try to scare and confuse the attacker by establishing a defensive structure, which can be modeled by mode S2 in Eq 4. When updating a zebra’s position, the new position is accepted only if it improves the zebra’s objective function value. This updating condition can be modeled by Eq 5.

S1: x_{i,j}^{new,P2} = x_{i,j} + R · (2r − 1) · (1 − t/T) · x_{i,j}, if Ps ≤ 0.5
S2: x_{i,j}^{new,P2} = x_{i,j} + r · (AZ_j − I · x_{i,j}), if Ps > 0.5  (4)

X_i = X_i^{new,P2} if F_i^{new,P2} < F_i, otherwise X_i  (5)

where X_i^{new,P2} is the new state of the ith zebra based on the defense strategy, x_{i,j}^{new,P2} is its jth dimension value, F_i^{new,P2} is the value of its objective function, t is the current iteration counter, T is the maximum number of iterations, R is a constant equal to 0.01, Ps is a number randomly generated in the interval [0,1] that governs the probability of choosing one of the two strategies, AZ is the state of the attacked zebra, and AZ_j is its jth dimension value.
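The two defense modes of Eq 4 can likewise be sketched in NumPy; `defense_update` is a hypothetical helper for illustration only, with `az` denoting the attacked zebra's position:

```python
import numpy as np

def defense_update(x_i, az, t, T, ps, rng=None):
    """One defense step (Eq 4): escape (S1) or move towards the attacked zebra AZ (S2)."""
    rng = np.random.default_rng() if rng is None else rng
    r = rng.random(x_i.shape)                         # r ~ U[0, 1] per dimension
    if ps <= 0.5:                                     # S1: zigzag escape from a lion
        R = 0.01
        return x_i + R * (2 * r - 1) * (1 - t / T) * x_i
    I = np.round(1 + rng.random())                    # S2: converge on the attacked zebra
    return x_i + r * (az - I * x_i)
```

The acceptance rule of Eq 5 then keeps the new position only if it improves the objective function value.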

Each ZOA iteration updates the population members based on the foraging and defense strategies. The process of updating the algorithm population continues according to (2) to (5) until the algorithm is fully executed. During successive iterations, the best candidate solution is updated and saved.

Improved Zebra Optimization Algorithm

This section presents an enhanced Zebra Optimization Algorithm (OP-ZOA, Opposition-Based Learning Dynamic Elite-Pooling Zebra Optimization Algorithm) that combines opposition-based learning with a dynamic elite-pooling strategy. The proposed hybrid algorithm introduces three key modifications to the classical ZOA framework: (1) population initialization via good point set-elite opposition-based learning, (2) incorporation of a real-time information synchronization mechanism, and (3) implementation of a dynamic elite-pooling strategy. These enhancements collectively improve the algorithm’s global search capability and convergence speed while significantly strengthening its ability to escape local optima.

  A. Good point set-elite opposition-based learning for initializing populations
    a. Good point set initialization of populations

The good point set initialization population generates more uniformly distributed population nodes compared to traditional random initialization. This approach significantly reduces the impact of randomness, yielding more stable and reliable results across multiple algorithm runs while improving solution space coverage. By addressing the inherent limitations of pseudorandom number generation systems – particularly their tendency to produce non-uniform population distributions and clustering effects – this initialization strategy enhances the algorithm’s ability to achieve global optimal solutions.

Let G_s be the unit cube in s-dimensional space. If r ∈ G_s, take the set of n points:

P_n(k) = {({r_1 · k}, {r_2 · k}, …, {r_s · k}), k = 1, 2, …, n}  (6)

If its deviation φ(n) satisfies φ(n) = C(r, ε) · n^(−1+ε), where ε is an arbitrarily small positive number and C(r, ε) is a constant related only to r and ε, then P_n(k) is called a good point set and r is a good point.

The good point is taken as r_k = {2 cos(2πk/p)}, 1 ≤ k ≤ s, where p is the smallest prime number satisfying (p − 3)/2 ≥ s. After generating the good point set, it is mapped to the search space:

x_{i,j} = lb_j + {r_j · i} · (ub_j − lb_j)  (7)

where ub_j is the upper bound and lb_j is the lower bound of the jth dimension.

Fig 1 compares the population distributions between random initialization and good point set initialization. The good point set method demonstrates superior spatial uniformity, which enhances population diversity during the search process and effectively prevents premature convergence. This uniform distribution characteristic enables the algorithm to approach global optima more efficiently, thereby significantly improving optimization performance.
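Under this construction (smallest prime p with (p − 3)/2 ≥ s, generating vector r_k = {2 cos(2πk/p)}, and the mapping of Eq 7), a good point set initializer can be sketched as follows. This is an illustrative NumPy version with a hypothetical function name, not the authors' MATLAB code:

```python
import numpy as np

def good_point_set(n, s, lb, ub):
    """Generate n points in s dimensions via the good point set, mapped to [lb, ub]."""
    def is_prime(q):
        return q > 1 and all(q % d for d in range(2, int(q**0.5) + 1))
    p = 2 * s + 3                              # smallest prime with (p - 3) / 2 >= s
    while not is_prime(p):
        p += 1
    k = np.arange(1, s + 1)
    r = 2 * np.cos(2 * np.pi * k / p)          # generating vector r_k
    i = np.arange(1, n + 1).reshape(-1, 1)
    pts = np.mod(i * r, 1.0)                   # fractional parts {r_k * i} in [0, 1)
    return lb + pts * (ub - lb)                # map to the search space (Eq 7)
```

Because the fractional parts are equidistributed, the resulting population covers the search space far more uniformly than pseudorandom sampling, as Fig 1 illustrates.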

thumbnail
Fig 1. (a) Random initialized population distribution (b) Good point set initialized population distribution.

https://doi.org/10.1371/journal.pone.0329504.g001

  b. Elite opposition-based learning strategy

After population initialization, randomly generated solutions often cause the algorithm to revisit undesired regions of the search space, leading to inefficient search.

To address this limitation, we implement an Opposition-Based Learning (OBL) strategy to enhance initial population diversity. Originally proposed by Tizhoosh [42] in 2005, OBL operates as a perturbation mechanism that generates inverse solutions () for each candidate solution () during initialization. The method evaluates both original and opposition solutions, retaining the superior candidates to improve solution quality.

The opposition-based learning approach generates inverse solutions () for each candidate solution () during the search process. This strategy significantly increases the probability of locating global optima compared to purely random sampling, as the opposite solutions provide complementary exploration of the search space. By simultaneously evaluating both original and opposition solutions, the algorithm enhances its detection capability while achieving more comprehensive search space coverage.

Suppose x ∈ [a, b]. The opposite x̃ of x is computed as follows:

x̃ = a + b − x  (8)

The equation is generalized to the multidimensional case. For X = (x_1, x_2, …, x_m) with x_j ∈ [a_j, b_j], the opposite point X̃ = (x̃_1, x̃_2, …, x̃_m) is defined as follows:

x̃_j = a_j + b_j − x_j, j = 1, 2, …, m  (9)

The opposition-based learning mechanism is added to the traditional ZOA algorithm, where ub and lb are the upper and lower bounds of the problem and X_i is the search agent, and the equation is as follows:

X̃_i = lb + ub − X_i  (10)

The individual is then selected by a greedy mechanism that compares the fitness of X_i with that of X̃_i:

X_i = X̃_i if F(X̃_i) < F(X_i), otherwise X_i  (11)
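The opposition and greedy-selection steps above (Eqs 10 and 11) can be sketched together; `elite_opposition` is a hypothetical helper written for illustration under the assumption of a minimization objective:

```python
import numpy as np

def elite_opposition(pop, fitness_fn, lb, ub):
    """Evaluate each agent and its opposite (Eq 10); keep the fitter of each pair (Eq 11)."""
    opp = lb + ub - pop                                  # opposite population
    f_pop = np.apply_along_axis(fitness_fn, 1, pop)
    f_opp = np.apply_along_axis(fitness_fn, 1, opp)
    keep_opp = f_opp < f_pop                             # greedy selection per agent
    out = np.where(keep_opp[:, None], opp, pop)
    return out, np.minimum(f_pop, f_opp)
```

Evaluating both populations doubles the initialization cost but guarantees the retained set is never worse than either the original or the opposite population alone.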
  B. Real-time information synchronization mechanism

The traditional ZOA exhibits limited information exchange between individuals during iterations. When encountering potential threats or “invaders”, the population frequently fails to coordinate effective collective defense mechanisms. This communication deficiency significantly constrains the algorithm’s capacity for rapid environmental adaptation.

To address these limitations, this study introduces a real-time information synchronization mechanism to improve population responsiveness and environmental adaptation. The mechanism comprises two key components: the optimal individual (X_best) and the vigilante agent (X_worse). The optimal individual X_best guides collective foraging behavior while continuously updating and broadcasting its position through real-time synchronization, enabling efficient exploitation of resource-rich regions. Simultaneously, the vigilante X_worse, representing the population’s least adapted member, monitors potential threats and approaching intruders during iterations. Through synchronized position and status updates from both components, the population achieves more effective defense strategy adjustments and risk mitigation.

The population renewal method is shown in Eq 12:

(12)

where the bootstrap factor and the caution factor are each random numbers in the interval [0,1].

Through the real-time information synchronization mechanism, ZOA can respond more flexibly to complex and changing environmental conditions and enhance the overall survivability of the population, while achieving a more effective balance between exploration and exploitation during the optimization process.

  C. Dynamic elite-pooling strategy

The Traditional ZOA often suffers from premature convergence to local optima during optimization, significantly limiting its exploration capability for potentially superior solutions. To address this limitation, we introduce a dynamic elite-pooling strategy designed to enhance global search performance and maintain population diversity. The strategy establishes an elite pool (Elitep) comprising the highest-fitness individuals, effectively preventing local optima entrapment.

The implementation involves three key steps: First, fitness evaluation identifies the three best zebra individuals (PZ_1, PZ_2, PZ_3). Second, their positional arithmetic mean (X_mean) is computed as:

X_mean = (PZ_1 + PZ_2 + PZ_3) / 3  (13)

The elite pool incorporates these three individuals along with their average position (X_mean). During each iteration, rather than exclusively using the optimal individual’s position for updates, the algorithm randomly selects a reference point from Elitep to guide zebra movement. This dynamic elite-pooling approach achieves dual objectives: maintaining awareness of the current optimal solution while simultaneously exploring promising solution regions suggested by other high-fitness individuals. The strategy significantly improves the algorithm’s exploration capability and local optima avoidance.
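The elite-pool construction and random reference selection can be sketched as follows; `elite_pool_reference` is a hypothetical helper for illustration, assuming lower fitness is better:

```python
import numpy as np

def elite_pool_reference(pop, fitness, rng=None):
    """Build Elitep = {PZ1, PZ2, PZ3, X_mean} and draw one reference point at random."""
    rng = np.random.default_rng() if rng is None else rng
    idx = np.argsort(fitness)[:3]        # indices of the three best zebras
    elites = pop[idx]                    # PZ1, PZ2, PZ3
    x_mean = elites.mean(axis=0)         # arithmetic mean position (Eq 13)
    pool = np.vstack([elites, x_mean])   # elite pool Elitep
    return pool[rng.integers(len(pool))]
```

Drawing uniformly from the four pool members keeps the current best solution in play while occasionally steering the population towards the other elites or their centroid.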

The pseudo-code of the proposed OP-ZOA algorithm is shown in Algorithm 1, and its main process is illustrated as a flowchart of the various steps of OP-ZOA in Fig 2.

Algorithm 1: Pseudo-Code of OP-ZOA

Start OP-ZOA.

1. Input: The optimization problem information.

2. Set the number of iterations (T) and the population size of zebras (N).

3. Initialize the positions of the zebras and evaluate their fitness functions based on the opposition-based learning strategy, using Eqs 9 and 10.

4. For t = 1: T

5.    Dynamically select the pioneer zebra PZi from the elite pool built with Eq 13.

6.    For i = 1: N

7.     Phase 1: Foraging behavior

8.     Substitute PZi selected from the elite pool into Eq 2 to compute the new status of the ith zebra.

9.     Update the ith zebra using Eq 3.

10.     Phase 2: Defense strategies against predators

11.     If Ps < 0.5, where Ps = rand

12.       Strategy 1: against lion (exploitation phase)

13.       Real-time Message Synchronization: introduction of Xbest and Xworse constraints: calculate new status of the ith zebra using mode S1 in Eq 12.

14.     else

15.       Strategy 2: against other predator (exploration phase)

16.       Calculate new status of the ith zebra using mode S2 in Eq. 12.

17.     end if

18.     Update the ith zebra using Eq 5.

19.   end for i = 1: N

20.   Save best candidate solution so far.

21. end for t = 1: T

22. Output: The best solution obtained by OP-ZOA for the given optimization problem.

End OP-ZOA

  D. Analysis of the computational complexity

The complexity of an algorithm is a function of the input size that characterizes its running time or memory use; it is of two kinds, space complexity and time complexity. The OP-ZOA mainly consists of three parts: the initialization stage, the foraging behavior stage, and the defense strategy stage, so its computational complexity mainly depends on these three stages. The time complexity of the initialization phase is O(N × m). The iterative computation consists of two phases: foraging behavior with a time complexity of O(N × m × T) and defense strategy with identical complexity. Therefore, the algorithm’s total time complexity amounts to O(N × m × T). In terms of space complexity, the population matrix requires O(N × m), the fitness array O(N), and the fitness curve O(T); the total space complexity is therefore O(N × m + N + T). Here, N represents the population size, m represents the problem dimension, and T represents the maximum number of iterations.

Ablation experiments

  A. CEC2017 functions

The CEC2017 suite comprises 30 classical test functions, whose benchmark definitions are shown in Table 1. Functions F1-F3 are unimodal functions with a single global optimum, designed to evaluate an algorithm’s capability for single-solution optimization. The multimodal functions F4-F10 contain multiple local optima alongside one global optimum, testing an algorithm’s ability to avoid premature convergence. Hybrid functions F11-F20 combine unimodal and multimodal characteristics to assess performance on mixed-type optimization problems. Composition functions F21-F30 incorporate complex nonlinear and nonconvex features, challenging algorithms with highly complex optimization scenarios. Notably, functions F11-F30 consist of mixed and combined compositions of benchmark functions, providing comprehensive evaluation metrics for algorithm testing.

  B. Experimental design

The experimental environment employed in this study is the Windows 11 64-bit operating system, with an Intel® Core™ i9-14900HX CPU operating at a base frequency of 2.20 GHz. The test software utilized is MATLAB R2022a.

This study introduces three key enhancements to the traditional zebra optimization algorithm (ZOA): (1) good point set-elite opposition-based learning for population initialization, (2) a dynamic elite-pooling strategy, and (3) a real-time information synchronization mechanism. Each enhancement was systematically integrated into ZOA and evaluated independently through controlled experiments to assess its individual contribution to algorithm performance.

To ensure fair comparison, all five algorithms were configured with identical parameters: search intervals of [−100,100], a population size of 30, a 10-dimensional search space, and a maximum iteration count of 500. Performance evaluation employed comprehensive statistical measures including the minimum (Min), mean (Mean), standard deviation (Std), maximum (Max), the Wilcoxon rank-sum test, the Wilcoxon signed-rank test, and the Friedman test, with detailed results presented in Tables 2 and 3 and Figs 3 and 4. Specifically, the OZOA algorithm integrates improvement point 1, the PZOA algorithm integrates improvement point 2, and the SZOA algorithm integrates improvement point 3. The serial numbers 1-5 in the tables correspond to the ZOA, OZOA, PZOA, SZOA, and OP-ZOA algorithms, respectively.

Table 3. The p-values obtained by Wilcoxon signed-rank test of ablation experiments.

https://doi.org/10.1371/journal.pone.0329504.t003

  C. Analysis of convergence capability

To analyze the convergence behavior of the traditional Zebra Optimization Algorithm and solution quality at different improvement stages, Fig 3 presents convergence curves comparing algorithm performance. Fig 4 further displays the box plot distributions across all 30 test functions. The convergence analysis reveals that OP-ZOA exhibits minimal initial convergence delay when solving F10, F13, F18, F21, F28, and F30, while demonstrating accelerated convergence during mid-to-late iterations and ultimately achieving superior optimization results.

Fig 4 demonstrates that OP-ZOA consistently generates higher-quality solutions with reduced solution variance compared to competing algorithms, indicating stable performance through its centralized data distribution pattern. However, the algorithm shows susceptibility to local optima in test functions F10, F13, F18, and F25, as evidenced by data points exceeding the central distribution boundaries.

  D. Statistical tests

Statistical evaluation in Table 2 demonstrates that OP-ZOA achieves top-two Friedman rankings for 28 of the 30 test functions, with exceptions occurring only in F10 and F18. The algorithm exhibits susceptibility to local optima in functions F10, F13, and F18, which slightly reduces its robustness. Nevertheless, OP-ZOA maintains top-tier performance in standard deviation rankings across all other test functions, confirming its overall optimization stability. Table 3 presents Wilcoxon signed-rank test results at the α = 0.05 significance level, where most p-values fall below this threshold, supporting the alternative hypothesis H1 and statistically confirming OP-ZOA’s superiority.
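For reference, Friedman mean ranks of the kind reported in Table 2 are obtained by ranking the algorithms on each function and averaging the ranks per algorithm. A minimal NumPy sketch (a hypothetical helper that ignores ties, which the full Friedman test handles via mid-ranks):

```python
import numpy as np

def friedman_mean_ranks(results):
    """results: (n_functions, n_algorithms) array of mean errors; lower is better.
    Returns the mean rank of each algorithm across all functions (ties ignored)."""
    # double argsort converts each row of values into ranks 1..n_algorithms
    ranks = np.argsort(np.argsort(results, axis=1), axis=1) + 1
    return ranks.mean(axis=0)
```

The algorithm with the smallest mean rank is ranked first overall, which is how the per-function results are condensed into a single ordering.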

Collectively, the convergence patterns in Figs 3 and 4 and statistical evidence in Tables 2 and 3 validate that the enhanced OP-ZOA algorithm demonstrates significant robustness when evaluated against CEC2017 benchmark functions.

Simulation comparison and performance test

  A. Experimental design

To verify the performance of the improved OP-ZOA algorithm, it is compared with seven other algorithms: the Bloodsucking Leech Algorithm (BSLO) [43], Spider Wasp Optimization Algorithm (SWO) [44], Parrot Optimization Algorithm (PO) [45], Polar Lights Algorithm (PLO) [46], Red-tailed Hawk Optimization Algorithm (RTH) [47], Bitterling Fish Optimization Algorithm (BFO) [48], and Zebra Optimization Algorithm (ZOA) [34]. The CEC2017 test function consists of a total of 30 objective test functions with search intervals between [−100,100].

  B. Analysis of convergence capability

To comprehensively analyze algorithm convergence and solution quality, we employ convergence plots, fitness trajectories, and box plots. The OP-ZOA’s convergence behavior is evaluated through six metrics: search history, convergence curves, average fitness, population diversity, first-dimension trajectories, and exploration-exploitation ratios.

Fig 5 illustrates these analyses, where the first column’s search history reveals individuals initially exploring promising regions before converging toward global optima, demonstrating effective exploration-exploitation balance. Notably, for function F15, OP-ZOA exhibits strong exploratory behavior by extensively searching the upper-left solution space before locating optima in the right region, highlighting its ability to maintain population diversity and avoid local optima. The second column’s average fitness curves demonstrate OP-ZOA’s rapid convergence. While exhibiting strong exploitation for unimodal problems through inter-individual learning, the algorithm occasionally encounters local optima in multimodal and hybrid problems, yet achieves high precision through dynamic elite-pooling guidance. Columns three and four present first-dimension trajectories and population diversity, showing significant initial variation that promotes exploration of high-quality solutions. The fifth column’s exploration-exploitation analysis reveals OP-ZOA’s adaptive balance: rapid exploitation intensification for unimodal functions versus gradual exploration reduction in multimodal scenarios.

Fig 5. Convergence analysis plot for OP-ZOA (Search history, Average fitness, Trajectory of 1st dimension, Population diversity, Changes in the percentage of exploration and exploitation).

https://doi.org/10.1371/journal.pone.0329504.g005

For standardized comparison, all algorithms use identical parameters: population size 30, 10-dimensional search space, and 500 maximum iterations. Fig 6 displays convergence curves from 10 independent runs across all 30 functions, while Fig 7 presents their corresponding box plot distributions.

Fig 6. CEC2017 test function images and iteration curves of different algorithms.

https://doi.org/10.1371/journal.pone.0329504.g006

Fig 7. Box plots of different algorithms on CEC2017 test function.

https://doi.org/10.1371/journal.pone.0329504.g007

The convergence analysis in Fig 6 demonstrates that while ZOA shows marginally better convergence performance than SWO, PO, RTH, BFO, BSLO, and PLO algorithms, it still exhibits several limitations including curve flattening, search stagnation, reduced optimization accuracy, and susceptibility to local optima. In contrast, the enhanced OP-ZOA algorithm achieves notable improvements in both convergence speed and solution accuracy compared to the original ZOA, while effectively overcoming local optima through its opposition-based learning and dynamic elite-pooling mechanisms.

Comparative evaluation reveals that OP-ZOA maintains initial convergence rates similar to ZOA while achieving superior final accuracy, owing to its real-time information synchronization and dynamic elite-pooling strategies. Although OP-ZOA shows slightly slower convergence on test functions F8, F10, F18, F20, F21, F28, and F30 (potentially due to exploration-exploitation imbalances), it demonstrates consistently higher accuracy on the other test functions. These results confirm that the implemented opposition-based learning and dynamic elite-pooling strategies significantly enhance ZOA's global optimization capability while reducing its tendency toward premature convergence.

Fig 7 demonstrates that OP-ZOA maintains concentrated data distributions and stable performance across multiple runs. However, OP-ZOA may fall into local optima on test functions F5, F8, and F10, resulting in a wider spread of results.

C. Statistical tests

To rigorously evaluate OP-ZOA’s optimization accuracy, we compared eight algorithms (ZOA, SWO, PO, RTH, OP-ZOA, BFO, BSLO, and PLO) on 30 test functions (dim = 10), conducting 10 independent runs per algorithm. Performance was assessed using seven metrics: minimum value (Min), mean value (Mean), standard deviation (Std), maximum value (Max), Wilcoxon rank-sum test, Wilcoxon signed-rank test, and Friedman test, with detailed results presented in Table 4. The table entries 1–8 correspond to ZOA, SWO, PO, RTH, OP-ZOA, BFO, BSLO, and PLO respectively.

Table 4 demonstrates that while OP-ZOA may experience exploration-exploitation imbalances on functions F10, F18, F21, F24, and F28 (reflected in its suboptimal Friedman rankings for these cases), it achieves top-two performance rankings on the remaining 25 test functions, indicating statistically significant differences between algorithm groups. Across the 30 test functions, OP-ZOA shows susceptibility to local optima on F2, F10, F18, F20, F21, and F22, as evidenced by its lower standard-deviation rankings there. For all other functions, however, it maintains top-two standard-deviation rankings with consistently smaller values than ZOA, confirming that the implemented opposition-based learning and dynamic elite-pooling strategies successfully enhance both robustness and stability.

Table 5 gives the p-values obtained by the Wilcoxon signed-rank test between OP-ZOA and each comparison algorithm. The vast majority of p-values are less than 0.05, i.e., in most cases the null hypothesis is rejected in favor of the alternative hypothesis H1, which validates the superiority of OP-ZOA.
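The two statistical tests used here are standard and available in SciPy. The sketch below illustrates the procedure on hypothetical run data (the arrays are stand-ins, not the paper's actual results): a paired Wilcoxon signed-rank test between OP-ZOA and one competitor, and a Friedman test across several algorithms with the 10 runs as blocks.

```python
import numpy as np
from scipy.stats import wilcoxon, friedmanchisquare

rng = np.random.default_rng(42)
# hypothetical best-fitness values: 10 runs per algorithm on one function
op_zoa = rng.normal(100.0, 1.0, 10)  # stand-in for OP-ZOA results
zoa    = rng.normal(105.0, 2.0, 10)  # stand-in for ZOA results
swo    = rng.normal(107.0, 2.0, 10)  # stand-in for SWO results

# paired Wilcoxon signed-rank test: OP-ZOA vs. one competitor
stat, p = wilcoxon(op_zoa, zoa)
significant = p < 0.05  # reject H0 (no difference) at the 5% level

# Friedman test across all algorithms (blocks = runs)
f_stat, f_p = friedmanchisquare(op_zoa, zoa, swo)
```

In the paper's Table 5, each cell would correspond to one such `p` computed from the 10 paired run results of OP-ZOA and a competitor on one function.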

Table 5. The p-values obtained by Wilcoxon signed-rank test.

https://doi.org/10.1371/journal.pone.0329504.t005

In conclusion, the experimental results presented in Figs 5–7 and Tables 4 and 5 demonstrate that OP-ZOA achieves strong performance across the CEC2017 test functions, validating the effectiveness of the proposed improvement strategies.

D. Sensitivity analysis

The proposed OP-ZOA algorithm employs two key parameters: the escape coefficient (R) and the escape strategy probability threshold (Ps). A sensitivity analysis was conducted by systematically varying these parameters while maintaining all other parameters constant.

To examine the influence of the escape coefficient (R) and escape strategy probability threshold (Ps) on OP-ZOA’s performance, we performed experiments using the CEC2017 test functions with a dimensionality of 10, population size of 30, and maximum iteration count of 500. Each configuration was independently executed 10 times while maintaining fixed values for all other parameters.

The tested R values included 0.05, 0.1, 0.15, 0.2, 0.25, and 0.3, while Ps values ranged from 0.1 to 0.9 in 0.1 increments. Figs 8–10 present sensitivity analyses of R and Ps values across unimodal, hybrid, and composition benchmark functions. Each figure presents a dual-component visualization comprising a two-parameter interaction heatmap and a single-parameter sensitivity plot; within the heatmap, optimal parameter combinations are marked by red circular indicators. The results (Fig 11 and Table 6) reveal correlation coefficients below 0.1 between R and Ps values across all CEC2017 test functions, indicating that OP-ZOA's performance remains relatively unaffected by variations in these parameters for unimodal, multimodal, and hybrid function types.
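The grid-based procedure described above can be sketched as follows. The `evaluate(R, Ps)` callable is a placeholder for running OP-ZOA with a given parameter pair and returning the mean fitness over 10 runs; the flat toy surrogate mimics the low parameter sensitivity reported in Table 6.

```python
import numpy as np

# parameter grids used in the sensitivity analysis
R_values  = [0.05, 0.1, 0.15, 0.2, 0.25, 0.3]
Ps_values = [round(0.1 * k, 1) for k in range(1, 10)]  # 0.1 .. 0.9

def sensitivity_grid(evaluate):
    """Evaluate every (R, Ps) combination; returns a len(R) x len(Ps)
    matrix of mean fitness values plus the best-performing combination."""
    grid = np.array([[evaluate(R, Ps) for Ps in Ps_values] for R in R_values])
    i, j = np.unravel_index(np.argmin(grid), grid.shape)  # minimisation
    return grid, (R_values[i], Ps_values[j])

# toy surrogate: a nearly flat response, mimicking low parameter sensitivity
toy = lambda R, Ps: 1.0 + 0.01 * (R - 0.1) ** 2 + 0.01 * (Ps - 0.5) ** 2
grid, best = sensitivity_grid(toy)
```

The returned matrix corresponds directly to one heatmap panel in Figs 8–10, and the best pair to its red circular marker.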

Table 6. Sensitivity analysis of escape coefficient (R) and escape strategy probability threshold (Ps) (based on Mean value).

https://doi.org/10.1371/journal.pone.0329504.t006

Fig 8. Escape coefficient (R) and escape strategy probability threshold (Ps) unimodal functions analysis.

https://doi.org/10.1371/journal.pone.0329504.g008

Fig 9. Escape coefficient (R) and escape strategy probability threshold (Ps) hybrid functions analysis.

https://doi.org/10.1371/journal.pone.0329504.g009

Fig 10. Escape coefficient (R) and escape strategy probability threshold (Ps) composition functions analysis.

https://doi.org/10.1371/journal.pone.0329504.g010

Fig 11. CEC2017 Summary Analysis of Optimal Function Parameters.

https://doi.org/10.1371/journal.pone.0329504.g011

Optimization problem application

To evaluate OP-ZOA’s practical effectiveness in engineering applications, we tested it on the local optima avoidance challenge in artificial potential field (APF) path planning, comparing its performance with seven algorithms: SWO, PO, RTH, BFO, BSLO, PLO, and ZOA. The APF optimization task specifically addresses local optima escape to generate optimal collision-free paths with minimal length. We assessed algorithm performance using three critical metrics: fitness value, path length, and computation time. Comparative analysis of convergence behavior, total path lengths, and computational efficiency confirmed OP-ZOA’s superior performance.
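Two of the three evaluation metrics (path length and the APF-based fitness) can be sketched as below. The potential shown is the textbook attractive/repulsive APF form with hypothetical gain parameters `k_att`, `k_rep`, and influence radius `rho0`; the paper's exact fitness formulation may differ.

```python
import numpy as np

def path_length(waypoints):
    """Total Euclidean length of a path given as an (M, 2) array of waypoints."""
    pts = np.asarray(waypoints, dtype=float)
    return float(np.sum(np.linalg.norm(np.diff(pts, axis=0), axis=1)))

def apf_potential(q, goal, obstacles, k_att=1.0, k_rep=100.0, rho0=2.0):
    """Classic APF potential at point q: an attractive term toward the goal
    plus repulsive terms for obstacles within influence radius rho0."""
    q, goal = np.asarray(q, float), np.asarray(goal, float)
    U = 0.5 * k_att * np.sum((q - goal) ** 2)  # attractive term
    for obs in obstacles:
        rho = np.linalg.norm(q - np.asarray(obs, float))
        if 0 < rho <= rho0:                    # inside influence radius
            U += 0.5 * k_rep * (1.0 / rho - 1.0 / rho0) ** 2
    return U

# straight line from (1,1) to (7,9) -- the Environment 1 endpoints
straight = path_length([(1, 1), (7, 9)])  # 6-8-10 right triangle
```

A local optimum of the APF corresponds to a non-goal point where the attractive and repulsive gradients cancel, which is exactly the entrapment the optimizers are asked to escape.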

In assessing OP-ZOA's performance for mobile robot navigation, we identified two key scenarios that cause APF algorithm entrapment in local optima and consequent target unreachability: U-shaped obstacles and obstacles near the endpoint. Four specialized environmental configurations were designed to evaluate these local optima scenarios. The experimental outcomes include: 10 × 10 environment analyses (Figs 12 and 13), 15 × 15 environment evaluations (Figs 14 and 15), eight-algorithm convergence comparisons (Fig 16), post-optimization path length measurements (Table 7), and path lengths across the different environments (Fig 17).

Table 7. Path data for different algorithms in four different environments.

https://doi.org/10.1371/journal.pone.0329504.t007

Fig 12. Path planning chart for Environment 1 (10 × 10, destination containing obstacles).

Fig 12 demonstrates the path planning performance in a 10 × 10 environment from start point (1,1) to target (7,9). The traditional APF method becomes trapped in a local optimum at (6.97,9.05), exhibiting endpoint oscillations. In contrast, OP-ZOA and the seven comparison algorithms successfully escape this local optimum, achieving collision-free path completion. This comparative result highlights OP-ZOA’s effectiveness in overcoming the characteristic limitations of APF methods.

https://doi.org/10.1371/journal.pone.0329504.g012

Fig 13. Path planning chart for Environment 2 (10 × 10, destination containing obstacles).

Fig 13 illustrates path planning performance in a 10 × 10 environment from start point (5,0) to target (5,9). The traditional APF method becomes trapped in a local optimum at (4.93,9.0), resulting in endpoint oscillations. However, OP-ZOA and all seven comparison algorithms successfully overcome this local optimum, completing collision-free path planning. These results further demonstrate the superior capability of OP-ZOA in handling local optima compared to traditional APF approaches.

https://doi.org/10.1371/journal.pone.0329504.g013

Fig 14. Path planning chart for Environment 3 (15 × 15, U-shaped obstacles).

Fig 14 presents path planning results in a 15 × 15 environment from start point (14,13) to target (1,0). The APF method encounters a U-shaped obstacle at (8.79,6.98) and becomes trapped in a local optimum. While OP-ZOA and six other comparison algorithms successfully escape this local optimum and complete collision-free path planning, the SWO algorithm fails due to obstacle collisions. This comparative analysis demonstrates OP-ZOA’s robust performance in complex obstacle environments where traditional methods like APF and certain optimization algorithms (SWO) exhibit limitations.

https://doi.org/10.1371/journal.pone.0329504.g014

Fig 15. Path planning chart for Environment 4 (15 × 15, U-shaped obstacles).

Fig 15 demonstrates path planning performance in a 15 × 15 environment from start point (15,2) to target (3,4). The traditional APF method encounters a U-shaped obstacle at (8.82,6.9) and becomes trapped in a local optimum. While OP-ZOA and five other algorithms successfully escape this local optimum and achieve collision-free path completion, the SWO, BSLO, and PLO algorithms fail due to obstacle collisions. These findings further substantiate OP-ZOA’s enhanced robustness in complex navigation environments where multiple comparative algorithms demonstrate performance constraints.

https://doi.org/10.1371/journal.pone.0329504.g015

Fig 17 and Table 7 demonstrate that OP-ZOA consistently achieves the shortest path lengths after escaping local optima across all experimental environments. In Environment 1, path lengths were reduced by an average of 13.642m (22.316%). Similar improvements were observed in Environment 2 (3.449m, 8.951%), Environment 3 (12.983m, 33.297%, excluding SWO), and Environment 4 (0.133m, 0.6%, excluding SWO, BSLO, and PLO). Overall, OP-ZOA outperformed the other algorithms by an average of 16.291% in path length reduction.

Fig 17. Planning path distances for four different algorithms in four different environments.

https://doi.org/10.1371/journal.pone.0329504.g017

Fig 16 further reveals OP-ZOA’s superior convergence rate when handling artificial potential field (APF) challenges, particularly in complex scenarios where optimal solutions are difficult to obtain. These results confirm OP-ZOA’s effectiveness in addressing path planning challenges caused by local minima, while successfully balancing exploration and exploitation. The algorithm demonstrates strong stability and practical engineering value, showing promising potential for solving complex real-world optimization problems.

Conclusion

In this paper, an enhanced Zebra Optimization Algorithm (OP-ZOA) was proposed. The proposed algorithm integrates opposition-based learning and dynamic elite-pooling strategies. To overcome late-stage search stagnation caused by insufficient population diversity, a good point set with an elite opposition-based learning mechanism is introduced in the population initialization. Additionally, a real-time information synchronization mechanism is incorporated into the searcher position update to address the original ZOA's deficiency in balancing global and local search. The dynamic elite-pooling strategy introduces three different fitness factors, selected at random, to improve the algorithm's ability to locate the global optimum, which in turn improves the global optimization accuracy and speed of OP-ZOA.
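The elite opposition-based initialization summarized above can be sketched in its generic form: generate a random population, form its opposite with respect to the search bounds, and keep the better half of the union. (This is a minimal sketch of generic elite OBL; the paper additionally seeds the initial population with a good point set, which is omitted here.)

```python
import numpy as np

def obl_initialize(objective, pop_size, dim, lb, ub, rng):
    """Elite opposition-based initialization: sample a random population,
    form its opposite (lb + ub - x), evaluate both, and keep the elite
    (best) half of the combined set as the initial population."""
    pop = rng.uniform(lb, ub, size=(pop_size, dim))
    opp = lb + ub - pop                        # opposite population
    both = np.vstack([pop, opp])
    fitness = np.apply_along_axis(objective, 1, both)
    best_idx = np.argsort(fitness)[:pop_size]  # elite: best half of the union
    return both[best_idx], fitness[best_idx]

sphere = lambda x: float(np.sum(x**2))
pop, fit = obl_initialize(sphere, pop_size=30, dim=10, lb=-100.0, ub=100.0,
                          rng=np.random.default_rng(1))
```

Because each kept individual is the better of a point and its opposite, the initial population starts closer (in fitness) to promising regions than a purely random one, which is the diversity benefit the paper attributes to this mechanism.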

In this paper, the optimization performance of OP-ZOA is verified using the CEC2017 test functions. Seven strong metaheuristic algorithms proposed in recent years, ZOA, SWO, BSLO, RTH, BFO, PO, and PLO, are compared with OP-ZOA on the CEC2017 benchmark functions. The Wilcoxon signed-rank test and the Friedman test are then performed on the run results of OP-ZOA and its competitors. The experimental results show that, across different types of test functions, OP-ZOA, which combines the opposition-based learning and dynamic elite-pooling strategies, exhibits better convergence speed and accuracy, and is competitive and stable. In addition, a convergence analysis was performed. OP-ZOA was then used to solve the local-optimum entrapment problem of the artificial potential field (APF) method, validating its ability to solve engineering problems. The core results of this study are summarized as follows:

  1) OP-ZOA exhibits stronger global optimization and local exploitation performance than its competitors on the CEC2017 test functions.
  2) OP-ZOA converges faster than its competitors on F2 ~ F4, F6, F7, F9, F11, F13, F14, F19, F20, F22, F23, F27, and F29 of the CEC2017 test set.
  3) The Wilcoxon signed-rank test yields p-values mostly less than 0.05, indicating that OP-ZOA's results are significantly different from those of its competitors.
  4) In the Friedman test, the algorithm did not rank first on F10, F12, F18, F21, F22, F24, F28, and F30, and ranked first on the remaining tested functions.
  5) OP-ZOA shows strong scalability on most of the functions in the CEC2017 benchmark set.
  6) OP-ZOA shows clear advantages in solving the local-optimum entrapment problem of the APF method. It escapes the local optimum in all four experimental environments, and its planned paths are the shortest in every case, shorter than those of the other optimization algorithms by an average of 7.55175m (16.291%). Specifically, in Environment 1 the paths are shortened by 13.642m on average (22.316%); in Environment 2 by 3.449m (8.951%); in Environment 3 (excluding the SWO algorithm) by 12.983m (33.297%); and in Environment 4 (excluding the SWO, BSLO, and PLO algorithms) by 0.133m (0.6%).

The results of this paper indicate that OP-ZOA has broad research prospects. However, the added strategies inevitably increase running time. Valuable directions for future research based on this study therefore include:

  1) Developing a multi-objective version of OP-ZOA.
  2) Continuing to improve the selection of guiding individuals (e.g., fitness-distance balance), the balance between exploration and exploitation (e.g., parameter tuning or new search operators), and the updating mechanism (e.g., the natural survivor method), so as to maintain optimization performance while reducing runtime overhead and improving computational efficiency.
  3) Fusing OP-ZOA with other metaheuristic algorithms to develop hybrid algorithms with better performance.
  4) Integrating OP-ZOA into ROS for practical simulation verification on mobile robots.
  5) Applying OP-ZOA to other practical optimization problems.

Acknowledgments

The authors are grateful for the support of the Tianjin Science and Technology Plan Project, grant number 24YDLQGX00090; and Tianjin “Jie bang Gua shuai” Science and Technology Program, grant number 2023JB02.
