Figures
Abstract
Swarm intelligence optimization algorithms represent a significant branch of nature-inspired computational methods, designed to solve complex optimization problems by simulating the collective behavior of biological systems. The whale optimization algorithm (WOA) is a recently developed meta-heuristic algorithm based mainly on the predation behavior of humpback whales in the ocean. This study proposes an enhanced version of the WOA, named the Outpost-based Multi-population Whale Optimization Algorithm (OMWOA), which integrates two key mechanisms: the outpost mechanism and a multi-population enhanced mechanism. These modifications aim to improve the algorithm’s performance in terms of solution accuracy and convergence rate. The effectiveness of OMWOA is thoroughly evaluated by benchmarking it against state-of-the-art evolutionary algorithms from the IEEE CEC 2017 and IEEE CEC 2022 competitions. Additionally, this study provides a detailed analysis of the influence of the outpost and multi-population mechanisms on OMWOA’s performance, as well as its scalability in problems of varying dimensionalities. To validate its applicability in real-world problems, the proposed algorithm is combined with Kernel Extreme Learning Machine (KELM) for solving medical disease diagnosis tasks. The experimental results demonstrate the superior performance of OMWOA in terms of diagnostic accuracy across five medical datasets, highlighting its potential for real-world applications.
Citation: Tang K, Zhang L (2025) An Enhanced Whale Optimization Algorithm with outpost and multi-population mechanisms for high-dimensional optimization and medical diagnosis. PLoS One 20(6): e0325272. https://doi.org/10.1371/journal.pone.0325272
Editor: Akash Saxena, Central University of Haryana School of Engineering and Technology, INDIA
Received: January 18, 2025; Accepted: May 8, 2025; Published: June 3, 2025
Copyright: © 2025 Tang, Zhang. This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
Data Availability: All XXX files are available from the XXX database. https://gitcode.com/Open-source-documentation-tutorial/a16ba.
Funding: The author(s) received no specific funding for this work.
Competing interests: The authors have declared that no competing interests exist.
1. Introduction
Swarm intelligence optimization algorithms represent a significant branch of nature-inspired computational methods, designed to solve complex optimization problems by simulating the collective behavior of biological systems. These algorithms draw inspiration from the self-organizing and cooperative behaviors observed in nature, such as the foraging of ant colonies, the synchronized movement of bird flocks, and the coordinated hunting of wolves. By emulating these behaviors, swarm intelligence algorithms rely on the interaction of simple agents operating under decentralized control. Each agent follows basic rules, yet their collective interactions lead to emergent global intelligence capable of efficiently exploring and exploiting the search space. Key characteristics of swarm intelligence algorithms include adaptability to changing environments, scalability for handling high-dimensional problems, and the ability to avoid local optima by maintaining a balance between exploration and exploitation.
Swarm intelligence and evolutionary optimization techniques are primarily inspired by natural processes such as evolution, hunting, foraging, and survival strategies observed in groups or individuals in nature [1–3]. In recent years, these optimization methods have become crucial in addressing numerous large-scale and real-world challenges [4–6]. Compared to gradient-based methods, swarm intelligence algorithms have demonstrated superior efficiency in solving complex optimization tasks [7]. Notable examples include traditional approaches such as particle swarm optimization (PSO) [8,9] and ant colony optimization (ACO) [10,11]. Recently developed algorithms include the artificial bee colony algorithm (ABC) [12], multi-verse optimizer (MVO) [13], fruit fly optimization algorithm (FOA) [14,15], grasshopper optimization algorithm (GOA) [16], bat algorithm (BA) [17,18], chicken swarm optimization (CSO) [19], and artificial fish swarm algorithm (AFSA) [20]. The whale optimization algorithm (WOA), introduced by Mirjalili in 2016 [21], emulates the hunting behavior of whales during foraging to efficiently explore and exploit potential optimal or near-optimal solutions.
In recent years, the Whale Optimization Algorithm (WOA) has been extensively applied across various domains, including feature selection [22,23], retinal vascular recognition [24], neural network optimization [25,26], image segmentation [27], image retrieval [28], key recognition [29], wind speed prediction [30], and sentiment analysis [31]. WOA is characterized by its simplicity and robust global search capabilities, which allow it to outperform algorithms like PSO and SCA in terms of solution quality. However, WOA faces challenges when dealing with complex and high-dimensional problems, particularly in terms of a slow convergence rate and suboptimal solution quality during the latter stages of iteration. To address these limitations, researchers have proposed numerous enhancements. Zhou et al. [32] introduced a Lévy flight-based WOA (LWOA) for engineering optimization, leveraging Lévy flight trajectories to enhance population diversity and mitigate premature convergence, thus improving the ability to escape local optima. Mafarja et al. [22] developed a binary version of WOA that integrates evolutionary operators such as selection, crossover, and mutation. Yousri et al. [33] explored the use of ten different chaotic maps to optimize WOA’s parameters, resulting in improved performance. Moreover, variants like CWOA and adaptations of standard WOA have been utilized to estimate chaotic behavior parameters in PMSM under noise-free and noisy conditions, achieving lower error rates, faster convergence, and reduced execution times. Bhowmik et al. [34] proposed balancing local and global searches by employing non-linear and random variations of parameter “a” and an inertial weight strategy for updating parameter “c.” Sun et al. [25] introduced chaos into WOA’s initialization process, using chaotic dynamics to enhance search diversity and reduce self-centered tendencies in the search process. Yaqoob et al. 
[35] proposed a novel method called the Harris Hawks Optimization and Cuckoo Search Algorithm (HHOCSA), which is applicable to commonly used machine learning classifiers. Alnowibet et al. [36] proposed two major improvements in the WOA: first, a reverse learning-based method was employed during the initialization phase, and second, a Cauchy mutation operator was introduced during the position update phase. The proposed variant is named the Enhanced Whale Optimization Algorithm (AWOA).
In this article, in order to improve the performance of WOA, we incorporate an outpost mechanism and a multi-population enhanced mechanism into the algorithm. These two mechanisms can significantly improve WOA’s search efficiency and solution quality, especially on complex and high-dimensional optimization problems. The outpost mechanism maintains a set of “outpost” individuals that are strategically distributed across the search space to act as exploratory agents. These outposts remain in diverse areas of the solution space, facilitating a more extensive and diversified exploration process. By doing so, they reduce the likelihood of the algorithm getting trapped in local optima, thereby improving the global search capabilities of the WOA. The outposts serve as anchor points, guiding the search towards promising regions while maintaining a high level of exploration.
On the other hand, the multi-population enhanced mechanism enhances the balance between exploration and exploitation by employing multiple sub-populations that operate concurrently within the search space. Each sub-population independently explores different regions, which leads to a broader and more thorough search. This mechanism enables the algorithm to avoid the common pitfall of converging to a single, potentially suboptimal solution. The diversity introduced by multiple populations allows the algorithm to maintain a higher degree of flexibility, ensuring that different regions of the search space are explored in parallel, thereby increasing the likelihood of finding the global optimum.
When combined, these two mechanisms provide a powerful strategy for enhancing the WOA’s performance. The outpost mechanism ensures the algorithm explores various regions of the solution space without premature convergence, while the multi-population mechanism enables the algorithm to explore multiple areas simultaneously, reducing the risk of stagnation. Together, these mechanisms promote a more balanced and effective exploration-exploitation trade-off, leading to improved convergence rates, higher quality solutions, and greater robustness in solving complex optimization problems. These enhancements allow the WOA to outperform traditional single-population algorithms, making it a more reliable and efficient tool for tackling large-scale, high-dimensional optimization tasks.
In summary, the main contributions of this study are outlined as follows:
- 1. This study introduces an enhanced version of WOA, referred to as OMWOA, which incorporates the outpost mechanism and a multi-population enhanced mechanism.
- 2. The performance of the proposed OMWOA was assessed by comparing it with leading evolutionary algorithms from the IEEE CEC 2017 and IEEE CEC 2022 competitions. This study further provides a comprehensive analysis of the impact of the two enhancement mechanisms on OMWOA’s performance, as well as an evaluation of its scalability across different dimensionalities.
- 3. To test the optimization performance of OMWOA on real-world problems, we combined OMWOA with KELM to solve medical disease diagnosis problems, achieving strong diagnostic results on five medical datasets.
This paper is organized as follows. Section 2 briefly describes WOA. The proposed OMWOA is described in detail in Section 3. Section 4 introduces and analyzes OMWOA on benchmark function tests. Section 5 analyzes the experimental results on medical problems. Section 6 summarizes the paper and looks forward to future work.
2. An overview of WOA
WOA, proposed by Mirjalili in 2016 [21], is a meta-heuristic algorithm inspired by the hunting behavior of humpback whales. Humpback whales employ a distinctive hunting strategy: they dive underwater, spiral upwards from depths of about 12 meters, and emit bubbles of varying sizes. These bubbles rise to the water’s surface in unison, forming a spiral bubble net that ensnares and directs prey towards the center. With its mouth nearly vertical amidst the bubble circle, the whale engulfs the trapped prey. WOA seeks to emulate this spiral bubble-net tactic, accomplishing foraging through three mechanisms: encircling prey with a shrinking enclosure, spiral predation, and random search for prey. The mathematical model of WOA is expounded in the ensuing sections.
2.1 Encircling prey
In the wild, whales possess the ability to pinpoint the whereabouts of their prey and encircle them for predation. WOA operates under the assumption that the optimal position within the existing population represents the prey; all remaining whale individuals converge around this prey, with their locations updated according to Eq. (1) and Eq. (2):

D = |C · X*(t) − X(t)|        (1)

X(t+1) = X*(t) − A · D        (2)

In these equations, t denotes the iteration count, A and C denote coefficient vectors, and X*(t) represents the best position within the current population. A and C are derived from Eq. (3) and Eq. (4):

A = 2a · r1 − a        (3)

C = 2 · r2        (4)

Among these parameters, r1 and r2 are random numbers within the interval (0, 1). The value of a linearly decreases from 2 to 0 over the iterations, i.e., a = 2(1 − t/T), where t represents the current iteration number and T is the maximum iteration number.
2.2 Spiral bubble-net feeding maneuver
During hunting, humpback whales ascend in spirals towards their prey. In WOA, whales utilize Eq. (5) to adjust their position as they swim towards the optimal individual:

X(t+1) = D′ · e^(bl) · cos(2πl) + X*(t),  where  D′ = |X*(t) − X(t)|        (5)

In this context, D′ signifies the distance between the individual X before the update and the optimal position X*(t); b serves as a constant that defines the spiral shape, while l represents a random number within the range [−1, 1]. In the mathematical model, given that the spiral predation of whales involves both movement around the outer ring and contraction of the enclosure, each whale opts with probability 0.5 for the contraction mechanism, as exemplified by Eq. (2), to update its location; otherwise it follows the spiral of Eq. (5).
2.3 Searching for prey
The whale’s search and predatory actions occur randomly based on the positions of other individuals. Within the WOA framework, the whale adjusts its position using Eq. (6) and Eq. (7):

D = |C · X_rand − X(t)|        (6)

X(t+1) = X_rand − A · D        (7)

In this context, X_rand represents a randomly chosen whale position vector.
The pseudocode of WOA is shown in Algorithm 1.
Algorithm 1. Pseudocode of WOA
Begin by initializing a set of agents Xi (i = 1,2,3...., n) with random distribution.
Evaluate the fitness of each search agent.
Identify the optimal search agent X* among them.
While (FEs < MaxFEs)
for each search agent
Update a, A, C, l, and p
if(p < 0.5)
if(|A| < 1)
Update the position of the search agents.
else if(|A| > 1)
Randomly select a search agent Xrand.
Update the position of the search agent.
end if
else if(p > 0.5)
Update the position using the spiral Eq. (5).
end if
end for
Verify whether any search agent exceeds the search space boundaries and rectify it accordingly.
Evaluate the fitness of the solutions obtained.
Adjust X* if an improved solution is detected by the method.
Update FEs
end while
return X*
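As a concrete illustration, the pseudocode above can be sketched in Python. This is a minimal sketch, not the authors' implementation: the sphere objective, the spiral constant b = 1, the seed, and counting iterations rather than function evaluations are all assumptions.

```python
import numpy as np

def woa(objective, dim, lb, ub, n_agents=30, max_iter=500, seed=0):
    """Minimal WOA sketch for minimization (b = 1 assumed for the spiral)."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(lb, ub, (n_agents, dim))          # random initial whale positions
    fitness = np.array([objective(x) for x in X])
    best = X[int(fitness.argmin())].copy()
    best_fit = float(fitness.min())

    for t in range(max_iter):
        a = 2.0 - 2.0 * t / max_iter                  # a decreases linearly from 2 to 0
        for i in range(n_agents):
            r1, r2 = rng.random(), rng.random()
            A, C = 2.0 * a * r1 - a, 2.0 * r2         # Eq. (3) and Eq. (4)
            p, l = rng.random(), rng.uniform(-1.0, 1.0)
            if p < 0.5:
                if abs(A) < 1:                        # exploit: shrink toward the best whale
                    D = np.abs(C * best - X[i])       # Eq. (1)
                    X[i] = best - A * D               # Eq. (2)
                else:                                 # explore: move relative to a random whale
                    X_rand = X[rng.integers(n_agents)]
                    D = np.abs(C * X_rand - X[i])     # Eq. (6)
                    X[i] = X_rand - A * D             # Eq. (7)
            else:                                     # spiral bubble-net update, Eq. (5)
                D = np.abs(best - X[i])
                X[i] = D * np.exp(l) * np.cos(2 * np.pi * l) + best
            X[i] = np.clip(X[i], lb, ub)              # keep agents inside the search space
            f = objective(X[i])
            if f < best_fit:
                best, best_fit = X[i].copy(), float(f)
    return best, best_fit

# Usage: minimize the sphere function on [-10, 10]^5
best, best_fit = woa(lambda x: float(np.sum(x ** 2)), dim=5, lb=-10.0, ub=10.0)
```

The sketch keeps the three behaviors of the pseudocode (shrinking encirclement, random search, spiral) and the same p < 0.5 / |A| < 1 branching.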
3. The proposed OMWOA
In this section, we provide a detailed description of OMWOA. Unlike the original algorithm, OMWOA incorporates two additional strategies. Firstly, it integrates the outpost mechanism, enhancing the basic algorithm. Additionally, a multi-population enhanced mechanism is introduced to improve the convergence speed and enhance the solution quality of the algorithm.
3.1 Outpost mechanism
Initially, each individual’s fitness value is compared with the value obtained in the previous iteration. If the fitness value from the current iteration is better, the individual moves to the new position; otherwise, it remains at its previous position, as formalized in Eq. (8):

X_i(t+1) = X_i^new if f(X_i^new) < f(X_i(t)), and X_i(t+1) = X_i(t) otherwise        (8)

The updated population positions then replace the current population positions.
In the second phase, the individual searches in a random direction and at a random distance around the optimal value. The position distribution for this random search can be approximated by a Gaussian distribution, whose probability density function is described in Eq. (9):

f(x) = (1 / (σ√(2π))) · exp(−(x − μ)² / (2σ²))        (9)

where σ is the standard deviation among the individuals and μ represents the mean value of the entire population. The normal distribution characteristics allow us to derive the individuals’ distribution density; therefore, this mechanism applies a normal distribution to all problems. The variables generated in this way are employed as follows: a Gaussian gradient vector is drawn from the normal distribution and combined with the position by entry-wise (Hadamard) multiplication, as in Eq. (10). In the third step, Eq. (11) illustrates an individual’s inclination when updating.
In Eq. (11), if the fitness value discovered during the current iteration is better than the current best fitness value, the formula employs the plus sign to update the current best position and the subgroup’s optimal fitness value; otherwise, the formula uses the minus sign.
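The three steps above can be sketched as follows. This is an illustrative interpretation of Eqs. (8)–(11), not the authors' code; the perturbation scale (the population's per-dimension standard deviation), the function names, and the toy setup are assumptions.

```python
import numpy as np

def outpost_step(X, X_prev, best, objective, rng):
    """Illustrative outpost update: greedy selection, then a Gaussian
    search around the current best individual (interpreting Eqs. (8)-(11))."""
    # Step 1 (Eq. 8): greedy selection -- revert individuals whose fitness worsened
    fit_new = np.array([objective(x) for x in X])
    fit_old = np.array([objective(x) for x in X_prev])
    worse = fit_new > fit_old
    X[worse] = X_prev[worse]

    # Step 2 (Eqs. 9-10): Gaussian exploration around the best individual; the
    # population's per-dimension standard deviation sets the scale (an assumption)
    sigma = X.std(axis=0) + 1e-12
    grad = rng.normal(0.0, 1.0, size=best.shape)   # Gaussian gradient vector
    step = sigma * grad                            # entry-wise product

    # Step 3 (Eq. 11): try both signs, keeping whichever improves the best position
    for candidate in (best + step, best - step):
        if objective(candidate) < objective(best):
            best = candidate
    return X, best

# Usage on a toy sphere problem (hypothetical setup)
rng = np.random.default_rng(1)
X = rng.uniform(-5.0, 5.0, (10, 3))
X_prev = rng.uniform(-5.0, 5.0, (10, 3))
sphere = lambda x: float(np.sum(x ** 2))
best0 = min(X, key=sphere).copy()
X_new, best_new = outpost_step(X.copy(), X_prev.copy(), best0, sphere, rng)
```

In OMWOA this step would run once per iteration, after the standard WOA position updates.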
3.2 The multi-population enhanced mechanism
In the original algorithm, once a particular individual discovers the optimal solution, the other individuals align toward the optimal direction, causing a decline in diversity. To enhance the global optimization ability, especially on multi-modal problems, a multi-population mechanism is introduced into WOA. This mechanism is governed by two parameters, described below.
In this scenario, FEs indicates the current evaluation count, MaxFEs signifies the maximum number of evaluations permitted, and the remaining two quantities correspond to the lower and upper bounds of the problem, respectively.
The population is split into M subgroups, each independently searching. Meanwhile, some individuals in each subgroup have a probability of engaging in a global search, and the search radius reduces with increasing iterations. The position of the individual is explicitly depicted in Eq. (14).
where the resulting position denotes the individual that has undergone mutation.
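A minimal sketch of this mechanism, under stated assumptions: the subgroup count m, the global-search probability p_global, and the linear radius schedule are illustrative choices, since Eqs. (12)–(14) are not reproduced here.

```python
import numpy as np

def multi_population_step(X, fes, max_fes, lb, ub, rng, m=3, p_global=0.2):
    """Illustrative multi-population enhancement: the population is split into m
    subgroups, and some individuals in each subgroup perform a global search
    whose radius shrinks as the evaluation budget is consumed."""
    n, dim = X.shape
    radius = (ub - lb) * (1.0 - fes / max_fes)          # shrinking search radius
    for group in np.array_split(np.arange(n), m):       # m independent subgroups
        for i in group:
            if rng.random() < p_global:                 # probabilistic global search
                X[i] = X[i] + rng.uniform(-radius, radius, dim)  # mutated individual
    return np.clip(X, lb, ub)                           # respect the problem bounds

# Usage with hypothetical bounds [-5, 5] and a budget of 10,000 evaluations
rng = np.random.default_rng(2)
X = rng.uniform(-5.0, 5.0, (12, 4))
X_out = multi_population_step(X.copy(), fes=1000, max_fes=10000,
                              lb=-5.0, ub=5.0, rng=rng)
```

The shrinking radius mirrors the text's statement that the search radius reduces with increasing iterations, shifting the subgroups from exploration towards exploitation.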
The pseudocode of OMWOA is shown in Algorithm 2. OMWOA is a metaheuristic approach designed for global optimization tasks. Initially, a set of search agents Xi is initialized with random distributions. Each agent’s fitness is evaluated, and the optimal agent X* is identified. The algorithm iterates until the maximum number of function evaluations (MaxFEs) is reached. During each iteration, the parameters a, A, C, l, and p are updated, influencing agent movements according to predefined conditions. Depending on the value of p, agents either update their positions using the spiral equation or interact with randomly selected agents. After the position updates, the outpost and multi-population mechanisms foster diversity and improve search effectiveness. Boundary checks ensure agents remain within the search space. The process continually evaluates solution fitness, updating X* whenever better solutions are found. This iterative approach allows OMWOA to converge efficiently towards optimal solutions across a variety of optimization challenges.
Algorithm 2. Pseudocode of OMWOA
Begin by initializing a set of agents Xi (i = 1,2,3...., n) with random distribution.
Evaluate the fitness of each search agent.
Identify the optimal search agent X* among them.
While (FEs < MaxFEs)
for each search agent
Update a, A, C, l, and p
if(p < 0.5)
if(|A| < 1)
Update the position of the search agents.
else if(|A| > 1)
Randomly select a search agent Xrand.
Update the position of the search agent.
end if
else if(p > 0.5)
Update the position using the spiral Eq. (5).
end if
end for
Update X using the outpost mechanism;
Update X using the multi-population enhanced mechanism;
Verify whether any search agent exceeds the search space boundaries and rectify it accordingly.
Evaluate the fitness of the solutions obtained.
Adjust X* if an improved solution is detected by the method.
Update FEs
end while
return X*
4. Experimental results and discussions
Within this section, we commence with a presentation of comparative results and proceed to discuss our observations in detail. We begin by analyzing algorithm parameters, followed by conducting simulation experiments on benchmark functions to comprehensively validate the effectiveness of the OMWOA. Lastly, we explore practical applications of the algorithm.
4.1 Benchmark functions
4.1.1 IEEE CEC 2017 benchmark functions.
Table 1 shows the details of the IEEE CEC 2017 benchmark functions. For unbiased outcomes, all algorithms were tested under consistent conditions. Unless otherwise specified, the population size was set to 30, a commonly used setting in evolutionary computation that ensures adequate exploration of the search space. The maximum number of function evaluations was set to 300,000, corresponding to a problem dimension of 30 at 10,000 evaluations per dimension; this setting is widely adopted in evolutionary computation to facilitate fair performance comparisons between algorithms. In the scalability experiments, we further validated the algorithm in 30, 50, and 100 dimensions to demonstrate its performance at varying problem scales. Each algorithm was run independently 30 times per benchmark function; 30 independent runs reduce the influence of randomness and enhance the credibility of the experimental results. The Friedman test, a non-parametric statistical method for comparative analysis, was employed to evaluate and rank algorithm performance across the benchmark functions. The average ranking value (ARV) derived from the Friedman test enables further statistical comparison: algorithms with lower ARV perform better, allowing a detailed and intuitive analysis of performance differences. With the significance level set at 0.05, our experiments demonstrated that the proposed OMWOA consistently outperformed the compared algorithms across the 30 independent runs.
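For concreteness, the average ranking value (ARV) and the Friedman statistic can be computed as follows. The result matrix below is synthetic (rows as benchmark functions, columns as algorithms) and is only meant to illustrate the ranking procedure, not to reproduce the paper's tables.

```python
import numpy as np

# Synthetic mean errors: rows = benchmark functions, columns = algorithms
results = np.array([
    [1.2, 3.4, 2.8],   # F1
    [0.9, 2.1, 1.5],   # F2
    [2.0, 2.5, 3.1],   # F3
    [1.1, 1.8, 1.4],   # F4
])
n_funcs, n_algs = results.shape

# Rank the algorithms on each function (1 = best); the double argsort works
# here because this toy data contains no ties
ranks = results.argsort(axis=1).argsort(axis=1) + 1
arv = ranks.mean(axis=0)               # average ranking value: lower = better

# Friedman chi-square statistic over n_funcs blocks and n_algs treatments
chi2 = 12 * n_funcs / (n_algs * (n_algs + 1)) * np.sum(arv ** 2) \
       - 3 * n_funcs * (n_algs + 1)
print("ARV:", arv)                     # the first algorithm ranks best overall
print("Friedman chi-square:", chi2)
```

The statistic is then compared against the chi-square distribution with n_algs − 1 degrees of freedom at the chosen significance level (0.05 here).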
4.1.2 IEEE CEC 2022 benchmark functions.
Table 2 shows the details of the IEEE CEC 2022 benchmark functions.
4.2 Ablation analysis
This section clarifies the effect of the two enhancement mechanisms on OMWOA through ablation experiments. By systematically removing one mechanism at a time and observing the impact, ablation experiments confirm the contribution and significance of each factor, affirm the reliability of the results, and rule out confounding explanations, thereby enhancing the credibility and reproducibility of the research. Table 3 and Table 4 present the experimental results, where OWOA denotes WOA improved solely by the outpost mechanism and MWOA denotes WOA improved solely by the multi-population enhanced mechanism. Over 30 independent runs on the CEC 2017 benchmark functions, the data indicate that OMWOA, enhanced by both mechanisms, holds a significant advantage over WOA improved by either mechanism alone. Specifically, OMWOA outperforms OWOA on 8 functions and MWOA on 18 functions, demonstrating that the combined use of both mechanisms enhances WOA beyond what either mechanism achieves alone.
Based on the ranking outcomes, OMWOA achieves faster convergence and superior precision compared with the single-mechanism variants on unimodal, hybrid, and composition functions, and demonstrates higher precision on multimodal benchmark functions. These results validate the combination of both mechanisms as the most effective way to improve WOA on these test functions; hence, OMWOA is chosen as the preferred enhancement following this analysis.
4.3 Scalability analysis
This section evaluates OMWOA’s scalability by testing it under different dimensions. Scalability tests are crucial for evaluating the performance of evolutionary computing algorithms in handling large-scale issues. By varying the problem size across various dimensions, the algorithm’s capacity to tackle different sizes and complexities can be assessed. These experiments measure the algorithm’s performance in terms of resource efficiency, time consumption, and solution quality, thereby determining its applicability and limitations. Scalability experiments are critical for providing reliable solutions to extensive problems in practical scenarios, thus promoting the wider application of evolutionary computing. The study considers three dimensions: 30, 50, and 100, which are standard benchmarks in evolutionary computing, demonstrating OMWOA’s optimization capabilities. Setting the problem dimension to 30 is a common practice in the field of evolutionary computation. When the dimension is increased to 100, the problem becomes a high-dimensional optimization task. Using a dimension of 100 better highlights the performance stability of the proposed OMWOA when addressing problems of varying difficulty. Table 5 presents the scalability analysis results, showing that OMWOA consistently outperforms WOA across all tested dimensions. Scalability tests also compare the original WOA, highlighting OMWOA’s superior performance in various dimensions.
It is well recognized that solving problems becomes increasingly complex and challenging as the dimensions of test functions grow. Based on the above analysis, OMWOA emerges as a superior method for optimizing high-dimensional functions compared to the original WOA.
As depicted in Fig 1, the convergence trajectories of OMWOA (red) and WOA (blue) are shown for various test functions. The dimensions analyzed are 30, 50, and 100, and the chosen test functions include F1, F13, F15, and F19 from the CEC 2017 benchmark suite. Fig 1 highlights that OMWOA exhibits a faster convergence rate and higher accuracy compared to WOA.
4.4 Historical searches
Visualizing algorithmic search processes is critical in evolutionary computing research. Visual tools enable researchers to intuitively monitor the trajectory of the algorithm’s search within the solution space, its speed, and its ability to avoid local optima. This enhances the understanding of the algorithm’s operational principles and behavior, offering valuable insights for further optimization. Visual experiments also help identify algorithmic limitations and potential issues, guiding improvements. Therefore, visual experiments of algorithmic search processes are essential for the thorough investigation and refinement of evolutionary computing algorithms, promoting their development and practical application. To illustrate OMWOA’s search process, Fig 2 presents its historical trajectory on IEEE CEC 2017 benchmark functions, including F1, F7, F9, F23, and F25. Fig 2(a) depicts simulated images of these functions, while Fig 2(b) details the historical search path of OMWOA. Red points represent global optima, while black points indicate the optimizer’s findings at each iteration. OMWOA’s trajectory demonstrates a strong inclination towards optimal values, effectively avoiding local optima. The black dots surrounding the red dot and others evenly distributed throughout the search space highlight OMWOA’s global exploration ability. Fig 2(c) illustrates the relative discrepancy from the optimal value at each iteration, showing OMWOA’s stabilization around 500 iterations. Finally, Fig 2(d) portrays the average fitness values obtained at the conclusion of each iteration, showing an overall decreasing trend.
4.5 Comparison of other related algorithms
4.5.1 Comparative experiments on the CEC 2017 benchmark functions.
This section evaluates OMWOA using the IEEE CEC 2017 benchmark functions. The Wilcoxon signed-rank test [37] and the Friedman test [38] were employed to evaluate performance.
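The +/=/− counts reported in the comparison tables are per-function tallies of such tests. A simplified sketch on synthetic run data (using SciPy's `wilcoxon`, with the 0.05 significance level used in the paper; the means, counts, and seed are assumptions) is:

```python
import numpy as np
from scipy.stats import wilcoxon

rng = np.random.default_rng(0)
n_funcs, n_runs = 5, 30

wins = draws = losses = 0
for f in range(n_funcs):
    # Synthetic per-run errors on one function; OMWOA is made slightly better
    omwoa = rng.normal(1.0, 0.1, n_runs)
    other = rng.normal(1.3, 0.1, n_runs)
    _, p = wilcoxon(omwoa, other)      # paired signed-rank test over the 30 runs
    if p >= 0.05:
        draws += 1                     # "=" : no significant difference
    elif omwoa.mean() < other.mean():
        wins += 1                      # "+" : OMWOA significantly better (minimization)
    else:
        losses += 1                    # "-" : OMWOA significantly worse
print(f"+/=/- = {wins}/{draws}/{losses}")
```

With this synthetic data OMWOA wins every comparison; on real benchmark results the three counters produce the w/d/l triples shown in the tables.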
Table 6 presents a comprehensive comparison of OMWOA with competing algorithms on the IEEE CEC 2017 benchmark functions. The competing algorithms in this experiment include HGWO [39], WEMFO [40], mSCA [41], SCADE [42], CCMWOA [43], QCSCA [44], BWOA [45], CCMSCSA [46], CLACO [47], BLPSO [48], and GCHHO [49]. The analysis includes each algorithm’s rank, its performance against OMWOA reported as wins/draws/losses (+/=/−), and the average performance score (AVG) across 30 independent runs. OMWOA achieves the top rank with an average score of 1.30E+00, denoted by the “~” symbol in the +/=/− column, indicating that OMWOA serves as the benchmark algorithm in this comparative study. This underscores OMWOA’s robust optimization capabilities and its ability to consistently achieve optimal solutions across the diverse set of benchmark functions. Among the competing algorithms, HGWO ranks 9th with a relatively high average score of 8.40E+00; its 30/0/0 metric shows that OMWOA outperforms HGWO on all benchmark instances, emphasizing OMWOA’s superior optimization efficiency. WEMFO and BLPSO, ranked 4th and 5th respectively, demonstrate competitive performance with average scores of 4.50E+00 and 5.33E+00. WEMFO’s 28/0/2 metric indicates two instances where it performs better than OMWOA, while BLPSO’s 25/4/1 metric suggests it can occasionally match OMWOA’s performance. However, OMWOA maintains its superior ranking due to its overall lower average score. In contrast, algorithms like mSCA and SCADE, ranked 12th and 11th respectively, exhibit poorer performance with average scores exceeding 1.10E+01 and no wins against OMWOA, underscoring their limited effectiveness in achieving optimal solutions for these benchmark functions.
CCMWOA and CLACO, ranked 7th and 3rd respectively, present strong competition with average scores of 7.63E+00 and 4.33E+00. OMWOA records 29 wins against CCMWOA and 26 wins against CLACO; the remaining instances, where these variants match or beat OMWOA, reinforce their competitiveness in specific optimization scenarios. Other algorithms such as QCSCA, BWOA, and GCHHO, ranked 6th, 8th, and 10th respectively, demonstrate mixed performance with average scores of 5.47E+00, 8.23E+00, and 8.53E+00. Their +/=/− metrics indicate varying levels of competitiveness against OMWOA, with QCSCA showing occasional superiority on specific functions but generally falling short of OMWOA’s overall performance.
In summary, the experimental results conclusively demonstrate that OMWOA outperforms all other competing algorithms on the IEEE CEC 2017 benchmark functions. OMWOA’s top rank and the lowest average performance score underscore its effectiveness and robustness in addressing complex optimization challenges. These findings establish OMWOA as a leading optimization framework with substantial potential for practical applications across various domains.
Understanding the convergence rate is critical for evaluating how effectively evolutionary algorithms perform and exploring their capacity for development. Fig 3 illustrates the convergence curves of OMWOA and its competitors on the CEC 2017 benchmark functions. Convergence curves are vital analytical tools in evolutionary algorithm research: they provide a visual summary of an algorithm’s convergence behavior during optimization, displaying the progress of the search process within the solution space. By examining these curves, researchers can gain insights into convergence speed, stability, and potential issues such as premature convergence or oscillation. Moreover, convergence curves allow algorithm parameters to be fine-tuned for optimal performance, ensuring better adaptation to specific problem-solving requirements. The graph shows the convergence curves for all compared algorithms across twelve test functions, with the x-axis indicating the number of iterations and the y-axis the optimization value. For functions F5, F8, F22, and F26, OMWOA demonstrates significant convergence advantages, quickly reaching and attaining the lowest optimal values. Even in the remaining plots, especially in complex scenarios where the convergence curves of the algorithms are closely clustered, OMWOA consistently achieves the best optimization values.
4.5.2 Comparative experiments on the CEC 2022 benchmark functions.
The competing algorithms involved in this experiment include HGWO [39], WEMFO [40], mSCA [41], SCADE [42], CCMWOA [43], QCSCA [44], BWOA [45], CCMSCSA [46], CLACO [47], BLPSO [48], and GCHHO [49]. Table 7 presents a comparative analysis of OMWOA against these competing algorithms on the IEEE CEC 2022 benchmark functions, reporting the rankings, performance distribution (+/=/-), and average performance scores (AVG) across multiple experimental runs. “+” indicates that OMWOA outperforms the optimizer, “-” means OMWOA underperforms compared to the optimizer, and “=” denotes no significant difference in performance between the two. The Wilcoxon signed-rank test [37] and the Friedman test [38] were employed to evaluate performance. OMWOA secures the top rank, denoted by the symbol “~” in the +/=/- column, indicating its superior performance compared to all other algorithms evaluated in this study. OMWOA achieves an impressive average score of 1.35E+00, highlighting its robustness and effectiveness across the diverse and complex optimization challenges presented by the IEEE CEC 2022 benchmark functions. QCSCA follows with the 2nd rank, achieving an average score of 2.78E+00 and a +/=/- metric of 6/0/6; while QCSCA performs well, it falls short of OMWOA’s consistently better average results across the benchmark functions. CLACO and GCHHO secure the 3rd and 4th ranks, with average scores of 3.96E+00 and 4.25E+00, respectively. Their +/=/- metrics of 4/2/6 and 7/2/3 illustrate their varying degrees of competitiveness relative to OMWOA, with instances where these algorithms perform well but do not consistently match OMWOA’s top-tier performance. HGWO occupies the 9th rank with an average score of 7.89E+00 and a +/=/- metric of 10/1/1, indicating its lower effectiveness compared to OMWOA across the benchmark functions.
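The +/=/- tallies above pair a Wilcoxon signed-rank decision with a comparison of means, while the Friedman test ranks all algorithms jointly. The sketch below, using synthetic per-run fitness values and function names of our own choosing (not the paper's code), illustrates how such a comparison could be computed:

```python
import numpy as np
from scipy.stats import wilcoxon, friedmanchisquare

def pairwise_mark(ours, rival, alpha=0.05):
    """Classify a pairwise comparison as '+', '-' or '=' under the paper's
    convention (minimization: lower final fitness is better).
    '+' means the first algorithm significantly outperforms the rival."""
    _, p = wilcoxon(ours, rival)
    if p >= alpha:
        return "="
    return "+" if np.mean(ours) < np.mean(rival) else "-"

# Synthetic final-fitness values over 30 independent runs (illustrative only).
rng = np.random.default_rng(42)
omwoa = rng.normal(10.0, 1.0, 30)
rival = omwoa + rng.normal(3.0, 0.5, 30)   # consistently worse by ~3
print(pairwise_mark(omwoa, rival))          # expect "+"

# The Friedman test compares three or more algorithms across the same runs.
third = omwoa + rng.normal(1.5, 0.5, 30)
stat, p = friedmanchisquare(omwoa, third, rival)
print(p < 0.05)                             # ranks differ significantly
```

A significant Wilcoxon p-value alone does not say which algorithm is better, which is why the sign of the mean difference is checked before emitting “+” or “-”.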
SCADE, CCMWOA, BWOA, CCMSCSA, and BLPSO occupy the lower ranks, demonstrating their comparative weaknesses in achieving competitive scores relative to OMWOA. In summary, OMWOA emerges as the top-performing algorithm in this comparative evaluation on the IEEE CEC 2022 benchmark functions. Its consistent top rank and superior average performance underscore OMWOA’s effectiveness and robustness in solving complex optimization problems across various domains. These findings position OMWOA as a promising choice for practitioners and researchers seeking reliable solutions to global optimization tasks.
Fig 4 illustrates the convergence curves of OMWOA and its competitors on the CEC 2022 benchmark functions. The diagram depicts the convergence behavior of all compared algorithms across nine test functions, with the x-axis indicating the number of iterations and the y-axis representing the optimization value. For functions F1, F4, F6, and F7, OMWOA shows considerable convergence advantages, swiftly reaching optimal values and achieving the lowest optimization levels. Even in other plots, particularly in complex scenarios where the convergence curves are closely clustered, OMWOA consistently attains the best optimization results.
5. Application to medical data analysis
Intelligent optimization algorithms play a pivotal role in addressing real-world challenges [50]. This section introduces the integration of OMWOA with KELM, a renowned machine learning model. By optimizing the two kernel parameters of KELM using OMWOA, a novel model, OMWOA-KELM, is proposed. The performance of OMWOA-KELM is evaluated against four established classifiers. The experimental data comprise real medical datasets sourced from the UCI Machine Learning Repository [51] and the Group of Applied Research in Orthopedics (GARO) at the Centre Médico-Chirurgical de Réadaptation des Massues in Lyon, France. These datasets encompass a variety of diseases, including heart disease.
The Kernel Extreme Learning Machine (KELM) was selected as the classifier in this study due to its superior learning speed, strong generalization ability, and robustness against local minima, which are critical for handling complex and high-dimensional medical data. Compared to traditional neural networks and support vector machines (SVM), KELM offers significantly faster training by avoiding iterative parameter tuning, as it analytically determines the output weights. Furthermore, by incorporating kernel functions, KELM effectively captures nonlinear relationships within the data without the need for manually designing complex network architectures. Previous studies have demonstrated that KELM often achieves competitive or even superior classification accuracy with considerably lower computational cost, making it highly suitable for real-world medical applications where both predictive performance and efficiency are essential. Therefore, integrating KELM with the proposed optimization algorithm enables a more efficient and effective classification framework.
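The analytic training step described above is what gives KELM its speed: in the standard formulation, the output weights are obtained in closed form as β = (K + I/C)⁻¹T, where K is the kernel matrix, C the penalty coefficient, and T the one-hot target matrix. The following minimal sketch (our own class and parameter names, not the authors' implementation) illustrates the idea:

```python
import numpy as np

def gaussian_kernel(A, B, gamma):
    """Pairwise Gaussian (RBF) kernel: exp(-gamma * ||a - b||^2)."""
    d2 = (A**2).sum(1)[:, None] + (B**2).sum(1)[None, :] - 2.0 * A @ B.T
    return np.exp(-gamma * np.maximum(d2, 0.0))

class KELM:
    def __init__(self, C=1.0, gamma=1.0):
        self.C, self.gamma = C, gamma

    def fit(self, X, y):
        self.X = X
        T = np.eye(int(y.max()) + 1)[y]          # one-hot targets
        K = gaussian_kernel(X, X, self.gamma)
        # Output weights are computed analytically -- no iterative tuning.
        self.beta = np.linalg.solve(K + np.eye(len(X)) / self.C, T)
        return self

    def predict(self, X):
        K = gaussian_kernel(X, self.X, self.gamma)
        return np.argmax(K @ self.beta, axis=1)

# Two well-separated clusters as a toy classification problem.
X = np.array([[0.0, 0.0], [0.5, 0.0], [0.0, 0.5],
              [5.0, 5.0], [5.5, 5.0], [5.0, 5.5]])
y = np.array([0, 0, 0, 1, 1, 1])
print(KELM(C=10, gamma=0.5).fit(X, y).predict(X))   # [0 0 0 1 1 1]
```

In the OMWOA-KELM framework, the optimizer's role is to search over the two hyperparameters (C and gamma) that this closed-form solver takes as input.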
To ensure a fair comparison and minimize the impact of parameter variability on model performance, the parameter configurations for OMWOA-KELM and WOA-KELM were kept nearly identical in this study. Specifically, the population size and maximum number of iterations were set to 10 and 50, respectively, to balance computational efficiency with adequate search capability. The two hyperparameters of KELM, the penalty coefficient C and the kernel parameter γ, were both constrained within the same search range, which has been widely adopted in previous studies to achieve a reasonable trade-off between model complexity and generalization performance. Support Vector Machine (SVM) was employed as the baseline model, with its C and γ parameters tuned within the same range using a grid search strategy to ensure consistency across comparative evaluations. A Gaussian kernel was selected due to its proven effectiveness in handling non-linear classification tasks, and LIBSVM [52] was utilized for SVM implementation.
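The grid-search tuning of the SVM baseline can be sketched generically as an exhaustive scan over (C, gamma) pairs. The bounds and score function below are illustrative placeholders (the paper's exact range is not reproduced here), and the function names are ours:

```python
import itertools
import math

def grid_search(cv_score, c_grid, g_grid):
    """Evaluate every (C, gamma) pair with a user-supplied cross-validation
    score and return the best-scoring pair."""
    return max(itertools.product(c_grid, g_grid), key=lambda cg: cv_score(*cg))

# Illustrative powers-of-two grid and a stand-in score that peaks at
# C = 4, gamma = 0.5; in practice cv_score would run k-fold CV on the SVM.
grid = [2.0**i for i in range(-4, 5)]
score = lambda c, g: -(math.log2(c) - 2) ** 2 - (math.log2(g) + 1) ** 2
print(grid_search(score, grid, grid))   # (4.0, 0.5)
```

Scanning both parameters on a logarithmic grid is the usual practice for RBF-kernel models, since their effect on the decision boundary spans several orders of magnitude.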
For the K-Nearest Neighbors (KNN) model, the number of nearest neighbors was set to 1, and the Euclidean distance metric was adopted to maintain simplicity while capturing local data structures. The Classification and Regression Tree (CART) model was configured using default parameters, as these settings have demonstrated stable performance in similar classification tasks. The Backpropagation (BP) neural network was implemented using MATLAB’s Levenberg–Marquardt algorithm, with 8 hidden neurons and a mean squared error (MSE) threshold of 0.001, which provides a balance between convergence speed and model accuracy.
Since the raw medical data could not be directly processed by the aforementioned models, data preprocessing was performed to enhance model stability and comparability. Specifically, standardization and normalization were applied to scale all features within the range of [−1, 1]. This preprocessing step ensured consistent input distributions across all models while highlighting the advantages of the proposed OMWOA-KELM algorithm when handling varying data complexities.
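The [−1, 1] scaling step can be expressed as a per-feature min-max transform; a minimal sketch (our own function name), with a guard for constant columns:

```python
import numpy as np

def scale_to_unit_interval(X):
    """Min-max scale each feature column of X into the range [-1, 1]."""
    lo, hi = X.min(axis=0), X.max(axis=0)
    span = np.where(hi > lo, hi - lo, 1.0)   # avoid division by zero
    return 2.0 * (X - lo) / span - 1.0

# Two features on very different scales end up comparable after scaling.
X = np.array([[0.0, 100.0],
              [5.0, 150.0],
              [10.0, 200.0]])
print(scale_to_unit_interval(X))   # rows become [-1,-1], [0,0], [1,1]
```

Scaling every attribute to a common range prevents features with large numeric spans (such as lab measurements) from dominating the kernel distance computations.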
The standardized data were subjected to 10-fold cross-validation. This approach splits the dataset into ten subsets, using nine for training and the remaining one for validation in each iteration. Such a method maximizes the utility of limited data and ensures robust model evaluation.
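The tenfold split described above can be sketched as follows; this is an illustrative implementation, not the authors' code:

```python
import numpy as np

def kfold_indices(n, k=10, seed=0):
    """Yield (train, test) index arrays for k-fold cross-validation:
    the data are shuffled once, split into k folds, and each fold serves
    as the validation set exactly once."""
    idx = np.random.default_rng(seed).permutation(n)
    folds = np.array_split(idx, k)
    for i in range(k):
        test = folds[i]
        train = np.concatenate([folds[j] for j in range(k) if j != i])
        yield train, test

# Every sample appears in exactly one validation fold across the 10 splits.
n = 699   # e.g. the size of the Breast Cancer dataset
covered = np.concatenate([test for _, test in kfold_indices(n)])
print(len(covered), len(set(covered.tolist())))   # 699 699
```

Because each sample is validated exactly once, the per-fold metrics can be averaged into the mean ± standard deviation figures reported in the tables that follow.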
5.1 Metrics of the classification performance
In this study, to demonstrate the exceptional performance of the proposed OMWOA-KELM in diagnosing and classifying actual medical data, we employ sensitivity, specificity, accuracy (ACC), and the Matthews correlation coefficient (MCC) as performance metrics. Their formal definitions are as follows, where TP, TN, FP, and FN denote true positives, true negatives, false positives, and false negatives, respectively:

ACC = (TP + TN) / (TP + TN + FP + FN)

Sensitivity = TP / (TP + FN)

Specificity = TN / (TN + FP)

MCC = (TP × TN − FP × FN) / √((TP + FP)(TP + FN)(TN + FP)(TN + FN))
Among these four evaluation metrics, ACC measures overall classification accuracy and depends on TP and TN. Sensitivity captures the model’s ability to identify positive cases, emphasizing TP and FN. Specificity is linked primarily to TN and FP, while MCC evaluates the model’s overall reliability; a higher MCC indicates greater reliability.
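All four metrics can be computed directly from the confusion-matrix counts; a minimal sketch (our own function name, with illustrative counts):

```python
import math

def classification_metrics(tp, tn, fp, fn):
    """Compute ACC, sensitivity, specificity, and MCC from confusion counts."""
    acc = (tp + tn) / (tp + tn + fp + fn)
    sensitivity = tp / (tp + fn)            # true positive rate
    specificity = tn / (tn + fp)            # true negative rate
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    mcc = (tp * tn - fp * fn) / denom if denom else 0.0
    return acc, sensitivity, specificity, mcc

# Illustrative confusion counts for a binary diagnosis task.
acc, sen, spe, mcc = classification_metrics(tp=50, tn=40, fp=5, fn=5)
print(round(acc, 4), round(sen, 4), round(spe, 4), round(mcc, 4))
# 0.9 0.9091 0.8889 0.798
```

Unlike ACC, MCC stays informative on imbalanced datasets because it involves all four counts symmetrically, which is why it is reported alongside accuracy throughout Section 5.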
5.2 Breast cancer dataset
The Breast Cancer dataset is a commonly used medical dataset proposed by Dr. Wolberg [53]. It contains 699 instances, each with 10 attributes. Table 8 reports the results of each fold of the tenfold cross-validation. As the table shows, OMWOA-KELM achieves an ACC of 96.57%, a sensitivity of 96.70%, a specificity of 96.02%, and an MCC of 92.36%.
In Fig 5, we graphically illustrate the experimental results, which allow us to observe the results more visually. The experimental data provides a comprehensive comparison of different models based on four evaluation metrics: accuracy (ACC), sensitivity, specificity, and Matthew’s correlation coefficient (MCC). The results include the mean and standard deviation for each metric.
For ACC, OMWOA-KELM achieves the highest mean value of 0.9657 with a standard deviation of 0.0226, surpassing other models such as WOA-KELM (0.9599 ± 0.0231) and KNN (0.9542 ± 0.0211). CART, BP, and SVM show slightly lower values of 0.9428 ± 0.0309, 0.9313 ± 0.0471, and 0.9356 ± 0.0809, respectively.
In terms of sensitivity, WOA-KELM demonstrates the best performance with a mean of 0.9705 ± 0.0326, followed closely by OMWOA-KELM (0.9670 ± 0.0326). KNN, BP, CART, and SVM achieve mean values of 0.9664, 0.9604, 0.9558, and 0.9575, respectively, with varying standard deviations.
For specificity, OMWOA-KELM again outperforms with a mean of 0.9602 ± 0.0471, whereas KNN and WOA-KELM achieve values of 0.9324 ± 0.0674 and 0.9395 ± 0.0763, respectively. CART, BP, and SVM demonstrate lower specificity values, particularly BP and SVM, which fall below 0.9.
Regarding MCC, OMWOA-KELM reaches the highest score of 0.9236 ± 0.0507, followed by WOA-KELM (0.9145 ± 0.0499) and KNN (0.9010 ± 0.0406). CART, BP, and SVM present lower reliability, with MCC values of 0.8740, 0.8525, and 0.8584, respectively.
5.3. Bupa liver dataset
The Bupa liver dataset is a commonly used medical dataset described in Forsyth’s report [54]. It contains 345 instances, each with 7 attributes. Table 9 shows the experimental results for the four indicators.
Specifically, we show the experimental results in Fig 6. The dataset highlights the performance of various models evaluated on four metrics: ACC, sensitivity, specificity, and MCC, reported with their mean, standard deviation, and an additional specific value.
For ACC, OMWOA-KELM achieves the highest mean value of 0.8296 ± 0.0530, outperforming models such as WOA-KELM (0.7667 ± 0.0464) and BP (0.7593 ± 0.1051). SVM demonstrates the lowest mean ACC (0.7074 ± 0.1932) despite its specific ACC reaching 0.8148, higher than most models.
In sensitivity, OMWOA-KELM again leads with a mean of 0.8765 ± 0.0655 and a specific value of 0.7500, followed by BP (0.8020 ± 0.1207). CART and SVM exhibit lower sensitivity, with respective means of 0.7796 and 0.7055, and specific values of 0.6000 and 0.7500.
Regarding specificity, BP, WOA-KELM, and SVM achieve the highest specific values (1.0000 each), although their mean values vary: BP (0.7281 ± 0.1822), WOA-KELM (0.7721 ± 0.1292), and SVM (0.7244 ± 0.1922). OMWOA-KELM follows closely, with a mean specificity of 0.7854 ± 0.0921 and a specific value of 0.8571.
For MCC, OMWOA-KELM shows the best performance with a mean of 0.6602 ± 0.0899 and a specific value of 0.5415. BP and WOA-KELM perform moderately well, with MCC means of 0.5252 and 0.5487, respectively. SVM records the lowest mean MCC (0.4140 ± 0.4019) despite a specific MCC of 0.6614.
5.4. Cleveland heart dataset
The Cleveland heart medical dataset is a commonly used medical dataset. The number of instances in the dataset is 303, each with 14 attributes. The results of OMWOA-KELM on the dataset are recorded in Table 10.
Specifically, we show the experimental results in Fig 7. The dataset evaluates six models using four metrics: accuracy (ACC), sensitivity, specificity, and Matthew’s correlation coefficient (MCC), presented as mean ± standard deviation.
For ACC, OMWOA-KELM outperforms other models with a mean of 0.8316 ± 0.0529, followed by SVM (0.8017 ± 0.0903). Models such as CART (0.7553 ± 0.1066) and WOA-KELM (0.7488 ± 0.0640) show moderate accuracy, while BP scores the lowest with 0.7092 ± 0.0929.
In sensitivity, SVM and OMWOA-KELM achieve comparable results with 0.7757 ± 0.1141 and 0.7722 ± 0.0687, respectively. WOA-KELM (0.7171 ± 0.0870) and KNN (0.7099 ± 0.0901) demonstrate slightly lower sensitivity, while CART and BP perform less favorably, with means of 0.7047 ± 0.1159 and 0.6869 ± 0.1243, respectively.
For specificity, OMWOA-KELM excels with 0.8840 ± 0.0715, followed by SVM (0.8389 ± 0.1379). Other models, including CART (0.7785 ± 0.1609), KNN (0.7733 ± 0.1224), and WOA-KELM (0.7733 ± 0.1224), demonstrate moderate performance. BP shows the lowest specificity at 0.7395 ± 0.0848.
Regarding MCC, OMWOA-KELM achieves the best reliability with 0.6559 ± 0.1066. SVM follows with 0.6129 ± 0.1577, while KNN, CART, and WOA-KELM have comparable values, ranging from 0.4862 to 0.4926. BP exhibits the lowest MCC (0.4183 ± 0.1788), indicating reduced reliability.
5.5. Diabetes dataset
The Diabetes dataset is a commonly used medical dataset. The number of instances in the dataset is 768, each with 8 attributes. The results of OMWOA-KELM on the dataset are recorded in Table 11.
Specifically, we show the experimental results in Fig 8. The performance of six models is evaluated across four key metrics: accuracy (ACC), sensitivity, specificity, and Matthew’s correlation coefficient (MCC), with results presented as mean ± standard deviation.
In terms of ACC, OMWOA-KELM shows the best performance, achieving 0.8317 ± 0.0287, followed by SVM at 0.7885 ± 0.1172. WOA-KELM and KNN demonstrate moderate accuracy, scoring 0.7720 ± 0.0812 and 0.7520 ± 0.0645, respectively. CART (0.7387 ± 0.1034) and BP (0.7294 ± 0.0868) yield the lowest accuracy values.
For sensitivity, OMWOA-KELM leads again with a mean of 0.7813 ± 0.1006, while SVM and WOA-KELM follow with values of 0.7630 ± 0.1691 and 0.7463 ± 0.1525, respectively. KNN and CART perform moderately, recording 0.7175 ± 0.1393 and 0.7300 ± 0.1468, while BP lags behind at 0.6903 ± 0.1261.
For specificity, OMWOA-KELM achieves the highest value of 0.8811 ± 0.0723, significantly outperforming other models. SVM follows with 0.8188 ± 0.1088, and WOA-KELM performs moderately with 0.7966 ± 0.0705. KNN (0.7855 ± 0.0627), BP (0.7675 ± 0.0803), and CART (0.7518 ± 0.1355) show relatively lower values.
In MCC, OMWOA-KELM stands out with a mean of 0.6671 ± 0.0552. SVM and WOA-KELM follow with values of 0.5790 ± 0.2366 and 0.5414 ± 0.1727, respectively. KNN, CART, and BP perform less favorably, with MCC scores ranging from 0.4561 to 0.5030.
5.6. Heart dataset
The Heart medical dataset is a commonly used medical dataset. The number of instances in the dataset is 270, each with 13 attributes. The results of OMWOA-KELM on the dataset are recorded in Table 12.
Specifically, we show the experimental results in Fig 9. The models are evaluated across four key indicators. OMWOA-KELM achieves the highest performance in all metrics, particularly excelling in sensitivity (0.8728) and MCC (0.6318). CART demonstrates strong performance in specificity (0.7443) and MCC (0.5338), while SVM excels in accuracy (0.7407). WOA-KELM performs well, especially in sensitivity (0.8102), but overall, it lags behind OMWOA-KELM. KNN and BP show lower scores, especially in MCC and specificity.
6. Conclusions and future works
In this study, we introduced OMWOA, an enhanced version of the Whale Optimization Algorithm, which incorporates the outpost and multi-population mechanisms to address the common challenges of slow convergence and local optima stagnation. The experimental evaluation, benchmarked against prominent evolutionary algorithms from the IEEE CEC 2017 and IEEE CEC 2022 competitions, demonstrated that OMWOA outperforms traditional algorithms in terms of both convergence rate and solution quality. Furthermore, the scalability analysis showed that OMWOA retains its high performance across a wide range of problem dimensionalities. To assess its practical utility, we applied OMWOA in conjunction with KELM to solve the medical disease diagnosis problem. The results from five different medical datasets suggest that the proposed algorithm not only excels in theoretical optimization but also holds promise for real-world applications.
The proposed algorithm also has certain limitations, such as the lack of validation of its optimization performance in other real-world applications. Future work could further refine the scalability of OMWOA in more complex and high-dimensional real-world problems, and explore its integration with other machine learning models for broader applicability.
References
- 1. Aljarah I, Mafarja M, Heidari A, Faris H, Zhang Y, Mirjalili S. Asynchronous accelerating multi-leader salp chains for feature selection. Appl Soft Comput. 2018;71:964–79.
- 2. Mafarja M, Aljarah I, Heidari AA, Faris H, Fournier-Viger P, Li X, et al. Binary dragonfly optimization for feature selection using time-varying transfer functions. Knowledge-Based Systems. 2018;161:185–204.
- 3. Xu Y, Chen H, Heidari AA, Luo J, Zhang Q, Zhao X, et al. An efficient chaotic mutative moth-flame-inspired optimizer for global optimization tasks. Expert Systems with Applications. 2019;129:135–55.
- 4. Faris H, Ala’M A-Z, Heidari A, Aljarah I, Mafarja M, Hassonah M. An intelligent system for spam detection and identification of the most relevant features based on evolutionary random weight networks. Inf Fusion. 2019;48:67–83.
- 5. Faris H, Heidari AA, Al-Zoubi AM, Mafarja M, Aljarah I, Eshtay M, et al. Time-varying hierarchical chains of salps with random weight networks for feature selection. Expert Systems with Applications. 2020;140:112898.
- 6. Faris H, Mafarja M, Heidari A, Aljarah I, Ala’M A-Z, Mirjalili S. An efficient binary salp swarm algorithm with crossover scheme for feature selection problems. Knowl Based Syst. 2018;154:43–67.
- 7. Zhang X, Wang D, Zhou Z, Ma Y. Robust Low-Rank Tensor Recovery with Rectification and Alignment. IEEE Trans Pattern Anal Mach Intell. 2021;43(1):238–55. pmid:31329109
- 8. Kennedy J, Eberhart R, editors. Particle swarm optimization. IEEE International Conference on Neural Networks - Conference Proceedings; 1995.
- 9. Zhang X, Hu W, Qu W, Maybank S. Multiple object tracking via species-based particle swarm optimization. IEEE Trans Circuits Syst Video Technol. 2010;20(11):1590–602.
- 10. Dorigo M, Blum C. Ant colony optimization theory: A survey. Theoretical Computer Science. 2005;344(2–3):243–78.
- 11. Deng W, Xu J, Zhao H. An Improved Ant Colony Optimization Algorithm Based on Hybrid Strategies for Scheduling Problem. IEEE Access. 2019;7:20281–92.
- 12. Karaboga D, Basturk B. A powerful and efficient algorithm for numerical function optimization: artificial bee colony (ABC) algorithm. J Glob Optim. 2007;39(3):459–71.
- 13. Mirjalili S, Mirjalili SM, Hatamlou A. Multi-Verse Optimizer: a nature-inspired algorithm for global optimization. Neural Comput & Applic. 2015;27(2):495–513.
- 14. Shen L, Chen H, Yu Z, Kang W, Zhang B, Li H, et al. Evolving support vector machines using fruit fly optimization for medical data classification. Knowledge-Based Systems. 2016;96:61–75.
- 15. Zhang X, Xu Y, Yu C, Heidari AA, Li S, Chen H, et al. Gaussian mutational chaotic fruit fly-built optimization and feature selection. Expert Systems with Applications. 2020;141:112976.
- 16. Saremi S, Mirjalili S, Lewis A. Grasshopper Optimisation Algorithm: Theory and application. Advances in Engineering Software. 2017;105:30–47.
- 17. Yang XS. A new metaheuristic bat-inspired algorithm. Stud Comput Intell. 2010:65–74.
- 18. Yu H, Zhao N, Wang P, Chen H, Li C. Chaos-enhanced synchronized bat optimizer. Applied Mathematical Modelling. 2020;77:1201–15.
- 19. Meng X, Liu Y, Gao X, Zhang H. A new bio-inspired algorithm: Chicken swarm optimization. Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics): Springer Verlag; 2014. p. 86–94.
- 20. Li XL, Shao ZJ, Qian JX. Optimizing method based on autonomous animats: fish-swarm algorithm. Xitong Gongcheng Lilun yu Shijian. 2002;22(11):32.
- 21. Mirjalili S, Lewis A. The Whale Optimization Algorithm. Advances in Engineering Software. 2016;95:51–67.
- 22. Mafarja M, Mirjalili S. Whale optimization approaches for wrapper feature selection. Applied Soft Computing. 2018;62:441–53.
- 23. Zheng Y, Li Y, Wang G, Chen Y, Xu Q, Fan J, et al. A Novel Hybrid Algorithm for Feature Selection Based on Whale Optimization Algorithm. IEEE Access. 2019;7:14908–23.
- 24. Hassan G, Hassanien AE. Retinal fundus vasculature multilevel segmentation using whale optimization algorithm. Signal, Image and Video Processing. 2017:1-8.
- 25. Sun WZ, Wang JS. Elman Neural Network Soft-Sensor Model of Conversion Velocity in Polymerization Process Optimized by Chaos Whale Optimization Algorithm. IEEE Access. 2017;5:13062–76.
- 26. Aljarah I, Faris H, Mirjalili S. Optimizing connection weights in neural networks using the whale optimization algorithm. Soft Comput. 2016;22(1):1–15.
- 27. Aziz MAE, Ewees AA, Hassanien AE. Whale Optimization Algorithm and Moth-Flame Optimization for multilevel thresholding image segmentation. Expert Systems with Applications. 2017;83:242–56.
- 28. Aziz MAE, Ewees AA, Hassanien AE. Multi-objective whale optimization algorithm for content-based image retrieval. Multimed Tools Appl. 2018;77(19):26135–72.
- 29. Thanga Revathi S, Ramaraj N, Chithra S. Brain storm-based Whale Optimization Algorithm for privacy-protected data publishing in cloud computing. Cluster Comput. 2018;22(S2):3521–30.
- 30. Wang J, Du P, Niu T, Yang W. A novel hybrid system based on a new proposed algorithm—Multi-Objective Whale Optimization Algorithm for wind speed forecasting. Applied Energy. 2017;208:344–60.
- 31. Tubishat M, Abushariah MAM, Idris N, Aljarah I. Improved whale optimization algorithm for feature selection in Arabic sentiment analysis. Appl Intell. 2018;49(5):1688–707.
- 32. Zhou Y, Ling Y, Luo Q. Lévy flight trajectory-based whale optimization algorithm for engineering optimization. Engineering Computations (Swansea, Wales). 2018;35(7):2406–28.
- 33. Yousri D, Allam D, Eteiba MB. Chaotic whale optimizer variants for parameters estimation of the chaotic behavior in Permanent Magnet Synchronous Motor. Applied Soft Computing. 2019;74:479–503.
- 34. Elhosseini MA, Haikal AY, Badawy M, Khashan N. Biped robot stability based on an A–C parametric Whale Optimization Algorithm. Journal of Computational Science. 2019;31:17–32.
- 35. Yaqoob A, Verma N, Aziz R, Saxena A. Enhancing feature selection through metaheuristic hybrid cuckoo search and Harris hawks optimization for cancer classification. Metaheuristics for machine learning: algorithms and applications. 2024. p. 95–134.
- 36. Alnowibet KA, Shekhawat S, Saxena A, Sallam KM, Mohamed AW. Development and applications of augmented whale optimization algorithm. Math. 2022;10(12):2076.
- 37. Demsar J. Statistical comparisons of classifiers over multiple data sets. J Mach Learn Res. 2006;7:1–30.
- 38. García S, Fernández A, Luengo J, Herrera F. Advanced nonparametric tests for multiple comparisons in the design of experiments in computational intelligence and data mining: Experimental analysis of power. Information Sciences. 2010;180(10):2044–64.
- 39. Deng S, Wang X, Zhu Y, Lv F, Wang J. Hybrid Grey Wolf Optimization Algorithm–Based Support Vector Machine for Groutability Prediction of Fractured Rock Mass. J Comput Civ Eng. 2019;33(2).
- 40. Shan W, Qiao Z, Heidari AA, Chen H, Turabieh H, Teng Y. Double adaptive weights for stabilization of moth flame optimizer: Balance analysis, engineering cases, and medical diagnosis. Knowledge-Based Systems. 2021;214:106728.
- 41. Gupta S, Deep K. A hybrid self-adaptive sine cosine algorithm with opposition based learning. Expert Syst Appl. 2019;119:210–30.
- 42. Alambeigi F, Pedram SA, Speyer JL, Rosen J, Iordachita I, Taylor RH, et al. SCADE: Simultaneous Sensor Calibration and Deformation Estimation of FBG-Equipped Unmodeled Continuum Manipulators. IEEE Trans Robot. 2020;36(1):222–39. pmid:32661460
- 43. Luo J, Chen H, Heidari A, Xu Y, Zhang Q, Li C. Multi-strategy boosted mutative whale-inspired optimization approaches. Appl Math Model. 2019;73:109–23.
- 44. Hu H, Shan W, Tang Y, Heidari A, Chen H, Liu H. Horizontal and vertical crossover of sine cosine algorithm with quick moves for optimization and feature selection. J Comput Des Eng. 2022;9(6):2524–55.
- 45. Reddy K. S, Panwar L, Panigrahi BK, Kumar R. Binary whale optimization algorithm: a new metaheuristic approach for profit-based unit commitment problems in competitive electricity markets. Engineering Optimization. 2018;51(3):369–89.
- 46. Shan W, Hu H, Cai Z, Chen H, Liu H, Wang M. Multi-strategies boosted mutative crow search algorithm for global tasks: cases of continuous and discrete optimization. J Bionic Eng. 2022;19(6):1830–49.
- 47. Liu L, Zhao D, Yu F, Heidari AA, Li C, Ouyang J, et al. Ant colony optimization with Cauchy and greedy Levy mutations for multilevel COVID 19 X-ray image segmentation. Comput Biol Med. 2021;136:104609. pmid:34293587
- 48. Chen X, Li K, Xu B, Yang Z. Biogeography-based learning particle swarm optimization for combined heat and power economic dispatch problem. Knowl Based Syst. 2020;208:106463.
- 49. Song S, Wang P, Heidari A, Wang M, Zhao X, Chen H. Dimension decided Harris hawks optimization with Gaussian mutation: balance analysis and diversity patterns. Knowl Based Syst. 2021;215:106425.
- 50. Zhang X, Wang H, Du C, Fan X, Cui L, Chen H, et al. Custom-Molded Offloading Footwear Effectively Prevents Recurrence and Amputation, and Lowers Mortality Rates in High-Risk Diabetic Foot Patients: A Multicenter, Prospective Observational Study. Diabetes Metab Syndr Obes. 2022;15:103–9. pmid:35046681
- 51. Frank A, Asuncion A. UCI Machine Learning Repository; 2010.
- 52. Chang C-C, Lin C-J. LIBSVM. ACM Trans Intell Syst Technol. 2011;2(3):1–27.
- 53. Wolberg WH, Mangasarian OL. Multisurface method of pattern separation for medical diagnosis applied to breast cytology. Proc Natl Acad Sci U S A. 1990;87(23):9193–6. pmid:2251264
- 54. McDermott J, Forsyth RS. Diagnosing a disorder in a classification benchmark. Pattern Recognition Letters. 2016;73:41–3.