Abstract
Image segmentation is a fundamental step in image processing, yet determining the optimal thresholds for multi-threshold segmentation remains a computationally challenging task as the search space expands exponentially with the number of thresholds. To effectively address this issue, this paper proposes a Multi-Strategy Remora Optimization Algorithm (MSROA) designed for efficient color image segmentation. MSROA improves upon the standard algorithm by integrating a Beta random restart strategy with a “prior” property to prevent stagnation in local optima, alongside a random walk with fast predation and an elite learning strategy to enhance convergence speed and solution accuracy. The optimization performance of MSROA was rigorously evaluated on the CEC2017 and CEC2020 benchmark test suites. Wilcoxon rank-sum tests confirmed that MSROA achieves statistically significant improvements over seven state-of-the-art comparison algorithms. Furthermore, the algorithm was applied to color image segmentation tasks using Otsu’s method and Kapur’s entropy as objective functions. Experimental results on standard datasets demonstrate that MSROA not only identifies optimal threshold combinations more accurately but also yields segmented images with superior quality. Quantitative evaluations show that MSROA consistently achieves higher Peak Signal-to-Noise Ratio (PSNR), Feature Similarity Index Measure (FSIM), and Structural Similarity Index Measure (SSIM) values compared to competitors, proving its capability to effectively preserve fine textures and edge details even at high threshold levels. The source code of MSROA is publicly available at https://github.com/wencs666/MSROA.
Citation: Jia H, Wen C, Rao H, Abualigah L, Abdel-Salam M (2026) Multi-strategy remora optimization algorithm for color multi-threshold image segmentation. PLoS One 21(2): e0342261. https://doi.org/10.1371/journal.pone.0342261
Editor: Vedik Basetti, SR University, INDIA
Received: October 16, 2025; Accepted: January 20, 2026; Published: February 18, 2026
Copyright: © 2026 Jia et al. This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
Data Availability: All image datasets used in this study are available from the BSDS500 dataset (URL: https://www2.eecs.berkeley.edu/Research/Projects/CS/vision/grouping/resources.html).
Funding: H. Jia was supported by the Natural Science Foundation of Fujian Province (Grant No. 2025J011049, http://www.fjkjt.gov.cn). The funder had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript. There was no additional external funding received for this study.
Competing interests: The authors have declared that no competing interests exist.
1 Introduction
Image segmentation [1] is a fundamental technique in computer vision and image processing, defined as the process of partitioning an image into distinct regions where pixels share high similarity in attributes such as color, intensity, or texture. The importance of this process cannot be overstated, as it serves as a prerequisite step that directly influences the performance of downstream analysis tasks. For instance, in the medical field, precise segmentation is crucial for lesion detection and identifying diseased tissues, thereby assisting doctors in accurate diagnosis [2]. In intelligent transportation systems, it enables vehicle detection [3] and traffic congestion analysis [4], reducing the burden on traffic management. Similarly, in precision agriculture [5,6] and industrial quality control [7,8], the efficiency of production and monitoring relies heavily on the accuracy of automated segmentation. Consequently, developing robust segmentation methods is a critical research priority with wide-ranging applications in remote sensing [9], facial recognition [10], and aerospace technology [11].
Image segmentation can be categorized into color image segmentation [12] and grayscale image segmentation [13] based on the type of input image. According to segmentation criteria, it can be further classified into threshold-based methods [14], region-based methods [15], and edge-based methods [16]. In addition, there are other methods developed based on specific theories [17]. Among these, threshold-based segmentation techniques are commonly divided into two types: single-threshold segmentation and multi-threshold segmentation. Single-threshold segmentation classifies pixel grayscale values into two groups—foreground (target region) and background—based on a single threshold value. In contrast, multi-threshold segmentation uses multiple thresholds to divide the image into several distinct regions. Compared to single-threshold segmentation, multi-threshold segmentation is more effective in handling complex backgrounds and images with multiple target regions, making it widely applicable in various image analysis tasks.
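As a concrete illustration of this distinction (not part of the paper's method), the sketch below shows how a set of K thresholds partitions gray levels into K + 1 regions; the threshold values here are arbitrary placeholders, whereas in multi-threshold segmentation they are exactly the quantities the optimizer searches for:

```python
import numpy as np

# Illustrative only: K thresholds split the gray-level range [0, 255]
# into K + 1 classes. The values below are arbitrary placeholders.
gray = np.array([[12, 80, 140],
                 [200, 95, 30]], dtype=np.uint8)

thresholds = [60, 128, 180]  # hypothetical; an optimizer would search these

# np.digitize assigns each pixel the index of the region it falls into:
# label 0 for g < 60, 1 for 60 <= g < 128, 2 for 128 <= g < 180, 3 otherwise.
labels = np.digitize(gray, thresholds)
```

With three thresholds, every pixel receives one of four region labels, which is the multi-threshold generalization of the two-class (foreground/background) case.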
Numerous methods have been developed to address image segmentation problems. For instance, the Kapur entropy method [18], also known as the maximum entropy method, segments the image histogram into multiple regions using thresholds and seeks to maximize the sum of the entropies of these regions. The Otsu method [19], or maximum between-class variance method, selects thresholds by maximizing the variance between segmented classes. In addition to these, several techniques have been proposed for multi-threshold image segmentation, including the minimum cross-entropy method (MCE) [20]. However, as the number of thresholds increases, the search space expands rapidly, making it increasingly difficult to identify optimal thresholds using traditional exhaustive search methods. To overcome this challenge, swarm intelligence-based optimization algorithms have gained significant attention in recent years for effectively solving multi-level threshold image segmentation problems.
Swarm intelligence-based optimization algorithms, inspired by various natural and social phenomena, can be broadly categorized into four groups based on their underlying inspiration: physics-based, evolutionary-based, swarm-based, and human-based algorithms. Physics-based meta-heuristic algorithms simulate physical laws and principles to guide the search process. Representative examples include the Big-Bang Big Crunch (BB-BC) algorithm [21], Lightning Search Algorithm (LSA) [22], Artificial Electric Field Algorithm (AEFA) [23], Sine Cosine Algorithm (SCA) [24], Arithmetic Optimization Algorithm (AOA) [25], Gravitational Search Algorithm (GSA) [26], Black Hole Algorithm (BHA) [27], and Henry Gas Solubility Optimization (HGSO) [28]. Evolutionary-based algorithms are inspired by biological evolution mechanisms, such as the Artificial Algae Algorithm (AAA) [29], Genetic Algorithm (GA) [30], Monkey King Evolution (MKE) [31], and Differential Evolution (DE) [32]. Swarm-based algorithms model the collective behavior of social organisms, including Particle Swarm Optimization (PSO) [33], Moth-Flame Optimization (MFO) [34], Ant Colony Optimization (ACO) [35], Whale Optimization Algorithm (WOA) [36], and Marine Predator Algorithm (MPA) [37]. Human-based algorithms mimic human social behavior and decision-making strategies, such as the Imperialist Competitive Algorithm (ICA) [38], Cohort Intelligence (CI) [39], and Social Group Optimization (SGO) [40]. According to the No Free Lunch (NFL) theorem [41], no single algorithm performs best across all optimization problems. This has driven the continuous development of novel algorithms and improvements to existing ones. Consequently, numerous enhanced or hybrid variants have been proposed in recent years, including MGTO [42], IWHO [43], MHHO [44], and MSCSO [45].
At present, many researchers have applied swarm intelligence optimization algorithms to multi-threshold image segmentation problems. Rao et al. enhanced the performance of the crayfish optimization algorithm by optimizing the maximum foraging amount parameter, introducing an adaptive foraging adjustment strategy, and incorporating the core formula of the differential evolution algorithm [46]. Their method demonstrated excellent performance in multi-threshold image segmentation tasks. Jia et al. improved the artificial rabbit optimization algorithm by integrating a center-driven strategy and a Gaussian random walk mechanism to enhance its optimization capability. When applied to multi-threshold color image segmentation, their approach achieved high segmentation accuracy and fast execution [47]. Peng et al. proposed a hybrid algorithm by improving the dragonfly optimization algorithm through chaotic initialization and elite reverse learning, and integrating it with the differential evolution algorithm. They employed Kapur entropy, Minimum Cross-Entropy, and Otsu methods as objective functions to obtain optimal fitness values. The segmented images achieved high performance in terms of pixel similarity, structural similarity, brightness, and contrast [48]. Ma et al. optimized the initial population of the whale optimization algorithm using reverse learning, introduced an adaptive factor to balance exploration and exploitation, and incorporated horizontal and vertical crossover strategies to enhance optimization performance [49]. Their method achieved the highest similarity between segmented and original images, delivering higher-quality solutions and greater stability.
The Remora Optimization Algorithm (ROA) [50] is a meta-heuristic swarm intelligence algorithm inspired by the foraging behavior of remoras. The core idea behind ROA is to adaptively select different host-switching and behavioral strategies based on various stages of the remora’s foraging process. However, standard ROA faces specific limitations when applied to the complex multimodal landscapes of multi-threshold segmentation. It is prone to premature convergence, often getting trapped in local optima during the early search phase, and suffers from slow convergence rates. Although improved versions exist [51–53], challenges related to insufficient global exploration and the randomness of roulette wheel selection remain, potentially causing the algorithm to miss global optima.
However, despite these improvements, existing variants of ROA still face challenges related to exploration and convergence. For example, the MROA does not sufficiently address exploration, leading to slower convergence rates. Additionally, the randomness inherent in roulette wheel selection may cause the algorithm to skip over optimal solutions during the search process. To overcome these shortcomings, this paper introduces a Multi-Strategy Remora Optimization Algorithm (MSROA). First, a Beta Random Restart Strategy is proposed, which leverages the “prior” properties of the beta distribution to simulate evolutionary exploration and elimination, helping the algorithm escape local optima. Second, a two-phase strategy is introduced, combining random walk in the early stage (simulating autonomous host exploration) with fast predation in the later stage (simulating rapid target capture). This approach not only increases population diversity but also improves convergence accuracy. Finally, inspired by different learning behaviors, an elite learning strategy is developed, combining elite forward learning and elite reverse learning to effectively mitigate the negative impact of local optima.
Through the integration of these strategies, MSROA achieves both enhanced global exploration and faster convergence. The main contributions of this paper are as follows:
- A Beta Random Restart Strategy is proposed by improving the traditional restart mechanism. Inspired by biological exploration and leveraging the ‘prior’ properties of the beta distribution, this strategy effectively enhances the algorithm’s ability to escape local optima and significantly improves convergence speed.
- A Two-Phase Search Mechanism is introduced by simulating different predation behaviors of the host. The random walk in the early stage enhances exploration, while the fast predation in the later stage strengthens exploitation, thereby improving the overall optimization capability of the algorithm.
- An Elite Learning Strategy is designed by mimicking two distinct learning behaviors—elite forward learning and elite reverse learning. This strategy mitigates the negative effects of premature convergence to local optima and improves the accuracy and robustness of the final solutions.
To evaluate the optimization performance of the proposed MSROA, this study conducted experiments using 29 benchmark functions from the CEC 2017 test suite and 10 benchmark functions from the CEC 2020 test suite. Finally, MSROA was applied to solve multi-threshold image segmentation problems. The segmented images obtained using MSROA not only achieved superior fitness values but also outperformed competing methods in terms of Peak Signal-to-Noise Ratio (PSNR), Feature Similarity Index Measure (FSIM), and Structural Similarity Index Measure (SSIM), demonstrating improved segmentation quality across multiple evaluation metrics.
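Of the metrics listed, PSNR has the simplest closed form; the sketch below uses its standard definition for 8-bit images, PSNR = 10·log10(255² / MSE), and is included only as a reference for the evaluation, not as the paper's implementation:

```python
import numpy as np

# Standard PSNR for 8-bit images (MAX = 255); higher is better.
def psnr(original, segmented):
    orig = original.astype(np.float64)
    seg = segmented.astype(np.float64)
    mse = np.mean((orig - seg) ** 2)   # mean squared error
    if mse == 0:
        return float("inf")            # identical images
    return 10.0 * np.log10(255.0 ** 2 / mse)
```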
The rest of this paper is organized as follows: Sect 2 describes the original algorithm of ROA. Sect 3 introduces the Beta random restart strategy, random walk, fast predation, and elite learning strategy. Sect 4 introduces the theory and methods of the multi-threshold image segmentation problem. Sect 5 presents the experimental results and analysis of MSROA on CEC benchmark functions and its application effects in image segmentation tasks. Sect 6 summarizes this paper and discusses future research directions.
2 Remora optimization algorithm (ROA)
The Remora Optimization Algorithm (ROA) is a novel swarm intelligence optimization algorithm introduced by Jia et al. in recent years, inspired by the foraging behavior of remoras. Initially, the remora selects a host and attaches itself to it, obtaining the necessary food through this attachment. The remora then switches between different hosts, such as swordfish and whales, based on its experience. To further enhance its food acquisition, the remora engages in host foraging, attacking different hosts to optimize its resource collection. In this process, the SFO (Swordfish Foraging Optimization) algorithm is employed during the exploration phase, while the WOA (Whale Optimization Algorithm) is utilized in the exploitation phase.
2.1 Initialization
In the initialization phase, a population of random candidate solutions is generated within the defined search space, where each individual serves as an initial solution for the ROA. The initialization process is mathematically described by Eq (1) as follows:
where $X_i$ is the position of the $i$-th solution, lb and ub are the lower and upper bounds of the search space, respectively, and rand is a random number uniformly distributed between 0 and 1.
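A minimal sketch of this initialization, assuming the standard form $X_i = lb + rand \times (ub - lb)$ with rand drawn uniformly from (0, 1) independently for every dimension of every individual:

```python
import numpy as np

# Sketch of the random initialization described by Eq (1), assuming
# X_i = lb + rand * (ub - lb) applied elementwise.
def initialize_population(n, dim, lb, ub, rng=None):
    rng = rng if rng is not None else np.random.default_rng()
    return lb + rng.random((n, dim)) * (ub - lb)

pop = initialize_population(30, 10, lb=0.0, ub=255.0)
```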
2.2 Free travel (Exploration)
2.2.1 SFO strategy.
During the exploration phase, the SFO algorithm updates the position of the swordfish. Since the remora is in the exploration stage, its movement speed is faster. The remora selects the swordfish as its host, attaches to it, and follows its movement. The position update in this stage is governed by Eq (2):
where $t$ represents the current iteration number, $X_{Best}^t$ is the position of the current optimal individual, and $X_{rand}^t$ is the position of a randomly selected individual from the population.
2.2.2 Experience attack strategy.
When the remora is in the exploration stage, a small range of exploratory attacks will be carried out according to the host’s location and the previous generation of remoras. The updated formula of remora is shown in Formula (3):
where $X_{att}$ is the exploratory attack position of the remora, $X_{pre}$ is the position of the previous generation of the remora, and $randn$ is a random number drawn from the standard normal distribution $N(0, 1)$.
After a small number of attempts, Formula (4) determines whether the host needs to be switched. The formula for switching the host is shown in Formula (5):
where $H(i)$ determines which host the remora adsorbs to, with an initial value of 0 or 1 chosen at random. If $H(i)$ equals 0, the remora adsorbs to the whale; if $H(i)$ equals 1, the remora adsorbs to the swordfish. round denotes the rounding function, $f(X_i^t)$ represents the fitness value of $X_i^t$, and $f(X_{att})$ represents the fitness value of $X_{att}$.
2.3 Eat thoughtfully (Exploitation)
2.3.1 WOA strategy.
When the remora is in the development stage, it adsorbs onto the whale and moves with it. According to the WOA, the position update of the remora is computed using the following Formulas (6, 7, 8, 9):
where $D$ is the distance between the position of the best individual and the position of the current individual, $\alpha$ is a random number in $[a, 1]$, $a$ is a control parameter that linearly decreases from $-1$ to $-2$ over the course of iterations, and $T$ is the maximum number of iterations.
2.3.2 Host feeding.
When the remora is in the development stage, it will perform a small-range local search near the host. The position update during this stage is described by the following formulas:
where $A$ represents the distance between the previous position and the current position of the remora. To constrain the position updates, a constant factor $C = 0.1$ is used. To simulate the sizes of the host and the remora, $B$ and $V$ are introduced to represent the volume of the host and the volume of the remora, respectively.
The flowchart of ROA is presented in Fig 1.
3 Proposed method
3.1 Beta random restart strategy
Traditional restart strategies typically perform restarts after a fixed number of iterations. However, such strategies cannot promptly eliminate poor-performing individuals, and may also interrupt promising search trajectories of well-performing individuals. Additionally, an accumulation of poor individuals can lead the algorithm to become trapped in local optima.
To address these issues, this paper introduces a Beta random restart strategy inspired by biological evolutionary exploration. This strategy incorporates a prior property: when the number of iterations exceeds a randomly generated threshold $T_{rand}$, the population is sorted based on fitness values, and the bottom half of individuals (i.e., poor solutions) are re-initialized. This approach ensures that unproductive individuals are refreshed, while preserving high-quality individuals to continue exploitation, thereby improving both convergence accuracy and speed.
The Beta distribution is selected over other common distributions, such as Gaussian or Cauchy, due to its specific mathematical properties and biological relevance. Mathematically, the Beta distribution is naturally defined on the bounded interval [0,1]. This aligns perfectly with the normalized timeline of the optimization process (represented by the ratio of current to maximum iterations), whereas Gaussian or Cauchy distributions are defined on the infinite domain $(-\infty, +\infty)$, which would require artificial truncation or mapping to fit the search parameters.
Biologically, the Beta distribution serves as a “prior” probability model that simulates the uncertainty of the remora’s decision-making. Controlled by the random parameters α and β, the Beta distribution exhibits high shape flexibility—it can be symmetric, skewed left, or skewed right. This flexibility allows Tbeta to dynamically vary, simulating the unpredictable environmental pressures that cause a remora to abandon a host at different stages of foraging. Unlike a rigid Gaussian bell curve, the Beta distribution allows the algorithm to explore various restart timings effectively.
The mathematical formulation of the Beta random restart strategy is shown as follows:
where $T_{rand}$ represents the randomly generated number of iterations (the restart threshold), $X_i^{new}$ represents the re-initialized position of the $i$-th poor-performing individual, $Beta(\alpha, \beta)$ is a random number drawn from the Beta distribution, and $rand$ is a random number drawn from a uniform distribution in (0,1). Note that lb and ub represent the lower and upper bounds of the search space, respectively.
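This excerpt does not reproduce the strategy's formula (Formula 17), so the following is only a minimal sketch of the described mechanism under stated assumptions: the restart threshold is drawn via a Beta sample scaled to the iteration budget, the shape parameters are randomized, and the bottom half of the population (by fitness, assuming minimization) is re-initialized uniformly in [lb, ub]:

```python
import numpy as np

# Illustrative sketch of the Beta random restart idea; the exact
# formula is not reproduced in this excerpt, so the threshold and
# re-initialization below are assumptions consistent with the text.
def beta_restart(pop, fitness, lb, ub, t, T, rng):
    # Random shape parameters give the Beta "prior" its flexible,
    # possibly skewed shape on [0, 1] (assumed ranges).
    alpha, beta = 1 + 4 * rng.random(), 1 + 4 * rng.random()
    t_rand = rng.beta(alpha, beta) * T        # assumed restart threshold
    if t > t_rand:
        order = np.argsort(fitness)           # best first (minimization)
        worst = order[len(order) // 2:]       # bottom half of the population
        pop[worst] = lb + rng.random(pop[worst].shape) * (ub - lb)
    return pop
```

The key design point, per the text, is that only poor individuals are refreshed while promising trajectories are preserved.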
3.2 Random walk with fast predation strategy
By simulating the remora's experience attack and host switching, ROA accelerates convergence, but this also makes the algorithm prone to falling into local optima, degrading its convergence accuracy. To enhance the algorithm's exploration ability and improve its convergence speed, this paper simulates the host's undirected, random-walk search in the first half of the iterations and the host's fast predation behavior in the second half.
At the same time, in the fast predation stage, this paper designs a nonlinear driving factor, which effectively accelerates the convergence speed of the algorithm. The factor f exhibits an increasing non-linear trend, serving to dynamically adjust the search step size. Specifically, in the early phase of the search (when t is small), f takes a value close to 0, resulting in smaller step sizes that facilitate stable, fine-grained exploration. As the iteration t progresses towards the maximum T, f gradually increases to 1. During the fast predation phase (where t > 0.5T), f ranges from approximately 0.37 to 1. This increasing magnitude allows the remora to make larger, more aggressive movements towards the global optimal solution in the final stages, thereby ensuring rapid convergence. The mathematical formula of random walk and fast predation strategy is as follows:
where $X_{new}$ represents the position obtained by the random walk and fast predation strategy; $X_{r1}$ and $X_{r2}$ represent the positions of two randomly selected individuals; and $f$ is the nonlinear driving factor of the fast predation stage.
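The exact expression for the driving factor $f$ is not reproduced in this excerpt; one form consistent with the behavior described above (close to 0 early, roughly 0.37 at t = 0.5T, and reaching 1 at t = T) is the assumed sketch below:

```python
import math

# Assumed form of the nonlinear driving factor, chosen only to match
# the described values: f(0) is small, f(0.5T) is about 0.37 (e^-1),
# and f(T) = 1. The paper's actual formula may differ.
def driving_factor(t, T):
    return math.exp(2.0 * (t / T - 1.0))
```

Under this form, step sizes stay small during the stable early search and grow aggressively toward the end of the fast predation phase, matching the convergence behavior described in the text.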
3.3 Elite learning strategy
Learning from elites means absorbing their strengths, but it also involves recognizing their shortcomings in order to improve oneself. Inspired by this learning behavior, this paper proposes an elite learning strategy that simulates learning from both the strengths and the weaknesses of elites, strengthening the learning ability of the remora. Because elites are excellent, they necessarily possess many strengths; we therefore add a learning factor and a suppression factor to the strategy. The learning factor fluctuates with the number of iterations in the early stage and gradually increases later, strengthening learning from elites as the search deepens. The suppression factor decreases monotonically with the number of iterations, reducing the influence of the elites' shortcomings during learning. In the algorithm, the learning factor accelerates convergence, while the suppression factor reduces the negative impact of local optima. The mathematical formula of the elite learning strategy is as follows:
where $X_{forward}$ and $X_{reverse}$ represent the positions obtained by forward learning and reverse learning, respectively, and $sf$ is the learning factor.
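Formulas (20) and (21) are not reproduced in this excerpt, so the update rules below are only assumptions consistent with the description: forward learning moves toward the elite scaled by the learning factor, while reverse learning explores the opposition of the elite within [lb, ub] scaled by the decreasing suppression factor:

```python
import numpy as np

# Illustrative sketch only; the actual Formulas (20)-(21) may differ.
def elite_learning(x, x_best, lb, ub, t, T, rng):
    sf = rng.random() * (t / T)       # assumed learning factor (grows later)
    suppress = 1.0 - t / T            # assumed suppression factor (shrinks)
    x_forward = x + sf * (x_best - x)                    # learn strengths
    x_reverse = x + suppress * ((lb + ub) - x_best - x)  # avoid weaknesses
    return x_forward, x_reverse
```

The greedy selection between the two candidates and the current position is then handled by the main loop (Algorithm 1).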
3.4 The proposed MSROA
To further enhance the optimization performance of the ROA, this paper introduces a multi-strategy Remora Optimization Algorithm (MSROA). First, by simulating exploration in biological evolution, a Beta random restart strategy is proposed to improve the exploration capability of the algorithm and mitigate the impact of local optima. Next, through a combination of random walk and fast predation strategies, the movement and predation behavior of the host are further simulated, leading to an overall enhancement of the algorithm’s optimization performance. Finally, inspired by different learning behaviors, an elite learning strategy is introduced to improve convergence accuracy and accelerate the algorithm’s convergence speed. The pseudocode for the proposed MSROA is provided in Algorithm 1, and the corresponding flowchart is presented in Fig 2.
Algorithm 1 MSROA algorithm pseudo code.
1: Initialization parameters(Population size: n, maximum number of iterations: T, remora factor: C)
2: Initialize the population by Formula (1)
3: while t < T do
4: Pulls individuals beyond the search space back to the boundary
5: Calculate the fitness value of each individual and update Xbest
6: if t > Trand then
7: The position of the second half of the individual is updated by Formula (17).
8: end if
9: for i = 1 to n do
10: if H(i) = 1 then
11: Using Formula (2) to update the position of the individual.
12: else if H(i) = 0 then
13: Using Formula (6) to update the position of the individual.
14: end if
15: Empirical attack according to Formula (3)
16: if f(Xatt) < f(Xi) then
17: Switch the host through Formula (5) and update Xi
18: else
19: Host feeding by Formula (10)
20: end if
21: Calculate the position of Xnew by Formula (18).
22: if f(Xnew) < f(Xi) then
23: Xi = Xnew
24: end if
25: Calculate the positions of Xforward and Xreverse by Formulas (20) and (21).
26: if f(Xforward) < f(Xreverse) then
27: if f(Xforward) < f(Xi) then
28: Xi = Xforward
29: end if
30: else if f(Xreverse) < f(Xi) then
31: Xi = Xreverse
32: end if
33: end for
34: t = t + 1
35: end while
36: return Xbest
3.5 Analysing algorithm complexity
The complexity of MSROA primarily arises from the initialization process, location updates, and fitness evaluations. The complexity of the initialization is $O(N \times D)$, where $N$ represents the population size and $D$ is the dimensionality of the search space. The complexity of location updates is $O(MaxFEs \times D)$, where $MaxFEs$ refers to the maximum number of fitness evaluations. The complexity of fitness evaluation is $O(MaxFEs \times C)$, where $C$ is the time cost of evaluating a single solution.
Regarding the proposed learning strategy, the algorithm evaluates both the forward and reverse learning positions. While this mechanism effectively doubles the number of function evaluations for the individuals involved in this specific phase within a single iteration, it remains within an acceptable computational range. In the asymptotic complexity analysis, the constant factor introduced by these additional evaluations (e.g., a factor of 2) does not alter the order of magnitude. Furthermore, the total computational cost is bounded by the maximum number of function evaluations (MaxFEs). Therefore, despite the increased evaluations per iteration, the overall algorithmic complexity class remains unchanged. Thus, the total time complexity of MSROA is $O(N \times D + MaxFEs \times (D + C))$.
Comparing this with the traditional ROA, the time complexity has not increased, indicating that the additional strategies in MSROA do not significantly impact the overall computational cost.
4 Image segmentation theory and methods
Image segmentation based on thresholding aims to determine the optimal threshold using a specific method, and then compare the image pixels to distinguish the target from the background. Threshold-based image segmentation methods can be categorized into two main types: single-threshold segmentation and multi-threshold segmentation. In single-threshold segmentation, the image histogram is divided into two categories—target and background—based on a single threshold. In contrast, multi-threshold segmentation divides the image into multiple categories, aiming to maximize the inter-class variance between them. When the image is complex and contains multiple objects, the performance of the single-threshold method tends to be insufficient, which is why multi-threshold image segmentation has been widely explored.
In the context of color image segmentation addressed in this paper, it is crucial to clarify the processing strategy for color channels. Unlike methods that convert color images into a grayscale version, which may lead to the loss of potential color information, this study calculates the thresholds for the Red (R), Green (G), and Blue (B) channels individually. Specifically, the histogram for each RGB component is analyzed separately, and the optimal thresholds are determined by maximizing the objective function (Otsu or Kapur) for each channel independently. This component-wise approach ensures that the segmentation process fully leverages the color distribution details of the original image.
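The component-wise strategy described above can be sketched as follows; here `find_thresholds` is a hypothetical placeholder standing in for the optimizer (MSROA maximizing Otsu or Kapur on the channel histogram):

```python
import numpy as np

# Sketch of per-channel color segmentation: each RGB channel's
# histogram is thresholded independently. `find_thresholds` is a
# hypothetical stand-in for the metaheuristic threshold search.
def segment_color(image, k, find_thresholds):
    labels = np.empty(image.shape, dtype=np.int64)
    for c in range(3):                          # R, G, B independently
        hist = np.bincount(image[..., c].ravel(), minlength=256)
        thresholds = find_thresholds(hist, k)   # k thresholds per channel
        labels[..., c] = np.digitize(image[..., c], thresholds)
    return labels
```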
Several methods exist for multi-threshold image segmentation. In the following section, two such methods—the Otsu method and the Kapur entropy method—are discussed.
4.1 Otsu method (maximum between-class variance method)
The Otsu method was proposed by scholar Nobuyuki Otsu in 1979 [19]. The main idea behind this method is to divide the image’s histogram into different categories using multiple thresholds, calculate the inter-class variance for each category, and then sum these variances. The Otsu method asserts that the optimal segmentation threshold corresponds to the point where the sum of the inter-class variances is maximized, which results in the best segmentation effect. For color images, this variance maximization is performed separately for each of the R, G, and B channels. The specific formula for the Otsu method is as follows:
where:
In the above formula, $\mu_T$ represents the average grayscale level of the entire image.
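As a reference sketch of the objective (the threshold search itself is left to the metaheuristic), the sum of between-class variances $\sigma^2 = \sum_j \omega_j (\mu_j - \mu_T)^2$ for a candidate threshold set can be computed as:

```python
import numpy as np

# Multi-threshold Otsu objective: the optimizer maximizes the sum of
# between-class variances over the classes induced by the thresholds.
def otsu_objective(hist, thresholds):
    p = hist / hist.sum()                 # gray-level probabilities
    levels = np.arange(len(hist))
    mu_total = (p * levels).sum()         # global mean gray level
    edges = [0, *sorted(thresholds), len(hist)]
    sigma2 = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        w = p[lo:hi].sum()                # class probability (omega_j)
        if w > 0:
            mu = (p[lo:hi] * levels[lo:hi]).sum() / w   # class mean
            sigma2 += w * (mu - mu_total) ** 2
    return sigma2
```

For a color image this objective is evaluated on each RGB channel's histogram separately, as described above.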
4.2 Kapur entropy method (maximum entropy method)
The Kapur entropy method was proposed by Kapur et al. in 1985 [18]. This method divides the histogram of an image into different categories using multiple thresholds and seeks to maximize the sum of the entropies of these categories. Assuming the image, denoted as $I$, consists of $N$ pixels, the threshold set $\{t_1, t_2, \ldots, t_n\}$ divides the image into $n+1$ parts, whose probabilities are denoted as $P_0, P_1, \ldots, P_n$.
The specific formula for the Kapur entropy method is as follows:
In the above formula, $H_0, H_1, \ldots, H_n$ represent the entropies of the different classes, and $p_i$ represents the probability of the $i$-th gray level, given in Formula (27), where $h_i$ denotes the number of pixels at the $i$-th gray level.
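A reference sketch of the Kapur objective (again, only the objective; the thresholds are found by the metaheuristic): the thresholds split the histogram into classes, and the sum of the class entropies $H_j = -\sum_i (p_i/\omega_j)\ln(p_i/\omega_j)$ is maximized:

```python
import numpy as np

# Kapur entropy objective for a candidate threshold set; the
# optimizer maximizes the sum of the per-class entropies.
def kapur_objective(hist, thresholds):
    p = hist / hist.sum()                 # gray-level probabilities
    edges = [0, *sorted(thresholds), len(hist)]
    total = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        w = p[lo:hi].sum()                # class probability
        if w > 0:
            q = p[lo:hi][p[lo:hi] > 0] / w    # in-class distribution
            total += -(q * np.log(q)).sum()   # class entropy
    return total
```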
5 Analysis and discussion of experimental results
In order to comprehensively evaluate the performance of MSROA, this section tests its optimization performance on two benchmark suites, CEC 2017 and CEC 2020. To highlight the superiority of MSROA, the following well-performing algorithms are chosen for comparison: the Remora Optimization Algorithm (ROA) [50], Sand Cat Swarm Optimization (SCSO) [54], Whale Optimization Algorithm (WOA) [36], Genetic Algorithm (GA) [30], Exponential-Trigonometric Optimization (ETO) [55], Arithmetic Optimization Algorithm (AOA) [25], Prairie Dog Optimization Algorithm (PDO) [56], Particle Swarm Optimization (PSO) [33], and Crayfish Optimization Algorithm (COA) [57]. The parameter settings for these algorithms are summarized in Table 1.
All experiments were conducted on a 64-bit Windows 11 operating system, using an Intel(R) Core(TM) i7-11700 processor (11th generation) with 16 GB of RAM. The simulations were executed using MATLAB R2025a. The population size N is 30, and the maximum evaluation count MaxFEs is 10000×dim.
5.1 Experimental results and analysis of CEC 2017 and CEC 2020 test functions
5.1.1 Detailed description of CEC 2017 and CEC 2020 test functions.
CEC 2017 consists of 29 test functions, categorized as follows: CEC1-CEC3 are unimodal functions, CEC4-CEC10 are multimodal functions, CEC11-CEC20 are hybrid functions, and CEC21-CEC30 are composition functions. A detailed description of the CEC 2017 functions is provided in Table 2.
CEC 2020 includes a total of 10 test functions. Functions CEC1-CEC4 are the shifted and rotated Bent Cigar function, the shifted and rotated Schwefel function, the shifted and rotated Lunacek bi-Rastrigin function, and the expanded Rosenbrock's plus Griewangk function. Functions CEC5-CEC7 are hybrid functions, while CEC8-CEC10 are composition functions. A detailed description of the CEC 2020 functions is shown in Table 3.
5.1.2 Experimental results and analysis of CEC2017 and CEC2020 test functions.
This section further evaluates the performance of MSROA using two benchmark test suites: CEC2017 and CEC2020. The experimental results for CEC2017 and CEC2020 are presented in Tables 4 and 5, respectively, where the optimal values are rounded for clarity. The convergence curves for the two function sets are illustrated in Figs 3 and 4.
Overall, for CEC2017, MSROA demonstrates the best performance among all compared algorithms. For most test functions, such as F1–F3, F8–F10, and F12–F16, its MIN and MEAN values are significantly lower than those of other algorithms. For instance, the MIN value for F1 is only 1.75 × 103, which is considerably better than that of ROA (2.36 × 109). Moreover, MSROA exhibits relatively small STD values, indicating strong stability and low result variability. In contrast, algorithms such as GA, ROA, and PDO perform poorly. GA yields extremely high values in functions like F10, F11, and F29 (MEAN of 1.48 × 104 for F10), along with excessively large STD values, reflecting weak robustness. Similarly, ROA and PDO result in high MIN and MEAN values across many functions, demonstrating subpar optimization capability. The SCSO, WOA, ETO, and AOA algorithms perform moderately. While they achieve good results in certain functions (F18 and F22), their overall performance is less stable and efficient than that of MSROA. As shown in Figs 3 and 4, MSROA consistently finds the optimal solution across most test functions and converges more rapidly than the compared algorithms. For unimodal functions such as F1 and F2, MSROA exhibits notably faster convergence, requiring fewer function evaluations to reach the optimum. For multimodal functions like F4, F6, and F8, MSROA demonstrates the ability to escape local optima during the middle and late stages of evolution. In the case of composite functions, MSROA also exhibits superior global search capability compared to other methods. This performance can be attributed to the restart mechanism embedded in MSROA. In functions such as F23, F27, and F29, this mechanism enables the algorithm to escape from local optima and locate the global optimum effectively.
Table 5 compares the performance of several optimization algorithms, including MSROA and ROA, on functions F1–F10 from the CEC2020 test set. The evaluation includes three indicators: the minimum value (MIN), which reflects the best convergence achieved; the mean value (MEAN), indicating overall stability; and the standard deviation (STD), which measures the dispersion of results. MSROA exhibits outstanding performance, achieving lower optimal values in several functions (F1 and F2), while maintaining relatively controlled MEAN and STD values. This demonstrates both high convergence accuracy and stable optimization behavior. Although ROA and SCSO occasionally attain competitive optimal values, their corresponding MEAN and STD are often large and unstable, indicating significant result fluctuations and limiting their practical value in engineering applications. Special attention should be paid to function F4, where all algorithms exhibit a standard deviation of 0. We confirm that this result is not due to decimal truncation. Instead, it indicates that the topological structure of function F4 is relatively simple for the algorithms tested in this study. All participating algorithms were able to locate the theoretical global optimum in every independent run without exception. Consequently, while F4 confirms that all algorithms possess basic convergence capabilities, it does not provide a challenge sufficient to differentiate their performance.
Fig 5 illustrates the convergence performance of eight algorithms, including MSROA and ROA. Overall, MSROA demonstrates the best performance across most test functions: its convergence curves are consistently lower, exhibit rapid descent, and stabilize quickly as the number of function evaluations increases, indicating fast convergence and high final accuracy. Specifically, in functions F1, F2, and F3, MSROA significantly outperforms the other algorithms. Its advantages become even more pronounced in more complex functions such as F5 and F7. In contrast, ROA and SCSO show relatively poor performance, with convergence curves generally positioned higher, exhibiting slower descent and higher final values. This is especially evident in F1, F5, and F7, where their convergence efficiency is noticeably lower than that of the other algorithms. Algorithms such as WOA and GA show moderate performance, with curve positions and convergence rates falling between those of MSROA and ROA. Although ETO, AOA, and PDO exhibit slight fluctuations, their overall performance still lags behind MSROA. For function F4, all algorithms present similar convergence behavior, confirming earlier observations that F4 is a relatively simple function. As such, it serves as a benchmark for evaluating the baseline convergence capabilities of algorithms.
In summary, MSROA exhibits superior convergence speed and accuracy across most functions, particularly in complex scenarios, while ROA and SCSO show weaker performance. Function F4 remains a useful reference for basic convergence verification.
5.1.3 Analysis of Wilcoxon rank sum test results.
Table 6 presents the results of the Wilcoxon rank-sum test comparing MSROA with seven other algorithms (including ROA and SCSO) on the CEC2017 test suite across functions F1–F29. To ensure the statistical validity of the comparison, the test is performed on the data obtained from 30 independent runs of each algorithm on every function. The p-value is used to evaluate the statistical significance of performance differences, where p < 0.05 indicates a significant difference. The last row, denoted '+/−/=', reports the number of functions on which MSROA performs significantly better, significantly worse, or shows no significant difference, respectively. Overall, MSROA demonstrates a clear advantage. Compared to ROA, GA, and PDO, the results are '29/0/0', indicating that MSROA significantly outperforms these algorithms on all 29 functions. Against WOA, the result is '28/0/1', with only one function showing no significant difference, highlighting MSROA's nearly comprehensive superiority. For more competitive algorithms, the differences are more nuanced. Compared to SCSO, MSROA achieves '19/0/10', indicating significant improvements on 19 functions and no significant difference on the remaining 10. Similar trends are observed for ETO ('20/0/9') and AOA ('25/0/4'). In most cases, p-values are far below 0.05, reinforcing the reliability of the observed superiority. Only a few cases (SCSO on F4 and F5) show p-values of 0.05 or above, implying comparable performance on these specific functions.
The CEC2020 results in Table 7 further support MSROA's advantage. Most p-values are significantly below 0.05; for instance, the p-value for MSROA vs. ROA on F1 is 1.73 × 10−6, indicating a substantial difference. MSROA achieves a '10/0/0' result against GA, outperforming it significantly on all 10 functions. Against the six other algorithms, including ROA and SCSO, MSROA obtains '9/0/1', with the only exception being F4 (p = 1.00 for ROA), again confirming that this function is relatively easy and insufficient for distinguishing algorithmic performance.
In summary, the Wilcoxon rank-sum test confirms that MSROA delivers statistically significant improvements over most baseline algorithms on both CEC2017 and CEC2020, especially over ROA, GA, and PDO, and is only indistinguishable from some algorithms on a limited number of simple functions.
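For readers who wish to reproduce this kind of significance analysis, the rank-sum comparison can be sketched in pure Python. The helper `rank_sum_test` below is illustrative (not the paper's code); it uses the large-sample normal approximation without continuity correction, which is appropriate for the 30 runs per algorithm used here, whereas a statistics package would also offer the exact-distribution variant.

```python
import math
from itertools import chain

def rank_sum_test(a, b):
    """Two-sided Wilcoxon rank-sum test via the normal approximation.

    Returns (W, p), where W is the rank sum of sample `a` in the pooled
    ranking. Tied values receive average ranks. Suitable for samples of
    roughly 20+ observations each, e.g. 30 independent runs per algorithm.
    """
    pooled = sorted(chain(((v, 0) for v in a), ((v, 1) for v in b)))
    n1, n2 = len(a), len(b)
    ranks = [0.0] * (n1 + n2)
    i = 0
    while i < len(pooled):
        # find the run of tied values and assign them the average rank
        j = i
        while j + 1 < len(pooled) and pooled[j + 1][0] == pooled[i][0]:
            j += 1
        avg = (i + j) / 2 + 1          # ranks are 1-based
        for k in range(i, j + 1):
            ranks[k] = avg
        i = j + 1
    w = sum(r for r, (_, grp) in zip(ranks, pooled) if grp == 0)
    mu = n1 * (n1 + n2 + 1) / 2        # mean of W under H0
    sigma = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12)
    z = (w - mu) / sigma
    p = math.erfc(abs(z) / math.sqrt(2))   # two-sided p-value
    return w, p
```

Feeding it the 30 final fitness values of two algorithms on one function yields the per-function p-value; counting how often p < 0.05 across functions reproduces the '+/−/=' summary row.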
5.2 Multi-threshold image segmentation experiment
5.2.1 Dataset and experimental parameter settings.
This section evaluates the optimization performance of the proposed MSROA algorithm in comparison with seven existing image segmentation algorithms. The experimental datasets consist of 12 images selected from the Berkeley Segmentation Dataset and Benchmark 500 (BSDS500) [58]. Fig 6 presents the original images along with the corresponding histograms of the red (R), green (G), and blue (B) color channels. To ensure a fair comparison, the population size N is set to 30, the threshold count K is assigned varying values corresponding to the problem dimension (15, 20, 25, and 30), and the maximum number of function evaluations is fixed at the same value for every algorithm.
5.2.2 Statistical results and analysis of image segmentation.
Table 8 presents the performance of eight algorithms across 12 images and four threshold levels (K = 15, 20, 25, and 30), using Otsu's method as the fitness criterion (where a higher value indicates better segmentation quality). Overall, MSROA consistently achieves superior performance. Across all image-threshold combinations, MSROA yields high fitness values, often outperforming or matching those of competitive algorithms such as ROA and WOA. For instance, when K = 15 for Image 1, MSROA achieves 5322.64, slightly outperforming ROA (5321.04) and WOA (5321.99). Similarly, for Image 6 at K = 30, MSROA attains 5720.92, surpassing ROA (5719.62) and WOA (5720.21). Compared to lower-performing algorithms such as AOA and PDO, MSROA demonstrates significant advantages. For example, at K = 15 for Image 10, MSROA reaches 3982.70, considerably higher than AOA (3966.57) and PDO (3961.21); for Image 12 at K = 20, MSROA records 2799.22, outperforming AOA (2787.31) and PDO (2784.61). Moreover, as the threshold K increases, MSROA's fitness value exhibits a steady upward trend, with consistent performance advantages across all threshold levels, highlighting its robustness under varying parameter settings. In conclusion, when Otsu's criterion is employed as the objective function, MSROA demonstrates leading performance across most test scenarios, validating its effectiveness and reliability in image segmentation tasks.
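The multilevel Otsu objective behind these fitness values can be sketched per color channel as follows. The helper `otsu_fitness` is an illustrative sketch (not the paper's implementation): it scores a candidate threshold vector by the between-class variance of the gray-level classes it induces, which is the quantity the optimizer maximizes.

```python
def otsu_fitness(hist, thresholds):
    """Between-class variance for a set of thresholds (higher is better).

    `hist` is a 256-bin grayscale histogram of one color channel;
    `thresholds` is a list of K integer cut points in (0, 256).
    Illustrative sketch of the multilevel Otsu objective.
    """
    total = sum(hist)
    probs = [h / total for h in hist]
    # total mean gray level of the channel
    mu_total = sum(i * p for i, p in enumerate(probs))
    bounds = [0] + sorted(thresholds) + [256]
    variance = 0.0
    for lo, hi in zip(bounds[:-1], bounds[1:]):
        omega = sum(probs[lo:hi])              # class probability mass
        if omega == 0:
            continue                           # empty class contributes 0
        mu = sum(i * probs[i] for i in range(lo, hi)) / omega
        variance += omega * (mu - mu_total) ** 2
    return variance
```

In the segmentation experiments each candidate solution is a K-dimensional threshold vector, and this function (evaluated per R, G, B channel) plays the role of the fitness function driving MSROA's search.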
Table 9 presents the performance results of MSROA and seven other algorithms across 12 images and four threshold levels (K = 15, 20, 25, and 30), with Kapur's entropy serving as the fitness function. MSROA demonstrates consistently superior performance. Across all image-threshold combinations, MSROA frequently ranks among the top performers, often achieving the highest or second-highest fitness value. For example, when K = 30 for Image 9, MSROA achieves a value of 63.7909, significantly outperforming ROA (54.8521), SCSO (60.9884), and others. Similarly, at K = 30 for Image 3, MSROA reaches 62.6638, surpassing WOA (61.5811) and PSO (61.3898). Compared to lower-performing algorithms such as AOA and PDO, MSROA shows clear advantages. For instance, at K = 15 for Image 5, MSROA achieves 40.7560, far exceeding AOA (36.3560) and PDO (35.5923); at K = 25 for Image 10, MSROA scores 54.8698, nearly 8 units higher than AOA (46.5231) and PDO (46.7252). Moreover, as the threshold K increases, MSROA's Kapur entropy values exhibit a steady upward trend. The growth and performance advantage remain consistent across all thresholds, reflecting MSROA's adaptability to different parameter settings. In summary, when using Kapur's entropy as the fitness function, MSROA consistently outperforms most peer algorithms, demonstrating stable and outstanding performance across all evaluation scenarios.
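Kapur's entropy criterion admits a similarly compact sketch. The helper `kapur_fitness` below is again hypothetical, not the paper's code: it sums the Shannon entropies of the gray-level classes induced by the thresholds, so a higher value indicates a more informative partition.

```python
import math

def kapur_fitness(hist, thresholds):
    """Kapur's entropy for a multilevel threshold set (higher is better).

    `hist` is a 256-bin histogram of one color channel; `thresholds`
    is a list of K integer cut points. Illustrative sketch only.
    """
    total = sum(hist)
    probs = [h / total for h in hist]
    bounds = [0] + sorted(thresholds) + [256]
    entropy = 0.0
    for lo, hi in zip(bounds[:-1], bounds[1:]):
        omega = sum(probs[lo:hi])          # class probability mass
        if omega <= 0.0:
            continue                       # empty class contributes 0
        for p in probs[lo:hi]:
            if p > 0.0:
                q = p / omega              # within-class probability
                entropy -= q * math.log(q)
    return entropy
```

Swapping `otsu_fitness` for `kapur_fitness` as the objective is the only change needed to reproduce the second set of experiments; the search procedure itself is unchanged.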
5.2.3 Analysis of image segmentation convergence curve.
Figs 7 and 8 present the convergence curves of eight algorithms, including MSROA, ROA, and SCSO, applied to 12 images under four threshold settings (K = 15, 20, 25, and 30), using Otsu's method as the fitness function. In these curves, higher values indicate better performance. Among all algorithms, MSROA consistently maintains a superior position. Its convergence curves rise rapidly with the number of function evaluations and stabilize at a high level, indicating fast convergence speed and strong final performance. For example, at K = 15, the MSROA curve is notably higher than those of the other algorithms across all images. This performance advantage remains evident for K = 20, K = 25, and K = 30, and becomes particularly pronounced in higher-threshold scenarios. In contrast, AOA and PDO show relatively weak performance, with convergence curves remaining at lower levels throughout. The curves of ROA, SCSO, and similar algorithms fall between those of MSROA and the lower-performing methods. As the threshold K increases, most algorithms exhibit an upward trend in their convergence curves. However, MSROA demonstrates the most stable and consistent improvement, reflecting its strong adaptability across different segmentation complexities.
Figs 9 and 10 illustrate the convergence curves of eight algorithms on 12 images under four threshold settings (K = 15, 20, 25, and 30), using Kapur's entropy as the fitness function. The higher the curve, the better the algorithm's performance. Among all methods, MSROA consistently achieves superior performance. Its convergence curves rise rapidly and stabilize early as the number of function evaluations (FEs) increases, demonstrating significant advantages in both convergence speed and final solution quality. When K = 15, the MSROA curves are markedly higher than those of other algorithms across all images. This performance advantage persists as the threshold increases to K = 20, 25, and 30, particularly under high-threshold settings. In contrast, AOA and PDO generally show weak performance, with convergence curves that remain at lower levels throughout. Algorithms such as ROA and SCSO demonstrate moderate performance, with curve positions and convergence rates lying between those of MSROA and the less competitive methods.
Overall, MSROA outperforms the competing methods in both convergence speed and final optimization results across all scenarios.
5.3 Supplementary evaluation indicators
The performance of each algorithm is assessed using a set of quantitative metrics: some evaluate the fitness function during the optimization process, while others assess the quality of the final segmented images. The former are summarized by the arithmetic mean and standard deviation of the fitness values. The latter are evaluated using three widely recognized image quality metrics: the Feature Similarity Index Measure (FSIM), the Peak Signal-to-Noise Ratio (PSNR), and the Structural Similarity Index Measure (SSIM).
FSIM evaluates the perceptual quality of a segmented image by comparing the similarity of structural features between the segmented image and the original image. The specific formulation of FSIM is as follows:

FSIM = Σx∈Ω SL(x)·PCm(x) / Σx∈Ω PCm(x)

Among them, Ω denotes the entire image pixel domain, and SL(x) represents the similarity measure of low-level image features at pixel location x, defined in Eq (29) as

SL(x) = [SPC(x)]^α · [SG(x)]^β

PCm(x) denotes the phase congruency map at x, defined as PCm(x) = max(PC1(x), PC2(x)); it represents the stronger of the phase congruency features derived from the two regions.

SPC(x) denotes the similarity measure based on phase congruency, while SG(x) represents the gradient magnitude similarity between two regions, G1(x) and G2(x). The parameters α, β, T1, and T2 are predefined constants. In this study, to ensure the reproducibility of the experiments and following the standard settings in the literature, the parameters are set to α = β = 1, which assigns equal importance to phase congruency and gradient magnitude features. Furthermore, the stability constants are set to T1 = 0.85 and T2 = 160.
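Given precomputed phase-congruency and gradient-magnitude maps, the combination step of FSIM reduces to a few lines. The helper `fsim` below is a hypothetical sketch using the stated T1, T2, α, β settings; computing the maps themselves (PC1, PC2 via log-Gabor filtering, G1, G2 via a gradient operator such as Scharr) is outside its scope.

```python
def fsim(pc1, pc2, g1, g2, alpha=1.0, beta=1.0, t1=0.85, t2=160.0):
    """FSIM from precomputed phase-congruency maps (pc1, pc2) and
    gradient-magnitude maps (g1, g2), each a flat list over the pixel
    domain. Illustrative sketch of the combination step only.
    """
    num = den = 0.0
    for p1, p2, q1, q2 in zip(pc1, pc2, g1, g2):
        # per-pixel similarity of phase congruency and gradient magnitude
        s_pc = (2 * p1 * p2 + t1) / (p1 ** 2 + p2 ** 2 + t1)
        s_g = (2 * q1 * q2 + t2) / (q1 ** 2 + q2 ** 2 + t2)
        s_l = (s_pc ** alpha) * (s_g ** beta)
        pc_m = max(p1, p2)                 # weight by the stronger feature
        num += s_l * pc_m
        den += pc_m
    return num / den
```

Identical feature maps give an FSIM of exactly 1, which is a convenient sanity check on any implementation of the metric.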
PSNR is an indicator used to quantify the similarity between the segmented image and the original image. It is defined as:

PSNR = 10 · log10(255² / MSE), MSE = (1 / (M·N)) Σi Σj [I(i, j) − K(i, j)]²

Where MSE denotes the mean squared error, I(i, j) represents the grayscale value at the ith row and jth column of the original image, K(i, j) denotes the grayscale value at the corresponding position in the segmented image, and M and N are the number of rows and columns in the image matrix, respectively.
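As a concrete check of these definitions, the hypothetical helper `psnr` below computes the metric from two equal-size grayscale images represented as nested lists, assuming an 8-bit peak value of 255.

```python
import math

def psnr(original, segmented, peak=255.0):
    """PSNR between two equal-size grayscale images given as nested lists.

    PSNR = 10 * log10(peak^2 / MSE); identical images give infinity.
    """
    m, n = len(original), len(original[0])
    mse = sum((original[i][j] - segmented[i][j]) ** 2
              for i in range(m) for j in range(n)) / (m * n)
    if mse == 0:
        return float("inf")
    return 10 * math.log10(peak ** 2 / mse)
```

For color images the metric is typically computed per channel (or on the luminance channel) and averaged; higher values indicate lower distortion in the segmented result.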
SSIM evaluates the similarity between two images based on luminance, contrast, and structural information. Its formulation is given by:

SSIM(x, y) = (2μx μy + c1)(2σxy + c2) / ((μx² + μy² + c1)(σx² + σy² + c2))

Let x and y denote two images, where μx and μy represent their respective means, σx² and σy² denote their variances, and σxy is the covariance between x and y. Constants c1 and c2 are included for stability.
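A direct transcription of this formula is sketched below. The helper `ssim` is illustrative and assumes a single global window over flattened pixel lists with the common constants c1 = (0.01·peak)² and c2 = (0.03·peak)²; practical implementations instead average the index over local sliding windows.

```python
def ssim(x, y, peak=255.0):
    """Global (single-window) SSIM between two flattened grayscale images.

    Minimal sketch: real SSIM averages this index over local windows.
    """
    n = len(x)
    mu_x = sum(x) / n
    mu_y = sum(y) / n
    var_x = sum((v - mu_x) ** 2 for v in x) / n
    var_y = sum((v - mu_y) ** 2 for v in y) / n
    cov = sum((a - mu_x) * (b - mu_y) for a, b in zip(x, y)) / n
    c1 = (0.01 * peak) ** 2   # stabilizes the luminance term
    c2 = (0.03 * peak) ** 2   # stabilizes the contrast/structure term
    return ((2 * mu_x * mu_y + c1) * (2 * cov + c2)) / (
        (mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2))
```

An image compared with itself scores exactly 1, and any structural disagreement pushes the index below 1, which matches how the tables in the next subsection are read.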
5.3.1 Analysis of supplementary evaluation indicators.
Tables 10, 11 and 12 present the performance of the MSROA algorithm using Otsu as the cost function, evaluated in terms of FSIM, PSNR, and SSIM, respectively. Similarly, Tables 13, 14 and 15 report the corresponding results when Kapur entropy is employed as the cost function. Compared with other algorithms such as ROA, WOA, and AOA, MSROA consistently achieves superior numerical performance across all metrics and exhibits a stable, monotonic improvement as the threshold increases. This behavior highlights its stronger adaptability and segmentation efficiency.
Across all metrics—FSIM (reflecting phase congruency), PSNR (quantifying grayscale fidelity), and SSIM (assessing structural similarity)—MSROA demonstrates robust segmentation performance under different fitness functions. Its overall effectiveness significantly exceeds that of the comparative algorithms.
While MSROA outperforms in most scenarios, some competing algorithms also demonstrate competitive performance under specific conditions. For instance, the SCSO algorithm yields results close to MSROA in certain SSIM evaluations using the Otsu method. In Image 2, at K = 30, SCSO achieves an SSIM value of 0.9743, only 0.0037 below the 0.9780 obtained by MSROA. However, SCSO's performance across all images and thresholds lacks the stability observed in MSROA.
Similarly, the PSO algorithm achieves competitive results in specific FSIM evaluations under the Kapur entropy criterion. For example, for Image 10 at K = 15, PSO attains an FSIM of 0.9360, slightly higher than MSROA’s 0.9329. Nevertheless, PSO’s performance is less consistent across different thresholds and metrics.
In summary, due to its stable and high numerical performance across diverse cost functions and evaluation criteria, MSROA exhibits a significant advantage in image segmentation and consistently outperforms most other algorithms in comprehensive evaluations.
6 Conclusion
ROA, a meta-heuristic method with notable search performance, suffers from drawbacks such as slow convergence and limited accuracy due to its reliance on random host selection. To address these limitations, this study introduces the Multi-Strategy Remora Optimization Algorithm (MSROA). MSROA integrates a Beta random restart strategy with prior-guided properties, a random walk and fast predation mechanism, and an elite learning strategy to enhance both convergence speed and solution accuracy. To comprehensively evaluate its performance, MSROA is first benchmarked against standard test suites from CEC2017 and CEC2020. Comparative analysis with seven well-established algorithms confirms the strong optimization capabilities of MSROA. In real-world applications, MSROA is applied to multi-threshold image segmentation tasks. By using Otsu's method and Kapur's entropy as objective functions, it effectively identifies optimal threshold combinations for color image segmentation. In this context, MSROA not only achieves the highest values for the chosen objective functions but also delivers superior segmentation results. Quantitative evaluations demonstrate that when assessed using Peak Signal-to-Noise Ratio (PSNR), Feature Similarity Index Measure (FSIM), and Structural Similarity Index Measure (SSIM), MSROA consistently achieves the highest average scores compared to other algorithms for both Otsu and Kapur methods. The data confirms that MSROA yields segmentation results with higher structural fidelity and lower distortion, particularly at higher threshold levels.
Future work will focus on further enhancing MSROA by incorporating concepts such as chaos theory, stochastic processes, game theory, and biological population competition models. These improvements aim to develop a more efficient and general-purpose optimization algorithm. Additionally, MSROA will be extended to more complex image processing tasks and broader optimization domains, offering a robust solution for a wide range of practical problems.
Acknowledgments
The authors would like to thank the support of the Fujian Key Laboratory of Agricultural IoT Applications and the IoT Engineering Research Center for Universities in Fujian Province.
References
- 1. Yanowitz SD, Bruckstein AM. A new method for image segmentation. Computer Vision, Graphics, and Image Processing. 1989;46(1):82–95.
- 2. Singh L, Janghel RR, Sahu SP. An empirical review on evaluating the impact of image segmentation on the classification performance for skin lesion detection. IETE Technical Review. 2022;40(2):190–201.
- 3. Chen C, Wang C, Liu B, He C, Cong L, Wan S. Edge intelligence empowered vehicle detection and image segmentation for autonomous vehicles. IEEE Trans Intell Transport Syst. 2023;24(11):13023–34.
- 4. Jianming H, Qiang M, Qi W, Jiajie Z, Yi Z. Traffic congestion identification based on image processing. IET Intell Transp Syst. 2012;6(2):153–60.
- 5. Willekens A, Wyffels F, Pieters JG, Cool SR. Integrating plant growth monitoring in a precision intrarow hoeing tool through canopy cover segmentation. Neural Comput & Applic. 2025;37(24):20139–60.
- 6. Bhadane G, Sharma S, Nerkar VB. Early pest identification in agricultural crops using image processing techniques. International Journal of Electrical, Electronics and Computer Engineering. 2013;2(2):77–82.
- 7. Albanese A, Nardello M, Fiacco G, Brunelli D. Tiny machine learning for high accuracy product quality inspection. IEEE Sensors J. 2023;23(2):1575–83.
- 8. Massaro A, Panarese A, Dipierro G, Cannella E, Galiano A, Vitti V. Image processing segmentation applied on defect estimation in production processes. In: 2020 IEEE International Workshop on Metrology for Industry 4.0 & IoT. IEEE; 2020. p. 565–9.
- 9. Bhadoria P, Agrawal S, Pandey R. Image segmentation techniques for remote sensing satellite images. IOP Conf Ser: Mater Sci Eng. 2020;993(1):012050.
- 10. Rangayya, Virupakshappa, Patil N. An enhanced segmentation technique and improved support vector machine classifier for facial image recognition. IJICC. 2021;15(2):302–17.
- 11. Baroutaji A, Wilberforce T, Ramadan M, Olabi AG. Comprehensive investigation on hydrogen and fuel cell technology in the aviation and aerospace sectors. Renewable and Sustainable Energy Reviews. 2019;106:31–40.
- 12. Deng Y, Manjunath BS, Shin H. Color image segmentation. In: Proceedings of the 1999 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (Cat. No PR00149). 1999. p. 446–51. https://doi.org/10.1109/cvpr.1999.784719
- 13. Zheng L, Pan Q, Li G, Liang J. Improvement of grayscale image segmentation based on PSO algorithm. In: 2009 Fourth International Conference on Computer Sciences and Convergence Information Technology. IEEE; 2009. p. 442–6. https://doi.org/10.1109/iccit.2009.68
- 14. Bhargavi K, Jyothi S. A survey on threshold based segmentation technique in image processing. International Journal of Innovative Research and Development. 2014;3(12):234–9.
- 15. Abubakar FM. A study of region-based and contourbased image segmentation. SIPIJ. 2012;3(6):15–22.
- 16. Pratondo A, Chui C-K, Ong S-H. Robust edge-stop functions for edge-based active contour models in medical image segmentation. IEEE Signal Process Lett. 2016;23(2):222–6.
- 17. Wu X. Review of theory and methods of image segmentation. Agricultural Biotechnology. 2018;7(4):136–41.
- 18. Virk IS, Maini R. Medical image segmentation based on fuzzy 2-partition Kapur entropy using fast recursive algorithm. IJIEI. 2020;8(4):346.
- 19. Zhai G, Liang Y, Tan Z, Wang S. Development of an iterative Otsu method for vision-based structural displacement measurement under low-light conditions. Measurement. 2024;226:114182.
- 20. Kumar A, Kumar A, Vishwakarma A, Singh GK. Multilevel thresholding for crop image segmentation based on recursive minimum cross entropy using a swarm-based technique. Computers and Electronics in Agriculture. 2022;203:107488.
- 21. Erol OK, Eksin I. A new optimization method: Big Bang–Big Crunch. Advances in Engineering Software. 2006;37(2):106–11.
- 22. Shareef H, Ibrahim AA, Mutlag AH. Lightning search algorithm. Applied Soft Computing. 2015;36:315–33.
- 23. Anita, Yadav A. AEFA: Artificial electric field algorithm for global optimization. Swarm and Evolutionary Computation. 2019;48:93–108.
- 24. Mirjalili S. SCA: A Sine Cosine Algorithm for solving optimization problems. Knowledge-Based Systems. 2016;96:120–33.
- 25. Abualigah L, Diabat A, Mirjalili S, Abd Elaziz M, Gandomi AH. The arithmetic optimization algorithm. Computer Methods in Applied Mechanics and Engineering. 2021;376:113609.
- 26. Rashedi E, Nezamabadi-pour H, Saryazdi S. GSA: A Gravitational Search Algorithm. Information Sciences. 2009;179(13):2232–48.
- 27. Abualigah L, Elaziz MA, Sumari P, Khasawneh AM, Alshinwan M, Mirjalili S, et al. Black hole algorithm: a comprehensive survey. Appl Intell. 2022;52(10):11892–915.
- 28. Hashim FA, Houssein EH, Mabrouk MS, Al-Atabany W, Mirjalili S. Henry gas solubility optimization: a novel physics-based algorithm. Future Generation Computer Systems. 2019;101:646–67.
- 29. Uymaz SA, Tezel G, Yel E. Artificial algae algorithm (AAA) for nonlinear global optimization. Applied Soft Computing. 2015;31:153–71.
- 30. Mathew TV. Genetic algorithm. 53. IIT Bombay; 2012.
- 31. Sarkar S, Mali K. Monkey king evolution (MKE)-GA-SVM model for subtype classification of breast cancer. Digit Health. 2024;10:20552076241297002. pmid:39659402
- 32. Bilal, Pant M, Zaheer H, Garcia-Hernandez L, Abraham A. Differential evolution: a review of more than two decades of research. Engineering Applications of Artificial Intelligence. 2020;90:103479.
- 33. Kennedy J, Eberhart R. Particle swarm optimization. In: Proceedings of ICNN'95 - International Conference on Neural Networks. p. 1942–8. https://doi.org/10.1109/icnn.1995.488968
- 34. Mirjalili S. Moth-flame optimization algorithm: a novel nature-inspired heuristic paradigm. Knowledge-Based Systems. 2015;89:228–49.
- 35. Blum C. Ant colony optimization: introduction and recent trends. Physics of Life Reviews. 2005;2(4):353–73.
- 36. Mirjalili S, Lewis A. The whale optimization algorithm. Advances in Engineering Software. 2016;95:51–67.
- 37. Faramarzi A, Heidarinejad M, Mirjalili S, Gandomi AH. Marine predators algorithm: a nature-inspired metaheuristic. Expert Systems with Applications. 2020;152:113377.
- 38. Atashpaz-Gargari E, Lucas C. Imperialist competitive algorithm: an algorithm for optimization inspired by imperialistic competition. In: 2007 IEEE Congress on Evolutionary Computation. 2007. p. 4661–7. https://doi.org/10.1109/cec.2007.4425083
- 39. Kulkarni AJ, Durugkar IP, Kumar M. Cohort intelligence: a self supervised learning behavior. In: 2013 IEEE International Conference on Systems, Man, and Cybernetics. 2013. p. 1396–400. https://doi.org/10.1109/smc.2013.241
- 40. Satapathy S, Naik A. Social group optimization (SGO): a new population evolutionary optimization technique. Complex Intell Syst. 2016;2(3):173–203.
- 41. Wolpert DH, Macready WG. No free lunch theorems for optimization. IEEE Trans Evol Computat. 1997;1(1):67–82.
- 42. Wu T, Wu D, Jia H, Zhang N, Almotairi KH, Liu Q, et al. A modified gorilla troops optimizer for global optimization problem. Applied Sciences. 2022;12(19):10144.
- 43. Zheng R, Hussien AG, Jia H-M, Abualigah L, Wang S, Wu D. An improved wild horse optimizer for solving optimization problems. Mathematics. 2022;10(8):1311.
- 44. Song M, Jia H, Abualigah L, Liu Q, Lin Z, Wu D, et al. Modified Harris Hawks optimization algorithm with exploration factor and random walk strategy. Comput Intell Neurosci. 2022;2022:4673665. pmid:35535189
- 45. Wu D, Rao H, Wen C, Jia H, Liu Q, Abualigah L. Modified sand cat swarm optimization algorithm for solving constrained engineering optimization problems. Mathematics. 2022;10(22):4350.
- 46. Rao H, Jia H, Zhang X, Abualigah L. Hybrid adaptive crayfish optimization with differential evolution for color multi-threshold image segmentation. Biomimetics (Basel). 2025;10(4):218. pmid:40277617
- 47. Jia H, Su Y, Rao H, Liang M, Abualigah L, Liu C, et al. Improved artificial rabbits algorithm for global optimization and multi-level thresholding color image segmentation. Artif Intell Rev. 2024;58(2).
- 48. Peng, Jia HM, Lang CB. Modified dragonfly algorithm based multilevel thresholding method for color images segmentation. Math Biosci Eng. 2019;16(6):6467–511. pmid:31698573
- 49. Ma G, Yue X. An improved whale optimization algorithm based on multilevel threshold image segmentation using the Otsu method. Engineering Applications of Artificial Intelligence. 2022;113:104960.
- 50. Jia H, Peng X, Lang C. Remora optimization algorithm. Expert Systems with Applications. 2021;185:115665.
- 51. Wang S, Rao H, Wen C, Jia H, Wu D, Liu Q, et al. Improved remora optimization algorithm with mutualistic strategy for solving constrained engineering optimization problems. Processes. 2022;10(12):2606.
- 52. Wen C, Jia H, Wu D, Rao H, Li S, Liu Q, et al. Modified remora optimization algorithm with multistrategies for global optimization problem. Mathematics. 2022;10(19):3604.
- 53. Wang S, Hussien AG, Jia H, Abualigah L, Zheng R. Enhanced remora optimization algorithm for solving constrained engineering optimization problems. Mathematics. 2022;10(10):1696.
- 54. Seyyedabbasi A, Kiani F. Sand Cat swarm optimization: a nature-inspired algorithm to solve global optimization problems. Engineering with Computers. 2022;39(4):2627–51.
- 55. Luan TM, Khatir S, Tran MT, De Baets B, Cuong-Le T. Exponential-trigonometric optimization algorithm for solving complicated engineering problems. Computer Methods in Applied Mechanics and Engineering. 2024;432:117411.
- 56. Ezugwu AE, Agushaka JO, Abualigah L, Mirjalili S, Gandomi AH. Prairie dog optimization algorithm. Neural Comput & Applic. 2022;34(22):20017–65.
- 57. Jia H, Rao H, Wen C, Mirjalili S. Crayfish optimization algorithm. Artif Intell Rev. 2023;56(S2):1919–79.
- 58. Arbeláez P, Maire M, Fowlkes C, Malik J. Contour detection and hierarchical image segmentation. IEEE Trans Pattern Anal Mach Intell. 2011;33(5):898–916. pmid:20733228