Abstract
The Tree-Seed Algorithm (TSA) is a swarm intelligence algorithm inspired by the propagation relationship between trees and seeds. However, the original TSA is prone to premature convergence and becomes trapped in local optima when addressing high-dimensional, complex optimization problems, limiting its practical efficacy. To overcome these limitations, this paper proposes an Adaptive and Migration-enhanced Tree Seed Algorithm (AMTSA), which integrates three key mechanisms to significantly enhance performance in solving complex optimization tasks. First, to effectively evade local optima, an adaptive tree migration mechanism is designed to dynamically adjust the search step-size and direction based on individual fitness, thereby improving global exploration. Second, to enhance the algorithm’s adaptability and efficiency across different search stages, an adaptive seed generation strategy based on the dynamic Weibull distribution is introduced. This strategy enables flexible control over the number of seeds and promotes a balanced search throughout the solution space. Third, to mitigate convergence oscillations during the global search, a nonlinear step-size adjustment function inspired by the GBO algorithm is incorporated, which effectively improves convergence stability by responding to the iteration progress. Rigorous testing on the IEEE CEC 2014 benchmark functions demonstrates that AMTSA’s overall performance surpasses not only state-of-the-art optimizers like JADE and LSHADE but also recent TSA variants, including STSA, fb-TSA, and MTSA. To further validate its robustness in high-dimensional spaces, AMTSA was tested on 30 benchmark functions at 30, 50, and 100 dimensions. Results show that AMTSA ranked first in the number of functions optimized best and exhibited the fastest convergence speed among all compared algorithms. In a real-world application, AMTSA was employed to optimize multi-threshold segmentation for lung cancer CT images. 
The resulting AMTSA-SVM classification model achieved an accuracy of 89.5%, significantly outperforming models such as standard SVM (76.22%), DE-SVM (82%), GA-SVM (79.33%), TSA-SVM (84.44%), and JADE-SVM (89.12%). In conclusion, the proposed AMTSA, by integrating adaptive migration, dynamic seed generation, and nonlinear step-size control, successfully addresses the inherent deficiencies of the native TSA, offering a more efficient and robust tool for solving high-dimensional, complex optimization problems. The AMTSA source code will be available at www.jianhuajiang.com.
Citation: Li C, Jiang J, Ma Z, Yu Z, Li H, Liu J, et al. (2026) Adaptive and migration-enhanced tree seed algorithm for multi-threshold CT image segmentation and lung cancer recognition. PLoS One 21(1): e0333304. https://doi.org/10.1371/journal.pone.0333304
Editor: Mahamed G.H. Omran, Gulf University for Science and Technology, KUWAIT
Received: June 24, 2025; Accepted: September 11, 2025; Published: January 16, 2026
Copyright: © 2026 Li et al. This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
Data Availability: The source code for the AMTSA algorithm for this study is publicly available (www.jianhuajiang.com). The benchmark functions used for performance evaluation in this study are from the publicly available IEEE CEC 2014 suite. The lung cancer CT images used in the application study were obtained from the public repository, The Cancer Imaging Archive (TCIA) (https://www.cancerimagingarchive.net). All other relevant data for this study are within the paper and its Supporting Information files.
Funding: The author(s) received no specific funding for this work.
Competing interests: The authors have declared that no competing interests exist.
1 Introduction
Optimization problems are crucial in modern scientific research and engineering practice, and their core objective is to find optimal or near-optimal solutions under given constraints [1]. Such problems involve a variety of domains, such as product design, resource allocation, path planning, etc., and cut across a wide range of human activities [2]. Although traditional optimization methods such as Dynamic Programming and Newton’s method can provide exact solutions for some problems, they are often difficult to deal with effectively as the complexity of the problem increases, especially when dealing with nonlinearities, dynamic constraints, or noisy disturbances [3,4]. To solve these complex optimization problems, heuristic and meta-heuristic algorithms are gradually becoming more effective choices [5,6].
Heuristic algorithms rely on problem-specific intuitions and strategies [7]. These methods can quickly locate potential solutions by rationally guiding the search process. In contrast, metaheuristic algorithms provide a more general framework [8,9]. They can be widely applied to a variety of complex optimization problems without the need for deep customization. Typical metaheuristic algorithms, such as Genetic Algorithms, Particle Swarm Optimization, and Ant Colony Algorithms, possess global search capabilities and strong parallelism. This enables efficient exploration in complex search spaces [10–13]. However, the No Free Lunch Theorem states that no optimization algorithm can perform optimally in all contexts [14]. Therefore, it is necessary to choose the appropriate algorithm based on the specific nature of the problem [15].
Metaheuristic algorithms can be broadly classified into non-nature-inspired and nature-inspired categories. Non-nature-inspired algorithms, such as Tabu Search (TS) [16], Iterated Local Search (ILS) [17], and Adaptive Dimensional Search (ADS) [18], are widely used for optimization tasks. On the other hand, nature-inspired algorithms, including Particle Swarm Optimization (PSO) [19], Sailfish Optimizer (SFO) [20], Beluga Whale Optimization (BWO) [21], Spider Monkey Optimization (SMO) [22], and Cheetah Optimizer (CO) [23], are increasingly popular due to their simplicity and effectiveness in solving complex optimization problems. However, while some of these algorithms perform exceptionally well for specific types of problems, their effectiveness can vary when applied to different or more intricate scenarios.
Among various metaheuristic algorithms, one notable approach is the Tree-Seed Algorithm (TSA) [24]. TSA simulates the natural growth process of trees and seeds, exploring the solution space of optimization problems through the distribution and growth of trees and seeds [25]. Compared to other metaheuristic algorithms, TSA is recognized for its simple structure and high computational efficiency, particularly when applied to large-scale, complex optimization problems [26]. In TSA, trees and seeds are treated as candidate solutions within the solution space, and the iterative search process is guided by generating seeds from these trees. However, like many early metaheuristics, the native TSA has significant limitations: it is prone to premature convergence, often getting stuck in local optima when tackling complex problems with multiple local extremes [27], and it struggles to balance exploration and exploitation in high-dimensional search spaces. As optimization problems grow more complex, traditional methods increasingly fail to meet practical needs, making heuristic and metaheuristic algorithms more attractive due to their flexibility and adaptability [28–30]. This research is directly motivated by the need to address these shortcomings. Rather than proposing a new metaphor-based algorithm from scratch, our work focuses on a mechanism-driven enhancement of the TSA framework: we introduce three synergistic, adaptive mechanisms designed specifically to overcome TSA's inherent deficiencies and significantly boost its performance and robustness.
1.1 Motivations
TSA has shown effectiveness in solving optimization problems, but it suffers from several key limitations that hinder its performance. Firstly, TSA tends to converge prematurely, often becoming trapped in local optima due to its limited exploration capacity [31]. Secondly, TSA’s reliance on random perturbations for seed generation introduces excessive randomness, making it difficult to control the search process effectively [32]. Lastly, TSA’s search process can lack diversity, especially at fixed locations, making it harder to solve complex problems effectively [33]. These limitations highlight the need for improvements to enhance TSA’s search capability and adaptability in complex optimization problems. To address these challenges, this study designs an adaptive framework as the core of the proposed method. This adaptive framework operates on three key components of TSA: trees, seeds, and step sizes, thereby enhancing the overall performance and robustness of the algorithm. The motivations driving this research are as follows:
- TSA struggles with balancing exploration and convergence due to its fixed seed generation, which often leads to premature convergence [27]. By redesigning the seed generation process, we introduce a dynamic method to adjust the number of seeds at different stages, addressing this issue and improving the search efficiency and convergence rate.
- TSA struggles with enhancing local exploration and accelerating convergence, particularly during the local exploration phase [34]. By introducing a nonlinear step-size adjustment mechanism and an asymptotic approximation mechanism, we improve TSA’s adaptability at different stages, which enhances convergence during the local exploration phase and ensures more efficient and stable search performance.
- TSA often lacks diversity in its search process, particularly at fixed locations, which limits its global exploration capability [35]. By introducing an adaptive migration mechanism, we enhance TSA’s ability to explore more broadly, addressing this limitation and improving its efficiency in solving complex problems.
1.2 Contribution
While the Tree-Seed Algorithm (TSA) has been the focus of our ongoing research, leading to variants such as ATSA [26] and KATSA [33], the proposed AMTSA represents a significant methodological advancement rather than an incremental improvement. Our previous works, like ATSA, introduced a double-layer framework, and KATSA utilized a k-NN strategy to enhance the search process. However, these prior models still faced challenges in achieving a robust, dynamic balance between global exploration and local exploitation, particularly in high-dimensional and complex problem landscapes. The novelty of AMTSA lies in its unique, three-part synergistic framework designed specifically to overcome these limitations. Unlike our previous approaches, AMTSA introduces:
- A Dynamic Weibull Distribution for Seed Generation: This is a fundamental departure from the more rigid or heuristic seed adjustment strategies in our prior work. By dynamically linking the seed count to the population’s real-time fitness statistics (mean and standard deviation), AMTSA can adapt its search breadth in a more principled and responsive manner.
- A GBO-Inspired Nonlinear Step-Size Mechanism: While previous variants adjusted search parameters, AMTSA is the first in our series to incorporate a sophisticated, nonlinear step-size function inspired by the Gradient-based Optimizer (GBO). This, combined with an asymptotic convergence approach, allows for a much smoother and more effective transition from an aggressive global search to a fine-tuned local search, a feature not fully realized in ATSA or KATSA.
- A Fitness-Guided Adaptive Migration Strategy: The migration mechanism in AMTSA is distinct from earlier concepts. It not only provides an escape route from local optima but does so adaptively, where the migration step size and direction are dynamically determined by the individual tree’s fitness value. This creates a more intelligent and targeted exploration capability.
In summary, AMTSA is not merely another TSA variant but a comprehensive redesign. Its true innovation lies in the integration of these three adaptive, mutually reinforcing mechanisms, which together create a more robust and efficient optimization tool specifically tailored for the high-dimensional, complex problems addressed in this paper.
In recent years, a growing emphasis has been placed on the need for scientific rigor in the design of metaheuristic algorithms. Scholars such as Sörensen [36] and Camacho-Villalón et al. [37] have published critiques of algorithms that rely heavily on natural metaphors without introducing sufficient mechanical novelty. This body of work argues that the contribution of a truly valuable new algorithm must lie in the introduction of concrete and novel mechanisms that effectively solve optimization problems, rather than simply proposing a new story. We fully concur with this perspective, and it has served as a guiding principle in the design of AMTSA. Although the Tree-Seed Algorithm (TSA) was initially inspired by the relationship between trees and seeds in nature, the core innovation of our proposed AMTSA is not founded on this metaphor. Instead, it is rooted in its concrete, quantifiable mathematical and adaptive mechanisms. The contribution of this work is not the mimicry of a natural process, but rather a framework of three synergistic, carefully engineered components designed to overcome the limitations of existing algorithms, enabling superior performance in solving complex, high-dimensional optimization problems.
2 Related work
2.1 A brief introduction of tree-seed algorithm
The Tree-Seed Algorithm (TSA), proposed by Mustafa Servet Kiran in 2015, is a population-based intelligence algorithm inspired by the relationship between trees and seeds [24]. TSA solves optimization problems by simulating the process of trees reproducing their offspring. The algorithm has been widely used in optimization problems such as function optimization, engineering design optimization, and resource allocation, and has attracted attention for its good global search capability and high convergence speed. The key principles of TSA are outlined below.
- Step1. Initialize each tree in the population: In TSA, each tree in the initial population is initialized using Eq (1) to generate a feasible solution as the initial tree Ti,j.
T_{i,j} = L_{j,min} + r_{i,j} × (H_{j,max} − L_{j,min})  (1)

where T_{i,j} denotes the position of the ith tree in the jth dimension, L_{j,min} and H_{j,max} are the lower and upper bounds of the jth dimension, respectively, and r_{i,j} is a random number uniformly distributed in the interval (0, 1).
- Step2. Seed number generation mechanism: The number of seeds to be generated is determined according to Eq (2):

ns = fix(low + (high − low) × r)  (2)

where the fix function rounds its argument to the integer closest to zero, r is a uniform random number in (0, 1), low is the minimum number of seeds generated by each tree (10% of the population size in the original TSA), and high is the maximum number of seeds generated by each tree (25% of the population size).
- Step3. Tree update mechanism: The tree update mechanism is the core of TSA and consists of two different update formulas: Eq (3) is used for local search and Eq (4) for global search.

S_{i,j} = T_{i,j} + α_{i,j} × (B_j − T_{r,j})  (3)

S_{i,j} = T_{i,j} + α_{i,j} × (T_{i,j} − T_{r,j})  (4)

where S_{i,j} is the jth dimension of the seed generated from the ith tree, T_{i,j} is the jth dimension of the ith tree, B_j is the jth dimension of the best tree position obtained so far, T_{r,j} is the jth dimension of a tree r randomly selected from the population, and α_{i,j} is a random scaling factor in the range [−1, 1]. Local search refines the search scope and improves the exploitation of the algorithm, while global search enhances exploration and prevents the algorithm from falling into local optima.
- Step4. Termination condition: In all experiments, the termination condition is the maximum number of function evaluations MaxFEs, determined by Eq (5); the number of function evaluations (FEs) is updated according to Eq (6).

MaxFEs = 10000 × D  (5)

FEs = FEs + ns  (6)

where D is the dimension of the problem and ns is the number of seeds generated per tree. This termination condition ensures that the algorithm stops at the right time when computational resources are limited, avoiding an overly long, ineffective search.
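The four steps above can be sketched in code. The following is a minimal illustrative implementation of the native TSA (not AMTSA), assuming common settings from the TSA literature (search tendency ST = 0.1, seed counts between 10% and 25% of the population); the per-seed search-tendency check is a simplification of the original per-dimension rule.

```python
import numpy as np

def tsa(obj, d, n=30, st=0.1, low_frac=0.10, high_frac=0.25,
        lo=-100.0, hi=100.0, seed=0):
    """Minimal sketch of the native TSA loop (Eqs (1)-(6))."""
    rng = np.random.default_rng(seed)
    max_fes = 10_000 * d                              # Eq (5)
    trees = lo + rng.random((n, d)) * (hi - lo)       # Eq (1)
    fit = np.array([obj(x) for x in trees])
    fes = n
    best_i = int(fit.argmin())
    best, best_f = trees[best_i].copy(), float(fit[best_i])
    while fes < max_fes:
        for i in range(n):
            # Eq (2): random seed count between 10% and 25% of population size
            ns = max(1, int(np.fix(rng.uniform(low_frac * n, high_frac * n))))
            seeds = np.empty((ns, d))
            for s in range(ns):
                r = rng.integers(n)                   # random tree index
                alpha = rng.uniform(-1.0, 1.0, d)     # scaling factor in [-1, 1]
                if rng.random() < st:                 # Eq (3): pull toward best
                    seeds[s] = trees[i] + alpha * (best - trees[r])
                else:                                 # Eq (4): global search
                    seeds[s] = trees[i] + alpha * (trees[i] - trees[r])
            seeds = np.clip(seeds, lo, hi)
            sf = np.array([obj(x) for x in seeds])
            fes += ns                                 # Eq (6)
            j = int(sf.argmin())
            if sf[j] < fit[i]:                        # keep the best seed
                fit[i] = sf[j]
                trees[i] = seeds[j]
            if fit[i] < best_f:
                best_f = float(fit[i])
                best = trees[i].copy()
            if fes >= max_fes:
                break
    return best, best_f
```

On a simple sphere function the sketch converges toward the origin within the Eq (5) evaluation budget.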
2.2 Literature review
The Tree-Seed Algorithm (TSA) has been recognized as a potent optimization tool due to its simple structure and efficiency [38]. However, like many metaheuristics, the native TSA faces challenges in maintaining a robust balance between global exploration and local exploitation, often leading to premature convergence in complex problem landscapes. Consequently, a significant body of research has emerged, focusing on enhancing the original algorithm. These efforts can be broadly categorized into three main streams: refining tree evolution mechanisms, innovating seed generation strategies, and expanding the algorithm’s practical applications [39,40].
- Mechanisms of tree evolution: A primary research direction for improving TSA has been to enhance the mobility and exploration capability of the ’trees’ to prevent the algorithm from settling in local optima. Some studies have drawn inspiration from other successful metaheuristics. The Migration Tree-Seed Algorithm (MTSA), for instance, integrated concepts from the Grey Wolf Optimizer (GWO), employing a gravity-based learning mechanism to guide its migration strategy, thereby improving the balance between global and local search [41]. Other approaches have focused on internal mechanism design. The Triple Tree-Seed Algorithm (TriTSA), for example, designed two novel migration mechanisms based on triple learning methods and a sine-based random distribution to boost population diversity and adaptability [42]. These works underscore a clear trend: incorporating sophisticated migration strategies is a key avenue for augmenting TSA’s global search capabilities.
- Innovations in seed generation: Another major focus has been to refine the seed generation process, which is central to the algorithm’s search behavior. One group of enhancements involves introducing dynamic adaptation of key parameters. For example, STSA utilizes sine and cosine functions to dynamically adjust balancing parameters [43], while fb-TSA introduces a feedback loop to adaptively tune the search tendency (ST) value and the number of seeds (ns) based on the search progress [32]. A second group of works has focused on incorporating new information to guide seed placement. EST-TSA, for instance, leverages information from the current best solution to enhance its local search [44], and TSASC integrates the Sine Cosine Algorithm (SCA) to refine the seed position updating formula [45]. A third avenue involves hybridizing with other search strategies to increase diversity, such as LTSA employing a Lévy flight random walk [46] and DTSA designing a velocity-driven seed generation mechanism [27].
- Algorithm applications: The practical value of these algorithmic enhancements is demonstrated by the successful application of TSA and its variants across diverse and challenging domains. In engineering, CTSA has been tailored to solve constrained optimization problems by effectively combining Deb’s rules with the TSA framework [47]. In finance, a hybrid model, sinhTSA-MLP, utilized TSA to optimize a multilayer perceptron, significantly improving the accuracy of credit default risk prediction [48]. The algorithm has also proven effective in medical diagnostics, where a TSA-ANN model was developed for the accurate classification of COVID-19 cases from medical images [49]. The algorithm’s versatility is further demonstrated by DTSA’s integration of discrete operators to solve complex arrangement coding optimization tasks [27].
Despite these advances in tree evolution and seed generation, a holistic approach that simultaneously addresses adaptive step-sizing, dynamic population management, and robust migration strategies has been underexplored. Many existing methods improve one aspect of the algorithm, sometimes at the expense of another, or rely on fixed parameters that limit their adaptability across different problem types [27]. This paper aims to fill this research gap. We propose an Adaptive and Migration-enhanced Tree Seed Algorithm (AMTSA) that integrates three synergistic mechanisms to create a more balanced, robust, and efficient optimizer for complex, high-dimensional problems.
To ensure a comprehensive comparison, we acknowledge the emergence of other high-performance optimization algorithms in the computational intelligence field in recent years. Notable examples include ICSPM and ICSPM2 [50], Exploratory Cuckoo Search [51], and the Improved Salp Swarm Algorithm with HDPM (ISSA) [52]. While a direct experimental comparison with these methods was beyond the scope of the current study, benchmarking AMTSA against these promising approaches remains a valuable direction for future work.
2.3 An overview of GBO
GBO is a metaheuristic optimization algorithm built on gradient-based concepts [53]. Proposed by Iman Ahmadianfar, Omid Bozorg-Haddad, and Xuefeng Chu in 2020, GBO is inspired by gradient-based Newtonian methods and uses two main operators [54], the Gradient Search Rule (GSR) and the Local Escape Operator (LEO), together with a set of vectors to explore the search space. The working principle of GBO can be briefly summarized as follows.
Step1. Initialization: GBO first generates an initial population in which each individual (i.e., vector) is randomly distributed in the search space. For a population of size N and dimension D, the vectors are represented as in Eq (7), and each initial vector is generated by Eq (8):

X_n = [x_{n,1}, x_{n,2}, …, x_{n,D}],  n = 1, 2, …, N  (7)

x_{n,d} = X_{min} + rand(0, 1) × (X_{max} − X_{min})  (8)

where X_{min} and X_{max} are the lower and upper bounds of the decision variable, respectively, and rand(0, 1) is a random number in [0, 1].
Step2. Gradient Search Rule (GSR): Using the gradient-based rule in Eq (9), each individual is guided toward an improved solution:

GSR = randn × ρ1 × (2Δx × x_n) / (x_worst − x_best + ε)  (9)

where x_best and x_worst are the best and worst solutions obtained during the optimization process, ε is a small value used to avoid divide-by-zero errors, and Δx is a small step size used in the numerical gradient computation. To improve the search capability of GBO and balance global exploration with local exploitation, the GSR is modified by introducing the stochastic parameter ρ1 given in Eq (10):

ρ1 = 2 × rand × α − α  (10)

In this study, α is the key adaptive coefficient for balancing exploration and exploitation, and it is expressed by Eqs (11) and (12):

α = |β × sin(3π/2 + sin(3πβ/2))|  (11)

β = β_min + (β_max − β_min) × (1 − (t/T)^3)^2  (12)

where β_min and β_max regulate the minimum and maximum values of the parameter β. These parameters are used to compute the adaptive parameter α, which balances global exploration and local exploitation during the iterations of the algorithm. Integrating the adaptive coefficient mechanism of GBO into TSA requires new balancing mechanisms to regulate the global and local search phases; Sect 3.2 details the integration of these enhancements into TSA.
Step3. Integrated position update: The position of the current vector is updated using the GSR and the Directional Movement (DM). DM is given by Eq (13), and the integrated position update formula is Eq (14):

DM = rand × ρ2 × (x_best − x_n)  (13)

x_n^{new} = x_n − GSR + DM  (14)

where ρ2 is a stochastic parameter computed analogously to ρ1.
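As a sketch, a single GBO-style update for a one-dimensional variable can be written as follows, using the standard GBO parameter forms as an assumption; the Δx term is simplified here to a proxy based on the distance to the best solution, whereas the full GBO computes it from population members.

```python
import math
import random

def gbo_update(x, x_best, x_worst, t, t_max, eps=1e-12,
               beta_min=0.2, beta_max=1.2):
    """One-dimensional sketch of GBO's GSR + DM update (Eqs (9)-(14))."""
    # Eq (12): beta decays smoothly from beta_max to beta_min over iterations.
    beta = beta_min + (beta_max - beta_min) * (1 - (t / t_max) ** 3) ** 2
    # Eq (11): nested-sine adaptive coefficient.
    alpha = abs(beta * math.sin(1.5 * math.pi + math.sin(1.5 * math.pi * beta)))
    rho1 = 2 * random.random() * alpha - alpha        # Eq (10)
    rho2 = 2 * random.random() * alpha - alpha        # analogous to rho1
    dx = random.random() * abs(x_best - x)            # simplified step proxy
    gsr = random.gauss(0, 1) * rho1 * (2 * dx * x) / (x_worst - x_best + eps)  # Eq (9)
    dm = random.random() * rho2 * (x_best - x)        # Eq (13)
    return x - gsr + dm                               # Eq (14)
```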
3 Methods
In this section, we present the three key innovations in the algorithm, organized according to the logical sequence of improvements. First, we propose an adaptive method for adjusting the number of seeds at different stages of the process. Second, we introduce an adaptive, nonlinear approach to optimize the seed generation locations. Finally, we design an adaptive migration mechanism to help the algorithm escape local optima and improve its overall performance.
3.1 A dynamic seed number strategy based on the Weibull distribution
The original Tree-Seed Algorithm (TSA) uses a completely randomized seed-number generation strategy in which the number of seeds depends only on the population size. This mechanism introduces great randomness and lacks a dynamic, adaptive strategy, so the algorithm may fail to maintain good performance across different dimensions and search phases. To remedy these drawbacks, we introduce an adaptive seed generation strategy based on a dynamic Weibull distribution, which makes the number of generated seeds more flexible and responsive to the current search situation by adaptively adjusting the shape and scale parameters of the Weibull distribution, thereby improving the search efficiency and adaptability of the algorithm. We set the scale parameter λ and shape parameter k via Eq (15) and Eq (16), and let ns change adaptively via Eq (17).
where d_max and d_min are the upper and lower bounds of the search space, best_obj is the objective function value corresponding to the minimum fitness, D is the dimension of the problem, std_obj is the standard deviation of the fitness values, mean_obj is the mean of the fitness values, and ⌈·⌉ denotes rounding up (the ceiling operation).
The scale parameter (λ) directly influences the seed distribution range, with its value adjusted based on the relative volatility of the population's fitness, represented by the ratio std_obj/mean_obj. A larger value of this ratio indicates higher fitness diversity and thus a larger scale parameter, resulting in more seeds for broader global exploration. Conversely, a smaller ratio leads to fewer seeds and faster convergence with more localized searches. Meanwhile, the shape parameter (k) is dynamically adjusted based on the problem's dimensionality (D) and the difference between the population's average fitness and the current optimal fitness (mean_obj − best_obj). In higher-dimensional problems, a larger k expands the seed count, promoting diversity and global exploration. If the fitness difference is large, k increases to improve global search; if it is small, k decreases to focus on local search and accelerate convergence. Together, these parameters balance global exploration and local exploitation, enhancing TSA's adaptability. The idea is shown in Fig 1.
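To illustrate the idea (not the paper's exact Eqs (15)–(17)), the following sketch draws a seed count from a Weibull distribution whose scale grows with the population's relative fitness volatility and whose shape grows with the dimensionality and the mean-to-best fitness gap; the specific mappings below are assumptions for demonstration.

```python
import numpy as np

def seed_count(fitness, d, n_min=1, n_max=20, rng=None):
    """Illustrative dynamic seed count via a Weibull draw.

    The mappings from population statistics to (lam, k) are assumed
    for demonstration; a faithful implementation should use the
    paper's Eqs (15)-(17).
    """
    rng = np.random.default_rng() if rng is None else rng
    mean_obj = float(np.mean(fitness))
    std_obj = float(np.std(fitness))
    best_obj = float(np.min(fitness))
    # Scale grows with relative fitness volatility: more diversity -> more seeds.
    lam = 1.0 + std_obj / (abs(mean_obj) + 1e-12)
    # Shape grows with dimensionality and the mean-to-best fitness gap.
    k = 1.0 + np.log1p(d) * (1.0 + (mean_obj - best_obj) / (abs(mean_obj) + 1e-12))
    # Weibull(k) sample scaled by lam, rounded up and clipped to a valid range.
    ns = int(np.ceil(lam * rng.weibull(k)))
    return int(np.clip(ns, n_min, n_max))
```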
3.2 Adaptive step size with nonlinear strategy
In TSA, the step size α in the seed generation equations is a uniform random number in (−1, 1), which controls the search range of the seed in the neighborhood of the tree position. When the local-search update of Eq (3) is applied, the seed generated from the current tree is pulled toward the best position, and the step size moderates the strength of this influence. However, the values of α exhibit large randomness and significant jumps during the search process. This can result in insufficient fine-tuning of the search, so that exploitation cannot adapt as needed as the search proceeds.
To solve this problem, inspired by the improved GSR search rule in the GBO algorithm, two perturbation factors, α and β, are introduced to adjust the step size with a nonlinear function, making the search process more flexible. The generated solution should be able to explore the search space around its corresponding best solution. Thus, when the number of iterations reaches a later stage, the parameter values increase; this boosts the diversity of the population, enhancing the search around the best solution obtained so far and helping the algorithm escape local optima. Then, using asymptotic convergence, the two perturbation factors are dynamically adjusted, enabling a smooth transition from global search to local search. The overall performance of the optimization algorithm is further improved by gradually reducing the step size. We give the perturbation factors α and β by Eq (18) and Eq (19), the dynamic step size by Eq (20), and the seed-generation formula by Eq (21).
where t is the current iteration number and T is the maximum iteration number.
Beta ranges from 1.2 to 0.2, smoothly transitioning from exploration to exploitation. Larger beta values in early iterations enhance exploration, while smaller values in later stages focus on exploitation. The alpha nested sine function introduces oscillation, promoting diverse search paths and avoiding local optima. The idea is shown in Fig 2.
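Under the assumption that the perturbation factors follow the GBO-style forms described in Sect 2.3 (β decaying smoothly from 1.2 to 0.2, with a nested-sine α), they can be sketched as follows; the paper's exact Eqs (18)–(20) may differ.

```python
import math

def step_factors(t, t_max, beta_min=0.2, beta_max=1.2):
    """Sketch of GBO-style nonlinear step factors (assumed forms).

    beta decays smoothly from beta_max to beta_min over the run;
    alpha adds a nested-sine oscillation that diversifies search paths.
    """
    beta = beta_min + (beta_max - beta_min) * (1.0 - (t / t_max) ** 3) ** 2
    alpha = abs(beta * math.sin(1.5 * math.pi + math.sin(1.5 * math.pi * beta)))
    return alpha, beta
```

Early iterations yield β near 1.2 (exploration); late iterations yield β near 0.2 (exploitation).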
3.3 Adaptive migration tree strategy
A mechanism for TSA to jump out of local optima is necessary. Because the native mechanism is incomplete, seeds in TSA remain locked around their parent tree, and an individual trapped in a local optimum finds it difficult to escape its current position. To solve this stagnation problem, we design a migration mechanism that adaptively determines the step size and direction, giving tree seeds that have fallen into a local optimum an opportunity to escape.
This migration mechanism is general in nature: every parent tree is evaluated for migration to decide whether to move to a new position. The step size is determined by a dynamic step factor with an adaptive function, the direction by a random direction vector with an adaptive function, and the final migration formula combines the step size, the direction, and a Cauchy mutation strategy so as to jump out of local optima. Eq (22) gives the adaptive function, which dynamically adjusts an individual's step size according to its fitness: individuals with lower fitness take larger steps to encourage more exploration, while individuals with higher fitness take smaller steps and move more cautiously.
The adaptive function is the basis for determining the step size and direction; to bound the step size, a nonlinearly varying dynamic factor generated from the sine function is added. Eq (23) gives this factor, where r is a uniformly distributed random number taking values in [0, 1].
When traversing each parent tree, the final step value (Eq (24)) and the direction effect (Eq (25)) are determined by comparing the fitness of that parent tree with the fitness of a randomly matched individual. The updated parent tree is then generated by Eq (26).
where target_index identifies the randomly selected target parent tree, and the result of Eq (26) is the position of the parent tree after migration. The idea is shown in Fig 3.
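A possible sketch of this migration step is given below; the adaptive step weighting, sine-based factor, direction vector, and Cauchy perturbation are illustrative stand-ins for the paper's Eqs (22)–(26), not the exact formulas.

```python
import numpy as np

def migrate(trees, fitness, t, t_max, bounds, rng=None):
    """Sketch of a fitness-guided adaptive migration step (assumed forms).

    Worse (higher-fitness-value) trees take larger steps toward a randomly
    chosen target tree, with a heavy-tailed Cauchy perturbation; candidates
    are clipped to the search bounds.
    """
    rng = np.random.default_rng() if rng is None else rng
    n, d = trees.shape
    lo, hi = bounds
    f_min, f_max = float(fitness.min()), float(fitness.max())
    # Sine-based dynamic factor that shrinks over iterations (Eq (23)-style).
    factor = np.sin(np.pi / 2 * (1 - t / t_max)) * rng.random()
    new_trees = trees.copy()
    for i in range(n):
        # Adaptive weight: 0 for the best tree, 1 for the worst (Eq (22)-style).
        w = (fitness[i] - f_min) / (f_max - f_min + 1e-12)
        step = factor * (0.1 + 0.9 * w) * (hi - lo)
        target = rng.integers(n)                        # random target parent tree
        direction = np.sign(trees[target] - trees[i])   # direction toward target
        cauchy = rng.standard_cauchy(d) * 0.01          # heavy-tailed mutation
        cand = trees[i] + step * direction * rng.random(d) + cauchy * (hi - lo)
        new_trees[i] = np.clip(cand, lo, hi)
    return new_trees
```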
3.4 AMTSA: A novel tree seed algorithm
The algorithm improves on the defects of the original Tree-Seed Algorithm (TSA). An adaptive seed generation strategy with a dynamic Weibull distribution is introduced, which adaptively adjusts the shape and scale parameters of the Weibull distribution and determines the number of seeds to be generated across different dimensions and search stages. It significantly improves the global search capability during the exploration phase and enhances the fast response and optimization capability during the convergence phase. Inspired by the GSR search rule in the GBO algorithm, the original step size is improved nonlinearly to realize a smooth transition from global to local search and to improve the exploitation accuracy and adaptive ability of the algorithm. Finally, an adaptive migration mechanism is designed that integrates global search with an adaptive directional step size, giving each tree a chance to jump out of local optima and exploit better points. Together, these strategies produce a more dynamic, adaptive, and efficient optimization algorithm that remedies the original shortcomings of TSA. The flowchart of the algorithm is shown in Fig 4.
3.5 Time complexity analysis of AMTSA
To evaluate the efficiency of the AMTSA algorithm, it is essential to analyze its time complexity, which reflects how the algorithm’s computational cost grows with respect to the problem size. The AMTSA algorithm consists of several major stages, including initialization, iterative updates, seed production, and migration mechanisms. In this section, we provide a detailed analysis of the time complexity involved in each phase of the algorithm.
In the initialization phase, the algorithm generates random positions for the trees within a defined search space. Since the tree population consists of N trees, each having D dimensions, the time complexity for this operation is O(N × D). Additionally, the objective function value for each tree is computed, which also requires O(N × D) time, as each evaluation involves processing a D-dimensional vector. Therefore, the total time complexity of the initialization phase is O(N × D).
The iterative update phase, which is the core of AMTSA, involves several key steps within each generation. This phase iterates for a maximum of T generations, and within each iteration, multiple operations are performed, including adaptive weight calculations, seed generation, fitness evaluations, and updates to tree positions. The adaptive weight calculation is computationally light, with a time complexity of O(1), as it involves basic arithmetic operations. Seed production, on the other hand, is more computationally demanding. Each tree produces a variable number of seeds, ns, based on a Weibull distribution. The number of seeds generated for each tree is bounded, so the average number of seeds per tree is O(1). For each seed, the fitness function is evaluated, which takes O(D) time. Thus, the total complexity for seed production and fitness evaluation across all trees is O(N × D).
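The seed-count strategy can be sketched as follows. The mappings from the population fitness statistics to the Weibull shape k and scale λ are assumptions standing in for Eq (16), and the bounds n_min/n_max are illustrative; the point is that the draw is bounded, so the expected seed count per tree stays O(1).

```python
import random
import statistics

def seed_count(fitnesses, i, n_min=1, n_max=10, ck=10.0):
    """Sample a bounded number of seeds for tree i from a Weibull
    distribution whose shape/scale are derived from population fitness
    statistics. The mappings below are illustrative stand-ins for Eq (16)."""
    mu = statistics.mean(fitnesses)
    sigma = statistics.pstdev(fitnesses) or 1e-12
    k = ck / (1.0 + sigma / (abs(mu) + 1e-12))   # shape parameter (assumed mapping)
    lam = 1.0 + abs(fitnesses[i] - mu) / sigma   # scale parameter (assumed mapping)
    draw = random.weibullvariate(lam, k)         # args: (scale alpha, shape beta)
    return max(n_min, min(n_max, round(draw)))   # clamp so the count stays O(1)
```

Clamping the draw to [n_min, n_max] is what keeps the per-tree seed production cost constant, independent of how the fitness statistics evolve.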
In the migration phase, which is designed to help the algorithm escape local optima, each tree adjusts its position dynamically based on its fitness and the positions of other trees. This phase includes random number generation, position updates, and fitness comparisons. Since these operations are performed for each of the N trees and each involves operations that are linear with respect to the problem dimension D, the time complexity for the migration phase is O(N × D).
Finally, the overall complexity of the algorithm is dominated by the iterative update phase. As the algorithm performs T iterations, each requiring O(N × D) time, the total time complexity of AMTSA is O(T × N × D). This means that the computational cost of the algorithm increases linearly with the number of trees N, the problem dimension D, and the number of generations T.
In summary, the time complexity of AMTSA is primarily driven by the iterative update phase, which is O(T × N × D). This reflects the typical behavior of evolutionary algorithms, where the time complexity grows with the size of the population and the dimensionality of the search space, as well as the number of iterations required for convergence.
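As a quick sanity check on the O(T × N × D) bound, the dominant operation count can be modeled directly; doubling any one of the three factors doubles the total cost.

```python
def amtsa_cost(t_iters, n_trees, dim):
    # Dominant work: one O(N * D) update sweep per generation, T generations total
    return t_iters * n_trees * dim
```

For example, a typical run with T = 500, N = 30, and D = 100 amounts to 1,500,000 dimension-level operations in the dominant phase.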
Algorithm 1 The pseudo-code of the AMTSA.
1: Step 1. Initialize the population
2: Set the initial number of trees and problem dimensions;
3: Set the ST parameter and termination criteria;
4: Creation1:
5: Generate initial positions (and any auxiliary data) via Sine chaotic mapping;
6: End Creation1
7: Evaluate each tree’s fitness against the target function;
8: Step 2. Seed-based search
9: for each tree do
10: Compute the adaptive weight based on the current iteration;
11: Compute current fitness mean and standard deviation;
12: Derive Weibull parameters (shape k, scale λ) from the fitness data;
13: Determine the number of seeds for this tree via the Weibull distribution;
14: for each seed do
15: for each dimension do
16: if rand < ST then
17: Creation2:
18: Compute dynamic step size (Eq (20));
19: Generate new tree position (Eq (21));
20: else
21: Generate new tree position (Eq (21));
22: End Creation2
23: end if
24: end for
25: end for
26: Select the best seed; if it outperforms the tree, replace the tree with that seed;
27: end for
28: Step 3. Migratory perturbation
29: Design the perturbation term (Eq (23));
30: for each tree do
31: Randomly pick a target individual;
32: Compute the final step size (Eq (24)) based on the fitness comparison with the target;
33: Determine the direction (Eq (25)) via a Normal-distributed random vector weighted by fitness;
34: Update the tree position (Eq (26));
35: Compare the new and old positions; keep the better one;
36: end for
37: Step 4. Termination check
38: If stopping criteria are not met, go to Step 2;
39: Step 5. Report
40: Output the best-found solution.
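To make the control flow of Algorithm 1 concrete, the following highly simplified Python sketch mirrors its three steps on a toy objective. The Weibull seed-count rule, the nonlinear step decay, and the migration perturbation below are illustrative stand-ins, not the paper's exact Eqs (16)–(26), and plain uniform initialization replaces the Sine chaotic mapping.

```python
import math
import random
import statistics

def amtsa_sketch(obj, dim, lo, hi, n_trees=15, max_iter=100, st=0.1):
    """Simplified AMTSA loop (Algorithm 1); all update formulas are assumptions."""
    # Step 1: random initialization (paper uses Sine chaotic mapping)
    trees = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_trees)]
    fits = [obj(t) for t in trees]
    b = min(range(n_trees), key=fits.__getitem__)
    best_pos, best_fit = trees[b][:], fits[b]

    for it in range(max_iter):
        beta = 0.2 + (1.0 - it / max_iter) ** 2          # nonlinear step decay (assumed)
        mu = statistics.mean(fits)
        sigma = statistics.pstdev(fits) or 1e-12
        for i in range(n_trees):
            # Step 2: bounded Weibull-distributed seed count (assumed mapping)
            lam = 1.0 + abs(fits[i] - mu) / sigma
            ns = max(1, min(5, round(random.weibullvariate(lam, 2.0))))
            for _ in range(ns):
                j = random.randrange(n_trees)
                seed = trees[i][:]
                for d in range(dim):
                    if random.random() < st:             # search-tendency test
                        seed[d] = trees[i][d] + beta * (best_pos[d] - trees[j][d])
                    else:
                        seed[d] = trees[i][d] + beta * (trees[i][d] - trees[j][d])
                    seed[d] = min(hi, max(lo, seed[d]))
                f = obj(seed)
                if f < fits[i]:                          # keep an improving seed
                    trees[i], fits[i] = seed, f
        # Step 3: sine-modulated migration toward/away from a random target
        factor = abs(math.sin(math.pi * random.random()))
        for i in range(n_trees):
            j = random.randrange(n_trees)
            sign = 1.0 if fits[i] < fits[j] else -1.0
            cand = [min(hi, max(lo, trees[i][d] + sign * factor * random.gauss(0, 1)
                                * (trees[j][d] - trees[i][d]))) for d in range(dim)]
            f = obj(cand)
            if f < fits[i]:                              # greedy acceptance
                trees[i], fits[i] = cand, f
        b = min(range(n_trees), key=fits.__getitem__)
        if fits[b] < best_fit:
            best_pos, best_fit = trees[b][:], fits[b]
    return best_pos, best_fit
```

Because both the seed update and the migration step are accepted greedily, the best fitness is monotonically non-increasing over iterations, matching the "keep the better one" selection in the pseudocode.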
4 Experimental environment and setting
4.1 Experimental fundamentals
All experiments were implemented in the MATLAB 2016a programming environment and executed on a computer with an Intel Core i7-11800H processor, running a 64-bit Windows 10 operating system. To ensure a rigorous and fair comparison, all experiments were designed to strictly adhere to the official guidelines of the IEEE CEC 2014 benchmark competition. The 30 benchmark functions provided by the competition were evaluated at 30, 50, and 100 dimensions. For statistical significance, the final results reported for each function are the average of 30 independent runs. Following the competition’s standard protocol, the stopping criterion for all algorithms was uniformly set to a maximum number of fitness evaluations (MaxFEs), defined as D×10,000, where D is the problem dimension. This ensures that every algorithm is allocated an identical computational budget proportional to the problem’s difficulty. Furthermore, to address the critical issue of fair parameterization, the settings for each comparative algorithm, including their population size strategies, were configured based on the recommendations from their respective original publications or the official CEC 2014 report. This avoids the bias of using a single, fixed setting for all algorithms.
A comprehensive list detailing the specific parameter settings for every algorithm is provided in Table 1. This standardized protocol guarantees that all algorithms are compared on a level playing field, making the performance evaluation both reproducible and credible.
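Under this protocol the evaluation budget scales linearly with dimension; a one-line helper makes the allocation explicit.

```python
def max_fes(dim):
    # CEC 2014 stopping criterion: MaxFEs = D * 10,000
    return dim * 10_000
```

For D = 30, 50, and 100 this yields 300,000, 500,000, and 1,000,000 fitness evaluations respectively, the identical budget given to every compared algorithm.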
4.2 Comparative algorithms
To comprehensively evaluate the performance of the proposed AMTSA, it was benchmarked against a large and diverse suite of state-of-the-art and classic optimization algorithms. The comparison suite was carefully selected to assess AMTSA’s performance in multiple contexts. The algorithms are grouped into three categories:
High-Performance and CEC Benchmark Algorithms. To validate AMTSA against the highest standards, we included several algorithms known for their top-tier performance in rigorous academic competitions, particularly the IEEE CEC 2014 benchmark. This group includes LSHADE [55], JADE [56], as well as other CEC competition winners and highly regarded methods such as GaAPPADE [57], MVMO-SH [58], and the Covariance Matrix Adaptation Evolution Strategy (CMA-ES) [59].
Established and Novel Heuristic Algorithms. To position AMTSA in the broader context of metaheuristics, it was also compared against other respected optimizers. This set includes Differential Evolution (DE) [60,61], Genetic Algorithm (GA) [62], and Bat Algorithm (BA) [63].
TSA Variants. To demonstrate the improvements over the original algorithm and provide a comprehensive comparison within its own family, AMTSA is compared with a wide range of TSA variants. This includes the original Tree-Seed Algorithm (TSA) [24], recent notable versions such as EST-TSA [44], fb-TSA [32], STSA [43], and MTSA [64]. The comparison is also extended to our prior works, ATSA [26] and KATSA [33].
4.3 Evaluation metrics
To analyze the algorithm’s behavior, this study assesses population diversity, the balance between exploration and exploitation, and convergence performance.
Population Diversity Evaluation. The assessment of population diversity is a key metric for evaluating efficiency, as it directly affects the depth of exploration in the search space [12]. This paper represents population diversity using the dispersion between individuals and the centroid of the group, calculated using Eqs (28) and (29).
Exploration and Exploitation Analysis. The balance between exploration and exploitation is a cornerstone of performance in metaheuristic algorithms. Exploration refers to the algorithm’s ability to search broadly across the entire solution space to discover globally promising regions, while exploitation involves refining the search in the vicinity of known good solutions to find the local optimum precisely. A successful algorithm must transition smoothly from exploration to exploitation during its run. To quantitatively measure these two behaviors, this study adopts a widely recognized method based on population diversity. The diversity, which reflects the spread of individuals in the population, is calculated as the average distance of individuals from the population centroid; the diversity at iteration t is calculated using Eqs (30) and (31). A high diversity value indicates that the population is widely dispersed, signifying a state of exploration. Conversely, a low diversity value suggests that the population has converged around promising areas, which is characteristic of exploitation. Therefore, we define the percentage of exploration and exploitation at iteration t as follows:
where the diversity at the current iteration t is measured against the maximum diversity recorded throughout the entire optimization process. This metric allows for a dynamic assessment of the algorithm’s behavior, providing clear insights into how it balances the search process over time.
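The diversity-based split can be computed as below. Since Eqs (28)–(31) are not reproduced in this excerpt, the centroid-distance form and the percentage ratios follow the standard formulation and should be read as a sketch of the metric rather than the paper's exact equations.

```python
import statistics

def diversity(pop):
    """Average distance of individuals from the population centroid
    (dimension-wise mean distance, in the spirit of Eqs (28)-(31))."""
    dim = len(pop[0])
    centroid = [statistics.mean(ind[d] for ind in pop) for d in range(dim)]
    return statistics.mean(
        sum(abs(ind[d] - centroid[d]) for d in range(dim)) / dim for ind in pop
    )

def xpl_xpt(div_t, div_max):
    """Exploration% and exploitation% at iteration t from current vs. max diversity."""
    xpl = 100.0 * div_t / div_max
    xpt = 100.0 * abs(div_t - div_max) / div_max
    return xpl, xpt
```

When the current diversity equals half the maximum, exploration and exploitation are balanced at 50% each, which is the crossover point visible in plots such as Fig 9.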
Convergence Curve. The convergence curve is used to visually assess an algorithm’s efficiency and convergence speed. It plots the best fitness value found so far against the number of iterations. A steeper curve indicates faster convergence toward the optimal solution.
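A best-so-far convergence curve is obtained by taking the running minimum of the per-iteration best fitness:

```python
def best_so_far(history):
    # Convergence curve: running minimum of the best fitness found per iteration
    curve, cur = [], float("inf")
    for f in history:
        cur = min(cur, f)
        curve.append(cur)
    return curve
```

Plotting this curve against the iteration index gives the monotone non-increasing trace whose slope is read as convergence speed.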
5 Analysis and discussion
This section presents a comprehensive performance evaluation of the proposed AMTSA algorithm, examining its capabilities through a multi-faceted series of experiments. The evaluation begins in Sect 5.1 with a qualitative analysis of AMTSA’s convergence behavior and search dynamics. Sect 5.2 then provides an in-depth quantitative benchmark against a wide range of state-of-the-art and classic optimizers. To further dissect the algorithm’s properties, subsequent sections investigate its sensitivity to key parameters (Sect 5.3), validate the contribution of each novel component through an ablation study (Sect 5.4), and assess its practical computational efficiency and scalability (Sect 5.5). The section concludes with rigorous statistical tests in Sect 5.6 to confirm the significance of the observed results.
5.1 Qualitative analysis
Qualitative analysis is one criterion for assessing algorithm performance. We therefore conducted convergence behavior, population diversity, and exploration and exploitation analyses to observe how the algorithm behaves. The unimodal function F1 was chosen to evaluate the algorithm’s exploitation ability, and the multimodal function F8 to evaluate its exploration ability, as shown in Fig 5 below.
5.1.1 Convergence behavior analysis.
In the case where the algorithm was subjected to a convergence behavior analysis, a total of four tests were performed with the following specific features.
- Figs 6(a) and 7(a) show the optimization process of the AMTSA algorithm. In these figures, the black dots represent the area covered by the current seed, while the red dots indicate the best position found, i.e., the optimal solution. The clustering of black dots around the red dots indicates that AMTSA is moving towards convergence during the gradual optimization process.
- Figs 6(b) and 7(b) demonstrate the convergence of AMTSA, clearly reflecting its ability to rapidly approach the optimal solution. The significant decreases in the convergence curves not only highlight the algorithm’s efficiency in the optimization search process, but also demonstrate its keen ability to explore the problem space. This fast convergence property gives AMTSA a significant advantage in complex optimization tasks.
- Figs 6(c) and 7(c) monitor the change in the first dimension, providing important insight into the behavior of the algorithm and showing how it avoids premature convergence to a local optimum. Experimental evidence shows that the AMTSA algorithm is able to efficiently steer the search process away from local optima, thus ensuring that it explores a wider solution space. This ability significantly improves its performance in complex optimization problems.
- Figs 6(d) and 7(d) demonstrate the convergence trend of the mean over multiple iterations. In these figures, the significant decrease in the curve indicates that AMTSA performs well in terms of overall convergence, further validating its effectiveness in optimization tasks. This result not only reflects the stability of the algorithm, but also emphasizes its reliability and advantages in dealing with complex problems.
5.1.2 Population diversity analysis.
In optimization algorithms, population diversity is a key metric for evaluating efficiency, as it directly affects the depth of exploration in the search space. Higher population diversity helps avoid local optima and maintains the algorithm’s global search capability. Thus, maintaining appropriate population diversity is crucial for ensuring the algorithm’s robustness and strong global search performance.
To analyze the population diversity dynamics of AMTSA, Fig 8 contrasts its behavior on different types of test functions: the unimodal functions F1 (Fig 8a) and F2 (Fig 8b), and the complex multimodal function F8 (Fig 8c). When addressing simple unimodal problems such as F1 and F2, the population diversity of AMTSA decays rapidly. This phenomenon is not a deficiency in the algorithm’s exploratory capability but is rather a direct manifestation of its efficient exploitation strategy. As the function landscape is simple with a unique global optimum, the algorithm quickly locates the target region, prompting the population to converge efficiently, which naturally leads to a decrease in diversity. In sharp contrast, when faced with the complex multimodal function F8, which features numerous local optima, AMTSA demonstrates its powerful exploration capability. As shown in Fig 8(c), AMTSA proactively increases and persistently maintains a level of population diversity far higher than that of TSA from the early stages of the search. This sustained high diversity, driven by the dynamic seed generation strategy, is the key mechanism that enables the algorithm to perform global exploration and avoid premature convergence. In summary, AMTSA’s population diversity is not static but is intelligently regulated according to the problem’s complexity. It prioritizes convergence efficiency on simple problems while emphasizing global exploration on complex ones. This adaptive mechanism is the core reason for its robust performance across diverse optimization tasks.
5.1.3 Exploration and exploitation analysis.
To evaluate AMTSA’s search dynamics, its exploration and exploitation behaviors were analyzed on both a simple unimodal function (F1) and a complex multimodal function (F8). The results, compared against the standard TSA, are presented in Fig 9. As shown in Fig 9(a) for the unimodal function F1, both AMTSA and TSA exhibit a rapid transition to full exploitation. This behavior is expected and efficient for a function with a single global optimum, as it allows the algorithms to quickly converge without allocating unnecessary resources to prolonged global exploration. However, the superiority of AMTSA’s adaptive strategy becomes evident on the complex multimodal function F8, as depicted in Fig 9(b). While the standard TSA displays a slow, gradual shift from exploration to exploitation, AMTSA maintains a high level of exploration for a significantly longer duration (approximately 350 iterations). This sustained exploratory phase, driven by its adaptive mechanisms, allows AMTSA to thoroughly survey the rugged search landscape and effectively evade the numerous local optima. Following this crucial phase, AMTSA makes a decisive transition to full exploitation to refine the best-found solution.
In summary, these results confirm that AMTSA possesses a more sophisticated control over its search dynamics. It intelligently allocates computational effort by prioritizing rapid convergence on simple problems and sustained exploration on complex ones, which is key to its robust performance.
5.2 Quantitative analysis
In this section, we demonstrate the superior performance of AMTSA through three well-designed experiments. Sect 5.2.1 provides a rigorous comparison of AMTSA with multiple TSA variants, clearly revealing its strengths. Sect 5.2.2 then extends the comparison to additional metaheuristics, highlighting AMTSA’s unique ability to cope with complex problems. Finally, box plots generated from the results of 30 independent runs provide an intuitive view of AMTSA’s stability over multiple iterations. Together, these experimental results support AMTSA’s position as an efficient and reliable optimization algorithm.
5.2.1 Comparative Experiment 1: AMTSA versus EST-TSA, MTSA, TSA, STSA, and fb-TSA.
The first comparative experiment aims at evaluating the basic TSA [24] and its latest variants, including EST-TSA [44], fb-TSA [32], STSA [43], and MTSA [64], to demonstrate their respective advantages. Tables 2–4 provide comparative evaluations of AMTSA with these algorithms on 30, 50 and 100 dimensions. In addition, the convergence curves are shown in detail in Figs 10–12, providing a visual basis for understanding the performance of the different algorithms. These comparisons not only reveal the superiority of AMTSA in high-dimensional spaces, but also emphasize its unique performance in terms of convergence speed and stability.
Tables 2–4 show the average optimal values obtained through 30 experimental iterations, each of which was performed for 500 iterations, which provides a valuable perspective for evaluating the convergence performance of the algorithms. These aggregated data not only help to identify the superiority of different algorithms, but also provide a basis for further performance analysis, reflecting the differences in the performance of the algorithms in the optimization process. An in-depth analysis of these results enables a more comprehensive understanding of the potential and advantages of AMTSA in practical applications.
In addition, the convergence process over 500 iterations was recorded in detail for each experimental iteration to reveal the performance dynamics. By computing the average of 30 locally optimal solutions at each iteration point, we obtained average convergence curves that demonstrate the subtle performance changes of the algorithm during the optimization process. The slopes of these curves, in turn, provide a quantitative assessment of the speed of convergence, further enhancing the understanding of the algorithm’s efficiency. This in-depth analysis allows us to more clearly identify the performance characteristics at different stages, providing a valuable reference for improving the algorithm.
The results show that designing an adaptive migration mechanism enhances the ability to jump out of local optima. In addition, the improvement of the position update formulation (Eq (21)) significantly increases the convergence speed compared to the previously proposed TSA variant.
Based on multiple comparisons and experimental data, we conclude that the algorithm designed in this study outperforms other TSA algorithms and their variants in terms of optimization.
5.2.2 Comparative Experiment 2: AMTSA versus established and novel heuristic optimization algorithms.
Tables 5–7 present the detailed performance results of AMTSA over 30 experiments, with systematic comparisons against other algorithms, recording the mean optimal values and overall rankings. These statistical data clearly indicate that AMTSA demonstrates unique advantages in handling complex, particularly multimodal, problems.
To more intuitively illustrate the dynamic convergence process, and in response to the valuable reviewer feedback on providing a comprehensive presentation, we have revised the convergence curves shown in Figs 13–15. The revised figures now feature a deliberately chosen set of representative functions intended to provide a more balanced and objective overview of algorithm performance. In addition to cases where AMTSA is dominant, we have specifically included functions where it was not the top performer at each dimension: F5 and F25 were added at 30 dimensions (Fig 13); F12 and F25 at 50 dimensions (Fig 14); and F12 and F30 at 100 dimensions (Fig 15).
A comprehensive analysis of these selected curves reveals that while AMTSA shows excellent performance in most scenarios, the newly included plots provide a more complete picture. On certain functions, other state-of-the-art algorithms are highly competitive. For example, on the complex composite function F25, CMA-ES demonstrates more stable convergence in the later stages. Similarly, in high-dimensional tests on F12 and F30, algorithms like LSHADE also achieve very competitive results. This indicates that while our proposed adaptive migration mechanism and GBO-inspired nonlinear step-size strategy significantly enhance overall performance, the choice of the optimal algorithm remains problem-dependent, consistent with the No Free Lunch theorem. Together, these experimental results confirm the effectiveness of AMTSA as an efficient and robust optimization tool and position it as a valuable option for solving complex optimization problems.
The mean values for AMTSA and 11 comparative algorithms in 30 dimensions.
5.3 Parameter sensitivity analysis
The proper setting of algorithmic parameters is crucial for its performance and robustness. This section aims to evaluate the dependence of the AMTSA algorithm on key internal parameters through a systematic parameter sensitivity analysis, providing empirical support for the effectiveness of the parameter values selected in this paper. We primarily focus on three key parameters: the search tendency parameter ST, the calculation constant for the shape parameter k in the dynamic Weibull distribution (Eq (16)), and the dynamic range of the β parameter in the adaptive step size strategy (Eq (18)). All experiments were conducted on CEC 2014 benchmark functions with a dimension of 100. Each set of experiments was independently run 30 times, and the average best fitness value was recorded.
The ST parameter plays a crucial role in balancing global exploration and local exploitation within the Tree-Seed Algorithm. To evaluate the sensitivity of AMTSA to ST, we tested ST = 0.05, 0.1, 0.3, 0.5, and 0.8 on F1 (unimodal), F2 (unimodal), F4 (multimodal), F5 (multimodal), F17 (composition function), and F18 (composition function) with a dimension of 100. Table 8 presents the average best values of AMTSA for different ST values on these test functions. As can be seen from Table 8, when ST is set to 0.1, AMTSA achieves the best average optimal values across all tested functions, indicating that ST = 0.1 provides an ideal balance between exploration and exploitation for AMTSA. Both smaller and larger ST values may degrade performance, for example through premature convergence to local optima or slowed convergence. The results on F5 remained essentially unchanged across ST settings, further validating the reasonableness of the experimental setup. These findings strongly support the decision to adopt ST = 0.1 as the default parameter in this paper and demonstrate the good robustness of AMTSA to the ST parameter.
Additionally, we performed a sensitivity analysis on the constant term Ck = 10 included in the calculation of parameter k in Eq (16) and on the dynamic range of parameter β in Eq (18). As shown in Fig 16, the sensitivity analysis for the constant Ck in Eq (16) reveals that when Ck is 10, the convergence curves of AMTSA on F1, F4, and F17 are the smoothest, and the final convergence accuracy is the highest. This indicates that Ck = 10 helps the algorithm maintain an appropriate population diversity and search efficiency at different search stages. Similarly, the sensitivity analysis of the dynamic range for parameter β in Eq (18) intuitively demonstrates that when the range is set to [0.2, 1.2], AMTSA achieves the fastest convergence speed on F2, F5, and F18, and the stability of the final solution is optimal. This range design ensures a smooth transition between global exploration in the early stage and local exploitation in the later stage, effectively avoiding premature convergence and local optima, thereby significantly enhancing the overall performance of the algorithm.

In summary, the parameter sensitivity analysis conducted in this section shows that the AMTSA algorithm has good robustness to the selected key parameters. The experimental results support the parameter choices adopted in this paper, proving that these parameter values enable AMTSA to achieve an effective balance between exploration and exploitation and maintain stable high performance on different types of complex optimization problems. Although the current parameters have been optimized through systematic analysis, future work could further explore a more complete self-adaptive parameter adjustment mechanism to further improve the algorithm’s generality and adaptability across a wider range of problem domains.
5.4 Ablation study of AMTSA components
To scientifically validate the effectiveness of each new component in AMTSA, an ablation study was conducted. This study was designed to analyze the contribution of three core mechanisms: the dynamic Weibull distribution-based seed strategy (DS), the nonlinear step-size adjustment (NS), and the adaptive migration (AM). We compared the performance of the full AMTSA against the original TSA (as a baseline) and three variants with a single mechanism removed (AMTSA-noDS, AMTSA-noNS, and AMTSA-noAM). The comparison was performed on functions F1, F4, F17, and F30 at 30 dimensions under identical experimental conditions.
The convergence curves for each algorithm are presented in Fig 17. It is evident from the figure that compared to the complete AMTSA, removing any single component (DS, NS, or AM) leads to a significant degradation in convergence speed and solution accuracy on most test functions. This result provides strong evidence that all three of our proposed innovations make indispensable contributions to the algorithm’s performance, and their synergy is key to AMTSA’s ability to efficiently solve complex problems.
5.5 Computational efficiency and scalability analysis
In addition to solution quality, the practical utility of an optimization algorithm is also determined by its computational efficiency. To assess this, we conducted an empirical runtime comparison to evaluate the practical computational cost and scalability of AMTSA. The experiment was designed to measure the average wall-clock time required for AMTSA and a representative set of comparative algorithms, including its baseline TSA, the classic DE, and the high-performance JADE and LSHADE, to complete their standard search budget. All tests were executed on the same machine to ensure a fair comparison. We measured the average CPU time in seconds over 30 independent runs for each algorithm to complete the full MaxFEs = D × 10,000 evaluations on several benchmark functions at dimensions D = 30, 50, and 100.
The results of this analysis are presented in Table 9. As expected, the computationally lightweight DE algorithm was the fastest in all scenarios. The results show that AMTSA incurs a moderate computational overhead compared to its baseline, TSA, which is attributable to the additional calculations required by its three adaptive mechanisms. However, this modest increase in runtime is justified by the significant improvement in optimization accuracy, as demonstrated in our main experimental results. Importantly, AMTSA’s runtime is highly competitive with other state-of-the-art algorithms like JADE and LSHADE, indicating that its efficiency is well within the range of modern high-performance optimizers. Furthermore, by observing the increase in runtime as the dimension D grows from 30 to 100, it is evident that AMTSA exhibits good scalability. Its computational cost increases at a rate comparable to the other established algorithms, confirming its suitability for tackling high-dimensional problems.
5.6 Statistical experiments
To rigorously evaluate the performance of the proposed AMTSA, a statistical comparison was conducted using the Wilcoxon’s Signed-Rank Test [65], with the results presented in Table 10. In this analysis, AMTSA was systematically benchmarked against the ten state-of-the-art and classic algorithms featured in Comparative Experiment 2. The table details the p-values computed at significance levels of α=0.1 and α=0.05, and indicates whether the null hypothesis–that there is no significant difference between the paired algorithms–was rejected. This non-parametric statistical test is particularly well-suited for this analysis as it does not assume a normal distribution of the results, thus providing a more accurate and robust reflection of the true performance differences.
The statistical results unequivocally demonstrate the significant superiority of the AMTSA algorithm. As evidenced in Table 10, AMTSA achieved a statistically significant advantage over all ten comparative algorithms across all tested dimensions (D = 30, 50, and 100). In every paired comparison, the calculated p-value was substantially lower than the strictest significance level of α=0.05, leading to the consistent rejection of the null hypothesis. This outcome not only confirms the superior performance and stability of AMTSA but also provides strong statistical validation for its effectiveness. In summary, the AMTSA algorithm is not only a high-performing optimizer but is also statistically proven to be a significant advancement over a wide range of established and high-performance metaheuristics, underscoring its substantial value and potential for wide-ranging applications in complex optimization problems.
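For readers wishing to reproduce this analysis, a paired signed-rank comparison can be sketched as below. This routine uses the normal approximation (reasonable for n = 30 paired function results) rather than exact critical tables, so it is an illustrative stand-in for the statistical software used in the paper.

```python
import math

def wilcoxon_signed_rank(x, y):
    """Paired Wilcoxon signed-rank test, two-sided, normal approximation
    (adequate for n >= ~20 pairs, e.g. 30 benchmark functions)."""
    diffs = [a - b for a, b in zip(x, y) if a != b]   # drop zero differences
    n = len(diffs)
    order = sorted(range(n), key=lambda i: abs(diffs[i]))
    ranks = [0.0] * n
    i = 0
    while i < n:                                     # average ranks over ties in |d|
        j = i
        while j + 1 < n and abs(diffs[order[j + 1]]) == abs(diffs[order[i]]):
            j += 1
        avg = (i + j) / 2 + 1
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    w_plus = sum(r for d, r in zip(diffs, ranks) if d > 0)
    w = min(w_plus, n * (n + 1) / 2 - w_plus)        # test statistic
    mu = n * (n + 1) / 4
    sd = math.sqrt(n * (n + 1) * (2 * n + 1) / 24)
    z = (w - mu) / sd
    p = 2 * 0.5 * math.erfc(-z / math.sqrt(2))       # two-sided p-value
    return w, min(1.0, p)
```

With 30 paired results where one algorithm is consistently better, the statistic collapses toward zero and the p-value falls far below α = 0.05, mirroring the rejections reported in Table 10.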
6 A study on the application of AMTSA-optimized lung cancer CT image segmentation and recognition
This section presents two core applications in lung cancer CT image processing: a multi-threshold image segmentation model optimized using the AMTSA algorithm and a lung cancer recognition model. In the segmentation component, by incorporating the Minimum Symmetric Cross-Entropy (MSCE) criterion, a multi-threshold segmentation method optimized using AMTSA is proposed to achieve precise differentiation between lung nodules and the background. Its performance is quantitatively evaluated using the Structural Similarity Index (SSIM) and Peak Signal-to-Noise Ratio (PSNR). In the lung cancer recognition component, a Convolutional Neural Network (CNN) is first employed to extract deep features from CT images, followed by the optimization of a Support Vector Machine (SVM) using AMTSA to construct an efficient recognition model, with accuracy, recall, and F1 score serving as the evaluation metrics.
6.1 AMTSA-optimized multi-threshold lung cancer CT image segmentation model
Lung cancer CT image segmentation aims to accurately distinguish lung nodules from the background. Multi-threshold segmentation methods partition the image into different regions by setting multiple gray-level thresholds, thereby enabling fine segmentation. However, traditional multi-threshold segmentation algorithms often encounter issues such as high computational complexity and susceptibility to local optima in practical applications, leading to imprecise threshold selection and, consequently, limiting both segmentation performance and clinical reliability.
To address these challenges, this paper proposes a multi-threshold lung cancer CT image segmentation method optimized using the AMTSA algorithm. The proposed approach first constructs a fitness function based on the gray-level features of CT images and adopts the Minimum Symmetric Cross-Entropy (MSCE) criterion as the evaluation standard, thereby transforming the multi-threshold segmentation problem into a threshold optimization task solved by AMTSA. By fully leveraging the adaptive tree migration mechanism and the seed generation strategy based on the dynamic Weibull distribution inherent in AMTSA, the algorithm iteratively optimizes within a complex search space to obtain the optimal threshold set, achieving precise segmentation of lung nodules and the background. Experimental results indicate that this method significantly outperforms traditional approaches in segmentation performance metrics, such as the Structural Similarity Index (SSIM) and Peak Signal-to-Noise Ratio (PSNR), offering more efficient and accurate image analysis support for early lung cancer diagnosis.
To ensure the fairness and reproducibility of our SVM-based recognition experiments, all comparative optimization algorithms sought the optimal hyperparameters for the SVM under identical conditions. Specifically, all algorithms operated within a unified, predefined search space. The search range for the SVM’s regularization parameter, C, was set to [0.1, 100], while the range for the RBF kernel coefficient, gamma, was set to [0.001, 1]. This standardized setup ensures that the final performance differences directly reflect the efficiency of the optimization algorithms themselves, rather than any variance in their search domains.
6.1.1 AMTSA-based multi-threshold selection implementation.
To ensure the quality and consistency of the input data, all CT images underwent a three-step preprocessing pipeline before feature extraction. First, we performed grayscale normalization to linearly map the pixel values of each image to a standard range of [0, 255], which mitigates variations caused by different scanning parameters. Second, a 5x5 median filter was applied to suppress noise, such as Gaussian and salt-and-pepper noise, while preserving the edge details of lung nodules. Finally, we utilized histogram equalization to enhance the image contrast, making the distinction between nodule regions and surrounding tissues more prominent for the subsequent segmentation task.
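The three steps can be sketched with numpy/scipy alone (a hypothetical helper; the paper does not name its implementation libraries):

```python
import numpy as np
from scipy.ndimage import median_filter

def preprocess_ct(img):
    """Sketch of the three-step preprocessing pipeline described above."""
    img = img.astype(float)
    # 1) grayscale normalization: linear map of pixel values to [0, 255]
    lo, hi = img.min(), img.max()
    img = (img - lo) / max(hi - lo, 1e-12) * 255.0
    # 2) 5x5 median filter to suppress salt-and-pepper / Gaussian noise
    img = median_filter(img, size=5)
    # 3) histogram equalization via the cumulative distribution function
    u8 = img.astype(np.uint8)
    hist = np.bincount(u8.ravel(), minlength=256)
    cdf = hist.cumsum() / hist.sum()
    return np.round(cdf[u8] * 255.0).astype(np.uint8)
```

Applied to a raw scan, this maps arbitrary intensity ranges onto a denoised, contrast-enhanced 8-bit image ready for thresholding.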
6.1.2 AMTSA for optimal multi-threshold segmentation.
In this application, we formulate the multi-threshold image segmentation challenge as an optimization problem to be solved by our proposed AMTSA. The primary objective is to identify an optimal set of n grayscale thresholds, denoted as T = {t1, t2, ..., tn}, which can most effectively distinguish lung nodules from the surrounding background tissue.
1) Solution Representation: In this context, each individual (a "tree") within the AMTSA population represents a complete candidate solution. It is encoded as a D-dimensional vector, where D equals the number of thresholds to be found (e.g., D = 2, 10, or 20). The position of each tree in the search space corresponds to a specific combination of threshold values, with each value constrained within the image's grayscale range of [0, 255].
2) Fitness Evaluation: To guide the optimization process, we employ the Minimum Symmetric Cross-Entropy (MSCE) as the fitness function [cite]. For any set of thresholds proposed by a tree, the image is partitioned into D+1 classes. The total cross-entropy of all classes is then calculated according to Eq (36). The optimization goal for AMTSA is to discover the threshold vector that minimizes the MSCE value, as a lower MSCE score signifies a more optimal and stable image segmentation.
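As a concrete illustration of steps 1) and 2), the sketch below scores a candidate threshold vector against an image histogram. It uses one common formulation of symmetric cross entropy; the paper's Eq (36) may differ in detail:

```python
import numpy as np

def msce(hist, thresholds):
    """Symmetric cross-entropy objective for a candidate threshold set.

    hist: 256-bin grayscale histogram (counts); thresholds: cut points
    in (0, 255). Lower is better.
    """
    g = np.arange(256, dtype=float) + 1e-12  # guard log(0) at gray level 0
    edges = [0] + sorted(int(t) for t in thresholds) + [256]
    total = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        h = hist[lo:hi].astype(float)
        if h.sum() == 0:
            continue  # empty class contributes nothing
        mu = (h * g[lo:hi]).sum() / h.sum()  # class mean gray level
        # symmetric cross entropy between gray levels and their class mean
        total += (h * (g[lo:hi] * np.log(g[lo:hi] / mu)
                       + mu * np.log(mu / g[lo:hi]))).sum()
    return total

# Toy bimodal histogram: the best single threshold falls between the modes.
hist = np.zeros(256)
hist[55:70] = 100.0
hist[180:200] = 100.0
best_t = min(range(1, 256), key=lambda t: msce(hist, [t]))
assert 70 <= best_t <= 180
```

In the full method, AMTSA minimizes this function over D-dimensional threshold vectors instead of the exhaustive single-threshold scan shown here.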
6.1.3 Lung cancer CT image segmentation experiments and results analysis.
Lung Cancer CT Image Dataset
In evaluating the performance of the AMTSA algorithm in multi-threshold lung cancer CT image segmentation, this study compares it with other optimization algorithms, including the Sine Cosine Algorithm (SCA) [66], Social Network Search (SNS) [67], Harris Hawks Optimization (HHO) [68], Grey Wolf Optimizer (GWO) [41], and Dung Beetle Optimizer (DBO) [69]. To comprehensively assess these algorithms, the Peak Signal-to-Noise Ratio (PSNR) and Structural Similarity Index (SSIM) are used for evaluation.
For the multi-threshold image segmentation application, we utilized the Lung PET-CT-Dx dataset, publicly available from The Cancer Imaging Archive (TCIA) [cite]. The original dataset contains DICOM images with a resolution of 512x512 pixels. For our experiments, we selected four representative CT images that feature typical solid and ground-glass nodules. The datasets for this retrospective study were accessed between October 2024 and January 2025. The authors did not have access to any information that could be used to identify individual participants during or after data collection, as all data were fully anonymized by the source institutions. Fig 18 shows the data preprocessing steps.
Evaluation Metrics
To quantitatively evaluate the performance of the segmentation algorithms, we employed two widely recognized metrics: the Peak Signal-to-Noise Ratio (PSNR) and the Structural Similarity Index (SSIM) [70]. PSNR primarily measures the fidelity of the segmented image against the original, while SSIM assesses the preservation of structural information. Higher values for both metrics indicate superior segmentation performance. Fig 19 shows the optimal thresholds under different algorithmic optimization.
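For reference, PSNR follows directly from the mean squared error, and a simplified single-window SSIM can be computed from global means, variances, and covariance. The SSIM of [70] uses a local sliding window, so this global variant is an illustrative simplification:

```python
import numpy as np

def psnr(ref, seg, peak=255.0):
    """Peak Signal-to-Noise Ratio (dB) of a segmented image vs. the original."""
    mse = np.mean((ref.astype(float) - seg.astype(float)) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

def ssim_global(ref, seg, peak=255.0):
    """SSIM computed once over the whole image (no sliding window)."""
    x, y = ref.astype(float), seg.astype(float)
    c1, c2 = (0.01 * peak) ** 2, (0.03 * peak) ** 2  # standard constants
    mx, my = x.mean(), y.mean()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (x.var() + y.var() + c2))
```

Identical images give an SSIM of exactly 1, and a larger PSNR indicates higher fidelity (PSNR is unbounded for identical images, since the MSE is zero).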
Fig 20 shows the convergence curves of each algorithm on different images. The curves make it evident that the AMTSA algorithm converges exceptionally fast. Compared to other algorithms, AMTSA typically approaches the optimal solution within a very small number of iterations, sometimes reaching segmentation performance that other algorithms only attain after many more iterations. This indicates that AMTSA can find high-quality segmentation solutions quickly and at lower computational cost, making it suitable for applications with high efficiency requirements.
To further evaluate the performance of different algorithms in image segmentation, this study uses Peak Signal-to-Noise Ratio (PSNR) and Structural Similarity Index (SSIM) as evaluation metrics, representing image fidelity and structural similarity, respectively. Tables 11 and 12 show the PSNR and SSIM values of six algorithms (AMTSA, DBO, HHO, GWO, SCA, SNS) on two lung cancer CT test images, using different numbers of thresholds (2, 10, 20).
An analysis of the performance metrics in Tables 11 and 12 indicates that the AMTSA algorithm demonstrates competitive performance in the task of lung cancer CT image segmentation. Specifically, regarding the PSNR metric, AMTSA generally excels at high threshold levels (e.g., 20 thresholds), achieving PSNR values of 21.5721 and 21.6521 on the two test images, respectively. This suggests the algorithm’s strong potential for preserving image fidelity. It is noteworthy, however, that under certain settings (e.g., 10 thresholds on Test Image 1), other algorithms such as SCA exhibited comparable or even superior performance. The results for the SSIM metric present a more varied picture. For instance, on Test Image 1, the SNS and DBO algorithms achieved higher SSIM values than AMTSA at 20 thresholds, suggesting they had an advantage in maintaining the structural similarity of that particular image. Concurrently, AMTSA obtained the highest SSIM score (0.6093) for Test Image 2 at the 20-threshold level.
Taken together, these findings suggest that no single algorithm achieved absolute superiority across all metrics and test images. AMTSA shows a particular strength in the PSNR metric, especially at higher threshold levels, which may indicate a favorable trade-off between image fidelity and segmentation detail. While its SSIM performance was not universally the best, its consistently high PSNR values underscore its potential as an effective optimization tool for image segmentation.
6.2 AMTSA-optimized lung cancer recognition model
This study uses a 300-image subset of a lung cancer histopathological image dataset. The original dataset contains 25,000 histopathological images divided into five categories; all images are 768x768 pixels in JPEG format. The dataset originates from a HIPAA-compliant and validated source consisting of 750 lung tissue images (250 benign lung tissue, 250 lung adenocarcinoma, and 250 lung squamous cell carcinoma), which were then augmented to 25,000 images using the Augmentor package. For the purposes of this study, only the lung categories were selected: 100 benign lung tissue images, 100 lung adenocarcinoma images, and 100 lung squamous cell carcinoma images, totaling 300 images used to train and evaluate the AMTSA-optimized lung cancer recognition model. Restricting the data to these categories focuses the study on lung cancer classification, providing support for the development of accurate and effective diagnostic models.
To address the potential for overfitting due to the limited size of the 300-image dataset and to ensure a robust evaluation of the model’s generalization performance, we employed a 10-fold stratified cross-validation strategy. The dataset, consisting of 100 images from each of the three classes (benign, adenocarcinoma, and squamous cell carcinoma), was partitioned into 10 equal-sized folds. For each fold, the class distribution was kept consistent with the overall dataset (stratification). The cross-validation process consisted of 10 iterations. In each iteration, one unique fold was reserved as the test set, while the remaining 9 folds were used to train the AMTSA-SVM model. The final performance metrics reported in Table 13 (Accuracy, Recall, etc.) represent the average values and standard deviations computed across all 10 folds. This rigorous validation method ensures that our results are a reliable estimate of the model’s performance on unseen data.
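The stratified partitioning described above can be sketched with scikit-learn's `StratifiedKFold`; the labels below mirror the 100-per-class composition of the dataset, with placeholder features standing in for the real image data:

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold

# Labels mirroring the 300-image dataset: 100 each of benign (0),
# adenocarcinoma (1), and squamous cell carcinoma (2).
y = np.repeat([0, 1, 2], 100)
X = np.zeros((300, 1))  # placeholder features; only the labels matter here

skf = StratifiedKFold(n_splits=10, shuffle=True, random_state=42)
fold_counts = [np.bincount(y[test_idx], minlength=3).tolist()
               for _, test_idx in skf.split(X, y)]
# stratification keeps every 30-image test fold at exactly 10 per class
assert all(c == [10, 10, 10] for c in fold_counts)
```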
Early and accurate diagnosis of lung cancer is crucial for improving patient survival rates, and recognition based on lung CT images plays a key role in this process. Support Vector Machine (SVM) has demonstrated certain advantages in lung image classification, but traditional SVM is limited by its parameter selection and model optimization, which can negatively impact classification performance. The AMTSA algorithm, with its strong global search and local optimization capabilities, enables efficient optimization in complex search spaces. By incorporating AMTSA into the parameter optimization of SVM, the goal is to leverage these strengths to enhance SVM's performance in lung cancer recognition. Specifically, a fitness function is constructed from the feature data of lung CT images, transforming the SVM parameter selection problem into an optimization task for AMTSA. Through iterative searching, the optimal parameter combination for the SVM is identified, thereby achieving precise lung cancer recognition. This approach integrates the classification power of SVM with the optimization capability of AMTSA, offering the potential to significantly improve the accuracy and efficiency of lung cancer recognition and providing more reliable technical support for early diagnosis, contributing to advancements in both research and clinical applications.
6.2.1 Feature extraction from lung CT images.
Early and accurate diagnosis of lung cancer depends on the effective extraction of representative features from CT images. Due to the limitations of traditional handcrafted feature extraction methods, this study employs Convolutional Neural Networks (CNNs) for feature extraction. CNNs are capable of automatically learning complex features from images, particularly suited for capturing high-level spatial and texture information, which is crucial for effective lung cancer detection.
In this study, we use the pre-trained EfficientNetB3 model, which is based on deep convolutional neural networks and has demonstrated excellent performance in various computer vision tasks. To improve the efficiency and accuracy of feature extraction, the EfficientNetB3 model pre-trained on ImageNet is utilized, with the classification layer removed and global average pooling (GAP) applied to aggregate features. This approach converts each image into a fixed-length feature vector, facilitating subsequent classification and analysis.
The process is as follows. First, each input lung CT image is resized to 300x300 pixels and preprocessed, including normalization, to meet the input requirements of the EfficientNetB3 model: letting the input image be I, preprocessing with a normalization function f yields the normalized image I' = f(I). The normalized image is then passed through the EfficientNetB3 network, where the convolutional layers extract a feature map F = Conv(I'). Global average pooling is then applied to reduce the feature map to a fixed-length vector v = GAP(F).
The resulting feature vector is a 1536-dimensional vector containing high-level information such as texture, shape, and edges of the image. These features are then used as input for training and evaluation of the AMTSA-optimized lung cancer recognition model.
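The pooling step reduces to a channel-wise mean; below is a toy numpy stand-in for one image's backbone output, with random activations in place of real EfficientNetB3 features:

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy stand-in for one image's final EfficientNetB3 feature map: random
# activations of shape (H, W, C) with C = 1536 channels.
feature_map = rng.random((10, 10, 1536))

# Global average pooling: the mean over the spatial axes collapses each
# channel to a single value, giving one fixed-length 1536-d vector.
feature_vector = feature_map.mean(axis=(0, 1))
assert feature_vector.shape == (1536,)
```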
By utilizing CNN for feature extraction, this approach not only preserves key information from the images but also enhances the model’s generalization ability, allowing it to process various types of lung CT images and providing strong support for subsequent lung cancer recognition.
6.2.2 Lung cancer recognition process based on AMTSA-optimized SVM.
As noted above, traditional SVM is limited by its parameter selection and model optimization, while the AMTSA algorithm excels at global search and local optimization in complex search spaces. Integrating AMTSA into the SVM parameter optimization process therefore proceeds as follows: a fitness function is constructed from the feature data of lung CT images, transforming SVM parameter selection into an optimization task; AMTSA then searches iteratively for the optimal SVM parameter combination, achieving precise lung cancer recognition.
This model construction approach effectively integrates the classification power of SVM with the optimization ability of AMTSA, promising to significantly improve the accuracy and efficiency of lung cancer recognition, thus providing more reliable technical support for early lung cancer diagnosis. The specific process flow is shown in Fig 21.
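A sketch of such a fitness function, assuming the shared search bounds stated earlier (C in [0.1, 100], gamma in [0.001, 1]) and cross-validated accuracy as the objective; the paper's exact fitness definition may differ, and the synthetic data stands in for the extracted CT features:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

# Shared search bounds for all optimizers: C in [0.1, 100], gamma in [0.001, 1].
BOUNDS = np.array([[0.1, 100.0], [0.001, 1.0]])

def svm_fitness(position, X, y):
    """Fitness of one AMTSA 'tree': a 2-D position decodes to (C, gamma),
    and the negated cross-validated accuracy is returned so that
    minimizing the fitness maximizes accuracy."""
    c, gamma = np.clip(position, BOUNDS[:, 0], BOUNDS[:, 1])
    clf = SVC(C=c, gamma=gamma, kernel='rbf')
    return -cross_val_score(clf, X, y, cv=5).mean()

# Synthetic three-class stand-in for the extracted CT feature vectors.
X, y = make_classification(n_samples=120, n_features=8, n_informative=5,
                           n_classes=3, random_state=0)
fit = svm_fitness(np.array([1.0, 0.1]), X, y)
assert -1.0 <= fit <= 0.0  # negated accuracy always lies in [-1, 0]
```

AMTSA then minimizes `svm_fitness` over the 2-D (C, gamma) search space exactly as it would any other objective function.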
6.2.3 Evaluation metrics.
To comprehensively evaluate the performance of the model, this study employs the following evaluation metrics: Accuracy, Recall, Precision and F1 Score.
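All four metrics can be computed from a single confusion matrix; below is a macro-averaged sketch for the three-class task (a hypothetical helper, not the paper's code):

```python
import numpy as np

def classification_metrics(y_true, y_pred, n_classes=3):
    """Accuracy plus macro-averaged Precision, Recall, and F1 Score."""
    cm = np.zeros((n_classes, n_classes), dtype=int)
    for t, p in zip(y_true, y_pred):
        cm[t, p] += 1  # rows: true class, columns: predicted class
    tp = np.diag(cm).astype(float)
    precision = tp / np.maximum(cm.sum(axis=0), 1)  # per predicted class
    recall = tp / np.maximum(cm.sum(axis=1), 1)     # per true class
    f1 = 2 * precision * recall / np.maximum(precision + recall, 1e-12)
    return {'accuracy': tp.sum() / cm.sum(),
            'precision': precision.mean(),
            'recall': recall.mean(),
            'f1': f1.mean()}
```

For example, `classification_metrics([0, 0, 1, 1, 2, 2], [0, 0, 1, 1, 2, 1])` yields an accuracy of 5/6, with one squamous-cell image misclassified as adenocarcinoma.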
6.2.4 Experiment results analysis.
This study aims to evaluate the effectiveness of the AMTSA-optimized Support Vector Machine (SVM) model in lung cancer CT image recognition. To ensure fairness in comparison, the AMTSA-SVM model was compared with the traditional SVM optimized by grid search (GridSearch), GA-SVM optimized by Genetic Algorithm (GA), DE-SVM optimized by Differential Evolution (DE), JADE-SVM optimized by Adaptive Differential Evolution with Optional External Archive (JADE), and TSA-SVM optimized by Tree Seed Algorithm (TSA). The parameter selection range for all algorithms was kept consistent, ensuring a fair comparison between AMTSA and the other optimization algorithms.
From the data presented in Table 13, it is evident that the AMTSA-SVM model outperforms all other algorithms across all evaluation metrics, particularly in terms of accuracy and recall, demonstrating its excellent classification and diagnostic capabilities.
Specifically, the AMTSA-SVM model achieved an accuracy of 89.5%, higher than that of the traditional SVM (76.22%), GA-SVM (79.33%), DE-SVM (82%), TSA-SVM (84.44%), and JADE-SVM (89.12%). This indicates that the AMTSA optimization process effectively enhances the classification performance of the SVM model. Furthermore, the recall rate for AMTSA-SVM was 92%, highlighting its superior ability to identify lung cancer patients and reduce the risk of missed diagnoses. Considering both precision and recall, the AMTSA-SVM model achieved the highest F1 score of 0.887, reflecting its excellent overall performance.
In conclusion, the experimental results demonstrate that the AMTSA-optimized SVM model has significant advantages in lung cancer CT image recognition tasks. This is primarily due to the strong global search and local optimization capabilities of the AMTSA algorithm, which efficiently avoids local optima and identifies the best SVM parameter combination. Compared to GA, AMTSA more effectively utilizes population information during solution space exploration, leading to faster convergence. Compared to JADE, AMTSA has superior global search capabilities, thus avoiding premature convergence.
7 Conclusions and future work
This paper introduced the Adaptive and Migration-enhanced Tree Seed Algorithm (AMTSA), a novel variant designed to overcome the significant limitations of the original TSA, such as premature convergence and the tendency to become trapped in local optima when solving complex problems. The primary contributions of this work are threefold: 1) the introduction of a dynamic seed generation strategy based on the Weibull distribution, which adaptively adjusts the algorithm’s search breadth; 2) the integration of a GBO-inspired nonlinear step-size mechanism, which ensures a smoother and more effective balance between exploration and exploitation; and 3) the design of a fitness-guided adaptive migration strategy that provides an intelligent mechanism for escaping from local optima.
The advantages of AMTSA were demonstrated through comprehensive experiments. On the rigorous IEEE CEC 2014 benchmark, AMTSA consistently outperformed not only its parent algorithm and recent TSA variants but also several state-of-the-art optimizers, confirming its superior performance in high-dimensional, complex search spaces. Furthermore, its successful application to lung cancer CT image segmentation and SVM-based recognition showcased its practical utility and the effective transfer of its search mechanics to a challenging real-world biomedical task. However, a notable disadvantage is the trade-off between performance and computational cost. Our runtime analysis indicates that the sophisticated adaptive mechanisms of AMTSA, while crucial for its success, result in a moderate increase in computational time compared to the simpler baseline TSA.
This study is also subject to several limitations. First, while AMTSA excels on complex problems, its performance on low-dimensional or simple unimodal landscapes remains unexplored. Second, several of its control parameters are fixed rather than fully self-adaptive, which might limit its robustness across a wider range of problem types. Third, the validation was restricted to the IEEE CEC 2014 benchmark and a single application domain, leaving its generality on newer and more challenging benchmarks, such as CEC 2021 and CEC 2022, unverified. Finally, the lung cancer recognition model was validated on a relatively small dataset; while cross-validation was employed, further validation on larger datasets is warranted.
Future work will focus on addressing these limitations. A key priority will be to benchmark AMTSA against the latest CEC 2021/2022 test suites to further assess its competitiveness. We plan to introduce self-tuning rules for key parameters and develop a fully dynamic search-tendency threshold to further refine the exploration-exploitation trade-off. We also aim to extend AMTSA to multi-objective and large-scale optimization tasks, such as financial feature selection and supply-chain design, where efficiency and scalability are paramount. In addition, we plan to investigate AMTSA as a global optimizer for training deep learning models, where its strong search capability may help accelerate convergence by navigating complex loss landscapes with many saddle points and shallow minima.
References
- 1. Kirkpatrick S, Gelatt CD Jr, Vecchi MP. Optimization by simulated annealing. Science. 1983;220(4598):671–80. pmid:17813860
- 2. Osaba E, Villar-Rodriguez E, Del Ser J, Nebro AJ, Molina D, LaTorre A, et al. A tutorial on the design, experimentation and application of metaheuristic algorithms to real-world optimization problems. Swarm Evolut Computat. 2021;64:100888.
- 3. Polyak BT. Newton’s method and its use in optimization. Eur J Operat Res. 2007;181(3):1086–96.
- 4. Jensen RE. A dynamic programming algorithm for cluster analysis. Oper Res. 1969;17(6):1034–57.
- 5. Zhao S, Zhang T, Cai L, Yang R. Triangulation topology aggregation optimizer: A novel mathematics-based meta-heuristic algorithm for continuous optimization and engineering applications. Expert Syst Applic. 2024;238:121744.
- 6. Blum C, Roli A. Metaheuristics in combinatorial optimization. ACM Comput Surv. 2003;35(3):268–308.
- 7. SS VC, HS A. Nature inspired meta heuristic algorithms for optimization problems. Computing. 2022;104(2):251–69.
- 8. Martí R, Sevaux M, Sörensen K. 50 years of metaheuristics. Eur J Oper Res. 2024.
- 9. Lopes Silva MA, de Souza SR, Freitas Souza MJ, de França Filho MF. Hybrid metaheuristics and multi-agent systems for solving optimization problems: A review of frameworks and a comparative analysis. Appl Soft Comput. 2018;71:433–59.
- 10. de Melo Menezes BA, Kuchen H, Buarque de Lima Neto F. Parallelization of swarm intelligence algorithms: Literature review. Int J Parallel Prog. 2022;50(5–6):486–514.
- 11. Harada T, Alba E. Parallel genetic algorithms. ACM Comput Surv. 2020;53(4):1–39.
- 12. Zhang X, Liu H, Tu L. A modified particle swarm optimization for multimodal multi-objective optimization. Eng Applic Artif Intell. 2020;95:103905.
- 13. Di Caprio D, Ebrahimnejad A, Alrezaamiri H, Santos-Arteaga FJ. A novel ant colony algorithm for solving shortest path problems with fuzzy arc weights. Alexandria Eng J. 2022;61(5):3403–15.
- 14. Wolpert DH, Macready WG. No free lunch theorems for optimization. IEEE Trans Evol Computat. 1997;1(1):67–82.
- 15. Khishe M, Mosavi MR. Chimp optimization algorithm. Expert Syst Applic. 2020;149:113338.
- 16. Umam MS, Mustafid M, Suryono S. A hybrid genetic algorithm and tabu search for minimizing makespan in flow shop scheduling problem. J King Saud Univ – Comput Inform Sci. 2022;34(9):7459–67.
- 17. Li J, Pardalos PM, Sun H, Pei J, Zhang Y. Iterated local search embedded adaptive neighborhood selection approach for the multi-depot vehicle routing problem with simultaneous deliveries and pickups. Expert Syst Applic. 2015;42(7):3551–61.
- 18. Vrugt JA, Robinson BA, Hyman JM. Self-adaptive multimethod search for global optimization in real-parameter spaces. IEEE Trans Evol Computat. 2009;13(2):243–59.
- 19. Wang D, Tan D, Liu L. Particle swarm optimization algorithm: An overview. Soft Comput. 2017;22(2):387–408.
- 20. Shadravan S, Naji HR, Bardsiri VK. The sailfish optimizer: A novel nature-inspired metaheuristic algorithm for solving constrained engineering optimization problems. Eng Applic Artif Intell. 2019;80:20–34.
- 21. Zhong C, Li G, Meng Z. Beluga whale optimization: A novel nature-inspired metaheuristic algorithm. Knowl-Based Syst. 2022;251:109215.
- 22. Bansal JC, Sharma H, Jadon SS, Clerc M. Spider monkey optimization algorithm for numerical optimization. Memetic Comp. 2014;6(1):31–47.
- 23. Akbari MA, Zare M, Azizipanah-Abarghooee R, Mirjalili S, Deriche M. The cheetah optimizer: A nature-inspired metaheuristic algorithm for large-scale optimization problems. Sci Rep. 2022;12(1):10953. pmid:35768456
- 24. Kiran MS. TSA: Tree-seed algorithm for continuous optimization. Expert Syst Applic. 2015;42(19):6686–98.
- 25. El-Fergany AA, Hasanien HM. Tree-seed algorithm for solving optimal power flow problem in large-scale power systems incorporating validations and comparisons. Appl Soft Comput. 2018;64:307–16.
- 26. Jiang J, Yang X, Li M, Chen T. ATSA: An adaptive tree seed algorithm based on double-layer framework with tree migration and seed intelligent generation. Knowl-Based Syst. 2023;279:110940.
- 27. Jiang J, Huang J, Wu J, Luo J, Yang X, Li W. DTSA: Dynamic tree-seed algorithm with velocity-driven seed generation and count-based adaptive strategies. Symmetry. 2024;16(7):795.
- 28. Osaba E, Villar-Rodriguez E, Del Ser J, Nebro AJ, Molina D, LaTorre A, et al. A tutorial on the design, experimentation and application of metaheuristic algorithms to real-world optimization problems. Swarm Evolut Comput. 2021;64:100888.
- 29. Zhan Z-H, Shi L, Tan KC, Zhang J. A survey on evolutionary computation for complex continuous optimization. Artif Intell Rev. 2021;55(1):59–110.
- 30. Rajwar K, Deep K, Das S. An exhaustive review of the metaheuristic algorithms for search and optimization: Taxonomy, applications, and open challenges. Artif Intell Rev. 2023:1–71. pmid:37362893
- 31. Beşkirli M. Solving continuous optimization problems using the tree seed algorithm developed with the roulette wheel strategy. Expert Syst Applic. 2021;170:114579.
- 32. Jiang J, Meng X, Chen Y, Qiu C, Liu Y, Li K. Enhancing tree-seed algorithm via feed-back mechanism for optimizing continuous problems. Appl Soft Comput. 2020;92:106314.
- 33. Jiang J, Wu J, Meng X, Qian L, Luo J, Li K. KATSA: KNN ameliorated tree-seed algorithm for complex optimization problems. https://doi.org/10.2139/ssrn.4636664
- 34. Chakraborty S, Saha AK, Chhabra A. Improving whale optimization algorithm with elite strategy and its application to engineering-design and cloud task scheduling problems. Cogn Comput. 2023;15(5):1497–525.
- 35. Cui Y, Shi R, Dong J. CLTSA: A novel tunicate swarm algorithm based on Chaotic-Lévy flight strategy for solving optimization problems. Mathematics. 2022;10(18):3405.
- 36. Sörensen K. Metaheuristics—The metaphor exposed. Int Trans Operat Res. 2013;22(1):3–18.
- 37. Camacho-Villalón CL, Dorigo M, Stützle T. Exposing the grey wolf, moth-flame, whale, firefly, bat, and antlion algorithms: Six misleading optimization techniques inspired by bestial metaphors. Int Trans Operat Res. 2022;30(6):2945–71.
- 38. Beşkirli M, Kiran MS. Optimization of Butterworth and Bessel filter parameters with improved tree-seed algorithm. Biomimetics (Basel). 2023;8(7):540. pmid:37999181
- 39. El-Fergany AA, Hasanien HM. Tree-seed algorithm for solving optimal power flow problem in large-scale power systems incorporating validations and comparisons. Appl Soft Comput. 2018;64:307–16.
- 40. Gharehchopogh FS. Advances in tree seed algorithm: A comprehensive survey. Arch Computat Methods Eng. 2022;29(5):3281–304.
- 41. Mirjalili S, Mirjalili SM, Lewis A. Grey wolf optimizer. Adv Eng Softw. 2014;69:46–61.
- 42. Jiang J, Liu Y, Zhao Z. TriTSA: Triple tree-seed algorithm for dimensional continuous optimization and constrained engineering problems. Eng Applic Artif Intell. 2021;104:104303.
- 43. Jiang J, Xu M, Meng X, Li K. STSA: A sine tree-seed algorithm for complex continuous optimization problems. Phys A: Stat Mecha Applic. 2020;537:122802.
- 44. Jiang J, Jiang S, Meng X, Qiu C. EST-TSA: An effective search tendency based to tree seed algorithm. Phys A: Stat Mech Applic. 2019;534:122323.
- 45. Jiang J, Han R, Meng X, Li K. TSASC: Tree–seed algorithm with sine–cosine enhancement for continuous optimization problems. Soft Comput. 2020;24(24):18627–46.
- 46. Chen X, Przystupa K, Ye Z, Chen F, Wang C, Liu J, et al. Forecasting short-term electric load using extreme learning machine with improved tree seed algorithm based on Lévy flight. Eksploatacja i Niezawodność – Maintenance Reliab. 2022;24(1):153–62.
- 47. Babalik A, Cinar AC, Kiran MS. A modification of tree-seed algorithm using Deb’s rules for constrained optimization. Appl Soft Comput. 2018;63:289–305.
- 48. Jiang J, Meng X, Liu Y, Wang H. An enhanced TSA-MLP model for identifying credit default problems. Sage Open. 2022;12(2).
- 49. Aslan MF, Sabanci K, Ropelewska E. A new approach to COVID-19 detection: An ANN proposal optimized through tree-seed algorithm. Symmetry. 2022;14(7):1310.
- 50. Abed-alguni BH, Paul D. Island-based Cuckoo Search with elite opposition-based learning and multiple mutation methods for solving optimization problems. Soft Comput. 2022;26(7):3293–312.
- 51. Tian Y, Zhang D, Zhang H, Zhu J, Yue X. An improved cuckoo search algorithm for global optimization. Cluster Comput. 2024;27(6):8595–619.
- 52. Sekaran K, Lawrence SPA. Mutation boosted salp swarm optimizer meets rough set theory: A novel approach to software defect detection. Trans Emerg Tel Tech. 2024;35(3).
- 53. Ahmadianfar I, Bozorg-Haddad O, Chu X. Gradient-based optimizer: A new metaheuristic optimization algorithm. Inform Sci. 2020;540:131–59.
- 54. di Serafino D, Toraldo G, Viola M. Using gradient directions to get global convergence of Newton-type methods. Appl Math Comput. 2021;409:125612.
- 55. Tanabe R, Fukunaga AS. Improving the search performance of SHADE using linear population size reduction. In: 2014 IEEE Congress on Evolutionary Computation (CEC). IEEE; 2014. p. 1658–65.
- 56. Zhang J, Sanderson AC. JADE: Adaptive differential evolution with optional external archive. IEEE Trans Evol Computat. 2009;13(5):945–58.
- 57. Mallipeddi R, Wu G, Lee M, Suganthan PN. Gaussian adaptation based parameter adaptation for differential evolution. In: 2014 IEEE Congress on Evolutionary Computation (CEC); 2014. p. 1760–7.
- 58. Erlich I, Venayagamoorthy GK, Worawat N. A mean-variance optimization algorithm. In: IEEE Congress on Evolutionary Computation; 2010. p. 1–6. https://doi.org/10.1109/cec.2010.5586027
- 59. Hansen N, Müller SD, Koumoutsakos P. Reducing the time complexity of the derandomized evolution strategy with covariance matrix adaptation (CMA-ES). Evol Comput. 2003;11(1):1–18. pmid:12804094
- 60. Deng W, Xu J, Song Y, Zhao H. Differential evolution algorithm with wavelet basis function and optimal mutation strategy for complex optimization problem. Appl Soft Comput. 2021;100:106724.
- 61. Das S, Abraham A, Chakraborty UK, Konar A. Differential evolution using a neighborhood-based mutation operator. IEEE Trans Evol Computat. 2009;13(3):526–53.
- 62. Ahmad A, Yadav AK, Singh A, Singh DK, Ağbulut Ü. A hybrid RSM-GA-PSO approach on optimization of process intensification of linseed biodiesel synthesis using an ultrasonic reactor: Enhancing biodiesel properties and engine characteristics with ternary fuel blends. Energy. 2024;288:129077.
- 63. Yılmaz S, Küçüksille EU. A new modification approach on bat algorithm for solving optimization problems. Appl Soft Comput. 2015;28:259–75.
- 64. Jiang J, Meng X, Qian L, Wang H. Enhance tree-seed algorithm using hierarchy mechanism for constrained optimization problems. Expert Syst Applic. 2022;209:118311.
- 65. Harris T, Hardin JW. Exact Wilcoxon signed-rank and Wilcoxon Mann–Whitney Ranksum tests. Stata J: Promot Commun Stat Stata. 2013;13(2):337–43.
- 66. Mirjalili S. SCA: A sine cosine algorithm for solving optimization problems. Knowl-Based Syst. 2016;96:120–33.
- 67. Talatahari S, Bayzidi H, Saraee M. Social network search for global optimization. IEEE Access. 2021;9:92815–63.
- 68. Heidari AA, Mirjalili S, Faris H, Aljarah I, Mafarja M, Chen H. Harris hawks optimization: Algorithm and applications. Future Gener Comput Syst. 2019;97:849–72.
- 69. Xue J, Shen B. Dung beetle optimizer: A new meta-heuristic algorithm for global optimization. J Supercomput. 2022;79(7):7305–36.
- 70. Kumar A, Kumar A, Vishwakarma A, Singh GK. Multilevel thresholding for crop image segmentation based on recursive minimum cross entropy using a swarm-based technique. Comput Electron Agric. 2022;203:107488.