Abstract
Kernel Search Optimization (KSO) is characterized by insufficient accuracy in local search, which makes it difficult to achieve local optimization. Therefore, this paper proposes a Large Local Search Kernel Search Optimization (LLSKSO) to enhance the local optimization ability. LLSKSO achieves this improvement by introducing several strategies. First, the initial population is homogenized using the good point set mechanism. Then, the little dung beetle search mechanism of the Dung Beetle Optimizer (DBO) is introduced to enhance the local search capability of KSO. Finally, the Cauchy-Gaussian mutation strategy is utilized to prevent the algorithm from falling into local traps. These three steps enable LLSKSO to achieve a dynamic balance between local and global search. In addition, to verify the performance and robustness of LLSKSO, comparison experiments between LLSKSO and 10 well-known algorithms are conducted on 50 benchmark test functions. Judging from the statistical results (mean, best, and variance), LLSKSO outperforms the other algorithms; it achieves theoretical optima on 16 of the 20 high-dimensional benchmark functions, with an average CPU runtime reduced by 30% compared to baseline methods. Finally, LLSKSO is applied to the engineering problem of carbon fiber drafting ratio optimization, where it yields smaller line densities and greater strengths than the other algorithms. Therefore, LLSKSO can serve as an effective optimization algorithm and engineering aid.
Citation: Dong R, Cui R, Cai Z, Heidari AA, Liu L, Liu Y, et al. (2025) Enhanced kernel search algorithm for optimizing local search capability and its application to carbon fiber draft process. PLoS One 20(11): e0334348. https://doi.org/10.1371/journal.pone.0334348
Editor: Aykut Fatih Güven, Yalova University, TÜRKIYE
Received: May 21, 2025; Accepted: September 25, 2025; Published: November 26, 2025
Copyright: © 2025 Dong et al. This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
Data Availability: The datasets used in this study consist of benchmark functions and a mathematical model function of carbon fiber. All results derived from these datasets are fully presented within the article. No additional external data were used or generated for this study.
Funding: This research was supported by the Science and Technology Development Project of Jilin Province, China (YDZJ202201ZYTS555).
Competing interests: The authors have declared that no competing interests exist.
1. Introduction
Metaheuristic Algorithms (MAs) [1] are intelligent optimization algorithms that build algorithmic models to solve problems by imitating the biological behaviors of natural groups of organisms, such as foraging, reproduction, and avoiding natural enemies. The core idea of MAs is to form a group of many individuals that solve the problem through cooperation, competition, interaction, and learning mechanisms. They can solve complex problems in the absence of local information and models [2]. Therefore, MAs have the advantages of strong solving ability and fast computation. Many scholars have studied them extensively and proposed many MAs [3–9], such as Ant Colony Optimization (ACO) [10], Particle Swarm Optimization (PSO) [11], the Firefly Algorithm (FA) [12], the Seagull Optimization Algorithm (SOA) [13], the Whale Optimization Algorithm (WOA) [14], Harris Hawks Optimization (HHO) [15], and the Sparrow Search Algorithm (SSA) [16].
Nonetheless, inherent MAs frequently struggle to achieve an adequate equilibrium, notably in practical applications. Therefore, to give MAs better searching ability for solving various practical optimization problems efficiently, many improvement strategies have been applied to these algorithms [17–21], and their performance has been verified by many experiments in the existing literature.
For the improvement of PSO, Liu et al. [22] proposed an adaptive weighting PSO, in which a weighting strategy based on an S-shaped function dynamically modifies the acceleration coefficients. Subsequently, the same team proposed a randomized PSO [23], which uses Gaussian white noise of adjustable intensity to randomly alter the acceleration coefficients and can explore the problem's search space more extensively. Jia [24] proposed an improved PSO fusing multiple strategies, which balances global and local exploration. Liang et al. [25] proposed an improved simplified PSO based on Levy flight, which eliminates the velocity term in the PSO update formula, and applied it to the min-max-min problem. For ACO, Luo et al. [26] proposed an improved ACO and applied it in different robot movement simulation environments; compared with the original ACO, it performs better in global optimal search capability and convergence speed. Liu et al. [27] proposed a greedy-Levy ACO based on the max-min ACO, combining epsilon-greedy selection and Levy flight to solve complex combinatorial optimization problems. Zhao et al. [28] proposed an improved ACO for the path planning of robots in complex environments, and for the same application scenario, Yang et al. [29] proposed an improved ACO based on adaptive archive updating. For the FA, Tao et al. [30] proposed an improved FA using a random self-attraction model, in which each firefly is attracted to another randomly selected firefly, and used the concept of Cauchy jumps in the FA to achieve better accuracy and robustness. Aref et al. [31] introduced the tidal force formulation into the FA framework, whose flexibility provides new insights for balancing global and local search. Wang et al. [32] proposed an improved FA based on gender differences, in which male fireflies perform global search through random selection and direction judgment while female fireflies find high-quality solutions through local search, effectively balancing the algorithm's global and local search. For the improvement of HHO, Hussain et al. [33] introduced the concept of a long-term memory sequence, which prevents the algorithm from falling into local optima by placing the optimal individual of each iteration into the sequence and guiding the remaining individuals to converge toward it, thus increasing population diversity. Yin et al. [34] used an infinite-folding chaotic strategy to initialize the population and improve the quality of the initial solutions, introduced the golden sine operator in the exploration stage to improve the global search ability of HHO, and introduced lens imaging learning and Cauchy mutation to perturb the optimal position and prevent HHO from falling into local optima. Zhu et al. [35] identified the state of the convergence curve by calculating the optimal descent rate at each iteration of HHO and introduced the convergence correction mechanism of the bacterial foraging algorithm into the local search stage to improve solution accuracy.
Another HHO variant incorporates the energy consumption law of organisms in motion into the escape energy and jumping energy to balance the exploration and exploitation of HHO; when the algorithm falls into a local optimum, the escape energy is perturbed to jump out of it. Liu et al. [36] designed a square-field-based multi-subpopulation topology, which divides the population into k subpopulations that iterate in the vertical and horizontal directions using the original strategy. The SSA tends to converge prematurely, resulting in poor solution accuracy. Cheng et al. [37] proposed a chaotic SSA, which uses a Tent chaotic sequence to initialize the population and perturb the optimal solution and then improves the algorithm with Gaussian mutation, enhancing its local search ability. Gao et al. [38] combined a bird flocking algorithm to improve the position update formula of the finder in SSA, improving the global search capability. Jie et al. [39] used cubic mapping to initialize the population, improved the joiner update formula by combining the sine-cosine algorithm, and enhanced the global search capability with strategies such as reverse learning and Gaussian wandering. Ouyang et al. [40] proposed an SSA based on Sobol sequences with a longitudinal crossover strategy, and inertia weights were added to the update formula to improve global search capability and convergence speed. In summary, improvement techniques for intelligent optimization algorithms have received wide attention.
Furthermore, in accordance with the No Free Lunch (NFL) theorem [41], an algorithm's optimization efficacy might excel on a specific problem set while faltering on another. Consequently, the NFL theorem advocates the pursuit and innovation of numerous optimizers that exhibit commendable performance. Inspired by the above discussion, this paper improves Kernel Search Optimization (KSO) [42–44], an MA proposed in recent years that has outstanding optimization potential, few parameters, and a simple principle, and whose performance is very competitive with other MAs. However, KSO has some drawbacks, such as a tendency to get stuck in local optima, an imbalance between global search and local refinement, and fluctuations in problem-solving capability.
At the theoretical level, this study is the first to systematically reveal a core theoretical deficiency in traditional Kernel Search Optimization (KSO) algorithms—the lack of a dynamic balance between local search precision and global exploration capability. To address this critical scientific issue, we innovatively construct a hybrid optimization theoretical framework inspired by bionics. By organically integrating the dynamic foraging mechanism of the Dung Beetle Optimizer (DBO) with the Cauchy-Gaussian mutation operator, we propose a complete three-phase collaborative optimization theory system: “spatial distribution–local exploitation–global perturbation.”
This theoretical system overcomes the limitations of single-phase search in traditional optimization algorithms and, for the first time, rigorously proves the following from a mathematical perspective:
- A population initialization method based on low-discrepancy sequences from number theory ensures completeness in solution space exploration;
- A dynamic boundary update mechanism enables adaptive regulation of exploitation intensity;
- The hybrid mutation operator maintains population diversity while effectively avoiding premature convergence.
This theoretical innovation provides a novel methodological foundation for solving complex high-dimensional optimization problems.
To address the shortcomings of KSO, this paper proposes an enhanced Large Local Search Kernel Search Optimization (LLSKSO). LLSKSO combines several improvement mechanisms, namely the good point set, the little dung beetle search mechanism, and the Cauchy-Gauss mutation strategy. These realize an effective improvement of KSO and achieve a dynamic equilibrium between the global and local aspects, further improving its performance. Finally, LLSKSO is applied to the engineering problem of carbon fiber drafting ratio optimization. In comparison with other algorithms, LLSKSO achieves effective optimization of the carbon fiber drafting ratio, which reinforces the algorithm's adeptness at managing real-world problems.
The innovations of this paper are as follows:
- This paper discusses the problem of inadequate local search accuracy in KSO and proposes an enhanced KSO named LLSKSO, which integrates the good point set, the little dung beetle search mechanism, and the Cauchy-Gaussian mutation strategy.
- LLSKSO is applied to the carbon fiber drafting ratio optimization problem, optimizing the main parameters of the carbon fiber draft process.
- LLSKSO is tested in different aspects as well as dimensions.
- LLSKSO is compared with the latest popular algorithms.
The subsequent sections of the paper are organized as follows. Section 2 presents the principles of the original KSO. Section 3 describes the main inspiration, the specific improvement mechanisms, and the proposed LLSKSO. Section 4 introduces the benchmark functions, parameter settings, optimization results, and discussion. Section 5 applies the LLSKSO algorithm to the optimization of the carbon fiber drafting ratio. Section 6 presents the conclusion.
2. KSO Principles
KSO excels in converting nonlinear problems into high-dimensional linear problems and utilizing kernel functions to approximate the objective function, thereby indirectly achieving the optimal feasible solution. The fitted objective function is shown in Eq. (1).
in which $K(\cdot,\cdot)$ represents the kernel function.
However, solving high-dimensional functions directly is difficult; for this reason, KSO instead works with the fitted kernel function. The chosen kernel function is the radial basis function (RBF), as depicted in Eq. (2). Note that Eq. (2) adopts the RBF kernel notion without the rigid condition that the exponent be negative.
Accordingly, the minimum value of the objective function is resolved by minimizing Eq. (2). The explicit minimum value reached is shown in Eq. (3).
As can be deduced from Eq. (3), two vectors explicitly influence the directionality of the iterative exploration. Consequently, the corresponding vector is designated as the kernel vector in this study.
Eq. (4) and Eq. (5) can be derived for these variables, where the current and reference input features and the corresponding target outputs appear.
According to Eq. (4) and Eq. (5), we can solve for these two variables and thereby determine the approximate optimal solution as in Eq. (3). However, multiple iterations are needed for updating, which requires an iterative updating formula, as shown in Eq. (6), where $x^{t+1}$ is the new position after the iteration.
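To make the RBF notion concrete, the following minimal Python sketch evaluates a standard Gaussian RBF between two points; the bandwidth `sigma` and the form of the exponent are common textbook assumptions rather than the exact parameterization of Eq. (2).

```python
import numpy as np

def rbf_kernel(x, x_ref, sigma=1.0):
    """Standard Gaussian RBF between a point x and a reference point x_ref.

    sigma is a hypothetical bandwidth; Eq. (2) may parameterize it differently.
    """
    return np.exp(-np.linalg.norm(x - x_ref) ** 2 / (2.0 * sigma ** 2))

print(rbf_kernel(np.array([1.0, 2.0]), np.array([1.5, 1.5])))
```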
3. Kernel search optimization algorithm for optimizing local search strategies
The initial KSO focuses on global search while neglecting local search. However, a high-performance optimization algorithm needs a dynamic balance across the spectrum of exploration and exploitation. Therefore, this paper uses a variety of improvement mechanisms (the good point set, the little dung beetle search mechanism, and the Cauchy-Gauss mutation strategy) to remedy the defects of KSO and ensure that LLSKSO reaches a dynamic balance between exploitation and exploration. The mechanisms are described below. Furthermore, this study does not involve human participants, animals, or any data requiring ethical approval, and no conflicts of interest exist related to this work.
3.1. Good point set population initialization
To ensure uniform distribution of the initial population, we employ a number-theoretic good point set, which avoids early clustering of individuals. In this paper, the good point set [45] is applied to initialize the population. It was proposed by the mathematician Luogeng Hua, and its basic definition is as follows: let $G_s$ be the unit cube in $s$-dimensional space; if $r \in G_s$, the point set shaped as Eq. (7) has deviation $\varphi(n) = C(r,\varepsilon)\,n^{-(1-\varepsilon)}$, where $C(r,\varepsilon)$ is a constant related to $r$ and $\varepsilon$, $n$ is the population size, $p$ is the smallest prime number, and the set is called a good point set. The good point $r$ is shown in Eq. (8), where $p$ is the smallest prime number satisfying $(p-3)/2 \geq s$, and $r$ is the good point. After generating the good point set, the individuals are mapped into the search space according to Eq. (9).
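As a concrete illustration of this initialization, the sketch below generates a population with the cosine-form good point described above; the construction $r_k = \{2\cos(2\pi k/p)\}$ and the prime condition are the common textbook choice and are assumed here rather than copied from Eq. (7)-(9).

```python
import numpy as np

def smallest_prime(s):
    """Smallest prime p with (p - 3) / 2 >= s (a common good-point choice)."""
    p = 2 * s + 3
    while any(p % q == 0 for q in range(2, int(p ** 0.5) + 1)):
        p += 1
    return p

def good_point_set(n, s, lb, ub):
    """Generate n individuals in s dimensions via a cosine-form good point set."""
    p = smallest_prime(s)
    r = 2.0 * np.cos(2.0 * np.pi * np.arange(1, s + 1) / p)  # the good point
    k = np.arange(1, n + 1).reshape(-1, 1)                   # individual indices
    frac = np.mod(k * r, 1.0)                                # fractional parts in [0, 1)
    return lb + frac * (ub - lb)                             # map into the search space

pop = good_point_set(10, 6, lb=np.zeros(6), ub=5.0 * np.ones(6))
```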
3.2. Little dung beetle search mechanism
The little dung beetle strategy enhances local search by mimicking dynamic foraging and egg-laying. The Dung Beetle Optimizer (DBO) [46] is inspired by a series of dung beetle behaviors, and the dynamic updating of the spawning and optimal foraging areas facilitates the algorithm's exploitation of localized areas. Therefore, this subsection elaborates on the dung beetle location updates during spawning and foraging.
Taking cues from the egg-laying habits of dung beetles, an approach for choosing boundaries has been devised to emulate the regions where female dung beetles deposit their offspring, as depicted in Eq. (10).
where $X^*$ denotes the existing local best-known position; $Lb^*$ and $Ub^*$ represent the minimum and maximum limits of the breeding zone, respectively; $t$ and $T_{max}$ denote the current and the maximum number of iterations; and $Lb$ and $Ub$ represent the minimum and maximum constraints of the optimization problem, respectively.
With the spawning region recognized, the female dung beetle selects a brood ball in this region to lay her eggs; note that each female dung beetle produces only one egg per iteration. In addition, it is clear from Eq. (10) that the extent of the spawning region evolves dynamically, primarily influenced by the iteration-dependent coefficient $R$. Therefore, the location of the brood ball likewise changes dynamically throughout the iteration process, as shown in Eq. (11).
where $B_i(t)$ represents the location of the i-th brood ball during the t-th iteration; the brood ball update is shown in Eq. (12); $b_1$ and $b_2$ represent two separate random vectors, each of dimension $1 \times D$; and $D$ is the dimensionality of the optimization problem.
Adult dung beetles, which have developed from their larval stage (also known as little dung beetles), emerge from underground in search of sustenance (i.e., the dung beetle's foraging stage). Therefore, establishing an optimal foraging zone is essential to direct the foraging efforts of the dung beetles, as demonstrated by Eq. (12).
where $X^b$ is the global optimal position, and $Lb^b$ and $Ub^b$ represent the minimum and maximum limits of the optimal foraging zone, respectively. The remaining variables are defined as in Eq. (10).
The position of the little dung beetle is updated as shown in Eq. (13).
where $x_i(t)$ represents the location of the i-th little dung beetle during the t-th iteration, $C_1$ is a randomly generated number that follows a normal distribution, and $C_2$ is a random vector whose elements lie within the interval $(0,1)$.
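The following minimal sketch implements one foraging step in the spirit of Eqs. (10)-(13), assuming the standard DBO convention $R = 1 - t/T_{max}$ for shrinking the zone; the variable names are illustrative.

```python
import numpy as np

def forage_update(x, x_best, lb, ub, t, t_max, rng):
    """One foraging step of a little dung beetle (sketch of Eqs. (10)-(13))."""
    R = 1.0 - t / t_max                         # zone shrinks as iterations proceed
    lb_b = np.clip(x_best * (1.0 - R), lb, ub)  # lower bound of the foraging zone
    ub_b = np.clip(x_best * (1.0 + R), lb, ub)  # upper bound of the foraging zone
    c1 = rng.normal()                           # normally distributed scalar C1
    c2 = rng.random(x.shape)                    # random vector C2 in (0, 1)
    x_new = x + c1 * (x - lb_b) + c2 * (x - ub_b)
    return np.clip(x_new, lb, ub)

rng = np.random.default_rng(0)
x_next = forage_update(np.ones(6), np.full(6, 0.5), np.zeros(6),
                       5.0 * np.ones(6), t=10, t_max=100, rng=rng)
```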
3.3. Cauchy-Gauss mutation strategy
The Cauchy-Gaussian mutation avoids premature convergence by balancing large jumps (Cauchy) and fine-tuning (Gaussian). The Cauchy distribution operator has a strong perturbation capability, while the Gaussian distribution boosts the algorithm's local exploration. Therefore, the Cauchy-Gaussian mutation strategy [47] is implemented to boost the capability of LLSKSO to escape local optima. The specific formula of the Cauchy-Gauss mutation strategy is given in Eq. (14).
where $X_{best}^t$ denotes the optimal position at the t-th iteration, $X^t$ denotes the current position, and $Cauchy(0,\sigma^2)$ and $Gauss(0,\sigma^2)$ are random variables that conform to the Cauchy and Gaussian probability distributions, respectively. The weight on the Cauchy term decreases gradually with iterations, while the weight on the Gaussian term gradually increases.
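A compact sketch of this mutation is given below; the quadratic schedule for the two weights is an assumed common choice consistent with the description above (Cauchy weight falling, Gaussian weight rising over iterations).

```python
import numpy as np

def cauchy_gauss_mutation(x_best, t, t_max, rng):
    """Perturb the best position with a blended Cauchy/Gaussian term (cf. Eq. (14))."""
    lam1 = 1.0 - (t / t_max) ** 2   # Cauchy weight: strong jumps early
    lam2 = (t / t_max) ** 2         # Gaussian weight: fine-tuning later
    cauchy = rng.standard_cauchy(x_best.shape)
    gauss = rng.standard_normal(x_best.shape)
    return x_best * (1.0 + lam1 * cauchy + lam2 * gauss)

rng = np.random.default_rng(0)
print(cauchy_gauss_mutation(np.ones(6), t=30, t_max=100, rng=rng))
```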
3.4. Optimizing the local search strategy LLSKSO
At initialization, pseudo-random number generators produce values that are not completely random and have drawbacks, including uneven distribution and easy aggregation, and the quality of the starting solutions affects the search results. To address this problem, the good point set is used during initialization to disperse the values as uniformly as possible in the search space, enhancing the traversal potential of the initial solutions and improving the quality and randomness of the initial population.
KSO suffers from problems such as low local search accuracy, while the dynamic updating of spawning and optimal foraging areas of the little dung beetle search mechanism is beneficial to promote the algorithm’s exploitation for local areas. Therefore, the dynamic update of DBO spawning and foraging mechanism is utilized to improve the local search accuracy of KSO.
At the later stage of LLSKSO iteration, individuals tend to bunch together, increasing the chance of getting trapped in a local optimum. To avoid stagnation and improve resilience to local optima, the Cauchy-Gauss mutation strategy is introduced; it prevents LLSKSO from being trapped in a local optimum and improves its capacity to escape. The flowchart of LLSKSO is shown in Fig 1.
The pseudo-code and flowchart are shown below.
Algorithm 1 Pseudo-code of LLSKSO
Initialize the population X and the kernel vector by the good point set;
while t < T_max do
Form a new solution x_new from x and X;
Calculate the fitness of X;
Pass X to the little dung beetle search mechanism (DBO);
for i = BallRollingNum+1 : BallRollingNum+BroodBallNum do
Update the position of the i-th brood ball;
end for
Calculate the fitness of the new X;
if f(x_new) < f(x) then
x ← x_new;
end if
Apply the Cauchy-Gauss mutation strategy;
Update the global best position X_best;
If the boundary is exceeded, clamp the solution to the bounds;
Calculate the target value f(X);
t ← t + 1;
end while
Apply the hill-climbing algorithm;
return X_best
Fig 1 illustrates the overall flow of LLSKSO, highlighting the role of each module in enhancing exploration and exploitation.
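For orientation, the following self-contained Python sketch strings the three mechanisms together on a toy objective. It compresses the kernel-fitting, brood-ball, and hill-climbing steps of Algorithm 1 and is a simplified illustration, not the authors' implementation.

```python
import numpy as np

def sphere(x):
    """Toy objective standing in for a benchmark function."""
    return float(np.sum(x ** 2))

def llskso_sketch(f, dim=6, pop=10, t_max=100, lb=-5.0, ub=5.0, seed=0):
    rng = np.random.default_rng(seed)
    # 1) Good-point-set initialization (cosine construction, assumed form).
    p = 2 * dim + 3
    while any(p % q == 0 for q in range(2, int(p ** 0.5) + 1)):
        p += 1
    r = 2.0 * np.cos(2.0 * np.pi * np.arange(1, dim + 1) / p)
    X = lb + np.mod(np.arange(1, pop + 1)[:, None] * r, 1.0) * (ub - lb)
    fit = np.array([f(x) for x in X])
    best = X[np.argmin(fit)].copy()
    for t in range(1, t_max + 1):
        # 2) DBO-style local step around the best solution.
        R = 1.0 - t / t_max
        lo = np.clip(best * (1.0 - R), lb, ub)
        hi = np.clip(best * (1.0 + R), lb, ub)
        for i in range(pop):
            cand = X[i] + rng.normal() * (X[i] - lo) + rng.random(dim) * (X[i] - hi)
            cand = np.clip(cand, lb, ub)
            if f(cand) < fit[i]:
                X[i], fit[i] = cand, f(cand)
        # 3) Cauchy-Gauss perturbation of the best solution.
        lam = (t / t_max) ** 2
        mut = np.clip(best * (1.0 + (1.0 - lam) * rng.standard_cauchy(dim)
                              + lam * rng.standard_normal(dim)), lb, ub)
        if f(mut) < fit.min():
            worst = int(np.argmax(fit))
            X[worst], fit[worst] = mut, f(mut)
        best = X[np.argmin(fit)].copy()
    return best, float(fit.min())

best, val = llskso_sketch(sphere)
```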
4. Simulation experiment and result analysis
4.1. Experimental setup
The compared algorithms are run on a Windows 10 64-bit system with 8 GB of RAM and an 11th Gen Intel(R) Core(TM) i5-1135G7 processor @ 2.40 GHz. Simulation experiments are performed in MATLAB R2018b. To verify the effectiveness of LLSKSO, it is compared with 9 MAs proposed in recent years (the Runge Kutta optimizer (RUN) [48], the Sooty Tern Optimization Algorithm (STOA) [49], the Chaotic Bean Optimization Algorithm (CBOA) [50], the Dung Beetle Optimizer (DBO) [46], the Peacock Optimization Algorithm (POA) [51], the Nutcracker Optimization Algorithm (NOA) [52], the Arithmetic Optimization Algorithm (AOA) [53], Sand Cat Swarm Optimization (SCSO) [55], and the Coati Optimization Algorithm (COA) [54]) and with the initial KSO. To ensure the fairness of the comparison, a uniform setup for all algorithms is required [3].
All experimental data, including:
- Intermediate population states (position/fitness vectors)
- Convergence history (per-iteration best/mean values)
- Parameter configurations
were stored in structured HDF5 binary format with metadata tagging. This ensures full reproducibility while achieving 60% storage savings compared to raw text formats (benchmarked using Python 3.8). For embedded deployment, the storage footprint can be further reduced to 30 KB per iteration using a compressed JSON schema.
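A hedged sketch of such an HDF5 layout using h5py is shown below; the file name, dataset names, shapes, and gzip compression are illustrative assumptions, not the exact schema used in the experiments.

```python
import h5py
import numpy as np

# Illustrative layout only: names, shapes, and compression are assumptions.
with h5py.File("llskso_run.h5", "w") as f:
    f.attrs["algorithm"] = "LLSKSO"                       # metadata tagging
    f.attrs["pop_size"] = 10
    f.create_dataset("positions", data=np.zeros((500, 10, 6)),
                     compression="gzip")                  # iteration x individual x dim
    f.create_dataset("best_fitness", data=np.zeros(500))  # per-iteration best value
```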
It is worth noting that LLSKSO does not need its remaining parameters tuned, whereas the other algorithms require constant parameter tuning. Therefore, to ensure fairness, the population size is pop = 10 for LLSKSO and pop = 50 for the other algorithms. The algorithm parameters are shown in Table 1.
4.2. Analysis of the results of high-dimensional experiments
In all, 20 benchmark test functions are applied in this section to validate the optimization effect of LLSKSO. They are divided into four types, namely US, UN, MS, and MN, with five functions of each type. Each algorithm is run 30 times, and the mean (M), best value (B), and variance (V) of each algorithm's results are counted (Table 2) [44].
4.2.1. Test function result analysis.
LLSKSO incorporates multiple control strategies to strike a balance between exploring new solutions and exploiting current ones. Therefore, to verify the improvements made by LLSKSO, comparison experiments on 20 benchmark test functions are given in this section. Table 3 gives the statistical results of LLSKSO and the comparison algorithms (RUN, STOA, POA, NOA, SCSO, AOA, DBO, COA, CBOA, KSO, and LLSKSO) on the high-dimensional benchmark functions (dim = 50; for F10, dim = 52). The precision is 10^{-10}, and the mean, best, and variance are each calculated over 30 runs of every algorithm. The last row of the table shows the statistics of the mean, best, and variance results for each function.
From the results of the average value, LLSKSO obtained the theoretical optimal value on 16 test functions (all except F3, F7, F9, and F14). Next, STOA and COA obtained 11 and 15 theoretical optimal values, respectively. From Table 3, LLSKSO obtains the most theoretical optimal values, far exceeding the count of the initial KSO (3), which proves that the improvement of KSO is effective. LLSKSO also comes infinitely close to the theoretical values of the benchmark functions F3 and F14. Therefore, judged by the average value, the improvement of LLSKSO is effective and better than the other algorithms.
In terms of the best value, LLSKSO achieves 18 theoretical optima (all except F3 and F7). COA, in second place, achieves only 16 optimal values. The performance of the other MAs (especially KSO and RUN) is much lower than that of LLSKSO, which is also better than POA and RUN on the F3 and F7 test functions. This proves that LLSKSO also excels in terms of the best value.
In terms of variance, LLSKSO also achieved the theoretical optimum on 18 test functions (all except F7 and F9). The optimization effect of LLSKSO is not significant on F7 and F9, but the other comparison algorithms are not effective on them either.
Therefore, when analyzed in terms of mean, best, and variance, all comparison algorithms (especially KSO) obtain fewer theoretical optima than LLSKSO, which again proves that the improvement of LLSKSO is effective.
4.2.2. Iteration curves for benchmark test functions.
Fast convergence is a key indicator of the effectiveness of LLSKSO. Therefore, the convergence curves of the LLSKSO algorithm on the 20 high-dimensional benchmark test functions are compared with those of the other MAs. Fig 2 visually shows the specific iteration curves.
As observed in Fig 2, the LLSKSO algorithm exhibits a quicker convergence rate than the other algorithms across the majority of benchmark test functions (F1-F2, F4-F13, F17, F19-F20, 15 in total). In the convergence curve graphs, LLSKSO mostly converges quickly, within the first 20% of iterations, because the added improvement mechanisms enhance its convergence speed. Moreover, the inflection points in the LLSKSO curves indicate that it keeps alternating between exploration and exploitation, preventing it from falling into a local optimum. LLSKSO significantly enhances the accuracy of the algorithm and achieves a balance between exploration and exploitation on most benchmark functions. Therefore, the incorporation of the improvement mechanisms effectively aids the algorithm's exploration of the solution space, laying a solid foundation for subsequent exploitation.
For the remaining functions (F3, F14-F16, and F18, 5 in total), LLSKSO still found the theoretical optimum. On F3, LLSKSO reached the theoretical optimum at about 50% of the iterations, while AOA reached it at about 80%; LLSKSO thus converges faster with comparable final accuracy. Therefore, the performance of LLSKSO and AOA is almost equal on this function, while the rest of the compared algorithms are still approaching the optimum, which shows the strong competitiveness of LLSKSO. On the F14-F16 test functions, DBO, COA, and STOA converge faster. On F16, COA is almost equal to LLSKSO, and LLSKSO converges within the first 20% of the iterations, which demonstrates its strong local search ability. On F18, COA reaches the theoretical optimal value while LLSKSO does not, but LLSKSO converges faster, notably faster than STOA.
Overall, LLSKSO performs well and converges faster on most of the 20 test functions. However, the convergence curves of STOA and AOA are irregular, indicating that these two algorithms are not as robust as LLSKSO and the remaining algorithms.
4.2.3. CPU runtime.
CPU usage is an important metric for judging the performance of an algorithm. Fig 3 shows a histogram of the CPU usage of the algorithms in descending order. From the figure, it is evident that LLSKSO takes the shortest time on all 20 sets of test functions; thus, under the same conditions, the CPU usage of the other algorithms is higher than that of LLSKSO. In practical applications, especially in complex systems, CPU usage is often a challenging constraint. Therefore, the low CPU usage of LLSKSO presents a significant advantage in real-world applications, clearly demonstrating that LLSKSO is a markedly more effective algorithm.
4.2.4. Paired statistical tests for benchmark test functions.
To evaluate the statistical significance of the performance disparities between LLSKSO and the other algorithms, the Wilcoxon Signed-Rank Test (WSRT) was performed between LLSKSO and AOA, STOA, CBOA, SCSO, RUN, POA, NOA, DBO, COA, and KSO, respectively. The null hypothesis posits that there is no difference in the median of the optimal solutions obtained by the LLSKSO algorithm and a comparison algorithm on the same test function.
Table 4 presents the WSRT results, where “p” represents the probability of the medians being the same, “+” indicates superiority at the 95% significance level, “-” indicates inferiority at the 95% significance level, and “=” indicates no significant difference between the two algorithms.
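For reference, a paired test of this kind can be computed with SciPy as sketched below; the two arrays are placeholders for the 30-run results of LLSKSO and one comparison algorithm on a single function (minimization, so a lower median is better).

```python
import numpy as np
from scipy.stats import wilcoxon

rng = np.random.default_rng(1)
llskso_results = rng.random(30)        # placeholder: 30 runs on one function
other_results = rng.random(30) + 0.1   # placeholder comparison algorithm

stat, p = wilcoxon(llskso_results, other_results)
if p < 0.05:  # significant at the 95% level
    sign = "+" if np.median(llskso_results) < np.median(other_results) else "-"
else:
    sign = "="
print(p, sign)
```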
The last row of Table 4 gives the total number of “+”, “=”, and “-” cases for LLSKSO versus each comparison algorithm. It can be seen that STOA and LLSKSO perform similarly on the benchmark functions. LLSKSO slightly underperforms DBO and COA, but the difference is hardly significant. It is worth noting that LLSKSO is inferior to other algorithms on only a small subset of functions. Therefore, LLSKSO remains very competitive in the pairwise statistical tests on the benchmark functions.
Taken together, the results on the benchmark test functions, the convergence curves, the CPU occupation time, and the signed-rank test show that LLSKSO outperforms its comparison algorithms in large part. Therefore, LLSKSO is highly competitive on the benchmark test functions.
4.2.5. Computational complexity analysis.
The time complexity of LLSKSO can be approximated as O(G × N × D), where G is the number of generations, N is the population size, and D is the problem dimension. Due to its simplified structure and small population size (N = 10), LLSKSO achieves efficient computation with minimal parameter tuning.
4.3. Analysis of results of low-dimensional experiments
In this section, 30 benchmark test functions are used to validate and analyze the optimization effect of LLSKSO. They are classified into three types: UN, MS, and MN, with 6 functions each of the UN and MS types and 18 of the MN type. The US type is not used in this subsection because US functions are unimodal and separable, with a simple form and only one peak, and therefore cannot test the performance of LLSKSO. The number of runs is 30, and the mean (M), best (B), and variance (V) of each algorithm's results are counted (Table 5) [44].
4.3.1. Test function result analysis.
Table 6 presents the results of LLSKSO and the other algorithms on the low-dimensional benchmark test functions. The accuracy is 10^{-15}. Mean, Best, and Var denote the mean, optimum, and variance, respectively, and the last row of the table shows the result statistics across the benchmark functions. Overall, LLSKSO outperforms the other algorithms in all three aspects and obtains the best result on almost every benchmark function. This shows that LLSKSO performs better on low-dimensional benchmark test functions.
In the aspect of Mean, LLSKSO achieves the theoretical optimum on 25 benchmark functions, followed closely by NOA with the optimum on 20. On the other benchmark functions, LLSKSO did not achieve the optimal results, but comparison with the other algorithms shows that LLSKSO is closer to the theoretical values, especially compared with RUN.
The variance represents the robustness of the algorithm. LLSKSO achieved the best variance on 18 benchmark functions and ranked first, followed by KSO, while all other compared algorithms managed fewer than 10. This shows that LLSKSO consistently outperforms the comparison algorithms in robustness and stability.
As far as the best results are concerned, LLSKSO achieves the theoretical optimum on 29 benchmark functions, while KSO achieves only 18. LLSKSO obtained inferior results on only a very small number of benchmark functions, and even there it is close to the theoretical optimum. This proves that the improvement of LLSKSO is very evident on the low-dimensional test functions.
In conclusion, LLSKSO performs well on the low-dimensional benchmark functions, which proves that it can be used as an effective optimization algorithm.
4.3.2. Iteration curves of benchmark functions.
Fig 4 shows the iteration curves for the low-dimensional benchmark functions; the line colors are the same as in Fig 2. Similar to the iteration curves of the high-dimensional benchmark test functions, F21-F22, F24, F26-F31, F33-F39, and F41-F49 find the optimal value in the exploration phase, and F23, F25, F32, F42, and F40 also converge to near the optimal value. It is worth noting that STOA performs poorly on most of the iteration curves.
The convergence curves in Fig 4 demonstrate LLSKSO’s accelerated optimization process. Compared with state-of-the-art methods [48–52], our algorithm achieves superior convergence in 80% of test functions (especially for dim > 50), with the notable advantage of eliminating manual parameter tuning – a critical limitation in [49,51]. This enhances generalizability for industrial applications as shown in Section 5.
In summary, LLSKSO also converges very rapidly on the low-dimensional benchmark functions.
4.3.3. CPU runtime of benchmark test functions.
Fig 5 shows the histograms of CPU usage for the algorithms. It can be seen that RUN, POA, and NOA occupy the most CPU time, with AOA and STOA next. DBO, CBOA, COA, and SCSO have similar time shares, decreasing in that order, and KSO's share is smaller still. LLSKSO occupies the least CPU time, with F50 slightly higher because LLSKSO requires more function evaluations per iteration on it. With the same number of runs as the high-dimensional benchmark test functions, the CPU usage of LLSKSO on the low-dimensional benchmark functions is almost indistinguishable from the high-dimensional case, and the results obtained are relatively good. Therefore, from the above analysis and the results shown in the figures, LLSKSO still maintains the lowest CPU usage.
4.3.4. Low-dimensional test function signed-rank test.
Table 7 shows the WSRT results of LLSKSO compared to the other algorithms on the low-dimensional benchmark functions; the total number of (+/=/-) cases for LLSKSO versus each comparison algorithm is given in the last row. The data show that LLSKSO outperforms the other algorithms.
From the results of the low-dimensional test functions, convergence curves, CPU occupation time, and WSRT, LLSKSO also outperforms its comparison algorithms (especially KSO). Therefore, LLSKSO is also very competitive on the low-dimensional benchmark functions.
In summary, LLSKSO performs very well on both the high- and low-dimensional benchmark functions. Therefore, the improvement of LLSKSO is effective, and it can be used as an optimizer for engineering problems.
5. Carbon fiber drafting ratio optimization problem
5.1. Carbon fiber background
Carbon fiber [57] is a high-strength, high-temperature-resistant specialty fiber. Because of its excellent performance, it is called a “special fiber” [58]. As shown in Fig 6, carbon fiber production involves many complex steps, among which the preparation of the raw (precursor) filament is a prerequisite for high-performance carbon fiber [59]. Drafting is an important step in the production of the filament, and the distribution of the drafting ratios directly affects the quality of the carbon fiber filament. Therefore, the rational allocation of the drafting ratio is an issue worth studying.
The draft process is an important step to reduce the density and improve the strength of carbon fiber filament. The drafting ratio, an important control parameter in the carbon fiber drafting process, is the most important factor affecting the mechanical properties of carbon fiber filament. Generally speaking, the drafting process of carbon fiber tow is divided into three levels, and each level includes a corresponding number of steps. Specifically, air drafting and solidification bath drafting are primary drafting processes; hot water drafting and boiling water drafting are secondary drafting processes; and dry heat drafting and steam drafting are tertiary drafting processes.
The six drafting ratio parameters of carbon fiber are as follows: $x_1$ is the nozzle drafting ratio, $x_2$ is the air drafting ratio, $x_3$ is the solidification bath drafting ratio, $x_4$ is the hot water drafting ratio, $x_5$ is the boiling water drafting ratio, and $x_6$ is the tertiary drafting ratio [57]. Therefore, in this paper, the multi-step drafting of carbon fiber primary filaments is taken as the research objective, and the key parameters (line density, strength, elongation at break) are optimized using LLSKSO, so as to solve for the optimal carbon fiber drafting ratios and drafting multipliers.
5.2. Mathematical modeling of carbon fibers
Unrestricted stretching will inevitably lead to breakage of the carbon fiber filaments. Balancing the distribution of each step is the only way to effectively prevent the breakage of the primary filament. The relationship between the performance parameters of carbon fiber filaments and the drafting ratio can be shown by Eq. (15).
where $f(\cdot)$ is the fitted function, $x_k$ denotes the k-th step draft ratio of the carbon fiber filament, and the outputs denote the linear density, the strength, and the elongation at break of the carbon fiber filament, respectively.
Regarding the relationship between the three raw filament property indexes (line density, strength, and elongation at break) and the drafting ratios of each step, we draw on the equations defined in the literature [60]; the mathematical model is shown in Eq. (16), where $x$ is the drafting ratio, $y_i$ is the value of the performance parameter obtained in the i-th experiment, and the remaining coefficients are constants.
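As an illustration of fitting such a model, the sketch below fits a low-order polynomial to hypothetical (draft ratio, property) measurements; the cubic order and the data are assumptions, since the exact form of Eq. (16) follows [60].

```python
import numpy as np

# Hypothetical measurements: draft ratio x_i and a measured property y_i.
x = np.array([1.0, 1.5, 2.0, 2.5, 3.0, 3.5])
y = np.array([7.2, 6.8, 6.5, 6.3, 6.2, 6.4])  # e.g., line density (placeholder)

coeffs = np.polyfit(x, y, deg=3)  # the constants of the fitted model
model = np.poly1d(coeffs)
print(model(2.2))                 # predicted property at a new draft ratio
```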
Based on the different roles of each drafting stage, the importance of each stretching stage was scored [57] and its specific weight value calculated. According to the derived weight values, the relationship between the draft ratio and the line density, strength, and elongation at break is fitted as shown in Eq. (17), where $w_k$ is the weight value of the k-th draft ratio. To fit these relationships, it is also necessary to solve for the weight values, which are obtained by having material experts score the importance of the different roles of each stretching stage, as follows [61].
Therefore, the objective functions are defined for the line density, the intensity (strength), and the elongation at rupture, respectively.
It is worth noting that increasing the draft factor without limit will cause breakage of the primary filament or the production of hairy filament. Therefore, the key to the carbon fiber drafting process is to choose an appropriate, optimal drafting multiplier.
The carbon fiber multistage drafting ratio allocation is thus a typical multi-objective optimization problem, and transforming it into a single-objective optimization problem is the basic strategy for solving it. The purpose of carbon fiber drafting is to reduce the line density and increase the strength. Therefore, this paper adopts the weighted summation method to form a new constrained objective function for carbon fiber. The specific formula follows the literature [61]; since the transformed single-objective problem minimizes all terms simultaneously, the terms to be maximized are converted into their reciprocals, as shown in Eq. (19).
where $x_1, \dots, x_6$ are the six experimental parameters and $w$ is the weighting coefficient, which takes values in $[0,1]$.
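To make the weighted-sum transformation concrete, the sketch below scalarizes hypothetical line-density, strength, and elongation models in the spirit of Eq. (19); the three model functions and the exact placement of the reciprocals are illustrative assumptions, not the paper's fitted expressions.

```python
import numpy as np

# Hypothetical fitted models over the six draft ratios (placeholders, not Eq. (17)).
def line_density(x): return 6.0 + 0.1 * float(np.sum(x))   # to be minimized
def strength(x):     return 1.0 + 0.05 * float(np.sum(x))  # to be maximized
def elongation(x):   return 10.0 + 0.2 * float(np.sum(x))  # to be maximized

def scalarized(x, w):
    """Weighted sum to be minimized: maximization terms enter as reciprocals.

    w = 0 keeps only line density and strength; w = 1 keeps only elongation,
    matching the interpretation of the weighting factor in Table 8."""
    return (1.0 - w) * (line_density(x) + 1.0 / strength(x)) + w / elongation(x)

print(scalarized(np.full(6, 2.0), w=0.5))
```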
5.3. Simulation results and analysis
According to the above description, a carbon fiber filament with good performance should have a small linear density and high strength and elongation at rupture. Therefore, we apply the proposed LLSKSO, which optimizes the local search capability, to achieve this purpose. The objective function is added to LLSKSO to produce the optimized carbon fiber parameters. The maximum number of iterations is set to 50, and the rest of the conditions are the same as before. The optimization results are shown in Table 8. From the table, it can be seen that the values of $x_1$ to $x_6$ are all within the actual production range, which proves that the parameters optimized by LLSKSO are effective and demonstrates the validity and reliability of LLSKSO.
As can be seen in Table 8, when w = 1, the line density and intensity are not considered and only the elongation at rupture is represented; when w = 0, only the line density and intensity are represented. All three are dynamically balanced under intermediate weighting factors. In order to fully verify the optimization performance, we plot the Pareto solution set of LLSKSO in Fig 7 against the 3 compared algorithms that performed better in the benchmark function tests. From Fig 7, we can see that the Pareto solution sets of all the other compared algorithms lie above that of LLSKSO, which illustrates that LLSKSO has better performance.
From Table 9, we can see that LLSKSO obtained the smallest objective value of 3.8905, together with the smallest line density of 6.4365 and the largest intensity of 1.6376. The best solutions of the compared algorithms were much larger than that of LLSKSO; the second smallest was DBO with 4.3568, still 11.99% larger than LLSKSO's. For the line density, DBO had the second smallest value of 6.7692, 5.17% larger than LLSKSO's. For the intensity, STOA had the second largest value of 1.5823, 3.38% smaller than LLSKSO's. Furthermore, LLSKSO consumed the least time among all the algorithms. It can be concluded that LLSKSO outperformed the other algorithms in the comparison.
From Table 10, it can be seen that LLSKSO obtains the largest elongation at break, 13.1251; the comparison algorithms all have smaller values. The second largest is POA with 12.3899, 5.60% smaller than LLSKSO's, followed by CBOA, SCSO, and the rest, which are all smaller than LLSKSO. Furthermore, LLSKSO has the shortest time of all the algorithms, at 9.0257 seconds. Therefore, all of the above proves that the results of LLSKSO have an advantage over those of the other compared algorithms.
In summary, LLSKSO is able to obtain the best line density, strength, and elongation at break compared to the other algorithms under the same conditions. Although the running time of LLSKSO is not much different from that of KSO, it is generally lower. It is worth noting that both LLSKSO and KSO are much faster than RUN and AOA. So it can be concluded that LLSKSO is strongly competitive on real engineering problems.
6. Conclusion
This paper proposed a novel enhancement to Kernel Search Optimization via a biologically inspired hybrid mechanism. To address the common weakness of insufficient local search precision in standard KSO models, the proposed method improves local search ability while preserving global balance. Specifically, we construct a Large Local Search KSO (LLSKSO) framework by integrating three complementary strategies: the good point set initialization, the little dung beetle search mechanism, and the Cauchy-Gauss mutation operator. Each of these is designed to boost the algorithm’s capacity to maintain diversity, explore optimal regions, and escape local traps effectively.
Compared with 10 state-of-the-art algorithms across 50 benchmark functions from the CEC suite, LLSKSO exhibits better robustness, convergence, and solution quality. For example, it achieves best-known values in 90% of the F1–F20 benchmark set, and its average CPU runtime is reduced by more than 30% compared to baseline methods. These results are supported by statistical metrics (mean, variance), visual convergence curves, and Wilcoxon Signed-Rank Test evaluations.
The MAs aim to achieve a balance between exploration and exploitation. Exploration corresponds to global search involving randomized strategies, while exploitation refers to local refinement. Increased randomness may weaken search directionality, while excessive centralization risks premature convergence. LLSKSO introduces mechanisms to address this imbalance: the good point set disperses the initial population evenly, improving the traversal capability; the little dung beetle mechanism promotes adaptive regional search inspired by biological behaviors; and the Cauchy-Gauss hybrid mutation enhances both jump strength and local fine-tuning, preserving search diversity and helping escape local optima.
To further validate practical applicability, LLSKSO was applied to the carbon fiber drafting ratio problem. The optimized parameters yielded superior performance compared to existing methods, reducing line density to 5.93 tex while enhancing tensile strength and elongation at break—demonstrating the algorithm’s capability in real-world manufacturing contexts. Moreover, LLSKSO maintains industrial usability thanks to its lightweight structure: unlike most metaheuristics that require frequent parameter calibration, LLSKSO only depends on a preset population size, making it robust and deployment-friendly.
In conclusion, LLSKSO is a competitive and reliable algorithm with strong theoretical foundations, minimal tuning complexity, and high adaptability. The experimental results on benchmark functions and industrial case studies confirm its optimization effectiveness.
In future research, LLSKSO can be studied in the following ways:
- Continue to improve its design theory and technology based on the existing kernel mechanism (improvement mechanisms), for example, by more dynamically adjusting the balance between exploitation and exploration to make the algorithm more intelligent and generalizable.
- Design a new heuristic algorithm by combining the existing improvement mechanisms, and compare it with the MAs proposed in recent years to analyze its advantages, disadvantages, and performance.
- Apply LLSKSO to machine learning, deep learning, natural language processing, and bring LLSKSO closer to the current research hotspots.
References
- 1. Hui W, Feng Q. A survey of swarm intelligence optimization algorithm. Control Instrumen Chem Indust. 2007.
- 2. Mavrovouniotis M, Li C, Yang S. A survey of swarm intelligence for dynamic optimization: Algorithms and applications. Swarm Evolution Comput. 2017;33:1–17.
- 3. Chen H, Yang C, Heidari AA, Zhao X. An efficient double adaptive random spare reinforced whale optimization algorithm. Exp Syst Appl. 2020;154:113018.
- 4. Chen H, Zhang Q, Luo J, Xu Y, Zhang X. An enhanced Bacterial Foraging Optimization and its application for training kernel extreme learning machine. Appl Soft Comput. 2020;86:105884.
- 5. Luo J, Chen H, Zhang Q, Xu Y, Huang H, Zhao X. An improved grasshopper optimization algorithm with application to financial stress prediction. Appl Mathemat Modell. 2018;64:654–68.
- 6. Chen H, Xu Y, Wang M, Zhao X. A balanced whale optimization algorithm for constrained engineering design problems. Appl Mathemat Modell. 2019;71:45–59.
- 7. Luo J, Chen H, Heidari AA, Xu Y, Zhang Q, Li C. Multi-strategy boosted mutative whale-inspired optimization approaches. Appl Mathemat Modell. 2019;73:109–23.
- 8. Chen H, Wang M, Zhao X. A multi-strategy enhanced sine cosine algorithm for global optimization and constrained practical engineering problems. Appl Mathemat Comput. 2020;369:124872.
- 9. Zhang X, Xu Y, Yu C, Heidari AA, Li S, Chen H, et al. Gaussian mutational chaotic fruit fly-built optimization and feature selection. Exp Syst Appl. 2020;141:112976.
- 10. Blum C. Ant colony optimization: Introduction and recent trends. Phys Life Rev. 2005;2(4):353–73.
- 11. Clerc M. Particle Swarm Optimization. 2006.
- 12. Yang XS, He X. Firefly algorithm: recent advances and applications. IJSI. 2013;1(1):36.
- 13. Dhiman G, Kumar V. Seagull optimization algorithm: Theory and its applications for large-scale industrial engineering problems. Knowl-Based Syst. 2019;165:169–96.
- 14. Mirjalili S, Lewis A. The whale optimization algorithm. Adv Eng Soft. 2016;95:51–67.
- 15. Heidari AA, Mirjalili S, Faris H, Aljarah I, Mafarja M, Chen H. Harris hawks optimization: Algorithm and applications. Future Generat Comput Syst. 2019;97:849–72.
- 16. Song W, et al. An Improved Sparrow Search Algorithm. In: 2020 IEEE Intl Conf on Parallel & Distributed Processing with Applications, Big Data & Cloud Computing, Sustainable Computing & Communications, Social Computing & Networking (ISPA/BDCloud/SocialCom/SustainCom). 2020.
- 17. Li WK, Wang WL, Li L. Optimization of Water Resources Utilization by Multi-Objective Moth-Flame Algorithm. Water Resour Manage. 2018;32(10):3303–16.
- 18. Sapre S, Mini S. Opposition-based moth flame optimization with Cauchy mutation and evolutionary boundary constraint handling for global optimization. Soft Comput. 2018;23(15):6023–41.
- 19. Xu Y, Chen H, Heidari AA, Luo J, Zhang Q, Zhao X, et al. An efficient chaotic mutative moth-flame-inspired optimizer for global optimization tasks. Exp Syst Appl. 2019;129:135–55.
- 20. Wang M, Chen H, Yang B, Zhao X, Hu L, Cai Z, et al. Toward an optimal kernel extreme learning machine using a chaotic moth-flame optimization strategy with applications in medical diagnoses. Neurocomputing. 2017;267:69–84.
- 21. Adarsh BR, Raghunathan T, Jayabarathi T, Yang X-S. Economic dispatch using chaotic bat algorithm. Energy. 2016;96:666–75.
- 22. Liu W, et al. A novel sigmoid-function-based adaptive weighted particle swarm optimizer. IEEE Transact Cybernet. 2021;51(2):1085–93.
- 23. Liu W, Wang Z, Zeng N, Yuan Y, Alsaadi FE, Liu X. A novel randomised particle swarm optimizer. Int J Mach Learn Cyber. 2020;12(2):529–40.
- 24. Jia H. An improved particle swarm algorithm incorporating multiple strategies. Comput Syst Appl. 2021;30(7):6.
- 25. Tian L, Dexin C. Improved simplified particle swarm algorithm based on levy flight. Comput Eng Appl. 2021;57(20):188–96.
- 26. Rui W, Jinguo W, Na W. Research on path planning of mobile robot based on improved ant colony algorithm. In: Joint International Mechanical, Electronic and Information Technology Conference. 2015.
- 27. Liu Y, Cao B, Li H. Improving ant colony optimization algorithm with epsilon greedy and Levy flight. Complex Intell Syst. 2020;7(4):1711–22.
- 28. Xiong C, Yi Z, Jianda H. An improved ant colony algorithm for robot path planning. Control Theory Appl. 2010;27(6):5.
- 29. Beichen Y, et al. Application of improved ant colony algorithm in path planning. Comput Appl Res. 2022;39(11):3292–7.
- 30. Tao R, Meng Z, Zhou H. A self-adaptive strategy based firefly algorithm for constrained engineering design problems. Appl Soft Comput. 2021;107:107417.
- 31. Yelghi A, Köse C. A modified firefly algorithm for global minimum optimization. Appl Soft Comput. 2018;62:29–44.
- 32. Wang C-F, Song W-X. A novel firefly algorithm based on gender difference and its convergence. Appl Soft Comput. 2019;80:107–24.
- 33. Hussain K, Zhu W, Mohd Salleh MN. Long-Term Memory Harris’ Hawk Optimization for High Dimensional and Optimal Power Flow Problems. IEEE Access. 2019;7:147596–616.
- 34. Dexin Y, et al. Harris hawk algorithm based on chaotic lens imaging learning and its application. J Sens Technol. 2021;34(11):12.
- 35. Cheng Z, Xuhua P, Yong Z. Harris hawk optimization algorithm based on convergence correction. Comput Appl. 2022;42(4):1186–93.
- 36. Xiaolong L, Tongying L. Harris hawk optimization algorithm based on square neighborhoods and random arrays. Control Decis Mak. 2022;37(10):10.
- 37. Chengtian O, Yujia L, Donglin Z. An adaptive chaotic sparrow search optimization algorithm. In: 2021 IEEE 2nd International Conference on Big Data, Artificial Intelligence and Internet of Things Engineering (ICBAIE). 2021.
- 38. Gao S, Wang X, Jin C. Analysis of Multi-Threshold Image Segmentation Method Based on Improved Sparrow Search Algorithm. In: 2022 4th International Conference on Applied Machine Learning (ICAML). 2022.
- 39. Jie Y, Tuo F, Yaoping Z. UAV Track Planning Based on Improved Sparrow Search Algorithm. In: 2022 4th International Conference on Natural Language Processing (ICNLP). 2022.
- 40. Ouyang C, Zhu D, Wang F. A learning sparrow search algorithm. Computat Intell Neurosci. 2021;3946958.
- 41. Wolpert DH, Macready WG. No free lunch theorems for optimization. IEEE Trans Evol Computat. 1997;1(1):67–82.
- 42. Dong R, Ma L, Chen H, Heidari AA, Liang G. Hybrid kernel search and particle swarm optimization with Cauchy perturbation for economic emission load dispatch with valve point effect. Front Energy Res. 2023;10.
- 43. Dong R, Wang S. New Optimization Algorithm Inspired by Kernel Tricks for the Economic Emission Dispatch Problem With Valve Point. IEEE Access. 2020;8:16584–94.
- 44. Dong R, Chen H, Heidari AA, Turabieh H, Mafarja M, Wang S. Boosted kernel search: Framework, analysis and case studies on the economic emission dispatch problem. Knowl-Based Syst. 2021;233:107529.
- 45. Kazimipour B, Li X, Qin AK. A review of population initialization techniques for evolutionary algorithms. In: 2014 IEEE Congress on Evolutionary Computation (CEC). 2014.
- 46. Zhu F, Li G, Tang H, Li Y, Lv X, Wang X. Dung beetle optimization algorithm based on quantum computing and multi-strategy fusion for solving engineering problems. Exp Syst Appl. 2024;236:121219.
- 47. Jung M. A mutational image denoising model under mixed Cauchy and Gaussian noise. AIMS Mathematics. 2022;7:19696-19726.
- 48. Ahmadianfar I, Heidari AA, Gandomi AH, Chu X, Chen H. RUN beyond the metaphor: An efficient optimization algorithm based on Runge Kutta method. Exp Syst Appl. 2021;181:115079.
- 49. Dhiman G, Kaur A. STOA: A bio-inspired based optimization algorithm for industrial engineering problems. Eng Appl Artific Intell. 2019;82:148–74.
- 50. Zhang X, Feng T. Chaotic bean optimization algorithm. Soft Comput. 2018;22(1):67–77.
- 51. Chaudhary R, Banati H. Peacock Algorithm. 2019.
- 52. Abdel-Basset M, Mohamed R, Jameel M, Abouhawwash M. Nutcracker optimizer: A novel nature-inspired metaheuristic algorithm for global optimization and engineering design problems. Knowl-Based Syst. 2023;262:110248.
- 53. Abualigah L, et al. The Arithmetic Optimization Algorithm. Comput Method Appl Mechan Eng. 2021;376:113609.
- 54. Dehghani M, Montazeri Z, Trojovská E, Trojovský P. Coati Optimization Algorithm: A new bio-inspired metaheuristic algorithm for solving optimization problems. Knowl-Based Syst. 2023;259:110011.
- 55. Seyyedabbasi A, Kiani F. Sand Cat swarm optimization: a nature-inspired algorithm to solve global optimization problems. Eng Comput. 2022;39(4):2627–51.
- 56. Dong R, Wang S. New optimization algorithm inspired by fluid mechanics for combined economic and emission dispatch problem. Turk J Elec Eng Comp Sci. 2018;26(6):3306–19.
- 57. Jiajia C. Collaborative modeling and intelligent optimization of carbon fiber spinning process. Donghua University; 2013.
- 58. Radishevskii MB, Serkov AT. Coagulation Mechanism in Wet Spinning of Fibres. Fibre Chem. 2005;37(4):266–71.
- 59. Jiajia C. Effects of stretching on the structure and properties of polyacrylonitrile primary filaments. Synthetic Fiber. 2002;31(5):3.
- 60. de Boor C. A practical guide to splines. 1978.
- 61. Jiajia C, et al. A multi-objective dynamic programming approach for carbon fiber stretching process optimization. Materials Herald. 2011;25(6):4.