
Enhanced crayfish optimization algorithm: Orthogonal refracted opposition-based learning for robotic arm trajectory planning

  • Yuefeng Leng,

    Roles Conceptualization, Project administration, Resources, Supervision, Validation, Writing – review & editing

    Affiliation School of Mechanical Engineering, Liaoning Technical University, Fuxin, China

  • Chunlai Cui ,

    Roles Conceptualization, Data curation, Methodology, Validation, Visualization, Writing – original draft, Writing – review & editing

    1592725789@qq.com

    Affiliation School of Mechanical Engineering, Liaoning Technical University, Fuxin, China

  • Zhichao Jiang

    Roles Formal analysis, Investigation, Software, Validation

    Affiliation School of Mechanical Engineering, Liaoning Technical University, Fuxin, China

Abstract

In high-dimensional scenarios, trajectory planning is a challenging and computationally complex optimization task that requires finding the optimal trajectory within a complex domain. Metaheuristic (MH) algorithms provide a practical approach to solving this problem. The Crayfish Optimization Algorithm (COA) is an MH algorithm inspired by the biological behavior of crayfish. However, COA has limitations, including insufficient global search capability and a tendency to converge to local optima. To address these challenges, an Enhanced Crayfish Optimization Algorithm (ECOA) is proposed for robotic arm trajectory planning. The proposed ECOA incorporates multiple novel strategies, including using a tent chaotic map for population initialization to enhance diversity and replacing the traditional step size adjustment with a nonlinear perturbation factor to improve global search capability. Furthermore, an orthogonal refracted opposition-based learning strategy enhances solution quality and search efficiency by leveraging the dominant dimensional information. Additionally, performance comparisons with eight advanced algorithms on the CEC2017 test set (30-dimensional, 50-dimensional, 100-dimensional) are conducted, and the ECOA’s effectiveness is validated through Wilcoxon rank-sum and Friedman mean rank tests. In practical robotic arm trajectory planning experiments, ECOA demonstrated superior performance, reducing costs by 15% compared to the best competing algorithm and 10% over the original COA, with significantly lower variability. This demonstrates improved solution quality, robustness, and convergence stability. The study introduces novel population initialization and search strategies and provides practical verification on the robotic arm path-planning problem. The results confirm the potential of ECOA to address optimization challenges in various engineering applications.

1. Introduction

With the continuous development of robotic arm technology, robotic arm trajectory planning has become a hot topic in current robotics research. Trajectory planning is a crucial component of robotic arm motion control system technology, affecting the robot’s movement patterns and operational performance [1,2]. It can determine the relationship between time and space during the industrial robot’s working process, plan the robotic arm’s trajectory, and ensure the accuracy and efficiency of the predetermined tasks [3]. Optimized trajectory planning not only saves movement time but also reduces collisions and extends the robotic arm’s lifespan [4,5]. However, due to the complexity of robotic arm systems, including kinematic and dynamic constraints as well as operational environment uncertainties, traditional trajectory planning methods often fail to meet the demands for high efficiency and high precision. To address this issue, traditional algorithms such as numerical optimization and discrete search have been widely used but are often limited by computational resources or the inherent limitations of the algorithms themselves. In contrast, metaheuristic algorithms, with their global search capabilities and high adaptability to complex problems, show great potential in handling high-dimensional and nonlinear problems in robotic arm trajectory planning.

Metaheuristic (MH) algorithms, inspired by the behavior of natural organisms, iteratively optimize solutions by exploiting patterns found in nature, with the aim of achieving efficient results in a limited amount of time [6,7]. In recent years, many advanced swarm intelligence algorithms have emerged, such as Lévy Arithmetic Algorithm [8], Newton-Raphson-based optimizer [9], Walrus optimizer [10], Prism refraction search [11]. These algorithms have been applied to various fields and have achieved commendable results. Their simplicity, versatility, and ease of use make this class of algorithms applicable to a variety of domains, including but not limited to image segmentation [12], global optimization [13,14], path planning [15,16], agricultural monitoring [17], engineering design problems [18,19], forest fire detection [20], rescue operations [21], UAV path planning [22,23], rotor system [24], and cubic transmission [25]. This broad practical value underscores the importance of meta-heuristic algorithms in solving a variety of optimization challenges, especially where traditional deterministic methods are inadequate.

The "No free lunch" (NFL) theorem reminds us that there is no single algorithm that solves all problems [26], highlighting the value of algorithms adapted to particular challenges. Therefore, the MH algorithm specially developed for the path planning of robotic arms is crucial to improve efficiency [27]. Robotic arm path planning is a complex optimization task, demanding specialized approaches to ensure efficiency and high-quality solutions. In light of these requirements, we propose a tailored metaheuristic (MH) algorithm to address the computational challenges specific to robotic arm trajectory planning. This approach aligns with the NFL theorem, underscoring the need for targeted solutions to complex, real-world optimization tasks.

The Crayfish Optimization Algorithm (COA) is a novel intelligent optimization algorithm proposed by Heming Jia et al. in 2023 [28]. Inspired by the foraging, summer escape, and competition behaviors of crayfish, the algorithm offers fast search speed and strong search ability, and can effectively balance global and local search. However, despite these properties, COA also has limitations, such as slow convergence and a tendency to fall into local optima. To date, only a few researchers have attempted to apply COA to manipulator trajectory planning. As with all optimization algorithms, striking an optimal balance between exploration and exploitation is crucial to determine the ideal path [29]. In essence, as an emerging algorithm, COA requires additional research and improvement to more effectively address the complex needs of trajectory planning for robotic arms.

In our study, to improve the convergence speed and exploration capability of COA, we propose an improved COA algorithm. During initialization, the tent chaotic map is used to enhance the algorithm’s randomness and diversity. Subsequently, a newly designed nonlinear dynamic adjustment factor is incorporated into the traditional COA exploration phase to dynamically adjust the search behavior for better solutions, thereby improving global search efficiency. Additionally, in the later iterations, an orthogonal refracted opposition-based learning strategy is integrated to optimize the solution space, enhancing the algorithm’s global search capability and solution quality. This ensures a balanced interaction between exploration and exploitation. Finally, adaptive factors and random factors are introduced into the population update strategy, significantly enhancing the overall performance of the algorithm.

Experiments on the 29 functions of the CEC2017 test set show that ECOA significantly improves global optimization ability, effectively increasing both convergence speed and accuracy. The results show that ECOA performs better than CPSOGSA [30], GQPSO [31], EDOLSCA [32], WOA [6], SCA [33], CPO [34], SWO [35], and the original COA algorithm [28].

Addressing the limitations of the COA algorithm, the ECOA algorithm was introduced and applied to robotic arm trajectory planning, and its high applicability to practical engineering problems was verified by solving the three-dimensional trajectory planning problem of a robotic arm.

The main contributions of this study are as follows:

  1. To address the limitations of the traditional COA, several key improvements were made: using the tent chaotic map for population initialization, incorporating a nonlinear dynamic adjustment factor, and integrating an orthogonal refracted opposition-based learning strategy.
  2. The enhanced ECOA’s exploration and exploitation capabilities were rigorously evaluated using the CEC 2017 benchmark test set. The experimental results validated the algorithm’s significant improvements in optimization performance and its effective exploration of the solution space.
  3. The ECOA algorithm was applied to robotic arm trajectory planning to assess its practical application value, highlighting its high precision and efficiency in solving complex real-world engineering problems.

Section 2 introduces related work on COA and MH algorithms in robotic arm trajectory planning. Section 3 provides an overview of the original algorithm structure and the proposed method. Section 4 presents the relevant experimental tests and an in-depth analysis of the proposed algorithm. Section 5 analyzes the application of the ECOA algorithm to three-dimensional robotic arm trajectory planning. Section 6 concludes the paper.

2. Related work

In recent years, the field of manipulator trajectory planning has attracted considerable research interest due to the wide application of intelligent manipulators in various fields. Effective trajectory planning is essential for manipulators to accomplish tasks efficiently [36]. Faced with the inherent NP-hard complexity and the real-time requirements of manipulator trajectory planning, many researchers have studied a range of optimization algorithms and strategies. Among them, swarm intelligence optimization algorithms have been widely used in manipulator trajectory planning because of their efficiency and fast response [37]. This section explores recent developments in the field, highlighting improved versions of COA and the use of various MH algorithms to solve trajectory planning problems in manipulator operations.

Heming Jia et al. presented an environmental renewal mechanism that simulates the survival habits of crayfish, in which water quality factors guide crayfish toward better locations; in addition, a learning strategy based on ghost antagonism integrated into COA helped enhance its ability to avoid local optima [38]. Xiao Bingsong et al. used a random search radius to optimize the foraging range, thus improving the operational efficiency of the algorithm [39]. Nebojsa Bacanin et al. improved COA by hybridizing it with the firefly algorithm, strengthening its ability to escape local optima [40]. Meng Jiang et al. improved COA through Circle chaotic mapping to obtain more powerful global search capabilities [41].

In the past few years, the concept of opposition-based learning has been widely used to improve the global search and local exploitation capabilities of algorithms, and many scholars have introduced Opposition-based Refraction Learning (ORL) into different optimization algorithms to improve their performance. Wen Long et al. proposed a novel refraction learning strategy based on the principle of light refraction, which assists the Whale Optimization Algorithm (WOA) in escaping from local optima [42]. Bilal H. Abed-alguni et al. employed a specific type of opposition-based learning, known as refraction learning, to enhance the Cuckoo Search (CS) algorithm’s capability of avoiding sub-optimal solutions [43]. Noor Aldeen Alawad and Bilal H. Abed-alguni introduced refraction learning combined with a triple mutation method (DJRL3M) to improve the DJaya algorithm for solving the Permutation Flow Shop Scheduling Problem (PFSSP) [44]. Therefore, optimization algorithms incorporating ORL are often better able to balance exploration and exploitation when addressing high-dimensional complex optimization problems, providing new directions and methodologies for current research.

A number of heuristic algorithms have recently been applied to trajectory planning problems in robotic arm operations. For instance, Lei Wang et al. proposed the TPBSO algorithm for solving trajectory planning problems, particularly for robotic manipulators [45]. Jeong-Jung Kim et al. used particle swarm optimization (PSO) for trajectory optimization in robotic arm motion planning [46]. Gurjeet Singh et al. used different combinations of hybrid metaheuristic algorithms to address kinematic and trajectory planning problems; kinematic parameters, including acceleration, deceleration, and speed, primarily affect the travel smoothness of the robot’s end effector along the trajectory path [47]. Pengfei Xin et al. proposed a particle swarm optimization-based algorithm for residual vibration suppression in spatial manipulator trajectory planning, achieving desirable results [48]. Xiaoman Cao proposed an improved multi-objective particle swarm optimization algorithm for trajectory planning in fruit-picking robotic arms [49]. Lunhui Zhang et al. proposed an efficient and highly stable adaptive cuckoo search (ACS) algorithm for time-optimal trajectory planning in serial robotic arms, minimizing total motion time under strict dynamic constraints [50]. H. Guo et al. demonstrated a trajectory planning method for a safflower-harvesting robotic arm based on an improved ant colony genetic algorithm, which clearly improved picking efficiency in the safflower harvesting process [51].

Many researchers have applied metaheuristic algorithms to robotic arm trajectory planning and have made improvements, achieving good results. However, although COA has been applied in various fields, research on its application in the context of robotic arm trajectory planning remains limited. To achieve optimal trajectory planning, it is still necessary to conduct in-depth research on the two core mechanisms of swarm intelligence algorithms: exploration and exploitation.

In our study, an improved ECOA algorithm that integrates multiple strategies can effectively balance the exploration and exploitation processes. Experimental results show that the improved algorithm is superior to existing algorithms, including CPSOGSA [30], GQPSO [31], EDOLSCA [32], WOA [6], SCA [33], CPO [34], SWO [35], and the original COA algorithm [28]. The improved algorithm also achieves good results in robotic arm trajectory planning.

3. The proposed methodology

This section briefly describes the behavior of the original COA and the corresponding mathematical model. In addition, it details the proposed ECOA algorithm, including the tent chaotic map, the nonlinear dynamic adjustment factor, and the orthogonal refracted opposition-based learning strategy.

3.1. The original COA

The crayfish, also known as the red swamp crayfish or freshwater crayfish, is a crustacean living in freshwater. Owing to its broad feeding habits, rapid growth, fast migration, and strong adaptability, it has gained a dominant position in its ecological environment. Temperature changes often lead to changes in crayfish behavior: when the temperature is too high, the crayfish enters a burrow to avoid heat damage; when the temperature is suitable, it crawls out of the burrow to forage. As an ectothermic animal, the crayfish changes its behavior with temperature and usually survives at temperatures ranging from 20°C to 35°C. The formula for calculating the temperature is given by: (1) where temp represents the temperature of the crayfish’s environment and rand is a random number in [0, 1].
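For reference, the temperature model is commonly stated for COA as follows (our reading of [28]; treat this as a sketch rather than a verbatim reproduction of Eq (1)):

\[ \text{temp} = \text{rand} \times 15 + 20 \]

so that temp varies over the stated survival range of [20, 35]°C.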

3.1.1. Initializing the population.

In the d-dimensional optimization problem of COA, each crayfish is a 1×d matrix representing a candidate solution. For a set of variables (X1, X2, X3, ..., Xd), the position X of each crayfish lies between the upper boundary (ub) and lower boundary (lb) of the search space. In each evaluation of the algorithm, the candidate solutions are compared, and the best solution found so far is stored as the optimal solution for the whole problem. The initial positions of the crayfish population are calculated using the following formula: (2) where Xi,j represents the position of the ith crayfish in the jth dimension, ubj and lbj represent the upper and lower bounds of the jth dimension, and rand is a random number in [0, 1].
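Written out, the uniform-random initialization that Eq (2) describes takes the standard form:

\[ X_{i,j} = lb_j + \text{rand} \times (ub_j - lb_j) \]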

3.1.2. Summer escape stage (exploration stage).

In this paper, a temperature of 30°C is taken as the threshold for deciding whether the current environment is a high-temperature environment. When the temperature exceeds 30°C (i.e., in summer), crayfish seek a cool and moist cave and enter it to escape the heat. The cave is calculated as follows: (3) where XG represents the best position obtained so far over all evaluations, and XL represents the best position in the current population.

The competition of crayfish for a cave is a random event. To simulate it, we define a random number rand: rand < 0.5 indicates that no other crayfish is currently competing for the cave, and the crayfish enters the cave directly to escape the heat. In this case, the crayfish position update is calculated as follows: (4) where Xnew represents the position of the next generation after the update and C2 is a decreasing curve. C2 is calculated as follows: (5) where t indicates the current iteration number and T indicates the maximum number of iterations.
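For reference, the summer-escape update is usually stated for COA as follows (our reading of [28]; a sketch of Eqs (3)-(5), not a verbatim reproduction):

\[ X_{\text{shade}} = \frac{X_G + X_L}{2} \]

\[ X_{\text{new}} = X_{i,j} + C_2 \cdot \text{rand} \cdot \left( X_{\text{shade}} - X_{i,j} \right) \]

\[ C_2 = 2 - \frac{t}{T} \]

which is consistent with the later statement that C2 decreases linearly from 2 to 1.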

3.1.3. Competition stage (exploitation stage).

When the temperature is greater than 30°C and rand ≥ 0.5, other crayfish are competing for the burrow during the summer escape. In this case, the two crayfish fight over the cave, and crayfish Xi adjusts its position according to the position of the other crayfish Xz. The adjusted position is calculated as follows: (6) (7)

where z represents a randomly selected crayfish individual and N represents the population size.
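The competition update is commonly given for COA as (our reading of [28]; a sketch of Eqs (6)-(7)):

\[ X_{i,j}^{t+1} = X_{i,j}^{t} - X_{z,j}^{t} + X_{\text{shade}} \]

\[ z = \mathrm{round}\big( \text{rand} \times (N - 1) \big) + 1 \]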

3.1.4. Foraging stage (exploitation stage).

The foraging behavior of crayfish is affected by temperature; a temperature of at most 30°C is the condition for crayfish to climb out of the cave to find food. When the temperature is less than or equal to 30°C, the crayfish leaves the burrow and judges the location of the food according to the best position obtained so far, in order to find food and complete foraging. The position of the food is calculated as follows: (8)

How much food a crayfish eats depends on the temperature. When the temperature is between 20°C and 30°C, the crayfish forages actively, with the strongest foraging and the largest food intake at 25°C. Thus, the food intake of crayfish approximately follows a normal distribution. Food intake is calculated as follows: (9) where μ indicates the optimal temperature for crayfish, and σ and C1 are parameters controlling the food intake at different temperatures.

The food a crayfish obtains depends not only on the amount of food it eats but also on the size of the food. If the food is too big, the crayfish cannot eat it directly and must first tear it apart with its claws. The size of the food is calculated as follows: (10) where C3 is the food factor representing the largest food size, with a value of 3; fitnessi represents the fitness value of the ith crayfish, and fitnessfood represents the fitness value of the food location.

Crayfish use the food size Q to judge the size of the food obtained and thus decide how to feed. When Q > (C3 + 1)/2, the food is too big for the crayfish to eat directly, so it tears the food apart with its claws and eats alternately with its second and third walking legs. The food-shredding formula is as follows: (11)

Once the food has been torn into an easy-to-eat size, the crayfish picks it up with its second and third walking legs and places it alternately into its mouth. To simulate this alternating feeding process, a mathematical model combining sine and cosine functions is used. The alternating feeding formula is as follows: (12)

When Q ≤ (C3 + 1)/2, the food size is suitable for the crayfish to eat directly, and the crayfish moves directly to the food location and feeds. The formula for direct feeding is as follows: (13)
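For reference, the foraging-stage equations are commonly stated for COA as follows (our reading of [28]; a sketch of Eqs (8)-(13) that should be checked against the original paper):

\[ X_{\text{food}} = X_G \]

\[ p = C_1 \cdot \frac{1}{\sqrt{2\pi}\,\sigma} \exp\!\left( -\frac{(\text{temp} - \mu)^2}{2\sigma^2} \right) \]

\[ Q = C_3 \cdot \text{rand} \cdot \frac{\text{fitness}_i}{\text{fitness}_{\text{food}}} \]

When Q > (C3 + 1)/2, the food is first shredded and then eaten alternately:

\[ X_{\text{food}} = \exp\!\left( -\frac{1}{Q} \right) \cdot X_{\text{food}}, \qquad X_{i,j}^{t+1} = X_{i,j}^{t} + X_{\text{food}} \cdot p \cdot \big( \cos(2\pi\,\text{rand}) - \sin(2\pi\,\text{rand}) \big) \]

otherwise the crayfish feeds directly:

\[ X_{i,j}^{t+1} = \left( X_{i,j}^{t} - X_{\text{food}} \right) \cdot p + p \cdot \text{rand} \cdot X_{i,j}^{t} \]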

3.2. The proposed ECOA

Considering the aforementioned analysis, we improved the COA algorithm from three perspectives:

  1. Using the tent chaotic map to initialize the positions of crayfish. Leveraging its nonlinear and dynamic characteristics, this method can generate a more diverse set of initial solutions, aiding the algorithm in searching a broader solution space.
  2. During the crayfish’s Summer escape stage, a nonlinear dynamic adjustment factor is designed to adaptively adjust the search step size, enhancing the exploitation capability.
  3. Using orthogonal refracted opposition-based learning strategy to increase solution diversity and reduce the risk of the algorithm getting trapped in local optima.

3.2.1. Improved population initialization with tent chaotic map.

A well-distributed, high-quality initial population allows COA to accumulate rich search experience and lays the foundation for intelligent heuristic search. Existing algorithms typically use pseudo-random numbers to initialize candidate solutions, which benefits global performance to some extent. However, the strong randomness of such initialization makes it difficult to maintain stable optimization accuracy, and relying on pseudo-random initialization can result in insufficient coverage of the search space, leading to a decline in population diversity. To enhance exploration capability and raise the level of population diversity [52], we use chaotic maps to improve population initialization. The tent map is a chaotic system that generates mapping relations based on a probability density function, helping to expand the search range of the initial population and improve the algorithm’s global search ability [53]. The tent mapping process is as follows: (14) where xn represents the current state of the map, xn+1 the next state, and γ the mapping parameter; to ensure the ergodicity of the initial population, γ = 1.1.

The scatter plot of positions initialized by the tent map is shown in Fig 1. The sequence values generated by the tent map are more evenly distributed between 0 and 1 compared to those generated by ordinary random numbers. Introducing the tent map into the initialization operation of the COA algorithm can increase population diversity and enhance the algorithm’s global search capability.
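A minimal sketch of tent-map-based initialization is given below. The two-piece tent form used here (x′ = γx for x < 0.5, γ(1 − x) otherwise, with γ = 2 by default) is an assumption for illustration; the paper’s Eq (14) uses γ = 1.1 and its exact parameterization may differ. Function and variable names are ours.

```python
import numpy as np

def tent_map_init(pop_size, dim, lb, ub, gamma=2.0, seed=None):
    """Illustrative population initialization via a tent chaotic map.

    The map x' = gamma*x (x < 0.5), gamma*(1 - x) (x >= 0.5) is a common
    two-piece tent form; it is an assumption here, not the paper's exact Eq (14).
    """
    rng = np.random.default_rng(seed)
    lb = np.broadcast_to(np.asarray(lb, float), (dim,))
    ub = np.broadcast_to(np.asarray(ub, float), (dim,))
    pop = np.empty((pop_size, dim))
    x = rng.random(dim)                                   # one chaotic sequence per dimension
    for i in range(pop_size):
        x = np.where(x < 0.5, gamma * x, gamma * (1.0 - x))
        x = np.where((x <= 0.0) | (x >= 1.0), rng.random(dim), x)  # guard against degenerate values
        pop[i] = lb + x * (ub - lb)                       # map chaotic values into [lb, ub]
    return pop

# Example: 30 crayfish in a 30-dimensional search space bounded by [-100, 100]
population = tent_map_init(30, 30, lb=-100.0, ub=100.0, seed=1)
```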

Fig 1. Initial population distribution.

(a) Without tent chaotic map; (b) With tent chaotic map.

https://doi.org/10.1371/journal.pone.0318203.g001

In summary, using the tent chaotic map during the COA initialization phase ensures that the initial population covers a wide solution space. This reduces the risk of premature convergence to local optima. Moreover, by diversifying the initial population, the COA algorithm can explore a broader search space, thus increasing the probability of finding the global optimal solution.

3.2.2. Nonlinear dynamic adjustment factor.

In the original COA algorithm, C2 is updated using Eq (5). Although this linear change can dynamically adjust the search step size to some extent, the variation of C2 is fixed in every iteration and lacks randomness, which reduces solution diversity and search-space coverage and can easily lead to trapping in local optima. During the early exploration stage, the fast rate of change may lead to insufficient exploration, while in the mid-term it may not allow adequate leap searches. To enhance the algorithm’s global search capability, we introduce a random factor and design a new nonlinear dynamic adjustment factor as follows: (15)

At this point, the crayfish position update of Eq (4) is replaced with: (16)

In Eq (5), C2 decreases linearly from 2 to 1. Although this linear change is simple and intuitive, the rapid decrease in the early stages may lead to a premature loss of exploration ability. Moreover, since the step size and direction of each iteration are predictable, the risk of the algorithm getting trapped in local optima increases. Eq (15) provides a new nonlinear dynamic adjustment method, making Cnew have smaller initial values and slower changes, suitable for stable exploration. In the mid-to-late stages, it gradually increases, which helps escape local optima and enables broader searches. Its rate of change is influenced by a combination of factors and random numbers, providing high flexibility and adaptability. The iterative trend plot of Cnew is shown in Fig 2. The random factor rand in Eq (15) introduces a certain randomness to Cnew in each iteration, further increasing solution diversity and avoiding local optima.
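The exact expression of Eq (15) is not reproduced here; the snippet below is a purely illustrative stand-in that only mimics the qualitative behaviour described above (small and slowly varying early in the run, gradually increasing in the mid-to-late stages, perturbed by a random factor). The functional form and all names are our assumptions, not the paper’s formula.

```python
import numpy as np

def illustrative_nonlinear_factor(t, T, rng=np.random.default_rng()):
    """NOT the paper's Eq (15): a generic nonlinear, randomized factor that is
    small early on and grows toward the end of the run."""
    base = 1.0 + (t / T) ** 2            # assumed nonlinear growth over the iterations
    return base * (0.5 + rng.random())   # random perturbation adds per-iteration diversity
```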

Fig 2. Nonlinear dynamic adjustment factor.

In the mid-to-late stages, it gradually increases, which helps escape local optima and enables broader searches.

https://doi.org/10.1371/journal.pone.0318203.g002

3.2.3. Orthogonal refracted opposition-based learning strategy.

Since MH algorithms are often weak at escaping local optima, the lens imaging opposition-based learning (LOBL) strategy [54,55] is commonly introduced to improve their performance; it seeks better solutions by generating a reverse position from the current individual position. The principle of lens imaging is shown in Fig 3.

Fig 3. Principle of lens imaging.

The optimal solution is sought by generating the reverse position according to the current individual position.

https://doi.org/10.1371/journal.pone.0318203.g003

As shown in Fig 3, suppose that there exists an individual P in the interval [lb, ub] with height h and projection X on the x-axis. Imaging through a convex lens placed at point o (the midpoint of [lb, ub]) produces P′ of height h′, whose projection on the x-axis is X′. The imaging principle can then be written as: (17)

Letting h/h′ = k and rearranging the formula gives: (18)

The scaling factor k is calculated as follows: (19)

The lens imaging reverse learning strategy explores previously uncovered areas in the solution space by reflecting and scaling solutions, thereby increasing solution diversity and reducing the risk of the algorithm falling into local optima. Additionally, in the later stages of the algorithm, when the k value is large, the newly generated solutions will be more concentrated around the current optimal solution. This helps the algorithm to fine-tune these solutions more precisely, accelerating convergence to the global optimum or near-global optimum solutions.
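For reference, the standard lens-imaging opposition relations behind Eqs (17)-(18) (as used in [54,55]; the exact k schedule of Eq (19) is not reproduced here) are:

\[ \frac{\tfrac{lb+ub}{2} - X}{X' - \tfrac{lb+ub}{2}} = \frac{h}{h'} = k \quad \Longrightarrow \quad X' = \frac{lb+ub}{2} + \frac{lb+ub}{2k} - \frac{X}{k} \]

When k = 1 this reduces to the classical opposite point X′ = lb + ub − X; larger k pulls X′ toward the centre of the interval, giving finer local adjustments.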

Orthogonal experimental design (OED) can identify the optimal combination in a multi-factor, multi-level experiment through a small number of trials [56]. For example, for an experiment with 2 levels and 7 factors, a full factorial test would require 2^7 = 128 trials to identify the optimal combination. Using an orthogonal experimental design based on the orthogonal table L8(2^7), as shown in Eq (20), the optimal or near-optimal combination can be found with only 8 trials, significantly improving experimental efficiency. However, owing to the nature of orthogonal experimental design, it cannot guarantee that the combinations in the orthogonal table contain the true optimal solution of the experiment [56]. Therefore, when using orthogonal tables, it is generally necessary to perform factor analysis to identify the theoretical optimal combination, and then determine the final optimal solution by comparing it with all the combinations in the orthogonal table. Thus, for the experiment with 2 levels and 7 factors, 8 candidate solutions are first obtained from the orthogonal table L8(2^7), factor analysis is then conducted to identify a theoretically optimal combination, and finally the 9 combinations are evaluated to determine the overall optimal solution of the experiment.

(20)
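Eq (20) refers to the two-level orthogonal table L8(2^7); in its standard form (reproduced here for reference, since the table itself is not rendered above) it reads:

\[
L_8(2^7) =
\begin{bmatrix}
1 & 1 & 1 & 1 & 1 & 1 & 1\\
1 & 1 & 1 & 2 & 2 & 2 & 2\\
1 & 2 & 2 & 1 & 1 & 2 & 2\\
1 & 2 & 2 & 2 & 2 & 1 & 1\\
2 & 1 & 2 & 1 & 2 & 1 & 2\\
2 & 1 & 2 & 2 & 1 & 2 & 1\\
2 & 2 & 1 & 1 & 2 & 2 & 1\\
2 & 2 & 1 & 2 & 1 & 1 & 2
\end{bmatrix}
\]

Each column contains each level exactly four times, and every pair of columns contains all four level combinations equally often, which is what makes the 8-trial design representative of the full 128-trial factorial.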

To enhance the ability of the COA algorithm to escape local optima, this paper proposes a strategy called orthogonal lens opposition-based learning (OLOBL) and applies it to the leader individual to generate new candidate individuals.

OLOBL is a strategy designed by integrating the OED and LOBL techniques. The optimal solution executes the OLOBL strategy, jumping to more promising search areas, thereby enhancing population diversity and reducing the probability of the algorithm falling into local optima. However, the study in reference [57] shows that, for an individual, its opposite solution is superior to the current solution only in certain dimensions. To address this issue, an orthogonal reflection opposite learning strategy is designed by integrating the OED and LOBL techniques, which fully explores each dimensional component of both the current and opposite solutions and combines their advantageous dimensions to generate a partial reflection opposite solution.

The OLOBL strategy is embedded into the COA algorithm, where the optimization problem’s dimension D corresponds to the factors in the orthogonal experimental design, and the individual and its reflection opposite solution represent the two levels in the orthogonal experimental design. The detailed process for constructing a partial reflection opposite solution is as follows: an orthogonal experiment with 2 levels and D factors is designed for the current solution and its reflection opposite solution, generating M partial reflection opposite solutions, where M is calculated according to Eq (21). Specifically, when generating partial opposite solutions based on the orthogonal table, if the element in the orthogonal table is 1, the value of the corresponding dimension in the trial solution is set to the value of the current solution; if the element is 2, the value of the corresponding dimension is set to that of the reflection opposite solution.

(21)

According to the characteristics of the orthogonal experimental design, all elements in the first row of the orthogonal table are 1, indicating that the first trial solution is identical to the original individual and thus does not require evaluation. The remaining M-1 trial solutions are combinations of the advantageous dimensions of the current individual and its reflection opposite individual, i.e., partial reflection opposite solutions, which need to be evaluated. When using orthogonal experimental design, it is necessary to perform factor analysis to identify a theoretically optimal combination that does not exist in the orthogonal table, which also requires evaluation. Therefore, executing the OLOBL strategy requires M function evaluations. During the evolutionary iterations, the OLOBL strategy is only applied to the leader, and the superior individual is selected from the leader and its orthogonal reflection opposite solutions to enter the next generation. This approach effectively enhances the global exploration ability of the algorithm, reduces the number of function evaluations, and improves the overall performance of the algorithm.

In the orthogonal reflection opposite learning strategy, a reflection opposite learning approach based on the lens imaging principle is employed to enhance exploration of the opposite solution space, significantly reducing the probability of the algorithm falling into local optima. The orthogonal experimental design is used to construct several partial opposite solutions by taking reflection opposite values in certain dimensions, thoroughly exploring and preserving the advantageous dimensional information of both the current individual and the reflection opposite individual.
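A minimal sketch of the OLOBL step applied to the leader is given below, under explicit assumptions: the orthogonal array is built with the standard two-level XOR construction, the opposite solution uses the lens-imaging relation above with a fixed k (the paper’s Eq (19) schedule is not reproduced), and the "factor analysis" simply keeps, per dimension, the level whose trials have the better mean fitness. All function and variable names are ours.

```python
import numpy as np

def two_level_oa(n_factors):
    """Two-level orthogonal array with M = 2^ceil(log2(D+1)) rows (Eq (21)),
    built by the standard XOR (Hadamard-style) construction; levels are 1/2."""
    m = int(2 ** np.ceil(np.log2(n_factors + 1)))
    u = int(np.log2(m))
    base = np.array([[(r >> b) & 1 for b in range(u)] for r in range(m)])
    cols = [np.bitwise_xor.reduce(base[:, [b for b in range(u) if (mask >> b) & 1]], axis=1)
            for mask in range(1, m)]
    return np.array(cols).T[:, :n_factors] + 1, m

def olobl_step(leader, f_leader, fitness, lb, ub, k=8.0):
    """Sketch of one OLOBL application to the leader (minimization assumed)."""
    d = leader.size
    opposite = np.clip((lb + ub) / 2 + (lb + ub) / (2 * k) - leader / k, lb, ub)
    oa, m = two_level_oa(d)
    trials = np.where(oa == 1, leader, opposite)            # row 1 equals the leader itself
    f_all = np.concatenate(([f_leader], [fitness(x) for x in trials[1:]]))  # M-1 evaluations
    # "factor analysis": per dimension, keep the level whose trials have the better mean fitness
    keep_leader = np.array([f_all[oa[:, j] == 1].mean() <= f_all[oa[:, j] == 2].mean()
                            for j in range(d)])
    predicted = np.where(keep_leader, leader, opposite)      # theoretical best combination
    cand = np.vstack([trials, predicted])
    f_cand = np.concatenate([f_all, [fitness(predicted)]])   # one more evaluation (M in total)
    j = int(np.argmin(f_cand))
    return (cand[j], f_cand[j]) if f_cand[j] < f_leader else (leader, f_leader)

# Example with a simple sphere function on [-100, 100]^7
sphere = lambda x: float(np.sum(x ** 2))
lb, ub = -100.0 * np.ones(7), 100.0 * np.ones(7)
leader = np.random.uniform(lb, ub)
best, f_best = olobl_step(leader, sphere(leader), sphere, lb, ub)
```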

3.2.4. ECOA algorithm description.

In the improved ECOA algorithm, the tent chaotic map is used to obtain a higher quality initial solution. Additionally, during the Summer escape stage, a novel nonlinear dynamic adjustment factor is designed to replace the search step update method. This change can adaptively adjust the search step size, balancing the exploration and exploitation process of the algorithm, and further enhancing the global search capability. The OLOBL strategy is introduced in each iteration to generate and select new candidate solutions, helping the algorithm to better explore and exploit the solution space. The pseudocode for ECOA is as follows (Algorithm 1). The detailed process of ECOA is shown in Fig 4.

Algorithm 1 ECOA

  1. Initialization phase
  2. Initialization iterations T, population N, dimension dim
  3. Utilize tent chaos mapping for population initialization
  4. Calculate the fitness value of the population to get XG, XL
  5. While (t< T) do
  6.  Defining temperature temp by Eq (1).
  7. If temp>30
  8.   Define cave Xshade according to Eq (3).
  9.   If rand<0.5
  10.   Crayfish conducts the summer escape stage according to Eqs (15) and (16).
  11.   Else
  12.   Crayfish compete for caves through Eq (6).
  13.  End
  14. Else
  15.  The food intake p and food size Q are obtained by Eqs (9) and (10).
  16. If Q>2
  17.   Crayfish shreds food by Eq (11).
  18.   Crayfish foraging according to Eq (12).
  19. Else
  20.   Crayfish foraging according to Eq (13).
  21.  End
  22. Generate a new best candidate solution according to Eqs (17)-(21).
  23.  Calculate the fitness value of the population to get XG, XL.
  24. End
  25. Update fitness values, XG, XL
  26. t = t+1
  27. End
Fig 4. Flowchart of ECOA.

The execution steps of our proposed algorithm are shown in detail.

https://doi.org/10.1371/journal.pone.0318203.g004

3.3. Computational complexity of ECOA

The computational complexity of the ECOA algorithm is primarily influenced by two key factors: solution initialization and the execution of the core functions, which encompass fitness evaluations and solution updates. The complexity depends on the number of solutions (N), the maximum number of iterations (T), and the problem dimension (D). Specifically, the complexity of initializing solutions is O(N), so it grows directly with the number of solutions. The overall time complexity of the core functions of the original COA is O(T×N×D), considering the number of iterations (T), the number of solutions (N), and the problem dimension (D). ECOA modifies this with Eqs (14)-(21), including the tent chaos mapping to enhance population diversity, a new nonlinear convergence factor to balance exploration and exploitation, and the OLOBL strategy to obtain better solutions. The tent chaos mapping, which requires computation for each individual, has a complexity of O(N). The update of Eqs (15) and (16) is independent of the population size and the search dimension, depending only on the maximum number of iterations, giving a time complexity of O(T). Similarly, the update of Eqs (17)-(21) is independent of the population size and depends only on the maximum number of iterations and the search dimension, giving a time complexity of O(T×D). Furthermore, the computational complexity of Eqs (11) and (12) remains O(T×N×D). Consequently, the overall time complexity of ECOA is O(T×N×D), consistent with the original algorithm.
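Summing the component costs stated above confirms the bound:

\[ O(N) + O(T) + O(T \cdot D) + O(T \cdot N \cdot D) = O(T \cdot N \cdot D) \]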

4. Algorithm performance testing and analysis

The simulation environment of this study runs on a Windows 11 64-bit operating system with an AMD Ryzen 7 4800H CPU (base frequency 2.30 GHz) and 16 GB of RAM. The algorithms were implemented on the MATLAB 2023b platform.

4.1. Test functions and parameter settings

To evaluate the effectiveness of the newly proposed ECOA algorithm, it was tested on the CEC2017 test function set (dim = 30, 50, 100). The CEC series provides basic test functions that serve both as benchmarks for comparing the performance of optimization algorithms and as tools for simulating the complexity of real-world problems. The set includes 30 CEC2017 test functions, each composed of different basic functions: F1 to F3 are unimodal functions, F4 to F10 are multimodal functions, F11 to F20 are hybrid functions, and F21 to F30 are composite functions; the F2 function was officially removed due to its instability in high-dimensional scenarios. The search domain of the CEC2017 test set is uniformly set to [-100, 100]^D.

Comparative experiments were conducted between the ECOA algorithm and eight highly cited algorithms: CPSOGSA [30], GQPSO [31], EDOLSCA [32], WOA [6], SCA [33], CPO [34], SWO [35], and the original COA algorithm [28]. Table 1 provides a detailed summary of the parameters used in these eight MH algorithms; the parameters of the comparative algorithms were kept consistent with those in the original literature. The experimental results were recorded in detail, including the mean (denoted "Ave") and standard deviation (Std) of each algorithm, and the best results among the nine algorithms were highlighted in bold in the tables. In this study, we selected the maximum number of iterations (T) as the termination criterion because it provides a consistent and straightforward measure of algorithm performance, especially for evaluating convergence behavior under controlled computational conditions. In these tests, the population size (N) of each algorithm was fixed at 30 and the maximum number of iterations (T) was set to 500, following the settings suggested in the original COA paper and the papers of the other comparison algorithms. Each experiment was independently conducted 30 times, and the best fitness value of each trial was recorded.

Table 1. Parameter configurations for competing algorithms.

https://doi.org/10.1371/journal.pone.0318203.t001

4.2. Comparative analysis of ECOA and other algorithms

The CEC2017 series of functions is a valuable tool for simulating complex real-world problems. In this study, we compared the proposed algorithm with eight other competitive algorithms: CPSOGSA[30], GQPSO[31], EDOLSCA[32], WOA[6], SCA[33], CPO[34], SWO[35], and the original COA algorithm [28]. To ensure consistency in the experimental setup, the parameters such as the number of runs, population size, test dimensions, and maximum number of iterations were kept consistent with Section 4.1.

The experiments were conducted independently 30 times, and the best fitness value of each set of trials was recorded. Tables 2-4 present the average best fitness (Ave) and standard deviation (Std) obtained from 30 repeated experiments for CPSOGSA [30], GQPSO [31], EDOLSCA [32], WOA [6], SCA [33], CPO [34], SWO [35], the original COA algorithm [28], and ECOA. The superiority of the ECOA algorithm was assessed through comprehensive statistical analysis. The first summary row compiles the Wilcoxon rank-sum test results, assessing the statistical significance of ECOA compared with the other algorithms at a significance threshold of 5%: p < 5% indicates a statistically significant difference between ECOA and the comparative algorithm, while p ≥ 5% indicates no statistically significant difference. The symbols '+', '-', and '=' indicate that the optimization performance of ECOA is better than, worse than, or equal to that of the other algorithms, respectively. The second summary row provides the overall ranking derived from the Friedman test. In each set of test functions, the algorithm with the lowest mean and standard deviation is highlighted in bold, indicating its superior performance.
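As an aside, the two statistical tests used here can be reproduced with standard tools; the snippet below only illustrates the methodology (the data layout, the algorithm subset, and the values are placeholders, not the paper’s results).

```python
import numpy as np
from scipy.stats import ranksums, friedmanchisquare, rankdata

# results[name] holds 30 best-fitness values of one algorithm on one test function
rng = np.random.default_rng(0)                       # placeholder data, not the paper's
results = {name: rng.random(30) for name in ["ECOA", "COA", "WOA", "SCA"]}

# Wilcoxon rank-sum test of ECOA against each competitor at the 5% level
for name in ["COA", "WOA", "SCA"]:
    _, p = ranksums(results["ECOA"], results[name])
    print(f"ECOA vs {name}: p = {p:.4f}", "significant" if p < 0.05 else "not significant")

# Friedman test and mean ranks (lower mean rank = better, minimization assumed)
data = np.vstack(list(results.values()))             # rows: algorithms, columns: runs
_, p = friedmanchisquare(*data)
mean_ranks = rankdata(data, axis=0).mean(axis=1)
print("Friedman p =", round(p, 4), dict(zip(results, mean_ranks)))
```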

In the performance comparison experiments on the CEC2017 test set with eight other advanced algorithms, ECOA performed the best. In the tests with dimensions of 30, 50, and 100, ECOA achieved the most first-place rankings out of the 29 test functions, attaining the best or near-best optimization performance on all test functions, without being the worst on any test function. This excellent performance of ECOA is mainly due to a number of key improvements introduced in the algorithm.

First, ECOA uses tent chaotic mapping to generate diverse initial populations, effectively improving global search capability and reducing the risk of falling into local optima. In addition, the nonlinear dynamic adjustment factor enables ECOA to adapt the step size to different stages, balancing exploration and exploitation and thus improving convergence speed and efficiency. The OLOBL strategy further improves the diversity and quality of solutions and helps the algorithm break out of local optima to obtain higher-quality solutions.

The statistical results of the Wilcoxon rank-sum test show that ECOA consistently outperformed the other eight advanced algorithms on the CEC2017 function suite, highlighting its superior performance and demonstrating its robustness. It is the combined application of tent chaotic mapping, the dynamic adjustment factor, and the OLOBL strategy that allows ECOA to perform well across dimensions and problem types, ensuring its efficiency in complex solution spaces.

To further analyze the convergence speed and iteration process of the aforementioned algorithms, 12 different types of test functions are selected for comparison. As shown in Fig 5, ECOA outperformed other algorithms in terms of both convergence speed and accuracy. The experimental results indicate that ECOA consistently maintained the fastest convergence speed and highest convergence accuracy, further verifying its superior performance.

Fig 5. Comparison of convergence curves with different advanced algorithms.

ECOA outperformed other algorithms in terms of both convergence speed and accuracy. The experimental results indicate that ECOA consistently maintained the fastest convergence speed and highest convergence accuracy.

https://doi.org/10.1371/journal.pone.0318203.g005

In summary, ECOA is an intelligent optimization algorithm that can consistently obtain high-quality solutions. It has robust stability, fast convergence, high precision convergence, and the ability to avoid falling into local optima.

4.3. Ablation experiment

ECOA incorporates three improvement strategies: the tent chaotic map, a novel nonlinear dynamic adjustment factor, and the OLOBL strategy. To further explore the impact of these strategies on ECOA, we conducted ablation experiments in this section. We constructed three single-strategy variants: ECOA1 incorporating the tent chaotic map, ECOA2 utilizing the nonlinear dynamic adjustment factor, and ECOA3 implementing the OLOBL strategy. To explore the interaction between strategies, we also evaluated their pairwise combinations, resulting in three additional variants: ECOA12 (strategies 1 and 2), ECOA13 (strategies 1 and 3), and ECOA23 (strategies 2 and 3). As shown in the experimental results in Fig 6, the three strategies have varying effects on COA’s performance, with the full ECOA demonstrating the most significant improvements.

Fig 6. Comparison of different improvement strategies.

ECOA12 (combining strategies 1 and 2), ECOA13 (combining strategies 1 and 3), and ECOA23 (combining strategies 2 and 3).

https://doi.org/10.1371/journal.pone.0318203.g006

The ablation experiments were conducted on the CEC 2017 benchmark suite (Dim = 100). When handling unimodal and multimodal functions, the results of ECOA2 and ECOA3 are relatively consistent, with both showing more noticeable improvements over COA than ECOA1. However, when dealing with the more complex hybrid functions, ECOA3 demonstrates a more significant enhancement than ECOA2, while the ECOA algorithm, which integrates all three strategies, continues to exhibit the best optimization performance. It is worth noting that, compared with the single-strategy variants, the combined-strategy variants, especially ECOA12 and ECOA13, yield more obvious performance improvements, which confirms the effectiveness and adaptability of the OLOBL strategy. The ECOA algorithm successfully overcame COA’s issues with local optima and premature convergence, significantly improving both convergence speed and accuracy. These findings provide valuable insights for the further development and application of COA.

5. ECOA algorithm practical engineering application

This section is dedicated to studying the practical applications of the ECOA algorithm, particularly its application in robotic arm trajectory planning. To evaluate the performance of the ECOA algorithm, simulation experiments were conducted on the motion trajectory of a robotic arm. The simulation results verified the algorithm’s navigation capability in complex environments, making it suitable for robotic arm trajectory planning applications.

5.1. Trajectory planning of robot arm model

5.1.1. Length of path cost.

In robotic arm trajectory planning, the cost related to path length is primarily associated with the energy consumed during task execution: a shorter path generally means less energy consumption by the robotic arm. In industrial applications, reducing energy consumption is one of the key factors in lowering operational costs. To quantify this cost, a formula is designed that accurately reflects this relationship. The path length can be obtained by summing the distances between all consecutive points. For a path in three-dimensional space, the path length cost is calculated as follows: (22) where (xi, yi, zi) and (xi+1, yi+1, zi+1) represent two consecutive points on the path, and N is the total number of points on the path.
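Written out explicitly, the path-length cost that Eq (22) describes is the sum of Euclidean segment lengths (the coordinate notation is ours):

\[ L = \sum_{i=1}^{N-1} \sqrt{ (x_{i+1} - x_i)^2 + (y_{i+1} - y_i)^2 + (z_{i+1} - z_i)^2 } \]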

5.1.2. Angle of turn cost.

Frequent or sharp turns may increase wear on the manipulator joints and actuation system, so optimizing the turning angle can prolong the service life of the mechanical equipment. The cost function of the turning angle is: (23) (24)

where υi and υi+1 correspond to two consecutive points on the path.
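The exact forms of Eqs (23)-(24) are not reproduced above; a common formulation consistent with this description computes the angle between consecutive segment vectors and sums it along the path (the symbols below are ours):

\[ \theta_i = \arccos\!\left( \frac{ \mathbf{v}_i \cdot \mathbf{v}_{i+1} }{ \lVert \mathbf{v}_i \rVert \, \lVert \mathbf{v}_{i+1} \rVert } \right), \qquad A = \sum_i \theta_i , \]

where \(\mathbf{v}_i\) denotes the vector from the i-th to the (i+1)-th waypoint.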

5.1.3. Height variation cost.

In complex environments, such as factories or warehouses, the robotic arm needs to move among multiple objects and obstacles. Reasonably planning height changes can prevent the robotic arm from colliding with objects in the environment. Height variation H evaluates the vertical fluctuation of the path, which can be represented by the sum of the absolute differences between the heights of all points and the average height. Thus, the cost function for height variation is: (25)

where z̄ represents the average value of zi.
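In explicit form, the height-variation cost described above is:

\[ H = \sum_{i=1}^{N} \left| z_i - \bar{z} \right|, \qquad \bar{z} = \frac{1}{N} \sum_{i=1}^{N} z_i . \]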

5.1.4. Performance measurement function of trajectory planning of robot arm.

Robotic arm trajectory planning considers three main costs: path length, turning angle, and height variation, weighted by coefficients w1, w2, and w3. On this basis, the objective function of robotic arm trajectory planning is the weighted sum of the different cost components, designed to strike a balance between the various factors and determine the most efficient operating trajectory. The objective function is defined as follows: (26) where w1, w2, and w3 denote the weight of each term.
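The following sketch evaluates a candidate waypoint path against this weighted objective. It follows the descriptions in Sections 5.1.1-5.1.3; the turning-angle formulation, the default weights, and all names are illustrative assumptions rather than the paper’s exact Eqs (22)-(26).

```python
import numpy as np

def trajectory_cost(path, w1=1.0, w2=1.0, w3=1.0):
    """Weighted trajectory-planning objective (sketch of Eq (26)).

    `path` is an (N, 3) array of waypoints; the three terms follow the
    descriptions in Sections 5.1.1-5.1.3 (coordinate handling and the default
    weights are illustrative assumptions, not values from the paper).
    """
    path = np.asarray(path, dtype=float)
    seg = np.diff(path, axis=0)                      # consecutive segment vectors
    length = np.linalg.norm(seg, axis=1).sum()       # path-length cost
    # turning-angle cost: angle between consecutive segments
    a, b = seg[:-1], seg[1:]
    cosang = np.einsum('ij,ij->i', a, b) / (np.linalg.norm(a, axis=1) * np.linalg.norm(b, axis=1) + 1e-12)
    turn = np.arccos(np.clip(cosang, -1.0, 1.0)).sum()
    # height-variation cost: deviation of each z from the mean height
    height = np.abs(path[:, 2] - path[:, 2].mean()).sum()
    return w1 * length + w2 * turn + w3 * height

# Example: evaluate a 5-point candidate path
print(trajectory_cost(np.array([[0, 0, 0], [1, 0, 0.2], [2, 1, 0.1], [3, 1, 0.3], [4, 2, 0.0]])))
```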

5.2. Simulation and analysis of trajectory planning of robot arm

5.2.1. Algorithm application and experimental simulation.

Based on the three cost functions introduced in Section 5.1, the performance measurement function of robotic arm trajectory planning is defined to carry out optimal path planning: the path with the minimum comprehensive cost is sought, each candidate path is checked for collisions with obstacles, and paths passing through obstacles are discarded. To demonstrate the performance of ECOA on the robotic arm trajectory planning problem, CPSOGSA [30], GQPSO [31], EDOLSCA [32], WOA [6], SCA [33], CPO [34], SWO [35], the original COA algorithm [28], and ECOA were applied to the same trajectory planning problem. The parameter settings of the algorithms are as follows: the population size (N) is 30 and the maximum number of iterations (T) is 200; the parameter configurations of the comparative algorithms are consistent with Section 4.1. The simulation results are shown in the figures: Fig 7 shows the three-dimensional trajectory planning results. This comprehensive simulation and analysis highlights the effectiveness of the ECOA algorithm in navigating complex environments and its potential advantages in robotic arm trajectory planning compared with other algorithms.

Fig 7. Best three-dimensional trajectory planning by each algorithm.

This comprehensive simulation and analysis highlight the effectiveness of the ECOA algorithm in navigating complex environments and its potential advantages in robotic arm trajectory planning compared to other algorithms.

https://doi.org/10.1371/journal.pone.0318203.g007

5.2.2. Analysis of simulation result.

From the analysis of the paths given by different algorithms in Fig 7, the experimental results indicate that the paths generated by the CPO and CPSOGSA algorithms tend to be longer, resulting in increased energy consumption. More importantly, their turning angles are too sharp, leading to abrupt turns, which pose significant safety hazards and increase the likelihood of robotic arm malfunctions. The trajectories of the WOA and SCA algorithms exhibit significant height variations, greatly increasing the risk of collisions. Although the routes planned by the GQPSO and SWO algorithms avoid obstacles and have smoother trajectories, they increase the path length. In contrast, the optimized ECOA algorithm successfully mitigated these issues. Overall, ECOA not only has a shorter path length but also exhibits smoother trajectories with less severe height variations, significantly reducing the risk of malfunctions.

Under the same test conditions, 30 independent simulation experiments were carried out for the nine algorithms. The comprehensive costs of the nine algorithms were statistically analyzed, and the relevant statistics are listed in Table 5.

Table 5. Statistics of trajectory planning of robot arm results.

https://doi.org/10.1371/journal.pone.0318203.t005

The analysis of the experimental data consistently shows that the ECOA algorithm achieved the best results across various metrics, including the best cost, worst cost, average cost, and median, and obtained nearly the smallest standard deviation. These results emphasize the excellent optimization performance of the ECOA algorithm, particularly in the field of robotic arm trajectory planning, where its optimization results demonstrate higher stability. The Wilcoxon rank-sum test results indicate that, except for WOA and COA, ECOA showed statistically significant performance improvements over most of the comparative algorithms. In the Friedman ranking, ECOA achieved first place, demonstrating its leading performance. Notably, compared with the other algorithms, the ECOA algorithm starts from a significantly lower initial best fitness value, indicating its proximity to the global optimum; this characteristic significantly reduces the likelihood of the algorithm getting trapped in local optima. Such stability and reliability are crucial in practical applications. In summary, the ECOA algorithm performs outstandingly in handling robotic arm trajectory planning problems.

5.2.3. Discussion.

The experimental results in Section 5.2 highlight the superiority of the ECOA algorithm in robotic arm trajectory planning compared to eight other algorithms. This section provides an in-depth analysis of the factors contributing to ECOA’s superior performance.

The results consistently demonstrate ECOA’s superiority over other competitive algorithms. ECOA achieved the lowest Worst Cost, Best Cost, and Average Cost, along with an exceptionally low Standard Deviation (5.57E-13), reflecting both high-quality solutions and significant robustness across multiple trials. Statistical analyses, including the Wilcoxon rank-sum test and Friedman ranking, consistently positioned ECOA as the top-performing algorithm, underscoring its superior and reliable performance.

ECOA’s remarkable performance can be attributed to several key enhancements, particularly the nonlinear dynamic adjustment factor and the orthogonal refracted opposition-based learning strategy. The nonlinear dynamic adjustment factor adapts the search behavior to the optimization phase, enabling extensive exploration during initial stages and intensifying exploitation in later stages. This adaptive mechanism is crucial for preventing premature convergence and guiding the search toward the global optimum. Additionally, the orthogonal refracted opposition-based learning strategy plays a vital role in maintaining population diversity and avoiding local optima, thereby enhancing solution quality. Together, these integrated strategies enable ECOA to outperform other algorithms by achieving shorter, smoother, and more consistent solution paths, as evidenced by the experimental data.

6. Conclusion

In this study, we conducted a detailed analysis of the COA algorithm, identifying its computational challenges and limitations. To address these issues, we proposed and integrated three strategic improvements: tent chaotic mapping, a nonlinear perturbation factor, and an orthogonal refracted opposition-based learning strategy, which improves the exploration ability of the algorithm and resolves the dimension-degradation problem of opposition-based learning. The integration of these three strategies not only enhanced the global search capability of the COA algorithm but also improved its precision during the local optimization phase, thereby significantly accelerating convergence.

Evaluations based on the CEC2017 test set in 30, 50, and 100 dimensions showed that, compared to a series of well-known algorithms, ECOA exhibited rapid convergence performance and global optimization capability. We used the Wilcoxon rank-sum test and Friedman rank-sum test to statistically verify the superiority of ECOA. The ECOA algorithm was applied to robotic arm trajectory planning and compared with eight advanced algorithms, verifying its versatility and superiority. Experimental results showed that the ECOA outperformed CPSOGSA, GQPSO, EDOLSCA, WOA, SCA, CPO, SWO, and the original COA.

Given the excellent performance demonstrated by ECOA, its application is expected to expand to a broader range of real-world challenges, such as logistics, healthcare, and energy management.

However, ECOA also has limitations. A comparison with CEC award-winning algorithms such as LSHADE_cnEpSin, LSHADE_SPACMA, EA4eig, and MadDE shows that ECOA does not reach state-of-the-art performance on these challenging benchmarks: although it is well suited to manipulator trajectory planning, it does not surpass the leading competition algorithms on the CEC test suite. To address these limitations, future research will focus on integrating multiple metaheuristic strategies to better balance exploration and exploitation, thereby improving both efficiency and scalability.

References

  1. Ata AA. Optimal trajectory planning of manipulators: a review. Journal of Engineering Science and Technology. 2007;2(1):32–54.
  2. Ekrem Ö, Aksoy B. Trajectory planning for a 6-axis robotic arm with particle swarm optimization algorithm. Engineering Applications of Artificial Intelligence. 2023;122:106099.
  3. Savsani P, Jhala RL, Savsani VJ. Comparative study of different metaheuristics for the trajectory planning of a robotic arm. IEEE Systems Journal. 2014;10(2):697–708.
  4. Dai Y, Xiang C, Zhang Y, Jiang Y, Qu W, Zhang Q. A review of spatial robotic arm trajectory planning. Aerospace. 2022;9(7):361.
  5. Shareef Z, Trächtler A. Simultaneous path planning and trajectory optimization for robotic manipulators using discrete mechanics and optimal control. Robotica. 2016;34(6):1322–34.
  6. Mirjalili S, Lewis A. The Whale Optimization Algorithm. Adv Eng Software. 2016;95:51–67.
  7. Yang X, Hao X, Yang T, Li Y, Zhang Y, Wang J. Elite-guided multi-objective cuckoo search algorithm based on crossover operation and information enhancement. Soft Computing. 2023;27(8):4761–78.
  8. Barua S, Merabet A. Lévy Arithmetic Algorithm: An enhanced metaheuristic algorithm and its application to engineering optimization. Expert Systems with Applications. 2024;241:122335.
  9. Sowmya R, Premkumar M, Jangir P. Newton-Raphson-based optimizer: A new population-based metaheuristic algorithm for continuous optimization problems. Engineering Applications of Artificial Intelligence. 2024;128:107532.
  10. Han M, Du Z, Yuen KF, Zhu H, Li Y, Yuan Q. Walrus optimizer: A novel nature-inspired metaheuristic algorithm. Expert Systems with Applications. 2024;239:122413.
  11. Kundu R, Chattopadhyay S, Nag S, Navarro MA, Oliva D. Prism refraction search: A novel physics-based metaheuristic algorithm. The Journal of Supercomputing. 2024;80(8):10746–95.
  12. Abd Elaziz M, Lu S, He S. A multi-leader whale optimization algorithm for global optimization and image segmentation. Expert Systems with Applications. 2021;175:114841.
  13. Fu S, Huang H, Ma C, Wei J, Li Y, Fu Y. Improved dwarf mongoose optimization algorithm using novel nonlinear control and exploration strategies. Expert Systems with Applications. 2023;233:120904.
  14. Fu S, Ma C, Li K, Xie C, Fan Q, Huang H, et al. Modified LSHADE-SPACMA with new mutation strategy and external archive mechanism for numerical optimization and point cloud registration. Artificial Intelligence Review. 2025;58(3):72.
  15. Zhang L, Zhang Y, Li Y. Mobile robot path planning based on improved localized particle swarm optimization. IEEE Sensors Journal. 2020;21(5):6962–72.
  16. Wei F, Li J, Zhang Y. Improved neighborhood search whale optimization algorithm and its engineering application. Soft Computing. 2023;27(23):17687–709.
  17. Sung WT, Chung HY, Chang KY. Agricultural monitoring system based on ant colony algorithm with centre data aggregation. IET Communications. 2014;8(7):1132–40.
  18. Li K, Huang H, Fu S, Ma C, Fan Q, Zhu Y. A multi-strategy enhanced northern goshawk optimization algorithm for global optimization and engineering design problems. Comput Methods Appl Mech Eng. 2023;415:116199.
  19. Xiao Y, Cui H, Hussien AG, Hashim FA. MSAO: A multi-strategy boosted snow ablation optimizer for global optimization and real-world engineering applications. Advanced Engineering Informatics. 2024;61:102464.
  20. Sudhakar S, Vijayakumar V, Kumar CS, Priya V, Ravi L, Subramaniyaswamy V. Unmanned Aerial Vehicle (UAV) based Forest Fire Detection and monitoring for reducing false alarms in forest-fires. Computer Communications. 2020;149:1–16.
  21. Horyna J, Baca T, Walter V, Albani D, Hert D, Ferrante E, et al. Decentralized swarms of unmanned aerial vehicles for search and rescue operations without explicit communication. Autonomous Robots. 2023;47(1):77–93.
  22. Fu S, Li K, Huang H, Ma C, Fan Q, Zhu Y. Red-billed blue magpie optimizer: a novel metaheuristic algorithm for 2D/3D UAV path planning and engineering design problems. Artificial Intelligence Review. 2024;57(6):134.
  23. Fu Y, Liu D, Chen J, He L. Secretary bird optimization algorithm: a new metaheuristic for solving global optimization problems. Artificial Intelligence Review. 2024;57(5):123.
  24. Mohammad S, Jusof MFM, Rizal NAM, Razak AAA, Nasir ANK, Ismail RMTR, et al., editors. Elimination-dispersal sine cosine algorithm for a dynamic modelling of a twin rotor system. InECCE2019: Proceedings of the 5th International Conference on Electrical, Control & Computer Engineering, Kuantan, Pahang, Malaysia, 29th July 2019; 2020: Springer.
  25. Amponis G, Lagkas T, Tsiknas K, Radoglou-Grammatikis P, Sarigiannidis P. Introducing a New TCP Variant for UAV networks following comparative simulations. Simulation Modelling Practice and Theory. 2023;123:102708.
  26. Wolpert DH, Macready WG. No free lunch theorems for optimization. IEEE Transactions on Evolutionary Computation. 1997;1(1):67–82.
  27. Neri F. Diversity management in memetic algorithms. Handbook of Memetic Algorithms: Springer; 2012. p. 153–65.
  28. Jia H, Rao H, Wen C, Mirjalili S. Crayfish optimization algorithm. Artificial Intelligence Review. 2023;56(Suppl 2):1919–79.
  29. Zhu F, Li G, Tang H, Li Y, Lv X, Wang X. Dung beetle optimization algorithm based on quantum computing and multi-strategy fusion for solving engineering problems. Expert Systems with Applications. 2024;236:121219.
  30. Rather SA, Bala PS. Constriction coefficient based particle swarm optimization and gravitational search algorithm for multilevel image thresholding. Expert Systems. 2021;38(7):e12717.
  31. Sansawas S, Roongpipat T, Ruangtanusak S, Chaikhet J, Worasucheep C, Wattanapornprom W, editors. Gaussian quantum-behaved particle swarm with learning automata-adaptive attractor and local search. 2022 19th International Conference on Electrical Engineering/Electronics, Computer, Telecommunications and Information Technology (ECTI-CON); 2022: IEEE.
  32. Zhang L, Hu T, Yang Z, Yang D, Zhang J. Elite and dynamic opposite learning enhanced sine cosine algorithm for application to plat-fin heat exchangers design problem. Neural Computing and Applications. 2023;35(17):12401–14.
  33. Mirjalili S. SCA: a sine cosine algorithm for solving optimization problems. Knowledge-Based Systems. 2016;96:120–33.
  34. Abdel-Basset M, Mohamed R, Abouhawwash M. Crested Porcupine Optimizer: A new nature-inspired metaheuristic. Knowledge-Based Systems. 2024;284:111257.
  35. Abdel-Basset M, Mohamed R, Jameel M, Abouhawwash M. Spider wasp optimizer: A novel meta-heuristic optimization algorithm. Artificial Intelligence Review. 2023;56(10):11675–738.
  36. Singh G, Banga VK. Kinematics and trajectory planning analysis based on hybrid optimization algorithms for an industrial robotic manipulators. Soft Computing. 2022;26(21):11339–72.
  37. Yu L, Wang K, Zhang Q, Zhang J. Trajectory planning of a redundant planar manipulator based on joint classification and particle swarm optimization algorithm. Multibody System Dynamics. 2020;50:25–43.
  38. Jia H, Zhou X, Zhang J, Abualigah L, Yildiz AR, Hussien AG. Modified crayfish optimization algorithm for solving multiple engineering application problems. Artificial Intelligence Review. 2024;57(5):127.
  39. Xiao B, Wang R, Deng Y, Yang Y, Lu D, editors. Simplified Crayfish Optimization Algorithm. 2024 IEEE 7th Advanced Information Technology, Electronic and Automation Control Conference (IAEAC); 2024: IEEE.
  40. Bacanin N, Petrovic A, Jovanovic L, Zivkovic M, Zivkovic T, Sarac M, editors. Parkinson’s Disease Induced Gain Freezing Detection using Gated Recurrent Units Optimized by Modified Crayfish Optimization Algorithm. 2024 5th International Conference on Mobile Computing and Sustainable Informatics (ICMCSI); 2024: IEEE.
  41. Elkasem AH, Kamel S, Khamies M, Nasrat L. Frequency regulation in a hybrid renewable power grid: an effective strategy utilizing load frequency control and redox flow batteries. Scientific Reports. 2024;14(1):9576. pmid:38670981
  42. Long W, Wu T, Jiao J, Tang M, Xu M. Refraction-learning-based whale optimization algorithm for high-dimensional problems and parameter estimation of PV model. Engineering Applications of Artificial Intelligence. 2020;89:103457.
  43. Abed-alguni BH, Alawad NA, Barhoush M, Hammad R. Exploratory cuckoo search for solving single-objective optimization problems. Soft Computing. 2021;25(15):10167–80.
  44. Alawad NA, Abed-alguni BH. Discrete Jaya with refraction learning and three mutation methods for the permutation flow shop scheduling problem. The Journal of Supercomputing. 2022;78(3):3517–38.
  45. Wang L, Wu Q, Lin F, Li S, Chen D. A new trajectory-planning beetle swarm optimization algorithm for trajectory planning of robot manipulators. IEEE Access. 2019;7:154331–45.
  46. Kim J-J, Lee J-J. Trajectory optimization with particle swarm optimization for manipulator motion planning. IEEE Transactions on Industrial Informatics. 2015;11(3):620–31.
  47. Singh G, Banga VK. Combinations of novel hybrid optimization algorithms‐based trajectory planning analysis for an industrial robotic manipulators. Journal of Field Robotics. 2022;39(5):650–74.
  48. Xin P, Rong J, Yang Y, Xiang D, Xiang Y. Trajectory planning with residual vibration suppression for space manipulator based on particle swarm optimization algorithm. Advances in Mechanical Engineering. 2017;9(4):1687814017692694.
  49. Cao X, Yan H, Huang Z, Ai S, Xu Y, Fu R, et al. A multi-objective particle swarm optimization for trajectory planning of fruit picking manipulator. Agronomy. 2021;11(11):2286.
  50. Zhang L, Wang Y, Zhao X, Zhao P, He L. Time-optimal trajectory planning of serial manipulator based on adaptive cuckoo search algorithm. Journal of Mechanical Science and Technology. 2021;35(7):3171–81.
  51. Guo H, Qiu Z, Gao G, Wu T, Chen H, Wang X. Safflower Picking Trajectory Planning Strategy Based on an Ant Colony Genetic Fusion Algorithm. Agriculture. 2024;14(4):622.
  52. Hu G, Du B, Wang X, Wei G. An enhanced black widow optimization algorithm for feature selection. Knowledge-Based Systems. 2022;235:107638.
  53. Gokhale S, Kale V. An application of a tent map initiated Chaotic Firefly algorithm for optimal overcurrent relay coordination. International Journal of Electrical Power & Energy Systems. 2016;78:336–42.
  54. Yuan P, Zhang T, Yao L, Lu Y, Zhuang W. A hybrid golden jackal optimization and golden sine algorithm with dynamic lens-imaging learning for global optimization problems. Applied Sciences. 2022;12(19):9709.
  55. Ma G, Yue X, Zhu J, Liu Z, Lu S. Deep learning network based on improved sparrow search algorithm optimization for rolling bearing fault diagnosis. Mathematics. 2023;11(22):4634.
  56. Zhang H, Heidari AA, Wang M, Zhang L, Chen H, Li C. Orthogonal Nelder-Mead moth flame method for parameters identification of photovoltaic modules. Energy Conversion and Management. 2020;211:112764.
  57. Park S-Y, Lee J-J. Stochastic opposition-based learning using a beta distribution in differential evolution. IEEE Transactions on Cybernetics. 2015;46(10):2184–94. pmid:26390506