
An improved Harris hawks optimization algorithm based on chaotic sequence and opposite elite learning mechanism

  • Ting Yang ,

    Contributed equally to this work with: Ting Yang, Chaochuan Jia

    Roles Writing – original draft

    Affiliation College of Electronic and Optoelectronic Engineering, West Anhui University, Lu’an, China

  • Jie Fang ,

    Roles Methodology

    ‡ JF, ZL and YL also contributed equally to this work.

    Affiliation College of Electronic and Optoelectronic Engineering, West Anhui University, Lu’an, China

  • Chaochuan Jia ,

    Contributed equally to this work with: Ting Yang, Chaochuan Jia

    Roles Writing – original draft

    ccjia@hfcas.ac.cn

    Affiliations College of Electronics and Information Engineering, West Anhui University, Lu’an, China, Intelligent networked vehicle laboratory, West Anhui University, Lu’an, China

  • Zhengyu Liu ,

    Roles Investigation

    ‡ JF, ZL and YL also contributed equally to this work.

    Affiliations College of Electronics and Information Engineering, West Anhui University, Lu’an, China, Intelligent networked vehicle laboratory, West Anhui University, Lu’an, China

  • Yu Liu

    Roles Validation

    ‡ JF, ZL and YL also contributed equally to this work.

    Affiliations College of Electronics and Information Engineering, West Anhui University, Lu’an, China, Intelligent networked vehicle laboratory, West Anhui University, Lu’an, China

Abstract

The Harris hawks optimization (HHO) algorithm is a new swarm-based natural heuristic algorithm that has previously shown excellent performance. However, HHO still has some shortcomings, namely premature convergence and entrapment in local optima caused by an imbalance between its exploration and exploitation capabilities. To overcome these shortcomings, a new HHO variant algorithm based on a chaotic sequence and an opposite elite learning mechanism (HHO-CS-OELM) is proposed in this paper. The chaotic sequence improves the global search ability of the HHO algorithm by enhancing the diversity of the population, and the opposite elite learning enhances the local search ability of the HHO algorithm by maintaining the optimal individual. Meanwhile, the proposed method overcomes the inability of the HHO algorithm to explore in the late iterations and balances the exploration and exploitation capabilities of the HHO algorithm. The performance of the HHO-CS-OELM algorithm is verified by comparison with 14 optimization algorithms on 23 benchmark functions and an engineering problem. Experimental results show that the HHO-CS-OELM algorithm performs better than state-of-the-art swarm intelligence optimization algorithms.

1. Introduction

The optimization issues in real-world problems have received increasing attention from researchers in the fields of artificial intelligence [1], computer vision [2], compressed sensing [3, 4], decision-making [5] and engineering for practical applications [6]. Traditional algorithms are based on derivative methods and, owing to their mathematical complexity, can only deal with small-scale problems that must be continuous and differentiable [7]. Therefore, it is difficult to achieve global optimization for multimodal functions and dynamically changing, strongly nonlinear problems using traditional algorithms. To solve complex and large-scale problems, many swarm intelligence (SI) optimization algorithms that imitate swarm behaviour in natural phenomena have been proposed, including Cuckoo Search (CS) [8], Grey Wolf Optimizer (GWO) [9], Particle Swarm Optimization (PSO) [10], Artificial Bee Colony (ABC) [11], Shuffled Frog Leaping Algorithm (SFLA) [12], Whale Optimization Algorithm (WOA) [13], Gravitational Search Algorithm (GSA) [14], Jaya [15] and Harris Hawks Optimization (HHO) [16]. All SI algorithms have two search phases: global exploration, which searches the whole space for a promising area, and local exploitation, which searches a chosen area that is likely to contain the best solution. However, no single SI algorithm can deal with all optimization problems, so recently proposed algorithms, and those yet to be discovered, retain a wide range of application prospects.

HHO is a new swarm intelligence optimization algorithm proposed by Heidari et al. [16] in 2019 that mimics the way Harris hawks find and chase prey in nature, including global exploration, local besiege and pounce behaviour. HHO has been widely applied to function optimization and engineering applications because it is gradient-free and delivers high performance. Heidari et al. used HHO to optimize 29 benchmark functions and 6 engineering applications, and the results show that HHO is more competitive and has better application prospects than other SI algorithms [16]. Houssein et al. [17] combined HHO with the k-nearest neighbours and support vector machine methods for predicting chemical compound activities and descriptor selection, respectively. To denoise satellite images, Golilarz et al. [18] determined optimal wavelet coefficients by using HHO. HHO was applied to optimize the water distribution network of Homashahr city in Iran in [19]. Abbasi et al. [20] applied HHO to microchannel heat sinks to minimize entropy generation. Jiao et al. [21] and Liu et al. [22] used HHO to find the optimal parameters of photovoltaic models. However, similar to other SI algorithms, HHO still has some limitations: the diversity of solutions generated by the random policy in the initialization phase is limited. Moreover, because global exploration is performed only in the first half of the iterations, it is difficult to balance the global exploration and local exploitation capacities using only the escaping energy of the prey, so the algorithm may converge slowly, yield low solution accuracy and fall prematurely into a local optimum.

To conquer the limitations of HHO, many HHO variant algorithms have been proposed. For example, Selim et al. [23] used the best solution, instead of the boundary of the search space, to handle the boundary condition in HHO. Hu et al. [24] proposed an improved HHO algorithm that embedded velocity into the exploration phase and updated the solutions with the crossover operator of the artificial tree algorithm in the exploitation phase. A boosted HHO (BHHO) technique was proposed by Houssein et al. [25]; BHHO replaced the hard and soft besieges with progressive rapid dives by the mutation operator of the DE algorithm and the flower pollination process of flowering plants, respectively. Afterward, Houssein et al. [26] proposed a hybrid HHO algorithm (HHHO) that used a chaos map to update the escaping energy function to balance the exploration and exploitation phases; in addition, a cuckoo search algorithm was used to update the optimal and random solutions to improve the global search capability. Elaziz et al. [27] proposed an improved HHO algorithm that employed the salp swarm algorithm to balance the exploitation and exploration capabilities. Qu et al. [28] put forward an HHO variant based on information exchange between Harris hawks, in which chaos disturbance was used to update the escaping energy function to balance the local and global search capabilities. Shi et al. [29] applied the grey wolf optimizer and the salp swarm algorithm to improve the search capability of HHO. Wunnava et al. [30] introduced a mutation interval to update the escaping energy function and used the average fitness to determine the updating strategies of the Harris hawks in the exploration phase. Chen et al. [31] proposed a new HHO framework that combined a chaos map, multi-population and a differential evolution mechanism to improve the performance of HHO. Menesy et al. [32] proposed a chaotic Harris hawks optimization (CHHO) algorithm in which chaotic sequences generated by ten chaotic equations replace the random parameter q in the exploration phase. Singh [33] also used chaotic sequences instead of random parameters, namely r1 and r2 in the exploration phase and the vector S in the exploitation phase (CSHHO). Chen et al. [34] proposed a diversification-enriched Harris hawks optimization (DEHHO) that embedded a chaotic sequence to search the neighbourhood of the current optimal solution and introduced the OBL mechanism to enlarge the search area in the whole space.

Although the improvement strategies mentioned above have enhanced the capability of the standard HHO algorithm to a certain extent, it can still be improved by other strategies. Inspired by [32–34], an improved HHO variant algorithm is proposed in this paper. The contributions of this paper are as follows: (i) a chaotic sequence recombination mechanism (CSRM) strategy is proposed, which enhances the distribution of the initialized solutions in the search space and accelerates the convergence rate of HHO; (ii) a generalized opposition-based learning recombination mechanism (OBLRM) is proposed, which gives the algorithm the opportunity to carry out a global search in the later iterations, jump out of local optima and improve the accuracy of the solution. The rest of the paper is organized as follows. Section 2 gives a detailed overview of the HHO algorithm. The proposed method is introduced in detail in Section 3. In Section 4, the experimental results are analysed. Finally, the conclusions are presented in Section 5.

2. Background studies

HHO is a new swarm intelligence optimization algorithm proposed by Heidari et al. [16] in 2019 that mimics the way Harris hawks find and chase prey in nature; it has a strong global search capability and few parameters to tune. The whole foraging process mainly consists of three phases: the exploration phase, the transition from exploration to exploitation, and the exploitation phase. All phases of HHO are shown in (Fig 1), and each phase is presented in detail as follows.

2.1 Exploration phase

During the exploration phase, Harris hawks primarily search for prey, which may be a rabbit. When Harris hawks detect and track a rabbit with their keen eyes, two strategies are used to update their locations, which can be formulated as:

$$X_i(t+1) = \begin{cases} X_{rand}(t) - r_1 \left| X_{rand}(t) - 2 r_2 X_i(t) \right|, & q \geq 0.5 \\ \left( X_{rabbit}(t) - X_m(t) \right) - r_3 \left( LB + r_4 (UB - LB) \right), & q < 0.5 \end{cases} \tag{1}$$

where $X_i(t)$ and $X_{rabbit}(t)$ are the positions of the ith hawk and the rabbit, respectively, at the current iteration t, $X_{rand}(t)$ is the position of a randomly selected hawk, $X_i(t+1)$ is the position of the ith hawk at the next, (t+1)th, iteration, $r_1$, $r_2$, $r_3$ and $r_4$ are four random numbers between [0, 1], q is a random number between [0, 1] that is applied to switch between the two strategies, and $X_m(t)$ is the mean position of the current population:

$$X_m(t) = \frac{1}{N_p} \sum_{i=1}^{N_p} X_i(t) \tag{2}$$

where $N_p$ is the size of the population, and [LB, UB] denotes the search space.
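To make this update concrete, the following minimal Python sketch implements Eqs (1) and (2); the function name, the numpy random Generator rng, and scalar LB/UB bounds are our assumptions, not the authors' implementation.

    import numpy as np

    def exploration_step(X, i, X_rabbit, LB, UB, rng):
        """One exploration update of hawk i (Eq 1); X has shape (Np, D)."""
        r1, r2, r3, r4, q = rng.random(5)      # random numbers in [0, 1]
        X_rand = X[rng.integers(len(X))]       # position of a randomly selected hawk
        X_m = X.mean(axis=0)                   # mean position of the population (Eq 2)
        if q >= 0.5:                           # perch relative to a random hawk
            return X_rand - r1 * np.abs(X_rand - 2.0 * r2 * X[i])
        return (X_rabbit - X_m) - r3 * (LB + r4 * (UB - LB))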

2.2 Transition from exploration to exploitation

The transition from the global search (exploration) to the local search (exploitation) of Harris hawks mainly depends on the escaping energy E of the prey (rabbit), where E can be calculated as follows:

$$E = 2 E_0 \left( 1 - \frac{t}{T} \right) \tag{3}$$

where $E_0$ is a random number in (-1, 1) redrawn at each iteration and T denotes the maximum number of iterations. Thus, the escaping energy E is within the interval (-2, 2). When |E|≥1, the rabbit is capable of escaping, so the Harris hawks perform a global search (exploration). When |E|<1, the rabbit is weak, and the Harris hawks perform a local search (exploitation).
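A short sketch of Eq (3) follows, in the same assumed style as the previous snippet.

    def escaping_energy(t, T, rng):
        """Escaping energy of the prey (Eq 3); |E| >= 1 triggers exploration,
        |E| < 1 triggers exploitation."""
        E0 = 2.0 * rng.random() - 1.0    # random in (-1, 1), redrawn each iteration
        return 2.0 * E0 * (1.0 - t / T)  # decays over iterations, so E lies in (-2, 2)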

2.3 Exploitation phase

When a rabbit is spotted, Harris hawks besiege it and wait for the chance to pounce. However, the rabbit may escape the encirclement during the besiege, so the Harris hawks must constantly adjust their flight strategies according to the behaviour of the rabbit. Four strategies, switched by the escaping energy E and a random number r in [0, 1], are used in the exploitation phase to mimic the hunting behaviour of Harris hawks. Each strategy is introduced in detail as follows.

2.3.1. Soft besiege.

When |E|≥0.5 and r≥0.5, the rabbit has enough energy to try to escape the siege by jumping at will, but is ultimately unable to escape, so the Harris hawks can capture the rabbit by surrounding it and performing a surprise pounce. This strategy can be formulated as follows:

$$X(t+1) = \Delta X(t) - E \left| J X_{rabbit}(t) - X(t) \right| \tag{4}$$

$$\Delta X(t) = X_{rabbit}(t) - X(t) \tag{5}$$

$$J = 2 \left( 1 - r_5 \right) \tag{6}$$

where ΔX(t) represents the difference between the optimal individual and the current individual, J is the random jump strength of the rabbit and $r_5$ is a random number between (0, 1).

2.3.2. Hard besiege.

When |E|<0.5 and r≥0.5, the rabbit is exhausted and has neither the energy nor the opportunity to escape, so the Harris hawks can capture the rabbit by surrounding it and performing a surprise pounce. This strategy can be formulated as follows:

$$X(t+1) = X_{rabbit}(t) - E \left| \Delta X(t) \right| \tag{7}$$

2.3.3. Soft besiege with progressive rapid dives.

When |E|≥0.5 and r<0.5, the rabbit has enough energy to successfully escape from the encirclement, so the Harris hawks need a more intelligent encirclement to surround the rabbit before performing a surprise pounce. The hawks surround the rabbit by performing the following two strategies; when the first strategy fails, the second strategy is performed:

$$Y = X_{rabbit}(t) - E \left| J X_{rabbit}(t) - X(t) \right| \tag{8}$$

$$Z = Y + S \times LF(D) \tag{9}$$

where S is a random vector with 1×D dimensions, D denotes the dimension of the search space, and LF(·) is the Levy flight function:

$$LF(x) = 0.01 \times \frac{\mu \times \sigma}{|\nu|^{1/\beta}}, \quad \sigma = \left( \frac{\Gamma(1+\beta) \times \sin(\pi \beta / 2)}{\Gamma\left(\frac{1+\beta}{2}\right) \times \beta \times 2^{(\beta-1)/2}} \right)^{1/\beta} \tag{10}$$

where β is a constant set to 1.5, and μ and ν are random numbers between (0, 1). Thus, the updating strategy for this phase can ultimately be modelled as follows:

$$X(t+1) = \begin{cases} Y, & \text{if } F(Y) < F(X(t)) \\ Z, & \text{if } F(Z) < F(X(t)) \end{cases} \tag{11}$$

where F(·) denotes the fitness function.
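The following minimal sketch illustrates Eqs (8)-(11), drawing μ and ν from (0, 1) as the text states; the helper names and the fitness callback f are our assumptions.

    import numpy as np
    from math import gamma, pi, sin

    def levy(D, rng, beta=1.5):
        """Levy flight step (Eq 10) with mu, nu drawn from (0, 1)."""
        sigma = (gamma(1 + beta) * sin(pi * beta / 2)
                 / (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
        mu, nu = rng.random(D), rng.random(D)
        return 0.01 * (mu * sigma) / np.abs(nu) ** (1 / beta)

    def soft_besiege_dives(X_i, X_rabbit, E, f, rng):
        """Soft besiege with progressive rapid dives (Eqs 8, 9 and 11)."""
        D = X_i.size
        J = 2.0 * (1.0 - rng.random())                 # random jump strength (Eq 6)
        Y = X_rabbit - E * np.abs(J * X_rabbit - X_i)  # first strategy (Eq 8)
        if f(Y) < f(X_i):
            return Y
        Z = Y + rng.random(D) * levy(D, rng)           # second strategy (Eq 9)
        if f(Z) < f(X_i):
            return Z
        return X_i                                     # otherwise keep the current position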

2.3.4. Hard besiege with progressive rapid dives.

When |E|<0.5 and r<0.5, the rabbit may still attempt a successful escape even though its escaping energy is insufficient, so the Harris hawks form a hard encirclement to surround the rabbit before performing a surprise pounce. They still perform two strategies to update their positions in this phase:

$$Y = X_{rabbit}(t) - E \left| J X_{rabbit}(t) - X_m(t) \right| \tag{12}$$

$$Z = Y + S \times LF(D) \tag{13}$$

Thus, the updating strategy for this phase can ultimately be formulated as follows:

$$X(t+1) = \begin{cases} Y, & \text{if } F(Y) < F(X(t)) \\ Z, & \text{if } F(Z) < F(X(t)) \end{cases} \tag{14}$$

In summary, (Fig 2) shows the optimization process of the basic HHO algorithm.
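Pulling the phases together, a skeleton of the basic HHO loop mirroring Fig 2 might look as follows. It reuses the helper sketches above and assumes scalar LB/UB bounds and a fitness function f to minimize; this is a sketch of the standard algorithm, not the authors' code.

    def hho(f, Np, D, LB, UB, T, rng):
        """Basic HHO loop (cf. Fig 2), built from the helper sketches above."""
        X = LB + rng.random((Np, D)) * (UB - LB)   # random initial population
        best = min(X, key=f).copy()                # current rabbit (best) position
        for t in range(T):
            Xm = X.mean(axis=0)                    # mean position (Eq 2)
            for i in range(Np):
                E = escaping_energy(t, T, rng)     # Eq 3
                r = rng.random()
                J = 2.0 * (1.0 - rng.random())     # Eq 6
                if abs(E) >= 1:                    # exploration (Eq 1)
                    X[i] = exploration_step(X, i, best, LB, UB, rng)
                elif r >= 0.5 and abs(E) >= 0.5:   # soft besiege (Eqs 4-6)
                    X[i] = (best - X[i]) - E * np.abs(J * best - X[i])
                elif r >= 0.5:                     # hard besiege (Eq 7)
                    X[i] = best - E * np.abs(best - X[i])
                elif abs(E) >= 0.5:                # soft besiege with dives (Eqs 8-11)
                    X[i] = soft_besiege_dives(X[i], best, E, f, rng)
                else:                              # hard besiege with dives (Eqs 12-14)
                    Y = best - E * np.abs(J * best - Xm)
                    Z = Y + rng.random(D) * levy(D, rng)
                    if f(Y) < f(X[i]):
                        X[i] = Y
                    elif f(Z) < f(X[i]):
                        X[i] = Z
                X[i] = np.clip(X[i], LB, UB)       # keep hawks inside the search space
                if f(X[i]) < f(best):
                    best = X[i].copy()
        return best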

3. Proposed scheme

To improve the diversity of the initial population and enhance the ability of the HHO algorithm to jump out of local optima, two recombination mechanisms, the chaotic sequence recombination mechanism (CSRM) and the generalized opposition-based learning recombination mechanism (OBLRM), are introduced; the specific implementation of the proposed algorithm is described in detail in this section. The improved HHO algorithm does not change the structure of HHO. (Fig 3) shows the optimization process of the proposed algorithm.

3.1 Chaotic sequence recombination mechanism

Since a chaotic system varies pseudo-randomly yet ergodically, every state will eventually be visited if the running time is unlimited. This means that chaotic maps can be applied to build the search basis of optimization methods, or introduced into existing optimization algorithms to improve their exploration competence [35, 36]. Owing to the sensitivity to initial conditions, randomness and ergodicity of a chaotic sequence, it is often used in optimization algorithms to reduce the chance of premature convergence [37, 38]. A chaotic sequence can therefore significantly enhance the capability of the HHO algorithm by replacing the random values, as confirmed in the literature [26, 32–34]. Accordingly, the chaotic sequence generated by logistic mapping is applied to generate the initial solutions in HHO. The logistic mapping can be modelled as follows:

$$u_i^{k+1} = c \, u_i^k \left( 1 - u_i^k \right) \tag{15}$$

where $u_i^k$ represents the chaotic variable of the ith solution at the kth iteration, and c is the control parameter, which is set to 4. At the initial phase of HHO, the chaotic search around the initial candidate solutions can enhance the diversity of the population and thus improve the exploration capability of the algorithm. The initial population P is generated according to the following equation:

$$x_i = LB + \text{rand()} \times (UB - LB) \tag{16}$$

where $x_i$ denotes the ith candidate solution and rand() is a function that generates a random number between [0, 1]. Then, an updated population $P_c$ is obtained by combining P with the chaotic sequence $u_i$.

(17)

The recombination mechanism recombines $P_c$ and P; a new population is then generated by selecting the solutions with the best NP fitness values. These steps are executed k times, and finally a new initial population is obtained. In contrast to the random distribution used by the standard HHO, the CSRM strategy enhances the distribution of the initialized solutions in the search space, thus speeding up the convergence of the HHO algorithm.
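A minimal sketch of the CSRM initialization follows. Note that the exact combination rule of Eq (17) is not reproduced above; the sketch assumes the chaotic values are mapped into [LB, UB] in the same way as Eq (16), which is our assumption, as are the function and parameter names.

    def csrm_init(Np, D, LB, UB, f, k, rng, c=4.0):
        """CSRM sketch: chaotic candidates are recombined with the random
        population and the best Np solutions are kept, repeated k times."""
        P = LB + rng.random((Np, D)) * (UB - LB)   # random initial population (Eq 16)
        u = rng.random((Np, D))                    # chaotic seeds in (0, 1)
        for _ in range(k):                         # steps executed k times
            u = c * u * (1.0 - u)                  # logistic map (Eq 15)
            Pc = LB + u * (UB - LB)                # assumed form of the combination (Eq 17)
            pool = np.vstack([P, Pc])              # recombine Pc and P
            order = np.argsort([f(x) for x in pool])
            P = pool[order[:Np]]                   # keep the best Np fitness values
        return P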

3.2 Opposite elite learning recombination mechanism

The opposition-based learning (OBL) technique, which was first proposed in 2005 by Tizhoosh, is a machine intelligence strategy that aims to improve the capabilities of SI algorithms. Its core idea is to select the better solution between the current individual and the corresponding opposite solution, according to their fitness values. It has been verified that the OBL strategy has a greater chance of approaching the global optimal solution of the objective function [39]. Therefore, the OBL strategy has been widely applied by researchers to enhance the capabilities of SI algorithms, such as the WOA [40], GOA [41], PSO [42], SSA [43] and CS [44] algorithms.

Suppose that $x_i$ is the current individual; then the corresponding generalized opposite solution $\bar{x}_i$ can be calculated as follows:

$$\bar{x}_i = k \left( LB + UB \right) - x_i \tag{18}$$

where k is a random number between [0, 1].

Then, a population P composed of $x_i$ (i = 1, 2, …, NP) is the parent generation, while a population $P_o$ composed of $\bar{x}_i$ (i = 1, 2, …, NP) is the offspring. Finally, the recombination mechanism recombines $P_o$ and P, and a new population is obtained by selecting the solutions with the best NP fitness values.
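A minimal sketch of this recombination, assuming the generalized OBL form of Eq (18) with opposites clipped to the search bounds (the clipping and names are our assumptions):

    def oblrm(P, LB, UB, f, rng):
        """OBLRM sketch: the generalized opposite population (Eq 18) is
        recombined with P and the best Np solutions are kept."""
        Np = len(P)
        k = rng.random()                         # random k in [0, 1]
        Po = np.clip(k * (LB + UB) - P, LB, UB)  # generalized opposite solutions
        pool = np.vstack([P, Po])                # recombine Po and P
        order = np.argsort([f(x) for x in pool])
        return pool[order[:Np]]                  # keep the best Np fitness values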

It is well known that in the HHO algorithm, even if the selected region is not globally optimal, the global search is no longer carried out in the later iterations, so HHO tends to converge to a local optimum prematurely. However, when the above generalized opposition-based recombination mechanism is embedded in the HHO algorithm, the improved algorithm has the opportunity to carry out a global search in the later iterations, jump out of the local optimum and improve the accuracy of the solution.

4. Experimental results and analysis

To evaluate the performance of the HHO-CS-OELM algorithm, two experiments are implemented in this section. In the first experiment, the proposed HHO-CS-OELM algorithm was compared with 14 SI algorithms, namely PSO, SFLA, ABC, WOA, CS, GSA, GWO, Jaya, BHHO, HHHO, CHHO, CSHHO, DEHHO and HHO, which were applied to optimize the 23 benchmark functions [13]. Second, the HHO-CS-OELM algorithm was applied to optimize the thresholds and weights of a back propagation (BP) neural network for UWB indoor positioning. All experiments are carried out on a Windows 10 operating system with MATLAB R2019a on a PC with an Intel(R) Core i7-10750H CPU and 16 GB of RAM.

4.1 Benchmark functions experiment

The HHO-CS-OELM algorithm and the other 14 SI algorithms are applied to optimize the 23 benchmark functions, which are categorised as unimodal, multimodal and fixed-dimension multimodal. F1-F7 are unimodal functions, for which there is only one global optimal solution. These functions can be utilized to evaluate the convergence speed and exploitation capability of the proposed algorithm. On the other hand, F8-F13 and F14-F23 are multimodal and fixed-dimension multimodal functions, respectively, which have one global optimal solution and several local optimal solutions. These functions can be applied to evaluate the local-optimum avoidance and exploration capabilities of the proposed algorithm. The details of these benchmark functions are provided in the literature [13]. Typical two-dimensional diagrams of some of these functions are shown in (Fig 4), from which the prominent characteristics of these functions can be observed: Fig 4(a) and 4(b) show unimodal functions with only one minimum, whereas Fig 4(c) to 4(f) show multimodal functions with many local minima.

Fig 4. Typical 2D representations of benchmark mathematical functions: (a), (b) unimodal functions, (c), (d) multimodal functions, and (e), (f) fixed-dimension multimodal functions.

https://doi.org/10.1371/journal.pone.0281636.g004

For all experiments, we set the population size to 30 and the maximum number of iterations to 500; the parameters of each algorithm are taken from the literature. Each algorithm is executed independently 51 times for each function. The average and standard deviation results of these benchmark functions are recorded in Table 1, and the convergence curves of F1-F7, F8-F13 and F14-F23 are shown in Figs 5–7, respectively.

4.1.1. Evaluation of exploitation capability (F1-F7).

The unimodal functions can be utilized to evaluate the exploitation capabilities of the SI algorithms because they have only one global optimal solution. The results in Table 1 show that the HHO-CS-OELM algorithm is highly competitive compared with the other HHO variants and SI algorithms. For all unimodal functions except F6, the HHO-CS-OELM algorithm achieves the best average values and standard deviations, which indicates that its accuracy and stability are the best among all algorithms. The results in Fig 5 also show that the HHO-CS-OELM algorithm converges the fastest. In summary, the exploitation capability of the HHO-CS-OELM algorithm is more competitive than that of the other SI optimization algorithms.

4.1.2. Evaluation of exploration capability (F8-F23).

The multimodal functions F8-F23 contain a large number of local optimal values, which increase exponentially with increasing dimension. Therefore, these functions are suitable for evaluating the exploration capability and the ability to avoid local optima. It can be seen from Table 1 and Figs 5–7 that the HHO-CS-OELM algorithm outperforms the other algorithms on most of the multimodal functions F8-F13. For F8, the HHO-CS-OELM algorithm is inferior only to the ABC algorithm, but superior to all other algorithms. For F9-F11, although most algorithms can obtain the optimal solutions, the convergence speed of the HHO-CS-OELM algorithm is the fastest. For F12, the HHO-CS-OELM algorithm is inferior only to the SFLA algorithm, but superior to all other algorithms. For F13, the HHO-CS-OELM algorithm is inferior only to the SFLA, DEHHO and CHHO algorithms, but superior to all other algorithms. However, compared with the standard HHO algorithm alone, the HHO-CS-OELM algorithm wins in all cases, which indicates that HHO-CS-OELM is strictly superior to the HHO algorithm.

For F14, the HHO-CS-OELM algorithm is superior to all other algorithms, and its convergence speed is also the fastest. For F15, the HHO-CS-OELM algorithm is inferior only to the BHHO algorithm. For F16, the HHO-CS-OELM algorithm is inferior only to the SFLA, ABC and DEHHO algorithms, but superior to all other algorithms. For F17, however, the HHO-CS-OELM algorithm is superior only to the HHO and HHHO algorithms, and inferior to all others. For F18, the HHO-CS-OELM algorithm is inferior only to the SFLA algorithm, but superior to all other algorithms. For F19, the HHO-CS-OELM algorithm is superior only to the HHO and BHHO algorithms, and inferior to all others. For F20, the HHO-CS-OELM algorithm is inferior only to the WOA and CS algorithms, but superior to all other algorithms. For F21-F23, the HHO-CS-OELM algorithm is inferior only to the ABC algorithm and superior to all other algorithms. For F14-F23, the HHO-CS-OELM algorithm performs only moderately well relative to the other algorithms; however, it still consistently outperforms the standard HHO algorithm. In summary, these results show that HHO-CS-OELM can provide superior exploration capability.

In addition, the proposed algorithm obtains the optimal values on 11 of the 23 functions, while outperforming the standard HHO algorithm on all 23. Therefore, the above results reveal that the chaotic sequence and opposite elite learning mechanism can effectively balance the exploitation and exploration capabilities and improve the performance of the HHO algorithm.

4.2 Engineering application

In this section, an engineering problem, indoor positioning based on an improved BP neural network, is used to verify the performance of the proposed method. A BP neural network is a multilayer feedforward neural network trained with error back propagation, and it is one of the most widely used artificial neural network models at present. It is composed of an input layer, hidden layers and an output layer; the number of neurons in each layer can be set according to requirements, and the performance of the network varies with its structure. However, randomly generated initial thresholds and weights can easily steer the network into a local extremum. Therefore, to alleviate this shortcoming, many intelligent optimization algorithms have been used to optimize the initial thresholds and weights [45–48].

Thus, in this section, the HHO, BHHO, HHHO, CHHO, CSHHO, DEHHO and proposed HHO-CS-OELM algorithms are applied to optimize the initial thresholds and weights, and the improved BP neural network is then applied to improve the accuracy of an ultra-wideband (UWB) positioning system.

The UWB-based indoor positioning system is configured with four UWB base stations deployed at the four corners of a 5.6 m × 4.8 m room, as shown in (Fig 8). The red rectangles are the base stations, and the blue rectangle is the location tag. In the offline training stage of the BP neural network, 64 sample points are sampled at intervals of 40 cm, with (1.0 m, 1.0 m) as the starting point, to form the training dataset. As shown in (Fig 9), in the online test stage of the BP neural network, 49 sample points are sampled at intervals of 40 cm, with (1.2 m, 1.2 m) as the starting point, to form the test dataset, where the red circles indicate the measured positions and the blue asterisks denote the real positions.

According to the experimental configuration, the structure of the BP network consists of 2 input neurons, 7 hidden neurons and 2 output neurons. The activation function of the hidden layer is a sigmoid function, the transfer function of the output layer is a purelin function, the gradient descent method is used for training, the mean square error is used as the performance index, the maximum number of training epochs is 2000, the learning rate is 0.01, and the initial weights and thresholds are generated by random generation and by the swarm intelligence optimization algorithms, respectively. (Figs 10 to 13) show the experimental results.
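As a rough illustration of how a swarm-optimized solution vector maps onto this 2-7-2 network, consider the following sketch. The network has 2×7 + 7 + 7×2 + 2 = 37 weights and thresholds in total; the paper does not specify the encoding, so this parameter layout and the function name are our assumptions. Each algorithm would then minimize the training mean square error of the network decoded from v.

    def decode_solution(v):
        """Split a 37-dimensional solution vector into the weights and
        thresholds of the 2-7-2 BP network (layout is an assumption)."""
        W1 = v[:14].reshape(7, 2)     # input-to-hidden weights (7x2)
        b1 = v[14:21]                 # hidden-layer thresholds (7)
        W2 = v[21:35].reshape(2, 7)   # hidden-to-output weights (2x7)
        b2 = v[35:37]                 # output-layer thresholds (2)
        return W1, b1, W2, b2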

Fig 12. Measuring and predicting errors of the test dataset.

https://doi.org/10.1371/journal.pone.0281636.g012

(Fig 10) shows that the proposed HHO-CS-OELM algorithm has the fastest convergence speed. In (Fig 11), the black pillar indicates the average positioning error of the BP neural network with random weights and thresholds, which is 7.76 cm, the worst performance. On the other hand, the red pillar indicates that the average error of the improved BP neural network optimized by the HHO-CS-OELM algorithm is 3.81 cm, the best performance. These results reveal that optimizing the initial weights and thresholds improves the performance of the BP neural network.

In (Fig 12), the red line represents the positioning error of the improved BP neural network optimized by the HHO-CS-OELM algorithm, while the black line denotes the measuring error. In Fig 13, the green points indicate the positions predicted by the BP neural network optimized with the HHO-CS-OELM algorithm, while the red circles and the blue asterisks denote the measured positions and the real positions, respectively. Figs 12 and 13 show that the positioning errors of approximately 67.35% (33/49) of the test points are significantly improved by the modified BP neural network. These results reveal that HHO-CS-OELM can significantly improve the performance of the BP neural network and further verify that the two strategies proposed in this paper can effectively enhance the performance of the HHO algorithm.

5. Conclusions and future works

In this paper, we propose a new HHO variant algorithm based on chaotic mapping and an opposite elite learning mechanism. The chaotic sequence recombination mechanism is applied at the initial population stage to improve the diversity of the population and enhance the exploration ability of the HHO algorithm. The opposite elite learning recombination mechanism is used at the last stage of each iteration to effectively maintain the optimal individual and enhance the exploitation ability. This method also overcomes a shortcoming of the HHO algorithm in the late iterations: the inability to perform a global search. In addition, the exploitation and exploration capabilities of the HHO algorithm are balanced. The performance of the HHO-CS-OELM algorithm is verified by comparison with 14 optimization algorithms on 23 benchmark functions and an engineering problem that optimizes the weights and thresholds of a BP neural network for indoor positioning. Experimental analyses revealed that the proposed algorithm obtains the optimal values on 11 of the 23 functions and outperforms the standard HHO algorithm on all 23, so the HHO-CS-OELM algorithm offers competitive results compared with other state-of-the-art SI algorithms. The improved BP neural network optimized by the HHO-CS-OELM algorithm reduced the average indoor positioning error from 7.76 cm, obtained by the standard BP neural network, to 3.81 cm, which shows that the indoor positioning accuracy is significantly improved and further verifies the superiority of the HHO-CS-OELM algorithm.

To further improve the performance of the HHO-CS-OELM algorithm, its outcomes on the remaining benchmark functions still need to be enhanced, and its exploitation and exploration capabilities need to be investigated further. Moreover, the HHO-CS-OELM algorithm could be applied to other parameter-optimization problems, such as identifying optimal system parameters and optimal parameters of neural network models.

Acknowledgments

We truly appreciate the Intelligent Network Automobile Laboratory of West Anhui University in Lu’an for providing experimental equipment. We are also grateful to Professor Jie Fang for his guidance.

References

  1. Agrawal Divyansh, Minocha Sachin, Namasudra Suyel, et al., Ensemble algorithm using transfer learning for sheep breed classification. IEEE 15th International Symposium on Applied Computational Intelligence and Informatics, 2021, 199–204.
  2. Chakraborty Rupak, Verma Garima and Namasudra Suyel, IFODPSO-based multi-level image segmentation scheme aided with Masi entropy. Journal of Ambient Intelligence and Humanized Computing, 2021, 1(12):7793–7811.
  3. Devi Debashree, Namasudra Suyel and Kadry Seifedine, A boosting-aided adaptive cluster-based undersampling approach for treatment of class imbalance problem. International Journal of Data Warehousing and Mining, 2020, 16(30):60–86.
  4. Tadepalli Yasasvy, Kollati Meenakshi, Kuraparthi Swaraja, et al., Content-based image retrieval using Gaussian–Hermite moments and firefly and grey wolf optimization. CAAI Transactions on Intelligence Technology, 2021, 6(2):135–146.
  5. Sayyaadi H., Decision-making in optimization and assessment of energy systems. Modeling, Assessment, and Optimization of Energy Systems, 2021, 1(1):431–477.
  6. Chen Rui, Pu Dong, Tong Ying, et al., Image-denoising algorithm based on improved K-singular value decomposition and atom optimization. CAAI Transactions on Intelligence Technology, 2022, 7(1):117–127.
  7. Tang Zedong and Gong Maoguo, Adaptive multifactorial particle swarm optimisation. CAAI Transactions on Intelligence Technology, 2019, 4(1):37–46.
  8. Gandomi A. H., Yang X. S. and Alavi A. H., Cuckoo search algorithm: a metaheuristic approach to solve structural optimization problems. Engineering with Computers, 2013, 29(1):17–35.
  9. Mirjalili S., Mirjalili S. M. and Lewis A., Grey Wolf Optimizer. Advances in Engineering Software, 2014, 69:46–61.
  10. Aguila-Leon J., Chias-Palacios C., Vargas-Salgado C., et al., Particle Swarm Optimization, Genetic Algorithm and Grey Wolf Optimizer algorithms performance comparative for a DC-DC boost converter PID controller. Advances in Science, Technology and Engineering Systems Journal, 2021, 6(1):619–625.
  11. Karaboga D. and Basturk B., A powerful and efficient algorithm for numerical function optimization: artificial bee colony (ABC) algorithm. Journal of Global Optimization, 2007, 39(3):459–471.
  12. Eusuff M., Lansey K. and Pasha F., Shuffled frog-leaping algorithm: a memetic meta-heuristic for discrete optimization. Engineering Optimization, 2006, 38(2):129–154.
  13. Mirjalili Seyedali and Lewis Andrew, The Whale Optimization Algorithm. Advances in Engineering Software, 2016, 95:51–67.
  14. Rashedi E., Nezamabadi-pour H. and Saryazdi S., GSA: a gravitational search algorithm. Information Sciences, 2009, 179(13):2232–2248.
  15. Venkata Rao R., Jaya: a simple and new optimization algorithm for solving constrained and unconstrained optimization problems. International Journal of Industrial Engineering Computations, 2016, 7:19–34.
  16. Heidari Ali Asghar, Mirjalili Seyedali, Faris Hossam, et al., Harris hawks optimization: algorithm and applications. Future Generation Computer Systems, 2019, 97:849–872.
  17. Houssein Essam H., Hosney Mosa E., Oliva Diego, et al., A novel hybrid Harris hawks optimization and support vector machines for drug design and discovery. Computers & Chemical Engineering, 2020, 133:1–16.
  18. Golilarz Noorbakhsh Amiri, Mirmozaffari Mirpouya, Gashteroodkhani Tayyebeh Asgari, et al., Optimized wavelet-based satellite image de-noising with multi-population differential evolution-assisted Harris hawks optimization algorithm. IEEE Access, 2020, 8:133076–133085.
  19. Khalifeh S., Akbarifard S., Khalifeh V., et al., Optimization of water distribution of network systems using the Harris hawks optimization algorithm (case study: Homashahr city). MethodsX, 2020, 7:1–10.
  20. Abbasi Ahmad, Firouzi Behnam and Sendur Polat, On the application of Harris hawks optimization (HHO) algorithm to the design of microchannel heat sinks. Engineering with Computers, 2019, 37:1409–1428.
  21. Jiao Shan, Chong Guoshuang, Huang Changcheng, et al., Orthogonally adapted Harris hawks optimization for parameter estimation of photovoltaic models. Energy, 2020, 203:1–20.
  22. Liu Yun, Chong Guoshuang, Heidari Ali Asghar, et al., Horizontal and vertical crossover of Harris hawk optimizer with Nelder-Mead simplex for parameter estimation of photovoltaic models. Energy Conversion and Management, 2020, 223:1–20.
  23. Selim Ali, Kamel Salah, Alghamdi Ali S., et al., Optimal placement of DGs in distribution system using an improved Harris hawks optimizer based on single- and multi-objective approaches. IEEE Access, 2020, 8:52815–52829.
  24. Hu Hongping, Ao Yan, Bai Yanping, et al., An improved Harris's hawks optimization for SAR target recognition and stock market index prediction. IEEE Access, 2020, 8:65891–65910.
  25. Ridha Hussein Mohammed, Heidari Ali Asghar, Wang Mingjing, et al., Boosted mutation-based Harris hawks optimizer for parameters identification of single-diode solar cell models. Energy Conversion and Management, 2020, 209.
  26. Houssein E. H., Hosney M. E., Elhoseny M., et al., Hybrid Harris hawks optimization with cuckoo search for drug design and discovery in chemoinformatics. Scientific Reports, 2020, 10(1):14439. pmid:32879410
  27. Elaziz Mohamed Abd, Heidari Ali Asghar, Fujita Hamido, et al., A competitive chain-based Harris hawks optimizer for global optimization and multi-level image thresholding problems. Applied Soft Computing, 2020, 95:1–32.
  28. Qu Chiwen, He Wei, Peng Xiangni, et al., Harris hawks optimization with information exchange. Applied Mathematical Modelling, 2020, 84(1):52–75.
  29. Shi Beibei, Heidari Ali Asghar, Chen Cheng, et al., Predicting di-2-ethylhexyl phthalate toxicity: hybrid integrated Harris hawks optimization with support vector machines. IEEE Access, 2020, 8:161188–161202.
  30. Wunnava Aneesh, Naik Manoj Kumar, Panda Rutuparna, et al., An adaptive Harris hawks optimization technique for two dimensional grey gradient based multilevel image thresholding. Applied Soft Computing, 2020, 95:1–35.
  31. Chen Hao, Heidari Ali Asghar, Chen Huiling, et al., Multi-population differential evolution-assisted Harris hawks optimization: framework and case studies. Future Generation Computer Systems, 2020, 111:175–198.
  32. Menesy Ahmed S., Sultan Hamdy M., Selim Ali, et al., Developing and applying chaotic Harris hawks optimization technique for extracting parameters of several proton exchange membrane fuel cell stacks. IEEE Access, 2020, 8:1146–1159.
  33. Singh Tribhuvan, A chaotic sequence-guided Harris hawks optimizer for data clustering. Neural Computing and Applications, 2020, 32(23):17789–17803.
  34. Chen Huiling, Jiao Shan, Wang Mingjing, et al., Parameters identification of photovoltaic cells and modules using diversification-enriched Harris hawks optimization with chaotic drifts. Journal of Cleaner Production, 2020, 244:1–19.
  35. Xu Y., Chen H. and Heidari A. A., An efficient chaotic mutative moth-flame-inspired optimizer for global optimization tasks. Expert Systems with Applications, 2019, 129(9):135–155.
  36. Zhang Qian, Chen Huiling, Heidari Ali Asghar, et al., Chaos-induced and mutation-driven schemes boosting salp chains-inspired optimizers. IEEE Access, 2019, 7:31243–31261.
  37. Nasr S. M., Mousa A. A. and El-Shorbagy M. A., A chaos-based evolutionary algorithm for general nonlinear programming problems. Chaos, Solitons and Fractals, 2016, 85:8–21.
  38. Ewees Ahmed A., Mohamed Abd El Aziz and Aboul Ella Hassanien, Chaotic multi-verse optimizer-based feature selection. Neural Computing and Applications, 2017, 31(4):991–1006.
  39. Tizhoosh H. R., Opposition-based learning: a new scheme for machine intelligence. International Conference on Computational Intelligence for Modelling, Control & Automation, 2005, 695–701.
  40. Tubishat Mohammad, Abushariah Mohammad A. M., Idris Norisma, et al., Improved whale optimization algorithm for feature selection in Arabic sentiment analysis. Applied Intelligence, 2018, 49(5):1688–1707.
  41. Ewees Ahmed A., Mohamed Abd Elaziz and Essam H. Houssein, Improved grasshopper optimization algorithm using opposition-based learning. Expert Systems with Applications, 2018, 112(1):156–172.
  42. Shang J., Sun Y., Li S., et al., An improved opposition-based learning particle swarm optimization for the detection of SNP-SNP interactions. BioMed Research International, 2015, 2015:524821. pmid:26236727
  43. Tubishat M., Idris N., Shuib L., et al., Improved salp swarm algorithm based on opposition based learning and novel local search algorithm for feature selection. Expert Systems with Applications, 2019, 145(1):113–122.
  44. Li Xiangtao and Yin Minghao, A particle swarm inspired cuckoo search algorithm for real parameter optimization. Soft Computing, 2015, 20(4):1389–1413.
  45. Li Haixia, Li Geng, Huang Zhiyong, et al., Application of BP neural network based on genetic algorithm optimization. International Conference on Intelligent Information Processing, 2019, 160–165.
  46. Gu P., Zhu C. M., Wu Y. Y., et al., Energy consumption prediction model of SiCp/Al composite in grinding based on PSO-BP neural network. Solid State Phenomena, 2020, 305(1):163–168.
  47. Li Y. and Liu J., UWB indoor localization system based on IA-BP neural network. Electronic Measurement Technology, 2019, 42(5):109–112.
  48. Liang F. and Xiong L., UWB indoor positioning of mobile robot based on GA-BP neural network. Microelectronics and Computer, 2019, 36(4):33–38.