Many-objective BAT algorithm

In many-objective optimization problems (MaOPs), more than three distinct objectives are optimized simultaneously. The challenge in MaOPs is to obtain a Pareto approximation (PA) with high diversity and good convergence. In the literature, many approaches based on different multi-objective evolutionary algorithms (MOEAs) have been proposed to address diversity and convergence in MaOPs. To obtain better results, researchers use sets of reference points to discriminate between solutions and to guide the search process: non-dominated solutions are evaluated and selected against the reference set. This technique has also been adopted by some swarm-based evolutionary algorithms. In this paper, we combine effective adaptations of the bat algorithm with the aforementioned approach to handle many-objective problems, and call the resulting algorithm the many-objective bat algorithm (MaOBAT). The bat algorithm is a biologically inspired algorithm that mimics the echolocation of microbats. Each bat represents a complete solution, which is evaluated with a problem-specific fitness function; non-dominated solutions are then selected according to a dominance relationship. MaOBAT uses dominance rank as this relationship (the dominance rank of a solution is the number of other solutions that dominate it). Our strategy uses a dynamically allocated set of reference points, allowing the algorithm to obtain Pareto fronts (PFs) with good convergence and high diversity. Experimental results show that the proposed algorithm has significant advantages over several state-of-the-art algorithms in terms of solution quality.


Introduction
For multi-objective optimization [1], decision-making is based on multiple criteria. To solve multi-objective problems (MOPs) [2], there is a well-known family of meta-heuristic algorithms comprising MOEAs [1], multi-objective particle swarm optimization (MOPSO) [1,3] and multi-objective bat algorithms (MOBATs) [4]. MOEAs exploit their population-based nature to approximate the Pareto front (PF) in a single run [2]. The quality of solutions is discriminated based on the Pareto dominance relation [1,2,5], and MOEAs, MOPSOs and multi-objective bat algorithms can be characterized accordingly. Classical Pareto-based algorithms such as the non-dominated sorting genetic algorithm II (NSGA-II) [6] and the strength Pareto evolutionary algorithm either are computationally expensive or do not work consistently on practical MaOPs. One way to address this is a reference-set-based methodology. Algorithms based on reference sets have had more success in obtaining a good approximation of the PF, which remains a research challenge [9]. Solutions are evaluated against the reference set and the non-dominated ones are selected, yielding good convergence and high diversity. This approach is related to preference-based approaches, but the preferences must be set so that the full PF is approximated. This paper presents the MaOBAT algorithm, which uses the reference-point approach to solve MaOPs. The main reason for using a bat-based technique is that it converges rapidly to the PF and gives a good approximation of the PF in MOPs. Bat algorithms are based on swarm intelligence and inspired by the echolocation behavior of bats. Echolocation works like sonar: a loud sound is emitted, and an echo returns after the sound hits an object. Bats use it to judge distances, detect obstacles, and distinguish food/prey from other objects [23].
Moreover, echolocation allows bats to hunt in complete darkness [23]. The combination of echolocation with swarm intelligence enhances the properties of swarm-based algorithms, which makes the bat algorithm somewhat more effective than other swarm-based algorithms in some scenarios. The fundamentals of MOEAs and MOBATs differ, so results and conclusions for MOEAs do not necessarily carry over to MOBATs. In fact, MOEAs converge slowly to the PF, whereas bat-based algorithms converge quickly in MaOPs, just as they do in single-objective optimization. Different studies suggest that bat-based algorithms perform better than PSO-based algorithms in some problem-specific scenarios [24,25]. For instance, in a comparative study between the bat algorithm and PSO, a radial basis function (RBF) network was trained to classify benchmark data. The bat algorithm outperformed PSO in improving the weights of the RBF network, accelerating training, and converging to good solutions, which increased network efficiency and reduced classification errors [24]. In another study, algorithms for training feed-forward neural networks were compared: two gradient-descent algorithms (backpropagation and Levenberg-Marquardt) and three population-based heuristics (the bat algorithm, the genetic algorithm, and particle swarm optimization). The bat algorithm outperformed all the other algorithms [25]. These studies encourage the use of the bat algorithm in further experiments and real-world applications. A key benefit of the bat algorithm is that it combines population-based and local-search-based methods. This combination provides global diversity as well as rigorous local exploitation, which is important for metaheuristic algorithms.
In short, the bat algorithm combines PSO-like movement with local search, controlled by the pulse rate and loudness [26]. By adopting the reference-set approach, MOBATs can be applied to MaOPs and provide a good balance of diversity and convergence, which is the main issue in MaOPs. The main purpose of this paper is to improve many-objective optimization results by implementing a new bat-inspired algorithm that uses the reference-set approach to obtain good convergence and diversity.
The organization of the paper is as follows. Section 2 describes the literature review. Section 3 presents the proposed strategy for the many-objective bat algorithm. Section 4 gives a brief explanation of the state-of-the-art algorithms used for the comparison. Section 5 introduces the test problems, parameter settings and experimental results. The conclusion and future work are presented in Section 6.

Literature review
In the literature, to improve the convergence of MOEAs, MOPSOs and MOBATs, some authors have proposed relaxed forms of the dominance relationship, which increase the selection pressure towards the PF. In the following section, representative work is reviewed and the inspiration for our work is elaborated.
Sato et al. [5] proposed a modified dominance relationship, known as Control of Dominance Area of Solutions (CDAS), which enhanced the performance of NSGA-II. De Carvalho and Pozo applied CDAS to two different MOPSOs, namely sigma-MOPSO and SMPSO. However, because this modified relationship increases the selection pressure towards the PF, diversity is badly affected.
Mostaghim et al. [16] in 2008 proposed two aggregation methods, the first a weighted-sum approach and the second a distance-based ranking. For some problems, the distance-based approach with MOPSO outperformed NSGA-II and random search for up to 20 objectives. However, the analysis covered only convergence; the diversity issue remained.
In order to manage both convergence and diversity simultaneously, Garza-Fabre et al. [15] proposed two aggregation-based MOEAs: the clustering elitist genetic algorithm (CEGA), which uses fitness assignment to emphasize convergence, and the multi-directional fitness assignment (MDFA), which provides an explicit mechanism for promoting diversity. However, these methods still tend to converge towards a small part of the PF.
Deb and Jain [27] proposed in 2013 a reformulation of NSGA-II based on the reference-set approach to solve MaOPs, named the non-dominated sorting genetic algorithm III (NSGA-III) [27]. NSGA-III uses the reference-set approach and shows good convergence and diversity simultaneously. In addition, the crowding distance of NSGA-II is replaced by a clustering operator, in which each member of the population is associated with one of a set of well-distributed reference points in order to achieve excellent diversity.
Figueiredo et al. [28] in 2016 used the same reference-set-based approach in MOPSO to solve MaOPs with good convergence and diversity, and introduced some important differences specific to PSO algorithms. Their many-objective particle swarm optimization (MaOPSO) [28] uses an external archive to save the best non-dominated solutions, which can be used in later iterations to select the leaders required by PSO algorithms. Another significant distinction is that MaOPSO utilizes density and Pareto dominance information to move the particles towards the PF, whereas NSGA-III [27] uses no special reproduction operator.
Zhihua et al. [29] proposed an improved version of NSGA-III with new selection and elimination operators. First, a selection operator locates the reference point with the minimum niche count and selects the individual with the shortest penalty-based boundary intersection distance. Second, the reference point with the maximum niche count is identified, and the elimination operator removes the individual with the longest penalty-based boundary intersection distance.
Problems that are multi-objective and multidisciplinary in nature demand coherent optimization algorithms, especially engineering optimization problems, which are considered complex constrained problems. To address this, Xin-She Yang introduced a variant of the bat algorithm known as MOBAT [3], enhancing the bat algorithm for solving global and nonlinear optimization problems.
Moreover, the multi-objective bat algorithm has been applied to many real-life problems. Arash et al. [30] proposed a multi-objective bat algorithm for optimizing the cost of allocating human resources to an emergency hospital. Anindita et al. [31] proposed a bio-inspired algorithm for solving different aspects of wireless sensor networks (WSNs), such as optimal routing, clustering, dynamic allocation of motes, lifetime optimization and energy problems.
Learning to rank is an important task in information retrieval (IR), and multi-objective optimization algorithms have shown success in solving these kinds of problems. Tie-Yan Liu [32] provided a comprehensive overview of learning to rank for IR and categorized the existing learning-to-rank algorithms into three approaches, presenting a detailed review of these approaches and their relationship with loss functions and IR evaluation measures. Constructing a ranking model from data is the task known as learning to rank. Li et al. [33] proposed a multi-objective optimization model of robust learning to rank (LTR), which helps sort objects according to importance, choice and relevance [32,33]. The algorithm proposed in this paper can also be applied to learning-to-rank problems.
Recently, Li et al. [34] proposed the dandelion algorithm (DA), which shows excellent results on optimization problems but converges slowly and can easily get trapped in local optima. To overcome these problems, Zhu et al. [35] proposed a dandelion algorithm with probability-based mutation, in which Lévy and Gaussian mutations are used interchangeably; it showed better results than the standard DA.
Motivated by the promising results of the reference-based approach with MOEAs for handling MaOPs, we adapt MOBAT with the reference-set approach to solve MaOPs and achieve good convergence and diversity more efficiently. Additionally, the approach uses an archive to maintain the best bats, from which the leaders are selected.

Multi and many objective Bat algorithms
In this section, multi- and many-objective bat algorithms are discussed. Multi-objective optimization problems have two or three objectives that need to be optimized; problems with more than three objectives are categorized as many-objective optimization problems.
The multi-objective bat algorithm and the proposed many-objective bat algorithm are described in the following subsections.

Multi-objective Bat algorithm (MOBAT)
Problems that are multi-objective and multidisciplinary in nature demand coherent optimization algorithms, especially engineering optimization problems, which are considered complex constrained problems. To address this, Xin-She Yang introduced a variant of the bat algorithm known as MOBAT [3], enhancing the bat algorithm for solving global and nonlinear optimization problems. MOBAT was first tested on various subsets of trial functions and then used to solve multi-objective problems such as the welded beam design problem, showing promising results compared to existing multi-objective algorithms [2]. For MOBAT, a number of challenges have to be dealt with. First, multi-objective problems are complex and arduous to solve: there is no unique best solution, so the algorithm must find a non-dominated approximation of the true PF. Second, it must guarantee that the PF points are distributed evenly on the front and that the algorithm works for multi-objective design problems without extra machinery such as weighted-sum methods that combine the objectives into a single one. For real-world design problems, like the engineering problems for which Yang designed MOBAT, there is usually uncertainty and noise in the working solution.
The primary problem in multi-objective bat optimization is to approximate the optimal Pareto front. The bat-inspired algorithm must be modified to handle the multiple objectives of design problems properly. Following Pareto optimality, a vector v = (v1, ..., vn)^T is dominated by a vector u = (u1, ..., un)^T ∈ F if and only if ui ≤ vi for all i ∈ {1, ..., n} and ui < vi for at least one i. In other words, every component of u is less than or equal to the corresponding component of v, and at least one component of u is strictly smaller. For maximization problems, the dominance relation is obtained by inverting ≤ to ≥. A point x* ∈ F is non-dominated if no solution in the solution space dominates it. The Pareto front is the set of non-dominated solutions, PF = {s ∈ S | there is no s' ∈ S such that s' ≺ s}. In the MOBAT algorithm, a single objective f is formed from the multiple objectives f_k as a weighted sum, f(x) = Σ_k w_k f_k(x) with Σ_k w_k = 1. The weights are drawn randomly from a uniform distribution so that they have the diversity required to approximate the PF. If there are objective functions 1 to k, where k is the number of objectives of the problem, then f_1(x), ..., f_k(x) are the objective functions of the decision vector x = (x_1, ..., x_d)^T.
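As a concrete illustration of the dominance relation above, the following Python sketch (function names ours, not from the paper; minimization assumed) checks whether one objective vector dominates another and filters a set of vectors down to its non-dominated front:

```python
import numpy as np

def dominates(u, v):
    """True if objective vector u Pareto-dominates v (minimization):
    u is no worse in every objective and strictly better in at least one."""
    u, v = np.asarray(u), np.asarray(v)
    return bool(np.all(u <= v) and np.any(u < v))

def non_dominated(points):
    """Filter a set of objective vectors down to its non-dominated front."""
    pts = [np.asarray(p) for p in points]
    return [p for p in pts
            if not any(dominates(q, p) for q in pts if q is not p)]
```

For example, (1, 2) dominates (2, 2), while (1, 3) and (3, 1) are mutually incomparable and both survive the filter.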
Algorithm 1: Framework of the multi-objective bat algorithm
1: Initialize the bat population and its parameters.
2: Generate weight vectors for all objective functions, where the weights sum to 1.
3: Execute the algorithm for a maximum number of iterations, which is a hyperparameter of MOBAT.
4: In every iteration, generate a new solution for each objective function using the frequency and position updating equations.
5: Estimate a single objective as the weighted sum of the multiple objective functions.
6: Generate additional solutions by random walks and random flying, and accept the best of them.
7: After each iteration, increase the pulse emission rate and reduce the average loudness.
8: After each iteration, rank the bats by the weighted single objective (f) and find the current best solution.
9: At the end, report the best non-dominated solutions and post-process them if required.
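Steps 4 and 5 above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the frequency/velocity/position update follows Yang's standard bat algorithm, and the weighted-sum scalarization collapses the objectives into one value (all names are ours):

```python
import numpy as np

rng = np.random.default_rng(0)

def weighted_single_objective(fs, weights):
    """Collapse multiple objective values into one via a weighted sum
    (the weights are assumed to sum to 1)."""
    return sum(w * f for w, f in zip(weights, fs))

def bat_step(x, v, best, f_min=0.0, f_max=1.0):
    """One frequency/velocity/position update pulling bat x toward the
    current best bat, as in the standard bat algorithm."""
    beta = rng.random()                      # uniform in [0, 1)
    freq = f_min + (f_max - f_min) * beta    # random frequency
    v_new = v + (x - best) * freq
    return x + v_new, v_new
```

With weights (0.3, 0.7) and objective values (2.0, 4.0), the scalarized objective is 0.3·2 + 0.7·4 = 3.4.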

Many-objective Bat algorithm (MaOBAT)
The proposed algorithm, named the many-objective bat algorithm (MaOBAT), is summarized in Algorithm 2. Uniformly distributed random bats are generated as the initial population. The algorithm then evaluates each bat using a problem-specific evaluation function. To save non-dominated solutions, MaOBAT keeps an external archive (A_t); initially the archive (A_0) is empty. Next, uniformly distributed reference points are generated, which are used to choose the social leaders by discriminating between solutions in the external archive. In each iteration, the algorithm chooses the cognitive and social leaders from the external archive and updates each bat's position and velocity to move through the decision space. In the next step of the iteration, polynomial mutation is applied to 15% of the bats. In the final step of the iteration, the external archive is pruned so that its size does not exceed the maximum size (N). The iterations are repeated until the maximum number of iterations (t_max) is reached.
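The polynomial mutation mentioned above is Deb's standard real-coded operator; a minimal sketch follows (names are ours; the distribution index eta = 20 and mutation probability 1/n match the values used in the experiments later in the paper):

```python
import numpy as np

rng = np.random.default_rng(1)

def polynomial_mutation(x, low, high, eta=20.0, p_m=None):
    """Deb's polynomial mutation on a real-coded vector, clipped to bounds.
    Each variable mutates with probability p_m (default 1/len(x))."""
    x = np.asarray(x, dtype=float).copy()
    if p_m is None:
        p_m = 1.0 / len(x)
    for i in range(len(x)):
        if rng.random() < p_m:
            u = rng.random()
            if u < 0.5:
                delta = (2.0 * u) ** (1.0 / (eta + 1.0)) - 1.0
            else:
                delta = 1.0 - (2.0 * (1.0 - u)) ** (1.0 / (eta + 1.0))
            x[i] += delta * (high - low)  # perturbation scaled by the range
    return np.clip(x, low, high)
```

Larger eta concentrates the perturbation near the parent value, giving a finer local search.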
The algorithm defines a fitness function on which the discrimination between bats is based. The fitness values drive both the choice of the social leaders and the pruning of the external archive (the selection of the best non-dominated bats in the archive). The components of Algorithm 2 are discussed in the following sections.
Fitness method. The fitness method focuses on two things: a diversity measure, to obtain good diversity, and a convergence measure, to obtain high convergence. To calculate these two measures, a method based on reference points is used. The density of a bat is then calculated by clustering the bats against the reference points.
Computation of the reference points and hyperplane. To emphasize convergence and a good distribution of the bats on the PF, the algorithm uses a set of reference points generated with the method of Das and Dennis [36]. The resulting hyperplane holds a well-distributed set of points in objective space; it makes an equal angle with each axis and lies in the first quadrant. A factor p decides into how many divisions each axis is split. The total number of reference points is H = C(m + p − 1, p), where m is the number of objectives.
Translation of the solutions. The ideal point f_i* for each objective i is the minimum value of that objective. If the ideal point is given, it is used as is; otherwise, it is calculated by taking the minimum value of each objective over the external archive. After determining the ideal point, it is subtracted from each member of the external archive: f_i'(x) = f_i(x) − f_i*.
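The Das-Dennis construction above can be sketched as follows: it enumerates all points on the unit simplex whose coordinates are multiples of 1/p, and the count matches H = C(m + p − 1, p) (function names are ours):

```python
import math

def das_dennis(m, p):
    """All points on the unit simplex in m dimensions whose coordinates
    are non-negative multiples of 1/p (Das-Dennis structured points)."""
    points = []
    def recurse(prefix, left, dims_left):
        if dims_left == 1:
            points.append(prefix + [left / p])  # last coordinate takes the rest
            return
        for k in range(left + 1):
            recurse(prefix + [k / p], left - k, dims_left - 1)
    recurse([], p, m)
    return points

def num_reference_points(m, p):
    """H = C(m + p - 1, p): the number of points das_dennis produces."""
    return math.comb(m + p - 1, p)
```

For m = 3 objectives and p = 4 divisions this yields H = C(6, 4) = 15 reference points, each summing to 1.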

Density operator.
To compute the density operator, all bats in A_t are projected onto the hyperplane, and each bat is assigned to a reference point on the basis of minimum perpendicular distance. Thus a cluster of bats is associated with each reference point. After computing the clusters, the density of each bat is the number of bats attached to the same reference point as that bat. For example, if the bats attached to reference point j are φj = {a, b, c}, the density measure of each of the bats a, b and c is 3.
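A minimal sketch of this assignment step (function names ours; objectives assumed already translated by the ideal point): each solution is assigned to the reference direction with the smallest perpendicular distance, and its density is the niche count of that direction.

```python
import numpy as np

def perpendicular_distance(f, ref):
    """Distance from objective vector f to the line through the origin
    along reference direction ref."""
    f, ref = np.asarray(f, float), np.asarray(ref, float)
    proj = (f @ ref) / (ref @ ref) * ref   # orthogonal projection onto ref
    return np.linalg.norm(f - proj)

def densities(front, refs):
    """Assign each solution to its nearest reference direction; return the
    niche count of that direction per solution, plus the assignments."""
    assign = [min(range(len(refs)),
                  key=lambda j: perpendicular_distance(f, refs[j]))
              for f in front]
    counts = {j: assign.count(j) for j in set(assign)}
    return [counts[j] for j in assign], assign
```

With reference directions (1, 0) and (0, 1), the points (2, 0.1) and (3, 0.2) cluster on the first direction (density 2 each) and (0.1, 2) on the second (density 1).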
Convergence operator. To obtain good convergence, a convergence operator is used. For each bat i in the external archive, the achievement scalarizing function (ASF) is calculated with respect to the reference point to which the bat is associated. The convergence measure of bat i is its ASF value: ρi = ASF(xi, λj), where i belongs to φj.
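The ASF can be computed as in NSGA-III: the maximum over objectives of the translated objective value divided by the corresponding component of the reference direction, with zero components replaced by a small epsilon. A sketch under that assumption (names ours):

```python
import numpy as np

def asf(f_translated, weight, eps=1e-6):
    """Achievement scalarizing function: max_i f'_i / w_i, where f' is the
    ideal-point-translated objective vector and w the reference direction
    (zero weights replaced by eps, as in NSGA-III)."""
    f = np.asarray(f_translated, dtype=float)
    w = np.asarray(weight, dtype=float)
    w = np.where(w < eps, eps, w)
    return float(np.max(f / w))
```

Smaller ASF values indicate solutions closer to the ideal point along their reference direction, so the pruning step prefers them.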
Pruning the external archive. The bats in the external archive play a vital role in updating the pulse emission rate and loudness, so maintaining the archive properly is essential to the algorithm. The archive is updated at each step as follows. A candidate non-dominated solution is compared against the archive: if the candidate is dominated by any bat in the archive, it is discarded; otherwise, it is added, and any archive members dominated by it are removed. If the archive then exceeds its maximum size, bats are removed based on the density measure; if two bats have the same density measure, the convergence measure is used to differentiate between them. After a bat is removed, the density measures of the bats associated with the same reference point are decremented by one. This elimination step is repeated until the archive is back at its maximum size.
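The archive update can be sketched as below. This is a simplification of the paper's procedure: dominance, density and convergence are passed in as callables, and the density is recomputed on demand rather than decremented incrementally as the paper describes (all names are ours):

```python
def update_archive(archive, candidate, dominates, max_size,
                   density, convergence):
    """Insert a candidate into the external archive; if the archive
    overflows, drop the most crowded (ties broken by worst convergence)."""
    if any(dominates(a, candidate) for a in archive):
        return archive                                 # candidate rejected
    archive = [a for a in archive if not dominates(candidate, a)]
    archive.append(candidate)
    while len(archive) > max_size:
        worst = max(archive, key=lambda a: (density(a), convergence(a)))
        archive.remove(worst)
    return archive
```

With a toy dominance on minimization tuples, inserting (2, 2) into an archive {(1, 3), (3, 1)} of capacity 2 evicts whichever member the density/convergence criteria rank worst.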
Update the loudness and rate of pulse emission. The loudness and pulse emission rate of each bat are updated via a random walk around its leader: x_new = x_old + ε Ā_t, where ε is a random value in [−1, 1] drawn for each bat and Ā_t is the average loudness of all bats at timestep t; this walk is applied when a random value from [0, 1] is less than the pulse emission rate. Otherwise (when the pulse emission rate exceeds that random value), a new bat is created around the leader by random flying. If the fitness of the newly created bat is better than that of the leader, and the random value (the one compared above with the pulse emission rate) is less than the loudness of that bat, then the previous bat is replaced by the new solution. For the fitness comparison, if the new bat dominates the old one, the old bat is simply replaced; if the two are incomparable, the bat closer to the ideal point is selected. Preferring the smaller distance to the ideal point encourages convergence. Finally, if the new solution replaces the old one, the loudness and pulse emission rate are updated as A_i^{t+1} = α A_i^t and r_i^{t+1} = r_i^0 [1 − exp(−γ t)], where α and γ are constants controlling the loudness reduction and the growth of the pulse emission rate, respectively. In fact, α plays a role similar to the cooling schedule in simulated annealing [37]. The time complexity of MaOBAT is asymptotically equivalent to that of NSGA-III.
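The schedules above follow Yang's standard bat algorithm; a sketch under that assumption (names ours):

```python
import math
import random

random.seed(0)

def local_random_walk(x, avg_loudness):
    """Random walk around the current solution: x_new = x + eps * <A_t>,
    with eps drawn uniformly from [-1, 1] per dimension."""
    return [xi + random.uniform(-1.0, 1.0) * avg_loudness for xi in x]

def update_loudness_and_pulse(A, r0, t, alpha=0.9, gamma=0.9):
    """Yang's schedules: loudness decays geometrically, A <- alpha * A,
    while the pulse rate grows as r(t) = r0 * (1 - exp(-gamma * t))."""
    return alpha * A, r0 * (1.0 - math.exp(-gamma * t))
```

As t grows, the loudness shrinks toward 0 (less exploration) while the pulse rate approaches r0 (more exploitation), which is exactly the exploration-to-exploitation transition discussed below.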
Comparison of multi- and many-objective bat algorithms. In the multi-objective bat algorithm, problems arise as the number of objectives increases, because convergence and diversity are compromised. Moreover, the proportion of non-dominated solutions in a randomly chosen set of objective vectors grows exponentially with the number of objectives, so there is little room left for creating new solutions, which slows down the search. The many-objective bat algorithm tackles these problems by using the reference-point approach to solve MaOPs: it converges rapidly to the PF and gives a good approximation of it. Points corresponding to each reference point can be emphasized to find a widely distributed set of Pareto-optimal points [27]. As the reference points are widely distributed throughout the normalized hyperplane, the obtained solutions are also likely to be widely distributed on or near the Pareto-optimal front. The goal in such many-objective optimization is to find the Pareto-optimal points that are, in some sense, closest to the specified reference points [27].
Analysis of bat algorithm. Exploration (diversification) and exploitation (intensification) are two important components of a metaheuristic, and an appropriate balance between them must be maintained to find a near-global optimum [39,40]. The exploration property of an algorithm helps to explore unknown and new regions of the search space by generating a diverse set of solutions, focusing the search at the global level [41,42]. The exploitation capability, on the other hand, exploits the information in the solution at hand and aims to improve it by searching the local region of the search space [41,42]. The bat algorithm exhibits the properties of both population-based methods and local search [43]. It is based on PSO [44] and uses two components: simulated annealing and the random walk direct exploitation heuristic (RWDE) [45]. The simulated annealing heuristic [37] introduces diversity into the population and enhances the explorative capability of the bat search. RWDE [46] provides the exploitative capability of a bat by searching the local region of the search space to improve the solution at hand. Exploration and exploitation in a bat algorithm are controlled by two parameters, the loudness and the pulse rate, respectively [46]. Bat algorithms generate good results because the combination of echolocation with swarm intelligence enhances the properties of swarm-based algorithms in some scenarios. During the search, the loudness and the pulse emission rate of the bats are updated as the iterations proceed, as these characteristics help to find optimized solutions more effectively. In early iterations, the loudness is high, which helps in exploring different regions of the search space; in later iterations, the pulse emission rate is increased in order to exploit the promising regions of the search space.
Typically, the loudness decreases, and the rate of pulse emission increases as the bats approach towards the near-optimal solution, and this mimics the behavior of a bat when it detects and localizes a prey [45]. To control the dynamics of a swarm of bats, these parameters are tuned to maintain an appropriate balance between exploration and exploitation components of the algorithm to find the near-optimal solution.

Many-objective evolutionary algorithms for testing MaOBAT
In this section, NSGA-III [27] and MaOPSO [28] are explained. The performance of MaOBAT is compared with these two algorithms. A brief summary of each is given below:

NSGA-III
In NSGA-III, the initial population consists of randomly generated, uniformly distributed solutions. The algorithm evaluates the solutions using a problem-specific evaluation function, then generates offspring by recombination and mutation. The population and offspring are combined and sorted into fronts based on the Pareto dominance relationship, as in NSGA-II [6]. Considering the population size, the fronts are added one by one to the next generation; after adding each front, the algorithm checks whether the size of the next generation has reached the population size. If it has exactly reached it, the algorithm moves to the next iteration (provided the maximum number of iterations has not been reached). Otherwise, it calculates how many solutions are still required from the last added front. Uniformly distributed reference points are then generated and used to normalize all solutions, and each solution is associated with the reference point at minimum perpendicular distance. The next step is calculating the niche count of each reference point, excluding the final front. On the basis of the niche counts, solutions from the final front are added one by one to the next generation until the population size is reached.
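The niche-preservation step described above can be sketched as follows (a simplification with our own names, using plain dictionaries for niche counts and associations): repeatedly pick a reference point with the smallest niche count and take one associated member from the last front.

```python
import random

random.seed(0)

def niching(needed, niche_count, assoc, last_front, selected):
    """NSGA-III-style niche preservation: fill `needed` slots by repeatedly
    choosing a reference point with minimum niche count and selecting one
    last-front member associated with it."""
    while needed > 0:
        jmin = min(niche_count, key=niche_count.get)   # least-crowded point
        candidates = [s for s in last_front
                      if assoc[s] == jmin and s not in selected]
        if not candidates:
            del niche_count[jmin]   # nothing left for this reference point
            continue
        pick = random.choice(candidates)
        selected.append(pick)
        niche_count[jmin] += 1
        needed -= 1
    return selected
```

For example, with niche counts {0: 0, 1: 2} and last-front members a, b associated with point 0 and c with point 1, filling two slots selects a and b, since point 0 stays least crowded.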

MaOPSO
In this algorithm [28], uniformly distributed, randomly generated particles form the initial swarm S_0. The algorithm evaluates the particles using a problem-specific evaluation function. To save non-dominated solutions, MaOPSO keeps an external archive (A_t); initially the archive A_0 is empty. Uniformly distributed reference points are then created in order to choose the social leaders by discriminating between solutions in the external archive. The algorithm then works iteratively: in each iteration, the cognitive and social leaders are updated from the external archive, and these leaders are used to update particle positions and velocities in the decision space. In the next step of the iteration, polynomial mutation is applied to 15% of the particles. In the final step of each iteration, the external archive is pruned so that its size does not exceed the maximum size N. These iterations repeat until the maximum number of iterations (t_max) is reached. The algorithm defines a fitness function on which the discrimination between particles is based; using the fitness values, the social leaders are selected and the external archive is pruned to retain the best non-dominated particles.

Experimental results and discussion
In the following section, the performance of the algorithms is discussed. The algorithms used for comparison are MaOPSO, NSGA-III, MaOBAT and SMPSO [8]. The comparisons show, from different angles, how the convergence and diversity values are affected by the different algorithms.
First, the convergence of the many-objective algorithms is better than that of the multi-objective algorithms. Second, the Pareto-dominance-based algorithms do not converge well towards the PF in MaOPs. Well-known quality indicators, namely the inverted generational distance (IGD), generational distance (GD) and hypervolume (HV), are used to analyse the diversity and convergence of the algorithms.
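The GD and IGD indicators mentioned above can be computed as below. These are the common mean-distance variants (exact normalizations differ between papers; function names are ours): GD averages, over the approximation set, the distance to the nearest true-front point (convergence), while IGD averages, over the true front, the distance to the nearest approximation point (convergence plus coverage).

```python
import numpy as np

def _mean_min_dist(A, B):
    """Mean over points in A of the Euclidean distance to the closest
    point in B."""
    A, B = np.asarray(A, float), np.asarray(B, float)
    d = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=2)
    return float(d.min(axis=1).mean())

def gd(approx, true_front):
    """Generational distance (mean-distance variant): closeness of the
    approximation to the true front."""
    return _mean_min_dist(approx, true_front)

def igd(approx, true_front):
    """Inverted generational distance: how well the approximation covers
    the true front."""
    return _mean_min_dist(true_front, approx)
```

An approximation containing only (0, 1) against the front {(0, 1), (1, 0)} has GD = 0 (it lies on the front) but a positive IGD (it fails to cover (1, 0)), illustrating why both indicators are reported.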
Experiments are performed with a varying number of objectives (m = 2, 3, 5, 7, 10) on MaOPSO, NSGA-III, MaOBAT and SMPSO, and each algorithm is then evaluated with the quality indicators to obtain a quantitative analysis. The significance of the obtained results is also demonstrated using a t-test.

Benchmark test problems
In this paper, the DTLZ2, DTLZ4 and DTLZ6 problems are used as benchmarks [36]. The number of decision variables (n) and the number of objectives (m) are varied during the experiments according to n = m + k − 1. DTLZ4 is considered a difficult problem with regard to diversity, and DTLZ6 tests how well an algorithm can converge to a curve. We use k = 10 for all problems.
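For reference, DTLZ2 under its usual definition in the DTLZ suite can be sketched as follows (the implementation and naming are ours): the last k variables feed a distance function g, and the first m − 1 variables parameterize the position on the unit sphere.

```python
import math

def dtlz2(x, m):
    """DTLZ2 with m objectives; x has n = m + k - 1 variables in [0, 1].
    At x_i = 0.5 for the last k variables, g = 0 and the objective vector
    lies on the unit sphere (the Pareto-optimal front)."""
    k = len(x) - m + 1
    g = sum((xi - 0.5) ** 2 for xi in x[-k:])       # distance to the front
    f = []
    for i in range(m):
        fi = 1.0 + g
        for j in range(m - 1 - i):                  # product of cosines
            fi *= math.cos(0.5 * math.pi * x[j])
        if i > 0:                                   # one trailing sine
            fi *= math.sin(0.5 * math.pi * x[m - 1 - i])
        f.append(fi)
    return f
```

With m = 3 and k = 10 (so n = 12) and all variables at 0.5, the three objective values satisfy f1^2 + f2^2 + f3^2 = 1, confirming the point lies on the spherical front.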

Parameter settings
MaOBAT. In the MaOBAT algorithm, f_min and f_max are set to 0 and 1, respectively. The loudness of each bat is assigned a random value from [1, 2]. The constants α and γ are assigned values between 0.7 and 0.9, as these values give optimal results [4]. The number of divisions of each objective axis is p and m is the number of objectives; these values determine the number of reference points via H = C(m + p − 1, p). Generally, a greater number of reference points means better results. The archive size depends on the number of objectives, as shown in Table 1; it is based on the number of reference points, but for 10 objectives the archive size is set to 500 because of computational limitations. For the experiments, 100,000 iterations are used, as many-objective algorithms take some time to show convergent behavior, as shown in Figs 1, 2 and 3. MaOPSO [28] is also executed for 100,000 iterations to calculate the final Pareto front, because it demonstrates convergent behavior up to this number of iterations. The number of iterations is kept the same for all compared algorithms to ensure a fair comparison with MaOBAT. Moreover, the parameter settings of the selected algorithms are inspired by MaOPSO, and the parameter values were chosen with the aim of achieving convergence of the algorithms [28]. A mutation probability of 1/(number of decision variables) with a mutation distribution index of 20 is used for the polynomial mutation, following Nebro et al. [8].
MaOPSO. In MaOPSO, the parameter values follow the previous paper [28]. We run the algorithm for 100,000 iterations with different population sizes and reference points, as shown in Table 1.
NSGA-III. In NSGA-III, the parameter values follow MaOPSO [28]. We run the algorithm for 100,000 iterations with different population sizes and reference points, as shown in Table 1.
SMPSO. For SMPSO, the parameter values used here are those reported in MaOPSO [28]. The algorithm is run for 100,000 iterations to allow a fair comparison with the other algorithms. A mutation probability of 1/(number of decision variables) with a mutation distribution index of 20 is used for polynomial mutation, following Nebro et al. [8].
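The polynomial mutation operator shared by these configurations (probability 1/n, distribution index 20) can be sketched as follows; this is the standard Deb–Goyal operator, with an interface of our choosing:

```python
import random

def polynomial_mutation(x, lower, upper, eta_m=20.0, p_m=None):
    """Polynomial mutation (Deb & Goyal) with distribution index
    eta_m; p_m defaults to 1/(number of decision variables)."""
    if p_m is None:
        p_m = 1.0 / len(x)
    y = list(x)
    for i in range(len(y)):
        if random.random() < p_m:
            u = random.random()
            # small perturbations are most likely for large eta_m
            if u < 0.5:
                delta = (2.0 * u) ** (1.0 / (eta_m + 1.0)) - 1.0
            else:
                delta = 1.0 - (2.0 * (1.0 - u)) ** (1.0 / (eta_m + 1.0))
            y[i] = min(max(y[i] + delta * (upper - lower), lower), upper)
    return y
```

A larger eta_m concentrates offspring near the parent, which is why 20 is a common default for fine-grained local variation.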

Computational results and discussion
In the following section, we analyze the GD, IGD and HV values for MaOBAT, NSGA-III, MaOPSO and SMPSO. The analysis shows how these algorithms perform with respect to convergence and diversity. Tables 2-4 show the GD values. Regarding IGD, MaOBAT performs increasingly better as the number of objectives increases, as shown in Table 5. With respect to HV, MaOBAT shows the same behavior as for IGD: it performs better as the number of objectives increases, as shown in Table 8.
For DTLZ4 (Tables 3, 6 and 9), across all quality indicators (GD, IGD and HV), MaOBAT performs better or significantly better (at the 95% confidence level) in the largest number of experiments for all instances of this problem. MaOBAT therefore gives the best compromise between diversity and convergence on this problem.
For DTLZ6 (Tables 4, 7 and 10), MaOBAT performs better or significantly better (at the 95% confidence level) in the largest number of experiments, showing good convergence. When IGD values are considered, MaOBAT again performs significantly better (at the 95% confidence level) across different numbers of objectives. Likewise, when HV is considered, MaOBAT performs significantly better (at the 95% confidence level) than the other many-objective algorithms in most cases.
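For reference, the two distance-based indicators discussed above can be sketched as follows (function names are ours; GD measures convergence to the reference front, while IGD also penalizes poor coverage of it):

```python
import math

def _min_dist(p, points):
    """Euclidean distance from point p to the nearest point in points."""
    return min(math.dist(p, q) for q in points)

def gd(approx, reference):
    """Generational distance: mean distance from each obtained point
    to its nearest point on the reference (true) front -- convergence."""
    return sum(_min_dist(a, reference) for a in approx) / len(approx)

def igd(approx, reference):
    """Inverted GD: mean distance from each reference point to the
    nearest obtained point -- convergence and diversity together."""
    return sum(_min_dist(r, approx) for r in reference) / len(reference)
```

Note that a single point lying on the true front scores a perfect GD of 0 yet a poor IGD, which is exactly why IGD is the more diversity-sensitive of the two.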
The combination of echolocation with swarm intelligence enhances the search process, which makes the bat algorithm somewhat more effective than other swarm-based algorithms in some scenarios.
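The echolocation-driven move referred to here follows Yang's standard bat update rules, using the f min and f max bounds from the parameter settings. A minimal single-bat sketch (the function name and list-based representation are our own illustration):

```python
import random

def bat_step(x, v, x_best, f_min=0.0, f_max=1.0):
    """One echolocation-driven move of a single bat: a frequency is
    drawn in [f_min, f_max], the velocity is pulled toward the
    best-known solution x_best, and the position follows the velocity."""
    beta = random.random()
    f = f_min + (f_max - f_min) * beta                    # frequency tuning
    v = [vi + (xi - xb) * f for vi, xi, xb in zip(v, x, x_best)]
    x = [xi + vi for xi, vi in zip(x, v)]
    return x, v
```

In the full algorithm, this global move is complemented by a loudness-scaled local random walk, with loudness decaying by α and the pulse emission rate growing via γ over the iterations, as set in the parameter settings above.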

Conclusion and future work
This paper proposes a many-objective bat algorithm (MaOBAT) to solve many-objective optimization problems by effectively computing a Pareto approximation (PA) with high diversity and good convergence. The proposed method incorporates the reference-point-based approach into MOBAT, as the background study shows that this approach has been used effectively for MaOPs. Moreover, in the fitness assignment method, the reference points are evenly distributed to impose selection pressure toward good convergence to the true PF. The experimental results demonstrate that MaOBAT is capable of generating PFs with good convergence and high diversity. The significance of the obtained results is also demonstrated using a t-test. The empirical study shows that MaOBAT works efficiently in comparison to MaOPSO, SMPSO and NSGA-III, and that it achieves high diversity and good convergence in a reasonable time. To maintain selection pressure toward the PF and to obtain high diversity, extreme solutions and a uniformly distributed reference set are used. Furthermore, the experimental results demonstrate that the MaOBAT algorithm has significant advantages over several state-of-the-art many-objective algorithms in terms of solution quality.
In the future, more experiments can be conducted on different benchmark problems, specifically practical problems with high dimensionality and complex PFs, to obtain a more meticulous performance analysis of MaOBAT.

Table legend: DTLZ2 (100,000 iterations). Values in bold are best values. '�', '−' and '+' indicate that the result is statistically similar, significantly worse and significantly better than that of MaOBAT, respectively (alpha = 0.05).