
IM-NSGAII: A novel approach to boost convergence speed and population diversity in multi-objective optimization

Abstract

Convergence speed and population diversity have long been central concerns in multi-objective evolutionary algorithms. However, the NSGAII algorithm often shows insufficient ability to maintain diversity when facing complex Pareto fronts. To address this issue, an improved NSGAII algorithm (IM-NSGAII) is proposed. First, a population evaluation technique is incorporated after non-dominated sorting to filter and select the best parent population. Second, a sparse population strategy with a high-pressure criterion is employed to guide sparse individuals in local exploration, thereby enhancing population diversity. Finally, a difference operator is introduced to facilitate information exchange among sparse individuals, compensating for the slow convergence speed of the original algorithm. The proposed IM-NSGAII is evaluated against five widely used algorithms on the ZDT, DTLZ, MaF, and WFG benchmark problems. Experimental results demonstrate that IM-NSGAII significantly improves both population diversity and convergence speed.

1. Introduction

Over the past two decades, the efficiency of multi-objective evolutionary algorithms (MOEAs) in solving multi-objective optimisation problems (MOPs) has attracted considerable attention, leading to the development of various algorithms for handling complex Pareto fronts (PFs). According to their selection criteria, MOEAs can be broadly classified into three categories: dominance-based MOEAs [1–5], metric-based MOEAs [6–10], and decomposition-based MOEAs [11–15]. The robustness of MOEAs has been demonstrated in [16]. To further accelerate convergence, some studies have incorporated mathematical techniques such as scalarisation methods [17], pairwise optimisation methods [18], and gradient-based methods [19]. Although these methods significantly improve convergence, they are prone to premature convergence and entrapment in local optima.

Given that mathematical methods can help improve the convergence speed of algorithms, many studies have begun to focus on integrating mathematical techniques with MOEAs to achieve an effective balance between convergence efficiency and population diversity. For instance, [20] introduced a hybrid algorithm using kernel gradients and normal vectors, where the former accelerates convergence and the latter guides the search direction. In [21], gradient information from constrained subproblems was integrated into dominance-based MOEAs to balance convergence and diversity. In [22], the conjugate gradient method was incorporated as a mutation operator to enhance hybrid mutation search, while a feedback mechanism adaptively updated its weight coefficients. However, when addressing MOPs with complex PFs, these methods often underperform, as they prioritise convergence over diversity, leading to underexploration of promising regions and suboptimal solutions.

To enhance population diversity, various approaches have been proposed. For example, Xu et al. [23] introduced a new decision-variable-based indicator to evaluate the optimisation contribution of variables, along with two optimisation schemes. In [24], multiple individual selection criteria were analysed theoretically, and a criterion based on indicator sub-modulus was proposed. In [14], a global decomposition strategy based on infinitesimal analysis was designed to capture PF distribution information for adaptive reference vector adjustment. In [25], crossover operators were designed according to individual distribution states, and reinforcement learning was employed to select the most suitable operator, together with a weight adjustment mechanism to promote diversity. In [26], a clustered-population strategy with independent evolution and multiple selection criteria was introduced to preserve diversity, along with a new metric for balancing convergence and feasibility. Despite these advances, such algorithms increase computational complexity, and their convergence speed is often compromised when improving population diversity.

In summary, we consider whether focusing on the selection mechanism of elite individuals and the optimization of search operators can achieve an effective balance between convergence speed and population diversity without increasing algorithmic complexity. Motivated by this idea, this study proposes an improved NSGAII algorithm (IM-NSGAII). NSGAII is chosen as the baseline algorithm for several reasons. First, it features relatively low computational complexity, making it suitable for large-scale optimization problems. Second, NSGAII does not rely on gradient information, thereby avoiding the common drawback of premature convergence to local optima observed in gradient-based methods. Moreover, NSGAII is one of the most widely used multi-objective evolutionary algorithms in engineering applications, offering high practicality and ease of implementation. Nevertheless, despite its popularity and utility, NSGAII still exhibits limitations, particularly in maintaining population diversity and accelerating convergence, highlighting the need for further improvements. Inspired by this observation, we note that the environmental selection strategy in SPEA2, together with DE-based local search techniques, may provide a promising avenue for enhancing the performance of NSGAII. The main contributions of this study are summarised as follows:

  a. To improve population diversity, an individual removal strategy is proposed. Crowding information of sorted individuals is evaluated, and those with low crowding values are eliminated promptly.
  b. A new high-pressure criterion is designed to filter out the best individuals and guide subsequent optimisation as a refined population.
  c. To enhance convergence speed, a local search strategy is incorporated. Specifically, a differential evolution (DE) operator is employed to generate offspring for heuristic exploration of promising populations.
  d. The performance of IM-NSGAII is evaluated against NSGAII, its variants, and other state-of-the-art MOEAs on 28 benchmark problems with varying characteristics. Experimental results confirm that IM-NSGAII achieves highly competitive performance.

2. Materials and methods

2.1. Related knowledge

To provide a clearer understanding of the improvements proposed in this work, Algorithm 1 presents the pseudo-code of the classical NSGAII in detail.

Algorithm 1 Pseudo-code of NSGAII

1: Input: Population size n, dimension size D, number of objective functions M.

2: Output: Final population P

3: Initialize population P, crossover factor cr, mutation factor mu, the distribution indices, iteration counter it, and maximum number of iterations maxit.

4: Evaluate fitness values and perform non-dominated sorting.

5: Calculate crowding distances for all individuals.

6: while it < maxit do

7:  Perform binary tournament selection to generate parent population P.

8:  Generate offspring population using crossover and mutation.

9:  Merge the parent and offspring populations.

10:  Evaluate fitness values and update the non-dominated sorting results.

11:  Recalculate crowding distances for all individuals.

12:  Increment iteration counter: it = it + 1.

13: end while

14: Return the final population P.

Population individuals xi are initialized according to Eq 1:

xi = lb + rand(1, D) · (ub − lb) (1)

where lb and ub denote the lower and upper bounds of the decision variables, respectively. The function rand(1,D) generates a D-dimensional decision vector with values uniformly distributed in the interval [0,1].
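As a concrete illustration, the initialization of Eq 1 can be sketched in a few lines of NumPy. This is a Python sketch for illustration only; the original experiments were run in MATLAB on PlatEMO.

```python
import numpy as np

def initialize_population(n, D, lb, ub, rng=None):
    """Eq 1: x_i = lb + rand(1, D) * (ub - lb), one row per individual."""
    rng = np.random.default_rng(rng)
    lb = np.asarray(lb, dtype=float)
    ub = np.asarray(ub, dtype=float)
    # rand(n, D) draws uniformly from [0, 1]; scaling maps it into [lb, ub]
    return lb + rng.random((n, D)) * (ub - lb)
```

Each row is one decision vector; scalar or per-variable bounds are both accepted via NumPy broadcasting.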

The fitness value of each individual is calculated as follows: for each individual xi, the objective vector F(xi) is obtained using Eq 8.

Furthermore, the crowding distance of each individual is calculated according to Eq 2:

ods(xk) = Σ_{i=1}^{M} ( fi(xk+1) − fi(xk−1) ) / ( fi^max − fi^min ) (2)

where fi(xk) is the i-th objective function of F(x), xk+1 and xk−1 are the neighbouring individuals of xk when sorted along objective i, and fi^max and fi^min are the maximum and minimum values of the i-th objective in the population. This crowding distance is used to maintain the distribution of solutions across the Pareto front.
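The crowding-distance computation of Eq 2 can be sketched as follows. This NumPy sketch assumes the usual NSGAII convention that boundary individuals in each objective receive an infinite distance, which the paper does not restate explicitly.

```python
import numpy as np

def crowding_distance(F):
    """Crowding distance (Eq 2) for an objective matrix F of shape (n, m).
    Per objective, each interior individual accumulates the normalized gap
    between its two sorted neighbours; boundary individuals get infinity."""
    n, m = F.shape
    cd = np.zeros(n)
    for i in range(m):
        order = np.argsort(F[:, i])
        f = F[order, i]
        cd[order[0]] = cd[order[-1]] = np.inf
        if f[-1] > f[0]:
            cd[order[1:-1]] += (f[2:] - f[:-2]) / (f[-1] - f[0])
    return cd
```

Larger values indicate sparser surroundings, so they are preferred when truncating a front.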

2.1.1. Population individual removal strategy.

In the classical NSGAII, the effective selection of individuals is often applied only to the last non-dominated front using the crowding distance criterion, in order to truncate the population to size N. However, this approach may result in an uneven distribution of solutions along the Pareto front. To overcome this limitation, a refined individual removal strategy is introduced in this work.

First, the crowding distance of all individuals in the population is computed. Then, two individuals with the smallest crowding values, denoted as xi and xj, are identified. For these two individuals, there always exists another individual xk that is the closest to them in the objective space. To maintain population diversity, one of the individuals xi or xj is removed according to Eq 3.

d(xi, xk) = ‖F(xi) − F(xk)‖, d(xj, xk) = ‖F(xj) − F(xk)‖ (3)

remove xi if d(xi, xk) ≤ d(xj, xk); otherwise remove xj (4)

where F(x) = (f1(x), …, fM(x)) is the objective vector with k-th component fk(x), and xi and xj are the candidate solutions. This mechanism ensures that individuals located in crowded regions are more likely to be removed, thereby promoting a more uniform distribution of solutions across the Pareto front.

The pseudo-code of the proposed population individual removal strategy is given in Algorithm 2.

Algorithm 2 Population Individual Selection Strategy

1: Input: Population P, population size N.

2: Output: Refined population P.

3: Initialize the current population size n = |P|.

4: if n ≤ N then

5:  Keep the population unchanged: P = P.

6: else

7:  Calculate fitness values and crowding distances for P.

8:  Identify individuals xi, xj with the lowest crowding distances, and find their nearest neighbour xk.

9:  Remove one individual according to Eq 3.

10:  Update the population by deleting the selected individual.

11:  Decrease the population size: n = n − 1.

12: end if

13: Return the refined population P.
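A minimal sketch of the removal loop in Algorithm 2 is given below. The crowding computation is repeated for self-containment, and the rule for choosing which of the two most-crowded individuals to delete (the one closer to its nearest neighbour in objective space) is our reading of Eqs 3 and 4, not the authors' code.

```python
import numpy as np

def crowding_distance(F):
    """Crowding distance (Eq 2); boundary individuals get infinity."""
    n, m = F.shape
    cd = np.zeros(n)
    for i in range(m):
        order = np.argsort(F[:, i])
        f = F[order, i]
        cd[order[0]] = cd[order[-1]] = np.inf
        if f[-1] > f[0]:
            cd[order[1:-1]] += (f[2:] - f[:-2]) / (f[-1] - f[0])
    return cd

def truncate_population(F, N):
    """Algorithm 2 sketch: while the population exceeds N, take the two
    individuals with the smallest crowding distance and remove the one
    lying closer to its nearest neighbour in objective space.
    Returns the row indices of the kept individuals."""
    keep = list(range(len(F)))
    while len(keep) > N:
        sub = F[keep]
        cd = crowding_distance(sub)
        i, j = np.argsort(cd)[:2]          # two most crowded individuals
        def nearest(idx):
            d = np.linalg.norm(sub - sub[idx], axis=1)
            d[idx] = np.inf
            return d.min()
        victim = i if nearest(i) <= nearest(j) else j
        keep.pop(victim)
    return keep
```

Because boundary individuals carry infinite crowding distance, they are never candidates for removal, which preserves the extremes of the front.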

2.1.2. Elite populations.

An elite population is introduced to retain individuals that meet specific selection criteria from both parent and offspring populations during each iteration. Its primary purpose is to identify and preserve the most valuable individuals for exploitation, thereby improving the convergence performance of the algorithm. Experimental results demonstrate that incorporating an elite population can significantly accelerate convergence while maintaining a well-distributed population.

In this study, the criteria for elite selection are defined as follows. First, elite individuals must belong to the first non-dominated front of the population, meaning they are among all non-dominated solutions. Second, the crowding distance of an elite individual must exceed a certain threshold r, ensuring that elite individuals are sufficiently isolated from others in the objective space to maintain diversity and prevent premature convergence.

The selection of the threshold r is crucial for algorithm performance. If r is too small, the elite population may include individuals that are not meaningfully distinct from one another, which increases computational overhead without significantly improving the algorithm’s effectiveness. Conversely, if r is too large, the number of elite individuals becomes very limited, reducing the algorithm’s ability to exploit promising regions effectively. To overcome this issue, an adaptive r-value strategy is proposed, calculated as:

r = (1/N) Σ_{i=1}^{N} ods(xi) (5)

where ods(xi) denotes the crowding distance of individual xi. Using this threshold, the elite population is defined as:

Elite = { xi ∈ X | rank(xi) = 1 and ods(xi) > r } (6)

Here, r represents the mean of the sum of Euclidean distances between all population individuals and their nearest neighbours. Since the population X evolves with each iteration, the value of r is dynamically updated, allowing the elite selection process to adapt to the changing distribution of the population. This adaptive mechanism ensures a proper balance between exploitation and exploration, preserving both high convergence speed and population diversity. By maintaining a flexible elite population, the algorithm can effectively focus search efforts on promising regions while still exploring diverse areas of the objective space, thereby improving the overall optimization performance.
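Under the definitions of Eqs 5 and 6, elite selection reduces to a single thresholding step. The sketch below assumes that infinite (boundary) crowding distances are excluded when averaging, an assumption the paper does not state explicitly.

```python
import numpy as np

def select_elite(ranks, cd):
    """Eqs 5-6 sketch: the threshold r is the mean crowding distance over
    the population (finite values only, by assumption); elites are the
    first-front individuals whose crowding distance exceeds r."""
    finite = cd[np.isfinite(cd)]
    r = finite.mean() if finite.size else 0.0       # Eq 5
    return np.where((ranks == 1) & (cd > r))[0]     # Eq 6
```

Because r is recomputed from the current population each generation, the elite set automatically shrinks as the front fills in and grows when it is sparse.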

2.1.3. Localised search for elite populations.

The differential evolution (DE) algorithm, as a heuristic optimization method, possesses several notable advantages, including strong search capability, a small number of stable parameters, fast convergence, and independence from gradient information. To enhance information exchange between populations and accelerate the convergence of the algorithm, the DE operator is applied to perform heuristic learning on the elite population. In this work, the elite individuals identified by Eq 6 undergo a DE-based recombination combined with polynomial mutation to generate the offspring population Offspring. The operation can be expressed as:

Offspring = mutate( Elite + cr · (P1 − P2) ) (7)

Here, P1 and P2 denote subpopulations consisting of k individuals randomly selected from the parent population, and cr is the crossover factor. This strategy allows elite individuals to share information with other members of the population while generating new candidate solutions, thus improving the exploration capability of the algorithm and enhancing convergence in regions of the search space with promising solutions.
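The Eq 7 offspring generation can be sketched as below, assuming a classical DE difference step followed by polynomial mutation; the exact DE variant and the bound-handling rule are assumptions, and the defaults for mu and eta follow the parameter settings reported in Section 3.1.

```python
import numpy as np

def de_offspring(elite, P1, P2, cr, lb, ub, mu=1/20, eta=20, rng=None):
    """Eq 7 sketch: v = x_elite + cr * (p1 - p2), then polynomial mutation
    with per-variable probability `mu` and distribution index `eta`."""
    rng = np.random.default_rng(rng)
    v = np.clip(elite + cr * (P1 - P2), lb, ub)   # DE difference step
    mask = rng.random(v.shape) < mu               # variables to mutate
    u = rng.random(v.shape)
    delta = np.where(u < 0.5,
                     (2.0 * u) ** (1.0 / (eta + 1.0)) - 1.0,
                     1.0 - (2.0 * (1.0 - u)) ** (1.0 / (eta + 1.0)))
    return np.clip(v + mask * delta * (ub - lb), lb, ub)
```

The difference vector cr · (P1 − P2) pulls each elite individual along directions already present in the population, which is what enables the information exchange described above.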

2.2. IM-NSGAII algorithm framework

2.2.1. Problem definition.

In practice, many real-world problems can be formulated as multi-objective optimisation problems (MOPs), such as logistics path optimisation [27], power system optimisation [28], and shop floor scheduling [29]. In general, MOPs are mathematically expressed as:

minimize F(x) = ( f1(x), f2(x), …, fm(x) ), subject to x ∈ Ω (8)

where x = (x1, x2, …, xD) is the decision vector, Ω denotes the decision space, and F(x): Ω → R^m is the m-dimensional vector of objective functions fi(x), mapping the decision space to the objective space.

Unlike single-objective optimisation, the conflicting nature of the functions fi(x) in MOPs prevents their individual optima from representing the global optimum. For decision variables x and y, if they satisfy the following conditions:

fi(x) ≤ fi(y) for all i ∈ {1, …, m}, and fj(x) < fj(y) for at least one j ∈ {1, …, m} (9)

then x is said to dominate y.

In principle, the optimal solution set of MOPs consists of Pareto-optimal solutions. The collection of all Pareto-optimal solutions is referred to as the Pareto set (PS), and the Pareto front (PF) represents the corresponding set of points in the objective space obtained by mapping the PS through F(x) [30].
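The dominance relation of Eq 9 translates directly into code (minimisation assumed, as in Eq 8):

```python
import numpy as np

def dominates(fx, fy):
    """Eq 9: x dominates y iff x is no worse in every objective
    and strictly better in at least one (minimisation)."""
    fx, fy = np.asarray(fx), np.asarray(fy)
    return bool(np.all(fx <= fy) and np.any(fx < fy))
```

Note that two mutually non-dominated vectors both return False, which is exactly what makes a Pareto front a set rather than a single optimum.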

2.2.2. Motivation.

NSGAII is one of the most widely used multi-objective evolutionary algorithms, and its effectiveness has been demonstrated in numerous practical applications. Typical examples include integrated production and inventory scheduling [31], optimization of high-dimensional multi-objective initial cable forces in arch bridges constructed by the cantilevered cast-in-place method [32], and multi-objective path planning for mobile robots [33].

Despite its broad applicability, the classical NSGAII algorithm exhibits limitations in maintaining population diversity when addressing complex multi-objective optimization problems. To illustrate this issue intuitively, we analyze the performance of existing algorithms [34–36] on benchmark problems with complex Pareto fronts characterized by non-convexity, discontinuity, and multimodality. Taking the benchmark functions ZDT1 and DTLZ7 as examples, we applied the classical NSGAII algorithm; the resulting final solution sets are shown in Fig 1. The left panel of Fig 1 demonstrates that, upon reaching the stopping criterion, the algorithm fails to sufficiently approach the true Pareto front, highlighting the urgent need to improve convergence speed. The right panel of Fig 1 shows that, although the algorithm is able to approximate the true Pareto front, a large number of pseudo-optimal solutions are concentrated in local regions, indicating a tendency to become trapped in local optima. In practical optimization problems, we aim for algorithms that generate a more uniformly distributed set of solutions to better adapt to environmental changes and decision-making requirements.

Fig 1. Final solution sets obtained by NSGAII.

https://doi.org/10.1371/journal.pone.0341439.g001

These observations motivate the development of the IM-NSGAII algorithm, which is designed to simultaneously enhance population diversity and convergence speed. IM-NSGAII integrates several key mechanisms, including elite population strategies, adaptive differential evolution operators, and crowding-aware individual selection, to generate a more uniformly distributed Pareto front while accelerating convergence. This combination allows the algorithm to explore sparse regions effectively while maintaining robust exploitation of promising areas in the objective space.

2.2.3. General overview of the IM-NSGAII algorithm.

The proposed IM-NSGAII algorithm extends the classical NSGAII framework by integrating multiple strategies aimed at simultaneously improving convergence speed and population diversity. The key components of the algorithm include a population individual culling strategy, an elite population mechanism, and a differential evolution (DE) operator.

Algorithm 3 Pseudo-code of IM-NSGAII

1: Input: Population size N, dimension D, number of objectives M

2: Output: Final population P

3: Initialize population P, parameters cr, mu, it = 0, and maxit

4: Evaluate fitness and perform non-dominated sorting

5: Compute crowding distances

6: while it < maxit do

7:  Tournament selection to get parent population P

8:  Generate offspring P1 via crossover and mutation

9:  Combine the parent and offspring populations

10:  Non-dominated sorting; record front numbers

11:  Compute the retained index set temp from the recorded front numbers

12:  if |temp| = N then

13:   P1 = P(temp)

14:  else

15:   Obtain P3 from Algorithm 2 and recompute crowding distances

16:  end if

17:  Compute adaptive mean r (Eq 5) and identify elite P4 (Eq 6)

18:  Update the population for the next generation

19:  it = it + 1

20: end while

21: Return P

The population individual culling strategy selectively removes individuals with lower crowding values while retaining those with higher crowding as parents. This approach ensures that the retained individuals are distributed more evenly across the objective space, preventing clustering in dense regions and promoting a uniform spread along the Pareto front. Experimental results indicate that this strategy substantially enhances population diversity and facilitates better coverage of the solution space.

The elite population mechanism identifies and preserves individuals from both parent and offspring populations that meet specific adaptive selection criteria. In particular, elite individuals are drawn from sparsely populated regions, ensuring that promising but underrepresented areas of the Pareto front are actively explored. This mechanism improves the algorithm’s exploratory capabilities and prevents premature convergence to suboptimal regions.

To further accelerate convergence, a differential evolution (DE) operator is applied to the elite population. By generating offspring through DE-based heuristic recombination, elite individuals exchange information with other population members, allowing promising traits to propagate more efficiently. This guided information exchange accelerates convergence while maintaining solution diversity, as it balances exploitation of high-quality solutions with exploration of sparse regions.

The overall workflow of IM-NSGAII is summarized in Algorithm 3. Initially, the population is randomly initialized, and algorithmic parameters—including crossover and mutation factors, crowding coefficients, and maximum iterations—are set. At each generation, the algorithm performs tournament selection to determine parent individuals, generates offspring using simulated binary crossover and polynomial mutation, and combines these with the elite population. Non-dominated sorting is conducted to assign Pareto front numbers, and crowding distances are recalculated to guide selection. An adaptive r-value is computed to identify elite individuals dynamically, which are then incorporated into the next generation. The iterative process continues until the maximum number of generations is reached, resulting in a final population that is well-distributed along the Pareto front and provides a rich set of trade-off solutions suitable for practical applications.

Additional Notes:

The adaptive r-value ensures that the definition of elite individuals evolves with the population distribution, maintaining a balance between exploration and exploitation.

By integrating DE operators specifically for elite individuals, IM-NSGAII accelerates convergence without sacrificing diversity.

The combination of culling, elite preservation, and DE-guided exploration provides a robust framework that performs effectively across a wide range of benchmark and real-world multi-objective problems.

3. Results and discussion

In this paper, IM-NSGAII is compared with NSGAII [37], SNSGAII [38], TNSGAII [39], and DRLOSEMCMO [40] on 28 benchmark problems drawn from the ZDT [41], DTLZ [42], MaF [43], and WFG [44] suites. NSGAII is the classical baseline and is included because IM-NSGAII is a variant of it; SNSGAII and TNSGAII are NSGAII variants proposed in 2024 and 2022, respectively; and DRLOSEMCMO, also proposed in 2024, follows an algorithmic framework different from NSGAII.

3.1. Parameter setting

For offspring generation, NSGAII, TNSGAII, and the proposed IM-NSGAII use simulated binary crossover and polynomial mutation, where the crossover probability, mutation probability, and distribution index are set to 1, 1/20, and 20, respectively. In addition, IM-NSGAII introduces the difference operator for local exploitation of sparse populations. DRLOSEMCMO and SNSGAII use their own operators to generate offspring, with parameters left at their default values. For problems with two objectives, the population size N was set to 100; for problems with three objectives, N was set to 180. The maximum number of iterations maxit was set to 100 for the ZDT problems (300 for ZDT4) and 200 for the DTLZ, MaF, and WFG series, except for DTLZ3, MaF3, and MaF4, where maxit is 300. In terms of performance evaluation metrics, IGD [45] and HV [46] are selected to evaluate how well the different algorithms perform on the 28 benchmark problems, with HV computed using (1, 1) as the reference point. Each algorithm was run independently 30 times on the PlatEMO [47] platform, and the Wilcoxon rank-sum test with a significance level of 0.05 was applied, where + indicates that an algorithm is superior to IM-NSGAII, – indicates that it is inferior, and = indicates that it is comparable. The experiments were run on an Intel Core i7-14700KF CPU with MATLAB R2021b.
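The statistical protocol described above can be reproduced with SciPy's rank-sum test; the +/–/= labelling below is our interpretation of the comparison rule (lower IGD is better), not code from the paper.

```python
import numpy as np
from scipy.stats import ranksums

def compare(igd_a, igd_b, alpha=0.05):
    """Wilcoxon rank-sum comparison of two algorithms' 30-run IGD samples:
    '+' if algorithm A is significantly better (lower IGD), '-' if
    significantly worse, '=' if no significant difference at `alpha`."""
    stat, p = ranksums(igd_a, igd_b)
    if p >= alpha:
        return "="
    return "+" if np.mean(igd_a) < np.mean(igd_b) else "-"
```

The same three-way labelling is what populates the +/–/= rows of the result tables.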

3.2. Presentation and analysis of experimental results

Table 1 presents the mean and standard deviation of the IGD values of the five MOEAs over 30 independent runs on the benchmark problems. The results indicate that IM-NSGAII achieves superior performance, with lower IGD values indicating better outcomes.
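For reference, the IGD metric used in Table 1 measures how well the obtained solution set covers a sampled true Pareto front; a minimal sketch:

```python
import numpy as np

def igd(ref_front, obtained):
    """Inverted Generational Distance: mean Euclidean distance from each
    reference (true) Pareto-front point to its nearest obtained solution.
    Lower values indicate better convergence and coverage."""
    ref = np.asarray(ref_front, dtype=float)
    obt = np.asarray(obtained, dtype=float)
    # pairwise distances: (n_ref, n_obt)
    d = np.linalg.norm(ref[:, None, :] - obt[None, :, :], axis=2)
    return d.min(axis=1).mean()
```

Because the average runs over the reference points, a set that clusters in one region is penalised even if those points lie exactly on the front, which is why IGD captures diversity as well as convergence.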

For ZDT1 and ZDT4, IM-NSGAII significantly outperforms the four comparison algorithms. For ZDT2, IM-NSGAII outperforms NSGAII, SNSGAII, and TNSGAII, while DRLOSEMCMO performs similarly to IM-NSGAII. For ZDT3, IM-NSGAII achieves IGD values approximately half those of SNSGAII and TNSGAII, whereas NSGAII’s performance is comparable to IM-NSGAII. For ZDT6, IM-NSGAII significantly outperforms NSGAII, SNSGAII, and TNSGAII, and is comparable to DRLOSEMCMO.

For the DTLZ series, IM-NSGAII outperforms the comparison algorithms on DTLZ1, DTLZ3–7, and IDTLZ1. For DTLZ2, the algorithm performs slightly worse than TNSGAII, which can be attributed to the high individual selection pressure resulting from a suboptimal value of r. Analysis of the Pareto front distribution indicates that the population generated by IM-NSGAII is more uniformly distributed.

For the MaF series, IM-NSGAII achieves the best results on MaF1-3 and MaF6 among all comparison algorithms. TNSGAII produces comparable results only on MaF6. For MaF4, the performance of IM-NSGAII is slightly worse than NSGAII and SNSGAII, which is attributed to the introduction of particle-generated offspring solutions. For MaF5, IM-NSGAII performs similarly to TNSGAII and better than the remaining three algorithms.

For the WFG series, IM-NSGAII achieves the optimal results on WFG2, WFG3, and WFG6. The IGD values of IM-NSGAII on WFG1, WFG3, WFG4, WFG7, WFG8, and WFG9 are lower than those of TNSGAII, with an average difference of approximately 0.133. This difference is partly attributed to variations in the function ranges, which make it challenging for IM-NSGAII to locate the global optima.

Statistical tests indicate that IM-NSGAII significantly outperforms NSGAII, SNSGAII, TNSGAII, and DRLOSEMCMO on 20, 26, 15, and 25 benchmark problems, respectively. Notably, IM-NSGAII converges extremely quickly on the ZDT series, primarily due to its offspring generation mechanism. The performance of IM-NSGAII is similar to that of NSGAII, SNSGAII, TNSGAII, and DRLOSEMCMO on 7, 1, 5, and 2 benchmark problems, respectively.

Table 2 presents the mean and standard deviation of the HV values for the five MOEAs over 30 independent runs. The results show that the probability of IM-NSGAII achieving optimal performance reaches 82%, 75%, 50%, and 79% when compared with NSGAII, SNSGAII, TNSGAII, and DRLOSEMCMO, respectively, across the 28 tested problems. Higher HV values indicate better performance.

Fig 2 illustrates the performance of IM-NSGAII in reaching the optimal solution within the maximum number of iterations. It clearly demonstrates that IM-NSGAII effectively balances convergence speed and population diversity compared to NSGAII.

Fig 2. The final solution obtained by IM-NSGAII on the benchmark function.

https://doi.org/10.1371/journal.pone.0341439.g002

Fig 3 shows the trend of IGD values for the five algorithms as the number of iterations increases. IM-NSGAII achieves the lowest IGD values on ZDT1, ZDT3, ZDT4, DTLZ1, DTLZ2, DTLZ4, DTLZ5, and IDTLZ1. Except for TNSGAII, our proposed algorithm converges the fastest; however, the final IGD values reached by TNSGAII are not as low, indicating that its population diversity is inferior to that of IM-NSGAII.

thumbnail
Fig 3. The variation of IGD values with the number of function evaluations on different benchmark functions.

https://doi.org/10.1371/journal.pone.0341439.g003

3.3. Ablation study

To comprehensively verify the effectiveness of the three key strategies proposed in the algorithm, namely the individual elimination mechanism, the elite-population-based adaptive strategy, and the DE-based local search operator, an ablation study was conducted. In particular, we selected two representative benchmark problems with discontinuous and complex Pareto front (PF) characteristics, i.e., ZDT3 and DTLZ7.

Specifically, ZDT3 was employed to evaluate the effectiveness of the DE-based local search operator, while DTLZ7 was used to examine the influence of the individual elimination and elite-population-based adaptive mechanisms on the overall performance of the algorithm. The maximum number of function evaluations for ZDT3 was set to 10,000 with a population size of 100, whereas for DTLZ7, the maximum number of evaluations was set to 42,000 with a population size of 210. During the experiments, a controlled variable approach was adopted to ensure fair comparisons, in which only one strategy was modified at a time while keeping all other parameters identical to the baseline configuration. This setup allows us to clearly isolate and analyze the contribution of each strategy to the algorithm’s convergence ability and population diversity.

As shown in Table 3, the proposed IM-NSGAII (Exp1) achieves the best IGD values on both benchmark problems, demonstrating superior convergence and diversity performance compared to its variants. Specifically, in the ZDT3 problem, IM-NSGAII attains the smallest IGD value of 7.03 × 10−3, significantly outperforming the other configurations. When the DE-based local search operator is replaced by the standard GA operator (Exp2), the IGD value increases to 1.04 × 10−2, indicating that the DE strategy provides stronger local exploitation ability and facilitates faster convergence toward the Pareto front.

Table 3. IGD Values Under Different Strategies.

https://doi.org/10.1371/journal.pone.0341439.t003

Furthermore, when the elite-population-based adaptive mechanism is removed (Exp3), the IGD further deteriorates to 1.48 × 10−2, suggesting that the adaptive adjustment of search parameters based on elite individuals effectively enhances the balance between exploration and exploitation. The worst performance is observed in Exp4, where both the crowding-based elimination and elite-adaptive mechanisms are removed, resulting in a substantial degradation of IGD to 2.57 × 10−1.

A similar trend can be observed in the DTLZ7 problem. The proposed IM-NSGAII again achieves the smallest IGD value (4.05 × 10−2), confirming its robustness and adaptability across different problem landscapes. Overall, these results clearly demonstrate that each designed strategy—particularly the DE-based local search operator and the elite-population-based adaptive mechanism—plays a crucial role in improving the convergence accuracy and maintaining population diversity of the IM-NSGAII algorithm.

3.4. Comparison with MODE

To further clarify the role of the Differential Evolution (DE) operator as a local search mechanism, we constructed a comparative variant of the classical NSGAII algorithm by replacing its original Genetic Algorithm (GA) operators (i.e., crossover and mutation) with DE operators, referred to as MODE. MODE was tested on the aforementioned 28 benchmark problems, with the Inverted Generational Distance (IGD) adopted as the primary performance metric. The experimental results, summarized in Table 1, clearly illustrate the search efficiency of MODE.

Across all 28 test problems, IM-NSGAII outperforms MODE on nearly every benchmark, achieving up to two to four orders of magnitude improvement in IGD. The algorithm shows particular strength on the DTLZ and MaF problems, indicating its ability to adapt to complex Pareto front structures and maintain the balance between convergence and diversity. These results confirm that the proposed improvement mechanisms, such as adaptive mutation, elite learning, and the improved crowding distance, enable IM-NSGAII to achieve both higher convergence precision and stronger robustness compared with the traditional MODE.

3.5. Engineering optimization case

To evaluate the potential extensibility of the proposed IM-NSGAII algorithm, it is applied to two complex multi-objective optimization problems with practical engineering backgrounds. The first problem is an unconstrained multi-objective optimization of a four-bar planar truss, aiming to simultaneously minimize the mass and flexibility of the structure. The second problem is a constrained multi-objective optimization of a two-bar planar truss, where structural performance metrics are optimized under safety constraints.

For both problems, the algorithm is configured with a population size of 100 and a maximum of 50,000 function evaluations. To comprehensively assess the algorithm’s performance, the Hypervolume metric is adopted as the evaluation criterion. The reference points are set as (3.0485281 × 103, 4.0000000 × 10−2) and (1.8704862 × 102, 6.7710178 × 10−5). The experimental results indicate that IM-NSGAII can effectively maintain solution diversity and approximate the Pareto front when solving complex engineering problems. Furthermore, compared with the benchmark algorithms used above, IM-NSGAII demonstrates significant advantages in both optimization performance and convergence, thereby further validating its potential for broader applications in engineering optimization. The mathematical formulations of the two problems are as follows.

3.5.1. Four-bar planar truss.

$$
\begin{aligned}
\min\; f_1(\mathbf{x}) &= L\left(2x_1 + \sqrt{2}\,x_2 + \sqrt{x_3} + x_4\right),\\
\min\; f_2(\mathbf{x}) &= \frac{FL}{E}\left(\frac{2}{x_1} + \frac{2\sqrt{2}}{x_2} - \frac{2\sqrt{2}}{x_3} + \frac{2}{x_4}\right),
\end{aligned}
\tag{10}
$$

where $F = 10\,\mathrm{kN}$ is the applied load, $E = 2\times 10^{5}\,\mathrm{kN/cm^{2}}$ is the elastic modulus, $L = 200\,\mathrm{cm}$ is the bar length, $\sigma = 10\,\mathrm{kN/cm^{2}}$ is the allowable stress, and the cross-sectional areas satisfy $x_1, x_4 \in [F/\sigma,\, 3F/\sigma]$ and $x_2, x_3 \in [\sqrt{2}F/\sigma,\, 3F/\sigma]$.
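As a concrete sketch of this problem, the following evaluates both objectives of the four-bar planar truss, assuming the standard formulation and constants (F = 10 kN, E = 2 × 10⁵ kN/cm², L = 200 cm, σ = 10 kN/cm²) widely used for this benchmark in the literature:

```python
import numpy as np

# Assumed constants of the standard four-bar truss benchmark: load F (kN),
# elastic modulus E (kN/cm^2), bar length L (cm), allowable stress (kN/cm^2)
F, E, L, SIGMA = 10.0, 2e5, 200.0, 10.0

def four_bar_truss(x):
    """Bi-objective evaluation: f1 is the structural volume (a proxy for
    mass) and f2 the joint displacement (a flexibility measure)."""
    x1, x2, x3, x4 = x
    f1 = L * (2 * x1 + np.sqrt(2) * x2 + np.sqrt(x3) + x4)
    f2 = (F * L / E) * (2 / x1 + 2 * np.sqrt(2) / x2
                        - 2 * np.sqrt(2) / x3 + 2 / x4)
    return f1, f2

# Decision-variable bounds of the assumed formulation
lower = [F / SIGMA, np.sqrt(2) * F / SIGMA, np.sqrt(2) * F / SIGMA, F / SIGMA]
upper = [3 * F / SIGMA] * 4
print(four_bar_truss(lower))  # evaluate at the lower bounds
print(four_bar_truss(upper))  # evaluate at the upper bounds
```

Because the two objectives pull the cross-sectional areas in opposite directions, the problem has a continuous Pareto front trading structural mass against flexibility.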

3.5.2. Two-bar planar truss.

(11)

where .

As shown in Fig 4, the proposed IM-NSGAII algorithm achieves good optimization performance on both the four-bar and two-bar planar truss problems. For the four-bar planar truss problem, although the performance metrics of IM-NSGAII are slightly lower than those of TNSGAII overall, the results of the two algorithms become nearly identical as the runs approach termination. Compared with the other three benchmark algorithms, IM-NSGAII and TNSGAII exhibit clear advantages in both stability and the Hypervolume (HV) metric. The fluctuations observed in the performance metrics of IM-NSGAII are mainly attributed to the differential operator and the individual elitism strategy: as a heuristic search mechanism, the differential operator continually exploits the information in current solutions to explore potentially optimal ones, which explains the overall upward trend of the metrics in the figure.
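The differential operator referred to above belongs to the differential-evolution family. A minimal sketch of a generic DE/rand/1/bin step, given as an illustration of this operator class rather than the paper's exact variant (the function name and control parameters F and CR are assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

def de_rand_1_bin(pop, F=0.5, CR=0.9):
    """Generic DE/rand/1/bin step: each target vector is recombined with a
    mutant built from three distinct random individuals. Illustrative of
    the operator class only, not the paper's exact variant."""
    n, d = pop.shape
    offspring = pop.copy()
    for i in range(n):
        r1, r2, r3 = rng.choice([j for j in range(n) if j != i], 3, replace=False)
        mutant = pop[r1] + F * (pop[r2] - pop[r3])  # differential mutation
        cross = rng.random(d) < CR                  # binomial crossover mask
        cross[rng.integers(d)] = True               # keep at least one mutant gene
        offspring[i, cross] = mutant[cross]
    return offspring

pop = rng.random((10, 5))      # toy parent population in [0, 1]^5
children = de_rand_1_bin(pop)
print(children.shape)          # (10, 5)
```

Because each mutant is a scaled difference of existing solutions, the operator injects search-direction information drawn from the current population, which is consistent with the exploitation behaviour described above.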

Fig 4. Optimization results of IM-NSGAII in engineering optimization examples.

https://doi.org/10.1371/journal.pone.0341439.g004

For the two-bar planar truss problem, IM-NSGAII achieves the best results. Although the overall improvement in the performance metrics is relatively modest, IM-NSGAII demonstrates superior stability compared with the benchmark algorithms: its HV values increase almost monotonically, indicating that the algorithm maintains both convergence and solution diversity effectively. These results further demonstrate that IM-NSGAII possesses strong adaptability and robustness across different types of engineering optimization problems.

4. Conclusions

In this paper, we propose IM-NSGAII, an improved multi-objective evolutionary algorithm based on NSGAII that better balances convergence speed and population diversity. First, IM-NSGAII addresses the uneven population distribution of the classical NSGAII through a population individual removal strategy, which increases the selection pressure on individuals so that the best solutions are retained as parents in each iteration. Second, the concept and conditions of the elite population are customised, and the GA operator is used to generate offspring populations that exploit potentially unexplored regions during the iteration process. Finally, the DE individual generation method is introduced to replace the simulated binary crossover and polynomial mutation of the classical NSGAII, further improving the convergence speed of the algorithm. The experimental results show that IM-NSGAII effectively improves both convergence speed and population diversity, and is a successful improvement of NSGAII.

Supporting information

S1 Dataset. Figures data (Excel file) used in the analyses.

https://doi.org/10.1371/journal.pone.0341439.s001

(RAR)

References

  1. Wang S, Wang H, Wei Z, Wang F, Zhu Q, Zhao J, et al. A Pareto dominance relation based on reference vectors for evolutionary many-objective optimization. Applied Soft Computing. 2024;157:111505.
  2. Zhang W, Liu J, Liu J, Liu Y, Tan S. A dual distance dominance based evolutionary algorithm with selection-replacement operator for many-objective optimization. Expert Systems with Applications. 2024;237:121244.
  3. Cui Z, Qu C, Zhang Z, Jin Y, Cai J, Zhang W, et al. An adaptive interval many-objective evolutionary algorithm with information entropy dominance. Swarm and Evolutionary Computation. 2024;91:101749.
  4. Zou J, Deng Q, Liu Y, Yang X, Yang S, Zheng J. A Dynamic-Niching-Based Pareto Domination for Multimodal Multiobjective Optimization. IEEE Trans Evol Computat. 2024;28(5):1529–43.
  5. Zhang L, Zhang H, Liu S, Wang C, Zhao H. A Community Division-Based Evolutionary Algorithm for Large-Scale Multi-Objective Recommendations. IEEE Trans Emerg Top Comput Intell. 2023;7(5):1470–83.
  6. Wang Z, Lin K, Li G, Gao W. Multiobjective Optimization Problem With Hardly Dominated Boundaries: Benchmark, Analysis, and Indicator-Based Algorithm. IEEE Trans Evol Computat. 2025;29(4):1070–84.
  7. Li C, Deng L, Qiao L, Zhang L. An Indicator-Based Many-Objective Evolutionary Algorithm With Adaptive Reference Points Assisted by Growing Neural Gas Network. IEEE Trans Emerg Top Comput Intell. 2024:1–16.
  8. Li Y, Li W, Li S, Zhao Y. A performance indicator-based evolutionary algorithm for expensive high-dimensional multi-/many-objective optimization. Information Sciences. 2024;678:121045.
  9. Wen C, Ma H. An indicator-based evolutionary algorithm with adaptive archive update cycle for multi-objective multi-robot task allocation. Neurocomputing. 2024;593:127836.
  10. Yuan J, Liu H-L, Yang S. An adaptive parental guidance strategy and its derived indicator-based evolutionary algorithm for multi- and many-objective optimization. Swarm and Evolutionary Computation. 2024;84:101449.
  11. He L, Shang K, Nan Y, Ishibuchi H, Srinivasan D. Relation Between Objective Space Normalization and Weight Vector Scaling in Decomposition-Based Multiobjective Evolutionary Algorithms. IEEE Trans Evol Computat. 2023;27(5):1177–91.
  12. Yang S, Huang H, Luo F, Xu Y, Hao Z. Local-Diversity Evaluation Assignment Strategy for Decomposition-Based Multiobjective Evolutionary Algorithm. IEEE Trans Syst Man Cybern, Syst. 2023;53(3):1697–709.
  13. Zhao Q, Guo Y, Yao X, Gong D. Decomposition-Based Multiobjective Optimization Algorithms With Adaptively Adjusting Weight Vectors and Neighborhoods. IEEE Trans Evol Computat. 2023;27(5):1485–97.
  14. Wang J, Mei S, Liu C, Peng H, Wu Z. A decomposition-based multi-objective evolutionary algorithm using infinitesimal method. Applied Soft Computing. 2024;167:112272.
  15. Yi X, Yu H, Xu T. Solving multi-objective weapon-target assignment considering reliability by improved MOEA/D-AM2M. Neurocomputing. 2024;563:126906.
  16. Salih A, Eisenstadt Matalon E. Solving multi-objective robust optimization problems via Stackelberg-based game model. Swarm and Evolutionary Computation. 2024;91:101734.
  17. Tang J, Wang H, Xiong L. Surrogate-assisted multi-objective optimization via knee-oriented Pareto front estimation. Swarm and Evolutionary Computation. 2023;77:101252.
  18. Liu Z-Z, Qin Y, Song W, Zhang J, Li K. Multiobjective-Based Constraint-Handling Technique for Evolutionary Constrained Multiobjective Optimization: A New Perspective. IEEE Trans Evol Computat. 2023;27(5):1370–84.
  19. Mohammad Zadeh P, Mohagheghi M. Enhanced decomposition-based hybrid evolutionary and gradient-based algorithm for many-objective optimization. Appl Intell. 2023;53(24):30497–522.
  20. Flor-Sánchez CO, Reséndiz-Flores EO, García-Calvillo ID. Kernel-based hybrid multi-objective optimization algorithm (KHMO). Information Sciences. 2023;624:416–34.
  21. Wang Y, Gao W, Gong M, Li H, Xie J. A new two-stage based evolutionary algorithm for solving multi-objective optimization problems. Information Sciences. 2022;611:649–59.
  22. Cao R, Li X, Chen W, Wang C, Si L, Pei X, et al. An adaptive conjugate gradient accelerated evolutionary algorithm for multi-objective spot optimization in cancer intensity modulated proton therapy. Applied Soft Computing. 2024;151:111177.
  23. Xu Y, Xu C, Zhang H, Huang L, Liu Y, Nojima Y, et al. A Multi-Population Multi-Objective Evolutionary Algorithm Based on the Contribution of Decision Variables to Objectives for Large-Scale Multi/Many-Objective Optimization. IEEE Trans Cybern. 2023;53(11):6998–7007. pmid:35737628
  24. Gu Y-R, Bian C, Li M, Qian C. Subset Selection for Evolutionary Multiobjective Optimization. IEEE Trans Evol Computat. 2024;28(2):403–17.
  25. Wang J, Zheng Y, Zhang Z, Peng H, Wang H. A novel multi-state reinforcement learning-based multi-objective evolutionary algorithm. Information Sciences. 2025;688:121397.
  26. Li G, Zhang W, Yue C, Yen GG. Clustering-based evolutionary algorithm for constrained multimodal multi-objective optimization. Swarm and Evolutionary Computation. 2024;91:101714.
  27. Das SK, Yu VF, Roy SK, Weber GW. Location–allocation problem for green efficient two-stage vehicle-based logistics system: A type-2 neutrosophic multi-objective modeling approach. Expert Systems with Applications. 2024;238:122174.
  28. Lei Z, Gao S, Zhang Z, Yang H, Li H. A Chaotic Local Search-Based Particle Swarm Optimizer for Large-Scale Complex Wind Farm Layout Optimization. IEEE/CAA J Autom Sinica. 2023;10(5):1168–80.
  29. Zhao L, Fan J, Zhang C, Shen W, Zhuang J. A DRL-Based Reactive Scheduling Policy for Flexible Job Shops With Random Job Arrivals. IEEE Trans Automat Sci Eng. 2024;21(3):2912–23.
  30. Yang D, Fan Q. Gradient-based hybrid method for multi-objective optimization problems. Expert Systems with Applications. 2025;272:126675.
  31. Wang J, Liu Z, Li F. Integrated production and transportation scheduling problem under nonlinear cost structures. European Journal of Operational Research. 2024;313(3):883–904.
  32. Tian Z, Zhang Z, Ning C, Peng T, Guo Y, Cao Z. Multi-objective optimization of cable force of arch bridge constructed by cable-stayed cantilever cast-in-situ method based on improved NSGA-II. Structures. 2024;59:105782.
  33. Duan P, Yu Z, Gao K, Meng L, Han Y, Ye F. Solving the multi-objective path planning problem for mobile robot using an improved NSGA-II algorithm. Swarm and Evolutionary Computation. 2024;87:101576.
  34. Jiang J, Tong P, Wang H, Hong J, Liu Z, Su B, et al. Dynamic Multivariation Multifactorial Evolutionary Algorithm for Large-Scale Multiobjective Optimization. IEEE Trans Emerg Top Comput Intell. 2026;10(1):689–704.
  35. Huang Y, Wu S, Zhang W, Wu J, Feng L, Tan KC. Autonomous Multiobjective Optimization Using Large Language Model. IEEE Trans Evol Computat. 2026;30(2):594–608.
  36. Wang F, Sun J, Gan X, Gong D, Wang G, Guo Y. A Dynamic Interval Multiobjective Evolutionary Algorithm Based on Multitask Learning and Inverse Mapping. IEEE Trans Evol Computat. 2025;29(5):1619–33.
  37. Deb K, Pratap A, Agarwal S, Meyarivan T. A fast and elitist multiobjective genetic algorithm: NSGA-II. IEEE Trans Evol Computat. 2002;6(2):182–97.
  38. Kropp I, Nejadhashemi AP, Deb K. Improved Evolutionary Operators for Sparse Large-Scale Multiobjective Optimization Problems. IEEE Trans Evol Computat. 2024;28(2):460–73.
  39. Ming F, Gong W, Wang L. A Two-Stage Evolutionary Algorithm With Balanced Convergence and Diversity for Many-Objective Optimization. IEEE Trans Syst Man Cybern, Syst. 2022;52(10):6222–34.
  40. Ming F, Gong W, Wang L, Jin Y. Constrained Multi-Objective Optimization With Deep Reinforcement Learning Assisted Operator Selection. IEEE/CAA J Autom Sinica. 2024;11(4):919–31.
  41. Zitzler E, Deb K, Thiele L. Comparison of multiobjective evolutionary algorithms: empirical results. Evol Comput. 2000;8(2):173–95. pmid:10843520
  42. Deb K, Thiele L, Laumanns M, Zitzler E. Scalable test problems for evolutionary multiobjective optimization. In: Multiobjective optimization. London: Springer London; 2005. https://doi.org/10.1007/1-84628-137-7_6
  43. Cheng R, Li M, Tian Y, Zhang X, Yang S, Jin Y, et al. A benchmark test suite for evolutionary many-objective optimization. Complex Intell Syst. 2017;3(1):67–81.
  44. Huband S, Hingston P, Barone L, While L. A review of multiobjective test problems and a scalable test problem toolkit. IEEE Trans Evol Computat. 2006;10(5):477–506.
  45. Sun R, Zou J, Liu Y, Yang S, Zheng J. A Multistage Algorithm for Solving Multiobjective Optimization Problems With Multiconstraints. IEEE Trans Evol Computat. 2023;27(5):1207–19.
  46. Qiao K, Liang J, Yu K, Guo W, Yue C, Qu B, et al. Benchmark problems for large-scale constrained multi-objective optimization with baseline results. Swarm and Evolutionary Computation. 2024;86:101504.
  47. Tian Y, Cheng R, Zhang X, Jin Y. PlatEMO: A MATLAB Platform for Evolutionary Multi-Objective Optimization [Educational Forum]. IEEE Comput Intell Mag. 2017;12(4):73–87.