
Solving dynamic multi-objective problems with a new prediction-based optimization algorithm

Abstract

This paper proposes a new dynamic multi-objective optimization algorithm that integrates a new fitting-based prediction (FBP) mechanism with the regularity model-based multi-objective estimation of distribution algorithm (RM-MEDA) for multi-objective optimization in changing environments. The prediction-based reaction mechanism aims to generate a high-quality population when changes occur and comprises three subpopulations for tracking the moving Pareto-optimal set effectively. The first subpopulation is created by a simple linear prediction model with two different stepsizes. The second subpopulation consists of new sampled individuals generated by the fitting-based prediction strategy. The third subpopulation is created by a recently proposed sampling strategy, which generates effective search individuals for improving population convergence and diversity. Experimental results on a set of benchmark functions with a variety of dynamic characteristics and difficulties illustrate that the proposed algorithm is competitive with some state-of-the-art algorithms.

1 Introduction

The process of simultaneously optimizing multiple mutually conflicting objectives to obtain a set of trade-off solutions gives rise to multi-objective optimization problems (MOPs) [1], which arise in different fields, including controller design [2], weapon selection [3] and machine learning [4]. Various multi-objective optimization algorithms have been proposed for solving MOPs successfully. A minimization multi-objective optimization problem can be stated as follows:

(1) minimize F(x) = (f1(x), f2(x), …, fm(x)), subject to x ∈ Ω,

where Ω = [L1, U1] × … × [LD, UD] is the feasible region of the decision space and F consists of m objective functions. x = (x1, x2, …, xD) is the decision vector with D variables, and Li and Ui represent the lower and upper bounds of the ith variable xi, respectively. For two given decision vectors x and y, if ∀j ∈ [1, m], fj(x) ≤ fj(y) and ∃l ∈ [1, m], fl(x) < fl(y), then x dominates y, denoted x ≺ y. A vector x* that is not dominated by any other feasible solution is defined as a Pareto-optimal solution.

However, in recent years an increasing number of multi-objective optimization problems have been recognised in various fields, such as scheduling [5, 6], planning [7, 8], resource allocation [9, 10], constrained optimization [11], and machine learning [12], that need to be solved in dynamic or uncertain environments; these are named dynamic multi-objective optimization problems (DMOPs). The main characteristic of this kind of problem is that the constraints, the Pareto-optimal set (POS) or Pareto-optimal front (POF), and the relevant control parameters can change over time, which brings great challenges to optimization algorithms. Exploring efficient optimization algorithms that obtain high-quality optimal solution sets has therefore attracted growing attention. Although there exist different classes of dynamic optimization problems, following [1], this paper considers the following mathematical model of DMOPs:

(2) minimize F(x, t) = (f1(x, t), f2(x, t), …, fm(x, t)), subject to x ∈ Ω,

where t is the time instant of the problem.

Compared with MOPs, dynamic multi-objective optimization problems have two important features: multi-objectivity and dynamism. Multi-objectivity usually involves multiple conflicting objectives, which means the optimal solution of the problem is no longer a single optimal value but an optimal solution set containing trade-off solutions. Dynamism in the constraints and/or parameters causes the POF or POS to change and poses big difficulties to evolutionary algorithms. DMOPs are challenging due to this dynamic nature. They can be divided into a sequence of MOPs over the course of time; that is, the optimization goal is to obtain a sequence of approximations to the moving POS/POF.

2 Related work

In recent years, much effort has been devoted to designing efficient and effective dynamic multi-objective evolutionary algorithms (DMOEAs). A widely used framework of DMOEAs in the literature is described in Algorithm 1; an illustrative code sketch follows the listing. As shown in this framework, the whole procedure of solving DMOPs contains two main components: change detection and optimization, where a static MOEA evolves the population between changes and a dynamic reaction mechanism responds when a change is detected.

Algorithm 1 The basic framework of DMOEA

1: Initialize time instance t = 1;

2: Generate an initial population Popt;

3: While the termination criterion is not satisfied

4: Change Detection

5:  If change is not detected, evolve population using MOEA;

6:  Otherwise, evolve population using DMOEA;

7: Return to step 3.
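
To make the loop in Algorithm 1 concrete, the following minimal Python sketch shows one possible organization of the framework; the helpers change_detected, moea_step and dmoea_react are hypothetical placeholders for the detection mechanism, one generation of a static MOEA and the dynamic reaction mechanism, respectively, rather than the authors' implementation.

```python
# Minimal sketch of the DMOEA framework in Algorithm 1 (helper names are illustrative).
def run_dmoea(init_pop, change_detected, moea_step, dmoea_react, max_gen):
    t = 1                               # step 1: time instance
    pop = list(init_pop)                # step 2: initial population
    for _ in range(max_gen):            # step 3: termination criterion
        if change_detected(pop, t):     # step 4: change detection
            t += 1
            pop = dmoea_react(pop, t)   # step 6: respond to the change with a DMOEA
        else:
            pop = moea_step(pop, t)     # step 5: one static MOEA generation
    return pop
```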

2.1 Change detection

As a significant component of DMOEAs, change detection is responsible for determining whether the environment has changed and, in turn, whether to trigger a reaction mechanism. Existing change detection methods fall into two categories: re-evaluating solutions [13–15] and checking population statistical information [16]. The former is more widely used in many algorithms because it is simple and easy to implement, but it is sensitive to noise. In contrast, the latter is robust to noise but needs some additional parameters. Each method has its advantages and limitations for different DMOPs.
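
As an illustration of the re-evaluation approach, the sketch below re-evaluates a small set of stored sentinel solutions and reports a change when any objective value differs from its cached value; the function names and the tolerance eps are illustrative assumptions, not part of the original algorithms.

```python
def detect_change(sentinels, cached_objs, objectives, t, eps=1e-8):
    """Re-evaluation-based change detection: compare fresh evaluations of a few
    stored solutions against their cached objective vectors."""
    for x, old_f in zip(sentinels, cached_objs):
        new_f = objectives(x, t)        # re-evaluate the solution at the current time
        if any(abs(a - b) > eps for a, b in zip(new_f, old_f)):
            return True                 # at least one objective value changed
    return False
```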

2.2 Multi-objective optimization algorithms

Apart from the dynamic reaction mechanism, MOEAs are significant components of solvers for DMOPs, since DMOPs can be regarded as a sequence of MOPs. That is, any MOEA can be directly used to evolve the population during the (short) static period between two changes.

As one of the most attractive and popular areas in the intelligent computing field, existing multi-objective optimization algorithms can be classified into three categories. The first class is Pareto ranking-based algorithms, which are designed based on the dominance relationships among population individuals. Representative algorithms include the non-dominated sorting genetic algorithm II (NSGA-II) [17] and the strength Pareto evolutionary algorithm (SPEA2) [18]. Besides that, some classic and recently proposed swarm intelligence algorithms inspired by different natural behaviours have also been used to solve MOPs, such as multi-objective particle swarm optimization (MOPSO) [19], the multi-objective grasshopper optimisation algorithm (MOGOA) [20], the multi-objective multi-verse optimizer (MOMVO) [21], the multi-objective ant lion optimizer (MOALO) [22], and the multi-objective salp swarm algorithm (MSSA) [23]. Although non-dominated ranking can screen out excellent individuals well, it also produces marginal individuals, which have a negative effect on the whole optimization process. These algorithms can obtain good locally optimal solutions, but it is difficult for them to achieve ideal globally optimal solutions.

The second class is indicator-based algorithms, which are designed based on performance indicators. The hypervolume [24], the epsilon indicator and the R2 indicator are the most frequently used to construct such algorithms, for example the indicator-based EA (IBEA) [25], the S-metric selection EMO algorithm [26], the R2 EMO algorithm (R2EMOA) [27], and approximation-guided EMO (AGE) [28]. The last class is decomposition-based algorithms, which decompose the MOP into a set of optimization sub-problems and solve them simultaneously. The most widely used algorithms are NSGA-III [29, 30] and the MOEA based on decomposition (MOEA/D) [31, 32]. Although this kind of algorithm is efficient, the division into sub-problems depends heavily on the weight vectors.

2.3 Dynamic multi-objective optimization algorithms

Depending on the frequency or severity of change, changes may present various challenges, such as limited computational time or resources to react to the change, and time-varying feasible regions and constraint conditions. Therefore, effective and efficient dynamic multi-objective optimization algorithms are indeed important. Diversity and convergence are two important aspects in designing high-quality optimization methods, since the former prevents the search from being trapped in local optima whereas the latter helps algorithms find promising solutions rapidly. Designing an effective strategy that balances diversity and convergence is one of the key topics in DMOPs. Existing dynamic multi-objective optimization algorithms can be divided into four categories: diversity-based algorithms, memory-based algorithms, multi-population-based algorithms, and prediction-based algorithms.

The main purpose of diversity-based algorithms is to maintain the diversity of the search population to avoid local optima when a change is detected. Recently, an increasing number of diversity maintenance methods have been proposed. A general framework proposed by Li [33] maintains diversity by utilizing hierarchical linkage clustering, which is able to generate subpopulations with good diversity while avoiding overlapping. The query-based strategy proposed by Chang et al. [34] increases population diversity by providing guidance to particles. Immigration-based strategies aim to prevent local optima and achieve better search ability, such as hybrid immigration [35], memory-based immigration [36] and elitism-based immigration [37]. Besides that, hyper-mutation has been combined with the non-dominated sorting genetic algorithm II (NSGA-II) [38] to create two different dynamic versions for DMOPs.

The main idea of memory-based algorithms is to record historical information, which can be reused to accelerate the convergence of algorithms whenever a change occurs. Branke [39] stored the best individuals from previous environments in an archive and used them to replace some members of the existing population. Goh [40] proposed a strategy that employs a new population to replace out-of-date archived members, integrating competitive and cooperative mechanisms for DMOPs. In [41], memory, local search and random techniques are integrated, and an adaptive hybrid population management strategy is proposed. Jiang and Yang [42] used a steady-state manner to respond to changes. These kinds of algorithms perform well on problems with periodically changing characteristics.

The main idea of multi-population-based algorithms is that multiple subpopulations can be advantageous for maintaining diversity. In [43], a self-organizing scouts method is proposed that divides the search population into two subpopulations, which are used to search in feasible regions. Li [44] combined an island model with particle swarm optimization for dealing with dynamic vehicle routing problems. Yang [45] employed hierarchical clustering to divide the population into several subpopulations of different sizes for effective diversity maintenance [46].

Prediction-based algorithms aim to predict the possible POF/POS locations in the new environment based on solutions from previous environments. These algorithms are popular in DMOEAs, since prediction-based mechanisms can help track the moving POS/POF if solutions in new environments are well predicted. Muruganantham [47] proposed a DMOEA that combines a Kalman filter with evolutionary methods for solving DMOPs. The multimodal prediction approach proposed by Rong [48] generates an effective initial population for the subsequent evolution. The population prediction strategy (PPS) [49] proposed by Zhou et al. predicts the manifold of the whole search population using a univariate autoregression (AR) model. Besides that, many other prediction approaches have been proposed in different ways, based for example on multiple directions [50], knee points [51], center points [52], and boundary points [53].

Many DMOEAs have been proposed and show promising performance in various applications. However, they neglect the properties of decision variables, which are an important source of information for discovering high-quality search individuals. Meanwhile, according to [54, 55], curve fitting is a classic and popular technique that can reflect the distribution relationship between variables to a certain extent and predict possible regions or directions. Motivated by this, this paper proposes a novel method for predicting a high-quality population based on the distribution and classification characteristics of variables after a change is detected. The proposed algorithm contains three different parts. First, a simple linear prediction strategy with two different stepsizes is designed to predict non-dominated solutions based on the information of previous environments. The second strategy integrates a fitting-based model to generate new members and improve the quality of the population based on the probability distribution of variables. The last strategy aims to generate well-distributed individuals based on the classification features of decision variables. Numerical results on 14 benchmark functions show that the proposed algorithm performs well in tracking time-varying POFs or POSs.

The remainder of this paper is organized as follows. Section 2 presents the related work. The proposed algorithm is described in Section 3. In Section 4, the performance of the proposed technique is validated and analyzed on a comprehensive set of benchmark functions. Section 5 gives a further discussion of the proposed algorithm. Section 6 concludes the paper.

3 Proposed DMOEA

This section provides the main content of the proposed algorithm in detail. Like other prediction-based algorithms, our hypothesis is that there is some similarity between two consecutive changes. As shown in the basic framework of the proposed algorithm listed in Algorithm 2, the main idea is to combine RM-MEDA with a new prediction-based dynamic reaction mechanism, which has three different strategies for predicting a new high-quality search population that tracks the new POS/POF efficiently and effectively; a sketch of the merge-and-select step (steps 8–9) follows the listing.

Algorithm 2 The overall framework of the proposed algorithm

1: Initialize parameter settings.

2: Initialize and evaluate population (PopGen) and set Gen = 1.

3: While the stopping criterion is not satisfied.

4: If change detected, go to step 5; otherwise, go to step 10.

5: Generate the first subpopulation (SubPop1) using a linear prediction model.

6: Generate the second subpopulation (SubPop2) based on new fitting-based strategy.

7: Generate the third subpopulation (SubPop3) by the recently proposed sampling strategy [56].

8: Merge these subpopulations: MixPop = SubPop1 ∪ SubPop2 ∪ SubPop3.

9: Obtain a population of size Popsize by applying non-dominated sorting to the merged population.

10: Optimize population using RM-MEDA.

11: Gen = Gen + 1, return to step 3.
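
As indicated above, steps 8 and 9 merge the three subpopulations and keep the best Popsize individuals. A simplified sketch of this merge-and-select step is given below; it fills non-dominated fronts in order and omits any crowding-distance truncation within the last front, which is a simplification made for brevity.

```python
def dominates(fa, fb):
    """True if objective vector fa Pareto-dominates fb (minimization)."""
    return all(a <= b for a, b in zip(fa, fb)) and any(a < b for a, b in zip(fa, fb))

def merge_and_select(subpops, objs, pop_size):
    """Steps 8-9 (sketch): merge the subpopulations and keep pop_size individuals
    by filling non-dominated fronts in order; objs maps an individual to its
    objective vector."""
    mixed = [ind for sub in subpops for ind in sub]
    remaining = list(range(len(mixed)))
    selected = []
    while remaining and len(selected) < pop_size:
        front = [i for i in remaining
                 if not any(dominates(objs(mixed[j]), objs(mixed[i]))
                            for j in remaining if j != i)]
        selected.extend(front[:pop_size - len(selected)])
        remaining = [i for i in remaining if i not in front]
    return [mixed[i] for i in selected]
```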

3.1 Linear prediction model

This subsection employs a simple linear prediction model with two different stepsizes for predicting the non-dominated set. From a statistical point of view, the geometric center is an important characteristic that can represent the changing trend of the population to some extent. Here, we compute the moving direction of the center points of the last two consecutive populations and use it to predict the position of the non-dominated members of the current population in the new environment.

Suppose that pc_t is the centroid of the population Pop_t and POS_t is the non-dominated set of Pop_t at time t. Then pc_t can be calculated as

(3) pc_t = (1 / |Pop_t|) ∑_{x_t ∈ Pop_t} x_t,

where |Pop_t| is the population size and x_t denotes the decision vector of a solution at time t. Then, the moving direction dir_t of the center points at time t can be calculated by

(4) dir_t = pc_t − pc_{t−1}.

Then, the new position of the members of POS_t at time t + 1 can be obtained from dir_t and POS_t according to the following formula:

(5) x_{t+1} = x_t + step · dir_t, for each x_t ∈ POS_t,

where step refers to the moving stepsize along the direction dir_t. Here, two different values of step (i.e., 0.3 and 1.0) are used, representing a small and a large movement of POS_t, respectively. Fig 1 illustrates the prediction process.

As shown in Fig 1, pc_t and pc_{t−1} (black points) are utilized to obtain dir_t. POS_t is moved to different regions using the suggested step values. A combination of these two solution predictions is more likely to approximate the true POS at time t + 1. Algorithm 3 provides the implementation of this prediction strategy, and a code sketch follows the listing.

Two questions may arise here. The first concerns the motivation for using a two-step prediction strategy to produce good individuals. The widely used one-step strategy assumes that the change between two consecutive time instants is, to some extent, the same as the previous change. This proves effective in various algorithms, and we would also like to keep it in our algorithm. However, as suggested by [38], sometimes a small variation to the population can be very effective. This suggests that a stepsize smaller than the previous one would be helpful in creating population individuals for environments that do not change significantly. That is, a smaller moving step may ensure that the predicted solution is much closer to the new POS after a change. As a result, this work designs a two-step prediction strategy for DMOPs.

The second question is how to determine the stepsize parameters. The proposed strategy employs two stepsize values (0.3 and 1.0), which represent two different moving levels (small and normal). There are two reasons for this setting. First, step = 1 for the normal level is set according to fuzzy systems [57], which means that the change is similar to the previous change (normal changes). Second, the stepsize step = 0.3 for the small level should be smaller than that for the normal level. The stepsize setting is chosen not only for simplicity but also based on the sensitivity analysis detailed in Section 4.

Algorithm 3 Linear prediction model

1: Retrieve the populations Popt and Popt−1 at time t and t − 1, respectively;

2: Calculate the population centers according to Eq (3);

3: Predict the moving direction according to Eq (4);

4: Generate the predicted subpopulations using Eq (5) with the different step values;

5: Save the subpopulations to SubPop1.
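
A compact NumPy sketch of Algorithm 3 is given below, assuming the populations are stored as arrays of decision vectors; the two default step values follow the setting described above, and bound handling is omitted.

```python
import numpy as np

def linear_prediction(pos_t, pop_t, pop_t_1, steps=(0.3, 1.0)):
    """Two-stepsize linear prediction (sketch of Eqs (3)-(5) / Algorithm 3)."""
    pc_t = np.mean(np.asarray(pop_t), axis=0)      # Eq (3): centroid at time t
    pc_t_1 = np.mean(np.asarray(pop_t_1), axis=0)  # centroid at time t-1
    dir_t = pc_t - pc_t_1                          # Eq (4): moving direction
    pos_t = np.asarray(pos_t)
    # Eq (5): shift the non-dominated set with each step value and stack the results.
    return np.vstack([pos_t + step * dir_t for step in steps])
```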

3.2 Curve fitting-based strategy

This subsection proposes a curve fitting-based strategy for generating high-quality search individuals based on the distribution relationship between variables. As suggested in [56], the variables can be classified into two parts: principal and non-principal variables. We believe the correlation between the principal and non-principal variables can be exploited to speed up the search. For example, if a variable x2 is highly correlated with another variable x1, then we can generate values for x2 based on the values of x1. As shown in Fig 2, the curve fittings at times t − 1 and t, denoted CF_{t−1} and CF_t respectively, are computed by a polynomial fitting strategy on the corresponding non-dominated sets. Then, the relationship between variables in the new environment, CF_{t+1}, can be predicted from the last two consecutive fittings CF_{t−1} and CF_t:

(6) CF_{t+1} = CF_t + (CF_t − CF_{t−1}).

Then, the possible curve-fitting value of a non-principal variable x_p at time t + 1 can be calculated as

(7) x̂_p = CF_{t+1}(x_1),

where x_1 is a value of the principal variable.

In addition, individuals in the second subpopulation can be generated using the following formula:

(8) x_p = CF_{t+1}(x_1) + cr · ND_p,

where cr is the compression ratio, which ensures that the newly generated individuals lie close to the fitted curve, and ND_p refers to a value sampled from a normal distribution for the pth variable, so that the newly generated variables meet the characteristics of the curve fitting as much as possible.

The implementation of this strategy is shown in Algorithm 4, and a code sketch follows the listing. Specifically, the most principal variable is identified via the correlation matrix of the variables, and the other variables are regarded as non-principal variables. Then, for each non-principal variable, the corresponding values are predicted by the curve-fitting model using values of the principal variable sampled from a normal distribution. Afterwards, another subpopulation is created by concatenating all the variables.

Algorithm 4 Implementation details of Curve fitting-based strategy

1: Find the populations (Popt and Popt−1) at time t and t − 1, respectively.

2: Compute the correlation matrix for each non-principal variable xi at time t − 1.

3: Estimate the new curve fitting feature for each xi at time t + 1 according to Eq (7).

4: Create a subpopulation SubPop2 by sampling from the decision space.

5: Calculate the bounds of xi.
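
The sketch below is one possible reading of Algorithm 4. It assumes that Eqs (6) and (7) linearly extrapolate the polynomial coefficients between two consecutive environments, that the index of the principal variable is already known, and that the noise term of Eq (8) is scaled by the standard deviation of each variable; these choices, as well as the use of numpy.polyfit, are illustrative assumptions rather than the authors' exact implementation.

```python
import numpy as np

def curve_fitting_subpop(pos_t, pos_t_1, n_new, principal=0, dpf=2, cr=0.1,
                         bounds=None, rng=None):
    """Sketch of the curve fitting-based strategy (Algorithm 4)."""
    rng = np.random.default_rng() if rng is None else rng
    pos_t, pos_t_1 = np.asarray(pos_t, float), np.asarray(pos_t_1, float)
    D = pos_t.shape[1]
    # Sample values of the principal variable from a normal distribution fitted at time t.
    xp = rng.normal(pos_t[:, principal].mean(), pos_t[:, principal].std() + 1e-12, n_new)
    new_pop = np.empty((n_new, D))
    new_pop[:, principal] = xp
    for i in range(D):
        if i == principal:
            continue
        c_t = np.polyfit(pos_t[:, principal], pos_t[:, i], dpf)        # CF_t
        c_t_1 = np.polyfit(pos_t_1[:, principal], pos_t_1[:, i], dpf)  # CF_{t-1}
        c_next = c_t + (c_t - c_t_1)            # Eq (6): extrapolated coefficients
        noise = cr * rng.normal(0.0, pos_t[:, i].std() + 1e-12, n_new)  # Eq (8) noise term
        new_pop[:, i] = np.polyval(c_next, xp) + noise                  # Eqs (7)-(8)
    if bounds is not None:                      # step 5: keep variables within [Li, Ui]
        new_pop = np.clip(new_pop, bounds[0], bounds[1])
    return new_pop
```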

4 Experimental studies

This section evaluates the performance of the proposed algorithm through experimental studies. It includes details about benchmark functions, performance indicators, compared methods, parameter settings and numerical results.

4.1 Test instances

This work utilizes a set of recently proposed DF problems with various difficulties, such as variable linkage, disconnectivity, irregular POF shapes, and time-dependent geometries. All parameter settings are the same as suggested in the literature [58].

4.2 Performance indicators

This study employs three widely used performance indicators described as follows for evaluating the effectiveness of the proposed algorithm.

4.2.1 Mean Inverted generational distance (MIGD).

The first performance indicator is MIGD, which is utilized to evaluate both the convergence and the diversity of the solutions obtained by an algorithm [56, 59]. The IGD at time t is defined as

(9) IGD(POF*_t, POF_t) = ( ∑_{g ∈ POF*_t} d(g, POF_t) ) / |POF*_t|,

where POF*_t is a set of points uniformly sampled from the true POF at time t, POF_t is the POF approximation obtained by the algorithm, d(g, POF_t) is the minimum Euclidean distance between g and the points in POF_t, and |POF*_t| is the number of sampled points. Then, the MIGD can be computed as

(10) MIGD = (1 / |T|) ∑_{t ∈ T} IGD(POF*_t, POF_t),

where T is a set of time instances and |T| is the total number of changes in a run.
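
For concreteness, a small NumPy implementation of Eqs (9) and (10) might look as follows, where each POF is represented as an array of objective vectors.

```python
import numpy as np

def igd(true_pof, approx_pof):
    """Eq (9): mean distance from each sampled true-POF point to its
    nearest point in the obtained approximation."""
    true_pof, approx_pof = np.asarray(true_pof), np.asarray(approx_pof)
    dists = np.linalg.norm(true_pof[:, None, :] - approx_pof[None, :, :], axis=2)
    return dists.min(axis=1).mean()

def migd(true_pofs, approx_pofs):
    """Eq (10): mean IGD over all recorded time instances."""
    return np.mean([igd(p, q) for p, q in zip(true_pofs, approx_pofs)])
```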

4.2.2 Mean Schott’s Spacing Metric (MSP).

The second performance indicator is Schott's spacing metric, which measures the distribution of the obtained solutions:

(11) SP(POF_t) = sqrt( (1 / (|POF_t| − 1)) ∑_{i=1}^{|POF_t|} (D̄ − D_i)² ),

where D_i represents the Euclidean distance between the ith point in POF_t and its nearest point in POF_t, and D̄ is the average value of the D_i. The MSP can then be defined as

(12) MSP = (1 / |T|) ∑_{t ∈ T} SP(POF_t).
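
A corresponding sketch of Eqs (11) and (12), again assuming each approximation is an array of objective vectors, is shown below.

```python
import numpy as np

def spacing(approx_pof):
    """Eq (11): Schott's spacing metric over one obtained approximation."""
    pts = np.asarray(approx_pof, dtype=float)
    dists = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=2)
    np.fill_diagonal(dists, np.inf)
    d = dists.min(axis=1)              # distance of each point to its nearest neighbour
    return np.sqrt(np.sum((d.mean() - d) ** 2) / (len(pts) - 1))

def msp(approx_pofs):
    """Eq (12): mean spacing over all recorded time instances."""
    return np.mean([spacing(p) for p in approx_pofs])
```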

4.2.3 Hypervolume metric.

The third performance indicator is the hypervolume (HV) [48, 53], which is an important metric for evaluating solution sets. Different from the indicators mentioned above, HV needs a reference vector that is dominated by all points in the obtained POF approximation:

(13) HV(POF_t) = VOL( ⋃_{g ∈ POF_t} [g_1, z^ref_1] × … × [g_m, z^ref_m] ),

where VOL(·) refers to the hypervolume (Lebesgue measure) [52] of the region dominated by POF_t and bounded by the reference point. The reference point for the computation of the hypervolume is (z_j + 0.5), j = 1, …, m, where z_j is the maximum value of the jth objective of the true POF. The MHV can be calculated as

(14) MHV = (1 / |T|) ∑_{t ∈ T} HV(POF_t).
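
For the bi-objective case, the hypervolume of Eq (13) can be computed with a simple sweep, as sketched below; the three-objective problems in the test suite would require a general hypervolume algorithm (e.g. WFG), which is omitted here.

```python
import numpy as np

def hypervolume_2d(approx_pof, ref_point):
    """Eq (13) for two objectives (minimization): area dominated by the
    approximation and bounded above by the reference point."""
    ref = np.asarray(ref_point, dtype=float)
    pts = np.asarray(approx_pof, dtype=float)
    pts = pts[np.all(pts < ref, axis=1)]      # keep points that dominate the reference
    pts = pts[np.argsort(pts[:, 0])]          # sweep along the first objective
    hv, best_f2 = 0.0, ref[1]
    for f1, f2 in pts:
        if f2 < best_f2:                      # non-dominated point in the sweep
            hv += (ref[0] - f1) * (best_f2 - f2)
            best_f2 = f2
    return hv
```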

4.2.4 T-test.

To determine whether the results obtained by the proposed algorithm are essentially different from the results computed by the other algorithms, the t-test at a 0.05 significance level is employed to check the experimental results of all optimization methods [60]. A p-value less than 0.05 indicates that the performance of the two compared techniques is statistically different (h = 1); otherwise, there is no significant difference (h = 0). Meanwhile, the bottom of each table summarizes the comparison results: ‡, † and ≀ indicate that the performance of FBP is better than, worse than and similar to that of the corresponding algorithm, respectively.
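
As an illustration, the per-run indicator values of two algorithms (e.g. the 25 MIGD values of FBP and a peer) could be compared with SciPy's two-sample t-test as sketched below; the equal-variance default of ttest_ind is an assumption, since the paper does not state which variant was used.

```python
from scipy import stats

def t_test_comparison(fbp_runs, other_runs, alpha=0.05):
    """Two-sample t-test at the given significance level on per-run indicator values."""
    t_stat, p_value = stats.ttest_ind(fbp_runs, other_runs)
    h = 1 if p_value < alpha else 0    # h = 1: statistically different performance
    return t_stat, p_value, h
```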

4.3 Compared algorithms

In this section, several existing approaches are selected to compare with the proposed technique. A brief description of these algorithms and parameter settings is summarized as follows.

4.3.1 Population Prediction Strategy (PPS).

The main idea of PPS is to divide the POS/POF into two parts: the population center and the manifold. An autoregression (AR) model is adopted to predict the next population center based on a time series of historical population centers. Similarly, historical manifolds are used to predict the new manifold. Then, a new population is assembled from the predicted population center and manifold [49].

4.3.2 TrDMOEA.

TrDMOEA is an approach that integrates a transfer learning strategy with evolutionary algorithms to solve DMOPs. The main idea of this technique is to exploit the fact that the populations at different times follow different distributions in order to generate an effective search population. More details can be found in the literature [4].

4.3.3 MOE.

MOE is a mixture-of-experts-based computational framework with multiple prediction mechanisms for generating robust POS approximations and enhancing the overall prediction quality when dealing with DMOPs. Experimental results illustrate that MOE performs competitively with respect to other dynamic optimization algorithms. More details can be found in the literature [61].

4.3.4 MOEA/D-FD.

The first-order difference model-based MOEA/D algorithm (MOEA/D-FD) [62] utilizes historical information to predict the location of the new POS after a change is detected. The new population is composed of two kinds of solutions: old solutions and predicted ones. The movement of the population centroid defines the prediction direction. To make the new population diversified, evenly distributed individuals selected from the previous population are used in the prediction.

4.3.5 MOGOA.

The grasshopper optimisation algorithm (GOA) is modelled on the behaviour of grasshopper swarms in nature, and a multi-objective version, MOGOA, has been designed for solving different multi-objective optimization problems. To improve the distribution of solutions, an archive and a roulette wheel selection technique are integrated into the algorithm, and individuals in crowded regions of the archive tend to be deleted to avoid premature convergence. More details can be found in the literature [20].

4.3.6 MOMVO.

The multi-verse optimizer is inspired by the white hole, black hole and wormhole mechanisms, which correspond to three different search behaviours: exploration, exploitation and local search, respectively. A multi-objective version, MOMVO, has also been designed for solving different multi-objective optimization problems. In MOMVO, a leader selection strategy is used to choose better agents from the archive; in addition, all individuals are ranked based on the crowding distance to their neighbourhoods and selected using the roulette wheel strategy to maintain convergence and diversity. More details can be found in the literature [21].

4.3.7 MOALO.

The ALO algorithm, a population-based optimization technique, simulates the interaction and hunting behaviour of antlions in nature. It has been extended to a multi-objective version, the multi-objective ant lion optimizer (MOALO), in which non-dominance relationships and a roulette wheel strategy are utilized to generate promising solutions. In addition, a set of benchmark functions and some constrained engineering design problems have been used to verify the performance of MOALO. More details can be found in the literature [22].

4.3.8 MSSA.

The SSA algorithm is designed based on the swarming behaviour of salps when navigating and foraging in oceans. It has been extended to a multi-objective version, the multi-objective salp swarm algorithm (MSSA), in which the guiding solution is selected from a set of non-dominated solutions based on a ranking process and roulette wheel selection, and individuals with low rank tend to be deleted probabilistically to maintain the size of the archive. More details can be found in the literature [23].

4.4 Parameter settings

The parameters of the MOEAs considered in the experiments were taken from their original papers. Some key parameters in these algorithms were set as follows.

4.4.1 Population size.

The population size (N) in all the algorithms was set to 100. Around 1000 points were uniformly sampled from the true POF for computing the performance metrics in both bi- and three-objective cases.

4.4.2 Other parameters.

All the parameters in the compared algorithms used the same settings as in their original studies.

Parameters in FBP: the degree of the polynomial fit (dpf) was set to 2, the size of SubPop2 was set to 0.4N, and the parameters of the third strategy follow [56].

4.4.3 Stopping criterion and the number of executions.

Each algorithm terminates after a prespecified number of generations that covers all the changes. To minimize the effect of static optimization, each algorithm was given 50 generations before the first change occurs. The total number of generations was set to 3·nt·τt + 50, which ensures that there are 3·nt changes during the evolution. Additionally, each algorithm was executed 25 independent times on each test instance.

4.4.4 Change detection.

For all the algorithms, a maximum of 10% of the population is re-evaluated for change detection.

4.5 Experimental results

The severity of change (nt) and the frequency of change (τt) are two significant parameters of the benchmark functions. To investigate their influence on the algorithms' performance, they are set to different values (5, 10, 20) in this section. Tables 1–9 summarize the numerical results obtained by the different algorithms, and the best values are highlighted in bold face.

Table 1. Mean and standard deviation values of MIGD obtained by five algorithms for (nt, τt) = (5,20).

https://doi.org/10.1371/journal.pone.0254839.t001

Table 2. Mean and standard deviation values of MIGD obtained by five algorithms for (nt, τt) = (10,10).

https://doi.org/10.1371/journal.pone.0254839.t002

Table 3. Mean and standard deviation values of MIGD obtained by five algorithms for (nt, τt) = (10,20).

https://doi.org/10.1371/journal.pone.0254839.t003

Table 4. Mean and standard deviation values of MHV obtained by five algorithms for (nt, τt) = (5,20).

https://doi.org/10.1371/journal.pone.0254839.t004

Table 5. Mean and standard deviation values of MHV obtained by five algorithms for (nt, τt) = (10,10).

https://doi.org/10.1371/journal.pone.0254839.t005

Table 6. Mean and standard deviation values of MHV obtained by five algorithms for (nt, τt) = (10,20).

https://doi.org/10.1371/journal.pone.0254839.t006

Table 7. Mean and standard deviation values of MSP obtained by five algorithms for (nt, τt) = (5,20).

https://doi.org/10.1371/journal.pone.0254839.t007

Table 8. Mean and standard deviation values of MSP obtained by five algorithms for (nt, τt) = (10,10).

https://doi.org/10.1371/journal.pone.0254839.t008

Table 9. Mean and standard deviation values of MSP obtained by five algorithms for (nt, τt) = (10,20).

https://doi.org/10.1371/journal.pone.0254839.t009

The MIGD results of all the algorithms are recorded in Tables 1–3, and it can be seen that FBP has the best values compared with its peers for most of the benchmark functions. However, for two functions, DF4 and DF8, FBP is not able to obtain the best value, but the difference is not large according to the statistical p-values. When τt is set to 10, FBP generates the best result on DF1. Meanwhile, for different levels of nt and τt, the proposed technique can still achieve the best result on the majority of the functions. This shows that the designed prediction strategies can generate a good population that tracks the true POF closely in dynamic environments.

Tables 4–6 summarize the MHV values of all the algorithms. Although FBP has much better MHV values than the other techniques on the majority of the problems, it is not effective enough on DF5, DF6 and DF11 according to the statistical t-test results. In addition, MOE has a slight advantage over the others on DF9 and DF14. Overall, the MHV metric further demonstrates that the proposed strategy responds to changes well.

Tables 7–9 list the MSP results obtained by all the algorithms. Although FBP obtains the best values on most of the bi-objective problems, e.g., DF1, DF5 and DF7, it seems ineffective on a few three-objective problems, but the difference is not significant according to the statistical p-values. MOEA/D-FD obtains the best distribution of solutions in the other cases. MOEA/D-FD benefits from the even weights in its decomposition approach, which improve the distribution of solutions. In contrast, the other MOEAs utilize dominance-based environmental selection approaches, which may not generate solutions as uniform as those of the decomposition-based technique, especially on three-objective problems. Besides that, well-distributed solutions do not necessarily approximate the true POF closely. MOEA/D-FD performs better than FBP in terms of MSP, but it is weaker than FBP on the other two indicators, i.e., MIGD and MHV, which are more reliable for distinguishing between algorithms in terms of overall performance.

As described before, it is obvious that the frequency of changes exerts a certain influence on the algorithms' performance. On the three-objective functions, frequent changes increase the difficulty of finding high-quality approximations to the POF, as shown by the larger MIGD and smaller MHV values recorded in Tables 1–9. Overall, FBP seems less sensitive to the frequency and severity of change, as can be observed from its gradual improvement on the three measures when τt and nt increase in most cases, whereas the compared algorithms show drastic changes in their performance.

Fig 3 presents convergence graphs of the mean IGD values for a majority of the benchmark functions. It is obvious that FBP is more stable and recovers faster from dynamic changes in most cases, thereby achieving a better convergence process than the others. For DF10, FBP does not perform well in the first few environments, but it has a significant advantage over its peers in later environments. The overall performance of FBP is better than that of the others on DF8.

Fig 3. Mean IGD curves for different problems with nt = 10 and τt = 1.

https://doi.org/10.1371/journal.pone.0254839.g003

Figs 4–7 plot some POF approximations on DF3, DF5, DF7 and DF8, which give an intuitive representation of the solutions. It is obvious that FBP performs better than the compared algorithms. The approximations demonstrate clearly that FBP has excellent tracking ability in varying environments, although it may generate some boundary individuals on DF8.

Fig 4. POF approximations of five algorithms for DF3 with nt = 10 and τt = 10.

https://doi.org/10.1371/journal.pone.0254839.g004

Fig 5. POF approximations of five algorithms for DF5 with nt = 10 and τt = 10.

https://doi.org/10.1371/journal.pone.0254839.g005

Fig 6. POF approximations of five algorithms for DF7 with nt = 10 and τt = 10.

https://doi.org/10.1371/journal.pone.0254839.g006

Fig 7. POF approximations of five algorithms for DF8 with nt = 10 and τt = 10.

https://doi.org/10.1371/journal.pone.0254839.g007

Apart from the above analysis, to further investigate the performance of the proposed dynamic multi-objective algorithm, some recent multi-objective algorithms (MOGOA, MOMVO, MOALO and MSSA) are employed for comparison. They are equipped with the same reaction mechanism used in FBP. Tables 10–12 record the simulation results, including mean values, standard deviations and t-test values. It can be seen that FBP outperforms the compared algorithms on the majority of the test problems in terms of the MIGD and MHV results, and the p-values summarized at the bottom of the tables also indicate that the differences among them are significant. For MSP, the advantages of the algorithm are not obvious on three functions (DF2, DF11 and DF13), but the p-values show that the differences among them are not significant. Overall, FBP is able to generate competitive results with respect to the other compared approaches.

Table 10. Performance comparison of different multiobjective algorithms variants on MIGD.

https://doi.org/10.1371/journal.pone.0254839.t010

Table 11. Performance comparison of different multiobjective algorithms variants on MHV.

https://doi.org/10.1371/journal.pone.0254839.t011

Table 12. Performance comparison of different multiobjective algorithms variants on MSP.

https://doi.org/10.1371/journal.pone.0254839.t012

5 Discussion

5.1 Component analysis

As mentioned before, the proposed strategy contains three key components. This subsection discusses the role that each component plays in dealing with dynamic environments. Specifically, to demonstrate the importance of the linear prediction model with two different stepsizes, a one-step prediction model is utilized to replace the proposed two-step strategy for predicting non-dominated solutions. That is, the step value is set to one (step = 1), which is a common setting in most existing prediction-based techniques, and this variant is named FBPV1. To demonstrate that the fitting-based strategy has an important effect on the proposed strategy, FBPV2 is designed by removing the fitting-based strategy; in other words, FBPV2 has only two prediction strategies. Similarly, to study the role of the third strategy, FBP is also modified by excluding the reference sampling strategy, and this variant is called FBPV3.

These three variants are compared with the original FBP, and Table 13 reports the corresponding results. The following discusses the influence of each component in detail.

Table 13. Performance comparison of different FBP variants on MIGD.

https://doi.org/10.1371/journal.pone.0254839.t013

5.1.1 Linear prediction model.

It is clear that FBP is much superior to FBPV1 in terms of MIGD in some cases, but the differences between them are not significant on most of the test problems according to the p-values. The reason may be that FBP utilizes a two-step prediction strategy, which generates more boundary individuals than FBPV1, and the population diversity can be immediately affected by too many non-dominated boundary solutions. Despite that, the two-step technique performs much better than the one-step strategy on the majority of the benchmark functions.

5.1.2 Curve fitting-based strategy.

It is not difficult to observe from the results that FBP outperforms the modified variant FBPV2 on most of the test functions. This means that the curve fitting-based strategy indeed helps improve the quality of the population in varying environments. The reason may originate from the fact that the curve fitting-based strategy is designed by considering the interlinks between variables, which helps to generate promising solutions to some extent.

The comparison between the three variants and the proposed FBP illustrates that each part has a significant effect on the performance of FBP, and removing any of them reduces performance. Therefore, it is important to combine them together as in the FBP strategy.

5.1.3 Sampling strategy.

All the results illustrate that FBP performs much better than FBPV3 on almost all test problems, although FBP is slightly weaker than FBPV3 on DF14. Thus, the designed sampling technique clearly improves the search ability of the population in each varying environment and can further improve the effectiveness of the proposed dynamic multi-objective optimization algorithm.

5.2 Influence of step values

As described before, the linear prediction model employs two different stepsizes, which are set to 1 and 0.3, for predicting non-dominated solutions. Here, to study whether the step values are well configured, step = 1 is fixed, as it has proven effective in many prediction algorithms, and the other step is varied from 0.1 to 0.7 in increments of 0.2 (FBPS1–FBPS3). The numerical results in Table 14 for the fourteen functions show that the algorithm becomes ineffective when this step is too large, as indicated by the t-test values. The results illustrate that FBP outperforms the other three versions on a majority of the functions, although the differences are not very large in some cases. Therefore, it can be concluded from this experiment that the two stepsize values (1 and 0.3) used in FBP are reasonable.

Table 14. Performance comparison of FBP variants on MIGD for (nt, τt) = (10,10).

https://doi.org/10.1371/journal.pone.0254839.t014

5.3 Influence of degree of polynomial regression

As an important part of FBP, the curve fitting-based strategy has a significant parameter, the degree of the polynomial regression (dpf). Here, dpf is set to different values from 1 to 4, with an increment of 1 (FBPL1–FBPL3), to explore its influence on the algorithm's performance. The comparison results recorded in Table 15 show that the proposed technique is superior to the other versions on almost all the test problems. Although a higher degree gives a better goodness of fit, too high a degree may result in over-fitting. Thus, it is important to select the degree of the polynomial regression properly, and the experimental analysis supports the decision to choose a degree of two.

Table 15. Performance comparison of FBP variants on MIGD for (nt, τt) = (10,10).

https://doi.org/10.1371/journal.pone.0254839.t015

5.4 Influence of cr values

In the curve fitting-based strategy, the new predicted fitting curve is obtained based on Eqs (6) and (7). After that, it is used to generate new individuals using Eq (8), which involves two important parameters, the compression ratio (cr) and the size of the subpopulation (SubPop2). The former is discussed in this subsection, and the latter is analyzed below. cr ranges from 0.1 to 0.7, with an increment of 0.2 (FBPR1–FBPR3), and the results are summarized in Table 16. It is obvious that the original variant performs much better than the other three versions on almost all the problems. In some cases, e.g., DF1, DF5 and DF10, the differences between them are quite significant. Therefore, 0.1 is the best value for cr in this study.

Table 16. Performance comparison of FBP variants on MIGD for (nt, τt) = (10,10).

https://doi.org/10.1371/journal.pone.0254839.t016

5.5 Influence of Subpop2 size

Another important parameter is the SubPop2 size. To investigate its influence, the SubPop2 size is varied from 0.2 to 0.5 times the total population size, with an increment of 0.1 (FBPQ1–FBPQ3). The comparison results recorded in Table 17 show that there is no single best value of this parameter for all the test functions. For instance, some cases (e.g. DF3 and DF13) are sensitive to the parameter value, while other cases (e.g. DF1 and DF2) are not much affected by it. This experiment shows that FBP performs much better than the other variants when SubPop2 is set to around 0.4N, although this value is not always the best. Thus, 0.4N is chosen for this parameter in FBP.

Table 17. Performance comparison of FBP variants on MIGD for (nt, τt) = (10,10).

https://doi.org/10.1371/journal.pone.0254839.t017

5.6 Different Multi-objective algorithms

This subsection aims to verify the feasibility of the proposed dynamic reaction mechanism by combining it with four efficient, recently proposed multi-objective algorithms (MOGOA, MOMVO, MOALO and MSSA); the corresponding comparison results are reported in Tables 10–12 and discussed in Section 4.5.

5.7 More discussion

Apart from the aforementioned component and parameter analysis, this subsection further discusses the advantages and disadvantages of each strategy of the proposed technique. First, the linear prediction strategy utilizes the two-step approach for predicting non-dominated solutions, which increases the quality of the population in dynamic environments and improves the optimization performance. However, this improvement comes at the cost of complexity, since compared with the one-step strategy, the two-step strategy tends to generate more solutions. Meanwhile, these solutions contain some boundary individuals which, being non-dominated, survive selection but are not beneficial for the global search, as shown in the numerical results. Therefore, this strategy could be improved by controlling the boundary members effectively.

Second, to obtain well-distributed solutions, FBP employs a recently proposed sampling strategy that classifies decision variables into two groups. Experimental results show that it is an effective way to solve multi-objective problems in varying environments. However, the strategy heavily depends on the variable classification. This study assumes that there exist principal and non-principal variables, but it is not clear how well this assumption generalises. Thus, this strategy also needs to be improved to avoid the principal variable being misidentified.

Third, the curve-fitting based strategy aims to predict a subpopulation based on the distribution characteristics among variables in two consecutive environments. Simulation results show that it enhances performance on bi-objective problems, but it is less helpful on three-objective problems. Therefore, further improvement should be made to this strategy.

6 Conclusion

This paper proposed a new dynamic multi-objective optimization algorithm, named FBP, for dealing with multi-objective problems in changing environments. FBP mainly includes three different components: a two-step approach for predicting non-dominated solutions, a sampling strategy, and a curve-fitting strategy. Each component plays an important role in creating a high-quality population, improving either diversity or convergence, when a change occurs in the environment. To verify the effectiveness of the algorithm, a recent test suite with different characteristics is utilized. Experimental comparisons demonstrate that FBP performs better than the other algorithms in most cases, showing that the proposed algorithm has good tracking ability and responds quickly to environmental changes. Besides, the role that each component and parameter plays in the proposed algorithm is also analysed and discussed extensively. In future work, we will further improve the proposed algorithm by addressing the parameter issues discussed above.

Acknowledgments

The authors express sincere appreciation to the anonymous reviewers for their helpful comments.

References

  1. Li K, Deb K, Zhang Q. An evolutionary many-objective optimization algorithm based on dominance and decomposition. IEEE Trans. Evol. Comput. 2014;19(5):694–716.
  2. Li Y, Tong S, Li T. Adaptive fuzzy output feedback dynamic surface control of interconnected nonlinear pure-feedback systems. IEEE Trans. Cybern. 2014;45(1):138–149.
  3. Xiong J, Zhou Z, Tian K, Liao T, Shi J. A multi-objective approach for weapon selection and planning problems in dynamic environment. J. Ind. Manage. Opt. 2017;13(3):1189–1211.
  4. Jiang M, Huang Z, Qiu L, Huang W. Transfer learning-based dynamic multiobjective optimization algorithms. IEEE Trans. Evol. Comput. 2017;22(4):501–514.
  5. Deb K, Rao UV, Karthik S. Dynamic multiobjective optimization and decision-making using modified NSGA-II: A case study on hydro-thermal power scheduling. Proc. EMO, LNCS 4403. 2007;803–817.
  6. Wang DJ, Liu F, Jin Y. A multi-objective evolutionary algorithm guided by directed search for dynamic scheduling. Comput. Oper. Res. 2016;79:279–290.
  7. Yazici A, Kirlik G, Parlaktuna O, Sipahioglu A. A dynamic path planning approach for multirobot sensor-based coverage considering energy constraints. IEEE Trans. Cybern. 2014;44(3):305–314.
  8. Wu P, Campbell D, Merz T. Multi-objective four-dimensional vehicle motion planning in large dynamic environments. IEEE Trans. Syst., Man, Cybern. B, Cybern. 2011;41(3):621–634.
  9. Wu X, Ma Z, Wang Y. Joint user grouping and resource allocation for multi-user dual layer beamforming in LTE-A. IEEE Commun. Lett. 2015;19(10):1822–1855.
  10. Mashwani WK, Salhi A. Multiobjective evolutionary algorithm based on multimethod with dynamic resources allocation. Appl. Soft Comput. 2016;39:292–309.
  11. Zeng S, Chen S, Zhao J, Zhou A, Li Z, Jing H. Dynamic constrained multiobjective model for solving constrained optimization problems. Proc. IEEE CEC. 2011;2041–2046.
  12. Feng G, Yuan L, Zhang X, Qian Z. Dynamic adjustment of hidden node parameters for extreme learning machine. IEEE Trans. Cybern. 2015;45(2):279–288.
  13. Isaacs A, Puttige V, Ray T, Smith W, Anavatti S. Development of a memetic algorithm for dynamic multiobjective optimization and its applications for online neural network modeling of UAVs. Proc. IJCNN. 2008;548–554.
  14. Zhou A, Jin Y, Zhang Q, Sendhoff B, Tsang E. Prediction-based population re-initialization for evolutionary dynamic multiobjective optimization. Proc. EMO, LNCS 4403. 2007;832–846.
  15. Liu C, Wang Y. New evolutionary algorithm for dynamic multiobjective optimization problems. Proc. Evol. Comput. Theory Algorithms, LNCS 4221. 2006;889–892.
  16. Lu C, Gao L, Yi J. Grey wolf optimizer with cellular topological structure. Expert Syst. Appl. 2018;107:89–114.
  17. Deb K, Pratap A, Agarwal S, Meyarivan T. A fast and elitist multiobjective genetic algorithm: NSGA-II. IEEE Trans. Evol. Comput. 2002;6(2):182–197.
  18. Zitzler E, Laumanns M, Thiele L. SPEA2: Improving the strength Pareto evolutionary algorithm. Technical Report 103, Computer Engineering and Networks Laboratory (TIK), Swiss Federal Institute of Technology (ETH) Zurich, Switzerland. 2001.
  19. Padhye N. Comparison of archiving methods in multi-objective particle swarm optimization (MOPSO): empirical study. Proceedings of the 11th Annual Conference on Genetic and Evolutionary Computation. 2009.
  20. Mirjalili S, Mirjalili S, Saremi S, Aljarah I. Grasshopper optimization algorithm for multi-objective optimization problems. Appl. Intell. 2018;48:805–820.
  21. Mirjalili S, Jangir P, Mirjalili S, Saremi S, Trivedi I. Optimization of problems with multiple objectives using the multi-verse optimization algorithm. Knowl.-Based Syst. 2017;134:50–71.
  22. Mirjalili S, Jangir P, Saremi S. Multi-objective ant lion optimizer: a multi-objective optimization algorithm for solving engineering problems. Appl. Intell. 2016;1–17.
  23. Mirjalili S, Gandomi A, Mirjalili S, Saremi S, Faris H, Mirjalili S. Salp Swarm Algorithm: A bio-inspired optimizer for engineering design problems. Adv. Eng. Softw. 2017;114:163–191.
  24. Auger A, Bader J, Brockhoff D, Zitzler E. Hypervolume-based multiobjective optimization: Theoretical foundations and practical implications. Theor. Comput. Sci. 2012;425(1):75–103.
  25. Phan D, Suzuki J. R2-IBEA: R2 indicator based evolutionary algorithm for multiobjective optimization. IEEE Congr. Evol. Comput. 2013;1836–1845.
  26. Zhang Q, Li H. MOEA/D: A multiobjective evolutionary algorithm based on decomposition. IEEE Trans. Evol. Comput. 2007;11(6):712–731.
  27. Saxena D, Duro J, Tiwari A, Deb K, Zhang Q. Objective reduction in many-objective optimization: Linear and nonlinear algorithms. IEEE Trans. Evol. Comput. 2013;17(1):77–99.
  28. Bringmann K, Friedrich T, Neumann F, Wagner M. Approximation-guided evolutionary multi-objective optimization. Proc. 21st Int. Joint Conf. Artif. Intell. 2011;1198–1203.
  29. Deb K, Jain H. An evolutionary many-objective optimization algorithm using reference-point-based nondominated sorting approach, part I: Solving problems with box constraints. IEEE Trans. Evol. Comput. 2014;18(4):577–601.
  30. Jain H, Deb K. An evolutionary many-objective optimization algorithm using reference-point based nondominated sorting approach, part II: Handling constraints and extending to an adaptive approach. IEEE Trans. Evol. Comput. 2014;18(4):602–622.
  31. Ke L, Zhang Q, Battiti R. MOEA/D-ACO: A multiobjective evolutionary algorithm using decomposition and ant colony. IEEE Trans. Cybern. 2013;43(6):1845–1859.
  32. Ke L, Zhang Q, Battiti R. Hybridization of decomposition and local search for multiobjective optimization. IEEE Trans. Cybern. 2014;44(10):1808–1820.
  33. Li C, Yang S. A general framework of multipopulation methods with clustering in undetectable dynamic environments. IEEE Trans. Evol. Comput. 2012;16(4):556–577.
  34. Chang R, Hsu H, Lin S, Ho J. Query-based learning for dynamic particle swarm optimization. IEEE Access. 2017;5:7648–7658.
  35. Mavrovouniotis M, Yang S. Genetic algorithms with adaptive immigrants for dynamic environments. 2013 IEEE Congress on Evolutionary Computation. 2013.
  36. Grefenstette J. Genetic algorithms for changing environments. Parallel Problem Solving from Nature 2. 1992;137–144.
  37. Yang S. Genetic algorithms with memory- and elitism-based immigrants in dynamic environments. Evol. Comput. 2008;16(3):385–416.
  38. Deb K, Karthik S. Dynamic multi-objective optimization and decision-making using modified NSGA-II: a case study on hydro-thermal power scheduling. Lecture Notes in Computer Science. Springer Science+Business Media, 2007;803–817.
  39. Branke J. Memory enhanced evolutionary algorithms for changing optimization problems. Proceedings of the 1999 Congress on Evolutionary Computation. IEEE. 1999.
  40. Goh C, Tan K. A competitive-cooperative coevolutionary paradigm for dynamic multiobjective optimization. IEEE Trans. Evol. Comput. 2009;13(1):103–127.
  41. Azzouz R, Bechikh S, Said L. A dynamic multi-objective evolutionary algorithm using a change severity-based adaptive population management strategy. Soft Comput. 2015;21(4):1–22.
  42. Jiang S, Yang S. A steady-state and generational evolutionary algorithm for dynamic multiobjective optimization. IEEE Trans. Evol. Comput. 2017;21(1):65–82.
  43. Branke J, Kaussler T, Smidt C, Schmeck H. A multi-population approach to dynamic optimization problems. Evolutionary Design and Manufacture. Springer Science+Business Media, 2000;299–307.
  44. Li C, Yang S. Fast multi-swarm optimization for dynamic optimization problems. 2008 Fourth International Conference on Natural Computation. IEEE, 2008.
  45. Yang S, Li C. A clustering particle swarm optimizer for locating and tracking multiple optima in dynamic environments. IEEE Trans. Evol. Comput. 2010;14(6):959–974.
  46. Khouadjia M, Alba A, Smidt C, Schmeck H. A multi-population approach to dynamic optimization problems. Evolutionary Design and Manufacture. Springer Science+Business Media, 2000;299–307.
  47. Muruganantham A, Tan K, Vadakkepat P. Evolutionary dynamic multiobjective optimization via Kalman filter prediction. IEEE Trans. Cybern. 2016;46(12):2862–2873.
  48. Rong M, Gong D, Pedrycz W, Wang L. A multimodel prediction method for dynamic multiobjective evolutionary optimization. IEEE Trans. Evol. Comput. 2020;24(2):290–304.
  49. Zhou A, Jin Y, Zhang Q. A population prediction strategy for evolutionary dynamic multiobjective optimization. IEEE Trans. Cybern. 2014;44(1):66–77.
  50. Li Q, Zou J, Yang S, Zheng J, Ruan G. A predictive strategy based on special points for evolutionary dynamic multiobjective optimization. Soft Comput. 2019;23:3723–3739.
  51. Zou J, Li Q, Yang S, Hu B, Zheng J. A prediction strategy based on center points and knee points for evolutionary dynamic multiobjective optimization. Appl. Soft Comput. 2017;61:806–818.
  52. Ruan G, Yu G, Zheng J, Zou J, Yang S. The effect of diversity maintenance on prediction in dynamic multiobjective optimization. Appl. Soft Comput. 2017;58:631–647.
  53. Hu Y, Ou J, Zheng J, et al. Solving dynamic multiobjective problems with an evolutionary multi-directional search approach. Knowl.-Based Syst. 2020;194:1–15.
  54. Salarieh B, Silva H. Review and comparison of frequency-domain curve-fitting techniques: Vector fitting, frequency-partitioning fitting, matrix pencil method and Loewner matrix. Electr. Pow. Syst. Res. 2021;196:107254.
  55. Tuan Q, Youn M, SukKim Y. New procedure for determining the strain hardening behavior of sheet metals at large strains using the curve fitting method. Mech. Mater. 2021;154:103729.
  56. Zhang Q, Yang S, Wang R, et al. Novel prediction strategies for dynamic multiobjective optimization. IEEE Trans. Evol. Comput. 2020;24(2):260–274.
  57. Nauck D, Klawonn F, Kruse R. Foundations of Neuro-Fuzzy Systems. John Wiley & Sons, Inc., 1997.
  58. Jiang S, Yang S, Yao X, Tan K, Kaiser M, Krasnogor N. Benchmark problems for CEC2018 competition on dynamic multiobjective optimisation. 2018 IEEE Congress on Evolutionary Computation, Competition on Dynamic Multiobjective Optimisation. 2018.
  59. Zhang Z. Multiobjective optimization immune algorithm in dynamic environments and its application to greenhouse control. Appl. Soft Comput. 2008;8(2):959–971.
  60. Sohn W, Jeong M, Jeong K. Theoretical comparative study of t-tests and nonparametric tests for final status surveys of MARSSIM at decommissioning sites. Ann. Nucl. Energy. 2020;135:106945.
  61. Rambabu R, Vadakkepat P, Tan K, Jiang M. A mixture-of-experts prediction framework for evolutionary dynamic multiobjective optimization. IEEE Trans. Cybern. 2020;50(12):5099–5112.
  62. Cao L, Xu L, Goodman E, Li H. A first-order difference model-based evolutionary dynamic multiobjective optimization. Asia-Pacific Conference on Simulated Evolution and Learning. 2017;644–655.