Digital IIR Filters Design Using Differential Evolution Algorithm with a Controllable Probabilistic Population Size

  • Wu Zhu,

    Affiliation College of Information Science and Technology, Donghua University, Shanghai, China

  • Jian-an Fang,

    Affiliation College of Information Science and Technology, Donghua University, Shanghai, China

  • Yang Tang ,

    tangtany@gmail.com

    Affiliations College of Information Science and Technology, Donghua University, Shanghai, China, Research Institute for Intelligent Control and System, Harbin Institute of Technology, Harbin, China, Institute of Physics, Humboldt University, Berlin, Germany, Potsdam Institute for Climate Impact Research, Potsdam, Germany

  • Wenbing Zhang,

    Affiliations College of Information Science and Technology, Donghua University, Shanghai, China, Institute of Textiles and Clothing, The Hong Kong Polytechnic University, Hong Kong, China

  • Wei Du

    Affiliation Institute of Textiles and Clothing, The Hong Kong Polytechnic University, Hong Kong, China

Abstract

Design of a digital infinite-impulse-response (IIR) filter is the process of synthesizing and implementing a recursive filter network so that a set of prescribed excitations results in a set of desired responses. However, the error surface of IIR filters is usually non-linear and multi-modal. To reliably locate the global minimum, an improved differential evolution (DE) algorithm is proposed for digital IIR filter design in this paper. The suggested algorithm is a DE variant with a controllable probabilistic population size (CPDE). It considers convergence speed and computational cost simultaneously by non-periodically adding or removing part of the individuals according to the fitness diversity. In addition, we also discuss several important aspects of IIR filter design, such as the cost function value, the influence of (noise) perturbations, the convergence rate and success percentage, and parameter measurement. Simulation results show that the presented algorithm is viable and competitive: in numerical experiments against six existing state-of-the-art algorithm-based digital IIR filter design methods, CPDE proves more promising and competitive.

Introduction

The filtering problem [1], [2] is a widely studied research topic in various fields of control and signal processing. The main objective of filtering is to synthesize and implement a filter network [3] that modifies, reshapes, or manipulates the frequency spectrum of a signal according to some desired specifications. As one of the most successful filter networks, the well-known digital infinite-impulse-response (IIR) filter has been extensively used in many practical systems [4]–[7], such as engineering, network, nuclear-reactor, biological, chemical and electrical-network systems. However, it is now recognized that an IIR filter will generally not guarantee satisfactory performance if its feedback coefficients are not chosen appropriately during the adaptation process [8]. Apart from this disadvantage, the possibility of a multi-modal and nonlinear error surface is another important design challenge for recursive filters [9], [10]. To improve robustness, many heuristic optimization design methods have been developed in recent years, such as simulated annealing (SA) [11], ant colony optimization (ACO) [12], particle swarm optimization (PSO) [13], the seeker optimization algorithm (SOA) [14], the artificial bee colony (ABC) [15], [16] and differential evolution (DE) [17].

SA is usually sensitive to the starting point of its search and requires too many function evaluations to converge to the global minimum. ACO imitates the social behavior of real ant colonies and was originally developed for combinatorial optimization problems; however, it may occasionally be trapped in local stagnation or premature convergence, resulting in low optimization precision or even failure [18]. Moreover, several studies have shown that the conventional PSO algorithm [19] can easily be drawn into local optima and lacks the ability to escape them when solving complex multimodal tasks. The SOA simulates the act of human searching and has been widely applied to system identification [20]; nonetheless, its performance is also affected by its parameters, and it cannot easily escape from premature convergence.

Differential evolution, proposed by Storn and Price [21], is a population-based heuristic search algorithm with the dual features of reliability and flexibility. It implements the evolutionary generation-and-test paradigm for global optimization by using the distance and direction information of the current population to guide the search. It has many advantages, such as simplicity, reliability, high performance and easy implementation, which give it great potential for IIR filter design. In the seminal DE algorithm, perturbation is performed by adding a weighted difference vector (the weight F is called the scale factor) and modifying the values of some randomly selected coordinates. The perturbed solution, namely the offspring, is then evaluated by means of its objective function and compared with its corresponding parent. If the newly generated solution outperforms its parent, a replacement occurs; otherwise the parent solution is retained. To provide a rigorous proof of its probabilistic convergence, [22] modeled the population as a dynamical system in which the probability density function (PDF) of the population vectors changes with time. It was shown there that the dynamics are asymptotically stable (which implies convergence) at the equilibrium PDF, which is a Dirac delta function placed at the global optimum. Later on, various mutation strategies [23] were used for the generation of new solutions to augment the robustness of the underlying algorithm.
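
As a point of reference for the CPDE variant introduced later, the following is a minimal sketch of the classic DE/rand/1/bin scheme described above (differential mutation with scale factor F, binomial crossover with rate CR, and greedy one-to-one selection). The function name, the default parameter values and the way the objective and bounds are passed in are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def de_rand_1_bin(objective, bounds, pop_size=50, F=0.5, CR=0.9, max_fes=10_000, rng=None):
    """Classic DE/rand/1/bin with a fixed population size (contrast with CPDE)."""
    rng = np.random.default_rng() if rng is None else rng
    lo, hi = np.asarray(bounds, dtype=float).T        # bounds: sequence of (low, high) pairs
    dim = lo.size
    pop = rng.uniform(lo, hi, size=(pop_size, dim))
    fit = np.array([objective(x) for x in pop])
    fes = pop_size
    while fes < max_fes:
        for i in range(pop_size):
            r1, r2, r3 = rng.choice([j for j in range(pop_size) if j != i], 3, replace=False)
            mutant = pop[r1] + F * (pop[r2] - pop[r3])        # weighted difference vector
            cross = rng.random(dim) < CR
            cross[rng.integers(dim)] = True                   # keep at least one mutant gene
            trial = np.clip(np.where(cross, mutant, pop[i]), lo, hi)
            f_trial = objective(trial)
            fes += 1
            if f_trial <= fit[i]:                             # greedy one-to-one selection
                pop[i], fit[i] = trial, f_trial
            if fes >= max_fes:
                break
    best = int(np.argmin(fit))
    return pop[best], fit[best]
```

For instance, de_rand_1_bin(lambda x: float(np.sum(x**2)), [(-5, 5)] * 10) searches a 10-dimensional sphere function within the stated FES budget.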

In DE, for optimization problems such as single-objective, multi-objective, large-scale, constrained and dynamic problems, the population size is usually fixed at a constant value; see, e.g., [24]. Unfortunately, it is often difficult to determine how large a population size is suitable for solving a given numerical optimization problem. For instance, a definite population size that increases linearly with the problem dimension is given in [25]; yet sparse and noisy data make it difficult to accurately estimate the maximum population size. Inspired by this fact, an efficient population utilization strategy for DE (DESAP) [26] was developed to automatically tune the population size throughout the evolutionary search process, from initialization to completion. Nevertheless, this population utilization method depends on its encoding methodology, which is a restriction for populations with complex dynamical behaviors [27]; no significant advantage is observed when relative encoding is used. Subsequently, the idea of population adaptation was applied to dynamic optimization problems [28], where a multi-population approach (DynDE) is built on DE with the aim of locating optima faster. Yet such a method requires a predetermined topology that may be sensitive to measurement noise to some extent [29]. It is worth mentioning that although the population need not be as large as possible, it ought to meet the requirements of the given engineering problem. Therefore, a population-size reduction method was introduced for jDE in order to enhance algorithmic performance [30], where the population size progressively declines until the final budget is reached during the optimization process. Unfortunately, this method cannot keep track of the progress of individuals during the sustained reduction.
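
To make the progressive-reduction idea cited above concrete, the sketch below shows one possible linear schedule that shrinks the population as the evaluation budget is consumed. It is an illustrative assumption in the spirit of [30], not the exact rule used there or in CPDE.

```python
def reduced_pop_size(n_init: int, n_min: int, fes: int, max_fes: int) -> int:
    """Linear population-size schedule: shrink from n_init to n_min as FES are consumed."""
    frac = min(fes / max_fes, 1.0)
    return max(n_min, round(n_init - (n_init - n_min) * frac))

# Example: with n_init=100, n_min=20 and a 100,000-FES budget,
# reduced_pop_size(100, 20, 50_000, 100_000) returns 60.
```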

Many studies have indicated that computational predictors and models developed in biology and biomedicine, such as those for identifying DNA-binding proteins [31], predicting G-protein-coupled receptors (GPCRs) and their types [32], identifying nuclear receptors and their subfamilies [33], [34], and identifying the subcellular localization of proteins from various organisms [35]–[39], can provide timely and useful insights and information for both basic research and drug development. These predictors all use methods of digital signal processing. In view of this, the present study attempts to address an important problem in designing digital IIR filters, in the hope that it may become a useful tool for the related information-processing areas. However, most of the developed adaptive population methods have their own advantages and disadvantages, and it remains an open question how to utilize a dynamic population strategy to solve real-world practical problems. We aim to employ a DE with Markovian jumping (switching) population updating for digital IIR filter design, so that the dynamic population can quickly converge toward the potential global optimum by taking advantage of the current search information. Thus, the CPDE-based evolutionary method is applied to digital IIR filter design, and its performance is compared to that of three versions of DE, CMA-ES, GL-25 and SOA. On a suite of six digital IIR filter design problems, CPDE is shown empirically to produce highly competitive results compared with the other EAs.

Results

0.1 Illustration

Application of the IIR filter to system identification has been widely studied, since many problems encountered in signal processing can be characterized as system identification problems (Figure 1) [40], [41]. Therefore, in the simulation study, IIR filters are designed for the system identification purpose. In this section, we utilize the modified DE to adjust the parameters of the filters until the error between the output of the filter and that of the unknown system is minimized. Subsequently, we provide an overall comparison between the performance of CPDE and several other state-of-the-art algorithms to verify the effectiveness and usefulness of the proposed method.

Figure 1. Block diagram of the system identification process using an IIR filter designed by CPDE.

https://doi.org/10.1371/journal.pone.0040549.g001

Six typical system identification problems [14] make up the test suite used for this comparative study; they are listed in Tables 1 and 2, which specify the system and filter transfer functions, the system input, the signal-to-noise ratio (SNR), and the system noise, modeled as zero-mean white Gaussian noise (WGN) of given variance and independent of the input. Table 2 records all coefficients of the six digital IIR filters, the search space (the predefined boundary constraints, analyzed in Section 0.5), and the data length used in calculating the mean-square error (MSE). The examples were selected so as to include problems with the following characteristics: unimodal/multi-modal and noise-free/noisy. For each algorithm and each test function, 30 independent runs are conducted with 100,000 function evaluations (FES) as the termination criterion.
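
The cost evaluated in these experiments is the mean-square error between the (possibly noisy) output of the unknown plant and the output of the candidate IIR filter driven by the same input (cf. Figure 1). The sketch below is a minimal illustration of that evaluation under our own assumptions: the direct-form recursion, the packing of coefficients as (numerator, denominator) pairs and the noise handling are illustrative, not the paper's code.

```python
import numpy as np

def iir_output(b, a, x):
    """Direct-form IIR recursion: y[n] = sum_k b[k]*x[n-k] - sum_k a[k]*y[n-k], with a[0] = 1."""
    x = np.asarray(x, dtype=float)
    y = np.zeros_like(x)
    for n in range(len(x)):
        acc = sum(b[k] * x[n - k] for k in range(len(b)) if n - k >= 0)
        acc -= sum(a[k] * y[n - k] for k in range(1, len(a)) if n - k >= 0)
        y[n] = acc
    return y

def mse_cost(filter_coeffs, plant_coeffs, x, noise_std=0.0, rng=None):
    """MSE between the unknown plant's (noisy) output and the candidate filter's output."""
    rng = np.random.default_rng() if rng is None else rng
    b_hat, a_hat = filter_coeffs          # candidate filter
    b_sys, a_sys = plant_coeffs           # unknown system being identified
    d = iir_output(b_sys, a_sys, x) + noise_std * rng.standard_normal(len(x))
    y = iir_output(b_hat, a_hat, x)
    return float(np.mean((d - y) ** 2))
```

An optimizer such as the DE sketch above would then minimize mse_cost over the filter coefficients, subject to the boundary constraints of Table 2.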

Traditionally, the "generation" is a natural unit of computational cost for statistical comparison [11]–[14], [17]. However, the population size may differ between algorithms, and an algorithm with a larger population may obtain better performance at the expense of many more function evaluations per generation. Thus, in this paper, the number of function evaluations (FES) is used to represent the computational cost when comparing algorithms.

In all simulations, the population size of most EAs is 100, with the exception of EPSDE and SaDE; as suggested in [42], [43], the population size of EPSDE and SaDE is chosen to be 50. The seven EAs are detailed in Table 3. CMA-ES represents the state of the art of evolution strategies and is a reference method in continuous optimization. GL-25 is a hybrid real-coded genetic algorithm that combines global and local search. EPSDE is an adaptive DE with an ensemble of parameters that incorporates a self-organizing method. jDE is a standard DE with self-adapted parameter settings. SaDE maintains a pool of mutation strategies that are self-adapted based on their previous performance. SOA is a heuristic stochastic optimization algorithm based on the simulation of the act of human searching. The parameters for these EAs are provided in Table 3.

0.2 Comparison on the Solution Accuracy

In this section, an overall performance comparison is provided between the CPDE variant and six other state-of-the-art EAs (i.e., CMA-ES, GL-25, EPSDE, jDE, SaDE and SOA). We evaluate the performance of the seven heuristic algorithms on six typical nonlinear uncertain discrete-time problems. Fig. 2 illustrates the cost function value versus the number of evaluations, averaged over 30 random runs, for the seven algorithms; the subfigures magnify parts of the convergence graphs for clarity. Table 4 reports the experimental results of Examples 1–6, averaged over 30 independent runs with 100,000 FES.

Figure 2. Cost function value versus number of evaluations averaged over 30 random runs for the seven algorithms; panels (a)–(f) correspond to the six test examples.

https://doi.org/10.1371/journal.pone.0040549.g002

Table 4. Experimental results of Examples 1–6, averaged over 30 independent runs with 100,000 FES.

https://doi.org/10.1371/journal.pone.0040549.t004

From Table 4 and Fig. 2, CPDE provides the best performance on several of the examples and ranks second on most of the others, while SOA gives the best performance on the remaining cases. The results show that GL-25 and SOA have good convergence speed. Fig. 3 also shows instances of the evolution of the parameters of two of the filters for CPDE.

Figure 3. Instances of the evolution of the filter parameters for CPDE on two of the test examples, shown in panels (a) and (b).

https://doi.org/10.1371/journal.pone.0040549.g003

To be specific, on the three multimodal problems, CPDE performs much better than CMA-ES, GL-25, EPSDE, jDE, SaDE and SOA on the latter two functions; on the first, SOA delivers good accuracy and the highest convergence rate, while CPDE outperforms the other five methods. To sum up, CPDE is the winner on the multimodal functions. This might be due to the fact that CPDE implements an overall adaptive variable-population-size method, which helps the DE search for the optimum while maintaining a higher convergence speed when dealing with multimodal rotated functions. On the remaining three unimodal functions, CPDE performs significantly better than the six other algorithms on two of them; SOA provides good accuracy on the third, while CPDE achieves the highest convergence rate. The outstanding performance of CPDE is due to its dynamic PDS, which leads to very fast convergence. Overall, CPDE is the best of the seven methods in the comparison conducted on the unimodal functions and the expanded multi-modal functions. For a thorough comparison, a t-test has also been carried out in this paper. Table 4 presents the total score on every function of this two-tailed test, at a significance level of 0.05, between the CPDE variant and the other heuristic algorithms. Rows "+ (Better)," "= (Same)" and "– (Worse)" give the number of functions on which CPDE performs significantly better than, about the same as, and significantly worse than the compared algorithm in terms of fitness values over 30 runs, respectively. As confirmed by the t-test, CPDE in general offers better performance than the other six state-of-the-art EAs.
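
The per-function scoring described above can be reproduced, for instance, with a standard two-tailed t-test on the 30 final cost values of each pair of algorithms. The helper below is a sketch of that bookkeeping; the paper only states a two-tailed test at the 0.05 level, so the use of the Welch (unequal-variance) variant and the function name are our assumptions.

```python
from scipy import stats

def score_against(cpde_costs, other_costs, alpha=0.05):
    """Return '+', '=' or '-' from CPDE's point of view (minimization assumed):
    '+' if CPDE's mean final cost is significantly lower, '-' if significantly higher."""
    t_stat, p_value = stats.ttest_ind(cpde_costs, other_costs, equal_var=False)
    if p_value >= alpha:
        return "="
    cpde_mean = sum(cpde_costs) / len(cpde_costs)
    other_mean = sum(other_costs) / len(other_costs)
    return "+" if cpde_mean < other_mean else "-"
```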

As mentioned above, CPDE has shown very competitive performance on the six filtering problems. In practical engineering, noise exists universally in nature [44]. Therefore, in the past few decades researchers have made significant progress on filtering and control for linear/nonlinear systems with various types of noise, among which Gaussian noise is one of the most general and widely studied signals [45], [46]. Here, we further evaluate the proposed framework on the six expanded stochastic systems, where zero-mean white Gaussian noise is added. The maximum number of FES is set to 100,000 in all runs. Table 5 summarizes the experimental results.

Table 5. Experimental results of Examples 1–6 with noise perturbation, averaged over 30 independent runs with 100,000 FES.

https://doi.org/10.1371/journal.pone.0040549.t005

From Table 5, CPDE provides the best performance on most of the examples and ranks second on another, while SOA offers the best performance on the remaining case. The results show that GL-25 and jDE have good convergence speed.

More specifically, on the three multimodal problems, although it is slightly weaker on some functions, CPDE in general offers better performance than all the compared EAs: it performs much better on two of them and attains slightly worse performance than the best solutions on the third. On the remaining three unimodal functions, CPDE performs much better than the other EAs. Hence, CPDE exhibits the highest performance on the noise-expanded filtering problems; it can efficiently adjust the population structure and guide the evolution process toward more promising solutions.

The t-test results are also summarized in Table 5. In fact, CPDE performs better than CMA-ES, GL-25, EPSDE, jDE, SaDE and SOA on 6, 6, 6, 6, 6 and 5 out of 6 test functions, respectively. Thus, CPDE is better than the other six competitors on the filtering system identification problems. Comparing the results and the cost-function graphs of the seven EAs, GL-25 can converge very quickly to the best solution found so far, though it easily becomes stuck in local optima; SOA has good global search ability but slow convergence; jDE has good search capability on the noise-expanded filtering problems; and CPDE combines good local and global search ability.

0.3 Comparison on Convergence Rate and Successful Percentage

The convergence rate for reaching the global optimum is another key measure of algorithm performance. Note that in solving real-world optimization problems, the cost of function evaluations overwhelms the algorithmic overhead [47]; hence, the computation times of these algorithms are not reported here. Table 6 shows that CPDE needs the fewest FES to reach an acceptable solution on two of the examples, which indicates that CPDE has a higher convergence rate than the other algorithms. Though SOA or GL-25 might outperform CPDE on the other functions, SOA and GL-25 have much worse success ratios and accuracy than CPDE on the comparison problems. In addition, CPDE achieves the accepted value with good convergence speed and accuracy on most of the problems, as shown in Tables 4, 5 and Fig. 2. Tables 4 and 5 also show that CPDE yields the highest percentage of runs reaching acceptable solutions out of 30. According to the no-free-lunch theorem [48], any elevated performance over one class of problems is offset by performance over another class; hence, no single algorithm can outperform the others in convergence speed and accuracy on every optimization problem.
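
The two reliability measures used in Table 6, the FES needed to reach an acceptable cost and the percentage of successful runs, can be tallied from per-run convergence histories as sketched below. The history format (one best-so-far cost per function evaluation) is an assumption made for illustration.

```python
import numpy as np

def success_stats(histories, target):
    """Mean FES to reach the acceptable cost `target`, and success percentage over all runs.
    `histories` is a list of per-run arrays of best-so-far cost, one entry per FES."""
    fes_to_target = []
    for hist in histories:
        hits = np.nonzero(np.asarray(hist) <= target)[0]
        if hits.size > 0:
            fes_to_target.append(hits[0] + 1)        # convert 0-based index to an FES count
    success_rate = 100.0 * len(fes_to_target) / len(histories)
    mean_fes = float(np.mean(fes_to_target)) if fes_to_target else float("nan")
    return mean_fes, success_rate
```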

Table 6. Convergence speed and algorithm reliability comparisons on Examples 1–6 with noise perturbation; an empty entry indicates that no run reached an acceptable solution.

https://doi.org/10.1371/journal.pone.0040549.t006

In summary, CPDE performs best on the unimodal problems with or without noise and has good search ability on the multimodal problems. Owing to the controllable probabilistic technique, CPDE possesses fast convergence, the highest success ratio and the best search accuracy among these EAs.

0.4 Performance of Controllable Probabilistic Approach

In this section, the contribution of the controllable probabilistic (CP) approach to the search performance of CPDE is tested. In these experiments, the trigger threshold is varied as described below; in addition, the parameter denoting the number of potential candidates for perturbation is also considered.

The trigger threshold controls the sensitivity of the dynamic CPDE. When it is set to one, the population size is adjusted in every generation; a higher value lowers the sensitivity of CPDE, while a lower value makes the population adjustment more responsive. Notice that this parameter should be set to a value larger than one; otherwise a newborn individual with poor performance may be eliminated instantly, losing some degree of diversity preservation. On the other hand, the perturbation coefficient also influences the perturbation process substantially.
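
One simple way to realize such a trigger, used here purely as an illustration (the symbol and exact rule from the paper are not reproduced), is to allow a population-size adjustment only every `trigger_threshold` generations:

```python
def adjustment_allowed(generation: int, trigger_threshold: int) -> bool:
    """Allow a population-size change only every `trigger_threshold` generations.
    A threshold of 1 adjusts every generation (maximum sensitivity); larger values
    lower the sensitivity and give newborn individuals time before possible removal."""
    return generation % max(int(trigger_threshold), 1) == 0
```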

Table 7 compares CPDE under four different parameter settings over Examples 1–6 with noise perturbation. It indicates that CPDE is not sensitive to the adjustment of these parameters. To balance search accuracy and robustness, the chosen values of the trigger threshold and the perturbation coefficient are used as the representative parameter setting in this paper; this setting prevents the instant elimination of newborn individuals while keeping the sensitivity of the CP approach high.

Discussion

The CPDE is an improved differential evolution algorithm with a controllable probabilistic population size. When particles cluster together in a region and become trapped in a local basin, CPDE perturbs the population and generates the necessary "fine" individuals to share their up-to-date information. In addition, CPDE removes redundant individuals using its entropy and ranking metrics to save computational cost. In this paper, a CPDE-based digital filter design method has been proposed, and the benefits of CPDE for designing digital IIR filters have been studied.

An overall comparison between the performance of CPDE and six other state-of-the-art EAs (i.e., jDE, SaDE, EPSDE, CMA-ES, GL-25 and SOA) was provided over six typical robust system identification problems, and the results clearly indicated that CPDE achieved a substantial improvement in performance. Furthermore, the convergence-rate study validated that CPDE reaches the fixed accuracy level within an acceptable number of evaluations. Thus, it is believed that the proposed CPDE is capable of rapidly and efficiently adapting the parameters of a wide variety of IIR structures and will become a promising candidate for digital filter design.

Previous work has shown the importance of system identification in digital IIR filter design. Furthermore, CPDE has been effectively applied to estimate the structure of nonlinear uncertain discrete-time systems. Therefore, our method could potentially be used to reconstruct the topological structure in on-line adaptive filtering applications [49]. Another possible application is to identify the topology and parameters of complex networks [50]–[52] and biological time series [53] by means of the dynamic population strategy, provided that online measurement and the increasing/decreasing techniques are feasible. Generally, the suggested technique enables us to identify the unknown parameters of real networks, which permits the required control applications (perturbations). Possible experimental research on controller design and tuning is now under investigation.

Materials and Methods

0.5 Description of the Problem

Consider the digital IIR filter whose input-output relationship is governed by the difference equation

y(n) + \sum_{k=1}^{M} a_k \, y(n-k) = \sum_{k=0}^{M} b_k \, x(n-k),  (1)

where x(n) and y(n) are the filter's input and output, respectively, and M is the filter order. The transfer function of this IIR filter can be written in the following general form:

H(z) = \frac{\sum_{k=0}^{M} b_k z^{-k}}{1 + \sum_{k=1}^{M} a_k z^{-k}}.  (2)

Hence, the design of this filter can be formulated as an optimization problem with the cost function

J(\theta) = \frac{1}{N} \sum_{n=1}^{N} e^2(n) = \frac{1}{N} \sum_{n=1}^{N} \left[ d(n) - y(n) \right]^2,  (3)

where d(n) and y(n) are the desired and actual responses of the filter, respectively, e(n) = d(n) - y(n) is the filter's error signal, and N is the number of samples used to calculate the objective function. The vector

\theta = [b_0, b_1, \ldots, b_M, a_1, \ldots, a_M]^T  (4)

denotes the filter coefficient vector. The aim is to minimize the cost function J(\theta) by adjusting \theta. An important consideration during the adaptive process is to maintain the stability of the IIR filter, since not all filters defined by Eq. (1) are feasible or implementable. An efficient way of maintaining stability is to convert the direct form to a lattice form and make sure that all reflection coefficients have magnitudes less than one. We adopt an approach similar to that in [11] to guarantee the stability of the IIR filter during adaptation; thus, the actual coefficient vector used in optimization consists of the numerator coefficients together with the reflection coefficients. In this setting, the coefficient space is formed by the boundary constraints on the numerator coefficients and the requirement that the magnitudes of the reflection coefficients be less than one. For the sake of simplicity, we adopt the predefined boundary constraints listed in Table 2 so as to compare fairly with other existing EAs.
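
The stability test mentioned above (convert the denominator to lattice form and require every reflection coefficient to have magnitude below one) can be carried out with the standard step-down (reverse Levinson-Durbin) recursion. The sketch below is a generic implementation of that textbook recursion, not code from the paper; the coefficient ordering [1, a_1, ..., a_M] is an assumption.

```python
import numpy as np

def reflection_coefficients(a):
    """Step-down recursion: map denominator coefficients [1, a_1, ..., a_M] to lattice
    reflection coefficients k_1..k_M. The filter is stable iff every |k_m| < 1."""
    a = np.asarray(a, dtype=float)
    a = a / a[0]                          # normalize so that a[0] = 1
    ks = []
    for m in range(len(a) - 1, 0, -1):
        k = a[m]                          # k_m is the last coefficient at order m
        ks.append(k)
        if abs(k) >= 1.0:                 # already unstable; stop before dividing by ~0
            break
        a = (a[:m + 1] - k * a[:m + 1][::-1]) / (1.0 - k * k)
        a = a[:m]                         # step down to order m-1
    return ks[::-1]

def is_stable(a):
    """True when all reflection coefficients of the denominator have magnitude < 1."""
    return all(abs(k) < 1.0 for k in reflection_coefficients(a))
```

For example, is_stable([1.0, 0.65, 0.3]) returns True (reflection coefficients 0.5 and 0.3), while is_stable([1.0, -2.5, 1.2]) returns False.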

0.6 A Controllable Probabilistic DE

The stochastic system is iterated forward in time using a synchronous DE updating scheme. In our work, a mode-dependent population updating equation with Markovian switching parameters is introduced in the hope of keeping track of the progress of individuals and further improving the search ability. A detailed algorithm design of CPDE can be found in Documentation S1.

In CPDE, there are two sub-optimizers: a population decreasing strategy (PDS) and a population increasing strategy (PIS). The probability of selecting a sub-optimizer to improve the online solution-searching status is governed entirely by a non-homogeneous Markov chain. For choosing the required sub-optimizer adaptively, consider the probability transition matrix in Eq. (5), whose rows and columns are indexed by the population-maintaining, population-increasing and population-decreasing states:

P = \begin{pmatrix} p_{MM} & p_{MI} & p_{MD} \\ p_{IM} & 0 & p_{ID} \\ p_{DM} & p_{DI} & 0 \end{pmatrix}.  (5)

Here, M, I and D stand for the population-maintaining, population-increasing and population-decreasing states, respectively. The transition expectations of the Markov chain are automatically updated according to the search environment. It is worth mentioning that p_{II} and p_{DD} are set to zero, so that the increasing and decreasing operators are not performed in consecutive generations.
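
As an illustration of how such a chain drives the choice of sub-optimizer, the sketch below samples the next population-size mode from a row of a 3x3 row-stochastic matrix. The numeric entries shown are placeholders we invented; in CPDE the probabilities are adapted online by the search environment, and only the zero entries for consecutive increases or decreases follow from the text.

```python
import numpy as np

MAINTAIN, INCREASE, DECREASE = 0, 1, 2

def next_mode(current_mode, P, rng=None):
    """Sample the next population-size mode from row `current_mode` of the transition matrix P."""
    rng = np.random.default_rng() if rng is None else rng
    return int(rng.choice(3, p=P[current_mode]))

# Placeholder transition matrix (NOT the paper's values); note the zero entries that
# forbid increasing or decreasing the population in two consecutive generations.
P_example = np.array([
    [0.6, 0.2, 0.2],   # from maintain
    [0.7, 0.0, 0.3],   # from increase
    [0.7, 0.3, 0.0],   # from decrease
])
```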

If few trial vectors outperform their corresponding parents in the selection operation, the particles may be clustered together and trapped in a local basin. In such a case, the PIS described by Eq. (6) is employed to add new individuals to the population so that they share up-to-date information and help the population escape the local basin. In Eq. (6), the number of dimensions chosen for perturbation decreases monotonically through the evolutionary search, parameters a and b are magnification coefficients, and the remaining quantities are generation variables. During the early stages of the optimization process, many more reproductions are generated to spread the particles within the decision space, whereas during the later period the solutions tend to concentrate in specific parts of the decision space.
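
Eq. (6) is not reproduced here; purely as an illustration of the increasing step, the sketch below clones promising individuals and perturbs a limited number of their coordinates, so that newcomers carry up-to-date information while still adding diversity. The choice of base individuals, the Gaussian perturbation and the parameter names are our assumptions, not the paper's exact operator.

```python
import numpy as np

def population_increase(pop, fit, n_new, n_dims_perturb, scale=0.1, rng=None):
    """Illustrative population-increasing step: clone promising individuals and perturb
    a limited number of their coordinates (minimization of `fit` is assumed)."""
    rng = np.random.default_rng() if rng is None else rng
    if n_new <= 0:
        return pop
    dim = pop.shape[1]
    order = np.argsort(fit)                               # best individuals first
    newcomers = []
    for i in range(n_new):
        base = pop[order[i % len(pop)]].copy()
        dims = rng.choice(dim, size=min(n_dims_perturb, dim), replace=False)
        base[dims] += scale * rng.standard_normal(dims.size)
        newcomers.append(base)
    return np.vstack([pop] + newcomers)
```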

However, if most individuals can spawn new promising offspring during the evolutionary process, this signifies that redundant intermediate particles exist. In this case, we introduce the PDS to remove poor particles and thus avoid undesirable computational cost and excessive search complexity. Eq. (7) defines an overall deletion indicator that combines the rank metric and the entropy metric of each individual. It can be observed from Eq. (7) that individuals with high rank values (i.e., far from the global best solution) or low entropy values (i.e., located in crowded regions) have a higher probability of elimination.
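
Eq. (7) is likewise not reproduced here; the sketch below only illustrates the idea of a deletion indicator that grows with an individual's rank (distance from the best in fitness ordering) and with how crowded its neighborhood is. The crowding-based proxy for the entropy metric and the equal weights are our assumptions.

```python
import numpy as np

def deletion_indicator(pop, fit):
    """Illustrative deletion indicator: higher for poorly ranked individuals and for
    individuals sitting in crowded regions of the search space."""
    n = len(fit)
    rank = np.argsort(np.argsort(fit)) / max(n - 1, 1)          # 0 = best, 1 = worst
    # crude crowding proxy: mean distance to the other individuals (low = crowded)
    dists = np.linalg.norm(pop[:, None, :] - pop[None, :, :], axis=-1)
    spread = dists.sum(axis=1) / max(n - 1, 1)
    spread = spread / (spread.max() + 1e-12)
    return 0.5 * rank + 0.5 * (1.0 - spread)

# Individuals with the largest indicator values would be the first candidates for removal.
```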

Supporting Information

Documentation S1.

Detailed algorithm design of CPDE.

https://doi.org/10.1371/journal.pone.0040549.s001

(PDF)

Author Contributions

Conceived and designed the experiments: WZ JAF YT. Performed the experiments: WZ JAF. Analyzed the data: WZ WD. Contributed reagents/materials/analysis tools: WBZ YT. Wrote the paper: WZ YT.

References

  1. Li Z, O'Doherty JE, Hanson TL, Lebedev MA, Henriquez CS, et al. (2009) Unscented Kalman filter for brain-machine interfaces. PLoS ONE 4: e6243.
  2. Ralph JF, Oxtoby NP (2011) Quantum filtering one bit at a time. Physical Review Letters 107: 274101.
  3. Pajevic S, Plenz D (2009) Efficient network reconstruction from dynamical cascades identifies small-world topology of neuronal avalanches. PLoS Computational Biology 5: e1000271.
  4. Tang Y, Gao H, Kurths J, Fang J (2012) Evolutionary pinning control and its application in UAV coordination. IEEE Transactions on Industrial Informatics. DOI: 10.1109/TII.2012.2187911.
  5. Luukko J, Rauma K (2008) Open-loop adaptive filter for power electronics applications. IEEE Transactions on Industrial Electronics 55: 910–917.
  6. Arenas A, Diaz-Guilera A, Kurths J, Moreno Y, Zhou C (2008) Synchronization in complex networks. Physics Reports 469: 93–153.
  7. Liu YH, Kuo CY, Chang CC, Wang CY (2011) Electro-osmotic flow through a two-dimensional screen-pump filter. Physical Review E 84: 036301.
  8. Luque SP, Fried R (2011) Recursive filtering for zero offset correction of diving depth time series with GNU R package diveMove. PLoS ONE 6: e15850.
  9. Tumminello M, Michele LF, Mantegna RN (2007) Kullback-Leibler distance as a measure of the information filtered from multivariate data. Physical Review E 76: 031123.
  10. Yu D, Parlitz U (2011) Inferring network connectivity by delayed feedback control. PLoS ONE 6: e24333.
  11. Chen S, Istepanian RH, Luk BL (2001) Digital IIR filter design using adaptive simulated annealing. Digital Signal Processing 11: 241–251.
  12. Karaboga N, Kalinli A, Karaboga D (2004) Designing IIR filters using ant colony optimisation algorithm. Engineering Applications of Artificial Intelligence 17: 301–309.
  13. Krusienski DJ, Jenkins WK (2004) Particle swarm optimization for adaptive IIR filter structures. In: Congress on Evolutionary Computation, CEC2004: 2805–2810.
  14. Dai C, Chen W, Zhu Y (2010) Seeker optimization algorithm for digital IIR filter design. IEEE Transactions on Industrial Electronics 51: 1710–1718.
  15. Karaboga N, Cetinkaya MB (2011) A novel and efficient algorithm for adaptive filtering: artificial bee colony algorithm. Turkish Journal of Electrical Engineering & Computer Sciences 19: 175–190.
  16. Karaboga N (2009) A new design method based on artificial bee colony algorithm for digital IIR filters. Journal of the Franklin Institute 346: 328–348.
  17. Karaboga N (2005) Digital IIR filter design using differential evolution algorithm. EURASIP Journal on Applied Signal Processing 8: 1269–1276.
  18. Dorigo M, Stutzle T (2004) Ant Colony Optimization. MIT Press, Cambridge.
  19. Tang Y, Wang Z, Fang J (2011) Feedback learning particle swarm optimization. Applied Soft Computing 11: 4713–4725.
  20. Dai C, Chen W, Li L, Zhu Y, Yang Y (2011) Seeker optimization algorithm for parameter estimation of time-delay chaotic systems. Physical Review E 83: 036203.
  21. Storn R, Price KV (1995) Differential evolution – A simple and efficient adaptive scheme for global optimization over continuous spaces. ICSI, Bloomington, MN, Tech. Rep. TR-95-012. Available: http://http.icsi.berkeley.edu/~storn/litera.html.
  22. Ghosh S, Das S, Vasilakos AV, Suresh K (2012) On convergence of differential evolution over a class of continuous functions with unique global optimum. IEEE Transactions on Systems, Man, and Cybernetics: Part B 42: 107–124.
  23. Das S, Suganthan PN (2011) Differential evolution: A survey of the state-of-the-art. IEEE Transactions on Evolutionary Computation 15: 4–31.
  24. Chakraborty UK (2008) Advances in Differential Evolution, Studies in Computational Intelligence. Springer, Berlin.
  25. Price KV, Chakraborty UK (2008) Eliminating drift bias from the differential evolution algorithm. Advances in Differential Evolution, Studies in Computational Intelligence 143: 33–88.
  26. Teng NS, Teo J, Hijazi M (2009) Self-adaptive population sizing for a tune-free differential evolution. Soft Computing 13: 709–724.
  27. Osipov GV, Kurths J, Zhou C (2007) Synchronization in Oscillatory Networks. Springer, Berlin.
  28. du Plessis MC, Engelbrecht AP (2012) Using competitive population evaluation in a differential evolution algorithm for dynamic environments. European Journal of Operational Research 218: 7–20.
  29. Yu D (2010) Estimating the topology of complex dynamical networks by steady state control: Generality and limitation. Automatica 46: 2035–2040.
  30. Brest J, Maucec MS (2011) Self-adaptive differential evolution algorithm using population size reduction and three strategies. Soft Computing 15: 2157–2174.
  31. Lin WZ, Fang JA, Xiao X, Chou KC (2011) iDNA-Prot: Identification of DNA binding proteins using random forest with grey model. PLoS ONE 6: e24756.
  32. Xiao X, Wang P, Chou KC (2011) GPCR-2L: predicting G protein-coupled receptors and their types by hybridizing two different modes of pseudo amino acid compositions. Molecular BioSystems 7: 911–919.
  33. Xiao X, Wang P, Chou KC (2012) iNR-PhysChem: A sequence-based predictor for identifying nuclear receptors and their subfamilies via physical-chemical property matrix. PLoS ONE 7: e30869.
  34. Wang P, Xiao X, Chou KC (2011) NR-2L: A two-level predictor for identifying nuclear receptor subfamilies based on sequence-derived features. PLoS ONE 6: e23505.
  35. Wu ZC, Xiao X, Chou KC (2011) iLoc-Plant: a multi-label classifier for predicting the subcellular localization of plant proteins with both single and multiple sites. Molecular BioSystems 7: 3287–3297.
  36. Wu ZC, Xiao X, Chou KC (2012) iLoc-Gpos: A multi-layer classifier for predicting the subcellular localization of singleplex and multiplex Gram-positive bacterial proteins. Protein & Peptide Letters 19: 4–14.
  37. Xiao X, Wu ZC, Chou KC (2011) A multi-label classifier for predicting the subcellular localization of Gram-negative bacterial proteins with both single and multiple sites. PLoS ONE 6: e20592.
  38. Chou KC, Wu ZC, Xiao X (2012) iLoc-Hum: Using accumulation-label scale to predict subcellular locations of human proteins with both single and multiple sites. Molecular BioSystems 8: 629–641.
  39. Xiao X, Wu ZC, Chou KC (2011) iLoc-Virus: A multi-label learning classifier for identifying the subcellular localization of virus proteins with both single and multiple sites. Journal of Theoretical Biology 284: 42–51.
  40. Tang Y, Wang Z, Fang J (2009) Pinning control of fractional-order weighted complex networks. Chaos 19: 013112.
  41. Schumann-Bischoff J, Parlitz U (2011) State and parameter estimation using unconstrained optimization. Physical Review E 84: 056214.
  42. Mallipeddi R, Suganthan PN, Pan QK, Tasgetiren MF (2011) Differential evolution algorithm with ensemble of parameters and mutation strategies. Applied Soft Computing 11: 1679–1696.
  43. Qin AK, Huang VL, Suganthan PN (2009) Differential evolution algorithm with strategy adaptation for global numerical optimization. IEEE Transactions on Evolutionary Computation 13: 398–417.
  44. Bostani N, Kessler DA, Shnerb NM, Rappel WJ, Levine H (2012) Noise effects in nonlinear biochemical signaling. Physical Review E 85: 011901.
  45. Gao J, Hu J, Tung W (2011) Facilitating joint chaos and fractal analysis of biosignals through nonlinear adaptive filtering. PLoS ONE 6: e24331.
  46. Peng HP, Li LX, Yang YX, Sun F (2011) Conditions of parameter identification from time series. Physical Review E 83: 036202.
  47. Lu LY, Liu WP (2011) Information filtering via preferential diffusion. Physical Review E 83: 066119.
  48. Wolpert DH, Macready WG (1997) No free lunch theorems for optimization. IEEE Transactions on Evolutionary Computation 1: 67–82.
  49. Merlin JC, Rajasekaran S, Mi T, Schiller MR (2012) Reducing false-positive prediction of minimotifs with a genetic interaction filter. PLoS ONE 7: e32630.
  50. van Dijk ADJ, van Mourik S, van Ham RCHJ (2012) Mutational robustness of gene regulatory networks. PLoS ONE 7: e30591.
  51. Kiss IZ, Rusin C, Kori H, Hudson JL (2007) Engineering complex dynamical structures: Sequential patterns and desynchronization. Science 316: 1886–1889.
  52. Achard S, Bullmore E (2007) Efficiency and cost of economical brain functional networks. PLoS Computational Biology 3: e17.
  53. Chang X, Liu S, Yu YT, Li YX, Li YY (2010) Identifying modules of coexpressed transcript units and their organization of Saccharopolyspora erythraea from time series gene expression profiles. PLoS ONE 5: e12126.
  54. Hansen N, Ostermeier A (2001) Completely derandomized self-adaptation in evolution strategies. Evolutionary Computation 9: 159–195.
  55. Garcia-Martinez C, Lozano M, Herrera F, Molina D, Sanchez AM (2008) Global and local real-coded genetic algorithms based on parent-centric crossover operators. European Journal of Operational Research 185: 1088–1113.
  56. Brest J, Greiner S, Boscovic B, Mernik M, Zumer V (2006) Self-adapting control parameters in differential evolution: a comparative study on numerical benchmark problems. IEEE Transactions on Evolutionary Computation 10: 646–657.