
Non-fragile mixed H∞ and passive synchronization of Markov jump neural networks with mixed time-varying delays and randomly occurring controller gain fluctuation

Abstract

This paper studies the non-fragile mixed H∞ and passive synchronization problem for Markov jump neural networks. The phenomenon of randomly occurring controller gain fluctuations is incorporated into the non-fragile strategy. Moreover, mixed time-varying delays composed of discrete and distributed delays are considered. By employing stochastic stability theory, synchronization criteria are developed for the Markov jump neural networks. On the basis of the derived criteria, the non-fragile synchronization controller is designed. Finally, an illustrative example is presented to demonstrate the validity of the control approach.

Introduction

Dynamic behaviors of neural networks have attracted significant attention, owing to their wide range of current and potential applications, e.g., signal processing, optimization and pattern recognition [1–9]. In particular, the study of Markov jump neural networks has been a significant topic in recent years, since this model can better describe neural networks whose structure switches among different configurations in practice. Generally speaking, the mode jumps in Markov jump neural networks are commonly assumed to be governed by an ideal homogeneous Markov chain. With the help of analysis and synthesis techniques for Markov jump systems, some remarkable results on Markov jump neural networks have been reported in the literature [10–19].

On another research line, the synchronization problem has become a hot topic in the field of neural networks [9, 12]. When one neural network synchronizes with another, the pair can display complicated dynamic behaviors, which give insight into the characteristics of neural networks. As is well known, time delays exist in neural networks, so there is a need to study the synchronization problem in the presence of time delays [20, 21]. Moreover, another unavoidable factor affecting synchronization in neural networks is disturbance. As a result, several effective synchronization strategies for neural networks with disturbances have been proposed, especially for some finite-time cases [22–28].

It is worth mentioning that passivity theory has provided a powerful tool for the analysis and synthesis of complex dynamic systems [29, 30]. Initial research has addressed mixed H∞ and passive filtering design, which provides a more flexible framework than the common H∞ approach [31]. In addition, a non-fragile synchronization controller should be designed to attenuate controller gain fluctuations [32]. Furthermore, such gain fluctuations can occur in a stochastic way [33]. Consequently, a natural question arises: how can the non-fragile mixed H∞ and passive synchronization problem for Markov jump neural networks be solved? Unfortunately, up to now, this question has not been fully addressed and remains open.

This paper deals with the above question. A stochastic variable is adopted to describe the controller gain fluctuation. Based on stochastic methods, synchronization criteria are first established such that the drive and response Markov jump neural networks can be synchronized in the presence of mixed time-varying delays and disturbance. Based on the derived results, a design procedure for the synchronization controller is given.

The remainder of the paper is arranged as follows. The Markov jump neural network model is first introduced, and the non-fragile synchronization problem is formulated. The main results of the synchronization problem are then provided. Moreover, the simulation results are given and this paper is concluded in the end.

Notation: ℝⁿ denotes the n-dimensional Euclidean space and ℝ^(m×n) denotes the set of m × n real matrices. L2[0, ∞) denotes the space of square-integrable vector functions over [0, ∞). (Ω, ℱ, 𝒫) is a probability space, where Ω is the sample space, ℱ is the σ-algebra of subsets of the sample space and 𝒫 is the probability measure on ℱ. Pr{α} denotes the occurrence probability of the event α, and Pr{α|β} denotes the occurrence probability of α conditional on β. 𝔼[x] denotes the expectation of the stochastic variable x and 𝔼[x|y] denotes the expectation of x conditional on the stochastic variable y. * denotes the ellipsis in symmetric block matrices and diag{·} denotes a block-diagonal matrix.

Methods

Consider the Markov jump neural networks with mixed time-varying delays defined on the probability space (Ω, ℱ, 𝒫): (1) where x(t) = [x1(t), x2(t), …, xn(t)]T denotes the state of the neurons; f(x(t)) = [f1(x1(t)), f2(x2(t)), …, fn(xn(t))]T is the neuron activation function; C(r(t)) is a diagonal matrix with positive entries; the matrices A(r(t)) = (aij(r(t)))n×n, B(r(t)) = (bij(r(t)))n×n and D(r(t)) = (dij(r(t)))n×n represent the connection weight matrix, the discretely delayed connection weight matrix and the distributively delayed connection weight matrix, respectively; τ(t) and d(t) denote the discrete delay and distributed delay, respectively, which satisfy (2) (3) where the bounds τ̄, μ and d̄ are known positive constants. The initial condition of Eq (1) is given by x(s) = ϕ(s) on the delay interval.
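To make the model concrete, the following is a minimal Euler-discretization sketch of the drive network Eq (1) for a single fixed mode. All matrices C, A, B, D below are hypothetical placeholders (the paper's mode-dependent matrices appear only in the Discussion section), and f = tanh is one activation satisfying Assumption 1. The distributed-delay integral is approximated by a Riemann sum.

```python
import numpy as np

def simulate_drive(C, A, B, D, tau, d, x0, t_end, dt):
    """Euler simulation of dx/dt = -C x + A f(x) + B f(x(t-tau)) + D int_{t-d}^t f(x(s)) ds."""
    f = np.tanh
    steps = int(t_end / dt)
    hist = max(int(tau / dt), int(d / dt)) + 1   # history buffer length
    x = np.zeros((steps + hist, len(x0)))
    x[:hist] = x0                                # constant initial function phi(s) = x0
    for k in range(hist, steps + hist):
        delayed = f(x[k - int(tau / dt)])                # discrete delay term
        dist = f(x[k - int(d / dt):k]).sum(axis=0) * dt  # distributed delay (Riemann sum)
        dx = -C @ x[k - 1] + A @ f(x[k - 1]) + B @ delayed + D @ dist
        x[k] = x[k - 1] + dt * dx
    return x[hist:]

# Hypothetical 2-neuron example
C = np.eye(2)
A = np.array([[2.0, -0.1], [-5.0, 3.0]])
B = np.array([[-1.5, -0.1], [-0.2, -2.5]])
D = 0.1 * np.eye(2)
traj = simulate_drive(C, A, B, D, tau=0.3, d=0.2,
                      x0=np.array([1.0, -1.0]), t_end=1.0, dt=0.01)
```

Since the activation is bounded and the −Cx term is stabilizing, the trajectory remains bounded for these placeholder values.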

{r(t), t ≥ 0} is a right-continuous, continuous-time Markov process described as (4) with Δt > 0 and lim_{Δt→0} o(Δt)/Δt = 0, where πij ≥ 0 (i ≠ j) is the transition rate from mode i at time t to mode j at time t + Δt, while πii = −∑_{j≠i} πij.
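A short sketch of how mode jumps governed by such transition rates can be simulated, using the first-order approximation Pr{r(t+Δt) = j | r(t) = i} ≈ πij Δt from Eq (4). The rate matrix below is hypothetical; the paper's actual rates appear only in the Discussion section.

```python
import numpy as np

def simulate_markov_modes(rate_matrix, t_end, dt, r0=0, seed=0):
    """Simulate a continuous-time Markov chain on a uniform time grid."""
    rng = np.random.default_rng(seed)
    n_steps = int(t_end / dt)
    modes = np.empty(n_steps, dtype=int)
    r = r0
    for k in range(n_steps):
        modes[k] = r
        probs = rate_matrix[r] * dt              # jump probabilities pi_ij * dt, j != i
        probs[r] = 1.0 + rate_matrix[r, r] * dt  # stay probability 1 + pi_ii * dt
        r = rng.choice(len(probs), p=probs)
    return modes

rates = np.array([[-0.5, 0.5],
                  [0.3, -0.3]])  # hypothetical transition rate matrix (rows sum to zero)
modes = simulate_markov_modes(rates, t_end=10.0, dt=0.01)
```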

Assumption 1. The activation function f(x(t)) in Eq (1) is continuous and bounded, and satisfies (5) where fi(0) = 0 and the sector bounds αi− and αi+ are known real constants.
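Since inequality (5) is not reproduced here, the following is the standard sector-bounded form that such assumptions typically take, with αi− and αi+ the known scalar bounds; this reconstruction is an assumption based on the surrounding text, not the paper's exact statement.

```latex
\alpha_i^{-} \le \frac{f_i(x) - f_i(y)}{x - y} \le \alpha_i^{+},
\qquad \forall\, x, y \in \mathbb{R},\ x \ne y,\quad i = 1, \dots, n.
```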

Denote Eq (1) as the drive neural network. For simplicity, we denote the Markov process r(t) by the index i. Moreover, it is assumed that the modes of the drive and response neural networks are identical at all times [34]. Then, the response neural network can be given by (6) where u(t) denotes the control input and ω(t) is the disturbance.

Define the synchronization error as (7); then it follows that (8)

We design the following mode-dependent controller: (9) where Ki is the mode-dependent controller gain and ΔKi is the controller gain fluctuation with (10) where Hi and Ei are known constant matrices and F(t) satisfies (11)

The stochastic variable characterizing the random occurrence of the gain fluctuation is defined by (12) with (13) (14) where δ ∈ [0, 1] is a known constant.
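A sketch of how the non-fragile control input of Eqs (9)–(14) can be computed: the gain fluctuation ΔKi = Hi F(t) Ei occurs according to a Bernoulli variable with Pr{β(t) = 1} = δ. The gain K, perturbation structure H, E, and the choice F(t) = sin(t)·I are hypothetical; any F(t) with Fᵀ(t)F(t) ≤ I is admissible.

```python
import numpy as np

def control_input(K, H, E, e, t, delta, rng):
    """u(t) = (K + beta(t) * dK) e(t), with dK = H F(t) E."""
    beta = float(rng.random() < delta)   # Bernoulli: fluctuation occurs w.p. delta
    F = np.sin(t) * np.eye(E.shape[0])   # time-varying uncertainty, F^T F <= I
    dK = H @ F @ E                       # controller gain fluctuation
    return (K + beta * dK) @ e

rng = np.random.default_rng(1)
K = -3.0 * np.eye(2)                     # hypothetical nominal gain
H = 0.1 * np.eye(2)
E = np.eye(2)
u = control_input(K, H, E, e=np.array([1.0, -1.0]), t=0.5, delta=0.5, rng=rng)
```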

Consequently, System (8) can be rewritten as (15)

The following definitions and lemmas are introduced.

Definition 1. [31] System (15) is said to have mixed H∞ and passive performance γ if there exists a constant γ > 0 such that (16) holds for all tp > 0 and any non-zero disturbance, where θ ∈ [0, 1] represents the parameter that defines the trade-off between H∞ and passive performance.
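Inequality (16) is not reproduced above. The standard form of the mixed H∞/passive performance index from [31], which Definition 1 presumably instantiates, is as follows; this reconstruction is an assumption and the paper's exact weighting may differ.

```latex
\mathbb{E}\!\left\{ \int_{0}^{t_p} \left[ \gamma^{-1}\theta\, z^{T}(s) z(s)
- 2(1-\theta)\, z^{T}(s)\,\omega(s) \right] ds \right\}
\le \gamma \int_{0}^{t_p} \omega^{T}(s)\,\omega(s)\, ds .
```

Setting θ = 1 recovers the H∞ performance with level γ, while θ = 0 recovers the passivity condition.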

Definition 2. The mixed H∞ and passive synchronization of the Markov jump neural networks Eqs (1) and (6) is said to be achieved if System (15) achieves the mixed H∞ and passive performance with the prescribed γ.

Lemma 1. [35] For any symmetric positive definite constant matrix M, scalars h1, h2 satisfying h1 < h2, and a vector function x(s) such that the integrations concerned are well defined, (17) holds.
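Inequality (17) is not reproduced above; the classical Jensen integral inequality from [35], which Lemma 1 presumably states, reads:

```latex
\left( \int_{h_1}^{h_2} x(s)\, ds \right)^{\!T} M
\left( \int_{h_1}^{h_2} x(s)\, ds \right)
\le (h_2 - h_1) \int_{h_1}^{h_2} x^{T}(s)\, M\, x(s)\, ds .
```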

Lemma 2. [36] For any matrix M > 0, scalar τ > 0 and τ(t) satisfying 0 ≤ τ(t) ≤ τ, and a vector function such that the concerned integrations are well defined, (18) holds, where (19) (20)

Lemma 3. [37] Let L = LT, and let H and E be real matrices of appropriate dimensions, with F(t) satisfying FT(t)F(t) ≤ I. Then L + HF(t)E + ET FT(t)HT < 0 if and only if there exists a scalar ε > 0 such that L + ε−1 HHT + εET E < 0, or equivalently, (21)

Results

In this section, delay-dependent synchronization conditions will be developed, based on which the non-fragile synchronization controller is designed.

Theorem 1. For given upper bounds τ̄ and d̄ of the mixed time-varying delays, and given scalars θ and γ, the mixed H∞ and passive synchronization of the Markov jump neural networks Eqs (1) and (6) can be achieved in the sense of Definitions 1 and 2 if there exist mode-dependent symmetric matrices Pi > 0, symmetric matrices Q > 0, R > 0 and a constant ε > 0 such that (22) holds for all modes, where (23) (24) (25) (26) (27) (28) (29) (30)

Proof. Choose the Lyapunov–Krasovskii functional: (31) where (32) (33) (34) (35) (36)

The infinitesimal operator of V(t) is defined by (37)

Then for each mode i, by taking the derivative of Eq (31) along the solution of System (15), one has (38) (39) (40) (41)

By Lemma 1 and Lemma 2, it holds that: (42) (43)

It follows from Assumption 1 that (44) (45) such that the following inequalities hold (46) (47)

Define (48)

Noting that , it can be deduced that (49) where (50) (51) (52) (53) (54) (55) (56) (57)

By Schur complement [38], it can be obtained that is equivalent to , where (58) (59) (60) (61)

By performing a congruence transformation on Eq (58) and considering the inequality −Pi R−1 Pi ≤ R − 2Pi, the condition can be further rewritten as (62) where (63) (64) (65) (66)

Then, it can be derived by Lemma 3 that the condition holds if Πi < 0. Therefore, under the zero initial condition, integrating both sides of Eq (48) yields J ≤ 0, which means that the mixed H∞ and passive synchronization of the Markov jump neural networks is achieved according to Definition 2. This completes the proof.

Theorem 2. For given upper bounds τ̄ and d̄ of the mixed time-varying delays, and given scalars θ and γ, the mixed H∞ and passive synchronization of the Markov jump neural networks Eqs (1) and (6) can be achieved in the sense of Definitions 1 and 2 if there exist mode-dependent symmetric matrices Pi > 0, mode-dependent matrices Vi, symmetric matrices Q > 0, R > 0 and a constant ε > 0 such that (67) holds for all modes, where (68) (69) (70) (71) (72) (73) the remaining terms are defined in Eq (22), and the controller gain can be obtained as Ki = Pi−1 Vi.

Proof. Let Vi = Pi Ki. The rest of the proof follows directly from the proof of Theorem 1.
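Once LMI (67) is solved for Pi and Vi (e.g., with an SDP solver), recovering the gain Ki = Pi−1 Vi is a linear solve. The matrices below are hypothetical stand-ins for LMI solutions, not the gains computed in the paper.

```python
import numpy as np

def recover_gain(P, V):
    """K = P^{-1} V, computed via a linear solve rather than an explicit inverse."""
    return np.linalg.solve(P, V)

P1 = np.array([[2.0, 0.1], [0.1, 1.5]])   # hypothetical: symmetric positive definite
V1 = np.array([[-3.0, 0.2], [0.4, -2.5]])  # hypothetical LMI solution
K1 = recover_gain(P1, V1)
```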

Discussion

To verify the designed synchronization scheme, the following simulation example is presented.

Consider the Markov jump neural networks with two modes, where and the neuron activation function is taken as which satisfies Assumption 1 with and , such that

In the simulation, the transition probability matrix is given as where the time step is set as Δt = 0.01.

The time-varying delays are assumed to be τ(t) = 0.25 + 0.05 sin t and d(t) = 0.15 + 0.05 cos t, such that τ̄ = 0.3, μ = 0.05 and d̄ = 0.2. The disturbance is supposed to be The parameters δ, θ and γ are set as δ = 0.5, θ = 0.4 and γ = 0.2.
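The stated delay bounds follow directly from the delay expressions and can be checked numerically:

```python
import numpy as np

# tau(t) = 0.25 + 0.05 sin t  ->  upper bound 0.3, derivative bound mu = 0.05
# d(t)   = 0.15 + 0.05 cos t  ->  upper bound 0.2
t = np.linspace(0.0, 100.0, 100001)
tau = 0.25 + 0.05 * np.sin(t)
d = 0.15 + 0.05 * np.cos(t)

tau_bar = tau.max()                    # upper bound of tau(t)
d_bar = d.max()                        # upper bound of d(t)
mu = np.abs(0.05 * np.cos(t)).max()    # bound on |tau'(t)|
```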

The controller gain fluctuation satisfies the structure in Eq (10) with

By solving Ψi < 0, i = 1, 2 in Theorem 2, the mode-dependent controller gains can be obtained as follows:

Set the initial values as x(0) = [1, −1]T and y(0) = [−5, 5]T, respectively. Under the mode evolution shown in S1 Fig, it can be seen from S2 and S3 Figs that synchronization is achieved with the designed mode-dependent controllers, which demonstrates the effectiveness of our control scheme.

Conclusion

The non-fragile mixed H∞ and passive synchronization problem of Markov jump neural networks with mixed time-varying delays has been addressed. By utilizing stochastic stability theory, delay-dependent criteria have been derived to ensure that the desired synchronization is achieved, and the non-fragile synchronization controller has been designed. An interesting direction for further research is to extend the derived results to systems with parameter uncertainties.

Supporting information

S1 Fig. The jumping modes of the neural networks.

https://doi.org/10.1371/journal.pone.0175676.s001

(TIF)

S2 Fig. The controlled synchronization error of the neural networks.

https://doi.org/10.1371/journal.pone.0175676.s002

(TIF)

S3 Fig. The control input of the neural networks.

https://doi.org/10.1371/journal.pone.0175676.s003

(TIF)

Author Contributions

  1. Conceptualization: CM.
  2. Data curation: CM.
  3. Formal analysis: CM.
  4. Funding acquisition: CM.
  5. Investigation: CM.
  6. Methodology: CM.
  7. Project administration: CM.
  8. Resources: CM.
  9. Software: CM.
  10. Supervision: CM.
  11. Validation: CM.
  12. Visualization: CM.
  13. Writing – original draft: CM.
  14. Writing – review & editing: CM.

References

  1. Cao J, Liang J. Boundedness and stability for Cohen–Grossberg neural network with time-varying delays. Journal of Mathematical Analysis and Applications. 2004;296(2):665–685.
  2. Wang W, Li L, Peng H, Xiao J, Yang Y. Synchronization control of memristor-based recurrent neural networks with perturbations. Neural Networks. 2014;53:8–14. pmid:24524891
  3. Zhang H, Wang X, Lin X, Liu C. Stability and synchronization for discrete-time complex-valued neural networks with time-varying delays. PLoS ONE. 2014;9(4):e93838. pmid:24714386
  4. Liu H, Wang X, Tan G. Adaptive cluster synchronization of directed complex networks with time delays. PLoS ONE. 2014;9(4):e95505. pmid:24763228
  5. Liao X, Chen G, Sanchez E. Delay-dependent exponential stability analysis of delayed neural networks: an LMI approach. Neural Networks. 2002;15(7):855–866. pmid:14672162
  6. Cao J, Wang J. Global asymptotic and robust stability of recurrent neural networks with time delays. IEEE Transactions on Circuits and Systems I: Regular Papers. 2005;52(2):417–426.
  7. Wang Z, Liu Y, Liu X. On global asymptotic stability of neural networks with discrete and distributed delays. Physics Letters A. 2005;345(4):299–308.
  8. Liu P. Delay-dependent global exponential robust stability for delayed cellular neural networks with time-varying delay. ISA Transactions. 2013;52(6):711–716. pmid:23870320
  9. Du Y, Xu R. Robust synchronization of an array of neural networks with hybrid coupling and mixed time delays. ISA Transactions. 2014;53(4):1015–1023. pmid:24709387
  10. Zhao X, Zeng Q. New robust delay-dependent stability and H∞ analysis for uncertain Markovian jump systems with time-varying delays. Journal of the Franklin Institute. 2010;347(5):863–874.
  11. Shen H, Huang X, Zhou J, Wang Z. Global exponential estimates for uncertain Markovian jump neural networks with reaction-diffusion terms. Nonlinear Dynamics. 2012;69(1-2):473–486.
  12. Liu Y, Wang Z, Liang J, Liu X. Stability and synchronization of discrete-time Markovian jumping neural networks with mixed mode-dependent time delays. IEEE Transactions on Neural Networks. 2009;20(7):1102–1116. pmid:19473937
  13. Arunkumar A, Sakthivel R, Mathiyalagan K, Park J. Robust stochastic stability of discrete-time fuzzy Markovian jump neural networks. ISA Transactions. 2014;53(4):1006–1014. pmid:24933353
  14. Xu Y, Jin X, Zhang H, Yang T. The availability of logical operation induced by dichotomous noise for a nonlinear bistable system. Journal of Statistical Physics. 2013;152(4):753–768.
  15. Xu Y, Wu J, Zhang HQ, Ma SJ. Stochastic resonance phenomenon in an underdamped bistable system driven by weak asymmetric dichotomous noise. Nonlinear Dynamics. 2012;70(1):531–539.
  16. Wang Z, Xu Y, Yang H. Lévy noise induced stochastic resonance in an FHN model. Science China Technological Sciences. 2016;59(3):371–375.
  17. Xu Y, Li Y, Li J, Feng J, Zhang H. The phase transition in a bistable Duffing system driven by Lévy noise. Journal of Statistical Physics. 2015;158(1):120–131.
  18. Xu Y, Feng J, Li J, Zhang H. Lévy noise induced switch in the gene transcriptional regulatory system. Chaos: An Interdisciplinary Journal of Nonlinear Science. 2013;23(1):013110.
  19. Xu Y, Feng J, Li J, Zhang H. Stochastic bifurcation for a tumor–immune system with symmetric Lévy noise. Physica A: Statistical Mechanics and its Applications. 2013;392(20):4739–4748.
  20. Yu W, Cao J. Adaptive synchronization and lag synchronization of uncertain dynamical system with time delay based on parameter identification. Physica A: Statistical Mechanics and its Applications. 2007;375(2):467–482.
  21. Karimi H, Gao H. New delay-dependent exponential synchronization for uncertain neural networks with mixed time delays. IEEE Transactions on Systems, Man, and Cybernetics, Part B (Cybernetics). 2010;40(1):173–185.
  22. Qi D, Liu M, Qiu M, Zhang S. Exponential H∞ synchronization of general discrete-time chaotic neural networks with or without time delays. IEEE Transactions on Neural Networks. 2010;21(8):1358–1365. pmid:20601309
  23. Shen B, Wang Z, Liu X. Bounded H∞ synchronization and state estimation for discrete time-varying stochastic complex networks over a finite horizon. IEEE Transactions on Neural Networks. 2011;22(1):145–157. pmid:21095865
  24. Yang X, Feng Z, Feng J, Cao J. Synchronization of discrete-time neural networks with delays and Markov jump topologies based on tracker information. Neural Networks. 2017;85:157–164. pmid:27846430
  25. Yang X, Lu J. Finite-time synchronization of coupled networks with Markovian topology and impulsive effects. IEEE Transactions on Automatic Control. 2016;61(8):2256–2261.
  26. Yang X, Ho DW, Lu J, Song Q. Finite-time cluster synchronization of T–S fuzzy complex networks with discontinuous subsystems and random coupling delays. IEEE Transactions on Fuzzy Systems. 2015;23(6):2302–2316.
  27. Yang X, Ho DW. Synchronization of delayed memristive neural networks: robust analysis approach. IEEE Transactions on Cybernetics. 2016;46(12):3377–3387.
  28. Yang X, Cao J, Liang J. Exponential synchronization of memristive neural networks with delays: interval matrix method. IEEE Transactions on Neural Networks and Learning Systems. 2016;
  29. Gao H, Chen T, Chai T. Passivity and passification for networked control systems. SIAM Journal on Control and Optimization. 2007;46(4):1299–1322.
  30. Ma C, Zeng Q, Zhang L, Zhu Y. Passivity and passification for Markov jump genetic regulatory networks with time-varying delays. Neurocomputing. 2014;136:321–326.
  31. Wu Z, Park J, Su H, Song B, Chu J. Mixed H∞ and passive filtering for singular systems with time delays. Signal Processing. 2013;93(7):1705–1711.
  32. Fang M, Park J. Non-fragile synchronization of neural networks with time-varying delay and randomly occurring controller gain fluctuation. Applied Mathematics and Computation. 2013;219(15):8009–8017.
  33. Yang X, Cao J, Lu J. Synchronization of randomly coupled neural networks with Markovian jumping and time-delay. IEEE Transactions on Circuits and Systems I: Regular Papers. 2013;60(2):363–376.
  34. Yang X, Cao J, Lu J. Synchronization of Markovian coupled neural networks with nonidentical node-delays and random coupling strengths. IEEE Transactions on Neural Networks and Learning Systems. 2012;23(1):60–71. pmid:24808456
  35. Zhu X, Yang G. Jensen integral inequality approach to stability analysis of continuous-time systems with time-varying delay. IET Control Theory & Applications. 2008;2(6):524–534.
  36. Park P, Ko J, Jeong C. Reciprocally convex approach to stability of systems with time-varying delays. Automatica. 2011;47(1):235–238.
  37. Xie L. Output feedback H∞ control of systems with parameter uncertainty. International Journal of Control. 1996;63(4):741–750.
  38. Wu M, He Y, She J, Liu G. Delay-dependent criteria for robust stability of time-varying delay systems. Automatica. 2004;40(8):1435–1439.