
New conditions for stability of multiple delayed Cohen-Grossberg Neural Networks of neutral-type

Abstract

In this research article, we examine the stability properties of a class of Cohen-Grossberg neural networks involving multiple delay parameters. These delay parameters complicate the dynamical behaviour of the system, increasing the risk of oscillations and chaotic behaviour that adversely affect stability. Under specific constraints on the system parameters, however, the stability of the system can be ensured. In this study, we develop new sufficient conditions that guarantee global asymptotic stability for neutral-type Cohen-Grossberg artificial neural networks with multiple delays. These conditions, which can serve as an alternative to the results in the literature, are derived by utilizing suitable Lyapunov functionals and the Lyapunov stability theorem. The proposed stability conditions are formulated as algebraic expressions involving only the system parameters, so they can be easily checked using simple mathematical methods and software tools. Through a detailed analysis of an instructive numerical example, the results obtained in this article are also shown to establish alternative stability criteria to the corresponding stability conditions given in the past literature.

1. Introduction

In recent years, stability analysis of many different types of neural systems has received particular attention, because these networks have proved capable of solving important practical engineering problems in areas such as signal and image processing, control theory, intelligent computational systems, fault diagnosis, pattern recognition, optimization, and associative memories (e.g., see [1–5]). When using these neural networks in engineering applications, one needs to guarantee the convergence dynamics of the designed neural network models. It is therefore crucially important to set up suitable network parameters that ensure the desired equilibrium properties and stability dynamics of the implemented neural network. In the recent literature, several neural network classes have been successfully applied to real-time engineering problems. In such real-time applications, the convergence dynamics and stability properties may vary due to delay parameters resulting from the finite switching rates of electronic circuit components as well as the signal transmission times of neurons. For instance, introducing delay parameters into the mathematical model of a stable non-delayed neural network may render it unstable. For such reasons, stability analysis is a critical concept for dynamical neural systems whose mathematical models include time delay parameters. Recently, a very large number of papers have carried out stability investigations of neural networks that admit time delays in the state variables of neurons (see, e.g., [6–14]).
It should be remarked at this point that introducing time delay parameters only into the state variables of network neurons may not provide an appropriate modeling approach for dynamical neural networks, as the time derivatives of the neuron state variables can additionally possess different types of delays. This class of neural networks may exhibit unexpected and complicated dynamical behaviours, which is why such delays must be introduced into the mathematical models used for these neural systems. The literature refers to delays in the state variables as time delays, whereas delays in the derivatives of the state variables are called neutral delays. Neural models of this kind, referred to as neutral-type, have been successfully applied to specific problems including population ecology, distributed systems with lossless transmission lines, and propagation-diffusion models [15–17]. Hence, conducting stability analysis of neutral neural networks is of crucial importance.

In this research paper, our primary aim is to analyze the convergence dynamics and stability characteristics of Cohen-Grossberg neural systems involving multiple delay parameters. The most commonly used mathematical model for these neural systems is given by the following formulation:

(1)

In this mathematical model, n represents the number of neurons; each neuron i has a state variable, an amplification function, and a behaved function. The neuronal interconnection parameters are constant, as are the time delays, the neutral delays, and the remaining coefficients. Each neuron i also possesses a nonlinear activation function and a constant external input. For the neutral-type neural network represented by equation (1), the initial conditions are defined over an interval whose length is determined by the largest of the delays, and the initial functions are continuous functions from this interval to R, usually covered in a compact set.

The neural network given in the form described by (1) has time delays and neutral delays. Because of these delay parameters, neural system (1) is said to have multiple delays.
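To make the multiple-delay structure concrete, the following sketch simulates a small two-neuron network of the standard neutral-type Cohen-Grossberg form found in this literature. All parameter values, delays, and function choices below are hypothetical, chosen only for illustration; this is not the paper's example.

```python
import numpy as np

# Illustrative (hypothetical) two-neuron neutral-type Cohen-Grossberg model:
#   dx_i/dt = a_i(x_i) [ -b_i(x_i) + sum_j c_ij f(x_j(t))
#                        + sum_j d_ij f(x_j(t - tau))
#                        + sum_j e_ij dx_j/dt(t - zeta) + u_i ]
# simulated with a fixed-step Euler scheme that keeps histories of both
# the states (for the time delay) and the derivatives (for the neutral delay).
def simulate(T=30.0, h=0.005, tau=1.0, zeta=0.5):
    n = 2
    C = np.array([[0.10, -0.10], [0.05, 0.10]])   # instantaneous weights
    D = np.array([[0.10, 0.05], [-0.05, 0.10]])   # time-delayed weights
    E = np.array([[0.05, 0.00], [0.00, 0.05]])    # neutral (derivative) weights
    u = np.array([0.2, -0.1])                     # constant external inputs
    f = np.tanh                                   # activation function
    a = lambda x: 1.0 + 0.5 / (1.0 + x ** 2)      # positive, bounded amplification
    b = lambda x: 2.0 * x                         # strictly increasing behaved function
    steps = round(T / h)
    dtau, dzeta = round(tau / h), round(zeta / h)
    x = np.zeros((steps + 1, n))
    xd = np.zeros((steps + 1, n))   # stored derivatives; pre-history taken as t = 0 value
    x[0] = [1.0, -1.0]              # constant initial state history
    for k in range(steps):
        x_tau = x[max(k - dtau, 0)]      # x(t - tau), clipped into the history
        xd_zeta = xd[max(k - dzeta, 0)]  # dx/dt(t - zeta)
        rhs = -b(x[k]) + C @ f(x[k]) + D @ f(x_tau) + E @ xd_zeta + u
        xd[k] = a(x[k]) * rhs
        x[k + 1] = x[k] + h * xd[k]
    return x

traj = simulate()
print(traj[-1])  # the states settle near the unique equilibrium point
```

With these small interconnection weights relative to the slope of the behaved functions, the trajectory converges; enlarging the delayed or neutral weights is a quick way to observe the oscillatory behaviour that such delay parameters can cause.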

Various subclasses of the neural system (1) can be stated in which the system has fewer delay parameters. A neural network that involves n time delays and n neutral delays is said to exhibit discrete delay terms and is modeled by the following mathematical expression:

(2)

The following vector and matrix representation can be used to express the delayed neural system defined by (2):

(3)

in which the discrete time delays and the discrete neutral delays are constants, the system matrices collect the constant interconnection parameters, and the state vector of neural system (2) is accompanied by its time-delayed and neutral-delayed counterparts.

If a neural system possesses only one time delay parameter and only one neutral delay term, then this neural network is said to have two delays. Then, a Cohen-Grossberg neural network involving such delay terms will have the mathematical model given by:

(4)

In formulation (4), the parameter represents a fixed time delay, while the term denotes a fixed neutral delay. The neutral neural system defined by equation (4) can be written as

(5)

with suitably defined time-delayed and neutral-delayed system state vectors.

The network functions and the activation functions appearing in these models are nonlinear. Some basic properties of these nonlinear functions are generally stated by the following conditions:

: The amplification functions satisfy the following conditions

with the bounds involved being some positive real constants.

: The behaved functions satisfy the following conditions

with the constants involved being some positive real constants.

: The activation functions satisfy the following conditions

with the constants involved being some positive real constants.
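Activation conditions of this kind are typically Lipschitz-type bounds with positive constants. As a small illustration (the choice of tanh as a candidate activation is our assumption, not the paper's), such a constant can be estimated numerically from secant slopes:

```python
import numpy as np

# Numerically estimate a Lipschitz-type constant L with
# |f(x) - f(y)| <= L |x - y| for a candidate activation function.
# The estimate is the largest secant slope over a fine grid.
def estimate_lipschitz(f, lo=-10.0, hi=10.0, m=200001):
    xs = np.linspace(lo, hi, m)
    ys = f(xs)
    return float(np.max(np.abs(np.diff(ys) / np.diff(xs))))

L = estimate_lipschitz(np.tanh)
print(round(L, 3))  # close to 1, the maximal slope of tanh (at the origin)
```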

Since neural network models (2) and (4) can be directly put into suitable vector-matrix forms, it is relatively easier to study the stability of these systems compared with the stability analysis of neural system (1). In the past literature, various techniques, methods and approaches have been employed to determine useful sufficient criteria for the stability of neural network models (2) and (4). In [18] and [19], integral inequality techniques; in [20], general delay partitioning techniques and the Jensen inequality; in [21], the semimartingale convergence technique; in [22], the delay partitioning method; in [23], functionals with triple or quadruple integral terms; in [24], Wirtinger-type integral inequalities and some convex combination techniques; in [25], the Leibniz-Newton formula; in [26], stochastic analysis theory; and in [27], the descriptor transformation method have been combined with Lyapunov stability theorems. In addition to these stability conditions, which are expressed in different forms of linear matrix inequalities (LMIs), some stability conditions for neural network models (2) and (4) in algebraic form have also been presented in [28–37].
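The practical appeal of algebraic stability conditions is that they reduce to scalar inequalities in the network parameters. As a representative illustration only (this is a classical row-dominance condition for a delayed Hopfield-type system from this strand of literature, not the criteria proposed in this paper, and all numerical values are hypothetical), such a check takes a few lines of code:

```python
import numpy as np

# Classical sufficient condition (illustrative, not this paper's criteria):
# for dx/dt = -B x + C f(x(t)) + D f(x(t - tau)) with a global Lipschitz
# constant L for f, global asymptotic stability holds if every row satisfies
#   b_i > L * sum_j (|c_ij| + |d_ij|).
def row_dominance_condition(B_diag, C, D, L=1.0):
    row_sums = L * (np.abs(C) + np.abs(D)).sum(axis=1)
    return bool(np.all(B_diag > row_sums))

B_diag = np.array([2.0, 2.0])             # hypothetical self-decay rates
C = np.array([[0.3, -0.4], [0.2, 0.3]])   # hypothetical instantaneous weights
D = np.array([[0.4, 0.2], [-0.3, 0.4]])   # hypothetical delayed weights
print(row_dominance_condition(B_diag, C, D))  # True: 2 > 1.3 and 2 > 1.2
```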

The first results regarding the global stability of neutral neural network models having multiple delay terms were given in [38], where neutral-type Hopfield neural networks were analyzed. We also note that when the amplification functions are equal to 1 for all i and the behaved functions are linear with positive slopes for all i, the multiple delayed neutral neural system stated in (1) takes the form of a delayed-type Hopfield neural network. Hence, the author of [38] initiated the study of stability issues for neutral neural network models admitting multiple delay parameters. Motivated by the results of [38], other researchers have since made important contributions to the investigation of stability conditions for system (1). In [39–43], various sets of criteria for the stability of neural system (1) have been presented. In this research article, we conduct an alternative Lyapunov stability analysis of neural network model (1) and obtain new alternative conditions that ensure the stability of this neural system.

2. Stability analysis

We will proceed, in this section, with obtaining new results which assure global asymptotic stability of the neural network whose mathematical representation is stated in (1). For this purpose, it is helpful to shift the equilibrium point of the neutral neural system given in (1) to the origin. If a constant vector represents a fixed equilibrium point of the neural network described in (1), then the simple change of variables that subtracts this equilibrium from the state directly transforms the equilibrium point of system (1) to the origin. Applying this substitution in equation (1), a new neutral-type neural system is obtained, whose dynamical behaviour is defined by the following mathematical equation:

(6)

In the system given by (6), the transformed amplification functions remain strictly positive and the transformed behaved functions remain strictly increasing, while the transformed activation functions inherit the properties of the original ones. Considering the original constraints on the system functions, it is obvious that these new functions satisfy analogous conditions:

In the following theorem, we derive the major results of our paper:

2.1. Theorem 1

For system (6), assume that the system functions satisfy the assumptions stated above. In this case, the transformed neural system expressed by (6) is globally asymptotically stable if there exist positive real numbers such that the criteria given below are satisfied.

(7)

and

(8)

where 1.

Proof:

Construct three different functionals:

(9)(10)(11)

In equation (11), p and q are positive constants whose values will be determined as needed.

Now, construct a positive real-valued Lyapunov functional for the neural system expressed by equation (6):

(12)

The time derivative of the first of the functionals within the constructed Lyapunov functional is calculated, and the following mathematical equation is obtained:

(13)

The assumed conditions on the system functions ensure the required estimate. Thus, (13) leads to

(14)

The time derivative of the second functional within the constructed Lyapunov functional is calculated:

(15)

Note that

(16)

Inserting (16) into (15) results in:

(17)

One may state the following inequalities, which are important in the proof:

(18)(19)(20)(21)(22)

and

(23)

Based on (18)-(23), (14) and (17) will respectively lead to

(24)

and

(25)

Combining (24) and (25) results in

(26)

Finally, the following inequality is obtained for the time derivative of the third functional:

(27)

Combining inequality (27) with (26) yields

(28)

in which the constants are as defined above. In (28), the conditions of Theorem 1 assure that the relevant coefficients are nonpositive; it can also be verified that the remaining coefficients are nonpositive under the stated conditions. Hence, inequality (28) directly yields the inequality

(29)

It follows from inequality (29) that if any of the states is nonzero for some arbitrarily chosen indices i and j, then (29) directly guarantees the negativity of the time derivative of the Lyapunov functional. We can thus observe that, in all of the possible cases, this derivative is negative whenever the state of system (6) is nonzero. In order to ensure exact global convergence and stability of the origin, we also need to check whether the Lyapunov functional is radially unbounded. One may write:

(30)

From (30), we may observe that the Lyapunov functional tends to infinity as the norm of the state tends to infinity, proving that this positive real-valued functional is certainly radially unbounded. Q.E.D.

Some useful particular cases of the results of Theorem 1 are stated below. For a particular choice of the constants, Theorem 1 directly yields the following stability criteria:

2.2. Corollary 1

Let neural network (1) be governed by system functions that satisfy conditions , and . Under these conditions, the neural network (1) is globally asymptotically stable if there exist some positive real constants satisfying the following algebraic criteria:

(31)

and

(32)

where . For , and .

The following corollary results from Theorem 1.

2.3. Corollary 2

Let neural network (1) be governed by system functions that satisfy conditions , and . Under these conditions, the neural network (1) is globally asymptotically stable if the algebraic criteria given in the following form are satisfied,

(33)

and

(34)

where .

When the results of Corollary 1 and Corollary 2 are compared, the following conclusions can be drawn: the result of Corollary 2 is a special case of Corollary 1. Corollary 1 imposes less restrictive constraints on the network parameters of neural system (1) than Corollary 2 does. However, the conditions of Corollary 1 are more difficult to validate than those of Corollary 2.

2.4. An example and comparisons

In this section, we analyze an example in order to compare the results of this study with some existing stability conditions proposed in the literature. First, we need to restate some previous results from the literature.

2.4.1. Theorem 2 [39].

Let neural network (1) be governed by system functions that satisfy conditions , and . Under these conditions, the neural network (1) is globally asymptotically stable if the algebraic criteria given in the following form are satisfied,

(35)

and

(36)

2.4.2. Theorem 3 [40].

Let neural network (1) be governed by system functions that satisfy conditions , and . Under these conditions, the neural network (1) is globally asymptotically stable if the algebraic criteria given in the following form are satisfied,

(37)

and

(38)

2.4.3. Theorem 4 [41].

Let neural network (1) be governed by system functions that satisfy conditions , and . Under these conditions, the neural network (1) is globally asymptotically stable if there exist real constants satisfying the following algebraic criteria:

(39)

and

(40)

2.4.4. Theorem 5 [42].

Let neural network (1) be governed by system functions that satisfy conditions , and . Under these conditions, the neural network (1) is globally asymptotically stable if there exist real constants satisfying the following algebraic criteria:

(41)

and

(42)

2.4.5. Theorem 6 [43].

Let neural network (1) be governed by system functions that satisfy conditions , and . Under these conditions, the neural network (1) is globally asymptotically stable if there exist real constants satisfying the following algebraic criteria:

(43)

The following example will lead us to compare the proposed results with some of the existing stability conditions.

2.4.6. Example.

Consider system (1) that has system parameters:

, ,

Here, a and b are real-valued positive constants.

Let . Then, the conditions in Corollary 2 are calculated as:

(44)

and

(45)

If , then , and if , then . Therefore, the condition ensures that the conditions stated in Corollary 2 hold.

For this example, the state responses of system (1) are illustrated graphically for the different network functions given in the following equations:

We also choose different amplification functions in the following forms for each neuron:

The state responses of system (1) for these network functions, which differ from each other, are shown in Fig 3.

Fig 1. The time response of the system (1).

The time response of the system (1) with .

https://doi.org/10.1371/journal.pone.0343312.g001

Fig 2. The time response of the system (1).

The time response of the system (1) with .

https://doi.org/10.1371/journal.pone.0343312.g002

Fig 3. The time response of the system (1).

The time response of the system (1) with .

https://doi.org/10.1371/journal.pone.0343312.g003

It can be observed from Figs 1–3 that the equilibrium point of system (1) is stable.

In Theorem 2, the parameter is calculated as:

(46)

For , it follows that . Hence, the conditions of Theorem 2 are not applicable to the case of the parameter values given in the example.

In Theorem 3, the parameter is calculated as:

(47)

For , it follows that . Hence, the conditions of Theorem 3 are not applicable to the case of the parameter values given in the example.

In this example, we observe that the first stability condition set in Theorem 4 can be obtained as follows:

(48)

We note that, for this example, there exist suitable positive constants for which the required relations hold if the relevant matrix is determined to satisfy the nonsingular M-matrix condition (requiring that the real parts of the eigenvalues associated with this newly formed matrix are positive; see reference [44] regarding some useful properties of M-matrices). For this example, this matrix is obtained as

If this formed matrix possesses the nonsingular M-matrix property, then at least one column of it is strictly diagonally dominant [44]. This property of nonsingular M-matrices implies the required bounds. In this case, we can obtain the following equation for the conditions in Theorem 4:

(49)

For , it follows that . Thus, the conditions of Theorem 4 are not applicable to the case of the parameter values given in the example.
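The nonsingular M-matrix test invoked above, i.e., positivity of the real parts of the eigenvalues of a Z-matrix, is straightforward to automate. The sketch below performs this check; the 2x2 entries are hypothetical placeholders, not the example's matrix.

```python
import numpy as np

# Check the nonsingular M-matrix property used in the comparison above:
# a Z-matrix (nonpositive off-diagonal entries) is a nonsingular M-matrix
# iff every eigenvalue has a positive real part [44].
def is_nonsingular_M_matrix(A, tol=1e-12):
    off_diagonal = A - np.diag(np.diag(A))
    if np.any(off_diagonal > tol):  # not a Z-matrix
        return False
    return bool(np.all(np.linalg.eigvals(A).real > tol))

# Hypothetical 2x2 placeholders (not the example's matrix):
print(is_nonsingular_M_matrix(np.array([[2.0, -1.0], [-1.0, 2.0]])))  # True
print(is_nonsingular_M_matrix(np.array([[1.0, -2.0], [-2.0, 1.0]])))  # False
```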

For the parameters of the given example, the conditions of Theorem 5 are largely identical to the conditions of Theorem 4. Therefore, the conditions and are satisfied if and only if and . In this case, . Let . Then, we can observe that

(50)

For , it follows that . Thus, the conditions of Theorem 5 are not applicable to the case of the parameter values given in the example.

In applying the conditions of Theorem 6 to the case of this example, we obtain

(51)

For , we can derive that . Note that takes its minimum value when . In this case,

(52)

For , it follows that . Thus, the conditions of Theorem 6 are not applicable to the case of the parameters given in this example.

Based on the above comparisons, it can be concluded that this paper obtains some novel and alternative conditions that guarantee the stability of neutral-type Cohen-Grossberg neural systems whose dynamical model admits multiple delay parameters.

3. Conclusions

This paper has examined the stability characteristics of neutral-type Cohen-Grossberg artificial neural networks. By making appropriate modifications to certain classes of Lyapunov functionals, novel sufficient algebraic conditions were formulated to guarantee the global asymptotic stability of Cohen-Grossberg neural systems with multiple constant delay parameters. The obtained stability criteria take the form of algebraic expressions involving only the system parameters of the examined delayed Cohen-Grossberg neural network; therefore, the proposed criteria can be checked using simple mathematical methods and software tools. Through a detailed analysis of an instructive numerical example, the results obtained in this article were also shown to establish alternative stability criteria to some corresponding stability conditions given in the past literature.

Supporting information

S1 Data. This file archive contains the minimal data set required to replicate all study findings, including the raw numerical values used to generate the graphical representations of system responses under different network functions. All data files are provided in CSV format.

https://doi.org/10.1371/journal.pone.0343312.s001

(ZIP)

References

  1. Chua LO, Yang L. Cellular neural networks: applications. IEEE Trans Circuits Syst. 1988;35(10):1273–90.
  2. Cohen MA, Grossberg S. Absolute stability of global pattern formation and parallel memory storage by competitive neural networks. IEEE Trans Syst, Man, Cybern. 1983;SMC-13(5):815–26.
  3. Hopfield JJ. Neural networks and physical systems with emergent collective computational abilities. Proc Natl Acad Sci U S A. 1982;79(8):2554–8. pmid:6953413
  4. Guez A, Protopopsecu V, Barhen J. On the stability, storage capacity, and design of nonlinear continuous neural networks. IEEE Trans Syst, Man, Cybern. 1988;18(1):80–7.
  5. Wang J, Cai Y, Yin J. Multi-start stochastic competitive Hopfield neural network for frequency assignment problem in satellite communications. Expert Systems with Applications. 2011;38(1):131–45.
  6. Chen Y, Wang Z, Liu Y, Alsaadi FE. Stochastic stability for distributed delay neural networks via augmented Lyapunov–Krasovskii functionals. Applied Mathematics and Computation. 2018;338:869–81.
  7. Zhu Q, Cao J, Rakkiyappan R. Exponential input-to-state stability of stochastic Cohen–Grossberg neural networks with mixed delays. Nonlinear Dyn. 2014;79(2):1085–98.
  8. Manivannan R, Samidurai R, Cao J, Alsaedi A, Alsaadi FE. Stability analysis of interval time-varying delayed neural networks including neutral time-delay and leakage delay. Chaos, Solitons & Fractals. 2018;114:433–45.
  9. Zhu H, Rakkiyappan R, Li X. Delayed state-feedback control for stabilization of neural networks with leakage delay. Neural Netw. 2018;105:249–55. pmid:29883852
  10. Ma J, Xu S, Li Y, Chu Y, Zhang Z. Neural networks-based adaptive output feedback control for a class of uncertain nonlinear systems with input delay and disturbances. Journal of the Franklin Institute. 2018;355(13):5503–19.
  11. Song Q, Yu Q, Zhao Z, Liu Y, Alsaadi FE. Boundedness and global robust stability analysis of delayed complex-valued neural networks with interval parameter uncertainties. Neural Netw. 2018;103:55–62. pmid:29626733
  12. Song Q, Chen X. Multistability analysis of quaternion-valued neural networks with time delays. IEEE Trans Neural Netw Learn Syst. 2018;29(11):5430–40. pmid:29994739
  13. Wang J, Jiang H, Ma T, Hu C. Delay-dependent dynamical analysis of complex-valued memristive neural networks: Continuous-time and discrete-time cases. Neural Netw. 2018;101:33–46. pmid:29477447
  14. Zhu Q, Cao J. Exponential stability analysis of stochastic reaction-diffusion Cohen–Grossberg neural networks with mixed delays. Neurocomputing. 2011;74(17):3084–91.
  15. Chen H, Shi P, Lim C-C, Hu P. Exponential stability for neutral stochastic Markov systems with time-varying delay and its applications. IEEE Trans Cybern. 2016;46(6):1350–62. pmid:27187938
  16. Kolmanovskii VB, Nosov VR. Stability of Functional Differential Equations. London: Academic Press; 1986.
  17. Kuang Y. Delay Differential Equations with Applications in Population Dynamics. Boston: Academic Press; 1993.
  18. Song Q, Long L, Zhao Z, Liu Y, Alsaadi FE. Stability criteria of quaternion-valued neutral-type delayed neural networks. Neurocomputing. 2020;412:287–94.
  19. Muralisankar S, Manivannan A, Balasubramaniam P. Mean square delay dependent-probability-distribution stability analysis of neutral type stochastic neural networks. ISA Trans. 2015;58:11–9. pmid:25862099
  20. Lee SM, Kwon OM, Park JH. A novel delay-dependent criterion for delayed neural networks of neutral type. Physics Letters A. 2010;374(17–18):1843–8.
  21. Shi K, Zhu H, Zhong S, Zeng Y, Zhang Y, Wang W. Stability analysis of neutral type neural networks with mixed time-varying delays using triple-integral and delay-partitioning methods. ISA Trans. 2015;58:85–95. pmid:25835437
  22. Zhang Z, Liu K, Yang Y. New LMI-based condition on global asymptotic stability concerning BAM neural networks of neutral type. Neurocomputing. 2012;81:24–32.
  23. Dharani S, Rakkiyappan R, Cao J. New delay-dependent stability criteria for switched Hopfield neural networks of neutral type with additive time-varying delay components. Neurocomputing. 2015;151:827–34.
  24. Shi K, Zhong S, Zhu H, Liu X, Zeng Y. New delay-dependent stability criteria for neutral-type neural networks with mixed random time-varying delays. Neurocomputing. 2015;168:896–907.
  25. Samidurai R, Marshal Anthoni S, Balachandran K. Global exponential stability of neutral-type impulsive neural networks with discrete and distributed delays. Nonlinear Analysis: Hybrid Systems. 2010;4(1):103–12.
  26. Liu P-L. Improved delay-dependent stability of neutral type neural networks with distributed delays. ISA Trans. 2013;52(6):717–24. pmid:23871149
  27. Huang H, Du Q, Kang X. Global exponential stability of neutral high-order stochastic Hopfield neural networks with Markovian jump parameters and mixed time delays. ISA Trans. 2013;52(6):759–67. pmid:23953509
  28. Arik S. A modified Lyapunov functional with application to stability of neutral-type neural networks with time delays. Journal of the Franklin Institute. 2019;356(1):276–91.
  29. Arik S. An analysis of stability of neutral-type neural systems with constant time delays. Journal of the Franklin Institute. 2014;351(11):4949–59.
  30. Cheng C-J, Liao T-L, Yan J-J, Hwang C-C. Globally asymptotic stability of a class of neutral-type neural networks with delays. IEEE Trans Syst Man Cybern B Cybern. 2006;36(5):1191–5. pmid:17036823
  31. Akça H, Covachev V, Covacheva Z. Global asymptotic stability of Cohen–Grossberg neural networks of neutral type. J Math Sci. 2015;205(6):719–32.
  32. Ozcan N. New conditions for global stability of neutral-type delayed Cohen-Grossberg neural networks. Neural Netw. 2018;106:1–7. pmid:29990758
  33. Samli R, Senan S, Yucel E, Orman Z. Some generalized global stability criteria for delayed Cohen-Grossberg neural networks of neutral-type. Neural Netw. 2019;116:198–207. pmid:31121418
  34. Zhang Y, Xie T, Ma Y. Robustness analysis of exponential stability of Cohen-Grossberg neural network with neutral terms. MATH. 2025;10(3):4938–54.
  35. Samli R, Arik S. New results for global stability of a class of neutral-type neural systems with time delays. Applied Mathematics and Computation. 2009;210(2):564–70.
  36. Zhang Z, Liu W, Zhou D. Global asymptotic stability to a generalized Cohen-Grossberg BAM neural networks of neutral type delays. Neural Netw. 2012;25(1):94–105. pmid:21855293
  37. Wan L, Zhou Q. Stability analysis of neutral-type Cohen-Grossberg neural networks with multiple time-varying delays. IEEE Access. 2020;8:27618–23.
  38. Arik S. New criteria for stability of neutral-type neural networks with multiple time delays. IEEE Trans Neural Netw Learn Syst. 2020;31(5):1504–13. pmid:31265413
  39. Faydasicok O. New criteria for global stability of neutral-type Cohen-Grossberg neural networks with multiple delays. Neural Netw. 2020;125:330–7. pmid:32172142
  40. Ozcan N. Stability analysis of Cohen-Grossberg neural networks of neutral-type: Multiple delays case. Neural Netw. 2019;113:20–7. pmid:30776673
  41. Wan L, Zhou Q. Exponential stability of neutral-type Cohen-Grossberg neural networks with multiple time-varying delays. IEEE Access. 2021;9:48914–22.
  42. Faydasicok O. An improved Lyapunov functional with application to stability of Cohen-Grossberg neural networks of neutral-type with multiple delays. Neural Netw. 2020;132:532–9. pmid:33069117
  43. Zhang Z, Zhang X, Yu T. Global exponential stability of neutral-type Cohen–Grossberg neural networks with multiple time-varying neutral and discrete delays. Neurocomputing. 2022;490:124–31.
  44. Horn RA, Johnson CR. Topics in Matrix Analysis. Cambridge: Cambridge University Press; 1991.