Stability and Synchronization for Discrete-Time Complex-Valued Neural Networks with Time-Varying Delays

In this paper, the synchronization problem for a class of discrete-time complex-valued neural networks with time-varying delays is investigated. In contrast to previous work, both the time delay and the parameters are assumed to be time-varying. By separating the real and imaginary parts, the discrete-time model of complex-valued neural networks is derived. Moreover, using the complex-valued Lyapunov-Krasovskii functional method and linear matrix inequalities as tools, sufficient conditions for synchronization stability are obtained. Numerical examples are presented to show the effectiveness of our method.


Introduction
In the past decades, the study of dynamical neural networks has attracted researchers' attention because of potential applications in a variety of areas, such as image processing, combinatorial optimization, pattern recognition and signal processing (see, for instance, [1][2][3][4][5]). Within this field, synchronization, control and stability analysis of chaotic systems, complex nonlinear systems and dynamical neural networks have drawn considerable interest (see [6][7][8][9][10][11] and references therein) since Pecora and Carroll achieved synchronization between two chaotic oscillators by the PC (Pecora and Carroll) method [12]. As is well known, time delays commonly exist in neural networks because of network traffic congestion and the finite speed of information transmission, so the study of dynamic properties under time delay is of great significance and importance. However, most of the networks studied so far are real-valued. Recently, in order to investigate the richer dynamics of complex-valued neural networks, several complex-valued network models have been proposed. For example, Hu established the global stability of complex-valued recurrent continuous-time neural networks with time delays [13], and Zhou further studied the boundedness and complete stability of complex-valued neural networks [14] in 2013. For discrete-time complex-valued neural networks, the boundedness and stability of networks without time delays [15,16] and with time delays were studied by Zhou and Duan. However, in the aforementioned discrete-time complex-valued neural networks, the time delays are fixed and the parameters are constant, which is restrictive. It should be noticed that time delay is a common phenomenon in networks because of signal transmission. Generally, time delays can be divided into time-varying delays and constant delays.
Time-varying delays differ from constant delays in that the delay changes with time. In practice, time delays usually vary within a bounded range rather than staying at a fixed value, and constant delays can be seen as a special case of time-varying delays. The study of time-varying delays therefore has wider potential applications. On the other hand, it is also more challenging: theorems derived for time-varying delays can easily be applied to networks with constant delays by letting the upper bound equal the lower bound, i.e., τ̲ = τ(k) = τ̄, whereas the corresponding theorems for constant delays are hard to apply to networks with time-varying delays.
Motivated by the above discussions, by separating the real and imaginary parts of the neural networks and constructing complex Lyapunov-Krasovskii functional candidates, we investigate the synchronization problem for a class of discrete-time complex-valued neural networks with time-varying delays. Although sufficient conditions for stability and synchronization of discrete-time real-valued neural networks have been derived by several researchers, to the best of our knowledge there is no literature investigating discrete-time complex-valued neural networks with time-varying delays. We believe that the synchronization problem for this kind of network remains open and challenging.
The rest of the paper is organized as follows. In Section 2, the basic models, preliminaries and lemmas are presented. Section 3 presents the stability analysis and sufficient conditions in terms of linear matrix inequalities (LMIs). Numerical simulations and examples showing the robustness and effectiveness of our methods are given in Section 4. Finally, concluding remarks are given in Section 5.
Notations: Throughout the paper, R^n represents the n-dimensional Euclidean space and R^{n×m} is the set of n×m real matrices. The superscript T denotes the transpose of the corresponding matrix, and for a symmetric matrix X, X ≥ 0 (respectively, X ≤ 0, X > 0 and X < 0) means that X is positive semidefinite (respectively, negative semidefinite, positive definite and negative definite). diag{···} denotes a block-diagonal matrix and * represents a term induced by symmetry. If not explicitly stated, matrix dimensions are assumed to be compatible for algebraic operations.

The system model and preliminaries
In this paper, we consider a discrete-time complex-valued neural network model (1) consisting of n coupled nodes with time-varying delays, where x = (x_1, x_2, ···, x_n)^T is the state vector, n is the number of neural cells, and A = (a_ij)_{n×n} ∈ R^{n×n} and B = (b_ij)_{n×n} ∈ R^{n×n} are the connection weight matrix and the delayed connection weight matrix, respectively.
The model can then be separated into its real and imaginary parts, giving the two coupled real-valued networks (1-1) and (1-2). The states at step k of the networks with initial conditions φ and ψ are denoted x(k, φ) and x(k, ψ), and x^R(k, φ), x^R(k, ψ), x^I(k, φ) and x^I(k, ψ) are the corresponding real and imaginary parts. From models (1-1) and (1-2), network (3) follows. The initial condition associated with the complex-valued network (3) is x_i(s) = w_i(s), where Re(w_i(s)) and Im(w_i(s)) are continuous on s ∈ [−τ̄, 0].
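The separation step can be checked numerically. The sketch below assumes the common recursion x(k+1) = A x(k) + B g(x(k − τ(k))) with the frequently used activation g(z) = tanh(Re z) + i tanh(Im z); both the recursion and the activation are assumptions, since the paper's explicit equations (1), (1-1) and (1-2) are not reproduced above. It verifies that iterating the complex model directly and iterating the separated real/imaginary form give identical trajectories.

```python
import numpy as np

def g(z):
    # Assumed complex activation: tanh applied to real and imaginary parts.
    return np.tanh(z.real) + 1j * np.tanh(z.imag)

def simulate(A, B, phi, tau, steps):
    """Iterate the complex model x(k+1) = A x(k) + B g(x(k - tau(k))).

    phi: initial history (length >= max delay + 1); tau: function k -> delay.
    """
    hist = list(phi)                 # hist[-1] is the current state x(k)
    for k in range(steps):
        x_del = hist[-1 - tau(k)]    # delayed state x(k - tau(k))
        hist.append(A @ hist[-1] + B @ g(x_del))
    return np.array(hist)

def simulate_split(A, B, phi, tau, steps):
    """Same dynamics, but tracking real/imaginary parts with real arithmetic."""
    Ar, Ai, Br, Bi = A.real, A.imag, B.real, B.imag
    hr, hi = [p.real for p in phi], [p.imag for p in phi]
    for k in range(steps):
        xr, xi = hr[-1], hi[-1]
        dr, di = hr[-1 - tau(k)], hi[-1 - tau(k)]
        gr, gi = np.tanh(dr), np.tanh(di)    # real/imag parts of g(x_del)
        hr.append(Ar @ xr - Ai @ xi + Br @ gr - Bi @ gi)
        hi.append(Ar @ xi + Ai @ xr + Br @ gi + Bi @ gr)
    return np.array(hr) + 1j * np.array(hi)
```

The split form uses only real arithmetic, which is what makes real-valued LMI machinery applicable to the complex-valued model.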
The following definition and lemmas are needed to derive the stability conditions.
Definition [16]: The equilibrium point x̄ of model (1) with initial condition x_i(s) = w_i(s) is said to be globally exponentially stable if there exist two constants M > 0 and 0 < ε < 1 such that ||x(k) − x̄|| ≤ M ε^k sup_{s ∈ [−τ̄, 0]} ||w(s) − x̄|| for all k ≥ 0.

Lemma 1 (Schur complement, [17]): Consider the block matrix S = [S_1, S_3^T; S_3, −S_2], where S_1 is a non-singular matrix with S_1 = S_1^T, S_2 > 0 and S_3 is a constant matrix. Then S_1 + S_3^T S_2^{−1} S_3 is the Schur complement of S with respect to S_1, and S < 0 holds if and only if the Schur complement satisfies S_1 + S_3^T S_2^{−1} S_3 < 0.
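Lemma 1 can be sanity-checked numerically. The snippet below (an illustration, not part of the paper) draws random S_1 = S_1^T, S_2 > 0 and S_3 and confirms that negative definiteness of the block matrix S = [S_1, S_3^T; S_3, −S_2] coincides with negative definiteness of the Schur complement S_1 + S_3^T S_2^{−1} S_3.

```python
import numpy as np

def is_neg_def(M):
    """Negative definiteness of a symmetric matrix via its largest eigenvalue."""
    return float(np.max(np.linalg.eigvalsh(M))) < 0.0

def schur_equiv_holds(S1, S2, S3):
    """Check: [[S1, S3^T], [S3, -S2]] < 0  <=>  S1 + S3^T S2^{-1} S3 < 0."""
    S = np.block([[S1, S3.T], [S3, -S2]])
    comp = S1 + S3.T @ np.linalg.solve(S2, S3)   # Schur complement w.r.t. S1
    return is_neg_def(S) == is_neg_def(comp)

rng = np.random.default_rng(1)
for _ in range(100):
    n, m = 3, 2
    G1 = rng.standard_normal((n, n))
    S1 = (G1 + G1.T) - 2.0 * np.eye(n)           # symmetric, sign varies
    G2 = rng.standard_normal((m, m))
    S2 = G2 @ G2.T + np.eye(m)                   # S2 > 0
    S3 = rng.standard_normal((m, n))
    assert schur_equiv_holds(S1, S2, S3)
```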
Lemma 2 [18]: Consider a symmetric positive-semidefinite matrix Y ∈ R^{n×n} (that is, Y^T = Y ≥ 0), scalars a_i ≥ 0 (i = 1, 2, ···) and vectors x_i ∈ R^n. Then the following inequality holds: (Σ_i a_i x_i)^T Y (Σ_i a_i x_i) ≤ (Σ_i a_i) Σ_i a_i x_i^T Y x_i.

Assumption [13]: The activation functions g_i(·), i = 1, 2, ···, n, satisfy the Lipschitz continuity condition in the complex domain; that is, there exists a positive constant e_i, i = 1, 2, ···, n, such that for any state variables u and v, |g_i(u) − g_i(v)| ≤ e_i |u − v|, where e_i is the corresponding Lipschitz constant.
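Both statements can be spot-checked. The sketch below assumes Lemma 2 is the standard weighted-sum inequality (Σ a_i x_i)^T Y (Σ a_i x_i) ≤ (Σ a_i) Σ a_i x_i^T Y x_i, and checks the Lipschitz assumption for the common activation g(z) = tanh(Re z) + i tanh(Im z), which satisfies it with e_i = 1; both the lemma form and the activation are assumptions, since the original inequalities were lost in typesetting.

```python
import numpy as np

rng = np.random.default_rng(2)

# Lemma 2 (assumed form): (sum a_i x_i)^T Y (sum a_i x_i)
#     <= (sum a_i) * sum_i a_i x_i^T Y x_i,  for Y = Y^T >= 0, a_i >= 0.
n, m = 4, 5
M = rng.standard_normal((n, n))
Y = M @ M.T                      # symmetric positive semidefinite
a = rng.random(m)                # nonnegative scalars a_i
X = rng.standard_normal((m, n))  # row i is the vector x_i

s = X.T @ a                      # sum_i a_i x_i
lhs = s @ Y @ s
rhs = a.sum() * sum(a[i] * X[i] @ Y @ X[i] for i in range(m))
assert lhs <= rhs + 1e-9

# Lipschitz assumption for g(z) = tanh(Re z) + i tanh(Im z): e_i = 1,
# because tanh is 1-Lipschitz in each of the real and imaginary parts.
def g(z):
    return np.tanh(z.real) + 1j * np.tanh(z.imag)

u = rng.standard_normal(1000) + 1j * rng.standard_normal(1000)
v = rng.standard_normal(1000) + 1j * rng.standard_normal(1000)
assert np.all(np.abs(g(u) - g(v)) <= np.abs(u - v) + 1e-12)
```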

Methods
In this section, we will deal with the synchronization problem of the aforementioned complex-valued neural networks (1). First, we will give the main result in this paper as follows.

Theorem 1: The network is globally exponentially stable if the linear matrix inequality W < 0 is satisfied, where the leading diagonal block of W is −P + (1 + τ̄ − τ̲)Q + D; the complete matrix W and the matrices P, Q, D, L and V are given with the proof.

Proof: The detailed proof of eq. (4) can be found in Appendix S1.
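Only the leading diagonal block −P + (1 + τ̄ − τ̲)Q + D of W is recoverable from the text above. The sketch below, with hypothetical P, Q, D and delay bounds (not values from the paper), shows how such a block condition is checked numerically, and how widening the delay range [τ̲, τ̄] can destroy feasibility, as the conclusion remarks.

```python
import numpy as np

def block_neg_def(P, Q, D, tau_min, tau_max):
    """Check the leading LMI block -P + (1 + tau_max - tau_min) Q + D < 0."""
    W11 = -P + (1 + tau_max - tau_min) * Q + D
    return bool(np.max(np.linalg.eigvalsh(W11)) < 0)

# Hypothetical candidate matrices for illustration only.
P = 6.0 * np.eye(2)
Q = 0.5 * np.eye(2)
D = 0.5 * np.eye(2)

print(block_neg_def(P, Q, D, tau_min=1, tau_max=3))    # True: modest range
print(block_neg_def(P, Q, D, tau_min=1, tau_max=12))   # False: range too wide
```

In a real feasibility check P, Q and D are decision variables found by an LMI solver (e.g. the Matlab LMI toolbox mentioned in the conclusion), not fixed by hand.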
As mentioned in the introduction, time-varying delays are quite different from constant delays. Constant delays can be seen as a special case of time-varying delays in which the delay is fixed at a single value. On the other hand, some existing methods cannot be applied to time-varying systems. To illustrate the difference, the following corollary is presented.
Corollary 1: For a constant time delay τ, the network is stable if Y < 0, where Y is obtained from W in Theorem 1 by setting τ̄ = τ̲ = τ. The other variables and matrices are defined as in Theorem 1.

Proof: For the stability of a complex-valued neural network with constant time delay τ, the corollary follows easily by the method of [16].

Remark 1
This class of complex-valued neural networks with constant time delay was investigated in [16], where a corresponding theorem was derived. Unlike that study, Corollary 1 is established by separating the real and imaginary parts of the complex-valued neural network, which means the problem is translated into the stability of the corresponding real-valued neural networks, so the network data are easier to handle. In practice, however, time delays are usually time-varying, in which case the above corollary becomes infeasible.
The above linear matrix inequalities are established without taking the assumption into account. If the assumption, which extends the real-valued Lipschitz condition to the complex domain, is satisfied, we have the following result.
where P, Q, D, L and V have the same meanings as in Theorem 1.

Proof: The detailed proof of eq. (6) can be found in Appendix S2.

Remark 2
Comparing Theorem 1 with Corollary 1, we can see that the theorem includes the special case in which the Lipschitz constant e = 1. For real-valued neural networks, the activation function is usually chosen to be smooth and bounded. In complex-valued networks, however, according to Liouville's theorem [19], the activation functions cannot be both bounded and analytic. This means Theorem 1 is less conservative and more general in practice.

Results and Discussion
In this section, examples are provided to demonstrate the robustness and effectiveness of our method.

Example 1
Consider a two-neuron complex-valued network, where
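The example's parameter values are not reproduced above, so the sketch below uses hypothetical two-neuron weight matrices (clearly not the paper's) to illustrate the kind of simulation involved: two trajectories of the same delayed complex-valued network, started from different initial histories, converge to each other, i.e. synchronize.

```python
import numpy as np

# Hypothetical two-neuron parameters, chosen small enough that a
# contraction-type stability condition plausibly holds.
A = np.array([[0.2 + 0.1j, -0.1j],
              [0.1, 0.1 - 0.2j]])
B = np.array([[0.15, 0.1j],
              [-0.1j, 0.2]])

def g(z):
    # Assumed activation: tanh on real and imaginary parts (1-Lipschitz).
    return np.tanh(z.real) + 1j * np.tanh(z.imag)

def run(phi, steps=200, tau_max=3):
    """Iterate x(k+1) = A x(k) + B g(x(k - tau(k))) with time-varying delay."""
    hist = list(phi)
    for k in range(steps):
        tau = 1 + (k % tau_max)          # delay varying in [1, tau_max]
        hist.append(A @ hist[-1] + B @ g(hist[-1 - tau]))
    return hist

rng = np.random.default_rng(3)
phi1 = [rng.standard_normal(2) + 1j * rng.standard_normal(2) for _ in range(4)]
phi2 = [rng.standard_normal(2) + 1j * rng.standard_normal(2) for _ in range(4)]
err = np.abs(run(phi1)[-1] - run(phi2)[-1]).max()
print(err)   # final gap between the two trajectories; decays toward zero
```

Plotting the real and imaginary parts of both trajectories against k would reproduce the kind of synchronization figure such examples usually show.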

Conclusion
In this article, we have presented a new approach to the stability problem of discrete-time complex-valued neural networks with time-varying delays. Compared with existing results, the derived theorems and corollaries are easier to solve with the Matlab LMI toolbox because the real and imaginary parts of the complex-valued variables are separated and the derived conditions are stated in terms of real matrices. Moreover, we propose a feasible method for handling time-varying delays in discrete-time complex-valued neural networks; constant time delays are treated as a special case of time-varying delays and are also solvable. In addition, it should be noted that the time-varying delays are bounded, and larger delay ranges will render the LMIs infeasible. Finally, simulations show the effectiveness and robustness of our method.