Volume 2013, Issue 1 517604
Research Article
Open Access

Stability Analysis for Uncertain Neural Networks of Neutral Type with Time-Varying Delay in the Leakage Term and Distributed Delay

Qi Zhou (Corresponding Author)
College of Information Science and Technology, Bohai University, Jinzhou, Liaoning 121013, China

Xueying Shao
School of Mathematics and Physics, Bohai University, Jinzhou, Liaoning 121013, China

Jin Zhu
School of Mathematics and Physics, Bohai University, Jinzhou, Liaoning 121013, China

Hamid Reza Karimi
Department of Engineering, Faculty of Engineering and Science, University of Agder, N-4898 Grimstad, Norway
First published: 31 December 2013
Academic Editor: Ming Liu

Abstract

The stability problem is investigated for a class of uncertain neural networks of neutral type with leakage, time-varying discrete, and distributed delays. Both the parameter uncertainty and generalized activation functions are considered. New stability results are achieved by constructing an appropriate Lyapunov-Krasovskii functional and employing free weighting matrices and the linear matrix inequality (LMI) method. Some numerical examples are given to show the effectiveness and reduced conservatism of the proposed results.

1. Introduction

Neural networks are complex large-scale dynamic systems with very rich dynamic properties. In past years, neural networks have found applications in associative memory [1], pattern recognition [2], and optimization problems [3–5]. Recently, the stability problem for neural networks has been widely investigated, and many results have been reported in [6–14].

Time delays are often encountered in various engineering, biological, and economical systems [15–23]. Owing to the finite speed of information processing, the existence of time delays frequently causes oscillation, divergence, or instability in neural networks. The stability of neural networks with time delays has therefore drawn considerable attention, and many results have been reported in the literature [24–30]. In practice, the time-varying delay often belongs to a given interval, and the lower bound of the delay may not be zero; stability conditions for neural networks with interval time-varying delay have been investigated, for example, in [31]. Furthermore, due to the presence of parallel pathways of different axonal sizes and lengths, a spatial extent may exist in neural networks, which can cause distributed time delays [32–35]. The stability problem of neural networks of neutral type was studied in [36, 37].

More recently, increasing attention has been paid to time delay in the leakage (or “forgetting”) term [38, 39], since some theoretical and technical difficulties arise when handling the leakage delay [40]. It has been pointed out that the leakage delay has a great impact on the dynamics of neural networks [41, 42]. The authors in [43] showed that time delay in the stabilizing negative feedback term has a tendency to destabilize a system. The authors in [44] focused on recurrent neural networks with time delay in the leakage term and showed that the existence and uniqueness of the equilibrium point are independent of time delays and initial conditions; hence, time delays in leakage terms do not affect the existence and uniqueness of the equilibrium point. Some results on stability analysis for neural networks with leakage delay have been reported in [38, 39, 41]. To mention a few, the work in [14] investigated the problems of delay-dependent stability analysis and strict (Q, S, R)-α-dissipativity analysis for cellular neural networks with distributed delay. However, there are few results on stability analysis for uncertain neural networks of neutral type with time-varying delay in the leakage term and distributed delay, which motivates this study.

In this paper, the stability problem is investigated for uncertain neural networks of neutral type with time-varying delay in the leakage term and time-varying distributed delay. Firstly, by constructing a new type of Lyapunov functional and developing some novel techniques to handle the delays considered in this paper, some novel robust stability criteria are proposed. Secondly, the proposed conditions can be expressed in terms of linear matrix inequalities (LMIs), which can be easily solved via standard software. Finally, some numerical examples are given to demonstrate the effectiveness and less conservatism of the proposed results.

Notation. Throughout this paper, the notations are standard. ℝ^n and ℝ^{n×m} denote the n-dimensional Euclidean space and the set of all n × m real matrices, respectively. The superscript T denotes matrix transposition. For real symmetric matrices X and Y, the notation X ≥ Y (X > Y, resp.) means that X − Y is positive semidefinite (positive definite, resp.). The notation diag{·} stands for a block-diagonal matrix. I is the identity matrix with appropriate dimensions. The symbol “*” stands for the symmetric term in a matrix. Matrices, if not explicitly stated, are assumed to have compatible dimensions.

2. Problem Formulation

Consider the following uncertain neural network of neutral type with time delay in the leakage term and distributed delay:
(1)
where x(t) is the network state vector at time t; f(x(t)) denotes the activation function; W1 = (W1ij)n×n, W2 = (W2ij)n×n, W3 = (W3ij)n×n, W4 = (W4ij)n×n, and W5 = (W5ij)n×n are known constant matrices, with W1 a positive diagonal matrix; ΔW1(t), ΔW2(t), ΔW3(t), ΔW4(t), and ΔW5(t) are unknown matrices representing time-varying parameter uncertainties; δ is the leakage delay; τ(t) is the time-varying discrete delay; h(t) is the neutral delay; and r(t) is the time-varying distributed delay. They satisfy the following conditions:
(2)
where δ, τ1, τ2, τd, h, and hd are constants. The time-varying parameter uncertainties ΔW1(t), ΔW2(t), ΔW3(t), ΔW4(t), and ΔW5(t) are assumed to be of the form
(3)
where H, E1, E2, E3, E4, and E5 are known constant matrices and G(t) is an unknown time-varying matrix satisfying
(4)
Throughout this paper, we make the following assumption:
(H1) For each i = 1, 2, …, n, fi(0) = 0 and there exist constants k_i^− and k_i^+ such that, for all α1, α2 ∈ ℝ with α1 ≠ α2,
(5)
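Equation (5) is not reproduced in this extraction; in the customary form of such sector-type conditions on the activation functions, it reads (the bound notation k_i^−, k_i^+ is the standard one and may differ from the authors’ symbols):

```latex
k_i^{-} \le \frac{f_i(\alpha_1) - f_i(\alpha_2)}{\alpha_1 - \alpha_2} \le k_i^{+},
\quad \forall \alpha_1, \alpha_2 \in \mathbb{R},\ \alpha_1 \neq \alpha_2,\ i = 1, 2, \ldots, n.
```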

Lemma 1 (see [45]). For any symmetric positive definite constant matrix M ∈ ℝ^{n×n}, scalar τ > 0, and vector function ω(s) ∈ ℝ^n such that the integrations concerned are well defined, one has

(6)
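Equation (6) is missing from this extraction; the integral (Jensen-type) inequality of [45] is usually stated as:

```latex
\tau \int_{t-\tau}^{t} \omega^{T}(s)\, M\, \omega(s)\, ds
\;\ge\;
\left( \int_{t-\tau}^{t} \omega(s)\, ds \right)^{T} M \left( \int_{t-\tau}^{t} \omega(s)\, ds \right).
```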

Lemma 2 ([46] Schur complement). Given constant matrices Ω1, Ω2, and Ω3 with appropriate dimensions, where Ω1 = Ω1^T and Ω3 = Ω3^T > 0, then Ω1 + Ω2^T Ω3^{−1} Ω2 < 0 if and only if

(7)
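Equation (7) is not reproduced here; in the standard statement of the Schur complement lemma, the condition Ω1 + Ω2^T Ω3^{−1} Ω2 < 0 is equivalent to the block LMI:

```latex
\begin{bmatrix} \Omega_1 & \Omega_2^{T} \\ \Omega_2 & -\Omega_3 \end{bmatrix} < 0
\quad \text{or, equivalently,} \quad
\begin{bmatrix} -\Omega_3 & \Omega_2 \\ \Omega_2^{T} & \Omega_1 \end{bmatrix} < 0.
```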

In order to present novel stability criteria for neural networks (1), the following notations are defined:

(8)

3. Main Results

In this section, by considering the delay in the leakage term and the distributed delay and by using some new techniques, novel stability criteria will be proposed for the uncertain neural network of neutral type with time-varying delays in (1). Firstly, considering the neutral-type delay, the delay in the leakage term, and the distributed delay simultaneously, we have the following theorem.

Theorem 3. For given scalars 0 ≤ δ, 0 < τ1 < τ2, 0 < h, , hd < 1, and τd, neural network (1) under Assumption (H1) is robustly asymptotically stable, if there exist matrices Pi > 0  (i = 1,2, …, 5), , , Zi > 0  (i = 1,2, …, 7), D = diag {d1, d2, …, dn} > 0, L =    diag {l1, l2, …, ln} > 0, positive diagonal matrices R and S, and appropriately dimensioned matrices Nk, Mk  (k = 1,2, …, 14), such that the following LMIs hold:

(9)
(10)
where
(11)

Proof. By using the Newton-Leibniz formula and considering neural network (1), the following equalities hold for appropriately dimensioned matrices N and M:

(12)
with
(13)
For positive diagonal matrices R and S, based on Assumption (H1), it can be seen that the following inequalities hold:
(14)

Now, choosing the following Lyapunov-Krasovskii functional:

(15)
where
(16)
where τ12 = τ2τ1, and
(17)
Then, the derivatives of Vi(t), (i = 1, …, 7) with time t can be obtained as
(18)
By Lemma 1, one can have
(19)
(20)
(21)
where α1 = (τ(t) − τ1)/τ12 and α2 = (τ2τ(t))/τ12.

It can be seen from the condition (9) that

(22)
which means
(23)
It follows from (21) and (23) that
(24)
Therefore, it is straightforward to obtain that
(25)
where ΔΞ = MHG(t)E + (MHG(t)E)^T. By the Schur complement, condition (10) ensures that neural network (1) under Assumption (H1) is robustly asymptotically stable. This completes the proof.
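The step bounding the uncertain term is not shown in this extraction; the usual technique for norm-bounded uncertainty (written here with G(t) as in (3), using G^T(t)G(t) ≤ I from (4) and any scalar ε > 0) is:

```latex
\Delta\Xi = MHG(t)E + \left(MHG(t)E\right)^{T}
\;\le\; \varepsilon^{-1} M H H^{T} M^{T} + \varepsilon\, E^{T} E .
```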

When neural network (1) is free of parameter uncertainties, it reduces to
(26)

The stability condition can be easily obtained from Theorem 3 in the following corollary.

Corollary 4. Given scalars 0 ≤ δ, 0 ≤ τ1 < τ2, 0 < h, , hd < 1, and τd, neural network (26) is asymptotically stable, if there exist matrices Pi > 0  (i = 1,2, …, 5), , , Zi > 0  (i = 1,2, …, 7), D = diag {d1, d2, …, dn} > 0, L = diag {l1, l2, …, ln} > 0, positive diagonal matrices R and S, and appropriately dimensioned matrices Nk, Mk  (k = 1,2, …, 14), such that the following LMIs hold:

(27)
where Ξ, Π and N have been defined in Theorem 3.

Remark 5. For neural network (1), if the time delay in the leakage term is not considered, the following model can be obtained:

(28)
In order to present stability criterion for neural network (28), we choose the following Lyapunov-Krasovskii functional:
(29)
where Vi (i = 2, 3, 4, 5, 6, 7) are defined in (15). From the proof of Theorem 3, the following stability condition for neural network (28) can be obtained.

Corollary 6. For given scalars 0 ≤ τ1 < τ2,   0 < h, , hd < 1, and τd, neural network (28) under Assumption (H1) is asymptotically stable, if there exist matrices Pi > 0  (i = 1,4, …, 5), , , Zi > 0  (i = 1,2, …, 7), D = diag {d1, d2, …, dn} > 0, L = diag {l1, l2, …, ln} > 0, positive diagonal matrices R and S, and appropriately dimensioned matrices , , such that the following LMIs hold:

(30)
where
(31)

Remark 7. When the time-varying neutral delay h(t) is not considered in neural network (1), the following model can be obtained:

(32)

In order to present stability criterion for neural network (32), we choose the following Lyapunov-Krasovskii functional:

(33)
where Vi (i = 1, …, 6) are defined in (15). Following the same lines as the proof of Theorem 3, the following corollary can be presented.

Corollary 8. Given scalars 0 ≤ δ, 0 ≤ τ1 < τ2, , and τd, the neural network (32) under Assumption (H1) is asymptotically stable, if there exist matrices Pi > 0  (i = 1,2, …, 5), , , Zi > 0  (i = 1,2, …, 4), D = diag {d1, d2, …, dn} > 0, L = diag {l1, l2, …, ln} > 0, positive diagonal matrices R and S, and appropriately dimensioned matrices , such that the following LMIs hold:

(34)
where
(35)

Consider the following neural network with time-varying delay:

(36)

By choosing the Lyapunov-Krasovskii functional

(37)
where Vi (i = 2, 4, 6) are defined in (15), the novel stability condition for the neural network with time-varying delay (36) can be obtained from Theorem 3, as stated in the following corollary.

Corollary 9. For given scalars 0 ≤ τ1 < τ2 and τd, neural network (36) under Assumption (H1) is asymptotically stable, if there exist matrices Pi > 0  (i = 1,4, 5), , , Zi > 0  (i = 1,2, 3), D = diag {d1, d2, …, dn} > 0, L = diag {l1, l2, …, ln} > 0, positive diagonal matrices R and S, and appropriately dimensioned matrices , such that the following LMIs hold:

(38)
where
(39)

4. Numerical Examples

In this section, two numerical examples are provided to show the effectiveness of the proposed results.

Example 1. Consider the uncertain neural network of neutral type with leakage delay, time-varying delay, and distributed delay as follows:

(40)
where
(41)

By using the Matlab LMI Toolbox, it can be found from Theorem 3 that the uncertain neural network (40) under Assumption (H1) is robustly asymptotically stable for 0 < δ = τd = hd < 0.7. The maximum allowable upper bound τ2 is calculated for different values of δ = τd = hd; the numerical results in Table 1 illustrate the effectiveness of the proposed results.

Table 1. Allowable upper bound τ2 for different δ = τd = hd.
Method τ1 δ = τd = hd = 0.1 δ = τd = hd = 0.3 δ = τd = hd = 0.5
Theorem 3 τ1 = 0.1
Theorem 3 τ1 = 0.2

Example 2. Consider the neural network with time-varying delay as follows:

(42)
(43)

By using the Matlab LMI Toolbox, the maximum allowable τ2 guaranteeing the stability of the neural network with time-varying delay (42) can be obtained from Corollary 9 for different τd. Compared with the previous results proposed in [28], it is clear that the new stability condition in Corollary 9 is less conservative than the one in [28] (see Table 2).

Table 2. Allowable upper bound τ2 for different τd.
Methods τ1 τd = 0.1 τd = 0.5 τd = 0.9
[28] τ1 = 1 τ2 = 3.3068 τ2 = 2.5802 τ2 = 2.2736
Corollary 9 τ1 = 1 τ2 = 3.6720 τ2 = 2.6649 τ2 = 2.3779
[28] τ1 = 2 τ2 = 3.3125 τ2 = 2.7500 τ2 = 2.6468
Corollary 9 τ1 = 2 τ2 = 3.7871 τ2 = 2.9085 τ2 = 2.8008
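The feasibility tests in Tables 1 and 2 were carried out with the Matlab LMI Toolbox. As a rough illustration of what such a check involves, the following Python sketch tests the delay-free Lyapunov inequality A^T P + P A < 0, a far simpler LMI than conditions (9)-(10); the matrix W1 below is illustrative only and is not taken from the examples.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Illustrative delay-free linear system x'(t) = -W1 x(t); W1 is a
# hypothetical positive diagonal matrix, not the one from Example 1.
W1 = np.diag([1.2, 0.9])
A = -W1

# Solve the Lyapunov equation A^T P + P A = -I for P.
P = solve_continuous_lyapunov(A.T, -np.eye(2))

# LMI feasibility check: P > 0 and A^T P + P A < 0 together certify
# asymptotic stability of the delay-free system.
assert np.all(np.linalg.eigvalsh(P) > 0)
assert np.all(np.linalg.eigvalsh(A.T @ P + P @ A) < 0)
print("Lyapunov LMI feasible: delay-free system is asymptotically stable")
```

For the full delay-dependent LMIs (9)-(10), a semidefinite programming interface such as CVXPY or YALMIP would be used in the same spirit: declare the decision matrices, impose the block LMIs, and test feasibility.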

5. Conclusion

In this paper, the problem of stability analysis for neural networks of neutral type with time-varying delay in the leakage term and distributed delay has been studied. By constructing an appropriate Lyapunov-Krasovskii functional and employing some advanced methods, novel stability criteria have been proposed in terms of LMIs, which can be easily solved by standard software. Two examples have been given to illustrate the effectiveness and merit of the proposed results. It should be mentioned that the leakage delay handling method proposed in this paper can also be used to investigate other systems with delay in the leakage term, for example, fault-tolerant control systems [47, 48].

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgment

This work was partially supported by the National Natural Science Foundation of China (61304003, 11226138, and 61304002).
