Volume 2013, Issue 1 172906
Research Article
Open Access

Passivity Analysis of Markovian Jumping Neural Networks with Leakage Time-Varying Delays

N. Mala (Corresponding Author)

Department of Mathematics, Kovai Kalaimagal College of Arts and Science, Coimbatore, Tamil Nadu 641 109, India (kkcas.edu.in)

A. R. Sudamani Ramaswamy (Corresponding Author)

Department of Mathematics, Avinashilingam Deemed University for Women, Coimbatore, Tamil Nadu 641 043, India (avinuty.ac.in)
First published: 18 July 2013
Academic Editor: Ali Cemal Benim

Abstract

This paper is concerned with the passivity analysis of Markovian jumping neural networks with leakage time-varying delays. Based on a Lyapunov functional that accounts for the mixed time delays, leakage delay-dependent passivity conditions are derived in terms of linear matrix inequalities (LMIs). The mixed delays include leakage time-varying delays, discrete time-varying delays, and distributed time-varying delays. By employing a novel Lyapunov-Krasovskii functional having triple-integral terms, new leakage delay-dependent passivity criteria are established to guarantee the passivity performance. This performance depends not only on the upper bound of the time-varying leakage delay σ(t) but also on the upper bound σμ of its derivative. When estimating the upper bound of the derivative of the Lyapunov-Krasovskii functional, the discrete and distributed delays are treated appropriately so as to develop less conservative results. Two numerical examples are given to show the validity and potential of the developed criteria.

1. Introduction

In the past few decades, neural networks (NNs) have been a hot research topic because of their wide applications in static image processing, pattern recognition, fixed-point computation, associative memory, and combinatorial optimization [1–5]. Because the interactions between neurons are generally asynchronous in biological and artificial neural networks, time delays are usually encountered. Since the existence of time delays is frequently one of the main sources of instability for neural networks, the stability analysis of delayed neural networks has been extensively studied, and many papers have been published on various types of neural networks with time delays based on the LMI approach [6–14].

On the other hand, the main idea of passivity theory is that the passive properties of a system can keep the system internally stable. In addition, passivity theory is frequently used in control systems to prove the stability of systems. The problem of passivity performance analysis has also been extensively applied in many areas such as signal processing, fuzzy control, sliding mode control [15], and networked control [16]. The passivity idea is a promising approach to the analysis of the stability of NNs because it can lead to more general stability results, so it is important to investigate the passivity analysis of neural networks with time delays. More recently, the dissipativity or passivity performance of NNs has received increasing attention, and many research results have been reported in the literature, for example, [17–21].

In practice, RNNs often exhibit the behavior of finite state representations (also called clusters, patterns, or modes), which is referred to as the information latching problem [22]. In this case, the network states may switch (or jump) between different RNN modes according to a Markovian chain, which gives rise to the so-called Markovian jumping recurrent neural networks. It has been shown that the information latching phenomenon exists universally in neural networks [23, 24]; it can be dealt with by extracting a finite state representation from a trained network, that is, a neural network sometimes has finite modes that switch from one to another at different times. Results on various kinds of Markovian jump neural networks with time delay can be found in [25–27] and the references therein. It should be pointed out that all the above-mentioned references assume that the transition probabilities of the Markov process or Markov chain are time invariant, that is, the Markov process or chain is assumed to be homogeneous. Such an assumption is required in most existing results on Markovian jump systems [28, 29]. A detailed discussion of piecewise homogeneous and nonhomogeneous Markovian jumping parameters is given in [30] and the references therein.

On the other hand, a typical time delay called leakage (or “forgetting”) delay may exist in the negative feedback terms of a neural network; it has a great impact on the dynamic behavior of delayed neural networks, and more details are given in [31–36]. In [34] the authors introduced a leakage time-varying delay for dynamical systems with nonlinear perturbations and derived leakage delay-dependent stability conditions by constructing a new type of Lyapunov-Krasovskii functional and using the LMI approach. Recently, the passivity analysis for neural networks of neutral type with Markovian jumping parameters and time delay in the leakage term was addressed in [37]. To the best of our knowledge, however, few results have been reported on the passivity analysis of Markovian jumping neural networks with leakage time-varying delays. Thus, the main purpose of this paper is to shorten such a gap by making the first attempt to deal with the passivity analysis problem for a type of continuous-time neural networks with time-varying transition probabilities and mixed time delays.

In this paper, the problem of passivity analysis of Markovian jump neural networks with leakage time-varying delay and discrete and distributed time-varying delays is considered. The Markov process in the underlying neural networks is assumed to be finite piecewise homogeneous, which is a special nonhomogeneous (time-varying) Markov chain. Motivated by [30], a novel Lyapunov-Krasovskii functional is constructed in which the positive definite matrices depend on the system mode and a triple-integral term is introduced for deriving the delay-dependent stability conditions. By employing this functional, new leakage delay-dependent passivity criteria are established to guarantee the passivity performance of the given systems. This performance depends not only on the upper bound of the time-varying leakage delay σ(t) but also on the upper bound σμ of its derivative. When estimating an upper bound of the derivative of the Lyapunov-Krasovskii functional, we handle the terms related to the discrete and distributed delays appropriately so as to develop less conservative results. Two numerical examples are given to show the validity and potential of the proposed passivity criteria.

Notations.  Let ℝn denote the n-dimensional Euclidean space, and let the superscript “T” denote the transpose of a matrix or vector. I denotes the identity matrix with compatible dimensions. For square matrices M1 and M2, the notation M1 > (≥, <, ≤) M2 means that M1 − M2 is positive definite (positive semidefinite, negative definite, negative semidefinite). Let (Ω, 𝔉, P) be a complete probability space with a natural filtration {𝔉t}t≥0, and let E[·] stand for the expectation operator with respect to the given probability measure P. Also, let τ > 0 and let C([−τ, 0]; ℝn) denote the family of continuously differentiable functions ϕ from [−τ, 0] to ℝn with the uniform norm ∥ϕ∥τ = max{max−τ≤θ≤0|ϕ(θ)|, max−τ≤θ≤0|ϕ′(θ)|}.

2. Problem Description and Preliminaries

Fix a complete probability space (Ω, ℱ, 𝒫), where Ω is the sample space, ℱ is the σ-algebra of subsets of the sample space, and 𝒫 is the probability measure on ℱ, and consider the following Markov jump neural networks with mixed time delays:
()
where x(t − σ(t)) = [x1(t − σ(t)), x2(t − σ(t)), …, xn(t − σ(t))]T is the state vector with leakage time-varying delay and gi(xi(t)) denotes the neuron activation function; C(r(t)) = diag{C1(r1(t)), C2(r2(t)), …, Cn(rn(t))} is a diagonal matrix with positive entries; the remaining mode-dependent matrices are, respectively, the connection weight matrix, the discretely delayed connection weight matrix, and the distributively delayed connection weight matrix; y(t) is the output of the neural network, and u(t) ∈ 𝕃2[0, ∞) is the input; τ(t) and d(t) denote the discrete delay and distributed delay, respectively, and the time-varying delay τ(t) satisfies
()
where τ1, τ2, τμ, σμ, σ, and d are some real constants. By a simple transformation, model (1) has the following equivalent form:
()
Here, {rt, t ≥ 0} is a right-continuous Markov chain on the probability space taking values in a finite state space 𝒮 = {1, 2, …, N} with transition rate matrix given by
()
in which h ≥ 0, limh→0 o(h)/h = 0, each off-diagonal entry (j ≠ i) is the transition rate from mode i at time t to mode j at time t + h, and each diagonal entry equals the negative sum of the off-diagonal entries in its row.
Similarly, the parameter {ηt, t ≥ 0} is also a right-continuous Markov chain on the probability space taking values in a finite state space {1, 2, …, T} with transition rate matrix Λ ≜ {pmn} given by
()
in which h ≥ 0, limh→0 o(h)/h = 0, pmn ≥ 0 for n ≠ m is the transition rate from mode m at time t to mode n at time t + h, and pmm = −∑n≠m pmn.
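As a concrete illustration of how such a jump process evolves, the sketch below samples a path of a continuous-time Markov chain directly from its transition rate matrix; the two-mode rate matrix `Q` is a hypothetical example, not data from this paper.

```python
import random

def sample_ctmc_path(Q, start, T):
    """Sample one path of a continuous-time Markov chain on [0, T].

    Q is the transition rate matrix: Q[i][j] >= 0 for j != i, and each
    row sums to zero, i.e. Q[i][i] = -sum of the off-diagonal rates."""
    t, state = 0.0, start
    path = [(0.0, start)]
    while True:
        exit_rate = -Q[state][state]          # total rate of leaving the current mode
        if exit_rate <= 0:                    # absorbing mode: stay until time T
            break
        t += random.expovariate(exit_rate)    # exponential holding time in this mode
        if t >= T:
            break
        # Pick the next mode j != state with probability Q[state][j] / exit_rate.
        u, acc = random.random() * exit_rate, 0.0
        for j, q in enumerate(Q[state]):
            if j == state:
                continue
            acc += q
            if u <= acc:
                state = j
                break
        path.append((t, state))
    return path

random.seed(0)
Q = [[-0.8, 0.8],                             # hypothetical two-mode rate matrix
     [0.5, -0.5]]
path = sample_ctmc_path(Q, start=0, T=50.0)
```

Each entry of `path` is a pair (jump time, new mode); averaging mode occupation over many sampled paths approximates the chain's transition probabilities over [0, T].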

In this paper, we make the following assumption, definition, and lemmas for deriving the main result.

Assumption 1. Each activation function gi(·) in (1) is continuous and bounded and satisfies

()
where gi(0) = 0 and the lower and upper sector bounds are known real scalars, with the lower bound not exceeding the upper one. It follows from (6) that the neural activation function satisfies
()
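Assumption 1 is the usual sector condition on the activations. As a quick numerical sanity check (on an arbitrary grid), the difference quotients of tanh, the activation used in Section 5, indeed stay within the sector [0, 1]:

```python
import math

def sector_quotients(g, points):
    """Difference quotients (g(x) - g(y)) / (x - y) over all distinct pairs;
    Assumption 1 requires these to lie between known lower and upper bounds."""
    out = []
    for x in points:
        for y in points:
            if x != y:
                out.append((g(x) - g(y)) / (x - y))
    return out

pts = [k / 10.0 for k in range(-50, 51)]   # arbitrary grid on [-5, 5]
q = sector_quotients(math.tanh, pts)
lo, hi = min(q), max(q)                    # empirical sector bounds for tanh
```

For tanh, all quotients are strictly positive and bounded above by 1, consistent with the sector condition with lower bound 0 and upper bound 1.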

Lemma 2 (Jensen's inequality). For any matrix M ≥ 0, any scalars a and b with a ≤ b, and a vector function x : [a, b] → ℝn such that the integrals concerned are well defined, the following inequality holds:

()
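The inequality of Lemma 2 is elided above; in its standard single-integral form, consistent with the notation of this section, it reads:

```latex
\left( \int_a^b x(s)\,\mathrm{d}s \right)^{T} M \left( \int_a^b x(s)\,\mathrm{d}s \right)
\le (b-a) \int_a^b x^{T}(s)\, M\, x(s)\,\mathrm{d}s .
```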

Lemma 3. For any constant matrix Z = ZT > 0 and scalars σ > 0,  τ1 > 0,  τ2 > 0, the following inequalities hold:

()

The main purpose of this paper is to establish a delay-dependent sufficient condition to ensure that neural networks (1) are passive.

Definition 4. The system (1) is said to be passive if there exists a scalar ν ≥ 0 such that, for all tp ≥ 0 and all solutions of (1), the following inequality

()
holds under zero initial conditions.
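The inequality of Definition 4 is elided above; the standard dissipation inequality used in such passivity definitions, with the scalar ν of the definition, reads:

```latex
2 \int_0^{t_p} y^{T}(s)\, u(s)\,\mathrm{d}s \;\ge\; -\,\nu \int_0^{t_p} u^{T}(s)\, u(s)\,\mathrm{d}s ,
\qquad \forall\, t_p \ge 0 .
```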

3. Main Results

In this section, we derive a new delay-dependent criterion for the passivity of the delayed Markovian jumping neural networks (1) using the Lyapunov-Krasovskii functional method combined with the LMI approach. For presentation convenience, in the following we denote
()
Now, we establish the following passivity condition for the system (1).

Theorem 5. The Markovian jumping neural network (1) is passive if there exist

()
positive symmetric matrices; the positive definite matrices W1 > 0,  W2 > 0; the diagonal matrices; and a scalar γ > 0 such that for any (i, m)∈(𝒮, ) the following LMI holds:
()
where
()
and the remaining coefficients are all zero.

Proof. Denote ζ = [x(t) Tg(x(t)) T] T and consider the following Lyapunov-Krasovskii functional for neural network (1):

()
where
()
Define the infinitesimal generator (denoted by 𝕃) of the Markov process acting on V(xt, rt, ηt) as follows:
()
It can be calculated that
()
From (15), it can be seen that
()
Based on the above equation, along the solution of the neural network (3), we obtain that for each (i, m) ∈ 𝒮 ×
()
Moreover, based on Lemma 2, we can get the following inequalities:
()
()
By using Lemma 3, we can also get that
()
Similarly, we can use Lemmas 2 and 3 for other integrals. On the other hand, we have from (6) that for any λ = 1,2, …, n,
()
which is equivalent to
()
where eλ denotes the unit column vector having a 1 in its λth row and zeros elsewhere. Thus, for any appropriately dimensioned positive diagonal matrix, the following inequality holds:
()
Similarly, for any appropriately dimensioned diagonal matrices , and , the following inequalities also hold:
()
Using inequalities (20)–(23) in (19) and adding (26) and (27) to (19), we get
()
where with
()

Hence we can obtain from (10) that,

()
Now, to show the passivity of the delayed neural networks in (1), we set
()
where tp ≥ 0.

Using Dynkin’s formula, we have

()
Now, we can deduce that
()

Thus, if (33) holds, then, since the Lyapunov-Krasovskii functional is nonnegative and V(x0, r0, η0) = 0 under the zero initial condition, it follows from (31) that J(tp) ≤ 0 for any tp ≥ 0, which implies that (13) is satisfied and therefore the delayed neural networks (1) are locally passive. Next we shall prove that 𝔼[∥x(t)∥2] → 0 as t → ∞. Taking expectations on both sides of (28) and integrating from 0 to t, we have

()
By using Dynkin’s formula, we have
()
Hence
()
Using Jensen's inequality and (36), we have
()
Similarly, it follows from the definition of V1(xt, rt, ηt) that
()
Hence, it can be obtained that
()
where
()
From (39) and (40), it can be deduced that the trivial solution of system (1) is locally passive. Then the solution x(t) = x(t, 0, ϕ) of system (1) is bounded on [0, ∞). Considering (1), we know that the derivative of x(t) is bounded on [0, ∞), which leads to the uniform continuity of the solution x(t) on [0, ∞). From (36), we note that the following inequality holds:
()
By Barbalat's lemma [38], it holds that 𝔼[∥x(t)∥2] → 0 as t → ∞, and this completes the proof of the global passivity of the system (1).

Remark 6. When σ(t) = σ, the system (1) becomes

()
The system (42) can be written in its equivalent form as follows:
()
The time varying delay τ(t) satisfies
()
where τ1, τ2, τμ, d are some constants and the leakage delay σ ≥ 0 is a constant.

Now, the passivity condition for the neural networks (43) is given in the following corollary and the result follows from Theorem 5.

Corollary 7. Neural networks (43) are passive if there exist

()
positive symmetric matrices; the positive definite matrices W1 > 0,  W2 > 0; the diagonal matrices; and a scalar γ > 0 such that for any (i, m)∈(𝒮, ) the following LMI holds:
()
where
()
and the remaining coefficients are all zero.

Proof. We can define the Lyapunov functional for the above neural networks as in Theorem 5 by replacing σ(t) with σ. The proof is the same as that of Theorem 5 and hence is omitted.

4. Problem without Switching

4.1. Description and Preliminaries

In this section, we derive passivity criterion for the delayed neural networks using the Lyapunov-Krasovskii functional without Markovian jumping parameters.

Consider the following neural networks with mixed time-delays:
()
Or, it has an equivalent form as follows:
()
Now, we establish the following passivity condition for the system (49).

Theorem 8. Neural network (49) is passive if there exist

()
positive symmetric matrices; the positive definite matrices W1 > 0,  W2 > 0; the diagonal matrices Λ1 > 0,   Λ2 > 0,   Λ3 > 0,   Λ4 > 0,   Λ5 > 0;  and a scalar γ > 0 such that the following LMI holds:
()
where
()
and the remaining coefficients are all zero.

Proof. Denote ζ = [x(t) T g(x(t)) T] T and consider the following Lyapunov-Krasovskii functional for neural network (49):

()
where
()
Taking the time derivative of V(xt) along the trajectories of the neural network (49) yields
()
As in the proof of Theorem 5, we can use Lemmas 2 and 3 for the integrals. On the other hand, we have from (5) that for any λ = 1,2, …, n,
()
which is equivalent to
()
where eλ denotes the unit column vector having 1 element on its λth row and zeros elsewhere. Thus, for any appropriately dimensioned diagonal matrix Λ1 > 0, the following inequality holds:
()
Similarly, for any appropriately dimensioned diagonal matrices Λ2 > 0,  Λ3 > 0,  Λ4 > 0, and Λ5 > 0, the following inequalities also hold:
()
Using inequalities (55) and adding (58) and (59), we get
()
where with
()
Hence we can obtain from (51) that
()
The remaining part of the proof is the same as Theorem 5.

Remark 9. In this paper, Theorem 5 provides passivity criteria for Markovian jumping neural networks with leakage time-varying delays. The criterion is derived under the assumption that the leakage time-varying delays are differentiable and the value of σμ is known. A new set of triple-integral terms has been introduced in the Lyapunov-Krasovskii functional to derive the leakage delay-dependent passivity conditions via the LMI approach. In the constructed functional, the positive definite matrices Q1i,m, Q2i,m, Q3i,m depend on the system mode, and a triple-integral term is used in deriving the delay-dependent passivity conditions.

5. Numerical Examples

In this section, we provide two simple examples to illustrate the usefulness of our main results. Our aim is to examine the passivity of the given delayed neural networks.

Example 1. Consider the delayed neural networks (1) with Markovian jumping parameters and the following data:

()
where
()
and the activation functions are taken as g1(α) = g2(α) = tanh(α), so the sector condition of Assumption 1 is satisfied. Furthermore, the transition probability matrices are
()
The lower and upper delay bounds for τ(t), σ(t), and d(t) are chosen as τ1 = 0.2,   τ2 = 1.5,   σ = 0.3,   σμ = 0.4,   τμ = 0.6, and d = 0.5. By applying the MATLAB LMI toolbox, we obtain the following feasible solution:
()
This shows that the given Markovian jumping neural networks (1), or equivalently (3), are globally passive.
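The LMI (13) above is solved with the MATLAB LMI toolbox; outside MATLAB, the same kind of feasibility question can be explored numerically. The sketch below checks only the simplest delay-free Lyapunov inequality ATP + PA < 0 via SciPy, with a hypothetical Hurwitz matrix A standing in for a closed-loop mode; the full condition of Theorem 5 would require a semidefinite-programming solver.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Hypothetical Hurwitz matrix (illustrative only, not data from this paper).
A = np.array([[-2.0, 0.5],
              [0.1, -1.5]])

# Solve A^T P + P A = -I; if P > 0, the Lyapunov LMI A^T P + P A < 0 is feasible.
P = solve_continuous_lyapunov(A.T, -np.eye(2))

eigs = np.linalg.eigvalsh((P + P.T) / 2)   # symmetrize against round-off
feasible = bool(np.all(eigs > 0))
```

Here `feasible` is True, since A is Hurwitz and the Lyapunov equation then admits a positive definite solution; for genuinely delay-dependent LMIs such as (13), a solver like the MATLAB LMI toolbox (as used above) is needed.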

Example 2. Consider the delayed neural network (49) with the following parameters and without Markovian jumping parameters:

()
where
()
Further, we have the matrices
()
Here, the bounds of time delays of τ(t), σ(t), and d(t) are chosen as follows: τ1 = 0.5,   τ2 = 1,  σ = 0.1,   σμ = 0.1,   τμ = 0.2,  d = 0.5. By applying MATLAB LMI toolbox, we obtain the feasible solution as follows:
()
This shows that the given neural network (49) is globally passive.
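For intuition about the certified behavior, a system of the form (49) can be simulated with a fixed-step Euler scheme. The sketch below uses hypothetical matrices and constant delays, takes g = tanh, and omits the distributed-delay term for brevity; it is an illustration, not the example's actual data.

```python
import numpy as np

def simulate_delayed_nn(C, A, B, tau, sigma, dt=0.001, T=5.0):
    """Euler simulation of x'(t) = -C x(t - sigma) + A g(x(t)) + B g(x(t - tau)),
    a simplified constant-delay version of model (49) with g = tanh."""
    n = C.shape[0]
    steps = int(T / dt)
    d_tau = int(round(tau / dt))      # discrete-delay lag in steps
    d_sig = int(round(sigma / dt))    # leakage-delay lag in steps
    hist = max(d_tau, d_sig) + 1
    x = np.zeros((steps + hist, n))
    x[:hist] = 0.5                    # constant initial history
    g = np.tanh                       # sector-bounded activation
    for k in range(hist, steps + hist):
        xdot = (-C @ x[k - 1 - d_sig]
                + A @ g(x[k - 1])
                + B @ g(x[k - 1 - d_tau]))
        x[k] = x[k - 1] + dt * xdot
    return x

# Hypothetical two-neuron data with a dominant leakage term.
C = np.diag([2.0, 2.0])
A = np.array([[0.2, -0.1], [0.1, 0.3]])
B = np.array([[-0.3, 0.1], [0.2, -0.2]])
traj = simulate_delayed_nn(C, A, B, tau=0.5, sigma=0.1)
```

For these illustrative values the leakage term dominates the delayed couplings and the trajectory decays toward the origin, consistent with the stable behavior that the passivity analysis certifies.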

6. Conclusion

In this paper, the passivity analysis of Markovian jump neural networks with leakage time-varying delay and discrete and distributed time-varying delays has been considered. The Markov process in the underlying neural networks is finite piecewise homogeneous. Leakage delay-dependent passivity conditions have been derived in terms of LMIs by constructing a novel Lyapunov-Krasovskii functional having triple-integral terms. This performance depends not only on the upper bound of the time-varying leakage delay σ(t) but also on the upper bound σμ of its derivative. Two numerical examples have been provided to demonstrate the effectiveness of the proposed methods for systems both with and without Markovian jumping parameters.
