Volume 2013, Issue 1, Article ID 870486
Research Article
Open Access

An LMI Approach for Dynamics of Switched Cellular Neural Networks with Mixed Delays

Chuangxia Huang and Hanfeng Kuang

College of Mathematics and Computing Science, Changsha University of Science and Technology, Changsha, Hunan 410114, China

Xiaohong Chen and Fenghua Wen (Corresponding Author)

School of Business, Central South University, Changsha, Hunan 410083, China
First published: 15 April 2013
Citations: 16
Academic Editor: Jinde Cao

Abstract

This paper considers the dynamics of switched cellular neural networks (CNNs) with mixed delays. With the help of the Lyapunov function combined with the average dwell time method and the linear matrix inequality (LMI) technique, some novel sufficient conditions for the uniformly ultimate boundedness, the existence of an attractor, and the globally exponential stability of the considered CNNs are given. The provided conditions are expressed in terms of LMIs, which can be easily checked in practice using the LMI toolbox in Matlab.

1. Introduction

Cellular neural networks (CNNs), introduced by Chua and Yang in [1, 2], have attracted increasing interest due to their potential applications in classification, signal processing, associative memory, parallel computation, and optimization problems. In these applications, it is essential to investigate the dynamical behavior [3–5]. Both in biological and artificial neural networks, the interactions between neurons are generally asynchronous. As a result, time delay is inevitably encountered in neural networks, which may lead to oscillation and, furthermore, to instability of networks. Since Roska et al. [6, 7] first introduced delayed cellular neural networks (DCNNs), DCNNs have been extensively investigated [8–10]. The model can be described by the following differential equation:
\[
\frac{dx_i(t)}{dt} = -d_i x_i(t) + \sum_{j=1}^{n} a_{ij} f_j\bigl(x_j(t)\bigr) + \sum_{j=1}^{n} b_{ij} f_j\bigl(x_j(t-\tau_j)\bigr) + J_i, \quad i = 1, 2, \ldots, n, \tag{1}
\]
where t ≥ 0, n(≥2) corresponds to the number of units in a neural network; xi(t) denotes the potential (or voltage) of cell i at time t; fj(·) denotes a nonlinear output function; Ji denotes the ith component of an external input source introduced from outside the network to the cell i at time t; di(>0) denotes the rate with which the cell i resets its potential to the resting state when isolated from other cells and external inputs; aij denotes the strength of the jth unit on the ith unit at time t; bij denotes the strength of the jth unit on the ith unit at time tτj; τj(≥0) corresponds to the time delay required in processing and transmitting a signal from the jth cell to the ith cell at time t.
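For intuition, the following is a minimal simulation sketch of system (1) using forward Euler integration with a constant initial history; all numerical values (weights, delay, input) are illustrative assumptions, not taken from the paper.

```python
import numpy as np

# Sketch: Euler integration of the DCNN (1) for n = 2 cells.
# All parameter values are illustrative assumptions.
n = 2
d = np.array([1.0, 1.0])         # reset rates d_i > 0
A = np.array([[0.2, -0.1],       # feedback strengths a_ij
              [0.1,  0.3]])
B = np.array([[-0.2, 0.1],       # delayed feedback strengths b_ij
              [0.1, -0.3]])
J = np.array([0.5, -0.5])        # external inputs J_i
tau = 0.5                        # common transmission delay tau_j
f = np.tanh                      # output function f_j

dt, T = 0.001, 20.0
steps, lag = int(T / dt), int(tau / dt)
x = np.zeros((steps + 1, n))
x[0] = [0.1, -0.2]               # constant history on [-tau, 0]

for k in range(steps):
    xd = x[max(k - lag, 0)]      # delayed state x(t - tau)
    x[k + 1] = x[k] + dt * (-d * x[k] + A @ f(x[k]) + B @ f(xd) + J)
```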
Although the use of constant fixed delays in models of delayed feedback provides a good approximation in simple circuits consisting of a small number of cells, it has recently been well recognized that neural networks usually have a spatial extent due to the presence of a multitude of parallel pathways with a variety of axon sizes and lengths. Therefore, there will be a distribution of conduction velocities along these pathways and a distribution of propagation delays. Since delays in artificial neural networks are usually time varying and sometimes vary violently with time, system (1) can be generalized as follows:
\[
\dot{x}(t) = -Dx(t) + AF\bigl(x(t)\bigr) + BF\bigl(x(t-\tau(t))\bigr) + C\int_{t-h(t)}^{t} F\bigl(x(s)\bigr)\,\mathrm{d}s + J, \tag{2}
\]
where x(t) = (x1(t), …, xn(t))T, D = diag (d1, …, dn), F(·) = (f1(·), …, fn(·))T, A = (aij)n×n, B = (bij)n×n, C = (cij)n×n, τ(t) = (τ1(t), …, τn(t))T, h(t) = (h1(t), …, hn(t))T, and J = (J1, …, Jn)T.
On the other hand, neural networks are complex and large-scale nonlinear dynamical systems; during hardware implementation, the connection topology of networks may change very quickly, and link failures or the creation of new links often bring about a switching connection topology [11, 12]. To obtain a deep and clear understanding of the dynamics of this complex system, one of the usual ways is to investigate the switched neural network. As a special class of hybrid systems, switched neural network systems are composed of a family of continuous-time or discrete-time subsystems and a rule that orchestrates the switching between the subsystems [13]. A switched DCNN can be characterized by the following differential equation:
\[
\dot{x}(t) = -D_{\sigma(t)}x(t) + A_{\sigma(t)}F\bigl(x(t)\bigr) + B_{\sigma(t)}F\bigl(x(t-\tau(t))\bigr) + C_{\sigma(t)}\int_{t-h(t)}^{t} F\bigl(x(s)\bigr)\,\mathrm{d}s + J, \tag{3}
\]
where σ(t):[0, +∞) → Σ = {1, 2, …, m} is the switching signal, which is a piecewise constant function of time.

Corresponding to the switching signal σ(t), we have the switching sequence {(i0, t0), (i1, t1), …, (ik, tk), …}, where 0 = t0 < t1 < t2 < ⋯ are the switching instants and ik ∈ Σ, which means that the ikth subsystem is activated when t ∈ [tk, tk+1).
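To make the switching mechanism concrete, the following is a minimal simulation sketch of system (3) with two subsystems, a hypothetical periodic switching signal σ(t), and a Riemann-sum approximation of the distributed-delay integral; all numerical values are illustrative assumptions, not taken from the paper.

```python
import numpy as np

# Sketch: Euler integration of the switched DCNN (3) with mixed delays.
# Two illustrative subsystems; sigma(t) switches every T_dwell units.
n = 2
f = lambda u: 0.5 * np.tanh(u)                 # output function
D = [np.eye(n), np.eye(n)]
A = [np.array([[0.2, -0.1], [0.1, 0.3]]),
     np.array([[0.3, 0.1], [-0.1, 0.2]])]
B = [np.array([[-0.2, 0.1], [0.1, -0.3]]),
     np.array([[-0.1, 0.2], [0.2, -0.2]])]
C = [0.1 * np.eye(n), 0.1 * np.eye(n)[::-1]]
J = np.array([0.5, -0.5])
tau = lambda t: 0.5 * np.sin(t) ** 2           # discrete delay, <= 0.5
h = lambda t: 0.3 * np.sin(t) ** 2             # distributed delay, <= 0.3
T_dwell = 2.0
sigma = lambda t: int(t / T_dwell) % 2         # switching signal sigma(t)

dt, T = 0.001, 20.0
steps = int(T / dt)
x = np.zeros((steps + 1, n))
x[0] = [0.1, -0.2]                             # constant history

for k in range(steps):
    t = k * dt
    i = sigma(t)
    kd = max(k - int(tau(t) / dt), 0)          # index of t - tau(t)
    kh = max(k - int(h(t) / dt), 0)            # index of t - h(t)
    dist = f(x[kh:k + 1]).sum(axis=0) * dt     # ~ integral of F over [t-h(t), t]
    x[k + 1] = x[k] + dt * (-D[i] @ x[k] + A[i] @ f(x[k])
                            + B[i] @ f(x[kd]) + C[i] @ dist + J)
```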

Over the past decades, the stability of the unique equilibrium point for switched neural networks has been intensively investigated. There are three basic problems in dealing with the stability of switched systems: (1) find conditions that guarantee that the switched system (3) is asymptotically stable for any switching signal; (2) identify those classes of switching signals for which the switched system (3) is asymptotically stable; (3) construct a switching signal that makes the switched system (3) asymptotically stable [14]. Recently, some novel results on the stability of switched systems have been reported; see, for example, [14–22] and the references therein.

Just as pointed out in [23], when the activation functions are typically assumed to be continuous, bounded, differentiable, and monotonically increasing, such as the functions of sigmoid type, the existence of an equilibrium point can be guaranteed. However, in some special applications, one is required to use unbounded activation functions. For example, when neural networks are designed for solving optimization problems in the presence of constraints (linear, quadratic, or more general programming problems), unbounded activations modeled by diode-like exponential-type functions are needed to impose constraints satisfaction. Different from the bounded case where the existence of an equilibrium point is always guaranteed, for unbounded activations it may happen that there is no equilibrium point. In this case, it is difficult to deal with the issue of the stability of the equilibrium point for switched neural networks.

In fact, studies on neural dynamical systems involve not only the discussion of stability property but also other dynamics behaviors such as the ultimate boundedness and attractor [24, 25]. To the best of our knowledge, so far there are no published results on the ultimate boundedness and attractor for the switched system (3).

Motivated by the above discussions, the objective of this paper is to establish a set of sufficient criteria for the ultimate boundedness and the existence of an attractor for the switched system. The rest of this paper is organized as follows. Section 2 presents the model formulation and some preliminary work. In Section 3, ultimate boundedness and the attractor for the considered model are studied. In Section 4, a numerical example is given to show the effectiveness of our results. Finally, in Section 5, conclusions are given.

2. Problem Formulation

For the sake of convenience, throughout this paper, the following two standing assumptions are formulated:
  • (H1) Assume the delay functions τ(t) and h(t) are bounded:

    \[
    0 \le \tau_j(t) \le \tau, \qquad 0 \le h_j(t) \le h, \qquad j = 1, 2, \ldots, n,
    \]
    where τ, h are scalars.

  • (H2) Assume there exist constants lj and Lj, j = 1, 2, …, n, such that

    \[
    l_j \le \frac{f_j(x) - f_j(y)}{x - y} \le L_j \qquad \text{for all } x, y \in \mathbb{R},\ x \ne y.
    \]

Remark 1. We point out that the constants lj and Lj can be positive, negative, or zero, and boundedness of fj(·) is no longer needed in this paper. Therefore, the activation function fj(·) may be unbounded, and condition (H2) is also more general than the form |fj(u)| ≤ Kj|u|, Kj > 0, j = 1, 2, …, n. Different from the bounded case, where the existence of an equilibrium point is always guaranteed, under condition (H2) the switched system (3) may have no equilibrium point. Thus it is of great interest to investigate the ultimate boundedness of solutions and the existence of an attractor in place of the usual stability property for system (3).
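As a concrete check of (H2), take the activation used in Example 15 below, fj(u) = 0.5 tanh(u). By the mean value theorem,

\[
\frac{f_j(x) - f_j(y)}{x - y}
  = 0.5\,\frac{\tanh x - \tanh y}{x - y}
  = 0.5\,\operatorname{sech}^{2}(\xi) \in (0,\, 0.5]
\]

for some ξ between x and y, so (H2) holds with lj = 0 and Lj = 0.5. An unbounded activation such as fj(u) = max{0, u} also satisfies (H2), with lj = 0 and Lj = 1.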

Let C([−τ*, 0], Rn) denote the Banach space of continuous mappings from [−τ*, 0] to Rn equipped with the supremum norm ∥φ∥ = sup −τ*≤s≤0 |φ(s)|, where τ* = max {τ, h}. Throughout this paper, we use the following notation: AT denotes the transpose of any square matrix A; A > 0 (<0) denotes a positive (negative) definite matrix A; the symbol “*” within a matrix represents the symmetric term of the matrix; λmin(A) and λmax(A) represent the minimum and maximum eigenvalues of matrix A, respectively.

System (3) is supplemented with initial values of the type

\[
x(s) = \varphi(s), \qquad s \in [-\tau^*, 0],\ \varphi \in C\bigl([-\tau^*, 0], \mathbb{R}^n\bigr).
\]
Now, we briefly summarize some needed definitions and lemmas as below.

Definition 2 (see [24].)System (3) is uniformly ultimately bounded if there is B̃ > 0 such that, for any constant ϱ > 0, there is t′ = t′(ϱ) > 0 such that ∥x(t; φ)∥ < B̃ for all t ≥ t0 + t′ whenever t0 > 0 and ∥φ∥<ϱ.

Definition 3. The nonempty closed set 𝔸 ⊂ Rn is called an attractor for the solution x(t; φ) of system (3) if the following formula holds:

\[
\lim_{t \to +\infty} d\bigl(x(t; \varphi), \mathbb{A}\bigr) = 0,
\]
where d(x, 𝔸) = inf y∈𝔸 ∥x − y∥.

Definition 4 (see [26].)For any switching signal σ(t) and any finite constants T1, T2 satisfying T2 > T1 ≥ 0, denote the number of discontinuities of σ(t) over the time interval (T1, T2) by Nσ(T1, T2). If Nσ(T1, T2) ≤ N0 + (T2 − T1)/Tα holds for some Tα > 0 and N0 > 0, then Tα is called the average dwell time.
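As a quick numerical illustration of Definition 4 (a hypothetical helper, not part of the paper), the following sketch counts the switching discontinuities on (T1, T2) and tests the average dwell time bound.

```python
import numpy as np

def satisfies_adt(switch_times, T1, T2, T_alpha, N0):
    """Check N_sigma(T1, T2) <= N0 + (T2 - T1)/T_alpha (Definition 4)."""
    N_sigma = sum(1 for t in switch_times if T1 < t < T2)
    return N_sigma <= N0 + (T2 - T1) / T_alpha

# A signal that switches every 2 time units has average dwell time 2:
switch_times = np.arange(2.0, 40.0, 2.0)
print(satisfies_adt(switch_times, 0.0, 40.0, T_alpha=2.0, N0=1.0))  # True
```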

3. Main Results

Theorem 5. Assume there is a constant μ, such that , and denote g(μ) as

()
For a given constant a > 0, if there exist positive-definite matrices P = diag (p1, p2, …, pn), Yi = diag (yi1, yi2, …, yin), i = 1, 2, such that the following condition holds:
()
where
()
then system (2) is uniformly ultimately bounded.

Proof. Choose the following Lyapunov functional:

()
where
()
Computing the derivative of V1(t) along the trajectory of system (2), one can get
()
Similarly, computing the derivative of V2(t) along the trajectory of system (2), one can get
()

From assumption (H2), we have

()

Then we have

()
Denote ; combining with (11)–(16), we have
()
and then we have
()
where .

Therefore, we obtain

()
where K = λmin (P), which implies
()

If one chooses , then for any constant ϱ > 0 and ∥φ∥<ϱ, there is t′ = t′(ϱ) > 0 such that e−atV(x(0)) < 1 for all t ≥ t′. According to Definition 2, we have ∥x(t)∥ < B̃ for all t ≥ t′. That is to say, system (2) is uniformly ultimately bounded. This completes the proof.
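Conditions of the LMI type in Theorem 5 can be checked numerically. Since the concrete matrix blocks are not reproduced above, the following is only a generic sketch of such a feasibility test, written with cvxpy as a stand-in for Matlab's LMI toolbox; the system data and the simplified stand-in inequality are assumptions for illustration, not the theorem's actual condition.

```python
import cvxpy as cp
import numpy as np

# Hypothetical data for a 2-neuron network (illustrative only).
n = 2
D = np.eye(n)                  # decay rates diag(d_1, ..., d_n)
L = 0.5 * np.eye(n)            # activation bounds diag(L_1, ..., L_n)
a = 0.5                        # given constant a > 0

# Decision variable: diagonal positive-definite P = diag(p_1, ..., p_n).
p = cp.Variable(n)
P = cp.diag(p)

# Stand-in LMI of the same flavor as the theorem's condition:
# P D + D P - a P - L P L >> 0   (the paper's actual blocks differ).
constraints = [p >= 1e-6,
               P @ D + D @ P - a * P - L @ P @ L >> 0]
prob = cp.Problem(cp.Minimize(0), constraints)
prob.solve()
print(prob.status)             # 'optimal' means the stand-in LMI is feasible
```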

Theorem 6. If all of the conditions of Theorem 5 hold, then there exists an attractor 𝔸 for the solutions of system (2), where 𝔸 = {x ∈ Rn : ∥x∥ ≤ B̃}.

Proof. If one chooses B̃ as in Theorem 5, then Theorem 5 shows that for any φ there is t′ > 0 such that ∥x(t; φ)∥ < B̃ for all t ≥ t′. Denote 𝔸 = {x ∈ Rn : ∥x∥ ≤ B̃}. Clearly, 𝔸 is closed, bounded, and invariant. Furthermore, lim t→+∞ d(x(t; φ), 𝔸) = 0. Therefore, 𝔸 is an attractor for the solutions of system (2). This completes the proof.

Corollary 7. In addition to all of the conditions of Theorem 5 holding, if J = 0 and fi(0) = 0 for all i = 1,2, …, n, then system (2) has a trivial solution x(t) ≡ 0, and the trivial solution of system (2) is globally exponentially stable.

Proof. If J = 0 and fi(0) = 0 for all i = 1,2, …, n, then R1 = 0, and it is obvious that system (2) has a trivial solution x(t) ≡ 0. From Theorem 5, one has

()
where K* = V(x(0))/K. Therefore, the trivial solution of system (2) is globally exponentially stable. This completes the proof.

By (11) and (19), there is a positive constant C0, such that
()
where Λ = a⁻¹R1.
We now consider the switched cellular neural network without uncertainties, as in system (3). When t ∈ [tk, tk+1), the ikth subsystem is activated; from (22) and Theorem 5, there is a positive constant , such that
()
where .

Theorem 8. For a given constant a > 0, if there exist positive-definite matrices Pi = diag (pi1, pi2, …, pin), Yi = diag (yi1, yi2, …, yin), i = 1, 2, such that the following condition holds:

()
where
()
then system (3) is uniformly ultimately bounded for any switching signal with average dwell time satisfying
()
where .

Proof. Define the Lyapunov functional candidate

()
Since the system state is continuous, it follows from (23) that
()

If one chooses , then for any constant ϱ > 0 and ∥φ∥<ϱ, there is t′ = t′(ϱ) > 0 such that ∥x(t)∥ < B̃ for all t ≥ t′. According to Definition 2, system (3) is uniformly ultimately bounded, and the proof is completed.

Theorem 9. If all of the conditions of Theorem 8 hold, then there exists an attractor 𝔸 for the solutions of system (3), where 𝔸 = {x ∈ Rn : ∥x∥ ≤ B̃}.

Proof. If one chooses B̃ as in Theorem 8, then Theorem 8 shows that for any φ there is t′ > 0 such that ∥x(t; φ)∥ < B̃ for all t ≥ t′. Denote 𝔸 = {x ∈ Rn : ∥x∥ ≤ B̃}. Clearly, 𝔸 is closed, bounded, and invariant. Furthermore, lim t→+∞ d(x(t; φ), 𝔸) = 0. Therefore, 𝔸 is an attractor for the solutions of system (3). This completes the proof.

Corollary 10. In addition to all of the conditions of Theorem 8 holding, if J = 0 and fi(0) = 0 for all i, then system (3) has a trivial solution x(t) ≡ 0, and the trivial solution of system (3) is globally exponentially stable.

Proof. If J = 0 and fi(0) = 0 for all i, then it is obvious that system (3) has a trivial solution x(t) ≡ 0. From Theorem 8, one has

()
where
()
Therefore, the trivial solution of system (3) is globally exponentially stable. This completes the proof.

Remark 11. Up to now, various dynamical results have been proposed for switched neural networks in the literature. For example, in [15], synchronization control of switched linearly coupled delayed neural networks is investigated; in [16–20], the authors investigated the stability of switched neural networks; in [21, 22], stability and L2-gain analysis for switched delay systems have been investigated. To the best of our knowledge, there are few works on the uniformly ultimate boundedness and the existence of an attractor for switched neural networks. Therefore, the results of this paper are new.

Remark 12. We notice that Lian and Zhang developed an LMI approach to study the stability of switched Cohen-Grossberg neural networks and obtained some novel results in a very recent paper [20], where the considered model includes both discrete and bounded distributed delays. In [20], the following fundamental assumptions are required: (i) the delay functions τ(t), h(t) are bounded, and , ; (ii) fi(0) = 0 and lj ≤ (fj(x) − fj(y))/(x − y) ≤ Lj for all i, j = 1, 2, …, n; (iii) the switched system has only one equilibrium point. However, checking inequality (13) in [20] reveals a defect: the assumed condition on is not correct and should be revised as . On the other hand, as described in Remark 1 of this paper, for a neural network with unbounded activation functions the system considered in [20] may have no equilibrium point or may have multiple equilibrium points. In this case, it is difficult to deal with the stability of the equilibrium point for switched neural networks. To remedy this imperfection, we relax the conditions , , and fi(0) = 0, replace (i), (ii), and (iii) with assumptions (H1) and (H2), drop the assumption of the existence of a unique equilibrium point, and investigate the ultimate boundedness and the attractor instead; this modification seems more natural and reasonable.

Remark 13. Although the Lyapunov function adopted in this paper when investigating stability is similar to those used in [20], Corollaries 7 and 10 show that the conservatism of the conditions on the delay functions has been further reduced. Hence, the stability results obtained in this paper are complementary to the corresponding results in [20].

Remark 14. When uncertainties appear in system (3), employing the Lyapunov function (27) of this paper and applying a method similar to the one used in [20], we can obtain the corresponding dynamical results. Due to space limitations, we omit the straightforward but tedious computations of the formulas that determine the uniformly ultimate boundedness, the existence of an attractor, and stability.

4. Illustrative Example

In this section, we present an example to illustrate the effectiveness of the proposed results. Consider a switched cellular neural network with two subsystems.

Example 15. Consider the switched cellular neural network system (3) with di = 1, fi(xi(t)) = 0.5 tanh(xi(t)) (i = 1, 2), τ(t) = 0.5 sin²(t), h(t) = 0.3 sin²(t), and the connection weight matrices given by

()

From assumptions (H1) and (H2), we can obtain d = 1, li = 0, Li = 0.5, i = 1,2, τ = 0.5, h = 0.3, μ = 1.
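These constants can be spot-checked numerically; the following minimal sketch uses finite differences (illustrative only).

```python
import numpy as np

# Spot-check of the (H2) slope bounds for f_i(u) = 0.5*tanh(u).
u = np.linspace(-10, 10, 20001)
slopes = np.diff(0.5 * np.tanh(u)) / np.diff(u)
print(slopes.min(), slopes.max())       # ~0 and ~0.5, so l_i = 0, L_i = 0.5

# Spot-check of the (H1) delay bounds tau = 0.5 and h = 0.3.
t = np.linspace(0, 2 * np.pi, 20001)
print((0.5 * np.sin(t) ** 2).max())     # ~0.5
print((0.3 * np.sin(t) ** 2).max())     # ~0.3
```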

Choosing a = 2 and solving LMIs (23), we get

()
Using (26), we can get the average dwell time .

5. Conclusions

In this paper, the dynamics of switched cellular neural networks with mixed delays (interval time-varying delays and distributed time-varying delays) are investigated. Novel multiple Lyapunov-Krasovskii functionals are designed to establish new sufficient conditions guaranteeing the uniformly ultimate boundedness, the existence of an attractor, and the globally exponential stability. The derived conditions are expressed in terms of LMIs, which are less conservative than algebraic formulations and can be easily checked in practice using the LMI toolbox in Matlab.

Acknowledgments

The authors are extremely grateful to Professor Jinde Cao and the anonymous reviewers for their constructive and valuable comments, which have contributed much to the improvement of this paper. This work was jointly supported by the National Natural Science Foundation of China under Grants nos. 11101053, 70921001, and 71171024, the Key Project of the Chinese Ministry of Education under Grant no. 211118, the Excellent Youth Foundation of the Educational Committee of Hunan Province under Grant no. 10B002, and the Scientific Research Funds of the Hunan Provincial Science and Technology Department of China.
