An LMI Approach for Dynamics of Switched Cellular Neural Networks with Mixed Delays
Abstract
This paper considers the dynamics of switched cellular neural networks (CNNs) with mixed delays. With the help of a Lyapunov function combined with the average dwell time method and the linear matrix inequality (LMI) technique, some novel sufficient conditions for the uniformly ultimate boundedness, the existence of an attractor, and the globally exponential stability of the considered CNNs are given. The conditions are expressed in terms of LMIs, which can be easily checked in practice with the LMI toolbox in Matlab.
1. Introduction
Corresponding to the switching signal σ(t), we have the switching sequence, which means that the ikth subsystem is activated when t ∈ [tk, tk+1).
Over the past decades, the stability of the unique equilibrium point for switched neural networks has been intensively investigated. There are three basic problems in dealing with the stability of switched systems: (1) find conditions that guarantee that the switched system (3) is asymptotically stable for any switching signal; (2) identify those classes of switching signals for which the switched system (3) is asymptotically stable; (3) construct a switching signal that makes the switched system (3) asymptotically stable [14]. Recently, some novel results on the stability of switched systems have been reported; see, for example, [14–22] and the references therein.
As pointed out in [23], when the activation functions are assumed to be continuous, bounded, differentiable, and monotonically increasing, as is typical (e.g., functions of sigmoid type), the existence of an equilibrium point can be guaranteed. However, in some special applications, one is required to use unbounded activation functions. For example, when neural networks are designed for solving optimization problems in the presence of constraints (linear, quadratic, or more general programming problems), unbounded activations modeled by diode-like exponential-type functions are needed to impose constraint satisfaction. Different from the bounded case, where the existence of an equilibrium point is always guaranteed, with unbounded activations it may happen that there is no equilibrium point at all. In this case, it is difficult to deal with the stability of the equilibrium point for switched neural networks.
In fact, studies of neural dynamical systems involve not only the stability property but also other dynamical behaviors, such as ultimate boundedness and the existence of an attractor [24, 25]. To the best of our knowledge, so far there are no published results on the ultimate boundedness and the attractor for the switched system (3).
Motivated by the above discussion, the objective of this paper is to establish a set of sufficient criteria for the ultimate boundedness and the existence of an attractor for the switched system. The rest of this paper is organized as follows. Section 2 presents the model formulation and some preliminaries. In Section 3, the ultimate boundedness and the attractor of the considered model are studied. In Section 4, a numerical example is given to show the effectiveness of our results. Finally, conclusions are given in Section 5.
2. Problem Formulation
Throughout this paper, the following assumptions are made.
(H1) The delay functions τ(t) and h(t) are bounded, that is, 0 ≤ τ(t) ≤ τ and 0 ≤ h(t) ≤ h, where τ and h are positive scalars.
(H2) There exist constants lj and Lj, j = 1,2, …, n, such that lj ≤ (fj(x) − fj(y))/(x − y) ≤ Lj for all x, y ∈ R, x ≠ y.
Remark 1. We point out that the constants lj and Lj may be positive, negative, or zero, and the boundedness of fj(·) is no longer needed in this paper. Therefore, the activation function fj(·) may be unbounded, which makes (H2) more general than the commonly used condition |fj(u)| ≤ Kj | u | , Kj > 0, j = 1,2, …, n. Different from the bounded case, where the existence of an equilibrium point is always guaranteed, under condition (H2) the switched system (3) may have no equilibrium point at all. Thus it is of great interest to investigate the ultimate boundedness of solutions and the existence of an attractor for system (3) in place of the usual stability property.
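As a simple illustration of the generality of (H2), consider the following hypothetical activation, chosen here purely for illustration and not used elsewhere in the paper:

```latex
% A hypothetical activation illustrating (H2); not taken from the paper.
f_j(u) = u - 1, \qquad \frac{f_j(x) - f_j(y)}{x - y} = 1 \quad (x \neq y),
```

so (H2) holds with lj = Lj = 1, while fj is unbounded on R and fj(0) = −1 ≠ 0; hence neither boundedness nor the condition |fj(u)| ≤ Kj | u | is satisfied.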
Let C([−τ*, 0], Rn) denote the Banach space of continuous mappings from [−τ*, 0] to Rn equipped with the supremum norm ∥φ∥ = sup{∥φ(s)∥ : −τ* ≤ s ≤ 0}. Throughout this paper, we use the following notation: AT denotes the transpose of a square matrix A; A > 0 (<0) denotes a positive (negative) definite matrix A; the symbol “*” within a matrix represents the symmetric term of the matrix; λmin (A) and λmax (A) represent the minimum and maximum eigenvalues of the matrix A, respectively.
System (3) is supplemented with initial values of the type x(s) = φ(s), s ∈ [−τ*, 0], where φ ∈ C([−τ*, 0], Rn).
Definition 2 (see [24]). System (3) is uniformly ultimately bounded if there is a constant B > 0 such that, for any constant ϱ > 0, there is t′ = t′(ϱ) > 0 such that ∥x(t; t0, φ)∥ < B for all t ≥ t0 + t′, t0 > 0, whenever ∥φ∥<ϱ.
Definition 3. The nonempty closed set 𝔸 ⊂ Rn is called an attractor for the solutions x(t; φ) of system (3) if dist(x(t; φ), 𝔸) → 0 as t → +∞, where dist(x, 𝔸) = inf{∥x − y∥ : y ∈ 𝔸}.
Definition 4 (see [26]). For any switching signal σ(t) and any finite constants T1, T2 satisfying T2 > T1 ≥ 0, denote the number of discontinuities of σ(t) over the time interval (T1, T2) by Nσ(T1, T2). If Nσ(T1, T2) ≤ N0 + (T2 − T1)/Tα holds for some Tα > 0 and N0 > 0, then Tα is called the average dwell time.
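To make the average dwell time condition concrete, the following minimal sketch in Python (a language not otherwise used in this paper) counts the switching discontinuities Nσ(T1, T2) of a piecewise-constant signal and checks the bound of Definition 4; the switching instants below are purely hypothetical.

```python
# Illustrative check of the average dwell time condition in Definition 4.
# The switching instants are hypothetical and not taken from the paper.

def count_switches(switch_times, T1, T2):
    """Number of discontinuities N_sigma(T1, T2) of the switching signal on (T1, T2)."""
    return sum(1 for t in switch_times if T1 < t < T2)

def satisfies_adt(switch_times, T1, T2, N0, Ta):
    """Check N_sigma(T1, T2) <= N0 + (T2 - T1) / Ta."""
    return count_switches(switch_times, T1, T2) <= N0 + (T2 - T1) / Ta

if __name__ == "__main__":
    switch_times = [0.5, 1.2, 2.0, 3.1, 4.05]   # hypothetical switching instants t_k
    # 5 switches on (0, 5); the bound is N0 + (T2 - T1)/Ta = 1 + 6.25, so the condition holds.
    print(satisfies_adt(switch_times, 0.0, 5.0, N0=1, Ta=0.8))
```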
3. Main Results
Theorem 5. Assume there is a constant μ, such that , and denote g(μ) as
Proof. Choose the following Lyapunov functional:
From assumption (H2), we have
Then we have
Therefore, we obtain
If one chooses , then for any constant ϱ > 0 with ∥φ∥<ϱ, there is t′ = t′(ϱ) > 0 such that e−atV(x(0)) < 1 for all t ≥ t′. According to Definition 2, we have for all t ≥ t′. That is, system (2) is uniformly ultimately bounded. This completes the proof.
Theorem 6. If all of the conditions of Theorem 5 hold, then there exists an attractor for the solutions of system (2), where .
Proof. If one chooses , Theorem 5 shows that for any ϕ there is t′ > 0, such that for all t ≥ t′. Let be denoted by . Clearly, is closed, bounded, and invariant. Furthermore, . Therefore, is an attractor for the solutions of system (2). This completes the proof.
Corollary 7. Suppose that all of the conditions of Theorem 5 hold. If, in addition, J = 0 and fi(0) = 0 for all i = 1,2, …, n, then system (2) has the trivial solution x(t) ≡ 0, and this trivial solution is globally exponentially stable.
Proof. If J = 0 and fi(0) = 0 for all i = 1,2, …, n, then R1 = 0, and it is obvious that system (2) has a trivial solution x(t) ≡ 0. From Theorem 5, one has
Theorem 8. For a given constant a > 0, if there exist positive definite matrices Pi = diag (pi1, pi2, …, pin), Yi = diag (yi1, yi2, …, yin), i = 1,2, such that the following condition holds:
Proof. Define the Lyapunov functional candidate
If one chooses , then for any constant ϱ > 0 with ∥φ∥<ϱ, there is t′ = t′(ϱ) > 0 such that for all t ≥ t′. According to Definition 2, we have for all t ≥ t′. That is, system (3) is uniformly ultimately bounded, and the proof is completed.
Theorem 9. If all of the conditions of Theorem 8 hold, then there exists an attractor for the solutions of system (3), where .
Proof. If one chooses , Theorem 8 shows that for any ϕ there is t′ > 0, such that for all t ≥ t′. Let be denoted by . Clearly, is closed, bounded, and invariant. Furthermore, . Therefore, is an attractor for the solutions of system (3). This completes the proof.
Corollary 10. Suppose that all of the conditions of Theorem 8 hold. If, in addition, J = 0 and fi(0) = 0 for all i, then system (3) has the trivial solution x(t) ≡ 0, and this trivial solution is globally exponentially stable.
Proof. If J = 0 and fi(0) = 0 for all i, then it is obvious that system (3) has a trivial solution x(t) ≡ 0. From Theorem 8, one has
Remark 11. Up to now, various dynamical results have been proposed for switched neural networks in the literature. For example, in [15], synchronization control of switched linearly coupled delayed neural networks is investigated; in [16–20], the authors investigated the stability of switched neural networks; in [21, 22], stability and L2-gain analysis for switched delay systems is investigated. To the best of our knowledge, there are few works on the uniform ultimate boundedness and the existence of an attractor for switched neural networks. Therefore, the results of this paper are new.
Remark 12. We notice that, in a very recent paper [20], Lian and Zhang developed an LMI approach to study the stability of switched Cohen-Grossberg neural networks and obtained some novel results, where the considered model includes both discrete and bounded distributed delays. In [20], the following fundamental assumptions are required: (i) the delay functions τ(t) and h(t) are bounded, and , ; (ii) fi(0) = 0 and lj ≤ (fj(x) − fj(y))/(x − y) ≤ Lj for all j = 1,2, …, n; (iii) the switched system has only one equilibrium point. However, as a defect in [20], a check of inequality (13) there shows that the assumed condition on is not correct and should be revised as . On the other hand, just as described in Remark 1 of this paper, for a neural network with unbounded activation functions, the system considered in [20] may have no equilibrium point or may have multiple equilibrium points. In this case, it is difficult to deal with the stability of the equilibrium point for switched neural networks. To remedy this imperfection, we relax the conditions , , and fi(0) = 0, replace (i), (ii), and (iii) with assumptions (H1) and (H2), drop the assumption of the existence of a unique equilibrium point, and instead investigate the ultimate boundedness and the existence of an attractor; this modification seems more natural and reasonable.
Remark 13. Although the Lyapunov function adopted in this paper when investigating stability is similar to that used in [20], Corollaries 7 and 10 show that the conservatism of the conditions on the delay functions has been further reduced. Hence, the stability results obtained in this paper are complementary to the corresponding results in [20].
Remark 14. When uncertainties appear in system (3), the corresponding dynamical results can be obtained by employing the Lyapunov function (27) of this paper and applying a method similar to the one used in [20]. Due to space limitations, we do not present here the straightforward but tedious computations for the formulas that determine the uniform ultimate boundedness, the existence of an attractor, and stability.
4. Illustrative Example
In this section, we present an example to illustrate the effectiveness of the proposed results. Consider a switched cellular neural network with two subsystems.
Example 15. Consider the switched cellular neural network (3) with di = 1, fi(xi(t)) = 0.5 tanh(xi(t)) (i = 1,2), τ(t) = 0.5 sin²(t), h(t) = 0.3 sin²(t), and the connection weight matrices where
From assumptions (H1) and (H2), we can obtain d = 1, li = 0, Li = 0.5, i = 1,2, τ = 0.5, h = 0.3, μ = 1.
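The connection weight matrices of the two subsystems and the exact form of system (3) are not reproduced in this excerpt. Purely as an illustration, the following Python sketch simulates a two-subsystem switched CNN of the standard mixed-delay form ẋ(t) = −Dx(t) + Aσ(t)f(x(t)) + Bσ(t)f(x(t − τ(t))) + Cσ(t)∫ from t−h(t) to t of f(x(s))ds + J by forward Euler discretization; the weight matrices, the input J, the switching signal, and the initial function below are hypothetical placeholders, while di, fi, τ(t), and h(t) are taken from Example 15.

```python
# Hypothetical simulation skeleton for a two-subsystem switched CNN with mixed delays.
# The weight matrices, input J, switching signal, and initial function are placeholders;
# only d_i, f_i, tau(t), and h(t) are taken from Example 15.
import numpy as np

n, dt, T = 2, 0.001, 10.0
steps = int(T / dt)
D = np.eye(n)                                    # d_i = 1
J = np.zeros(n)                                  # external input (placeholder)
A = [np.array([[0.2, -0.1], [0.1, 0.3]]),        # hypothetical A_1, A_2 (no-delay weights)
     np.array([[0.1, 0.2], [-0.2, 0.1]])]
B = [0.1 * np.eye(n), -0.1 * np.eye(n)]          # hypothetical B_1, B_2 (discrete-delay weights)
C = [0.05 * np.eye(n), 0.05 * np.eye(n)]         # hypothetical C_1, C_2 (distributed-delay weights)

f = lambda x: 0.5 * np.tanh(x)                   # f_i(x) = 0.5 tanh(x)
tau = lambda t: 0.5 * np.sin(t) ** 2             # discrete delay, bounded by tau = 0.5
h = lambda t: 0.3 * np.sin(t) ** 2               # distributed delay, bounded by h = 0.3
sigma = lambda t: int(t) % 2                     # hypothetical periodic switching signal

max_delay = int(0.5 / dt)                        # history length needed for the delays
hist = np.tile([0.5, -0.5], (max_delay + steps + 1, 1)).astype(float)  # constant initial function

for k in range(steps):
    t, i = k * dt, max_delay + k
    s = sigma(t)
    x = hist[i]
    x_tau = hist[i - int(tau(t) / dt)]                        # x(t - tau(t))
    m = max(int(h(t) / dt), 1)
    distributed = dt * sum(f(hist[i - j]) for j in range(m))  # ~ integral of f(x(s)) over [t-h(t), t]
    hist[i + 1] = x + dt * (-D @ x + A[s] @ f(x) + B[s] @ f(x_tau) + C[s] @ distributed + J)

print("state at t = 10:", hist[-1])
```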
Choosing a = 2 and solving the LMIs (23), we obtain feasible solutions, so the proposed conditions hold for this example.
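The LMI (23) and the feasible matrices it yields are not reproduced in this excerpt. As a generic illustration of how such a feasibility check can be scripted, the following Python sketch uses cvxpy (in place of the Matlab LMI toolbox) with diagonal positive definite variables P = diag(p) and Y = diag(y) and the decay rate a = 2; the system matrix W and the Lyapunov-type inequality below are placeholders and are not the actual condition (23).

```python
# Generic LMI feasibility sketch using cvxpy, illustrating the kind of check performed
# with the Matlab LMI toolbox; the matrix W and the inequality are placeholders, NOT LMI (23).
import cvxpy as cp
import numpy as np

n = 2
a = 2.0                                        # decay rate, as in the example (a = 2)
L = 0.5 * np.eye(n)                            # L_i = 0.5 from (H2)
W = np.array([[-1.5, 0.2],
              [0.1, -1.8]])                    # hypothetical system data

p = cp.Variable(n)                             # diagonal entries of P = diag(p) > 0
y = cp.Variable(n)                             # diagonal entries of Y = diag(y) > 0
P, Y = cp.diag(p), cp.diag(y)

# Illustrative Lyapunov-type matrix inequality with decay rate a (placeholder for LMI (23)).
lmi = a * P + W.T @ P + P @ W + L @ Y @ L
constraints = [lmi << -1e-6 * np.eye(n), p >= 1e-6, y >= 1e-6]

prob = cp.Problem(cp.Minimize(0), constraints)
prob.solve()
print("status:", prob.status)
print("P = diag", p.value, ", Y = diag", y.value)
```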
5. Conclusions
In this paper, the dynamics of switched cellular neural networks with mixed delays (interval time-varying delays and distributed time-varying delays) have been investigated. Novel multiple Lyapunov-Krasovskii functionals are designed to establish new sufficient conditions guaranteeing the uniformly ultimate boundedness, the existence of an attractor, and the globally exponential stability. The derived conditions are expressed in terms of LMIs, which are less restrictive than algebraic formulations and can be easily checked in practice with the LMI toolbox in Matlab.
Acknowledgments
The authors are extremely grateful to Professor Jinde Cao and the anonymous reviewers for their constructive and valuable comments, which have contributed much to the improvement of this paper. This work was jointly supported by the National Natural Science Foundation of China under Grants nos. 11101053, 70921001, and 71171024, the Key Project of the Chinese Ministry of Education under Grant no. 211118, the Excellent Youth Foundation of the Educational Committee of Hunan Province under Grant no. 10B002, and the Scientific Research Funds of the Hunan Provincial Science and Technology Department of China.