Volume 2012, Issue 1 405939
Research Article
Open Access

Stability of the Stochastic Reaction-Diffusion Neural Network with Time-Varying Delays and p-Laplacian

Pan Qingfei (Corresponding Author)
College of Civil Engineering and Architecture, Sanming University, Sanming 365004, China

Zhang Zifang
Department of Mathematics and Physics, Huaihai Institute of Technology, Lianyungang 222005, China

Huang Jingchang
Science and Technology on Microsystems Laboratory, Shanghai Institute of MicroSystems and Information Technology, CAS, Shanghai 200050, China
First published: 03 December 2012
Citations: 4
Academic Editor: Maoan Han

Abstract

The main aim of this paper is to discuss moment exponential stability for a stochastic reaction-diffusion neural network with time-varying delays and p-Laplacian. Using the Itô formula, a delay differential inequality, and the characteristics of the neural network, algebraic conditions for the moment exponential stability of the nonconstant equilibrium solution are derived. An example is also given for illustration.

1. Introduction

In many neural networks, time delays cannot be avoided. For example, in electronic neural networks, time delays arise from the finite switching speed of amplifiers. In fact, time delays are often encountered in various engineering, biological, and economical systems. On the other hand, when designing a neural network to solve a problem such as optimization or pattern recognition, we need foremost to guarantee that the neural network model is globally asymptotically stable. However, the existence of time delays frequently causes oscillation, divergence, or instability in neural networks. In recent years, the stability of neural networks with or without delays has become a topic of great theoretical and practical importance (see [1–16]).

The stability of neural networks described by partial differential equations was studied in [6, 7]. Stochastic differential equations were employed to study the stability of neural networks in [8–11], while [12, 13] used stochastic partial differential equations to analyze this question. In [15], the authors studied almost sure exponential stability for a stochastic recurrent neural network with time-varying delays. In addition, moment exponential stability for a stochastic reaction-diffusion neural network with time-varying delays is discussed in [16].

In this paper, we consider the stochastic reaction-diffusion neural network with time-varying delays and p-Laplacian as follows:
()
()
()
In (1.1), ∇ui = (∂ui/∂x1, …, ∂ui/∂xm)^T, and p ≥ 2 is a constant. Ω ⊆ R^m is a bounded convex domain with smooth boundary ∂Ω and measure mes Ω > 0. n denotes the number of neurons in the neural network, ui(t, x) corresponds to the state of the ith neuron at time t and point x, ai(u) is an amplification function, and Ii is the output. gj(uj(t − τj, x)) denotes the output of the jth neuron at time t − τj and point x, namely the activation function, which shows how the neurons respond to each other. W(t) = (W1(t), …, Wm(t))^T is an m-dimensional Brownian motion defined on a complete probability space (𝒮, ℱ, 𝒫) with a natural filtration (i.e., ℱt = σ{W(s) : t0 ≤ s ≤ t}). σ(u) = (σ1(u1), …, σn(un))^T, σi(ui) = (σi1(ui), …, σim(ui)), where σij(ui) denotes the intensity of the stochastic perturbation. The functions σij(ui) and gi are subject to certain conditions to be specified later. T := (Tij)n×n is a real constant matrix representing the weights of the neuron interconnections; namely, Tij denotes the strength of the jth neuron on the ith neuron at time t − τj and point x, and τj ∈ [0, τ] corresponds to the axonal signal transmission delay.
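Since the displayed equations (1.1)–(1.3) are not reproduced above, a purely illustrative numerical sketch of the model class may still help: one neuron (n = 1), p = 2 (so the p-Laplacian reduces to the ordinary Laplacian), a constant delay τ, zero-flux boundary, discretised by Euler–Maruyama. Every coefficient here (a, b, Tw, I, the activation g, and the noise intensity σ) is an assumption for illustration, not taken from the paper.

```python
import numpy as np

# Hypothetical scalar instance of a stochastic delayed reaction-diffusion
# network: du = [u_xx - a*(b*u - Tw*g(u(t - tau, x)) - I)] dt + sigma(u) dW.
# All coefficients are illustrative assumptions, not the paper's system.
rng = np.random.default_rng(0)
L_dom, M = 1.0, 50                 # domain length, number of spatial points
dx, dt = L_dom / M, 1e-4
tau = 0.05                         # transmission delay
lag = int(tau / dt)                # delay measured in time steps
steps = int(1.0 / dt)              # integrate up to t = 1

a, b, Tw, I = 1.0, 2.0, 0.5, 0.0   # amplification, decay, weight, input
g = np.tanh                        # Lipschitz activation, as in (H2)
sigma = lambda u: 0.1 * u          # noise intensity, vanishing at u* = 0

x = np.linspace(0.0, L_dom, M)
u = np.tile(np.sin(np.pi * x), (lag + 1, 1))   # constant initial history

norms = []
for _ in range(steps):
    cur, delayed = u[-1], u[0]
    lap = (np.roll(cur, -1) - 2 * cur + np.roll(cur, 1)) / dx**2
    lap[0] = lap[-1] = 0.0                     # crude zero-flux boundary
    drift = lap - a * (b * cur - Tw * g(delayed) - I)
    dW = rng.normal(0.0, np.sqrt(dt), M)
    new = cur + drift * dt + sigma(cur) * dW
    u = np.vstack([u[1:], new])                # shift the history buffer
    norms.append(np.sqrt(dx) * np.linalg.norm(new))
```

With the decay term dominating the delayed feedback (a·b = 2 versus Tw·c = 0.5 under a Lipschitz constant c = 1), the discrete L2 norm in `norms` shrinks toward the equilibrium u* = 0, which is the qualitative behaviour the moment-stability results below formalize.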

2. Definitions and Lemmas

Throughout this paper, unless otherwise specified, let |·| denote the Euclidean norm. Define and , where x = (x1, …, xn)^T ∈ Rn. Denote by C([−τ, 0] × Ω; Rn) the family of continuous functions φ from [−τ, 0] × Ω to Rn. For every t ≥ t0 and p ≥ 2, denote by × Ω; Rn) the family of all ℱt-measurable C([−τ, 0] × Ω; Rn)-valued random variables such that , where ∥ϕ(θ)∥p = (∫Ω |ϕ(θ, x)|^p dx)^{1/p} and E(ϕ) denotes the expectation of the random variable ϕ.

Definition 2.1. A function u(t, x) = (u1(t, x), …, un(t, x))^T is called a solution of problem (1.1)–(1.3) if it satisfies the following conditions (1), (2), and (3):

  • (1)

u(t, x) is adapted to ℱt;

  • (2)

for , u(t, x) ∈ C([t0, T] × Ω, Rn), and , where ∇u(t, x) = (∂u/∂x1, …, ∂u/∂xm);

  • (3)

    for , t ∈ (t0, T], it holds that

    ()

Definition 2.2. A function u = u*(x) is called a nonconstant equilibrium solution of problem (1.1)–(1.3) if and only if u = u*(x) satisfies (1.1) and (1.2).

Definition 2.3. The nonconstant equilibrium solution u*(x) of (1.1) is said to be exponentially stable in pth moment with respect to the given norm ∥·∥Ω if there are constants M > 0 and δ > 0 such that every stochastic field solution u(t, x) of problem (1.1)–(1.3) satisfies

()
namely,
()
The constant −δ on the right-hand side of (2.3) is called the Lyapunov exponent of the convergence of every solution of problem (1.1)–(1.3) to the equilibrium with respect to the norm ∥·∥Ω.

In order to obtain the pth moment exponential stability of a nonconstant equilibrium solution of problem (1.1)–(1.3), we need the following lemmas.

Lemma 2.4 (see [17]). Let P = (pij)n×n and Q = (qij)n×n be two real matrices, and let the continuous functions ui(t) ≥ 0 satisfy the delay differential inequalities

()
If pij > 0 for i ≠ j and qij ≥ 0 (i, j = 1, 2, …, n), and −(P + Q) is an M-matrix, then there are constants ki > 0 and α > 0 such that
()
where ϕ = (ϕ1, …, ϕn) are the initial functions, D+ is the right-hand upper derivative, and ∥·∥ represents a norm.

Lemma 2.5 (see [10]). Let p > 2. Then there are positive constants ep(n) and dp(n) such that, for any x = (x1, …, xn)^T ∈ Rn,

()

Remark 2.6. If p = 2, Lemma 2.5 also holds with ep(n) = dp(n) = 1.
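The display of Lemma 2.5 is not reproduced above. A standard inequality of this type, relating the p-sum to the Euclidean norm |x| for p ≥ 2, is n^(1 − p/2)·|x|^p ≤ Σᵢ|xᵢ|^p ≤ |x|^p, i.e. one may take dp(n) = n^(1 − p/2) and ep(n) = 1; these constants are an assumption for illustration, not necessarily those of [10], though they do reduce to ep(n) = dp(n) = 1 at p = 2 as in Remark 2.6. A numerical spot-check:

```python
import numpy as np

# Verify n**(1 - p/2) * |x|**p <= sum_i |x_i|**p <= |x|**p on random vectors
# (illustrative constants; |x| is the Euclidean norm).
rng = np.random.default_rng(1)
for p in (2, 3, 4):
    for n in (2, 5, 10):
        x = rng.normal(size=n)
        euclid_p = np.linalg.norm(x) ** p      # |x|**p
        p_sum = np.sum(np.abs(x) ** p)         # sum_i |x_i|**p
        assert n ** (1 - p / 2) * euclid_p <= p_sum + 1e-9
        assert p_sum <= euclid_p + 1e-9
print("norm-equivalence inequalities verified")
```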

Suppose that σij(ui), ai(u), and gi are Lipschitz continuous and that the following conditions hold:

  • (H1) |(σi(v1) − σi(v2))(σi(v1) − σi(v2))^T| ≤ 2λi|v1 − v2|², i = 1, 2, …, n;

  • (H2) |gi(v1) − gi(v2)| ≤ ci|v1 − v2|, i = 1, 2, …, n;

  • (H3) (u − v)(ai(u)u − ai(v)v) ≥ ai|u − v|², for all u, v ∈ R, i = 1, 2, …, n,

where ci, ai, and λi (1 ≤ i ≤ n) are positive constants.
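Condition (H2) asks for a global Lipschitz bound on each activation function. For the common choice g = tanh one may take ci = 1, since sup|tanh′| = tanh′(0) = 1; the spot-check below is purely illustrative, as the paper does not fix a particular activation.

```python
import numpy as np

# Numerically spot-check (H2) for g = tanh with Lipschitz constant c = 1:
# |tanh(v1) - tanh(v2)| <= |v1 - v2| for all real v1, v2.
rng = np.random.default_rng(2)
v1 = rng.normal(size=10_000)
v2 = rng.normal(size=10_000)
mask = np.abs(v1 - v2) > 1e-6               # avoid near-zero denominators
ratios = np.abs(np.tanh(v1) - np.tanh(v2))[mask] / np.abs(v1 - v2)[mask]
print(float(ratios.max()))                  # stays <= 1 up to rounding
```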

3. Main Result

Let u(t, x) = (u1(t, x), …, un(t, x))^T be a solution of problem (1.1)–(1.3) and u*(x) a nonconstant equilibrium solution of problem (1.1)–(1.3).

Theorem 3.1. Let p ≥ 2 and let (H1)–(H3) hold. Assume that there are positive constants ε1, …, εn such that the matrix Mp := −(pji + qij)n×n = −(P^T + Q) is an M-matrix, where

()
then the nonconstant equilibrium solution of problem (1.1)–(1.3) is exponentially stable in pth moment with respect to the Lp norm; that is, there are constants C1 > 0 and α > 0 such that, for any t0 ∈ R+ and any ,
()

Proof. Set . For every t ≥ t0 and dt > 0, by means of the Itô formula and (H3), one has that

()
Both sides of inequality (3.3) are integrated with respect to x over Ω. Set . One has that
()
Set . By (1.2), one has that
()
where ν is the unit outward normal vector on ∂Ω. By (3.4), (3.5), (H1), and Young's inequality, one has that
()
where pij and qij are defined by (3.1).

For Δt > 0, both sides of (3.6) are integrated with respect to t from t to t + Δt, and then expectations are taken on both sides. By the properties of Brownian motion, one has that

()
Since the integrals and are finite, by the Fubini theorem [18] and (3.7), one obtains that
()

Set . Both sides of inequality (3.8) are divided by Δt; letting Δt → 0, one has the following inequality:

()
By Lemma 2.4, there are positive constants Ki, α such that
()
where ϕj is the initial value. Set K = max{Ki : 1 ≤ i ≤ n}; then
()
By (3.11) and Lemma 2.5, one obtains that
()

In order to prove Theorem 3.1, we need the following lemma.

Lemma 3.2. The nonconstant equilibrium solution u*(x) of problem (1.1)–(1.3) satisfies .

Proof. Set . Similarly to (3.8) in the proof of Theorem 3.1, one has that

()
By (3.13) and the assumption that Mp := −(pji + qij)n×n = −(P^T + Q) is an M-matrix, one obtains that
()
Because of and Δt > 0, one has that .

We continue the proof of Theorem 3.1 as follows.

By Lemma 3.2, one has that

()
where M > 0 is a constant. We thus derive that every solution of problem (1.1)–(1.3) satisfies
()
then the nonconstant equilibrium solution of problem (1.1)–(1.3) is exponentially stable in pth moment with respect to the Lp norm. The proof of Theorem 3.1 is complete.

In order to illustrate the application of the theorem, we give an example.

Example 3.3. Consider the stochastic reaction-diffusion neural network with time-varying delays and p-Laplacian as follows:

()
()
()
()
where
()
Set as a nonconstant equilibrium solution of (3.17) and (3.18). One can derive that
()
Taking εi = 1  (i = 1,2) and p = 3, one has that
()
and M3 is an M-matrix. Hence the nonconstant equilibrium solution of (3.17) and (3.18) is exponentially stable in the 3rd moment with respect to the L3 norm.

Remark 3.4. Theorem 3.1 extends the corresponding results in [12, 13, 16] to the setting with the p-Laplacian.

Acknowledgments

The authors thank the reviewers for their constructive comments. This work is supported by the National Natural Science Foundation of China (no. 10971240).
