
IEEE TRANSACTIONS ON NEURAL NETWORKS, VOL. 19, NO. 5, MAY 2008 855

Global Asymptotic Stability of Recurrent Neural Networks With Multiple Time-Varying Delays

Huaguang Zhang, Senior Member, IEEE, Zhanshan Wang, and Derong Liu, Fellow, IEEE

Abstract—In this paper, several sufficient conditions are established for the global asymptotic stability of recurrent neural networks with multiple time-varying delays. The Lyapunov–Krasovskii stability theory for functional differential equations and the linear matrix inequality (LMI) approach are employed in our investigation. The results are shown to be generalizations of some previously published results and are less conservative than existing results. The present results are also applied to recurrent neural networks with constant time delays.

Index Terms—Recurrent neural networks, global asymptotic stability, multiple time-varying delays, linear matrix inequality (LMI), Lyapunov–Krasovskii functional.

I. INTRODUCTION

RECENTLY, dynamical recurrent neural networks have attracted considerable attention, as they have proved essential in applications such as classification of patterns, associative memories, and optimization. In particular, the recurrent neural network model introduced by Hopfield in [10] has been studied extensively and has been successfully applied to optimization problems. Moreover, time delay is commonly encountered in the implementation of neural networks due to the finite speed of information processing, and it is frequently a source of oscillation and instability in neural networks. Therefore, the stability of delayed neural networks has become a topic of great theoretical and practical importance. When a neural network is designed to function as an associative memory, it is required that there exist many stable equilibrium points, whereas in the case of solving optimization problems, it is necessary that the designed neural network have a unique equilibrium point that is globally asymptotically stable. Correspondingly, it is of great interest to establish conditions that ensure the global asymptotic stability of a unique equilibrium point of a neural network. Recently, many researchers have studied the equilibria and stability properties of neural networks and established various sufficient conditions for the uniqueness and global asymptotic stability of the equilibrium point for recurrent neural networks [1], [2], [4]–[9], [11]–[31], in which [6], [9], [11], [14], [31] are for the case of time-varying delays and the others are for the case of constant delays.

Manuscript received May 1, 2006; revised February 3, 2007 and June 23, 2007; accepted October 9, 2007. This work was supported by the National Natural Science Foundation of China under Grants 60534010, 60572070, 60521003, 60774048, and 60728307, the National High Technology Research and Development Program of China under Grant 2006AA04Z183, and the Program for Changjiang Scholars and Innovative Research Groups of China under Grant 60521003.

H. Zhang and Z. Wang are with the School of Information Science and Engineering, Northeastern University, Shenyang, Liaoning 110004, P. R. China, and also with the Key Laboratory of Integrated Automation of Process Industry, Northeastern University, Ministry of Education of China, Shenyang 110004, China (e-mail: [email protected]; [email protected]).

D. Liu is with the Department of Electrical and Computer Engineering, University of Illinois at Chicago, Chicago, IL 60607 USA (e-mail: [email protected]).

Digital Object Identifier 10.1109/TNN.2007.912319

In this paper, we will establish several new sufficient conditions for the global asymptotic stability of the equilibrium point for recurrent neural networks with multiple time-varying delays or constant delays. Compared to some previously published results, the results of this paper are less conservative and more general. The main contributions of the paper include the following.

1) First, we present some linear matrix inequality (LMI)-based stability results for a class of neural networks with different multiple delays and with different activation functions. The concerned model regards the models in [11], [14], [21], and [22] as special cases, so our present results can be applied directly to the models studied in [11], [14], [21], and [22]. In these works, stability results are obtained mainly in the form of M-matrix or algebraic inequality conditions; no LMI-based results have been reported in the existing literature. On the other hand, because LMI-based results take into account the sign differences of the elements in the connection matrices, the neuron's excitatory and inhibitory effects on the neural network are considered, which overcomes the shortcomings of the results based on M-matrices and algebraic inequalities. Correspondingly, results obtained using LMI techniques will be less conservative.

2) Many existing stability results in the form of LMIs are special cases of the present results. Therefore, compared to existing results, the present results are less conservative and have a wider field of application.

This paper is organized as follows. In Section II, we introduce some notation, the problem description, and preliminaries. In Section III, we establish the main results of this paper and some corollaries. In Section IV, an example is included to demonstrate the applicability of the present results. In Section V, some pertinent remarks are provided to conclude this paper.

II. PROBLEM STATEMENT AND PRELIMINARIES

Throughout the paper, let $A^{T}$, $A^{-1}$, $\lambda_{M}(A)$, $\lambda_{m}(A)$, and $\|A\|$ denote the transpose, the inverse, the largest eigenvalue, the smallest eigenvalue, and the Euclidean norm of a square matrix $A$, respectively. Let $A \ge 0$ ($A > 0$, $A < 0$) denote a positive-semidefinite (positive-definite, negative-definite) symmetric matrix $A$, respectively. Let $I$ and $0$ denote



the identity matrix and the zero matrix with appropriate dimensions, respectively. Time-varying delays $\tau_{kj}(t)$, $k = 1, \dots, r$, $j = 1, \dots, n$, are assumed to be bounded, $0 \le \tau_{kj}(t) \le \tau_{kj}$, where the $\tau_{kj}$ are constants. Let $\tau = \max_{k,j} \tau_{kj}$; $\dot{\tau}_{kj}(t)$ denotes the rate of change of $\tau_{kj}(t)$.

Now, we consider the following neural network model with different multiple time-varying delays:

$$\dot{x}_{i}(t) = -a_{i} x_{i}(t) + \sum_{j=1}^{n} w_{ij}^{0}\, g_{j}^{0}(x_{j}(t)) + \sum_{k=1}^{r} \sum_{j=1}^{n} w_{ij}^{k}\, g_{j}^{k}\bigl(x_{j}(t - \tau_{kj}(t))\bigr) + U_{i}, \qquad i = 1, \dots, n \tag{1}$$

or in a matrix-vector format

$$\dot{x}(t) = -A x(t) + W_{0}\, g^{0}(x(t)) + \sum_{k=1}^{r} W_{k}\, g^{k}\bigl(x(t - \tau_{k}(t))\bigr) + U \tag{2}$$

where $x(t) = [x_{1}(t), \dots, x_{n}(t)]^{T}$ is the neuron state vector, $n$ is the number of neurons, $A = \mathrm{diag}(a_{1}, \dots, a_{n})$ with $a_{i} > 0$, $W_{0} = (w_{ij}^{0})_{n \times n}$ and $W_{k} = (w_{ij}^{k})_{n \times n}$, $k = 1, \dots, r$, are the connection weight matrix and delayed connection weight matrices, respectively, $r$ denotes the number of delayed connection matrices, $g^{k}\bigl(x(t - \tau_{k}(t))\bigr) = [g_{1}^{k}(x_{1}(t - \tau_{k1}(t))), \dots, g_{n}^{k}(x_{n}(t - \tau_{kn}(t)))]^{T}$, and $U = [U_{1}, \dots, U_{n}]^{T}$ denotes the external input vector.

Assumption 2.1: The activation functions $g_{j}^{0}$ and $g_{j}^{k}$ satisfy the following conditions, respectively:

$$0 \le \frac{g_{j}^{0}(\xi_{1}) - g_{j}^{0}(\xi_{2})}{\xi_{1} - \xi_{2}} \le \sigma_{j}, \qquad 0 \le \frac{g_{j}^{k}(\xi_{1}) - g_{j}^{k}(\xi_{2})}{\xi_{1} - \xi_{2}} \le \delta_{j}, \qquad k = 1, \dots, r$$

for arbitrary $\xi_{1}, \xi_{2} \in \mathbb{R}$ with $\xi_{1} \ne \xi_{2}$, and for some positive constants $\sigma_{j}$ and $\delta_{j}$.

Let $\Sigma = \mathrm{diag}(\sigma_{1}, \dots, \sigma_{n})$ and $\Delta = \mathrm{diag}(\delta_{1}, \dots, \delta_{n})$. Obviously, $\Sigma$ and $\Delta$ are nonsingular.
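Typical activation functions satisfy sector conditions of this kind; for instance (an illustration added here, not part of the original statement), both

$$g(\xi) = \tanh(\xi) \qquad \text{and} \qquad g(\xi) = \tfrac{1}{2}\bigl(|\xi + 1| - |\xi - 1|\bigr)$$

are monotone nondecreasing and globally Lipschitz with constant 1, and hence satisfy Assumption 2.1 with $\sigma_{j} = \delta_{j} = 1$.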

In the proof of the main results of this paper, we require the following lemmas.

Lemma 2.1: Let $a$ and $b$ be two real vectors with appropriate dimensions, and let $X$ and $Y$ be two matrices with appropriate dimensions, where the product $XY$ is well defined. Then, for any two positive constants $\epsilon_{1}$ and $\epsilon_{2}$, the following inequality holds:

This lemma can be proved as follows:
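A standard inequality of exactly this shape, given here as a plausible reconstruction (using $a$, $b$, $X$, $Y$, $\epsilon_{1}$, $\epsilon_{2}$ as in the statement, and assuming $\epsilon_{2} \ge \epsilon_{1}^{-1}$), is

$$2 a^{T} X Y b \le \epsilon_{1}\, a^{T} X X^{T} a + \epsilon_{1}^{-1}\, b^{T} Y^{T} Y b \le \epsilon_{1}\, a^{T} X X^{T} a + \epsilon_{2}\, b^{T} Y^{T} Y b$$

which follows from expanding

$$0 \le \Bigl(\sqrt{\epsilon_{1}}\, X^{T} a - \tfrac{1}{\sqrt{\epsilon_{1}}}\, Y b\Bigr)^{T} \Bigl(\sqrt{\epsilon_{1}}\, X^{T} a - \tfrac{1}{\sqrt{\epsilon_{1}}}\, Y b\Bigr).$$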

Lemma 2.2 (see [7] and [30]): If a map $H(x) \in C^{0}(\mathbb{R}^{n})$ satisfies the following conditions:

1) $H(x)$ is injective on $\mathbb{R}^{n}$;
2) $\|H(x)\| \to \infty$ as $\|x\| \to \infty$;

then $H(x)$ is a homeomorphism of $\mathbb{R}^{n}$.

III. GLOBAL ASYMPTOTIC STABILITY RESULTS

Prior to the global asymptotic stability analysis for recurrent neural network (2), we first establish the following result, which guarantees the existence and uniqueness of the equilibrium point of neural network (2).

Proposition 3.1 (See Appendix I for a Proof): Suppose that . If there exist a positive-definite symmetric matrix , positive diagonal matrices , and such that the LMI in (3), shown below, holds, then the neural network (2) has a unique equilibrium point for a given , where $*$ denotes the symmetric part of the corresponding element in a matrix, and .

In the following, we assume that neural network (2) has an equilibrium point, namely, $x^{*} = [x_{1}^{*}, \dots, x_{n}^{*}]^{T}$, for a given $U$. By the linear transformation $y(t) = x(t) - x^{*}$, neural network (2) is converted into the following form:

$$\dot{y}(t) = -A y(t) + W_{0}\, f^{0}(y(t)) + \sum_{k=1}^{r} W_{k}\, f^{k}\bigl(y(t - \tau_{k}(t))\bigr) \tag{4}$$

where $y(t) = [y_{1}(t), \dots, y_{n}(t)]^{T}$ is the state vector of the transformed model

(3)


with $f^{k}(y(\cdot)) = g^{k}(y(\cdot) + x^{*}) - g^{k}(x^{*})$, $k = 0, 1, \dots, r$, and the initial condition $y(s) = \phi(s)$ for $s \in [-\tau, 0]$, where $\phi$ is a continuous and bounded vector-valued function with the supremum norm $\|\phi\| = \sup_{-\tau \le s \le 0} \|\phi(s)\|$. By Assumption 2.1, we can see that $f^{k}(0) = 0$ and that each $f_{j}^{k}$ satisfies the same sector condition as $g_{j}^{k}$ for $k = 0, 1, \dots, r$.

Clearly, the equilibrium point of neural network (2) is globally asymptotically stable if the zero solution of model (4) is globally asymptotically stable.

A. Recurrent Neural Networks With Different Multiple Time-Varying Delays

Theorem 3.1 (See Appendix II for a Proof): Suppose that . If condition (3) in Proposition 3.1 holds, then the equilibrium point of neural network (2) is globally asymptotically stable, independent of the magnitude of the time delays.

For the following model:

$$\dot{x}(t) = -A x(t) + W_{0}\, g^{0}(x(t)) + W_{1}\, g^{1}\bigl(x(t - \tau_{1}(t))\bigr) + U \tag{5}$$

which is a special case of model (1) [if we let $r = 1$ in (1)], we can easily obtain the following result.

Corollary 3.1: Suppose that . If there exist a positive-definite symmetric matrix , positive diagonal matrices , and , such that the following LMI holds:

(6)

then the equilibrium point of neural network (5) is globally asymptotically stable, independent of the magnitude of the time delays, where $*$ represents the symmetric part of the corresponding element in a matrix, , and .

Proof: In a similar manner to the proof of Theorem 3.1, we can easily obtain the result.

Remark 3.1: The global asymptotic stability problem for model (5) was studied in [6]. The existence and uniqueness of the equilibrium point, stated in [6, Ths. 1 and 2], are proved using a homotopic map and M-matrix theory. The global asymptotic stability results, stated in [6, Ths. 3–5], are derived using M-matrix theory and some algebraic inequality techniques. We emphasize that the techniques used in [6] ignore the neuron's excitatory and inhibitory effects on neural networks. In contrast, we use a homeomorphism map and the LMI technique to prove the existence and uniqueness of the equilibrium point, the global asymptotic stability conditions are established in the form of LMIs, and our derivation considers the neuron's excitatory and inhibitory effects on neural networks. Obviously, the results in [6] and in this paper are different and cannot be replaced by each other.

Example 3.1: Consider the neural network (5) with constant delays, where

and

. In this example, in the connection matrices and represents the excitatory connection of neurons, and in the connection matrices and represents the inhibitory connection of neurons.

Theorem 1 or 3 in [6] requires that

Clearly, Theorem 1 or 3 in [6] holds only for , while our Corollary 3.1 holds for . Obviously, for , Theorem 1 or 3 in [6] is not satisfied. The main reason Theorems 1 and 3 of [6] fail is that they neglect the sign differences of the elements in the connection matrices, owing to the use of absolute values in the previous expression. In general, the inhibitory effect of a neuron can stabilize a neural network, while the excitatory effect may destabilize a neural network.
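The sign issue in this example can be made concrete with a small numerical sketch (made-up matrices, not the elided data of the example): an absolute-value test of the M-matrix type discards cancellation between excitatory and inhibitory weights, while a sign-aware test of the norm/LMI type retains it.

```python
# Illustrative only: compare an absolute-value (M-matrix style) test with a
# sign-aware norm test for x'(t) = -A x(t) + W1 tanh(x(t - tau)).
# The matrices below are made up and are not the paper's example data.
import numpy as np

A = np.diag([2.0, 2.0])
W1 = np.array([[0.9, -1.5],
               [1.5,  0.9]])   # mixed excitatory/inhibitory connections

# Absolute-value test: requires A - |W1| to have eigenvalues with positive
# real parts; taking |W1| throws the sign information away.
print(np.linalg.eigvals(A - np.abs(W1)).real.min())  # -0.4 -> test fails

# Sign-aware test: spectral norm of W1 (about 1.75) below min_i a_i = 2 is a
# standard norm-type sufficient condition for delay-independent stability.
print(np.linalg.norm(W1, 2) < np.min(np.diag(A)))    # True -> test passes
```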

When $g^{k} = g^{0} \equiv g$, $k = 1, \dots, r$, in model (1), it becomes the following model:

$$\dot{x}_{i}(t) = -a_{i} x_{i}(t) + \sum_{j=1}^{n} w_{ij}^{0}\, g_{j}(x_{j}(t)) + \sum_{k=1}^{r} \sum_{j=1}^{n} w_{ij}^{k}\, g_{j}\bigl(x_{j}(t - \tau_{kj}(t))\bigr) + U_{i} \tag{7}$$

or in a matrix-vector format

$$\dot{x}(t) = -A x(t) + W_{0}\, g(x(t)) + \sum_{k=1}^{r} W_{k}\, g\bigl(x(t - \tau_{k}(t))\bigr) + U. \tag{8}$$

Different from Proposition 3.1, we give another sufficient condition to ensure the existence and uniqueness of the equilibrium point of model (8).

Proposition 3.2 (See Appendix III for a Proof): Suppose that . If there exist positive diagonal matrices and such that the following inequality holds:

(9)


where , then the neural network (8) has a unique equilibrium point for a given .

By Proposition 3.2, neural network (8) has an equilibrium point, namely, $x^{*}$, for a given $U$. By the linear transformation $y(t) = x(t) - x^{*}$, neural network (8) is converted into the following form:

$$\dot{y}(t) = -A y(t) + W_{0}\, f(y(t)) + \sum_{k=1}^{r} W_{k}\, f\bigl(y(t - \tau_{k}(t))\bigr) \tag{10}$$

where the notation is the same as that in (4).

Clearly, the equilibrium point of neural network (8) is globally asymptotically stable if the zero solution of model (10) is globally asymptotically stable. Next, we will establish a sufficient condition to ensure the global asymptotic stability of model (10), which is different from Theorem 3.1.

Theorem 3.2 (See Appendix IV for a Proof): Suppose that . If condition (9) in Proposition 3.2 holds, then the equilibrium point of neural network (8) is globally asymptotically stable, independent of the magnitude of the time delays.

In the following, Theorem 3.2 will be modified for the case of constant delays.

Corollary 3.2: Suppose that , i.e., . If there exist positive diagonal matrices and , such that the following inequality holds:

(11)

then the equilibrium point of neural network (8) is globally asymptotically stable, independent of time delays.

Remark 3.2: Schur's complement [3] can be stated as follows. The LMI

$$\begin{bmatrix} Q(x) & S(x) \\ S(x)^{T} & R(x) \end{bmatrix} \ge 0$$

where $Q(x) = Q(x)^{T}$, $R(x) = R(x)^{T}$, and $S(x)$ depends affinely on $x$, is equivalent to

$$R(x) \ge 0, \qquad Q(x) - S(x) R(x)^{\dagger} S(x)^{T} \ge 0, \qquad S(x)\bigl(I - R(x) R(x)^{\dagger}\bigr) = 0$$

where $R(x)^{\dagger}$ denotes the Moore–Penrose inverse of $R(x)$. By Schur's complement, the conditions in Theorem 3.2 and Corollary 3.2 can easily be expressed in the form of LMIs. For example, condition (9) in Theorem 3.2 and condition (11) in Corollary 3.2 can each be converted into an equivalent block LMI in which the quadratic terms appear as off-diagonal blocks. Therefore, the results obtained in this paper can easily be checked.
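Conditions of this block-LMI type can be checked numerically with a semidefinite programming solver. The following sketch shows the general workflow in CVXPY; the matrices and the particular LMI (the classical delay-independent condition for the linear delay system $\dot{x}(t) = -Ax(t) + W_{1}x(t-\tau)$) are illustrative stand-ins, not conditions (3), (9), or (11) from the paper.

```python
# A minimal feasibility check of a block LMI with CVXPY; illustrative only.
# Requires the cvxpy and scs packages. A, W1, and the LMI are made up.
import cvxpy as cp
import numpy as np

n = 2
A = np.diag([3.0, 2.5])              # positive diagonal self-feedback matrix
W1 = np.array([[0.5, -0.4],
               [0.3,  0.6]])         # delayed connection weight matrix

P = cp.Variable((n, n), symmetric=True)
Q = cp.Variable((n, n), symmetric=True)

# Classical delay-independent LMI for x'(t) = -A x(t) + W1 x(t - tau):
#   [ -(P A + A^T P) + Q    P W1 ]
#   [       W1^T P           -Q  ]  < 0
Omega = cp.bmat([[-(P @ A + A.T @ P) + Q, P @ W1],
                 [W1.T @ P, -Q]])
Omega_sym = 0.5 * (Omega + Omega.T)  # symmetrize explicitly for the constraint

eps = 1e-6
constraints = [P >> eps * np.eye(n),
               Q >> eps * np.eye(n),
               Omega_sym << -eps * np.eye(2 * n)]

prob = cp.Problem(cp.Minimize(0), constraints)
prob.solve(solver=cp.SCS)
print("LMI feasible:", prob.status == cp.OPTIMAL)   # True for this data
```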

So far, many delayed neural network models have been studied in the literature. In [1], [25], and [30], the following neural network with a single time delay was studied:

$$\dot{x}(t) = -A x(t) + W_{0}\, g(x(t)) + W_{1}\, g(x(t - \tau)) + U. \tag{12}$$

A new delayed neural network model, called the delayed cellular neural network model, has been introduced in [5], [17], [18], [20], [26], and [27], which is given by the following state equation:

$$\dot{x}_{i}(t) = -a_{i} x_{i}(t) + \sum_{j=1}^{n} w_{ij}^{0}\, g_{j}(x_{j}(t)) + \sum_{j=1}^{n} w_{ij}^{1}\, g_{j}(x_{j}(t - \tau)) + U_{i}. \tag{13}$$

The version of (13) with multiple time delays, which was studied in [16], [19], [28], and [29], is of the following form:

$$\dot{x}_{i}(t) = -a_{i} x_{i}(t) + \sum_{j=1}^{n} w_{ij}^{0}\, g_{j}(x_{j}(t)) + \sum_{k=1}^{r} \sum_{j=1}^{n} w_{ij}^{k}\, g_{j}(x_{j}(t - \tau_{k})) + U_{i} \tag{14}$$

where $\tau_{k}$, $k = 1, \dots, r$, are scalars. On the other hand, in [8], different time delays were introduced into system (12) to obtain the following neural network model [8], [11]:

(15)

Another kind of neural system of retarded functional differential equations, considered in [14], [21], and [22], which generalizes both the Hopfield neural network model and the cellular neural network model, is of the following form:

(16)

The neural network models (12)–(16) have attracted attention due to their promising potential for the tasks of classification, associative memories, parallel computation, and optimization. Such engineering applications rely crucially on the analysis of the dynamical behavior of the designed neural networks. Therefore, the analysis of the network equilibrium point and of the stability properties of the equilibrium points is of prime importance. Generally, the neural network models (12)–(16) can be divided into two different groups, represented by model (14) and model (16). That is to say, models (12) and (13) are special cases of model (14), and model (15) is a special case of model (16). To the best of our


knowledge, the analysis of models (14) and (16) has been dealt with independently, especially in the case of the LMI approach, and there does not appear to be an integrated framework covering both model (14) and model (16) in the literature.

Next, we will show that the model studied in this paper, i.e., model (7) or (8), can unify model (14) and model (16). Letting $\tau_{kj}(t) = \tau_{k}$ in (7), we obtain model (14). Next, we will show that model (16) can be changed to the form of (8). If we let denote a square matrix whose th row is composed of the th row of the square matrix and whose other rows are all zeros, and let , then model (16) can be written in matrix-vector form as

(17)

where . Obviously, model (17) is a special case of model (8). Therefore, the results obtained for model (7) or (8) in this paper can also be applied to models (12)–(16).
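The row-decomposition idea can be sketched in a few lines of code (the matrix is made up; the pairing of each row matrix with the delays of its row is as described above):

```python
# Sketch of the row decomposition used to rewrite model (16) in the form (17):
# W_(i) keeps only the i-th row of W1, so W1 = sum_i W_(i), and each W_(i) is
# paired with the delays of row i. The matrix below is illustrative.
import numpy as np

W1 = np.array([[0.5, -0.4],
               [0.3,  0.6]])
n = W1.shape[0]

W_rows = []
for i in range(n):
    Wi = np.zeros_like(W1)
    Wi[i, :] = W1[i, :]              # i-th row of W1, all other rows zero
    W_rows.append(Wi)

assert np.allclose(sum(W_rows), W1)  # W1 = W_(1) + ... + W_(n)
```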

Remark 3.3: For the neural network model (16), to the best of our knowledge, no stability criterion in the form of an LMI has been reported in the literature. We establish in this paper a stability criterion for model (7) or (8) in the form of an LMI. Because model (16) is a special case of the neural network model (7) or (8) studied in our paper, the present results for model (7) or (8) represent a significant step in the stability analysis of recurrent neural networks with delays, and they can also be applied to model (16). Besides, the results in this paper cannot be obtained using existing results in the literature. In the framework of Theorem 3.2, we will present some stability results for models (12)–(14) [which are special cases of model (19), to be introduced next] and give some comparisons between our results and some existing results. These comparisons will show that many existing stability results in the form of LMIs are special cases of the results established in this paper.

Remark 3.4: Note that for models (12)–(14), the diagonal matrices in Corollary 3.2 can be relaxed to positive-definite symmetric matrices . Without loss of generality, for instance, let us consider model (14). By the linear transformation $y(t) = x(t) - x^{*}$, model (14) is converted into the following form:

where , and the other notation is the same as that in (10). Now, we consider the following Lyapunov–Krasovskii functional:

where , and the other notation is the same as that in the proof of Theorem 3.2 in Appendix IV [see (75)]. In a similar manner to the proof of Theorem 3.2, we can obtain condition (9), except that , are positive-definite symmetric matrices.
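To see where bounds on the delay derivatives enter such arguments, the following generic computation (a standard one, not the paper's exact functional) differentiates a prototype Lyapunov–Krasovskii functional along the trajectories of a delayed system:

$$V(y_{t}) = y^{T}(t) P y(t) + \sum_{k=1}^{r} \int_{t - \tau_{k}(t)}^{t} y^{T}(s)\, Q_{k}\, y(s)\, ds$$

with $P > 0$ symmetric and $Q_{k} > 0$; by the Leibniz rule,

$$\dot{V} = 2 y^{T}(t) P \dot{y}(t) + \sum_{k=1}^{r} \Bigl[ y^{T}(t) Q_{k}\, y(t) - \bigl(1 - \dot{\tau}_{k}(t)\bigr)\, y^{T}\bigl(t - \tau_{k}(t)\bigr) Q_{k}\, y\bigl(t - \tau_{k}(t)\bigr) \Bigr]$$

so a bound $\dot{\tau}_{k}(t) \le \eta_{k} < 1$ allows each delayed term to be bounded above by $-(1 - \eta_{k})\, y^{T}(t - \tau_{k}(t)) Q_{k}\, y(t - \tau_{k}(t))$, which is why such bounds appear among the hypotheses of the theorems in this section.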

B. Recurrent Neural Networks With Multiple Time-Varying Delays

In the following, we will consider the time-varying version of model (14), i.e., model (14) with time-varying delays:

$$\dot{x}(t) = -A x(t) + W_{0}\, g(x(t)) + \sum_{k=1}^{r} W_{k}\, g\bigl(x(t - \tau_{k}(t))\bigr) + U \tag{18}$$

where and the other notation is defined in the same way as in (8). By the linear transformation $y(t) = x(t) - x^{*}$, model (18) is changed to the following form:

$$\dot{y}(t) = -A y(t) + W_{0}\, f(y(t)) + \sum_{k=1}^{r} W_{k}\, f\bigl(y(t - \tau_{k}(t))\bigr). \tag{19}$$

Now, we establish another sufficient condition to guarantee the global asymptotic stability of the origin of model (19).

Theorem 3.3 (See Appendix V for a Proof): Suppose that . Then, the origin of system (19) is globally asymptotically stable if there exist positive-definite symmetric matrices , and a positive diagonal matrix such that the following condition holds:

(20)

Remark 3.5: When , and , i.e., , model (18) has been studied in [2]. Theorem 3 in [2] requires the following: 1) is positive definite and 2) if , or if . In fact, from 2) we have 2') if or if . Adding 1) and 2') together, we get

if (21)

or

if (22)

In this case, (20) becomes

(23)

If we let , or , (23) can be transformed into (21) and (22), respectively, which shows that [2,


Th. 3] is a special case of the present Theorem 3.3. Similarly, Theorem 1 in [2] and the main result in [20] are also special cases of Theorem 3.3 of this paper.

Remark 3.6: When , the delay-dependent stability criteria for (18) are established in [18]. Now, as far as the global asymptotic stability problem is concerned, we will summarize the relationships between the LMI-based delay-dependent stability criteria in [18] and Theorem 3.3 of this paper. For brevity of comparison, we consider the case of constant time delay for model (18) with . In this case, Theorem 3.3 is expressed as follows:

(24)

By Lemma 2.1 and for any vector with appropriate dimension, from (24), we have

(25)

for any and . Letting , from (25), we have

(26)

where

(27)

is in the same form as the stability condition in [18, Th. 1]. (Note that in [18, Th. 1] is positive definite.) Obviously, in the case of being a diagonal matrix, is a sufficient condition to ensure , where we have used the fact that is equivalent to for a nonsingular matrix . That is to say, Theorem 1 in [18] is a special case of Theorem 3.3 of this paper in the case of being a diagonal matrix. Similarly, Theorems 2 and 3 in [18] are also special cases of Theorem 3.3 of this paper in the case of being a diagonal matrix.

Corollary 3.3: The origin of system (19) is globally asymptotically stable if there exist positive-definite symmetric matrices and a positive diagonal matrix such that the following condition holds:

where .

Proof: Letting in Theorem 3.3, and proceeding in a similar manner to the proof of Theorem 3.3, we can easily obtain the corollary. The details are omitted.

Remark 3.7: When , the global asymptotic stability of model (18) was studied in [17]. Theorems 2 and 4 in [17] are special cases of Corollary 3.3. For brevity of comparison, we consider the case of constant time delay for model (18) with . Then, Theorem 2 in [17] is restated as follows:

(28)

By Lemma 2.1, and for any vector with appropriate dimension, from (28), we have

(29)

where . If we let and , where is a scalar, from (29), we have

where

(30)

is in the same form as the condition in [17, Th. 1]. (Note that the matrix in [17, Th. 1] is positive definite.) Obviously, is only a sufficient condition to ensure , where we have used the fact that is equivalent to for a nonsingular matrix . In the case of being a diagonal matrix, Theorem 1 in [17] is a special case of Theorem 2 in [17]. Similarly, Theorem 3 in [17] is a special case of Theorem 4 in [17]. Therefore, all the results in [17] are special cases of the present Corollary 3.3 in the case of being a diagonal matrix.

Remark 3.8: The global asymptotic stability problem for neural network model (18) with multiple delays is studied in [19]. In a manner similar to the analysis in Remark 3.7, we can easily show that all the results in [19] are, in a corresponding sense, special cases of Theorem 3.3 of this paper.

If we choose another Lyapunov–Krasovskii functional, then we have the following sufficient conditions to guarantee the global asymptotic stability of the origin of model (19).

Theorem 3.4: Suppose that . Then, the origin of system (19) is globally asymptotically stable if there exist positive-definite symmetric matrices , and , and a positive diagonal matrix such that (31), shown below, holds, where and .


Proof: Consider the following Lyapunov–Krasovskii functional:

(32)

The derivative of (32) along the trajectories of (19) is as follows:

where , and we have used . Then, for , where is defined in (31). By Lyapunov stability theory, the origin of system (19) is globally asymptotically stable.

Corollary 3.4: Suppose that . Then, the origin of system (19) is globally asymptotically stable if there exist positive-definite symmetric matrices and , a positive diagonal matrix , and positive-semidefinite diagonal matrices and , such that (33), shown below, holds, where and .

Proof: Consider the Lyapunov–Krasovskii functional defined in (32). The derivative of (32) along the trajectories of (19) is as follows:

(34)

(31)

(33)


where we have used

and , and is defined in the same way as in Theorem 3.4. Then, for , where is defined in (33). By Lyapunov stability theory, the origin of system (19) is globally asymptotically stable.

Remark 3.9: From the proof of Corollary 3.4, we can derive another expression for . From (34), we have

(35)

where and are defined in (31) and (33), respectively. Now, we have two cases to discuss for (35).

Case 1) . In this case, (35) can be written as follows:

where

where we have used

(a special case of Lemma 2.1) to show that

and , are identity matrices with compatible dimensions, and . Therefore, there exists an such that . Obviously, if , the right-hand side of may not be negative definite. Therefore, in this case, Theorem 3.4 is less conservative than Corollary 3.4.

Case 2) . In this case, (35) can be written as follows:

where we have used the inequality

and

where , and are identity matrices with compatible dimensions, and . Then, . Obviously, if , the right-hand side of the inequality may not be negative definite. Therefore, in this case, Theorem 3.4 is less conservative than Corollary 3.4.

Remark 3.10: When and , model (18) has been studied in [9], and a global asymptotic stability criterion was obtained by constructing a new Lyapunov–Krasovskii functional. The characteristic of this Lyapunov–Krasovskii functional is that an integral term of the state is involved in the Lyapunov–Krasovskii functional (see [9, eq. (10)]). In [9], it was pointed out that the new Lyapunov–Krasovskii functional, together with the S-procedure, provided an improved global asymptotic stability criterion. However, the result in [9] can be obtained from the present Corollary 3.4 with and , while Corollary 3.4 is derived using only some inequality calculations. Therefore, the S-procedure may not be necessary in the proof of the main result in [9].

Corollary 3.5: Suppose that . Then, the origin of system (19) is globally asymptotically stable if there exist positive-definite symmetric matrices and , and a positive diagonal matrix such that the following condition holds:

(36)

where and .

Proof: Consider the following Lyapunov–Krasovskii functional:

(37)

The derivative of (37) along the trajectories of (19) is as follows:

where we have used

Let ; then for . By Lyapunov stability theory, the origin of system (19) is globally asymptotically stable.

Remark 3.11: When , and , model (18) has been studied in [27]. Theorem 1 in [27] can be restated as requiring that

(38)

which is just Corollary 3.5 with , and . However, as shown in [24], (38) is essentially equivalent to

which is the main result in [24]. Therefore, the result in [27] is a special case of Theorem 3.3. Similarly, Theorems 1 and 2 in [4] are also special cases of the present Corollary 3.5 and Theorem 3.3. Moreover, if we let , and in Theorem 3.4, or let , and in Corollary 3.4, where is a sufficiently small scalar, then, using the same approach as in [24], Theorem 1 in [27] can also be derived from Theorem 3.4 and Corollary 3.4.

Remark 3.12: Similar to the proof in [24], we can directly see that Corollary 3.5 is equivalent to Theorem 3.3, and similarly that Theorem 3.4 is equivalent to Corollary 3.5. Therefore, the present Theorem 3.3, Theorem 3.4, and Corollary 3.5 are equivalent in essence and are less conservative than the results of [4], [24], and [27].

Remark 3.13: Now, we make a comparison among Theorems 3.1–3.4. Obviously, from the viewpoint of different multiple time delays, model (18) is a special case of model (7) or (8). From the viewpoint of different activation functions, model (18) and model (7) or (8) are special cases of model (1) or (2). Therefore, Theorem 3.1 is more general than Theorem 3.2, and both Theorems 3.1 and 3.2 are more general than Theorems 3.3 and 3.4 and have wider application domains. To the best of our knowledge, no stability results in the form of LMIs have been reported in the literature for neural network model (1) or (2), or even for model (7) or (8), while this paper establishes such LMI-based stability results for both model (2) and model (8). It is important to establish LMI-based stability results for model (2) and model (8) as well as for model (18). Although model (16), as a special case of model (7), has been studied in the literature, for instance, in [11], [14], and [21], none of the results in [11], [14], and [21] considered the neuron's excitatory and inhibitory effects on neural networks, which led to stability results that are more conservative.

Remark 3.14: When , both condition (3) in Proposition 3.1 and condition (9) in Proposition 3.2 should be modified as follows. We can let in Propositions 3.1 and 3.2 be 1 and derive conditions similar to (3) and (9), respectively. Similarly, for the case of , we can also obtain the corresponding results if we replace by 1 in Theorems 3.3 and 3.4 and Corollaries 3.3–3.5, respectively.

Remark 3.15: If the activation functions and in Assumption 2.1 are bounded, then both neural networks (1) and (7) have an equilibrium point [5], [18], [30]. In this case, the condition in Proposition 3.1, Theorem 3.1, Proposition 3.2, and Theorem 3.2 can be relaxed to . Similarly, the condition in Theorems 3.3 and 3.4 and Corollaries 3.3–3.5 can be relaxed to , respectively.

C. Recurrent Neural Networks With Constant Delays

Now, consider system (19) with constant delays, which requires the use of a different Lyapunov–Krasovskii functional.

Theorem 3.5 (See Appendix VI for a Proof): The origin of system (19) with constant delays is globally asymptotically stable if

(39)

For neural networks with constant delays described by

(40)

we have the following stability result.

Corollary 3.6: The origin of system (40) is globally asymptotically stable if

(41)

Proof: Choose the following Lyapunov–Krasovskii functional:

which is the same as that in the proof of Theorem 3.5 [see (93) in Appendix VI]. Choose . By , and substituting (96) in Appendix VI with the following inequalities:

we can prove Corollary 3.6 in a manner similar to the proof of Theorem 3.5. The details are omitted.

Remark 3.16: When in (40), the asymptotic stability of the equilibrium point was studied in [1]. Theorem 2 in [1] requires

(42)

while in this case our result, Corollary 3.6 with , requires

(43)

Obviously, (42) and (43) are not equivalent. The reason for this is that Theorem 2 in [1] can only be applied when or when is a diagonal matrix, in which case the previous conditions (42) and (43) are equivalent [see the equation before (8) in [1]].

IV. APPLICATION EXAMPLE

Consider the continuous neutralization of an acid stream by a highly concentrated basic stream, which can be expressed in the following form [23]:

(44)

where is the volume of the mixing tank, is the manipulated variable representing the base flow rate, is the acid flow rate, and are some constants, is the strong acid equivalent, and is the measured output signal.

In fact, time delay is always inevitable in the control process. Therefore, we slightly modify model (44) as follows:

(45)

Adopting the output feedback controller , our purpose is to design an appropriate feedback gain such that the closed-loop system is asymptotically stable. The closed-loop system can be expressed in the following form:

(46)

Let . Then, system (46) is changed to the following form:

(47)

where , and . Obviously, the origin is the equilibrium point of system (47), and is to be designed. The closed-loop system (47) has the same structure as system (18). Applying Theorem 3.3 to system (47), we require

(48)


Fig. 1. Response curve of the closed-loop system (46).

Pre- and postmultiplying on both sides of (48), we have

(49)

where . Let and . Then, (49) becomes

(50)

If (50) has solutions , and , then the feedback gain is .

We take , and . Then, , and . Solving inequality (50), we have , and . For the closed-loop system (47) with the previous parameters, Theorems 1 and 3 in [6] are not satisfied.

The response curve of the closed-loop system (46) is depicted in Fig. 1.

Now, we consider model (44), which can be derived directly from model (45) if we let . In this case, solving inequality (50), we have , and the feedback gain is .

The response curve of the closed-loop system (46) with is depicted in Fig. 2.
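Response curves such as those in Figs. 1 and 2 can be reproduced qualitatively with a simple fixed-step simulation. The sketch below (illustrative coefficients only, since the example's numerical values are not given here) integrates a scalar closed-loop delay system of the same structure, using a history buffer for the delayed state:

```python
# Illustrative only: forward-Euler simulation of a scalar delay system
#   x'(t) = -a x(t) + w0 g(x(t)) + w1 g(x(t - d)),  g = tanh,
# of the same structure as closed-loop system (47); coefficients are made up.
import numpy as np

a, w0, w1, d = 2.0, 0.5, -0.8, 1.0    # self-feedback, weights, delay (made up)
dt, T = 0.001, 10.0
steps = int(T / dt)
lag = int(d / dt)

x = np.empty(steps + 1)
x[0] = 0.5                            # constant initial history on [-d, 0]
g = np.tanh

for k in range(steps):
    x_delayed = x[k - lag] if k >= lag else x[0]
    x[k + 1] = x[k] + dt * (-a * x[k] + w0 * g(x[k]) + w1 * g(x_delayed))

print("final state:", x[-1])          # decays toward the origin (a > |w0|+|w1|)
```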

V. CONCLUSION

Several sufficient conditions are established for the global asymptotic stability of recurrent neural networks with different multiple time-varying delays. The Lyapunov–Krasovskii stability theory for functional differential equations and the LMI approach are employed in this study. Global asymptotic stability results are also established for recurrent neural networks with multiple time-varying delays. These results are shown to be generalizations of some previously published results and are less conservative than existing results. Finally, global asymptotic stability results are established for recurrent neural networks with constant time delays.

APPENDIX I
PROOF OF PROPOSITION 3.1

The primary goal of Proposition 3.1 is to find conditions ensuring that recurrent neural network (2) has a unique equilibrium point. An equilibrium point $x^{*}$ is a constant solution of (2), i.e., it satisfies the algebraic equation

$$-A x^{*} + W_{0}\, g^{0}(x^{*}) + \sum_{k=1}^{r} W_{k}\, g^{k}(x^{*}) + U = 0. \tag{51}$$

Similar to the proofs in [7] and [30], we define the following map associated with (51):

$$H(x) = -A x + W_{0}\, g^{0}(x) + \sum_{k=1}^{r} W_{k}\, g^{k}(x) + U. \tag{52}$$

Now, we prove that $H(x)$ is a homeomorphism of $\mathbb{R}^{n}$ in two steps.

First, we will show that $H(x)$ is injective on $\mathbb{R}^{n}$. Suppose, for the purpose of contradiction, that there exist two vectors $x, y \in \mathbb{R}^{n}$ with $x \ne y$ such that $H(x) = H(y)$. Then

(53)


Because , then and . Premultiplying , and on both sides of (53) yields

(54)

By Assumption 2.1, the following inequalities hold:

(55)

(56)

Noting that , then

(57)

for any positive diagonal matrix . Substituting (55)–(57) into (54), we have

(58)

where

and is the same as in (3). From (3), we have for any . Obviously, this contradicts (58). Therefore, $x = y$ and $H(x)$ is injective.

Second, we will show that $\|H(x)\|$ approaches infinity as $\|x\|$ approaches infinity. If and are bounded as approaches infinity, it is easy to verify that when . For the case that and are unbounded, we will show that as . Let

(59)

where and . Obviously, is equivalent to .

Multiplying on both sides of (59), and by Assumption 2.1, we have

(60)

where and is the same as in (3).

By (3), there exists a sufficiently small constant such that . Note that , where . Then, from (60), we have

(61)

From (61), we have


Fig. 2. Response curve of the closed-loop system (46). Solid line: the present result; dashed line: the result of [23].

where

Therefore, . Clearly, as , which is equivalent to $\|H(x)\| \to \infty$ as $\|x\| \to \infty$.

By Lemma 2.2, $H(x)$ is a homeomorphism of $\mathbb{R}^{n}$. Therefore, for every external constant input $U$, system (2) has a unique equilibrium point $x^{*}$. This completes the proof of Proposition 3.1.
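The homeomorphism argument can be illustrated numerically: when the uniqueness condition holds, root finding applied to a map of the form of (52) from scattered initial points returns the same equilibrium. The sketch below uses made-up, diagonally dominant data (this is an illustration, not a proof):

```python
# Illustrative only: for made-up, diagonally dominant data, the map
# H(x) = -A x + W tanh(x) + U has a unique zero, so root finding from
# scattered starting points converges to the same equilibrium.
import numpy as np
from scipy.optimize import fsolve

A = np.diag([3.0, 2.5])
W = np.array([[0.5, -0.4],
              [0.3,  0.6]])          # combined weight matrix (made up)
U = np.array([0.2, -0.1])
H = lambda x: -A @ x + W @ np.tanh(x) + U

starts = np.random.default_rng(0).normal(scale=5.0, size=(5, 2))
roots = np.array([fsolve(H, x0) for x0 in starts])
print(np.ptp(roots, axis=0))         # near-zero spread: one equilibrium
```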

APPENDIX II
PROOF OF THEOREM 3.1

Consider the following Lyapunov–Krasovskii functional for model (4):

(62)

where and . The time derivative of the functional (62) along the trajectories of model (4) is obtained as follows:

(63)

By Assumption 2.1, we have

(64)

(65)

where and are positive diagonal matrices. Substituting (64) and (65) into (63) yields

(66)

for , where and is the same as in (3).

Thus, by Lyapunov stability theory, it follows that the origin of system (4), or the equilibrium point of system (2), is globally asymptotically stable, independent of the magnitude of the time-varying delays. This completes the proof of Theorem 3.1.


APPENDIX III
PROOF OF PROPOSITION 3.2

The primary goal of Proposition 3.2 is to find conditions ensuring that recurrent neural network (8) has a unique equilibrium point. An equilibrium point $x^{*}$ is a constant solution of (8), i.e., it satisfies the algebraic equation

$$-A x^{*} + \Bigl(W_{0} + \sum_{k=1}^{r} W_{k}\Bigr) g(x^{*}) + U = 0. \tag{67}$$

Similar to the proofs in [7] and [30], we define the following map associated with (67):

$$H(x) = -A x + \Bigl(W_{0} + \sum_{k=1}^{r} W_{k}\Bigr) g(x) + U. \tag{68}$$

Now, we prove that $H(x)$ is a homeomorphism of $\mathbb{R}^{n}$ in two steps.

First, we will show that $H(x)$ is injective on $\mathbb{R}^{n}$. Suppose, for the purpose of contradiction, that there exist two vectors $x, y \in \mathbb{R}^{n}$ with $x \ne y$ such that $H(x) = H(y)$. Then

(69)

Because , then . Multiplying on both sides of (69), and by Assumption 2.1 and Lemma 2.1, this yields

(70)

From (9), we have

(71)

for . Obviously, (71) contradicts (70). Therefore, $x = y$ and $H(x)$ is injective.

Second, we will show that $\|H(x)\|$ approaches infinity as $\|x\|$ approaches infinity. If is bounded, it is easy to verify that when . For the case that is unbounded, we will show that as . Let

(72)

where . Obviously, is equivalent to .

Multiplying on both sides of (72), and by Assumption 2.1 and Lemma 2.1, we have

(73)

By (9), there exists a sufficiently small constant such that

(74)

for . Combining (73) with (74), we have , or , or . Clearly, as , which is equivalent to $\|H(x)\| \to \infty$ as $\|x\| \to \infty$.

By Lemma 2.2, $H(x)$ is a homeomorphism of $\mathbb{R}^{n}$. Therefore, for every external constant input $U$, system (8) has a unique equilibrium point $x^{*}$. This completes the proof of Proposition 3.2.

APPENDIX IV
PROOF OF THEOREM 3.2

Consider the following Lyapunov–Krasovskii functional for model (10):

(75)


where and will be defined later, . The time derivative of the functional (75) along the trajectories of model (10) is obtained as follows:

(76)

By Assumption 2.1, we have

(77)

By Lemma 2.1, the following inequality holds:

(78)

and the following inequalities hold for :

(79)

(80)

Substituting (77)–(80) into (76) yields

(81)

If we choose , and as follows:

(82)

where

then, from (9) and (81), we can directly obtain and for any .

Note that implies that . Now, let and . In this case, is in the following form (according to Lemma 2.1):

lowing form (according to Lemma 2.1):


Consider (82) again; then for .

Now, we consider the case where . In this case, takes the form

It is easily seen that for . Hence, we have proved that if and only if . Otherwise, . On the other hand, is radially unbounded. Thus, by Lyapunov stability theory, it follows that the origin of system (10), or the equilibrium point of system (8), is globally asymptotically stable, independent of the magnitude of the time-varying delays. This completes the proof of Theorem 3.2.

APPENDIX V
PROOF OF THEOREM 3.3

Choose a Lyapunov–Krasovskii functional as follows:

(83)

where , and are to be defined later. The derivative of (83) along the trajectories of (19) is as follows:

(84)

Because , it is clear that . Hence, from (84), we have

(85)

Because

then, by Lemma 2.1 and Assumption 2.1, the following inequalities hold for :

(86)

(87)

(88)

(89)

Let

(90)

Substituting (86)–(89) into (85) yields

(91)


If we choose

then, from (91), we have for . Note that implies that . Now, let and . In this case, is in the following form:

(92)

Using (88) and (90), from (92), we obtain

Now, we consider the case where . In this case, takes the form

It is easily seen that for . Hence, we have proved that if and only if . Otherwise, . By Lyapunov stability theory, it follows that the origin of system (19) is globally asymptotically stable. This completes the proof of Theorem 3.3.

system (19) is globally asymptotically stable. This completesthe proof of Theorem 3.3.

APPENDIX VI
PROOF OF THEOREM 3.5

Choose the following Lyapunov–Krasovskii functional:

(93)

where . The derivative of (93) along the trajectories of model (19) with constant delays is as follows:

(94)

Because , by Lemma 2.1, we have for

(95)

(96)

(97)

By Assumption 2.1, we have

(98)

(99)

Substituting (95)–(99) into (94) yields

Hence, if (39) holds, for all . Because implies , then when and


satisfies

Therefore

for . In addition, is radially unbounded. Then, by Lyapunov stability theory, the origin of (19) is globally asymptotically stable if condition (39) holds. This completes the proof of Theorem 3.5.

REFERENCES

[1] S. Arik, "Stability analysis of delayed neural networks," IEEE Trans. Circuits Syst. I, Fundam. Theory Appl., vol. 47, no. 7, pp. 1089–1092, Jul. 2000.

[2] S. Arik, "An analysis of global asymptotic stability of delayed cellular neural networks," IEEE Trans. Neural Netw., vol. 13, no. 3, pp. 1239–1242, May 2002.

[3] S. Boyd, L. El Ghaoui, E. Feron, and V. Balakrishnan, Linear Matrix Inequalities in System and Control Theory, Studies in Applied Mathematics, vol. 15. Philadelphia, PA: SIAM, 1994.

[4] J. Cao and D. W. C. Ho, "A general framework for global asymptotic stability analysis of delayed neural networks based on LMI approach," Chaos Solitons Fractals, vol. 24, pp. 1317–1329, 2005.

[5] J. Cao, D. S. Huang, and Z. Qu, "Global robust stability of delayed recurrent neural networks," Chaos Solitons Fractals, vol. 23, pp. 221–229, 2005.

[6] J. Cao and J. Wang, "Global asymptotic stability of a general class of recurrent neural networks with time-varying delays," IEEE Trans. Circuits Syst. I, Fundam. Theory Appl., vol. 50, no. 1, pp. 34–44, Jan. 2003.

[7] M. Forti and A. Tesi, "New conditions for global stability of neural networks with applications to linear and quadratic programming problems," IEEE Trans. Circuits Syst. I, Fundam. Theory Appl., vol. 42, pp. 354–366, Jul. 1995.

[8] K. Gopalsamy and X. Z. He, "Stability in asymmetric Hopfield nets with transmission delays," Physica D: Nonlinear Phenomena, vol. 76, no. 4, pp. 344–358, Sep. 1994.

[9] Y. He, M. Wu, and J. H. She, "An improved global asymptotic stability criterion for delayed cellular neural networks," IEEE Trans. Neural Netw., vol. 17, no. 1, pp. 250–252, Jan. 2006.

[10] J. J. Hopfield, "Neurons with graded response have collective computational properties like those of two state neurons," Proc. Nat. Acad. Sci. USA, vol. 81, no. 10, pp. 3088–3092, May 1984.

[11] C. Hou and J. Qian, "Stability analysis for neural dynamics with time-varying delays," IEEE Trans. Neural Netw., vol. 9, no. 1, pp. 221–223, Jan. 1998.

[12] S. Hu and J. Wang, "Global stability of a class of continuous-time recurrent neural networks," IEEE Trans. Circuits Syst. I, Fundam. Theory Appl., vol. 49, no. 9, pp. 1334–1347, Sep. 2002.

[13] S. Hu and J. Wang, "Absolute exponential stability of a class of continuous-time recurrent neural networks," IEEE Trans. Neural Netw., vol. 14, no. 1, pp. 35–45, Jan. 2003.

[14] H. Huang, D. W. C. Ho, and J. Cao, "Analysis of global exponential stability and periodic solutions of neural networks with time-varying delays," Neural Netw., vol. 18, no. 2, pp. 161–170, Mar. 2005.

[15] M. Joy, "On the global convergence of a class of functional differential equations with applications in neural network theory," J. Math. Anal. Appl., vol. 232, no. 1, pp. 61–81, 1999.

[16] C. Li, X. Liao, and R. Zhang, "Global robust asymptotical stability of multi-delayed interval neural networks: An LMI approach," Phys. Lett. A, vol. 328, no. 6, pp. 452–462, Aug. 2004.

[17] X. Liao, G. Chen, and E. N. Sanchez, "LMI-based approach for asymptotically stability analysis of delayed neural networks," IEEE Trans. Circuits Syst. I, Fundam. Theory Appl., vol. 49, no. 7, pp. 1033–1039, Jul. 2002.

[18] X. Liao, G. Chen, and E. N. Sanchez, "Delay-dependent exponential stability analysis of delayed neural networks: An LMI approach," Neural Netw., vol. 15, pp. 855–866, 2002.

[19] X. Liao and C. Li, "An LMI approach to asymptotical stability of multi-delayed neural networks," Physica D: Nonlinear Phenomena, vol. 200, no. 1–2, pp. 139–155, Jan. 2005.

[20] T. L. Liao and F. C. Wang, "Global stability for cellular neural networks with time delay," IEEE Trans. Neural Netw., vol. 11, no. 6, pp. 1481–1484, Nov. 2000.

[21] X. X. Liao and J. Wang, "Algebraic criteria for global exponential stability of cellular neural networks with multiple time delays," IEEE Trans. Circuits Syst. I, Fundam. Theory Appl., vol. 50, no. 2, pp. 268–275, Feb. 2003.

[22] X. X. Liao, J. Wang, and Z. Zeng, "Global asymptotic stability and global exponential stability of delayed cellular neural networks," IEEE Trans. Circuits Syst. II, Exp. Briefs, vol. 52, no. 7, pp. 403–409, Jul. 2005.

[23] M. Liu, "Delayed standard neural network model and its application," Acta Automatica Sinica, vol. 31, no. 5, pp. 750–758, 2005 (in Chinese).

[24] H. Lu, "Comments on 'A generalized LMI-based approach to the global asymptotic stability of delayed cellular neural networks'," IEEE Trans. Neural Netw., vol. 16, no. 3, pp. 778–779, May 2005.

[25] C. M. Marcus and R. M. Westervelt, "Stability of analog neural networks with delay," Phys. Rev. A, vol. 39, no. 1, pp. 347–359, Jan. 1989.

[26] T. Roska, C. W. Wu, M. Balsi, and L. O. Chua, "Stability and dynamics of delay-type general and cellular neural networks," IEEE Trans. Circuits Syst. I, Fundam. Theory Appl., vol. 39, no. 6, pp. 487–490, Jun. 1992.

[27] V. Singh, "A generalized LMI-based approach to the global asymptotic stability of delayed cellular neural networks," IEEE Trans. Neural Netw., vol. 15, no. 1, pp. 223–225, Jan. 2004.

[28] H. Zhang, C. Ji, and D. Liu, "Robust stability analysis of a class of Hopfield neural networks with multiple delays," in Proc. Int. Symp. Neural Netw., Chongqing, China, May 2005, pp. 209–214.

[29] H. Zhang, Z. Wang, and D. Liu, "Exponential stability analysis of neural networks with multiple time delays," in Proc. Int. Symp. Neural Netw., Chongqing, China, May 2005, pp. 142–148.

[30] J. Zhang and X. Jin, "Global stability analysis in delayed Hopfield neural networks," Neural Netw., vol. 13, pp. 745–753, 2000.

[31] Q. Zhang, X. Wei, and J. Xu, "On global exponential stability of delayed cellular neural networks with time-varying delays," Appl. Math. Comput., vol. 162, no. 2, pp. 679–686, Mar. 2005.


Huaguang Zhang (SM’04) received the B.S. and M.S. degrees in control engineering from Northeastern Electric Power University, Jilin, China, in 1982 and 1985, respectively, and the Ph.D. degree in thermal power engineering and automation from Southeast University, Nanjing, China, in 1991.

He joined the Automatic Control Department, Northeastern University, Shenyang, China, in 1992, as a Postdoctoral Fellow. Since 1994, he has been a Professor and Head of the Electric Automation Institute, Northeastern University. He has authored and coauthored over 200 journal and conference papers and four monographs, and has coinvented nine patents. His main research interests are neural-networks-based control, fuzzy control, chaos control, nonlinear control, signal processing, and their industrial applications.

Dr. Zhang is an Associate Editor of the IEEE TRANSACTIONS ON SYSTEMS, MAN, AND CYBERNETICS—PART B: CYBERNETICS. He was awarded the Excellent Youth Science Foundation Award by the National Natural Science Foundation of China in 2003. He was named a Changjiang Scholar by the China Education Ministry in 2005.

Zhanshan Wang was born in Liaoning, China, in 1971. He received the M.S. degree in control theory and control engineering from Fushun Petroleum Institute, Fushun, China, in 2001, and the Ph.D. degree in control theory and control engineering from Northeastern University, Shenyang, China, in 2006.

Currently, he is an Associate Professor at Northeastern University. His research interests include stability analysis of recurrent neural networks, fault diagnosis, fault-tolerant control, and nonlinear control.

Derong Liu (S’91–M’94–SM’96–F’05) received the Ph.D. degree in electrical engineering from the University of Notre Dame, Notre Dame, IN, in 1994.

From 1993 to 1995, he was a Staff Fellow with the General Motors Research and Development Center, Warren, MI. From 1995 to 1999, he was an Assistant Professor in the Department of Electrical and Computer Engineering, Stevens Institute of Technology, Hoboken, NJ. He joined the University of Illinois at Chicago in 1999, where he is now a Full Professor of Electrical and Computer Engineering and of Computer Science. Since 2005, he has been Director of Graduate Studies in the Department of Electrical and Computer Engineering, University of Illinois at Chicago. He is coauthor (with A. N. Michel) of Dynamical Systems with Saturation Nonlinearities: Analysis and Design (New York: Springer-Verlag, 1994), (with A. N. Michel) Qualitative Analysis and Synthesis of Recurrent Neural Networks (New York: Marcel Dekker, 2002), and (with H. G. Zhang) Fuzzy Modeling and Fuzzy Control (Boston, MA: Birkhauser, 2006). He is coeditor (with P. J. Antsaklis) of Stability and Control of Dynamical Systems with Applications (Boston, MA: Birkhauser, 2003), (with F. Y. Wang) Advances in Computational Intelligence: Theory and Applications (Singapore: World Scientific, 2006), and (with S. M. Fei, Z. G. Hou, H. G. Zhang, and C. Y. Sun) Advances in Neural Networks—ISNN 2007 (Berlin, Germany: Springer-Verlag, 2007).

Dr. Liu is an Associate Editor of Automatica. He was General Chair for the 2007 International Symposium on Neural Networks (Nanjing, China). He was a member of the Conference Editorial Board of the IEEE Control Systems Society (1995–2000) and an Associate Editor of the IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS—I: FUNDAMENTAL THEORY AND APPLICATIONS (1997–1999), the IEEE TRANSACTIONS ON SIGNAL PROCESSING (2001–2003), and the IEEE TRANSACTIONS ON NEURAL NETWORKS (2004–2006). Since 2004, he has been the Editor of the IEEE Computational Intelligence Society's ELECTRONIC LETTER. Since 2006, he has been the Letters Editor of the IEEE TRANSACTIONS ON NEURAL NETWORKS and an Associate Editor of the IEEE COMPUTATIONAL INTELLIGENCE MAGAZINE. He is General Chair for the 2008 IEEE International Conference on Networking, Sensing and Control (Sanya, China). He is Program Chair for the 2008 International Joint Conference on Neural Networks; the 2007 IEEE International Symposium on Approximate Dynamic Programming and Reinforcement Learning; the 21st IEEE International Symposium on Intelligent Control (2006); and the 2006 International Conference on Networking, Sensing and Control. He is an elected AdCom member of the IEEE Computational Intelligence Society (2006–2008), Chair of the Chicago Chapter of the IEEE Computational Intelligence Society, Chair of the Technical Committee on Neural Networks of the IEEE Computational Intelligence Society, and Past Chair of the Technical Committee on Neural Systems and Applications of the IEEE Circuits and Systems Society. He received the Michael J. Birck Fellowship from the University of Notre Dame (1990), the Harvey N. Davis Distinguished Teaching Award from Stevens Institute of Technology (1997), the Faculty Early Career Development (CAREER) Award from the National Science Foundation (1999), and the University Scholar Award from the University of Illinois (2006–2009). He is a member of Eta Kappa Nu.