LETTER Communicated by Tianping Chen

Multiple Almost Periodic Solutions in Nonautonomous Delayed Neural Networks

Kuang-Hui Lin [email protected]
Chih-Wen Shih [email protected]
Department of Applied Mathematics, National Chiao Tung University, Hsinchu, Taiwan, R.O.C.

A general methodology that involves the geometric configuration of the network structure for studying multistability and multiperiodicity is developed. We consider a general class of nonautonomous neural networks with delays and various activation functions. A geometrical formulation that leads to a decomposition of the phase space into invariant regions is employed. We further derive criteria under which the $n$-neuron network admits $2^n$ exponentially stable sets. In addition, we establish the existence of $2^n$ exponentially stable almost periodic solutions for the system, when the connection strengths, time lags, and external bias are almost periodic functions of time, through applying the contraction mapping principle. Finally, three numerical simulations are presented to illustrate our theory.

1 Introduction

Rhythms are ubiquitous in nature. Experimental and theoretical studies suggest that a mammalian brain may be exploiting dynamic attractors for its storage of associative memories (Kopell, 2000; Skarda & Freeman, 1987; Izhikevich, 1999; Yau, Freeman, Burke, & Yang, 1991). Therefore, investigation of dynamic attractors such as limit cycles and strange attractors is essential in neural networks. Many important studies of neural networks, however, focus mainly on static attractors, that is, of equilibrium type. On the other hand, from a phenomenological point of view, cyclic behaviors or rhythmic activities may not be represented exactly by periodic functions. It is therefore interesting to investigate almost periodic functions, the natural generalization of periodic functions.

In this letter, we consider the following neural network model,

$$\frac{dx_i(t)}{dt} = -\mu_i(t)\, x_i(t) + \sum_{j=1}^{n} \alpha_{ij}(t)\, g_j(x_j(t)) + \sum_{j=1}^{n} \beta_{ij}(t)\, g_j(x_j(t - \tau_{ij}(t))) + I_i(t), \tag{1.1}$$

Neural Computation 19, 3392–3420 (2007) © 2007 Massachusetts Institute of Technology


Multiple Almost Periodic Solutions 3393

where $i = 1, 2, \ldots, n$; the integer $n$ corresponds to the number of units in the network; $x_i(t)$ corresponds to the state of the $i$th unit at time $t$; $\mu_i(t)$ represents the rate with which the $i$th unit resets its potential to the resting state in isolation when disconnected from the network and external input; $\alpha_{ij}(t)$ and $\beta_{ij}(t)$ denote the connection strength of the $j$th unit on the $i$th unit at time $t$ and time $t - \tau_{ij}(t)$, respectively; $g_j(x_j(t))$ denotes the output of the $j$th unit at time $t$; $\tau_{ij}(t)$ denotes the transmission delay of the $i$th unit along the axon of the $j$th unit at time $t$ and is a nonnegative function; and $I_i(t)$ corresponds to the external bias on the $i$th unit at time $t$.
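To make the model concrete, the following is a minimal numerical sketch, not from the letter: it integrates a hypothetical two-neuron instance of equation 1.1 by the forward Euler method, with every coefficient function below an illustrative assumption.

```python
import math

# Hypothetical 2-neuron instance of system 1.1 with g_j = tanh; all the
# coefficient functions below are illustrative assumptions.
n = 2
mu    = lambda i, t: 1.0                                  # decay rates mu_i(t)
alpha = lambda i, j, t: 2.0 if i == j else 0.1 * math.sin(t)
beta  = lambda i, j, t: 0.5 if i == j else 0.1 * math.cos(math.sqrt(2.0) * t)
tau   = lambda i, j, t: 1.0 + 0.5 * math.sin(t)           # delays tau_ij(t)
bias  = lambda i, t: 0.1 * math.sin(t)                    # external bias I_i(t)
g     = math.tanh

TAU_MAX = 2.0                                             # bound on all tau_ij(t)

def simulate(x0, t_end=20.0, h=0.01):
    """Forward-Euler integration of the delayed system; the history on
    [-TAU_MAX, 0] is the constant function x0."""
    depth = int(TAU_MAX / h) + 1
    hist = [list(x0)] * depth        # stored past states, most recent last
    x = list(x0)
    for k in range(int(t_end / h)):
        t = k * h
        def delayed(j, lag):         # x_j(t - lag), read from the history
            idx = min(len(hist) - 1, int(round(lag / h)))
            return hist[-1 - idx][j]
        new = []
        for i in range(n):
            rhs = -mu(i, t) * x[i] + bias(i, t)
            for j in range(n):
                rhs += alpha(i, j, t) * g(x[j])
                rhs += beta(i, j, t) * g(delayed(j, tau(i, j, t)))
            new.append(x[i] + h * rhs)
        x = new
        hist.append(list(x))
        hist = hist[-depth:]
    return x
```

Started from mirror-image states, the two trajectories settle near opposite saturation levels, consistent with the coexistence of multiple attractors that the letter studies.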

Many theoretical studies of neural networks are predominantly concerned with autonomous systems; namely, all the parameters in system 1.1 are constants. Such an autonomous network system is often referred to in the literature as a delayed cellular neural network (Chua & Yang, 1988; Roska & Chua, 1992). The autonomous cellular neural networks with delay and without delay have been successfully applied to many fields, such as optimization, associative memory, signal and image processing, and pattern recognition. The theory on the existence and stability of equilibria and periodic solutions for such autonomous systems has been extensively investigated (Cao, 2003; Chen, Lu, & Chen, 2005; Juang & Lin, 2000; Mohamad & Gopalsamy, 2003; Peng, Qiao, & Xu, 2002; Shih, 2000; Tu & Liao, 2005; van den Driessche & Zou, 1998; Zhou, Liu, & Chen, 2004). Autonomous neural network systems are modeled on the assumption that the environment is stationary, that is, that its characteristics do not change with time. However, frequently the environment of interest is not stationary, and the parameters of the information-bearing signal generated by the environment vary with time. In general, nonautonomous phenomena occur in many realistic systems. Particularly when we consider the long-term dynamical behavior of a system, the parameters of the system usually change with time. Investigations of nonautonomous delayed neural networks (see equation 1.1) are thus important in understanding the dynamical characteristics of neural behavior in time-varying environments.

In this letter, we develop a geometrical approach that leads to a decomposition of the phase space into several positively invariant and stable sets. We then study the existence of multiple almost periodic solutions for the nonautonomous delayed neural network 1.1, under the assumption that all the parameters and delays in system 1.1 are almost periodic. Most studies of autonomous and nonautonomous systems center around the existence of a unique equilibrium, periodic, or almost periodic solution and its global convergence. A unique periodic solution and its globally exponentially robust stability were investigated by Cao and Chen (2004) for system 1.1 with periodic external bias $I_i(t)$, using the Lyapunov method. The globally exponential stability of a unique almost periodic solution has been analyzed by Cao, Chen, and Huang (2005), Chen and Cao (2003), Fan and Ye (2005), Huang and Cao (2003), Huang, Cao, and Ho (2006), Liu and Huang (2005), Mohamad and Gopalsamy (2000), Zhang,


3394 K. Lin and C. Shih

Wei, and Xu (2005), Mohamad (2003), and Zhao (2004). The techniques they employed include exponential dichotomy, fixed-point theorems, the contraction mapping principle, Lyapunov methods, and the Halanay inequality. A neural network in a more general form, which includes system 1.1 and distributed delays, has been analyzed in Lu and Chen (2004, 2005), who obtained the globally exponential stability of a single periodic solution and a single almost periodic solution. Multistability in neural networks is essential in numerous applications such as content-addressable memory storage and pattern recognition. In addition, the application of multistability is important in decision making, digital selection, and analogy amplification (Hahnloser, 1998). Recently, the theory of the existence of multiple stable equilibria for autonomous networks has been established in Cheng, Lin, and Shih (2006); multiple stable periodic orbits for autonomous networks with periodic inputs have been explored in Cheng, Lin, and Shih (2007) and Zeng and Wang (2006). This presentation moves this direction of study toward wider and more general considerations. We establish the existence of $2^n$ exponentially stable almost periodic solutions for general $n$-dimensional nonautonomous networks 1.1 with various activation functions. Our derivation also indicates the basins of attraction for these almost periodic solutions.

The presentation is organized as follows. In section 2, we justify that all the solutions of system 1.1 are bounded, under the assumption that all the parameters in this system are bounded continuous functions of $t$; $2^n$ subsets of the phase space are shown to be positively invariant under the flows generated by system 1.1. Within these $2^n$ subsets, we further construct $2^n$ exponentially stable subsets (to be defined later) for system 1.1. In section 3, we derive the existence of $2^n$ almost periodic solutions for system 1.1 with almost periodic parameters and time lags through the contraction mapping principle. Finally, in section 4, we arrange three numerical simulations on the dynamics of system 1.1 to illustrate the theory.

2 Boundedness, Invariant Sets, and Exponentially Stable Regions

We consider that $\mu_i(t)$, $\alpha_{ij}(t)$, $\beta_{ij}(t)$, $I_i(t)$, $\tau_{ij}(t)$ in system 1.1 are bounded continuous functions defined for $t \in \mathbb{R}$, the set of all real numbers. We denote

$$0 < \underline{\mu}_i \le \mu_i(t) \le \overline{\mu}_i, \quad \underline{\alpha}_{ij} \le \alpha_{ij}(t) \le \overline{\alpha}_{ij}, \quad \underline{\beta}_{ij} \le \beta_{ij}(t) \le \overline{\beta}_{ij}, \tag{2.1}$$
$$\underline{I}_i \le I_i(t) \le \overline{I}_i, \quad 0 \le \underline{\tau}_{ij} \le \tau_{ij}(t) \le \overline{\tau}_{ij},$$

for all $i, j = 1, 2, \ldots, n$ and $t \in \mathbb{R}$, with

$$\inf_{t \in \mathbb{R}} \mu_i(t) = \underline{\mu}_i, \quad \inf_{t \in \mathbb{R}} \alpha_{ij}(t) = \underline{\alpha}_{ij}, \quad \inf_{t \in \mathbb{R}} \beta_{ij}(t) = \underline{\beta}_{ij}, \quad \inf_{t \in \mathbb{R}} I_i(t) = \underline{I}_i, \quad \inf_{t \in \mathbb{R}} \tau_{ij}(t) = \underline{\tau}_{ij},$$
$$\sup_{t \in \mathbb{R}} \mu_i(t) = \overline{\mu}_i, \quad \sup_{t \in \mathbb{R}} \alpha_{ij}(t) = \overline{\alpha}_{ij}, \quad \sup_{t \in \mathbb{R}} \beta_{ij}(t) = \overline{\beta}_{ij}, \quad \sup_{t \in \mathbb{R}} I_i(t) = \overline{I}_i, \quad \sup_{t \in \mathbb{R}} \tau_{ij}(t) = \overline{\tau}_{ij}.$$

Moreover, we set

$$\alpha^*_{ij} = \max\{|\underline{\alpha}_{ij}|, |\overline{\alpha}_{ij}|\}, \quad \beta^*_{ij} = \max\{|\underline{\beta}_{ij}|, |\overline{\beta}_{ij}|\}, \quad I^*_i = \max\{|\underline{I}_i|, |\overline{I}_i|\}. \tag{2.2}$$

The activation functions $g_j$ usually admit a sigmoidal configuration or are nondecreasing with saturation. In this presentation, we mainly adopt

$$g_j(\xi) = g(\xi) := \tanh(\xi), \quad \text{for all } j = 1, 2, \ldots, n. \tag{2.3}$$

Its graph is depicted in Figure 1a. Our theory can be applied to more general activation functions: $g_i \in C^2$ with

$$u_i < g_i(\xi) < v_i, \quad g_i'(\xi) > 0, \quad (\xi - \sigma_i)\, g_i''(\xi) < 0, \quad \text{for all } \xi \in \mathbb{R},$$
$$\lim_{\xi \to +\infty} g_i(\xi) = v_i, \quad \lim_{\xi \to -\infty} g_i(\xi) = u_i,$$

where $u_i$, $v_i$, and $\sigma_i$ are constants with $u_i < v_i$, as well as the nondecreasing continuous functions with saturation

$$g_i(\xi) = \begin{cases} u_i & \text{if } -\infty < \xi < p_i, \\ \text{increasing} & \text{if } p_i \le \xi \le q_i, \\ v_i & \text{if } q_i < \xi < \infty, \end{cases}$$

where $u_i$, $v_i$, $p_i$, $q_i$ are constants with $u_i < v_i$, $p_i < q_i$, $i = 1, 2, \ldots, n$. The latter class contains the standard piecewise linear activation function for cellular neural networks: $(|\xi + 1| - |\xi - 1|)/2$. Notably, the analysis in Zeng and Wang (2006) is performed within the framework of this piecewise linear activation function. The configurations for these functions are depicted in Figures 2a to 2c, respectively.

We set $\tau := \max_{1 \le i, j \le n}\{\overline{\tau}_{ij}\}$. System 1.1 is a system of nonautonomous delayed differential equations. The evolution of the system is described by the solution $x(t; t_0, \varphi) = x(t; t_0) = (x_1(t; t_0), x_2(t; t_0), \ldots, x_n(t; t_0))$, $t \ge t_0 - \tau$, starting from the initial condition $x_i(t_0 + \theta; t_0) = \varphi_i(\theta)$ for $\theta \in [-\tau, 0]$, where $\varphi = (\varphi_1, \varphi_2, \ldots, \varphi_n) \in C([-\tau, 0], \mathbb{R}^n)$. We note that the fundamental theory for the delayed equation $\dot{x}(t) = F(t, x_t)$ employs the conventional expression $x_t(\theta) = x(t + \theta; t_0)$, $\theta \in [-\tau, 0]$, with phase space $C([-\tau, 0], \mathbb{R}^n)$. One usually regards the image of the solution $x_t(\theta)$, $t \ge t_0$, as a trajectory $\{x(t; t_0) : t \ge t_0 - \tau\}$ lying in $\mathbb{R}^n$.

We first illustrate that all solutions of system 1.1 are bounded and then establish criteria under which system 1.1 admits $2^n$ positively invariant subsets of the phase space $C([-\tau, 0], \mathbb{R}^n)$. Each of these invariant sets induces an exponentially stable region in $\mathbb{R}^n$ in which the trajectories $\{x(t; t_0) : t \ge t_0 - \tau\}$ lie. Note that the norm $\|\varphi\| = \max_{1 \le i \le n}\{\sup_{s \in [-\tau, 0]} |\varphi_i(s)|\}$ on $C([-\tau, 0], \mathbb{R}^n)$ is adopted here.

Figure 1: (a) Graph of the activation function $g(\xi) = \tanh(\xi)$. (b) Configurations for $\overline{f}_i$ and $\underline{f}_i$.

Proposition 1. All solutions of system 1.1 with bounded activation functions are bounded. Moreover, for system 1.1 with activation function 2.3,

$$|x_i(t_0; t_0)| \le M_i \ \text{implies} \ |x_i(t; t_0)| \le M_i, \ \text{for } t > t_0,$$
$$|x_i(t_0; t_0)| > M_i \ \text{implies} \ \limsup_{t \to \infty} |x_i(t; t_0)| \le M_i,$$

for all $i = 1, 2, \ldots, n$, where $M_i = \big[\sum_{j=1}^{n} (\alpha^*_{ij} + \beta^*_{ij}) + I^*_i\big] / \underline{\mu}_i$.
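Proposition 1 can be illustrated numerically. The scalar ($n = 1$) sketch below uses assumed parameter functions with $\underline{\mu} = 1$, $\alpha^* = 1.7$, $\beta^* = 0.3$, and $I^* = 0.2$, so that $M_1 = 2.2$: a solution started inside $[-M_1, M_1]$ stays there, and one started outside decays back under the bound.

```python
import math

# Scalar (n = 1) instance of system 1.1 with assumed parameters, chosen
# to illustrate proposition 1.
mu_lo = 1.0                                # lower bound of mu(t)

def rhs(t, x, x_delayed):
    alpha = 1.5 + 0.2 * math.sin(t)        # |alpha(t)| <= 1.7 = alpha*
    beta  = 0.3                            # beta* = 0.3
    I     = 0.2 * math.cos(t)              # I* = 0.2
    return -mu_lo * x + alpha * math.tanh(x) + beta * math.tanh(x_delayed) + I

M = (1.7 + 0.3 + 0.2) / mu_lo              # M_1 = (alpha* + beta* + I*)/mu_lo = 2.2

def simulate(x0, t_end=30.0, h=0.01, tau=1.0):
    """Forward-Euler integration with constant initial history x0."""
    hist = [x0] * (int(tau / h) + 1)
    for k in range(int(t_end / h)):
        t = k * h
        x_del = hist[-1 - int(tau / h)]
        hist.append(hist[-1] + h * rhs(t, hist[-1], x_del))
    return hist

inside  = simulate(2.0)   # |x(0)| <= M: expect |x(t)| <= M for all t
outside = simulate(5.0)   # |x(0)| >  M: expect limsup |x(t)| <= M
```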

Figure 2: Configurations of typical (a) smooth sigmoidal activation functions and (b) saturated activation functions. (c) Graph of the standard piecewise linear activation function.

Proof. We prove only the situation of activation function 2.3. Using equations 2.1 to 2.3 in system 1.1, we derive

$$\frac{d^+}{dt} |x_i(t; t_0)| \le -\underline{\mu}_i |x_i(t; t_0)| + \sum_{j=1}^{n} (\alpha^*_{ij} + \beta^*_{ij}) + I^*_i, \quad t > t_0, \ i = 1, 2, \ldots, n,$$

where $d^+/dt$ denotes the right-hand derivative. It follows that

$$|x_i(t; t_0)| \le M_i + e^{-\underline{\mu}_i (t - t_0)} \big( |x_i(t_0; t_0)| - M_i \big), \quad t > t_0, \ i = 1, 2, \ldots, n,$$

where $M_i$ is defined above. Hence, $|x_i(t_0; t_0)| \le M_i$ implies $|x_i(t; t_0)| \le M_i$, $t > t_0$, $i = 1, 2, \ldots, n$. On the other hand, if $|x_i(t_0; t_0)| - M_i = C_i > 0$, then $|x_i(t; t_0)| \le M_i + e^{-\underline{\mu}_i (t - t_0)} C_i$ for $t > t_0$, and $e^{-\underline{\mu}_i (t - t_0)} C_i \to 0$ as $t \to \infty$.


Let us denote by $F = (F_1, F_2, \ldots, F_n)$ the right-hand side of equation 1.1 with activation function 2.3,

$$F_i(t, x_t) := -\mu_i(t)\, x_i(t) + \sum_{j=1}^{n} \alpha_{ij}(t)\, g_j(x_j(t)) + \sum_{j=1}^{n} \beta_{ij}(t)\, g_j(x_j(t - \tau_{ij}(t))) + I_i(t),$$

with $g_j(\xi) = \tanh(\xi)$, for every $j$. Next, we define, for $i = 1, 2, \ldots, n$,

$$\overline{f}_i(\xi) = \begin{cases} -\underline{\mu}_i \xi + (\overline{\alpha}_{ii} + \overline{\beta}_{ii})\, g_i(\xi) + k^+_i, & \xi \ge 0, \\ -\overline{\mu}_i \xi + (\underline{\alpha}_{ii} + \underline{\beta}_{ii})\, g_i(\xi) + k^+_i, & \xi < 0, \end{cases} \tag{2.4}$$

$$\underline{f}_i(\xi) = \begin{cases} -\overline{\mu}_i \xi + (\underline{\alpha}_{ii} + \underline{\beta}_{ii})\, g_i(\xi) + k^-_i, & \xi \ge 0, \\ -\underline{\mu}_i \xi + (\overline{\alpha}_{ii} + \overline{\beta}_{ii})\, g_i(\xi) + k^-_i, & \xi < 0, \end{cases}$$

where

$$k^+_i := \sum_{j=1, j \ne i}^{n} (\alpha^*_{ij} + \beta^*_{ij}) + \overline{I}_i, \quad k^-_i := -\sum_{j=1, j \ne i}^{n} (\alpha^*_{ij} + \beta^*_{ij}) + \underline{I}_i,$$

with $\alpha^*_{ij}$ and $\beta^*_{ij}$ defined in equation 2.2. It follows that

$$\underline{f}_i(x_i) \le F_i(t, x_t) \le \overline{f}_i(x_i), \tag{2.5}$$

for all $x_t \in C([-\tau, 0], \mathbb{R}^n)$, $x_i \in \mathbb{R}$, $t \ge t_0$, and $i = 1, 2, \ldots, n$, since $|g_j| \le 1$ for all $j$.

Our approach is motivated by the geometric configurations of $\overline{f}_i$ and $\underline{f}_i$ depicted in Figure 1b. To establish such a configuration, several conditions are required. The first parameter condition is

$$(\mathrm{H1}): \quad 0 = \inf_{\xi \in \mathbb{R}} g_i'(\xi) < \frac{\underline{\mu}_i}{\overline{\alpha}_{ii} + \overline{\beta}_{ii}}, \quad \frac{\overline{\mu}_i}{\underline{\alpha}_{ii} + \underline{\beta}_{ii}} < \max_{\xi \in \mathbb{R}} g_i'(\xi) = 1, \quad i = 1, 2, \ldots, n.$$

Lemma 1. Assume that condition H1 holds. Then (i) there exist two points $\overline{p}_i$ and $\overline{q}_i$ with $\overline{p}_i < 0 < \overline{q}_i$ such that $\overline{f}_i'(\overline{p}_i) = 0$ and $\overline{f}_i'(\overline{q}_i) = 0$, $i = 1, 2, \ldots, n$; and (ii) there exist two points $\underline{p}_i$ and $\underline{q}_i$ with $\underline{p}_i < 0 < \underline{q}_i$ such that $\underline{f}_i'(\underline{p}_i) = 0$ and $\underline{f}_i'(\underline{q}_i) = 0$, $i = 1, 2, \ldots, n$.
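For $g = \tanh$, the critical points in lemma 1 can be written in closed form: solving $g'(q) = \mathrm{sech}^2(q) = c$ for $0 < c < 1$ gives the positive solution $q = \operatorname{arccosh}(1/\sqrt{c})$ and, by the even symmetry of $\mathrm{sech}^2$, the negative one $p = -\operatorname{arccosh}(1/\sqrt{c})$. A small sketch, with the ratio values below standing in (as assumptions) for the ratios of the form $\mu/(\alpha_{ii} + \beta_{ii})$:

```python
import math

# Closed-form critical points of lemma 1 for g = tanh: sech^2(q) = c has
# the positive solution q = arccosh(1/sqrt(c)) whenever 0 < c < 1.
def crit_point(c):
    """Positive solution of sech^2(q) = c for 0 < c < 1."""
    return math.acosh(1.0 / math.sqrt(c))

c_q = 0.4                  # assumed value of the ratio that defines q_i
c_p = 0.6                  # assumed value of the ratio that defines p_i
q_i = crit_point(c_q)      # q_i > 0
p_i = -crit_point(c_p)     # p_i < 0, by even symmetry of sech^2
```

Both ratios must lie strictly inside $(0, 1)$, which is exactly what condition H1 demands.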


Proof. We prove only case (i). For each $i$, when $\xi \ge 0$, since $\overline{f}_i'(\xi) = -\underline{\mu}_i + (\overline{\alpha}_{ii} + \overline{\beta}_{ii})\, g_i'(\xi)$, we have $\overline{f}_i'(\xi) = 0$ if and only if

$$g_i'(\xi) = \frac{\underline{\mu}_i}{\overline{\alpha}_{ii} + \overline{\beta}_{ii}}.$$

Note that $g_i'$ attains its maximal value at $\xi = 0$, is decreasing on $[0, \infty)$, and $\lim_{\xi \to \pm\infty} g_i'(\xi) = 0$. Hence, since $g_i'$ is continuous, if

$$0 = \inf_{\xi \in \mathbb{R}} g_i'(\xi) < \frac{\underline{\mu}_i}{\overline{\alpha}_{ii} + \overline{\beta}_{ii}} < \max_{\xi \in \mathbb{R}} g_i'(\xi) = 1, \quad i = 1, 2, \ldots, n,$$

there exists a point $\overline{q}_i$ with $\overline{q}_i > 0$ such that

$$g_i'(\overline{q}_i) = \frac{\underline{\mu}_i}{\overline{\alpha}_{ii} + \overline{\beta}_{ii}}.$$

Similarly, when $\xi < 0$, there exists a point $\overline{p}_i$ with $\overline{p}_i < 0$ such that

$$g_i'(\overline{p}_i) = \frac{\overline{\mu}_i}{\underline{\alpha}_{ii} + \underline{\beta}_{ii}}.$$

Note that condition H1 implies $\underline{\alpha}_{ii} + \underline{\beta}_{ii} > 0$ for all $i = 1, 2, \ldots, n$, since each $\overline{\mu}_i$ is already assumed to be positive. The second parameter condition is

$$(\mathrm{H2}): \quad \overline{f}_i(\overline{p}_i) < 0, \quad \underline{f}_i(\underline{q}_i) > 0, \quad i = 1, 2, \ldots, n.$$

The configuration that motivates H2 is depicted in Figure 1. Such a configuration is due to the characteristics of the activation functions $g_i$. Under assumptions H1 and H2, there exist points $\overline{a}_i$, $\overline{b}_i$, $\overline{c}_i$ with $\overline{a}_i < \overline{b}_i < \overline{c}_i$ such that $\overline{f}_i(\overline{a}_i) = \overline{f}_i(\overline{b}_i) = \overline{f}_i(\overline{c}_i) = 0$, as well as points $\underline{a}_i$, $\underline{b}_i$, $\underline{c}_i$ with $\underline{a}_i < \underline{b}_i < \underline{c}_i$ such that $\underline{f}_i(\underline{a}_i) = \underline{f}_i(\underline{b}_i) = \underline{f}_i(\underline{c}_i) = 0$.

We consider the following $2^n$ subsets of $C([-\tau, 0], \mathbb{R}^n)$. Let $\mathbf{e} = (e_1, \ldots, e_n)$ with $e_i = l$ or $r$, where $l$, $r$ mean, respectively, "left" and "right." Set

$$\Omega^{\mathbf{e}} = \{\varphi = (\varphi_1, \varphi_2, \ldots, \varphi_n) \mid \varphi_i \in \Omega^l_i \ \text{if } e_i = l, \ \varphi_i \in \Omega^r_i \ \text{if } e_i = r\}, \tag{2.6}$$

where

$$\Omega^l_i := \{\varphi_i \in C([-\tau, 0], \mathbb{R}) \mid \varphi_i(\theta) < \overline{b}_i, \ \text{for all } \theta \in [-\tau, 0]\},$$
$$\Omega^r_i := \{\varphi_i \in C([-\tau, 0], \mathbb{R}) \mid \varphi_i(\theta) > \underline{b}_i, \ \text{for all } \theta \in [-\tau, 0]\}.$$


Theorem 1. Assume that H1 and H2 hold and $\beta_{ii}(t) > 0$, for all $t \in \mathbb{R}$, $i = 1, 2, \ldots, n$. Then each $\Omega^{\mathbf{e}}$ is positively invariant under the solution flow generated by system 1.1.

Proof. Under assumptions H1 and H2, and by lemma 1, we obtain the configuration in Figure 1b and the $2^n$ subsets $\Omega^{\mathbf{e}}$ of $C([-\tau, 0], \mathbb{R}^n)$ defined in equation 2.6. Consider one of the $2^n$ sets $\Omega^{\mathbf{e}}$; that is, fix an $\mathbf{e} = (e_1, e_2, \ldots, e_n)$, $e_i = l$ or $r$, $i = 1, 2, \ldots, n$. Let $\phi = (\phi_1, \phi_2, \ldots, \phi_n) \in \Omega^{\mathbf{e}}$. Then there exists a sufficiently small constant $\varepsilon_0 > 0$ such that $\phi_i(\theta) \ge \underline{b}_i + \varepsilon_0$ for all $\theta \in [-\tau, 0]$ if $e_i = r$, and $\phi_i(\theta) \le \overline{b}_i - \varepsilon_0$ for all $\theta \in [-\tau, 0]$ if $e_i = l$. We assert that the solution $x(t) = x(t; t_0, \phi)$ remains in $\Omega^{\mathbf{e}}$ for all $t \ge t_0$. If this is not true, there exists a component of $x(t)$ that is the first one, or among the first ones, escaping from $\Omega^l_i$ or $\Omega^r_i$. More precisely, there exist some $i \in \{1, 2, \ldots, n\}$ and $t_1 > t_0$ such that either $x_i(t_1) = \underline{b}_i + \varepsilon_0$, $\frac{dx_i}{dt}(t_1) \le 0$, and $x_i(t) > \underline{b}_i + \varepsilon_0$ for $t_0 - \tau \le t < t_1$, or $x_i(t_1) = \overline{b}_i - \varepsilon_0$, $\frac{dx_i}{dt}(t_1) \ge 0$, and $x_i(t) < \overline{b}_i - \varepsilon_0$ for $t_0 - \tau \le t < t_1$. For the first case, with $x_i(t_1) = \underline{b}_i + \varepsilon_0$ and $\frac{dx_i}{dt}(t_1) \le 0$, it follows from equation 1.1 that

$$\frac{dx_i}{dt}(t_1) = -\mu_i(t_1)(\underline{b}_i + \varepsilon_0) + \alpha_{ii}(t_1)\, g_i(\underline{b}_i + \varepsilon_0) + \beta_{ii}(t_1)\, g_i(x_i(t_1 - \tau_{ii}(t_1)))$$
$$\quad + \sum_{j=1, j \ne i}^{n} \alpha_{ij}(t_1)\, g_j(x_j(t_1)) + \sum_{j=1, j \ne i}^{n} \beta_{ij}(t_1)\, g_j(x_j(t_1 - \tau_{ij}(t_1))) + I_i(t_1) \le 0. \tag{2.7}$$

On the other hand, H2 and the previous description of $\underline{b}_i$ yield $\underline{f}_i(\underline{b}_i + \varepsilon_0) > 0$; that is,

$$\begin{cases} -\overline{\mu}_i(\underline{b}_i + \varepsilon_0) + (\underline{\alpha}_{ii} + \underline{\beta}_{ii})\, g_i(\underline{b}_i + \varepsilon_0) - \sum_{j=1, j \ne i}^{n} (\alpha^*_{ij} + \beta^*_{ij}) + \underline{I}_i > 0, & \text{if } \underline{b}_i + \varepsilon_0 \ge 0, \\ -\underline{\mu}_i(\underline{b}_i + \varepsilon_0) + (\overline{\alpha}_{ii} + \overline{\beta}_{ii})\, g_i(\underline{b}_i + \varepsilon_0) - \sum_{j=1, j \ne i}^{n} (\alpha^*_{ij} + \beta^*_{ij}) + \underline{I}_i > 0, & \text{if } \underline{b}_i + \varepsilon_0 < 0. \end{cases} \tag{2.8}$$

Notice that $g_i(x_i(t_1 - \tau_{ii}(t_1))) > g_i(\underline{b}_i + \varepsilon_0)$, by the monotonicity of the function $g_i$; in addition, $t_1$ is the first time for $x_i$ to decrease across the value $\underline{b}_i + \varepsilon_0$. By $\beta_{ii}(t_1) > 0$ and $|g_j(\cdot)| \le 1$, we obtain from equation 2.8 that

$$-\mu_i(t_1)(\underline{b}_i + \varepsilon_0) + \alpha_{ii}(t_1)\, g_i(\underline{b}_i + \varepsilon_0) + \beta_{ii}(t_1)\, g_i(x_i(t_1 - \tau_{ii}(t_1)))$$
$$\quad + \sum_{j=1, j \ne i}^{n} \alpha_{ij}(t_1)\, g_j(x_j(t_1)) + \sum_{j=1, j \ne i}^{n} \beta_{ij}(t_1)\, g_j(x_j(t_1 - \tau_{ij}(t_1))) + I_i(t_1)$$
$$\ge \begin{cases} -\overline{\mu}_i(\underline{b}_i + \varepsilon_0) + (\underline{\alpha}_{ii} + \underline{\beta}_{ii})\, g_i(\underline{b}_i + \varepsilon_0) - \sum_{j=1, j \ne i}^{n} (\alpha^*_{ij} + \beta^*_{ij}) + \underline{I}_i > 0, & \text{if } \underline{b}_i + \varepsilon_0 \ge 0, \\ -\underline{\mu}_i(\underline{b}_i + \varepsilon_0) + (\overline{\alpha}_{ii} + \overline{\beta}_{ii})\, g_i(\underline{b}_i + \varepsilon_0) - \sum_{j=1, j \ne i}^{n} (\alpha^*_{ij} + \beta^*_{ij}) + \underline{I}_i > 0, & \text{if } \underline{b}_i + \varepsilon_0 < 0, \end{cases}$$

which contradicts equation 2.7. Hence, $x_i(t) \ge \underline{b}_i + \varepsilon_0$ for all $t > t_0$. Similar arguments justify that $x_i(t) \le \overline{b}_i - \varepsilon_0$ for all $t > t_0$ in the situation that $x_i(t_1) = \overline{b}_i - \varepsilon_0$ and $\frac{dx_i}{dt}(t_1) \ge 0$. Therefore, $\Omega^{\mathbf{e}}$ is positively invariant under the flow generated by equation 1.1.

Remark 1. The assertion of theorem 1 actually holds for the subset $\Omega^{\mathbf{e}}$ in equation 2.6 with $\underline{b}_i$ replaced by any number between $\underline{b}_i$ and $\underline{c}_i$, and $\overline{b}_i$ replaced by any number between $\overline{b}_i$ and $\overline{a}_i$, as observed from the proof. Such an observation provides further understanding of the dynamics of the system.

Definition 1. A positively invariant set $\Lambda \subset C([-\tau, 0], \mathbb{R}^n)$ is called an exponentially stable set for system 1.1 if there exist constants $\gamma > 0$ and $L > 0$ such that

$$\|x_t(\cdot; t_0, \varphi) - x_t(\cdot; t_0, \psi)\| \le L e^{-\gamma (t - t_0)} \|\varphi - \psi\| \quad \text{for all } t \ge t_0,$$

for solutions $x_t(\cdot; t_0, \varphi)$ and $x_t(\cdot; t_0, \psi)$ of system 1.1 starting from any two initial conditions $\varphi, \psi \in \Lambda$. In this situation, $\tilde{\Lambda} := \{\psi(\theta) \mid \theta \in [-\tau, 0], \ \psi \in \Lambda\} \subset \mathbb{R}^n$, the set of images of all elements in $\Lambda$, is called an exponentially stable region for system 1.1.

Let $\eta_j \in \mathbb{R}$ be such that $\eta_j > \max\{g_j'(\xi) \mid \xi = \underline{c}_j, \overline{a}_j\}$, $j = 1, 2, \ldots, n$. We consider the following criterion, which yields exponentially stable sets for system 1.1:

$$(\mathrm{H3}): \quad \inf_{t \in \mathbb{R}} \Big\{ \mu_i(t) - \sum_{j=1}^{n} \eta_j \big[ |\alpha_{ij}(t)| + |\beta_{ij}(t)| \big] \Big\} > 0, \quad i = 1, 2, \ldots, n.$$

We define $\underline{d}_j$ and $\overline{d}_j$ as

$$\underline{d}_j := \min\{\xi \mid g_j'(\xi) = \eta_j\}, \quad \overline{d}_j := \max\{\xi \mid g_j'(\xi) = \eta_j\}. \tag{2.9}$$

Then $\underline{d}_j > \overline{a}_j$ and $\overline{d}_j < \underline{c}_j$. We consider the following $2^n$ subsets of $C([-\tau, 0], \mathbb{R}^n)$. Let $\mathbf{e} = (e_1, e_2, \ldots, e_n)$ with $e_i = l$ or $r$, and set

$$\Lambda^{\mathbf{e}} = \{\varphi = (\varphi_1, \varphi_2, \ldots, \varphi_n) \mid \varphi_i \in \Lambda^l_i \ \text{if } e_i = l, \ \varphi_i \in \Lambda^r_i \ \text{if } e_i = r\}, \tag{2.10}$$

where

$$\Lambda^l_i := \{\varphi_i \in C([-\tau, 0], \mathbb{R}) \mid \varphi_i(\theta) \le \underline{d}_i, \ \text{for all } \theta \in [-\tau, 0]\},$$
$$\Lambda^r_i := \{\varphi_i \in C([-\tau, 0], \mathbb{R}) \mid \varphi_i(\theta) \ge \overline{d}_i, \ \text{for all } \theta \in [-\tau, 0]\}.$$

Notably, $\Lambda^l_i \subseteq \Omega^l_i$ and $\Lambda^r_i \subseteq \Omega^r_i$. In the following, we shall derive that each of these $2^n$ subsets $\Lambda^{\mathbf{e}}$ of $C([-\tau, 0], \mathbb{R}^n)$ is an exponentially stable set for system 1.1.

Theorem 2. Assume that conditions H1, H2, and H3 hold and $\beta_{ii}(t) > 0$, for all $t \in \mathbb{R}$, $i = 1, 2, \ldots, n$. Then each of the $2^n$ sets $\Lambda^{\mathbf{e}}$ is exponentially stable for system 1.1.

Proof. We consider any one of these $2^n$ subsets $\Lambda^{\mathbf{e}}$, defined in equation 2.10. The positive invariance property of $\Lambda^{\mathbf{e}}$ follows from remark 1. Let $x(t) = x(t; t_0, \phi)$ and $y(t) = x(t; t_0, \psi)$ be two solutions of system 1.1 with respective initial conditions $\phi, \psi \in \Lambda^{\mathbf{e}}$. For each $i = 1, 2, \ldots, n$, we define the single-variable function $G_i(\cdot)$ by

$$G_i(\zeta) = \inf_{t \in \mathbb{R}} \Big\{ \mu_i(t) - \zeta - \sum_{j=1}^{n} \eta_j |\alpha_{ij}(t)| - \sum_{j=1}^{n} \eta_j |\beta_{ij}(t)|\, e^{\zeta \overline{\tau}_{ij}} \Big\}.$$

Then $G_i$ is continuous, and $G_i(0) > 0$ from H3. Moreover, there exists a constant $\lambda > 0$ such that $G_i(\lambda) > 0$ for all $i = 1, 2, \ldots, n$, due to the continuity of $G_i$. From equations 2.1 to 2.3, and by using $\eta_j > \max\{g_j'(\xi) \mid \xi = \underline{c}_j, \overline{a}_j\}$, we obtain

$$\frac{d^+}{dt} |x_i(t) - y_i(t)| \le -\mu_i(t) |x_i(t) - y_i(t)| + \sum_{j=1}^{n} \eta_j |\alpha_{ij}(t)| |x_j(t) - y_j(t)|$$
$$\quad + \sum_{j=1}^{n} \eta_j |\beta_{ij}(t)| |x_j(t - \tau_{ij}(t)) - y_j(t - \tau_{ij}(t))|. \tag{2.11}$$

We assume that

$$K := \max_{1 \le i \le n} \Big\{ \sup_{s \in [t_0 - \tau, t_0]} |x_i(s) - y_i(s)| \Big\} > 0. \tag{2.12}$$

Now consider the functions $z_i(\cdot)$ defined by

$$z_i(t) = e^{\lambda (t - t_0)} |x_i(t) - y_i(t)| \quad \text{for } t \ge t_0 - \tau. \tag{2.13}$$


Then $z_i(t) \le K$ for $t \in [t_0 - \tau, t_0]$. Using equations 2.11 and 2.13, we obtain

$$\frac{d^+}{dt} z_i(t) \le -[\mu_i(t) - \lambda] z_i(t) + \sum_{j=1}^{n} \eta_j |\alpha_{ij}(t)| z_j(t) + \sum_{j=1}^{n} \eta_j |\beta_{ij}(t)|\, e^{\lambda \overline{\tau}_{ij}} \Big( \sup_{s \in [t - \tau, t]} z_j(s) \Big) \tag{2.14}$$

for $i = 1, 2, \ldots, n$, $t \ge t_0$. We assert that

$$z_i(t) \le K \quad \text{for all } t \ge t_0, \ i = 1, 2, \ldots, n. \tag{2.15}$$

Suppose, on the contrary, that there exist an $i \in \{1, 2, \ldots, n\}$ (say, $i = k$) and a $t_1 \ge t_0$, at the earliest time, such that

$$z_i(t) \le K, \quad t \in [t_0 - \tau, t_1], \quad i = 1, 2, \ldots, n, \ i \ne k,$$
$$z_k(t) \le K, \quad t \in [t_0 - \tau, t_1), \quad z_k(t_1) = K, \ \text{with } \frac{d}{dt} z_k(t_1) \ge 0. \tag{2.16}$$

From equations 2.14 and 2.16, due to $G_k(\lambda) > 0$, we obtain

$$0 \le \frac{d^+}{dt} z_k(t_1) \le -[\mu_k(t_1) - \lambda] z_k(t_1) + \sum_{j=1}^{n} \eta_j |\alpha_{kj}(t_1)| z_j(t_1) + \sum_{j=1}^{n} \eta_j |\beta_{kj}(t_1)|\, e^{\lambda \overline{\tau}_{kj}} \Big( \sup_{s \in [t_1 - \tau, t_1]} z_j(s) \Big)$$
$$\le -\Big\{ \mu_k(t_1) - \lambda - \sum_{j=1}^{n} \eta_j |\alpha_{kj}(t_1)| - \sum_{j=1}^{n} \eta_j |\beta_{kj}(t_1)|\, e^{\lambda \overline{\tau}_{kj}} \Big\} K < 0,$$

which is a contradiction. Hence, assertion 2.15 holds. It then follows from equations 2.12, 2.13, and 2.15 that

$$|x_i(t) - y_i(t)| \le e^{-\lambda (t - t_0)} \max_{1 \le j \le n} \Big\{ \sup_{s \in [t_0 - \tau, t_0]} |x_j(s) - y_j(s)| \Big\},$$

for all $i = 1, 2, \ldots, n$, $t \ge t_0$.
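The exponential estimate of theorem 2 can be seen in a scalar simulation with assumed parameters: two solutions whose initial histories both lie in the same "right" region approach each other, and the gap between them decays exponentially.

```python
import math

# Numerical illustration of theorem 2 on a scalar instance (assumed
# parameters): two solutions started in the same region converge to
# each other exponentially fast.
def simulate(x0, t_end=20.0, h=0.01, tau=1.0):
    hist = [x0] * (int(tau / h) + 1)      # constant initial history
    for k in range(int(t_end / h)):
        t = k * h
        x, x_del = hist[-1], hist[-1 - int(tau / h)]
        # mu = 1, alpha(t) = 2 + 0.1 sin t, beta = 0.5, I(t) = 0.1 cos(sqrt(2) t)
        dx = (-x + (2.0 + 0.1 * math.sin(t)) * math.tanh(x)
              + 0.5 * math.tanh(x_del) + 0.1 * math.cos(math.sqrt(2.0) * t))
        hist.append(x + h * dx)
    return hist

a, b = simulate(2.0), simulate(3.0)       # both histories in the 'right' region
gaps = [abs(u - v) for u, v in zip(a, b)]
```

In the saturated region the effective gain $\eta$ is small, so the condition of H3 is easy to meet there; this is what makes the gap collapse.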


3 Existence and Stability of Almost Periodic Solutions

In this section, we investigate the existence and stability of almost periodic solutions for system 1.1. We consider that $\mu_i(t)$, $\alpha_{ij}(t)$, $\beta_{ij}(t)$, $\tau_{ij}(t)$, and $I_i(t)$, $i, j = 1, 2, \ldots, n$, are almost periodic functions defined for all $t \in \mathbb{R}$. First, let us recall some basic definitions and properties of almost periodic functions.

Definition 2 (Bohr, 1947; Yoshizawa, 1975). Let $h : \mathbb{R} \to \mathbb{R}$ be continuous. $h$ is said to be (Bohr) almost periodic on $\mathbb{R}$ if, for any $\varepsilon > 0$, the set $T(h, \varepsilon) = \{p : |h(t + p) - h(t)| < \varepsilon, \ \text{for all } t \in \mathbb{R}\}$ is relatively dense; that is, for any $\varepsilon > 0$, it is possible to find a real number $l = l(\varepsilon) > 0$ such that any interval of length $l(\varepsilon)$ contains a number $p = p(\varepsilon)$ with

$$|h(t + p) - h(t)| < \varepsilon, \quad \text{for all } t \in \mathbb{R}.$$

The number $p$ is called the $\varepsilon$-translation number or $\varepsilon$-almost period of $h$. We denote by $AP$ the set of all such functions. Notably, $(AP, \|\cdot\|)$ with the supremum norm is a Banach space. We collect some properties of almost periodic functions, which will be used in the following discussions.

Proposition 2 (Corduneanu, 1968; Fink, 1974).

i. Every periodic function is almost periodic.
ii. Each $h$ in $AP$ is bounded and uniformly continuous.
iii. $AP$ is an algebra (closed under addition, product, and scalar multiplication).
iv. If $h \in AP$ and $H$ is uniformly continuous on the range of $h$, then $H \circ h \in AP$.
v. If $h \in AP$ and $\inf_{t \in \mathbb{R}} |h(t)| > 0$, then $1/h \in AP$.
vi. $AP$ is closed under uniform limits on $\mathbb{R}$.
vii. Suppose $h_1, h_2, \ldots, h_m \in AP$. Then for every $\varepsilon > 0$, there exist a common $l(\varepsilon)$ and a common $\varepsilon$-translation number $p(\varepsilon)$ for these functions.

Proposition 2(vii) shows that definition 2 can be extended to vector-valued functions, as is the situation for our solutions $x(t; t_0)$ to system 1.1. We denote by $AP(\mathbb{R}^n)$ the space of all almost periodic functions from $\mathbb{R}$ to $\mathbb{R}^n$ with norm $\|h\| = \max_{1 \le i \le n} \sup_{t \in \mathbb{R}} |h_i(t)|$.
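Definition 2 can be explored numerically on the classic quasi-periodic example $h(t) = \sin t + \sin \sqrt{2}\, t$, which is almost periodic but not periodic. The search below, a sampled-grid proxy for "for all $t \in \mathbb{R}$," finds an $\varepsilon$-translation number in each of several windows of length 100, illustrating relative density:

```python
import math

# h is almost periodic but not periodic: the frequencies 1 and sqrt(2)
# are incommensurable.
def h(t):
    return math.sin(t) + math.sin(math.sqrt(2.0) * t)

ts = [k * 0.05 for k in range(4000)]      # sample grid standing in for t in R

def is_translation(p, eps):
    """Numerical proxy for |h(t + p) - h(t)| < eps for all t."""
    return all(abs(h(t + p) - h(t)) < eps for t in ts)

# Search every window [k*100, (k+1)*100) for an eps-translation number;
# relative density predicts a hit in each window.
eps = 0.3
found = []
for k in range(1, 6):
    hit = None
    p = k * 100.0
    while p < (k + 1) * 100.0:
        if is_translation(p, eps):
            hit = p
            break
        p += 0.01
    found.append(hit)
```

For instance, $p \approx 2\pi \cdot 29 \approx 182.2$ works because $29\sqrt{2}$ is unusually close to an integer (a continued-fraction convergent of $\sqrt{2}$).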

Proposition 3. Assume that $H, h \in AP$, and define $\tilde{H}(t) = H(t + h(t))$. Then $\tilde{H} \in AP$.

Proof. $H$ is uniformly continuous since $H \in AP$. Thus, for any $\varepsilon > 0$, there exists $\delta = \delta(\varepsilon) > 0$ (we choose $\delta < \varepsilon/2$) such that

$$|H(\xi) - H(\eta)| < \varepsilon/2, \quad \text{provided } |\xi - \eta| < \delta. \tag{3.1}$$

On the other hand, for the $\delta = \delta(\varepsilon) > 0$ in equation 3.1, there exists a real number $l = l(\delta) > 0$ such that any interval of length $l(\delta)$ contains a number $p = p(\delta)$ with

$$|H(t + p) - H(t)| < \delta \quad \text{and} \quad |h(t + p) - h(t)| < \delta, \quad \text{for all } t \in \mathbb{R},$$

due to $H, h \in AP$ and proposition 2(vii). Therefore, for any $\varepsilon > 0$, there exist $l = l(\varepsilon) = l(\delta(\varepsilon))$ and $p = p(\varepsilon) = p(\delta(\varepsilon))$ in any interval of length $l$ such that

$$|H(t + h(t + p)) - H(t + h(t))| < \varepsilon/2,$$
$$|H(t + h(t + p) + p) - H(t + h(t + p))| < \delta < \varepsilon/2.$$

Subsequently,

$$|\tilde{H}(t + p) - \tilde{H}(t)| = |H(t + p + h(t + p)) - H(t + h(t))|$$
$$\le |H(t + h(t + p) + p) - H(t + h(t + p))| + |H(t + h(t + p)) - H(t + h(t))| < \varepsilon.$$

If we set $\mu_i(t)$, $\alpha_{ij}(t)$, $\beta_{ij}(t)$, $\tau_{ij}(t)$, and $I_i(t)$, $i, j = 1, 2, \ldots, n$, to be almost periodic functions, then theorems 1 and 2 can also be established under the same conditions, since every almost periodic function is bounded. In the following discussion, we investigate the existence of almost periodic solutions of system 1.1 under the conditions in theorem 2 and $\alpha_{ii}(t) > 0$, for all $t$.

Theorem 3. Let $\mu_i(t)$, $\alpha_{ij}(t)$, $\beta_{ij}(t)$, $\tau_{ij}(t)$, and $I_i(t)$, $i, j = 1, 2, \ldots, n$, be almost periodic functions. Assume that conditions H1, H2, H3, and $\alpha_{ii}(t) > 0$, $\beta_{ii}(t) > 0$, $t \in \mathbb{R}$, $i = 1, 2, \ldots, n$, hold. Then there exist $2^n$ exponentially stable almost periodic solutions for system 1.1.

Proof. We consider the following $2^n$ subsets of $C(\mathbb{R}, \mathbb{R}^n)$. For each $\mathbf{e} = (e_1, \ldots, e_n)$, $e_i = l$ or $r$, we set

$$\Gamma^{\mathbf{e}} = \{\phi = (\phi_1, \phi_2, \ldots, \phi_n) \mid \phi_i \in \Gamma^l_i \ \text{if } e_i = l, \ \phi_i \in \Gamma^r_i \ \text{if } e_i = r\}, \tag{3.2}$$

where

$$\Gamma^l_i := \{\phi_i \in C(\mathbb{R}, \mathbb{R}) \mid \phi_i(\theta) \le \underline{d}_i, \ \text{for all } \theta \in \mathbb{R}\},$$
$$\Gamma^r_i := \{\phi_i \in C(\mathbb{R}, \mathbb{R}) \mid \phi_i(\theta) \ge \overline{d}_i, \ \text{for all } \theta \in \mathbb{R}\},$$

and $\underline{d}_i$, $\overline{d}_i$ are defined in equation 2.9. Consider a fixed $\Gamma^{\mathbf{e}}$. Let us define a mapping $P$, $P(\phi) = (P_1(\phi), P_2(\phi), \ldots, P_n(\phi))$, by

$$P_i(\phi)(t) = \int_0^{\infty} \Big[ \sum_{j=1}^{n} \alpha_{ij}(t - r)\, g_j(\phi_j(t - r)) + \sum_{j=1}^{n} \beta_{ij}(t - r)\, g_j(\phi_j(t - r - \tau_{ij}(t - r))) + I_i(t - r) \Big]\, e^{-\int_0^r \mu_i(t - s)\, ds}\, dr, \quad t \in \mathbb{R}. \tag{3.3}$$

It will be shown below that $P : \Gamma^{\mathbf{e}} \cap AP(\mathbb{R}^n) \to \Gamma^{\mathbf{e}} \cap AP(\mathbb{R}^n)$. We divide the following justifications into four steps.
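Before the four steps, the contraction behavior of the mapping $P$ of equation 3.3 can be observed in a discretized scalar sketch. The simplifications below are assumptions, not the letter's setting: constant $\mu \equiv 1$, a constant delay, a truncated $r$-integral, and piecewise-constant interpolation on a finite grid. Successive iterates $P^k(\phi)$ then approach each other in the sup norm on the interior of the grid.

```python
import math

# Discretized scalar version of the operator P of equation 3.3; mu = 1,
# constant delay 1, and a truncated r-integral are simplifying assumptions.
h_step, R = 0.05, 20.0            # quadrature step and truncation of r
T0, T1 = -40.0, 10.0              # phi is stored on the grid [T0, T1]
ts = [T0 + k * h_step for k in range(int((T1 - T0) / h_step) + 1)]

def eval_phi(phi, t):             # piecewise-constant lookup with clamping
    k = int(round((t - T0) / h_step))
    return phi[max(0, min(len(phi) - 1, k))]

def apply_P(phi):
    """One application of P: P(phi)(t) = int_0^R e^{-r} [...] dr."""
    out = []
    for t in ts:
        acc = 0.0
        for m in range(int(R / h_step)):
            r = m * h_step
            s = t - r
            integrand = ((2.0 + 0.1 * math.sin(s)) * math.tanh(eval_phi(phi, s))
                         + 0.5 * math.tanh(eval_phi(phi, s - 1.0))
                         + 0.1 * math.cos(math.sqrt(2.0) * s))
            acc += math.exp(-r) * integrand * h_step
        out.append(acc)
    return out

phi = [2.0] * len(ts)             # start from a constant function in Gamma^r
dists = []
for _ in range(4):
    nxt = apply_P(phi)
    half = len(ts) // 2           # measure on the right half, away from T0
    dists.append(max(abs(u - v) for u, v in zip(phi[half:], nxt[half:])))
    phi = nxt
```

The successive distances shrink quickly because in the saturated region the Lipschitz factor of the integrand in $\phi$ is small, which is the same mechanism the contraction mapping principle exploits.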

Step 1. Let $\phi \in \Gamma^{\mathbf{e}}$. For a component with $e_i = r$, we derive from equation 3.3 that

$$P_i(\phi)(t) = \int_0^{\infty} \Big[ \alpha_{ii}(t - r)\, g_i(\phi_i(t - r)) + \beta_{ii}(t - r)\, g_i(\phi_i(t - r - \tau_{ii}(t - r))) + \sum_{j=1, j \ne i}^{n} \alpha_{ij}(t - r)\, g_j(\phi_j(t - r)) + \sum_{j=1, j \ne i}^{n} \beta_{ij}(t - r)\, g_j(\phi_j(t - r - \tau_{ij}(t - r))) + I_i(t - r) \Big]\, e^{-\int_0^r \mu_i(t - s)\, ds}\, dr$$
$$\ge \int_0^{\infty} \Big[ \alpha_{ii}(t - r)\, g_i(\overline{d}_i) + \beta_{ii}(t - r)\, g_i(\overline{d}_i) + \sum_{j=1, j \ne i}^{n} \alpha_{ij}(t - r)\, g_j(\phi_j(t - r)) + \sum_{j=1, j \ne i}^{n} \beta_{ij}(t - r)\, g_j(\phi_j(t - r - \tau_{ij}(t - r))) + I_i(t - r) \Big]\, e^{-\int_0^r \mu_i(t - s)\, ds}\, dr$$
$$\ge \int_0^{\infty} \Big[ (\underline{\alpha}_{ii} + \underline{\beta}_{ii})\, g_i(\overline{d}_i) - \sum_{j=1, j \ne i}^{n} (\alpha^*_{ij} + \beta^*_{ij}) + \underline{I}_i \Big]\, e^{-\int_0^r \overline{\mu}_i\, ds}\, dr, \tag{3.4}$$


since $\alpha_{ii}(t), \beta_{ii}(t) \ge 0$, $t \in \mathbb{R}$, $|g_i(\cdot)| \le 1$, $\overline{d}_i > 0$, and $g_i(\phi_i(t - r)),\ g_i(\phi_i(t - r - \tau_{ii}(t - r))) \ge g_i(\overline{d}_i) > 0$, due to the monotonicity of the functions $g_i$ and conditions H1, H2, and H3. On the other hand, since $\underline{f}_i(\overline{d}_i) > 0$ and $\overline{d}_i > 0$, we have

$$\underline{f}_i(\overline{d}_i) = -\overline{\mu}_i \overline{d}_i + (\underline{\alpha}_{ii} + \underline{\beta}_{ii})\, g_i(\overline{d}_i) - \sum_{j=1, j \ne i}^{n} (\alpha^*_{ij} + \beta^*_{ij}) + \underline{I}_i > 0,$$

that is, $(\underline{\alpha}_{ii} + \underline{\beta}_{ii})\, g_i(\overline{d}_i) - \sum_{j=1, j \ne i}^{n} (\alpha^*_{ij} + \beta^*_{ij}) + \underline{I}_i > \overline{\mu}_i \overline{d}_i$. With this, it follows from equation 3.4 that

$$P_i(\phi)(t) > \overline{\mu}_i \overline{d}_i \int_0^{\infty} e^{-\int_0^r \overline{\mu}_i\, ds}\, dr = \overline{d}_i, \quad \text{for all } t \in \mathbb{R}.$$

Similar arguments can be employed to show that $P_i(\phi)(t) < \underline{d}_i$, for all $t \in \mathbb{R}$, for the components with $e_i = l$, $i = 1, 2, \ldots, n$. Therefore, $P : \Gamma^{\mathbf{e}} \to \Gamma^{\mathbf{e}}$.

Step 2. We justify that $P : AP(\mathbb{R}^n) \to AP(\mathbb{R}^n)$. It suffices to show that $\phi \in AP(\mathbb{R}^n)$ implies $P_i(\phi) \in AP(\mathbb{R})$ for all $i = 1, 2, \ldots, n$. Let $\varepsilon > 0$ be arbitrary. Choose $\varepsilon'$ such that

$$\varepsilon' = \min_{1 \le i \le n} \left\{ \frac{\varepsilon}{5} \cdot \frac{\underline{\mu}_i^2}{\sum_{k=1}^{n} (\alpha^*_{ik} + \beta^*_{ik}) + I^*_i},\ \frac{\varepsilon}{5} \cdot \frac{\underline{\mu}_i}{\sum_{k=1}^{n} (\alpha^*_{ik} + \beta^*_{ik})},\ \frac{\varepsilon}{5} \cdot \underline{\mu}_i \right\}.$$

By propositions 2 and 3, there exists a common $\varepsilon'$-almost period $p = p(\varepsilon')$ for $\mu_i(\cdot)$, $\alpha_{ij}(\cdot)$, $\beta_{ij}(\cdot)$, $I_i(\cdot)$, $\phi_j(\cdot)$, and $H_{ij}(\cdot)$, with $H_{ij}(t) := \phi_j(t - \tau_{ij}(t))$, such that for each $i = 1, 2, \ldots, n$,

$$|\mu_i(t + p - s) - \mu_i(t - s)| \le \varepsilon', \quad \sum_{j=1}^{n} |\alpha_{ij}(t + p - r) - \alpha_{ij}(t - r)| \le \varepsilon',$$
$$\sum_{j=1}^{n} |\beta_{ij}(t + p - r) - \beta_{ij}(t - r)| \le \varepsilon', \quad |I_i(t + p - r) - I_i(t - r)| \le \varepsilon', \tag{3.5}$$
$$|\phi_i(t + p - r) - \phi_i(t - r)| \le \varepsilon', \quad |\phi_i(t + p - r - \tau_{\ell i}(t + p - r)) - \phi_i(t - r - \tau_{\ell i}(t - r))| \le \varepsilon',$$

for all $\ell = 1, 2, \ldots, n$ and $r, s, t \in \mathbb{R}$. We compute that

|Pi (φ)(t + p) − Pi (φ)(t)|

≤∫ ∞

0

n∑j=1

|αi j (t + p − r ) − αi j (t − r )||g j (φ j (t + p − r ))|e− ∫ r0 µi (t+p−s)dsdr

+∫ ∞

0

n∑j=1

|βi j (t + p − r ) − βi j (t − r )||g j (φ j (t + p − r − τi j (t + p − r )))|

e− ∫ r0 µi (t+p−s)dsdr

+∫ ∞

0

n∑j=1

|αi j (t − r )||g j (φ j (t + p − r )) − g j (φ j (t − r ))|e− ∫ r0 µi (t+p−s)dsdr

+∫ ∞

0

n∑j=1

|βi j (t − r )||g j (φ j (t + p − r − τi j (t + p − r )))

− g j (φ j (t − r − τi j (t − r )))|e− ∫ r0 µi (t+p−s)dsdr

+∫ ∞

0

n∑j=1

|αi j (t − r )||g j (φ j (t − r ))||e− ∫ r0 µi (t+p−s)ds − e− ∫ r

0 µi (t−s)ds |dr

+∫ ∞

0

n∑j=1

|βi j (t − r )||g j (φ j (t − r − τi j (t − r )))|

×|e− ∫ r0 µi (t+p−s)ds − e− ∫ r

0 µi (t−s)ds |dr

+∫ ∞

0

n∑j=1

|Ii (t + p − r ) − Ii (t − r )|e− ∫ r0 µi (t+p−s)dsdr

+∫ ∞

0

n∑j=1

|Ii (t − r )||e− ∫ r0 µi (t+p−s)ds − e− ∫ r

0 µi (t−s)ds |dr (3.6)

for $\phi \in AP(\mathbb{R}^n)$. By the mean value theorem, we obtain the following:

$$\begin{aligned}
|g_j(\phi_j(t+p-r)) - g_j(\phi_j(t-r))| &= |\tanh(\phi_j(t+p-r)) - \tanh(\phi_j(t-r))| \\
&= |\mathrm{sech}^2(\theta_o)\,[\phi_j(t+p-r) - \phi_j(t-r)]| \\
&\le |\phi_j(t+p-r) - \phi_j(t-r)| \le \frac{\varepsilon\,\underline{\mu}_i}{5\sum_{k=1}^{n}(\alpha_{ik}^{*}+\beta_{ik}^{*})},
\end{aligned} \tag{3.7}$$


for all $i = 1, 2, \ldots, n$, where $\theta_o$ lies between $\phi_j(t+p-r)$ and $\phi_j(t-r)$, and

$$\begin{aligned}
&|g_j(\phi_j(t+p-r-\tau_{ij}(t+p-r))) - g_j(\phi_j(t-r-\tau_{ij}(t-r)))| \\
&\quad = |\tanh(\phi_j(t+p-r-\tau_{ij}(t+p-r))) - \tanh(\phi_j(t-r-\tau_{ij}(t-r)))| \\
&\quad = |\mathrm{sech}^2(\theta_\tau)\,[\phi_j(t+p-r-\tau_{ij}(t+p-r)) - \phi_j(t-r-\tau_{ij}(t-r))]| \\
&\quad \le |\phi_j(t+p-r-\tau_{ij}(t+p-r)) - \phi_j(t-r-\tau_{ij}(t-r))| \le \frac{\varepsilon\,\underline{\mu}_i}{5\sum_{k=1}^{n}(\alpha_{ik}^{*}+\beta_{ik}^{*})},
\end{aligned} \tag{3.8}$$

for all $i = 1, 2, \ldots, n$, where $\theta_\tau$ lies between $\phi_j(t+p-r-\tau_{ij}(t+p-r))$ and $\phi_j(t-r-\tau_{ij}(t-r))$. Similarly,

$$\begin{aligned}
\big|e^{-\int_0^r \mu_i(t+p-s)\,ds} - e^{-\int_0^r \mu_i(t-s)\,ds}\big|
&= \left| e^{\theta_e}\left(-\int_0^r \mu_i(t+p-s)\,ds + \int_0^r \mu_i(t-s)\,ds\right)\right| \\
&\le e^{-\int_0^r \underline{\mu}_i\,ds} \int_0^r |\mu_i(t+p-s) - \mu_i(t-s)|\,ds \\
&\le r e^{-\underline{\mu}_i r}\, \frac{\varepsilon\,\underline{\mu}_i^2}{5\big[\sum_{k=1}^{n}(\alpha_{ik}^{*}+\beta_{ik}^{*}) + I_i^{*}\big]},
\end{aligned} \tag{3.9}$$

for all $i = 1, 2, \ldots, n$, where $\theta_e$ lies between $-\int_0^r \mu_i(t+p-s)\,ds$ and $-\int_0^r \mu_i(t-s)\,ds$. By substituting equations 3.5 and 3.7 to 3.9 into 3.6, we have

$$\begin{aligned}
|P_i(\phi)(t+p) - P_i(\phi)(t)|
\le{}& \frac{\varepsilon\,\underline{\mu}_i}{5}\int_0^\infty e^{-\int_0^r \underline{\mu}_i\,ds}\,dr + \frac{\varepsilon\,\underline{\mu}_i}{5}\int_0^\infty e^{-\int_0^r \underline{\mu}_i\,ds}\,dr \\
&+ \sum_{j=1}^{n}\alpha_{ij}^{*}\,\frac{\varepsilon\,\underline{\mu}_i}{5\sum_{k=1}^{n}(\alpha_{ik}^{*}+\beta_{ik}^{*})}\int_0^\infty e^{-\int_0^r \underline{\mu}_i\,ds}\,dr
+ \sum_{j=1}^{n}\beta_{ij}^{*}\,\frac{\varepsilon\,\underline{\mu}_i}{5\sum_{k=1}^{n}(\alpha_{ik}^{*}+\beta_{ik}^{*})}\int_0^\infty e^{-\int_0^r \underline{\mu}_i\,ds}\,dr \\
&+ \sum_{j=1}^{n}\alpha_{ij}^{*}\,\frac{\varepsilon\,\underline{\mu}_i^2}{5\big[\sum_{k=1}^{n}(\alpha_{ik}^{*}+\beta_{ik}^{*}) + I_i^{*}\big]}\int_0^\infty r e^{-\underline{\mu}_i r}\,dr
+ \sum_{j=1}^{n}\beta_{ij}^{*}\,\frac{\varepsilon\,\underline{\mu}_i^2}{5\big[\sum_{k=1}^{n}(\alpha_{ik}^{*}+\beta_{ik}^{*}) + I_i^{*}\big]}\int_0^\infty r e^{-\underline{\mu}_i r}\,dr \\
&+ \frac{\varepsilon\,\underline{\mu}_i}{5}\int_0^\infty e^{-\int_0^r \underline{\mu}_i\,ds}\,dr
+ I_i^{*}\,\frac{\varepsilon\,\underline{\mu}_i^2}{5\big[\sum_{k=1}^{n}(\alpha_{ik}^{*}+\beta_{ik}^{*}) + I_i^{*}\big]}\int_0^\infty r e^{-\underline{\mu}_i r}\,dr \\
\le{}& \frac{\varepsilon}{5} + \frac{\varepsilon}{5} + \frac{\varepsilon}{5} + \frac{\varepsilon}{5} + \frac{\varepsilon}{5} = \varepsilon
\end{aligned} \tag{3.10}$$

for $\phi \in AP(\mathbb{R}^n)$. Therefore, the common $\varepsilon'$-almost period $p$ that satisfies equation 3.5 is in fact an $\varepsilon$-almost period of $P_i(\phi)$ for all $i = 1, 2, \ldots, n$.
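The collapse of the eight terms in equation 3.10 into five contributions of $\varepsilon/5$ rests on two elementary integrals; as a brief verification (a reading aid, using $\mu_i(t) \ge \underline{\mu}_i > 0$):

```latex
\int_0^\infty e^{-\underline{\mu}_i r}\,dr = \frac{1}{\underline{\mu}_i},
\qquad
\int_0^\infty r\, e^{-\underline{\mu}_i r}\,dr = \frac{1}{\underline{\mu}_i^{2}}.
```

For instance, the third and fourth terms combine to $\sum_{j}(\alpha_{ij}^{*}+\beta_{ij}^{*}) \cdot \frac{\varepsilon\,\underline{\mu}_i}{5\sum_{k}(\alpha_{ik}^{*}+\beta_{ik}^{*})} \cdot \frac{1}{\underline{\mu}_i} = \frac{\varepsilon}{5}$, and the fifth, sixth, and eighth terms combine analogously via the second integral.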

Step 3. We shall prove that $P$ is a contraction mapping. Notably, condition H3 yields

$$\mu_i(t) > \sum_{j=1}^{n} \eta_j\big[|\alpha_{ij}(t)| + |\beta_{ij}(t)|\big] \quad \text{for all } t \in \mathbb{R},\ i = 1, 2, \ldots, n. \tag{3.11}$$

For arbitrary $\varphi, \psi \in \Omega_e \cap AP(\mathbb{R}^n)$, we derive

$$\begin{aligned}
\|P(\varphi) - P(\psi)\| &= \max_{1\le i\le n}\,\sup_{t\in\mathbb{R}} |P_i(\varphi)(t) - P_i(\psi)(t)| \\
&= \max_{1\le i\le n}\,\sup_{t\in\mathbb{R}} \bigg| \int_0^\infty e^{-\int_0^r \mu_i(t-s)\,ds} \bigg\{ \sum_{j=1}^{n}\alpha_{ij}(t-r)\big[g_j(\varphi_j(t-r)) - g_j(\psi_j(t-r))\big] \\
&\qquad + \sum_{j=1}^{n}\beta_{ij}(t-r)\big[g_j(\varphi_j(t-r-\tau_{ij}(t-r))) - g_j(\psi_j(t-r-\tau_{ij}(t-r)))\big] \bigg\}\,dr \bigg| \\
&\le \max_{1\le i\le n}\,\sup_{t\in\mathbb{R}} \int_0^\infty e^{-\int_0^r \mu_i(t-s)\,ds} \bigg\{ \sum_{j=1}^{n}\eta_j |\alpha_{ij}(t-r)|\,|\varphi_j(t-r) - \psi_j(t-r)| \\
&\qquad + \sum_{j=1}^{n}\eta_j |\beta_{ij}(t-r)|\,|\varphi_j(t-r-\tau_{ij}(t-r)) - \psi_j(t-r-\tau_{ij}(t-r))| \bigg\}\,dr \\
&\le \max_{1\le i\le n}\,\sup_{t\in\mathbb{R}} \int_0^\infty \bigg\{ \sum_{j=1}^{n}\eta_j\big[|\alpha_{ij}(t-r)| + |\beta_{ij}(t-r)|\big] \bigg\}\, e^{-\int_0^r \mu_i(t-s)\,ds}\,dr\; \|\varphi - \psi\| \\
&= \kappa\,\|\varphi - \psi\|,
\end{aligned}$$


where

$$\kappa := \max_{1\le i\le n}\,\sup_{t\in\mathbb{R}} \int_0^\infty \bigg\{ \sum_{j=1}^{n}\eta_j\big[|\alpha_{ij}(t-r)| + |\beta_{ij}(t-r)|\big] \bigg\}\, e^{-\int_0^r \mu_i(t-s)\,ds}\,dr
< \max_{1\le i\le n}\,\sup_{t\in\mathbb{R}} \int_0^\infty \mu_i(t-r)\, e^{-\int_0^r \mu_i(t-s)\,ds}\,dr = 1,$$

due to equation 3.11 and the fact that the $\mu_i(t)$ are positive almost periodic functions of $t \in \mathbb{R}$. Accordingly, $P$ is a contraction mapping. Thus, $P$ has a unique fixed point $x = x(\cdot) \in \Omega_e \cap AP(\mathbb{R}^n)$, that is, $P(x) = x$, by the contraction mapping principle.
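The final equality in the bound for $\kappa$ can be verified directly: since $\frac{d}{dr}\int_0^r \mu_i(t-s)\,ds = \mu_i(t-r)$ and $\mu_i(t) \ge \underline{\mu}_i > 0$, the integrand is an exact derivative (a brief verification, added as a reading aid):

```latex
\int_0^\infty \mu_i(t-r)\, e^{-\int_0^r \mu_i(t-s)\,ds}\,dr
= \Big[-\,e^{-\int_0^r \mu_i(t-s)\,ds}\Big]_{r=0}^{r\to\infty} = 1,
\qquad\text{because}\quad \int_0^r \mu_i(t-s)\,ds \ge \underline{\mu}_i\, r \to \infty .
```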

Step 4. Finally, we show that $x = x(t) = (x_1(t), x_2(t), \ldots, x_n(t))$, defined for $t \in \mathbb{R}$, is a solution of equation 1.1. Indeed,

$$\begin{aligned}
\frac{d}{dt}x_i(t) &= \frac{d}{dt}P_i(x)(t) \\
&= \frac{d}{dt}\int_0^\infty \Big[\sum_{j=1}^{n}\alpha_{ij}(t-r)g_j(x_j(t-r)) + \sum_{j=1}^{n}\beta_{ij}(t-r)g_j(x_j(t-r-\tau_{ij}(t-r))) + I_i(t-r)\Big]\, e^{-\int_0^r \mu_i(t-s)\,ds}\,dr \\
&= \frac{d}{dt}\int_{-\infty}^{t} \Big[\sum_{j=1}^{n}\alpha_{ij}(r)g_j(x_j(r)) + \sum_{j=1}^{n}\beta_{ij}(r)g_j(x_j(r-\tau_{ij}(r))) + I_i(r)\Big]\, e^{-\int_r^t \mu_i(s)\,ds}\,dr \\
&= -\mu_i(t)x_i(t) + \sum_{j=1}^{n}\alpha_{ij}(t)g_j(x_j(t)) + \sum_{j=1}^{n}\beta_{ij}(t)g_j(x_j(t-\tau_{ij}(t))) + I_i(t).
\end{aligned}$$

Hence system 1.1 admits $2^n$ almost periodic solutions. In addition, each of the $2^n$ regions $\{(x_1, x_2, \ldots, x_n) \in \mathbb{R}^n : x_i \ge d_i \text{ or } x_i \le \underline{d}_i\}$ contains one of these almost periodic solutions. Moreover, according to theorem 2, these $2^n$ almost periodic solutions are exponentially stable. This completes the proof.

Theorems 2 and 3 yield corollary 1, and the proof of theorem 3 can be modified to justify corollary 2:

Corollary 1. Each $\Omega_e$ defined in equation 3.2 contains the basin of attraction for the almost periodic solution lying in $\Omega_e$.

Corollary 2. Let $\mu_i(t)$, $\alpha_{ij}(t)$, $\beta_{ij}(t)$, $\tau_{ij}(t)$, and $I_i(t)$, $i, j = 1, 2, \ldots, n$, be T-periodic functions defined for all $t \in \mathbb{R}$. Assume that conditions H1, H2, H3, and $\alpha_{ii}(t) > 0$, $\beta_{ii}(t) > 0$, $t \in \mathbb{R}$, $i = 1, 2, \ldots, n$, hold. Then there exist exactly $2^n$ exponentially stable T-periodic solutions for system 1.1.

4 Numerical Illustrations

In this section, we present three examples to illustrate our theory.

4.1 Example 1. Consider the two-dimensional system 1.1 with activation functions $g_1(\xi) = g_2(\xi) = \tanh(\xi)$:

$$\begin{aligned}
\frac{dx_1(t)}{dt} &= -\mu_1(t)x_1(t) + \alpha_{11}(t)g_1(x_1(t)) + \alpha_{12}(t)g_2(x_2(t)) \\
&\quad + \beta_{11}(t)g_1(x_1(t-\tau_{11}(t))) + \beta_{12}(t)g_2(x_2(t-\tau_{12}(t))) + I_1(t), \\
\frac{dx_2(t)}{dt} &= -\mu_2(t)x_2(t) + \alpha_{21}(t)g_1(x_1(t)) + \alpha_{22}(t)g_2(x_2(t)) \\
&\quad + \beta_{21}(t)g_1(x_1(t-\tau_{21}(t))) + \beta_{22}(t)g_2(x_2(t-\tau_{22}(t))) + I_2(t).
\end{aligned}$$

The parameters are set as follows:

$$\begin{aligned}
&\alpha_{11}(t) = 4 + 0.3\sin(t), && \alpha_{12}(t) = 0.6 + 0.1\cos(\pi t), \\
&\beta_{11}(t) = 3 + 0.2\sin(t), && \beta_{12}(t) = 0.4 + 0.1\cos(\pi t), \\
&\alpha_{21}(t) = 0.7 + 0.2\cos(t), && \alpha_{22}(t) = 5 + 0.5\sin(\pi t), \\
&\beta_{21}(t) = 0.8 + 0.2\cos(t), && \beta_{22}(t) = 7 + 0.5\sin(\pi t), \\
&\mu_1(t) = 1 + |0.1\sin(t)| + |0.2\sin(\pi t)|, && \mu_2(t) = 3 + |0.2\cos(t)| + |0.1\cos(\pi t)|, \\
&I_1(t) = \cos(t) + \cos(\pi t), && I_2(t) = \sin(t) + \sin(\sqrt{2}\,t), \\
&\tau_{11}(t) = \tau_{21}(t) = 10 + \sin(t) + \sin(\pi t), && \tau_{12}(t) = \tau_{22}(t) = 5 + \cos(\sqrt{2}\,t).
\end{aligned}$$

Using equation 2.1, we obtain, for all $t \in \mathbb{R}$,

$$\begin{aligned}
&3.7 = \underline{\alpha}_{11} \le \alpha_{11}(t) \le \bar{\alpha}_{11} = 4.3, && 0.5 = \underline{\alpha}_{12} \le \alpha_{12}(t) \le \bar{\alpha}_{12} = 0.7, \\
&2.8 = \underline{\beta}_{11} \le \beta_{11}(t) \le \bar{\beta}_{11} = 3.2, && 0.3 = \underline{\beta}_{12} \le \beta_{12}(t) \le \bar{\beta}_{12} = 0.5, \\
&0.5 = \underline{\alpha}_{21} \le \alpha_{21}(t) \le \bar{\alpha}_{21} = 0.9, && 4.5 = \underline{\alpha}_{22} \le \alpha_{22}(t) \le \bar{\alpha}_{22} = 5.5, \\
&0.6 = \underline{\beta}_{21} \le \beta_{21}(t) \le \bar{\beta}_{21} = 1, && 6.5 = \underline{\beta}_{22} \le \beta_{22}(t) \le \bar{\beta}_{22} = 7.5, \\
&1 = \underline{\mu}_1 \le \mu_1(t) \le \bar{\mu}_1 = 1.3, && 3 = \underline{\mu}_2 \le \mu_2(t) \le \bar{\mu}_2 = 3.3, \\
&-2 = \underline{I}_1 \le I_1(t) \le \bar{I}_1 = 2, && -2 = \underline{I}_2 \le I_2(t) \le \bar{I}_2 = 2.
\end{aligned}$$
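These bounds can be confirmed numerically. The following sketch (illustrative only, not part of the original text) samples each coefficient of example 1 over a time grid and checks containment in the stated interval:

```python
import math

# Coefficient functions of example 1 with their claimed (lower, upper) bounds.
coeffs = {
    "alpha11": (lambda t: 4 + 0.3 * math.sin(t), 3.7, 4.3),
    "alpha12": (lambda t: 0.6 + 0.1 * math.cos(math.pi * t), 0.5, 0.7),
    "beta11":  (lambda t: 3 + 0.2 * math.sin(t), 2.8, 3.2),
    "beta12":  (lambda t: 0.4 + 0.1 * math.cos(math.pi * t), 0.3, 0.5),
    "alpha21": (lambda t: 0.7 + 0.2 * math.cos(t), 0.5, 0.9),
    "alpha22": (lambda t: 5 + 0.5 * math.sin(math.pi * t), 4.5, 5.5),
    "beta21":  (lambda t: 0.8 + 0.2 * math.cos(t), 0.6, 1.0),
    "beta22":  (lambda t: 7 + 0.5 * math.sin(math.pi * t), 6.5, 7.5),
    "mu1": (lambda t: 1 + abs(0.1 * math.sin(t)) + abs(0.2 * math.sin(math.pi * t)), 1.0, 1.3),
    "mu2": (lambda t: 3 + abs(0.2 * math.cos(t)) + abs(0.1 * math.cos(math.pi * t)), 3.0, 3.3),
    "I1": (lambda t: math.cos(t) + math.cos(math.pi * t), -2.0, 2.0),
    "I2": (lambda t: math.sin(t) + math.sin(math.sqrt(2) * t), -2.0, 2.0),
}

# Sample t in [0, 100) and verify every bound holds.
ok = all(
    lo - 1e-9 <= f(0.01 * k) <= hi + 1e-9
    for f, lo, hi in coeffs.values()
    for k in range(10000)
)
print(ok)  # expected: True
```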

By equation 2.4, a direct computation gives

$$\bar{f}_1(x_1) = \begin{cases} -x_1 + 7.5\,g_1(x_1) + 3.2, & x_1 \ge 0 \\ -1.3x_1 + 6.5\,g_1(x_1) + 3.2, & x_1 < 0 \end{cases}
\qquad
\underline{f}_1(x_1) = \begin{cases} -1.3x_1 + 6.5\,g_1(x_1) - 3.2, & x_1 \ge 0 \\ -x_1 + 7.5\,g_1(x_1) - 3.2, & x_1 < 0 \end{cases}$$

Table 1: Local Extreme Points and Zeros of $\bar{f}_1, \underline{f}_1, \bar{f}_2, \underline{f}_2$.

$$\begin{aligned}
&\bar{a}_1 = -2.466999, && \bar{p}_1 = -1.443655, && \bar{b}_1 = -0.768372, && \bar{q}_1 = 1.665432, && \bar{c}_1 = 10.7, \\
&\underline{a}_1 = -10.7, && \underline{p}_1 = -1.665432, && \underline{b}_1 = 0.768372, && \underline{q}_1 = 1.443655, && \underline{c}_1 = 2.466999, \\
&\bar{a}_2 = -2.331622, && \bar{p}_2 = -1.209935, && \bar{b}_2 = -0.440329, && \bar{q}_2 = 1.362874, && \bar{c}_2 = 5.364478, \\
&\underline{a}_2 = -5.364478, && \underline{p}_2 = -1.362874, && \underline{b}_2 = 0.440329, && \underline{q}_2 = 1.209935, && \underline{c}_2 = 2.331622.
\end{aligned}$$

and

$$\bar{f}_2(x_2) = \begin{cases} -3x_2 + 13\,g_2(x_2) + 3.1, & x_2 \ge 0 \\ -3.3x_2 + 11\,g_2(x_2) + 3.1, & x_2 < 0 \end{cases}
\qquad
\underline{f}_2(x_2) = \begin{cases} -3.3x_2 + 11\,g_2(x_2) - 3.1, & x_2 \ge 0 \\ -3x_2 + 13\,g_2(x_2) - 3.1, & x_2 < 0. \end{cases}$$

Herein, the parameters satisfy the conditions in theorem 3.

Condition H1:

$$0 < \frac{\underline{\mu}_1}{\bar{\alpha}_{11}+\bar{\beta}_{11}} = \frac{1}{7.5}, \qquad \frac{\bar{\mu}_1}{\underline{\alpha}_{11}+\underline{\beta}_{11}} = \frac{1.3}{6.5} < 1,$$
$$0 < \frac{\underline{\mu}_2}{\bar{\alpha}_{22}+\bar{\beta}_{22}} = \frac{3}{13}, \qquad \frac{\bar{\mu}_2}{\underline{\alpha}_{22}+\underline{\beta}_{22}} = \frac{3.3}{11} < 1.$$

Condition H2:

$$\bar{f}_1(\bar{p}_1) = -0.737051 < 0, \qquad \underline{f}_1(\underline{q}_1) = 0.737051 > 0,$$
$$\bar{f}_2(\bar{p}_2) = -2.110474 < 0, \qquad \underline{f}_2(\underline{q}_2) = 2.110474 > 0.$$

Condition H3:

$$\inf_{t\in\mathbb{R}}\{\mu_1(t) - \eta_1(|\alpha_{11}(t)|+|\beta_{11}(t)|) - \eta_2(|\alpha_{12}(t)|+|\beta_{12}(t)|)\} \ge 0.106 > 0,$$
$$\inf_{t\in\mathbb{R}}\{\mu_2(t) - \eta_1(|\alpha_{21}(t)|+|\beta_{21}(t)|) - \eta_2(|\alpha_{22}(t)|+|\beta_{22}(t)|)\} \ge 1.25 > 0,$$

where $\eta_1 = 0.1 > g'(\bar{a}_1) = g'(\underline{c}_1) = 0.028381$ and $\eta_2 = 0.12 > g'(\bar{a}_2) = g'(\underline{c}_2) = 0.037041$. Local extreme points and zeros of $\bar{f}_1, \underline{f}_1, \bar{f}_2, \underline{f}_2$ are listed in Table 1.
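The numerical values quoted for conditions H1 and H3 can be reproduced from the coefficient bounds above. The sketch below (illustrative only) computes the conservative H3 lower bounds, which use the suprema of the coefficient amplitudes:

```python
import math

# Condition H1 for example 1: both ratios lie in (0, 1).
assert 0 < 1 / 7.5 < 1 and 0 < 1.3 / 6.5 < 1   # neuron 1
assert 0 < 3 / 13 < 1 and 0 < 3.3 / 11 < 1     # neuron 2

# Conservative H3 lower bounds:
#   inf mu_i - eta1 * sup(|a_i1| + |b_i1|) - eta2 * sup(|a_i2| + |b_i2|).
eta1, eta2 = 0.1, 0.12
h3_1 = 1.0 - eta1 * (4.3 + 3.2) - eta2 * (0.7 + 0.5)   # neuron 1
h3_2 = 3.0 - eta1 * (0.9 + 1.0) - eta2 * (5.5 + 7.5)   # neuron 2
print(round(h3_1, 3), round(h3_2, 3))  # expected: 0.106 1.25

# eta_1, eta_2 must dominate g'(x) = sech(x)^2 at the outer zeros of Table 1.
def dtanh(x):
    return 1.0 / math.cosh(x) ** 2

assert eta1 > dtanh(-2.466999) and eta2 > dtanh(-2.331622)
```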

The components $x_1(t)$ and $x_2(t)$ of evolutions from several constant initial values and from time-dependent initial values are depicted in Figures 3 and 4, respectively. The two-dimensional phase portrait is arranged in Figure 5.
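For readers who wish to reproduce trajectories like those in Figures 3 to 5, a fixed-step Euler scheme with a history buffer suffices. The sketch below is illustrative only: the paper does not specify its integrator, and the constant initial history chosen here is a hypothetical example.

```python
import math

def simulate(x0, T=60.0, dt=0.01):
    """Euler integration of the 2D delayed system of example 1,
    with constant initial history equal to x0 for t <= 0."""
    n = int(T / dt)
    hist1, hist2 = [x0[0]], [x0[1]]  # hist[k] approximates x(k*dt)

    def delayed(hist, t, tau):
        k = int(round((t - tau) / dt))
        return hist[k] if k >= 0 else hist[0]  # constant pre-history

    g = math.tanh
    for step in range(n):
        t = step * dt
        x1, x2 = hist1[-1], hist2[-1]
        tau_a = 10 + math.sin(t) + math.sin(math.pi * t)  # tau11 = tau21
        tau_b = 5 + math.cos(math.sqrt(2) * t)            # tau12 = tau22
        x1d, x2d = delayed(hist1, t, tau_a), delayed(hist2, t, tau_b)
        dx1 = (-(1 + abs(0.1 * math.sin(t)) + abs(0.2 * math.sin(math.pi * t))) * x1
               + (4 + 0.3 * math.sin(t)) * g(x1)
               + (0.6 + 0.1 * math.cos(math.pi * t)) * g(x2)
               + (3 + 0.2 * math.sin(t)) * g(x1d)
               + (0.4 + 0.1 * math.cos(math.pi * t)) * g(x2d)
               + math.cos(t) + math.cos(math.pi * t))
        dx2 = (-(3 + abs(0.2 * math.cos(t)) + abs(0.1 * math.cos(math.pi * t))) * x2
               + (0.7 + 0.2 * math.cos(t)) * g(x1)
               + (5 + 0.5 * math.sin(math.pi * t)) * g(x2)
               + (0.8 + 0.2 * math.cos(t)) * g(x1d)
               + (7 + 0.5 * math.sin(math.pi * t)) * g(x2d)
               + math.sin(t) + math.sin(math.sqrt(2) * t))
        hist1.append(x1 + dt * dx1)
        hist2.append(x2 + dt * dx2)
    return hist1, hist2

# A trajectory started well inside the positive quadrant stays there,
# consistent with the invariance of the corresponding region.
x1_traj, x2_traj = simulate((8.0, 8.0))
print(x1_traj[-1] > 1.0 and x2_traj[-1] > 1.0)  # expected: True
```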

Figure 3: Evolution of state variable x1(t) in example 1.

4.2 Example 2. Consider the two-dimensional system 1.1 with activation functions $g_1(\xi) = g_2(\xi) = \frac{1}{2}(|\xi+1| - |\xi-1|)$ and the following parameters:

$$\begin{aligned}
&\alpha_{11}(t) = 2 + 0.2\cos\Big(\frac{t}{2}\Big), && \alpha_{12}(t) = 1 + 0.1\sin\Big(\frac{\sqrt{2}\,t}{2}\Big), \\
&\beta_{11}(t) = 3 + 0.2\cos\Big(\frac{t}{2}\Big) + 0.1\sin\Big(\frac{\sqrt{2}\,t}{2}\Big), && \beta_{12}(t) = 1 + 0.2\sin\Big(\frac{\sqrt{2}\,t}{2}\Big), \\
&\alpha_{21}(t) = -1 + 0.2\sin\Big(\frac{t}{2}\Big), && \alpha_{22}(t) = 4 + 0.5\cos\Big(\frac{\sqrt{2}\,t}{2}\Big), \\
&\beta_{21}(t) = 2 + 0.3\sin\Big(\frac{t}{2}\Big), && \beta_{22}(t) = 5 + 0.3\sin\Big(\frac{t}{2}\Big) + 0.3\cos\Big(\frac{\sqrt{2}\,t}{2}\Big), \\
&\mu_1(t) = 1 + \Big|0.1\cos\Big(\frac{t}{2}\Big)\Big| + \Big|0.2\sin\Big(\frac{\sqrt{2}\,t}{2}\Big)\Big|, && \mu_2(t) = 2 + \Big|0.1\sin\Big(\frac{t}{2}\Big)\Big| + \Big|0.3\sin\Big(\frac{\sqrt{2}\,t}{2}\Big)\Big|,
\end{aligned}$$

Figure 4: Evolution of state variable x2(t) in example 1.

$$\begin{aligned}
&I_1(t) = 0.1\sin\Big(\frac{t}{2}\Big) + 0.2\cos\Big(\frac{\sqrt{2}\,t}{2}\Big), && I_2(t) = 1 + 0.1\cos\Big(\frac{t}{2}\Big) + 0.2\sin\Big(\frac{\sqrt{2}\,t}{2}\Big), \\
&\tau_{11}(t) = \tau_{21}(t) = 6 + \cos(t) + \sin(\sqrt{2}\,t), && \tau_{12}(t) = \tau_{22}(t) = 15 + \sin(t) + \cos(\pi t).
\end{aligned}$$

These parameters satisfy the conditions analogous to those of theorem 3, as adapted to the activation functions considered here. The evolutions of components $x_1(t)$ and $x_2(t)$ are depicted in Figures 6 and 7, respectively.
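The piecewise-linear activation of example 2 acts as the identity on $[-1, 1]$ and saturates at $\pm 1$ outside, with Lipschitz constant 1. A small illustrative check of these properties:

```python
def g(xi):
    # Piecewise-linear activation of example 2: 0.5 * (|xi + 1| - |xi - 1|).
    return 0.5 * (abs(xi + 1) - abs(xi - 1))

# Saturation outside [-1, 1].
print(g(5.0), g(-5.0))  # expected: 1.0 -1.0

# Identity on the linear regime, Lipschitz constant 1 on a sample grid.
pts = [-3 + 0.01 * k for k in range(601)]
lip_ok = all(abs(g(a) - g(b)) <= abs(a - b) + 1e-12 for a in pts for b in pts)
print(abs(g(0.3) - 0.3) < 1e-12, lip_ok)  # expected: True True
```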

4.3 Example 3. Consider the three-dimensional system 1.1 with activation functions $g_j(\xi) = g(\xi) := \frac{1}{1+e^{-\xi/\varepsilon}}$, $\varepsilon = 0.5 > 0$, $j = 1, 2, 3$. Let $\alpha_{ij} = 0$, $i, j = 1, 2, 3$; that is, consider the three-dimensional nonautonomous delayed Hopfield neural network

$$\begin{aligned}
\frac{dx_1(t)}{dt} = -\mu_1(t)x_1(t) &+ \beta_{11}(t)g_1(x_1(t-\tau_{11}(t))) + \beta_{12}(t)g_2(x_2(t-\tau_{12}(t))) \\
&+ \beta_{13}(t)g_3(x_3(t-\tau_{13}(t))) + I_1(t),
\end{aligned}$$

Figure 5: Phase portrait for example 1.

$$\begin{aligned}
\frac{dx_2(t)}{dt} = -\mu_2(t)x_2(t) &+ \beta_{21}(t)g_1(x_1(t-\tau_{21}(t))) + \beta_{22}(t)g_2(x_2(t-\tau_{22}(t))) \\
&+ \beta_{23}(t)g_3(x_3(t-\tau_{23}(t))) + I_2(t), \\
\frac{dx_3(t)}{dt} = -\mu_3(t)x_3(t) &+ \beta_{31}(t)g_1(x_1(t-\tau_{31}(t))) + \beta_{32}(t)g_2(x_2(t-\tau_{32}(t))) \\
&+ \beta_{33}(t)g_3(x_3(t-\tau_{33}(t))) + I_3(t).
\end{aligned}$$

The parameters are set as follows:

$$\begin{aligned}
&\mu_1(t) = 1 + 0.1\sin(t), && \mu_2(t) = 1 + 0.1\cos(t), && \mu_3(t) = 3 + 0.2\sin(t), \\
&\beta_{11}(t) = 19 + \sin(t), && \beta_{12}(t) = 1 + \cos(t), && \beta_{13}(t) = \sin(t), \\
&\beta_{21}(t) = 1 + \sin(t), && \beta_{22}(t) = 22 + \cos(t) + \sin(t), && \beta_{23}(t) = 1 + \sin(t), \\
&\beta_{31}(t) = \sin(t), && \beta_{32}(t) = 1 + \cos(t), && \beta_{33}(t) = 32 + 2\sin(t) + \cos(t), \\
&I_1(t) = -9 + \sin(t), && I_2(t) = -10 + \cos(t), && I_3(t) = -15 + \sin(t), \\
&\tau_{k1}(t) = 10 + \sin(t), && \tau_{k2}(t) = 20 + \cos(t), && \tau_{k3}(t) = 30 + \sin(t),
\end{aligned}$$

Figure 6: Evolution of state variable x1(t) in example 2.

Figure 7: Evolution of state variable x2(t) in example 2.

Figure 8: The dynamics in example 3.

where $k = 1, 2, 3$. These parameters satisfy the conditions analogous to those of corollary 2, as adapted to the activation functions considered here. Hence there are eight ($= 2^3$) stable periodic solutions. The dynamics of the system are illustrated in Figure 8.
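The sigmoid of example 3 has maximal slope $g'(0) = 1/(4\varepsilon) = 0.5$, the Lipschitz constant relevant to H3-type conditions for this activation. A quick illustrative check:

```python
import math

EPS = 0.5  # steepness parameter of example 3

def g(xi):
    # Sigmoid activation of example 3: 1 / (1 + exp(-xi / eps)).
    return 1.0 / (1.0 + math.exp(-xi / EPS))

def dg(xi):
    # Derivative g'(xi) = g(xi) * (1 - g(xi)) / eps.
    return g(xi) * (1.0 - g(xi)) / EPS

# The slope is maximal at the origin, where it equals 1 / (4 * eps) = 0.5.
slopes = [dg(-5 + 0.01 * k) for k in range(1001)]
print(round(max(slopes), 6))  # expected: 0.5
```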

5 Conclusion

We have established a new methodology for investigating multistability and multiperiodicity in general nonautonomous neural networks with delays. The presentation is formulated mainly for systems with activation function 2.3; all the conditions of the theorems can be modified to accommodate other typical activation functions. Our approach, which makes use of the geometric configuration of the network structure, is effective and illuminating. We expect this treatment to contribute toward exploring further fruitful dynamics in neural networks.

Acknowledgments

We are grateful to the reviewers for their comments and suggestions, which led to an improvement of this presentation. This work is partially supported by the National Science Council, the National Center of Theoretical Sciences, and the MOE ATU program of R.O.C. on Taiwan.


References

Bohr, H. (1947). Almost periodic functions. New York: Chelsea.

Cao, J. (2003). New results concerning exponential stability and periodic solutions of delayed cellular neural networks. Phys. Lett. A, 307, 136–147.

Cao, J., & Chen, T. (2004). Globally exponentially robust stability and periodicity of delayed neural networks. Chaos Soli. Frac., 22, 957–963.

Cao, J., Chen, A., & Huang, X. (2005). Almost periodic attractor of delayed neural networks with variable coefficients. Phys. Lett. A, 340, 104–120.

Chen, A., & Cao, J. (2003). Existence and attractivity of almost periodic solutions for cellular neural networks with distributed delays and variable coefficients. Appl. Math. Comp., 134, 125–140.

Chen, T., Lu, W., & Chen, G. (2005). Dynamical behaviors of a large class of general delayed neural networks. Neural Computation, 17, 949–968.

Cheng, C. Y., Lin, K. H., & Shih, C. W. (2006). Multistability in recurrent neural networks. SIAM J. Appl. Math., 66, 1301–1320.

Cheng, C. Y., Lin, K. H., & Shih, C. W. (2007). Multistability and convergence in delayed neural networks. Physica D, 225, 61–74.

Chua, L. O., & Yang, L. (1988). Cellular neural networks: Theory. IEEE Trans. Circuits Syst., 35, 1257–1272.

Corduneanu, C. (1968). Almost periodic functions. New York: Chelsea.

Fan, M., & Ye, D. (2005). Convergence dynamics and pseudo almost periodicity of a class of nonautonomous RFDEs with applications. J. Math. Anal. Appl., 309, 598–625.

Fink, A. M. (1974). Almost periodic differential equations. Berlin: Springer-Verlag.

Hahnloser, R. L. T. (1998). On the piecewise analysis of networks of linear threshold neurons. Neural Networks, 11, 691–697.

Huang, X., & Cao, J. (2003). Almost periodic solution of shunting inhibitory cellular neural networks with time-varying delay. Phys. Lett. A, 314, 222–231.

Huang, X., Cao, J., & Ho, W. C. (2006). Existence and attractivity of almost periodic solution for recurrent neural networks with unbounded delays and variable coefficients. Nonlinear Dynamics, 45, 337–351.

Izhikevich, E. M. (1999). Weakly connected quasi-periodic oscillators, FM interactions, and multiplexing in the brain. SIAM J. Appl. Math., 59, 2193–2223.

Juang, J., & Lin, S. S. (2000). Cellular neural networks: Mosaic pattern and spatial chaos. SIAM J. Appl. Math., 60, 891–915.

Kopell, N. (2000). We got rhythm: Dynamical systems of the nervous system. Notices Amer. Math. Soc., 47, 6–16.

Liu, B., & Huang, L. (2005). Existence and exponential stability of almost periodic solutions for cellular neural networks with time-varying delays. Phys. Lett. A, 341, 135–144.

Lu, W., & Chen, T. (2004). On periodic dynamical systems. Chin. Ann. Math. Ser. B, 25(4), 455–462.

Lu, W., & Chen, T. (2005). Global exponential stability of almost periodic solution for a large class of delayed dynamical systems. Science in China, Ser. A Math., 48, 1015–1026.

Mohamad, S. (2003). Convergence dynamics of delayed Hopfield-type neural networks under almost periodic stimuli. Acta Appl. Math., 76, 117–135.

Mohamad, S., & Gopalsamy, K. (2000). Extreme stability and almost periodicity in continuous and discrete neural models with finite delays. ANZIAM J., 44, 261–282.

Mohamad, S., & Gopalsamy, K. (2003). Exponential stability of continuous-time and discrete-time cellular neural networks with delays. Appl. Math. Comp., 135, 17–38.

Peng, J., Qiao, H., & Xu, Z. B. (2002). A new approach to stability of neural networks with time-varying delays. Neural Networks, 15, 95–103.

Roska, T., & Chua, L. O. (1992). Cellular neural networks with nonlinear and delay-type template. Int. J. Circuit Theory Appl., 20, 469–481.

Shih, C. W. (2000). Influence of boundary conditions on pattern formation and spatial chaos in lattice systems. SIAM J. Appl. Math., 61, 335–368.

Skarda, C. A., & Freeman, W. J. (1987). How brains make chaos in order to make sense of the world. Brain Behav. Sci., 10, 161–195.

Tu, F., & Liao, X. (2005). Estimation of exponential convergence rate and exponential stability for neural networks with time-varying delay. Chaos Soli. Frac., 26, 1499–1505.

van den Driessche, P., & Zou, X. (1998). Global attractivity in delayed Hopfield neural network models. SIAM J. Appl. Math., 58, 1878–1890.

Yau, Y., Freeman, W. J., Burke, B., & Yang, Q. (1991). Pattern recognition by distributed neural networks: An industrial application. Neural Networks, 4, 103–121.

Yoshizawa, T. (1975). Stability theory and the existence of periodic solutions and almost periodic solutions. Berlin: Springer-Verlag.

Zeng, Z., & Wang, J. (2006). Multiperiodicity and exponential attractivity evoked by periodic external inputs in delayed cellular neural networks. Neural Computation, 18, 848–870.

Zhang, Q., Wei, X., & Xu, J. (2005). On global exponential stability of nonautonomous delayed neural networks. Chaos Soli. Frac., 26, 965–970.

Zhao, H. (2004). Existence and global attractivity of almost periodic solution for cellular neural network with distributed delays. Appl. Math. Comp., 154, 683–695.

Zhou, J., Liu, Z., & Chen, G. (2004). Dynamics of periodic delayed neural networks. Neural Networks, 17, 87–101.

Received September 3, 2006; accepted October 29, 2006.