an algorithm for the n lyapunov exponents of an n ...rgencay/jarticles/pd-nlyapunov.pdfphysica d 59...

16
Physica D 59 (1992) 142-157 North-Holland An algorithm for the n Lyapunov exponents of an n-dimensional unknown dynamical system Ramazan Gencay a'~ and W. Davis Dechert b aDepartment of Economics, University of Windsor, 401 Sunset, Windsor, Ontario N9B 3P4, Canada bDepartment of Economics, University of Houston, 4800 Calhoun, Houston, TX 77204-5882, USA Received 7 June 1991 Revised manuscript received 26 May 1992 Accepted 26 May 1992 Communicated by J. Guckenheimer An algorithm for estimating Lyapunov exponents of an unknown dynamical system is designed. The algorithm estimates not only the largest but all Lyapunov exponents of the unknown system. The estimation is carried out by a multivariate feedforward network estimation technique. We focus our attention on deterministic as well as noisy system estimation. The performance of the algorithm is very satisfactory in the presence of noise as well as with limited number of observations. I. Introduction The Lyapunov exponents measure quantities which constitute the exponential divergence or convergence of nearby initial points in the phase space of a dynamical system. A positive Lyapunov exponent measures the average ex- ponential divergence of two nearby trajectories whereas a negative Lyapunov exponent mea- sures exponential convergence of two nearby trajectories. If a discrete nonlinear system is dissipative, a positive Lyapunov exponents quan- tifies a measure of chaos. The popular algorithms for calculating Lyapunov exponents are designed by [1,2]. The important contribution of the algorithm pre- sented in [1] is that it is the first attempt to calculate the Lyapunov exponents from observed time series. The major shortcomings of these algorithms are that the accuracy of the estimates 1R.G. gratefully acknowledges financial support under a grant from the University of Windsor Research Board. are sensitive to the number of the observations as well as the degree of the measurement or the system noise in the observations. The nature of these shortcomings are discussed in [1]. Our purpose is to design and implement a Jacobian algorithm for calculating the Lyapunov exponents such that it would achieve the follow- ing three objectives: (1) it calculates all n Lyapunov exponents of an n-dimensional unknown dynamical system from observations, (2) it achieves greater accuracy of Lyapunov exponent estimates on a relatively short length of observations and, (3) the accuracy of the estimates is robust to the system as well as measurement noise. The achievement of the first objective depends on how the algorithm is constructed. We use a result of [3] which shows that the n largest Lyapunov exponents of a diffeomorphism which is topologically conjugate to the data generating process are the n Lyapunov exponents of the data generating process. The second and third 0167-2789/92/$05.00 © 1992- Elsevier Science Publishers B.V. All rights reserved

Upload: letu

Post on 11-May-2018

215 views

Category:

Documents


1 download

TRANSCRIPT

Page 1: An algorithm for the n Lyapunov exponents of an n ...rgencay/jarticles/pd-nlyapunov.pdfPhysica D 59 (1992) 142-157 North-Holland An algorithm for the n Lyapunov exponents of an n-dimensional

Physica D 59 (1992) 142-157 North-Holland

An algorithm for the n Lyapunov exponents of an n-dimensional unknown dynamical system

R a m a z a n G e n c a y a'~ and W. Davis D e c h e r t b

aDepartment of Economics, University of Windsor, 401 Sunset, Windsor, Ontario N9B 3P4, Canada bDepartment of Economics, University of Houston, 4800 Calhoun, Houston, TX 77204-5882, USA

Received 7 June 1991 Revised manuscript received 26 May 1992 Accepted 26 May 1992 Communicated by J. Guckenheimer

An algorithm for estimating Lyapunov exponents of an unknown dynamical system is designed. The algorithm estimates not only the largest but all Lyapunov exponents of the unknown system. The estimation is carried out by a multivariate feedforward network estimation technique. We focus our attention on deterministic as well as noisy system estimation. The performance of the algorithm is very satisfactory in the presence of noise as well as with limited number of observations.

I. Introduction

The Lyapunov exponents measure quantities

which constitute the exponential divergence or

convergence of nearby initial points in the phase

space of a dynamical system. A positive

Lyapunov exponent measures the average ex-

ponential divergence of two nearby trajectories whereas a negative Lyapunov exponent mea-

sures exponential convergence of two nearby

trajectories. If a discrete nonlinear system is

dissipative, a positive Lyapunov exponents quan-

tifies a measure of chaos.

The popular algorithms for calculating Lyapunov exponents are designed by [1,2]. The important contribution of the algorithm pre- sented in [1] is that it is the first attempt to calculate the Lyapunov exponents from observed time series. The major shortcomings of these

algorithms are that the accuracy of the estimates

1R.G. gratefully acknowledges financial support under a grant from the University of Windsor Research Board.

are sensitive to the number of the observations

as well as the degree of the measurement or the

system noise in the observations. The nature of

these shortcomings are discussed in [1].

Our purpose is to design and implement a

Jacobian algorithm for calculating the Lyapunov

exponents such that it would achieve the follow- ing three objectives:

(1) it calculates all n Lyapunov exponents of

an n-dimensional u n k n o w n dynamical system from observations,

(2) it achieves greater accuracy of Lyapunov

exponent estimates on a relatively short length of

observations and,

(3) the accuracy of the estimates is robust to the system as well as measurement noise.

The achievement of the first objective depends on how the algorithm is constructed. We use a

result of [3] which shows that the n largest Lyapunov exponents of a diffeomorphism which is topologically conjugate to the data generating process are the n Lyapunov exponents of the data generating process. The second and third

0167-2789/92/$05.00 © 1992- Elsevier Science Publishers B.V. All rights reserved

Page 2: An algorithm for the n Lyapunov exponents of an n ...rgencay/jarticles/pd-nlyapunov.pdfPhysica D 59 (1992) 142-157 North-Holland An algorithm for the n Lyapunov exponents of an n-dimensional

R. Gencay, W. Davis Dechert / Algorithm for Lyapunov exponents 143

objectives will depend on the choice of the esti- mation technique. We employ multilayer feed- forward networks (MFNs) as a nonparametric #1 estimation technique. As shown by [4], these networks can asymptotically approximate a (dif- ferentiable) function and its derivatives to any degree of accuracy. In practice, as we will show in section 5, MFNs have the capability to ap- proximate a function and its derivatives with as few as a hundred observations. In [5], it is pointed out that other types of nonparametric estimation techniques such as projection pursuit may not provide the same level of accuracy as feedforward networks. The performance of vari- ous nonparametric estimation techniques such as local thin plate splines, radial basis functions, projection pursuit and feedforward networks are also presented in [5].

In section 2 we will briefly describe the defini- tion of Lyapunov exponents. In section 3, we will explain the algorithm studied in this paper. Section 4 will focus on multilayer feedforward networks. In section 5, simulation results with four chaotic maps will be given.

nents which are customarily ranked from largest to smallest:

~ 1 ~ 2 - - > ' ' ' - - > ~ n • (2)

Associated with each exponent, ] = 1, 2 . . . . , n, there are nested subspaces V j C •" of dimension n + 1 - j and with the property that

Aj = lim t -1 lnll(D f)x0vll (3)

for all v ~ V J \ V j+l. It is a consequence of Oseledec's Theorem [6], that the limit in eq. (3) exists for a broad class of functions #3. Additional properties of Lyapunov exponents and a formal definition are given in [10, p. 256]. Notice that for j _> 2 the subspaces V j are sets of Lebesgue measure zero, and so for almost all o E R" the limit in eq. (3) equals A 1. This is the basis for the computational algorithm of [1] which is a method for calculating the largest Lyapunov exponent.

Eq. (3) suggests a more direct approach to calculating the exponents. Since

(D ft)x ° = (D f)x,(D f)x,_,"" (D f)xo (4)

2. Lyapunov exponents

The Lyapunov exponents for a dynamical sys- tem, f : R"---~ •", with the trajectory,

xt+l=f(x,), t = 0 , 1 , 2 , . . . , (1)

are measures of the average rate of divergence or convergence of a typical trajectory #2. For an n-dimensional system as above, there are n expo-

• 1The te rms parametric and nonparametric are convention- ally used to distinguish those problems in which the regres- sion is known up to a finite-dimensional parameter from those in which the regression is known to be a member of some non-finite dimensional space of functions. For example, the regression might be known to be a cont inuous function.

~2The trajectory is also written in terms of the iterates o f f . With the convent ion that f o is the identity map, and f '+~ = fo f ' , then we also write x, = f'(xo). A trajectory is also called an orbit in the dynamical system literature.

all of the Lyapunov exponents can be calculated by evaluating the Jacobian of the function f along a trajectory, {x,}. In [11] the QR decom- position is proposed for extracting the eigen- values from (D f t ) x ° . T h e QR decomposition is one of many ways to calculate eigenvalues. One advantage of the QR decomposition is that it performs successive rescaling to keep magnitudes under control. It is also well studied and ex- tremely fast subroutines #4 are available for this type of computation. It is the method that we use here.

An attractor is a set of points towards which the trajectories of f converge. More precisely, A is an attractor if there is an open U C R" with

~3See [7,8,9] for precise conditions and the proofs of the theorem.

*4We use the IMSL routines L Q R R R and L Q E R R for our purposes .

Page 3: An algorithm for the n Lyapunov exponents of an n ...rgencay/jarticles/pd-nlyapunov.pdfPhysica D 59 (1992) 142-157 North-Holland An algorithm for the n Lyapunov exponents of an n-dimensional

144 R. Gencay, W. Davis Dechert Algorithm for Lyapunov exponents

A = O i f ( U ) , (5) t _ > 0

where U is the closure of U. The attractor A is said to be indecomposable if there is no proper subset A C A with f (A) = A. An attractor can be chaotic or ordinary (i.e., non-chaotic). There is more than one definition of chaotic attractors in the literature. In practice the presence of a posi- tive Lyapunov exponent is taken as a signal that the at tractor is chaotic.

where

m

Y,+I = (Yt+m, Yt+m 1 . . . . . Yt+l)" (10)

But notice that

m m Y,+I = J (Xt+l) = J m ( f ( x t ) ) . (11)

Hence from (9) and (11)

j m ( f ( x , ) ) = g ( j m ( x , ) ) . (12)

3. The algorithm

One rarely has the advantage of observing the state of the system, x t, let alone knowing the actual functional form, f , that generates the dynamics. The model that is widely used is the following: associated with the dynamical system in eq. (1) there is a measurement function h: Rn---~ R which generates observations,

Yt = h ( x , ) . (6)

It is assumed that all that is available to the researcher is the sequence {y,}. For notational purposes let

m

Yt = (Yt+m-l, Y,+m-2, " • • , Y,) • (7)

Under general conditions, it is shown in [12] that if the set 0 is a compact manifold then for ~5

m > _ 2 n + l

J m ( x ) = y ?

= ( h ( f ' - l ( x ) ) , h ( f m - 2 ( x ) ) , . . . , h(x))

(8)

is an embedding of 0 onto Jm(O). Generically, for m>--2n + 1 there exists a function g: R"---~ R" such that

m

Y,+~ = g ( Y T ) ,

"SHere 2n + 1 is the worst-case upper limit.

(9)

Under the assumption that jm is a homeomor- phism, f is topologically conjugate to g. This implies that certain dynamical properties .6 of f and g are the same. From eq. (9) the mapping g (which is to be estimated) may be taken to be

Yt+m-I 1

g i r t + m - 2 1 -----)

Yt+m-1, Y,+m-2 . . . . . Y , ) ]

J Yt+ m - 1

Yt+ l (13)

and this reduces to estimating

Y,+m = v ( y , + m - I , Yt+m-2 . . . . , Y,)" (14)

It is the unknown nature of v which requires a specification-free estimation technique such as feedforward networks. In [13] and [14], a trun- cated Taylor series is used to calculate the func- tion v. In [5], the feedforward networks are used to calculate the largest Lyapunov exponent.

The derivative of g is the matrix

(D g )y 7 =

U m U r n - 1 U r n - 2 • . . U 2 U 1

1 0 0 . . . 0 0

0 1 0 . . . 0 0

() 0 0 . . . 1 0 (15)

where

# 6 S u c h a s the correlation and fractal dimensions, as well as the Lyapunov exponents.

Page 4: An algorithm for the n Lyapunov exponents of an n ...rgencay/jarticles/pd-nlyapunov.pdfPhysica D 59 (1992) 142-157 North-Holland An algorithm for the n Lyapunov exponents of an n-dimensional

R. Gencay, W. Davis Dechert / Algorithm for Lyapunov exponents 145

Ov Ov (16) ~ - - ~ . . . , V l = -

Vm OYt+m_ 1 OYt

In [3] it was shown that the n largest Lyapunov exponents of g have the same values as the Lyapunov exponents of f. The statement of this

theorem is the following.

Theorem 3.1. Assume that M is a manifold of

dimension n, f : M - - > M and h : M - - * R are (at least) C 2. Define J " : M - - > R " by Jm(X)=

( h ( f m - l ( x ) ) , h ( f m - E ( x ) ) , . . . , h(x)) . Let /xl(x ) >-/ZE(X ) - > . . . ->/x,(x) be the eigenvalues of

(D J " ) ' ( D J ' ) x , and suppose that

inf /z,(x) > 0 , sup/x~(x) < oo. x • M x E M

Let A~-->Azr-- > . . . - > ) t . I be the Lyapunov expo-

nents of f and A1 g -> )t~ -> It2 g >- - - - -> Agm be the Lyapunov exponents of g where g : Jm(M)-----> Jm(M) and Jm( f ( x ) ) = g( jm(x) ) on

M. Then generically A { = )t g for i = 1, 2 , . . . , n.

This is the basis of our approach: estimate the

function g based on the data sequence {J"(x , )} , and calculate the Lyapunov exponents of g. As m increases there is a value between n and 2n + 1 at which the n largest exponents remain

constant and the remaining m - n exponents di- verge to -oo as the number of observations

increases.

4. Multilayer feedforward networks

In this paper , we use a single layer feedfor-

ward network,

VN,m(Z; /3, W, b) = ~ , /3ik wijz i + b i , (17) ] = 1 "=

be est imated are /3, w, b, and k is a known hidden unit activation function. Here L is the number of hidden units, 13 E R/~ represents hid- den to output unit weights, and w E R L×r" and b E R L represent input to hidden unit weights.

In [4] a set of conditions under which the single layer feedforward networks are dense in a

Sobolev space of functions is analysed. The im- por tant part of their result of which we make use is that both a function and its derivatives can be asymptotically approximated to any arbitrary de-

gree of accuracy with a single layer feedforward network. In [15] it is shown that feedforward networks can be used to consistently estimate both a function and its derivatives ~8. They show

that t he least squares estimates are consistent in Sobolev norm, provided that the number of hid- den units increases with the size of the data set. This would mean that larger number of data points would require larger number of hidden units to avoid overfitting in noisy environments.

For a single layer network, the least squares criterion for a data set of length T is

T - m - 1

L(/3, w, b ) = t = O

[Yt+m -- VN,m(yr~; /3, W, b)] z .

(18)

This is a straightforward multivariate minimiza- tion problem. We found that conjugant gradient routines given in [16] worked very well for this problem. In our work we used the logistic func- tion (which is a sigmoid ~9 function)

k(x) = /3 1 + e x p ( - w x - b ) (19)

as the hidden layer activation function. The posi- tion and the slope of the curve are determined

where z E •m is the input #7, the parameters to

'~7In comparison to a thin plate spline, the complexity of a feedforward network does not increase in m. The functional form remains as a simple sum of simple univariate functions. In contrast, the number of polynomial terms grows exponen- tially with m in a thin plate spline.

~'8A minimal property for any estimation procedure is that of consistency. A stochastic sequence {0r} is consistent for {O0} if the probability that {0r} exceeds any specified level of approximation error relative to {00} tends to zero as the sample size T tends to infinity.

~gk is a sigmoid function if k:•---*[0,/3], k(a)---*O as a---->-~, k(a)---*fl as a---~ ~ and k is. monotonic.

Page 5: An algorithm for the n Lyapunov exponents of an n ...rgencay/jarticles/pd-nlyapunov.pdfPhysica D 59 (1992) 142-157 North-Holland An algorithm for the n Lyapunov exponents of an n-dimensional

y ( t ) 2

1 . 5

1

0.5

0

- 0 . 5

k(x)-l/(l+exp(-5x-O.5))

I I I I - 1 i i i J i

- i -o.5 -o.8 -o.4 -0.2 o 0.2 0.4 o.8 o.5 x ( t )

k(x)=8/(l*exp(-O.O5x)) y( t )

4 . 7

4 . 2

3.7

3 .2 i i i i i L L I I

- I -o.8 -o.5 -o.4 -o.2 o 0.2 0.4 o.e o.8 x ( t )

k(x)=8/(1*exp(-150x)) y ( t )

1 0

8

5

4

2

-1.5

/ -

L _ . i i

-1 - 0 . 5 0 0 . 5 1

x ( t )

1.5

q(x) = 8/(l+exp(-2x)) -8/( l+exp(-2x*0.5)) y ( t )

1 , 2

1

0 . 8

0.5

0 . 4

0 . 2

0 -1.5

/ / '\ %

/ % i

j . °

S . J

i L i L i

-1 - 0 . 5 0 0.5 1

x l t ) 1.5

q(x)=8/(l+exp(-2x)) - 8/(1+exp(-20x+4)) y ( t )

4

- 2

f f ~ o . ~

3

2

1

0

- 1

- 3 .... J i I i i i i i i L i

- 1 . 2 - 1 - 0 . 8 - 0 . 6 - 0 , 4 - 0 , 2 0 0 . 2 0 . 4 0 . 6 0 . 5 1 1 .2

x ( t )

q(x)=8/(l*exp(-2x)) -8/(1*exp(-150x+4)) y ( t )

5

4

0

- 2 ~ ' ~ " ~ ' ~

v t - 4 i i i i i

-1.5 -1 -0.5 0 0.5 1 x ( t )

q(x)=8/(l+exp(-9x)) -8/(l+exp(9x)) y ( t )

1 0

1.5

8

8

4

2

0 -

- 2

- 4

- 5

- 8

- 1 0 -1.5

/

i I

/"

i ; i

-1 - 0 . 5 0

x ( t )

i

0.5

q(x)=12/(1+exp(-.OO2x))-6/(1*exp(-100x)) y( t )

7

8

5

4

3

2

1

0 - - -1.5

i i ' . . -

-0 .5 0 0 .5

x( t )

Page 6: An algorithm for the n Lyapunov exponents of an n ...rgencay/jarticles/pd-nlyapunov.pdfPhysica D 59 (1992) 142-157 North-Holland An algorithm for the n Lyapunov exponents of an n-dimensional

R. Gencay, W. Davis Dechert / Algorithm for Lyapunov exponents 147

by b and w and the height of the function is

de te rmined by /3 (see fig. 1 below). For small values of w the curve is more of a straight line

whereas for large values of w the function is more like a step function. These two specifica-

tions are illustrated in figs. 2 and 3. A combinat ion of activation functions can re-

sult in a bell-shaped curve. This can be done by the difference of two logistic functions. Choose b of the second function to displace it f rom the

first. Let b I = 0 and b 2 = - c < 0 . Choose w of the two functions to be the same so that they have the same orientation. Finally, choose/32 = /3~ so that the two functions have the same

height,

q(x ) = k l ( x ) - kz(x) , (20)

/3z (21) k l ( x ) = 1 + e x p ( - w l x ) '

and

/32 (22) k2(x) = 1 + exp(-WzX + c ) "

The function q(x ) is illustrated in fig. 4. We can

obtain skewed curves, sharp spikes, or bi-modal curves by using various combinations of the pa- rameters /3, w and b. Some examples of these combinat ions are given in figs. 5-8. These figures clearly show the flexibility of the sigmoid func-

tions.

5. Numer ica l results

In this section, we present the results on four

examples. (1) The Logistic map is the one-dimensional,

discrete t ime, unimodal map

x,+ 1 =/3x, (1 - x , ) . (23)

Fo r /3 E [0, 4] the state of the system maps itself onto itself in the closed interval [0, 1]. In the

interval /3 ~ (3.5699,4], the Logistic map ex- hibits deterministic chaos and contains the pres- ence of periodic as well as aperiodic cycles. We

set / 3 - - 4 and est imated the Logistic map with 100 observations. Four hidden units are used in a one-layer feedforward network. The results are given in figs. 9-11 along with the mean squared error of the approximations. The mean square error is calculated by

1 T-m-1 Z (Yt+m--U-N,m,t+m) 2 (24) DISeN'm = T - m t=0

where 6N.,,,t+, . is the fit f rom a single hidden layer feedforward network. The mean square error of the est imate of the map and its deriva- tive are repor ted in figs. 9-11 and the orders of these errors are less than 10 -3 . The quality of this fit was achieved with only 100 observations.

The value of the Lyapunov exponent f rom simu- lated data is 0.673 and the value of our estimate

of the Lyapunov exponent is 0.674. (2) The H6non map

xt+ l = 1 - 1.4x~ + Yt , Yt+l = 0.3x, (25)

is a widely used example, in spite of the fact that

it is not known whether or not the at tractor is indecomposable . We use it in part as a bench- mark to test our method against the results of others. The matrix of derivatives of the H6non map is

--2.4X t 0.3 10)" (26)

Figs. 1-8. First column, from top to bottom: fig. 1, fig. 2, fig. 3 and fig. 4. Second column, from top to bottom: fig. 5, fig. 6, fig. 7 and fig. 8.

Figs. 1-3. In the implementation of feedforward networks the Logistic function is used as a hidden unit activation function. The figures above demonstrate this function for various parameter values. Figs. 4-8. By the sum of two Logistic functions with different parameter values, bell-shaped curves, spikes and step functions can be obtained.

Page 7: An algorithm for the n Lyapunov exponents of an n ...rgencay/jarticles/pd-nlyapunov.pdfPhysica D 59 (1992) 142-157 North-Holland An algorithm for the n Lyapunov exponents of an n-dimensional

148 R. Gencay, W. Davis Dechert / Algorithm for Lyapunov exponents

x(t+l) 1.2

!

0.8

0.6

0.4

0.2

0 0

+.

++ ¢

/

**

I I I

0 . 2 0 . 4 0 . 6

x(t )

+++++ +

+ +

+ ÷

\

0.8 1 1.2

a c t u a l + e s t i m a t e d

M e a n S q u a r e E r r o r = 2 . 3 E - 5

Fig. 9. The comparison of the Logistic map and its estimate by a single hidden layer feedforward network with four hidden units and 100 observations.

Since the determinant of this matrix is constant, the Lyapunov exponents for this map satisfy

A1 + A2 = I n ( 0 . 3 ) ~ - 1 . 2 . ( 2 7 )

We estimated the Hdnon map and the first col-

umn of the matrix of partial derivatives based on 200 observations. In the estimation of this map, six hidden units are used in a single layer feed- forward network. We have compared the quality of the fit with seven and eight hidden units as well, but did not observe any improvement in

x(t+2) 1.2

1

0.8

0.6

0.4

0.2

0

- #

! \ +

+ +

* + -i +

+

I

0 0.2

\ +

+ I

0.4

+

+

+ +

+

+

0.6

x(t)

%

o.a l 1.2

a c t u a l + e s t i m a t e d

Mean S q u a r e E r r o r = 3 . 1 E - 5

+ +

+

+

Fig. 10. The comparison of the Logistic map and its estimate by a single hidden layer feedforward network with four hidden units and 100 observations.

Page 8: An algorithm for the n Lyapunov exponents of an n ...rgencay/jarticles/pd-nlyapunov.pdfPhysica D 59 (1992) 142-157 North-Holland An algorithm for the n Lyapunov exponents of an n-dimensional

R. Gencay, W. Davis Dechert / Algorithm for Lyapunov exponents 149

0

-2

-4

-6 0

++ ~'~+.~+ q,,

++ + +++++

++'~ ++m,, ~,~

I I

0.2 0.4

I I

0.6 0.8

x(t)

a c t u a l + e s t i m a t e d

Mean Square Error=l.3E-4

1.2

Fig. 11. The comparison of the derivative of the Logistic map and its est imate by a single hidden layer feedforward network with four hidden units and 100 observations.

x ( t ) - 1 . 5

- - 1 .

--0.5

0

0.5

"1

1.5 --1.5

+ ÷ + +++ + +

+ +

~++~ ~ + +++ ~t+~+ ++ ~¢

I I

-1 - 0 . 5 0

x(t÷l)

++ +

+ + ¢ - + + ÷ ÷++

+

+++*~

I I

0.5 1

1.5

a c t u a l + e s t i m a t e d

Mean Square Error=6.3E-5

Fig. 12. The comparison of the H6non map and its est imate by a single hidden layer feedforward network with six hidden units and 200 observations.

the quality of the fit #1°. The results are given in figs. 12-14, along with the mean square error of

"1°In [15] it is pointed out that the number of activation funct ions should grow with the size of the data set at just the right rate to ensure good approximation without overfitting. In s imulat ion exper iments , they also reported that the quality of the fit was not very sensitive in slight variations in this proportionali ty.

the approximations. It can be seen that the single hidden layer feedforward network approximates the H6non map as well as derivatives closely. For these pa ramete r values the Lyapunov expo- nents of the H6non map are 0.408 and -1 .620. The calculated Lyapunov exponents are 0.405 and - 1.625.

Page 9: An algorithm for the n Lyapunov exponents of an n ...rgencay/jarticles/pd-nlyapunov.pdfPhysica D 59 (1992) 142-157 North-Holland An algorithm for the n Lyapunov exponents of an n-dimensional

150 R. Gencay, W. Davis Dechert / Algorithm for Lyapunov exponents

-2

+

~+ ~,+ +

-4 I I I r

- 1 . 5 - 1 - 0 . 5 0 0.5 1 1.5

x(t)

a c t u a l + e s t i m a t e d

Mean Square Error=4.6E-3

Fig. 13. The comparison of the derivative of the H6non map with respect to x(t) and its estimate by a single hidden layer feedforward network with six hidden units and 200 observations.

0.6

0.5

0.4

0.3

0.2

0.1

_ ~ : : ~ : ~ :

0 i L i L i

- 1 . 5 - 1 - 0 . 5 0 0.5 1 1.5

y(t)

7 a c t u a l + e s t i m a t e d

J Mean Square Error=3.04E-4

Fig. 14. The comparison of the derivative of the H6non map with respect to y(t) and its estimate by a single hidden layer feedforward network with six hidden units and 200 observations.

Page 10: An algorithm for the n Lyapunov exponents of an n ...rgencay/jarticles/pd-nlyapunov.pdfPhysica D 59 (1992) 142-157 North-Holland An algorithm for the n Lyapunov exponents of an n-dimensional

R. Gencay, W. Davis Dechert I Algorithm for Lyapunov exponents 151

(3 ) T h e w e l l - k n o w n L o r e n z a t t r a c t o r is t h e

t h r e e - d i m e n s i o n a l , c o n t i n u o u s - t i m e s y s t e m ,

Yc = a ( x - y ) ,

= x ( b - z ) - y ,

= x y - c z , (28)

w h e r e a, b a n d c a r e p a r a m e t e r s a n d set to

a = 16.0 , b = 45 .92 a n d c = 4.0. F o r t h e s e p a r a m -

e t e r v a l u e s , t h e L y a p u n o v e x p o n e n t s A~, A 2 a n d

A 3 a r e 1 .50, 0 .0 a n d - 2 2 . 5 , r e s p e c t i v e l y a n d t h e y

sa t i s fy t h e r u l e that /~1 -~ /~2 "[- /~3 = - a - c - 1 #11.

I n t h e e s t i m a t i o n o f this m a p , 12 h i d d e n uni t s

a n d 1000 o b s e r v a t i o n s a r e u s e d in a s ing le h i d d e n

l a y e r f e e d f o r w a r d n e t w o r k . I n figs. 1 5 - 2 0 #12 t h e

L o r e n z s y s t e m a n d its e s t i m a t e s wi th M F N s a re

p l o t t e d . T h e e s t i m a t e d v a l u e s o f t h e L y a p u n o v

e x p o n e n t s a re : 1 .510, 0 .7 x 10 -4 a n d - 2 2 . 5 7 .

(4) A d i s c r e t e t i m e v e r s i o n o f t h e M a c k e y -

G l a s s e q u a t i o n ,

a x t _ s _ b X t _ l ) , x t = x t - 1 ,+ 1 . . k ( X t _ s ) c

(29)

where we used a = 0.2, b = 0.1, and c = 10.0 and s = 17. The actual Mackey-Glass delay-differen- tial equation is an infinite-dimensional dynamical system. This equation is chosen to show the performance of the feedforward network with higher dimensional systems and the resulting Lyapunov exponent estimates. The first four

*HA differential equation has one zero exponent corre- sponding to perturbations along an orbit. For the Lorenz system, volumes in R3 contract everywhere as exp(-a - c - 1)t when t---~ o0. Hence A~+A 2+A 3 = - a - c - I , by inte- grating the Lorenz system at a step size 0.02. It is reported in [13] that the Lyapunov exponents of the Lorenz system are A~ ~ 1.50, h 2 =0.0 and A 3 ~- -22.5.

*~2The estimates of the Lorenz system are plotted in the following way. Let x,, y,, and z, denote the components of the Lorenz system. In separate regressions each component of the system is regressed on its own three lags (e.g. x, is regressed on x,_~, x,_ 2 and x,_3). The fitted values from these three regressions are obtained• In the creation of the figures, fitted values of a component are plotted against the fitted values of the other component•

o~ <5

00 <5

r~ c5

to <5

tO ,:5

<5

t-3 <5

e4 <5

o

c) o

F i g u r e 1 5 : L o r e n z S y s t e m : z - x P r o j e c t i o n

1 0 0 0 O b s e r v a t i o n s

• . . . . . . . . ' .

--. . - .

: . "'..; : . ! . ' . ' . " : . . . ° . . . . . . ~ . - . .

" .:. . • ' , , i "q ". . " . ~ . . " "~ . . ',~.~#.. , . ' ~ ' " , ' ; d " ' " , ' . ~ . . ~ ~. ¢ ' "

• ,"-. O...':: " : . . ..-..:-z:-:~ ".':. " • ".. "-. :-.- " - : ; ..0.' :" .':':'

• ~ • ... .......:. ~;-..; : . . : - . . . • " . . " ' ~ - ' " " v . " ; . ' . - - " L . . ' " " " " • : - . . : . . . y ~.::,: ;~.'.;'f,, ~.<.'..:. •

. . .- . . . . :..... :1...~ :..,,(-- . . . . . . . . . . . ; ; : ; , " ~ "v , ' ! - - .O . . .

,,: J/ \ i : :

i i i i i i i i

- 0 . 4 - 0 . 3 - 0 . 2 - 0 . 1 - 0 . 0 0 . 1 0 . 2 0 . 3 0 . 4

Fig. 15. The z - x projection of the Lorenz system by fourth order Runge-Kutta integration at a sampling rate of 0.02 and with 1000 observations.

largest Lyapunov exponents of the simulated data are 0.0086, 0.001, -0 .0395 and -0.0505. In the estimation of this map 16 hidden units and 2000 observations are used in a single hidden layer feedforward network. The four largest esti- mated Lyapunov exponents of this map are 0.0091, 0.002, -0 .0403 and-0.0514. The fit and its comparison to the actual simulated data are given in figs. 21-28. The mean square errors are

o~ <5

00 ,5

r-- <5

to ,5

tO ¢5

(5

~o <5

04 <5

o

F i g u r e 1 6 : E s t i m a t e o f t h e L o r e n z S y s t e m : z - x P r o j e c t i o n

1 0 0 0 O b s e r v a t i o n s M e a n S q u a r e E r r o r ~ l . 9 E - 5

.-. . . .

, , o ". : . " . . . . . . .'. -. ; ~. •

. . . . " ' . . : - . .-.,...,:, ~... ." "¢. - . . . . - . a9 "3 '

• ," " C " . . ' : : " . ' . . " ": : . : : ' , £ ' t : . ' . : ' . "

• ~ " ".. :.'.. .:..r:'. ; .: .'.:.',"

" ' . "'..-," .'..': /: ' : , " . ;~ .~ . , ,~ ' . .Z "

: ' : : : : J iii . . .."~ i ........ " .

" , 3 , . ,J,'~ ~ "... . . : . . . / i~ ',, ..... -

i i i i i i i

- 0 . 4 - 0 . 5 - 0 . 2 - 0 . 1 - 0 . 0 0 . t 0 . 2 0 . 3 0 . 4

Fig. 16. The estimate of the z - x projection of the Lorenz system by a single hidden layer feedforward network with 12 hidden units and 1000 observations.

Page 11: An algorithm for the n Lyapunov exponents of an n ...rgencay/jarticles/pd-nlyapunov.pdfPhysica D 59 (1992) 142-157 North-Holland An algorithm for the n Lyapunov exponents of an n-dimensional

152 R. Gencay, W. Davis Dechert f Algorithm for Lyapunov exponents

F i g u r e 1 7 : L o r e n z S y s t e m : y - x Pro e c t i o n 1 0 0 0 O b s e r v a t i o n s

. : : ~ . " . : : ." . . . " , : ' . : , ~ ' . ~ : : . ' . .

• . . • . , . ". : ~..,',:ff.., .- : . .

, ' " ~ t ' ~ , : " ' : : : , "

. . . . . . . . ~ . . . ~ . ~ , / , : ~;, --,, , .. ,: : ......~:,...!~;:= (.. • .

, : ; ; :: , . " .. , , .~,.,;--y . . . . . . .

, ~. . . . : ' : . , ) "

.- : . . . , ; , ' ; : : : i . . . .

".:.;;v: ..

. ' : ' : :',

i i i i i i i i

- 0 . 4 - - 0 . 3 - - 0 . 2 - 0 1 - 0 . 0 0 . 1 0 . 2 0 . 3 0 . 4

Fig. 17. The y - x projection of the Lorenz system by fourth order Runge-Kutta integration at a sampling rate of 0.02 and with 1000 observations.

o~

O

co

d - •

r ~ • •

d . . . . , . ,

' 7 , " " .

, ii ' i : : : " : . N d3

o -' ' ' i " " " . ' . - " - : " :: . . . . . . . . . . . . . . ' . . . . : . '

" ": i : -: ~:i;::::::~ w

c~ .

. ..-.............:::~ :::: • ........... ,, "" .

l : n I I I l

o - 0 4 0 3 - 0 2 0 1 - 0 . 0 0.1 0 .2 0 .3 0 .4 y

Fig. 19. The z - y projection of the Lorenz system by fourth order Runge-Kutta integration at a sampling rate of 0.02 and with 1000 observations.

less than 10 -3 and the quality of the fit is satis- factory.

5. I. Deterministic chaotic data

We chose initial conditions 0.3 for the Logistic map; (0 .1 ,0 .0 ) for the H6non map; (0.0, 1.1, 0.0) for the Lorenz map and 0.1 for the first 17 lags of the Mackey-Glass delay equation. These

initial conditions are then iterated forward in time. For all maps, the first 200 observations are discarded to avoid transients. The number of observations used in the estimations were: 100 observations of x, from the Logistic map; 200 observations of x, from the H~non map; 1000 observations of x t from the Lorenz map; and 2000 observations of x, from the Mackey-Glass equation. The Lorenz map was numerically in-

d

d

• . . , . • . , . . .

• : .- . . . . . . . ~ , . , , ~ ' ~ ' : : : . . • . : . . . .

ml m

• . ,

. ,' . L...-:':.tY

" . , . ; ; ; ' . " .-

d n ~ n L i n n i

- 0 4 - 0 . 3 - 0 . 2 - - 0 . 1 - - 0 . 0 0 . 1 0 . 2 0 . 5 0 . 4

Fig. 18. The estimate of the y - x projection of the Lorenz system by a single hidden layer feedforward network with 12 hidden units and 1000 observations.

o~ F i g u r e 2 0 : E s t i m a t e of t h e L o r e n z S y s t e m : z y P r o j e c t i o n d 1 0 0 0 O b s e r v a t i o n s , Mean S q u a r e E r r o r = 1 . 8 9 E - 5

co d

r-- (5 ,. ., . . .

2: : i . • :. . . . . . . . " * , . - '.-....,,' . Z : : i ~ : . : - : -.. . . . : . , (5

- , , . , . . . . . . " ' / - " ~ : ".. :.::,.. . . . . . . . . : . ' . , d

• . ..:..- ". , . . . . . •

d " '-...-.-..: i ::::::;:" " ............ • " . . •

"7 n n i i i i

o - - 0 . 4 - - 0 . 3 - - 0 . 2 - -0 .1 - - 0 . 0 0.1 0 .2 0 . 3 0 . 4 y

Fig. 20. The estimate of the z - y projection of the Lorenz system by a single hidden layer feedforward network with 12 hidden units and 1000 observations.

Page 12: An algorithm for the n Lyapunov exponents of an n ...rgencay/jarticles/pd-nlyapunov.pdfPhysica D 59 (1992) 142-157 North-Holland An algorithm for the n Lyapunov exponents of an n-dimensional

R. Gencay, W. Davis Dechert / Algorithm for Lyapunov exponents 153

q

6

~" ,

. ,).~.-~

(~ i i 15 o 0 .0 0 .5 1.0 1.

×(t-U

Fig. 21. A discrete variant of the Mackey-Glass delay equa- tion with 2000 observations.

q

u3 6

,' ..,v-":..-'..!.: ...#:~- : :.. ~..;:. : '

~ j , j . - - . . ~ . - . m , ~ , . . - , ¢ . ~

~.:: : . , ' : . ' , :': . :.~ ,-., --::~:. , ' . :> ,'" "" . L : ~ k . . .

o

0.0 015 ' '5 1.0 1. x(t - 17)

Fig. 23. A discrete variant of the Mackey-Glass delay equa- tion with 2000 observations.

tegrated forward in time by using a fourth-order Runge-Kutta integration at a sampling rate of 0.02.

In table 1 we have summarized the estimates for the Lyapunov exponents of the Logistic and H6non maps and the Lorenz system together with their comparisons with the algorithms of [1,13]. In table 1, m refers to the embedding dimension• Our estimates are denoted by DG and the estimates in [1] and [13] are denoted by

WSSV and BBA, respectively• Two points worth noting are that for embedding dimension of 2-4 the DG estimates of the first two Lyapunov exponents of the H6non map are quite stable at approximately their true values. The spurious Lyapunov exponents at embedding dimensions 3 and 4 are quite unstable. On larger data sets, these spurious exponents converge to -oo and are more easily identified• We also capture the Lyapunov exponents of the Lorenz map quite

Q

,5

;...:,.

.... ".,' - ~ r . . ~ : ,, .-;Jr

: , : , ' : J

Mean Square E r r o r = 2 . 1 E - 5

q i '0 ' o 0 .0 0 5> 1. 1.5 × ( t - l )

Fig. 22. An estimate of the Mackey-Glass delay equation by a single hidden layer feedforward network with 16 hidden units and 2000 observations.

u3 .C

o

tt3 (5

• L ' : ' ° : ' ' I ' . . L ,

!~/ ".. ::. ~. ; - ;

,..~.-1." ~. , . : " - . . , - . : - ' : - ( : , • " ' • . : , , . - - " "L':~-

• ' : ; ~ ' : ' : " "i¢;~: •

~ "Z ":. , . ' , " : - i ' . - • . . . . . . - . . : . :

Mean Square E r r o r = 2 . 1 E - 5

Q '5 ° 0.0 015 1'.0 1.

x(t- 17)

Fig. 24• An estimate of the Mackey-Glass delay equation by a single hidden layer feedfnrward network with 16 hidden units and 2000 observations.

Page 13: An algorithm for the n Lyapunov exponents of an n ...rgencay/jarticles/pd-nlyapunov.pdfPhysica D 59 (1992) 142-157 North-Holland An algorithm for the n Lyapunov exponents of an n-dimensional

154 R. Gencay, W. Davis Dechert / Algorithm for Lyapunov exponents

. . ~ - ~ , ~ : ~ , - , : . . i " : , . "1 ",: ' ' ' Y:";:"

. . . . . :..: ~ . - : . . . . . "

• .. ~ . ' ~ - : ...,-.-..- . . . J" . ', , . .

" . '.: . . . . . . . . . : . " " , ~ , ; ; ' 3 " . . ' . '" , ~ ' . ' " • . . , , ; , . : - . . , . . . . . ~ . . . . , . . ' " . , . . . ' . ' : . t . Z ' ~ ~ . ~ - ~ . . , . ~ . ,

c~ i i i

/ 0 . 0 0 5 1 0 1 . 5 ~( t -1 )

Fig. 25. Derivative of the Mackey-Glass delay equation with respect to x(t - 17) and with 2000 observations.

\ ' , , ~

I O O 0 . 5 1 0 1 . 5 ,(t 17)

Fig. 27. Derivative of the Mackey-Glass delay equation with respect to x ( t - 17) and with 2000 observations•

accurately with 1000 observations. For larger data sets the spurious Lyapunov exponent at m = 4 converges to -0o. The major advantage of our algorithm is that the n Lyapunov exponents of an n-dimensional system can be calculated quite accurately with relatively small data sets. To compare the performance of our algorithm with the other algorithms, we calculated the

percentage error of the estimates by In , ~ m

pe = r'j=l I -r,j=, I jl x 100. (30)

It is worth comparing our results with the B B A algorithm results. Their method is to use a truncated Taylor series in estimating the

.-. q ' . " " ~ , ~ , . . . :>, : . . : . . . . . ;.: ":;;','," ! ?.'%'..

' :: .. .

. . . . . . / , . . % : , ; ~ . : • , . . ~ . . . . : . . .

• : ' , ' . . ' " . " :Y-':" .. ' . " . ' . ' : " . ~ . . l O . ". ~ "

o i , i

'T 0 .0 0 .5 1 0 1.5 ~(t ~)

Fig. 26. Estimate of the derivative of the Mackey-Olass delay equation with respect to x ( t - 17) by a single layer feedforward network with 16 hidden units and 2000 observa- tions.

,q ?

\

c~ i i r

7 0.0 0.5 1.0 1.5 • (t - 17)

Fig. 28. Estimate of the derivative of the Mackey-Glass delay equation with respect to x ( t - 17) by a single layer feedforward network with 16 hidden units and 2000 observa- tions.

Page 14: An algorithm for the n Lyapunov exponents of an n ...rgencay/jarticles/pd-nlyapunov.pdfPhysica D 59 (1992) 142-157 North-Holland An algorithm for the n Lyapunov exponents of an n-dimensional

R. Gencay, W. Davis Dechert / Algorithm for Lyapunov expo'nents 155

Table 1 Lyapunov exponent estimates. The BBA Algorithm estimates are obtained from [13, pp. 2790, 2799 and p. 2800]. The WSSV Algorithm estimates are obtained from [1, p.289]. The Lyapunov exponent of the Logistic map from the simulated data is 0.673.

Logistic map H6non map Lorenz system

DG DG BBA WSSV True DG BBA WSSV True

Number of observations 100 200 11 000 128 1000 50 000 8192

m = 1 0.674

m = 2 0.696 0.405 0.445 -5.607 -1.625 -1.609

m = 3 0.693 0.440 0.441 -3.623 -1.628 -0.893 -5.785 -3.797 -1.654

m = 4 0.675 0.440 0.442 -1.902 -1.646 -0.307 -2.411 -2.705 -0.804 -4.373 -2.997 -1.625

0.408 -2.240 -1.620

1.510 1.517 0.7 × 10 -4 -0.008

-22.57 -23.09

1.524 1.538 0.003 -0.070

-23.89 -22.15 -156.9 -108.2

2.16 1.51 0.00 0.00

-32.4 -22.5

dynamics #13. Since the H6non map is in fact a polynomial of degree two, their methods should have a strong advantage in this case. However, they reported that when they used a quadratic polynomial in two variables, the estimated Lyapunov exponents were

A 1 = 0.44707, A 2 = -1.5096. (31)

These estimates were based on 11 000 observa- tions. The percentage error of these estimates is 3.52%, compared with the percentage error of our estimates 0.0986% (which is based on only 200 observations). The percentage error of the estimated Lyapunov exponent from the WSSV algorithm is 40.2%. For the Lorenz attractor, the percentage error from our algorithm is 0.29%, the BBA algorithm's percentage error is 0.45% and the percentage error from the WSSV al- gorithm is 43.9%. We also calculated the Lyapunov exponents of a variant of the Mackey- Glass delay equation (29). The four largest esti- mated Lyapunov exponents of this map are

~'~3In [5] a Jacobian technique is used to estimate the largest Lyapunov exponent by various nonparametric estima- tion techniques including single hidden layer feedforward networks.

0.0091, 0.002, -0.0403 and -0.0514. The four largest Lyapunov exponents of the simulated data are 0.0086, 0.001, -0.0395 and -0.0505. The percentage error from these estimates is 3.2%.

5.2. Noisy chaotic data

An important issue is the robustness of these estimates to both measurement noise and to system noise. Table 2 summarizes a Monte Carlo simulation study with the H6non map in the presence of two types of noise.

Table 2 H6non map with noise. Number of observations: 200.

Type of noise % A~ A 2

Measurement noise 0.01 0.3899 -1.7251 (0.0573) (0.6114)

0.05 0.3612 -1.7961 (0.0633) (0.6114)

0.10 0.3591 -2.2514 (0.0909) (0.5369)

System noise 0.005 0.3398 - 1.8109 (0.0879) (0.6869)

0.007 0.2937 - 2.1902 (0.0925) (0.7522)

Page 15: An algorithm for the n Lyapunov exponents of an n ...rgencay/jarticles/pd-nlyapunov.pdfPhysica D 59 (1992) 142-157 North-Holland An algorithm for the n Lyapunov exponents of an n-dimensional

156 R. Gencay, W. Davis Dechert / Algorithm for Lyapunov exponents

For measurement noise we use the system equation (25) along with the observation equation

z t = x, + o'et , (32)

where {e,} is an independent and identically distributed normal random variable with zero mean and standard deviation t r . If tr x is the standard deviation of the system data, then the signal-to-noise ratio in the data {z,} is

°x (33) O'O"

The estimated Lyapunov exponents are given in terms of the reciprocal of the signal-to-noise ratio.

For system noise we used the system model

Xt+l = f ( x t ) Jl- orEt , Ye+l = Xt (34)

and again the results are reported in terms of the reciprocal of the signal-to-noise ratio. The maxi- mum value of 0.007 for the system noise is due to the fact that for any larger value of the noise term the system gets knocked out of the basin of attraction and diverges. Each entry in table 2 is an average of 100 simulations and the numbers in parentheses are the standard deviations. The existence of noise can have two effects on the estimates of the Lyapunov exponents. First, the noise itself reduces the extent to which the de- terministic relationship can be uncovered. Sec- ondly, as pointed out in [17] the QR algorithm may deteriorate due to noise. With both types of noise, the distortion introduced to the Lyapunov exponents is rather minimal keeping in mind that the 0.007 level of system noise is the upper limit which can be imposed on the data. The estimates are also quite stable as it can be detected from the estimated standard deviations.

5.3. Detection of spurious Lyapunov exponents

Another important issue is the performance of

the algorithm in embedding dimensions higher than the dimensionality of the dynamical system under study. The Takens embedding theorem gives sufficient conditions for the minimum em- bedding dimension which recreates the local dynamics of the underlying attractor. However, this theorem does not tell us what would be the contribution of the embedding dimensions which are sufficiently large relative to the dimensionali- ty of the attractor. Ideally, the algorithm should assign a zero derivative vector to a vector which does not explain the attractor. Therefore, in embedding dimensions which are large relative to the dimension of the attractor, it is expected that the Lyapunov exponents of these additional dimensions are minus infinity. With the sample size studied here, the derivative estimates of these additional dimensions are approximately 10 3. With this level of accuracy the QR al- gorithm calculates spurious Lyapunov exponents values which are negative but far larger than minus infinity. As the sample size gets larger, the values of these spurious Lyapunov exponents get smaller negative numbers. Therefore, for a given sample size one method of detecting spurious Lyapunov exponents is to observe whether the true Lyapunov exponents are invariant in their magnitude as the embedding dimension is in- creased. This situation is clearly demonstrated in table 1 both in the Hrnon map as well as Lorenz system Lyapunov exponents estimates.

6. Conclusion

An algorithm of estimating Lyapunov expo- nents of an unknown dynamical system is de- signed. The algorithm estimates all Lyapunov exponents of an unknown system accurately. This is due to the property that the largest Lyapunov exponents of the constructed diffeo- morphism are the Lyapunov exponents of the unknown system under study. We focused our attention on deterministic as well as noisy system estimation. The performance of the algorithm is

Page 16: An algorithm for the n Lyapunov exponents of an n ...rgencay/jarticles/pd-nlyapunov.pdfPhysica D 59 (1992) 142-157 North-Holland An algorithm for the n Lyapunov exponents of an n-dimensional

R. Gencay, W. Davis Dechert / Algorithm for Lyapunov exponents 157

ve ry sa t i s fac to ry in the p re sence of noise as well

as wi th l imi ted n u m b e r of obse rva t ions . The

sa t i s fac to ry p e r f o r m a n c e of the a lgo r i thm with

l imi t ed n u m b e r of obse rva t ions ra ises the ques-

t ion tha t when this t ype of p e r f o r m a n c e might

fail. W i t h the t ypes of e x a m p l e s we s tud ied he re ,

t he n u m b e r of obse rva t i ons were long enough to

t r ave l the en t i r e a t t r ac to r . T h e r e f o r e , we have

suff ic ient i n f o r m a t i o n to ca lcu la te the L y a p u n o v

e x p o n e n t s . F o r s o m e d y n a m i c a l sys tems, it might

t a k e a much l a rge r n u m b e r of da t a po in ts to

t r ave l the en t i r e a t t r ac to r and 1000 obse rva t ions

m a y cha rac t e r i z e only a p o r t i o n of the a t t r ac to r .

In t hose cases , it is b e t t e r to use the larges t

n u m b e r o f obse rva t i ons ava i lab le .

References

[1] A. Wolf, B. Swift, J. Swinney and J. Vastano, De- termining Lyapunov exponents from a time series, Physica D 16 (1985) 285-317.

[2] J. Kurths and H. Herzel, An Attractor in a Solar Time Series, Physica D 25 (1987), 165-172.

[3] W.D. Dechert and R. Gencay, Estimating Lyapunov exponents with multilayer feedforward network learn- ing, Department of Economics, University of Houston (1990).

[4] K. Hornik, M. Stinchcombe and H. White, Universal approximation of an unknown mapping and its deriva- tives using multilayer feedforward networks, Neural Networks 3 (1990) 535-549.

[5] D. McCaffrey, S. Ellner, A.R. Gallant and D. Nychka, Estimating Lyapunov exponents with nonparametric re-

gression, North Carolina University, Institute of Statis- tics Mimeo Series No: 1977R (1990).

[6] V.I. Oseledec, A multiplicative ergodic theorem. Liapunov characteristic numbers for a dynamical sys- tem, Trans. Moscow Math. Soc. 19 (1968), 197-221.

[7] M.S. Raghunathan, A proof of Oseledec's multiplicative ergodic theorem, Israel J. Math. 32 (1979) 356-362.

[8] D. Ruelle, Ergodic theory of differentiable dynamical systems, Publ. Math. IHES 50 (1979) 27-58.

[9] J.E. Cohen, J. Kesten and C.M. Newman, eds. Random matrices and their application, Contemporary Mathe- matics, Vol. 50 (American Mathematical Society, Provi- dence, RI, 1986).

[10] J. Guckenheimer and P. Holmes, Nonlinear Oscilla- tions, Dynamical Systems and Bifurcations of Vector Fields (Springer, Berlin, 1983).

[11] J.P. Eckmann and D. Ruelle, Ergodic theory of strange attractors, Rev. Mod. Phys. 57 (1985) 617-656.

[12] F. Takens, Detecting strange attractors in turbulence, in: Dynamical Systems and Turbulence (Warwick, 1980), eds. D. Rand and I. Young (Springer, Berlin, 1981) pp. 366-381.

[13] R. Brown, P. Bryant and H.D.I. Abarbanel, Computing the Lyapunov spectrum of a Dynamical system from an observed time series, Phys. Rev A 43 (1991) 2787-2806.

[14] H.D.I. Abarbanel, R. Brown and M.B. Kennel, Lyapunov Exponents in Chaotic Systems: Their Impor- tance and Their Evaluation using Observed Data, Insti- tute for Nonlinear Science, University of California, San Diego, 1991.

[15] A.R. Gallant and H. White, On learning the derivatives of an unknown mapping with muitilayer feedforward networks, Neural Networks 5 (1992) 129-138.

[16] W.H. Press, B.E Flannery, S.A. Teukolsky and W.T. Vetterling, Numerical Recipes, The Art of Scientific Computing (Cambridge U.E, Cambridge, 1986).

[17] S. Eliner, A.R. Gallant, D. McCaffrey and D. Nychka, Convergence rates and data requirements for Jacobian- based estimates of Lyapunov exponents from data, Phys. Lett. A 153 (1991) 357-363.