# Computing Lyapunov exponents of continuous dynamical systems: method of Lyapunov vectors

Chaos, Solitons and Fractals 23 (2005) 1879–1892 www.elsevier.com/locate/chaos

Jia Lu a,\*, Guolai Yang b, Hyounkyun Oh c, Albert C.J. Luo d

a Department of Mechanical and Industrial Engineering, Center for Computer Aided Design, University of Iowa, Iowa City, IA 52242, USA
b Department of Mechanical Design and Automation, Nanjing University of Science and Technology, Nanjing 210094, PR China
c Applied Mathematical and Computational Sciences, University of Iowa, Iowa City, IA 52242, USA
d Department of Mechanical and Industrial Engineering, Southern Illinois University Edwardsville, Edwardsville, IL 62026-1805, USA

Accepted 5 July 2004

Abstract

This paper proposes a new method for computing the Lyapunov characteristic exponents for continuous dynamical systems. Algorithmic development is discussed and implementation details are outlined. Numerical examples illustrating the effectiveness and accuracy of the method are presented.
© 2004 Elsevier Ltd. All rights reserved.

## 1. Introduction

Lyapunov exponents are asymptotic measures characterizing the average rate of growth (or shrinkage) of small perturbations to the solutions of a dynamical system. The concept was introduced by Lyapunov in his study of the stability of non-stationary solutions of ordinary differential equations, and has been widely employed in the study of dynamical systems ever since. Lyapunov exponents provide quantitative measures of the sensitivity of a dynamical system's response to small changes in initial conditions. One feature of chaos is sensitive dependence on initial conditions: for a chaotic dynamical system at least one Lyapunov exponent must be positive, whereas for non-chaotic systems all Lyapunov exponents are non-positive. The presence of a positive Lyapunov exponent has therefore often been taken as a signature of chaotic motion [1].

There are several methods for finding the numerical values of Lyapunov exponents. The simplest is perhaps Wolf's algorithm [2], the idea of which is to trace the evolution of an initial sphere of small perturbations to a nominal trajectory. To first order, the initial sphere of perturbations evolves as a time-varying ellipsoid, and the Lyapunov exponents are the average rates of logarithmic stretch of the ellipsoidal principal axes. In Wolf's code, the ellipsoidal axes (referred to as the Lyapunov vectors) are approximated by the set of vectors obtained by applying Gram–Schmidt orthogonalization to the fundamental solution of the linearized system. Numerically, orthogonalization plays an indispensable role in computing Lyapunov exponents; without it, the vectors in the fundamental solution tend to align with the direction of largest growth. It has been demonstrated that, for continuous dynamical systems, properly implemented

\* Corresponding author. Tel.: +1 319 335 6405; fax: +1 319 335 5669. E-mail address: jia-lu@uiowa.edu (J. Lu).

0960-0779/$ - see front matter © 2004 Elsevier Ltd. All rights reserved. doi:10.1016/j.chaos.2004.07.034


continuous orthogonalization methods appear superior to Wolf's algorithm [3,4]. The QR method is an important tool of continuous orthogonalization. Instead of solving for the fundamental solution directly, one performs a QR decomposition of the fundamental solution and derives differential equations for the rotation factor Q; the Lyapunov exponents can then be computed from the history of Q. The advantage of continuous orthogonalization lies in that one can control the accuracy of the orthogonality between Lyapunov vectors. Dieci [5] showed that the orthogonality of the rotation tensor Q in the QR method can be exactly preserved using any symplectic Runge–Kutta integrator. For small systems, Udwadia [6] proposed the use of the exponential map Q = exp(S), with which the differential equation for Q is recast into a differential equation for S. Numerical integration is conducted in the parametric space and transformed back to find Q; thus, the method exactly preserves the orthogonality of Q. However, Udwadia's method relies on closed-form representations of the exponential map. In [6], the method is implemented in two and three dimensions (and for systems that can be reduced to the two-dimensional case), where the exponential map is readily available and relatively simple. General closed-form formulae for the exponential map in higher dimensions, although available [7], can be too complicated to permit an effective numerical algorithm. Dieci's unitary integration technique, on the other hand, does not apply to partial solutions in which only a few Lyapunov exponents are computed [8]: in this case the QR transformation leads to a weakly skew system, and no RK scheme with constant coefficients can preserve the orthogonality of Q. For details, see [8].

This paper explores an alternative approach for computing the Lyapunov exponents. Akin to the QR method, the idea is to trace the evolution of a set of vectors that remain orthogonal continuously. The key difference lies in that the lengths of the Lyapunov vectors are not constrained. By relaxing the length requirement, we are able to establish a set of recursive linear differential equations for the Lyapunov vectors, and to derive numerical algorithms that automatically preserve the orthogonality of some of the vectors. In particular, the first two Lyapunov vectors are automatically orthogonal. This, among other unique features, makes the algorithm particularly suitable for computing the first few Lyapunov exponents. The development is inspired in part by a recent work of Bartler [9], who used the idea of tracking Lyapunov vectors. However, the present contribution differs from Bartler's work in several key aspects, most notably in the identification of the Lyapunov vectors and in the numerical treatment of orthogonality.

In Section 2, we briefly examine Wolf's algorithm and the QR method to motivate the proposed method, which is presented in Section 3. Numerical examples are given in Section 4. Throughout this paper, matrices and vectors are denoted by boldface upper- and lower-case letters, respectively. A superposed dot denotes the time derivative. The inner product between vectors $a$ and $b$ is denoted by $a \cdot b$. The operator $\otimes$ denotes the tensor product, defined by $(a \otimes b)c = a(b \cdot c)$ for any vectors $a$, $b$ and $c$. The norm of a vector is denoted by $\|\cdot\|$, with $\|a\| = \sqrt{a \cdot a}$.

## 2. Methods for computing Lyapunov exponents

Consider an $m$-dimensional dynamical system

$$\dot{x} = f(x, t), \qquad t \ge 0, \quad x \in \mathbb{R}^m, \quad x(0) = x_0, \tag{1}$$

where $f : \mathbb{R}^m \times \mathbb{R} \mapsto \mathbb{R}^m$ is a smooth vector function. Small perturbations to a trajectory are governed by the linearized equation

$$\dot{y} = \frac{\partial f}{\partial x}(x(t), t)\, y =: A(t)\, y. \tag{2}$$

Here $A \in \mathbb{R}^{m \times m}$ is the Jacobian tensor. The solution of (2) with a given initial perturbation $y(0)$ can be expressed as $y(t) = Y(t)y(0)$, where $Y(t)$ is called the fundamental solution, satisfying

$$\dot{Y} = A(t)\, Y, \qquad Y(0) = I. \tag{3}$$

We assume that $\det Y(t) > 0 \ \forall t$, implying that initially separated trajectories remain separated for all time. Geometrically, the fundamental solution $Y(t)$ can be visualized as a linear map that maps an initial $m$-sphere in the phase space into an evolving ellipsoid. The Lyapunov vectors are defined as the ellipsoidal principal axes; namely, the $i$th Lyapunov vector is given as

$$p_i(t) = Y(t)\, n_i(t), \tag{4}$$

where $n_i(t)$ is a (unit) eigenvector of the matrix $Y^T Y$. The $i$th Lyapunov exponent of the system is defined as

$$\lambda_i = \lim_{t \to \infty} \frac{1}{t} \log \|p_i(t)\|. \tag{5}$$

The exponent characterizes the long-time average rate of exponential growth of the $i$th ellipsoidal principal axis.


The simplest algorithm for computing the Lyapunov directions is Wolf's algorithm [2]. The method proceeds as follows. Let $Q_n = [q_{1,n}, q_{2,n}, \ldots, q_{k,n}]$ ($k \le m$) be the (approximate) Lyapunov vectors at time $t_n$, and assume that the vectors are orthonormal, that is, $Q_n^T Q_n = I$.

(1) Obtain $Y_{n+1}$ by integrating (3), with initial condition $Y(t_n) = Q_n$.

(2) Obtain the vectors $Q_{n+1} = [q_{1,n+1}, q_{2,n+1}, \ldots, q_{k,n+1}]$ from $Y_{n+1}$ by performing Gram–Schmidt orthogonalization:

$$q_{1,n+1} = \frac{y_{1,n+1}}{\|y_{1,n+1}\|}, \qquad q_{i,n+1} = \frac{y_{i,n+1} - \sum_{j=1}^{i-1}(y_{i,n+1} \cdot q_{j,n+1})\, q_{j,n+1}}{\left\| y_{i,n+1} - \sum_{j=1}^{i-1}(y_{i,n+1} \cdot q_{j,n+1})\, q_{j,n+1} \right\|}, \quad i = 2, \ldots, k. \tag{6}$$

(3) Accumulate the Lyapunov exponents from the norms of the Lyapunov vectors. Denoting $N_{i,s} = \left\| y_{i,s} - \sum_{j=1}^{i-1}(y_{i,s} \cdot q_{j,s})\, q_{j,s} \right\|$, then

$$\lambda_i = \lim_{n \to \infty} \frac{1}{t_{n+1}} \sum_{s=1}^{n+1} \log N_{i,s}. \tag{7}$$
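As a concrete illustration, the loop above can be sketched in Python (a hypothetical helper, not the authors' code). For simplicity the Jacobian is taken constant, each sub-interval is integrated with a single explicit RK2 step, and the Gram–Schmidt step of eq. (6) is carried out by a QR factorization:

```python
import numpy as np

def wolf_lyapunov(A, t_end, dt, k):
    """Sketch of Wolf-style exponent extraction for a constant Jacobian A:
    integrate Ydot = A Y over each step with Y(t_n) = Q_n, re-orthonormalize
    (eq. (6)), and accumulate the log stretch factors N_i (eq. (7))."""
    m = A.shape[0]
    Q = np.eye(m)[:, :k]           # initial orthonormal vectors
    log_sum = np.zeros(k)
    for _ in range(int(round(t_end / dt))):
        # one explicit midpoint (RK2) step of the variational equation
        Y = Q + dt * (A @ (Q + 0.5 * dt * (A @ Q)))
        Q, R = np.linalg.qr(Y)     # Gram-Schmidt in matrix form
        log_sum += np.log(np.abs(np.diag(R)))
    return log_sum / t_end

A = np.array([[0.0, -1.0],
              [1.0, -1.0]])        # the 2 x 2 example of Section 4.1
lam = wolf_lyapunov(A, t_end=200.0, dt=0.01, k=2)
# both exponents should approach Re(eig(A)) = -0.5
```

The diagonal entries of $R$ are exactly the stretch factors $N_{i,s}$ of eq. (7), which is why the QR factorization can stand in for explicit Gram–Schmidt.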

The underlying logic is similar to that of the power method [10] for the algebraic eigenvalue problem of a symmetric matrix. If only one Lyapunov vector is involved, the vector obtained from Wolf's code is expected to converge to the Lyapunov direction corresponding to the largest Lyapunov exponent. This is because the component in this direction is amplified the most, and thus outgrows all other directions. Numerically, this happens even if the initial vector does not contain a component in the first direction, since such a component will be brought in by round-off error. When multiple vectors are considered simultaneously, maintaining orthogonality is key to the success of the method (recall that the Lyapunov vectors, by definition, are mutually orthogonal). Without orthogonalization, the vectors tend to fall parallel to the first direction, leading to inaccurate results.

The QR decomposition is an important theoretical and numerical tool for computing the Lyapunov exponents. An in-depth discussion of the method can be found in many references, e.g. [3,4]. From the standpoint of numerical computation, the method may be regarded as a method of continuous orthogonalization. Instead of integrating (3) directly, one seeks the decomposition $Y(t) = Q(t)R(t)$, where $Q(t)$ is orthogonal and $R(t)$ is upper triangular with positive diagonal elements. With this decomposition one can write (3) in the form

$$Q^T \dot{Q} + \dot{R}R^{-1} = Q^T A Q. \tag{8}$$

Since $Q^T\dot{Q}$ is skew and $\dot{R}R^{-1}$ is upper triangular, one reads off the differential equations

$$\dot{Q} = Q\,\Omega, \qquad \Omega_{ij} = \begin{cases} (Q^T A Q)_{ij}, & i > j, \\ 0, & i = j, \\ -(Q^T A Q)_{ji}, & i < j, \end{cases} \tag{9}$$

and

$$\dot{R}_{ii}\, R_{ii}^{-1} = (Q^T A Q)_{ii}, \tag{10}$$

where $R_{ii}$ are the diagonal elements of $R$. It has been established that at long time the Lyapunov exponents are related to the diagonal elements through the formula [4]

$$\lambda_i = \lim_{t \to \infty} \frac{1}{t} \log R_{ii}. \tag{11}$$

The factor $R$ need not be solved for when computing the Lyapunov exponents. Noticing

$$\frac{d \log R_{ii}}{dt} = \frac{\dot{R}_{ii}}{R_{ii}} = (Q^T A Q)_{ii} = q_i \cdot A q_i, \tag{12}$$

where $q_i$ is the $i$th column vector of $Q$, one can accumulate the Lyapunov exponents using

$$\lambda_i = \lim_{t \to \infty} \frac{1}{t} \int_0^t q_i \cdot A q_i \, dt. \tag{13}$$
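A plain sketch of this continuous QR approach (explicit RK2 on eq. (9), with a QR re-factorization each step to guard against drift — not one of the orthogonality-preserving integrators of [5]; the function name is illustrative only):

```python
import numpy as np

def qr_method(A_func, m, t_end, dt):
    """Integrate Qdot = Q Omega (eq. (9)) and accumulate the exponents
    from the diagonal of Q^T A Q (eq. (13)) by midpoint quadrature."""
    def qdot(Q, t):
        B = Q.T @ A_func(t) @ Q
        L = np.tril(B, -1)            # strictly lower part of Q^T A Q
        return Q @ (L - L.T)          # Omega of eq. (9) is skew

    Q = np.eye(m)
    lam = np.zeros(m)
    t = 0.0
    for _ in range(int(round(t_end / dt))):
        Qmid = Q + 0.5 * dt * qdot(Q, t)
        lam += dt * np.diag(Qmid.T @ A_func(t + 0.5 * dt) @ Qmid)
        Q = Q + dt * qdot(Qmid, t + 0.5 * dt)
        Q, _ = np.linalg.qr(Q)        # guard against orthogonality drift
        t += dt
    return lam / t_end

A = np.array([[0.0, -1.0], [1.0, -1.0]])
lam = qr_method(lambda t: A, 2, t_end=200.0, dt=0.01)
# lam[0] + lam[1] tracks tr A = -1; each exponent approaches -0.5
```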


In computation, the Lyapunov exponents can be accumulated through a recursive formula. Let $\lambda_{i,n} = \frac{1}{t_n}\int_0^{t_n} q_i \cdot A q_i \, dt$, and notice that

$$\lambda_{i,n+1} = \frac{1}{t_{n+1}}\int_0^{t_{n+1}} q_i \cdot A q_i \, dt = \frac{1}{t_{n+1}}\int_0^{t_n} q_i \cdot A q_i \, dt + \frac{1}{t_{n+1}}\int_{t_n}^{t_{n+1}} q_i \cdot A q_i \, dt.$$

Replacing the first integral with $t_n \lambda_{i,n}$ gives

$$\lambda_{i,n+1} = \frac{t_n}{t_{n+1}}\, \lambda_{i,n} + \frac{1}{t_{n+1}} \int_{t_n}^{t_{n+1}} q_i \cdot A q_i \, dt. \tag{14}$$
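A toy check (with a hypothetical integrand standing in for $q_i \cdot A q_i$) that the running update (14) reproduces the plain time average:

```python
import numpy as np

f = lambda t: np.cos(t) * np.exp(-0.1 * t)   # stand-in for q_i . A q_i
dt = 0.01
lam, t_n, total = 0.0, 0.0, 0.0
for t in np.arange(0.0, 10.0, dt):
    inc = 0.5 * dt * (f(t) + f(t + dt))      # trapezoidal panel of the integral
    lam = (t_n * lam + inc) / (t_n + dt)     # recursive update, eq. (14)
    t_n += dt
    total += inc                              # direct accumulation
# lam equals total / t_n up to round-off
```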

The QR method and Wolf's method are closely related. In matrix computations, the Gram–Schmidt process is an explicit algorithm for computing the QR decomposition. The fundamental difference lies in the manner in which the orthogonality of the Lyapunov vectors is maintained. In Wolf's code, orthogonalization is performed numerically at discrete time points. The QR method, on the other hand, seeks to maintain orthogonality by solving differential equations that encode the orthogonality continuously; one is thus allowed to control the accuracy of orthogonality in the numerical solution. As alluded to in Section 1, it is possible to derive integration schemes that automatically preserve the orthogonality of the solution vectors. However, the QR method results in a nonlinear equation for Q due to the orthogonality constraint imposed on Q. Moreover, the existing orthogonality-preserving algorithms [6,8] work only for limited cases. This motivates us to develop a new method for computing the Lyapunov exponents.

## 3. Method of Lyapunov vectors

As mentioned in Section 1, the proposed method seeks to derive differential equations for Lyapunov vectors that encode orthogonality in the continuum sense. The point of departure is to relax the length constraint on the Lyapunov vectors. Relaxing the length restriction allows us to establish linear differential equations for the Lyapunov vectors, and to derive algorithms that numerically preserve the orthogonality between some of the vectors. As will be shown shortly, the first two vectors (corresponding to the largest and second-largest growth directions) are automatically orthogonal regardless of the dimension of the problem. In addition, the Lyapunov vectors can be computed recursively, and each involves the solution of linear differential equations. These features make the method attractive for partial solutions which compute only the first few Lyapunov exponents; in some applications, it suffices to know a few leading Lyapunov exponents.

### 3.1. Evolution equations for Lyapunov vectors

Let $Y = [y_1, y_2, \ldots, y_m]$ be the fundamental solution to (3). We identify a set of orthogonal vectors by the Gram–Schmidt process without normalization:

$$\begin{aligned} p_1 &= y_1, & q_1 &= p_1/\|p_1\|, \\ p_2 &= y_2 - (q_1 \cdot y_2)\,q_1, & q_2 &= p_2/\|p_2\|, \\ &\;\vdots \\ p_i &= y_i - \sum_{j=1}^{i-1}(q_j \cdot y_i)\,q_j, & q_i &= p_i/\|p_i\|, \end{aligned} \tag{15}$$

where $i = 1, \ldots, k$. It is noted that $k \le m$, highlighting the intended application to partial solutions. As in the QR and Wolf's methods, it is expected that at long time the vectors $p_i$ will approach the Lyapunov vectors. Introducing

$$\bar{Q} = [p_1, p_2, \ldots, p_k], \tag{16}$$

the above orthogonalization process can be written as a QR-bar decomposition

$$Y = \bar{Q}\,\bar{R}, \tag{17}$$

where $\bar{Q} \in \mathbb{R}^{m \times k}$, the columns of which are mutually orthogonal but not necessarily orthonormal, and $\bar{R} \in \mathbb{R}^{k \times k}$ is an upper triangular matrix with diagonal elements $\bar{R}_{ii} = 1$. Substituting (17) into (3) and post-multiplying both sides by $\bar{R}^{-1}$ gives

$$\dot{\bar{Q}} + \bar{Q}\,\dot{\bar{R}}\,\bar{R}^{-1} = A\,\bar{Q}. \tag{18}$$
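The unnormalized Gram–Schmidt factorization (15)–(17) can be sketched as follows (`qr_bar` is an illustrative name, not the authors' code):

```python
import numpy as np

def qr_bar(Y):
    """QR-bar factorization Y = Qbar Rbar (eq. (17)): Gram-Schmidt without
    normalization (eq. (15)), so diag(Rbar) = 1 and the columns of Qbar are
    mutually orthogonal but not unit length."""
    m, k = Y.shape
    Qb = Y.astype(float).copy()
    Rb = np.eye(k)
    for i in range(k):
        for j in range(i):
            # projection coefficient (q_j . y_i)/||p_j|| recast in terms of p_j
            Rb[j, i] = (Qb[:, j] @ Y[:, i]) / (Qb[:, j] @ Qb[:, j])
            Qb[:, i] -= Rb[j, i] * Qb[:, j]
    return Qb, Rb

rng = np.random.default_rng(0)
Y = rng.standard_normal((5, 3))
Qb, Rb = qr_bar(Y)
# Y == Qb @ Rb, Qb^T Qb is diagonal, diag(Rb) == 1
```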


Let $a_{ij}$ be the entries of $-\dot{\bar{R}}\,\bar{R}^{-1}$, and notice that $a_{ii} = 0$ (since $\bar{R}_{ii} = 1$). We can then write (18) as

$$\begin{aligned} \dot{p}_1 &= A p_1, \\ \dot{p}_2 &= a_{12}\, p_1 + A p_2, \\ &\;\vdots \\ \dot{p}_i &= \sum_{j=1}^{i-1} a_{ji}\, p_j + A p_i, \qquad i \le k. \end{aligned} \tag{19}$$

We then eliminate the $a_{ij}$ from (19) and turn the equations into differential equations for the $p_i$ alone. Starting from (19)₂ and taking the inner product of both sides with $p_1$,

$$p_1 \cdot \dot{p}_2 = a_{12}\,\|p_1\|^2 + p_1 \cdot A p_2. \tag{20}$$

By construction, $p_1 \cdot p_2 = 0$, hence $p_1 \cdot \dot{p}_2 = -\dot{p}_1 \cdot p_2 = -(A p_1) \cdot p_2 = -p_1 \cdot A^T p_2$. Therefore

$$a_{12} = -\frac{1}{\|p_1\|^2}\left(p_1 \cdot A p_2 + p_1 \cdot A^T p_2\right). \tag{21}$$

Substituting $a_{12}$ into (19)₂ and making use of $q_1 = p_1/\|p_1\|$ yields

$$\dot{p}_2 = \left(A - q_1 \otimes q_1 A - q_1 \otimes q_1 A^T\right) p_2 =: A_2\, p_2. \tag{22}$$

Proceeding to (19)₃, eliminating $a_{13}$ by the same procedure gives

$$\dot{p}_3 = a_{23}\, p_2 + A_2\, p_3. \tag{23}$$

Further, eliminating $a_{23}$ from (23),

$$\dot{p}_3 = \left(A_2 - q_2 \otimes q_2 A_2 - q_2 \otimes q_2 A_2^T\right) p_3 =: A_3\, p_3. \tag{24}$$

Continuing the process inductively, we find

$$\dot{p}_i = A_i\, p_i \quad \text{(no summation)}, \tag{25}$$

where

$$A_i = A_{i-1} - q_{i-1} \otimes q_{i-1} A_{i-1} - q_{i-1} \otimes q_{i-1} A_{i-1}^T, \qquad A_1 = A. \tag{26}$$
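The recursion (26) is easily mechanized; the sketch below (illustrative names, not the authors' code) builds the operators $A_1, A_2, \ldots$ from the unit vectors $q_1, q_2, \ldots$ and numerically exhibits the identity $p_{i-1} \cdot (A_{i-1}^T + A_i)\,p_i = 0$ that is exploited in Section 3.2:

```python
import numpy as np

def build_operators(A, qs):
    """Recursion (26): A_i = A_{i-1} - q_{i-1} (x) q_{i-1} (A_{i-1} + A_{i-1}^T),
    given the unit vectors q_1, q_2, ... in order."""
    ops = [A]
    for q in qs:
        Ai = ops[-1]
        ops.append(Ai - np.outer(q, q) @ (Ai + Ai.T))
    return ops

rng = np.random.default_rng(1)
A = rng.standard_normal((4, 4))
p1 = rng.standard_normal(4)
q1 = p1 / np.linalg.norm(p1)
A1, A2 = build_operators(A, [q1])
p2 = rng.standard_normal(4)
# identity used in Section 3.2: p1 . (A1^T + A2) p2 vanishes for any p2
```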

Interestingly, the equation for the $i$th vector involves the vectors $p_1$ through $p_i$ and is linear and homogeneous of degree one in $p_i$. Eqs. (25) and (26), to the best of the authors' knowledge, have not been reported in the literature.

### 3.2. Orthogonality between vectors

Next, we derive integration algorithms that preserve the orthogonality between some of the vectors. Notice first that the operators in (25) satisfy the identity

$$p_{i-1} \cdot \left(A_{i-1}^T + A_i\right) p_i = 0, \qquad i = 2, \ldots, k, \tag{27}$$

for any $p_i$ and any $A_{i-1}$, $A_i$ related through (26) with $q_{i-1} = p_{i-1}/\|p_{i-1}\|$. This identity can be readily verified using (26). With it we then prove that any symplectic Runge–Kutta method preserves the orthogonality between $p_{i-1}$ and $p_i$ for $i = 2, \ldots, k$. Consider an $s$-stage RK method for (25). Let $p_{i,n}$ be the discrete value of $p_i$ at time $t_n$ and let $p_{i,nj}$ be the corresponding values at the intermediate points. Applying the RK scheme [11] to the $i$th equation in (25),

$$\begin{aligned} p_{i,n+1} &= p_{i,n} + h\sum_{j=1}^{s} b_j\, A_i(t_j)\, p_{i,nj}, \\ p_{i,nj} &= p_{i,n} + h\sum_{l=1}^{s} a_{jl}\, A_i(t_l)\, p_{i,nl}, \end{aligned} \tag{28}$$

where $h = t_{n+1} - t_n$, $t_j = t_n + c_j h$, and $a_{jl}$, $b_j$, $c_j$ are the integration coefficients as defined in [11]. For notational clarity, the dependence of $A_i$ on $q_1, \ldots, q_{i-1}$ is omitted. An $s$-stage RK scheme is symplectic if

$$b_j a_{jl} + b_l a_{lj} - b_j b_l = 0, \qquad j, l = 1, \ldots, s. \tag{29}$$


See [12] for details. A straightforward manipulation shows that

$$\begin{aligned} p_{i-1,n+1} \cdot p_{i,n+1} = p_{i-1,n} \cdot p_{i,n} &+ h\sum_{j=1}^{s} b_j\, p_{i-1,nj} \cdot \left(A_{i-1}^T(t_j) + A_i(t_j)\right) p_{i,nj} \\ &+ h^2 \sum_{j=1}^{s}\sum_{l=1}^{s} \left(b_j a_{jl} + b_l a_{lj} - b_j b_l\right) p_{i-1,nj} \cdot \left(A_{i-1}^T(t_j)\, A_i(t_l)\right) p_{i,nl}. \end{aligned} \tag{30}$$

The $h^2$-term drops out due to the symplectic condition (29), and the $h$-term vanishes due to the identity (27). It follows that $p_{i-1,n+1} \cdot p_{i,n+1} = p_{i-1,n} \cdot p_{i,n}$; therefore, if $p_{i-1,n} \cdot p_{i,n} = 0$, then $p_{i-1,n+1} \cdot p_{i,n+1} = 0$. In particular, if $p_1$ and $p_2$ are initially orthogonal, they remain orthogonal for all time.

### 3.3. Lyapunov exponents

Upon solving for the $p_i$, the corresponding Lyapunov exponents are computed according to

$$\lambda_i = \lim_{t \to \infty} \frac{1}{t} \log \|p_i\|. \tag{31}$$

Similar to the QR method, we can recast (31) into a form that depends only on the directions of the Lyapunov vectors. To this end, introduce

$$\lambda_i(t) = \frac{1}{t}\log\|p_i\|, \tag{32}$$

so that $\lambda_i = \lim_{t\to\infty}\lambda_i(t)$. Multiplying both sides of (32) by $t$, taking the time derivative of the resulting equation, and invoking (25), we obtain

$$\frac{d}{dt}\left(t\,\lambda_i(t)\right) = \frac{d}{dt}\log\|p_i\| = q_i \cdot A_i q_i, \qquad q_i = \frac{p_i}{\|p_i\|}. \tag{33}$$

This expression naturally leads to a recursive formula parallel to (14). Integrating (33) over the time interval $[t_n, t_{n+1}]$ yields

$$t_{n+1}\,\lambda_{i,n+1} = t_n\,\lambda_{i,n} + \int_{t_n}^{t_{n+1}} q_i \cdot A_i q_i \, dt. \tag{34}$$

Therefore,

$$\lambda_{i,n+1} = \frac{t_n}{t_{n+1}}\,\lambda_{i,n} + \frac{1}{t_{n+1}}\int_{t_n}^{t_{n+1}} q_i \cdot A_i q_i \, dt. \tag{35}$$

The integral on the right-hand side can be computed using quadrature rules consistent with the RK method. Upon convergence, we set $\lambda_i = \lambda_{i,n+1}$.

### 3.4. Renormalization of Lyapunov vectors

If $\lambda_i$ is positive or negative, the magnitude of the corresponding Lyapunov vector will grow or shrink exponentially and quickly lead to numerical overflow or underflow. The problem is easily resolved by normalizing the vectors at any time step at which their magnitudes exceed given upper or lower bounds. Renormalization at any time step does not affect the values of the Lyapunov exponents, because the latter depend only on the directions of the Lyapunov vectors. One can easily see that $q_i$ is unaffected by the scaling transformation $p_i \mapsto \alpha p_i$, since the differential equation for $p_i$ is invariant under this transformation. Numerically, if $p_{i,n} \mapsto \alpha p_{i,n}$ at time $t_n$, any decent numerical integrator will yield $p_{i,n+1} \mapsto \alpha p_{i,n+1}$, since the differential equation is linear and homogeneous in $p_i$; therefore $q_i$ is also invariant numerically.

The computational procedure is summarized below. Given $p_{i,n}$, $i = 1, \ldots, k$ at time $t_n$:

(1) Integrate $\dot{p}_i = A_i(t)\, p_i$ sequentially for $i = 1, 2, \ldots, k$ using an $s$-stage symplectic RK scheme.

(2) Compute $\int_{t_n}^{t_{n+1}} q_i \cdot A_i q_i \, dt$ using a quadrature rule consistent with the RK method, and update $\lambda_i$.


(3) Check the norms of the Lyapunov vectors. Normalize any vector whose magnitude exceeds the given lower or upper bound. Since the algorithm preserves the inner products between some vectors, normalizing a vector will affect the preserved quantities; for this reason, a re-orthogonalization is performed whenever a vector is re-scaled, to ensure that the inner products remain small.

The solution proceeds recursively; solving for the $i$th Lyapunov exponent requires knowledge of the Lyapunov vectors up to the $(i-1)$th, but not of the $(i+1)$th onward. When the $i$th equation is integrated, the matrices needed for solving the next equation are formed. This recursive structure, and the fact that any two consecutive Lyapunov vectors remain orthogonal, make the method attractive for computing the first few Lyapunov exponents. Notably, only linear differential equations are involved in the computations.

## 4. Numerical examples

To assess the capability and numerical properties of the proposed method, it is tested against selected systems for which some features of the Lyapunov exponents are known. In Section 4.1, the method is applied to linear systems with a constant Jacobian; this is one of the few cases in which the exact values of the Lyapunov exponents are known. In Section 4.2, the method is tested for a partial solution, in which the first two Lyapunov exponents of a six-dimensional system are computed. In Section 4.3, a Lorenz equation is used as an example of a general nonlinear system. Finally, the method is applied to a model of a mechanical oscillator under the influence of short-range van der Waals forces; due to the presence of a strong singular force, the Lyapunov exponents of this system are notoriously difficult to extract. Unless otherwise stated, numerical integration is performed using the midpoint rule, the lowest-order symplectic RK scheme, corresponding to $s = 1$, $b_1 = 1$, $a_{11} = c_1 = 0.5$ in (28).

### 4.1. System with constant Jacobian

If $A$ is constant, the Lyapunov exponents equal the real parts of the eigenvalues of $A$ [9]. This is one of the few cases where the exact Lyapunov exponents are known. In the two-dimensional case, Udwadia and co-authors derived a closed-form solution for the time history of $\lambda_i(t)$ [6]. Here, we consider an example for which

$$A = \begin{pmatrix} 0 & -1 \\ 1 & -1 \end{pmatrix}.$$

The eigenvalues of $A$ are $-0.5 \pm (\sqrt{3}/2)\,\mathrm{i}$. The histories of the Lyapunov exponents in the time interval $[0, 50]$, computed using a step size $\Delta t = 0.1$, are plotted in Fig. 1. The curves are virtually identical to the closed-form solutions of Udwadia, as

*Fig. 1. Evolution of Lyapunov exponents $\lambda_1$, $\lambda_2$ for the $2 \times 2$ system.*


compared in Fig. 2. The computed Lyapunov exponents clearly converge towards the exact value $-0.5$. Results at $t = 100$ and $t = 200$ are listed in Table 1. Numerical values of $p_1 \cdot p_2 / (\|p_1\|\,\|p_2\|)$ versus time are plotted in Fig. 3. Note that the algorithm preserves the inner product $p_1 \cdot p_2$, not its normalized counterpart; nevertheless, the normalized values provide a good measure of the error in orthogonality. As seen from Fig. 3, orthogonality is maintained to within machine precision.

The sum of the Lyapunov exponents can also be an indicator of accuracy. It is known that for a general system

$$\sum_{i=1}^{m} \lambda_i = \lim_{t \to \infty} \frac{1}{t}\int_0^t \operatorname{tr} A(t) \, dt;$$

see e.g. [4] for a proof. If $\operatorname{tr} A$ is constant, we have

$$\sum_{i=1}^{m} \lambda_i = \operatorname{tr} A.$$

For this problem, $\lambda_1 + \lambda_2 = -1$. The numerical sums are plotted in Fig. 4. It is clear that the exact value is replicated except at the initial point, where the Lyapunov exponents are set to zero.

### 4.2. Partial solution

One of the unique features of the proposed method is its convenience in computing a few leading Lyapunov exponents. As an example, we consider the $6 \times 6$ constant Jacobian

$$A = \begin{pmatrix} 1.9501 & 0.4565 & 0.9218 & 0.4103 & 0.1389 & 0.0153 \\ 0.2311 & 1.0185 & 0.7382 & 0.8936 & 0.2028 & 0.7468 \\ 0.6068 & 0.8214 & 1.1763 & 0.0579 & 0.1987 & 0.4451 \\ 0.4860 & 0.4447 & 0.4057 & 1.3529 & 0.6038 & 0.9318 \\ 0.8913 & 0.6154 & 0.9355 & 0.8132 & 1.2722 & 0.4660 \\ 0.7621 & 0.7919 & 0.9169 & 0.0099 & 0.1988 & 1.4186 \end{pmatrix}$$

and compute the first two Lyapunov exponents. The eigenvalues of $A$ are

$$\operatorname{eig}(A) = \left\{\, 3.9187,\ 1.3305,\ 1.1537,\ 0.8062 \pm 0.3476\,\mathrm{i},\ 0.1734 \,\right\},$$

so the exact values of the first two Lyapunov exponents are $\lambda_1 = 3.9187$ and $\lambda_2 = 1.3305$. The problem is solved using the present method and Wolf's code, with a step size $\Delta t = 0.1$. Numerical values at $t = 50$ and $t = 100$ are listed in Table 2.
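The reference values can be reproduced with a quick eigenvalue computation — an independent check on the target exponents, not part of the algorithm itself:

```python
import numpy as np

A = np.array([
    [1.9501, 0.4565, 0.9218, 0.4103, 0.1389, 0.0153],
    [0.2311, 1.0185, 0.7382, 0.8936, 0.2028, 0.7468],
    [0.6068, 0.8214, 1.1763, 0.0579, 0.1987, 0.4451],
    [0.4860, 0.4447, 0.4057, 1.3529, 0.6038, 0.9318],
    [0.8913, 0.6154, 0.9355, 0.8132, 1.2722, 0.4660],
    [0.7621, 0.7919, 0.9169, 0.0099, 0.1988, 1.4186],
])
re = np.sort(np.linalg.eigvals(A).real)[::-1]   # real parts, descending
# for a constant Jacobian the exponents are these sorted real parts,
# so lambda_1 = re[0] and lambda_2 = re[1]
```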

*Fig. 2. Comparison of the Lyapunov exponents with the closed-form solution from Udwadia's exp(S) method.*

Table 1. Values of Lyapunov exponents at selected time points.

| $t$ | This method $\lambda_1$ | This method $\lambda_2$ | Closed-form [6] $\lambda_1$ | Closed-form [6] $\lambda_2$ |
|---|---|---|---|---|
| 100 | −0.4979 | −0.5021 | −0.4979 | −0.5021 |
| 200 | −0.4991 | −0.5009 | −0.4991 | −0.5009 |

*Fig. 3. Error in orthogonality (of the order of $10^{-15}$).*

*Fig. 4. Time history of the sum of Lyapunov exponents $\lambda_1 + \lambda_2$.*

The current method evidently yields more accurate predictions; the error in $\lambda_1$ at $t = 100$ is roughly an order of magnitude smaller than that of Wolf's method. Fig. 5 presents the time histories of the Lyapunov exponents computed using the present method and Wolf's algorithm. The predicted $\lambda_2$ is close to the exact solution in both cases; however, Wolf's method converges


Table 2. Values of the first two Lyapunov exponents at selected time points. Errors are computed as $(\lambda - \lambda_{\text{exact}})/\lambda_{\text{exact}} \times 100\%$.

| $t$ | This method $\lambda_1$ (error %) | This method $\lambda_2$ (error %) | Wolf's code $\lambda_1$ (error %) | Wolf's code $\lambda_2$ (error %) |
|---|---|---|---|---|
| 50 | 3.9082 (−0.2652) | 1.3358 (0.4058) | 3.9590 (1.0318) | 1.3380 (0.5657) |
| 100 | 3.9134 (−0.1325) | 1.3331 (0.2027) | 3.9645 (1.1710) | 1.3352 (0.3566) |

*Fig. 5. Evolution of the two leading Lyapunov exponents of the $6 \times 6$ system; comparison with Wolf's method.*

*Fig. 6. Error in orthogonality between the first two Lyapunov vectors (of the order of $10^{-16}$).*


to a slightly different value of $\lambda_1$. Orthogonality between the vectors is again maintained at machine precision, as shown in Fig. 6.

### 4.3. Lorenz system

Consider the Lorenz system

$$\begin{aligned} \dot{x}_1 &= \sigma(x_2 - x_1), \\ \dot{x}_2 &= a x_1 - x_1 x_3 - x_2, \\ \dot{x}_3 &= x_1 x_2 - b x_3, \end{aligned}$$

with initial condition $(x_1(0), x_2(0), x_3(0)) = (0, 1, 0)$, and $\sigma = 16$, $b = 4$, $a = 45.92$. The same system was investigated by Udwadia and co-author in [6], where the Lyapunov exponents were computed using the exponential-map method with a fourth-order RK scheme. To compare with Udwadia's results, the fourth-order Gauss RK scheme (known to be symplectic) is employed here, with step sizes $\Delta t = 0.03$ and $\Delta t = 0.003$. The computed values of the Lyapunov exponents and their sum at $t = 1000$ are tabulated in Table 3. For this system, the trace of the Jacobian is constant and equals $-21$. As seen from the table, the current method appears to be less sensitive to step size; indeed, it predicts converged values at around $t = 300$ for both step sizes. It is known that, for the Lorenz system, one of the Lyapunov exponents is zero; here $\lambda_2$ is close to zero, further validating the results. Udwadia's method is nevertheless more accurate in computing the sum of the Lyapunov exponents. This is because the orthogonality of $Q$ is preserved, whereas in the present method only the orthogonality between the pairs $(p_1, p_2)$ and $(p_2, p_3)$ is preserved. Recall that

$$\lambda_1(t) + \lambda_2(t) + \lambda_3(t) = \frac{1}{t}\int_0^t \left(q_1 \cdot A q_1 + q_2 \cdot A q_2 + q_3 \cdot A q_3\right) dt,$$

and notice that

$$q_1 \cdot A q_1 + q_2 \cdot A q_2 + q_3 \cdot A q_3 = \operatorname{tr} A + A : \left(Q Q^T - I\right),$$

where ':' denotes the contraction between two second-order tensors. It is clear that in Udwadia's approach the sum of the Lyapunov exponents can be computed to machine precision if $\operatorname{tr} A$ is constant. Time histories of the Lyapunov exponents, computed using $\Delta t = 0.03$ and $\Delta t = 0.003$, are shown in Figs. 7 and 8. Clearly, the results converge quickly in both cases.

### 4.4. Lennard–Jones oscillator

Finally, we apply the method to compute the Lyapunov exponents of the forced vibration of a small-scale oscillator under the influence of van der Waals forces. This oscillator, introduced in [13], models a micro-dynamical system involving a short-range surface force described by a Lennard–Jones potential. The (non-dimensionalized) equation of motion for this system is

$$\begin{aligned} \dot{x}_1 &= x_2, \\ \dot{x}_2 &= -x_1 - \frac{d}{(x_1 + a)^2} + \frac{d R^6}{30 (x_1 + a)^8} - c x_2 + C \cos\Omega t. \end{aligned}$$

Table 3. Lyapunov exponents of the Lorenz system. Udwadia's results in the second row are computed using $\Delta t = 0.001$.

| $\Delta t$ | This method $\lambda_1$ | $\lambda_2$ | $\lambda_3$ | $\sum_i \lambda_i$ | Udwadia [6] $\lambda_1$ | $\lambda_2$ | $\lambda_3$ | $\sum_i \lambda_i$ |
|---|---|---|---|---|---|---|---|---|
| 0.03 | 1.4838 | −0.0727 | −22.3943 | −20.9832 | 1.5792 | −0.1311 | −22.4481 | −21.0000 |
| 0.003 | 1.4921 | −0.0051 | −22.4865 | −20.9995 | 1.4898 | 0.0048 | −22.4946 | −21.0000 |


*Fig. 7. Time history of the Lyapunov exponents of the Lorenz system, $\Delta t = 0.03$.*

*Fig. 8. Time history of the Lyapunov exponents of the Lorenz system, $\Delta t = 0.003$.*

The Jacobian of the linearized system is derived as

$$A = \begin{pmatrix} 0 & 1 \\ -1 + \dfrac{2d}{(x_1 + a)^3} - \dfrac{4 d R^6}{15 (x_1 + a)^9} & -c \end{pmatrix}.$$

Our recent experience with small-scale oscillators [14] indicates that, when such a system oscillates near the singular point, it is very hard to compute the Lyapunov exponents due to the severe singularity in the force term. We are interested in testing the method against motions in which the singular point is frequently approached. For this purpose we adopt the following parameters:

$$R = 0.3, \quad d = \frac{4}{27}, \quad a = 1.2, \quad \Omega = 1, \quad C = 2, \quad c = 0.04.$$

It has been demonstrated in [13] that the motion under the given parameters is chaotic. The phase-space trajectory corresponding to the zero initial condition is shown in Fig. 9. Clearly, the oscillator approaches the singular point $x_1 = -1.2$ frequently, causing drastic changes in velocity, as if it were bouncing off a rigid surface.

*Fig. 9. Phase-space trajectory (velocity versus displacement) corresponding to the zero initial condition.*

The Lyapunov exponents are computed with a step size $\Delta t = 0.005$. For this system, the trace of the Jacobian is constant and we expect $\lambda_1 + \lambda_2 = -0.04$. At $t = 500$, the computed values are

$$\lambda_1 = 0.031387, \qquad \lambda_2 = -0.071365, \qquad \lambda_1 + \lambda_2 = -0.039978.$$

The results agree reasonably with the values in [13], which gives $\lambda_1 = 0.03956$ and $\lambda_2 = -0.07956$. Due to the severe singularities, Udwadia's exp(S) method fails in integrating the nonlinear differential equation for the rotation angle, at least for the midpoint rule used here. Fig. 10 shows the evolution of the Lyapunov exponents versus time. Despite the periodic spikes in the time history (occurring when the singular point is approached), the Lyapunov exponents clearly converge.

*Fig. 10. Time evolution of the Lyapunov exponents for the Lennard–Jones oscillator.*


## 5. Concluding remarks

We have developed a new method for computing the Lyapunov exponents of continuous dynamical systems. The idea of the method is akin to that of the QR method; it seeks to extract the Lyapunov exponents by tracking the evolution of a set of orthogonal vectors. In the QR method, the vectors are restricted to be orthonormal, and are thus parameterized by a rotation tensor. In the proposed method, the vectors are orthogonal but not necessarily orthonormal. By relaxing the length requirement, we established a set of recursive linear differential equations for the evolution of these vectors. In addition, we derived numerical integration schemes that automatically preserve the orthogonality between any two consecutive vectors. The method is applicable to both partial and full solutions; it is particularly effective for computing the first few Lyapunov exponents of large-scale dynamical systems, such as those arising from finite element modeling. Since some of the Lyapunov vectors remain orthogonal, the method is expected to be more accurate than Wolf's algorithm in the case of partial solutions. The method appears to be stable even for strongly nonlinear systems, perhaps because the computation involves only linear differential equations.

## Acknowledgments

Part of the work was conducted while the second author (G. Yang) was visiting the Center for Computer Aided Design, The University of Iowa. G. Yang gratefully acknowledges the financial support of the State Scholarship Fund of the Ministry of Education of China.

## References

[1] Parker T, Chua L. Practical numerical algorithms for chaotic systems. Berlin: Springer; 1989.
[2] Wolf A, Swift J, Swinney H, Vastano J. Determining Lyapunov exponents from a time series. Physica D 1985;16:285–317.
[3] Goldhirsch I, Sulem P, Orszag S. Stability and Lyapunov stability of dynamical systems: a differential approach and a numerical method. Physica D 1987;27:311–37.
[4] Dieci L, Russell R, Van Vleck E. On the computation of Lyapunov exponents for continuous dynamical systems. SIAM J Numer Anal 1997;34:403–23.
[5] Dieci L, Russell R, Van Vleck E. Unitary integrators and applications to continuous orthonormalization techniques. SIAM J Numer Anal 1994;31:261–81.
[6] Udwadia F, von Bremen H. An efficient and stable approach for computation of Lyapunov characteristic exponents of continuous dynamical systems. Appl Math Comput 2001;121:219–58.
[7] Lu J. Exact expansions of arbitrary tensor functions f(A) and their derivatives. Int J Solids Struct 2004;41:337–49.
[8] Dieci L, Van Vleck E. Computation of a few Lyapunov exponents for continuous and discrete dynamical systems. Appl Numer Math 1995;17:275–91.
[9] Bartler T. Lyapunov exponents and chaos investigation. Ph.D. thesis, Department of Aerospace Engineering, The University of Cincinnati, Cincinnati, OH; 1999.
[10] Parlett B. The symmetric eigenvalue problem. Upper Saddle River, NJ: Prentice-Hall; 1998.
[11] Hairer E, Wanner G. Solving ordinary differential equations I. Berlin: Springer; 1987.
[12] Sanz-Serna J, Calvo M. Numerical Hamiltonian problems. London: Chapman & Hall; 1994.
[13] Basso M, Giarré L. Complex dynamics in a harmonically excited Lennard–Jones oscillator: microcantilever–sample interaction in scanning probe microscopes. ASME J Dyn Syst Measure Control 2000;122:240–5.
[14] Yang G, Lu J, Luo A. On the computation of Lyapunov exponents for forced vibration of a Lennard–Jones oscillator. Chaos, Solitons & Fractals, in press.