INSTITUT DE RECHERCHE EN INFORMATIQUE ET SYSTEMES ALEATOIRES (IRISA)
Campus Universitaire de Beaulieu - 35042 Rennes Cedex - France
Tel.: (33) 99 84 71 00 - Fax: (33) 99 84 71 71

PUBLICATION INTERNE No 820
ISSN 1166-8687

ON PROPERTIES OF A CLASS OF SPECTRAL CHARACTERISTICS OF MATRICES AND APPLICATIONS TO ORDINARY DIFFERENTIAL EQUATIONS

GENNADII DEMIDENKO AND INESSA MATVEEVA
On properties of a class of spectral characteristics of matrices and applications to ordinary differential equations
Gennadii Demidenko and Inessa Matveeva
Programme 6 - Calcul scientifique, modelisation et logiciel numerique
Projet ALADIN
Publication interne n. 820 - Mai 1994 - 41 pages

Abstract: In this report we consider a class of integral matrices H_p and a class of spectral matrix characteristics κ_p, p ≥ 0. The report contains the proofs of their basic properties. Using these properties, we formulate a criterion for a matrix spectrum to belong to the closed half-plane {Re λ ≤ 0} and prove theorems about qualitative properties of solutions of systems of ordinary differential equations. In particular, we establish new criteria of asymptotic stability and of stability in the sense of Lyapunov for systems of linear ordinary differential equations. We discuss an algorithm to compute the parameters κ_p. We include some examples of systems of linear ordinary differential equations depending on parameters. For these systems, using the characteristics κ_p, we determine the asymptotic stability zones by means of a computer.

Key-words: spectral characteristics of matrices, matrix exponential, stability in the sense of Lyapunov, Lyapunov equation

(Resume : tsvp)

This work was supported by the French Ministry of Defence, the French Ministry of Foreign Affairs and the Russian Foundation of Fundamental Investigations. Institute of Mathematics, Siberian Branch of the Russian Academy of Sciences, 630090 Novosibirsk, Russia, demidenk@math.nsk.su

Centre National de la Recherche Scientifique (URA 227), Universite de Rennes 1 - Insa de Rennes
Institut National de Recherche en Informatique et en Automatique - unite de recherche de Rennes

On properties of a class of spectral characteristics of matrices and applications to ordinary differential equations

Resume: In this report we consider a class of integral matrices H_p and a class of spectral matrix characteristics κ_p, p ≥ 0. The report contains the proofs of their fundamental properties. Using these properties, we formulate a criterion for a matrix spectrum to belong to the closed half-plane {Re λ ≤ 0} and prove theorems on the qualitative properties of solutions of systems of ordinary differential equations. In particular, we establish new criteria of asymptotic stability and of stability in the sense of Lyapunov for solutions of systems of linear ordinary differential equations. We present an algorithm for computing the parameters κ_p. We include some examples of systems of linear ordinary differential equations depending on parameters. For these systems, using the characteristics κ_p, we determine the stability zones by means of a computer.

Mots-cle: spectral characteristics of matrices, matrix exponential, stability in the sense of Lyapunov, Lyapunov equation.

1 Introduction
It is well known that the problem of computing the eigenvalues of unsymmetric matrices on a computer is "incorrect" (see, e.g., [7, 14]). Therefore, in order to obtain information on the location of the spectrum of a matrix A of order N in the complex plane, or on the behaviour of the solutions of the system of ordinary differential equations

    dy/dt = Ay,                                                      (1)

we would like to have numerical characteristics of the matrix A which allow us to answer these questions. One characteristic of this type is the parameter of asymptotic stability κ(A) for the system (1). This parameter was introduced by S.K. Godunov and A.Ya. Bulgakov [1, 9]. For a Hurwitz matrix A

    κ(A) = 2 ||A|| ||H||,                                            (2)

where the matrix H is the solution of the Lyapunov equation

    H A + A* H = -I,                                                 (3)

I being the unit matrix. If A is non-Hurwitz, then κ(A) = ∞. The parameter κ(A) gives a numerical criterion of the asymptotic stability of the solutions of (1). This fact follows from the Lyapunov theorem about asymptotic stability (see, e.g., [8]). S.K. Godunov and A.Ya. Bulgakov elaborated in [2] an algorithm for calculating κ(A) on a computer with guaranteed accuracy. It permits solving with guaranteed accuracy the problem of asymptotic stability for systems of the form (1). This is equivalent to the problem of characterizing the matrices whose spectra belong to the open left half-plane {Re λ < 0}. However, the question of characterizing the matrices whose spectra are contained in the closed left half-plane {Re λ ≤ 0} was open, and the problem of obtaining a numerical criterion of stability in the sense of Lyapunov for the system (1) was not solved either. Last year one of the authors of the present paper proposed [4] a solution to these problems. The approach is connected with his investigations in the theory of partial differential equations (see, e.g., [3]) and is based upon the use of the characteristics κ_p(A), p real, p ≥ 0, which he introduced in his lectures on ordinary differential equations at Novosibirsk State University (Russia) in 1987.

According to [4] the characteristics κ_p(A) of the matrix A are defined as follows: if the integral

    H_p = ∫_0^∞ (1 + t||A||)^{-2p} e^{tA*} e^{tA} dt                 (4)

exists, then

    κ_p(A) = a_p ||A|| ||H_p||,                                      (5)

where

    a_p = ( ∫_0^∞ (1 + s)^{-2p} e^{-2s} ds )^{-1}.

If the integral (4) diverges, then κ_p(A) = ∞. It should be noted here that for Hurwitz matrices the integral (4) exists for p = 0 and is the unique solution of the Lyapunov equation (3), i.e. the parameter κ_0(A) coincides with κ(A). The spectral characteristics κ_p(A) allow us to introduce new criteria [4, 5, 6] of asymptotic stability and of stability in the sense of Lyapunov for the solutions of the system (1). Moreover, using the characteristics κ_p(A) for 0 < p ≤ 1/2 instead of κ(A), one can obtain stronger numerical results. Note that the approach from [4] also permits the solution of some problems of linear algebra. Thus, the characteristics κ_p(A) allow one to answer the question: does a matrix spectrum belong to a line (an angle, a strip or a convex polygon) in the complex plane?

The organization of the paper is as follows. In Sections 2-4 we present the basic results of [4-6]. In particular, we formulate some properties of the matrix H_p in Section 2 and some properties of the characteristics κ_p(A) in Section 3. Section 4 contains some spectral criteria for matrices and theorems about the properties of the solutions of (1). In Section 5 we discuss a numerical algorithm which allows us to obtain two-sided estimates of κ_p(A) by means of a computer. Section 6 presents some numerical examples.
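As a concrete illustration of (2)-(5), the sketch below (Python with NumPy/SciPy is assumed; it is not part of the report) computes κ_0(A) = 2||A|| ||H|| from the Lyapunov equation (3) and a rough value of κ_p(A) by truncating the integral (4) at a finite horizon T; the horizon, step count, helper names and the test matrix are illustrative choices only.

    import numpy as np
    from scipy.linalg import expm, solve_continuous_lyapunov
    from scipy.integrate import quad

    def kappa_0(A):
        """kappa_0(A) = 2 ||A|| ||H||, where H A + A* H = -I (A assumed Hurwitz)."""
        n = A.shape[0]
        # solve_continuous_lyapunov(a, q) solves a X + X a^H = q; with a = A* this is A* H + H A = -I.
        H = solve_continuous_lyapunov(A.conj().T, -np.eye(n))
        return 2.0 * np.linalg.norm(A, 2) * np.linalg.norm(H, 2)

    def kappa_p(A, p, T=100.0, n_steps=4000):
        """Rough estimate of kappa_p(A) = a_p ||A|| ||H_p||, truncating the integral (4) at t = T."""
        nrm = np.linalg.norm(A, 2)
        a_p = 1.0 / quad(lambda s: (1.0 + s) ** (-2 * p) * np.exp(-2.0 * s), 0.0, np.inf)[0]
        h = T / n_steps
        E = expm(h * A)                      # e^{(k+1)h A} = E @ e^{k h A}
        Ek = np.eye(A.shape[0], dtype=complex)
        Hp = np.zeros_like(Ek)
        for k in range(n_steps + 1):
            w = h if 0 < k < n_steps else h / 2.0        # trapezoidal weights
            Hp += w * (1.0 + k * h * nrm) ** (-2 * p) * (Ek.conj().T @ Ek)
            Ek = E @ Ek
        return a_p * nrm * np.linalg.norm(Hp, 2)

    if __name__ == "__main__":
        A = np.array([[-1.0, 5.0], [0.0, -2.0]])         # a small Hurwitz test matrix
        print(kappa_0(A), kappa_p(A, p=0.0), kappa_p(A, p=0.5))

For a Hurwitz test matrix the truncated estimate with p = 0 should agree with the Lyapunov-equation value up to quadrature and truncation error.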

2 Properties of H_p

Let us establish some properties of the matrices H_p. We assume that A ≠ 0.

Theorem 2.1 The matrix H_p is Hermitian positive definite and

    ⟨H_p v, v⟩ ≥ (a_p ||A||)^{-1} ||v||^2.                           (6)

Proof. From the definition (4) it follows immediately that the matrix H_p is Hermitian. To prove that the matrix H_p is positive definite we consider the quadratic form ⟨H_p v, v⟩ for an arbitrary vector v. It is not difficult to show that the following identity holds:

    ⟨H_p v, v⟩ = ∫_0^∞ (1 + s||A||)^{-2p} ||e^{sA} v||^2 ds.         (7)

Indeed, according to (4)

    ⟨H_p v, v⟩ = ⟨ ∫_0^∞ (1 + s||A||)^{-2p} e^{sA*} e^{sA} ds  v, v ⟩
               = ∫_0^∞ (1 + s||A||)^{-2p} ⟨e^{sA*} e^{sA} v, v⟩ ds
               = ∫_0^∞ (1 + s||A||)^{-2p} ||e^{sA} v||^2 ds.

Taking into account

    e^{-|s| ||A||} ||v|| ≤ ||e^{sA} v||,

by (7) we obtain

    ⟨H_p v, v⟩ ≥ ∫_0^∞ (1 + s||A||)^{-2p} e^{-2s||A||} ds ||v||^2.

This leads to (6).  □

Theorem 2.2 If the integral H_p exists, then for any q > p the integral H_q exists and

    ||H_p|| > ||H_q||.

Proof. The existence of H_q follows by comparison, since (1 + s||A||)^{-2q} ≤ (1 + s||A||)^{-2p} for s ≥ 0. Let v_0, ||v_0|| = 1, be a vector such that ||H_q|| = ⟨H_q v_0, v_0⟩. From (7) we have

    ||H_q|| = ∫_0^∞ (1 + s||A||)^{-2q} ||e^{sA} v_0||^2 ds
            < ∫_0^∞ (1 + s||A||)^{-2p} ||e^{sA} v_0||^2 ds
            = ⟨H_p v_0, v_0⟩ ≤ max_{||v||=1} ⟨H_p v, v⟩ = ||H_p||.  □
Theorem 2.3 If for an integer p ≥ 0 the matrix H_p exists, then all eigenvalues of the matrix A have nonpositive real parts; moreover, the maximal size of the Jordan blocks corresponding to the imaginary eigenvalues (if they exist) is not greater than p.

Proof. Suppose that there exists an eigenvalue λ_j of the matrix A with Re λ_j > 0. Then by (7) for a corresponding eigenvector v_j we have

    ⟨H_p v_j, v_j⟩ = ∫_0^∞ (1 + s||A||)^{-2p} ||e^{sA} v_j||^2 ds
                   = ∫_0^∞ (1 + s||A||)^{-2p} e^{2s Re λ_j} ds ||v_j||^2.       (8)

But if Re λ_j > 0 this integral does not exist. Consequently, the matrix H_p is not defined. Thus we have a contradiction, i.e. the eigenvalues cannot belong to the right half-plane.

Assume now that there exists an imaginary eigenvalue λ_j of the matrix A for which the size of the corresponding Jordan block is equal to q, q > p. Then the system (1) has a solution of the form

    y(t) = e^{t λ_j} (t^{q-1} v_1 + ... + v_q) = e^{tA} v_q.

According to (7), for the generalized eigenvector v_q we have

    ⟨H_p v_q, v_q⟩ = ∫_0^∞ (1 + s||A||)^{-2p} ||e^{sA} v_q||^2 ds
                   = ∫_0^∞ (1 + s||A||)^{-2p} ||s^{q-1} v_1 + ... + v_q||^2 ds.

Since v_1 is an eigenvector of the matrix A, this integral diverges for q > p. Hence, the matrix H_p is not defined. We have a contradiction.  □

Note that if the spectrum of the matrix A belongs to the left half-plane and there exists at least one imaginary eigenvalue, then by the Lyapunov theorem about asymptotic stability the minimal number p = p_min such that there exists a matrix of the form (4) must be greater than zero. The following two theorems assert that p_min is uniquely defined by the maximal size of the Jordan blocks corresponding to the imaginary eigenvalues.
Theorem 2.4 If p is the minimal natural number such that there exists a matrix of the form (4), then the matrix A has at least one imaginary eigenvalue. Moreover, the maximal size of the Jordan blocks corresponding to the imaginary eigenvalues is p.

Theorem 2.5 Let all eigenvalues of A belong to the closed left half-plane and let there exist at least one imaginary eigenvalue. If the maximal size of the Jordan blocks corresponding to the imaginary eigenvalues is p, then H_p exists and H_{p-1} does not exist.

The proofs of Theorems 2.4 and 2.5 will be obtained in Section 3 as simple corollaries of theorems about properties of the spectral characteristics κ_p.

Theorem 2.6 If the matrix H_p exists, then the following relation holds:

    H_p A + A* H_p = -I + 2p ||A|| H_{p+1/2}.                        (9)

Proof. Introduce the matrix

    H_p(t) = ∫_0^t (1 + s||A||)^{-2p} e^{sA*} e^{sA} ds.             (10)

We now prove that

    H_p(t) A + A* H_p(t) = (1 + t||A||)^{-2p} e^{tA*} e^{tA} - I + 2p ||A|| H_{p+1/2}(t).      (11)

Indeed, using properties of the matrix exponential we have

    H_p(t) A + A* H_p(t) = ∫_0^t (1 + s||A||)^{-2p} (d/ds)(e^{sA*} e^{sA}) ds
        = (1 + t||A||)^{-2p} e^{tA*} e^{tA} - I + 2p ||A|| ∫_0^t (1 + s||A||)^{-2p-1} e^{sA*} e^{sA} ds.

Hence, by the definition of H_{p+1/2} we obtain (11). From Theorem 2.3

    ||e^{tA}|| ≤ c (1 + t)^{p-1},   t ≥ 0.

Since H_p(t) → H_p and H_{p+1/2}(t) → H_{p+1/2} as t → +∞, we obtain (9).  □
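Relation (9) can be checked numerically for a small Hurwitz matrix by approximating H_p and H_{p+1/2} through the truncated integrals (10) and comparing both sides. The sketch below assumes Python with NumPy/SciPy; the truncation horizon, step count and the test matrix are arbitrary illustrative choices.

    import numpy as np
    from scipy.linalg import expm

    def H_trunc(A, p, T=200.0, n_steps=8000):
        """Trapezoidal approximation of H_p(T) = int_0^T (1 + t||A||)^{-2p} e^{tA*} e^{tA} dt."""
        nrm = np.linalg.norm(A, 2)
        h = T / n_steps
        E = expm(h * A)
        Ek = np.eye(A.shape[0], dtype=complex)
        H = np.zeros_like(Ek)
        for k in range(n_steps + 1):
            w = h if 0 < k < n_steps else h / 2.0
            H += w * (1.0 + k * h * nrm) ** (-2 * p) * (Ek.conj().T @ Ek)
            Ek = E @ Ek
        return H

    A = np.array([[-0.5, 2.0], [0.0, -1.0]])    # Hurwitz, so H_p exists for all p >= 0
    p = 1.0
    nrm = np.linalg.norm(A, 2)
    Hp, Hp_half = H_trunc(A, p), H_trunc(A, p + 0.5)
    lhs = Hp @ A + A.conj().T @ Hp
    rhs = -np.eye(2) + 2 * p * nrm * Hp_half
    print(np.linalg.norm(lhs - rhs))            # small, up to quadrature/truncation error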

Theorem 2.7 For the norm of the matrix H_p the following estimate from below holds:

    ||H_p|| ≥ (a_p ||A||)^{-1}.                                      (12)

Furthermore, if there exists at least one imaginary eigenvalue of the matrix A, then

    ||H_p|| ≥ (2p ||A||)^{-1}.                                       (13)

Note that the inequality (13) is stronger than (12) because for p ≠ 0

    a_p^{-1} = ∫_0^∞ (1 + s)^{-2p} e^{-2s} ds < ∫_0^∞ (1 + s)^{-2p-1} ds = 1/(2p).

Proof. The inequality (12) follows from (6), since

    ||H_p|| = max_{||v||=1} ⟨H_p v, v⟩.

We now prove (13). Let y(t) be a solution of the system (1); then

    (d/dt) ⟨H_p y(t), y(t)⟩ = ⟨(H_p A + A* H_p) y(t), y(t)⟩.         (14)

By (9) the equality (14) can be rewritten as

    (d/dt) ⟨H_p y(t), y(t)⟩ + ⟨y(t), y(t)⟩ = 2p ||A|| ⟨H_{p+1/2} y(t), y(t)⟩.

Using the estimate ⟨H_p y(t), y(t)⟩ ≤ ||H_p|| ||y(t)||^2 and Theorem 2.2 we obtain

    (d/dt) ⟨H_p y(t), y(t)⟩ + ||H_p||^{-1} ⟨H_p y(t), y(t)⟩ ≤ 2p ||A|| ⟨H_p y(t), y(t)⟩.

Rewrite this inequality in the form

    (d/dt) [ exp(t ||H_p||^{-1} - 2p ||A|| t) ⟨H_p y(t), y(t)⟩ ] ≤ 0.

Consequently,

    ⟨H_p y(t), y(t)⟩ ≤ exp(2p ||A|| t - t ||H_p||^{-1}) ⟨H_p y(0), y(0)⟩.

If there exists at least one imaginary eigenvalue of the matrix A, then there is a solution y(t) for which the quadratic form ⟨H_p y(t), y(t)⟩ does not decay as t → +∞. Hence, it is necessary that 2p ||A|| - ||H_p||^{-1} ≥ 0.  □

Theorem 2.8 Let H_q exist. Then for any p > q + 1/2 the following estimate holds:

    ||H_p|| ≤ ((2p - 1) ||A||)^{-1} (2 ||A|| ||H_q|| + 1).

Proof. As p - 1/2 > q, using Theorem 2.6 we have

    H_{p-1/2} A + A* H_{p-1/2} = -I + (2p - 1) ||A|| H_p.

By Theorem 2.2 we obtain

    ||H_p|| = ((2p - 1) ||A||)^{-1} ||H_{p-1/2} A + A* H_{p-1/2} + I||
            ≤ ((2p - 1) ||A||)^{-1} (||H_{p-1/2}|| ||A|| + ||A*|| ||H_{p-1/2}|| + 1)
            = ((2p - 1) ||A||)^{-1} (2 ||A|| ||H_{p-1/2}|| + 1)
            ≤ ((2p - 1) ||A||)^{-1} (2 ||A|| ||H_q|| + 1).  □

According to Theorems 2.2, 2.7 and 2.8, if for some p the matrix H_p = lim_{t→+∞} H_p(t) exists, then the family of norms {||H_{p+q}||}, q ≥ 0, is strictly decreasing in q. Moreover,

    ||H_{p+q}|| = O(1/q),   q → ∞.

Theorem 2.9 Let p be the minimal natural number such that the matrix H_p exists. Then for t → +∞ we have the following relations:

    ||H_p - H_p(t)|| = O(t^{-1}),                                    (15)
    ||H_{p+1/2} - H_{p+1/2}(t)|| = O(t^{-2}),                        (16)
    ||(H_p - H_p(t)) A + A* (H_p - H_p(t))|| = O(t^{-2}),            (17)

where H_p(t) is defined by (10).

The theorem will be proved in Section 3.
3 Properties of κ_p

We now establish some properties of the spectral characteristics κ_p defined by (5). We assume again that A ≠ 0.

Theorem 3.1 The following equalities hold:

    κ_p(A) = a_p ||A|| sup_{v≠0} ( ∫_0^∞ (1 + s||A||)^{-2p} ||e^{sA} v||^2 ds ) ||v||^{-2},    (18)

    κ_p(A) = κ_p(A/||A||).                                           (19)

Proof. The equality (18) follows immediately from the definitions of κ_p(A) and H_p. Let us prove (19). Since the asymptotic properties of the solutions of the systems

    dy/dt = A y   and   dz/dt = (A/||A||) z

are identical, κ_p(A/||A||) = ∞ if and only if κ_p(A) = ∞. If κ_p(A) and κ_p(A/||A||) are finite, then (19) follows from the definitions of these parameters. Indeed,

    κ_p(A) = a_p ||A|| ||H_p|| = a_p ||A|| || ∫_0^∞ (1 + s||A||)^{-2p} e^{sA*} e^{sA} ds ||
           = a_p || ∫_0^∞ (1 + τ)^{-2p} e^{τA*/||A||} e^{τA/||A||} dτ || = κ_p(A/||A||).  □
Theorem 3.2 If κ_p(A) < ∞, then

    1 < κ_q(A) < κ_p(A),   q > p.

Proof. Using the definition (5) and Theorem 2.2, we have κ_q(A) < κ_p(A) for q > p. By Theorem 2.1 the matrix H_q is Hermitian positive definite; therefore

    ||H_q|| = max_{||v||=1} ⟨H_q v, v⟩.

Taking into account (6), we obtain ||H_q|| ≥ (a_q ||A||)^{-1}. Hence, κ_q(A) ≥ 1. Since the family {κ_p(A)} is monotonically decreasing with increase of p, it follows that κ_q(A) > 1.  □

Theorem 3.3 If the matrix A has at least one eigenvalue λ_j with Re λ_j > 0, then

    κ_p(A) = ∞ for all p ≥ 0.

Proof. Let v_j be an eigenvector corresponding to λ_j. By (8) we have

    ⟨H_p v_j, v_j⟩ = ∫_0^∞ (1 + s||A||)^{-2p} e^{2s Re λ_j} ds ||v_j||^2.

Since Re λ_j > 0, the integral diverges. Consequently, κ_p(A) = ∞ for all p.  □
Theorem 3.4 If κ_N(A) = ∞, then A has at least one eigenvalue with positive real part.

Proof. We use an indirect proof. Assume that all eigenvalues have nonpositive real parts, i.e. Re λ_j ≤ 0, j = 1, ..., N. From the Gelfand-Shilov inequality (see, e.g., [13]) we have

    ||e^{tA}|| ≤ 1 + (2t||A||)/1! + ... + (2t||A||)^{N-1}/(N-1)!,   t ≥ 0.      (20)

That is,

    |⟨e^{tA} v, e^{tA} w⟩| ≤ ( 1 + (2t||A||)/1! + ... + (2t||A||)^{N-1}/(N-1)! )^2 ||v|| ||w||

for any vectors v, w. Hence, the quadratic form

    ⟨ ∫_0^t (1 + s||A||)^{-2N} e^{sA*} e^{sA} ds  v, w ⟩ = ∫_0^t (1 + s||A||)^{-2N} ⟨e^{sA} v, e^{sA} w⟩ ds

has a limit as t → +∞. Consequently, the matrix H_N exists and κ_N(A) must be finite. Thus, we have a contradiction.  □

Theorem 3.5 Let all eigenvalues of the matrix A have nonpositive real parts. If the maximal size of the Jordan blocks corresponding to the imaginary eigenvalues is p, then κ_{p-1}(A) = ∞ and κ_p(A) < ∞.

Proof. Let J be the Jordan canonical form of the matrix A, i.e. T^{-1} A T = J. Since the maximal size of the Jordan blocks corresponding to the imaginary eigenvalues is p, there exists a constant c > 0 such that the following estimate holds:

    ||e^{tA}|| = ||T e^{tJ} T^{-1}|| ≤ ||T|| ||T^{-1}|| ||e^{tJ}|| ≤ c (1 + t||A||)^{p-1},   t ≥ 0.

Using this inequality, we obtain

    |⟨e^{tA} v, e^{tA} w⟩| ≤ c^2 (1 + t||A||)^{2p-2} ||v|| ||w||

for any vectors v, w. Since for H_p(t) from (10)

    ⟨H_p(t) v, w⟩ = ∫_0^t (1 + s||A||)^{-2p} ⟨e^{sA} v, e^{sA} w⟩ ds,

it follows that the limit

    lim_{t→+∞} ⟨H_p(t) v, w⟩

exists. Consequently, the matrix H_p is well defined and κ_p(A) < ∞.

Now we prove that κ_{p-1}(A) = ∞. Assume that κ_{p-1}(A) < ∞. From (18) we have

    κ_{p-1}(A) = a_{p-1} ||A|| sup_{v≠0} ∫_0^∞ (1 + s||A||)^{-2p+2} ||e^{sA} v||^2 ds ||v||^{-2}
               = a_{p-1} ||A|| sup_{y≠0} ∫_0^∞ (1 + s||A||)^{-2p+2} ||T e^{sJ} y||^2 ds ||T y||^{-2}.

Hence, for any vector y ∈ E^N, y ≠ 0,

    a_{p-1} ||A|| ∫_0^∞ (1 + s||A||)^{-2p+2} ||T e^{sJ} y||^2 ds ||T y||^{-2} ≤ κ_{p-1}(A) < ∞.

Since the maximal size of the Jordan blocks corresponding to the imaginary eigenvalues is p, there exists a vector y_0 such that

    T e^{tJ} y_0 = e^{t λ_j} (t^{p-1} v_1 + ... + v_p),   Re λ_j = 0.

Consequently, the integral

    ∫_0^∞ (1 + s||A||)^{-2p+2} ||T e^{sJ} y_0||^2 ds

diverges. Thus, we obtain a contradiction.  □
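The role of the maximal Jordan block size can also be observed numerically. For the nilpotent 2 x 2 Jordan block below (a single imaginary eigenvalue of block size 2), the truncated integrals H_p(T) grow without bound for p = 1 but level off for p = 2, in agreement with Theorems 2.3 and 3.5. A small sketch assuming Python with NumPy/SciPy; the matrix, horizons and step count are illustrative only.

    import numpy as np
    from scipy.linalg import expm

    def H_p_norm(A, p, T, n_steps=4000):
        """||H_p(T)|| with H_p(T) = int_0^T (1 + t||A||)^{-2p} e^{tA*} e^{tA} dt (trapezoidal rule)."""
        nrm = np.linalg.norm(A, 2)
        h = T / n_steps
        E = expm(h * A)
        Ek = np.eye(A.shape[0])
        H = np.zeros_like(Ek)
        for k in range(n_steps + 1):
            w = h if 0 < k < n_steps else h / 2.0
            H += w * (1.0 + k * h * nrm) ** (-2 * p) * (Ek.T @ Ek)
            Ek = E @ Ek
        return np.linalg.norm(H, 2)

    A = np.array([[0.0, 1.0], [0.0, 0.0]])    # imaginary (zero) eigenvalue, Jordan block of size 2
    for T in (10.0, 100.0, 1000.0):
        print(T, H_p_norm(A, 1, T), H_p_norm(A, 2, T))
    # ||H_1(T)|| keeps growing (H_1 does not exist), while ||H_2(T)|| levels off (H_2 exists).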

The proof of Theorem 2.5 can now be obtained as a consequence of this theorem.

Theorem 3.6 If κ_{p-1}(A) = ∞ and κ_p(A) < ∞ for natural p, then A has imaginary eigenvalues. Moreover, the maximal size of the Jordan blocks corresponding to the imaginary eigenvalues is p.

Proof. It is known that all eigenvalues of the matrix A have negative real parts if and only if κ_0(A) < ∞; since κ_{p-1}(A) = ∞, it follows (using Theorem 3.2 when p > 1) that not all eigenvalues of A can have negative real parts. On the other hand, according to Theorem 3.3, the matrix A has no eigenvalues with positive real parts. Therefore, the spectrum of the matrix A is the union of two sets: the first one consists of all eigenvalues with negative real parts (it may be empty), the second one consists of all imaginary eigenvalues (A may have only imaginary eigenvalues), and the second set is not empty.

Let the maximal size of the Jordan blocks corresponding to the imaginary eigenvalues be p + k. From Theorem 3.5 we have κ_{p+k-1}(A) = ∞ and κ_{p+k}(A) < ∞. Since κ_p(A) < ∞, it follows that k ≠ 1. Taking into account Theorem 3.2, we obtain that k cannot be positive. On the other hand, if k < 0, then k cannot be equal to -1, since κ_{p-1}(A) = ∞. If k ≤ -2, then by Theorem 3.2 we have κ_{p-1}(A) < κ_{p+k}(A) < ∞. This leads to a contradiction. Hence, k can only be zero.  □

The proof of Theorem 2.4 follows immediately from this theorem.

Theorem 3.7 If κ_{p-1}(A) = ∞ and κ_p(A) < ∞ for natural p, then κ_{p-1+ε}(A) < ∞ for any ε > 1/2.

Proof. From Theorem 3.1

    κ_{p-1+ε}(A) = a_{p-1+ε} ||A|| sup_{v≠0} ∫_0^∞ (1 + s||A||)^{-2p+2-2ε} ||e^{sA} v||^2 ds ||v||^{-2}.

Using Theorem 3.6, we have

    ||e^{sA} v|| ≤ c (1 + s||A||)^{p-1} ||v||,   s ≥ 0.              (21)

This leads to κ_{p-1+ε}(A) < ∞ for any ε > 1/2.  □

Theorem 3.8 Let p be a nonnegative integer. Then κ_p(A) < ∞ if and only if κ_{p+1/2}(A) < ∞.
Proof. Taking into account Theorem 3.2, it is sufficient to verify that if κ_{p+1/2}(A) < ∞, then κ_p(A) < ∞.

First we consider the case p = 0. Let κ_{1/2}(A) < ∞. Suppose that κ_0(A) = ∞. From Theorem 3.2 we have κ_1(A) < ∞. Consequently, by Theorem 3.6 the matrix A has at least one imaginary eigenvalue λ_j. Let v_j, ||v_j|| = 1, be a corresponding eigenvector. Using Theorem 3.1, we obtain the inequality

    a_{1/2} ||A|| ∫_0^∞ (1 + s||A||)^{-1} ||e^{sA} v_j||^2 ds ||v_j||^{-2} ≤ κ_{1/2}(A) < ∞,

that is,

    a_{1/2} ||A|| ∫_0^∞ (1 + s||A||)^{-1} ds ≤ κ_{1/2}(A) < ∞.

But the integral is divergent. We have a contradiction.

Consider now the case when p is natural. We assume that κ_p(A) = ∞. By Theorem 3.2, κ_{p+1/2}(A) < ∞ implies κ_{p+1}(A) < ∞. Therefore, by Theorem 3.6 there exist λ_j with Re λ_j = 0 and linearly independent vectors v_1, v_2, ..., v_{p+1} such that the vector function

    y(t) = e^{t λ_j} (t^p v_1 + t^{p-1} v_2 + ... + v_{p+1})

is a solution of the system (1). Using Theorem 3.1, we have the estimate

    a_{p+1/2} ||A|| ∫_0^∞ (1 + s||A||)^{-2p-1} ||s^p v_1 + s^{p-1} v_2 + ... + v_{p+1}||^2 ds ||v_{p+1}||^{-2} ≤ κ_{p+1/2}(A).

It can be rewritten as

    ( ∫_0^∞ (1 + s||A||)^{-2p-1} ||s^p v_1 + s^{p-1} v_2 + ... + v_{p+1}||^2 ds )^{1/2}
        ≤ ( κ_{p+1/2}(A) / (a_{p+1/2} ||A||) )^{1/2} ||v_{p+1}||.

Taking into account the Minkowski inequality, we obtain

    ( ∫_0^∞ (1 + s||A||)^{-2p-1} |s|^{2p} ds )^{1/2} ||v_1||
        ≤ Σ_{j=0}^{p-1} ( ∫_0^∞ (1 + s||A||)^{-2p-1} |s|^{2j} ds )^{1/2} ||v_{p+1-j}||
          + ( κ_{p+1/2}(A) / (a_{p+1/2} ||A||) )^{1/2} ||v_{p+1}||.

But all summands on the right-hand side are finite, while the integral on the left is divergent. Thus, we have a contradiction. Hence, κ_p(A) < ∞.  □

Finally, we prove Theorem 2.9.

Proof. From the definitions of the matrices H_p and H_p(t) we obtain

    H_p - H_p(t) = ∫_t^∞ (1 + s||A||)^{-2p} e^{sA*} e^{sA} ds.

By the conditions of the theorem, p is the minimal natural number such that the matrix H_p exists, i.e. κ_{p-1}(A) = ∞ and κ_p(A) < ∞. Then by Theorem 3.6 the inequality (21) holds for any vector v. Hence, estimating the integral

    ∫_t^∞ (1 + s||A||)^{-2p} ||e^{sA} v||^2 ds,

we obtain (15). Formulae (16) and (17) can be proved in a similar way.  □

4 On spectral properties of matrices and qualitative behaviour of solutions of systems of linear ordinary differential equations
This Section includes some spectral criteria and theorems about qualitative properties of the solutions of the system (1). In particular, it contains new criteria of asymptotic stability and of stability in the sense of Lyapunov of the solutions of (1). First, we formulate the spectral criteria for the matrix A.

Theorem 4.1 The spectrum of A belongs to the open left half-plane {Re λ < 0} if and only if there exists p ∈ [0, 1/2] such that κ_p(A) < ∞.

Proof. If all eigenvalues λ_j of A have negative real parts, then by the Lyapunov theorem κ_0(A) < ∞. We now prove that if κ_p(A) < ∞ for some 0 ≤ p ≤ 1/2, then the spectrum of A is contained in the open left half-plane. If κ_{1/2}(A) < ∞, then by Theorem 3.8 κ_0(A) < ∞; hence, according to the Lyapunov theorem, the spectrum of A belongs to the open left half-plane. If there exists p ∈ (0, 1/2) such that κ_p(A) < ∞, then from Theorem 3.2 we obtain κ_{1/2}(A) < ∞, i.e. the previous case.  □

Theorem 4.2 The spectrum of A belongs to the closed left half-plane {Re λ ≤ 0} if and only if there exists p ≥ 0 such that κ_p(A) < ∞.

Proof. If all eigenvalues λ_j of A have Re λ_j ≤ 0, then by Theorem 3.5 κ_p(A) < ∞ for the natural p which is the maximal size of the Jordan blocks corresponding to the imaginary eigenvalues. Conversely, if κ_p(A) < ∞ for some p ≥ 0, then from Theorem 3.3 the matrix A cannot have any eigenvalues with positive real parts.  □

We now formulate some criteria [5, 6] of asymptotic stability and of stability in the sense of Lyapunov based upon the properties of the spectral characteristics κ_p(A), p ≥ 0.

Theorem 4.3 The null solution of the system (1) is asymptotically stable for t > 0 if and only if there exists p ∈ [0, 1/2] such that κ_p(A) < ∞.

Proof. Recall that the null solution of (1) is asymptotically stable for t > 0 if and only if all eigenvalues of A are contained in the open left half-plane. Therefore, the proof follows immediately from Theorem 4.1.  □

For p = 0 Theorem 4.3 is equivalent to the Lyapunov theorem about asymptotic stability because κ_0(A) = κ(A). In the case p > 0 the present criterion of asymptotic stability is new. Note that, according to Theorem 3.2, κ_p(A) < κ_0(A). Therefore, it seems necessary to use the characteristics κ_p(A), 0 < p ≤ 1/2, in order to obtain stronger results on a computer. This is confirmed by computational experiments. Thus, Section 6 includes a simple example which shows that there exist matrices A whose spectra belong to the open left half-plane but for which the ratio κ_0(A)/κ_{1/2}(A) can be very large.

Theorem 4.4 Let κ_0(A) = ∞. The null solution of the system (1) is stable in the sense of Lyapunov for t > 0 if and only if there exists p ∈ (1/2, 3/2] such that κ_p(A) < ∞.

Proof. According to the classical spectral criterion of stability of solutions for t > 0 (see, e.g., [12]), the null solution of (1) is stable in the sense of Lyapunov for t > 0 if and only if all eigenvalues λ_j of A have Re λ_j ≤ 0 and only one-dimensional Jordan blocks correspond to the imaginary eigenvalues. Taking into account Theorems 3.5 and 3.6, this is equivalent to κ_1(A) < ∞. Using Theorems 3.2, 3.7 and 3.8, we obtain that κ_1(A) < ∞ if and only if κ_p(A) < ∞ for 1/2 < p ≤ 3/2.  □

The following theorem contains an integral representation of the solutions of (1). This representation characterizes their asymptotic properties as t → +∞.

Theorem 4.5 Let κ_{p-1}(A) = ∞ and κ_p(A) < ∞ for some natural p. Then for a solution y(t) of the system (1) the following relation holds:

    ||y(t)||^2 = (1 + t||A||)^{2p} [ ⟨((H_p(t) - H_p) A + A* (H_p(t) - H_p)) y(0), y(0)⟩
                 + 2p ||A|| ⟨(H_{p+1/2} - H_{p+1/2}(t)) y(0), y(0)⟩ ],           (22)

where H_p is defined by (4) and H_p(t) by (10).

Proof. Since a solution y(t) of (1) may be written in the form y(t) = e^{tA} y(0), it follows from (11) that

    ⟨(H_p(t) A + A* H_p(t)) y(0), y(0)⟩
        = (1 + t||A||)^{-2p} ||y(t)||^2 - ||y(0)||^2 + 2p ||A|| ⟨H_{p+1/2}(t) y(0), y(0)⟩.       (23)

From Theorem 2.6

    I = 2p ||A|| H_{p+1/2} - (H_p A + A* H_p),

hence

    ||y(0)||^2 = 2p ||A|| ⟨H_{p+1/2} y(0), y(0)⟩ - ⟨(H_p A + A* H_p) y(0), y(0)⟩.

Substituting ||y(0)||^2 into (23), we obtain (22).  □

If the conditions of Theorem 4.5 are fulfilled, then from (22) and Theorem 2.9 we obtain that the solution y(t) of (1) satisfies the limit relation

    ||y(t)|| = O(t^{p-1})   for t → +∞.                              (24)

By Theorems 3.5 and 3.6, p is the maximal size of the Jordan blocks corresponding to the imaginary eigenvalues of A. Therefore, (24) can also be obtained from the representation of the matrix exponential

    e^{tA} = T e^{tJ} T^{-1},

where J is the Jordan canonical form of A. It should be noted here that one can verify the conditions of Theorem 4.5 without knowledge of the matrix J. This will follow from the computational algorithm described in Section 5.

Finally, we establish uniform estimates for the solutions of (1) on the half-line t > 0. These estimates will be essentially used in Section 5 for the foundation of the algorithm for κ_p(A).

Theorem 4.6 Let κ_{p-1}(A) = ∞ and κ_p(A) < ∞ for some natural p. Then for a solution y(t) of the system (1) the following estimate holds:

    ⟨H_p y(t), y(t)⟩ ≤ (1 + t||A||)^{2(p-1+ε)} [ exp(-t/||H_p||) ⟨H_p y(0), y(0)⟩
                        + 2(p + 1 - ε) ||A|| ||H_p|| ⟨H_{p-1+ε} y(0), y(0)⟩ ],           (25)

    t ≥ 0,   ε > 1/2.

Proof. Let us consider the form

    h(t) = (1 + t||A||)^{-2(p-1+ε)} ⟨H_p y(t), y(t)⟩

for an arbitrary solution y(t). Since

    (d/dt) h(t) = (1 + t||A||)^{-2(p-1+ε)} (d/dt) ⟨H_p y(t), y(t)⟩ - 2(p - 1 + ε) ||A|| (1 + t||A||)^{-1} h(t)

and

    (d/dt) ⟨H_p y(t), y(t)⟩ = ⟨(H_p A + A* H_p) y(t), y(t)⟩,

from Theorem 2.6 we have

    (d/dt) h(t) + (1 + t||A||)^{-2(p-1+ε)} ||y(t)||^2
        = 2p ||A|| (1 + t||A||)^{-2(p-1+ε)} ⟨H_{p+1/2} y(t), y(t)⟩ - 2(p - 1 + ε) ||A|| (1 + t||A||)^{-1} h(t).

We denote by f(t) the right-hand side of this equality. Using the inequality

    ⟨H_p y(t), y(t)⟩ ≤ ||H_p|| ||y(t)||^2,

we obtain

    (d/dt) h(t) + ||H_p||^{-1} h(t) ≤ f(t)

or

    (d/dt) ( exp(t/||H_p||) h(t) ) ≤ exp(t/||H_p||) f(t).

Hence,

    h(t) ≤ exp(-t/||H_p||) h(0) + ∫_0^t exp(-(t - s)/||H_p||) f(s) ds.

We now estimate the function f(t). By the monotonicity property

    ⟨H_{p+1/2} y(t), y(t)⟩ ≤ ⟨H_p y(t), y(t)⟩

and from the definition of h(t) we have

    f(t) ≤ 2p ||A|| (1 - (1 + t||A||)^{-1}) h(t) + 2(1 - ε) ||A|| (1 + t||A||)^{-1} h(t)
         ≤ 2p ||A|| h(t) + 2(1 - ε) ||A|| (1 + t||A||)^{-1} h(t)
         ≤ 2(p + 1 - ε) ||A|| h(t).

This leads to the inequality

    h(t) ≤ exp(-t/||H_p||) h(0) + 2(p + 1 - ε) ||A|| ∫_0^t exp(-(t - s)/||H_p||) h(s) ds
         ≤ exp(-t/||H_p||) h(0) + 2(p + 1 - ε) ||A|| ∫_0^t (1 + s||A||)^{-2(p-1+ε)} ⟨H_p y(s), y(s)⟩ ds.

Since y(s) = e^{sA} y(0), by Theorem 3.7 of the previous section we obtain

    h(t) ≤ exp(-t/||H_p||) h(0) + 2(p + 1 - ε) ||A|| ||H_p|| ⟨H_{p-1+ε} y(0), y(0)⟩.

Consequently, from the definition of h(t) we have (25).  □

Corollary 4.1 The following estimate holds:

    ||e^{tA}||^2 ≤ (1 + t||A||)^{2(p-1+ε)} (λ_max^p / λ_min^p) [ exp(-t/λ_max^p) + 2(p + 1 - ε) λ_max^{p-1+ε} ||A|| ],          (26)

    t ≥ 0,   1 ≥ ε > 1/2,

where λ_max^p and λ_min^p are the maximal and the minimal eigenvalues of the matrix H_p respectively, and λ_max^{p-1+ε} is the maximal eigenvalue of the matrix H_{p-1+ε}.

The estimate (26) is an analog of the Gelfand-Shilov estimate (20).
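Theorems 4.1-4.4 suggest a rough numerical screening: monitor the growth of the truncated norms ||H_{1/2}(T)|| and ||H_{3/2}(T)||; if the first stabilizes, the spectrum lies in {Re λ < 0}; if only the second stabilizes, the null solution can be at most stable in the sense of Lyapunov. The sketch below (Python with NumPy/SciPy assumed) compares only two horizons and is a heuristic illustration, not the guaranteed-accuracy algorithm discussed in the Conclusion; the horizons and tolerance are arbitrary choices.

    import numpy as np
    from scipy.linalg import expm

    def trunc_H_norm(A, p, T, n_steps=4000):
        """||int_0^T (1 + t||A||)^{-2p} e^{tA*} e^{tA} dt|| by the trapezoidal rule."""
        nrm = np.linalg.norm(A, 2)
        h = T / n_steps
        E, Ek = expm(h * A), np.eye(A.shape[0], dtype=complex)
        H = np.zeros_like(Ek)
        for k in range(n_steps + 1):
            w = h if 0 < k < n_steps else h / 2.0
            H += w * (1.0 + k * h * nrm) ** (-2 * p) * (Ek.conj().T @ Ek)
            Ek = E @ Ek
        return np.linalg.norm(H, 2)

    def screen(A, T1=200.0, T2=400.0, tol=1.05):
        """Heuristic use of Theorems 4.1-4.4: compare ||H_p(T)|| at two horizons."""
        stable_half = trunc_H_norm(A, 0.5, T2) < tol * trunc_H_norm(A, 0.5, T1)
        stable_three_half = trunc_H_norm(A, 1.5, T2) < tol * trunc_H_norm(A, 1.5, T1)
        if stable_half:
            return "kappa_{1/2} appears finite: asymptotically stable (Theorem 4.3)"
        if stable_three_half:
            return "kappa_{3/2} appears finite: stable in the sense of Lyapunov (Theorem 4.4)"
        return "no stability detected at these horizons"

    print(screen(np.array([[0.0, 1.0], [-1.0, 0.0]])))   # purely imaginary eigenvalues, diagonalizable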

5 Numerical algorithm for κ_p(A)

From the definitions of the numerical characteristics κ_p(A) it follows that κ_p(A) < ∞ if and only if the integral (4) converges. Since the matrix

    ∫_0^t (1 + s||A||)^{-2p} e^{sA*} e^{sA} ds

is Hermitian positive definite, in order to prove convergence of the integral (4) it is sufficient to obtain the estimate from above

    ||H_p(t)|| ≤ const,   t ≥ 0,

where H_p(t) is defined by (10). Note that by Theorem 3.1 one can suppose that ||A|| = 1. We further assume that p is a natural number (see the case p = 0 in [2]).

Consider the sequence {H_p(m)}, where

    H_p(m) = ∫_0^m (1 + s)^{-2p} e^{sA*} e^{sA} ds,   ||A|| = 1.     (27)

Then

    κ_p(A) = a_p lim_{m→∞} ||H_p(m)||.

Rewrite the integral (27) in the form

    H_p(m) = Σ_{k=1}^m ∫_{k-1}^k (1 + s)^{-2p} e^{sA*} e^{sA} ds.

It is obvious that the matrices

    ∫_{k-1}^k e^{sA*} e^{sA} ds,   ∫_{k-1}^k (1 + s)^{-2p} e^{sA*} e^{sA} ds

are Hermitian positive definite and the following estimates hold:

    (1 + k)^{-2p} ∫_{k-1}^k e^{sA*} e^{sA} ds < ∫_{k-1}^k (1 + s)^{-2p} e^{sA*} e^{sA} ds < k^{-2p} ∫_{k-1}^k e^{sA*} e^{sA} ds,   k ≥ 1, p ≥ 0.

Therefore, for the matrix H_p(m) we have the estimates

    Σ_{k=1}^m (1 + k)^{-2p} ∫_{k-1}^k e^{sA*} e^{sA} ds ≤ H_p(m) ≤ Σ_{k=1}^m k^{-2p} ∫_{k-1}^k e^{sA*} e^{sA} ds.

As

    0 < k^{-2p} - (1 + k)^{-2p} ≤ k^{-2p} (1 - 1/4^p),

the convergence of lim_{m→∞} ||H_p(m)|| is equivalent to the convergence of the series

    Σ_{k=1}^∞ k^{-2p} ∫_{k-1}^k e^{sA*} e^{sA} ds.                   (28)

Consider the integrals

    B_k = ∫_{k-1}^k e^{sA*} e^{sA} ds,   k ≥ 1.

Having calculated B_1 and e^A, we can determine the other integrals B_k, k ≥ 2, by means of the relations

    B_k = e^{(k-1)A*} B_1 e^{(k-1)A}.

Hence, we have a simple formula for the calculation of the partial sum S_m of the series (28):

    S_m = Σ_{k=1}^m k^{-2p} e^{(k-1)A*} B_1 e^{(k-1)A}.              (29)
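Formulae (27)-(29) translate directly into code: B_1 is computed once by quadrature on [0, 1], the remaining B_k follow from the recursion B_k = e^{(k-1)A*} B_1 e^{(k-1)A}, and ||H_p(m)|| is bracketed by the two partial sums displayed above. A sketch assuming Python with NumPy/SciPy; the quadrature used for B_1 is an implementation choice, not prescribed by the report.

    import numpy as np
    from scipy.linalg import expm

    def B1(A, n_steps=2000):
        """B_1 = int_0^1 e^{sA*} e^{sA} ds by the trapezoidal rule."""
        h = 1.0 / n_steps
        E, Es = expm(h * A), np.eye(A.shape[0], dtype=complex)
        B = np.zeros_like(Es)
        for k in range(n_steps + 1):
            w = h if 0 < k < n_steps else h / 2.0
            B += w * (Es.conj().T @ Es)
            Es = E @ Es
        return B

    def partial_sums(A, p, m):
        """Norms of the lower/upper partial sums bounding ||H_p(m)||; the upper one is S_m of (29)."""
        A = A / np.linalg.norm(A, 2)             # Theorem 3.1 allows the normalisation ||A|| = 1
        B, E = B1(A), expm(A)
        Ek = np.eye(A.shape[0], dtype=complex)   # e^{(k-1)A} for k = 1
        S_low = np.zeros_like(Ek)
        S_up = np.zeros_like(Ek)
        for k in range(1, m + 1):
            Bk = Ek.conj().T @ B @ Ek            # B_k = e^{(k-1)A*} B_1 e^{(k-1)A}
            S_low += (1.0 + k) ** (-2 * p) * Bk
            S_up += float(k) ** (-2 * p) * Bk
            Ek = E @ Ek
        return np.linalg.norm(S_low, 2), np.linalg.norm(S_up, 2)

    print(partial_sums(np.array([[-1.0, 4.0], [0.0, -2.0]]), p=1, m=50))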

However, for solving real problems it may be better to estimate the characteristic κ_{p+1/2}(A) instead of κ_p(A). By Theorem 3.8 both characteristics give the same information about the spectrum of the matrix A. On the other hand, by Theorem 3.2 we have 1 < κ_{p+1/2}(A) < κ_p(A). Therefore, it is possible that for a certain matrix A we will be able to calculate κ_{p+1/2}(A) but not κ_p(A) by means of a computer, because the latter value is equal to infinity for the given computer. Section 6 contains an example which shows that such a situation is quite real even for a matrix of order two.

For κ_{p+1/2}(A) we use the same scheme as for κ_p(A). In this case it is necessary to consider the integrals

    H_{p+1/2}(m) = ∫_0^m (1 + s)^{-2p-1} e^{sA*} e^{sA} ds,   ||A|| = 1,

instead of the integrals (27), and

    κ_{p+1/2}(A) = a_{p+1/2} lim_{m→∞} ||H_{p+1/2}(m)||.

We obtain analogously that the existence of the limit lim_{m→∞} ||H_{p+1/2}(m)|| is equivalent to the convergence of the series

    Σ_{k=1}^∞ k^{-2p-1} ∫_{k-1}^k e^{sA*} e^{sA} ds,                 (30)

and for the partial sum S'_m we have

    S'_m = Σ_{k=1}^m k^{-2p-1} e^{(k-1)A*} B_1 e^{(k-1)A}.

Note that if κ_{p-1}(A) = ∞ and κ_p(A) < ∞ for natural p, then one can point out a convergent majorant series for

    Σ_{k=1}^∞ || k^{-2p-1} ∫_{k-1}^k e^{sA*} e^{sA} ds ||.           (31)

Indeed, according to the inequality (26), we have the estimates

    ||e^{(k-1)A}||^2 ≤ k^{2(p-1+ε)} (λ_max^p / λ_min^p) [ e^{-(k-1)/λ_max^p} + 2(p + 1 - ε) λ_max^{p-1+ε} ||A|| ]
                    ≤ c(ε) k^{2(p-1+ε)},   1 > ε > 1/2.

Consequently,

    || k^{-2p-1} ∫_{k-1}^k e^{sA*} e^{sA} ds || = || k^{-p-1/2} e^{(k-1)A*} B_1 k^{-p-1/2} e^{(k-1)A} || ≤ c(ε) ||B_1|| k^{2ε-3}.

Therefore, for any 1 > ε > 1/2, the series

    Σ_{k=1}^∞ c(ε) ||B_1|| k^{2ε-3}

is a convergent majorant for (31). Hence, if κ_{p-1}(A) = ∞ and κ_p(A) < ∞, then one can estimate the convergence rate of the series (30). The described algorithm is not optimal, but it demonstrates the principal possibility of obtaining estimates of the spectral characteristics κ_p(A) by means of a computer.

6 Numerical Examples
In this Section we illustrate the practical efficiency of the spectral characteristics κ_p(A) for the study of stability of solutions of systems of ordinary differential equations on a computer.

Example 1. Consider the matrix

    A = ( -0.001      b    )
        (    0     -0.001  )

where b is a parameter. It is clear that the eigenvalues of A are equal to -0.001 for any b. It is interesting to note that the values of the characteristics κ_0(A) and κ_{1/2}(A) increase with the growth of b. The following table contains the orders of κ_0(A) and κ_{1/2}(A) with respect to the parameter b.

    b      | order of κ_0(A) | order of κ_{1/2}(A)
    1      | 10^9            | 10^6
    10     | 10^12           | 10^8
    10^2   | 10^15           | 10^10
    10^3   | 10^18           | 10^12
    10^4   | 10^21           | 10^14

One can see that κ_0(A) grows faster than κ_{1/2}(A) (see Figure 1). Thus, for example,

    κ_0(A)/κ_{1/2}(A) ≈ 10^3 for b = 1,   but   κ_0(A)/κ_{1/2}(A) ≈ 10^7 for b = 10^4.

Hence, the use of the characteristic κ_{1/2}(A) instead of κ_0(A) permits obtaining more rigorous results when solving the problem of asymptotic stability by means of a computer.

Examples 2-6 illustrate the possibility of picking out the asymptotic stability zones for systems of ordinary differential equations depending on two parameters. Figures 2-11 represent the asymptotic stability regions calculated by means of a computer using the characteristics κ_0(A) and κ_{1/2}(A): on the figures with even numbers one can see the asymptotic stability regions obtained by using κ_0(A), on the figures with odd numbers those obtained by using κ_{1/2}(A). We also indicate the character of variation of these characteristics:

    point type | order of κ_0(A) (κ_{1/2}(A))
    .          | 10^1
    o          | 10^2
    *          | 10^3
    +          | 10^4
    x          | 10^5

It should be noted that the orders of the parameters κ_0(A) and κ_{1/2}(A) coincide in the interior subregions. However, near the boundaries the values of κ_0(A) are greater than the values of κ_{1/2}(A) by several orders (see Figures 2-11).

In Examples 2-5 we consider the system of ordinary differential equations

    x'' + e R x' + (K + v F) x = 0,                                  (32)

where R, K, F are square matrices of order 4 and v, e are parameters. In particular, the system of Lagrangian equations is reduced to a system of this type (see, e.g., [11]). The system (32) can be rewritten in the form

    dy/dt = A(v, e) y,

where

    y = ( x  ),        A(v, e) = (      0            I   )
        ( x' )                   ( -(K + vF)      -e R   ).

Calculating κ_0(A(v, e)) and κ_{1/2}(A(v, e)) for each fixed pair (v, e), we indicate the asymptotic stability zones in the ranges of variation of these parameters.
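The scan of the (v, e) plane can be organized as follows: for every grid point the block matrix A(v, e) is assembled and κ_0(A(v, e)) is computed from the Lyapunov equation (3), with failed or non-positive-definite solutions marking instability. In the sketch below (Python with NumPy/SciPy assumed) K, R, F are placeholders to be replaced by the matrices of Examples 2-5, and the grid and the threshold are illustrative; κ_{1/2} would instead be estimated with the truncated integrals of Section 5.

    import numpy as np
    from scipy.linalg import solve_continuous_lyapunov

    def A_of(v, e, K, R, F):
        """Companion form of x'' + e R x' + (K + v F) x = 0, i.e. the matrix A(v, e)."""
        n = K.shape[0]
        return np.block([[np.zeros((n, n)), np.eye(n)],
                         [-(K + v * F),     -e * R]])

    def kappa_0(A):
        try:
            H = solve_continuous_lyapunov(A.conj().T, -np.eye(A.shape[0]))
        except Exception:
            return np.inf
        if np.min(np.linalg.eigvalsh((H + H.conj().T) / 2)) <= 0:
            return np.inf                         # H not positive definite: A is not Hurwitz
        return 2 * np.linalg.norm(A, 2) * np.linalg.norm(H, 2)

    # Placeholder data; substitute the K, R, F of Examples 2-5.
    K, R, F = np.diag([1.0, 2.0, 3.0, 4.0]), np.eye(4), np.zeros((4, 4))
    stable = [(v, e) for v in np.linspace(-1.0, 4.0, 26)
                     for e in np.linspace(0.0, 1.5, 16)
                     if kappa_0(A_of(v, e, K, R, F)) < 1e6]   # crude threshold
    print(len(stable), "grid points in the (rough) asymptotic stability zone")

Plotting the recorded points gives, qualitatively, the kind of stability regions shown in Figures 2-11.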

Example 2.

    K = diag(1, 2, 3, 4),

    R = (  1   -0.5    0     0  )
        ( -0.5   2     0     0  )
        (  0     0     3    0.5 )
        (  0     0    0.5    4  ),

    F = ( 0  0  2   0 )
        ( 0  0  3  -2 )
        ( 2  2  0   0 )
        ( 0  2  0   4 ).

Example 3.

    K = diag(1, 2, 3, 4),

    R = (  1  -1   0    2  )
        ( -1   2   1    0  )
        (  0   1   3   0.5 )
        (  2   0  0.5   4  ),

    F = ( 1  0  4  0 )
        ( 0  2  0  0 )
        ( 4  0  3  0 )
        ( 0  0  0  4 ).

Example 4.

    K = diag(0.5, 2, 2, 0.5),

    R = (  1  -1   -2    0  )
        ( -1   2    1    0  )
        ( -2  -1    3   0.5 )
        (  0   0  -0.5   4  ),

    F = ( 1  0  2  0 )
        ( 0  2  0  0 )
        ( 2  0  3  1 )
        ( 0  0  1  4 ).
Example 5.

    K = diag(10, 1, 0.5, 0.5),

    R = (  2  -1   -2    1  )
        ( -1   2    1    0  )
        ( -2  -1    2   0.5 )
        (  1   0  -0.5   2  ),

    F = (  5   0   7  -1 )
        (  0   1   0   0 )
        (  1   0   1  -1 )
        ( -1  -5  -1   5 ).

Example 6. Consider the system

    D dy/dt + (A_1 + (v - e) D) y - A_2 x = 0,
    e A_2 y - (A_1 + v^2 D) x = 0,

where

A_1 is the 12 x 12 band matrix with

    (A_1)_{kk} = a_k^1,   (A_1)_{k,k+1} = (A_1)_{k+1,k} = b_k^1,   (A_1)_{k,k+4} = (A_1)_{k+4,k} = c_k^1,

all other entries being zero, where

    (a_1^1, ..., a_12^1) = (0.67, 1.22, 1.33, 0.55, 1.22, 2.67, 3.11, 1.44, 0.56, 1.44, 1.78, 0.89),
    (b_1^1, ..., b_11^1) = (-0.33, -0.44, -0.44, 0, -0.44, -0.89, -0.89, 0, -0.11, -0.44, -0.44),
    (c_1^1, ..., c_8^1)  = (0.67, 1.22, 1.33, 0.55, 1.22, 2.67, 3.11, -0.44).

A_2 is the 12 x 12 matrix with

    (A_2)_{kk} = a_k^2,   (A_2)_{k+4,k} = c_k^2,

all other entries being zero, where

    (a_1^2, ..., a_12^2) = (-1, -1.33, -1.33, -0.33, -1.33, -2.67, -2.67, -1.33, 0, 0, 0, 0),
    c_k^2 = -a_k^2,   k = 1, ..., 8,

and

    D = diag(3, 4, 4, 1, 4, 8, 8, 4, 1, 4, 4, 3).

Systems of this type arise in problems of convection in geological structures (see, e.g., [10]).
7 Conclusion
In this paper we have presented the approach of [4] for solving the problem of characterizing the location of the spectrum of a matrix A in the closed left half-plane. This approach is based upon the use of the family of spectral characteristics κ_p(A), p ≥ 0. In our opinion, it is necessary to continue both theoretical and applied research in this direction. At present, the following investigations can be started.

I. Algorithm with a guaranteed accuracy. To solve concrete problems with the help of a computer it is necessary to take into account the structure of the machine representation of real numbers. The general peculiarity of all computers is that any number stored in a computer or generated in intermediate computations is indistinguishable from any other one sufficiently close to it. To elaborate an algorithm with a guaranteed accuracy we must take this effect into consideration. Therefore, it is necessary to investigate in more detail the dependence of the properties of the characteristics κ_p(A) on A and p. In particular, it is necessary to elaborate a perturbation theory for these parameters, i.e. to study the character of the changes of κ_p(A) under small perturbations of the elements of A and of p.

II. Efficiency of the algorithm. The algorithm of Section 5 is preliminary. It demonstrates the principal possibility of estimating the spectral characteristics κ_p on a computer. This is confirmed by a series of experiments. However, solving stability problems for systems depending on parameters requires much computer time. There now exists a real possibility for the elaboration of an optimal algorithm. In particular, one can create a more efficient algorithm than that of Section 5. One can also investigate analytic properties of κ_p(A) in order to determine the boundaries of the stability zones by computing values of κ_p(A) on a coarse grid in the range of variation of the parameters.

III. Solving some problems of linear algebra. At present, using the approach from [4], the problems about the location of a matrix spectrum on a line, a strip, an angle or a convex polygon (both open and closed) can be solved. For each of these problems one can introduce spectral characteristics analogous to κ_p. Using the properties of these characteristics one can propose a justified algorithm for their computation on a computer.

The investigations were conducted at the Institute of Mathematics, Siberian Branch of the Russian Academy of Sciences, Novosibirsk, Russia, and at IRISA, Rennes, France. We express our gratitude to S.K. Godunov and B. Philippe for useful discussions, and to J-F. Carpraux and A.N. Malyshev for editing the French and English texts.

Figure 1: Graphs of the characteristics κ_0(A) and κ_{1/2}(A) versus the parameter b.

Figure 2: Asymptotic stability zone computed by using κ_0(A) (parameter v versus parameter e).

Figure 3: Asymptotic stability zone computed by using κ_{1/2}(A) (parameter v versus parameter e).

Figure 4: Asymptotic stability zone computed by using κ_0(A) (parameter v versus parameter e).

Figure 5: Asymptotic stability zone computed by using κ_{1/2}(A) (parameter v versus parameter e).

Figure 6: Asymptotic stability zone computed by using κ_0(A) (parameter v versus parameter e).

Figure 7: Asymptotic stability zone computed by using κ_{1/2}(A) (parameter v versus parameter e).

Figure 8: Asymptotic stability zone computed by using κ_0(A) (parameter v versus parameter e).

Figure 9: Asymptotic stability zone computed by using κ_{1/2}(A) (parameter v versus parameter e).

Figure 10: Asymptotic stability zone computed by using κ_0(A) (parameter v versus parameter e).

Figure 11: Asymptotic stability zone computed by using κ_{1/2}(A) (parameter v versus parameter e).
References
[1] A.Ya. Bulgakov, Effective-calculated parameters of stability quality of linear differential equations systems, Sibirsk. Mat. Zh., 21(3): 32-41, 1980 (Russian).

[2] A.Ya. Bulgakov and S.K. Godunov, Calculation of positive definite solutions of the Lyapunov equation, Trudy Inst. Math. AN SSSR, Siberian Branch, 6: 17-38, 1985 (Russian).

[3] G.V. Demidenko, Integral operators defined by boundary value problems for quasielliptic equations, Dokl. Akad. Nauk, 326(5): 765-769, 1992; English transl. in Russian Acad. Sci. Dokl. Math., 46(2): 343-348, 1993.

[4] G.V. Demidenko, To the question about determination of matrices whose spectra lie on the imaginary axis, Preprint Inst. of Math. Russian Acad. Sci., Siberian Branch, No. 12, 1993 (Russian).

[5] G.V. Demidenko, On a class of spectral characteristics of matrices, Sibirsk. Mat. Zh., (to appear).

[6] G.V. Demidenko, Integrals of Lyapunov type and their applications, C. R. Acad. Sci., (to appear).

[7] D.K. Faddeev and V.N. Faddeeva, Computational Methods of Linear Algebra, Fizmatgiz, Moscow, 1963; English transl.: Computational Methods of Linear Algebra, W.H. Freeman & Co, San Francisco, 1963.

[8] F.R. Gantmacher, The Theory of Matrices, Nauka, Moscow, 1967 (Russian).

[9] S.K. Godunov and A.Ya. Bulgakov, Difficultes de calcul dans le probleme de Hurwitz et methodes pour les surmonter, Analysis and Optimization of Systems, Versailles, Springer-Verlag, pp. 843-851, 1982.

[10] P. Menegazzi, Convection naturelle dans les structures geologiques poreuses: etude numerique bidimensionnelle et stabilite des ecoulements, These de l'Universite de Bordeaux 1, 1989.

[11] Ya.G. Panovko, Introduction to the Theory of Mechanical Oscillations, Nauka, Moscow, 1980 (Russian).

[12] I.G. Petrovskii, Lectures on the Theory of Ordinary Differential Equations, Nauka, Moscow, 1970 (Russian).

[13] G.E. Shilov, Mathematical Analysis. Second Special Course, Nauka, Moscow, 1965 (Russian).

[14] J.H. Wilkinson, The Algebraic Eigenvalue Problem, Clarendon Press, Oxford, 1965.

