# A remark on the Jordan normal form of matrices

Linear Algebra and its Applications 310 (2000) 5–7 www.elsevier.com/locate/laa


Vlastimil Pták †

Praha, Czech Republic
Received 8 October 1998; accepted 9 January 2000
Submitted by H. Schneider

(Translation of part of the paper: Eine Bemerkung zur Jordanschen Normalform von Matrizen, Acta Scientiarum Mathematicarum, Szeged 17 (1956) 190–194.)

It is the purpose of the present note to show that applications of duality theory, which proved to be a powerful tool in the theory of infinite-dimensional vector spaces, are by no means restricted to that area. Duality methods may also be applied in classical matrix theory; even if no new results are gained, a deeper understanding of the geometric substance and a simplification of proofs may be obtained. In fact, it seems to us that geometric considerations represent the only right way to penetrate the principles of the theory of normal forms. We intend to show that simultaneous consideration of the given space and of its dual makes it possible to give almost trivial proofs of both basic theorems of the theory of normal forms. The standard treatment of these results in textbooks requires considerably more time.

Notation. Suppose X and Y are two given finite-dimensional linear spaces, dual to each other. Following [1], the product of the vectors x ∈ X and y ∈ Y will be denoted by ⟨x, y⟩. Given a linear mapping A of X into itself, the image of the vector x will be denoted by xA. The adjoint mapping A∗ is defined in the usual manner: ⟨xA, y⟩ = ⟨x, yA∗⟩. A subspace X₀ ⊂ X is said to be invariant with respect to A if x₀A ∈ X₀ for every x₀ ∈ X₀. The set of all y ∈ Y that satisfy ⟨M, y⟩ = 0 will be called the annihilator of M. The annihilator of an arbitrary set is always a linear subspace of Y. We shall use the following two well-known facts from duality theory. The proofs are immediate, but we include them.

† 8 November 1925 – 9 May 1999.


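As a concrete illustration of the notation (an addition of this translation, not part of the paper): identify X and Y with spaces of real row vectors paired by the dot product ⟨x, y⟩ = Σᵢ xᵢyᵢ. With the right-action convention xA = x @ A used in the text, the adjoint A∗ is then represented by the transposed matrix acting on Y.

```python
import numpy as np

# Sketch of the duality convention: row vectors, pairing <x, y> = x . y,
# right action xA = x @ A.  Then <xA, y> = <x, yA*> forces A* to act on Y
# as y @ A.T (matrix names here are arbitrary, not from the paper).
rng = np.random.default_rng(0)
n = 4
A = rng.standard_normal((n, n))
x = rng.standard_normal(n)
y = rng.standard_normal(n)

lhs = (x @ A) @ y        # <xA, y>
rhs = x @ (y @ A.T)      # <x, yA*> with A* represented by A.T
assert np.isclose(lhs, rhs)
```

All numeric sketches below use this same convention, so invariant subspaces of A∗ correspond to invariant row spaces of A.T.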

Fact 1. Let Y₀ be a subspace of Y, invariant with respect to A∗. Then the annihilator of Y₀ is invariant with respect to A.

Proof. Suppose x₀ belongs to the annihilator of Y₀; we shall show that so does x₀A. Since Y₀A∗ ⊂ Y₀ by hypothesis, we have ⟨x₀A, Y₀⟩ = ⟨x₀, Y₀A∗⟩ = 0, which completes the proof.

Fact 2. Suppose the subspaces X₀ ⊂ X and Y₀ ⊂ Y are dual to each other. Then X₀ and the annihilator of Y₀ constitute a direct sum decomposition of X.

Proof. Denote by X₁ the annihilator of Y₀. If x₀ ∈ X₀ ∩ X₁, then ⟨x₀, Y₀⟩ = 0. The subspaces X₀ and Y₀ being dual to each other, x₀ ∈ X₀ then implies x₀ = 0. We have thus X₀ ∩ X₁ = {0}. The duality of X₀ and Y₀ implies dim X₀ = dim Y₀, whence

dim X₁ = dim X − dim Y₀ = dim X − dim X₀.

It follows that X = X₀ + X₁ and the proof is complete.

The above results will be applied as follows. In order to find a direct sum decomposition X = X₀ + X₁ of the given space X with invariant X₀ and X₁, it suffices to find two mutually dual subspaces X₀ ⊂ X and Y₀ ⊂ Y such that X₀ is invariant with respect to A and Y₀ is invariant with respect to A∗. If X₁ stands for the annihilator of Y₀, then X₁ is invariant according to Fact 1 and constitutes, together with X₀, a direct decomposition of X according to Fact 2.

The "geometric" theory of linear operators is based on the following two theorems on representations of the given space as the direct sum of two invariant subspaces.

Theorem 1. There exists a direct decomposition X = Xₛ + Xᵣ, both Xₛ and Xᵣ being invariant with respect to A, such that A is nilpotent on Xₛ and regular on Xᵣ.

Theorem 2. Let A^q = 0 and let x₀ be a vector with x₀A^{q−1} ≠ 0. Let X₀ be the smallest invariant subspace containing x₀. Then there exists an invariant subspace X₁ such that X is the direct sum of X₀ and X₁.

Proof of Theorem 1. Let Xₛ (Yₛ) be the set of all vectors x ∈ X (y ∈ Y) that satisfy an equation xA^i = 0 (yA∗^j = 0) for some i (j). Clearly both these sets are invariant subspaces.
We claim that Xₛ and Yₛ are dual. In view of symmetry, it suffices to find, for each nonzero x ∈ Xₛ, a vector y ∈ Yₛ such that ⟨x, y⟩ ≠ 0.

Let x ∈ Xₛ, x ≠ 0, be given. Let q be the smallest exponent for which xA^q = 0. Hence there exists a y₀ ∈ Y with ⟨xA^{q−1}, y₀⟩ ≠ 0. The sequence y₀, y₀A∗, y₀A∗^2, … cannot be linearly independent. Let p be the smallest exponent for which y₀A∗^p may be represented as a linear combination of the elements y₀A∗^i with i > p. Hence there exists a vector z with y₀A∗^p = zA∗^{p+1}. We claim that p ≥ q. Otherwise it would be possible to write y₀A∗^{q−1} in the form vA∗^q; this is impossible since


0 ≠ ⟨xA^{q−1}, y₀⟩ = ⟨x, y₀A∗^{q−1}⟩ = ⟨x, vA∗^q⟩ = ⟨xA^q, v⟩ = 0.

If y = y₀A∗^{q−1} − zA∗^q, then yA∗^{p−q+1} = y₀A∗^p − zA∗^{p+1} = 0, so that y ∈ Yₛ. Furthermore

⟨x, y⟩ = ⟨x, y₀A∗^{q−1}⟩ = ⟨xA^{q−1}, y₀⟩ ≠ 0,

the remaining term vanishing because ⟨x, zA∗^q⟩ = ⟨xA^q, z⟩ = 0, and the proof is complete.

Proof of Theorem 2. Choose y₀ ∈ Y such that ⟨x₀A^{q−1}, y₀⟩ ≠ 0. Thus y₀A∗^{q−1} ≠ 0, while A∗^q = 0 since A^q = 0; hence the dimension of the smallest invariant subspace Y₀ of Y containing y₀ equals q. We claim that X₀ and Y₀ are dual. Given x ∈ X₀, x ≠ 0, we may write x = a₀x₀ + a₁x₀A + ⋯ + a_{q−1}x₀A^{q−1}. Let aₖ be the first nonzero coefficient. Then

⟨x, y₀A∗^{q−1−k}⟩ = ⟨xA^{q−1−k}, y₀⟩ = aₖ⟨x₀A^{q−1}, y₀⟩ ≠ 0,

all the other terms of xA^{q−1−k} being annihilated by A^q = 0, and the proof is complete.
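Theorem 1 is a Fitting-type decomposition, and it can be checked numerically. The sketch below is an addition of this translation, not part of the paper; under the row-vector convention above, Xₛ is the left null space of A^n with n = dim X (the chains xA^i = 0 stabilize after at most n steps), and Xᵣ, the annihilator of Yₛ, works out to the row space of A^n.

```python
import numpy as np

rng = np.random.default_rng(1)

def row_null_space(M, tol=1e-10):
    """Basis (as rows) of {x : x @ M = 0}, the left null space of M."""
    _, s, vt = np.linalg.svd(M.T)
    return vt[int((s > tol).sum()):]

def row_space(M, tol=1e-10):
    """Basis (as rows) of the row space {x @ M : x}."""
    _, s, vt = np.linalg.svd(M)
    return vt[:int((s > tol).sum())]

# Hypothetical test matrix: a nilpotent block N and an invertible block R,
# hidden by a random change of basis S.
N = np.array([[0., 1.], [0., 0.]])
R = np.array([[2., 1.], [0., 3.]])
S = rng.standard_normal((4, 4))
B = np.block([[N, np.zeros((2, 2))], [np.zeros((2, 2)), R]])
A = np.linalg.inv(S) @ B @ S

An = np.linalg.matrix_power(A, 4)   # A^n with n = dim X = 4
Xs = row_null_space(An)             # X_s = {x : x A^n = 0}
Xr = row_space(An)                  # X_r = X A^n

# X = X_s (+) X_r: the two bases together span the whole space.
assert np.linalg.matrix_rank(np.vstack([Xs, Xr])) == 4
# Both subspaces are invariant under A.
assert np.linalg.matrix_rank(np.vstack([Xs, Xs @ A])) == Xs.shape[0]
assert np.linalg.matrix_rank(np.vstack([Xr, Xr @ A])) == Xr.shape[0]
# A is nilpotent on X_s (x A^2 = 0 for every x in X_s) and regular on X_r.
assert np.allclose(Xs @ np.linalg.matrix_power(A, 2), 0)
assert np.linalg.matrix_rank(Xr @ A) == Xr.shape[0]
```

The block names (N, R, S) and the choice of a 4-dimensional example are arbitrary; only the decomposition itself reflects the theorem.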
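The construction in the proof of Theorem 2 can likewise be traced numerically; the example below is again an addition, not the author's. A is a hypothetical 5 × 5 nilpotent matrix with Jordan chains of lengths 3 and 2 (so q = 3), x₀ generates the longer chain, and the invariant complement X₁ is obtained, exactly as in the text, as the annihilator of the cyclic subspace Y₀ generated by a suitable y₀ under A∗.

```python
import numpy as np

# Hypothetical nilpotent A: Jordan chains of lengths 3 and 2, so A^3 = 0.
A = np.zeros((5, 5))
A[0, 1] = A[1, 2] = A[3, 4] = 1.0
q = 3
assert np.allclose(np.linalg.matrix_power(A, q), 0)

x0 = np.eye(5)[0]                   # x0 with x0 A^{q-1} != 0
# Smallest invariant subspace X0 containing x0: the chain x0, x0 A, x0 A^2.
X0 = np.array([x0 @ np.linalg.matrix_power(A, i) for i in range(q)])

# Choose y0 with <x0 A^{q-1}, y0> != 0; its chain under A* (= A.T here)
# spans the dual subspace Y0.
y0 = np.eye(5)[2]
assert (x0 @ np.linalg.matrix_power(A, q - 1)) @ y0 != 0
Y0 = np.array([y0 @ np.linalg.matrix_power(A.T, j) for j in range(q)])

# X1 = annihilator of Y0: all x with <x, y> = 0 for every y in Y0.
_, s, vt = np.linalg.svd(Y0)
X1 = vt[int((s > 1e-10).sum()):]

# Facts 1 and 2 in action: X1 is invariant and X = X0 (+) X1.
assert np.linalg.matrix_rank(np.vstack([X1, X1 @ A])) == X1.shape[0]
assert np.linalg.matrix_rank(np.vstack([X0, X1])) == 5
```

Here y₀ was picked by hand so that ⟨x₀A^{q−1}, y₀⟩ ≠ 0; in general any functional not annihilating x₀A^{q−1} would serve.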

Reference

[1] N. Bourbaki, Algèbre linéaire, Actualités Scientifiques et Industrielles, 1032, Paris, 1947.