# Two-forms and Noncommutative Hamiltonian dynamics

Swan-Maths-01/1


E. J. Beggs, Department of Mathematics, University of Wales Swansea, Wales SA2 8PP

Abstract. In this paper we extend the standard differential geometric theory of Hamiltonian dynamics to noncommutative spaces, beginning with symplectic forms. Derivations on the algebra are used instead of vector fields, and interior products and Lie derivatives with respect to derivations are discussed. Then the Poisson bracket of certain algebra elements can be defined by a choice of closed 2-form. Examples are given using the noncommutative torus, the Cuntz algebra, the algebra of matrices, and the algebra of matrix valued functions on $\mathbb{R}^2$.

arXiv:math/0101132v1 [math.QA] 16 Jan 2001

## 1 Introduction

We begin from the usual definition of a differential calculus on a noncommutative algebra. The derivations on the algebra are used to substitute for the vector fields in the commutative case. Then we can define an interior product and Lie derivative, and prove noncommutative analogues of the standard results in classical differential geometry. The proofs are almost identical to the classical ones. The caveat is that we must consider only those derivations which are compatible with the relations in the differential structure.

From a closed 2-form $\omega$ we define a map $\hat\omega$ from the collection of derivations to the 1-forms. Classically this would be a 1-1 correspondence if $\omega$ was nondegenerate. We shall just assume that $\hat\omega$ is 1-1, and as a result only certain elements of the algebra (called Hamiltonian elements) will correspond to derivations. The variety of examples of differential calculi on noncommutative algebras means that to insist on comparable sizes for the set of derivations and the 1-forms would be over-restrictive, and the fact that $\hat\omega$ might not be onto will not cause major problems. For Hamiltonian elements we can define a Poisson bracket, which is antisymmetric and satisfies the Jacobi identity. It may be surprising that a noncommutative differential geometry can have antisymmetric Poisson brackets. However the reader should note that we do not achieve this by imposing any sort of asymmetry on the differential forms, but by imposing asymmetry on the interior product $\lrcorner$, by insisting that it is a signed derivation.

Then there are the examples. For the noncommutative torus $T^2_\rho$ (with $uv = e^{2\pi i\rho}\,vu$) we take $\omega = u^{-1}\,du\,dv\,v^{-1}$. If $\rho$ is rational there is a class of Hamiltonian elements with non-zero Poisson brackets, although the Hamiltonian elements commute in the algebra multiplication. For the matrix algebra $M_n(\mathbb{R})$ with $\omega = \frac{1}{2}\sum_{ij} dE_{ij}\,dE_{ij}$ we see that the antisymmetric matrices are Hamiltonian elements, and the corresponding derivations are just the adjoint maps of the antisymmetric matrices. The Poisson bracket is just the matrix commutator. The Cuntz algebra $O_n$ provides another example. We conclude by examining Hamiltonian dynamics on the algebra of matrix valued functions on $\mathbb{R}^2$.

It is time for a public health warning: differential calculi on $C^*$ algebras frequently require passing to a smaller 'smooth' subalgebra to make things work. For an example see the calculation of cyclic cohomology in [2]. We shall work purely algebraically in what follows, without worrying about the topology.

The author would like to thank Tomasz Brzeziński for much useful advice.

## 2 Differential calculus and derivations

Definition 2.1 A differential calculus on an algebra $A$ is a collection of $A$-bimodules $\Omega^n$ for $n \geq 0$ and a signed derivation $d : \Omega^n \to \Omega^{n+1}$, i.e. $d(\omega\tau) = d(\omega)\,\tau + (-1)^{|\omega|}\,\omega\,d(\tau)$. Here $|\omega| = n$ if $\omega \in \Omega^n$. We set $\Omega^0 = A$ and suppose that the subspace spanned by elements of the form $\omega\,db$ (for all $\omega \in \Omega^n$ and $b \in A$) is dense in $\Omega^{n+1}$. We also impose $d^2 = 0$.

Definition 2.2 Define $V$ to be a vector space of derivations on the algebra $A$, i.e. $\theta(ab) = \theta(a)\,b + a\,\theta(b)$ for $\theta \in V$. We suppose that $V$ is closed under the commutator $[\theta, \phi] = \theta\phi - \phi\theta$. This will take the place of the vector fields in commutative differential geometry.

Now that we have the analogue of vector fields, we can define the following operations.

Definition 2.3 We define the evaluation map or 'interior product' $\lrcorner : V \otimes \Omega^1 \to A$ by $\theta \lrcorner\, da = \theta(a)$. To be consistent with the rule $d(ab) = da\,b + a\,db$ we set $\theta \lrcorner (da\,b) = \theta(a)\,b$ and $\theta \lrcorner (a\,db) = a\,\theta(b)$. Extend this definition recursively to $\lrcorner : V \otimes \Omega^{n+1} \to \Omega^n$ as a signed derivation, i.e. $\theta \lrcorner (\omega\tau) = (\theta \lrcorner \omega)\,\tau + (-1)^{|\omega|}\,\omega\,(\theta \lrcorner \tau)$.

Definition 2.4 We define the Lie derivative $L_\theta : \Omega^1 \to \Omega^1$ in the direction $\theta \in V$ of a 1-form by $L_\theta(da) = d(\theta(a))$, and extend it as a derivation, i.e. $L_\theta(a\,db) = \theta(a)\,db + a\,d(\theta(b))$. This is compatible with the rule $d(ab) = da\,b + a\,db$. Now we extend the definition to $L_\theta : \Omega^n \to \Omega^n$ as a derivation, i.e. $L_\theta(\omega\tau) = \omega\,L_\theta(\tau) + L_\theta(\omega)\,\tau$.

The problem with these operations is that they may not be well defined, that is there may be a linear combination of elements of the form $a\,db \in \Omega^1$ which vanishes, but for which the corresponding sum of $\theta \lrcorner (a\,db)$ or $L_\theta(a\,db)$ would not be zero. If the differential calculus is given in terms of generators and relations, we must check that the interior product and the Lie derivative vanish on all the relations. If necessary we must restrict the set $V$ of derivations so that these operations are well defined. In what follows, we assume that these operations are well defined.

Proposition 2.5 For all $\theta \in V$ and $\omega \in \Omega^n$, $d(\theta \lrcorner \omega) + \theta \lrcorner (d\omega) = L_\theta(\omega)$.


Proof By induction on the degree of $\omega$. The statement is true for 0-forms (elements of $A$). Now we suppose that the statement is true for $n$-forms, and take $\omega \in \Omega^n$ and $b \in A$. Then
$$d(\theta \lrcorner (\omega\,db)) = d\big((\theta \lrcorner \omega)\,db + (-1)^n\,\omega\,\theta(b)\big) = d(\theta \lrcorner \omega)\,db + (-1)^n\,d\omega\,\theta(b) + \omega\,d\theta(b)\,,$$
$$\theta \lrcorner (d(\omega\,db)) = \theta \lrcorner (d\omega\,db) = (\theta \lrcorner d\omega)\,db + (-1)^{n+1}\,d\omega\,\theta(b)\,.$$
Now add these together to get
$$d(\theta \lrcorner (\omega\,db)) + \theta \lrcorner (d(\omega\,db)) = \big(d(\theta \lrcorner \omega) + \theta \lrcorner d\omega\big)\,db + \omega\,d\theta(b) = L_\theta(\omega)\,db + \omega\,d\theta(b) = L_\theta(\omega\,db)\,.$$

Proposition 2.6 For all $\theta \in V$ and $\omega \in \Omega^n$, $dL_\theta(\omega) = L_\theta(d\omega)$.

Proof By induction on the degree of $\omega$. The statement is true for 0-forms (elements of $A$). Now we suppose that the statement is true for $n$-forms, and take $\omega \in \Omega^n$ and $b \in A$. Then
$$dL_\theta(\omega\,db) = d\big(L_\theta(\omega)\,db + \omega\,d\theta(b)\big) = (dL_\theta(\omega))\,db + d\omega\,d\theta(b) = L_\theta(d\omega)\,db + d\omega\,L_\theta(db) = L_\theta(d\omega\,db) = L_\theta(d(\omega\,db))\,.$$

Proposition 2.7 For all $\theta, \phi \in V$ and $\omega \in \Omega^n$, $L_\phi(\theta \lrcorner \omega) = \theta \lrcorner L_\phi(\omega) + [\phi, \theta] \lrcorner \omega$.

Proof By induction on the degree of $\omega$. The statement is true for 0-forms (elements of $A$). Now we suppose that the statement is true for $n$-forms, and take $\omega \in \Omega^n$ and $b \in A$. Then
$$L_\phi(\theta \lrcorner (\omega\,db)) = L_\phi\big((\theta \lrcorner \omega)\,db + (-1)^n\,\omega\,\theta(b)\big) = L_\phi(\theta \lrcorner \omega)\,db + (\theta \lrcorner \omega)\,d\phi(b) + (-1)^n\,L_\phi(\omega)\,\theta(b) + (-1)^n\,\omega\,\phi\theta(b)\,,$$
$$\theta \lrcorner (L_\phi(\omega\,db)) = \theta \lrcorner \big(L_\phi(\omega)\,db + \omega\,d\phi(b)\big) = (\theta \lrcorner L_\phi(\omega))\,db + (-1)^n\,L_\phi(\omega)\,\theta(b) + (\theta \lrcorner \omega)\,d\phi(b) + (-1)^n\,\omega\,\theta\phi(b)\,,$$
and on subtraction we get
$$L_\phi(\theta \lrcorner (\omega\,db)) - \theta \lrcorner (L_\phi(\omega\,db)) = L_\phi(\theta \lrcorner \omega)\,db - (\theta \lrcorner L_\phi(\omega))\,db + (-1)^n\,\omega\,[\phi, \theta](b) = ([\phi, \theta] \lrcorner \omega)\,db + (-1)^n\,\omega\,[\phi, \theta](b) = [\phi, \theta] \lrcorner (\omega\,db)\,.$$

Proposition 2.8 For all $\theta, \phi \in V$ and $\omega \in \Omega^n$, $\phi \lrcorner (\theta \lrcorner \omega) = -\,\theta \lrcorner (\phi \lrcorner \omega)$.


Proof By induction on the degree of $\omega$. The statement is true for 0-forms (elements of $A$). Now we suppose that the statement is true for $n$-forms, and take $\omega \in \Omega^n$ and $b \in A$. Then
$$\phi \lrcorner (\theta \lrcorner (\omega\,db)) = \phi \lrcorner \big((\theta \lrcorner \omega)\,db + (-1)^n\,\omega\,\theta(b)\big) = (\phi \lrcorner (\theta \lrcorner \omega))\,db + (-1)^{n-1}\,(\theta \lrcorner \omega)\,\phi(b) + (-1)^n\,(\phi \lrcorner \omega)\,\theta(b)\,.$$
Now just add this formula to the one with $\phi$ and $\theta$ swapped to get zero.

Proposition 2.9 For all $\theta, \phi \in V$ and $\omega \in \Omega^n$, $L_\theta L_\phi(\omega) - L_\phi L_\theta(\omega) = L_{[\theta,\phi]}(\omega)$.

Proof By induction on the degree of $\omega$. The statement is true for 0-forms (elements of $A$). Now we suppose that the statement is true for $n$-forms, and take $\omega \in \Omega^n$ and $b \in A$. Then
$$L_\theta L_\phi(\omega\,db) = L_\theta\big(L_\phi(\omega)\,db + \omega\,d\phi(b)\big) = L_\theta L_\phi(\omega)\,db + L_\phi(\omega)\,d\theta(b) + L_\theta(\omega)\,d\phi(b) + \omega\,d\theta\phi(b)\,,$$
and if we swap $\theta$ and $\phi$ and subtract we get
$$(L_\theta L_\phi - L_\phi L_\theta)(\omega\,db) = (L_\theta L_\phi - L_\phi L_\theta)(\omega)\,db + \omega\,d[\theta, \phi](b) = L_{[\theta,\phi]}(\omega\,db)\,.$$
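The identities in propositions 2.5–2.9 can be tested concretely. The sketch below is our own illustration, not part of the paper: it verifies the Cartan formula of proposition 2.5 and the antisymmetry of proposition 2.8 numerically for the calculus on $M_n(\mathbb{R})$ described in section 5, with derivations $\theta = \mathrm{ad}_S$; elements of $M_n \otimes M_n$ and $M_n \otimes M_n \otimes M_n$ are stored as 4- and 6-index numpy arrays, and all helper names are ours.

```python
# A numerical sketch (ours, not the paper's) checking propositions 2.5 and 2.8
# for the calculus on M_n(R) of section 5, with derivations theta = ad_S.
import numpy as np

n = 3
rng = np.random.default_rng(1)
a, b, S, S2 = (rng.normal(size=(n, n)) for _ in range(4))
I = np.eye(n)

def ad_pair(S, T, p):
    # apply ad_S = [S, .] to the tensor factor occupying axes (2p, 2p+1) of T
    left = np.moveaxis(np.tensordot(S, T, axes=(1, 2 * p)), 0, 2 * p)
    right = np.moveaxis(np.tensordot(T, S, axes=(2 * p + 1, 0)), -1, 2 * p + 1)
    return left - right

def d0(m):    # dm = 1 (x) m - m (x) 1
    return np.einsum('ij,kl->ijkl', I, m) - np.einsum('ij,kl->ijkl', m, I)

def d1(T):    # d(a (x) b) = 1 (x) a (x) b - a (x) 1 (x) b + a (x) b (x) 1
    return (np.einsum('ij,klmn->ijklmn', I, T)
            - np.einsum('ijmn,kl->ijklmn', T, I)
            + np.einsum('ijkl,mn->ijklmn', T, I))

def int1(S, T):    # theta _| (a (x) b) = a theta(b)
    return np.einsum('pxxq->pq', ad_pair(S, T, 1))

def int2(S, T):    # theta _| (a (x) b (x) c) = a theta(b) (x) c - a (x) b theta(c)
    return (np.einsum('pxxqef->pqef', ad_pair(S, T, 1))
            - np.einsum('abpxxq->abpq', ad_pair(S, T, 2)))

def lie1(S, T):    # L_theta (a (x) b) = theta(a) (x) b + a (x) theta(b)
    return ad_pair(S, T, 0) + ad_pair(S, T, 1)

# the 1-form a db = a (x) b - ab (x) 1, which lies in Omega^1 = ker(mu)
alpha = np.einsum('ij,kl->ijkl', a, b) - np.einsum('ij,kl->ijkl', a @ b, I)

# proposition 2.5 (Cartan formula) on a db:
assert np.allclose(d0(int1(S, alpha)) + int2(S, d1(alpha)), lie1(S, alpha))

# proposition 2.8 on the genuine 2-form d(a db); the antisymmetry needs the
# Omega^2 constraints, which d1(alpha) satisfies:
omega2 = d1(alpha)
assert np.allclose(int1(S2, int2(S, omega2)), -int1(S, int2(S2, omega2)))
```

Note that the antisymmetry check uses an actual element of $\Omega^2$; on an arbitrary element of $M_n^{\otimes 3}$ it would fail, which illustrates the well-definedness caveat above.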

## 3 Hamiltonian dynamics

In this section we take a specified $\omega \in \Omega^2$ which is closed, i.e. $d\omega = 0$. From proposition 2.5 we see that $L_\theta(\omega) = 0$ if and only if $d(\theta \lrcorner \omega) = 0$. Take the subset $V^\omega$ to consist of those $\theta \in V$ for which $L_\theta(\omega) = 0$, and $Z^1$ to be the set of closed 1-forms. Define the map $\hat\omega : V^\omega \to Z^1$ by $\hat\omega(\theta) = \theta \lrcorner \omega$. We say that $\omega$ is nonsingular if $\hat\omega$ is 1-1, and we suppose this for the rest of the section.

Definition 3.1 We say that $a \in A$ is a Hamiltonian element if $da \in Z^1$ is in the image of $\hat\omega$. If $a$ is Hamiltonian, we define $X_a \in V^\omega$ by $\hat\omega(X_a) = da$. If both $a$ and $b$ are Hamiltonian, we define their Poisson bracket by $\{a, b\} = X_a \lrcorner (db) = X_a(b) \in A$.

Proposition 3.2 If both $a$ and $b$ are Hamiltonian, then $\{a, b\} = -\{b, a\}$, i.e. the Poisson bracket is antisymmetric.

Proof From proposition 2.8,
$$X_a \lrcorner (X_b \lrcorner \omega) = X_a \lrcorner\, db = X_a(b) = -\,X_b \lrcorner (X_a \lrcorner \omega) = -\,X_b(a)\,.$$

Proposition 3.3 If both $a$ and $b$ are Hamiltonian, then $\{a, b\}$ is Hamiltonian, and further $X_{\{a,b\}} = [X_a, X_b]$.


Proof First $[X_a, X_b] \in V$ as $V$ is closed under commutator. Then $L_{[X_a,X_b]}(\omega) = 0$ by 2.9, so $[X_a, X_b] \in V^\omega$. Finally from proposition 2.7,
$$[X_a, X_b] \lrcorner\, \omega = L_{X_a}(X_b \lrcorner \omega) - X_b \lrcorner L_{X_a}(\omega) = L_{X_a}(db) = dX_a(b) = d\{a, b\}\,.$$

Proposition 3.4 If $a$, $b$ and $c$ are Hamiltonian, then $\{c, \{a, b\}\} + \{b, \{c, a\}\} + \{a, \{b, c\}\} = 0$, i.e. the Poisson bracket satisfies the Jacobi identity.

Proof By using proposition 2.7,
$$X_c\{a, b\} = X_c \lrcorner\, d\{a, b\} = X_c \lrcorner\, dX_a(b) = X_c \lrcorner\, L_{X_a}(db) = L_{X_a}(X_c \lrcorner\, db) - [X_a, X_c] \lrcorner\, db\,.$$
From this we deduce, using 3.3,
$$\{c, \{a, b\}\} + \{\{a, c\}, b\} = L_{X_a}(X_c \lrcorner\, db) = L_{X_a}\{c, b\} = \{a, \{c, b\}\}\,.$$

Proposition 3.5 If $a$, $b$, $c$ and $bc$ are Hamiltonian, then $\{a, bc\} = \{a, b\}\,c + b\,\{a, c\}$, i.e. the Poisson bracket is a derivation.

Proof Use the result $\{a, bc\} = X_a(bc)$, where $X_a$ is a derivation.

Now we can formally extend a derivation to an automorphism by the following procedure (we make no attempt to verify convergence): if $\theta$ is a derivation on the algebra $A$, there is an action of $(\mathbb{R}, +)$ by automorphisms on $A$ given by $a \mapsto \exp(t\theta)a = a(t)$ for $a \in A$ and $t \in \mathbb{R}$. Then we get the usual relation for the time derivatives of functions $a(t) \in A$ generated by a Hamiltonian $b \in A$ and the Poisson bracket:
$$\dot a(t) = X_b(a(t)) = \{b, a(t)\}\,.$$
(Note that strictly we should stop at $X_b(a(t))$ in the case where $a$ is not Hamiltonian, as we did not define the Poisson brackets for non-Hamiltonian elements.)
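In a finite-dimensional case the formal exponential does converge. The following sketch (ours, not the paper's) takes $A = M_n(\mathbb{R})$ with $\theta = \mathrm{ad}_S$, as in the matrix example later in the paper, and checks that summing the series $\exp(t\,\mathrm{ad}_S)a$ reproduces the inner automorphism $a \mapsto e^{tS}\,a\,e^{-tS}$.

```python
# A sketch (ours) of the formal exponential exp(t theta) in a case where it
# genuinely converges: A = M_n(R) with theta = ad_S.
import numpy as np

n, t = 3, 0.3
rng = np.random.default_rng(2)
S = rng.normal(size=(n, n))
a = rng.normal(size=(n, n))

def expm(M, terms=40):
    # matrix exponential summed from its power series (fine for small ||M||)
    out, term = np.eye(len(M)), np.eye(len(M))
    for k in range(1, terms):
        term = term @ M / k
        out = out + term
    return out

# a(t) = exp(t ad_S) a, summing iterated applications of the derivation
flow, term = a.copy(), a.copy()
for k in range(1, 40):
    term = (t / k) * (S @ term - term @ S)   # apply ad_S once more
    flow = flow + term

# the exponentiated derivation is conjugation by e^{tS}
assert np.allclose(flow, expm(t * S) @ a @ expm(-t * S))
```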

## 4 Example: the noncommutative torus

We take the algebra $T^2_\rho$ generated by invertible elements $u$ and $v$, subject to the condition $uv = qvu$, where $q = e^{2\pi i\rho}$ is a unit norm complex number. This can be completed to form a $C^*$ algebra, or a smooth algebra [2, 4], but we will not consider such completions here. The simplest differential calculus on $T^2_\rho$ [1] is generated by $\{u, v, du, dv\}$, subject to the relations
$$[u, du] = [v, dv] = 0\,,\quad u\,dv = q\,dv\,u\,,\quad v\,du = q^{-1}\,du\,v\,,\quad du\,dv = -q\,dv\,du\,,\quad (du)^2 = (dv)^2 = 0\,.\tag{1}$$

As there are no non-zero 3-forms, all 2-forms are closed. We will now try to carry out the construction given in the previous sections. Note that a derivation $\theta$ is uniquely specified by giving $\theta(u)$ and $\theta(v)$. This is because we can deduce $\theta(u^{-1}) = -\,u^{-1}\,\theta(u)\,u^{-1}$ from the relation $\theta(u\,u^{-1}) = \theta(1) = 0$, and likewise for $v^{-1}$. The set of derivations $V$ which we use must be consistent with the relations on the algebra and on the differential structure.

Proposition 4.1 Suppose that we are in the rational case where $q$ has order $p$. Then the derivations $\theta$ consistent with the differential structure (1) are of the form
$$\theta(u) = \sum_{s,t\in\mathbb{Z}} b_{ts}\,u^{1+sp}v^{tp}\,,\qquad \theta(v) = \sum_{s,t\in\mathbb{Z}} c_{ts}\,u^{sp}v^{1+tp}\,,$$
for some constants $b_{ts}, c_{ts} \in \mathbb{C}$.

Proof By applying $\theta$ to $[u, du] = [v, dv] = 0$ we see that $[u, \theta(u)] = [v, \theta(v)] = 0$. By applying $\theta$ to $u\,dv = q\,dv\,u$ we see that $u\,\theta(v) = q\,\theta(v)\,u$. By applying $\theta$ to $v\,du = q^{-1}\,du\,v$ we see that $v\,\theta(u) = q^{-1}\,\theta(u)\,v$. By combining these we see that the consistent derivations are those of the form above. Now we check with the algebra relation $\theta(uv) = q\,\theta(vu)$ to get
$$\sum_{s,t\in\mathbb{Z}} c_{ts}\,u^{1+sp}v^{1+tp} + \sum_{s,t\in\mathbb{Z}} b_{ts}\,u^{1+sp}v^{1+tp} = q\sum_{s,t\in\mathbb{Z}} c_{ts}\,u^{sp}v^{1+tp}\,u + q\,v\sum_{s,t\in\mathbb{Z}} b_{ts}\,u^{1+sp}v^{tp}\,,$$
which is automatically satisfied.

Example 4.2 Set $\omega = u^{-1}\,du\,dv\,v^{-1}$, and suppose that we are in the rational case where $q$ has order $p$. Then for $\theta \in V$ we have
$$\theta \lrcorner\, \omega = u^{-1}\,\theta(u)\,dv\,v^{-1} - u^{-1}\,du\,\theta(v)\,v^{-1}\,,$$
so if we know $\theta \lrcorner\, \omega$ we can recover $\theta(u)$ and $\theta(v)$ uniquely, so $\hat\omega$ is 1-1. Proceeding on the assumption that $a \in T^2_\rho$ is a Hamiltonian element, we set
$$a = \sum_{n,m} a_{nm}\,u^n v^m$$
for some numbers $a_{nm}$. We now examine the equation
$$X_a \lrcorner (u^{-1}\,du\,dv\,v^{-1}) = da = \sum_{n,m} \big(n\,a_{nm}\,u^{n-1}\,du\,v^m + m\,a_{nm}\,u^n v^{m-1}\,dv\big)\,,$$
and deduce that
$$X_a(u) = \sum_{n,m} m\,a_{nm}\,u^{n+1}v^m \qquad\text{and}\qquad X_a(v) = -\sum_{n,m} n\,a_{nm}\,u^n v^{m+1}\,.$$
For $X_a$ to be a derivation consistent with our given differential structure, we can only have nonzero $a_{nm}$ when $n$ and $m$ are multiples of $p$. The Hamiltonian functions are linear combinations of elements of the form $u^{sp}v^{tp}$ for $s, t \in \mathbb{Z}$. The corresponding derivations are
$$X_{u^{sp}v^{tp}}(u) = tp\,u^{sp+1}v^{tp} \qquad\text{and}\qquad X_{u^{sp}v^{tp}}(v) = -\,sp\,u^{sp}v^{tp+1}\,,$$
and the Poisson brackets are given by
$$\{u^{sp}v^{tp},\, u^{s'p}v^{t'p}\} = (ts' - t's)\,p^2\,u^{(s+s')p}\,v^{(t+t')p}\,.$$
Note that the Hamiltonian elements in this case are exactly the central elements of the algebra.
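The bracket formula above can be sanity-checked by pure bookkeeping on the exponent pairs $(s, t)$ of the monomials $u^{sp}v^{tp}$. The sketch below is our own, and the order $p = 3$ for $q$ is an arbitrary illustrative choice; it verifies antisymmetry and the Jacobi identity of the formula directly.

```python
# A bookkeeping check (ours) of the torus Poisson bracket on exponent pairs.
from itertools import product

p = 3  # hypothetical order of q, chosen only for illustration

def monomial(s, t):
    return (1, (s, t))           # (coefficient, exponents) for u^{sp} v^{tp}

def bracket(m1, m2):
    # {u^{sp}v^{tp}, u^{s'p}v^{t'p}} = (ts' - t's) p^2 u^{(s+s')p} v^{(t+t')p}
    (c1, (s, t)), (c2, (s2, t2)) = m1, m2
    return (c1 * c2 * (t * s2 - t2 * s) * p**2, (s + s2, t + t2))

pairs = list(product(range(-2, 3), repeat=2))
for (s, t), (s2, t2) in product(pairs, repeat=2):
    ca, ma = bracket(monomial(s, t), monomial(s2, t2))
    cb, mb = bracket(monomial(s2, t2), monomial(s, t))
    assert ca == -cb and ma == mb                  # antisymmetry

small = list(product(range(-1, 2), repeat=2))
for ea, eb, ec in product(small, repeat=3):
    a, b, c = monomial(*ea), monomial(*eb), monomial(*ec)
    jacobi = (bracket(c, bracket(a, b))[0]
              + bracket(b, bracket(c, a))[0]
              + bracket(a, bracket(b, c))[0])
    assert jacobi == 0                             # Jacobi identity
```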


## 5 Example: the algebra of matrices

We take $A = M_n(\mathbb{R})$, and then we define $\Omega^1$ to be the kernel of the multiplication map $\mu : M_n \otimes M_n \to M_n$ [5, 2]. The map $d : \Omega^0 = A \to \Omega^1$ is defined as $da = 1 \otimes a - a \otimes 1$. Also
$$\Omega^2 = \big\{\tau \in M_n \otimes M_n \otimes M_n : (\mu \otimes \mathrm{id})(\tau) = (\mathrm{id} \otimes \mu)(\tau) = 0\big\}\,.$$
The map $d : \Omega^1 \to \Omega^2$ is defined by $d(a \otimes b) = 1 \otimes a \otimes b - a \otimes 1 \otimes b + a \otimes b \otimes 1$. In case the reader is concerned that the interior product is not well defined on this model of the differential calculus, note that $\theta \lrcorner : \Omega^1 \to \Omega^0 = M_n$ is $\theta \lrcorner (a \otimes b) = a\,\theta(b)$, and $\theta \lrcorner : \Omega^2 \to \Omega^1$ is $\theta \lrcorner (a \otimes b \otimes c) = a\,\theta(b) \otimes c - a \otimes b\,\theta(c)$. The Lie derivative $L_\theta : \Omega^1 \to \Omega^1$ is given by $L_\theta(a \otimes b) = \theta(a) \otimes b + a \otimes \theta(b)$, and $L_\theta : \Omega^2 \to \Omega^2$ is given by $L_\theta(a \otimes b \otimes c) = \theta(a) \otimes b \otimes c + a \otimes \theta(b) \otimes c + a \otimes b \otimes \theta(c)$.

Set $\omega = \frac{1}{2}\sum_{ij} dE_{ij}\,dE_{ij}$, where $E_{ij}$ is the matrix with 1 in the row $i$ column $j$ position and 0 elsewhere. Take a derivation $\theta$ on $M_n(\mathbb{R})$ given by coefficients $\Theta_{klij} \in \mathbb{R}$:
$$\theta(E_{ij}) = \sum_{kl} \Theta_{klij}\,E_{kl}\,.$$
Then we calculate
$$\theta \lrcorner\, \omega = \frac{1}{2}\sum_{ijkl} \Theta_{klij}\,\big(E_{kl}\,dE_{ij} - dE_{ij}\,E_{kl}\big) = \frac{1}{2}\sum_{ijkl} \Theta_{klij}\,\big(E_{kl} \otimes E_{ij} - E_{kl}E_{ij} \otimes 1 - 1 \otimes E_{ij}E_{kl} + E_{ij} \otimes E_{kl}\big)$$
$$= \frac{1}{2}\sum_{ijkl} \big(\Theta_{klij} + \Theta_{ijkl}\big)\,E_{kl} \otimes E_{ij} - \frac{1}{2}\sum_{ijkl} \Theta_{klij}\,E_{kl}E_{ij} \otimes 1 - \frac{1}{2}\sum_{ijkl} \Theta_{ijkl}\,1 \otimes E_{kl}E_{ij}\,.$$
Given a matrix $S \in M_n(\mathbb{R})$, take the adjoint map $\mathrm{ad}_S(C) = [S, C]$, which is a derivation. Now
$$\mathrm{ad}_S(E_{ij}) = [S, E_{ij}] = \sum_k S_{ki}\,E_{kj} - \sum_l S_{jl}\,E_{il}\,,$$
so the coefficients corresponding to $\theta = \mathrm{ad}_S$ are $\Theta_{klij} = S_{ki}\,\delta_{jl} - S_{jl}\,\delta_{ik}$. If $S$ is an antisymmetric matrix, then $\Theta_{klij} + \Theta_{ijkl} = 0$, so
$$\mathrm{ad}_S \lrcorner\, \omega = \frac{1}{2}\sum_{ijkl} \Theta_{klij}\,\big(1 \otimes E_{kl}E_{ij} - E_{kl}E_{ij} \otimes 1\big) = \frac{1}{2}\sum_{ijkl} \big(S_{ki}\,\delta_{jl} - S_{jl}\,\delta_{ik}\big)\,\delta_{li}\,\big(1 \otimes E_{kj} - E_{kj} \otimes 1\big) = 1 \otimes S - S \otimes 1\,.$$
Now we see that $\mathrm{ad}_S \lrcorner\, \omega = dS$, so the antisymmetric matrices are Hamiltonian, and $X_S = \mathrm{ad}_S$. If $S$ and $T$ are antisymmetric, then $\{S, T\} = \mathrm{ad}_S(T) = [S, T]$, so the Poisson bracket is just the commutator.
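The calculation $\mathrm{ad}_S \lrcorner\, \omega = dS$ can be confirmed numerically in the tensor model of the calculus, representing 1-forms and 2-forms as 4- and 6-index arrays. The sketch below is our own illustration, and the helper names are ours.

```python
# A numerical sketch (ours): for antisymmetric S, ad_S _| omega = dS.
import numpy as np

n = 3
rng = np.random.default_rng(0)
M = rng.normal(size=(n, n))
S = M - M.T                       # antisymmetric, hence Hamiltonian
I = np.eye(n)

def d0(m):                        # dm = 1 (x) m - m (x) 1
    return np.einsum('ij,kl->ijkl', I, m) - np.einsum('ij,kl->ijkl', m, I)

def mul11(al, be):                # product of 1-forms: (a(x)b)(c(x)d) = a (x) bc (x) d
    return np.einsum('abcx,xdef->abcdef', al, be)

def ad_pair(S, T, p):             # [S, .] applied to the factor at axes (2p, 2p+1)
    left = np.moveaxis(np.tensordot(S, T, axes=(1, 2 * p)), 0, 2 * p)
    right = np.moveaxis(np.tensordot(T, S, axes=(2 * p + 1, 0)), -1, 2 * p + 1)
    return left - right

def int2(S, T):                   # theta _| (a(x)b(x)c) = a theta(b)(x)c - a(x)b theta(c)
    return (np.einsum('pxxqef->pqef', ad_pair(S, T, 1))
            - np.einsum('abpxxq->abpq', ad_pair(S, T, 2)))

def E(i, j):
    m = np.zeros((n, n)); m[i, j] = 1.0; return m

omega = sum(mul11(d0(E(i, j)), d0(E(i, j)))
            for i in range(n) for j in range(n)) / 2

assert np.allclose(int2(S, omega), d0(S))    # ad_S _| omega = dS, so X_S = ad_S
T2 = rng.normal(size=(n, n)); T2 = T2 - T2.T
assert np.allclose(int2(T2, omega), d0(T2))  # any antisymmetric matrix works
```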

## 6 Example: the Cuntz algebra

The Cuntz algebra $O_n$ [3] is the unital $C^*$ algebra with $n$ generators $\{s_1, \ldots, s_n\}$ and relations
$$s_i^*\,s_j = \delta_{ij}\,,\qquad \sum_{i=1}^n s_i\,s_i^* = 1\,.$$


Linear combinations of the form $s_\mu\,s_\nu^*$ are dense in the algebra, where $\mu$ and $\nu$ are words in the alphabet $\{1, \ldots, n\}$. For example if $\mu = 12$ and $\nu = 123$, then $s_\mu\,s_\nu^* = s_1 s_2\,s_3^* s_2^* s_1^*$. Now we have to decide what differential calculus to equip $O_n$ with. The forms would be generated by $s_i$, $s_i^*$, $ds_i$ and $ds_i^*$. We must have relations given by applying $d$ to the relations for the algebra, i.e.
$$ds_i^*\,s_j + s_i^*\,ds_j = 0\,,\qquad \sum_{i=1}^n \big(ds_i\,s_i^* + s_i\,ds_i^*\big) = 0\,.\tag{2}$$

If $u$ is any unitary in $O_n$, then the map $s_i \mapsto u\,s_i$ extends to a unital *-endomorphism $\alpha_u$ of $O_n$. Conversely, suppose that $\alpha$ is a unital *-endomorphism of $O_n$. If we define $u = \sum_i \alpha(s_i)\,s_i^*$, we see that $u$ is a unitary in $O_n$, and that $\alpha(s_i) = u\,s_i$. We shall use this to define a derivation on $O_n$ by taking the infinitesimal version of this construction. For $h \in O_n$ we define a derivation by $\theta_h(s_i) = h\,s_i$ and $\theta_h(s_i^*) = -\,s_i^*\,h$. If this were to be a *-derivation, we would find that $h$ had to be anti-Hermitian, but we shall not suppose this. Now we should check that these derivations preserve the relations (2):
$$\theta_h \lrcorner \big(ds_i^*\,s_j + s_i^*\,ds_j\big) = -\,s_i^*\,h\,s_j + s_i^*\,h\,s_j = 0\,,$$
$$\theta_h \lrcorner \sum_{i=1}^n \big(ds_i\,s_i^* + s_i\,ds_i^*\big) = \sum_{i=1}^n \big(h\,s_i s_i^* - s_i s_i^*\,h\big) = h - h = 0\,,$$
$$L_{\theta_h}\big(ds_i^*\,s_j + s_i^*\,ds_j\big) = ds_i^*\,h\,s_j - d(s_i^*\,h)\,s_j - s_i^*\,h\,ds_j + s_i^*\,d(h\,s_j) = -\,s_i^*\,dh\,s_j + s_i^*\,dh\,s_j = 0\,,$$
$$L_{\theta_h} \sum_{i=1}^n \big(ds_i\,s_i^* + s_i\,ds_i^*\big) = \sum_{i=1}^n \big(d(h\,s_i)\,s_i^* - ds_i\,s_i^*\,h - s_i\,d(s_i^*\,h) + h\,s_i\,ds_i^*\big) = \sum_{i=1}^n \big(dh\,s_i s_i^* - s_i s_i^*\,dh\big) = 0\,.$$
Now we choose $\omega = \sum_i ds_i\,ds_i^*$, and note that $d\omega = 0$. If we choose $h = s_k\,s_l^*$, then
$$\theta_{s_k s_l^*} \lrcorner\, \omega = \sum_i \big(s_k s_l^*\,s_i\,ds_i^* + ds_i\,s_i^*\,s_k s_l^*\big) = s_k\,ds_l^* + ds_k\,s_l^* = d(s_k\,s_l^*)\,.$$
The set of derivations spanned by $\theta_{s_k s_l^*}$ for $1 \leq k, l \leq n$ is closed under commutator, and we call it $V$. We see that the Hamiltonian element corresponding to the derivation $\theta_{s_k s_l^*}$ is $s_k\,s_l^*$, and that the Poisson brackets are given by
$$\{s_k s_l^*,\; s_r s_m^*\} = \theta_{s_k s_l^*}(s_r s_m^*) = \delta_{lr}\,s_k s_m^* - \delta_{mk}\,s_r s_l^*\,.$$
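These structure constants are recognizable: under the correspondence $s_k s_l^* \mapsto E_{kl}$ they are exactly those of commutators of matrix units, $[E_{kl}, E_{rm}] = \delta_{lr}E_{km} - \delta_{mk}E_{rl}$. A quick check (our own observation, not stated in the text):

```python
# Check (ours) that the Cuntz-algebra Poisson brackets have the structure
# constants of matrix-unit commutators in M_n(R), via s_k s_l^* -> E_kl.
import numpy as np

n = 3

def E(k, l):
    m = np.zeros((n, n)); m[k, l] = 1.0; return m

def bracket(k, l, r, m):
    # {s_k s_l^*, s_r s_m^*} = delta_{lr} s_k s_m^* - delta_{mk} s_r s_l^*
    return (l == r) * E(k, m) - (m == k) * E(r, l)

for k in range(n):
    for l in range(n):
        for r in range(n):
            for m in range(n):
                assert np.allclose(bracket(k, l, r, m),
                                   E(k, l) @ E(r, m) - E(r, m) @ E(k, l))
```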

## 7 An example of tensor products and interactions

Consider the algebra $A = C^\infty(\mathbb{R}^2, M_2(\mathbb{R}))$, where we use coordinates $x, y$ for $\mathbb{R}^2$. The calculus we use will be the standard tensor product one, i.e.
$$\Omega^n\big(C^\infty(\mathbb{R}^2) \otimes M_2(\mathbb{R})\big) = \bigoplus_{p+q=n} \Omega^p\big(C^\infty(\mathbb{R}^2)\big) \otimes \Omega^q\big(M_2(\mathbb{R})\big)\,,$$
with $d$ operator and multiplication given by
$$d(\tau \otimes \eta) = d\tau \otimes \eta + (-1)^{|\tau|}\,\tau \otimes d\eta\,,\qquad (\tau \otimes \eta)\,(\tau' \otimes \eta') = (-1)^{|\eta|\,|\tau'|}\,\tau\tau' \otimes \eta\eta'\,.\tag{3}$$

The $d$ operator on $M_2(\mathbb{R})$ is the one we defined earlier (we use $d_{M_2}$ to avoid confusion later), and the $d$ operator on $C^\infty(\mathbb{R}^2)$ is the usual one:
$$d\tau = dx\,\frac{\partial\tau}{\partial x} + dy\,\frac{\partial\tau}{\partial y}\,.$$
There are derivations on the algebra given, for $f : \mathbb{R}^2 \to M_2(\mathbb{R})$, by
$$\theta(f) = \theta_x\,\frac{\partial f}{\partial x} + \theta_y\,\frac{\partial f}{\partial y} + [\theta_S, f]\,,$$
where $\theta_x$ and $\theta_y$ are real valued functions times the identity matrix on $\mathbb{R}^2$, and $\theta_S$ is an antisymmetric matrix valued function on $\mathbb{R}^2$. This has evaluations on the 1-forms given by $\theta \lrcorner\, dx = \theta_x$, $\theta \lrcorner\, dy = \theta_y$ and $\theta \lrcorner\, dE_{ij} = [\theta_S, E_{ij}]$. We shall take the 2-form
$$\omega = dx\,dy + \frac{1}{2}\sum_{ij} dE_{ij}\,dE_{ij} + dx\,(dE_{12} - dE_{21})\,,$$
where we have added the last term to ensure some interaction between the vector field and the antisymmetric matrix parts of the derivations. Then we calculate
$$\theta \lrcorner\, \omega = \theta_x\,dy - \theta_y\,dx + d_{M_2}\theta_S + \theta_x\,(dE_{12} - dE_{21}) - dx\,[\theta_S, E_{12} - E_{21}]\,.$$
The last term here vanishes, since in $M_2(\mathbb{R})$ any antisymmetric matrix is a multiple of $E_{12} - E_{21}$, so
$$\theta \lrcorner\, \omega = \theta_x\,dy - \theta_y\,dx + d_{M_2}\big(\theta_S + \theta_x\,(E_{12} - E_{21})\big)\,.$$
It is now reasonably simple to see that $\hat\omega$ is 1-1 for all derivations of the form we are considering. Given an element of the algebra $a \in C^\infty(\mathbb{R}^2, M_2(\mathbb{R}))$ we have
$$da = \frac{\partial a}{\partial x}\,dx + \frac{\partial a}{\partial y}\,dy + d_{M_2}a\,,$$
so if $\theta \lrcorner\, \omega = da$ then $d_{M_2}\big(\theta_S + \theta_x\,(E_{12} - E_{21})\big) = d_{M_2}a$, $\theta_x = \frac{\partial a}{\partial y}$ and $\theta_y = -\frac{\partial a}{\partial x}$. We see that if we put $a(x, y) = T + f(x, y)\,I_2$, where $T$ is a constant antisymmetric matrix and $f(x, y)$ is a real valued function, then $(X_a)_x = \frac{\partial f}{\partial y}\,I_2$, $(X_a)_y = -\frac{\partial f}{\partial x}\,I_2$ and $(X_a)_S = T - \frac{\partial f}{\partial y}\,(E_{12} - E_{21})$. Now we can calculate the Poisson bracket of two such Hamiltonian functions:
$$\{T + f(x,y)\,I_2,\; R + g(x,y)\,I_2\} = \frac{\partial f}{\partial y}\frac{\partial g}{\partial x}\,I_2 - \frac{\partial f}{\partial x}\frac{\partial g}{\partial y}\,I_2 + \Big[\,T - \frac{\partial f}{\partial y}\,(E_{12} - E_{21})\,,\; R + g\,I_2\,\Big] = \Big(\frac{\partial f}{\partial y}\frac{\partial g}{\partial x} - \frac{\partial f}{\partial x}\frac{\partial g}{\partial y}\Big)\,I_2\,.$$
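For Hamiltonians of the form $b = f\,I_2$ the scalar part of the dynamics is classical Hamiltonian flow: $\dot x = \partial f/\partial y$, $\dot y = -\partial f/\partial x$, and $f$ is conserved along trajectories. The sketch below integrates this flow for the illustrative choice $f = (x^2 + y^2)/2$; the example and its numerical scheme are ours, not the paper's.

```python
# A sketch (ours) of the scalar dynamics generated by b = f I_2 with
# f = (x^2 + y^2)/2, chosen purely for illustration.
import math

def vec(x, y):
    # flow of f = (x^2 + y^2)/2:  x' = df/dy = y,  y' = -df/dx = -x
    return (y, -x)

def rk4(x, y, h):
    # one classical fourth-order Runge-Kutta step
    k1 = vec(x, y)
    k2 = vec(x + h / 2 * k1[0], y + h / 2 * k1[1])
    k3 = vec(x + h / 2 * k2[0], y + h / 2 * k2[1])
    k4 = vec(x + h * k3[0], y + h * k3[1])
    return (x + h / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]),
            y + h / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1]))

x, y = 1.0, 0.0
f0 = (x * x + y * y) / 2
for _ in range(1000):                 # integrate to t = 10 with step 0.01
    x, y = rk4(x, y, 0.01)

assert abs((x * x + y * y) / 2 - f0) < 1e-6      # f conserved along the flow
assert abs(x - math.cos(10)) < 1e-6              # clockwise rotation by t radians
assert abs(y + math.sin(10)) < 1e-6
```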


## References

[1] T. Brzeziński, H. Dabrowski & J. Rembieliński, On the quantum differential calculus and the quantum holomorphicity, Jour. Math. Phys. 33 (1992), 19–24.

[2] A. Connes, Non-commutative differential geometry, Publ. Math. I.H.E.S. 62 (1985), 41–144.

[3] J. Cuntz, Simple $C^*$-algebras generated by isometries, Comm. Math. Phys. 57 (1977), 173–185.

[4] G.A. Elliott & D.E. Evans, The structure of the irrational rotation $C^*$-algebra, Ann. of Math. (2) 138 (1993), no. 3, 477–501.

[5] M. Karoubi, Homologie cyclique et K-théorie, Astérisque 149 (1987), 147pp.
