J. Phys. Chem. 1982, 86, 2239-2243


Kolmogorov Entropy and "Quantum Chaos"

Philip Pechukas

Department of Chemistry, Columbia University, New York, New York 10027 (Received: July 13, 1981)

The Kolmogorov entropy of a classical system is a measure of the degree of "chaos" inherent in the dynamics of the system. A quantum analogue of K-entropy is considered here; its definition involves repeated measurements, one every t seconds, on a representative system selected from an appropriate ensemble. The leading term in this entropy, as t → 0, is evaluated; it turns out to be sensitive to the nearest-neighbor level spacing distribution in the quantum spectrum, and therefore to the difference in this distribution between the "quasi-periodic" and "chaotic" regions of the spectrum. Somewhat paradoxically, the short-time quantum K-entropy is slightly lower in the "chaotic" regime than in the "quasi-periodic" regime. The nature of quantum K-entropy at longer times, which is more important as a diagnostic of "quantum chaos" than the short-time behavior, is briefly discussed.


I. Introduction

Classical mechanical motion, in a system of nonlinearly coupled oscillators or in a classical model of a polyatomic molecule, is typically quasi-periodic at low energies and chaotic at high energies.1 The question whether there is an analogous "transition to chaos" in the quantum mechanics of these systems has been vigorously debated over the past few years.2-15 Many of the proposed criteria for

(1) For an excellent review of work on both classical and quantum aspects of molecular motion, see D. W. Noid, M. L. Koszykowski, and R. A. Marcus, Annu. Rev. Phys. Chem., 32, 267 (1981).

"quantum chaos" are qualitative and subjective: a stationary state is chaotic if its nodal pattern looks sufficiently

(2) K. S. J. Nordholm and S. A. Rice, J. Chem. Phys., 61, 203 (1974); S. Nordholm and S. A. Rice, ibid., 61, 768 (1974); ibid., 62, 157 (1975).
(3) D. W. Noid and R. A. Marcus, J. Chem. Phys., 67, 559 (1977).
(4) E. J. Heller, Chem. Phys. Lett., 60, 338 (1979); R. Kosloff and S. A. Rice, ibid., 69, 209 (1980); E. J. Heller, preprint.
(5) R. M. Stratt, N. C. Handy, and W. H. Miller, J. Chem. Phys., 71, 3311 (1979).
(6) K. G. Kay, J. Chem. Phys., 72, 5955 (1980).
(7) I. C. Percival, J. Phys. B, 6, L229 (1973); N. Pomphrey, ibid., 7, 1909 (1974); D. W. Noid, M. L. Koszykowski, M. Tabor, and R. A. Marcus, J. Chem. Phys., 72, 6169 (1980).

© 1982 American Chemical Society


scrambled,4,8 or its expansion coefficients in some suitably chosen basis are sufficiently random; a nonstationary state is chaotic if it forgets its initial conditions sufficiently well, or sufficiently rapidly.10 Recently, Kosloff and Rice12 made the valuable suggestion that a quantitative measure of "quantum chaos" could be devised, based on the notion of Kolmogorov entropy. The Kolmogorov entropy of a classical dynamical system is a measure of the long-term unpredictability of the motion.16 It involves repeated observations, one every t seconds, on a single system drawn from an appropriate ensemble; the long-term unpredictability of the motion is calculated as the expected amount of new information to be gained in a measurement on the system after numerous measurements have already been made. The Kolmogorov entropy of a classical mechanical system is zero in the quasi-periodic regime, positive in the chaotic regime. It turns out that for a classical system the Kolmogorov entropy is linear in t, the time between measurements, so one can speak of a "rate of entropy production" during the motion. Kosloff and Rice made the following important point: if one defines a quantum analogue of the classical Kolmogorov entropy in such a way that its value is linear in t, the proportionality constant will turn out to be zero. This is because in a finite-dimensional state space, such as the space spanned by the eigenfunctions of the Hamiltonian with energies lying in an "energy shell" from E to E + ΔE, the quantum propagator exp(-iHt/ℏ) comes arbitrarily close to the identity at certain sufficiently long times, and so the "rate of entropy production" in the system must vanish. In this paper I discuss a quantum analogue of the Kolmogorov entropy which, because of the nature of measurements in quantum mechanics, is not simply proportional to the time between measurements and does not vanish for all quantum systems, "chaotic" or "quasi-periodic".
This definition of quantum K-entropy seems, to me, closer in spirit to the classical definition than that preferred by Kosloff and Rice. However, I do not intend to take anything away from the work of Kosloff and Rice; in fact, Kosloff and Rice briefly discussed this alternate definition of quantum K-entropy.12 Nor do I deny the important point set out in the preceding paragraph: if the time between measurements is taken so long that the quantum propagator is essentially the identity, then the Kolmogorov entropy defined below will nearly vanish. But that is a very long time for a system in which the density of states is reasonably high, a time on the order of h/δE where δE is such that all level spacings in the energy shell are nearly integral multiples of δE. In practice, whether intramolecular motion is chaotic or not matters only for times short compared to this "recurrence" time. Section II reviews the classical definition of Kolmogorov

(8) S. W. McDonald and A. N. Kaufman, Phys. Rev. Lett., 42, 1189 (1979).

(9) J. S. Hutchinson and R. E. Wyatt, Chem. Phys. Lett., 72, 378 (1980).
(10) P. Brumer and M. Shapiro, Chem. Phys. Lett., 72, 528 (1980); M. J. Davis, E. B. Stechel, and E. J. Heller, ibid., 76, 21 (1980); Y. Weissman and J. Jortner, ibid., 78, 224 (1981).
(11) E. J. Heller, J. Chem. Phys., 72, 1337 (1980).
(12) R. Kosloff and S. A. Rice, J. Chem. Phys., 74, 1340 (1981).
(13) R. A. Marcus, Ann. N.Y. Acad. Sci., 357, 169 (1980); R. Ramaswamy and R. A. Marcus, J. Chem. Phys., 74, 1379 (1981).
(14) S. A. Rice in "Quantum Dynamics of Molecules", R. G. Woolley, Ed., Plenum, New York, 1980, p 257.
(15) M. Tabor, Adv. Chem. Phys., 46, 73 (1981).
(16) P. Billingsley, "Ergodic Theory and Information", Wiley, New York, 1965; V. I. Arnold and A. Avez, "Ergodic Problems of Classical Mechanics", Benjamin, New York, 1968; Ya. G. Sinai, "Introduction to Ergodic Theory", translated by V. Scheffer, Princeton University Press, Princeton, NJ, 1976.


entropy in a form that motivates the quantum analogue presented in section III. The leading term in the quantum K-entropy can be evaluated for short t, the time between measurements (section IV), and turns out to be sensitive to the distribution of nearest-neighbor energy level spacings and therefore to the difference in level spacing distributions in the "quasi-periodic" and "chaotic" regions of the spectrum (section V). Section VI is a brief discussion of what one may expect to find at longer times.

II. Classical K-Entropy

The Kolmogorov entropy associated with classical motion on a phase space surface of constant energy can be defined in the following way. Partition the surface Ω into a finite number of disjoint sets Ω_1, Ω_2, ..., Ω_m and let χ_1, χ_2, ..., χ_m be the characteristic functions of these sets:

χ_j(Γ) = 1 if Γ ∈ Ω_j
       = 0 otherwise   (1)

Then

χ_j χ_k = δ_jk χ_j,   Σ_{j=1}^m χ_j = 1   (2)

These equations state that every point Γ on the energy surface is unambiguously labeled by one of the integers 1, 2, ..., m. Let A be the observable

A = Σ_{j=1}^m j χ_j   (3)
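The partition and coarse observable of eqs 1-3 can be made concrete in a few lines. The following sketch is illustrative, not from the paper: it takes the "energy surface" to be the unit interval with m = 4 equal cells, an assumption made only for the example.

```python
import numpy as np

# Illustrative sketch of eqs 1-3: a toy "energy surface" (the unit
# interval, an assumption of this sketch) partitioned into m = 4 cells
# Omega_j = [(j-1)/m, j/m), with characteristic functions chi_j and the
# coarse observable A = sum_j j * chi_j.
m = 4

def chi(j, gamma):
    """Characteristic function of cell Omega_j (eq 1), j = 1, ..., m."""
    gamma = np.asarray(gamma, dtype=float)
    return (((j - 1) / m <= gamma) & (gamma < j / m)).astype(float)

def A(gamma):
    """Coarse observable A = sum_{j=1}^m j * chi_j(gamma) (eq 3)."""
    return sum(j * chi(j, gamma) for j in range(1, m + 1))

pts = np.array([0.0, 0.3, 0.55, 0.99])
labels = A(pts)
```

Eq 2 is immediate in this sketch: the χ_j are orthogonal (χ_j χ_k = δ_jk χ_j) and sum to 1, so every point carries exactly one label.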

Every point Γ on the energy surface generates a sequence of integers, obtained by measuring A every t seconds as the system evolves from Γ at time zero:

Γ → x_0, x_1, ..., x_n, ...   (4)

where

x_n(Γ) = A(Γ_nt)   (5)

The sequence x_0, x_1, ..., x_n, ... is a crude representation of the classical trajectory through Γ, coarse grained in both space and time. Two distinct classical trajectories may generate precisely the same sequence through, say, the first 10000 entries, but then the next measurement may distinguish the two. Each measurement on a particular system in general gives us new information about the trajectory it is following, in the sense of enabling us better to distinguish it from its neighbors. We can quantify the information gained, at a given measurement, according to standard practice in information theory.16 If the initial point Γ is chosen "at random", i.e., according to Liouville measure μ on the energy surface (we assume μ is normalized, μ(Ω) = 1), the expected information gained, in the measurement at time nt, is

I_n(A,t) = -Σ_{x_0} ... Σ_{x_{n-1}} p(x_{n-1}, ..., x_0) Σ_{x_n=1}^m p(x_n|x_{n-1},...,x_0) ln p(x_n|x_{n-1},...,x_0)   (6)

Here p(x_n|x_{n-1},...,x_0) is the conditional probability that the measurement gives x_n, given that the preceding measurements gave x_{n-1}, ..., x_0:

p(x_n|x_{n-1},...,x_0) = p(x_n,x_{n-1},...,x_0)/p(x_{n-1},...,x_0)   (7)

where

p(x_n,...,x_0) = μ({Γ : x_n(Γ) = x_n, ..., x_0(Γ) = x_0})   (8)
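For a concrete system the probabilities in eqs 6-8 can be estimated by simple counting. The sketch below is an illustration, not from the paper: it uses the doubling map x → 2x mod 1 on the unit interval with the two-cell partition [0, 1/2), [1/2, 1), a standard example whose K-entropy is ln 2 (the symbol sequence reproduces the binary digits of x_0).

```python
import numpy as np

# Monte Carlo estimate of I_n (eq 6) for the doubling map x -> 2x mod 1,
# partitioned into two cells: symbol 0 if the state is < 1/2, else 1.
# Illustrative example (not from the paper); its K-entropy is ln 2.
rng = np.random.default_rng(1)
n_hist = 5                       # condition on the 5 preceding outcomes
samples = 200_000
x = rng.random(samples)          # initial points, uniform (Liouville) measure

seqs = np.zeros((samples, n_hist + 1), dtype=np.int64)
for k in range(n_hist + 1):
    seqs[:, k] = (x >= 0.5)      # measurement of the two-cell observable
    x = (2.0 * x) % 1.0          # undisturbed motion for one time step

# joint and marginal frequencies (eqs 7-8), then the conditional entropy (eq 6)
hist = seqs[:, :-1] @ (2 ** np.arange(n_hist - 1, -1, -1))
full = 2 * hist + seqs[:, -1]
p_full = np.bincount(full, minlength=2 ** (n_hist + 1)) / samples
p_hist = np.bincount(hist, minlength=2 ** n_hist) / samples
cond = np.where(p_full > 0, p_full / np.repeat(p_hist, 2), 1.0)
I_n = -np.sum(p_full * np.log(cond))
```

With uniform x_0 the binary symbols are independent fair coin flips, so each conditional probability is 1/2 and I_n ≈ ln 2 ≈ 0.693 independent of n: positive K-entropy, the signature of chaotic motion.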


The numbers I_n do not increase with n (the amount of new information to be gained in a measurement on a system certainly cannot go up as one learns more about the system) and are ≥ 0, so the sequence I_n has a nonnegative limit

I(A,t) = lim_{n→∞} I_n(A,t)   (9)

which can be regarded as the long-term unpredictability of the observable A under the classical motion on the energy surface. Some observables are more predictable than others. For example, if we do not partition the energy surface at all, i.e., Ω_1 = Ω, then A = 1 and I_n(A,t) = I(A,t) = 0, since we are certain at each measurement of A what the outcome will be. Kolmogorov suggested that the least upper bound of the various numbers I(A,t) generated by all possible observables A of the indicated form

h(t) = sup_A I(A,t)   (10)

would be an appropriate measure of the amount of "chaos" inherent in the classical motion; h(t) is the classical K-entropy. Determining the least upper bound of I(A,t) is what makes practical calculation of the classical K-entropy so difficult, for there are arbitrarily fine partitions of the energy surface, and they must all be considered.

III. Quantum K-Entropy

The Kolmogorov entropy of a quantum system is defined as in section II, with certain essential modifications. Instead of an energy surface we consider an "energy shell", the N-dimensional subspace spanned by the eigenstates of H with energies in the range E to E + ΔE. Out of respect for the linear structure of quantum mechanics, we partition the energy shell into m ≤ N linear subspaces by orthogonal projection operators P_j:

P_j P_k = δ_jk P_j,   Σ_{j=1}^m P_j = I   (11)

(see eq 2). We select a representative system from an appropriate ensemble, described by a normalized invariant density operator ρ on the energy shell, and generate a sequence of integers by repeated measurements, one every t seconds, of the observable

A = Σ_{j=1}^m j P_j   (12)

(see eq 3). We specify that the measurements be "ideal" measurements. Thus, the probability that the first measurement, at time zero, gives x_0 is

p(x_0) = Tr ρ P_{x_0} = Tr P_{x_0} ρ P_{x_0}   (13)

and if it does, the normalized density operator immediately after this measurement is

P_{x_0} ρ P_{x_0} / Tr P_{x_0} ρ P_{x_0}   (14)

Then immediately before the next measurement, at time t, the density operator is

U P_{x_0} ρ P_{x_0} U^{-1} / Tr P_{x_0} ρ P_{x_0}   (15)

where U = exp(-iHt/ℏ) is the quantum propagator for undisturbed motion of the system between measurements; and the probability that this measurement gives x_1 is

p(x_1|x_0) = Tr P_{x_1} U P_{x_0} ρ P_{x_0} U^{-1} P_{x_1} / Tr P_{x_0} ρ P_{x_0} = p(x_1,x_0)/p(x_0)   (16)

so

p(x_1,x_0) = Tr P_{x_1} U P_{x_0} ρ P_{x_0} U^{-1} P_{x_1}   (17)

Proceeding in this fashion, we find that

p(x_n,x_{n-1},...,x_0) = Tr P_{x_n} U P_{x_{n-1}} U ... U P_{x_0} ρ P_{x_0} U^{-1} ... U^{-1} P_{x_{n-1}} U^{-1} P_{x_n}   (18)

I_n(A,t) and I(A,t) are defined as in eq 6 and 9. I(A,t) is a measure, relative to the partition defined by the projectors P_j, of the long-term unpredictability of the quantum motion between measurements. It should be emphasized that although repeated measurements introduce a stochastic element into the history of an individual quantum system (a stochastic element that has no classical counterpart), it is the undisturbed quantum motion between measurements that determines I(A,t). In particular, if there is no motion, that is, if U = exp(-iHt/ℏ) happens to be the identity in the energy shell, then I(A,t) is zero, no matter how we partition the energy shell. As in the classical case, I(A,t) in general depends on the partition that defines A; for example, if we partition by projectors onto the eigenstates of H, then I(A,t) = 0. The quantum K-entropy is defined as in eq 10, as the supremum of I(A,t) over all observables A of the form (12). Calculating this supremum is easier in quantum mechanics than in classical mechanics, because a quantum partition cannot be arbitrarily fine; there is no way to divide an N-dimensional energy shell into more than N linear subspaces. In particular, it is possible to calculate the quantum K-entropy for short t, the time between measurements; that calculation is the subject of the next section.

IV. Short-Time Behavior

It turns out (see the end of this section) that the projectors that determine the K-entropy at short t are one-dimensional, so we are concerned with a basis |1⟩, ..., |N⟩ in the energy shell. Then

p(x_n|x_{n-1},...,x_0) = p(x_n|x_{n-1}) = |⟨x_n| exp(-iHt/ℏ) |x_{n-1}⟩|²   (19)

Setting ρ = I/N, the density operator for uniform distribution in the energy shell, we find

I(A,t) = -N^{-1} Σ_{i=1}^N Σ_{j=1}^N p_ij ln p_ij   (20)

where p_ij = |⟨j| exp(-iHt/ℏ) |i⟩|² is the transition probability from state i to state j in time t. When the energy shell is partitioned into one-dimensional subspaces, repeated quantum measurements on a system generate a Markov chain (see eq 19), and it is encouraging that the quantum information entropy I(A,t) reduces in this case to the standard formula for the entropy of a Markov chain.16 As t → 0 we have

p_ij = |H_ji|² t²/ℏ²   (j ≠ i)
     = 1 - ( )t²       (j = i)   (21)

and the leading term in I(A,t) is

I(A,t) ≈ (-2t² ln t/ℏ²)(N^{-1} Σ_i Σ_{j≠i} |H_ji|²)   (22)

Notice that the second term in parentheses is essentially the average rate of transition out of a basis state: the faster this rate of transition the greater the short-time entropy, which makes some physical sense. Notice also the peculiar t dependence, which guarantees that the quantum K-entropy is not linear in t.
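The short-time formulas can be checked numerically. The sketch below is an illustration, not from the paper: it sets ℏ = 1 and takes a random 8 × 8 real symmetric Hamiltonian (both assumptions of the example), computes I(A,t) exactly from eq 20, and compares it with the leading term of eq 22.

```python
import numpy as np

# Check of eqs 20-22 (with hbar = 1, an assumption of this sketch):
# exact I(A,t) for a random Hermitian H versus the leading short-time
# term (-2 t^2 ln t)(N^-1 sum_i sum_{j != i} |H_ji|^2).
rng = np.random.default_rng(7)
N = 8
M = rng.standard_normal((N, N))
H = (M + M.T) / 2                     # random real symmetric Hamiltonian

w, V = np.linalg.eigh(H)              # spectral decomposition of H

def I_of_t(t):
    """Quantum information entropy, eq 20, from exact transition probabilities."""
    U = (V * np.exp(-1j * w * t)) @ V.conj().T   # U = exp(-iHt)
    p = np.abs(U) ** 2                           # p_ij = |<j|exp(-iHt)|i>|^2
    return -np.sum(p * np.log(p)) / N

t = 1e-6
offdiag = np.sum(H ** 2) - np.sum(np.diag(H) ** 2)   # sum_{j != i} |H_ji|^2
leading = (-2 * t**2 * np.log(t)) * offdiag / N      # eq 22
ratio = I_of_t(t) / leading
```

As t decreases the ratio tends to 1; the subleading corrections are smaller by a factor of order 1/ln t, which is the peculiar non-analytic t dependence noted above.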


To determine the short-time K-entropy we have now to find the basis that maximizes I(A,t); that is, we want the basis that maximizes

Σ_i Σ_{j≠i} |H_ji|²   (23)

Here is the solution to this variational problem: Number the eigenstates of H in order of increasing energy, E_1 ≤ E_2 ≤ ... ≤ E_N, where φ_1, φ_2, ..., φ_N are the corresponding orthonormal eigenstates. The desired basis is the set of states (φ_1 ± φ_N)/2^{1/2}, (φ_2 ± φ_{N-1})/2^{1/2}, ..., where, if N is odd, the middle state φ_{(N+1)/2} appears "by itself" in the basis. The rest of this section is devoted to the proof of this statement (and to the demonstration that at short times only one-dimensional projectors need be considered), and the reader who is not intrigued by the variational problem of maximizing the sum (23) should proceed directly to section V, after reading the resulting formula for the short-time quantum K-entropy:

h(t) ≈ (-t² ln t/ℏ²) N^{-1} ((E_N - E_1)² + (E_{N-1} - E_2)² + ...)   (24)

Note that

Σ_i Σ_{j≠i} |H_ji|² = Tr H² - Σ_i |H_ii|²   (25)

and since Tr H² is independent of basis we are looking for the basis that minimizes Σ|H_ii|². Suppose we have found it, and consider an infinitesimal change of basis that "scrambles" only vectors k and l; then δ(Σ|H_ii|²) = 0 implies

H_kl(H_kk - H_ll) = 0   (26)

Equation 26 must hold for all k and l, so in the desired basis the matrix of H is block diagonal, with equal diagonal elements in each block. Let us consider the contribution of each block to expression (25). If the block is 1 × 1, it is an eigenvalue of H, and the contribution is zero. If the block is 2 × 2, we can diagonalize to get two eigenvalues of H, E_m and E_n, say, and the diagonal elements of the block are therefore equal to (E_m + E_n)/2. The contribution to (25) is

E_m² + E_n² - 2((E_m + E_n)/2)² = (E_m - E_n)²/2   (27)

Now suppose the block is larger than 2 × 2, with equal diagonal elements E. Split off the diagonal and imagine diagonalizing the remainder, R; R is a matrix with zeros on the diagonal. If the dimension of R is even, the secular determinant is an even function of the eigenvalue parameter λ, and the eigenvalues come in pairs, ±λ_1, ±λ_2, .... The eigenvalues of H from this block therefore come in pairs, E ± λ_1, E ± λ_2, etc., and the contribution of the block to (25) can be written

(E + λ_1)² + (E - λ_1)² - 2E² + (E + λ_2)² + (E - λ_2)² - 2E² + ... =
((E + λ_1) - (E - λ_1))²/2 + ((E + λ_2) - (E - λ_2))²/2 + ...   (28)

If the dimension of R is odd, the secular determinant is odd in λ; there is a zero eigenvalue, and the others come in pairs, ±λ_1, ±λ_2, .... The contribution of the block to (25) is just as in (28); the single eigenvalue of H at energy E contributes nothing. No matter what the size of the block, then, the contribution to (25) involves nothing more complicated than pairs of eigenvalues of H. The problem of maximizing (25) reduces to the following: group the eigenvalues of H into pairs and singles so as to maximize Σ(δE)², where the sum is over the eigenvalue pairs and δE is the difference between the two eigenvalues in each pair. We can then find a basis in which (25) attains this maximum: it consists of the ± linear combinations of the eigenvectors associated with each pair.

Let us first solve this pairing problem for small matrices. For a 2 × 2, with E_1 ≤ E_2, the solution is obviously the pair (E_1,E_2). For a 3 × 3, with E_1 ≤ E_2 ≤ E_3, the solution is obviously the pair (E_1,E_3). For a 4 × 4, with E_1 ≤ E_2 ≤ E_3 ≤ E_4, it is easy to verify, by comparing the three possibilities, that the solution is the pairing (E_1,E_4), (E_2,E_3). Now consider an N × N, with E_1 ≤ E_2 ≤ ... ≤ E_N. In the "ideal" pairing, can E_1 and E_N be singles? No; we pair them and we pick up a contribution (E_N - E_1)²/2 to (25). Can E_1 be a single while E_N is paired with, say, E_i? No; that is not optimal pairing for the three-eigenvalue system E_1, E_i, E_N. Similarly, E_N cannot be a single. Can E_N and E_1 be paired with a couple of other eigenvalues, say E_i and E_j? No; that is not optimal pairing for the four-eigenvalue system E_1, E_i, E_j, E_N. In the "ideal" pairing, then, E_N and E_1 are paired. The problem reduces to the pairing problem for the eigenvalues E_2 ≤ E_3 ≤ ... ≤ E_{N-1}, and by repetition of the same argument we finally arrive at the solution described just after expression (23).

We have still to verify the premise on which this section is based, that at short times the projectors that determine the K-entropy are one-dimensional. Consider eq 6. No matter what the sequence x_0, ..., the conditional probabilities p(x_n|x_{n-1},...,x_0) go at short times as t², if x_n ≠ x_{n-1}, or as 1 - ( )t², if x_n = x_{n-1}; the probabilities p(x_{n-1},...,x_0) are all small unless x_0 = x_1 = ... = x_{n-1}. The leading term in eq 6, then, is obtained by setting x_0 = x_1 = ... = x_{n-1}, and we find

I(A,t) ≈ (-2t² ln t/ℏ²)(N^{-1} Σ_i Σ_{j≠i} Tr P_j H P_i H P_j)   (29)

(cf. eq 22). Let us evaluate eq 29 in a basis chosen so that each basis vector lies entirely in one of the subspaces defined by the projectors P_j, a situation we symbolize as k ∈ i if basis vector k lies in the subspace defined by P_i. Then

I(A,t) ≈ (-2t² ln t/ℏ²)(N^{-1} Σ_i Σ_{k∈i} Σ_{j≠i} Σ_{l∈j} |⟨l|H|k⟩|²)
       ≤ (-2t² ln t/ℏ²)(N^{-1} Σ_k Σ_{l≠k} |⟨l|H|k⟩|²)   (30)

which establishes the result.
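The pairing theorem is easy to check by brute force for a small matrix. The sketch below is illustrative (N = 6 and a random symmetric H are assumptions of the example): it enumerates all pairings of the six eigenvalues, confirms that pairing E_1 with E_N, E_2 with E_{N-1}, ... maximizes Σ(δE)²/2, and verifies that the ± combinations of the paired eigenvectors realize this maximum for the off-diagonal sum (25).

```python
import numpy as np

# Brute-force check of the pairing argument of section IV
# (illustrative: N = 6, random real symmetric H).
rng = np.random.default_rng(3)
N = 6
M = rng.standard_normal((N, N))
H = (M + M.T) / 2
E, V = np.linalg.eigh(H)          # eigenvalues ascending, eigenvectors in columns

def best_pairing(vals):
    """Maximum of sum (dE)^2 / 2 over all ways of grouping vals into pairs."""
    if len(vals) < 2:
        return 0.0
    first, rest = vals[0], vals[1:]
    return max((rest[i] - first) ** 2 / 2 + best_pairing(rest[:i] + rest[i + 1:])
               for i in range(len(rest)))

pair_max = best_pairing(tuple(E))

# the paper's "ideal" pairing: (E_1, E_N), (E_2, E_{N-1}), ...
outer = sum((E[N - 1 - k] - E[k]) ** 2 for k in range(N // 2)) / 2

# basis of +/- combinations of the paired eigenvectors
B = np.empty((N, N))
for k in range(N // 2):
    B[:, 2 * k] = (V[:, k] + V[:, N - 1 - k]) / np.sqrt(2)
    B[:, 2 * k + 1] = (V[:, k] - V[:, N - 1 - k]) / np.sqrt(2)
Hb = B.T @ H @ B
offdiag = np.sum(Hb ** 2) - np.sum(np.diag(Hb) ** 2)   # expression (25) in this basis
```

In the eigenbasis the off-diagonal sum vanishes; the paired ± basis pushes it to its maximum, Tr H² minus the minimal Σ H_ii², in agreement with eq 24.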

V. K-Entropy and Nearest-Neighbor Level Spacings

The short-time K-entropy (eq 24) is determined by energy level differences and should therefore be sensitive to the distribution of nearest-neighbor level spacings in the spectrum. There is evidence that the form of this distribution differs in the "quasi-periodic" and "chaotic" regions of the spectrum.18 We can ask, given two systems with different nearest-neighbor spacing distributions, which has the higher K-entropy? It turns out that the short-time K-entropy is mainly determined by the level density, so to make fair comparison we look at systems with precisely N states in the energy range E to E + ΔE. Let δE_i = E_{i+1} - E_i be the ith level spacing in the energy shell. We have in mind a "thin" energy shell, so that the level spacing distribution is essentially constant over the range E to E + ΔE, which nevertheless contains many states. We adopt the following model: the δE_i are identically distributed and independent except for the constraint

Σ_{i=1}^{N-1} δE_i = ΔE   (31)

(17) See J. M. Jauch, "Foundations of Quantum Mechanics", Addison-Wesley, Reading, MA, 1968.
(18) See the contribution by M. V. Berry in this issue.

We ask, what is the expected value of the short-time K-entropy? The calculation is done by expressing the energy differences in eq 24 in terms of the δE_i and then averaging. Obviously

⟨δE⟩ = ΔE/(N - 1)   (32)

Let

σ² = (⟨δE²⟩ - ⟨δE⟩²)/⟨δE⟩²   (33)

be the normalized mean square dispersion of the level spacing distribution; then the constraint (31) implies (if i ≠ j)

(⟨δE_i δE_j⟩ - ⟨δE⟩²)/⟨δE⟩² = -σ²/(N - 2)   (34)

We find, up to terms of order 1/N,

h(t) ≈ (-t² ln t/ℏ²)(ΔE²/6)(1 + σ²/2N)   (35)

Notice that the leading term in h(t) depends only on the width of the energy shell, ΔE. Roughly speaking, the short-time K-entropy measures a dephasing rate (see the discussion following eq 22), and short-time dephasing of a nonstationary state is governed mainly by the spread of energies in its wave function. Nevertheless, the short-time K-entropy is definitely sensitive to the nearest-neighbor level spacing distribution, through σ²: the larger the mean square dispersion of this distribution, the higher the entropy. Berry and Tabor19 have deduced the form of the distribution in the energy range where the classical motion is quasi-periodic; they find

p(S) = e^{-S}   (36)

where S is the normalized spacing

S = δE/⟨δE⟩   (37)

This is a distribution that encourages "level clustering", since p(S) is a maximum at S = 0, and has a fairly large dispersion: σ² = 1. Less is known about the distribution in the energy range where the classical motion is chaotic. Zaslavskii20 has given a semiclassical argument that the "chaotic" spectrum will be characterized by "level repulsion", p(S) → 0 as S → 0; McDonald and Kaufman8 found "level repulsion" in numerical calculations on the stadium problem. Perhaps the distribution is of the form proposed by Wigner for the level spacings of complex nuclei21

p(S) = (π/2) S e^{-πS²/4}   (38)

(19) M. V. Berry and M. Tabor, Proc. R. Soc. London, Ser. A, 356, 375 (1977).
(20) G. M. Zaslavskii, Sov. Phys. JETP, 46, 1094 (1977).

If so, σ² = 4/π - 1. We conclude that the short-time K-entropy of a quantum system is slightly lower in the "chaotic" regime than in the "quasi-periodic" regime, because the nearest-neighbor level spacing distribution is less disperse in the former than in the latter. Because of level repulsion, the energy level spectrum looks more "ordered" in the "chaotic" regime than in the "quasi-periodic" regime, and this ordering is reflected in the short-time K-entropy.
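The two spacing distributions of eqs 36 and 38 can be compared directly. The sketch below is illustrative, not from the paper: it samples each distribution (the Wigner sampler inverts the CDF 1 - e^{-πS²/4}, an implementation choice) and estimates the dispersion σ² of eq 33, recovering σ² = 1 (Poisson) and σ² = 4/π - 1 ≈ 0.27 (Wigner), the quantities that enter eq 35.

```python
import numpy as np

# Dispersion sigma^2 (eq 33) of the two nearest-neighbor spacing
# distributions of section V, estimated by sampling (illustrative sketch).
rng = np.random.default_rng(0)
n = 1_000_000

S_poisson = rng.exponential(1.0, n)            # p(S) = e^{-S}, eq 36
u = rng.random(n)
S_wigner = np.sqrt(-(4.0 / np.pi) * np.log1p(-u))   # eq 38, by inverse CDF

def sigma2(S):
    """Normalized mean-square dispersion of a spacing sample (eq 33)."""
    return (np.mean(S ** 2) - np.mean(S) ** 2) / np.mean(S) ** 2

s2_poisson, s2_wigner = sigma2(S_poisson), sigma2(S_wigner)
```

Since eq 35 grows with σ², the Wigner ("chaotic") spectrum yields the slightly lower short-time K-entropy, as stated above.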

VI. Discussion

What happens at longer times? I do not know; I cannot solve the quantum variational problem posed by eq 10. If the quantum K-entropy studied here is a useful measure of "quantum chaos" (which is by no means certain, given the results of section V), the short-time behavior is less significant than the behavior at times between h/ΔE and h/δE. These are times long enough that the K-entropy will not be determined mainly by ΔE, the width of the energy shell we choose to examine, but by the intrinsic dynamics of the system within that shell, yet short enough to avoid long-time mathematical peculiarities of no physical significance in a system with high level density, such as quantum "recurrences", exp(-iHt/ℏ) ≈ I. If quantum K-entropy, either in the version proposed here or in some variant, can ever be evaluated in the time interval of physical interest, its value will turn out to depend only on the spectrum of the Hamiltonian. K-entropy is not sensitive to the shape of the eigenfunctions, or to their spatial extent, or to their nodal pattern, or to any property of a stationary state other than its energy. This is because K-entropy is a dynamical invariant, the same for any two dynamical systems that are isomorphic,16 and two quantum systems with the same energy level spectrum are obviously isomorphic, under the linear mapping that takes each eigenstate of one system to that of the same energy in the second system. I have no idea what the quantum analogue of classical chaos really is, but if it turns out to be something richer than just a spectral property, Kolmogorov entropy will not measure it.

Acknowledgment. This work was supported in part by the National Science Foundation (NSF CHE 78-20066).

(21) See A. Bohr and B. Mottelson, "Nuclear Structure", Benjamin, New York, 1969.