Collisional Relaxation via Eigenfunction-Eigenvalue Expansion: Analysis of a Simple Case

Wendell Forst
INPL-ENSIC and Université Nancy I, 1, rue Grandville, 54042 Nancy Cedex, France

The solution of the problem of collisional relaxation in gases, or indeed of any stochastic Markovian process, is commonly given by an expansion in terms of the eigenvalues and eigenvectors of a transition matrix (eq 13 below). These concepts are not easy to grasp and often are somewhat mystifying to students. The purpose of this paper is to illustrate, using only elementary matrix algebra and a transparent example, how the solution of the relaxation problem is put together. Although the example is simple enough to allow explicit numerical calculation by hand, it conserves all the features of a more realistic case. Figure 4 shows a simple program for a microcomputer, written in APL, that generates all the results shown in this paper and allows scale-up to larger, more realistic systems. In running this program, the student can quickly see how the solution is affected by changing the various parameters and the system size.

The Master Equation

In the discussion of the kinetic theory of gases, it is shown in every textbook of physical chemistry¹ that collisions are random. Collisions cause processes that depend on them to evolve in time, and therefore such processes are called stochastic. Collisions also are uncorrelated, meaning that the system has no memory beyond the immediately preceding collision; such a process is called Markovian. Thus collisional activation and deactivation in a gas is a stochastic Markovian process, which can be conveniently described by what is commonly referred to as the "master equation". There exists a voluminous literature on the subject², but in its simplest form the master equation is a bookkeeping device that keeps track of all collisions that affect the population of molecules of a given energy.

¹ See, for example, Moore, W. J. Physical Chemistry, 4th ed.; Prentice-Hall: Englewood Cliffs, NJ, 1972; Chapter 4.
² For a review, see Oppenheim, I.; Shuler, K. E.; Weiss, G. Stochastic Processes in Chemical Physics: The Master Equation; MIT: Cambridge, MA, 1977; Van Kampen, N. G. Stochastic Processes in Physics and Chemistry; North-Holland: Amsterdam, 1981.

To proceed, we first need a model for the relaxing system. Let us suppose that we have a number of molecules A highly dispersed in a large number of molecules M; as a result, encounters between two molecules A are unlikely, and only binary collisions A + M occur and need to be dealt with. The energy exchange in a collision between A and M can be represented by the equation

    A(y) + M → A(x) + M    (1)

where y represents the energy of A before the collision, and x the energy of A after the collision. The case y > x corresponds to collisional deactivation, y < x to collisional activation, the requisite energy difference y − x being provided or absorbed by M. No statement is made about the corresponding energy change of M, since the large number of molecules M assumed to be present is equivalent to a heat bath, large enough so that its properties remain independent of time. Thus molecules A are in effect immersed in a thermostat characterized by a temperature T. We shall assume that energies x and y in eq 1 are not sufficient to cause the molecule A to dissociate, which means that we shall confine our interest to collisions that merely transfer energy. We shall also assume, for the present, that energies x and y represent a continuum. The master equation then takes the form

    dc(x, t)/dt = ω ∫₀^∞ q(x, y) c(y, t) dy − ω c(x, t) ∫₀^∞ q(y, x) dy    (2)

where c(x, t) is the fractional population, i.e., the probability, per unit energy, of molecules A having energy x at time t; q(x, y) is the probability, per collision and per unit energy, that in a collision the molecule A shall undergo a transition from initial energy y to final energy x; and ω is the collision frequency (s⁻¹). We can look upon the product ωq(x, y) as the first-order rate constant per unit energy for the transition y → x. Equation 2 is a linear first-order integro-differential equation, also known as the Pauli equation.³ The upper integration limit in eq 2 is in principle the energy of the highest bound level of A, which for convenience is taken to be infinity. The term on the LHS of eq 2 represents the time rate of change of the population in energy level x. The positive term on the RHS of eq 2 sums over transitions from all y to the specified x, i.e., it represents all processes that populate energy level x, while the negative term sums over transitions from the specified x to all y, i.e., it represents all processes that depopulate level x. Equation 2 is thus a rate equation composed according to the usual rules of chemical kinetics. Since the transition probability q(x, y) is a stochastic matrix, it is time-independent and is normalized to unity:

    ∫₀^∞ q(x, y) dx = 1    (3)

for all y, meaning that whatever the initial energy in a collision, summing over all final energies accounts for all possible outcomes of that collision. The result is therefore certainty, i.e., unit probability. As a result, we have for the negative term on the RHS of eq 2:

    ω c(x, t) ∫₀^∞ q(y, x) dy = ω c(x, t)    (4)

and therefore eq 2 reduces to

    dc(x, t)/dt = ω [ ∫₀^∞ q(x, y) c(y, t) dy − c(x, t) ]    (5)

For a given transition probability q(x, y), the solution of eq 5 consists of finding an expression for the time-dependent population c(x, t). Analytic solutions are known for a few particular models²,³ of q(x, y).

Matrix Formulation

For the present purpose, which is to illustrate the solution in terms of an eigenfunction-eigenvalue expansion, as well as for the purpose of obtaining numerical results for an arbitrary transition probability model, it is more instructive to assume that molecule A has a discrete spectrum of energies. It is also more convenient to work with the dimensionless time τ = ωt. We then have to rewrite eq 5 in matrix form:

    dC/dτ = (Q − I)C    (6)

where C is a (time-dependent) vector (one-column matrix) of fractional populations with elements c_i(τ), which are the equivalent of c(x, t); Q is the matrix of transition probabilities with elements q_ij, which are the equivalent of q(x, y); and I is the unit matrix (a matrix that has unit diagonal elements, all off-diagonal elements being zero). Matrices Q and I are square; if the discrete energy spectrum of A consists of n levels, we shall take the size of the two matrices to be n × n; the consequence of this choice of size will emerge shortly. For every τ the vector C has n elements c_i(τ), which represent the populations of the energy levels of the system at (dimensionless) time τ. In the elements q_ij, index i numbers rows and index j numbers columns. Thus, for example, the first row of Q consists only of down-transitions to level 1 (j → 1), while the last row contains only up-transitions to level n (j → n). The diagonal elements q_ii correspond to elastic collisions, i.e., no change in energy. The matrix P = Q − I has elements p_ij that are the same as q_ij, except that the elements on the diagonal are p_ii = q_ii − 1. Using the rules of matrix multiplication, the student can easily show that eq 6 is indeed equivalent to eq 5.

The analogue of eq 3 for the normalization condition of matrix Q is

    Σ_{i=1}^{n} q_ij = 1  (for all j)    (7)

and therefore the normalization condition of matrix P is

    Σ_{i=1}^{n} p_ij = 0  (for all j)    (8)

meaning that the sum over rows of every column must add up to unity or zero, respectively. Since we decided to take matrix Q to be of size n × n, index i in eqs 7 and 8 runs over the n discrete energy levels of the system. It follows therefore from eq 6 that the consequence of matrix P being normalized to zero (eq 8) is that

    Σ_{i=1}^{n} dc_i/dτ = 0    (9)

This represents conservation of the number of particles of A in the system, in accordance with the condition that the system be nonreactive.
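As a quick numerical illustration of eqs 6-9, the following is a minimal Python/NumPy sketch; it is not part of the article (whose program is in APL), and the 3 × 3 stochastic matrix Q used here is an arbitrary example, not the Hoare matrix introduced below.

```python
import numpy as np

# An arbitrary 3-level stochastic matrix (illustration only):
# q[i, j] = probability that a collision takes the molecule from level j to level i.
Q = np.array([[0.7, 0.5, 0.2],
              [0.2, 0.4, 0.5],
              [0.1, 0.1, 0.3]])

I = np.eye(3)
P = Q - I                        # eq 6: dC/dtau = (Q - I) C

print(Q.sum(axis=0))             # each column of Q sums to 1   (eq 7)
print(P.sum(axis=0))             # each column of P sums to 0   (eq 8)

# Conservation (eq 9): zero column sums of P mean that
# sum_i dc_i/dtau = sum_i (P C)_i = 0 for any population vector C.
C = np.array([0.0, 0.0, 1.0])    # all molecules initially in the top level
print((P @ C).sum())             # ~0: total population is conserved

# Crude forward-Euler evolution of eq 6 (illustration only; the eigenfunction
# expansion developed below gives the exact solution).
dtau = 0.01
for _ in range(1000):            # evolve to tau = 10
    C = C + dtau * (P @ C)
print(C, C.sum())                # approaches equilibrium; the sum stays 1
```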

The Transition Matrix

In order to make the explicit calculations shown below as transparent as possible, we shall assume that the energy spectrum of A consists of only four levels; in addition, we shall take for Q a particularly simple form derived by Hoare⁴, which refers to a system relaxing in a "cold" (0 K) heat bath, subject to transition probabilities based on the proposition that energy is randomly distributed between the collision partners (see Fig. 4 for a routine that can deal with a larger system size and a different Q). Clearly in such a system only down-transitions are possible, and therefore the matrix will be upper-triangular. This is the prototype of a system that relaxes in a room-temperature heat bath following laser excitation to some high energy. The 4 × 4 Hoare matrix is

    Q = | 1   1/2   1/3   1/4 |
        | 0   1/2   1/3   1/4 |    (10)
        | 0    0    1/3   1/4 |
        | 0    0     0    1/4 |

It is clearly normalized to unity. The extension to size n × n is obvious: the last column will then contain n elements 1/n, the penultimate column will have (n − 1) elements 1/(n − 1), and so on. The Hoare P matrix of size 4 × 4 corresponding to the matrix of eq 10 is

    P = Q − I = | 0    1/2    1/3    1/4 |
                | 0   −1/2    1/3    1/4 |    (11)
                | 0     0    −2/3    1/4 |
                | 0     0      0    −3/4 |

with obvious normalization to zero. In order to see how a system described by a matrix P evolves in time, we first have to find the n eigenvalues μ_k and the associated eigenvectors φ_k, labeled k = 1, ..., n, of the matrix P. The terms eigenvalue/eigenvector are defined by the so-called characteristic equation, which for any general form of P reads

    μ_k φ_k = P φ_k    (12)

for every k. In other words, if P is matrix-multiplied by the column vector φ_k, the result is the vector φ_k multiplied by the constant μ_k. This is quite analogous to the perhaps more familiar stationary-state Schrödinger wave equation of quantum mechanics, if we identify P with the Hamiltonian of the system, the φ_k with the wave functions, and the eigenvalues μ_k with the allowed energies of the system.⁵
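Assuming NumPy is available, the Hoare matrix of eq 10 and the eigenvalue problem of eq 12 can be set up for any size n; the sketch below is not the article's APL routine EIG, and the helper name hoare_matrices is invented here for illustration. Note that np.linalg.eig fixes each eigenvector only up to an overall sign, consistent with the remark below that an eigenvector is determined only to within a constant factor.

```python
import numpy as np

def hoare_matrices(n):
    """Hoare-type transition matrix Q for an n-level system relaxing in a
    0 K heat bath: column j (counting from 1) has j equal elements 1/j, so
    only down-transitions (and elastic collisions) are possible."""
    Q = np.zeros((n, n))
    for j in range(n):                 # 0-based column j -> levels 0..j populated
        Q[: j + 1, j] = 1.0 / (j + 1)
    P = Q - np.eye(n)
    return Q, P

Q, P = hoare_matrices(4)
print(Q)                               # eq 10
print(P)                               # eq 11

# Eigenvalues/eigenvectors of P (eq 12). For a triangular matrix the
# eigenvalues are simply the diagonal elements: 0, -1/2, -2/3, -3/4.
mu, phi = np.linalg.eig(P)
order = np.argsort(-mu)                # sort so that mu_1 = 0 comes first
mu, phi = mu[order], phi[:, order]
print(mu)
print(phi)                             # columns: eigenvectors, normalized to unit length
```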

The Solution

Once the eigenvalues and eigenvectors are known, the solution of the master equation eq 6 is

    C(τ) = Σ_{k=1}^{n} a_k φ_k exp(μ_k τ)    (13)

where the a_k are coefficients that depend on the initial conditions and the φ_k and μ_k are the eigenvectors and eigenvalues determined in eq 12. To show that eq 13 is indeed the solution, let us differentiate eq 13 with respect to time τ:

    dC/dτ = Σ_{k=1}^{n} a_k μ_k φ_k exp(μ_k τ)    (14)

but from eq 12 μ_k φ_k is equal to P φ_k; thus

    dC/dτ = P Σ_{k=1}^{n} a_k φ_k exp(μ_k τ) = PC    (15)

which is just eq 6, so eq 13 does indeed satisfy the master equation.

³ Montroll, E. W.; Shuler, K. E. J. Chem. Phys. 1957, 28, 454; Brau, C. A.; Keck, J. D.; Carrier, D. F. Phys. Fluids 1988, 9, 1885; Cooper, R. D.; Hoare, M. R. Chem. Phys. Lett. 1977, 12, 123; Forst, W.; Penner, A. P. J. Chem. Phys. 1980, 72, 1435; Forst, W. J. Chem. Phys. 1984, 80, 2504.
⁴ Hoare, M. R. J. Chem. Phys. 1982, 38, 1630.
⁵ For further details of the analogy between the wave equation of quantum mechanics and the master equation, see Bartis, J. T.; Widom, B. J. Chem. Phys. 1974, 60, 3474.

In order to find the eigenvalues, we follow the general procedure, which is to rewrite eq 12 for every k in the form

    (P − μ_k I) φ_k = 0    (16)

which represents a system of homogeneous equations. It has a nontrivial solution only if the determinant of (P − μ_k I) is zero. For P of eq 11 this means

    | −μ_k      1/2          1/3          1/4        |
    |   0     −1/2 − μ_k     1/3          1/4        |  =  0    (17)
    |   0        0         −2/3 − μ_k     1/4        |
    |   0        0            0         −3/4 − μ_k   |

Expanding the determinant along the minors of its first column, eq 17 is easily seen to reduce to μ_k(μ_k + 1/2)(μ_k + 2/3)(μ_k + 3/4) = 0. The solution is thus μ_k = 0, −1/2, −2/3, −3/4 for k = 1, 2, 3, 4, respectively; i.e., the eigenvalues of P are in fact its diagonal elements. This is true of any triangular matrix, as can be readily verified. Note that except for μ_1, all the eigenvalues are negative; this result is quite general for any stochastic matrix and is necessary for C in eq 13 to remain finite as τ → ∞. The result that the smallest eigenvalue in absolute terms (μ_1) is zero is again quite general for a nonreactive system: it is another consequence of the conservation of the total number of particles.

To obtain the corresponding eigenvectors, we have to solve the system of equations of eq 16 for each eigenvalue in turn. Suppose we start with μ_4 = −3/4; the explicit form of eq 16 is then

    (a)  (3/4)x_1 + (1/2)x_2 + (1/3)x_3 + (1/4)x_4 = 0
    (b)              (1/4)x_2 + (1/3)x_3 + (1/4)x_4 = 0
    (c)                        (1/12)x_3 + (1/4)x_4 = 0
    (d)                                           0 = 0

where the x_i (i = 1, ..., 4) are the elements of the (as yet) unknown eigenvector belonging to eigenvalue k = 4. Line (d) offers no information, since it is obviously satisfied by any values of x_i. However, from line (c) we have x_3 = −3x_4; then from (b) we get x_2 = 3x_4, and finally line (a) yields x_1 = −x_4. Thus all elements of this eigenvector can be expressed in terms of x_4, for which we can choose any value we wish; hence the eigenvector can be specified only to within a constant factor. If we choose for simplicity x_4 = 1, then the elements of the eigenvector are −1, 3, −3, 1. We now impose the condition that the eigenvector be normalized such that the sum of the squares of its elements is equal to 1. (The same normalization is used in quantum mechanics to normalize wave functions.) Let the normalization constant of our eigenvector be a; we then require that a²(1 + 9 + 9 + 1) = 1, or a = 1/√20. Thus the elements of the normalized eigenvector are −1/√20, 3/√20, −3/√20, 1/√20. These elements, which we identify by the label i, each correspond to one of the n energy levels of the system (n = 4 for matrix P of eq 11; thus in this case i = 1, 2, 3, 4). Proceeding in the same fashion with the eigenvectors of the remaining eigenvalues, we find quite easily that their elements, normalized in the sense mentioned above, are as shown in Table 1. The student can verify that the eigenvectors of Table 1 satisfy eq 12. To identify an eigenvector fully, it is clear from Table 1 that we need two labels: k, which identifies the corresponding eigenvalue, and i, which identifies the element of a particular eigenvector.

The Initial and Final Distributions

The last element of the solution still missing at this stage is the coefficients a_k. We require that at τ = 0 the expansion on the RHS of eq 13 be equal to the initial distribution vector C(τ = 0), with elements c_i(0) (i = 1, ..., n):

    Σ_{k=1}^{n} a_k φ_k = C(τ = 0)    (18)

In the general case eq 18 is a system of n equations with n unknowns a_k. Suppose, for simplicity, that in the present case of the 4 × 4 matrix P of eq 11 the initial conditions correspond to delta-function excitation in the topmost level of the system; thus for energy levels i = 1, 2, 3, 4 we have, respectively, c_i(0) = 0 0 0 1. Therefore the explicit form of eq 18, using the eigenvectors of Table 1, is

    (i)    a_1 − a_2/√2 + a_3/√6 − a_4/√20 = 0
    (ii)         a_2/√2 − 2a_3/√6 + 3a_4/√20 = 0
    (iii)                  a_3/√6 − 3a_4/√20 = 0
    (iv)                             a_4/√20 = 1

From (iv) we have immediately a_4 = √20, then from (iii) a_3 = √54, then from (ii) a_2 = √18, and finally from (i) a_1 = 1. When these constants are combined with the eigenvectors φ_k of Table 1, the resultant matrix Φ = a_k φ_k has a particularly simple form, as shown in Table 2.

Table 1. Eigenvalues μ_k and Eigenvectors φ_k of Matrix P (Eq 11)

                             energy level, i
    k      μ_k         1         2         3         4
    1       0          1         0         0         0
    2     −1/2      −1/√2      1/√2        0         0
    3     −2/3       1/√6     −2/√6      1/√6        0
    4     −3/4      −1/√20     3/√20    −3/√20     1/√20


Table 2. Numerical Values of Matrix Φ = a_k φ_k for P of Eq 11 and Delta-Function Excitation in Level i = 4

                       k
    i        1      2      3      4
    1        1     −3      3     −1
    2        0      3     −6      3
    3        0      0      3     −3
    4        0      0      0      1

Note that the first column in Table 2, i.e., a_1φ_1 = 1 0 0 0, which is the first term in the expansion for C (eq 13), means that all particles are in energy level i = 1 and all other levels are empty. This is the population distribution that corresponds to 0 K, which is of course the temperature of our heat bath; hence a_1φ_1 is the equilibrium distribution that the relaxing system described by matrix P of eq 11 will eventually reach. This is again a quite general result for any stochastic matrix P, in the sense that if P represents relaxation in a heat bath at temperature T, and assuming we adopt the convention that the zero eigenvalue and its associated eigenvector are always assigned k = 1, then the term a_1φ_1 corresponds to the equilibrium distribution at T. Consequently, the first term of the expansion in eq 13 in a nonreactive system is a time-independent constant that represents the equilibrium distribution corresponding to the temperature of the heat bath, which instead of a_1φ_1 we will now call C_eq. Equation 13 can therefore also be written quite generally in the form

    C(τ) = C_eq + Σ_{k=2}^{n} a_k φ_k exp(μ_k τ)    (19)

Since all eigenvalues μ_k (k ≥ 2) are negative, eq 19 shows clearly that as τ → ∞, C decays to C_eq. All the elements of the solution are now available, so we need only specify the time τ in order to calculate the time evolution of the system.
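Continuing the earlier Python sketch (reusing mu, phi and the hypothetical hoare_matrices helper), the coefficients a_k of eq 18 and the matrix Φ of Table 2 follow from a single linear solve; the comparison with the matrix exponential at the end, assuming SciPy is available, is an independent check of eqs 13 and 19.

```python
import numpy as np
from scipy.linalg import expm          # assuming SciPy is available

Q, P = hoare_matrices(4)               # hypothetical helper from the earlier sketch
mu, phi = np.linalg.eig(P)
order = np.argsort(-mu)
mu, phi = mu[order], phi[:, order]

C0 = np.array([0.0, 0.0, 0.0, 1.0])    # delta-function excitation in level 4
a = np.linalg.solve(phi, C0)           # eq 18:  phi @ a = C(0)
print(a**2)                            # ~ (1, 18, 54, 20), as in the text
                                       # (signs of a_k follow numpy's sign
                                       #  convention for the eigenvectors)

Phi = phi * a                          # column k is a_k * phi_k  (Table 2)
print(np.round(Phi, 10))
print(Phi[:, 0])                       # ~ (1, 0, 0, 0) = C_eq at 0 K (eq 19)

# Check eq 13: the expansion agrees with the exact propagator exp(P*tau)
tau = 2.5
print(np.allclose(phi @ (a * np.exp(mu * tau)), expm(P * tau) @ C0))   # True
```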

The Time Dependence

For the purpose of the actual calculations it is convenient to first construct the matrix T = exp(μ_k τ), the columns of which represent exp(μ_k τ) for a given τ. For τ = 0 1 2 3 4 we have, with the eigenvalues of Table 1,

    T = | 1      1        1        1        1      |
        | 1   0.6065   0.3679   0.2231   0.1353    |    (20)
        | 1   0.5134   0.2636   0.1353   0.0695    |
        | 1   0.4724   0.2231   0.1054   0.0498    |

The first column represents the elements of T for τ = 0, the second for τ = 1, and so on. The first row involves μ_1, which is zero; hence the first row represents exp(μ_1τ), which is therefore equal to 1 for all τ. The population C then follows by simple matrix multiplication: C = ΦT. For T of eq 20 and Φ of Table 2, we have

    C = | 0   0.2483   0.4640   0.6312   0.7527 |
        | 0   0.1562   0.1914   0.1736   0.1385 |    (21)
        | 0   0.1232   0.1214   0.0898   0.0591 |
        | 1   0.4724   0.2231   0.1054   0.0498 |

The first column gives the population at τ = 0; note that the initial delta-function excitation in level i = 4 is correctly rendered. Each row of matrix C gives the time evolution of the population in the corresponding energy level. This is shown graphically in Figure 1 for a larger time interval, running from τ = 0 to τ = 10. Observe how the population distribution relaxes towards the 0 K equilibrium distribution: the population of level 1 approaches unity, while the populations of the other levels slowly drop to zero. Note also that every column in eq 21 sums to unity, since the total number of particles is conserved.

With the determination of the population matrix C of eq 21, the relaxation problem is in principle solved. Any other time-dependent average property of the relaxing system can be obtained by suitable averaging over the population distribution. This is because C represents the fractional population; it is thus the probability of occupation of a given energy level. We have just seen that this probability is properly normalized for all τ. As an example of averaging, we shall use C of eq 21 to calculate the average energy of the relaxing system. To do this, we first have to assign energies to the energy levels of the system. Suppose the energies are, in cm⁻¹, E = 0 1580 3160 4740 (these are the first four energy levels of O₂). The average energy of A as a function of time, ⟨E⟩, is the sum of the weighted energies in all energy levels. The weighted energy in a given level at time τ is given by the energy of that level multiplied by the fractional population of that level at τ. If E is formed into a one-row, four-column matrix E, which is then matrix-multiplied by matrix C, we accomplish in one operation the two steps that are necessary to obtain ⟨E⟩:

    ⟨E⟩ = EC    (22)
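The same sketch, extended to eqs 20-22 (again reusing mu and Phi from above; the times and level energies are those quoted in the text):

```python
import numpy as np

tau = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
T = np.exp(np.outer(mu, tau))      # T[k, m] = exp(mu_k * tau_m)   (eq 20)
C = Phi @ T                        # population matrix              (eq 21)
print(np.round(C, 4))
print(C.sum(axis=0))               # every column sums to 1

E = np.array([0.0, 1580.0, 3160.0, 4740.0])   # level energies in cm^-1 (O2)
print(E @ C)                       # <E> = EC (eq 22): 4740, 2875, 1743.7, ...
```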


Using C of eq 21, the result is ⟨E⟩ = 4740 2875 1743.7 1057.6 641.49 (all in cm⁻¹) for τ = 0 1 2 3 4. Figure 2 shows a plot of ⟨E⟩ vs. τ for a longer time interval (0 ≤ τ ≤ 10). As expected, ⟨E⟩ decays and approaches zero, since this is the ground-state energy that the system must reach when equilibrium between the 0 K heat bath and the system is attained. Figure 3 shows a semilogarithmic plot of ⟨E⟩ and of the population in level 4 (the top level), from which it may be seen that, since the plots are linear, the two decay exponentially in time. (It is obvious from Figure 1 that this is not the case for the populations of the other levels.) Note that the slope of the straight line for ⟨E⟩ is −1/2, which corresponds to the eigenvalue μ_2, and the slope of the straight line for the population of level 4 is −3/4, which corresponds to the eigenvalue μ_4 (cf. Table 1). By varying the size of the transition matrix Q, the student can easily verify that it is always ⟨E⟩ and only the population of the top level that decay exponentially: ⟨E⟩ with slope always equal to μ_2, and the population of level n with slope equal to μ_n, for Q of size n × n. These results are by no means general; they reflect the peculiar properties of the Hoare-type matrix.

Figure 1. Time dependence of the fractional population (cf. C of eq 21) vs. dimensionless time. Numbers identify the individual energy levels of the four-level system discussed in the text.

Figure 2. Time dependence of the average energy ⟨E⟩ of the same four-level system as in Figure 1, vs. dimensionless time. Initial excitation is a delta function in level 4 (4740 cm⁻¹).

Figure 3. Semilogarithmic plot of the time dependence of X = ⟨E⟩ (line a) and of X = the population of level 4 (line b), both normalized to unity at τ = 0, vs. dimensionless time.
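The exponential decays discussed above can be checked numerically as a continuation of the earlier Python sketch (reusing mu and Phi); np.polyfit fits straight lines to the logarithms, and the slopes should come out at μ_2 = −1/2 and μ_4 = −3/4.

```python
import numpy as np

tau = np.linspace(0.0, 10.0, 101)
T = np.exp(np.outer(mu, tau))
C = Phi @ T
E = np.array([0.0, 1580.0, 3160.0, 4740.0])
Eav = E @ C

slope_E = np.polyfit(tau, np.log(Eav), 1)[0]       # ~ -1/2 = mu_2
slope_top = np.polyfit(tau, np.log(C[-1]), 1)[0]   # ~ -3/4 = mu_4
print(slope_E, slope_top)
```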

The Computer Program

Figure 4 shows a simple program called EXAMPLE that generates all the results shown in the text. It is written in APL, a language that makes the manipulation of matrices particularly easy (e.g., the symbol for matrix multiplication is +.×). The program runs on STSC's personal-computer APL software called APL*PLUS/PC, version 6 or later, and makes use of the function EIG, a matrix eigenvalue routine that is part of the EIGENVAL workspace provided with the software. Therefore EXAMPLE is best executed within that workspace. Although finding the eigenvalues and eigenvectors of the matrix used in the text is trivial, EIG will find them for any properly formed stochastic matrix. Hence with only a trivial modification EXAMPLE will work with other matrices as well.


Figure 4. APL listing of programs EXAMPLE and RST.

The function EXAMPLE takes as its right argument N, which is the size of the system. To generate the results for the 4 × 4 matrix in the text, type EXAMPLE 4. Matrix Q (eq 10) is calculated in line 2, the unit matrix I in line 4, and matrix P (eq 11) in line 5. Eigenvalues and eigenvectors of P are generated by the function EIG in line 6; they are consigned to matrix Z, which has the same form as Table 1: the first row contains the eigenvalues (displayed in line 6) and the remaining rows are the eigenvectors, which are displayed in line 7. Function RST resets small numbers (10⁻¹⁶ or less) to zero. The elements of the eigenvectors are squared and summed in line 8 to verify their normalization. In line 9 the system prompts for the initial population distribution. This distribution can be anything, but must have N elements; type 0 0 0 1 for the delta-function distribution discussed in the text. The coefficients a_k are calculated in line 10 and their squares displayed to facilitate comparison with the results obtained in the text. (Note that the laborious calculation described in the text is accomplished in APL by a single statement involving the domino symbol.) Using these coefficients, matrix Φ (Table 2) is assembled in line 11. In line 12 the system prompts for the (dimensionless) time vector, which can have any number of elements; type 0 1 2 3 4 to generate the results shown in the text. The corresponding matrix T (eq 20) is calculated and displayed in line 13, and the population matrix C (eq 21) in line 14. Line 15 gives the sum of every column in C to show that the total fractional population is 1 independent of time. Finally, in line 16 the system prompts for the energy levels of the system. Again, these can be anything, but must have N terms; type 0 1580 3160 4740 to reproduce the results shown in the text. The average energy ⟨E⟩ (eq 22) is calculated and displayed in line 17. By varying the responses to the system prompts, the reader can quickly see how the nature of the solution changes. Solutions can be obtained for any properly formed matrix Q inside the workspace by skipping lines 2 and 3 and directing execution to line 4.
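For readers without access to APL*PLUS/PC, a rough Python analogue of the steps that EXAMPLE performs is sketched below. It is not the published listing; the prompts and names are invented for illustration, and it accepts the same inputs quoted in the text (0 0 0 1, then 0 1 2 3 4, then 0 1580 3160 4740).

```python
import numpy as np

def example(n):
    """Rough Python analogue of the APL program EXAMPLE (a sketch, not the
    published listing): Hoare matrix, eigenvalue expansion, populations, <E>."""
    # Hoare matrix Q (eq 10) and P = Q - I (eq 11)
    Q = np.zeros((n, n))
    for j in range(n):
        Q[: j + 1, j] = 1.0 / (j + 1)
    P = Q - np.eye(n)

    # Eigenvalues and eigenvectors of P (eq 12), with mu_1 = 0 first
    mu, phi = np.linalg.eig(P)
    order = np.argsort(-mu)
    mu, phi = mu[order], phi[:, order]

    # Initial distribution, coefficients a_k (eq 18), matrix Phi (Table 2)
    c0 = np.array([float(x) for x in input("initial populations: ").split()])
    a = np.linalg.solve(phi, c0)
    Phi = phi * a

    # Time vector, T (eq 20), populations C (eq 21)
    tau = np.array([float(x) for x in input("dimensionless times: ").split()])
    T = np.exp(np.outer(mu, tau))
    C = Phi @ T
    print("C =\n", np.round(C, 4))
    print("column sums:", np.round(C.sum(axis=0), 6))

    # Energies and average energy <E> (eq 22)
    E = np.array([float(x) for x in input("energy levels: ").split()])
    print("<E> =", E @ C)

if __name__ == "__main__":
    example(4)   # type "0 0 0 1", "0 1 2 3 4", "0 1580 3160 4740"
```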