The Quasi-Steady-State Approximation: Numerical Validation

Research: Science and Education

Richard A. B. Bond, Bice S. Martincigh, and Janusz R. Mika
University of Natal, Durban 4041, Republic of South Africa

Reuben H. Simoyi
West Virginia University, Morgantown, WV 26506-6045

The quasi-steady-state approximation (QSSA) of chemical kinetics is an essential tool in practical and theoretical chemistry. It can produce quite remarkable simplification of differential equations describing the time dependence of chemical reactions, and often has the effect of reducing mathematical needs to the level of elementary calculus and algebra. This technique is so useful in producing approximate solutions of kinetic equations that it appears in many undergraduate chemistry texts, explicitly or implicitly. Unfortunately, misuse of the QSSA by the unwary can result in gross errors (1). Therefore, validity testing should accompany its application to practical work. This can be done by singular perturbation theory, in which the QSSA is the first step on a ladder of increasingly accurate approximations. Singular perturbation theory merges the initial sharp transient and the later steady or "outer" part of the solution by means of mathematical manipulations (2). QSSA error is then estimated as the difference between the QSSA and the next higher approximation. If the error is small, the QSSA is valid.

Singular perturbation theory has been developed for advanced application in several fields, notably for the boundary layer theory of fluid dynamics. This has involved abstract mathematics accessible only to specialists. Despite valiant attempts to make singular perturbation theory more user-friendly, its standard form is too complicated for widespread use. Introductory discussions of chemical kinetics are naturally restricted to small reaction systems, the dynamics of which are described by small sets of ordinary differential equations. These are soluble numerically, but sometimes only if the right numerical methods are used. This allows complicated perturbation theory to be replaced by computer simulation when assessing the accuracy of the QSSA: compare accurate numerical solutions with the corresponding QSSA solutions.
This approach does not seem to have been reported on in any detail, and it is the aim of this note to show that, with some ingenuity and the right mathematical software, one may extract from a numerical model insights into the relation between the system's parameters and the QSSA error, in reactions involving only a few species. One likely reason numerical validation has not received much attention in the literature is that the QSSA is often applied to systems that can be classified as (numerically) "stiff". A stiff system exhibits very sharp transient behavior when disturbed from a smooth steady development in time. Such systems are not integrable by well-known classical numerical algorithms such as the explicit fourth-order Runge–Kutta scheme. Intricate numerical procedures are needed to cope with stiffness. This can make the modeling of chemical kinetics appear rather daunting. Indeed, in the past, the combination of the difficulties of perturbation

theory and stiffness caused some research workers to avoid the QSSA in favor of reliance on the numerical solution (3). There are good reasons to favor the policy that numerical modeling of chemical kinetic systems, even if stiff, should be done in parallel with steady-state analysis (4). This provides a confidence-building check on results produced separately by each method. Extra insight into the actual reaction system comes from viewing the problem from both the QSSA and the numerical standpoints. A good example is the Oregonator model for the Belousov–Zhabotinskii oscillating reaction (4–6). This policy calls for the ability to deal with stiff systems. Users of stiff solvers should have some understanding of what stiffness means. We include in this note a brief discussion of this topic because it is seldom even mentioned in undergraduate courses in engineering and science. In fact, it has been more than forty years since Curtiss and Hirschfelder pointed out its significance in chemistry and technology (7).

The need for a mathematically straightforward QSSA validity test has stimulated a sequence of papers (8–10) seeking a clear way of indicating whether the QSSA is valid in specific reaction systems, especially for teaching purposes. These treatments lack generality and are mathematically quite complicated from a student viewpoint. In practical terms, a methodical numerical investigation of error seems to provide a simpler way to obtain at least as much information as in these studies, but with the significant advantage of being more widely applicable.

Qualitative Analysis of the QSSA

The QSSA is concerned with time-dependent differential equation systems in which the variables are split into a so-called "fast" set, denoted by (the vector) y, and a "slow" set, called z. The differential equations are correspondingly split and written as

fast:

dy/dt = f(y, z)   (1)

slow:

dz/dt = g(y, z)   (2)

in which the time of interest is from initial time t = 0 until final time t = tf. Initial values of y and z are known and are denoted as y(0) and z(0). It is assumed that for practical purposes the "fast" variable set y has a negligible rate of change; then f(ȳ, z) = 0 is solved to produce an approximating function ȳ = φ0(z), which is substituted back in eq 2 to produce the differential

Journal of Chemical Education • Vol. 75 No. 9 September 1998 • JChemEd.chem.wisc.edu


form of the QSSA:

dz̄/dt = g(φ0(z̄), z̄)   (3)

in which the initial value for z̄ is z(0) (the actual initial value of z). Equation 3 is integrated to obtain z̄, which is the QSSA estimate of z. The QSSA value for the fast variable set is ȳ = φ0(z̄). When the QSSA is valid, it ignores a short-lived transient, during which the slow variable z is effectively constant as y moves from y(0) towards a value close to φ0(z(0)). We wish to show that important aspects of the QSSA can be explained in plain language, after which detailed mathematical error analysis arises in a natural way.

Linear Example

As a typical case, consider a simple reaction chain

A ⇌ B → C

in which the reversible first step has forward and reverse rate constants k1 and k−1 and the second step has rate constant k2, involving species A, B, and C. Let y be the concentration of intermediate B, z the concentration of A, and p the concentration of product C. Then the kinetic equations are, by the law of mass action:

dy/dt = −(k−1 + k2) y + k1 z   (4)

dz/dt = k−1 y − k1 z   (5)

dp/dt = k2 y   (6)

Suppose the time of interest is from t = 0 to tf = 4. Also, let y = 0 and let z = 1 at t = 0. The precise numerical solution for the case k1 = 1, k−1 = 10, and k2 = 10 is drawn in Figure 1. This figure shows that the slow variable z changes smoothly, whereas y rises sharply at first before it also goes into a phase of slow change. We call y a "fast" variable because of its rapid change during the initial settling-down phase, called the "inner solution" or transient. This is followed by a period of smooth change in all the variables, this phase being called the "outer" or "steady" solution.

Figure 1. Solution of the linear example, when k1 = 1, k−1 = k2 = 10. Curve (A) shows the slow variable as a function of time t. Curve (B) shows the fast variable magnified by a factor of 10.

A system that is suitable for the QSSA and which is described by eqs 1 and 2 shows a specific reaction to perturbation. Suppose this system is in the outer solution initially and then a fast variable is slightly perturbed impulsively, after which the system is allowed to again run freely. The fast variables return quickly towards the outer solution, and during this recovery time, the slow variables exhibit little disturbance. In effect, the QSSA will work if a feedback mechanism forces the fast variables to "follow" the slow variables with a response time that is short compared to the time that characterizes appreciable change in the slow variables. We can rewrite eqs 4 and 5 to make the negative feedback behavior apparent at the settings used in Figure 1, by scaling the right side of eq 4:

dy/dt = −20(y − 0.05z)

dz/dt = −z + 10y   (7)

where initially y = 0 and z = 1. At the start, y − 0.05z is negative, and so y increases from zero, at a rate that falls off as y approaches the value 0.05z. To begin with, dy/dt = +1 (and dz/dt is negative); therefore y should be close to 0.05z within about 0.1 s, on the assumption that z changes slowly. Certainly during the first 0.1 s, z does not drop by more than 0.1 (10%), which is to say that during the "response time" of change in y, z is indeed roughly constant, allowing y to approach close to 0.05z during a short transient phase.

Now consider applying the QSSA to eqs 4 and 5. First rearrange eq 4 and write

y = k1 z/(k−1 + k2) − [1/(k−1 + k2)] dy/dt

and then approximate y as a function of z by dropping the dy/dt term:

y ≈ φ0(z) = k1 z/(k−1 + k2)   (8)

when it is assumed that dy/dt is negligibly small. In eq 5 replace y by φ0(z), thus producing a single differential equation to be solved for the QSSA, whose solution is denoted now as ȳ and z̄:

dz̄/dt = −k1 z̄ + k−1 k1 z̄/(k−1 + k2) = −[k1 k2/(k−1 + k2)] z̄   (9)

subject to the initial value z̄(0) = z(0) = 1. Integration of this equation produces the QSSA approximation to z. The corresponding approximation to the fast variable is ȳ = φ0(z̄). These approximations only apply to the smooth outer solution; they ignore the initial transient or inner part of the


solution. Integration of eq 9 is a direct consequence of the definition of the exponential function. The integrated QSSA expressions are

z̄ = e^−αt,  where  α = k1 k2/(k−1 + k2)   (10)

and

ȳ = [k1/(k−1 + k2)] e^−αt = (α/k2) e^−αt   (11)
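These closed forms invite a direct numerical check. The following Python sketch (this note's own computations used Matlab; scipy's BDF method is a Gear-type stiff solver of the kind discussed later) integrates eqs 4 and 5 at the Figure 1 settings and measures the largest departure of z from eq 10:

```python
import numpy as np
from scipy.integrate import solve_ivp

k1, km1, k2 = 1.0, 10.0, 10.0          # settings of Figure 1
alpha = k1 * k2 / (km1 + k2)           # eq 10: alpha = 0.5

def rhs(t, u):
    y, z = u
    return [-(km1 + k2) * y + k1 * z,  # eq 4
            km1 * y - k1 * z]          # eq 5

t = np.linspace(0.0, 4.0, 401)
sol = solve_ivp(rhs, (0.0, 4.0), [0.0, 1.0], method="BDF",
                t_eval=t, rtol=1e-8, atol=1e-10)
z_qssa = np.exp(-alpha * t)            # eq 10
err = float(np.max(np.abs(sol.y[1] - z_qssa)))
print(f"max |z - z_qssa| = {err:.4f}")
```

The printed maximum deviation is a small fraction of z(0), of the order of a couple of per cent at these rate constants.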

The exact solution of eqs 4–6 can be derived straightforwardly (10, 11), but this not only involves more complicated manipulation, it also produces an expression which is less informative than the QSSA. For example, from eq 10, assuming the QSSA to be adequately accurate, it is clear that when k2 is much larger than k−1, the rate of decay of z is approximately k1. Then the first step in the reaction system is the "rate-controlling step" (12). This is in no way obvious from the exact solution.

Numerical Stiffness

If a system of differential equations has a solution that can be identified as consisting of a steady part and a transient, and if the transient dies out on a time scale very small in comparison with the time taken by the steady part to change significantly, then the system may be called stiff. There is no absolute definition of stiffness, but the "acid test" is that the system is stiff if it can only be integrated numerically by a so-called implicit method. Only in recent years have efficient and convenient packages of implicit Gear type (4, 13) become readily available for small computers, because implicit solvers require large amounts of computer memory. This software allows one to obtain with ease accurate numerical solutions of chemical kinetics equations despite stiffness, thanks to painstakingly careful design by numerical mathematicians (13, 14). The numerical examples used in this note were prepared with the well-known Matlab package, which incorporates a reliable and easy-to-use Gear solver. Since Matlab and similar packages can be made widely available on site license, the handling of stiffness should now be regarded as a routine task for both students and professional scientists, even if the topic is still considered rather esoteric.

Much is revealed by examining a typical case, as follows. Consider not a single equation but instead a family of problems

ε dy/dt = −(y − t)   or   dy/dt = −(1/ε)(y − t)   (12)

y(0) = 1,   tf = 5

in which ε is a parameter that is constant in the differential equation but which, as it takes on any positive value, specifies one of a family of problems. The exact solution can be found by methods taught in a first or second course in calculus. The solution consists of the homogeneous solution, yH, which is a solution of ε dyH/dt + yH = 0, plus any other solution yP of eq 12 (called the particular part of the solution or a "particular integral"). Then the exact solution is y = yH + yP, where

yP = t − ε and yH = (1 + ε)e^−t/ε, which is a smooth solution plus a transient that is sharp if ε is small (e.g., if ε = 1/20). The size of the part yH is adjusted to give the correct initial value. The constant ε clearly serves as the characteristic time constant of the exponentially decaying transient yH.

The simplest but fundamental method to solve this equation numerically is the explicit Euler method on a uniformly spaced time grid; it uses only values of y at time points spaced apart by step size h. The usual notation is tn = nh, n = 0, 1, 2, …, N, where Nh = tf and yn = approximate y at tn. In the methods being considered, the solution advances one step at a time, and they are sometimes called "marching algorithms". In Euler's method, in one forward step from tn to tn+1 the right-hand side is evaluated at tn and dy/dt is represented by (yn+1 − yn)/h, which gives the stepping formula for eq 12:

yn+1 − yn = −(h/ε)(yn − tn)  or  yn+1 = [1 − (h/ε)]yn + (h/ε)tn   (13)

It is easily seen that if h/ε is much bigger than 1, the oscillating numerical solution is completely unlike the real solution. For example, if ε = 1/20 and h = 1/10, then y0 = 1, y1 = −1, y2 = 1.2, y3 = −0.8, y4 = 1.4. The true solution at t = 0.4 is 0.35! Accuracy requires that h be less than ε when the Euler method is used. In practical kinetics, the equivalent to ε may be a millionth of a second (in photochemistry) in a chemical reaction lasting for hours, and then obviously the Euler method is useless. The same restriction applies to any known explicit method, in which the stepping process uses only values of y corresponding to times before tn+1. It has been shown by numerical analysts that the step size restriction of the simple Euler method carries over into more sophisticated methods such as the well-known fourth-order Runge–Kutta scheme (15). For stiff systems it is essential to work with implicit methods.
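The five values quoted above are easy to reproduce by iterating eq 13 directly; a minimal Python sketch:

```python
eps, h = 1 / 20, 1 / 10        # so h/eps = 2, violating h < eps
y = 1.0                        # y(0) = 1
ys = [y]
for n in range(4):             # four explicit Euler steps of eq 13
    y = (1 - h / eps) * y + (h / eps) * (n * h)   # tn = n*h
    ys.append(y)
print(ys)                      # approx 1, -1, 1.2, -0.8, 1.4
```

The computed sequence oscillates instead of decaying smoothly towards the true solution, which is the signature of explicit-method instability at h > ε.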
The implicit Euler method is the same as the explicit Euler method, except that the right-hand side is to be evaluated at the endpoint of the current step:

yn+1 − yn = −(h/ε)(yn+1 − tn+1)   (14)

All implicit methods involve solving algebraic equations at each step (2). For eq 14, this is done by inspection:

yn+1 = [yn + (h/ε)tn+1] / [1 + (h/ε)]   (15)
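Eq 15 can be iterated with a step far larger than ε; a Python sketch with h = 1 and ε = 1/20, so that h/ε = 20:

```python
import math

eps, h, tf = 1 / 20, 1.0, 5.0
r = h / eps                          # h/eps = 20: far beyond the explicit limit
y = 1.0                              # y(0) = 1
for n in range(int(tf / h)):         # five implicit Euler steps of eq 15
    y = (y + r * ((n + 1) * h)) / (1 + r)
exact = tf - eps + (1 + eps) * math.exp(-tf / eps)   # y = yP + yH at t = 5
print(y, exact)                      # both close to 4.95
```

After only five steps the numerical value agrees with the exact solution at t = 5 to better than one part in a thousand.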

Now, if h is given a large value, for example h = 1 when ε = 1/20, the numerical solution produced by eq 15 is quite close to the true solution and a roughly accurate solution at t = 5 is obtained after five steps, compared with about 100 steps needed by the explicit Euler method for the same accuracy. This possibility of using time steps much larger than the lifetime of the transient is valid for much more sophisticated and accurate methods, including implicit Runge–Kutta algorithms. In the implicit Euler method the transient is automatically damped out, following a geometrical explanation given in an early paper on kinetics (7). The implicit Euler method deals with stiffness in a robust way. Therefore one sees that, in principle, stiffness can be dealt with by standard numerical techniques. Professionally designed stiff solvers use variable order interpolation polynomials for fuller use of the sequence of data available from times earlier than the initial


time point of a marching step, and they incorporate automatic algebraic equation solvers (13, 16). To improve accuracy and reliability they also incorporate variable time steps and switch the order of approximation. Despite this complexity, it seems enough that users of these packages appreciate the difference between the explicit and implicit Euler methods and exercise some care in supplying the tolerances that must be set by the user (13).

Simulating the QSSA

Singular perturbation theory includes the QSSA. It deals with systems that can be written in the so-called singular perturbation form

fast:  ε dy/dt = f(y, z)   or   dy/dt = (1/ε) f(y, z)   (16)

slow:  dz/dt = g(y, z)   (17)

with given initial values of y and z. In general, y and z can be vectors, but in this paper we mainly consider scalar y and z. The constant ε is positive, with chief interest in actual problems in which ε is small. It is essential that the fast variable be subject to a feedback process that forces y to attain values that make f ( y, z) approximately zero. This is equivalent to having ∂f ( y, z)/∂y negative for values of y and z close to the QSSA solution. This can be explained heuristically as follows. Consider a solution of eq 16 when ε is very small during a short period in which z is virtually constant. Let y¯ (t) be a corresponding QSSA solution and let y(t) = y¯ (t) + p(t) be a nearby perturbed solution. Now substitute this expression into eq 16, expand by Taylor’s theorem, and discard terms involving products of the small quantity p(t). After cancelling d ¯y (t)/dt and f ( y¯ (t), z), one is left with a so-called variational equation

dp/dt ≈ (1/ε) fy(ȳ, z) p

from which it is clear that if fy is negative in the working zone, p will decay to near zero, which is to say back to the QSSA solution (as opposed to divergence from it). It is customary to assume that scaling is such that when ε = 1, the transient has a time constant of roughly the same size as the period of time during which the slow variable changes between a maximum and a neighboring minimum (or vice versa). The variable ε is called the "perturbation parameter". In chemical kinetics, it does not appear in the rate equations in an obvious way but is related to scaling of the variables and coefficients. For example, in eq 7, we can identify ε as being 1/20. As ε approaches zero, a smooth solution of eqs 16 and 17 tends to coincide with the QSSA but does not, in general, agree with the initial values of the fast variables. Singular perturbation theory is concerned with using perturbation techniques to find the outer and inner solutions separately, as power series in ε, which are matched to form an overall solution on making due allowance for the initial values. This solution takes the form of a ladder of increasingly accurate approximations, the lowest of which is the QSSA. Although singular perturbation theory tends to be cumbersome algebraically, it provides useful qualitative information on the

QSSA. For example, it shows that a good quality stiff solver can be used to simulate the QSSA numerically. It is only necessary to set ε in eq 16 to a very low artificial value before running the solver. In practice, set up the solver for the actual differential equations and then modify the program to allow multiplying the rates of change (the "right-hand side") of a suspected stiff variable set by the factor 1/ε. Do a series of runs with progressively reduced settings of ε, making sure that user-set tolerances are consistent with the short-lived transients associated with small values of ε. When the computed solution is insensitive to further decreases in ε, the numerical output corresponds to the QSSA. This is very useful in practical work, as it means that the accuracy of the QSSA can be assessed without the preliminary algebraic simplifications that are usually part of the QSSA procedure. The attractiveness of this QSSA simulation depends on using a stiff solver and on working the computer interactively, with rapid repetitive runs and easily adjustable parameters.

Consider again eqs 16 and 17, when the QSSA is a good approximation and ε is small. During the initial transient phase, when z is almost constant, ε/|fy(ȳ, z)| acts as the time constant for change in y as y approaches its QSSA value. Consequently, as ε gets smaller, the length of the inner solution gets arbitrarily small in proportion to ε. Reduction of ε has little effect on the slow variable z. Thus stiffness, as measured by the ratio of the time scale of the slow variable to the time scale of the fast variable, increases as ε decreases. Reduction of ε also has the effect of making the QSSA more accurate, when comparing systems of the form of eq 16 which differ only in the value of ε. This is rigorously established by O'Malley (2). Its plausibility is a consequence of the feedback effect which drives y towards φ0(z) and which is enhanced as ε is reduced.
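The recipe above is easy to try on the linear example of eqs 4 and 5; a Python/scipy sketch (standing in for the Matlab Gear solver used in this note), in which only the suspected fast rate is multiplied by 1/ε:

```python
import numpy as np
from scipy.integrate import solve_ivp

k1, km1, k2 = 1.0, 10.0, 10.0            # linear example settings

def rhs(t, u, eps):
    y, z = u
    return [(-(km1 + k2) * y + k1 * z) / eps,  # fast rate (eq 4), scaled by 1/eps
            km1 * y - k1 * z]                  # slow rate (eq 5), untouched

t = np.linspace(0.0, 4.0, 201)
z_runs = {}
for eps in (1.0, 1e-2, 1e-4):            # progressively reduced eps
    sol = solve_ivp(rhs, (0.0, 4.0), [k1 / (km1 + k2), 1.0], method="BDF",
                    args=(eps,), t_eval=t, rtol=1e-8, atol=1e-12)
    z_runs[eps] = sol.y[1]

# when z no longer responds to a further drop in eps, the output is the QSSA
drift = float(np.max(np.abs(z_runs[1e-2] - z_runs[1e-4])))
print(f"change in z between eps = 1e-2 and 1e-4: {drift:.2e}")
```

The drift between the last two runs is already tiny, signalling that the computed z has converged onto the QSSA solution.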
Working with the QSSA

In some cases, the QSSA solution can be expressed algebraically, after integration. Provided the QSSA is accurate, this form of solution is highly informative; but even in cases where the actual solution must still be formed by numerical integration, the differential equations of the QSSA may give useful insight into the system's behavior. For example, suppose that in the reaction system of eqs 4–6 (the linear example) the values of the rate constants are to be estimated from measurements of concentrations made mainly in the steady part of the system's history, when the measurements are subject to some random error. If the QSSA is accurate relative to the random error, it would be found that it is virtually impossible to compute all three rate constants by least squares regression. This is because the three constants are bundled together into two in the QSSA (i.e., k2 and α), and it is the QSSA that is effectively observed by measurements made in the outer solution. The parameters of the QSSA are what can be reliably estimated by regression in such situations (17). The notion of bundled parameters suggests combining numerical integration with some nondimensionalization of the differential equations.

Linear Example: Error Function

In eqs 4, 5, and 9, divide across by k1 and replace the independent variable t by s = k1t. Also, let r1 = k−1/k1 and r2 = k2/k1. Then these equations are seen to have solutions that are


functions of s, r1, r2, and the initial values z(0) and y(0), after integration of the partly nondimensionalized exact equations

dy/ds = −(r1 + r2) y + z   (18)

dz/ds = r1 y − z   (19)

dp/ds = r2 y   (20)

and after integration of the QSSA equation

dz̄/ds = −[r2/(r1 + r2)] z̄   (21)

Both the exact and QSSA solutions for z go to zero as time becomes large. Hence for large enough end times, tf, the maximum difference between the two solutions depends only on the two ratios r1 and r2 apart from the initial values. Using the Matlab facilities, this maximum magnitude of error was computed over a grid of values of r1 and r2 and the results are represented as a surface in Figure 2. The initial value used for both z and ¯z was 1. The initial value for y in eqs 18 and 19 was that of eq 8: y(0) = z(0)/( r1 + r2 ). Hence the computed solutions are all within the smooth outer solution zone, where transients become negligible. Note too that because of linearity, in both cases the solutions are proportional to the initial value of z, and therefore the meaning of Figure 2 is not restricted by the value of z(0) used in its preparation. Figure 2 shows that by excluding the initial transient phase, the QSSA is accurate to a few per cent for a large range of rate constants. If parameter estimation uses only data from this zone of validity, it is seen that the least squares estimation indeterminacy mentioned above may occur even when the rate constants are of the same order of magnitude (as we have verified with experimental data in this particular mechanism and similar small reaction systems).
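A coarse version of the computation behind Figure 2 can be sketched as follows (Python/scipy here, rather than the Matlab used for the figure, and a much smaller grid of r1, r2 values):

```python
import numpy as np
from scipy.integrate import solve_ivp

def max_qssa_error(r1, r2, sf=40.0):
    """Max |z - z_qssa| for the nondimensional eqs 18, 19, and 21."""
    def rhs(s, u):
        y, z = u
        return [-(r1 + r2) * y + z,  # eq 18
                r1 * y - z]          # eq 19
    s = np.linspace(0.0, sf, 800)
    y0 = 1.0 / (r1 + r2)             # start on the outer solution (eq 8 form)
    sol = solve_ivp(rhs, (0.0, sf), [y0, 1.0], method="BDF",
                    t_eval=s, rtol=1e-9, atol=1e-12)
    z_qssa = np.exp(-r2 * s / (r1 + r2))   # integral of eq 21
    return float(np.max(np.abs(sol.y[1] - z_qssa)))

grid = [[max_qssa_error(r1, r2) for r2 in (1.0, 5.0, 10.0)]
        for r1 in (1.0, 5.0, 10.0)]
print(np.round(100 * np.array(grid), 2))   # error as a percentage of z(0)
```

The largest entries occur near r1 = r2 = 1 and remain below roughly ten per cent, in keeping with the surface of Figure 2.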

In recent treatments of the QSSA applied to eqs 4–6 (9, 10), it was suggested that a validity criterion for the QSSA is that k−1 + k2 should be much larger than k1. Inspection of eqs 4 and 5 suggests that the system is then stiff, because there is sharp negative feedback forcing y towards its QSSA value. One might conclude that stiffness is necessary for the QSSA to be valid (4). In fact, it is seen in Figure 2 that the QSSA is quite accurate even where k−1 + k2 is not much different from k1: for these values, the system is not stiff. This can be established by eigenvalue analysis, with stiffness measured by the ratio of the eigenvalues. More intuitively for this system, one only expects stiffness when there is a very pronounced negative feedback, as is the case when k−1 + k2 is much bigger than k1. In practical terms, nonstiffness is verified beyond all doubt for the complete range of parameters used in preparing Figure 2 by the fact that for this range the differential equations can be handled perfectly well by an explicit Runge–Kutta method.

Figure 2. Error of the linear example. The surface represents the maximum magnitude of the difference between the exact and the QSSA solutions for the slow variable, z. Initial values are y = k1/(k−1 + k2) and z = 1.

Nonlinear Example

The linear example of eqs 4–6 that we have analyzed so far is of course very favorable to the numerical investigation of error. In other cases, some mathematical manipulation may be needed to get full advantage from a sequence of numerical error estimates. Consider a particular chemical reaction scheme from the educational literature (1, 18), in which ozone, atomic oxygen, and oxygen react:

O3 ⇌ O2 + O   (rate constants k1, k−1)

O3 + O → 2O2   (rate constant k2)

Let x, y, and z be respectively the concentrations of O3, O, and O2; then from the law of mass action, the following differential equations describe the development of the reaction:

dx/dt = −k1 x + k−1 zy − k2 xy   (22)

dy/dt = k1 x − k−1 zy − k2 xy   (23)

dz/dt = k1 x − k−1 zy + 2k2 xy   (24)

in which k1, k−1, and k2 are constants. The initial conditions are x(0) = 1, y(0) = 0, and z(0) = 0. Suppose that y is the fast variable and that x and z are slow. The QSSA formula for the fast variable is

ȳ = k1 x/(k−1 z + k2 x) = φ0(x, z)   (25)

By substituting this approximation for y into eqs 22 and 24 one arrives at the differential equations for the QSSA:

dx̄/dt = −2k1 k2 x̄^2/(k−1 z̄ + k2 x̄)   (26)

dz̄/dt = 3k1 k2 x̄^2/(k−1 z̄ + k2 x̄)   (27)
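For one illustrative choice of rate constants (r1 = r2 = 5; these particular values are not singled out in the text), the full model of eqs 22–24 and the QSSA model of eqs 26 and 27 can be integrated side by side:

```python
import numpy as np
from scipy.integrate import solve_ivp

k1, km1, k2 = 1.0, 5.0, 5.0               # illustrative values, r1 = r2 = 5

def full(t, u):                           # eqs 22-24
    x, y, z = u
    return [-k1 * x + km1 * z * y - k2 * x * y,
            k1 * x - km1 * z * y - k2 * x * y,
            k1 * x - km1 * z * y + 2 * k2 * x * y]

def qssa(t, u):                           # eqs 26, 27
    x, z = u
    a = k1 * k2 * x**2 / (km1 * z + k2 * x)
    return [-2 * a, 3 * a]

t = np.linspace(0.0, 3.0, 300)
y0 = k1 / k2                              # eq 25 at x = 1, z = 0, as in Figure 3
f = solve_ivp(full, (0.0, 3.0), [1.0, y0, 0.0], method="BDF",
              t_eval=t, rtol=1e-9, atol=1e-12)
q = solve_ivp(qssa, (0.0, 3.0), [1.0, 0.0], method="BDF",
              t_eval=t, rtol=1e-9, atol=1e-12)
err_x = float(np.max(np.abs(f.y[0] - q.y[0])))
print(f"max |x - x_qssa| = {err_x:.3f}")  # one point on the Figure 3 surface
```

Repeating the run over a grid of rate-constant ratios generates the error surface of Figure 3.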


with initial data x¯ (0) = 1 and ¯z (0) = 0. We also set ¯y (0) = k1/k2, from eq 25. Note that 3x¯ + 2z¯ is constant according to the QSSA, and this actually allows one to integrate eq 26 analytically. More usefully, eqs 26 and 27 give qualitative information on how the reaction progresses in time. For example, if k 2x¯ >> k ᎑1¯z, eqs 26 and 27 simplify to

dx̄/dt ≈ −2k1 x̄

dz̄/dt ≈ 3k1 x̄

and ȳ ≈ k1/k2. This shows that if k2x̄ >> k−1z̄, then k2 can only be estimated using data exclusively from the outer solution when measurements of y, as well as of either x or z, are available: k1 can be estimated from measurements of x, and then k2 ≈ k1/y. Since z(0) = 0 and x(0) = 1 in this example, there is indeed an initial period when this approximation applies. As an example of what could be a student exercise, on rearranging eqs 22–24, 26, and 27 in the same way as in the linear example, it is seen that the maximum magnitude of error of the QSSA in the slow variable x depends on the two rate constant ratios r1 and r2 and also on the initial data, so that QSSA error in x can be represented as a surface, for a given set of initial values. In the case when the initial value of the fast variable is chosen to satisfy eq 25, with x(0) = 1 and z(0) = 0, the maximum absolute error in x (i.e., [O3]) is shown in Figure 3. Again, the QSSA is seen to be quite accurate even when the rate constants are relatively close. Interestingly, 3x̄ + 2z̄ is the number of oxygen atoms contributed by the ozone and molecular oxygen, and the QSSA keeps this number constant. Physically, what is constant is the total number of oxygen atoms, namely 3x + y + 2z; hence there is a weakness in the QSSA. With many choices for the rate constants, the fast variable is small, and then the discrepancy is clearly not too serious. It is possible to modify the QSSA to maintain exact conservation of atoms (10), at the cost of some complexity. Alternatively, if the QSSA is regarded as part of singular perturbation theory, the proper conservation

Figure 3. Error of the nonlinear example. The surface represents the maximum magnitude of the difference between the exact and the QSSA solutions for the slow variable, x. Initial values are x = 1, y = k1/k2, and z = 0.

of atoms could be better approximated by a higher-order singular perturbation solution. Happily, Figure 3 shows that the basic form of the QSSA is quite accurate over a wide region of r1, r 2 space, subject to the initial concentrations defined above.
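The conservation behavior just described can be exhibited numerically (again with illustrative rate constants not taken from the text): the full model holds 3x + y + 2z fixed, while the QSSA holds only 3x̄ + 2z̄ fixed:

```python
import numpy as np
from scipy.integrate import solve_ivp

k1, km1, k2 = 1.0, 5.0, 5.0               # illustrative values

def full(t, u):                           # eqs 22-24
    x, y, z = u
    return [-k1 * x + km1 * z * y - k2 * x * y,
            k1 * x - km1 * z * y - k2 * x * y,
            k1 * x - km1 * z * y + 2 * k2 * x * y]

def qssa(t, u):                           # eqs 26, 27
    x, z = u
    a = k1 * k2 * x**2 / (km1 * z + k2 * x)
    return [-2 * a, 3 * a]

sol = solve_ivp(full, (0.0, 3.0), [1.0, 0.0, 0.0], method="BDF",
                rtol=1e-10, atol=1e-13)
atoms = 3 * sol.y[0] + sol.y[1] + 2 * sol.y[2]   # total O atoms, 3x + y + 2z
dev = float(np.max(np.abs(atoms - atoms[0])))

q = solve_ivp(qssa, (0.0, 3.0), [1.0, 0.0], method="BDF",
              rtol=1e-10, atol=1e-13)
pool = 3 * q.y[0] + 2 * q.y[1]                   # the quantity the QSSA fixes
dev_q = float(np.max(np.abs(pool - pool[0])))
print(f"drift in 3x + y + 2z: {dev:.1e}; drift in 3x + 2z (QSSA): {dev_q:.1e}")
```

Both conserved quantities hold steady to solver tolerance; the discrepancy between the two bookkeepings is the fast variable ȳ itself, which is small whenever the QSSA is accurate.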

Oregonator Oscillations

In this well-known model of an oscillating chemical reaction system (4, 19, 20), the dynamics are described by three differential equations in which the variables are the concentrations of HBrO2, Br−, and Ce(IV). The kinetic equations incorporate five constant coefficients of interest. Some information about their magnitudes can be inferred from measurements of the concentrations made at known times. For this purpose, the QSSA can be used when trying to understand the mathematical aspects of the reaction dynamics, as a preliminary to analyzing experimental data that contain random fluctuations. In this note, we wish to show how the qualitative aspects of the QSSA enter into this task. The kinetic equations have been put into nondimensional form by Tyson (5): the nondimensional variables are x = [HBrO2]/x*, y = [Br−]/y*, z = [Ce(IV)]/z*, and t = (time in s)/t*, where x*, y*, z*, and t* are formed from the rate constants and the effectively constant concentrations of three of the reacting species. The kinetic equations become

ε′ dx/dt = qy − xy + x − x^2   (28)

ε dy/dt = −qy − xy + z   (29)

dz/dt = x − z   (30)

in which q, ε, and ε′ are also nondimensional groupings of parameters. Likely values are ε′ = 4 × 10^−2, ε = 2 × 10^−4, and q = 8 × 10^−4. At these values, and for run times of say 20 units of scaled time t, these equations are stiff but are handled easily enough by stiff solvers. Within the context of currently available numerical mathematics packages suitable for undergraduate students, one can solve these equations in a few seconds and display the results of a single run or a set of runs rapidly, in various formats. There is then little computational restriction on the size assigned to ε′ and ε. This allows one to numerically simulate the algebra of the QSSA, which is useful because if x and y are taken as the fast or slave variables, they are certainly defined as functions of z by eqs 28–30 with ε′ and ε both set to zero, but these functions cannot be expressed in exact algebraic form. As pointed out earlier, setting ε′ and ε at very small values is equivalent to setting up and solving the QSSA equation for z. It is indeed found that as these two perturbation parameters are reduced, the system oscillates with little sensitivity to the size of these parameters, provided they are small enough. Thus, we have established numerically the validity of the QSSA for this system, despite difficulties with the algebra.

Explanations of the "relaxation oscillations" (5, 20) undergone by the system depend on the existence of two phases during which the QSSA is valid. Consequently these explanations involve the QSSA concepts described earlier in this note, as is shown in what follows. In one account, suppose that ε′ is so small that the QSSA can be applied to eq 28.
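As an indication of how little computation is involved, the following Python/scipy sketch integrates eqs 28–30 at the quoted parameter values with the implicit Radau method (the initial values (1, 1, 1) are an arbitrary illustrative choice, not taken from the text):

```python
import numpy as np
from scipy.integrate import solve_ivp

eps_p, eps, q = 4e-2, 2e-4, 8e-4          # values quoted in the text

def oregonator(t, u):                     # eqs 28-30
    x, y, z = u
    return [(q * y - x * y + x - x * x) / eps_p,
            (-q * y - x * y + z) / eps,
            x - z]

sol = solve_ivp(oregonator, (0.0, 20.0), [1.0, 1.0, 1.0], method="Radau",
                rtol=1e-7, atol=1e-10, dense_output=True)
z = sol.sol(np.linspace(0.0, 20.0, 4000))[2]
print(f"z ranges over [{z.min():.3g}, {z.max():.3g}]")  # cf. Figure 4
```

The slow variable z is found to range over more than a decade during the run, and the runs remain fast even when ε′ and ε are pushed towards the QSSA limit.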

JChemEd.chem.wisc.edu • Vol. 75 No. 9 September 1998 • Journal of Chemical Education

1163

[Figures 4 and 5 appear here. Figure 4 plots the concentrations y and z against time; Figure 5 is the z, y phase plane, marked with the null clines, the points A–E, and the regions in which dy/dt and dz/dt change sign.]

Figure 4. Relaxation oscillation in the two-dimensional form of the Oregonator, plotted on a semilog scale. The fast variable is y (solid line); the slow variable is z (broken line).

Figure 5. Oregonator z, y phase-plane plot on a log–log scale. The unbroken line is the y null cline; the broken line is the z null cline. The solution of the reduced model, eqs 32 and 33, is shown by ○. The solution of the complete model, eqs 28–30, is shown by ×.

The nonnegative variable x becomes a function of y:

x ≈ χ(y) = [(1 − y) + √((1 − y)² + 4qy)]/2   (31)

Consider then the modified system

ε dy/dt = −qy − χ(y)y + z   (32)

dz/dt = χ(y) − z   (33)

Initial data are not important here because the solution falls into the same limit-cycle oscillation regardless of initial values. A cycle of the oscillation is shown in Figure 4, computed from eqs 32 and 33 using the likely values listed above; that is, ε = 2 × 10⁻⁴ and q = 8 × 10⁻⁴. This same oscillation is also illustrated on a "phase plane" plot in Figure 5, in which a point represents the system instantaneously, the coordinate axes representing z and y. The y null cline of the modified system (the line along which dy/dt is zero) is seen to have a shape that assigns three values of y to one value of z in part of the plot. The feedback action discussed earlier moves the system onto this line, subject to restrictions on the signs of dy/dt and dz/dt. The initial state is at point A, and its development in time is shown by a sequence of ○'s. The solution moves rapidly along AB to virtual coincidence with the y null cline, and then upwards beyond the peak in the null cline. Next it shoots rapidly over to C, with z nearly constant. This part of the motion can be regarded as a sharp transient preceding a smooth solution region from C to D. At D there is again a transient-like jump to E, followed by a smooth traverse along the null cline to B, from where the whole clocklike oscillation is repeated.

Students could be justified in having doubts about whether eqs 32 and 33 are a fair approximation to eqs 28–30 for a particular set of parameter values. With the high-quality educational software now available, this question can be investigated by numerical experiment. Also shown in Figure 5 is the solution of eqs 28–30, at the likely values, in the form of a sequence of crosses. The two solutions appear to correspond to the same qualitative description, if some allowance is made for a slurring of the sharp transients of BC and DE and for the log–log scale.

Another, more mechanistic explanation of the oscillations can be given by assuming that ε′ and ε are so small that the QSSA can be applied to both x and y. Now, according to the QSSA, x is again given by the function in eq 31. This function is equal to 1 when y = 0 and, as y increases, falls steeply to become very small for y ≥ 1. Referring to Figure 4, from C to D, y is large, x is near zero, and from eq 30, z decays exponentially. During this time, eq 29 with a vanishing left-hand side gives y ≈ z/q; but near D, y falls away, x jumps up to a value close to 1 (much larger than q), and y sharply jumps down to y = z/(x + q), between D and E. During this drop, x is much bigger than z, and so from eq 30, z rapidly rises; y follows z upwards as a "slave" variable. When y approaches 1 near B, x switches to a small value and dz/dt is again small; y jumps towards C, where y ≈ z/q, and the cycle restarts.
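A numerical experiment comparing the reduced model, eqs 32 and 33, with the full model, eqs 28–30, can be sketched as follows. Again this is only a sketch, assuming SciPy is available; the initial state, tolerances, and run length are illustrative choices.

```python
# Sketch: reduced Oregonator (eqs 32-33, using the QSSA function chi of
# eq 31) versus the full model (eqs 28-30) at the "likely values".
import numpy as np
from scipy.integrate import solve_ivp

eps_p, eps, q = 4e-2, 2e-4, 8e-4

def chi(y):
    # eq 31: the nonnegative root of q*y - x*y + x - x**2 = 0
    return 0.5 * ((1.0 - y) + np.sqrt((1.0 - y)**2 + 4.0 * q * y))

def full(t, u):
    x, y, z = u
    return [(q*y - x*y + x - x**2) / eps_p,
            (-q*y - x*y + z) / eps,
            x - z]

def reduced(t, u):
    y, z = u
    x = chi(y)
    return [(-q*y - x*y + z) / eps,  # eq 32
            x - z]                   # eq 33

# Start the full model on the QSSA manifold x = chi(y) for a fair comparison.
sol_full = solve_ivp(full, (0.0, 20.0), [chi(1.0), 1.0, 1.0],
                     method="Radau", rtol=1e-6, atol=1e-9)
sol_red = solve_ivp(reduced, (0.0, 20.0), [1.0, 1.0],
                    method="Radau", rtol=1e-6, atol=1e-9)

# Overlaying (z, y) from the two runs on a log-log plot reproduces the
# comparison of Figure 5: both settle onto the same qualitative cycle.
```

The design point is that no algebra beyond eq 31 is needed: the validity of the reduction is judged by eye, or by any convenient numerical measure, from the two computed trajectories.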

Concluding Comments

We have found that graduate chemistry students with little formal mathematical background can successfully simulate stiff systems with minimal supervision, although some understanding by the students of what stiffness means is nevertheless desirable. The quality of the numerical computer programs now suitable for general use by scientists allows the QSSA to be investigated in computer laboratories. By using these facilities to check the accuracy of the QSSA, students can attain justifiable confidence in its use. In particular, the algebra-free QSSA simulation trick introduced in this note should be useful in preventing erroneous use of the QSSA, even with quite large reaction systems. Numerical validation of the QSSA involves little formal mathematics, especially in comparison with singular perturbation procedures. However, it has been found that a variation on the usual approach to singular perturbation theory (developed for the kinetic theory of gases) can be adapted to give a more natural singular perturbation error estimate (21), quite suitable for the type of problems dealt with in


this note, and using only basic calculus. It is our intention to report on this method in due course. At the research level, it is hoped that improved numerical and analytical validation procedures will lead to better and wider use of the QSSA, overcoming the reluctance to get involved with the QSSA, or even to recognize its potential, that arises from the difficulty of establishing its validity. It has been pointed out that the availability of stiff differential equation solvers challenges the importance of the QSSA for small reaction systems (22). Nevertheless, in recent investigations of complicated atmospheric chemistry (23) based on current theory and practice, the QSSA has proved its value when used together with a stiff solver. Details of Matlab programs are available from [email protected].

Acknowledgment

We thank Leo F. Salter for indicating to us the importance of the QSSA in the research program, under his leadership, on photochemically induced dimerization of pyrimidine bases in DNA.

Literature Cited

1. Tardy, C. T.; Cater, E. D. J. Chem. Educ. 1983, 60, 109–111.
2. O'Malley, R. E. Singular Perturbation Methods for Ordinary Differential Equations; Springer: New York, 1991.
3. Farrow, L. A.; Graedel, T. E. J. Phys. Chem. 1977, 81, 2480–2483.
4. Chesick, J. P. J. Chem. Educ. 1988, 65, 599–602.

5. Tyson, J. T. In Oscillations and Travelling Waves in Chemical Systems; Field, R. J.; Burger, M., Eds.; Wiley: New York, 1985; Chapter 3.
6. Gray, P.; Scott, S. K. Chemical Oscillations and Instabilities; Clarendon: Oxford, 1990.
7. Curtiss, C. F.; Hirschfelder, J. O. Proc. Natl. Acad. Sci. USA 1952, 38, 235–243.
8. Segel, L. A.; Slemrod, M. SIAM Rev. 1989, 31, 446–477.
9. Viossat, V.; Ben-Aim, R. I. J. Chem. Educ. 1993, 70, 732–738.
10. Gellene, G. I. J. Chem. Educ. 1995, 72, 196–199.
11. Ramachandran, B. R.; Halpern, A. M. J. Chem. Educ. 1997, 74, 975–978.
12. Laidler, K. J. J. Chem. Educ. 1988, 65, 251–254.
13. Gelinas, R. J. J. Comp. Phys. 1972, 9, 222–236.
14. Gear, C. W. Numerical Initial Value Problems in Ordinary Differential Equations; Prentice-Hall: Englewood Cliffs, NJ, 1971.
15. Press, W. H.; Teukolsky, S. A.; Vetterling, W. T.; Flannery, B. P. Numerical Recipes in C: The Art of Scientific Computing; Cambridge University Press: Cambridge, 1992.
16. Edelson, D.; Rabitz, H. In Oscillations and Travelling Waves in Chemical Systems; Field, R. J.; Burger, M., Eds.; Wiley: New York, 1985; Chapter 6.
17. Bluestone, S.; Yan, K. Y. J. Chem. Educ. 1995, 72, 884–886.
18. Steffen, L. K.; Holt, P. L. J. Chem. Educ. 1993, 70, 991–993.
19. Benini, O.; Cervellati, R.; Fetto, P. J. Chem. Educ. 1996, 73, 865–868.
20. Field, R. J.; Schneider, F. W. J. Chem. Educ. 1989, 66, 195–204.
21. Mika, J. R.; Palczewski, A. Comp. Math. Appl. 1991, 21(10), 13–32.
22. Pavlis, R. R. J. Chem. Educ. 1997, 75, 1139–1140.
23. Zeng, G.; Pilling, M. J.; Saunders, S. M. J. Chem. Soc., Faraday Trans. 1997, 93, 2937–2946.
