Kinetics without Steady State Approximations - Journal of Chemical Education, Sep 1, 1997

Information • Textbooks • Media • Resources

Computer Bulletin Board
edited by Steven D. Gammon, University of Idaho, Moscow, ID 83844

Kinetics without Steady State Approximations

Robert R. Pavlis*
Department of Chemistry, Pittsburg State University, Pittsburg, KS 66762

There is a great need today for students in undergraduate chemistry and physics classes to learn computational methods based on numeric algorithms rather than classical integration and differentiation. Chemistry and physics faculty tend to feel that using numerical algorithms in their classes is inappropriate because mathematics courses have not provided the necessary background. Mathematics faculty seem to feel that numeric training of this sort should be provided by computer science departments, while computer science faculty seem to feel that their courses should concentrate on data processing and operating system theory. The net result is that students are often deprived of training in this area.

Chemical kinetics problems are virtually always sets of simultaneous ordinary differential equations with known initial boundary conditions. These differential equations are generally difficult or impossible to integrate exactly, so in the past chemists developed approximate methods to deal with them. The development of inexpensive computers and efficient numerical integration algorithms has made these approximate methods anachronisms.

The older standard numerical integration methods for ordinary differential equations are explicit. That is, they take the present values and extend the solution forward by taking small steps in the independent variable, using the derivatives to extrapolate the solution forward step by step.
There are four commonly used explicit algorithms: (i) Euler-type algorithms, which approximate each step as a line; (ii) Runge–Kutta algorithms, which compute several derivatives across each step to approximate the function as a polynomial; (iii) predictor–corrector algorithms, which use the results of several past steps to predict the next step and then compute an explicit derivative to apply a correction, thereby also approximating the function as a polynomial; and (iv) rational extrapolation algorithms, which take several different explicit substep sizes across a relatively large interval and then extrapolate to an infinite number of substeps. Runge–Kutta methods are used most frequently.

The traditional Runge–Kutta methods do not directly provide any indication of the truncation error that results from approximating the function by one of finite order. An old way to determine truncation error is to extend the solution at two step sizes simultaneously, one twice the other. Let the step size be h and let n be the order of the method, so that the truncation error per step varies as h^(n+1). If Δ is the difference between the result of two steps of size h and the result of one step of size 2h, then the truncation error t of a step of size h is approximately

t ≈ Δ / (2^(n+1) − 2)

Comparing the two results in this way allows the truncation error to be determined and the step size to be optimized. More recently, Runge–Kutta algorithms have been developed that provide solutions simultaneously at two different orders. This allows truncation error to be determined after each step and allows for constant adjustment of step size (1).

*Email: [email protected].

When numerical solutions are sought for simultaneous differential equations in which the variables are scaled very differently, as in most kinetics problems, the integration steps must be very small for explicit algorithms, even though the variables that require the small steps may be changing very slowly. Such differential equations are termed "stiff". The required step size may be so small that hundreds of millions of steps may be required for the integration! In many cases, self-adjusting explicit Runge–Kutta algorithms quickly shrink the step size to infinitesimal levels, and the integration halts.

Solutions for stiff ordinary differential equations can generally be obtained without tiny steps by the use of implicit algorithms. Good implicit algorithms are relatively recent advances in applied mathematics. To understand how this is done, it is easiest to compare the simple explicit and implicit Euler algorithms. Let y be a set of dependent variables satisfying the simultaneous differential equations

dy/dx = f(y)

Using explicit Euler integration for a step of h, y is advanced by

y_(n+1) = y_n + h f(y_n)

Using implicit Euler integration, however,

y_(n+1) = y_n + h f(y_(n+1))

To obtain f(y_(n+1)) one must solve a system of simultaneous equations in N unknowns, where N is the number of differential equations; in any practical problem these equations are nonlinear. Because the solution is being advanced over relatively small steps, the nonlinear equations can be approximated by linear ones using partial derivatives. This is best described using matrix notation. The approximation really yields a semi-implicit integration, but it almost always produces excellent numerical results. In the following equation, let 1 be the identity matrix, and let J_m be the Jacobian matrix, in which each row contains the partial derivative values of one of the differential equations.
The column elements of each row contain the partial derivatives with respect to each of the dependent variables in sequence. The semi-implicit Euler step is then

y_(n+1) = y_n + h [1 − h J_m]^(−1) f(y_n)

A computer program must compute N² partial derivatives to produce the Jacobian matrix. Most practical kinetics problems have fewer than a dozen simultaneous differential equations, so coding this can be done quite easily. The Jacobian matrix can be computed by numerical differentiation, but this can have pitfalls, and obtaining exact derivatives is simple for most kinetics problems anyway.

Euler integration is impractical for most situations. For solving such systems, implicit Runge–Kutta type algorithms are generally the most practical. Truncation error determination at each step is essential for practical integration of

Vol. 74 No. 9 September 1997 • Journal of Chemical Education

1139


Figure 1. Simulated enzymatic reaction with noncompetitive inhibition. The initial substrate concentration is 0.4 M and the enzyme concentration is 2.0 × 10⁻⁷ M. Species other than the inhibitor start at zero. Curve A represents no inhibitor present. Curves B through K represent stepped increases of inhibitor, each step increasing by 0.00002 M. Rate constants and source code of the program that produced this output are available upon request.

stiff equations, because such equations typically require nearly continuous step-size changes. Predictor–corrector algorithms such as the Gear algorithm (2) are difficult to restart after a step-size change. Rosenbrock originally described an implicit algorithm that provides truncation error determination at each step; Kaps and Rentrop (3) and Shampine (4) provided practical implementations of the idea. The 1992 edition of Numerical Recipes in C contains this type of algorithm coded in C (5). This otherwise excellent code contains two classic examples of bad programming practice: it dynamically allocates memory in functions that are typically called thousands or even millions of times, and it passes parameters that are not used! A rewrite of this code makes an excellent integrator for virtually all kinetics problems. The algorithm can also be written in C++ using a matrix class; such code is easier to understand and modify, although it runs somewhat more slowly. It is also possible to define a C++ differential equation solver class; one was described in Dr. Dobb's Journal (6).

Once one has a computer program correctly coded to solve one set of stiff equations, modifying it to solve another is merely a matter of coding all the derivatives and partial derivatives and adding them to the program. Extreme care must be taken with the error tolerance values used to regulate step size: too small a tolerance may result in a very slow solution, and too large a tolerance may result in wildly incorrect solutions. The code needs some provision to set the tolerance for each dependent variable individually.

Virtually any type of kinetics simulation can be integrated to any degree of accuracy; the steady-state approximation is totally unnecessary, as are the Michaelis–Menten approximations. It is a worthwhile exercise for students to compare the numerical solution with these now archaic pseudo solutions. Although the method is discussed here in terms of chemical kinetics problems, similar problems arise in practical situations in all areas of science. In research situations it is very useful to maintain a library of code for methods such as this to meet the many situations that arise.

The algorithm is not excessively complicated, and its basic principles can certainly be understood by students with only a basic understanding of calculus. It requires only a few lectures to bring third- and fourth-year students to the level where they can use it. The lack of computer programming knowledge among many undergraduate students is a bigger obstacle: the computer programming courses offered by many universities tend to avoid mathematical programming. In actual fact, mathematical programming is not excessively difficult; a student need only understand the language's control structures and input–output commands, and then enter the equations.

Figure 1 shows the results of numerical integration using a program that employs this algorithm to integrate the differential equations describing a simple enzyme system with noncompetitive inhibition. There are seven species in this system: the enzyme, the substrate, the enzyme–substrate complex, the product, the inhibitor, the enzyme–inhibitor complex, and finally the enzyme–inhibitor–substrate complex.
At the present time, C and C++ have become the standard programming languages; although they were designed for systems programming, they are excellent for mathematics. Dozens of compilers are available, some without charge. The GNU compilers are excellent; they are available for many computer systems, including MS-DOS and OS/2, and they are free, although the user must abide by the GNU license agreement.

Complete code for some sample systems, including the one mentioned in the text, can be obtained from Robert R. Pavlis, Department of Chemistry, Pittsburg State University, Pittsburg, Kansas 66762. It may also be obtained by email from [email protected].

Literature Cited

1. Verner, J. H. SIAM J. Numer. Anal. 1978, 15.
2. Gear, C. W. Numerical Initial Value Problems in Ordinary Differential Equations; Prentice Hall: Englewood Cliffs, NJ, 1971.
3. Kaps, P.; Rentrop, P. Numerische Mathematik 1979, 33, 55.
4. Shampine, L. F. ACM Transactions on Mathematical Software 1982, 8, 92.
5. Press, W. H.; Flannery, B. P.; Teukolsky, S. A.; Vetterling, W. T. Numerical Recipes in C; Cambridge University: Cambridge, UK, 1992.
6. Conway, D. J. Dr. Dobb's J. 1995, 20(12), 52.
