Solution of Stiff Differential Equations and the Use of Imbedding

Ind. Eng. Chem. Fundam., Vol. 17, No. 3, 1978


REVIEWS

Solution of Stiff Differential Equations and the Use of Imbedding Techniques

Yau Nam I. Chan, Irving Birnbaum, and Leon Lapidus*

Department of Chemical Engineering, Princeton University, Princeton, New Jersey

The development of imbedded semi-implicit Runge-Kutta methods for solving stiff ordinary differential equations is presented. Comparisons with proposed methods in the literature and extensive numerical tests on systems with stiffness ratios as high as 10^16 reveal that the present algorithms are extremely efficient, simple to use in a variable step-size format, and competitive.

Introduction

The past five or six years have seen a virtual explosion in the analysis, in the numerical sense, of stiff differential equations. Such systems, first analyzed by Curtiss and Hirschfelder (1952) and then brought to prominence by Dahlquist (1963), may be thought of, in a loose sense, as being represented by variables with different time constants. Of course, the interest in these stiff systems comes from the fact that many physical and chemical problems exhibit these features (Lapidus et al., 1974); at the same time, the equations are ill-conditioned in a computational sense and thus are extremely difficult to solve.

A stiff system of ordinary differential equations may also be defined as one in which the maximum eigenvalue, λ_max, is sufficiently large that either stability or an error bound of a numerical solution (or both) can be assured only by unreasonable restrictions on the numerical integration stepsize. This implies an excessively small stepsize, requiring enormous computing time to solve the system equations. Usually a stiff system is defined by its stiffness ratio, the ratio of the largest to the smallest eigenvalue of the system. But even this is an unduly restrictive definition, since if λ_max is small but the overall integration interval is large, the computation time may still be excessive. The high-frequency harmonic oscillator, d²y/dt² + ω²y = 0 with ω² large, is stiff in a broad sense, but not for the usual reasons. Thus stiffness is really an imprecise concept; we shall use the stiffness ratio as a measure of stiffness for lack of a more useful definition.

The literature in the area of stiff systems is voluminous and in a state of rapid flux. Thus we shall not detail here any papers other than some review articles; many references will, however, be listed throughout the text. The first review in this area was by Seinfeld, Lapidus, and Hwang in 1970; this was followed by a report by Bjurel et al. in 1970 and by two books (Willoughby (1974) and Schiesser and Lapidus (1976)). The latter two books contain compilations of papers from technical meetings dealing with stiff equations. It is also of interest to note that stiff systems have been attacked in a singular perturbation context (Aiken and Lapidus, 1974), that the method-of-lines for solving partial differential equations leads to stiffness (Madsen and Sincovec, 1976), and that comparative testing has been started on various algorithms (Enright and Hull, 1976). The work of Shampine, Watts, and Davenport (1976) is also of interest even though it is directed largely toward nonstiff equations; these authors do show that algorithms which are highly efficient for nonstiff systems may behave quite differently, especially with regard to each other, as stiffness is included.

In the present paper we concentrate our efforts on the use of semi-implicit Runge-Kutta algorithms as a viable means of solving general stiff systems. A new imbedding technique and a family of algorithms are introduced which allow a variable integration stepsize as a function of the local truncation error. This imbedding concept for nonstiff systems has been detailed by Shampine, Watts, and Davenport (1976) as one of the most efficient techniques available. An analysis of the present approach as applied to a variety of test systems, spanning linearity and nonlinearity, small and large dimension, and slightly stiff to very stiff, indicates the positive features of the current algorithms. In fact, one may hypothesize that there already exist efficient methods for systems with stiffness ratios ≤10^2 (nonstiff); that in the range 10^2 to 10^14, where stiffness becomes extremely important, the current algorithm may be the best available; and that for stiffness ratios above 10^14 either the current algorithm or a singular perturbation approach may be the most feasible.
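The stepsize penalty that makes stiff systems expensive can be seen in a few lines. The following is our own minimal illustration (not from the paper), using explicit Euler on a linear system whose eigenvalues differ by a factor of 1000:

```python
import numpy as np

# Model system dy/dt = B*y with eigenvalues -1000 and -1 (stiffness ratio 10^3).
B = np.diag([-1000.0, -1.0])
y0 = np.array([1.0, 1.0])

def euler(h, n):
    """Explicit Euler: y_{k+1} = y_k + h*(B @ y_k)."""
    y = y0.copy()
    for _ in range(n):
        y = y + h * (B @ y)
    return y

# Stability of explicit Euler requires |1 + h*lambda| <= 1 for every eigenvalue,
# i.e. h <= 2/1000 here, even though the fast component decays almost instantly.
y_stable = euler(0.0019, 1000)    # h just inside the stability bound: decays
y_unstable = euler(0.0021, 1000)  # h just outside: the -1000 mode explodes
print(np.abs(y_stable), np.abs(y_unstable))
```

The slow mode alone would permit h near 1; it is the fast, physically uninteresting mode that forces the tiny stepsize.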

Semi-Implicit Runge-Kutta Methods

The ordinary differential equations of interest in the present work can be written in a number of ways. In scalar initial-value form we ask to solve

    dy/dx = f(x, y)   (y = y_0; x = x_0)   (1)

In many applications x is related to time, and the equivalence x = t can be made. For more than one dependent variable we may write (1) in vector form as

    dy/dx = f(x, y)   (y = y_0; x = x_0)   (2)

where y is a vector [y_1, y_2, ..., y_m] of dimension m. The various equations of interest usually simplify if we write (2) in autonomous form

Address correspondence to this author at Bldg 815, Brookhaven National Laboratory, Upton, N.Y. 11973. Deceased.

0019-7874/78/1017-0133$01.00/0

    dy/dx = f(y)   (y = y_0; x = x_0)   (3)

© 1978 American Chemical Society


where the dimension of y is now m + 1; this is achieved by augmenting the m equations with the addition of dy_{m+1}/dx = 1, y_{m+1} = x_0 at x = x_0. In this autonomous form we can also identify ∂f/∂y ≡ J(y) as the Jacobian matrix and ∂²f/∂y² ≡ H(y) as the Hessian matrix.

The corresponding numerical solution is specified as: given y = y_0 at x = x_0, find suitable values of y_1 at x_1, y_2 at x_2, ..., y_n at x_n, y_{n+1} at x_{n+1}, ... which are accurate as compared to the exact values y(x_1), y(x_2), ..., y(x_n), y(x_{n+1}), .... The x increments in the numerical solution, i.e., x_{n+1} - x_n, are termed the stepsize h. While the stepsize can vary from step to step, h_1, h_2, ..., h_n, h_{n+1}, ..., it is easier to merely use h in the discussion that follows.
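The augmentation just described is mechanical; a short sketch (the wrapper name is ours, and the sample right-hand side is purely illustrative):

```python
import numpy as np

def make_autonomous(f):
    """Wrap a non-autonomous RHS f(x, y) -> dy/dx as an autonomous system.

    The extra state component z[-1] carries x itself, with dz[-1]/dx = 1,
    so the wrapped system depends on its state vector z alone.
    """
    def f_aut(z):
        y, x = z[:-1], z[-1]
        return np.append(f(x, y), 1.0)
    return f_aut

# Example: dy/dx = -y + sin(x) becomes a 2-component autonomous system.
f_aut = make_autonomous(lambda x, y: -y + np.sin(x))
print(f_aut(np.array([2.0, 0.0])))  # [-2.0 + sin(0), 1.0] = [-2.0, 1.0]
```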

Runge-Kutta Methods

Given that we have y_n at x_n (in scalar notation), Runge-Kutta algorithms calculate y_{n+1} at x_{n+1} by selecting a subset of points within (x_n, x_{n+1}). Letting r equal the size of this subset, plus the initial point x_n, the values of f(x, y) are evaluated at each point and added to y_n with appropriate weighting factors. Explicitly we write

    y_{n+1} = y_n + Σ_{i=1}^{r} R_i k_i   (4)

and

    k_i = h f(x_n + c_i h, y_n + Σ_{j=1}^{r} a_ij k_j)   (i = 1, 2, ..., r)   (5)

The parameters (to be determined) (c_i, R_i, a_ij) in this formulation can be arranged in a Butcher block (Lapidus and Seinfeld, 1971), with c, R, and A indicating the various vectors or the matrix of parameters. Continuity requires that

    c_i = Σ_{j=1}^{r} a_ij;   Σ_{j=1}^{r} R_j = 1.0   (6)

The explicit determination of the parameters R_j, c_j, and a_ij is effected by appropriate Taylor series expansions. The details of such expansions go beyond this paper; the interested reader should consult Lapidus and Seinfeld (1971) and Curtis (1975). By matching expansions of (4) and (5) with a pth-order Taylor series, the result is termed a pth-order Runge-Kutta process; the larger p, the more accurate the process. Many specific formulas can be found in Lapidus and Seinfeld (1971).

Within the context of the above general Runge-Kutta formulation, there are three special forms related to a splitting of the matrix A. Let

    A = A_L + D + A_U

where A_L and A_U are lower and upper triangular matrices and D is a diagonal matrix. It then follows that if A_L ≠ 0 but D = 0 and A_U = 0 (we mean the elements within A_L are not all zero), the process is called explicit. If A_L ≠ 0 and D ≠ 0 but A_U = 0, the process is semi-implicit; while if A_L, D, and A_U ≠ 0, the process is implicit.

Within the implicit Runge-Kutta formulation there are some further subdivisions or special forms. Thus if the vector c has as its elements the zeroes of the shifted Legendre polynomial of order r in (0, 1), the result is a Gauss-Legendre form. If instead the elements of c are the abscissas of the Radau quadrature of order 2r - 2, the result is, with c_1 = 0, a Radau (0) form; with c_r = 1, a Radau (1) form; and with c_1 = 0, c_r = 1, a Lobatto form. Such forms are intimately connected to the orthogonal collocation technique widely exploited in the chemical engineering literature (Finlayson, 1972).

There are, however, some major computational differences among the three major types of Runge-Kutta forms. In the explicit case the k_i can be calculated sequentially; by contrast, in the implicit case a set of simultaneous nonlinear equations (of size r) must be solved to evaluate the k_i. The semi-implicit forms fall in between, requiring only the solution of simultaneous linear algebraic equations. However, for r internal points the implicit forms can achieve accuracy of order p = 2r, while the explicit forms can achieve at best p = r. Once again the semi-implicit methods occupy an intermediate position. Finally, there is also a difference in the stability properties of the three types; we shall detail this feature next.

Stability

Stability is one of the major factors used to evaluate the viability of numerical algorithms for solving ordinary differential equations of the form (1)-(3). This may be elucidated by turning to the simple linear system (the so-called model problem)

    dy/dx = λy   (y = y_0; x = x_0)   (7)

with λ real or complex. Applying a specific Runge-Kutta algorithm to (7), we find that

    y_{n+1} = ρ(hλ) y_n   or   y_n = ρ(hλ)^n y_0   (8)

In order to obtain bounded numerical solutions, ρ(hλ), termed the characteristic root of the numerical process, must obey certain requirements. Stated in a simple fashion (see Lambert (1973) for details), a numerical scheme is termed

    A-Stable if, for -∞ < Re(hλ) < 0, |ρ(hλ)| ≤ 1
    Strongly-A-Stable if |ρ(hλ)| → 0 as Re(hλ) → -∞

If, instead of (7), we had used the vector equation

    dy/dx = By

the equivalent results follow, except that max|Re λ_i|, i = 1, ..., m, the maximum eigenvalue of B, replaces λ. The requirement that a numerical algorithm be A-Stable or Strongly-A-Stable is not important for nonstiff systems but is of major importance for stiff systems. As an illustration, the classical fourth-order explicit Runge-Kutta form is not A-Stable and instead is conditionally stable. In other words

    |ρ(hλ)| ≤ 1   for -2.8 < hλ < 0   (9)

By contrast, the modified Euler or trapezoidal rule (an implicit Runge-Kutta form) is A-Stable but not Strongly-A-Stable. For a two-dimensional linear system with m = 2 and λ_1 = -1000 and λ_2 = -1, the use of the fourth-order Runge-Kutta algorithm requires that h < 2.8/1000 in order to meet the bound in (9). This implies a very small value of h and an enormous amount of computing time. We may also mention that a measure of the stiffness of an m-dimensional system is characterized by its stiffness ratio (SR) defined as

    SR = max_{i=1,...,m} |Re λ_i| / min_{i=1,...,m} |Re λ_i|   (10)

where λ_1, λ_2, ..., λ_m are the eigenvalues of B in dy/dt = By. If the system were nonlinear, the equivalent definition would hold with B = J = ∂f/∂y = Jacobian matrix. On this basis a "good" algorithm should first have the property of being A-Stable and then the further property of being Strongly-A-Stable. Explicit Runge-Kutta methods have neither property and thus, even though they are computationally simple to use, are completely unsuitable for stiff systems. By contrast, most, although not all, implicit methods have the A-Stable property but are computationally difficult to use. Midway between the two extremes are the semi-implicit methods, which can be A-Stable and Strongly-A-Stable and yet are computationally easy to use.
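Both halves of this argument are easy to check numerically: the stiffness ratio (10) from the eigenvalues of a Jacobian, and the conditional stability of classical explicit RK4 against the A-Stable trapezoidal rule. The sketch below is our own illustration; the RK4 and trapezoidal root formulas are standard results not derived in the text:

```python
import numpy as np

def stiffness_ratio(J):
    """Eq (10): SR = max|Re lambda_i| / min|Re lambda_i| for the Jacobian J."""
    re = np.abs(np.linalg.eigvals(J).real)
    return re.max() / re.min()

def rho_rk4(z):
    """Characteristic root of classical explicit RK4, z = h*lambda."""
    return 1 + z + z**2 / 2 + z**3 / 6 + z**4 / 24

def rho_trap(z):
    """Characteristic root of the trapezoidal rule (A-Stable, not Strongly-A-Stable)."""
    return (1 + z / 2) / (1 - z / 2)

J = np.diag([-1000.0, -1.0])          # the two-eigenvalue example from the text
print(stiffness_ratio(J))             # 1000.0
print(abs(rho_rk4(-2.7)) < 1)         # True: inside -2.8 < h*lambda < 0
print(abs(rho_rk4(-2.9)) < 1)         # False: outside the RK4 stability interval
print(abs(rho_trap(-1e6)) < 1)        # True, but |rho| -> 1, not 0, as z -> -inf
```

The last line shows why A-Stability alone may not suffice for very stiff components: the trapezoidal root approaches -1 in magnitude, so fast modes are damped only slowly.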

Rosenbrock Semi-Implicit Methods

While Rosenbrock (1963) and Butcher (1964) both introduced semi-implicit Runge-Kutta methods at about the same time, the former seems the more useful and we shall concentrate on it. Rosenbrock outlined a way to introduce the Jacobian matrix directly into the coefficients of the Runge-Kutta process and thereby yield an A-Stable algorithm. Using the autonomous form of the differential equation and a slight change in the notation for the parameters in the Runge-Kutta process, Rosenbrock specified

    y_{n+1} = y_n + Σ_{i=1}^{r} R_i k_i   (11)

    k_1 = h[I - ha_1 J(y_n)]^-1 f(y_n)   (12)

    k_i = h[I - ha_i J(y_n + Σ_{j<i} c_ij k_j)]^-1 f(y_n + Σ_{j<i} b_ij k_j)   (i = 2, ..., r)   (13)

Recalling that ∂f/∂y ≡ J and selecting r = 2, we may convert these equations to the form

    y_{n+1} = y_n + R_1 k_1 + R_2 k_2
    k_1 = h[I - ha_1 J(y_n)]^-1 f(y_n)
    k_2 = h[I - ha_2 J(y_n + c_1 k_1)]^-1 f(y_n + b_1 k_1)   [Rosenbrock, r = 2, p = 3]   (14)

where the b_ij and c_ij have been simplified in notation. There are six adjustable parameters, R_1, R_2, a_1, a_2, c_1, and b_1, and by Taylor series expansions through terms including h³ (thus the method is third order, or p = 3) Rosenbrock showed that four identities were available. On this basis two of the six parameters are free to be chosen in any manner desired, i.e., to establish an A-Stable method or to minimize the computational effort. Rosenbrock developed the following set of parameters

    a_1 = 1 + √6/6 = 1.408...
    a_2 = 1 - √6/6 = 0.5917...
    R_1 = -0.4131...
    R_2 = 1.413...
    b_1 = c_1 = 0.1737...   [Rosenbrock parameters]   (15)

The characteristic root of this Rosenbrock process is

    ρ(hλ)_R = [1 - hλ - (2/3)(hλ)²] / [1 - 2hλ + (5/6)(hλ)²]   (16)

for which |ρ(hλ)_R| ≤ 1 for all hλ ≤ 0. In addition, lim ρ(hλ)_R → -0.8 as Re(hλ) → -∞. It thus follows that

    Rosenbrock: third-order process, r = 2
    A-Stable but not Strongly-A-Stable
    Two Jacobian evaluations and two matrix inversions required per step x_n → x_{n+1}

Calahan (1968) improved on the Rosenbrock form by specifying that a_1 = a_2 and c_1 = 0 in (14). This had the feature of retaining the third-order accurate process but requiring only one Jacobian evaluation and one matrix inversion per step x_n → x_{n+1}. The parameters used by Calahan in (14) and the characteristic root of the algorithm are

    a_1 = a_2 = 0.7886...
    b_1 = -1.1547...
    c_1 = 0
    R_1 = 3/4;  R_2 = 1/4   [Calahan parameters]   (17)

    ρ(hλ)_C = [1 - 0.578hλ - 0.456(hλ)²] / [1 - 1.578hλ + 0.622(hλ)²]   (18)

Further, |ρ(hλ)_C| ≤ 1 for all hλ ≤ 0 and lim ρ(hλ)_C → -0.733 as Re(hλ) → -∞. Thus this algorithm is A-Stable but not Strongly-A-Stable.

The next important contribution was due to Caillaud and Padmanabhan (1971), who used the concept of exponential fitting first elucidated by Liniger and Willoughby (1970). This concept is based on consideration of the eigenvalues of the model system with, complex or real, |λ_1| >>> |λ_m|. Since the model problem has a solution of the stiff component exp[hλ_1], it is necessary that any numerical scheme follow this stiff component. It is not possible to include this feature on the basis of the Taylor series matching; instead it requires the additional condition

    ρ(hλ_1) = exp[hλ_1]   (19)

It is important to observe that when the exponential fitting is done at Re(hλ_1) = -∞, exp[-∞] = 0 and the numerical algorithm will automatically be Strongly-A-Stable. Of course, it is not necessary that the fitting be made at Re(hλ_1) = -∞; however, Birnbaum (1976) has shown that the numerical solution may be very strongly influenced by the specific value of Re(hλ_1) used in the fitting. The Caillaud-Padmanabhan semi-implicit form is given by

    y_{n+1} = y_n + R_1 k_1 + R_2 k_2 + R_3 k_3
    k_1 = h[I - ha_1 J(y_n)]^-1 f(y_n)
    k_2 = h[I - ha_1 J(y_n)]^-1 f(y_n + b_2 k_1)
    k_3 = h[I - ha_1 J(y_n)]^-1 J(y_n)(b_31 k_1 + b_32 k_2)   [Caillaud-Padmanabhan]   (20)

Note that only one Jacobian evaluation, one matrix inversion, and two function evaluations are needed. There are seven parameters in (20): a_1, b_2, b_31, b_32, R_1, R_2, and R_3. By Taylor series matching through h³ terms (the


method is thus third order), four relations among the parameters can be ascertained; this leaves three parameters open for adjusting. Thus the authors selected R_3 = 1 and then specified R_2 b_2³ = 1/4 so that one of the terms in the h⁴ expansion was canceled. The final parameter was determined by noting that the characteristic root for this process is

    ρ(hλ)_CP = [1 + (1 - 3a_1)hλ + (3a_1² - 3a_1 + 1/2)(hλ)² - (a_1³ - 3a_1² + (3/2)a_1 - 1/6)(hλ)³] / (1 - a_1 hλ)³   (21)

Thus the behavior of ρ(hλ)_CP is determined solely by the parameter a_1. By choosing a_1 as the real root in the solution of

    a_1³ - 3a_1² + (3/2)a_1 - 1/6 = 0   (22)

a Strongly-A-Stable method results, in that the denominator will then be an order greater than the numerator as a function of hλ. This corresponds exactly to exponential fitting at Re(hλ) = -∞. The explicit value of a_1 turns out to be 0.4358.... Now one can solve for all the parameters to yield

    a_1 = 0.4358...
    R_1 = 11/27
    R_2 = 16/27
    R_3 = 1
    b_2 = 3/4
    b_31 = 1/18 - a_1 - b_32 = -0.2746...   [Caillaud-Padmanabhan parameters]   (23)

Caillaud and Padmanabhan also indicated the specific parameters for exponential fitting at values other than Re(hλ) = -∞; we shall not present those values here. However, as given above, we do have a third-order Strongly-A-Stable algorithm which requires one Jacobian evaluation, one matrix inversion, and two function evaluations per step x_n → x_{n+1}.

Michelsen (1976) proposed a slightly revised version of the Caillaud-Padmanabhan form (20) in which k_3 is written as

    k_3 = [I - ha_1 J(y_n)]^-1 (b_31 k_1 + b_32 k_2)   (24)

Note that J(y_n) has been omitted, thus saving a matrix multiplication. Using procedures identical with those of Caillaud and Padmanabhan, Michelsen derived the parameters

    a_1 = 0.4358...
    b_2 = 0.75
    b_31 = -0.6302...;  b_32 = -0.2423...
    R_1 = 1.0376...;  R_2 = 0.8349...;  R_3 = 1.0   [Michelsen parameters]   (25)

with the same characteristic root, ρ(hλ)_M = ρ(hλ)_CP.

Cash (1976) has also developed a third-order semi-implicit algorithm, which has the form

    y_{n+1} = y_n + h[R_1 k_1 + R_2 k_2 + R_3 k_3]
    k_1 = [I - ha_1 J(y_n)]^-1 f(y_n)
    k_2 = [I - ha_1 J(y_n)]^-1 f(y_n + hb_1 k_1)
    k_3 = [I - ha_1 J(y_n)]^-1 f(y_n + hb_2 k_1 + hb_3 k_2)   (26)

The development of the parameter values follows our previous discussion with a few changes. Thus the Taylor series matching is as before; the final parameters are determined by satisfying error-monitoring characteristics which we shall comment on shortly. The specific values are

    a_1 = 0.8670...
    R_1 = 0.9215...;  R_2 = 0.1703...;  R_3 = -0.0918...
    b_1 = -1.5936...;  b_2 = 0.6888...;  b_3 = 0.3510...   [Cash parameters]   (27)

Note that only one Jacobian evaluation and matrix inversion are required, but three function evaluations are necessary.

Error Estimates

To this point we have ignored completely one of the important features associated with the actual use of any of the semi-implicit algorithms, namely, how to estimate the numerical error so that the stepsize h can be adjusted up or down. In other words, as we calculate from x_n → x_{n+1} it is necessary to be able to estimate the error in the calculation and adjust the h being used; i.e., if the error is too large, h must be decreased, and vice versa. Without this variable-stepsize facility no algorithm can be truly efficient.

There are two ways to accomplish the appropriate error estimation for algorithms of the type considered here: to apply a Richardson-type extrapolation or to use imbedding techniques. The first approach is standard, while the second is relatively new. Lapidus and Seinfeld (1971) discuss the basis of the two approaches for nonstiff problems, and Shampine et al. (1976) and Enright and Hull (1976) indicate that for nonstiff problems the imbedding procedure is the better one to use.

Michelsen (1976), as an illustration, details the extrapolation approach in which a full step-half step is used for stepsize adjustment. Thus we step from x_n → x_{n+1} using the value h to generate y_{n+1}^(h), and then repeat the calculation from x_n → x_{n+1/2} → x_{n+1} using the value h/2 to generate y_{n+1}^(h/2). If the algorithm in use is of order p, a better or extrapolated (more accurate) answer for y_{n+1} at x_{n+1} is given by

    y_{n+1} = y_{n+1}^(h/2) + [y_{n+1}^(h/2) - y_{n+1}^(h)] / (2^p - 1)   (28)

and the truncation error at x_{n+1} is given by

    T_{n+1} = y_{n+1}^(h/2) - y_{n+1}^(h)   (29)

It also follows that for p = 3, the case considered by Michelsen,

    y_{n+1} = y_{n+1}^(h/2) + (1/7) T_{n+1}   (30)

is a better value at x_{n+1} than either y_{n+1}^(h) or y_{n+1}^(h/2). This linear combination of two approximate solutions yields a more accurate answer, and it can be used to guide the selection of the appropriate stepsize; it achieves these results because one term in the error expansion is canceled and in effect the method becomes fourth order (p = 4). But it does so at a considerable computational expense, namely, the two extra steps (with h/2) of calculation. Nonetheless, this one step-two step approach retains the stability properties of the core algorithm, increases the accuracy of the algorithm by one order, and provides a simple means of adjusting the stepsize. Whether the additional computational effort required is necessary and efficient is open to conjecture.
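The full step-half step strategy can be sketched in a few lines. The following is our own illustration (not the paper's code) of one step of the Michelsen form (24), using the parameter values quoted in the text (rounded), together with the error estimate (29) and extrapolation (30):

```python
import numpy as np

# Michelsen's third-order semi-implicit parameters, as quoted in the text.
A1, B2, B31, B32 = 0.4358665, 0.75, -0.6302085, -0.2423378
R1, R2, R3 = 1.0376093, 0.8349304, 1.0

def michelsen_step(f, jac, y, h):
    """One step of the Michelsen form: one Jacobian, one factored matrix."""
    M = np.eye(y.size) - h * A1 * jac(y)   # factor (I - h*a1*J) once per step
    k1 = h * np.linalg.solve(M, f(y))
    k2 = h * np.linalg.solve(M, f(y + B2 * k1))
    k3 = np.linalg.solve(M, B31 * k1 + B32 * k2)
    return y + R1 * k1 + R2 * k2 + R3 * k3

def step_with_doubling(f, jac, y, h):
    """Full step vs. two half steps: returns the extrapolated y and the error."""
    y_full = michelsen_step(f, jac, y, h)
    y_half = michelsen_step(f, jac, michelsen_step(f, jac, y, h / 2), h / 2)
    T = y_half - y_full                    # truncation-error estimate, eq (29)
    return y_half + T / 7.0, T             # extrapolated value, eq (30), p = 3

# Example on the model problem y' = -50*y over one step, exact answer exp(-1):
f = lambda y: -50.0 * y
jac = lambda y: np.array([[-50.0]])
y1, T = step_with_doubling(f, jac, np.array([1.0]), 0.02)
print(y1, T)
```

On this linear test the extrapolated value lands within about 2e-4 of exp(-1), versus roughly 7e-3 for the single full step, at the cost of the two extra half steps.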

Imbedded Algorithms

By contrast to the previous one step-two step method of estimating the local truncation error and the stepsize with a single formula, the imbedded approach uses a pair of formulas to obtain the same information. However, the pair is carefully selected so that only a single step is required and both formulas use the same set of function evaluations (or k_1, k_2, ...). If one member of the pair is (p + 1)-order and the other p-order, the local truncation error can be estimated from the difference between the two solutions over the interval (x_n, x_{n+1}). This is achieved at essentially no computational cost once the (p + 1)-order result is available.

As a preliminary illustration of the concepts, we select the Caillaud-Padmanabhan form (20), but instead of determining b_2 by removing one term in the h⁴ Taylor series, we let

    b_2 = 1 - 2a_1

Following the same procedures as before for evaluating the parameters, we now find there exists a second-order (p = 2) semi-implicit method of the form

    y_{n+1} = y_n + R_1'k_1 + R_2'k_2   (31)

where R_1' = R_2' = 1/2 and k_1, k_2 are identical with those calculated in (20); the parameters for (20) are now defined as

    b_2 = 1 - 2a_1;  a_1 = 0.4358...   (32)

The characteristic root of (31) is given by

    ρ(hλ) = [1 + (1 - 2a_1)hλ + (a_1² - 2a_1 + 1/2)(hλ)²] / (1 - a_1 hλ)²   (33)

with the property that lim ρ(hλ) → -0.9567 as Re(hλ) → -∞. Thus the second-order method of (31) is only A-Stable. With the parameters given by (32) the imbedded formula pair is now (20) and (31), with the higher order member being Strongly-A-Stable and the lower order member A-Stable. The computational cost of calculating y_{n+1} from (31) once (20) has been used is essentially trivial since the k_i are identical.

The same procedure can be applied to the Michelsen form (24), with the same imbedded second-order method and the same characteristic root, if

    b_2 = 1 - 2a_1;  a_1 = 0.4358...   (34)

The imbedded pair of formulae is now (24) and (31) with the parameters of (34). Unfortunately, in both of the pairs just developed the lower order formula is not Strongly-A-Stable, which may cause some difficulties in solving stiff systems. This restriction shall be removed shortly.

As an aside but still important point, we mention the one-term implicit method of Cash (1975). The implicitness occurs only in the single term y_{n+1}. From the general formulation of Cash one can abstract a second-order algorithm (35) and a third-order algorithm (36) (for a non-autonomous system); in each, the implicitness enters only through the stage

    k_1 = hf(x_{n+1}, y_{n+1})

with the remaining stages, through k_4 = hf(x_n, y_n), being explicit combinations of previously computed quantities. The corresponding characteristic roots are, respectively,

    ρ(hλ) = [1 + (1/3)hλ] / [1 - (2/3)hλ + (1/6)(hλ)²]

    ρ(hλ) = [1 + (1/6)hλ] / [1 - (5/6)hλ + (1/3)(hλ)² - (1/12)(hλ)³]   (37)

which are both Strongly-A-Stable. As a result a Strongly-A-Stable second-order method is imbedded within a Strongly-A-Stable third-order method; whether this is computationally more feasible than (32) or (34) depends on the iterative method used to solve the implicit portion and/or its rate of convergence.

The method of Cash (1976) given by (26), with parameters specified in (27), is also of interest. There are seven adjustable parameters, a_1, R_1, R_2, R_3, b_1, b_2, and b_3, for the algorithm to proceed from x_n → x_{n+1}, thereby using a step h. Cash also defines a second third-order semi-implicit method which uses the parameters (1/2)a_1, R̄_1, R̄_2, R̄_3, (1/2)b_1, (1/2)b_2, and (1/2)b_3 while stepping from x_{n-1} → x_{n+1}, thereby using a step 2h. The k_1, k_2, and k_3 are identical in the two formulas and thus one can be considered as imbedded in the other. With a total of ten adjustable parameters Cash is able to satisfy eight via Taylor series expansions. The remaining two parameters are determined by assuming the truncation error over (x_n, x_{n+1}) is identical with that over (x_{n-1}, x_{n+1}). The end result is an imbedded coupling of two third-order forms, each being A-Stable but not Strongly-A-Stable. The three parameters beyond those in (27) are

    R̄_1 = 0.1510...
    R̄_2 = 0.5642...
    R̄_3 = 0.2847...   [Further Cash parameters]   (38)
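The bookkeeping for an imbedded pair is trivial: both formulas reuse the same k_i, so the error estimate costs one vector difference, and the estimate then drives a conventional stepsize controller. A sketch of this pattern, using the Michelsen-based weights quoted in the text (rounded); the controller constants (safety factor, growth/shrink caps) are conventional choices of ours, not the paper's:

```python
import numpy as np

# Michelsen core parameters with imbedded lower-order weights (illustrative).
A1, B2, B31, B32 = 0.4358665, 0.75, -0.6302085, -0.2423378
R_HI = (1.0376093, 0.8349304, 1.0)   # third-order weights R1, R2, R3
R_LO = (0.6721, 0.3278)              # imbedded first-order weights R1', R2'

def imbedded_step(f, jac, y, h):
    """One semi-implicit step: third-order result plus a free error estimate,
    taken as the difference from the imbedded lower-order formula."""
    M = np.eye(y.size) - h * A1 * jac(y)
    k1 = h * np.linalg.solve(M, f(y))
    k2 = h * np.linalg.solve(M, f(y + B2 * k1))
    k3 = np.linalg.solve(M, B31 * k1 + B32 * k2)
    y_hi = y + R_HI[0] * k1 + R_HI[1] * k2 + R_HI[2] * k3
    y_lo = y + R_LO[0] * k1 + R_LO[1] * k2      # same k's: no extra work
    return y_hi, np.linalg.norm(y_hi - y_lo)

def adjust_stepsize(h, err, tol, p=1, safety=0.9, grow=5.0, shrink=0.2):
    """Conventional controller: h_new = h*(tol/err)^(1/(p+1)), with caps."""
    if err == 0.0:
        return h * grow
    return h * min(grow, max(shrink, safety * (tol / err) ** (1.0 / (p + 1))))

f = lambda y: -50.0 * y
jac = lambda y: np.array([[-50.0]])
y_hi, err = imbedded_step(f, jac, np.array([1.0]), 0.02)
h_next = adjust_stepsize(0.02, err, tol=1e-4)
print(y_hi, err, h_next)
```

Because the lower-order weights multiply stages that are already available, the per-step overhead of the error estimate is a handful of vector operations, which is the central selling point of imbedding over step doubling.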

The imbedded formula pairs developed above can be thought of as being in the family (p + 1, p); however, only one member of each pair is Strongly-A-Stable. If the family (p + 1, p - 1) or (p + 1, p - 2) is considered, then it is possible to have both members Strongly-A-Stable and retain the imbedding concept for error estimation and stepsize adjustment. To illustrate this feature we return to the Michelsen semi-implicit form (20) and (24). Explicitly this is

    y_{n+1} = y_n + R_1k_1 + R_2k_2 + R_3k_3   (third order)
    y_{n+1} = y_n + R_1'k_1 + R_2'k_2   (first order)
    y_{n+1} = y_n + R_1''k_1   (zeroth order)   (39)

Now we ask for a first-order imbedded formula of the type y_{n+1} = y_n + R_1'k_1 + R_2'k_2, whose characteristic root is

    ρ(hλ) = [1 + t_1(hλ) + t_2(hλ)²] / (1 - a_1hλ)²

where

    t_1 = -2a_1 + (R_1' + R_2')
    t_2 = a_1² - (R_1' + R_2')a_1 + R_2'b_2

To ensure Strong-A-Stability it is necessary that t_2 = 0; then matching terms with a Taylor series through first-order terms yields R_1' + R_2' = 1. From these two requirements we find that

    R_1' = 0.6721...;  R_2' = 0.3278...

We could have even asked for a zeroth-order imbedded formula of the form

    y_{n+1} = y_n + R_1''k_1

with k_1 and k_2 identical to those above, whose characteristic root is

    ρ(hλ) = [1 + z_1(hλ)] / (1 - a_1hλ)

For Strong-A-Stability, z_1 = 0, or R_1'' = a_1. Thus we have a family of Strongly-A-Stable methods of the forms (39), where the k_i are identical with Michelsen, each formula is Strongly-A-Stable, and the parameters are given by

    a_1 = 0.4358...;  b_2 = 0.75
    b_31 = -0.6302...;  b_32 = -0.2423...
    R_1 = 1.0376...;  R_2 = 0.8349...;  R_3 = 1.0
    R_1' = 0.6721...;  R_2' = 0.3278...;  R_1'' = a_1   (40)

We have also developed a number of other imbedded pairs whose maximum order is third order. Without presenting the details (see Chan, 1977), we merely cite the results here. The approach used in the development of these formulas is like that already given. One such family is

    y_{n+1} = y_n + R_1k_1 + R_2k_2 + R_3k_3   (third order)
    y_{n+1} = y_n + R_1'k_1 + R_2'k_2 + R_3'k_3   (first order)

where

    k_1 = h[I - ha_1J]^-1 f(y_n)
    k_2 = h[I - ha_1J]^-1 f(y_n + b_2k_1)
    k_3 = [I - ha_1J]^-1 (b_31k_1 + b_32k_2)   (41)

for which each member is Strongly-A-Stable and

    a_1 = 0.4358...;  b_2 = 0.5641...
    b_31 = -0.1035...;  b_32 = 0.1035...
    R_1 = 0.1756...;  R_2 = 0.8243...;  R_3 = 1.0;  R_3' = 1.0   (42)

Another family developed is given by (41), but with the third-order member using only k_1 and k_3 and with the imbedded lower order form being second order instead of first order. As a result, this lower order form is only A-Stable:

    y_{n+1} = y_n + R_1k_1 + R_3k_3   (third order)
    y_{n+1} = y_n + R_1'k_1 + R_2'k_2 + R_3'k_3   (second order)

The parameter values found were

    a_1 = 0.4358...;  b_2 = -1.8334...
    b_31 = 0.5812...;  b_32 = 0.1238...
    R_1 = a_1;  R_3 = 0.8
    R_1' = 0.2860...;  R_2' = 8.7423... × 10^-3;  R_3' = 1.0   (43)


We have also found another set of parameters, by a complicated search procedure, to replace (43); these are

    a_1 = 0.4358...;  b_2 = -1.8339...
    b_31 = 0.8455...;  b_32 = 0.1801...
    R_1 = a_1;  R_3 = 0.55
    R_1' = -3.4308... × 10^-2;  R_2' = 2.8611... × 10^-2;  R_3' = 1.0   (44)

Finally, we have developed a family whose maximum order is fourth order and for which each imbedded (and the original) member is Strongly-A-Stable. The family is given by

    y_{n+1} = y_n + R_1k_1 + R_2k_2 + R_3k_3 + R_4k_4   (fourth order)
    y_{n+1} = y_n + R_1'k_1 + R_2'k_2 + R_3'k_3   (second order)
    y_{n+1} = y_n + R_1''k_1 + R_2''k_2   (first order)
    y_{n+1} = y_n + R_1'''k_1   (zeroth order)   (45)

where all the k_i are obtained from

    k_1 = h[I - ha_1J]^-1 f(y_n)
    k_2 = h[I - ha_1J]^-1 f(y_n + b_2k_1)
    k_3 = h[I - ha_1J]^-1 f(y_n + b_3k_2)
    k_4 = [I - ha_1J]^-1 (b_41k_1 + b_42k_2 + b_43k_3)   (46)

Using exactly the same principles, i.e., examining the characteristic root of each formula to ensure Strong-A-Stability, appropriate Taylor series matching, and selection of roots of the polynomial equations which arise, a unique set of parameters has been found. These are

    a_1 = 0.2203...
    b_2 = 0.8807...;  b_3 = 0.1982...
    b_41 = 3.5458... × 10^-2;  b_42 = -1.4512...;  b_43 = -0.1656...
    R_1 = -1.0085...;  R_2 = 1.7985...;  R_3 = 1.7913...;  R_4 = 1.0
    R_1' = 0.6412...;  R_2' = 0.2227...;  R_3' = 0.1360...
    R_1'' = 0.8049...;  R_2'' = 0.1950...
    R_1''' = a_1   (47)

With all of these formulas in hand, we can now identify those which will be referred to later for computer evaluation. This information is summarized in Table I. Further details on the derivations can be found in Chan (1977).

Table I. Identification of Different Imbedded Forms

    method    higher order form    lower order form     equations
    name      order   class^a      order   class^a
    STEP      3       SA           -       -            Michelsen (24)
    IMB       3       SA           2       A            (34)
    IMB1      3       SA           2       A            (41), (43)
    IMB2      3       SA           2       A            (41), (44)
    IMB3      3       SA           1       SA           (41), (42)
    IMB4      3       SA           1       SA           (39), (40)
    IMB5      4       SA           2       SA           (45)-(47)

    a A = A-Stable; SA = Strongly-A-Stable.

Numerical Test Systems

In this section we present a number of systems which have been used by the present authors to test the numerical efficiency of the various algorithms. No detailed derivations are given since adequate references are provided. These test systems cover the following structural changes: (1) the system dimension (or the number of dependent variables) can be varied; (2) the stiffness ratio (SR) can be varied; (3) the eigenvalues can be real or complex; (4) linear and nonlinear behavior can be developed; (5) the stiff variables can change as the integration proceeds.

System I. Ozone Decomposition (Bowen et al., 1963; Aiken and Lapidus, 1974).

    ε(dy_2/dt) = y_1 - y_1y_2 - εκy_2;  y_2(0) = 0, κ = 3.0
    t_0 = 0 → t_f = 50
    SR = 10^2

System II. Chemical Kinetics (Robertson, 1967; Seinfeld et al., 1970).

    dy_1/dt = -k_1y_1 + k_2y_2y_3;  y_1(0) = 1
    dy_2/dt = k_1y_1 - k_2y_2y_3 - k_3y_2²;  y_2(0) = 0
    dy_3/dt = k_3y_2² = 3 × 10^7 y_2²;  y_3(0) = 0
    t_0 = 0 → t_f = 400
    SR = 10^4

with k_1 = 0.04, k_3 = 3 × 10^7, and k_2 variable. As k_2 increases, the SR can be increased. The Jacobian matrix J is given by

    J = [ -k_1    k_2y_3              k_2y_2
          k_1    -k_2y_3 - 2k_3y_2   -k_2y_2
          0      2k_3y_2             0      ]
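For reference, System II in this form is easy to set up and sanity-check; the sketch below (our own, with k_2 set to 10^4 purely for illustration) verifies the analytic Jacobian against a central finite-difference approximation:

```python
import numpy as np

K1, K2, K3 = 0.04, 1.0e4, 3.0e7   # k2 is the adjustable rate constant

def f(y):
    """Robertson kinetics right-hand side."""
    y1, y2, y3 = y
    return np.array([-K1 * y1 + K2 * y2 * y3,
                     K1 * y1 - K2 * y2 * y3 - K3 * y2**2,
                     K3 * y2**2])

def jac(y):
    """Analytic Jacobian, differentiating f term by term."""
    y1, y2, y3 = y
    return np.array([[-K1, K2 * y3, K2 * y2],
                     [K1, -K2 * y3 - 2 * K3 * y2, -K2 * y2],
                     [0.0, 2 * K3 * y2, 0.0]])

# Central finite-difference check of the Jacobian at a representative state.
y = np.array([1.0, 1e-5, 1e-2])
eps = 1e-8
J_fd = np.column_stack([(f(y + eps * e) - f(y - eps * e)) / (2 * eps)
                        for e in np.eye(3)])
print(np.max(np.abs(J_fd - jac(y))))   # small: analytic J matches
```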


System III. Belousov Reaction (Field and Noyes, 1974).

    dy_1/dt = 77.27[y_2 + y_1(1 - 8.375 × 10^-6 y_1 - y_2)];  y_1(0) = 4
    dy_2/dt = [y_3 - (1 + y_1)y_2]/77.27;  y_2(0) = 1.1
    dy_3/dt = 0.161(y_1 - y_3);  y_3(0) = 4
    SR = 10^6

This system exhibits a limit cycle (oscillatory) behavior. The Jacobian matrix of this system is

    J = [ 77.27(1 - y_2 - 1.675 × 10^-5 y_1)   77.27(1 - y_1)      0
          -y_2/77.27                           -(1 + y_1)/77.27    1/77.27
          0.161                                0                   -0.161  ]

such that at t = 0

    J = [ -7.732           -231.81          0
          -1.424 × 10^-2   -6.47 × 10^-2    1.294 × 10^-2
          0.161            0                -0.161        ]

System IV.

    k = 0.0006 exp[20.7 - 15 000/y_1]
    t_0 = 0 → t_f = 500

System V. Dispersion-Convection (Sincovec and Madsen, 1975; Hindmarsh and Byrne, 1976). Starting with

    ∂y/∂t = ∂²y/∂x² - c(∂y/∂x)   (c = constant > 0)

and

    y(0, t) = 1;  t > 0
    ∂y(1, t)/∂x = 0;  t > 0
    y(x, 0) = 0;  0 < x ≤ 1
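The J(t = 0) entries above can be cross-checked directly from the System III right-hand side. In the sketch below (our own), the initial state y = (4, 1.1, 4) is the one that reproduces the quoted entries:

```python
import numpy as np

def oregonator_jac(y):
    """Jacobian of the Field-Noyes (Belousov) system as written in the text."""
    y1, y2, y3 = y
    return np.array([
        [77.27 * (1 - y2 - 1.675e-5 * y1), 77.27 * (1 - y1), 0.0],
        [-y2 / 77.27, -(1 + y1) / 77.27, 1 / 77.27],
        [0.161, 0.0, -0.161]])

# With y = (4, 1.1, 4) this reproduces the J(t = 0) entries quoted above,
# e.g. J[0,0] = -7.73..., J[0,1] = -231.81, J[1,2] = 1.294e-2.
J0 = oregonator_jac(np.array([4.0, 1.1, 4.0]))
print(J0[0, 0], J0[0, 1], J0[1, 2])
```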

Table III. [Constant step-size results for methods I3-I11; the grid of this table did not survive extraction.] The surviving column heads are: largest stepsize with stable solutions, and computing times in s (tf = 10) at h = 1.0, h = 0.1, and h = 0.01. Surviving entries mark each method as stable (with percent errors ranging from negligible to >200%), unstable, or not convergent; the largest stable stepsizes range from about 10⁻⁴ to 1.0, the computing times from 0.05 to 21.25 s, and some runs reached only tf = 1.0.

a Negligible error = NE. b The WATFIV compiler was used for these calculations. Hence computing time is much larger than in subsequent computations where a more efficient compiler was used.
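Unstable entries of this kind have a simple origin on the scalar test equation y' = λy: a method's per-step amplification factor must stay at or below 1 in magnitude. The explicit Euler bound below is the simplest illustration of the stepsize ceiling a fast eigenvalue imposes (the methods of Table III have larger stability regions, and the A-stable ones none at all; the value of λ is illustrative):

```python
lam = -1.0e4                          # a fast eigenvalue of a stiff system

def euler_growth(h, lam):
    """Per-step amplification factor of explicit Euler on y' = lam * y."""
    return abs(1.0 + h * lam)

h_limit = 2.0 / abs(lam)              # stability bound: h <= 2e-4
stable = euler_growth(0.9 * h_limit, lam)     # < 1: the step decays
unstable = euler_growth(1.1 * h_limit, lam)   # > 1: the step blows up
```

With λ = -10⁴ the bound is h ≤ 2 × 10⁻⁴, which must be respected over the whole interval even after the fast transient has died out.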

Table IV. Numerical Results for Various Algorithms in a Constant Step Size Mode. System III, h = 0.01, x = 1.40-4.45

method   computing time,a s   funct. eval.   Jacob. eval.   matrix-matrix products   matrix-vector products   matrix inversions
I1            8.54               1818            606                 0                      606                    606
I2           17.21               2131           1827                 0                     1827                   1827
I3            4.68                305            305               305                      610                    305
I4            6.28               1133            305               305                     1654                    305
I5            5.21                610            305                 0                     1220                    305
I6            3.90                610            305                 0                      915                    305
I7           15.46               2872           2871              3828                        0                    957
I8            6.91                456             61               122                      517                    123
I9           15.71               1224            305              3672                     3672                    610
I10           9.46               1878           1268               634                      634                    634
I11            -                   -              -                  -                        -                      -

a WATFIV compiler. See footnote b in Table III.

Next we apply the different methods to System III, which is three-dimensional and operates in a limit cycle oscillation. Because of the latter condition there are regions of slow, smooth response and regions of fast nonlinear response, i.e., where y1, y2, and y3 are changing very rapidly. The most drastic range is from x = 1.40 to x = 4.45, where, at the start of the range,

y1 = 59.20219027
y2 = 0.92478024
y3 = 6.272424679

and within this range

x = 1.61 to x = 1.68 yields the maximum rate of change of y2
x = 4.37 to x = 4.44 yields the maximum rate of change of y1

Table IV lists the computing time, together with such items as function calls, Jacobian evaluations, etc., over x = 1.40 to x = 4.45 with a fixed stepsize of h = 0.01. In this case I3, I5, and I6 seem to be the best methods. Tables V and VI give the percent error of each method in the sub-ranges x = 1.61 to x = 1.68 and x = 4.37 to x = 4.44 for y2 and y1, respectively. These values were obtained with h = 0.01 and compared to a fourth-order Runge-Kutta solution using h = 0.001 and/or h = 0.0001. From these data I7 performs the best (note that some of the methods generate negative numbers); this method is Cash's single-term implicit

(Comments column: two of the methods need three extra points to start; one needs two extra points to start; one run reached only tf = 2.01; one solution blew up after 36 steps.)

method which does, however, require iteration in the solution. It is apparent that the semi-implicit methods in a fixed stepsize environment are quite competitive with the other methods; in fact, they are better than most of them. When one considers that no iterations are required and that the semi-implicit methods are simple to program, they rank among the best algorithms. A number of numerical experiments were also carried out using the various algorithms in Table II in a variable stepsize environment. In addition, Cash's (1976) semi-implicit method with imbedded error estimation, (26)-(27), was included. In cases where a specific stepsize adjustment procedure had been given with an algorithm, it was used. In other cases the extrapolation or one step-two step process was employed. In most of the cases a relative error criterion, |(y1 - y2)/y1|, was compared to a preset error tolerance; this allowed the stepsize to be adjusted up or down. When the variables are very small, however, an absolute error tolerance, |y1 - y2|, was used instead. When necessary, linear interpolation was used to estimate the dependent variables at points between those actually calculated. From these computations I5 and I6 gave the best (most efficient) results. I7, which had performed so well in the fixed stepsize mode, required more operations than I5 or I6 but was able to take larger steps. Thus the computing times are comparable. However, in both Systems II and III, as the computation progressed, the large stepsizes eventually led to com-


Table V. Percent Errors on y2 for System III, h = 0.01 and x = 1.61-1.68

method    1.61    1.62    1.63    1.64    1.65    1.66    1.67    1.68    av
I1         5.6     8.9    20.4    46.7     2.6    27.8    12.6    29.0
[The remaining rows (I2-I10) were scrambled in extraction; surviving entries range from negligible (notably for I7, the best performer) to above 100% for the multistep methods, and some entries are negative numbers.]
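The error percentages in these tables are measured against a fourth-order Runge-Kutta solution run at h = 0.001 or 0.0001, as noted above. A sketch of such a reference computation, on an illustrative test equation rather than the paper's systems:

```python
import math

def rk4_solve(f, y0, t0, tf, h):
    """Integrate y' = f(t, y) with the classical fourth-order
    Runge-Kutta formula at a fixed stepsize h."""
    t, y = t0, y0
    for _ in range(round((tf - t0) / h)):
        k1 = f(t, y)
        k2 = f(t + h / 2, y + h / 2 * k1)
        k3 = f(t + h / 2, y + h / 2 * k2)
        k4 = f(t + h, y + h * k3)
        y = y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        t += h
    return y

f = lambda t, y: -y                        # illustrative test equation
coarse = rk4_solve(f, 1.0, 0.0, 1.0, 0.01)
fine = rk4_solve(f, 1.0, 0.0, 1.0, 0.001)
pct_error = abs(coarse - fine) / abs(fine) * 100.0
```

Because the method is fourth order, a tenfold stepsize reduction shrinks the error by roughly 10⁴, so the fine-step run can serve as the "exact" comparison value.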

Table VI. Percent Errors on y1 for System III, h = 0.01 and x = 4.37-4.44
[The body of this table was scrambled in extraction. Surviving entries show the errors of I1 and I2 growing from roughly 10-300% at x = 4.37-4.39 to order 10³-10⁴% (or negative/divergent values) near x = 4.40-4.42, while the better semi-implicit methods remain below a few percent over most of the range.]

Table VII. Algorithms I5 and I6 in a Variable Step-Size Mode on System III. Error Tolerance = 10⁻³

percent errors
x       I6      I5          x       I6      I5
1.61    0.20    0.40        4.37    0.03    0.30
1.62    0.06    0.30        4.38    0.19    0.73
1.63    0.47    5.6         4.39    0.44    2.0
1.64    3.5     0.80        4.40    0.95    5.0
1.65    0.20    1.7         4.41    0.42    6.7
1.66    0.18    0.06        4.42    0.13    0.33
1.67    0.14    0.07        4.43    0.01    0.04
1.68    0.05    0.24        4.44    0.06    0.03
Av      0.60    1.15        Av      0.28    1.89

                                          I6      I5
execution time (s),b from x = 1.4-10.0    5.78    9.82
function calls                             830    1202
Jacobian evaluations                       415    1202
matrix-matrix products                       0       0
matrix-vector products                    1245    2404
matrix inversions                          415     601

a Interrelated values. b WATFIV compiler. See footnote b of Table III.

pletely inaccurate results and a blowing up of the calculation. Other methods such as I4 and Cash’s semi-implicit method with its imbedded error formula required extremely small stepsizes and thereby led to an inefficient computation.
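A minimal sketch of the acceptance test used in these variable-stepsize runs; the two estimates y1 and y2 would come from the one step-two step (or imbedded) computation, and the switching threshold and adjustment factors are illustrative assumptions:

```python
def accept_and_adjust(y1, y2, h, tol, small=1.0e-6, grow=2.0, shrink=0.5):
    """Compare two estimates of the same point.  Use the relative
    criterion |(y1 - y2)/y1| unless the variable is very small, in
    which case fall back to the absolute criterion |y1 - y2|.
    Returns (accepted, new_stepsize)."""
    err = abs(y1 - y2)
    if abs(y1) > small:
        err = err / abs(y1)
    if err > tol:
        return False, h * shrink            # reject the step, retry smaller
    if err < 0.1 * tol:
        return True, h * grow               # accept and enlarge the step
    return True, h                          # accept, keep the step

ok, h_new = accept_and_adjust(1.0, 1.0004, h=0.1, tol=1.0e-3)
```

Capping `grow`, as done here, is one guard against the runaway stepsizes that caused the blow-ups described above.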


As an illustration of the two most efficient methods, I5 and I6, we give in Table VII some results in the rapidly changing regions of System III and an analysis of computing times, function calls, Jacobian evaluations, etc., for the integration x = 1.4 to x = 10. These results are based upon the one step-two step error estimation. While there is little to choose in terms of error, I6 is considerably more efficient than I5, i.e., it requires only about 60% of the computing time. These preliminary results support the contention that semi-implicit Runge-Kutta methods such as Michelsen's (I6) are completely competitive with, if not better than, most other algorithms in the literature for solving stiff differential equations. Now we turn to a detailed analysis of the efficiency of I6 vs. the newer imbedded error monitoring pairs discussed in a previous section of this paper.

Semi-Implicit Runge-Kutta Numerical Results
Table VIII summarizes a small fraction of the numerical results obtained with the various semi-implicit Runge-Kutta methods of Table I; all of the numerical test systems previously described are used. The terminology TOL indicates the error tolerance used to monitor the appropriate stepsizes. Shown are the computed values of the variables, the percent error in each variable with a dash (-) meaning essentially zero error, the number of function calls, Jacobian evaluations, matrix inversions, matrix-vector products, and the computing time in seconds. When the system to be integrated is essentially nonstiff and of small dimension, System I, the differences between the computing times for the various methods are small. However, if the number of function calls, Jacobian evaluations, etc., is considered, it can be seen that the (p + 1, p) imbedded formulas are more efficient than the one step-two step algorithm, STEP.
Furthermore, if the accuracy of the various methods is considered, the (p + 2, p) imbedded formulas, even with the error tolerance TOL relaxed by an order of magnitude, are also more efficient than STEP. The same behavior can be observed from the results on System II. As the stiffness increases,

Table VIII
A. System III. Belousov Reaction with tf = 300.0

tolerance   method   value of y1 and % error   no. of funct. eval.   no. of Jacob. calls   no. of matrix inver.   no. of matrix products   comp. time
10⁻³        STEP     4.41881    0.01               1316                  658                   658                    1974                   1.18
            IMB      4.42209    0.10               1178                  589                   589                    1767                   1.53
            IMB1     4.43018    0.27                664                  332                   332                     996                   0.82
            IMB2     4.42251    0.09               1088                  544                   544                    1632                   1.33
10⁻²        STEP     4.40997    0.19                944                  472                   472                    1416                   1.01
            IMB      4.47225    1.22                464                  232                   232                     696                   0.63
            IMB1     3.77362   14.6                 304                  152                   152                     456                   0.39
            IMB2     4.41622    0.04                452                  226                   226                     678                   0.56
            IMB3     4.42360    0.12                702                  351                   351                    1053                   0.89
            IMB4     4.41927    0.02                828                  414                   414                    1242                   1.01
            IMB5     4.41903    0.02                864                  288                   288                    1152                   0.77
10⁻¹        IMB3     4.29186    2.86                252                  126                   126                     378                   0.34
            IMB4     4.48478    1.50                282                  141                   141                     423                   0.37
            IMB5     4.36146    1.28                399                  133                   133                     532                   0.38
exact                4.41830

B. System VI. Method of Lines Applied on a Parabolic System with tf = 0.04

tolerance   method   value of y3 and % error   no. of funct. eval.   no. of Jacob. calls   no. of matrix inver.   no. of matrix products   comp. time
10⁻³        STEP     0.11899    0.58               128                   64                    64                     192                    7.62
            IMB      0.11913    0.46               118                   59                    59                     177                    7.34
            IMB1     0.11922    0.38                42                   21                    21                      63                    2.77
            IMB2     0.11943    0.21                66                   33                    33                      99                    4.20
10⁻²        STEP     0.11687    2.35                56                   28                    28                      84                    3.45
            IMB      0.11785    1.53                44                   22                    22                      66                    2.91
            IMB1     0.11835    1.11                20                   10                    10                      30                    1.26
            IMB2     0.11879    0.74                28                   14                    14                      42                    1.79
            IMB3     0.11879    0.74                34                   17                    17                      51                    2.07
            IMB4     0.11855    0.94                38                   19                    19                      57                    2.26
            IMB5     0.11857    0.92                57                   19                    19                      76                    2.30
10⁻¹        IMB      0.11511    3.82                22                   11                    11                      33                    1.39
            IMB1     0.11731    1.98                14                    7                     7                      21                    0.91
            IMB2     0.11691    2.31                16                    8                     8                      24                    1.02
            IMB3     0.11438    4.43                16                    8                     8                      24                    0.91
            IMB4     0.11306    5.53                18                    9                     9                      27                    1.10
            IMB5     0.11618    2.92                27                    9                     9                      36                    1.17
exact                0.11968


Table VIII (continued)

C. System VIII. Linear System with Imaginary Eigenvalues with tf = 5.0

tolerance   method   value of y1 and % error   no. of funct. eval.   no. of Jacob. calls   no. of matrix inver.   no. of matrix products   comp. time
10⁻²        STEP     125.0154    0.01              1508                  754                   754                    2262                   3.16
            IMB      124.9284    0.05              1176                  588                   588                    1764                   3.01
            IMB2     124.8194    0.14              1178                  589                   589                    1767                   2.86
            IMB3     125.0126    0.02              2322                 1161                  1161                    3483                   5.81
            IMB4     125.0191    0.02              2626                 1313                  1313                    3939                   6.41
            IMB5     124.9963    -                 1878                  626                   626                    2504                   3.35
10⁻¹        STEP     125.2628    0.21               704                  352                   352                    1056                   1.48
            IMB2     124.1735    0.66               608                  304                   304                     912                   1.47
            IMB3     122.7405    1.81               664                  332                   332                     996                   1.70
            IMB4     125.0211    0.02               700                  350                   350                    1050                   1.71
            IMB5     124.9666    0.03               675                  225                   225                     900                   1.29
10⁰         STEP     125.8815    0.71               488                  244                   244                     732                   1.01
            IMB2     116.1982    7.04               122                   61                    61                     183                   0.37
            IMB3     110.0736   11.94               148                   74                    74                     222                   0.45
            IMB4     125.0869    0.06               224                  112                   112                     336                   0.61
            IMB5     124.8016    0.16               243                   81                    81                     324                   0.50
exact                125.0000
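The error estimates examined in Table IX below come from imbedded pairs. Their mechanics can be sketched with the simplest explicit example, a Heun/Euler (2, 1) pair sharing its stage evaluations; this classical pair merely stands in for the semi-implicit pairs of the paper.

```python
import math

def heun_euler_step(f, t, y, h):
    """One step of an imbedded (2, 1) pair: k1 and k2 give both a
    second-order (Heun) and a first-order (Euler) result, and their
    difference estimates the local truncation error of the low-order
    formula at no extra cost."""
    k1 = f(t, y)
    k2 = f(t + h, y + h * k1)
    y_high = y + h * (k1 + k2) / 2.0
    y_low = y + h * k1
    return y_high, abs(y_high - y_low)

f = lambda t, y: -y                       # illustrative test equation
y_new, est = heun_euler_step(f, 0.0, 1.0, 0.1)
actual_low = abs(1.0 + 0.1 * f(0.0, 1.0) - math.exp(-0.1))
# est = 0.005, while the true local error of the Euler result is ~0.0048
```

The estimate costs nothing beyond the stages already computed, which is the whole appeal of the imbedded formulation when the alternative is repeating steps.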

Table IX. Error Estimation on System VII

Rows: STEP (TOL = 10⁻³), IMB (TOL = 10⁻³), IMB2 (TOL = 10⁻³), IMB3 (TOL = 10⁻²), IMB5 (TOL = 10⁻²). For each method the actual and estimated errors are compared at three values of x (roughly x ≈ 0.10-0.25, x ≈ 0.12-0.17, and x ≈ 5.86-5.96). [The error entries themselves, with magnitudes spanning roughly 10⁻⁷ to 10⁻³, are garbled in extraction; in every legible pair the estimated error tracks the actual error closely.]

System IV, the superiority of the imbedded formulas becomes more obvious. In the oscillating limit cycle case, System III, all the methods traverse the slow/fast regions of the cycle with no difficulty. However, the imbedded methods again have a superior efficiency and accuracy. IMB5 gives the best performance even with an error tolerance of TOL = 10⁻¹. For systems with larger dimensions, System V or VI, the distinctions between the nonimbedded and imbedded forms become apparent. Now, of course, the computational effort


required to invert a matrix of dimension ~20 is substantial. Hence the imbedded methods, which do not require extra steps to estimate the truncation error, are more efficient, as the data clearly show. The results on System VII show that the semi-implicit methods can handle complex eigenvalues with moderate ease. However, IMB5 seems clearly the best, especially at low error tolerances. When the error tolerance is stringent, there is always a disproportionate increase in the computing effort (see System I). This behavior is exhibited by all the imbedding methods.
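The arithmetic behind this remark can be sketched directly: a semi-implicit stage requires solving (I - haJ)k = r, and factoring the matrix once per step lets every stage (and every imbedded error estimate) reuse the O(m³) work. The Doolittle factorization below is a self-contained illustration, not the paper's production code:

```python
import numpy as np

def lu_factor(A):
    """Doolittle LU factorization without pivoting (adequate for the
    diagonally dominant matrices I - h*a*J met here)."""
    m = A.shape[0]
    LU = A.astype(float).copy()
    for k in range(m - 1):
        LU[k + 1:, k] /= LU[k, k]
        LU[k + 1:, k + 1:] -= np.outer(LU[k + 1:, k], LU[k, k + 1:])
    return LU

def lu_solve(LU, b):
    """Forward/back substitution with a packed LU factor: O(m^2)."""
    m = LU.shape[0]
    y = b.astype(float).copy()
    for i in range(1, m):
        y[i] -= LU[i, :i] @ y[:i]
    for i in range(m - 1, -1, -1):
        y[i] = (y[i] - LU[i, i + 1:] @ y[i + 1:]) / LU[i, i]
    return y

def semi_implicit_stages(J, stage_rhs, h, a):
    """Factor (I - h*a*J) once per step -- O(m^3) -- then back-substitute
    for each stage at only O(m^2) apiece."""
    LU = lu_factor(np.eye(J.shape[0]) - h * a * J)
    return [lu_solve(LU, r) for r in stage_rhs]

m = 20                                       # the dimension quoted above
J = -np.diag(np.linspace(1.0, 1.0e4, m))     # stiff diagonal test Jacobian
k1, k2 = semi_implicit_stages(J, [np.ones(m), 2.0 * np.ones(m)], h=0.1, a=0.5)
```

At m = 20 the factorization already dominates a step, which is why avoiding the extra steps of one step-two step error estimation pays off so clearly in Table VIII.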


Table X. System VII with σ1 = 1, σ2 = 100, and σ3 Variable; tf = 10

σ3      method   value of y1 at t = 10; % error      no. of function calls   no. Jacobian eval.   no. of matrix inversions   no. of vector-matrix products
10¹⁴    STEP     0.44790 × 10⁻⁴     1.34%                 188                    94                    94                        282
        IMB      0.47689 × 10⁻⁴     5.04%                 572                   286                   286                        858
        IMB2     0.44982 × 10⁻⁴     0.92%                 210                   105                   105                        315
        IMB3     0.45349 × 10⁻⁴     0.11%                 194                    97                    97                        291
        IMB4     0.45225 × 10⁻⁴     0.39%                 218                   109                   109                        327
        IMB5     0.45287 × 10⁻⁴     0.25%                 426                   142                   142                        568
10¹⁵    STEP     0.4460 × 10⁻⁴      2.07%                 536                   268                   268                        804
        IMB      0.30507 × 10⁻⁴    32.80%                2516                  1258                  1258                       3774
        IMB2     0.49711 × 10⁻⁴     9.44%                 234                   117                   117                        351
        IMB3     0.48677 × 10⁻⁴     7.22%                 192                    96                    96                        288
        IMB4     0.45726 × 10⁻⁴     0.72%                 228                   114                   114                        342
        IMB5     0.455211 × 10⁻⁴    0.27%                 837                   279                   279                       1116
10¹⁶    STEP     0.425877 × 10⁻⁴    6.19%                3032                  1516                  1516                       4548
        IMB2     0.10827 × 10⁻³   138.4%                  434                   217                   217                        651
        IMB3     0.53484 × 10⁻⁴    17.8%                  466                   233                   233                        699
        IMB4     0.42068 × 10⁻⁴     7.33%                 454                   227                   227                        681
        IMB5     0.45722 × 10⁻⁴     0.71%                3543                  1181                  1181                       4724
For σ3 = 10¹⁶ and TOL = 10⁻³:
        IMB2     0.46507 × 10⁻⁴     2.44%                1460                   730                   730                       2190

This can probably be explained by the fact that as the tolerance becomes more severe, the algorithms are forced to rely on the lower order (p) formula rather than the higher order (p + 2 or p + 1) formula. As a result, smaller stepsizes have to be used to preserve the same accuracy. For the same tolerance, (p + 2, p) methods require more computational effort than (p + 1, p) methods, but they give more accurate results. In fact, the former tend to have approximately the same efficiency and accuracy as the latter when the error tolerance is one order of magnitude smaller. In some cases the former are even better than the latter with reduced tolerance. There are some further interesting extensions of the use of the imbedding formulas which we wish to point out. Of importance is the question of how the imbedding error estimation technique compares with the more conventional one step-two step approach. To show this feature we use System VII with σ1 = 1, σ2 = 100, and σ3 = 10⁵ and select an error tolerance of 10⁻³ or 10⁻². Since System VII is linear, the exact solution is available, and we adopt the strategy that (1) for each single step of size h the values obtained from the exact solution are used as the initial condition for the next step, and (2) the numerical solutions obtained at the end of a step are compared with the known exact values to check the estimated error. In this way there is no accumulated error in the computation. Table IX shows some results using the various semi-implicit algorithms; the independent variable x is located at different values since the stepsize adjustment is in full operation. It can

be seen that STEP and IMB have a tendency to underestimate the error while IMB2 tends to overestimate the error. If this trend remains true for other systems, IMB2 can be speeded up even more by a form of overrelaxation. We have not, however, pursued this point further. The Strongly-A-Stable imbedded methods estimate the errors in a superb fashion. The question also arises as to whether the current imbedding algorithms can handle increased stiffness or whether there is an upper bound on the stiffness ratio beyond which the algorithms become inoperative. The results given here to answer this question probably apply to all such algorithms. Table X shows results for System VII (linear) with σ1 = 1, σ2 = 100, and σ3 covering the range 10⁶ to 10¹⁷; as σ3 increases, the system becomes stiffer. In Table X the interval of integration is x = 0 to xf = 10 at a fixed TOL. The numerical solution at the end point of integration is used as an indication of the accuracy of the algorithm, and the number of Jacobian and function evaluations can be used as a measure of integration efficiency. As can be seen, as the value of σ3 is increased, the methods become less accurate and lose their efficiency. For σ3 = 10¹⁷ all the methods diverge; this holds true even when TOL is decreased to 10⁻³. However, for σ3 = 10¹⁶ and TOL = 10⁻³, IMB2 does generate an answer in which the error is suppressed to 2.44%, but at the cost of considerably increased computer time. Generally, for the same error tolerance, IMB5 requires less

Ind. Eng. Chem. Fundam., Vol. 17, No. 3, 1978

Table XI. System VII, Single Precision, σ3 Variable, xf = 10, TOL Fixed

σ3      method   function calls   Jacobian evaluations   matrix-vector products   matrix inversions
10⁵     STEP          266               133                    399                     133
        IMB           860               430                   1290                     430
        IMB2          116                58                    174                      58
10⁶     STEP         1190               595                   1785                     595
        IMB          7152              3576                  10729                    3576
        IMB2          202               101                    303                     101
10⁷     STEP         8414              4207                  12621                    4207
        IMB            -                 -                      -                       -
        IMB2          926               463                   1389                     463

The y1 values at t = 10 and their percent errors survive only in reading order: 0.45506 × 10⁻⁴; 0.34795 × 10⁻⁴; 0.4430 × 10⁻⁴, 2.42%; 0.53783 × 10⁻⁴; >100% err.; 0.4552 × 10⁻⁴, 0.27%; 0.23979 × 10⁻⁴, 48%; failed; >100% err.

work than IMB4 although the number of function evaluations is higher. This can be a major asset when a very large dimension system is being integrated. Although System VI is fairly large (dim = 19), the overall interval of integration may be too small to allow a serious differentiation between the methods. However, as the stiffness ratio increases (see Table X), IMB4 definitely performs better than IMB5. Further, the (p + 2, p) methods exhibit superior quality as the stiffness increases. IMB3, in particular, behaves in an ideal fashion since the number of steps does not increase proportionately to the stiffness. From these results we see that there is an upper bound on stiffness beyond which the present integration methods will not work. This bound is not a function of the methods but rather of the ill-conditioning of the system. Any algorithm of the present type will encounter this breakdown. It is evident that when the stiffness becomes sufficiently large, methods which decompose the system into stiff and nonstiff groups, i.e., singular perturbation techniques as in Aiken and Lapidus (1974), may be the only feasible approach. Preliminary results, not given here, indicate that indeed such singular perturbation methods become very competitive as the stiffness of the system increases. Finally we turn to the influence of the precision of the computer itself. All the above results use double precision on an IBM 360/91 computer; this is equivalent to carrying about 16 digits in the calculation. If we restrict the calculation to single precision (about eight digits), the results of Table XI are obtained (System VII, σ3 variable, and fixed TOL). When compared to Table X for σ3 = 10⁶, it is obvious that single precision restricts the applicability of the methods (this would of course apply in the same way to any numerical algorithm).
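The single-precision failure mode is easy to reproduce: with only about eight significant digits, a slow component of size 1 is lost outright against a fast scale of 10⁸. NumPy's float32/float64 stand in here for the single- and double-precision words of the IBM 360/91.

```python
import numpy as np

# Adding a slow-mode increment of size 1 to a fast-mode scale of 1e8:
# float64 (~16 digits) resolves it; float32 (~8 digits) loses it entirely,
# since the spacing between adjacent float32 numbers near 1e8 is 8.
big, slow = 1.0e8, 1.0
kept64 = (np.float64(big) + np.float64(slow)) - np.float64(big)
kept32 = (np.float32(big) + np.float32(slow)) - np.float32(big)
```

`kept64` is 1.0 while `kept32` is 0.0: exactly the widely separated scales that define a stiff system are the ones single precision cannot hold simultaneously.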

+

Conclusions and Significance The present paper combines the concepts of semi-implicit Runge-Kutta methods with an imbedding overlay to develop a new family of algorithms suitable for stiff differential equations. Numerical comparisons with other proposed methods in the literature demonstrate that the semi-implicit Runge-Kutta methods are intrinsically efficient. A simple comparison of the imbedded forms allows an estimate of the local truncation error and, in turn, an almost trivial adjustment of the integration stepsize. Derivation of the family of algorithms is presented together with numerical results obtained from a variety of linear and nonlinear systems with varying degrees of stiffness as well as dimension. It is observed that the imbedding formulation is most effective when the systems are very stiff and of large dimension. Furthermore, the algorithm is simple, easy to implement, and extremely efficient. As such, it offers a new alternative to solving stiff


differential equations. As a final note, no attempts have been made to compare the present algorithms with elaborate program packages, because the latter have optimized their operations in every way (e.g., matrix inversions) while the former have not. Hence, it would seem unfair to do so at the present stage of development.

Nomenclature
ai, aij = coefficients of Runge-Kutta processes
A = matrix containing elements aij
AL = lower triangular matrix
AU = upper triangular matrix
b = coefficients of Runge-Kutta processes
B = constant coefficient matrix
ci = coefficients of Runge-Kutta processes
D = diagonal matrix
f = first derivative of dependent variable
h, hi = stepsize
J = Jacobian matrix
k = reaction constant
ki = functions of system evaluated at intermediate points
m = dimension of system of equations
n = step number
p = order of accuracy of numerical methods
Ri*, Ri', Ri", Ri''' = weight factors of Runge-Kutta processes
SR = stiffness ratio
t = independent variable
Tn+1 = truncation error at (n + 1)st step
u = number of stages of a Runge-Kutta process
x = independent variable
x0 = starting value of independent variable
y = dependent variable
y0 = initial condition of dependent variable
zi = polynomials in the characteristic root

Greek Letters
ε = constant coefficient for System I
κ = constant coefficient for System I
λ = eigenvalue of the system
λmax = maximum eigenvalue of the system
μ = characteristic root of the numerical method
σi = constant coefficient for System VII

Literature Cited
Aiken, R. C., Lapidus, L., AIChE J., 20, 368 (1974).
Bickart, T. A., Picel, Z., BIT, 13, 272 (1973).
Birnbaum, I., Ph.D. Thesis, Princeton University, 1976.
Bjurel, G., et al., "Survey of Stiff ODE", The Royal Institute of Technology, Stockholm, Sweden, Rep. MA 70-11 (1970).
Bowen, J. R., et al., Chem. Eng. Sci., 18, 177 (1963).
Butcher, J. C., J. Aust. Math. Soc., 4, 179 (1964).
Caillaud, J. B., Padmanabhan, L., Chem. Eng. J., 2, 227 (1971).
Calahan, D. A., Proc. IEEE, 56, 744 (1968).
Cash, J. R., ACM J., 22 (4), 504 (1974).
Cash, J. R., ACM J., 23 (3), 455 (1976).
Chan, Y. N. I., Ph.D. Thesis, Princeton University, 1977.
Chipman, F. H., BIT, 13, 391 (1973).


Curtis, A. R., J. Inst. Math. Its Appl., 16, 35 (1975).
Curtiss, C. F., Hirschfelder, J. O., Proc. Nat. Acad. Sci. USA, 38, 235 (1952).
Dahlquist, G. G., BIT, 3, 27 (1963).
Ehle, B. L., "A Comparison of Numerical Methods for Solving Certain Stiff ODE", Victoria University, Rep. No. 70 (1974).
Ehle, B. L., Lawson, J. D., J. Inst. Math. Its Appl., 16, 11 (1975).
Enright, W. H., SIAM J. Num. Anal., 11, No. 2 (1974).
Enright, W. H., Hull, T. E., SIAM J. Num. Anal., 13, No. 6, 944 (1976).
Fehlberg, E., NASA Technical Report, NASA TR R-287 (1968).
Fehlberg, E., NASA Technical Report, M-256 (1969).
Field, R. J., Noyes, R. M., J. Chem. Phys., 60, 1877 (1974).
Finlayson, B. A., "The Method of Weighted Residuals and Variational Principles", Academic Press, New York, N.Y., 1972.
Hindmarsh, A. C., Byrne, G. D., "Applications of EPISODE: An Experimental Package for the Integration of Systems of ODE", in "Numerical Methods for Differential Systems", L. Lapidus and W. E. Schiesser, Ed., p 147, Academic Press, New York, N.Y., 1976.
Klopfenstein, R. W., RCA Rev., 32 (3), 447 (1971).
Kubicek, M., Visnak, K., Acta Univ. Carol. Math. Phys., No. 1-2, 63 (1974).
Lambert, J. D., Sigurdsson, S. T., SIAM J. Num. Anal., 9 (4), 715 (1972).
Lambert, J. D., "Computational Methods in ODE", Wiley, London, 1973.
Lapidus, L., Seinfeld, J. H., "Numerical Solution of ODE", Academic Press, New York, N.Y., 1971.
Lapidus, L., et al., "The Occurrence and Numerical Solution of Physical and Chemical Systems Having Widely Varying Time Constants", International Symposium on Stiff Differential Systems, Wildbad, Germany, 1973.
Liniger, W., Willoughby, R., SIAM J. Numer. Anal., 7, 47 (1970).
Luss, D., Amundson, N. R., AIChE J., 14, 211 (1968).

Madsen, N. K., Sincovec, R. F., "General Software for Partial Differential Equations", in "Numerical Methods for Differential Systems", p 229, L. Lapidus and W. E. Schiesser, Ed., Academic Press, New York, N.Y., 1976.
Michelsen, M. L., AIChE J., 22, 594 (1976).
Nosrati, H., Math. Comp., 27, No. 122, 267 (1973).
Rao, C. P., Iyengar, S. R. K., J. Inst. Math. Appl., 1, 281 (1976).
Robertson, H. H., "Solution of a Set of Reaction Rate Equations", in "Numerical Analysis", p 178, J. Walsh, Ed., Thompson Book Co., Washington, D.C., 1967.
Rosenbrock, H. H., Comput. J., 5, 329 (1963).
Schiesser, W. E., Lapidus, L., Ed., "Numerical Methods for Differential Systems", Academic Press, New York, N.Y., 1976.
Seinfeld, J. H., et al., Ind. Eng. Chem. Fundam., 9, 266 (1970).
Shampine, L. F., et al., "Comparison of Some Codes for the Initial Value Problem for ODE", in "Numerical Solutions of Boundary Value Problems for ODE", p 349, A. K. Aziz, Ed., Academic Press, New York, N.Y., 1975.
Shampine, L. F., Watts, H. A., "Practical Solution of ODE by Runge-Kutta Methods", Sandia Laboratory Report No. SAND 76-0585 (1976).
Shampine, L. F., Watts, H. A., ACM Trans. Math. Software, 2 (2), 172 (1976).
Sincovec, R. F., Madsen, N. K., ACM Trans. Math. Software, 1, No. 3 (1975).
Watts, H. A., Shampine, L. F., BIT, 12, 252 (1972).
Willoughby, R. A., Ed., "Stiff Differential Systems", The IBM Research Symposia Series, Plenum Press, New York, N.Y., 1974.

Received for review August 10, 1977
Accepted May 3, 1978

Transport Rates by Moment Analysis of Dynamic Data
P. A. Ramachandran and J. M. Smith*
University of California, Davis, California 95616

The evaluation of mass transfer rates and equilibrium constants by moment analysis of dynamic data has been applied to several types of fluid-solid contacting equipment during the past decade. In the first part of this review the moment method is compared with other procedures for analyzing pulse-response data. This is followed by a discussion of the applications of the moment method to different contacting devices. Limitations of the technique and criteria for evaluation of specific rate processes are emphasized.
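The moments referred to in this review are simple integrals of the measured pulse response. A sketch computing the zeroth and first moments from sampled C(t) data (the trapezoidal quadrature and Gaussian test pulse are illustrative choices, not the review's own procedure):

```python
import numpy as np

def trap(y, x):
    """Composite trapezoidal rule for sampled data."""
    return float(((y[1:] + y[:-1]) / 2.0 * np.diff(x)).sum())

def moments(t, c):
    """Zeroth moment m0 = integral of C dt and normalized first moment
    mu1 = integral of t*C dt / m0 (the mean of the response curve)."""
    m0 = trap(c, t)
    mu1 = trap(t * c, t) / m0
    return m0, mu1

t = np.linspace(0.0, 10.0, 2001)
c = np.exp(-(t - 4.0) ** 2 / 0.5)        # illustrative Gaussian pulse
m0, mu1 = moments(t, c)
# mu1 recovers the pulse center at t = 4
```

In practice the moments of the response are compared with those of the input disturbance, so each transport step's contribution appears as a difference of such integrals.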

Analysis of the response to input disturbances has been used for many years to describe mixing characteristics of commercial-scale processing equipment. Such techniques are applied to heat exchangers, absorbers, distillation columns, reactors, etc., and are collectively known as studies of process dynamics. More recently, input-response measurements have been used for another purpose: quantitative evaluation of transport rates in fluid-solid contacting equipment when the mixing characteristics are known. For those rate parameters or constants that are independent of the equipment size, for example, effective diffusivities in catalyst pellets, the experimental measurements can be carried out in laboratory-scale apparatus without loss of applicability. Hence, in the past decade laboratory experiments and data analysis methods have been developed for evaluating both equilibrium and rate constants for mass transport processes occurring in well-known types of fluid-particle equipment. The usual experimental procedure is to introduce a concentration pulse of tracer and measure the response C(t). The methods of analyzing the data are compared at the beginning of this review. Of the various procedures, the one based upon evaluating the difference in moments of the response and input disturbances has the advantage of displaying in explicit, analytical equations the effect of the rate constants of each separate transport process. Further, for some types of equipment the contributions of the separate transport steps to the moments are additive. Hence, the major part of the review is devoted to de-

scribing the applications of the moment method to various types of contacting equipment. Emphasis is placed on the limitations of the method and some criteria are suggested for establishing the suitability of the moment procedure.

Methods of Analysis
The main methods of parameter estimation are summarized in Table I. It is important to note at first some general limitations. All methods except curve fitting in the time domain are restricted to linear processes. This places restrictions on the type of processes that can be studied and on experimental conditions. For example, heat and mass transfer processes are normally first order, but adsorption on catalysts and chemical reactions may not be linear. Sometimes nonlinearities in the latter processes can be circumvented by experimental techniques (see section on fixed-bed adsorbers). Further, limitations of available instruments (detectors) for rapid analysis of the response can restrict the application of all methods. For example, to study multicomponent reactions in various types of reactors, it is necessary to determine the concentration response of one component in the presence of the entire reaction mixture.
A. Curve Fitting in the Time Domain. In this method, the values of the rate and equilibrium parameters are determined by matching the experimental response curve with the theoretical C(t) curve. The latter is obtained by solving the conservation equations employed to describe the transport