Derivation of Elementary Reaction Rate Constants by Means of Computer Modeling

W. C. Gardiner, Jr.

Department of Chemistry, University of Texas, Austin, Texas 78712 (Received July 26, 1978)

Publication costs assisted by the Petroleum Research Fund and the Robert A. Welch Foundation
A brief review is given of the application of computer modeling to explore reaction mechanisms and derive elementary reaction rate constants, or rate constant expressions, for gas reactions. Procedures are described for optimizing the agreement between predictions made with the model and experimental results. Guidelines are proposed for estimating the accuracy of rate constants obtained by modeling. It is shown that worst-case guidelines may be found automatically through the optimization or, approximately, by utilization of the final sensitivity spectrum of the model.
I. Introduction

Computer modeling of chemical reactions has proved to be a powerful method to explore the connection between assumed reaction mechanisms and real-world chemistry.^1 While the original motivation for constructing models for gas reactions was to transform microscopic-level chemical knowledge into useful predictions about processes of practical interest, it was also realized quite early that similar techniques provided new research opportunities in situations where part or all of the important microscopic-level information is not known at the outset. At the present state of development of the science of computer modeling for the latter purpose, it is readily possible for nonspecialists in modeling to make use of general-purpose computer codes for experiment design and data interpretation. Modeling specialists also have improved the speed and flexibility of larger codes to the point that surveys of broad and diverse data bases can be exploited for the purpose of testing large mechanisms for their compatibility with the overall experimental knowledge one has in a given area.^2

In this paper a brief review will be given of how computer modeling has been used in recent gas kinetics studies for experiment design, data interpretation, mechanism development, and derivation of values for elementary reaction rate constants. The question of how one can best utilize modeling techniques for estimating the uncertainties in rate constants so derived, or in the parameters of derived rate constant expressions, is then investigated.

II. Computer Modeling

Computer modeling of a complex chemical reaction is based upon numerical integration of a set of coupled simultaneous ordinary differential equations. The single independent variable is usually time, although distance may serve just as well when dealing with flow systems. As dependent variables one must include the concentrations of all chemical species assumed to change in the course of reaction, and one may have, or choose, to include other variables describing the physical state of the system, such as density or temperature. Usually some auxiliary algebra must be done in parallel with the numerical integration in order to keep up with additional time-varying quantities of interest and to prepare the results of the integration for output; examples would be the absorptivity or refractive index of the gas.

One important characteristic of the sets of differential equations that must be solved for chemical kinetics problems impeded the use of numerical solutions of them for many years. This has to do with the fact that most
reaction mechanisms of interest imply a wide spectrum of rates of individual elementary reactions, giving a so-called "stiff set" of differential equations. Such sets usually prove to demand extremely short integration steps, and hence unacceptably long computation times, when integrated using conventional techniques such as a Runge-Kutta algorithm. Early efforts at modeling realistic reaction mechanisms therefore often required extensive special modifications of the basic differential equation set in order to make modeling work at all, or had to be limited by the computer time demand. A breakthrough in the technique of numerical integration by Gear finally solved the stiffness problem, and most modern programs for computer modeling of chemical reactions include an integration subroutine based upon some subsequent development of Gear's original method.^3

A fortunate aspect of chemical kinetics modeling is that the rate equations directly deducible from the mechanism provide explicit algebraic expressions for the time derivatives of all species concentrations. These derivatives must be constantly reevaluated, several times for each integration step, and one usually finds that a major fraction of the overall computation time is spent reevaluating derivatives. If the temperature changes during the reaction, then rate constants and equilibrium constants (needed to compute reverse reaction rate constants) must also be reevaluated. The consequence of this is that when computer time is a consideration, rationalization of derivative and thermochemistry evaluations is an effective way to conserve it without resorting to truncation of the assumed reaction mechanism. Another is to relax unnecessary requirements as to integration accuracy. Detailed descriptions of the mathematical aspects of computer modeling and a variety of examples of applications may be found in a symposium report.^1
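To make the integration step concrete, the following minimal sketch (not part of the original presentation) integrates a hypothetical two-reaction, three-species mechanism, A → B (slow) followed by B → C (fast), with a backward-differentiation (BDF) integrator of the kind descended from Gear's method; here the SciPy routine solve_ivp is used, and the species, rate constants, and tolerances are illustrative assumptions only.

```python
# Minimal sketch: integrating a small, stiff isothermal mechanism
# A -> B (slow, k1) and B -> C (fast, k2).  The rate constants and
# species are hypothetical; they serve only to illustrate the use of
# a Gear-type (BDF) integrator on a stiff set of rate equations.
import numpy as np
from scipy.integrate import solve_ivp

k1, k2 = 1.0e-2, 1.0e+4   # widely separated rates -> stiffness

def rates(t, c):
    """Time derivatives of the concentrations [A, B, C]."""
    A, B, C = c
    dA = -k1 * A
    dB =  k1 * A - k2 * B
    dC =  k2 * B
    return [dA, dB, dC]

c0 = [1.0, 0.0, 0.0]                       # initial concentrations
sol = solve_ivp(rates, (0.0, 500.0), c0,
                method="BDF",              # backward-differentiation (Gear-type)
                rtol=1e-6, atol=1e-12)
print(sol.y[:, -1])                        # concentrations at the final time
```

With rates separated by six orders of magnitude, an explicit Runge-Kutta method would be forced to step sizes on the order of 1/k2 throughout the run, whereas the BDF integrator takes steps governed by the slow evolution once the fast transient has decayed.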
III. Modeling for Rate Constant Evaluation

The attitude of an experimentalist setting out to incorporate computer modeling into a search for elementary reaction rate constant information is basically as follows. First, there is always an initial concept of what the reaction mechanism should include. There will be some elementary reactions which one feels must be included, some which are of doubtful importance, and some which may be purely speculative. For many of the reactions there will be experimental rate constant information upon which one desires to rely, for others one might have to begin with estimates based only upon theory, and for some one may have information in which one has an intermediate level of confidence. Modeling with this initial rate constant set
serves the very important purpose of providing guidance in discovering the ranges of conditions in which informative experiments can be done. In particular, by testing for the likely observable consequences of variations in the values of the initially assumed rate constant set, one can find out the degree to which one can hope the experiments will provide the rate constant information sought. One important such variation consists of course in adding or subtracting elementary reactions from the mechanism on a trial basis.

After experimental data have been accumulated, the concluding round of computer modeling begins. At this stage one's interest is in formulating a final statement about the interpretation of the data in terms of mechanism and rate constants. In early applications of computer modeling to gas kinetics this was done by comparing various computed profiles to representative experimental profiles. These could be either directly measured ones, such as oscilloscope trace photographs, or inferred ones, such as concentration or temperature profiles. It became apparent, however, that this was an inefficient way to improve one's understanding of the model and the experiments. A much better way was found to be the following.

A kinetics experiment involves observing a smooth change with time (or perhaps distance) in some quantity. This change will usually be characterizable by a function with only one parameter that is "adjustable" to "fit" the observed change. Each experiment, in other words, really provides just a single datum; only in very unusual circumstances will more than two data values be required to characterize the shape of the function which describes whatever transition is observed as reaction proceeds. An experimental kinetics study may be regarded as consisting of accumulating enough such data values to provide, within the random scatter of the measurements, a scientific description of how the observed change depends upon the variables, that is, the conditions that were changed from experiment to experiment in the course of the study. If one considers these data values to be functions of the conditions and locates them in a (hyper)space in which the coordinates are the values of the variables specifying the conditions (temperature, pressure, and concentrations) which were varied during the investigation, then the collection of data values will define, again within the data scatter, a (hyper)surface extending over the investigated range. This surface, just like the originally observed transitions, is a smoothly varying function, now of position in the space of experimental variables. We can therefore characterize it completely by a (probably small) characteristic data set {D_iT} of representative (typical, if not average) values of the experimentally measured quantity at an appropriate selection of points in the space of the experimental variables. If one has in hand a reaction mechanism and set of rate constants (or rate constants as functions of temperature), hereafter referred to as "the model", with which one can predict the characteristic set {D_iT} with acceptable accuracy, then one can expect that the same model will be in satisfactory agreement with all aspects of all the experiments.

This idea can be made clearer with examples. A simple one would be a set of N experiments in which all starting conditions were held constant except for the temperature. If a semilogarithmic graph of the experimental data values vs. 1/T appears to show linear dependence, then one could choose a two-member set {D_iT} consisting of one high-temperature and one low-temperature characteristic value on the semilogarithmic regression line through the data; perhaps a third characteristic D_iT near the midpoint of the 1/T range would turn out to be useful in order to keep a check during model development upon whether the model also suggests a linear dependence.
A more complex example would be one where, in addition to the temperature, two starting concentrations [A]_0 and [B]_0 were also varied during the investigation. Perhaps the data regression indicates a dependence upon these concentrations that can also be characterized by measurements taken at three values of each one. Then the concluding computer modeling study would take for {D_iT} a nine-member set of characteristic points on the regression surface for the experimental data.

In a modeling study one obtains for each characteristic set of conditions a computed value D_iT^m which can be compared to the corresponding characteristic experimental value D_iT at the same conditions. The quality of the model is assessed by seeing how the computed set {D_iT^m} compares to the experimental set {D_iT}, and it is improved by adjusting the unknown or uncertain parameters of the model, usually rate constants, until satisfactory agreement between {D_iT^m} and {D_iT} is attained. The basic working hypothesis of such a modeling study is that once the model has been refined to the point that the characteristic sets {D_iT^m} and {D_iT} are in agreement, all other data will also be satisfactorily accounted for by the model.

The idea of characterizing an entire investigation by a small set of characteristic data values {D_iT} can be extended to characterizing all of the kinetics investigations in which (essentially) the same elementary reactions determine the chemistry. One and the same model can then be used for a superset of characteristic conditions to generate the superset {{D_iT^m}} of characteristic values. If the superset of characteristic conditions provides a richer variety of emphasis in effect among the elementary reactions, then the testing of the parameters of the model is correspondingly more severe. One assumes that once nature's own values of the parameters have been discovered and incorporated into the model, then any remaining disagreements between individual D_iT^m and D_iT values will only reflect undiscovered systematic errors in the experiments.

To complete our notation we need to take account of the fact that it is unlikely that each member of a set {D_iT} or {{D_iT}} will be considered equally reliable. We introduce for this purpose a set of weighting factors {W_iT}, or {{W_iT}}, which will reflect, on the customary 0 ≤ W_iT ≤ 1 scale, one's estimate of the confidence one can place in the accuracy of each member of the characteristic data set.

IV. Model Optimization

We assume in the following, for simplicity of presentation, that the only adjustable parameters in the model are rate constants k_jT for some or all of the elementary reactions. Before proceeding, however, we should note that this may not be the case at all. Assumptions about quite different aspects, such as transport properties, efficiency of mixing of reagents, or connections between computed and measured quantities (e.g., through absorptivities), may also be considered as being variable parameters of the model. Furthermore, the set {k_jT} is by no means a simple collection of parameters to be determined through modeling. There are important constraints imposed by the physical meaning of rate constants. They must in any event be positive and not imply reaction rates faster than the rates of molecular encounters.
In addition, their temperature dependence must be a smooth function and will probably be expressible in Arrhenius form with an activation energy that is compatible with the known thermochemistry of the reaction and with general experience with activation
energies of analogous reactions. We return to this question later.

We then have the mathematical problem of finding the best set {k_jT} based upon {D_iT} and {W_iT}. If one were confident that differences between {D_iT} and {D_iT^m} members arise only through random scatter of uncorrelated measurement errors, then the best set {k_jT} would have to be determined by minimizing the weighted squared relative residual sum

Σ_{i=1}^{N_D} W_iT [1 - D_iT^m/D_iT]^2     (1)
with respect to variations in k_jT, subject to the constraints given above. It is by no means clear that minimizing (1) is also the path to the best {k_jT} when there is reason to believe that other causes, predominantly systematic errors in the experiments and unknown oversimplifications of the model, are responsible for the residuals. Good cases can also be made for minimizing the weighted sums of absolute values of relative residuals or of their logarithms. As far as the present accuracy of gas kinetics data goes, however, it is unlikely that significant differences in {k_jT} would arise through different choices of minimization function.

Given that minimizing (1) or some alternative measure of the difference between {D_iT} and {D_iT^m} is the route to optimizing {k_jT}, one may ask how much effort is required to find a minimum. An automatic minimization program will require, in addition to a starting set {k_jT}, the derivative of each data value with respect to each variable parameter. In the present context this will have to be done by numerical differencing, i.e., by finding

S_ij = ΔD_iT^m / Δk_jT     (2)

or, to bring all derivatives onto a common scale,

ρS_ij = Δ ln D_iT^m / Δ ln k_jT     (3)

Both (2) and (3) are termed sensitivity matrix elements. For N_D elements in {D_iT^m} and N_k elements in {k_jT} there are N_D × N_k elements in {S_ij}, each of which requires two integrations to evaluate. At every cycle of improving the model one therefore has 2 N_D N_k integrations to perform. A typical modeling study might entail, say, ten D_iT^m and five unknown k_jT, or 100 integrations per cycle; if five cycles suffice to reach the minimum of (1) to acceptable accuracy, then 500 integrations would be needed. To complete this task in 1 h of computing time, each integration would have to be done in about 7 s. For a reasonably fast modern integrator and a CDC 6600 computer this would correspond to integrating to near-equilibrium a fairly stiff 50-reaction mechanism with 20 species.

While such an optimization procedure is thus reasonable as far as total demand upon computation resources is concerned, it is not used in practice. Optimization is done instead by a sequence of trial-and-error computations in which only those S_ij elements that are thought to be significant are evaluated. Chemical understanding rather than blind numerical pathfinding is used to improve the model until the modeler is convinced that further fine-tuning is not justified by the quality of {D_iT}. If the final {S_ij} is reported together with {k_jT}, then subsequent improvements in knowledge about k_jT or D_iT values can be utilized for an additional cycle of improvement of the model.^4

An unavoidable difficulty in model optimization is presented by the special role played by temperature, i.e.,
by the fact that k_jT is invariably expressible in a simple mathematical form, usually as ln k = a + b ln T - c/T, where either b or c is usually zero. If only two temperatures are involved and the quality of {D_iT} warrants it, then two-parameter k_j(T) functions can be found directly by substituting final k_jT values into the preferred k_j(T) functions. If {D_iT} is insufficient in quality or size for this purpose, or if several temperatures are involved, then finding k_j(T) functions after finding an optimum set {k_jT} is inefficient. It is more efficient to assume Arrhenius temperature dependence, estimate A factors by empirical methods,^5 and use the initial cycles of optimization to find activation energies. In the final stages of optimization, chemical understanding of the model will have matured to the point that refinements of A factors will also be possible. Expressions analogous to (2), using A_j and E_A,j as parameters rather than k_jT, could in principle also be used as the basis for a stable optimization procedure; however, no trial of such a procedure has yet been reported. If temperature varies during an experiment, then D_iT^m depends upon the assumed temperature dependence, i.e., upon the values of E_A,j of each reaction for which S_ij has an appreciable value. When optimizing a model under these circumstances, the modeler is forced to tune A_j and E_A,j as quasi-independent variables rather than dealing with trial values of k_jT.
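As an illustration of the bookkeeping behind eq 1-3, the sketch below shows one way the weighted residual sum and the logarithmic sensitivity elements could be evaluated by numerical differencing around a trial rate constant set. The function predict, which maps a rate constant set onto the computed characteristic values {D_iT^m}, stands in for a full mechanism integration and is an assumed placeholder, as are the perturbation factor and array shapes.

```python
# Sketch of the numerical-differencing bookkeeping of eq 1-3.  predict(k)
# is a placeholder for a full mechanism integration returning the computed
# characteristic values {D_iT^m}; everything else is illustrative.
import numpy as np

def residual_sum(D_exp, D_mod, W):
    """Weighted squared relative residual sum of eq 1."""
    D_exp, D_mod, W = map(np.asarray, (D_exp, D_mod, W))
    return float(np.sum(W * (1.0 - D_mod / D_exp) ** 2))

def log_sensitivities(predict, k, delta=0.05):
    """Logarithmic sensitivity matrix rho_S[i, j] ~ Delta ln D_i^m / Delta ln k_j
    (eq 3), estimated by central differencing: each k_j is multiplied and then
    divided by (1 + delta), i.e., two integrations per varied rate constant."""
    k = np.asarray(k, dtype=float)
    S = None
    for j in range(k.size):
        kp, km = k.copy(), k.copy()
        kp[j] *= (1.0 + delta)
        km[j] /= (1.0 + delta)
        Dp = np.asarray(predict(kp), dtype=float)
        Dm = np.asarray(predict(km), dtype=float)
        if S is None:
            S = np.empty((Dp.size, k.size))
        S[:, j] = np.log(Dp / Dm) / (2.0 * np.log(1.0 + delta))
    return S
```

A general-purpose minimizer could then be driven by residual_sum, with the positivity and collision-rate constraints noted above imposed, for example, by optimizing over ln k_jT within bounds.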
V. Error Bounds for {k_jT} and {{k_jT}}

A. Error Sensitivity Propagation. We consider first the case that all parameters of the model other than the {k_jT} determined by optimization may be assumed not to have error. Once the optimum {k_jT} has been determined, one measure of the quality of this set is immediately available through inspection of the sensitivity spectra in {S_ij} or {ρS_ij}. There is even one circumstance in which the sensitivity spectrum gives all of the quality-of-fit information needed to provide error bounds in the usual sense, namely, the circumstance that one column of the {ρS_ij} matrix contains elements that are substantially larger than those in all the other columns. This is simply the modeling expression for one reaction being rate determining for the whole set of conditions from which {D_iT} was taken. A measure Δ ln k_jT of the uncertainty range for k_jT resulting from some (perhaps arbitrarily estimated) uncertainty range Δ ln D_iT for one of the characteristic data values D_iT is then available through

Δ ln k_jT = |ρS_ij|^{-1} Δ ln D_iT     (4)
(Note that Δ ln k_jT and Δ ln D_iT are by their meaning intrinsically positive.) This Δ ln k_jT is unlikely to be a quantity of interest even in the one-reaction case, however, since one would not have been interested in modeling for other D_iT in the first place if the corresponding ρS_ij elements were not significant. One must instead adopt some means to combine uncertainty ranges from all D_iT elements. The straightforward estimate from "error sensitivity propagation" (ESP) is then

Δ_ESP ln k_jT = W_Σ^{-1} Σ_{i=1}^{N_D} W_iT |ρS_ij|^{-1} Δ ln D_iT     (5)

where W_Σ = Σ_{i=1}^{N_D} W_iT, if one supposes that error propagation behaves as it would for the case that the different k_jT values arise in the optimization because of random measurement errors in the D_iT.

The minimization procedure used to discover the best {k_jT} in the first place has the effect of coupling all uncertainties in the input {D_iT} to all values of {k_jT} in the
output. One can choose to ignore this coupling if one has reason to believe that no essential distortions in {k_jT} were caused by the coupling. In this event one can still, optimistically, use (5) to compute uncertainty ranges for the elements of {k_jT} even when the {ρS_ij} array has elements of comparable size in many columns. For the common situation where some of the k_jT may be determined for the first time in the modeling effort, this may be as useful a procedure as any to get an idea of the quality of the rate constant information one has obtained.

B. Worst Case Guidelines. On the other hand, one may also be interested in finding a very conservative estimate of the quality of the {k_jT}. To achieve this one wishes to incorporate into the uncertainty estimation not only the uncertainties propagated from {D_iT} to {k_jT} in the optimization, but also those introduced by limitations in accuracy of other parameters of the model, predominantly of course in the unvaried rate constants. One possibility to find the "absolute worst event" (AWE) range of uncertainty in a k_jT value, Δ_AWE ln k_jT, is to adopt the following procedure. The value of k_jT is fixed at various multiples of the value found to be optimum, and the optimization program is utilized to reminimize (1) repeatedly, now letting all k_l≠j,T be varied. As k_jT is set to successively larger or smaller values, the optimization program will begin to push members of {D_iT^m}, as well as other rate constants, to upper or lower bounds of the ranges one is willing to consider reasonable. When a D_iT^m is found to reach such a bound, the corresponding W_iT is increased to force the optimization program to avoid crossing it. When a k_l≠j,T reaches a bound, it is fixed at the boundary value. Eventually one should find "constraining" values of k_jT, which are either so large or so small that no variations in the remaining unfixed k_l≠j,T are able to bring {D_iT^m} away from the acceptable limits. These k_jT values may of course turn out to include 0 or the maximum physically meaningful value, or both, implying that the {D_iT} can be matched acceptably without reaction j, or with the rate of reaction j being greater than the collision frequency. If all of the k_l≠j,T values become fixed at boundary values while {D_iT^m} is still acceptably close to {D_iT}, then k_jT can be further changed without regard for the k_l≠j,T values until some D_iT^m does reach a limit, or k_jT reaches one of its limits.

This procedure does in principle provide an absolute worst event uncertainty range Δ_AWE ln k_jT, and it does so despite the coupling to and among the other members of {k_jT}. It has a possible flaw and a major disadvantage. The possible flaw is that the optimization program could lead itself into a succession of false minima, even starting from the optimum {k_jT} values, and thereby give incorrect limits. Aside from this possible numerical trap, however, there is the major disadvantage that the procedure is so cumbersome, and probably so enormously demanding of computer time, that no one would be interested enough in such absolute worst event uncertainty limits to make the effort of finding them this way, except perhaps for a model with very small sets {k_jT} and {D_iT}. One is therefore led to seek an alternative conservative estimate of uncertainty limits on k_jT. At the sacrifice of ignoring the coupling among the k_jT beyond first order, a direct route to a "worst case guideline" (WCG) can be found as follows.
For the k_l≠j,T values that were not varied in the optimization, an uncertainty range Δ ln k_lT can be estimated from one's knowledge about the unvaried k_lT. For the k_l≠j,T values that were varied in the optimization, the Δ_ESP ln k_lT values from (5) can be accepted as first-order uncoupled uncertainty ranges. A worst case guideline uncertainty range for k_jT can then be constructed from the uncertainty ranges in {D_iT} and {k_l≠j,T} through

Δ_WCG ln k_jT = W_Σ^{-1} Σ_{i=1}^{N_D} W_iT |ρS_ij|^{-1} [Δ ln D_iT + Σ_{l≠j}^{N_k+N_uk-1} |ρS_il| Δ ln k_lT]     (6)

where N_uk is the number of unvaried rate constants. Ignoring the coupling is likely to make Δ_WCG ln k_jT from (6) smaller than the true fully coupled uncertainty range one would compute from the same input using the optimization method described above, i.e., one expects to have

Δ_AWE ln k_jT > Δ_WCG ln k_jT > Δ_ESP ln k_jT     (7)
although this clearly cannot be expected to hold in all cases.
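The combination rules of eq 5 and 6 are simple weighted sums and can be written down directly. In the sketch below (an illustration, not a prescribed implementation), rhoS holds the logarithmic sensitivities of every characteristic data value to every rate constant, varied or not, W the weighting factors, dlnD the uncertainty ranges Δ ln D_iT, and dlnK the uncertainty ranges Δ ln k_lT of the rate constants other than the one being evaluated; all of these are assumed inputs.

```python
# Sketch of the error-sensitivity-propagation (eq 5) and worst-case-guideline
# (eq 6) estimates for one rate constant k_j (column index j of rhoS).
import numpy as np

def esp_range(rhoS, W, dlnD, j):
    """Delta_ESP ln k_j of eq 5: weighted mean of |rho_S_ij|^-1 * Delta ln D_i."""
    rhoS, W, dlnD = map(np.asarray, (rhoS, W, dlnD))
    return float(np.sum(W * dlnD / np.abs(rhoS[:, j])) / np.sum(W))

def wcg_range(rhoS, W, dlnD, dlnK, j):
    """Delta_WCG ln k_j of eq 6: as eq 5, but each Delta ln D_i is augmented by
    the first-order contributions |rho_S_il| * Delta ln k_l of the other rate
    constants (the l != j sum); dlnK[j] itself is ignored."""
    rhoS, W, dlnD, dlnK = map(np.asarray, (rhoS, W, dlnD, dlnK))
    others = [l for l in range(rhoS.shape[1]) if l != j]
    coupled = np.abs(rhoS[:, others]) @ dlnK[others]
    return float(np.sum(W * (dlnD + coupled) / np.abs(rhoS[:, j])) / np.sum(W))
```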
VI. Example of Uncertainty Range Calculation

We consider a simple example in which modeling a single experiment is carried out to derive a rate constant for just one reaction, the other rate constants being known well enough that uncertainties in their values have only minor effects upon the accuracy of the rate constant being evaluated. Such a case is the evaluation of the rate constant for

CO + O + M → CO2 + M     (8)

by modeling as D_iT the time for decay of the CO flame spectrum intensity from 3/4 to 1/4 of its maximum value in the post-reflected-shock gas of an H2:O2:CO:Ar mixture shock-heated to 1800 K, found experimentally to be D_1,1800 = 115 ± 5 μs, this error bound referring to the range of data scatter.^6 The modeling result for the final rate constant set shows that out of the total assumed set of 27 reactions, only the rates of three other reactions

H + H + M → H2 + M     (9)
H + O2 + M → HO2 + M     (10)
H + O2 → OH + O     (11)

influence D_1,1800, i.e., have appreciable ρS_1j. The ρS spectrum obtained from doubling each k_j,1800 was found to be ρS_1,8 = -0.82, ρS_1,9 = -0.07, ρS_1,10 = -0.04, and ρS_1,11 = -0.03, all other ρS_1j being less than 0.01 in magnitude.

For the error sensitivity propagation estimate of Δ ln k_8,1800, only one term contributes to (5), since we are concerned with only one experiment. If the data scatter range is taken for Δ ln D_1,1800, then (5) becomes

Δ_ESP ln k_8,1800 = (1)(0.82)^{-1}(ln [120/110]) = 0.11

If an estimated systematic error bound of ±20% is used instead for Δ ln D_1,1800, then Δ_ESP ln k_8,1800 becomes

Δ_ESP ln k_8,1800 = (1)(0.82)^{-1}(ln [1.2/0.8]) = 0.49

From the data scatter alone one thus has an 11% estimate for the uncertainty range of k_8,1800, while the systematic error estimate is 63%.

For the worst case guideline uncertainty range estimate, the above Δ_ESP ln k_8,1800 values must be supplemented by contributions from uncertainties in the rate constants for reactions 9-11, which can be estimated for T = 1800 K as multiplications or divisions by a factor of 1.5, a factor of 2, and a factor of 1.25, respectively.^{7,8} Thus

Σ_{l≠8}^{26} |ρS_1l| Δ ln k_l,1800 = 2(0.07) ln 1.5 + 2(0.04) ln 2 + 2(0.03) ln 1.25 = 0.126
The uncertainty range estimate on k_8,1800 is then, not including the ±20% systematic error, from (6)

Δ_WCG ln k_8,1800 = (0.82)^{-1}[ln (120/110) + 0.126] = 0.26

which is a 30% uncertainty range. Including the ±20% systematic error estimate gives a worst case guideline of Δ_WCG ln k_8,1800 = 0.65.
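For this example the arithmetic of eq 5 and 6 can be checked in a few lines; the sketch below simply restates the numbers quoted above, and the printed values reproduce 0.11, 0.49, 0.26, and 0.65 to within rounding.

```python
# The section VI example worked through eq 5 and 6 (one data value,
# sensitivities to reactions 8-11, with reaction 8 being evaluated).
import numpy as np

# Magnitudes of the sensitivity elements rho_S_1j quoted above.
s8, s9, s10, s11 = 0.82, 0.07, 0.04, 0.03

dlnD_scatter    = np.log(120.0 / 110.0)   # +/- 5 us data scatter on 115 us
dlnD_systematic = np.log(1.2 / 0.8)       # +/- 20% systematic error bound

# Uncertainty ranges of k_9, k_10, k_11: factors of 1.5, 2, and 1.25.
coupled = s9 * 2*np.log(1.5) + s10 * 2*np.log(2.0) + s11 * 2*np.log(1.25)

print(dlnD_scatter / s8)                  # Delta_ESP, data scatter:     ~0.11
print(dlnD_systematic / s8)               # Delta_ESP, systematic bound: ~0.49
print((dlnD_scatter + coupled) / s8)      # Delta_WCG, scatter only:     ~0.26
print((dlnD_systematic + coupled) / s8)   # Delta_WCG, with +/- 20%:     ~0.65
```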
VII. Discussion

The foregoing presentation has been developed as if estimation of elementary reaction rate constant parameters were the only application of computer modeling techniques of the types discussed. In fact, of course, there exists a considerable literature on the subject of computer modeling, and much of it comes close to the very problems that arise in rate constant estimation. For example, the distinction between errors arising from uncertainties in initial conditions ("α-type errors") and those arising from uncertainties in model parameters ("β-type errors") or in model form ("γ-type errors") has long been made in engineering system theory.^9 For the most part it is unnecessary to take recourse to results obtained in these related fields when working with modeling of chemical reactions by integration of ordinary differential equations, but one should be aware that a considerable resource of literature on modeling exists.^{10-15} One should also be aware that the preference for logarithmic forms [e.g., eq 2 and 3] which prevails when dealing with kinetics models^4 is also found to be preferred in most other contexts as well (see Chapter 1 of ref 9).

The above presentation of uncertainty range estimations has not dealt with the fact that eq 2 and 3 do not typically lead to S_ij and ρS_ij elements that are valid over the entire ranges of D_iT^m or k_jT that one is exploring. They are valid instead as averages over the D_iT^m ± ΔD_iT^m and k_jT ± Δk_jT ranges that happened to be used in computing sensitivity spectra. For complex models, ρS_ij usually proves to be more or less constant over the range of k_jT only when ρS_ij is one of the small elements of {ρS_ij}; for the large elements of {ρS_ij}, steady changes are usually found instead.^4 For cautious use of the equations given for Δ_ESP ln k_jT and Δ_WCG ln k_jT, one must therefore use ρS_ij elements that are characteristic for the k_jT ranges in which one is interested for the uncertainty analysis; a sketch of such a check is given at the end of this section.

In addition to this procedural caveat, we note also that the methods for obtaining uncertainty ranges described in section V were proposed not as definitive numerical methods, but rather as reasonable ways to discover the effects of error propagation in a modeling investigation. Only the "absolute worst event" method could be claimed to define true bounds, and then only with the restriction that false minima may lead to false bounds. The intent of the other methods is to provide quantitative and reasonable statements about uncertainty ranges. By providing such statements in addition to the output {k_jT} and {ρS_ij}, a modeling study can provide k_jT values that will be acceptable for later independent critical evaluations, just as are k_jT values and uncertainty range estimations obtained by traditional data reduction procedures.
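The validity-range check mentioned above can be written as a small extension of the differencing already used for the sensitivity spectrum: the large ρS_ij elements are simply reevaluated with perturbation factors spanning the whole k_jT range of interest rather than a small increment. The sketch below is illustrative only; as before, predict stands in for a full mechanism integration and the perturbation factors are assumptions.

```python
# Sketch: checking how constant a logarithmic sensitivity element really is
# over the k_j range of interest.  predict(k) is a placeholder for a full
# mechanism integration returning the characteristic values {D_iT^m}.
import numpy as np

def rho_S_over_range(predict, k, j, factors=(1.05, 1.5, 2.0)):
    """Reevaluate d ln D_i^m / d ln k_j by two-sided differencing with
    successively larger perturbation factors.  Columns that differ strongly
    warn that a single-point sensitivity should not be extrapolated over
    the whole uncertainty range of k_j."""
    k = np.asarray(k, dtype=float)
    columns = []
    for f in factors:
        kp, km = k.copy(), k.copy()
        kp[j] *= f
        km[j] /= f
        Dp = np.asarray(predict(kp), dtype=float)
        Dm = np.asarray(predict(km), dtype=float)
        columns.append(np.log(Dp / Dm) / (2.0 * np.log(f)))
    return np.column_stack(columns)
```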
Acknowledgment. The ideas presented here were developed in connection with applications of computer modeling to high temperature combustion chemistry. Acknowledgment is made to the donors of the Petroleum Research Fund, administered by the American Chemical Society, for partial support of this research. This research was also supported by the Robert A. Welch Foundation and the U.S. Army Research Office.

References and Notes
(1) Symposium on Reaction Mechanisms, Models, and Computers, J. Phys. Chem., 81, 2309-2586 (1977).
(2) D. B. Olson and W. C. Gardiner, Jr., J. Phys. Chem., 81, 2514 (1977).
(3) C. W. Gear, Math. Comput., 21, 146 (1967); "Numerical Initial Value Problems in Ordinary Differential Equations", Prentice-Hall, Englewood Cliffs, N.J., 1971.
(4) W. C. Gardiner, Jr., J. Phys. Chem., 81, 2367 (1977).
(5) S. W. Benson, "Thermochemical Kinetics", 2nd ed, Wiley, New York, N.Y., 1976.
(6) J. E. Hardy, W. C. Gardiner, Jr., and A. Burcat, Int. J. Chem. Kinet., 10, 503 (1978).
(7) D. L. Baulch, D. D. Drysdale, D. G. Horne, and A. L. Lloyd, "Evaluated Kinetic Data for High Temperature Reactions", Vol. 1, Butterworths, London, 1972.
(8) D. L. Baulch, D. D. Drysdale, J. Duxbury, and S. J. Grant, "Evaluated Kinetic Data for High Temperature Reactions", Vol. 3, Butterworths, London, 1976.
(9) P. M. Frank, "Introduction to Sensitivity Theory", Academic Press, New York, 1978.
(10) R. K. Mezaki and J. Happel, Catal. Rev., 3, 241 (1969).
(11) D. M. Himmelblau, "Process Analysis by Statistical Methods", Wiley, New York, 1970.
(12) J. R. Kittrell, Adv. Chem. Eng., 8, 97 (1970).
(13) A. P. Sage, "Estimation Theory with Applications to Communications and Control", McGraw-Hill, New York, 1971.
(14) Y. Bard, "Nonlinear Parameter Estimation", Academic Press, New York, 1974.
(15) J. H. Seinfeld and L. Lapidus, "Mathematical Models in Chemical Engineering", Vol. 3, "Process Modeling, Estimation and Identification", Prentice-Hall, Englewood Cliffs, N.J., 1974.
Discussion

S. H. Bauer (Cornell University). In discussing the evaluation of rate constants of elementary steps by curve fitting observations on complex systems, one cannot overemphasize the need to undertake sensitivity analyses, as Professor Gardiner has done. It is necessary to stress that a systematic procedure must be used to ascertain whether all relevant reactions (in the sense that they affect the particular diagnostic used) have been included in the array. The minimal set should not be defined by a list of only those which are required to account for the observations, but rather by those steps which, when included with reasonable rate parameters, affect the observations by magnitudes larger than the experimental errors.