
0.75 1 mul"1 s_1. (4) k$ = 0.25 1 mol"1 s"1. (5) ka = 1.0 1 mol"1 s"1. (6). It should be noted that the rate constants used in this mech- anism are no...
2 downloads 0 Views 2MB Size
Laboratory Simulations that Include Experimental Error

It should be noted that the rate constants used in this mechanism are not necessarily meant to be realistic but have been chosen purely for the purposes of the exercise. When the simulation was run at 300 K, step (1) was rate-determining, and two experiments (one with RN2X and X2, the other with RN2X alone) sufficed to reveal all that could be known about the mechanism under room-temperature conditions. (Usually about six experiments were done.) A log plot versus time for the decay of RN2X showed the rate-determining step to be first-order, and the slope of the line was independent of the amount of X2. The difference in product distributions in the absence of X2 identified this species as … and allowed the student to establish five of the six steps of the mechanism and plausibly propose the sixth. There was still some ambiguity concerning the nature of the rate-determining step; that is, was the N-X or N-R bond broken first? One way to resolve this was to run simulations at other temperatures. (Only the more advanced students were encouraged to do simulations at more than one temperature, since the interpretation of the results of the temperature change was not very straightforward.) From the change in product distributions it was possible to deduce that step 1 was rate-determining at low temperatures.
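The log-plot diagnostic mentioned above can be sketched briefly. The following fragment is a hypothetical illustration in Python, not the simulation program used in the course; the rate constant, initial concentration, and noise level are assumed values, chosen only to show how linearized plots distinguish first- from second-order behavior.

```python
import numpy as np

# Hypothetical data: first-order decay of RN2X with a little random scatter.
rng = np.random.default_rng(1)
k, c0 = 0.02, 0.10                        # assumed rate constant (s^-1) and initial conc. (M)
t = np.linspace(0.0, 150.0, 12)           # sampling times (s)
conc = c0 * np.exp(-k * t)                # true first-order decay
conc *= 1.0 + 0.02 * rng.standard_normal(t.size)   # ~2% "experimental" noise

# A first-order step gives a straight line for ln(conc) vs. t,
# while a second-order step would give a straight line for 1/conc vs. t.
r_first = np.corrcoef(t, np.log(conc))[0, 1]
r_second = np.corrcoef(t, 1.0 / conc)[0, 1]
slope, _ = np.polyfit(t, np.log(conc), 1)

print(f"linearity of ln(c) vs t : r = {r_first:.4f}")
print(f"linearity of 1/c vs t   : r = {r_second:.4f}")
print(f"recovered k = {-slope:.4f} s^-1 (true value {k})")
```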
This type of simulation has several characteristics that make it particularly realistic and useful for instructing chemistry students. First, the specified experimental error appears in the small number of curves generated by the program. This is very much like the noise that all experimentalists have to deal with in real life, and the students have to learn to distinguish between significant and insignificant deviations from anticipated behavior. Second, any conclusions reached by the students concerning the nature of the mechanism and the magnitudes of rate constants (or some combination of them) must be demonstrated as convincingly as in any research project. They have to show that the rate law derived by ordinary steady-state analysis from their mechanism uniquely reproduces the observed kinetics, and that they have uncovered all the information that can be extracted. Finally, realizing that no real-life problem can be completely solved, the students become aware that it is as much an error to reach too many conclusions as too few.

A particularly interesting feature of this type of project is that students can mislead themselves for some time by constructing a plausible but incorrect mechanism, and then experience the frustration of having it demolished by "one experiment too many."

In summary, simulations allow kinetics to be presented as a challenge to one's detective ability rather than a repetitive exercise in following algebraically messy steady-state analyses. The particular program we use is especially well adapted to teaching and has been made easily available.




Acknowledgment

This research has been supported in part by the National Science Foundation and by the Donors of the Petroleum Research Fund, administered by the American Chemical Society.


A shortcoming of traditional simulations of experiments is that errors (which appear in "real" experiments) are not included. As a result the user is not fully aware of the sensitivities of the results to the various experimental parameters. For example, how is the result affected if a measurement is made with a specified uncertainty? Likewise, if experimental conditions cannot be maintained precisely (e.g., temperature control), then associated errors are introduced. For simulations of tightly controlled experiments, experimental error can be introduced easily into the computed output. However, if experimental conditions are not stable, i.e., the temperature is not only unknown but also not well regulated, then experimental errors must be introduced during the simulation.

A simple algorithm for calculating random experimental error can be implemented by using the Central Limit Theorem (11). This theorem states that the normalized sum of identically distributed random numbers approaches a Normal (Gaussian) distribution. This is the reason a Normal distribution is used in describing random errors (12). Consider a uniform distribution of random numbers, n_i, from 0 to N. For this case the mean is N/2 and the variance is

σ_i² = N²/12

so that for k randomly picked numbers from the above distribution the sum R = n_1 + n_2 + … + n_k has mean kN/2 and variance

σ_R² = kN²/12


Dwight C. Tardy
University of Iowa
Iowa City, IA 52242

… 25-31, 1976, in Caracas, Venezuela.

Permanent address: IBM Research Laboratory, 5600 Cottle Road, San Jose, CA 95193. Deceased.

By selecting 12 random numbers distributed between 0 and 1, the standard deviation of their sum will be 1 and the average will be 6. Thus, if the average value of a variable is Ē with standard deviation σ_E, then the observed quantity (including the error) will be given by

E_obs = Ē + (R_i - 6)σ_E

where R_i is the sum of 12 random numbers and changes according to the 12 numbers selected. This algorithm for introducing experimental error in computer simulations brings the user closer to "real experimental" simulations.
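A minimal sketch of this algorithm, in Python, might look as follows. It is an illustration only; the function name and sample values are hypothetical, and the published program is not reproduced here.

```python
import random

def add_error(mean, sigma):
    """Return mean plus an approximately Gaussian error of standard deviation sigma.

    Central Limit Theorem trick: the sum of 12 uniform(0,1) random numbers
    has an average of 6 and a standard deviation of 1.
    """
    r = sum(random.random() for _ in range(12))
    return mean + sigma * (r - 6.0)

# Example: a concentration whose true value is 0.100 M, "measured" five times
# with a standard deviation of 0.002 M.
print([round(add_error(0.100, 0.002), 4) for _ in range(5)])
```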

The prime question of what variable should be measured with high or low precision must be asked for each experiment. This is necessary so that the error in the final result is obtained with a minimum cost expenditure. In fact a gaming approach can be used in the simulation: the user is asked to minimize the cost of an experiment and to obtain data with a certain precision. This relates to the real-world situation in which measurements of higher precision are more costly. The cost of an experiment can be related to the precision of each measurement, the control of experimental parameters, and the number of measurements made. The optimization process occurs when balancing cost and precision. Two extreme cases can result: an experiment with a large number of measurements at high precision will produce very reliable results but deplete the whole NSF budget, or a cheap experiment can be performed by using a few measured values, each with large associated errors. The latter example will produce a 'shotgun' data display, which can be (and often is) naively interpreted in a multitude of ways.
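As a purely hypothetical illustration of this trade-off (the cost model below is invented for the example; the internal cost weighting actually used is described in the next section), one might charge for each data point in proportion to the precision demanded of it:

```python
# Invented cost model: each data point costs more as its relative error shrinks.
def experiment_cost(n_points, relative_error, base_cost=1.0):
    # Assumed scaling: halving the relative error doubles the cost per point.
    return n_points * base_cost / relative_error

print(experiment_cost(50, 0.005))   # many points at high precision: 10000 units
print(experiment_cost(6, 0.10))     # a few sloppy points: 60 units, "shotgun" data
```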




Application of Error Algorithm

We have included these ideas of experimental error in a kinetic simulation program (13). The primary objectives of this particular simulation are

1) to introduce the element of experimental design into the undergraduate study of chemical kinetics, and
2) to interpret raw kinetic data from a user-designed experiment so that the balanced chemical equation, rate law, rate constant, activation energy, and pre-exponential factor can be obtained.

The user specifies the temperature, the number and size of the time intervals, the initial concentrations of reactants, and which substances are to be analyzed at predetermined times. The user has the option of introducing experimental errors (standard deviations) associated with each of the above variables so that the real experiment can be simulated. After selecting an unbalanced chemical equation (the instructor can easily change reactions to mimic the experiment to be performed in the laboratory), the user designs the kinetic study to determine the kinetic parameters of the reaction by inputting the appropriate variables. The computer program then simulates the experiment as designed; the appropriate errors are introduced. The temperature variation appears over each discrete time interval; i.e., the temperature will change with each interval as determined by the precision which was input, so that the reaction will speed up or slow down during this period.

The next step is for the student to analyze the data. The user can request a concentration-time table, a rate-time table, or plots of various functions of the concentration (conc., ln(conc.), or conc.⁻¹) versus time for any of the analyzed substances. In addition to the plot, a least-squares analysis is performed: the slope and intercept along with their standard deviations are output so the user can assess the "goodness" of the plotted concentration functions. From the analyzed data the balanced chemical equation can be deduced along with the rate law and rate constant. If the analysis is unsuccessful (not enough reaction, too much reaction, etc.), then the user can redesign the experiment by changing the initial concentrations, temperature, time interval size, or number of time intervals, and perform another simulation with the new data. By simulating the experiments at a variety of temperatures (at least 2) the user can select an option to calculate Arrhenius parameters and compare them with the "real" values. By changing the experimental precisions of the input parameters, the user can evaluate the accuracy with which the rate parameters can be obtained (i.e., propagation of errors) without the aid of taking derivatives.

Each decision the student makes on the precision of a laboratory variable and on the sampling is weighted by an internal cost factor. This produces a realistic cost analysis: the better the precision, the higher the cost. This cost is output at the end of each simulation. Thus, a gaming approach can be used in which the instructor assigns a reaction and wants the rate constant to be determined within a certain tolerance; the student's grade is determined by the cost factor.

The program has been used successfully in the undergraduate laboratory course as a stand-in for a real-time experiment and to acquaint the student with experimental design. The simulator has also been used in lecture courses whose objectives have been to determine kinetic parameters and to evaluate the effect of variables on these parameters. In all cases the user has appreciated the analysis problem associated with having only a few close data points.
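The flavor of such a run can be conveyed by a short sketch. The fragment below is hypothetical (it is not the published program (13); the reaction, Arrhenius parameters, precisions, and interval sizes are assumed values): a first-order reaction is advanced over discrete time intervals, the temperature is allowed to wander within its specified precision during each interval, analytical error is added to every sampled concentration, and ln(conc.) versus time is then fit by least squares.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed "true" kinetics and experimental precisions (illustrative values only).
A, Ea, R = 1.0e10, 70000.0, 8.314      # Arrhenius pre-exponential (s^-1), Ea (J/mol), gas constant
T_set, sigma_T = 300.0, 2.0            # set-point temperature and its control precision (K)
c, dt, n_int = 0.10, 20.0, 10          # initial conc. (M), interval length (s), number of intervals
sigma_c = 0.001                        # analytical error on each concentration reading (M)

times, readings = [0.0], [c + sigma_c * rng.standard_normal()]
for i in range(1, n_int + 1):
    T = T_set + sigma_T * rng.standard_normal()   # temperature drifts during this interval
    k = A * np.exp(-Ea / (R * T))                 # rate constant at that temperature
    c *= np.exp(-k * dt)                          # first-order decay over the interval
    times.append(i * dt)
    readings.append(c + sigma_c * rng.standard_normal())

# Least-squares fit of ln(conc) vs. time, as the simulator's plotting option does.
slope, intercept = np.polyfit(times, np.log(readings), 1)
print(f"apparent k = {-slope:.2e} s^-1 at a nominal {T_set} K")
```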




Acknowledgment

The author wishes to express his appreciation to the University of Iowa for a Summer Fellowship during which this work was undertaken.


BASIC Version of the Stiffly Stable, Gear Numerical Integrator

Richard J. Field¹
Radiation Laboratory
University of Notre Dame
Notre Dame, IN 46556

Numerical simulation is a very useful method for analyzing the kinetics of complex reaction mechanisms because it does not require closed-form solution of the differential equations that arise from application of the law of mass action to a mechanism. It does, however, require that values, either experimental or estimated, be available for all of the rate constants in a mechanism. Activity in this area is increasing in both teaching (14) and research (15). Unfortunately, numerical solution of these equations may be difficult in practice because they are often subject to a numerical instability referred to as "stiffness" (16). This problem arises because of the vastly differing time scales (rate constants) that may be associated with the various elementary reactions composing a complex mechanism. Stiffness appears whenever a pseudo-steady state develops in the concentration of one or more intermediate species. Commonly used numerical integration routines, such as those based upon the Runge-Kutta method, are so inefficient when applied to stiff problems that they are essentially useless. However, there are now available a number of numerical integration routines (17) which easily handle stiff problems. The best known of these is one due to Gear (16). Essentially all of these routines are written in FORTRAN and are not useful with many small computers which are restricted to BASIC. This report concerns the availability of a BASIC version of the Hindmarsh (18) implementation of the Gear (16) algorithm.

Our BASIC version of GEAR was written for use on a Hewlett-Packard 9830 desk calculator with 5760-word memory and matrix operations ROM, but it will run with only minor modification on systems supporting an extended BASIC. A memory capacity in the region of 5500 words is necessary, and if a system does not support matrix operation commands the user must supply a matrix operations package. The program does not contain the Adams methods present in the original (16, 18) FORTRAN program. The size of the mechanism determines the memory and time required for the calculation to be completed. Mechanisms of moderate complexity (3-4 intermediates) usually take several minutes on the HP-9830. The program has been used successfully (19) on an HP-9830 to simulate the rather complex (6 intermediates) mechanism of the photoreduction of H2O2 by H2 in water.

A listing of the program is available by writing to: GEAR PROGRAM, Radiation Laboratory, University of Notre Dame, Notre Dame, Indiana 46556. A technical report (NDRL-2161) that describes in detail the use of the program is included with the listing. The program itself is set up to integrate the very stiff set of equations resulting from the Oregonator (20) model of the Belousov-Zhabotinsky reaction.
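The author's BASIC listing is not reproduced here, but the behavior that motivates a Gear-type (backward-differentiation) integrator can be sketched with a different standard stiff test case, the Robertson kinetics problem, using SciPy's BDF method. This is an illustrative stand-in, not the Notre Dame GEAR program nor the Oregonator set-up it ships with.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Robertson's classic stiff kinetics problem: rate constants spanning nine
# orders of magnitude quickly force the intermediate y2 into a pseudo-steady state.
def robertson(t, y):
    y1, y2, y3 = y
    return [-0.04 * y1 + 1.0e4 * y2 * y3,
             0.04 * y1 - 1.0e4 * y2 * y3 - 3.0e7 * y2**2,
             3.0e7 * y2**2]

y0 = [1.0, 0.0, 0.0]

# A backward-differentiation (Gear-type) method covers the full interval with modest effort.
bdf = solve_ivp(robertson, (0.0, 1.0e5), y0, method="BDF", rtol=1e-6, atol=1e-10)
print("BDF :", bdf.nfev, "derivative evaluations for t = 0 to 1e5")

# An explicit Runge-Kutta method is forced to take tiny steps by the stiffness;
# even a far shorter interval already costs much more work.
rk45 = solve_ivp(robertson, (0.0, 10.0), y0, method="RK45", rtol=1e-6, atol=1e-10)
print("RK45:", rk45.nfev, "derivative evaluations for t = 0 to 10")
```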

The research described herein was supported by the Office of Basic Energy Sciences of the Department of Energy. This is document No. NDRL-2121 from the Notre Dame Radiation Laboratory.

¹ Present address: Department of Chemistry, University of Montana, Missoula, MT 59812.