Computer Calculation of First-Order Rate Constants

Robert C. Williams and James W. Taylor¹
The University of Wisconsin, Madison, 53706

Journal of Chemical Education, Volume 47, Number 2, February 1970

Many commonly studied reactions are kinetically first- or pseudo-first-order. They are described by the integrated rate equation

ln A_t = intercept − kt    (1)

where A_t is related to the fraction of reaction, t is time, and k is the first-order rate constant. To evaluate k, A_t is generally replaced by |C_∞ − C_t|, where C_t is some instrumental reading proportional to the concentration of either the reactant or product, and C_∞ is the instrumental reading at infinite reaction time, usually taken after ten half-lives or more. In practice, especially in the undergraduate laboratory, absolute standardization of the measuring instrument is not required provided the concentration proportionality constant does not vary. This and other constants may be absorbed in the intercept of eqn. (1).

Once the data have been taken, there remains the problem of actually calculating the first-order rate constant and estimating its reliability. Numerous techniques for this have been described in the literature (1-5); they range in computational complexity from fairly simple to extremely difficult. Four commonly used techniques are (in order of increasing complexity): three-point Guggenheim calculations (5), plotting values of |C_∞ − C_t| versus time on log paper, point-by-point substitution of data into eqn. (1), and a fit of the data to eqn. (1) using the theory of least squares (6-8). The former two methods have been popular for their computational simplicity and ease of application. There is, however, difficulty in estimating uncertainties with these approaches. The latter two techniques, especially the least-squares fit, have valid statistical methods for estimating uncertainties, but manual calculations have usually been considered too tedious for the large number of data points necessary for greater statistical significance. Until recently, therefore, the choice of method was often dictated by the number of data points and the complexity of the calculation. Large, high-speed digital computers now reduce the tedium of data manipulation and have put the complexity of calculation back into perspective. A number of programs have been reported utilizing methods which were previously considered too complex to be of value (9-16).
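The least-squares evaluation of eqn. (1) is compact enough to sketch directly. The fragment below is an illustrative Python rendering, not the program itself (which, as discussed below, is written in Fortran V); the function and variable names are invented for display, and only the standard unweighted slope and slope-error formulas (6-8) are used.

    import math

    def first_order_fit(times, readings, c_inf):
        # Fit ln|c_inf - C_t| = intercept - k*t by unweighted least squares.
        # Returns (k, intercept, standard error of k).
        x = list(times)
        y = [math.log(abs(c_inf - c)) for c in readings]
        n = len(x)
        sx, sy = sum(x), sum(y)
        sxx = sum(t * t for t in x)
        sxy = sum(t * v for t, v in zip(x, y))
        slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
        intercept = (sy - slope * sx) / n
        resid = [v - (intercept + slope * t) for t, v in zip(x, y)]
        s2 = sum(r * r for r in resid) / (n - 2)          # residual variance
        se_k = math.sqrt(s2 * n / (n * sxx - sx * sx))    # std. error of slope
        return -slope, intercept, se_k

    # Synthetic run: k = 0.05 per minute, C_infinity = 1.000
    ts = [0.0, 5.0, 10.0, 15.0, 20.0, 25.0, 30.0]
    cs = [1.0 - 0.8 * math.exp(-0.05 * t) for t in ts]
    print(first_order_fit(ts, cs, 1.0))

With exact synthetic data such as these, the residual variance, and hence the reported standard error, is essentially zero.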

Several have been written especially for undergraduate laboratories (1, 15). In particular, programs utilizing either least-squares or other statistical methods to calculate rate constants, equilibrium constants, etc., have been discussed (9-10, 14, 16-19). These programs have freed the chemist to use the most statistically accurate methods possible in his work. Most computer programs, however, are written to convert only one type of data into a form compatible with the least-squares equations; as a result, a program library is required for each type of input data that an investigator might decide to employ for the kinetic analysis. Often it is even necessary to use a different program on the same data if some particular manipulation is desired. Such a situation is inconvenient even for a large research group at the graduate level; on the undergraduate level it becomes impossible.

Accordingly, we set out to design a computer program which would not be confined to a single data source but which would accept data taken by a variety of experimental methods and would offer a wide choice of data manipulation options. In particular, we set out to design a program which would calculate first-order rate constants from absorbance, transmittance, percent transmittance, resistance, conductance, any general variable linearly proportional to concentration, or any general variable inversely proportional to concentration. To accommodate this variety of input data it was necessary to devise a technique for expressing the input data in terms of linear concentration units. Once the input data have been expressed in this fashion, the theory of least squares, which minimizes the sum of the squared point residuals, can be employed to assure the best fit of the data. This approach allows us to use statistical expressions (6-8) to estimate the reliability of the calculated rate constant. We have previously reported on the use of an earlier version of this program in fitting conductance and difference data to within 0.01%, using Youden's formulas (7) to estimate the uncertainties (20).

The versatility of the resultant program makes it both a useful research tool and a welcome teaching device; the features incorporated are especially instructive for an undergraduate physical chemistry laboratory.

This research was supported by the Wisconsin Alumni Research Foundation and the National Science Foundation through grant GP-8369. Portions of this work were presented at the 154th National Meeting of the American Chemical Society, Chicago, September, 1967. Use of the University of Wisconsin Computer Center was made possible through support of the Wisconsin Alumni Research Foundation (WARF) through the Wisconsin Research Committee. ¹ To whom inquiries should be addressed.

Among the most important features are the following:

Input Data. The program accepts any data which are linear in or inversely proportional to concentration, as well as %T. Provision is made for including an additional subroutine for other types of data, or for additional preliminary data manipulation.

Cross Reference Tables. To facilitate comparison among instruments, equivalent values in other units are printed; e.g., for %T data, the equivalent absorbance and transmittance are computed and printed for each point. Equivalent times in hours, minutes, and seconds are also printed.

Time Conversion. Input times may be in hours, minutes, or seconds; rate constant units may be reciprocal seconds, minutes, or hours, and need not be in the same units as the input.

Infinity Variance. If desired, the infinity value which best fits the experimental data is calculated and used in calculating the rate constant.

Confidence Limits and Deviant Point Elimination. A specified percent confidence limit is calculated and, if desired, those points exceeding this limit may be discarded.

Comparison Techniques. If desired, the program will calculate the rate constant by comparison with the known rate constant of a standard, against which the unknown is measured.

Guggenheim Calculations. If desired, the program can do a Guggenheim-type calculation at the same time that it does the least-squares fit (an illustrative sketch of this calculation follows the list).

Modular Feature. Unwanted options can usually be deleted without affecting the rest of the program, thus decreasing still further the deck size and core storage requirements.
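For reference, the Guggenheim calculation mentioned above can be sketched in the same illustrative Python (names again invented, not the program's). Because C(t+Δ) − C(t) = (C_∞ − C_0)·e^(−kt)·(1 − e^(−kΔ)) for a fixed offset Δ, a plot of ln|C(t+Δ) − C(t)| against t has slope −k, and no infinity reading is required.

    import math

    def guggenheim_k(times, readings, lag):
        # For evenly spaced times, pair each reading with the one `lag`
        # places later: ln|C(t+D) - C(t)| = const - k*t, so the slope
        # of ln-difference vs. time gives k without an infinity reading.
        x = times[:len(times) - lag]
        y = [math.log(abs(readings[i + lag] - readings[i]))
             for i in range(len(times) - lag)]
        n = len(x)
        sx, sy = sum(x), sum(y)
        sxx = sum(t * t for t in x)
        sxy = sum(t * v for t, v in zip(x, y))
        return -(n * sxy - sx * sy) / (n * sxx - sx * sx)  # k = -slope

    # Synthetic run, k = 0.10; a lag of 5 points is an offset of 10 time units
    ts = [2.0 * i for i in range(12)]
    cs = [1.0 - 0.9 * math.exp(-0.10 * t) for t in ts]
    print(guggenheim_k(ts, cs, 5))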

Discussion

The program described herein was written for and run on a Univac 1108 time-shared computer, and copies of the program (approximately 430 cards of code and 100 cards of comments) are available from the authors. Execution times vary, depending on the type and amount of data; approximately fifteen seconds are required to compile the program and execute four data sets consisting of forty points each when both the infinity point adjustment and the deviant point elimination options (dropping an average of three points per data set) are used. This time may be materially shortened if it is possible to preserve the compiled program on tape or as a binary deck. Core storage requirements are approximately ten thousand decimal locations when the program is dimensioned for two hundred data points and all options and library routines are included. Slightly less core is required if some of the options are deleted.

The programming language is Univac Fortran V, a variation of Fortran IV. The program is CDC and IBM Fortran compatible with minor changes. The program is designed in a modular fashion so that unused options can usually be deleted from the source deck. (A special feature of Univac's EXEC-8 Monitor allows the deletion, before execution, of previously compiled program code. This was the original reason for the modular design; however, we have found the modular design to be useful on any system.) The generalized flow diagram is given as Figure 1, with asterisks indicating those options which may be deleted. Specialized options and the details of the algorithms used are discussed below.

Figure 1. The generalized flow chart for the first-order kinetics program KINDAT. An asterisk indicates a portion of the program which may be deleted without hindrance to the rest of the program.

Checking the Input Data for Errors

A series of safeguards is built into the program to catch errors in the input data; discovery of an error in the input data will suppress execution of that data set, although the input data will still be read and printed. This printing is optionally available even when no input errors are detected, as a further aid in detecting more subtle errors, and it is especially useful when students keypunch their own data.

Conversion of Experimental Values

In order to keep the program both as versatile and as compact as possible, all measurements are converted into an equivalent value C_t which is linear in concentration. After conversion, all input data use the same set of line-fitting equations and, if desired, the same infinity adjustment routine, regardless of their origin. The reader is cautioned that the linear values are used for computational convenience and may lack physical significance. At the same time, input time is converted into those units which express the rate constant. Provision is made for an alternate user-written conversion routine and/or a routine to perform more preliminary data manipulation.
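As an illustration of the conversion step, the following Python sketch (the dictionary keys and names are invented; the program implements this as a Fortran subroutine) maps each supported reading type onto its linear equivalent, using only the familiar relations A = −log10(T), A = 2 − log10(%T), and conductance = 1/resistance.

    import math

    # Each supported reading type is mapped to a value linear in
    # concentration; keys and names are illustrative only.
    LIN = {
        "absorbance":    lambda v: v,                    # Beer's law: A ~ c
        "transmittance": lambda v: -math.log10(v),       # A = -log10(T)
        "percent_T":     lambda v: 2.0 - math.log10(v),  # A = 2 - log10(%T)
        "resistance":    lambda v: 1.0 / v,              # conductance ~ c
        "conductance":   lambda v: v,
        "linear":        lambda v: v,
        "inverse":       lambda v: 1.0 / v,
    }

    def to_linear(values, kind):
        # Convert raw instrument readings to linear concentration units.
        return [LIN[kind](v) for v in values]

    print(to_linear([80.0, 35.0, 13.0], "percent_T"))  # equivalent absorbances
    print(to_linear([100.0, 110.0], "resistance"))     # equivalent conductances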

Weighting Options

Occasionally it is desirable to use points which are weighted as a function of the data point values. If, for example, aliquots taken at various reaction times are being titrated and the titration data are entered as input data, an aliquot which requires only 0.15 ml of titrant should not be weighted equally with one which requires 5.32 ml. Also, a %T reading of 35% should be weighted considerably more heavily than either a 5% or a 95% reading. On the other hand, because conductance values are generally calculated by taking the reciprocals of measured resistances, a conductance value of 0.01 should be weighted less than one of 0.001. In many cases, weighting according to the point values is unnecessary because (1) the variance of the weights is not large, and (2) the logarithmic values of concentration which are used tend to minimize all except very gross differences in weights. The least-squares technique does bias points taken at large times more heavily, and this may be sufficient to handle those cases where the least precise measurements are taken at very short times. If scaled weighting of all points is desired, the following choices are available (a numerical illustration of the spectral option follows the list):

Linear, proportional to the point value
Inverse, proportional to the reciprocal of the point value
Spectral, proportional to (absorbance · 10^(−absorbance))²
Exponential, proportional to log(linear weight)
Automatic, proportional to log(linear, inverse, or spectral weight), depending on the type of input data
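The behavior demanded of the spectral option can be verified numerically. In the illustrative Python sketch below (names invented), the weight (A · 10^−A)² peaks near A = 0.43, i.e., near 37 %T, so a 35 %T reading does indeed outweigh both a 5% and a 95% reading.

    import math

    def spectral_weight(absorbance):
        # (A * 10**-A)**2 peaks near A = 0.43 (about 37 %T), so mid-scale
        # readings receive the greatest weight, as required above.
        return (absorbance * 10.0 ** (-absorbance)) ** 2

    # A 35 %T reading outweighs both a 5 %T and a 95 %T reading:
    for pct in (5.0, 35.0, 95.0):
        a = 2.0 - math.log10(pct)  # equivalent absorbance
        print(pct, round(spectral_weight(a), 5))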

If none of the scaled weighting options is used, each point is given the individual weight specified as the points are read in (unspecified weights are assumed to be unity). It is sometimes desirable to assign these individual weights to particular points which one has reason to believe are either more or less precise than the rest of the data points. For example, one might weight the first several points of a kinetic run only half as much as subsequent points. It is possible to use both methods simultaneously. In this case, the weights are first scaled, then multiplied by the value assigned at input, and finally normalized to the sum of the input weights. Individual weight specification is possible even when the scaling option is deleted.
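The combination rule just described (scale, multiply by the input weight, then normalize to the sum of the input weights) amounts to a few lines; an illustrative Python rendering:

    def combine_weights(scaled, individual):
        # Scale, multiply by the weights assigned at input, then normalize
        # so the combined weights sum to the sum of the input weights.
        combined = [s * w for s, w in zip(scaled, individual)]
        norm = sum(individual) / sum(combined)
        return [c * norm for c in combined]

    # First two points judged half as reliable as the rest:
    print(combine_weights([0.4, 0.9, 1.6, 2.5, 3.6],
                          [0.5, 0.5, 1.0, 1.0, 1.0]))  # sums to 4.0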

Comparison Technique

Occasionally, it is useful to run two different reactions simultaneously and read the instrumental difference between them. One could observe the resistance of the standard solution, that of the solution being studied, and the resistance difference between the two. Comparison of the absolute measurements of the studied solution against the sum of the difference readings and the standard readings might serve as a check on the reliability of the data. In the case of a very well studied reference reaction, one might merely wish to observe the difference between it and the reaction of interest. This program can handle either the case of three sets of data, where it compares the (reference + difference) against the other, or the case of a single set of difference readings, if the line parameters for the standard reaction are also input. The linear concentration sum of (reference + difference) is calculated using the formulas

C_t = Lin[Act(Sign · e^(intercept_ref − k_ref·t) + C_∞,ref) + Act(ΔC_t)]    (2)

C_t = Lin[Act(C_ref,t) + Act(ΔC_t)]    (3)

where Lin is a function specifying the linear equivalent of a measured value, Act is a function specifying the measured equivalent of a linear value, and Sign is ±1 and gives the correct sign to e^(intercept_ref − k_ref·t); the rest of the symbols are self-explanatory. These formulas are necessary because the sums of the actual measured values may not be equal to the sums of the equivalent concentration values, depending on the original data. For example, if the reference infinity point value were 100 ohms and the difference infinity value were 10 ohms, the linear concentration value should be 1/110, not 1/100 + 1/10. It is easier to use these equations, and handle the data normally elsewhere, than to treat difference data as a special case throughout the entire program.
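The resistance example can be made concrete. The sketch below (Python; lin, act, and the other names are invented shorthand, written only for the inverse/resistance case) reproduces the 1/110 arithmetic of eqn. (3).

    def lin(resistance):
        # Measured ohms to linear concentration units (conductance ~ c).
        return 1.0 / resistance

    def act(linear_value):
        # The inverse of lin: linear units back to measured ohms.
        return 1.0 / linear_value

    def combined_linear(ref_linear, diff_linear):
        # Eqn. (3): sum the measured equivalents, then re-linearize.
        return lin(act(ref_linear) + act(diff_linear))

    # Reference infinity of 100 ohms, difference infinity of 10 ohms:
    print(combined_linear(lin(100.0), lin(10.0)))  # 1/110, not 1/100 + 1/10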

Infinity Point Adjustment

It is sometimes desirable to vary the infinity point from the value measured in a particular run. Although the infinity point can be measured as precisely as any other point in the study, the proportionality constant relating measured values to actual concentration often varies slightly at this point. In addition, certain solvolysis reactions involve an equilibrium, and the infinity value may be more influenced by this equilibrium than data points taken earlier in the reaction. A method of calculating the best infinity value is hence extremely advantageous. The experimental data presented later will illustrate some of these advantages, but the critical influence of C_∞ can best be illustrated with synthetic data, where there are no experimental measurement difficulties to obscure the observation. Accordingly, we have presented in Table 1 a set of absorbance data representative of a first-order reaction corresponding to the range of 0.205 to 0.900 (80 to 13%T) with a designed C_∞ of 1.000. If these data are fitted to eqn. (1) and C_∞ is allowed to vary, the standard deviation of the least-squares line, as is seen from Figure 2, is strongly dependent on the exact value of C_∞.

Table 1. Theoretical Data Derived for a Typical First-Order Reaction Studied Spectroscopically (columns: time in minutes and absorbance)

Figure 2. The standard deviation of the fit of data in Table 1 as a function of C_∞.

The analytically important region is the sharply localized minimum about 1.000. This value of C_∞ gives the best fit to the experimental data. The specific depth and width of the minimum will, of course, vary with the type and quality of the data. At larger values of C_∞ the curve is well behaved and the standard deviation falls smoothly and asymptotically toward zero; for smaller values of C_∞ the curve is discontinuous between the first and last C_t values; i.e., the standard deviation is infinite whenever C_∞ is equal to C_t.

The algorithm which searches for the best C_∞ requires three parameters, CINC, QUIT, and TOL, which are read in with the data, and traces the following basic steps: (1) The experimental value of C_∞ is used to compute a rate constant and its uncertainty. (2) Rate constants and their uncertainties are then calculated for the trial values C_∞·(1 ± CINC). (3) If either of these trial values yields a rate constant with a smaller uncertainty, then C_∞ is replaced by that trial value; if not, CINC is replaced by CINC/2. Step (2) is then repeated. This continues until (4a) CINC < QUIT, or (4b) |C_∞ − C_∞(experimental)| > TOL.
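An illustrative rendering of this search follows (Python; the fit helper, the synthetic data, and all names are invented, and the program's own convergence tests may differ in detail). CINC, QUIT, and TOL keep their roles as the trial increment, the convergence limit, and the runaway guard.

    import math

    def slope_uncertainty(times, readings, c_inf):
        # Standard error of the slope from fitting ln|c_inf - C| vs. t;
        # a self-contained helper for the sketch below.
        y = [math.log(abs(c_inf - c)) for c in readings]
        n = len(times)
        sx, sy = sum(times), sum(y)
        sxx = sum(t * t for t in times)
        sxy = sum(t * v for t, v in zip(times, y))
        slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
        b = (sy - slope * sx) / n
        s2 = sum((v - b - slope * t) ** 2 for t, v in zip(times, y)) / (n - 2)
        return math.sqrt(s2 * n / (n * sxx - sx * sx))

    def best_c_inf(times, readings, c_exp, cinc, quit_, tol):
        c = c_exp                                           # step (1)
        u = slope_uncertainty(times, readings, c)
        while cinc >= quit_ and abs(c - c_exp) <= tol:      # steps (4a), (4b)
            improved = False
            for trial in (c * (1.0 + cinc), c * (1.0 - cinc)):  # step (2)
                ut = slope_uncertainty(times, readings, trial)
                if ut < u:
                    c, u, improved = trial, ut, True        # step (3): accept
            if not improved:
                cinc /= 2.0                                 # step (3): halve CINC
        return c

    # Synthetic data with a true C_infinity of 1.000 but a deliberately
    # poor experimental infinity reading of 0.970:
    ts = [float(t) for t in range(1, 16)]
    cs = [1.0 - 0.795 * math.exp(-0.08 * t) for t in ts]
    print(best_c_inf(ts, cs, 0.970, cinc=0.02, quit_=1.0e-6, tol=0.10))

Run on these synthetic data, the search walks the trial infinity value back toward the designed 1.000, mirroring the behavior of the minimum in Figure 2.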