How To Determine the Best Straight Line

S. R. Logan
University of Ulster at Coleraine, Coleraine, BT52 1SA, N. Ireland

There are many instances in chemistry where theory confidently predicts that one function of experimental variables should be linearly related to another, and that from the resulting straight line some other parameter may be calculated from the slope or the intercept. Indeed, it used to be said, only partly in jest, that physical chemistry was the science of drawing straight lines. For years the least-squares regression line has been considered the appropriate formula to use in order to determine the optimum straight line in an efficient and objective manner. In this era of the personal computer, least-squares packages are widely used in data treatment by scientists and social scientists alike. However, in the course of this widespread usage, this procedure is without doubt being applied in instances where a basic assumption on which the treatment is based is quite untenable.

The Least-Squares Treatment

In deriving the equations for the least-squares regression line (1), we assume that the values of the abscissa (x) are all accurate and that there are random errors in the values of the ordinate (y). Also, it is implicitly assumed that each point is liable to the same range of random errors. Sometimes this assumption is quite realistic, but on occasions it is totally untrue. Even when it is abundantly clear that some points have a much greater experimental uncertainty than others, the least-squares regression package is still called up on the computer to specify the best line to fit the data. Convenience triumphs over rationality. As a simple if slightly exaggerated example, let us refer to Figure 1, where some of the points show appreciable scatter.


Figure 1. A plot of y against x for points with widely varying errors in the ordinate. The lines are the least-squares line, y = 0.525x + 2.485 (dotted), and the line from eqs 4 and 5, y = 0.598x + 2.025.

The least-squares regression line, drawn here as a dotted line, treats every point as having equal validity and reliability. The error bars show that some points can claim much greater accuracy than others and, logically, the best straight line should reflect this.

Suppose that for each of a set of n points we are provided with the coordinates (xi, yi) and also the limit of error, σi, assessed as the standard deviation of a number of independent measurements of the ordinate, yi. For the line given by the equation

y = mx + c    (1)

the residual in y of the ith point is given by

y − yi = mxi + c − yi    (2)

An Improved Assumption

The familiar least-squares line is derived on the arbitrary assumption that the best straight line minimizes the sum of the squares of these residuals. A more reasonable aim in the present scenario is to minimize S, the sum of the squares of the ratio of each residual in y, (y − yi), to the corresponding assessed error, σi; that is, to minimize

S = Σ [(mxi + c − yi)/σi]²    (3)

Equating both (∂S/∂m) and (∂S/∂c) to zero specifies the conditions for S to be a minimum and leads to these values, called m' and c' to distinguish them from the corresponding parameters m and c obtained from the conventional least-squares treatment:

m' = [Σ(1/σi²) Σ(xiyi/σi²) − Σ(xi/σi²) Σ(yi/σi²)] / [Σ(1/σi²) Σ(xi²/σi²) − (Σ(xi/σi²))²]    (4)

c' = [Σ(xi²/σi²) Σ(yi/σi²) − Σ(xi/σi²) Σ(xiyi/σi²)] / [Σ(1/σi²) Σ(xi²/σi²) − (Σ(xi/σi²))²]    (5)

where all the summations are from i = 1 to i = n.


The line y = m'x + c' has also been drawn in Figure 1, and it may be noted that, in relation to the varying accuracies of the points, it occupies a position much closer to the "theoretical" line of y = 0.6x + 2.0. Sometimes the significant quantity derived from the best straight line is the ratio of slope to intercept. In this instance, the least-squares line gives a figure which is only 0.71 times that obtained using eqs 4 and 5, and has 18 times the error of the latter. Although the differences in the sizes of the error bars in Figure 1 may seem exaggerated, circumstances may well arise where even greater variation occurs. As an example, consider a parameter measured with a constant uncertainty, where the function to be plotted on the graph is its reciprocal. Thus, the experimental values of 0.6 ± 0.1 and 1.8 ± 0.1 would generate points at 1.67 ± 0.3 and 0.555 ± 0.03. In such cases the use of eqs 4 and 5 poses no problems where there are only a few experimental points, and assessing the error of each is quite a simple task.

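To make the arithmetic of eqs 4 and 5 concrete, a short routine along the following lines can be used. This is only a sketch: the function name and the data points and errors shown are illustrative, not taken from Figure 1.

```python
# Sketch of the weighted least-squares line of eqs 4 and 5, in which each
# point is weighted by the reciprocal of the square of its assessed error.
# The data points and errors below are illustrative values only.

def weighted_line(x, y, sigma):
    """Return the slope m' and intercept c' of eqs 4 and 5."""
    w    = [1.0 / s ** 2 for s in sigma]
    Sw   = sum(w)
    Swx  = sum(wi * xi for wi, xi in zip(w, x))
    Swy  = sum(wi * yi for wi, yi in zip(w, y))
    Swxx = sum(wi * xi * xi for wi, xi in zip(w, x))
    Swxy = sum(wi * xi * yi for wi, xi, yi in zip(w, x, y))
    denom = Sw * Swxx - Swx ** 2
    m = (Sw * Swxy - Swx * Swy) / denom     # eq 4
    c = (Swxx * Swy - Swx * Swxy) / denom   # eq 5
    return m, c

# Points of widely differing reliability, in the spirit of Figure 1.
x     = [1.0, 2.0, 3.0, 4.0, 5.0]
y     = [2.7, 3.1, 3.9, 4.2, 5.1]
sigma = [0.05, 0.4, 0.05, 0.5, 0.1]
print(weighted_line(x, y, sigma))
```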

Treating Kinetic Data

In making kinetic measurements using a stopped-flow system or a spectrophotometer linked to a computer, one normally has so many experimental points that it is hardly feasible to assess the limits of error of each individually. But in some circumstances the principles underlying eqs 4 and 5 are easily incorporated despite the large number of data points. Many kinetic experiments are carried out under conditions where first-order or pseudo-first-order kinetics are observed. The relevant equation is then

ln (A∞ − At) = ln (A∞ − A0) − k't    (6)

where A0, At, and A∞ denote the respective values of the absorbance at t = 0, t = t, and t = ∞, and k' is the phenomenological first-order rate constant.

Figure 2. A plot of ln (A∞ − At) against t, using the exponential decay (A∞ − At) = e^(−0.0800t) with random errors added to the points. The least-squares treatment yields the line ln (A∞ − At) = 0.101 − 0.0910t; eqs 4 and 5 generate the line ln (A∞ − At) = −0.007 − 0.0794t.

Assuming that the absorbance increases during the reaction, applying this equation involves, in effect, a plot of ln (A∞ − At) against t. The difficulty is that as (A∞ − At) becomes smaller, its natural logarithm becomes increasingly uncertain, so that the notional limits of error continually increase with increasing values of t. Assuming that the distribution of errors in (A∞ − At) is random, with an absolute error of δ equally probable regardless of the magnitude of (A∞ − At), which for convenience we shall call ΔA, we have

σi = δ/ΔAi    (7)

On this basis, the limit of error of each point on the graph of ln ΔA against t is inversely proportional to ΔA, so that individual assessments of errors are unnecessary. Starting from the exponential decay curve

(A∞ − At) = e^(−0.0800t)    (8)

and taking points at unit intervals up to t = 40 (i.e., over more than four half-lives), errors were randomly introduced by computer. The ensuing first-order plot is shown in Figure 2, with error bars representing ±σi. These data were treated using eqs 4 and 5, taking each error as (ΔA)⁻¹. On this basis the first-order rate constant k' is given by

k' = [Σ ti(ΔAi)² Σ (ΔAi)² ln ΔAi − Σ (ΔAi)² Σ ti(ΔAi)² ln ΔAi] / [Σ (ΔAi)² Σ ti²(ΔAi)² − (Σ ti(ΔAi)²)²]    (9)
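In code, this weighting amounts to applying eqs 4 and 5 with σi = 1/ΔAi. The sketch below assumes the weighted_line routine given earlier; the rate constant, noise level, and random seed are illustrative stand-ins for the data of Figure 2, not the values actually used here.

```python
import math, random

# Sketch of the weighting used for first-order kinetic data: the error of
# each point on the ln(dA) vs t plot is taken as 1/dA, so eqs 4 and 5 can
# be applied with sigma_i = 1/dA_i, which is equivalent to weighting each
# point by dA_i**2 (eq 9).  The rate constant, noise level, and seed are
# illustrative; weighted_line is the routine sketched earlier.

def fit_rate_constant(data):
    """data: list of (t, dA) pairs with dA > 0.  Returns k' = -slope."""
    t     = [ti for ti, _ in data]
    y     = [math.log(a) for _, a in data]
    sigma = [1.0 / a for _, a in data]      # assessed error of ln(dA)
    slope, _intercept = weighted_line(t, y, sigma)
    return -slope

random.seed(1)
k_true, noise = 0.0800, 0.02
data = []
for ti in range(1, 41):                     # unit intervals up to t = 40
    dA = math.exp(-k_true * ti) + random.gauss(0.0, noise)
    if dA > 0:                              # the logarithm needs a positive value
        data.append((ti, dA))

print("k' =", fit_rate_constant(data))
```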


From eq 9, k' was evaluated as 0.0794, which is obviously much closer to the "correct" value of 0.0800 than the figure of 0.0910 obtained from Figure 2 using the least-squares procedure. The latter line is probably unduly influenced by the increasingly unreliable points at high values of t.

Distortion from Large Errors

Among chemists carrying out experiments of this nature, it is normal to use the traditional least-squares regression formula, but to limit the damage done by points with large errors in ln (ΔA) by deleting all points beyond a certain stage. However, there is also a snag about using eq 9 for a set of data that includes very small values of the absorbance. The difficulty is that, where the random fluctuations in the reading ΔA result in a value less than that which fits the perfect exponential decay, this point is less highly weighted than if ΔA were the same amount greater than the notional value on the exponential decay curve. For example, if the notional "correct" value of ΔA were 0.08 and the error δ were 0.03, the point at (ΔA − δ) = 0.05 carries less than a quarter of the weighting of the point at (ΔA + δ) = 0.11. This point is illustrated by the relative sizes of the error bars of the final dozen points in Figure 2. Consequently, the values of k' computed using eq 9 will tend to be too low. It would thus appear sensible to limit the use of eq 9 to sets of data from which unduly small values of ΔA have been excluded. As an arbitrary cutoff point, one might suggest five times the standard deviation. When applied to Figure 2, this would cause the deletion of all points after t = 20. For those remaining, the least-squares treatment gives k' = 0.084, whereas eq 9 leads to k' = 0.0799.
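Such a cutoff is a simple filter applied before the weighted fit. Continuing the illustrative sketch above, in which "noise" stands for the assessed standard deviation δ:

```python
# Discard points whose value of dA lies below five times the standard
# deviation before re-applying the weighted fit, as suggested above.
cutoff = 5 * noise
trimmed = [(ti, dA) for ti, dA in data if dA >= cutoff]
print("k' (trimmed) =", fit_rate_constant(trimmed))
```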

The Method of Guggenheim

In experimental work of this type, especially where the product species absorb more strongly at the wavelength of observation than do the reactants, one of the potential problems is the unreliability of the A∞ value, owing to its susceptibility to the slightest amount of further reaction over the ten half-lives necessary to achieve 99.9% reaction.


A long-standing solution to this problem is to use the method of Guggenheim (2), which involves a constant interval of time, Δ, and leads to the following equation:

ln (At+Δ − At) = ln (A∞ − A0) + ln (1 − e^(−k'Δ)) − k't    (10)

This shows that ln (At+Δ − At) should be a linear function of t, but once again the probable error in the points rises sharply with increasing t. The magnitude of the errors introduced into the set of data shown in Figure 2 was purposely made quite considerable, so that the distinction between the two treatments of results would be readily apparent. The points are, however, too erratic to be used in the Guggenheim treatment in that, even with Δ greater than one half-life, a negative value may be obtained for (At+Δ − At), so that the left side of eq 10 cannot be evaluated. They are also, for the reason given above, too erratic to be treated using eq 9. To look at the Guggenheim treatment, the random errors were therefore scaled down by a factor of 3. These modified data, less affected by error, were analyzed using eq 9. When Δ is 10 and 15, k' was found to be 0.0795 and 0.0805, in good agreement with each other and with the "correct" value. For these same values of Δ, the least-squares treatment yielded the values 0.0841 and 0.0810, once again reflecting the influence of erratic points at high values of t, leading to less accurate values.
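The Guggenheim construction combines readily with the weighting of eq 9. The sketch below, which again assumes the fit_rate_constant routine given earlier and uses purely illustrative values of A0, A∞, Δ, and the noise level, forms the differences At+Δ − At and fits their logarithm against t:

```python
import math, random

# Guggenheim treatment combined with the weighting of eq 9: form the
# differences A(t+D) - A(t) at a fixed interval D and fit their logarithm
# against t, taking the assessed error of each point as the reciprocal of
# the difference.  A0, Ainf, D, and the (reduced) noise level are
# illustrative; fit_rate_constant is the routine sketched earlier.

random.seed(1)
k_true, noise, D = 0.0800, 0.007, 10
A0, Ainf = 0.0, 1.0
A = [Ainf - (Ainf - A0) * math.exp(-k_true * t) + random.gauss(0.0, noise)
     for t in range(0, 41 + D)]

gugg = []
for t in range(1, 41):
    diff = A[t + D] - A[t]
    if diff > 0:                            # eq 10 requires a positive difference
        gugg.append((t, diff))

print("k' (Guggenheim) =", fit_rate_constant(gugg))
```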

This aspect of the Guggenheim method has given rise to a variation of that treatment, in the form of the Kezdy-Swinbourne equation (3):

At = At+Δ e^(k'Δ) + A∞(1 − e^(k'Δ))    (11)

When this equation was used on the same data, for Δ = 10, 15, and 20, the resulting values of k' were 0.0783, 0.0808, and 0.0787. These illustrate the contention (3) that the use of eq 11 should lead to a more reliable value than is obtained from eq 10 when all points are treated as equally valid. However, they also show that if the Guggenheim method is used in conjunction with eq 9, then the resulting rate constant should be at least as reliable as that obtained by the Kezdy-Swinbourne method. The latter, of course, uses a plot in which both the ordinate and the abscissa of each point are subject to error.

Literature Cited

1. See, for example, Francis, P. G. Mathematics for Chemists; Chapman and Hall: London, 1984; p 178.
2. Guggenheim, E. A. Phil. Mag. 1926, 2, 538.
3. Swinbourne, E. S. Analysis of Kinetic Data; Nelson: London, 1971; p 81.