Statistical Analysis of Errors: A Practical Approach for an Undergraduate Chemistry Lab

Part 1. The Concepts

W. J. Guedens, J. Yperman, J. Mullens, and L. C. Van Poucke
Department SBG, Limburgs Universitair Centrum, B-3590 Diepenbeek, Belgium

E. J. Pauwels
Department of Electrical Engineering, Katholieke Universiteit Leuven, B-3001 Leuven, Belgium

In this paper we give a concise and practice-oriented introduction to the analysis and interpretation of measurement errors. An experiment almost never produces exactly the same result when repeated. The question then arises whether or not the differences are significant. In other words, are we observing random and uncontrollable fluctuations, or has something essentially changed? Thus, it is of paramount importance to specify the measurement error. Moreover, a detailed analysis of the inaccuracies that accumulate during the different steps in an experiment will allow us to identify the factors that have a major influence on the uncertainty of the final result. On the basis of this information we can then take appropriate actions to try and reduce the variation.

On many occasions the quantities of interest are not observed directly but must be computed from the experimental data. A set of rules to handle such situations can be found at various places in the literature (1-4). In this paper we show how all these rules are an application of a small number of basic ideas. Moreover, we want to illustrate how these rules can be applied to common laboratory situations. We will restrict our attention to the calculus of statistical errors because the case of significant digits has been thoroughly investigated by several authors (6-8).

Underlying Statistical Concepts

Suppose we prepare a large volume of a well-mixed solution of acid and then determine the pH by measuring small samples of this solution with a potentiometer. The results might be the following:

x₁ = 3.75, x₂ = …, x₃ = 3.67

This tells us the following.

- We are using a potentiometer whose precision limit is equal to 0.01/2 = 0.005 units.
- The uncertainty on the result is far greater than the precision limit, because the individual measurements differ by approximately 0.2 units.

The variability of the pH values is the cumulative result of a number of uncontrolled (or uncontrollable) fluctuations in the conditions affecting the measurement, for example, changes in temperature or illumination. Because these fluctuations occur at random, one can resort to statistical theory to analyze the results. However, before we can cast this problem in the framework of statistics, we must set up some notation.

- Let ξ be the unknown value of some physical or chemical parameter of interest, for example, the true value of the pH of the solution considered.
- Let x be the result of one particular measurement. X will denote the list of all the values we would obtain if we measured (under identical circumstances) an infinite number of samples taken from the same solution. Because we will (almost) never reproduce exactly the same result, we call X a random variable.

In most experiments the variable X will have a Gaussian (or normal) distribution. In other words, if we made an infinitely detailed histogram of an infinite number of individual results x, we would obtain the well-known bell-shaped curve that can be described by

f(x) = \frac{1}{\sigma\sqrt{2\pi}} \exp\!\left(-\frac{(x-\mu)^2}{2\sigma^2}\right)    (1)

Stated differently, the probability that X assumes a value between x and x + dx is f(x)dx. This distribution is completely determined by the parameters μ and σ², which denote the expectation (or mean) E(X) and the variance Var(X), respectively. If the mean E(X) = μ does not coincide with the parameter (value) of interest ξ, then there is a systematic error in our measurement (see Fig. 1). Detecting such a systematic error is far from trivial because we do not know ξ. Therefore we can only hope to spot such a discrepancy by scrutinizing both the results and the procedures. Below we assume that no systematic error is being made. Thus, we also assume that

E(X) = \mu = \xi
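To make the idea of a random variable concrete, the following minimal Python sketch simulates repeated pH readings as draws from a normal distribution; the values of μ and σ are hypothetical and not taken from the experiment above.

```python
import numpy as np

rng = np.random.default_rng(seed=1)

# Hypothetical "true" pH and spread -- illustrative values only
mu, sigma = 3.70, 0.08

# Three simulated potentiometer readings of the same solution
print(rng.normal(mu, sigma, size=3))   # no two readings agree exactly

# With very many repetitions, a density histogram of the readings
# traces out the bell-shaped curve f(x) given above
many = rng.normal(mu, sigma, size=100_000)
density, edges = np.histogram(many, bins=50, density=True)
```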


Figure 1. Probability density function

If possible, one will make not one but several independent measurements of the unknown parameter ξ. The rationale underlying this approach is twofold:

- By averaging out over the different results we can reduce the variability.
- The averaging will increase the normality of the data.

Below we will take a closer look at these two aspects of the averaging procedure.

Reduction of Variability

Let us return for a moment to the example introduced at the beginning of this section. We determined the pH three times and denoted the results x₁, x₂, x₃. This is where one usually stops the experiment, but it is conceivable that one would repeat the procedure (under the same experimental conditions). The result of the first measurement of the second run would be denoted x₁⁽²⁾, and similarly one would get x₂⁽²⁾ and x₃⁽²⁾. Given time and resources (and a lot of patience) diligent scientists could repeat this procedure over and over, thus producing three columns (say X₁, X₂, X₃) in which they would store the results of the first, second, and third measurement of each run (see Fig. 2). Then the distributions (the infinitely detailed histograms) corresponding to each of these columns are similar, because the results will not depend on whether the actual measurement is the first, rather than the second or third, in the run. For each run one can compute the mean x̄ of the three observations:

\bar{x} = \frac{x_1 + x_2 + x_3}{3}

and we can store this in a separate column that we will call X̄.

Basic statistics now tell us that the histogram (distribution) of this (infinite) column has the same mean but a smaller variance than the columns X₁, X₂, X₃. More precisely,

E(\bar{X}) = \mu \qquad \text{and} \qquad \mathrm{Var}(\bar{X}) = \frac{\mathrm{Var}(X)}{3}

In general, when repeating the measurement n times independently, we get the result shown below.

Let X₁, X₂, ..., Xₙ be n independent repetitions of a measurement, and let

\bar{X} = \frac{1}{n} \sum_{i=1}^{n} X_i

be the corresponding sample mean. Moreover, assume that

E(X_i) = \mu \qquad \text{and} \qquad \mathrm{Var}(X_i) = \sigma^2

Then

E(\bar{X}) = \mu \qquad \text{and} \qquad \mathrm{Var}(\bar{X}) = \frac{\sigma^2}{n}

A proof of this standard result can be found in any textbook on statistics (4). Below we will adopt standard practice and write σ²_X̄ for Var(X̄). These results confirm our intuitive idea that X̄ has the same mean as the Xᵢ's. However, by averaging, we reduce the variability by a factor equal to the sample size n.
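This variance reduction is easy to see in a simulation; the following sketch uses hypothetical values for μ, σ, and n.

```python
import numpy as np

rng = np.random.default_rng(seed=2)
mu, sigma, n = 3.70, 0.08, 3          # hypothetical values

# Many "runs" of n measurements each: the rows of the table in Fig. 2
runs = rng.normal(mu, sigma, size=(200_000, n))
xbar = runs.mean(axis=1)              # one sample mean per run

print(xbar.mean())                    # ~ mu: the mean is unchanged
print(runs.var(), sigma**2)           # Var(X)    ~ sigma^2
print(xbar.var(), sigma**2 / n)       # Var(Xbar) ~ sigma^2 / n
```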

Improving the Normality

The use of X̄ has a second advantage: Most of the treatment of errors is based on the assumption that the underlying distribution is Gaussian. This need not be true. However, a general result in probability theory (the so-called Central Limit Theorem) assures us that, irrespective of the distribution of the Xᵢ, the sample mean X̄ will be approximately normal. Also, this approximation will improve as the sample size n increases.
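The Central Limit Theorem can likewise be illustrated numerically. In the sketch below the individual measurements are deliberately drawn from a markedly non-Gaussian (exponential) distribution, chosen purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(seed=3)

# Strongly skewed individual "measurements" (skewness = 2)
raw = rng.exponential(scale=1.0, size=(100_000, 30))

for n in (1, 5, 30):
    means = raw[:, :n].mean(axis=1)
    z = (means - means.mean()) / means.std()
    # Sample skewness: 0 for a Gaussian; it shrinks roughly as 2/sqrt(n)
    print(n, round(float((z**3).mean()), 2))   # ~2.0, ~0.9, ~0.37
```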

From the above considerations it becomes clear how we can quantify the uncertainty due to statistical fluctuations. Given n observations x₁, ..., xₙ (made under comparable conditions) of a quantity ξ, we know that the sample mean X̄ is approximately normal. Hence, there is a high probability that its mean E(X̄) = μ (which by assumption is equal to ξ) will be located in a neighborhood of X̄. More precisely, we have the following approximate result:

P(\bar{X} - 3\sigma_{\bar{X}} \le \mu \le \bar{X} + 3\sigma_{\bar{X}}) \approx 0.997    (2)

For all practical purposes this means that we are certain that the true value ξ (where ξ = μ) of the quantity is confined within the following interval:

[\bar{x} - 3\sigma_{\bar{X}},\; \bar{x} + 3\sigma_{\bar{X}}]    (3)

Thus, we will call 3σ_X̄ the theoretical absolute (statistical) error, AE(x̄), on the result.¹ "Theoretical" is used here because in practice we do not know Var(X̄) but must estimate it from the data. This is done by realizing that

s^2 = \frac{1}{n-1} \sum_{i=1}^{n} (x_i - \bar{x})^2

is an estimator of Var(X) = σ². Because

\mathrm{Var}(\bar{X}) = \frac{\sigma^2}{n}

we conclude that we can estimate σ_X̄ by

s_{\bar{X}} = \frac{s}{\sqrt{n}}

¹Equation 2 is based on the well-known rules that for the normal population approximately 68% of the data are situated within μ ± σ, 95.5% within μ ± 2σ, and 99.7% within μ ± 3σ. Equation 2 is only approximately true because in most cases X̄ will only be approximately normal. Moreover, if we must estimate σ, as will always be the case in practice, we must use the Student t-distribution instead of the normal.
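These coverage figures are easy to verify numerically; a minimal check using scipy (the choice n = 3, i.e., 2 degrees of freedom, is just an example):

```python
from scipy import stats

# Two-sided coverage of mu +/- k*sigma under the normal distribution
for k in (1, 2, 3):
    p = stats.norm.cdf(k) - stats.norm.cdf(-k)
    print(k, round(p, 4))                  # 0.6827, 0.9545, 0.9973

# When sigma is estimated from a small sample (n = 3, i.e. 2 degrees
# of freedom), the Student-t coverage of a +/- 3 s interval is lower:
print(round(stats.t.cdf(3, df=2) - stats.t.cdf(-3, df=2), 3))   # 0.905
```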


Figure 2. Tabulated values of xᵢ⁽ʲ⁾; each row is one possible sample of three measurements.



From these considerations we construct the working definition given below.

If x₁, ..., xₙ are observations of an unknown quantity ξ, then our best estimate of this quantity is given by

\bar{x} = \frac{1}{n} \sum_{i=1}^{n} x_i

The absolute (statistical) error AE(x̄) on this result is given by

AE(\bar{x}) = 3 s_{\bar{X}} = \frac{3s}{\sqrt{n}}

Dividing both sides by x̄, we obtain the relative (statistical) error on the result:

RE(\bar{x}) = \frac{AE(\bar{x})}{\bar{x}}
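The working definition translates directly into a few lines of Python; the three readings below are hypothetical values used only for illustration.

```python
import math

def best_estimate(data):
    """Return the sample mean and the absolute statistical error
    AE = 3*s/sqrt(n) of the working definition above."""
    n = len(data)
    xbar = sum(data) / n
    s2 = sum((x - xbar) ** 2 for x in data) / (n - 1)  # estimator of Var(X)
    return xbar, 3 * math.sqrt(s2 / n)                 # 3 * s / sqrt(n)

xbar, ae = best_estimate([3.75, 3.58, 3.67])           # hypothetical readings
print(f"pH = {xbar:.2f} +/- {ae:.2f}")                 # pH = 3.67 +/- 0.15
```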

Calculus

General Results

Let us assume that we are interested in a function f(ξ, η) of two quantities ξ and η, of which we obtained several measurements x₁, ..., xₙ and y₁, ..., yₘ, respectively. On the basis of these observations we can determine the absolute (statistical) errors AE(x̄) and AE(ȳ). To see how these errors contribute to the final uncertainty on the result, we use Taylor's theorem (9). Because we can safely assume that the deviation of a typical observation X (resp. Y) with respect to the true value ξ (resp. η) is small, we get

f(X, Y) = f(\xi, \eta) + \frac{\partial f}{\partial x}(\xi, \eta)\,(X - \xi) + \frac{\partial f}{\partial y}(\xi, \eta)\,(Y - \eta) + \text{higher order terms}    (4)

Discarding the higher order terms and taking the variance of both sides, we obtain

\mathrm{Var}(f(X, Y)) = \left(\frac{\partial f}{\partial x}\right)^2 \mathrm{Var}(X) + \left(\frac{\partial f}{\partial y}\right)^2 \mathrm{Var}(Y) + 2\,\frac{\partial f}{\partial x}\frac{\partial f}{\partial y}\,\mathrm{Cov}(X, Y)    (5)

We used the following elementary facts:

- Var(f(ξ, η)) = 0, because ξ and η are constant.
- Var(X − ξ) = Var(X) and Var(Y − η) = Var(Y), because translation of the data does not change the variance.

In almost all cases of interest the measurements X and Y are independent of each other. (For an important exception see the remark below.) This implies that the covariance Cov(X, Y) vanishes. Hence eq 5 reduces to

\mathrm{Var}(f(X, Y)) = \left(\frac{\partial f}{\partial x}\right)^2 \mathrm{Var}(X) + \left(\frac{\partial f}{\partial y}\right)^2 \mathrm{Var}(Y)    (6)

or, in terms of the absolute statistical error,

AE^2(f(\bar{x}, \bar{y})) = \left(\frac{\partial f}{\partial x}\right)^2 AE^2(\bar{x}) + \left(\frac{\partial f}{\partial y}\right)^2 AE^2(\bar{y})    (7)
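Eq 7 lends itself to a direct numerical implementation. The sketch below is our own illustration (the finite-difference step h and the example numbers are arbitrary): it estimates the partial derivatives by central differences and combines the absolute errors in quadrature, assuming independent measurements.

```python
import math

def ae_propagate(f, vals, aes, h=1e-6):
    """Propagate absolute errors through f via eq 7, estimating the
    partial derivatives at the sample means `vals` by central finite
    differences. Assumes independent measurements (covariances zero)."""
    total = 0.0
    for i, ae in enumerate(aes):
        up = list(vals); up[i] += h
        dn = list(vals); dn[i] -= h
        dfdx = (f(*up) - f(*dn)) / (2 * h)   # partial derivative wrt i-th variable
        total += (dfdx * ae) ** 2
    return math.sqrt(total)

# Example: f(x, y) = x * y with hypothetical means and absolute errors
print(ae_propagate(lambda x, y: x * y, [2.0, 5.0], [0.1, 0.2]))
# sqrt((5*0.1)^2 + (2*0.2)^2) = sqrt(0.41) ~ 0.64
```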

The covariance in eq 5 only vanishes if the measurements are indeed independent. Thus, the application of eq 7 shown below yields an erroneous result. Interpret x² = x·x as a function of two variables ξ and η, where both ξ and η are equal to x:

x^2 = x \cdot x = f(\xi, \eta) = \xi \cdot \eta

In this case the variables ξ and η, being identical, are certainly not independent. Applying eq 7, we get

AE^2(\bar{x}^2) = \bar{x}^2\,AE^2(\bar{x}) + \bar{x}^2\,AE^2(\bar{x}) = 2\bar{x}^2\,AE^2(\bar{x})

However, this result is wrong because we did not take into account that the covariance of the two factors does not vanish. In fact, Cov(X, X) = Var(X). Substituting this in eq 5, we get the correct value

AE^2(\bar{x}^2) = 4\bar{x}^2\,AE^2(\bar{x})

or, equivalently, RE(x̄²) = 2 RE(x̄). Alternatively, one can use the result in eq 10.

Applications

We will look at some frequently used functions.

Propagation of Statistical Errors in Sums and Differences

Let

f(x_1, \ldots, x_n) = \sum_i a_i x_i \qquad (a_i = \pm 1)

Then

\frac{\partial f}{\partial x_i} = a_i

and hence, in view of eq 7, this yields

AE^2\!\left(\sum_i a_i \bar{x}_i\right) = \sum_i a_i^2\,AE^2(\bar{x}_i) = \sum_i AE^2(\bar{x}_i)

In particular this implies

AE(\bar{x} + \bar{y}) = \sqrt{AE^2(\bar{x}) + AE^2(\bar{y})}    (8)
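Eq 8 is easy to confirm by simulation. In the sketch below the spreads 0.3 and 0.4 are hypothetical; since AE is just 3σ_X̄, standard deviations and absolute errors combine in the same way.

```python
import numpy as np

rng = np.random.default_rng(seed=4)

# Independent measurements of two quantities (hypothetical spreads)
x = rng.normal(10.0, 0.3, size=500_000)
y = rng.normal( 4.0, 0.4, size=500_000)

print((x + y).std())          # ~0.5, and (x - y).std() is the same
print(np.hypot(0.3, 0.4))     # sqrt(0.3^2 + 0.4^2) = 0.5, as eq 8 predicts
```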


Propagation of Statistical Errors when Multiplying by an Errorless Constant

Let

f(x) = ax

where a is an errorless constant. Then

AE(a\bar{x}) = |a|\,AE(\bar{x})

Dividing both sides by ax̄, we obtain

RE(a\bar{x}) = RE(\bar{x})

so multiplying by an errorless constant leaves the relative error unchanged.
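A short simulation confirms both statements; the conversion factor a = 4.184 and the measurement parameters are hypothetical values chosen for illustration.

```python
import numpy as np

rng = np.random.default_rng(seed=5)
x = rng.normal(25.0, 0.2, size=200_000)   # hypothetical readings
a = 4.184                                 # errorless conversion factor

print((a * x).std() / x.std())            # ~4.184: the AE scales by |a| ...
print((a * x).std() / (a * x).mean(),
      x.std() / x.mean())                 # ... while the RE is unchanged
```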

Propagation of Statistical Errors in Products, Quotients, and Powers

To avoid unnecessary complications we look at one particular example of a rational function. From this it will be clear how to handle similar functions. Therefore we concentrate on the following function:

f(w, x, y, z) = a\,\frac{w^q x^n y^m}{z^k}

where q, n, m, and k are positive (errorless) exponents and a is an errorless constant. It is easy to verify that the partial derivatives are given by

\frac{\partial f}{\partial w} = \frac{q}{w}\,f, \qquad \frac{\partial f}{\partial x} = \frac{n}{x}\,f, \qquad \frac{\partial f}{\partial y} = \frac{m}{y}\,f, \qquad \frac{\partial f}{\partial z} = -\frac{k}{z}\,f

Using eq 7, we obtain

AE^2(f(\bar{w}, \bar{x}, \bar{y}, \bar{z})) = \left(\frac{qf}{\bar{w}}\right)^2 AE^2(\bar{w}) + \left(\frac{nf}{\bar{x}}\right)^2 AE^2(\bar{x}) + \left(\frac{mf}{\bar{y}}\right)^2 AE^2(\bar{y}) + \left(\frac{kf}{\bar{z}}\right)^2 AE^2(\bar{z})    (9)

and dividing by f^2(\bar{w}, \bar{x}, \bar{y}, \bar{z}) we get

RE^2(f(\bar{w}, \bar{x}, \bar{y}, \bar{z})) = q^2\,RE^2(\bar{w}) + n^2\,RE^2(\bar{x}) + m^2\,RE^2(\bar{y}) + k^2\,RE^2(\bar{z})    (10)

Similarly, concentrating on the natural logarithm first, we see that, because

y = f(x) = \ln x \qquad \text{and hence} \qquad \frac{df}{dx} = \frac{1}{x}

eq 7 yields AE(ln x̄) = AE(x̄)/x̄ = RE(x̄).
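Eq 10 says that relative errors add in quadrature, each weighted by its exponent. A small Python helper (our own illustration, with hypothetical numbers) makes this rule easy to apply for any number of factors; note that the sign of an exponent (a quotient rather than a product) does not matter, since the exponents are squared.

```python
import math

def re_propagate(res, exps):
    """Eq 10 for f = a * w^q * x^n * y^m / z^k: relative errors add in
    quadrature, each weighted by its exponent. `res` are the relative
    errors RE, `exps` the matching exponents."""
    return math.sqrt(sum((p * r) ** 2 for p, r in zip(exps, res)))

# Hypothetical example: f = x^2 * y / z with RE(x)=1%, RE(y)=2%, RE(z)=0.5%
print(re_propagate([0.01, 0.02, 0.005], [2, 1, 1]))   # ~0.029, i.e. ~2.9%
```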