Anal. Chem. 1989, 61, 2324-2327

Figure 1. Sections of the voltammograms (see text) for dopamine (DA) and ascorbate (AA) in PBS, pH 7.4: (top) DA, 100 µmol/L; AA, 200 µmol/L; at an unmodified carbon paste electrode; (middle) DA, 20 µmol/L; AA, 200 µmol/L; at a stearate-modified carbon paste electrode; (bottom) DA, 100 µmol/L; AA, 500 µmol/L; at a tissue-treated stearate-modified carbon paste electrode.

The effect of brain tissue on the stearate-modified electrode is consistent with recent studies of similar action by brain tissue on unmodified carbon paste electrodes (24) and with reports of surfactant action on carbon paste electrodes (25). It has been proposed that in such cases the surfactant solubilizes the oil and other hydrophobic elements of the paste, leaving behind a "clean" graphite surface. We propose that a similar mechanism occurs in vivo. The stearate-modified electrode implanted in brain tissue is introduced to the hydrophobic environment of lipids and proteins. These take the role played by the surfactants mentioned above and remove the oil and other hydrophobic/lipophilic moieties from the electrode surface, the result being a modification of the electrode surface and an increase in the rate of electron transfer, as shown (Figure 1 and Table I). A reduction in sensitivity is found at the tissue-treated electrodes compared to the electrodes before treatment and is most likely a result of partial blockage of the electrode surface due to the adsorption of lipids and proteins (26).

A linear sweep voltammetric wave, attributed to dopamine oxidation at the stearate-modified electrode in vivo, has been reported by Lane et al. (14). The wave, centered at +100 mV vs Ag/AgCl, has a peak height of the order of 1 nA. This peak occurs at a potential (vs SCE) corresponding to large ascorbate oxidation at the tissue-treated electrode. A current of 1 nA for dopamine in vitro would correspond to a concentration of approximately 5 µmol/L (10). Taking into account the restricted compartment environment of the electrode in the brain (27, 28), this concentration represents a gross underestimate of the concentration of dopamine needed to produce a current of this size. Considering the ratio of 10000:1 for ascorbate to dopamine concentrations in the striatum, it is likely that the current recorded in vivo is due almost entirely to ascorbate.

In conclusion, the results indicate that while stearate-modified electrodes have the desired properties for electrochemical discrimination of ascorbate and dopamine before they are implanted in brain tissue, these properties are lost after implantation. Taken together, the literature data and the present findings also suggest that these electrodes are neither selective nor sensitive enough to detect dopamine levels in vivo.

LITERATURE CITED

(1) Marcenac, F.; Gonon, F. Anal. Chem. 1985, 57, 1778-1779.
(2) Crespi, F.; Martin, K. F.; Marsden, C. A. Neuroscience 1988, 27, 885-896.
(3) Stamford, J. A. Anal. Chem. 1986, 58, 1033-1036.
(4) Kasser, R. J.; Renner, K. J.; Feng, J. X.; Brazell, M. P.; Adams, R. N. Brain Res. 1988, 475, 333-344.
(5) Ewing, A. G.; Wightman, R. M. J. Neurochem. 1984, 43, 570-577.
(6) Adams, R. N.; Marsden, C. A. In Handbook of Psychopharmacology; Plenum Press: New York, 1982; Vol. 15, pp 1-74.
(7) Voltammetry in the Neurosciences; Justice, J. B., Jr., Ed.; Humana Press: Clifton, NJ, 1987.
(8) Measurement of Neurotransmitter Release In Vivo; Marsden, C. A., Ed.; IBRO Handbook Series; J. Wiley and Sons: Chichester, 1984; Vol. 6.

(9) Marsden, C. A.; Joseph, M. H.; Kruk, Z. L.; Maidment, N. T.; O'Neill, R. D.; Schenk, J. O.; Stamford, J. A. Neuroscience 1988, 25, 389-400.
(10) Blaha, C. D.; Lane, R. F. Brain Res. Bull. 1983, 10, 861-864.
(11) Gelbert, M. B.; Curran, D. J. Anal. Chem. 1986, 58, 1028-1032.
(12) Lane, R. F.; Blaha, C. D.; Hari, S. P. Brain Res. Bull. 1987, 19, 19-27.
(13) Broderick, P. A. Life Sci. 1985, 36, 2269-2275.
(14) Lane, R. F.; Blaha, C. D.; Phillips, A. G. Brain Res. 1986, 397, 200-204.
(15) Glynn, G. E.; Yamamoto, B. K. Brain Res. 1989, 481, 235-241.
(16) Gonon, F. G.; Navarre, F.; Buda, M. J. Anal. Chem. 1984, 56, 573-575.
(17) Kelly, R. S.; Wightman, R. M. Brain Res. 1987, 423, 79-87.
(18) Church, W. H.; Justice, J. B., Jr. Anal. Chem. 1987, 59, 712-716.
(19) O'Neill, R. D.; Fillenz, M.; Albery, W. J.; Goddard, N. J. Neuroscience 1983, 9, 87-93.
(20) Oldham, K. B. J. Electroanal. Chem. 1985, 184, 257-287.
(21) Sternson, A. W.; McCreery, R.; Feinberg, B.; Adams, R. N. J. Electroanal. Chem. 1973, 46, 313-321.
(22) Dayton, M. A.; Ewing, A. G.; Wightman, R. M. Anal. Chem. 1980, 52, 2392-2396.
(23) Kovach, P. M.; Ewing, A. G.; Wilson, R. L.; Wightman, R. M. J. Neurosci. Methods 1984, 10, 215-227.
(24) Ormonde, D. E.; O'Neill, R. D. J. Electroanal. Chem. 1989, 261, 463-469.
(25) Albahadily, F. N.; Mottola, H. A. Anal. Chem. 1987, 59, 958-962.
(26) Nelson, A.; Auffret, N. J. Electroanal. Chem. 1988, 248, 167-180.
(27) Albery, W. J.; Goddard, N. J.; Beck, T. W.; Fillenz, M.; O'Neill, R. D. J. Electroanal. Chem. 1984, 161, 221-233.
(28) Cheng, H.-Y. J. Electroanal. Chem. 1982, 135, 145-151.

Paul D. Lyne
Robert D. O'Neill*

Chemistry Department
University College Dublin
Belfield, Dublin 4, Ireland

RECEIVED for review April 28, 1989. Accepted July 20, 1989. We thank EOLAS for a grant to P.D.L. under the Basic Research Awards scheme.

Estimating Error Limits in Parametric Curve Fitting

Sir: The recent article by Phillips and Eyring in this journal (1) presented an interesting solution, based on the sequential simplex method, to the problem of estimating errors in nonlinear parametric fitting. The authors did not mention two other simple, powerful, and reliable methods, the jackknife and the bootstrap (2-6).

Table I. The Reference "Experimental" Data Set^a

    t        A           t        A
   1.5     0.111        9.0     0.325
   1.5     0.109       12.0     0.326
   3.0     0.169       12.0     0.330
   3.0     0.172       15.0     0.362
   4.5     0.210       15.0     0.383
   4.5     0.210       18.0     0.381
   6.0     0.251       18.0     0.372
   6.0     0.255       24.0     0.422
   9.0     0.331       24.0     0.411

^a These "noisy" data (from ref 16) are simulated results of a first-order kinetics experiment. The absorbance (A) is a function of time (t). The function has the form A = A∞(1 - exp(-kt)), where A∞, the absorbance at t = ∞, and k, the rate constant, are the unknown parameters.

The need for simple and robust procedures to assess confidence limits in estimated parameters is widely perceived in all physical sciences. The jackknife and the bootstrap algorithms can provide a particularly simple solution. Other methods (for example, likelihood and lack-of-fit; ref 7), which will not be discussed here, can also be used.

Relatively few explicit references to the jackknife and the bootstrap are found in the chemical literature. Some applications of the bootstrap have been the determination of the confidence limits of correlation coefficients between elemental concentrations in meteorites (8), analysis of the correlation between blood lead content and blood pressure in policemen (9), determination of confidence bounds of means, errors, and correlation coefficients in air quality data (10), determination of confidence intervals for the timing of the DNA molecular clock in human phylogenesis (11), and analysis and interlaboratory comparison of exponential fits in specific-heat measurements (12). The jackknife has found application in the estimation of confidence intervals for parameters associated with quantitative structure-activity relationships (13), in uncertainty analysis in reactor risk estimation (14), and in outlier detection and error estimation in geothermometer calibration (15).

Concise operational descriptions are presented here and applied to a simple test problem in two parameters. Extension to problems with more parameters and/or variables is straightforward. Formal descriptions and proofs can be found in refs 3 and 6 and references therein. The jackknife is a finite algorithm (it requires a finite and a priori computable number of calculations) while the bootstrap is not: the number of computations needed is proportional to the precision asked of the results. The bootstrap can provide a better representation of the geometry of the confidence regions, at the cost of a significantly larger computational load. Both can use any curve-fitting program.

Let us consider the now classic data set (from ref 16) reproduced in Table I. The 18 "experimental" pairs (A, absorbance; t, time) simulate the results of a first-order kinetics experiment. They are to be fitted by the function A = A∞(1 - exp(-kt)), where A∞, the absorbance at t = ∞, and k, the rate constant, are the unknown parameters to be determined. Assuming that all points have identical statistical weight, that the error is normal (random with Gaussian distribution), and that it affects only the dependent variable A, use of any nonlinear least-squares parametric curve-fitting program gives

A∞ = 0.4043        k = 0.1698

as extensively published. The fitting program used in this work was SIMP, a public domain implementation of the simplex algorithm (17), ported to Borland Pascal in the PC-DOS environment.
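The same two-parameter fit can be reproduced with essentially any modern least-squares routine. The sketch below is an illustration only (it uses Python and SciPy rather than the SIMP/Pascal program of this work); the data are those of Table I, and the starting values are arbitrary.

# Sketch: nonlinear least-squares fit of the Table I data to A = A_inf*(1 - exp(-k*t)).
# Illustration with SciPy, not the SIMP/Pascal program used in this work.
import numpy as np
from scipy.optimize import curve_fit

t = np.array([1.5, 1.5, 3.0, 3.0, 4.5, 4.5, 6.0, 6.0, 9.0,
              9.0, 12.0, 12.0, 15.0, 15.0, 18.0, 18.0, 24.0, 24.0])
A = np.array([0.111, 0.109, 0.169, 0.172, 0.210, 0.210, 0.251, 0.255, 0.331,
              0.325, 0.326, 0.330, 0.362, 0.383, 0.381, 0.372, 0.422, 0.411])

def model(t, A_inf, k):
    # first-order kinetics: A = A_inf*(1 - exp(-k*t))
    return A_inf * (1.0 - np.exp(-k * t))

popt, pcov = curve_fit(model, t, A, p0=[0.4, 0.2])   # arbitrary starting values
A_inf, k = popt
print(f"A_inf = {A_inf:.4f}, k = {k:.4f}")           # approximately 0.4043 and 0.1698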

Table II. Parameters Fitting the Full Data Set (Table I) and the 18 Jackknife Subsets of 17 Points Each^a

      A∞               k                SSR

Full Set
  0.404 275 02    0.169 830 49    0.003 642 02

Diminished Sets
  0.405 939 18    0.166 835 86    0.003 217 55    (point 1 deleted)
  0.405 777 97    0.167 121 95    0.003 297 86    (point 2 deleted)
  0.405 177 94    0.168 074 16    0.003 576 97    (point 3 deleted)
  0.405 540 86    0.167 383 43    0.003 515 64    (point 4 deleted)
  0.403 581 31    0.171 362 37    0.003 600 41    (point 5 deleted)
  0.403 581 31    0.171 362 37    0.003 600 41    (point 6 deleted)
  0.403 652 97    0.171 502 97    0.003 580 29    (point 7 deleted)
  0.403 988 70    0.170 592 54    0.003 629 21    (point 8 deleted)
  0.404 293 19    0.168 054 53    0.003 414 61    (point 9 deleted)
  0.404 278 38    0.168 799 02    0.003 564 66    (point 10 deleted)
  0.406 488 95    0.170 112 28    0.002 932 80    (point 11 deleted)
  0.406 138 61    0.170 070 34    0.003 137 11    (point 12 deleted)
  0.406 020 11    0.168 898 68    0.003 516 72    (point 13 deleted)
  0.402 573 00    0.170 752 77    0.003 522 67    (point 14 deleted)
  0.405 275 02    0.169 083 22    0.003 620 91    (point 15 deleted)
  0.407 392 05    0.167 523 33    0.003 437 54    (point 16 deleted)
  0.396 140 77    0.177 452 69    0.002 866 60    (point 17 deleted)
  0.399 689 70    0.174 060 41    0.003 404 72    (point 18 deleted)

Mean
  0.404 196 11    0.169 946 83

Standard Deviation
  0.002 592 61    0.002 594 33

^a The full data set and the one-minus subsets were fitted by a PC-DOS Pascal port of program SIMP (ref 17), a public domain implementation of the simplex algorithm.

Table III. Error Analysis of Data in Table I

                          error in A∞          error in k
method                 (abs at t = ∞,       (rate constant,
                        value 0.404)         value 0.170)       ref
jackknife                  0.0107               0.0107            a
bootstrap                  0.0102               0.0103            a
sequential simplex         0.012                0.013             1
Marquardt                  0.009                0.010             1
Newton                     0.006                0.007             1

^a Present work.

" Present work. simplex algorithm (17) ported to Borland Pascal in the PC-DOS environment. The questions to be answered now are what is the error on the estimates of the parameters A, and 12 and what are the limits (e.g. at the 0.5,0.9,0.98 confidence level) on the possible range of these estimates?

THE JACKKNIFE

To estimate the error by the jackknife method:
1. Delete the first data point from the original data set.
2. Fit the "jackknifed" set to compute A∞^(1), k^(1).
3. Repeat steps 1 and 2 n times (where n is the number of data points), deleting in turn point 2, 3, ..., n, to obtain a set of n (A∞^(i), k^(i)) parameter sets.
4. Compute the jackknife estimate of the standard error of parameter p (here, p = A∞, k) as

s_jack(p) = {[(n - 1)/n] Σ_{i=1}^{n} [p^(i) - p^(.)]^2}^(1/2)

where

p^(.) = (1/n) Σ_{i=1}^{n} p^(i)
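A minimal sketch of steps 1-4, again in Python rather than the SIMP/Pascal toolchain used here, is the following; the model function and the data arrays t and A are assumed to be those of the fitting sketch given earlier.

# Sketch: jackknife standard errors of the fitted parameters (steps 1-4 above).
import numpy as np
from scipy.optimize import curve_fit

def jackknife_errors(model, t, A, p0=(0.4, 0.2)):
    n = len(t)
    estimates = []
    for i in range(n):                               # steps 1-3: drop point i, refit
        mask = np.ones(n, dtype=bool)
        mask[i] = False
        popt, _ = curve_fit(model, t[mask], A[mask], p0=p0)
        estimates.append(popt)
    estimates = np.asarray(estimates)                # row i holds (A_inf(i), k(i))
    mean = estimates.mean(axis=0)                    # p(.) in the formula above
    return np.sqrt((n - 1) / n * ((estimates - mean) ** 2).sum(axis=0))  # step 4

# Usage, assuming model, t, A as in the fitting sketch earlier:
# se_Ainf, se_k = jackknife_errors(model, t, A)      # both close to 0.0107 (Table III)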

Application to the example data set produced the jackknife replicates in Table II, from which the standard errors reported in Table III were obtained, in good agreement with results previously obtained by other methods.

The jackknife can be applied to small data sets, such as the one studied here, with no need to develop computer code. All that is needed is to run the fitting program n times, where n is the number of data points in the set. The fitting program can, of course, use any algorithm. The jackknife can be programmed in the following way. Instruct or modify the fitting program (called FITTER) to append the parameters it computes to a given result file. Write a program (called KNIFE) to produce diminished data sets (its input is the data set and the ordinal number of the point to be deleted). Write a program (possibly a batch/script file, called JACK) to call KNIFE and FITTER n times. JACK will produce a file containing the n parameter sets obtained by fitting the n diminished sets of (n - 1) data points. The variance of these parameters can be computed by any statistical package (or most pocket calculators). Multiplying the square root of the variance (computed with divisor n, as in Table II) by (n - 1)^(1/2) gives the standard error of each parameter.

Outliers can be easily detected by examining the sum of the squares of the residuals (SSR) of the jackknifed sets: if deleting a data point significantly decreases the SSR of the resulting reduced set, that point is probably in error; a short sketch of this screening is given below. The decision to reject an outlier from the experimental set can then be made according to definite rules.

If the error on a parameter is known to obey a specific distribution (for example, the normal or Student's t distribution), tabulated values of the appropriate function can be used to translate the standard error into confidence intervals. The variance-covariance matrix of the jackknifed results can be used to construct elliptical joint confidence regions, but this technique implicitly assumes a Gaussian distribution and has been found unreliable (18), at least for a highly nonlinear function and a data set affected by large and probably inhomogeneous errors (19).
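When the fitting is done in the same process, the KNIFE/JACK/FITTER recipe and the SSR-based outlier screen collapse into a few lines. The sketch below is an illustrative Python fragment (not the batch procedure described above) and assumes the model function and the arrays t and A from the fitting sketch.

# Sketch: SSR of each diminished (leave-one-out) fit, for outlier screening.
import numpy as np
from scipy.optimize import curve_fit

def jackknife_ssr(model, t, A, p0=(0.4, 0.2)):
    n = len(t)
    ssr = np.empty(n)
    for i in range(n):
        mask = np.ones(n, dtype=bool)
        mask[i] = False
        popt, _ = curve_fit(model, t[mask], A[mask], p0=p0)
        resid = A[mask] - model(t[mask], *popt)
        ssr[i] = (resid ** 2).sum()                  # SSR of the diminished fit
    return ssr

# Usage, assuming model, t, A as in the fitting sketch earlier:
# ssr = jackknife_ssr(model, t, A)
# suspect = ssr.argmin() + 1    # point whose deletion improves the fit the most;
#                               # a markedly low minimum flags a candidate outlier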

THE BOOTSTRAP

The bootstrap method studies the statistical properties of a set of parameter sets obtained by fitting a large number of simulated data sets. These simulated data sets (or bootstrap replicates) are simply obtained by random sampling (with replacement) of the "experimental" data set. Operationally:
1. Copy a point at random from the original data set to a simulated data set.
2. Repeat step 1 n times (n being the number of samples in the original set) to produce a bootstrap replicate.
3. Fit the bootstrap replicate, computing A∞^(1), k^(1).
4. Repeat steps 1, 2, and 3 m times to obtain a set of m (A∞^(i), k^(i)) parameter sets.
5. Compute the bootstrap estimate of the standard error on parameter p (here, p = A∞, k) as

s_boot(p) = {[1/(m - 1)] Σ_{i=1}^{m} [p^(i) - p^(.)]^2}^(1/2)

where

p^(.) = (1/m) Σ_{i=1}^{m} p^(i)

The bootstrap can be programmed in the following way. Instruct or modify the fitting program (called FITTER) to append the parameters it computes to a given result file. Write a program (called STRAP) to produce a bootstrap replicate by copying n times a point at random from the original set to the replicate set. Write a program (it can be a batch/script file, called BOOT) to call STRAP and FITTER m times.
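A compact in-process equivalent of the STRAP/BOOT loop is sketched below, again in Python as an illustration only; it assumes the model function and the arrays t and A from the fitting sketch, and the number of replicates m is left to the user, as in the text.

# Sketch: bootstrap standard errors of the fitted parameters (steps 1-5 above).
import numpy as np
from scipy.optimize import curve_fit

def bootstrap_errors(model, t, A, m=1000, p0=(0.4, 0.2), seed=0):
    rng = np.random.default_rng(seed)
    n = len(t)
    estimates = []
    for _ in range(m):
        idx = rng.integers(0, n, size=n)             # steps 1-2: resample pairs with replacement
        popt, _ = curve_fit(model, t[idx], A[idx], p0=p0)
        estimates.append(popt)                       # step 3: fit the replicate
    estimates = np.asarray(estimates)                # m rows of (A_inf(i), k(i))
    mean = estimates.mean(axis=0)                    # p(.) in the formula above
    return np.sqrt(((estimates - mean) ** 2).sum(axis=0) / (m - 1))  # step 5

# Usage, assuming model, t, A as in the fitting sketch earlier:
# se_Ainf, se_k = bootstrap_errors(model, t, A, m=2000)  # roughly 0.010 each (Table III)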
