Report
A Primer on Multivariate Calibration
For centuries the practice of calibration has been widespread throughout science and engineering. The modern application of calibration procedures is very diverse. For example, calibration methods have been used in conjunction with ultrasonic measurements to predict the gestational age of human fetuses (1), and calibrated bicycle wheels have been used to measure marathon courses (2).

Within analytical chemistry and related areas, the field of calibration has evolved into a discipline of its own. In analytical chemistry, calibration is the procedure that relates instrumental measurements to an analyte of interest. In this context, calibration is one of the key steps associated with the analyses of many industrial, environmental, and biological materials. Increased capabilities resulting from advances in instrumentation and computing have stimulated the development of numerous calibration methods. These new methods have helped to broaden the use of analytical techniques (especially those that are spectroscopic in nature) for increasingly difficult problems.
Edward V. Thomas, Sandia National Laboratories
Calibration methods allow one to relate instrumental measurements to analytes of interest in industrial, environmental, and biological materials
In the simplest situations, models such as y = a + x·b have been used to express the relationship between a single measurement (y) from an instrument (e.g., absorbance of a dilute solution at a single wavelength) and the level (x) of the analyte of interest. Typically, instrumental measurements are obtained from specimens in which the amount (or level) of the analyte has been determined by some independent and inherently accurate assay (e.g., wet chemistry). Together, the instrumental measurements and results from the independent assays are used to construct a model (e.g., estimate a and b) that relates the analyte level to the instrumental measurements. This model is then used to predict the analyte levels associated with future samples based solely on the instrumental measurements.

In the past, data acquisition and analysis were often time-consuming, tedious activities in analytical laboratories. The advent of high-speed digital computers has greatly increased data acquisition and analysis capabilities and has provided the analytical chemist with opportunities to use many measurements (perhaps hundreds) for calibrating an instrument (e.g., absorbances at multiple wavelengths). To take advantage of this technology, however, new methods (i.e., multivariate calibration methods) were needed for analyzing and modeling the experimental data. The purpose of this Report is to introduce several evolving multivariate calibration methods and to present some important issues regarding their use.

Univariate calibration
To understand the evolution of multivariate calibration methods, it is useful to review univariate calibration methods and their limitations. In general, these methods involve the use of a single measurement from an instrument such as a spectrometer for the determination of an analyte. This indirect measurement can have significant advantages over gravimetric or other direct measurements. Foremost among these advantages is the reduction in sample preparation (e.g., chemical separation) that is often required with the use of direct methods. Thus, indirect methods, which can be rapid and inexpensive, have replaced a number of direct methods.

The role of calibration in these analyses is embodied in a two-step procedure: calibration and prediction. In the calibration step, indirect instrumental measurements are obtained from specimens in which the amount of the analyte of interest has been determined by an inherently accurate independent assay. The set of instrumental measurements and results from the independent assays, collectively referred to as the calibration set or training set, is used to construct a model that relates the amount of analyte to the instrumental measurements.

For example, in determining Sb concentration by atomic absorption spectroscopy (AAS), the absorbances of a number of solutions (with known concentrations of Sb) are measured at a strong absorbing line of elemental Sb (e.g., 217.6 nm). A model relating absorbance and Sb concentration is generated. In this case, model development is straightforward, because
Beer's law can be applied. In other situations, the model may be more complex and lack a straightforward theoretical basis. In general, this step is the most time-consuming and expensive part of the overall calibration procedure because it involves the preparation of reference samples and modeling.

Next, the indirect instrumental measurement of a new specimen (in combination with the model developed in the calibration step) is used to predict its associated analyte level. This prediction step is illustrated in Figure 1, which shows Sb determination by AAS. Usually, this step is repeated many times with new specimens using the model developed in the calibration step.

Figure 1. Prediction of the Sb concentration of a new specimen. The calibration model (solid line, derived from the calibration set [dots]) relating the absorbance at 217.6 nm to the Sb concentration and the absorbance of the new specimen are used for prediction.

Even in the simplest case of univariate calibration, when there is a linear relationship between the analyte level (x) and instrumental measurement (y), modeling can be done in different ways. In one approach, often referred to as the classical method, the implied statistical model is
yᵢ = b₁ · xᵢ + eᵢ   (1)

where xᵢ and yᵢ are the analyte level and instrument measurement associated with the ith of n specimens in the calibration set. The measurement error associated with yᵢ is represented by eᵢ. To simplify this discussion, an intercept is not included in Equation 1. In the calibration step, the model parameter, b₁, is usually estimated by least-squares regression of the instrument measurements on the reference values associated with the specimens composing the calibration set. The estimate of b₁ can be expressed as b̂₁ = (xᵀx)⁻¹xᵀy, where x = (x₁, x₂, ..., xₙ)ᵀ and y = (y₁, y₂, ..., yₙ)ᵀ. In this article, the "hat" symbol over a quantity is used to denote an estimate (or prediction) of that quantity. The predicted analyte level associated with a new specimen is x̂ = y*/b̂₁, where y* is the observed measurement associated with the new specimen.

In another approach, often referred to as the inverse method, the implied statistical model is

xᵢ = b₂ · yᵢ + eᵢ   (2)

where eᵢ is now assumed to be the measurement error associated with the reference value xᵢ. In the calibration step, the model parameter, b₂, is estimated by least-squares regression of the reference values on the instrument measurements (i.e., b̂₂ = (yᵀy)⁻¹yᵀx). In the prediction step, x̂ = b̂₂ · y*. In general, predictions obtained by the classical and inverse methods will be different. However, in many cases, these differences will not be important. In the literature, there has been an ongoing debate about which method is preferred (3, 4). When calibrating with a single measurement, the inverse method may be preferred if the instrumental measurements are precise (e.g., as in near-IR spectroscopy).
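As a concrete illustration of the two approaches, the following sketch estimates b₁ and b₂ by least squares and compares the classical and inverse predictions for a new measurement. The data are simulated; the sensitivity and noise level are invented for illustration and are not values from this article.

```python
import numpy as np

# Simulated calibration set: Sb concentrations (ppm) and absorbances at 217.6 nm.
rng = np.random.default_rng(0)
x = np.array([0.5, 1.0, 2.0, 3.0, 4.0, 5.0])        # reference analyte levels
y = 0.012 * x + rng.normal(0.0, 2e-4, x.size)       # simulated absorbances

# Classical method: regress measurements on reference values (no intercept).
b1_hat = (x @ y) / (x @ x)                          # b1_hat = (x'x)^-1 x'y

# Inverse method: regress reference values on measurements.
b2_hat = (y @ x) / (y @ y)                          # b2_hat = (y'y)^-1 y'x

# Prediction for a new specimen with observed absorbance y_star.
y_star = 0.030
x_classical = y_star / b1_hat                       # x_hat = y*/b1_hat
x_inverse = b2_hat * y_star                         # x_hat = b2_hat * y*
print(x_classical, x_inverse)                       # the two predictions differ slightly
```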
The point of this discussion isn't to recommend one method over another; it is to show that, even in this relatively simple situation, different approaches exist. However, the breadth of the applicability of these univariate methods is limited.

For example, let us reconsider the determination of Sb concentration by AAS. Suppose the specimens to be analyzed contain Pb. It is well known that Pb has a strongly absorbing spectral line at 217.0 nm, which is quite close to the primary Sb line at 217.6 nm (5). There are important ramifications of this fact. If an analyst fails to recognize the presence of Pb, the application of univariate calibration using the 217.6-nm line can result in inaccurate predictions for Sb because of the additional absorbance attributable to Pb (see Figure 2). If Pb is recognized as a possible interference, the usual approach is to move to a less intense spectral line for Sb (e.g., 231.2 nm); however, one can expect a poorer detection limit.
Figure 2. Absorbance spectrum showing the Pb absorbance that overlaps the Sb line at 217.6 nm.
The centered and weighted instrumental measurements are then used as the basis for constructing the calibration model. The purpose of using nonuniform weighting is to modify the relative influence of each measurement on the resulting model; the influence of the jth measurement is raised by increasing the magnitude of its weight, wⱼ.

Furthermore, in applications outside the laboratory (e.g., determination of components in an in situ setting), physical and economic considerations associated with the measurement apparatus may restrict the number of wavelengths (or measurements) that can be used. Thus, wavelength selection is very important, even when applying methods capable of using a very large number of measurements.
Before methods such as PLS and PCR are used, a certain amount of data pretreatment is often performed.
Currently few empirical procedures for wavelength selection are appropriate for use with full-spectrum methods such as PLS. Most procedures (e.g., stepwise regression) are associated with calibration methods (e.g., MLR) that are capable of using relatively few wavelengths. However, Frank and Friedman (22) showed that stepwise MLR does not seem to perform as well as PLS or PCR with all measurements. In general, wavelength selection procedures that can be used with full-spectrum methods (e.g., the correlation plot) search for individual wavelengths that empirically exhibit good selectivity, sensitivity, and linearity for the analyte of interest over the training set (23, 24). In order for these methods to be useful, wavelengths specific to the analyte of interest with good S/N are needed. However, the required wavelength specificity is not usually available in difficult applications (e.g., analysis of complex biological materials). Procedures such as the correlation plot, which consider only the relationships between individual wavelengths and the analyte of interest, are ill-equipped for such applications. This has provided the motivation to develop methods that select instrumental measurements on the basis of the collective relationship between candidate measurements and the analyte of interest (25).

A number of other procedures exist for data pretreatment, primarily to linearize the relationships between the analyte level and the various instrumental measurements. This is important because of the inherent linear nature of the commonly used multivariate calibration methods. For example, in spectroscopy, optical transmission data are usually converted to absorbance before analysis. In this setting, this is a natural transformation given the underlying linear relationship (through Beer's law) between analyte concentration and absorbance.

Other pretreatment methods rely on the ordered nature of the instrumental measurements (e.g., a spectrum). In near-IR spectroscopy, instrumental measurements (which are first converted to reflectance) are often further transformed by procedures such as smoothing and the use of differencing (derivatives). Smoothing reduces the effects of high-frequency noise throughout an ordered set of instrumental measurements such as a spectrum. It can be effective if the signal present in the instrumental measurements has a smooth (or low-frequency) nature. Differencing the ordered measurements mitigates problems associated with baseline shifts and overlapping features. Another technique often used in near-IR reflectance spectroscopy is multiplicative signal correction (26), which handles problems introduced by strong scattering effects. The performance of the multivariate calibration methods described earlier can be strongly influenced by data pretreatment.
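The sketch below illustrates the pretreatment operations just described (smoothing, differencing, and multiplicative signal correction) using SciPy's Savitzky-Golay filter. The window length and polynomial order are arbitrary illustrative choices, not recommendations from this article.

```python
import numpy as np
from scipy.signal import savgol_filter

def pretreat(spectra):
    """Apply common pretreatments to a (n_specimens, n_wavelengths) array."""
    # Savitzky-Golay smoothing and first derivative along the wavelength axis
    smoothed = savgol_filter(spectra, window_length=11, polyorder=2, axis=1)
    deriv = savgol_filter(spectra, window_length=11, polyorder=2, deriv=1, axis=1)

    # Multiplicative signal correction: fit each spectrum against the mean
    # spectrum and remove the fitted offset and slope (scattering effects).
    ref = spectra.mean(axis=0)
    msc = np.empty_like(spectra)
    for i, s in enumerate(spectra):
        slope, offset = np.polyfit(ref, s, 1)
        msc[i] = (s - offset) / slope
    return smoothed, deriv, msc
```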
Cross-validation, model size, and model validation
Cross-validation is a general statistical method that can be used to obtain an objective assessment of the magnitude of prediction errors resulting from the use of an empirically based model or rule in complex situations (27). The objectivity of the assessment is obtained by comparing predictions with known analyte values for specimens that are not used in developing the prediction model. In complex situations, it is impossible or inappropriate to use traditional methods of model assessment. In the context of multivariate calibration, cross-validation is used to help identify the optimal size (h_opt) for soft-model-based methods such as PLS and PCR. In addition, cross-validation can provide a preliminary assessment of the prediction errors that are to be expected when using the developed model (of optimal size) with instrumental measurements obtained from new specimens.

The cross-validated assessment of the performance of a specific calibration model (fixed method/model size) is based on a very simple concept. First, data from the calibration set are partitioned into a number of mutually exclusive subsets (S₁, S₂, ..., S_F), with the ith subset (Sᵢ) containing the reference values and instrumental measurements associated with nᵢ specimens. Next, F different models are constructed, each using the prescribed method/model size with all except one of the F available data subsets. The ith model, M₋ᵢ, is constructed by using all data subsets except Sᵢ. In turn, each model is used to predict the analyte of interest for specimens whose data were not used in its construction (i.e., M₋ᵢ is used to predict the specimens in Sᵢ). In a sense, this procedure, which can be computing-intensive, simulates the prediction of new specimens. A comparison of predictions obtained in this way with the known reference analyte values provides an objective assessment of the errors associated with predicting the analyte values of new specimens.

Partitioning the calibration set into the various data subsets should be done carefully. Typically, the calibration set is partitioned into subsets of size one (i.e., leave-one-out-at-a-time cross-validation). However, difficulty arises when replicate sets of instrumental measurements are obtained from individual specimens. Many practitioners use leave-one-out-at-a-time cross-validation in this situation. Unfortunately, what is left out one at a time is usually a single set of instrumental measurements. In this case, the cross-validated predictions associated with specimens with replicate instrumental measurements will be influenced by the replicate measurements (from the same specimen) used to construct M₋ᵢ. Such use of cross-validation does not simulate the prediction of new samples. The likely result is an optimistic assessment of prediction errors. A more realistic assessment of prediction errors would be obtained if the calibration set were partitioned into subsets in which all replicate measurements from a single specimen are included in the same subset.

To select the optimal model size (h_opt), the cross-validation procedure is performed using various values of the metaparameter, h.
Cross-validation can be used to obtain an objective assessment of the magnitude of prediction errors.
For each value of h, an appropriate measure of model performance is obtained. A commonly used measure of performance is the root mean squared prediction error based on cross-validation,

RMSCV(h) = √[(1/n) Σᵢ (xᵢ − x̂ᵢ[M₋ᵢ(h)])²]   (12)

where the sum runs over the n specimens in the calibration set and x̂ᵢ[M₋ᵢ(h)] represents the predicted value of the ith specimen using a model of size h that was developed without using Sᵢ. Sometimes, to establish a baseline performance metric, RMSCV(0) is computed. For this purpose, x̂ᵢ[M₋ᵢ(0)] is defined as the average analyte level in the set of all specimens with the ith specimen removed. Thus, RMSCV(0) provides a measure of how well we would predict on the basis of the average analyte level in the calibration set rather than instrumental measurements.

Often, practitioners choose h_opt as the value of h that yields the minimum value of the RMSCV. The shape associated with RMSCV(h) in Figure 5 is quite common. When h < h_opt, the prediction errors are largely a consequence of systematic effects (e.g., interferences) that are unaccounted for. When h > h_opt, the prediction errors are primarily attributable to modeling of noise artifacts (overfitting). Usually, if the model provides a high degree of predictability (as in the case illustrated by Figure 5), the errors caused by overfitting are relatively small compared with those associated with systematic effects not accounted for.
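A minimal sketch of this procedure is given below, assuming scikit-learn's PLSRegression is available as the soft-model method. Leave-one-out partitioning is used for simplicity; as discussed above, replicate measurements of the same specimen should instead be grouped into the same subset.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import LeaveOneOut

def rmscv_curve(X, x_ref, max_h):
    """RMSCV(h) for PLS models of size h = 1..max_h (Equation 12, leave-one-out).
    X is (n_specimens, n_wavelengths); x_ref holds the reference analyte values."""
    n = len(x_ref)
    curve = []
    for h in range(1, max_h + 1):
        press = 0.0
        for train, test in LeaveOneOut().split(X):
            model = PLSRegression(n_components=h).fit(X[train], x_ref[train])  # M_{-i}
            pred = model.predict(X[test]).ravel()
            press += ((pred - x_ref[test]) ** 2).item()
        curve.append(np.sqrt(press / n))
    return np.array(curve)

# h_opt is often chosen as the value of h that minimizes RMSCV(h):
# rmscv = rmscv_curve(X, x_ref, max_h=15); h_opt = int(np.argmin(rmscv)) + 1
```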
Figure 5. Determination of optimal PLS model size, h_opt. The model relates near-IR spectroscopic measurements to urea concentration (mg/dL) in multicomponent aqueous solutions.
At this point, an optimal (or near-optimal) model size has been selected. RMSCV(h_opt) can be used as a rough estimate of the root mean squared prediction error associated with using the selected model with new specimens. This estimate may be somewhat optimistic, given the nature of the model selection process, in which many possible models were under consideration. A more realistic assessment of the magnitude of prediction errors can be obtained by using an external data set for model validation. An ideal strategy for model selection and validation would be to separate the original calibration set into two subsets: one for model selection (i.e., determination of h_opt) and one strictly for model validation. Use of this strategy would guarantee that model validation is independent of model selection.

Pitfalls
The primary difficulty associated with using empirically determined models is that they are based on correlation rather than causation. Construction of these models involves finding measurements or combinations of measurements that simply correlate well with the analyte level throughout the calibration set. However, correlation does not imply causation (i.e., a cause-and-effect relationship between the analyte level and the instrumental measurements).

Suppose we find, by empirical means, that a certain instrumental measurement correlates well with the analyte level throughout the calibration set. Does this mean that the analyte level affects that particular instrumental measurement? Not necessarily. Consider Figure 6, which displays the hypothetical relationship between the reference analyte level and the order of measurement (run order) for specimens in the calibration set. Because of the strong relationship between analyte level and run order, it is difficult to separate their effects on the instrumental measurements. Thus, the effects of analyte level and run order are said to be confounded. In this case, simple instrument instability could generate a strong, misleading correlation between analyte level and an instrumental measurement. Fortunately, a useful countermeasure for this type of confounding exists: randomization of the run order with respect to analyte level.

Often, however, more subtle confounding patterns exist. For instance, in a multicomponent system, the analyte level may be correlated with the levels of other components or a physical phenomenon such as temperature. In such situations it may be difficult to establish whether, in fact, the model is specific to the analyte of interest. In a tightly controlled laboratory study, where the sample specimens can be formulated by the experimenter, it is possible to design the calibration set (with respect to component concentrations) so that the levels of different components are uncorrelated. However, this countermeasure does not work when an empirical model is being used to predict analyte levels associated with, for example, complex industrial or environmental specimens. In such situations, one rarely has complete knowledge of the components involved, not to mention the physical and chemical interactions among components.

The validity of empirically based models depends heavily on how well the calibration set represents the new specimens in the prediction set. All phenomena (with a chemical, physical, or other basis) that vary in the prediction set and influence the instrumental measurements must also vary in the calibration set over ranges that span the levels of the phenomena occurring in the prediction set. Sometimes the complete prediction set is at hand before the calibration takes place. In such cases, the calibration set can be obtained directly by sampling the prediction set (28). Usually, however, the complete prediction set is not available at the time of calibration, and an unusual (or unaccounted for) phenomenon may be associated with some of the prediction specimens. Outlier detection methods represent only a limited countermeasure against such difficulties; valid predictions for these problematic specimens cannot be obtained.
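One simple check for the confounding described above is to examine the correlations between the reference analyte levels and other known quantities in the calibration design, such as run order or the level of another component. The values below are hypothetical, chosen only to illustrate the check and the randomization countermeasure.

```python
import numpy as np

# Hypothetical calibration design: analyte levels recorded in measurement order,
# plus the corresponding solution temperatures (both invented for illustration).
analyte = np.array([2.0, 3.5, 5.0, 6.5, 8.0, 9.5, 11.0, 12.5])
run_order = np.arange(1, analyte.size + 1)
temperature = np.array([21.0, 21.5, 22.0, 22.5, 23.0, 23.5, 24.0, 24.5])

# Strong correlations signal confounding: the model cannot separate the effect
# of the analyte on the measurements from that of run order or temperature.
print(np.corrcoef(analyte, run_order)[0, 1])      # ~1.0 here: badly confounded
print(np.corrcoef(analyte, temperature)[0, 1])    # ~1.0 here: badly confounded

# Countermeasure for run-order confounding: randomize the measurement order.
rng = np.random.default_rng(1)
shuffled_order = rng.permutation(run_order)
print(np.corrcoef(analyte, shuffled_order)[0, 1]) # typically near zero
```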
Figure 6. Relationship between the reference analyte level and run order of the calibration experiment.
Sources of prediction errors
Ultimately, the efficacy of an empirical calibration model depends on how well it predicts the analyte level of new specimens that are completely external to the development of the model. If the reference values associated with m new specimens (or specimens from an external data set) are available, a useful measure of model performance is given by the standard error of prediction,

SEP = √[(1/m) Σᵢ (x̂ᵢ − xᵢ)²]   (13)

If these m new specimens comprise a random sample from the prediction set spanning the range of potential analyte values (and interferences), the SEP can provide a good measure of how well, on average, the calibration model performs.
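A direct computation of this quantity, under the form of Equation 13 assumed above, might look like the following; plotting the individual errors against the reference values is also useful for spotting the level-dependent behavior discussed next.

```python
import numpy as np

def standard_error_of_prediction(x_ref, x_pred):
    """SEP over m external validation specimens (root mean squared prediction error)."""
    e = np.asarray(x_pred) - np.asarray(x_ref)
    return np.sqrt(np.mean(e ** 2))

# Example with invented values:
# sep = standard_error_of_prediction([6.0, 9.1, 12.4], [6.3, 8.8, 12.9])
```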
Often, however, the performance of the calibration model varies, depending on the analyte level. For example, consider Figure 7a, where the standard deviation of the prediction errors (eᵢ = x̂ᵢ − xᵢ) increases as the analyte value deviates from the average analyte value found in the training set, x̄. In this case, although the precision of predictions depends on the analyte value, the accuracy is maintained over the range of analyte values. That is, for a particular analyte level, the average prediction error is about zero. The behavior with respect to precision is neither unexpected nor abnormal; the model is often better described in the vicinity of x̄ rather than in the extremes.

On the other hand, sometimes there is a systematic bias associated with prediction errors that is dependent on the analyte level (Figure 7b). When the analyte values are less than x̄, prediction errors are generally positive. Conversely, when analyte values are greater than x̄, the prediction errors are generally negative. This pattern is indicative of a defective model in which the apparent good predictions in the vicinity of x̄ are attributable primarily to the centering operation that is usually performed during preprocessing in PLS and PCR. That is, predictions based on Equation 10 effectively reduce to x̂ᵢ = x̄ + noise if the estimated model coefficients, b̂, are spurious. Spurious model coefficients are obtained if noninformative instrumental measurements are used to construct a model. Thus, one should be wary of models that produce the systematic pattern of prediction errors shown in Figure 7b, regardless of whether the predictions are based on cross-validation or a true external validation set.

Several other factors affect the accuracy and precision of predictions, notably the inherent accuracy and precision of the reference method used. If the reference method produces erroneous analyte values that are consistently low or high, the resulting predictions will reflect that bias. Imprecise (but accurate) reference values will also inflate the magnitude of prediction errors, but in a nonsystematic way. Furthermore, errors in determining the reference values will affect the ability to assess the magnitude of prediction errors. The assessed magnitude of prediction errors can never be less than the magnitude of the reference errors. Thus, it is very important to minimize the errors in the reference analyte values that are used to construct an empirical model.

Other sources of prediction error are related to the repeatability, stability, and reproducibility of the instrumental measurements. Repeatability relates to the ability of the instrument to generate consistent measurements of a specimen using some fixed conditions (without removing the specimen from the instrument) over a relatively short period of time (perhaps seconds or minutes). Stability is similar to repeatability, but it involves a somewhat longer time period (perhaps hours or days). Reproducibility refers to the consistency of instrumental measurements during a small change in conditions, as might occur from multiple insertions of a specimen into an instrument.

Further classification of instrumental measurement errors is possible for cases in which the multiple instrumental errors are ordered (e.g., by wavelength). It is also possible to decompose instrumental variation into features that are slowly varying (low frequency) and quickly varying (high frequency). Often the focus is on only the high-frequency error component, and usually only in the context of repeatability. This is unfortunate because multivariate methods that are capable of using many measurements are somewhat adept at reducing the effects of high-frequency errors. Practitioners should work to identify and eliminate sources of slowly varying error features.

Other sources of prediction error may be unrelated to the reference method or the analytical instrument. Modeling nonlinear behavior with inherently linear methods can result in model inadequacy. Some researchers thus have adapted multivariate calibration methods in order to accommodate nonlinearities (29, 30).
Figure 7. Relationship between the predicted analyte and reference analyte levels. (a) In the normal relationship, the average reference analyte level, x̄, is 11 (arbitrary units). The precision of the predicted values depends on the reference analyte level and is best in the vicinity of x̄. (b) In the abnormal relationship, the precision and accuracy of the predicted values depend on the reference analyte level.
The ability to adequately sample and measure specimens in difficult environments can significantly affect the performance of calibration methods. In a laboratory it might be possible to control some of the factors that adversely affect model performance. However, many emerging analytical methods (e.g., noninvasive medical analyses and in situ analyses of industrial and environmental materials) are intended for use outside the traditional laboratory environment, where it might not be possible to control such factors. The ability to overcome these obstacles will largely influence the success or failure of a calibration method in a given application. Thus, practitioners must strive to identify and eliminate the dominant sources of prediction error.
Summary
This article has provided a basic introduction to multivariate calibration methods with an emphasis on identifying issues that are critical to their effective use. In the future, as increasingly difficult problems arise, these methods will continue to evolve. Regardless of the direction of the evolutionary process, the resulting methods will need to be used carefully, with recognition of the issues presented in this article.
I thank Steven Brown, Bob Easterling, Ries Robinson, and Brian Stallard for their advice on this manuscript. Brian Stallard provided the water vapor spectra.
References
(1) Oman, S. D.; Wax, Y. Biometrics 1984, 40, 947-60.
(2) Smith, R. L.; Corbett, M. Applied Statistics 1987, 36, 283-95.
(3) Krutchkoff, R. G. Technometrics 1967, 9, 425-39.
(4) Williams, E. J. Technometrics 1969, 11, 189-92.
(5) Occupational Safety and Health Administration Salt Lake Technical Center. Metal and Metalloid Particulate in Workplace Atmospheres (Atomic Absorption) (USDOL/OSHA Method No. ID-121). In OSHA Analytical Methods Manual, Part 2, 2nd ed.
(6) Fearn, T. Applied Statistics 1983, 32, 73-79.
(7) Oman, S. D.; Naes, T.; Zube, A. J. Chemom. 1993, 7, 195-212.
(8) Haaland, D. M. Anal. Chem. 1988, 60, 1208-17.
(9) Small, G. W.; Arnold, M. A.; Marquardt, L. A. Anal. Chem. 1993, 65, 3279-89.
(10) Bhandare, P.; Mendelson, Y.; Peura, R. A.; Janatsch, G.; Kruse-Jarres, J. D.; Marbach, R.; Heise, H. M. Appl. Spectrosc. 1993, 47, 1214-21.
(11) Robinson, M. R.; Eaton, R. P.; Haaland, D. M.; Koepp, G. W.; Thomas, E. V.; Stallard, B. R.; Robinson, P. L. Clin. Chem. 1992, 38, 1618-22.
(12) Brown, S. D.; Bear, R. S.; Blank, T. B. Anal. Chem. 1992, 64, 22R-49R.
(13) Martens, H.; Naes, T. Multivariate Calibration; Wiley: Chichester, England, 1989.
(14) Haaland, D. M.; Thomas, E. V. Anal. Chem. 1988, 60, 1193-1202.
(15) Thomas, E. V. Technometrics 1991, 33, 405-14.
(16) Brown, C. W.; Lynch, P. F.; Obremski, R. J.; Lavery, D. S. Anal. Chem. 1982, 54, 1472-79.
(17) Mendelson, Y. Ph.D. Dissertation, Case Western Reserve University, Cleveland, OH, 1983.
(18) Stone, M.; Brooks, R. J. J. Royal Statistical Soc. Series B 1990, 52, 237-69.
(19) Hoskuldsson, A. J. Chemom. 1988, 2, 211-28.
(20) Helland, I. S. Scandinavian J. Statistics 1990, 17, 97-114.
(21) Thomas, E. V.; Haaland, D. M. Anal. Chem. 1990, 62, 1091-99.
(22) Frank, I. E.; Friedman, J. H. Technometrics 1993, 35, 109-48.
(23) Hruschka, W. R. In Near-Infrared Technology in the Agricultural and Food Industries; Williams, P.; Norris, K., Eds.; American Association of Cereal Chemists, Inc.: St. Paul, MN, 1987; pp. 35-55.
(24) Brown, P. J. J. Chemom. 1992, 6, 151-61.
(25) Li, T.; Lucasius, C. B.; Kateman, G. Anal. Chim. Acta 1992, 268, 123-34.
(26) Isaksson, T.; Naes, T. Appl. Spectrosc. 1988, 42, 1273-84.
(27) Stone, M. J. Royal Statistical Soc. Series B 1974, 36, 111-33.
(28) Naes, T.; Isaksson, T. Appl. Spectrosc. 1989, 43, 328-35.
(29) Hoskuldsson, A. J. Chemom. 1992, 6, 307-34.
(30) Sekulic, S.; Seasholtz, M. B.; Kowalski, B. R.; Lee, S. E.; Holt, B. R. Anal. Chem. 1993, 65, 835A-845A.

Edward V. Thomas is a statistician at the Statistics and Human Factors Department of Sandia National Laboratories, Albuquerque, NM 87185-0829. His B.A. degree in chemistry and M.A. degree and Ph.D. in statistics were all awarded by the University of New Mexico. His research interests include calibration methods with a focus on their application to problems in analytical chemistry.