Anal. Chem. 1983, 55, 1800-1804
Partial Least Squares Solutions for Multicomponent Analysis

Ildiko E. Frank, John H. Kalivas, and Bruce R. Kowalski*

Laboratory for Chemometrics, Department of Chemistry BG-10, University of Washington, Seattle, Washington 98195
The partial least squares (PLS) solution is compared to the normal least squares solution for the generalized standard addition method (GSAM), a multicomponent analysis procedure that can detect and correct for spectral interferences, matrix effects, and instrument drift. Several advantages of the PLS algorithm are pointed out and demonstrated on simulated and experimental data sets. PLS can be applied to the GSAM when the number of analytes is greater than the number of sensors or standard additions, where the normal least squares solution for the GSAM fails. The noise filtering capability of PLS enables the method to give more accurate results for the initial concentrations, especially for the case of a high noise level and time drift in the sensors. PLS is a general algorithm which can be applied to any multicomponent analysis procedure.
Recently, a method for multicomponent analysis using standard additions of more than one analyte (component of interest) was developed. This method, called the generalized standard addition method (GSAM) (1-6), is able to detect and correct for spectral interferences, matrix effects, and drift. The method has recently been automated in conjunction with a visible light spectrophotometer (5). Due to some inherent restrictions in the GSAM algorithm, it cannot be used when the number of analytes (r) is greater than the number of sensors (p) or standard additions (n). The GSAM requires that p ≥ r, n ≥ r, and that each of the r analytes affect at least one of the p sensors. These restrictions apply because the GSAM uses a multiple linear least squares routine to solve for the linear response constants. A recently developed multiple regression algorithm using partial least squares (PLS) (7) can be used to overcome these restrictions.
There are several advantages to PLS. It can give an early estimate of the calibration matrix K before standard additions for all the analytes are made. It can also estimate the initial concentrations even when there are fewer sensors than analytes. Of course, these estimates are biased, but in both cases early information is obtained that is helpful in improving the experimental design. PLS also has a noise filtering capability, which is useful when the noise level in the sensors is high and when time drift is present in the sensors. Following a brief review of the GSAM and an introduction to PLS, this paper compares the performance of the current GSAM algorithm and a new PLS-GSAM algorithm for different experimental situations with simulated data sets.
0003-2700/83/0355-1800$01.50/0 © 1983 American Chemical Society

THEORY

The Generalized Standard Addition Method (GSAM). Since the theory of the GSAM and its different versions has been discussed elsewhere, only a short overview is given here. The GSAM is used for the simultaneous analysis of r analytes by measuring the responses from p sensors before and after n standard additions are made. The model assumes that each sensor response is a linear combination of the analytes and, perhaps, other variables. The number of these other variables is denoted here as t. They are nonanalyte components in the mixture, i.e., variables not to be measured, for example, time related to drift. In matrix form, the model is

R = CK    (1)

where R is the (n + 1) × p matrix of measured responses (initial responses and responses after n standard additions), C is the (n + 1) × (r + t) concentration matrix (r analytes + t other variables), and K is the (r + t) × p matrix of linear response constants showing the contribution of each of the r analytes and t other variables to the pth sensor. Since volume-corrected changes must be used (2), eq 1 becomes
ΔQ = ΔN·K    (2)

where ΔQ is the n × p volume-corrected response change matrix and ΔN is the n × (r + t) matrix of absolute quantities of standards added, which also contains the t other variables that, for example, may be functions of the time elapsed between the additions of standards. The multiple linear least squares solution for K is

K = (ΔNᵀΔN)⁻¹ΔNᵀΔQ    (3)
After the last t rows of matrix K (the rows that indicate the presence or absence of time drift) are removed, the initial concentrations n0 are calculated for the case of p = r by solving

q0 = n0·K    (4)

and for the case of p > r by the least squares solution

n0 = q0·Kᵀ(KKᵀ)⁻¹    (5)

where q0 is the vector of initial responses.
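As an illustrative sketch of eqs 2-4, the normal least squares calculation can be written in a few lines of NumPy. This is our sketch, not the authors' implementation; the K matrix and true initial amounts are taken from the simulated data described later in the paper (Table III).

```python
import numpy as np

# Linear response constants K (r analytes x p sensors) and true initial
# amounts, matching the simulated data set used later in the paper.
K_true = np.array([[1.0, 0.5,  0.0],   # analyte 1 at sensors 1-3
                   [0.0, 1.0,  0.5],   # analyte 2
                   [0.0, 0.25, 1.0]])  # analyte 3
n0_true = np.array([1.0, 0.5, 1.0])    # true initial amounts
dN = np.eye(3)                         # orthogonal additions: 1 unit per analyte

dQ = dN @ K_true                       # volume-corrected response changes (eq 2)
q0 = n0_true @ K_true                  # initial responses

# Eq 3: normal least squares estimate of K
K_hat = np.linalg.inv(dN.T @ dN) @ dN.T @ dQ

# Eq 4 (p = r): solve q0 = n0 K for the initial amounts
n0_hat = q0 @ np.linalg.inv(K_hat)
```

In the noise-free case the estimates recover K and n0 exactly; the restrictions n ≥ r and p ≥ r enter through the two matrix inversions, which fail for an underdetermined system.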
The Partial Least Squares Regression Method. A new multiple linear regression algorithm solved by partial least squares (PLS) was recently introduced to the statistical (7) and chemical (8) literature. This method is similar to principal components regression (PCR) (9), which spans the variance of the original independent variables by orthogonal latent variables that are linear combinations of the independent variables. However, the criterion for calculating these latent variables in PLS is not only that they describe the maximum possible amount of variance of the data, as in PCR, but also that they be maximally correlated with the dependent response variables. Since the dependent variables are also linear combinations of the orthogonal latent variables, regression coefficients for the original variables can be calculated from the coefficients of the new system of linear equations. The maximum number of latent variables is the number of independent variables. However, including fewer latent variables in the regression model allows noise present in the data to be filtered. This is achieved by fitting only the regular variation of the predictor variables. Therefore, the PLS regression model can give better predictions, and therefore a better model, than the ordinary least squares routines when noise or nonlinearity exists.

Given the matrix of independent variables X = (x_ij) and the response vector Y = (y_i), where i is the index of samples and j is the index of independent variables, the linear model is

x_ij = Σ_{l=1}^{m} u_il·b_lj + e_ij    i = 1, ..., NP; j = 1, ..., NV    (6)

where l is the index of the latent variables u, the b_lj's are the loadings (i.e., the coefficients in the linear combinations of the original variables), the e_ij's are the residual errors of the original data matrix not explained by the latent variables, and m is the optimal number of latent variables. The regression of the response variable on the latent variables is

y_i = Σ_{l=1}^{m} p_l·u_il + a_i    (7)

where p_l gives the correlation between the response variable and the latent variables and a_i is the residual error in the y_i's not described by the model. As more latent variables are included in the model (m approaches the number of original predictor variables), the e_ij's become smaller. The significance of each latent variable can be checked by cross validation (10), a method that compares statistical models on the basis of their predictive ability and chooses the one with the best predictive performance. In the case of PLS, models with different numbers of latent variables are investigated and the optimal number from the prediction point of view is defined.

As mentioned above, calculation of the K matrix is obtained for the GSAM by using multiple linear least squares. This calculation procedure is also used for estimation of the initial concentrations of the analytes when p > r. Incorporating PLS into the GSAM experimental design represents a significant improvement in the calculation procedure. Our PLS-GSAM algorithm is a two-stage one. At the first stage the columns of the K matrix are calculated by regressing the response change vector (Δq) of each sensor on the complete ΔN matrix, so p regressions must be performed

Δq_{m,d} = Σ_{s=1}^{r} k_{d,s}·Δn_{m,s}    d = 1, ..., p    (8)

In the second stage, the initial response vector (q0) is used with the K matrix to obtain estimates of the initial concentrations

q_{0,d} = Σ_{s=1}^{r} n_{0,s}·k_{d,s}    (9)

A block diagram of the PLS-GSAM algorithm is shown in Figure 1.

Figure 1. Algorithm for PLS-GSAM. (Flow chart: START → PLS regression to calculate the dth column of K → transpose K → PLS regression of the initial responses on Kᵀ.)

A significant advantage of PLS regression is that it works for an underdetermined system (i.e., when there are fewer samples than independent variables; i < j in PLS notation). This means that a K matrix can be calculated even if there are fewer additions than analytes (n < r), and the n0's can be estimated in the second stage even when there are fewer sensors than analytes (p < r). These are cases where the original GSAM algorithm will not work.

In a previous publication the possibility of using the GSAM for drift correction was studied (4). This was accomplished by measuring the time elapsed up to a standard addition of an analyte. The K matrix is then corrected for time drift and
Table I. Fewer Standard Additions Than Analytes (noise-free; true n0 = 1.000, 0.500, 1.000; rel error = |estimated n0 − actual n0|/actual n0)

(a) Orthogonal additions (1 unit of one analyte per addition). With orthogonal additions, the K-matrix row for each added analyte is recovered without bias; all other rows are zero.

no. of additions   estimated n0 (analytes 1, 2, 3)   rel error, % (analytes 1, 2, 3)
1                  1.301, 0.0, 0.0                   30.1, 100.0, 100.0
2                  0.833, 1.167, 0.0                 16.6, 133.4, 100.0
3                  1.000, 0.500, 1.000               0.0, 0.0, 0.0

(b) Nonorthogonal (multiple-component) additions.

no. of additions   estimated n0 (analytes 1, 2, 3)   rel error, % (analytes 1, 2, 3)
1                  1.244, 0.0, 1.244                 24.4, 100.0, 24.4
2                  1.11, 0.48, 0.52                  11.0, 4.0, 48.0
3                  1.00, 0.50, 1.00                  0.0, 0.0, 0.0
used to estimate the n0's. However, after the noise in the p sensors increased to a 5% relative standard deviation, the sensitivity of the ordinary GSAM for detecting drift was found to drop considerably. Due to the noise filtering capability of the PLS algorithm (when fewer than the maximum number of latent variables are calculated), it is expected to give better performance for drift correction than the ordinary GSAM at high noise levels.
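For concreteness, the latent-variable model of eqs 6 and 7 can be sketched as a basic NIPALS-style PLS1 loop. This is our illustrative sketch, not the authors' code; with all latent variables retained it reproduces the least squares fit, and noise filtering corresponds to stopping at an m below the number of variables.

```python
import numpy as np

def pls1(X, y, m):
    """Fit y on X with m latent variables (NIPALS-style PLS1).
    Returns regression coefficients for the original (centered) variables."""
    X = X - X.mean(axis=0)                # work with centered data
    y = y - y.mean()
    W, P, q = [], [], []
    Xr, yr = X.copy(), y.copy()
    for _ in range(m):
        w = Xr.T @ yr                     # weight: direction of max covariance with y
        w /= np.linalg.norm(w)
        u = Xr @ w                        # latent variable (score), the u_l of eq 6
        p = Xr.T @ u / (u @ u)            # loading, the b_l of eq 6
        c = yr @ u / (u @ u)              # regression of y on the latent variable (eq 7)
        Xr = Xr - np.outer(u, p)          # deflate X: residuals e_ij
        yr = yr - c * u                   # deflate y: residual a_i
        W.append(w); P.append(p); q.append(c)
    W, P, q = np.array(W).T, np.array(P).T, np.array(q)
    return W @ np.linalg.inv(P.T @ W) @ q # coefficients for the original variables

X = np.random.default_rng(0).normal(size=(20, 3))
y = X @ np.array([1.0, 0.5, 0.0])         # noise-free linear response
b = pls1(X, y, m=3)                       # all latent variables: matches least squares
```

Choosing m by cross validation, as described above, trades a little bias for a large reduction in the variance fitted from noise.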
EXPERIMENTAL SECTION

Simulated experimental data used in this study were based on known K and ΔN matrices. Three analytes were examined by standard additions of 1 unit (arbitrary units) each per analyte. All additions were orthogonal (a standard addition of one analyte at a time) except those for testing the n < r case. Volumes were kept constant. In the study of drift correction, a second-order time model was used. A 2% linear drift was added to the responses measured from sensor one, and a 2% quadratic drift was added to the responses measured from sensor three. The linear time additions consisted of 1 unit of time between additions in a given partition and 2 units of time between partitions. Quadratic time additions were calculated the same way. The noise-free ΔQ was calculated according to eq 2. Random noise for the sensors was generated by using a Monte Carlo method (11). Normally distributed noise with a zero mean and 1%, 3%, 5%, and 10% relative standard deviations was used. Twenty Monte Carlo perturbations were performed to compute 20 sets of random responses and K matrices, which were used to estimate 20 n0's and a resulting average. This procedure was repeated 20 times, calculating the absolute value of the relative error for the averaged n0's each time.

The following four cases were investigated: (1) Fewer standard additions than analytes (n < r). A simulated data set was generated for the determination of three analytes by using three sensors. Two different experiments were designed, one with orthogonal additions (i.e., standard additions consisting of only one component at a time) and the other with multiple additions (i.e., an addition containing more than one component). Table I contains the exact ΔN matrices used. Both experimental situations were simulated without noise. One, two, and three additions were made. (2) Fewer sensors than analytes (p < r). The experiment for the p < r study was designed with orthogonal additions and with r = 3, p = 2, and n = 3. Two data sets with two sensors were investigated. One sensor was common to both data sets, while the second sensor had a different sensitivity in the two data sets. Table II contains the exact ΔN and K matrices. (3) Noise filtering when p = r and when p > r. GSAM experiments were designed with three analytes and three and four sensors, respectively. The standard additions were orthogonal. Table III contains the exact K matrices; 1%, 3%, 5%, and 10% noise were generated in both data sets. (4) Correction for drift. A drift correction experiment was designed for three analytes using three partitions with three standard additions of 1 unit for one analyte in each partition. A second-order time model was used as described previously. Experimental data from use of an inductively coupled plasma atomic emission spectrometer (ICP-AES) were also studied. The experiment and results have been discussed elsewhere (3).
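The Monte Carlo perturbation described above can be sketched as follows. This is a minimal sketch: the K matrix, the relative standard deviations, and the 20 replicates follow the text, while the function name and random seed are ours.

```python
import numpy as np

rng = np.random.default_rng(42)

K_true = np.array([[1.0, 0.5,  0.0],
                   [0.0, 1.0,  0.5],
                   [0.0, 0.25, 1.0]])
dN = np.eye(3)                        # orthogonal additions, 1 unit each
dQ_clean = dN @ K_true                # noise-free response changes (eq 2)

def perturb(dQ, rel_sd, rng):
    """Add zero-mean normal noise with the given relative standard deviation."""
    return dQ + rng.normal(0.0, rel_sd * np.abs(dQ), size=dQ.shape)

# 20 Monte Carlo replicates at 5% relative noise, as in the text
replicates = [perturb(dQ_clean, 0.05, rng) for _ in range(20)]
dQ_mean = np.mean(replicates, axis=0)
```

Each replicate is then carried through the GSAM (or PLS-GSAM) calculation, and the relative errors of the averaged n0 estimates are tabulated.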
RESULTS AND DISCUSSION

(1) Fewer Standard Additions Than Analytes (n < r). ΔN, K, and n0 with one, two, and three additions for the two data sets are shown in Table I. It is demonstrated that, with n < r, the K matrix and the initial concentrations can be calculated even though the estimates are heavily biased. For the case of orthogonal standard additions (Table Ia), when an addition is made of an analyte, the corresponding row in the K matrix can be calculated without bias. All other rows contain zero elements. Also, initial concentrations are nonzero only for those analytes for which additions were made. In the case of nonorthogonal standard additions (Table Ib), the values in the K matrix converge to the true values with an increase in the number of additions. Exact agreement is obtained when n = r standard additions are made. The above results were calculated by use of noise-free simulated data in order to show the exact difference between the two methods. Later, the methods are tested with experimental data.

(2) Fewer Sensors Than Analytes (p < r). In the second stage of the GSAM (solving for the n0's), the system may be underdetermined (p < r). In that case, regressing vector q0 on matrix Kᵀ for the determination of the r initial concentrations is underdetermined and can only be performed by using PLS. The bias in the estimates for the n0's was found to depend on the selectivity of the sensors. In the first data set, shown in Table IIa, sensor 1 is 100% selective for analyte 1; therefore, an unbiased estimate can be made for analyte 1. The second sensor is mainly affected by analyte 2, but with interferences from analytes 1 and 3. Therefore, the results for analyte 2, both noise-free and with 1% noise in the measurements, are less accurate. This is more typical of a real analysis. The worst results are for analyte 3 because it has no sensor of its own and its only information source is the slight interference in sensor 2. Table IIa also shows the relative errors in the estimation of the n0's, with their variances, using 1% noise. In the second data set (Table IIb) both sensors' responses have interferences; therefore, all three estimates of the analyte concentrations are biased. On the other hand, even though there is no selective sensor for analyte 1, relatively good estimates can be made for this analyte because it is an interferent on both sensors.

Table II. Fewer Sensors Than Analytes (orthogonal additions; ΔN is the 3 × 3 identity matrix; true n0 = 1.000, 0.500, 1.000)

(a) First data set; K columns for analytes 1-3: sensor 1 = (1.0, 0, 0), sensor 2 = (0.5, 1.0, 0.25)

                                     analyte 1   analyte 2   analyte 3
estimated n0, without noise          1.0         0.375       1.5
rel error without noise, %           0.0         25.0        50.0
rel error with 1% noise, %           0.96        23.89       50.37
rel error variance with 1% noise     0.53        10.70       53.19

(b) Second data set (sensor 2 plus a different second sensor, both subject to interferences)

                                     analyte 1   analyte 2   analyte 3
estimated n0, without noise          1.224       0.371       1.064
rel error without noise, %           22.4        25.8        6.4
rel error with 1% noise, %           23.75       27.07       7.11
rel error variance with 1% noise     1.39        6.86        2.46

(3) Noise Filtering When p = r and When p > r. The K matrices and the relative errors in estimating the n0's with 1%, 3%, 5%, and 10% noise in the responses are reported in Table III for the case when the ΔN matrix is the identity matrix, i.e., orthogonal standard additions. Table IIIa has p = r with three sensors and Table IIIb has p = 4 > r. Because the fourth sensor is highly selective for the second analyte, a significant improvement can be seen in the estimation of the concentration of the second analyte (Table IIIb). As expected, with increasing noise the estimates of the initial concentrations degrade. Also, when four sensors are used, more noise is tolerated. Specifically, with 10% noise, better accuracy is obtained with p = 4 than with p = 3.

Table III. Noise Filtering by PLS-GSAM

(a) p = r = 3; K columns for analytes 1-3: sensor 1 = (1.0, 0, 0), sensor 2 = (0.5, 1.0, 0.25), sensor 3 = (0, 0.5, 1.0)

% noise   rel error, % (analytes 1, 2, 3)
1         0.76, 1.83, 1.91
3         1.90, 5.33, 3.36
5         3.78, 12.19, 7.39
10        14.85, 40.90, 22.24

(b) p = 4 > r; sensors 1-3 as in (a), sensor 4 = (0.1, 1.0, 0.1)

% noise   rel error, % (analytes 1, 2, 3)
1         0.79, 1.05, 1.08
3         2.47, 4.84, 2.87
5         2.29, 3.21, 4.67
10        6.78, 10.73, 5.15

Table IV. Noise Filtering by Multiple Regression GSAM When p = r and When p > r

% noise   rel error, % p = 3 (analytes 1, 2, 3)   rel error, % p = 4 (analytes 1, 2, 3)
1         0.63, 1.71, 0.67                        0.49, 1.06, 0.70
3         2.01, 4.61, 3.38                        1.58, 3.83, 2.93
5         4.61, 17.93, 7.51                       3.50, 4.46, 3.51
10        60.05, 143.60, 64.34                    7.28, 13.87, 10.55

The relative errors of the estimates of the n0's were compared to the estimates of the original GSAM algorithm, which are summarized in Table IV. Again, there is an enhanced
tolerance to the noise level when p = 4 as opposed to p = 3. This result of better accuracy when p > r is in agreement with a previous study (6). The noise filtering ability of the PLS algorithm is due to the use of fewer latent variables than original predictor variables, thereby avoiding fitting the noise. Here, with three latent variables in the PLS-GSAM estimation, the relative errors of the n0's are not statistically different from those calculated by the traditional GSAM algorithm. At high noise levels with two latent variables, however, PLS-GSAM does not fit the noise in the K matrix and the initial responses, giving more accurate values for the n0's.

(4) Correction for Drift. The PLS-GSAM results were compared to the results obtained by using multiple regression GSAM for the cases of 1%, 3%, 5%, and 10% noise. In Table V, results using a data set representing a GSAM experiment with no drift and no correction for drift are shown. As seen in the earlier tables, the PLS-GSAM results are slightly better as the noise is filtered from the responses. When drift is not a problem but the GSAM model includes a drift correction term (Table VI), the PLS-GSAM results on the same data set are much better, because multiple regression uses the drift term to fit measurement noise and PLS does not. This resistance to overfitting is an extremely important advantage of the PLS method, one that recommends extensive use of PLS in chemical analysis. In Table VII, drift is added to the measurements and a drift correction term is included in the model.

Table V. No Time Additions, No Drift

% noise   rel error, % PLS-GSAM (analytes 1, 2, 3)   rel error, % multiple regression GSAM (analytes 1, 2, 3)
1         0.25, 1.02, 0.68                           0.45, 0.91, 0.55
3         1.43, 3.89, 1.68                           1.47, 3.23, 1.67
5         2.00, 4.78, 1.20                           2.18, 5.79, 4.06
10        2.90, 10.16, 6.27                          3.86, 9.55, 6.44

Table VI. Time Additions, No Drift

% noise   rel error, % PLS-GSAM (analytes 1, 2, 3)   rel error, % multiple regression GSAM (analytes 1, 2, 3)
1         0.33, 1.47, 0.85                           0.33, 1.70, 1.37
3         1.30, 4.97, 2.85                           1.26, 5.04, 3.98
5         2.77, 10.15, 6.75                          2.49, 10.94, 11.30
10        0.49, 15.36, 8.40                          10.66, 81.60, 51.20

Table VII. Time Additions, Drift

% noise   rel error, % PLS-GSAM (analytes 1, 2, 3)   rel error, % multiple regression GSAM (analytes 1, 2, 3)
1         1.90, 2.26, 1.42                           0.54, 1.82, 1.93
3         2.26, 6.64, 6.00                           1.54, 6.75, 9.18
5         5.24, 8.81, 11.69                          5.03, 34.36, 30.84
10        11.62, 33.01, 25.90                        29.46, 132.80, 164.90

Table VIII. Drift Correction in Al-As-Cd Experimental Data

                                 rel error, multiple regression GSAM   rel error, PLS-GSAM
                                 Al      As      Cd                    Al      As      Cd
no drift, with time additions    2.95    2.03    0.199                 2.90    2.00    0.202
drift, time additions            2.96    2.03    0.199                 2.90    2.00    0.202

Again, the PLS-
GSAM results are considerably better.

Table VIII shows the experimental results from the traditional GSAM and PLS-GSAM solutions. Because there were low noise levels in the measured responses from the ICP, the relative errors of the estimated initial amounts of the analytes do not differ significantly.

In this paper the PLS algorithm was introduced as a solution for multicomponent analysis with standard additions. The PLS-GSAM method is a two-stage one, with the first stage consisting of making p regressions to calculate the K matrix, and the second stage requiring regression of the initial responses on the transpose of the K matrix to obtain estimates of the initial analyte amounts. Several advantages of the PLS-GSAM method were pointed out and demonstrated on simulated data. PLS-GSAM was found to give early information about the K matrix before the necessary n = r additions are made. It was also found to be able to estimate the n0's, although with a bias that depends on the selectivity of the sensors, while avoiding fitting noise present in the data. The PLS-GSAM algorithm was shown to give better results than the original GSAM algorithm for the case of high noise levels and to be able to correct for drift when the noise level is higher than the drift itself. Further studies using experimental data are currently under way in our laboratories.

LITERATURE CITED

(1) Saxberg, B. E. H.; Kowalski, B. R. Anal. Chem. 1979, 51, 1031.
(2) Jochum, C.; Jochum, P.; Kowalski, B. R. Anal. Chem. 1981, 53, 85.
(3) Kalivas, J. H.; Kowalski, B. R. Anal. Chem. 1981, 53, 2207.
(4) Kalivas, J. H.; Kowalski, B. R. Anal. Chem. 1982, 54, 560.
(5) Kalivas, J. H.; Kowalski, B. R. Anal. Chem. 1983, 55, 532.
(6) Kalivas, J. H. Anal. Chem. 1983, 55, 565.
(7) Jöreskog, K. G., Wold, H., Eds. "Systems Under Indirect Observation, Parts I and II"; North Holland: Amsterdam, 1982.
(8) Martens, H.; Jansen, S. A. In "7th World Cereal and Bread Congress, Prague, 1982, Proceedings"; Elsevier: Amsterdam, 1982.
(9) Weisberg, S. "Applied Linear Regression"; Wiley: New York, 1980.
(10) Wold, S. Technometrics 1978, 20 (4), 397.
(11) Naylor, T. H.; Balintfy, J. L.; Burdick, D. S.; Chu, K. "Computer Simulation Techniques"; Wiley: New York, 1966; Chapter 4.
RECEIVED for review March 29, 1983. Accepted May 13, 1983. This material is based upon work supported by the National Science Foundation under Grant CHE-8004220.