In the Classroom

A Chemometrics Module for an Undergraduate Instrumental Analysis Chemistry Course


Huggins Z. Msimanga,* Phet Elkins, Segmia K. Tata, and Dustin Ryan Smith Department of Chemistry and Biochemistry, Kennesaw State University, Kennesaw, GA 30144; *[email protected]

Most of today's undergraduate chemistry laboratories are equipped with computerized instruments. The instrument–computer interface provides the user with an opportunity to acquire and store large quantities of data quickly (allowing chemical processes to be monitored in real time) and to retrieve and post-analyze data for further interpretation. Such instruments include ultraviolet–visible (UV–vis), high-resolution infrared (IR), and flame atomic absorption (FAAS) spectrometers; gas chromatographs (GC) and high-performance liquid chromatographs (HPLC) equipped with different types of detectors; gas chromatography–mass spectrometers (GC–MS); and potentiostats for studying electrochemical processes. Except for FAAS, the majority of these instruments are of the first-order type, suitable for simultaneously detecting different compounds in a multicomponent sample.

A challenging problem in analyzing a multicomponent system is establishing optimum conditions for all compounds of interest. Two or more compounds may have similar response features, leading to highly correlated signals. Several variables in the sample matrix may affect the responses of the compounds differently, making it even more difficult to detect the compounds simultaneously. These problems can be partially solved through the application of data-analysis techniques more advanced than a least-squares fit of a straight-line model. Students need to be equipped not only with more advanced skills for analyzing data, but also with skills for designing experiments and optimizing the experimental parameters that lead to good data. Chemometrics techniques (1–3) are important tools for developing such skills.

Chemometrics

Teaching chemometrics at the undergraduate level has been a topic of national discussion since the early 1980s (4–9).
For instance, what topics must be included and to what depth they must be covered are important questions, along with the mathematical demands of chemometrics techniques. Delaney and Warren (4) proposed a chemometrics content that includes simplex optimization, data smoothing, pattern recognition, library searching, graph theory, and factor analysis, among other topics. Some of these topics are shared with other chemistry courses and other disciplines. For example, in our department, students taking chemical literature courses delve into library searches, and in other disciplines, such as computer science, students learn about graph theory. Some institutions, especially graduate institutions, offer chemometrics as a full-semester course, allowing instructors to treat chemometrics techniques with greater breadth and depth. At the undergraduate level, restrictions on the number of credit hours and the desire to introduce students to a broad spectrum of techniques place constraints on whether to offer a full-semester chemometrics course. Notwithstanding these challenges, more and more undergraduate institutions are teaching chemometrics in their classes (7–9).

This article describes a chemometrics module that has been developed for senior-level students taking instrumental analysis chemistry at Kennesaw State University. The module has been designed for a wide spectrum of the student body: abstract concepts are simplified by use of illustrative examples and graphics while the capabilities of chemometrics are demonstrated. The initial activities are spread over the first four weeks of the semester, followed by further applications during the rest of the semester. The module is fully integrated with the instrumental analysis topics offered, namely spectroscopy, separation science, and electroanalysis.

In the first four weeks of the semester, topics in linear algebra, leading to the matrix formulation of the classical least-squares method, are reviewed. The F test and its applications in the analysis of variance (ANOVA) in data modeling are presented, followed by the theory and applications of multivariate analysis techniques (multiple linear regression, MLR, and target factor analysis, TFA). The fast Fourier transform (FFT) is used to demonstrate how digital filtering can enhance the signal-to-noise ratio and hence improve detection limits. Students are provided with synthetic and experimental data to prepare file structures that are compatible with the computer programs being used. Students also learn how to run the computer programs and to interpret their observations.

As the semester progresses, a number of labs are performed to reinforce the skills that have been learned. For example, students determine iron as a phenanthroline complex in Centrum tablets via an external standard.
They repeat the experiment using the standard addition method and then analyze their data to establish whether there is interference from the Centrum tablet matrix. In another lab, TFA is used to identify transition-metal ions (Co2+, Cu2+, Ni2+, MnO4−, Cr2O72−) in a mixture by extracting their absorptivities. Another experiment, developed by directed-study students for this course, involves the analysis of pain relievers for aspirin, caffeine, and salicylic acid by TFA (10). The same system is analyzed by HPLC–UV, and the two methods are compared by use of ANOVA and the appropriate F tests.

Computer programs for data analysis were locally coded in Turbo Pascal, which is compatible with MS-DOS. Similar programs can be coded in Matlab (1, 11), which is compatible with the Windows environment. Some highlights of the activities of this module are presented. Pertinent equations for ANOVA are listed in the Supplemental Material. In this article we have used TFA to illustrate multivariate-analysis techniques. Other techniques (principal component regression, PCR,

Vol. 82 No. 3 March 2005



Journal of Chemical Education

415


Table 1. Topics Covered in the Chemometrics Module During the First Four Weeks of a Fifteen-Week Semester

Week One
  In class: Define chemometrics and its role in analytical chemistry. Discuss errors in measurement and use of statistics.
  In lab: Review linear algebra and the classical least-squares method. Homework assignment on linear algebra. Discuss experimental design, data modeling, and ANOVA. Hands-on analysis of data: comparing two procedures and data modeling.

Week Two
  In class: Discuss calibration strategies (one-component and multicomponent systems).
  In lab: Hands-on analysis of data: multicomponent analysis using the classical least-squares method and noise effects on data. Use of correlation coefficients to improve a calibration matrix.

Week Three
  In class: Discuss homework problems; discuss more multivariate calibration techniques (TFA, PCR, PLS).
  In lab: Hands-on data analysis: resolution of co-eluting peaks from HPLC–EC data using TFA. Determination of concentrations of two or more components in a mixture using TFA.

Week Four
  In class: Discuss filtering techniques (box car, moving window, FFT), signal-to-noise ratio, and limits of detection.
  In lab: Data analysis: FFT on a nickel spectrum; signal-to-noise and limit of detection using a simulation program. Test on chemometrics module.

NOTE: Computer programs are used to analyze synthetic and experimental data.

partial least squares, PLS) may be used. A flowchart for TFA is provided in the Supplemental Material. Table 1 lists the activities that are carried out during the first four weeks.

Data Modeling

One method of obtaining information from experimental data is to model the data points. A model is a mathematical function that closely describes the data points. From the model, properties of the system and parameters affecting that system may be predicted. Indeed, if the parameters affecting the system are known, such information can be used to select the best experimental conditions for a desired outcome. The question is, how do we choose a satisfactory model for a particular set of data? The usual approach is to look for a model that fits the data. The most favorable model in analytical chemistry is a straight-line model, since it is easy to interpret chemical information from a straight-line graph. For example, let three measurements, Yi, be taken over three factor levels, Xi, in a three-point calibration curve to determine the concentration of nitrate ions in water:

Y1 = b0 + b1X1 + err1
Y2 = b0 + b1X2 + err2
Y3 = b0 + b1X3 + err3                              (1)

Using the above equations, a least-squares method is used to find the best parameters b0 and b1. The method minimizes the sum of squares of the residuals by taking the derivatives of the squared errors (err) with respect to b0 and b1 and equating each to zero, as in eq 2:

∂(erri²)/∂b0 = −2[Yi − (b0 + b1Xi)]
             = −2Yi + 2b0 + 2Xib1 = 0              (2a)

∂(erri²)/∂b1 = −2Xi[Yi − (b0 + b1Xi)]
             = −2XiYi + 2Xib0 + 2Xi²b1 = 0         (2b)

After taking the derivatives, the matrix forms of the results are as given in eqs 3 and 4:

| 2Y1 |   | 2  2X1 |             | 2X1Y1 |   | 2X1  2X1² |
| 2Y2 | = | 2  2X2 | [b0 b1]ᵀ,   | 2X2Y2 | = | 2X2  2X2² | [b0 b1]ᵀ   (3)
| 2Y3 |   | 2  2X3 |             | 2X3Y3 |   | 2X3  2X3² |

Y = XB,   B = (XᵀX)⁻¹(XᵀY)                         (4)

In these equations, the X matrix holds the concentration values, while the B matrix holds the b0 and b1 parameters. Superscript T means "transpose of", where the transpose of a matrix is obtained by exchanging the rows and columns of the matrix. Using the B elements, one can now calculate a set of Ycal values, which must be very close to the Y values for a good model. Examination of a plot of the observed data, Y, along with the calculated values, Ycal, might help evaluate the goodness of a model. The distribution of the residuals (deviations between the observed and calculated values) can help evaluate a model as well: for example, if the residuals are symmetrically distributed about zero, the model is good.

Table 2. Data Generated by the Function Y = 6.64 + 2.31X1 + 1.24X2 + 4.51X1X2

    Y      X1    X2
   57.4    22     0
  517      32     3
 1056      42     5
 1776      52     7
 2676      62     9
 3756      72    11

NOTE: The models evaluated by students; model IV best fits the data and therefore has the largest F-test value for goodness of fit.
  I.  Y = b0 + b1X + err
 II.  Y = b0 + b1X + b1X2 + err
III.  Y = b0 + b1X + b2X2 + err
 IV.  Y = b0 + b1X + b2X2 + b3X1X2 + err

Figure 1. Plot of data points generated by a random function: R² = 0.0035 and Fcal(2, 28) = 0.097, which is less than Fcrit(2, 28) = 3.32, showing that there is no relationship between Y and X at the 95% confidence level.

Examination of particular sums of squares via ANOVA, and then calculating ratios such as the coefficient of multiple determination (R²) and the goodness-of-fit F test, is yet another method of evaluating a model. Sums of squares of deviations are calculated based on Y and Ycal, as outlined in the Supplemental Material, and R² is then given by

R² = SSfact / SScorr                               (5)

where SSfact is the sum of squares owing to the factors in the model and SScorr is the sum of squares corrected for the mean. R² approaches unity for a good model. The F test for the goodness of a model is given by

F = [SSfact / (p − 1)] / [SSr / (n − p)]           (6)

where SSr is the sum of squares of the residuals, p is the number of parameters in the model, and n is the number of experimental points. The F test works on the hypothesis that Y is not dependent on X (i.e., for a straight-line model, b0 = b1 = 0). If the F value calculated by eq 6 is less than the corresponding tabulated value at the specified level of confidence, then Y is not a function of X. Of any two models being compared, the one with the larger F-test value is the better model. The ANOVA sums of squares of deviations that are calculated to obtain R² and the goodness-of-fit F test are shown in the Supplemental Material.
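The calculation embodied in eqs 4–6 can be sketched in a few lines. The snippet below uses Python with NumPy; the x and y values are invented for illustration and are not from the module (whose own programs were written in Turbo Pascal):

```python
import numpy as np

# Illustrative three-point "calibration" (x and y values are made up)
x = np.array([1.0, 2.0, 3.0])
y = np.array([2.1, 3.9, 6.2])

# Design matrix for the straight-line model Y = b0 + b1*X (eq 1)
X = np.column_stack([np.ones_like(x), x])

# Classical least squares: B = (X^T X)^-1 (X^T Y)   (eq 4)
B = np.linalg.solve(X.T @ X, X.T @ y)
y_cal = X @ B                          # modeled (calculated) values

# ANOVA sums of squares
n, p = len(y), X.shape[1]
ss_corr = np.sum((y - y.mean())**2)    # corrected for the mean
ss_r    = np.sum((y - y_cal)**2)       # residuals
ss_fact = ss_corr - ss_r               # due to the factors in the model

r2 = ss_fact / ss_corr                          # coefficient of determination (eq 5)
f  = (ss_fact / (p - 1)) / (ss_r / (n - p))     # goodness-of-fit F test (eq 6)
```

For these made-up points the fit is good (R² near 1) and the F value is large, mirroring the behavior students observe with the module's data sets.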

Evaluation of Two Extreme Cases

To get some hands-on experience of how ANOVA is used to evaluate a model, students are provided with two extreme cases: in one, a model is totally unrelated to the data; in the other, a model fits the data. In the first case, data were generated by a random-noise function, that is, Y ≠ f(X, a, b). The X–Y plot of the random points is shown in Figure 1. Using a straight-line model (Y = a + bX), students calculate Ycal based on the a and b values. They note the coefficient of multiple determination (R² = 0.0035), which should approach unity for a good model. The F test for goodness of fit turns out to be 0.097, which is far less than Fcrit (3.32) at the 95% confidence level. These observations clearly indicate that a straight-line model does not fit this particular data set. In the second case, a set of data is generated using eq 7, which exemplifies a two-factor model with factor interaction:

Y = 6.64 + 2.31X1 + 1.24X2 + 4.51X1X2              (7)

Students are asked to find the best model for this data set from those listed in Table 2. Larger F-test values are obtained as better models are used: models I through IV yield R² values of 0.97, 1.00, 1.00, and 1.00, respectively, while the corresponding F-test values are 57.0, 3583, 5882, and 2.1 × 10⁷. All four F-test values indicate that Y is dependent on X1 and X2 at the 95% level of confidence, but model IV describes the data points best; the R² values do not distinguish among these models. These two extreme cases help students understand the use of the F test in evaluating the goodness of a model.
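The model-comparison exercise can be reproduced numerically. The sketch below regenerates the Table 2 data from the function of eq 7 and compares the straight-line model I with the full interaction model IV via the F test of eq 6; the helper `fit_f` is our own name, not part of the module's programs:

```python
import numpy as np

# Regenerate the Table 2 data from the generating function (eq 7)
x1 = np.array([22.0, 32, 42, 52, 62, 72])
x2 = np.array([0.0, 3, 5, 7, 9, 11])
y = 6.64 + 2.31*x1 + 1.24*x2 + 4.51*x1*x2

def fit_f(X, y):
    """Goodness-of-fit F value (eq 6) for a least-squares fit X*B = y."""
    B, *_ = np.linalg.lstsq(X, y, rcond=None)
    n, p = len(y), X.shape[1]
    ss_corr = np.sum((y - y.mean())**2)
    ss_r = max(np.sum((y - X @ B)**2), 1e-12)  # floor avoids dividing by 0
    return ((ss_corr - ss_r) / (p - 1)) / (ss_r / (n - p))

ones = np.ones_like(x1)
f_I  = fit_f(np.column_stack([ones, x1]), y)              # model I
f_IV = fit_f(np.column_stack([ones, x1, x2, x1*x2]), y)   # model IV

# Model IV reproduces the data essentially exactly, so its F value
# dwarfs that of the straight-line model
```

Both models give an F value well above Fcrit, but the F value for the interaction model is larger by many orders of magnitude, which is the point of the exercise.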

Evaluation of Square-Wave Voltammetry In square-wave voltammetry, current is measured against the applied potential. If the reduction or oxidation potential of the electroactive species falls within the potential window of the electrochemical cell used, a signal of the species will be observed. The potential window is a function of the components of the electrochemical cell (solvent, support electrolyte, electrodes). Also, for a given electrode system, finding a


Table 3. Reduction Potentials Used To Carry Out Modeling of Cu2+, Cr3+, Zn2+, and Fe3+ Ions

                                Reduction Potential/mV
Solution   pH    Ic/(mol L⁻¹)   Cu2+    Cr3+    Zn2+     Fe3+
   1      5.26      0.200        40     ---     −960    −1310
   2      5.51      0.100        50     −790    −960    −1290
   3      3.79      0.040        60     −730    −950    −1250
   4      4.45      0.040        60     −740    −950    −1250
   5      4.52      0.200        40     ---     −960    −1310
   6      3.43      0.020        70     −700    −950    −1250
   7      5.46      0.200        50     ---     −950    −1270
   8      2.73      0.002        60     −700    −970    −1270
   9      6.01      0.100        60     ---     −940    −1270

NOTE: The potentials were measured in different solvent–electrolyte compositions.

solvent–support electrolyte that will resolve the reduction potentials of the analytes, thus increasing the selectivity of square-wave voltammetry as a multiple-channel detector for analyzing a multicomponent system, is a challenging problem in electrochemistry. For example, the pH and ionic strength of the medium are observed to affect the reduction potentials of Cu2+, Cr3+, Zn2+, and Fe3+. Can a suitable support electrolyte that will allow simultaneous analysis of a mixture of these ions be obtained?

To answer this question, data are provided to students to model the reduction potentials, E, of several metal ions at varying values of pH and ionic strength in CH3COOH/Na(CH3COO) buffer. For each metal ion, different mixtures of support electrolytes were prepared, each with a different pH and ionic strength combination. The measured reduction potentials are shown in Table 3. Students obtain the models for each metal ion, overlay the models, and then deduce from the plots what pH and ionic strength ranges are suitable for analyzing a mixture of the metal ions. An Ammel 433 A trace analyzer (Electrosynthesis Co.), equipped with a dropping mercury working electrode, a platinum counter electrode, and a Ag/AgCl reference electrode, was used for data acquisition.

The data in Figure 2 show that the optimum support electrolyte for all four metal ions is defined at a pH range of 2.5 to 3.5 and an ionic strength of 0.02 to 0.06. Cu2+ and Zn2+ do not seem to be affected by the different support electrolyte compositions. Indeed, when a solvent–support electrolyte of the optimum composition is used, the data in Figure 3 are obtained, with the four peaks clearly resolved on the potential axis. A similar design of this exercise can be used to study the potential and pH ranges that give the best current response, which can be related to the concentration of the electroactive species.

Multivariate Analysis

Figure 2. Plot of the reduction potentials of the metal ions as the pH and ionic strength change. A support electrolyte of ionic strength around 0.15 and pH 5.5 will not resolve Cr3+ from Zn2+, while ionic strength of 0.02 to 0.06 and pH of 2.5 to 3.5 will.

Figure 3. Square-wave voltammograms of solutions containing all four metal ions at optimum ionic strength (0.02–0.06) and pH (2.5– 3.5). The metal ions can be analyzed simultaneously in a mixture, thus making full use of square-wave voltammetry as a multicomponent sensor.


While zero-order instruments provide simple measurements, usually aimed at detecting one component in a sample, scanning instruments have multiple channels that provide more complex data from samples with several components. Such complex data must be analyzed with appropriate techniques such as MLR, TFA, PCR, and PLS. Unlike MLR, the TFA, PCR, and PLS techniques quantify the number of significant components in a mixture and calculate the profiles (spectra, voltammograms, chromatograms, etc.) of those components (1–3). They are self-modeling techniques that can be used to resolve convoluted peaks, to determine concentrations of components in multicomponent systems, and to study multiple chemical equilibria and kinetics, among other uses. In this article we focus on TFA to illustrate some of the capabilities of these techniques.

Theory of TFA

Given a data matrix D, with n rows and p columns, TFA seeks to compute subsets of matrix D such that eq 8 is satisfied. The rows and columns of each matrix have a physical meaning, and they are defined for each example provided to the students as:

Dn,p = Cn,k Rk,p                                   (8)

For matrix D to be factor-analyzable, its individual elements must be linear sums of product terms. For example, the total absorbance of a sample containing n UV-active components at a specified wavelength is given by eq 9,

At = ε1C1 + ε2C2 + … + εnCn                        (9)

where each term is a product of an absorptivity, ε, and a concentration, C, for each component. The reader is referred to the Supplemental Material for the pertinent equations presented in the TFA flowchart.

TFA decomposes the covariance matrix of D (Z = DᵀD) into two matrices, one holding a set of eigenvalues (λ) and the other holding eigenvectors, Q. The eigen-analysis procedure used in the subsequent calculations is described in detail in Malinowski's monograph (1). The eigenvalues provide information about the number of significant factors contributing to the D matrix. The eigenvectors are abstract distortions of the profiles of the significant factors. They are used to calculate the abstract matrices R* and C* (see Appendix B in the Supplemental Material). To calculate the real factors R and C, a transformation vector, Tv, is sought. As shown in the flowchart, Tv is the least-squares result of using a test vector, Ctest, which must have some resemblance to the basic factors contained in the data matrix. In curve-resolution problems, such as those encountered in HPLC–UV or HPLC–electrochemical detection, a uniqueness test vector has been used successfully to extract chromatograms of significant components (12–14). In multivariate-calibration problems, the concentrations of the analytes in the training set can be readily used as test vectors to obtain the required transformation vectors, leading to the calculation of the real R and C, which contain the chemical information being sought.

To demonstrate how TFA works, two synthetic vectors are generated using a Gaussian function. One vector represents a spectrum and the other a chromatogram, as shown in Figure 4a. The two vectors are cross-multiplied to give the matrix shown in Figure 4b. TFA decomposes this matrix into eigenvalue and eigenvector matrices. This is a noise-free matrix with one component by design: the first eigenvalue is 6.64, while the second is 2.8 × 10⁻¹¹.
TFA reproduces a nearly 100% match of the traces that were used to produce the matrix as seen in Figure 4c. For more insight into TFA, two experimental data sets, one involving curve resolution and the other involving multivariate calibration, are provided.
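The rank-one demonstration is easy to reproduce. In the sketch below (our own peak positions, widths, and axis lengths, not the ones used for Figure 4), the outer product of two Gaussian vectors is factor-analyzed through the covariance matrix Z = DᵀD, and a single eigenvalue carries essentially all of the variance:

```python
import numpy as np

# Synthetic one-component matrix: outer product of a Gaussian
# "chromatogram" (rows) and a Gaussian "spectrum" (columns).
# Peak positions and widths are arbitrary choices for illustration.
t = np.arange(30.0)
w = np.arange(48.0)
chrom = np.exp(-0.5 * ((t - 15) / 3.0)**2)
spec  = np.exp(-0.5 * ((w - 20) / 5.0)**2)
D = np.outer(chrom, spec)

# Eigen-analysis of the covariance matrix, Z = D^T D
Z = D.T @ D
eigvals = np.sort(np.linalg.eigvalsh(Z))[::-1]

# Only one significant factor: the second eigenvalue is at the
# machine-precision level relative to the first
ratio = abs(eigvals[1]) / eigvals[0]
```

With noise-free, rank-one data the second and higher eigenvalues are zero to within round-off, which is exactly the pattern students see in the module's synthetic example.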

Curve Resolution

Students analyze a matrix containing three nitrophenols. This matrix was obtained by high-performance liquid chromatography using a scanning electrochemical detector, HPLC–EC (14). The computer-controlled instrument was connected to an EG&G PARC model 310 static mercury drop electrode stand and a flow cell. The instrument was operated in the square-wave mode at 600 mV s⁻¹. A C18 reverse-phase column (4.6-mm i.d. × 150-mm) was used for separation, with acetonitrile buffered at pH 5.1 in CH3COOH/CH3COO⁻ as the mobile phase. The three nitrophenols (2,3-dinitrophenol, p-nitrophenol, and m-nitrophenol) were completely resolved neither on the time scale nor on the potential scale. Thus TFA
Figure 4. Graphics illustrating how TFA works. The two vectors (a) are cross-multiplied to give the three-dimensional figure (b). Factor analysis reduces the matrix back to vectors displayed in (c), which are identical to (a).

had to be used for quantification purposes. The resulting D matrix consists of n = 30 rows of chromatograms (time axis) and p = 48 columns of voltammograms (potential axis). The measured response is current, since an electrochemical detector was used. To use more appropriate variable names, eq 8 may now be rewritten as

Dn,p = Vn,k Ck,p                                   (10)

where the D matrix is reduced to a V matrix consisting of k column vectors of voltammograms, each with n elements, and a C matrix consisting of k rows (chromatograms), each with p elements.

The principal factor analysis output for the first twelve components (NC) is shown in Table 4. The reduced eigenvalues (red_eigen) clearly show that there are three significant components in this matrix. The autocorrelation function (auto_co) points at three or four components, while the variance (var) and the ratio function indicate three components. The indicator function (IND) predicts too many components. Details of how these functions are used to predict the number of significant components are described in the literature (1, 15).

Abstract voltammograms for the first eight components are shown in Figure 5. While the first four vectors are relatively smooth, they show very little resemblance to a voltammogram (Gaussian shape). The last four vectors show some noise, as expected since there are only three significant components. A transformation vector is needed to convert the vectors comprising the abstract factors (V* and C*) into real, interpretable vectors. Since the voltammograms and chromatograms have maximum peaks, intuition suggests that a Λ-shaped target test may be a reasonable one. If one chooses to use the abstract chromatograms (C*) for calculating the transformation vector, a set of test vectors of a unity and zeroes is used (12–14). The first test vector is made up of (1, 0, 0, 0, ..., p) up to p elements. The second one is made up of (0, 1,


Figure 5. Plot of abstract voltammograms of nitrophenols corresponding to the first eight eigenvalues. The first four vectors have less noise (primary eigenvectors) while the last four show some noise (secondary eigenvectors). A transformation vector is required to transform the abstract to real vectors.

Figure 6. A time–voltage–current plot of data for the separation of nitrophenols using HPLC–EC system: (a) before analysis with TFA, (b) voltammograms, and (c) chromatograms of the three compounds after using TFA and the transformation vector. The compounds are (I) 2,3-dinitrophenol, (II) m-nitrophenol, and (III) p-nitrophenol.

0, 0, ..., p) and so on. A total of k rows of such vectors are obtained and used as Ctest in eq 11:

Tv = (C*ᵀC*)⁻¹(C*ᵀCtest)                           (11)

Tv is then used to complete the combination step, yielding V and C. The D matrix and the Tv-transformed voltammograms and chromatograms are displayed in Figures 6a–c. The two-peak voltammogram (Figure 6b) is that of a dinitrophenol, which has two electroactive −NO2 groups.

From the chromatograms, concentrations of the nitrophenols can be calculated by using the peak areas of standard solutions. Further, since reverse-phase HPLC was used, the structures of the separated nitrophenols can be matched with their relative polarities. Thus 2,3-dinitrophenol (I) is more polar than p-nitrophenol (II), which is more polar than m-nitrophenol (III).
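Equation 11 is an ordinary least-squares projection, and the logic of target testing can be sketched with made-up two-component data. The Gaussian profiles and the SVD route to the abstract factors below are our illustrative choices, not the module's data:

```python
import numpy as np

gauss = lambda x, mu, sig: np.exp(-0.5 * ((x - mu) / sig)**2)

# Made-up two-component data: D = V C, voltammograms times chromatograms
t = np.arange(40.0)                                     # time axis (p points)
e = np.arange(25.0)                                     # potential axis (n points)
C = np.vstack([gauss(t, 12, 3), gauss(t, 22, 4)])       # real chromatograms (k x p)
V = np.column_stack([gauss(e, 8, 2), gauss(e, 16, 3)])  # real voltammograms (n x k)
D = V @ C

# Abstract chromatograms from the decomposition (here via SVD)
k = 2
U, s, Wt = np.linalg.svd(D, full_matrices=False)
Cstar = (np.diag(s[:k]) @ Wt[:k]).T                     # p x k abstract factors

def target_residual(c_test):
    """Fit of a test vector by the abstract factors, eq 11:
    Tv = (C*^T C*)^-1 (C*^T c_test)."""
    Tv = np.linalg.solve(Cstar.T @ Cstar, Cstar.T @ c_test)
    return np.linalg.norm(Cstar @ Tv - c_test)

good = target_residual(C[0])             # a real factor: residual ~ 0
bad  = target_residual(gauss(t, 33, 2))  # not in the data: large residual
```

A target that is really present in the data is reproduced almost exactly by the abstract factors, while a target that is not present leaves a large residual; this is the criterion behind the uniqueness tests cited above.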

Table 4. Principal Factor Analysis Output of the Nitrophenols Data Matrix for the First Twelve Components

NC   Red_eigen   IND      Auto_co   Ratio    Var
 1     21161     0.0957    0.98     0.698   77.2
 2      3625     0.0702    0.95     0.869   12.5
 3      3128     0.0076    0.96     0.999   10.2
 4        21     0.0048    0.86     1.00     0.068
 5         4.9   0.0041    0.72     1.00     0.014
 6         2.9   0.0035    0.63     1.00     0.008
 7         1.5   0.0032    0.43     1.00     0.004
 8         1.1   0.0029    0.005    1.00     0.003
 9         0.69  0.0028    0.044    1.00     0.003
10         0.35  0.0028    0.038    1.00     0.000
11         0.30  0.0029   −0.095    1.00     0.000
12         0.24  0.0030   −0.048    1.00     0.000

NOTE: All the predictors used point at three or four significant components in the matrix. However, the IND function predicts nine components. Column headings are defined in the text.

Multivariate Calibration

Multivariate-calibration methods help to enhance selectivity and reliability in measurement by using all the data points provided by scanning instruments. The same data format as in eq 10 can be used for multivariate-calibration problems, but with a different data description. Equation 10 may be rewritten as eq 12 to emphasize that C is a calibration matrix while R is a regression matrix:

Dn,p = Cn,k Rk,p                                   (12)

As with the curve-resolution problem treated above, electrochemical data are provided for analysis using TFA as a multivariate-calibration technique. The data matrix, D, is obtained by measuring responses of n − 1 different mixtures of compounds of known concentrations, including the nth sample response containing compounds of interest. The columns of D represent voltammograms measured at p potential steps. The rows of C are mixtures of compounds of known concentrations up to n − 1 rows. The nth row consists of zeros for the compounds whose concentrations are to be determined. The k columns of C represent the number of compounds. For the R matrix, the rows represent k compounds and the p columns represent the regression coefficients as a function of each potential step. These coefficients are proportionality constants that relate measured current to the concentration of the electroactive species.
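The structure just described can be exercised numerically through the MLR route of eqs 13 and 14. All numbers below (mixture concentrations, Gaussian response profiles, matrix sizes) are invented for illustration; with noise-free data the known concentrations are recovered exactly:

```python
import numpy as np

rng = np.random.default_rng(7)

# Invented calibration: 8 mixtures (rows of C) of k = 2 components,
# measured over p = 30 potential steps
n_mix, k, p = 8, 2, 30
C = rng.uniform(0.5, 10.0, size=(n_mix, k))          # known concentrations
x = np.arange(p, dtype=float)
R = np.vstack([np.exp(-0.5 * ((x - 10) / 3)**2),     # assumed pure-component
               np.exp(-0.5 * ((x - 20) / 4)**2)])    # response profiles (k x p)
D = C @ R                                            # noise-free responses

# Eq 13: regression matrix from the calibration data, R = (C^T C)^-1 (C^T D)
R_hat = np.linalg.solve(C.T @ C, C.T @ D)

# Eq 14: concentrations in an "unknown" sample from its response vector,
# Cx = Dx R^T (R R^T)^-1
c_true = np.array([3.2, 6.5])
d_x = c_true @ R
c_x = d_x @ R_hat.T @ np.linalg.inv(R_hat @ R_hat.T)
```

Real data are never this clean, which is the point made below: MLR has no mechanism for rejecting noise or detecting interference, whereas TFA does.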


Another important feature of TFA as a multivariate calibration technique is that it is capable of predicting missing points in a vector when valid test vectors are used. The calibration matrix provides valid test vectors. Ctest, a vector of concentrations of compound X in each of the n calibration solutions, including a zero for the concentration of X in the sample solution, is used in eq 11 to obtain Tv. This Tv in turn is used to predict the concentration of X in the sample solution, according to eq 15:

C X = C* T v

Figure 7. Plot of the voltammograms of a calibration matrix for the analysis of Pb2+ and Tl+. A 0.1 M KNO3 support electrolyte was used. (Waveform: 50 mV amplitude, 10 mV wave increment, 100 ms wave period and potential window: 0 to ᎑700 mV.) The insert is the resolved Pb2+ and Tl+ from solution 4, the sum of the resolved Pb2+ and Tl+, showing how the sum agrees with the composite solution number 4.

The design of a calibration matrix, C, is crucial for successful analysis of a mixture. Each calibration sample must contain a unique set of combinations of component concentrations, unrelated to any other mixture in the calibration matrix. Such a calibration matrix can be obtained by checking the correlation coefficients of the concentrations, and replacing heavily correlated off-diagonal concentrations by those that will lead to less correlation. In eq 12, we know the two matrices, D from the instrument response and C from the preparations of the calibration matrix. If matrix D is noise free and has no interference, and C has very small or no errors, it is possible to obtain R by the MLR method (16) according to eq 13:

(

R = CTC

−1

) (C D ) T

(15)

The zero in the original Ctest is replaced by CX iteratively until CX converges to a constant value. The concentrations of compounds in a given sample are predicted one at a time, since they have unique Ctest vectors. It is found that the concentrations that are predicted by TFA using either eqs 14 or 15 are statistically similar to those predicted by PCR or PLS.

Example To demonstrate the practical aspects of multivariate calibration using TFA, some electrochemical data of a binary system containing Pb2+ and Tl+ are provided to the class. The response matrix was obtained by taking voltammograms of several mixtures of different concentrations of the binary system in 0.1 M KNO3 as support electrolyte. An Ammel 433 A trace analyzer was used. The waveform consisted of a 50 mV amplitude, 10 mV wave increment, and 100 ms wave period. A potential window of 0 to ᎑700 mV was used. Nine voltammograms (matrix D) of the calibration matrix of Pb2+ and Tl+ are shown in Figure 7. The actual concentrations of the nine solutions are listed in Table 7. Eigenanalysis of this D matrix is summarized in Table 5. There was no pretreatment of data prior to factor analysis. While the reduced eigenvalues (red_eigen) do not decrease to zero, they fall rapidly from the first to the second component. The ratio of the second eigenvalue to the first one is 12%, while that of the third to the second is 1%, thus indicating two significant factors. The other functions indicate two or three factors. A plot of the log of sum of squares of errors versus the number of com-

(13)

Once R is known, it can be used with the response vector of the sample (Dv) to obtain the concentration of the compounds in the sample, Cx, according to eq 14:

(

)(

C x = Dx R T R R T

)

−1

NC

While the errors due to interference can be minimized, one cannot know with certainty to what degree the predictions will be affected by such errors. In reality, no experimental data are error free. One major limitation of MLR is that it does not analyze data for background noise or interference, neither does it determine the rank of the data matrix. TFA analyzes the D matrix to determine the matrix rank and to reduce the background noise by rejecting secondary eigenvalues. TFA also uses the calibration concentrations as test vectors to find the best transformation vector (Tv), which is used to calculate R. Once R is obtained, concentrations of the compounds are found by using eq 14. www.JCE.DivCHED.org

Table 5. Principal Factor Analysis Output for the Pb2+/Tl+ Data Matrix

n    Red_eigen    IND      Auto_co   Ratio    Var
1    3.4 x 10^8   102.3    0.99      0.802    89.90
2    4.3 x 10^7    15.0    0.97      0.997     9.950
3    5.5 x 10^5     3.42   0.87      0.999     0.107
4    7.5 x 10^3     3.98   0.76      1.000     0.001
5    7.4 x 10^3     3.98   0.84      1.000     0.001
6    3.0 x 10^3     4.81   0.79      1.000     0.000
7    9.7 x 10^3     9.94   0.51      1.000     0.000

NOTE: The auto-correlation (auto_co) and variance (var) functions indicate two significant components, while the reduced eigenvalues (red_eigen), indicator function (IND), and coefficient of multiple determination (ratio) point to three components.
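The rank-estimation functions of Table 5 can be reproduced in outline with a few lines of numpy. The data below are a synthetic rank-two matrix standing in for the Pb2+/Tl+ voltammograms, and the reduced-eigenvalue formula follows Malinowski (ref 1); this is a sketch of the idea, not the authors' program.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic rank-2 data matrix D (9 mixtures x 64 points) plus noise,
# standing in for the Pb2+/Tl+ calibration voltammograms.
x = np.arange(64)
S = np.vstack([np.exp(-((x - 22) / 4.0) ** 2),
               np.exp(-((x - 38) / 4.0) ** 2)])
C = rng.uniform(1, 60, size=(9, 2))            # mixture concentrations
D = C @ S + rng.normal(0, 0.05, (9, 64))

# Eigenanalysis of the covariance matrix, largest eigenvalues first
eigvals = np.linalg.eigvalsh(D @ D.T)[::-1]
r, c = D.shape

# Reduced eigenvalues, REV_n = lambda_n / ((r - n + 1)(c - n + 1))
n = np.arange(1, r + 1)
rev = eigvals / ((r - n + 1) * (c - n + 1))

# Percent variance captured by each abstract factor
var = 100 * eigvals / eigvals.sum()
print(np.round(var[:3], 3))   # the first two factors dominate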

Vol. 82 No. 3 March 2005



Journal of Chemical Education

421

In the Classroom

The TFA iteration using the Pb2+ test vector is shown in Table 6. The first element of the fifth row in Table 6 (30.54 ppm) was replaced by a zero and labeled "MP" in the table. As seen, within six iterations a value of 30.66 ppm was predicted. The iteration was terminated when two consecutive predictions did not change appreciably. Table 7 summarizes the Pb2+ and Tl+ predictions by TFA, PCR, and PLS, with each of the calibration mixtures used in turn as a validation sample. Except for the lowest concentrations, 1.018 ppm Pb2+ and 1.264 ppm Tl+, the tabulated results indicate that all three multivariate calibration techniques perform about equally well, as shown by the statistics row of Table 7. The numbers in that row are the averages of calculated concentration expressed as a percentage of known concentration for the calibration solutions, together with their standard deviations.

Figure 8. A plot of log(PRESS) versus the number of components, where PRESS is the prediction sum of squares of errors. The plot shows that the Pb2+/Tl+ calibration matrix performs best when three components are used.

Signal Processing

Analytical instruments are limited in the level of analyte concentration they can detect. One way to improve detection limits is to perform digital filtering on a signal. In instrumentation, different forms of operational-amplifier integrated circuits are widely used to produce analog filters that improve the signal-to-noise ratio.

Once the matrix rank has been established, eqs 11 and 15 are used to predict the "missing point" in the test vector.

Table 6. The Iterative Process as the "Missing Point" (MP) Is Being Sought by TFA

Test Vector   ---------- Number of Iterations ----------   Pred/Test (%)
                 1       2       3       4       5       6
61.08         63.27   61.47   60.80   60.55   60.46   59.76    97.84
55.99         57.01   56.87   56.83   56.81   56.80   57.18   102.1
50.90         47.40   49.67   50.52   50.84   50.95   51.02   100.2
40.72         31.51   37.08   39.17   39.95   40.24   40.23    98.80
MP            19.23   26.40   29.08   30.08   30.45   30.66   100.4
20.36         11.23   17.00   19.15   19.95   20.25   20.37   100.0
10.18         5.801   8.521   9.535   9.910   10.05   10.11    99.31
5.090         4.668   4.953   5.059   5.098   5.113   5.145   101.1
1.018         4.965   2.481   1.554   1.208   1.080   1.055   103.6

NOTE: Within six iterations a number in agreement with the expected value is obtained. The predicted values, including the "missing point" (MP), expressed as percent of predicted/test vector, range from 98% to 104% of the test vectors.
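The missing-point iteration of Table 6 can be sketched as follows. A synthetic rank-two calibration matrix stands in for the real data, and the test vector is projected repeatedly onto the abstract factor space, with only the missing element updated each pass; this illustrates the scheme, not the authors' code.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic rank-2 calibration matrix standing in for the Pb2+/Tl+ data
x = np.arange(64)
S = np.vstack([np.exp(-((x - 22) / 4.0) ** 2),
               np.exp(-((x - 38) / 4.0) ** 2)])
C = rng.uniform(5, 60, size=(9, 2))        # 9 mixtures, 2 metals (ppm)
D = C @ S + rng.normal(0, 0.01, (9, 64))

# Abstract factors in sample space: first NC left singular vectors of D
NC = 2
U = np.linalg.svd(D, full_matrices=False)[0][:, :NC]

# Concentration target vector for metal 1, with sample 4 treated as missing
target = C[:, 0].copy()
true_val = target[4]
target[4] = 0.0                            # the "missing point" (MP)

# Iterate: project onto the factor space, update only the missing element
for _ in range(50):
    target[4] = (U @ (U.T @ target))[4]

print(round(target[4], 2), "vs true", round(true_val, 2))
```

As in Table 6, the missing element converges from zero toward the true concentration, because the full concentration vector lies (to within noise) in the span of the abstract factors.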

Table 7. Predictions of Pb2+ and Tl+ Concentrations by TFA, PCR, and PLS

Solution   Pb2+ (ppm)   TFA      PCR      PLS      Tl+ (ppm)   TFA      PCR      PLS
1          61.08        59.76    59.86    59.87    1.264       1.593*   1.698*   1.596*
2          55.99        57.18    57.01    56.88    6.320       6.612    5.767    5.841
3          50.90        51.02    51.53    51.52    12.64       12.05    12.60    12.19
4          40.72        40.23    40.49    40.33    25.28       25.13    25.64    25.80
5          30.54        30.66    30.18    30.31    37.92       38.28    37.59    37.67
6          20.36        20.37    20.46    20.46    50.56       50.19    50.27    50.27
7          10.18        10.10    10.21    10.22    63.20       63.31    63.46    63.44
8          5.090        5.145    5.003    5.055    69.52       70.48    70.33    70.29
9          1.018        1.055*   1.143*   1.143*   75.84       74.57    74.71    74.71

Pred/Test (%)  ---   99.8 ± 1.4   99.9 ± 1.2   99.9 ± 2.6   ---   99.5 ± 3.8   98.7 ± 3.1   100. ± 1.3

NOTE: Excluding the lowest concentrations of Pb2+ and Tl+ (marked with asterisks), the predictions are very satisfactory. The percent predicted/test vector averages range from 98.7 to 100.0%, and the standard deviations range from 1.2 to 3.8.


One limitation of these filters is their lack of the flexibility needed to cope with changing experimental conditions. This limitation has led scientists to seek software-filtering techniques (17–20), which are programmable and hence can be adapted to different types of noise. In the instrumental analysis class several signal-enhancement techniques (boxcar averaging, ensemble averaging, Fourier transforms) are presented. Of particular interest is the FFT (18, 19), which enjoys broad application in digital filtering, signal differentiation, and signal resolution. In its filtering application, the forward FFT transforms the data array to the frequency domain, where the noise frequencies occupy a different region from the signal frequencies. A truncating function, chosen according to the type of data being smoothed, removes the noise frequencies. The inverse FFT then converts the convoluted frequencies back to the time domain with an increased signal-to-noise ratio. Equations 16 and 17 define the forward and inverse FFT (20):

F(u) = (1/N) Σ_{x=0}^{N−1} f(x) exp(−j2πux/N)    (16)

f(x) = Σ_{u=0}^{N−1} F(u) exp(j2πux/N)    (17)

In these equations, f(x) and F(u) are the vector elements of the raw data and of the frequencies, respectively; N is the total number of vector elements and takes values of 2^n, where n is an integer; and j is the imaginary unit. The different steps of FFT filtering are illustrated in Figure 9. Raw absorbance data of a roughly 4.0 × 10^−4 M aqueous nickel(II) solution are displayed in Figure 9a.
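Eqs 16 and 17 can be implemented directly (a slow, O(N^2) sketch shown for clarity; an FFT computes the same quantities faster). With the 1/N factor on the forward transform, the round trip recovers the original data:

```python
import numpy as np

def dft(f):
    # Eq 16: F(u) = (1/N) * sum_x f(x) * exp(-j*2*pi*u*x/N)
    N = len(f)
    x = np.arange(N)
    u = x.reshape(-1, 1)
    return (f * np.exp(-2j * np.pi * u * x / N)).sum(axis=1) / N

def idft(F):
    # Eq 17: f(x) = sum_u F(u) * exp(+j*2*pi*u*x/N)
    N = len(F)
    u = np.arange(N)
    x = u.reshape(-1, 1)
    return (F * np.exp(2j * np.pi * u * x / N)).sum(axis=1)

# Round trip on a small test vector (N a power of 2, as the text requires)
f = np.sin(2 * np.pi * np.arange(8) / 8) + 0.5
print(np.allclose(idft(dft(f)).real, f))   # True
```

The same result (up to the 1/N normalization) is obtained from numpy's built-in FFT, which is what one would use in practice.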

Figure 9. (a) Unfiltered data from a 256-point Ni2+ absorbance spectrum. (b) Frequency elements before using a low-pass filter. (c) Frequency elements after using a low-pass filter. A 5th-order Butterworth filter with a cutoff, D0 = 10, was used. (d) Ni2+ spectrum after inverse FFT.




The raw data are Fourier transformed to the frequency domain (Figure 9b), where the lower frequencies constitute the signal while the noise occupies the higher-frequency regions. The first portion of the frequency vector contains the real frequencies and the second portion the imaginary frequencies. A filter function is sought to remove the high-frequency noise. In our case we have used a Butterworth filter function (20), defined by eq 18:

H(u) = 1 / [1 + (D(u)/D0)^{2n}]    (18)

where n is the order of the filter function, D0 is the cutoff point, and D(u) is the distance of frequency element u from the zero-frequency point. For frequencies well below the cutoff, H(u) ≈ 1, preserving the signal frequencies; for frequencies well above it, H(u) ≈ 0, "killing" the noise frequencies. The data displayed in Figure 9a are provided to the students to determine a suitable cutoff point (the point distinguishing signal frequencies from noise frequencies). The cutoff point is determined visually as each inverse-transformed vector is overlaid on the original raw-data vector. Figure 9c shows the H(u)F(u) results, while Figure 9d shows the inverse-transformed points. A cutoff D0 of 10 and a filter order of 5 were found suitable for the nickel absorbance spectrum.

For a better understanding of the signal-to-noise ratio (S/N) and limits of detection (LOD), and of how signal filtering improves signal quality, the class is provided with a simulation program. This program generates a Gaussian peak, gives the user the option to add noise to the peak, and then analyzes the peak before and after filtering. A 256-point peak with a height of 2.0, a position of 128, and a width of 20 units is shown in Figure 10, with about 20% noise added.
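A minimal numpy sketch of the whole filtering sequence — forward FFT, Butterworth attenuation per eq 18, inverse FFT — is given below. The noisy Gaussian stands in for the nickel spectrum of Figure 9; the cutoff D0 = 10 and order n = 5 follow the text, while the noise level is an illustrative assumption.

```python
import numpy as np

rng = np.random.default_rng(3)

# Noisy synthetic "spectrum": Gaussian band plus white noise
N = 256
i = np.arange(N)
clean = 2.0 * np.exp(-(((128 - i) / 20.0) ** 2))
noisy = clean + rng.normal(0, 0.1, N)

# Forward FFT, then the Butterworth low-pass of eq 18,
# H(u) = 1 / (1 + (D(u)/D0)**(2n)), applied to both spectral halves
F = np.fft.fft(noisy)
D0, order = 10, 5
D = np.minimum(i, N - i)                  # distance from the zero frequency
H = 1.0 / (1.0 + (D / D0) ** (2 * order))
filtered = np.fft.ifft(F * H).real        # back to the "time" domain

print(np.std(noisy - clean) > np.std(filtered - clean))   # True: noise reduced
```

Taking the distance D(u) from the zero-frequency point on both halves of the spectrum keeps the filter symmetric, so the inverse transform returns a real signal.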

Figure 10. Plot of synthetic noisy data superimposed over filtered data. Digital filtering improved the S/N ratio from 8.2 to 33, showing that filtering can improve limits of detection.


The points were generated via a simplified Gaussian function, eq 19:

Y_i = h exp{−[(P − i)/w]^2}    (19)
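A sketch of the simulation: generate the peak of eq 19 with the stated parameters (h = 2.0, P = 128, w = 20), add noise, and compute S/N from apex and baseline points as described in the text. The noise level and index choices here are illustrative assumptions, and the conventional definition S/N = (X − B)/S_B is used.

```python
import numpy as np

rng = np.random.default_rng(4)

# Eq 19: Yi = h * exp(-((P - i)/w)**2), i = 1..256
h, P, w = 2.0, 128, 20
i = np.arange(1, 257)
peak = h * np.exp(-(((P - i) / w) ** 2))
noisy = peak + rng.normal(0, 0.05, peak.size)   # additive white noise

# Peak analysis as described in the text
X = noisy[125:129].mean()          # average of four points at the apex
base = noisy[:20]                  # 20 points from the flat baseline
B, SB = base.mean(), base.std(ddof=1)

snr = (X - B) / SB                 # signal-to-noise ratio
print(snr > 3)                     # True: peak is well above the LOD criterion
```

Raising the noise standard deviation and rerunning reproduces the student exercise of pushing the peak down toward the |X − B| ≤ 3SB detection limit.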

where h is the peak height, P is the peak position, w is the peak width, and i takes values from 1 to 256. For peak analysis, three to four points at the peak apex were averaged to obtain the actual peak height (X), and 20 points were selected from a flat region of the peak base from which to calculate the background average (B) and the standard deviation of the background (SB). From these values the signal-to-noise ratio, S/N = (X − B)/SB, was calculated. While the S/N ratio of the raw data was 8.2, it rose to 33 after filtering, a fourfold improvement. Students explore further by increasing the noise level on the Gaussian peak until the limit-of-detection condition, |X − B| ≤ 3SB, is met. In this way the abstract but important instrumental-analysis concepts of limits of detection and signal-to-noise ratio are addressed.

Conclusion

Modern industrial laboratories for chemical analysis are equipped with computerized instruments. The instrument–computer interface facilitates data acquisition, storage, and postprocessing. This module provides a solid introduction to what students are likely to face when they leave the university. Students learn how to process data and transform data sets into simple, interpretable forms, and how to use statistics to make decisions about chemical systems. They are introduced to multivariate calibration techniques, which take them beyond the analysis of a single component in a sample. They learn how to prepare data files for postprocessing, how to evaluate the goodness of a model, and how to improve a calibration matrix through the use of correlation matrices so that predicted results are acceptable. The knowledge acquired becomes useful as the students work on more analytical labs later in the semester. One pleasant outcome of this module is that students are eager to obtain good results.
When they suspect that something went wrong with their calibration solutions, they repeat the experiment without feeling coerced to do so. A few have remarked that this first part of the course is challenging, but later in the semester they come to appreciate the module as they begin to apply what they have learned from it.

Acknowledgments

Participation of the undergraduate students taking the instrumental analysis chemistry classes is greatly appreciated. The Department of Chemistry and Biochemistry's efforts to involve students in directed research are also acknowledged. The authors further thank W. Dunn III for his advice and for making available the PLS and PCR computer programs.

Supplemental Material

The steps in the analysis of variance and a flowchart of the target factor analysis are available in this issue of JCE Online.

Literature Cited

1. Malinowski, E. R. Factor Analysis in Chemistry, 2nd ed.; John Wiley & Sons: New York, 1991.
2. Martens, H.; Naes, T. Multivariate Calibration; John Wiley & Sons: New York, 1998.
3. Kramer, R. Chemometric Techniques for Quantitative Analysis; Marcel Dekker: New York, 1998.
4. Delaney, M. F.; Warren, F. V., Jr. J. Chem. Educ. 1981, 58, 646–651.
5. Howery, D. G.; Hirsch, R. F. J. Chem. Educ. 1983, 60, 656.
6. Harvey, D. T.; Bowman, A. J. Chem. Educ. 1990, 67, 470.
7. Meftah, M. E.; Bigan, M.; Blondeau, D. Chem. Educ. 2003, 8, 318–326.
8. Capello, G.; Goicoechea, H. C.; Miglietta, H. F.; Mantovani, V. E. Chem. Educ. 2003, 8, 371–374.
9. Kalivas, J. Evolution of Chemometrics in the Undergraduate Analytical Chemistry Curriculum; Dunn, W.; Krunic, A. Chemometrics as an Opportunity for Introduction of Some Fundamentals; Morgan, S. L.; Kinton, V. R. Chemometrics and Spectrophotometry in the Instrumental Analysis Laboratory: Multicomponent Mixtures and Principal Component Analysis; Lochmuller, C. H. Integrating Chemometrics Concepts and Experiments at the Advanced Analytical Course Level; Anderson, J. L.; Shira, B. A. Analytical Sampling Statistics: What You See May Not Be What You Want To Get; Msimanga, H. Z.; Tata, S. K. A Modeling Experiment for Instrumental Analysis Lab: A Support Electrolyte for Electrochemical Analysis of Heavy Metals in Food Supplements. Abstracts, 53rd Southeast Regional Meeting of the American Chemical Society, Savannah, GA, September 2001.
10. Charles, M. J.; Martin, N. W.; Msimanga, H. Z. J. Chem. Educ. 1997, 74, 1114–1117.
11. Chan, F. T.; Ching, W. H. J. Chem. Educ. 1995, 72, A84.
12. Vandeginste, B. G. M.; Derks, W.; Kateman, G. Anal. Chim. Acta 1985, 173, 253–264.
13. Gemperline, P. J. J. Chem. Inf. Comput. Sci. 1984, 24, 206–212.
14. Msimanga, H. Z.; Sturrock, P. E. Anal. Chem. 1990, 62, 2134–2140.
15. Msimanga, H. Z.; Sturrock, P. E. Electroanalysis 1992, 4, 507–513.
16. Beebe, K. R.; Kowalski, B. R. Anal. Chem. 1987, 59, 1007A–1017A.
17. Horlick, G. Anal. Chem. 1972, 44, 943.
18. Binkley, D. P.; Dessy, R. E. J. Chem. Educ. 1979, 56, 148.
19. Brigham, E. O. The Fast Fourier Transform; Prentice-Hall: Englewood Cliffs, NJ, 1974.
20. Gonzalez, R. C.; Wintz, P. Digital Image Processing; Addison-Wesley: London, 1977; Chapter 4.
