
Anal. Chem. 1989, 6 1 , 30-33

An Error Analysis of the Rapid Lifetime Determination Method for the Evaluation of Single Exponential Decays

Richard M. Ballew and J. N. Demas*

Chemistry Department, University of Virginia, Charlottesville, Virginia 22901

The precision and speed of Ashworth's rapid lifetime determination method (RLD) for a single exponential decay are evaluated. The RLD is compared to the weighted linear least-squares (WLLS) method. Results are presented as a function of integration range and signal noise level. For both the lifetime and the preexponential factor, optimum fitting regions exist, yet the errors increase rather slowly on either side of the optimum. The optimum conditions for determination of the preexponential factor and the lifetime are similar, so both can be determined with good precision even at low total counts (10^4). In the optimum region, the relative standard deviations for the RLD are only 30-40% worse than for WLLS, but the calculations are tens to hundreds of times faster, depending on how the data are taken. The speed and precision of the RLD coupled with the ease of data acquisition make it an attractive data reduction tool for real time analyses.

The single exponential decay of the form I(t) = A exp(-t/τ) describes a broad range of physical processes. Extracting parameters from such data is important in many areas of physical science. In particular, luminescence decays provide the basis for a number of analytical methods (1, 2) and contain molecular and environmental information (3). Analytical information includes the concentration of quenching species (1) and temperature (2). The preexponential factor (A) can provide a measure of analyte concentration with very good detection limits even in the presence of quenching and scattering or fluorescence errors. When the sample is excited with a short-duration light pulse, the preexponential factor is directly related to the concentration of the luminescent species and is independent of quenching errors (4, 5).

The standard method for evaluating A and τ for exponential decays is to fit ln(I(t)) versus t by linear least squares. This method applies to a decay with a zero base line or where the known or measured base line is subtracted. The rapid lifetime determination method (RLD) of Ashworth, however, provides a significantly faster calculation, allowing for real time analyses (6), with little loss of precision under optimum conditions. Actually, the RLD is a family of methods that can be applied to single or multiple exponential decays with or without an unknown base line (6, 7). The RLD can substantially reduce the computational time. An analog version of the RLD for single exponential evaluation without a base line was implemented in a luminescence decay time thermometer (2). Use of a digital RLD would provide improvements in precision for this instrument.

For the RLD with zero base line, the decay curve is divided into two contiguous areas (D0 and D1) of width Δt (Figure 1). From the areas of each interval, D0 and D1, the lifetime and preexponential factor may be extracted easily by using eq 1:

    τ = -Δt / ln(D1/D0)                (1a)
    A = D0 / [τ(1 - D1/D0)]            (1b)

The RLD does not calculate A and τ by fitting a large number of data points, as does the linear least-squares method, but calculates the parameters directly from D0, D1, and Δt. The RLD algorithm is simple, compact, and easy to implement; its speed is realized through its mathematical simplicity. For data that are recorded at even, discrete intervals (d1, ..., dN) rather than integrated directly, the necessary integrals may be approximated by

    D0 = Σ (j = 1 to N/2) d_j          (2a)
    D1 = Σ (j = N/2+1 to N) d_j        (2b)
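Under a zero base line, the whole evaluation thus reduces to a pair of window sums and two closed-form expressions (eq 1 and 2). A minimal Python sketch of the method follows; this is our reconstruction for illustration, not the authors' TURBO Pascal program, and the function name is ours:

```python
import math

def rld(d, dt_total):
    """Rapid lifetime determination for a single exponential decay with
    zero base line.

    d        -- counts sampled at even intervals over the whole decay record
    dt_total -- time spanned by the record, i.e. 2 * Delta_t
    Returns (A, tau) computed from the two half-record sums D0 and D1.
    """
    n = len(d)
    d0 = sum(d[: n // 2])                # eq 2a: first contiguous window
    d1 = sum(d[n // 2 :])                # eq 2b: second contiguous window
    delta_t = dt_total / 2.0
    tau = -delta_t / math.log(d1 / d0)   # eq 1a
    a = d0 / (tau * (1.0 - d1 / d0))     # eq 1b (equivalent closed form)
    return a, tau
```

For a noiseless, evenly sampled decay the two window sums satisfy D1/D0 = exp(-Δt/τ) exactly, so τ is recovered to machine precision; A is recovered to within the quadrature error of approximating the integrals by sums.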

In quantitative analyses, it is important to have an estimate of the measurement precision in the presence of noise. Originally, the RLD's precision was evaluated for a limited range of experimental conditions (7). We wanted to evaluate the performance over a wide range of experimental conditions in order to assess the optimum conditions and the theoretical limitations. In particular, we examined the effect of shot noise associated with photon counting. We evaluated the precision of determinations of A and τ as functions of the time width of integration relative to the lifetime and the total counts of the decay. We used the single exponential RLD without a base line and assumed that the only source of noise was Poisson statistical noise in the photon counts, which is reasonable for many instruments. We also compared the RLD with weighted linear least-squares (WLLS) fitting of the ln(I(t)) versus t plots under the same conditions.

EXPERIMENTAL SECTION

Our calculations confirm that both the WLLS and RLD methods give the correct parameters for a decay, typically within 1 standard deviation of the mean. Thus, the primary question is the precision of the two methods. We evaluated the precision of the RLD by two independent methods: error propagation and numerical (Monte Carlo) simulations. We assumed a zero base line and Poisson noise. These conditions are reasonable for an instrument with a pulsed laser excitation source and a cooled photomultiplier tube (PMT). The relative standard deviation in each parameter was determined as a function of the relative integration window (Δt/τ) and the logarithm of the total number of detected photon counts under the entire decay curve (Aτ). The integration window was varied from Δt/τ = 0.2 to Δt/τ = 5, corresponding to a total integration time of 0.4-10 lifetimes. The quantity Aτ was varied from 10^4 to 10^8.

Error Propagation. Since there are closed-form solutions for A and τ, it is a simple matter to use standard error propagation techniques for evaluating the uncertainties in these quantities in terms


[Figure 1. Graphical representation of the RLD method: the decay curve is divided into two contiguous integration windows of width Δt with areas D0 and D1, and the lifetime is obtained as τ = -Δt/ln(D1/D0).]

[Figure 2. Contour map, generated by propagation of errors, of the relative standard deviation in the lifetime (σ_τ/τ) determined by RLD. Each contour is separated by 0.002.]

of the uncertainties in D0 and D1 (8). Carrying out the necessary evaluation yields

    Y = D1/D0                                              (3)

    σ_τ/τ = [(σ_D0/D0)² + (σ_D1/D1)²]^(1/2) / |ln Y|       (4)

    σ_A² = (∂A/∂D0)² σ_D0² + (∂A/∂D1)² σ_D1²               (5)

where σ_D0 and σ_D1 are the standard deviations in D0 and D1, respectively. Values for D0 and D1 were generated by varying Δt/τ and log(Aτ), with σ_D0 and σ_D1 equal to D0^(1/2) and D1^(1/2), respectively, for Poisson noise. The appropriate values of D0 and D1 were entered into eq 3-5 to generate the error surfaces.

Monte Carlo Methods. Monte Carlo methods (9) were also used to generate error surfaces for both A and τ for the RLD method. D0 and D1 were generated, and for each simulated experiment random noise was added. Poisson noise was approximated from a Gaussian noise generator with a mean of zero and a standard deviation of 1 (10). This noise, weighted by the square root of the counts in each window, was added to the data. One hundred separate "experiments" were performed, from which the relative standard deviation was calculated for each point in a 21 × 25 point grid.

Weighted Linear Least Squares (WLLS). For comparison we generated error surfaces for the lifetime and preexponential factor by the weighted linear least-squares method (11) with Poisson weighting factors. The total decay curve over 2Δt was recorded in 512 channels. Channels with fewer than 20 counts were not used in the fitting, to eliminate the systematic errors that arise because the nonsymmetric Poisson distribution is approximated by a symmetric weighted Gaussian function.

To verify our expressions, we spot-checked several values of A, τ, and Δt with 1000 Monte Carlo simulations. Because of the greater number of experiments, the precision was better and we could more accurately check eq 4 and 5. The Monte Carlo results matched the error propagation equations in all cases. The error surfaces for the WLLS method were similarly verified.

Calculations. Calculations were performed on an AT&T PC6300 equipped with an 8087 math coprocessor. All programs were coded in TURBO Pascal 4.0.
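The WLLS comparison method can be sketched in the same spirit. The following is a hedged Python reconstruction (not the original Pascal program): a straight-line fit of ln(d) versus t with each channel weighted by its counts, since for Poisson noise the variance of ln d is approximately 1/d, and with low-count channels excluded as described in the text:

```python
import math

def wlls_lifetime(d, times, min_counts=20):
    """Weighted linear least-squares fit of ln(d) versus t for a single
    exponential decay with zero base line.

    Poisson weighting: var(ln d) ~ 1/d, so each channel gets weight w = d.
    Channels with fewer than min_counts counts are dropped, as in the text.
    Returns (A, tau) from the fitted intercept and slope.
    """
    pts = [(t, c) for t, c in zip(times, d) if c >= min_counts]
    sw = sum(c for _, c in pts)
    swx = sum(c * t for t, c in pts)
    swy = sum(c * math.log(c) for t, c in pts)
    swxx = sum(c * t * t for t, c in pts)
    swxy = sum(c * t * math.log(c) for t, c in pts)
    denom = sw * swxx - swx * swx
    slope = (sw * swxy - swx * swy) / denom          # slope = -1/tau
    intercept = (swxx * swy - swx * swxy) / denom    # intercept = ln(A)
    return math.exp(intercept), -1.0 / slope
```

On noiseless data the fit recovers A and τ essentially exactly, which is a convenient sanity check before adding simulated Poisson noise.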
To compare the relative speeds of the RLD and WLLS, programs performing multiple lifetime and preexponential factor determinations were run on the AT&T PC6300 with an 8087 math coprocessor and on a Tandon PCA with an 80286 processor running at 8 MHz. The RLD was timed both for calculating A and τ directly from D0 and D1 and for summing D0 and D1 from a 512-point decay.
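The Monte Carlo check of the error propagation expressions can be sketched as follows. This is our illustrative Python reconstruction, not the authors' program; it follows the text's recipe of adding Gaussian noise scaled by the square root of the counts in each window, and the parameter values below are example choices:

```python
import math
import random

def predicted_rsd_tau(d0, d1):
    """Relative standard deviation of tau from error propagation (eq 4),
    assuming Poisson noise so that sigma_Di = sqrt(Di)."""
    return math.sqrt(1.0 / d0 + 1.0 / d1) / abs(math.log(d1 / d0))

def monte_carlo_rsd_tau(a, tau, delta_t, n_expt=100, seed=1):
    """Monte Carlo estimate of the same quantity: perturb the ideal window
    integrals with Gaussian noise of standard deviation sqrt(counts)."""
    rng = random.Random(seed)
    f = math.exp(-delta_t / tau)
    d0 = a * tau * (1.0 - f)        # ideal counts in the first window
    d1 = d0 * f                     # ideal counts in the second window
    taus = []
    for _ in range(n_expt):
        n0 = d0 + rng.gauss(0.0, 1.0) * math.sqrt(d0)
        n1 = d1 + rng.gauss(0.0, 1.0) * math.sqrt(d1)
        taus.append(-delta_t / math.log(n1 / n0))   # eq 1a per "experiment"
    mean = sum(taus) / len(taus)
    var = sum((t - mean) ** 2 for t in taus) / (len(taus) - 1)
    return math.sqrt(var) / mean
```

With Aτ = 10^5 total counts and Δt/τ = 2.5, both routes give σ_τ/τ of roughly 0.5%, consistent with the figure quoted in the Results section.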

[Figure 3. Contour map of the relative standard deviation in the lifetime (σ_τ/τ) determined by WLLS. Each contour is separated by 0.002.]

[Figure 4. Overlay of contour maps of the relative standard deviation in A (σ_A/A) by RLD, calculated by error propagation (smooth lines) and by Monte Carlo simulation (jagged lines). Each contour is separated by 0.002.]

RESULTS AND DISCUSSION

The primary benefit of the RLD is speed. When the integrated areas, D0 and D1, are available, the RLD is 350-830 times faster than the WLLS method, depending upon the computer and whether a math coprocessor is present. Even when the data are divided among 512 channels and must be summed by eq 2, the RLD is 10-120 times faster than WLLS. However, as speed without precision is unacceptable, we show that the precision of the RLD is quite good under appropriate conditions. With the 8-MHz AT using an 80287 and double-precision reals, a lifetime calculation took 1.0 ms with eq 1 and 630 ms with WLLS. By going to assembly language, one can reduce


[Figure 5. Ratio of the relative standard deviations of the RLD and WLLS for the lifetime (σ_τ/τ). Each contour is separated by 0.2. The minimum contour is 1.0, below which the RLD performs better than WLLS.]

[Figure 6. Ratio of the relative standard deviations of the RLD and WLLS for the preexponential factor (σ_A/A). Each contour is separated by 0.1. The minimum contour is 1.0, below which the RLD performs better than WLLS.]

this time to less than 200 µs. This speed is adequate to keep up with most pulsed lasers or, by doing the integrations in hardware, with much higher frequency modulated CW sources.

Figure 2 is the error surface for the relative error in τ evaluated by the RLD using the error propagation expression (eq 4). Errors are relative (σ_τ/τ). Figure 3 is an analogous error surface for τ evaluated by WLLS. Figure 4 is an error surface for A evaluated by the RLD using the error propagation expression (eq 5). For comparison the Monte Carlo results (the jagged lines) are also shown. Within experimental error, the error propagation and Monte Carlo methods are equivalent. Similar results were obtained for the Monte Carlo calculation with τ. Figure 5 is the ratio of the error surface for τ calculated by the RLD to that calculated by WLLS (i.e., a pointwise ratio of the data of Figure 2 to that of Figure 3). Figure 6 is the analogous ratio of the error surfaces for A.

The RLD error surface for τ reveals that there is an optimum Δt relative to the lifetime. This optimum region occurs at Δt/τ = 2.5, or where the total integration time, for both D0 and D1, is 5 lifetimes. The error surface is rather flat around the optimum, and the quality of the fit is, thus, not very sensitive to the choice of Δt/τ. Precision does decrease rapidly for Δt/τ < 1. There, the integration windows are small, yielding small D0 and D1 values. Since the noise is proportional to the square root of the counts in an integral, the relative error increases rapidly. In this region, the WLLS also suffers from significant errors, comparable to those of the RLD (Figure 3). At the other extreme, where Δt/τ becomes large, while D0 contains a large number of counts and contributes little to

the error, few photons are counted in the second window, and the error contribution from D1 causes the observed loss of precision. Explicit calculations have verified this.

The RLD error surface for A (Figure 4) shows that the calculation of A is more forgiving with respect to the fitting region (Δt/τ) than is the τ determination. The precision changes very little for Δt/τ over a range from 1 to 4. Only when Δt < τ does the error increase significantly. The precision of the WLLS lifetime calculation improves as the fit is made over more lifetimes (Figure 3).

The plots of the ratio of the errors for the RLD and the WLLS (Figure 5) show that near the optimum region for the RLD, the RLD calculation is 30-40% worse than the WLLS, while in some regions the RLD has equivalent or significantly better precision. For many experiments this small loss of precision does not outweigh the benefits of the RLD. For example, in an experiment with a total of 10^5 counts, the relative standard deviation is 0.5% for τ and 0.6% for A. For A the loss of precision is even less (Figure 6): in the optimum region for the RLD, only 10-20% precision is lost by the RLD relative to WLLS.

At low light intensities the RLD will perform better than the WLLS. The RLD is superior to WLLS in the lower left-hand area bounded by the last contour line in Figures 5 and 6. This superiority arises when the number of counts per channel falls below 20, so that the symmetric Gaussian weighting used to approximate the Poisson distribution breaks down for the WLLS. Even with the observed loss of precision in the RLD relative to the WLLS, the RLD's speed may allow one to perform many more RLD than WLLS experiments in the same time. Since the S/N ratio improves with the square root of the number of experiments, we may compensate for a 30-40% loss of precision by performing twice as many experiments with the RLD, if desired, or by increasing the number of detected photons.
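The compensation argument above is just the 1/√n scaling of the standard deviation of a mean, and can be checked numerically. The numbers below are illustrative values taken from the example in the text:

```python
import math

def rsd_of_mean(single_run_rsd, n_runs):
    """Relative standard deviation of the average of n_runs independent,
    identically distributed determinations: the 1/sqrt(n) improvement."""
    return single_run_rsd / math.sqrt(n_runs)

# Illustrative: a single RLD run 40% worse than a single WLLS run.
wlls_rsd = 0.005                # e.g. 0.5% in tau at 1e5 total counts
rld_rsd = 1.4 * wlls_rsd        # 40% single-run precision penalty

# Averaging two RLD runs restores better-than-WLLS precision,
# since 1.4 / sqrt(2) is slightly less than 1.
two_run_rsd = rsd_of_mean(rld_rsd, 2)
```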

CONCLUSIONS

We have evaluated the precision of Ashworth's rapid lifetime determination method for a single exponential decay as a function of integration time relative to the lifetime and the total photon counts. For both the lifetime and the preexponential factor, optimum regions with respect to the integration time exist, yet the error does not increase rapidly for small changes on either side of the optimum value. The optimum conditions for determination of A and τ are similar, so both can be determined with good precision even at low total counts (10^4).

The RLD was comparable to the weighted linear least-squares method in terms of precision and accuracy and was found to be 10-800 times faster. While the precision is less in some regions, the relative loss of precision is not excessive. We stress that the instrumentation necessary for implementing the RLD can be quite simple, even for rather short decay times. Switched analog integrators could be used as was done earlier (2). Alternatively, a gated pair of counters could be connected to a photomultiplier or an avalanche photodiode used in the single-photon counting mode. This latter implementation is particularly attractive since the data acquisition matches the statistical calculations presented here. Further, because the standard deviations of A and τ can be calculated in real time, it is easy to determine when the desired precision has been achieved so that the experiment can be terminated. Thus, in summary, the speed and simple instrumentation of the RLD should make it an excellent data acquisition and reduction tool for real time analyses.

The primary disadvantage of the RLD of eq 1 is that it gives no warning of more complex decays. It is, thus, only appropriate for well-characterized systems such as arise in standardized analytical methods. Even if the decays are not


purely exponential, but are reproducible, the RLD would provide a simple quantitative measure of the system. For example, it would be ideal for monitoring oxygen concentrations (1). Also, extensions of the RLD (6) do provide a warning of more complex kinetics and allow evaluation of decay parameters.

LITERATURE CITED

(1) Bacon, J. R.; Demas, J. N. Anal. Chem. 1987, 59, 2780.
(2) Sholes, R. R.; Small, J. G. Rev. Sci. Instrum. 1980, 51, 882.
(3) Lakowicz, Joseph R. Principles of Fluorescence Spectroscopy; Plenum Press: New York, 1983.
(4) Bushaw, B. A. Analytical Spectroscopy; Lyon, W. S., Ed.; Elsevier: Amsterdam, 1984; pp 57-62.


(5) Demas, J. N.; Jones, W. M.; Keller, R. A. Anal. Chem. 1986, 58, 1717.
(6) Ashworth, H. A.; Sternberg, O.; Dabiran, A., Seton Hall University, South Orange, NJ, unpublished work, 1983.
(7) Woods, R. J.; Scypinski, S.; Cline Love, L. J.; Ashworth, H. A. Anal. Chem. 1984, 56, 1395.
(8) Bevington, P. R. Data Reduction and Error Analysis for the Physical Sciences; McGraw-Hill: New York, 1969.
(9) Meyer, W. J. Concepts of Mathematical Modeling; McGraw-Hill: New York, 1984.
(10) Demas, J. N. Excited State Lifetime Measurements; Academic Press: New York, 1983.

RECEIVED for review July 21, 1988. Accepted October 3, 1988. We gratefully acknowledge support by the National Science Foundation (Grant CHE 86-00012).
