Anal. Chem. 1983, 55, 653-656
The approximations for large argument are given by eqs 44-47, where the right-hand sides represent the results obtained by integration of the continuous lines. The full width at half maximum is fwhm = 2(ln 2)^(1/2)p in the Gaussian case and fwhm = 2p for Lorentzians (eqs 48, 49).
RECEIVED for review August 19, 1982. Accepted December 6, 1982.
Phase-Plane and Guggenheim Methods for Treatment of Kinetic Data

J. Roger Bacon, Chemistry Department, Western Carolina University, Cullowhee, North Carolina 28723

J. N. Demas*, Chemistry Department, University of Virginia, Charlottesville, Virginia 22901
A common kinetic problem is the analysis of data which are exponential decays on constant, but unknown, base lines. Most commonly, data treatment is by the Guggenheim method. A new phase-plane (PP) method is developed. The two methods are compared experimentally and by digital simulations. The PP method is insensitive to the nature of the noise and, unlike the Guggenheim method, does not require operator adjustments to fitting parameters. For good data measured under optimum conditions, the PP and Guggenheim methods are comparable in accuracy and precision. The PP method is more accurate than, and not susceptible to the computational failures of, the Guggenheim method where the data are noisy, have a large base line contribution, or were taken over other than the optimum fitting regions. Therefore, the PP method is ideal for use in automated instruments or with samples where multiple runs cannot be made to optimize measurement parameters.
Evaluation of kinetic parameters from experimental data is a pervasive and important problem in mechanistic chemical studies (1, 2), luminescence spectrometry (3), and a variety of kinetic chemical analyses (4). The most common systems involve first- or pseudo-first-order kinetics which yield exponential decays. When the base line is stable and noise free, the standard and most accurate approach to evaluating the decay parameters is to subtract the base line and fit the resultant curve by a linear least-squares fit to the semilogarithmic plot of the data vs. time. Frequently, however, a good stable noise-free base line is unavailable. This may be due to irreproducible solvent blanks in luminescence, instrument base line drift (3), or too short a data acquisition time to permit the signal to decay to the base level (1, 2). Further, if a noisy base line is available (e.g., stopped flow), attempts to subtract the base line can yield increased noise levels on the decays and, in extreme cases, negative base line corrected signals. Under these conditions the signal is given by
D(t) = K exp(−t/τ) + B   (1)
where D(t) is the decay vs. time t, K is the proportionality constant, and B is the base line. The solution of eq 1 where B = 0 can be by the Guggenheim method (1-3, 5), Mangelsdorf method (5, 6), or nonlinear least squares (5, 7). By far the most popular approach is the Guggenheim method. In the Guggenheim method one linearizes the function by first taking the differences of D(t) and D(t + Δt):

Y = ln [D(t) − D(t + Δt)] = A₀ + A₁t   (2a, 2b)

A₀ = ln (K[1 − exp(−Δt/τ)])   (2c)

A₁ = −1/τ   (2d)
A plot of Y versus t is linear; τ and K can then be derived from the slope and intercept, respectively. The selection of Δt can be a problem. Δt should, in general, be at least 2-3τ (2). If the data acquisition period is shorter than 2-3τ, smaller Δt values, of course, must be used. The Guggenheim method is popular because it eliminates the need for acquiring a base line, gives accurate measurements, and is simple. At least for phosphorescence decay measurements it is more precise than Mangelsdorf's method (3). It is much simpler to program and faster computationally than nonlinear least squares. However, the Guggenheim method has several disadvantages. The fitting region and Δt must be properly selected for accurate results; improper selection can yield negative [D(t) − D(t + Δt)] values which are computationally unacceptable in eq 2b. Depending on the fitting range and Δt selected, only part of the data may be used in the fit (5). The Guggenheim method does not directly fit D(t), and reconstruction of D(t) using the best fit parameters and eq 1 is complicated since B is not directly available. We wish to report a new method for treating kinetic data of the form of eq 1. The method is based on the phase-plane (PP) method which was originally developed for evaluation of simple exponentials (B = 0) (8, 9). It was subsequently extended to the treatment of luminescence lifetime measurements involving the deconvolution of exponential decays from data taken with a finite flash (10, 11) and with scattered light (12, 13). It also has been extended to decays involving Förster kinetics (14). We will show that the modified PP method is simple and permits direct calculation of K, B, and τ. Further, the PP method is more forgiving in the fitting ranges and the noise levels than the Guggenheim method.
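The Guggenheim procedure just described can be sketched in a few lines. This is a minimal Python sketch with our own function and variable names (not the authors' code); `dt_shift` is the Guggenheim shift Δt:

```python
import numpy as np

def guggenheim_fit(t, d, dt_shift):
    """Estimate tau and K from D(t) = K*exp(-t/tau) + B without knowing B.

    t, d     -- evenly spaced times and signal values
    dt_shift -- the Guggenheim shift (ideally 2-3 tau), in the units of t
    """
    step = t[1] - t[0]
    k = int(round(dt_shift / step))      # shift expressed in samples
    diff = d[:-k] - d[k:]                # D(t) - D(t + dt): base line cancels
    mask = diff > 0                      # negative differences are unusable
    y = np.log(diff[mask])
    # Linear fit Y = A0 + A1*t with A1 = -1/tau (eq 2)
    a1, a0 = np.polyfit(t[:-k][mask], y, 1)
    tau = -1.0 / a1
    big_k = np.exp(a0) / (1.0 - np.exp(-dt_shift / tau))
    return tau, big_k
```

Note how the `mask` step mirrors the failure mode discussed above: noisy or poorly chosen fitting regions produce negative differences that must be discarded before taking the logarithm.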
Additionally, the PP method requires no operator selection of optimum fitting parameters (i.e., Δt and fitting region), which makes it particularly amenable to automated analytical instruments. Unlike nonlinear least squares, no initial guesses of the parameters are required.

THEORY

The PP solution to eq 1 is developed by integrating eq 1 over the limits 0 to t to yield
D(t) = A₀ + A₁X(t) + A₂t   (3a)

X(t) = ∫₀ᵗ D(t) dt   (3b)

A₀ = B + K   (3c)

A₁ = −1/τ   (3d)

A₂ = B/τ   (3e)
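The step from eq 1 to eq 3 is a one-line integration; our own restatement of it (not spelled out in the original) is:

```latex
\int_0^t D(t')\,dt' = \int_0^t \left[K e^{-t'/\tau} + B\right] dt'
                    = K\tau\left(1 - e^{-t/\tau}\right) + Bt .
% Using K e^{-t/\tau} = D(t) - B, i.e. K(1 - e^{-t/\tau}) = K + B - D(t):
X(t) = \tau\left[K + B - D(t)\right] + Bt
\;\Longrightarrow\;
D(t) = (B + K) - \frac{1}{\tau}\,X(t) + \frac{B}{\tau}\,t ,
```

which reproduces A₀ = B + K, A₁ = −1/τ, and A₂ = B/τ.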
D(t) is now a linear function of the variables X(t) and t. X(t) is readily derived from D(t) by numerical integration. The desired system parameters B, K, and τ are then readily computed from the coefficients of the linear equation. One can obtain A₀, A₁, and A₂ by standard linear least-squares fitting. In an unweighted least-squares sense one strives to minimize

Σ[D(tᵢ) − (A₀ + A₁Xᵢ + A₂tᵢ)]²   (4)

where Xᵢ is X(t) evaluated at tᵢ. The summation is over all data points used in the fit. The resultant normal equations for this minimization are
( n      ΣXᵢ     Σtᵢ   ) (A₀)   ( ΣD(tᵢ)   )
( ΣXᵢ    ΣXᵢ²    ΣXᵢtᵢ ) (A₁) = ( ΣD(tᵢ)Xᵢ )   (5)
( Σtᵢ    ΣXᵢtᵢ   Σtᵢ²  ) (A₂)   ( ΣD(tᵢ)tᵢ )
Solution of these equations yields the best A₀, A₁, and A₂ which, from eq 3c, d, and e, yield K, B, and τ. By use of the least-squares A₀, A₁, and A₂, D(t) can be simply and rapidly calculated from eq 3 for comparison with the original decay. For real data X(t) is derived by numerical integration. We use the trapezoidal rule, which yields
Xᵢ = ∫₀^tᵢ D(t) dt = (Δt/2)Rᵢ   (6a)

Rᵢ = D(tᵢ) + D(tᵢ₋₁) + Rᵢ₋₁,  i ≥ 1   (6b)

Δt = tᵢ₊₁ − tᵢ   (6c)

R₀ = 0   (6d)
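Combining eqs 3-6, the whole phase-plane fit reduces to one running sum and one 3 × 3 linear solve. The following minimal Python sketch (our own names, not the authors' program) forms X by the trapezoidal rule and solves the normal equations of eq 5:

```python
import numpy as np

def phase_plane_fit(t, d):
    """Fit D(t) = K*exp(-t/tau) + B by the phase-plane linearization.

    Returns (K, B, tau). Assumes evenly spaced t. A sketch of the
    scheme in eqs 3-6, with our own function and variable names.
    """
    dt = t[1] - t[0]
    # Trapezoidal running integral: X_i = (dt/2)*R_i with
    # R_i = D(t_i) + D(t_{i-1}) + R_{i-1}, R_0 = 0  (eq 6)
    r = np.concatenate(([0.0], np.cumsum(d[1:] + d[:-1])))
    x = 0.5 * dt * r
    # Design matrix for D = A0 + A1*X + A2*t; solve the normal
    # equations M^T M a = M^T d  (eq 5)
    m = np.column_stack([np.ones_like(t), x, t])
    a0, a1, a2 = np.linalg.solve(m.T @ m, m.T @ d)
    tau = -1.0 / a1        # eq 3d
    b = a2 * tau           # eq 3e
    k = a0 - b             # eq 3c
    return k, b, tau
```

Unlike the Guggenheim sketch, no shift parameter or fitting-region choice is needed, which is the practical point the text makes about automation.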
Note that the Δt used here is the time between points and bears no relationship to the Δt used in the Guggenheim method. For most experimental data with a reasonably small Δt relative to τ, any errors associated with the trapezoidal rule integration will be insignificant.

EXPERIMENTAL SECTION

Materials. The K₃Co(CN)₆ from Alpha Chemical Co. was recrystallized once from water. Its absorption and emission spectra matched the literature results (15).

Lifetime Measurements. Luminescence decay time measurements were made with a Tektronix 7912 ultrahigh speed transient digitizer interfaced to an Altair 8800B microcomputer. The system is described elsewhere (16).

Simulations. Because noise on experimental data can possess widely varying properties, we deemed it necessary to ensure that the results of our simulations were not sensitive to the detailed noise characteristics. We have used two different noise types, amplitude dependent Poisson noise and amplitude independent noise. These mimic experimentally observed noise. In both cases the noise distribution was assumed to be normal or Gaussian. Synthetic noisy data were generated by

D_P(t) = D(t) + N D(t)^(1/2)   (7a)

D_C(t) = D(t) + N D(0)^(1/2)   (7b)
where D(t) is given by eq 1 and N is a random Gaussian number with a mean of zero and standard deviation of unity. N was derived from our computer's uniformly distributed random number generator using the algorithm outlined by Knuth (17). D_P(t) is D(t) with superimposed Poisson noise which has a standard deviation equal to the square root of D(t). For large values (>25) D_P(t) is the same as single photon counting statistics which arise in single photon counting nanosecond fluorimeters or shot noise in photomultiplier signals. It also is similar to the noise encountered in a variety of instruments, such as spectrofluorimeters, where the magnitude of the noise increases with signal strength, but the fractional component of the noise decreases. D_C(t) exhibits constant amplitude noise independent of D(t). The signal from a stopped flow or flash photolysis instrument generally exhibits similar behavior. In order to simulate varying noise levels, we varied (K + B), which is D(0) in eq 7b. For (K + B) = 10⁴, the data are similar to good analog or single photon counting data with a peak noise level of 1% of full scale. A (K + B) of 10³ corresponds to reasonably noisy data, and (K + B) = 10² yields very noisy experimental data. The typical quality of data exhibiting Poisson noise (peak values of 100 and 10⁴) is shown in ref 11. For the digital simulation, ten simulated D_P(t)'s or D_C(t)'s were generated for every K, B, and τ with 201 evenly spaced points in each decay. Each decay was reduced by the Guggenheim and by the PP methods. For each set of calculations, we evaluated the relative error (RE) and relative standard deviation (RSD) in τ:

RE = [(τ̄ − τ)/τ] × 100%   (8a)

RSD = (σ/τ̄) × 100%   (8b)
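This simulation protocol (eq 7a noise, ten runs per condition, RE and RSD per eq 8) can be sketched as follows. The function name, the fixed seed, and the use of a least-squares phase-plane reduction are our own assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)  # fixed seed: our choice, for repeatability

def simulate_re_rsd(k_true, b_true, tau_true, n_runs=10, n_pts=201, span=5.0):
    """Monte Carlo estimate of RE and RSD (eq 8) under Poisson-like
    noise (eq 7a), reducing each decay by a phase-plane style fit."""
    t = np.linspace(0.0, span * tau_true, n_pts)
    clean = k_true * np.exp(-t / tau_true) + b_true
    taus = []
    for _ in range(n_runs):
        # eq 7a: D_P(t) = D(t) + N * sqrt(D(t))
        noisy = clean + rng.standard_normal(n_pts) * np.sqrt(clean)
        dt = t[1] - t[0]
        x = 0.5 * dt * np.concatenate(([0.0], np.cumsum(noisy[1:] + noisy[:-1])))
        m = np.column_stack([np.ones_like(t), x, t])
        (a0, a1, a2), *_ = np.linalg.lstsq(m, noisy, rcond=None)
        taus.append(-1.0 / a1)   # eq 3d
    taus = np.asarray(taus)
    re = (taus.mean() - tau_true) / tau_true * 100.0   # eq 8a
    rsd = taus.std(ddof=1) / taus.mean() * 100.0       # eq 8b
    return re, rsd
```

With (K + B) = 10⁴ (e.g., K = 8000, B = 2000) this reproduces the "good data" regime described above, where both RE and RSD stay small.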
Figure 2. A typical experimental decay curve for solid K₃Co(CN)₆ at 77 K. The residual plot is for the fit calculated by the phase-plane method using the data from points 50-511. (Abscissa: channel number; 512 channels.)
Figure 1. Effect of fitting region on the accuracy and precision of the Guggenheim (upper) and phase-plane (lower) methods. The asterisks indicate the RE in τ for ten runs and the error bars are ±1 RSD. In all cases the noise was Poisson with K = 8000 and B = 2000. Note that the fitting scales differ for the two plots. τ̄ and σ are the average τ and standard deviation in τ calculated for the set of simulations.
RESULTS AND DISCUSSION

Effect of Noise Types. All of the simulations reported below are for Poisson noise. For constant noise, the results were comparable. All the RSD's were larger, however, due to the greater overall noise levels. With constant noise the Guggenheim method failed more often because of its sensitivity to the noise level.

Choice of Δt in the Guggenheim Method. The optimum choice of Δt is not clearly defined in the literature. Laidler (2) recommends Δt > 2-3τ. We opted to resolve the question by simulations. We selected a transient spanning 5τ's with K = 8000 and B = 2000. This corresponds to reasonable quality data with the transient decaying to within 1% of the terminal base line. Using Poisson noise we then evaluated RE and RSD for different Δt values by using the Guggenheim method. The maximum possible data range was used in the fits. The RSD's exhibited a broad minimum for 2.5τ < Δt < 3.7τ. For larger or smaller Δt's, RSD increased rapidly. For all further Guggenheim simulations we used Δt corresponding to 65% of the signal measurement period.

Effect of Varying the Fitting Region. Figure 1 shows the effect of varying the range over which the data were acquired in units of τ. The RE in τ is plotted along with error bars corresponding to ±1 RSD. In all cases (K + B) = 10⁴. For both the Guggenheim and the PP methods, the RSD's increase rapidly if too small or too large a fitting range is used. This occurs either because there is insignificant decay in D(t) or τ is so short that there are few points on the decaying portion of the transient. Note the differences in fitting scales for the Guggenheim and the PP methods. The Guggenheim method failed for sample periods >10τ […]. For good data, the PP and Guggenheim methods are essentially equivalent. For noisy data, however, the PP method yields significantly better REs and RSDs than the Guggenheim method.
Indeed, the Guggenheim method failed for (K + B) ≤ 200 due to the appearance of negative D(t) − D(t + Δt) terms. The PP method was unconditionally stable and yielded usable results even with (K + B) = 100.

Effect of Varying the Base Line. To test the effect of base line variation, we carried out extensive simulations with (K + B) = 10⁴ and base line contributions ranging from 0 to 95%. Again the fits were made by using a 5τ range. For small base line contributions (