
Anal. Chem. 1983, 55, 2324-2327

Determining the Parameters of First-Order Decay with a Nonzero End Point and Unequal Time Intervals

M. Keith Christensen

Department of Botany, University of California, Davis, California 95616

When a process decays to some nonzero end point, one cannot solve it by the common first-order analysis without first subtracting the end point. Several techniques are available for analyzing such data when one has data points paired in equal intervals, but only iterative computer search methods are successful with nonequal intervals. This paper presents a straightforward analytical method for determining all of the parameters of first-order decay (the end point, the initial deviation from that end point, and the rate constant) when the intervals are not equal. When intervals vary a great deal, an iterative correction analysis is invoked that is a function of the width of each interval and an approximation for the decay constant.

During first-order decay, the rate of decay is proportional to the amount of decay remaining to some end point. In many cases, such as radioactive decay with negligible background, this end point is zero and the decay constant can be found by taking the logarithm. If the end point is not zero, one must subtract it from all of the measurements before the logarithm can be used to solve for the decay constant. In 1926 Guggenheim (1, 2) described the difficulties encountered when the end point is not known and proposed a method of solving for the decay constant using measurements paired in equal time intervals, invoking what is often termed the retarded or delayed function. Mangelsdorf (3, 4) proposed a method in 1959 that is quite similar in utility and approach, but that has the effect of dividing the retarded function where Guggenheim subtracts (5, 6). Schwartz (7) recently presented a generalized approach that embodies both of these methods along with other methods in the literature that are modifications of the same fundamental idea.

Other methods which require equal time intervals have been used to predict the parameters of exponential decay. Wagner (8) adapted a model-independent prediction method based on "accelerated convergence" from Amidon (9) for use in pharmacokinetics to solve for the kinetic parameters of drug concentrations in urine or blood. Prony's method (10, 11) also requires equal intervals and will solve a series of added exponentials. Any mistake in the time of measurement while running an experiment can cause significant inaccuracies in methods which require equal time intervals (12), a problem which is aggravated by small sample numbers. Methods that do not require equal time intervals are generally computer analyses that optimize the values of the given model parameters by iterative search methods (13).

The present paper describes Guggenheim's method and presents a new one that accomplishes the same purpose with the same advantages but does not require measurements at equal time intervals. A method is also introduced for correcting the systematic error which arises from relating a discrete interval slope to a single time point. The analysis presented uses noiseless simulated data points to demonstrate the accuracy of the method and the efficiency of the iterative correction method.

THEORETICAL SECTION

The Guggenheim Method. The monoexponential function describes the first-order relationship between rate and the distance to equilibrium, where k is the rate constant, D is the initial displacement, and E is the end point (see Figure 1).

y = De^(kt) + E    (1)

Guggenheim suggested taking measurements in a series of pairs t_i and t_j so that each pair is separated by the same interval, Δt. An equation is defined for each, y_i is subtracted from y_j, and the end point is eliminated.

t_j = t_i + Δt    (2)

y_i = De^(kt_i) + E    (3)

y_j = De^(k[t_i + Δt]) + E    (4)

Δy = De^(kt_i)[e^(kΔt) - 1]    (5)

The natural logarithm is used to solve for k

ln Δy = ln D + kt_i + ln [e^(kΔt) - 1]    (6)

If Δt is constant, then the line ln [y_j - y_i] plotted against t_i will have a slope equal to the decay constant.

The log Rate Method. Due to the unique properties of the exponential function, the decay constant can be found with a method that is more general and more direct than that of Guggenheim. Once the system has returned to its final state there will be no further change in y. Even though there is no end point for the measurement y, there is an end point for the rate dy/dt, because the rate will equal zero at equilibrium. If the process is first order, the decay of the rate will exhibit the same decay constant as the decay of the measurement y. By taking the derivative of eq 1, we obtain

dy/dt = kDe^(kt)    (7)

ln dy/dt = ln kD + kt    (8)

In contrast to a plot of the logarithm of the measurement, the logarithm of rate vs. time (Figure 2) will yield a line with slope k and an intercept of ln kD, and the intercept can be used to solve for the initial displacement. Usually ln rate can be plotted against the midpoint of each interval, but if the intervals vary a great deal or increase regularly from one side of the time scale to the other, the midpoint may not be adequate (see "Iterative log Rate Method"). It might seem advantageous to use this derived k and reanalyze the data set with the relationship in eq 5, where any two data points and the k would give an estimate for D. In my experience such a technique did not improve the estimate for D over the y-intercept method described above. To obtain the end point one should return to the data set with the derived k and D and, using a rearrangement of eq 1, find solutions for E.
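As a concrete illustration, the whole log rate pipeline can be sketched in a few lines of code. The example below is not from the original paper; it simulates noiseless data from eq 1 (using the illustrative values k = -0.1, D = -10.0, E = 10.0 that appear later in the paper), regresses ln rate against each interval midpoint, and then recovers E by averaging the rearrangement of eq 1.

```python
import math

def linfit(xs, ys):
    """Ordinary least-squares slope and intercept."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (v - my) for x, v in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    slope = sxy / sxx
    return slope, my - slope * mx

# Noiseless data from eq 1 at unequal time intervals
k_true, D_true, E_true = -0.1, -10.0, 10.0
t = [0.0, 1.0, 3.0, 6.0, 10.0, 15.0]
y = [D_true * math.exp(k_true * ti) + E_true for ti in t]

# Sequential point pairs: finite-difference rate over each interval,
# plotted against the interval midpoint
mid = [(t[i] + t[i + 1]) / 2 for i in range(len(t) - 1)]
ln_rate = [math.log((y[i + 1] - y[i]) / (t[i + 1] - t[i]))
           for i in range(len(t) - 1)]

k, b = linfit(mid, ln_rate)   # slope = k, intercept = ln(kD)
D = math.exp(b) / k           # here kD > 0, so no sign juggling needed
# Rearranged eq 1, averaged over all points, gives the end point
E = sum(yi - D * math.exp(k * ti) for ti, yi in zip(t, y)) / len(t)
```

With these intervals the single-step estimates land close to the true values (k within about 0.001, D and E within about 0.1); the small residual bias is the midpoint error that the iterative γ correction described below is designed to remove.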

0003-2700/83/0355-2324$01.50/0 © 1983 American Chemical Society

ANALYTICAL CHEMISTRY, VOL. 55, NO. 14, DECEMBER 1983


Figure 2. Natural log vs. time: the difference between the misapplied log measurement plot and the log rate plot, using values from Figure 1.

Iterative log Rate Method. When the Guggenheim method or the log rate method simplified by constant intervals is used, either term can be plotted as the time coordinate. The Guggenheim method stipulates t_i, but t_j or the midpoint between the two could be used, and the slope of the line with respect to the y axis will not be altered. This is not so if the intervals vary. The midpoint of the interval between two points on an exponential decay curve is near, but not the same as, the time point where the tangent to the curve is equal to the slope of the line through the two points. The difference between the true time point and the midpoint is a function of the decay constant and the size of the interval, is independent of time, and is independent of the measurements. The solution for this true time point will be defined in terms of γ, the fractional distance across the interval where the tangent would be equal to the slope of a line through the two points that delimit the interval. Given an interval between two time points i and j

γ = (t_γ - t_i)/(t_j - t_i)    (9)

Solve for γ by finding the time point where the tangent at the point would equal the slope between t_i and t_j. From eq 8 one can solve for t_γ

t_γ = [ln (dy/dt) - ln kD]/k    (10)

To describe the change in y for the interval Δt, the slope can be expressed as

(y_j - y_i)/Δt = De^(kt_i)[e^(kΔt) - 1]/Δt    (11)

Substituting the slope expressed in eq 11 into eq 10, one obtains

t_γ = t_i + ln [(e^(kΔt) - 1)/(kΔt)]/k    (12)

Substituting eq 12 into eq 9 yields

γ = ln [(e^(kΔt) - 1)/(kΔt)]/(kΔt)    (13)

Figure 1. Monoexponential decay with nonzero end point. With generalized axes, the parameters of first-order decay are k = -1.0, D = 7.5, and E = 1.0.

Figure 3. Magnitude of the iterative γ function: the dependency of γ on the ratio defined by the time interval divided by the relaxation time (k⁻¹).

The changes in γ as a function of interval size can have a significant influence on the observed decay constant when the intervals tend to be wider on one side than the other, especially
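Equation 13 is simple to evaluate numerically. The sketch below is not from the paper; the guard for small kΔt, where the text warns that rounding error becomes a problem, uses the series expansion γ ≈ 1/2 + kΔt/24, and `math.expm1` keeps e^(kΔt) - 1 accurate when the exponent is small.

```python
import math

def gamma(k, dt):
    """Fractional position across an interval (eq 13) at which the
    tangent slope equals the secant slope, for rate constant k and
    interval width dt."""
    x = k * dt
    if abs(x) < 1e-8:
        # series expansion near x = 0 avoids catastrophic rounding
        return 0.5 + x / 24.0
    return math.log(math.expm1(x) / x) / x
```

As Figure 3 describes, γ tends to 0.5 (the midpoint) as kΔt tends to 0, approaches 1.0 for large positive kΔt, and approaches 0.0 for large negative kΔt.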

Table I. A Simple Three-Point log Rate Analysis Using the Skip Point Pair Pattern

t1   t2   Δt   y1     y2     Δy     ln rate   t0.5
0     6    6   0.00   4.51   4.51   -0.285    3.0
1    10    9   0.95   6.32   5.37   -0.516    5.5
3    15   12   2.59   7.77   5.18   -0.840    9.0

k = -0.0925    D = -10.73    r² = 1.000

when they become greater than the relaxation time (the inverse of k). Figure 3 illustrates the relationship, showing how γ would approach 0.5 or 1.0 as the ratio of time interval to relaxation time becomes extreme. If the decay constant is negative, this relationship is equivalent, but it extends below the 0.5 value and would approach 0.0 as the ratio became very large. This ratio is initially acted on as an exponent which subsequently is decreased by one, so as the ratio approaches zero the potential for error due to rounding can become a problem. To use the γ iteration method, one can first plot log rate vs. the midpoint to find an initial estimate for k, and using γ in subsequent iterations one can improve that k estimate by plotting the same log rate values against a γ-corrected time point t_γ.

t_γ = t_i + γΔt    (14)

ln |rate| = ln |kD| + kt_γ    (15)
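A minimal sketch of the whole iteration follows. It is not the author's calculator procedure; it assumes the fan point pair pattern of Table II (t1 = y1 = 0 paired with each later sample) and the model values k = -0.1, D = -10.0 used in the paper's simulations.

```python
import math

def linfit(xs, ys):
    """Ordinary least-squares slope and intercept."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (v - my) for x, v in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    slope = sxy / sxx
    return slope, my - slope * mx

k_true, D_true = -0.1, -10.0
t2 = [1.0, 3.0, 6.0, 10.0, 15.0]
y2 = [D_true * (math.exp(k_true * t) - 1.0) for t in t2]  # y1 = 0 at t1 = 0

# Fan pattern: every pair spans (0, t2), so rate = y2 / t2
ln_rate = [math.log(v / t) for v, t in zip(y2, t2)]

t_gamma = [t / 2.0 for t in t2]      # iteration 0 uses the midpoints
ks = []                              # history of k estimates
for _ in range(20):
    k, b = linfit(t_gamma, ln_rate)  # eq 15 regression
    ks.append(k)
    # gamma-corrected time points (eqs 13 and 14, with t_i = 0)
    t_gamma = [math.log(math.expm1(k * t) / (k * t)) / k for t in t2]

D = math.exp(b) / k                  # intercept = ln(kD); kD > 0 here
```

The first pass reproduces the midpoint estimate of Table II (k near -0.0869), and because the data are noiseless the true k = -0.1 is a fixed point of the iteration, so the loop converges to it to within floating-point precision.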

METHOD

The following analysis will demonstrate the patterns of inaccuracy with log measurement techniques, the accuracy of the log rate method, and the efficiency of the iterative correction method. This is done by generating noiseless data at each time point with known values for each parameter of the model (eq 1) and then observing the ability of the method to return those values. For the single-step log rate analysis, a hand calculator (HP-15C) was used and the numbers were rounded off as illustrated (see Table I). For the later analyses the intermediate values were not rounded; all 10 digits of the calculator were used for the log rate and iterative γ values to retain accuracy. The nearness of each estimate to the original value of the parameter is evaluated by expressing it in significant figures by the relationship in eq 16.

significant figures = -log₁₀ |(observed - expected)/expected|    (16)

Simulation of noisy data was accomplished with the random number generator on the HP-15C. The y value from the model was multiplied by (1 + ("random number" - 0.5)/10), yielding a variation in y of ±5%.

RESULTS AND DISCUSSION

Simulated log Rate Method. The log rate method yields an estimate for the parameters that is independent of the end


Table II. Iterative log Rate Analysis Using the Fan Point Pair Pattern

t2     y2       log rate   tγ0       tγ1       tγ2       tγ3
1.0    0.9516   -0.0496    0.5       0.4964    0.4959    0.4958
3.0    2.5918   -0.1463    1.5       1.4674    1.4632    1.4626
6.0    4.5119   -0.2850    3.0       2.8700    2.8532    2.8509
10.0   6.3212   -0.4587    5.0       4.6403    4.5944    4.5879
15.0   7.7687   -0.6579    7.5       6.6969    6.5962    6.5819
k                          -0.0869   -0.0981   -0.0997   -0.1000
D                          -11.3360  -10.1696  -10.0241  -10.0035
r²                          0.9986    1.0000    1.0000    1.0000

Table III. Iterative log Rate Analysis with Noisy Data (±5% in y2)^a

t2     y2       log rate   tγ0       tγ1       tγ2       tγ3
1.0    0.9131   -0.0909    0.500     0.4965    0.4961    0.4960
3.0    2.5267   -0.1717    1.500     1.4686    1.4649    1.4644
6.0    4.7367   -0.2364    3.000     2.8747    2.8598    2.8579
10.0   6.5072   -0.4297    5.000     4.6533    4.6124    4.6073
15.0   7.5897   -0.6813    7.500     6.7254    6.6356    6.6243
k                          -0.0837   -0.0937   -0.0950   -0.0951
D                          -11.61    -10.48    -10.36    -10.34
r²                          0.9843    0.9766    0.9755    0.9753

^a t1 and y1 are 0.00 in every case.

point. This is most profoundly illustrated in the case of drug infusion, where the end point is a distinct positive value. This type of data is also a good choice for a simple illustration of the method because it often exhibits a great deal of variation in the time intervals and only a limited number of points are available. When a drug is infused at a constant rate, the plasma drug concentration will rise to some end point which can be anticipated by the log rate method. The decay constant would have first been predicted from an experiment that measures the rate of elimination of the drug, to allow one to predict the rate of infusion needed to acquire the desired concentration of drug within the tissue (14). During the rise of drug levels in the plasma, one can analyze samples and, by the log rate method (see Figure 4), find an observed infusion rate constant and plateau concentration to compare to those previously predicted.

The choice of point pair patterns can influence the quality of the log rate estimate. Intervals can be chosen from a data set in several patterns, and significant error is introduced in those patterns where the intervals vary a great deal or vary regularly from one side to the other. Given ten measurements ordered from first to last (1 to 10), they could be paired by skipping intermediate values, (1, 6), (2, 7), (3, 8), etc.; they could be paired in sequence, (1, 2), (2, 3), (3, 4), etc.; or, if one is certain of an initial value, one could pair time zero with each measurement in a fanlike array, (0, 1), (0, 2), (0, 3), etc. Although the sequence pattern gives the greatest accuracy, it would also be most sensitive to measurement error.

Table I illustrates the log rate analysis using the skip point pair pattern. The data were generated by a model where k, D, and E were -0.1, -10.0, and 10.0. The values found by regression of the last two columns in the table were -0.093, -10.7, and 10.6. The error in these first single-step estimates is aggravated in this case by the increase in the size of the intervals from one side of the time axis to the other. In cases where the variation in the width of the intervals is more random across the data set, this error is not as pronounced.

Analysis of the Correction Technique. Table II illustrates the improvement that can be made in the estimates for the decay parameters by the iterative correction technique. In this case the fan point pair pattern was chosen to cause even greater error than that seen in Table I. The initial model is the same, and the solved parameters improve with iterations.
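The single-step skip-pattern analysis of Table I, together with the accuracy metric of eq 16, can be reproduced in a few lines. This is a sketch rather than the author's calculator procedure; unrounded model values are used, so the estimates differ from Table I in the last digit.

```python
import math

def linfit(xs, ys):
    """Ordinary least-squares slope and intercept."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (v - my) for x, v in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    slope = sxy / sxx
    return slope, my - slope * mx

def sig_figs(observed, expected):
    """Eq 16: minus log10 of the relative error."""
    return -math.log10(abs((observed - expected) / expected))

k_true, D_true, E_true = -0.1, -10.0, 10.0
y = lambda t: D_true * math.exp(k_true * t) + E_true

pairs = [(0, 6), (1, 10), (3, 15)]   # skip point pair pattern
mid = [(a + b) / 2 for a, b in pairs]
ln_rate = [math.log((y(b) - y(a)) / (b - a)) for a, b in pairs]

k, b0 = linfit(mid, ln_rate)   # single-step estimates
D = math.exp(b0) / k
```

The recovered k is near -0.0926 and D near -10.72, consistent with the -0.093 and -10.7 quoted in the text: roughly one significant figure of accuracy before the iterative correction is applied.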

Figure 4. Rise of drug concentration in plasma. During constant intravenous infusion, plasma drug levels rise to a plateau simulated by the model: E = -D = 10.0 μg mL⁻¹ and k = -0.1 h⁻¹.

The efficiency of the iterative technique at improving the accuracy of the estimates is a function of the point pair choice. As shown in Figure 5, the accuracy of each parameter improves by the same amount with each iteration, an amount equivalent to that parameter's original accuracy. The data used in this case were such that the accuracies of k and D were equivalent, and each point effectively represents the accuracy of both parameters. The accuracy of D will vary if the set of time points used is moved to one side or the other on the time axis, while the accuracy of k will remain the same. The accuracies of the estimates for both k and D improve parallel to each other in all cases tested.

Simulation with Noisy Data. With the same point pair arrangement as in Table II, data simulating ±5% variation in y showed a similar convergence of the estimates toward the original value of each parameter (see Table III). The same noise-simulated data were analyzed by the log measurement method, with the natural log of the distance to the end point regressed against each time value (zero was not included). These latter estimates were k = -0.0965, D = 9.7500, and r² = 0.9924.

SUMMARY

The log rate method frees the experimenter from being forced to structure his observations into pairs with equal intervals. Though the method remains equivalent to that of Guggenheim when the time intervals between points are equal, the log rate analysis can successfully evaluate any set of data


Figure 5. Accuracy of several pair choices for plasma data: significant figures (see text, eq 16) vs. the number of iterations for three possible patterns of pairing the data points.

that finds one variable approaching a monoexponential asymptote with respect to another. The decay constant is first found with a linear regression of log rate vs. the midpoint of each interval. This decay constant is then used along with the size of each interval to find a more correct time point against which log rate is again regressed, whereupon a better decay constant can be found. The displacement is then solved from the intercept of the log rate plot. To obtain the end point one returns to the data set with the derived k and D, using a rearrangement of eq 1 and averaging the solutions found for E. If there is any error in k and D, the points closest to the end point seem to give the best estimate.

The correction technique shows a regular linear relationship between the number of significant figures obtained and the number of iterations done. This shows that the iterative process is regular and predictable, but it does not include the effect that error in the measurement would have on the number of significant figures. The effect of such error will require statistical analysis of noisy data, or its simulation, in the various contexts where the technique may eventually be applied. The contexts will differ in the number of available data points, their clustered vs. sequential behavior, and the total magnitude of change across which the measurements are taken.

There are advantages that this technique readily affords. At times one cannot obtain equal time intervals, especially where one dependent variable approaches an exponential asymptote with respect to another dependent variable. This technique makes it possible to process such data in a straightforward analytical manner without requiring computer search methods. In cases where computer search is used, this log rate technique could enhance the quality of an initial guess and decrease the time used by the computer.
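The observation that points closest to the end point give the best E estimates when k and D carry error can be checked with a small sketch. The perturbed estimates k_hat and D_hat below are illustrative assumptions, not values from the paper.

```python
import math

# True model parameters (the paper's simulation values)
k_true, D_true, E_true = -0.1, -10.0, 10.0
y = lambda t: D_true * math.exp(k_true * t) + E_true

# Deliberately imperfect estimates, standing in for fit error
k_hat, D_hat = -0.095, -10.4

# Rearranged eq 1: E = y - D e^(kt), evaluated early and late.
# Late points sit closer to the end point, where the decaying
# D e^(kt) term, and therefore the propagated error, is smaller.
E_early = y(0.0) - D_hat * math.exp(k_hat * 0.0)
E_late = y(15.0) - D_hat * math.exp(k_hat * 15.0)
```

With these assumed errors the late-time solution for E lands measurably closer to the true end point than the early-time one, in line with the summary's remark.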

LITERATURE CITED

(1) Guggenheim, E. A. Philos. Mag. 1926, 2, 538-540.
(2) Cline Love, L. J.; Skrilec, M. Anal. Chem. 1981, 53, 2103-2106.
(3) Mangelsdorf, P. C. J. Appl. Phys. 1959, 30, 442-443.
(4) Livesey, D. L. Am. J. Phys. 1979, 47, 558-559.
(5) Schwartz, L. M.; Gelb, R. I. Anal. Chem. 1978, 50, 1592-1594.
(6) Margerison, D. In "Comprehensive Chemical Kinetics"; Bamford, C. H., Tipper, C. F. H., Eds.; Elsevier: London, 1968; pp 388-391.
(7) Schwartz, L. M. Anal. Chem. 1981, 53, 206-213.
(8) Wagner, J. G.; Ayres, J. W. J. Pharmacokinet. Biopharm. 1977, 5, 533-557.
(9) Amidon, G. L.; Paul, M. J.; Welling, P. G. Math. Biosci. 1975, 25, 259-272.
(10) de Prony, R. J. École Polytech. 1795, 1, 24-75.
(11) Donnelly, J. D. P. In "Methods of Numerical Approximation"; Handscomb, D. C., Ed.; Pergamon Press: Oxford, 1966; pp 135-136.
(12) Newburger, J.; Stavchansky, S.; Pearlman, R. S. J. Pharm. Sci. 1980, 69, 1231-1232.
(13) Sorenson, H. W. "Parameter Estimation: Principles and Problems"; Marcel Dekker: New York, 1980; pp 245-277.
(14) Baggot, J. D. J. Vet. Pharmacol. Ther. 1978, 1, 5-18.

RECEIVED for review January 25, 1983. Accepted August 22, 1983.

Humic Acid as a Preservative for Trace Mercury(II) Solutions Stored in Polyolefin Containers

R. W. Heiden*
RCA Corp., Lancaster, Pennsylvania 17604

D. A. Aikens
Department of Chemistry, Rensselaer Polytechnic Institute, Troy, New York 12181

Humic acid prevents significant loss of trace Hg(II) from aqueous solutions stored in commercial polyolefin containers, thus permitting their use for storing natural water samples prior to mercury determination. At a level of 50 mg/L, humic acid reduces losses from 1 ng/mL Hg(II) solutions over 15 days to less than 10%. Humic acid suppresses mercury loss better than do preservatives based on HNO3 and oxidants, and it is simple to use. It is insensitive to pH between pH 3 and pH 9 and to levels of Ca, Mg, and Fe(III) above those found in most natural waters, and it is equally effective in commercial containers fabricated from a number of polyolefins.

Loss of mercury during storage and shipment constitutes a source of potentially serious error in the determination of traces of mercury in natural water samples. Addition of

sufficient nitric acid at the time of collection to reduce the pH to less than 2 is the recommended method for preventing loss of trace mercury from water samples prior to analysis (1-3), but this preservation method suffers a severe limitation. When such samples are stored in polyolefin containers, a substantial fraction of the mercury is lost (4-7), thus nullifying the advantages of polyolefin bottles in terms of cost, weight, and durability. Clearly, a method to prevent the loss of mercury from natural water samples stored in polyolefin bottles would be of significant value. This paper demonstrates that humic acid achieves this goal. Humic acid is inexpensive, readily available, and suitable for routine use; it eliminates the use of corrosive chemicals; it has a low blank value; and it is substantially more effective than nitric acid. Humic acid is a complex natural substance whose detailed structure remains controversial, but which is thought to consist of an aromatic core bearing OH and COOH groups and aliphatic side chains with various functional groups (8-12).

0003-2700/83/0355-2327$01.50/0 © 1983 American Chemical Society