Hazards of a Naive Approach to Detection Limits with Transient Signals

Sir: Within the last 15 years a number of transient-signal-based analytical methods have become popular. These include chromatography, flow injection analysis, electrochemical pulsed systems, and a number of atomic systems ranging from atomic absorption electrothermal vaporization devices to direct sample insertion methods (1) for atomic emission spectrometry. The concept of detection limits has been thoroughly discussed recently by Long and Winefordner (2). In their discussion they point out that Gaussian statistics for a detection limit test (one tailed) using the IUPAC definition (3, 4) with k = 3 (a signal 3 standard deviations above the blank) would indicate that there is a probability of only 0.00135 that a blank value would exceed this signal level; hence the commonly quoted "99.9% confidence" that a signal at or above this level is the analyte.

We recently reported a sample introduction system (1) which reproducibly provides a peak during a very small time interval. We calculated our detection limits by traditional methods using the noise of the base line and the IUPAC definition with k = 3. Reflection on the topic indicated to us that we had not utilized all the information available and consequently were probably overestimating the true detection limit. Specifically, we had not used to our advantage our knowledge of the time window in which the peak would appear (with a given confidence level). In the chemical literature one can find a great deal of information on peak smoothing, deconvolution, and detection limits; however, we have to date found no work which specifically takes advantage of prior knowledge of the probable peak location. Since transient signals are so prevalent, we have, in this preliminary work, simply tried to point out some of the errors that can be made if naive assumptions about the statistics involved are made. In further papers we hope to provide more comprehensive solutions to the problem.

EXPERIMENTAL SECTION

All computational work was done on IBM-PC compatible computers equipped with 640K bytes of memory and an 8087 coprocessor. The coprocessor was required by the software package. The software package, ASYST, A Scientific System (MacMillan Software Co.), was used for the computations because it provided a powerful library of support functions including graphics, smoothing routines, a Gaussian noise generator, and a fast Fourier transform (FFT). All critical algorithms were checked manually for correct operation.

In the past one might have smoothed data with a traditional low-pass or low-pass/high-pass analog filter combination. One undesirable aspect of these filters is a lack of control over the distortion produced by the filter. With the infusion of microprocessors into instrumentation, it may be advantageous to take the data with a minimum of analog filtering and apply digital smoothing techniques after data acquisition. The original data can then remain intact for historical storage, and the most suitable data processing approach can be selected by either an automatic selection algorithm or the operator.

Our noise was generated by the following approach. Random noise was generated by the ASYST random noise generator and a second function, which together produced white noise with a Gaussian distribution with a mean of 0.00 and a standard deviation of 1.0. Stored arrays of the noise can be filtered by the ASYST data smoothing function.
This function FFTs the data, truncates it at a user-defined frequency, and inverse FFTs the data back into the original domain. To avoid the rippling effect caused by an abrupt truncation of the data at a given FFT frequency, the ASYST smoothing algorithm utilizes a mathematical function which effects a gradual decay of frequency intensities around the cutoff point.
Table I. Fraction of White Noise (Unfiltered) Data Sets in Which at Least One Data Value Exceeds the Detection Limit Threshold^a

no. of points in window    % of windows over threshold
          1                          0.155
          3                          0.510
         11                          1.815
         21                          3.445

^a The standard deviation (0.987) was calculated from 1000 sets of 21 points.
The cutoff frequency was selected to remove only a small portion of an FFTed noise-free Gaussian peak (Figure 1) in order to cause minimal distortion to the original peak shape and height in the original (time) domain. We selected a Gaussian signal peak shape since this is a common signal shape, particularly in chromatography. It is important to note that the exact shape of the peak is unimportant to the general discussion but will affect the exact values produced in specific calculations. A transient signal shape with more abrupt features will naturally produce more information at higher frequencies. This would require either that a higher band-pass be used or that a greater degradation of the peak shape and peak height be accepted. The base of the peak was 21 data points wide in our work. Parts a (unsmoothed noise) and b (smoothed noise) of Figure 2 demonstrate the effect of the filter function in the frequency domain. Figure 3 is a Fourier transform of the Gaussian peak demonstrating that the majority of the information remains below the smoothing function cutoff frequency. On evaluation of 1280 points generated by the filtered and unfiltered methods, it was determined that 27.6% of the data values were 1.10 standard deviations or greater from the average for both methods. This is not significantly different from the 27.7% predicted by standard Gaussian tables. While this is not a stringent test, it is a strong indication of Gaussian character for large sets generated by either method.
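For readers without access to ASYST, the noise generation and smoothing procedure described above can be sketched in a few lines of Python/NumPy. This is a minimal illustration rather than the ASYST implementation: the raised-cosine roll-off and the particular cutoff fraction are our assumptions, since the exact ASYST taper function is not specified here.

```python
import numpy as np

def gaussian_white_noise(n, rng):
    """White noise with mean 0.0 and standard deviation 1.0 (stand-in for the ASYST generator)."""
    return rng.normal(loc=0.0, scale=1.0, size=n)

def fft_smooth(data, cutoff_fraction=0.10, roll_fraction=0.05):
    """FFT the data, attenuate components above a cutoff with a gradual
    (raised-cosine) roll-off to avoid ripple, and inverse FFT back.
    The roll-off shape is an assumption; ASYST's exact taper is not given here."""
    spectrum = np.fft.rfft(data)
    freqs = np.fft.rfftfreq(data.size)            # 0 ... 0.5 cycles per point
    cutoff = cutoff_fraction * 0.5                # cutoff as a fraction of the Nyquist frequency
    width = roll_fraction * 0.5
    gain = np.ones_like(freqs)
    roll = (freqs > cutoff) & (freqs < cutoff + width)
    gain[roll] = 0.5 * (1.0 + np.cos(np.pi * (freqs[roll] - cutoff) / width))
    gain[freqs >= cutoff + width] = 0.0
    return np.fft.irfft(spectrum * gain, n=data.size)

rng = np.random.default_rng(0)
raw = gaussian_white_noise(1024, rng)
smoothed = fft_smooth(raw)
print(raw.std(ddof=1), smoothed.std(ddof=1))      # smoothing reduces the apparent standard deviation
```

With these (assumed) settings the smoothed standard deviation falls to roughly a third of the unsmoothed value, comparable to the reduction from 1.003 to 0.349 reported in the Results and Discussion section.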
RESULTS AND DISCUSSION

As we mentioned in our introduction, we have elected to use as a peak detection limit criterion the conventional IUPAC definition (the concentration at which S/N = 3); however, the situation is somewhat more interesting when dealing with peaks, since there is always some uncertainty as to the precise location of the peak in the time domain. This requires that a search be conducted in a given time window. The width of the window should be determined with a knowledge of the uncertainty of the peak location in the time domain. To make a general example, we have utilized a common method of peak height determination as a basis for our calculations. We have utilized bordering windows 21 points wide on both sides of the expected peak location and immediately adjacent to it. We have calculated the average value of the 21 points in each of these border windows. The two border window averages were averaged to provide an estimate of the blank signal value in the time window. It is interesting to note that this is essentially a "blankless" method of peak height determination and assumes that no regular signal features exist in this portion of the time domain. In Table I we have presented data describing the fraction of times that the detection limit threshold is crossed at least once with windows of 1, 3, 11, and 21 data points with unsmoothed noise and a three standard deviation threshold (S/N = 3). A full set of 1024 blank values was generated as described in the Experimental Section. To eliminate any nonindependence effect that might be present between sets, 2 × 10^4 sets of 63 members [21 (before), 21 (data window), 21 (after)] of unsmoothed white noise were generated for each window width considered. Table I presents the percentage of those sets in which at least one element (point) was found to be greater than 3 standard deviations from the calculated base line. The standard deviation was calculated from 21 contiguous points from 1000 different sets.
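The Monte Carlo test just described can be sketched as follows (again Python/NumPy rather than ASYST). Two details are assumptions on our part: the narrower search windows are taken to be centered within the 21-point data window, and the fixed standard deviation of 0.987 is taken from the Table I footnote. The printed percentages should therefore be close to, but will not exactly reproduce, the Table I entries.

```python
import numpy as np

def fraction_over_threshold(window_width, n_sets=20000, border=21, k=3.0,
                            sigma=0.987, rng=None):
    """Percentage of blank-only sets in which at least one point of the search
    window exceeds the border-window baseline estimate by k standard deviations."""
    rng = rng or np.random.default_rng(1)
    hits = 0
    for _ in range(n_sets):
        block = rng.normal(0.0, 1.0, 3 * border)             # 63 points: before / data window / after
        before, middle, after = block[:border], block[border:2 * border], block[2 * border:]
        baseline = 0.5 * (before.mean() + after.mean())      # "blankless" blank estimate
        start = (border - window_width) // 2                 # centered search window (assumption)
        if np.any(middle[start:start + window_width] > baseline + k * sigma):
            hits += 1
    return 100.0 * hits / n_sets

for width in (1, 3, 11, 21):
    print(width, round(fraction_over_threshold(width), 3))
```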
Figure 1. Gaussian-shaped model signal.

Figure 2. (a) Noise generated by the ASYST program. (b) Noise after applying the smoothing function to the data points illustrated in part a.

Figure 3. A fast Fourier transform of the Gaussian peak (Figure 1). The arrow indicates the point above which the smoothing routine truncates.
Table II. Average Standard Deviation for Different Size Windows Calculated from One Set of 1024 Smoothed Data Points

window size    std dev
    512         0.353
    256         0.351
    128         0.335
     64         0.326
     43         0.302
     32         0.270
     20         0.235
     12         0.211
     10         0.185
      8         0.170
      6         0.133
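The trend in Table II can be illustrated with the short, self-contained sketch below. NumPy again stands in for ASYST, and the abrupt cutoff at 10% of the Nyquist frequency is an assumption chosen only to give a comparable degree of smoothing; the absolute values will therefore differ somewhat from the table.

```python
import numpy as np

rng = np.random.default_rng(2)
raw = rng.normal(0.0, 1.0, 1024)

# Low-pass smooth: zero the FFT components above ~10% of the Nyquist frequency.
# (An abrupt cutoff is a simplification of the gradual roll-off described earlier.)
spectrum = np.fft.rfft(raw)
spectrum[np.fft.rfftfreq(raw.size) > 0.05] = 0.0
smoothed = np.fft.irfft(spectrum, n=raw.size)

# Average standard deviation of contiguous, non-overlapping windows of each size.
for window in (512, 256, 128, 64, 43, 32, 20, 12, 10, 8, 6):
    chunks = smoothed[:raw.size - raw.size % window].reshape(-1, window)
    print(window, round(float(chunks.std(axis=1, ddof=1).mean()), 3))
```

Because only low-frequency components survive the filtering, neighboring points are correlated and the apparent standard deviation shrinks as the window narrows, which is the behavior summarized in Table II and discussed below.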
Table I demonstrates that the 99.9% confidence level is valid only for determinations using a single point. Clearly that confidence level is degraded as the window widens; however, not opening up the window makes one more likely to make another error: stating that the analyte is not present when it is present. Since this data set is ideal white (random) noise, it made essentially no difference whether the average standard deviation was calculated with a relatively wide or relatively narrow window. This is because the points are all independent. In a more realistic situation, the bandwidth will have been reduced by either analog or digital means to minimize the noise throughout. In our case, the smoothing resulted in a reduction of the average standard deviation (calculated from 100 sets) from 1.003 to 0.3491.

With a smoothed set of data one could observe several important phenomena. As Table II demonstrates, the calculated standard deviation decreases as the window size decreases. This is to be expected since only low-frequency components remain after filtering. This means that points that are closely spaced in the time domain are not independent but are now related due to the smoothing process. The obvious byproduct of this is that the 99.9% confidence level for any window is not correctly provided by a simple standard deviation calculation. The same is true for any other confidence level; however, we will limit ourselves to this one.

The Table III data were calculated in the same manner as the Table I data. The standard deviations were calculated in three different ways. In Table III, case 1 we see data calculated by using the standard deviation derived from 1000 sets of 1024 data values. Case 2 presents data for the average standard deviation of 1000 windows of width 21, and case 3 presents data whose standard deviations were calculated using 1000 sets of the 42 points of the two border windows. This analysis of blank noise, with standard deviations calculated in three different ways, demonstrates clearly that the expected confidence level is unlikely to be reached. While these exact numbers will not be applicable to all experiments, the technique of investigating the noise will be useful to many. We have refrained from making more general statements, since the noise distribution will vary drastically for each experiment depending on the band-pass required by the experiment and the type of filtering used. It is important, however, to reiterate that the simplistic assumptions used with static signals are clearly not applicable to transients.
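As a rough illustration of the three cases just described (and tabulated in Table III below), the hedged sketch that follows precomputes the three standard deviation estimates and reruns the window test with each. It uses the same simplified NumPy smoothing as the earlier sketches and is meant only to show how the choice of standard deviation estimate changes the apparent false-positive rate, not to reproduce the table values.

```python
import numpy as np

def smooth(block, cutoff=0.05):
    """Abrupt FFT low-pass, a simplified stand-in for the gradual ASYST roll-off."""
    s = np.fft.rfft(block)
    s[np.fft.rfftfreq(block.size) > cutoff] = 0.0
    return np.fft.irfft(s, n=block.size)

rng = np.random.default_rng(3)
border, k = 21, 3.0

# Case 1: sigma from long (1024-point) smoothed blank records.
sigma_case1 = np.mean([smooth(rng.normal(0, 1, 1024)).std(ddof=1) for _ in range(200)])
# Case 2: average sigma of 21-point windows of smoothed blank noise.
sigma_case2 = np.mean([smooth(rng.normal(0, 1, 3 * border))[border:2 * border].std(ddof=1)
                       for _ in range(1000)])
# Case 3: average sigma of the 42 border points (21 on each side of the analyte window).
sigma_case3 = np.mean([np.std(np.r_[b[:border], b[2 * border:]], ddof=1)
                       for b in (smooth(rng.normal(0, 1, 3 * border)) for _ in range(1000))])

def pct_over_threshold(width, sigma, n_sets=5000):
    """Percentage of smoothed blank sets with at least one search-window point
    above the border-window baseline plus k*sigma."""
    hits = 0
    for _ in range(n_sets):
        block = smooth(rng.normal(0.0, 1.0, 3 * border))
        baseline = 0.5 * (block[:border].mean() + block[2 * border:].mean())
        start = border + (border - width) // 2           # centered search window (assumption)
        if np.any(block[start:start + width] > baseline + k * sigma):
            hits += 1
    return 100.0 * hits / n_sets

for width in (1, 3, 11, 21):
    print(width, [round(pct_over_threshold(width, s), 2)
                  for s in (sigma_case1, sigma_case2, sigma_case3)])
```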
Table III. Fraction of Smoothed (Filtered) Noise Data Sets in Which at Least One Data Value Exceeds the Detection Limit Threshold

no. of points    % of windows over threshold
  in window      case 1     case 2     case 3
       1          0.260      1.535      2.835
       3          0.445      2.420      3.835
      11          1.090      5.665      8.415
      21          1.725      8.945     11.470

Case 1. The standard deviation (0.349) was calculated from 1024 point sets.
Case 2. The standard deviation (0.270) was calculated from 21 point sets.
Case 3. The standard deviation was calculated by averaging the standard deviations of windows of 21 points on both sides of the analyte window.

LITERATURE CITED

(1) Salin, E. D.; Sing, R. L. A. Anal. Chem. 1984, 56, 2596.
(2) Long, G. L.; Winefordner, J. D. Anal. Chem. 1983, 55, 712A.
(3) "Nomenclature, Symbols, Units and Their Usage in Spectrochemical Analysis-II" Spectrochim. Acta, Part B 1978, 33B, 242.
(4) "Guidelines for Data Acquisition and Data Quality Evaluation in Environmental Chemistry" Anal. Chem. 1980, 52, 2242.

T. W. Williams
E. D. Salin*

Department of Chemistry
McGill University
801 Sherbrooke Street West
Montreal, Quebec, Canada H3A 2K6

RECEIVED for review April 10, 1987. Resubmitted October 19, 1987. Accepted December 1, 1987. This work was supported by Ontario Ministry of the Environment Grant 270 and Government of Canada Natural Sciences and Engineering Research Council Grant A1126.
TECHNICAL NOTES
Reusable Thin-Layer Spectroelectrochemical Cell for Nonaqueous Solvent Systems

W. Andrew Nevin and A. B. P. Lever*
Department of Chemistry, York University, 4700 Keele Street, North York, Ontario M3J 1P3, Canada

Thin-layer spectroelectrochemistry has become a widely used technique in many laboratories for the characterization of the redox and spectroscopic properties of electroactive species (1-15). Many different designs of optically transparent thin-layer electrode (OTTLE) cells have been reported, based on a variety of working electrode materials, such as gold minigrid, platinum gauze, vitreous carbon, metal foam, tin oxide, and thin metal or carbon films. Relatively simple cell designs have been used successfully with aqueous solutions; however, difficulties arise with organic solvents, since these attack the adhesives commonly used in cell construction, so that cell lifetime is severely limited. As a result, most cells designed for use with organic solvents have consisted of complex assemblies, with inconvenient construction and in which cleaning or replacement of the working electrode is often difficult, particularly with the fragile gold minigrid (2-8). An improved cell has recently been reported by Lin and Kadish (7) which shows excellent electrochemical behavior; however, it requires a fairly robust platinum gauze working electrode. As yet, no design has been reported for organic solvents which allows convenient assembly, cleaning, and versatility of working electrode material. We report here a reusable OTTLE cell in which a gold minigrid working electrode is sandwiched between Teflon spacers and a Teflon body without the use of epoxies or other adhesive materials. Leakage is not a problem over the normal experimental time periods. The cell is easily dismantled for cleaning or replacement of the gold minigrid and rapidly reassembled. It can be used under degassed conditions with aqueous solution or a variety of common organic electrochemical solvents, such as o-dichlorobenzene, N,N-dimethylformamide, propylene carbonate, and dimethyl sulfoxide.
A useful feature of the cell is its versatility over a wide spectroscopic range by varying the window material, e.g., from the UV-visible-near-IR (quartz, Pyrex) to the infrared regions (NaCl, CaF2, CsI). Little use has been made of OTTLEs in the infrared region, but interest appears to have been growing recently (3-5, 16). The use of the cell in the UV-visible region is illustrated by the oxidation of [2,9,16,23-tetrakis(neopentoxy)phthalocyaninato]zinc (ZnTNPc(-2)) in o-dichlorobenzene and in the infrared region by the reduction of [bis(2,2'-bipyridine)(2,2'-azodipyridine)ruthenium(II)] bis(hexafluorophosphate) (Ru(bpy)2(Azdpy)(PF6)2) in deuteriated dimethyl sulfoxide.
EXPERIMENTAL SECTION

Cell Construction. The design of the thin-layer cell is shown in Figure 1. The working chamber is formed by sandwiching two Teflon spacers between two windows and contains a semitransparent gold minigrid (500 wires/in., 60% transmittance, Buckbee Mears Co., St. Paul, MN) as the working electrode, platinum foil as the counter, and silver foil (Aldrich, 0.025 mm thick) or AgCl-coated silver foil as the reference electrode. The latter was prepared electrochemically by passing an anodic current of ca. 15 µA/cm2 for 30 min through the silver foil immersed in 0.1 M HCl (17, 18). The pure silver foil reference will drift by up to 100 mV, which can be an inconvenience. The AgCl-coated foil is much more stable and is preferred, especially if the solution can tolerate chloride ion as the supporting electrolyte anion. The counter and reference electrodes are separated from each other and from the working electrode by two rolls of Teflon tape. The assembly is held between two Teflon holders which are tightened together to give a pressure seal between the Teflon spacers and windows. The dimensions of the working chamber are defined by the size and thickness of the Teflon spacers. In this study the cell thickness was 0.45 mm, with a chamber volume