REPORT

THE NEW GENERATION of Measurement

To take full advantage of modern, solid-state, microcomputer-driven instrumentation that is equipped with sophisticated software, one should critically examine factors that can contribute to system instability, bias, and mistaken identification. Measurements that demand unusual system stability can benefit, as can interlaboratory analytical studies. Greater uses of simulations and studies directed toward estimating relative selectivities for techniques and procedures are recommended.

0003-2700/90/0362-703A/$02.50/0 © 1990 American Chemical Society

Lockhart B. Rogers Department of Chemistry University of Georgia Athens, GA 30602

Introduction of the microcomputer has triggered what can be called the first wave of a revolution in laboratory measurements on chemical systems. In my case, the basic concepts were planted in 1968 at Lawrence Livermore National Laboratory when Jack W. Frazer proudly showed me a Digital Equipment Corp. PDP-8 minicomputer system, with its 4K of 12-bit-word memory and a Teletypewriter, capable of printing approximately 10 characters per second and providing input/output of programs and data by means of punched paper tape. Moreover, he had purchased the system for the unbelievably low price of $10,000. He had also added another 4K of memory (the maximum possible), a digital-to-analog converter (DAC), and an analog-to-digital converter (ADC) so as to be able to do instrument control and data acquisition as well as data processing. Furthermore, he had mounted both the computer system and the teletype on separate low, rolling platforms so that the system could be moved easily from one laboratory to another. He explained that he wanted to be able to take data from his instruments at

short, reproducible time intervals and then examine his calculated results "at once." That would allow him, if he desired, to change the conditions he had planned for his next experiment.

The second wave of the revolution was the substitution of solid-state microelectronics for the discrete, handwired (at first) circuits. Not only did the bulkiness of the computer shrink but also the cost and the time for a computer cycle. (Another major step toward further shrinkage is already on the horizon (1).) As a byproduct of that first major reduction in size and uniformity of the circuit components, the control of some temperature effects, a topic to be discussed later, became easier.

The third, most recent, wave has been the introduction of many complex mathematical calculations "at the bench." For example, rapid acquisition of multiple-wavelength spectral data for gasoline permits calculations, in less than 20 s, of the concentrations of three classes of compounds plus five physical properties that otherwise would have required five separate procedures (2). More complex "chemometric" operations, such as multidimensional classifications of data, require longer times; but they, too, can now be handled by new laboratory microcomputers and workstations rather than by centralized mainframes or large minicomputers.

This REPORT is based on the award address given by L. B. Rogers when he received the 1989 Pittsburgh Analytical Chemistry Award at the 40th Pittsburgh Conference and Exposition on Analytical Chemistry and Applied Spectroscopy in Atlanta, GA, March 1989.

ANALYTICAL CHEMISTRY, VOL. 62, NO. 13, JULY 1, 1990 • 703 A

Goals

The combination of those advances and the awesome capabilities of some multiprocessor instrument systems has lulled many users into assuming that such a system automatically "takes care of everything." Therefore, one goal of this REPORT is to alert the user to the desirability of critically examining seemingly unimportant sources of uncertainty and/or bias, so as to take full advantage of the inherent capabilities of those systems. Furthermore, the same factors usually produce larger and more readily detected effects on interlaboratory data. For that reason, a brief introductory section will outline some important characteristics of such studies.

A second goal is to remind the reader that chemistry and physics can still intrude on system performance. Especially in trace-level determinations, where the coefficient of variation for replicates obtained for a concentration near 1 ppb can be ±50% or more (3, 4), the possibility of interfering signals also increases (5-8) and can, at worst, result in careful quantitative measurements being made on incorrectly identified signals.

Finally, two other goals are discussed briefly. The first is addressed to software problems. The use of simulated data to test the proprietary programs of instrument manufacturers, or alternatively the use of a validated program to analyze the raw data so that its results can be compared with those from the proprietary program, is very worthwhile. The second concerns the desirability of developing ways to estimate relative selectivities of different techniques. Resolution is, of course, only one factor to be incorporated in such a calculation. Such information might be useful in selecting a technique—or a combination of two or more techniques—for doing a particular analysis.

I regret that some of the references are not to published work but to informal conversations with co-workers and other friends. In addition, some observations apply to outdated equipment, but they have been included because they illustrate a useful approach to a given type of error. Specialists in areas other than those for which specific examples have been given should have little difficulty in identifying appropriate analogies.

Interlaboratory measurements

Two different approaches are used in interlaboratory studies. In one approach, each laboratory analyzes the reference samples using the method in which it has the greatest confidence. In the second approach, used by the Association of Official Analytical Chemists (AOAC) and by the American Society for Testing and Materials (ASTM), a single method is agreed upon in advance for use by all of the laboratories (9). The discussion that follows is largely devoted to the latter.

A variety of factors that affect uncertainty and bias in interlaboratory studies have been reported (5-8, 10). Those that will not be discussed further are: the difficulty in obtaining a representative sample (11-14), sample storage (15, 16), and recognition by the analysts of samples used to test their proficiencies even though the concentrations are unknown to the analysts (17). Another factor that deserves some discussion, although it is not directly relevant to the main thrust of this article, is the existence of a broad "region of uncertain reaction" near the detection limit (18).

Rather than reciting the hypothetical set of results that has been reported for intralaboratory results (19), this phenomenon is better illustrated by real data from an interlaboratory study undertaken by the U.S. Environmental Protection Agency (EPA) and the Dow Chemical Company (now Dow USA) in 1979 (20a). The EPA prepared a dilute solution of a very complex mixture of volatile organic compounds and then sent a portion to Dow Chemical Co. Each laboratory prepared for analysis a half-dozen dilutions that covered a range of between 1.5 and 2.0 orders of magnitude. Considering first the case for the most dilute samples in each set, 656 species were not detected by either laboratory whereas 132 were detected by both (a coincidental agreement). More interesting is the fact that there were some species detected by one laboratory but not the other: 132 for the EPA and 55 for Dow Chemical Co. As each laboratory analyzed the next more concentrated sample, the totals for the detected species increased in a rather smooth curve (Figure 1). However, the phenomenon of one laboratory finding a species that the other did not was reported over more than an order of magnitude. The ultimate test of the phenomenon was also observed: the failure of a given laboratory to detect at a higher concentration a compound that it had successfully detected at a lower concentration (20b).

Figure 1. Total number of compounds verified by a laboratory, including those it detected that the other laboratory did not. (Curves for EPA and Dow; x-axis: concentration, 0-800 ppb; y-axis: percentage of compounds verified, 0-100%.)

Figure 2. Change with time of coefficients of variation for determination of pesticide residues in check samples of fat and blood by EPA contractors. Results are from 10-22 laboratories for 3-9 compounds. Solid circles, fat; open circles, blood.

The discussion will now focus upon groups of experimental factors that combine to produce two characteristics that are evident in interlaboratory studies. Horwitz (21) reported the behavior with time of the average annual coefficient of variation (CV) for data taken in an AOAC-type study. Figure 2

shows that, after starting out at a value 4-5 times larger than the CV obtained from within-lab data, the average value for the CV fell smoothly for a period of approximately five years, before leveling off at roughly 1.5-2 times the size of the average within-lab value. Clearly, the initial decrease can be rationalized as an increase in the proficiencies of the experienced chemists with the method. This has been confirmed indirectly by Bulkin (22), who reported that the introduction of robots in his laboratories immediately produced data having less scatter (and better accuracy) for the proficiency standards. One would not, however, expect the use of robots per se to influence the second characteristic: the height of the plateau that was finally reached by the CV for the interlaboratory data. There is good reason to believe that the factors that cause that second difference are the same ones that are highly significant in day-to-day variations in intralaboratory measurements (23) as well as those made on a given day at high sensitivities: in trace analyses, in cases where signal averaging for long times is involved, and in differential measurements of two large signals.

In the discussion that follows, the experimental variables have been grouped under the headings of environment, hardware, and software. The discussion also addresses the question of interference and the related topic of selectivity.

Environmental factors

The term environment is used very broadly in this discussion. First, there is the laboratory environment with its airborne contaminants as well as its changes in temperature, pressure, and humidity. Then, there is the more specific sample environment that includes the container, reaction vessels, reagents, and any in situ sensors, stirrers, or other equipment to which the sample is exposed.

Chemical contamination/loss. This first example was brought to my attention by C. J. Rodden (24), who worked at the U.S. National Bureau of Standards (now the National Institute of Standards and Technology) before and during World War II. The results obtained in his laboratory for low part-per-million levels of cadmium in uranium were higher and much more variable than those reported by a half-dozen other laboratories that were analyzing portions of the same samples. Data from Rodden's laboratory fell into line only after someone thought to remove the cadmium-plated ironware from the laboratory.

A variation of that theme was reported by Nordberg (25), who was attempting to follow depletion with time of drugs in hospital patients. His data were highly erratic until he isolated, in two separate rooms, equipment such as syringes, standard solutions, and the washing operations for (a) the concentrated solutions that were to be injected and (b) the very dilute samples of drugs in body fluids that were withdrawn from patients for analyses.

Another example involves the work of Powell and Kingston described by Currie (26) on determinations of blanks for low part-per-billion levels of chromium. Figure 3 shows that highly variable results were obtained when analyses were performed in the normal way. However, the amounts and the variability were markedly decreased when the analyses were done in a "clean room." Finally, the lowest values and the smallest variability were obtained after adding a step to clean up the reagents. One must also be careful in the mixing of reagents when preparing metal-ion standards for multiple spectroscopic measurements (27).

It is important to realize that contamination of laboratory air also occurs via vapors of low-volatility organic compounds, such as phthalate esters (28), as well as those for the common, more volatile solvents (29). The latter could easily account for data obtained for successive 10-fold dilutions of a solution that still gave measurable IR absorption signals for a solute after 30 dilutions (30).

Temperature. Most scientists put too much faith in readings from room thermostats. It is easy to forget that

thermostats are carefully protected from damage—and from temperature changes—by shields that often have few perforations, and that the room temperature is controlled by blowing cold or hot air into the room through diffusion devices having unknown mixing effects. This was first brought to my attention in the mid-1950s by H. J. Keily, a graduate student at MIT, who complained of erratic behavior when he was performing thermometric titrations in a new humidity- and temperature-controlled room. He was using a Mariotte flask and a long thick-walled capillary tube to obtain "constant" flow of reagent into a Dewar flask that held his stirred sample. His setup was near, but not under, an air inlet in the ceiling. His problem was solved when he took the metal cover off the thermostat on the wall and directed a small fan at it from about 25 cm away, and set a large (~75 cm diameter) fan on a box on the benchtop at the opposite end of the small laboratory and directed it so as to blow air along a line about 1 m below the air inlets. A combination of more nearly uniform temperature of the room air and the faster response of the thermostat to a change in the average room temperature eliminated detectable effects on the titrations.

Figure 3. Analytical blanks for chromium determination. Region I: ordinary laboratory atmosphere and reagents. Region II: purified atmosphere (clean room and clean laminar-flow bench). Region III: purified atmosphere and purified reagents. (x-axis: blank no., 6-42.)

Recollection of Keily's experience helped to solve a related problem in the late 1960s, when J. E. Oberholtzer, a graduate student at Purdue, was assembling a high-precision, digitally controlled gas chromatograph using an oven capable of ±0.02 °C control while working in a room without temperature control (31). The temperature of a


thermistor in the oven was measured against a secondary standard resistance box in a bridge circuit on the benchtop. During the course of the day, the short-term mean temperature of the oven appeared to drift slowly in one direction by more than 0.1 °C and then back at the end of the day. By putting the resistance standard along with transducers in a large wooden box in which the temperature was controlled a few degrees above the highest temperature reached in the room, the desired long-term stability of the oven temperature was observed.

Everyone is familiar with the recommendation that an instrument should be turned on for warmup well in advance of its use for measurements. Some chemists also take an additional precaution and prepare solutions well in advance of use so as to minimize the effect of heats of mixing. Both precautions are essential if differential measurements are to be made. For example, in the late 1940s when differential measurements in the UV-vis were introduced, differences of only a few degrees in temperature between the concentrated reference and the slightly more concentrated sample solutions produced very large changes in the measured differences, even though a 0.01% transmittance difference was usually the smallest detectable!

A spectacular example of another effect of sample temperature has recently been reported by Allerhand and Maple (32). They found that resolution in NMR was greatly enhanced by controlling the temperature of the liquid sample to ±0.02 °C. The implications of this finding certainly extend beyond NMR measurements.

Pressure of the laboratory.
Our first custom, high-precision gas chromatograph, which was mentioned earlier, boasted a large thermostatically controlled box that held all of the regulators and transducers (except the flame ionization and thermal conductivity detectors, which were sometimes individually thermostated), the digitally controlled sampling valve, and long sections of metal tubing (usually filled with coarse metal shavings to improve heat transfer) to precondition the carrier gases. Flows of carrier gases and those for the flame ionization detector were regulated by manually adjusted, high-precision pressure controllers because of the absence at that time of high-stability electronic flow controllers. One day, a frustrated R. A. Culp, a postdoctoral associate working on very high precision differential measurements of retention times (33), asked me to step into the laboratory next door. He promised to generate

peaks immediately upon my command. He stood behind me as I confirmed the flat baseline on the laboratory recorder that was connected in parallel with the ADC output. Then, yes, a sizable peak did appear at my request—several times! Each time, he had simply opened a door into the hallway! Clearly, the fans in our laboratory hoods were able to lower the atmospheric pressure in the room enough to result in a noticeable change in flow through the low-impedance gas line to the flame-ionization detector which, in turn, resulted in a change in its signal. It certainly provided a "high-tech" way in which to confirm the arrivals and departures of his laboratory partners and visitors!

It is worth noting here another observation from the same study. By measuring directly, for successive chromatograms, the difference in the retention time between the peak for the internal standard and that for the solute in question, one could gain nearly an order of magnitude in reproducibility compared with taking differences between the averages for the absolute measurements of each peak. In this comparison, more factors than just the laboratory pressure were undoubtedly involved.

Returning to the effects of changes in laboratory pressure, one could produce not only isolated false peaks but also, as expected, noticeable effects on quantitative measurements of peaks for real components. Shortly thereafter, L. J. Lorenz, a graduate student, using the same chromatographic system to ensemble-average successive chromatograms for methane at a series of very low concentrations (34), found that he obtained responses for his series of standards that were noticeably closer to linear after he corrected for changes in barometric pressure at regular intervals during the day and, especially, from day to day.
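Lorenz's actual correction procedure is not described here; the idea, however, can be sketched in a few lines. A minimal sketch, in which the linear dependence of detector response on ambient pressure, the function name, and all the numbers are illustrative assumptions rather than values from reference (34):

```python
def pressure_corrected_response(raw_response, barometric_kpa, reference_kpa=101.325):
    """Scale a detector response measured at `barometric_kpa` to its
    equivalent at a reference pressure.

    The simple linear proportionality assumed here (response tracks
    carrier-gas mass flow, which tracks ambient pressure) is an
    illustration only; a real correction must be established
    empirically for a given detector and gas line.
    """
    return raw_response * (reference_kpa / barometric_kpa)

# On a low-pressure day the raw response reads low; the correction
# scales it back up toward its reference-pressure value.
corrected = pressure_corrected_response(98.0, barometric_kpa=99.3)
print(round(corrected, 2))
```

Whether a simple ratio is adequate would have to be verified against standards; the practical point is only that logging barometric pressure alongside each run makes such a correction possible at all.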
Finally, a brief mention can serve to remind the reader of the many effects of partial pressure of water and its day-to-day changes, even in air-conditioned rooms when the humidity is not independently controlled. These include the weighings of hygroscopic and active-surface solids and absorption measurements in both IR and NMR (35).

Hardware

One should always examine critically the basic design features of any new instrument. Oberholtzer and I bought ("sight-unseen") an oven from the Netherlands that users had recommended for high-precision control of the chromatographic column temperature. As soon as we saw the Beckmann

thermometer, with its large bulb of mercury, that was used to detect changes in temperature, we put a thermistor in the oven. Instead of control to ±0.02 °C, the range was more than 10 times greater! Happily, the desired level of control could be obtained after the control circuit had been redesigned to incorporate a thermistor in place of the thermometer. Nevertheless, further examination showed that, in spite of very rapid air circulation, when one simply moved the thermistor to different locations in the empty oven, the observed temperature was significantly different (but still under good local control). Because that meant we would not know the effective average temperature of the chromatographic column, we emphasized the precisions of our results rather than their absolute values.

One of the first inquiries we received after our first paper on high-precision GC was published came from a medical researcher who was using capillary gas chromatography to monitor the constituents of body fluids. Because the chromatographic run lasted more than an hour and there were countless peaks, the selection of internal standards was very difficult. Furthermore, he could not always be confident of having every species present in every chromatogram. His uncertainty in identifying a peak by retention time was such that he worried about misidentification of peaks. Hence, this researcher was keenly interested in improving the reproducibilities of his retention times.

Another factor to consider is the possible effect of "time jitter" (uncertainty) in data acquisition and control. When early laboratory computers (because of their cost) were used either for combined data acquisition and processing from multiple instruments or for combined instrument control, data acquisition, and processing for a single complex instrument, variable delays in responding to interrupts sometimes had noticeable effects on data.
Although those errors can still be found (36), the use of relatively inexpensive coprocessors under such circumstances can now virtually eliminate the need for concern. Two other factors that can contribute to day-to-day and lab-to-lab variability are often overlooked. The first involves the repeatability of setting instrument parameters ("resetability") before each measurement as opposed to simple repeatability of replicate measurements using the same settings. For example, a chromatographic pump might bear performance specifications of 0.2% for repeatability and 0.5% for


resetability. Because the latter figure includes "slack" or "play" in the adjustment mechanism(s), that figure can sometimes be improved by approaching the set-point from the same direction each time. The second type of set-point problem arises from having too coarse an adjustment device. In a recent study involving the use of a DAC to control the rate of a liquid chromatographic pump, it was found that resolution of the DAC did not contribute to the uncertainty; it was solely a case of resetability of the pump (37).

Finally, failure to calibrate instrument settings and performance is often overlooked. For example, most chromatographers take for granted the temperature settings on their injection ports, column ovens, detector ovens, and temperature programmers. An interesting situation is encountered in a control laboratory where a variety of instruments, differing by manufacturer and/or vintage, are used interchangeably for a given analysis. Marc Feldman (38) reported that when he measured the temperature of each column oven after using its instrument dial to make the setting, the actual temperatures for the half-dozen ovens in one group differed by more than ±10 °C. Although such an uncertainty may be relatively unimportant in analyzing simple mixtures using packed columns, for more complex samples and for long runs using capillary columns, such a range can put a difficult burden on the analysts and/or the software algorithms. A similar check on the temperature-programming hardware is also desirable.

Software

Over the years, one of the sticky problems has been the matter of proprietary software for peak deconvolution and baseline correction. One can understand why a company may not wish to divulge its algorithms. However, there are now relatively simple ways to avoid this problem without breaching confidentiality.
They result from three factors: (a) the relatively inexpensive computer memory of all types, (b) the likelihood that a user will have access to another microcomputer in addition to the one in the instrument, and (c) the ease with which one can generate mathematically a simulated data set (39). Such a set has features that are known exactly, such as peak locations, peak areas, signal-to-noise ratios, and different types of baselines. One approach is to feed the simulated data into the instrument system and use the proprietary program to analyze it. A second approach is to feed a previously

validated program into the instrument to analyze its raw data and compare the results with those from the proprietary program. Alternatively, one can take the raw data out to a stand-alone computer in order to use the validated program on it and then compare the results as before. Steps have already been taken by some instrument manufacturers to make such procedures readily possible. In addition, because governmental regulations dealing with Good Laboratory Practice Standards are forcing users to provide evidence of validation (40, 41), vendors of instruments and computerized data management systems are beginning to provide users with the necessary information (41).

In case those precautions seem unnecessary, consider the following. Papas and Delaney (40) have reported that when four chromatographic integrators were tested using simulated data, they obtained not only significantly discrepant results between instruments but also, for a given instrument, widely different effects of noise level and tailing on peaks having different heights and widths. When I later discussed their findings with Lubkowitz (42), I found that he had independently discovered the existence of such problems. By having the output of a chromatogram put into firmware, he, too, could feed exactly the same (but not validated) data into two or more instruments. He found that the programs for different computerized chromatographs and for chromatographic integrators sometimes produced quite different results!

Careful calibrations, using for each "unknown" peak a "known" having nearly the same concentration, should help to minimize bias. Unfortunately, analyses of known concentrations will not be useful in improving the factors that result in variations in precision within one instrument (40). One can find similar problems outside the area of chromatography.
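The simulated data sets on which such comparisons rest are easy to generate. A minimal sketch: the peak positions, areas, noise level, and baseline slope below are arbitrary illustrative choices, not values from the Papas and Delaney study, and a realistic test set would also include overlapping and tailing peaks:

```python
import numpy as np

def simulated_chromatogram(t, peaks, noise_sd=0.5, baseline_slope=0.01, seed=0):
    """Synthesize a chromatogram whose true peak parameters are known exactly.

    peaks: iterable of (retention_time_s, area, sigma_s) tuples.
    Returns Gaussian peaks on a linearly drifting baseline plus white noise.
    """
    rng = np.random.default_rng(seed)
    signal = baseline_slope * t                      # known drifting baseline
    for rt, area, sigma in peaks:
        # unit-area Gaussian scaled so its integral is exactly `area`
        signal = signal + area * np.exp(-0.5 * ((t - rt) / sigma) ** 2) / (
            sigma * np.sqrt(2.0 * np.pi))
    return signal + rng.normal(0.0, noise_sd, t.size)

t = np.linspace(0.0, 600.0, 6001)                    # 10 Hz over a 10-min run
true_peaks = [(120.0, 400.0, 3.0),
              (300.0, 50.0, 4.0),                    # small peak on the drift
              (450.0, 400.0, 3.0)]
y = simulated_chromatogram(t, true_peaks)

# Any integrator's report can now be scored against the known truth;
# e.g., total area after subtracting the known baseline:
total = (y - 0.01 * t).sum() * (t[1] - t[0])
print(f"recovered total area = {total:.1f} (true total 850.0)")
```

Because every peak location, area, and baseline is known exactly, the output of any integrator, whether proprietary or validated, can be scored against the same file, which is precisely the comparison described above.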
Meglen (43) reported that somewhat different conclusions were obtained when two different chemometric programs were used to analyze the same data. Therefore, although the user would like to assume that such concerns are unnecessary, errors resulting from software are not entirely hypothetical.

Interference and mistaken identification

Before discussing detailed examples, it is well to recall two classical requirements for reliable analyses, whether they are satisfied by performing chemical tests or by using highly instrumented measurements of physical properties. First, it is imperative that the

identity of the species being measured be confirmed. Either a different chemical reaction or a different physical measurement should be made. Second, in making crucially important quantitative measurements, it is desirable to use two procedures that are as nearly independent as possible, so as to minimize the effect of an interference from an unsuspected source. In many environmental and clinical problems involving trace-level analyses, the stakes can be very high—as can the chances of interference or misidentification (5-7). As the examples below illustrate, identification of the exact chemical species that is the source of the problem is rarely ever pursued because of the difficulty and cost involved. Instead, one takes appropriate steps to avoid or at least minimize the interference.

Let us first consider examples of interferences by unidentified species. Veillon (44) has estimated that a graphite furnace atomic absorption procedure that was inadequate for dealing with one or more interferences produced erroneous results for the chromium content of urine for a period of 10 years or more! Another example, which involved chlorinated dibenzo-p-dioxins, was reported by Shadoff (45). The procedure involved successively a liquid-liquid extraction, LC, and GC/MS using selected ion peaks for quantitation (46). He found unexpectedly high results for dioxin in fractions from a fish. However, when he scanned a large range of mass-to-charge ratios, he found two tiny dioxin peaks sitting on a high, almost featureless background that spread over a wide range of masses. He speculated that the background came from small amounts of glycerides that had unexpectedly passed through the entire procedure.

Similarly, Nestrick and Lamparski (47) found an interesting interference when analyzing for very low levels of dioxins using basically the same isolation and measurement procedures as Shadoff.
They found that very erratic and unexpectedly high results were obtained unless they passed high-purity nitrogen through a bed of silica before using that nitrogen to evaporate solvent from the appropriate LC fraction prior to injecting a portion of it into the GC/MS. Since high-purity nitrogen is not a likely place to find tetrachlorodioxins, I suspect that a tank-valve lubricant or some residual pump oil had the "right" properties to be collected by the LC solvent and then pass through the GC/MS to produce the interfering signals. One must always be on the alert for unexpected behavior, but one certainly cannot predict such interferences in the usual way.

Finally, in the clinical area of drug abuse, "false positives" are also of great concern because of the seriousness of the charge to the accused person. That is why one laboratory selects statistical conditions that have a greater than 99.99% chance of detecting a drug user (48). Nevertheless, those conditions will allow an estimated 2000 true users to pass undetected for every nonuser who is wrongly judged to fail the test. As a further consideration, in some tests, common sources of interference are encountered; for example, poppy seeds on a couple of breakfast rolls have been found to give a positive test for opiates (48).

There are other types of interferences that can be examined more logically, just as chemical interferences have traditionally been explored. First, in a situation reminiscent of the early applications of emission spectroscopy using arcs and sparks, a recent paper (49) reminds us that in making inductively coupled plasma measurements, one should not overlook the possibility of interference from second- and third-order spectral lines. Second, there is a publication in mass spectrometry that illustrates the use of calculations of peak locations and relative heights based upon (a) known masses and abundances of isotopic species and (b) known fragmentation patterns for molecules that contain only certain specified elements. In one dioxin study (50), a computer program was used to calculate all peaks in the region of two prominent peaks used for determining tetrachlorodioxin. All possible combinations of species were calculated that contained up to 50 carbons; 6 oxygens; 2 each of nitrogen, chlorine-35, chlorine-37, and bromine-79; one each of bromine-81, sulfur, and phosphorus; and the needed number of hydrogens. Totals of 339 and 359 peaks were found to fall within 0.010 mass units of the respective 319.8965 and 321.8936 peaks being measured. It was estimated that a resolution of at least 30,000 would be needed.
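The kind of brute-force composition search used in that dioxin study (50) can be sketched as follows. This is a minimal illustration, not the original program: the monoisotopic masses are standard tabulated values, but the element limits are assumptions chosen for readability, and no valence or chemical-plausibility checks are applied, so it will overcount candidates relative to the published totals. Following the paper's "needed number of hydrogens" idea, hydrogens are added to fill out the remaining mass.

```python
from itertools import product

# Monoisotopic masses in u (standard tabulated values).
MASS = {"C": 12.0, "H": 1.007825, "O": 15.994915, "N": 14.003074,
        "Cl35": 34.968853, "Cl37": 36.965903,
        "Br79": 78.918338, "Br81": 80.916290,
        "S": 31.972071, "P": 30.973762}

def candidates(target, tol=0.010):
    """Enumerate elemental compositions whose exact mass lies within
    tol of target. Element limits here are illustrative assumptions,
    not the exact limits used in ref. 50."""
    hits = []
    for c, o, n, cl35, cl37, br79, br81, s, p in product(
            range(51), range(7), range(3), range(5), range(3),
            range(2), range(2), range(2), range(2)):
        heavy = (c * MASS["C"] + o * MASS["O"] + n * MASS["N"]
                 + cl35 * MASS["Cl35"] + cl37 * MASS["Cl37"]
                 + br79 * MASS["Br79"] + br81 * MASS["Br81"]
                 + s * MASS["S"] + p * MASS["P"])
        # "Needed number of hydrogens": fill the rest of the mass with H.
        h = round((target - heavy) / MASS["H"])
        if h < 0:
            continue
        if abs(heavy + h * MASS["H"] - target) <= tol:
            hits.append((c, h, o, n, cl35, cl37, br79, br81, s, p))
    return hits

hits = candidates(319.8965)
# C12H4O2(35Cl)4, the tetrachlorodioxin ion itself, falls within the window.
tcdd = (12, 4, 2, 0, 4, 0, 0, 0, 0, 0)

# Resolving power needed to separate peaks 0.010 u apart near m/z 320:
resolution = 320 / 0.010  # roughly 32,000, in line with the 30,000 estimate
```

Restoring valence rules and the exact limits of reference 50 would shrink the list toward the reported 339 candidates; the point of the sketch is only how quickly plausible interferences multiply at a given mass tolerance.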
However, the use of increasingly high resolution is limited by two problems: (a) the appearance of isotopic peaks and (b) the greater widths and lower intensities of all peaks. The maximum useful resolution is therefore set by the amount of available sample (more precisely, the amount of the sought-for chemical species) as well as by those two factors (50). In practice, the problem can be complicated further if the two molecular species in the sample, the sought-for and the interfering species, are present in widely different amounts. A minor isotopic peak from an interfering substance present at the 100 parts-per-billion level may obliterate the abundant target peak of the sought-for substance present at the 100 parts-per-trillion level. Hence, there is a continuing need to move toward yet more complex combinations of separation and measurement techniques.

ANALYTICAL CHEMISTRY, VOL. 62, NO. 13, JULY 1, 1990 • 709 A

Before discussing the use of hyphenated techniques and other multiple-operation and multiple-detector systems, it is worth digressing to note a recent paper that dealt with a different multiple-technique problem: the cross-evaluation of independent, multiple analytical procedures for a given species (51). By using four or more different methods for determining an analyte, it is possible to arrive at the most accurate procedure for analyzing a complex sample and also to assess the capabilities of each of the individual methods with respect to bias and precision. This paper clearly addresses a problem that is sometimes encountered when the analytical results obtained by two methods are plotted against one another over a range of concentrations. When, as Figure 4 shows, the results deviate from a broad straight line, especially at one end or the other, one does not know which method to blame (8). Although a third, independent method should be helpful (8, 51), the reported mathematical treatment argues strongly for a fourth.

Figure 4. Comparison of results obtained by two independent methods. The region where disagreement arises, either because Method 1 gives low results or because Method 2 gives high results because of an interference, is indicated with an asterisk.

The major reason for using multiple operations in series and/or in parallel is to improve the selectivity, a goal that involves more than just one of its components: resolution. This is why it is appealing to speculate about the feasibility of devising methods for calculating selectivities, at least on a relative basis. Certainly, in a simple example that was cited (10), using first boiling-point and then molecular-weight data from a large compilation of organic compounds, the number of possible species was greatly reduced (from roughly 1300 to 6) by introducing the second criterion. This, of course, is the same general principle that is used in searching libraries of spectra. Intuitively, it seems that it should be worthwhile, by using different combinations of physical and chemical properties, to estimate the relative numbers of expected interferences when applying different combinations of procedures and techniques to a sample containing given classes of compounds (or elemental species). Are certain combinations more useful than competitive ones for certain classes of samples? If careful examination shows that hope to be untrue, one can certainly continue along the present random course.

General conclusions

Throughout this discussion, I have documented sources of instability and error, including gross misidentifications, that can occur within a laboratory. Minimizing those sources within a laboratory will expand the range of measurements that can be made and the accuracy with which they can be interpreted reliably. In addition, one may, in some cases, extend one's capability even further by improving resolution in a qualitative, rather than quantitative, way, as has been shown for NMR. Finally, regardless of how consistent one laboratory is in its operations, there is a need to reduce the differences between it and an equally consistent second laboratory. Better recognition of the classes of the sources of bias and uncertainty should lead to better research data as well as to improved product control by industry and regulatory control by governmental agencies.

References


(1) Chem. Eng. News 1990, 68(6), 32.
(2) a. Callis, J. B.; Illman, D. L.; Kowalski, B. R. Anal. Chem. 1987, 59, 624 A-637 A. b. Pell, R. J.; Erickson, B. C.; Hannah, R. W.; Callis, J. B.; Kowalski, B. R. Anal. Chem. 1988, 60, 2824.
(3) Horwitz, W.; Kamps, L. R.; Boyer, K. W. J. Assoc. Off. Anal. Chem. 1980, 63, 1344.
(4) a. Whitaker, T. B.; Dickens, J. W.; Monroe, R. J.; Wiser, E. H. J. Am. Oil Chem. Soc. 1972, 49, 590. b. Whitaker, T. B.; Dickens, J. W.; Monroe, R. J. J. Am. Oil Chem. Soc. 1974, 52, 214.
(5) Widmark, G. Adv. Chem. Ser. 1971, 11, 348.
(6) Donaldson, W. T. Environ. Sci. Technol. 1977, 11, 348.
(7) a. Improving the Reliability and Acceptability of Analytical Chemical Data Used for Public Purposes; Ad Hoc Subcommittee Dealing with the Scientific Aspects of Regulatory Measurements, ACS Joint Board/Council Committee on Science; American Chemical Society: Washington, DC, May 10, 1982. b. Chem. Eng. News 1982, 60(23), 44.
(8) Rogers, L. B. J. Chem. Educ. 1986, 63, 3.
(9) Handbook of the Association of Official Analytical Chemists, 4th ed.; Association of Official Analytical Chemists: Arlington, VA, 1977.
(10) Rogers, L. B. In Detection in Analytical Chemistry; Currie, L. A., Ed.; ACS Symposium Series; American Chemical Society: Washington, DC, 1988; Vol. 361, pp 94-108.
(11) a. Lundell, G.E.F. Ind. Eng. Chem., Anal. Ed. 1933, 5(4), 22. b. Lundell, G.E.F.; Hoffman, J. I. Outlines of Methods of Chemical Analysis; Wiley: New York, 1938; Chapter 3.
(12) Furman, N. H. Scott's Standard Methods of Analysis; Van Nostrand: New York, 1939; Vol. 2, pp 1301-1333.
(13) Laitinen, H. A.; Harris, W. E. Chemical Analysis, 2nd ed.; McGraw-Hill: New York, 1975; pp 569-74.
(14) Kratochvil, B.; Taylor, J. K. Anal. Chem. 1981, 53, 924 A.
(15) Analytical Reference Service Training Program, Report of Water Metals, No. 2; U.S. Dept. of Health, Education and Welfare, Robert A. Taft Sanitary Engineering Center: Cincinnati, OH, 1962.
(16) I-CHEM advertisement. Anal. Chem. 1988, 60, 1145 A.
(17) Maugh, T. H., II. Science 1982, 215, 490.
(18) a. Emich, F. Ber. 1910, 43, 10. b. Feigl, F. In Chemistry of Specific, Selective and Sensitive Reactions; Oesper, R. E., Translator; Academic Press: New York, 1949; p 14.
(19) Rogers, L. B. Presented at the Seminar on Priority Pollutants, Washington, DC, 1980; pp 118-45.
(20) a. Kagel, R. O. Analytical Variability and Priority Pollutant Analysis—Industrial Perspective; Water Pollution Control Federation Conference, Houston, TX, October 1979. b. Kagel, R. O., Dow Chemical Co., personal communication, March 1990.
(21) Horwitz, W. Anal. Chem. 1982, 54, 67 A.
(22) Bulkin, B. J., BP America, personal communication, March 1989.
(23) Gemperline, P. J.; Webber, L. D.; Cox, F. O. Anal. Chem. 1989, 61, 138.
(24) Rodden, C. J., New Brunswick Laboratory, U.S. Atomic Energy Commission, personal communication, 1947.
(25) a. Nordberg, R. Acta Pharm. Suecica 1982, 19(1), 51. b. Borgå, O. Idem, 52.
(26) Currie, L. A. Pure Appl. Chem. 1982, 54, 733.
(27) Bass, D. A. Amer. Lab. 1989, 21(12), 24.
(28) Budde, W., U.S. EPA, personal communication, 1976.
(29) Ember, L. R. Chem. Eng. News 1988, 66(48), 23.
(30) Heintz, E. Naturwissen. 1941, 29, 713-25.
(31) a. Oberholtzer, J. E.; Rogers, L. B. Anal. Chem. 1969, 41, 1234. b. Westerberg, R. B.; Davis, J. E.; Phelps, J. E.; Higgins, G. W.; Rogers, L. B. Chem. Instrum. 1975, 6, 273.
(32) Allerhand, A.; Maple, S. R. Anal. Chem. 1987, 59, 441 A.
(33) Culp, R. A.; Lochmuller, C. H.; Moreland, A. K.; Swingle, R. S.; Rogers, L. B. J. Chromatogr. Sci. 1971, 9, 6.
(34) Lorenz, L. J.; Culp, R. A.; Rogers, L. B. Anal. Chem. 1970, 42, 979.
(35) Roberts, D. Amer. Lab. News Ed. 1988, 20, 16.
(36) Joyce, R. Amer. Lab. 1989, 21(6), 48.
(37) Ravichandran, K.; Lewis, J. J.; Yin, I-H.; Koenigbauer, M.; Powley, C. R.; Shah, P.; Rogers, L. B. J. Chromatogr. 1988, 439, 213.
(38) a. Feldman, M., E. I. du Pont de Nemours and Co., personal communication, July 1986. b. Garelick, E. L. Intech 1990, 37, 34.
(39) Pauls, R. E.; Rogers, L. B. Sep. Sci. 1977, 12, 395.
(40) Papas, A. N.; Delaney, M. F. Anal. Chem. 1987, 59, 54 A.
(41) Head, M.; Last, B. Amer. Lab. 1989, 21(12), 56-59.
(42) Lubkowitz, J. A., independent consultant, Gulf Breeze, FL, personal communication, March 1987.
(43) Meglen, R. R. Presented at the 100th AOAC Annual Meeting, Scottsdale, AZ, Sept. 16, 1986.
(44) Veillon, C. Anal. Chem. 1986, 58, 851 A.
(45) Shadoff, L. A., Dow Chemical Co., personal communication, July 1986.
(46) Nestrick, T. J.; Lamparski, L. L.; Stehl, R. H. Anal. Chem. 1979, 51, 1453, 2273.
(47) Nestrick, T. J., Dow Chemical Co.; Lamparski, L. L., Dow Chemical Co., personal communication, March 1979.
(48) Hively, W. Amer. Sci. 1989, 77, 19-23.
(49) Attar, K. M. Anal. Chem. 1988, 60, 2505.
(50) Mahle, N. H.; Shadoff, L. A. Biomed. Mass Spectrom. 1982, 9, 45.
(51) Mark, H.; Norris, K.; Williams, P. C. Anal. Chem. 1989, 61, 398.

Lockhart B. Rogers is Graham Perdue Professor of Chemistry Emeritus at the University of Georgia. He has been teaching since 1942, except during February 1946-August 1948, when he was group leader of long-range research in analytical chemistry at what is now Oak Ridge National Laboratory. He taught for 4 years at Stanford, 13 years at MIT, 13 years at Purdue, and 12.5 years at Georgia before retiring in 1986. Among the numerous awards he has received are the ACS awards in Analytical Chemistry and in Chromatography, the ACS Division of Analytical Chemistry Award for Excellence in Teaching, and the Analytical Chemistry Award of the Society of Analytical Chemists of Pittsburgh.
