Optimizing Data Collection Design

An effective design requires managing multidisciplinary details and integrated errors.

JOHN P. MANEY

Implementation of a data collection activity (DCA) requires that several individuals address numerous, intricate details on schedule and in an exacting manner. The U.S. EPA continues to generate guidance on the complex issues that underlie environmental DCAs (www.epa.gov/quality1). This article presents the background and context of EPA's guidance and, building on that guidance, introduces an approach for managing all error types during the optimization of DCAs.

It began with Love Canal

In 1978, just eight years after its establishment, EPA was confronted with a medical emergency. In the summer of 1980, EPA embarked on a momentous DCA in the Love Canal area of upstate New York. These data were used to support the conclusions of a long-awaited 1982 report on the habitability of Love Canal (1).

However, when the Congressional Office of Technology Assessment (OTA) reviewed the report, it concluded, “The design of the EPA monitoring study, particularly its sampling strategy, was inadequate” (2). The OTA’s findings were not surprising, considering the national pressure to act, the breadth of this multidisciplinary undertaking, and the state of the art of environmental data collection design.

EPA had recognized the obstacles to defensible data collection earlier. In 1979, the agency issued a policy statement establishing agency-wide quality assurance (QA) programs. At the same time that data were being collected at Love Canal, EPA’s National Enforcement and Investigation Center (NEIC) was implementing a mechanism for contract commercial laboratories to generate legally defensible data, and EPA’s QA Management Staff was developing a guidance document for preparing QA Project Plans (QAPPs) (3). The NEIC program evolved into the Contract Laboratory Program, which has supported federally funded remediation of hazardous waste sites since the early 1980s.




The QAPP guidance was in effect until 1994, when it was replaced by new requirements based on a consensus standard (4, 5).

In 1984, EPA issued a more detailed directive mandating a QA Program for all environmentally related measurements performed by the agency (6). The goal was to ensure that all EPA decisions would be based on data of known quality. EPA also recognized that systematic planning was necessary to consistently generate data of known and usable quality (7). The resulting planning process is referred to as the Data Quality Objective (DQO) planning process.

The initial DQO planning process consisted of three steps: define the decision, clarify the information needed for the decision, and design the data collection program (8). By 1990, the DQO planning process had evolved into the current seven separate but iterative and interdependent steps (9): state the problem, identify the decision, clarify inputs to the decision, define the study boundaries, develop a decision rule, specify limits on decision errors, and optimize the design for obtaining data. Guidance on the seven-step process was supplemented in 1994 (10). The DQO planning process has evolved since 1994 and has been widely adopted (11–13).

Examples of data collection details

Objectives
• Identification and understanding of the project “need” or “driver” (e.g., regulations, liability, protection of environment)
• Identification and participation of stakeholders
• Identification of pertinent thresholds and standards
• Identification of DQOs (e.g., the decision to be made, data needs)
• Public relations
• End use of data

Site/waste
• Site or process history
• Waste generation and handling
• Contaminants of interest
• Sources of contamination
• Fate and transport/exposure pathways
• Conceptual models
• Population of interest
• Decision unit support
• Adjacent properties

Sampling
• Resource constraints
• Health and safety
• Representativeness
• Logistics
• Sampling strategy
• Sampling locations
• Number and types (field/QA) of samples
• Sample volume/mass/orientation
• Composite versus discrete samples
• Sampling equipment/containers
• Sample preservation, custody, and handling
• Subsampling

Analytical
• Optimal analytical sample volume/mass
• Quality of reagents/supplies
• Sample preparation/analytical method
• Calibration and verification
• Matrix interferences and laboratory contamination
• Detection limits
• Holding times and turnaround times
• QC samples/statistical control
• Reporting requirements

Data assessment
• DQOs, data quality indicators, and performance criteria
• Documentation of implementation activities and data quality
• Completeness, comparability, representativeness, sensitivity, bias, and imprecision versus performance criteria
• Audits, performance evaluation samples, and corrective actions
• Chain of custody
• Verification of assumptions and statistical treatment of data

Source: Adapted with permission from Reference (25).


Nevertheless, the DQO planning process has not been consistently employed, even within EPA (14). An EPA Science Advisory Board report found that the agency’s acceptance of the DQO planning process was hindered by a lack of requirements and consequences if it were not consistently applied, staff discomfort in accepting and managing decision errors, and technical problems such as an absence of statistical expertise (15).

Although EPA’s initial guidance discriminated between the DQO planning process and its outputs (DQOs), differences were not clearly understood (8). Some interpreted the DQO process to be as simple as choosing between four levels of field and laboratory analytical methods, which were described in subsequent DQO guidance (16), or specifying laboratory quality control (QC) criteria. As a result, the full breadth of issues that the DQO planning process intended to encompass was not widely appreciated in the 1980s and 1990s.

EPA presently defines DQOs and the DQO planning process as distinct but closely related concepts (17). DQOs are the qualitative and quantitative statements derived from the DQO process that clarify study objectives, define the appropriate type of data, and specify tolerable levels of potential decision errors that will be used as the basis for establishing the quality and quantity of data needed to support decisions. The DQO process is defined as a systematic planning tool to facilitate the planning of environmental data collection activities.

Complexity is also recognized as a barrier to accepting innovative concepts, and unfortunately the DQO planning process is perceived as complex (18). In fact, a DCA is generally a difficult undertaking, and it is the complexity of the underlying, multidisciplinary details that confounds planning. The relatively straightforward DQO planning process is incorrectly perceived as being cumbersome because it encourages planners to address DCA details and complexity up front in the planning phase.

Despite these setbacks, EPA continues to generate guidance that focuses the discussions concerning environmental data collection issues, significantly improves the state of the science, and gives decision makers the tools to make defensible decisions. Environmental project planners, managers, decision makers, and private and government contractors who disregard or do not allow implementation of EPA or similar planning guidance may generate data of unknown or unusable quality.

Blunders, random errors, and other demons

Decision makers, field teams, laboratory staff, statisticians, and assessors who may have never met must address many intricate details on schedule. The way that the details outlined in the accompanying box are addressed will determine the type, frequency, and magnitude of the errors expressed in the resulting data and the certainty of associated decisions and conclusions. Errors have been classified into three general categories (19).

Blunders are mistakes that occur only occasionally. Examples include misweighing, mislabeling, transcription errors, transposition errors, incorrect notebook recordings, and sample switching. Mislabeling, for example, can impact a single sample and its associated data. On the other hand, a blunder such as incorrectly switching the identification of two peaks in a gas chromatography calibration standard will affect all analyzed samples until the error is corrected. Blunders can result in qualitative errors (misidentifications) and quantitative errors (under- or over-reporting).

Although blunders are usually infrequent and isolated, there is at least one example of a multilaboratory blunder caused by commercially available standards unknowingly degrading into another environmental contaminant (20). As a result, many laboratories could have reported the wrong compound for years. Blunders are controlled by QA programs that require the use of standard operating procedures (SOPs), self-verification, training, quality improvement, corrective actions, and various forms of assessments such as audits and inspections.

Systematic errors cause measurements to be consistently removed from their true values—for example, under- or over-reporting a contaminant concentration. Thus, systematic errors result in a bias.

Although systematic errors are often considered to be consistent in magnitude, this is not always the case. For example, much of the soil data collected during the 1980s and 1990s for volatile organic compounds are biased low (21). The magnitude of this error is a function of the degree of soil disturbance, which can vary from sample to sample and between field technicians. Thus, systematic errors can be complicated by a random component or some variable such as temperature. When trying to correct for their presence, the variation in the magnitude of systematic errors can result in uncertainty (22).

Systematic errors are quantitative in nature, unless an incorrect detection versus nondetection decision, resulting from a systematic error, is considered as qualitative. Some systematic errors are easily detected by using QC samples, such as spiked samples, calibration check samples, and standard reference materials. Other systematic errors are more difficult to detect because the true value and the error sources are not always known; for example, a subsampling procedure that unknowingly discriminates against certain particle sizes. Implementing adequate QA programs also controls this latter type of systematic error.




Random errors arise from the difficulty of repeating sampling and analytical processes and the heterogeneity of the population under study. For example, the subtle variants of operator implementation, the ruggedness of the sampling and analytical process, and the heterogeneity within a sample and the population lead to imprecise measurements. These random errors are readily detected by using QC samples, such as field or laboratory replicates and matrix spike duplicates. The random nature of this type of error is such that as more samples and replicates are processed, the errors tend to cancel, an advantage when mean properties are of interest. Random errors are typically the focus of statistics. When statisticians refer to “total error”, it is a reference to the total of only random errors. For the sake of clarity, this article uses the term “integrated error” when referring to the collective impact of blunders and random and systematic errors.

Although a typical DCA goal is to eliminate or minimize blunders, systematic errors, and operator/metrology-caused random errors, a concurrent goal is to also manage or document heterogeneity. Statistics are a key planning tool for determining how best to manage heterogeneity and other sources of random errors that are expressed in the spread of values found in environmental databases.

The type and magnitude of errors vary between data collections. The detection and management of these errors are necessary to ensure that data-based decisions are made within the desired level of confidence.
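This behavior can be made concrete with a small simulation. The sketch below uses made-up, illustrative values (the true concentration, bias, noise, and blunder magnitude are assumptions, not values from the article): random errors shrink as results are averaged, the systematic bias does not, and a rare blunder can badly distort an individual result.

import math
import random

random.seed(7)

TRUE_CONC = 10.0         # hypothetical true concentration, e.g., mg/kg
SYSTEMATIC_BIAS = -1.5   # e.g., analyte loss during handling (biases results low)
RANDOM_SD = 2.0          # combined sampling/analytical imprecision

def one_result():
    """One simulated measurement: true value + systematic bias + random noise."""
    return TRUE_CONC + SYSTEMATIC_BIAS + random.gauss(0.0, RANDOM_SD)

# Random errors tend to cancel as more results are averaged...
for n in (1, 4, 16, 64):
    study_means = [sum(one_result() for _ in range(n)) / n for _ in range(2000)]
    grand_mean = sum(study_means) / len(study_means)
    spread = math.sqrt(sum((m - grand_mean) ** 2 for m in study_means) / len(study_means))
    print(f"n = {n:3d}: mean of study means = {grand_mean:5.2f}, spread = {spread:4.2f}")
# ...the spread falls roughly as 1/sqrt(n), but every mean stays offset from the
# true value of 10.0 by about the -1.5 systematic bias; averaging never removes a bias.

# A blunder behaves differently: it is rare but can be large, for example a
# misplaced decimal point multiplying one reported result by 10.
blundered = one_result() * 10.0
print(f"result with a decimal-point blunder: {blundered:.1f}")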

FIGURE 1
Absolute errors associated with project details and their impact on decision uncertainty

This semiquantitative summary of error impact on the certainty of decisions versus the decision threshold can help focus resources while designing a data collection activity. The concentric circles with the true value (T) at center represent the major sources of error: population heterogeneity, sampling, and analysis. The absolute magnitude of the error from each source is proportional to the radius of the concentric circle. (Ra, random analytical error; Sa, systematic analytical error; San, nonmeasurable systematic analytical error; Rs, random sampling error; Ss, systematic sampling error; Ssn, nonmeasurable systematic sampling error; and Vh, variability from population heterogeneity.)

(The figure plots absolute integrated error against the decision threshold, with radial bands for population variability, sampling tasks/details, and analytical tasks/details drawn around the true value T.)

Therefore, it is helpful to further classify errors according to their measurability (23). Measurable errors are those whose impact on accuracy can be detected, monitored, and quantified by QC samples or an easily identified metric, such as percent completion. During data quality assessment, it is fundamental for the assessor to remember that the impact of measurable errors can only be detected and quantified when the appropriate QC samples are used. It is also important that the assessor not conclude that data are usable solely because QC results comply with established criteria. QC samples can only monitor the impact of measurable errors. For example, QC samples do not indicate if the wrong drum was sampled or was sampled in a biased manner.

Nonmeasurable errors, which include certain blunders and systematic errors, affect accuracy and are not typically detected by QC samples. However, they can be controlled through QA procedures that require structured planning, detailed project plans, the use of SOPs, self-verification, training, quality improvement, and various forms of assessments, such as audits and inspections. An evaluation is required to determine if nonmeasurable errors have affected the accuracy of a measurement. To facilitate such an evaluation, assessment findings, SOPs, documentation of SOP and project plan implementation, and documentation of pertinent personnel training and experience should be considered. This information can be used to determine the likelihood or occurrence of blunders and systematic errors, and the accuracy of the data. Some nonmeasurable errors, such as blunders, may be uncovered during corrective-action investigations if the nonmeasurable error affected a sample of known concentration, such as a QC sample, or resulted in nonsensical data.

It should be noted that errors normally considered measurable become nonmeasurable if the appropriate QC samples are not used. Therefore, assessors must be cognizant of any QC sample omissions. Assuming that all appropriate QC samples are used and pertinent metrics are identified, random errors, comparability, and percent completions will be measurable. Blunders will be nonmeasurable, and some systematic errors will be measurable and others nonmeasurable. Other examples of measurable errors include poor-quality reagents, poor recoveries, matrix interferences, contamination, imprecise procedures, or calibration drift. Nonmeasurable errors can encompass an incorrect sampling strategy or documentation, improper decision unit support, sample switching or mislabeling, incorrect dilutions, or sampling the wrong area.

During the design of a DCA, it is important to bear in mind that the selection of sampling, analytical, and data-handling details and their potential for error will determine the level of confidence in associated decisions, as shown:

DCA design ➝ Details ➝ Errors ➝ Decision uncertainty
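As noted above, measurable errors reveal themselves only through the QC samples that are actually run, so data quality assessment often reduces to comparing QC results against acceptance criteria. The sketch below is illustrative only; the 80–120% spike recovery window, the 20% relative percent difference limit, and the blank limit are assumptions for the example, not values from the article or any cited method.

def check_qc(spike_recovery_pct, duplicate_rpd_pct, blank_result, blank_limit=0.5):
    """Flag measurable errors from a minimal QC suite (illustrative criteria only)."""
    findings = []
    if not 80.0 <= spike_recovery_pct <= 120.0:
        findings.append(f"matrix spike recovery {spike_recovery_pct:.0f}% outside 80-120%")
    if duplicate_rpd_pct > 20.0:
        findings.append(f"duplicate RPD {duplicate_rpd_pct:.0f}% exceeds 20%")
    if blank_result > blank_limit:
        findings.append(f"blank result {blank_result} exceeds limit {blank_limit}")
    return findings or ["no measurable errors flagged by this QC suite"]

print(check_qc(spike_recovery_pct=72.0, duplicate_rpd_pct=12.0, blank_result=0.1))
# A clean report from such checks still says nothing about nonmeasurable errors,
# such as sampling the wrong drum or using a biased sampling design.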

FIGURE 2

Optimizing data collection design

A data collection activity is planned, implemented, and assessed with a focus on the details and associated errors. The flow runs from the need for a DCA, through planning (the DQO process, Steps 1–6, followed by the seventh DQO step, optimize the design for obtaining data, which is the focus of this article), to implementation, assessment, and the decision or conclusion. The optimization step addresses two parallel sets of concerns, both of which feed the project plan.

Managing nonmeasurable errors
- Heterogeneity (Gy’s selection error a): correct and resource-effective sample selection design that does not over- or underrepresent certain components, particles, or portions of the population; correct population and decision unit support
- Sampling errors (Gy’s delimitation and extraction errors a): correct and resource-effective sampling, preservation, decontamination, and sample support; QA systems
- Analytical errors: accurate and resource-effective analytical methods; QA systems
- Comparability: well-documented field and lab procedures; QA systems
- Data handling: proper statistical and data-handling methods; QA systems

Managing measurable errors
- Heterogeneity: stratify the population and/or increase sample mass and/or the number of samples and/or use compositing; sample variance is indicative of this error
- Sampling errors, random: increase the number of samples; increase sample mass to control fundamental error (see the sketch following this figure); establish collocated-sample criteria
- Sampling errors, systematic (bias): identify the proper type and frequency of blanks; establish blank criteria
- Analytical errors, random: increase the number of analytical replicates; establish criteria for replicates
- Analytical errors, systematic (bias): establish criteria for calibration checks; method, reagent, and calibration blanks; spikes; yield; standard reference materials; and performance evaluation samples
- Selectivity: appropriate method and/or resolution and/or confirmation
- Sensitivity: method detection limit significantly less than the decision threshold to minimize random error and to minimize false negatives; establish criteria for the method detection limit
- Comparability and completeness: identify critical comparability characteristics; identify the minimum number of data and critical data points

Both sets of controls are documented in the project plan.

a See Reference (25).
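One of the measurable sampling-error controls listed above is to increase sample mass to limit the fundamental error. A minimal sketch of Gy's fundamental-error relation follows; the sampling constant and particle size used here are illustrative assumptions, and in practice the constant is estimated from the material's shape, size-range, composition, and liberation factors (see Reference (25) for sample collection design guidance).

import math

def fundamental_error_rsd(sample_mass_g, lot_mass_g, top_particle_cm, sampling_constant=60.0):
    """
    Relative standard deviation (as a fraction) of Gy's fundamental error:
        relative variance = C * d**3 * (1/Ms - 1/ML)
    where C (g/cm^3) lumps shape, size-range, composition, and liberation factors,
    d is the top particle size (cm), Ms the sample mass (g), and ML the lot mass (g).
    The default C is an illustrative assumption only.
    """
    var_rel = sampling_constant * top_particle_cm ** 3 * (1.0 / sample_mass_g - 1.0 / lot_mass_g)
    return math.sqrt(max(var_rel, 0.0))

for mass in (10.0, 100.0, 1000.0):   # grams of sample collected
    rsd = fundamental_error_rsd(mass, lot_mass_g=1.0e6, top_particle_cm=0.2)
    print(f"sample mass {mass:7.0f} g  fundamental-error RSD ~ {100 * rsd:.1f}%")
# A larger sample mass (or a smaller top particle size after grinding) drives this
# component of sampling error down; analyzing the same small sample more times does not.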

Figure 1 is a pedagogic variant of a box-and-whisker plot, which summarizes the impact of these errors on the certainty of decisions. It is semiquantitative and meant as a tool to understand the different sources of decision error and to encourage an effective focus of resources during the design of a DCA. The concentric circles with the true value (T) at center represent the major sources of error: population heterogeneity, sampling, and analysis. The absolute magnitude of the error from each source is proportional to the radius of the concentric circle. Figure 1 represents a typical DCA for which population variability is the largest source of error (24), although this is not always the case (21). This figure depicts the impact of both systematic errors and random errors upon the ultimate decision or conclusion versus a decision threshold. (For simplicity, the impact of the infrequent blunder, which is consistent in its direction, is incorporated under systematic errors.)

Using absolute error estimates presents a worst case because, in practice, some errors could cancel each other. T could be an average property or the value for an individual sample. The goal is to control the overall radius length, which is the integrated error resulting from all blunders and systematic and random errors, so that error-induced uncertainty is controlled. Thereby, decisions or conclusions can be made with the desired amount of certainty.
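A rough numerical counterpart to Figure 1 can be sketched as follows; all magnitudes are illustrative assumptions, not values from the article. The absolute error components are summed to give a worst-case integrated error band around T, and independent random components could instead be combined in quadrature, which is one reason the absolute sum overstates the likely combined effect.

import math

T = 4.0                 # true (or measured mean) value, e.g., mg/kg
THRESHOLD = 10.0        # decision threshold

# Illustrative absolute error magnitudes for each component in Figure 1
errors = {
    "Vh  (population heterogeneity)":            2.0,
    "Rs  (random sampling)":                     0.8,
    "Ss  (systematic sampling)":                 0.5,
    "Ssn (nonmeasurable systematic sampling)":   0.4,
    "Ra  (random analytical)":                   0.3,
    "Sa  (systematic analytical)":               0.2,
    "San (nonmeasurable systematic analytical)": 0.2,
}

worst_case = sum(errors.values())                            # absolute sum, as in Figure 1
quadrature = math.sqrt(sum(e * e for e in errors.values()))  # if components were independent

print(f"worst-case integrated error: {worst_case:.1f}")
print(f"quadrature combination:      {quadrature:.1f}")
print("decision threshold exceeded (worst case)?", T + worst_case >= THRESHOLD)
# Here even the worst-case band (4.0 + 4.4 = 8.4) stays below the threshold of 10,
# so the correct below-threshold decision is still likely; moving the threshold
# closer to T would change that.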

Optimization

Optimization, one step in the multiphased life cycle of a DCA, is depicted in the flow diagram of Figure 2.

The diagram illustrates how the DCA is planned, implemented, and assessed after a need is identified. The remainder of this article focuses on the seventh step of the DQO planning process, in which the outputs of the previous steps and their multidisciplinary complexity are confronted and used to create a resource-effective design that produces data of known and sufficient quality.

Optimization requires that appropriate details for data collection be selected and that the integrated error associated with their implementation be within acceptable boundaries. Most of the guidance concerning this step emphasizes control of random errors (10–13). This focus is understandable because random errors are obvious whenever multiple samples are collected and because statistics is a well-developed branch of mathematics for measuring and managing random errors. Although the impact of random errors must always be addressed, even elegant statistical designs can be ineffective if the controlling source of error is systematic or nonmeasurable, or if the systematic, nonmeasurable, and random errors together create a different outcome.

The flow diagram in Figure 2 depicts general categories and examples of issues that need to be considered during optimization. Therefore, planners must manage all error sources. Institutional knowledge, site history, history from a similar site, and preliminary site studies can be sources of information for error.

Successful management of errors requires that those implementing the DCA have quality systems. A quality system is the management system that consists of QA procedures and QC activities and is structured to generate a product of known and sufficient quality. QC activities are those used to determine whether a process is within statistical control. The QA procedures are designed to oversee and assess QC activities, specify the implementation of all activities that can impact quality, assess quality, improve quality, and report to managers.

QA procedures are the tools for controlling nonmeasurable errors. These QA procedures require that appropriate stakeholders and technical experts initiate a structured planning process and that those who implement the DCA are technically qualified. Without structured planning, success and quality cannot be defined. Lacking qualified personnel, there is a significant risk that even a sound plan will fail. QA procedures, such as personnel training, personnel performance evaluation (e.g., certification or documentation of performance), SOPs, assessments, and specifications for contractor selection, increase the likelihood that errors will be detected and corrected or avoided. The concern is that there is no guarantee that nonmeasurable errors will be detected, which encourages DCA planners to use a graded approach that requires more demanding QA procedures as the ramifications of making an incorrect decision or conclusion increase. Guidance from EPA and other groups addresses critical sampling and analytical issues and QA procedures that can manage nonmeasurable errors (12, 13, 25).

Because measurable errors are detected and controlled through QCs and specified metrics, they are easier to detect than nonmeasurable errors. Measurable errors are best managed by identifying the appropriate suite of QCs and implementing them with the appropriate frequency. Omission of a pertinent QC can mask an error. Using QCs infrequently decreases confidence as they are extrapolated to larger sets of field or lab samples, especially when some samples may be innately different or may have undergone different handling, preparation, or analyses. As with QA procedures, the type and frequency of QCs should be chosen during optimization using a graded approach.

The use of QCs requires that associated acceptance criteria be specified. Readily available acceptance criteria are often used with project plans that are replete with criteria copied directly from standard analytical methods or laboratory manuals. Although useful, these acceptance criteria are not project specific and may not meet project objectives. Unfortunately, planners seldom know enough about the sources and the magnitude of errors to effectively define project-specific QC criteria. Additionally, the sources and magnitude of errors change as the design varies. For example, when considering the control of random errors, the various sources of these errors must be considered because they offer different options for error control. Increasing the number of analytical replicates can decrease the impact of analytical imprecision, and increasing the number of samples or the mass of samples and using composite samples or stratification can decrease the problem of heterogeneity. These options complicate the translation of DQOs into QC acceptance criteria because the need for more stringent analytical or sampling QC criteria varies according to the proximity of measured values to the decision point, as impacted by sampling strategy, sample mass, and the type and number of replicates and field samples.

Fortunately, population heterogeneity is typically the major source of error (24), so increasing the number of field samples is generally more productive than developing project-specific QC criteria for a suite of sampling and analytical QC samples. For example, it is clear from Figure 1 that population heterogeneity is the greatest source of decision error. However, in this example, the integrated error, which sums all measurable and nonmeasurable errors, is not a complicating issue because the data are still removed from the decision threshold and the correct decision is likely—that is, T is less than the threshold.
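The trade-off just described between more field samples and more analytical replicates can be sketched with a simple nested variance model; the variance components used below are illustrative assumptions. When the between-sample component (heterogeneity plus sampling error) dominates, extra analytical replicates buy little, while extra field samples shrink the uncertainty of the mean.

import math

# Illustrative variance components (concentration units squared)
var_between_samples = 9.0    # population heterogeneity + sampling error
var_analytical = 1.0         # analytical imprecision per measurement

def se_of_mean(n_field_samples, n_replicates_per_sample):
    """Standard error of the overall mean for a nested field-sample / replicate design."""
    variance_of_mean = (var_between_samples / n_field_samples
                        + var_analytical / (n_field_samples * n_replicates_per_sample))
    return math.sqrt(variance_of_mean)

for n, r in [(5, 1), (5, 3), (15, 1), (15, 3)]:
    print(f"{n:2d} field samples x {r} analytical replicate(s): "
          f"SE of mean = {se_of_mean(n, r):.2f}")
# With heterogeneity dominating, tripling the analytical replicates barely changes
# the result, while tripling the number of field samples cuts the uncertainty of
# the mean by roughly a factor of sqrt(3).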




If Figure 1 were altered by moving the decision threshold so that it intersected the outer concentric circle, then the integrated error would increase the possibility of making an incorrect decision. If mean properties are of interest, increasing the number of samples can help control the integrated error, while standard QC criteria can determine whether procedures are within statistical control. If increased sampling efficiency is not sufficient to make decisions with the desired amount of certainty, then other permutations may be necessary, such as analytical methods subjected to tighter controls and project-specific criteria.

There may be situations in which the population is relatively homogeneous—that is, the radius of Vh in Figure 1 is small compared with the radial components for sampling and analysis, yet the remaining errors preclude decision making with the desired level of confidence. In this situation, potential sources of analytical and sampling errors should be investigated to identify and modify the details that most effectively decrease the integrated error. For example, if the parameter of interest approached the chosen analytical technique's method detection limit (MDL), a more sensitive technique with a significantly lower MDL should give less random error. On the other hand, if the chosen field-analytical technique suffers from a bias, an alternative technique may be more suitable for measurements approaching the decision threshold. If these modifications were used, then QC samples, such as low-level standards and split samples, and project-specific acceptance criteria could ensure that expectations are met.

There are situations, such as longer-term site remediations, in which the different sources of error are well understood and documented over time. Under these circumstances, QA procedures have typically lowered the possibility of significant nonmeasurable errors, minimized measurable systematic errors, and evaluated and subjected random errors to analysis of variance. The component radial lengths depicted in Figure 1 are then well defined, project-specific QC criteria can be confidently specified, and resource-effective optimization can be implemented using more sophisticated techniques (26).

John P. Maney is president of Environmental Measurements Assessment in Gloucester, Mass., and can be reached at [email protected].

References
(1) DHHS Evaluation of Results of Environmental Chemical Testing by EPA in the Vicinity of Love Canal: Implications for Human Health: Further Considerations Concerning Habitability; U.S. Department of Health and Human Services: Washington, DC, 1982.
(2) Habitability of the Love Canal Area: An Analysis of the Technical Basis for the Decision on the Habitability of the Emergency Declaration Area; Office of Technology Assessment, U.S. Congress; NTIS order #PB84-114917; U.S. Government Printing Office: Washington, DC, 1983.
(3) Interim Guidelines and Specifications for Preparing Quality Assurance Project Plans; QAMS 005/80; Office of Monitoring Systems and Quality Assurance, U.S. Environmental Protection Agency: Washington, DC, 1980.
(4) EPA Requirements for Quality Assurance Project Plans for Environmental Data Operations (EPA QA/R-5); Quality Assurance Management Staff, U.S. Environmental Protection Agency: Washington, DC, 1994.
(5) Specifications and Guidelines for Quality Systems for Environmental Data Collection and Environmental Technology Programs; ANSI/ASQC E4-1994; American Society for Quality Control: Milwaukee, WI, 1995.
(6) EPA Order 5360.1, Policy and Program Requirements To Implement the Mandatory Quality Assurance Program; U.S. Environmental Protection Agency: Washington, DC, April 1984.
(7) Data Quality Objectives; Memorandum from Alvin Alm, Deputy Administrator; U.S. Environmental Protection Agency: Washington, DC, May 1984.
(8) Draft Information Guide on Data Quality Objectives; Memorandum from Dean Neptune, Quality Assurance Management Staff; U.S. Environmental Protection Agency: Washington, DC, November 1986.
(9) Neptune, D.; et al. Streamlining Superfund Soil Studies: Using the Data Quality Objective Process for Scoping. Proceedings of the Sixth Annual Waste Testing and Quality Assurance Symposium, Washington, DC, July 16–20, 1990.
(10) Guidance for the Data Quality Objectives Process (EPA QA/G-4); EPA/600/R-96/056; U.S. Environmental Protection Agency: Washington, DC, September 1994.
(11) Institutionalizing the DQO Planning Process for DOE Environmental Data Collection Activities; Memorandum from Thomas Grumbly; U.S. Department of Energy, September 7, 1994.
(12) ASTM Standard D 5792, Practice for Generation of Environmental Data Related to Waste Management Activities: Development of Data Quality Objectives; American Society for Testing and Materials: West Conshohocken, PA, 1995.
(13) Multi-Agency Radiation Survey and Site Investigation Manual (MARSSIM); NUREG-1575; U.S. Nuclear Regulatory Commission: Washington, DC, 1997.
(14) EPA Had Not Effectively Implemented Its Superfund Quality Assurance Program; E1SKF7-08-0011-8100240; Office of Inspector General, U.S. Environmental Protection Agency: Washington, DC, 1998.
(15) Science Advisory Board Review of the Implementation of the Agency-Wide Quality System; EPA-SAB-EEC-LTR-99002; Science Advisory Board, U.S. Environmental Protection Agency: Washington, DC, 1999; www.epa.gov/sab.
(16) Data Quality Objectives for Remedial Response Activities; EPA/540/G-87/004; U.S. Environmental Protection Agency: Washington, DC, 1987.
(17) Guidance for the Data Quality Objectives Process (EPA QA/G-4); EPA/600/R-95/055; U.S. Environmental Protection Agency: Washington, DC, 2000.
(18) Commentary Resulting From a Workshop on the Diffusion and Adoption of Innovations in Environmental Protection; EPA-SAB-COM-01-001; Science Advisory Board, U.S. Environmental Protection Agency: Washington, DC, November 2000; www.epa.gov/sab.
(19) Taylor, J. K. Quality Assurance of Chemical Measurements; Lewis Publishers: Chelsea, MI, 1987.
(20) Maney, J. P.; et al. Environ. Sci. Technol. 1995, 29, 2147–2149.
(21) Smith, J. S.; et al. Volatile Organic Compounds in Soil: Accurate and Representative Analysis. In Principles of Environmental Sampling; Keith, L. H., Ed.; American Chemical Society: Washington, DC, 1996; pp 693–704.
(22) Guide to the Expression of Uncertainty in Measurement; International Organization for Standardization: Geneva, Switzerland, 1995.
(23) Maney, J. P.; Wait, A. D. Environ. Lab. 1991, 3, 2026.
(24) Crumbling, D. M.; et al. Environ. Sci. Technol. 2001, 35, 404–409.
(25) Maney, J. P. Sample Collection Design. In Hazardous and Radioactive Waste Treatment Technologies Handbook; Oh, C. O., Ed.; CRC Press: Washington, DC, 2001; pp 2.1-3–2.1-34.
(26) Guidance on Data Quality Indicators; Peer Review Draft (EPA QA/G-5i); U.S. Environmental Protection Agency: Washington, DC, September 2001; www.epa.gov/quality1.


