Viewpoint

Anthrax Cleanup Decisions: Statistical Confidence or Confident Response

Igor Linkov,*,† John B. Coles,† Paul Welle,† Matthew Bates,† and Jeffrey Keisler‡

†Environmental Laboratory, US Army Engineer Research and Development Center, US Army Corps of Engineers, Vicksburg, Mississippi
‡University of Massachusetts, Boston, Massachusetts
“Can you guarantee this building is safe?” This question is difficult to answer in the wake of a biological attack. It is likewise difficult to decide when people may return to a place where dangerous agents were once dispersed. This decision becomes especially challenging when even minute quantities of the suspected contaminating agent could pose a significant threat (e.g., weaponized anthrax). Following the 2001 U.S. anthrax attacks, the Government Accountability Office (GAO) directed the U.S. Department of Homeland Security (DHS) to develop a defensible strategy for making such decisions following biological incidents.1 Statistically based sampling and analysis has historically provided the information needed to guide remediation of contaminated sites. However, classical statistical approaches could require thousands of tests2 to conclude that the concentration of anthrax spores is below 0.1 spores per square meter (a level that could still pose a significant risk). Supplemental experimental data (e.g., dispersion studies, preliminary sampling) may be difficult to obtain due to financial constraints and may be limited to specific experimental conditions.
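To see why the sample counts balloon, consider the standard zero-positives calculation behind such estimates: if each sample from a contaminated site would independently test positive with probability p0, then concluding that the true probability is below p0 at confidence 1 − α, after observing only negatives, requires roughly n ≥ ln(α)/ln(1 − p0) samples. The short Python sketch below illustrates this; the detection probabilities are hypothetical round numbers, not figures from the cited study.

```python
import math

def samples_needed(p0: float, confidence: float = 0.95) -> int:
    """Smallest n such that n all-negative samples rule out a per-sample
    positive probability of p0 or more at the given confidence level.

    Frequentist logic: if the true positive probability were p0, the chance
    of seeing zero positives in n samples is (1 - p0)**n; we require that
    chance to fall below alpha = 1 - confidence.
    """
    alpha = 1.0 - confidence
    return math.ceil(math.log(alpha) / math.log(1.0 - p0))

# Hypothetical per-sample detection probabilities for a lightly
# contaminated surface: the required sample counts climb fast.
for p0 in (0.01, 0.001, 0.0001):
    print(f"p0 = {p0}: {samples_needed(p0)} samples")
# -> p0 = 0.01: 299 samples
# -> p0 = 0.001: 2995 samples
# -> p0 = 0.0001: 29956 samples
```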
Moreover, classical statistics expresses results as confidence or tolerance intervals and does not actually state the probability of contamination needed for risk-based decision making, which makes it challenging to communicate test results in a meaningful way.3 For example, the confidence interval statement “with 90% confidence, the probability of contamination is less than 5%” really means “if the probability of contamination were 5% or greater, there would be only a 10% chance of observing no contamination.” Likewise, the tolerance interval statement “with 90% confidence, 95% of the room has no contamination” is not reassuring. Modern Bayesian statistical approaches (developed starting in the 1950s, building on a tradition going back to Bayes and Laplace in the 1700s and 1800s) facilitate inferences about the probability of remaining contamination, effectively overcoming much of this challenge. Bayesian methods can smoothly incorporate data from laboratory experiments (e.g., on decontaminating different surfaces such as steel and concrete), which is useful when conditions are dynamic. Additionally, when relevant expert knowledge exists, Bayesian statistics can smoothly incorporate it into decision making in place of further sampling. Finally, Bayesian methods can support direct statements about probabilities, e.g., “there is a 95% probability that the room is not contaminated”; about probability ranges, e.g., “we are 95% confident that the probability of the room being clean is between 2% and 4%”; or about probability distributions, e.g., “the probability distribution over the number of spores follows a beta distribution with these parameters” or even “there is a 1% chance that ten or more spores remain” (which could be useful in less hazardous situations where some risk of exposure may be tolerable). An important caveat is that these probability statements are still fundamentally assertions of expert belief, based on subjective inputs regarding the properties of contaminants and cleanup methods under different conditions, tempered by the logical implications of the observed data. They should not be viewed as a way to “launder” opinions into fact, and if experts do not know enough to provide strong judgments, new data will still be needed to gain adequate certainty about treatment success. The other key limitation of the Bayesian approach is that people, including subject matter experts, are known to exhibit a variety of systematic biases when making subjective probability estimates. Scientific research is replete with examples of overconfidence, in which experts were slow to update their beliefs in the face of new information.
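To make the direct probability statements above concrete, and to show exactly where the subjective inputs enter, here is a minimal Bayesian sketch in Python. It assumes a two-hypothesis model (room clean versus contaminated) with a hypothetical expert prior and a hypothetical per-sample detection probability; none of the numbers come from the article.

```python
def posterior_clean(prior_clean: float, p_detect: float, n_negative: int) -> float:
    """Posterior probability that the room is clean after n_negative
    all-negative samples, by Bayes' rule over two hypotheses:

    clean        -> a sample is negative with probability 1
    contaminated -> a sample is negative with probability (1 - p_detect)
    """
    evidence_clean = prior_clean  # P(all negative | clean) = 1
    evidence_contam = (1.0 - prior_clean) * (1.0 - p_detect) ** n_negative
    return evidence_clean / (evidence_clean + evidence_contam)

# Hypothetical inputs: experts judge an 80% prior chance the room is clean
# after decontamination; a swab of a contaminated room reads positive 5%
# of the time. Each negative swab nudges the posterior upward.
for n in (0, 20, 60):
    print(f"{n:2d} negative samples -> P(clean) = {posterior_clean(0.80, 0.05, n):.3f}")
# -> 0.800, 0.918, 0.989
```

The expert judgment enters through the prior and the detection model; the data then temper both, which is precisely the “subjective inputs tempered by observed data” caveat raised above.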
But there is also a rich literature on how to minimize these biases and validate results against small (and thus relatively inexpensive) sets of objective data. Correctly applied, Bayesian methods fill an important gap when empirical data are limited.

Figure 1 proposes a basic framework in which Bayesian methods combine expert judgments about the threat with laboratory data on decontamination efficacy to produce an estimate of the remaining threat. Because it can integrate onsite judgment from first responders and experts in the field, such a model could help guide remediation and testing. The likelihood of remaining contamination is one input to a larger decision context, for which experts from pertinent fields (e.g., counter-terrorism, law enforcement, epidemiology, and policy) can provide other relevant information. Specifically, onsite observations (room characteristics, contaminated material composition, and wind/draft potential) are represented by the dark blue nodes. Onsite test results (characterization, remediation test strips) are represented by the light blue nodes. The red nodes are informed by historical laboratory data and by offsite expert judgments about the probabilistic relations between these nodes and their predecessors; Bayesian methods are then used to calculate a probability distribution over each red node’s values, given those of its predecessors. The probability distribution over the number of surviving spores is derived directly from (1) the initial number of spores (more generally, colony-forming units), (2) the percentage that would be killed by remediation under laboratory conditions, and (3) the efficacy of onsite remediation relative to laboratory conditions. Items 1–3 are each informed by offsite or onsite data and by judgment about their predecessors, as described. The distribution over the number (possibly zero) of remaining spores is an input to the overall threat calculation.
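That surviving-spore calculation can be sketched as a small Monte Carlo simulation. Everything below is a hypothetical stand-in for the framework's actual nodes: the lognormal prior over the initial spore count, the Beta distributions for the laboratory kill fraction and for onsite efficacy, and the assumption that onsite efficacy scales the laboratory log-reduction are illustrative modeling choices, not values or structure from the article.

```python
import numpy as np

rng = np.random.default_rng(42)
N = 200_000  # Monte Carlo draws

# (1) Initial number of spores (colony-forming units): expert judgment
#     encoded as a hypothetical lognormal prior.
n0 = rng.lognormal(mean=np.log(500.0), sigma=1.0, size=N).astype(np.int64)

# (2) Kill fraction under laboratory conditions: Beta posterior from
#     hypothetical bench trials on the relevant surface material.
lab_kill = rng.beta(1000, 2, size=N)

# (3) Onsite efficacy relative to the lab (1.0 = lab performance),
#     a judgmental Beta reflecting surface, draft, and access conditions.
efficacy = rng.beta(18, 2, size=N)

# Assumed link: onsite remediation achieves the lab log-reduction scaled
# by efficacy, so each spore survives with probability (1 - kill)**efficacy.
p_survive = (1.0 - lab_kill) ** efficacy
survivors = rng.binomial(n0, p_survive)

# Direct probability statements of the kind discussed above:
print("P(no spores remain) =", (survivors == 0).mean())
print("P(10+ spores remain) =", (survivors >= 10).mean())
```

In a fuller version, the onsite observation and test-strip nodes of Figure 1 would inform these Beta parameters, rather than the parameters being fixed by hand.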
With both strengths and limitations, the Bayesian approach is well suited to guide remedial decisions in problems like anthrax remediation. Consistent procedures and inputs from both objective and subjective sources are important factors in making traceable and justifiable decisions based on estimates of probability given the available data. The better and more empirical the data, the less need there is to rely on judgment (indeed, Bayesian and classical statistical approaches converge to the same conclusions as sufficiently rich data become available). Through the continued development and integration of Bayesian and classical statistical methods, a consistent, reliable, and scalable assessment and remediation approach could more rapidly assess and respond to future biological threats.

Figure 1. Proposed cleanup decision framework.

AUTHOR INFORMATION

Corresponding Author
*E-mail: [email protected]. US Army Corps of Engineers, 696 Virginia Rd., Concord, MA 01742.

ACKNOWLEDGMENT

Permission was granted by the Chief of Engineers to publish this information.

REFERENCES

(1) Agencies Need to Validate Sampling Activities in Order to Increase Confidence in Negative Results; GAO-05-251; Government Accountability Office: Washington, DC, 2005.
(2) Price, P. N.; Sohn, M. D.; Lacommare, K. S. H.; McWilliams, J. A. Framework for evaluating anthrax risk in buildings. Environ. Sci. Technol. 2009, 43, 1783–1787.
(3) Benchmark Dose Analysis for Bacillus anthracis Inhalation Exposures in the Nonhuman Primate and Application to Risk-Based Decision Making; EPA 600/R-10/138; Environmental Protection Agency: Washington, DC, 2010.