Policy Analysis pubs.acs.org/est

Bayesian Networks Improve Causal Environmental Assessments for Evidence-Based Policy

John F. Carriger,*,† Mace G. Barron,‡ and Michael C. Newman§

† Oak Ridge Institute for Science and Education, U.S. Environmental Protection Agency, Office of Research and Development, National Health and Environmental Effects Research Laboratory, Gulf Ecology Division, 1 Sabine Island Drive, Gulf Breeze, Florida 32561, United States
‡ U.S. Environmental Protection Agency, Office of Research and Development, National Health and Environmental Effects Research Laboratory, Gulf Ecology Division, 1 Sabine Island Drive, Gulf Breeze, Florida 32561, United States
§ College of William & Mary, Virginia Institute of Marine Science, P.O. Box 1346, Route 1208 Greate Road, Gloucester Point, Virginia 23062, United States

ABSTRACT: Rule-based weight of evidence approaches to ecological risk assessment may not account for uncertainties and generally lack probabilistic integration of lines of evidence. Bayesian networks allow causal inferences to be made from evidence by including causal knowledge about the problem, using this knowledge with probabilistic calculus to combine multiple lines of evidence, and minimizing biases in predicting or diagnosing causal relationships. Sources of uncertainty that can be accounted for with Bayesian networks are too often ignored in conventional weight of evidence approaches. Specifying and propagating uncertainties improve the ability of models to incorporate the strength of the evidence in the risk management phase of an assessment. Probabilistic inference from a Bayesian network allows evaluation of changes in uncertainty for variables given the evidence. The network structure and probabilistic framework of a Bayesian approach provide advantages over qualitative approaches in weight of evidence for capturing the impacts of multiple sources of quantifiable uncertainty on predictions of ecological risk.
Bayesian networks can facilitate the development of evidence-based policy under conditions of uncertainty by incorporating analytical inaccuracies or the implications of imperfect information, structuring and communicating causal issues through qualitative directed graph formulations, and quantitatively comparing the causal power of multiple stressors on valued ecological resources. These aspects are demonstrated through hypothetical problem scenarios that explore some major benefits of using Bayesian networks for reasoning and making inferences in evidence-based policy.



INTRODUCTION

The term weight of evidence strongly connotes the significance of evidence applied to judgments and decisions. Its meaning is underscored by the general notion that any major inference or decision with important personal or societal impacts should rely on a requisite assessment of the best information available. The role of weight of evidence and evidence synthesis can be seen in diverse fields such as criminal and civil law, healthcare, policy, business, economics, and science. It can be reasonably assumed that weight of evidence will have a universal role in the future of civil societies. The field of evidence synthesis has advanced the understanding of how lines of evidence can be collected to make informed decisions, so much so that its methods routinely shape the development of weight of evidence in contemporary policymaking.1 Since 2003, one nongovernmental organization, the Collaboration for Environmental Evidence, has expanded the implementation and dissemination of environmental evidence synthesis for a variety of important policy issues, such as those linking biodiversity and poverty.2−4 Evidence weighing is the motivation behind collection and analysis of information
© XXXX American Chemical Society

in ecological risk assessment. Numerous lines of evidence and causally related hypotheses are the norm in environmental protection and restoration. Environmental policy decisions use evidence with varying degrees of formality to establish where resources should be invested before a decision is made, whether results are being achieved during implementation, and whether successes or failures followed decision implementation. As an example of the first situation, the World Bank developed tools to help predict impacts of funding decisions on coastal receptors and social-biophysical endpoints.5,6 As an example of the second situation, the U.S. Fish and Wildlife Service utilized an adaptive management framework for establishing waterfowl harvesting targets that iteratively incorporated population monitoring data to reduce uncertainties in predictive capabilities.7 For the third situation,

Received: June 29, 2016
Revised: September 27, 2016
Accepted: November 14, 2016


DOI: 10.1021/acs.est.6b03220 Environ. Sci. Technol. XXXX, XXX, XXX−XXX


Environmental Science & Technology

means of using evidence to evaluate the strength of a hypothesis through a probabilistic framework. Bayesian reasoning is coherent with ecological risk assessment, where vested parties establish risk-based hypotheses and use past and future evidence to update beliefs about those hypotheses. It also provides a useful alternative to commonly practiced simple scoring procedures in ecological risk assessment that are difficult to integrate and have been implicated in major failures of risk management in other fields.12 Bayesian networks are probabilistic graphical models representing a joint probability distribution over a set of variables. Bayes' theorem can become cumbersome to use with numerous hypotheses supported by different lines of evidence, but BNs make such problems easier to visualize and solve.13 Their structure consists of a series of nodes (variables) connected by arcs in a directed fashion. For all nodes in a BN that are directly connected with arcs, conditional probabilities are used to represent the probability of each possible event in a child node given each possible event in the parent nodes. A simple BN is illustrated with two nodes and the calculations used for the conditional probability table (Figure 1). These conditional probabilities express uncertainty

evidence-based procedures were used by the Australian Landscape Logic project to examine the success of past environmental management investments on social and ecological endpoints and to use this information for future interventions.8 One common approach used in these examples is Bayesian inference. The World Bank describes a Bayesian network (BN) approach to lending decisions that can be implemented across a variety of possible coastal impacts.6 Additionally, the U.S. Fish and Wildlife Service utilized Bayes' theorem to compare multiple models to observed population trends and establish model weightings based on predictive accuracy for subsequent harvesting decisions.7 Moreover, in Landscape Logic, BNs were one of the modeling approaches used for synthesizing evidence from multidisciplinary research endeavors and for examining the effectiveness of different interventions on the environment.8 Other useful examples of Bayesian applications to complex environmental assessment problems include but are not limited to Amstrup et al.,9 Bayliss et al.,10 and Pollino et al.11 This article highlights opportunities for using Bayesian tools to combine evidence during ecological risk assessment. This work demonstrates how probabilistic and causal reasoning about complicated problems is accommodated in BNs and discusses how a causal BN approach would benefit future evidence synthesis in environmental management and risk assessment by improving inferences under high uncertainty. First, the mathematical properties of Bayes' theorem for updating beliefs about a hypothesis based on evidence are introduced. The discussion is then broadened to environmental measurements and how Bayes' theorem can be utilized to incorporate the explanatory potential of an experiment and its implications for a risk-based hypothesis. A methodology is introduced that examines accuracy of multiple observational methods and confidence in a hypothesis derived from observational inference.
Lastly, inclusion of interventional inference is discussed for testing causal assumptions and the causal power of interacting stressors in a complex system. The conclusion describes the implications of the inferential properties of BNs and several remaining issues.

Introduction to Bayesian Networks. Bayesian statistics provide a framework for updating what is known from available information. This process can be translated to weight of evidence approaches in a straightforward manner. The diachronic Bayes' theorem can be expressed as

P(H|E) = P(H) · P(E|H) / P(E)

Figure 1. Probability (expressed as %) of a pesticide being present in a water sample given a positive monitoring result. (a) Bayesian network with conditional probability table (above Test Result node) built using background evidence provided in Rizak and Hrudey;20 (b) probability the pesticide is present given a positive result. Gray nodes have hard evidence placed into a state of the variable indicating that it has been observed (100% chance of being true). Tan nodes do not have hard evidence placed in any variable state. After inputting evidence to a network, posterior probabilities are calculated in an omnidirectional fashion so inferences can be made in or against the direction of the arcs depending on the structure of the network and any other hard or soft evidence in the network.
The posterior probability (P(H|E)) calculated by Bayes' theorem represents what is known about how likely a hypothesis (H) is to be true given the observed evidence (E). The prior probability, or P(H), is what is known about how likely a hypothesis is to be true before the evidence is observed. The additional terms weight the prior belief in the hypothesis by the degree to which it is maintained by the evidence, with P(E|H) being the likelihood function for the evidence given the hypothesis and P(E) being the marginal probability of the evidence, which is constant for all hypotheses. Dividing P(E|H) by P(E) rescales the prior so that the posterior probability, P(H|E), remains a valid probability of at most 1. Both frequentist and Bayesian approaches to statistics make use of the likelihood function, but the likelihood alone can only measure the strength of the evidence given that a particular hypothesis is true. Bayes' theorem provides a valuable

in the relationships and are themselves conditioned on background knowledge, which would change probability assignments as the knowledge base increases.14 A fully developed model with all probabilities will initially provide prior probabilities for each variable based on the background information for the problem. Unlike other analytical approaches, the BN variables can be updated to posterior probabilities with findings on any node in an omnidirectional fashion.15 As a consequence, three types of inferences that lead to posterior probabilities can be made with BNs. The first is predictive inference (forward direction, from parent to child node). The second is diagnostic inference (backward direction, from child to parent node), and the final is mixed inference (forward and backward). As will be




extension of a survey of 352 Australian professionals.20 In the original survey, 42% of those surveyed were involved in water quality monitoring and most had more than a decade of experience. Thirty percent worked at water utilities. Each was presented with the following pesticide detection scenario and asked to estimate their level of confidence that the pesticide was truly present in the tap water sample.

[Hypothesis scenario] Monitoring evidence for a(n) [Australian] city has indicated that in treated drinking water, a pesticide, say "atrazine", is truly present above the recognized standard methods' detection limit once in every 1000 water samples from consumers' taps.
• 95% of tests will be positive for detection when the contaminant is truly present above the detection limit, and
• 98% of tests will be negative for detection when the contaminant is truly not present above the detection limit.
Q. With these characteristics, given a positive result (detection) on the analytical test for the pesticide in the drinking water system, how likely do you think the positive result is true?
___ Almost certain (95 to 100%)
___ Very likely (80 to 95%)
___ More likely than not (50 to 80%)
___ Less likely than not (20 to 50%)
___ Very unlikely (5 to 20%)
___ Extremely unlikely (0 to 5%)
___ Do not know

discussed, it is also possible to frame a BN so as to separate observational from interventional inferences. The former is particularly useful for examining what can be inferred from monitoring data and environmental observations, and the latter is useful for examining the causal strengths of relationships between variables. Several features of BNs make them valuable for complex weight of evidence problems. One is their graphical building block or connective properties for problems with complex causal chains, such as, social-ecological knowledge representations.16 Their graphical feature is a powerful representation, reasoning, and communication tool. The assumptions of a BN such as those related to conditional independence (see Pearl17 for details on assumptions) facilitate tractability in complex model construction and development. The second is the robust capability for characterizing the problem in a probabilistic context. Each linkage between nodes is established through a conditional probability table. The implications of the table are that contrasting claims can be considered about the relationships between different variables, and these claims are reflected in the probabilities for all possible scenarios. Thus, counterfactual possibilities are naturally integrated into the BN process for each element or object within a model so that analysts can piece together the uncertainties for a range of possibilities in a given context. A related advantage of the probability tables is the ease in which data and procedures from domain experts can be used for quantifying the relationships. The information used to build the conditional probabilities includes many sources of information already used in ecological risk assessment problems including environmental measurements, experimental evidence, model simulations, analogous literature studies, and/or expert opinion. The third involves the inferential possibilities with a compiled network. 
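The predictive, diagnostic, and mixed inference types described above can be sketched with a toy three-node chain, computed by brute-force enumeration of the joint distribution. All probabilities below are hypothetical and chosen only for illustration; real BN software propagates evidence the same way but far more efficiently.

```python
# Sketch: predictive, diagnostic, and mixed inference on a tiny
# three-node chain A -> B -> C, computed by enumerating the joint
# distribution. All probabilities are hypothetical.
from itertools import product

p_a = {True: 0.2, False: 0.8}          # prior on root node A
p_b_given_a = {True: 0.7, False: 0.1}  # P(B=true | A)
p_c_given_b = {True: 0.9, False: 0.2}  # P(C=true | B)

def joint(a, b, c):
    """Joint probability from the chain factorization P(A)P(B|A)P(C|B)."""
    pb = p_b_given_a[a] if b else 1 - p_b_given_a[a]
    pc = p_c_given_b[b] if c else 1 - p_c_given_b[b]
    return p_a[a] * pb * pc

def query(target, evidence):
    """P(target=true | evidence) by summing over the joint distribution."""
    num = den = 0.0
    for a, b, c in product([True, False], repeat=3):
        state = {"A": a, "B": b, "C": c}
        if any(state[k] != v for k, v in evidence.items()):
            continue
        p = joint(a, b, c)
        den += p
        if state[target]:
            num += p
    return num / den

print(query("C", {"A": True}))               # predictive: parent to child -> 0.69
print(query("A", {"C": True}))               # diagnostic: child to parent, ~0.39
print(query("B", {"A": True, "C": False}))   # mixed: evidence on both sides, ~0.23
```

The same conditional probability tables serve all three queries; only the placement of evidence changes, which is the omnidirectional property described in the text.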
Posterior probabilities are propagated throughout the model and can be utilized for posing observational and interventional inferences among the causal connections. Probabilistic inference with BNs is useful for examining multiple scenarios of interest and the sensitivity of the relationships among sets of causally related variables. The first step in deriving BNs is to ensure that the most useful structure is being developed to model and make inferences about the situation. In practical applications, BNs can be quite complex. Neil et al.18 identified model fragments or idioms that fit many problems and assist in constructing coherent models. Later work by Fenton et al.19 specified idioms for legal reasoning. These idioms were developed to understand how crime scene evidence impacts the plausibility of a perpetrator committing a crime when combined with additional causal factors that could influence a hypothesis of guilt or innocence, such as motive and opportunity. Two of the idioms from Neil et al.18 will be highlighted here due to their potential in weight of evidence risk assessments. The first is the measurement idiom which represents the measurement inaccuracies in observations. The second is the cause-consequence idiom which represents a causal connection between two variables. The latter is the most common idiom for knowledge representation with BNs.18 Building multilayered networks tracing root causes to ultimate effects fulfills a role for model application in many problems. These two idioms alone can broadly represent many key factors for model-building in evidence synthesis. Problem Scenario: Determining Whether a Detection Supports Presence of a Stressor. The value of the measurement idiom in risk assessment can be illustrated by

Most respondents chose almost certain or very likely (80 to 100%) when, as will be shown shortly, the correct response was extremely unlikely (0 to 5%). In addition, from an open-ended follow-up question, approximately 15% of the respondents from both the water professional and academic groups indicated they would take what Rizak and Hrudey20 called an "alarmist response" to the monitoring result, including taking the water treatment plant offline, initiating public warnings and advisories, or shifting to another source of water. Between 2009 and 2014, one of us (MCN) surveyed environmental students and professionals in southern China (2009, 50 students in the advanced course, Eutrophication and Environmental Risk Assessment at Xiamen University), central China (2010, 26 graduate students in a practical environmental statistics course at Huazhong Normal University), Spain (2010, 174 attendees of SETAC Seville International Meeting), southern India (2011, 57 attendees of India Erudite Program lecture at Cochin University of Science and Technology), and the USA (2014, 37 attendees of invited lecture at the University of Georgia's Savannah River Ecology Laboratory). Meta-analysis (random model) of the results indicated that 10% of the respondents chose the correct answer, and most (90%) chose almost certain or very likely. Water quality experts and environmental science students and professionals surveyed from around the world consistently misinterpreted simple measurement data. Using percentages as measures of belief, their answers were off by roughly 90%/2.5% or 36-fold.
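The correct answer follows directly from Bayes' theorem using the three numbers stated in the survey scenario (1-in-1000 prior, 95% true-positive rate, 98% true-negative rate); a short calculation shows why "extremely unlikely (0 to 5%)" is right:

```python
# Bayes' theorem applied to the survey scenario: prior presence 1 in 1000,
# 95% true-positive rate, 98% true-negative rate (2% false positives).
prior = 0.001          # P(pesticide truly present)
sensitivity = 0.95     # P(positive test | present)
false_positive = 0.02  # P(positive test | not present) = 1 - specificity

# Marginal probability of any test coming back positive
p_positive = prior * sensitivity + (1 - prior) * false_positive

# Posterior probability the pesticide is present given one positive result
posterior = prior * sensitivity / p_positive

print(f"P(any test positive) = {p_positive:.2%}")   # 2.09%
print(f"P(present | positive) = {posterior:.2%}")   # 4.54%
```

Because true contamination events are rare, the 2% false-positive rate dominates the marginal evidence, leaving the posterior below 5% despite a highly sensitive test.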
Several extremely common heuristic mistakes likely contributed: inverse probability error,21 base-rate fallacy,22,23 representativeness (if their probabilities were representative of the power of the test),24 and, perhaps, the Ellsberg Paradox.25 It would be unsound to apply some analytical method to measure the concentration of a pollutant if that method was biased by 36-fold and a better method was available. A sampling




is present or absent has not changed as expected, but an additional node is added to test the implications from a second test of the water supply (Figure 2a). Entering the evidence of two positive test results provides the answer given in Newman26 (Figure 2b). The analyst is now 70% confident that the pesticide is in the water supply. (An urn standard may be used to assist in interpreting these probabilities, as described in Aven and Reniers.27) A third sample would increase her confidence to greater than 99% (Figure 2c). In practice, these and similar BNs are useful not only for examining the power of the evidence in disconfirming a hypothesis but also for examining and developing a measurement process ahead of time that will provide reliable information for its intended purposes. This notion is expanded in the next section with more complicated problem examples.

Problem Scenario: Inferences on Bird Deaths at a Wind Farm. A crucial question in weighing evidence is how to tell whether observations match reality. Retrodictive assessments attempt to understand environmental injury from human or natural activities through evidence of damage. A BN for representing the number of birds killed at a wind farm can be depicted for retrodictive assessments with evidence (Figure 3).

of educated environmental scientists provided consistently inaccurate judgments of this magnitude in the above survey due to pervasive cognitive errors. It would seem reasonable to use a better available tool for such judgments, that is, BNs. The measurement idiom is illustrated in this example (Figures 1a and 1b). A prior BN is used as a starting point before evidence is entered into the network regarding a positive test result for a pesticide (Figure 1a). From the prior information available, there is a 1 in 1000 chance (0.10% probability) of the pesticide being truly present and a (0.001·0.95 + 0.999·0.02)·100 or 2.09% chance that any measurement of the water for a pesticide will give a positive result. There is also a (0.999·0.98 + 0.001·0.05)·100 or 97.9% chance of a negative test. From this prior BN, a scenario where a positive result is found, and what may be inferred about the pesticide being present given this positive test result, can be examined (Figure 1b). The probability that the pesticide is present given a positive detection (4.54%) was demonstrated in Chart 1 of Rizak and Hrudey,20 where it was referred to as the test positive predictive value. Newman26 expanded the example in Rizak and Hrudey20 to one where additional samples are subjected to a more precise analysis and combined with the output from the initial test. The initial BN that tested a single sample is updated to include this new evidence (Figure 2). The prior probability that the pesticide

Figure 3. A measurement accuracy model fragment for estimating bird mortality based on recovered carcasses and a specific sampling strategy. The breakout table from the Found carcasses node is the underlying conditional probability table. The numbers found at the bottom of the nodes below the probability distributions refer to the mean ± the standard deviation (e.g., 7.5 ± 1.4) for continuous variables.
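A heavily simplified version of the calibration logic behind a measurement model like Figure 3 can be sketched as a multiplicative detection probability with a naive count correction. The trial numbers below are invented for illustration, and real estimators (such as those reviewed by Bispo et al.28) model carcass persistence time and search interval explicitly rather than as a single fraction.

```python
# Hedged sketch: combine two calibration trials into a detection probability
# and correct a raw carcass count for imperfect detection (a simplified,
# Horvitz-Thompson-style correction). All trial numbers are hypothetical.
def detection_probability(n_found, n_planted, n_persisting, n_placed):
    searcher_efficiency = n_found / n_planted   # search-efficiency trial
    persistence = n_persisting / n_placed       # carcass-removal trial
    return searcher_efficiency * persistence

def estimated_mortality(carcasses_counted, p_detect):
    """Naive estimate of true deaths given the raw count and detection rate."""
    return carcasses_counted / p_detect

# Hypothetical trials: 18 of 30 planted carcasses found; 20 of 25 persisted
p = detection_probability(n_found=18, n_planted=30, n_persisting=20, n_placed=25)
print(p)                          # 0.6 * 0.8 = 0.48
print(estimated_mortality(6, p))  # 6 / 0.48 = 12.5 estimated deaths
```

The point of the sketch is the direction of the correction: with a detection probability well below 1, the estimated number of deaths is substantially larger than the number of carcasses actually recovered.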

In this example, a wind farm is developing a measurement system for identifying the number of birds killed during operations. The method relies upon collecting and counting the number of carcasses found at ground level. To test the accuracy of this procedure, calibration exercises are initiated similar to ones described in Bispo et al.28 The assumptions are that sampling is not continuous and that carcasses can go undetected because of removal by scavengers, decomposition, or searcher inefficiency. For carcass removal, tests are used to examine how many carcasses remain in an area and at what time period they were removed. For search efficiencies, bird carcasses are placed in different areas around the wind farm and a trial monitoring estimates how many of these dead birds are found. This information is then combined with other influential factors related to the search characteristics and used to estimate the number of birds that would be detected given the number that were killed. Estimating bird deaths is an active area of research

Figure 2. Network for calculating whether a pesticide is present in a water sample given multiple positive monitoring results: (a) Bayesian network built using background evidence provided in Rizak and Hrudey20 and Newman;26 (b) probability the pesticide is present given two positive results; (c) probability the pesticide is present given three positive results.
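The behavior in Figure 2 can be approximated by applying Bayes' theorem sequentially, with each posterior becoming the prior for the next test. For illustration this sketch assumes every repeat test shares the original 95%/98% characteristics; the published example uses a more precise follow-up method, so these numbers are indicative rather than exact.

```python
# Sequential Bayesian updating for repeated positive pesticide tests,
# assuming (for illustration only) each repeat test has the same
# 95% sensitivity and 2% false-positive rate as the first.
def update(prior, sensitivity=0.95, false_positive=0.02):
    """Posterior that the pesticide is present after one positive result."""
    evidence = prior * sensitivity + (1 - prior) * false_positive
    return prior * sensitivity / evidence

p = 0.001  # 1-in-1000 prior that the pesticide is truly present
for i in range(1, 4):
    p = update(p)
    print(f"after {i} positive result(s): {p:.1%}")
# roughly 4.5% after one, ~69% after two, ~99% after three positives
```

Each additional concordant result sharply raises confidence, mirroring the roughly 70% and greater-than-99% posteriors reported for two and three positive tests.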




chance of finding fewer birds (0 to 5) and a 51% chance of finding 5 to 10 carcasses. There is a 1% chance each of finding 10 to 20 and greater than or equal to 20 dead birds. From the quantitative information in the conditional probability table and the network structure, observational inference is used as done in the previous example to update probabilities of the range of birds that have been killed (Figure 3). This time, the prior possibilities for bird mortality (0 to ≥20 bird deaths) were all weighted equally, indicating complete ignorance outside of what is found during the measurement process. From an observation of 5 to 10 carcasses and an intermittent sampling strategy, the likelihood that more birds were killed than observed is high. Though clear evidence is provided of bird deaths, the observed number of birds does not lead us to conclude that a similar number were actually killed. The idioms discussed thus far have a risk hypothesis as a root node, that is, a node without parents. This allows users to establish an unconditional prior belief on the risk hypothesis.19 More commonly, prior conditions such as exposure events will influence an assessment endpoint in a risk problem and thus the prior probability of a negative outcome. The causal influences of several supporting hypotheses for estimating the cumulative number of bird deaths may be usefully examined with a BN (Figure 4). In addition, several other measurement processes can assist in determining how many birds were killed

that will continue to develop in the future, but multiple measurement functions have been deployed with various assumptions.29,30 The networks and probabilities constructed in this and the next scenarios are for illustrative purposes and demonstrate the inferential properties of BNs. The study results discussed above are used to create a BN (Figure 3) and populate the probabilities, including the probability of detection given that sampling is more or less intensive, which could influence the detectability of the carcasses. The latter node, called Sampling strategy, is necessary to specify the uncertainty in the evidence from the measurement process and how this influences the determination of bird mortalities.31 The conditional probability table for the Found carcasses node contains the output from the functions built on the sampling information collected from the calibration exercises. Reading across the table gives each probability of obtaining a certain number of carcasses given the actual number of birds that died and the sampling scheme used for monitoring the birds. For example, a 97% chance is given of finding 0 to 5 birds with an intensive sampling strategy when 0 to 5 birds are actually killed. There is also a slight chance of false positives where birds may die within the area due to other causes. This can also be seen in the next row, which provides the probabilities of finding different numbers of carcasses if 5 to 10 birds die and the sampling strategy is intensive. For this scenario, there is a 47%

Figure 4. Bayesian network for measuring the cumulative number of bird mortalities at a hypothetical wind farm. Gray nodes are observable nodes with evidence placed into them. Tan nodes are unobservable problem variables.



at the wind farm. The first provides video evidence of any bird collisions with wind rotors. The second provides radar observations for determining the number of birds at risk from rotor operation. Measurement accuracies are established for each of these processes and corresponding hypotheses pertaining to birds in the vicinity of the wind farm and birds colliding with the rotors. Other variables that are useful for estimating mortalities are included, such as the season of operation and the presence of wind gusts. An important aspect of this example is how the problem is structured using both a cause-consequence idiom for the unobservable and background variables of interest and the measurement idiom for the uncertainties in the measurement process for each hypothesis. Influential factors for determining the number of birds killed are naturally included, leading to improved reasoning with the measurement data. Without including these causal factors, a user could reason incorrectly by assuming that no bird deaths occurred and therefore risk was low, whereas the explanation might have been that fewer birds were in the vicinity due to some seasonal influence. Moreover, the BN can be used to better consider where a measurement sits in the inferential process and which hypothesis it should be attributed to. The cause-consequence and measurement idioms can be used to piece together the hypotheses and evidence sources in a manner that is useful for aiding judgments in many situations.19 Using BNs for developing risk-based monitoring programs provides considerable opportunities for improved monitoring practices and inferences from observations.
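The observational inference used in the found-carcass example, a uniform prior over mortality bins updated by one likelihood column, can be sketched directly. The likelihood values below are invented for this sketch (only the intensive-sampling rows are given numerically in the text), but they illustrate why a count of 5 to 10 carcasses under intermittent sampling leaves most posterior mass on higher mortality bins.

```python
# Hypothetical sketch of the bird-mortality inference: a uniform prior over
# mortality bins updated with P(found 5-10 carcasses | killed = bin,
# intermittent sampling). Likelihood values are invented for illustration.
bins = ["0-5", "5-10", "10-20", ">=20"]
prior = {b: 0.25 for b in bins}  # complete ignorance about true mortality

# With intermittent sampling many carcasses are missed, so finding 5-10
# remains plausible even when far more birds were killed (hypothetical).
likelihood_found_5_10 = {"0-5": 0.05, "5-10": 0.35, "10-20": 0.45, ">=20": 0.30}

unnorm = {b: prior[b] * likelihood_found_5_10[b] for b in bins}
z = sum(unnorm.values())
posterior = {b: unnorm[b] / z for b in bins}

for b in bins:
    print(f"P(killed {b} | found 5-10) = {posterior[b]:.2f}")
# Most posterior mass sits on bins above the observed carcass count
print(sum(posterior[b] for b in ["10-20", ">=20"]))
```

Under these assumed likelihoods, well over half of the posterior probability falls on mortality bins above the observed count, matching the qualitative conclusion in the text that the carcass count understates the deaths.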

Yet decisions often cannot wait for the best available science to be completed for defining causal relationships.34 This does not mean that causality should be ignored or implicitly assumed in complex policy problems. Quite the opposite: it should often be made explicit with the best available information, key uncertainties, and structural causal assumptions. Probabilistic reasoning is well-established for combining uncertainties and is the foundation of statistics and risk assessment. Probabilistic modeling alone is insufficient for causal analysis unless augmented by system knowledge.35 For example, probabilistic modeling will not differentiate between whether an ecosystem outcome was caused by a stressor or a stressor was caused by an ecosystem outcome beyond the conditional strength of the relationship.35 Both of these structures can properly encode the joint probability of the two variables and allow for appropriate inferences. Accounting for a causal relationship requires evidence for both a causal mechanism for an effect, on a probabilistic level, and the strength of an association.36 The representation of causal interactions, including possible outcomes, known confounding factors, and relationships between unobserved problem variables and measurements, is a key building block for useful causal inferences. Causal BNs can formalize causal possibilities as a graphical model that makes these assumptions apparent and open to invalidation.35 Graphical models not only can communicate the potential issues in causality but also can examine the evidence for causality through improved measurements to test the potential relationships.
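The point that probabilistic fit alone cannot distinguish cause from effect can be demonstrated numerically: the two opposite causal structures, A→B and B→A, can encode exactly the same joint distribution. All probabilities below are hypothetical.

```python
# Sketch: two opposite causal structures encoding the identical joint
# distribution over two binary variables. Numbers are hypothetical.
from itertools import product

# Structure 1: A -> B, factored as P(A) * P(B | A)
p_a = 0.3
p_b_given_a = {True: 0.8, False: 0.25}

def joint1(a, b):
    pa = p_a if a else 1 - p_a
    pb = p_b_given_a[a] if b else 1 - p_b_given_a[a]
    return pa * pb

# Structure 2: B -> A, factored as P(B) * P(A | B), derived via Bayes' rule
p_b = sum(joint1(a, True) for a in (True, False))

def p_a_given_b(b):
    marg = p_b if b else 1 - p_b
    return joint1(True, b) / marg

def joint2(a, b):
    pb = p_b if b else 1 - p_b
    pa = p_a_given_b(b) if a else 1 - p_a_given_b(b)
    return pb * pa

# Both factorizations reproduce exactly the same joint probabilities,
# so observational data alone cannot choose between the two arrows.
for a, b in product((True, False), repeat=2):
    assert abs(joint1(a, b) - joint2(a, b)) < 1e-12
print("A->B and B->A encode the same joint distribution")
```

Choosing between such Markov-equivalent structures requires the system knowledge or interventional evidence discussed in the text, not a better fit to the same observations.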
But proof of causality is often elusive, making plausibility a more useful criterion.37 Furthermore, causal operations are most usefully measured probabilistically (e.g., malaria infections do not always lead to organ failures) and, if manipulated in experimental or observational contexts, are more likely to provide evidence of changes in the probabilities of effects than of definitive occurrences.

Problem Scenario: Causal Explanations from Correlated Data - Fish Population Reductions and Contaminant Concentrations. Direct and indirect effects are critical issues in environmental management problems.38 In a simple matter of two variables being associated with one another, numerous possibilities arise in an open environmental system that might explain the causal mechanism for the associations. This complexity can be compounded because observational data might not capture the temporal sequences of occurrences at a resolution for determining the plausible causal mechanisms. Formal structures like BNs provide greater clarity in situations with ambiguous causality by revealing potential causal mechanisms. This example (adapted from Neapolitan39) illustrates several instances of the cause-consequence idiom and causal possibilities for two variables associated with one another on a temporal and spatial scale. Assume that a pesticide (A) and a fish population (B) have an inverse linear relationship in an aquatic system where higher pesticide concentrations are associated with low fish abundance and lower concentrations with high fish abundance. A team of analysts has come up with four possible, non-mutually exclusive explanations that could plausibly explain this association, but further analysis is required for resolution. One causal relationship that explains this association is A→B, where A is a cause of B (i.e., the pesticide is reducing fish through mortality). However, one may induce a different causal possibility: that B is causing A, or A←B.
In this belief, the fish might bioconcentrate the pesticide when present. A common issue is a third variable (C) that is causing both B and A (A←C→B). Hydrodynamics (C) could be dictating



QUALITATIVE AND QUANTITATIVE SUPPORT: BAYESIAN NETWORKS IN CAUSAL ASSESSMENTS Comprehensive risk assessment procedures should use the best available tools and evidence for assessing causality. The problems with establishing a formal causal relationship between variables is well-known in science and statistics. Practical and logically sound insight needs to be brought to bear on any causal problem, especially ones dealing with environmental issues where high uncertainty is the norm. Spurious correlations often arise due to measurement deficiencies, lack of expertise for reasoning with the available information, and/or incorrectly weighted prior beliefs. On a structural level, misperceptions can occur when observations for two variables are tightly associated due to indirect causal factors. Causality is a central theme in science just as in evidencebased policy making. Statistical inference is often built on an assumed causal structure32 just as management interventions are built on a causally related system of varying degrees of explication. Evidence-based policy seeks causal explanations of what might have happened in the past or will happen in the future. These explanations are stochastic in that any answer is uncertain.33 Scientific judgments that attribute causality often are made deliberately after repeated evidence shows a strong causal association between two variables under conditions that would provide proof of a causal theory and withstand scrutiny. The information used to make these determinations can come from both observational and interventional studies such as randomized experiments. Environmental management decision making uses causal knowledge to design interventions and weight the impacts of multiple stressors on receptors; causal understanding of a system and the impacts of interventions are a key component of the management process. 
Similar to judicial decisions, evidence-based policy making in environmental management is a time sensitive activity that cannot always F

DOI: 10.1021/acs.est.6b03220 Environ. Sci. Technol. XXXX, XXX, XXX−XXX

Policy Analysis

Environmental Science & Technology the inverse relationship between the presence of the fish (habitat quality) and the presence of the pesticide (transport). A final scenario could be a common effect variable (D) which in different states, such as present or absent, causes A and B to be associated (A→D←B). Crustacean prey abundance (D) might be low from direct pesticide effects, causing fish to not aggregate in the area. All, some, or none of these factors might be true, and monitoring could be improved to better capture the temporal dynamics. A cohesive directed causal model might be developed out of these causal arguments and further investigated, or the implications of several candidate structures could be examined if structural possibilities remain unspecifiable by one cohesive model.40,37 Either way, the directed graph representation makes the causal issues clear from the structural assumptions, can enhance communication and clear thinking about the problem when causes and effects are an issue, and can maintain analytical focus on the implications from the structural issues. Problem Scenario: Forward Causal Reasoning on Risks to a Recreational Fish Population. Even when a useful causal modeling structure has been constructed, hypotheses are often difficult to resolve about the causal influence between problem variables. An example causal BN was developed for assessing the abundance of a recreational fish population (Figure 5). The structure of the network uses the

Figure 6. Posterior probabilities after observing high parasitic lamprey abundance.

Breeding adult trout abundance as well as the parent variable (Larval lamprey abundance) which propagates the influence of this observation to Breeding adult trout abundance. Thus, the inferences work in an omnidirectional fashion so that both the probability of Larval lamprey abundance and Polluted sediment extent has been updated from this observation which also influences the probability of Breeding adult trout abundance outside of the causal connection between Parasitic lamprey abundance and Breeding adult trout abundance. The causal power of parasitic lampreys over the adult trout population is obscured and can be misinterpreted when relying on observational inference in this scenario.42 The question of interest is on the causal influence and not the implications from observation, so acausal, backdoor pathways are problematic in this scenario. Intervening on a variable will now be differentiated from observing a variable, and a formal measure of causal power described in Korb et al.42 and Korb et al.43 will be adapted. Manipulations, or system interventions, occur when a real or simulated action is taken to change part of a system. It is an active event distinct from passive ones like observation inputs. Though the idea may seem similar, the implications are vastly different.44 To remove the influence of the noncausal pathway, an intervention node is added to Parasitic lamprey abundance (Figure 7). Intervening on Parasitic lamprey abundance cuts the parent variables away from the node, thus conducting graphical surgery on the model. Cutting the parents of a variable after a full intervention removes any backdoor, acausal pathway from the parents to children or descendants of the variable in the model. Noncausal pathways no longer influence any subsequent analysis. As described in Korb et al.,43 the first state of the Intervention node enforces the causal implications from the prior frequencies of Parasitic lamprey abundance (i.e., the original distribution). 
The second is a soft intervention to construct a uniform distribution which is akin to constructing random experimental assignments but simulated within the model. The last state labeled None nullifies the impact of the intervention node on the network and allows use of the original, unmutilated model. For the remaining discussion, the interventions will establish a uniform distribution as a consistent basis for contrasting the two stressors of concern. Intervening with a uniform distribution on the Parasitic lamprey abundance is illustrated (Figure 7). As can be seen when comparing

Figure 5. Causal model with prior probabilities for investing relevant risk factors to breeding adult trout abundance in an aquatic system.

cause-consequence format to study the impacts of biological and chemicals stressors on a fish population. For many BN applications, the conditional probabilities can be estimated with existing data using expressions for each of the relationships similar to how a Monte Carlo analysis is developed. Another common approach relies on nonparametric maximum likelihood or Bayesian methods to establish conditional probabilities from a case file of occurrences.41 The questions of interest in this scenario pertain to the relative influence of parasitic lampreys and polluted sediment on adult fish abundance. For simplicity, the variables are placed in three discrete levels of low, medium, or high, and the model is assumed to be spatially and temporally coherent for all relationships. The implications of using observational inference for assessing causal questions will now be demonstrated (Figure 6). An observation of high Parasitic lamprey abundance is input to the model. This observation directly changes the distribution of G

DOI: 10.1021/acs.est.6b03220 Environ. Sci. Technol. XXXX, XXX, XXX−XXX

Policy Analysis

Environmental Science & Technology

Figure 7. Bayesian network for breeding adult trout abundance after a uniform distribution is placed on parasitic lamprey abundance by intervening on the variable. The backdoor, acausal pathway through larval lamprey abundance is now cut by this intervention.

Table 1. Causal Information (CI) Questions and Their Corresponding Calculations43a question type

question example

equation

c and e

What is the causal power of high parasitic lamprey abundance over low breeding adult trout abundance?

CI(c , e) = P*(e|c)log

C and e

What is the causal power of parasitic lamprey abundance over low breeding adult trout abundance?

CI(C , e) =

P*(e|c) P*(e)

∑ P*(c)P*(e|c)log c∈C

c and E

What is the causal power of high parasitic lamprey abundance over breeding adult trout abundance?

CI(c , E) =

∑ P*(e|c)log e∈E

C and E

What is the causal power of parasitic lamprey abundance over breeding adult trout abundance?

CI(C , E) =

∑ c∈C ,e∈E

P*(e|c) P*(e)

P*(e|c) P*(e)

P*(c)P*(e|c)log

P*(e|c) P*(e)

a

Lower case c and e refer to a state of a cause and effect variable, respectively. Upper case C and E refer to the complete distribution of the cause and effect variable. P* refers to the manipulated probability distribution that is constructed from an overwhelming simulated intervention on the causal variable in question. Base 2 is used for logarithm calculations. The distribution of C can be of several types after intervention including an original, uniform, or maximizing distribution.43

breeding adult trout abundance. The causal power is 0.120. This value is almost an order of magnitude higher indicating the greater causal sensitivity for Breeding adult trout abundance to changes in Polluted sediment extent over Parasitic lamprey abundance. Even though the polluted sediment has an indirect influence (i.e., polluted sediment does not directly cause loss of fish but only through mortality of its prey and lamprey abundance), the Polluted sediment extent has a greater causal power to determine adult trout abundance. The qualitative and quantitative reasons for this were already encoded within the model but were made apparent with these causal power calculations. The questions posed in the previous paragraph can be refined to provide more information than the overall strength of the relationships between distributions. This can help to drill down into what is really desired from a causal analysis. For example, the overall causal strength of the relationship between specific sections of the distributions or events is usually important to consider. To answer these questions, additional causal power questions may be asked for examining the implications of different stressors on a receptor of concern (Table 1).43 In risk assessment, the potential influence of a high exposure scenario is often utilized to include worst-case possibilities. For this system

Figure 6 with Figure 7, the intervention changes the distribution of the causal variable without changing the prior distribution on the causal node’s parent. The prior distribution for the parent nodes, Larval lamprey abundance and the ancestor node Polluted sediment extent, is unchanged due to the graph surgery of the intervention. The impacts of the intervention on the causal relationship can now be considered without concern from a backdoor pathway. The causal power on the effect variable is calculated using mutual information statistics between the causal variable (Parasitic lamprey abundance) and the effect variable (Breeding adult trout abundance). Causal power is measured in bits of causal information gained, or uncertainty that is reduced, for an effect from knowledge of the cause.43 Causal power uses an asymmetric adaption of mutual information calculations that take into account the directed nature of causal relationships.43 Using mutual information statistics in Netica’s Sensitivity to Findings feature,45 the causal power between Parasitic lamprey abundance and Breeding adult trout abundance is calculated to be 0.021 in the intervened graph (Figure 7). Similarly a uniform stochastic intervention on Polluted sediment extent is specified, and causal power is calculated between polluted sediment extent and H

DOI: 10.1021/acs.est.6b03220 Environ. Sci. Technol. XXXX, XXX, XXX−XXX

Policy Analysis

Environmental Science & Technology

Several guidance documents on BN development and interpretation for environmental problems can help facilitate implementation and broader understanding of network model applications in environmental assessments.47−49 Uusitalo50 provides an overview of BN usage in environmental applications including advantages and disadvantages. In ecological risk assessments, the BN might help incorporate the qualitative knowledge used in conceptual model development and better align it with quantitative risk estimates. In this way, causal understanding can be more explicitly tested as knowledge is iteratively developed and incorporated. They can also be a central reasoning tool for adaptive management applications48 where the causal knowledge and beliefs regarding management interventions are captured and updated with new information as management learning progresses. In this process, influence diagrams can extend the BN by incorporating information on how decisions interact with the system model and uncertain desired and undesired outcomes.51−54 The utility nodes in influence diagrams contain a scale for quantifying the costs and benefits of decision outcomes and, in some cases, the relative or absolute value of evidence to a decision. Evidence-based policy for environmental management is moving in directions that will strain conventional weight of evidence analysis practices in ecological risk assessment. Although there is no single framework that can match the scope of every problem, arbitrary or overly simplistic rules for combining and weighing evidence can inhibit insights and create difficulties in data interpretation or discerning the validity of multiple diverging viewpoints. Moving beyond the exploratory phases of environmental management requires greater consideration and explication of causality. Evidence-based assessments would benefit from incorporating Bayesian approaches to weighing evidence even under conditions of high uncertainty and incomplete evidence. 
The transparency, tractability, mathematical rigor, and focus on the causal aspects of a problem make BNs an advantageous analytic structure in the evidencebased practitioner’s toolbox.

model, the intent is to examine the causal power of high polluted sediment extent over low adult fish abundance. From the equation (c and e), the causal power is calculated to be 0.273. The positive value means this is a promoting factor for low fish abundance.42 If the same is done with high parasitic lamprey abundance, the causal power is −0.125 indicating an inverse or preventative causal relationship between high lampreys and low adult fish. Examining the additional questions and the probabilistic relationship between these variables in this causal framework provides further insights into how each of these potential stressors relates to an ecological state or variable. This can be viewed as a sensitivity analysis of risk factors that is purely based on the uncertainties in the cause-effect relationships. The credence given to these values depends on the capabilities for capturing causal events of interest in the system that the model represents.42



DISCUSSION Bayesian inference in general and BNs in particular provide powerful approaches for evidence-based policy in environmental management, but their potential capabilities are only now beginning to be explored. Although simple examples of how BNs can help reason with evidence were provided, as the complexity of the evidence base increases and numerous hypotheses and uncertainties need to be considered, BNs can integrate the information for improved understanding of the endpoints and causal factors of interest. In policy making where trade-offs have to be carefully considered, the qualitative and quantitative properties of BNs can enhance causal understanding of the uncertainties regarding the weight of the evidence for hypotheses of interest. This can provide an explicit basis for data gathering and monitoring efforts that improve the evidence base even when data gaps are large. The measurement idiom directly translates the impact of evidence on a hypothesis through Bayes theorem. The accuracy and validity of the evidence can then be propagated throughout the model providing greater understanding of the power of a test or observation. The capabilities of a measurement process are quantified based on the strength of the evidence in resolving a hypothesis. Misuse of false signals or patterns can be avoided along with the potential for a mismatch between the value of a measurement and the perception of its value. The measurement idiom for BNs alone has broad importance for weight of evidence assessments and subsequent application to policy. Causality is a central component of evidence-based policymaking and causal BNs increase applied relevance. 
As noted in Bollen and Pearl,37 statistical science has largely ignored pioneers in graphical statistical models such as path analysis.40 There still is a disconnect between modeling efforts used to establish causal inferences and the reasoning from the output of these models that largely follows the difficulties in statistics with establishing causality from evidence. Bayesian networks provide insights into the causal aspects being investigated in an evidence-based problem including the assumptions and quantitative implications. Besides examining the causal possibilities in a problem structurally, BNs can examine the strength of the associations probabilistically to support causal inferences and reasoning in a policymaking situation. Evidence-based questions are often questions regarding causal relationships.46 For addressing these questions and the attendant uncertainties, a BN framework is beneficial for interpreting the causal implications from the evidence.
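The observational-inference pitfall discussed for the trout network can be sketched in code. The toy model below is a hypothetical stand-in for the Figure 5 network, not the Netica model used in the article: the structure is simplified, every probability is invented, and each variable is reduced to two states (0 = low, 1 = high) for brevity. Exact inference is performed by brute-force enumeration of the joint distribution, which is practical for a network this small.

```python
from itertools import product

# Hypothetical stand-in for the Figure 5 trout network (all numbers invented).
# Structure: Sediment -> Larval lamprey -> Parasitic lamprey -> Trout,
#            Sediment -> Crustacean prey -> Trout.  States: 0 = low, 1 = high.
p_sed   = [0.7, 0.3]                          # P(Sediment)
p_larv  = {0: [0.8, 0.2], 1: [0.3, 0.7]}      # P(Larval | Sediment)
p_para  = {0: [0.9, 0.1], 1: [0.2, 0.8]}      # P(Parasitic | Larval)
p_prey  = {0: [0.2, 0.8], 1: [0.7, 0.3]}      # P(Prey | Sediment)
p_trout = {(0, 0): [0.6, 0.4], (0, 1): [0.1, 0.9],
           (1, 0): [0.9, 0.1], (1, 1): [0.4, 0.6]}  # P(Trout | Parasitic, Prey)

def joint(s, l, a, c, t):
    """Chain-rule factorization of the joint distribution."""
    return p_sed[s] * p_larv[s][l] * p_para[l][a] * p_prey[s][c] * p_trout[(a, c)][t]

def query(var, evidence=None):
    """Posterior of one variable given evidence, by enumerating the joint."""
    evidence = evidence or {}
    dist = [0.0, 0.0]
    for s, l, a, c, t in product((0, 1), repeat=5):
        world = {"sed": s, "larv": l, "para": a, "prey": c, "trout": t}
        if any(world[k] != v for k, v in evidence.items()):
            continue
        dist[world[var]] += joint(s, l, a, c, t)
    z = sum(dist)
    return [d / z for d in dist]

print(query("larv"))               # prior on larval lamprey abundance
print(query("larv", {"para": 1}))  # updated after OBSERVING high parasitic abundance
print(query("trout", {"para": 1})) # trout posterior mixes causal and backdoor information
```

Observing high parasitic abundance raises the posterior probability of high larval abundance above its prior: exactly the omnidirectional, backdoor updating that makes pure observational evidence a poor basis for causal attribution.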

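Graph surgery can be sketched the same way. The snippet below reuses the same invented toy network (again, a hypothetical simplification, not the article's model): replacing the cause's conditional probability table with a uniform distribution that ignores its parent mimics the uniform-intervention state described above, and conditioning on the intervened variable then leaves the parent's prior untouched.

```python
from itertools import product

# Same invented toy network as before (all probabilities hypothetical; 0=low, 1=high).
p_sed   = [0.7, 0.3]
p_larv  = {0: [0.8, 0.2], 1: [0.3, 0.7]}
p_prey  = {0: [0.2, 0.8], 1: [0.7, 0.3]}
p_trout = {(0, 0): [0.6, 0.4], (0, 1): [0.1, 0.9],
           (1, 0): [0.9, 0.1], (1, 1): [0.4, 0.6]}
p_para_obs = {0: [0.9, 0.1], 1: [0.2, 0.8]}   # original CPT, P(Parasitic | Larval)
p_para_do  = {0: [0.5, 0.5], 1: [0.5, 0.5]}   # uniform intervention: parent link cut

def query(var, p_para, evidence=None):
    """Posterior under a chosen CPT for Parasitic, by enumerating the joint."""
    evidence = evidence or {}
    dist = [0.0, 0.0]
    for s, l, a, c, t in product((0, 1), repeat=5):
        world = {"sed": s, "larv": l, "para": a, "prey": c, "trout": t}
        if any(world[k] != v for k, v in evidence.items()):
            continue
        w = p_sed[s] * p_larv[s][l] * p_para[l][a] * p_prey[s][c] * p_trout[(a, c)][t]
        dist[world[var]] += w
    z = sum(dist)
    return [d / z for d in dist]

# Observation: conditioning on high parasitic abundance drags the parent with it.
print(query("larv", p_para_obs, {"para": 1}))
# Intervention: in the mutilated graph the parent's prior is untouched (no backdoor flow).
print(query("larv", p_para_do, {"para": 1}))
# The effect of the intervened cause on trout is now purely causal.
print(query("trout", p_para_do, {"para": 1}))
```

Comparing the first two printed distributions reproduces the Figure 6 versus Figure 7 contrast in miniature: observation updates the parent, intervention does not.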


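The Table 1 quantities can be computed directly once the manipulated distribution P* is available. This sketch again uses the invented toy network with a uniform intervention on the cause, and evaluates CI(C, E) and CI(c, e) with base-2 logarithms as in the footnote of Table 1; the resulting numbers apply only to these made-up probabilities, not to the article's Netica model.

```python
from itertools import product
from math import log2

# Same invented toy network (hypothetical probabilities; 0 = low, 1 = high).
p_sed   = [0.7, 0.3]
p_prey  = {0: [0.2, 0.8], 1: [0.7, 0.3]}
p_trout = {(0, 0): [0.6, 0.4], (0, 1): [0.1, 0.9],
           (1, 0): [0.9, 0.1], (1, 1): [0.4, 0.6]}

def p_star_effect_given_cause(a_val):
    """P*(trout | do(parasitic = a_val)): with the parent link cut, marginalize
    the remaining network with the cause clamped to a_val."""
    dist = [0.0, 0.0]
    for s, c, t in product((0, 1), repeat=3):
        dist[t] += p_sed[s] * p_prey[s][c] * p_trout[(a_val, c)][t]
    return dist

p_c = [0.5, 0.5]  # uniform intervention distribution over the cause's states
p_e_given_c = {a: p_star_effect_given_cause(a) for a in (0, 1)}
p_e = [sum(p_c[a] * p_e_given_c[a][t] for a in (0, 1)) for t in (0, 1)]

# CI(C, E): causal power of the cause's whole distribution over the effect's.
ci_CE = sum(p_c[a] * p_e_given_c[a][t] * log2(p_e_given_c[a][t] / p_e[t])
            for a in (0, 1) for t in (0, 1))

# CI(c, e): causal power of high parasitic abundance over low trout abundance;
# positive marks a promoting factor, negative a preventive one.
ci_ce = p_e_given_c[1][0] * log2(p_e_given_c[1][0] / p_e[0])

print(f"CI(C,E) = {ci_CE:.4f} bits")
print(f"CI(c=high, e=low) = {ci_ce:.4f} bits")
```

CI(C, E) is an expected Kullback-Leibler divergence and so is never negative, while the single-state quantity CI(c, e) carries a sign that distinguishes promoting from preventive relationships, matching the interpretation given for the 0.273 and −0.125 values above.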
AUTHOR INFORMATION

Corresponding Author

*Phone: 1-850-934-9226. E-mail: [email protected].

Notes

The authors declare no competing financial interest.



ACKNOWLEDGMENTS

This article is dedicated to Bonnie Carriger. This research was supported in part by an appointment to the ORISE participant research program through an interagency agreement between the U.S. Environmental Protection Agency and the U.S. Department of Energy. The views expressed in this article are those of the authors and do not necessarily reflect the views or policies of the U.S. Environmental Protection Agency. M.C. Newman was the A. Marshall Acuff Jr. Professor of Marine Science during the tenure of this study.



REFERENCES

(1) Nature Editorial. Look after the pennies: Government decisions about where to spend and where to cut should be based on evidence, not ideology. Nature 2013, 496, 269. (2) CEE. Guidelines for Systematic Reviews in Environmental Management, Version 4.2; Collaboration for Environmental Evidence, Bangor University: Bangor, UK, 2013.

(3) Roe, D.; Sandbrook, C.; Fancourt, M.; Schulte, B.; Munrow, R.; Sibanda, M. A systematic map protocol: Which components or attributes of biodiversity affect which dimensions of poverty? Environ. Evidence 2013, 2, 8. (4) Russell-Smith, J.; Lindemayer, D.; Kubiszewski, I.; Green, P.; Constanza, R.; Campbell, A. Moving beyond evidence-free environmental policy. Front. Ecol. Environ. 2015, 13 (8), 441−448. (5) World Bank. Assessing the Environmental, Forest, and other Natural Resource Aspects of Development Policy Lending: A World Bank Toolkit; The International Bank for Reconstruction and Development/The World Bank: Washington, DC, 2008. (6) Cenacchi, N. Assessing the Environmental Impact of Development Policy Lending on Coastal Areas: A World Bank Toolkit; The International Bank for Reconstruction and Development/The World Bank: Washington, DC, 2010. (7) Johnson, F. A. Learning and adaptation in the management of waterfowl harvests. J. Environ. Manage. 2011, 92, 1385−1394. (8) Lefroy, T.; Curtis, A.; Jakeman, A.; McKee, J. Introduction: Improving the evidence base for natural resource management. In Landscape Logic: Integrating Science for Landscape Management; Lefroy, T., Curtis, A., Jakeman, A., McKee, J., Eds.; CSIRO Publishing: Collingwood, AU, 2012; pp 1−6. (9) Amstrup, S. C.; DeWeaver, E. T.; Douglas, D. C.; Marcot, B. G.; Durner, G. M.; Bitz, C. M.; Bailey, D. A. Greenhouse gas mitigation can reduce sea-ice loss and increase polar bear persistence. Nature 2010, 468, 955−958. (10) Bayliss, P.; van Dam, R. A.; Bartolo, R. E. Quantitative Ecological Risk Assessment of the Magela Creek Floodplain in Kakadu National Park, Australia: Comparing Point Source Risks from the Ranger Uranium Mine to Diffuse Landscape-Scale Risks. Hum. Ecol. Risk Assess. 2012, 18, 115−151. (11) Pollino, C. A.; Woodberry, O.; Nicholson, A.; Korb, K.; Hart, B. T.
Parameterisation and evaluation of a Bayesian network for use in an ecological risk assessment. Environ. Modell. Software 2007, 22 (8), 1140−1152. (12) Hubbard, D. W. The Failure of Risk Management: Why It’s Broken and How to Fix It; John Wiley & Sons, Inc.: Hoboken, NJ, 2009. (13) Fenton, N.; Neil, M. Avoiding probabilistic reasoning fallacies in legal practices using Bayesian networks. Aust. J. Leg. Philos. 2011, 36, 114−151. (14) Flage, R.; Aven, T. Expressing and communicating uncertainty in relation to quantitative risk analysis. Reliab. Risk Anal.: Theory Appl. 2009, 2 (13), 9−18. (15) Conrady, S.; Jouffe, L. Bayesian Networks & Bayesialab: A Practical Introduction for Researchers; Bayesia USA: Franklin, TN, 2015. (16) Lehikoinen, A.; Hänninen, M.; Storgård, J.; Luoma, E.; Mäntyniemi, S.; Kuikka, S. A Bayesian network for assessing the collision induced risk of an oil accident in the Gulf of Finland. Environ. Sci. Technol. 2015, 49, 5301−5309. (17) Pearl, J. Probabilistic Reasoning in Intelligent Systems: Networks of Plausible Inference; Morgan Kaufmann Publishers, Inc.: San Francisco, CA, 1988. (18) Neil, M.; Fenton, N.; Nielson, L. Building large-scale Bayesian networks. Knowl. Eng. Rev. 2000, 15 (3), 257−284. (19) Fenton, N.; Neil, M.; Lagnado, D. A. A general structure for legal arguments about evidence using Bayesian networks. Cognit. Sci. 2013, 37, 61−102. (20) Rizak, S. N.; Hrudey, S. E. Misinterpretation of drinking water quality monitoring data with implications for risk management. Environ. Sci. Technol. 2006, 40 (17), 5244−5250. (21) Daston, L. Classical Probability in the Enlightenment; Princeton University Press: Princeton, NJ, 1988. (22) Tversky, A.; Kahneman, D. Judgment under uncertainty: heuristics and biases. Science 1974, 185, 1124−1131. (23) Gigerenzer, G. How to make cognitive illusions disappear: beyond “heuristics and biases”. 
In European Review of Social Psychology; Stroebe, W., Hewstone, M., Eds.; John Wiley & Sons Ltd.: Chichester, UK, 1991; pp 83−115.

(24) Kahneman, D.; Tversky, A. Subjective probability: A judgment of representativeness. Cognit. Psychol. 1972, 3 (3), 430−454. (25) Viscusi, W. K.; Chesson, H. Hopes and fears: the conflicting effects of risk ambiguity. Theory Decis 1999, 47, 157−184. (26) Newman, M. C. Quantitative Ecotoxicology, 2nd ed.; CRC Press: Boca Raton, FL, 2013. (27) Aven, T.; Reniers, G. How to define and interpret a probability in a risk and safety setting. Saf. Sci. 2013, 51, 223−231. (28) Bispo, R.; Bernardino, J.; Marques, T. A.; Pestana, D. Modeling carcass removal time for avian mortality assessment in wind farms using survival analysis. Environ. Ecol. Stat. 2013, 20, 147−165. (29) Bernardino, J.; Bispo, R.; Costa, H.; Mascarenhas, M. Estimating bird and bat fatality at wind farms: A practical overview of estimators, their assumptions and limitations. N. Z. J. Zool. 2013, 40 (1), 63−74. (30) Korner-Nievergelt, F.; Behr, O.; Brinkmann, R.; Etterson, M. A.; Huso, M. M. P.; Dalthorp, D.; Korner-Nievergelt, P.; Roth, T.; Niermann, I. Mortality estimation from carcass searches using the Rpackage carcass − a tutorial. Wildl. Biol. 2015, 21, 30−43. (31) Fenton, N.; Neil, M. Risk Assessment and Decision Analysis with Bayesian Networks; CRC Press: Boca Raton, FL, 2013. (32) Joffe, M. The gap between evidence discovery and actual causal relationships. Prev. Med. 2011, 53, 246−249. (33) Mouchart, M.; Russo, F. Causal explanation: Recursive decompositions and mechanisms. In Causality in the Sciences; Illari, P. M., Russo, F., Williamson, J., Eds.; Oxford University Press: Oxford, UK, 2011; pp 317−337. (34) NRC. Reference Manual on Scientific Evidence, 3rd ed.; National Research Council, The National Academies Press: Washington, DC, 2011. (35) Pearl, J. Causal inference in the health sciences: A conceptual introduction. Health Serv. Outcomes Res. Methodol. 2001, 2, 189−220. (36) Joffe, M. The concept of causation in biology. Erkenn 2013, 78, 179−197. (37) Bollen, K. A.; Pearl, J. 
Eight myths about causality and structural equation models. In Handbook of Causal Analysis for Social Research; Morgan, S. L., Ed.; Springer: Dordrecht, NL, 2013; pp 301−328. (38) Fleeger, J. W.; Carman, K. R.; Nisbet, R. M. Indirect effects of contaminants in aquatic ecosystems. Sci. Total Environ. 2003, 317, 207−233. (39) Neapolitan, R. E. Probabilistic Methods for Bioinformatics with an Introduction to Bayesian Networks; Morgan Kaufmann: Burlington, MA, 2009. (40) Wright, S. S. Correlation and causation. J. Agric. Res. 1921, 20 (7), 557−585. (41) Scutari, M.; Denis, J.-B. Bayesian Networks: With Examples in R; CRC Press: Boca Raton, FL, 2015. (42) Korb, K. B.; Hope, K. R.; Nyberg, E. P. Information-theoretic causal power. In Information Theory and Statistical Learning; Emmert-Streib, F., Dehmer, M., Eds.; Springer Science+Business Media LLC: New York, NY, 2009; pp 231−265. (43) Korb, K. B.; Nyberg, E. P.; Hope, L. A new causal power theory. In Causality in the Sciences; Illari, P. M., Russo, F., Williamson, J., Eds.; Oxford University Press: Oxford, UK, 2011; pp 628−652. (44) Sloman, S. Causal Models: How People Think About the World and Its Alternatives; Oxford University Press, Inc.: New York, NY, 2005. (45) Norsys Software Corp. Netica 5.12; Vancouver, CA, 2013. (46) Linder, S. H.; Delclos, G.; Sexton, K. Making causal claims about environmentally induced adverse effects. Hum. Ecol. Risk Assess. 2010, 16, 35−52. (47) Marcot, B. G.; Steventon, J. D.; Sutherland, G. D.; McCann, R. K. Guidelines for developing and updating Bayesian belief networks applied to ecological modeling and conservation. Can. J. For. Res. 2006, 36 (12), 3063−3074. (48) Nyberg, J. B.; Marcot, B. G.; Sulyma, R. Using Bayesian belief networks in adaptive management. Can. J. For. Res. 2006, 36 (12), 3104−3116. (49) Marcot, B. G. Metrics for evaluating performance and uncertainty of Bayesian network models. Ecol. Modell. 2012, 230, 50−62. (50) Uusitalo, L. Advantages and challenges of Bayesian networks in environmental modelling. Ecol. Modell. 2007, 203, 312−318. (51) Shachter, R. D. Evaluating influence diagrams. Oper. Res. 1986, 34 (6), 871−882. (52) Howard, R. A.; Matheson, J. E. Influence diagrams. Decis. Anal. 2005, 2 (3), 127−143. (53) Carriger, J. F.; Barron, M. G. Minimizing Risks from Spilled Oil to Ecosystem Services Using Influence Diagrams: The Deepwater Horizon Spill Response. Environ. Sci. Technol. 2011, 45 (18), 7631−7639. (54) Carriger, J. F.; Newman, M. C. Influence diagrams as decision-making tools for pesticide risk management. Integr. Environ. Assess. Manage. 2012, 8 (2), 339−350.

DOI: 10.1021/acs.est.6b03220 Environ. Sci. Technol. XXXX, XXX, XXX−XXX