POLICY ANALYSIS pubs.acs.org/est

Character of Environmental Harms: Overcoming Implementation Challenges with Policy Makers and Regulators

George Prpich,† Jens Evans,‡ Phil Irving,‡ Jérôme Dagonneau,† James Hutchinson,† Sophie Rocks,† Edgar Black,§ and Simon J. T. Pollard*,†

† Cranfield University, Collaborative Centre of Excellence in Understanding and Managing Natural and Environmental Risks, Cranfield, Bedfordshire, MK43 0AL, U.K.
‡ Environment Agency, Kings Meadow House, Kings Meadow Road, Reading, Berkshire RG1 8DQ, U.K.
§ Department for Environment, Food and Rural Affairs, Nobel House, Smith Square, London, SW1P 3JR, U.K.

ABSTRACT: Policy makers and regulators are charged with the daunting task of comparing incommensurate environmental risks to inform strategic decisions on interventions. Here we present a policy-level framework intended to support strategic decision processes concerning environmental risks within the Department for Environment, Food and Rural Affairs (Defra). The framework provides the structure by which risk-based evidence may be collated and, by assessing the value of harm expressed by different environmental policy areas against a consistent objective (e.g., sustainable development), we begin to form a basis for relative comparison. This research integrates the prior art, examples of best practice, and intimate end-user input to build a qualitative assessment informed by expert judgment. Supported by contextual narratives, the framework has proven successful in securing organizational support and stimulating debate about proportionate mitigation activity, resource allocation, and shifts in current risk thinking.

INTRODUCTION

We generally define risk as the likelihood of suffering harm from a hazard. Environmental risks are hugely varied in character. They straddle issues as varied in scale as the environmental release of engineered nanomaterials and the global challenges of climate change. We conceptualize environmental risks across different spatial and temporal domains; we consider critical characteristics such as reversibility (oxygen sag in rivers), latency (environmental asbestos exposure), uncertainty in outcomes (climate change), and the potential for knock-on effects (impacts on ecosystem services provided by groundwaters); and we have developed domain-specific risk analysis tools to inform decisions on how environmental risks should best be managed.

Policy makers designing “interventions” to manage risk, and regulators implementing policies in localized settings, face the unenviable task of comparing a large number of diverse environmental risks across their remits in order to direct resources, plan research efforts, and devise strategies that address the worst risks first. At the policy level, we are concerned with comparing strategic risks through high-level comparisons of the residual risk that remains when controls are in place. We ask: what is the residual risk of flooding in England and Wales this year? How large is the residual risk to groundwaters posed by historic land contamination? What is the current risk of invasive species crossing a national boundary?

Comparing risks is not new. To formalize comparisons, a wide range of comparative risk tools has been developed over the last 30 years. The design of these has been fraught with the intellectual challenge of reducing the dimensions of risks to a manageable set of core characteristics (attributes) that can be meaningfully compared. Tools range from multiattribute, multidimensional genetic algorithms that adopt “optimization surfaces” for policy makers, to the adoption of probability tools for elicited risks, through to narrative comparisons of the harms that environmental risks pose. A persistent tension exists between the desire for presentational clarity and for defensible risk analysis. Even more challenging has been their implementation. Few have been implemented in any real depth with their recommendations actively used to inform government or agency priorities. Why not? Part of the answer lies in the cultures of government departments and agencies. There are important tensions in the design, use, and communication of risk analysis tools within policy and regulatory settings that are the subject of this paper. Our research has been informed by (i) a published review of strategic risk tools; (ii) several applications of strategic risk analysis (SRA), case studies from which we have observed implementation difficulties at first hand; and (iii) recent research conducted for the Department for Environment, Food and Rural Affairs (Defra)

Received: April 5, 2011; Revised: August 17, 2011; Accepted: October 21, 2011; Published: October 21, 2011
© 2011 American Chemical Society

dx.doi.org/10.1021/es201145a | Environ. Sci. Technol. 2011, 45, 9857–9865

who have been challenged to improve their organizational capability in strategic risk analysis. Our purpose is to communicate these challenges and offer solutions to the tensions. Our observations are of interest to corporate risk specialists, to policy staff assembling “state of the environment” reports, and to research teams in governments; not least because evidence- and risk-informed decision-making extends beyond the comparative analysis of policy domains (air quality, water quality, chemical regulation, etc.) to decisions that set the size, shape, and focus of environment ministries and their regulatory agencies. Problem Statement. We frame an implementation problem: why, given the substantive practitioner literature and prior art, do strategic environmental risk analyses (SRA) seldom influence decisions on how risks are prioritized? We recognize that SRA tools cannot provide “answers” for policy makers. Rather, through comparing risks they engender a deeper understanding of the relationship between risks and espoused values.1 An effective tool provides multiple access points from which different beliefs can be voiced, meeting the varying needs of decision makers2 and allowing decisions to be approached in a structured and equitable manner.3 Moreover, SRA can guide decision makers through the myriad of scenarios, consequences, and responses to a suite of risks worthy of intervention. The needs of the “end user” must feature strongly. Heads of environmental policy, directors of regulatory functions, and chief scientific officers are rarely risk analysts by profession and have little opportunity to assimilate the detail of how SRA tools are constructed, attributes aggregated, and risks visualized.
Their need, and the driver for this research, is for strategic insight about risk management priorities, guidance on how risks are best managed in practice, and confidence in the accountabilities of others to act on executive decisions. Decision tools must therefore be pragmatic and fit for purpose; supported with the relevant evidence; transparent and capable of integrating quantitative and qualitative data; and they must broaden the risk debate beyond a technocratic perspective to one that reflects the organizational objectives of government departments and agencies.4 International Policy Context. The prior art on SRA has been reviewed elsewhere.5 First-generation SRA tools used nominal rankings for comparing environmental risks. Extensive work by the United States Environmental Protection Agency (USEPA) sought to develop a single, prioritized ranking of environmental risks.6 Early attempts at ranking failed to deliver the intended result of informing proportionate resource allocation.7 A common theme is the tension between the need to communicate risk (adopted by pragmatists) and a desire to maintain intellectual consistency (theorists8). Although the USEPA’s (1987) work was criticized, as most SRA tools have been, completing the exercise significantly improved cross-departmental communication and fostered a societal discussion about environmental challenges.9 Other approaches10–16 have shown that risk-ranking exercises are necessarily imperfect due to a host of design and integration issues (logarithmic versus linear expressions of probability and consequence; multidimensionality of environmental harm; incommensurability of comparing harm), the uncertainties of strategic decision-making (time scale, poor granularity at national level), and the staff time needed to fully characterize strategic risks, which is often prohibitively long.
Second-generation SRA tools progressed beyond a desire to rank risks quantitatively,17 recognizing that risk characterization (how significant is the risk and what are its dimensions of uncertainty?) required an explicit expression of values (risk appetite) and a process to enable a fair, effective comparison.18 Richer descriptions of risk reflecting social, economic, and environmental dimensions (“attributes”) of harm were adopted. These tools grouped risks by common attributes, as in the landmark New Jersey Comparative Risk Project,19 believing that risks of common character could be managed similarly, even if the nature of the hazard was different. Designers of tools sought to be all-encompassing in their analysis of risks and mutually exclusive between risk types so as to avoid double-counting.5,20,21 They faced a series of challenges in describing the attributes of harm posed by a risk. Though some suggested the number of attributes should be twenty or fewer,22 others noted that methods with more than ten attributes became simply unwieldy.17 Selecting attributes, that is, the different characteristics of harm posed by a hazard, required unambiguous knowledge of each attribute, data to support its use, and clarity about the context of application.23–26 Considerable research was undertaken by the Environment Agency of England and Wales (EA) to support its reporting to ministers on the state of the environment. Its work revealed a number of barriers (Table 1) that constrained implementation. These are now predictable and relate to a suite of communication shortcomings.5,27,28 Consider the workshops used to inform the strategic assessment of risks within a single policy domain—the environmental risks posed by agriculture, for example. Expert risk workshops that bring several (ca. 20) individuals together from policy, technical, and operational backgrounds, introduce an SRA tool to them, explain the attributes of harm, and then seek to elicit relative assessments of risk in a single day, or even over two days, clearly operate at pace.
The tone and pace may leave participants behind, fail to address incomplete understandings of terminology, fail to “bottom out” contentious issues sufficiently, and expose (possibly without resolution) professional biases about quantitative and qualitative data and its representation. As a result, participants may leave workshops unfulfilled, with the analysis incomplete or even “forced” due to time constraints.27,28 In response to these shortcomings, we now witness the emergence of a third generation of SRA tools that are more realistic about what they can achieve, concentrating on the communication and visualization of strategic risks with a principal objective of stimulating rich discussions on risk rather than delivering a “top ten” of residual risks to address. The remainder of this paper describes our attempts to develop such a tool for the specific purpose of comparing strategic risks in Defra’s remit and promoting a high-level debate at Defra’s management board about the current and future strategies for managing residual risks across the government department.

METHODOLOGY

Prior Reflections on Practice. Prior to new developments, we critiqued historic experience in SRA tool development. The EA developed SRA tools over a 10-year period from 1998.27,28 The workshops convened to apply these tools exposed fundamental debates about risk characteristics and often strayed into localized, rather than strategic, expressions of risk. Typical were (i) apparent technical discourses that were actually about regulatory policy; (ii) genuine technical discourses about the harm posed by environmental hazards; and (iii) discourses on the way people engaged in the process. We critique these in turn. Risk analyses at the policy level require a complex judgment about the importance of a future event, an ongoing exposure, or




Table 1. Barriers to SRA Implementation: Issues, Definitions and Possible Solutions

Policy barriers
- hazards. Issue: What hazards are to be assessed? Possible solution: Identify and agree the policy frame before technical work begins.
- values. Issue: What policy values are to be considered and what metrics and weighting will be used? Possible solution: Explicit statement of organizational objectives, followed by assessment of risk relative to those objectives.
- scope. Issue: What spatial and temporal scale will be considered? Possible solution: Explicit statement of spatial and temporal scope; ambiguity in definition leads to confusion and lack of consensus.

Technical barriers
- outputs. Issue: What type of output, and at what resolution, is expected? Possible solution: Elicit expectations from end users; integrate with existing reporting structures.
- supporting evidence. Issue: How to deal with disparate data? Possible solution: All available data must be accessed; the tool must be flexible enough to incorporate both qualitative and quantitative data.
- uncertainty. Issue: How should uncertainties be reported? Possible solution: Pragmatically, uncertainties should be recorded and then deliberated upon.

Engagement barriers
- communication of expectations. Issue: How to accommodate a diverse range of expectations and expertise? Possible solution: Determine end-user requirements and broadly communicate desired outcomes to focus evidence collection and assessment.
- communication of results. Issue: How to engage end users? Possible solution: Elicit decision-enabling needs from end users; present relevant information via visuals and brief narrative; provide an access point to further information.
- communication among participants. Issue: How to maintain focus and direction during expert elicitation? Possible solution: Employ a facilitator who may build consensus while remaining nonpartisan.

the possibility of harm. Judgments need to elicit evidence and preferences on how the future might look, and which hazards might cause us to deviate from it. Judgments embody implied values or policy choices critical to the framing of subsequent strategic risk analysis. Many of the apparent technical arguments encountered originated from confusion or disagreement about the rationale for policy choices. Disputes over the framing of an analysis (which risks were included; which metrics were appropriate; timeframes) revealed fundamental challenges about environmental protection goals. In short, the collective framing of the assessment was frequently overlooked. Examples of issues requiring discussion include (a) the choice of risk metrics and policy aims: which hazards and environmental end points were being considered, and what nature and extent of harm was deemed unacceptable; (b) the relative weighting of metrics; (c) the temporal frame of the analysis; and (d) the spatial frame of the analysis: where the hazards and environmental end points were located. Because in practice these were operational expressions of a preferred state of the environment, they are, in our view, matters for higher-level policy discourse and not technical specifications for risk tools. More recent versions of the EA’s SRA tools focused on agreeing the policy frame of the assessment before engaging in any risk analysis. This has been the most important step in making strategic risk analysis operationally useful. For example, in prioritizing risks to a nationally significant nature reserve, so that resources can be allocated to manage it, the analysis is now rooted in a clear decision (how can we allocate resources proportionately?) and a specific risk comparison (what is the relative importance of risks to this particular ecosystem?). We define our risks around management goals for the reserve (which species or services are we trying to protect? What is their relative importance?)
and select metrics relevant to them (e.g., the population density of species X). Adoption of SRA does, of course, promote technical discussion. Genuine technical queries include (a) how precise our analysis needs to be (i.e., its resolution and granularity); (b) what kind of output we should aim for (e.g., absolute risk estimates, relative risk rankings, detailed multidimensional characterization); (c) whether


the analysis should be qualitative or quantitative; (d) what information it should be based on; and (e) how we should represent uncertainty or variability visually. Again, discussions with decision makers at the outset proved helpful, especially where a relatively coarse approach was “good enough” for the intended application. Agreeing on decision needs in this way gives a clear rationale for methodological choices. Finally, we address the challenge of engaging others in strategic risk analyses. By definition, strategic analyses are high-level affairs for senior management, implying the possibility of changing corporate priorities or reallocating resources and budgets. It would not be surprising if staff further down an organizational structure viewed them with suspicion or even hostility. But risk assessments also tend to be misconstrued as objective technical exercises, carried out by specialists who use complex techniques to decide how important risks are. Risk assessments can easily exclude bystanders, including the senior managers who commission them and may feel that deciding priorities (irrespective of the output of an elegant analysis) is their prerogative. In this sense, strategic risk analyses run the risk of antagonizing staff by being misconstrued as imposing decisions on them, and of undermining senior staff by seeming to remove their prerogative for strategic decisions. So SRA requires a sociotechnical process which, for senior managers, means ensuring the analysis meets their needs and is legitimized by improving their capacity to tackle complex decisions. For domain specialists, it is helpful to explain what decision the analysis is supporting and how their input will be used. In both cases, additional factors can be recorded as contextual narrative. We have also found it helpful to elicit commentary on SRA outputs.
Comments are often as important as the analysis, because they provide corroboration (or otherwise) and may identify additional factors that have been overlooked. Care needs to be taken to present results in the context of the decisions they support. It is easy to misinterpret summary statistics and graphics, or to infer a level of precision in an analysis that may not be supported by the underlying evidence base. In concert, these reflections point to the need for a clear rationale for methodological and presentational choices.




Table 2. Summary of Hazard Characterization Attributes Used in This Study

Social dimension
- human well being: The social consequence of a detriment to human health and well being. An example is the health impact and anxiety that might follow an acute exposure to hazardous waste solvent during an accident at a poorly managed waste treatment facility. Examples of different magnitude consequences are shown below for a range of risks.
- social cohesion: The social consequence of reduced social trust, cohesion, or community resilience. For example, the reduction in trust that a community may have for a local paint manufacturer following successive industrial accidents in their community. Other examples are offered below for a range of risks.

Economic dimension
- loss in natural capital: The economic consequence of a reduction in economic value of the natural asset (measured in £). For example, the direct economic loss incurred in culling animal stock of a tradable value, or the value of groundwaters in England and Wales economically unavailable as a potable supply due to historic contamination.
- loss in value of ecosystem service: The economic consequence of a reduction in the economic value of the services provided by the natural asset. For example, the economic loss of recreational income from a reservoir being closed.

Environmental dimension
- impact on natural capital: The environmental consequence of a reduction in the environmental quality of an asset (air, land, water, biota). For example, a temporary reduction in water quality of a stretch of an urban river, or a long-term loss of nationally important heathland from sustained acid deposition.
- loss in ecosystem service: The environmental consequence of a loss in the function of ecosystem services provided by the natural asset. For example, the adverse impacts of interfering with the microbial processes within soil.

What follows is a summary of methodological and presentational developments since 2008 and a defense of these as an advance in approach. Method Development. We developed our SRA method employing an active research methodology guided by extensive deliberations with organizational experts and end users. Special attention was given to the needs of the Defra management board and their desire for a tool capable of addressing (or refuting) “gut feelings” with respect to risk priorities, highlighting unknowns, and providing a cross-cutting comparison. The desire was for an uncomplicated approach that meshed with existing reporting structures. Principal Assumptions. To assess policy-level risks, a number of assumptions were made. To begin, risk was defined as the likelihood of a strategic threat occurring that results in the realization of a set of economic, environmental, and social consequences.6,17,27,28 The framework assessed residual risk; the analysis was made at a national level and concerned the coming 12-month period. Because the data were disparate, expert judgment was required to estimate impacts and likelihood and to synthesize a wide body of information. Elsewhere in UK government, similar appraisals exist for assessing risks to national security and civil emergency.29 Beyond well-characterized events such as floods and animal disease outbreaks, no historic frequency distribution of damages at the national level exists for environmental risks in the UK. This lack of data required the analysis to make use of all available data (qualitative and quantitative) and necessitated expert judgment to complete the risk analysis. Given the variability of uncertainty and the end user’s requirements, a quantitative risk estimate was not supportable. Instead, uncertainties were captured during interviews and discussions and represented as risk narratives,22 providing a rich, contextual description20 of risk.

The method integrates two components. The first is an expert-informed, qualitative analysis of risk that uses a list of harm attributes.5,22,30 The second is a narrative that supports the analysis by providing context for experts to discuss broader scientific expertise and uncertainty.22,30 The narrative was subdivided into four sections: (1) a description of the context in which the risk was evaluated; (2) the risk policy owner, which provides senior managers an access point for additional information; (3) the current management strategy; and (4) technical summaries for each of the attribute dimensions. The following sections describe the selection of harm attributes and the expert deliberation process.

Attributes of Harm. SRA tools have adopted various means of representing the characteristics of the detriment (damage, impact, harm) that may occur when a risk is realized. We term these “attributes”, and their selection is widely contested.18 The problem distills to a pragmatic compromise between complete characterization and usability. In an a priori analysis of ten major SRA studies, more than 60 attributes were identified. From this list, attributes were cross-compared with the results of an empirical study into the selection of ecological risk attributes35 and in consultation with end users. A set of six broad attributes, linked to Defra’s organizational objectives,36 was selected (Table 2). We believe this format addresses the balance between tractability and risk characterization, providing a manageable subset of attributes resonant with the sustainable development agenda and research perspectives on natural assets and ecosystem services. Selected attributes were reviewed with Defra experts and direction was provided regarding attribute definition (Table 2).

Deliberative Process. Deliberation was used to populate the framework.30,31,33 Our approach placed emphasis on developing departmental buy-in and support, which we believe increases organizational confidence in the framework and process, and therefore relied on individual and group interviews rather than large, one-off, one-day workshops. Because the framework is as much a vehicle for communication as it is an assessment exercise, it was imperative that all organizational levels were comfortable and conversant in the application of the framework. This takes time. The process for completing the assessment is provided schematically in Figure 1 and described below.
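The two components just described, a set of qualitative attribute ratings plus a four-part supporting narrative, could be captured in a simple record. The sketch below is illustrative only: the paper does not define a data model, and all field names are assumptions.

```python
from dataclasses import dataclass, field

# Hypothetical record for one strategic risk, mirroring the two components
# described in the text: qualitative attribute ratings and a four-part
# narrative. Field names are illustrative, not taken from the paper.
@dataclass
class StrategicRisk:
    name: str
    attribute_ratings: dict            # harm attribute -> qualitative rating
    likelihood: str                    # "very low" .. "very high"
    context: str                       # (1) context in which the risk was evaluated
    policy_owner: str                  # (2) access point for additional information
    management_strategy: str           # (3) current management strategy
    technical_summaries: dict = field(default_factory=dict)  # (4) per-dimension notes

# Example entry (content paraphrased from the FMD case; ratings invented).
fmd = StrategicRisk(
    name="Foot and Mouth Disease",
    attribute_ratings={"human well being": "moderate",
                       "loss in natural capital": "severe"},
    likelihood="low",
    context="Not endemic to the UK; a single case triggers a national response.",
    policy_owner="Defra animal health policy team",
    management_strategy="Continuous monitoring and prevention",
)
print(fmd.name, fmd.likelihood)  # prints: Foot and Mouth Disease low
```

A collection of such records is one plausible shape for the "risk inventory" the summaries are later said to seed.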
Phase 1: Literature search and scoping—identification of attributes (described above), identification of pilot risks (in concert with the management board), and a literature scoping exercise. For each risk, the academic and “gray” literatures were reviewed and a draft summary was created. Phase 2: Build organizational support—introductory meetings among the research team, Defra’s risk coordinator, and key policy and technical managers. The aim of these discussions was to build trust and acceptance while establishing an entry point for accessing relevant organizational literature and




Figure 1. SRA deliberation process. The schematic presents a five-phase process, identifying the individuals involved and the aims of each phase.

Figure 2. (a) Impact schematic as perceived from the environmental (green), economic (gray), and social (blue) viewpoints. Vertical axis describes impact severity. (b) Comparative schematic comparing the relative risk of multiple hazards. Vertical axis describes likelihood and horizontal axis describes impact severity.

expertise. This stage ended with an informed understanding of organizational structure and a list of domain experts. Phase 3: Expert elicitation—completion of the technical assessment. Experts participated in semistructured interviews guided by a risk matrix. The matrix included a description of the six qualitative risk attributes (Table 2) and a single measure of likelihood. A five-point, logarithmic qualitative rating scale was used to assess impact (“negligible” to “catastrophic”), while a similar logarithmic five-point scale was used to assess likelihood (“very low” to “very high”). Experts were asked to consider the residual risk posed by each hazard at a national level in the “here and now”. Assessments were completed by a minimum of three experts and, where available, incorporated expertise from across the economic, social, and environmental domains. Contextual support was provided by the experts and integrated into the draft summary developed in phase 1. Elicited impact ratings were converted to risk scores and aggregated, providing a single impact score for each dimension and an overall score. These results were communicated using visuals. Policy-level risks represent strategic, national threats that express themselves as national events, changes in exposure, or evident harms. These risks vary in their range of impacts and uncertainties. An ellipse was used to better communicate the character of harm,5 with the size and dimensions of the ellipses (Figure 2b) intended to help decision makers visualize the uncertainty dimension.

The dimensions of the ellipses were ultimately determined via expert deliberation, dependent upon the range of risk scores provided, and supported by a brief narrative. Phase 4: Organizational validation—four levels of end-user review ensured that the context of use, and the various methodological compromises inherent in the development of a workable method, were made explicit. Review and challenge were received from (i) senior policy “owners” with direct responsibility for the risks being evaluated; (ii) a group of senior policy specialists within specific fields providing critical challenge; (iii) management board representatives with an interest in the use and application of SRA outputs during executive discussions; and (iv) technical risk analysts at a distance from development of the tool, but with expertise in environmental risk analysis. Phase 5: Communication and presentation—finally, outputs from the assessment were presented to the board, with each board member receiving a file containing summaries for all assessed risks and a summary schematic that combined the results of all risk assessments.
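As an illustration of the phase 3 scoring step, the conversion of qualitative ratings to scores and their aggregation might be sketched as follows. The paper specifies five-point scales but not the numeric mapping or the aggregation function, so the 1–5 scores and the use of the median here are assumptions.

```python
# Illustrative sketch only: the paper describes five-point logarithmic rating
# scales and aggregation of expert ratings into per-dimension and overall
# impact scores, but does not publish the exact mapping or aggregation rule.
# The 1-5 scale points and the median are assumptions made for this sketch.
from statistics import median

# Five-point qualitative impact scale; scale points are treated as
# logarithmic steps in severity (intermediate labels are assumed).
IMPACT_SCALE = {"negligible": 1, "minor": 2, "moderate": 3,
                "severe": 4, "catastrophic": 5}

def aggregate_impacts(ratings_by_dimension):
    """Convert qualitative expert ratings to scores and aggregate.

    ratings_by_dimension maps a risk dimension (social, economic,
    environment) to the ratings elicited from each expert. Returns
    per-dimension scores and an overall score.
    """
    per_dimension = {
        dim: median(IMPACT_SCALE[r] for r in ratings)
        for dim, ratings in ratings_by_dimension.items()
    }
    overall = median(per_dimension.values())
    return per_dimension, overall

# Example: three experts rating a flooding-like risk on each dimension.
ratings = {
    "social": ["moderate", "severe", "moderate"],
    "economic": ["severe", "severe", "catastrophic"],
    "environment": ["minor", "moderate", "moderate"],
}
dims, overall = aggregate_impacts(ratings)
print(dims, overall)  # prints: {'social': 3, 'economic': 4, 'environment': 3} 3
```

The median is shown because it is robust to a single outlying expert; in the study itself, disagreement was resolved through deliberation rather than arithmetic.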

RESULTS AND DISCUSSION

We present the comparative analysis of three national, strategic risks: (i) ongoing public exposure to engineered nanomaterials


(ENM); (ii) flooding; and (iii) Foot and Mouth Disease (FMD), to illustrate the framework in practice. These risks were chosen for the particular comparative challenges they present: their disparate harm characteristics, their varying spatial extent, and their distinct temporal boundaries.

Case 1: Foot and Mouth Disease. FMD is a highly contagious viral disease known to affect cloven-hoofed animals. It is not endemic to the UK, and the occurrence of a single case of FMD is considered an outbreak of national concern that will trigger a strident high-level response. Continuous monitoring and prevention strategies maintain the likelihood of an outbreak at a low level. However, natural uncertainty ensures that predicting the next outbreak remains difficult.

Case 2: Flooding. Flooding is a naturally occurring event that affects the UK on a yearly basis. The likelihood of floods in the UK for any given year is near certain; the uncertainty lies in predicting the specific location of future flooding. Flooding may originate from the sea, rivers, or excessive rainfall with surface runoff, and is an example of an aggregate risk. Granularity is not provided by subdivision of flood classifications; instead, all types of flooding are grouped and assessed as a whole.

Case 3: Engineered Nanomaterials. Engineered nanomaterials (ENMs) are nanoscale manufactured materials used in a wide range of products. The short- and long-term impacts of ENMs are highly uncertain, with understanding defaulting to the health impacts of the constituent chemistry and morphology of nanoparticles. The risk from ENMs constitutes an aggregation of risk from all ENM products. Long-term or chronic impacts are unknown but are of concern. However, this framework considers the likelihood of harm being realized in the short term from near-negligible levels of public exposure.

Qualitative Presentation.
The risk summary comprised a blend of visual outputs informed by the qualitative assessment and the narrative. Here we present a summary of the schematics used to visualize the assessment. At the strategic level, fresh visuals that are simple and effective provide an interface to capture, present, and explore information, providing cognitive and social benefit to communication processes.34 Figure 2a presents the impact scores elicited for flooding using the three risk dimensions in Table 2. The results are unweighted and expert informed, providing end users a “sense” for how a risk may interrelate with organizational objectives (Table 2). The visual is purposely uncluttered, intended to enable rapid assessment at a glance. Care must be taken to ensure end users understand the message conveyed by the visuals, and this is achieved by a narrative which makes explicit key supporting data, the assessment rationale, and mention of uncertainties. Figure 2b is a valuable visual for presenting the relative risk posed by each hazard. The position of the risk is determined by aggregation of the impact scores (Figure 1) and an expert-elicited assessment of likelihood. The deliberate shape of an ellipse, positioned on the two, conceptually logarithmic risk axes,35 communicates, in this instance, the uncertain severity and geography of yearly floods, as well as the uncertainty in expected likelihood. The positioning of risks on this schematic promotes debate about the relative significance of current environmental threats. A succinct summation of relevant and targeted information presented in an engaging format is vital for capturing the attention of a high-level audience. Here, the breadth of policy-level information is synthesized into a fluid and logical description delivered on a single page. Figure 3 is an example of a risk summary for flooding.
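The placement of a risk ellipse on the likelihood/impact plane of Figure 2b can be sketched in the same spirit. The paper states that ellipse dimensions were settled by expert deliberation from the range of elicited scores, so the min/max convention below is an assumption, not the authors' method.

```python
# Hypothetical sketch of placing a risk ellipse on the likelihood/impact
# plane of Figure 2b. Representing the ellipse by the midpoint and spread
# of the elicited scores is an assumption; in the study the dimensions
# were ultimately set by expert deliberation.

def risk_ellipse(impact_scores, likelihood_scores):
    """Return (centre_x, centre_y, width, height) for a risk ellipse.

    The centre sits at the midpoint of the elicited impact (x) and
    likelihood (y) scores; width and height span their ranges, so a
    larger ellipse signals greater disagreement or uncertainty.
    """
    x0, x1 = min(impact_scores), max(impact_scores)
    y0, y1 = min(likelihood_scores), max(likelihood_scores)
    return ((x0 + x1) / 2, (y0 + y1) / 2, x1 - x0, y1 - y0)

# Flooding-like example: likelihood near certain and tightly agreed,
# impact varying with location (scores invented for illustration).
print(risk_ellipse([2, 4, 3], [4, 5, 5]))  # prints: (3.0, 4.5, 2, 1)
```

Because both axes are conceptually logarithmic, equal spreads at different positions imply very different absolute uncertainty, which is one reason a narrative accompanies each ellipse.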


The summary provides a complete narrative of risk, beginning with contextual scene-setting and culminating in a final, overall analysis. Supported by referenced material (not shown), it serves as an introduction and guide for end-users to access additional expert information. From an organizational perspective, these summaries represent the beginnings of a risk inventory: a hierarchically independent platform for capturing risk-relevant data. Ideally, this framework would be updated on a regular basis, a process that may instill a culture of risk reporting and risk thinking within the organization. Observations. Overall, the response from experts was favorable, with respondents recognizing the value of the framework as a vehicle for communicating with the management board. The narrative was appreciated for its ability to capture conflicting viewpoints and cross-disciplinary disagreements,21 while the schematics were successful in stimulating initial debate on comparisons, the range of impacts, and the suitability of the current risk management strategy. Instances of misunderstanding during expert elicitation and presentation were unavoidable and highlighted the value of guidance from experienced facilitators to maintain consistency. Challenging questions about uncertainty, knowledge gaps, and policy limitations were posed and revealed a wealth of supporting information, as well as secondary risks and interdependencies. The effectiveness of the framework may be determined not solely by the quality of the end product, but also by the process of gathering and sharing information, thus promoting reflective understanding with the possibility of better risk management.36 Environmental policy makers and regulators have the unenviable task of making decisions on subjects that are truly incommensurate, under conditions of knowledge inequality and uncertainty. 
In the absence of formalized tools, decision-makers implicitly compare and contrast risks, exposing decision processes to perception and agenda-driven bias. Our aim is to provide a structured, transparent framework able to synthesize information in a systematic manner, bringing knowledge equality to decision processes and initiating a structured dialogue between technical specialists and decision-makers.7 Conventional treatments (i.e., probability × consequence) in isolation of these complexities may not provide the depth of understanding necessary to adequately inform decision processes. Tools that are flexible and provide risk-relevant context are favored. In essence, the tool becomes a vehicle or conduit for communicating risk to decision-makers. Here we have illustrated the efficacy of our framework by assessing three risks with dramatically different characters of harm. Rather than attempt to rank these risks objectively, an exercise that often fails to add value for policy makers,7 our framework strives to communicate the relative harm of each risk in a manner meaningful to the organization. No single common measure exists that is capable of assessing these risks across plural values, but this does not mean the risks cannot be compared. Assessment of risk via common attributes provides the grounding for comparison; however, attribute selection requires methodological compromise to balance totality with usefulness. Tailoring attributes to sustainability increased the relevance of the comparisons from a management perspective, but technical efficacy was still somewhat limited by the ambiguity of the attributes. Given the mix of issues and the variable character of risks within a policy area, this ambiguity was deemed necessary to capture "all issues", though it did result in some misinterpretation among technical experts and so required diligent facilitation.

Figure 3. Example of a generalized risk summary used to collate information. Defra sensitive material has been removed.

From a usability perspective, there is no defensible rationale for a "right number" of attributes. Some argue that risk attributes (characteristics) should be exhaustive;20 others suggest no more than twenty;22 while some believe ten attributes should be sufficient.17 For the practitioner, a fine balance must be struck between tractability and organizational capacity, and our approach is in line with current thinking related to the assessment and communication of global risks,37-39 and with the recommended methods of UK government.26,40 The process of informing the SRA was highly deliberative, though it did not use commonly employed workshops to elicit data.20 The framework was organic, in that it underwent continual renewal from within as new information became available; from an organizational perspective, this could only be achieved via individual or group interviews and correspondence. Though more taxing on facilitators, this approach limited the burden on the organization, which increased buy-in and the value of the tool for policy makers.18 A central challenge that has eluded many practitioners is the active implementation of these frameworks within an organization.5 Indeed, many such tools are designed in a research setting and validated via case studies at some considerable distance from the end-user environment, a process that ignores the complexities of implementation. To overcome this, we developed our tool in collaboration with the end-user, enabling integration of their expectations and requirements and allowing for organizational "fit" alongside existing frameworks. The requirement of senior managers was for a policy-level relative comparison of environmental risks analyzed at a national level over the next 12 months. Though engagement was high, which secured buy-in and increased the potential for traction and authority among executive users, the scope of the study inevitably led to methodological compromise. This framework is a first step toward improving risk-based decision making at a policy level within Defra. Depending upon the success of implementation, future work will investigate linking the analysis more closely to management activities, thus providing increasing value to policy-level decision processes.
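The common-attribute comparison underpinning the framework can be sketched as a minimal risk-inventory record. The following is a purely hypothetical illustration, not Defra's actual scheme: the attribute names, scores, and the single-attribute ordering are all illustrative assumptions. Each risk is described against the same attribute set so that qualitatively different harms can be set side by side without forcing a single overall rank.

```python
from dataclasses import dataclass, field

@dataclass
class RiskSummary:
    """One entry in a policy-level risk inventory (illustrative fields)."""
    name: str
    narrative: str                 # contextual scene-setting for end-users
    attribute_scores: dict         # shared attribute -> expert score (e.g., 1-5)
    likelihood: str                # qualitative likelihood, e.g., "near certain"
    uncertainties: list = field(default_factory=list)

def compare_on(risks, attribute):
    """Order risks by one shared attribute: a relative view, not an overall rank."""
    return sorted(risks, key=lambda r: r.attribute_scores.get(attribute, 0),
                  reverse=True)

flooding = RiskSummary("Flooding", "Annual event; location uncertain",
                       {"economic": 5, "environmental": 3}, "near certain",
                       ["specific locations of future floods"])
fmd = RiskSummary("FMD", "Exotic disease; outbreak-driven",
                  {"economic": 4, "environmental": 2}, "low",
                  ["timing of next outbreak"])

most_economic = compare_on([flooding, fmd], "economic")[0].name
```

Because each comparison is made one attribute at a time against a shared set, the record supports debate about relative harm while leaving the supporting narrative and uncertainties visible alongside the scores.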

’ AUTHOR INFORMATION Corresponding Author

*E-mail: s.pollard@cranfield.ac.uk; phone: +44 (0)1234 754101.

’ ACKNOWLEDGMENT The Risk Centre is funded by Defra, EPSRC, NERC, ESRC, and Cranfield University under EPSRC Grant EP/G022682/1. J.H. was supported on an EPSRC Vacation Bursary (2010). We thank policy owners in Defra for numerous discussions that have informed this manuscript. The views and opinions expressed are the authors’ alone and are not attributable to Defra or its Agencies. ’ REFERENCES (1) Knol, A. B.; Briggs, D. J.; Lebret, E. Assessment of complex environmental health problems: Framing the structures and structuring the frameworks. Sci. Total Environ. 2010, 408 (14), 2785–2794. (2) Jasanoff, S. The songlines of risk. Environ. Values 1999, 8 (2), 135–152. (3) Ijjasz, E.; Tlaiye, L. Comparative risk assessment. World Bank Pollution Management in Focus, 1999. (4) Eduljee, G. H. Trends in risk assessment and risk management. Sci. Total Environ. 2000, 249, 13–23. (5) Pollard, S. J. T.; Kemp, R. V.; Crawford, M.; Duarte-Davidson, R.; Irwin, J. G.; Yearsley, R. Characterizing environmental harm: Developments in an approach to strategic risk assessment and risk management. Risk Anal. 2004, 24 (6), 1551–1560. (6) Unfinished Business: A Comparative Assessment of Environmental Problems; U.S. Environmental Protection Agency (USEPA), Office of Policy Analysis, Office of Policy, Planning and Evaluation: Washington, DC, 1987. (7) Andrews, C. J.; et al. Comparative Risk Assessment: Past Experience, Current Trends and Future Directions; Advanced Research Workshop on Comparative Risk Assessment and Environmental Decision Making, Rome, Italy, 2002. (8) Minard, R. A. CRA and the States: History, Politics and Results. In Comparing Environmental Risks: Tools for Setting Government Priorities; Davies, J. C., Ed.; Resources for the Future: Washington, DC, 1996. (9) Andrews, C. J. Lessons from the New Jersey comparative risk project. 
Proceedings of the NATO Advanced Research Workshop on Comparative Risk Assessment and Environmental Decision Making, Rome, Italy, 2002. (10) Fischhoff, B. Ranking risks. RISK: Health, Safety & Environ. 1995, 6, 191–202. (11) Morgan, M. G.; et al. A proposal for ranking risk within federal agencies. In Comparing Environmental Risks: Tools for Setting Government


Priorities; Davies, J. C., Ed.; Resources for the Future: Washington, DC, 1996. (12) van Asselt, M. B. A. Improving decision-making under uncertainty: An integrated approach to strategic risk analysis. In Facing the New Millennium: Proceedings of the 9th Annual Conference on Risk Analysis, Rotterdam, 10–13 October 1999; Goosens, L. H. J., Ed.; Delft University Press: Netherlands, 1999. (13) Morgenstern, R. D.; Shih, J.-S.; Sessions, S. L. Comparative risk assessment: An international comparison of methodologies and results. J. Hazard. Mater. 2000, 78, 19–39. (14) DeKay, M. L.; et al. The use of public risk ranking in regulatory development. In Improving Regulation: Cases in Environment, Health, and Safety; Fischbeck, P. S., Farrow, R. S., Eds.; Resources for the Future: Washington, DC, 2001. (15) Andrews, C. J.; Hassenzahl, D. M.; Johnson, B. B. Accommodating uncertainty in comparative risk. Risk Anal. 2004, 24 (5), 1323–1335. (16) Linkov, I.; et al. Multi-criteria decision analysis and adaptive management: A review and framework for application to Superfund sites. In Reclaiming the Land: Rethinking Superfund Institutions, Methods and Practices; Macey, G. P., Cannon, J., Eds.; Springer: New York, 2005. (17) Risk is More than Just a Number; No. 1996/03E; Committee on Risk Measures and Risk Assessment, Health Council of the Netherlands: The Hague, The Netherlands, 1996. (18) Morgan, K. M.; DeKay, M. L.; Fischbeck, P. S.; Morgan, M. G.; Fischhoff, B.; Florig, H. K. A deliberative method for ranking risks (II): Evaluation of validity and agreement among risk managers. Risk Anal. 2001, 21, 923–937. (19) New Jersey Comparative Risk Project Final Report; New Jersey Department of Environmental Protection (NJDEP): Trenton, NJ, 2002; http://www.state.nj.us/dep/dsr/njcrp/index.htm. (20) Morgan, M. G.; Florig, H. K.; DeKay, M. L.; Fischbeck, P. Categorizing risks for risk ranking. Risk Anal. 2000, 20 (1), 49–58. (21) Klinke, A.; Renn, O. 
A new approach to risk evaluation and management: Risk-based, precaution-based, and discourse-based strategies. Risk Anal. 2002, 22 (6), 1071–1094. (22) Willis, H. H.; DeKay, M. L.; Morgan, M. G.; Florig, H. K.; Fischbeck, P. S. Ecological risk ranking: Development and evaluation of a method for improving public participation in environmental decision making. Risk Anal. 2004, 24 (2), 363–378. (23) Tal, A. Assessing the environmental movement’s attitudes toward risk assessment. Environ. Sci. Technol. 1997, 31 (10), 470–476. (24) Hofstetter, P.; Bare, J. C.; Hammitt, J. K.; Murphy, P. A.; Rice, G. E. Tools for comparative analysis of alternatives: Competing or complementary perspectives? Risk Anal. 2002, 22 (5), 833–851. (25) Andrews, C. J.; Valverde, L. J. Analysis in support of environmental decision-making. In Proceedings of the NATO Advanced Research Workshop on Comparative Risk Assessment and Environmental Decision Making: Rome, Italy, 2002. (26) The Orange Book: Management of Risk, Principles and Concepts; HM Treasury: London, U.K., 2004. (27) Strategic Risk Assessment: Further Development and Trials; R&D Technical Report E70; Environment Agency: Bristol, U.K., 1999. (28) Strategic Risk Assessment Phase II; R&D Technical Report E2-041/TR; Environment Agency: Bristol, U.K., 2002. (29) National Risk Register of Civil Emergencies; Cabinet Office: London, U.K., 2010; http://www.cabinetoffice.gov.uk/media/348986/nationalriskregister-2010.pdf. (30) Willis, H. H.; MacDonald Gibson, J.; Shih, R. A.; Geschwind, S.; Olmstead, S.; Hu, J.; Curtright, A. E.; Cecchine, G.; Moore, M. Prioritizing environmental health risks in the UAE. Risk Anal. 2010, 30 (12), 1842–1856. (31) Willis, H. H.; DeKay, M. L.; Fischhoff, B.; Morgan, M. G. Aggregate, disaggregate, and hybrid analyses of ecological risk perceptions. Risk Anal. 2005, 25 (2), 405–428. (32) Department for Environment, Food and Rural Affairs (Defra) Website; http://ww2.defra.gov.uk.


(33) Linkov, I.; Satterstrom, K.; Kiker, G.; Seager, T. P.; Bridges, T.; Gardner, K. H.; Rogers, S. H.; Belluck, D. A.; Meyer, A. Multicriteria decision analysis: A comprehensive decision approach for management of contaminated sediments. Risk Anal. 2006, 26 (1), 61–78. (34) Eppler, M.; Platts, K. Visual strategizing: The systematic use of visualization in the strategic planning process. Long Range Planning 2009, 42 (1), 42–74. (35) World in Transition: Strategies for Managing Global Environmental Risks; Annual Report, German Advisory Council on Global Change (WBGU); Springer: Berlin, 1998. (36) Sparrow, M. K. The Character of Harms: Operational Challenges in Control; Cambridge University Press: Cambridge, U.K., 2008. (37) Global Risks 2011, 6th ed.; An Initiative of the Risk Response Network; A World Economic Forum Report: Switzerland, 2011; http://riskreport.weforum.org/. (38) Capistano, D., Samper, C. K., Lee, M. J., Raudsepp-Hearne, C., Eds. Ecosystems and Human Well-Being: Multiscale Assessments. Findings of the Sub-Global Assessment Working Group, Millennium Ecosystem Assessment (MA); Island Press: Washington, DC, 2005. (39) TEEB. The Economics of Ecosystems and Biodiversity: Ecological and Economic Foundations; Kumar, P., Ed.; Earthscan: London and Washington, 2010. (40) Better Regulation Executive; Department for Business Innovation and Skills: London, U.K.; www.bis.gov.uk/bre.


dx.doi.org/10.1021/es201145a |Environ. Sci. Technol. 2011, 45, 9857–9865