Policy Analysis

Public Participation and the Environment: Do We Know What Works?

CARON CHESS* AND KRISTEN PURCELL
The Center for Environmental Communication, Rutgers University, Cook College, New Brunswick, New Jersey 08901-2883
This literature review of studies on public meetings, workshops, and community advisory committees discusses public participation based on empirical evidence. Public participation “success” is defined by researchers’ criteria that are divided into two categories: (1) those that evaluate the success of the participatory process and (2) those that evaluate the success of the outcome of the process. We find that the form of participation (public meetings, workshops, or citizen advisory committees) does not determine process or outcome success. Therefore, attempts to develop a typology of public participation efforts may be problematic. However, we find some empirical support for practitioners’ other widely accepted “rules of thumb”.

“The common practice of eliciting comments only after most of the work of reaching a decision has been done is cause for resentment of risk decisions.... Many decisions can be better informed and their information base can be more credible if the interested and affected parties are appropriately and effectively involved...” (1).

“Experience increasingly shows that risk management decisions that are made in collaboration with stakeholders are more effective and more durable.” (2).

Two major national reports have called for more involvement of outside stakeholders in agency decision making about risk (1, 2). Despite the fears of some scientists that participatory processes can swamp good science, these reports encourage agencies to take collaborative approaches to environmental problem solving. The following review of public participation studies seeks to ground discussions of the implementation of public participation in empirical evidence, dating back to the 1970s, when seminal research was spurred by the enactment of federal environmental statutes that mandated public participation.
Methods

The literature cited results from a search of 21 electronic databases, eight annotated bibliographies, and nine edited volumes, as well as contacts with government officials about agency research. For the purpose of this paper, which focuses on specific forms of environmental public participation (public meetings, workshops, and citizen advisory panels), we did not include research on alternative dispute resolution or public participation efforts that involve many different forms of participation. Hence, most research on the siting of unwanted land uses (e.g., incinerators, nuclear waste sites) was not appropriate for this review. We also excluded articles about public participation outside of North America due to differences in culture that might affect study results. We included non-peer-reviewed government reports because they contain critical, recent research, and their quality is comparable (if not superior) to the peer-reviewed literature. Notwithstanding our extensive search, we recognize that we undoubtedly missed appropriate articles. Therefore, this review is limited.

Many of the citations in this article, as with other reviews on public participation, date back to the 1970s and early 1980s, reflecting the relative dearth of recent empirical research on specific forms of public participation. While the environmental field has changed markedly, a seminal review (3) points out that the hallmarks of public participation have changed little over time (e.g., the emphasis on interest group pluralism and the reluctance of government to grant influence to participatory efforts).

We first explore varying approaches to defining public participation “success”. Then, we review the empirical evidence concerning the success of three forms of public participation. Finally, we examine the implications of this research for practice. (To conform to the research literature, we use the term “public participation” rather than more recent nomenclature, such as “stakeholder involvement”.)

* Corresponding author phone: (732) 932-8795; fax: (732) 932-7815.
10.1021/es980500g CCC: $18.00 Published on Web 08/15/1999
© 1999 American Chemical Society
Successful Participation

To answer practitioners’ questions about the effectiveness of forms of public participation, a definition of success is essential. However, developing a single definition is problematic because of the diversity of perspectives about the goals of public participation. We discuss here some goals and criteria that researchers have proposed to define “success,” and we sort them according to two categories: public participation “outcome” and “process” (e.g., 4, 5).

Outcome Goals. For some, successful public participation is judged solely by outcomes; the results determine whether the participatory means are successful. However, the definition of positive results varies considerably. Many discussions concerning outcomes (e.g., 6-8) note the distinction between public participation efforts that use citizens to garner support for agency efforts and those that involve citizens in developing policy. Public participation programs can be seen primarily as boosterism to “channel and contain citizen demands, delay difficult decisions, or build support for agency plans,” according to one article (9). Conversely, stakeholders may have different goals, for example, improving or blocking an agency proposal. Among other goals for outcome success are better accepted decisions (e.g., 2), consensus (e.g., 10), education (e.g., 10), and improved quality of decisions (1).

Regardless of the goals selected, evaluating the outcome of any public participation effort is problematic because researchers cannot be sure whether an effect is due to public participation efforts or to other variables (11), such as simultaneous events (e.g., local elections), the social context in which the activities take place (e.g., the composition of the community and the history of controversy), and/or the nature of the environmental problem.

Process Goals.
Instead of defining public participation success by outcomes, it can be defined by the participatory processes used in the programs. According to this perspective, the characteristics of the means used in public participation programs, rather than the results, define success. Such studies explore issues such as fairness, information exchange, group process, and procedures.

The Middle Ground. Analysts who take a position in the middle of the process-outcome spectrum believe that public participation should meet some balance of outcome and process goals. That is, neither “good” process nor “good” outcome is sufficient by itself. For example, the results of surveying staff and participants at five U.S. Department of Energy (DOE) sites suggest that future evaluations use both outcome criteria (e.g., “Key decisions are improved by public participation”) and process criteria (e.g., “The decision-making process allows full and active stakeholder representation”) (12).

ENVIRONMENTAL SCIENCE & TECHNOLOGY / VOL. 33, NO. 16, 1999
Selecting Goals and Criteria

Not only the goals but also the methods for determining goals and criteria differ greatly. In the empirical studies of public participation, two major methods were used to select the goals and criteria for measuring participatory processes.

Theory-Based Criteria. Criteria based on a particular theory have the advantage of providing consistent means by which to compare studies. For example, Webler (13) makes a strong argument that evaluation of public participation should be based on “fairness” (permitting people to participate in the interaction, initiate dialogue, and challenge and defend claims) and “competence” (using the best available information). Particular emphasis is placed on “right discourse,” which, among other characteristics, involves multiway communication, consensus-based interaction, critical self-reflection, and “reasonableness” (13). Another normative model calls for the involvement of “amateurs” in decisions, considers citizens’ abilities to share in decision making, and proposes face-to-face discussions over time and mechanisms for citizens to participate on some basis of equality with agency officials (8).

Criteria Based on Participants’ Goals and Satisfaction. According to this perspective, universal goals and criteria derived from theory are less important than the specific goals of those involved in participatory efforts. Thus, participant-based goals may vary depending on culture, environmental problem, historical context, and other factors. This approach provides room not only for stakeholders’ expectations but also for agencies’ definitions, which may be more tied to programmatic outcomes than theorists’ definitions. Although democratic ideals might encourage agencies to engage in participatory processes, unless environmental public participation becomes a strong societal norm, agencies, suffering from fiscal, regulatory, and time constraints, must see programmatic benefits (14).

We do not see these two approaches as mutually exclusive. In keeping with some of the current thinking in evaluation research (15), we advocate methodological pluralism, which provides the opportunity to overcome the limitations of any one approach. Although fuller discussion of this issue is the subject of another paper, researchers of environmental public participation arguably could combine these approaches by soliciting from participants their expectations and criteria for success, comparing these expectations to theory, and synthesizing participants’ criteria with theoretical ones. Because relatively few studies use such methodological pluralism, imposing this definition of success on the existing empirical research would be fruitless. Instead, we use researchers’ definitions for judging success and point out their derivation (see Tables 1-3). Because methodological and definitional differences make comparing studies difficult, our analysis is meant to raise issues concerning public participation rather than to strive for definitive conclusions.
Effective Public Meetings?

A major concern of some critics is that public meetings (both legally required hearings and other public meetings open to all) legitimize agency decisions that have already been made (8). Some agency practitioners are equally disillusioned about the usefulness of public meetings; consider, for example, the staffer who stated that his goals for a public meeting were merely “to survive” (16). In addition, advice to practitioners strongly discourages the use of large public meetings (9, 17, 18). However, the empirical research specifically about public meetings provides a more optimistic assessment.

Public Meeting Process. Some empirical studies support criticism of the public meeting process (see Table 1 and refs 19-29). For example, authors note the overrepresentation of opponents compared with proponents (19) and some demographic differences between the general public and meeting participants (20, 21). The empirical research underscores other process concerns related to interaction between participants and agencies. For example, hearings on the siting proposals of Hydro Quebec in Canada forced participants to react to plans for siting rather than to evaluate the project’s feasibility more holistically (11). In-depth cases of the Army Corps of Engineers’ public participation efforts included portrayals of public meetings with hundreds of opponents loudly resisting the agency’s “decide, announce, defend” approach to governing (22). Also, a study of 23 public meetings dealing with cleanup of a hazardous waste site found that the process imposed by the government left participants feeling patronized and frustrated (23). In addition, government agencies unduly limited the scope of discussion to exclude social issues (24) or nontechnical concerns of participants (23).

However, other researchers’ assessments of public meetings have been more positive. For example, some studies contradict claims that public meetings are unrepresentative. When 600 oral and written comments submitted at public hearings were compared with a random sample of the opinions of 384 households, the study found that “in most respects, the survey results backed up the opinions expressed at the meetings” (25). Similarly, a summary of three studies that measured the representativeness of meeting participants compared with the general population found that the demographics differed somewhat, but the opinions expressed by meeting participants did not differ significantly from surveys returned by the general population. The research findings were strikingly consistent despite differences in the scale of the meetings (one citywide, one countywide, and one statewide), their subject, their level of controversy, and their length (21). Another study positively rated representation based on the participation of many interest groups (26). Finally, in-depth analysis of transcripts of 30 meetings on the issue of transportation planning found considerable diversity in reasons for and intensity of concern, suggesting that merely counting opponents and proponents oversimplifies the complexity of issues and the nature of representation (27). Not surprisingly, questionnaires distributed after four Army Corps of Engineers’ public participation efforts found that more people attended public meetings than were involved in workshops and seminars. The survey also found that participants preferred public meetings to these other forums (22).

Public Meeting Outcome. The majority of the studies found that meetings influenced government decisions. For example, quantitative analysis of 1816 California Coastal Commission hearings about building permits found that the combination of citizen and staff opposition raised commissioners’ denial rates for permits from 19% to 66% (28). A series of public meetings dealing with hydropower led to
TABLE 1. Assessing the Usefulness of Public Meetings^a

[Summary of the studies tabulated, each rated on process criteria (e.g., interaction, representation, timing, scope of discussion) and outcome criteria (e.g., influence on decision, consensus), with the origin of the criteria for success noted as theory/prior research or participants’ direct responses:
Elder, 1982 (26) — siting in Canada, oil sands recovery; Gariepy, 1991 (11) — five cases, hydroelectric siting in Canada; Gundry and Heberlein, 1984 (21) — two cases, road salt and deer hunting; Heberlein, 1976 (19) — disposal of animal waste; Kaminstein, 1996 (23) — remediation of a Superfund site; Kihl, 1985 (27) — three cases, transportation; Torgeson, 1986 (29) — siting of a Canadian pipeline; Mazmanian and Nienaber, 1979 (22) — five cases, Corps of Engineers; O’Riordan, 1976 (25) — water planning in Canada; Rosener, 1982 (28) — California Coastal Commission; Rowbotham, 1982 (24) — two cases, Canadian siting of sour gas wells; Sinclair, 1977 (20) — binational Great Lakes water issues.]

^a In the studies above, criteria reflecting process and outcome goals suggest public meetings have a mixed record. Process (+) indicates a process element, such as interaction, was positive, whereas (-) suggests the process was less favorable. Outcome (+) indicates public participation influenced decision making, although not necessarily favorably, and the addition of (/) indicates this influence blocked an agency proposal. Conversely, (-) indicates public participation had little impact on decisions. Under participants, “direct response” means that participants did not define criteria and instead responded to researchers’ criteria. NA means not applicable.
some mitigation of project effects that had previously been overlooked (26), and another study found that 90% of 30 transportation projects were changed because of public meetings (27).

As noted previously, public meetings have been criticized for placing participants in the position of reacting to agency proposals rather than providing input to their development. Ironically, because public meetings are a useful focal point for proposal opposition, process problems can lead to major shifts in the outcome: projects were blocked due to overwhelming opposition to agencies’ de facto decisions (22). This suggests that an agency that uses public meetings to avoid responding to public concerns may find the meetings serve the opposite purpose, forcing the agency ultimately to abandon proposals in the face of extreme and concentrated public opposition. We found only one study in which public meetings were found to yield a consensus (26), underlining that public meetings are less likely to meet this criterion for success.

However, researchers point not only to impacts of public participation on specific decisions but also to subsequent institutional changes that influenced other public participation efforts. For example, the Canadian government added participatory processes to complement public hearings (24, 29). In addition, participatory efforts led to political recognition for native peoples (29) and the inclusion of social factors in government decision making (26). In short, a public participation effort can lead to long-term changes in formal or informal agency policy and procedures. Such social learning is arguably as important as a single government decision about a project or program.

Because of considerable diversity in methodologies and variables, these studies cannot easily be compared to determine which factors may have contributed to meeting
TABLE 2. Assessing the Usefulness of Workshops^a

[Summary of the studies tabulated, with the origin of the criteria for success noted as theory/prior research or participants’ direct responses:
Gundry and Heberlein, 1984 (21) — natural resource planning; process criteria: demographic and opinion representation, variance of opinion.
Rosener, 1981 (4) — Corps of Engineers wetland permitting, two cases; process criteria (per case): representation of interest groups, interaction, perception of agency; outcome criterion: influence on decision.
Twight and Carroll, 1983 (31) — Forest Service land use planning; outcome criteria: consensus, perception of influence, participants’ understanding of the Forest Service.
Young, Williams, and Goldberg, 1993 (30) — DOE draft plan for a Programmatic Environmental Impact Statement; process criteria: fairness, understanding, accessibility; outcome criterion: perceived influence.]

^a In the studies above, criteria reflecting process and outcome goals suggest that, like public meetings, workshops have a mixed record. Process (+) indicates a process element, such as interaction, was positive, whereas (-) suggests the process was less favorable. Outcome (+) indicates public participation influenced decision making, although not necessarily favorably, whereas (-) indicates public participation had little impact on decisions. Under participants, “direct response” means that participants did not define criteria and instead responded to researchers’ criteria. NA means not applicable.
success or failure. However, agencies’ actions seem to have a significant impact. For example, agencies have undercut the effectiveness of public meetings through poor outreach to potential participants (20), limited provision of technical information (20), procedures that disempower citizens (23), unwillingness to accommodate discussion of social issues (24), and timing hearings to be held after a decision has been made or late in the decision-making process (11, 22). Other studies suggest that agencies contributed to meeting success by holding public meetings in combination with other forms of participation (24), providing significant technical assistance to citizens (26), conducting vigorous outreach (26), encouraging participation of native peoples (26, 29), discussing social issues (29), and fielding questions adequately (27).
Workshops

Workshops represent a middle ground between public meetings and citizen advisory committees, involving citizens in a task-oriented process that enables more discussion than public meetings over less time than a citizen advisory committee (10) (see Table 2 and refs 21, 28, 30, 31).

Workshop Process. One of the most carefully researched analyses of workshops explores both process and outcome measures (4). In 1979, the Army Corps of Engineers held public workshops in Miami and Sanibel Island, Fla., to determine whether to issue a General Permit (GP) for development in a specific wetland area. The “user-oriented” evaluation began by asking both prospective workshop participants and Corps personnel to define, before the workshops, their goals and objectives. These responses served as the basis of questionnaires administered after the two workshops ended. In Sanibel, where an environmentalist served as liaison between environmentalists and the Corps, participants who came to the workshop opposing the Corps and the GP concept left supportive of the process and the Corps itself. In Miami, on the other hand, no intermediary was used, and many GP opponents refused to attend the workshop, possibly to “retain their status as opponents” (4). Those who did attend indicated that most of their process goals had been met.

Similarly, according to one study, DOE workshop attendance was half the agency’s expectations, and some of the lowest ratings (which were fairly neutral) concerned measures of DOE’s accessibility, including ratings for outreach efforts and input into development of the meetings (30). However, participants rated the workshops positively in terms of criteria related to fairness and understanding.

Workshop Outcome. The outcomes of the workshops are mixed. On the positive side, participants in the DOE workshops rated the agency favorably on measures related to perception of agency responsiveness (30). In addition, the study of the Corps’ workshops on permitting wetlands found that the Sanibel workshop was both a process and outcome success (4). However, although the Miami workshop was seen by participants as a process success, measuring outcomes for the Miami workshop was problematic because participants’ goals were either too “vague” or too “extreme,” according to the author (4). Rosener attributes the failure to reach a consensus at the Miami workshop to differences in agency process: unlike the Sanibel workshop, there was no liaison to environmentalists, and this resulted in low attendance by opponents of the wetlands permit. The study concludes that the workshops’ differing outcomes show that participants’ “support for a process does not necessarily lead immediately to support for the outcomes” (4).

Similarly, a large U.S. Forest Service participation program in Colorado also was termed an outcome failure because it failed to build consensus. Those participants who attended one or more workshops did not see themselves in any more agreement with agency personnel than those who participated through letter writing and/or public meetings (31). A large percentage of both workshop attendees (85%) and
TABLE 3. Assessing the Usefulness of Citizen Advisory Committees^a

[Summary of the studies tabulated, with the origin of the criteria for success noted as theory/prior research or participants’ direct responses:
Beltsen, 1995 (39) — remediation of army sites, two CACs; process criteria (per case): timing of involvement, representation, interaction; outcome criteria: perceived influence on decision, agency responsiveness.
DOE, 1997 (33) — remediation, eleven CACs; process criteria: processes and procedures, interaction and exchange of viewpoints; outcome criteria: improve DOE decision making, achieve more acceptable actions.
Hannah and Lewis, 1982 (37) — large Midwestern city, nine CACs; process criteria: internal citizen control associated with multiple sources of information, nonagency channels for support, professional members, closeness of ties to the department; outcome criterion: influence on significant issues.
Houghton, 1988 (38) — Michigan city, nine CACs; process criterion: greater independence of CAC; outcome criteria: perceived and actual influence on decision.
Lynn, 1987 (32) — hazardous waste/toxics, two CACs; process criteria: timing of involvement, independence of CACs, existing citizen leadership and education, logistical support from local agencies, access to expertise; outcome criterion: influence on decision.
Plumblee et al., 1985 (34) — EPA water quality planning, two CACs; process criteria: conflicting expectations between citizens and agency, overdominance of EPA, delays in implementation of advice; outcome criterion: influence on decision.
Stewart et al., 1984 (35) — urban air quality planning; process criteria: timing of involvement, dominance of technical experts, expectations; outcome criterion: influence on decision.
Delli Priscoli, 1983 (36) — river planning, four CACs; process criteria: interaction, internal control; outcome: NA.]

^a In the studies above, criteria reflecting process and outcome goals suggest that, like public meetings and workshops, citizen advisory committees (CACs) have a mixed record. Process (+) indicates a process element, such as interaction, was positive, whereas (-) suggests the process was less favorable. Outcome (+) indicates public participation influenced decision making, although not necessarily favorably, whereas (-) indicates public participation had little impact on decisions. Under participants, “direct response” means that participants did not define criteria and instead responded to researchers’ criteria. NA means not applicable.
nonattendees (92%) felt they had not contributed to the final U.S. Forest Service decision.
Advisory Committees

Citizen advisory committees (CACs), groups of individuals appointed “for the purposes of examining an issue or set of issues” (10), meet over a longer term than public meetings or workshops and thus are believed to encourage more extensive interaction (see Table 3 and refs 32-39).

CAC Process. The empirical research gives CACs mixed grades on process. According to researchers, successes include an advisory group in Greensboro, N.C., which had such a positive experience interacting with government agencies that the citizen chair praised city and county employees for devoting “vast amounts of time, energy, and skill . . . [to] . . . describing the company’s plans and to enlarging the understanding . . . [about what] the ‘demanufacture’ of chemical wastes means” (32).
In addition, three process goals of DOE’s Site-Specific Advisory Boards (SSABs) are “generally being met”, according to results of a survey developed by a task force of SSAB members and DOE personnel, with guidance from evaluation experts (33). More than 70% of SSAB members and “closely associated” site staff agreed or strongly agreed that the SSABs facilitate interaction and exchange of information and viewpoints, according to averages of responses to multiple questions that define this goal. Somewhat fewer (64%) felt the SSABs had established effective procedures to do so. Favorable responses averaged 63% across questions that measured the extent to which the SSABs met the goal of providing useful advice.

On the other hand, although CACs are reputed to allow meaningful interaction among participants, other studies suggest that agencies hampered citizens’ ability to play meaningful roles. For example, EPA overly controlled two cases of water quality planning (34); planners’ value judgments limited CAC members’ ability to discuss options for air quality planning (35); and structural separation of advisory board members from planners was associated with participants’ low sense of involvement in four cases of water resources planning (36). However, another study points out the downside of CACs being independent from their sponsoring organizations (37). Members’ control over their own CACs was associated with the use of multiple sources of information beyond those provided by the agency, outreach to a variety of channels for support of their recommendations, and a high proportion of professional members. Also, the weaker the structural ties and affiliations of CACs to their sponsoring agencies, the more likely they were to be controlled by members, but the less likely they were to deal with significant issues, probably because agencies did not bring the issues to the table.

CAC Outcome. The three studies of CACs that note process problems also judged the CACs to have little impact on planning (34-36). Also, DOE participants’ relative satisfaction with process was not matched by their responses to outcome measures, “which show room for improvement” (33). Respondents indicated “a relatively low level of favorability” on the agency’s three outcome goals: SSABs improve DOE’s site decisions (56%), lead to more acceptable actions (56%), and contribute to confidence in the agency (57%). Among CAC successes are their impact on the development of a risk assessment (32) and constructive changes in a plan for siting of a hazardous waste facility (32). In addition, another careful study concludes, on the basis of the public record and perceptions of CAC members, that the greater the independence of CACs from agency control, the greater their influence on decisions (38).
A study of two advisory boards dealing with the cleanup of defense sites (39) also finds that members at one site were generally satisfied with the CAC’s process and were also optimistic about potential outcomes. Surprisingly, participants in the CAC at the other site, who had serious concerns about the quality of facilitation and discussions, were nonetheless positive about their ability to affect the remediation of the site.

In conclusion, these empirical studies as a whole do not point to CACs as the solution to agencies’ participatory problems. However, DOE’s well-designed research on its SSABs provides some cause for optimism, as does an earlier review of CACs with somewhat different criteria for inclusion of studies (40).
Implications

Drawing conclusions about “what works” vis-à-vis public participation is difficult because of the limited empirical research and great variation in the criteria for success. Therefore, we see the following as hypotheses in need of further research.

The Forms of Participation (Public Meetings, Workshops, or CACs) May Not Determine Process or Outcome Success. Studies of different forms sometimes yielded similar outcomes, while studies exploring the same form of participation sometimes yielded different outcomes. Because empirical studies of the same form of participatory process may yield such varied results, factors other than the mechanism for the participatory process undoubtedly account for variation in public participation success. For example, the history of the issue, the context in which the participation takes place, the expertise of those planning the effort, and the agency commitment may all have an impact on a particular program’s success or failure (1).

Although the form of public participation does not determine either process or outcome success, the form may lend itself to meeting some criteria more than others. For
example, the empirical evidence underlines the limitations of public meetings for reaching a consensus. The limitations generally ascribed to various forms of participation may be due, in part, to how the agency uses these forms rather than to the forms themselves. For example, an agency decision to rely on public meetings as a means of soliciting input may reflect the agency’s desire for public participation without the ongoing contact with stakeholders that a CAC requires. This scant agency commitment may constrain effectiveness as much as, or more than, any inherent limitations of public meetings. Thus, while practitioners may want a public participation taxonomy that recommends which form of public participation to use in a particular situation, this review suggests that the empirical data are not sufficient to construct one with confidence. Agency Actions (or Inactions) Have a Significant Impact on Both Process and Outcome Success. Agencies’ handling of participatory forms also may explain, in part, why each form gets mixed reviews. Agency actions such as overdominance of group dynamics (34, 35), failure to publicize forums appropriately (20, 30), placement of citizens in a reactive position (11, 20), and condescension toward participants (23) were associated with process and outcome limitations. Agencies also contributed to success, according to researchers, by providing technical assistance (26, 32), initiating approaches to engage in and improve dialogue with indigenous people (29), engaging liaisons to encourage participation (4), making a commitment to follow recommendations (4), and providing neutral, competent facilitation (39), among others. Although a Majority of Studies Suggest Participants’ Satisfaction with Participatory Processes May Be Associated with Satisfaction with Outcomes, Other Carefully Conducted Studies Raise Questions.
Merely glancing at the tables in this article makes apparent an association between positive processes and positive outcomes, as well as between negative processes and negative outcomes. This finding supports the arguments of advocates for public participation that agencies may have pragmatic, instrumental reasons to improve participatory processes. On the other hand, according to three carefully conducted studies, stakeholders may respond favorably to a given participatory process yet not respond favorably to its outcomes. For example, an Army Corps of Engineers workshop was rated satisfactory on process but unsatisfactory on outcomes (4). In another case, the final public meeting found both the Corps and citizen participants self-congratulatory (and the process ratings more than satisfactory), yet participants rated the outcome as less than satisfactory (22). Finally, not surprisingly, a recent DOE study of advisory boards found greater overall satisfaction with the process of citizen involvement than with the outcome (33). On the other hand, without a positive participation process, dissatisfaction with the outcome might have been even greater. Conversely, one study found continuing optimism about outcomes despite dissatisfaction with major elements of the CAC process (39). Obviously, research is critically needed to explore further the association between process and outcome (41). In particular, several studies suggest that participatory programs were notable for the institutional changes that resulted. One of the clearest examples is a Canadian inquiry that encouraged active public participation of native peoples by funding technical assistance, modifying the formats of public meetings, and allowing testimony in native languages. The innovations of this inquiry process, which facilitated a voice for native peoples, “became a key part of a political and historical transformation of the Canadian North” (29).
Thus, exploration of the association between public participation process and outcome needs to consider
that organizational or social learning may be one of the most lasting influences of a participatory effort. Exploring only immediately apparent programmatic outcomes may be shortsighted. Empirical Research Provides Some Support for Public Participation Rules of Thumb. Public participation rules of thumb are based on the accumulated experience of practitioners. The empirical research cited in this paper is not sufficient to “prove” or “disprove” any of these rules, but it does provide support for several. (1) Clarify goals. These studies suggest that a variety of public participation goals are clearly possible; however, some may be difficult to reconcile (e.g., agencies looking for support of plans that citizens want to block). (2) Begin participation early and invest in advance planning. Although the data are far from definitive, investment in these preliminary stages of public participation appears to be important. For example, insufficient or inappropriate outreach was cited as a problem (20, 30). Timing of the participation effort also was cited in cases in which participants were placed in a reactive position by virtue of being asked to consider agency proposals, often perceived as final decisions, rather than to join in earlier discussions of alternatives (e.g., 11, 22, 35). Conversely, early involvement was noted positively by other studies (22, 39). (3) Modify traditional participatory forums to meet process or outcome goals. As discussed earlier, we suggest that the dynamic between agencies and citizens may not be due to the participatory form. For example, it is possible to hold meetings earlier in the decision-making process or to develop meeting agendas that include presentations about, or generation of, alternative proposals, as requested by participants in one effort by the Army Corps of Engineers (22). Development of “community hearings” with formats different from formal ones may encourage the participation of those unaccustomed to public testimony (29).
In addition, hiring liaisons from outside agencies or providing technical assistance may promote attendance and outcome success (26, 28, 32). Expert facilitation can also transform forums (39). (4) Implement a public participation program with various forms of public participation. A program that seeks to involve many individuals but also strives for extended discussions to develop alternative solutions might use a CAC for sustained interactions, workshops to develop options, and any number of techniques (public meetings, interactive technology, various types of polls and surveys) to involve larger numbers of people. New methods of combining different techniques are under way; these include, for example, soliciting concerns of interest groups, involving a randomly selected citizen panel to review the concerns, and convening technical experts to provide scientific feedback on decision options (42). (5) Collect feedback on public participation efforts. Although most agencies have advisory boards, very few ask their members whether they “work.” The success of public meetings is gauged by hunches, and some agency scientists admit they are functionally illiterate in public participation. Narrowing the chasm between the capability of environmental science and the capacity for environmental governance will depend, in part, on research about “what works” in public participation.
Acknowledgments Although this review has been funded in part by the U.S. EPA under cooperative agreement CR820-796-01-0, it has not been subjected to agency review and therefore does not reflect the views of the agency, and no official endorsement can be inferred. Project officer Lynn Desautels provided consistent support and invaluable input. We also appreciate the suggestions of Nevin Cohen, Ginger Gibson, Billie Jo Hance, Daniel Mazmanian, Samantha Milby, Susan Santos, and three
anonymous reviewers. In addition, Jennifer Bulava provided research assistance. Nonetheless, the opinions contained here are solely those of the authors.
Literature Cited (1) National Research Council. Understanding Risk: Informing Decisions in a Democratic Society; Stern, P. C., Fineberg, H. V., Eds.; National Academy Press: Washington, DC, 1996. (2) Presidential/Congressional Commission on Risk Assessment and Risk Management. Framework for Environmental Health Risk Management; Presidential/Congressional Commission on Risk Assessment and Risk Management: Washington, DC, 1997. (3) Fiorino, D. Columbia J. Environ. Law 1989, 14(2), 226-243. (4) Rosener, J. B. J. Appl. Behav. Stud. 1981, 17(4), 583-596. (5) Tuler, S.; Webler, T. Hum. Ecol. Rev. 1995, 2, Winter/Spring, 62-71. (6) Renn, O.; Webler, T.; Wiedemann, P. A Need for Discourse on Citizen Participation: Objectives and Structure of the Book. In Fairness and Competence in Citizen Participation: Evaluating Models for Environmental Discourse; Renn, O., Webler, T., Wiedemann, P., Eds.; Kluwer: Dordrecht, The Netherlands, 1995, pp. 1-15. (7) Rosenbaum, W. A. Public Involvement as Reform and Ritual. In Citizen Participation in America; Langton, S., Ed.; Lexington Books: Lexington, MA, 1978, pp. 81-96. (8) Fiorino, D. J. Sci. Technol. Hum. Values 1990, 15(2), 226-243. (9) Checkoway, B. J. Appl. Behav. Sci. 1981, 17(4), 566-581. (10) English, M. R.; Gibson, A. K.; Feldman, D. L.; Tonn, B. E. Stakeholder Involvement: Open Processes for Reaching Decisions About the Future Uses of Contaminated Sites; Waste Management Research and Education Institute: Knoxville, TN, 1993. (11) Gariepy, M. Environ. Impact Assess. Rev. 1991, 11, 353-374. (12) Carnes, S. A.; Schweitzer, M.; Peelle, E. B.; Wolfe, A. K.; Munro, J. F. Performance Measures for Evaluating Public Participation Activities in DOE’s Office of Environmental Management; Oak Ridge National Laboratory: Oak Ridge, TN, 1996. (13) Webler, T. Right Discourse in Public Participation: An Evaluative Yardstick.
In Fairness and Competence in Public Participation: Evaluating Models for Environmental Discourse; Renn, O., Webler, T., Wiedemann, P., Eds.; Kluwer: Dordrecht, The Netherlands, 1995, pp. 35-86. (14) Balch, G. I.; Sutton, S. M. Putting the Audience First: Conducting Useful Evaluation for a Risk-Related Government Agency. Risk Anal. 1995, 15(2), 163-168. (15) Chelimsky, E. The Coming Transformations in Evaluation. In Evaluation for the 21st Century; Chelimsky, E., Shadish, W., Eds.; Sage Publications: Thousand Oaks, CA, 1997, pp. 1-26. (16) Hance, B. J.; Chess, C.; Sandman, P. Planning Dialogue With Communities: A Risk Communication Workbook; Center for Environmental Communication: New Brunswick, NJ, 1989. (17) Conner, D. M. Public Participation: A Manual on How to Prevent and Resolve Public Controversy; Conner Development: Victoria, British Columbia, Canada, 1997. (18) Hance, B. J.; Chess, C.; Sandman, P. Improving Dialogue with Communities: A Risk Communication Manual for Government; Center for Environmental Communication: New Brunswick, NJ, 1988. (19) Heberlein, T. A. Nat. Resources J. 1976, 16, January, 197-212. (20) Sinclair, M. The Public Hearing as a Participatory Device. In Public Participation in Planning; Sewell, D. W., Coppock, J., Eds.; Wiley & Sons: London, 1977, pp. 105-123. (21) Gundry, K. G.; Heberlein, T. A. J. Am. Plan. Assoc. 1984, 50(2), 175-182. (22) Mazmanian, D. A.; Nienaber, J. Can Organizations Change?; Brookings Institution: Washington, DC, 1979. (23) Kaminstein, D. S. Hum. Org. 1996, 55(4), 458-464. (24) Rowbotham, P. Alberta Law Rev. 1982, 3(32), 468-483. (25) O’Riordan, J. The Public Involvement Program in the Okanagan Basin Study. In Natural Resources for a Democratic Society: Public Participation in Decision-Making; Utton, A. E., Sewell, W.R.D., O’Riordan, T., Eds.; Westview: Boulder, CO, 1976, pp. 177-196. (26) Elder, P. The Environmentalist 1982, 2(1), 55-71. (27) Kihl, M. K. J. Appl. Behav. Sci. 1985, 21(2), 185-200.
(28) Rosener, J. B. Public Admin. Rev. 1982, July/August, 339-345. (29) Torgerson, D. Policy Sci. 1986, 19(1), 33-59. (30) Young, C. W.; Williams, G.; Goldberg, M. Evaluating the Effectiveness of Public Meetings and Workshops: A New Approach for Improving DOE Public Involvement; Argonne National Laboratory, Environmental Assessment and Information Division, USDOE (63): Argonne, IL, 1993.
(31) Twight, B. W.; Carroll, M. S. J. Forestry 1983, November, 732-735. (32) Lynn, F. Environ. Impact Assess. Rev. 1987, 18, 283-296. (33) U.S. Department of Energy. Site Specific Advisory Board Initiative 1997 Evaluation Survey Results; Office of Environmental Management: Washington, DC, 1997. (34) Plumlee, J.; Starling, J.; Kramer, K. Admin. Soc. 1985, 16(4), 455-473. (35) Stewart, T. R.; Dennis, R. L.; Ely, D. W. Policy Sci. 1984, 17(1), 67-87. (36) Delli Priscoli, J. The Citizen Advisory Group as an Integrative Tool in Regional Water Resources Planning. In Public Involvement and Social Impact Assessment; Daneke, G., Garcia, M., Delli Priscoli, J., Eds.; Westview: Boulder, CO, 1983, pp. 79-87. (37) Hannah, S.; Lewis, H. Internal Citizen Control of Locally Initiated Citizen Advisory Committees: A Case Study. J. Volun. Action Res. 1982, 11(4), 39-52. (38) Houghton, D. G. Am. Rev. Public Admin. 1988, 18(3), 283-296. (39) Beltsen, L. Assessment of Local Stakeholder Involvement; Western Governors’ Association: Denver, CO, 1995. (40) Lynn, F.; Busenberg, G. Risk Anal. 1995, 15(2), 147-162. (41) Webler, T. Hum. Ecol. Rev. 1996, 3(2), 245. (42) Renn, O.; Webler, T.; Rakel, H.; Dienel, P. C.; Johnson, B. Policy Sci. 1993, 26, 189-214.
Received for review May 18, 1998. Revised manuscript received December 21, 1998. Accepted December 21, 1998. ES980500G