Policy Analysis pubs.acs.org/est

A Missing Error Term in Benefit−Cost Analysis

R. Scott Farrow*

Department of Economics, University of Maryland, Baltimore County, Baltimore, Maryland, United States, and the Woods Hole Oceanographic Institution, Woods Hole, Massachusetts, United States

ABSTRACT: Benefit−cost models are frequently used to inform environmental policy and management decisions. However, they typically omit a random or pure error, which biases downward any estimated forecast variance. Ex-ante benefit−cost analyses create a particular problem because there are no historically observed values of the dependent variable, such as net present social value, on which to construct a historically based variance, as in the usual statistical approach. To correct this omission, an estimator for the random error variance in this situation is developed based on analysis-of-variance measures and the coefficient of determination, R². A larger variance may affect decision-makers' choices if they are risk averse or consider confidence intervals, exceedance probabilities, or other measures related to the variance. When applied to a model of the net benefits of the Clean Air Act, although the probability of large net benefits increases, the probability that the net present value is negative also increases, from 0.2% to 4.5%. A framework is also provided to assist in determining when a variance estimate would be better, in a utility sense, than using the current default of a zero error variance.



INTRODUCTION

Benefit−cost analysis has been in use in government and among academics for at least half a century to inform decisions such as environmental regulation or infrastructure investment.1 Nonetheless, benefit−cost analysis (BCA) has a checkered past with regard to the accuracy of its predictions and its usefulness in decision-making. Historically, BCA started out with text discussion of a decision, then moved to quantification of point estimates, and only gradually have concerns about variability and uncertainty become more common, if still not standard practice. But like a relict element, the history of point estimates seems to have obscured a basic issue for benefit−cost practitioners: there is a missing error term in almost all BCAs, and that error term, or more specifically its variance, could vary widely across applications, from the cost of pollution control methods to damages from a terrorist attack. This error is closest to a random error, or the error remaining after whatever analysts can explain with their modeling efforts.

Lester Lave, in whose honor this piece is offered, was no stranger to BCA or to problems hiding in plain sight. Lester (I'm proud to have shared a number of years with him at Carnegie Mellon) liked to challenge the conventional wisdom. His path-breaking work with Eugene Seskin on the links between air pollution and premature mortality and morbidity opened up interdisciplinary work that dramatically improved the estimation of the benefits of air pollution control.2 On BCA, like many an economist, he took a complicated position. He was sharply critical of those who thought BCA was a panacea that would easily identify the "best" choice for society.3 At the same time he carried out multidimensional valuation studies, such as those related to autos or power generation in either a single market or multimarket setting.4,5 He identified himself with analytical principles for creating useful BCAs that advocated a concern for statistical confidence in the outcome,6 while also serving on a committee that sharply criticized improper use of BCA tools by the U.S. Army Corps of Engineers.7 More supportively, he stated: "If benefit−cost analysis is to become as useful as Farrow, Toman, and I desire, environmental groups will have to invest some time in understanding the tools it employs; poor-quality and self-serving analyses will have to be found out for what they are; and resources will have to be made available to do good analyses".8

This article is about the way that even probabilistic benefit−cost analyses used to forecast impacts almost always ignore a fundamental element of random error, and they ignore that error precisely because BCA forecasts seldom confront observable data in the historical record. An implementable method is then developed to estimate the variance of the missing random error, and bounds are derived for when such estimates will be preferred to the existing default of entirely omitting the error and its variance.

© 2011 American Chemical Society



Received: August 16, 2011
Revised: December 6, 2011
Accepted: December 6, 2011
Published: December 6, 2011

dx.doi.org/10.1021/es202861z | Environ. Sci. Technol. 2012, 46, 2523−2528

BCA AS A MODEL AND ITS SOURCES OF ERROR

Many BCAs provide a point estimate; some provide sensitivity analysis, while probabilistic BCA seeks to provide a more complete characterization of the probability distribution of measures such as impacts, benefits, costs, or net present value


(the discounted value of all future benefits and costs that is the "bottom line" of BCAs).9 Government guidance recommends using the expected value for point estimates of benefits and costs but including information on the statistical distribution of key measures in decisions with particularly large impacts.10−12 After a measure of central tendency such as the mean or the median, the variance is a commonly used descriptive statistic and a determinant of measures such as confidence intervals and exceedance probabilities for a variable.

The most frequent use of benefit−cost analysis is to forecast, ex-ante, the economic efficiency impact of a governmental action. In contrast to a model that estimates the mean of an observed outcome, for example the number of auto accidents, in benefit−cost analysis a forecast is sought given some random parameters and conditioning factors, but observations are lacking for the dependent variable. Current practice in constructing a stochastic (simulated) benefit−cost analysis is to build, from the bottom up, an estimate of benefits and costs using distributions for both parameters and conditioning factors. Simulation tools are then used to sample over these distributions. An example for one component is multiplying the estimated number of fatalities by the value of a statistical life and specifying statistical distributions for each variable. This "bottom-up" type of modeling typically excludes unexplained variation in the constructed model and understates the variance or other measures of dispersion of the constructed measure. Omitting the random error term and its variance implies that the tails of the reported statistical distribution are too thin, allocating too little probability to events further from the mean on both the up and down sides. As the source of this error is the pure uncertainty which the modeler is unable to explain, it is difficult to investigate what may comprise the error.

In regression analysis, the error term is usually justified as the sum of a variety of individually small factors about which the modeler does not know, but other justifications are sometimes mentioned, such as functional approximations with higher-order terms omitted or items for which there are insufficient data but which, taken together, are thought not to bias the estimate of the mean while acknowledging imperfect modeling of the problem. This paper does not address possible sources of bias in the estimate of the mean, another whole topic in BCA and environmental modeling, but focuses on the variance.

Although the information desires of policy makers for decision-making under uncertainty remain ambiguous, information about the variance of a statistical distribution is believed to be useful from either a theoretical or a descriptive perspective.12−16 While risk is not variance, variance and possibly other elements of a statistical distribution are important in standard models of decision-making with risk. For instance, hypothesis testing, whether formal or informal by the analyst or the decision-maker, depends on the variance to distinguish whether a value is significantly different from zero or some other value, falls within a confidence region, or, in a related test, to determine the probability that some value is exceeded. Considering risk preferences, in a static setting, although a risk-neutral decision-maker focuses only on the expected value of a distribution, the frequently modeled risk-averse individual is affected by the variance.13 In a dynamic setting, even risk-neutral decision-makers can be affected by the variance of outcomes.17 From a more managerial perspective, we might ask whether information about the variance might affect a specific decision.

For instance, might government decision-makers care about the probability that actual costs will be larger than some budgeted amount, or the probability that net benefits or other measures of a regulation may be positive (negative) even if the mean value is negative (positive)? Separately, highly linked and complex systems, such as advanced technology transportation, oil drilling, or power systems, may be characterized by modes of failure, "normal" accidents, which may not be adequately captured in risk models of the system.18 In such cases there seems to be an "excess" probability, or underestimation, of failure, which may be at least partially caused by an underestimate of the predicted variance.

Concern about the existence and magnitude of the error variance can also be placed in context by considering what the EPA identifies as two sources of economic uncertainty: (1) the variation in explanatory variables and parameters, and (2) incomplete understanding of relationships, which is here associated with "pure" uncertainty.12 In each case, there is typically a concern that the estimates be centered on the true value, a condition such as unbiasedness, and that the variance be appropriately estimated. Examples for parameters are the statistical properties of regression coefficients, transfer values in benefit−cost analysis, and values determining toxicity. This paper, however, focuses on the pure error term, which is typically assumed to be unbiased with an expected value of zero but whose variance receives little or no consideration. Including a random error term that increases the variance in many simulation models is complicated by the absence of observations for constructed measures. The discussion that follows uses a benefit−cost analysis as the prototypical model, but other synthesized models may have similar qualities.
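The tail-thinning effect of omitting the pure error can be illustrated with a minimal Monte Carlo sketch. All distributions and dollar figures below are hypothetical choices for illustration, not values from any actual BCA:

```python
import random
import statistics

random.seed(1)
N = 200_000

def draw_net_benefit(pure_error_sd):
    """One bottom-up draw of net benefits ($M); all parameters hypothetical."""
    lives_saved = random.gauss(100, 15)   # uncertain effectiveness
    vsl = random.gauss(7.0, 1.5)          # value of a statistical life ($M)
    cost = random.gauss(600, 50)          # uncertain compliance cost ($M)
    nb = lives_saved * vsl - cost
    # Standard practice omits the pure error (sd = 0); including it spreads
    # probability into both tails without moving the mean.
    return nb + random.gauss(0.0, pure_error_sd)

base = [draw_net_benefit(0.0) for _ in range(N)]
augmented = [draw_net_benefit(150.0) for _ in range(N)]

p_negative_base = sum(nb < 0 for nb in base) / N
p_negative_aug = sum(nb < 0 for nb in augmented) / N
```

With these made-up numbers, both runs center near the same mean, but the error-augmented run places visibly more probability on negative outcomes (and on very large positive ones), which is exactly the information a risk-averse or exceedance-probability-oriented decision-maker would miss.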



AN ESTIMATOR FOR THE UNKNOWN RANDOM ERROR

A method to incorporate an estimate of the random error in a simulation model is developed based on a linear model and common measures used in the analysis of variance. Consider that many simulation applications can be modeled as a vector product of parameters, β, and variables or conditioning factors, X. Not infrequently, an X variable is itself a function of other parameters and conditioning factors, but the final aggregation, as with benefit−cost analysis, may follow the linear additive form. In that case, benefits are distinguished from costs by the sign of the parameter. Discounting can be incorporated into the appropriate parameters. Consequently, measures such as net present value, Y, can often be expressed as

Y = Xβ + ε    (1)

The random error term, ε, is unknown but is typically assumed (if considered at all) to be normally distributed with mean zero and constant variance σ². The benefit−cost analyst typically seeks an estimate of Y and information about its distribution without ever observing values of Y. The usual simulation approach to stochastic benefit−cost analysis utilizes either historical or subjective data to provide distributions for each element of β and X to estimate the distribution of Y, including its mean and variance. The typically implicit random error is omitted.

In contrast, in the standard statistical setting, values of Y exist, and so estimates of ε can be constructed as the deviation between the observed value and the estimated value and used to estimate the error variance. Estimators exist in this case for

the forecast variance when the parameters are stochastic and the X are fixed19 or with stochastic X, where the variance of Y would be determined by simulation methods.20

Consider the conditional forecast error variance in a least-squares regression context, where Y is observed, as in Wooldridge19 and presented as eq 2. The usual simulation approach estimates the first component of the right-hand side, which represents sample variability given a fixed value X0 (or, in more complex models, varying X), but omits the last term, the population random error. Wooldridge states, "In many examples, σ² will be the dominant term."

Var(Y|X0) = Var(Ŷ|X0) + σ²    (2)

Given the lack of observed Y in benefit−cost analysis, such that sample-based errors and variance cannot be estimated, how might one estimate the error variance in a simulation model? One approach could be a purely Bayesian methodology to directly estimate the error variance, confidence intervals, or other aspects of the linear model based on the analyst's prior beliefs.21 An alternative, and the approach followed here, is to develop an estimator based on aggregate model fit and case-specific information. The proposed estimator uses empirical information from the problem at hand to anchor the size of the error variance and uses a well-known but bounded measure of fit, the coefficient of determination or R², to scale the model sum of squares up or down to estimate the error variance. Given typical distributional assumptions, this is sufficient to define an estimator for the variance of ε in eq 1. Most analysts are familiar with R² as a measure of the fit of an equation, and information is sometimes available on submodels or calibration efforts in a particular setting, although an aggregate R² is not necessarily a simple aggregate of the components. If the analyst can subjectively provide an estimate of R², supported by whatever evidence is available, an estimator for the error variance can be developed as below. Consider the standard definition of R²:

R² = 1 − SSE/SST    (3)

where, from the standard analysis-of-variance decomposition and with the usual notation that "hats" denote estimates, an overbar denotes the mean, and N is the number of observations:
• SST, the total sum of squares, = SSE + SSM, where SSE is the error or residual sum of squares and SSM is the model or explained sum of squares.
• SSE/N is the mean square error, equal to ∑(Y − Ŷ)²/N.
• SSM/N is the model mean square, equal to ∑(Ŷ − Y̅)²/N, noting that this value is computable from a simulation model if the analyst assumes an unbiased model such that E(Y) = Y̅ = E(Ŷ).
• σ̂² = SSE/N is a consistent estimator for the error variance.

Suppose an analyst can provide a subjective estimate of R², R̂². Substituting the component elements for the total sum of squares and dividing both numerator and denominator by N results in

σ̂² = ((1 − R̂²)/R̂²)(SSM/N)    (4)

This estimator for the error variance is a proportional adjustment to the mean of the model sum of squares based on the ratio of the unexplained to the explained variance. Note that the standard but implicit assumption is that R² equals 1, in which case only parameter and conditioning-factor variability matter, as there would be no unexplained variance. In the approach suggested here, that assumption is relaxed and subjected to analysis. As the model mean square (SSM/N) in eq 4 is calculable from a simulation model based on the difference between the predicted and the average value, the right-hand side is estimable given an estimate for R². The resulting estimate of the random error variance can be used for individual draws of the random error term in a simulation using eq 1, given a distributional assumption that is typically, but not necessarily, normal with mean zero and the estimated variance.

APPLICATION: THE ADJUSTED VARIANCE OF THE NET BENEFITS OF AIR POLLUTION CONTROL

Models have been developed, including significant contributions by Lave, to estimate elements of the net benefits from controlling air pollutants such as particulate matter and sulfur dioxide, which embed mortality and morbidity risks.2,5,14,22 Variability in parameters and conditioning factors is sometimes built into the models, and in some models a type of Bayesian model averaging across parameters and conditioning factors is available. However, a pure error term has still been omitted. That error term may include small or omitted factors, functional approximations, or other modeling decisions. For instance, the choice of whether and how to include short-term or long-term exposure models, the measurement and effects of lagged or cumulative exposure, and measures of the value of a statistical life based on revealed or stated preferences all reflect modeling choices and data that may be in error, while some components, such as nonuse value, may be omitted. There may be interactions among pollutants that are not fully accounted for, or difficulty in distinguishing the marginal effects of pollution from other correlated but omitted variables.

In light of these possible sources of pure error, and to demonstrate the above procedure, a subjective estimated fit, R̂², of 0.6 is used for the simulation model of the net benefits of the Clean Air Act in 2010. This indicates that the net benefit model for criteria air pollutants is believed to capture about 60% of the total variability in net benefits. As an alternative to the modeler's subjective perception of fit, one may instead survey experts in the field or assess submodels and obtain estimates through other means. A distribution for this value is also possible. As will be developed in the next section, this estimate, with a standard utility loss function, will be preferred to assuming a perfect fit as long as the true R² is not above 0.8, an R² which seems highly unlikely to this author.

The model used for the base analysis is that of Farrow et al.,14 which was calibrated to EPA estimates for the net benefits of the Clean Air Act in 2010. The model was augmented to compute the model sum of squares based on the mean net present value of the net benefits of $81.2 billion. Equation 4 was then used to estimate the random error variance using the estimated model fit of 0.6. To create the pure-error-augmented model, a second simulation was run in which an additive, normal, independent, zero-mean random error was added to the base model, which had excluded the error term. The augmented model increased the standard error of the net present value such that there was a higher probability of large net benefits, but the probability that the net benefits were below zero also increased from 0.2% to 4.5%.
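The mechanics of eqs 1−4 can be sketched in a few lines. The sketch below uses the paper's reported mean ($81.2 billion) but substitutes a hypothetical normal base distribution, with a standard deviation of 28 chosen by this editor only so that the base-case P(NPV < 0) is roughly the reported 0.2%; because the actual Clean Air Act model is not normal, the augmented tail probability here (on the order of 1%) does not reproduce the paper's 4.5%:

```python
import math
import random

random.seed(2)
N = 100_000

# Base simulation: stand-in normal draws of net present value ($ billions).
# The mean 81.2 is from the paper; the sd of 28.0 is a hypothetical value.
base = [random.gauss(81.2, 28.0) for _ in range(N)]

# Model mean square SSM/N, computable directly from the simulation draws.
ybar = sum(base) / N
ssm_over_n = sum((y - ybar) ** 2 for y in base) / N

# Eq 4: scale SSM/N by the unexplained-to-explained variance ratio.
r2_hat = 0.6
sigma2_hat = ((1 - r2_hat) / r2_hat) * ssm_over_n
sigma_hat = math.sqrt(sigma2_hat)

# Error-augmented simulation: add an independent N(0, sigma2_hat) draw (eq 1).
augmented = [y + random.gauss(0.0, sigma_hat) for y in base]

p_neg_base = sum(y < 0 for y in base) / N
p_neg_aug = sum(y < 0 for y in augmented) / N

# Analytic cross-check under normality: total variance = SSM/N + sigma2_hat.
sd_total = math.sqrt(ssm_over_n + sigma2_hat)
p_neg_analytic = 0.5 * math.erfc((ybar / sd_total) / math.sqrt(2))
```

Note that with R̂² = 0.6 the total variance is the base variance divided by 0.6, so the standard deviation inflates by a factor of 1/√0.6 ≈ 1.29 regardless of the dollar scale.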

This change may be viewed as statistically significant; for instance, in a one-tailed test at the 99% level, the hypothesis that the net benefits are equal to zero would have been rejected in the first instance but not in the second.

An alternative approach, in a nonsimulation setting, could have proceeded analytically. If the variance of the dependent variable was estimated using just variability in parameters and conditioning factors, then that estimate is the model mean square in a full model that includes a random error term. The variance of the random error can be estimated using eq 4 and added to the earlier variance to recover the error-augmented, or total, variance. Standard hypothesis tests or p values can then be calculated with appropriate distributional assumptions. If the analytical approach is taken to the base data in this example, the estimated p value from a one-sided t test for the hypothesis that the net present value is zero is 4.3%, qualitatively similar to the simulation approach.

Incorporating the variance of the previously omitted error term added probability to both tails of the distribution. However, the policy process incorporates uncertainty in a much more ambiguous manner.16 Incorporating additional uncertainty through an estimate of the random error may simply inform the decision-maker that there is more probability in the extremes of the distribution than would be estimated by the standard simulation method. An analyst may wish to present both the basic set of results and the error-augmented set of results as one way to quantitatively capture uncertainty about the prediction.

WHEN IS SOME ESTIMATE OF THE ERROR VARIANCE BETTER THAN ASSUMING IT IS ZERO?

Many analysts, especially in policy applications, face the issue of whether it is better to include some estimate or to exclude the factor from the analysis. Usual tests of significance are one approach to this problem, as is statistical decision analysis. In frequent practice, there is an element of judgment about including or excluding a factor. Whether to assume that the error variance is zero or to include an estimate of the error variance is a variation on this problem. Another example is the current practice of the U.S. Army Corps of Engineers (COE) of not monetizing the value of statistical lives saved,23 said to be in part due to the controversy over assigning such a value. The result is an implicit value of zero for loss of life in the monetized benefit−cost analysis. Consequently, a basic framework is first developed for when a smaller error is made by incorporating some factor rather than letting the default be zero, followed by an application of that framework to whether to assume the error variance is zero, or alternatively that R² is equal to 1.

It is well known that a Bayesian decision-maker using a squared error loss function will minimize the expected utility loss by using the conditional expected value as an estimate (and an absolute-error decision-maker will use the median value).24 This is one argument for using estimates of central tendency in analyses, and it supports modeling interest in unbiasedness. However, in many benefit−cost or related debates, people are not explicit about the loss in utility that results from errors in estimation and may instead act more heuristically or behaviorally, using a default value, such as but not necessarily zero, or using an available estimate which may or may not be a measure of central tendency from a distribution. As a result, instead of specifying that a person seeks to minimize the expected loss, the hypothesized behavioral question can be phrased as follows: Given a loss function, how accurate does the estimate have to be in order to generate a smaller loss in utility compared to using a value of zero, or some other specified default value, as the estimate? In other words, which analysis, with or without the default, creates a smaller loss in utility for the user? This characterization of the problem, choosing a default such as zero or using an available estimate, may be viewed as simplistic in a decision-analytic sense, although it may correspond to substantial behavioral practice.

The approach taken here applies two frequently used loss functions: the squared error and the absolute error functions. The squared error loss function penalizes estimates that deviate from the true value at an increasing (quadratic) rate and treats the loss symmetrically for over- or underestimates. Hence critics of a policy may feel that large errors from the true value are penalized substantially more than errors close to the true value. Consider a squared error loss function (the result is the same for minimizing the absolute error when the default is zero). Define:

θ̂: the estimate
θ*: the true value
(θ̂ − θ*): the error when using the estimate
(0 − θ*): the error when using zero (the default) as the estimate

The answer to "When is the error using an estimate smaller than when a value of zero is used?" follows from comparing the squared errors in each case:

(θ̂ − θ*)² < (0 − θ*)²    (5)

Some algebraic manipulation yields the result that the error is smaller using the estimate instead of zero when

for θ̂, θ* positive: θ̂ < 2θ*    (6)

for θ̂, θ* negative: θ̂ > 2θ*    (7)

If the estimate and the true value are of different signs, then zero provides the smaller error, indicating the importance of a high degree of confidence regarding the sign of the estimate. In many cases, such as the value of a statistical life or the variance, there is presumably no debate about its positive sign.

What does this result tell us for those applying such a behavioral heuristic in choosing among estimates? First, while we may never know the true value of the parameter in question, as long as the estimate is between zero and twice the true value, the error of the analysis is smaller when the estimate is used instead of an implicit value of zero. This defines an acceptable (though not optimal) degree of imprecision and provides some guidance for reviewers or readers as to the required accuracy of the estimate to improve upon an implicit value of zero, even when there is a dispute among stakeholders or peer reviewers. The result can also be viewed as increasing the usefulness of a bounding analysis.25 If a bound is considered the credible upper value for the true value, the analyst will always improve upon an implicit value of zero if the estimate used is within two times the credible upper bound. Second, if an estimator applied to the data is statistically unbiased, then the expected value is equal to the true value and the condition will be met on average. Individual results or draws from the sampling distribution, however, may not meet the criterion. Consequently, a conservative approach, if a few estimates exist or if bias is a concern, can be to use only estimates that are less than two times the minimum credible estimate.
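The comparison in eqs 5−7, and its extension to a default other than zero, reduces to a one-line check. A minimal sketch with hypothetical values (including the R² case with a default of 1, developed in the remainder of the section):

```python
def estimate_beats_default(theta_hat, theta_star, default=0.0):
    """Eq 5, generalized: is the estimate's squared error smaller than the
    default's squared error, given a hypothetical true value theta_star?"""
    return (theta_hat - theta_star) ** 2 < (default - theta_star) ** 2

# Eq 6: for a positive true value, any estimate in (0, 2*theta_star)
# beats an implicit default of zero.
assert estimate_beats_default(5.0, theta_star=7.0)        # 5 < 2*7
assert estimate_beats_default(13.9, theta_star=7.0)       # just under 2*7 = 14
assert not estimate_beats_default(14.1, theta_star=7.0)   # past twice the true value
assert not estimate_beats_default(-1.0, theta_star=7.0)   # wrong sign: zero wins

# R^2 variant with a default fit of 1: for an estimate of 0.6 the tipping
# point is the midpoint 0.8; the estimate wins whenever the true R^2 < 0.8.
assert estimate_beats_default(0.6, theta_star=0.79, default=1.0)
assert not estimate_beats_default(0.6, theta_star=0.81, default=1.0)
```

The same function makes the symmetric negative-value condition of eq 7 easy to verify as well.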

Any particular estimate, or even the mean or median of a set of estimates, is not always guaranteed to meet this criterion. For instance, Viscusi and Aldy reviewed 30 published estimates of the value of a statistical life in the United States and concluded: "the value of a statistical life for prime-aged workers has a median value of about $7 million in the United States. Our meta-analysis...finds that 95 percent confidence interval upper bounds can exceed the lower bounds by a factor of two or more".26

Central to the question of this article, when to include an estimate of the error variance instead of assuming the variance is zero can be viewed as an extension of the above using a different default parameter value. The problem of preferring an estimate of the pure error variance, based on eq 4, can be restated as asking when it is better to assume a value of R² that is less than 1, instead of assuming the typical default of a perfect fit where R² is 1 (implying that the variance of the random error is zero). Consider the more general case of eq 5 in which an alternative point estimate, θ̂2, is considered in place of the default zero used previously:

(θ̂1 − θ*)² < (θ̂2 − θ*)²    (8)

The general result for when θ̂1 is preferred to the estimate θ̂2 is, after expanding both sides and canceling the common θ*² term,

(θ̂1² − θ̂2²)/(θ̂1 − θ̂2) = θ̂1 + θ̂2 < 2θ*    (9)

where the inequality direction reverses when θ̂1 < θ̂2. The prior result, for a parameter θ̂2 equal to zero, is a special case of eq 9. The default R² case for estimating the error variance, where θ̂2 is 1, implies no random error, and an estimate of fit θ̂1 below that default. Consequently, a smaller squared error will result from using an estimate of fit other than the default of 1 if

θ* < θ̂1/2 + 1/2    (10)

Equation 10 places the tipping point for the true value midway between the estimate and the default value of one. The expression can be used to explore the implications of an estimate, for example to provide a check on a subjectively chosen value of R². For instance, if the analyst believes that the model has an R² of 0.6 (θ̂1), then the implication from eq 10 is that the analyst's estimate is better than the default of a perfect fit as long as the true value is less than 0.8, halfway between the estimate of the analyst and the typical default value of 1.

Significant advances in the quantification and communication of uncertainty to decision-makers have been made possible with the use of Monte Carlo simulation. However, unless pure error is included in the analysis, a decision-maker may believe that the results of a variability analysis encompass the total variability. The method presented here allows the analyst several ways to present uncertainty, including breaking out its variability and pure-uncertainty components. While an analyst following the above methodology will be expected to be explicit about and to defend the choice of subjective fit, an analyst omitting such an estimate may now be asked to defend the assumption that the model is a perfect fit. The methodology presented here, for estimating the variance of a previously omitted error term and for guidance as to when that estimate is preferred to the current default assumption of a perfect fit, is not a panacea, but I like to think Lester and I could have had a good conversation about its usefulness.

AUTHOR INFORMATION

Corresponding Author
*Phone: 410-455-5922; e-mail: [email protected].

ACKNOWLEDGMENTS

Appreciation is extended to Andrew Solow and Gregory M. Duncan for comments on an earlier version. Partial funding has been provided by the John D. and Catherine T. MacArthur Foundation.

DEDICATION

This paper is dedicated to Lester B. Lave.

REFERENCES

(1) Porter, T. M. Trust in Numbers: The Pursuit of Objectivity in Science and Public Life; Princeton University Press: Princeton, NJ, 1995.
(2) Lave, L.; Seskin, E. Air Pollution and Human Health. Science 1970, 169, 723−733.
(3) Lave, L. Benefit-Cost Analysis: Do the Benefits Exceed the Costs? In Risks, Costs, and Lives Saved; Hahn, R., Ed.; Cambridge University Press: Cambridge, U.K., 1996.
(4) Lave, L. Conflicting Federal Regulations Concerning the Automobile. Science 1981, 11, 893−899.
(5) Hendrickson, C.; Lave, L.; Matthews, S. Environmental Life Cycle Assessment of Goods and Services; Resources for the Future Press: Washington, DC, 2006.
(6) Arrow, K.; Cropper, M.; Eads, G.; Hahn, R.; Lave, L.; Noll, R.; Portney, P.; Russell, M.; Schmalensee, R.; Smith, V. K.; Stavins, R. Is There a Role for Benefit−Cost Analysis in Environmental, Health, and Safety Regulation? Science 1996, 272 (April), 221−222.
(7) National Research Council. Inland Navigation System Planning: The Upper Mississippi River−Illinois Waterway; Transportation Research Board and Water Science and Technology Board: Washington, DC, 2001. http://www.port.pittsburgh.pa.us/docs/inland_navigation_system_planning.pdf.
(8) Lave, L. Commentary: Benefit-Cost Analysis. Environment 1999, 5.
(9) Boardman, A.; Greenberg, D.; Vining, A.; Weimer, D. Cost-Benefit Analysis: Concepts and Practice; Pearson-Prentice Hall: Upper Saddle River, NJ, 2011.
(10) Office of Management and Budget (OMB). Regulatory Analysis: Circular A-4; Washington, DC, 2003. Available at http://www.whitehouse.gov/omb/circulars/index.html.
(11) OMB. Guidelines and Discount Rates for Benefit-Cost Analysis of Federal Programs: Circular A-94; Washington, DC, 1992. Available at http://www.whitehouse.gov/omb/circulars/index.html.
(12) EPA, National Center for Environmental Economics. Guidelines for Preparing Economic Analyses; Washington, DC, 2010; pp 11−9. Available at http://yosemite.epa.gov/ee/epa/eerm.nsf/vwAN/EE-0568-50.pdf/$file/EE-0568-50.pdf.
(13) Eeckhoudt, L.; Gollier, C.; Schlesinger, H. Economic and Financial Decisions under Risk; Princeton University Press: Princeton, NJ, 2005.
(14) Farrow, S.; Ponce, R.; Wong, E.; Faustman, E.; Zerbe, R. Facilitating Regulatory Design and Stakeholder Participation. In Improving Regulation: Cases in Environment, Health and Safety; Farrow, S., Fischbeck, P., Eds.; Resources for the Future Press: Washington, DC, 2001.
(15) Hassenzahl, D. M. Implications of Excessive Precision for Risk Comparisons: Lessons from the Past Four Decades. Risk Anal. 2006, 26 (1), 1−12.
(16) Krupnick, A.; Morgenstern, R.; Batz, M.; Nelson, P.; Burtraw, D.; Shih, J.; McWilliams, M. Not a Sure Thing: Making Regulatory Choices Under Uncertainty; Resources for the Future: Washington, DC, 2006. Available at http://www.rff.org/RFF/Documents/RFF-RptRegulatoryChoices.pdf.
(17) Dixit, A.; Pindyck, R. Investment Under Uncertainty; Princeton University Press: Princeton, NJ, 1994.
(18) Perrow, C. Normal Accidents: Living with High-Risk Technologies, revised ed.; Princeton University Press: Princeton, NJ, 1999.
(19) Wooldridge, J. Introductory Econometrics, 3rd ed.; Thomson South-Western: Mason, OH, 2005; p 216.
(20) Feldstein, M. The Error of Forecast in Econometric Models when the Forecast-Period Exogenous Variables are Stochastic. Econometrica 1971, 39 (1), 55−60.
(21) Kadane, J.; Dickey, J.; Winkler, R.; Smith, W.; Peters, S. Interactive Elicitation of Opinion for a Normal Linear Model. J. Am. Stat. Assoc. 1980, 75 (372), 845−854.
(22) U.S. Environmental Protection Agency (EPA). Environmental Benefits Mapping and Analysis Program: BenMAP; 2009. Available at http://www.epa.gov/air/benmap/.
(23) U.S. Government Accountability Office (GAO). Oregon Inlet Jetty Project: Environmental and Economic Concerns Need to Be Resolved; GAO-02-803; Washington, DC, 2003. Available at www.gao.gov.
(24) DeGroot, M. Optimal Statistical Decisions; McGraw-Hill: New York, 1970; pp 226−233.
(25) Morgan, M. G. The Neglected Art of Bounding Analysis. Environ. Sci. Technol. 2001, 35, 162A−164A.
(26) Viscusi, W. K.; Aldy, J. The Value of a Statistical Life: A Critical Review of Market Estimates throughout the World. J. Risk Uncertainty 2003, 27 (1), 5−76.



NOTE ADDED AFTER ASAP PUBLICATION

There was an error in equation 1, and errors in the last two bulleted items between equations 3 and 4, in the version of this paper published February 8, 2012. The correct version was published February 22, 2012.
