Predicting Future Pollution Exceedances under Emission Controls

Robert M. Oliver

Department of Industrial Engineering and Operations Research, University of California, Berkeley, California 94720

This paper proposes a theoretical methodology to predict the distribution of future exceedances of an air pollution standard resulting from long-term emission controls at a single and possibly distant source. The inverse problem, namely, what steady-state emission will yield a given safety or violation standard, is also posed and solved. Finally, the theoretical model is extended to include seasonal or diurnal effects by means of a seasonal function S(t) which maps clock time into "seasonal" time. To obtain these results we derive an equilibrium rollback model based on conservation of emitted pollutants and show how it can be used to obtain the parameters of the Poisson counting distribution for exceedances of future air quality standards in terms of data obtained from historical pollutant records. We also formulate the problem of calculating what the reduction in steady-state source emissions must be in order for the probability of one or more exceedances per year to be less than a given number. In the case of log-normally distributed receptor concentrations it is shown that the mathematical solution of this problem is obtained from the unique root of a simple transcendental equation expressed in units of the geometric standard deviation of the receptor concentrations. The similarities and differences between these results and those obtained by Peterson and Moyers (1) for their MPR (multipoint rollback) algorithm are discussed.

Introduction

The subject of monitoring, measuring, and regulating air quality has become increasingly important in recent years (see ref 2). In particular, there has been increased interest in, and evidence for, viewing air pollutants as random variables, so that their measurement involves an understanding of the stochastic transport phenomena which carry the pollutants from sources to receptors as well as the statistical estimation problems which arise from different sampling techniques. In this author's experience there are at least two distinct types of problems one can encounter in predicting air quality and/or the number of exceedances of stated standards. In the first instance one may be interested in the relatively short-term prediction and control of air quality over minutes, hours, or days given knowledge of historical pollution levels and forecasts of meteorological conditions. On-line and real-time monitoring techniques combined with limited controls of emission sources may be feasible and practical in certain instances. Papers such as that by Horowitz and Barakat (3) point out some of the difficulties that arise in predicting high or maximum levels of air pollutants. On the other hand, long-term pollutant levels from power plants, manufacturing processes, etc., measured over long periods, months or years, may be difficult and expensive to control in view of the major plant modifications and capital investments that may have to be made to reduce source emissions. Larsen (4-7), Horie and Overton (8), and others voice concern about control problems that arise in this connection. A recent paper by Georgopoulos and Seinfeld (9) reviews the literature on statistical distributions of air pollutant concentrations and suggests that a "rollback" model should be based on the notion of mass conservation for nonreactive species. This observation leads to the result that the expected value of the (random) concentration at a receptor should be proportional to the emissions of the pollutant species under consideration, whether they be from man-made emission sources (which in principle can be controlled) or from so-called background sources (which may not be controllable). For an example of a procedure that cannot satisfy mass conservation see Knuth and Giroux (10).

Let me attempt to state a major unsolved problem in predicting the long-run impact of emission controls as best I understand it. For purposes of this paper we will assume that there exist (but we may not have) historical records of emissions E at one or more sources affecting the site of interest. Due to population growth and other factors the intensity of source emissions is expected to change slowly over time. At any given location z, other than the emission source itself, spatial and time-dependent observations of pollutant concentrations ψ(z,t) are recorded. Air quality goals are known and stated for the particular pollutant of interest, and exceedances of this standard can be monitored or estimated under existing conditions. Controls or abatement measures are to be imposed on the emission sources. These will lead to different future pollutant levels and concentration distributions at each site of interest. Given a known reduction in emission sources, we want to calculate the future distribution of exceedances of standards. What model(s) should we use? A second, inverse problem is the following: What future emission controls should we impose in order to achieve stated air quality goals? What models or theoretical techniques can one use to solve this difficult problem and to predict its impact on future exceedances? Before we can attempt to analyze and design emission control strategies, it is important to first discuss models for source-receptor relationships.

Source-Receptor Relationships

The model used in this paper to describe the source-receptor relationship can be written in the form

    ψ(z,t) = a(z,y) E(y) + b(z) + ε(z,t)    (1a)



where ψ(z,t) is the random receptor concentration at point z and time t, a is the diffusion or meteorological parameter depending on sites z and y and time t, and E(y) is the intensity of the source emission at y. The term b(z) is the background concentration at the receptor site, and ε(z,t) is a random sequence of error or noise terms, possibly correlated with one another. We will assume throughout that the expected value of the errors is zero, i.e., E[ε] = 0. It is important to stress the point that, in this model, randomness in ψ(z,t) is derived from randomness in the terms ε(z,t) and not necessarily from random backgrounds or random emissions. One can, if need be, also incorporate randomness in a and E, but that is not necessary for an understanding of this paper. A different model views the receptor concentration at time t as a superposition of emissions at previous times (s ≤ t) being transported (possibly diluted) from y to z, so that

    ψ(z,t) = ∫_{−∞}^{t} a(z,y,s,t) E(y,s) ds    (1b)

Stated another way, a unit emission at s contributes a(z,y,s,t) to the receptor concentration at time t. A special case of this model where transport times are explicitly excluded,

    ψ(z,t) = a(z,y,t) E(y,t)    (1c)

was used by Peterson and Moyers (1). In their model the random receptor concentration ψ(z,t) at place z and time t is proportional to the product of the random time-dependent emission rate E(y,t) at place y and time t and a random time-dependent meteorological or diffusion parameter a(z,y,t) which depends on the location of both source and receptor. [In the notation of Peterson and Moyers (1), ψ corresponds to c, a to D, and E to E. We use E to denote a physical emission quantity and E[·] for the mathematical operator denoting expectation of a random variable.] If one or both of the variables on the right-hand side of (1c) are random, then so also is the receptor concentration on the left-hand side. In both formulations, the diffusion or meteorological parameter a will depend not only on terrain but also on distance between source and receptor, dilution, diffusion, and heating effects.

We denote the stationary cumulative distribution function for pollutant concentrations at a particular site by

    F_ψ(x) = Pr{ψ ≤ x} = ∫_0^x f_ψ(u) du,    F_ψ(∞) = 1    (2a)

where ψ is the random concentration and f_ψ(x) is the probability density function. Throughout this paper the subscript on a density or distribution function denotes the random variable of interest. As is common practice in the scientific literature, we denote by

    F̄_ψ(x) = 1 − F_ψ(x) = Pr{ψ > x}    (2b)

the tail distribution, which is simply the probability that a pollutant level of x will be exceeded. One can usually obtain estimates of F_ψ(x) or F̄_ψ(x) from records of pollution data. If x is an air quality standard, we say that an exceedance has occurred whenever the concentration of the pollutant exceeds x.

If we consider only stationary random processes and, in the interest of simplicity, temporarily drop the space-dependent notation, the probability distribution of ψ in the additive model of (1a) is related to the error terms as follows:

    F_ψ(x) = Pr{ψ ≤ x} = F_ε(x − aE − b)    (3a)

In the analysis of their multipoint rollback (MPR) strategies, Peterson and Moyers (1) implicitly assume that a, E, and ψ are stationary random processes, i.e., their probability distributions are invariant to time shifts, and furthermore that the random variables a and E are independent of one another. As we will see, this independence assumption explicitly excludes emission control strategies or abatement policies which depend on meteorology or forecasts of meteorology. In the multiplicative model of (1c), where both a and E are random and assumed independent of one another,

    F_ψ(x) = Pr{aE ≤ x} = ∫_0^∞ f_a(u) F_E(x/u) du    (3b)

in terms of the density functions f_a and f_E. The special case of (3b) when E is constant yields the obvious result

    F_ψ(x) = F_a(x/E)    (3c)

In this case, the distribution of ψ is a constant scaling of the distribution of a. Although these mathematical models sometimes yield the same results for receptor-pollutant concentrations (described more fully later in the text), there are certain features that can be included in (1a) that are not available to (1c). In its present form (1c) does not allow transport lags between an emission at y at time t and a pollutant reading at z at some later time s > t. In other words, simultaneity is implied. The noise terms ε(z,t), on the other hand, may be strongly autocorrelated and thus allow for source-receptor lags.
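The additive model (1a) is easy to explore numerically. The following sketch is purely illustrative and not part of the original analysis: it simulates hourly concentrations ψ = aE + b + ε with autocorrelated AR(1) noise and estimates the tail probability F̄_ψ(x) empirically. All numerical values (a, E, b, the noise parameters, and the threshold x) are assumptions chosen only for the illustration.

    import numpy as np

    rng = np.random.default_rng(0)
    a, E, b = 0.05, 80.0, 1.0            # diffusion parameter, emission intensity, background
    phi, sigma_eps = 0.7, 1.5            # AR(1) autocorrelation and innovation scale
    n_hours = 8760

    eps = np.zeros(n_hours)
    for t in range(1, n_hours):
        eps[t] = phi * eps[t - 1] + sigma_eps * rng.standard_normal()

    psi = a * E + b + eps                # hourly receptor concentrations, model (1a)
    x = 9.0                              # hypothetical exceedance threshold
    print(f"E[psi] ~ {psi.mean():.2f}, Pr(psi > {x}) ~ {np.mean(psi > x):.4f}")

The empirical tail frequency obtained this way is the kind of estimate of F̄_ψ(x) that, in practice, would be read off historical pollution records.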

Equilibrium Abatement and Rollback Models for Sources

Consider single-source emissions which, as a result of the effects of many random meteorological variables, terrain, and instrumentation or other measurement errors, yield random concentrations at a single, possibly distant, receptor. We argue that, in the absence of background emissions, controls which reduce the average emission rate of pollution sources by a factor κ also reduce the average or expected total amount of pollutants deposited onto a given site over a long time period by the same factor. Thus, in the long run the average hourly pollutant concentration will also be reduced by this factor. The criticisms that Peterson and Moyers (1) level at the "single point" rollback procedures recommended by EPA are probably well founded. But they do not mention the most important requirement unsatisfied by these models, namely, mass conservation over long periods of time. This is precisely the issue raised in the paper by Georgopoulos and Seinfeld (9). For an example of a procedure that cannot satisfy mass conservation see Knuth and Giroux (10).

Suppose that E (deterministic) is the intensity or strength of the emission source, b is the (deterministic) background emission rate, and a is a constant of proportionality; then the expected concentration measured at a receptor is given by

    E[ψ] = aE + b    (4a)

The left-hand side is the expected value of a random pollutant concentration which is composed of two parts: one part due to a source with emission intensity E and the other part due to background emissions in the vicinity of the receptor. Note that (4a) is obtained by taking expectations of both sides of (1a) with E[ε] = 0. The constant a does not have to be a number less than one but, typically, decreases with increasing distance between source and receptor and is reduced if the pollutant species is a reactive one. For the same site, source location, and background levels but with a new emission intensity E' and a new expected concentration E[ψ'] we have

    E[ψ'] = aE' + b    (4b)

where a and b remain unchanged. From now on we will use the convention that primed quantities such as ψ' or E' apply to future periods (with emission controls) and unprimed quantities apply to the present (without emission controls). On solving for the percentage change or "rollback" in emission intensity we obtain

    R = (E' − E)/E = (E[ψ'] − E[ψ])/(E[ψ] − b)    (5a)

in terms of old and new expected concentration levels. Note that the left-hand side is source dependent and the right-hand side is receptor dependent and includes only expectations of pollutant concentrations. If the growth factor in expected concentrations is κ = E[ψ']/E[ψ], then (5a) yields

    R = (κ − 1) E[ψ]/(E[ψ] − b)    (5b)

R is negative or positive depending on whether κ is less than or greater than 1. We assume, of course, that E[ψ] > b. When background levels are negligible, the rollback factor is R = κ − 1. Alternatively, κ = 1 + R is the reduction in average pollution concentrations at a receptor if R is the long-run percentage decrease in emissions at a source. How are the results in (5) affected by random emissions of intensity E or by randomness in the background b? On reflection the reader should realize that mass conservation requires that (4a) and (4b) continue to be valid if we substitute the expectations E[E] and E[b] for their deterministic counterparts. In these cases the correct version of (5a) and (5b) becomes

    R = (E[E'] − E[E])/E[E] = (E[ψ'] − E[ψ])/(E[ψ] − E[b])    (5c)

What happens in the case of a reactive species where only a fraction of the emitted pollutant returns to earth? In this case only the constant a will be made smaller by the loss of some fraction of the original species. Since (5a-c) are independent of the constant a, the loss of some fraction of the species will not affect the rollback models described above; i.e., reactive pollutants will not affect the steady-state results, so that one again obtains

    R = (E[ψ'] − E[ψ])/(E[ψ] − b)    (5d)

A recent study by the National Academy of Sciences (11) in connection with acid rain concludes that there is both empirical and theoretical evidence for linearity in the source-receptor relationship when applied to long-term average emissions over large areas. While the report did not address specific source-receptor relationships, the physical air pollution process would also seem to support this conclusion, provided long-term meteorology remains unchanged and any chemical reactions induced by the pollutants are not limited or saturated in such a way as to produce a nonlinear relationship. Larsen (12) frequently refers to a "rollback" formula

    R = (gc − q)/(gc − b)    (5e)

where g is a growth factor, c is the present pollutant concentration, b is background, and q is a desired air quality goal. This formula was not derived from a consideration of random variables but rather was based on unclear deterministic arguments. The form of (5a,b,d) is similar to Larsen's model (5e) except that no air quality standards or extremes appear in it. They should not, as mass conservation yields results for long-term expectations. Thus, the first result of this paper is a statement and result for expectations under stationary but possibly autocorrelated random receptor concentrations. It is important to emphasize that, since the rollback is independent of emissions E, one does not have to have historical measurements of emissions in order to apply the rollback formula.
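As a small numerical illustration of the rollback relations (5a) and (5b), the sketch below computes the required fractional change in source emissions from present and target expected receptor concentrations and a background level. The input numbers are hypothetical and chosen only to exercise the algebra.

    def rollback(E_psi_now, E_psi_future, background):
        """Return (R, kappa): R from (5a)-(5b), kappa = E[psi']/E[psi]."""
        kappa = E_psi_future / E_psi_now
        R = (E_psi_future - E_psi_now) / (E_psi_now - background)
        return R, kappa

    R, kappa = rollback(E_psi_now=12.0, E_psi_future=6.0, background=2.0)
    print(f"kappa = {kappa:.2f}, required fractional emission change R = {R:+.1%}")
    # With negligible background the same call gives R = kappa - 1.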

Receptor Concentration Distributions and Poisson Exceedances

In order to capture the probabilistic behavior of exceedances at "downwind" sites, one must consider at least three different random variables and their distributions. The first of these is the random variable that measures the pollutant concentration at a particular receptor site at a particular time. With this distribution and the threshold level which defines an exceedance, it is possible to calculate the marginal probability that an exceedance will occur at a given site at a particular time. Under appropriate assumptions for the joint probability of successive exceedances one can also describe the distribution of the random variable measuring elapsed time between two such exceedances; thus, in principle one can obtain the probability counting distribution for the random number of exceedances in a large interval of time, say a month or a year, and from the latter the probability of violation of a standard.

If the time between exceedances is large, and the probability that an exceedance will occur in a small interval of time is close to zero and proportional to the length of the subinterval, an exceedance can be viewed as a "rare" event. In such cases the Poisson distribution is the appropriate model for the probability that a given number of exceedances will occur over a finite period of (continuous) time. See, for example, such references as Breiman (13) on Poisson processes and the early work of Cramér and Leadbetter (14) which shows that the number of exceedances for quasi-stationary correlated Gaussian processes in a finite interval of time is asymptotically Poisson. Consider an entire year of hourly observations. If it is found that there occur on average λ exceedances per year, the probability that there will be m exceedances in any particular year would then be given by the Poisson formula

    p_m = e^{−λ} λ^m / m!    m = 0, 1, 2, ...    (6a)

As an example, let the yearly average number of exceedances be λ = 10. In this instance the probability of no exceedances in the year is p_0 = e^{−10} = 4.5 × 10^{−5}, and the probability of exactly 10 exceedances is p_10 = 0.125. In other words, the chance that there will be no exceedances is almost negligible, and the chance of exactly 10 is about one in eight. Excluding for the moment seasonal effects, the average number of exceedances of level x in the Poisson distribution of (6a) just equals the (hourly) probability that an exceedance of x will occur in 1 h times the number of hours in a year. In other words, the average number of exceedances in (6a) equals the number of hours in a year times the probability of an hourly exceedance, so that

    λ = 8760 F̄_ψ(x)    (6b)
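The arithmetic of the λ = 10 example is easy to reproduce. The short sketch below simply evaluates the Poisson formula (6a) and recovers the probabilities quoted above.

    from math import exp, factorial

    def poisson_pmf(m, lam):
        """Probability of exactly m exceedances when the mean count is lam, eq (6a)."""
        return exp(-lam) * lam**m / factorial(m)

    lam = 10.0
    print(f"p_0  = {poisson_pmf(0, lam):.2e}")    # ~ 4.5e-05
    print(f"p_10 = {poisson_pmf(10, lam):.3f}")   # ~ 0.125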

Table I. Monthly Exceedances at a Receptor Site

month         Jan  Feb  Mar  Apr  May  June  July  Aug  Sep  Oct  Nov  Dec  total
exceedances     0    1    0    2    1     3     8    4    1    1    0    0     21

Table I records a total of 21 exceedances at one of six sites in a total of 8760 h. Thus, if exceedances were equally likely (we know they are not!) to occur in any hour of the year, the probability of an hourly exceedance is about one in 417, or 0.0024. To state it another way, the expected rate of exceedances is about 0.0024 per hour, or 1.75 per month. Later in this report we modify this calculation to include the obvious seasonal effect suggested by the data. If the air quality standard were stated, as many of us feel it should be, in terms of a small probability of exceedance of a given level, it is relatively easy to compute that violation probability from the Poisson distribution for counts of exceedances. If we want the probability of exceeding an hourly standard in 1 year to be less than or equal to Q_0, we then have

    Q_0 ≥ 1 − p_0 = 1 − e^{−λ} = 1 − e^{−T F̄_ψ(x)}    (7)

where T is the period of interest. A slightly more complicated situation arises if instead of (7) we want the probability of more than one exceedance per year of a stated standard to be less than or equal to a given value. While the problem can be easily stated in mathematical form, its solution is more difficult.
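The following sketch ties Table I to eq 7: it estimates the hourly exceedance probability from the 21 recorded exceedances in 8760 h and then evaluates the resulting annual violation probability under the Poisson model.

    from math import exp

    exceedances, hours = 21, 8760
    p_hourly = exceedances / hours              # ~ 0.0024, about one in 417
    lam = hours * p_hourly                      # annual Poisson rate (21 here)
    print(f"p_hourly = {p_hourly:.4f}, lambda = {lam:.1f}, "
          f"Pr(one or more exceedances per year) = {1.0 - exp(-lam):.6f}")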

An Example

To illustrate how the rollback formula can be used in conjunction with the Poisson counting distribution, consider an example where the density function for receptor concentrations is exponential, i.e.,

    f_ψ(x) = (1/aE) e^{−x/aE}    (8a)

and where the background concentrations are negligible. In this case both models 1a and 1c yield identical results. Obviously, a single parameter E[ψ] = aE characterizes this distribution. I want to show that, by reducing the constant emission level, a constant scaling of all receptor concentrations is thereby obtained, and the Peterson-Moyers (1) algorithm is not used, but violation standards for air pollution goals can be met. To solve for the new (reduced) emissions which yield a given probability of violation (one or more exceedances per year) means solving the equation

    Q_0 = 1 − exp(−T e^{−x/aE'})    (8b)

for E' as a function of Q_0, x, a, and T, the time period of interest. For purposes of this example we will assume a year, or T = 8760 h, an exceedance threshold x = 30 ppb, an expected receptor pollutant level of aE = E[ψ] = 4.953 ppb, and a violation probability (one or more exceedances) Q_0 = 0.1. The simplest way to solve for E' is to eliminate a and obtain

    κ = aE'/aE = −(30/4.953)/ln[−ln (1 − 0.1)/8760] = 0.532    (8c)

The new receptor concentration density function is

    f_ψ'(x) = (1/κaE) e^{−x/κaE} = 0.38 e^{−0.38x}    (8d)

and the new value for expected receptor concentrations is E[ψ'] = (0.38)^{−1} = 2.64. The density of low concentrations has been increased, the density of high concentrations has been decreased, the desired low probability of annual violation has been met, conservation is satisfied, and, as it must be, E[ψ'] = κE[ψ].

While this is an example of MPR rollback in that each concentration is reduced by a constant factor, note that the Peterson/Moyers algorithm is not used and gives different results, as one can easily show. In fact, this example points out an important theoretical flaw in the Peterson and Moyers (1) paper, in that the scaling constant they use to reduce random emissions is a random variable estimated from historical data rather than a deterministic factor κ or R computed from a desired extreme of the process. In principle, with F_ψ, F_E, and F_a given, one should be able to formulate a solution for κ as a deterministic and unique number independent of historical receptor concentrations. In the Peterson/Moyers algorithm (step 2, p 1441) the rollback factor R is a random variable since its numerator depends on a maximum observed concentration. The distinction should be between estimating F_ψ or F_E and calculating, not estimating, R.
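The exponential example can be checked numerically. The sketch below solves (8b) in closed form for the reduced mean level aE' (and hence κ) with the inputs stated above; up to rounding it reproduces the κ ≈ 0.53 and E[ψ'] ≈ 2.6 quoted in the text.

    from math import exp, log

    T, x, aE, Q0 = 8760.0, 30.0, 4.953, 0.1

    hourly_rate = -log(1.0 - Q0) / T            # required hourly exceedance probability
    aE_new = -x / log(hourly_rate)              # from exp(-x/aE') = hourly_rate
    kappa = aE_new / aE

    print(f"aE' = E[psi'] = {aE_new:.2f}, kappa = {kappa:.3f}")
    print(f"check Q0: {1.0 - exp(-T * exp(-x / aE_new)):.3f}")   # ~ 0.10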

Log-Normal Receptor Concentrations

An analysis of much pollution data suggests that a log-normal distribution satisfactorily describes hourly pollutant concentrations. This conclusion has been reported by Larsen (15) and others. See also the recent survey paper by Georgopoulos and Seinfeld (9). These studies indicate that in many rural and city locations air pollutant concentrations are well described by the log-normal probability density function

    Pr(x < ψ ≤ x + dx) = [1/(xσ√(2π))] exp[−(ln x − μ)²/(2σ²)] dx    (9)

As before, ψ is the random pollutant concentration. The parameter μ is the expected or mean value of the logarithm of concentrations, and σ² is the variance of the logarithm of concentrations. An alternate way to describe this probability density is to say that the logarithm of pollutant concentrations is normally distributed with mean μ and variance σ². For further details see Aitchison and Brown (16), Larsen (15), or Oliver (17). The probability that the log-normally distributed pollutant concentration will exceed x can then be expressed as

    F̄_ψ(x) = 1 − Φ[(ln x − μ)/σ]    (10a)

where

    Φ(y) = (1/√(2π)) ∫_{−∞}^{y} e^{−t²/2} dt    (10b)

is the well-known unit normal. For large x, the log-normal probability in (10a) is given by

    F̄_ψ(x) ≈ σ exp[−(ln x − μ)²/(2σ²)] / [(ln x − μ)√(2π)]    (10c)

Thus, as we suggested earlier, the probability that a concentration will exceed x decreases very rapidly with large x.
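The quality of the large-x approximation (10c) is easy to check numerically. The sketch below compares the exact log-normal tail (10a) with (10c), using for illustration the parameter values μ = 1.35 and σ² = 0.5 that appear in the worked examples later in the text.

    from math import erf, exp, log, pi, sqrt

    def Phi(z):
        """Standard normal cumulative distribution function."""
        return 0.5 * (1.0 + erf(z / sqrt(2.0)))

    mu, sigma, x = 1.35, sqrt(0.5), 30.0
    w = (log(x) - mu) / sigma

    exact = 1.0 - Phi(w)                               # eq (10a)
    approx = exp(-0.5 * w * w) / (w * sqrt(2.0 * pi))  # eq (10c)
    print(f"exact tail = {exact:.5f}, large-x approximation = {approx:.5f}")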

Expected pollutant concentration can be written as

    E[ψ] = e^{μ + σ²/2}    (11a)

It has been argued by Larsen (12) and others that if source locations and meteorological and terrain conditions remain constant, the geometric standard deviation of the log-normal pollutant distribution remains unchanged. That is to say, in the future we should expect equality of the primed and unprimed quantities, e^{σ'} = e^{σ}; thus, the probability distribution of given concentrations and the probability that an exceedance of level x will occur in 1 h are obtained from (9) when one substitutes μ' for μ, leaving σ unchanged. To obtain μ' from long-run conservation in the absence of background emissions,

    E[ψ'] = e^{μ' + σ²/2} = κ e^{μ + σ²/2} = κ E[ψ]    (11b)

or

    μ' = μ + ln κ    (11c)

In other words, the parameters for the future log-normal distribution of pollutant concentrations are given in terms of the old parameters by (11c) if meteorology, terrain conditions, and emission source locations remain unchanged. If the emission rate is doubled, κ = 2 and μ' = μ + 0.693; if the emission rate is halved, μ' = μ − 0.693. We are now in a position to calculate the distributions of future concentrations, the distribution of their extreme values, the expected return periods of these extremes, and the Poisson counting distribution of future exceedances. Under log normality the probability that the future concentration level ψ' will exceed x in a given hour is obtained directly from (10a) as

    Pr(ψ' > x) = 1 − Φ[(ln x − μ')/σ] = 1 − Φ[(ln (x/κ) − μ)/σ] = F̄_ψ(x/κ)    (12)

The effect of multiplying the expected emission intensity by the factor κ makes the probability that a future concentration will exceed x equal to the probability of exceeding a reduced threshold x/κ under existing conditions. Note that when we discuss the effects of abatement, κ is assumed to be less than 1. Thus, x/κ is greater than x, and the future probability of exceeding x under emission control is, as it should be, smaller than the probability of exceeding x. If emissions are halved, for example, the probability of a future exceedance is much smaller than half the probability of an exceedance under existing conditions. To state it another way, reducing emissions by a factor κ is equivalent to increasing the exceedance threshold (but not the probability of exceedance) by the same factor. To predict the effect of abatement controls on the counts of exceedances, we now substitute the result obtained in (12) into (6) to obtain the distribution of future exceedances as

    p'_m = e^{−λ'} (λ')^m / m!    m = 0, 1, 2, ...    (13a)

The future average rate of exceedances λ' = 8760{1 − Φ[(ln x − μ')/σ]} = 8760{1 − Φ[(ln (x/κ) − μ)/σ]} depends, of course, on the abatement factor κ. The dependence of λ' on large x values can also be obtained by making use of the result in (10c):

    λ' ≈ 8760 σ exp[−(ln (x/κ) − μ)²/(2σ²)] / [(ln (x/κ) − μ)√(2π)]    (13b)

Again, the important conclusion one obtains from this equation is the very rapid and nonlinear decrease of the average Poisson rate of exceedances with increasing x. For example, if μ = 1.35, σ² = 0.5, and x = 30 ppm, then λ = 16.3. With the same μ and σ values but a new threshold of x = 60 ppm, λ decreases to 0.5. In this example doubling the exceedance threshold reduces the average number of exceedances by a factor of 33! It should also be noted that linear scaling laws do not apply to the expected return period for a given concentration even though the average pollution concentrations may be linearly related to emission intensity. The expected value of the random return period for a threshold x is given by

    τ(x) = E[return period] = {1 − Φ[(ln x − μ)/σ]}^{−1}    (14a)

By reducing the expected emission rate to a new (lower) value, we reduce μ to μ' = μ + ln κ. With σ' = σ the new return period for the same threshold x is therefore given by

    τ'(x) = {1 − Φ[(ln (x/κ) − μ)/σ]}^{−1}    (14b)

Again, we see that τ'(x) does not equal κτ(x); i.e., the expected return periods do not scale linearly with changes in emission rates. For κ > 1 the ratio τ'(x)/τ(x) is much less than κ; for κ < 1 the ratio τ'(x)/τ(x) is much greater than κ.
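The nonlinearity of exceedance rates and return periods in the abatement factor can be verified directly. The sketch below evaluates (6b) and (13) for the text's μ, σ², and x = 30 and for an assumed, illustrative abatement factor κ = 0.5; the return-period ratio it prints is far from the linear value κ.

    from math import erf, log, sqrt

    def Phi(z):
        """Standard normal cumulative distribution function."""
        return 0.5 * (1.0 + erf(z / sqrt(2.0)))

    mu, sigma, x, hours = 1.35, sqrt(0.5), 30.0, 8760
    kappa = 0.5                                              # assumed abatement factor

    lam = hours * (1.0 - Phi((log(x) - mu) / sigma))         # eq (6b), ~16/yr
    lam_new = hours * (1.0 - Phi((log(x / kappa) - mu) / sigma))
    print(f"lambda = {lam:.1f}/yr, lambda' = {lam_new:.3f}/yr, "
          f"return-period ratio tau'/tau = {lam / lam_new:.1f}")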

Steady-State Emission Control under Log Normality

Using (13), we want to find the value of κ which yields the equality in (7). Now for large x and small κ, λ' will be a small number. Since the exponential function can be written as the power series

    e^{−λ'} = 1 − λ' + (λ')²/2 − ...

we can neglect quadratic and higher terms when λ' is small. Thus, the equality in (7) can be simplified to yield

    Q_0 = λ' = 8760{1 − Φ[(ln (x/κ) − μ)/σ]}    (15)

We are now in a position to calculate the emission abatement factor κ required to achieve the pollution standard x by solving a simple transcendental equation. Define w = [ln (x/κ) − μ]/σ. Then, with the aid of the large-x approximation (10c), (15) can be rewritten as

    3495 e^{−(1/2)w²} − Q_0 w = 0    (16a)

This equation has a unique root w, which can then be used to obtain κ. Substitute the solution for w into the equation

    κ = x e^{−(μ + σw)}    (16b)

to obtain the desired solution to the original problem. A numerical check can be obtained by substituting the solution into the conservation equations of the rollback model.
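A root of the transcendental equation (16a) is easily found by bisection, since the left-hand side is strictly decreasing for w > 0. The following sketch solves (16a) and applies (16b) for the worked example given below (x = 30, μ = 1.35, σ² = 0.5, Q_0 = 0.1); the inputs and the bracketing interval are assumptions made only for the illustration.

    from math import exp, sqrt

    def solve_w(Q0, lo=1.0, hi=10.0, tol=1e-10):
        """Unique positive root of f(w) = 3495*exp(-w**2/2) - Q0*w (f is decreasing)."""
        f = lambda w: 3495.0 * exp(-0.5 * w * w) - Q0 * w
        while hi - lo > tol:
            mid = 0.5 * (lo + hi)
            lo, hi = (mid, hi) if f(mid) > 0.0 else (lo, mid)
        return 0.5 * (lo + hi)

    x, mu, sigma, Q0 = 30.0, 1.35, sqrt(0.5), 0.1
    w = solve_w(Q0)
    kappa = x * exp(-(mu + sigma * w))           # eq (16b)
    print(f"w = {w:.3f}, kappa = {kappa:.3f}")   # ~ 4.25 and ~ 0.386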

Table 11. Comparison of This Paper with Peterson/Moyers Paper

topic

Peterson/Moyers article (I)

this paper

(1)long-run mass conservation (2) distribution of exceedances

unstated; not explicitly required in model independent discrete time; Binomial distribution for all sampling intervals not discussed but apparently excluded (3) autocorrelation unstated and unresolved (4)seasonality (5) source/diffusion relationship eventually independent (multiplicative model) unstated but appears implicitly required (6) stationarity (7) randomness in emission random sources explicitly formulated explicitly stated (8) randomness in diffusion, meteorology, etc. (9) source/receptor lags simultaneity required (10) control strategies

selects uniform scaling (MPR) applying to all random emission levels

Consider an example of a pollutant species whose hourly expected concentration (in parts per million) is 4.953, with an expected logarithm of concentrations μ = 1.35 and a variance of the logarithm of concentrations σ² = 0.500. Assuming a log-normal distribution as in (9), we find from (10a) that the probability of exceeding a threshold of 30 ppm in 1 h is equal to

    F̄_ψ(30) = Φ(−2.901) = 0.0019    (17a)

or, on the average,

    λ = 8760 F̄_ψ(x) = 16.4    (17b)

exceedances per year, slightly less than 1.5 per month. The Poisson probability of one or more exceedances per year,

    1 − e^{−λ} = 1 − e^{−16.4}    (17c)

is so close to 1 that we are virtually certain there will be one or more exceedances in 1 year. If we want the probability of one or more exceedances in 1 year to be small, say Q_0 = 0.10, we must look for a reduction factor greater than or equal to κ, which is a solution of (16a) and (16b). The first step is to find the unique root of (16a) corresponding to Q_0 = 0.1. The equation

    e^{−w²/2} − (2.861 × 10^{−5}) w = 0    (17d)

yields the root w = 4.246. We then obtain κ = 0.386 from (16b). Thus, long-run source emissions have to be decreased by about 60% (R = κ − 1 ≈ −0.61). To check our calculations we note that with these controls the future hourly exceedance probability is

    F̄_ψ'(30) = F̄_ψ(77.68) = Φ(−4.246) = 0.00001    (17e)

and that the expected annual rate and nonexceedance probabilities are given respectively by

    λ' = 0.094,    p'_0 = e^{−λ'} = 0.90,    1 − e^{−λ'} = 0.10    (17f)

To summarize the effect of the proposed emission controls, we see that the probability that there will be zero exceedances at the 30 ppm level in 1 year will be 9 in 10 and the probability of one or more exceedances about 1 in 10. The expected return period will be about 10 years, since τ'(30) = τ(77.68) = (0.094)^{−1} = 10.6 years.

Time-Varying Seasonal Effects

There is an additional but important difficulty that arises in predicting the distribution of future exceedances at receptors: one must not only estimate the average number of total exceedances in a year, but one must also estimate how seasonal or diurnal meteorological factors affect the timing of exceedances as they occur throughout the year. For example, as Table I suggests, more exceedances seem to occur in July or August than in December. If we can assume that exceedances occur as rare events independently of one another, with probability S(t) that a given exceedance occurs on or before time t, and if there are m exceedances within a year, the conditional probability that there will be n (out of m) exceedances by time t is the binomial probability

    p_{n|m}(t) = [m!/(n!(m − n)!)] [S(t)]^n [1 − S(t)]^{m−n}    n = 0, 1, 2, ..., m    (18a)

While m is assumed fixed in (18a), unconditionally it is a Poisson random variable with mean λ. By summing over the Poisson probability of having m exceedances in a year, we obtain

    p_n(t) = e^{−λS(t)} [λS(t)]^n / n!    n = 0, 1, 2, ...    (18b)

for the marginal probability that there are n exceedances by time t. Equation 18b tells us that the probability that we will obtain n exceedances by time t is also Poisson but with the seasonally adjusted mean parameter λS(t) substituted for λ. In our notation S(t) represents the probability or cumulative fraction of exceedances which have occurred by t. See also ref 18. This result is not only conceptually appealing but is also a practical method for applying rollback models to real problems where seasonal effects must be included. Man-made controls typically affect the parameter λ (in going from λ to λ') whereas large-scale seasonal and environmental factors are relatively unchanged from year to year in S(t). Also, S(t) has the dual interpretation of being either the fraction of all exceedances or the probability of a single (out of many) exceedance occurring on or before time t. In effect, it maps the time scale t to the scale S(t), with exceedances being equally likely in the latter time scale. Suppose, for example, that we are interested in having the probability of one or more exceedances less than or equal to Q_0 during the first 9 months of a year. In place of λ or λ' we would use λS(t) or λ'S(t), where S(t) corresponds to the 9-month fraction. The average (annual) rate of exceedances is directly affected by the level of abatement or control of pollution sources. This rate can be determined from the use of the "rollback" model which relates the average level of pollutant concentrations to the average level of source emissions. The time-dependent factor S(t) is most easily determined from historical records of exceedances, i.e., as the cumulative fraction of exceedances that occur prior to time t.
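As a final sketch, the seasonally adjusted Poisson parameter λS(t) of (18b) can be estimated directly from monthly records. Below, S(t) is taken from the monthly counts of Table I, the controlled annual rate λ' = 0.094 is taken from the worked example above, and the 9-month cutoff is the illustration used in the text; all of these choices are for demonstration only.

    from math import exp

    monthly = [0, 1, 0, 2, 1, 3, 8, 4, 1, 1, 0, 0]      # Table I
    S_sep = sum(monthly[:9]) / sum(monthly)             # cumulative fraction by end of September
    lam_new = 0.094                                     # controlled annual rate from the example

    prob = 1.0 - exp(-lam_new * S_sep)                  # Pr(one or more exceedances by Sept 30)
    print(f"S(end of September) = {S_sep:.2f}, Pr = {prob:.3f}")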

Summary and Conclusions

To predict the distribution of exceedances during a future period, one must define, estimate, calculate, and predict several quantities: (i) the standard or threshold which defines an exceedance; (ii) the probability of occurrence of an exceedance as a function of this standard; (iii) the Poisson distribution and parameters which describe the number of occurrences of exceedances over a long period, say a year, as a function of the standard; (iv) the effect of emission controls upon the probability of an exceedance; (v) the Poisson distribution of the number of exceedances over future periods as a function of the emission control.

On the basis of our models we have concluded that extreme pollutant concentrations can be viewed as rare events; thus, it is desirable to describe the number of exceedances in a fixed interval of time by the continuous-time, possibly inhomogeneous, Poisson distribution. The time between exceedances is exponentially distributed, and the mean number of exceedances in a fixed period is proportional to the product of three quantities: the length of the period, the probability of occurrence of a single exceedance, and the conditional probability (a seasonal factor) that an exceedance occurs within a certain period of the year given that it occurs at all. The seasonal factor can also include the effects of terrain, meteorology, and month of the year.

A linear rollback model based on the principle of mass conservation can be used to compute the reduction in long-run expected pollutant levels given a known reduction in known or average source emissions. Derived parameters of the future distribution of hourly pollutant concentrations then yield a formula for determining the probability that a given number of exceedances will occur in a future period of time. We have also shown how the inverse problem can be solved, i.e., determining the future constant emission control or abatement factor needed to meet a desired air quality standard.

Referees who reviewed this paper prior to its publication were justifiably concerned with the duplication, overlap, and/or differences between the results of this paper and the MPR method formulated by Peterson and Moyers (1).

To aid in a comparison and to summarize the results, I have included Table II.

Literature Cited

(1) Peterson, T. W.; Moyers, J. L. Atmos. Environ. 1980, 14, 1439-1444.
(2) deNevers, N.; Neligan, R. E.; Slater, H. H. In "Air Pollution", 3rd ed.; Stern, A. C., Ed.; Academic Press: New York, 1977; Vol. 5.
(3) Horowitz, J.; Barakat, S. Atmos. Environ. 1979, 13, 811-818.
(4) Larsen, R. I. Proc., Int. Clean Air Congr., 1st 1966, 6044.
(5) Larsen, R. I. "Proceedings: The Third National Conference on Air Pollution"; U.S. Government Printing Office: Washington, DC, 1967; PHS Publication No. 1649, pp 199-204.
(6) Larsen, R. I. J. Air Pollut. Control Assoc. 1967, 17, 823-829.
(7) Larsen, R. I. J. Air Pollut. Control Assoc. 1974, 24, 551-558.
(8) Horie, Y.; Overton, J. U.S. Environ. Prot. Agency, Off. Res. Dev., [Rep.] EPA 1974, Chapter 15.
(9) Georgopoulos, P. G.; Seinfeld, J. H. Environ. Sci. Technol. 1982, 16, 401A.
(10) Knuth, W. R.; Giroux, H. D. Meteorology Research Inc., CA, June 27, 1979, Report MRI 78R-1596.
(11) National Academy of Sciences EPRI J. 1983, 8 (9), 23.
(12) Larsen, R. I. J. Air Pollut. Control Assoc. 1961, 11, 71-76.
(13) Breiman, L. "Probability and Stochastic Processes"; Houghton Mifflin Co.: Boston, 1969.
(14) Cramér, H.; Leadbetter, M. R. "Stationary and Related Stochastic Processes"; Wiley: New York, 1968; Chapter 12.
(15) Larsen, R. I. Environmental Protection Agency: Research Triangle Park, NC, 1971; Report AP-89.
(16) Aitchison, J.; Brown, J. A. C. "The Lognormal Distribution"; Cambridge University Press: Cambridge, 1957.
(17) Oliver, R. M. Pacific Gas & Electric Company, March 1980, Report 80-1.
(18) Oliver, R. M. submitted for publication in J. Oper. Res. Soc.

Received for review January 17, 1983. Accepted September 4, 1984.
