
Assessing Wastewater Micropollutant Loads with Approximate Bayesian Computations

Jörg Rieckermann,*,† Jose Anta,‡ Andreas Scheidegger,† and Christoph Ort§

† Eawag, Swiss Federal Institute of Aquatic Science and Technology, CH-8600 Dübendorf, Switzerland
‡ Civil Engineering School, University of A Coruña, Campus de Elviña s/n, 15071 A Coruña, Spain
§ The University of Queensland, Advanced Water Management Centre (AWMC), QLD 4072, Australia

ABSTRACT: Wastewater production, like many other engineered and environmental processes, is inherently stochastic and requires complex stochastic models, for example, to predict realistic patterns of down-the-drain chemicals or pharmaceuticals and personal care products. Until now, a formal method of statistical inference has been lacking for many of those models, because their explicit likelihood functions are intractable. In this Article, we investigate Approximate Bayesian Computation (ABC) methods to infer important parameters of stochastic environmental models. ABC methods have recently been suggested for model-based inference in a Bayesian setting when model likelihoods are analytically or computationally intractable, but they have not previously been applied to environmental systems analysis or water quality modeling. In a case study, we investigate the performance of three different algorithms to infer the number of wastewater pulses contained in three high-resolution data series of benzotriazole and total nitrogen loads in sewers. We find that all algorithms perform well and that the uncertainty in the inferred number of corresponding wastewater pulses varies between 6% and 28%. In our case, the results are more sensitive to substance characteristics than to catchment properties. Although the application of ABC methods requires careful tuning and attention to detail, they have great general potential to update stochastic model parameters with monitoring data and improve their predictive capabilities.

■ INTRODUCTION
After intended, correct usage, a large number of substances contained in pharmaceuticals, personal care products (PPCPs), and other household chemicals are conveniently disposed of in sanitary appliances. To assess PPCP and micropollutant loads, detailed knowledge of wastewater generation is important, especially with regard to the assessment of maximum environmental concentrations or the experimental design of monitoring campaigns. The realistic description of wastewater composition requires stochastic models, because it involves processes such as substance intake, substance metabolism, and toilet use, which are inherently stochastic.1-3 Such models have proven useful to better understand random short-term variations of PPCPs,4,5 which can even impact the representativeness of observed data.5 However, modeling should never be a self-sufficient exercise and should ideally be performed in combination with monitoring data, especially for decision-making.6 From a Bayesian view, monitoring data are used to update prior knowledge on the investigated parameters, considering the likelihood of a set of parameter values under a specific model given the observed outcomes.

However, for many stochastic models, an explicit likelihood function is either analytically intractable or too expensive to evaluate computationally, for example, because of high-dimensional integrals. This prohibits the application of traditional Bayesian algorithms, such as Markov Chain Monte Carlo, and motivates the use of more approximate methods, such as Approximate Bayesian Computation (ABC). ABC has been developed recently in the context of "likelihood-free" parameter estimation and rejection sampling7-9 and represents a complementary numerical technique for posterior sampling. Similar to traditional algorithms, which provide numerical solutions for likelihood functions that cannot be solved analytically, ABC provides approximate solutions in cases where the likelihood function exists but is neither analytically nor computationally tractable.8,9 Originally developed for genetic systems, ABC methods have already been applied to a wide range of applications, from archeology to wireless communication engineering, but, to the best of our knowledge, not to stochastic environmental modeling and assessment.


In this study, we investigate the usefulness of ABC methods for parameter estimation of stochastic environmental simulation models. Using wastewater micropollutant loads as an example, this comprises the following specific innovations: (i) we adapt ABC for inference with a stochastic dynamic pollutant load model; (ii) we identify a suitable algorithm, which can be readily transferred to other stochastic models; and (iii) we demonstrate the usefulness of model calibration in the context of sewer micropollutant loads. The result from ABC is the full posterior distribution, which enables us to quantify the gain of information with regard to predictions of (i) absolute micropollutant loads, (ii) maximum environmental concentrations and loads, and (iii) the expected sampling error. In this context, ABC methods enable us for the first time to quantify the uncertainty of back-calculations from observed wastewater substance loads, which is also important for learning from wastewater (e.g., in the context of illicit drugs10).

■ MATERIALS AND METHODS
Stochastic Model for Micropollutant Load Variations in Sewers.

As demonstrated in Ort et al.,4 random short-term variations of PPCPs can be substantial and may severely impact the representativeness of observed data.5 With the simple stochastic model presented in Ort et al.,11 the characteristics of these fluctuations can be predicted using prior knowledge on (i) the catchment and population, (ii) the sewer system topology and properties, and (iii) substance and source pattern characteristics, as follows. The predicted substance mass flux at the monitoring point, q(t), is computed as the sum of all discharged pulses related to the substance during the simulation period:

$$q(t) = \sum_{i=1}^{N_P} m.pulse_i \, \frac{1}{\sigma_{s,i}\sqrt{2\pi}} \, \exp\!\left[-\frac{1}{2}\left(\frac{t - \left(T_i + \frac{s_i}{v}\right)}{\sigma_{s,i}}\right)^{2}\right] \qquad (1)$$

with t = simulation time, NP = number of pulses discharged per hour, m.pulsei = substance mass contained in the ith pulse, v = mean velocity, Ti = release time of the ith pulse, and si = flow distance, where

$$\sigma_{s,i} = \left(2 D_x \cdot \frac{s_i}{v} \cdot \frac{1}{v^{2}} + \sigma_{ini}^{2}\right)^{1/2} \qquad (2)$$

with σs,i = spread of the ith pulse at the monitoring station (σ = single standard deviation of a normal distribution), Dx = longitudinal dispersion coefficient, and σini = spread of the pulse at the house drain. The model is stochastic because some of the model parameters, such as the release time of each pulse, are randomly sampled from their probability distributions; it therefore produces different, but similar, results for each run with the same parameter set. Examples of output from the stochastic model for different patterns and substance types are given in Figure S-1 in the Supporting Information. Note that we define "user" as all wastewater generating agents, such as persons or appliances discharging the substance of interest.
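To make eqs 1 and 2 concrete, the following minimal Python sketch evaluates one stochastic realization of the load pattern. It is our illustration, not the authors' implementation; the function and variable names are ours, and the numerical values are merely plausible draws from the prior ranges in Table 1.

```python
import numpy as np

def simulate_loads(masses, release_times, flow_dists, v, Dx, sigma_ini,
                   t=np.arange(0.0, 7200.0, 1.0)):
    """Pulse load model (eqs 1-2): each pulse is a Gaussian that arrives at
    T_i + s_i/v with spread sigma_s,i from dispersion plus initial spread."""
    q = np.zeros_like(t)
    for m, T, s in zip(masses, release_times, flow_dists):
        sigma = np.sqrt(2.0 * Dx * (s / v) / v**2 + sigma_ini**2)  # eq 2
        mu = T + s / v                                             # arrival time [s]
        q += m / (sigma * np.sqrt(2.0 * np.pi)) * np.exp(-0.5 * ((t - mu) / sigma) ** 2)
    return q  # substance mass flux q(t) [mg s^-1]

# One stochastic realization over 2 h for a BT-like parameterization:
rng = np.random.default_rng(42)
n = 2 * 120  # NP = 120 pulses/h over a 2 h window
q = simulate_loads(masses=rng.uniform(0.1, 60.0, n),          # mg per pulse
                   release_times=rng.uniform(0.0, 7200.0, n), # s
                   flow_dists=rng.uniform(150.0, 4000.0, n),  # m, stand-in for the empirical distribution
                   v=0.75, Dx=0.15, sigma_ini=120.0)
```

Redrawing the masses, release times, and flow distances with a fresh seed yields the "different, but similar" patterns described above.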

Applications of this model have previously only been realized using "informed guesses" on priors of the proposed model parameters and, at best, qualitatively compared to reported data.2,4,11 Up until now, this was because a formal framework with a rigorous treatment of uncertainty in a Bayesian sense has been lacking.
Likelihood-Free Inference Using Approximate Bayesian Computations (ABCs). According to Bayes' theorem, prior knowledge, expressed as the distribution fpri(θ) of an uncertain quantity θ, is combined with observed data D and a model-based likelihood function fmodel(D|θ) to yield a representation of posterior knowledge according to the following equation (see Gelman et al.12 for details):

$$f_{post}(\theta\,|\,D) \propto f_{pri}(\theta)\, f_{model}(D\,|\,\theta) \qquad (3)$$

The Bayesian approach leads to a number of advantages in an environmental modeling context, where often a wealth of knowledge but only little monitoring data are available. As described above, for complex (stochastic) models, the likelihood function is often analytically intractable or difficult to evaluate in reasonable time, which motivates the use of more approximate methods, such as "likelihood-free" Approximate Bayesian Computations.8,13,14 Arguably, this terminology is a slight misnomer because generally fmodel(D|θ) exists, but the likelihood function is simply not used for the computation.
General Principle of ABC. To avoid evaluating the likelihood, ABC algorithms are based on the following principle to obtain a random sample from the posterior distribution. First, a candidate parameter vector θ* is proposed. Second, θ* is propagated through the model to obtain a simulated data set D*. Next, the simulated data set is compared to the observed data D using a distance function F and a tolerance level ε ≥ 0. If the simulated data are "close" to the real data, that is, F(D,D*) ≤ ε, the proposed parameter vector θ* is accepted with a certain probability as a sample from the posterior distribution; otherwise, it is rejected with certainty. In recent years, different algorithms have been developed to increase the efficiency of obtaining a sample from the posterior distribution. Here, we investigate three different generic methods of increasing complexity: a rejection sampler (REJ), a Markov Chain Monte Carlo (MCMC), and a Sequential Monte Carlo (SMC) algorithm. See Sisson and Fan15 for an exposition of "likelihood-free" theory.
Rejection Algorithm (REJ). The simplest algorithm is the rejection sampler (REJ), with the following procedure:7
REJ 1: Sample θ* from fpri(θ).
REJ 2: Generate data D* using the model M(θ*) with parameters θ*.
REJ 3: Calculate the distance F(D*,D) between the data D and D*.
REJ 4: Accept θ* if F(D,D*) ≤ ε.
REJ 5: Return to REJ 1.
This algorithm generates ns independent samples from the distribution fpost(θ|F(D*,D) ≤ ε). If ε is small, then fpost(θ|F(D*,D) ≤ ε) is a good approximation of fpost(θ|D). The rejection sampler is straightforward but not efficient, especially if the prior distribution is wide: most proposed parameter vectors θ* will be drawn from regions of low posterior probability and rejected, because they do not generate data that are close to the real data. To improve on this, more efficient implementations employ an MCMC or SMC framework, as shown in the sketches below.
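For illustration, here is a minimal Python sketch of steps REJ 1-5 (ours, not the authors' code); the prior sampler, model, and distance function are user-supplied, and all names are our own.

```python
import numpy as np

def abc_rejection(observed, prior_sampler, model, distance, eps, n_samples):
    """ABC rejection sampler: keep proposals whose simulated data set
    lies within tolerance eps of the observations (steps REJ 1-5)."""
    accepted = []
    while len(accepted) < n_samples:
        theta = prior_sampler()               # REJ 1: draw from the prior
        d_sim = model(theta)                  # REJ 2: simulate a data set D*
        if distance(observed, d_sim) <= eps:  # REJ 3-4: accept if "close"
            accepted.append(theta)
    return np.array(accepted)                 # ~ f_post(theta | F(D, D*) <= eps)

# Hypothetical call for the one-dimensional problem of inferring NP:
# post = abc_rejection(q_obs, lambda: rng.integers(1, 501),
#                      model=lambda n_p: simulate_for(n_p),
#                      distance=lambda a, b: np.linalg.norm(a - b),
#                      eps=1.07, n_samples=10_000)
```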


Table 1. Prior Knowledge on Input Factors and Parameters^a of the Stochastic Sewer Load Model for the Two Data Sets Benzotriazole (BT) and Total Nitrogen (Ntot)

| | BT low activity (catchment 1) | BT high activity (catchment 1) | Ntot (catchment 2) |
|---|---|---|---|
| catchment characteristics | | | |
| population size | 9152 households (~19 500 inhabitants) | 9152 households (~19 500 inhabitants) | 1063 inhabitants |
| flow distances si (mean, median) | empirical distribution, range [150-4000] m (2550 m, 2670 m) | empirical distribution, range [150-4000] m (2550 m, 2670 m) | empirical distribution, range [51-968] m (561 m, 574 m) |
| mean velocity v [m s-1] | U(0.5, 1) | U(0.5, 1) | U(0.5, 1) |
| dispersion coefficient Dx [m2 s-1] | U(0.01, 0.3) | U(0.01, 0.3) | U(0.01, 0.3) |
| range of pulse duration in sewer σini^c | U(60, 500) | U(60, 500) | U(30, 180) |
| no. of wastewater pulses NP [pulses h-1] | U(1, 500) | U(300, 800) | U(100, 600) |
| mass of substance per pulse | empirical distribution, range [0.1-60] mg BT, based on product analyses, weighted with sales data^b | same as BT low activity | U(1, 2.6) g Ntot,18 uniformly distributed |
| sampling summaries | | | |
| sampling mode | grab sampling | grab sampling | grab sampling |
| sampling frequency | 30 s | 1 min | 45 s |
| duration | 20 min | 60 min | 68 min |
| data points | 40 | 60 | 91 |

^a For details, see Ort et al.11 and Model Input Factors and Parameters in the Supporting Information. ^b Figure S-2 in the Supporting Information. ^c Based on own experiments and unpublished data; see Ort et al.11 and the Supporting Information.

ABC Markov-Chain Monte Carlo (MCMC). The "likelihood-free" approach introduced by Marjoram et al.8 is based on MCMC. With an MCMC algorithm, the proposed parameter vector θ* is not sampled from the prior distribution; rather, the algorithm makes a small, random step in the parameter space from a "good", that is, previously accepted, parameter vector. Hence, it is more likely that the newly proposed parameter vector is also accepted. This method uses a modified Metropolis-Hastings algorithm to sample observations from the posterior fpost(θ|D). A Markov Chain without likelihood is generated by the following procedure:
MCMC 1: Initialize θi, i = 1.
MCMC 2: Propose θ* according to a transition kernel q(θi|θ*).
MCMC 3: Simulate a data set D* using the model M(θ*) with parameters θ*.
MCMC 4: If F(D,D*) ≤ ε, go to MCMC 5; otherwise, set θi+1 = θi and return to MCMC 2.
MCMC 5: Calculate h = min(1, (fpri(θ*) q(θ*|θi))/(fpri(θi) q(θi|θ*))) and generate a random number r from a uniform distribution U(0,1).
MCMC 6: Set θi+1 = θ* if r < h; otherwise, set θi+1 = θi. Return to MCMC 2.
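A minimal likelihood-free MCMC sketch in Python follows (our illustration, not the authors' code). It assumes a scalar parameter and a symmetric Gaussian transition kernel, in which case the kernel densities in MCMC 5 cancel and h reduces to the prior ratio.

```python
import numpy as np

def abc_mcmc(observed, model, log_prior, distance, eps, theta0, n_iter,
             step=10.0, seed=0):
    """Likelihood-free MCMC (steps MCMC 1-6) with a symmetric Gaussian
    proposal, so the Hastings ratio h reduces to the prior ratio."""
    rng = np.random.default_rng(seed)
    chain = [theta0]                                      # MCMC 1
    for _ in range(n_iter):
        theta = chain[-1]
        theta_star = theta + rng.normal(0.0, step)        # MCMC 2: propose
        if distance(observed, model(theta_star)) > eps:   # MCMC 3-4: reject outright
            chain.append(theta)
            continue
        log_h = log_prior(theta_star) - log_prior(theta)  # MCMC 5 (symmetric kernel)
        chain.append(theta_star if np.log(rng.uniform()) < min(0.0, log_h)
                     else theta)                          # MCMC 6
    return np.array(chain)
```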

ABC Sequential Monte Carlo (SMC). SMC methods can be considered an extension of sequential importance samplers16 and were first introduced by Sisson et al.9 In this Article, we use the algorithm proposed by Toni et al.14 The key idea of the SMC method is to reduce the dissimilarity between the prior and the posterior distribution by generating a sequence of P intermediate distributions with decreasing tolerances, ε1 > ... > εP ≥ 0. Thus, a number of sampled parameters, or particles, {θ(1), θ(2), ..., θ(N)}, sampled from the prior distribution fpri(θ), is propagated through a sequence of intermediate distributions, fpost(θ|F(D*,D) ≤ εi), i = 1, ..., P − 1, until it ultimately represents a sample from the target distribution fpost(θ|F(D*,D) ≤ εP). The samples are uncorrelated. The algorithm proceeds as follows:
SMC 1: Initialize ε1, ..., εP and set the population indicator p = 1.
SMC 2.0: Set the particle indicator i = 1.
SMC 2.1: If p = 1, sample θ** from fpri(θ) using the REJ algorithm. If p > 1, sample θ* from the previous population {θp−1(i)} with weights {Wp−1(i)} and perturb the particle to obtain θ** ~ Kp(θ|θ*), where Kp is a Markov transition kernel.
SMC 2.2: If fpri(θ**) = 0, return to SMC 2.1.
SMC 2.3: Simulate a data set D** = M(θ**). If F(D,D**) > εp, return to SMC 2.1.
SMC 2.4: Set θp(i) = θ** and the weight for the accepted particle to

$$W_p^{(i)} = \begin{cases} 1 & \text{if } p = 1 \\[1ex] \dfrac{f_{pri}\big(\theta_p^{(i)}\big)}{\sum_{j=1}^{N} W_{p-1}^{(j)}\, K_p\big(\theta_p^{(i)} \,\big|\, \theta_{p-1}^{(j)}\big)} & \text{if } p > 1 \end{cases}$$

If i < N, set i = i + 1 and go to SMC 2.1.
SMC 2.5: Normalize the weights {Wp(i)}. If p < P, set p = p + 1 and go to SMC 2.0.
Study Area and Substance Loads. We apply these three ABC methods to infer parameters of the described micropollutant load model from three different data sets: two series of high-frequency benzotriazole (BT) profiles, which serve as proxies for household chemicals, and a series of total nitrogen (Ntot) measurements, which can be regarded as a surrogate for human-related PPCPs (excreted with urine). We show how important a calibrated model is to determine (i) the total load and (ii) maximum pollutant loads and concentrations, and to (iii) assess sampling uncertainty. The catchments and monitoring campaigns are briefly summarized subsequently (see Table 1 for more details).


Benzotriazole Data (BT). We use the two data sets of BT published in Ort et al.2,11 The two BT time series were obtained at the effluent of a subcatchment in Dübendorf, Switzerland. At the monitoring point, the wastewater of about 9200 households (~19 500 inhabitants) is collected in a trunk sewer with a daily dry weather flow of around 6500 m3 d-1, varying from 40 to 100 L s-1. The maximal flow distance is 4 km. The household structure and the population are representative and typical for large parts of Switzerland. A time-proportional grab sampling scheme was applied: the first data series, representing a low activity profile, was collected with a temporal resolution dt = 30 s,2 and the second was monitored after high dishwashing activity (dt = 60 s).11 BT concentrations were determined as described in Schaffner and Giger.17 The total uncertainty of mass loads is estimated to be approximately 10% (σflow = 8%; σBT = 5%, all as single standard deviations).
Total Nitrogen Data (Ntot). The Ntot data were collected in the catchment of the Swiss village Wangen, with approximately 1000 inhabitants. Flow distances to the monitoring point are between 51 and 968 m. Monitoring was also performed with time-proportional sampling (dt = 45 s). In this case, the uncertainty of mass loads is estimated at ~11% (σflow = 10%; σNtot = 5%) (see Figure 1).

Figure 1. Loads (black) and concentrations (gray) for benzotriazole (BT) in catchment 1 (∼19 500 inhabitants) after low dishwashing activity (A, April 30, 2004, 6:56 am) and high dishwashing activity (B, August 30, 2004, 1:23 pm). (C) Total nitrogen (Ntot) in catchment 2 (∼1000 inhabitants) after the morning peak (May 30, 2005, 8:10 am).

Prior Knowledge Formulation and Gain of Information from Parameter Estimation. To simulate realistic load patterns, meaningful input variables and parameter values of the model have to be selected. Generally, specific information may be available on catchment characteristics and population, and substance masses can be estimated from product descriptions or independent laboratory experiments. For the transport parameters and initial pulse durations, we performed specific tracer experiments to avoid making strong assumptions (details in Table 1 and the Supporting Information). Note that, for the sake of simplicity, we restrict our analyses in this Article to estimating the number of pulses (NP) contained in the observed pattern, which makes it a one-dimensional estimation problem. To demonstrate the benefit of learning from data on the number of pulses, we compare the gain of information (Ginf) of model predictions based on the posterior obtained with REJ to those computed with the prior distribution. As a simple metric, we use the ratio of the empirical standard deviations of the posterior and the prior, σpost and σprior, respectively:

$$G_{inf} = 1 - \sigma_{post}/\sigma_{prior} \qquad (4)$$
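As a worked illustration of eq 4 (our own sketch, with illustrative numbers): given Monte Carlo predictions of a quantity of interest under the prior and under the posterior, Ginf is one minus the ratio of the empirical standard deviations.

```python
import numpy as np

def gain_of_information(pred_prior, pred_posterior):
    """G_inf (eq 4): relative reduction of the empirical standard deviation
    of a predicted quantity when moving from the prior to the posterior."""
    return 1.0 - np.std(pred_posterior) / np.std(pred_prior)

# Hypothetical example: two arrays of simulated 2 h total loads, where the
# posterior spread is a quarter of the prior spread -> G_inf ~ 0.75
prior_loads = np.random.default_rng(0).normal(6.25, 3.65, 10_000)
post_loads = np.random.default_rng(1).normal(3.06, 3.65 / 4, 10_000)
print(gain_of_information(prior_loads, post_loads))  # ~0.75
```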

As described above, we compute Ginf for (i) the ratio between the predicted and measured maximum concentrations and loads, (ii) the total load, and (iii) the sampling error, based on a Monte Carlo simulation with n = 10 000 repetitions. The load patterns were modeled at a temporal resolution of 1 s over 2 h. Because the model output is loads, the modeled maximum concentration (mg L-1) was calculated by dividing the maximum modeled load (mg s-1) by the average measured flow (L s-1). We define the sampling error (Δsamp) as the relative error of the composite samples:

$$\Delta_{samp} = \frac{q_{true} - q_{sample}}{q_{true}} \qquad (5)$$

where qtrue is the "true" average load of a 2 h composite sample computed from the original model predictions at the highest temporal resolution (dt = 1 s) and qsample is the load of the corresponding 2 h composite of time-proportional samples taken at intervals of 10 min.
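A minimal sketch of eq 5 (ours, under the simplifying assumption that a time-proportional composite is the plain mean of equal-volume grabs): assemble the 2 h composite from grabs taken every 10 min out of a modeled load series at 1 s resolution and compare it to the full-resolution mean.

```python
import numpy as np

def sampling_error(q, grab_interval=600):
    """Relative sampling error (eq 5) of a composite sample assembled from
    time-proportional grabs; q is the modeled load series at 1 s resolution."""
    q_true = q.mean()                     # "true" average load (dt = 1 s)
    q_sample = q[::grab_interval].mean()  # composite of grabs every 10 min
    return (q_true - q_sample) / q_true

# e.g., with q from the load model sketch above (7200 values -> 12 grabs):
# delta = sampling_error(q)
```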

Implementation and Performance Evaluation of the ABC Algorithms. For the ABC algorithms, we have to define a distance measure F, a tolerance level ε (or a tolerance sequence), an appropriate starting point for the MCMC algorithm, and perturbation and transition kernels, q(·) and Kp(·), for MCMC and SMC, respectively. Here, we investigated Euclidean and absolute distances for F, as also reported by Toni et al.14 Based on empirical tests, ε was selected as the 1%-quantile of all distances computed from a number npre of presampled simulations from the prior distribution, because this represented a satisfactory compromise between accuracy and simulation time; the same approach has been reported by Wegmann et al.19 In addition, the performance of both Gaussian and uniform perturbation kernels was tested. Details are provided in the Supporting Information.
We compared the performance of the ABC algorithms with regard to the number of uncorrelated samples ns and particles (REJ and SMC samplers, respectively) and the total number of model runs nm (MCMC sampler) for which the mean and the standard deviation of the posterior stabilized.20,21 These should be reliable measures given the observed symmetry of the posteriors. As a reference, we took the stable posterior obtained with the REJ sampler and a sample size of ns = 10 000 for all three data sets.
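The tolerance selection described above can be sketched as follows (our illustration; the names are ours): presample npre simulations from the prior and take the 1% quantile of their distances to the observations. The two distance functions investigated for F are also shown.

```python
import numpy as np

def euclidean(a, b):
    return np.sqrt(np.sum((a - b) ** 2))  # Euclidean distance for F

def absolute(a, b):
    return np.sum(np.abs(a - b))          # absolute distance for F

def choose_tolerance(observed, prior_sampler, model, distance,
                     n_pre=1000, quantile=0.01):
    """Select eps as the 1% quantile of distances from n_pre prior
    predictive simulations (compromise between accuracy and run time)."""
    dists = [distance(observed, model(prior_sampler())) for _ in range(n_pre)]
    return np.quantile(dists, quantile)
```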


Figure 2. Estimated posterior probability density distributions for the number of pulses (NP). (A) Benzotriazole (BT) for low dishwasher activities, (B) benzotriazole for high dishwasher activities, and (C) Ntot. Very similar results are obtained using the REJ sampler (black, solid), MCMC (black, dashed), and SMC (black, dotted). Uniform prior distributions are plotted as a solid gray line.

MCMC simulations were performed by running 10 independent chains of nm = 10 000 simulations each, with identical ε and Gaussian transition kernel distributions. The 10 chains were concatenated, and the 90% of simulations with the lowest distances were used to estimate the posterior.19 Stable posteriors for the SMC were obtained with ns = 2000.

■ RESULTS AND DISCUSSION
Parameter Estimation. BT, Low Activity (Data Set A). From the presampling procedure for the REJ and MCMC samplers, we obtained an ε of 1.07 for the Euclidean distance. For the SMC method, the tolerance sequence was 3.50, 2.11, 1.57, 1.21, and 1.07. The posterior obtained with the rejection sampler for NP is 123 pulses h-1 (±35, as a single standard deviation), and all three algorithms result in practically the same outcome (Figure 2). As expected, the ABC rejection sampler is the most time-consuming algorithm and has an acceptance rate of ~1.1% (see Table 2). The MCMC and SMC techniques are 24 times and 13 times faster, respectively. For the SMC, the acceptance rate for the last population is 2.8%, while the global rate is 8.1%. As described in the Supporting Information, the analysis was repeated using the absolute distance and replacing the Gaussian kernel with a uniform kernel in MCMC and SMC.


We find that the resulting approximations of the posteriors are very similar to those obtained with Euclidean distances (data not shown), which is consistent with the results of Toni et al.14 The Gaussian kernel slightly improved chain and population mixing, and the computational expenditure was slightly smaller with the uniform kernel.
BT, High Activity (Data Set B). Here, ε was fixed at 4.74 for the REJ and MCMC algorithms, while for SMC the sequence was 6.83, 6.04, 5.47, 4.97, and 4.74. The REJ sampler has a 1.02% acceptance rate and takes approximately 34 h to obtain ns = 10 000 uncorrelated samples, which is about 3 times longer than for evaluating data set A. This is mainly because the computations scale approximately linearly with NP and simulation time t. Again, all algorithms give very similar results and estimate a posterior of NP with a mean of 563 pulses h-1 (±50). Here, the MCMC and SMC methods are 9.5 and 4.75 times faster, respectively. As compared to the evaluation of data set A, the efficiency of the ABC algorithms is reduced, owing to the lower acceptance rate for this data set. Again, using absolute distances and uniform kernels leads to similar results.
Ntot (Data Set C). Using Euclidean distances, the 1% quantile ε for this data set is 0.43, and the sequence for the SMC method was 0.80, 0.59, 0.50, 0.45, and 0.43. With the REJ sampler, the computational time lies between those for data sets A and B, and the acceptance rate is similar to the previous values. The computational efficiency is similar to that for data set B, and MCMC and SMC are an order of magnitude faster than the REJ algorithm. The MCMC and SMC algorithms exhibit acceptance rates roughly twice as high as for the BT data sets. With regard to the posterior, we estimate NP as 289 pulses h-1 (±19). Interestingly, this distribution is much narrower than those obtained for BT. This can be explained by the fact that the ratio of maximum to minimum substance mass per pulse is only 2.6 for Ntot, as compared to 600 for BT (see Table 1 and Figure S-2). A narrower mass distribution leads to a smoother load pattern with less variability between repeated simulations and, consequently, also a narrower posterior distribution for observed loads. Furthermore, we found a similar influence of NP on the posterior: in general, as NP increases, the output pattern is smoother and the short-term fluctuations are reduced. Thus, the inferred distributions are wider, because the same mobilized load can consist of fewer pulses with higher masses as well as of more pulses with smaller masses, or mixtures thereof.
Application of the Calibrated Model To Predict Loads, Concentrations, and Sampling Errors. Estimation of Total Loads with the Calibrated Model. We determine the maximum benefit of having a calibrated model in terms of the prediction of total loads with a Monte Carlo simulation analysis. In Table 3, we compare simulated loads based on the prior knowledge to those obtained with the posterior (i.e., the calibrated model) for a 2 h period (see Figure S-4 for the full distributions). First, we see that the mean total loads from the posterior analysis are in good agreement with the observed loads for the same duration: 3.06 and 2.98 g for data set A, 14.03 and 14.08 g for B, and 1.04 and 1.12 kg for C. We then observe that the differences between prior and posterior predictions of the mean total load are higher for A, while for B the estimated values are very similar to the measured loads. However, the spread of the predicted loads based on prior information is substantial, as indicated by Ginf scores of 0.75 and 0.62.
The highest score for Ginf is obtained for the Ntot data set (0.86), and the smallest for the BT data set with high activity (B). This again shows the sensitivity of the model output to the variability in the pulse mass distribution and the number of pulses.


Table 2. Acceptance Rate, CPU Time,^a and Estimated Number of Pulses NP (mean ± sd) for the Two Benzotriazole (BT) and the Total Nitrogen (Ntot) Data Sets Using the Presented ABC Algorithms

| data set | sampler | acceptance rate [%] | CPU time^a | NP [pulses/h] |
|---|---|---|---|---|
| BT low activity | REJ (ns = 10 000) | 1.09 | 10.2 h | 123 ± 35 |
| BT low activity | MCMC (nm = 100 000) | 3.67 | 0.4 h | 121 ± 33 |
| BT low activity | SMC (ns = 2000) | 2.28 (8.15)^b | 0.8 h | 121 ± 31 |
| BT high activity | REJ (ns = 10 000) | 1.02 | 34.2 h | 563 ± 50 |
| BT high activity | MCMC (nm = 100 000) | 2.48 | 3.6 h | 566 ± 48 |
| BT high activity | SMC (ns = 2000) | 1.72 (4.94)^b | 7.2 h | 562 ± 46 |
| Ntot | REJ (ns = 10 000) | 1.10 | 14.6 h | 289 ± 19 |
| Ntot | MCMC (nm = 100 000) | 6.72 | 1.33 h | 286 ± 18 |
| Ntot | SMC (ns = 2000) | 4.50 (11.38)^b | 1.21 h | 289 ± 18 |

^a All simulations were computed on the Eawag-EMPA cluster (www.ipazia.empa.ch) in order to obtain comparable measurements. ^b Acceptance rate for the last population and for all populations (in parentheses).

Table 3. Estimated Mean, Standard Deviation, and Gain of Information (Ginf) of the Total Load, of the Ratios between Maximum Modeled and Measured Loads and Concentrations, and of the Sampling Error for Composite Loads Obtained from 10 min Time-Proportional Grab Samples^a

| quantity | sampling method | BT low activity (A): mean ± SD | Ginf | BT high activity (B): mean ± SD | Ginf | Ntot (C): mean ± SD | Ginf |
|---|---|---|---|---|---|---|---|
| total load | prior | 6.25 ± 3.65 g | 0.75 | 13.78 ± 3.67 g | 0.62 | 1.25 ± 0.52 kg | 0.86 |
| | posterior | 3.06 ± 0.93 g | | 14.03 ± 1.41 g | | 1.04 ± 0.07 kg | |
| ratio max load | prior | 1.81 ± 0.91 | 0.70 | 0.63 ± 0.15 | 0.54 | 1.19 ± 0.42 | 0.77 |
| | posterior | 1.05 ± 0.27 | | 0.65 ± 0.07 | | 1.02 ± 0.10 | |
| ratio max concentration | prior | 2.11 ± 1.06 | 0.70 | 0.64 ± 0.15 | 0.54 | 1.43 ± 0.51 | 0.77 |
| | posterior | 1.23 ± 0.32 | | 0.65 ± 0.07 | | 1.23 ± 0.12 | |
| sampling error Δsamp [%] | prior | 1.05 ± 6.14 | 0.14 | 0.96 ± 3.50 | 0.01 | 0.03 ± 5.43 | 0.02 |
| | posterior | 0.97 ± 5.30 | | 0.97 ± 3.49 | | 0.03 ± 5.33 | |

^a Based on 10 000 Monte Carlo iterations of 2 h loads.

Estimation of Maximum Loads and Concentrations. We find that the differences in the mean and the standard deviation between the prior and posterior predictions, as well as the Ginf ratios, behave similarly to those for the predicted total loads. The predictions based on the prior are more uncertain than those obtained from the posterior. Note, however, that the calibrated model tends to overestimate the observed maximum loads and concentrations for the low activity BT data set and to underestimate these values for the high activity data set, by approximately 35% (see Table 3). However, this is not necessarily unrealistic, because our measurements are just a single realization of a continuous random process. Again, the best agreement is achieved for the Ntot record, which is most probably due to the relatively narrow distribution of pulse masses.
Estimation of Sampling Errors. Interestingly, the expected sampling errors do not differ much between the prior and the posterior number of pulses. Although surprising at first sight, this is consistent with our experience and with results published in Ort et al.,2 because even a narrower posterior can lead to an increase in the expected sampling error if the pattern is, on average, composed of a lower number of pulses and is therefore more "spiky" than expected from our prior knowledge (compare the pattern for 1000 pulses/h to that for 10 pulses/h in Figure S-1). Although the updated knowledge is not necessarily to our liking, we can now have more confidence in our predictions, because the prior knowledge was apparently too optimistic with regard to the sampling error.
Approximate Bayesian Computation Performance for Environmental Modeling.

On the basis of the above considerations, we demonstrated the usefulness of the ABC framework for the assessment of micropollutants in sewers. Using three different ABC algorithms, we infer very similar posteriors for each individual data set. In comparison to the reference posteriors from the REJ algorithm, the MCMC and SMC methods tend to slightly underestimate the spread of the posterior distribution, for low (6% and 11%) and high BT activity (4% and 8%) and by approximately 5% for Ntot. However, given the stochastic nature of our problem, this drawback is small as compared to the numerical efficiency of the more complex algorithms.
While the performance and tuning of ABC algorithms is an area of ongoing research, we can conclude that we found established algorithms and suitable guidance for the choice of ε and F in the literature. Future work could evaluate more sophisticated procedures to assess the sensitivity of the posterior to ε or the choice of different transition kernel distributions and parameters.19 Also, for SMC, a largely unresolved issue is the systematic choice of the tolerance sequence. Currently, different SMC samplers9,14,20-22 aim at connecting the sequence of populations to its diffusivity or range in the parameter space,21 but this is an area of ongoing research. Nevertheless, we believe that the SMC algorithm is more suitable than our implementation of MCMC because it (i) was easier to apply and tune to reach a good performance, (ii) did not get stuck in regions of low probability as the MCMC algorithm did, and (iii) provides a series of uncorrelated samples with much less computational effort than the REJ sampler. Arguably, our conclusions might be biased, because several modifications of conventional MCMC schemes, such as Hamiltonian Monte Carlo or over-relaxation, have been proposed to increase their efficiency.23


Here, we deliberately chose not to investigate such modifications in an ABC context, because those methods require specific model structures, such as known conditional likelihoods or an exponential form of the posterior.
For many applications, the choice of appropriate summary statistics and of the distance F is considered a key aspect of ABC algorithm tuning.14,19-22,24-26 Our results suggest that directly using Euclidean or absolute distances between the observations and the model results is a reasonable choice for stochastic dynamic environmental models, because we obtain very similar approximations of the posterior when using absolute distances: as compared to the best estimate for NP obtained with Euclidean distances, it is only 5% lower for the BT data sets, and there are virtually no differences for the Ntot data set. However, in some cases, users of stochastic models might be less interested in exactly replicating the observed monitoring data than in predicting "realistic" or "similar" data sets that reproduce the same characteristics as the observed data, for example, the same maximum concentration or the same number of peak loads during a day. In this regard, using a distance function based on a set of summary statistics that capture the relevant information, instead of Euclidean distances, could be interesting. First trials toward using sufficient statistics in our application show that selecting a suitable set of summary statistics and weighting their importance on the model realizations are not trivial. Although some studies have suggested ranking approaches,25 principal component analysis,14 or partial least-squares methods,7,19 we agree with Csillery et al.27 that ABC is far from being as "easy as 123".
Regarding the statistical validity of ABC, it has to be noted that ABC becomes exact Bayesian inference if ε → 0 and if the summary statistics (if used) are statistically sufficient for the parameters θ.15 This rigorous statistical foundation makes ABC conceptually superior to pseudo-Bayesian approaches for uncertainty analysis, such as Generalized Likelihood Uncertainty Estimation (GLUE).28 The GLUE approach avoids the evaluation of likelihood functions by using simple "less formal likelihoods". As demonstrated by Mantovan and Todini,29 this "does not allow for a correct estimation of the posterior probability distribution, since the choice of the 'less formal likelihoods' highly reduced its capability to extract information from the observations...". In this regard, ABC methods are coherent and, similar to numeric solvers for differential equations, can compute approximate but satisfactory solutions to otherwise intractable problems. Although we fully acknowledge that, as with all Bayesian methods, there is some controversy regarding the objectivity of the procedure (see Csillery et al.27 for a comprehensive discussion), we believe that ABC will greatly improve many areas of environmental science and management, because (i) it allows for a formal treatment of the uncertainty of a wide range of environmental simulation models, which until today have been, at best, only qualitatively compared to monitoring data, and (ii) it can provide a good approximation to the full posterior parameter distribution. This is of great importance for decision making under uncertainty and adaptive management, where it is mandatory to update existing information in the light of new information.6

■ ASSOCIATED CONTENT
Supporting Information. Additional information as noted in the text. This material is available free of charge via the Internet at http://pubs.acs.org.


■ AUTHOR INFORMATION
Corresponding Author
*Phone: +41 44 823 5397; fax: +41 44 823 5028; e-mail: joerg.[email protected].

■ ACKNOWLEDGMENT
We would like to thank Peter Reichert and Carlo Albert from Eawag for introducing and advising us with regard to the ABC methods and their implementations. This work has been partially sponsored by the University of A Coruña.

■ REFERENCES
(1) Gottschalk, F.; Sonderer, T.; Scholz, R. W.; Nowack, B. Modeled environmental concentrations of engineered nanomaterials (TiO2, ZnO, Ag, CNT, fullerenes) for different regions. Environ. Sci. Technol. 2009, 43, 9216–9222.
(2) Ort, C.; Gujer, W. Sampling for representative micropollutant loads in sewer systems. Water Sci. Technol. 2006, 54, 169–176.
(3) Rauch, W.; Brockmann, D.; Peters, I.; Larsen, T. A.; Gujer, W. Combining urine separation with waste design: an analysis using a stochastic model for urine production. Water Res. 2003, 37, 681–689.
(4) Ort, C.; Lawrence, M. G.; Reungoat, J.; Mueller, J. F. Sampling for PPCPs in wastewater systems: Comparison of different sampling modes and optimization strategies. Environ. Sci. Technol. 2010, 44, 6289–6296.
(5) Ort, C.; Lawrence, M. G.; Rieckermann, J.; Joss, A. Sampling for pharmaceuticals and personal care products (PPCPs) and illicit drugs in wastewater systems: Are your conclusions valid? A critical review. Environ. Sci. Technol. 2010, 44, 6024–6035.
(6) Qian, S. S.; Reckhow, K. H. Combining model results and monitoring data for water quality assessment. Environ. Sci. Technol. 2007, 41, 5007–5013.
(7) Beaumont, M. A.; Zhang, W. Y.; Balding, D. J. Approximate Bayesian Computation in population genetics. Genetics 2002, 162, 2025–2035.
(8) Marjoram, P.; Molitor, J.; Plagnol, V.; Tavare, S. Markov chain Monte Carlo without likelihoods. Proc. Natl. Acad. Sci. U.S.A. 2003, 100, 15324–15328.
(9) Sisson, S. A.; Fan, Y.; Tanaka, M. M. Sequential Monte Carlo without likelihoods. Proc. Natl. Acad. Sci. U.S.A. 2007, 104, 1760–1765.
(10) van Nuijs, A. L. N.; Castiglioni, S.; Tarcomnicu, I.; Postigo, C.; de Alda, M. L.; Neels, H.; Zuccato, E.; Barcelo, D.; Covaci, A. Illicit drug consumption estimations derived from wastewater analysis: A critical review. Sci. Total Environ. 2010, DOI: 10.1016/j.scitotenv.2010.05.030.
(11) Ort, C.; Schaffner, C.; Giger, W.; Gujer, W. Modeling stochastic load variations in sewer systems. Water Sci. Technol. 2005, 52, 113–122.
(12) Gelman, A.; Carlin, J. B.; Stern, H. S.; Rubin, D. B. Bayesian Data Analysis; Chapman & Hall: London, 1995.
(13) Tavare, S.; Balding, D. J.; Griffiths, R. C.; Donnelly, P. Inferring coalescence times from DNA sequence data. Genetics 1997, 145, 505–518.
(14) Toni, T.; Welch, D.; Strelkowa, N.; Ipsen, A.; Stumpf, M. P. H. Approximate Bayesian Computation scheme for parameter inference and model selection in dynamical systems. J. R. Soc., Interface 2009, 6, 187–202.
(15) Sisson, S. A.; Fan, Y. Likelihood-free Markov chain Monte Carlo. In Handbook of Markov Chain Monte Carlo; Brooks, S. P., Gelman, A., Jones, G., Meng, X.-L., Eds.; Chapman and Hall/CRC Press: New York, 2010.
(16) Del Moral, P.; Doucet, A.; Jasra, A. Sequential Monte Carlo samplers. J. R. Stat. Soc., Ser. B: Stat. Methodol. 2006, 68, 411–436.
(17) Schaffner, C.; Giger, W. Anticorrosive benzotriazoles as contaminants in wastewaters and rivers. Chimia 2004, 58 (7/8), 453.


(18) Geigy. Geigy Scientific Tables; Ciba Pharmaceutical Co.: Basel, Switzerland, 1981.
(19) Wegmann, D.; Leuenberger, C.; Excoffier, L. Efficient Approximate Bayesian Computation coupled with Markov chain Monte Carlo without likelihood. Genetics 2009, 182, 1207–1218.
(20) Beaumont, M. A.; Cornuet, J. M.; Marin, J. M.; Robert, C. P. Adaptive Approximate Bayesian Computation. Biometrika 2009, 96, 983–990.
(21) Del Moral, P.; Doucet, A.; Jasra, A. An Adaptive Sequential Monte Carlo Method for Approximate Bayesian Computation; Imperial College: London, 2009.
(22) Peters, G. W.; Fan, Y.; Sisson, S. A. On sequential Monte Carlo, partial rejection control and Approximate Bayesian Computation. arXiv:0808.3466v2 [stat.CO], 2009.
(23) MacKay, D. J. C. Information Theory, Inference and Learning Algorithms; Cambridge University Press: Cambridge, England, 2003.
(24) Csillery, K.; Blum, M. G. B.; Gaggiotti, O. E.; Francois, O. Approximate Bayesian Computation (ABC) in practice. Trends Ecol. Evol. 2010, 25, 410–418.
(25) Joyce, P.; Marjoram, P. Approximately sufficient statistics and Bayesian computation. Stat. Appl. Genet. Mol. Biol. 2008, 7.
(26) Secrier, M.; Toni, T.; Stumpf, M. P. H. The ABC of reverse engineering biological signalling systems. Mol. BioSyst. 2009, 5, 1925–1935.
(27) Csillery, K.; Blum, M. G. B.; Gaggiotti, O. E.; Francois, O. Approximate Bayesian Computation (ABC) in practice. Trends Ecol. Evol. 2010, 25, 410–418.
(28) Beven, K.; Binley, A. The future of distributed models: model calibration and uncertainty prediction. Hydrol. Processes 1992, 6, 279–298.
(29) Mantovan, P.; Todini, E. Hydrological forecasting uncertainty assessment: Incoherence of the GLUE methodology. J. Hydrol. 2006, 330, 368–381.
