
Validation and Uncertainty Quantification of a Multiphase Computational Fluid Dynamics Model

Aytekin Gel,†,‡ Tingwen Li,†,§ Balaji Gopalan,† Mehrdad Shahnam,† and Madhava Syamlal*,†

† National Energy Technology Laboratory, Morgantown, West Virginia 26505, United States
‡ ALPEMI Consulting, LLC, Phoenix, Arizona 85044, United States
§ URS Corporation, Morgantown, West Virginia 26505, United States

ABSTRACT: We describe the application of a validation and uncertainty quantification methodology to multiphase computational fluid dynamics modeling, demonstrating the methodology with simulations of a pilot-scale circulating fluidized bed. The overall pressure drop is used as the quantity of interest (QoI); the solids circulation rate and the superficial gas velocity are chosen as the uncertain input quantities. The uncertainty in the QoI, caused by uncertainties in input parameters, surrogate model, spatial discretization, and time averaging, is calculated, and the model form uncertainty is estimated by comparing simulation results with experimental data. The spatial discretization error was determined to be the most dominant source of uncertainty, but the applicability of the method used to calculate that uncertainty needs to be further investigated. The results of the analysis are expressed as a probability box (p-box) plot. A p-box similarly constructed for predictive simulations will give the design engineer information about the confidence in the predicted values.

1. INTRODUCTION

Advances in theory and numerical techniques1 and the availability of fast, affordable computing power have made multiphase computational fluid dynamics (CFD) an emerging engineering tool for designing and troubleshooting fluidized bed reactors. Decisions about the scale-up of commercial reactors must be made on the basis of information obtained from pilot-scale units 20−100 times smaller, which has been characterized as a "daunting task for a process engineer".2 Multiphase CFD models have the ability to predict the performance of scaled-up fluidized bed reactors, but they must be validated with data from small, pilot-scale units. Validation studies usually report the confidence in the model qualitatively (e.g., "fair" or "good" agreement with data), which is "often presented in an overly favorable light".3 Such a judgment, based on the difference between simulation results and experimental data, cannot by itself give an estimate of the uncertainty in the predicted performance of a larger-scale unit. This is because various sources of uncertainty are unavoidably introduced by the time a numerical solution is computed, even though multiphase CFD models are based on a set of deterministic mathematical equations. Furthermore, the quantities used for validation, such as "the volume fraction, stress, and energy typically fluctuate spatially and temporally with amplitudes comparable to the mean".4 Therefore, the ideal of "perfect" agreement between model and experiment, expected of deterministic models to establish permanently the validity of a model, is practically unachievable in multiphase CFD. What is practically achievable is that all the uncertainties are explicitly identified, characterized, and quantified in the validation simulations so that the uncertainty in the predictive simulations can be estimated. Then design decisions can be made on the basis of quantitative information about the confidence in the predicted performance. The objective of this paper is to demonstrate how a comprehensive uncertainty quantification method can be adopted for quantifying the uncertainties in multiphase CFD

models, to help quantify the uncertainty in predictive simulations, such as those used for scale-up. Many advanced energy systems use multiphase flow reactors, such as gasifiers and carbon capture devices, that can be scaled up with the help of multiphase CFD models.5 A gasifier simulation, for example, uses a set of input parameters taken from the design (e.g., geometry specifications, gas/solid flow rates, and composition) and laboratory measurements (e.g., chemical reaction rates) and predicts the quantities of interest (QoIs) to the designer (e.g., the product gas temperature and composition and the maximum temperature in the gasifier); these are also called system response quantities, or SRQs, by Roy and Oberkampf.10 At present, the QoIs are typically calculated without accounting for the uncertainties in their values. In reality, there are uncertainties in the input parameters, the numerical solution, and the underlying model itself that may have a substantial effect on the predicted QoIs. To use multiphase CFD models for making scale-up decisions with confidence, these uncertainties must be accounted for and quantified. Before the uncertainty in a predictive simulation (for example, that of a commercial-scale gasifier with no data available) can be quantified, the uncertainty in simulations within the validation domain (for example, that of a pilot-scale gasifier with validation data available) must be quantified. The focus of this paper is on the uncertainty quantification (UQ) associated with simulations within the validation domain, a step that must precede predictive-simulation UQ.

Special Issue: Multiscale Structures and Systems in Process Engineering

Received: December 15, 2012. Revised: May 6, 2013. Accepted: May 6, 2013. Published: May 6, 2013.



As the role of modeling is ever-increasing in science and engineering, there is a growing recognition of the need to understand and quantify uncertainties; hence, several uncertainty quantification frameworks have been proposed in the literature.6−10 In this study, we adopt Roy and Oberkampf's10 comprehensive UQ framework for multiphase CFD problems. Although UQ has been used extensively in a diverse range of domains, such as aerospace engineering and construction, there is no prior published work on the validation and uncertainty quantification (VUQ) of multiphase CFD models applicable to fluidized bed reactors. Prior work,13 which primarily focused on input uncertainty propagation for multiphase flow CFD in a fluidized bed, is extended in the current study by systematically accounting for other sources of uncertainty in addition to the input uncertainties. The objective of this paper is to demonstrate the application of a comprehensive VUQ methodology for multiphase flow CFD by employing a case from the well-known Fluidization Challenge Problem.11 The application of Roy and Oberkampf's10 UQ framework to multiphase CFD is challenging because of the large computational cost of multiphase CFD; this paper also discusses the modifications required in the UQ framework to address that challenge.

2. MODELING OF A CIRCULATING FLUIDIZED BED

The circulating fluidized bed simulated in the current study is based on case 5 of the 2010 NETL/PSRI Fluidization Challenge Problem (https://mfix.netl.doe.gov/challenge/index_2010.php).11 The circulating fluidized bed consisted of a 16.8 m tall riser with a diameter of 0.305 m, along with its associated cyclones, standpipe, L-valve, and solids collection systems. The continuum flow solver in the open-source code MFIX, which is based on a multifluid Eulerian−Eulerian formulation with each phase treated as an interpenetrating continuum, was used to simulate the circulating fluidized bed. Additional details on the experimental rig, its operating conditions, and the MFIX multiphase flow solver are provided by Li et al.12 The operating conditions used in the current study are shown in Table 1.

Table 1. Experimental Test Conditions of Challenge Problem Case 5

test variable                                    value
gas                                              air
superficial gas velocity^a (m/s)                 7.58
gas flow at riser bottom (SCM^b)                 0.640
riser outlet pressure (kPa)                      105
solids material                                  HDPE
solids circulation rate (kg/s)                   14.0
gas flow through standpipe and L-valve (SCM^b)   0.029
temperature (K)                                  295

^a The superficial gas velocity is calculated on the basis of the temperature and pressure at the bottom of the riser. ^b Standard cubic meters.

3. VALIDATION AND UNCERTAINTY QUANTIFICATION

Li et al.12 provide a comprehensive comparison between numerical results and experimental data with respect to axial pressure gradient profiles, radial profiles of solids velocity, and solids mass flux for case 5 of the 2010 NETL/PSRI challenge problem. They report that the Eulerian−Eulerian model in MFIX predicts the complex flow behavior reasonably well, both qualitatively and quantitatively. To avoid extensive modifications in the multiphase CFD software MFIX (https://www.mfix.org), we employ nonintrusive UQ methods, in which the deterministic application code is treated as a black box and used for sampling. Here we adopt the UQ methodology proposed by Roy and Oberkampf,10 with some modifications, to quantify the validation uncertainty for this case. We use the pressure drop across the riser as the QoI. The methods and procedures for uncertainty quantification are described in the following sections.

3.1. Identification and Characterization of All Sources of Input Uncertainty. The first step is the identification and characterization of all sources of input uncertainty that can affect the simulation results. In this study, this is achieved through information provided by the domain experts.13 The survey shown in Table 2 was used to identify sources of uncertainty in the input parameters and to capture the knowledge and expertise of the domain experts. First, the domain experts are asked to list all sources of uncertainty in the model input parameters. Then, for each input parameter, a nominal (or baseline) value and a range for the upper and lower bounds are entered, with adequate references to justify and document the source of this information. The domain experts are asked to prioritize the importance of each source of uncertainty based on their own experience and beliefs. Feedback from the domain experts could be used as prior information, which is particularly important with Bayesian methods. However, to avoid the adverse effect of any bias introduced by the domain experts' rankings, systematic screening studies need to be performed to identify the few most important parameters that significantly affect the quantities of interest. Statistical design of experiments offers various low-sample-size methods, such as fractional factorial or Plackett−Burman designs, for screening experiments that identify significant main effects without their interaction effects.17 Systematic screening studies can help generate the necessary evidence for the ranking of the identified input parameter uncertainties and, in some cases, initiate further investigation of discrepancies between the prior beliefs of the domain experts and the screening results.

The domain experts are also requested to characterize the listed uncertainties as aleatory, epistemic, or mixed. These are defined by Roy and Oberkampf as "aleatory − the inherent variation in a quantity that, given sufficient samples of the stochastic process, can be characterized via a probability density distribution, or epistemic − uncertainty due to lack of knowledge by the modelers, analysts conducting the analysis, or experimentalists involved in validation." For example, the temperature at which the experiments were conducted was affected by the weather and the air conditioning system; hence, there is a distribution of temperature over the experimental repeats due to factors beyond the experimentalist's control, which implies that the uncertainty in the temperature is aleatory. The same reasoning applies to all input parameters listed in Table 2, with the exception of the restitution coefficient, which is categorized as epistemic because of the lack of experimental measurements.18 For aleatory uncertainties, the domain experts are asked to provide a probability distribution function (PDF) to characterize the uncertainty. Finally, the input parameters that might be correlated with each other are identified, because they need to be analyzed as a special case.

These surveys provide the added benefit that they (a) encourage CFD analysts to think about uncertainty at an early stage of their project, (b) become a basis for multiple participants on a project to come to a consensus about the important uncertainties that affect the QoIs, and (c) become logbooks of the project for future reference. For demonstrating the VUQ methodology, the top two of the eight uncertain input parameters shown in Table 2 are selected for input uncertainty propagation in this study.
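Such a survey lends itself to a simple structured record. The sketch below is our own illustration, not part of the original workflow; it uses Python with hypothetical field names to show one way a Table 2 row could be stored so that it can later be consumed by screening and sampling scripts.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class UncertainInput:
    """One row of the expert-survey table (fields mirror Table 2)."""
    rank: int                 # importance rank assigned by the domain experts
    name: str                 # uncertain input parameter
    symbol: str               # variable name used in the model
    units: str
    nominal: float            # most likely (baseline) value, n
    min_pct: Optional[float]  # lower bound as a % of n (None if not provided)
    max_pct: Optional[float]  # upper bound as a % of n
    justification: str        # reference documenting the bounds
    kind: str                 # "aleatory", "epistemic", or "mixed"

    def bounds(self):
        """Absolute lower/upper bounds computed from the percentage entries."""
        lo = None if self.min_pct is None else self.nominal * self.min_pct / 100.0
        hi = None if self.max_pct is None else self.nominal * self.max_pct / 100.0
        return lo, hi

# The two top-ranked parameters used in this study:
Gs = UncertainInput(1, "mean solids circulation rate", "Gs", "kg/s",
                    14.0, 90.0, 110.0, "experimental data [11]", "aleatory")
Ug = UncertainInput(2, "superficial gas velocity", "Ug", "m/s",
                    7.58, 95.0, 105.0, "experimental data [11]", "aleatory")
print(Gs.bounds())  # (12.6, 15.4), the factorial levels seen later in Table 3
```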


Table 2. Survey Completed for the Identification and Characterization of All Sources of Uncertainty for the Demonstration Problem

rank  uncertain input parameter                 variable  units  most likely value (n)  min (% of n)  max (% of n)  justification        classification
1     mean solids circulation rate              Gs        kg/s   14                     90            110           experimental data11  aleatory
2     superficial gas velocity                  Ug        m/s    7.58                   95            105           experimental data11  aleatory
3     gas flow rate from standpipe and L-valve  Flv       SCM    0.029                  99.95         100.05        experimental data11  aleatory
4     particle diameter                         Dp        μm     802                    98            102           experimental data11  aleatory
5     particle density                          ρs        kg/m3  863                    99.99         100.01        experimental data11  aleatory
6     restitution coefficient                   e         −      0.8                    −             −             literature30         epistemic
7     pressure at top exit                      P         kPa    105                    99.996        100.004       experimental data11  aleatory
8     temperature                               T         K      293                    98            102           experimental data11  aleatory

Figure 1. Schematic illustration of the current MFIX-PSUADE implementation.

3.2. Propagation of Input Uncertainties through the Model. The direct Monte Carlo simulation method used by Roy and Oberkampf10 for the forward propagation of input uncertainties is not suitable for multiphase CFD simulations because of the prohibitive computational cost per simulation sample. Therefore, we propagate the input uncertainties with the help of a surrogate-model-based Monte Carlo simulation. For this purpose, we used the open-source UQ toolbox PSUADE (https://computation.llnl.gov/casc/uncertainty_quantification) and developed an integrated workflow to couple the simulations and the UQ analysis. Figure 1 illustrates the UQ analysis and the workflow between the UQ toolbox (PSUADE) and the application CFD model (MFIX). The details of this UQ framework and workflow can be found in previous studies by Gel et al.13,14 and Tong and Gel.15

Of the different types of surrogate models available (e.g., reduced-order models, data-fitted models, and stochastic-collocation-based models), a data-fitted response surface model was selected for this study. In order to build a data-fitted response surface as a surrogate model, the sampling points need to be carefully placed in the allowable input parameter space to extract maximum information about the responses with a minimal number of simulations. Several sampling methodologies are available for this purpose, e.g., full factorial or fractional factorial approaches and central composite design (CCD) from statistical design of experiments,17 and space-filling methodologies such as Latin hypercube (LH) and Monte Carlo (MC) sampling.16 The computational expense of each sample (approximately 2−4 weeks per CFD simulation) constrains the total number of sampling simulations that can be run. Hence, although space-filling methodologies such as LH and MC sampling are generally preferred and commonly employed in UQ applications, they would have been computationally too expensive here. Furthermore, a number of simulations already existed before the VUQ study was initiated. In order to reuse these existing simulations (runs 2, 3, 9, 10, 11, 12, and 13 in Table 3) and to avoid a significant number of additional ones (e.g., LH would typically require at least 20−24 runs for two uncertain input parameters), CCD was chosen as the sampling method, which necessitated only 6 additional simulations; further information on CCD and statistical design of experiments can be found in the literature.17 The existing runs were augmented with six additional runs, as shown in Table 3. The settings of the input factors for each run are arranged such that the sampling locations are up to ±1.4 standard deviations from the mean value of each input factor given in Table 2.

Several surrogate models were investigated, including nonparametric response surfaces, such as the Gaussian process model and multivariate adaptive regression splines (MARS), and parametric polynomial regression. The statistical model selection criterion adjusted R2 (for polynomial-regression-based response surfaces) and the cross-validation error method were employed to determine the best fit with respect to the QoI. Using an inadequate surrogate model would have an adverse effect on the uncertainty quantification analysis, as the surrogate model itself introduces an error of its own.
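To make the sampling step concrete, the sketch below (a simplified stand-in for PSUADE's sampler; NumPy assumed, and the helper name is ours) generates a two-factor CCD. With the centers and half-widths shown, the factorial (±1) and axial (±1.41) points reproduce the Ug and Gs settings that appear in Table 3.

```python
import numpy as np

def ccd_two_factor(center, halfwidth, alpha=1.41):
    """Two-factor central composite design in physical units.

    Returns the 9 distinct CCD points: 4 factorial corners (coded +/-1),
    4 axial points (coded +/-alpha), and the center point.
    """
    c = np.asarray(center, dtype=float)
    w = np.asarray(halfwidth, dtype=float)
    corners = np.array([[sx, sy] for sx in (-1, 1) for sy in (-1, 1)])
    axial = np.array([[-alpha, 0], [alpha, 0], [0, -alpha], [0, alpha]])
    coded = np.vstack([corners, axial, [[0, 0]]])
    return c + coded * w

# Centers and half-widths chosen to reproduce the Table 3 settings
pts = ccd_two_factor(center=[7.58, 14.0], halfwidth=[0.38, 1.4])
for ug, gs in pts:
    print(f"Ug = {ug:5.2f} m/s, Gs = {gs:5.2f} kg/s")
```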



Table 3. Inputs and System Response Quantity (SRQ) Calculated from the MFIX Simulations and the Surrogate Model

run no.  input factor 1, Ug (m/s)  input factor 2, Gs (kg/s)  MFIX simulation ΔP (kPa)  surrogate model ΔP (kPa)  % error
1        7.20                      12.60                      17.999                    18.112                     0.63
2        7.96                      12.60                      16.276                    16.166                    −0.68
3        7.20                      15.40                      20.479                    20.409                    −0.34
4        7.96                      15.40                      18.682                    18.389                    −1.57
5        7.04                      14.00                      19.682                    19.616                    −0.34
6        8.12                      14.00                      16.511                    16.798                     1.74
7        7.58                      12.02                      16.636                    16.737                     0.61
8        7.58                      15.98                      19.774                    19.934                     0.81
9        7.58                      14.00                      17.781                    17.807                     0.15
10       7.58                      12.60                      16.899                    16.941                     0.25
11       7.58                      15.40                      19.128                    19.202                     0.39
12       7.20                      14.00                      19.035                    18.996                    −0.20
13       7.96                      14.00                      17.109                    17.013                    −0.56

(Runs 2, 3, 9, 10, 11, 12, and 13 are the preexisting simulations.)

For this purpose, the best-fitting surrogate model should be sought and used. There are a number of model selection criteria for parametric response surface models, such as adjusted R2, Akaike's information criterion (AIC), and Schwarz's Bayesian information criterion (BIC).19 Adjusted R2 is a revised statistical measure that overcomes a deficiency of the standard R2 statistic, which measures the total variability explained by the regression model: R2 always increases as factors are added to the regression model, whether or not they are significant. Adjusted R2 takes the size of the model into account, such that it may decrease if nonsignificant terms are added to the regression model.17 The objective of these criteria is to determine the best model based on the trade-off between the goodness-of-fit and the parsimony (simplicity) of the model, as measured by the number of model parameters. Typically, the best model is selected as the one that maximizes adjusted R2 or minimizes AIC or BIC. However, none of these criteria are direct measures of predictive power. Cross-validation methods can be employed to assess the predictive power of the surrogate model by obtaining nearly unbiased estimates of the prediction error.20 The method is based on the idea of randomly splitting the sampling data set used to generate the response surface into a number of equal subsets. Each of these subsets is removed in turn from the complete sampling data set, and the model is fitted to the remaining data. At each stage, an attempt is made to predict the removed subset using the model fitted to the remaining data, on the basis of which an error norm is computed from the predicted results. This approach is applicable to both parametric and nonparametric response surfaces. In this study, the cross-validation method is employed in addition to the adjusted R2 criterion for model selection. Among the several parametric and nonparametric response surfaces tested, the best fit is obtained with a quadratic-regression-based parametric surrogate model, shown in eq 1:

$$\Delta P = 127.72 - 22.89\,U_g - 2.7032\,G_s - 0.034774\,U_g G_s + 1.3699\,U_g^2 + 0.13479\,G_s^2 \qquad (1)$$
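Equation 1 is easy to check against the tabulated runs. The short sketch below (NumPy assumed; our illustration, not the authors' code) evaluates eq 1 at the Table 3 sample points and reproduces the surrogate-model column and the percent errors.

```python
import numpy as np

def surrogate_dp(ug, gs):
    """Quadratic response surface of eq 1 (pressure drop in kPa)."""
    return (127.72 - 22.89*ug - 2.7032*gs - 0.034774*ug*gs
            + 1.3699*ug**2 + 0.13479*gs**2)

# Sample points and MFIX results from Table 3 (runs 1-13)
ug = np.array([7.20, 7.96, 7.20, 7.96, 7.04, 8.12, 7.58,
               7.58, 7.58, 7.58, 7.58, 7.20, 7.96])
gs = np.array([12.60, 12.60, 15.40, 15.40, 14.00, 14.00, 12.02,
               15.98, 14.00, 12.60, 15.40, 14.00, 14.00])
mfix = np.array([17.999, 16.276, 20.479, 18.682, 19.682, 16.511, 16.636,
                 19.774, 17.781, 16.899, 19.128, 19.035, 17.109])

pred = surrogate_dp(ug, gs)
err = 100.0 * (pred - mfix) / mfix   # % error, the last column of Table 3
print(np.round(pred, 3))             # e.g., run 12 gives ~18.996 kPa
print(np.round(err, 2))
```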

In Table 3, the fourth and fifth columns show the results from the MFIX simulations and the constructed surrogate model, respectively. As seen from the last column, the surrogate model represents the system behavior with less than 1% error, except for runs 4 and 6, which have errors of less than 2%. This error could be reduced by performing additional simulations around the input parameter settings of runs 4 and 6 to improve the surrogate model with additional samples. The adjusted R2, one of the common statistical metrics used to assess the goodness-of-fit and parsimony of a polynomial regression, is calculated as 97.66%, which implies that 2.34% of the variability observed in the QoI over the runs in Table 3 could not be explained by the surrogate model. As part of the model selection trials, a cubic-polynomial-based response surface was also tested, but it was eliminated because its adjusted R2 of 97.26% was less than that of the quadratic model. An additional model selection check based on the cross-validation method was also performed to assess the quality of the surrogate model. Figure 2 shows the histogram of the cross-validation errors and the plot of the actual versus predicted pressure drop (in kPa) during cross-validation. In the latter plot, ideally all points (i.e., green circles) should lie on the diagonal; deviations from the diagonal are due to surrogate model errors, and the prediction error of the surrogate model increases as more points deviate from the diagonal. In Figure 2b, the mean of the cross-validation errors was calculated to be −0.0267. Cross-validation errors need to be centered around zero; otherwise, the output of the surrogate model is affected by a systematic bias. In this case, the error due to run 6 appears to create some systematic bias, which could be addressed with additional runs around this sample point. However, additional simulations around this point were not performed, and the estimate of the uncertainty introduced by the surrogate model is considered sufficient for the demonstration purposes of this study. Figure 3 shows the 2-D contour (right) and 3-D surface (left) plots of the constructed surrogate model. In the 3-D surface plot, some of the actual MFIX sampling results for the QoI are also shown (black circles above or below the response surface, some of which are not visible from the current viewpoint of the plot). The relative distance of these samples from the response surface provides a qualitative visual illustration of the error introduced by the fitted response surface at those locations, in addition to the other statistical measures discussed during the evaluation of the surrogate model adequacy.

Both input parameters are treated as aleatory uncertainties, which can be characterized with PDFs. From the experimental data, a normal distribution is assumed to be an adequate representation. Hence, Ug is characterized as a truncated normal distribution with a mean of 7.2 m/s and a standard deviation of 0.04 m/s, and Gs is characterized as a truncated normal distribution with a mean of 14.0 kg/s and a standard deviation of 0.34 kg/s. Figure 4 shows the histograms of the 100 000 samples drawn for each of these variables for the Monte Carlo simulation. Figure 5 shows the outcome of the forward uncertainty propagation with the help of the surrogate model; without a surrogate model, it would have been practically impossible to perform the required 100 000 sample evaluations with multiphase CFD. Figure 5a shows the histogram of the QoI. The sample mean is 19.02 kPa, with a standard deviation of 0.31 kPa. Figure 5b shows the empirical cumulative distribution function (eCDF) computed from the histogram, which is required for the UQ framework used here.
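The forward propagation step can be sketched as follows (NumPy assumed). The truncation bounds are our assumption, taken from the Table 2 percentage bounds, since the paper does not state them explicitly.

```python
import numpy as np

rng = np.random.default_rng(0)

def surrogate_dp(ug, gs):
    # eq 1, the same quadratic response surface as in the previous sketch
    return (127.72 - 22.89*ug - 2.7032*gs - 0.034774*ug*gs
            + 1.3699*ug**2 + 0.13479*gs**2)

def truncnorm(mean, std, lo, hi, n):
    """Rejection sampling from a normal(mean, std) truncated to [lo, hi];
    adequate here because the bounds are many standard deviations wide."""
    out = np.empty(0)
    while out.size < n:
        x = rng.normal(mean, std, n)
        out = np.concatenate([out, x[(x >= lo) & (x <= hi)]])
    return out[:n]

n = 100_000
# Input PDFs as characterized in the text; truncation bounds assumed
# to follow the Table 2 percentages
ug = truncnorm(7.2, 0.04, 7.2 * 0.95, 7.2 * 1.05, n)
gs = truncnorm(14.0, 0.34, 14.0 * 0.90, 14.0 * 1.10, n)

dp = surrogate_dp(ug, gs)
print(dp.mean(), dp.std())   # ~19.02 kPa and ~0.31 kPa, as reported

# Empirical CDF of the QoI (cf. Figure 5b)
x, F = np.sort(dp), np.arange(1, n + 1) / n
```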


Figure 2. Cross validation (CV) assessment of the surrogate model: (a) histogram of CV errors and (b) plot of actual versus predicted results by the surrogate model for each run.

Figure 3. (a) 3-D and (b) 2-D contour plots of the quadratic response surface fitted for pressure drop (ΔP, in kPa).

Sensitivity analysis (SA) is an important component of the UQ analysis, as it can identify the input parameters that contribute the most to the variability observed in the predicted QoI and enable engineers to allocate resources efficiently to measure those parameters more accurately and reduce the uncertainty, if possible. As a demonstration, PSUADE was used to conduct a global sensitivity analysis using Sobol' total sensitivity indices with numerical integration.21 To ensure that the computed Sobol' total indices are not sensitive to the number of times the analysis is performed, several trials up to a limit of 1000 (i.e., the maximum limit in PSUADE) were conducted; the difference between the results for 100 and 1000 trials was less than 1%. The sensitivity analysis shows that 56% of the variability observed in the pressure drop is due to the second input parameter, Gs, and the remainder is due to Ug. Hence, reducing the variability in Gs (i.e., the standard deviation of Gs) will have more impact in reducing the variability of the predicted pressure drop, as shown in Figure 6 for two special cases generated with new Monte Carlo simulations. Figure 6a shows the results of the first case, in which the variability of the first input parameter, Ug, was reduced by half, from 0.04 to 0.02 m/s, while keeping all other parameters the same. As the SA results show, the effect of this change on the pressure drop is small; i.e., the mean stayed the same and the standard deviation changed from 0.31 to 0.30 kPa (about a 3.3% decrease). For the second case, Figure 6b shows the results when the variability of Gs is reduced by half, from 0.34 to 0.17 kg/s. The variability observed in the predicted pressure drop is reduced substantially; i.e., the standard deviation decreased from 0.31 to 0.20 kPa (more than a 50% reduction in variance). However, the mean changed negligibly, from 19.02 to 19.01 kPa. Therefore, to reduce the uncertainty in the predicted pressure drop, it is more important to measure or specify Gs with less variability than Ug.
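For readers without PSUADE, first-order Sobol' indices can be estimated directly from the surrogate with a pick-freeze Monte Carlo estimator. The sketch below is our minimal stand-in and differs in detail from the numerical-integration estimator used in the paper; it reuses surrogate_dp and truncnorm from the propagation sketch above.

```python
import numpy as np

def first_order_sobol(samplers, model, n=100_000):
    """Pick-freeze Monte Carlo estimate of first-order Sobol' indices.

    samplers: list of functions, each drawing n independent input samples.
    model: vectorized model of the inputs.
    """
    A = np.column_stack([draw(n) for draw in samplers])
    B = np.column_stack([draw(n) for draw in samplers])
    yA, yB = model(*A.T), model(*B.T)
    var = yA.var()
    indices = []
    for i in range(A.shape[1]):
        ABi = B.copy()
        ABi[:, i] = A[:, i]          # freeze input i at its A-matrix values
        indices.append(np.mean(yA * (model(*ABi.T) - yB)) / var)
    return np.array(indices)

S_ug, S_gs = first_order_sobol(
    [lambda n: truncnorm(7.2, 0.04, 7.2 * 0.95, 7.2 * 1.05, n),
     lambda n: truncnorm(14.0, 0.34, 14.0 * 0.90, 14.0 * 1.10, n)],
    surrogate_dp)
print(S_ug, S_gs)   # fractions of the output variance due to Ug and Gs
```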



Figure 4. Histogram plots for the input parameters (a) Ug and (b) Gs used during Monte Carlo simulations for input uncertainty propagation.

The use of a surrogate model instead of the actual CFD model in the input uncertainty propagation introduces an additional uncertainty, which needs to be taken into account. For the demonstration purposes of this study, the uncertainty introduced by the surrogate model is conservatively estimated as the largest discrepancy observed between the MFIX simulations and the surrogate model, which is 1.74% on the basis of run 6 (Table 3).

3.3. Estimation of Uncertainty Due to Numerical Approximation. Uncertainties due to numerical approximation typically include coding error, round-off error, iteration convergence error, discretization error (both spatial and temporal), and time-averaging error for transient flows. Coding errors are mistakes in the CFD code; they are eliminated through a verification process that precedes validation. In the current study, coding errors are assumed to have been eliminated, because MFIX has been verified with numerous verification test problems over several years, and nightly regression testing is performed automatically against a suite of test problems. Round-off errors, resulting from the finite representation of numbers by the computer, are expected to be negligible because double-precision numbers are used in the code. Various techniques are available for estimating iteration convergence errors during the solution verification process. In this study, the iteration convergence errors are assumed to be small and independent of the other errors; the inclusion of iteration convergence errors in the UQ analysis will be investigated in a future study. The discretization and time-averaging errors are considered in this study and are discussed in the following sections.

3.3.1. Spatial Discretization Error. Three grid resolutions of 50 × 2100 × 38 (fine), 35 × 1485 × 27 (medium), and 25 × 1050 × 19 (coarse) with a grid refinement ratio of 1.41 were used in the simulations to evaluate the grid convergence and to

Figure 5. (a) Histogram and (b) empirical CDF plot of the pressure drop based on 100 000 sample Monte Carlo simulation for propagating the input uncertainties.

Figure 6. Histograms of the same SRQ for special cases after reduction of the variability of (a) Ug and (b) Gs. 11429


quantify the spatial discretization error. A second-order Superbee scheme is used for the spatial discretization; its numerical implementation has been verified with the help of analytical solutions generated by the method of manufactured solutions (MMS).22 The QoI, the overall pressure drop, is compared for the different grids in Table 4.

Table 4. Numerical Results for Different Grid Resolutions

grid (normalized size)   pressure drop (kPa)
coarse (2)               17.636
medium (1.41)            16.958
fine (1)                 16.429
f_{h=0}                  14.55

The pressure drop decreases as the grid size is decreased, perhaps because the pressure drop decreases as the region near the walls is better resolved, which more than offsets the increase in the pressure drop caused by better-resolved clusters of the (Geldart group B) particles. For three solutions f_1, f_2, and f_3 at a constant grid refinement ratio r = √2, the observed order of accuracy p can be calculated to be 0.72 from23,24

$$p = \frac{\ln\!\left(\dfrac{f_3 - f_2}{f_2 - f_1}\right)}{\ln(r)} \qquad (2)$$

By applying Richardson extrapolation, the exact value of the quantity at h = 0 can be estimated as

$$f_{h=0} = f_1 + \frac{f_1 - f_2}{r^p - 1} \qquad (3)$$

to get f_{h=0} = 14.55 kPa. A careful examination of the axial pressure gradient profiles and the flow field confirms that the solutions used in this study are indeed sufficiently grid-converged.25 In spite of the expected accuracy of the solution, the observed order of accuracy is low and the numerical approximation uncertainty is high.

One basic assumption of the above analysis is that the spatial discretization error is dominant and that the other errors are negligible or can be decoupled from the analysis. The temporal discretization error could, however, be coupled with the spatial discretization error and affect the grid convergence analysis. In the MFIX simulations, a first-order implicit Euler temporal discretization scheme is employed, with a time step that is automatically adjusted (within user-defined limits) to reduce the run time. The mean time step used for each grid resolution is shown in Table 5.

Table 5. Time Step Information Used by the Different Grids in the Grid Convergence Study

                          coarse   medium   fine
mean time step (ms)       0.164    0.115    0.0867
standard deviation (ms)   0.0258   0.0126   0.0052

Because the time step decreases with increasing grid resolution, the error associated with the temporal discretization varies between grids and could be folded into the calculated spatial discretization error. A preliminary study in which the time step was varied for the coarse-grid simulations indicates that the temporal discretization errors are not negligible.26 Furthermore, the flux limiters used in the calculations reduce the order of accuracy to first order at discontinuities in the volume fraction field. This is likely to change the order of accuracy from second order on coarse grids to first order as the cell size approaches zero, and eq 2 would then fail to provide an observed order of accuracy close to the formal order of the scheme.23

This limitation motivates the use of the mixed-order analysis proposed by Roy.18 In the following analysis, we assume that the numerical uncertainty introduced by the variable time steps used by MFIX is negligible. The series representations for the three grid resolutions are

$$f_1 = f_{h=0} + g_1 h_1 + s_1 \Delta t_1 + g_2 h_1^2 + O(h_1^3) + O(\Delta t_1^2) \qquad (4)$$

$$f_2 = f_{h=0} + g_1 h_2 + s_1 \Delta t_2 + g_2 h_2^2 + O(h_2^3) + O(\Delta t_2^2) \qquad (5)$$

$$f_3 = f_{h=0} + g_1 h_3 + s_1 \Delta t_3 + g_2 h_3^2 + O(h_3^3) + O(\Delta t_3^2) \qquad (6)$$

In the above equations we have retained terms up to second order for the spatial discretization and up to first order for the temporal discretization, on the basis of the methods used in the present calculations. From Tables 4 and 5 we note that the ratio of the time step to the grid size is nearly constant, i.e., Δt_n/h_n ≅ 12; therefore, the above equations can be simplified as

$$f_1 = f_{h=0} + \hat{g}_1 h_1 + g_2 h_1^2 + O(h_1^3) + O(\Delta t_1^2) \qquad (7)$$

$$f_2 = f_{h=0} + \hat{g}_1 h_2 + g_2 h_2^2 + O(h_2^3) + O(\Delta t_2^2) \qquad (8)$$

$$f_3 = f_{h=0} + \hat{g}_1 h_3 + g_2 h_3^2 + O(h_3^3) + O(\Delta t_3^2) \qquad (9)$$

where ĝ_1 = g_1 + 12 s_1. Assuming that the higher-order terms are negligible, these equations can be solved for the three unknowns to get f_{h=0} = 14.9 kPa, ĝ_1 h_1 = 1.6 kPa, and g_2 h_1^2 = −0.1 kPa.

Due to the limitations stated above, the current grid study raises the question of how grid convergence should be determined for an unsteady gas−solid flow simulation. Answering this question is beyond the scope of the current paper, but it should be discussed in depth in future work. It should also be noted that the simulation conditions for the grid convergence study are slightly different from the baseline conditions listed in Table 1.12 Consequently, it is an assumption that the numerical uncertainty estimated from this grid convergence study equals the numerical uncertainty of the baseline simulation. Nevertheless, it is a reasonable assumption, because the discrepancy in the predicted flow conditions is expected to be small, considering the small deviation from the baseline condition.

Acknowledging the above limitations of the current grid convergence study, the uncertainty due to spatial discretization is estimated using the approach given by Roy and Oberkampf,10 which is based on Roache's grid convergence index:27

$$U_{\mathrm{NUM}} = U_{\mathrm{spatial\,DE}} \cong F_s\,|f_2 - f_{h=0}| = 1.25\,|17.6 - 14.9| = 3.4\ \mathrm{kPa} \qquad (10)$$

where F_s is a safety factor, recommended to be 1.25 for comparisons involving three or more grids and 3 for comparisons between two grids. This numerical uncertainty is expected to be a combination of the spatial and temporal discretization errors and is likely to be a very conservative estimate.
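The grid convergence arithmetic above is compact enough to reproduce directly. The sketch below (NumPy assumed; our illustration) evaluates eqs 2 and 3 and solves the simplified mixed-order system of eqs 7−9 for the Table 4 data.

```python
import numpy as np

# Pressure drop (kPa) on the fine, medium, and coarse grids (Table 4),
# with normalized grid sizes h = 1, 1.41, 2 and refinement ratio sqrt(2)
f1, f2, f3 = 16.429, 16.958, 17.636
h = np.array([1.0, 1.41, 2.0])
r = np.sqrt(2.0)

# Observed order of accuracy (eq 2) and Richardson extrapolation (eq 3)
p = np.log((f3 - f2) / (f2 - f1)) / np.log(r)
f_h0 = f1 + (f1 - f2) / (r**p - 1.0)
print(p, f_h0)   # ~0.72 and ~14.55 kPa, as quoted in the text

# Mixed-order analysis: solve eqs 7-9 (higher-order terms dropped)
# for the three unknowns f_{h=0}, g1_hat, and g2
A = np.column_stack([np.ones(3), h, h**2])
f_h0_mixed, g1_hat, g2 = np.linalg.solve(A, np.array([f1, f2, f3]))
print(f_h0_mixed, g1_hat * h[0], g2 * h[0]**2)   # ~14.9, ~1.6, ~-0.1 kPa
```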

3.3.2. Time-Averaging Error. To study the inherently transient gas−solids flow in a circulating fluidized bed (CFB)


Figure 7. Time history of pressure drop for the baseline case.

riser, an appropriate averaging process is needed for analyzing both the experimental data and the numerical simulation results. Hence, an adequate simulation time is needed to obtain a time-averaged flow field that is stationary. On the other hand, it is preferable to run shorter simulations because of the large computational cost. For example, it took 22 days to complete 10 s of the fine-grid simulation on 20 compute nodes with Intel Xeon E5440 2.83 GHz CPUs (80 cores in total) connected by a Mellanox InfiniBand interconnect. Figure 7 shows the time variation of the QoI for the baseline case, for which a 160 s simulation has been completed. It shows that the startup transients disappear and the flow reaches a stationary state in about 10 s, an observation also confirmed qualitatively by visualization of the flow field. To eliminate the effect of the startup transients, only the numerical results after the first 20 s are used for the analysis. Although the flow reaches a stationary state, there are fluctuations in the flow field, which appear even in a global variable such as the overall pressure drop. (Much stronger fluctuations are observed in local flow field variables such as the solids velocity.) These fluctuations introduce a time-averaging error. It would be prohibitively expensive to eliminate this error by conducting very long simulations; for practical multiphase CFD computations, therefore, it is necessary to account for the time-averaging error.

To quantify the uncertainty caused by the time-averaging process, the 160 s simulation time is assumed to be sufficient for the current study, because that duration is much larger than the residence time of the gas and the solids. A moving-average process with a certain averaging time interval is then applied to the numerical data:

$$\bar{S}(t,\Delta t) = \frac{1}{\Delta t}\int_t^{t+\Delta t} S\,\mathrm{d}t \qquad (11)$$

where S is the QoI, t is the starting point for averaging, and Δt is the time interval for averaging. The mean and standard deviation of the time-averaged response variable S̄ can be calculated by varying the starting point t (with t ≥ 20 s) over the entire data set:

$$\langle \bar{S}(\Delta t)\rangle = \frac{1}{N_t}\sum_{i=1}^{N_t} \bar{S}(t_i,\Delta t) \qquad (12)$$

and

$$\langle \bar{S}(\Delta t)\rangle_{\mathrm{std}} = \left[\frac{1}{N_t}\sum_{i=1}^{N_t}\left(\bar{S}(t_i,\Delta t)-\langle \bar{S}(\Delta t)\rangle\right)^2\right]^{0.5} \qquad (13)$$

where N_t is the number of time samples. These values are reported in Table 6 for different averaging time intervals.

Table 6. Statistics of the Time-Averaged QoI over Different Averaging Time Intervals

averaging time interval, Δt (s)   5        10       20       40       60
⟨S̄(Δt)⟩ (kPa)                    17.861   17.857   17.826   17.777   17.761
⟨S̄(Δt)⟩std (kPa)                 0.607    0.476    0.356    0.234    0.164
relative error (%)                3.400    2.668    1.995    1.314    0.925

The ratio ⟨S̄(Δt)⟩std/⟨S̄(Δt)⟩ quantifies the time-averaging error as a function of Δt, and the error clearly decreases as Δt increases. As a compromise between accuracy and computational time, a Δt = 40 s time average is used for all analyses except the grid convergence study reported in the previous section, which gives about 1.3% uncertainty in the calculated pressure drop. We have also conducted a random sampling uncertainty analysis for the time-averaging error,28 referred to as "bootstrap uncertainty analysis", and obtained similar results.
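The moving-average statistics of eqs 11−13 can be computed from a saved pressure-drop trace with a few lines of code. The sketch below assumes NumPy and a uniformly sampled trace; the file name in the usage comment is hypothetical.

```python
import numpy as np

def time_avg_stats(t, s, dt_avg, t_start=20.0):
    """Mean, std, and relative error of the moving average S_bar (eqs 11-13).

    t, s: uniformly spaced time stamps (s) and instantaneous QoI values.
    dt_avg: averaging window Delta-t (s); t_start discards the startup
    transient, as in the text.
    """
    step = t[1] - t[0]
    w = int(round(dt_avg / step))        # samples per averaging window
    s_use = s[t >= t_start]
    # S_bar(t_i, dt) for every admissible window start t_i (eq 11)
    csum = np.concatenate([[0.0], np.cumsum(s_use)])
    s_bar = (csum[w:] - csum[:-w]) / w
    mean = s_bar.mean()                  # eq 12
    std = s_bar.std()                    # eq 13
    return mean, std, 100.0 * std / mean # relative error (%) as in Table 6

# Hypothetical usage with a saved trace of the overall pressure drop:
# t, dp_t = np.loadtxt("pressure_drop.dat", unpack=True)
# print(time_avg_stats(t, dp_t, dt_avg=40.0))
```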

3.4. Estimation of Uncertainty in Experimental Data. The uncertainty in experimental data can be viewed as the sum of uncertainties arising from various errors, including instrumentation error, convergence error, and process error.8 The instrumentation error for the measurement of the pressure drop is ±0.004% and as such is neglected for the purpose of uncertainty propagation. The convergence error is the error accumulated due to the use of finite temporal statistics in calculating the steady-state mean value of an unknown variable. This error is quantified using the standard bootstrap uncertainty analysis.28 The bootstrap distribution is generated from the raw data by 20 000


Figure 8. Histogram plot of the pressure drop experimental data for 11 replications.

random samplings of the data. The distribution of the calculated mean value can be fitted with a Gaussian distribution, and hence twice the standard deviation is taken as the 95% confidence interval. The average bootstrap uncertainty is ±0.5%. It should be noted that this error accounts only for the instrument and convergence errors and does not include the effect of uncertainty in the input parameters (e.g., gas velocity, solids flow rate, environmental variability, and other unknown factors). These process errors are quantified by conducting 12 uncorrelated experimental repetitions. When the bootstrap distribution was examined, an outlier point was found in the experimental data with 5 times higher uncertainty than the rest of the data. This is attributed to a convergence issue caused by a sudden change in experimental conditions or an instrument malfunction, and this outlier data point is not considered for comparison purposes. The process error from the remaining 11 repetitions is ∼±5% and is accounted for separately in the uncertainty analysis, as explained in the next section.
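A minimal version of the bootstrap estimate reads as follows (NumPy assumed). This is a plain i.i.d. resampling sketch that ignores the autocorrelation corrections a more careful analysis might add, and the variable names in the usage comment are hypothetical.

```python
import numpy as np

def bootstrap_mean_ci(x, n_boot=20_000, seed=2):
    """Bootstrap the mean of a data series to estimate the convergence
    error; the 95% interval is taken as +/- 2 standard deviations of the
    bootstrap distribution, as in the text."""
    rng = np.random.default_rng(seed)
    idx = rng.integers(0, x.size, size=(n_boot, x.size))
    means = x[idx].mean(axis=1)   # mean of each resampled series
    return means.mean(), 2.0 * means.std()

# Hypothetical usage with a measured pressure-drop trace:
# mu, half_width = bootstrap_mean_ci(dp_measured)
# print(f"{mu:.3f} kPa +/- {half_width:.3f} kPa (95% CI)")
```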

3.5. Estimation of Model Form Uncertainty. The outcome of the validation in Roy and Oberkampf's UQ framework is the model form uncertainty, "the source of which can be either physics modeling assumptions and/or imprecise knowledge of the input uncertainties".10 The model form uncertainty is then used for calculating the uncertainty in predictive simulations. According to Roy and Oberkampf,10 the model form uncertainty is quantified by the minimum area between the empirical CDF of the numerical simulation and that of the experimental measurements. For the conditions listed in run 12 (Table 3), experimental data for the pressure drop are available as 11 replications; Figure 8 shows the histogram of the experimental data used for model validation. Several validation metrics have been reported in the literature.29 Here we employ the area validation metric proposed and used by Roy and Oberkampf.10 For this purpose, an empirical CDF is generated from the experimental data (the staircase-shaped eCDF shown in Figure 9 with a thick black line). The uncertainty in the experimental results was determined to be ±0.5%, which is appended to both sides of the experimental eCDF (thin black lines). The empirical CDF obtained from the input uncertainty propagation (Figure 5b) is added to the same plot (blue line), with the estimate of the uncertainty due to the use of a surrogate model instead of the actual CFD model shown by the cyan colored region that creates a p-box; for the demonstration purposes of the current study, the surrogate model uncertainty is estimated by a simple approach in which the largest discrepancy, 0.287 kPa, is appended to both sides. The area validation metric is based on the Minkowski L1 norm and gives the minimum area between the two empirical CDFs:



$$d(F, S_n) = \int_{-\infty}^{\infty} |F(x) - S_n(x)|\,\mathrm{d}x \qquad (14)$$

Because the experimental (black line) and input uncertainty (blue line) eCDFs do not overlap in our case, the area can be estimated as the difference between the means of the two eCDFs:

$$d(F, S_n) = \left|\int_{-\infty}^{\infty} F(x)\,\mathrm{d}x - \int_{-\infty}^{\infty} S_n(x)\,\mathrm{d}x\right| = |\bar{F} - \bar{S}_n| \qquad (15)$$

where S̄n, the mean of the experimental data (after eliminating one outlier), is 20.297 kPa; F̄, the mean of the Monte Carlo simulation (100 000 samples), is 19.018 kPa; and d(F,Sn), the area validation metric, is |19.018 − 20.297| = 1.279 kPa. When the effect of the surrogate model uncertainty (±0.287 kPa) on the simulation eCDF (the blue eCDF in Figure 9) is considered, the above approach of Roy and Oberkampf can be revised to provide lower and upper bounds for the model form uncertainty. This is achieved by taking the maximum and minimum areas between the cyan p-box and the experimental eCDF (the thick black staircase line in Figure 9), giving the interval 0.992 kPa < d(F,Sn) < 1.566 kPa for the model form uncertainty. For the total uncertainty estimation in Figure 10, we follow the guidance in the UQ framework of Roy and Oberkampf and use the minimum area between the simulation eCDF (the right-most eCDF of the cyan p-box) and the experimental eCDF, which is calculated as 0.992 kPa.


Figure 9. Empirical CDF plot for experiments and forward propagation of model input parameter uncertainties.
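The area validation metric of eq 14 can be evaluated directly on two sets of samples. The sketch below (NumPy assumed; our illustration) integrates |F − Sn| exactly between the pooled sample points; the experimental array in the usage comment is a placeholder, since the 11 replicate values are not tabulated in the paper.

```python
import numpy as np

def area_validation_metric(sim, exp):
    """Minkowski L1 area between two empirical CDFs (eq 14).

    Both eCDFs are right-continuous step functions, so the integral is a
    finite sum over the intervals between the pooled sample points.
    """
    grid = np.sort(np.concatenate([sim, exp]))
    F = np.searchsorted(np.sort(sim), grid, side="right") / sim.size
    Sn = np.searchsorted(np.sort(exp), grid, side="right") / exp.size
    return np.sum(np.abs(F - Sn)[:-1] * np.diff(grid))

# For non-overlapping eCDFs this reduces to |F_bar - Sn_bar| (eq 15),
# here |19.018 - 20.297| = 1.279 kPa. `dp` is the Monte Carlo sample from
# the propagation sketch; `exp_data` is a placeholder for the 11 replicate
# measurements:
# d = area_validation_metric(dp, exp_data)
```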

The computed model form uncertainty, d(F,Sn), arises from the physical modeling assumptions employed in the computational model. It may also account for any deficiency in the uncertainties explicitly accounted for, such as the input uncertainties, the uncertainties introduced by the surrogate model, and the experimental uncertainties (e.g., measurement uncertainty).

3.6. Discussion. The various uncertainties estimated in the previous sections are summarized in Table 7, normalized by a nominal QoI value of 19 kPa. The other uncertainties are dwarfed by the spatial discretization uncertainty, which is unsatisfactory. As discussed before, we believe that this uncertainty is unlikely to be this large; it may point to a limitation in the method used for calculating it, which needs further investigation. Figure 10 gives the final results of the analysis in the form of a composite p-box, which is necessary for representing the overall uncertainty composed of both aleatory and epistemic uncertainties.10 The p-box consists of two bounding eCDFs separated by an interval. The area between the eCDFs is divided into color-coded regions to show how the two bounding CDFs were calculated. The CDF at the center of the p-box (blue curve) was obtained by propagating the input uncertainty with the help of the surrogate model; this is a single CDF because the input uncertainties considered were aleatory. The other sources of uncertainty, namely the surrogate model (cyan), model form (green), spatial discretization (red), and time averaging (yellow), are added to and subtracted from the blue curve to obtain the two CDFs that bound the p-box. These uncertainties are in the form of intervals and are treated as epistemic uncertainties. One of the benefits of VUQ is the ability to calculate the model form uncertainty, which is then used to quantify the uncertainty in predictive simulations. The p-box here is constructed from all the uncertainties in the VUQ to illustrate how such a p-box would be constructed and used for a predictive simulation. The p-box provides an estimate of the tail probabilities of the QoI in terms of upper and lower bounds. For example, the black arrow line on the left-hand side of Figure 10 shows that the probability of the pressure drop being less than or equal to 13.54 kPa lies in the interval [0, 0.2]; i.e., there is at most a 20% chance that the actual pressure drop is less than or equal to 13.54 kPa. Similarly,

the black arrow line on the right-hand side shows that the probability that the pressure drop is greater than 24.2 kPa lies in the interval [0, 0.2]; i.e., there is at most a 20% chance that the actual pressure drop is greater than 24.2 kPa. The simulations in this study were performed in the validation domain, where experimental data are available and the estimated probabilities can be considered correct. For a predictive simulation (e.g., the simulation of a scaled-up device), no experimental data will be available, and a UQ analysis will be required to assess the accuracy of the predictions and to help make design decisions. Contrast this with current practice: a standard multiphase CFD analysis would have calculated the QoI as about 19 kPa and compared it with an average experimental value of 20 kPa to get a qualitative idea of the confidence in the model. It would then be assumed that the uncertainty in the predictive simulation is similar (about 5%), which need not be the case because of the various uncertainties introduced in the predictive simulation. The model form uncertainty determined from the validation simulation is then used to estimate the uncertainty in predictive simulations, as illustrated by Roy and Oberkampf.10 We describe the methodology here without actually performing any predictive simulations of larger, commercial-scale circulating fluidized beds. The uncertainty quantification for predictive simulations proceeds in a way similar to what was described in the previous sections, except for one crucial difference: no data are available for the commercial-scale unit, which has yet to be built. Therefore, the model form uncertainty is not available; as an approximation, the model form uncertainty calculated from the validation simulation is added to the other uncertainties. Roy and Oberkampf10 recommend the use of multiple validation simulations at different conditions, preferably as close as possible to the operating condition of interest, so that the model form uncertainty can be extrapolated to the conditions of the predictive simulation. The uncertainty quantification for a predictive simulation will produce a chart similar to Figure 10, from which design engineers can extract information about the expected


Figure 10. Composite probability box taking into account various sources of uncertainties in the model.

Table 7. Summary of Various Uncertainties

source of uncertainty    estimate of uncertainty in pressure drop as a % of the nominal mean (19 kPa)
experimental data        5
input parameter          1.6
surrogate model          1.5
spatial discretization   17.9
time averaging           1.2
model form               5.2

probability of events for various QoIs. Thus the uncertainty quantification analysis provides quantitative information about the confidence in the predictions, which can be used to make design decisions in a more reliable way.
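As an illustration of how the composite p-box yields such tail-probability intervals, the sketch below (NumPy assumed) shifts the aleatory eCDF by the sum of the epistemic intervals taken from Table 7 and the text. This is our simplified reading of the construction and need not reproduce Figure 10 exactly.

```python
import numpy as np

# Epistemic half-intervals (kPa) added to and subtracted from the aleatory
# input-uncertainty CDF: surrogate model + model form + spatial
# discretization + time averaging (~1.2% of the 19 kPa nominal QoI)
U_epistemic = 0.287 + 0.992 + 3.4 + 0.23

# Aleatory QoI samples: reuse `dp` from the propagation sketch, or the
# stand-in below built from the reported mean and standard deviation
dp = np.random.default_rng(0).normal(19.02, 0.31, 100_000)

def pbox_interval(q):
    """Bounds on P(QoI <= q): the lower bound comes from the right-shifted
    eCDF and the upper bound from the left-shifted eCDF."""
    lower = np.mean(dp <= q - U_epistemic)
    upper = np.mean(dp <= q + U_epistemic)
    return lower, upper

print(pbox_interval(13.54))   # interval bounding the left-tail probability
```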

4. CONCLUSIONS

We demonstrate the application of the validation and uncertainty quantification methodology given by Roy and Oberkampf10 to multiphase CFD modeling, taking circulating fluidized bed simulations as an example. The overall pressure drop is used as the quantity of interest; the solids circulation rate and the superficial gas velocity are chosen as the uncertain input parameters to the CFD model. On the basis of this study, we draw the following conclusions:
• Identification and adequate characterization of all sources of uncertainty needs to be done in a systematic way. In this study, a detailed survey of domain experts was employed.
• The uncertain input parameters identified for this study were characterized as aleatory uncertainties, each with an associated PDF. However, in engineering problems, epistemic uncertainties or a mix of epistemic and aleatory uncertainties typically exist. A future study will incorporate multiple types of input uncertainties.
• As multiphase CFD is computationally expensive, we modified the Roy and Oberkampf10 methodology by using a surrogate model for propagating input uncertainties. Although a central composite design was used for sampling because of the constraints mentioned in the paper, the quadratic polynomial regression model constructed from the CCD samples appears to be an adequate surrogate model for the demonstration purposes of this study.
• The quality of the surrogate model plays a crucial role in uncertainty quantification, as the uncertainty introduced by the surrogate model itself needs to be taken into account.
• The spatial discretization error estimated using Roache's grid convergence index method turned out to be overwhelmingly large in comparison with the other sources of uncertainty. Based on our assessment of the numerical solution, the use of this method appears to have produced an unrealistically large uncertainty, which calls for further investigation of the application of this method to multiphase CFD.
• In multiphase CFD calculations, the uncertainty introduced by time averaging needs to be considered in addition to the sources of uncertainty discussed by Roy and Oberkampf.10 The uncertainty due to time averaging appears to be comparable to the other uncertainties (disregarding the large uncertainty attributed to spatial discretization).
• The uncertainties in the QoI caused by uncertainties in the input parameters, the surrogate model, the spatial discretization, and the time averaging were calculated, and the model form uncertainty was estimated by comparing simulation results with experimental data. An adequate estimate of the model form uncertainty in the application domain is required for quantifying the total uncertainty in predictive simulations, which gives design engineers quantitative information about the confidence in the predicted performance.

AUTHOR INFORMATION

Corresponding Author

*E-mail: [email protected].


Notes


This report was prepared as an account of work sponsored by an agency of the United States Government. Neither the United States Government nor any agency thereof, nor any of their employees, makes any warranty, express or implied, or assumes any legal liability or responsibility for the accuracy, completeness, or usefulness of any information, apparatus, product, or process disclosed, or represents that its use would not infringe privately owned rights. Reference herein to any specific commercial product, process, or service by trade name, trademark, manufacturer, or otherwise does not necessarily constitute or imply its endorsement, recommendation, or favoring by the United States Government or any agency thereof. The views and opinions of authors expressed herein do not necessarily state or reflect those of the United States Government or any agency thereof. The authors declare no competing financial interest.



ACKNOWLEDGMENTS

This technical effort was performed in support of the National Energy Technology Laboratory's ongoing research in multiphase flows under the RDS contract DE-AC26-04NT41817. The authors acknowledge Charles Tong (Lawrence Livermore National Laboratory), Esma Gel (Arizona State University), Chris Roy (Virginia Tech), Vladik Kreinovich (University of Texas at El Paso), and Scott Ferson (Applied Biomathematics) for useful discussions and suggestions, and Lawrence Shadle and Rupendranath Panday for providing the detailed experimental data and for useful discussions. T.L. thanks Jean Dietiker and Justin Weber for their help with data postprocessing. This research was supported in part by an appointment to the US Department of Energy postgraduate program at the National Energy Technology Laboratory administered by the Oak Ridge Institute for Science and Education.



REFERENCES

(1) Pannala, S.; Syamlal, M.; O'Brien, T. J. Computational Gas-Solids Flows and Reacting Systems: Theory, Methods and Practice; IGI Global: Hershey, PA, 2010.
(2) Knowlton, T. M.; Karri, S. B. R.; Issangya, A. Scale-up of Fluidized Bed Hydrodynamics. Powder Technol. 2005, 150, 72−77.
(3) Grace, J. R.; Taghipour, F. Verification and Validation of CFD Models and Dynamic Similarity for Fluidized Beds. Powder Technol. 2004, 139 (2), 99−110.
(4) Report on Workshop on Multiphase Flow Research; DOE/NETL: Morgantown, WV, 2006. http://www.netl.doe.gov/events/06conferences/mfr_workshop/Multiphase Workshop Repor 6.pdf.
(5) Syamlal, M.; Guenther, C.; Cugini, A.; Ge, W.; Wang, W.; Yang, N.; Li, J. Computational Science: Enabling Technology Development. Chem. Eng. Prog. 2011, 107, 23−29.
(6) ASME. Standard for Verification and Validation in Computational Fluid Dynamics and Heat Transfer; ASME: New York, 2009. http://files.asme.org/catalog/codes/printbook/21356.pdf.
(7) Mahadevan, S.; Liang, B. Error and Uncertainty Quantification and Sensitivity Analysis in Mechanics Computational Models. International Journal for Uncertainty Quantification 2011, 1, 147−161.
(8) Coleman, H. W.; Steele, W. G. Experimentation, Validation, and Uncertainty Analysis for Engineers; Wiley: Hoboken, NJ, 2009.
(9) National Research Council of the National Academies. Assessing the Reliability of Complex Models: Mathematical and Statistical Foundations of Verification, Validation, and Uncertainty Quantification; The National Academies Press: Washington, DC, 2012.
(10) Roy, C. J.; Oberkampf, W. L. A Comprehensive Framework for Verification, Validation, and Uncertainty Quantification in Scientific Computing. Computer Methods in Applied Mechanics and Engineering 2011, 200 (25−28), 2131−2144.
(11) Shadle, L.; Shahnam, M.; Cocco, R.; Issangya, A.; Guenther, C.; Syamlal, M.; Spenik, J.; Ludlow, C.; Shaffer, F.; Panday, R.; Gopalan, B.; Dastane, R. Challenge Problem III; CFB X Workshop: Sun River, OR, 2011.
(12) Li, T.; Dietiker, J.; Shahnam, M. MFIX Simulation of NETL/PSRI Challenge Problem of Circulating Fluidized Bed. Chem. Eng. Sci. 2012, 84, 746−760.
(13) Gel, A.; Garg, R.; Tong, C.; Shahnam, M.; Guenther, C. Applying Uncertainty Quantification to Multiphase Flow Computational Fluid Dynamics. Powder Technol. 2013, 242, 27−39.
(14) Gel, A.; Shahnam, M.; Guenther, C. Validation and Uncertainty Quantification of a Multiphase Flow CFD Model; NETL Multiphase Flow Science Workshop: Morgantown, WV, 2012.
(15) Tong, C.; Gel, A. Applying Uncertainty Quantification to Multiphase Flow CFDs; NETL Multiphase Flow Science Workshop: Pittsburgh, PA, 2011.
(16) Allen, M. S.; Camberos, J. A. Comparison of Uncertainty Propagation/Response Surface Techniques for Two Aeroelastic Systems; 50th AIAA Structures, Structural Dynamics, and Materials Conference: Palm Springs, CA, 2009; AIAA Paper No. 2009-2269.
(17) Montgomery, D. Design and Analysis of Experiments; Wiley: Hoboken, NJ, 2008.
(18) Roy, C. J.; Balch, M. S. A Holistic Approach to Uncertainty Quantification with Application to Supersonic Nozzle Thrust. International Journal for Uncertainty Quantification 2012, 2, 363−381.
(19) Bhatti, M.; Al-Shanfari, I. H.; Hossain, M. Z. Econometric Analysis of Model Selection and Model Testing; Ashgate: Burlington, VT, 2006.
(20) Efron, B.; Gong, G. A Leisurely Look at the Bootstrap, the Jackknife, and Cross-Validation. Am. Stat. 1983, 37, 36−48.
(21) Saltelli, A.; Ratto, M.; Andres, T.; Campolongo, F.; Cariboni, J.; Gatelli, D.; Saisana, M.; Tarantola, S. Global Sensitivity Analysis: The Primer; Wiley: Hoboken, NJ, 2008.
(22) Choudhary, A.; Roy, C. J. Code Verification of MFIX Baseline Governing Equations; NETL 2012 Conference on Multiphase Flow Science: Morgantown, WV, 2012.
(23) Roy, C. J. Grid Convergence Error Analysis for Mixed-Order Numerical Schemes. AIAA J. 2003, 41, 595−604.
(24) Slater, J. W. NPARC Alliance CFD Verification and Validation Web Site, 2008. http://www.grc.nasa.gov/WWW/wind/valid/.
(25) Li, T.; Gel, A.; Pannala, S.; Shahnam, M.; Syamlal, M. CFD Simulations of Circulating Fluidized Bed Risers, Part I: Grid Study. Powder Technol. 2013, submitted.
(26) Li, T.; Gel, A.; Shahnam, M.; Syamlal, M. A Preliminary Study on Uncertainties in CFD Simulations of a Pilot-Scale Circulating Fluidized Bed Riser. 2012 AIChE Annual Meeting, Pittsburgh, PA; AIChE: New York, 2012.
(27) Roache, P. Verification and Validation in Computational Science and Engineering; Hermosa Publishers: Socorro, NM, 1998.
(28) Meyer, J. S.; Ingersoll, C. G.; McDonald, L. L.; Boyce, M. S. Estimating Uncertainty in Population Growth Rates: Jackknife vs. Bootstrap Techniques. Ecology 1986, 67, 1156−1166.
(29) Romero, V. J. Elements of a Pragmatic Approach for Dealing with Bias and Uncertainty in Experiments through Predictions: Experiment Design and Data Conditioning; "Real Space" Model Validation and Conditioning; Hierarchical Modeling and Extrapolative Prediction; Sandia Report SAND2011-7342; Sandia National Laboratories: Albuquerque, NM, 2011. http://prod.sandia.gov/techlib/access-control.cgi/2011/117342.pdf.
(30) Li, T.; Guenther, C. A CFD Study of Gas-Solids Jet in a Riser Flow. AIChE J. 2012, 58, 756−769.
