
Computer-Aided Modeling Framework for Efficient Model Development, Analysis, and Identification: Combustion and Reactor Modeling
Martina Heitzig,† Gürkan Sin,† Mauricio Sales-Cruz,‡ Peter Glarborg,§ and Rafiqul Gani*,†

† CAPEC, Department of Chemical and Biochemical Engineering, Technical University of Denmark, Søltofts Plads, Building 227, 2800 Kgs. Lyngby, Denmark
‡ Department of Process and Technology, Universidad Autonoma Metropolitana - Cuajimalpa, Artificios 40, 2° Piso, Col. Hidalgo, Deleg. Alvaro Obregon, 01120 Mexico D.F., Mexico
§ CHEC, Department of Chemical and Biochemical Engineering, Technical University of Denmark, Søltofts Plads, Building 229, 2800 Kgs. Lyngby, Denmark

ABSTRACT: Model-based computer-aided product-process engineering has attained increased importance in a number of industries, including pharmaceuticals, petrochemicals, fine chemicals, polymers, biotechnology, food, energy, and water. This trend is set to continue due to the substantial benefits computer-aided methods introduce. The key prerequisite of computer-aided product-process engineering is, however, the availability of models of different types, forms, and application modes. The development of the models required for the systems under investigation tends to be a challenging and time-consuming task involving numerous steps, expert skills, and different modeling tools. This paper introduces a generic methodology that structures the process of model development, analysis, identification, and application by providing the modeler with the work-flow that needs to be followed in a systematic manner. The methodology has been implemented into a computer-aided modeling framework, which combines the expert skills, tools, and database connections required for the different steps of the model development work-flow, with the goal of increasing the efficiency of the modeling process. The framework has two main branches; the first branch deals with single-scale model development, while the second branch introduces features for multiscale model development to the methodology. In this paper, the emphasis is on the single-scale model development and application part. The modeling framework and the supported stepwise model development are highlighted through a case study related to air pollution control, namely, the thermal treatment of the off-gas stream in adipic acid production in order to reduce its N2O content.

1. INTRODUCTION Computer-aided methods have led to the development of a number of tools (for design, control, sustainability analysis, and many more) that are of great value to the chemical and biochemical industrial sectors in mastering current and future challenges related to feedstock shortages, growing population numbers, environmental issues, safety regulations, demand for increasing product quality, sharper competition due to globalization, and shorter product lifetimes, to name a few. Grossmann and Westerberg1 as well as Pantelides2 identified modeling, simulation, and optimization of large-scale systems to be crucial for handling complex processes and their products. The numerous benefits of computer-aided methods can be grouped under (a) reduction of the number of cost-intensive, time-consuming and resource-demanding experiments, (b) prediction of product-process behavior, (c) replacement of a trial and error approach by the more innovative reverse approach, and (d) increase of the system knowledge for the development of model-based algorithms for integrated product-process design. The development of models describing the various systems in the chemical and biochemical industrial sectors usually requires
expert knowledge and time, and therefore it is cost-intensive. In fact, Foss et al.3 state that the effort spent for modeling is the most time-consuming factor in an industrial project that involves model-based process engineering techniques, even though there is a large variety of commercially available modeling tools. A lot of research has been concentrated in the field of model development, and over the last 20-30 years a number of modeling tools and concepts have been developed. According to von Wedel et al.4 and Marquardt,5 modeling tools can be structured into three main groups. The first group includes the programming languages. The second group is formed by the generic modeling languages that support the modeler in formulating the problem but do not provide any domain specific concepts. Two subgroups of generic modeling languages can be distinguished. They are the mathematical modeling
Industrial & Engineering Chemistry Research languages, which simplify the mathematical formulation of a problem but do not provide the means for structuring the resulting sets of equations. Examples are GAMS6 and MathML.7 The second subgroup consists of the system modeling languages. In contrast to the mathematical modeling languages, the systems modeling languages consider a model as a part of an overall system where model decomposition and aggregation are two main features. A representative of the systems modeling languages is Modelica,8 a standardized modeling language based on object-oriented concepts. Other examples are gPROMS9 and Custom Modeler.10 The third group of modeling tools includes the domain-oriented tools. Here, the model development process is based on providing concepts instead of equations and the tool generates the equations based on the user-specifications. Two different subgroups of domain-oriented tools can be distinguished. The first subgroup is formed by the so-called flowsheeting tools such as Aspen Plus (Aspentech), Hysis (Hyprotech), and Pro II (Simsci), which provide libraries with models for different unit operations that the user can combine to build a process. These tools have a limited flexibility since the user relies to a great extent on the models available in the libraries and has not much insight in the model equations and solution process. Providing more flexibility and allowing the consideration of phenomena on a higher degree of detail has motivated the development of the second subgroup of domain-oriented tools, the process modeling languages. They are based on the decomposition approach that decomposes the flowsheet not only in its unit operations but also on levels below the unit scale. Examples for this subgroup are MODEL.LA11 and ModDev.12,13 All of the above modeling tools support the user, to some extent, in the modeling process. The generic modeling languages offer support in setting up the model equations from a mathematical point of view and in aggregating different models to build the overall system. Here, the user has a maximum of flexibility but the domain knowledge needs to be provided. In this regard, the domain-specific modeling languages intend to complement the generic methods. The systematic computer-aided modeling tool box combines elements of the three groups of modeling tools mentioned above and incorporates them through a framework that provides a balance between support, automation and flexibility. Von Wedel et al.4 proposed a 3-layer approach for the contents and functionalities of such a modeling tool consisting of a mathematical base layer, a systems engineering layer, and a chemical engineering layer to introduce the domain knowledge. Note, however, that different available modeling tools have their limitations and potential for improvements. Two such limitations have been identified by Foss et al.3 and Klatt et al.14 as the following: 1. Lack of detailed understanding of the process of model development in industrial practice.3 2. 
2. Implementation of state-of-the-art modeling techniques.3,14
Modeling techniques suggested by Foss et al.3 are, for example, handling and solving of all kinds of different equation systems and optimization problems, continuous-discrete models, sensitivity analysis, uncertainty analysis, support for the conceptual modeling phase, methods for systematic model reduction, libraries of standardized model building blocks, parameter estimation, trace of the model development, version management, copying, modifying and documenting, result processing, and more flexible report generation. Also, Bogusch et al.15 suggest
that a tool needs to provide predefined building blocks like equations describing reaction kinetics or heat and mass transfer. They furthermore propose the automation of parts of the modeling process, like knowledge propagation, documentation, and report generation. It can be concluded that a good motivation exists to develop a computer-aided modeling framework that is not only able to combine state-of-the-art modeling techniques but that is also structured such that specific model development processes can be supported. The objective of this paper is therefore to develop a novel computer-aided tool that guides and supports the user in the modeling process such that the modeling work can be done in a more systematic and efficient way. This is achieved in three steps. First, we identified the work-flows and data-flows (particularly those focusing on single-scale systems) needed to solve different modeling problems in chemical and biochemical engineering, based on case studies, a review of the literature, and our own experience. Second, we developed a systematic modeling methodology based on the identified work-flow and established the common features and tools a computer-aided modeling framework needs to integrate for the different steps of the modeling process. Third, we implemented this work-flow based modeling methodology in a computer-aided tool that is integrated within the ICAS software. The computer-aided modeling framework and its underlying methodology are described in Section 2. Section 3 highlights the application of the developed methodology for model development and identification by presenting a case study related to air pollution control. In Section 4, the developed model is applied for reactor design.

2. COMPUTER-AIDED MODELING FRAMEWORK In this section, the developed computer-aided modeling framework is presented. Section 2.1 describes the overall structure of the framework, whereas Section 2.2 focuses on the single-scale model development and identification parts. Finally, Section 2.3 gives some details on the implementation of the modeling framework.
2.1. Structure of the Modeling Framework. The modeling framework is based on the concept of decomposing the work into a sequence of modeling activities (tasks) and the associated methods, tools and data needed to perform them. Figure 1 illustrates this concept, where the modeling work is divided into two sets of parallel activities, one for the single scale and another for the multiscale. Within each set, the activities deal with model development and model application. For each activity, the modeling framework provides the user with the corresponding work- and data-flow to be followed. The work-flows define the sequence of modeling activities (steps) needed in solving a specific modeling task. At each step of the work-flows, the modeling framework combines support, expertise, and the required database connections and tools. Model development here refers to the development of a mathematical model representing a system within a defined boundary, with a set of assumptions defining the system and matching a set of modeling objectives. Once the model has been developed it is used according to the defined model objective. The application related activities are grouped into the following three main activities: (i) model identification to validate and reconcile the model, (ii) simulation to study the behavior of the system, and (iii) optimization to improve the system according to a defined objective. Certainly, different application blocks and their corresponding work-flow steps can and need to be combined.
Figure 1. The main activities within the modeling framework.

Model identification, for example, will never be the final purpose of a model, while simulation and/or optimization activities should only use identified models. Simulation is the most common application activity of a model, while optimization requires multiple uses of the simulation activity.
2.2. Single-Scale Model Development and Identification. Work-Flow and Features. In this paper we are focusing on single-scale model development, its identification, and, briefly, its use to study the system. The work-flow to be followed to solve this problem, as well as the required features of a computer-aided modeling framework for each work-flow step, have been derived based on solved case studies and literature on methods in model development.3,5,16 Figure 2 shows the proposed work-flow that represents the single-scale model development and identification blocks in Figure 1. The different steps of the work-flow are briefly explained below, and for each step the identified modeling framework tools are summarized. Note, however, that not all steps of the methodology are obligatory and their actual use depends on the specific modeling problem. The modeler needs to decide on a case by case basis if a step is relevant for the specific problem and modeling purpose.
Step 1: Modeling Objective. In the first step, the modeling objective is defined so that model reformulations can be justified and the model performance can be evaluated. The modeler needs to state the purpose of the model: why is it necessary, what is it going to be used for, and so forth. The computer-aided modeling framework needs to provide an interface where the modeler can insert and update the modeling objective and where it can be easily accessed for later reuse of the model.
Step 2: System Information and Documentation. The purpose of this step is to collect and document all available and required information on the system and modeling problem. The collection of information enables the modeler to decide on how to model the system, formulate the model assumptions, and develop the model. The documentation of this information supports the later reuse of the model and its reformulation. The system information and documentation is achieved in two substeps, system information collection and system information analysis:
Step 2a: System Information Collection. Available information on the system is collected from literature, experts, experience, databases, experiments and model libraries.
Step 2b: System Information Analysis. Defines which information is still missing and needs to be generated, which helps to define the specific work-flow.
The system information step is an iterative step, and at a later point during the model development and application process

Figure 2. Work-flow for single-scale model development and parameter estimation.

it might turn out that some information is missing. In that case the modeler needs to go back to the system information step in order to generate and document the missing information. To support the modeler in the system information step, first of all, a method to systematically collect, store, modify, and access the system information is required. This information can be of many different types (e.g., model application context, possibly occurring phenomena, how similar systems have been modeled, initial parameter values, experimental data) and needs to be structured. An important feature favoring model documentation is the automated creation of a report on all the information stored during the system information and documentation step as well as the results of the subsequently performed work-flow steps. Apart from that, the modeling framework needs to incorporate strategies and tools to identify and generate missing information. In that context, features like sensitivity analysis, identifiability analysis, connections to thermodynamic databases, and property prediction tools as well as a model library are of importance. The model library needs to be easily extendible and has to contain models for a large number of different phenomena, unit operations, and so forth.
Step 3: Model Construction. The objective of the model construction is the derivation and translation of the model equations. This can be achieved as follows:
Step 3a: Model Derivation. The procedure is according to Figure 3. First the assumptions are given and documented (e.g., isothermal, steady state, ideal mixing, diffusion, limited kinetics, dispersion, reaction, and so forth). On the basis of the assumptions, it is determined for which quantities (mass, energy, momentum) a conservation equation is required. Further, it is of importance along which coordinate directions the balance volume is assumed to be ideally mixed (lumped) or along which the model is assumed to be distributed. If the model is lumped, the conservation equations are algebraic equations (AEs) or ordinary differential equations (ODEs). If, however, the balance volume is distributed in one or more


Figure 3. Step-wise construction of model equations.

coordinate directions, the resulting balance equations in these directions are partial differential equations (PDEs). On the basis of this information and the assumptions, the conservation equations can be constructed. For the distributed case, the partial differential equation needs to be discretized to obtain a system of ODEs or AEs, depending on the discretization method that is chosen. The next step is to add the required constitutive equations to the model. Constitutive equations, for example, can be needed to provide an expression for a reaction rate that appears in the conservation equations. Additional details on the systematic derivation of model equations are given by Jensen and Gani.12 In the special case that a similar model already exists in a library, it is adjusted to the current problem in this step. If submodels for
different phenomena exist they will be aggregated and extended to form the overall system model.
Step 3b: Model Translation. In a second substep, the model is translated to a form that can be applied for model analysis and solution. This requires the use of a modeling tool for translation (different algorithms, e.g., reverse polish notation (RPN)). This step identifies the different types (and their number) of equations and variables. To support the modeler during the model construction step, a computer-aided modeling framework needs to incorporate a model library and features for systematic and automated generation of model equations based on user specifications (e.g., Jensen13), automated model translation (including discretization of PDEs), model aggregation, and model decomposition.
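The discretization mentioned in Step 3a can be pictured with a small stand-alone sketch. The code below is not part of the framework described here; it is a minimal NumPy/SciPy illustration of the method-of-lines idea (a PDE in one spatial coordinate is turned into a set of ODEs on a grid), with all names and numbers chosen only for the example.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Method-of-lines sketch: the 1-D convection-reaction balance
#   dc/dt = -u * dc/dz + r(c)
# is discretized on n axial grid points with a first-order upwind scheme,
# turning the PDE into a system of ODEs for a standard integrator.
def make_rhs(u, dz, rate, c_in):
    def rhs(t, c):
        dcdt = np.empty_like(c)
        dcdt[0] = -u * (c[0] - c_in) / dz + rate(c[0])        # inlet node
        dcdt[1:] = -u * (c[1:] - c[:-1]) / dz + rate(c[1:])   # interior/outlet nodes
        return dcdt
    return rhs

n, length, u = 50, 1.0, 0.2                  # grid points, domain length, velocity
dz = length / n
rate = lambda c: -0.5 * c                    # illustrative first-order decay
sol = solve_ivp(make_rhs(u, dz, rate, c_in=1.0),
                (0.0, 20.0), np.zeros(n), method="BDF")
print(sol.y[-1, -1])                         # outlet value at the final time
```

Once discretized in this way, the resulting ODE set can be handed to any standard integrator, which mirrors how the translated model is passed on to the solvers selected in the next step.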

Industrial & Engineering Chemistry Research Step 4: Model Analysis. The translated model needs to be analyzed. Important substeps within the model analysis are the following: • Equation classification. Equations can be classified as either algebraic equations or differential equations. • Variable classification. Variables are initially classified as either dependent variable, independent variable, or general variable. • Degree of freedom analysis. The degree of freedom for the algebraic and differential equation parts of the model is determined and satisfied by specifying the required amount of general variables as either known or parameter. There are three types of known variables: (1) fixed by problem, (2) fixed by system, and (3) fixed by model. For the parameters and known variables, either a value or an initial guess (if they should be identified or optimized in a later step) needs to be provided. The remaining general variables are specified as unknown variables. • Incidence matrix. The incidence matrix needs to be derived. This matrix indicates which of the unknown variables occur in each model equation. The incidence matrix is used to find the optimal solution sequence of the model equations. It shows which equation subgroups need to be solved coupled. All model equations are decoupled if the incidence matrix can be brought to a lower triangular form. Since the incidence matrix shows which model equations need to be solved, coupled, and which are decoupled, it in that way also supports the modeler in decomposing a complex model into submodels. • Solution strategy. On the basis of the ordered equations, the solution strategy and the required solvers can be set up. If applicable, the modeler further needs to provide the initial and boundary conditions. There are a large number of tools and methods a modeling framework needs to incorporate to support the modeler during the described model analysis step. Among these are the partly automation of the variable specification, the performance of a singularity check as well as the calculation of the degree of freedom based on the current variable specification, display of the incidence matrix, optimization of the equation ordering, automated solution strategy determination, solver selection, and indication of possibilities for model decomposition. Step 5: Sensitivity Analysis. The purpose of the sensitivity analysis is to identify parameters and variables that have no impact on the desired model output. This information is then added to the model documentation under systems information (step 2) and might be used for model simplification or in a later model identification. With respect to model identification, it is a prerequisite for identifiability that the available experimental measurements are sensitive to the parameters to be estimated. If that is not the case, the model can either be simplified by removing or lumping the insensitive parameters or these parameters can be fixed to their initial estimates. A third option is to go back to the systems information step and design new experiments. Different methods for sensitivity analysis exist. One example is the local differential sensitivity analysis where in each run one parameter is perturbed at a time by a given percentage and the resulting impact on the response variables is monitored. Stochastic methods like Morris screening and the Monte Carlo method have the advantage of giving a more global picture. 
The computer-aided modeling framework offers different sensitivity analysis methods and allows the reuse of the results in the different steps of the work-flow.
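Before moving on, the incidence-matrix bookkeeping of Step 4 can be made concrete with a short sketch. The fragment below is illustrative only (not ICAS-MoT code), and the toy equation system is hypothetical; it simply scans an incidence matrix for an explicit, lower triangular ordering.

```python
import numpy as np

# Rows = algebraic equations, columns = unknowns, True = "unknown appears in
# equation". A greedy pass looks for an ordering in which every equation
# introduces exactly one new unknown, i.e. the system is explicit and can be
# solved sequentially (lower triangular form).
def order_equations(incidence):
    n_eq, n_var = incidence.shape
    solved = np.zeros(n_var, dtype=bool)
    order, remaining = [], set(range(n_eq))
    while remaining:
        progress = False
        for eq in sorted(remaining):
            new_vars = np.where(incidence[eq] & ~solved)[0]
            if len(new_vars) == 1:            # exactly one new unknown -> explicit
                solved[new_vars[0]] = True
                order.append(eq)
                remaining.remove(eq)
                progress = True
                break
        if not progress:
            return order, False               # a coupled block remains
    return order, True

# Toy system with unknowns [k, r, dF]: k = f(T), r = k*c, dF = r.
incidence = np.array([[1, 0, 0],
                      [1, 1, 0],
                      [0, 1, 1]], dtype=bool)
dof = incidence.shape[1] - incidence.shape[0]  # degree of freedom of the AE part
print(order_equations(incidence), dof)         # ([0, 1, 2], True) 0
```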


Step 6: Identifiability Analysis. The goal of performing an identifiability analysis is to identify parameter subsets that are noncollinear and therefore identifiable. The identifiability analysis is conducted based on the sensitivity analysis results. Parameters that need to be estimated from the measurements and have been deemed sensitive (a necessary condition for identifiability) in the previous step are considered. From these parameters all possible subsets are generated and tested for their identifiability by evaluating their collinearity index.17 To solve and analyze parameter estimation problems, the modeling framework offers tools that perform an identifiability analysis based on the sensitivity analysis results.
Step 7: Parameter Estimation. Here, the unknown and identifiable model parameters are estimated from the available experimental data. On the basis of the outcome from the identifiability analysis in step 6, it is decided which subsets are promising for the actual parameter estimation step. To this end, the framework employs either the least-squares fit method or the maximum likelihood estimation (MLE) method. To perform the parameter estimation step, different solvers for all kinds of different problem types (e.g., ODE, AE, DAE, PDE) are required.
Step 8: Evaluation/Statistical Analysis of Model Predictions. The objective here is to evaluate the model prediction quality. This can be done by conducting a statistical analysis of the model prediction quality, that is, by calculating the confidence intervals of the estimated parameters and, based on that, performing an uncertainty analysis on the model predictions.18 In that way, the modeler is aware of the extent of the uncertainties in the model predictions that originate from the model development process. If more than one parameter subset has been chosen for the parameter estimation step, the different alternatives need to be evaluated here. Criteria for a decision are the value of the objective functions from the estimation step, for example, the sum of squared errors (SSE), but also the resulting confidence intervals and generated uncertainties on the model outputs. On the basis of these criteria, a decision on the best alternative with respect to the current modeling goal defined in step 1 can be made. The performance criteria and the models used may differ with respect to the desired application and modeling goal. If it turns out that the performance of none of the alternatives is satisfactory, the user needs to go back to previous steps in the work-flow. The first option is to go back to step 2 and collect more information on the system or use the already collected information to improve the model by, for example, increasing the degree of detail. Afterward, the user needs to go to step 3 and perform the changes in the model equations. Another option is to go back to step 7 and select different parameter subsets to be estimated. The modeling framework implements tools for confidence interval calculation and uncertainty analysis. Further, statistical reports can be generated that allow an evaluation of the model performance as well as a comparison between different model alternatives. After having developed and identified the model, it can be applied by following the work-flow of one of the other application blocks in Figure 1. It is good practice to start by validating the model performance against independent experimental data that has not been used during the model building stage.
The resulting identified and validated model can then be used for the engineering purpose it was built for, such as solving a design problem or performing simulations to predict system behavior.
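For the statistical analysis of Step 8, the confidence intervals and uncertainty propagation mentioned above can be approximated with standard linearized statistics. The sketch below is a stand-in for the framework's own statistical tools under the assumption of a least-squares fit; `model`, `jacobian`, and `residuals` are placeholders, not names from the framework.

```python
import numpy as np
from scipy import stats

# Linear-approximation parameter statistics: covariance from the residual
# Jacobian, t-based confidence half-widths, and Monte Carlo propagation of the
# parameter uncertainty to the model predictions.
def parameter_statistics(jacobian, residuals, alpha=0.95):
    n_dat, n_par = jacobian.shape
    dof = n_dat - n_par
    s2 = residuals @ residuals / dof                     # residual variance
    cov = s2 * np.linalg.inv(jacobian.T @ jacobian)      # linearized covariance
    half_width = stats.t.ppf(0.5 + alpha / 2, dof) * np.sqrt(np.diag(cov))
    return cov, half_width

def propagate_uncertainty(model, theta_hat, cov, n_samples=500, seed=0):
    rng = np.random.default_rng(seed)
    samples = rng.multivariate_normal(theta_hat, cov, size=n_samples)
    preds = np.array([model(s) for s in samples])
    return preds.mean(axis=0), preds.std(axis=0)         # prediction mean and spread
```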


Table 1. Identified Key Features of Computer-Aided Modeling Framework with Respect to Single-Scale Model Development and Identification

key feature | corresponding work-flow steps | layer
structured and systematic development of models | all (the work-flow structures the model development process) | 3
model documentation | 1, 2 | 3
model storage in libraries | 2, 3 | 3
model aggregation/decomposition | 3 | 2
connection to thermodynamic databases, property prediction tools, process simulator | 2, 3 | 3
support for model construction and implementation | 3 | 1, 3
numerical model analysis | 4, simulation | 1
generic numerical solver | 5, 7, 9 | 1
model identification and validation | 5, 6, 7, 8 | 1

Figure 4. Thermal treatment of adipic acid production off-gas stream.

2.3. Implementation of Modeling Framework. The developed methodology provides the structure of the computer-aided modeling framework and identifies the key features the modeling framework needs to have to support the modeler in the development and use of the model. The identified key features with respect to single-scale model development and identification are summarized in Table 1. For each feature the table shows the steps of the work-flow as well as the corresponding layer with respect to the 3-layer structure (von Wedel et al.4) described in the introduction. Even though the tool can automate various steps and tasks, there still remains a need for user interaction and input. For example, during the discretization of a provided partial differential equation (PDE) by the modeling tool, the user still needs to specify the discretization method and the settings for the selected method, such as the number of discretization or collocation points. However, the tool guides the decision process and provides information to support the decision of the user. In the case of model documentation, the creation of the final report needs to be completely automated and performed by the modeling tool based on the steps the user went through during the modeling process. However, for the documentation interface the user needs to become active and provide the information to be stored. The existing modeling tool ICAS-MoT19 has been extended and modified to implement the framework. New features that are required to support the user in the different steps of the methodology have been included. However, this tool needs to be integrated in the overall modeling framework (Figure 1) which, in addition, contains a branch for multiscale modeling (not covered in this paper).

3. CASE STUDY In this section, the application of the steps in the work-flow (Figure 2) together with the use of the corresponding modeling
tools are highlighted for the construction and identification of a combustion model for thermal treatment of N2O.20
3.1. Modeling Objective (Step 1). The goal is to provide a model for the thermal treatment of the off-gas stream of an adipic acid production process in a flow reactor/heat exchanger (plug-flow reactor). The model is to be applied for reactor design to remove the N2O (greenhouse gas, source of O3 in stratosphere). Consequently, the model needs to be able to calculate the N2O outlet concentrations of the reactor at different temperatures and residence times/reactor volumes.
3.2. System Information (Step 2). Figure 4 shows a basic sketch of the thermal treatment of the off-gas stream of the adipic acid production. The system under consideration (thermal treatment unit) consists of a total of 18 compounds. Many of these compounds occur in very low concentrations. Figure 4 shows the important input and output compounds. NO is recycled back to the adipic acid production unit because it is a feedstock material for one of the production steps. For the H2/O2 mechanism, information involving 18 elementary, reversible reactions (rate constants, thermodynamic properties) was available from Glarborg.21 For the nitrogen species and their reactions, information was initially taken from the NIST Chemical Kinetics Database.22 In total, there are 44 reactions in the system. The rate constants for the forward reactions are calculated applying the Arrhenius equations. The backward rate constants can be calculated from the forward rate constants and the equilibrium constant K. To calculate the equilibrium constants, the component standard enthalpies Hj0(T) and entropies Sj0(T) are needed. The NASA polynomials (CHEMKIN Collection Release 3.6, 2000 and Kee et al.23) provide correlations for Hj0(T) and Sj0(T) with respect to the reactor temperature for all components in the system. For pressure-dependent reactions, such as the dissociation of N2O, the third-body enhancement needs to be considered. Third bodies are molecules that promote the reaction but remain chemically inert during the reaction. The behavior of the rate in the falloff regime can be calculated applying the Troe equation.24 The expected operating conditions for the thermal treatment are a pressure of 1 atm, due to safety and economic reasons, and a maximum temperature of 1500 K, which is due to material limitations. A number of assumptions are made for the chemical system. It is assumed to be ideally mixed in the radial reactor direction. The reactor is modeled as a plug flow reactor. The system is assumed to be isothermal and isobaric. Transport phenomena like diffusion and dispersion are neglected. Furthermore, the system is considered to be at steady state.
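In the actual model these rate-constant relations (forward Arrhenius rates, reverse rates from the equilibrium constant, NASA polynomial thermodynamics) are handled by ICAS-MoT; the sketch below is only an illustrative NumPy version of the relations just described, assuming the standard 7-coefficient NASA polynomial form and activation energies given in cal/mol (all function names are hypothetical).

```python
import numpy as np

R = 8.314        # J/(mol K), used for the equilibrium-constant part
R_CAL = 1.987    # cal/(mol K); the activation energies in Table 5 appear to be in cal/mol

def arrhenius(A, beta, E, T):
    """Forward rate constant k_f = A * T**beta * exp(-E/(R*T)), E assumed in cal/mol."""
    return A * T**beta * np.exp(-E / (R_CAL * T))

def h_over_RT(a, T):
    """Dimensionless standard enthalpy H0/(R T) from 7-coefficient NASA polynomials."""
    return a[0] + a[1]*T/2 + a[2]*T**2/3 + a[3]*T**3/4 + a[4]*T**4/5 + a[5]/T

def s_over_R(a, T):
    """Dimensionless standard entropy S0/R from 7-coefficient NASA polynomials."""
    return a[0]*np.log(T) + a[1]*T + a[2]*T**2/2 + a[3]*T**3/3 + a[4]*T**4/4 + a[6]

def reverse_rate(kf, nasa_coeffs, stoich, T, p0=101325.0):
    """k_b = k_f / K_c, with K_c from the reaction equilibrium constant.

    nasa_coeffs: one coefficient array per species; stoich: signed stoichiometric
    coefficients (products positive, reactants negative), in the same order.
    """
    dS_R = sum(nu * s_over_R(a, T) for nu, a in zip(stoich, nasa_coeffs))
    dH_RT = sum(nu * h_over_RT(a, T) for nu, a in zip(stoich, nasa_coeffs))
    Kp = np.exp(dS_R - dH_RT)
    Kc = Kp * (p0 / (R * T)) ** sum(stoich)   # convert to concentration units
    return kf / Kc
```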


Table 2. Specified Variables

type | variables | amount
parameter | coefficients of the Arrhenius equations | 144
parameter | coefficients of the NASA polynomials | 105
parameter | Troe equation parameters | 13
parameter | total | 262
known | molar flows of inert compounds | 3
known | pressure | 1
known | temperature | 1
known | gas constant | 1
known | total | 6

Table 3. Unknown Variables

variables | amount
volumetric flow | 1
component enthalpies | 15
component entropies | 15
reaction enthalpies | 44
reaction entropies | 44
equilibrium constants | 44
third-body concentrations | 7
rate constants for the forward reactions | 44
rate constants for the backward reactions | 44
Troe equation variables | 28
reaction rates | 44
total | 330

Table 4. Incidence Matrix after Equation Ordering (condensed form). Equations and variables are grouped into vectors: Vp, Hj, Sj, HRk, SRk, Kpk, FM, kinf, klow, X, Fcent, c, N, F, kfk, kbk, and rk for the algebraic part and dFj for the differential part. The ordered algebraic equations form a lower triangular (explicit) block that depends on the differential variables Fj, and the differential equations dFj depend on the reaction rates rk.

The assumptions of an ideal reactor may not be fulfilled in a practical system, but they are often used in chemical engineering as they serve to simplify the model analysis.25 Evaluation of possible temperature and velocity gradients is outside the scope of the present work. The phenomena considered in the model are the convective mass transport along the reactor axis and the kinetics of the chemical reactions in the system. Experimental data to identify the model parameters are given by Glarborg et al.26 (see Supporting Information). The data involve 76 data points and are subdivided into 5 data sets. For each data set, the feed concentration as well as the residence time differ, whereas the pressure is constant at 1.05 atm for all data sets. The measured variable is the outlet concentration of N2O at different temperatures.
3.3. Model Construction (Step 3). The complete set of model equations is given in the Supporting Information. Due to the assumptions that the system is isothermal and isobaric, no energy and momentum balances are needed. Mass balances are required for the 15 noninert compounds in the system. Since the system is considered to be distributed in the axial reactor direction, the model equations need to be discretized accordingly. Here, they are transformed to a system of ordinary differential equations (one equation for each noninert compound) having the reactor length as independent variable. This is possible due to the steady-state assumption for the system. Constitutive equations are required to provide expressions for the reaction rates appearing in the mass balance equations.
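As a rough illustration of the balances just described, the sketch below sets up steady-state plug-flow mass balances dFj/dV = Σk νkj rk and integrates them with a stiff solver. It is not the case-study model itself (the full 44-reaction mechanism is given in the Supporting Information); the stoichiometry and rate expression shown are placeholders.

```python
import numpy as np
from scipy.integrate import solve_ivp

R = 8.314  # J/(mol K)

# Steady-state, isothermal, isobaric plug-flow balances: dF_j/dV = sum_k nu_kj * r_k.
# `stoich` (n_reactions x n_species) and `rates(c, T)` are placeholders for the mechanism.
def pfr_rhs(V, F, T, P, stoich, rates):
    c = F / F.sum() * P / (R * T)        # ideal-gas concentrations, mol/m^3
    r = rates(c, T)                       # reaction rates, mol/(m^3 s)
    return stoich.T @ r                   # dF_j/dV

# Toy usage with a single first-order reaction A -> B:
stoich = np.array([[-1.0, 1.0]])
rates = lambda c, T: np.array([0.8 * c[0]])
F0 = np.array([1.0, 0.0])                 # inlet molar flows, mol/s
sol = solve_ivp(pfr_rhs, (0.0, 0.05), F0,
                args=(1500.0, 101325.0, stoich, rates),
                method="BDF")             # stiff integrator, cf. Section 3.4
print(sol.y[:, -1])                        # molar flows at the reactor outlet
```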

3.4. Model Analysis (Step 4). The first step after the construction of the model is to determine the number and types of equations. The system has 15 ordinary differential equations (ODEs) and 330 algebraic equations (AEs). The 613 model
variables can be preclassified into 15 dependent variables (the flow rates of the noninert compounds), 1 independent variable (the reactor volume), and 598 general variables. The degree of freedom for the algebraic equation part is 268. It is obtained as the difference between the general variables appearing in the AEs and the number of AEs. Accordingly, 268 variables need to be specified either as parameter or known. Table 2 gives an overview of the specified variables. During this specification step a singularity check needs to be conducted. The remaining 330 general variables are unknown (Table 3). For this case study all unknown variables are explicit variables. The degree of freedom of the ordinary differential equation part equals the number of general variables in the ODEs that do not appear in the AE part; in this case it is 0. The next step is to generate the incidence matrix, and based on that the equations are ordered (Table 4). The equations are grouped into algebraic and ordinary differential equations. Since the incidence matrix for such a system is rather big (345 × 345), it is given in a condensed version where the equations and variables are represented by vectors; for example, all component enthalpies are represented by the vector Hj and all equations to calculate the reaction enthalpies are summarized as HRk. The incidence matrix reveals that the AEs and ODEs are coupled. Apart from the coupling to the ODE part, the algebraic equations are not coupled with one another and are all explicit (lower triangular form). For this reason, it is easy to solve the AE and ODE parts separately. For each integration step of the ODE system, the AE part is solved with the current values of the dependent variables. After having performed the previous analysis steps, the variables classified as known variables need to be given a value. Also, for the parameters a value or an initial guess (if they are to be identified from experimental data in a later step) needs to be provided. Further, initial conditions for the dependent variables are required, in this case for the component flows at the entrance of the reactor. Before proceeding to the next step, the eigenvalues of the system have been determined for the conditions of data set 2 at a temperature of 1381 K during a simulation for 646 time-steps. The eigenvalues allow a prediction of whether the system converges to an asymptotic steady state for the investigated conditions. Furthermore, they give information on possible stiffness, oscillations, and potential for model reduction. The stiffness ratio is defined as the quotient of the maximum absolute value of the real parts and the minimum absolute value of the real parts of the eigenvalues. For the first time-step it results to be 8.35 × 10^20, whereas the stiffness ratio for the last time-step is 2.09 × 10^19. From this, it can be concluded that the system is stiff, having very fast modes on the one hand and slow modes on the other. Since the real parts of all eigenvalues are negative, the system can be said to be locally (asymptotically) stable.
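The stiffness and stability statements above can be reproduced for any right-hand side with a few lines of linear algebra. The sketch below is illustrative only; `rhs` and `y` stand for the model right-hand side and a state along the reactor, not the actual case-study implementation.

```python
import numpy as np

# Finite-difference Jacobian of the ODE right-hand side, its eigenvalues, and
# the stiffness ratio max|Re(lambda)| / min|Re(lambda)| discussed in the text.
def stiffness_ratio(rhs, y, eps=1e-6):
    n = len(y)
    f0 = rhs(y)
    jac = np.empty((n, n))
    for j in range(n):
        dy = np.zeros(n)
        dy[j] = eps * max(abs(y[j]), 1.0)
        jac[:, j] = (rhs(y + dy) - f0) / dy[j]
    eig = np.linalg.eigvals(jac)
    re = np.abs(eig.real)
    re = re[re > 0]                         # ignore numerically zero modes
    return re.max() / re.min(), eig

# Negative real parts for all eigenvalues indicate local asymptotic stability.
```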

3.5. Sensitivity Analysis (Step 5). A local differential sensitivity analysis is conducted to ensure that the measured variables are actually sensitive to the parameters to be regressed. The analysis needs to be conducted at each available data point, for all measured variables and with respect to all parameters to be identified. The available experimental data26 are given in the Supporting Information. The concentration of N2O at the reactor outlet has been measured under varying residence times, temperatures, and feed compositions. The total number of data points NDAT is 76. N2O is also the concentration of interest with respect to the modeling goal. The thermodynamic properties are considered to be known, and therefore only the NPAR = 157 kinetic parameters of the model need to be estimated and hence considered in the sensitivity analysis. Consequently, a total of 11,932 (= NDAT × NPAR) sensitivity evaluations have been conducted for this case study. The parameters are perturbed forward and backward by a certain percentage value and the model is solved for the new parameter values. The absolute and nondimensional sensitivities Sa and Snd are calculated according to eqs 1 and 2:17

S_a(i,j) = \frac{Y_i^F - Y_i^B}{2\,\Delta P_j}, \quad i = 1, \ldots, NDAT; \; j = 1, \ldots, NPAR \qquad (1)

S_{nd}(i,j) = S_a(i,j)\,\frac{P_j}{Y_i}, \quad i = 1, \ldots, NDAT; \; j = 1, \ldots, NPAR \qquad (2)

Here, Y is the model output (the N2O concentration), the indices F and B stand for forward and backward, respectively, whereas the index i is the response variable counter and j is the parameter counter. ΔPj is the absolute perturbation value of parameter j. The nondimensional sensitivity Snd (the so-called relative sensitivity function, Sin and Vanrolleghem27) is obtained by scaling Sa with the parameter value Pj and the model output Yi at the current data point. It is important to choose a reasonable value for the perturbation: it needs to be small enough that the forward and backward perturbations cause the same change of the model output, while the solver must still be able to resolve the effect of the perturbation. Care was taken that these criteria were fulfilled for all parameters and data points investigated; the resulting value for the perturbation is 0.01%. An overall parameter significance ranking considering all data points is obtained based on the sensitivity measure δj^msqr:

\delta_j^{msqr} = \sqrt{\frac{1}{N}\sum_{i=1}^{N} \left(S_{nd,ij}\right)^2} \qquad (3)
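A compact way to picture eqs 1-3 is the finite-difference loop below. It is a hedged sketch, not the framework's implementation: `simulate(params)` is a placeholder that returns the model output at all NDAT data points, and the 0.01% perturbation follows the value reported in the text.

```python
import numpy as np

def sensitivities(simulate, params, rel_perturbation=1e-4):   # 1e-4 = 0.01 %
    params = np.asarray(params, dtype=float)
    y0 = simulate(params)
    n_dat, n_par = len(y0), len(params)
    S_nd = np.zeros((n_dat, n_par))
    for j, p in enumerate(params):
        dp = rel_perturbation * p
        up, down = params.copy(), params.copy()
        up[j] += dp
        down[j] -= dp
        S_a = (simulate(up) - simulate(down)) / (2 * dp)        # eq 1
        S_nd[:, j] = S_a * p / y0                                # eq 2
    delta_msqr = np.sqrt((S_nd**2).mean(axis=0))                 # eq 3
    return S_nd, delta_msqr

# ranking = np.argsort(delta_msqr)[::-1]  # most sensitive parameters first
```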



Table 5 shows the parameter significance ranking for the top 20 most sensitive parameters.

Table 5. Parameter Significance Based on the Sensitivity Measure δj^msqr Considering the Available Measurements (Perturbation: ±0.01% of the Parameter Value)

rank | parameter | δj^msqr | parameter value
1 | E_37 | 27.063 | 62796.0
2 | beta_37 | 6.288 | -0.73
3 | A_37 | 1.190 | 7.23 × 10^17
4 | beta_40 | 0.825 | -2.87
5 | E_43 | 0.757 | 15937.0
6 | E_39 | 0.581 | 15103.0
7 | A_43 | 0.131 | 3.69 × 10^12
8 | A_39 | 0.105 | 9.64 × 10^13
9 | beta_12 | 0.103 | -2.00
10 | E_38 | 9.39 × 10^-2 | 26629.0
11 | E_1 | 7.62 × 10^-2 | 16600.0
12 | A_40 | 3.97 × 10^-2 | 4.71 × 10^24
13 | beta_1 | 3.73 × 10^-2 | -0.41
14 | E_40 | 2.23 × 10^-2 | 1552.0
15 | beta_32 | 2.03 × 10^-2 | -2.16
16 | E_32 | 1.75 × 10^-2 | 37161.0
17 | A_1 | 1.26 × 10^-2 | 3.550 × 10^15
18 | beta_13 | 9.8 × 10^-3 | 1.52
19 | A_38 | 9.6 × 10^-3 | 6.62 × 10^13
20 | beta_36 | 8.9 × 10^-3 | 4.72

3.6. Identifiability Analysis (Step 6). Since only parameters that are sensitive with respect to the measured variables can be identified, the first step is to select these parameters based on the parameter significance ranking from the previous step. The nonsensitive parameters are fixed to their initial values from literature. The minimum sensitivity measure for a parameter to still be considered in the analysis has been selected to be 10^-1, based on previous experience with the method.27 As a result, only the top nine parameters in the ranking are deemed significantly sensitive and hence considered for further identifiability analysis. The second condition for identifiability of a parameter subset is that there is no collinearity between the parameters. For this application example, it should be mentioned that the Arrhenius parameters of a single reaction are usually correlated. All possible parameter subsets from the 9 selected parameters need to be generated. The collinearity of a parameter subset can be quantitatively evaluated by calculating the collinearity index of the subset.17 To be able to do so, the normalized sensitivities Snorm,ij with respect to all parameters j in the subset at all data points i need to be determined for all possible subsets:

S_{norm} = \{S_{norm,ij}\} \quad \text{with} \quad S_{norm,ij} = \frac{S_{nd,ij}}{\lVert S_{nd,j} \rVert} \qquad (4)

The collinearity index γK of a subset K then follows as

\gamma_K = \frac{1}{\sqrt{\min(\lambda_K)}} \qquad (5)

where \lambda_K = \mathrm{eig}\left(S_{norm,K}^{T}\, S_{norm,K}\right) are the eigenvalues of the normalized sensitivity matrix of the subset.

If the collinearity between the parameters of a subset increases, the collinearity index approaches infinity. If the collinearity decreases, γK approaches unity.
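Under the assumption that the nondimensional sensitivity matrix from eqs 1-2 is available, the subset screening of eqs 4-5 can be sketched as follows (illustrative code, not ICAS-MoT; the threshold of 10 is the one adopted later in this section).

```python
import numpy as np
from itertools import combinations

def collinearity_index(S_nd, subset):
    S = S_nd[:, list(subset)]
    S_norm = S / np.linalg.norm(S, axis=0)          # eq 4, column-wise normalization
    lam = np.linalg.eigvalsh(S_norm.T @ S_norm)
    return 1.0 / np.sqrt(lam.min())                  # eq 5

def identifiable_subsets(S_nd, candidates, threshold=10.0):
    """Enumerate all subsets of the candidate parameters and keep those with gamma < threshold."""
    result = []
    for size in range(2, len(candidates) + 1):
        for subset in combinations(candidates, size):
            gamma = collinearity_index(S_nd, subset)
            if gamma < threshold:
                result.append((subset, gamma))
    return result
```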


Table 6. Summary of the Identifiability Analysis Results

subset size | total no. of subsets | identifiable subsets | % of identifiable subsets | minimal γ | maximal γ | F of the min-γ subset | F of the max-γ subset
2 | 36 | 31 | 86.1 | 1.0 | 66.3 | 62.26 | 0.0040
3 | 84 | 51 | 60.7 | 2.0 | 21534.9 | 11.64 | 0.0054
4 | 126 | 40 | 31.77 | 3.8 | 21739.0 | 3.33 | 0.0043
5 | 126 | 12 | 9.5 | 5.1 | 25482.0 | 0.87 | 0.0051
6 | 84 | 0 | 0 | 79.1 | 25770.2 | 0.53 | 0.0051
7 | 36 | 0 | 0 | 145.4 | 27847.2 | 0.11 | 0.0069
8 | 9 | 0 | 0 | 29947.1 | 29947.1 | 0.006 | 0.0063
9 | 1 | 0 | 0 | 31127.0 | 31127.0 | 0.005 | 0.0048

In general, the collinearity index can be interpreted as follows: the effect of a change in the value of one parameter in a subset K on the response variables can be canceled out (at least in linear approximation) up to a fraction (given by the ratio 1/γK) by adjusting the remaining parameters in the subset.28 Consequently, a parameter subset K is considered not identifiable by the available data if its collinearity index exceeds a certain threshold. Thresholds between 10 and 20 have been suggested and applied in the literature.17,27 Deciding which threshold value to use depends on the model application purpose (e.g., what level of parameter uncertainty is acceptable) and is usually found after an iterative process.28 For this case study, a threshold of 10 has been applied based on previous experience with the method. On top of the sensitivity measure δj^msqr (eq 3) and the collinearity index γK, there is a third measure to be calculated, the determinant measure17

F_K = \det\left(S_{nd,K}^{T}\, S_{nd,K}\right)^{1/2K} = \left(\prod_{j=1}^{K} \lambda_{S_{nd},j}\right)^{1/2K} \qquad (6)

It combines the information from the previous two measures and thus the two above-mentioned conditions for identifiability of a parameter subset: sensitivity and collinearity. The product of the eigenvalues becomes large if the δj^msqr values are high and γK is low. The exponent 1/2K is introduced to the equation to allow comparability of subsets having different numbers of parameters. Since the value of FK depends on the perturbation applied in the sensitivity analysis, a general threshold above which a subset of parameters is identifiable cannot be given. Consequently, FK is a relative measure for the comparison of different parameter subsets. Table 6 gives an overview of the results. Of the 502 possible parameter subsets, only 134 were found to be identifiable, that is, having a γK lower than 10. Such nonidentifiability issues are commonly encountered in engineering models,17,18 a discussion of which is beyond the scope of this work. Table 6 shows that identifiable subsets can only be found up to a subset size of 5 parameters. From a subset size of 2 to a size of 5 parameters, the percentage of identifiable subsets decreases from 86 to 9.5%. The maximum collinearity indexes increase with the size of the subsets, which means that the collinearity increases with the subset size. The minimal and maximal values of the determinant measure decrease with increasing subset size. This is to be expected since with increasing subset size the collinearity increases and the number of less sensitive parameters that have to be included in the subset increases. On the basis of the results of the identifiability analysis, proper subsets of parameters can be chosen to be passed to the parameter estimation step. The correlation of two parameters can also be evaluated graphically by plotting the sensitivity functions that show the normalized sensitivities Snorm,ij of the parameters at all data points. If these functions are collinear for two parameters, the parameters are correlated and cannot be unambiguously identified from the available experimental data. The sensitivity functions for all possible parameter pairs from the top nine most sensitive parameters are given in the Supporting Information. Figure 5 shows the plots of the sensitivity functions for data set 5 and the 2-parameter subsets with the highest and the lowest collinearity index. It can be seen that the curves for the subset with the highest collinearity index are collinear, whereas this is not the case for the second subset.
3.7. Parameter Estimation for Identifiable Subsets (Step 7). The least-squares fit objective function is used for the parameter estimation (eq 7):

OBJ = \frac{1}{N}\sum_{i=1}^{N} \left(F_{N_2O}(i) - F_{N_2O}^{exp}(i)\right)^2, \quad i = 1, \ldots, NDAT \qquad (7)
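A hedged sketch of this estimation step is given below, using SciPy's bounded least-squares solver as a stand-in for the framework's estimation tools. `simulate(theta)` is a placeholder returning the predicted N2O exit concentration at every data point, and the ±20% bounds follow the values stated in the following text (nonzero initial values are assumed).

```python
import numpy as np
from scipy.optimize import least_squares

def estimate(simulate, theta0, y_exp):
    theta0 = np.asarray(theta0, dtype=float)
    lower = np.minimum(0.8 * theta0, 1.2 * theta0)   # order handles negative betas
    upper = np.maximum(0.8 * theta0, 1.2 * theta0)
    res = least_squares(lambda th: simulate(th) - y_exp,
                        theta0, bounds=(lower, upper), method="trf")
    obj = np.mean(res.fun**2)                         # value of the eq 7 objective
    return res.x, obj
```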

Here, F_N2O is the volume flow of N2O at the reactor exit and F_N2O^exp is its corresponding measured value. The value of the objective function before regression, when all parameters are set to their initial values from literature, is OBJ = 905.67. The parameter regression is performed with respect to all 76 available experimental data points at once. The parameter boundaries are set to ±20% around the initial parameter values from literature. If the regression is performed for all of the top 9 most sensitive parameters at once, ignoring the identifiability analysis results, the obtained value of the objective function is 144.67. The resulting parameter values are given in Table 7. In a second step, one of the largest identifiable subsets (5 parameters) is chosen for estimation. Among the 5-parameter subsets, the subset with the lowest collinearity index (γK = 5.11) is selected. If the initial parameter values are set to the values from the first regression step and only the 5 noncorrelated parameters are reidentified, the objective function value can be improved to 120.49. Table 8 shows the improved values for the 5 parameters in the subset.
3.8. Evaluation/Statistical Analysis (Step 8). This step evaluates how well the parameters fit the experimental data. Two measures for the quality of the fit have been calculated:

MAE = \frac{1}{N}\sum_{i=1}^{N} \left| F_{N_2O}(i) - F_{N_2O}^{exp}(i) \right| = 8.311\ (20.91) \qquad (8)

RMSE = \sqrt{\frac{1}{N}\sum_{i=1}^{N} \left(F_{N_2O}(i) - F_{N_2O}^{exp}(i)\right)^2} = 10.977\ (30.09) \qquad (9)

The numbers in brackets are the corresponding measures resulting from the initial parameter values from literature.
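The two fit-quality measures of eqs 8-9 can be evaluated for any fitted parameter set with a few lines (illustrative only):

```python
import numpy as np

def fit_quality(y_pred, y_exp):
    err = np.asarray(y_pred) - np.asarray(y_exp)
    mae = np.abs(err).mean()                 # eq 8
    rmse = np.sqrt((err**2).mean())          # eq 9
    return mae, rmse
```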


Figure 5. Plot of sensitivity functions for parameter pair with lowest (left) and highest (right) collinearity index.

Table 7. Results of Parameter Regression for the Top Nine Most Sensitive Parameters

parameter | initial guess | final estimated parameter value
beta_12 | -2.00 | -2.33
A_37 | 7.23 × 10^17 | 6.64 × 10^17
beta_37 | -0.73 | -0.65
E_37 | 62796 | 62322.85
A_39 | 9.64 × 10^13 | 11.57 × 10^13
E_39 | 15103.00 | 12082.40
beta_40 | -2.87 | -2.30
A_43 | 3.69 × 10^12 | 3.81 × 10^12
E_43 | 15937.00 | 13228.76

Table 8. Re-identification of the Identifiable 5-Parameter Subset with the Lowest γK

parameter | before estimation | after estimation
beta_12 | -2.33 | -2.61
A_37 | 6.64 × 10^17 | not regressed
beta_37 | -0.65 | not regressed
E_37 | 62322.85 | 62409.45
A_39 | 11.57 × 10^13 | not regressed
E_39 | 12082.40 | 9665.92
beta_40 | -2.30 | -1.84
A_43 | 3.81 × 10^12 | not regressed
E_43 | 13228.76 | 12390.03

The comparison shows that the quality of the fit has been improved significantly. The model performance is satisfactory. Figure 6 shows the simulation results in comparison to the experimental measurements applied for model identification.

4. APPLICATION OF THE IDENTIFIED MODEL FOR REACTOR DESIGN Now the developed and identified model is to be used for a reactor design problem, which was the objective at the outset of the modeling study. This means that the optimal design is found by fixing the parameters identified in the previous steps and varying the design variables. In case the design target cannot be met, the process concept needs to be revised. The solution of the reactor design problem is described very briefly in the following steps.

4.1. Design Objective. For the investigated problem, the design target is to reduce the concentration of N2O at the reactor exit below 100 ppm. At the same time, the NO concentration should be maximized since it is an intermediate product for the adipic acid production and can be recycled (see Figure 4).
4.2. Formulation of Optimization Problem and Setting up of Solver. The problem needs to be reformulated into an optimization problem by adding an appropriate objective function and transforming the model equations into constraints:

OBJ = \min\left(10^{6}\, F_{N_2O} + \frac{0.1}{F_{NO}}\right) \quad \text{s.t.:} \quad \text{model equations}, \; T \le 1500\ \mathrm{K} \qquad (10)

The design variables are the temperature, the pressure, and the residence time. The temperature constraint in eq 10 is due to material limitations. Apart from that, due to safety and economic issues, pressures different from atmospheric pressure are only acceptable if they lead to a significant improvement of the reactor performance. The weighting factors in the objective function have been chosen such that the impact of both terms in the objective function is of the same order of magnitude for the initial estimates of the design variables. The desired feed conditions differ from those of the available experimental data: 30% N2O, 0.7% NO, 300 ppm CO, 3% H2O, 4% O2, balance N2. Now the solver can be set up. It has been decided to run the reactor model to steady state, thereby fixing the residence time.
4.3. Sensitivity Analysis of Design Variables. In general, it is good practice to perform a sensitivity analysis for the process design variables in the system prior to solving the optimization problem. If it turns out that the system (objective function) is not very sensitive to the perturbation of one of the design variables, this variable can be set to a constant value and does not need to be considered in the optimization problem. This makes sensitivity analysis especially attractive for design problems where a large number of design variables needs to be considered, since it has the potential to reduce the complexity. The base values of the design variables T and P were set to 1450 K and 1 atm, respectively. A local differential sensitivity analysis was conducted for both design variables and the objective function (eq 10). Figures 7 and 8 show the results for the pressure and the temperature, respectively. The figures reveal that the molar flow of N2O is especially sensitive to both design variables. However, the sensitivity is remarkably higher for the design variable T.
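A minimal sketch of how this design problem can be solved numerically is given below. The paper applies an SQP solver (Section 4.4); SciPy's SLSQP method is used here only as a stand-in, and `exit_flows(T, P)` is a placeholder for running the identified reactor model to steady state and returning the N2O and NO exit flows.

```python
import numpy as np
from scipy.optimize import minimize

def design_objective(x, exit_flows):
    T, P = x
    F_n2o, F_no = exit_flows(T, P)
    return 1e6 * F_n2o + 0.1 / F_no          # eq 10

def optimize_design(exit_flows, x0=(1450.0, 1.0)):
    # Bounds follow Section 4.4 ([280 K, 1500 K] for T, up to 3 atm for P),
    # with a strictly positive lower bound on P for numerical robustness.
    res = minimize(design_objective, x0, args=(exit_flows,),
                   method="SLSQP",
                   bounds=[(280.0, 1500.0), (1e-3, 3.0)])
    return res.x, res.fun
```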


Figure 6. Plot of experimental measurements26 and simulations of N2O [ppmV] concentration at reactor exit vs temperature [K] for the different data sets.

Figure 7. Change of response variables [%, absolute value] versus perturbation of pressure P [%].

Figure 8. Change of response variables [%, absolute value] versus perturbation of temperature T [%].

Therefore, optimizing the temperature has a greater potential for improving the value of the objective function, and the main attention should be paid to this variable during the design process.
4.4. Optimization of Design Variables. In this step, the optimization problem for the process design variables is solved. The boundaries for the design variables are set to [280 K, 1500 K] for T and to [0 atm, 3 atm] for P. The initial values given to the design variables are T = 1450 K and P = 1 atm.

The applied optimization method is SQP with a convergence criterion for the normalized step length of 10^-10. Figure 9 shows the values of P and T during the solution of the optimization problem given in eq 10. Figure 10 provides a surface plot of the objective function around the found optimum. The best value of the objective function is obtained for a temperature of 1500 K and a pressure of 2.67 atm. It turns out, however, that if a pressure of 1 atm is applied, the N2O


Figure 9. Optimization of design variables T [K] and P [atm] versus iteration steps.

Figure 10. Surface plot of objective function during optimization of design variables P [atm] and T [K].

concentration is higher but does not exceed the maximum allowed value. Consequently, a pressure of 1 atm is chosen, which has advantages for the construction of the reactor and at the same time increases the NO concentration at the reactor exit.
4.5. Simulation of System with Optimized Design Variables. In this last step, simulations are performed for the optimized design variables to study the performance of the system under these conditions. The final concentrations of N2O and NO for a temperature of 1500 K and a pressure of 1 atm are 1.24 ppm and 0.065 mol/mol, respectively.

5. DISCUSSION AND CONCLUSIONS A computer-aided modeling framework that structures and systematizes the model development and application process has been developed. The modeling framework and its methodology for single-scale model development and identification have been shown to work for a case study that deals with the thermal treatment of N2O from off-gas streams. On the basis of experience with the presented case study as well as others, the modeling framework has promising perspectives to increase the modeling efficiency, read as decreased time and resources associated with the modeling process. This is achieved in many ways. First of all, the structure of the computer-aided modeling framework according to the required work-flows for the different modeling tasks provides guidance to the modeler, increases the transparency of the modeling process, and helps to minimize errors related to model implementation and coding. Second, state-of-the-art
modeling techniques that are required to support the modeler during the steps of the work-flows, such as sensitivity analysis, identifiability analysis, and uncertainty analysis, have been combined in one modeling tool providing a user-friendly interface. Tasks that can be performed by the computer are automated (e.g., model translation, equation ordering). Further, the modeling framework includes a number of features favoring the reuse of models; among these are library connections, model aggregation, documentation, and report generation. With respect to the case study, the parameter identification part has fine-tuned the parameter values derived from databases for the conditions of the experimental measurements.26 It has been shown that by applying methods like sensitivity and identifiability analysis the quality of the model fit has been improved. The modeling framework, however, is of general character and is applicable not only to the presented problem but also to a large variety of different problems in chemical engineering.

ASSOCIATED CONTENT
Supporting Information. Information available with respect to the case study includes the experimental data applied for model identification, the complete set of model equations, and the sensitivity functions for all combinations of parameter pairs. This material is available free of charge via the Internet at http://pubs.acs.org.

AUTHOR INFORMATION
Corresponding Author

*Tel.: +45 4525282882. Fax: +45 45932906. E-mail: rag@kt.dtu.dk.

ACKNOWLEDGMENT
The financial support of the Technical University of Denmark is kindly acknowledged.

REFERENCES
(1) Grossmann, I. E.; Westerberg, A. W. Research challenges in process systems engineering. AIChE J. 2000, 46, 1700–1703.
(2) Pantelides, C. C. In 11th European Symposium on Computer-Aided Process Engineering; Gani, R., Jorgensen, S. B., Eds.; Elsevier: Amsterdam, 2001; pp 15–26.


Industrial & Engineering Chemistry Research (3) Foss, B.; Lohmann, B.; Marquardt, W. A field study of the industrial modeling process. J. Process Control 1998, 5/6 (1), 325– 338. (4) von Wedel, L.; Marquardt, W.; Gani, R. Modeling frameworks. In Software Architectures and Tools for Computer Aided Process Engineering; Braunschweig, B., Gani, R., Eds.; Elsevier: Amsterdam, 2002; pp 87-125. (5) Marquardt, W. Trends in computer-aided process modeling. Comput. Chem. Eng. 1996, 20, 591–609. (6) Brooke, A.; Kendrick; D.; Meeraus; A.; Raman; R. GAMS - A User’s Guide; GAMS Development Corp.: New York, 1998. (7) Ausbrooks, R.; Buswell, S.; Dalmas, S.; Devitt, S.; Diaz, A.; Hunter, R.; Smith, B.; Soiffer, N.; Sutor, R.; Watt, S. Mathematical Markup Language (MathML), Version 2.0.; 2001; available online at http://www.w3.org/TR/MathML2/; (accessed 03.05.2010). (8) Modelica Association. Modelica - A Unified Object-Oriented Language for Physical Systems Models. Language Specification; 2000; available online at http://www.modelica.org (accessed 03.05.2010). (9) Process Systems Enterprise. gPROMS Introductory User Guide; Process Systems Enterprise Ltd.: London, 1997. (10) Aspentech. Aspen Modeler 10.2 - Reference; Aspen Technology, Inc.: Cambridge, MA, 2001. (11) Stephanopoulos, G.; Henning, G.; Leone, H. MODEL.LA - A modeling framework for process engineering - I. The formal framework. Comput. Chem. Eng. 1990, 14, 813–846. (12) Jensen, A. K.; Gani, R. A computer-aided system for generation of problem specific process models. Comput. Chem. Eng. 1996, 20, 145–150. (13) Jensen, A. K. Generation of problem Specific Simulation Methods within an Integrated Computer Aided System. Ph.D. Thesis, Technical University of Denmark, Lyngby, Denmark, 1998. (14) Klatt, K.-U.; Marquardt, W. Perspectives for process systems engineering - Personal views from academia and industry. Comput. Chem. Eng. 2009, 33, 536–550. (15) Bogusch, R.; Lohmann, B.; Marquardt, W. Computer-aided process modeling with modkit. Comput. Chem. Eng. 2001, 25 (1), 963– 995. (16) Hangos, K.; Cameron, I. Process Modeling and Model Analysis, Process Systems Engineering, 1st ed; Academic Presss: London, 2001. (17) Brun, R.; K€uhni, M.; Siegrist, H.; Gujer, W.; Reichert, P. Practical identifiability of ASM2d parameters - systematic selection and tuning of parameter subsets. Water Res. 2002, 36, 4113–4127. (18) Sin, G.; Eliasson, L. A.; Gernaey, K. V. Good modeling practice (GMoP) for PAT applications: Propagation of input uncertainty and sensitivity analysis. Biotechnol. Prog. 2009, 25, 1043–1053. (19) Sales-Cruz, M.; Gani, R. In Computer-Aided Chemical Engineering: Dynamic Model Development, 1st ed.; Asprey, S. P., Macchietto, S., Eds.; Elsevier: Amsterdam, 2003. (20) Kee, R. J.; Coltrin, M. E.; Glarborg, P. Chemically reacting flow theory and practice; Wiley Interscience: Hoboken, NJ, 2003. (21) Rasmussen, C. L.; Hansen, J.; Marshall, P.; Glarborg, P. Experimental Measurements and Kinetic Modeling of CO/H2/O2/ NOX Conversion at High Pressure. Int. J. Chem. Kinet. 2008, 40, 454– 480. (22) National Institute of Standards and Technology, NIST Chemical Kinetics Database, Gaithersburg, MD, 2000. Available online at http://kinetics.nist.gov/kinetics/index.jsp (accessed 03.05.2010). (23) Kee, R. J.; Rupley, F. M.; Miller, J. A. The Chemkin Thermodynamic Data Base. Sandia Report, SAND87-8215B, Sandia National Laboratories: Livermore, CA, 1994. (24) Troe, J. Predictive possibilities of unimolecular rate theory. J. Phys. Chem. 1979, 83, 114–126. 
(25) Zwietering, T. N. The degree of mixing in continuous flow systems. Chem. Eng. Sci. 1959, 11, 1–15. (26) Glarborg, P.; Johnsson, J. E.; Dam-Johansen, K. Kinetics of Homogeneous Nitrous Oxide Decomposition. Combust. Flame 1994, 99, 523–532.


(27) Sin, G.; Vanrolleghem, P. A. Extensions to modeling aerobic carbon degradation using combined respirometric-titrimetric measurements in view of activated sludge model calibration. Water Res. 2007, 41, 3345–3358.
(28) Brun, R.; Reichert, P.; Künsch, H. R. Practical identifiability analysis of large environmental simulation models. Water Resour. Res. 2001, 37, 1015–1030.
