Quantitative Optimal Experimental Design Using Global Sensitivity Analysis via Quasi-Linearization


Ind. Eng. Chem. Res. 2010, 49, 7782–7794

Yunfei Chu and Juergen Hahn*
Artie McFerrin Department of Chemical Engineering, Texas A&M University, College Station, Texas 77843-3122
*To whom correspondence should be addressed. E-mail: [email protected].

Local sensitivity analysis is widely used in experimental design to improve the precision of the estimated parameters. However, for nonlinear models, the local sensitivity values and the experimental design criteria depend on the parameter values, which are not yet known. Global sensitivity analysis can deal with this situation by taking parameter uncertainty into account for computation of the sensitivity values. However, the existing experimental design criteria cannot easily be applied to conventional global sensitivity analysis results. One outcome of this is that experimental design involving global sensitivity analysis has mainly focused on identification of influential parameters. A new global sensitivity analysis technique is presented in this work for the purpose of quantitative experimental design. The methodology makes use of quasi-linearization, and the global sensitivity matrix that it returns is the design matrix of the linearized model. Due to this, the same experimental criteria that have been developed for quantitative optimal design of linear models can be applied and serve as indicators of desired properties of the parameter estimates. The presented design using global sensitivity analysis is consistent with the popular design involving local sensitivity analysis when the parameter uncertainty is small; however, the technique outperforms local design when applied to models with significant parameter uncertainty.

1. Introduction

Experimental design has received a significant amount of attention in statistics and system identification.1-7 Qualitative design is one aspect of experimental design and consists of selecting input/output variables and identifiable parameters. Quantitative design, on the other hand, deals with determining input shapes and sampling schedules based on optimization of a suitable criterion.2 Local parametric sensitivities, i.e., partial derivatives of the output with respect to the parameters, play an important role in both qualitative and quantitative experimental design. Various criteria for experimental design have been developed based on local sensitivity analysis.

While local sensitivity analysis can be applied to nonlinear models, there are several points that need to be carefully considered. One is that the results returned by local sensitivity analysis of a nonlinear system depend upon the values of the parameters that one wants to estimate. Obviously these values are not exactly known prior to estimation. The effect of the parameter values on the sensitivity values and, accordingly, on the experimental design criterion represents one of the main problems associated with experimental design of nonlinear systems. Several approaches have been developed to deal with this dependency. The most widely used one is local design,8 which assumes that the true parameter values are close to the nominal values. If this is the case, then the sensitivity vectors evaluated at the nominal values of the parameters can be used to design an experiment. However, this approach neglects the parameter uncertainty. Another approach is sequential design,9 which iterates between local design and parameter estimation. Using this technique, an experiment is designed based on the sensitivity evaluated at the previously estimated parameter values; the parameter values are then re-estimated based upon data generated from the designed experiment.
The newly estimated parameter values are used for experimental design in the next iteration. The main drawback of this technique is that iterating between experimental design and parameter estimation may not result in a small number of experiments that need to be performed. This drawback is a significant one as reducing the experimental effort is one of the driving factors behind performing experimental design. Robust design10,11 is an alternative to the aforementioned experimental design methods. Robust design evaluates the sensitivity not only at one point in the parameter space, but instead at many individual points. Approaches for robust design include the min-max method12,13 and the Bayesian method.14,15 However, these robust methods are computationally expensive due to the evaluation of the sensitivity over a range of possible parameter values.

Global sensitivity analysis has more recently received a lot of attention as an alternative to local sensitivity analysis. Global sensitivity analysis characterizes the effect of a parameter on the output while explicitly taking information about parameter uncertainty into account.16-20 A significant amount of work has been done using global sensitivity analysis instead of local sensitivity analysis for experimental design.21-30 However, these efforts mainly focused on qualitative experimental design, i.e., determining important parameters that should be estimated. While it has been recognized that global sensitivity analysis outperforms local sensitivity analysis for determining important parameters, reports of quantitative optimal experimental design by global sensitivity analysis, e.g., selection of sampling points and determination of input profiles by optimizing an experimental criterion, are rare (for an exception, see the work of Martinez et al.30).

The main obstacle to using global sensitivity analysis techniques for quantitative optimal experimental design is that there is a lack of optimality criteria that can be applied to the global sensitivity values. The common experimental criteria are derived for linear systems, where the results returned by each criterion characterize a specific attribute related to the precision of the estimated parameter values. These criteria are real functions of the design matrix of the linear model.


In the case of a nonlinear model, the local sensitivity matrix can be used as the design matrix; while it is possible to use the experimental criteria on the local sensitivity matrix, it should be noted that the nonlinear behavior of the model is not taken into account in this case. However, it should be pointed out that the use of experimental criteria cannot be easily extended to the global sensitivity matrix. If criteria developed for the local sensitivity matrix are applied to the global sensitivity matrix, such as was done by Martinez et al.,30 then the experimental designs may be inconsistent with the traditional designs2-4 as the global sensitivity measures are not guaranteed to reduce to the local sensitivity when the parameter uncertainty is negligible, or even when the model is linear. A consequence of this is that it is difficult to interpret the results returned by an experimental criterion applied to the global sensitivity matrix.

It is the goal of this paper to develop a new global sensitivity analysis measure to be used for optimal experimental design. This global sensitivity analysis is performed via quasi-linearization, and the computed global sensitivity matrix is shown to be an extension of the design matrix of the linearized model. Due to this property, the existing experimental criteria can be applied to the global sensitivity matrix. The technique is consistent with traditional experimental design as results from the global sensitivity analysis reduce to the ones derived using local sensitivity analysis if the model is linear or if the parameter uncertainty approaches zero. However, the presented approach is a global technique as the parameter uncertainty is explicitly taken into account during the computation. Due to this, quantitative experimental design based on the global sensitivity analysis can be performed, which may result in an improvement compared to a design based upon local sensitivity analysis. The technique is illustrated in three case studies: one where the parameter identifiability is tested, one where the optimal sampling points are determined, and one where the optimal input profile is computed.

2. Background

2.1. Local and Global Sensitivity Analysis. Parametric sensitivity analysis deals with how variations in the outputs of a system can be apportioned, qualitatively or quantitatively, to different parameters in a model.16-20 Sensitivity analysis techniques belong to one of two categories: (1) local sensitivity analysis is a derivative-based approach that characterizes the effect of a parameter on the outputs only in a neighborhood of the nominal value and (2) global sensitivity analysis perturbs the parameters over a large range, and the sensitivity denotes the effect of the parameters on the outputs over the entire uncertainty range. Sensitivity analysis is performed on the mapping from parameters to outputs. Assume that a regression model is given by

\tilde{y} = g(\theta) + \varepsilon    (1)

where y˜ = [y˜(t1), ..., y˜(tnt)]T is the measured output, g(θ) = [g(t1,θ), ..., g(tnt,θ)]T is the predicted value, and ε = [ε(t1), ..., ε(tnt)]T is the measurement noise. For dynamic systems, the regression model is defined implicitly by a set of differential equations describing the system. If a system is nonlinear, then an analytical expression of the regression model rarely exists. Local sensitivity is the partial derivative of the output with respect to the parameter

s_i(t_j) = \frac{\partial g(t_j, \theta)}{\partial \theta_i}    (2)

The derivative can be calculated by direct differentiation,31,32 which solves the system equations simultaneously with the sensitivity equations. The sensitivity vector of a parameter is constructed by sampling the partial derivatives at several given time points: si = [si(t1), ..., si(tnt)]T.

Various global sensitivity measures exist, unlike what is found for local sensitivity analysis, and among these, the variance-based method is commonly used.33 The variance-based sensitivity characterizes the prior information about the parameter uncertainty by a probability density function p(θ) = ∏i pi(θi). The conditional variance characterizes the individual contribution of a parameter to the total variance of the output, which is calculated by

var[E[g(t_j, \theta)|\theta_i]] = E[(E[g(t_j, \theta)|\theta_i] - E[g(t_j, \theta)])^2]
  = \int \left( \int \cdots \int g(t_j, \theta) \prod_{k \neq i} p_k(\theta_k) \prod_{k \neq i} d\theta_k - \int \cdots \int g(t_j, \theta) \prod_k p_k(\theta_k) \prod_k d\theta_k \right)^2 p_i(\theta_i) \, d\theta_i    (3)

There are two terms contained in the bracket in eq 3. The first term is the conditional mean of the output for a particular value of the parameter θi, and the second one is the mean of the output over all parameters. The global sensitivity is often defined as the conditional variance divided by the total variance of the output33

s_i(t_j) = \frac{var[E[g(t_j, \theta)|\theta_i]]}{var[g(t_j, \theta)]}    (4)

or the normalized conditional variance25

s_i(t_j) = \frac{var[E[g(t_j, \theta)|\theta_i]]}{var[\theta_i]}    (5)

The advantage of the normalized conditional variance (5) is that the global sensitivity is in some sense comparable to the local sensitivity as both have the same units. Computation of the conditional variance is not trivial, and several approaches for its computation have been presented, including the regression method,34 Sobol's method,35 the Fourier amplitude sensitivity test (FAST),36,37 and extensions of FAST.38

If the model is linear and the parameters are independent

g(t_j, \theta) = \sum_k a_k(t_j) \theta_k    (6)

then the conditional mean given the parameter θi is

E[g(t_j, \theta)|\theta_i] = a_i(t_j)\theta_i + \sum_{k \neq i} a_k(t_j) E[\theta_k]    (7)

and the mean is

E[g(t_j, \theta)] = \sum_k a_k(t_j) E[\theta_k]    (8)

The conditional variance from eq 3 results in

var[E[g(t_j, \theta)|\theta_i]] = E[a_i(t_j)^2(\theta_i - E[\theta_i])^2] = a_i(t_j)^2 \, var[\theta_i]    (9)
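As a concrete illustration of eqs 6-9, the normalized conditional variance of a linear model can be estimated by plain Monte Carlo sampling and compared against the coefficient magnitude |ai(tj)| that the local sensitivity would return. The sketch below is not from the original paper; the coefficients, the two-parameter model, and the uniform priors are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Linear model at a single time point (eq 6): g(theta) = a1*theta1 + a2*theta2.
a = np.array([2.0, -0.5])                # illustrative coefficients, not from the paper

def g(theta):                            # theta has shape (n, 2)
    return theta @ a

n_outer, n_inner = 2000, 2000            # outer loop over theta_i, inner loop over the rest

for i in range(2):
    theta_i = rng.uniform(-1.0, 1.0, n_outer)           # prior of theta_i: uniform on [-1, 1]
    cond_mean = np.empty(n_outer)
    for k, ti in enumerate(theta_i):
        theta = rng.uniform(-1.0, 1.0, (n_inner, 2))    # sample the remaining parameters
        theta[:, i] = ti                                 # condition on theta_i = ti
        cond_mean[k] = g(theta).mean()                   # E[g | theta_i] (first term in eq 3)
    s_global = np.sqrt(cond_mean.var() / theta_i.var()) # square root of eq 5
    print(f"theta_{i+1}: sqrt(normalized cond. var.) = {s_global:.3f}, |a_i| = {abs(a[i]):.3f}")
```

For this linear case the two printed values agree, which is exactly the statement of eq 10 below.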


In this case, the global sensitivity given by eq 5 matches the magnitude of the local sensitivity from eq 2:

\sqrt{\frac{var[E[g(t_j, \theta)|\theta_i]]}{var[\theta_i]}} = |a_i(t_j)| = \left| \frac{\partial g(t_j, \theta)}{\partial \theta_i} \right|    (10)

If the model is nonlinear and the parameter uncertainty is small, then the global sensitivity computed by these means returns results that approximate those computed by the absolute value of the local sensitivity analysis.25

2.2. Evaluation of Multidimensional Integrals. Computation of a multidimensional integral is one of the main tasks for many sensitivity analysis methods, e.g., computation of the conditional variance in variance-based global sensitivity analysis but also the quasi-linearization method discussed in the next section. A multidimensional integral over the parameter space can be expressed as

I_f = \int_0^1 \cdots \int_0^1 f(\theta_1, ..., \theta_{n_\theta}) \, d\theta_1 \cdots d\theta_{n_\theta}    (11)

where the integration intervals are normalized with the lower bound set to zero and the upper bound set to unity. Evaluation of such multidimensional integrals is not trivial. One general approach uses a Monte Carlo method.39 Monte Carlo methods generate a set of uniformly and independently distributed random points of the parameters, {θ̃1, ..., θ̃N}, and use the average value of the function over the samples to approximate the integral

I_N = S_R \frac{1}{N} \sum_{k=1}^{N} f(\tilde{\theta}_k)    (12)

where SR is the volume of the integration region. For the unit hypercube shown in eq 11, the value of SR equals unity and the presence of this variable in the expression does not affect the numerical value, but it does ensure that the expression shown in eq 12 has the same units as that from eq 11. As given by the law of large numbers, IN will approach If as the number of sampling points N approaches infinity

\lim_{N \to \infty} I_N = I_f    (13)

Apart from the independently distributed random sequences, there are also deterministic sequences, called equidistributed sequences, that are able to satisfy the condition given by eq 13. One method that uses deterministic equidistributed sequences to evaluate the integral is the quasi-Monte Carlo method.40,41 An advantage of the quasi-Monte Carlo method is that it can converge faster than standard Monte Carlo approaches. One well-known equidistributed sequence is generated from a set of rationally linear independent numbers ω = [ω1, ..., ωnθ]T (nθ ≥ 2), i.e., for any integers λ1, ..., λnθ

\sum_i \lambda_i \omega_i = 0 \;\; \text{implies all} \;\; \lambda_i = 0    (14)

The sequence {θ̃k = (kω) mod 1} is equidistributed, and the convergence condition given by eq 13 holds.42 A continuous version of this sequence also exists42,43 and is given by

\lim_{T \to \infty} \frac{1}{T} \int_0^T g((\omega\tau) \bmod 1) \, d\tau = \int_0^1 \cdots \int_0^1 g(\theta) \, d\theta_1 \cdots d\theta_{n_\theta}    (15)

One important aspect of eq 15 is that the multidimensional integral can be transformed into a unidimensional integral, which is significantly easier to evaluate. If the rationally linear independence condition is satisfied, then all elements of ω are irrational numbers. As computers use finite precision for representing numbers, the irrational ω cannot be recorded accurately and the rationally linear independence cannot hold in practice. Instead, a condition approximating the rationally linear independence has been presented in the literature.36,37 Since only rational numbers can be recorded by a computer, the elements in ω can be assumed to be integers without loss of generality. While it is not possible for the condition from eq 14 to hold, the equation can be satisfied by small integers λ1, ..., λnθ in the sense that for any

\sum_i |\lambda_i| \leq M + 1, \quad \sum_i \lambda_i \omega_i = 0 \;\; \text{implies all} \;\; \lambda_i = 0    (16)

The number M is called the degree of independence, which characterizes how close the condition given by eq 16 is to that given by eq 14. Equation 14 represents the limiting case for M → ∞. If ω consists of only integers, then the function g((ωτ) mod 1) is periodic with respect to τ, and the integral can be evaluated over only one period T. It has been shown that the error in the integration stems from the approximation involving the rationally linear dependence and that this error can be controlled by choosing a value for M.36 A more detailed discussion regarding the selection of the values of M and ω is presented in the Appendix.
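To make the use of eqs 12-16 concrete, the short sketch below compares a standard Monte Carlo estimate (eq 12) with an estimate obtained by sampling the periodic curve τ → (ωτ) mod 1 of eq 15 over one period, using the integer frequency set ω = (3, 7) from Table A1 in the Appendix. The test integrand and sample sizes are arbitrary illustrative choices, not values from the paper.

```python
import numpy as np

# Test integrand on the unit square; exact integral is (1 - cos(1)) * 0.5.
f = lambda t1, t2: np.sin(t1) * t2
exact = (1.0 - np.cos(1.0)) * 0.5

N = 4000
rng = np.random.default_rng(1)

# Standard Monte Carlo (eq 12 with S_R = 1).
theta = rng.uniform(size=(N, 2))
I_mc = f(theta[:, 0], theta[:, 1]).mean()

# Quasi-Monte Carlo: discretize the one-period integral of eq 15 with integer
# frequencies that are rationally independent up to 4th order (Table A1, size 2).
omega = np.array([3.0, 7.0])
tau = np.arange(1, N + 1) / N                 # one full period of the curve is tau in (0, 1]
theta_q = (tau[:, None] * omega) % 1.0        # theta_i(tau) = (omega_i * tau) mod 1
I_qmc = f(theta_q[:, 0], theta_q[:, 1]).mean()

print(f"exact = {exact:.5f}, MC = {I_mc:.5f}, quasi-MC = {I_qmc:.5f}")
```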

2.3. Optimal Experimental Design. Experimental design seeks to determine the experimental conditions that will improve the precision of the estimated parameters.2-4 Information about the noise is often required for experimental design, where it is a common assumption that the noise has zero mean and a covariance matrix of Σ. To simplify the notation in the following, a Cholesky decomposition of the inverse of the covariance matrix can be performed, i.e., Σ-1 = CTC. A new regression model can then be obtained by multiplying both sides of the regression model shown in eq 1 by the matrix C. The noise vector of this new model is Cε, which has a covariance matrix equal to the identity matrix. Due to this preprocessing, the covariance matrix of the noise can be assumed to be the identity matrix

\Sigma = I    (17)

without loss of generality. If the covariance matrix of the noise is unknown, it is possible to augment the parameter vector to include elements of the covariance matrix and estimate the covariance matrix simultaneously with the other parameters. However, this approach further complicates the parameter estimation and experimental design, and it is not uncommon to assume that the covariance matrix of the noise is known in experimental design. The model shown in eq 1 is a single response model; however, it can be extended to a multiple response model by augmenting the vector of measurements to include different outputs, y˜ = [y˜1(t1), ..., y˜1(tnt), ..., y˜ny(t1), ..., y˜ny(tnt)]T, and, accordingly, the vector for the prediction and the noise vector. Most optimal experimental design techniques are derived for a local linearization of eq 1

\tilde{y} = S\theta + \varepsilon    (18)


In the linear model, the covariance matrix of the estimated parameters is

var(\hat{\theta}) = (S^T S)^{-1}    (19)

under the assumption stated in eq 17. Additionally, if the noise is Gaussian distributed, then the covariance matrix of the estimated parameters is equal to the inverse of the Fisher information matrix and the least-squares estimate is also the maximum likelihood estimate. It is the goal of experimental design to obtain a small value of some norm of the covariance matrix or, along similar lines, a large value of a norm of the Gram matrix STS. A set of real functions of the Gram matrix have been defined as experimental design criteria.44 The most commonly used criterion is the D-optimality criterion, which maximizes the determinant of the Gram matrix45,46

\max \phi_D(S^T S) = \max \det(S^T S)    (20)

D-optimal design results in the smallest volume of the confidence ellipsoid for the least-squares estimates of the parameters. Another popular criterion is the A-optimality criterion, which minimizes the sum of the variances of the estimated parameters47,48

\min \phi_A(S^T S) = \min \mathrm{trace}[(S^T S)^{-1}]    (21)

While these techniques have been derived for linear systems, they can be extended to nonlinear models, e.g., see eq 1, by using a linearization

g(\theta) \approx g(\theta_{true}) + S(\theta - \theta_{true})    (22)

where S is the local sensitivity matrix S = ∂g/∂θT = [∂g(ti,θ)/∂θj|θ=θtrue]ij. The sensitivity matrix of this linearized model serves as the design matrix, and the experimental design criteria can be applied to it. These experimental design criteria locally retain their properties, e.g., the D-optimality criterion is associated with the volume of the confidence region of the estimated parameters. It should be noted that the sensitivity matrix should be evaluated at the true parameter values to obtain a reasonable approximation; however, the true values are never known prior to estimation. In fact, this is the main drawback of experimental design based on local sensitivity analysis. If the nominal parameter values, rather than the true values, are used to evaluate the sensitivity matrix, then the performance of the design can be poor due to the difference between the values. Apart from using a linear approximation, it is also possible to directly generate a distribution of estimated values of the parameters by using a Monte Carlo method. In this case, the experimental design can be performed using multiple sampling points of the estimated parameter values.6,49,50 However, these approaches can be computationally expensive.

3. Optimal Experimental Design Using Global Sensitivity Analysis via Quasi-Linearization

3.1. Motivation Behind Derivation of a New Global Sensitivity Analysis Technique. The main drawback of local sensitivity analysis applied to nonlinear systems is that the sensitivity values are affected by the parameter values. To overcome this drawback, a wide variety of global sensitivity methods have been developed. It is generally accepted that global sensitivity analysis is superior to local sensitivity analysis for identification of influential parameters,33 as is also evidenced by a large number of applications of global sensitivity analysis. However, the use of global sensitivity analysis for designing inputs and outputs has been much more limited, and no generally acceptable criterion for quantitative experimental design involving global sensitivity analysis has been proposed in the literature. The reason for this is that quantitative experimental design generally uses a criterion of the sensitivity matrix for determining experimental conditions; however, it is unclear how existing experimental design criteria can be applied to the sensitivity values computed from global sensitivity analysis techniques. Even though it is straightforward to construct the global sensitivity matrix, similar to the local sensitivity matrix, and it has been suggested to apply existing experimental criteria to the global sensitivity matrix,30 the results of such a design can be problematic. The reason for this statement is that such a design involving the global sensitivity matrix is inconsistent with the traditional designs, e.g., if the global sensitivity matrix fails to reduce to the design matrix when the model is linear. One resulting problem is that the interpretation of the results returned by such a method is unclear. A simple example is presented here to illustrate this point. Consider the following two linear regression models

Model I:  g_1(\theta_1, \theta_2) = \theta_1 + \theta_2, \quad g_2(\theta_1, \theta_2) = \theta_1 + \theta_2
Model II: g_1(\theta_1, \theta_2) = \theta_1 + \theta_2, \quad g_2(\theta_1, \theta_2) = \theta_1 - \theta_2    (23)

These models do not contain noise terms as it is the purpose of this illustrative example to assess structural identifiability. The local sensitivity matrix is the design matrix as given by

S_I = \begin{bmatrix} 1 & 1 \\ 1 & 1 \end{bmatrix} \quad \text{and} \quad S_{II} = \begin{bmatrix} 1 & 1 \\ 1 & -1 \end{bmatrix}    (24)

The identifiability of the parameters can be determined directly from the value of the experimental criterion. The D-optimality criterion value is zero for model I since the sensitivity matrix is rank deficient, while the criterion value is nonzero for model II due to the full rank of the sensitivity matrix. As a result, the parameters in model I are unidentifiable while the ones for model II are identifiable. However, the D-optimality criterion values of the global sensitivity matrix calculated from the conditional variance are both zero since the global sensitivity matrices of the two models are identical and equal to S_I, which would falsely suggest that both models are unidentifiable. The reason for the incorrect results returned by this method based upon global sensitivity is that the sign information is lost in the computation of the conditional variance (see eq 10) and that this global sensitivity matrix does not reduce to the design matrix.

It should be pointed out that the presented example just used one global sensitivity analysis method to illustrate a point. Since a wide variety of methods for global sensitivity analysis exist, it is beyond the scope of this work to compare all of them. Instead the focus is on variance-based methods as they have been frequently applied in qualitative experimental design to identify influential parameters. Other global sensitivity indices, e.g., the Kolmogorov-Smirnov statistic,21,22 also fail to reduce to the local sensitivity for several reasons.25 In contrast to these approaches, the technique introduced in this work can be used for global sensitivity analysis but also reduces to existing techniques for small uncertainty in the parameters. It can be shown that this method includes the sensitivity defined by the Pearson correlation coefficient as a special case.51
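The rank argument of eqs 23 and 24 is easy to check numerically. The snippet below, a minimal sketch not taken from the paper, evaluates the D-criterion of eq 20 for both local sensitivity matrices and for the sign-free matrices that a conditional-variance-based approach would produce, reproducing the qualitative conclusion that the latter cannot distinguish the two models.

```python
import numpy as np

S_I  = np.array([[1.0,  1.0],
                 [1.0,  1.0]])   # local sensitivity matrix of model I (eq 24)
S_II = np.array([[1.0,  1.0],
                 [1.0, -1.0]])   # local sensitivity matrix of model II (eq 24)

def d_criterion(S):
    """D-optimality criterion of eq 20: det(S^T S)."""
    return np.linalg.det(S.T @ S)

# Variance-based global sensitivities drop the sign (see eq 10), so both models give |S| = S_I.
S_var_I, S_var_II = np.abs(S_I), np.abs(S_II)

print("local  D-criterion:", d_criterion(S_I), d_criterion(S_II))          # 0.0 vs 4.0
print("sign-free D-criterion:", d_criterion(S_var_I), d_criterion(S_var_II))  # 0.0 vs 0.0
```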


3.2. Development of a New Global Sensitivity Measure for Optimal Experimental Design. The development of a new global sensitivity analysis technique that can be used for quantitative optimal experimental design, instead of existing local sensitivity analysis methods, is the main contribution of this work. This technique has the advantage that parametric uncertainty can be explicitly taken into account by applying existing experimental design criteria to the global sensitivity matrix developed in this work. This extension of local methods to global sensitivity analysis is achieved via quasi-linearization.

While the exact values of the parameters are never known before estimation is performed, it is common that some prior information about the parameter uncertainty is available. The region of possible parameter values is often characterized by a hyperrectangle, and each parameter is distributed within an interval. A reasonable choice of the nominal value of a parameter is the mean parameter value θnominal = E[θ]. To simplify the notation, deviation variables are introduced by subtracting the nominal value from the original one, i.e., θ = θoriginal - θnominal, and then the nominal value of the deviation variable is

\bar{\theta} = 0    (25)

The interval of an uncertain parameter is assumed as

\theta \in [\theta^L, \theta^U]    (26)

where θL is the lower bound and θU is the upper bound. Similarly, the output can be transformed such that

g(\bar{\theta}) = 0    (27)

It should be noted that introducing deviation variables only represents a change of notation and has no effect on the parameter sensitivity analysis itself and/or the experimental design. Since the goal is to perform experimental design for nonlinear systems, a linear approximation of the original model shown in eq 1 can be useful. Using the notation introduced in eqs 25-27, a regression model can be written in deviation variables as

g(\theta) \approx \sum_i s_i \theta_i    (28)

This linear approximation also provides a straightforward technique for evaluating sensitivity: according to this expression, the coefficient vector si is the sensitivity vector of the parameter θi. The most common approximation is the local linearization shown in eq 22, resulting in the local sensitivity value from eq 2. However, several alternatives to the described local linearization exist, one of which will be used in this work. One alternative is to regard the regression model as a nonlinear system mapping the inputs θ to the outputs g(θ). To study the behavior of the system and investigate the effect of the inputs, the system is stimulated by an input

\theta_i = R_i V_i(\psi_i)    (29)

where Ri = θiU - θiL is the magnitude of the uncertainty and the input function is chosen such that

\psi_i \in [0, 1] \quad \text{and} \quad V_i(\psi_i) \in \left[ \frac{\theta_i^L}{\theta_i^U - \theta_i^L}, \; \frac{\theta_i^U}{\theta_i^U - \theta_i^L} \right]    (30)

The best linear approximation to the nonlinear model for this specific input can be calculated by minimizing the squared errors of the approximation

\min_{s_0, s_1, ..., s_{n_\theta}} \int_0^1 \cdots \int_0^1 \left\| g(R_1 V_1(\psi_1), ..., R_{n_\theta} V_{n_\theta}(\psi_{n_\theta})) - s_0 - \sum_k s_k R_k V_k(\psi_k) \right\|^2 \prod_k d\psi_k
  = \min_{s_0, s_1, ..., s_{n_\theta}} \sum_j \int_0^1 \cdots \int_0^1 \left( g(t_j, R_1 V_1(\psi_1), ..., R_{n_\theta} V_{n_\theta}(\psi_{n_\theta})) - s_0(t_j) - \sum_k s_k(t_j) R_k V_k(\psi_k) \right)^2 \prod_k d\psi_k
  = \sum_j \min_{s_0(t_j), s_1(t_j), ..., s_{n_\theta}(t_j)} \int_0^1 \cdots \int_0^1 \left( g(t_j, R_1 V_1(\psi_1), ..., R_{n_\theta} V_{n_\theta}(\psi_{n_\theta})) - s_0(t_j) - \sum_k s_k(t_j) R_k V_k(\psi_k) \right)^2 \prod_k d\psi_k    (31)

where sk = [sk(t1), ..., sk(tnt)]T. The last line in eq 31 exemplifies that the optimization can be performed separately for different time points tj. To simplify the notation, the index tj is omitted

\min J = \int_0^1 \cdots \int_0^1 \left( g(R_1 V_1(\psi_1), ..., R_{n_\theta} V_{n_\theta}(\psi_{n_\theta})) - s_0 - \sum_k s_k R_k V_k(\psi_k) \right)^2 \prod_k d\psi_k    (32)

This expression is a least-squares optimization, and the solution can be calculated from

\frac{\partial J}{\partial s_i} = 0, \quad i = 1, ..., n_\theta    (33)

which directly leads to

\int_0^1 \cdots \int_0^1 g V_i \prod_k d\psi_k = \sum_j \left( R_j \int_0^1 \cdots \int_0^1 V_i V_j \prod_k d\psi_k \right) s_j \quad \text{for } i = 1, ..., n_\theta    (34)

where the solution using matrix notation is given by

\begin{bmatrix} s_1 \\ \vdots \\ s_{n_\theta} \end{bmatrix} =
\begin{bmatrix}
R_1 \int_0^1 \cdots \int_0^1 V_1 V_1 \prod_k d\psi_k & \cdots & R_{n_\theta} \int_0^1 \cdots \int_0^1 V_1 V_{n_\theta} \prod_k d\psi_k \\
\vdots & \ddots & \vdots \\
R_1 \int_0^1 \cdots \int_0^1 V_{n_\theta} V_1 \prod_k d\psi_k & \cdots & R_{n_\theta} \int_0^1 \cdots \int_0^1 V_{n_\theta} V_{n_\theta} \prod_k d\psi_k
\end{bmatrix}^{-1}
\begin{bmatrix}
\int_0^1 \cdots \int_0^1 g V_1 \prod_k d\psi_k \\
\vdots \\
\int_0^1 \cdots \int_0^1 g V_{n_\theta} \prod_k d\psi_k
\end{bmatrix}    (35)
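In practice the coefficients of eqs 31-35 can also be approximated by replacing the integrals with sample averages and solving the resulting regression problem with a standard least-squares routine. The following sketch is a hypothetical illustration (the two-parameter test function, the uniform sampling of ψ, and the inclusion of an intercept column are assumptions for this example, not prescriptions from the paper).

```python
import numpy as np

rng = np.random.default_rng(2)

# Illustrative nonlinear output in deviation variables (g(0, 0) = 0, eq 27).
def g(theta1, theta2):
    return np.exp(theta1) - 1.0 + theta2**3

R = np.array([2.0, 2.0])                 # uncertainty magnitudes R_i (eq 29)
V = lambda psi: psi - 0.5                # linear input functions on [-1/2, 1/2] (symmetric bounds, eq 30)

N = 20000
psi = rng.uniform(size=(N, 2))           # psi_i uniform on [0, 1]
theta = R * V(psi)                       # theta_i = R_i V_i(psi_i), eq 29
y = g(theta[:, 0], theta[:, 1])

# Sample version of the normal equations: regress g on [1, theta_1, theta_2].
X = np.column_stack([np.ones(N), theta])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
s = coef[1:]                             # quasi-linearization sensitivities s_1, s_2 (eq 28)
print("global sensitivities:", s)
print("local sensitivities at theta = 0:", [1.0, 0.0])
```

For this test function the second sensitivity is approximately 0.6 (the regression slope of θ2³ on θ2 over [-1, 1]), while the local value at the nominal point is zero, which previews the behavior discussed in the first case study.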

A multidimensional integral needs to be evaluated for each element of the matrix and the vector on the right side of eq 35. To limit the computational effort, it is assumed that the input functions are orthogonal:

\int_0^1 \cdots \int_0^1 V_i(\psi_i) V_j(\psi_j) \prod_k d\psi_k = 0 \quad \text{for any } i \neq j    (36)

Then the sensitivity value can be computed from

s_i = \frac{\int_0^1 \cdots \int_0^1 g(R_1 V_1(\psi_1), ..., R_{n_\theta} V_{n_\theta}(\psi_{n_\theta})) V_i(\psi_i) \prod_k d\psi_k}{R_i \int_0^1 \cdots \int_0^1 V_i(\psi_i)^2 \prod_k d\psi_k}    (37)
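Once orthogonal inputs are chosen, eq 37 is simply the projection of the output onto each input function and no matrix inversion is needed. A minimal Monte Carlo sketch, reusing the illustrative test function introduced above (not a model from the paper):

```python
import numpy as np

rng = np.random.default_rng(3)

def g(theta1, theta2):
    return np.exp(theta1) - 1.0 + theta2**3      # same illustrative test function as above

R = np.array([2.0, 2.0])
V = lambda psi: psi - 0.5                        # orthogonal inputs: E[V_i V_j] = 0 for i != j (eq 36)

N = 20000
psi = rng.uniform(size=(N, 2))
y = g(*(R * V(psi)).T)

# Direct sample estimate of eq 37 for each parameter.
for i in range(2):
    Vi = V(psi[:, i])
    s_i = np.mean(y * Vi) / (R[i] * np.mean(Vi**2))
    print(f"s_{i+1} = {s_i:.3f}")
```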

Another reason to choose orthogonal input functions is that the defined sensitivity value will reduce to the local sensitivity value when the range of parameter uncertainty tends to zero. For an illustration of this statement, suppose that the uncertainty range of each parameter decreases simultaneously with the same R; then, the limit of eq 37 is

\lim_{R \to 0} s_i = \lim_{R \to 0} \frac{\int_0^1 \cdots \int_0^1 g(R V_1(\psi_1), ..., R V_{n_\theta}(\psi_{n_\theta})) V_i(\psi_i) \prod_k d\psi_k}{R \int_0^1 \cdots \int_0^1 V_i(\psi_i)^2 \prod_k d\psi_k}
  = \frac{\int_0^1 \cdots \int_0^1 \left[ \lim_{R \to 0} \frac{1}{R} g(R V_1(\psi_1), ..., R V_{n_\theta}(\psi_{n_\theta})) \right] V_i(\psi_i) \prod_k d\psi_k}{\int_0^1 \cdots \int_0^1 V_i(\psi_i)^2 \prod_k d\psi_k}
  = \frac{\int_0^1 \cdots \int_0^1 \left. \frac{\partial g(R V_1(\psi_1), ..., R V_{n_\theta}(\psi_{n_\theta}))}{\partial R} \right|_{R=0} V_i(\psi_i) \prod_k d\psi_k}{\int_0^1 \cdots \int_0^1 V_i(\psi_i)^2 \prod_k d\psi_k}
  = \frac{\int_0^1 \cdots \int_0^1 \left( \left. \frac{\partial g(\theta)}{\partial \theta_1} \right|_{\theta=0} V_1(\psi_1) + \cdots + \left. \frac{\partial g(\theta)}{\partial \theta_{n_\theta}} \right|_{\theta=0} V_{n_\theta}(\psi_{n_\theta}) \right) V_i(\psi_i) \prod_k d\psi_k}{\int_0^1 \cdots \int_0^1 V_i(\psi_i)^2 \prod_k d\psi_k}
  = \left. \frac{\partial g(\theta)}{\partial \theta_i} \right|_{\theta=0}    (38)

where the last equality follows from the orthogonality of the input functions (eq 36), since all cross terms vanish.

Selecting appropriate inputs for eq 29 is a critical step in this quasi-linearization procedure. The range condition given by eq 30 and the orthogonality condition from eq 36 should be satisfied. Additionally, the inputs should sufficiently stimulate the system to create a rich data set for the global sensitivity computed by eq 37. Several candidates for input functions are commonly used in various types of nonlinear systems analysis: piecewise constant functions, ramp functions, and sinusoidal functions.52,53 In particular, sinusoidal functions are commonly used as frequency response characteristics can be determined and the sensitivity given by eq 37 is related to the Fourier coefficient of the output. However, one is not restricted to these input types and can instead determine the input function according to the prior distribution of the parameter, if this distribution is known. Using this approach, the independent variable ψi is regarded as a random variable with a uniform distribution over the unit interval. According to the distribution function of the parameter θi, F(θi), the input function shown in eq 29 can be selected as

V_i(\psi_i) = \frac{1}{R_i} F^{-1}(\psi_i)    (39)
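For a non-uniform prior, eq 39 amounts to pushing a uniform ψi through the inverse cumulative distribution function of θi. A small sketch, assuming a symmetric triangular prior on [-1, 1] purely for illustration (neither the distribution nor the numbers come from the paper):

```python
import numpy as np
from scipy import stats

# Hypothetical prior for a deviation parameter theta_i on [-1, 1]:
prior = stats.triang(c=0.5, loc=-1.0, scale=2.0)   # symmetric triangular distribution
R_i = 2.0                                           # uncertainty magnitude theta_U - theta_L

def V_i(psi):
    """Input function of eq 39: inverse CDF of the prior, scaled by 1/R_i."""
    return prior.ppf(psi) / R_i

psi = np.linspace(0.01, 0.99, 5)
print(V_i(psi))    # values of theta_i / R_i corresponding to uniformly spaced psi
```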

A multidimensional integral needs to be evaluated to compute the global sensitivity from eq 37. In most cases, an analytical solution does not exist and a standard Monte Carlo method can be applied instead: a set of values of ψ is sampled according to the uniform distribution and recorded as {ψk}, where k is an index for the simulation run. The parameter value is calculated using the input function from eq 29, and the output is evaluated at each parameter point to generate a set {gk}. In the special case where the input function is a linear function

\theta_i = R_i \psi_i    (40)

the sensitivity results in

s_i = \frac{\int_0^1 \cdots \int_0^1 g(R_1 \psi_1, ..., R_{n_\theta} \psi_{n_\theta}) \psi_i \prod_k d\psi_k}{R_i \int_0^1 \cdots \int_0^1 \psi_i^2 \prod_k d\psi_k}    (41)

The calculation by the Monte Carlo method is then given by


s_i = \frac{\sum_k g^k \psi_i^k}{R_i \sum_k \psi_i^k \psi_i^k} = \frac{\sum_k g^k \theta_i^k}{\sum_k \theta_i^k \theta_i^k} = \rho(g, \theta_i) \frac{\sigma_g}{\sigma_{\theta_i}}    (42)

where ψik and θik are the ith elements of the sample vectors ψk and θk, respectively. For this special case, the sensitivity is the uncentered Pearson correlation coefficient ρ(g, θi) normalized by the ratio of the standard deviations of the output and the parameter.

A more efficient approach to evaluate the multidimensional integral is the quasi-Monte Carlo method presented in section 2.2. A set of rationally linear independent numbers {ωn} is selected from Table A1 in the Appendix. The multidimensional integral is then transformed into a unidimensional integral

\int_0^1 \cdots \int_0^1 g(R_1 V_1(\psi_1), ..., R_{n_\theta} V_{n_\theta}(\psi_{n_\theta})) V_i(\psi_i) \prod_k d\psi_k = \frac{1}{T} \int_0^T g(R_1 V_1((\omega_1 \tau) \bmod 1), ..., R_{n_\theta} V_{n_\theta}((\omega_{n_\theta} \tau) \bmod 1)) V_i((\omega_i \tau) \bmod 1) \, d\tau    (43)

where the upper bound of the integral, T, equals the least common multiple of {1/ωn} since the function Vn((ωnτ) mod 1) is periodic with a period of 1/ωn. This unidimensional integral can be evaluated using standard numerical software packages. The computationally most demanding step of this procedure is the evaluation of the multidimensional integral. Since the same procedure is used for evaluating this multidimensional integral as the one implemented in the FAST method, the computational effort of the presented procedure is comparable to the FAST method. As FAST has been applied to problems with dozens of state variables and parameters,23-26 it is possible to apply the presented procedure to realistic models.

3.3. Optimal Experimental Design Involving Global Sensitivity. The global sensitivity vector si is formed by computing the global sensitivity value at different sampling points in time. Since eq 28 is a linear approximation, the experimental design optimality criteria derived for linear models can also be used in this case. The only modification is that the sensitivity matrix consists of the global sensitivity values, as computed from eq 37, instead of the local sensitivity values. Since the global sensitivity reduces to the local sensitivity, the design by global sensitivity analysis reduces to the one by local sensitivity analysis when the parameter uncertainty is small. At the same time, the effect of parameter uncertainty is taken into account in the presented procedure and, as a result, the technique can be applied to models with a significant degree of uncertainty.

The flowchart for the experimental design procedure based on global sensitivity analysis is shown in Figure 1. The first step is to determine the parameter bounds using available information. This information can be obtained from the literature, preliminary experiments, or by modeling and analyzing the mechanisms. The next step is to parametrize the experimental conditions. For example, the input profile is often represented by some form that involves only a few parameters, such as a series of piecewise constant functions, to reduce the resulting optimization problem to a finite-dimensional problem. Other experimental conditions that can be parametrized are the selection of measurements, sampling points, or initial conditions. All of these variables can be included in the decision vector. The optimal design is then determined by solving an optimization problem. The objective function of this optimization problem is an experimental criterion based on the global sensitivity matrix calculated from eq 37. The most popular criterion is the determinant of the Gram matrix of the sensitivity matrix (20) or the trace of the inverse of the Gram matrix (21); however, other criteria can also be applied.

Figure 1. Flowchart of experimental design based on global sensitivity analysis.

4. Three Illustrative Examples

This section presents three examples that illustrate different aspects of the presented experimental design procedure. The first example is a generic one while the second and third examples describe chemical reactors.

4.1. Identifiability Test of a Simple Model. This test of structural parameter identifiability aims to check whether the parameter values can be determined uniquely from noise-free data. If multiple solutions exist for parameter estimation, then the parameters are not identifiable and the estimation problem is ill-posed. Identifiability of a linear regression model is directly related to the rank of the design matrix. If the design matrix has full column rank, then the parameters are identifiable, and if the matrix is rank deficient, then the parameters are not identifiable. For a nonlinear model, the identifiability can be locally evaluated by the rank of the local sensitivity matrix. If the sensitivity matrix is full rank in a neighborhood of a given point, then the parameters are identifiable in a neighborhood of this point. It should be noted that the sensitivity value at only one point may be insufficient for determining identifiability as the rank of the sensitivity matrix may change in the neighborhood of this point. Consider the model

g_1(\theta_1, \theta_2) = \theta_1 + \theta_2^3
g_2(\theta_1, \theta_2) = \theta_1 - \theta_2^3    (44)

where θ1, θ2 ∈ [-R, R] and the nominal value vector is θ = 0. As this is a relatively simple example, it is possible to compute an analytical solution for the sensitivity analysis and to conclude that the model is identifiable over this region. In a first step, the local sensitivity matrix is computed for the nominal values

S = \begin{bmatrix} 1 & 0 \\ 1 & 0 \end{bmatrix}    (45)

This sensitivity matrix has a rank of one, which contradicts the observations made about the system above. The reason for this is that the local sensitivity matrix changes rank in a neighborhood containing the nominal value. As a second method, the global sensitivity matrix is computed using a variance-based method as shown in eq 5, where the parameter uncertainty is characterized by a uniform distribution over the region

S = \begin{bmatrix} 1 & \sqrt{3/7}\,R^2 \\ 1 & \sqrt{3/7}\,R^2 \end{bmatrix}    (46)

Figure 2. Sensitivity of the concentration of the species B with respect to the kinetic parameters calculated by three methods (R1 = 0.1 min-1 and R2 = 0.1 min-1): (a) k1 and (b) k2.

This sensitivity matrix also has a rank of one. The reason for this result is that the information about the sign is lost while computing the conditional variance. To compare these results, the global sensitivity matrix is computed via quasi-linearization from eq 37 for the same parameter uncertainty as the one used for the variance-based method

S = \begin{bmatrix} 1 & \frac{3}{5}R^2 \\ 1 & -\frac{3}{5}R^2 \end{bmatrix}    (47)

The rank of this sensitivity matrix is two, unless R approaches zero, in which case eq 47 reduces to eq 45. The results are consistent with what is known about the system. Apart from identifiability, it is also important to compare other results returned by these three methods. For example, the local sensitivity identifies the parameter θ1 as the influential parameter regardless of the range of parameter uncertainty. In contrast to this, both global sensitivity methods determine that the uncertainty range has an effect on which of the two parameters is most influential. If the range is small, then the parameter θ1 is influential; however, if the range is large, then the parameter θ2 becomes more important. This is due to the structure of the system where the parameter θ1 appears linearly while the parameter θ2 is taken to the third power in eq 44. This ability to take the parameter uncertainty into account is one of the advantages of global sensitivity analysis.
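The quasi-linearization matrix of eq 47 and its rank behavior can be reproduced numerically. The sketch below samples the parameters uniformly on [-R, R] (equivalent to the linear inputs of eqs 40-41) and is an illustration only; the sample size and rank tolerance are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(4)

def model(theta1, theta2):
    """Outputs of eq 44."""
    return np.column_stack([theta1 + theta2**3, theta1 - theta2**3])

def quasi_lin_matrix(R, n=200000):
    """Quasi-linearization sensitivity matrix (eq 37) for uniform parameters on [-R, R]."""
    theta = rng.uniform(-R, R, size=(n, 2))
    y = model(theta[:, 0], theta[:, 1])
    S = np.empty((2, 2))
    for i in range(2):
        S[:, i] = (y * theta[:, [i]]).mean(axis=0) / (theta[:, i]**2).mean()
    return S

for R in (1.0, 0.01):
    S = quasi_lin_matrix(R)
    print(f"R = {R}: rank = {np.linalg.matrix_rank(S, tol=1e-3)}")
    print(np.round(S, 4))
# For R = 1 the second column approaches +/-(3/5)R^2 (eq 47); as R -> 0 it vanishes and eq 45 is recovered.
```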

Figure 3. Sensitivity of the concentration of the species B with respect to the kinetic parameters calculated by three methods (R1 = 0.9 min-1 and R2 = 0.9 min-1): (a) k1 and (b) k2.

4.2. Batch Reactor with Two Reactions in Series. Suppose two consecutive reactions are taking place in a batch reactor54

A \xrightarrow{k_1} B \xrightarrow{k_2} C

in which species B is the desired product. The reactions are irreversible and first order with regard to species A and B, respectively. For the initial concentrations CA(0) = 1 mol/L and CB(0) = 0, the concentration of B is

C_B = \frac{k_1}{k_2 - k_1} (e^{-k_1 t} - e^{-k_2 t})    (48)

Even though this is a linear dynamic system, the output CB is nonlinearly dependent on the parameters k1 and k2. The ranges of the kinetic parameters are chosen as

k_1 \in [1 - R_1, 1 + R_1], \quad k_2 \in [1 - R_2, 1 + R_2]    (49)

and the nominal values are k̄1 = 1 min-1 and k̄2 = 1 min-1. Three sensitivity measures are calculated for the two parameters: the local sensitivity given by eq 2, the global sensitivity via the conditional variance shown in eq 5 computed by FAST, and the global sensitivity via the quasi-linearization from eq 37 computed by the quasi-Monte Carlo method from eq 43. The set of rationally independent numbers is selected as ω1 = 3 and ω2 = 7 according to Table A1. To demonstrate the effect of parameter uncertainty on the experimental design, two sets of uncertainty ranges are used: a small uncertainty with R1 = 0.1 min-1 and R2 = 0.1 min-1 and a large uncertainty with R1 = 0.9 min-1 and R2 = 0.9 min-1. In both cases, the parameters are assumed to be uniformly distributed over these intervals.

In the case of R1 = 0.1 min-1 and R2 = 0.1 min-1, the sensitivity profiles are shown in Figure 2. The global sensitivity via quasi-linearization reduces to the local sensitivity. For the global sensitivity via the variance-based method, only the magnitude of the sensitivity value reduces to the local sensitivity since the global sensitivity values are always non-negative. In the case of R1 = 0.9 min-1 and R2 = 0.9 min-1, the three methods return different sensitivity profiles for both parameters, as shown in Figure 3. The local sensitivity profile is the same as the one for the small uncertainty case since the sensitivity value is unaffected by the uncertainty. However, the values of each global sensitivity measure are different for the two cases since the information about the parameter uncertainty is taken into account for the calculation of the sensitivity value.
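For this batch reactor the quasi-linearization sensitivity profiles can be computed directly from the closed-form solution of eq 48. The sketch below is a minimal Monte Carlo illustration (eqs 40-42 rather than the quasi-Monte Carlo evaluation used in the paper); the time grid and sample size are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(5)

def C_B(t, k1, k2):
    """Concentration of B from eq 48 (CA(0) = 1 mol/L, CB(0) = 0)."""
    return k1 / (k2 - k1) * (np.exp(-k1 * t) - np.exp(-k2 * t))

def quasi_lin_profile(R1, R2, t_grid, n=50000):
    """Quasi-linearization sensitivities of C_B with respect to k1 and k2 over a time grid."""
    k1 = rng.uniform(1.0 - R1, 1.0 + R1, n)
    k2 = rng.uniform(1.0 - R2, 1.0 + R2, n)
    y = C_B(t_grid[None, :], k1[:, None], k2[:, None])        # (n, n_t) array of outputs
    s1 = (y * (k1[:, None] - 1.0)).mean(0) / np.mean((k1 - 1.0)**2)
    s2 = (y * (k2[:, None] - 1.0)).mean(0) / np.mean((k2 - 1.0)**2)
    return s1, s2

t = np.linspace(0.2, 10.0, 50)            # sampling times in minutes
for R in (0.1, 0.9):
    s1, s2 = quasi_lin_profile(R, R, t)
    print(f"R = {R}: max |s_k1| = {np.abs(s1).max():.3f}, max |s_k2| = {np.abs(s2).max():.3f}")
```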

Figure 4. Experimental designs returned by the three methods (R1 = 0.1 min-1 and R2 = 0.1 min-1) with a different number of sampling points: (a) selected time points; (b) Bayesian D-criterion.


Figure 5. Experimental designs returned by the three methods (R1 = 0.9 min-1 and R2 = 0.9 min-1) with a different number of sampling points: (a) selected time points; (b) Bayesian D-criterion.

Table 1. Values of the Parameters

type                      variable  nominal value  range    unit
parameter for estimation  k1        50             25-100   h-1
parameter for estimation  k2        100            50-200   h-1
parameter for estimation  k3        100            50-200   h-1
parameter for estimation  k4        10             5-20     L mol-1 h-1
design variable           u                        0-100    h-1
design variable           CA0                      0-5      mol L-1
design variable           CB0                      0-5      mol L-1
constant                  CAf       10                      mol L-1

A comparison of the experimental designs returned by the three sensitivities is performed by selecting the optimal sampling points based on the D-optimality criterion of the sensitivity matrix shown in eq 20. The candidate sampling points were chosen every 0.2 min for a time span from 0 to 10 min. At least 2 and at most 50 sampling points were required to estimate the two parameters. The optimal sampling points were computed using the three sensitivity measures for each number of sampling points. The sets of sampling points for small uncertainties are shown in Figure 4a and those for large uncertainties are shown in Figure 5a. For some numbers of sampling points, the results returned by the different methods are identical, and those results are not shown. To evaluate the performance of each method over the entire uncertainty region, the Bayesian D-optimality criterion is calculated for each design.

The Bayesian D-optimality criterion is the mean value of the D-optimality evaluated according to the parameter uncertainty

\phi_{BD}(\xi) = \int \cdots \int \phi_D(\xi, \theta) p(\theta) \prod_i d\theta_i    (50)

where ξ denotes an experimental design, φD(ξ,θ) is the D-criterion of the local sensitivity matrix evaluated at a parameter point θ for the given design, and p(θ) is the density function of the parameters. The value of φD(ξ,θ) assesses the design in a neighborhood of the parameter value θ, and the mean value shown in eq 50 describes the overall performance of a design over the entire uncertainty region. The Bayesian criterion is a widely used approach to evaluate a design under uncertainty and is generally acknowledged to be superior to the criterion value at only one given point.4,10,15 The Bayesian criteria of the designs computed by the three sensitivity analysis techniques for small uncertainties are shown in Figure 4b. Since the parameter uncertainty is negligible, the Bayesian criterion is close to the local D-criterion at the nominal point, i.e., the design based on the local sensitivity matrix is near optimal. The design based upon global sensitivity analysis via quasi-linearization achieves approximately the same performance as the local design since the global sensitivity reduces to the local sensitivity. However, the design by global sensitivity analysis using conditional variances returns a smaller value of the Bayesian criterion.
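Equation 50 can be approximated by averaging the local D-criterion over samples drawn from the parameter prior. The following compact sketch reuses the batch-reactor model of eq 48 and assumes a uniform prior, two hypothetical two-point sampling designs, and a finite-difference local sensitivity; none of these specific choices are taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(6)

def C_B(t, k1, k2):
    return k1 / (k2 - k1) * (np.exp(-k1 * t) - np.exp(-k2 * t))

def local_S(t, k1, k2, h=1e-6):
    """Local sensitivity matrix (eq 2) of C_B w.r.t. k1, k2, here by central differences."""
    s1 = (C_B(t, k1 + h, k2) - C_B(t, k1 - h, k2)) / (2 * h)
    s2 = (C_B(t, k1, k2 + h) - C_B(t, k1, k2 - h)) / (2 * h)
    return np.column_stack([s1, s2])

def bayesian_D(t_design, R, n=500):
    """Eq 50: mean of det(S^T S) over the uniform prior k_i ~ U(1 - R, 1 + R)."""
    vals = []
    for _ in range(n):
        k1, k2 = rng.uniform(1 - R, 1 + R, 2)
        S = local_S(t_design, k1, k2)
        vals.append(np.linalg.det(S.T @ S))
    return np.mean(vals)

design_a = np.array([0.6, 3.0])     # hypothetical sampling times (min)
design_b = np.array([1.0, 1.2])
for R in (0.1, 0.9):
    print(R, bayesian_D(design_a, R), bayesian_D(design_b, R))
```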


The results of the Bayesian criterion for large uncertainties are shown in Figure 5b. The design based upon global sensitivity analysis via quasi-linearization returns the best performance while the design based upon local sensitivity analysis returns the smallest criterion value. The design by local sensitivity analysis achieves the best performance when the true parameters are close to the nominal parameter values. However, if the parameter uncertainty is significant, then the best design at one point can be the worst at another point, and on average, the local design is suboptimal. The designs by global sensitivity analysis return better results than the local design since the parameter uncertainty is taken into account.

To verify the significance of the difference in the mean criterion values shown in Figures 4b and 5b, a hypothesis test is performed

H_0: m_A - m_B = 0 \quad \text{against} \quad H_1: m_A - m_B > 0    (51)

where mA and mB are the mean values of method A and method B, respectively. In this case the subscript A denotes the design by the quasi-linearization method while the subscript B denotes the design by the local method or the design by the variance-based method. The P-value of the test for every case is close to zero, which indicates that the difference between the mean values is significant.

4.3. Reactor with van de Vusse Reaction Kinetics. The second case study deals with an isothermal continuously stirred tank reactor (CSTR) in which a van de Vusse reaction is taking place55

A \underset{k_2}{\overset{k_1}{\rightleftharpoons}} B \xrightarrow{k_3} C, \qquad A \xrightarrow{k_4} D

The model consisting of the component balances for species A and B is given by

\dot{C}_A = -k_1 C_A + k_2 C_B - k_4 C_A^2 + u(C_{Af} - C_A)
\dot{C}_B = k_1 C_A - (k_2 + k_3) C_B - u C_B
y = C_A

Figure 6. Optimal input profile returned by (a) local design and (b) global design. (c) Distribution of differences in the variance of estimated parameters by the two designs.

The objective is to design a profile of the input u and the initial conditions CA0 and CB0 to generate an output y so that the kinetic parameters k1, k2, k3, and k4 can be accurately estimated. The nominal values of the kinetic parameters were taken from the literature56,57 and are listed in Table 1. The kinetic parameters for estimation were assumed to be log-uniformly distributed from 50% of the nominal value to 200% of the nominal value, where the nominal value is the mean value. The A-optimality criterion shown in eq 21 is used to find the optimal experimental condition. This criterion minimizes the sum of the variances of the estimated parameters. Since the parameters have different units, they are normalized by dividing them by their nominal values, θi = ki/k̄i, i = 1, ..., 4. After normalization, all parameters are dimensionless and are distributed from 0.5 to 2 with a mean equal to 1. The data for estimation were generated by adding Gaussian distributed noise with zero mean and variance σ2 = 0.01 to the output. The output was sampled every 0.01 h in the range from 0 to 0.5 h. The input was assumed to be piecewise constant over the time interval, and during each 0.05 h the input was fixed at some level. The range of the initial values was chosen to be from 0 to 5 mol L-1.

Figure 6a shows the optimal profiles of the input according to the A-optimality criterion computed by the local sensitivity analysis, and Figure 6b shows the profile computed by the global sensitivity analysis via quasi-linearization. There are distinct differences between the two input profiles. The initial values returned by the two designs are identical, with CA0 = 0 and CB0 = 5 mol L-1. The A-optimal design minimizes the variance of the estimated parameters. If the true parameter values are identical to the nominal values, then the local design is optimal.


To calculate the variance of the estimated parameter values, 100 data sets were generated by adding different noise signals to the output, and the parameters were estimated for each data set. The variance of the parameters is computed from the estimated parameter values. The variance of the parameters for the design using local sensitivity analysis is 0.0769 while the variance of the parameters for the design involving global sensitivity analysis is 0.0795. However, if the true parameter values are not close to the nominal values, then the design by local sensitivity analysis may return poorer results than the design by global sensitivity analysis. To illustrate the effect of parameter uncertainty on the design, 100 parameter values were sampled over the uncertainty range. The averaged variance returned by the local sensitivity analysis experimental design is 0.1108, and the averaged variance returned by the global sensitivity analysis experimental design is 0.1039. Figure 6c shows the distribution of the differences of the variances of estimated parameters between the two designs. It can be seen that the design using global sensitivity analysis returns on average smaller variances than the design based upon local sensitivity analysis.

5. Conclusions

Local sensitivity analysis is a widely used technique in experimental design; however, the dependence of the sensitivity results on the parameter values makes the design only valid in a neighborhood of the nominal parameter values. Global sensitivity analysis does not have this drawback and provides a promising alternative for experimental design. However, most existing global sensitivity analysis techniques do not reduce to local sensitivity analysis procedures, even if the model under investigation is linear. As a result, most applications of global sensitivity analysis deal with qualitative experimental design, i.e., determination of important parameters. This work presented a global sensitivity analysis technique that can, under appropriate conditions, reduce to a local sensitivity analysis method. The technique is derived via quasi-linearization of the nonlinear model, and the parameter uncertainty is explicitly taken into account in the calculation of the global sensitivity. This technique is then incorporated into a quantitative experimental design procedure as it represents an extension of an existing local sensitivity analysis procedure. Existing optimal design criteria, such as the D-optimality criterion or the A-optimality criterion, can be applied to the global sensitivity matrix to select optimal sampling points and determine the optimal input profile. It was shown in case studies that the design based on global sensitivity analysis outperforms the design based on local sensitivity analysis if the entire uncertainty space of the parameters is considered.

Appendix

Construction of a set of rationally linear independent numbers of order M is a key factor of the quasi-Monte Carlo method presented in section 2.2. The value of M determines the degree of the approximation of rationally linear independence shown in eq 16 and determines the accuracy of the computed multidimensional integral from eq 11. The accuracy of the integral increases with larger values of M, while the computational effort also increases at the same time. Therefore it is not possible to compute an approximation for very large values of M, and it is recommended to use M equal to 4 in practice.36,37 Under this condition, i.e., M = 4, the rationally linear independent numbers in the vector ω are listed in Table A1. The table lists the number sets up to a size of 20. The sets for sizes up to 50 can be found in the literature,36,37 where a trial and error method is also provided to construct sets for sizes larger than 50.

Table A1. Rationally Linear Independent Numbers of the 4th Order

size  rationally linear independent numbers in ω
2     3,7
3     1,9,13
4     5,11,19,23
5     11,21,27,35,39
6     1,21,31,37,45,49
7     17,39,59,69,75,83,87
8     23,55,77,97,107,113,121,125
9     19,59,91,113,133,143,149,157,161
10    25,63,103,135,157,177,187,193,201,205
11    41,67,105,145,177,199,219,229,235,243,247
12    31,87,113,151,191,223,245,265,275,281,289,293
13    23,85,141,167,205,245,277,299,319,329,335,343,347
14    87,133,195,251,277,315,355,387,409,429,439,445,453,457
15    67,143,189,251,307,333,371,411,443,465,485,495,501,509,513
16    73,169,245,291,353,409,435,473,513,545,567,587,597,603,611,615
17    85,145,241,317,363,425,481,507,545,585,617,639,659,669,675,683,687
18    143,229,289,385,461,507,569,625,651,689,729,761,783,803,813,819,827,831
19    149,275,361,421,517,593,639,701,757,783,821,861,893,915,935,945,951,959,963
20    99,233,359,445,505,601,677,723,785,841,867,905,945,977,999,1019,1029,1035,1043,1047

Acknowledgment

The authors gratefully acknowledge partial financial support from the National Science Foundation (Grant CBET# 0941313) and the ACS Petroleum Research Fund (Grant PRF# 48144AC9).

Literature Cited

(1) Steinberg, D. M.; Hunter, W. G. Experimental design - Review and comment. Technometrics 1984, 26, 71–97.
(2) Walter, E.; Pronzato, L. Qualitative and quantitative experiment design for phenomenological models - A survey. Automatica 1990, 26, 195–213.
(3) Ljung, L. System Identification: Theory for the User, 2nd ed.; Prentice Hall PTR: Upper Saddle River, 1999.
(4) Atkinson, A. C.; Donev, A. N.; Tobias, R. D. Optimum Experimental Designs, with SAS; Oxford University Press: Oxford, 2007.
(5) Franceschini, G.; Macchietto, S. Model-based design of experiments for parameter precision: State of the art. Chem. Eng. Sci. 2008, 63, 4846–4872.
(6) Balsa-Canto, E.; Alonso, A. A.; Banga, J. R. Computational procedures for optimal experimental design in biological systems. IET Syst. Biol. 2008, 2, 163–172.


(7) Kreutz, C.; Timmer, J. Systems biology: experimental design. FEBS J. 2009, 276, 923–942.
(8) Box, G. E. P.; Lucas, H. L. Design of experiments in non-linear situations. Biometrika 1959, 46, 77–90.
(9) Box, G. E. P.; Hunter, W. G. Useful method for model-building. Technometrics 1962, 4, 301–318.
(10) Asprey, S. P.; Macchietto, S. Designing robust optimal dynamic experiments. J. Process Control 2002, 12, 545–556.
(11) Dette, H.; Melas, V. B.; Pepelyshev, A.; Strigul, N. Robust and efficient design of experiments for the Monod model. J. Theor. Biol. 2005, 234, 537–550.
(12) Pronzato, L.; Walter, E. Robust experiment design via maximin optimization. Math. Biosci. 1988, 89, 161–176.
(13) Goodwin, G. C.; Aguero, J. C.; Welsh, J. S.; Yuz, J. I.; Adams, G. J.; Rojas, C. R. Robust identification of process models from plant data. J. Process Control 2008, 18, 810–820.
(14) Pronzato, L.; Walter, E. Robust experiment design via stochastic approximation. Math. Biosci. 1985, 75, 103–120.
(15) Chaloner, K.; Verdinelli, I. Bayesian experimental design: A review. Stat. Sci. 1995, 10, 273–304.
(16) Frey, H. C.; Patil, S. R. Identification and review of sensitivity analysis methods. Risk Anal. 2002, 22, 553–578.
(17) Cacuci, D. G.; Ionescu-Bujor, M. A comparative review of sensitivity and uncertainty analysis of large-scale systems - II: Statistical methods. Nucl. Sci. Eng. 2004, 147, 204–217.
(18) Saltelli, A.; Tarantola, S.; Campolongo, F. Sensitivity analysis as an ingredient of modeling. Stat. Sci. 2000, 15, 377–395.
(19) Saltelli, A.; Ratto, M.; Tarantola, S.; Campolongo, F. Sensitivity analysis practices: Strategies for model-based inference. Reliab. Eng. Syst. Saf. 2006, 91, 1109–1125.
(20) Marino, S.; Hogue, I. B.; Ray, C. J.; Kirschner, D. E. A methodology for performing global uncertainty and sensitivity analysis in systems biology. J. Theor. Biol. 2008, 254, 178–196.
(21) Cho, K. H.; Shin, S. Y.; Kolch, W.; Wolkenhauer, O. Experimental design in systems biology, based on parameter sensitivity analysis using a Monte Carlo method: A case study for the TNF alpha-mediated NF-kappa B signal transduction pathway. Simul.-Trans. Soc. Model. Simul. Int. 2003, 79, 726–739.
(22) Zi, Z. K.; Cho, K. H.; Sung, M. H.; Xia, X. F.; Zheng, J. S.; Sun, Z. R. In silico identification of the key components and steps in IFN-gamma induced JAK-STAT signaling pathway. FEBS Lett. 2005, 579 (5), 1101–1108.
(23) Sidoli, F. R.; Mantalaris, A.; Asprey, S. P. Toward global parametric estimability of a large-scale kinetic single-cell model for mammalian cell cultures. Ind. Eng. Chem. Res. 2005, 44, 868–878.
(24) Kontoravdi, C.; Asprey, S. P.; Pistikopoulos, E. N.; Mantalaris, A. Application of global sensitivity analysis to determine goals for design of experiments: An example study on antibody-producing cell cultures. Biotechnol. Prog. 2005, 21, 1128–1135.
(25) Chu, Y.; Jayaraman, A.; Hahn, J. Parameter sensitivity analysis of IL-6 signalling pathways. IET Syst. Biol. 2007, 1, 342–352.
(26) King, J. M. P.; Titchener-Hooker, N. J.; Zhou, Y. Ranking bioprocess variables using global sensitivity analysis: a case study in centrifugation. Bioprocess. Biosyst. Eng. 2007, 30, 123–134.
(27) Brockmann, D.; Rosenwinkel, K. H.; Morgenroth, E. Practical identifiability of biokinetic parameters of a model describing two-step nitrification in biofilms. Biotechnol. Bioeng. 2008, 101, 497–514.
(28) Chhatre, S.; Francis, R.; Newcombe, A. R.; Zhou, Y. H.; Titchener-Hooker, N.; King, J.; Keshavarz-Moore, E. Global Sensitivity Analysis for the determination of parameter importance in bio-manufacturing processes. Biotechnol. Appl. Biochem. 2008, 51, 79–90.
(29) Yue, H.; Brown, M.; He, F.; Jia, J. F.; Kell, D. B. Sensitivity Analysis and Robust Experimental Design of a Signal Transduction Pathway System. Int. J. Chem. Kinet. 2008, 40, 730–741.
(30) Martinez, E. C.; Cristaldi, M. D.; Grau, R. J. Design of Dynamic Experiments in Modeling for Optimization of Batch Processes. Ind. Eng. Chem. Res. 2009, 48, 3453–3465.
(31) Rabitz, H.; Kramer, M.; Dacol, D. Sensitivity analysis in chemical kinetics. Annu. Rev. Phys. Chem. 1983, 34, 419–461.

(32) Turanyi, T. Sensitivity analysis of complex kinetic systems - Tools and applications. J. Math. Chem. 1990, 5, 203–248.
(33) Saltelli, A.; Ratto, M.; Andres, T.; Campolongo, F.; Cariboni, J.; Gatelli, D.; Saisana, M.; Tarantola, S. Global Sensitivity Analysis. The Primer; John Wiley & Sons: England, 2008.
(34) McKay, M. D. Nonparametric variance-based methods of assessing uncertainty importance. Reliab. Eng. Syst. Saf. 1997, 57, 267–279.
(35) Sobol, I. M. Global sensitivity indices for nonlinear mathematical models and their Monte Carlo estimates. Math. Comput. Simul. 2001, 55, 271–280.
(36) Cukier, R. I.; Levine, H. B.; Shuler, K. E. Non-linear sensitivity analysis of multi-parameter model systems. J. Comput. Phys. 1978, 26, 1–42.
(37) McRae, G. J.; Tilden, J. W.; Seinfeld, J. H. Global sensitivity analysis - A computational implementation of the Fourier amplitude sensitivity test (FAST). Comput. Chem. Eng. 1982, 6, 15–25.
(38) Saltelli, A.; Tarantola, S.; Chan, K. P. S. A quantitative model-independent method for global sensitivity analysis of model output. Technometrics 1999, 41, 39–56.
(39) Robert, C. P.; Casella, G. Monte Carlo Statistical Methods, 2nd ed.; Springer: New York, 2004.
(40) Niederreiter, H. Quasi-Monte Carlo methods and pseudo-random numbers. Bull. Amer. Math. Soc. 1978, 84, 957–1041.
(41) James, F. A review of pseudorandom number generators. Comput. Phys. Commun. 1990, 60, 329–344.
(42) Kuipers, L.; Niederreiter, H. Uniform Distribution of Sequences; Wiley: New York, 1974.
(43) Weyl, H. Mean motion. Am. J. Math. 1938, 60, 889–896.
(44) Kiefer, J. Optimum experimental designs. J. R. Stat. Soc. Ser. B-Stat. Methodol. 1959, 21, 272–319.
(45) Wynn, H. P. Results in the theory and construction of D-optimum experimental designs. J. R. Stat. Soc. Ser. B-Stat. Methodol. 1972, 34, 133–147.
(46) St. John, R. C.; Draper, N. R. D-optimality for regression designs - Review. Technometrics 1975, 17, 15–23.
(47) Elfving, G. Optimum allocation in linear regression theory. Ann. Math. Stat. 1952, 23, 255–262.
(48) Goodwin, G. C.; Payne, R. L. Dynamic System Identification: Experiment Design and Data Analysis; Academic Press: New York, 1977.
(49) Hengl, S.; Kreutz, C.; Timmer, J.; Maiwald, T. Data-based identifiability analysis of non-linear dynamical models. Bioinformatics 2007, 23, 2612–2618.
(50) Balsa-Canto, E.; Rodriguez-Fernandez, M.; Banga, J. R. Optimal design of dynamic experiments for improved estimation of kinetic parameters of thermal degradation. J. Food Eng. 2007, 82, 178–188.
(51) Helton, J. C.; Johnson, J. D.; Sallaberry, C. J.; Storlie, C. B. Survey of sampling-based methods for uncertainty and sensitivity analysis. Reliab. Eng. Syst. Saf. 2006, 91, 10–11.
(52) Gelb, A.; Vander Velde, W. E. Multiple-Input Describing Functions and Nonlinear System Design; McGraw-Hill: New York, 1968.
(53) Vidyasagar, M. Nonlinear Systems Analysis, 2nd ed.; Society for Industrial and Applied Mathematics: Philadelphia, 2002.
(54) Fogler, H. S. Elements of Chemical Reaction Engineering, 4th ed.; Prentice Hall PTR: Upper Saddle River, 2005.
(55) van de Vusse, J. G. Plug-flow type reactor versus tank reactor. Chem. Eng. Sci. 1964, 19, 994–997.
(56) Doyle, F. J.; Ogunnaike, B. A.; Pearson, R. K. Nonlinear model-based control using second-order Volterra models. Automatica 1995, 31, 697–714.
(57) Hahn, J.; Edgar, T. F. A gramian based approach to nonlinearity quantification and model classification. Ind. Eng. Chem. Res. 2001, 40, 5724–5731.

Received for review June 17, 2009
Revised manuscript received September 27, 2009
Accepted September 29, 2009

IE9009827