Nonlinear Chance-Constrained Process Optimization under Uncertainty

Ind. Eng. Chem. Res. 2002, 41, 3621-3629

Moritz Wendt, Pu Li,* and Günter Wozny
Institut für Prozess- und Anlagentechnik, Technische Universität Berlin, 10623 Berlin, Germany

Optimization under uncertainty is considered necessary for robust process design and operation. In this work, a new approach is proposed to solve a class of nonlinear optimization problems under uncertainty in which some dependent variables are to be constrained with a predefined probability. Such problems are called optimization under chance constraints. By employing the monotonicity of these variables with respect to one of the uncertain variables, the output feasible region is mapped to a region of the uncertain input variables. Thus, the probability of holding the output constraints can be obtained simply by integration of the probability density function of the multivariate uncertain variables. Collocation on finite elements is used for the numerical integration, through which the sensitivities of the chance constraints can be computed as well. The proposed approach is applied to the optimization of two process engineering problems under various uncertainties.

1. Introduction

Optimization under uncertainty is considered necessary in process design and operation. Generally speaking, there are two types of uncertainties: internal uncertainties, which represent the unavailability of knowledge of a process (e.g., model parameters), and external uncertainties, which come from outside but have impacts on the process and are mainly determined by market conditions (e.g., feedstock condition and product demand). At the design stage, the conventional way to accommodate these uncertainties is overdesign of the process equipment. At the operation stage, processes are usually run at an operating point defined by an overestimation of the uncertainties. These heuristic rules incur much higher costs than necessary. In most prevailing deterministic optimization approaches, the expected (nominal) values of these uncertainties are employed.
In reality, however, the uncertain variables deviate from their expected values, and thus some specifications or output constraints may be violated when the a priori optimization results are applied. Therefore, optimization under the uncertainties should be considered. Two methods have been used for representing uncertain variables: discrete and continuous distributions. In the former, the bounded uncertain variables are discretized into multiple intervals such that each interval represents a scenario with an approximated discrete distribution.1-4 A so-called multiperiod optimization problem is thus formulated. The latter method considers the continuous stochastic distribution of the uncertain variables, for which a multivariate numerical integration method must be chosen. This leads to a stochastic programming problem. An approximated integration through a sampling scheme5 and a direct numerical integration6 have been used.

* Corresponding author. Tel.: +49-30-31423418. Fax: +49-30-31426915. E-mail: [email protected].

To solve an optimization problem under uncertainty, some special treatments of the objective function and the

equality and inequality constraints have to be considered in order to relax the stochastic problem to an equivalent nonlinear programming (NLP) problem, so that it can be solved by existing optimization routines. Minimizing the expected value of the objective function is usually adopted.7,8 To handle inequality constraints under uncertainty, recourse and chance constraints are the two main approaches.9 In the recourse approach, violations of the constraints are allowed and are compensated for by a penalty term in the objective function. This approach is well suited to process planning problems under demand uncertainty.2,10,11 Recent algorithmic developments have made it possible to solve large-scale linear and mixed-integer linear process planning problems.12 However, the application of recourse is limited, because in many cases a suitable measure of compensation for the constraint violation is not available. In such cases, the approach of chance constraints can be used, in which a user-defined probability level of holding the constraints (reliability of being feasible) is ensured. In the framework of a linear system, the solution of a problem with single (separate) chance constraints can be derived simply by a coordinate transformation. In the case of a linear problem with a joint chance constraint, the relaxed problem is convex if the probability distribution function (PDF) of the uncertain variables is quasi-concave.13 Thus, it can be solved with an NLP solver. The probabilities and gradients of constraints composed of stochastic variables with a multivariate normal distribution can be computed using an efficient simulation approach.9 These results have been applied to linear model predictive control.14,15 However, methods for solving chance-constrained optimization problems for nonlinear processes under uncertainty have not been well developed.
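The coordinate transformation just mentioned for a linear single chance constraint has a simple closed form: for a constraint c^T ξ ≤ b with ξ ~ N(µ, Σ), we have P{c^T ξ ≤ b} = Φ((b - c^T µ)/√(c^T Σ c)), so P ≥ α relaxes to the deterministic inequality c^T µ + Φ^{-1}(α) √(c^T Σ c) ≤ b. A minimal sketch of this standard result (the numbers c, µ, Σ, and b are invented for illustration, not from this paper):

```python
import math
import random
from statistics import NormalDist

# Illustrative (invented) data: constraint c^T xi <= b with xi ~ N(mu, Sigma)
c = [1.0, 2.0]
mu = [1.0, 2.0]
Sigma = [[0.04, 0.03],
         [0.03, 0.09]]
b = 6.0

mean = sum(ci * mi for ci, mi in zip(c, mu))        # c^T mu = 5.0
var = sum(c[i] * Sigma[i][j] * c[j] for i in range(2) for j in range(2))
std = math.sqrt(var)                                # sqrt(c^T Sigma c)

# Exact probability that the constraint holds
p_analytic = NormalDist().cdf((b - mean) / std)

# Deterministic equivalent: feasible at level alpha iff
# c^T mu + Phi^{-1}(alpha) * sqrt(c^T Sigma c) <= b
def feasible(alpha):
    return mean + NormalDist().inv_cdf(alpha) * std <= b

# Monte Carlo cross-check, sampling via a 2x2 Cholesky factor of Sigma
l11 = math.sqrt(Sigma[0][0])
l21 = Sigma[1][0] / l11
l22 = math.sqrt(Sigma[1][1] - l21 * l21)
random.seed(1)
n = 100_000
hits = 0
for _ in range(n):
    z1, z2 = random.gauss(0, 1), random.gauss(0, 1)
    x1 = mu[0] + l11 * z1
    x2 = mu[1] + l21 * z1 + l22 * z2
    hits += c[0] * x1 + c[1] * x2 <= b
p_mc = hits / n
```

Here p_analytic ≈ 0.92, so the constraint is feasible at α = 0.90 but not at α = 0.95, and the Monte Carlo estimate agrees to sampling accuracy. It is exactly this closed form that is unavailable once the outputs depend nonlinearly on ξ.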
The main challenge lies in the difficulty of determining the probability of the output constraints and its sensitivity when there exist nonlinear relations between the uncertain variables and the output constraints. Integration or sampling can be used to address the problem. Generally speaking, sampling in a space is

10.1021/ie010649s CCC: $22.00 © 2002 American Chemical Society Published on Web 06/26/2002


equivalent to multivariate integration. Diwekar and Kalagnanam5 proposed an efficient sampling method. The work of Whiting et al.16,17 concerns sampling the parameters of thermodynamic models for process simulation; the samples are checked against experimental data so that a practically meaningful output distribution is obtained. They consider the correlation of the parameters, but knowledge of the distribution is not required. The integration method, based on the known probability density function (PDF), considers only integrals over the feasible region of the uncertain variables. Moreover, for optimization of such problems, gradient information is required. A numerical gradient computation by sampling requires two sampling passes, which leads to more computation time. The advantage of direct numerical integration of the PDF is that the gradients can be computed simultaneously during the multivariate integration.

In this work, we propose a new approach to solve nonlinear chance-constrained optimization problems under uncertainty. We employ the monotonicity of the constrained output variables with respect to one of the uncertain variables, so that the output feasible region is mapped to a region of the uncertain variables. Through this projection, the probability of holding the output constraints can be obtained simply by integration of the joint density function of the multivariate uncertain variables. Assuming that the uncertain variables have a correlated normal distribution, we use collocation on finite elements for the numerical integration, through which the sensitivities of the chance constraints to the decision variables can be computed as well. A sequential computation scheme is proposed for the nonlinear stochastic optimization. The results of the application to the design and operation of a reactor network and a distillation column demonstrate the scope of the proposed approach.

2. Nonlinear Chance-Constrained Optimization Problem

A general nonlinear minimization problem under uncertainty can be written as

min f(x,u,ξ)
s.t.  g(x,u,ξ) = 0
      h(x,u,ξ) ≤ 0
      x ∈ X, u ∈ U, ξ ∈ Ξ    (1)

where f is the objective function, g and h are the vectors of equality and inequality constraints, and x ∈ X ⊂ R^n, u ∈ U ⊂ R^m, and ξ ∈ Ξ ⊂ R^S are the vectors of state, decision, and uncertain variables, respectively. Here ξ includes both the internal and external uncertain variables. We assume that ξ has a joint PDF F(ξ) such that its integral over the entire uncertain space is

∫_Ξ F(ξ) dξ = 1    (2)

There may exist correlations between these uncertain variables. Historical operating data can be used to estimate their stochastic properties, such as the expected values and the correlation matrix. Because of the existing uncertainties, some modifications to (1) have to be made in order to arrive at a meaningful problem formulation.
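Such an estimation from historical operating data can be sketched as follows (the data series and variable names are hypothetical; sample statistics use the n - 1 convention):

```python
import math

# Hypothetical historical operating data for two uncertain inputs
feed_rate = [36.1, 37.8, 36.9, 37.2, 38.0, 36.5, 37.4, 36.8]
feed_comp = [0.19, 0.21, 0.20, 0.20, 0.21, 0.19, 0.20, 0.20]

def mean(xs):
    return sum(xs) / len(xs)

def std(xs):
    m = mean(xs)
    return math.sqrt(sum((x - m) ** 2 for x in xs) / (len(xs) - 1))

def corr(xs, ys):
    # Sample correlation coefficient r_ij, the entry needed in (19)
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / (len(xs) - 1)
    return cov / (std(xs) * std(ys))

r = corr(feed_rate, feed_comp)   # estimated correlation coefficient
```

The estimated means, standard deviations, and pairwise correlations assembled this way are exactly the quantities the normal-distribution model of section 4 requires.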

Figure 1. Relation between single and joint chance constraints.

We first consider the equality constraints g under uncertainty. In effect, g projects the uncertain variables (inputs) onto the state space (outputs) for some given u. This implies that the required values of the state variables can be computed from the model equations over the space Ξ; i.e., x is a function of u and ξ. By doing so, g is eliminated from (1). Second, a usual representation of the objective function under uncertainty is the minimization of its expected value:

E_Ξ(f(u)) = ∫_Ξ f(u,ξ) F(ξ) dξ    (3)

In the work of Bernardo et al.,6 the computation of this integral is discussed; it is shown that a direct multivariate integration is computationally advantageous over the method of sampling. Third, the inequality constraints under uncertainty have to be considered. In process engineering practice, a very popular form of inequality constraint is the specification (limitation) of some of the state variables (y_i = x_i, i = 1, ..., I, with I ≤ n), which we call output variables. Then, the inequalities in (1) can be written as

y_i(u,ξ) ≤ y_i^SP,  i = 1, ..., I    (4)

where y_i^SP is the specified bound value of an output variable, such as a pressure or temperature limitation of a plant. Holding these constraints is usually critical for the production and safety of a process operation. Because of the existence of the uncertain variables ξ, however, it is impossible to hold (4) with certainty. Thus, a probability level of satisfying (4) is defined to represent the reliability of being feasible. This leads to the form of a joint chance constraint

P{y_i(u,ξ) ≤ y_i^SP, i = 1, ..., I} ≥ α    (5)

or single chance constraints

P_i{y_i(u,ξ) ≤ y_i^SP} ≥ α_i,  i = 1, ..., I    (6)

where P is the probability measure of the given probability space and 0 < α ≤ 1 is a probability level, a parameter defined based on the process operating requirements. A higher value (α → 1) is specified if holding the constraints is more strongly desired. Figure 1 shows the relation between single and joint constraints. The difference between (5) and (6) is that a joint chance constraint represents the reliability of the output feasible region as a whole, while single chance constraints describe the reliability of the individual output feasible regions. It can be seen that joint feasibility implies single feasibility, but the reverse

is not true. If the constraints are related to safety considerations of a process operation, a joint chance constraint may be preferred. Single chance constraints may be used when some output constraints are more critical than the others; in this case, different probability levels can be specified in (6). Based on the above context, the stochastic optimization problem we consider is formulated with (3) as the objective function subject to (5) or (6) as the constraints. Solving the problem means searching for optimal values of the decision variables u that minimize the expected value of the objective function while satisfying the probabilistic constraints, under the condition that the values of the uncertain variables ξ are randomly distributed in their effective region Ξ.

Table 1. Parameters of the Uncertain Inputs in the Illustrative Example

parameter   expected value   standard deviation
ξ1          1.0              0.2
ξ2          2.0              0.3

correlation matrix:
[ 1.0   0.5
  0.5   1.0 ]

Figure 2. Mapping between an uncertain input variable and an output variable.
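The distinction drawn in Figure 1 between (5) and (6) can be checked numerically. A Monte Carlo sketch using Table 1's distribution parameters as the uncertain inputs (the two bounds, 1.2 and 2.3, are invented for illustration, each one standard deviation above the mean):

```python
import math
import random

# Table 1 data: mu = (1.0, 2.0), sigma = (0.2, 0.3), correlation 0.5
mu1, mu2 = 1.0, 2.0
s1, s2, r = 0.2, 0.3, 0.5

# Cholesky factor of the 2x2 covariance matrix
l11 = s1
l21 = r * s2                       # = (r*s1*s2) / l11
l22 = s2 * math.sqrt(1 - r * r)

bound1, bound2 = 1.2, 2.3          # hypothetical output bounds
random.seed(2)
n = 100_000
n1 = n2 = njoint = 0
for _ in range(n):
    z1, z2 = random.gauss(0, 1), random.gauss(0, 1)
    x1 = mu1 + l11 * z1
    x2 = mu2 + l21 * z1 + l22 * z2
    ok1, ok2 = x1 <= bound1, x2 <= bound2
    n1 += ok1
    n2 += ok2
    njoint += ok1 and ok2
p1, p2, pjoint = n1 / n, n2 / n, njoint / n
```

Each single probability comes out near Φ(1) ≈ 0.84, while the joint probability is smaller than either: joint feasibility implies single feasibility, but not conversely.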

3. Solution Approach

We first consider the relaxation of the defined stochastic optimization problem to an NLP problem, so that it can be solved by a standard NLP solver such as SQP. It is well-known that solution approaches to a nonlinear problem can be classified into simultaneous approaches, in which both the state and the decision variables are included in the NLP, and sequential approaches, in which an integration step is used to compute the state variables x so that only the decision variables u are handled by the NLP. Because a multivariate integration over the domain of the uncertain variables is required for evaluating the objective function and the inequality constraints, a sequential approach is preferred in the case of stochastic optimization. In a sequential NLP framework, the values of both the constraints and their sensitivities to the decision variables are required. While the calculation of the objective function (3) by straightforward multivariate integration has been proposed in previous studies,6 no reports on calculating the probability values of the nonlinear chance constraints (5) and (6) have been found in the open literature. The challenge lies in the difficulty of determining the output constraints under a nonlinear propagation of the uncertain input variables ξ; the key obstacle is obtaining the probability distribution of the output variables.18

Projection of the Feasible Region. In this work, we avoid directly computing the output probability distribution. Instead, we derive an equivalent representation of the probability by mapping it to an integration over a region of the random inputs. Consider the confined feasible region formed by the nonlinear projection from the region of the uncertain variables at some given u. For a practical engineering problem, it is realistic to find a monotone relation between an output variable y_i ∈ Y_i and one of the uncertain input variables ξ_S ∈ Ξ_S, where Ξ_S is a subspace of Ξ. When this monotone relation is denoted as y_i = F(ξ_S), the mapping between y_i and ξ_S can be schematically depicted as in Figure 2. In Figure 2, the right and left circles represent the whole regions Y_i and Ξ_S, respectively. The points are realizations of the variables based on their distribution functions. A point in Ξ_S leads to a point in Y_i through the projection y_i = F(ξ_S). Because of the monotonicity, a point y_i leads to a unique ξ_S through the reverse projection ξ_S = F^{-1}(y_i). The shaded area in the right circle is the output feasible region Y′_i such that

P{y_i ≤ y_i^SP} ≥ α    (7)

is satisfied, while the shaded area Ξ′_S in the left circle is the corresponding region of the random variable, with the bound ξ_S^L. It can be easily seen that the representation

P{ξ_S ≤ ξ_S^L} ≥ α    (8)

is the same as (7). This implies that the probability of holding the output constraint can be computed by integration over the corresponding region of the uncertain variable. It should be noted that all uncertain variables have to be taken into account when computing P{y_i ≤ y_i^SP}. In addition, the value of the decision variables also has an impact on the projected region. The bound ξ_S^L thus changes with the realization of the individual uncertain variables ξ_s (s = 1, ..., S - 1) and the value of u, i.e.,

ξ_S^L = F^{-1}(ξ_1, ..., ξ_{S-1}, y_i^SP, u)    (9)

and this leads to the following representation:

P{yi e ySP i } )

∫-∞∞‚‚‚∫-∞∞∫-∞ξ F(ξ1,...,ξS-1,ξS) dξS dξS-1 ... dξ1 L S

(10)

The right-hand side of (10) is a multivariate integration over the whole region of Ξ except for the part beyond the limiting value ξ_S^L. The result of this integration is a value of the distribution function of the random inputs; thus, the probability of the output constraint can be obtained by evaluating that distribution function. A numerical integration of (10) is required when a joint distribution function is taken into account. Moreover, because of the model complexity, an explicit expression for (9) is usually not available. Therefore, a Newton-Raphson step has to be used to compute the bound value ξ_S^L for given y_i^SP, u, and ξ_1, ..., ξ_{S-1} in each integrated subspace. These points are discussed in more detail in the next section.

Illustrative Example. The above approach is illustrated with the following simple problem with two uncertain variables that have a joint normal distribution. The data describing the stochastic properties of the random inputs are given in Table 1. The output variable has a nonlinear relation with these variables:


y = exp(ξ1 + ξ2)

and the value P{y ≤ 30} is to be calculated. We consider ξ2, which is monotone to y. From the reverse projection ξ2 = -ξ1 + ln y, the bound of the uncertain input region is

ξ2^L = -ξ1 + ln 30

Then the output constraint probability will be

P{y ≤ 30} = ∫_{-∞}^{∞} ∫_{-∞}^{-ξ1+ln 30} φ2(ξ1,ξ2) dξ2 dξ1

where φ2 is the density function of the two-dimensional normal distribution. Its integration results in the value P{y ≤ 30} = 0.83. Figure 3 shows the mapping between the input uncertainties and the output variable, created by Monte Carlo sampling with 1000 points. It can be seen that, although it is difficult to describe the output distribution induced by the nonlinear model, the probability of holding the output constraint can be determined: about 83% of the points lie under the bounding lines in both diagrams.

Computing the Gradients of the Probability. In an NLP framework, we need to compute the gradients of the output constraint probability with respect to the decision variables u. From (9) and (10), u affects the value of the probability through the integration bound of the corresponding region of the random inputs. Thus, the gradients can be computed as follows:

∂P{y_i ≤ y_i^SP}/∂u = ∫_{-∞}^{∞} ··· ∫_{-∞}^{∞} F(ξ_1, ..., ξ_{S-1}, ξ_S^L) (∂ξ_S^L/∂u) dξ_{S-1} ··· dξ_1    (11)

where the gradient vector ∂ξ_S^L/∂u can also be obtained in the Newton-Raphson step during the computation of ξ_S^L. Similar to (10), the right-hand side of (11) can be computed by a numerical integration in the input region. Until now we have discussed the computation of the probability and its sensitivities for a single chance constraint. For a joint constraint defined in (5), one should first analyze the relation between the constrained outputs and the uncertain inputs. Again, one of the uncertain variables ξ_S, monotone with all output variables, may be found, such that

P{y_i ≤ y_i^SP, i = 1, ..., I} = P{ξ_S^l ≤ ξ_S ≤ ξ_S^L}    (12)

where ξ_S^L and ξ_S^l are the upper and lower bounds of the uncertain input region, respectively. This region is formed by the cutting planes mapped from y_i(u,ξ) ≤ y_i^SP (i = 1, ..., I). Then the joint probability (12) can be computed by

P{y_i ≤ y_i^SP, i = 1, ..., I} = ∫_{-∞}^{∞} ··· ∫_{-∞}^{∞} ∫_{ξ_S^l}^{ξ_S^L} F(ξ_1, ..., ξ_{S-1}, ξ_S) dξ_S dξ_{S-1} ··· dξ_1    (13)
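The illustrative example above can be reproduced numerically. Because ξ1 + ξ2 is itself normally distributed (mean 3, variance 0.2² + 0.3² + 2·0.5·0.2·0.3 = 0.19), P{y ≤ 30} has a closed form in this special case, which makes the example a convenient check both on a Newton-Raphson bound computation in the spirit of (9) and on a Monte Carlo estimate; a sketch:

```python
import math
import random
from statistics import NormalDist

# Table 1 data for the example y = exp(xi1 + xi2)
mu1, mu2, s1, s2, r = 1.0, 2.0, 0.2, 0.3, 0.5
y_sp = 30.0

# Closed form: xi1 + xi2 ~ N(3, 0.19), so P{y <= 30} = Phi((ln 30 - 3)/sqrt(0.19))
var_sum = s1**2 + s2**2 + 2 * r * s1 * s2
p_analytic = NormalDist().cdf((math.log(y_sp) - (mu1 + mu2)) / math.sqrt(var_sum))

# Newton-Raphson for the bound xi2_L at a fixed xi1, as in (9):
# solve exp(xi1 + xi2) - y_sp = 0 for xi2.  Here an explicit inverse
# exists (xi2_L = -xi1 + ln 30), so the iteration can be verified.
def newton_bound(xi1, x0=0.0):
    x = x0
    for _ in range(50):
        f = math.exp(xi1 + x) - y_sp
        x -= f / math.exp(xi1 + x)   # f' = exp(xi1 + x)
    return x

xi2_bound = newton_bound(1.0)        # should equal -1.0 + ln 30

# Monte Carlo cross-check of the probability
l21 = r * s2
l22 = s2 * math.sqrt(1 - r * r)
random.seed(3)
n = 100_000
hits = 0
for _ in range(n):
    z1, z2 = random.gauss(0, 1), random.gauss(0, 1)
    xi1 = mu1 + s1 * z1
    xi2 = mu2 + l21 * z1 + l22 * z2
    hits += math.exp(xi1 + xi2) <= y_sp
p_mc = hits / n
```

The closed form gives p_analytic ≈ 0.82, consistent with the ≈0.83 the paper obtains by numerical integration, and the Newton bound matches the analytic inverse -ξ1 + ln 30.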

The whole computational strategy proposed for solving the nonlinear chance-constrained optimization problem is a sequential NLP approach that can be depicted in Figure 4. SQP is chosen for searching for the decision

variables u in the NLP solver. The values of the objective function, the probabilistic constraints, and their sensitivities are computed by the multivariate integration, while the upper and lower bounds of the integration region are calculated by solving the model equations with the Newton-Raphson algorithm. It should be noted that this strategy does not depend on the distribution of the random variables. Finally, the issue of feasibility is important for the chance-constrained problem. The predefined probability level α may be higher than what is reachable, in which case the SQP algorithm cannot find a feasible solution. A straightforward way to address this problem is to compute the maximum reachable probability before the optimization, by replacing the original objective function with

max P{y_i ≤ y_i^SP, i = 1, ..., I}    (14)

and then run the above computation. The two necessary conditions to use this approach are the knowledge of the PDF and a monotone relation between the constrained output variable and one of the uncertain variables or parameters. The solution approach does not depend on the form of distribution of the uncertain variables, provided the multivariate integration can be made. Whether they are correlated or not has no influence on the approach either. It should be noted that the correlation between the variables will not make the problem simpler, but rather it leads to a much more complicated integration because of the overlapping of the spaces of uncertain variables. If the variables are uncorrelated (independent), the multivariate integration becomes repeated integrations of individual functions with a single variable. We assume a correlated normal distribution because the uncertain variables in practical problems are usually correlated. Until now, only inequality constraints in which some state variables are restricted have been considered as output probability constraints. In practice, there may be other forms of inequality constraints. Corresponding to (1), a general form of probability constraint for such inequalities can be described as

P{h(x,u,ξ) ≤ 0} ≥ α    (15)

To deal with this constraint by the proposed approach, a new variable z can be introduced as

z = h(x,u,ξ)    (16)

which can be considered as an output variable. This equation will be added into the equation system, so that the constraint can be expressed as

P{z ≤ 0} ≥ α    (17)

which is the same form as (7).

Figure 3. Mapping between the input (left) and output (right) variables in an illustrative example.

Figure 4. Computational strategy for solving chance-constrained problems.

4. Numerical Integration

In this section, we derive a numerical approach to the multivariate integration, using collocation on finite elements, for computing the values of the chance constraints (13). This computation depends decisively on the distribution of the uncertain variables concerned. In process engineering, many uncertain variables may be approximately treated as normally distributed. According to the central limit theorem,19 if a random variable is generated as the sum of the effects of many independent random parameters, its distribution approaches a normal distribution regardless of the distribution of each individual parameter. An example in the work of Papoulis20 illustrates a surprisingly good approximation of a normal distribution by the sum of three uniformly distributed parameters (each of which has a distribution very different from Gaussian). In practical engineering problems, the mean values and the correlation matrix of the variables can be obtained relatively easily, but it is difficult to determine their distribution. Thus, we derive an integration method for the case of normally distributed random variables, which have the PDF

φ_S(ξ) = (2π)^{-S/2} |Σ|^{-1/2} exp[-(1/2)(ξ - µ)^T Σ^{-1} (ξ - µ)]    (18)

where µ and Σ are the known expected values and covariance matrix of these stochastic variables, which have the following form:

µ = [µ_1, µ_2, ..., µ_S]^T

Σ = [ σ_1²           σ_1σ_2 r_12    ···   σ_1σ_S r_1S
      σ_1σ_2 r_12    σ_2²           ···   σ_2σ_S r_2S
      ···            ···            ···   ···
      σ_1σ_S r_1S    σ_2σ_S r_2S    ···   σ_S²        ]    (19)

where σ_i is the standard deviation of each individual random variable and r_{i,j} ∈ (-1, 1) is the correlation coefficient between ξ_i and ξ_j (i, j = 1, ..., S). ξ can easily be transformed through a linear transformation to ξ̂, which conforms to the standard normal distribution with zero mean and a covariance matrix Σ̂ having unit diagonal elements.

Computing the Function Value. The probability of the random inputs lying in a certain region can be obtained from the standard PDF Φ_S, which is computed by integrating the standard density function φ̂_S, i.e.,

Φ_S(z_1, ..., z_S, Σ̂) = ∫_{-∞}^{z_1} ··· ∫_{-∞}^{z_S} φ̂_S(ξ̂_1, ..., ξ̂_S) dξ̂_S ··· dξ̂_1    (20)

For the multivariate standard normal distribution, it is well-known that (20) can be written as

Φ_S(z_1, ..., z_S, Σ̂) = ∫_{-∞}^{z_1} Φ_{S-1}(z_2^(1), ..., z_S^(1), Σ̂^(1)) φ̂_1(ξ̂_1) dξ̂_1    (21)

where Φ_{S-1} is the (S - 1)-dimensional standard PDF, φ̂_1 is the standard normal density function of a single random variable,9 and

z_k^(1) = (z_k - r_{k,1} ξ̂_1)/√(1 - r_{k,1}²),  k = 2, ..., S    (22)

Σ̂^(1) is the (S - 1) × (S - 1) correlation matrix with entries

r_{i,j}^(1) = (r_{i,j} - r_{i,1} r_{j,1})/[√(1 - r_{i,1}²) √(1 - r_{j,1}²)],  i, j = 2, ..., S    (23)

Now the S-dimensional multivariate integration is reduced to an (S - 1)-dimensional one:

Φ_{S-1}(z_2^(1), ..., z_S^(1), Σ̂^(1)) = ∫_{-∞}^{z_2^(1)} ··· ∫_{-∞}^{z_S^(1)} φ̂_{S-1}(ξ̂_2, ..., ξ̂_S) dξ̂_S ··· dξ̂_2    (24)

Continuing this procedure for S - 2 steps, we arrive at the two-dimensional integration

Φ_2(z_{S-1}^(S-2), z_S^(S-2), Σ̂^(S-2)) = ∫_{-∞}^{z_{S-1}^(S-2)} ∫_{-∞}^{z_S^(S-2)} φ̂_2(ξ̂_{S-1}, ξ̂_S) dξ̂_S dξ̂_{S-1}    (25)

where Σ̂^(S-2) is the 2 × 2 correlation matrix. Similar to (21), the right-hand side of (25) can be written as


Φ_2(z_{S-1}^(S-2), z_S^(S-2), Σ̂^(S-2)) = ∫_{-∞}^{z_{S-1}^(S-2)} Φ_1[(z_S^(S-2) - r_{1,2}^(S-2) ξ̂_{S-1})/√(1 - (r_{1,2}^(S-2))²)] φ̂_1(ξ̂_{S-1}) dξ̂_{S-1}    (26)

where Φ_1 is the standard probability function of a normally distributed variable. Because an analytical solution of (26) is not available, a numerical integration is required. Collocation on finite elements has been regarded as an efficient method;21 thus, we use it for the numerical integration of (26). In the collocation framework, the integration interval (-∞, z_{S-1}^(S-2)] is discretized into subintervals; it should be noted that for the standard normal distribution the domain [-3, z_{S-1}^(S-2)] can be used. The method of collocation on finite elements is briefly presented in Appendix A. Based on (20)-(26), a nested computational approach can be developed: the values of Φ_2 at the collocation points of the random variable are calculated first, these values are used for computing the values of Φ_3, and the procedure continues upward until the values of Φ_S are computed. The choice of the number of intervals and the number of collocation points is a compromise between accuracy and computational expense: more intervals and more collocation points lead to more accurate results, but the computation time increases. Simulation results show that three intervals with three-point collocation, or two intervals with five-point collocation, can reach a probability accuracy with an error of less than 1%.

Computing the Gradients. For the case of a normal distribution, the gradients of the probability with respect to the decision variables u can be computed by the same approach as described above. We note that only the last bound value of the integral in (20) depends on u [see also (9)]. From (21), we have the gradient vector:

∂Φ_S/∂u = ∫_{-∞}^{z_1} (∂Φ_{S-1}/∂u) φ̂_1(ξ̂_1) dξ̂_1    (27)

Because

Φ_{S-1}(z_2^(1), ..., z_S^(1), Σ̂^(1)) = ∫_{-∞}^{z_2^(1)} Φ_{S-2}(z_3^(2), ..., z_S^(2), Σ̂^(2)) φ̂_1(ξ̂_2) dξ̂_2    (28)

then

∂Φ_{S-1}/∂u = ∫_{-∞}^{z_2^(1)} (∂Φ_{S-2}/∂u) φ̂_1(ξ̂_2) dξ̂_2    (29)

Continuing this procedure for S - 2 steps, we arrive at

∂Φ_2/∂u = ∫_{-∞}^{z_{S-1}^(S-2)} (∂Φ_1/∂u) φ̂_1(ξ̂_{S-1}) dξ̂_{S-1}    (30)

and

∂Φ_1/∂u = φ̂_1(z_S^(S-1)) ∂z_S^(S-1)/∂u    (31)

In section 3 we have discussed how to compute z_S and ∂z_S/∂u. Thus, the gradients of the probability can be computed from (31) upward to (27), also by using the collocation method.

5. Application to Process Engineering Problems

Figure 5. Flowsheet of the reactor network.

Reactor Network Design. The first example is a reactor network design problem, shown in Figure 5. This example is taken from Ryoo and Sahinidis,22 and it is extended here for the purpose of optimal design under uncertainty. The design problem is cost minimization under a product specification. It is assumed that the uncertainties come from the kinetic parameters (the activation energies and frequency factors in the Arrhenius equations), while the decision variables are the volumes of the two reactors. The chance-constrained optimization problem is formulated as follows:

min f = √V_1 + √V_2
s.t.  C_A1 + k_1 C_A1 V_1 = 1
      C_A2 - C_A1 + k_2 C_A2 V_2 = 0
      C_B1 + C_A1 + k_3 C_B1 V_1 = 1
      C_B2 - C_B1 + C_A2 - C_A1 + k_4 C_B2 V_2 = 0
      k_1 = k_10 e^{-E_1/RT_1},  k_2 = k_10 e^{-E_1/RT_2}
      k_3 = k_20 e^{-E_2/RT_1},  k_4 = k_20 e^{-E_2/RT_2}
      P{C_B2 ≥ C_B2^SP} ≥ α
      0 ≤ C_A1, C_A2, C_B1, C_B2 ≤ 1,  0 ≤ V_1, V_2 ≤ 16    (32)

where C_Ai, C_Bi, V_i, and T_i (i = 1, 2) are the concentrations of components A and B and the volume and temperature of reactor i, respectively. The temperatures in both reactors are known and constant (RT_1 = 5180.869 and RT_2 = 4765.169, where R is the ideal gas constant). The random kinetic parameters are assumed to conform to a joint normal distribution, the data of which are given in Table 2; the standard deviations are 3% of the expected values. The entries of the correlation matrix are assumed values such as might result from parameter estimation. It should be noted that the assumption that the Arrhenius constants for the two reactors are correlated and all normally distributed is not necessarily a practical one. Because C_B2 is monotone with respect to k_20 (C_B2 decreases as k_20 increases), we chose k_20 as the random variable for computing the bound of the region of the uncertain variables and can transform P{C_B2 ≥ C_B2^SP} into P{k_20 ≤ k_20^L}.

Table 2. Parameters of the Random Parameters in the Reactor Network

parameter   expected value   standard deviation
E_1         6665.948         200
E_2         7965.248         240
k_10        0.715            0.0215
k_20        0.182            0.0055

correlation matrix:
[ 1     0.5   0.3   0.2
  0.5   1     0.5   0.1
  0.3   0.5   1     0.3
  0.2   0.1   0.3   1   ]

Table 3. Comparison of the Stochastic (ST) and Deterministic (DT) Optimization Results

                  V_1             V_2             f               CPU time (s)   NV %
C_B2^SP   α      ST      DT      ST      DT      ST      DT      ST     DT      ST    DT
0.50      0.90   3.301   3.222   3.266   2.814   3.624   3.472   0.8    0.2     10    50.52
0.50      0.95   3.497   3.222   3.245   2.814   3.671   3.472   1.0    0.2     5     50.52
0.52      0.90   3.808   3.452   3.795   3.416   3.899   3.706   1.0    0.2     10    50.85
0.52      0.95   3.854   3.452   4.001   3.416   3.963   3.706   1.2    0.2     5     50.85
0.54      0.90   4.474   3.910   4.908   4.168   4.331   4.019   1.1    0.2     10    50.33
0.54      0.95   4.701   3.910   5.439   4.168   4.501   4.019   2.0    0.2     5     50.33

(The deterministic results do not depend on α; their values are repeated in both rows of each C_B2^SP.)

According to (7) and (9), we can thus use k_20^L = F^{-1}(E_1, E_2, k_10, C_B2^SP, V_1, V_2) as the upper bound for the random variable k_20 in the numerical integration (10). Three-point collocation and four intervals for each random variable are used in the multivariate integration, with which the inaccuracy of the probability computation is less than 10^-4. Table 3 compares the results of the stochastic optimization with those of the deterministic optimization, in which the random parameters are fixed at their expected values. The first two columns of Table 3 specify six cases (three product specifications and two probability levels). It can be seen that the volumes of both reactors must be increased (thus leading to higher costs) if the product specification rises. For the same product specification, the volumes must also be increased if a higher probability level is required. For this problem, the maximum reachable probabilities for the cases C_B2^SP = 0.50, 0.52, and 0.54 are 1.0, 0.999, and 0.971, respectively. The results become more sensitive to the predefined probability level α as α approaches the maximum reachable probability of satisfying the constraint. In comparison with the stochastic optimization, the volumes required by the deterministic optimization are smaller, and the total cost is therefore lower. However, the deterministic results cannot be applied, because the product constraint would then very likely be violated. The last two columns show the percentage of constraint violations (NV %) when implementing the stochastic and the deterministic optimization results, respectively; these were obtained by sampling the random parameters according to Table 2 with 10 000 Monte Carlo samples.
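This violation check by Monte Carlo sampling can be sketched as follows (a reduced version with 20 000 samples). The model equations in (32) reduce to an explicit chain for given rate constants, and the correlated parameters are drawn through a Cholesky factor of Table 2's correlation matrix; the deterministic design for C_B2^SP = 0.50 (V1 = 3.222, V2 = 2.814) is tested, for which roughly half of the samples should violate the constraint:

```python
import math
import random

RT1, RT2 = 5180.869, 4765.169
mu = [6665.948, 7965.248, 0.715, 0.182]      # E1, E2, k10, k20 (Table 2)
sigma = [200.0, 240.0, 0.0215, 0.0055]
R = [[1.0, 0.5, 0.3, 0.2],
     [0.5, 1.0, 0.5, 0.1],
     [0.3, 0.5, 1.0, 0.3],
     [0.2, 0.1, 0.3, 1.0]]

def cholesky(A):
    n = len(A)
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = sum(L[i][k] * L[j][k] for k in range(j))
            L[i][j] = math.sqrt(A[i][i] - s) if i == j else (A[i][j] - s) / L[j][j]
    return L

L = cholesky(R)

def cb2(E1, E2, k10, k20, V1, V2):
    # Explicit solution of the model equations in (32)
    k1, k2 = k10 * math.exp(-E1 / RT1), k10 * math.exp(-E1 / RT2)
    k3, k4 = k20 * math.exp(-E2 / RT1), k20 * math.exp(-E2 / RT2)
    ca1 = 1.0 / (1.0 + k1 * V1)
    ca2 = ca1 / (1.0 + k2 * V2)
    cb1 = (1.0 - ca1) / (1.0 + k3 * V1)
    return (cb1 + ca1 - ca2) / (1.0 + k4 * V2)

V1, V2, spec = 3.222, 2.814, 0.50   # deterministic design for CB2_SP = 0.50
cb2_nominal = cb2(*mu, V1, V2)      # ~0.500 at the expected parameter values

random.seed(4)
n = 20_000
violations = 0
for _ in range(n):
    z = [random.gauss(0, 1) for _ in range(4)]
    theta = [mu[i] + sigma[i] * sum(L[i][k] * z[k] for k in range(4))
             for i in range(4)]
    if cb2(*theta, V1, V2) < spec:
        violations += 1
viol_frac = violations / n
```

The nominal design meets the specification exactly, so about half of the perturbed samples fall below it, in line with the 50.52% in Table 3; per Table 3, the stochastic design (V1 = 3.301, V2 = 3.266) shows only about 10% violations instead.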
As expected, about 10% (in the case of α = 0.9) and 5% (in the case of α = 0.95) violations result from the stochastic optimization, whereas implementing the results of the deterministic optimization leads to about 50% violations of the constraint. Moreover, the difference between the CPU times (SUN Ultra 1 station) of the two methods is insignificant.

Operation of a Distillation Column. A pilot distillation column to separate a methanol-water mixture

Figure 6. Pilot distillation column for separating a water-methanol mixture.

Table 4. Parameters of the Random Variables in the Distillation Process

variable   expected value   standard deviation
ηR         0.8              0.016
ηS         0.8              0.016
F (L/h)    37               1.11
xF         0.2              0.008

correlation matrix (order ηR, ηS, F, xF):

[ 1    0.5   0.3   0.1
  0.5  1     0.3   0.1
  0.3  0.3   1     0.3
  0.1  0.1   0.3   1 ]

is considered (Figure 6). The column has a diameter of 100 mm and 20 bubble-cap trays with a central downcomer and is operated under atmospheric pressure. The feed to the column is assumed to come from upstream plants, and thus the feed flow rate F and composition xF are considered as external uncertain variables. The tray efficiencies in the stripping section, ηS, and in the rectifying section, ηR, usually cannot be determined accurately; these two parameters are therefore also treated as uncertain variables. A rigorous model composed of component and energy balances, vapor-liquid equilibria, and tray hydraulics for each tray is used to describe the column, which leads to a complicated nonlinear equation system. The model has been validated by comparison of simulation and experimental results on the plant.23 The objective function of the optimization problem is defined as

max f = c1 D - c2 L    (33)

where D and L are the distillate flow rate and the reflux rate of the column, respectively; these are the decision variables of the optimization problem. The physical meaning of (33) is to maximize the amount of top product while keeping the operating energy, which is reflected by the reflux rate, low. The optimization problem is subject to the equalities of the nonlinear equation system and a probabilistic constraint assigned to the top product, namely,

P{xD ≥ xD^SP} ≥ α    (34)

where xD and xD^SP are the distillate composition and the purity specification, respectively. The constants in (33) are set to c1 = 4 and c2 = 1. The parameters of the assumed joint normal distribution of the random variables are listed in Table 4. These data are estimated based on experimental experience with the pilot plant. It should be noted that there is a strong positive correlation between the two tray efficiencies, because both of them may increase or decrease synchronously in response to load changes. Feed flow and feed composition may also be correlated, owing to the operation of the upstream plant. For example, if the upstream plant is a reactor and its feed flow increases randomly, the composition of the educt component in the outflow of the reactor (the feed flow of the column) may increase as well because of incomplete reaction. In the design of distillation columns, the influence of the feed flow is seldom considered. However, the feed flow has a significant impact on the liquid and vapor loads in the column; this changes the pressure drop of the column and thus influences the product composition (especially the bottom composition). Because changes of the feed flow are more critical to the load than changes of the feed concentration, the correlation of the feed flow with the tray efficiencies is much stronger than that of the feed concentration. The standard deviations are assumed to be 2% of the expected values for the tray efficiencies, 3% for the feed flow rate, and 4% for the feed concentration.

Now we have to choose one of the uncertain variables, monotone with the state variable xD restricted in the probabilistic constraint, for computing the bound of the region of the uncertain variables. It can easily be seen that there is a positive monotony between xF and xD, that is, xF ↑ ⇒ xD ↑. Thus, according to (7) and (8), we can transform P{xD ≥ xD^SP} to 1 − P{xF ≤ xF^L}. According to (10), xF^L = F^{-1}(ηR, ηS, F, xD^SP, D, L) can be used as the upper bound for the random variable xF in the numerical integration. The stochastic optimization problem is solved for different product specifications and probability levels; the results, shown in Table 5, are compared with those of the deterministic optimization.

Table 5. Stochastic (ST) and Deterministic (DT) Optimization Results (the deterministic results do not depend on α and are listed once per specification)

xD^SP   α      D (ST)   L (ST)   f (ST)    D (DT)   L (DT)   f (DT)
0.995   0.95   0.848    1.834    1.5575
0.995   0.90   0.867    1.820    1.6473    0.938    1.765    1.9862
0.995   0.5    0.931    1.754    1.9683
0.990   0.95   0.852    1.516    1.8925
0.990   0.90   0.871    1.506    1.9780    0.943    1.501    2.2697
0.990   0.5    0.938    1.470    2.2806

It is hardly a surprise that an increase of the product specification and of the probability level causes an increase of the reflux and a decrease of the distillate stream, and thus a reduction of the objective function. It is interesting to note, however, that the objective function and the decision variables are much more sensitive to the probability level than to the product specification. As in the previous example, the results of the deterministic optimization (based on the expected values of the uncertain parameters) are close to, but not the same as, the results computed with a probability level of 0.5. The CPU time required for solving this problem depends strongly on the choice of initial values of the decision variables. One iteration step of the optimizer takes approximately 3 min on a SUN Ultra 1 station, and a skillful choice of initial values leads to about seven iteration steps. It should be noted that, to ensure a feasible solution for different product specifications, it is not necessary to compute the maximum probability with the NLP solver in this example; instead, one simulation step at total reflux is sufficient for computing the maximum probability, which is close to 1.0 for both selected product specifications.

6. Conclusions

In this work a new approach to solving nonlinear chance-constrained optimization problems under uncertainty is proposed. The approach is applicable when a constrained output variable in the chance constraint has a nonlinear relation to uncertain variables with a joint multivariate distribution, provided that at least one of those uncertain variables has a monotone relation to the constrained output. The monotony can be exploited by mapping the output feasible region to a region of the uncertain variables. Through this reverse projection, an upper bound of the selected uncertain variable can be computed, and the required probability can then be obtained by numerical integration. The collocation method, well-known as a highly efficient integration technique, is used for this integration. In the proposed numerical integration, the same collocation points are also used for computing the gradients; hence, an advantage of direct numerical integration of the PDF is that the gradients can be computed simultaneously during the multivariate integration. The approach is applied to two process optimization problems under uncertainty.

Knowledge of a monotone relation between the constrained output and one of the uncertain input variables is the essence of the approach. This means that, to carry out optimization under uncertainty by the proposed approach, the impact of the uncertain variables on the output variables should be analyzed first, based on the properties of the process; this can also be done by process simulation. The solution approach itself (described in section 3) does not depend on the distribution of the random variables: if the PDF of the random variables is known (e.g., uniform, exponential, normal, or log-normal distribution), the approach can be applied. However, for the numerical integration, we have only derived the method for normally distributed random variables, as given in section 4.
The application of the approach to processes under uncertainties with other distributions will be a challenging future work. We eliminate the equality constraints by expressing the state variables in terms of the decision and uncertain variables in an implicit way, namely, by simulating the process with the Newton method. If a monotone relation exists between the constrained output variable and one selected input variable, we can be sure that a solution point exists when the original function is replaced by the reverse function (see Figure 1). If the process is too complex, there may be problems in the simulation for mapping the output constraint to the uncertain input region, and considerable work may then be required to overcome convergence problems. Very often, convergence properties depend strongly on the initial values of the state variables; in our method, the previously computed solution point is used to ensure that the initial values are sufficiently close to the new solution point. If a joint constraint is considered, it is necessary to have an uncertain variable that has a monotone relation with all of the chance-constrained variables. For a concrete process, it may be difficult to find such an uncertain variable; this will be a challenging task in future work. Another future work is to extend


the approach to dynamic optimization problems with time-dependent random variables (i.e., stochastic processes). In addition, to minimize the CPU time, the collocation method used for the numerical integration can be further studied with respect to the optimal number of collocation points.

Acknowledgment

We thank Deutsche Forschungsgemeinschaft (DFG) for financial support from the program "Echtzeit-Optimierung grosser Systeme".

Appendix A: Collocation on Finite Elements for Numerical Integration

The integration required has the form

Ω)

∫vv ω(v) dv f

0

(A1)

namely,

dΩ/dv = ω(v),   Ω(v0) = 0    (A2)

The domain [v0, vf] of the variable v is discretized into subintervals ([vl, vl+1], l = 1, ..., L). In each interval the integration function is approximated by Lagrange polynomials:

Ω(v) = ∑_{j=0}^{NC} Γj(v) Ωj    (A3)

where NC is the number of collocation points and Γj is the Lagrange basis function

Γj(v) = ∏_{i=0, i≠j}^{NC} (v - v(i)) / (v(j) - v(i))    (A4)

Ωj and v(j) are the values of the integral and the corresponding collocation points (usually the zeros of the shifted Legendre polynomial21), respectively. Thus, the derivatives of Ω at the collocation points can be calculated as

dΩ/dv |_{vi} = ∑_{j=0}^{NC} (dΓj/dv)|_{vi} Ωj,   i = 1, ..., NC    (A5)

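To make the recursion defined by (A2)-(A5) concrete, here is a small self-contained sketch. It is an illustration written for this text, not the authors' code: the NC interior collocation points are taken as Gauss-Legendre nodes, and the value at the right interval edge is obtained by Lagrange interpolation of the collocation solution.

```python
import numpy as np

def lagrange_diff_matrix(t):
    """Derivatives of the Lagrange basis polynomials Γj of (A4) at the nodes t_i."""
    n = len(t)
    D = np.zeros((n, n))
    for j in range(n):
        for i in range(n):
            if i == j:
                D[i, j] = sum(1.0 / (t[j] - t[m]) for m in range(n) if m != j)
            else:
                num = np.prod([t[i] - t[m] for m in range(n) if m not in (i, j)])
                den = np.prod([t[j] - t[m] for m in range(n) if m != j])
                D[i, j] = num / den
    return D

def collocation_integral(omega, v0, vf, n_intervals=4, nc=3):
    """Integrate omega from v0 to vf by solving dΩ/dv = omega, Ω(v0) = 0 (A2)
    with collocation on finite elements (A3)-(A5)."""
    edges = np.linspace(v0, vf, n_intervals + 1)
    x, _ = np.polynomial.legendre.leggauss(nc)   # Gauss points on (-1, 1)
    Omega0 = 0.0                                 # initial value Ω(v0) = 0
    for a, b in zip(edges[:-1], edges[1:]):
        # nodes: interval start plus NC interior collocation points
        t = np.concatenate(([a], a + (x + 1.0) * (b - a) / 2.0))
        D = lagrange_diff_matrix(t)
        # collocation equations (A5): sum_j D[i,j] Ω_j = omega(t_i), i = 1..NC
        A = D[1:, 1:]
        rhs = omega(t[1:]) - D[1:, 0] * Omega0
        Omega = np.concatenate(([Omega0], np.linalg.solve(A, rhs)))
        # continuity: interpolate Ω at the right edge -> initial value of next interval
        Gamma_b = np.array([
            np.prod([(b - t[m]) / (t[j] - t[m]) for m in range(nc + 1) if m != j])
            for j in range(nc + 1)
        ])
        Omega0 = float(Gamma_b @ Omega)
    return Omega0
```

For a smooth integrand such as ω(v) = cos v on [0, π/2], three collocation points and four intervals already reproduce the exact integral to well below the 10^-4 accuracy quoted in the examples.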
From (A2) and (A5), the values of the integral at the collocation points can be calculated. For continuity, the integrated value at the last collocation point of the current interval is used as the initial value for the next interval.

Literature Cited

(1) Halemane, K. P.; Grossmann, I. E. Optimal Process Design under Uncertainty. AIChE J. 1993, 29, 425.
(2) Subrahmanyam, S.; Pekny, J. F.; Reklaitis, G. V. Design of Batch Chemical Plants under Market Uncertainty. Ind. Eng. Chem. Res. 1994, 33, 2688.
(3) Pistikopoulos, E. N.; Ierapetritou, M. G. Novel Approach for Optimal Process Design under Uncertainty. Comput. Chem. Eng. 1995, 19, 1089.
(4) Rooney, W. C.; Biegler, L. T. Incorporating Joint Confidence Regions into Design under Uncertainty. Comput. Chem. Eng. 1999, 23, 1563.
(5) Diwekar, U. M.; Kalagnanam, J. R. Efficient Sampling Technique for Optimization under Uncertainty. AIChE J. 1997, 43, 440.
(6) Bernardo, F. P.; Pistikopoulos, E. N.; Saraiva, P. M. Integration and Computational Issues in Stochastic Design and Planning Optimization Problems. Ind. Eng. Chem. Res. 1999, 38, 3056.
(7) Torvi, H.; Herzberg, T. Estimation of Uncertainty in Dynamic Simulation Results. Comput. Chem. Eng. 1997, 21 (Suppl.), S181.
(8) Acevedo, J.; Pistikopoulos, E. N. Stochastic Optimization Based Algorithms for Process Synthesis under Uncertainty. Comput. Chem. Eng. 1998, 22, 647.
(9) Prékopa, A. Stochastic Programming; Kluwer: Dordrecht, The Netherlands, 1995.
(10) Ahmed, S.; Sahinidis, N. V. Robust Process Planning under Uncertainty. Ind. Eng. Chem. Res. 1998, 37, 1883.
(11) Gupta, A.; Maranas, C. D. A Two-Stage Modeling and Solution Framework for Multisite Midterm Planning under Demand Uncertainty. Ind. Eng. Chem. Res. 2000, 39, 3799.
(12) Clay, R. L.; Grossmann, I. E. A Disaggregation Algorithm for the Optimization of Stochastic Planning Models. Comput. Chem. Eng. 1997, 21, 751.
(13) Kall, P.; Wallace, S. W. Stochastic Programming; Wiley: New York, 1994.
(14) Schwarm, A. T.; Nikolaou, M. Chance-Constrained Model Predictive Control. AIChE J. 1999, 45, 1743.
(15) Li, P.; Wendt, M.; Wozny, G. Robust Model Predictive Control under Chance Constraints. Comput. Chem. Eng. 2000, 24, 829.
(16) Whiting, W. B.; Tong, T.; Reed, M. E. Effect of Uncertainties in Thermodynamic Data and Model Parameters on Calculated Process Performance. Ind. Eng. Chem. Res. 1993, 32, 1367.
(17) Vasquez, V. R.; Whiting, W. B. Uncertainty and Sensitivity Analysis of Thermodynamic Models Using Equal Probability Sampling (EPS). Comput. Chem. Eng. 2000, 23, 1825.
(18) Morgan, M. G.; Henrion, M. Uncertainty; Cambridge University Press: Cambridge, U.K., 1990.
(19) Loeve, M. Probability Theory; Van Nostrand-Reinhold: Princeton, NJ, 1963.
(20) Papoulis, A. Probability, Random Variables and Stochastic Processes; McGraw-Hill: New York, 1965.
(21) Finlayson, B. A. Nonlinear Analysis in Chemical Engineering; McGraw-Hill: New York, 1980.
(22) Ryoo, H. S.; Sahinidis, N. V. Global Optimization of Nonconvex NLPs and MINLPs with Applications in Process Design. Comput. Chem. Eng. 1995, 19, 551.
(23) Li, P.; Wendt, M.; Arellano, H. G.; Wozny, G. Optimal Operation of Distillation Processes under Uncertain Inflows Accumulated in a Feed Tank. AIChE J. 2002, 48, 1198.

Received for review July 31, 2001 Revised manuscript received April 10, 2002 Accepted May 22, 2002 IE010649S