
Ind. Eng. Chem. Res. 2007, 46, 4930-4942

Set-Point Optimization for Closed-Loop Control Systems under Uncertainty

Thomas Flemming, Martin Bartl, and Pu Li*

Department of Simulation and Optimal Processes, Institute for Automation and Systems Engineering, Technische Universität Ilmenau, P.O. Box 10 05 65, 98684 Ilmenau, Germany

This work concerns the determination of setpoints for closed-loop control of complex processes under uncertainty. The resulting setpoints should be both optimal and robust; i.e., under the influence of uncertain disturbances, output constraints must be satisfied, although these output variables are in an open-loop relationship to the disturbances. In this paper, a stochastic closed-loop optimization approach is proposed, and the formulated problem is solved with nonlinear chance constrained programming. This method utilizes the stochastic distribution of the uncertainties explicitly and allows the satisfaction of restrictions with a user-defined probability level, so that a desired compromise between profitability and reliability can be achieved. Solution methods for nonlinear systems with multiple constrained outputs are considered, in which one can choose between single and joint chance constraints. Finally, the proposed approach is applied to a continuous distillation column to demonstrate its effectiveness.

1. Introduction

Uncertainty is an important aspect in process engineering.1,2 The parameters of a process model are usually uncertain, because of incomplete process knowledge; for this reason, the model is only an approximation of the real process. In the work of Kassmann et al.,3 model uncertainty is considered in steady-state target calculation. They use a primal-dual interior-point method to solve the resulting second-order cone programming problem. A feature of their work is that a linear model is used for the set-point optimization. Because of the nonlinear behavior of most chemical processes, it is necessary to develop approaches to nonlinear optimization under uncertainty, especially for finding an operating point for a nonlinear process (if only a control problem to hold a given setpoint is considered, a linear approximation may be used).
Moreover, there often exist various impacts from outside the process, mainly caused by changing market conditions, such as product specifications, feed qualities, or the supply of utilities.4 These changes lead to significant disturbances to processes, and, unfortunately, their amplitude and frequency are usually uncertain. For the solution of optimal design and operation problems of such processes, the uncertainties must be considered a priori in decision making. Conventional deterministic optimization strategies offer merely a simplified treatment of uncertain variables (i.e., they are represented by their expected values, and no information about their stochastic properties is included in the decision process). Therefore, the resulting solutions will be optimal but not robust. For this reason, stochastic programming methods must be used.5-9 These methods consider the stochastic properties of uncertain variables during the decision process, which leads to robust solutions.

* To whom correspondence should be addressed. Tel.: +49 3677 69 1423. Fax: +49 3677 69 1415. E-mail address: pu.li@tu-ilmenau.de.

Generally, there are two solution approaches in the context of stochastic programming: the recourse approach and the chance constrained programming approach. In the recourse approach, decision variables of an optimization problem under uncertainty are divided into two sets. The first-stage decision variables (called design variables) are decided and fixed before the realization of the uncertain variables, and the second-stage variables (called operation variables) are decided after the realization of the uncertain variables. Thereby, violations of process constraints are allowed and compensated by penalty functions, which are introduced into the objective function. This method is well-suited to solving planning problems under demand uncertainty.10,11 However, because a model for the penalty function is required, which is not available in many cases, applications of this approach are limited.

The second approach, chance constrained programming, does not require a penalty function. The inequalities (process restrictions) are formulated as chance constraints, which are to be satisfied with a user-defined probability level. In the case of normally distributed uncertainties, the solution of a linear problem with separate chance constraints can be achieved by a simple coordinate transformation. The consideration of simultaneous chance constraints, which are described by a multivariate probability density function (pdf), however, leads to a nonlinear optimization problem that can be solved with a nonlinear programming (NLP) solver. In this case, probabilities and gradients of the constraints can be evaluated by an efficient simulation approach.6 Over the past few years, several studies concerning static and dynamic linear problems, including model predictive control under chance constraints, have been conducted.5,12,13

Unlike the linear case, for nonlinear systems it is very difficult to describe the distribution of constrained outputs explicitly. Hence, the main difficulty in solving nonlinear chance constrained optimization problems lies in the calculation of the probabilities of holding the inequalities and of their gradients. A straightforward way to overcome this difficulty is to use sampling methods. An efficient sampling approach14 can be used to accomplish this task.
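Such a sampling estimate of a chance constraint probability, and the extra sampling pass needed for its gradient, can be sketched as follows. The model yc(w, ξ) here is an invented toy map, not the paper's process model:

```python
import random
from math import exp

def yc(w, xi):
    # hypothetical nonlinear steady-state map from setpoint w and
    # disturbance xi to the constrained (unmeasured) output y^c
    return w * exp(0.1 * xi)

def prob_holding(w, yc_lim, n=20000, seed=0):
    # sample-average estimate of P{y^c(w, xi) <= yc_lim}, with xi ~ N(0, 1)
    rng = random.Random(seed)
    hits = sum(yc(w, rng.gauss(0.0, 1.0)) <= yc_lim for _ in range(n))
    return hits / n

def prob_gradient(w, yc_lim, h=1e-3, **kw):
    # finite-difference sensitivity w.r.t. the decision variable w;
    # each evaluation repeats the whole sampling pass, which is why
    # gradient information is expensive for pure sampling methods
    return (prob_holding(w + h, yc_lim, **kw)
            - prob_holding(w - h, yc_lim, **kw)) / (2 * h)

p = prob_holding(1.0, 1.2)   # probability of holding the constraint
g = prob_gradient(1.0, 1.2)  # negative: raising w lowers the probability
```

Using a common random seed for both passes keeps the finite-difference estimate usable despite the sampling noise.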
However, for the calculation of gradients, two sample passes are required, which costs considerable computational effort. Another way to accomplish the computation is the direct numerical integration of the multivariate pdf, which allows the gradients to be calculated simultaneously during the integration. Unfortunately, because of the nonlinear relation between the uncertain inputs and the constrained outputs, an analytical description of this function is not available. To address this problem, Wendt et al. proposed the inverse mapping approach, in which one single chance constraint was taken into account.15 With this approach, it is possible to utilize the given joint probability distribution of the stochastic inputs (uncertainties), instead of the unknown output distribution, to calculate the probabilities and their gradients, provided that a monotone relation between the respective input and output variables exists. This approach has been extended to solve nonlinear dynamic optimization problems under uncertainty with constrained outputs at selected time points and with time-independent uncertain inputs.16

Figure 1. From open-loop control to closed-loop control, under uncertain disturbances.

10.1021/ie061540t CCC: $37.00 © 2007 American Chemical Society Published on Web 06/07/2007

All of the chance constrained approaches mentioned previously provide open-loop solutions (i.e., the decision variables are the control variables u). According to the solution, the optimal values of the controls should be applied to the open-loop system (see Figure 1a). In this way, these values compensate the uncertainties considered in the problem formulation. However, additional (unknown) disturbances that are not considered in the problem formulation cannot be compensated in the open-loop framework, and, thus, the solution will often lead to violations of process constraints. To address this problem, repeated open-loop optimization in the framework of chance constrained model predictive control for linear systems17-19 and nonlinear systems20 has been considered. However, in many practical applications for process operations, optimal controls obtained from open-loop optimization cannot be implemented directly. In such cases, an optimal and robust operation with the available closed-loop control systems is preferred.

In this paper, the optimal operation of nonlinear processes under uncertainty in a closed-loop framework is considered. We use nonlinear chance constrained optimization to solve this problem. Instead of optimal controls, optimal setpoints for closed-loop control systems will be determined.
In this way, the stochastic properties of the significant uncertain disturbances known a priori can be taken into account and, at the same time, the unknown disturbances can be compensated by the closed-loop control. Note that this a priori knowledge of the uncertain variables is not used here for a feed-forward action. Standard feed-forward disturbance compensation works as follows: it is assumed that the value of the disturbance is known a priori or measurable, and the compensation is calculated accordingly. In our work, the stochastic property of the uncertain disturbances is considered, not their specific (measured) values. The action achieved from our approach is robust (i.e., a predefined probability of holding the constraints is ensured over the realizations of the disturbances). In a closed-loop control system, the control variables u are known as manipulated variables; they are determined by the controller and, thus, are no longer available as degrees of freedom for the optimization.21 In this case, the decision variables are the controlled variables y, and, after solving the optimization problem, they can be provided as setpoints for the closed-loop control systems.
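This role reversal can be made concrete with a minimal sketch. The scalar steady-state model below is invented for illustration only: with y pinned at its setpoint w by an ideal integrating controller, u becomes a dependent variable obtained from the model equation, and the unmeasured output yc follows:

```python
# hypothetical scalar steady-state model (invented for illustration):
#   inner loop:   y = 2.0 * u - 0.8 * xi    (controller holds y = w)
#   open loop:    yc = 0.5 * y + 0.3 * xi   (unmeasured, constrained)
def closed_loop_steady_state(w, xi):
    y = w                       # ideal integral action: zero offset
    u = (y + 0.8 * xi) / 2.0    # dependent input solved from the model
    yc = 0.5 * y + 0.3 * xi     # output in open-loop relation to xi
    return u, yc

u, yc = closed_loop_steady_state(w=1.0, xi=0.5)   # u = 0.7, yc = 0.65
```

Because u and yc both inherit the randomness of ξ, only the setpoint w remains deterministic and available as a decision variable.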

Figure 2. Comparison of different approaches for stochastic problems.

Unfortunately, it is generally known that, in the process industry, there exist many output variables y^c (e.g., concentrations, viscosities) that are difficult, or even impossible, to measure directly or on-line. For this reason, they are not included in the control loops; i.e., the y^c are merely in an open-loop relation with the disturbances ξ. However, these variables represent product quality and, thus, are very important for the process operation (i.e., they must be held within given limits, e.g., y^c ≤ y^{c,lim}). In industrial practice, simple measurable variables y (e.g., temperatures, pressures) are usually utilized for the indirect control of the immeasurable variables y^c, where a monotone relationship between the individual variables y and y^c is needed. This control concept is shown in Figure 1b. The task of determining suitable setpoints for such control systems is to calculate economically optimal setpoints w = y* that, at the same time, ensure the satisfaction of the constraints on y^c, despite their open-loop relationship with the uncertain disturbances (i.e., the achieved operation should be both profitable and reliable). Because of the complexity of the considered problem, such a result can only be achieved by appropriate numerical optimization methods. For constrained nonlinear processes with uncertain disturbances ξ, the following stochastic optimization problem can be formulated:

min f(x, y, y^c, u, ξ)        (1)

s.t. g(x, y, y^c, u, ξ) = 0

y_i^c(x, y, u, ξ) ≤ y_i^{c,lim}   (for i = 1, ..., I)

y ∈ Y,  u ∈ U,  ξ ∈ Ξ

To address the uncertain variables in such an optimization problem, three approaches are commonly used, as schematically shown in Figure 2. We consider, for example, one restricted variable y_1^c and the corresponding measurable variable y_1, which behave monotonically to each other. That means, if they are positively monotone, then y_1↑ ⇔ y_1^c↑ and y_1↓ ⇔ y_1^c↓; if they are negatively monotone, then y_1↑ ⇔ y_1^c↓ and y_1↓ ⇔ y_1^c↑. Both variables y_1 and y_1^c are plotted on vertical axes. The setpoints w_1 resulting from the different approaches are specific values of y_1. The ellipsoids represent the possible output regions for y_1^c, according to the setpoints w_1, resulting from the effect of the uncertain variables. With an increasing set-point value, the objective function value (the costs) also increases, but the solution will be more robust. The first method is the so-called "worst-case analysis",22,23 where the boundary values of the uncertain variables are taken into account. The constraint y_1^{c,lim} can then be held in any case. This approach leads to very risk-averse setpoints (see w_1^{wc} in Figure 2), associated with considerably high operating costs.
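For a toy scalar example (invented here: y_1^c = w_1 + ξ with ξ ~ N(µ, σ), constraint y_1^c ≤ y_1^{c,lim}, and profit increasing in w_1), the three setpoint choices discussed in this section can be computed explicitly:

```python
from statistics import NormalDist

# illustrative toy: y1c = w1 + xi, xi ~ N(mu, sigma), constraint y1c <= lim;
# profit grows with w1, so each approach picks the largest admissible setpoint
mu, sigma, lim, alpha = 0.0, 0.5, 2.0, 0.95
xi_worst = mu + 3.0 * sigma          # assumed bound on xi for the worst case

w_wc = lim - xi_worst                # worst case: robust but very conservative
w_det = lim - mu                     # deterministic: xi replaced by its mean
w_cc = lim - (mu + NormalDist().inv_cdf(alpha) * sigma)   # chance constrained

# w_wc < w_cc < w_det: the chance-constrained setpoint trades a small,
# quantified risk (1 - alpha) of violating the limit for better economics
```

Here the chance-constrained setpoint satisfies P{y_1^c ≤ lim} = α exactly, sitting between the two extremes.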


The second popular way to treat uncertainty is to substitute ξ by its mean values. In this way, expression 1 becomes a deterministic optimization problem. As a result of this simple treatment, very optimistic and cost-saving setpoints (w_1^{det} in Figure 2) will be obtained, but, on the other hand, the process specification y_1^{c,lim} will be violated with a high probability.

To overcome the drawbacks of the aforementioned two approaches, a third approach, chance constrained programming, can be used. This method utilizes the stochastic properties of the uncertainties (mean, variance, distribution function) explicitly in the problem formulation. To solve such problems, the original stochastic optimization problem (expression 1) is transformed, or relaxed, to an equivalent deterministic nonlinear programming (NLP) problem and finally solved using an available optimization routine. To relax the problem, the stochastic objective function is substituted by its mean value as well as its variance, and the inequalities are formulated as chance constraints. The equalities g(·) = 0 represent the process model equations and, therefore, they must be fulfilled for any realization of the uncertain variables. This can be ensured by a step of stochastic simulation within the optimization procedure. The formulation of chance constraints means that holding the process constraints on y^c is not required with 100% probability but with a high probability level α (e.g., α = 0.95) specified by the user. Figure 2 also illustrates the meaning of chance constraints through the values of y_1^c resulting from the setpoint w_1^{cc}: the points outside the specified bound can be neglected, because of their rareness. As a result, chance constrained programming makes it possible to obtain a user-defined compromise between reliability and cost effectiveness. A general optimization problem under chance constraints can be formulated as follows:

min { ω · E[f(x, y, y^c, u, ξ)] + (1 - ω) · V[f(x, y, y^c, u, ξ)] }        (2)

s.t. g(x, y, y^c, u, ξ) = 0

P_i^{sep}{ y_i^c(x, y, u, ξ) ≤ y_i^{c,lim} } ≥ α_i   (for i = 1, ..., I)

or

P^{sim}{ y_i^c(x, y, u, ξ) ≤ y_i^{c,lim}  (for i = 1, ..., I) } ≥ α

y ∈ Y,  u ∈ U,  ξ ∈ Ξ

In expression 2, f(x, y, y^c, u, ξ) can be a function of the operating costs, E[f(·)] is its expected value, and V[f(·)] is its variance. The term ω is a weighting factor between E[f(·)] and V[f(·)], and g(·) are the model equations. The vector x contains the state variables, y represents the measurable output (controlled) variables, and y^c represents the constrained but not measurable outputs. u and ξ are the vectors of manipulated variables and uncertain disturbances, respectively. α is the user-defined probability level (α ∈ [0,1]). To define the chance constraints, one can choose between two different forms: whereas the single chance constraints P_i^{sep}{·} correspond to the individual variables y_i^c, the joint chance constraint P^{sim}{·} relates to the simultaneous satisfaction of the restrictions on all variables.

Depending on the process characteristics, as well as the distribution of the uncertain disturbances ξ, the optimization problem (expression 2) can be formulated as a static or a dynamic problem. In this work, we consider continuous processes operated at an operating point and often influenced by time-invariant uncertain disturbances. In such cases, expression 2 is a steady-state optimization problem that will be addressed in the following

studies in this paper. Our formulation in the presented work is valid only for uncertain disturbances with time-invariant properties (e.g., fixed average). In the case of time-variant uncertain disturbances with changing averages (stochastic processes), a dynamic stochastic optimization problem must be formulated, where the disturbances with changing averages over a future time horizon will be taken into account. Such cases have been considered for linear control systems by Li et al.19 We are currently conducting studies to extend our approach to nonlinear dynamic optimization problems under chance constraints where time-dependent uncertain variables will be considered. To solve expression 2, the major challenge lies in the computation of the probabilities P{‚} and their gradients, with respect to decision variables required by the NLP solver. As mentioned previously, for nonlinear systems, it is very difficult to describe the distribution of constrained outputs explicitly, and, thus, a direct integration on the output side for calculating the probabilities and gradients is impossible. To overcome this problem, the inverse mapping approach proposed by Wendt et al.15 can be used, which makes it possible to use the distribution of given stochastic input variables to calculate the probabilities via back projection from the output space to the input space. A monotone relationship between the respective input and output variable is a prerequisite for using this method. In the aforementioned work,15 nonlinear open-loop problems with merely one single constrained output were considered and addressed by optimization under a single chance constraint. In this paper, the back-mapping approach is extended and adapted to solve optimal operation problems under uncertainty in a closed-loop framework, as described in Figure 1b. 
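The back-projection idea can be sketched for a scalar monotone map (an invented toy model, standing in for the process equations): instead of integrating the unknown output distribution, the output bound is mapped back to an equivalent bound on the input, whose distribution is known.

```python
from statistics import NormalDist

def yc_of_xi(xi, w=1.0):
    # hypothetical monotonically increasing map from disturbance to output
    from math import exp
    return w * exp(0.1 * xi)

def prob_via_inverse_mapping(yc_lim, lo=-40.0, hi=40.0, tol=1e-10):
    # back-project the output bound into the input space: find xi* with
    # yc(xi*) = yc_lim, then P{yc <= yc_lim} = P{xi <= xi*} = Phi(xi*)
    for _ in range(200):                 # bisection on the monotone map
        mid = 0.5 * (lo + hi)
        if yc_of_xi(mid) < yc_lim:
            lo = mid
        else:
            hi = mid
        if hi - lo < tol:
            break
    return NormalDist().cdf(0.5 * (lo + hi))

p = prob_via_inverse_mapping(1.2)   # exact up to the bisection tolerance
```

Unlike a sampling estimate, this evaluates the input-side distribution directly, so no sampling noise enters the probability or its gradient.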
In this framework, in addition to the restricted outputs y^c, the constraints on the manipulated variables u must also be formulated as chance constraints, because of their stochastic nature resulting from the closed-loop treatment. Furthermore, systems with multiple restricted outputs are considered. In this case, the user can choose between single and joint chance constraints to define the optimization problem. The effects of both constraint forms are discussed and illustrated with a concrete example.

The paper is structured as follows. The formulation of the stochastic optimization problem, depending on the entry point of the uncertain disturbances into the closed-loop control system, is discussed in section 2. The treatment of the chance constraints, the objective function, and the bounds of the manipulated variables, as well as some notes on feasibility analysis, are further aspects of this section. In section 3, the solution approach is developed and computational issues are presented. The application of the proposed approach to a continuous distillation column is shown in section 4. In addition, the different impacts of single and joint chance constraints are discussed. Finally, some concluding remarks complete this paper in section 5.

2. Nonlinear Chance Constrained Optimization for Closed-Loop Systems

The consideration of set-point optimization problems for closed-loop systems under uncertainty relates to disturbances that have important effects on the steady-state process behavior. This is the case when the disturbances have a significantly higher time constant than the process itself. These disturbance variables can then be treated as constants, but with uncertain future values.

2.1. Problem Formulation Depending on the Entry Point of Uncertain Disturbances.

To give a clear explanation of the closed-loop concept, we consider, in the following, simply a single-input/single-output system. Depending on the entry point of the uncertain disturbance ξ into the system, one can

distinguish three different steady-state problem formulations. The aim of the optimization is to minimize the operating costs by determining the optimal setpoint w for the controlled variable y, while the immeasurable output variable y^c is kept within the predefined limits.

In the first case (see Figure 3), ξ enters inside the control loop, which includes only a part of the process. The two subprocesses are described by the nonlinear equations g1 and g2, respectively. In this case, the effect of the disturbance ξ can be compensated completely by the controller. This presupposes an ideal controller behavior (i.e., the controller can keep the process at the operating point w for any realization of the disturbance). This prerequisite can be realized by the choice of a suitable controller (e.g., using an integral algorithm for zero steady-state error). As a result, the optimal operation for this case can be determined by solving the following deterministic optimization problem:

Figure 3. Schematic of the disturbance inside the control loop.

min_{y ∈ Y}  costs = f(x2, y, y^c)        (3)

s.t. g2(x2, y, y^c) = 0

y^c ≤ y^{c,lim}

In the second case (see Figure 4), the disturbance ξ acts directly on the second subprocess, outside the control loop. In this case, the immeasurable output variable y^c is a stochastic variable and, thus, a simple treatment of the constraint according to expression 3 is not possible; it must be treated as a chance constraint. If the objective function behaves stochastically, its expected value E[f(·)] is to be minimized. To find an optimal setpoint, the following stochastic optimization problem must be solved:

Figure 4. Schematic of the disturbance outside the control loop.

min_{y ∈ Y}  costs = E[f(x2, y, y^c, ξ)]        (4)

s.t. g2(x2, y, y^c, ξ) = 0

P{y^c ≤ y^{c,lim}} ≥ α

ξ ∈ Ξ

In the third case (see Figure 5), a nonseparable process is considered, as described by its nonlinear model equations g. Unlike the problems described in expressions 3 and 4, in this case, all variables must be taken into account simultaneously, and the stochastic optimization problem becomes substantially more complex and is defined as follows:

Figure 5. Schematic of an inseparable process.

min_{y ∈ Y}  costs = E[f(x, y, y^c, u, ξ)]        (5)

s.t. g(x, y, y^c, u, ξ) = 0

P{y^c ≤ y^{c,lim}} ≥ α

ξ ∈ Ξ

where the vector x contains the state variables, u is the manipulated variable, y and y^c are the measurable and nonmeasurable outputs, respectively, and ξ is the uncertain disturbance. The expected value of the objective function f is denoted as E[f(·)]. One should again note that, unlike the open-loop consideration, in the closed-loop optimization concept considered here, the controlled variable y is defined as the decision variable (independent variable), whereas the manipulated variable u and the state variables x are dependent variables. After the solution of expression 3, 4, or 5, the resulting optimal y* can be provided as the setpoint w to the closed-loop system.

2.2. Dealing with a Stochastic Objective Function.

If the objective function in expression 4 or 5 depends on uncertain variables, it must be transformed (relaxed) into a deterministic form. Note that the objective function can be stochastic even without an explicit dependence on ξ: if it depends only on x, y^c, or u, these variables themselves depend on ξ because of the closed-loop consideration. In the context of relaxation, usually the expected value of the original objective function is minimized. If the uncertain variables have a large variance, the variance of the objective function should also be taken into account. The relaxed objective function has the following form:

min  ω · E[f(x, y, y^c, u, ξ)] + (1 - ω) · V[f(x, y, y^c, u, ξ)]        (6)

where E[·] represents the expected value of the stochastic objective function and V[·] represents its variance. With the parameter ω, a weight between expectation and variance can be specified. The computational expense for evaluating the relaxed objective function depends primarily on the complexity of the objective function f(x, y, y^c, u, ξ). If the individual variables are connected linearly with each other, the expected value of f(·) depends only on the expected value of ξ, denoted by µ_ξ, and not on its variance σ_ξ. For example, the simple linear stochastic objective function f(y, ξ) = y + ξ has the following expected value:

E[f(y, ξ)] = E[y + ξ] = E[y] + E[ξ] = y + µ_ξ = f(y, µ_ξ)

where y is a decision variable. For dependent variables that have nonlinear relations with the uncertain variables within the objective function, the expected value depends not only on µ_ξ but also on σ_ξ. Generally, for practically relevant cases with multiple correlated uncertain variables, the calculation of the expected value is, numerically, a quite expensive task. A possibility to avoid this considerable numerical burden consists of replacing the stochastic objective function by an equivalent deterministic objective function. That means an expression must be found that is equivalent to the original objective function in its physical meaning and, moreover, depends only on


the decision variables. This function must be in a strictly monotone relationship to the expected value of the original objective function, i.e., E[f(x,u)] ⇔ f_e(y). This method of substitution is applied to an example given in section 4. Moreover, according to expression 6, the variance of the objective function must be evaluated. Its computation for a stochastic nonlinear function with multivariate uncertain variables remains a challenging problem. However, this can also be avoided if an equivalent deterministic objective function can be found.

2.3. Dealing with Bounds of Manipulated Variables.

In the open-loop optimization, the manipulated variable u is the decision variable and, therefore, has a deterministic nature. In the closed-loop framework considered here, u depends on the stochastic disturbance ξ, as well as on the decision variable y. For this reason, u is now stochastic. In practice, physical bounds always exist for manipulated variables, and they must be taken into account in the closed-loop optimization framework. The problem definitions described by expressions 3 and 4 do not depend on u directly; in these cases, the consideration of the bounds on u is a task of process design combined with controller synthesis. In expression 5, however, the variables are coupled with each other, and, therefore, the constrained manipulated variables must be included in the optimization task explicitly. Because of the stochastic nature of u, a simple treatment of the range limits in the form u^{min} ≤ u ≤ u^{max} within the optimization problem is impossible. Therefore, analogous to the stochastic outputs y^c, a chance constraint for u can be defined:

P{u^{min} ≤ u ≤ u^{max}} ≥ α        (7)

i.e., holding the restrictions on the manipulated variable is ensured with a user-defined probability. This means the closed loop should work with this confidence level when the optimal setpoints gained from the solution of the optimization problem are implemented. According to the discussion in section 2.4, one can decide whether to satisfy the bounds in expression 7 as single or joint chance constraints. In an open-loop optimization, the inputs are decision variables and thus their bounds can be treated as hard bounds. However, an open-loop result (i.e., a fixed optimal value for the input) is difficult to implement. In a closed-loop framework, the inputs will be used to compensate disturbances; thus, no closed-loop system can avoid input saturation if a disturbance is large enough. In our closed-loop framework, the inputs are influenced by the uncertain variables and, thus, must be considered to be stochastic variables. Their bounds are treated in a stochastic way, namely, as chance constraints. In this way, the saturation effect caused by uncertain disturbances can be avoided with a predefined high probability. If the input probability level is chosen as 100%, the satisfaction of the input constraints can be ensured; however, this choice may lead to high operating costs.

2.4. Single Versus Joint Chance Constraints.

The stochastic optimization problem (expression 5) becomes considerably more complicated if a multi-input/multi-output (MIMO) system is considered, with multiple constrained output variables y_i^c (for i = 1, ..., I) and manipulated variables u_j (for j = 1, ..., J), respectively. In this case, one can choose between two different forms to formulate the chance constraints. Either one defines a single chance constraint for every single restriction,

P_i{ y_i^c ≤ y_i^{c,lim} } ≥ α_i   (for i = 1, ..., I)        (8)

or one considers all restricted variables together in a joint chance constraint:

P{ y_i^c ≤ y_i^{c,lim}  (for i = 1, ..., I) } ≥ α        (9)
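The difference between the two forms can be checked by simulation on an invented two-output toy model driven by correlated normal disturbances; every equation and number below is illustrative only:

```python
import random
from math import sqrt

# toy MIMO illustration: two constrained outputs driven by correlated
# normal disturbances; compare single (marginal) and joint probabilities
def sample_outputs(n=50000, rho=0.6, seed=2):
    rng = random.Random(seed)
    out = []
    for _ in range(n):
        z1, z2 = rng.gauss(0, 1), rng.gauss(0, 1)
        xi1 = z1
        xi2 = rho * z1 + sqrt(1 - rho**2) * z2    # correlated disturbance
        out.append((1.0 + 0.5 * xi1, 1.0 + 0.5 * xi2))  # hypothetical y1c, y2c
    return out

lim = 1.5
ys = sample_outputs()
p1 = sum(y1 <= lim for y1, _ in ys) / len(ys)     # single: P1{y1c <= lim}
p2 = sum(y2 <= lim for _, y2 in ys) / len(ys)     # single: P2{y2c <= lim}
pj = sum(y1 <= lim and y2 <= lim for y1, y2 in ys) / len(ys)  # joint
# pj <= min(p1, p2): the joint requirement is always the stricter one
```

Requiring the joint probability to reach a level α is therefore more demanding than requiring each single probability to reach the same α.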

In addition, constraints for double-bounded variables have the following form:

y_i^{c,min} ≤ y_i^c ≤ y_i^{c,max}   (for i = 1, ..., I)        (10)

They are often found in process engineering, and the corresponding chance constraints for each single variable can be defined as

P{ y_i^{c,min} ≤ y_i^c ≤ y_i^{c,max} } ≥ α_i   (for i = 1, ..., I)        (11)

Strictly speaking, each of these chance constraints simultaneously comprises two conditions, in the form

P{ y_i^c ≥ y_i^{c,min},  y_i^c ≤ y_i^{c,max} } ≥ α_i^{sim}   (for i = 1, ..., I)        (12)

whereas separate chance constraints for such double-bounded variables can be approximated as follows:

P{ y_i^c ≤ y_i^{c,max} } ≥ α_{i1}   (for i = 1, ..., I)        (13a)

P{ y_i^c ≥ y_i^{c,min} } ≥ α_{i2}   (for i = 1, ..., I)        (13b)

It is left to the user whether single or joint chance constraints are defined. Independently of this, for double-bounded variables, the properties of a joint chance constraint can be achieved by the following expression:

P{ y_i^{c,min} ≤ y_i^c ≤ y_i^{c,max}  (for i = 1, ..., I) } ≥ α        (14)
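The relation between the separate levels in expressions 13a/13b and the resulting two-sided probability can be checked numerically; the standard-normal toy case below is invented for illustration:

```python
from statistics import NormalDist

# numerical check: for yc ~ N(0, 1) with a double-sided bound, two
# one-sided chance constraints at levels a1 and a2 only guarantee the
# two-sided probability a1 + a2 - 1
nd = NormalDist()
lo, hi = nd.inv_cdf(0.10), nd.inv_cdf(0.90)   # bounds chosen so that
a1 = nd.cdf(hi)                                # P{yc <= hi} = 0.90
a2 = 1.0 - nd.cdf(lo)                          # P{yc >= lo} = 0.90
joint = nd.cdf(hi) - nd.cdf(lo)                # P{lo <= yc <= hi}
# joint = 0.80 = a1 + a2 - 1: the single levels do not carry over jointly
```

This is exactly the bound derived in the discussion that follows.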

Using expressions 12 and 13, the difference between the two restriction forms should be briefly discussed. Because, by definition, P{-∞ ≤ y_i^c ≤ ∞} = 1, one can obtain the form

1 - P{ y_i^c < y_i^{c,min} } ≥ α_{i2}   (for i = 1, ..., I)        (15)

by reformulating the second expression in expression 13. Adding expression 13a to expression 15 leads to

P{ y_i^{c,min} ≤ y_i^c ≤ y_i^{c,max} } ≥ α_{i1} + α_{i2} - 1   (for i = 1, ..., I)        (16)

For example, assuming α_{i1} = α_{i2} = 0.9, the guaranteed α_i^{sim} value is only 0.8. Thus, it can be seen that a confidence level for single chance constraints does not guarantee that both limits are held simultaneously with this level. Which form should be used depends primarily on the concrete application for which the restriction model is intended. Because, in the separate case, the probability level can be defined individually for every restriction, single chance constraints can be used if different weights or safety requirements should be assigned to the individual restrictions. If the simultaneous satisfaction of the restrictions is desired with a high probability, a joint chance constraint is the choice. A detailed comparison of both restriction forms for several constrained output variables is made in section 4, using a concrete example.

2.5. Feasibility Analysis and Convexity of Optimization Problems under Chance Constraints.

An important aspect in solving chance constrained optimization problems is to perform


Figure 6. Sequential optimization scheme.

a feasibility analysis, so that the solvability of the optimization problem is ensured. It should be avoided that the solution space is empty because the predefined probability level is too high, so that no feasible solution exists. Therefore, the following optimization problem must be solved to evaluate the maximum reachable probability level α^{max}:

max α        (17)

s.t. g(x, y, u, ξ) = 0

P{ y_i^c ≤ y_i^{c,lim}  (for i = 1, ..., I);  u_j^{min} ≤ u_j ≤ u_j^{max}  (for j = 1, ..., J) } = α

y ∈ Y,  ξ ∈ Ξ

Unlike the previous problem formulations, where α is a predefined confidence level, in this problem it is a decision variable.

Convexity analysis of chance constrained optimization problems is another essential aspect; i.e., the optimality of the resultant solution should be assessed, in particular because a gradient-based NLP solver, which converges to a local minimum for nonconvex problems, is usually used to solve the relaxed problem. The convexity of optimization problems under chance constraints is briefly discussed here, based on a theorem of Prekopa.6 We consider the nonlinear constrained functions y_i^c(y, u, ξ) (for i = 1, ..., I) and u_j(y, y^c, ξ) (for j = 1, ..., J). For the existence of a convex region, two conditions must be satisfied. On the one hand, it is required that the uncertain variables ξ have a pdf whose logarithm is concave; the normally distributed random variables considered here are logarithmically concave. On the other hand, the individual functions y_i^c and u_j must be quasi-convex, which is usually difficult to verify. Generally, this property is not available, because the functional relations for y_i^c and u_j result from a complex model. For this reason, the solution of nonlinear chance constrained problems gained from the proposed optimization approach generally represents a local optimum. The difficult task of finding the global solution for such problems is out of the scope of this paper and, therefore, is not considered here.

3. Set-Point Optimization Framework

The objective is to determine the optimal setpoints of the controlled variables for the optimal operation of nonlinear processes. Hence, instead of the manipulated variables u, the controlled variables y are defined as decision variables. As a result, the optimal solutions for the controlled variables, y*, are to be used as setpoints for the available closed-loop control system. The numerical implementation scheme to solve the chance constrained optimization problem is shown in Figure 6.
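A minimal sketch of such a two-layer scheme, with an invented one-dimensional model standing in for the simulation layer and a crude grid search standing in for the SQP solver (all numbers are illustrative):

```python
from statistics import NormalDist

nd = NormalDist(0.0, 0.5)          # toy disturbance xi ~ N(0, 0.5)
alpha, yc_lim = 0.95, 2.0

def simulation_layer(w):
    # toy stand-in for the model-based simulation: returns the objective
    # (here, cost falls as w rises) and P{yc <= yc_lim} for yc = w + xi
    cost = -w
    prob = nd.cdf(yc_lim - w)
    return cost, prob

def optimization_layer():
    # crude stand-in for the NLP solver: pick the feasible w of least cost
    best = None
    for k in range(2001):
        w = 0.001 * k
        cost, prob = simulation_layer(w)
        if prob >= alpha and (best is None or cost < best[1]):
            best = (w, cost)
    return best[0]

w_opt = optimization_layer()   # largest w still satisfying the chance constraint
```

In the real framework, the simulation layer evaluates the probabilities and their gradients from the process model, and an SQP solver replaces the grid search; the division of labor between the two layers is the same.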
The sequential optimization approach developed by Wendt et al.15 is adapted to address the problem in the closed-loop framework. Systems with multiple constrained outputs y^c are considered in this work. The solution scheme consists of two layers. In the optimization layer, only the decision variables are treated by the NLP solver (often, an SQP method is used). In each iteration, their values are delivered to the simulation layer. In the simulation layer, based on the model equations, the probabilities of holding the process constraints and the objective function, as well as their gradients with respect to the decision variables, are computed. Note that, for linear process models, interior-point methods would be very efficient. Interior-point methods have also been applied to many nonlinear optimization cases, especially for problems with many inequality constraints.24,25 However, in our work, the formulation of the optimization problem under chance constraints leads to a nonlinear program with few inequality constraints (a joint chance constraint leads to only one). In the following, some computational aspects are discussed, in regard to probability and gradient computation, which represent the core of the proposed optimization approach.

3.1. Computation of Chance Constraints. A favorable property of linear systems is the fact that normally distributed input variables ξ lead to normally distributed outputs y^c = f_lin(ξ). Therefore, the pdf of the output variables can be derived and the probability of holding the output chance constraints can be calculated directly on the output side. For nonlinear systems, this property does not hold, and, thus, it is difficult to determine an explicit expression for the output distribution. To avoid this difficulty, Wendt et al.15 proposed an approach that utilizes an inverse mapping from the output constrained region to an input region. The probability calculation can then be made on the input side, where the distribution of the stochastic input variables is known. They are generally given as a multivariate normal distribution:

F_n(ξ) ≈ N_n(µ,Σ) = [1/√((2π)^n |Σ|)] exp[−(1/2)(ξ − µ)^T Σ^{-1} (ξ − µ)]   (18)

where F_n is the n-dimensional pdf of the normally distributed uncertain variables ξ. In eq 18, the means of the stochastic variables are collected in the expected-value vector µ = [µ_1, µ_2, ..., µ_n]^T and the covariance matrix is given by

Σ = [ var_1    cov_12   ···   cov_1n
      cov_12   var_2    ···   cov_2n
        ⋮        ⋮       ⋱      ⋮
      cov_n1   cov_n2   ···   var_n ] ∈ R^{n×n}   (19)

where var_a represents the variance of each stochastic variable and cov_ab is the covariance of variables ξ_a and ξ_b (for a,b = 1, ..., n). In the previously mentioned work,15 only a single one-sided restricted output variable is considered, in the form of a single chance constraint. In this work, systems with multiple constrained output variables y^c and, in addition, double-bounded variables are taken into account. In this way, it is possible to choose between single and joint chance constraints in the nonlinear problem formulation. The prerequisite for using the reverse mapping approach is a monotone relationship between the respective constrained output variable y_i^c and one stochastic input variable ξ_{n,i} in the single case, and a monotone relationship between all y_i^c (for i = 1, ..., I) and one input variable ξ_n in the case of a joint chance constraint, respectively. For the probability calculation, all random variables that have an influence on y_i^c must be taken into account. This leads to a multivariate integration of the n-dimensional joint pdf F_n (see expression 18), according to the following expression:

P{y_i^{c,min} ≤ y_i^c ≤ y_i^{c,max}} = P{ξ_{n,i}^{min} ≤ ξ_{n,i} ≤ ξ_{n,i}^{max}}
= ∫_{-∞}^{∞} ··· ∫_{-∞}^{∞} ∫_{ξ_{n,i}^{min}}^{ξ_{n,i}^{max}} F_n(ξ_1, ξ_2, ..., ξ_{n,i}) dξ_{n,i} dξ_{n-1} ... dξ_1   (20)

The integration limits ξ_{n,i}^{min} and ξ_{n,i}^{max} of the innermost integral, which corresponds to the monotone input variable ξ_{n,i}, can be determined using the process model equations g(·). The output constraints are thereby mapped to input limits, and g(·) can be considered mathematically eliminated from the optimization problem. The calculation occurs via (note the order of the superscripts max and min)

ξ_{n,i}^{max/min} = g^{-1}(ξ_1, ..., ξ_{n-1}, y_i^{c,max/min}, y)   (if ξ_{n,i} is positively monotone to y_i^c)
ξ_{n,i}^{max/min} = g^{-1}(ξ_1, ..., ξ_{n-1}, y_i^{c,min/max}, y)   (if ξ_{n,i} is negatively monotone to y_i^c)   (21)

where g^{-1}(·) represents the inverse system of process model equations. For example, in the case of a positive monotony between y_i^c and ξ_{n,i}, a given upper bound y_i^{c,max} yields an upper bound ξ_{n,i}^{max} for the integration in the input region. The joint probability P{y_i^{c,min} ≤ y_i^c ≤ y_i^{c,max} (for i = 1, ..., I)} results from the subspace Ξ_n that all individual probabilities have in common. This means that, for all y_i^c, the limits ξ_{n,i}^{min/max} must first be calculated according to expression 21. After that, the bounds of the innermost integral necessary for the calculation of the simultaneous probability become ξ_n^{max} = min_i[ξ_{n,i}^{max}] and ξ_n^{min} = max_i[ξ_{n,i}^{min}], respectively. The integration is then performed according to expression 20, but with the modified integral limits ξ_n^{min} and ξ_n^{max}.

For one-sided constrained variables of the form y_i^c ≤ y_i^{c,lim}, the aforementioned calculation scheme regarding a joint chance constraint can be used analogously. In this case, the calculation reduces to one integral limit (ξ_{n,i}^{max} or ξ_{n,i}^{min}) of the innermost integral.

Because a gradient-based method is used to solve the NLP problem, in addition to the probabilities, their gradients with respect to the decision variables are needed. Because only the integral limits depend on the decision variables, the following expression can be used for the gradient calculation:

∂P{·}/∂y = ∫_{-∞}^{∞} ··· ∫_{-∞}^{∞} F_n(ξ_1, ξ_2, ..., ξ_{n,i}^{max}) (∂ξ_{n,i}^{max}/∂y) dξ_{n-1} ... dξ_1
− ∫_{-∞}^{∞} ··· ∫_{-∞}^{∞} F_n(ξ_1, ξ_2, ..., ξ_{n,i}^{min}) (∂ξ_{n,i}^{min}/∂y) dξ_{n-1} ... dξ_1   (22)

As already described in section 2.3, the manipulated variables are also stochastic, and, thus, their constraints should be treated as chance constraints. This can be done in the same way as for y_i^c.

3.2. Numerical Integration. The computation of the multivariate integrals in expressions 20 and 22 is performed via standardization and a dimension-reduction algorithm, as briefly described in the following. Initially, the uncertain variables are transformed to the standard form, with

ξ̂_l = (ξ_l − µ_l)/σ_l   (for l = 1, ..., n)   (23)

where µ_l and σ_l are the expected value and the standard deviation of each ξ_l, respectively. As a result, the original joint pdf (expression 18) is replaced by the standard normal pdf,

φ̂_n(ξ̂) ≈ N_n(µ̂ = 0, Σ̂)

The covariance matrix Σ̂ in standard form is structured as follows:

Σ̂ = [ 1      ρ_12   ···   ρ_1n
       ρ_12   1      ···   ρ_2n
        ⋮      ⋮      ⋱     ⋮
       ρ_n1   ρ_n2   ···   1   ] ∈ R^{n×n}   (24)

where ρ_ab represents the correlation coefficient between ξ_a and ξ_b (for a,b = 1, ..., n). The probability of the uncertain variables lying in a given region can be calculated via integration of the multivariate standard normal pdf,

Φ̂_n(ẑ_1^{min} ≤ ξ̂_1 ≤ ẑ_1^{max}, ..., ẑ_n^{min} ≤ ξ̂_n ≤ ẑ_n^{max}, Σ̂) = ∫_{ẑ_1^{min}}^{ẑ_1^{max}} ··· ∫_{ẑ_n^{min}}^{ẑ_n^{max}} φ̂_n(ξ̂, Σ̂) dξ̂_n ... dξ̂_1   (25)

which can also be written as6

Φ̂_n(ẑ_1^{min} ≤ ξ̂_1 ≤ ẑ_1^{max}, ..., ẑ_n^{min} ≤ ξ̂_n ≤ ẑ_n^{max}, Σ̂)
= ∫_{ẑ_1^{min}}^{ẑ_1^{max}} Φ̂_{n-1}(ẑ_2^{(1)min} ≤ ξ̂_2^{(1)} ≤ ẑ_2^{(1)max}, ..., ẑ_n^{(1)min} ≤ ξ̂_n^{(1)} ≤ ẑ_n^{(1)max}, Σ̂^{(1)}) φ̂_1(ξ̂_1) dξ̂_1   (26)

In expression 26, the parenthetical superscript "(1)" denotes a reduction of the integral dimension by one, φ̂_1 is the one-dimensional standard normal pdf of ξ̂_1 (expressed by the subscript "1"), and Φ̂_{n-1} represents the (n − 1)-dimensional standard normal cumulative distribution function (cdf) of ξ̂_2^{(1)}, ..., ξ̂_n^{(1)}. The parameters ẑ_k^{(1)min} and ẑ_k^{(1)max}, as well as ξ̂_k^{(1)}, are given by

ẑ_{k,min/max}^{(1)} = (ẑ_{k,min/max} − ρ_{k,1} ξ̂_1)/√(1 − ρ_{k,1}^2)   (27)

ξ̂_k^{(1)} = (ξ̂_k − ρ_{k,1} ξ̂_1)/√(1 − ρ_{k,1}^2)   (for k = 2, ..., n)   (28)

which can be derived from the conditional pdf. The elements of Σ̂^{(1)} ∈ R^{(n−1)×(n−1)} have the form

ρ_{a,b}^{(1)} = (ρ_{a+1,b+1} − ρ_{a+1,1} ρ_{b+1,1}) / (√(1 − ρ_{a+1,1}^2) √(1 − ρ_{b+1,1}^2))   (if a,b = 1, ..., n − 1 and a ≠ b)
ρ_{a,b}^{(1)} = 1   (if a = b)   (29)
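For the bivariate case (n = 2), the reduction step of expression 26 already gives the final form: a one-dimensional integral of the one-dimensional cdf over the conditioned bounds of expression 27, weighted by the standard normal pdf. A minimal Python sketch (using a plain midpoint rule as a stand-in for the orthogonal collocation applied later in the paper; all numerical settings are illustrative assumptions):

```python
import math

def phi1(x):
    """One-dimensional standard normal pdf."""
    return math.exp(-0.5 * x * x) / math.sqrt(2.0 * math.pi)

def Phi1(a, b):
    """One-dimensional standard normal probability of the interval [a, b]."""
    cdf = lambda x: 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))
    return cdf(b) - cdf(a)

def Phi2(z1, z2, rho, n=2000):
    """P{z1[0] <= xi1 <= z1[1], z2[0] <= xi2 <= z2[1]} for standard normal
    (xi1, xi2) with correlation rho: outer midpoint-rule integral of Phi1
    over the conditioned bounds (z - rho*xi1)/sqrt(1 - rho^2)."""
    a, b = z1
    s = math.sqrt(1.0 - rho * rho)
    h = (b - a) / n
    total = 0.0
    for i in range(n):
        t = a + (i + 0.5) * h                      # outer variable xi1
        total += Phi1((z2[0] - rho * t) / s,
                      (z2[1] - rho * t) / s) * phi1(t) * h
    return total

# sanity check: with rho = 0 the joint probability factorizes
p_indep = Phi2((-1.0, 1.0), (-2.0, 2.0), rho=0.0)
```

For n > 2 the same conditioning step is applied recursively, one dimension per level, exactly as in expressions 30 and 31 below.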


The function Φ̂_{n-1} can be expressed as

Φ̂_{n-1}(ẑ_2^{(1)min} ≤ ξ̂_2^{(1)} ≤ ẑ_2^{(1)max}, ..., ẑ_n^{(1)min} ≤ ξ̂_n^{(1)} ≤ ẑ_n^{(1)max}, Σ̂^{(1)})
= ∫_{ẑ_2^{(1)min}}^{ẑ_2^{(1)max}} Φ̂_{n-2}(ẑ_3^{(2)min} ≤ ξ̂_3^{(2)} ≤ ẑ_3^{(2)max}, ..., ẑ_n^{(2)min} ≤ ξ̂_n^{(2)} ≤ ẑ_n^{(2)max}, Σ̂^{(2)}) φ̂_1(ξ̂_2^{(1)}) dξ̂_2^{(1)}   (30)

where the superscript "(2)" denotes a reduction of the integral dimension by two. Continuing this procedure (n − 2) times leads to

Φ̂_2(ẑ_{n-1}^{(n-2)min} ≤ ξ̂_{n-1}^{(n-2)} ≤ ẑ_{n-1}^{(n-2)max}, ẑ_n^{(n-2)min} ≤ ξ̂_n^{(n-2)} ≤ ẑ_n^{(n-2)max}, Σ̂^{(n-2)})
= ∫_{ẑ_{n-1}^{(n-2)min}}^{ẑ_{n-1}^{(n-2)max}} ∫_{ẑ_n^{(n-2)min}}^{ẑ_n^{(n-2)max}} φ̂_2(ξ̂^{(n-2)}, Σ̂^{(n-2)}) dξ̂_n^{(n-2)} dξ̂_{n-1}^{(n-2)}
= ∫_{ẑ_{n-1}^{(n-2)min}}^{ẑ_{n-1}^{(n-2)max}} Φ̂_1(ẑ_n^{(n-1)min} ≤ ξ̂_n^{(n-1)} ≤ ẑ_n^{(n-1)max}) φ̂_1(ξ̂_{n-1}^{(n-2)}) dξ̂_{n-1}^{(n-2)}   (31)

where Φ̂_1 is the cdf of a one-dimensional standard normal distribution, which can be calculated directly by

Φ̂_1(ẑ_n^{(n-1)min} ≤ ξ̂_n^{(n-1)} ≤ ẑ_n^{(n-1)max}) = ∫_{ẑ_n^{(n-1)min}}^{ẑ_n^{(n-1)max}} φ̂_1(ξ̂_n^{(n-1)}) dξ̂_n^{(n-1)}   (32)

with

ẑ_{n,min/max}^{(n-1)} = (ẑ_{n,min/max}^{(n-2)} − ρ_{2,1}^{(n-2)} ξ̂_{n-1}^{(n-2)})/√(1 − (ρ_{2,1}^{(n-2)})^2)   (33)

With this dimension-reduction procedure, the original integral is computed stepwise, from Φ̂_1 up to Φ̂_n. The calculation of Φ̂_1 can be done by a standard numerical toolbox function (e.g., normcdf in MATLAB). For the numerical evaluation of the other integrals Φ̂_2, ..., Φ̂_n, orthogonal collocation on finite elements26 has been used. Note that, for infinite integration bounds [−∞, +∞] (e.g., for some integrals in expression 20), the numerical calculation based on the standard normal distribution can be restricted to the domain [−4, +4]. The gradients of the probabilities with respect to the decision variables can be calculated simultaneously at the same collocation points. This is an important advantage of this optimization scheme, in regard to computational efficiency. From expression 26, we have

∂Φ̂_n(ẑ_1^{min} ≤ ξ̂_1 ≤ ẑ_1^{max}, ..., ẑ_n^{min} ≤ ξ̂_n ≤ ẑ_n^{max}, Σ̂)/∂y
= ∫_{ẑ_1^{min}}^{ẑ_1^{max}} [∂Φ̂_{n-1}(ẑ_2^{(1)min} ≤ ξ̂_2^{(1)} ≤ ẑ_2^{(1)max}, ..., ẑ_n^{(1)min} ≤ ξ̂_n^{(1)} ≤ ẑ_n^{(1)max}, Σ̂^{(1)})/∂y] φ̂_1(ξ̂_1) dξ̂_1   (34)

To compute the gradients, the dimension reduction must also be applied; analogous to the probability calculation, the transformation carried out (n − 2) times leads to

∂Φ̂_2(ẑ_{n-1}^{(n-2)min} ≤ ξ̂_{n-1}^{(n-2)} ≤ ẑ_{n-1}^{(n-2)max}, ẑ_n^{(n-2)min} ≤ ξ̂_n^{(n-2)} ≤ ẑ_n^{(n-2)max}, Σ̂^{(n-2)})/∂y
= ∫_{ẑ_{n-1}^{(n-2)min}}^{ẑ_{n-1}^{(n-2)max}} [∂Φ̂_1(ẑ_n^{(n-1)min} ≤ ξ̂_n^{(n-1)} ≤ ẑ_n^{(n-1)max})/∂y] φ̂_1(ξ̂_{n-1}^{(n-2)}) dξ̂_{n-1}^{(n-2)}   (35)

where

∂Φ̂_1(ẑ_n^{(n-1)min} ≤ ξ̂_n^{(n-1)} ≤ ẑ_n^{(n-1)max})/∂y = φ̂_1(ẑ_n^{(n-1)max}) (∂ẑ_n^{(n-1)max}/∂y) − φ̂_1(ẑ_n^{(n-1)min}) (∂ẑ_n^{(n-1)min}/∂y)   (36)

and, from expression 27,

∂ẑ_{n,min/max}^{(1)}/∂y = ∂[(ẑ_{n,min/max} − ρ_{n,1} ξ̂_1)/√(1 − ρ_{n,1}^2)]/∂y = (∂ẑ_{n,min/max}/∂y)/√(1 − ρ_{n,1}^2)   (37)

The integration bounds z_{n,min/max}, as well as their gradients with respect to the decision variables (∂z_{n,min/max}/∂y), are computed based on the process model equations (see also expression 21):

z_{n,min/max} = g^{-1}(ξ_1, ξ_2, ..., ξ_{n-1}, y_i^{c,min/max}, y)   (38)

The Newton-Raphson method is used to solve the model equations. For every integration level, it is necessary to reverse the dimension reduction before calculating the bounds. This can be done through reformulation of expression 28; for instance, for a reduction of the integral dimension by one (red = 1):

ξ̂_k = ξ̂_k^{(1)} √(1 − ρ_{k,1}^2) + ρ_{k,1} ξ̂_1   (for k = 2, ..., n)   (39)

The uncertain variables are then converted from the standard form to real values, according to

ξ_l = ξ̂_l σ_l + µ_l   (for l = 1, ..., n − 1)   (40)

such that the integration bounds can be calculated by means of the process model equations. After that, the computed bounds z_{n,min/max} and their gradients (∂z_{n,min/max}/∂y) are transformed back to the standard form ẑ_{n,min/max} and (∂ẑ_{n,min/max}/∂y), and the dimension must again be reduced until the respective integration level is reached. For example, for the calculation of Φ̂_1, the bounds ẑ_n^{(n-1)min} and ẑ_n^{(n-1)max}, as well as their gradients (∂ẑ_n^{(n-1)min}/∂y) and (∂ẑ_n^{(n-1)max}/∂y), are required, which can be evaluated using expressions 33 and 37, respectively.

A sequential approach as described in this section is known to have the drawback that, if the Newton solver fails because of a numerical reason or an infeasible scenario, the optimization will not converge. This leads to a breakoff of the optimization procedure. An infeasible scenario can be avoided by solution of expression 17 before the solution of the original stochastic

optimization problem (see section 2.5). We only consider the set-point optimization, which is usually computed off-line; therefore, one can re-start the computation if such cases occur. For an on-line computation, a simultaneous approach24 may be taken into account.

4. Application to a Continuous Distillation Column

In this section, the proposed optimization approach is applied to the optimal operation of a distillation column. The setpoints of the closed-loop controllers are to be determined, and the results from both single and joint chance constraints are compared. The column has 20 bubble-cap trays and operates continuously to separate a mixture of methanol and water. The component with the lower boiling point (methanol) is taken at the condenser with a distillate flow rate D, and the second component (water) is taken at the bottom with a product flow rate L. The plant and feedback control loops are shown schematically in Figure 7.

Figure 7. Schematic of a distillation column with control loops.

The closed-loop control of this plant is performed via the temperature control loop TC001 at the column bottom, as well as the reflux-ratio control loop V001 at the column top. The controlled variables y are the bottom temperature TN and the reflux ratio r. One immeasurable variable is the concentration of methanol in the distillate, x1,1, which must be constrained because of the product specification. Furthermore, the distillate flow rate must be greater than Dmin (e.g., as feed for a downstream plant), which is another constraint concerning the column operation.

In this example, we consider the feed conditions (flow rate F, concentration of methanol x1,F, temperature TF) as external uncertainties and the tray efficiency η as a model parameter uncertainty, because the tray efficiency is known as the most significant uncertain parameter in a distillation model. Other parameters in the model (e.g., thermodynamic parameters in the equilibrium and enthalpy calculations) are relatively accurate and, thus, are considered deterministic. The feed conditions are considered uncertain because, in practice, the feed comes from upstream plants or external suppliers; hence, the composition of the delivered feed stream, as well as its flow rate, vary within some ranges, which represent significant (uncertain) disturbances for the distillation process. All these random variables are assumed to be correlated and normally distributed, i.e., given as a multivariate normal distribution described by the parameters listed in Table 1 and the correlation matrix R:

Table 1. Parameters of Random Variables

random variable | µ            | σ
η               | 0.70         | 0.001
F               | 20.00 L/h    | 1.000 L/h
TF              | 30.00 °C     | 1.000 °C
x1,F            | 0.20 mol/mol | 0.010 mol/mol

R = [ 1.0  0.3  0.1  0.1
      0.3  1.0  0.1  0.3
      0.1  0.1  1.0  0.1
      0.1  0.3  0.1  1.0 ]

The optimization task is to find economically optimal setpoints for the two control loops under the uncertainties and, meanwhile, satisfy the constraints given previously. Economically optimal operation of this plant means that the reboiler heat duty Pheat required at the bottom is to be minimized. Because of the closed-loop consideration, the manipulated variable Pheat is stochastic (it depends on the uncertain variables) and, therefore, its expected value E[Pheat(·)] would have to be used in the objective function. As described in section 2.2, the resulting numerical burden can be avoided through substitution of Pheat(·) by an equivalent deterministic objective function. For this reason, instead of Pheat(·), the bottom temperature TN can be chosen and minimized, which is equivalent to minimizing the reboiler heat duty. Because TN is a decision variable, the objective function is deterministic. From the aforementioned considerations, we define the following chance constrained optimization problem:

min_{TN,r} f = TN   (41)

s.t.
90 °C ≤ TN ≤ 100 °C
1 ≤ r ≤ 6
g = 0
P{D ≥ 6 L/h} ≥ αD
P{x1,1 ≥ 0.99 mol/mol} ≥ αx1,1
P{0 ≤ Pheat ≤ 6800 W} ≥ αPheat

or, with a joint chance constraint,

P{D ≥ 6 L/h, x1,1 ≥ 0.99 mol/mol, 0 ≤ Pheat ≤ 6800 W} ≥ α

where the bounds of the decision variables, TN and r, are specified according to process values. The process model


Figure 8. Monotone relationships between the feed concentration and the constrained variables.

contained in g involves 193 nonlinear equations, based on mass and energy balances, as well as phase-equilibrium relations on each tray of the distillation column. The inequality constraints for x1,1, D, and Pheat are formulated as both single and joint chance constraints, respectively. As described in section 2.3, the bounds of the manipulated variable Pheat have to be taken into consideration as chance constraints, to ensure the availability of the reboiler heat duty in the operation. To use the mapping approach, a monotonic relationship between each of the constrained variables x1,1, D, and Pheat and one stochastic input variable must be determined. Because of the complex system of model equations, the proof of monotonicity can only be achieved by simulation. In particular, for a joint chance constraint, this is a difficult task, because an uncertain input must be found that is monotonic to both outputs, as well as to the manipulated variable Pheat. As shown in Figure 8, the feed concentration x1,F is monotonic to all constrained variables and, thus, can be used as the projected stochastic input variable. The integral bounds related to the limited variables can be calculated using the Newton method, based on the model equations of the distillation column:

x1,F^{min,x1,1} = g^{-1}(η, F, TF, x1,1 = 0.99 mol/mol, TN, r)   (42a)

x1,F^{min,D} = g^{-1}(η, F, TF, D = 6 L/h, TN, r)   (42b)

x1,F^{min Pheat / max Pheat} = g^{-1}(η, F, TF, Pheat = 0/6800 W, TN, r)   (42c)
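The inverse mapping in eqs 42a-42c amounts to a one-dimensional root search: find the feed concentration at which the constrained variable reaches its limit, with the other uncertain inputs held at their quadrature values. A minimal Newton-Raphson sketch, with a hypothetical monotone surrogate standing in for the 193-equation column model (the surrogate function and all numbers are illustrative, not from the paper):

```python
import math

def newton_bound(model, y_lim, xi0, tol=1e-10, max_iter=50):
    """Solve model(xi) = y_lim for the integration bound xi by
    Newton-Raphson, using a central finite-difference derivative."""
    xi = xi0
    for _ in range(max_iter):
        f = model(xi) - y_lim
        if abs(f) < tol:
            return xi
        h = 1e-7 * max(1.0, abs(xi))
        dfdxi = (model(xi + h) - model(xi - h)) / (2.0 * h)
        xi -= f / dfdxi
    raise RuntimeError("Newton iteration did not converge")

# hypothetical surrogate: distillate purity rising monotonically with x1,F
purity = lambda x1F: 1.0 - math.exp(-5.0 * x1F)
x1F_min = newton_bound(purity, 0.99, xi0=0.5)  # bound where purity hits 0.99
```

For the column itself, each evaluation of model(xi) would require solving the full equation system g = 0, which is why a failed Newton step aborts the sequential scheme, as noted above.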

In the case of single chance constraints, the resulting single integral bounds can be used directly to calculate the respective probabilities. The single chance constraint for the methanol concentration in the distillate, x1,1, for instance, can then be calculated as follows:

P{x1,1 ≥ 0.99 mol/mol} = P{x1,F ≥ x1,F^{min,x1,1}}
= ∫_{-∞}^{∞} ∫_{-∞}^{∞} ∫_{-∞}^{∞} ∫_{x1,F^{min,x1,1}}^{∞} F(η, F, TF, x1,F) dx1,F dTF dF dη   (43)

The respective gradient can be calculated by the following expression:

∂P{·}/∂y = −∫_{-∞}^{∞} ∫_{-∞}^{∞} ∫_{-∞}^{∞} F(η, F, TF, x1,F^{min,x1,1}) (∂x1,F^{min,x1,1}/∂y) dTF dF dη   (44)

For the calculation of a joint probability, the bounds must be chosen according to

x1,F^{max} = min[∞, ∞, x1,F^{max Pheat}]^T   (45a)

x1,F^{min} = max[x1,F^{min,D}, x1,F^{min,x1,1}, x1,F^{min Pheat}]^T   (45b)

Using these integration limits, the joint probability can be calculated by

P{D ≥ 6 L/h, x1,1 ≥ 0.99 mol/mol, 0 ≤ Pheat ≤ 6800 W} = P{x1,F^{min} ≤ x1,F ≤ x1,F^{max}}
= ∫_{-∞}^{∞} ∫_{-∞}^{∞} ∫_{-∞}^{∞} ∫_{x1,F^{min}}^{x1,F^{max}} F(η, F, TF, x1,F) dx1,F dTF dF dη   (46)

and the gradient can be calculated by

∂P{·}/∂y = ∫_{-∞}^{∞} ∫_{-∞}^{∞} ∫_{-∞}^{∞} F(η, F, TF, x1,F^{max}) (∂x1,F^{max}/∂y) dTF dF dη
− ∫_{-∞}^{∞} ∫_{-∞}^{∞} ∫_{-∞}^{∞} F(η, F, TF, x1,F^{min}) (∂x1,F^{min}/∂y) dTF dF dη   (47)

The aforementioned integrals can be evaluated numerically, as described in section 3.1. To specify the confidence level for holding the constraints, a feasibility analysis should initially be made, namely, to determine the maximum reachable probability level (see section 2.5). To accomplish this task, the following optimization problem must be solved:

min_{TN,r} f = −α   (48)

s.t.
90 °C ≤ TN ≤ 100 °C
1 ≤ r ≤ 6
g = 0
P{D ≥ 6 L/h, x1,1 ≥ 0.99 mol/mol, 0 W ≤ Pheat ≤ 6800 W} = α

As a solution of this problem, the maximum reachable probability level is α = 96.27%. Hence, to define the optimization problem, the α-levels in expression 41 are specified as 95% and 90%, respectively. To make a direct comparison between the single and joint chance constraints, it is further assumed that α = αD = αx1,1 = αPheat.

The following calculations are performed on a Pentium IV personal computer (PC) (3 GHz, 1 GB RAM), and the SQP-based MATLAB function fmincon is utilized as the NLP solver. For the numerical integration via the collocation method, eight collocation points in one finite element are used. The choice of these parameter settings is based on a computational study on the integration of multivariate normal density functions by collocation on finite elements. For the purpose of comparison, a deterministic optimization is also performed, where the uncertain variables are set to their expected values. The effect of the deterministic results is investigated by Monte Carlo (MC) simulation with 100 000 samples of the uncertain variables and is discussed later.

The optimization results for the setpoints are given in Table 2.

Table 2. Optimization Results

constraint form | r*     | TN* [°C]
α = 95%
  single        | 1.4118 | 96.1632
  joint         | 1.4404 | 96.3085
α = 90%
  single        | 1.3970 | 95.1935
  joint         | 1.4387 | 95.2227
deterministic   | 1.3645 | 92.2996

As expected, the objective function value (i.e., the optimal bottom temperature TN) decreases (so that a lower reboiler heat duty is needed) if the specified probability level is reduced. In addition, in comparison to the case of single chance constraints, one can see that TN must be increased if a joint chance constraint is defined. At the same time, the reflux ratio r must be increased. This leads to a higher distillate concentration x1,1, in association with a reduced distillate flow rate D. The improvement of the objective function value with single chance constraints, compared to that with a joint chance constraint, is insignificant. However, this slight improvement causes a significant loss of reliability, as shown in Table 3. The specified probability levels were verified using MC simulation with 100 000 samples of the uncertain variables at the optimal setpoints; to show the different impacts of the single and joint constraints, the probabilities of satisfying the individual constraints, calculated from the MC simulation, are also listed in Table 3. It is shown that, even if the constraints are satisfied separately with at least 95% probability, the simultaneous fulfilment of all restrictions can be guaranteed with a probability of merely 92%. On the other hand, the specification of a joint probability level (e.g., 90%) means that the restrictions are individually satisfied with higher probabilities (e.g., 99.69% for αx1,1), or at least with the specified level. Furthermore, there are also significant differences between the two constraint forms in regard to the convergence behavior during the solution, as shown in Table 4. In comparison to the single chance constraints, a higher number of iterations is needed when using the joint chance constraint. The major reason is that all constraints are considered simultaneously within the optimization problem, which makes the finding of feasible solutions more difficult, in

Table 3. Comparison of Attainable Probability Levels for Different Optimization Strategies

constraint form | αD (%) | αx1,1 (%) | αPheat (%) | αsim (%)
α = 95%
  single        | 95.00  | 95.00     | 99.82      | 92.04
  joint         | 95.52  | 99.72     | 99.57      | 95.00
α = 90%
  single        | 90.00  | 90.00     | 99.99      | 85.37
  joint         | 90.11  | 99.69     | 99.88      | 90.00
deterministic   | 49.40  | 50.00     | 100.00     | 40.14

(αsim denotes the probability of simultaneously satisfying all constraints; entries equal to the specified level are the imposed constraints, and the remaining entries are obtained from the MC simulation.)

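The gap between the single and joint probability levels reported in Table 3 can be reproduced qualitatively with a small Monte Carlo sketch for two correlated standard normal disturbances (a toy stand-in for the four correlated inputs of Table 1; the thresholds, correlation, and sample size are illustrative assumptions):

```python
import math
import random

def mc_probabilities(n=200_000, rho=0.3, seed=1):
    """Estimate single and joint satisfaction probabilities for two
    correlated standard normal disturbances (constraints xi1 <= 1.5
    and xi2 <= 1.5), sampling via a 2x2 Cholesky factor."""
    rng = random.Random(seed)
    s = math.sqrt(1.0 - rho * rho)
    n1 = n2 = nj = 0
    for _ in range(n):
        z1, z2 = rng.gauss(0.0, 1.0), rng.gauss(0.0, 1.0)
        xi1 = z1                     # first row of the Cholesky factor
        xi2 = rho * z1 + s * z2      # second row: induces correlation rho
        ok1, ok2 = xi1 <= 1.5, xi2 <= 1.5
        n1 += ok1
        n2 += ok2
        nj += (ok1 and ok2)
    return n1 / n, n2 / n, nj / n

p1, p2, pj = mc_probabilities()
```

With these settings the single-constraint probabilities land near Φ(1.5) ≈ 93%, while the joint probability is noticeably lower, mirroring the behavior of the αsim column: the joint probability can never exceed the smallest single probability.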
Table 4. Convergence Properties

constraint form | iterations | t [min] | titer [min]
α = 95%
  single        | 13         | 40.40   | 3.11
  joint         | 31         | 96.23   | 3.10
α = 90%
  single        | 11         | 34.61   | 3.15
  joint         | 19         | 58.95   | 3.10

comparison to the case of single chance constraints. In addition, a lower probability level leads to a lower number of required iterations: the stricter the requirement on the solution, the (relatively) smaller the feasible region, and, thus, the more difficult it is to find the solution. The computation time per iteration is approximately the same for both constraint forms, because the calculation effort for the integral bounds, which dominates the computational effort, is equal in both cases. Based on the results discussed, the differences between single and joint chance constraints become apparent. Furthermore, the practical meaning of the results can be derived, namely, a user-defined compromise between profit (or cost minimization) and reliability that can be determined by the choice of α. The relationship

Figure 9. Relationship between the probability level (α) and the value of the objective function (TN).


Figure 10. Comparison of deterministic optimization results to stochastic optimization results.

between the objective function value and the probability level, which can be regarded as decision-support information, is shown in Figure 9.

The impacts of the stochastic and deterministic optimization results are also compared using MC simulation, as shown in Figure 10. The comparatively better values for r* and TN* from the deterministic optimization (see Table 2) cause a higher probability of violating the constraints; more precisely, they lead to only a 40% probability of holding the joint chance constraint, as shown in Table 3. In Figure 10, the results of the joint chance constraint with α = 95% are shown. The dashed line represents the constraint for each single variable. It is shown that, in the deterministic case, the constrained outputs x1,1 and D (corresponding to the optimal setpoints) move into the infeasible area (beyond the given constraint) with a high probability (~50%); i.e., the process specifications will very often be violated. In contrast, using the results from the stochastic optimization, the constraints are violated only with the desired low probability level (e.g., D ≈ 4.5%, x1,1 ≈ 0.5%). Therefore, by the chance constrained optimization, both optimal and significantly more-robust setpoints can be achieved, in comparison to the deterministic case. Moreover, it can be observed from Figure 10 that the constraint on the reboiler heat duty can be ensured with a probability of almost 100%. This means that the supply capacity of the reboiler is sufficiently large for the operation of the distillation column.

5. Conclusions

This paper presents a novel approach for determining the optimal setpoints for closed-loop control systems under uncertainty. It is an extension of the nonlinear chance constrained programming approach proposed by Wendt et al.,15 in which only open-loop systems with a single constrained output variable are considered; multiple constrained outputs are investigated in this work. Because of the closed-loop framework, in addition to the constrained outputs, the manipulated variables also have a stochastic nature and, hence, their bounds must be formulated as chance constraints.

Using this approach, economically optimal set-point values for the closed-loop controls of a distillation column can be obtained by minimizing the column bottom temperature. Although the constrained distillate concentration is in an open loop in relation to the stochastic disturbances, its specification can be held with a desired probability level. For the heating power, double-bounded chance constraints are defined. The stochastic optimization problems, under both single and joint chance constraints, are solved. By comparing the optimization results for single and joint chance constraints, the different impacts of both forms are studied. It is shown that satisfying the constraints separately with a defined probability level corresponds to fulfilling all constraints simultaneously with a significantly lower probability and, conversely, that the definition of a joint probability means the restrictions are individually satisfied with a much higher probability, or at least with the specified probability level. In regard to convergence properties, the joint case needs more iterations, whereas the computing time per iteration is almost the same for both constraint forms. Furthermore, the results from the stochastic optimization are compared with those from the deterministic optimization. While the stochastic strategies provide robust and reliable solutions that guarantee the satisfaction of the constraints with the desired probability level, the deterministic solutions lead to the violation of the constraints with a considerably high probability.

In summary, in this study, we have solved nonlinear chance constrained optimization problems for steady-state operations


in a closed-loop framework. Addressing dynamic nonlinear optimization problems under uncertainty with chance constrained programming in the closed-loop framework remains a challenging task.

Acknowledgment

Financial support of this work from the Deutsche Forschungsgemeinschaft (under Contract No. LI806/8-1) is gratefully acknowledged.

Literature Cited

(1) Biegler, L. T.; Grossmann, I. E. Retrospective on Optimization. Comput. Chem. Eng. 2004, 28, 1169.
(2) Pistikopoulos, E. N. Uncertainty in process design and operations. Comput. Chem. Eng. 1995, 19, 553.
(3) Kassmann, D. E.; Badgwell, T. A.; Hawkins, R. B. Robust Steady-State Target Calculation for Model Predictive Control. AIChE J. 2000, 46 (5), 1007.
(4) Henrion, R.; Li, P.; Möller, A.; Steinbach, M. C.; Wendt, M.; Wozny, G. Stochastic Optimization for Operating Chemical Processes under Uncertainty. In Online Optimization of Large Scale Systems; Grötschel, M., Krumke, S. O., Rambau, J., Eds.; Springer: Berlin, 2001; p 499.
(5) Kall, P.; Wallace, S. W. Stochastic Programming; Wiley: New York, 1994.
(6) Prekopa, A. Stochastic Programming; Kluwer Academic Publishers: Dordrecht, The Netherlands, 1995.
(7) Ruszczynski, A.; Shapiro, A. Stochastic Programming. Handbooks in Operations Research and Management Science, Vol. 10; Elsevier: Amsterdam, 2003.
(8) Birge, J. R.; Louveaux, F. Introduction to Stochastic Programming; Springer: New York, 2000.
(9) Sahinidis, N. V. Optimization under uncertainty: state-of-the-art and opportunities. Comput. Chem. Eng. 2004, 28, 971.
(10) Ahmed, S.; Sahinidis, N. V. Robust Process Planning under Uncertainty. Ind. Eng. Chem. Res. 1998, 37, 1883.
(11) Pistikopoulos, E. N.; Ierapetritou, M. G. Novel Approach for Optimal Process Design under Uncertainty. Comput. Chem. Eng. 1995, 19, 1089.
(12) Henrion, R.; Möller, A. Optimization of a continuous distillation process under random inflow rate. Comput. Math. Appl. 2003, 45, 247.
(13) Li, P.; Arellano-Garcia, H.; Wozny, G. Chance Constrained Programming Approach to Process Optimization under Uncertainty. In Proceedings of the ESCAPE16/PSE06 Symposium, Garmisch-Partenkirchen, July 9-13, 2006; p 1245.
(14) Diwekar, U. M.; Kalagnanam, J. R. Efficient sampling technique for optimization under uncertainty. AIChE J. 1997, 43, 440.
(15) Wendt, M.; Li, P.; Wozny, G. Nonlinear Chance Constrained Process Optimization under Uncertainty. Ind. Eng. Chem. Res. 2002, 41, 3621.
(16) Arellano-Garcia, H.; Martini, W.; Wendt, M.; Li, P.; Wozny, G. Chance Constrained Batch Distillation Process Optimization under Uncertainty. In FOCAPO Proceedings 2003; Grossmann, I. E., McDonald, C. M., Eds.; p 609.
(17) Schwarm, A. T.; Nikolaou, M. Chance-constrained model predictive control. AIChE J. 1999, 45, 1743.
(18) Li, P.; Wendt, M.; Wozny, G. Robust Model Predictive Control under Chance Constraints. Comput. Chem. Eng. 2000, 24, 829.
(19) Li, P.; Wendt, M.; Wozny, G. A Probabilistically Constrained Model Predictive Controller. Automatica 2002, 38, 1171.
(20) Xie, L.; Li, P.; Wozny, G. Chance Constrained Nonlinear Model Predictive Control. In Proceedings of the International Workshop on Assessment and Future Directions of Nonlinear Model Predictive Control, Freudenstadt-Lauterbad, 2005; p 357.
(21) Seborg, D. E.; Edgar, T. F.; Mellichamp, D. A. Process Dynamics and Control; Wiley: New York, 1989.
(22) Fialho, I. J.; Georgiou, T. T. Worst Case Analysis of Nonlinear Systems. IEEE Trans. Autom. Control 1999, 44 (6), 1180.
(23) Nagy, Z. K.; Braatz, R. D. Worst-Case and Distributional Robustness Analysis of Finite-Time Control Trajectories for Nonlinear Distributed Parameter Systems. IEEE Trans. Control Syst. Technol. 2003, 11, 694.
(24) Biegler, L. T.; Cervantes, A. M.; Wächter, A. Advances in simultaneous strategies for dynamic process optimization. Chem. Eng. Sci. 2002, 57 (4), 575.
(25) Kassmann, D. E.; Badgwell, T. A. Interior Point Methods in Robust Model Predictive Control. Presented at the 1997 AIChE Meeting, Los Angeles, CA.
(26) Finlayson, B. A. Nonlinear Analysis in Chemical Engineering; McGraw-Hill: New York, 1980.

Received for review November 30, 2006
Revised manuscript received March 14, 2007
Accepted April 16, 2007

IE061540T