Article pubs.acs.org/IECR
Optimization of Chemical Process Design with Chance Constraints by an Iterative Partitioning Approach

Gennady M. Ostrovsky,*,† Nadir N. Ziyatdinov,‡ Tatjana V. Lapteva,‡ and Anna Silvestrova‡

†Karpov Institute of Physical Chemistry, Vorontsovo Pole, 10, Moscow 103064, Russia
‡Kazan State Technological University, Karl Marx str., 68, Kazan 420111, Russia
ABSTRACT: Generally, chemical processes are designed with the use of inaccurate mathematical models. Therefore, it is important to create a chemical process that guarantees the satisfaction of all design specifications either exactly or with some probability. This paper considers the issue of chemical process optimization when at the operation stage the design specifications should be met with some probability and the control variables can be changed. We have developed a common approach for solving one-stage and two-stage optimization problems with joint chance constraints for the cases in which the uncertain parameters are either independent random variables with arbitrary probability distributions or dependent normally distributed random variables. This approach is based on approximate transformation of chance constraints into deterministic constraints. This excludes the numerical calculation of multiple integrals for computation of the left-hand sides of chance constraints.
1. INTRODUCTION

We consider the problem of the optimal design of a chemical process (CP) in the case in which inexact mathematical models are used. The inexactness of the models arises from the original uncertainty of the chemical, physical, and economic data used during CP design. When an exact mathematical model is available, the optimization problem of CP design can be written as

min_{d,x,z} f̄(d, x, z, θ)    (1)

φ_i(d, x, z, θ) = 0,  i = 1, ..., q    (2)

ḡ_j(d, x, z, θ) ≤ 0,  j = 1, ..., m    (3)

where f̄(d, x, z, θ) is the goal function; the equality constraints (eq 2) are the equations of the material and energy balances; the inequality constraints (eq 3) are the design specifications; d is a vector of design variables, x a vector of state variables (dim x = q), z a vector of control variables, and θ a p-vector of uncertain parameters. We suppose that the functions f(d, z, θ) and g_j(d, z, θ), j = 1, ..., m, are continuously differentiable. Because the dimension of the vector x is equal to that of the vector φ, the value of x can be determined, either analytically or numerically, by solving the set of nonlinear equations (eq 2) for each set of values of d, z, and θ. Consequently, we can consider x as an implicit function of the variables d, z, and θ:

x = x(d, z, θ)    (4)

Substituting this expression for x into eqs 1−3, we eliminate the variables x:

min_{d,z} f(d, z, θ)    (5)

g_j(d, z, θ) ≤ 0,  j = 1, ..., m    (6)

where

f(d, z, θ) ≡ f̄(d, z, x(d, z, θ), θ),  g(d, z, θ) ≡ ḡ(d, z, x(d, z, θ), θ),  φ(d, z, x(d, z, θ), θ) ≡ 0    (7)

Later, we will employ form 5. The constraints can be "hard" or "soft".1 Hard constraints must never be violated during the operation stage; if occasional violations are allowed, the constraints are said to be soft. The problem of the optimal design of a CP under uncertainty can be formulated as follows: it is necessary to create an optimal CP that guarantees the satisfaction of all design specifications (with some probability) when inexact mathematical models are used and internal and external factors change during the CP operation stage. Usually, two formulations of this problem are used: the two-stage optimization problem and the one-stage optimization problem. The formulation of the two-stage optimization problem (TSOP) is based on the following assumptions: (1) At the design stage, the values of the uncertain parameters are unknown; however, the probability distribution functions of the uncertain parameters are known. (2) At the operation stage, (a) at each time instant, the values of all the uncertain parameters can be either measured or calculated using available experimental data (thus, at each time instant the process model can be corrected), and (b) the control variables can change and can therefore be adjusted depending on the state of the chemical process. Thus, the control variables must be functions of the uncertain parameters, θ.

Received: December 9, 2014. Revised: March 11, 2015. Accepted: March 18, 2015. Published: March 18, 2015.

© 2015 American Chemical Society. DOI: 10.1021/ie5048016. Ind. Eng. Chem. Res. 2015, 54, 3412−3429
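A toy illustration of eqs 4 and 5: the sketch below (the balance equation and the cost function are hypothetical, not from the paper) computes the state x(d, z, θ) by solving a single balance equation φ(d, z, x, θ) = 0 by bisection and then evaluates the reduced goal function f(d, z, θ) = f̄(d, z, x(d, z, θ), θ).

```python
import math

# Hypothetical single-state balance: phi(d, z, x, theta) = x - theta*exp(-d*x) - z = 0.
def solve_state(d, z, theta, lo=-50.0, hi=50.0):
    """Return x(d, z, theta), the root of phi, by bisection (phi is increasing in x)."""
    phi = lambda x: x - theta * math.exp(-d * x) - z
    a, b = lo, hi
    for _ in range(200):
        m = 0.5 * (a + b)
        if phi(a) * phi(m) <= 0:
            b = m
        else:
            a = m
    return 0.5 * (a + b)

def f_reduced(d, z, theta):
    """Reduced goal function: f(d, z, theta) = f_bar(d, z, x(d, z, theta), theta)."""
    x = solve_state(d, z, theta)
    return d + x * x  # hypothetical cost f_bar

x_star = solve_state(1.0, 0.5, 0.2)  # solves x - 0.2*exp(-x) = 0.5
```

Any scalar root-finder works here; bisection is used only because the toy φ is monotone in x.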
The formulation of the one-stage optimization problem (OSOP) is based on the assumption that the control variables are constant at the operation stage. First, we consider methods of solving the one-stage and two-stage optimization problems with individual chance constraints for the case when the uncertain parameters are independent random variables. Then we show (see section 5) that an optimization problem with dependent, normally distributed uncertain parameters can be transformed into one with independent, normally distributed uncertain parameters; therefore, all the methods considered here can also be used when the uncertain parameters are dependent and normally distributed.

Methods of solving the two-stage optimization problem with hard constraints (TSOPHC) and the one-stage optimization problem with chance constraints (OSOPCC) are well developed: see refs 2−6 for the TSOPHC and refs 7 and 8 for the OSOPCC. The main issue in solving optimization problems under uncertainty is the computation of the multiple integrals needed for the expected value of the objective function and for the probabilities of constraint satisfaction. This operation is computationally intensive even when the dimensionality of the uncertain-parameter vector θ is low, because the total number of knot points in a Gauss quadrature9 increases exponentially with the number of uncertain parameters. Three approaches are used to overcome this difficulty. The first is improvement of the Gauss quadrature. Acevedo and Pistikopoulos10 suggested three alternative integration schemes. For normally distributed uncertain variables, Bernardo et al.11 obtained a special quadrature formula that significantly decreases the number of required evaluation (knot) points for the multidimensional integrals. Methods for the approximate evaluation of multidimensional integrals are also considered in refs 12 and 13. The second approach is based on sampling techniques7,8 (Monte Carlo, Latin hypercube, or Hammersley sequence sampling). The third approach is the transformation of chance constraints into deterministic constraints, which clearly avoids the calculation of multiple integrals. First of all, linear chance constraints can be transformed into deterministic constraints.14 Maranas15 used this transformation for constraints that are nonlinear in the search variables but linear in the uncertain parameters. Wendt et al.16 proposed a back-mapping method for solving the steady-state OSOPCC based on a monotonic relationship between a constrained output and one of the uncertain input parameters: chance constraints on output variables (with unknown distribution) are transformed into chance constraints on uncertain input variables (with known distribution). Later, Li et al.17 applied this strategy to the optimal design of CPs, the optimization of nonlinear dynamic systems, optimal production planning, and the optimal control of chemical processes under uncertainty. In ref 18, the back-mapping approach was improved for closed-loop control design under uncertainty. Klöppel et al.19 used the sparse-grid integration technique to improve the efficiency of multiple integration. Geletu et al.20 showed that the mathematical analysis of monotonic relations can be done using the so-called global implicit function theorem. In ref 21, a new approach was proposed for addressing the nonconvex OSOPCC with non-Gaussian distributed uncertainties. A review paper22 summarized recent advances in solving the OSOPCC; however, these approaches still require the calculation of multiple integrals. Ostrovsky et al.23 proposed an approach to solving the OSOPCC based on the transformation of chance constraints into deterministic constraints and a partition of the uncertainty region. Nemirovski and Shapiro24 proposed an ellipsoid-like iterative algorithm for solving two-stage optimization problems with chance constraints in the special case where the left-hand sides of the constraints in eq 2 are biaffine with respect to the variables d, z, and θ. Erdoǧan and Iyengar25 extended this algorithm to the case in which the left-hand sides of constraints 2 are biconvex. Chen et al.26 proposed an approximation approach for solving a class of multistage chance-constrained stochastic linear optimization problems. Wang and Ahmed27 proposed a sample average approximation method for stochastic programming problems involving expected values of constraints. Thus, these papers consider stochastic optimization problems in which the objective function and constraints are either linear or biconvex; note that it is very difficult to check the convexity property for a realistic model.

Significantly less attention has been given to the formulation and solution of the TSOP for chemical processes with soft constraints. Wellons and Reklaitis1 considered an approach based on the penalty method; here, difficulties arise with the assignment of a penalty coefficient. In the papers of Ierapetritou and Pistikopoulos28 and Bernardo et al.,29 the TSOP with soft constraints is formulated by relaxing the soft constraints and penalizing the objective function (using the Taguchi loss function); here also, one encounters difficulties when assigning the penalty coefficient. Bernardo and Saraiva30 considered the two-stage optimization problem with the following soft constraint: the expected value of the variance of a variable y characterizing the process quality must be less than or equal to some given value. Ostrovsky et al.31,32 formulated the two-stage optimization problem with chance constraints (TSOPCC) and developed a method of solving it, based on the transformation of chance constraints into deterministic constraints and a partition of the uncertainty region, for the case when the uncertain parameters have a normal distribution. We showed31,32 that the following chance constraint
Pr{g_j(d, z(θ), θ) ≤ 0} ≥ α_j,  j = 1, ..., m    (8)

can be transformed into the following two constraints:

max_{θ ∈ T_{αj}} g_j(d, z(θ), θ) ≤ 0,  j = 1, ..., m    (9)

Pr{θ ∈ T_{αj}} ≥ α_j,  j = 1, ..., m    (10)

where Pr{θ ∈ T_{αj}} is the probability of satisfaction of the condition θ ∈ T_{αj}, and T_{αj} is an arbitrary region satisfying the condition in eq 10. Therefore, in the TSOPCC we substituted each chance constraint in eq 8 with the constraints in eqs 9 and 10 and included the regions T_{αj} in the set of search variables.
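The logic of eqs 9 and 10 can be checked numerically in one dimension. In this sketch (the constraint g(θ) = θ − 2 and all numbers are illustrative, not from the paper), T_α = (−∞, t] is chosen so that its probability measure is α; since the maximum of g over T_α is negative, the chance constraint (eq 8) holds with margin.

```python
import random

random.seed(0)
alpha = 0.9
t = 1.2816               # ~0.9 quantile of N(0,1), so Pr{theta in T_alpha} ~ alpha (eq 10)
g = lambda th: th - 2.0  # hypothetical constraint; on T_alpha = (-inf, t], max g = t - 2 < 0 (eq 9)

samples = [random.gauss(0.0, 1.0) for _ in range(200_000)]
p_region = sum(s <= t for s in samples) / len(samples)           # estimates Pr{theta in T_alpha}
p_constraint = sum(g(s) <= 0.0 for s in samples) / len(samples)  # estimates Pr{g(theta) <= 0} (eq 8)
```

Because every sample in T_α also satisfies g ≤ 0 here, p_constraint ≥ p_region ≥ α up to sampling noise, which is exactly the implication (9) and (10) ⇒ (8).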
Because it is difficult to search directly for the form and location of the regions T_{αj}, we approximate each region T_{αj} by a region T̃_j that is a union of multidimensional rectangles T_j^l:

T̃_j = T_j^1 ∪ T_j^2 ∪ ··· ∪ T_j^{Nk}

where N_k is the number of rectangles T_j^l at the kth iteration and

T_j^l = {θ_i: θ_i^{L,jl} ≤ θ_i ≤ θ_i^{U,jl}, i = 1, ..., p}

In this case, the search for the regions T̃_j reduces to the search for the subregions T_j^l, which in turn reduces to the search for the values of the variables θ_i^{L,jl} and θ_i^{U,jl}; these are added to the usual search variables d and z. It is shown in refs 31 and 32 that the optimization problem solved at the kth iteration has n_d + (n_z + 2mp)N_k search variables for the TSOPCC and n_d + n_z + 2mpN_k search variables for the OSOPCC. Even after a small number of iterations, the number of search variables can become very large; this is a serious drawback of that method. Therefore, we work out here a new approximate method of transforming chance constraints into deterministic constraints that does not have this drawback. The method avoids the computationally intensive calculation of multiple integrals when solving chance-constrained optimization problems.

2. FORMULATION OF OPTIMIZATION PROBLEM WITH CHANCE CONSTRAINTS

The two-stage optimization problem with individual chance constraints has the following form (see, for example, Ostrovsky et al.32):

min_{d,z(θ)} E[f(d, z(θ), θ)]    (11)

Pr{g_j(d, z(θ), θ) ≤ 0} ≥ α_j,  j = 1, ..., m    (12)

where 0 ≤ α_j < 1, j = 1, ..., m, are given by the designer and E[f(d, z(θ), θ)] is the expected value of the function f(d, z(θ), θ):

E[f(d, z(θ), θ)] = ∫_{−∞}^{+∞} f(d, z(θ), θ) ρ(θ) dθ    (13)

Here ρ(θ) is the probability density function, and Pr{g_j(d, z(θ), θ) ≤ 0} is the probability of satisfaction of the constraint g_j(d, z(θ), θ) ≤ 0, i.e., the probability measure of the region Ω̄_j = {θ: g_j(d, z(θ), θ) ≤ 0, −∞ ≤ θ_i ≤ +∞, i = 1, ..., p}:

Pr{g_j(d, z(θ), θ) ≤ 0} = ∫_{Ω̄j} ρ(θ) dθ    (14)

If the control variables z(θ) do not depend on θ, i.e., z(θ) = z = const ∀ θ ∈ T_θ, then the TSOPCC (eq 11) becomes the one-stage optimization problem with chance constraints:

f = min_{d,z} E[f(d, z, θ)],  Pr{g_j(d, z, θ) ≤ 0} ≥ α_j,  j = 1, ..., m    (15)

For the sake of simplicity of description, we call both d and z(θ) search variables. Note that in problem 11 the control variables z(θ) are functions of the parameters θ. This means that, at the operation stage, the control variables can take values corresponding to the state of the CP at each time instant; these values are determined by solving problem 11. This is the advantage of the TSOPCC formulation in comparison with the OSOPCC formulation, which is based on the assumption that the control variables are constant at the operation stage. That is why a chemical process design obtained by solving the TSOPCC is, generally speaking, "better" than a design obtained by solving the OSOPCC.

To avoid integration over regions with infinite limits when calculating the expected value of the function f(d, z(θ), θ) and the probability measures Pr{g_j(d, z, θ) ≤ 0}, j = 1, ..., m, we use the concept of an uncertainty region T_θ, whose probability measure is close to 1:

Pr[θ ∈ T_θ] ≥ ᾱ

where ᾱ is close to 1; thus, the uncertainty region is a region that the point θ hits with large probability. We will use ᾱ = 0.999, so that with probability 0.999 all values of the vector θ fall in the region T_θ. We will use the following approximate expressions for the expected value E[f(d, z(θ), θ)] and the probability measures Pr{g_j(d, z, θ) ≤ 0}, j = 1, ..., m:

E_θ[f(d, z(θ), θ)] = ∫_{Tθ} f(d, z(θ), θ) ρ(θ) dθ

Pr{g_j(d, z(θ), θ) ≤ 0} = ∫_{Ωj} ρ(θ) dθ

where

Ω_j = {θ: g_j(d, z(θ), θ) ≤ 0, θ ∈ T_θ}

Because the uncertain parameters are independent random variables, the uncertainty region T_θ is a multidimensional rectangle of the form

T_θ = {θ_i: θ_i^L ≤ θ_i ≤ θ_i^U, i = 1, ..., p}    (16)

Thus, we will solve the following problem:

f* = min_{d,z(θ)} E_θ[f(d, z(θ), θ)]    (17)

Pr{g_j(d, z(θ), θ) ≤ 0} ≥ α_j,  j = 1, ..., m    (18)

Designate the solution of problem 17 as d*, z*(θ).

3. GENERAL DESCRIPTION OF AN APPROXIMATE METHOD OF SOLVING THE OPTIMIZATION PROBLEM WITH CHANCE CONSTRAINTS

The main difficulties in solving problem 17 are the following: (a) as already noted, the calculation of the expected value of the goal function and of the probabilities of constraint satisfaction is very computationally intensive; (b) the search variables are multivariate functions, so problem 17 is an infinite-dimensional programming problem. Efficient methods exist for solving optimization problems with a finite number of search variables, so it is desirable to reduce problem 17 to such a problem. We will develop an iterative method based on a partition of the uncertainty region T_θ into subregions. At the kth iteration, let T_θ be already partitioned into a set R^{(k)} of subregions (multidimensional rectangles) R_l^{(k)}:
Figure 1. Functions φ_j(d, z, θ_1) and φ_{j,1}^{(1)}(d, z, θ_1^{(1)1,mid}) at the first iteration.
R_l^{(k)} = {θ_i: θ_{i,l}^{(k)L} ≤ θ_i ≤ θ_{i,l}^{(k)U}, i = 1, ..., p},  l = 1, ..., N_k    (19)

where N_k is the number of the subregions R_l^{(k)} at the kth iteration and l is the index of a subregion in the set R^{(k)}. Let (R_l^{(k)}) designate the set of interior points of the region R_l^{(k)}. We construct the regions R_l^{(k)} so that the following conditions are met:

R_1^{(k)} ∪ ··· ∪ R_{Nk}^{(k)} = T_θ    (20)

(R_l^{(k)}) ∩ (R_q^{(k)}) = ∅,  ∀ l, q:  1 ≤ l ≤ N_k, 1 ≤ q ≤ N_k    (21)

At the kth iteration of this procedure, we use the following approximations: (1) an approximation Ω̃_j^{(k)} of the region Ω_j; (2) an approximation z̃(θ) of the multivariate functions z(θ), in which the multivariate functions are approximated by functions of a finite number of variables; (3) an approximation E_ap^{(k)}[f(d, z̃(θ), θ)] of the expected value of the goal function f(d, z̃(θ), θ). We construct the regions Ω̃_j^{(k)} in such a way that Pr{θ ∈ Ω̃_j^{(k)}} can be calculated without multiple numerical integration. Thus, at the kth iteration, we solve the following problem:

f^{(k)} = min_{d,z̃(θ)} E_ap^{(k)}[f(d, z̃(θ), θ)]    (22)

Pr{θ ∈ Ω̃_j^{(k)}} ≥ α_j,  j = 1, ..., m

At each iteration, all the approximations z̃(θ), Ω̃_j^{(k)}, j = 1, ..., m, and E_ap^{(k)}[f(d, z̃(θ), θ)] are made more exact with the help of a partition of the region T_θ. We now consider ways of constructing these approximations for the OSOPCC and the TSOPCC. Because the uncertain parameters are independent random variables, the probability density function ρ(θ) has the following form:33

ρ(θ) = ρ_1(θ_1) ρ_2(θ_2) ··· ρ_p(θ_p)    (23)

where ρ_i(θ_i) is the probability density function of the random parameter θ_i.

4. CONSTRUCTION OF THE APPROXIMATIONS FOR ONE-STAGE OPTIMIZATION PROBLEM

4.1. Approximation of the Region Ω_j. The equation of the boundary of the region Ω_j has the form

g_j(d, z, θ) = 0,  j = 1, ..., m    (24)

Using eq 24 for fixed j, we can represent one of the parameters θ_i (for example, θ_p) as a function of the other parameters θ_i, i = 1, ..., p − 1:

θ_p = φ_j(d, z, θ_1, ..., θ_{p−1})    (25)

The value of the parameter θ_p can be obtained either analytically or numerically for known values of the variables d and z and the parameters θ_i, i = 1, ..., p − 1; in the latter case, we calculate θ_p by solving the single equation (eq 24) in the single unknown θ_p. It is clear that the following identity holds:

g_j(d, z, θ_1, ..., θ_{p−1}, φ_j(d, z, θ_1, ..., θ_{p−1})) ≡ 0,  j = 1, ..., m

We will suppose that eq 24 allows us to express the variable θ_p as a function of the variables θ_1, ..., θ_{p−1} for all j = 1, ..., m.
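Extracting θ_p from eq 24 at fixed d, z, θ_1, ..., θ_{p−1} is a single scalar root-find. A minimal sketch (the constraint g_j is hypothetical, not from the paper), using bisection:

```python
# Hypothetical constraint g_j(d, z, th1, th2) = th2 + th1**2 - d*z, monotone in th2.
def phi_j(d, z, th1, lo=-50.0, hi=50.0):
    """theta_p = phi_j(d, z, theta_1): the root of g_j = 0 in theta_p (eq 25)."""
    g = lambda th2: th2 + th1 * th1 - d * z
    a, b = lo, hi
    for _ in range(200):
        m = 0.5 * (a + b)
        if g(a) * g(m) <= 0:
            b = m
        else:
            a = m
    return 0.5 * (a + b)

theta_p = phi_j(2.0, 3.0, 1.0)  # root of th2 + 1 - 6 = 0
```

The identity g_j(d, z, θ̄, φ_j(d, z, θ̄)) ≡ 0 then holds by construction, up to the root-finder tolerance.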
Figure 2. Functions φ_j(d, z, θ_1) and φ̄_j^{(2)}(d, z, θ_1) at the second iteration.
Let us introduce the following designation:

ḡ_j(d, z, θ_1, θ_2, ..., θ_p) =
  θ_p − φ_j(d, z, θ_1, ..., θ_{p−1})  if ∂g_j/∂θ_p ≥ 0    (26)
  φ_j(d, z, θ_1, ..., θ_{p−1}) − θ_p  if ∂g_j/∂θ_p < 0    (27)

Equations 26 and 27 are based on the following rule: the sign of the derivative of the function ḡ_j(d, z, θ_1, θ_2, ..., θ_p) with respect to θ_p must coincide with the sign of the derivative of the function g_j(d, z, θ_1, θ_2, ..., θ_p) with respect to θ_p. At first, we suppose that the following inequalities hold:

∂g_j/∂θ_p ≥ 0,  ∀ θ, z, d,  j = 1, ..., m    (28)

The region Ω_j can then be represented in the form

Ω_j = {θ: ḡ_j(d, z, θ) ≤ 0, θ_i^L ≤ θ_i ≤ θ_i^U, i = 1, ..., p}    (29)

Let us designate the set of the parameters θ_1, ..., θ_{p−1} as θ̄. At the kth iteration, we approximate the function φ_j(d, z, θ_1, ..., θ_{p−1}) with a piecewise constant function φ̄_j(d, z, θ_1, ..., θ_{p−1}) of the form

φ̄_j(d, z, θ_1, ..., θ_{p−1}) = φ_{j,l}^{(k)}  if θ̄ ∈ R̄_l^{(k)},  l = 1, ..., N̄_k    (30)

where φ_{j,l}^{(k)} = φ_j(d, z, θ̄^{(k)l,mid}), θ̄^{(k)l,mid} = (θ_1^{(k)l,mid}, ..., θ_{p−1}^{(k)l,mid}) is the middle point of the region R̄_l^{(k)}, and

R̄_l^{(k)} = {θ_i: θ_{i,l}^{(k)L} ≤ θ_i ≤ θ_{i,l}^{(k)U}, i = 1, ..., p − 1},  l = 1, 2, ..., N̄_k    (31)

It is seen that the ranges of the variables θ_i, i = 1, ..., p − 1, in the region R̄_l^{(k)} are the same as in one of the regions of the set R^{(k)}; let us designate this region as R_{sl}^{(k)}. Therefore, the region R̄_l^{(k)} is a part of the region R_{sl}^{(k)}, and the following relations hold:

R̄_l^{(k)} ⊂ R_{sl}^{(k)},  l = 1, 2, ..., N̄_k    (32)

Introduce the following region R̄^{(k)}:

R̄^{(k)} = R̄_1^{(k)} ∪ ··· ∪ R̄_{N̄k}^{(k)}    (33)

The region Ω_j can be represented in the form

Ω_j = Ω_{j,1} ∪ Ω_{j,2} ∪ ··· ∪ Ω_{j,N̄k}    (34)

where

Ω_{j,l} = {θ: θ_p − φ_j(d, z, θ_1, ..., θ_{p−1}) ≤ 0, θ ∈ R̄_l^{(k)}}

The functions φ̄_j^{(1)}(d, z, θ_1, ..., θ_{p−1}) and φ̄_j^{(2)}(d, z, θ_1, ..., θ_{p−1}) for the first two iterations are shown in Figures 1 and 2 (p = 2). In both cases, the curve ABC represents the function φ_j(d, z, θ_1). In Figure 1, the segment DE represents the function φ_{j,1}^{(1)}(d, z, θ_1^{(1)1,mid}). In Figure 2, the segments GFH and KLM represent the functions φ_{j,1}^{(2)}(d, z, θ_1^{(2)1,mid}) and φ_{j,2}^{(2)}(d, z, θ_1^{(2)2,mid}), respectively; the polygonal line GHKM represents the function φ̄_j^{(2)}(d, z, θ_1). Introduce the following regions Ω̃_{j,l}^{(k)}:

Ω̃_{j,l}^{(k)} = {θ: θ_p − φ_{j,l}^{(k)} ≤ 0, θ ∈ R̄_l^{(k)}}    (35)

Later, we will suppose that the following inequalities always hold:

φ_{j,l}^{(k)} ≤ θ_{pl}^{(k)U}

Thus, in the region Ω̃_{j,l}^{(k)}, the parameter θ_i varies within the following interval:
Figure 3. Subregions R_1^{(4)}, R_2^{(4)}, R_3^{(4)}, and R_4^{(4)} (rectangles AEKS, EBUK, SUFH, and HFCD, respectively) and R̄_1^{(4)}, R̄_2^{(4)}, and R̄_3^{(4)} (segments AS, SH, and HD, respectively) at the fourth iteration.
I_{ijl}^{(k)} = {θ_i: θ_{ijl}^{(k)L} ≤ θ_i ≤ θ_{ijl}^{(k)U}},  i = 1, ..., p    (36)

where θ_{ijl}^{(k)L} = θ_{i,l}^{(k)L}, i = 1, ..., p; θ_{ijl}^{(k)U} = θ_{i,l}^{(k)U}, i = 1, 2, ..., p − 1; and θ_{pjl}^{(k)U} = φ_{j,l}^{(k)}. Thus, the region Ω̃_{j,l}^{(k)} can be represented in the following form:

Ω̃_{j,l}^{(k)} = {θ: θ_i ∈ I_{ijl}^{(k)}, i = 1, 2, ..., p}    (37)

The region Ω̃_{j,l}^{(k)} is an approximation of the region Ω_{j,l}. In Figure 1, the region FABCG is the region Ω_{1,1} and the region FDEG is the region Ω̃_{1,1}^{(1)}. At the kth iteration, the approximation Ω̃_j^{(k)} of the region Ω_j has the following form:

Ω̃_j^{(k)} = Ω̃_{j,1}^{(k)} ∪ Ω̃_{j,2}^{(k)} ∪ ··· ∪ Ω̃_{j,N̄k}^{(k)}    (38)

We use the following approximation of the left-hand side of the constraints in eq 18:

Pr{θ ∈ Ω̃_j^{(k)}} = ∫_{Ω̃_j^{(k)}} ρ(θ) dθ    (39)

Because relations 21 and 33 hold, we also have

(R̄_l^{(k)}) ∩ (R̄_q^{(k)}) = ∅,  ∀ l, q:  1 ≤ l ≤ N̄_k, 1 ≤ q ≤ N̄_k

and therefore

Pr{θ ∈ Ω̃_j^{(k)}} = Pr{θ ∈ Ω̃_{j,1}^{(k)}} + Pr{θ ∈ Ω̃_{j,2}^{(k)}} + ··· + Pr{θ ∈ Ω̃_{j,N̄k}^{(k)}}    (40)

Let us illustrate this description using Figure 3, which shows the fourth iteration. Here, the rectangle ABCD is the uncertainty region. At the first and second iterations, we partitioned the rectangle ABCD with the line HF and the rectangle ABFH with the line SU. The rectangles AEKS, EBUK, SUFH, and HFCD are the subregions R_1^{(4)}, R_2^{(4)}, R_3^{(4)}, and R_4^{(4)}, respectively; the segments AS, SH, and HD are the subregions R̄_1^{(4)}, R̄_2^{(4)}, and R̄_3^{(4)}, respectively. The ordinates of the points R, Y, and E are equal to φ_{1,1}^{(4)}, φ_{1,2}^{(4)}, and φ_{1,3}^{(4)}, respectively. The subregions AMNS, SLQH, and HOPD are the subregions Ω̃_{j,1}^{(4)}, Ω̃_{j,2}^{(4)}, and Ω̃_{j,3}^{(4)}, respectively. It is seen that R̄_1^{(4)} ⊂ R_1^{(4)}, R̄_2^{(4)} ⊂ R_3^{(4)}, and R̄_3^{(4)} ⊂ R_4^{(4)}.

We now transform the approximation Pr{θ ∈ Ω̃_j^{(k)}} of the left-hand side of the chance constraints of the OSOPCC (eq 18). Because all the parameters θ_i are independent, the probability measure J_{ijl}^{(k)} of the interval I_{ijl}^{(k)} (see eq 36) is equal to

J_{ijl}^{(k)} = ∫_{θ_{ijl}^{(k)L}}^{θ_{ijl}^{(k)U}} ρ_i(θ_i) dθ_i = F_i(θ_{ijl}^{(k)U}) − F_i(θ_{ijl}^{(k)L})    (41)

where F_i(θ_i) is the distribution function of the random parameter θ_i. Because all the parameters θ_i are independent, the probability measure of the region Ω̃_{j,l}^{(k)} is equal to the product of the probability measures of the intervals I_{ijl}^{(k)}, i = 1, ..., p:

Pr{θ ∈ Ω̃_{j,l}^{(k)}} = ∏_{i=1}^{p} [F_i(θ_{ijl}^{(k)U}) − F_i(θ_{ijl}^{(k)L})]    (42)

Taking into account eq 36, we obtain

Pr{θ ∈ Ω̃_{j,l}^{(k)}} = [F_p(φ_{j,l}^{(k)}) − F_p(θ_{pjl}^{(k)L})] ∏_{i=1}^{p−1} [F_i(θ_{ijl}^{(k)U}) − F_i(θ_{ijl}^{(k)L})]
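Equations 41 and 42 reduce the probability of each rectangle to one-dimensional distribution-function differences. A minimal sketch, assuming independent N(0,1) marginals and an illustrative rectangle (neither is from the paper):

```python
import math

def norm_cdf(x, mu=0.0, sigma=1.0):
    """Distribution function F_i of a normal parameter (any independent marginals work)."""
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

def rect_prob(bounds, mu=0.0, sigma=1.0):
    """Pr{theta in rectangle} = prod_i [F_i(up_i) - F_i(lo_i)]  (eq 42)."""
    p = 1.0
    for lo, up in bounds:
        p *= norm_cdf(up, mu, sigma) - norm_cdf(lo, mu, sigma)  # J (eq 41) for each axis
    return p

# Probability of the rectangle [-1, 1] x [-1, 1] under independent N(0, 1) marginals:
p = rect_prob([(-1.0, 1.0), (-1.0, 1.0)])
```

No multidimensional quadrature is needed: the cost is one CDF difference per coordinate, which is the point of the transformation.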
Introduce the following designation:

A_{jl} = ∏_{i=1}^{p−1} [F_i(θ_{ijl}^{(k)U}) − F_i(θ_{ijl}^{(k)L})]

Then we have

Pr{θ ∈ Ω̃_{j,l}^{(k)}} = A_{jl} [F_p(φ_{j,l}^{(k)}) − F_p(θ_{pjl}^{(k)L})]

Substituting these expressions for Pr{θ ∈ Ω̃_{j,l}^{(k)}} into eq 40, we obtain

Pr{θ ∈ Ω̃_j^{(k)}} = ∑_{l=1}^{N̄k} A_{jl} [F_p(φ_{j,l}^{(k)}) − F_p(θ_{pjl}^{(k)L})]    (43)

4.2. Approximation of the Expected Value of the Objective Function. When we solve problem 17, we must calculate a multiple integral to find the expected value of the function f(d, z, θ). The exact calculation of this integral at each iteration leads to significant computational expenditures; at the same time, exact calculation of the expected value of the objective function is pointless as long as the approximation Ω̃_j^{(k)} of the region Ω_j is not yet sufficiently exact. Therefore, we propose calculating only an approximation of this integral at each iteration; the approximation is improved as the number of iterations grows. At the kth iteration, let the region T_θ be partitioned already into N_k subregions (multidimensional rectangles) R_r^{(k)}, r = 1, ..., N_k (see eq 19). For the approximate calculation of the expected value of the function f(d, z, θ), we use the following piecewise linear approximation f̃(d, z, θ) of the function f(d, z, θ):

f̃(d, z, θ) = f̃(d, z, θ, θ_r)  if θ ∈ R_r^{(k)},  r = 1, ..., N_k    (44)

where

f̃(d, z, θ, θ_r) = f(d, z, θ_r) + ∑_{i=1}^{p} (∂f(d, z, θ_r)/∂θ_i)(θ_i − θ_{ir})    (45)

The point θ_r is a linearization point in the subregion R_r^{(k)}, and θ_{ir} is the ith component of the vector θ_r; the right-hand side of eq 45 is the linear part of the Taylor expansion of the function f(d, z, θ) at the point θ_r. Introduce the following designations:

E_θ[f(d, z, θ); R_r^{(k)}] = ∫_{R_r^{(k)}} f(d, z, θ) ρ(θ) dθ,   E_θ[θ_i; R_r^{(k)}] = ∫_{R_r^{(k)}} θ_i ρ(θ) dθ    (46)

and, similarly, E_θ[f̃(d, z, θ, θ_r); R_r^{(k)}] = ∫_{R_r^{(k)}} f̃(d, z, θ, θ_r) ρ(θ) dθ. Because the subregions R_r^{(k)} have no common interior points (see eq 21), the integral of the function f(d, z, θ) ρ(θ) over the region T_θ is equal to the sum of the integrals over the regions R_r^{(k)}:

E_θ[f(d, z, θ); T_θ] = ∫_{Tθ} f(d, z, θ) ρ(θ) dθ = ∑_{r=1}^{Nk} E_θ[f(d, z, θ); R_r^{(k)}]    (47)

The value E_θ[f̃(d, z, θ, θ_r); R_r^{(k)}] is an approximation of the value E_θ[f(d, z, θ); R_r^{(k)}]; therefore, by eq 47, the sum of the values E_θ[f̃(d, z, θ, θ_r); R_r^{(k)}] is an approximation of the value E_θ[f(d, z, θ); T_θ]. Thus, the approximation E_ap^{(k)}[f(d, z, θ)] of the expected value of the function f(d, z, θ) has the form

E_ap^{(k)}[f(d, z, θ); T_θ] = ∑_{r=1}^{Nk} E_θ[f̃(d, z, θ, θ_r); R_r^{(k)}]    (48)

Using simple transformations, one can obtain the following expression for E_θ[f̃(d, z, θ, θ_r); R_r^{(k)}]:

E_θ[f̃(d, z, θ, θ_r); R_r^{(k)}] = a_r f(d, z, θ_r) + ∑_{i=1}^{p} (∂f(d, z, θ_r)/∂θ_i)(E_θ[θ_i; R_r^{(k)}] − a_r θ_{ir})    (49)

where

a_r = ∫_{R_r^{(k)}} ρ(θ) dθ

If we use the point θ_r with components θ_{ir} = E_θ[θ_i; R_r^{(k)}] as the linearization point, then eq 49 can be rewritten in the form

E_θ[f̃(d, z, θ, θ_r); R_r^{(k)}] = a_r f(d, z, θ_r) + (1 − a_r) ∑_{i=1}^{p} (∂f(d, z, θ_r)/∂θ_i) E_θ[θ_i; R_r^{(k)}]    (50)

To calculate the values a_r and E_θ[θ_i; R_r^{(k)}] in eq 49, we must calculate (p + 1)N_k multivariate integrals (eq 46). One can, however, significantly simplify the calculation of these values. Indeed, it is clear that a_r is the probability measure of the region (multidimensional rectangle) R_r^{(k)}:

a_r = Pr[θ ∈ R_r^{(k)}]    (51)

Because all the parameters θ_i are independent, the probability measure of R_r^{(k)} is equal to the product of the probability measures of the intervals I_{ir}^{(k)} = {θ_i: θ_{i,r}^{(k)L} ≤ θ_i ≤ θ_{i,r}^{(k)U}}, i = 1, ..., p:

a_r = [F_1(θ_{1r}^{(k)U}) − F_1(θ_{1r}^{(k)L})][F_2(θ_{2r}^{(k)U}) − F_2(θ_{2r}^{(k)L})] ··· [F_p(θ_{pr}^{(k)U}) − F_p(θ_{pr}^{(k)L})]    (52)

We now transform the expression E_θ[θ_i; R_r^{(k)}] (see eq 46), substituting into it the expression for ρ(θ) from eq 23:

E_θ[θ_i; R_r^{(k)}] = ∫_{θ_{1r}^L}^{θ_{1r}^U} ∫_{θ_{2r}^L}^{θ_{2r}^U} ··· ∫_{θ_{pr}^L}^{θ_{pr}^U} θ_i ρ(θ_1) ρ(θ_2) ··· ρ(θ_p) dθ_1 dθ_2 ··· dθ_p = J_{1r} ··· J̄_{ir} ··· J_{pr}    (53)

where

J_{1r} = ∫_{θ_{1r}^L}^{θ_{1r}^U} ρ(θ_1) dθ_1,  J_{2r} = ∫_{θ_{2r}^L}^{θ_{2r}^U} ρ(θ_2) dθ_2,  ...,  J_{pr} = ∫_{θ_{pr}^L}^{θ_{pr}^U} ρ(θ_p) dθ_p,  J̄_{ir} = ∫_{θ_{ir}^L}^{θ_{ir}^U} θ_i ρ(θ_i) dθ_i
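The machinery of eqs 48−53 can be exercised in one dimension for an N(0,1) parameter, where a_r and E_θ[θ; R_r] have closed forms (a_r = Φ(b) − Φ(a); E_θ[θ; R_r] = φ(a) − φ(b), with φ the standard normal density). In this sketch (partition and test function are illustrative), the conditional mean E_θ[θ; R_r]/a_r is used as the linearization point, a choice that zeroes the correction term in eq 49 so that E_ap = Σ_r a_r f(θ_r); the text's eq 50 instead takes θ_{ir} = E_θ[θ_i; R_r].

```python
import math

def cdf(x):
    """Standard normal distribution function (the F of eq 52 for N(0,1))."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def pdf(x):
    """Standard normal density."""
    return math.exp(-0.5 * x * x) / math.sqrt(2.0 * math.pi)

def e_ap(f, cuts):
    """1-D analogue of eq 48: sum over subregions [cuts[r], cuts[r+1]] of a_r * f(theta_r),
    with a_r = Phi(b) - Phi(a) and theta_r = (pdf(a) - pdf(b)) / a_r, the conditional mean
    (the univariate integrals J of eq 53 in closed form)."""
    total = 0.0
    for a, b in zip(cuts[:-1], cuts[1:]):
        a_r = cdf(b) - cdf(a)
        if a_r > 0.0:
            theta_r = (pdf(a) - pdf(b)) / a_r
            total += a_r * f(theta_r)
    return total

# f(theta) = theta**2 has exact expectation 1; refining the partition improves E_ap.
coarse = e_ap(lambda t: t * t, [-4.0, 0.0, 4.0])
fine = e_ap(lambda t: t * t, [-4.0 + 0.25 * i for i in range(33)])
```

The truncation to [−4, 4] mirrors the use of the bounded uncertainty region T_θ with ᾱ close to 1.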
Industrial & Engineering Chemistry Research For the calculation of pNk values E[θi;R(k) r ] and Nk values ar, we must calculate pNk univariate integrals Ji̅ r and pNk univariate integrals Jir. It is clear that calculating univariate integrals Jir (i = 1, ..., p and r = 1, ..., Nk) and Ji̅ r (r = 1,...,Nk) is significantly simpler than calculating multivariate integrals (eq 46). Moreover, one can avoid the calculation of these univariate integrals. Let us introduce the following integral:
Nk
∑ Ajl [Fp(ϕj(,kl )) − Fp(θpjl(k)L)] ≥ αj
where z, θ);T] is given by eq 46. Introduce the following designations gj(d , z , θ mid) =
θi
Jir̅ =
−
(k) f (k) = min Eap [f (d , z , θ ); Tθ ]
Hi(θirL)
gj(d , z , θ mid) ≥ αj ,
ϑi = (θi − E[θi])(σi)−1
(54)
(55)
Using eq 54 and making simple transformations, we obtain
Jir =
∫ϑ
L ijl
∫ϑ
ϑirU
L ir
(k) f (k) = min Eap [f (d , z , θ ); Tθ ]
ρ(ϑi)d ϑi =
−
Φ(ϑijlL )
ρ(ϑi)d ϑi = Φ(ϑirU) − Φ(ϑirL)
Ψ(ϑ̅ )Ψ(ϑ̅ )
gj(d , z , θ mid) ≥ αj ,
(56)
⎧ Nk ∂g ⎪∑ A [F (ϕ(k)) − F (θ (k)L)] if j > 0 jl p j , l p pjl ⎪ ∂θp ⎪ l=1 mid ⎨ g j (d , z , θ ) = ∂gj ⎪ Nk (k)U ⎪∑ Ajl [Fp(θpjl − Fp(ϕj(,kl ))] if 0 jl p j , l p pjl ⎪ ∂θp ⎪ l=1 =⎨ ∂gj ⎪ Nk (k)U ⎪∑ Ajl [Fp(θpjl − Fp(yj , l )] if 0 jl p j , l p pjl ⎪ ∂θp ⎪ l=1 =⎨ ∂gj ⎪ Nk (k)U ⎪ ∑ Ajl [Fp(θpjl − Fp(ϕj(,kl ))] if νl** and the region R̅ (k) if μ < ν . Consider now the rule of selection of l** l* l** the splitting variable. We will use the following rule: each component of the vector θ will be used in turn as a splitting variable. Suppose at the kth iteration the subregion R(k) lk should be partitioned and the parameter θs should be used as a splitting variable. Then the splitting point will be selected in the middle (k)L (k),U of the interval I(k) slk = {θ: θslk ≤ θi ≤ θilk ). It is easy to see that if at the kth iteration the parameter θp is used as the splitting variable then at this iteration only approximations of the expected value of a goal function are improved. 6.1. Stop Criterion. The iteration procedure stops at the kth iteration if the following condition is met
It is easy to see that the TSOPCC has nd + nzN̅ k search variables. Consequently, a number of search variables increases proportionally to a number of subregions R̅ (k) l . It is easy to see that TSOPCC (eq 81) changes into a OSOPCC if z̃(θ̅) = const ∀ θ ∈ Tθ. Because OSOPCC is a particular case of TSOPCC, later we consider only a method of solving TSOPCC. When solving the optimization problem (eq 81) we should at each search point solve independently each of m equations (eq 24) with respect to the variable θp (in order to obtain eq 25). Suppose we solved eq 17 and obtained the solution d*, z*(θ). We can realize the optimal control z*(θ) because at each time instant of the operation stage we can determine values of the uncertain parameters θ.
|f^(k) − f^(k−1)| ≤ ε1
DOI: 10.1021/ie5048016 Ind. Eng. Chem. Res. 2015, 54, 3412−3429
where f^(k) is the solution of eq 81 and ε1 is the desired precision level.
The flowchart of the algorithm for solving the TSOPCC (eq 17) is given below.
Step 1: Set the iteration counter k = 1. Specify initial values d^(0), zl^(0) and an initial set of subregions R1^(1) (usually, N1 = 1 and R1^(1) = Tθ).
Step 2: Determine the values φjl^(k) = φj(d^(k), zl^(k), θ̄^(k)l,mid), where d^(k), zl^(k) is the solution of eq 81 for k > 1 and d^(k) = d^(0), zl^(k) = zl^(0) for k = 1.
Step 3: Solve eq 81. Let f^(k) be the solution of eq 81.
Step 4: If
|f^(k) − f^(k−1)| ≤ ε1

where ε1 is the desired precision level, then the iteration procedure stops; otherwise, continue to step 5.
Step 5: Determine the values μl* and νl**:

μl* = max over l of μl,  νl** = max over l of νl

where μl and νl are the solutions of eqs 83 and 84 for the subregion Rl^(k), respectively.
Step 6: Partition the region Rl*^(k) if μl* > νl** and the region Rl**^(k) if μl* < νl** into two subregions.
Step 7: Set k = k + 1 and go to step 2.
Because at each iteration of solving eq 81 the number of subregions Rr^(k) increases by one, the total number of subregions Nk at the kth iteration is equal to k. In section 4 we showed that the computational intensity of calculating the approximation Eap^(k)[f(d, z, θ); Tθ] is equal to (p + 1)Nk, i.e., k(p + 1), calculations of the goal function; it increases linearly in the number of iterations of solving eq 81. While at the first iteration the number of calculations of values of the goal function is equal to p + 1, it grows to k(p + 1) at the kth iteration.

7. APPROXIMATE METHOD OF SOLVING TSOP WITH CHANCE CONSTRAINTS: CASE OF DEPENDENT UNCERTAIN PARAMETERS
We consider a method of solving eq 11 in the case when the uncertain parameters θi are random, dependent parameters having the multivariate normal distribution Np(E[θ], Λ) with the probability density function33

ρ(θ) = (2π)^(−p/2) (det Λ)^(−1/2) exp{−(1/2)(θ − μ)ᵀ Λ^(−1) (θ − μ)}

where μ is the vector of the mean values of the parameters θi (μi = E[θi]) and Λ is the covariance matrix: Λ = (λij), where λij = ρijσiσj; (σi)² is the variance of the parameter θi; ρij is the correlation coefficient of the parameters θi and θj; ρij = 1 if i = j.
We develop a method based on the reduction of a problem with dependent, normally distributed random parameters to a problem with independent, normally distributed random parameters. Suppose we have n random parameters θi, i = 1, ..., n, having a multivariate normal distribution. It is known that the dependent random parameters θi, i = 1, ..., n, can be represented in the form33

θi = ai + ci1η1 + ci2η2 + ··· + cinηn,  i = 1, ..., n    (85)

where ai = E[θi] and the random parameters ηi are independent and have the standard normal distribution N1(0, 1). The matrix C = (cij) satisfies the condition

CCᵀ = Λ    (86)

It is clear that in the case of independent uncertain parameters the matrix Λ is diagonal. We now rewrite eq 85 in matrix form:

θ = a + Cη    (87)

where a = (a1, ..., an)ᵀ and η = (η1, ..., ηn)ᵀ. We can consider the matrix equality (eq 86) as a system of n² nonlinear equations with the n² unknown elements cij of the matrix C. However, the number of independent equations is only n(n + 1)/2 because the matrices Λ and CCᵀ are symmetric. Thus, generally speaking, an infinite number of matrices C satisfy eq 86. Suppose we have found a matrix C that satisfies eq 86. We can then make the change of variables θ in eq 5 using eq 87:

min over d, z of F(d, z, η)
Gj(d, z, η) ≤ 0,  j = 1, ..., m    (88)

where F(d, z, η, C) ≡ f(d, z, a + Cη) and Gj(d, z, η, C) ≡ gj(d, z, a + Cη), j = 1, ..., m. The two-stage optimization problem with chance constraints for problem 88, for fixed values cij, is of the form

F* = min over d, z(η) of Eη[F(d, z(η), η, C)]
Pr{Gj(d, z(η), η, C) ≤ 0} ≥ αj,  j = 1, ..., m    (89)

However, the obtained value F* depends on the selected values cij. To obtain the optimal value F*, we should include the values cij in the set of search variables and use the matrix equality (eq 86) as a constraint. Thus, we should solve the problem

F* = min over d, z(η), cij of Eη[F(d, z(η), η, C)]
Pr{Gj(d, z(η), η, C) ≤ 0} ≥ αj,  j = 1, ..., m
CCᵀ = Λ    (90)

Because in eq 90 the random parameters ηi are independent with the normal distribution N1(0, 1), we can use the iteration procedure developed in sections 3−5 to solve this problem.

8. COMPUTATIONAL EXPERIMENT
To check the efficiency of the proposed methods, we solved three examples. Equations 64 and 81 were solved with the SQP method. Partial derivatives of the goal function and constraints were calculated using the finite difference method.
8.1. Example 1. Consider the design problem of a chain of two continuous stirred tank reactors (Figure 4). In each reactor the reaction is

A →(k1) B →(k2) C

A detailed description of this problem is given in ref 37 and in Wendt et al.16 We showed that the optimization problem can be reduced to the form38

min over Vi, Ti of (V1^(1/2) + V2^(1/2))    (91)
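One particular matrix C satisfying CCᵀ = Λ in eqs 85−87 is the lower Cholesky factor of Λ. A sketch with NumPy, using the means, standard deviations, and correlation matrix (Table 1) quoted for θ = (E1, E2, k10, k20) in example 1; the sampling step is illustrative, not part of the paper's algorithm:

```python
import numpy as np

# Means and standard deviations of theta = (E1, E2, k10, k20) from example 1.
a     = np.array([6665.9, 7985.2, 0.715, 0.182])
sigma = np.array([200.0, 240.0, 0.0215, 0.0055])

# Correlation matrix (Table 1); covariance Lambda_ij = rho_ij * sigma_i * sigma_j.
rho = np.array([[1.0, 0.5, 0.3, 0.2],
                [0.5, 1.0, 0.5, 0.1],
                [0.3, 0.5, 1.0, 0.3],
                [0.2, 0.1, 0.3, 1.0]])
Lam = rho * np.outer(sigma, sigma)

# One matrix C satisfying C C^T = Lambda (eq 86): the lower Cholesky factor.
C = np.linalg.cholesky(Lam)
assert np.allclose(C @ C.T, Lam)

# Independent standard normals eta map to dependent thetas via eq 87.
rng = np.random.default_rng(0)
eta = rng.standard_normal((4, 100000))
theta = a[:, None] + C @ eta
print(np.corrcoef(theta))  # approximately reproduces Table 1
```

Any C with CCᵀ = Λ works (eq 86 has infinitely many solutions); the Cholesky factor is simply the unique lower-triangular choice with a positive diagonal.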
subject to the chance constraint

Pr{CB2(k1, V1, k2, V2, k3, k4) ≥ CB2^SP} ≥ α    (92)

and the bounds

0 ≤ V1 ≤ 16,  0 ≤ V2 ≤ 16,  550 ≤ T1 ≤ 650,  550 ≤ T2 ≤ 650

where CAi, CBi, Vi, Ti (i = 1 and 2) are the concentrations of the components A and B and the volumes and temperatures of both reactors, respectively:

CA1(k1, V1) = (1 + k1V1)^(−1)
CB1(k1, V1, k3) = k1V1CA1(1 + k3V1)^(−1)
CA2(k1, V1, k2, V2) = CA1(1 + k2V2)^(−1)
CB2(k1, V1, k2, V2, k3, k4) = (CB1 + CA1 − CA2)(1 + k4V2)^(−1)
k1 = k10e^(−E1/RT1),  k2 = k10e^(−E1/RT2),  k3 = k20e^(−E2/RT1),  k4 = k20e^(−E2/RT2)

Figure 4. Flowsheet of a chemical process with two continuous stirred tank reactors.

In this problem, CAi and CBi (i = 1 and 2) are the state variables, Vi (i = 1 and 2) the design variables, and Ti (i = 1 and 2) the control variables; θ = (E1, E2, k10, k20) is the vector of uncertain parameters. We suppose that the uncertain parameters are dependent random variables with the normal distribution Np(E[θ], Λ). The quantities (E[θi]; σi), i = 1, ..., 4, have the values (6665.9; 200), (7985.2; 240), (0.715; 0.0215), and (0.182; 0.0055), respectively. The correlation matrix is given in Table 1.16

Table 1. Correlation Matrix

1.0  0.5  0.3  0.2
0.5  1.0  0.5  0.1
0.3  0.5  1.0  0.3
0.2  0.1  0.3  1.0

Let us transform constraint 92. In this case, eq 24 is of the form

g1(d, z, θ) ≡ CB2^SP − CB2(k1, V1, k2, V2, k3, k4) = 0

Let us select the parameter k20 as the parameter θp. Taking into account that in this case ∂g1/∂θp < 0, we transform constraint 92 to the form

Pr{(−b + (b² − 4ac)^(1/2))/(2a) − k20 ≤ 0} ≥ α

where

a = CB2^SP V1V2 exp(−E2/RT1) exp(−E2/RT2)
b = CB2^SP [V1 exp(−E2/RT1) + V2 exp(−E2/RT2)] + [V1 exp(−E2/RT1)/(1 + k10V1 exp(−E1/RT1))][1/(1 + k10V2 exp(−E1/RT2)) − 1]
c = [(1 + k10V1 exp(−E1/RT1))(1 + k10V2 exp(−E1/RT2))]^(−1) + CB2^SP − 1

In Table 2 we give the optimal values of the goal function, design variables, and computational times obtained by solving the OSOPCC (eq 64) for CB2^SP = 0.5, 0.52, and 0.54 and α = 0.9 and 0.95 with the help of the proposed method and the method from ref 16. It should be noted that the sign of the derivative ∂g1/∂θp does not change during the solving of problem 91. It is seen that the computational time required for solving the OSOPCC when using the proposed method is significantly (approximately four times) less than when using the method from ref 16, because the proposed method does not use multiple numerical integration. In addition, the optimal values of the goal function obtained with the proposed method are better than those obtained by the method from ref 16, because in our case there are more degrees of freedom: we use the control variables T1 and T2 in addition to the design variables V1 and V2 as search variables.

Table 2. Results of Solving OSOPCC for Example 1: Comparison of the Proposed Method and the Method from Ref 16

                     method from ref 16                  proposed method
CB2^SP    α       f      V1     V2    CPU (s)      f        V1     V2     CPU (s)
0.50     0.90   3.624  3.301  3.266    0.8       2.710    1.836  1.836    0.2
0.50     0.95   3.671  3.497  3.245    1.0       2.749    1.889  1.889    0.2
0.52     0.90   3.899  3.808  3.795    1.0       3.0266   2.290  2.290    0.2
0.52     0.95   3.963  3.854  4.001    1.2       3.128    2.446  2.446    0.4
0.54     0.90   4.331  4.474  4.908    1.1       4.2847   2.792  7.545    0.9

8.2. Example 2. Let us consider the production of species B from A as A → B in a continuous stirred tank reactor in the flowsheet in Figure 5 (see ref 4). The mathematical formulation of the optimization problem for this example is given in ref 4. In this problem, the design variables are the volume of the reactor, V, and the heat transfer area, A. The control variables are the temperature in the reactor, T1 (311 ≤ T1 ≤ 389), and the recycle temperature, T2 (311 ≤ T2 ≤ 389). The state variables are the concentration, CA1, of reactant A in the product; the flow rate of the recycle, F1; the outlet temperature of the cooling water, Tw2; and the flow rate, W, of the cooling water. The vector θ = (F0, T0, Tw1, k0, U) is the vector of uncertain parameters, where F0 is the feed flow rate, T0 the feed flow temperature, Tw1 the inlet temperature of the cooling water, k0 the rate constant, and U the overall heat transfer coefficient.

Figure 5. Flowsheet of a chemical process with continuous stirred tank reactor and heat exchanger.

The transformation of problem 1 into problem 6 is given in ref 32. The nonlinear constraints (eq 5) have the form

0.9 − (cA0 − cA1)/cA0 ≤ 0
Tw2 − Tw1 ≥ 0
T2 − Tw1 ≥ 11.1
Tw2 − 355 ≤ 0
301 − Tw2 ≤ 0    (93)

The linear constraints (eq 5) have the form

T2 − T1 ≤ 0
311 − T2 ≤ 0,  T2 − 389 ≤ 0
311 − T1 ≤ 0,  T1 − 389 ≤ 0    (94)

We will consider all nonlinear constraints (eq 93) as chance constraints. We suppose that during the operation stage there is enough process data to allow us either to measure or to calculate the values of all the uncertain parameters. Therefore, the control variables can be used for improvement of the chemical process performance at each time instant (for each value of the uncertain parameters).
We will suppose that the uncertain parameters θi are independent random parameters having the normal distribution N1(E[θi], σi), where E[θi] = θi^N and θi^N is the nominal value of the uncertain parameter θi. The vector θ^N of the nominal values is θ^N = (45.36; 333; 300; 9.8; 1635). The values σi are determined from the condition 3.3σi = γΔθi, where Δθi is the ith component of the vector Δθ = (0.1; 0.02; 0.03; 0.1; 0.1) and γ is a coefficient. The uncertainty region is Tθ = {θ: θi^N − γΔθi ≤ θi ≤ θi^N + γΔθi, i = 1, ..., p}, and γΔθi is the maximal deviation of the parameter θi from its nominal value. With the help of the parameter γ, we can change the region Tθ. We chose σi so that the probability measure of the interval θi^N − γΔθi ≤ θi ≤ θi^N + γΔθi is equal to 0.999. The objective function is f = 691.2V^0.7 + 873.6A^0.6 + 1.76W + 7.056F1.
We solved three problems. In the first problem, the uncertain parameters were equal to their nominal values θ^N (their expected values); this is the nominal optimization problem, which has the form of eq 6 with θ = θ^N. The second problem is the one-stage optimization problem with chance constraints. The third problem is the two-stage optimization problem with chance constraints. In Tables 3 and 4 we give the results of solving the nominal optimization problem, the OSOPCC, and the TSOPCC by the proposed method and the method from ref 32 for the case of independent uncertain parameters. The line γ = 0 corresponds to the nominal optimization problem. A dash in Table 3 means that the method from ref 32 cannot find the solution of the OSOPCC for the given α and γ. Tables 3 and 4 show that in this case the optimal values of the goal function obtained using the TSOPCC are insignificantly (approximately 0.2−0.6%) better than those obtained using the OSOPCC.
In addition, these tables show that the computational time of solving the OSOPCC and TSOPCC using the proposed method is significantly less than the computational time of solving these problems using the method from ref 32. We now compare the proposed approach with the straightforward way of solving problem 17 using a method of nonlinear programming (for example, SQP30). Suppose we use the straightforward way of solving problem 17. In this case, it is necessary to calculate at each iteration the values of the objective function, the left-hand sides of the constraints of problem 17, and the gradients of these functions. Let us estimate the
Table 3. Results of Solving OSOPCC for Example 2 with the Proposed Method and the Method Proposed in Ref 32

                   method from ref 32              proposed method
γ      α       f       V     A     CPU (s)     f       V     A     CPU (s)
0      −       9374    5.48  7.88  0.01
1.0    0.5     9893    5.63  7.44  10          9379    5.48  7.87  2.9
1.0    0.75    9972    5.79  7.48  11          9608    5.64  7.70  3.38
1.0    0.95    10132   6.04  7.62  12.6        9983    5.84  7.68  3.85
1.25   0.5     9970    5.66  7.48  14.5        9382    5.48  7.86  2.8
1.25   0.75    10069   5.87  7.59  37          9677    5.67  7.68  2.5
1.25   0.95    10242   6.19  7.82  32          9965    5.93  7.62  10
1.5    0.5     10038   5.7   7.68  65          9386    5.48  7.86  2.73
1.5    0.75    10190   5.95  7.98  70          9750    5.72  7.66  2.32
1.5    0.95    10371   6.35  8.07  73          10113   6.03  7.64  14
2.5    0.5     −       −     −     −           9409    5.48  7.82  2.19
2.5    0.95    −       −     −     −           10824   6.42  8.61  10
Table 4. Results of Solving TSOPCC for Example 2 with the Proposed Method and the Method Proposed in Ref 32

                   method from ref 32              proposed approach
γ      α       f       V     A     CPU (s)     f       V     A     CPU (s)
1.0    0.5     9865    5.57  7.39  315         9359    5.45  7.81  109
1.0    0.75    9944    5.73  7.39  411         9528    5.62  7.62  134
1.0    0.95    10097   6.00  7.51  987         9901    5.82  7.15  283
1.25   0.5     9937    5.6   7.53  501         9360    5.45  7.75  108
1.25   0.75    10032   5.79  7.57  645         9601    5.64  7.12  212
1.25   0.95    10196   6.14  7.65  400         9961    5.92  7.0   324
1.5    0.5     9991    5.62  7.36  574         9364    5.47  7.83  167
1.5    0.75    10106   5.86  7.72  584         9683    5.70  7.05  322
1.5    0.95    10302   6.28  7.81  591         10050   6.02  7.35  325
2.5    0.5     10283   5.71  8.41  2020        9384    5.47  7.71  199
2.5    0.95    10813   6.88  8.72  2100        10798   6.38  8.59  500
Figure 6. Reaction−separation system.
total CPU time required for solving problem 17. Using the Monte Carlo method from the software package Mathematica, we computed the 12 multiple integrals used to determine the values of the objective function and the left-hand sides of the constraints of problem 17 for z(θ) = z*(θ), where z*(θ) is the solution of problem 17 for γ = 1.0 and αj = 0.5. The total CPU time (Pentium 4) required for the computation of these multiple integrals is approximately 40 min. Therefore, because at each iteration it is necessary to calculate the gradients of the objective function and the left-hand sides of the constraints of problem 17, solving it can require several hours of computational time. At the same time, the total CPU time required to solve problem 17 (for γ = 1.0, αj = 0.5) with the help of our approach is equal to 1 s. Thus, the proposed approach reduces the computational time significantly. The value of the objective function obtained by solving problem 17 by the proposed method is equal to 9848, while the expected value E[f(d, z, θ)] obtained by numerical integration by the Monte Carlo method is 9852. Thus, the piecewise linear approximation of the goal function provides sufficient accuracy.
8.3. Example 3. In this system (Figure 6), species B is produced from species A as in Figure 2. The flowsheet consists of two parts. The reaction part consists of the reactor (1), heat exchanger (2), and compressor (3); this is the same system considered in example 2 (its process model is given in ref 28), except that we excluded the constraint 0.9 − (cA0 − cA1)/cA0 ≤ 0, which in this case is unnecessary. The separation part10 consists of the splitter (4), flash tank (5), distillation column (6), and mixers (7 and 8). The effluent (with flow rate F1) from the reaction part is split using the splitter (4) into streams F2, F3, F8, and F9. At high conversion in the reactor, the bypass streams F8 and F9 can be significant, depending on the capital costs of the reactor, flash tank, and distillation column. The system has two product streams with flow rates P1 and P2, in which the specified purities of species B and A are given. The process constraints are given by eqs 93 and 94. The flow F1 in Figure 5 is designated as F11 in Figure 6. The process model of the separation part is given as follows:10
Split mass balances
F2A = S1F1A,  F2B = S1(F1 − F1A)
F8A = S2F1A,  F8B = S2(F1 − F1A)
F9A = S3F1A,  F9B = S3(F1 − F1A)
F3A = S4F1A,  F3B = S4(F1 − F1A)
S1 + S2 + S3 + S4 = 1
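The split balances above, together with the flash, column, and mixer balances given below, form a small linear model of the separation part; a sketch (the function name, feed values, and split fractions are illustrative, while the recoveries 0.9/0.1 and 0.97/0.03 and the coefficients kf, kC are those quoted in the text):

```python
def separation_part(F1A, F1, S, kf=0.85, kC=0.91):
    """Product flows P1, P2 from the split/flash/column/mixer mass balances.
    S = (S1, S2, S3, S4) are split fractions with sum(S) == 1."""
    S1, S2, S3, S4 = S
    F1B = F1 - F1A
    # Splitter
    F2A, F2B = S1 * F1A, S1 * F1B
    F8A, F8B = S2 * F1A, S2 * F1B
    F9A, F9B = S3 * F1A, S3 * F1B
    F3A, F3B = S4 * F1A, S4 * F1B
    # Flash tank
    F4A, F4B = 0.9 * F2A, (1 - kf) * F2B
    F5A, F5B = 0.1 * F2A, kf * F2B
    # Distillation column
    F6A, F6B = 0.97 * F3A, (1 - kC) * F3B
    F7A, F7B = 0.03 * F3A, kC * F3B
    # Mixers
    P1A, P1B = F4A + F6A + F8A, F4B + F6B + F8B
    P2A, P2B = F5A + F7A + F9A, F5B + F7B + F9B
    return (P1A, P1B), (P2A, P2B)

(P1A, P1B), (P2A, P2B) = separation_part(F1A=2.0, F1=40.0, S=(0.3, 0.2, 0.1, 0.4))
assert abs((P1A + P1B + P2A + P2B) - 40.0) < 1e-9  # total mass is conserved
```

Because every unit is linear in the flows, fixing the split fractions as control variables lets all state variables be evaluated without iteration, as noted in the text.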
Flash mass balances
F4A = 0.9F2A,  F4B = (1 − kf)F2B
F5A = 0.1F2A,  F5B = kf F2B
Distillation column mass balances
F6A = 0.97F3A,  F6B = (1 − kC)F3B
F7A = 0.03F3A,  F7B = kC F3B
Mixer mass balances
P1A = F4A + F6A + F8A,  P1B = F4B + F6B + F8B
P2A = F5A + F7A + F9A,  P2B = F5B + F7B + F9B

Here FIA and FIB (I = 2, ..., 9) are the flow rates (in kilomoles per hour) of species A and B, respectively; Si (i = 1, ..., 4) are the split fractions; Df and Dc are the diameters of the flash tank and the distillation column, respectively; kf and kc are the coefficients of separation in the flash drum and the distillation column, respectively. As control variables we use [T1, T2, S1, S2, S3, S4]. Design variables are [V, A, Df, Dc]. Such a selection of the control variables permits us to evaluate all the state variables without iterations. Values of the parameters of the mathematical model and other data for this example are given in Table 5.
The constraints connected with the separation part are as follows:10
Design constraints
DF² ≥ 1.27M(F2A + F2B)/(45υρV)
DC² ≥ 1.27M(F3A + F3B)/(35υρV)
where
υ = 0.064[(ρL − ρV)/ρV]^(1/2)
Quality constraints
9P1B − P1A ≤ 0,  3P2A − P2B ≤ 0
Size restrictions
1.0 ≤ DF ≤ 5.5,  1.0 ≤ DC ≤ 5.5
Uncertain parameters for the reaction part: F0 (±10%), T0 (±2%), Tw1 (±3%), k0 (±10%), U (±10%)
Uncertain parameters for the separation part: kf (±10%), kc (±10%)

Table 5. Data for Example 3

parameter   value                   parameter   value
ρcp         167.4 kJ/(m³ K)         k0          28.52 1/h
cpw         4.19 kJ/(kg K)          U           148.636 kJ/(m² h K)
F0          4.12 m³/h               ΔH          −3500.0 kJ/(kg mol)
CA0         8.0 kmol/m³             T0          333.0 K
EA/R        555.6 K                 Tw1         300.0 K
M           92.0 kg/kmol            ρL          883.0 kg/m³
Np          20                      ρV          3.0 kg/m³
kc          0.91                    kf          0.85

Thus, there are four design variables, six control variables, six uncertain parameters, and nine chance constraints in this optimization problem. The performance objective function is profit, which includes the operating costs of the streams with flow rates P1 and P2,
Table 6. Results of Solving Nominal Optimization, OSOPCC, and TSOPCC for Example 3

problem    γ      α      f ($/h)   V (m³)    A (m²)    Df (m)    Dc (m)   CPU (s)
nominal    −      −      10 060    5.551     19.509    1.928     1        0.01
OSOPCC     1      0.5     9 848    5.386     19.767    1.8266    1        0.84
           1      0.75    9 834    5.3608    19.706    1.8582    1        0.59
           1      0.95    9 807    5.3229    19.623    1.9038    1        0.45
           1.25   0.5     9 848    5.386     19.767    1.82661   1        0.51
           1.25   0.75    9 829    5.3543    19.691    1.8661    1        0.43
           1.25   0.95    9 794    5.3066    19.59     1.9231    1        0.43
           1.5    0.5     9 848    5.386     19.767    1.82662   1        0.51
           1.5    0.75    9 825    5.3478    19.676    1.874     1        0.48
           1.5    0.95    9 780    5.2900    19.557    1.9424    1        0.62
           2.5    0.5     9 848    5.386     19.767    1.82664   1        0.51
           2.5    0.75    9 806    5.3213    19.619    1.9057    1        0.45
           2.5    0.95    9 653    5.2224    19.266    2.0196    2.957    0.43
TSOPCC     1      0.5     9 868    5.381     19.737    1.8256    1        108
           1      0.75    9 856    5.359     19.671    1.8276    1        240
           1      0.95    9 847    5.362     19.639    1.8293    1        450
           1.25   0.5     9 868    5.381     19.737    1.8256    1        108
           1.25   0.75    9 852    5.363     19.732    1.8260    1        300
           1.25   0.95    9 841    5.366     19.781    1.8271    1        530
           1.5    0.5     9 868    5.381     19.737    1.8256    1        108
           1.5    0.75    9 846    5.362     19.643    1.8266    1        512
           1.5    0.95    9 801    5.311     19.557    1.9064    1.01     609
           2.5    0.5     9 868    5.381     19.737    1.8256    1        108
           2.5    0.75    9 810    5.320     19.549    1.9045    1.03     465
           2.5    0.95    9 685    5.1204    19.132    2.001     2.726    536
capital and operating costs of the reaction and separation parts as follows:
f = {cost of products} − {capital cost for the flash tank and distillation column} − {operating cost in the separation part} − {capital costs for the reactor and heat exchanger} − {operating costs for the reaction part, i.e., cost of the cold water and pumping}
This is also given by
f = {75(P1A + P1B) + 55(P2A + P2B)} − {10Df² + (10 + 0.3Np)Dc²} − {0.2(F2A + F2B) + 2.0(F3A + F3B)} − {250.0V^0.7 + 87.36A^0.6} − {1.76W1 + 7.056F11}
where Np is the number of trays in the distillation column. We suppose that at the operation stage there is enough process data to correct all the uncertain parameters. In Table 6 we give the results of solving the nominal optimization problem, the OSOPCC, and the TSOPCC by the proposed method for the case of independent uncertain parameters.

9. CONCLUSION
We proposed an approach for solving chance-constrained optimization problems for the cases when the uncertain parameters are either independent random variables with arbitrary probability distributions or dependent normally distributed random variables. The approach is based on three operations:
1. Approximate transformation of chance constraints into deterministic constraints. We devised a new technique for this transformation, based on the approximation of each region Ωj = {θ: gj(d, z(θ), θ) ≤ 0, θ ∈ Tθ} by a union of multidimensional rectangles.
2. Approximate calculation of the expected value of the objective function with the help of a piecewise linear approximation of the objective function.
3. Approximate representation of the control variables z(θ) as piecewise constant functions.
To improve these approximations, a partition of the uncertainty region into subregions is performed at each iteration. The proposed method excludes multiple integration. At the same time, all the methods known to us16,24−29 require multiple integration to calculate the left-hand sides of chance constraints and the expected value of the goal function. Compare, for example, the proposed method with methods using Gauss quadrature. We showed that the computational intensity of calculating the approximation Eap^(k)[f(d, z, θ); Tθ] at the kth iteration is equal to k(p + 1) calculations of values of the goal function. At the same time, to calculate the expected value of the goal function using the Gauss quadrature method, we would have to calculate the goal function q^p times, where q is the number of subintervals into which each interval of change of the parameters θi is divided and p is the dimensionality of the vector θ.
In comparison with the back-mapping method,16 the proposed method has the following advantage: while back-mapping combined with collocation on finite elements significantly diminishes the computational burden of numerical integration, our approach excludes the use of numerical integration completely. In addition, we proposed an approach for solving the TSOPCC; Wendt et al.16 considered only a method of solving the one-stage optimization problem. Moreover, this approach has a significant advantage in comparison with the method of solving the one-stage and two-stage optimization problems with chance constraints proposed by us earlier.31,32 The method proposed here solves at each iteration problem 64 (for the OSOPCC) and problem 81 (for the TSOPCC). We showed that these problems have nd + nz search variables (d, z) and nd + nzN̄k search variables (d, zl), respectively. At the same time, we noted in the Introduction that the optimization problem solved at the kth iteration of the method described in ref 32 has nd + (nz + 2mp)Nk search variables for the TSOPCC and nd + nz + 2mpNk search variables for the OSOPCC. Thus, the number of search variables in problems 64 and 81 is significantly less than in the optimization problems that must be solved with the method proposed in ref 32. Consider example 2, where m = 5, p = 5, nd = 2, and nz = 2. At the fifth iteration of solving the TSOPCC, the number of search variables in optimization problem 81 equals 12, whereas the number of search variables in the optimization problem solved at the fifth iteration with the method from refs 31 and 32 equals 262. Suppose we solve the OSOPCC. Then at the fifth iteration the number of search variables in optimization problem 64 equals 4, whereas with the method from refs 31 and 32 it equals 254. The computational experiment showed that the proposed approach significantly decreases the computational time of solving the OSOPCC and TSOPCC for the design problems of the chain of two continuous stirred tank reactors and the process consisting of a continuous stirred tank reactor and heat exchanger.
The following is a summary of all assumptions used in this paper:
1. The functions f(d, z, θ) and gj(d, z, θ), j = 1, ..., m, are continuously differentiable.
2. Chance constraints are individual ones.
3. If the random variables θ are independent, then they may have arbitrary probability distributions; if they are dependent random variables, then they should have a normal distribution.
4. At the operation stage, the values of all the uncertain parameters can be either measured or calculated using available experimental data.
5. There is the possibility of change in the control variables at the operation stage.
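The evaluation-count comparison with tensor-product Gauss quadrature can be made concrete. Taking p = 5 uncertain parameters as in example 2, and an illustrative q = 10 quadrature nodes per dimension:

```python
p = 5   # dimensionality of theta (as in example 2)
q = 10  # quadrature nodes per dimension (illustrative)

# Proposed scheme: k(p + 1) goal-function evaluations at iteration k.
for k in (1, 5, 50):
    print(k, k * (p + 1))  # grows linearly with the iteration count

# Full tensor-product quadrature: q**p evaluations, exponential in p.
print(q ** p)
```

Even after 50 partitioning iterations, the piecewise linear approximation needs 300 goal-function evaluations, versus 100 000 for the tensor-product rule.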
AUTHOR INFORMATION
Notes
The authors declare no competing financial interest.
NOMENCLATURE
Acronyms
TSOP = two-stage optimization problem
OSOP = one-stage optimization problem
TSOPHC = two-stage optimization problem with hard constraints
TSOPCC = two-stage optimization problem with chance constraints
OSOPCC = one-stage optimization problem with chance constraints
CP = chemical process
Notation

d = an nd-vector of design variables
z = an nz-vector of control variables
d^(k), z^(k) = optimal values of the design and control variable vectors obtained at the kth iteration (the solution of eq 58), respectively
θ = a p-vector of the uncertain parameters
Tθ = uncertainty region
E[f(d, z(θ), θ)] = the expected value of the function f(d, z(θ), θ)
Rl^(k) = the lth subregion (multidimensional rectangle) of the region Tθ
R̄l^(k) = the lth subregion (multidimensional rectangle) in the space of variables θ̄
Nk = number of regions Rl^(k) at the kth iteration
N̄k = number of regions R̄l^(k) at the kth iteration
θil^(k)L = lower bound of change of the variable θi in the region Rl^(k) at the kth iteration
θil^(k)U = upper bound of change of the variable θi in the region Rl^(k) at the kth iteration
z̃(θ) = approximation of the multivariate functions z(θ)
Eap^(k)[f(d, z̃(θ), θ)] = approximation of the expected value of the function f(d, z(θ), θ) at the kth iteration
zl = a vector of the control variables corresponding to the subregion Rl^(k)
f̃(d, z̃(θ), θ) = piecewise linear approximation of the function f(d, z̃(θ), θ)
θr = linearization point at the subregion Rr^(k)
f̃(d, zr, θ, θr) = linear part of the Taylor expansion of the function f(d, zr, θ) at the point θr
μq = a quality of an approximation of the objective function at the region Rq^(k)

REFERENCES

(1) Wellons, H.; Reklaitis, G. The design of multiproduct batch plants under uncertainty with staged expansion. Comput. Chem. Eng. 1989, 13, 115−126.
(2) Halemane, K. P.; Grossmann, I. E. Optimal process design under uncertainty. AIChE J. 1983, 29, 425−433.
(3) Swaney, R. E.; Grossmann, I. E. An index for operational flexibility in chemical process design. AIChE J. 1985, 31, 621−630.
(4) Grossmann, I. E.; Floudas, C. A. Active constraint strategy for flexibility analysis in chemical processes. Comput. Chem. Eng. 1987, 11, 675−693.
(5) Ostrovsky, G. M.; Volin, Yu. M.; Senyavin, M. M. An approach to solving a two-stage optimization problem under uncertainty. Comput. Chem. Eng. 1997, 21, 311−325.
(6) Rooney, W. C.; Biegler, L. T. Optimal process design with model parameter uncertainty and process variability. AIChE J. 2003, 49, 438−449.
(7) Bernardo, F. P.; Saraiva, P. M. Robust optimization framework for process parameter and tolerance design. AIChE J. 1998, 44, 2007−2017.
(8) Diwekar, U. M.; Kalagnanam, J. R. An efficient sampling technique for optimization under uncertainty. AIChE J. 1997, 43, 440−447.
(9) Carnahan, B.; Luther, H. A.; Wilkes, J. O. Applied Numerical Methods; John Wiley & Sons: New York, 1969.
(10) Acevedo, J.; Pistikopoulos, E. N. Stochastic optimization based algorithms for process synthesis under uncertainty. Comput. Chem. Eng. 1998, 22, 647−671.
(11) Bernardo, F. P.; Pistikopoulos, E. N.; Saraiva, P. M. Integration and computational issues in stochastic design and planning optimization problems. Ind. Eng. Chem. Res. 1999, 38, 3056−3068.
(12) Pintaric, Z. N.; Kravanja, Z. A strategy for MINLP synthesis of flexible and operable processes. Comput. Chem. Eng. 2004, 28, 1087−1103.
(13) Wei, J.; Realff, M. J. Sample average approximation methods for stochastic MINLPs. Comput. Chem. Eng. 2004, 28 (3), 319−332.
(14) Kall, P.; Wallace, S. W. Stochastic Programming; John Wiley & Sons: New York, 1994.
(15) Maranas, C. D. Optimal molecular design under property prediction uncertainty. AIChE J. 1997, 43, 1250−1264.
(16) Wendt, M.; Li, P.; Wozny, G. Nonlinear chance-constrained process optimization under uncertainty. Ind. Eng. Chem. Res. 2002, 41, 3621−3629.
(17) Li, P.; Arellano-Garcia, H.; Wozny, G. Chance constrained programming approach to process optimization under uncertainty. Comput. Chem. Eng. 2008, 32, 25−45.
(18) Flemming, T.; Bartl, M.; Li, P. Set-point optimization for closed-loop control systems under uncertainty. Ind. Eng. Chem. Res. 2007, 46, 4930−4942.
(19) Klöppel, M.; Geletu, A.; Hoffmann, A.; Li, P. Using sparse-grid methods to improve computation efficiency in solving dynamic nonlinear chance-constrained optimization problems. Ind. Eng. Chem. Res. 2011, 50, 5693−5704.
(20) Geletu, A.; Hoffmann, A.; Klöppel, M.; Li, P. Monotony analysis and sparse-grid integration for nonlinear chance constrained process optimization. Eng. Optim. 2011, 43, 1019−1041.
(21) Geletu, A.; Hoffmann, A.; Klöppel, M.; Li, P. A tractable approximation of non-convex chance constrained optimization with non-Gaussian uncertainties. Eng. Optim. 2015, 47, 495−520.
(22) Geletu, A.; Klöppel, M.; Zhang, H.; Li, P. Advances and applications of chance-constrained approaches to systems optimization under uncertainty. Int. J. Syst. Sci. 2013, 44, 1209−1232.
(23) Ostrovsky, G.; Ziyatdinov, N.; Lapteva, T. One-stage optimization problem with chance constraints. Chem. Eng. Sci. 2010, 65, 2373−2381.
(24) Nemirovski, A.; Shapiro, A. Scenario approximations of chance constraints. In Probabilistic and Randomized Methods for Design under Uncertainty; Calafiore, G., Dabbene, F., Eds.; Springer-Verlag: London, 2006; pp 3−47.
(25) Erdoğan, E.; Iyengar, G. On two-stage convex chance constrained problems. Math. Methods Oper. Res. 2007, 65, 115−140.
(26) Chen, X.; Sim, M.; Sun, P. A robust optimization perspective on stochastic programming. Oper. Res. 2007, 55, 1058−1071.
(27) Wang, W.; Ahmed, S. Sample average approximation of expected value constrained stochastic programs. Oper. Res. Lett. 2008, 36, 515−519.
(28) Ierapetritou, M. G.; Pistikopoulos, E. N. Global optimization for stochastic planning, scheduling and design problems. In Global Optimization in Engineering Design; Grossmann, I. E., Ed.; Kluwer Academic Publishers: Boston, 1996; pp 231−287.
(29) Bernardo, F. P.; Pistikopoulos, E. N.; Saraiva, P. M. Robustness criteria in process design optimization under uncertainty. Comput. Chem. Eng. 1999, S459−S462.
(30) Bernardo, F. P.; Saraiva, P. M. Quality costs and robustness criteria in chemical process design optimization. Comput. Chem. Eng. 2001, 25, 27−40.
(31) Ostrovsky, G. M.; Ziyatdinov, N. N.; Lapteva, T. V.; Zaitsev, I. V. Two-stage optimization problem with chance constraints. Chem. Eng. Sci. 2011, 66, 3815−3828.
(32) Ostrovsky, G. M.; Ziyatdinov, N. N.; Lapteva, T. V. Optimal design of chemical processes with chance constraints. Comput. Chem. Eng. 2013, 59, 74.
(33) Cramér, H. Mathematical Methods of Statistics; Princeton University Press: Princeton, NJ, 1946.
(34) Abramowitz, M.; Stegun, I. A. Handbook of Mathematical Functions; National Bureau of Standards, U.S. Government Printing Office: Washington, DC, 1974.
(35) Biegler, L. T. Nonlinear Programming: Concepts, Algorithms, and Applications to Chemical Processes; Society for Industrial and Applied Mathematics: Philadelphia, PA, 2010.
(36) Gill, P.; Murray, W.; Wright, M. Practical Optimization; Academic Press: London, 1981.
(37) Ryoo, H. S.; Sahinidis, N. V. Global optimization of nonconvex NLPs and MINLPs with applications in process design. Comput. Chem. Eng. 1995, 19, 551.
(38) Ostrovsky, G. M.; Ziyatdinov, N. N.; Lapteva, T. V.; Zaitsev, I. V. Optimization of chemical processes with dependent uncertain parameters. Chem. Eng. Sci. 2012, 83, 119−127.