Optimization of Chemical Process with Joint Chance Constraints


Article

Gennady M. Ostrovsky, Nadir N. Ziyatdinov, Tatyana V. Lapteva, Anna S. Silvestrova, and Quan T. Nguyen

Ind. Eng. Chem. Res., Just Accepted Manuscript. DOI: 10.1021/acs.iecr.6b02683. Publication Date (Web): February 7, 2017.




Industrial & Engineering Chemistry Research

Gennady M. Ostrovsky 1,2, Nadir N. Ziyatdinov 2, Tatyana V. Lapteva 2, Anna S. Silvestrova 2,*, Quan T. Nguyen 2

1 Karpov Institute of Physical Chemistry, Vorontsovo Pole, 10, Moscow, 103064, Russia, [email protected]

2 Kazan National Research Technological University, Karl Marx Str., 68, Kazan, Russia, 420015

[email protected], [email protected], [email protected]*, [email protected]

1. Introduction

In many optimization applications, it is assumed that the problem data are known and certain. However, uncertainties exist in most realistic problems because of the random nature of the process data, measurement errors, or other reasons. We consider the problem of the optimal design of a chemical process (CP) for the case when inexact mathematical models are used. When an exact mathematical model is used, the optimization problem of CP design can be expressed as follows:

min_{d,z} f(d, z, θ)  (1)

g_j(d, z, θ) ≤ 0, j = 1,…,m,  (2)


where f(d, z, θ) is the objective function, the inequality constraints (2) are design specifications, d is a vector of design variables, z is a vector of control variables, and θ is a p-vector of parameters. We suppose that the functions f(d, z, θ) and g_j(d, z, θ), j = 1,…,m, are continuously differentiable and that the parameters θ are uncertain. Several approaches to accounting for uncertainty in optimization have been proposed, such as stochastic programming1, robust optimization2, approximate dynamic programming3, and chance-constrained optimization4, which is the focus of this article. The problem of the optimal design of a CP under uncertainty is usually formulated in one of two forms: the two-stage optimization problem and the one-stage optimization problem. Methods of solving the two-stage optimization problem with hard constraints (TSOPHC) and the one-stage optimization problem with chance constraints (OSOPCC) were developed significantly in refs 5−10. The optimization problem with chance constraints (CCs) was introduced in ref 11, and an extensive review can be found in ref 12. Joint chance constrained (JCC) problems for independent random variables were first used in refs 13 and 14. There are many challenging aspects of solving CC optimization problems. The main issues are the computation of the multiple integrals required to evaluate the probabilities of constraint satisfaction (a very computationally intensive operation) and the fact that the feasible region of a chance-constrained optimization problem is often nonconvex15. To avoid these difficulties, existing approaches to solving a CC optimization problem rely on solving an approximation problem. Generally, three groups of methods are used to approximate CCs: 1) improvement of Gauss quadrature (see refs 16−18); 2) sampling-based approaches; 3) analytical approximation approaches.
One must also note refs 19−21, where methods for the approximate evaluation of multidimensional integrals are considered.
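To make the computational issue concrete, the joint satisfaction probability Pr{g_j(d, z, θ) ≤ 0, j = 1,…,m} can always be estimated by brute-force Monte Carlo sampling. The sketch below does this for two hypothetical toy constraints with independent standard normal parameters; the constraint functions, dimensions, and sample size are illustrative assumptions, not taken from this paper:

```python
import numpy as np

# Monte Carlo estimate of the joint probability
# Pr{g_j(d, z, theta) <= 0, j = 1,...,m} for two toy constraints.
rng = np.random.default_rng(0)

def g1(d, z, theta):
    return theta[..., 0] + theta[..., 1] - d - z  # hypothetical constraint

def g2(d, z, theta):
    return theta[..., 0] - theta[..., 1] - z      # hypothetical constraint

def joint_probability(d, z, n_samples=200_000):
    # theta: independent standard normal parameters (p = 2)
    theta = rng.standard_normal((n_samples, 2))
    satisfied = (g1(d, z, theta) <= 0) & (g2(d, z, theta) <= 0)
    return satisfied.mean()

print(joint_probability(d=2.0, z=1.0))
```

For high-dimensional θ or probability levels close to 1, such sampling becomes expensive, which is what motivates the approximation methods surveyed above.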


The second group of methods contains two categories of techniques: scenario approximation and sample average approximation. Research contributions to scenario approximation methods have been made in refs 9, 10, and 22−24, among others. Nemirovski and Shapiro23 proposed an ellipsoid-like iterative algorithm for solving two-stage optimization problems with chance constraints for the special case where the left-hand sides of constraints (2) are bi-affine with respect to the variables d, z, and θ. Erdoǧan and Iyengar25 extended this algorithm to the case when the left-hand sides of constraints (2) are bi-convex. However, the scenario approximation itself is random, and its solution may not satisfy the chance constraint15. Sample average approximation methods use an empirical distribution associated with random samples to replace the actual distribution and to evaluate the chance constraint; methods of this kind have been investigated in refs 24 and 26, among others. The third group of methods is divided into three categories of techniques: robust optimization, well-known probability inequalities, and transformation of chance constraints into deterministic ones. The deterministic approximation techniques for CCs use Chebyshev's inequality27, Hoeffding's inequality28,29,30, etc. Nemirovski and Shapiro31 devised convex approximations of chance constraints. Hong et al.32 proposed convex approximations for the joint chance constrained problem and studied robust counterpart optimization techniques for linear and mixed-integer linear optimization problems. Nemirovski and Shapiro23 and Bertsimas et al.33 employed Bonferroni's inequality to conservatively approximate a robust joint chance constraint with violation probability α by m robust individual chance constraints whose violation probabilities sum to α.
Although this approach has a long tradition in chance-constrained programming12, Chen et al.34 proposed an approximation approach for solving a class of multistage chance-constrained stochastic linear optimization problems and, in ref 35, demonstrated that the quality


of this approximation can deteriorate substantially with m if the safety conditions are positively correlated. Approaches to solving robust optimization problems are studied in ref 36 and in a series of papers37,38, where both continuous (general, bounded, uniform, normal) and discrete (general, binomial, Poisson) uncertainty distributions are considered and the framework is applied to operational planning problems. Erdogan and Iyengar39 studied robust JCCs where the ambiguity set contains all distributions that are within a certain distance of a nominal distribution (in terms of the Prohorov metric); they derived a conservative approximation by sampling from the nominal distribution. Hu et al.40 proposed an approach for convex reformulations of ambiguous JCC problems. The approaches based on the transformation of chance constraints into deterministic ones allow us to avoid the calculation of multiple integrals. We must note that, except for a few specific probability distributions (e.g., the normal distribution), it is difficult to formulate an equivalent deterministic constraint for a chance constraint. First of all, linear chance constraints can be transformed into deterministic ones41. Maranas42 derived the technique for the case when the constraints are nonlinear with respect to the decision variables and linear with respect to the uncertain parameters. Wendt et al.43 proposed a back-mapping method of solving the steady-state OSOPCC based on the use of the monotone relationship between a constrained output and one of the uncertain input parameters. Later, Li et al.44 applied this strategy to solve problems of the optimal design of CPs, optimization of nonlinear dynamic systems, optimal production planning, and optimal control of chemical processes under uncertainty. In ref 45, the back-mapping approach was improved for addressing closed-loop control design under uncertainty.
Geletu et al.46 showed that the mathematical analysis of monotonic relations can be carried out using the so-called global implicit function theorem. In ref 46, a new approach to solving the nonconvex


OSOPCC with non-Gaussian distributed uncertainties was proposed. A review paper47 summarized recent advances in solving the OSOPCC. However, this approach still requires the calculation of multiple integrals. Ostrovsky et al.48 proposed an approach to solving the OSOPCC based on the transformation of chance constraints into deterministic ones and a partition of the uncertainty region. Thus, the papers above consider stochastic optimization problems whose objective function and constraints are either linear or bi-convex functions; it should be noted that it is very difficult to verify the convexity property for a realistic model. Significantly less attention has been given to the formulation and solution of the TSOP for chemical processes with chance constraints. In refs 49 and 50, the TSOP with chance constraints is formulated by relaxing the chance constraints and penalizing the objective function (using the Taguchi loss function); here, difficulties arise with the assignment of a penalty coefficient. Bernardo and Saraiva51 considered the two-stage optimization problem with the following constraint: the expected value of the variance of a variable y characterizing the process quality must be less than or equal to some given value. Ostrovsky et al.52,53 formulated the two-stage optimization problem with chance constraints (TSOPCC) and developed a method of solving this problem based on the transformation of chance constraints into deterministic ones and a partition of the uncertainty region for the case when the uncertain parameters have a normal distribution. However, those papers considered only individual chance constraints. In this paper, we work out a new approximate method of solving optimization problems with joint chance constraints for the case when the uncertain parameters are independent random variables. We consider only chance constraints with left-hand-side uncertainty.
The method will be based on the transformation of chance constraints into deterministic


form. This makes it possible to avoid the computationally intensive calculation of multiple integrals when solving joint chance constrained optimization problems. As we showed earlier54, an optimization problem with dependent, normally distributed uncertain parameters can be transformed into an optimization problem with independent, normally distributed uncertain parameters; therefore, all the methods considered in this paper can also be used when the uncertain parameters are dependent and normally distributed. The assumptions of the proposed method are the following:

1. The chance constraints have left-hand-side uncertainty.

2. Using the constraint functions g_j(d, z, θ), j = 1,…,m, in (2), it must be possible to express one uncertain parameter as a function of all the other uncertain parameters and the decision variables.

3. All functions f(d, z, θ), g_j(d, z, θ), j = 1,…,m, are continuously differentiable.

4. The uncertain parameters are uncorrelated, and their probability density functions are known. In the case of correlated, normally distributed uncertain parameters, the method of ref 54 can be used to transform the problem into one with independent, normally distributed uncertain parameters.

2. Formulation of optimization problems with joint chance constraints

The TSOPJCC has the following form:

min_{d, z(θ)} E[f(d, z(θ), θ)]  (3)

Pr{g_j(d, z(θ), θ) ≤ 0, j = 1,…,m} ≥ α,  (4)

where the probability level α (0 ≤ α ≤ 1) is given by the designer, and E[f(d, z(θ), θ)] is the expected value of the function f(d, z(θ), θ),

E_θ[f(d, z(θ), θ)] = ∫_{T_θ} f(d, z(θ), θ) ρ(θ) dθ,  (5)

where T_θ is an uncertainty region, ρ(θ) is the probability density function, and Pr{g_j(d, z(θ), θ) ≤ 0, j = 1,…,m} is the probability of joint satisfaction of the constraints g_j(d, z(θ), θ) ≤ 0, j = 1,…,m. Let the solution of problem (3) exist, and designate it as d*, z*(θ). Since the uncertain parameters are independent random variables, the uncertainty region T_θ is a multidimensional rectangle of the following form:

T_θ = {θ_i : θ_i^L ≤ θ_i ≤ θ_i^U, i = 1,…,p}.  (6)

The probability measure of the uncertainty region T_θ is close to 1. Let us introduce the region Ω_T = {θ : g_j(d, z(θ), θ) ≤ 0, j = 1,…,m, θ ∈ T_θ}. It is clear that the probability measure of the region Ω_T is

Pr{g_j(d, z(θ), θ) ≤ 0, j = 1,…,m} = ∫_{Ω_T} ρ(θ) dθ.  (7)

If the control variables z(θ) do not depend on θ, i.e., z(θ) = z = const ∀θ ∈ T_θ, then TSOPJCC (3) is transformed into the OSOPJCC

f = min_{d,z} E[f(d, z, θ)]  (8)

Pr{g_j(d, z, θ) ≤ 0, j = 1,…,m} ≥ α,

where Pr{g_j(d, z, θ) ≤ 0, j = 1,…,m} = ∫_Ω ρ(θ) dθ and Ω = {θ : g_j(d, z, θ) ≤ 0, j = 1,…,m, θ ∈ T_θ}. Let the solution of problem (8) exist. In the TSOPJCC the control variables z(θ) are functions of the parameters θ; therefore, at each time instant of the operation stage the control variables can take values corresponding to the state of the CP at that instant. This is the advantage of the TSOPJCC formulation in comparison


with the OSOPJCC. That is why a chemical process design obtained by solving the TSOPJCC is, generally speaking, "better" than a design obtained by solving the OSOPJCC. The main difficulties in solving problem (3) are the following: a) the calculation of the expected value of the objective function and of the probabilities of constraint satisfaction is very computationally intensive; b) the decision variables are multivariate functions, so TSOP problem (3) is an infinite-dimensional programming problem. Since efficient methods exist for solving optimization problems with a finite number of decision variables, it is desirable to reduce problem (3) to an optimization problem with a finite number of decision variables. We will develop an iterative method based on the following general operations:

a) Construction of a region Ω̃^(T,(k)) ⊆ T_θ to approximate the region Ω_T for the TSOPJCC, and of a region Ω̃^(k) ⊆ T_θ to approximate the region Ω for the OSOPJCC, at each iteration of the method. We will construct the regions Ω̃^(k) (and the regions Ω̃^(T,(k))) in such a way that Pr{θ ∈ Ω̃^(k)} (and Pr{θ ∈ Ω̃^(T,(k))}) can be calculated without numerical multiple integration, and we will propose an approach to improving the quality of these approximations.

b) Approximation of the multivariate functions z(θ) by a function z̃^(k)(θ) of a finite number of variables in the TSOPJCC. We will use the approximation of multivariate functions that we proposed earlier52.

c) Approximation of the expected value of the TSOPJCC objective function by the approximation E_ap[f(d, z̃(θ), θ)], and of the OSOPJCC objective function by the approximation E_ap[f(d, z, θ)]. We will use the approximations that we proposed earlier48.

Thus, at the k-th iteration we will solve the following problem in the case of the TSOPJCC:


f_2^(k) = min_{d, z(θ)} E_ap^(k)[f(d, z(θ), θ)]  (9)

Pr{θ ∈ Ω̃^(T,(k))} ≥ α.

At each iteration, all the approximations z̃(θ), Ω̃^(T,(k)), and E_ap^(k)[f(d, z̃(θ), θ)] will be made more exact with the help of a partition of the region T_θ and the region Ω̃^(T,(k)). In the case of the OSOPJCC, the control variables z(θ) = z = const ∀θ ∈ T_θ, and we solve the problem

f_1^(k) = min_{d, z} E_ap^(k)[f(d, z, θ)]  (10)

Pr{θ ∈ Ω̃^(k)} ≥ α.  (11)

The approximations Ω̃^(k) and E_ap^(k)[f(d, z, θ)] will likewise be made more exact at each iteration with the help of a partition of the regions T_θ and Ω̃^(T,(k)).
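The partition-refinement idea behind these approximations can be illustrated with a minimal sketch: T_θ is covered by axis-aligned rectangles satisfying conditions of the type (18), (19) below, and each refinement step bisects a subregion along its widest side. The box bounds and the refinement rule here are illustrative assumptions, not the paper's specific partitioning strategy:

```python
import numpy as np

# Partition of T_theta into rectangles R_l^(k); one refinement sweep
# bisects every current subregion along its widest coordinate.
def bisect(rect):
    low, high = rect
    axis = int(np.argmax(high - low))          # widest side
    mid = 0.5 * (low[axis] + high[axis])
    left_high, right_low = high.copy(), low.copy()
    left_high[axis] = mid
    right_low[axis] = mid
    return (low, left_high), (right_low, high)

# T_theta = [-3, 3] x [-3, 3] (an assumed 3-sigma box, p = 2)
T = (np.array([-3.0, -3.0]), np.array([3.0, 3.0]))
partition = [T]
for _ in range(3):                              # three refinement sweeps
    partition = [piece for rect in partition for piece in bisect(rect)]

print(len(partition))  # 8 subrectangles
```

By construction the subrectangles cover T_θ and their interiors are disjoint, which is exactly what conditions (18) and (19) require.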

3. Construction of the approximations Ω̃^(k) for the one-stage optimization problem

At the k-th iteration we solve problem (10). Let us introduce the regions Ω_j, j = 1,…,m:

Ω_j = {θ : g_j(d, z, θ) ≤ 0, θ ∈ T_θ}, j = 1,…,m.  (12)

Since the solution of problem (10) exists, there are some nonempty subregions Ω_j, Ω_j ≠ ∅, j = 1,…,m, in the region T_θ. It is clear that

Ω = Ω_1 ∩ Ω_2 ∩ … ∩ Ω_m.  (13)

The equation of the boundary of the region Ω_j has the following form:

g_j(d, z, θ_1, θ_2, …, θ_p) = 0, j = 1,…,m.  (14)

Using equation (14) for fixed j, we can represent one of the parameters θ_i (for example, θ_p) as a function θ_p = ϕ_j(d, z, θ_1, …, θ_{p−1}). We obtain the value of the parameter θ_p numerically by solving the single equation (14) with one unknown quantity θ_p for known values of the variables d, z and the parameters θ_1, …, θ_{p−1}. Let us introduce the following designation:


ḡ_j(d, z, θ_1, …, θ_p) = θ_p − ϕ_j(d, z, θ_1, …, θ_{p−1}), if ∂g_j/∂θ_p ≥ 0,  (15)

ḡ_j(d, z, θ_1, …, θ_p) = ϕ_j(d, z, θ_1, …, θ_{p−1}) − θ_p, if ∂g_j/∂θ_p < 0.  (16)
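Recovering θ_p = ϕ_j(d, z, θ_1, …, θ_{p−1}) amounts to solving one scalar equation g_j = 0 in one unknown. A minimal sketch with a hypothetical constraint function that is monotone increasing in θ_p, so that simple bisection applies (the function g, the bracket, and all numbers are illustrative assumptions):

```python
def g(d, z, theta_rest, theta_p):
    # hypothetical g_j, monotone increasing in theta_p
    return theta_p + 0.5 * theta_rest[0] - d * z

def solve_theta_p(d, z, theta_rest, lo=-10.0, hi=10.0, tol=1e-10):
    # bisection on the scalar equation g(d, z, theta_rest, theta_p) = 0,
    # assuming the root is bracketed by [lo, hi]
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if g(d, z, theta_rest, mid) <= 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

tp = solve_theta_p(d=2.0, z=1.0, theta_rest=[1.0])
print(tp)  # root of theta_p + 0.5 - 2 = 0, i.e. theta_p = 1.5
```

In the method itself the same one-unknown solve is repeated for each constraint index j and each evaluation point.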

Now the region Ω can be represented in the following form:

Ω = {θ : ḡ_j(d, z, θ_1, …, θ_p) ≤ 0, j = 1,…,m, θ_i^L ≤ θ_i ≤ θ_i^U, i = 1,…,p}.

At each iteration of the iterative method we partition the region T_θ into a set of subregions R_l^(k); a technique for constructing these subregions will be shown later. Let the region T_θ at the k-th iteration already be partitioned into a set R^(k) of subregions (multidimensional rectangles) R_l^(k), l = 1,…,N_k:

R_l^(k) = {θ_i : θ_{i,l}^(k),L ≤ θ_i ≤ θ_{i,l}^(k),U, i = 1,…,p}, l = 1,…,N_k,  (17)

where N_k is the number of subregions R_l^(k) at the k-th iteration and l is the index of a subregion in the set R^(k). Let int(R_l^(k)) designate the set of interior points of the region R_l^(k). We will construct the regions R_l^(k) so that the following conditions are met:

R_1^(k) ∪ … ∪ R_{N_k}^(k) = T_θ,  (18)

int(R_l^(k)) ∩ int(R_q^(k)) = ∅, ∀l, q: l ≠ q, 1 ≤ l ≤ N_k, 1 ≤ q ≤ N_k.  (19)

Introduce the regions Ω^(l,(k)) = R_l^(k) ∩ Ω, l = 1,…,N_k. Due to relations (12), (13) and (18), (19), the following condition holds:

Ω = Ω^(1,(k)) ∪ Ω^(2,(k)) ∪ … ∪ Ω^(N_k,(k)).

We will consider three cases of behavior of the functions g_j(d, z, θ). First, we suppose that the following inequalities hold:

∂g_j/∂θ_p ≥ 0, ∀θ, d, z, j = 1,…,m.  (20)


This means that the functions g_j(d, z, θ) are monotone nondecreasing with respect to the variable θ_p. Let us introduce the vector θ̄ = (θ_1, …, θ_{p−1})^T and regions R̄_l^(k) in the space of the parameters θ̄:

R̄_l^(k) = {θ_i : θ_{i,l}^(k),L ≤ θ_i ≤ θ_{i,l}^(k),U, i = 1,…,p−1}, l = 1,…,N̄_k,  (21)

where N̄_k is the number of regions R̄_l^(k) at the k-th iteration. The variation interval of each of the variables θ_i, i = 1,…,p−1, in the region R̄_l^(k) is the same as in some region R_{s_l}^(k). Consequently, the following relations hold:

R̄_l^(k) ⊂ R_{s_l}^(k), l = 1, 2, …, N̄_k.  (22)

The region Ω^(l,(k)) can be represented in the following form:

Ω^(l,(k)) = {θ : θ_p − ϕ_j(d, z, θ̄) ≤ 0, j = 1,…,m, θ ∈ R_l^(k)}.  (23)

At the k-th iteration we will approximate the function ϕ_j(d, z, θ̄) with the help of the piecewise constant function ϕ_j^(k)(d, z, θ̄) of the following form:

ϕ_j^(k)(d, z, θ̄) = ϕ_{j,l}^(k) if θ̄ ∈ R̄_l^(k), l = 1,…,N̄_k,  (24)

where ϕ_{j,l}^(k) = ϕ_j(d, z, θ̄^((k)l,mid)) and θ̄^((k)l,mid) = (θ_1^((k)l,mid), …, θ_{p−1}^((k)l,mid)) is the middle point of the region R̄_l^(k).
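The piecewise-constant approximation (24) simply freezes ϕ_j on each subregion R̄_l^(k) at its midpoint value. A small sketch with a hypothetical closed-form ϕ_j (in the method itself ϕ_j is obtained numerically from g_j = 0):

```python
import numpy as np

def phi(d, z, theta_bar):
    # hypothetical phi_j(d, z, theta_bar), used only for illustration
    return d * z - 0.5 * theta_bar[0]

def piecewise_phi(d, z, rectangles):
    # one constant per rectangle: phi_j evaluated at the midpoint
    return [float(phi(d, z, 0.5 * (low + high))) for low, high in rectangles]

# two one-dimensional subregions of theta_bar-space
rects = [(np.array([-3.0]), np.array([0.0])), (np.array([0.0]), np.array([3.0]))]
vals = piecewise_phi(2.0, 1.0, rects)
print(vals)  # [2.75, 1.25]
```

Refining the partition shrinks the subregions and thus tightens this piecewise-constant approximation.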

Let us introduce the regions Ω̃_l^(k):

Ω̃_l^(k) = {θ : θ_p − ϕ_{j,l}^(k) ≤ 0, j = 1,…,m, θ ∈ R_l^(k)}, l = 1,…,N_k.

The regions Ω^(l,(k)) will be approximated by the regions Ω̃_l^(k), l = 1,…,N_k. Due to relations (19), (22), the approximation Ω̃^(k) of the region Ω can be represented in the following form:


Ω̃^(k) = Ω̃_1^(k) ∪ Ω̃_2^(k) ∪ … ∪ Ω̃_{N_k}^(k).  (25)

Since relations (19), (22) hold, the following relations also hold:

Ω̃_l^(k) ∩ Ω̃_q^(k) = ∅, ∀l, q: l ≠ q, l = 1,…,N_k, q = 1,…,N_k.

It is easy to see that if

θ_p ≤ min_{j∈J} ϕ_{j,l}^(k),  (26)

where J = {1,…,m}, then the following inequalities are met:

θ_p − ϕ_{j,l}^(k) ≤ 0, j = 1,…,m,  (27)

and, vice versa, if conditions (27) are met, then condition (26) is met. Introduce the variable y_l^(k) as follows:

y_l^(k) = min_{j∈J} ϕ_{j,l}^(k).  (28)

In the region Ω̃_l^(k) the parameter θ_i varies within the following intervals:

I_{il}^(k) = {θ : θ_{il}^(k),L ≤ θ_i ≤ θ_{il}^(k),U}, i = 1,…,p−1,  I_{pl}^(k) = {θ : θ_{pl}^(k),L ≤ θ_p ≤ y_l^(k)}.  (29)

Thus, the region Ω̃_l^(k) can be represented in the following form:

Ω̃_l^(k) = {θ : θ_i ∈ I_{il}^(k), i = 1, 2, …, p}.  (30)

Since all the parameters θ_i are independent, the probability measure of the region Ω̃_l^(k) is equal to the product of the probability measures of the intervals I_{il}^(k), i = 1,…,p:

Pr{θ ∈ Ω̃_l^(k)} = [∏_{i=1}^{p−1} (F_i(θ̃_{il}^(k)U) − F_i(θ̃_{il}^(k)L))] [F_p(ỹ_l^(k)) − F_p(θ̃_{pl}^(k)L)],  (31)

F_i(θ̃_{il}^(k)U) = F_i((θ_{il}^(k)U − E[θ_i])(σ_i)^{−1}),  F_i(θ̃_{il}^(k)L) = F_i((θ_{il}^(k)L − E[θ_i])(σ_i)^{−1}),  F_p(ỹ_l^(k)) = F_p((y_l^(k) − E[θ_p])(σ_p)^{−1}),


where E[θ_i] is the mean value and (σ_i)^2 is the variance of the parameter θ_i, and F_i(ϑ) is the standard distribution function. The values of the function F_i(ϑ) are tabulated; using curve-fitting methods, we can obtain an analytical form of the function F_i(ϑ). Introduce the following designation:

A_l^(k) = ∏_{i=1}^{p−1} [F_i(θ̃_{il}^(k)U) − F_i(θ̃_{il}^(k)L)].  (32)

The value A_l^(k) does not depend on the decision variables. Taking into account (31), we obtain

Pr{θ ∈ Ω̃_l^(k)} = A_l^(k) [F_p(ỹ_l^(k)) − F_p(θ̃_{pl}^(k)L)].  (33)
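The closed-form box probability behind (31)–(33) is easy to reproduce: for independent normal parameters, the probability mass of an axis-aligned box is a product of one-dimensional CDF differences (with y_l^(k) as the upper limit in the θ_p coordinate). A minimal sketch, in which the means, variances, and box bounds are illustrative assumptions:

```python
from math import erf, sqrt

def norm_cdf(x, mean=0.0, sigma=1.0):
    # normal CDF, shifted and scaled as in (31)
    return 0.5 * (1.0 + erf((x - mean) / (sigma * sqrt(2.0))))

def box_probability(lows, highs, means, sigmas):
    # Pr{theta in box} = product of per-coordinate CDF differences
    p = 1.0
    for lo, hi, mu, s in zip(lows, highs, means, sigmas):
        p *= norm_cdf(hi, mu, s) - norm_cdf(lo, mu, s)
    return p

# two independent standard normal parameters on the box [-1, 1] x [-1, 1]
pr = box_probability([-1.0, -1.0], [1.0, 1.0], [0.0, 0.0], [1.0, 1.0])
print(round(pr, 4))  # (0.6827)^2 ≈ 0.4661
```

No multidimensional integration is needed, which is precisely what makes the approximating regions Ω̃_l^(k) attractive.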

We will use the following approximation of the left-hand side of constraint (11):

Pr{θ ∈ Ω̃^(k)} = ∫_{Ω̃^(k)} ρ(θ) dθ.  (34)

Therefore, taking into account (25), we obtain

Pr{θ ∈ Ω̃^(k)} = Pr{θ ∈ Ω̃_1^(k)} + Pr{θ ∈ Ω̃_2^(k)} + … + Pr{θ ∈ Ω̃_{N_k}^(k)}.  (35)

Substituting expression (33) for Pr{θ ∈ Ω̃_l^(k)} into (35), we obtain

Pr{θ ∈ Ω̃^(k)} = ∑_{l=1}^{N_k} A_l^(k) [F_p(ỹ_l^(k)) − F_p(θ̃_{pl}^(k)L)].  (36)

Substituting the expressions for Pr{θ ∈ Ω̃^(k)} and y_l^(k) (see (28)) into problem (10), we obtain

f_1^(k) = min_{d, z, y_l^(k)} E_ap^(k)[f(d, z, θ)]  (37)

∑_{l=1}^{N_k} A_l^(k) [F_p(ỹ_l^(k)) − F_p(θ̃_{pl}^(k)L)] ≥ α,  (38)

y_l^(k) = min_{j∈J} ϕ_{j,l}^(k), l = 1, 2, …, N_k.  (39)

Unfortunately, the right-hand sides of constraints (39) in problem (37) are discontinuous functions. Indeed, suppose we solve problem (37) using an NLP method (for example, the SQP method55), and suppose that at the beginning of the search we have


min_{j∈J} ϕ_{j,l}^(k) = ϕ_{sl}^(k).

Then we have

y_l^(k) = ϕ_{sl}^(k).

Suppose that at some search point d, z, y_l^(k) we obtain

min_{j∈J} ϕ_{j,l}^(k) = ϕ_{rl}^(k).

Then we have

y_l^(k) = ϕ_{rl}^(k).

Thus, at this search point the variable y_l^(k) changes stepwise from the value ϕ_{sl}^(k) to the value ϕ_{rl}^(k). The presence of discontinuous functions significantly complicates the solution of optimization problems. In connection with this, we transform problem (37) into a problem with continuous, differentiable functions: we replace equality constraint (39) in problem (37) with inequality constraints of the following form:

y_l^(k) ≤ min_{j∈J} ϕ_{j,l}^(k), l = 1, 2, …, N_k.  (40)

Then problem (37) takes the following form:

f_1^(k) = min_{d, z, y_l^(k)} E_ap^(k)[f(d, z, θ)]  (41)

∑_{l=1}^{N_k} A_l^(k) [F_p(ỹ_l^(k)) − F_p(θ̃_{pl}^(k)L)] ≥ α,

y_l^(k) ≤ min_{j∈J} ϕ_{j,l}^(k), l = 1,…,N_k.  (42)

Let us show that problem (41) is equivalent to problem (37). Indeed, since F_p(ỹ_l^(k)) is a monotone nondecreasing function, the left-hand side of constraint (38) increases as y_l^(k) increases. This means that the feasible region of problem (37) is enlarged. Since the objective function does not depend on y_l^(k), its value can only become smaller in the


increased feasible region. Therefore, the NLP algorithm will increase the value of the variable y_l^(k) until equality (39) is met. It is clear that each constraint (40) is equivalent to the m constraints

y_l^(k) ≤ ϕ_{j,l}^(k), j = 1,…,m.  (43)

Substituting constraints (43) for constraint (42), we rewrite problem (41) in the following form:

f_1^(k) = min_{d, z, y_l^(k)} E_ap^(k)[f(d, z, θ)]  (44)

∑_{l=1}^{N_k} A_l^(k) [F_p(ỹ_l^(k)) − F_p(θ̃_{pl}^(k)L)] ≥ α,

y_l^(k) ≤ ϕ_{j,l}^(k), j = 1,…,m, l = 1,…,N_k.  (45)

In problem (44) all functions are continuously differentiable; therefore, this problem can be solved with efficient methods of nonlinear programming55. Consider now the second case, when the following condition holds:

∂g_j/∂θ_p < 0, ∀θ, d, z, j = 1,…,m.  (46)

In this case, the region Ω^(l,(k)) has the following form:

Ω^(l,(k)) = {θ : ϕ_j(d, z, θ̄) − θ_p ≤ 0, j = 1,…,m, θ ∈ R_l^(k)}.  (47)

The region Ω^(l,(k)) will be approximated by the region Ω̃_l^(k) (see (15), (16), (24)):

Ω̃_l^(k) = {θ : ϕ_{j,l}^(k) − θ_p ≤ 0, j = 1,…,m, θ ∈ R_l^(k)}.

It is easy to see that if

max_{j∈J} ϕ_{j,l}^(k) ≤ θ_p,  (48)

then the following inequalities are met:

ϕ_{j,l}^(k) − θ_p ≤ 0, j = 1,…,m.  (49)


And vice versa, if conditions (49) are met, then condition (48) is met. Introduce the following designation:

v_l^(k) = max_{j∈J} ϕ_{j,l}^(k).  (50)

Thus, the region Ω̃_l^(k) can be represented as

Ω̃_l^(k) = {θ : v_l^(k) − θ_p ≤ 0, θ ∈ R_l^(k)}.

In the region Ω̃_l^(k) the parameter θ_i changes within the following intervals:

I_{il}^(k) = {θ : θ_{il}^(k),L ≤ θ_i ≤ θ_{il}^(k),U}, i = 1,…,p−1,  I_{pl}^(k) = {θ : v_l^(k) ≤ θ_p ≤ θ_{pl}^(k),U}.  (51)

Thus, the region Ω̃_l^(k) can be represented in the following form:

Ω̃_l^(k) = {θ : θ_i ∈ I_{il}^(k), i = 1, 2, …, p}.  (52)

At the k-th iteration the region Ω will be approximated by the region

Ω̃^(k) = Ω̃_1^(k) ∪ Ω̃_2^(k) ∪ … ∪ Ω̃_{N_k}^(k).  (53)

Since all the parameters θ_i are independent, the probability measure of the region Ω̃_l^(k) is equal to the product of the probability measures of the intervals56 I_{il}^(k), i = 1,…,p:

Pr{θ ∈ Ω̃_l^(k)} = [∏_{i=1}^{p−1} (F_i(θ̃_{il}^(k)U) − F_i(θ̃_{il}^(k)L))] [F_p(θ̃_{pl}^(k)U) − F_p(ṽ_l^(k))],  (54)

where F_p(ṽ_l^(k)) = F_p((v_l^(k) − E[θ_p])(σ_p)^{−1}). Then, taking into account (32), we obtain

Pr{θ ∈ Ω̃_l^(k)} = A_l^(k) [F_p(θ̃_{pl}^(k)U) − F_p(ṽ_l^(k))].  (55)

~ Substituting the expression for Pr{∈ Ω l(k ) } in (35), we obtain Nk

% ( k ) } = ∑ A( k ) [ F (θ% ( k )U ) − F ( v% ( k ) )] . Pr{θ ∈ Ω l p pl p l l =1
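Because each approximation subregion is a box over independent parameters, its probability measure is a product of univariate interval probabilities, as in (54)–(55). A small numerical sketch for two normal parameters (the subregion bounds and the lower bounds $v_l$ below are illustrative values, not taken from the text):

```python
import numpy as np
from scipy.stats import norm

mu = np.array([1.5, 0.5])        # E[theta_1], E[theta_2]
sigma = np.array([0.128, 0.128])

# Hypothetical box subregions (theta1_lo, theta1_hi, v_l, theta2_up):
# in each, theta_2 is bounded below by v_l as in eq. (55).
regions = [(1.0, 1.5, 0.60, 1.0), (1.5, 2.0, 0.55, 1.0)]

def F(i, x):
    """Standardized distribution function F_i evaluated at (x - E)/sigma."""
    return norm.cdf((x - mu[i]) / sigma[i])

# Closed form, eqs. (54)-(55): sum over subregions of A_l * [F_p(up~) - F_p(v_l~)]
p_closed = sum((F(0, b) - F(0, a)) * (F(1, up) - F(1, v))
               for a, b, v, up in regions)

# Monte Carlo cross-check of the same union of boxes
rng = np.random.default_rng(0)
th = rng.normal(mu, sigma, size=(200_000, 2))
inside = np.zeros(len(th), dtype=bool)
for a, b, v, up in regions:
    inside |= ((th[:, 0] >= a) & (th[:, 0] <= b) &
               (th[:, 1] >= v) & (th[:, 1] <= up))
p_mc = inside.mean()
```

The closed-form value and the Monte Carlo estimate agree to sampling accuracy, which is a convenient sanity check when implementing (54)–(55).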

Substituting the expressions for $\Pr\{\theta \in \tilde\Omega^{(k)}\}$ and $v_l$ (see (50)) into problem (10), we obtain


$$f_1^{(k)} = \min_{d, z, v_l^{(k)}} E_{ap}^{(k)}[f(d, z, \theta)] \tag{56}$$

$$\sum_{l=1}^{N_k} A_l^{(k)} [F_p(\tilde\theta_{pl}^{(k)U}) - F_p(\tilde v_l^{(k)})] \ge \alpha,$$

$$v_l^{(k)} = \max_{j \in J} \varphi_{j,l}^{(k)}, \quad l = 1,\dots,N_k. \tag{57}$$

Reasoning as in the case when condition (20) is met, one can show that the equality constraint (57) can be replaced by the following inequality constraints:

$$v_l^{(k)} \ge \max_{j \in J} \varphi_{j,l}^{(k)}, \quad l = 1,\dots,N_k. \tag{58}$$

Indeed, since decreasing $v_l^{(k)}$ only loosens the chance constraint, the optimization drives each $v_l^{(k)}$ down until (58) becomes active. Constraints (58) are equivalent to the following $m \cdot N_k$ constraints:

$$v_l^{(k)} \ge \varphi_{j,l}^{(k)}, \quad j = 1,\dots,m, \; l = 1,\dots,N_k. \tag{59}$$

Substituting constraints (59) for constraints (57), we can rewrite problem (56) in the following form:

$$f_1^{(k)} = \min_{d, z, v_l^{(k)}} E_{ap}^{(k)}[f(d, z, \theta)] \tag{60}$$

$$\sum_{l=1}^{N_k} A_l^{(k)} [F_p(\tilde\theta_{pl}^{(k)U}) - F_p(\tilde v_l^{(k)})] \ge \alpha,$$

$$v_l^{(k)} \ge \varphi_{j,l}^{(k)}, \quad j = 1,\dots,m, \; l = 1,\dots,N_k.$$

In problem (60) all functions are continuously differentiable; therefore, it can also be solved by efficient methods of nonlinear programming55. Consider now the third case, in which the following conditions hold:

$$\frac{\partial g_j}{\partial \theta_p} \ge 0, \quad \forall \theta, d, z, \; j \in J_1, \tag{61}$$

$$\frac{\partial g_j}{\partial \theta_p} \le 0, \quad \forall \theta, d, z, \; j \in J_2. \tag{62}$$

Reasoning as above, one can show that if

$$\theta_p \le y_l^{(k)}, \tag{63}$$

where


$$y_l^{(k)} = \min_{j \in J_1} \varphi_{j,l}^{(k)}, \tag{64}$$

then

$$\theta_p - \varphi_{j,l}^{(k)} \le 0, \quad j \in J_1. \tag{65}$$

And if

$$v_l^{(k)} \le \theta_p, \tag{66}$$

where

$$v_l^{(k)} = \max_{j \in J_2} \varphi_{j,l}^{(k)}, \tag{67}$$

then

$$\varphi_{j,l}^{(k)} - \theta_p \le 0, \quad j \in J_2. \tag{68}$$

Thus, the region $\tilde\Omega_l^{(k)}$ can be represented in the following form:

$$\tilde\Omega_l^{(k)} = \{\theta : \theta_p - \varphi_{j,l}^{(k)} \le 0,\; j \in J_1,\; \varphi_{j,l}^{(k)} - \theta_p \le 0,\; j \in J_2,\; \theta \in R_l^{(k)}\}.$$

It is clear that the region $\tilde\Omega_l^{(k)}$ can be transformed into

$$\tilde\Omega_l^{(k)} = \{\theta : \theta_p - y_l^{(k)} \le 0,\; v_l^{(k)} - \theta_p \le 0,\; \theta \in R_l^{(k)}\},$$

or, equivalently,

$$\tilde\Omega_l^{(k)} = \{\theta : v_l^{(k)} \le \theta_p \le y_l^{(k)},\; \theta \in R_l^{(k)}\}.$$

Thus, in the region $\tilde\Omega_l^{(k)}$ the parameter $\theta_i$ varies in the intervals

$$I_{il}^{(k)} = \{\theta_i : \theta_{il}^{(k),L} \le \theta_i \le \theta_{il}^{(k),U}\}, \; i = 1,\dots,p-1, \qquad I_{pl}^{(k)} = \{\theta_p : v_l^{(k)} \le \theta_p \le y_l^{(k)}\}. \tag{69}$$

The probability measure of the region $\tilde\Omega_l^{(k)}$ is equal to

$$\Pr\{\theta \in \tilde\Omega_l^{(k)}\} = \left[\prod_{i=1}^{p-1} [F_i(\tilde\theta_{il}^{(k)U}) - F_i(\tilde\theta_{il}^{(k)L})]\right] [F_p(\tilde y_l^{(k)}) - F_p(\tilde v_l^{(k)})]. \tag{70}$$

Then, taking into account (32), we can rewrite (70) in the following form:

$$\Pr\{\theta \in \tilde\Omega_l^{(k)}\} = A_l^{(k)} [F_p(\tilde y_l^{(k)}) - F_p(\tilde v_l^{(k)})]. \tag{71}$$


Substituting the expression for $\Pr\{\theta \in \tilde\Omega_l^{(k)}\}$ into (35), we obtain the probability measure of the region $\tilde\Omega^{(k)}$:

$$\Pr\{\theta \in \tilde\Omega^{(k)}\} = \sum_{l=1}^{N_k} A_l^{(k)} [F_p(\tilde y_l^{(k)}) - F_p(\tilde v_l^{(k)})].$$

Using the probability measure of the region $\tilde\Omega^{(k)}$ in (11), we can transform problem (10) to the following form:

$$f_1^{(k)} = \min_{d, z, y_l^{(k)}, v_l^{(k)}} E_{ap}^{(k)}[f(d, z, \theta)] \tag{72}$$

$$\sum_{l=1}^{N_k} A_l^{(k)} [F_p(\tilde y_l^{(k)}) - F_p(\tilde v_l^{(k)})] \ge \alpha,$$

$$y_l^{(k)} \le \varphi_{j,l}^{(k)}, \quad j \in J_1, \; l = 1,\dots,N_k, \tag{73}$$

$$v_l^{(k)} \ge \varphi_{j,l}^{(k)}, \quad j \in J_2, \; l = 1,\dots,N_k. \tag{74}$$

It is clear that this approach can be applied only if

$$\min_{j \in J_1} \varphi_{j,l}^{(k)} \ge \max_{j \in J_2} \varphi_{j,l}^{(k)}. \tag{75}$$

Consider now the case when the functions $g_j(d, z, \theta_1,\dots,\theta_p)$ are not monotone. In this case the sign of the derivatives $\partial g_j / \partial \theta_p$ can change. Suppose for definiteness that initially the following conditions hold:

$$\frac{\partial g_j}{\partial \theta_p} \ge 0, \quad j \in J_1, \tag{76}$$

$$\frac{\partial g_j}{\partial \theta_p} < 0, \quad j \in J_2, \tag{77}$$

where

$$J_1 = \{1,\dots,q\}, \qquad J_2 = \{q+1,\dots,m\}. \tag{78}$$


Then initially we must solve problem (72), in which the sets $J_1$ and $J_2$ have the form (78). Suppose we solve problem (72) with the help of some NLP method (for example, SQP), and let the sign of the derivative $\partial g_q / \partial \theta_p$ change at some point $(\bar d, \bar z, \bar y, \bar v)$. After this point we should solve problem (72) with $J_1 = \{1,\dots,q-1\}$ and $J_2 = \{q,\dots,m\}$. Thus, at the point $(\bar d, \bar z, \bar y, \bar v)$ the composition of the constraints changes. This means that at this point one must restart the solution of problem (72), i.e. the point $(\bar d, \bar z, \bar y, \bar v)$ must be used as the initial point for the NLP method.

3.1. Motivating example

Consider the following optimization problem:

$$\min_{d, z} E_\theta[z \theta_1 + 2 \theta_1 \theta_2 + 2 d] \tag{79}$$

$$\Pr\left\{\begin{array}{l} -d + z \theta_1 \theta_2 + 2 \theta_2 - 8 \le 0 \\ -2 d + 2 \theta_1 \theta_2 + z \theta_1 - 2 \le 0 \end{array}\right\} \ge 0.75,$$

$$0 \le d \le 10, \quad 1 \le z \le 10,$$

$$E[\theta_1] = 1.5, \; \sigma_1 = 0.128, \qquad E[\theta_2] = 0.5, \; \sigma_2 = 0.128.$$

Let us solve problem (79) in the form of an OSOP with solution tolerance $\varepsilon_1 = 0.1$. We have the region $T_\theta = \{\theta : 1 \le \theta_1 \le 2;\; 0 \le \theta_2 \le 1\}$, for which $\Pr\{\theta \in T_\theta\} = 0.9999$. Let us write the constraint derivatives:

$$\frac{\partial g_1}{\partial \theta_2} = z \theta_1 + 2, \qquad \frac{\partial g_2}{\partial \theta_2} = 2 \theta_1.$$

In this case the following inequalities hold:

$$\frac{\partial g_j}{\partial \theta_2} \ge 0, \quad \forall \theta, d, z, \; j = 1, 2.$$


Thus, taking into account formulae (12), (14) and (15), we can write the boundaries of the regions $\Omega_1$ and $\Omega_2$ as

$$g_j(d, z, \theta_1, \theta_2) = \theta_2 - \varphi_j(d, z, \theta_1), \quad j = 1, 2,$$

where

$$\varphi_1(d, z, \theta_1) = \frac{d + 8}{z \theta_1 + 2}, \qquad \varphi_2(d, z, \theta_1) = \frac{1 + d - 0.5 z \theta_1}{\theta_1}.$$

At the first iteration the set $R^{(1)}$ consists only of the region $T_\theta$: $R_1^{(1)} = T_\theta$, $\bar R_1^{(1)} = \{\theta_1 : 1 \le \theta_1 \le 2\}$, $N_1 = 1$, $\theta_{1,1}^{(1)L} = 1$, $\theta_{1,1}^{(1)U} = 2$, $\theta_{2,1}^{(1)L} = 0$, $\theta_{2,1}^{(1)U} = 1$.

Reasoning as above, we can write formulae (23), (24) in the forms

$$\Omega^{1,(1)} = \{\theta : \theta_2 - \varphi_j(d, z, \theta_1) \le 0,\; j = 1, 2;\; \theta_1 \in \bar R_1^{(1)}\},$$

$$\varphi_j^{(1)}(d, z, \theta_1) = \varphi_{j,1}^{(1)}, \; \theta_1 \in \bar R_1^{(1)}, \; j = 1, 2, \qquad \varphi_{j,1}^{(1)} = \varphi_j(d, z, \theta_1^{(1)1,mid}), \; j = 1, 2,$$

$$\tilde\Omega^{(1)} = \tilde\Omega_1^{(1)} = \{\theta : \theta_2 - \varphi_{j,1}^{(1)} \le 0,\; j = 1, 2;\; \theta_1 \in \bar R_1^{(1)}\},$$

where

$$\varphi_{11}^{(1)} = \frac{d + 8}{1.5 z + 2}, \qquad \varphi_{21}^{(1)} = \frac{1 + d - 0.75 z}{1.5}, \qquad \theta_1^{(1)1,mid} = 1.5.$$

Thus, we can write formula (36) for the approximation of the left-hand side of the joint constraint:

$$\Pr\{\theta \in \tilde\Omega^{(1)}\} = \left[F_1\!\left(\frac{\theta_1^{(1)U} - E[\theta_1]}{\sigma_1}\right) - F_1\!\left(\frac{\theta_1^{(1)L} - E[\theta_1]}{\sigma_1}\right)\right] \left[F_2\!\left(\frac{y_1^{(1)} - E[\theta_2]}{\sigma_2}\right) - F_2\!\left(\frac{\theta_2^{(1)L} - E[\theta_2]}{\sigma_2}\right)\right],$$

where $\theta_1^{(1)U} = 2$, $\theta_1^{(1)L} = 1$, $\theta_2^{(1)L} = 0$, and $F_i(\vartheta)$ is the standard normal distribution function. Taking into account that $[F_1(\tilde\theta_1^{(1)U}) - F_1(\tilde\theta_1^{(1)L})] \approx 1$, we can write the problem that should be solved at the first iteration (see (44)–(45)):

$$\min_{d, z, y_1} E_\theta[z \theta_1 + 2 \theta_1 \theta_2 + 2 d] \tag{80}$$

$$1 \cdot \left[F_2\!\left(\frac{y_1 - E[\theta_2]}{\sigma_2}\right) - F_2\!\left(\frac{0 - E[\theta_2]}{\sigma_2}\right)\right] \ge 0.75,$$

$$y_1 \le \frac{d + 8}{1.5 z + 2}, \quad y_1 \le \frac{1 + d - 0.75 z}{1.5}, \quad 0 \le d \le 10, \quad 1 \le z \le 10, \quad 0 \le y_1 \le 1.$$

To solve problem (80) we must calculate the multivariate integral $E_\theta[z \theta_1 + 2 \theta_1 \theta_2 + 2 d]$. The approach used to approximate this multivariate integral is presented later.

4. Construction of the approximations $\tilde z(\theta)$ and $\tilde\Omega_l^{(k)}$ for the two-stage optimization problem

In this case the control variables depend on the parameters $\theta$, and constraints (2) have the following form:

$$g_j(d, z(\theta), \theta) \le 0, \quad j = 1,\dots,m. \tag{81}$$

We proposed52 to approximate the control variables $z(\theta)$ by a piecewise constant function of the parameters $\theta_1,\dots,\theta_{p-1}$. Thus, the approximation $\tilde z(\theta)$ of the function $z(\theta)$ at the $k$-th iteration has the following form:

$$\tilde z(\theta) = \begin{cases} z_1 & \text{if } \theta \in R_1^{(k)}, \\ \;\vdots \\ z_{N_k} & \text{if } \theta \in R_{N_k}^{(k)}, \end{cases} \tag{82}$$

where $z_l$ is the vector of the control variables corresponding to the subregion $R_l^{(k)}$ (see (21)). Again we express the parameter $\theta_p$ as a function of the parameters $\theta_1,\dots,\theta_{p-1}$, taking account of (82); we obtain the equations

$$\theta_p = \varphi_j(d, \tilde z(\theta), \theta_1,\dots,\theta_{p-1}), \quad j = 1,\dots,m.$$

As in the previous section, we construct the functions $g_j(d, \tilde z(\theta), \theta_1, \theta_2,\dots,\theta_p)$ in the form


$$g_j(d, \tilde z(\theta), \theta_1, \theta_2,\dots,\theta_p) = \begin{cases} \theta_p - \varphi_j(d, \tilde z(\theta), \theta_1,\dots,\theta_{p-1}), & \text{if } \dfrac{\partial g_j}{\partial \theta_p} \ge 0, \\[6pt] \varphi_j(d, \tilde z(\theta), \theta_1,\dots,\theta_{p-1}) - \theta_p, & \text{if } \dfrac{\partial g_j}{\partial \theta_p} \le 0. \end{cases}$$

As shown above, we should consider three cases of sign behavior of the derivatives $\partial g_j / \partial \theta_p$. We will consider only the first case, in which the following inequalities hold:

$$\frac{\partial g_j}{\partial \theta_p} \ge 0, \quad \forall \theta, d, \tilde z(\theta), \; j = 1,\dots,m.$$

Now the regions $\Omega$ and $\Omega^{l,(k)}$ can be represented in the following form:

$$\Omega = \{\theta : g_j(d, \tilde z(\theta), \theta) \le 0,\; j = 1,\dots,m,\; \theta_i^L \le \theta_i \le \theta_i^U,\; i = 1,\dots,p\},$$

$$\Omega^{l,(k)} = \{\theta : \theta_p - \varphi_j(d, \tilde z(\theta), \theta) \le 0,\; j = 1,\dots,m,\; \theta \in R_l^{(k)}\}.$$

We approximate the function $\varphi_j(d, \tilde z(\theta), \theta)$ with the help of a piecewise constant function of the following form:

$$\varphi_j^{(k)}(d, \tilde z(\theta), \theta_1,\dots,\theta_{p-1}) = \begin{cases} \varphi_{j,1}^{(k)} & \text{if } \theta \in R_1^{(k)}, \\ \;\vdots \\ \varphi_{j,N_k}^{(k)} & \text{if } \theta \in R_{N_k}^{(k)}, \end{cases}$$

where

$$\varphi_{j,l}^{(k)} = \varphi_j(d, z_l, \theta^{(k)l,mid}), \tag{83}$$

and $\theta^{(k)l,mid} = (\theta_1^{(k)l,mid},\dots,\theta_{p-1}^{(k)l,mid})$ is the middle point of the region $R_l^{(k)}$. Similarly to the previous section, introduce the approximations $\tilde\Omega_l^{(k)}$ of the regions $\Omega^{l,(k)}$:

$$\tilde\Omega_l^{(k)} = \{\theta : \theta_p - \varphi_{j,l}^{(k)} \le 0,\; j = 1,\dots,m,\; \theta \in R_l^{(k)}\}. \tag{84}$$

The region $\tilde\Omega_l^{(k)}$ is an approximation of the region $\Omega^{l,(k)}$, and $\tilde\Omega^{(k)}$ is an approximation of the region $\Omega$ at the $k$-th iteration, where


$$\tilde\Omega^{(k)} = \tilde\Omega_1^{(k)} \cup \tilde\Omega_2^{(k)} \cup \dots \cup \tilde\Omega_{N_k}^{(k)}. \tag{85}$$

Repeating the reasoning made above, one can obtain expressions for $\Pr\{\theta \in \tilde\Omega_l^{(k)}\}$ and $\Pr\{\theta \in \tilde\Omega^{(k)}\}$ similar to (33), (36), in which the regions $\tilde\Omega_l^{(k)}$ and $\tilde\Omega^{(k)}$ are determined by formulae (84), (85). In this case, problem (9) takes the following form:

$$f_2^{(k)} = \min_{d, z_l, y_l^{(k)}} E_{ap}^{(k)}[f(d, \tilde z(\theta), \theta)]$$

$$\sum_{l=1}^{N_k} A_l^{(k)} [F_p(\tilde y_l^{(k)}) - F_p(\tilde\theta_{pl}^{(k)L})] \ge \alpha,$$

$$y_l^{(k)} \le \varphi_{j,l}^{(k)}, \quad j = 1,\dots,m, \; l = 1,\dots,N_k,$$

where the functions $\varphi_{j,l}^{(k)}$ are given by formulae (83). Repeating the arguments of the previous sections, one can obtain the forms of problem (9) for the case $\partial g_j / \partial \theta_p \le 0$, $\forall \theta, d, \tilde z(\theta)$, $j = 1,\dots,m$, and for the case when the functions $g_j(d, z, \theta_1,\dots,\theta_p)$ are not monotone.
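The piecewise constant policy $\tilde z(\theta)$ of (82) can be realized as a simple lookup of the subregion containing $\theta$; a minimal sketch (the subregion edges and the vectors $z_l$ are illustrative, not taken from the text):

```python
import numpy as np

# Illustrative subregions R_l^(k) over theta_1 and their control vectors z_l
edges = [(1.0, 1.5), (1.5, 2.0)]
z_vecs = [np.array([1.0, 0.2]), np.array([1.3, 0.1])]

def z_tilde(theta1):
    """Piecewise constant recourse: return the z_l of the subregion containing theta1.

    Boundary points resolve to the first matching subregion.
    """
    for (lo, hi), zl in zip(edges, z_vecs):
        if lo <= theta1 <= hi:
            return zl
    raise ValueError("theta outside the uncertainty region T")
```

At the operation stage, once $\theta$ is observed, the controls are simply read off from the subregion that contains it.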

5. Approximation of the expected value of the objective function

When solving TSOPJCC (9), we must calculate a multivariate integral of the form (5). To construct an approximation of the expected value of the objective function of the two-stage optimization problem, we use the approach developed in53, based on a piecewise linear approximation of the function $f(d, \tilde z(\theta), \theta)$ of the following form:

$$\tilde f(d, z, \theta) = \begin{cases} \tilde f(d, z_1, \theta, \theta^{(k)1}) & \text{if } \theta \in R_1^{(k)}, \\ \;\vdots \\ \tilde f(d, z_{N_k}, \theta, \theta^{(k)N_k}) & \text{if } \theta \in R_{N_k}^{(k)}, \end{cases}$$

where

$$\tilde f(d, z_r, \theta, \theta^{(k)r}) = f(d, z_r, \theta^{(k)r}) + \sum_{i=1}^{p} \frac{\partial f(d, z_r, \theta^{(k)r})}{\partial \theta_i} (\theta_i - \theta_i^r). \tag{86}$$


The point $\theta^{(k)r}$ is a linearization point in the subregion $R_r^{(k)}$, and $\theta_i^r$ is the $i$-th component of the vector $\theta^{(k)r}$. The right-hand side of formula (86) is the linear part of the Taylor expansion of $f(d, z, \theta)$ at the point $\theta^{(k)r}$. Using this approximation, we constructed the function $E_{ap}^{(k)}[f(d, \tilde z(\theta), \theta)]$ of the following form53:

$$E_{ap}^{(k)}[f(d, \tilde z(\theta), \theta)] = \sum_{r=1}^{N_k} \left[a_r f(d, z_r, \theta^{(k)r}) + (1 - a_r) \sum_{i=1}^{p} \frac{\partial f(d, z_r, \theta^{(k)r})}{\partial \theta_i} E[\theta_i; R_r^{(k)}]\right], \tag{87}$$

where

$$a_r = \int_{R_r^{(k)}} \rho(\theta)\, d\theta, \qquad E[\theta_i; R_r^{(k)}] = \int_{R_r^{(k)}} \theta_i \rho(\theta)\, d\theta. \tag{88}$$

In the case of independent uncertain parameters, calculation of the values $a_r$ and $E[\theta_i; R_r^{(k)}]$ reduces to the calculation of univariate integrals of the form53

$$F(\theta) = \int_{-\infty}^{\theta} \rho(\vartheta)\, d\vartheta, \qquad H(\theta) = \int_{-\infty}^{\theta} \vartheta \rho(\vartheta)\, d\vartheta.$$

Using curve-fitting methods57, we can obtain analytical approximations of the functions $F(\theta_i)$ and $H(\theta_i)$. With the help of the functions $F(\theta_i)$ and $H(\theta_i)$ we can calculate the values $a_r$, $E[\theta_i; R_r^{(k)}]$ and the approximations $\Pr\{\theta \in \tilde\Omega^{(k)}\}$ of the left-hand sides of the chance constraints without resorting to numerical integration.
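For the normally distributed parameters of the motivating example, $F$ and $H$ have closed forms, so $a_r$ and $E[\theta_i; R_r^{(k)}]$ can be evaluated without curve fitting. The sketch below reproduces the second-iteration values $E[\theta_1; R_1^{(2)}] \approx 0.6973$ and $E[\theta_2; R_1^{(2)}] \approx 0.24993$ reported later in the text to about $10^{-3}$ (the small residual is consistent with the fitted $F$, $H$ used by the authors):

```python
from scipy.stats import norm

def mass(lo, hi, m, s):
    """Probability of [lo, hi] under N(m, s): F(hi~) - F(lo~)."""
    return norm.cdf((hi - m) / s) - norm.cdf((lo - m) / s)

def partial_moment(lo, hi, m, s):
    """H(hi) - H(lo): integral of t * rho(t) over [lo, hi] for N(m, s).

    Substituting u = (t - m)/s gives m*[Phi(zhi)-Phi(zlo)] + s*[phi(zlo)-phi(zhi)].
    """
    zlo, zhi = (lo - m) / s, (hi - m) / s
    return m * (norm.cdf(zhi) - norm.cdf(zlo)) + s * (norm.pdf(zlo) - norm.pdf(zhi))

m1, s1 = 1.5, 0.128   # theta_1
m2, s2 = 0.5, 0.128   # theta_2

# R_1^(2) = [1, 1.5] x [0, 1], the second-iteration subregion of the example
a1 = mass(1.0, 1.5, m1, s1) * mass(0.0, 1.0, m2, s2)
E1 = partial_moment(1.0, 1.5, m1, s1) * mass(0.0, 1.0, m2, s2)   # E[theta_1; R_1^(2)]
E2 = mass(1.0, 1.5, m1, s1) * partial_moment(0.0, 1.0, m2, s2)   # E[theta_2; R_1^(2)]
```

The product structure follows from the independence of the parameters, exactly as in (54).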

Using the expressions for $\tilde z(\theta)$ (see (82)), $E_{ap}^{(k)}[f(d, \tilde z(\theta), \theta)]$ (see (87)) and $\Pr\{\theta \in \tilde\Omega_l^{(k)}\}$, $\Pr\{\theta \in \tilde\Omega^{(k)}\}$ (see (33), (36)), we can obtain the optimization problem that should be solved at the $k$-th iteration. For example, in the case when $\partial g_j / \partial \theta_p \ge 0$, $\forall \theta, d, z$, $j = 1,\dots,m$, it has the following form:

$$f_2^{(k)} = \min_{d, z_l, y_l^{(k)}} E_{ap}^{(k)}[f(d, \tilde z(\theta), \theta)] \tag{89}$$

$$\sum_{l=1}^{N_k} A_l^{(k)} [F_p(\tilde y_l^{(k)}) - F_p(\tilde\theta_{pl}^{(k)L})] \ge \alpha,$$


$$y_l^{(k)} \le \varphi_{j,l}^{(k)}, \quad j = 1,\dots,m, \; l = 1,\dots,N_k. \tag{90}$$

It is easy to see that the TSOPJCC has $n_d + (n_z + 1) N_k$ decision variables. Consequently, the number of decision variables increases proportionally to the number of subregions $R_l^{(k)}$. It is also easy to see that TSOPJCC (89) reduces to an OSOPJCC if $\tilde z(\theta) = \mathrm{const}$ $\forall \theta \in T_\theta$. Problem (89) is a standard NLP problem; to solve it we used the SQP method. Suppose we have solved problem (89) and obtained the solution $d^*, z^*(\theta)$. We can realize the optimal control $z^*(\theta)$, since at each time instant of the operation stage we can determine the values of the uncertain parameters $\theta$.

5.1. Motivating example (continued)

We have shown that problem (79) is transformed into problem (80) at the first iteration. Let us construct the approximation of the expected value of the objective function of problem (80). At the first iteration the set $R^{(1)}$ consists only of the region $R_1^{(1)} = T_\theta = \{\theta : 1 \le \theta_1 \le 2;\; 0 \le \theta_2 \le 1\}$, $N_1 = 1$. Taking into account (88) we can write

$$a_1 = \int_{T_\theta} \rho(\theta)\, d\theta = 1, \qquad (1 - a_1) = 0.$$

Using the point $\theta^{(1)1} = \{E[\theta_1]; E[\theta_2]\}$ as the linearization point in the region $R_1^{(1)}$, formula (87) reduces to

$$E_{ap}^{(1)}[f(d, z, \theta)] = f(d, z, \theta^{(1)1}).$$

Thus, at the first iteration we should solve the following problem:

$$f^{(1)} = \min_{d, z, y_1} (1.5 z + 2 d + 1.5)$$

$$1 \cdot \left[F_2\!\left(\frac{y_1 - E[\theta_2]}{\sigma_2}\right) - F_2\!\left(\frac{0 - E[\theta_2]}{\sigma_2}\right)\right] \ge 0.75,$$

$$y_1 \le \frac{d + 8}{1.5 z + 2}, \quad y_1 \le \frac{1 + d - 0.75 z}{1.5}, \quad 0 \le d \le 10, \quad 1 \le z \le 10, \quad 0 \le y_1 \le 1.$$
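This small smooth NLP can be solved directly by an SQP-type method; a sketch using SciPy's SLSQP (the starting point and solver choice are ours, not the authors'):

```python
from scipy.optimize import minimize
from scipy.stats import norm

E2, s2, alpha = 0.5, 0.128, 0.75   # E[theta_2], sigma_2, joint probability level

def obj(x):
    """First-iteration objective approximation E_ap^(1)[f] = 1.5 z + 2 d + 1.5."""
    d, z, y1 = x
    return 1.5 * z + 2.0 * d + 1.5

cons = [
    # chance-constraint approximation: F2(y1~) - F2(0~) >= alpha
    {"type": "ineq",
     "fun": lambda x: norm.cdf((x[2] - E2) / s2) - norm.cdf((0.0 - E2) / s2) - alpha},
    # y1 <= phi_11 = (d + 8)/(1.5 z + 2)
    {"type": "ineq", "fun": lambda x: (x[0] + 8) / (1.5 * x[1] + 2) - x[2]},
    # y1 <= phi_21 = (1 + d - 0.75 z)/1.5
    {"type": "ineq", "fun": lambda x: (1 + x[0] - 0.75 * x[1]) / 1.5 - x[2]},
]
bounds = [(0, 10), (1, 10), (0, 1)]  # d, z, y1

res = minimize(obj, x0=[1.0, 1.0, 0.6], bounds=bounds,
               constraints=cons, method="SLSQP")
```

The solver recovers $z = 1$, $d \approx 0.630$, $f^{(1)} \approx 4.259$, in agreement with the solution reported below.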


At the first iteration we obtained the solution $f^{(1)} = 4.25948$, $d^{(1)} = 0.62974$, $z^{(1)} = 1$, $y_1 = 0.58649$. Figure 1 illustrates the quality of the approximation of the objective function $f(d^{(1)}, z^{(1)}, \theta)$ by the function $\tilde f(d^{(1)}, z^{(1)}, \theta, \theta^{(1),1})$ in the region $R_1^{(1)}$. In Figure 2, the rectangle $0ABC$ is the approximation region $\tilde\Omega_1^{(1)}$ and $y_1$ is its boundary. Figure 2 also illustrates the quality of the approximation of the functions $\varphi_1(d^{(1)}, z^{(1)}, \theta_1)$, $\varphi_2(d^{(1)}, z^{(1)}, \theta_1)$ by the constants $\varphi_1(d^{(1)}, z^{(1)}, \theta_1^{1,mid})$, $\varphi_2(d^{(1)}, z^{(1)}, \theta_1^{1,mid})$, $\theta_1^{1,mid} = 1.5$, respectively.

Figure 1. The objective function approximation quality at the first iteration


Figure 2. Approximation quality of the functions $\varphi_1(d^{(1)}, z^{(1)}, \theta_1)$, $\varphi_2(d^{(1)}, z^{(1)}, \theta_1)$ by the functions $\varphi_{11}(d^{(1)}, z^{(1)}, \theta_1^{1,mid})$, $\varphi_{21}(d^{(1)}, z^{(1)}, \theta_1^{1,mid})$, and form of the region $\tilde\Omega_1^{(1)}$

6. Partition of subregions $R_q^{(k)}$

The iterative procedure for solving TSOPJCC and OSOPJCC is based on a partition of the uncertainty region. The aim of the partition is to improve the approximations of the region $\Omega$, of the multivariate functions $z(\theta)$, and of the expected value of the objective function $f(d, z(\theta), \theta)$. The following reasoning is given for TSOPJCC. Let the region $T_\theta$ be partitioned into a set of subregions $R_l^{(k)}$, $l = 1,\dots,N_k$, at the $k$-th iteration of the procedure; usually $N_1 = 1$ at the first iteration. Let us first consider the issue of selecting a subregion $R_l^{(k)}$ for partition. The simplest rule is to partition, at the $k$-th iteration, all subregions $R_l^{(k)}$, $l = 1,\dots,N_k$. In this case the number of decision variables grows very rapidly, yet it may not be necessary to improve the approximations in every subregion $R_l^{(k)}$. In this connection, we consider a more efficient way of selecting the subregions $R_l^{(k)}$


for partition: we will partition the subregion $R_{l^*}^{(k)}$, $l^* \in L$, in which the quality of the approximation of the function $f(d^{(k)}, z_{l^*}^{(k)}, \theta)$ by the function $\tilde f(d^{(k)}, z_{l^*}^{(k)}, \theta, \theta^{(k)l^*})$ is the worst and the total error of the approximation of the functions $\varphi_j(d^{(k)}, z_{l^*}^{(k)}, \theta_1,\dots,\theta_{p-1})$ by the constants $\varphi_{j,l^*}^{(k)} = \varphi_j(d^{(k)}, z_{l^*}^{(k)}, \theta^{(k)l^*,mid})$, $\theta^{(k)l,mid} = (\theta_1^{(k)l,mid},\dots,\theta_{p-1}^{(k)l,mid})$, is the worst.

Thus, at the $k$-th iteration the approximation quality of the subregion $R_l^{(k)}$ is defined by the value $\mu_l^{(k)}$:

$$\mu_l^{(k)} = \max_{\theta \in R_l^{(k)}} \left[\big(f(d^{(k)}, z_l^{(k)}, \theta) - \tilde f(d^{(k)}, z_l^{(k)}, \theta, \theta^{(k)l})\big)^2 + \sum_{j=1}^{m} \big(\varphi_j(d^{(k)}, z_l^{(k)}, \theta) - \varphi_j(d^{(k)}, z_l^{(k)}, \theta^{(k)l,mid})\big)^2\right], \tag{91}$$

where the values $d^{(k)}, z_l^{(k)}$ are the solution of problem (89) at the $k$-th iteration. We select for partition the region $R_{l^{*(k)}}^{(k)}$ in which the value $\mu_l^{(k)}$ takes the greatest value; thus the index $l^{*(k)}$ is obtained from

$$l^{*(k)} = \arg\max_{l \in L} \mu_l^{(k)}, \tag{92}$$

where $L = \{1,\dots,N_k\}$. Let us now consider the rule for partitioning the subregion $R_{l^*}^{(k)}$. This region is partitioned into two regions $R_{p_k}^{(k+1)}$ and $R_{q_k}^{(k+1)}$ ($R_{l_k}^{(k)} = R_{p_k}^{(k+1)} \cup R_{q_k}^{(k+1)}$) as follows:

$$R_{p_k}^{(k+1)} = \{\theta : \theta \in R_{l_k}^{(k)},\; \theta_s \le c_{s p_k}^{(k)}\}, \qquad R_{q_k}^{(k+1)} = \{\theta : \theta \in R_{l_k}^{(k)},\; \theta_s \ge c_{s p_k}^{(k)}\}, \tag{93}$$

where $\theta_s$ and $c_{s p_k}^{(k)}$ are referred to as the splitting variable and the splitting point: we partition the region $R_{l_k}^{(k)}$ with the hyperplane $\theta_s = c_{s p_k}^{(k)}$. In53 we proposed a rule for selecting the splitting variable and splitting point at the $k$-th iteration: each component of the vector $\theta$ is used in turn as the splitting variable, and the splitting point is selected in the middle of the interval $I_{s l^{*(k)}}^{(k)} = \{\theta_s : \theta_{s l^{*(k)}}^{(k)L} \le \theta_s \le \theta_{s l^{*(k)}}^{(k)U}\}$ of the subregion $R_{l^*}^{(k)}$ to be partitioned, where the parameter $\theta_s$ is the splitting variable. Taking into account the form of the subregions $R_l^{(k)}$ (see (21)), we partition the subregion $R_{l^*}^{(k)}$ using the same splitting variable and splitting point. It is easy to see that if at the $k$-th iteration the parameter $\theta_p$ is used as the splitting variable, then at this iteration only the approximation of the expected value of the objective function is improved. In the case of OSOPJCC, (91) should be rewritten in the following form:

$$\mu_l^{(k)} = \max_{\theta \in R_l^{(k)}} \left[\big(f(d^{(k)}, z^{(k)}, \theta) - \tilde f(d^{(k)}, z^{(k)}, \theta, \theta^{(k)l})\big)^2 + \sum_{j=1}^{m} \big(\varphi_j(d^{(k)}, z^{(k)}, \theta) - \varphi_j(d^{(k)}, z^{(k)}, \theta^{(k)l,mid})\big)^2\right]. \tag{94}$$
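The bisection rule (93) is a one-line operation on box subregions; a minimal sketch (the dictionary representation of a subregion is our own choice):

```python
def split(region, s):
    """Bisect a box subregion along axis s at the interval midpoint, as in (93).

    `region` maps an axis name to its (lower, upper) interval.
    """
    lo, hi = region[s]
    c = 0.5 * (lo + hi)                      # splitting point: middle of the interval
    left, right = dict(region), dict(region)
    left[s], right[s] = (lo, c), (c, hi)     # R_p^(k+1) and R_q^(k+1)
    return left, right

# Example: splitting T = [1, 2] x [0, 1] along theta_1 reproduces the
# second-iteration subregions of the motivating example.
R1, R2 = split({"theta1": (1.0, 2.0), "theta2": (0.0, 1.0)}, "theta1")
```

Both halves inherit the intervals of the untouched axes, so repeated splitting always yields boxes.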

6.1. Motivating example (continued)

Let us choose the variable $\theta_1$ as the splitting variable and $\theta_1 = 1.5$ as the splitting point in the region $R_1^{(1)}$. Thus, we partition the region $R_1^{(1)}$ into two regions $R_1^{(2)}$ and $R_2^{(2)}$ with the hyperplane $\theta_1 = 1.5$:

$$R_1^{(2)} = \{\theta : 1 \le \theta_1 \le 1.5;\; 0 \le \theta_2 \le 1\}, \qquad R_2^{(2)} = \{\theta : 1.5 \le \theta_1 \le 2;\; 0 \le \theta_2 \le 1\}, \qquad N_2 = 2.$$

Since the hyperplane $\theta_1 = 1.5$ also partitions the region $\bar R_1^{(1)}$ into two regions, we obtain

$$\bar R_1^{(2)} = \{\theta_1 : 1 \le \theta_1 \le 1.5\}, \qquad \bar R_2^{(2)} = \{\theta_1 : 1.5 \le \theta_1 \le 2\}, \qquad \bar N_2 = 2.$$

Taking into account formula (23), the regions $\Omega^{l,(2)}$, $l = 1,\dots,N_2$, can be represented in the following form:

$$\Omega^{l,(2)} = \{\theta : \theta_2 - \varphi_j(d, z, \theta_1) \le 0,\; j = 1, 2;\; \theta_1 \in \bar R_l^{(2)}\}, \quad l = 1,\dots,N_2,$$

and the regions $\tilde\Omega_l^{(2)}$, $l = 1,\dots,N_2$, have the form

$$\tilde\Omega_l^{(2)} = \{\theta : \theta_2 - \varphi_{j,l}^{(2)} \le 0,\; j = 1, 2;\; \theta_1 \in \bar R_l^{(2)}\}, \quad l = 1,\dots,N_2,$$


where for the region $\bar R_1^{(2)}$:

$$\varphi_{11}^{(2)} = \frac{d + 8}{1.25 z + 2}, \qquad \varphi_{21}^{(2)} = \frac{1 + d - 0.625 z}{1.25}, \qquad \theta_1^{(2)1,mid} = 1.25,$$

and for the region $\bar R_2^{(2)}$:

$$\varphi_{12}^{(2)} = \frac{d + 8}{1.75 z + 2}, \qquad \varphi_{22}^{(2)} = \frac{1 + d - 0.875 z}{1.75}, \qquad \theta_1^{(2)2,mid} = 1.75.$$

Thus, we obtain

$$\Pr\{\theta \in \tilde\Omega^{(2)}\} = \Pr\{\theta \in \tilde\Omega_1^{(2)}\} + \Pr\{\theta \in \tilde\Omega_2^{(2)}\} = A_1^{(2)} [F_2(\tilde y_1^{(2)}) - F_2(\tilde\theta_{2,1}^{(2)L})] + A_2^{(2)} [F_2(\tilde y_2^{(2)}) - F_2(\tilde\theta_{2,2}^{(2)L})].$$

Let us calculate the values of the parameters $A_1^{(2)}$, $A_2^{(2)}$:

$$A_1^{(2)} = F_1(\tilde\theta_{1,1}^{(2)U}) - F_1(\tilde\theta_{1,1}^{(2)L}) = F_1((1.5 - E[\theta_1])/\sigma_1) - F_1((1 - E[\theta_1])/\sigma_1) = 0.5,$$

$$A_2^{(2)} = F_1(\tilde\theta_{1,2}^{(2)U}) - F_1(\tilde\theta_{1,2}^{(2)L}) = F_1((2 - E[\theta_1])/\sigma_1) - F_1((1.5 - E[\theta_1])/\sigma_1) = 0.5.$$

Thus, we can write formula (36) for the approximation of the left-hand side of the joint constraint:

$$\Pr\{\theta \in \tilde\Omega^{(2)}\} = 0.5\left[F_2\!\left(\frac{y_1^{(2)} - 0.5}{0.128}\right) - F_2\!\left(\frac{0 - 0.5}{0.128}\right)\right] + 0.5\left[F_2\!\left(\frac{y_2^{(2)} - 0.5}{0.128}\right) - F_2\!\left(\frac{0 - 0.5}{0.128}\right)\right].$$

Next, taking into account (86)–(88), we construct the approximation of the expected value of the objective function at the second iteration. The derivatives of the objective function are

$$\frac{\partial f(d, z, \theta)}{\partial \theta_1} = z + 2 \theta_2, \qquad \frac{\partial f(d, z, \theta)}{\partial \theta_2} = 2 \theta_1. \tag{95}$$

Taking into account (88) and using the middle points $\theta^{(2)1} = \{1.25; 0.5\}$, $\theta^{(2)2} = \{1.75; 0.5\}$ of the regions $R_1^{(2)}$ and $R_2^{(2)}$ as linearization points, we can calculate the values $a_r$ and $E[\theta_i; R_r^{(2)}]$, $r = 1,\dots,N_2$, $N_2 = 2$:

$$a_1 = \int_{1}^{1.5} \rho_1(\theta_1) \int_{0}^{1} \rho_2(\theta_2)\, d\theta_2\, d\theta_1 = 0.5, \qquad a_2 = \int_{1.5}^{2} \rho_1(\theta_1) \int_{0}^{1} \rho_2(\theta_2)\, d\theta_2\, d\theta_1 = 0.5,$$

$$E[\theta_1; R_1^{(2)}] = \int_{1}^{1.5} \theta_1 \rho_1(\theta_1) \int_{0}^{1} \rho_2(\theta_2)\, d\theta_2\, d\theta_1 = 0.6973366,$$

$$E[\theta_2; R_1^{(2)}] = \int_{1}^{1.5} \rho_1(\theta_1) \int_{0}^{1} \theta_2 \rho_2(\theta_2)\, d\theta_2\, d\theta_1 = 0.2499277,$$

$$E[\theta_1; R_2^{(2)}] = 0.8022294, \qquad E[\theta_2; R_2^{(2)}] = 0.2499277.$$

Now we can write the new form of formula (87):

$$E_{ap}^{(2)}[f(d, z, \theta)] = (0.625 z + d + 0.625) + (0.875 z + d + 0.875) + (0.5 z + 0.5)(E[\theta_1; R_1^{(2)}] + E[\theta_1; R_2^{(2)}]) + 1.25\, E[\theta_2; R_1^{(2)}] + 1.75\, E[\theta_2; R_2^{(2)}].$$

Second iteration. The optimization problem to be solved at the second iteration is

$$f^{(2)} = \min_{d, z, y_1, y_2} (2 d + 2.249783 z + 2.999566)$$

$$0.5\left[F_2\!\left(\frac{y_1^{(2)} - 0.5}{0.128}\right) - F_2\!\left(\frac{0 - 0.5}{0.128}\right)\right] + 0.5\left[F_2\!\left(\frac{y_2^{(2)} - 0.5}{0.128}\right) - F_2\!\left(\frac{0 - 0.5}{0.128}\right)\right] \ge 0.75,$$

$$y_1 \le \frac{d + 8}{1.25 z + 2}, \quad y_1 \le \frac{1 + d - 0.625 z}{1.25}, \quad y_2 \le \frac{d + 8}{1.75 z + 2}, \quad y_2 \le \frac{1 + d - 0.875 z}{1.75},$$

$$0 \le d \le 10, \quad 1 \le z \le 10, \quad 0 \le y_1 \le 1, \quad 0 \le y_2 \le 1.$$

At the second iteration we obtained the solution $f^{(2)} = 6.75046$, $d^{(2)} = 0.75056$, $z^{(2)} = 1$, $y_1 = 0.90045$, $y_2 = 0.50032$. Since the objective value at the first iteration was $f^{(1)} = 4.25948$ and $f^{(2)} - f^{(1)} = 2.491 > \varepsilon_1 = 0.1$, the iterations must be repeated. Let us now determine the approximation quality of the regions $R_1^{(2)}$ and $R_2^{(2)}$. Taking into account (94), we should solve the two problems

$$\mu_1^{(2)} = \max_{\theta \in R_1^{(2)}} \left[\big(f(d^{(2)}, z^{(2)}, \theta) - \tilde f(d^{(2)}, z^{(2)}, \theta, \theta^{(2)1})\big)^2 + \sum_{j=1}^{2} \big(\varphi_j(d^{(2)}, z^{(2)}, \theta_1) - \varphi_j(d^{(2)}, z^{(2)}, \theta_1^{(2)1,mid})\big)^2\right], \tag{96}$$

$$\mu_2^{(2)} = \max_{\theta \in R_2^{(2)}} \left[\big(f(d^{(2)}, z^{(2)}, \theta) - \tilde f(d^{(2)}, z^{(2)}, \theta, \theta^{(2)2})\big)^2 + \sum_{j=1}^{2} \big(\varphi_j(d^{(2)}, z^{(2)}, \theta_1) - \varphi_j(d^{(2)}, z^{(2)}, \theta_1^{(2)2,mid})\big)^2\right]. \tag{97}$$

Let us write the components of the maximized function in (96):

$$f(d^{(2)}, z^{(2)}, \theta) - \tilde f(d^{(2)}, z^{(2)}, \theta, \theta^{(2)1}) = z^{(2)} \theta_1 + 2 \theta_1 \theta_2 + 2 d^{(2)} - \big(1.25 z^{(2)} + 2 d^{(2)} + 1.25 + (z^{(2)} + 1)(\theta_1 - 1.25) + 2.5 (\theta_2 - 0.5)\big) = 2 \theta_1 \theta_2 - \theta_1 - 2.5 \theta_2 + 1.25,$$

$$\varphi_1(d^{(2)}, z^{(2)}, \theta_1) - \varphi_1(d^{(2)}, z^{(2)}, \theta_1^{(2)1,mid}) = \frac{d^{(2)} + 8}{z^{(2)} \theta_1 + 2} - \frac{d^{(2)} + 8}{1.25 z^{(2)} + 2},$$

$$\varphi_2(d^{(2)}, z^{(2)}, \theta_1) - \varphi_2(d^{(2)}, z^{(2)}, \theta_1^{(2)1,mid}) = \frac{1 + d^{(2)} - 0.5 z^{(2)} \theta_1}{\theta_1} - \frac{1 + d^{(2)} - 0.625 z^{(2)}}{1.25}.$$

Let us write the components of the maximized function in (97):

$$f(d^{(2)}, z^{(2)}, \theta) - \tilde f(d^{(2)}, z^{(2)}, \theta, \theta^{(2)2}) = z^{(2)} \theta_1 + 2 \theta_1 \theta_2 + 2 d^{(2)} - \big(1.75 z^{(2)} + 2 d^{(2)} + 1.75 + (z^{(2)} + 1)(\theta_1 - 1.75) + 3.5 (\theta_2 - 0.5)\big) = 2 \theta_1 \theta_2 - \theta_1 - 3.5 \theta_2 + 1.75,$$

$$\varphi_1(d^{(2)}, z^{(2)}, \theta_1) - \varphi_1(d^{(2)}, z^{(2)}, \theta_1^{(2)2,mid}) = \frac{d^{(2)} + 8}{z^{(2)} \theta_1 + 2} - \frac{d^{(2)} + 8}{1.75 z^{(2)} + 2},$$

$$\varphi_2(d^{(2)}, z^{(2)}, \theta_1) - \varphi_2(d^{(2)}, z^{(2)}, \theta_1^{(2)2,mid}) = \frac{1 + d^{(2)} - 0.5 z^{(2)} \theta_1}{\theta_1} - \frac{1 + d^{(2)} - 0.875 z^{(2)}}{1.75}.$$
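Since the maxima in (96) and (97) are attained at subregion corners here, a dense grid search evaluates them accurately; a sketch using the second-iteration values $d^{(2)} = 0.75056$, $z^{(2)} = 1$ (the grid resolution is our choice):

```python
import numpy as np

d, z = 0.75056, 1.0   # second-iteration solution

f = lambda t1, t2: z * t1 + 2 * t1 * t2 + 2 * d   # objective integrand
phi1 = lambda t1: (d + 8) / (z * t1 + 2)          # theta_2 bound from g1
phi2 = lambda t1: (1 + d - 0.5 * z * t1) / t1     # theta_2 bound from g2

def mu(t1_lo, t1_hi, mid):
    """Approximation-quality measure (96)/(97) on [t1_lo, t1_hi] x [0, 1]."""
    T1, T2 = np.meshgrid(np.linspace(t1_lo, t1_hi, 401),
                         np.linspace(0.0, 1.0, 401))
    # linearization of f at (mid, 0.5): df/dt1 = z + 1, df/dt2 = 2 * mid
    f_lin = f(mid, 0.5) + (z + 1.0) * (T1 - mid) + 2 * mid * (T2 - 0.5)
    err = ((f(T1, T2) - f_lin) ** 2
           + (phi1(T1) - phi1(mid)) ** 2 + (phi2(T1) - phi2(mid)) ** 2)
    return err.max()

mu1 = mu(1.0, 1.5, 1.25)   # region R_1^(2)
mu2 = mu(1.5, 2.0, 1.75)   # region R_2^(2)
```

Here `mu1 > mu2`, so $R_1^{(2)}$ is selected for partition by rule (92).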

We obtained the solutions $\mu_1^{(2)} = 0.23542$, $\mu_2^{(2)} = 0.11808$ of problems (96), (97), respectively; since $\mu_1^{(2)} > \mu_2^{(2)}$, it is the region $R_1^{(2)}$ that must be partitioned into two regions.

Figure 3 illustrates the approximation quality of the objective function in the regions $R_1^{(2)}$ (a) and $R_2^{(2)}$ (b). In Figure 4, the rectangle $0ABF$ is the approximation region $\tilde\Omega_1^{(2)}$ and $y_1$ is its boundary; the rectangle $FCDE$ is the approximation region $\tilde\Omega_2^{(2)}$ and $y_2$ is its boundary; $0F$ is the region $\bar R_1^{(2)}$ and $FE$ is the region $\bar R_2^{(2)}$. Figure 4 also illustrates the quality of the approximation of the function $\varphi_1(d^{(2)}, z^{(2)}, \theta_1)$ by the constants $\varphi_{11}$, $\varphi_{12}$ and of the function $\varphi_2(d^{(2)}, z^{(2)}, \theta_1)$ by the constants $\varphi_{21}$, $\varphi_{22}$ on the regions $\bar R_1^{(2)}$ and $\bar R_2^{(2)}$, where $\varphi_{11} = \varphi_1(d, z, \theta_1^{(2)1,mid})$, $\varphi_{21} = \varphi_2(d, z, \theta_1^{(2)1,mid})$, $\varphi_{12} = \varphi_1(d, z, \theta_1^{(2)2,mid})$, $\varphi_{22} = \varphi_2(d, z, \theta_1^{(2)2,mid})$.

Let us choose the variable $\theta_2$ as the splitting variable and partition the region $R_1^{(2)}$ into two regions $R_1^{(3)}$ and $R_3^{(3)}$ with the hyperplane $\theta_2 = 0.5$. Thus, we obtain three regions:

$$R_1^{(3)} = \{\theta : 1 \le \theta_1 \le 1.5;\; 0 \le \theta_2 \le 0.5\}, \qquad \bar R_1^{(3)} = \{\theta_1 : 1 \le \theta_1 \le 1.5\},$$

$$R_2^{(3)} = \{\theta : 1.5 \le \theta_1 \le 2;\; 0 \le \theta_2 \le 1\}, \qquad \bar R_2^{(3)} = \{\theta_1 : 1.5 \le \theta_1 \le 2\},$$

$$R_3^{(3)} = \{\theta : 1 \le \theta_1 \le 1.5;\; 0.5 \le \theta_2 \le 1\}, \qquad \bar R_3^{(3)} = \bar R_1^{(3)} = \{\theta_1 : 1 \le \theta_1 \le 1.5\},$$

$$N_3 = 3, \qquad \bar N_3 = 2.$$

Figure 3. Objective function approximation quality at the second iteration: (a) region $R_1^{(2)}$, (b) region $R_2^{(2)}$

Figure 4. Approximation quality of the functions $\varphi_1(d^{(2)}, z^{(2)}, \theta_1)$, $\varphi_2(d^{(2)}, z^{(2)}, \theta_1)$ and form of the regions $\tilde\Omega_1^{(2)}$, $\tilde\Omega_2^{(2)}$ at the second iteration

Thus, we have the two projected regions $\bar R_1^{(3)} = \{\theta_1 : 1 \le \theta_1 \le 1.5\}$, $\bar R_2^{(3)} = \{\theta_1 : 1.5 \le \theta_1 \le 2\}$.


Figure 5 illustrates the partition of the uncertainty region $T$ at the third iteration: the rectangle $AKLD$ is the region $R_1^{(3)}$, the rectangle $DCEF$ is the region $R_2^{(3)}$, the rectangle $KBCL$ is the region $R_3^{(3)}$, $AD$ is the region $\bar R_1^{(3)}$, and $DF$ is the region $\bar R_2^{(3)}$.

Figure 5. Partitioning of the uncertainty region $T$ at the third iteration

Since $\bar R_3^{(3)} = \bar R_1^{(3)}$, the approximation of the joint constraint is the same:

$$\Pr\{\theta \in \tilde\Omega^{(3)}\} = 0.5\left[F_2\!\left(\frac{y_1^{(3)} - 0.5}{0.128}\right) - F_2\!\left(\frac{0 - 0.5}{0.128}\right)\right] + 0.5\left[F_2\!\left(\frac{y_2^{(3)} - 0.5}{0.128}\right) - F_2\!\left(\frac{0 - 0.5}{0.128}\right)\right],$$

and the following constraints should be used:

$$y_1 \le \frac{d + 8}{1.25 z + 2}, \quad y_1 \le \frac{1 + d - 0.625 z}{1.25}, \quad y_2 \le \frac{d + 8}{1.75 z + 2}, \quad y_2 \le \frac{1 + d - 0.875 z}{1.75}.$$

Let us construct the approximation of the expected value of the objective function. Reasoning exactly as before, we can write

$$a_1 = 0.25, \quad a_2 = 0.5, \quad a_3 = 0.25,$$

$$E[\theta_1; R_1^{(3)}] = 0.3488, \quad E[\theta_1; R_2^{(3)}] = 0.80224, \quad E[\theta_1; R_3^{(3)}] = 0.3488,$$

$$E[\theta_2; R_1^{(3)}] = 0.09875, \quad E[\theta_2; R_2^{(3)}] = 0.25, \quad E[\theta_2; R_3^{(3)}] = 0.15121.$$

Using the middle points θ ( 3)1 = {1.25;0.25} , θ ( 3) 2 = {1.75;0.5} , θ ( 3) 2 = {1.25;0.75} of regions R1( 3) , R2( 3) and R3( 3) as linearization points respectively, we can write ( 3) E ap [ f (d , z ,θ ),θ ] = a1 f (d , z ,θ (3)1 ) + a 2 f (d , z ,θ ( 3) 2 ) + a3 f (d , z ,θ (3)3 ) +


+ (1 − a1)[∂f(d, z, θ(3)1)/∂θ1 · E[θ1; R1(3)] + ∂f(d, z, θ(3)1)/∂θ2 · E[θ2; R1(3)]]
+ (1 − a2)[∂f(d, z, θ(3)2)/∂θ1 · E[θ1; R2(3)] + ∂f(d, z, θ(3)2)/∂θ2 · E[θ2; R2(3)]]
+ (1 − a3)[∂f(d, z, θ(3)3)/∂θ1 · E[θ1; R3(3)] + ∂f(d, z, θ(3)3)/∂θ2 · E[θ2; R3(3)]].

Third iteration. Thus, at the third iteration we should solve the following problem:

f(3) = min_{d, z, y1, y2} Eap(3)[f(d, z, θ), θ]

0.5[F2((y1(3) − 0.5)/0.128) − F2((0 − 0.5)/0.128)] + 0.5[F2((y2(3) − 0.5)/0.128) − F2((0 − 0.5)/0.128)] ≥ 0.75,

y1 ≤ (d + 8)/(1.25z + 2),  y1 ≤ (1 + d − 0.625z)/1.25,  y2 ≤ (d + 8)/(1.75z + 2),  y2 ≤ (1 + d − 0.875z)/1.75,

0 ≤ d ≤ 10,  1 ≤ z ≤ 10,  0 ≤ y1 ≤ 1,  0 ≤ y2 ≤ 1.

At the third iteration we obtained the solution f(3) = 7.2556, d(3) = 0.7506, z(3) = 1, y1 = 0.90045, y2 = 0.50032. Since the objective value obtained at the second iteration was f(2) = 6.75046 and

|f(3) − f(2)| = 0.5051 > 0.1, the iterations must be repeated. Let us estimate the approximation quality in the regions R1(3), R2(3), and R3(3). We should solve three problems, taking into account (94) (θ1(3)1 = 1.25, θ1(3)2 = 1.75):

µ1(3) = max_{θ ∈ R1(3)} [(f(d(3), z(3), θ) − f̃(d(3), z(3), θ, θ(3)1))² + Σ_{j=1,2} (ϕj(d(3), z(3), θ1) − ϕj(d(3), z(3), θ1(3)1))²],

µ2(3) = max_{θ ∈ R2(3)} [(f(d(3), z(3), θ) − f̃(d(3), z(3), θ, θ(3)2))² + Σ_{j=1,2} (ϕj(d(3), z(3), θ1) − ϕj(d(3), z(3), θ1(3)2))²],

µ3(3) = max_{θ ∈ R3(3)} [(f(d(3), z(3), θ) − f̃(d(3), z(3), θ, θ(3)3))² + Σ_{j=1,2} (ϕj(d(3), z(3), θ1) − ϕj(d(3), z(3), θ1(3)1))²].

By reasoning similar to that above we obtained µ1(3) = 0.18855, µ2(3) = 0.11808, µ3(3) = 0.18855. It is clear that the regions R1(3) and R3(3) must be partitioned. Let us


note that here only the region R3(3) will be partitioned into two regions. Figure 6 illustrates the approximation quality of the objective function in the regions R1(3) (a), R2(3) (b), and R3(3) (c). Since the approximation of the joint constraints is the same, Figure 4 illustrates the approximation quality of the function ϕ1(d(3), z(3), θ1) by the functions ϕ11, ϕ21 in the region R1(3), and of the function ϕ2(d(3), z(3), θ1) by the functions ϕ21, ϕ22 in the region R2(3); in Figure 4 the rectangle 0ABF is the approximation region Ω̃1(3) and the rectangle FCDE is the approximation region Ω̃2(3).

Figure 6. Objective function approximation quality at the third iteration: (a) region R1(3); (b) region R2(3); (c) region R3(3)


Let us choose the variable θ1 as the splitting variable and partition the region R3(3) into two regions R3(4) and R4(4) with the help of the hyperplane θ1 = 1.25. Thus, we obtain four regions R1(4) = {θ: 1 ≤ θ1 ≤ 1.5; 0 ≤ θ2 ≤ 0.5}, R2(4) = {θ: 1.5 ≤ θ1 ≤ 2; 0 ≤ θ2 ≤ 1}, R3(4) = {θ: 1 ≤ θ1 ≤ 1.25; 0.5 ≤ θ2 ≤ 1}, R4(4) = {θ: 1.25 ≤ θ1 ≤ 1.5; 0.5 ≤ θ2 ≤ 1}, N4 = 4.

Since the region R3(3) is partitioned by the hyperplane θ1 = 1.25, the one-dimensional region R1(3) must also be partitioned into two regions R1(4) = {θ1: 1 ≤ θ1 ≤ 1.25} and R3(4) = {θ1: 1.25 ≤ θ1 ≤ 1.5}. Thus, we obtain the one-dimensional regions R1(4) = {θ1: 1 ≤ θ1 ≤ 1.25}, R2(4) = {θ1: 1.5 ≤ θ1 ≤ 2}, R3(4) = {θ1: 1.25 ≤ θ1 ≤ 1.5}, N4 = 3; the one-dimensional region R1(4) corresponds to the two-dimensional regions R1(4) and R3(4), R2(4) corresponds to R2(4), and R3(4) corresponds to R1(4) and R4(4).

Figure 7 illustrates the uncertain region partition at the fourth iteration. Here rectangle AKLD is the region R1(4), rectangle DCEF is the region R2(4), rectangle KBMQ is the region R3(4), and rectangle QMCL is the region R4(4); segment AN is the one-dimensional region R1(4), segment ND is R3(4), and segment DF is R2(4).

Figure 7. Partitioning of the uncertain region T at the fourth iteration


Using reasoning similar to that above, one can obtain:

region R1(4): ϕ11(4) = (d + 8)/(1.125z + 2),  ϕ21(4) = (1 + d − 0.5625z)/1.125,  θ1(4)1,mid = 1.125;

region R2(4): ϕ12(4) = (d + 8)/(1.75z + 2),  ϕ22(4) = (1 + d − 0.875z)/1.75,  θ1(4)2,mid = 1.75;

region R3(4): ϕ13(4) = (d + 8)/(1.375z + 2),  ϕ23(4) = (1 + d − 0.6875z)/1.375,  θ1(4)3,mid = 1.375.

Thus, we obtain

Pr{θ ∈ Ω̃(4)} = Pr{θ ∈ Ω̃1(4)} + Pr{θ ∈ Ω̃2(4)} + Pr{θ ∈ Ω̃3(4)} = A1(4)[F2(ỹ1(4)) − F2(θ̃2,1(4)L)] + A2(4)[F2(ỹ2(4)) − F2(θ̃2,2(4)L)] + A3(4)[F2(ỹ3(4)) − F2(θ̃2,3(4)L)].

Let us calculate the values of the parameters A1(4), A2(4), A3(4):

A1(4) = F1(θ̃1,1(4)U) − F1(θ̃1,1(4)L) = F1((1.25 − E[θ1])/σ1) − F1((1 − E[θ1])/σ1) = 0.25,
A2(4) = F1(θ̃1,2(4)U) − F1(θ̃1,2(4)L) = F1((2 − E[θ1])/σ1) − F1((1.5 − E[θ1])/σ1) = 0.5,
A3(4) = F1(θ̃1,3(4)U) − F1(θ̃1,3(4)L) = F1((1.5 − E[θ1])/σ1) − F1((1.25 − E[θ1])/σ1) = 0.25.

Thus, we can write formula (36) for the left-hand side of the joint constraints approximation:

Pr{θ ∈ Ω̃(4)} = 0.25[F2((y1(4) − 0.5)/0.128) − F2((0 − 0.5)/0.128)] + 0.5[F2((y2(4) − 0.5)/0.128) − F2((0 − 0.5)/0.128)] + 0.25[F2((y3(4) − 0.5)/0.128) − F2((0 − 0.5)/0.128)].

Next, taking into account (86)-(88), we should construct the approximation of the objective function expected value at the fourth iteration. Using the objective function derivative forms (95) and the middle points θ(4)1 = {1.25; 0.25}, θ(4)2 = {1.75; 0.5}, θ(4)3 = {1.125; 0.75}, θ(4)4 = {1.375; 0.75} of the regions R1(4), R2(4), R3(4), and R4(4) as linearization points, respectively, we can calculate the values of ar and E[θi; Rr(k)], r = 1, …, N4, N4 = 4 (see formulae (88)) and write the approximation: a1 = 0.25, a2 = 0.5, a3 = 0.0128, a4 = 0.2372, E[θ1; R1(4)] = 0.3488, E[θ1; R2(4)] = 0.80224, E[θ1; R3(4)] = 0.0173, E[θ1; R4(4)] = 0.3315,


E[θ2; R1(4)] = 0.09875, E[θ2; R2(4)] = 0.25, E[θ2; R3(4)] = 0.0087, E[θ2; R4(4)] = 0.1425,

Eap(4)[f(d, z, θ), θ] = Σ_{r=1}^{4} ar f(d, z, θ(4)r) + Σ_{r=1}^{4} (1 − ar)[∂f(d, z, θ(4)r)/∂θ1 · E[θ1; Rr(4)] + ∂f(d, z, θ(4)r)/∂θ2 · E[θ2; Rr(4)]].
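To illustrate the idea behind this construction, the following sketch implements a generic region-wise linearization of an expected value: the uncertainty interval is partitioned, f is linearized at each region's middle point, and the pieces are combined. This is a simplified illustration using plain probability weights, not the paper's exact weighting scheme (87), (88); for a linear f the approximation is exact.

```python
def approx_expected_value(f, dfdx, regions, weights, cond_means):
    """E[f(x)] ~= sum_r w_r * (f(m_r) + f'(m_r) * (E[x | region r] - m_r)),
    where m_r is the middle point of region r (generic sketch)."""
    total = 0.0
    for (lo, hi), w, mu in zip(regions, weights, cond_means):
        mid = 0.5 * (lo + hi)
        total += w * (f(mid) + dfdx(mid) * (mu - mid))
    return total

# x uniform on [0, 1], split into two regions of probability 0.5 each;
# the conditional means are 0.25 and 0.75
regions = [(0.0, 0.5), (0.5, 1.0)]
weights = [0.5, 0.5]
cond_means = [0.25, 0.75]

# linear f: the approximation is exact, E[3x + 2] = 3.5
f_lin = lambda x: 3.0 * x + 2.0
print(approx_expected_value(f_lin, lambda x: 3.0, regions, weights, cond_means))
```

Refining the partition shrinks the error for nonlinear f, which is exactly why Algorithm 1 below keeps splitting the regions with the largest approximation error.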

Fourth iteration. Thus, at the fourth iteration we should solve the following problem:

f(4) = min_{d, z, y1, y2, y3} Eap(4)[f(d, z, θ), θ]

Pr{θ ∈ Ω̃(4)} ≥ 0.75,

y1 ≤ (d + 8)/(1.125z + 2),  y2 ≤ (d + 8)/(1.75z + 2),  y3 ≤ (d + 8)/(1.375z + 2),

y1 ≤ (1 + d − 0.5625z)/1.125,  y2 ≤ (1 + d − 0.875z)/1.75,  y3 ≤ (1 + d − 0.6875z)/1.375,

0 ≤ d ≤ 10,  1 ≤ z ≤ 10,  0 ≤ y1 ≤ 1,  0 ≤ y2 ≤ 1,  0 ≤ y3 ≤ 1.

At the fourth iteration we obtained the solution f(4) = 7.3955, d(4) = 0.7581, z(4) = 1, y1 = 0.1, y2 = 0.5046, y3 = 0.7786. Since the objective value obtained at the third iteration was f(3) = 7.2556 and |f(4) − f(3)| = 0.1399 > ε1 = 0.1, the iterations must be repeated.
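The stopping test used above (and formalized in Algorithm 1 of the next section) simply compares successive objective values; a minimal sketch using the objective values reported in the text:

```python
def should_stop(f_prev, f_curr, eps=0.1):
    """Stop the partition-refinement iterations once successive
    objective values differ by no more than eps."""
    return abs(f_curr - f_prev) <= eps

# objective values from iterations 2, 3 and 4 in the text
f_vals = [6.75046, 7.2556, 7.3955]
print(should_stop(f_vals[0], f_vals[1]))  # False: |7.2556 - 6.75046| = 0.5051 > 0.1
print(should_stop(f_vals[1], f_vals[2]))  # False: |7.3955 - 7.2556| = 0.1399 > 0.1
```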

7. Algorithm for solving the TSOPJCC

Using the results of the previous sections, we can formulate an algorithm for solving the TSOPCC with joint chance constraints. For definiteness we consider case (20). The flow chart of the algorithm for solving the TSOPCC (89) is given below.

Algorithm 1

Step 1: Set the iteration counters k = 1, s = 1. Specify initial values d(0), zl(0), an initial set of subregions R1(1) (usually N1 = 1 and R1(1) = Tθ), and a small ε1 > 0.

Step 2: Solve problem (89). Let f2(k), d(k), zl(k) be the solution of problem (89).

Step 3: If k > 1, go to Step 4; otherwise set pk = 1, qk = 2, partition the region R1(1) = Tθ into two subregions Rpk(k+1) and Rqk(k+1) according to rule (91), set Nk+1 = 2, and go to Step 8.


Step 4: If |f2(k) − f2(k−1)| ≤ ε1, where ε1 is the desired precision level, the iteration procedure stops; otherwise go to Step 5.

Step 5: Determine the values ϕjl(k) = ϕj(d(k), zl(k), θ(k)l,mid).

Step 6: Solve problem (91) for l = 1, …, Nk. Solve problem (92) to obtain l*(k). Determine the splitting variable θs and the splitting point csl(k).

Step 7: Partition the region Rl*(k) into two subregions Rpk(k+1) and Rqk(k+1), and set Nk+1 = Nk + 1. If s ≠ p, partition the corresponding one-dimensional region into two subregions Rrk(k+1) and Rhk(k+1) as well.

Step 8: Set k = k + 1 and go to Step 2.

Since at each iteration of solving problem (89) the number of subregions Rr(k) increases by one, the total number of subregions Nk at the k-th iteration is equal to k (if N1 = 1).

7.2. Case of an optimization problem with individual and joint chance constraints

It is easy to generalize this approach to the case when there are both individual and joint chance constraints. We show this using the OSOPJCC as an example. Suppose that constraints (2) with numbers j = 1, …, q should satisfy individual chance constraints, and constraints (2) with numbers j = q + 1, …, m

should satisfy the joint chance constraint. Consider the case when ∂gj/∂θp ≥ 0, ∀θ, z, d, j = 1, …, m. For the constraints with numbers j = 1, …, q we introduce the following regions:

Ωj = {θ: gj(d, z, θ) ≤ 0, θiL ≤ θi ≤ θiU, i = 1, …, p},
Ωj,l = {θ: θp − ϕj(d, z, θ1, …, θp−1) ≤ 0, θ ∈ Rl(k)},  l = 1, 2, …, Nk.

The following equality holds: Ωj = Ωj,1 ∪ Ωj,2 ∪ … ∪ Ωj,Nk.


Let the region Ω̃j,l(k) be the approximation of the region Ωj,l at the k-th iteration. We use the following expression for Ω̃j,l(k):

Ω̃j,l(k) = {θ: θp − ϕj,l(k) ≤ 0, θ ∈ Rl(k)},  j = 1, …, q,  l = 1, 2, …, Nk.

In the region Ω̃j,l(k) the parameters θi vary within the intervals

Iijl(k) = {θ: θ̃ijl(k)L ≤ θi ≤ θ̃ijl(k)U},  i = 1, …, p − 1,   Ipjl(k) = {θ: θ̃pjl(k)L ≤ θp ≤ ϕj,l(k)}.

By reasoning similar to that of the previous sections we obtain

Pr{θ ∈ Ω̃j,l(k)} = Ajl(k)[F(ỹj,l(k)) − F(θ̃pjl(k)L)],

Pr{θ ∈ Ω̃j(k)} = Σ_{l=1}^{Nk} Ajl(k)[F(ỹj,l(k)) − F(θ̃pjl(k)L)],

where Ajl(k) = Π_{i=1}^{p−1} [F(θ̃ijl(k)U) − F(θ̃ijl(k)L)].
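The product form of Ajl(k) reflects a standard fact: for independent parameters, the probability of a box region factors into one-dimensional CDF differences. A small self-contained sketch (standard normal marginals are assumed here purely for illustration):

```python
from math import erf, sqrt

def std_normal_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def box_probability(bounds):
    """Probability that independent standard normal parameters fall into
    the box [(lo_1, up_1), ..., (lo_p, up_p)]: the product of
    one-dimensional CDF differences."""
    prob = 1.0
    for lo, up in bounds:
        prob *= std_normal_cdf(up) - std_normal_cdf(lo)
    return prob

# probability of the square [-1, 1] x [-1, 1]: (2*Phi(1) - 1)^2, about 0.4661
p = box_probability([(-1.0, 1.0), (-1.0, 1.0)])
print(round(p, 4))
```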

Finally, using expression (87) for Eap(k)[f(d, z̃(θ), θ)], we obtain the optimization problem that should be solved at the k-th iteration:

f(k) = min_{d, z, yj,l(k), yl(k)} Eap(k)[f(d, z, θ); Tθ]

Σ_{l=1}^{Nk} Ajl(k)[Fp(ỹj,l(k)) − Fp(θ̃pjl(k)L)] ≥ αj,  j = 1, …, q,

yj,l(k) ≤ ϕj,l(k),  j = 1, …, q,  l = 1, …, Nk,

Σ_{l=1}^{Nk} Al(k)[Fp(θ̃pl(k)U) − Fp(ṽl(k))] ≥ α,

yl(k) ≤ ϕj,l(k),  j = q + 1, …, m,  l = 1, …, Nk,

where ϕj,l(k) = ϕj(d, z, θ(k)l,mid) and θ(k)l,mid = (θ1(k)l,mid, …, θp−1(k)l,mid) is the middle point of the region Rl(k). If constraints (2) with numbers j = 1, …, q are hard, then the parameters αj (j = 1, …, q) should be close to 1.


To check the efficiency of the proposed methods, we solved three examples. Problems (72) and (89) were solved with the SQP method. Partial derivatives of the objective function and the constraints were calculated by finite differences.
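For readers who want to reproduce this setup, an SQP-type solver is available in SciPy as the SLSQP method. The following sketch (a toy quadratic program with the known solution (1.4, 1.7), not one of the paper's problems) shows the pattern of bounds plus inequality constraints used throughout:

```python
import numpy as np
from scipy.optimize import minimize

# toy problem: min (x - 1)^2 + (y - 2.5)^2 subject to linear inequalities g(v) >= 0
fun = lambda v: (v[0] - 1.0) ** 2 + (v[1] - 2.5) ** 2
cons = (
    {"type": "ineq", "fun": lambda v: v[0] - 2.0 * v[1] + 2.0},
    {"type": "ineq", "fun": lambda v: -v[0] - 2.0 * v[1] + 6.0},
    {"type": "ineq", "fun": lambda v: -v[0] + 2.0 * v[1] + 2.0},
)
bounds = [(0.0, None), (0.0, None)]

res = minimize(fun, np.array([2.0, 0.0]), method="SLSQP",
               bounds=bounds, constraints=cons)
print(np.round(res.x, 4))  # optimum at (1.4, 1.7)
```

Derivatives can be supplied analytically via the `jac` argument; when omitted, SLSQP approximates them by finite differences, matching the setup described above.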

9. Computational experiment

Example 1. Let us consider the production of species B from A by the reaction A → B in a continuous stirred tank reactor in the flowsheet shown in Figure 1.

Figure 1. Flowsheet of a chemical process with continuous stirred tank reactor and heat exchanger

The mathematical formulation of the optimization problem for this example is given in ref 6. In this problem the design variables are the volume of the reactor V and the heat transfer area A; the control variables are the temperature in the reactor T1 (311 ≤ T1 ≤ 389) and the recycle temperature T2 (311 ≤ T2 ≤ 389); the state variables are the concentration CA1 of reactant A in the product, the flow rate of

the recycle F1, the outlet temperature of the cooling water Tw2, and the flow rate W of the cooling water; θ = (F0, T0, Tw1, k0, U) is the vector of uncertain parameters, where F0 is the feed flow rate, T0 is the feed temperature, Tw1 is the inlet temperature of the cooling water, k0 is the rate


constant, and U is the overall heat transfer coefficient. Nonlinear constraints (2) have the following form:

0.9 − (cA0 − cA1)/cA0 ≤ 0,  Tw2 − Tw1 ≥ 0,  Tw1 − T2 + 11.1 ≤ 0,  T1 − Tw2 ≥ 11.1,  Tw2 − 355 ≤ 0,  301 − Tw2 ≤ 0.     (98)

Linear constraints (2) have the following form:

T2 − T1 ≤ 0,  311 − T2 ≤ 0,  T2 − 389 ≤ 0,  311 − T1 ≤ 0,  T1 − 389 ≤ 0.     (99)

We will consider all nonlinear constraints (98) as chance constraints, except for Tw2 − Tw1 ≥ 0: since the condition Tw2 − Tw1 < 0 contradicts physical sense, this constraint must be treated as a hard constraint. We suppose that during the operation stage there is enough process data to either measure or calculate the values of all the uncertain parameters. Therefore, the control variables can be used to improve the chemical process performance at each time instant (for each value of the uncertain parameters). We suppose that the uncertain parameters θi are independent random parameters having the normal distribution N1(E[θi], σi), where E[θi] = θiN and θiN is the nominal value of the uncertain parameter θi. The vector θN of nominal values is

θN = (45.36; 333; 300; 9.8; 1635).

The values σi are determined from the condition 3.3σi = γ∆θi, where ∆θi is the i-th component of the vector ∆θ = (0.1; 0.02; 0.03; 0.1; 0.1) (ref 58) and γ is a coefficient. The uncertainty region is Tθ = {θi: θiN − γ∆θi ≤ θi ≤ θiN + γ∆θi, i = 1, …, p}, where γ∆θi is the maximal deviation of the parameter θi from its nominal value. With the help of the parameter γ we can change the region Tθ. We chose σi so that the probability measure of the interval θiN − γ∆θi ≤ θi ≤ θiN + γ∆θi is equal to 0.999. The objective function is


f = 691.2V^0.7 + 873.6A^0.6 + 1.76W + 7.056F1.

We solved three problems. In the first problem the uncertain parameters were set equal to their nominal values θN (their expected values); this is the nominal optimization problem, of form (1) with θ = θN. The second problem is the one-stage optimization problem with chance constraints (OSOPCC). The third problem is the two-stage optimization problem with chance constraints (TSOPCC). Tables 1 and 2 give the results of solving the nominal optimization problem, the OSOPCC, and the TSOPCC with the proposed method for the case of independent, normally distributed uncertain parameters.

Table 1. Results of solving OSOPCC

γ      α     f       V     A     CPU-time, s
0      –     9374    5.48  7.88  0.01
1.0    0.5   9937    5.76  7.41  0.3
1.0    0.75  10038   5.85  7.54  0.35
1.0    0.95  10168   5.97  7.84  0.3
1.25   0.5   9986    5.66  7.48  0.3
1.25   0.75  10097   5.91  7.82  0.3
1.25   0.95  10193   6.21  7.85  0.3
1.5    0.5   10014   5.81  7.49  0.3
1.5    0.75  10155   6.15  7.84  0.3
1.5    0.95  10268   6.23  8.02  0.35
2.5    0.5   10093   5.91  7.81  0.3
2.5    0.95  10833   6.89  8.75  0.5

Table 2. Results of solving TSOPCC

γ      α     f       V     A     CPU-time, s
1.0    0.5   9957    5.89  7.0   132
1.0    0.75  9996    5.62  7.37  131
1.0    0.95  10107   5.87  7.72  305
1.25   0.5   9967    5.93  7.0   139
1.25   0.75  10045   6.02  7.35  185
1.25   0.95  10148   6.07  7.75  208
1.5    0.5   9983    5.94  7.02  158
1.5    0.75  10098   6.06  7.34  312
1.5    0.95  10193   6.14  7.65  324
2.5    0.5   10027   6.00  7.36  185
2.5    0.95  10801   6.47  8.65  603

The line γ = 0 corresponds to the nominal optimization problem. Tables 1 and 2 show that in this case the optimal objective values obtained with the TSOPJCC are only insignificantly (approximately 0.2%–0.6%) better than those obtained with the OSOPJCC. Let us compare the proposed approach with the straightforward way of solving problem (9) by some method of nonlinear programming (for example, SQP, ref 55). If problem (9) is solved in the straightforward way, it is necessary to calculate at each


iteration the values of the objective function, the left-hand sides of constraints of problem (9) and the gradients of these functions.

Example 2. In this system (Figure 2), species B is produced from species A as in Example 1. The flowsheet consists of two parts. The reaction part consists of the reactor (1), heat exchanger (2), and compressor (3); it is the same system considered in Example 1. The separation part (ref 16) consists of the splitter (4), flash tank (5), distillation column (6), and mixers (7) and (8).

Figure 2. Reaction-Separation System

The effluent (with flowrate F1) from the reaction part is split by the splitter (4) into streams F2, F3, F8, and F9. At high conversion in the reactor the bypass streams F8 and F9 can be significant, depending on the capital cost of the reactor, flash tank, and distillation column. The system has two product streams with flowrates P1 and P2, in which the specified purities of species B and A are given. The process model of the reaction part is the same as in Example 1. The flow F1 in Figure 1 is designated as F11 in Figure 2. The process model of the separation part is given as follows (ref 16):


Splitter mass balances:

F2A = S1F1A,  F2B = S1(F1 − F1A),
F8A = S2F1A,  F8B = S2(F1 − F1A),
F9A = S3F1A,  F9B = S3(F1 − F1A),
F3A = S4F1A,  F3B = S4(F1 − F1A),
S1 + S2 + S3 + S4 = 1.

Flash mass balances:

F4A = 0.9F2A,  F4B = (1 − kf)F2B,
F5A = 0.1F2A,  F5B = kf F2B.

Distillation column mass balances:

F6A = 0.97F3A,  F6B = (1 − kC)F3B,
F7A = 0.03F3A,  F7B = kC F3B.

Mixer mass balances:

P1A = F4A + F6A + F8A,  P1B = F4B + F6B + F8B,
P2A = F5A + F7A + F9A,  P2B = F5B + F7B + F9B.
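The balances above can be checked for overall mass conservation: whatever enters the separation section in stream F1 must leave in P1 and P2. A sketch with illustrative (hypothetical) stream and split values:

```python
def separation_balances(F1, F1A, S, kf, kC):
    """Separation-part mass balances from the text.
    S = (S1, S2, S3, S4) are the split fractions (must sum to 1)."""
    S1, S2, S3, S4 = S
    F1B = F1 - F1A
    # splitter
    F2A, F2B = S1 * F1A, S1 * F1B
    F8A, F8B = S2 * F1A, S2 * F1B
    F9A, F9B = S3 * F1A, S3 * F1B
    F3A, F3B = S4 * F1A, S4 * F1B
    # flash tank
    F4A, F4B = 0.9 * F2A, (1.0 - kf) * F2B
    F5A, F5B = 0.1 * F2A, kf * F2B
    # distillation column
    F6A, F6B = 0.97 * F3A, (1.0 - kC) * F3B
    F7A, F7B = 0.03 * F3A, kC * F3B
    # mixers
    P1A = F4A + F6A + F8A
    P1B = F4B + F6B + F8B
    P2A = F5A + F7A + F9A
    P2B = F5B + F7B + F9B
    return P1A, P1B, P2A, P2B

# hypothetical values, for illustration only
P1A, P1B, P2A, P2B = separation_balances(
    F1=100.0, F1A=20.0, S=(0.4, 0.1, 0.1, 0.4), kf=0.85, kC=0.91)
# species A and B are each conserved: P1 + P2 recovers F1 component-wise
print(P1A + P2A, P1B + P2B)
```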

Values of the parameters of the mathematical model are given in Table 3.

Table 3. Data for Example 2

ρcp = 167.4 kJ/(m3 K)      k0 = 28.52 1/h
cpw = 4.19 kJ/(kg K)       U = 148.636 kJ/(m2 h K)
F0 = 4.12 m3/h             ∆H = −3500.0 kJ/kmol
CA0 = 8.0 kmol/m3          T0 = 333.0 K
EA/R = 555.6 K             Tw1 = 300.0 K
M = 92.0 kg/kmol           ρL = 883.0 kg/m3
Np = 20                    ρV = 3.0 kg/m3
kc = 0.91                  kf = 0.85

Here FIA, FIB (I = 2, …, 9) are the flowrates (kmol/h) of species A and B, Si (i = 1, …, 4) are the split fractions, Df and Dc are the diameters of the flash tank and the distillation column, and kf and kc are the separation coefficients of the flash drum and the distillation column, respectively. As control variables we use [T1, T2, S1, S2, S3, S4]; the design variables are [V, A, Df, Dc]. Such a selection of the control variables permits evaluation of all the state variables without iterations. Other data for this example are given in Table 3. The reaction part constraints are given by (98), (99); we excluded only the constraint 0.9 − (cA0 − cA1)/cA0 ≤ 0, which is unnecessary in this case. The constraints connected with the separation part are (ref 16):

design constraints

DF^2 ≥ 1.27M(F2A + F2B)/(45υρV),  DC^2 ≥ 1.27M(F3A + F3B)/(35υρV),  υ = 0.064(ρL − ρV)/ρV;

quality constraints

9P1B − P1A ≤ 0,  3P2A − P2B ≤ 0;

size restrictions

1.0 ≤ DF ≤ 5.5,  1.0 ≤ DC ≤ 5.5.

The uncertain parameters for the reaction part are F0 (±10%), T0 (±2%), Tw1 (±3%), k0 (±10%), U (±10%); the uncertain parameters for the separation part are kf (±10%), kc (±10%). Thus, there are 4 design variables, 6 control variables, 7 uncertain parameters, and 9 chance constraints in this optimization problem.


The performance objective function is the profit, which includes the value of the product streams with flowrates P1 and P2 minus the capital and operating costs of the reaction and separation parts:

f = {cost of products} − {capital cost of the flash tank and distillation column} − {operating cost of the separation part} − {capital costs of the reactor and heat exchanger} − {operating costs of the reaction part, i.e., cost of the cooling water and pumping}.

This is given by

f = {25(P1A + P1B) + 75(P2A + P2B)} − {10Df^2 + (10 + 0.3Np)Dc^2} − {0.2(F2A + F2B) + 2.0(F3A + F3B)} − {250.0V^0.7 + 87.36A^0.6} − {1.76W1 + 7.056F11},

where Np is the number of trays in the distillation column. We suppose that at the operation stage there is enough process data to correct all the uncertain parameters. Table 4 gives the results of solving the nominal optimization problem, the OSOPCC, and the TSOPCC by the proposed method.

Table 4. Results of solving the nominal optimization problem, OSOPCC, and TSOPCC for Example 2

          γ     α     f      V, m3   A, m2   Df, m    Dc, m   CPU, s
Nominal   –     –     10060  1.928   5.551   19.509   1       0.01
OSOPCC    1     0.5   9818   1.8869  5.333   19.671   1       1.03
          1     0.75  9793   1.921   5.306   19.622   1       0.98
          1     0.95  9762   1.9603  5.295   19.543   1       0.9
          1.25  0.5   9818   1.8869  5.333   19.671   1       0.85
          1.25  0.75  9781   1.942   5.299   19.563   1       0.8
          1.25  0.95  9739   1.961   5.289   19.526   1       1.01
          1.5   0.5   9817   1.8871  5.333   19.670   1       0.81
          1.5   0.75  9779   1.942   5.29    19.560   1       0.8
          1.5   0.95  9721   1.9703  5.281   19.516   1       1.02
          2.5   0.5   9817   1.8871  5.333   19.670   1       0.8
          2.5   0.75  9751   1.9571  5.291   19.534   1       0.85
          2.5   0.95  9602   2.119   5.022   19.061   2.617   1.03
TSOPCC    1     0.5   9851   1.826   5.388   19.652   1       153
          1     0.75  9812   1.8906  5.329   19.631   1       261
          1     0.95  9806   1.906   5.324   19.623   1       500
          1.25  0.5   9851   1.826   5.388   19.652   1       150
          1.25  0.75  9805   1.9065  5.322   19.622   1       295
          1.25  0.95  9791   1.931   5.318   19.592   1       610
          1.5   0.5   9850   1.826   5.388   19.650   1       158
          1.5   0.75  9798   1.933   5.320   19.602   1       482
          1.5   0.95  9782   1.939   5.310   19.559   1.01    621
          2.5   0.5   9850   1.826   5.388   19.650   1       155
          2.5   0.75  9775   1.951   5.30    19.558   1.01    618
          2.5   0.95  9652   2.104   5.0846  19.101   2.546   664

Example 3. Consider the design problem for a chain of two continuous stirred tank reactors (Figure 4). A detailed description of this problem is given in refs 59 and 60. In both the first and the second reactor the consecutive reactions A → B → C take place, with rate constants k1, k3 in the first reactor and k2, k4 in the second.

Figure 4. Flowsheet of a chemical process with two continuous stirred tank reactors

Component balances for the reactors:

CA1 + k1CA1V1 = 1;   CB1 + CA1 + k3CB1V1 = 1;
CA2 − CA1 + k2CA2V2 = 0;   CB2 − CB1 + CA2 − CA1 + k4CB2V2 = 0;     (100)
k1 = k10 e^(−E1/(RT1)),  k2 = k10 e^(−E1/(RT2)),  k3 = k20 e^(−E2/(RT1)),  k4 = k20 e^(−E2/(RT2)),

where CAi, CBi are the concentrations of components A and B, Vi and Ti are the volume and temperature of reactor i, ∆Hi (i = 1, 2) are the reaction enthalpies, and U1, U2 are the utility supplies of the two reactors used to decrease the reactor temperatures.
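Because the balances (100) are triangular, they can be solved sequentially for the concentrations. A sketch at the nominal kinetic parameters of Table 5; the gas constant, volumes, and temperatures below are illustrative assumptions, not values from the paper:

```python
from math import exp

# nominal kinetic parameters (Table 5); R, V and T are assumed for illustration
E1, E2 = 6665.948, 7985.248
k10, k20 = 0.715, 0.182
R = 8.314              # assumed gas constant, units taken consistent with E1, E2
V1, V2 = 5.0, 5.0      # assumed reactor volumes
T1, T2 = 320.0, 340.0  # assumed reactor temperatures

k1 = k10 * exp(-E1 / (R * T1))
k2 = k10 * exp(-E1 / (R * T2))
k3 = k20 * exp(-E2 / (R * T1))
k4 = k20 * exp(-E2 / (R * T2))

# sequential solution of the component balances (100)
CA1 = 1.0 / (1.0 + k1 * V1)
CB1 = (1.0 - CA1) / (1.0 + k3 * V1)
CA2 = CA1 / (1.0 + k2 * V2)
CB2 = (CB1 + CA1 - CA2) / (1.0 + k4 * V2)

# residuals of (100) should vanish
r1 = CA1 + k1 * CA1 * V1 - 1.0
r2 = CB1 + CA1 + k3 * CB1 * V1 - 1.0
r3 = CA2 - CA1 + k2 * CA2 * V2
r4 = CB2 - CB1 + CA2 - CA1 + k4 * CB2 * V2
print(max(abs(r) for r in (r1, r2, r3, r4)))  # ~0, machine precision
```

This sequential structure is what makes the state variables computable without iteration once the decision variables are fixed.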

Component B (CB), the intermediate product, is the desired product. For this example the reaction enthalpies and the kinetic parameters θ = (E1, E2, k10, k20) are considered to be uncertain. We suppose that the uncertain parameters are correlated random variables with normal distribution.

Table 5. Information about uncertain parameters

Uncertain parameter   Expected value   Standard deviation
E1                    6665.948         200
E2                    7985.248         240
k10                   0.715            0.0215
k20                   0.182            0.0055

Table 6. Correlation matrix

        E1    E2    k10   k20
E1      1     0.5   0.3   0.2
E2      0.5   1     0.5   0.1
k10     0.3   0.5   1     0.3
k20     0.2   0.1   0.3   1

It is assumed that the feed flow into the first reactor is the product stream of an upstream plant and is supplied to the network with a given composition and temperature. The aim of the optimization is to minimize the total costs, which include the material costs of the reactors, depending on the volumes Vi (i = 1, 2), and the operational costs Ui (i = 1, 2), minus a term for the total amount of the desired product (PB = V2CB2), which is assumed to be profitable (ref 60). The design parameters, i.e., the volumes Vi (i = 1, 2) of both reactors, and the operational parameters, i.e., the temperatures Ti (i = 1, 2), are used as free decision variables. The objective function of the optimization problem can be written as follows, where ci are specific price factors:

f(V1, V2, T1, T2) = c1V1 + c2V2 − c3PB + c4U1 + c5U2.

We suppose that Ui (i = 1, 2) are the utility supplies of the two reactors used to decrease the reactor temperatures. Thus, we formulate the operating costs in the following form:

U1 = ∆H1V1·(k1CA1V1) + ∆H2V1·(k3CB1V1) − cp,1ρ1V1(T1 − T0),     (101)

U2 = ∆H1V2·(k2CA2V2) + ∆H2V2·(k4CB2V2) − cp,2ρ2V2(T2 − T1),     (102)

where ∆Hi (i = 1, 2) are the reaction enthalpies, and cp,i, ρi (i = 1, 2) are the mixture heat capacity and density in the two reactors, respectively. The utility costs are proportional to the current temperature deviation.

Additionally, there is a lower bound CBSP on the product concentration and upper bounds U1SP, U2SP on the utility supply of both reactors (U1, U2):

CB2 ≥ CBSP,  0 ≤ U1 ≤ U1SP,  0 ≤ U2 ≤ U2SP.     (103)

Table 7. Other data for Example 3

Parameter   Value
cp,1        3.2
cp,2        3.2
ρ1          1.18
ρ2          1.18
∆H1         21.2
∆H2         63.6
c1          5.0
c2          5.0
c3          15.6
c4          0.1
c5          0.1
U1SP        100.0
U2SP        100.0

However, since all constraints are affected by the uncertain parameters, they should be reformulated as chance constraints. The uncertain parameters also affect the objective function. In this example the problem is therefore reformulated as minimizing an upper bound b of the objective function, whose satisfaction is guaranteed with a certain reliability by an additional chance constraint (ref 60). The reformulation is written as follows, where α is the probability level:

min_{b, T1, T2, V1, V2} b     (104)

Pr{ f(V1, V2, T1, T2) ≤ b,  CB2 ≥ CBSP,  0 ≤ U1 ≤ U1SP,  0 ≤ U2 ≤ U2SP } ≥ α.

We suppose that at the operation stage there is enough process data to correct all the uncertain parameters.


Due to correlation of the uncertain parameters θ = ( E1 , E2 , k10 , k20 ), we use proposed approach54, that based on the transformation of a set of correlated, normally distributed uncertain parameters to a set of statistically independent, normally distributed uncertain parameters. In this case we can write54 the following system of equations, where θ1 = E1 , θ 2 = E 2 , θ 3 = k10 ,

θ 4 = k 20 , 4

θ i = E[θ i ] + ∑ cikη k , i = 1,...,4 ,

(105)

k =1

here ηi , i = 1,...,4 , are statistically independent random parameters having invariant standard distribution N 1 (0;1) and the matrix C = ( cij ) satisfied the following condition

C C^T = Λ,

where Λ is the covariance matrix

      ( σ1²        ρ12 σ1σ2   ρ13 σ1σ3   ρ14 σ1σ4 )
  Λ = ( ρ21 σ2σ1   σ2²        ρ23 σ2σ3   ρ24 σ2σ4 )
      ( ρ31 σ3σ1   ρ32 σ3σ2   σ3²        ρ34 σ3σ4 )
      ( ρ41 σ4σ1   ρ42 σ4σ2   ρ43 σ4σ3   σ4²      ),

where σ_i² is the variance of the parameter θ_i and ρ_ij is the correlation coefficient of θ_i and θ_j. We have previously shown54 that the matrix C can be determined by solving the following optimization problem

min_{c_ij} Σ_{i=1}^{4} Σ_{j=1}^{4} ( Σ_{k=1}^{4} c_ik c_jk − λ_ij )² ,    (106)

where λ_ij are the elements of the matrix Λ. Taking into account (105) and the data from Table 5, we can rewrite equations (100) in the form

C_A1 + k1 C_A1 V1 = 1;
C_B1 + C_A1 + k3 C_B1 V1 = 1;
C_A2 − C_A1 + k2 C_A2 V2 = 0;
C_B2 − C_B1 + C_A2 − C_A1 + k4 C_B2 V2 = 0;
k1 = k10 e^(−E1/RT1),  k2 = k10 e^(−E1/RT2),  k3 = k20 e^(−E2/RT1),  k4 = k20 e^(−E2/RT2),

where

k10 = 0.715 + Σ_{k=1}^{4} c_3k η_k,  k20 = 0.812 + Σ_{k=1}^{4} c_4k η_k,
E1 = 6665.948 + Σ_{k=1}^{4} c_1k η_k,  E2 = 7985.248 + Σ_{k=1}^{4} c_2k η_k.
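One matrix C satisfying C C^T = Λ is the lower-triangular Cholesky factor, at which the quadratic objective of problem (106) vanishes. A minimal numpy sketch of the transformation (105); the standard deviations and correlations below are illustrative assumptions, since the paper's Table 5 values are not reproduced here:

```python
import numpy as np

# Assumed (illustrative) dispersion data for theta = (E1, E2, k10, k20)
sigma = np.array([66.6, 79.8, 0.07, 0.08])
rho = np.array([[1.0, 0.3, 0.2, 0.1],
                [0.3, 1.0, 0.2, 0.1],
                [0.2, 0.2, 1.0, 0.3],
                [0.1, 0.1, 0.3, 1.0]])
Lam = rho * np.outer(sigma, sigma)   # covariance matrix Lambda

# Cholesky factor: C @ C.T reproduces Lambda exactly
C = np.linalg.cholesky(Lam)

# Transformation (105): theta = E[theta] + C @ eta, eta ~ N(0, I) independent
mean = np.array([6665.948, 7985.248, 0.715, 0.812])
eta = np.random.default_rng(0).standard_normal((4, 100_000))
theta = mean[:, None] + C @ eta      # columns are correlated draws of theta
```

The sample covariance of the generated θ draws recovers Λ up to Monte Carlo error, which is how the factorization can be sanity-checked in practice.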

Thus, the operation costs (101), (102) can be reformulated as

U1 = ΔH1 V1 (k1 C_A1 V1) + ΔH2 V1 (k3 C_B1 V1) − c_p,1 ρ1 V1 (T1 − T0),
U2 = ΔH1 V2 (k2 C_A2 V2) + ΔH2 V2 (k4 C_B2 V2) − c_p,2 ρ2 V2 (T2 − T1),

and the objective function as

f(V1, V2, T1, T2) = c1 V1 + c2 V2 − c3 P_B + c4 U1 + c5 U2.

Taking into account problem (106), we can write problem (104) in the following form

min_{b, T1, T2, V1, V2, c_ij} b    (107)

s.t.  Pr{ f(V1, V2, T1, T2) ≤ b,  C_B2 ≥ C_B^SP,  0 ≤ U1 ≤ U1^SP,  0 ≤ U2 ≤ U2^SP } ≥ α.

Problem (107) is an optimization problem with statistically independent parameters η_i having the standard normal distribution N(0, 1), so the proposed method can be used to solve it. Note that the decision variables of problem (107) are c_ij, i = 1, ..., 4, j = 1, ..., 4, as well as b, T1, T2, V1, V2. Table 8 gives the results of solving the nominal optimization problem, the OSOPCC, and the TSOPCC by the proposed method.

Table 8. Results of solving nominal optimization, OSOPCC and TSOPCC for Example 3

  Problem   C_B^SP   α      f         V1, m³   V2, m³   CPU, sec
  Nominal   –        –      -28.199   2.762    7.323    0.01
  OSOPCC    0.5      0.9    -19.498   6.438    7.953    1.48
            0.5      0.95   -17.511   6.134    7.675    1.84
            0.52     0.9    -20.89    6.438    7.895    1.49
            0.52     0.95   -18.763   6.133    7.606    1.4
  TSOPCC    0.5      0.9    -19.995   6.431    7.953    86
            0.5      0.95   -17.973   6.133    7.671    115
            0.52     0.9    -21.437   6.432    7.891    98
            0.52     0.95   -19.192   6.131    7.542    132

Table 8 shows that the two-stage optimization strategy improves the optimal value of the objective function (by approximately 0.3%) in comparison with the OSOPCC. One can also see that the optimal values of the TSOPCC design variables are only slightly smaller than those of the OSOPCC. This can be explained by the higher reactor temperatures at the TSOPCC optimal solution, which allow a larger amount of the desired product to be obtained.
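For orientation, the deterministic core of this example — the steady-state balances (100) — can be evaluated at the nominal parameter values. Once the rate constants are fixed, the balances are linear in the concentrations and can be solved sequentially; the sketch below does so. The value of the gas constant R and the reactor temperatures are assumptions for illustration (the paper's units are not restated here), while the volumes are taken from the nominal row of Table 8.

```python
import math

R_GAS = 1.987  # assumed gas constant, cal/(mol K)

def steady_state(V1, V2, T1, T2,
                 k10=0.715, k20=0.812, E1=6665.948, E2=7985.248):
    """Solve the steady-state balances (100) sequentially.

    The equations are linear in the concentrations for fixed rate
    constants, so each concentration follows in closed form."""
    k1 = k10 * math.exp(-E1 / (R_GAS * T1))
    k2 = k10 * math.exp(-E1 / (R_GAS * T2))
    k3 = k20 * math.exp(-E2 / (R_GAS * T1))
    k4 = k20 * math.exp(-E2 / (R_GAS * T2))
    CA1 = 1.0 / (1.0 + k1 * V1)                       # C_A1 + k1 C_A1 V1 = 1
    CB1 = (1.0 - CA1) / (1.0 + k3 * V1)               # C_B1 + C_A1 + k3 C_B1 V1 = 1
    CA2 = CA1 / (1.0 + k2 * V2)                       # C_A2 - C_A1 + k2 C_A2 V2 = 0
    CB2 = (CB1 + CA1 - CA2) / (1.0 + k4 * V2)         # last balance of (100)
    return CA1, CB1, CA2, CB2

# Nominal design volumes from Table 8; temperatures are illustrative.
CA1, CB1, CA2, CB2 = steady_state(V1=2.762, V2=7.323, T1=350.0, T2=350.0)
```

Embedding this solve inside a sampling loop over θ gives the constraint evaluations needed by the chance-constrained formulations above.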

10. Conclusion

We proposed an approach for solving optimization problems with joint chance constraints for the case when the uncertain parameters are independent random variables with arbitrary probability distributions. It is based on the following three operations:

1. Approximate transformation of the joint chance constraints into deterministic constraints. We devised a new technique for this transformation, based on approximating the region Ω = {θ : g_j(d, z(θ), θ) ≤ 0, j = 1, ..., m, θ ∈ T_θ} by a union of multidimensional rectangles.

2. Approximate calculation of the expected value of the objective function with the help of a piecewise linear approximation of the objective function.

3. Approximate representation of the control variables z(θ) as piecewise constant functions.

The method was developed for the case in which the uncertain parameters are independent random variables. However, we showed previously54 that an optimization problem with chance constraints and correlated, normally distributed random variables can be transformed into an optimization problem with chance constraints and uncorrelated, normally distributed random variables. After this transformation, the proposed method can be applied. We illustrated this with Example 3.
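Operation 1 rests on the fact that, for independent uncertain parameters, the probability mass of a multidimensional rectangle factorizes into one-dimensional CDF differences, so disjoint rectangles inside Ω yield a lower bound on the joint probability. A minimal sketch for standard-normal parameters (the rectangle bounds are illustrative assumptions):

```python
from math import erf, sqrt

def phi(x):
    """Standard normal CDF."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def prob_lower_bound(rectangles):
    """Lower bound on Pr{theta in Omega} from disjoint rectangles inside Omega.

    For independent standard-normal theta_i, the mass of one rectangle is a
    product of CDF differences, and the union's mass is the sum over rectangles.

    rectangles -- list of [(lo_1, up_1), ..., (lo_p, up_p)] bound pairs
    """
    total = 0.0
    for rect in rectangles:
        mass = 1.0
        for lo, up in rect:
            mass *= phi(up) - phi(lo)
        total += mass
    return total

# Two disjoint 2-D rectangles whose union is [-1, 1] x [-1, 1]:
rects = [[(-1.0, 0.0), (-1.0, 1.0)],
         [(0.0, 1.0), (-1.0, 1.0)]]
p = prob_lower_bound(rects)
```

For this toy union the bound is exact and equals (Φ(1) − Φ(−1))², about 0.466; in the method proper the rectangles only cover part of Ω, so the sum is a conservative lower bound on the chance constraint.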

Notation

d – vector of design variables,
z – vector of control variables,
d^(k), z^(k) – optimal values of the design and control variable vectors obtained at the k-th iteration (the solution of problem (59)),
θ – p-vector of uncertain parameters,
T_θ – uncertainty region,
E[f(d, z(θ), θ)] – expected value of the function f(d, z(θ), θ),
R_l^(k) – l-th subregion (multidimensional rectangle) in the space of the variables θ,
N_k – number of regions R_l^(k) at the k-th iteration,
θ_il^(k)L – lower bound of the variable θ_i in the region R_l^(k) at the k-th iteration,
θ_il^(k)U – upper bound of the variable θ_i in the region R_l^(k) at the k-th iteration,
z̃(θ) – approximation of the multivariate function z(θ),


E_ap^(k)[f(d, z̃(θ), θ)] – approximation of the expected value of the function f(d, z(θ), θ) at the k-th iteration,
z_l – vector of the control variables corresponding to the subregion R_l^(k),
f̃(d, z̃(θ), θ) – piecewise linear approximation of the function f(d, z̃(θ), θ),
θ_r – linearization point in the subregion R_r^(k),
f̃(d, z_r, θ, θ_r) – linear part of the Taylor expansion of the function f(d, z_r, θ) at the point θ_r,
μ_q – measure of the quality of the approximation of the objective function on the region R_q^(k).

References

(1) Birge, J.R.; Louveaux, F. Introduction to Stochastic Programming, 2nd ed.; Springer Science+Business Media: New York, 2011.
(2) Ben-Tal, A.; El Ghaoui, L.; Nemirovski, A. Robust Optimization; Princeton Series in Applied Mathematics; Princeton, NJ, 2009.

(3) Powell, W.B. Approximate Dynamic Programming: Solving the Curses of Dimensionality, 2nd ed.; Wiley Series in Probability and Statistics; John Wiley & Sons: Hoboken, NJ, 2011.
(4) Calfa, B.A.; Grossmann, I.E.; Agarwal, A.; Bury, S.J.; Wassick, J.M. Data-Driven Individual and Joint Chance-Constrained Optimization via Kernel Smoothing. Comp. Chem. Eng. 2015, 78, 51.
(5) Halemane, K.P.; Grossmann, I.E. Optimal process design under uncertainty. AIChE J.

1983, 29, 425.


(6) Grossmann, I.E.; Floudas, C.A. Active constraint strategy for flexibility analysis in chemical processes. Comp. Chem. Eng. 1987, 11, 675.
(7) Ostrovsky, G.M.; Volin, Yu.M.; Senyavin, M.M. An approach to solving a two-stage optimization problem under uncertainty. Comp. Chem. Eng. 1997, 21, 311.
(8) Rooney, W.C.; Biegler, L.T. Optimal process design with model parameter uncertainty and process variability. AIChE J. 2003, 49, 438.
(9) Bernardo, F.P.; Saraiva, P.M. Robust optimization framework for process parameter and tolerance design. AIChE J. 1998, 44, 2007.
(10) Diwekar, U.M.; Kalagnanam, J.R. An efficient sampling technique for optimization under uncertainty. AIChE J. 1997, 43, 440.
(11) Charnes, A.; Cooper, W.W.; Symonds, G.H. Cost horizons and certainty equivalents: an approach to stochastic programming of heating oil. Manag. Sci. 1958, 4, 235.
(12) Prékopa, A. Stochastic Programming; Springer, 1995.
(13) Miller, B.L.; Wagner, H.M. Chance-constrained programming with joint constraints. Oper. Res. 1965, 13(6), 930.

(14) Jagannathan, R. Chance-constrained programming with joint constraints. Oper. Res. 1974, 22(2), 358.

(15) Li, Zhuangzhi; Li, Zukui. Optimal robust optimization approximation for chance constrained optimization problem. Comp. Chem. Eng. 2015, 74, 89.


(16) Acevedo, J.; Pistikopoulos, E.N. Stochastic optimization based algorithms for process synthesis under uncertainty. Comp. Chem. Eng. 1998, 22, 647.
(17) Bernardo, F.P.; Pistikopoulos, E.N.; Saraiva, P.M. Integration and computational issues in stochastic design and planning optimization problems. Ind. Eng. Chem. Res. 1999a, 38, 3056.
(18) Klöppel, M.; Geletu, A.; Hoffmann, A.; Li, P. Using sparse-grid methods to improve computation efficiency in solving dynamic nonlinear chance-constrained optimization problems. Ind. Eng. Chem. Res. 2011, 50, 5693.
(19) Pintaric, N.; Kravanja, Z. A strategy for MINLP synthesis of flexible and operable processes. Comp. Chem. Eng. 2004, 28, 1087.
(20) Wei, J.; Realff, M.J. Sample average approximation method for stochastic MINLPs. Comp. Chem. Eng. 2004, 28, 319.

(21) Bernardo, F.P. Performance of cubature formulae in probabilistic model analysis and optimization. J. Comp. Appl. Math. 2015, 280, 110.
(22) Calafiore, G.C.; Campi, M.C. The scenario approach to robust control design. IEEE Trans. Autom. Control 2006, 51, 742.

(23) Nemirovski, A.; Shapiro, A. Scenario approximations of chance constraints. In Probabilistic and Randomized Methods for Design under Uncertainty; Calafiore, G., Dabbene, F., Eds.; Springer-Verlag: London, 2006b, 3.
(24) Pagnoncelli, B.; Ahmed, S.; Shapiro, A. Sample average approximation method for chance constrained programming: theory and applications. J. Optim. Theory Appl. 2009, 142, 399.


(25) Erdoğan, E.; Iyengar, G. On two-stage convex chance constrained problems. Math. Methods Oper. Res. 2007, 65, 115.

(26) Luedtke, J.; Ahmed, S. A sample approximation approach for optimization with probabilistic constraints. SIAM J. Optim. 2008, 19, 674.
(27) Xu, H.; Caramanis, C.; Mannor, S. Optimization under probabilistic envelope constraints. Oper. Res. 2012, 60(3), 682.

(28) Ben-Tal, A.; Nemirovski, A. Robust solutions of uncertain linear programs. Oper. Res. Lett. 1999, 25, 1.
(29) Ben-Tal, A.; Nemirovski, A. Robust solutions of linear programming problems contaminated with uncertain data. Math. Program. 2000, 88, 411.
(30) Bertsimas, D.; Sim, M. The price of robustness. Oper. Res. 2004, 52, 35.
(31) Nemirovski, A.; Shapiro, A. Convex approximations of chance constrained programs. SIAM J. Optim. 2006a, 17, 969.

(32) Hong, L.J.; Yang, Y.; Zhang, L. Sequential convex approximations to joint chance constrained programs: a Monte Carlo approach. Oper. Res. 2011, 59, 617.
(33) Bertsimas, D.; Gupta, V.; Kallus, N. Data-driven robust optimization. Available on Optimization Online, 2013.
(34) Chen, X.; Sim, M.; Sun, P. A Robust Optimization Perspective on Stochastic Programming. Oper. Res. 2007, 55, 1058.


(35) Chen, W.; Sim, M.; Sun, J.; Teo, C.P. From CVaR to uncertainty set: Implications in joint chance-constrained optimization. Oper. Res. 2010, 58(2), 470. (36) Li, Z.; Ding, R.; Floudas, C.A. A Comparative Theoretical and Computational Study on Robust Counterpart Optimization: I. Robust Linear and Robust Mixed Integer Linear Optimization. Ind. Eng. Chem. Res. 2011, 50, 10567. (37) Li, Z.; Tang, Q.; Floudas, C.A. A Comparative Theoretical and Computational Study on Robust Counterpart Optimization: II. Probabilistic Guarantees on Constraint Satisfaction. Ind. Eng. Chem. Res. 2012, 51, 6769.

(38) Li, Z.; Floudas, C.A. A Comparative Theoretical and Computational Study on Robust Counterpart Optimization: III. Improving the Quality of Robust Solutions. Ind. Eng. Chem. Res. 2014, 53, 13112.
(39) Erdogan, E.; Iyengar, G. Ambiguous chance constrained problems and robust optimization. Math. Program. B 2006, 107(1-2), 37.
(40) Hu, Z.; Hong, L.J.; So, A.M.-C. Ambiguous probabilistic programs. Available on Optimization Online, 2013.
(41) Kall, P.; Wallace, S.W. Stochastic Programming; John Wiley & Sons: New York, 1994.
(42) Maranas, C.D. Optimal molecular design under property prediction uncertainty. AIChE J. 1997, 43, 1250.
(43) Wendt, M.; Li, P.; Wozny, G. Nonlinear chance-constrained process optimization under uncertainty. Ind. Eng. Chem. Res. 2002, 41, 3621.


(44) Li, P.; Arellano-Garcia, H.; Wozny, G. Chance constrained programming approach to process optimization under uncertainty. Comp. Chem. Eng. 2008, 32, 25.
(45) Flemming, T.; Bartl, M.; Li, P. Set-point optimization for closed-loop control systems under uncertainty. Ind. Eng. Chem. Res. 2007, 46, 4930.
(46) Geletu, A.; Hoffmann, A.; Klöppel, M.; Li, P. Monotony analysis and sparse-grid integration for nonlinear chance constrained process optimization. Eng. Optim. 2011, 43, 1019.
(47) Geletu, A.; Klöppel, M.; Zhang, H.; Li, P. Advances and applications of chance-constrained approaches to systems optimization under uncertainty. Int. J. Syst. Sci. 2013, 44, 1209.
(48) Ostrovsky, G.M.; Ziyatdinov, N.N.; Lapteva, T.V. One-stage optimization problem with chance constraints. Chem. Eng. Sci. 2010, 65, 2373.
(49) Ierapetritou, M.G.; Pistikopoulos, E.N. Global optimization for stochastic planning, scheduling and design problems. In Grossmann, I.E., Ed., Global Optimization in Engineering Design; Kluwer Academic Publishers: Boston, 1996, 231.
(50) Bernardo, F.P.; Pistikopoulos, E.N.; Saraiva, P.M. Robustness criteria in process design optimization under uncertainty. Comp. Chem. Eng. 1999b, 459.
(51) Bernardo, F.P.; Saraiva, P.M. Quality costs and robustness criteria in chemical process design optimization. Comp. Chem. Eng. 2001, 25, 27.
(52) Ostrovsky, G.M.; Ziyatdinov, N.N.; Lapteva, T.V.; Zaitsev, I.V. Two-stage optimization problem with chance constraints. Chem. Eng. Sci. 2011, 66, 3815.


(53) Ostrovsky, G.M.; Ziyatdinov, N.N.; Lapteva, T.V. Optimal design of chemical processes with chance constraints. Comp. Chem. Eng. 2013, 59, 74.
(54) Ostrovsky, G.M.; Ziyatdinov, N.N.; Lapteva, T.V.; Zaitsev, I.V. Optimization of chemical processes with dependent uncertain parameters. Chem. Eng. Sci. 2012, 83, 119.
(55) Biegler, L.T. Nonlinear Programming: Concepts, Algorithms, and Applications to Chemical Processes; SIAM, 2010.
(56) Cramér, H. Mathematical Methods of Statistics; Princeton University Press, 1946.
(57) Abramowitz, M.; Stegun, I.A. Handbook of Mathematical Functions with Formulas, Graphs, and Mathematical Tables; Applied Mathematics Series 55, 10th ed.; United States Department of Commerce, National Bureau of Standards, 1964.
(58) Swaney, R.E.; Grossmann, I.E. An index for operational flexibility in chemical process design. AIChE J. 1985, 31, 621.
(59) Ryoo, H.S.; Sahinidis, N.V. Global optimization of nonconvex NLPs and MINLPs with applications in process design. Comp. Chem. Eng. 1995, 19, 551.
(60) Arellano-Garcia, H.; Martini, W.; Wendt, M.; Wozny, G. A new optimization framework for dynamic systems under uncertainty. Comp. Aided Chem. Eng. 2004, 18, 553.
