Ind. Eng. Chem. Res. 2003, 42, 5883-5890


An Efficient Algorithm for Convex Multiparametric Nonlinear Programming Problems

Joaquin Acevedo* and Martha Salgueiro

Department of Chemical Engineering, ITESM Monterrey, Avenida E. Garza Sada 2501, Monterrey, N.L., Mexico 64849

* To whom correspondence should be addressed. Tel.: (52) 81 8158 2034. Fax: (52) 81 8320 4250. E-mail: jacevedo@itesm.mx.

An efficient algorithm is proposed for the solution of multiparametric convex nonlinear problems (NLPs). Based on an outer-approximation algorithm, the proposed iterative procedure involves the solution of deterministic NLP subproblems and master multiparametric linear problems, with which an ε-approximate parametric solution profile can be defined. The procedure is guided by several heuristics that significantly reduce the number of primal subproblems solved and the complexity of the master problems. The applicability of the procedure is demonstrated through different variations of problems taken from the open literature, which serve to explain the algorithm in detail and compare its performance with those of previous approaches.

Introduction

Chemical processes have to deal with uncertainties arising from either internal variations (such as flows, catalyst decay, equipment efficiencies and failures, etc.) or external conditions (such as product requirements and demands, raw material availability, etc.). The incorporation of these conditions into a process mathematical model has been widely discussed in the open literature for many years,1-3 involving in general terms an optimization problem of the following form:

min_x f(x,θ)
s.t.  h(x,θ) = 0
      g(x,θ) ≤ 0
      x ∈ X; θ ∈ Ξ                                   (1)

where x is the vector of variables defining the design (physical parameters such as volume) and operating values (e.g., temperature, concentrations, etc.), usually constrained to a closed set of values X. The vector θ represents the uncertain parameters, varying in a given space Ξ. Two main directions have been followed for the solution of problem (1): deterministic (multiperiod or scenario) and stochastic (chance-constraint, multistage) algorithms. In deterministic models,4-7 the uncertain parameters are discretized into fixed values that apply to the process in different periods or scenarios. The optimal solution is then selected so as to minimize an average of the objective value, weighted by the duration or the probability of the given period or scenario. In a stochastic optimization framework, a probability distribution function for the uncertain parameters is used explicitly in the model. In this approach, two-stage stochastic formulations have been favored to optimize an expected value of the objective function.8-13 In both cases, however, the computational requirements for the solution of such models increase dramatically with the number of uncertain parameters.

Recently, other approaches such as simulated annealing and fuzzy logic have also been proposed to address process optimization problems under uncertainty.3,14 Another approach used to deal with uncertainty in mathematical programming problems is the formulation of a parametric programming problem.15-18 The main advantage of this approach is that the computational requirements do not increase as fast as those of other approaches as the number of uncertain parameters increases. The solution of the parametric problem gives explicit functions of the objective value with respect to the uncertain parameters, which also makes it possible to solve two-stage stochastic formulations very efficiently, even when the number of uncertain parameters is increased.19,20 Other applications of the approach have dealt with multiobjective optimization,21 minimization of the environmental impact of chemical processes,22 and model predictive control.23 It has also been shown that parametric programming allows the incorporation of continuous and discrete uncertainties without changes in the formulation or solution procedure.22 Several solution algorithms have been presented for linear24 and mixed-integer linear formulations, with one17,25,26 or several parameters varying independently.16 For nonlinear models, however, the reported developments are rather scarce. While most of the theoretical work was reported by Fiacco,27,28 few algorithms have been presented for the solution of multiparametric nonlinear or mixed-integer nonlinear problems, mainly for right-hand-side (RHS) parametrizations.15,18,23,28 In this work, a new algorithm for the solution of RHS multiparametric convex nonlinear problems is proposed. The algorithm, based on the procedure proposed by Dua and Pistikopoulos,18 iterates between deterministic NLP subproblems and master multiparametric linear problems. A series of heuristics is implemented that significantly decreases the number of NLP subproblems that must be solved and reduces the complexity of the master problem, resulting in solution times that are orders of magnitude below those of previous algorithms. The paper is organized as follows. First, an outline of the general methodology is presented, and the development of the heuristics that guide the optimization is justified.

The algorithm is then explained in detail, highlighting theoretical and practical issues of its application. Finally, three examples are presented to illustrate the basic steps of the algorithm, comparing its performance with those of previous approaches.

Outline of the Proposed Procedure

The right-hand-side multiparametric nonlinear programming (mpNLP) problem considered in this work is of the following form:

z(θ) = min_x f(x)
s.t.  Ax = b
      g(x) ≤ c + Fθ
      θmin ≤ θ ≤ θmax
      x ∈ X; θ ∈ Ξ                                   (2)

where the vectors b and c and the matrix F are constant terms, and θ represents the vector of uncertain parameters. The matrix F allows one, without loss of generality, to take the lower limit of θ as θmin = 0 and its upper limit as θmax = 1. The nonlinear functions f(x) and g(x) are assumed convex, and the equality constraints are formulated here as linear to guarantee the convexity of the model.

Recently, Dua and Pistikopoulos18 presented an algorithm for the solution of multiparametric nonlinear (mpNLP) and mixed-integer nonlinear (mpMINLP) problems. In both cases, the procedure requires the solution of mpNLPs, for which an outer-approximation algorithm is defined. The basic idea of the algorithm is to alternate between a set of deterministic NLP primal subproblems and a multiparametric linear master problem, so as to establish an iterative procedure. The deterministic NLP subproblems are defined by fixing the values of θ, transforming problem (2) into the following:

z(θi) = min_x f(x)
s.t.  Ax = b
      g(x) ≤ c + Fθi
      x ∈ X                                          (3)

where θi are the fixed values of the uncertain parameters. A multiparametric linear master problem can then be generated by constructing linear approximations of the nonlinear functions in (2). These linearizations can be defined at the solution points of (3), x*, leading to the following cumulative multiparametric linear (mpLP) master problem (linearizations of nonlinear equalities can also be included following the formulation of Kocis and Grossmann29):

ẑ(θ) = min_{x,µ} µ
s.t.  Ax = b
      g(x*) + ∇g(x*)(x - x*) ≤ c + Fθ
      f(x*) + ∇f(x*)(x - x*) - µ ≤ 0      for all x* ∈ X*i,k, i = 1, ..., Ik; k = 1, ..., K
      x ∈ X; θ ∈ Ξ                                   (4)

where ∇f, the gradient of the objective function, and ∇g, the Jacobian of the constraints, are evaluated at x*, and µ is a scalar variable. The subindex i = 1, ..., Ik denotes the different θ values used at iteration k to define the primal subproblems, and X*i,k represents the solution points found for every fixed value of θ considered at a given iteration k. The solution of problem (4) can be obtained by following the algorithm of Gal;24 a detailed explanation of the algorithm is also given in Acevedo and Pistikopoulos.16 The algorithm yields a series of linear optimal solution functions in terms of θ and the space in Ξ where each of these functions is optimal; these spaces are called critical regions. The critical regions are also defined through the solution procedure by sets of linear constraints.

Figure 1. Error of the parametric approximation.

The convexity of the model ensures that the solution of problem (4), ẑ(θ), provides a valid lower bound to the solution of (2). Furthermore, as depicted in Figure 1, the maximum error [z(θ) - ẑ(θ)] will always occur at one of the vertices of the critical regions.18 In this way, the solution of deterministic NLPs at these vertices is a natural choice for constructing problem (4).
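To make the construction of (4) concrete, the following sketch assembles the accumulated linearizations in matrix form. It is only an illustration of the bookkeeping, not the authors' implementation: the callables f, grad_f, g, and grad_g are assumed to be supplied by the user, and the assembled rows would still have to be handed to an mpLP solver such as an implementation of Gal's method.

```python
import numpy as np

def linearization_rows(f, grad_f, g, grad_g, x_star):
    """Rows contributed to the master problem (4) by one NLP solution point x*.

    The linearized constraints
        g(x*) + grad_g(x*) (x - x*) <= c + F theta
        f(x*) + grad_f(x*) (x - x*) - mu <= 0
    are rearranged to
        grad_g(x*) x <= c + F theta + w,  with w = grad_g(x*) x* - g(x*),
        grad_f(x*) x - mu <= b,           with b = grad_f(x*) x* - f(x*),
    so only G = grad_g(x*), w, a = grad_f(x*), and b need to be stored
    (c and F are the same for every block and stay with the problem data).
    """
    G = np.atleast_2d(grad_g(x_star))
    w = G @ x_star - np.atleast_1d(g(x_star))
    a = np.asarray(grad_f(x_star), dtype=float)
    b = float(a @ x_star - f(x_star))
    return G, w, a, b

def accumulate_master(points, f, grad_f, g, grad_g):
    """Stack the linearizations of every solution point found so far."""
    rows = [linearization_rows(f, grad_f, g, grad_g, x) for x in points]
    G_all = np.vstack([r[0] for r in rows])
    w_all = np.concatenate([r[1] for r in rows])
    a_all = np.vstack([r[2] for r in rows])
    b_all = np.array([r[3] for r in rows])
    # Master mpLP (4): min mu  s.t.  A x = b_eq,
    #   G_all x <= (c + F theta repeated per block) + w_all,
    #   a_all x - mu <= b_all,  x in X, theta in Xi.
    return G_all, w_all, a_all, b_all
```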


In theory, considering the solution of every deterministic NLP solved throughout the iterations to generate the new master problem as stated in (4) would result in a tight approximation of the master problem and, therefore, rapid convergence to the nonlinear multiparametric solution. In practice, however, including all of these linearizations in the master problem results in a complicated mpLP that is difficult to solve, which in turn produces a large number of critical regions. Since the vertices of these regions define the number of deterministic NLPs that are solved for the primal problem, the procedure becomes intractable after a few iterations.

Formulation and Solution of the Primal Problem. One way to reduce the computational effort is to avoid the solution of the NLP subproblems at vertices where the error of the parametric solution could be small, that is, where z*(θA) - ẑ(θA) ≤ ε, with ε a specified tolerance. Since the real optimal value z*(θA) is not known prior to the solution of the NLP at θA, an estimate of this value is used. This estimate is based on the possible change in the objective function when moving from a vertex with a known solution (θB) to the vertex under study (θA), as expressed by

∆z = OFlin^θB(θA) - z*(θB)                           (5)

where z*(θB) is the known optimal solution at θB and OFlin^θB(θA) is the linearization of the objective function around vertex θB, evaluated at θA. The linearization of the objective function with respect to θ is obtained from the following equation:

OFlin^θB(θ) = z*(θB) + ∇θz*(θB)(θ - θB)              (6)

where ∇θz is given by the Lagrange multipliers of the parametric constraints at θB. Inserting (6) into (5), a coefficient of the expected error at θA based on the solution obtained at θB (CEE^{A,B}) can be defined as

CEE^{A,B} = |[OFlin^θB(θA) - z*(θB)]/z*(θB)| = |∇θz*(θB)(θA - θB)/z*(θB)|        (7)

The coefficient of expected error weighs the “distance” between θA and θB and the speed of change of the objective function around the vertex where a solution has been found. If the objective function changes rapidly with respect to θ (mathematically expressed by large Lagrange multipliers of the parametric constraints), it is likely that the new NLP will add important information to the master problem. A heuristic criterion (criterion of the expected error) is then applied to decide whether the NLP should be solved or not at that point. This criterion is mathematically defined as follows:

|[OFlin^θB(θA) - z*(θB)]/z*(θB)| ≥ ε[1 + (ẑ(θA) - OFlin^θB(θA))/|z*(θB)|]        (8)

where ẑ(θA) is the parametric solution obtained from the last mpLP master problem, evaluated at θA, and ε is a specified tolerance. The right-hand side of the criterion in (8) has a second term that modifies the selection criterion. As depicted in Figure 2, if the linearization of the objective function around θB has already been included in the master problem, its parametric solution will always be greater than or equal to this linearization, i.e., ẑ(θ) ≥ OFlin^θB(θ), and, therefore, the formulation in (7) will overestimate the error of the solution at θA. To compensate for this overestimation, the relative difference between the parametric solution and the linearized objective function, evaluated at θA, is added to ε to increase the tolerance for avoiding the solution of the NLP at that vertex. Conversely, if the linearization has not been included in the master problem, it is possible that the parametric solution will be below the linearization, in which case this modification decreases the tolerance for avoiding the solution of the NLP.

Figure 2. Evaluation of the criterion of expected error.

On the basis of the assumption that CEE^{A,B} estimates the error that the parametric linear approximation could have at θA if only the solution at θB is considered, and the fact that the mpLP master problem accumulates linearizations, a representative value of the error at θA should consider all the points where an NLP has been solved. A representative coefficient of the expected error (CEErep) is then obtained for each vertex as follows:

CEErep^A = min_B(CEE^{A,B}) = min_B |[OFlin^θB(θA) - z*(θB)]/z*(θB)|             (9)

The vertex with the largest CEErep, θrep, represents an estimate of the point in θ where the maximum error between the parametric linear approximation and the nonlinear solution could occur. Thus, this vertex is always selected for the solution of a deterministic NLP. The criterion in (8) is then evaluated for the rest of the vertices to decide the points where new NLPs will be solved, assuming the tolerance ε = [z*(θrep) - ẑ(θrep)]/z*(θrep).
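The vertex screening described above reduces to a few formulas. The sketch below is an illustration rather than the paper's code: each solved vertex is assumed to be stored as a tuple (θB, z*(θB), ∇θz*(θB)), with the multipliers of the parametric constraints supplying ∇θz*, and z_hat is assumed to be a callable that evaluates the current master solution ẑ at a point.

```python
import numpy as np

def cee(theta_a, theta_b, z_b, mult_b):
    """Coefficient of expected error (7): |grad_theta z*(thB) . (thA - thB)| / |z*(thB)|."""
    return abs(np.dot(mult_b, np.asarray(theta_a) - np.asarray(theta_b))) / abs(z_b)

def cee_rep(theta_a, solved):
    """Representative coefficient (9): minimum CEE over all solved vertices."""
    return min(cee(theta_a, th_b, z_b, mult_b) for th_b, z_b, mult_b in solved)

def should_solve(theta_a, theta_b, z_b, mult_b, z_hat, eps):
    """Criterion (8) with respect to one solved vertex theta_b."""
    of_lin = z_b + np.dot(mult_b, np.asarray(theta_a) - np.asarray(theta_b))  # eq (6)
    lhs = abs((of_lin - z_b) / z_b)                                           # eq (7)
    rhs = eps * (1.0 + (z_hat(theta_a) - of_lin) / abs(z_b))                  # eq (8)
    return lhs >= rhs
```

In example 1 (Table 2), a vertex appears to be skipped as soon as the check fails with respect to any solved point, which is consistent with taking the minimum in (9).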

Infeasible NLP Subproblems. Given the convexity of problem (2), the feasible region depicted by the linear constraints in (4) is a valid outer approximation of the original nonlinear feasible region. This means that some vertices of the linear parametric solution may lie outside the nonlinear feasible region, resulting in infeasible NLP subproblems. The case of infeasible problems can be treated following the procedure proposed by Dua and Pistikopoulos.18 This implies the solution of an infeasibility subproblem as follows:

min_{x,θ,r} Σ r²
s.t.  Ax = b
      g(x) ≤ c + Fθ
      r = θ - θinf
      θmin ≤ θ ≤ θmax
      x ∈ X                                          (10)

where θinf is the vertex where the infeasible subproblem was found and r is a vector of free variables. The resulting θ represents the nearest feasible point, where a new NLP is solved to obtain new linearizations of the nonlinear functions. In the proposed algorithm, however, this procedure is followed only when the infeasible NLP is obtained at the beginning of the solution of the primal problem, that is, for the largest CEErep. Otherwise, if a deterministic NLP has already been solved at the given iteration, the infeasible vertex is discarded from the analysis and the solution of the primal subproblem terminates.
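As an illustration of problem (10), the sketch below recovers the feasible point closest to an infeasible vertex with a general-purpose NLP solver (SciPy's SLSQP). The function names and the assumption that g, A, b, c, and F are available as NumPy-compatible objects are illustrative; the paper itself solves these subproblems in GAMS.

```python
import numpy as np
from scipy.optimize import minimize

def nearest_feasible_theta(theta_inf, g, A, b, c, F, n_x, x0=None):
    """Solve problem (10): min ||theta - theta_inf||^2 over feasible (x, theta)."""
    theta_inf = np.asarray(theta_inf, dtype=float)
    n_t = len(theta_inf)
    v0 = np.concatenate([x0 if x0 is not None else np.zeros(n_x), theta_inf])

    def split(v):                      # decision vector is [x, theta]
        return v[:n_x], v[n_x:]

    def objective(v):
        _, theta = split(v)
        return float(np.sum((theta - theta_inf) ** 2))

    constraints = [
        {"type": "eq",   "fun": lambda v: A @ split(v)[0] - b},          # A x = b
        {"type": "ineq", "fun": lambda v: c + F @ split(v)[1] - g(split(v)[0])},  # g(x) <= c + F theta
    ]
    bounds = [(None, None)] * n_x + [(0.0, 1.0)] * n_t   # theta scaled to [0, 1]
    res = minimize(objective, v0, method="SLSQP", bounds=bounds, constraints=constraints)
    x_feas, theta_feas = split(res.x)
    return theta_feas, x_feas, res.success
```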

Formulation and Solution of the Master mpLP Problem. To reduce the complexity of the master problem while still keeping a tight approximation to the nonlinear formulation, the linearized constraints are compared and selected so as to ensure that the value of each constraint at every solution point obtained from the NLPs is approximated to within the overall tolerance, δ, with the minimum number of linear functions. For every nonlinear constraint, the procedure starts by evaluating the error of the linearizations of the constraint at all the vertices where a deterministic NLP has been solved:

error^{A,B} = [g(x*B) - glin^θA(x*B)]/g(x*B)         (11)

where error^{A,B} represents the difference between the nonlinear constraint evaluated at the solution found at vertex θB, g(x*B), and the linearization of this constraint constructed at θA, also evaluated at x*B. The linear function glin^θA that approximates the nonlinear constraint at the largest number of vertices (error^{A,B} ≤ δ) is passed to the new master problem, and the selection process is repeated until every vertex is covered. For a nonlinear objective function, its linearizations are included as constraints in problem (4) and are analyzed in the same way. Although this procedure might not have a great impact on the complexity of a linear problem, numerical experience shows that it can affect the number of critical regions and the number of constraints that define these critical regions. These two factors, in turn, determine the complexity of the solution of the parametric linear program and the number of vertices passed to the primal subproblem.
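The selection of nonredundant linearizations can be read as a small greedy covering step. The sketch below is one possible rendering under stated assumptions (the relative error of (11) is taken in absolute value, and g_lin_values[a][b] is a hypothetical precomputed array holding the linearization built at vertex A evaluated at x*B); it does not claim to reproduce the authors' exact tie-breaking.

```python
def select_linearizations(g_values, g_lin_values, delta):
    """Greedy selection of linearizations for one nonlinear constraint.

    g_values[b]        : g(x*B), the constraint evaluated at solution point B
    g_lin_values[a][b] : linearization built at vertex A, evaluated at x*B
    Returns the indices of the linearizations kept for the new master problem.
    """
    n = len(g_values)
    # A linearization built at A "covers" point B if error^{A,B} of eq (11) is within delta.
    covers = [
        {b for b in range(n)
         if abs((g_values[b] - g_lin_values[a][b]) / g_values[b]) <= delta}
        for a in range(n)
    ]
    uncovered, kept = set(range(n)), []
    while uncovered:
        best = max(range(n), key=lambda a: len(covers[a] & uncovered))
        if not covers[best] & uncovered:   # nothing covers the rest:
            kept.extend(uncovered)         # fall back to one linearization per point
            break
        kept.append(best)
        uncovered -= covers[best]
    return kept
```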

An Algorithm for Multiparametric Nonlinear Problems

Based on the definitions presented in the previous section, the main steps of the proposed algorithm are as follows:

0. Initialization. Define the set of θ points where an NLP has been solved, ΘS = ∅, and the overall tolerance δ. Set the iteration index k = 1.

1. Solution of the Primal NLP Subproblems.
a. If k = 1, solve problem (2) treating the uncertain parameters as variables. Add the solution, θ*, to ΘS and go to the solution of the master problem.
b. With the new parametric solution, find the vertices defined by the critical regions18 to define the set of θ points to evaluate, ΘE. Set ε = δ.
c. Evaluate the representative coefficient of the expected error, CEErep^A in (9), for each vertex in ΘE with respect to every vertex in ΘS.
d. Evaluate the criterion in (8) at the vertex with the largest value of CEErep^A.
   • If a new NLP should be solved according to the criterion, then solve the deterministic NLP with the uncertain parameters fixed at θA and move θA from ΘE to ΘS. Else, go to step 1.f.
   • If [z*(θA) - ẑ(θA)]/z*(θA) > ε, then set ε = [z*(θA) - ẑ(θA)]/z*(θA). Else, go to step 1.f.
e. If there are vertices left in ΘE, return to step 1.c. (Note that the new evaluation of CEErep^A only requires the evaluation with respect to the last vertex where a deterministic NLP was solved.)
f. If no new NLPs were solved, the algorithm has converged and the solution of the nonlinear problem is given by the solution of the last parametric master problem. Else, the solution of the primal problem is considered finished, and the rest of the vertices in ΘE are discarded.

2. Solution of the Multiparametric Master Problem.
a. Construct new linearizations of the nonlinear functions around the solution points found in the primal problem.
b. Define a set Θ* = ΘS.
c. For each nonlinear constraint:
   c.1. Evaluate the error of each linearization at every vertex in Θ*.
   c.2. Select the linearization with error^{A,B} ≤ δ for the largest number of vertices in Θ*.
   c.3. Remove the vertices where error^{A,B} ≤ δ from Θ*. Repeat step c.2 until Θ* = ∅.
d. For a nonlinear objective function: if [z*(θA) - ẑ(θA)]/z*(θA) > δ, then select the new linearization; else, repeat steps c.2 and c.3 with the linearizations of the objective function.
e. Solve the new mpLP master problem following the algorithm of Gal.24
f. Set k = k + 1 and return to the solution of a new primal problem.

This algorithm has three main features. First, it considerably reduces the number of NLPs to be solved per iteration; second, it exploits the information available from the solution of deterministic NLPs throughout the algorithm; and third, it reduces the complexity of the master mpLP problem while maintaining a tight approximation of the original problem. In the next section, three numerical examples are presented where the algorithm is explained in detail to evaluate these features.
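The steps above can be condensed into the following schematic loop. It is deliberately generic rather than the authors' GAMS/Fortran implementation: all problem-specific operations (the NLP solves, the mpLP solution by Gal's algorithm, the vertex enumeration of the critical regions, and the screening formulas (7)-(9)) are injected as callables whose names are placeholders.

```python
def mp_nlp_loop(solve_nlp_free, solve_nlp_at, build_master, vertices_of,
                z_hat, cee_rep, criterion_8, nearest_feasible, delta):
    """Schematic main loop of the proposed algorithm (steps 0-2).

    solve_nlp_free()            -> (theta*, z*, info)      # step 1.a
    solve_nlp_at(theta)         -> (z*, info) or None      # problem (3), None if infeasible
    build_master(solved, delta) -> master                  # linearize, select, solve mpLP
    vertices_of(master)         -> list of theta vertices (hashable tuples)
    z_hat(master, theta)        -> parametric solution evaluated at theta
    cee_rep(theta, solved)      -> representative CEE, eq (9)
    criterion_8(theta, solved, master, eps) -> bool        # eq (8)
    nearest_feasible(theta)     -> (theta_feas, z*, info)  # problem (10)
    """
    theta0, z0, info0 = solve_nlp_free()             # step 1.a
    solved = {theta0: (z0, info0)}                   # Theta_S
    while True:
        master = build_master(solved, delta)         # step 2
        candidates = list(vertices_of(master))       # step 1.b: Theta_E
        eps, new_nlps = delta, 0
        while candidates:
            theta_a = max(candidates, key=lambda t: cee_rep(t, solved))  # step 1.c
            candidates.remove(theta_a)
            if not criterion_8(theta_a, solved, master, eps):            # step 1.d
                break                                # step 1.f: discard remaining vertices
            result = solve_nlp_at(theta_a)
            if result is None:                       # infeasible NLP subproblem
                if new_nlps > 0:
                    break                            # discard the vertex, end primal step
                theta_a, z_a, info_a = nearest_feasible(theta_a)         # problem (10)
            else:
                z_a, info_a = result
            solved[theta_a] = (z_a, info_a)
            new_nlps += 1
            err = (z_a - z_hat(master, theta_a)) / z_a
            if err > eps:
                eps = err                            # step 1.d, second bullet
            else:
                break
        if new_nlps == 0:                            # step 1.f: converged
            return master
```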

Numerical Examples

In this section, three numerical examples are used to illustrate the main steps of the algorithm. The first two examples are taken from the open literature,18 while the third is a variation in which more uncertain parameters are included to study the impact of the number of uncertainties on the computational requirements of the procedure. In all cases, the deterministic NLPs were solved using GAMS,30 while the mpLPs were solved with a code developed in Fortran.

Example 1. The first example corresponds to the structure used in the first iteration of the MINLP problem proposed by Dua and Pistikopoulos.18 The integer variables are fixed to select two processes (P1 and P3) to produce product C from raw materials A and B. A schematic representation of the problem is given in Figure 3.

Figure 3. Process structure for example 1.

The mathematical formulation includes five variables and two uncertain parameters as follows:

max z(θ) = 11C1 - (5 + 1.8A + 1.2B3 + 7BP)
s.t.  B1 - B3 - BP = 0
      -0.9B1 + C1²/15 ≤ 0
      -A + exp(B3/1.2) ≤ 1
      BP ≤ 1.95
      B3 ≤ 20
      A = 0.5 + 0.25θ1
      C1 = 5.5 + 0.5θ2
      0 ≤ θ ≤ 1

where z(θ) represents the optimal profit and A, B1, B3, and BP represent the material flows through the process. The two uncertain parameters represent an uncertain availability of the raw material, varying from 0.5 to 0.75, and an uncertain demand, varying from 5.5 to 6.

Table 1. Evaluation of CEErep for Example 1

vertex        CEErep    distance from θ*
A(0,0)        0.0332    1.34
D(0,0.509)    0.0332    1.073
B(1,0)        0
C(1,0.898)    0

Table 2. Definition of Vertices To Solve Deterministic NLPs

              criterion of expected error with respect to
vertex        C(1,0.898)         A(0,0)              solve NLP?
D(0,0.509)    0.0332 > 0.0044    0.0323 > 0.00438    Yes
B(1,0)        0.0000 < 0.0046    0.0170 > 0.00439    No

Table 3. Parametric Solution of Example 1

region   optimal solution                    critical region
1        ẑ(θ) = 41.73 + 0.71θ1 + 2.64θ2      θ1 ≤ 0.474; θ2 ≤ 0.449
2        ẑ(θ) = 41.8414 + 0.71θ1 + 2.41θ2    -0.20θ1 + 0.44θ2 ≤ 0.21; θ1 ≤ 0.474; θ2 ≥ 0.449
3        ẑ(θ) = 41.8155 + 0.54θ1 + 2.64θ2    θ2 ≤ 0.449; θ1 ≥ 0.474
4        ẑ(θ) = 41.92 + 0.54θ1 + 2.41θ2      -0.171θ1 + 0.44θ2 ≤ 0.224; θ1 ≥ 0.449; θ2 ≥ 0.474

Figure 4. Critical regions of the solution of example 1.

The tolerance selected for the final solution is taken as 0.3%, that is, δ = 0.003. The procedure starts by solving a deterministic NLP with θ1 and θ2 as variables, finding a first solution with θ* = (1,0.898) and z* = 44.63. The master mpLP problem is then obtained by substituting the two nonlinear constraints with their linearizations. The solution of this master mpLP, following the algorithm of Gal,24 yields only one critical region, defined by the constraint -0.17θ1 + 0.44θ2 ≤ 0.22 and the original bounds for θ. The optimal solution function in this region, an upper bound for this maximization problem, is given by ẑ(θ) = 41.92 + 0.54θ1 + 2.42θ2. The rest of the uncertain space represents an infeasible region.

The solution of the master problem gives four vertices, including the first solution found at the initialization step. The CEErep for these vertices, based on the first solution θ* = (1,0.898), is given in Table 1, where the vertices have already been sorted in descending order of CEErep. Since there are two vertices with the same CEErep, the distance from θ* is evaluated, and vertex A is chosen for the solution of a new NLP. The solution of the NLP problem at θA = (0,0) gives an optimal value of z*(θA) = 41.7369. The error of the parametric solution at this point is then 0.439%, so ε is set to 0.00439. The CEEs of the remaining vertices are evaluated with respect to θ* and θA and then compared with the criterion in (8) to decide whether another NLP should be solved. The result of this evaluation, shown in Table 2, defines a new point, θD = (0,0.509), at which to solve a deterministic NLP. The nonlinear problem is found infeasible at this point, marking the end of the solution of the primal problem.

The new master problem, considering the linearizations around θ* and θA, results in four critical regions and their corresponding optimal solution functions, as given in Table 3. Figure 4 shows these regions within the initial uncertain space. Note that vertices θ* = θC and θA are included in this solution, but the infeasible vertex θD has been excluded.
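Once a parametric profile such as the one in Table 3 is available, using it only requires locating the critical region that contains a given θ and evaluating the corresponding affine function. The snippet below encodes Table 3 as reconstructed above (the assignment of the simple bound constraints to regions 1 and 4 is inferred from the layout of the extracted table, and the global bounds 0 ≤ θ ≤ 1 are left implicit), so it should be read as illustrative only.

```python
# Table 3, as reconstructed above. A constraint (a1, a2, rhs) means a1*th1 + a2*th2 <= rhs.
REGIONS = [
    ([( 1.0,   0.0,  0.474), ( 0.0,  1.0,  0.449)],                        (41.73,   0.71, 2.64)),
    ([(-0.20,  0.44, 0.21),  ( 1.0,  0.0,  0.474), (0.0, -1.0, -0.449)],   (41.8414, 0.71, 2.41)),
    ([( 0.0,   1.0,  0.449), (-1.0,  0.0, -0.474)],                        (41.8155, 0.54, 2.64)),
    ([(-0.171, 0.44, 0.224), (-1.0,  0.0, -0.449), (0.0, -1.0, -0.474)],   (41.92,   0.54, 2.41)),
]

def z_hat(theta1, theta2, tol=1e-9):
    """Evaluate the parametric profit profile of example 1 at (theta1, theta2)."""
    for constraints, (c0, c1, c2) in REGIONS:
        if all(a1 * theta1 + a2 * theta2 <= rhs + tol for a1, a2, rhs in constraints):
            return c0 + c1 * theta1 + c2 * theta2
    return None   # outside every critical region (infeasible part of the uncertain space)

print(z_hat(0.5, 0.5))   # a point in the interior of one of the critical regions
```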

Table 4. Comparison of Solution Algorithms for Example 1

                                                 Dua and            Moncada and    proposed
                                                 Pistikopoulos18    Acevedo22      algorithm
no. of NLPs solved/vertices (first iteration)    4/4                2/4            2/4
no. of NLPs solved/vertices (second iteration)   9/9                2/9            2/9
total no. of NLPs solved                         14                 5              5

Table 5. Mathematical Model for Example 2

Objective Function
min z(θ) = 36.3 + 1.1IS1² + 1.2IS1 + 1.3IS2² + 1.2IS2 + 1.6IS3² + 1.5IS3 + 1.5IS4² + 1.5IS4 + 2.1IS5² + 1.8IS5 + 1.9IS6² + 1.8IS6 + 1.7IS7² + IS8² + 3IS9² + 3.7IS10² + 4.1IS11² - 50P1 - 60P2 - 68P3

Material Balances
-0.65IS7 + OS7 = 0
-0.90IS8 + OS8 = 0
-0.72IS9 + OS9 = 0
-0.68IS10 + OS10 = 0
-0.71IS11 + OS11 = 0
IS7 - OS1 - OS2 - I = 0

Production Relations
-IS1 + 2.0 exp(OS1/1.8) ≤ 2.0
-IS2 + 2.1 exp(OS2/2.0) ≤ 2.1
-IS3 + 2.6 exp(OS3/2.5) ≤ 2.6
-IS4 + 2.1 exp(OS4/2.2) ≤ 2.1
-IS5 + 2.5 exp(OS5/2.8) ≤ 2.5
-IS6 + 3.0 exp(OS6/3.0) ≤ 3.0

Maximum Capacities
IS1 ≤ 4.0; IS2 ≤ 3.0; IS3 ≤ 3.5
IS4 ≤ 5.5; IS5 ≤ 7.5; IS6 ≤ 6.0
IS7 ≤ 8.0; IS8 ≤ 6.0; IS9 ≤ 5.0
IS10 ≤ 5.0; IS11 ≤ 5.0

Raw Material Availability
A = IS1 + IS2 ≤ 1.5
B = IS3 + IS4 ≤ 1.6
C = IS5 + IS6 ≤ 1.7

Product Demand
P1 = OS9 = 0.5 + 0.5θ1
P2 = OS10 = 0.5 + 0.34θ2
P3 = OS11 = 1.0 + 0.1θ3

Variation of Parameters
0 ≤ θ ≤ 1

Evaluating CEErep for the remaining seven vertices with respect to θ* and θA, vertex θE = (0.474,0.449) is selected to solve the first NLP of the new iteration, having the largest CEErep. The solution found at this point is z*(θE) = -43.26, with an error of 0.1%. Since this error is less than the overall tolerance, the procedure ends at this point with the parametric solution shown in Table 3. To test the procedure, six NLPs were solved at the remaining vertices, finding a maximum error of 0.06% at three vertices. A comparison of the performance of different algorithms for the solution of this problem is presented in Table 4, where the numbers of deterministic NLPs solved by two previous algorithms18,22 are contrasted with those of the proposed approach. The total number of NLPs solved includes the initial problem, in which the uncertain parameters are treated as variables.

Example 2. The second example also corresponds to the second example presented by Dua and Pistikopoulos,18 considering all the processes in the superstructure, that is, with all integer variables set to 1 (see also example 3).

Table 6. Solution Procedure for Example 2

            primal problem (NLP)                     master problem (mpLP)
iteration   vertex            CEE     error (%)      no. of constraints   critical regions   no. of vertices
1           θ* = (1,1,1)                             33                   1                  8
2           θB = (0,0,0)      0.275   8              34                   4                  20
            θC = (0,1,0)      0.159   4
3           θD = (0.75,0,0)   0.187   2.7 < δ

Table 7. Parametric Solution of Example 2

region   optimal solution                               critical region
1        ẑ(θ) = -18.23θ1 - 14.61θ2 - 4.35θ3 - 65.76     θ1 + 0.48θ2 + 0.034θ3 ≤ 0.756; θ2 ≤ 1; θ3 ≤ 0.79
2        ẑ(θ) = -18.23θ1 - 14.61θ2 - 4.33θ3 - 65.77     θ1 + 0.48θ2 + 0.031θ3 ≤ 0.754; θ2 ≤ 1; 0.79 ≤ θ3 ≤ 1
3        ẑ(θ) = -11.53θ1 - 11.4θ2 - 4.12θ3 - 70.82      θ1 + 0.48θ2 + 0.0339θ3 ≤ 0.758; θ1 + 0.48θ2 + 0.0341θ3 ≥ 0.756; θ1 + 0.48θ2 + 0.0311θ3 ≥ 0.754; θ2 ≤ 1; θ3 ≤ 1
4        ẑ(θ) = -11.45θ1 - 11.36θ2 - 4.11θ3 - 70.88     θ1 + 0.48θ2 + 0.0339θ3 ≥ 0.758; θ1 ≤ 1; θ2 ≤ 1; θ3 ≤ 1

Table 8. Comparison of Solution Algorithms for Example 2

                                                 Dua and            Moncada and    proposed
                                                 Pistikopoulos18    Acevedo22      algorithm
no. of NLPs solved/vertices (first iteration)    8/8                7/8            3/8
no. of NLPs solved/vertices (second iteration)   12/20              8/28           1/20
total no. of NLPs solved                         21                 16             5

The mathematical model, presented in Table 5, includes 30 continuous variables and three uncertain parameters defining the demand of each of the three possible products. As proposed by the original authors,18 the tolerance for the final solution is set at 2.8%. The solution procedure is summarized in Table 6. To describe the solution of the primal subproblems, Table 6 shows the vertices where deterministic NLPs were solved, their coefficients of expected error, and the real percentage error (the difference between the NLP solution and the parametric approximation, evaluated at each vertex). For the master problems, the number of constraints in the mpLP, the resulting critical regions, and the number of vertices defining these regions are given. The final solution, obtained from the second master problem, has four critical regions, with optimal solution functions as reported in Table 7.

Several issues regarding these numerical results are worth noting. First, only three major iterations are required and, more significantly, only one or two NLPs are solved per iteration. These results imply that the procedure selected only those vertices where the larger discrepancies between the nonlinear solution and the linear approximation occurred, thus providing good points where new outer approximations should be constructed. Second, the mpLP master problems in both iterations were basically of the same size, since only one new linearization of the objective function (constructed around the optimal solution at θB) was added, and the constraints were approximated by a selection of the nonredundant linearizations from the three solution points available.

Table 9. Mathematical Model for Example 3

Objective Function
min z(θ) = -36.3 + 1.1IS1² + 0.6IS1 + 1.3IS2² + 0.6IS2 + 1.6IS3² + 1.5IS3 + 1.5IS4² + 1.5IS4 + 2.1IS5² + 1.8IS5 + 1.9IS6² + 1.8IS6 + 1.7IS7² + IS8² + 0.5IS9² + 3.7IS10² + 4.1IS11² - 60P1 - 68P2

Material Balances
-0.65IS7 + D = 0
-0.15IS7 + IS9 = 0
-0.90IS8 + OS8 = 0
-0.72IS9 + W = 0
-0.68IS10 + P1 = 0
-0.71IS11 + P2 = 0
IS7 - OS1 - OS2 - I = 0
IS8 - D - OS3 - OS4 = 0
OS8 - E - IS10 = 0
IS11 - E - OS5 - OS6 = 0

Production Relations
-IS1 + 2.0 exp(OS1/1.8) ≤ 2.0
-IS2 + 2.1 exp(OS2/2.0) ≤ 2.1
-IS3 + 2.6 exp(OS3/2.5) ≤ 2.6
-IS4 + 2.1 exp(OS4/2.2) ≤ 2.1
-IS5 + 2.5 exp(OS5/2.8) ≤ 2.5
-IS6 + 3.0 exp(OS6/3.0) ≤ 3.0

Maximum Capacities
IS1 + IS2 ≤ 1.0; IS4 ≤ 0.75
IS5 + IS6 ≤ 1.25; IS7 ≤ 8.0
IS9 ≤ 5.0; IS10 ≤ 5.0; IS11 ≤ 5.0
IS3 ≤ 0.5θ5; IS8 ≤ 1.0 + 0.6θ6

Raw Material Availability
A = IS1 + IS2 ≤ 1.5
B = IS3 + IS4 ≤ 1.6
C = IS5 + IS6 ≤ 1.7
I ≤ 0.50 + 0.10θ4

Product Demand
P1 = 0.50 + 0.25θ1
P2 = 0.90 + 0.10θ2
W ≤ 0.10 + 0.05θ3

Variation of Parameters
0 ≤ θ ≤ 1

Finally, the procedure successfully identified when the optimal parametric solution had been found, as the vertex with the largest CEErep (θD) resulted in a real error lower than the tolerance δ (the solution of deterministic NLPs at every vertex of the final critical regions corroborates θD as the point with the largest error). Comparing the performance of the algorithm with those of previous approaches, the results presented in Table 8 are obtained. It is clear that, as the number of uncertain parameters is increased, the proposed procedure becomes more advantageous.

Table 10. Solution Procedure for Example 3

            primal problem (NLP)                            master problem (mpLP)
iteration   vertex                          error (%)       no. of constraints   critical regions   no. of vertices
1           θ* = (1,1,1,1,1,1)                              36                   3                  140
2           θB = (0,0,0,0,0,0)              3.75            36                   5                  238
            θC = (0.58,0,1,1,0,0.16)        infeas.
3           θD = (0.9,0.2,0.1,0,0.06,0.5)   2.6 < δ

Example 3. The third example is a variation of the previous problem in which three additional uncertain parameters are included: (i) the availability of raw material I, varying from 0.5 to 0.6; (ii) the availability of process 3, defining a discrete uncertainty; and (iii) the capacity of process 8, varying from 1 to 1.6. In addition, process 7 now has two different products, one going to process 8 and another going to process 9, where an undesirable waste is produced. This flow is limited by what could be viewed as uncertain legislation defining the maximum discharge limit. Finally, the cost coefficients and the definition of the uncertain product demands were changed to accommodate the proposed additions. The final structure of the process is given in Figure 5, while Table 9 presents the mathematical model, where IS represents the input stream to a process and OS its output stream.

Figure 5. Process structure for example 3.

The new mpNLP formulation includes 30 continuous variables, 15 equality constraints, 20 inequality constraints, and 6 uncertain parameters. The final tolerance was again set as in the original work18 to 2.8%, i.e., δ = 0.028. The solution procedure is summarized in Table 10. Since the mathematical model is very similar to that of example 2, analogous behavior in terms of the number of major iterations (three in both cases), the number of deterministic NLPs solved (four in both cases), and the size of the master problem was expected. Nevertheless, this solution highlights two important features related to the increase in the number of uncertain parameters. First, note that the computational requirements for the solution of the multiparametric linear problem, mostly defined by the size of the deterministic LP and the number of critical regions, do not increase exponentially with the number of uncertain parameters. For these problems, in fact, the solution times for the master problems of examples 2 and 3 were practically the same. In contrast, the number of vertices resulting from the master problem depends largely on the number of uncertain parameters and the number of critical regions obtained from the solution of the mpLP. Thus, the number of vertices increased more than 10 times from example 2 to example 3. This, however, did not have a large effect on the whole solution procedure, as the same numbers of NLP subproblems and mpLP master problems were solved for both examples. The computational efficiency of the algorithm was demonstrated by solving only two NLPs out of a total of 140 vertices in the first iteration, and only one out of a total of 238 in the second. Solving deterministic NLPs at these final vertices, the largest error is found at θD (given in Table 10) and at θ = (1,0,0.1199,1,0,1), both with an error of 2.6%.

Concluding Remarks

In this work, a parametric programming approach was proposed for the consideration of uncertainty in process optimization. Compared to other approaches, parametric programming offers the advantage of giving a complete map of the optimal solution in the space of the uncertain parameters, reducing the exponential increase of the computational requirements with respect to the number of uncertain parameters. The proposed algorithm proved to be very efficient and consistent in the examples that we have solved. The estimation of an expected error allows, on one hand, the selection of suitable points for the solution of deterministic NLPs and, on the other hand, the incorporation of the information obtained throughout the algorithm to reduce the number of subproblems that are solved. The incorporation of these developments into algorithms for the solution of mixed-integer models and their extension to nonconvex formulations is now envisioned.

Literature Cited

(1) Grossmann, I. E.; Halemane, K. P.; Swaney, R. E. Optimisation Strategies for Flexible Chemical Processes. Comput. Chem. Eng. 1983, 7, 439-462.
(2) Ierapetritou, M. G.; Acevedo, J.; Pistikopoulos, E. N. An Optimisation Approach for Process Engineering Problems under Uncertainty. Comput. Chem. Eng. 1996, 20, 703-709.
(3) Sahinidis, N. V. Optimisation under Uncertainty: State-of-the-Art and Opportunities. In Proceedings of FOCAPO 2003, Coral Springs, FL; Grossmann, I. E., McDonald, C. M., Eds.; CACHE Corp.: Austin, Texas, 2003; pp 153-164.
(4) Grossmann, I. E.; Sargent, R. W. H. Optimal Design of Chemical Plants with Uncertain Parameters. Comput. Chem. Eng. 1978, 24, 1021-1028.
(5) Shah, N.; Pantelides, C. C. Design of Multipurpose Batch Plants with Uncertain Production Requirements. Ind. Eng. Chem. Res. 1992, 36, 717-728.
(6) Paules, G. E., IV; Floudas, C. A. Stochastic programming in process synthesis: a two-stage model with MINLP recourse for multiperiod heat-integrated distillation sequences. Comput. Chem. Eng. 1992, 16, 189-210.
(7) Rooney, W. C.; Biegler, L. T. Design for model parameter uncertainty using nonlinear confidence regions. AIChE J. 2001, 47, 1794-1804.

(8) Subrahmanyam, S.; Kudva, G. K.; Bassett, M. H.; Pekny, J. F. Issues in solving large scale planning, design and scheduling problems in batch chemical plants. Comput. Chem. Eng. 1993, 17, 339-354.
(9) Liu, M. L.; Sahinidis, N. V. Optimization in process planning under uncertainty. Ind. Eng. Chem. Res. 1996, 35, 4154-4165.
(10) Diwekar, U. M.; Kalagnanam, J. R. Efficient sampling technique for optimization under uncertainty. AIChE J. 1997, 43, 440-447.
(11) Acevedo, J.; Pistikopoulos, E. N. Stochastic Optimization based algorithms for process synthesis under uncertainty. Comput. Chem. Eng. 1998, 22, 647-671.
(12) Gupta, A.; Maranas, C. D. A two-stage modeling and solution framework for multisite midterm planning under demand uncertainty. Ind. Eng. Chem. Res. 2000, 39, 3799-3813.
(13) Ahmed, S.; Sahinidis, N. V.; Pistikopoulos, E. N. An improved decomposition algorithm for optimization under uncertainty. Comput. Chem. Eng. 2000, 24, 1589-1604.
(14) Kim, K.-J.; Diwekar, U. M. Efficient combinatorial optimization under uncertainty. 1. Algorithmic development. Ind. Eng. Chem. Res. 2002, 41, 1276-1284.
(15) Acevedo, J.; Pistikopoulos, E. N. A Parametric MINLP Algorithm for Process Synthesis Problems under Uncertainty. Ind. Eng. Chem. Res. 1996, 35, 147-158.
(16) Acevedo, J.; Pistikopoulos, E. N. A Multiparametric Programming Approach for Linear Process Engineering Problems under Uncertainty. Ind. Eng. Chem. Res. 1997, 36, 717-728.
(17) Pertsinidis, A.; Grossmann, I. E.; McRae, G. J. Parametric Optimization of MILP programs and a framework for the parametric optimization of MINLPs. Comput. Chem. Eng. 1998, 22S, S205-S209.
(18) Dua, V.; Pistikopoulos, E. N. Algorithms for the Solution of Multiparametric Mixed-Integer Nonlinear Optimisation Problems. Ind. Eng. Chem. Res. 1999, 38, 3976-3987.
(19) Acevedo, J.; Pistikopoulos, E. N. A Hybrid Parametric/Stochastic Programming Approach for Mixed-Integer Linear Problems under Uncertainty. Ind. Eng. Chem. Res. 1997, 36, 2262-2270.
(20) Hené, T. S.; Dua, V.; Pistikopoulos, E. N. A hybrid parametric/stochastic programming approach for mixed-integer nonlinear problems under uncertainty. Ind. Eng. Chem. Res. 2002, 41, 67-77.

(21) Papalexandri, K. P.; Dimkou, T. I. A Parametric Mixed-Integer Optimisation Algorithm for Multiobjective Engineering Problems Involving Discrete Decisions. Ind. Eng. Chem. Res. 1998, 37, 1866-1882.
(22) Moncada, A. E.; Acevedo, J. A methodology for the minimization of environmental impact of chemical process under the presence of uncertainty. Proceedings of the 6th World Congress of Chemical Engineering, Melbourne, Australia, 2001.
(23) Dua, V.; Bozinis, N. A.; Pistikopoulos, E. N. A multiparametric programming approach for mixed-integer quadratic engineering problems. Comput. Chem. Eng. 2002, 26, 715-733.
(24) Gal, T. Postoptimal Analyses, Parametric Programming and Related Topics, 2nd ed.; Walter de Gruyter: New York, 1995.
(25) Geoffrion, A. M.; Nauss, R. Parametric and Post-optimality Analysis in Integer Linear Programming. Manage. Sci. 1977, 23, 453-471.
(26) Ohtake, Y.; Nishida, N. A. A Branch and Bound algorithm for 0-1 Parametric Mixed-Integer Programming. Oper. Res. Lett. 1985, 4, 41-48.
(27) Fiacco, A. V. Introduction to Sensitivity and Stability Analysis in Nonlinear Programming; Academic Press: San Diego, CA, 1983.
(28) Fiacco, A. V. Global Multiparametric Optimal Value Bounds and Solution Estimates for Separable Parametric Programs. Ann. Oper. Res. 1990, 27, 381-396.
(29) Kocis, G. R.; Grossmann, I. E. Relaxation Strategy for the Structural Optimization of Process Flowsheets. Ind. Eng. Chem. Res. 1987, 26, 1869-1880.
(30) Brooke, A.; Kendrick, D.; Meeraus, A. GAMS: A User's Guide; Scientific Press: Washington, DC, 1996.

Received for review February 12, 2003 Revised manuscript received July 15, 2003 Accepted August 15, 2003 IE0301278