
Ind. Eng. Chem. Res. 1996, 35, 4154-4165

Optimization in Process Planning under Uncertainty

Ming Long Liu and Nikolaos V. Sahinidis*

Department of Mechanical & Industrial Engineering, University of Illinois, Urbana-Champaign, 1206 W. Green Street, Urbana, Illinois 61801

This paper develops a two-stage stochastic programming approach for process planning under uncertainty. We first extend a deterministic mixed-integer linear programming formulation to account for the presence of discrete random parameters. Subsequently, we devise a decomposition algorithm for the solution of the stochastic model. The case of continuous random variables is handled through the same algorithmic framework without requiring any a priori discretization of their probability space. Computational results are presented for process planning problems with up to 10 processes, 6 chemicals, 4 time periods, 24 random parameters, and 5^24 scenarios. The efficiency of the proposed algorithm enables, for the first time, not just the solution but even the on-line solution of these problems. Finally, a method is proposed for comparing stochastic and fuzzy programming approaches. Overall, even in the absence of probability distributions, the comparison favors stochastic programming.

1. Introduction

Deterministic models for process planning assume that predictions for prices, as well as demands and availabilities of chemical products and raw materials, are known with certainty. An optimal solution comprises the selection of processes from among competing alternatives and the timing of capacity expansions in a way that maximizes the net present value of the project over a planning horizon. Recent approaches to this problem include Sahinidis et al. (1989), Sahinidis and Grossmann (1991a,b, 1992), Norton and Grossmann (1994), and Liu and Sahinidis (1995, 1996a). The increased activity in this area in recent years is partly motivated by the fact that scenario analysis of a deterministic optimization model can be used to evaluate indirectly the effects of random parameters on the problem. The goal of this paper is to develop direct ways for studying the effects of uncertainties in process planning.
In dealing with similar problems, researchers have developed two vastly distinct research philosophies over the past 30 years: fuzzy and stochastic programming. Fuzzy programming assumes that uncertain parameters in a mathematical model are fuzzy numbers defined on a fuzzy set associated with a membership function. The objective function may be either a fuzzy goal or a crisp function, while the constraints may allow some violations (soft constraints). Much of the work in this area is based on the seminal paper by Bellman and Zadeh (1970) and has been recently popularized by the work of Zimmermann (1987, 1991). Stochastic programming deals with optimization problems whose parameters take values from given discrete or continuous probability distributions. The decision must be made before the actual values of these random variables are realized. Stochastic programming with recourse (Dantzig, 1955) is often used to model uncertainty, giving rise to large-scale mathematical programs that require the use of decomposition methods and approximation schemes for their solution. Surveys of developments and applications of stochastic programming can be found in Dempster (1980), Ermoliev and Wets (1988), Wets (1989), Birge and Wets (1991), and Kall and Wallace (1994).

* Author to whom correspondence is addressed. E-mail: [email protected]. Phone: 217-244-1304. Fax: 217-244-6534.


Overall, there exists a vast literature on fuzzy programming and one of comparable size on stochastic programming. However, until now no systematic methods have been proposed to compare these two techniques. In order to quantify the ability of a chemical process to deal with uncertainty, Grossmann et al. (1983) proposed the concept of flexibility. Reviews of the literature on flexibility in process design and operations can be found in Grossmann and Straub (1991) and Pistikopoulos (1995). The computational study of Liu and Sahinidis (1995) revealed that uncertainty in prices and demands does not have any major impact on the quality of deterministic MILP solutions of process planning problems provided that plans are allowed to be revised through adjustment of production levels and amounts of sales and purchases with no economic penalty. The current paper and some recent works address explicitly the economic implications of uncertainty in process operations. Ierapetritou et al. (1994) proposed an algorithm for two-stage stochastic linear planning models based on multiparametric linear programming. No capacity expansions were considered, and therefore only short- to medium-term planning was allowed. The same production planning problem was considered by Clay and Grossmann (1996), who developed a successive disaggregation algorithm for two-stage linear problems with discrete uncertainties. Ierapetritou and Pistikopoulos (1994) also proposed a two-stage stochastic programming approach using Benders decomposition to solve short/long-term planning problems. Their algorithm works without any a priori discretization of the space of the random parameters. However, this is accomplished at a large computational expense as a Gaussian quadrature was used to evaluate the expectation of the second-stage objective function. This paper considers the process planning problem with some or all of the parameters that determine the economics of the production plan being random. 
We first address the case in which forecasts for prices, demands, and availabilities come in a finite number of possible scenarios, each of which has an associated probability. We take a two-stage approach to this problem. In the first stage, we assume that, due to lead times and contractual requirements for plant construction, capital investment decisions must be made here-and-now. These capacity expansion decisions must be


optimal in a probabilistic sense. As the realization of the random parameters is unknown at the time of planning, all different possible scenarios must be anticipated. Subsequently, outcomes of the random variables will be revealed, and, for each second-stage scenario, we will select an optimal operating plan. The resulting two-stage mathematical model is similar in structure to existing models developed for the deterministic case. Yet, the size of this model is the major obstacle to its application. We overcome this difficulty by developing a Benders-based decomposition approach to this problem. As opposed to the stochastic decomposition approach of Ierapetritou and Pistikopoulos (1994) which uses Gaussian quadrature, our algorithm uses Monte Carlo sampling to estimate the expectation of the objective function. This facilitates the solution of much larger problems as sampling permits an accurate estimation of the cost function without requiring the explicit solution of a large number of optimization subproblems. We also deviate from the sampling-based stochastic decomposition schemes of Dantzig and Glynn (1990) and Infanger (1994) by not solving the decomposition master problem to optimality in every iteration and by using a particularly simple sampling scheme for solving the subproblems. Solving the master problem suboptimally during initial iterations permits the solution of larger problems and facilitates the handling of integer variables which account for capacity expansions. A key feature of the proposed approach is the incorporation of feasibility constraints in the decomposition master problem. In addition to expediting convergence, these constraints guarantee feasibility of the decomposition subproblems, thus allowing the application of sampling-based techniques which require the so-called “complete recourse”. The rest of the paper is organized as follows. 
In section 2, we extend a previously developed deterministic optimization model to account for the presence of discrete random variables. Section 3 develops the details of the solution algorithm which is extended to handle problems with continuous random variables in section 4. In section 5, we present a comparison between the stochastic programming approach of this paper and a recently developed fuzzy programming approach to the same problem. The procedure proposed for comparing the two modeling philosophies can be used in the context of other applications besides process planning. An important outcome of the comparison analysis is a simple method for constructing the probability distributions needed by stochastic programming from a given range of the random parameters. Section 6 presents computational results for the models and algorithms presented in this paper. The decomposition method is so efficient that it can solve stochastic integer problems that are much larger than those solved in the past in process planning or any other application area. The results clearly demonstrate that stochastic programming is becoming a viable tool for solving chemical process planning and expansion problems. Conclusions from this work are summarized in section 7.

2. Formulation of the Stochastic Programming Model

The model includes a set of binary decision variables modeling the selection of process capacity expansions and a set of continuous variables determining the operating plan of the chemical complex (operating levels, purchases, and sales). Some of these decisions must be made with incomplete information about the future (first-stage decisions). Then, as some of the uncertainties are revealed, the remaining decisions will be made (second-stage decisions). In this section, we assume that the random events which occur at the second stage are described by finitely many, mutually exclusive scenarios that are independent of the first-stage decisions. These second-stage scenarios are denoted by the index s and are assumed to occur with respective probabilities p_s, s = 1, ..., NS. A subsequent section discusses the case of random variables that assume values from continuous probability distribution functions. Compared to optimization models used to describe the deterministic problem (Sahinidis et al., 1989), the following two-stage stochastic programming model for process planning accounts for the different scenarios that might occur. This is achieved by introducing the stochastic index s in the space of problem variables:

Model SP:

max NPV = \sum_{s=1}^{NS} \sum_{j=1}^{NC} \sum_{l=1}^{NM} \sum_{t=1}^{NT} p_s \gamma_{jlts} S_{jlts} - \sum_{i=1}^{NP} \sum_{t=1}^{NT} (\alpha_{it} E_{it} + \beta_{it} y_{it}) - \sum_{s=1}^{NS} \sum_{i=1}^{NP} \sum_{t=1}^{NT} p_s \delta_{its} W_{its} - \sum_{s=1}^{NS} \sum_{j=1}^{NC} \sum_{l=1}^{NM} \sum_{t=1}^{NT} p_s \Gamma_{jlts} P_{jlts}   (1)

s.t.

y_{it} E^L_{it} \le E_{it} \le y_{it} E^U_{it},  i = 1, ..., NP; t = 1, ..., NT   (2)

Q_{it} = Q_{i,t-1} + E_{it},  i = 1, ..., NP; t = 1, ..., NT   (3)

\sum_{t=1}^{NT} y_{it} \le NEXP(i),  i \in I' \subset \{1, 2, ..., NP\}   (4)

\sum_{i=1}^{NP} (\alpha_{it} E_{it} + \beta_{it} y_{it}) \le CI(t),  t \in T' \subset \{1, 2, ..., NT\}   (5)

W_{its} \le Q_{it},  i = 1, ..., NP; t = 1, ..., NT; s = 1, ..., NS   (6)

\sum_{l=1}^{NM} P_{jlts} + \sum_{i=1}^{NP} \eta_{ij} W_{its} = \sum_{l=1}^{NM} S_{jlts} + \sum_{i=1}^{NP} \mu_{ij} W_{its},  j = 1, ..., NC; t = 1, ..., NT; s = 1, ..., NS   (7)

a^L_{jlts} \le P_{jlts} \le a^U_{jlts},  j = 1, ..., NC; l = 1, ..., NM; t = 1, ..., NT; s = 1, ..., NS   (8)

d^L_{jlts} \le S_{jlts} \le d^U_{jlts},  j = 1, ..., NC; l = 1, ..., NM; t = 1, ..., NT; s = 1, ..., NS   (9)

y_{it} = 0 or 1,  i = 1, ..., NP; t = 1, ..., NT   (10)

E_{it}, Q_{it}, W_{its}, P_{jlts}, S_{jlts} \ge 0,  \forall i, j, l, t, s   (11)

In (1), the overall objective is to maximize the expected net present value over the two stages of the capacity expansion project. In this equation, NPV is defined as the expectation of the sales revenue minus the sum of the first-stage’s investment, the expectation of the


second-stage’s operating costs, and the expectation of the raw materials costs. The investment term includes a cost, \alpha, proportional to the size of the capacity expansion, as well as a fixed-charge cost, \beta, to account for economies of scale. The objective function terms are taken over all involved processes (i = 1, ..., NP), chemicals (j = 1, ..., NC), markets (l = 1, ..., NM), time periods (t = 1, ..., NT) of the planning horizon, and scenarios per time period (s = 1, ..., NS). Constraint (2) provides variable lower and upper bounds for the capacity expansions. It forces capacity expansions to zero whenever the corresponding binary decision variable is zero. Equation 3 defines the total capacity available for process i at each time period t. Inequalities (4) and (5) express limits on the number of expansions of processes and on the capital available for investment during some time periods, respectively. Inequality (6) ensures that the operating level of a process under any scenario cannot exceed the installed capacity. Note that in (6) the process capacity is required to be the same under all possible scenarios as a result of the here-and-now policy we have adopted for expansions. We use (7) to express the material balances for each process. The parameters \eta and \mu in this equation are simply stoichiometric coefficients reflecting the process recipes. Finally, (8) and (9) express lower and upper bounds for raw material availabilities and product demands under all possible scenarios. Observe that, since for the second-stage problem there are no constraints linking time periods, the different scenarios in different time periods have been treated independently in the above formulation.
In this way, if we assume that availabilities and demands are independent random parameters and that each random parameter can take any one out of m possible discrete values, then there will be NS = m^{NC} scenarios per time period, whereas the total number of scenarios over the horizon is m^{NC \times NT}. If, on the other hand, one wishes to consider short- or medium-term planning in which inventory might be carried over from one time period to the next, then the number of scenarios that must be considered jointly equals NS = m^{NC \times NT}. The inclusion of inventory constraints into the above formulation is straightforward.

A recourse cost function can be easily accommodated in the above formulation. In particular, if violations of the second-stage constraints (6)-(9) are allowable for a penalty (e.g., outsourcing of production requirements), one can incorporate additive corrections in the left/right-hand sides of the constraint set and add the corresponding penalty terms in the objective. The mathematical structure of the problem remains the same, and the algorithms that will be developed in subsequent sections are still applicable.

Above, we assumed that the optimization model must be solved well ahead of its actual implementation. Therefore, the operational variables of the first time period were considered as second-stage variables. It should be clear that the definition of first- and second-stage variables is not restrictive and that some operational variables could be included among the set of first-stage variables simply by dropping the index s from their definition in the formulation. Treating capacity expansion decisions as part of the second-stage decision variables might be more natural in certain applications. In this case, however, a binary decision variable would have to be defined for each possible second-stage scenario, and the combinatorial complexity of the problem would, in turn, be increased

exponentially. Nevertheless, the conceptual framework of the approach taken in this paper is still applicable to this case.

SP is a two-stage mixed-integer linear programming (MILP) model. This model is difficult to solve directly since, in addition to involving integer variables, the number of variables and constraints can obviously become very large as NS, the number of scenarios, grows. The number of scenarios, in turn, grows exponentially in terms of the number of random parameters, as shown above. Benders decomposition has been proposed as a method for dividing mixed-integer models into their integer and continuous components (Benders, 1962; Geoffrion, 1972). It has also been used in stochastic programming for breaking the model down into small components that can be analyzed separately (Van Slyke and Wets, 1969; Bienstock and Shapiro, 1988; Dantzig and Glynn, 1990; Higle and Sen, 1991, 1994; Infanger, 1994). These two features of Benders decomposition can be combined to solve the stochastic MILP planning model as described in the next section.

3. Decomposition Algorithm

In this algorithm, the two-stage stochastic MILP model SP is solved iteratively through a sequence of LP subproblems and MILP master problems, with the former providing lower bounds to the expected net present value and the latter providing upper bounds. We partition the two-stage stochastic MILP problem according to the two decision stages. At the first stage the selection of capacity expansions (and perhaps some operational decisions for the first time period) must be made, and then at the second stage the operating levels, purchases, and sales must be determined. Hence, the master problem involves all the first-stage decisions y_{it}, E_{it}, and Q_{it} (and perhaps some operational variables of the first time period), and the subproblem involves the remaining variables, which are W_{its}, P_{jlts}, and S_{jlts}.
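To make the growth of SP concrete, the scenario and variable counts implied by the index sets can be tallied directly. The sketch below is illustrative only: the dimensions and the assumption of 3 outcomes per random parameter are ours, not data from the paper's test problems.

```python
from itertools import product

def scenarios_per_period(m, nc):
    """m discrete outcomes for each of nc independent random parameters
    give m**nc joint scenarios per time period."""
    return m ** nc

def sp_variable_counts(np_, nc, nm, nt, ns):
    """Count the variables of model SP.
    First stage:  y (binary), E, Q indexed over (i, t).
    Second stage: W over (i, t, s); P and S over (j, l, t, s)."""
    first = 3 * np_ * nt
    second = np_ * nt * ns + 2 * nc * nm * nt * ns
    return first, second

# Tiny sanity check: 2 parameters with 3 outcomes each -> 9 joint scenarios.
assert len(list(product(range(3), repeat=2))) == scenarios_per_period(3, 2)

# Hypothetical dimensions (NP=10, NC=6, NM=4, NT=4, 3 outcomes per demand):
ns = scenarios_per_period(3, 6)        # 729 scenarios per period
first, second = sp_variable_counts(10, 6, 4, 4, ns)
print(ns, first, second)
```

Even this modest discretization multiplies the second-stage variable count by 729, which is exactly the growth the decomposition avoids by never forming the full deterministic equivalent.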
The purpose of the master problem is to find the optimal capacity expansion decisions, whereas the purpose of the subproblem is to evaluate the effect of the master problem suggestions on the net present value by taking into account the different scenarios that might occur. Trial values, y^K_{it}, E^K_{it}, and Q^K_{it}, of the first-stage decisions in iteration K of the decomposition algorithm are passed from the master problem to the subproblem to be evaluated. The subproblem and master problems at iteration K are then as follows:

The subproblem SUB(K):

max NPV^K = \sum_{s=1}^{NS} \sum_{j=1}^{NC} \sum_{l=1}^{NM} \sum_{t=1}^{NT} p_s \gamma_{jlts} S_{jlts} - \sum_{i=1}^{NP} \sum_{t=1}^{NT} (\alpha_{it} E^K_{it} + \beta_{it} y^K_{it}) - \sum_{s=1}^{NS} \sum_{i=1}^{NP} \sum_{t=1}^{NT} p_s \delta_{its} W_{its} - \sum_{s=1}^{NS} \sum_{j=1}^{NC} \sum_{l=1}^{NM} \sum_{t=1}^{NT} p_s \Gamma_{jlts} P_{jlts}

s.t. W_{its} \le Q^K_{it},  i = 1, ..., NP; t = 1, ..., NT; s = 1, ..., NS   (12)

Constraints (7)-(9)

W_{its}, P_{jlts}, S_{jlts} \ge 0

This subproblem can be solved as a sequence of independent problems, one for each time period. In addition, the subproblems for each time period can be separated into a series of independent optimization problems, one for each scenario.

The master problem MAS(K):

max M

s.t. M \le L^k(y_{it}, E_{it}, Q_{it}, W^k_{its}, P^k_{jlts}, S^k_{jlts}),  k = 1, 2, ..., K   (13)

Constraints (2)-(5) and (10)

E_{it}, Q_{it} \ge 0

where

L^k(y_{it}, E_{it}, Q_{it}, W^k_{its}, P^k_{jlts}, S^k_{jlts}) = - \sum_{i=1}^{NP} \sum_{t=1}^{NT} (\alpha_{it} E_{it} + \beta_{it} y_{it}) + \sum_{s=1}^{NS} \sum_{j=1}^{NC} \sum_{l=1}^{NM} \sum_{t=1}^{NT} p_s (\gamma_{jlts} S^k_{jlts} - \Gamma_{jlts} P^k_{jlts}) - \sum_{s=1}^{NS} \sum_{i=1}^{NP} \sum_{t=1}^{NT} p_s \delta_{its} W^k_{its} + \sum_{i=1}^{NP} \sum_{t=1}^{NT} \sum_{s=1}^{NS} \rho^k_{its} (W^k_{its} - Q_{it})

Here \rho^k_{its} are the Lagrange multipliers of constraint (12), and W^k_{its}, P^k_{jlts}, and S^k_{jlts} constitute the optimal solution of the subproblem in iteration k. Constraint (13) is a so-called Benders cut and can be thought of as an aggregate representation of the effect of all scenarios on the net present value of the capacity expansion plan suggested by the master problem in iteration k. The cuts are initially absent from the master problem and are then sequentially added, one at a time, after solving a subproblem. Note that the above form of the Benders cut assumes that all subproblems are feasible. Subproblem feasibility is guaranteed through a technique discussed later in this section. The Benders decomposition algorithm is as follows:

Step 1. Set K = 1. Select y^K_{it}, E^K_{it}, and Q^K_{it}; set UB = +\infty and LB = -\infty.

Step 2. Solve subproblem SUB(K).
Step 2.1. Let the optimal solution be W^K_{its}, P^K_{jlts}, and S^K_{jlts}.
Step 2.2. Update the lower bound by setting LB = max{LB, NPV^K}.

Step 3. Solve the master problem MAS(K) to determine y^{K+1}_{it}, E^{K+1}_{it}, and Q^{K+1}_{it} and an upper bound to NPV. Set UB equal to the solution value of this MAS(K).

Step 4. If UB - LB < tolerance, stop. Otherwise, set K = K + 1, and go to step 2.

In step 1, the initial values y^1_{it}, E^1_{it}, and Q^1_{it} can be obtained by solving the expected-value problem (EVP), which is obtained by replacing each random parameter by its expectation. The expected-value problem provides an upper bound for SP (Birge, 1985). In step 3, it is not necessary to solve the master problems to optimality in each iteration. As long as a feasible integer solution is found such that the lower and upper bounds of the current iteration do not cross, this solution can be used to solve the subproblem of the next iteration (Geoffrion and Graves, 1974). This little-known variant of Benders decomposition greatly reduces the CPU requirements for solving the master problem in a computer implementation of the algorithm. Yet, it appears that this approach for solving the master problem has not been incorporated in stochastic programming applications.

Solving the Decomposition Subproblems. Step 2 of the algorithm deserves a detailed discussion, as the solution of the subproblems presents a major difficulty. Since the number of scenarios may easily get out of hand, it may not always be possible to solve all subproblems in order to obtain an exact Benders cut. For this reason, we use Monte Carlo sampling to obtain an estimate of the cut. Monte Carlo sampling has been widely used in many applications. In stochastic programming, sampling has been incorporated into several solution algorithms (Jagannathan, 1985; Dantzig and Glynn, 1990; Infanger, 1994; Higle and Sen, 1991, 1994). Recall that the objective function of the subproblem involves the evaluation of an expectation. We denote this expectation as follows:

z = \sum_{s=1}^{NS} p_s g(x_s)   (14)

where x_s denotes the random parameter vector under scenario s, g(x_s) denotes the objective function, and p_s denotes the probability of scenario x_s. In a Monte Carlo method, z in (14) is estimated by random sampling. Let a_1, a_2, ..., a_N be independent random samples drawn from the discrete distribution of the random parameters. Assuming that the objective function value is normally distributed, we obtain the Monte Carlo estimator (Hammersley and Handscomb, 1964) of the expectation of the objective as follows:

\bar{z} = \frac{1}{N} \sum_{n=1}^{N} g(a_n)   (15)
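In code, the estimator (15) and a t-based confidence half-width take only a few lines. The profit function, the demand distribution, and the hardcoded t value in the sketch below are illustrative assumptions, not data from the paper:

```python
import math
import random

def mc_estimate(g, sampler, n, t_value):
    """Monte Carlo estimate of z = E[g(x)] with a 95% half-width r.
    g       : second-stage objective evaluated at a sampled parameter vector
    sampler : draws one sample a_n from the scenario distribution
    t_value : t_{0.025, n-1}, read from a t table by the caller"""
    values = [g(sampler()) for _ in range(n)]
    z_bar = sum(values) / n                                   # estimator (15)
    var = sum((v - z_bar) ** 2 for v in values) / (n - 1)     # sample variance
    return z_bar, t_value * math.sqrt(var / n)                # half-width r

# Toy illustration: profit 5*min(d, 8) with demand d uniform on {6, ..., 10}.
random.seed(0)
z_bar, r = mc_estimate(lambda d: 5.0 * min(d, 8),
                       lambda: random.randint(6, 10),
                       n=20, t_value=2.093)   # t_{0.025,19}
print(z_bar, r)
```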

Other sampling schemes are also possible within the above decomposition algorithm. For example, Infanger (1994) uses importance sampling with the aim of reducing the variance of the estimation. However, the computational results obtained with the sampling scheme used here are of excellent quality, as demonstrated in section 6. In particular, the quality of the solution can be easily assessed by first estimating the variance of the estimation in (15):

\hat{\sigma}^2 = \frac{1}{N-1} \sum_{n=1}^{N} (g(a_n) - \bar{z})^2   (16)

Once this variance is computed, it can be used to calculate confidence intervals. For example, for sample sizes N \le 30, the 95% confidence interval of the estimation can be obtained as \bar{z} \pm r, where r = t_{0.025,N-1} \hat{\sigma}/\sqrt{N} and t_{0.025,N-1} is the t value with N - 1 degrees of freedom. Finally, it should be mentioned that sampling provides only an estimate of the expected value of the subproblems and the expected dual multipliers. The end result is an estimate of the Benders cut itself. Therefore, strict convergence proofs for this type of


algorithm can only be provided in an asymptotic sense as the number of samples becomes very large. In particular, the strong law of large numbers (e.g., Hogg and Craig, 1978) guarantees that the Monte Carlo procedure converges almost surely in the sense that

\lim_{N \to \infty} \frac{1}{N} \sum_{n=1}^{N} g(a_n) = z
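The full loop of steps 1-4 can be traced on a small instance. The following sketch is a one-dimensional toy invented for illustration, with exact scenario enumeration instead of sampling and a fine grid standing in for the MILP master solver:

```python
def benders_toy():
    """Steps 1-4 on a 1-D toy: pick capacity Q in [0, 10] at unit cost 3;
    scenario s sells min(Q, d_s) at price 5, so
    NPV(Q) = -3Q + sum_s p_s * 5 * min(Q, d_s)."""
    demands, probs = [2.0, 6.0, 9.0], [0.3, 0.5, 0.2]
    price, cost = 5.0, 3.0

    def subproblem(q):
        # Second stage, scenario by scenario: expected revenue and the
        # expected multiplier of W_s <= Q (a subgradient of the revenue).
        rev = sum(p * price * min(q, d) for p, d in zip(probs, demands))
        dual = sum(p * price for p, d in zip(probs, demands) if d > q)
        return rev, dual

    cuts, q_k, lb = [], 0.0, float("-inf")   # step 1
    for _ in range(20):
        rev, dual = subproblem(q_k)          # step 2
        lb = max(lb, -cost * q_k + rev)      # step 2.2: lower bound
        cuts.append((rev, dual, q_k))        # data defining the Benders cut
        # Step 3: master problem, max_Q min_k cut_k(Q), on a fine grid.
        ub, best_q = float("-inf"), q_k
        for i in range(1001):
            q = 10.0 * i / 1000.0
            m = min(-cost * q + r + g * (q - qk) for r, g, qk in cuts)
            if m > ub:
                ub, best_q = m, q
        if ub - lb < 1e-6:                   # step 4: convergence test
            break
        q_k = best_q
    return q_k, lb

q_star, npv = benders_toy()
print(q_star, npv)   # converges to Q = 6 with expected NPV = 6
```

The lower bound comes from evaluating trial capacities in the subproblem, the upper bound from maximizing the cut envelope, mirroring the roles the two problems play in the full algorithm.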

Although theoretical convergence of the decomposition is obtained only at the limit as the number of samples, N, is made very large, the computational results of section 6 demonstrate that a very small number of samples is sufficient to obtain optimal or excellent near-optimal solutions.

Master Problem Solutions Which Guarantee Feasible Subproblems. When a standard Benders decomposition algorithm without sampling is applied to solve SP or when a branch-and-bound algorithm is used to directly solve this model, the solution in terms of the first-stage variables is guaranteed to satisfy the second-stage constraints under any scenario, as all constraints (6)-(9) are present in the model. However, when sampling (Monte Carlo or Gaussian quadrature, for example) is used to solve the subproblems as proposed above, only a small fraction of the total possible number of scenarios is generated. Therefore, the trial points generated by the master problem may not necessarily be feasible for all realizations of the random parameters. As solutions leading to infeasible operating plans are unacceptable, their generation appears to be the major drawback of sampling-based stochastic optimization techniques. However, there are two ways to ensure subproblem feasibility, thus making possible the application of such methods. One possibility, albeit rather cumbersome, is to solve a series of feasibility subproblems to identify constraints to be included in the master problem. The literature on the flexibility problem offers systematic means for identifying these constraints (e.g., Straub and Grossmann, 1993). A more straightforward approach, albeit not applicable to other two-stage stochastic programming models but only to the process planning problem under consideration, is to add feasibility constraints to the master problem as follows:

W'_{it} \le Q_{it},  i = 1, ..., NP; t = 1, ..., NT   (17)

\sum_{l=1}^{NM} P'_{jlt} + \sum_{i=1}^{NP} \eta_{ij} W'_{it} = \sum_{l=1}^{NM} S'_{jlt} + \sum_{i=1}^{NP} \mu_{ij} W'_{it},  j = 1, ..., NC; t = 1, ..., NT   (18)

\max_s a^L_{jlts} \le P'_{jlt} \le \min_s a^U_{jlts},  j = 1, ..., NC; l = 1, ..., NM; t = 1, ..., NT   (19)

\max_s d^L_{jlts} \le S'_{jlt} \le \min_s d^U_{jlts},  j = 1, ..., NC; l = 1, ..., NM; t = 1, ..., NT   (20)

The variables W', P', and S' are not indexed over the possible scenarios s. Constraints (17)-(20) correspond to only one possible scenario of raw material availabilities and product demands. Since all scenarios must be satisfied, these constraints are valid constraints for the master program. Observe that (17)-(20) correspond to the worst-case scenario possible for supplies and demands. Therefore, any capacity expansion sequence that satisfies (17)-(20) will also satisfy (6)-(9). Constraints (17)-(20) are much smaller in number than (6)-(9) and can therefore be handled more easily as part of the master problem. Moreover, this augmented master problem has a mathematical form which is identical to that of deterministic models (e.g., Sahinidis et al., 1989). Therefore, the cutting plane algorithms and valid inequalities developed in earlier works apply directly in this context and can be used to expedite the solution of the master problem. Note that (17)-(20) can be used to tighten the master problem representation independently of whether sampling is used for the solution of the subproblems or not. Their use expedites convergence of the master problem and leads to a decomposition scheme which, unlike earlier schemes (e.g., Ierapetritou and Pistikopoulos, 1994), does not require any complex feasibility analysis of the subproblem.

4. Extension to Problems with Continuous Random Variables

Above, we considered planning problems with random variables that assume values from discrete probability distributions. It should be obvious that the same sampling scheme can be used for the case of continuous distribution functions as well. For the continuous case, the summation in (14) must be replaced by an integral:

z = E(\theta) = \int_{\theta \in \Theta} g(\theta) \nu(\theta) \, d\theta

where \nu(\theta) now denotes the probability density function of the continuous random parameter vector \theta, which takes values from \Theta. Equation 15 is still valid for this case, and the Benders decomposition scheme described above can be applied. Alternatively, branch-and-bound can be used after discretizing the probability distribution space. No a priori discretization is required. One only needs to specify the number of scenarios to be generated, and the same Monte Carlo sampling scheme used above can be used here to sample the event space and generate a small number of scenarios that will provide a good approximation of the expectation. When the decomposition algorithm is used to solve this problem, the solutions of the subproblems that are generated by sampling can be used to estimate the expectation and to calculate confidence intervals for the estimation of the expectation using (16), as discussed in the previous section. Similarly, when branch-and-bound is used after discretization, the optimal values of the variables corresponding to the different scenarios can be used to calculate the expectation and confidence intervals, as (16) still applies.
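A sketch of the continuous case, with an assumed uniform demand density chosen purely for illustration: \theta is drawn directly from \nu(\theta) and g is averaged, so no discretization of \Theta is ever constructed.

```python
import math
import random

def estimate_integral(g, draw, n):
    """Monte Carlo estimate of z = integral of g(theta)*nu(theta) dtheta:
    draw theta ~ nu directly and average g(theta)."""
    vals = [g(draw()) for _ in range(n)]
    z_bar = sum(vals) / n
    var = sum((v - z_bar) ** 2 for v in vals) / (n - 1)
    return z_bar, math.sqrt(var / n)          # estimate and its standard error

# Toy second-stage revenue 5*min(6, theta) with theta ~ Uniform(2, 10);
# the exact expectation is 5 * (0.5*4 + 0.5*6) = 25.
random.seed(1)
z_bar, se = estimate_integral(lambda d: 5.0 * min(6.0, d),
                              lambda: random.uniform(2.0, 10.0),
                              n=500)
print(z_bar, se)
```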

5. Comparison of the Stochastic and Fuzzy Programming Approaches to Process Planning So far, the paper has developed a stochastic programming approach to process planning. In this section, we are interested in comparing the stochastic to a fuzzy programming approach which has attracted a great deal of attention in the literature. First, we briefly review the basics of the fuzzy optimization method. Subsequently, we develop a method for comparing the two formulations. The main difference between the stochastic and fuzzy optimization approaches is the way they model uncer-

Ind. Eng. Chem. Res., Vol. 35, No. 11, 1996 4159

tainty. In the stochastic programming case, uncertainty is modeled through discrete or continuous probability functions. Fuzzy programming, on the other hand, considers the random parameters as fuzzy numbers and treats the constraints as fuzzy sets. Some constraint violation is allowed, and the degree of satisfaction of a constraint is defined as the membership function of the constraint. For example, consider a linear constraint aᵀx ≤ β in terms of the decision vector x, and assume that the random right-hand side β can assume values in the range from b to b + Δb, with Δb ≥ 0. Then, the membership function u(x) of this constraint is defined as:

u(x) = 1                           if aᵀx ≤ b
u(x) = 1 − (aᵀx − b)/Δb            if b < aᵀx ≤ b + Δb
u(x) = 0                           if b + Δb < aᵀx

Although other types of membership functions are also possible, the above linear membership function is the one typically used. Objective functions in fuzzy mathematical programming are treated as constraints, with the lower and upper bounds of these constraints defining the decision maker's expectations. For the process planning model, it is possible to consider uncertainty in the right-hand sides, the objective function, and the constraint coefficients, as has recently been shown by Liu and Sahinidis (1996b). To simplify the presentation, in this paper we consider only uncertainty of market demands and supplies. The notation in developing the fuzzy programming model is similar to that of the stochastic programming model; the index s denoting scenarios is naturally dropped. It will be assumed that the purchase and sale amounts are fuzzy variables that are allowed to assume values in the ranges [aU_jlt, aU_jlt + ΔaU_jlt] and [dU_jlt, dU_jlt + ΔdU_jlt], respectively, where Δ represents the tolerance, or spread, of a fuzzy number. The membership functions of constraints (8) and (9) are then defined as follows:

u(8)(Pjlt) = 1                                  if Pjlt ≤ aU_jlt
u(8)(Pjlt) = 1 − (Pjlt − aU_jlt)/ΔaU_jlt        if aU_jlt < Pjlt < aU_jlt + ΔaU_jlt
u(8)(Pjlt) = 0                                  if aU_jlt + ΔaU_jlt ≤ Pjlt

u(9)(Sjlt) = 1                                  if Sjlt ≤ dU_jlt
u(9)(Sjlt) = 1 − (Sjlt − dU_jlt)/ΔdU_jlt        if dU_jlt < Sjlt < dU_jlt + ΔdU_jlt
u(9)(Sjlt) = 0                                  if dU_jlt + ΔdU_jlt ≤ Sjlt

The lower bound ν− will be obtained as the optimal value of the following problem:

max NPV = Σ_{j=1}^{NC} Σ_{l=1}^{NM} Σ_{t=1}^{NT} γjlt Sjlt − Σ_{j=1}^{NC} Σ_{l=1}^{NM} Σ_{t=1}^{NT} Γjlt Pjlt − Σ_{i=1}^{NP} Σ_{t=1}^{NT} (αit Eit + βit yit + δit Wit)    (21)

s.t.  Wit ≤ Qit                                  i = 1, NP; t = 1, NT    (22)

Σ_{l=1}^{NM} Pjlt + Σ_{i=1}^{NP} ηij Wit = Σ_{l=1}^{NM} Sjlt + Σ_{i=1}^{NP} μij Wit    j = 1, NC; t = 1, NT    (23)

aL_jlt ≤ Pjlt ≤ aU_jlt                           j = 1, NC; l = 1, NM; t = 1, NT
dL_jlt ≤ Sjlt ≤ dU_jlt                           j = 1, NC; l = 1, NM; t = 1, NT    (24)

Eit, Qit, Wit, Pjlt, Sjlt ≥ 0                    ∀ i, j, l, t

Constraints (2)-(5) and (10)

The upper bound ν+ will be obtained by solving the following problem:

ν+ = max NPV in (21)
s.t.  aL_jlt ≤ Pjlt ≤ aU_jlt + ΔaU_jlt           ∀ j, l, t
      dL_jlt ≤ Sjlt ≤ dU_jlt + ΔdU_jlt           ∀ j, l, t
and constraints (22) and (23) as well as constraints (2)-(5) and (10).

The bounds ν+ and ν− define the decision maker's expectations: a net present value of at least ν+ is fully satisfactory, whereas a net present value of at most ν− is unsatisfactory. Let u(NPV) be the membership function of the net present value, defined as:

u(NPV) = 0                              if NPV ≤ ν−
u(NPV) = 1 − (ν+ − NPV)/(ν+ − ν−)       if ν− < NPV < ν+
u(NPV) = 1                              if ν+ ≤ NPV

The fuzzy programming model then maximizes λ, the smallest of these degrees of satisfaction, subject to the remaining constraints of the planning model.
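The piecewise-linear membership functions above are straightforward to evaluate. The sketch below (the helper name is illustrative, not from the paper) returns the degree of satisfaction of a constraint with fuzzy right-hand side b and tolerance Δb:

```python
def membership(lhs, b, delta_b):
    """Degree of satisfaction of a fuzzy constraint a'x <= b with tolerance delta_b >= 0."""
    if lhs <= b:
        return 1.0                          # constraint fully satisfied
    if lhs <= b + delta_b:
        return 1.0 - (lhs - b) / delta_b    # linear decrease across the tolerance band
    return 0.0                              # violated beyond the tolerance
```

For example, with b = 10 and Δb = 5, a left-hand side value of 12.5 yields a degree of satisfaction of 0.5.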
Table 4. Computational Results (CPU s)

problem   branch-and-bound   Benders   Benders with sampling
1         56                           2
2         55                           3
3         59                           2
4         43                           2
5         >3600              >3600     69
6         >3600              >3600     757
7         >3600              >3600     364
8         >3600              >3600     6
9         >3600              >3600     18
average                                136

Although some of the problems considered could not be solved with branch-and-bound or standard Benders decomposition, they can be solved with the proposed algorithm within a few CPU minutes on the Sun SPARC 2 workstation. The times shown are for the solution of the optimization subproblems only; they do not include the times needed to actually generate the problems corresponding to the different scenarios. When sampling is used, this generation time is negligible, as only a very small number of scenarios is generated. However, when branch-and-bound or Benders decomposition without sampling is used, this time was very significant. For the problems for which no solution was obtained within 1 CPU h, branch-and-bound was still attempting to solve the root-node LP, while Benders (without sampling) was still attempting to solve the subproblem of the first iteration. For the results with Benders decomposition with sampling in Table 4, 20 samples were used for the largest problems 5-7 and 10 samples for all the rest. For these tests, a solution with no capacity expansions was used to initialize the decomposition algorithms.

6.2. Problems with Continuous Random Variables. In this subsection, we consider planning problems that involve random variables that assume values from continuous probability distributions. First, Monte Carlo sampling is used to discretize the continuous probability distributions for a small example, and branch-and-bound is subsequently applied to this problem. Finally, larger problems are solved using the decomposition algorithm to demonstrate that the algorithm can solve problems of industrial relevance.

Example. This problem is taken from Ierapetritou and Pistikopoulos (1994) and involves only two random parameters, θ1 and θ2:

max profit = 100S13 + 80S14 + 10I11^f + 15I12^f + 20I13^f + 25I14^f − 10P11 − 15P12 − 10P21 − 15P22 − 30R11 − 35R12 + 90S23 + 80S24 + 10I21^f + 15I22^f + 25I23^f + 20I24^f − 30R21 − 35R22 − 10I21^in − I22^in − 30I23^in − 30I24^in

s.t.
Pk1 + Ik1^in = 0.4Rk1 + Ik1^f
Pk2 + Ik2^in = 0.6Rk1 + Rk2 + Ik2^f
Ik3^in + 0.5Rk1 = Ik3^f + Sk3
Ik4^in + 0.6Rk2 = Ik4^f + Sk4
20 ≤ Rk1 ≤ 100     30 ≤ Rk2 ≤ 100
20 ≤ P11 ≤ 100     30 ≤ P12 ≤ 200
20 ≤ P21 ≤ 100     10 ≤ P22 ≤ θ1
20 ≤ S13 ≤ 40      20 ≤ S14 ≤ 85
θ2 ≤ S23 ≤ 40      20 ≤ S24 ≤ 85
0 ≤ Ikj^f ≤ 20
I1j^in = 0
I2j^in = I1j^f

Figure 2. Flow diagram for a small chemical complex.

Table 5. Computational Results for Example

samples   CPU s   expected profit   r
2         0.05    831.8             46.2
5         0.07    829.5             8.9
10        0.17    828.6             3.8
20        0.30    828.6             2.5
30        0.52    829.7             1.9
50        1.12    830.4             1.5
100       3.08    830.1             1.1
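The Monte Carlo discretization used for this example can be sketched as follows: drawing N equiprobable samples of (θ1, θ2) converts the continuous problem into a finite-scenario one. The function and parameter names are illustrative, and 2.5 is taken here as the standard deviation of the stated normal distributions:

```python
import random

def sample_scenarios(n, seed=0):
    # Draw n equiprobable scenarios (theta1, theta2) from N(50, 2.5) and N(20, 2.5),
    # the distributions assumed for the raw-material availability and the demand.
    rng = random.Random(seed)
    scenarios = [(rng.gauss(50.0, 2.5), rng.gauss(20.0, 2.5)) for _ in range(n)]
    probabilities = [1.0 / n] * n  # each sample is equally likely
    return scenarios, probabilities
```

Each sampled pair then defines one copy of the second-stage constraints in the deterministic equivalent that branch-and-bound solves.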

There are two decision stages in this problem, and uncertainty is involved only in the second stage.

Ind. Eng. Chem. Res., Vol. 35, No. 11, 1996, 4163

The problem formulation involves the two decision stages (k = 1, 2), four chemicals (j = 1, ..., 4), and two processes (i = 1, 2). The problem is stated in terms of purchases (Pkj), sales (Skj), operating levels (Rkj), and initial and final inventories (Ikj^in, Ikj^f). The corresponding chemical complex involves two processes, as shown in Figure 2. Chemical 1 is input to process 1, whereas chemical 2 is input to both processes 1 and 2. Chemicals 3 and 4 are produced by processes 1 and 2, respectively. There are two random parameters, θ1 and θ2, which denote the availability of the second raw material and the demand for the product of process 1, respectively. It is assumed that the two random parameters follow the normal distributions N(50, 2.5) and N(20, 2.5), respectively. There are no capacity expansion decisions in this problem, and the resulting model is therefore a stochastic LP. As this is a small problem, we directly solve the SP model that is obtained after Monte Carlo discretization of the probability space. Table 5 summarizes results with this approach as a function of the number of samples used to produce the discretization. Using (16) to calculate the deviation of the estimate of the expectation, we find that the 95% confidence interval, z̄ ± r, where r = t_{0.025,N−1} σ̂/√N, shrinks as the number of samples is increased. It can be seen from the last column of the table that an excellent estimate of the optimal expected profit is obtained even with as few as 5 samples per random variable. Moreover, even with as many as 100 samples, the algorithm requires only about 3 CPU s on a Sun SPARC 2 workstation, which is 2 orders of magnitude faster than the 254 CPU s taken by the Benders decomposition/Gaussian quadrature algorithm of Ierapetritou and Pistikopoulos (1994) to solve the same problem on the same workstation.
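The half-interval r reported in the tables follows directly from the sample mean and standard deviation. A minimal sketch (the Student t value would normally be looked up for the desired confidence level and N − 1 degrees of freedom):

```python
import math
import statistics

def half_interval(samples, t_value):
    # r = t * sigma_hat / sqrt(N): half-width of the confidence interval
    # around the sample mean z_bar.
    sigma_hat = statistics.stdev(samples)  # sample standard deviation
    return t_value * sigma_hat / math.sqrt(len(samples))
```

For instance, with samples [1, 2, 3, 4, 5] and t = 2.776 (95% level, 4 degrees of freedom), r is approximately 1.96.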
The solution found for this problem using 30 samples has an expected total profit of 829, with the following values for the first-stage operational variables: (R11, R12, S13, S14, P11, P12) = (80, 36.667, 40, 20, 39.613, 104.667). The solutions obtained with other sample sizes are very similar. For example, with 100 samples, we have an expected total profit of 830.1 with the vector of first-stage operational variables equal to (80, 36.667, 40, 20, 40.213, 104.667).

Solution of Large-Scale Problems. The same data set used to evaluate the performance of the algorithm for the discrete probability case is used here with continuous random variables. The problems are considered with all the availabilities and demands being continuous random variables. These random variables are assumed to follow normal distributions with a mean equal to the value of the parameter in the deterministic problem used in Liu and Sahinidis (1995, 1996b). The standard deviation is taken as 10% of the value of the mean. This produces a level of about ±20% uncertainty around the nominal values of availabilities and demands. Results with these problems are presented in Table 6, where Nθ denotes the number of random variables. Benders decomposition with sampling was used; the number of samples used to solve the subproblems was 30 for problems 5-7 and 20 for all other cases. For all problems, a solution with no capacity expansions was used to initialize the algorithm. It can be seen that problems with as many as 24 continuous random variables, 10 processes, and 4 time periods can be solved in a matter of minutes on the Sun SPARC 2 workstation. Note from the last column of this table that the algorithm provides a 95% confidence interval NPV ± r
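The role of sampling inside the decomposition can be illustrated with a sample-average approximation of an expected recourse cost. The newsvendor-style recourse function below is a hypothetical stand-in for the planning subproblem, not the model of this paper:

```python
def saa_expected_recourse(x, thetas, recourse):
    # Sample-average approximation: E[Q(x, theta)] ~ (1/N) * sum_s Q(x, theta_s),
    # where the theta_s are independent draws of the random parameters.
    return sum(recourse(x, theta) for theta in thetas) / len(thetas)

def shortfall_cost(x, theta):
    # Hypothetical recourse: demand theta in excess of capacity x costs 5 per unit.
    return 5.0 * max(0.0, theta - x)
```

With thetas drawn from the demand distribution, saa_expected_recourse(x, thetas, shortfall_cost) estimates the expected second-stage cost of a first-stage capacity decision x; as the tables show, a modest N already yields tight confidence intervals.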

Table 6. Computational Results for Continuous Random Variable Problems Using Benders Decomposition with Sampling

              problem size             solution
problem   NC   NP   NT   Nθ   0-1   iterations   CPU s   100r/NPV
1         3    3    3    9    9     14           7       0.5
2         3    3    3    9    9     14           7       0.6
3         3    3    3    9    9     14           7       0.5
4         3    3    3    9    9     8            4       0.4
5         6    10   4    24   40    47           310     0.4
6         6    10   4    24   40    51           1127    0.2
7         6    10   4    24   40    51           1437    0.2
8         4    6    3    12   18    6            9       0.3
9         5    6    3    15   18    18           55      0.5
average                             25           329     0.4

Table 7. Comparative Results with Stochastic and Fuzzy Programming Models

          stochastic programming        fuzzy programming
problem   CPU s    investment       λ      CPU s    investment
1         2        213              0.5    1        218
2         3        215              0.5    1        225
3         2        213              0.5    3        218
4         2        316              0.5    1        307
5         69       2631             0.5    10       2020
6         757      2233             0.5    12       1679
7         364      1743             0.5    106      1546
8         6        1263             0.6    4        1251
9         18       1115             0.5    7        1162

which represents a very small percentage of the expected NPV (0.4% on average). This demonstrates that only a small number of samples is needed in order to converge the confidence intervals and thus to obtain solutions of excellent quality. In going from Table 2 to Tables 4 and 6, the computational requirements of the decomposition algorithm increase. This is expected as these tables consider progressively more complex problems. The case of continuous random variables is more difficult than its discrete counterpart. Experiments using distributions other than normal to generate values for the random variables yielded similar results. 6.3. Comparison of the Stochastic and Fuzzy Optimization Approaches. Finally, the same data set used in the two previous subsections was modified in order to compare the fuzzy and stochastic programming approaches. In formulating the fuzzy optimization model FP, a positive right-hand side tolerance of 50% of the deterministic values was used along with a linear membership function. In constructing the required input parameters for the stochastic programming model SP, we discretized the random parameter space into 5 scenarios as described in section 5 and shown in Figure 1. Results with the two models are presented in Table 7. It can be seen that, although the fuzzy model is easier to solve, the corresponding stochastic model can also be solved in modest computational time. At the same time, with virtually identical investment costs for the small problems (problems 1-4 and 8-9) and an investment cost increase of 30%, 32%, and 12% for problems 5-7, respectively, the stochastic model provides solutions which are guaranteed to be feasible for all possible discrete outcomes of the random parameters. On the other hand, the fuzzy solution will satisfy approximately 50% of the range of the random parameters as the fourth column of this table indicates. 
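For a fixed plan, the degree of satisfaction λ reported in Table 7 is the smallest membership value over the fuzzy objective and all fuzzified constraints (the max-min criterion of Bellman and Zadeh). A sketch with illustrative names:

```python
def overall_satisfaction(npv, nu_minus, nu_plus, constraint_memberships):
    # u(NPV): 0 below nu_minus, 1 above nu_plus, linear in between.
    if npv <= nu_minus:
        u_npv = 0.0
    elif npv >= nu_plus:
        u_npv = 1.0
    else:
        u_npv = (npv - nu_minus) / (nu_plus - nu_minus)
    # lambda = min over the objective and all constraint membership values.
    return min([u_npv] + list(constraint_memberships))
```

A fuzzy-optimal plan maximizes this quantity; λ = 0.5 means the least-satisfied membership function is met only to degree 0.5, consistent with the fourth column of Table 7.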
In addition, the stochastic solution is optimal in a probabilistic sense, whereas no optimality claims can be made for the fuzzy model solution.


7. Conclusions

As the chemical industry becomes increasingly competitive, tools to hedge against uncertainty become increasingly important. This paper developed a two-stage stochastic programming approach to the problem of planning in the process industries. Problems with up to 10 processes, 4 products, 6 chemicals, and 24 random parameters (with up to 524 scenarios for the discrete distribution case) were solved in at most a few CPU minutes on a workstation using a combination of Benders decomposition with Monte Carlo sampling. This was possible because only a small number of samples was sufficient to converge the confidence intervals of the solutions, which demonstrates that the proposed model and decomposition algorithm can be readily applied to problems of industrial relevance. A comparison of fuzzy and stochastic programming models favors the use of the stochastic model even when the complete probability distributions of the random parameters are unavailable. As a small number of iterations of the decomposition method was sufficient for convergence, the current computational limits of the proposed method for stochastic process planning appear to follow closely those of MILP models for deterministic problems. Therefore, one can expect to solve in reasonable computing time problems with as many as 40 processes, a few dozen products, and 4-8 time periods. It should be noted, though, that solutions obtained by means of sampling-based methods are optimal only in a statistical sense. Providing a deterministic guarantee of optimality for problems of the size solved here does not appear to be within the reach of current optimization technology. Nonetheless, the very small 95% confidence intervals observed in this paper indicate that the proposed method yields optimal or excellent near-optimal solutions and is efficient enough to solve large-scale problems even on-line.
Acknowledgment

Partial financial support from the EXXON Education Foundation and from the National Science Foundation through CAREER award DMI-9502722 to N.V.S. is gratefully acknowledged.

Nomenclature

Indices
i = processes (i = 1, ..., NP)
j = chemicals (j = 1, ..., NC)
k = iteration counter in the decomposition algorithm (k = 1, ..., K)
l = markets (l = 1, ..., NM)
t = time periods (t = 1, ..., NT)
s = scenarios (s = 1, ..., NS)

Parameters
a1, a2, ..., aN = independent random samples
aL_jlts, aU_jlts = lower and upper bounds for purchases of product j from market l in period t under scenario s
CI(t) = capital investment limitation corresponding to period t
dL_jlts, dU_jlts = lower and upper bounds for sales of chemical j in market l during period t under scenario s
EK_it = capacity expansion of process i in period t in iteration K of the decomposition algorithm (solution of the master problem)
EL_it, EU_it = lower and upper bounds for capacity expansions of process i in period t
g(xs) = objective function of scenario s

LB = lower bound for optimal net present value
N = number of samples
NC = number of chemicals in the chemical complex
NEXP(i) = maximum allowable number of expansions for process i
NM = number of markets from which raw materials can be purchased or to which products can be sold
NP = number of processes in the chemical complex
NPV^K = subproblem objective function value in iteration K
NS = number of scenarios per time period
ns = total number of scenarios over the entire planning horizon
NT = number of time periods in the planning horizon
Nθ = number of random variables following continuous probability distributions
ps = probability for scenario s to occur (Σ_{s=1}^{NS} ps = 1)
Pk_jlts = value of variable Pjlts in the solution of the subproblem in the kth iteration of the decomposition algorithm
Qi0 = existing capacity of process i at time t = 0
QK_it = capacity of process i in period t in iteration K of the decomposition algorithm (solution of the master problem)
r = 95% confidence half-interval of the estimation
Sk_jlts = value of variable Sjlts in the solution of the subproblem in the kth iteration of the decomposition algorithm
t = Student's t distribution value
ν+ = conservative (upper) bound on the fuzzy objective
ν− = riskier (lower) bound on the fuzzy objective
ν(θ) = probability density function of the continuous random parameter vector θ
Wk_its = value of variable Wits in the solution of the subproblem in the kth iteration of the decomposition algorithm
xs = random parameter vector under scenario s
yK_it = 1 if QK_it is nonzero, and 0 otherwise
z = objective function expectation
z̄ = estimate of z
αit, βit = variable, fixed term of investment cost for process i in period t
γjlts, Γjlts = sales, purchase prices of chemical j in market l during time period t under scenario s
Δ = tolerance (spread) of a fuzzy number
δits = unit operating cost for process i during time period t under scenario s
ηij = coefficient for material balance of raw material j of process i
θ = continuous random parameter
Θ = space of θ values
μij = coefficient for material balance of product j of process i
ρk_its = Lagrange multipliers of constraint (12) in the kth iteration of the decomposition algorithm
σ̂² = variance of the estimate

Variables, Functions, and Models
Eit = capacity expansion of process i to be installed in period t (kton/yr)
Lk = Lagrangian associated with Benders cut (13)
M = decomposition master problem objective function
MAS(K) = master problem in iteration K
NPV = expected net present value ($)
Pjlts = amount of product j purchased from market l at the beginning of each year of period t under scenario s (kton/yr)
Qit = total capacity of process i available in period t (kton/yr)
Sjlts = amount of product j sold to market l at the beginning of each year of period t under scenario s (kton/yr)
SUB(K) = subproblem in iteration K
u = membership function

Wits = operating level of process i during time period t under scenario s (kton/yr)
yit = binary variable which is 1 whenever there is an expansion for process i at the beginning of time period t, and 0 otherwise
λ = fuzzy model objective function

Literature Cited

Bellman, R.; Zadeh, L. A. Decision-making in a Fuzzy Environment. Manag. Sci. 1970, 17, 141-161.
Benders, J. F. Partitioning Procedures for Solving Mixed-Variables Programming Problems. Numer. Math. 1962, 4, 238-252.
Bienstock, D.; Shapiro, J. F. Optimizing Resource Acquisition Decisions by Stochastic Programming. Manag. Sci. 1988, 34 (2), 215-229.
Birge, J. R. Aggregation Bounds in Stochastic Linear Programming. Math. Program. 1985, 31, 25-41.
Birge, J. R., Wets, R. J.-B., Eds. Stochastic Programming, Proceedings of the 5th International Conference on Stochastic Programming. Ann. Oper. Res. 1991, 30 and 31.
Brooke, A.; Kendrick, D.; Meeraus, A. GAMS: A User's Guide; The Scientific Press: Redwood City, CA, 1988.
Clay, R. L.; Grossmann, I. E. Optimization of Stochastic Planning Models. Comput. Chem. Eng. 1996, in press.
CPLEX Optimization, Inc. Using the CPLEX Callable Library and CPLEX Mixed Integer Library; CPLEX Optimization, Inc.: Incline Village, NV, 1993.
Dantzig, G. B. Linear Programming under Uncertainty. Manag. Sci. 1955, 1, 197-206.
Dantzig, G. B.; Glynn, P. W. Parallel Processors for Planning Under Uncertainty. Ann. Oper. Res. 1990, 22, 1-21.
Dempster, M. A. H., Ed. Stochastic Programming; Academic Press: New York, 1980.
Ermoliev, Y., Wets, R. J.-B., Eds. Numerical Techniques for Stochastic Optimization; Springer-Verlag: Berlin, 1988.
Geoffrion, A. M. Generalized Benders Decomposition. J. Optim. Theory Appl. 1972, 10 (4), 237-260.
Geoffrion, A. M.; Graves, G. W. Multicommodity Distribution System Design by Benders Decomposition. Manag. Sci. 1974, 20 (9), 822-844.
Grossmann, I. E.; Straub, D. Recent Developments in the Evaluation and Optimization of Flexible Chemical Processes. In Computer-Oriented Process Engineering; Puigjaner, L., Espuna, A., Eds.; Elsevier Science Publishers: Amsterdam, The Netherlands, 1991; pp 49-59.
Grossmann, I. E.; Halemane, K. P.; Swaney, R. E. Optimization Strategies for Chemical Processes. Comput. Chem. Eng. 1983, 7 (4), 439-462.
Hammersley, J. M.; Handscomb, D. C. Monte Carlo Methods; Methuen & Co. Ltd.: London, 1964.
Higle, J. L.; Sen, S. Stochastic Decomposition: An Algorithm for Two-Stage Linear Programs with Recourse. Math. Oper. Res. 1991, 16 (3), 650-669.
Higle, J. L.; Sen, S. Finite Master Programs in Regularized Stochastic Decomposition. Math. Program. 1994, 67, 143-168.
Hogg, R. V.; Craig, A. T. Introduction to Mathematical Statistics; Macmillan Co.: New York, 1978.
Ierapetritou, M. G.; Pistikopoulos, E. N. Novel Optimization Approach of Stochastic Planning Models. Ind. Eng. Chem. Res. 1994, 33, 1930-1942.
Ierapetritou, M. G.; Pistikopoulos, E. N.; Floudas, C. A. Operational Planning Under Uncertainty. Comput. Chem. Eng. 1994, 18S, S553-S557.
Infanger, G. Planning under Uncertainty: Solving Large-Scale Stochastic Linear Programs; Boyd & Fraser Publishing Co.: Boston, MA, 1994.
Jagannathan, R. Use of Sample Information in Stochastic Recourse and Chance-Constrained Programming Models. Manag. Sci. 1985, 31 (1), 96-108.
Kall, P.; Wallace, S. W. Stochastic Programming; John Wiley & Sons: Chichester, England, 1994.
Liu, M. L.; Sahinidis, N. V. Computational Trends and Effects of Approximations on MILP Model for Process Planning. Ind. Eng. Chem. Res. 1995, 34 (5), 1662-1673.
Liu, M. L.; Sahinidis, N. V. Long Range Planning in the Process Industries: A Projection Approach. Comput. Oper. Res. 1996a, 23 (3), 237-253.
Liu, M. L.; Sahinidis, N. V. Process Planning in a Fuzzy Environment. Eur. J. Oper. Res. 1996b, accepted for publication.
Norton, L.; Grossmann, I. E. Strategic Planning Model for Complete Process Flexibility. Ind. Eng. Chem. Res. 1994, 33, 69-76.
Pistikopoulos, E. N. Uncertainty in Process Design and Operations. Comput. Chem. Eng. 1995, 19S, S553-S563.
Sahinidis, N. V.; Grossmann, I. E. Multiperiod Investment Model for Processing Networks with Dedicated and Flexible Plants. Ind. Eng. Chem. Res. 1991a, 30, 1165-1171.
Sahinidis, N. V.; Grossmann, I. E. Reformulation of Multiperiod MILP Models for Planning and Scheduling of Chemical Processes. Comput. Chem. Eng. 1991b, 15 (4), 255-272.
Sahinidis, N. V.; Grossmann, I. E. Reformulation of the Multiperiod MILP Model for Capacity Expansion of Chemical Processes. Oper. Res. 1992, 40 (S1), S127-S144.
Sahinidis, N. V.; Grossmann, I. E.; Fornari, R.; Chathrathi, M. Long Range Planning Model for the Chemical Process Industries. Comput. Chem. Eng. 1989, 13 (9), 1049-1063.
Straub, D. A.; Grossmann, I. E. Design Optimization of Stochastic Flexibility. Comput. Chem. Eng. 1993, 17, 339-354.
Van Slyke, R. M.; Wets, R. J.-B. L-Shaped Linear Programs with Applications to Optimal Control and Stochastic Programming. SIAM J. Appl. Math. 1969, 17, 638-663.
Wets, R. J.-B. Stochastic Programming. In Handbook on Operations Research and Management Science; Nemhauser, G. L., Rinnooy Kan, A. H. G., Todd, M. J., Eds.; North-Holland: Amsterdam, The Netherlands, 1989; pp 573-629.
Zimmermann, H.-J. Fuzzy Sets, Decision Making, and Expert Systems; Kluwer Academic Publishers: Boston, 1987.
Zimmermann, H.-J. Fuzzy Set Theory and Its Applications, 2nd ed.; Kluwer Academic Publishers: Boston, 1991.

Received for review July 20, 1995
Revised manuscript received April 16, 1996
Accepted July 30, 1996^X

IE9504516

X Abstract published in Advance ACS Abstracts, October 1, 1996.