Ind. Eng. Chem. Res. 2003, 42, 6815-6822


Optimal Operation of Batch Processes under Uncertainty: A Monte Carlo Simulation-Deterministic Optimization Approach

Ioannis K. Kookos*

Department of Chemical Engineering, University of Manchester Institute of Science and Technology, P.O. Box 88, M60 1QD Manchester, U.K.

In this paper, a systematic methodology is presented for the deterministic optimization of batch processes under uncertainty. The methodology is based on the use of classical Monte Carlo simulation to evaluate the objective function and the process constraints, together with their analytical derivatives with respect to the optimization parameters. A deterministic, derivative-based optimization algorithm can then be used to locate the optimum values of the optimization parameters. The main advantage of the proposed methodology stems from the fact that the size of the resulting optimization problem is the same as that of the nominal (without uncertainty) optimization problem and is independent of the number of uncertain parameters. Three examples, involving a batch chemical reactor, a batch distillation column, and a batch polymerization reactor, are presented to illustrate the usefulness of the proposed methodology.

1. Introduction

The incorporation of elements of uncertainty in the design and operation of chemical plants has attracted significant research effort. The aim is to design process systems that feature adequate robustness with respect to uncertainties in the operating environment and the imprecise information about the process itself. However, progress in this field has been limited because of the complexity of the problem and its multifaceted character. The first works in this field considered stochastic uncertainties and were aimed at formalizing the calculation of the safety factors commonly used to overdesign processes.1-3 A number of works appeared in the 1970s, including those by Takamatsu et al.4 and Dittmar and Hartmann,5 in which the distinction between design and control variables for continuous plants was proposed for the first time. The flexibility methodology presented by Grossmann and Sargent,6 later extended by Grossmann and co-workers,7 has proven to be a powerful tool for feasibility analysis.
This methodology identified the need to guarantee the existence of a feasible region of operation for the specified range of uncertain parameters; in addition, the distinction between static degrees of freedom (design variables) and dynamic degrees of freedom (control variables) was for the first time mathematically formulated. The concept of flexibility has also been extended by Mohideen et al.8 to incorporate elements of the dynamics of continuous plants. Ierapetritou et al.9 and Rooney and Biegler10 have presented important extensions and efficient methodologies for solving the flexibility problem by taking explicitly into consideration the fact that manipulated variables can be used during operation to enhance feasibility. Ierapetritou11 and Goyal and Ierapetritou12 discuss the conservativeness of flexibility analysis and present promising alternative formulations.

The design and operation of batch plants, considered in the present work, is noticeably different from that of continuous plants. In contrast to continuous plants, where a significant degree of automation is used, in the fine and specialty chemicals industry processes are implemented in multipurpose batch plants with minimal use of automated control.13 Terwiesch et al.14 reported that most batch reactors are operated using the simplest possible operating policies, which are implemented, in most cases, manually. Batch and semibatch processes are inherently dynamic, and as a result, the complete profiles of the degrees of freedom are needed in order to define their optimal operating policies. Batch process optimization is usually performed using model-based optimization techniques.14 In most studies, a nominal model is considered, and the available degrees of freedom are varied in order to optimize a measure of the economic performance of the process or of the product variability. Constraints related to product specification, safety, environmental impact, and equipment limitations are normally taken into consideration. Uncertainty in the model parameters or in the operating environment might, however, have detrimental effects on the optimized process.14-17 To take uncertainty into consideration, most existing approaches adopt a probabilistic view of the plant operation, in which the expected value of the performance index is optimized while ensuring feasible operation.15-17

In this work, a methodology for optimizing the performance of batch and semibatch plants under uncertainty is presented. The method is based on the evaluation of the system's performance through simulation (Monte Carlo calculation) and on the use of derivative-based optimization to optimize the available degrees of freedom using analytically calculated gradient information. The evaluation of the complex multidimensional integrals needed to estimate the values of the objective function and the constraints is performed using a Monte Carlo calculation that is independent of the size of the uncertainty space.

* To whom correspondence should be addressed. Tel.: +44 (0)161 200 4346. Fax: +44 (0)161 200 4399. E-mail: i.kookos@umist.ac.uk.
The method has a number of important advantages over many of the existing methodologies. It can be used to investigate processes with a large number of uncertain parameters, while the final optimization problem is of the same size and complexity as the nominal optimization problem.

10.1021/ie034001m CCC: $25.00 © 2003 American Chemical Society Published on Web 11/18/2003


The paper is organized as follows. Section 2 presents the mathematical formulation of the problem and section 3 the proposed methodology for its solution. Case studies are presented in section 4, followed by conclusions in section 5.

2. Problem Statement

In this work, we consider batch process models of the form

f(ẋ(t), x(t), y(t), u(t), v, t; θ) = 0,  t ∈ (0, τf]   (1)

where t is the time, x ∈ X ⊂ Rⁿ are the differential variables, y ∈ Y ⊂ Rᵐ are the algebraic variables, u ∈ U ⊂ Rᵖ are the time-varying degrees of freedom, v ∈ V ⊂ R^q are the time-invariant degrees of freedom, θ ∈ Θ ⊂ R^µ are the uncertain parameters (over which we have no control and about which we have limited or imprecise information) with probability distribution function (pdf) Π(θ), and f: Rⁿ × X × Y × U × V × Θ × (0, τf] → R^(n+m) are the differential algebraic equations (DAEs) that describe the behavior of the system. Assuming that the index18 of the system is at most 1 and given a set of consistent initial conditions

φ(ẋ(0), x(0), y(0), u(0), v; θ) = 0   (2)

as well as the realization of the degrees of freedom and the uncertainties, a DAE solver18 can be used to determine the values of the differential and algebraic variables for the time interval of interest. This is accomplished by a convenient finite-dimensional parametrization of the time-varying degrees of freedom, u(t) = ζ(υ). We also assume that the process operation is restricted by a set of path constraints (safety, environmental, or product quality constraints and equipment limitations) of the form

r(ẋ(t), x(t), y(t), u(t), v, t; θ) ≤ 0,  t ∈ [0, τf]   (3)

The objective function can be expressed, without loss of generality, as a function of the value of all variables at the final time

J(ẋ(τf), x(τf), y(τf), u(τf), v, τf; θ)   (4)

By definition of the function

g(ẋ(τf), x(τf), y(τf), u(τf), v; θ) = max_{t∈[0,τf]} r(ẋ(t), x(t), y(t), u(t), v; θ)   (5)

and using the fact that any DAE solver18 can be employed to calculate the values of the differential and algebraic variables (for given values of the parameters and optimization variables), and thus effectively eliminate them, the mathematical formulation obtains the form

min_d Eθ[J(d,θ)]
s.t. g(d,θ) ≤ 0   (6)

where d = [υᵀ, vᵀ, τf]ᵀ and Eθ[·] denotes the expected value defined as

Eθ[J(d,θ)] = ∫θ Π(θ) J(d,θ) dθ   (7)

The need to introduce the expectation operator is due to the fact that, because θ is a random variable, the real-valued function J is random as well. For the same reason, the inequality constraints in formulation (6) are also not well defined and are replaced by

Pr[g(d,θ) ≤ 0] ≥ α   (8)

That is, the probability (Pr[·]) of satisfying the constraint gi(d,θ) ≤ 0 must be at least the level of confidence αi. The final mathematical programming problem is a stochastic optimization problem of the following form:

min_d Eθ[J(d,θ)]
s.t. α − Pr[g(d,θ) ≤ 0] ≤ 0   (9)

The aim is to determine the values of the design variables d so as to minimize the expected value of the objective function and at the same time satisfy the probabilistic constraints. Problem (9) is known as a chance-constrained programming (CCP) problem, for which a vast literature is available.19,20 However, most of the available results are applicable only when the uncertainty appears linearly and certain convexity conditions are satisfied.20 The methodology presented in this work is not limited by these restrictive assumptions. It assumes, however (as do most stochastic optimization techniques, including CCP), that the uncertainty distributions are stable distribution functions.19 To solve problem (9), a simulation-based strategy can be used in which the uncertain parameter space is sampled using independent realizations from the distribution Π(θ). The stochastic problem (9) is then approximated by the deterministic problem

min_d J̃(d,θ)
s.t. αi − δ̃[gi(d,θ)] ≤ 0,  ∀ i   (10)

where J̃ denotes the sample average approximation, i.e.

J̃(d,θ) = (1/N) ∑_{s=1}^{N} J(d,θs)   (11)

and similarly

δ̃[gi(d,θ)] = (1/N) ∑_{s=1}^{N} δ[gi(d,θs)]   (12)

where δ is a binary variable defined as

δ[gi(d,θ)] = 1 if gi(d,θ) ≤ 0, and δ[gi(d,θ)] = 0 if gi(d,θ) > 0   (13)

N is the number of scenarios considered, and s is an index over the set of scenarios. It should be noted that eq 12 is a crude application of the intuitive definition of probability as the ratio of the number of experiments that give a desired outcome to the overall number of experiments. Different techniques select different sampling strategies to generate the scenarios. Straub and Grossmann,21 Ierapetritou et al.,9 and Rooney and Biegler10 use implicit quadrature methods to evaluate the multidimensional integrals. Shah and Pantelides22 and Subrahmanyan et al.23 use explicit methods in which the scenarios are selected based on a priori information. Diwekar and Kalagnanam,24 Chaudhuri and Diwekar,25 and Bernardo and Saraiva26 use stratified sampling to sample the uncertainty space efficiently. The resulting approximate optimization problem is then solved using either deterministic optimization or stochastic search (simulated annealing or genetic algorithms) methodologies.

A number of salient features make the solution of problem (10) a complex undertaking. First of all, the evaluation of the mean objective function requires efficient sampling of those areas of the uncertainty space that are characterized by a high probability of occurrence. The evaluation of the probability of constraint satisfaction, on the other hand, necessitates efficient sampling of the areas of the uncertainty space that have a low probability of occurrence (rare events). Second, it is well-known from elementary statistics that the sample average is a consistent estimator of the expected value, in the sense that it converges to it in probability as N → ∞. Taking these facts into consideration, it is not surprising that computer simulations show that most sampling techniques that use a relatively small number of scenarios are subject to significant error (irrespective of their sophistication). As a result, a significant increase in the accuracy of the stochastic model, especially in cases where rare events are important, can be achieved only by using a significantly larger number of scenarios. A closer examination of the definition given by eq 13, which is used to evaluate the probability of constraint satisfaction (eq 12), reveals that δ[·] is a nondifferentiable operator. As a result, deterministic, derivative-based optimization cannot be used directly.
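As an illustration of eqs 11-13, the following sketch estimates the probability of constraint satisfaction as the sample average of the indicator δ. The scalar constraint g(d,θ) = θ − d and the normal uncertainty are hypothetical stand-ins, not taken from the paper:

```python
import numpy as np

def indicator(g_values):
    # delta of eq 13: 1 where the constraint g <= 0 is satisfied, else 0
    return (g_values <= 0.0).astype(float)

def mc_probability(g, d, theta_samples):
    # delta-tilde of eq 12: sample average of the indicator over N scenarios
    return indicator(g(d, theta_samples)).mean()

# Hypothetical constraint g(d, theta) = theta - d with theta ~ N(0, 1),
# so Pr[g <= 0] = Pr[theta <= d] = Phi(d), the standard normal cdf.
rng = np.random.default_rng(0)
theta = rng.standard_normal(200_000)
g = lambda d, th: th - d
p = mc_probability(g, 1.0, theta)   # should approach Phi(1.0) ≈ 0.841
```

The step function inside `indicator` is exactly the nondifferentiable operator discussed above: `p` carries no usable derivative information with respect to `d`, which motivates the smoothing introduced in the next section.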
However, many authors have presented results in which derivative-based optimization techniques are used to solve problem (10) with a relatively small number of scenarios and with derivative information obtained by finite differences. This, apart from being incorrect, can result in a significant degradation of the performance of the optimization software. Stochastic search, such as simulated annealing, is commonly employed to avoid these difficulties.25 However, using a stochastic search technique to solve a stochastic optimization problem makes the problem intractable, especially as the number of optimization parameters increases. Therefore, it is highly desirable to alleviate the restrictions imposed by the nondifferentiable operator defined by eq 13 so as to be able to use the more efficient (in terms of computational speed) derivative-based optimization. In addition, the development of methodologies that compute the analytical derivatives efficiently will increase the robustness and reduce the time requirements of derivative-based methodologies.

3. Proposed Methodology

The proposed methodology addresses directly many of the shortcomings of the available techniques discussed in the previous section. First, to avoid the nondifferentiability caused by the operator defined in eq 13, smooth approximation functions, as proposed by Samsatli et al.13 for the efficient calculation of robustness measures in stochastic dynamic optimization, can be used. A suitable smooth function is the hyperbolic tangent

Σ[gi(d,θ)] = (1/2){1 − tanh[M gi(d,θ)]}   (14)

where M is a sufficiently large number. In this case, formulation (10) can be written as

min_d J̃(d,θ)
s.t. αi − Σ̃[gi(d,θ)] ≤ 0,  ∀ i   (15)

Given the derivative of a constraint with respect to a design variable, dgi/ddj, the derivative of the smooth approximation function can be calculated as follows:

dΣ[gi(d,θ)]/ddj = (M/2){tanh²[M gi(d,θ)] − 1} dgi(d,θ)/ddj   (16)

The determination of the analytical derivatives is also expected to improve the performance of the derivative-based optimization software. Consider the gradient of the objective function with respect to the design variables, dJ/dd. The expected value of this gradient, assuming that the uncertainty vector is independent of the design vector, can be calculated as follows:

Eθ[dJ(d,θ)/dd] = Eθ[Jd] = ∫θ Π(θ) [dJ(d,θ)/dd] dθ   (17)

while the sample average is given by

J̃d = (1/N) ∑_{s=1}^{N} dJ(d,θs)/dd   (18)

Following the same lines, the sample average for the gradient of each constraint can be calculated as

g̃id(d,θ) = (1/N) ∑_{s=1}^{N} dgi(d,θs)/dd   (19)

and that of the smooth approximating function as

Σ̃d[gi(d,θ)] = (1/N) ∑_{s=1}^{N} dΣ[gi(d,θs)]/dd   (20)

It is interesting to note that the calculation of the sample average of (a quantity defined as) the derivative of a function is common in certain areas, such as molecular simulation. The force between two molecules is, for instance, calculated as the sample average of the derivative of the intermolecular potential and is then used to calculate, among other quantities, the average pressure of the simulated system. In the proposed methodology for solving stochastic optimization problems, a large number of scenarios is used to ensure a consistent approximation of the initial stochastic problem, together with a deterministic, derivative-based optimization technique (such as successive quadratic programming, SQP) with analytical determination of the derivatives, in order to ensure a robust and efficient solution. The proposed algorithm is summarized as follows.


Step 1. Select the number of scenarios and supply an initial estimate of the design variables.
Step 2. Use a sampling technique to sample the uncertainty space efficiently and consistently.
Step 3. Simulate the system to obtain sample averages for the objective function, the constraints, and their derivatives with respect to the design variables.
Step 4. Use an effective SQP algorithm and the analytical gradient information to obtain better estimates of the design variables. If the SQP algorithm has converged, go to step 5; otherwise, go to step 3.
Step 5. If the number of scenarios is sufficiently large to satisfy the accuracy requirements, the algorithm terminates; otherwise, increase the number of scenarios and go to step 2.

A number of salient features make the proposed methodology an efficient and accurate approach for solving stochastic optimization problems. The first advantage is due to the fact that simulation (Monte Carlo calculation) is used to approximate multidimensional integrals of the form

J̃(d,θ) = ∫θ1 ∫θ2 ... ∫θµ Π(θ1,θ2,...,θµ) J(d,θ1,θ2,...,θµ) dθ1 dθ2 ... dθµ   (21)

The convergence of the simulation follows a 1/N^(1/2) rate, independently of the dimension of the uncertainty space, whereas the numerical integration schemes commonly used in stochastic optimization follow a 1/N^(1/µ) rate of convergence.27 Therefore, when realistic applications are examined, where the number of uncertain parameters is on the order of 10¹-10², all numerical integration schemes become clearly impractical. Simulation-based schemes are insensitive to the dimension of the uncertainty space, and this is a key feature of the proposed algorithm. Second, the optimization problem is solved in the reduced space of the available degrees of freedom using analytical gradient information. This has a number of important advantages related to the speed and robustness of the solution. Third, the way in which the smooth approximation function (defined in eq 14) is employed differs from previous algorithms in an important way. Previous algorithms use the smoothing function in a multiscenario approach in which the numerical properties of the approximation become important when a number of constraints are active at the optimum solution.13 This might result in convergence problems due to ill-conditioning. In a simulation-based methodology, the points that correspond to near-active constraints contribute to the estimation of the sample average with weights that reflect their probability of occurrence. This probability is vanishingly small, and any potential numerical problems are filtered out. Finally, although the computational time increases (almost proportionally) as the number of scenarios is increased, the size of the optimization problem (as measured by the number of optimization variables) solved in the proposed methodology is the same as the size of the nominal optimization problem. This is important because the available DAE solvers are significantly more robust than the local optimization solvers, and their repetitive use causes no problems apart from increasing the computational time. The proposed methodology can handle hard (αi → 1) and soft constraints in a uniform framework without introducing any further complications. Additionally, because the number of scenarios used can be close to the statistical limit, satisfaction of all hard constraints is guaranteed in probability, and as a result, the feasibility problem need not be addressed explicitly as in many alternative algorithms. It is also interesting to note that the methodology is suitable for parallel implementation, which would further extend its applicability and potential to solve problems of industrial relevance.

4. Case Studies

4.1. Simple Batch Reactor. In this case study, a batch reactor is considered in which the reactions R → P → B take place, where R is the reactant, P is the desired product, and B is a byproduct. Both reactions are assumed to be first order. The aim is to achieve a certain concentration of the product and minimize the batch time by manipulating the reaction temperature (assumed constant). The mathematical model is the following:

min_{T,τF} τF   (22)

s.t.
dCR(t)/dt = −k1(T) CR(t),  CR(0) = CR0
dCP(t)/dt = k1(T) CR(t) − k2(T) CP(t),  CP(0) = 0
dCB(t)/dt = k2(T) CP(t),  CB(0) = 0
ki = k0i exp(−Ei/RgT),  i = 1, 2
CP(τF) ≥ 0.9 CR0   (23)

where CR0 = 30 kg mol/m³, E1/Rg = 180.42 K, and E2/Rg = 2405.6 K. k01 and k02 are considered to be uncertain, with nominal values k01 = 20 h⁻¹ and k02 = 2.5 h⁻¹ and a 10% range. It is also assumed that the final time constraint on the concentration of P is to be satisfied with a confidence level of 99%, i.e.

Pr[CP(τF) ≥ 0.9 CR0] ≥ 0.99   (24)
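Because the series-reaction ODEs admit a closed-form solution, the chance constraint of eq 24 can be estimated directly by Monte Carlo simulation. The sketch below reuses the nominal parameter values stated above but is purely illustrative; it makes no attempt to reproduce the optima reported later in this section:

```python
import numpy as np

# Nominal data from the case study
CR0 = 30.0                       # kg mol/m^3
E1_Rg, E2_Rg = 180.42, 2405.6    # K
k01_nom, k02_nom = 20.0, 2.5     # 1/h

def cp(t, T, k01, k02):
    # Closed-form product concentration for R -> P -> B (first order, k1 != k2)
    k1 = k01 * np.exp(-E1_Rg / T)
    k2 = k02 * np.exp(-E2_Rg / T)
    return CR0 * k1 / (k2 - k1) * (np.exp(-k1 * t) - np.exp(-k2 * t))

def prob_onspec(tauF, T, n=20_000, seed=0):
    # Monte Carlo estimate of Pr[CP(tauF) >= 0.9*CR0] for uniform +/-10%
    # uncertainty in k01 and k02 (eqs 12, 13 and 24)
    rng = np.random.default_rng(seed)
    k01 = rng.uniform(0.9 * k01_nom, 1.1 * k01_nom, n)
    k02 = rng.uniform(0.9 * k02_nom, 1.1 * k02_nom, n)
    return np.mean(cp(tauF, T, k01, k02) >= 0.9 * CR0)
```

For a fixed temperature, one can then search (e.g., bisect) on `tauF` until `prob_onspec` first reaches the 0.99 target, which is the inner feasibility check of the algorithm in section 3.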

This case study is considered because an analytical solution of the differential equations can easily be obtained,28 which makes the investigation of the properties of the proposed methodology more transparent. The proposed methodology was applied to this problem by assuming independent pdf's for the uncertain parameters of the three forms shown in Figure 1. The optimization problem was solved using the SQP implementation available in MATLAB.29 It is interesting to note at this point that the optimal batch time for the nominal process is 1.7 h, while the optimal temperature is 332.6 K. First, a uniform distribution is considered, with the pdf defined as

Π(ki) = 1/(2Δki) if k0i − Δki ≤ ki ≤ k0i + Δki, and Π(ki) = 0 otherwise   (25)

The optimum values found are τF = 1.90 h and T = 326.3 K, and the histogram of the product concentration is given in Figure 2. The computational time depends almost linearly on the number of scenarios selected and is 0.8 s for 10³ scenarios and 884 s for 10⁶ scenarios (Intel Celeron 0.65 GHz, 128 MB memory). The number of SQP iterations and function evaluations decreases as the number of scenarios increases, because the use of more samples improves the accuracy of the estimated gradients of the objective function and constraints. For the case where a normal pdf is assumed, with a standard deviation equal to 3.333% of the nominal value (3σ ≈ 10%) of the uncertain parameters, the optimum values are τF = 1.85 h and T = 329.3 K. The histogram of the product concentration is given in Figure 3. Finally, when a triangular distribution is assumed, with pdf

Π(ki) = [ki − (k0i − Δki)]/Δki²  if k0i − Δki ≤ ki ≤ k0i
Π(ki) = [(k0i + Δki) − ki]/Δki²  if k0i ≤ ki ≤ k0i + Δki
Π(ki) = 0  otherwise   (26)

then the optimum values are τF = 1.87 h and T = 328.2 K, and the histogram of the product concentration is given in Figure 4. In Figure 5, the cumulative distribution functions for the three different uncertainty pdf's are compared.

Figure 1. Assumed pdf of the uncertain parameters.
Figure 2. Product distribution for uncertain parameters with a uniform pdf (example 1).
Figure 3. Product distribution for uncertain parameters with a normal pdf (example 1).
Figure 4. Product distribution for uncertain parameters with a triangular pdf (example 1).
Figure 5. Product cumulative distribution function (example 1).

A number of important conclusions can be drawn from Figures 2-5. First, assuming a uniform distribution results in the largest batch time needed to achieve the product purity specification. Assuming a normal distribution, on the other hand, results in the smallest batch time. From this, it can be concluded that pdf's that assign significant probability to extreme values of the uncertain parameters can lead to the most conservative design. Although this cannot be generalized, it is in accordance with intuition and shows that the assumed form of the uncertainty distribution is important for the degree of conservativeness of the final, robust design. Second, from the histograms of the process output (product concentration in this case), it is clear that the form of the output pdf resembles the form of the input pdf. In addition, when the process behavior is not strongly nonlinear, the output pdf is almost symmetric. This


suggests that a local, linear analysis might give important information about the process behavior and can be used, among other purposes, to identify the uncertain parameters to which the process is most likely to be sensitive. This can be achieved for the case of a normal pdf by first observing that the required "back-off" from the constraints (in order to mitigate the effect of uncertainty) is given by20,30

βi = F⁻¹(1 − αi) {[∂gi/∂θ|N]ᵀ E[θθᵀ] [∂gi/∂θ|N]}^(1/2),  ∀ i   (27)

where F⁻¹(p) is the inverse of the standard normal cumulative distribution function:

y = F⁻¹(p) = −√2 erfc⁻¹(2p),  i.e.,  F(y) = ∫_{−∞}^{y} (1/√(2π)) exp(−t²/2) dt = p   (28)

Superscript N denotes the nominal conditions. Formulation (6) can then be simplified to the following:

min_d J(d,θN)
s.t. gi(d,θN) ≤ βi,  ∀ i   (29)

To test our hypothesis about the validity of the linearized analysis, we evaluate β from eq 27 to obtain

β = −2.3263 × {[0.0808 −0.0008] [0.0208 0.0000; 0.0000 1.3274] [0.0808 −0.0008]ᵀ}^(1/2) = −0.0156   (30)

and then solve problem (29) to obtain τF = 1.86 h, which is very close to the value obtained by the exact methodology. It is interesting to note that the actual "back-off" from the constraint, as calculated at the solution of the full nonlinear model, is −0.0148. Furthermore, the results obtained by simulating the system show that constraint (23) is satisfied with a probability slightly larger than 0.99. Although this is not a proof of the general validity of the linear analysis, it is a strong indication that local analysis can be used for initial calculations.

4.2. Batch Distillation Column. In this case study, a batch distillation column separating a binary mixture is considered (Figure 6). The modeling details can be found in the paper by Tørvi and Hertzberg.31 Constant relative volatility, constant molar overflow, and constant tray holdups are the main modeling assumptions. A 10% uncertainty (uniform distribution) is assumed in the relative volatility (nominal value: 2), the feed composition (nominal value: 50%), the initial holdup (nominal value: 110 kmol), the reflux ratio (nominal value: 2.25), and the vapor flow rate (nominal value: 100 kmol/h). The aim is to determine the batch time so as to maximize the production rate subject to the following constraints:

Pr[xP(τF) ≥ 0.9] ≥ 0.95   (31)
Pr[MB(τF) ≥ 5] ≥ 0.99   (32)

Figure 6. Simplified flowsheet of the batch distillation column (example 2).
Figure 7. Product distribution (example 2).
Figure 8. Product cumulative distribution function (example 2).
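Returning briefly to the linearized back-off of eqs 27-30 in example 1, the computation reduces to a single quadratic form. The sketch below reuses the gradient vector and covariance matrix printed in eq 30; since the printed numbers appear garbled in places, they serve only to illustrate the formula, and the resulting β is not claimed to match the paper's reported value:

```python
import math
from statistics import NormalDist

import numpy as np

alpha = 0.99
z = NormalDist().inv_cdf(1.0 - alpha)    # F^-1(1 - alpha) ≈ -2.3263, eq 28

# Gradient of the constraint w.r.t. theta at nominal conditions (from eq 30)
g_theta = np.array([0.0808, -0.0008])
# E[theta theta^T], as printed in eq 30
cov = np.diag([0.0208, 1.3274])

# eq 27: back-off required to satisfy the chance constraint to level alpha
beta = z * math.sqrt(g_theta @ cov @ g_theta)
```

The negative β tightens the deterministic constraint gi(d,θN) ≤ β of eq 29, which is the whole content of the linearized approximation.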

The optimum batch time found is τF = 1.085 h. The computational time is 640 s for 10³ scenarios, 4410 s for 10⁴ scenarios, and 28 560 s for 10⁵ scenarios. As was also observed in the previous case study, the number of SQP iterations and function evaluations decreases as the number of scenarios increases. The histogram of the product mole fraction is given in Figure 7, while Figure 8 shows the cumulative distribution function of the product mole fraction. It is clear in this case study that, because the product composition behaves highly nonlinearly (close to a mole fraction of 1), the histogram of the product composition is nonsymmetric. In this case, the linearized approximation proposed in the previous case study is of limited usefulness.

4.3. Polymerization Reactor. A batch reactor in which the free-radical solution polymerization of methyl methacrylate initiated by benzoyl peroxide takes place is considered in this case study. The modeling of this reactor is discussed in detail by Penlidis et al.32 By assumption of a constant density and application of the quasi-steady-state approximation for live radicals, the following model is obtained:32

dI/dt = −kd(T) I   (33)
dM/dt = −kp(T) M λ0   (34)
dµ0/dt = [ks(T) S + km(T) M] λ0 + (ktd + ktc/2) λ0²   (35)
dµ1/dt = [ks(T) S + km(T) M] λ1 + kt λ0 λ1   (36)
dµ2/dt = [ks(T) S + km(T) M] λ2 + kt λ0 λ1 + ktc λ1²   (37)
λ0 = (2 f kd I / kt)^(1/2)   (38)
λ1 = [2 f kd I + (kp M + km M + ks S) λ0] / (kt λ0 + km M + ks S)   (39)
λ2 = λ1 + 2 kp M λ1 / (kt λ0 + km M + ks S)   (40)
x(t) = [M(0) − M(t)] / M(0)   (41)
Mw = Wm (µ2 + λ2) / (µ1 + λ1)   (42)

where I is the initiator concentration, M is the monomer concentration, µ0, µ1, and µ2 are the zeroth, first, and second moments of the polymer chains, respectively, λ0, λ1, and λ2 are the zeroth, first, and second moments of the polymer radicals, respectively, T is the temperature, x is the conversion, and Mw is the weight-average molecular weight. Isothermal operation of the polymerization reactor is considered in this case study. The aim is to find the optimal temperature and the optimal initial initiator concentration so as to minimize the batch time (τF) subject to the following constraints:

I(0) ≤ 0.1 g mol/L   (43)
Pr[x(τF) ≥ 0.7] ≥ 0.95   (44)
Pr[Mw(τF) ≤ 6 × 10⁴] ≥ 0.95   (45)

The rate constants kp, kd, and kt are assumed to be uncertain, having independent normal pdf's with standard deviations that are 6.67% of their nominal values. The nominal (no uncertainty) optimization problem is solved first, and the minimum batch time found is 1.46 h. The optimization problem under uncertainty is then solved to obtain a minimum batch time of 1.68 h (a 15% increase). In all cases, the best initial initiator concentration is found to be at its upper bound of 0.1 g mol/L. The computation time was 79 s for 10² scenarios, 684 s for 10³ scenarios, and 7826 s for 10⁴ scenarios. Figure 9 shows the histogram of the conversion of the monomer. In this case study, a linear dependence of the computation time on the number of scenarios was also observed, and despite the fact that the model is highly nonlinear, the distribution of the monomer conversion follows an almost symmetric distribution that resembles the distribution assumed for the uncertain parameters.

Figure 9. Conversion distribution (example 3).

5. Conclusions

In this paper, a methodology is presented for the formal optimization of batch processes under uncertainty. The methodology is based on Monte Carlo simulation to evaluate the objective function, the probabilistic constraints, and their derivatives with respect to the optimization variables. Any deterministic nonlinear programming algorithm can then be used to determine the values of the optimization variables that satisfy the stochastic constraints and optimize the objective function. A number of small- and medium-scale examples are presented to demonstrate the usefulness of the methodology. It was observed that the computation time is an almost linear function of the number of scenarios used in the Monte Carlo simulation. Future work will be directed toward the evaluation of more effective sampling schemes, such as Latin Supercube Sampling,33,34 in order to reduce the computational time of the solution procedure. Emphasis will be placed mainly on the accurate evaluation of the probabilistic constraints, for which the available evidence is not sufficient to suggest that their evaluation is as efficient as the evaluation of the average of the objective function.

Nomenclature

C = concentration
d = vector of optimization variables
E = expected value or activation energy
f = vector of DAE equations
g = vector of constraint maximums
I = initiator concentration
J = objective function
k = reaction rate constant
M = monomer concentration
Mw = molecular weight
Rg = universal gas constant
r = vector of path inequality constraints
T = temperature
t = time
u = vector of time-varying degrees of freedom
v = time-invariant degrees of freedom

6822 Ind. Eng. Chem. Res., Vol. 42, No. 26, 2003

x = vector of differential variables
x = conversion
y = vector of algebraic variables

Greek Letters

α = level of confidence
β = constraint back-off vector
δ = function defined in eq 13
θ = vector of uncertain parameters
λ = moment of the polymer radicals
Μ = sufficiently large number
µ = dimension of the uncertainty vector or moment of the polymer chains
Π = probability distribution function
Σ = function defined in eq 14
τF = final time
φ = vector of initial conditions

Literature Cited

(1) Buckley, P. S. Sizing Process Equipment by Statistical Methods. Chem. Eng. 1950, 112.
(2) Saletan, D. I.; Caselli, A. V. Optimum design capacity of new plants. Chem. Eng. Prog. 1963, 59, 69.
(3) Kittrell, J. R.; Watson, C. C. Don't overdesign process equipment. Chem. Eng. Prog. 1966, 62, 79.
(4) Takamatsu, T.; Hashimoto, I.; Shioya, S. On design margin for process system with parameter uncertainty. J. Chem. Eng. Jpn. 1973, 6, 453.
(5) Dittmar, R.; Hartmann, K. Calculation of optimal design margins for compensation of parameter uncertainty. Chem. Eng. Sci. 1976, 31, 563.
(6) Grossmann, I. E.; Sargent, R. W. Optimum Design of Chemical Plants with Uncertain Parameters. AIChE J. 1978, 24, 1021.
(7) Grossmann, I. E.; Halemane, K. P.; Swaney, R. E. Optimization strategies for flexible chemical processes. Comput. Chem. Eng. 1983, 7, 439.
(8) Mohideen, J. M.; Perkins, J. D.; Pistikopoulos, E. N. Optimal Design of Dynamic Systems under Uncertainty. AIChE J. 1996, 42, 2251.
(9) Ierapetritou, M. G.; Acevedo, J.; Pistikopoulos, E. N. An Optimisation Approach for Process Engineering Problems under Uncertainty. Comput. Chem. Eng. 1996, 6/7, 703.
(10) Rooney, W. C.; Biegler, L. T. Optimal Process Design with Model Parameter Uncertainty and Process Variability. AIChE J. 2003, 49, 439.
(11) Ierapetritou, M. G. A New Approach for Quantifying Process Feasibility: Convex and One-Dimensional Quasi-Convex Regions. AIChE J. 2001, 47, 1407.
(12) Goyal, V.; Ierapetritou, M. G. Determination of Operability Limits Using Simplicial Approximation. AIChE J. 2002, 48, 2902.
(13) Samsatli, N. J.; Sharif, M.; Shah, N.; Papageorgiou, L. G. Operational Envelopes for Batch Processes. AIChE J. 2001, 47, 2277.
(14) Terwiesch, P.; Agarwal, M.; Rippin, D. W. T. Batch unit optimisation with imperfect modelling: A survey. J. Process Control 1994, 4, 238.
(15) Ruppen, D.; Benthack, C.; Bonvin, D. Optimisation of batch reactor operation under parametric uncertainty - Computational aspects. J. Process Control 1995, 5, 235.
(16) Terwiesch, P.; Ravemark, D.; Schenker, B.; Rippin, D. W. T. Semi-batch process optimisation under uncertainty: Theory and experiments. Comput. Chem. Eng. 1998, 22, 201.
(17) Samsatli, N. J.; Papageorgiou, L. G.; Shah, N. Robustness Metrics for Dynamic Optimisation Models under Parameter Uncertainty. AIChE J. 1998, 44, 1993.
(18) Brenan, K. E.; Campbell, S. L.; Petzold, L. R. Numerical Solution of Initial Value Problems in Differential-Algebraic Equations; North Holland: New York, 1989.
(19) Diwekar, U. M. Introduction to Applied Optimisation; Kluwer Academic Publishers: Boston, MA, 2003.
(20) Kall, P.; Wallace, S. W. Stochastic Programming; John Wiley & Sons: New York, 1994.
(21) Straub, D. A.; Grossmann, I. E. Integrated Stochastic Metric of Flexibility for Systems with Discrete State and Continuous Parameter Uncertainties. Comput. Chem. Eng. 1990, 14, 967.
(22) Shah, N.; Pantelides, C. C. Design of Multipurpose Batch Plants with Uncertain Production Requirements. Ind. Eng. Chem. Res. 1992, 31, 1325.
(23) Subrahmanyan, S.; Pekny, J. F.; Reklaitis, G. V. Design of Batch Chemical Plants under Market Uncertainty. Ind. Eng. Chem. Res. 1994, 33, 2688.
(24) Diwekar, U. M.; Kalagnanam, J. R. Robust Design using an Efficient Sampling Technique. Comput. Chem. Eng. 1996, 20, S389.
(25) Chaudhuri, P. D.; Diwekar, U. M. Process Synthesis under Uncertainty: A Penalty Function Approach. AIChE J. 1996, 42, 742.
(26) Bernardo, F. P.; Saraiva, P. M. Robust Optimization Framework for Process Parameter and Tolerance Design. AIChE J. 1998, 44, 2007.
(27) Banks, J. Handbook of Simulation: Principles, Methodology, Advances, Applications, and Practice; John Wiley & Sons: New York, 1998.
(28) Bequette, W. B. Process Dynamics: Modeling, Analysis, and Simulation; Prentice Hall: Englewood Cliffs, NJ, 1998.
(29) The Mathworks. Optimization Toolbox for Use with MATLAB, User Guide, Version 2; The Mathworks: Natick, MA, 2002.
(30) Loeblein, C.; Perkins, J. D. Economic analysis of different structures for on-line process optimisation systems. Comput. Chem. Eng. 1998, 22, 1257.
(31) Tørvi, H.; Hertzberg, T. Methods for evaluating uncertainties in dynamic simulation - A comparison of performance. Comput. Chem. Eng. 1998, 22, S985.
(32) Penlidis, A.; Ponnuswamy, S. R.; Kiparissides, C.; O'Driscoll, K. F. Polymer reaction engineering: modelling considerations for control studies. Chem. Eng. J. 1992, 50, 95.
(33) Morokoff, W. J.; Caflisch, R. E. Quasi-Monte Carlo Integration. J. Comput. Phys. 1995, 122, 218.
(34) Owen, A. B. Latin Supercube Sampling for Very High-Dimensional Simulations. ACM Trans. Model. Comput. Simul. 1998, 8, 71.

Received for review July 9, 2003
Revised manuscript received October 14, 2003
Accepted October 15, 2003

IE034001M