
Ind. Eng. Chem. Res. 1997, 36, 2252-2261

Dynamic Optimization of Batch Reactors Using Adaptive Stochastic Algorithms

Eugenio F. Carrasco†,‡ and Julio R. Banga*,§

Department of Chemical Engineering, Facultad de Ciencias, Universidad de Santiago, Av/Alfonso X el Sabio s/n, 27002 Lugo, Spain, and Chemical Engineering Lab., Instituto Investigacions Marinas (CSIC), C/Eduardo Cabello 6, 36208 Vigo, Spain

* To whom correspondence should be addressed. Phone: +34-86-231930. Fax: +34-86-292762. E-mail: [email protected].
† Universidad de Santiago.
‡ Phone: +3482223325. Fax: +34-82-224904. E-mail: [email protected].
§ Instituto Investigacions Marinas (CSIC).

The dynamic optimization (optimal control) of chemical batch reactors is considered. The solution of these types of problems is usually very difficult due to their highly nonlinear and multimodal nature. In fact, although several deterministic techniques have been proposed to solve these problems, convergence difficulties have been frequently found. Here, two algorithms based on stochastic optimization are proposed as reliable alternatives. These stochastic algorithms are used to successfully solve four difficult case studies taken from the recent literature: Denbigh's system of reactions, the oil shale pyrolysis problem, the optimal fed-batch control of induced foreign protein production by recombinant bacteria, and the optimal drug scheduling of cancer chemotherapy. The advantages of these alternative techniques, including ease of implementation, global convergence properties, and good computational efficiency, are discussed.

1. Introduction

The dynamic optimization (optimal control) of chemical batch reactors has received major attention since the early works of Denbigh (1958) and Aris (1960). These authors considered the problem of determining the optimum temperature profiles for cases where there are competing side reactions, demonstrating that large increases in yield may be obtained using the optimal profiles. Another interesting and challenging type of related problem, which has been studied by many researchers, is the dynamic optimization of batch and semibatch bioreactors, like the determination of the optimal feed rate profile of fed-batch reactors (e.g., Johnson, 1987; San and Stephanopoulos, 1989; Chen and Hwang, 1990; Modak and Lim, 1992; Bibila and Robinson, 1995). These problems are especially difficult to solve due to their highly nonlinear (and sometimes distributed) nature and also due to the presence of constraints on both the state and control variables (see, for example, Lim et al., 1986). Besides, many of these problems are multimodal, so most deterministic methods may fail to converge to the global solution.

The variational formulation (maximum principle of Pontryagin) has been suggested by several authors to solve these optimal control problems. However, this approach is very difficult in most cases, especially when constraints on the state variables are present. Alternatively, several transformation techniques have been proposed (see the literature review of Vassiliadis, 1993) which can be classified under the following categories: complete parametrization (Biegler, 1984; Cuthrell and Biegler, 1989; Tanartkit and Biegler, 1995), control parametrization (Goh and Teo, 1988; Vassiliadis, 1993; Vassiliadis et al., 1994a,b; Feehery and Barton, 1996), and iterative dynamic programming (IDP; Luus, 1994; Dadebo and McAuley, 1995). However, both complete and control parametrization strategies may present convergence difficulties due to

the highly nonlinear, multimodal, and/or discontinuous nature of these systems (Luus, 1994). In particular, the use of a control parametrization approach together with deterministic (gradient-based) optimization methods, such as sequential quadratic programming (SQP), usually causes convergence to local optima. Luus (1994) proposed the use of the iterative dynamic programming (IDP) algorithm to surmount these difficulties. Nevertheless, the IDP algorithm requires preparatory runs to estimate penalty factors, and the associated computation effort is usually considerable.

Stochastic algorithms are robust alternatives for the global optimization of chemical processes, especially when the process models have discontinuities or multimodal performance indexes (Banga and Seider, 1995). In this paper, we propose two adaptive stochastic algorithms, ICRS/DS (Integrated Controlled Random Search for Dynamic Systems) and the new ARDS/DS (Adaptive Randomly Directed Search for Dynamic Systems), as reliable and efficient alternatives for the global dynamic optimization of chemical reactors. We also demonstrate the advantages of these algorithms for solving several challenging optimal control problems taken from the literature: Denbigh's system of reactions, the oil shale pyrolysis problem, the optimal fed-batch control of induced foreign protein production by recombinant bacteria, and the optimal drug scheduling of cancer chemotherapy.

2. Dynamic Optimization Using Adaptive Stochastic Algorithms

The general dynamic optimization (optimal control) problem of a chemical reactor, considering a fixed terminal time tf, can be stated as finding the control vector u{t} over t ∈ [t0, tf] to minimize (or maximize) a performance index J[x,u]:

J[x,u] = Θ[x{t_f}] + ∫_{t_0}^{t_f} Φ[x{t},u{t},t] dt    (1)

subject to a set of ordinary differential equality constraints (eq 2) where x is the vector of state variables,

dx/dt = Ψ[x{t},u{t},t]    (2)


with initial conditions x{t_0} = x_0, and also subject to sets of algebraic equality and inequality constraints (eqs 3 and 4).

h[x{t},u{t}] = 0    (3)

g[x{t},u{t}] ≤ 0    (4)

An additional set of algebraic inequality constraints is given by the upper and lower bounds on the state and control variables (eqs 5 and 6).

x^L ≤ x{t} ≤ x^U    (5)

u^L ≤ u{t} ≤ u^U    (6)

The above formulation assumes that the reactor is modeled as a lumped system (i.e., described by ordinary differential equations). If the reactor is modeled as a distributed system (e.g., a fixed-bed reactor where state variables are a function of both time and spatial position), the governing partial differential equations (PDEs) are introduced as an additional set of equality constraints:

p(x, x_κ, x_κκ, ..., κ) = 0,  κ ∈ Ω    (7)

a(x, x_κ, ..., κ) = 0,  κ ∈ δΩ    (8)

where κ are the independent variables (time and spatial position). Equation 7 is the system of governing PDEs within the domain Ω, where x_κ = ∂x/∂κ, and equation 8 is the set of auxiliary conditions of the PDEs (boundary and initial conditions) on the boundary δΩ of the domain Ω.

In this paper, the ICRS/DS and the new ARDS/DS stochastic algorithms are proposed as robust alternatives to solve these types of problems. The ICRS/DS algorithm was originally developed to solve several difficult optimal control problems related to the thermal sterilization of bioproducts (Banga et al., 1991). This algorithm has successfully solved many other optimal control problems, including complex distributed systems (Banga and Singh, 1994; Banga et al., 1994; Banga and Seider, 1995; Irizarry-Rivera and Seider, 1996). Here, we also present the new ARDS/DS algorithm, which has been developed in order to improve the computational efficiency of ICRS/DS when many constraints on the state variables are present.

Both algorithms, which are detailed in Appendix A, are based on the transformation of the original optimal control problem into a nonlinear programming (NLP) problem using a control parametrization approach. The use of variable-length piecewise-linear polynomials for the control parametrization was found to give the best results. The resulting constrained NLP problems are then solved using the corresponding (ICRS or ARDS) adaptive stochastic algorithms. This prevents convergence to local optima, which is one of the major drawbacks of deterministic (gradient-based) approaches. The set of equality constraints (differential-algebraic equations, DAEs) (eqs 2 and 3) is integrated at each iteration. The DAEs are usually stiff, so a robust and efficient initial value integrator must be used. We have used DASSL (Petzold, 1982; Brenan et al., 1989) with excellent results in all cases. Note that in this paper we assume that the DAEs are index-1, so they can be solved with DASSL. However, a novel approach to dynamic optimization of high-index DAEs has been proposed recently (Feehery et al., 1995; Feehery and Barton, 1996). If the system is distributed, the PDEs (eqs 7 and 8) can be transformed into a set of DAEs using the numerical method of lines (Schiesser, 1991) or into a system of algebraic equations using finite differences (Banga et al., 1991, 1994).

These algorithms are very simple to implement and use. With this approach, no preparation or modification of the original equations is required, even if the model has discontinuities, and there is no need to provide analytical derivatives or to define penalty functions and their weights.

3. Case Studies

Four difficult case studies are examined in this paper. In this section, the optimization problems are formulated. The results are presented and discussed in the next section.

3.1. Case Study I: Denbigh's System of Reactions. This optimal control problem is based on the system of chemical reactions initially considered by Denbigh (1958), which was also studied by Aris (1960) and more recently by Luus (1994):

A + B → X
A + B → P

X → Y
X → Q

where X is an intermediate, Y is the desired product, and P and Q are waste products. This system is described by the following differential equations:

dx_1/dt = -k_1 x_1 - k_2 x_1    (9)

dx_2/dt = k_1 x_1 - (k_3 + k_4) x_2    (10)

dx_3/dt = k_3 x_2    (11)

where the state variables are x_1 = [A] = [B], x_2 = [X], and x_3 = [Y]. The initial condition is

x{0} = [1 0 0]^T    (12)

The rate constants are given by

(

ki ) ki0 exp -

Ei RT

)

i ) 1, 2, 3, 4

(13)

where the values of ki0 and Ei are given by Luus (1994). The optimal control problem is to find T(t) (the temperature of the reactor as a function of time) so that the yield of Y is maximized at the end of the given batch time tf. Therefore, the performance index to be maximized is

J = x_3{t_f}    (14)

where the batch time tf is specified as 1000 s. The constraints on the control variable (reactor temperature) are

273 ≤ T ≤ 415    (15)
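For illustration, the evaluation step used by both stochastic algorithms for this case (integrate eqs 9-11 for a candidate temperature profile and read off J = x_3{t_f}, eq 14) can be sketched as follows. This is a minimal Python sketch, not the authors' Fortran 77/DASSL implementation; the pre-exponential factors and activation energies below are placeholders, since the actual values are those tabulated by Luus (1994).

```python
import numpy as np
from scipy.integrate import solve_ivp

R = 8.314  # gas constant, J/(mol K)

# Placeholder kinetic parameters (the real values are those given by Luus, 1994)
K0 = np.array([1.0e3, 1.0e3, 1.0e3, 1.0e3])   # pre-exponential factors k_i0
EA = np.array([3.0e4, 4.0e4, 3.5e4, 4.5e4])   # activation energies E_i, J/mol

def denbigh_rhs(t, x, theta, omega):
    """Right-hand side of eqs 9-11 with T(t) interpolated piecewise linearly."""
    T = np.interp(t, theta, omega)            # eq 45-style piecewise-linear control
    k = K0 * np.exp(-EA / (R * T))            # Arrhenius rate constants, eq 13
    x1, x2, x3 = x
    return [-(k[0] + k[1]) * x1,              # eq 9
            k[0] * x1 - (k[2] + k[3]) * x2,   # eq 10
            k[2] * x2]                        # eq 11

def performance_index(theta, omega, tf=1000.0):
    """J = x3{tf} (eq 14) for the temperature profile defined on the grid (theta, omega)."""
    sol = solve_ivp(denbigh_rhs, (0.0, tf), [1.0, 0.0, 0.0],
                    args=(theta, omega), rtol=1e-8, atol=1e-9)
    return sol.y[2, -1]

# Usage example: the constant 344 K starting profile of Luus (1994) on a 5-point grid
theta0 = np.linspace(0.0, 1000.0, 5)
print(performance_index(theta0, np.full(5, 344.0)))
```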


3.2. Case Study II: Oil Shale Pyrolysis. A very challenging optimal control problem is the oil shale pyrolysis case study, as considered by Luus (1994). The system consists of a series of five chemical reactions:

A_1 → A_2
A_2 → A_3
A_1 + A_2 → A_2 + A_2
A_1 + A_2 → A_3 + A_2
A_1 + A_2 → A_4 + A_2

This system is described by the differential equations

dx_1/dt = -k_1 x_1 - (k_3 + k_4 + k_5) x_1 x_2    (16)

dx_2/dt = k_1 x_1 - k_2 x_2 + k_3 x_1 x_2    (17)

dx_3/dt = k_2 x_2 + k_4 x_1 x_2    (18)

dx_4/dt = k_5 x_1 x_2    (19)

where the state variables are the concentrations of species, x_i = [A_i], i = 1, ..., 4. The initial condition is

x{0} = [1 0 0 0]^T    (20)

The rate expressions are given by:

k_i = k_{i0} exp(-E_i/RT),  i = 1, 2, ..., 5    (21)

where the values of ki0 and Ei are given by Luus (1994). The optimal control problem is to find the residence time tf and the temperature profile T(t) in the time interval 0 e t e tf so that the production of pyrolytic bitumen, given by x2, is maximized. Therefore, the performance index is

J = x_2{t_f}    (22)

The constraints on the control variable (temperature) are

698.15 ≤ T ≤ 748.15    (23)
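Since the residence time t_f is itself a degree of freedom in this case, one simple way to handle it within the same control-parametrization framework is to append t_f to the decision vector and define the temperature grid on normalized time. The Python sketch below illustrates one possible encoding; it is an assumption for illustration, not necessarily the exact scheme used by the authors (who also let the interior grid times vary).

```python
import numpy as np

def unpack(xi, n=7, T_lo=698.15, T_hi=748.15):
    """Sketch: decision vector xi = [tf, omega_1..omega_n]; grid times are tf-scaled."""
    tf = xi[0]                                   # residence time, optimized jointly
    theta = tf * np.linspace(0.0, 1.0, n)        # grid scaled to the current tf
    omega = np.clip(np.asarray(xi[1:1 + n]), T_lo, T_hi)   # temperatures, eq 23 bounds
    return tf, theta, omega
```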

3.3. Case Study III: Optimal Control of a Fed-Batch Bioreactor for Protein Production. This problem considers the optimal fed-batch control of induced foreign protein production by recombinant bacteria as presented by Lee and Ramirez (1994), using the nutrient and inducer feeding rates as control variables. The model is given by the following ordinary differential equations:

dx_1/dt = u_1 + u_2    (24)

dx_2/dt = µ{x_3,x_5,x_6,x_7} x_2 - [(u_1 + u_2)/x_1] x_2    (25)

dx_3/dt = (u_1/x_1) C_n^f - [(u_1 + u_2)/x_1] x_3 - Y^{-1} µ{x_3,x_5,x_6,x_7} x_2    (26)

dx_4/dt = R_fp{x_3,x_5} x_2 - [(u_1 + u_2)/x_1] x_4    (27)

dx_5/dt = (u_2/x_1) C_i^f - [(u_1 + u_2)/x_1] x_5    (28)

dx_6/dt = -k_1 x_6    (29)

dx_7/dt = k_2 (1 - x_7)    (30)

where the state variables are the reactor volume (x1), the cell density (x2), the nutrient concentration (x3), the foreign protein concentration (x4), the inducer concentration (x5), the inducer shock factor on cell growth rate (x6), and the inducer recovery factor on cell growth rate (x7). The two control variables are the glucose feeding rate (u1) and the inducer feeding rate (u2). The initial conditions for the system are

x{0} = [1 0.1 40 0 0 1 0]^T    (31)

The objective is to maximize the profitability of the process (i.e., the difference between the value of the product and the cost of the inducer) for a specified final time of fed-batch operation, tf. Therefore, the performance functional to maximize is

J = x_4{t_f} x_1{t_f} - Q ∫_{t_0}^{t_f} u_2 dt    (32)

where Q is the ratio of the cost of inducer to the value of the protein product.
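Because eq 32 contains an integral of the control, a convenient way to evaluate it is to append a quadrature state x_8 with dx_8/dt = u_2, so that J = x_4 x_1 - Q x_8 at t_f. The Python sketch below illustrates this idea only; the fed-batch right-hand side of eqs 24-30 (with the rate expressions µ and R_fp of Lee and Ramirez, 1994, not reproduced here) is assumed to be supplied by the caller, and the feed profiles are assumed piecewise linear as in eq 45.

```python
import numpy as np
from scipy.integrate import solve_ivp

def augmented_rhs(t, x, fed_batch_rhs, u1_profile, u2_profile):
    """Append x8 with dx8/dt = u2 so the integral term of eq 32 is obtained by quadrature.
    `fed_batch_rhs(t, x7, u1, u2)` is an assumed callable returning dx1..dx7/dt of eqs 24-30."""
    u1 = np.interp(t, *u1_profile)          # (times, values) pair, piecewise linear
    u2 = np.interp(t, *u2_profile)
    dx = list(fed_batch_rhs(t, x[:7], u1, u2))
    dx.append(u2)                           # dx8/dt = u2
    return dx

def profit(fed_batch_rhs, u1_profile, u2_profile, tf=10.0, Q=5.0):
    x0 = [1.0, 0.1, 40.0, 0.0, 0.0, 1.0, 0.0, 0.0]   # eq 31 plus x8(0) = 0
    sol = solve_ivp(augmented_rhs, (0.0, tf), x0,
                    args=(fed_batch_rhs, u1_profile, u2_profile), rtol=1e-8, atol=1e-8)
    x1, x4, x8 = sol.y[0, -1], sol.y[3, -1], sol.y[7, -1]
    return x4 * x1 - Q * x8                  # eq 32 evaluated at tf
```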

3.4. Case Study IV: Optimal Drug Scheduling for Cancer Chemotherapy. Martin (1992) considered the problem of determining the optimal cancer drug scheduling to decrease the size of a malignant tumor as measured at some particular time in the future. The drug concentration must be kept below some level throughout the treatment period, and the cumulative effect of the drug must be kept below the ultimate tolerance level. This problem was also studied by Bojkov et al. (1993) and Luus et al. (1995) using direct search optimization. The system is given by the differential equations:

dx_1/dt = -k_1 x_1 + k_2 (x_2 - k_3) H{x_2 - k_3}    (33)

dx_2/dt = u - k_4 x_2    (34)

dx_3/dt = x_2    (35)

where the tumor mass is given by N = 10^12 exp(-x_1) cells, x_2 is the drug concentration in the body in drug units [D], and x_3 is the cumulative effect of the drug, and the parameters are taken as k_1 = 9.9 × 10^-4 days^-1, k_2 = 8.4 × 10^-3 days^-1 [D]^-1, k_3 = 10 [D], and k_4 = 0.27 days^-1. The initial state considered is

x{0} = [ln(100) 0 0]^T    (36)


where

H{x_2 - k_3} = 1 if x_2 ≥ k_3, and 0 if x_2 < k_3    (37)

The performance index to be maximized is

J = x_1{t_f}    (38)

where the final time t_f = 84 days. The optimization is subject to the following constraints on the drug delivery (control variable):

u(t) ≥ 0    (39)

and on the state variables:

x_2(t) ≤ 50    (40)

x_3(t) ≤ 2.1 × 10^3    (41)

for t_0 ≤ t ≤ t_f.

Figure 1. Optimal control profiles for case study I (Denbigh's system of reactions) obtained using the ICRS/DS and ARDS/DS algorithms.

Also, there should be at least a 50% reduction in the size of the tumor every 3 weeks, so that the following point constraints must be considered:

x_1{21} ≥ ln(200)    (42)

x_1{42} ≥ ln(400)    (43)

x_1{63} ≥ ln(800)    (44)
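To make the feasibility test of the stochastic algorithms (step 2b in Appendix A) concrete for this heavily constrained case, the following Python sketch integrates eqs 33-35 for a piecewise-constant drug schedule and checks the path constraints (eqs 40 and 41) on a dense time grid together with the point constraints (eqs 42-44). It is an illustrative sketch only; the reported results were obtained with the Fortran 77/DASSL implementation.

```python
import numpy as np
from scipy.integrate import solve_ivp

K1, K2, K3, K4 = 9.9e-4, 8.4e-3, 10.0, 0.27   # model parameters (Martin, 1992)

def chemo_rhs(t, x, t_grid, u_grid):
    i = min(np.searchsorted(t_grid, t, side="right") - 1, len(u_grid) - 1)
    u = u_grid[i]                                   # piecewise-constant drug delivery
    x1, x2, x3 = x
    growth = K2 * (x2 - K3) if x2 >= K3 else 0.0    # Heaviside switch, eq 37
    return [-K1 * x1 + growth,                      # eq 33
            u - K4 * x2,                            # eq 34
            x2]                                     # eq 35

def evaluate(t_grid, u_grid, tf=84.0):
    """Return (J, feasible): J = x1{tf} (eq 38); feasibility per eqs 39-44."""
    t_eval = np.linspace(0.0, tf, 2001)             # dense grid for the path constraints
    sol = solve_ivp(chemo_rhs, (0.0, tf), [np.log(100.0), 0.0, 0.0],
                    args=(np.asarray(t_grid), np.asarray(u_grid)),
                    t_eval=t_eval, rtol=1e-10, atol=1e-12)
    x1, x2, x3 = sol.y
    feasible = (np.all(np.asarray(u_grid) >= 0.0)                  # eq 39
                and np.all(x2 <= 50.0)                             # eq 40
                and np.all(x3 <= 2.1e3)                            # eq 41
                and np.interp(21.0, t_eval, x1) >= np.log(200.0)   # eq 42
                and np.interp(42.0, t_eval, x1) >= np.log(400.0)   # eq 43
                and np.interp(63.0, t_eval, x1) >= np.log(800.0))  # eq 44
    return x1[-1], feasible
```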

4. Results and Discussion

These four case studies were successfully solved using Fortran 77 (double-precision) implementations of the ICRS/DS and ARDS/DS algorithms. In order to ensure high accuracy, the relative and absolute error tolerances for the integrations with the DASSL solver were set to values ranging from 10^-7 to 10^-9. With respect to the convergence criterion of the optimization loop, the tolerance (referred to the decision variables, tol{ξ}, as described in Appendix A) was set to values ranging from 10^-3 to 10^-5 in both algorithms.

One of our objectives was to demonstrate that these stochastic algorithms are efficient enough to be used on low-cost computing platforms, like standard personal computers. Therefore, the programs were compiled (using the Microsoft Fortran PowerStation 4.0 compiler) and run on a PC Pentium/133 MHz. As a reference for comparison with other computers, we compiled and ran the Linpack benchmark (double precision, Fortran 77 version) on the PC Pentium/133, obtaining an average performance of 12 Mflops. A performance database server based on the Linpack benchmark is available on the Internet at http://performance.netlib.org/.

4.1. Case I: Denbigh's System of Reactions. Luus (1994) reported a maximum value of the performance index of J = 0.669 33, using the iterative dynamic programming (IDP) algorithm and starting from an initial constant profile u_0{t} = 344 K. He reported CPU times of around 4 h using a PC 486/33 MHz, which has a performance of 1.4 Mflops, as measured by the Linpack benchmark.

This problem was easily solved using the ICRS/DS algorithm and the same initial profile. The value of the performance index obtained was J = 0.668 74, which is only 0.09% from the global optimum obtained by Luus (1994), with a CPU time of only 1.1 min using a PC Pentium/133 MHz. This is a very significant improvement, as the corresponding CPU time for IDP on the Pentium would be around 28 min (the Pentium/133 is approximately 8.6 times faster than the 486/33, as estimated from the Linpack performance values).

Figure 2. Convergence rates (relative error versus CPU time) of the ICRS/DS and ARDS/DS algorithms for case study I.

This problem was also solved using the ARDS/DS algorithm with the same starting profile used previously. A maximum performance index of J = 0.667 45 was obtained, which is 0.2% from the global optimum obtained by Luus (1994), but the CPU time was only 28 s (PC Pentium/133 MHz). The optimal control profiles obtained with ICRS/DS and ARDS/DS are very similar (see Figure 1). However, for that CPU time, the ICRS/DS algorithm was already closer to the global optimum. In order to compare the performance of both stochastic methods fairly, the corresponding error curves (or convergence rate plots, that is, the relative distance from the optimum versus CPU time) are compared in Figure 2, showing the superiority of the ICRS/DS algorithm in this case study.

4.2. Case II: Oil Shale Pyrolysis. Luus (1994), using the IDP algorithm, found a global maximum of J = 0.353 820 occurring at a final residence time of t_f = 9.3 h. He reported convergence problems, concluding that the chance of obtaining the global optimum was highly dependent on the values assigned to some search parameters of IDP.

With the ICRS/DS algorithm, no convergence problems were encountered. Using an initial constant profile of u_0{t} = 720 and a final time of t_f = 9.3 h, a performance index of J = 0.353 805 was obtained, which is only 0.004% from the global optimum obtained by Luus (1994). The CPU time was 2.5 min (PC Pentium/133 MHz). In this case, Luus did not report CPU times, so we cannot make a comparison.


Figure 3. Optimal control profiles for case study II (oil shale pyrolysis) obtained using the ICRS/DS and ARDS/DS algorithms.

Figure 4. Convergence rates (relative error versus CPU time) of the ICRS/DS and ARDS/DS algorithms for case study II.

Table 1. Performance Index J for Case Study III Considering Two Q Values

                            Q = 0     Q = 5
ICRS/DS                     1.0097    0.9981
ARDS/DS                     1.0036    0.9646
Lee and Ramirez (1994)      1.0012    0.7988

The problem was also solved using the ARDS/DS algorithm with the same starting profile. A maximum of J = 0.353 734 was obtained, which is 0.02% from the optimum obtained by Luus (1994), requiring 0.9 min of CPU time (PC Pentium/133). The corresponding optimal control profiles are shown in Figure 3. It can be seen that, although the performance index values are very close, the control profiles are quite different. This fact indicates a relatively low sensitivity of the performance index with respect to the control, which can explain the convergence problems encountered by other deterministic methods, as discussed by Luus (1994). The error curves for both the ICRS/DS and the ARDS/DS algorithms are presented in Figure 4, showing again a better performance for the ICRS/DS algorithm.

4.3. Case III: Optimal Control of a Fed-Batch Bioreactor for Protein Production. Lee and Ramirez (1994) used a formulation based on Pontryagin's maximum principle to solve this problem. They studied two cases: one not considering the cost of the inducer (Q = 0) and a second considering the relative cost of the inducer to the value of the product as 5 (Q = 5). Their results, for t_f = 10 h, are presented in Table 1. It should be noted that these authors reported some convergence problems.

This problem was solved successfully using the ICRS/DS algorithm. The results are also presented in Table 1.

Figure 5. Optimal control profiles for case study III, with Q ) 0, obtained using the ICRS/DS algorithm.

Figure 6. Convergence rates (relative error versus CPU time) of the ICRS/DS and ARDS/DS algorithms for case study III (Q ) 0).

For Q = 0, we obtained a performance index J slightly better than the one reported by Lee and Ramirez, and no convergence difficulties were found. The optimal control is shown in Figure 5. Considering the case of Q = 5, the performance index obtained with the ICRS/DS algorithm is a very significant (25%) improvement over the result of Lee and Ramirez (1994). Again, no convergence difficulties were found.

The problem was also solved using the ARDS/DS algorithm for both Q = 0 and Q = 5. Better performance indexes than those of Lee and Ramirez (1994) were obtained, but they were not as good as those obtained with the ICRS/DS algorithm (Table 1). The error curves for both stochastic algorithms (considering Q = 0) are shown in Figure 6. Once again, the CPU times are very reasonable. Note that we cannot make a comparison with Lee and Ramirez (1994) as they do not provide these data.

As also mentioned by Lee and Ramirez (1994), it seems that this optimal control problem is highly multimodal due to the low sensitivity of the performance index with respect to the controls. Although the standard ARDS algorithm is usually more efficient than the ICRS algorithm for standard nonlinear programming (steady-state) problems, ICRS/DS seems to have better efficiency and global convergence properties than ARDS/DS when applied to multimodal dynamic optimization problems (like this case study). This is in agreement with the results found for case studies I and II. However, as will be shown in case study IV, when there are many constraints on the state variables, the ARDS/DS algorithm is more efficient.

4.4. Case IV: Optimal Drug Scheduling for Cancer Chemotherapy. The path and point constraints on the state variables make this problem especially difficult. Bojkov et al. (1993) used direct


search optimization along with an intuitive approach for the path constraints. They arrived at a performance index value of J = 17.223, which corresponds to a final tumor size of N = 3.31 × 10^4 cells at t_f = 84 days and was a 32% improvement over the result of Martin (1992). Luus et al. (1995) recognized that the use of the intuitive approach could narrow the search, so the global optimum solution was not obtained. Alternatively, they used a direct search method with search region contraction, obtaining a performance index J = 17.476, which corresponds to a final tumor size of N = 2.57 × 10^4 cells, a 22% improvement over the results of Bojkov et al. (1993). They reported a computation time of 3 days using a PC 486/66. This machine has a Linpack performance of 2.4 Mflops, so the corresponding time using a Pentium/133 would be 14.4 h.

The integration of the differential equations of this problem was found to be very sensitive to the tolerance values, so we used a value of 10^-12 with DASSL in order to assure high accuracy and reproducibility. In fact, using this tolerance, we obtained a performance index value of J = 17.33 for the optimal control of Luus et al. (1995), but the path constraint on x_2 was not satisfied. After slightly correcting it to make it feasible, J = 17.06 was obtained. These differences with their reported J value are probably due to the different integrators, tolerance values, and ways of checking the path constraints used.

We successfully solved this problem using the ICRS/DS algorithm, starting from a constant control profile u_0{t} = 6.75, as done by Luus et al. (1995). The value of the performance index obtained was J = 17.00, which is in close agreement with the optimal control of Luus et al. (1995) integrated with DASSL, as explained above. The computation time was 30 min using a Pentium/133. Using the ARDS/DS algorithm, a performance index of J = 17.01 was obtained, with a computation time of 15 min.

This problem was also solved by not considering the point constraints on x_1 (eqs 42-44). Using the ICRS/DS algorithm and starting from a constant control profile u_0{t} = 4.5, we obtained a performance index of J = 17.571 (final tumor size of N = 2.34 × 10^4 cells), which is an improvement of 29% over the results of Bojkov et al. (1993) and 9% over the results of Luus et al. (1995). The typical computation time was 45 min using a Pentium/133, although a performance index of J = 17.5 was reached with only 9 min of CPU time. We also solved this problem without point constraints using the ARDS/DS algorithm, starting from the same constant control profile. A performance index of J = 17.742 was obtained (final tumor size N = 1.97 × 10^4 cells), an improvement of 40% over the results of Bojkov et al. (1993), 23% over the results of Luus et al. (1995), and 16% over the results of ICRS/DS. The CPU time was only 2.5 min.

It is especially interesting to note that, if the point constraints are not taken into account, the final performance index value is improved. That is, although the point constraints are intended to force the tumor size to decrease at a given rate in order to assure a low intermediate tumor burden, they actually cause the final solution (final tumor size) to be worse than if they are not considered. This is in agreement with the observation of Martin (1992) that the optimal treatment maintains a relatively high tumor burden during therapy. The optimal control profiles calculated with both stochastic algorithms are shown in Figure 7.

Figure 7. Optimal control profiles for case study IV (drug scheduling for cancer chemotherapy) obtained using the ICRS/ DS and ARDS/DS algorithms.

Figure 8. Convergence rates (relative error versus CPU time) of the ICRS/DS and ARDS/DS algorithms for case study IV.

In Figure 8, the error curves of both algorithms are compared. In contrast with the results for the previous case studies, these curves clearly show the superiority of the ARDS/DS over the ICRS/DS algorithm. The reason is that this case study is heavily constrained: the ARDS/DS algorithm, thanks to its adaptive parameters, which reflect the history of the search, deals with these types of problems more efficiently than the ICRS/DS algorithm.

5. Conclusions

It has been shown that the ICRS/DS and ARDS/DS algorithms are simple, yet robust, alternatives for the dynamic optimization of chemical reactors. With respect to the challenging case studies considered here, which have been taken from the recent literature, these stochastic algorithms are efficient enough to be used on low-cost computing platforms with very reasonable CPU times. Furthermore, in several cases we have been able to significantly improve the solutions obtained with other methods. Comparing both stochastic algorithms, the ICRS/DS algorithm seems to be more efficient for dynamic optimization problems without path or point constraints on the state variables. However, for heavily constrained problems, the ARDS/DS algorithm is preferable because the history of the search is used in a more elaborate way to guide the step size selection.

Appendix A: The ICRS/DS and ARDS/DS Algorithms

Basically, these algorithms are based on a sequential (control parametrization) strategy: the original dynamic


optimization problem is transformed into a constrained nonlinear programming (NLP) problem using a simple, yet flexible, parametrization of the control function. The constrained NLP is solved using a stochastic algorithm (ICRS/DS or ARDS/DS, described below), where the equality constraints (the set of differential-algebraic equations) are simultaneously solved at each iteration using DASSL (Brenan et al., 1989). In order to avoid convergence to local optima, which is the major problem of deterministic approaches, some of the heuristic parameters of these stochastic algorithms must be given adequate values, as explained below.

Parametrization of the Control Function. In the description given here, we will consider the case of a single control function, but the extension to multiple controls is straightforward. The control function u{t}, t ∈ [t_0, t_f], is parametrized using a grid of N points, (θ_i, ω_i), i = 1, ..., N. The value of u{t} can be calculated using several types of interpolation. We have tested piecewise constant, piecewise linear, B-splines, and Lagrange polynomials, and we have found that variable-length piecewise-linear polynomials gave the best results in most cases. Also, it was found that adequate parametrizations were achieved with N ranging from 5 to 10 grid points. Therefore, this is the default option for the control parametrization in both the ICRS/DS and ARDS/DS algorithms. However, in some cases, the optimal control may not be smooth, presenting many abrupt changes, so variable-length piecewise-constant parametrization is given as an alternative option to further refine the solution. It should be noted that if the global optimal control is nonlinear, a control parametrization approach based on piecewise-linear polynomials will give a suboptimal solution. However, we have also found that this solution is very close to the global one due to the flexibility of the parametrization scheme used. Following this scheme, for iteration k within the optimization algorithm and θ_i^k ≤ t ≤ θ_{i+1}^k, the control is given by the piecewise-linear parametrization:

u^k{t} = ω_i^k + [(ω_{i+1}^k - ω_i^k)/(θ_{i+1}^k - θ_i^k)] (t - θ_i^k)    (45)

The original dynamic optimization problem results in an NLP problem of dimension 2N (or 2NM if there are M control variables), where the objective is to find (θ_i*, ω_i*), i = 1, ..., N, which minimize (or maximize) the performance index J. Therefore, the decision variables vector ξ for any iteration k within the ICRS/DS or ARDS/DS algorithm is

ξ_i^k = θ_i^k,        i = 1, ..., N
ξ_i^k = ω_{i-N}^k,    i = N + 1, ..., 2N    (46)
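A minimal Python sketch of eqs 45 and 46 for a single control (assuming the grid times stored in ξ are kept ordered) is:

```python
import numpy as np

def control_from_decision_vector(xi, t):
    """Recover u{t} from xi = [theta_1..theta_N, omega_1..omega_N] (eq 46)
    by piecewise-linear interpolation on the grid (eq 45)."""
    xi = np.asarray(xi, float)
    n = xi.size // 2
    theta, omega = xi[:n], xi[n:]
    order = np.argsort(theta)                 # keep the grid times ordered
    return np.interp(t, theta[order], omega[order])
```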

The ICRS/DS algorithm requires the explicit declaration of upper and lower bounds for the decision variables. Therefore, the upper and lower constraints for the control vector (eq 6) are declared in a simple way:

ξ_i^U = θ_i^U, ξ_i^L = θ_i^L,            i = 1, ..., N
ξ_i^U = ω_{i-N}^U, ξ_i^L = ω_{i-N}^L,    i = N + 1, ..., 2N    (47)

If a fixed constant discretization ratio of time is desired, it suffices to take

θ_i^U = θ_i^L + ε = [t_f/(N - 1)] (i - 1) + ε    (48)

where ε is a small number.

ICRS/DS Algorithm. This algorithm is a modification of the ICRS algorithm (Banga and Casares, 1987; Casares and Rodriguez, 1989), which was a generalization of the method originally proposed by Goulcher and Casares (1978) for steady-state optimization problems. The algorithm has three heuristic parameters (K_1, K_2, n_e), where the default values are K_1 = 1/3, K_2 = 1/2, and n_e = 4. The parameter K_1 controls the initial step size of the search, which will be subsequently reduced using the parameter K_2. The integer parameter n_e controls the rate of convergence. For problems without local optima, these default values are adequate. For highly multimodal problems, K_1 must be increased to values from 1.0 to 2.0. For example, Banga and Seider (1995) have shown that, using K_1 = 2, the global minimum of a difficult multiphase equilibria problem was reached with a success ratio of 97%.

The algorithm is as follows:
Step 1. Read parameters and input data of the problem, including the upper and lower bounds for the decision variables as in eq 47, and the initial vector of decision variables ξ^0, which must be feasible. If no initial vector is available, the ICRS/DS algorithm may generate one automatically by doing random searches from a uniform probability distribution over each of the decision variables until a feasible vector is found.
Step 2. Evaluate the performance index at the initial vector, J^0 = J{ξ^0}, and check the inequality constraints. The procedure for doing this is as follows:
Step 2a. Using the parametrization of the control variables presented above, solve the differential equality constraints (eq 2), together with the equality algebraic constraints (eq 3), if any is present, using DASSL.
Step 2b. Check the inequality constraints (eq 4) and the upper and lower bounds of the state variables (eq 5). If any of them are not satisfied, then the decision vector is not feasible.
Step 2c. If it is feasible, calculate the performance index value (eq 1).
Step 3. If the initial vector is not feasible, return to step 1 and either stop or generate a new one.
Step 4. Calculate the minimum intervals of the decision variables, defined by the initial vector and its bounds:

λ_i^k = min{(ξ_i^U - ξ_i^k), (ξ_i^k - ξ_i^L)},  i = 1, ..., 2N    (49)

Step 5. Calculate the vector of standard deviations, using the heuristic parameter K1:

σ_i^k = K_1 λ_i^k,  i = 1, ..., 2N    (50)

Step 6. Generate a new decision vector ξ_i^{k+1} using a Gaussian distribution of standard deviation σ_i^k and mean ξ_i^k (the old decision vector):

ξ_i^{k+1} = ξ_i^k + σ_i^k Γ_i,  i = 1, ..., 2N    (51)

where Γ is a vector of pseudo-random numbers generated from a Gaussian probability distribution of mean 0 and standard deviation 1. Check that ξ_i^{k+1} ∈ (ξ_i^L, ξ_i^U), i = 1, ..., 2N; otherwise, repeat step 6.


Step 7. Evaluate the new value of the performance index, J^{k+1}, using the new decision vector, following the same procedure as in steps 2a-2c. If this new vector is feasible and the performance index has improved (J^{k+1} < J^k for minimization), go to step 9. Otherwise, go to step 8.
Step 8. Increment the failure counter F. If F > n_e × 2N, then make σ_i^k ← K_2 σ_i^k, i = 1, ..., 2N, and reset the counter F. Return to step 6.
Step 9. Check the convergence criterion, which can be referred to the decision vector and/or to the objective function according to previously specified tolerances:

|ξ_i^{k+1} - ξ_i^k| / (ξ_i^U - ξ_i^L) < tol{ξ}    (52)

and/or

|J^{k+1} - J^k| / |J^k| < tol{J}    (53)

If convergence has been achieved, save data and stop. Otherwise, make ξ^k = ξ^{k+1} and J^k = J^{k+1}, reset the counter F, increment the iteration counter k, and return to step 4.
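The ICRS/DS loop of steps 4-9 can be summarized in the following Python sketch. The feasibility test of steps 2a-2c is abstracted into a user-supplied function, minimization is assumed, and the bookkeeping is simplified with respect to the full description above; it is an illustration of the logic, not the authors' implementation.

```python
import numpy as np

def icrs_ds(evaluate, xi0, lo, hi, K1=1/3.0, K2=0.5, ne=4, tol=1e-4, max_evals=20000):
    """Sketch of the ICRS/DS search loop. `evaluate(xi)` must return (J, feasible),
    i.e. integrate the DAEs for the control encoded by xi and check eqs 4-5."""
    rng = np.random.default_rng()
    lo, hi = np.asarray(lo, float), np.asarray(hi, float)
    xi = np.asarray(xi0, float)
    J, _ = evaluate(xi)
    sigma = K1 * np.minimum(hi - xi, xi - lo)              # eqs 49-50
    fails, evals = 0, 0
    while evals < max_evals:
        evals += 1
        trial = xi + sigma * rng.standard_normal(xi.size)  # eq 51
        if np.any(trial < lo) or np.any(trial > hi):
            continue                                       # step 6: resample if out of bounds
        J_new, feasible = evaluate(trial)
        if feasible and J_new < J:                         # step 7: improvement found
            if np.all(np.abs(trial - xi) / (hi - lo) < tol):   # eq 52
                return trial, J_new
            xi, J, fails = trial, J_new, 0
            sigma = K1 * np.minimum(hi - xi, xi - lo)      # step 4: recompute intervals
        else:
            fails += 1
            if fails > ne * xi.size:                       # step 8: contract the search
                sigma, fails = K2 * sigma, 0
    return xi, J
```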

ARDS/DS Algorithm. Here we present the new ARDS/DS algorithm, which is based on the ARDS algorithm proposed for steady-state optimization problems by Martin and Gaddy (1982). The ARDS algorithm has the best reported efficiency of the adaptive random search methods, which is due to the fact that, at a given iteration, the previous history of the search is used to guide the step size selection. This results in an improvement of the search efficiency, especially when many constraints are present. The ARDS/DS algorithm has four heuristic parameters (K_1, K_R, SR, ∆K_R), which are quite problem dependent, with value ranges of K_1 = 1/4-2.0, K_R = 0.5-1.0, SR = 0.1-0.4, and ∆K_R = 0.01-1.0. The parameter K_1, which is similar to that used by the ICRS/DS algorithm, controls the initial step size of the search. The parameter K_R is the range reduction coefficient, which controls the contraction of the search region. The parameter SR, called the success ratio coefficient (calculated as the number of successful function evaluations divided by the total number of function evaluations), controls the rate of convergence. Finally, the parameter ∆K_R is the increment of the range reduction coefficient. For highly multimodal problems, K_1 must be increased to values ranging from 1.0 to 2.0, while SR must be decreased to values ranging from 0.1 to 0.2.

The algorithm is as follows:
Step 1. Read parameters and input data of the problem, including the upper and lower bounds for the decision variables, and the initial vector of decision variables ξ^0, which must be feasible.
Step 2. Evaluate the performance index at the initial vector, J^0 = J{ξ^0}, and check the inequality constraints. The procedure for doing this is as follows:
Step 2a. Using the parametrization of the control variables presented above, solve the differential equality constraints (eq 2), together with the equality algebraic constraints (eq 3), if any is present, using DASSL.
Step 2b. Check the inequality constraints (eq 4) and the upper and lower bounds of the state variables (eq 5). If any of them is not satisfied, then the decision vector is not feasible.
Step 2c. If it is feasible, calculate the performance index value (eq 1).
Step 3. If the initial vector is not feasible, return to step 1 and either stop or generate a new one.
Step 4. Calculate the initial range of the decision variables, defined by

R_i^k = min{2 × 10^{int(log ξ_i^k)}; K_1 (ξ_i^U - ξ_i^k); K_1 (ξ_i^k - ξ_i^L)},  i = 1, ..., 2N    (54)

Step 5. Generate a new decision vector using the following equations:

ξ_i^{k+1} = ξ_i^k + R_i (2Ξ_i - 1) / (K_R D)    (55)

D = √[Σ_{i=1}^{N} (2Ξ_i - 1)²]    (56)

for i = 1, ..., 2N, where Ξ is a vector of pseudo-random numbers between 0 and 1 generated from a uniform distribution.
Step 6. Evaluate the new value of the performance index, J^{k+1}, using the new decision vector, following the same procedure as in steps 2a-2c. If this new vector is feasible and the performance index has improved (J^{k+1} < J^k for minimization), go to step 9. Otherwise, go to step 7.
Step 7. The ranges R_i are halved if no improvements are found in the first five function evaluations. Thus, smaller ranges will be quickly established if larger step sizes are not appropriate. Return to step 5.
Step 8. If the success ratio is less than a previously specified value (SR), then make K_R = K_R + ∆K_R. Return to step 5.
Step 9. Check the convergence criterion, which can be referred to the decision variables vector and/or to the objective function according to previously specified tolerances:

|ξ_i^{k+1} - ξ_i^k| / (ξ_i^U - ξ_i^L) < tol{ξ}    (57)

and/or

|J^{k+1} - J^k| / |J^k| < tol{J}    (58)

If convergence has been achieved, save data and stop. Else, continue to step 10.
Step 10. Check whether the algorithm makes adequate progress toward convergence: if a change in the step size is less than 0.1% after 5 improvements in the objective function, reset SR, and reduce the ranges R_i (to facilitate faster movement) to a fraction of their initial values using the series of divisors (5, 10, 15, 20, ...), which was found to give the best convergence reliability and efficiency.
Step 11. Make ξ^k = ξ^{k+1} and J^k = J^{k+1}, increment the iteration counter k, and return to step 5.
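For comparison with the Gaussian step of ICRS/DS, the directed trial-point generation of ARDS/DS (eqs 55 and 56) can be sketched in Python as follows; the range initialization and the K_R and range updates of steps 4, 7, 8, and 10 are omitted.

```python
import numpy as np

def ards_trial_point(xi, ranges, K_R, rng=None):
    """One ARDS/DS trial point, xi_i(k+1) = xi_i(k) + R_i (2 Xi_i - 1) / (K_R D), eqs 55-56."""
    rng = rng or np.random.default_rng()
    Xi = rng.uniform(size=len(xi))            # uniform pseudo-random numbers in (0, 1)
    step = 2.0 * Xi - 1.0
    D = np.sqrt(np.sum(step ** 2))            # eq 56
    return np.asarray(xi, float) + np.asarray(ranges, float) * step / (K_R * D)
```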

Nomenclature

a = set of auxiliary conditions of the partial differential equations
C_i^f = inducer level of feed (g/L), case study III
C_n^f = nutrient level of feed (g/L), case study III

E_i = activation energy of ith reaction (J/mol), case studies I and II
F = failure counter in the ICRS/DS algorithm
g = inequality algebraic constraints
h = equality algebraic constraints
J = performance index
k = kinetic coefficient (s^-1), case studies I and II
K_1 = heuristic parameter in the ICRS/DS and ARDS/DS algorithms
K_2 = heuristic parameter in the ICRS/DS algorithm
K_R = heuristic parameter in the ARDS/DS algorithm
∆K_R = heuristic parameter in the ARDS/DS algorithm
M = number of control variables
n_e = heuristic parameter in the ICRS/DS algorithm
N = number of grid points of the control parametrization
p = set of governing partial differential equations
Q = relative cost of the inducer, case study III
R = gas constant (J/(mol·K))
R_1, ..., R_i = range of the state variables
R_fp = foreign protein production rate (h^-1), case study III
SR = heuristic parameter in the ARDS/DS algorithm
t = time (s)
T = temperature (K)
u = control variable
u = vector of control variables
x_1, ..., x_i = state variables
x = vector of state variables
Y^-1 = inverse of growth yield coefficient, case study III

Greek Letters
ε = small number
θ_i = time coordinates of control parametrization grid
Θ = function of the state variables
κ = independent variables in a distributed system
λ = minimum intervals vector
Γ = vector of random numbers (Gaussian distribution)
Ξ = vector of random numbers (uniform distribution)
ξ = decision variables vector
µ = specific growth rate (h^-1), case study III
σ = standard deviation vector
Φ = function of state and control variables
Ψ = right-hand side of ordinary differential equality constraints
ω_i = control coordinates of control parametrization grid
Ω = domain of a distributed system

Subscripts
0 = initial
f = final
i = element index in a vector
M = minimum

Superscripts
0 = initial values
k = iteration counter
L = lower
U = upper

Literature Cited

Aris, R. On Denbigh's optimum temperature sequence. Chem. Eng. Sci. 1960, 12, 56.
Banga, J. R.; Casares, J. J. Integrated Controlled Random Search: application to a wastewater treatment plant model. Inst. Chem. Eng. Symp. Ser. 1987, 100, 183.
Banga, J. R.; Singh, R. P. Optimization of the air drying of foods. J. Food Eng. 1994, 23, 189.
Banga, J. R.; Seider, W. D. Global optimization of chemical processes using stochastic algorithms. State of the Art in Global Optimization: Computational Methods and Applications; Princeton University: Princeton, NJ, April 1995.
Banga, J. R.; Perez-Martin, R. I.; Gallardo, J. M.; Casares, J. J. Optimization of the thermal processing of conduction-heated canned foods: study of several objective functions. J. Food Eng. 1991, 14, 25.
Banga, J. R.; Alonso, A. A.; Perez-Martin, R. I.; Singh, R. P. Optimal control of heat and mass transfer in food and bioproducts processing. Comput. Chem. Eng. 1994, 18, S699.
Bibila, T. A.; Robinson, D. K. In pursuit of the optimal fed-batch process for monoclonal antibody production. Biotechnol. Prog. 1995, 11, 1.
Biegler, L. T. Solution of dynamic optimization problems by successive quadratic programming and orthogonal collocation. Comput. Chem. Eng. 1984, 8, 243.
Bojkov, B.; Hansel, R.; Luus, R. Application of direct search optimization to optimal control problems. Hung. J. Ind. Chem. 1993, 21, 177.
Brenan, K. E.; Campbell, S. L.; Petzold, L. R. Numerical solution of initial-value problems in differential-algebraic equations; North-Holland: New York, 1989.
Casares, J. J.; Rodriguez, J. Analysis and evaluation of a wastewater treatment plant model by stochastic optimization. Appl. Math. Model 1989, 13, 420.
Chen, C. T.; Hwang, C. Optimal on-off control for fed-batch fermentation processes. Ind. Eng. Chem. Res. 1990, 29, 1869.
Cuthrell, J. E.; Biegler, L. T. Simultaneous optimization and solution methods for batch reactor control profiles. Comput. Chem. Eng. 1989, 13, 49.
Dadebo, S. A.; McAuley, K. B. Dynamic optimization of constrained chemical engineering problems using dynamic programming. Comput. Chem. Eng. 1995, 19, 513.
Denbigh, K. G. Optimum temperature sequences in reactors. Chem. Eng. Sci. 1958, 8, 125.
Feehery, W. F.; Barton, P. I. Dynamic simulation and optimization with inequality path constraints. Comput. Chem. Eng. 1996, 20, S707.
Feehery, W. F.; Banga, J. R.; Barton, P. I. A novel approach to dynamic optimization of ODE and DAE systems as high-index problems. AIChE Annual Meeting, Miami, FL, Nov 1995.
Goh, C. J.; Teo, K. L. Control parametrization: a unified approach to optimal control problems with general constraints. Automatica 1988, 24, 3.
Goulcher, R.; Casares, J. J. The solution of steady-state chemical engineering optimization problems using a random search technique. Comput. Chem. Eng. 1978, 2, 33.
Irizarry-Rivera, R.; Seider, W. D. Optimal interface for the Bridgman crystallization process. Numer. Heat Transfer, Part A 1996, 29, 227.
Johnson, A. The control of fed-batch fermentation processes: a survey. Automatica 1987, 23, 691.
Lee, J.; Ramirez, W. F. Optimal fed-batch control of induced foreign protein production by recombinant bacteria. AIChE J. 1994, 40 (5), 899.
Lim, H. C.; Tayeb, Y. J.; Modak, J. M.; Bonte, P. Computational algorithms for optimal feed rates for a class of fed-batch fermentation: numerical results for penicillin and cell mass production. Biotechnol. Bioeng. 1986, 28, 1408.
Luus, R. Optimal control of batch reactors by Iterative Dynamic Programming. J. Proc. Control 1994, 4 (4), 218.
Luus, R.; Hartig, F.; Keil, F. J. Optimal drug scheduling of cancer chemotherapy by direct search optimization. Hung. J. Ind. Chem. 1995, 23, 55.
Martin, R. B. Optimal control drug scheduling of cancer chemotherapy. Automatica 1992, 28 (6), 1113.
Martin, D. L.; Gaddy, J. L. Process optimization with the Adaptive Randomly Directed Search. AIChE Symp. Ser. 1982, 78, 99.
Modak, J. M.; Lim, H. C. Optimal mode of operation of bioreactor for fermentation processes. Chem. Eng. Sci. 1992, 47, 3869.
Petzold, L. R. Differential/algebraic equations are not ODEs. SIAM J. Sci. Stat. Comput. 1982, 3, 367.
San, K. Y.; Stephanopoulos, G. Optimization of a fed-batch penicillin fermentation: a case of singular optimal control with state constraints. Biotechnol. Bioeng. 1989, 34, 72.
Schiesser, W. E. The Numerical Method of Lines: Integration of Partial Differential Equations; Academic Press, Inc.: San Diego, 1991.
Tanartkit, P.; Biegler, L. T. Stable decomposition for dynamic optimization. Ind. Eng. Chem. Res. 1995, 34, 1253.
Vassiliadis, V. Computational solution of dynamic optimization problems with general differential-algebraic constraints. Ph.D. Dissertation, Imperial College, London, U.K., 1993.
Vassiliadis, V. S.; Sargent, R. W. H.; Pantelides, C. C. Solution of a class of multistage dynamic optimization problems. 1. Problems without path constraints. Ind. Eng. Chem. Res. 1994a, 33, 2111.
Vassiliadis, V. S.; Sargent, R. W. H.; Pantelides, C. C. Solution of a class of multistage dynamic optimization problems. 2. Problems with path constraints. Ind. Eng. Chem. Res. 1994b, 33, 2123.

Received for review November 12, 1996. Revised manuscript received February 3, 1997. Accepted February 4, 1997.

IE960718G

Abstract published in Advance ACS Abstracts, April 15, 1997.