
Ind. Eng. Chem. Res. 2009, 48, 4388–4401

Dynamic Optimization of Nonlinear Processes with an Enhanced Scatter Search Method
Jose A. Egea, Eva Balsa-Canto, María-Sonia G. García, and Julio R. Banga*
Process Engineering Group, Instituto de Investigaciones Marinas (IIM-CSIC), Eduardo Cabello 6, 36208 Vigo, Spain

An enhanced scatter search method for the global dynamic optimization of nonlinear processes using the control vector parametrization (CVP) approach is presented. Sharing some features of the scatter search metaheuristic, this new method presents a simpler but more effective design which helps to overcome typical difficulties of nonlinear dynamic systems optimization such as noise, flat areas, nonsmoothness, and/or discontinuities. This new algorithm provides a good balance between robustness and efficiency in the global phase, and couples a local search procedure to accelerate the convergence to optimal solutions. Its application to four multimodal dynamic optimization problems, compared with other state-of-the-art global optimization algorithms, including an advanced scatter search design, proves its efficiency and robustness, showing very good scalability.

1. Introduction

Dynamic optimization (also called open loop optimal control) appears in many industrial applications to optimize (or at least improve) a predefined performance index such as profitability, product quality, or productivity subject to safety or environmental specifications over a time interval.1 Mathematical models describing (bio)chemical engineering processes are usually nonlinear, dynamic, and/or distributed, which often causes their associated dynamic optimization problems to be nonconvex. Besides, nonsmoothness and discontinuities can be present; thus, the use of global optimization methods is needed for many dynamic optimization problems.2-7

Global optimization methods can be roughly divided into deterministic8 and stochastic.9 Advances in global deterministic methods for dynamic optimization have been achieved in recent years,7,10,11,3 but these methods still impose requirements on the differentiability of the functions and on the type of path constraints that can be handled. Besides, the computational effort (especially in the case of large-scale and/or distributed problems) is still a barrier for the application of these methods. Stochastic global optimization methods have been successfully applied to dynamic optimization problems.12-19 Other approaches using hybrid methods, which accelerate the convergence to the optimal solutions by combining stochastic global methods with a deterministic local search, have shown very good results too.5,6 In recent years, a special class of stochastic global optimization methods called metaheuristics20 has emerged as an efficient family of optimization techniques. These methods combine mechanisms of exploration of the search space and exploitation of the knowledge obtained by the search, so as to, hopefully, find the global solution. Some examples of the use of metaheuristics for dynamic optimization problems with very promising results can be found in the recent literature.4,5,14-19,21-28

In this study, we present an enhanced scatter search (eSS) method for solving nonlinear optimization problems. It shares some concepts of the scatter search metaheuristic,29 which has provided excellent results for the optimization of dynamic nonlinear problems from chemical and bioprocess engineering,30,31 but includes fundamental modifications which improve the balance between intensification (i.e., efficiency) and diversification (i.e., robustness) of the search with a lower number of tuning parameters.

* To whom correspondence should be addressed. E-mail: julio@iim.csic.es.

In particular, our new algorithm uses a type of solution combination method which does not restrict the search to the relative directions defined by the solutions in the population, but allows movements toward other areas, which increases the search diversity. Together with this, and in order to guide the search, a bias has been introduced. This bias consists in favoring the approach to high quality solutions and the escape from bad quality solutions within the population. This enhances the intensification.

Another fundamental change with respect to the original scatter search design is the type of population update. Classical scatter search designs use a population update method similar to the (µ + λ) evolutionary algorithms, which can make the search stagnate prematurely (especially in the case of algorithms with a low number of population members, like scatter search). Distance filters can be defined to surmount this,31 although they can also reduce the convergence rate to optimal solutions. In this new method, we have implemented a (µ + λ) updating strategy applied to every member of the population, as used by other optimization algorithms.32 This strategy avoids premature stagnation while allowing intensification in promising areas. These enhancements make this method a particularly attractive alternative for solving dynamic optimization problems where smooth control profiles are desirable, or those associated with distributed-parameter systems.

This paper is organized as follows: Section 2 states the problem of dynamic optimization and the control vector parametrization (CVP) approach for solving it. Our algorithm is described in section 3. Section 4 presents four dynamic optimization case studies, comprising both lumped and distributed nonlinear models, used in this paper for our experiments, as well as the results obtained. The paper finishes with some conclusions.

2. Dynamic Optimization

2.1. Problem Statement. Dynamic optimization allows the computation of the optimal operating policies to maximize a predefined performance index such as productivity or other economic indexes.5 This may be mathematically formulated as follows:


min_{u(t), v, tf}  C(x(ξ, tf), z(tf), v, tf) + ∫[t0, tf] L(x(ξ, t), z(t), u(t), v, ξ, t) dt   (1)

where the scalar functions C (Mayer term) and L (Lagrangian term) are continuously differentiable with respect to all of their arguments and the final time tf can be either fixed or free. This is subject to the system dynamics:

F(xt, xξ, xξξ, x(ξ, t), z(t), u(t), v, t) = 0   (2)

x(ξ, 0) = x0   (3)

u(0) = u0   (4)

z(0) = z0   (5)

x(Ω, t) = xΩ   (6)

where xt = ∂x/∂t, xξ = ∂x/∂ξ, xξξ = ∂²x/∂ξ²; x(ξ, t) ∈ X ⊂ R^(nl+nd) (with nl = number of lumped state variables and nd = number of distributed state variables) and z(t) ∈ Z ⊂ R^m are the vectors of differential and algebraic states, respectively; u(t) ∈ U ⊂ R^p is the vector of control (input) variables; v ∈ V ⊂ R^q are time-invariant parameters; t is the time (and tf is the final time); F is the set of partial differential-algebraic equations describing the system dynamics; finally, x0, z0, and u0 are the values of the respective vectors at the initial time t0, and xΩ is the value of x at the spatial boundary. Equality and inequality constraints may be imposed. Some of them must be satisfied over the whole process time (path constraints),

Hpath(x(ξ, t), z(t), u(t), v, t) = 0   ∀t   (7)

Gpath(x(ξ, t), z(t), u(t), v, t) ≤ 0   ∀t   (8)

while others must only be satisfied at the end of the process (end-point constraints),

Hend(x(ξ, tf), z(tf), u(tf), v, tf) = 0   (9)

Gend(x(ξ, tf), z(tf), u(tf), v, tf) ≤ 0   (10)

The control variables and/or the time-invariant parameters may be subject to lower and upper bounds:

u^L ≤ u(t) ≤ u^U   (11)

v^L ≤ v ≤ v^U   (12)

In practice, the dynamic optimization of distributed systems typically involves transforming the original system into an equivalent lumped system and applying lumped-system dynamic optimization methods. Therefore, a spatial discretization approach is usually used to transform the original infinite-dimensional partial differential equations (PDEs) into a large-scale, and possibly stiff, set of ordinary differential equations (ODEs).33 The accurate solution of the resulting ODE system then often requires the use of an implicit ODE solver.

2.2. Solution Method: CVP. Basically, direct approaches for dynamic optimization transform the original problem into a nonlinear programming problem using either control vector parametrization (CVP)34 or complete parametrization.35 Of these methods, the CVP approach seems to be the most convenient for dealing with large-scale ODE systems,33 such as those resulting from distributed systems, as in some of the examples presented in this study.
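As a minimal illustration of this discretization step (not code from the paper; the grid size, boundary handling, and parameter values below are assumptions), a 1-D diffusion equation of the type used later in the drying case study can be turned into a set of ODEs by finite differences and then handed to a standard, possibly implicit, ODE solver:

```python
import numpy as np
from scipy.integrate import solve_ivp

def diffusion_rhs(t, m, D, dx):
    """Method-of-lines sketch: dm/dt = D d2m/dx2 on an interior grid,
    with zero-flux (reflective) boundaries."""
    dmdt = np.zeros_like(m)
    dmdt[1:-1] = D * (m[2:] - 2 * m[1:-1] + m[:-2]) / dx**2
    dmdt[0] = 2 * D * (m[1] - m[0]) / dx**2
    dmdt[-1] = 2 * D * (m[-2] - m[-1]) / dx**2
    return dmdt

# the resulting ODE system (here 50 states) is integrated with an implicit method
m0 = np.ones(50)
sol = solve_ivp(diffusion_rhs, (0.0, 10.0), m0, args=(1e-3, 1.0 / 49), method="BDF")
```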

The CVP approach proceeds by dividing the time horizon into F time intervals. The control variables are then approximated within each interval by means of basis functions, usually low order polynomials. This parametrization transforms the original (infinite dimensional) dynamic optimization problem into a nonlinear programming problem (NLP) where the system dynamics (differential equality constraints) must be solved for each evaluation of the performance index. In this work we will use a piecewise-constant approximation (i.e., zero-order polynomials).

NLPs arising from the application of direct approaches (such as CVP) are frequently multimodal. Therefore, gradient-based local optimization techniques (such as SQP methods) may converge to local optima. Global optimization methods are robust alternatives to local methods. Banga et al.5 showed that hybrid methods combining a stochastic global solver with a deterministic local search were the most efficient option for solving dynamic optimization problems using the CVP approach. A key issue of this methodology is the selection of the switching point from the global to the local phase.6 This selection is not obvious and must be defined by the user. In this work, we present a hybrid method combining a global phase with a local search, in which the initial points used for the local procedure are automatically selected by the algorithm based on a balance between quality and diversity among the solutions generated in every iteration.

3. Enhanced Scatter Search Design

Scatter search is a population-based metaheuristic which combines a global phase with an intensification method (i.e., a local search). The scatter search template36 has served as the main reference for most of the implementations to date. Scatter search methodology is very flexible, since each of its elements can be implemented in a variety of ways and degrees of sophistication. A basic design to implement scatter search is given by the well-known "five-method template":37
(1) A Diversification Generation Method to generate a collection of diverse trial solutions.
(2) An Improvement Method to transform a trial solution into one or more enhanced trial solutions.
(3) A Reference Set Update Method to build and maintain a reference set consisting of the b "best" solutions found, where the value of b is typically small compared to the population size of other evolutionary algorithms. Solutions gain membership to the reference set according to their quality or their diversity.
(4) A Subset Generation Method to operate on the reference set, producing several subsets of its solutions as a basis for creating combined solutions.
(5) A Solution Combination Method to transform a given subset of solutions produced by the Subset Generation Method into one or more combined solution vectors.

An advanced scatter search design for chemical and bioprocess optimization problems, providing excellent results, was recently developed in our group.31 Here we introduce a set of changes and improvements which make the algorithm more robust and efficient while using fewer tuning parameters. Throughout this section we will compare the new algorithm presented here with our previous design, justifying the use of the new type of combination and update methods in order to obtain a better balance between diversification and intensification, which is the key point of global optimization algorithms.
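Before detailing the algorithm, the CVP evaluation loop of section 2.2 can be outlined as follows. This is an illustrative sketch under assumed function names (rhs, terminal_cost), not the authors' implementation:

```python
import numpy as np
from scipy.integrate import solve_ivp

def cvp_objective(w, t0, tf, n_intervals, x0, rhs, terminal_cost):
    """Piecewise-constant CVP evaluation sketch: the decision vector w holds one
    control level per time interval; the ODEs are integrated interval by interval
    and the performance index is evaluated at tf."""
    t_grid = np.linspace(t0, tf, n_intervals + 1)
    x = np.asarray(x0, float)
    for k in range(n_intervals):
        sol = solve_ivp(lambda t, y: rhs(t, y, w[k]),
                        (t_grid[k], t_grid[k + 1]), x, rtol=1e-8, atol=1e-10)
        x = sol.y[:, -1]
    return terminal_cost(x)

# an NLP solver (e.g., eSS) then searches over w within its bounds; every
# evaluation of w requires one full simulation of the dynamics.
```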
To illustrate how the algorithm works, in the following subsections we will consider a two-dimensional unconstrained function to be minimized, shown as contour plots. In particular, we consider


the function f(x1, x2) = 2 + 0.01(x2 - x1²)² + (1 - x1)² + 2(2 - x2)² + 7 sin(0.5x1) sin(0.7x1x2) in the range x1 ∈ [-6, 6], x2 ∈ [-2, 7], which presents several minima.

3.1. Initial Population Members Selection. The algorithm starts by generating an initial set S of m diverse vectors (normally m = 10·nvar, nvar being the problem size) in the search space using Latin hypercube uniform sampling.38 All these vectors are evaluated, and the b/2 best ones in terms of quality are selected as members of the initial population. For example, in a minimization problem, provided the diverse vectors are sorted according to their function values (the best one first), the initial selection is [x^1, x^2, ..., x^(b/2)]^T, such that

f(x^i) ≤ f(x^j)   ∀j > i, i ∈ [1, 2, ..., b/2 - 1], j ∈ [2, 3, ..., b/2]   (13)

The initial population is completed by selecting randomly b/2 additional vectors from the remaining m - b/2 diverse vectors in S. Scatter search usually makes use of relative distances to maximize the diversity of the solutions added to the initial population. However, this may be rather costly for large-scale problems. The strategy proposed here is as effective as the use of relative distances, but more efficient.

3.2. Combination Method. After the initial population has been built, its solutions are sorted according to their quality (i.e., the best solution is the first) and we select all pairs of solutions in the population to combine them. Combinations based on hyper-rectangles within and beyond the segments linking every pair of solutions in the population have provided very good results for bioprocess optimization problems.30,31 However, these hyper-rectangles are usually created along the directions defined by every pair of solutions, thus restricting possible promising search areas (Figure 1a). In our enhanced design, we define the hyper-rectangles around the solutions in the population, which allows the number of search directions to increase. Besides, we consider a larger area covered by the hyper-rectangles, which enhances diversification not only regarding search directions but also regarding search distance (Figure 1b).

To add some kind of guidance to the search, a bias which takes into account the relative position of the population members regarding their quality has been introduced. Indeed, the areas which contain high quality solutions are likely to be the most promising areas; thus, they should be favored with respect to other areas. Following this idea, the relative quality of every pair of solutions (regarding their position in the population) is used as a measure of bias to create the hyper-rectangles. "Bad" solutions will try to approach "good" solutions whereas "good" solutions will try to escape from "bad" solutions. The higher the difference in quality between two solutions, the larger the bias introduced. Figure 2a shows the hyper-rectangles generated by the best solution in the population. They are defined by its relative position with respect to the rest of the solutions in the population: the higher the difference in quality, the further the hyper-rectangle from the "bad" solution. Similarly, Figure 2b shows the hyper-rectangles generated by the worst solution in the population. In this case, they are generated to create solutions close to high quality solutions with increasing probability according to their quality.

Every member of the population generates b - 1 hyper-rectangles. A new solution is created inside every hyper-rectangle, which means that b² - b new solutions are created in every iteration. It must be noted that the population members are sorted according to their function values (the best one first) in every iteration. Considering minimization, this means

f(x^1) ≤ f(x^2) ≤ ... ≤ f(x^b)   (14)

Let us consider a solution, x^i, to be combined with the rest of the solutions in the population, x^j, ∀i, j ∈ [1, 2, ..., b], i ≠ j. Two new points within the search space are defined:

c1 = x^i - d(1 + α·β)   (15)

c2 = x^i + d(1 - α·β)   (16)

where

d = (x^j - x^i)/2   (17)

α = 1 if i < j;  α = -1 if j < i   (18)

and

β = (|j - i| - 1)/(b - 2)   (19)

The new solution, xnew, will be created in the hyper-rectangle defined by c1 and c2:

xnew = c1 + (c2 - c1)·r   (20)

where r is a vector of dimension nvar with all its components being uniformly distributed random numbers in the interval [0, 1]. The notation (·) above indicates an entrywise product (i.e., the vectors are multiplied component by component); thus, it is not a scalar product.

The incorporation of a memory term is quite common in advanced scatter search implementations. This term avoids combinations among population members previously combined and is intended to prevent the method from revisiting already explored areas, thus making the search more efficient. However, this strategy may ignore a promising area around a pair of solutions just because they did not generate a high quality solution in a previous iteration. This effect might be increased in our case, since we consider a broader hyper-rectangle around every solution. Consider the situation depicted in Figure 3: a new solution is created in iteration i. Since it does not outperform the population member which generated its hyper-rectangle, we could consider that this is not a promising area and forbid the creation of new solutions inside it. However, doing so we would miss a high quality solution created in iteration i + 1. For that reason, we do not consider any memory term in this step of the algorithm.

3.3. Updating the Population. As explained in section 1, our algorithm uses a (µ + λ) strategy applied to every element of the population. This method provides a good balance between intensification and diversification, avoiding premature stagnation of the population while not restricting intensification in promising areas. For continuous problems, (µ + λ) updating strategies may rapidly converge to suboptimal solutions. This effect has been overcome in different ways for different metaheuristics39,31,40 by creating some sort of filters, which may, however, also slow down convergence to the global optimum. The (µ, λ) strategies do not present this effect, but they may need a much higher number of function evaluations to achieve the optimal solutions. The (µ + λ) strategy applied to every element of the population used by our algorithm has the advantages of both methods without suffering their drawbacks.
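A compact sketch of the biased combination of eqs 15-20 together with this per-member replacement rule is given below. It is offered only as an illustration of the two mechanisms (the structure and names are our own), not the authors' code:

```python
import numpy as np

def combine_and_update(pop, fvals, f, rng=None):
    """Sketch of section 3.2 (eqs 15-20) plus the per-member (mu+lambda)-type
    update of section 3.3: each member is combined with every other member and
    is replaced only if its best "child" improves it. `pop` is sorted best-first
    and must contain more than two members (eq 19 divides by b - 2)."""
    rng = np.random.default_rng() if rng is None else rng
    b, nvar = pop.shape
    new_pop, new_fvals = pop.copy(), fvals.copy()
    for i in range(b):
        best_child, best_fc = None, np.inf
        for j in range(b):
            if j == i:
                continue
            d = (pop[j] - pop[i]) / 2.0                 # eq 17
            alpha = 1.0 if i < j else -1.0              # eq 18
            beta = (abs(j - i) - 1) / (b - 2)           # eq 19
            c1 = pop[i] - d * (1 + alpha * beta)        # eq 15
            c2 = pop[i] + d * (1 - alpha * beta)        # eq 16
            x_new = c1 + (c2 - c1) * rng.random(nvar)   # eq 20, entrywise product
            f_new = f(x_new)
            if f_new < best_fc:
                best_child, best_fc = x_new, f_new
        if best_fc < fvals[i]:                          # a child may only replace its parent
            new_pop[i], new_fvals[i] = best_child, best_fc
    return new_pop, new_fvals
```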


Figure 1. Hyper-rectangles defining the areas for generating new solutions.
Figure 2. Biased hyper-rectangles.

This strategy can be expressed by saying that a solution can only enter the population by replacing its parent, and not any other solution. How many "children" can a population member have? As stated in section 3.2, every solution in the population is combined with all the rest of the members of the population; thus, it performs b - 1 combinations creating b - 1 new solutions (or "children"). Among these new solutions, we identify the best one in terms of quality. If it outperforms its parent (i.e., the solution which was being combined), the former replaces the latter in the population. This strategy acts by performing individual movements of the population members within the search space instead of moving the whole population at once, as considered in (µ + λ) and (µ, λ) strategies. Although these individual movements are conditioned by the position and distance of the population members, we could consider that every solution follows a self-tuned annealing scheme, in which big steps are allowed at the beginning of the search whereas the solution moves much more locally in the end, due to the proximity of the population members in the final stages.

3.4. The go-beyond Strategy. An advanced strategy to enhance the intensification of the search has been implemented in our algorithm. It has been named the go-beyond strategy and consists in exploiting promising directions.

Figure 3. Two solutions generated in the same hyper-rectangle in two consecutive iterations.

When performing the combination method, all the new solutions created around a population member are sorted by quality. If the best of them outperforms its parent, a new nonconvex solution is created in the direction defined by the child and its parent. The child becomes the new parent and the newly generated solution is the new child. If the improvement continues, we might be in a very promising area; thus, the area of generation of new solutions is increased. A straightforward question arises from the last paragraph: how do we identify the parent of a generated solution? As explained in section 3.2, the new solutions are created in hyper-rectangles


Figure 6. Optimal control profile for the ethanol production problem (F = 40).

Figure 4. The go-beyond strategy.

Figure 5. Histogram of the local solutions obtained for the drying problem case study (F = 20).

defined by the pair of solutions combined and around one of the solutions of the pair. The parent of a solution will be the population member around which the hyper-rectangle containing the new solution has been generated. Figure 4 depicts how the go-beyond strategy works: from a pair of population members, two new solutions are generated in the corresponding hyperrectangles. The squared solution is the child whose parent is the population member closest to it. Since the child outperforms

the parent in quality, a new hyper-rectangle (solid line) is defined by the distance between the parent and the child. A new solution (triangle) is created in this hyper-rectangle. This new solution becomes the child and the old child (i.e., the squared solution) becomes the parent. Since the new child (triangle) again outperforms its parent (square), the process is repeated, but the size of the new hyper-rectangle created (dotted line) is doubled because there has been improvement during two consecutive children generations. Finally, a new solution (starred) is created in an area very close to the global minimum. Algorithm 1 shows a pseudocode of the go-beyond strategy procedure. Even though the go-beyond strategy has been mainly designed to enhance intensification, the fact that the size of the hyper-rectangles is increased if the new solutions improve the old ones during at least two consecutive iterations may make the search more diverse, exploring regions where different minima can be found.
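Since Algorithm 1 is not reproduced here, the following sketch illustrates the go-beyond idea just described; the variable names and the exact enlargement rule are assumptions consistent with the text, not the published pseudocode:

```python
import numpy as np

def go_beyond(parent, child, f, rng=None):
    """Keep sampling beyond an improving child, along the parent -> child
    direction (minimization). After two consecutive improvements the sampling
    box is enlarged (here: the denominator controlling its size is halved)."""
    rng = np.random.default_rng() if rng is None else rng
    f_parent, f_child = f(parent), f(child)
    denom = 1.0
    improvements = 0
    while f_child < f_parent:                         # the child keeps improving its parent
        step = (child - parent) / denom
        new = child + step * rng.random(child.size)   # sample beyond the child
        f_new = f(new)
        parent, f_parent = child, f_child             # the child becomes the new parent
        child, f_child = new, f_new                    # the new point becomes the child
        improvements += 1
        if improvements == 2:                          # two consecutive improvements:
            denom /= 2.0                               # enlarge the generation area
            improvements = 0
    return parent, f_parent                            # best point found along this direction
```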


3.5. Local Search. A local search is implemented in our algorithm, selecting the initial points by means of a systematic method which balances quality and diversity. In this work, we have considered the following methods:
(i) fmincon: a local gradient-based method, implemented as part of the Matlab Optimization Toolbox,41 which finds a local minimum of a constrained multivariable function by means of an SQP (sequential quadratic programming) algorithm. The method uses numerical or, if available, analytical gradients.
(ii) solnp: the SQP method by Ye.42
(iii) fsqp: an SQP method for minimizing smooth objective functions subject to general smooth constraints. The successive iterates generated by the algorithm all satisfy the constraints.43
(iv) misqp: the solver called Mixed-Integer Sequential Quadratic Programming (MISQP) is an SQP trust-region method, recently developed by Exler and Schittkowski,44 which handles both continuous and integer variables.
(v) fminsearch: a direct search method which is also part of the Matlab Optimization Toolbox.41 It does not use numerical or analytic gradients. We have used an adapted code to handle bound constraints. Other types of constraints are handled by adding a penalty term to the objective function (as described in section 3.6).
(vi) NOMADm: Nonlinear optimization for mixed variables and derivatives in Matlab (NOMADm)45 is a Matlab code that runs various generalized pattern search (GPS) algorithms to solve nonlinear and mixed-variable optimization problems. This solver is suitable when local gradient-based solvers do not perform well.
(vii) dhc: the dynamic hill climbing algorithm by de la Maza and Yuret46 is a direct search algorithm which explores every dimension of the search space using dynamic steps. Only the local phase of the algorithm has been implemented.

For our experiments we examined the performance of the different local solvers and chose those which provided the best results in the lowest computation time. In classical scatter search designs, the local search is applied to a large number of solutions. However, in applications related to process engineering, we often face time-consuming evaluation problems or functions containing noise and/or discontinuities, which can make the local search inefficient. This implies that its application should be restricted to a low number of promising solutions. This idea has also been used by other authors in memetic algorithms, assigning different probabilities to the individuals of being subject to a local search.47,48

To avoid poor-quality solutions resulting from the first iterations, we have introduced a parameter, n1, that determines the iteration number in which the local search is applied for the first time. A second parameter, n2, determines the minimum number of iterations between two consecutive local searches. Apart from being subject to these two parameters, solutions are selected as candidates for being the initial point of a new local search by means of a systematic method balancing quality and diversity. This method consists in a competitive ranking. Consider the set of new solutions created in a given iteration, C = [x1, x2, ..., x_nchild], nchild being the number of new solutions created in the current iteration. We define two lists: the first one contains the solutions in C sorted by quality (i.e., the best one in the first place and the worst one in the last place). Then, we compute the normalized Euclidean distances from the solutions in C to the already found minima (i.e., those found in previous local searches). For each solution we store the minimal distance to any of the minima. The second list contains the solutions in C sorted by this minimal distance (i.e., the solution with the highest minimal distance in the first place and the solution with the smallest minimal distance in the last place). Two auxiliary functions, φ1(x) and φ2(x), are then defined: φ1(x) computes the index of solution x in list 1 and φ2(x) computes the index of solution x in list 2. A ranking function is defined by

Φ(x) = γ·φ1(x) + (1 - γ)·φ2(x)   (21)

where γ ∈ [0, 1] balances quality and diversity among the candidates. The solution z will be the starting point for the next local search if Φ(z) ≤ Φ(y) ∀y ∈ C, y ≠ z. Therefore, high quality solutions which are far from the already found minima are favored to be selected as starting points. Values of γ close to 0 favor diversity over quality, whereas values of γ close to 1 favor quality. In our algorithm, we use a value of γ = 0.5 by default.
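The competitive ranking of eq 21 can be illustrated with the following sketch, which for simplicity uses plain (unnormalized) Euclidean distances; the function and variable names are assumptions:

```python
import numpy as np

def select_local_start(children, f_children, found_minima, gamma=0.5):
    """Rank candidates by quality (phi1) and by distance to already found minima
    (phi2), and return the candidate minimizing Phi = gamma*phi1 + (1-gamma)*phi2."""
    children = np.asarray(children, float)
    f_children = np.asarray(f_children, float)
    if len(found_minima) == 0:
        dmin = np.zeros(len(children))
    else:
        minima = np.asarray(found_minima, float)
        dists = np.linalg.norm(children[:, None, :] - minima[None, :, :], axis=2)
        dmin = dists.min(axis=1)                       # distance to closest known minimum
    phi1 = np.argsort(np.argsort(f_children))          # position in the quality list (0 = best)
    phi2 = np.argsort(np.argsort(-dmin))               # position in the diversity list (0 = farthest)
    rank = gamma * phi1 + (1 - gamma) * phi2           # eq 21
    return children[int(np.argmin(rank))]
```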

An alternative strategy has been implemented for applying the local search in our algorithm: instead of using the competitive ranking, a local search is performed every time the algorithm finds a solution better than the current best one, using this newly found best solution as the initial point. To avoid performing many local searches from similar initial points, the parameter n2 allows the algorithm to search more globally, not spending computation time on intermediate local searches that do not improve the best solution significantly. Algorithm 2 shows a pseudocode of the application of these strategies.

3.6. Constraints Handling. Constraint handling in stochastic optimization methods has been a subject of research since these algorithms arose. Many different techniques have been proposed to handle problems with constraints (e.g., see the reviews of Michalewicz49 and Coello-Coello50). In our algorithm, we have implemented a simple strategy consisting in a static penalty function. The objective function evaluated by the algorithm has the following form:

F(p, v) = C(p, v) + w·max{max{Viol(Hend)}, max{Viol(Gend)}}   (22)

where p, v is the solution being evaluated (i.e., the variables defining the control parametrization and the time-invariant parameters, respectively), C(p, v) is the original objective function value, Hend is the set of equality end-point constraints, and Gend is the set of inequality end-point constraints. w is a penalization parameter selected by the user, which is constant during the optimization procedure (and usually has a high positive value). We use the L∞ norm of the constraint violations to penalize the original objective function. Other penalty function approaches usually use the L1 norm (exact penalization) or the L2 norm (quadratic penalization). More sophisticated strategies might be implemented, although we strongly rely on the local solvers used by our algorithm to achieve feasible optimal points. Regarding path constraints, we follow the same strategy described by Vassiliadis et al.,51 in which an additional differential equation is introduced for each inequality path constraint.
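A minimal sketch of the static penalty of eq 22, assuming the end-point constraint residuals are available as arrays (h = 0 and g ≤ 0 define feasibility):

```python
import numpy as np

def penalized_objective(cost, h_end, g_end, w=1e6):
    """Augment the original cost with w times the largest (L-infinity) violation
    of the end-point constraints; w = 1e6 mirrors the value quoted in section 4.1."""
    viol_h = np.abs(np.asarray(h_end, float)) if len(h_end) else np.array([0.0])
    viol_g = np.maximum(np.asarray(g_end, float), 0.0) if len(g_end) else np.array([0.0])
    return cost + w * max(viol_h.max(), viol_g.max())
```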


3.7. Stopping Criterion. The stopping criterion of our algorithm is taken as a combination of three conditions: (i) maximum number of evaluations exceeded; (ii) maximum computational time exceeded; and (iii) a specified value of the cost function reached (if known). By default, the algorithm will stop when any of these conditions is satisfied. Since the local search is selectively applied, when a stopping criterion has been met and before abandoning the search, we apply the local search to the best solution found so far to refine it. In this final local search, the stopping tolerance assigned to the local solver is tighter than in the rest of the local searches performed along the optimization procedure. Algorithm 3 summarizes in pseudocode how our algorithm works.
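Since the pseudocode of Algorithm 3 is not reproduced here, the following self-contained sketch illustrates the overall flow (Latin hypercube initialization, biased pairwise combination, per-member replacement, and a final local refinement). It deliberately omits the go-beyond strategy, the local-search scheduling parameters n1/n2, and constraint handling, so it should be read as an outline rather than the authors' implementation:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import qmc

def ess_sketch(f, lb, ub, b=10, max_evals=5000, seed=0):
    """Simplified eSS-like loop for bound-constrained minimization."""
    rng = np.random.default_rng(seed)
    lb, ub = np.asarray(lb, float), np.asarray(ub, float)
    nvar = lb.size
    # diversification: Latin hypercube sampling, keep the b best points
    sampler = qmc.LatinHypercube(d=nvar, seed=seed)
    cand = qmc.scale(sampler.random(10 * nvar), lb, ub)
    fc = np.array([f(x) for x in cand])
    keep = np.argsort(fc)[:b]
    pop, fpop = cand[keep], fc[keep]
    evals = cand.shape[0]
    while evals < max_evals:
        order = np.argsort(fpop)                    # keep the population sorted (eq 14)
        pop, fpop = pop[order], fpop[order]
        for i in range(b):
            best_child, best_fc = None, np.inf
            for j in range(b):
                if j == i:
                    continue
                d = (pop[j] - pop[i]) / 2.0
                alpha = 1.0 if i < j else -1.0
                beta = (abs(j - i) - 1) / (b - 2)
                c1 = pop[i] - d * (1 + alpha * beta)
                c2 = pop[i] + d * (1 - alpha * beta)
                x = np.clip(c1 + (c2 - c1) * rng.random(nvar), lb, ub)
                fx = f(x)
                evals += 1
                if fx < best_fc:
                    best_child, best_fc = x, fx
            if best_fc < fpop[i]:                   # a child may only replace its parent
                pop[i], fpop[i] = best_child, best_fc
    # final refinement of the best point with a local solver
    ibest = int(np.argmin(fpop))
    res = minimize(f, pop[ibest], bounds=list(zip(lb, ub)), method="L-BFGS-B")
    return res.x, res.fun
```

Such a loop can be exercised, for instance, on the two-dimensional test function of section 3.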

4. Computational Experiments

In this section, a set of process engineering dynamic optimization problems of different levels of complexity will be used as case studies to test the performance of the algorithm proposed in this work. We consider both lumped and distributed models as well as different types of constraints (i.e., path and end-point constraints). The mathematical statement of every problem is presented in the appendix. All the experiments were performed on a P-IV 2.66 GHz workstation, using Matlab R2006b under Windows XP 64-bit. These preliminary lines present the optimization methods used for comparison with our algorithm and the procedure used to carry out the experimental analysis.

4.1. Comparison with Selected Optimization Methods. A set of different state-of-the-art global optimization methods has been selected to compare their results with those obtained with the algorithm proposed in this study.

CMAES: Covariance matrix adaptation evolutionary strategy. This is an evolutionary algorithm that makes use of the covariance matrix in a way similar to the inverse Hessian matrix in a quasi-Newton method.52 This can improve the performance on ill-conditioned problems by orders of magnitude.

DE: Differential evolution. This is a heuristic algorithm for the global optimization of nonlinear and (possibly) nondifferentiable continuous functions.32

Figure 7. Optimal control profiles for the penicillin production problem (F = 40).

Table 1. Maximum Number of Function Evaluations Fixed for Every Problem

problem section    F = 10    F = 20    F = 40
4.2.1              20000     40000     60000
4.2.2              55000     90000     250000
4.2.3              20000     60000     200000
4.2.4 (a)          12000     40000     60000

(a) Discretization levels for this problem are F = 5, 10, and 20.

This population-based method handles stochastic variables by means of a direct search method which outperforms other popular global optimization algorithms, and it is widely used by the evolutionary computation community.

SRES: Stochastic ranking evolutionary strategy. This algorithm consists of a (µ, λ) evolutionary strategy combined with an approach to balance objective and penalty functions stochastically.53 In the (µ, λ)-ES algorithm, the evaluated objective and penalty functions for each individual are used to rank the individuals in a population, and the best (highest ranked) µ individuals out of λ are selected for the next generation. This feature makes it especially appealing for the case of constrained problems.

DIRECT: Dividing rectangles. This is a deterministic global optimization algorithm based on a modification of the Lipschitzian optimization scheme to solve difficult global optimization problems.54 We will use glcDirect, the DIRECT implementation of Holmström and Edvall,55 in our computational testing.

OQNLP: OptQuest-NLP. This is an optimization engine released by OptTek Systems, Inc., which uses the OptQuest callable library.56 In a recent review by Neumaier et al.57 comparing several GO solvers over a set of 1000 constrained GO problems, OQNLP obtained the best performance among the stochastic methods. Furthermore, it solved the highest percentage of problems with a high number of decision variables.

SSm: Scatter search for Matlab. This is an advanced scatter search implementation developed in our group which has proved its efficiency and robustness through its application to different bioprocess optimization problems.30,31

In the following lines we highlight some aspects of the procedure followed to test the case studies presented in this work. The same protocol was used in every problem. The reader should take into account these points:
(i) Three levels of discretization will be used for each problem in order to check the scalability of the different optimization methods.
(ii) For every problem, a multistart procedure with an SQP method was performed over a set of 100 diverse initial points in order to check the practical multimodality of the problem. It consists in applying the local search algorithm to a set of initial points uniformly distributed within the bounds (including the initial point used for every problem). Figure 5 represents the histogram obtained for one of the examples, revealing that different initial guesses end up in different (sub)optimal solutions.
(iii) A fixed number of function evaluations was set for every problem. These numbers are shown in Table 1.
(iv) For the stochastic solvers (i.e., CMAES, DE, SRES, SSm, and eSS), a set of 10 runs was performed. The OQNLP implementation that we used in this study does not allow changing the seed of the random number generator. The best, mean (as recommended by Birattari and Dorigo58), and worst objective function values found in all the optimizations are reported.
(v) For all the solvers, default parameters provided by their authors were used.

Table 2. Selected Parameters for DE and SRES

DE:    VTR = -inf,  NP = 10 × nvar,  F = 0.85,  CR = 1,  strategy = 3
SRES:  λ = 10 × nvar,  µ = λ/7,  pf = 0.45,  varphi = 1

In the case of the population-based algorithms DE and SRES, some parameters have to be chosen by the user. The chosen parameters are shown in Table 2.
(vi) For solvers not handling constraints (i.e., CMAES and DE), a modification of the scripts was carried out to allow them to be applied to constrained problems. In particular, the implemented strategy was the same as the one used by SSm and eSS: a static penalty term is added to the objective function value, that is, the maximum absolute violation of the constraints multiplied by 10^6 is added to the actual function value. In every case, a maximum constraint violation of 10^-5 is considered acceptable.

4.2. Case Studies. 4.2.1. Fed-Batch Reactor for Ethanol Production. This system is a fed-batch bioreactor for the production of ethanol from the anaerobic glucose fermentation by Saccharomyces cerevisiae. The dynamic optimization of this process with fixed final time was studied by several authors.13,59-62 The objective is to find the feed rate which maximizes the yield of ethanol. Of the different local search methods available in our algorithm, SQP-based algorithms were the most competitive for this problem, and misqp was the chosen method to perform the local search due to its high convergence rate. Table 3 presents the results for every solver with the different levels of discretization. Both SSm and eSS obtain the best results for this problem for every discretization level (together with CMAES for F = 10 and OQNLP for F = 10 and 20), with the smallest dispersion in the results. Figure 6 shows the control profiles resulting from the best solution found for F = 40. As is well-known, the formation of alcohol inhibits the growth of yeast, so the optimal control policy found is one which ensures an optimal dilution rate, that is, one that keeps diluting the media so the ethanol concentration does not reach levels where the growth is significantly affected.

4.2.2. Fed-Batch Fermenter for Penicillin Production. This problem deals with the dynamic optimization of a fed-batch fermenter for the production of penicillin through anaerobic glucose fermentation. This problem with fixed final time was studied by Banga et al.,13 Cuthrell and Biegler,35 and Luus.63 The objective is to maximize the total amount of penicillin produced using the feed rate of substrate as the control variable. For this problem, a direct search local method provided better results than a gradient-based one, and we chose fminsearch as the local solver. Table 4 presents the results for every solver with the different levels of discretization. DE provided the best results and smallest dispersion for the discretization levels F = 10 and 20. For F = 40, SSm provided the best solution. Figure 7 shows the control profiles resulting from the best solution found for F = 40. In this case the product of interest (penicillin) is only produced when the substrate is close to zero (the biomass is starving). The optimal control presents an initial peak of substrate addition (which creates a large enough amount of biomass) followed by an almost constant feed rate phase which adds the minimum flow of substrate necessary to keep the existing biomass alive while keeping the substrate concentration in the fermenter close to zero.

4.2.3. Drying Operation. In this section we consider the optimization of a bioproduct drying process, similar to the one formulated by Banga and Singh.64

Table 3. Results for the Ethanol Production Problem

                 CMAES      DE         glcDirect  OQNLP      SRES       SSm        eSS
F = 10   best    20316.11   20316.08   20203.74   20316.11   20305.96   20316.11   20316.11
         mean    19889.67   20100.72   -          -          20093.14   20291.38   20254.29
         worst   18996.02   19672.46   -          -          19554.01   20192.48   20192.48
F = 20   best    20412.14   20404.36   19738.01   20412.19   20327.11   20412.19   20412.19
         mean    20273.76   20383.95   -          -          20237.58   20412.19   20412.19
         worst   19953.39   20341.29   -          -          20095.71   20412.19   20412.19
F = 40   best    20430.84   20375.32   19544.88   20444.47   20214.40   20444.86   20444.86
         mean    20360.73   20239.27   -          -          19726.07   20444.86   20444.86
         worst   20110.08   19902.08   -          -          19466.64   20444.86   20444.86

Table 4. Results for the Penicillin Production Problem

                 CMAES    DE       glcDirect  OQNLP    SRES     SSm      eSS
F = 10   best    87.934   87.934   87.258     87.775   87.927   87.931   87.928
         mean    87.837   87.914   -          -        87.688   87.906   87.894
         worst   87.340   87.835   -          -        87.348   87.889   87.834
F = 20   best    87.948   88.013   84.490     87.400   87.671   87.998   87.944
         mean    87.841   87.955   -          -        86.900   87.885   87.868
         worst   87.599   87.767   -          -        85.064   87.796   87.545
F = 40   best    87.914   87.926   80.657     87.547   82.709   87.999   87.961
         mean    87.861   87.802   -          -        82.709   87.863   87.895
         worst   87.745   87.565   -          -        82.709   87.595   87.848

Table 5. Results for the Drying Process Problem

                 CMAES     DE        glcDirect  OQNLP     SRES      SSm       eSS
F = 10   best    0.20002   0.20003   0.19979    0.19875   0.20001   0.20003   0.20003
         mean    0.19710   0.19683   -          -         0.19894   0.19694   0.19835
         worst   0.19108   0.18939   -          -         0.19579   0.18742   0.19369
F = 20   best    0.19997   0.19913   0.19329    0.15483   0.19989   0.20010   0.20009
         mean    0.19696   0.19608   -          -         0.19878   0.19687   0.19900
         worst   0.19298   0.19185   -          -         0.19728   0.19326   0.19624
F = 40   best    0.19952   0.19859   0.18848    0.15102   0.19001   0.19788   0.20015
         mean    0.19751   0.19442   -          -         0.18796   0.19618   0.19955
         worst   0.19522   0.19103   -          -         0.18623   0.19311   0.19861

In particular, the aim is to dry a cellulose slab maximizing the retention of a nutrient (ascorbic acid). The dynamic optimization problem associated with the process consists in finding the dry bulb temperature, Tdb, along time to maximize the ascorbic acid retention, retAA, at the final time, tf (with tf = 1250 min). Due to the inefficiency of the local solvers for this problem, the local search was deactivated for SSm and our eSS, performing a final local refinement with the direct search solver fminsearch. OQNLP also provided better solutions with its local search deactivated, thus only these results are presented in Table 5. Our algorithm provided the best results for all the levels of discretization (together with DE and SSm for F = 10 and with SSm for F = 20). Figure 8 shows the control profiles resulting from the best solution found for F = 40. The control profile shows that smaller levels of discretization (e.g., F = 2-3) could achieve similar results with less computational effort. Initial low temperatures avoid a quick degradation of the ascorbic acid and favor the internal heat diffusion and water transport inside the food. The moisture content rises during this first stage and, therefore, the temperature needs to be increased to obtain the average moisture content at the final time imposed by the constraint.

4.2.4. Combined Microwave-Oven Heating of Foods. This section deals with the optimization of the heating process of foods in a combined microwave-convection oven. In particular, the product considered is a cylinder with radius Rm and height

Zm. These heating mechanisms complement each other: while microwaves favor internal heating (especially for food products with significant water content), convection acts on the surface. With microwave heating alone, it is often the case that the final product presents hot spots and overheating in corners. In contrast, convection/conduction heating is slow and creates large temperature gradients between the surface and the interior of the product. The problem stated here aims to find the optimal combination of both heating mechanisms, which allows compensating for their individual weaknesses while enhancing their strengths. Thus, both mechanisms may be optimally combined to maximize the uniformity of the product temperature, avoiding cold or hot points (typical in microwave processing) and their possible effects on the safety and/or quality of the final product.

Figure 8. Control profiles for the drying process problem (F = 40).


We will try to solve an optimal control problem similar to the one proposed by Saa et al.65 and Banga et al.,66 where the objective is to obtain the profiles of the microwave power, P0(t), and the oven temperature, Toven(t), that minimize the temperature difference between the hottest and coldest points in the food at a final time of 270 s. In this case, we add a term to restrict steep changes in the control variables, which would make the control strategy very difficult to implement in practice. For this problem, the local search was also deactivated for SSm and our algorithm, performing a final local refinement with the direct search solver fminsearch. Since there are two control variables in this example, the discretization levels considered are F = 5, 10, and 20, in order to evaluate the same number of decision variables used in the previous case studies. Table 6 shows the results obtained by every solver for the different levels of discretization. Our algorithm provided the best results for the discretization levels F = 5 and 10, whereas CMAES provided the best solution for F = 20 (and solutions very close to the best in the other discretization levels). Figure 9 shows the control profiles resulting from the best solution found for F = 20. The solution found shows that the maximum microwave power is only used for a short period at the beginning of the process (bringing enough energy to internal areas), and then it is not activated anymore (so diffusion will reduce the gradients created). In addition, the optimal control shows that a smooth convective heating, plus the ongoing diffusion inside the product, is able to ensure the final desired uniformity.

Figure 9. Control profiles for the combined oven problem (F = 20).

4.3. Executive Summary of Results

In this section we provide a summary of the results obtained in this work. The methods used are compared in terms of robustness, efficiency, and scalability. Table 7 shows a summary of the performance of the different algorithms used for solving the case studies, reporting the number of times that each solver obtained the best solution and the best mean value along the different runs. Considering the best result obtained by any of the solvers, we consider that a solution is equivalent if it lies in an interval of range ±0.05%. eSS obtains the highest score for both the best and the mean value. To take into account not only final objective function values but also the efficiency of the different methods, a more rigorous comparison can be done by making use of the performance profiles methodology.67 Following Auger and Hansen,68 we define the success performance FE for a solver on a specific problem by

FE = timemean × (no. of all runs (10))/(no. of successful runs)   (23)

where a run is considered successful if it obtained the best found solution with a relative error ≤ 0.05%. timemean is the mean CPU time employed by each method (considering only successful runs) to locate the best solution within the tolerance defined above. With this definition, the best success performance FEbest is given by the lowest value of FE for every problem. Figure 10 shows the empirical distribution function of the success performance FE/FEbest over all the problems. The y-axis represents the percentage of solved problems, whereas the x-axis gives an idea of the efficiency. The performance profiles methodology ranks eSS in the first place regarding both solved problems and efficiency for the set of case studies considered in this work, compared with the rest of the solvers tested. Regarding scalability, eSS proved to be successful even for the finer discretization levels. eSS obtained better results for increasing discretization levels in practically all cases, with a reasonable increase in the computational effort. In contrast, all the other methods experienced convergence difficulties when increasing the number of decision variables.
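For reference, the success performance of eq 23 amounts to the following computation (an illustrative helper, not part of the original study):

```python
import numpy as np

def success_performance(cpu_times, success_mask):
    """Eq 23: mean CPU time of the successful runs, scaled by
    (number of runs / number of successful runs); inf if no run succeeded."""
    cpu_times = np.asarray(cpu_times, float)
    success_mask = np.asarray(success_mask, bool)
    n_ok = success_mask.sum()
    if n_ok == 0:
        return np.inf
    return cpu_times[success_mask].mean() * len(cpu_times) / n_ok
```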

Figure 10. Performance profiles.

Conclusions

Here, we presented a new enhanced scatter search (eSS) method especially suited for the global dynamic optimization of nonlinear processes and systems. This novel approach introduces new combination and population-update strategies which allow a better balance between intensification and diversification. The method also makes use of an intelligent local search strategy which speeds up the convergence to optimal solutions without performing too many redundant searches. To test the capabilities of eSS, we considered the solution of a set of challenging benchmark problems containing both lumped and distributed dynamic process models. Three major aspects were evaluated: efficiency, robustness, and scalability. For the sake of comparison, these problems were also solved using several state-of-the-art global optimization methods.

Table 6. Results for the Combined Oven Problem

                 CMAES    DE       glcDirect  OQNLP    SRES     SSm      eSS
F = 5    best    4.0771   4.0777   7.1913     4.0791   4.0796   4.0788   4.0768
         mean    4.0805   4.0798   -          -        4.0836   4.0855   4.0827
         worst   4.0845   4.0840   -          -        4.0898   4.0899   4.0920
F = 10   best    3.8808   3.9058   5.7808     3.9749   3.8926   3.8985   3.8789
         mean    3.9021   3.9291   -          -        3.9969   3.9370   3.9108
         worst   3.9248   3.9676   -          -        4.2353   3.9577   3.9695
F = 20   best    3.8349   4.5702   10.0597    4.3850   5.3720   3.9284   3.8876
         mean    4.0097   5.0500   -          -        5.7839   4.0916   4.0145
         worst   4.1796   5.5443   -          -        6.0930   4.3236   4.3132

Table 7. Summary of Results

solver       no. of best    no. of mean
CMAES             7              4
DE                6              3
glcDirect         0              -
OQNLP             3              -
SRES              3              1
SSm               9              5
eSS              10              6

The results revealed that eSS was able to achieve the best solution in most cases with the least computational effort. In addition, its performance was shown to scale up well with increasing discretization levels of the control variables, allowing the computation of more refined control profiles in reasonable computation times.

Acknowledgment

The authors acknowledge financial support from the Spanish Ministry of Science and Innovation (MICINN project MultiSysBio ref DPI2008-06880-C03-02).

Mathematical Statements of Case Studies

Fed-Batch Reactor for Ethanol Production. Find u(t) to maximize

J(u) = y3(tf)·y4(tf)   (24)

subject to the system dynamics, described by

dy1/dt = p1·y1 - u·(y1/y4)   (25)

dy2/dt = -10·p1·y1 + u·(150 - y2)/y4   (26)

dy3/dt = p2·y1 - u·(y3/y4)   (27)

dy4/dt = u   (28)

with

p1 = (0.408/(1 + y3/16))·(y2/(0.22 + y2))   (29)

p2 = (1/(1 + y3/71.5))·(y2/(0.44 + y2))   (30)

where y1 represents the microbial population concentration, y2 is the substrate concentration, y3 the product concentration (all of them in g/L), and y4 is the volume (in L), which must satisfy the following end-point constraint: y4(tf) ≤ 200 for tf = 54 h (optimal time calculated by Chen and Hwang60). The initial state of the system is given by y(0) = [1 150 0 10]^T and the limits for the control variable are 0 ≤ u(t) ≤ 12 (in L/h).

Fed-Batch Fermenter for Penicillin Production. Find u(t) to maximize

J(u) = y2(tf)·y4(tf)   (31)

subject to the system dynamics, described by

dy1/dt = h1·y1 - u·y1/(500·y4)   (32)

dy2/dt = h2·y1 - 0.01·y2 - u·y2/(500·y4)   (33)

dy3/dt = -h1·y1/0.47 - h2·y1/1.2 - y1·(0.029·y3/(0.0001 + y3)) + (u/y4)·(1 - y3/500)   (34)

dy4/dt = u/500   (35)

with

h1 = 0.11·y3/(0.006·y1 + y3)   (36)

h2 = 0.0055·y3/(0.0001 + y3·(1 + 10·y3))   (37)

where y1, y2, and y3 are, respectively, the biomass, penicillin, and substrate concentrations (in g/L), and y4 is the fermenter volume (in L). The vector of initial conditions is y(0) = [1.5 0 0 7]^T. The final product is destined for human consumption. Therefore, it must be produced under certain conditions to avoid harmful effects. For that reason, the concentrations of the species present are subject to a set of path constraints, which are

0 ≤ y1 ≤ 40   (38)

0 ≤ y3 ≤ 25   (39)

0 ≤ y4 ≤ 10   (40)

Bounds for the control variable are defined as 0 ≤ u ≤ 50 (in L/min) and the total process time is fixed at tf = 132 h.
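For illustration, the penicillin model above translates directly into an ODE right-hand side that can be combined with a CVP-type evaluation such as the one sketched in section 2.2; the function name and signature below are our own:

```python
import numpy as np

def penicillin_rhs(t, y, u):
    """Right-hand side of eqs 32-37; y = [biomass, penicillin, substrate, volume]
    and u is the (piecewise-constant) feed rate."""
    y1, y2, y3, y4 = y
    h1 = 0.11 * y3 / (0.006 * y1 + y3)                       # eq 36
    h2 = 0.0055 * y3 / (0.0001 + y3 * (1 + 10 * y3))         # eq 37
    dy1 = h1 * y1 - u * y1 / (500 * y4)                      # eq 32
    dy2 = h2 * y1 - 0.01 * y2 - u * y2 / (500 * y4)          # eq 33
    dy3 = (-h1 * y1 / 0.47 - h2 * y1 / 1.2
           - y1 * 0.029 * y3 / (0.0001 + y3)
           + (u / y4) * (1 - y3 / 500))                      # eq 34
    dy4 = u / 500                                            # eq 35
    return np.array([dy1, dy2, dy3, dy4])
```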

Drying Operation. Find Tdb(t) to maximize

retAA(tf)   (41)

subject to the system dynamics:

retAA(t) = CAA,avg(t)/CAA,0 = (1/L1) ∫[0, L1] (CAA(x, t)/CAA,0) dx   (42)

The problem has an end-point constraint related to the average moisture content at the final time:

mavg(tf) ≤ mavg,f   (43)

where mavg,f = 0.1 kg/kg of dry solid. The bounds for the control variable, Tdb, are 60 ≤ Tdb(t) ≤ 95 (in °C). It is assumed that the transport of water within the solid is the controlling mechanism, and that the driving force is the gradient of moisture content. Thus, the governing equation will be Fick's equation for diffusion (Fick's second law):

dm/dt = ∇·(D∇m)   (44)

Because of the small thickness of the slab as compared with the other dimensions, we can consider a semi-infinite system where the moisture content depends only on the position with respect to the minor (thickness) dimension. Moreover, in order to take the shrinking effect into account, the diffusivity D is assumed to be a nonlinear function of both the moisture content and the temperature; thus, eq 44 reads

dm/dt = D·(∂²m/∂x²) + (∂D/∂m)·(∂m/∂x)²   (45)

m being the moisture content. D is considered to be dependent on the temperature and m. Its value is calculated following Luyben et al.:69

D = Dref·exp[-(ED/R)·(1/Ts - 1/Tref)]   (46)

where Dref and ED are functions of the moisture content:

Dref = exp[-(b1 + b2·m)/(1 + b3·m)]   (47)

ED = (b4 + b5·m)/(1 + b6·m)   (48)

The average moisture content of the slab is calculated using

mavg = (1/L) ∫[0, L] m(x) dx   (49)

ms = ρ0·L1·L2·L3/(mavg,0 + 1)   (50)

The temperature of the slab is assumed to be uniform. Therefore, an energy balance gives (thin slab assumption)

(ms·Cps + ms·mavg·Cpw)·(dTs/dt) = hA·(Tdb - Ts) + ms·λw·(dmavg/dt)   (51)

where the latent heat of vaporization, λw, depends on temperature:

λw = R1 - R2·Ts   (52)

The heat transfer coefficient and the surface area are variable during drying, so hA is estimated using an empirical linear function of moisture:

hA = A0·(p1·mavg + p2)   (53)

where

A0 = 2·(L1·L2 + L1·L3 + L2·L3)   (54)

where L1, L2, and L3 are the slab dimensions and p1, p2 are model parameters calculated by Mishkin et al.70 The nutrient degradation (ascorbic acid) is assumed to be described by first-order kinetics:71,72

dCAA/dt = -kAA·CAA   (55)

where kAA is a function of the temperature and the moisture content:

ln kAA = a1·m + a2·T^-3 + a3·m³ + a4·m²·T^-1 + a5·m·T^-2 + a6·m³·T^-3 + a7   (56)

Combined Microwave-Oven Heating of Foods. Find P0(t) and Toven(t) to minimize

Tdif(tf) = Tmax(tf) - Tmin(tf)   (57)

To avoid large fluctuations in Toven, as experienced in preliminary runs, we have added a term which penalizes steep changes in that control variable. Therefore, the function J to be minimized is

J = Tdif(tf) + P·∫[0, tf] |dT/dt| dt   (58)

with P being a penalty weight. The problem is subject to the system dynamics and to an inequality constraint related to the minimum desired final temperature:

Tmin(tf) ≥ Tmin^SET = 60   (59)

The bounds for the control variables are 0 ≤ P0 ≤ 190 (in W) and 25 ≤ Toven ≤ 200 (in °C). The mathematical model used was formulated by Lin et al.,73 in which it is assumed that the equation governing the heating process in a combined oven is the second Fourier's law with a term, Φ, of energy generation arising from the microwave heating:

Cp·ρ̃·(∂T/∂t) = k·(∂²T/∂r² + (1/r)·(∂T/∂r) + ∂²T/∂z²) + Φ   (60)

The thermal conductivity, k, is assumed to be constant and the generation term, Φ, is based on Lambert's law.74 Besides, Lin et al.73 added a correction term due to the internal wave reflection. Here it is assumed that the product has a uniform initial temperature, T(r, z) = T0 = 25 °C. Heating by convection is imposed by the boundary conditions

k·(∂T/∂r) = h·(Toven - T)  for r = Rm   (61)

k·(∂T/∂z) = h·(Toven - T)  for z = Zm   (62)

Due to the radial and axial symmetry, the internal temperature distribution of the product can be fully described with the temperature change in a quarter of the transverse section along the z axis. Besides, the product is considered homogeneous, as expressed by the following equations:

∂T/∂r = 0  for r = 0   (63)

∂T/∂z = 0  for z = 0   (64)

The incident power on the product's surface is considered constant and orthogonal to it. The heat generation term for a cylindrical geometry is given by the equations

Φr(r, T) = 2β(T)·(Rm/r)·Pr·[e^(-2β(T)(Rm - r)) + e^(-2β(T)(Rm + r))]   (65)

Φz(z, T) = 2β(T)·Pz·[e^(-2β(T)(Zm - z)) + e^(-2β(T)(Zm + z))]   (66)

Φ(r, z, T) = Φr(r, T) + Φz(z, T)   (67)

with

Pr = P0/(4π·Rm·Zm)   (68)

Pz = 0.107·P0/(8π·Rm·Rm)   (69)

To avoid the central singularity of eq 65 when r → 0, the radial component of the heat generation term is calculated with

Φr(r, T) = 2β(T)·(Rm/r)·Pr·[e^(-2β(T)(Rm - r)) + e^(-2β(T)(Rm + r))]  if r > ε
Φr(r, T) = 2β(T)·(Rm/ε)·Pr·[e^(-2β(T)(Rm - ε)) + e^(-2β(T)(Rm + ε))]  if r ≤ ε   (70)

with ε = 3.5 × 10^-2 Rm. Thermophysical parameters and product dimensions were taken from Chen et al.75
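As an illustration of how eqs 65, 68, and 70 are evaluated in practice, including the near-axis guard, consider the following sketch; the attenuation function β(T) is assumed to be supplied by the user:

```python
import numpy as np

def phi_r(r, T, P0, Rm, Zm, beta, eps_frac=3.5e-2):
    """Radial Lambert-law generation term of eqs 65 and 68, with the eq 70 guard
    that replaces r by eps = 0.035*Rm near the axis to avoid the 1/r singularity."""
    Pr = P0 / (4 * np.pi * Rm * Zm)                    # eq 68
    rr = max(r, eps_frac * Rm)                         # eq 70 guard
    b = beta(T)
    return 2 * b * (Rm / rr) * Pr * (np.exp(-2 * b * (Rm - rr)) + np.exp(-2 * b * (Rm + rr)))
```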

Literature Cited

(1) Biegler, L. T.; Grossmann, I. E. Retrospective on optimization. Comput. Chem. Eng. 2004, 28, 1169–1192.
(2) Papamichail, I.; Adjiman, C. S. Global optimization of dynamic systems. Comput. Chem. Eng. 2004, 28, 403–415.
(3) Chachuat, B.; Singer, A. B.; Barton, P. I. Global methods for dynamic optimization and mixed-integer dynamic optimization. Ind. Eng. Chem. Res. 2006, 45, 8373–8392.
(4) Banga, J. R.; Moles, C. G.; Alonso, A. A. Global optimization of bioprocesses using stochastic and hybrid methods. In Frontiers in Global Optimization; Floudas, C. A., Pardalos, P. M., Eds.; Kluwer Academic Publishers: Hingham, MA, 2003; Vol. 74, pp 45–70.
(5) Banga, J. R.; Balsa-Canto, E.; Moles, C. G.; Alonso, A. A. Dynamic optimization of bioprocesses: Efficient and robust numerical strategies. J. Biotechnol. 2005, 117, 407–419.
(6) Balsa-Canto, E.; Vassiliadis, V. S.; Banga, J. R. Dynamic optimization of single- and multi-stage systems using a hybrid stochastic-deterministic method. Ind. Eng. Chem. Res. 2005, 44, 1514–1523.
(7) Esposito, W. R.; Floudas, C. A. Deterministic global optimization in nonlinear optimal control problems. J. Global Optim. 2000, 17, 97–126.
(8) Grossmann, I. E. Global Optimization in Engineering Design; Kluwer Academic Publishers: Norwell, MA, 1996.
(9) Guus, C.; Boender, E.; Romeijn, H. E. Stochastic Methods. In Handbook of Global Optimization; Horst, R., Pardalos, P. M., Eds.; Kluwer Academic Publishers: Norwell, MA, 1995; pp 829–869.
(10) Singer, A. B.; Bok, J. K.; Barton, P. I. Convex underestimators for variational and optimal control problems. Comput.-Aided Chem. Eng. 2001, 9, 767–772.
(11) Papamichail, I.; Adjiman, C. S. A rigorous global optimization algorithm for problems with ordinary differential equations. J. Global Optim. 2002, 24, 1–33.
(12) Banga, J. R.; Seider, W. D. Global optimization of chemical processes using stochastic algorithms. In State of the Art in Global Optimization; Floudas, C. A., Pardalos, P. M., Eds.; Kluwer Academic Publishers: Dordrecht, The Netherlands, 1996; pp 563–583.
(13) Banga, J. R.; Alonso, A. A.; Singh, R. P. Stochastic dynamic optimization of batch and semicontinuous bioprocesses. Biotechnol. Prog. 1997, 13, 326–335.
(14) Roubos, J. A.; Van Straten, G.; Van Boxtel, A. J. B. An evolutionary strategy for fed-batch bioreactor optimization; concepts and performance. J. Biotechnol. 1999, 67, 173–187.
(15) Rajesh, J.; Gupta, K.; Kusumakar, H. S.; Jayaraman, V. K.; Kulkarni, B. D. Dynamic optimization of chemical processes using ant colony framework. Comput. Chem. (Oxford) 2001, 25, 583–595.
(16) Sarkar, D.; Modak, J. M. Optimization of fed-batch bioreactors using genetic algorithm: Multiple control variables. Comput. Chem. Eng. 2004, 28, 789–798.
(17) Zhang, B.; Chen, D.; Zhao, W. Iterative ant-colony algorithm and its application to dynamic optimization of chemical process. Comput. Chem. Eng. 2005, 29, 2078–2086.
(18) Faber, R.; Jockenhövel, T.; Tsatsaronis, G. Dynamic optimization with simulated annealing. Comput. Chem. Eng. 2005, 29, 273–290.
(19) Shelokar, P. S.; Jayaraman, V. K.; Kulkarni, B. D. Multicanonical jump walk annealing assisted by tabu for dynamic optimization of chemical engineering processes. Eur. J. Oper. Res. 2008, 185, 1213–1229.
(20) Glover, F. W.; Kochenberger, G. A. Handbook of Metaheuristics (International Series in Operations Research & Management Science); Springer: New York, 2003.
(21) Banga, J. R.; Balsa-Canto, E.; Moles, C. G.; Alonso, A. A. Improving food processing using modern optimization methods. Trends Food Sci. Technol. 2003, 14, 131–144.
(22) Pham, Q. T. Dynamic optimization of chemical engineering processes by an evolutionary method. Comput. Chem. Eng. 1998, 22, 1089–1097.
(23) Angira, R.; Santosh, A. Optimization of dynamic systems: A trigonometric differential evolution approach. Comput. Chem. Eng. 2007, 31, 1055–1063.
(24) Park, C.; Lee, T.-Y. Optimal control by evolutionary algorithm technique combined with spline approximation method. Chem. Eng. Commun. 2004, 191, 262–277.
(25) Wang, F.-S.; Su, T.-L.; Jang, H.-J. Hybrid differential evolution for problems of kinetic parameter estimation and dynamic optimization of an ethanol fermentation process. Ind. Eng. Chem. Res. 2001, 40, 2876–2885.
(26) Babu, B. V.; Angira, R. Modified differential evolution (MDE) for optimization of non-linear chemical processes. Comput. Chem. Eng. 2006, 30, 989–1002.
(27) Kookos, I. K. Optimization of batch and fed-batch bioreactors using simulated annealing. Biotechnol. Prog. 2004, 20, 1285–1288.
(28) Sun, D. Y.; Lin, P. M. The solution of time optimal control problems by simulated annealing. J. Chem. Eng. Jpn. 2006, 39, 753–766.
(29) Glover, F.; Laguna, M.; Martí, R. Scatter Search. In Advances in Evolutionary Computation: Theory and Applications; Ghosh, A., Tsutsui, S., Eds.; Springer-Verlag: New York, 2003; pp 519–537.
(30) Rodríguez-Fernández, M.; Egea, J. A.; Banga, J. R. Novel metaheuristic for parameter estimation in nonlinear dynamic biological systems. BMC Bioinformatics 2006, 7, 483.
(31) Egea, J. A.; Rodríguez-Fernández, M.; Banga, J. R.; Martí, R. Scatter search for chemical and bio-process optimization. J. Global Optim. 2007, 37, 481–503.
(32) Storn, R.; Price, K. Differential evolution - a simple and efficient heuristic for global optimization over continuous spaces. J. Global Optim. 1997, 11, 341–359.
(33) Balsa-Canto, E.; Banga, J. R.; Alonso, A. A.; Vassiliadis, V. S. Dynamic optimization of distributed parameter systems using second-order directional derivatives. Ind. Eng. Chem. Res. 2004, 43, 6756–6765.
(34) Vassiliadis, V. S.; Sargent, R. W. H.; Pantelides, C. C. Solution of a class of multistage dynamic optimization problems. 1. Problems without path constraints. Ind. Eng. Chem. Res. 1994, 33, 2111–2122.
(35) Cuthrell, J. E.; Biegler, L. T. Simultaneous optimization and solution methods for batch reactor control profiles. Comput. Chem. Eng. 1989, 13, 49–62.
(36) Glover, F. A template for scatter search and path relinking. In Artificial Evolution; Hao, J. K., Lutton, E., Ronald, E., Schoenauer, M., Snyers, D., Eds.; Springer-Verlag: New York, 1998; Vol. 1363, pp 13–54.
(37) Laguna, M.; Martí, R. Scatter Search: Methodology and Implementations in C; Kluwer Academic Publishers: Norwell, MA, 2003.
(38) McKay, M. D.; Beckman, R. J.; Conover, W. J. A comparison of three methods for selecting values of input variables in the analysis of output from a computer code. Technometrics 1979, 21, 239–245.
(39) Herrera, F.; Lozano, M. Gradual distributed real-coded genetic algorithms. IEEE Trans. Evol. Comput. 2000, 4, 43–62.
(40) Tfaili, W.; Siarry, P. A new charged ant colony algorithm for continuous dynamic optimization. Appl. Math. Comput. 2008, 197, 604–613.
(41) Optimization Toolbox 4 User's Guide; The MathWorks: Natick, MA, 2008.
(42) Ye, Y. Interior algorithms for linear, quadratic and linearly constrained non-linear programming. Ph.D. Thesis, Stanford University, CA, 1987.
(43) Panier, E.; Tits, A. L. On combining feasibility, descent and superlinear convergence in inequality constrained optimization. Math. Program. 1993, 59, 261–276.
(44) Exler, O.; Schittkowski, K. A trust region SQP algorithm for mixed-integer nonlinear programming. Optim. Lett. 2007, 1, 269–280.
(45) Abramson, M. A. Pattern search algorithms for mixed variable general constrained optimization problems. Ph.D. Thesis, Rice University, Houston, TX, 2002.
(46) de la Maza, M.; Yuret, D. Dynamic hill climbing. AI Expert 1994, 9, 26–31.
(47) Lozano, M.; Herrera, F.; Krasnogor, N.; Molina, D. Real-coded memetic algorithms with crossover hill-climbing. Evol. Comput. 2004, 12, 273–302.
(48) Molina, D.; Herrera, F.; Lozano, M. Adaptive local search parameters for real-coded memetic algorithms. IEEE Congress Evol. Comput. 2005, 888–895.
(49) Michalewicz, Z. Evolutionary algorithms for constrained parameter optimization problems. Evol. Comput. 1996, 4, 1–32.
(50) Coello Coello, C. A. Theoretical and numerical constraint-handling techniques used with evolutionary algorithms: A survey of the state of the art. Comput. Methods Appl. Mech. Eng. 2002, 191, 1245–1287.
(51) Vassiliadis, V. S.; Sargent, R. W. H.; Pantelides, C. C. Solution of a class of multistage dynamic optimization problems. 2. Problems with path constraints. Ind. Eng. Chem. Res. 1994, 33, 2123–2133.
(52) Hansen, N.; Müller, S. D.; Koumoutsakos, P. Reducing the time complexity of the derandomized evolution strategy with covariance matrix adaptation (CMA-ES). Evol. Comput. 2003, 11, 1–18.
(53) Runarsson, T. P.; Yao, X. Stochastic ranking for constrained evolutionary optimization. IEEE Trans. Evol. Comput. 2000, 4, 284–294.
(54) Jones, D. R. DIRECT global optimization algorithm. In Encyclopedia of Optimization; Floudas, C. A., Pardalos, P. M., Eds.; Kluwer Academic Publishers: Dordrecht, The Netherlands, 2001; pp 431–440.
(55) Holmström, K.; Edvall, M. M. The Tomlab Optimization Environment. In Modeling Languages in Mathematical Optimization; Kallrath, J., Ed.; Kluwer Academic Publishers: Boston, MA, 2004; Vol. 88, Chapter 19, pp 369–378.
(56) Laguna, M.; Martí, R. The OptQuest Callable Library. In Optimization Software Class Libraries; Voss, S., Woodruff, D. L., Eds.; Kluwer Academic Publishers: Boston, MA, 2002.
(57) Neumaier, A.; Shcherbina, O.; Huyer, W.; Vinkó, T. A comparison of complete global optimization solvers. Math. Program. 2005, 103, 335–356.
(58) Birattari, M.; Dorigo, M. How to assess and report the performance of a stochastic algorithm on a benchmark problem: Mean or best result on a number of runs. Optim. Lett. 2007, 1, 309–311.
(59) Hong, J. Optimal substrate feeding policy for a fed batch fermentation with substrate and product inhibition kinetics. Biotechnol. Bioeng. 1986, 28, 1421–1431.
(60) Chen, C. T.; Hwang, C. Optimal control computation for differential-algebraic process systems with general constraints. Chem. Eng. Commun. 1990, 97, 9–26.
(61) Chen, C. T.; Hwang, C. Optimal on-off control for fed-batch fermentation processes. Ind. Eng. Chem. Res. 1990, 29, 1869–1875.
(62) Luus, R. Application of dynamic programming to differential-algebraic process systems. Comput. Chem. Eng. 1993, 17, 373–377.
(63) Luus, R. Piecewise linear continuous optimal control by iterative dynamic programming. Ind. Eng. Chem. Res. 1993, 32, 859–865.
(64) Banga, J. R.; Singh, R. P. Optimization of air drying of foods. J. Food Eng. 1994, 23, 189–211.
(65) Saa, J.; Alonso, A. A.; Banga, J. R. Optimal control of microwave heating using mathematical models of medium complexity. Automatic Control of Food and Biological Processes (AcoFop IV), Göteborg, Sweden, Sept 21–23, 1998.
(66) Banga, J. R.; Saa, J.; Alonso, A. A. Model-based optimization of microwave heating of bioproducts. 7th International Conference on Microwave and High Frequency Heating, Valencia, Spain, 1999.
(67) Dolan, E. D.; Moré, J. J. Benchmarking optimization software with performance profiles. Math. Program. 2002, 91, 201–213.
(68) Auger, A.; Hansen, N. Performance evaluation of an advanced local search evolutionary algorithm. IEEE Congress Evol. Comput. 2005, 1777–1784.
(69) Luyben, K. C. A. M.; Liou, J. K.; Bruin, S. Enzyme degradation during drying. Biotechnol. Bioeng. 1982, 24, 533–552.
(70) Mishkin, M.; Karel, M.; Saguy, I. Applications of optimization in food dehydration. Food Technol.-Chicago 1982, 36, 101–109.
(71) Villota, R.; Karel, M. Prediction of ascorbic acid retention during drying. I. Moisture and temperature distribution in a model system. J. Food Process. Preserv. 1980, 4, 111–134.
(72) Villota, R.; Karel, M. Prediction of ascorbic acid retention during drying. II. Simulation of retention in a model system. J. Food Process. Preserv. 1980, 4, 141–159.
(73) Lin, Y. E.; Anantheswaran, R. C.; Puri, V. M. Finite element analysis of microwave heating of solid foods. J. Food Eng. 1995, 25, 85–112.
(74) Ohlsson, T.; Bengtsson, N. E. Microwave heating profiles in food. Microwave Energy Appl. Newslett. 1971, 4, 1–8.
(75) Chen, D.-S. D.; Singh, R. K.; Haghighi, K.; Nelson, P. E. Finite element analysis of temperature distribution in microwaved cylindrical potato tissue. J. Food Eng. 1993, 18, 351–368.

Received for review November 11, 2008. Revised manuscript received February 17, 2009. Accepted February 27, 2009.

IE801717T