
Novel Hybrid Evolutionary Algorithm for Dynamic Optimization Problems and Its Application in an Ethylene Oxide Hydration Reactor

Feng Qian,*,† Fan Sun,† Wenli Du,† and Weimin Zhong*,‡

†Key Laboratory of Advanced Control and Optimization for Chemical Processes, Ministry of Education, and ‡Department of Automation, East China University of Science and Technology, Shanghai 200237, China

ABSTRACT: A novel hybrid evolutionary algorithm (HEA) that combines the genetic algorithm (GA) and particle swarm optimization (PSO) is proposed to solve the dynamic optimization problems of chemical processes using numerical methods. Based on the characteristics of dynamic optimization problems, the concept of "search region reduction" is integrated into the HEA to improve the convergence rate. A control vector parametrization (CVP) method based on the HEA is also employed to improve the accuracy of the results. Dynamic optimization with state variable constraints is an important and difficult research area in process systems engineering. Thus, the present work also proposes a novel method that embeds information about infeasible chromosomes into the evaluation function, so that dynamic optimization problems with or without state variable constraints can be solved. The results of several case studies demonstrate the feasibility and efficiency of the proposed methods. Finally, the proposed methods are used to solve the temperature distribution problem in an ethylene oxide hydration reactor. The proposed algorithm can be regarded as a useful optimization tool, especially when gradient information is unavailable.

1. INTRODUCTION
Chemical processes can often be described by a group of differential algebraic equations (DAEs). Dynamic optimization problems are encountered in a large number of chemical engineering applications aimed at optimizing a predefined performance index, such as profitability, product quality, or productivity, subject to safety or environmental specifications over a time interval by controlling operation variables.1,2 A typical dynamic optimization problem for a continuous process3 is described as follows:

min J(x, u) = Φ[x(tf), tf] + ∫_{t0}^{tf} Ψ[x(t), y(t), u(t), t] dt    (1)

subject to the dynamics of the process, described by a set of DAEs

dx/dt = φ[x(t), y(t), u(t), t]
h[x(t), y(t), u(t)] = 0
g[x(t), y(t), u(t)] ≤ 0    (2)

where u(t) denotes the control variable (or decision variable); x(t) and y(t) are the differential and algebraic state variables, respectively; J is the performance index; t0 is the initial time; and tf is the final time. Φ is a penalty function for the final state x(tf) at the final time tf, and Ψ is a function intended to weigh the state path traveled and the control sequence used. φ represents the differential equality constraints, and h and g are sets of algebraic equality and inequality constraints, respectively. Initial conditions are also specified to ensure the solvability of the DAEs: x(t0) = x0.

Constraints in dynamic optimization problems can include operational bounds and validity ranges for both control and state variables: umin ≤ u(t) ≤ umax, xmin ≤ x(t) ≤ xmax. In a particular case, state variable constraints exist at the final time tf: xfmin ≤ x(tf) ≤ xfmax. When xfmin = xfmax, the problem is converted into a dynamic optimization problem with a fixed state variable boundary.

Dynamic optimization methods can be roughly divided into dynamic programming, indirect methods, and direct methods. Dynamic programming (DP) methods rely on Bellman's optimality conditions.4 DP is a successful method for solving dynamic optimization problems, apart from the curse of dimensionality, which impairs the applicability of the method. To overcome this drawback, iterative DP (IDP) was proposed by Luus.5 IDP limits the growth of dimensionality and improves the reliability of the solution through iterative computation. However, the high computational cost for systems involving large numbers of DAEs restricts the use of IDP to small-scale problems. Indirect methods are based on the solution of the first-order necessary conditions for optimality obtained from Pontryagin's maximum principle.6,7 The resulting two-point boundary value problem (TPBVP) can be addressed with different approaches, including single shooting, invariant embedding, and multiple shooting.8,9 These are the most precise methods for solving optimal control problems. However, if the problem requires the handling of active inequality constraints, finding the correct switching structure, as well as suitable initial guesses for the state and adjoint variables, is often very difficult.10 Therefore, indirect methods are extremely complicated to apply in practice.

Received: September 1, 2011
Revised: November 15, 2012
Accepted: November 16, 2012
Published: November 17, 2012

Ind. Eng. Chem. Res. 2012, 51, 15974−15985


Direct methods convert the continuous optimization problem into a nonlinear programming (NLP) problem. They employ two different types of parametrization, namely, the simultaneous approach and control vector parametrization (CVP).11 The simultaneous approach discretizes both the state and control variables, whereas CVP discretizes only the control variables. In its earliest incarnation, the simultaneous approach discretized the control variables and approximated the differential equations of the model using finite differences. Subsequently, the simultaneous approach parametrized the time paths of the control and state variables, resulting in an NLP problem under algebraic constraints, where the decision variables constitute the parameters of each path. Biegler et al. conducted a great deal of valuable work on the use of the simultaneous approach for dynamic process optimization.12−15 However, the dimension of the NLP problem is greater in a simultaneous approach than in a CVP method, and the same is true of the computational cost.16 This condition justifies the more frequent use of CVP methods in process engineering problems. CVP is relatively easy to apply, as it builds on reliable DAE solvers as well as NLP solvers. On the other hand, a drawback of CVP is that the differential equations must be stable; moreover, repeated numerical integration of the DAE model is required, which can become time-consuming for large-scale problems.13 The direct multiple-shooting algorithm is an effective method for solving dynamic optimization problems; multiple shooting serves as a bridge between CVP approaches and simultaneous approaches that are based on a complete discretization of the state and control variables.12

For dynamic optimization, gradient algorithms based on the Hamiltonian function have proved to be reliable.9 The main advantage of gradient-based algorithms lies in the fact that a good initial guess for the decision variables is beneficial but not critical for convergence. However, gradient-based algorithms can be very expensive and time-consuming to implement because of the large amount of work required prior to the optimization itself: model reformulation, discretization, provision of first- and second-order derivatives, and so on. Such methods become too complicated when the number of state and control variables increases and the complexity of the system grows.13

Unlike gradient-based methods, intelligent optimization algorithms, including the genetic algorithm (GA), the ant-colony algorithm (ACA), and particle swarm optimization (PSO), do not require gradient information, and such methods have been shown to be feasible.17−21 Rajesh et al.17 utilized the ant-colony framework to solve dynamic optimization problems; the method is easier to implement than IDP, but at the cost of solution accuracy. The iterative ant-colony algorithm (IACA) was proposed by Zhang et al.19 The basic principle behind IACA is the iterative execution of ACA and the gradual approximation of the optimal control profile.

The dynamic optimization problem with state variable constraints requires that a decision be made on whether a control strategy is feasible or infeasible.22 To solve dynamic optimization problems with a fixed state variable boundary, Zhang et al.23 developed a novel strategy called graded optimization, which deals with the constraint of the fixed boundary prior to objective optimization. However, this approach can be used only for problems having a fixed boundary. An improved clonal selection algorithm (ICSA) was proposed by Lin et al.24 to solve constrained dynamic optimization problems. ICSA can identify the optimal control strategy with an acceptable computation cost, but the results are unsatisfactory.24 A nested constraint-preferred optimization algorithm was proposed by Liu and Chen to reduce the fixed-boundary optimal control problem to a free-boundary optimization problem.25 Several cases showed the advantage of this algorithm in terms of convergence performance and computational efficiency; however, it too can solve only problems with a fixed boundary.

When solving dynamic optimization problems in chemical processes, gradient information is sometimes unavailable, and for problems with large numbers of local extreme values, some of these algorithms are highly dependent on the selection of the initial value. Intelligent optimization algorithms can effectively avoid such problems. Therefore, the current study considered PSO and GA and developed a novel hybrid evolutionary algorithm (HEA) that effectively combines the properties of GA and PSO for solving dynamic optimization problems.

The current article is organized as follows: Section 2 presents related studies on PSO and GA. Section 3 describes the HEA proposed for solving dynamic optimization problems. In section 4, the HEA is applied to several case studies to demonstrate its effectiveness in dealing with dynamic optimization problems. In section 5, the obtained results are compared with the results of other methods, and the performance of the algorithm is discussed. In section 6, the HEA is used to solve the temperature distribution problem in an ethylene oxide (EO) hydration reactor and to obtain the optimal temperature distribution of the reactor, which could be used to guide industrial production and process design. Section 7 presents concluding remarks on the behavior of the proposed methods.

2. INTRODUCTION TO PARTICLE SWARM OPTIMIZATION AND THE GENETIC ALGORITHM
A number of comparisons of the performance of GA and PSO have been presented in the literature,26−28 emphasizing the reliability and convergence speed of both methods. The different search mechanisms employed by GA and PSO allow them to perform well in some applications but not in others. Notably, GA sometimes outperforms PSO, and occasionally the opposite occurs, showing the typical application-driven character of any single technique.27 In particular, PSO appears to have a higher convergence rate than GA early in the run but is often outperformed by GA in long simulations.28

2.1. Introduction to Particle Swarm Optimization. The PSO algorithm was first proposed by Eberhart and Kennedy.29,30 PSO starts with the initialization of a population of random particles, each of which is associated with a position and a velocity. The velocities are adjusted according to the historical behavior of each particle and its neighbors as they travel through the search space, and the positions are updated from the current positions and the new velocities. Therefore, the particles tend to move toward increasingly better search areas over the course of the search process. To identify an optimal or near-optimal solution, PSO updates the current generation of particles using the information on the best solution obtained by each particle and by the entire population. As a relatively new evolutionary algorithm, PSO has been successfully applied to unconstrained and constrained optimization, artificial neural network training, parameter optimization, and feature selection.


Since the original version of PSO was proposed in 1995,29,30 a great deal of work has been conducted to develop the algorithm. To control global exploration and local exploitation effectively, Shi and Eberhart31 introduced the concept of inertia weight into the original PSO, thus developing a modified PSO. Compared with the original version, the modified PSO shows a notable improvement in performance for some problems; it is therefore often referred to as the standard PSO version (SPSO) and has been employed in a large number of successful studies.32 The SPSO version can be described by the update equations

vid^(k+1) = ω vid^k + c1 rand1 (pbestid^k − sid^k) + c2 rand2 (gbestid^k − sid^k)    (3)

sid^(k+1) = sid^k + vid^(k+1)    (4)

Suppose that, for particles moving through a D-dimensional search space (d = 1, 2, ..., D), si = (si1, si2, ..., siD) and vi = (vi1, vi2, ..., viD) represent the position and the velocity, respectively, of particle i (i = 1, 2, ..., n), whereas pbesti = (pbesti1, pbesti2, ..., pbestiD) is the best position already identified by particle i, and gbesti = (gbesti1, gbesti2, ..., gbestiD) is the best position among all particles. k is the current iteration, and rand1 and rand2 are random numbers between 0 and 1. The coefficients c1 and c2 are given acceleration constants toward pbest and gbest, respectively, and ω is the inertia weight. The velocity update equation comprises three components, namely, the previous velocity, a cognitive component, and a social component. The cognitive component considers only the particle's own experience, whereas the social component represents the information shared through interactions among particles. One SPSO iteration is sketched below.
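The following minimal sketch performs one SPSO update (eqs 3 and 4) for a whole swarm at once; the array names, the default bounds, and the vectorized form are illustrative assumptions, not from the original paper.

```python
import numpy as np

def spso_step(s, v, pbest, gbest, w=0.6, c1=1.7, c2=1.7, bounds=(0.0, 1.0)):
    """One SPSO iteration (eqs 3 and 4) for a swarm of shape (n, D)."""
    n, D = s.shape
    r1, r2 = np.random.rand(n, D), np.random.rand(n, D)
    # Velocity update: inertia + cognitive + social components (eq 3)
    v = w * v + c1 * r1 * (pbest - s) + c2 * r2 * (gbest - s)
    # Position update (eq 4), clipped to the search region
    s = np.clip(s + v, bounds[0], bounds[1])
    return s, v
```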

2.2. Introduction to Genetic Algorithm. GA is a stochastic optimization method developed by Holland.33 GA simulates natural evolution in terms of survival of the fittest, employing biological operators such as selection, crossover, and mutation to achieve faster convergence. GA starts with a randomly initialized population of individuals, which are among the candidate solutions to the optimization problem. The individuals are represented as chromosomes, most commonly encoded as binary strings. Some individuals, called parents, are then selected through the fitness function, which assesses the quality of each solution. Subsequently, a number of new individuals, called offspring, are reproduced from the parents by the crossover and mutation operators. This step is iterated until the termination conditions are satisfied. GA has been successfully used in solving combinatorial problems. The population-to-population approach enables the method to search large, discontinuous search spaces without getting trapped in a local optimum, and population diversity can be maintained by setting the probabilities of the genetic operators. However, the genetic operators can also destroy good genes, thus delaying convergence.18

3. HYBRID EVOLUTIONARY ALGORITHM
The HEA proposed for solving dynamic optimization problems is described in this section. The HEA effectively combines the properties of GA and PSO for solving dynamic optimization problems.

3.1. Control Profile Approximation by Control Vector Parametrization. Referring to eq 1, the objective is to find the profile of the control variable u(t) that optimizes the objective function. In the present work, piecewise linear and continuous control approximations are employed for u(t). The time interval [t0, tf] is uniformly divided into n stages, and the ith time stage is [ti−1, ti] (i = 1, 2, ..., n), where the first node coincides with t0 and tn = tf. The optimal control profiles are usually not uniform, so the solution quality in the present work is limited by the choice of the number of stages n.34 In any ith stage [ti−1, ti], the control profile u(t) is determined by u(ti−1) and u(ti). The whole profile of the control variable can thus be expressed by the n + 1 points u(t0), u(t1), ..., u(tn), which are then searched by the algorithm. This parametrization is sketched below.
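A minimal sketch of the piecewise-linear parametrization; the node arrays and example values are hypothetical names introduced for illustration.

```python
import numpy as np

def control_profile(t, t_nodes, u_nodes):
    """Piecewise linear, continuous u(t) defined by n + 1 node values.

    t_nodes: array of n + 1 uniformly spaced times t0..tn (tn = tf)
    u_nodes: array of n + 1 decision variables u(t0)..u(tn)
    """
    return np.interp(t, t_nodes, u_nodes)

# Usage: a 10-stage temperature profile on [0, 12.5] evaluated at t = 3.3
t_nodes = np.linspace(0.0, 12.5, 11)
u_nodes = np.random.uniform(700.0, 1000.0, size=11)
print(control_profile(3.3, t_nodes, u_nodes))
```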

3.2. Search Region Reduction. Based on the characteristics of dynamic optimization problems, the concept of "search region reduction" is proposed in the current article. In the later period of evolution, an intelligent algorithm usually exhibits very slow convergence. To improve the convergence rate, the search region is reduced, and the best solution is sought again once the results are close to the best solution found so far. In different time stages, the optimal control variables differ from one another, so if the search region remains unchanged, finding the optimal solution becomes more difficult. The best profile found so far is taken as the baseline, and the control region is shrunk by a coefficient so that the control profile is located at the center of the new control region. The new region is much smaller than the original region, so the probability of obtaining the best solution increases greatly.

At the beginning of the search, all control variables in the whole time interval [t0, tf] have the same search region. The initial bounds of the control variables are denoted by uimax and uimin, and the search region of the control variable ui(t) is denoted by [uimin, uimax] (i = 1, 2, ..., n). The initial search width of ui is

wi = uimax − uimin,  i = 1, 2, ..., n    (5)

In the current study, when the optimal value obtained by the algorithm remained unchanged for 10 consecutive generations, the baseline for the search was moved to the optimal value obtained in the previous search, and the region reduction strategy was employed to reduce the search region:

wi′ = r wi,  i = 1, 2, ..., n    (6)

ui′min = uib − wi′/2,  i = 1, 2, ..., n    (7)

ui′max = uib + wi′/2,  i = 1, 2, ..., n    (8)

where r is the reduction ratio, 0 < r < 0.5, and uib is the value of the optimal control variable obtained in the previous search. Based on simulation experiments, r = 0.2 was selected in the current article. If ui′min < uimin or ui′max > uimax, then the new search region is forced back to the original lower or upper bound, that is, ui′min = uimin or ui′max = uimax, respectively. This reduction step is sketched below.
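A sketch of eqs 6−8, including the forcing of the bounds; the function and variable names are hypothetical.

```python
import numpy as np

def reduce_region(u_best, u_min, u_max, r=0.2):
    """Center a narrower search region on the best profile found (eqs 6-8)."""
    w_new = r * (u_max - u_min)              # eq 6: shrink the width
    lo = u_best - 0.5 * w_new                # eq 7: new lower bound
    hi = u_best + 0.5 * w_new                # eq 8: new upper bound
    # Force the new region back inside the original bounds where needed
    lo = np.maximum(lo, u_min)
    hi = np.minimum(hi, u_max)
    return lo, hi
```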


3.3. Combination of Particle Swarm Optimization and Genetic Algorithm. In each iteration, the population is divided into two parts, and each part is evolved using either GA or PSO. The updated subpopulations are then recombined, and the new population is again divided randomly into two parts in the next iteration for another run of the GA and PSO operations.

3.3.1. Operation of Particle Swarm Optimization. In PSO, each potential solution is assigned a randomized velocity. The potential solutions, called particles, then travel through the problem space by following the current best particles. Each particle maintains a memory of its previous best position, represented by pbest; the best position among all particles in the population is represented by gbest. PSO uses pbest and gbest as two exemplars to guide each particle while moving. Following the updates in eqs 3 and 4, all of the particles move toward the weighted point defined by their own and the swarm's historical best positions, which prompts the algorithm to converge toward a special point.

According to eqs 3 and 4, PSO is primarily controlled by three parameters, namely, the inertia weight (ω) and the two acceleration coefficients (c1 and c2). Shi and Eberhart31 analyzed the effect of the inertia weight through empirical experiments: a larger value of ω results in a better global convergence capability, whereas a smaller value of ω yields a better local search, so a suitable selection of ω provides a balance between global and local exploration. The constants c1 and c2 pull each particle toward the pbest and gbest positions, respectively. Low values allow particles to roam far from the target regions before being pulled back, whereas high values result in abrupt movements toward, or past, the target regions.35 Trelea36 reported that PSO with ω = 0.6 and c1 = c2 = 1.7 had a higher convergence rate than PSO with ω = 0.729 and c1 = c2 = 1.494. In the present work, the PSO parameters were therefore selected as ω = 0.6 and c1 = c2 = 1.7.

3.3.2. Genetic Operations. There are three genetic operations: roulette selection, crossover, and mutation. (1) The roulette-wheel selection approach is applied for the selection operation: a random number is generated, and the individual whose segment spans the random number is selected. The process is repeated until the desired number of individuals is obtained; the probability of selecting an individual depends on the individual's relative fitness. (2) Crossover recombines the genetic material of two parent chromosomes to produce two child chromosomes that share the characteristics of their parents. In the current article, two-point crossover was applied at a rate of Pc, where the crossover rate refers to the chance for chromosomes in the population to undergo crossover; Pc = 0.5 was selected. (3) Uniform mutation at a rate of Pm was used in the present work, with every gene having the same probability Pm of undergoing mutation; Pm = 0.1 was selected. Minimal sketches of these operators are given below.
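The three operators can be sketched as follows, under the assumption of real-coded chromosomes (the paper's GA description mentions binary encoding as the most common choice); the names are hypothetical.

```python
import numpy as np

def roulette_select(pop, fitness, k):
    """Select k individuals with probability proportional to fitness."""
    p = fitness / fitness.sum()              # assumes positive fitness values
    idx = np.random.choice(len(pop), size=k, p=p)
    return pop[idx]

def two_point_crossover(a, b, pc=0.5):
    """Swap the segment between two random cut points with probability pc."""
    a, b = a.copy(), b.copy()
    if np.random.rand() < pc:
        i, j = sorted(np.random.choice(len(a), 2, replace=False))
        a[i:j], b[i:j] = b[i:j].copy(), a[i:j].copy()
    return a, b

def uniform_mutation(x, lo, hi, pm=0.1):
    """Resample each gene uniformly within its bounds with probability pm."""
    mask = np.random.rand(len(x)) < pm
    x = x.copy()
    x[mask] = np.random.uniform(lo[mask], hi[mask])
    return x
```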

3.4. New Evaluation Function. The penalty function method is often employed to deal with state variable constraints. However, its limitation is that the size of the penalty factor has a significant effect on the results, and setting a proper value for the penalty factor is difficult. In the present research, a new evaluation function is proposed that evaluates infeasible chromosomes by embedding the information on infeasible chromosomes into the evaluation function. The feasible domain Q is defined as Q = {x | a ≤ x1(tf) ≤ b}; when a = b, the problem becomes an equality-constrained problem. The distance d(x,Q) between x and the feasible domain Q is defined to be the maximum violation of the constraints

d(x,Q) = 0,           if a ≤ x1(tf) ≤ b
       = a − x1(tf),  if x1(tf) ≤ a
       = x1(tf) − b,  if x1(tf) ≥ b    (9)

The distance d(x,Q) reveals the relationship between x and the feasible domain Q: d(x,Q) = 0 indicates that x ∈ Q, whereas, when d(x,Q) > 0, a greater d(x,Q) indicates that x lies farther from the feasible domain Q. Embedding the distance from the feasible domain into the fitness evaluation of the algorithm yields

eval(x) = f(x)/[1 + d(x,Q)]^α,   f(x) ≥ 0
        = f(x)[1 + d(x,Q)]^α,    f(x) < 0    (10)

where α ≥ 2 and f(x) is the fitness value of the individual. Generally, α = 2 is selected; if d(x,Q) is extremely small, then α should be given a larger value. In the current experiments, α = 2 was appropriate when the range of the state variables exceeded 10; when the range of the state variables was less than 1, α was set greater than 5; and when the range was between 1 and 10, α was given a value between 2 and 5. If several state variable constraints exist, the function becomes

eval(x) = f(x)/{[1 + d1(x,Q)]^α1 × [1 + d2(x,Q)]^α2 × ··· × [1 + dn(x,Q)]^αn},   f(x) ≥ 0
        = f(x){[1 + d1(x,Q)]^α1 × [1 + d2(x,Q)]^α2 × ··· × [1 + dn(x,Q)]^αn},    f(x) < 0    (11)

Sketches of eqs 9 and 10 are given below.
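A minimal sketch of eqs 9 and 10 for a single terminal constraint a ≤ x1(tf) ≤ b; the names are hypothetical.

```python
def distance_to_Q(x1_tf, a, b):
    """Maximum constraint violation d(x, Q) of eq 9."""
    if x1_tf < a:
        return a - x1_tf
    if x1_tf > b:
        return x1_tf - b
    return 0.0

def evaluate(f_x, d, alpha=2):
    """Constraint-aware fitness of eq 10: infeasibility always worsens eval."""
    factor = (1.0 + d) ** alpha
    return f_x / factor if f_x >= 0 else f_x * factor
```

Note that dividing a non-negative fitness and multiplying a negative one both push eval(x) toward worse values as d grows, so a feasible individual always dominates an equally fit infeasible one.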

3.5. Steps of the HEA. The flowchart of the HEA is shown in Figure 1. The detailed steps are as follows:

Figure 1. Flowchart of the hybrid evolutionary algorithm.

Step 1. The population is initialized. The original population, comprising Popsize individuals, is generated randomly. The current generation is denoted by k, with k = 0 initially, and the maximum evolutionary generation is denoted by Max_gen.
Step 2. Steps 3−7 are repeated until the end condition is met. Every 50 generations, 10 new randomly generated individuals replace the 10 worst individuals in the population to ensure that the algorithm does not become trapped in a local candidate solution.
Step 3. The objective function and individual fitness are calculated using the CVP method. The explicit Runge−Kutta method is used to solve the differential equations.
Step 4. The stopping condition is checked. J(k) is the best objective function value in the kth iteration. If |J(k) − J(k−1)| < 10^−6 or k = Max_gen, the algorithm stops; otherwise, step 5 is executed.
Step 5. The personal and global best solutions are updated.
Step 6. The population is divided randomly into two parts, and a proportion of the individuals are evolved using PSO. Each individual's fitness evaluation is compared with its best solution; if the current value is better than the previous best solution, the latter is replaced, and the former is set as pbest. Each particle's fitness is then compared with the population's gbest; if the fitness of the current solution is better than gbest's fitness, the former is set as the new gbest. The velocity and position of each individual are then modified according to eqs 3 and 4.
Step 7. The remaining individuals are evolved using GA; the genetic operations, including roulette selection, crossover, and mutation, are performed.
Step 8. If the optimal value obtained by the algorithm remains unchanged for 10 consecutive generations, search region reduction is executed: wi′, ui′min, and ui′max are computed according to eqs 6−8, and the algorithm returns to step 3. Notably, search region reduction is performed only once during the run time of the algorithm. In the current experiments, after search region reduction was performed for the first time, the accuracy of the results increased significantly, whereas implementing it a second time yielded minimal improvement in accuracy while increasing the computational cost. Because the reduction ratio r was set to 20%, a second reduction would shrink the search region to 4% of the original region, which would be too small to search for a better solution. Thus, this operation was performed only once to balance the accuracy of the results and the computational cost. A skeleton of the overall loop is sketched below.
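The following skeleton is a sketch of steps 1−8, not the authors' code; it assumes that the helpers sketched in the previous sections (spso_step, reduce_region, roulette_select, two_point_crossover, uniform_mutation) are in scope, and evaluate stands for the CVP-plus-Runge−Kutta scoring of step 3.

```python
import numpy as np

def hea(evaluate, lo, hi, popsize=50, max_gen=300, tol=1e-6):
    """HEA main loop: random GA/PSO split plus one-shot region reduction."""
    dim = len(lo)
    pop = np.random.uniform(lo, hi, size=(popsize, dim))       # step 1
    vel = np.zeros_like(pop)
    fit = np.array([evaluate(x) for x in pop])
    pbest, pfit = pop.copy(), fit.copy()
    best_hist, reduced = [fit.max()], False
    for k in range(1, max_gen + 1):                            # step 2
        if k % 50 == 0:  # inject 10 random individuals over the 10 worst
            worst = np.argsort(fit)[:10]
            pop[worst] = np.random.uniform(lo, hi, size=(10, dim))
        fit = np.array([evaluate(x) for x in pop])             # step 3
        better = fit > pfit                                    # step 5
        pbest[better], pfit[better] = pop[better], fit[better]
        gbest = pbest[np.argmax(pfit)]
        best_hist.append(pfit.max())
        if abs(best_hist[-1] - best_hist[-2]) < tol:           # step 4
            break
        half = np.random.permutation(popsize)[: popsize // 2]  # random split
        pop[half], vel[half] = spso_step(pop[half], vel[half], # step 6
                                         pbest[half], gbest, bounds=(lo, hi))
        rest = np.setdiff1d(np.arange(popsize), half)          # step 7
        parents = roulette_select(pop[rest],
                                  np.maximum(fit[rest], 1e-12), len(rest))
        for i in range(0, len(rest) - 1, 2):
            parents[i], parents[i + 1] = two_point_crossover(parents[i],
                                                             parents[i + 1])
        pop[rest] = np.array([uniform_mutation(x, lo, hi) for x in parents])
        if not reduced and len(best_hist) > 10 and \
                abs(best_hist[-1] - best_hist[-11]) < tol:     # step 8
            lo, hi = reduce_region(gbest, lo, hi, r=0.2)
            reduced = True
    return gbest, max(best_hist)
```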


4. CASE STUDIES OF HEA
Three cases reported in the literature are used to evaluate the performance of the HEA. For each case, 10 runs were executed. The parameters of the HEA for all cases are shown in Table 1.

Table 1. HEA Parameters for All Cases
parameter                  value
population size            Popsize = 50
crossover probability      Pc = 0.5
mutation probability       Pm = 0.1
acceleration coefficients  c1 = c2 = 1.7
inertia weight             ω = 0.6
reduction ratio            r = 0.2

The maximum and minimum control regions were different in the different cases. The maximum number of evolutionary generations (Max_gen) was determined by both the time stage number (n) and the complexity of each case (including the numbers of control and state variables and the number of DAEs in the system). These three parameters for the different cases are listed in Table 2.

Table 2. Parameters for Different Cases
case  maximum control region      minimum control region   Max_gen
1a    Tmax = 1000                 Tmin = 700               150 (n = 10), 400 (n = 20)
1b    Tmax = 1000                 Tmin = 700               100 (n = 4), 250 (n = 10)
2     u1max = 0.01, u2max = 0.01  u1min = 0, u2min = 0     250
3     umax = 50                   umin = 0                 300

4.1. Case 1. Case 1 is a temperature control problem for a consecutive reaction.37 The objective is to obtain the optimal temperature profile that maximizes the selectivity of the target product B at the end of operation in a batch reactor, where the reaction A → B → C occurs. The mathematical model is given by

max J(T) = CB(tf)/[1 − CA(tf)]

s.t.  dCA/dt = −k1CA
      dCB/dt = k1CA − k2CB
      CA(0) = 1,  CB(0) = 0    (12)

where tf = 12.5 s; ki = k0i exp[−Ei/(RT)], i = 1, 2; k01 = 65.6 s−1; k02 = 1970 s−1; E1 = 4.18 × 10^4 J·mol−1; E2 = 6.688 × 10^4 J·mol−1; CA and CB are the concentrations of A and B, respectively; and k1 and k2 are rate constants. Rajesh et al.17 used the objective that maximizes the concentration of the target product B at the end of operation; this subcase is denoted as case 1a. Applying the constraints CB(tf) ≥ 0.44 and 700 ≤ T ≤ 1000 makes the problem a more complicated dynamic optimization problem, which is denoted as case 1b.

For case 1a, using the HEA to solve the problem, the optimal values are 0.48005 (n = 10) and 0.48012 (n = 20). The optimal temperature and concentration profiles at n = 20 are shown in Figure 2. For case 1b, using the HEA at n = 4, the concentrations of the products at the outlet of the reactor are CA(tf) = 0.47792, CB(tf) = 0.44000, and CC(tf) = 0.08208, and the yield coefficient of the target product is 0.84279. The optimal temperature and concentration profiles are shown in Figure 3. At n = 10, the yield coefficient of the target product reaches 0.84335. A sketch of how one candidate temperature profile is scored in step 3 is given after Figure 3.


Figure 2. Optimal temperature and concentration profiles for case 1a.

Figure 3. Optimal temperature and concentration profiles for case 1b.
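The sketch below, assuming a fixed-step classical RK4 integrator, shows how a candidate node vector for case 1a would be scored inside step 3 of the HEA; the function name and step count are hypothetical.

```python
import numpy as np

R_GAS = 8.314  # J·mol−1·K−1

def case1_objective(u_nodes, tf=12.5, steps=400):
    """Integrate the A -> B -> C model under a piecewise-linear T(t)
    and return CB(tf), the case 1a objective."""
    t_nodes = np.linspace(0.0, tf, len(u_nodes))
    h = tf / steps

    def rhs(t, y):
        CA, CB = y
        T = np.interp(t, t_nodes, u_nodes)
        k1 = 65.6 * np.exp(-4.18e4 / (R_GAS * T))
        k2 = 1970.0 * np.exp(-6.688e4 / (R_GAS * T))
        return np.array([-k1 * CA, k1 * CA - k2 * CB])

    y = np.array([1.0, 0.0])                    # CA(0) = 1, CB(0) = 0
    for i in range(steps):                      # classical RK4 steps
        t = i * h
        k1v = rhs(t, y)
        k2v = rhs(t + h / 2, y + h / 2 * k1v)
        k3v = rhs(t + h / 2, y + h / 2 * k2v)
        k4v = rhs(t + h, y + h * k3v)
        y = y + h / 6 * (k1v + 2 * k2v + 2 * k3v + k4v)
    return y[1]  # CB(tf)
```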

4.2. Case 2: Feeding-Rate Optimization of Foreign Protein Production. Foreign protein production using recombinant bacteria was modeled by Lee and Ramirez.38 The nutrient and the inducer are required for foreign protein production, and to obtain the maximum economic benefit, the glucose and inducer feed rates should be controlled. This problem has been studied by Roubos et al.,39 Zhang et al.,19 Sarkar et al.,40 and Lin et al.41 The model involves seven state variables and two control variables

dx1/dt = u1 + u2
dx2/dt = μx2 − [(u1 + u2)/x1]x2
dx3/dt = (u1/x1)Cnf − [(u1 + u2)/x1]x3 − Y^−1 μx2
dx4/dt = Rfp x2 − [(u1 + u2)/x1]x4
dx5/dt = (u2/x1)Cif − [(u1 + u2)/x1]x5
dx6/dt = −k1x6
dx7/dt = k2(1 − x7)    (13)

The state variables (x1−x7) are the reaction volume (x1 in L), cell density (x2 in g·L−1), nutrient concentration (x3 in g·L−1), foreign protein concentration (x4 in g·L−1), inducer concentration (x5 in g·L−1), inducer shock factor on the cell growth rate (x6), and inducer recovery factor on the cell growth rate (x7). The two control variables (u1, u2) are the glucose feed rate (L·h−1) and the inducer feed rate (L·h−1). Additional variables are the growth yield coefficient (Y), the nutrient concentration in the nutrient feed (Cnf in g·L−1), the inducer concentration in the inducer feed (Cif in g·L−1), the specific growth rate (μ in h−1), the foreign protein production rate (Rfp in h−1), and the shock and recovery parameters (k1, k2). The system parameters for this bacterial system are given as

μ = [0.407x3/(0.108 + x3 + x3^2/14814.8)][x6 + 0.22x7/(0.22 + x5)]
k1 = k2 = 0.09x5/(0.034 + x5)
Rfp = [0.095x3/(0.108 + x3 + x3^2/14814.8)][(0.0005 + x5)/(0.022 + x5)]
Cif = 4.0,  Cnf = 100.0,  Y = 0.51    (14)

The objective is to maximize J(u1,u2) by controlling the glucose (u1) and inducer (u2) feed rates

J(u1, u2) = x4(tf) x1(tf) − Pr ∫_{t0}^{tf} u2(t) dt    (15)

where tf = 10 h and Pr is the price ratio of the inducer to the product, for which a value of 5 was chosen. The initial conditions for the system are x(0) = (1, 0.1, 40, 0, 0, 1, 0)T, and the control variables (u1, u2) are restricted to the range [0, 10^−2] L·h−1.

For n = 10, use of the HEA to solve the problem gave a maximum value of 0.8165. The optimal glucose feed rate is shown in Figure 4, and the optimal inducer feed rate is shown in Figure 5. To compare the results with HOpt-AiNet,41 we let n = 12, and the maximum value reached 0.8168. To compare the results with ANNSA,40 we let n = 14, and the maximum value reached 0.8171.

Figure 4. Optimal profile for the glucose feed rate.

Figure 5. Optimal profile for the inducer feed rate.

4.3. Case 3. Case 3 considers a fed-batch fermenter for the biosynthesis of penicillin, with constraints on the state variables. This problem has been studied by Cuthrell and Biegler,42 Mekarapiruk and Luus,43 Luus,44 Luus and Hennessy,45 and Julio et al.46 The system is described by four differential equations

dx1/dt = h1x1 − [u/(500x4)]x1
dx2/dt = h2x1 − 0.01x2 − [u/(500x4)]x2
dx3/dt = −h1x1/0.47 − h2x1/1.2 − 0.029x3x1/(0.0001 + x3) + (u/x4)(1 − x3/500)
dx4/dt = u/500    (16)

where h1 = 0.11x3/(0.006x1 + x3), h2 = 0.0055x3/[0.0001 + x3(1 + 10x3)], and x(0) = (1.5, 0, 0, 7)T. The constraints are 0 ≤ u ≤ 50, 0 ≤ x1 ≤ 40, 0 ≤ x3 ≤ 25, and 0 ≤ x4 ≤ 10. The first three state variables are the concentrations of biomass, product, and substrate (g·L−1); x4 denotes the volume (L); and u is the feed rate (L·h−1). The performance index to be maximized is the total amount of penicillin produced in the batch time tf = 132 h, which is given by

J = x2(tf) x4(tf)    (17)

The maximum distance d(xi,Q) (i = 1, 3, 4) between xi and the feasible domain Q is defined to be the maximum violation of the state path constraints. According to eq 11, the evaluation function is x2(tf) x4(tf)/{[1 + d(x1,Q)]^2 × [1 + d(x3,Q)]^2 × [1 + d(x4,Q)]^2}. For n = 20, the best solution found by the HEA was 87.9913. The optimal control and state profiles are shown in Figure 6. A sketch of how the path-constraint distances are accumulated is given below.
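The sketch below, with hypothetical names, shows one way to track the distances d(xi,Q) of case 3: the maximum violation of each path bound is taken over the sampled trajectory, and the terminal fitness is then discounted per eq 11.

```python
import numpy as np

# Upper path bounds for x1, x3, x4 (lower bounds are all zero)
UPPER = {0: 40.0, 2: 25.0, 3: 10.0}

def path_distances(trajectory):
    """Maximum violation of each state path constraint over the trajectory.

    trajectory: array of shape (n_steps, 4) of states sampled during RK4.
    """
    d = {}
    for idx, ub in UPPER.items():
        over = np.maximum(trajectory[:, idx] - ub, 0.0).max()
        under = np.maximum(-trajectory[:, idx], 0.0).max()
        d[idx] = max(over, under)
    return d

def case3_eval(trajectory, alpha=2):
    """eq 11: terminal J = x2(tf)·x4(tf), discounted by path violations."""
    J = trajectory[-1, 1] * trajectory[-1, 3]
    factor = np.prod([(1.0 + v) ** alpha
                      for v in path_distances(trajectory).values()])
    return J / factor if J >= 0 else J * factor
```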

Figure 6. Optimal control and state profiles for case 3.

5. DISCUSSION
5.1. Summary of Different Methods. All of the results of the different methods are summarized in Table 3. CACA17 and IGA18 were used to solve case 1a. CACA is easy to implement and is capable of solving problems with different levels of complexity. The result of IGA at n = 20 was slightly better than that of CACA, but its computational cost was relatively high compared with that of the HEA. ICSA24 was used to solve case 1b; it was able to determine the optimal control strategy with an acceptable computational cost. GA,39 ANNSA,40 IACA,19 and HOpt-AiNet41 were used to solve case 2. The GA method is easy to understand; however, its computational cost is relatively high, and the results of the other methods are all better than those of GA. The efficiency of ANNSA is independent of the choice of initial guesses for the decision variables, but its computational cost is much greater than those of the other methods. IACA iteratively executes an ACA and has the merits of both IDP and a stochastic algorithm; however, the accuracy of its results requires improvement. HOpt-AiNet is robust, and its computational cost is relatively low; however, this method lacks the capacity to deal with constraints and could not be used to solve case 1b or case 3. For case 3, the value obtained by the HEA is a significant improvement over the 87.69 obtained by the simultaneous method.42 The solutions obtained by IDP, TPH, and the LJ procedure43−46 are reliable, but their high computational costs for large systems have restricted their use to small-scale problems. Compared with these methods, the results obtained by the HEA were relatively good, and the computational cost was relatively low. The results in Table 3 show that the HEA can obtain good results at an acceptable time-interval partition number. Moreover, the HEA can effectively solve dynamic problems without constraints or with different types of constraints (including path and terminal constraints).


Table 3. Summary of Results for the Case Studies
method          optimal solution                                    CE^a
Case 1a
CACA17          0.4793                                              b
IGA18           0.4791 (n = 10), 0.4797 (n = 20)                    10000−40000
HEA             0.48005 (n = 10), 0.48012 (n = 20)                  7500−20000
Case 1b
ICSA24          0.83699 (n = 4)                                     4159
HEA             0.84279 (n = 4), 0.84335 (n = 10)                   5000−12500
Case 2
GA39            0.8149                                              250000
ANNSA40         0.8167 (n = 14)                                     2023201
IACA19          0.8158 (n = 10)                                     120000
HOpt-AiNet41    0.8165 (n = 12)                                     11745
HEA             0.8165 (n = 10), 0.8168 (n = 12), 0.8171 (n = 14)   12500−17500
Case 3
simultaneous42  87.69 (n = 20)                                      b
IDP44           87.948 (n = 20)                                     b
IDP43           87.959 (n = 20)                                     b
LJ procedure45  87.9964 (n = 20)                                    4001−80000
TPH46           87.99                                               b
HEA             87.9913 (n = 20)                                    15000

^a Computational evaluations of the objective function, where one CE indicates that the objective function was calculated once. ^b Not reported.

5.2. Performance of Search Region Reduction in HEA. To verify the performance of search region reduction in the HEA, two different methods (the HEA with and without search region reduction) were used to solve case 3. In this case, the state path constraints were 0 ≤ x1 ≤ 40, 0 ≤ x3 ≤ 25, and 0 ≤ x4 ≤ 10, and the constraint on the control variable was 0 ≤ u ≤ 50, so the initial search regions for the different time points were all [0, 50]. After the search region reduction operation, the search regions of the control variable at the different time nodes were as listed in Table 4.

Table 4. Search Regions of Different Time Nodes after the Reduction
time     region         time      region         time      region
0        [0, 10]        7tf/20    [3.4, 13.4]    14tf/20   [3.4, 13.4]
tf/20    [5.4, 15.4]    8tf/20    [3.7, 13.7]    15tf/20   [3.7, 13.7]
2tf/20   [13.2, 23.2]   9tf/20    [3.6, 13.6]    16tf/20   [3.8, 13.8]
3tf/20   [37.6, 47.6]   10tf/20   [3.5, 13.5]    17tf/20   [3.9, 13.9]
4tf/20   [12.8, 22.8]   11tf/20   [3.8, 13.8]    18tf/20   [3.9, 13.9]
5tf/20   [3.4, 13.4]    12tf/20   [3.9, 13.9]    19tf/20   [3.8, 13.8]
6tf/20   [3.5, 13.5]    13tf/20   [3.6, 13.6]    tf        [3.9, 13.9]

The evolutionary processes of the two methods are shown in Figure 7. At about the 220th generation, the search region reduction operation was executed; the evolutionary process of the HEA without search region reduction was very slow after the 220th generation.

Figure 7. Evolutionary processes of case 3 for different methods.

5.3. Comparison of PSO, GA, and HEA. The same parameter settings were used to solve case 3 using PSO, GA, and the HEA, to test the performance of the HEA. The comparisons of PSO, GA, and HEA are shown in Figure 8. All three methods can obtain acceptable solutions for a certain relative tolerance. However, the HEA performed better than PSO and GA; especially after about the 220th generation, the HEA obtained better solutions than PSO and GA.

Figure 8. Evolutionary processes of case 3 for different methods.

6. APPLICATION TO AN ETHYLENE OXIDE HYDRATION REACTOR
As the simplest glycol, ethylene glycol (EG) is an important raw material with extensive industrial uses. EG is commonly produced by the hydration of ethylene oxide (EO). In the present article, EG production employed the technologies of the Scientific Design (SD) company: EO was synthesized through the reaction of ethylene and pure oxygen at high temperature and high pressure in the presence of a catalyst, and EG was then produced by the hydration of EO at a 22:1 water/EO molar ratio. The EO hydration reactor is a tubular reactor. During the reaction, diethylene glycol (DEG), triethylene glycol (TEG), and polyethylene glycol (PEG) are produced as byproducts.



The reactions of EO with glycols occur according to the following stoichiometry

C2H4O + H2O →[k0] C2H6O2    (18)

C2H4O + C2i−2H4i−2Oi →[ki] C2iH4i+2Oi+1,  i = 2, ..., 6    (19)

The reactor axial length is significantly greater than the radial length. Thus, the assumption can be made that no temperature or concentration gradients exist in the radial direction. Moreover, plug flow is assumed to occur in the reactor with no back-mixing, so axial and radial diffusion can be ignored.

6.1. Kinetic Model. The reactions of EO with water and EG can be expressed as

EO + H2O →[k1] EG
EO + EG →[k2] DEG
EO + DEG →[k3] TEG
EO + TEG →[k4] PEG    (20)

where k1−k4 are the reaction rate constants. The rate expressions can be given by

rH2O = dCH2O/dt = −k1CEOCH2O

rEO = dCEO/dt = −k1CEOCH2O − k2CEOCEG − k3CEOCDEG − k4CEOCTEG

rEG = dCEG/dt = k1CEOCH2O − k2CEOCEG

rDEG = dCDEG/dt = k2CEOCEG − k3CEOCDEG

rTEG = dCTEG/dt = k3CEOCDEG − k4CEOCTEG

rPEG = dCPEG/dt = k4CEOCTEG    (21)

where CX is the concentration of X (X can be EG, EO, and so on) in the reaction mixture. The reaction temperature is the primary factor determining the reaction rate constants. In the present work, the reaction rate constants were expressed as a function of temperature:

ki = α0i + α1iT + α2iT^2 + α3iT^3,  i = 1, 2, 3, 4    (22)

where T is the absolute temperature in Kelvin, ki is in L·mol−1·min−1, and α0i−α3i are correction factors. The partial least-squares regression method was applied to establish the relationship between the reaction rate constant (ki) and the reaction temperature (T) according to the experimental data in the literature.47 The obtained values of α0i−α3i are reported in Table 5; the details of the reaction rate constant regression can be found in the literature.48

Table 5. Values of α0i−α3i
i    α0i             α1i              α2i      α3i
1    3.5654 × 10^6   −1.5114 × 10^4   0.6733   0.0385
2    8.6762 × 10^6   −3.6548 × 10^4   1.4247   0.0925
3    9.7358 × 10^6   −4.0825 × 10^4   1.4298   0.1027
4    1.0795 × 10^7   −4.5102 × 10^4   1.4350   0.1129

A sketch of this rate-constant correlation is given below.
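Assuming the coefficients of Table 5 (units as stated above), eq 22 can be evaluated as follows; the array and function names are hypothetical.

```python
import numpy as np

# Rows i = 1..4 of Table 5: [alpha0, alpha1, alpha2, alpha3]
ALPHA = np.array([
    [3.5654e6, -1.5114e4, 0.6733, 0.0385],
    [8.6762e6, -3.6548e4, 1.4247, 0.0925],
    [9.7358e6, -4.0825e4, 1.4298, 0.1027],
    [1.0795e7, -4.5102e4, 1.4350, 0.1129],
])

def rate_constants(T):
    """eq 22: cubic correlation ki(T); T in K, ki in L·mol−1·min−1."""
    powers = np.array([1.0, T, T**2, T**3])
    return ALPHA @ powers  # array [k1, k2, k3, k4]

print(rate_constants(373.0))  # rate constants at the ~373 K plateau
```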

6.2. Mass Balance Equations. Taking a differential section dl, the mass balance equation for EG in the range from l to l + dl is given by

FCEG − [FCEG + d(FCEG)/dl · dl] = rEGπR^2 dl    (23)

where F is the feed flow, R is the reactor radius, and rEG is the reaction rate of EG. The flow is almost constant. Thus, eq 23 can be simplified to

dCEG/dl = −rEGπR^2/F = −(r1 − r2)πR^2/F    (24)

Similarly, the following expressions were derived

dCEO/dl = −rEOπR^2/F = −(−r1 − r2 − r3 − r4)πR^2/F

dCDEG/dl = −rDEGπR^2/F = −(r2 − r3)πR^2/F

dCH2O/dl = −rH2OπR^2/F = r1πR^2/F

dCTEG/dl = −rTEGπR^2/F = −(r3 − r4)πR^2/F

dCPEG/dl = −rPEGπR^2/F = −r4πR^2/F    (25)

where r1−r4 are the reaction rates of the reactions in eq 20. A sketch of the axial integration is given below.
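A minimal sketch that marches the species balances down the reactor length; it is written in terms of the net formation rates of eq 21 (the sign convention under which product concentrations grow along l) and reuses the rate_constants sketch above. The feed flow, reactor radius, and step count are illustrative assumptions, not values from the paper, and all units must be kept consistent with eq 22.

```python
import numpy as np

def eo_reactor_profile(T_profile, C0, lf=171.0, F=1.0, Rr=0.1, steps=2000):
    """Euler march of the plug-flow balances (eqs 24 and 25) along l.

    T_profile: callable l -> T(l) in K
    C0: [CH2O, CEO, CEG, CDEG, CTEG, CPEG] inlet concentrations
    F (feed flow) and Rr (reactor radius) are illustrative values only.
    """
    A = np.pi * Rr**2
    dl = lf / steps
    C = np.array(C0, dtype=float)
    for i in range(steps):
        k1, k2, k3, k4 = rate_constants(T_profile(i * dl))  # eq 22
        CH2O, CEO, CEG, CDEG, CTEG, _ = C
        r1, r2 = k1 * CEO * CH2O, k2 * CEO * CEG
        r3, r4 = k3 * CEO * CDEG, k4 * CEO * CTEG
        # Net formation rates of eq 21, converted to axial gradients
        dC = np.array([-r1, -(r1 + r2 + r3 + r4), r1 - r2,
                       r2 - r3, r3 - r4, r4]) * A / F
        C = C + dC * dl
    return C

# Usage with the inlet composition of Table 6 and an isothermal profile
C_out = eo_reactor_profile(lambda l: 373.0,
                           [0.9560, 0.0435, 0.0005, 0.0, 0.0, 0.0])
```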

6.3. Objective Function. The optimization problem is

max J(T) = PEOMEOCEO(lf) + PEGMEGCEG(lf) + PDEGMDEGCDEG(lf)

s.t.  350 ≤ T ≤ 450
      CEO(lf) ≤ 0.001    (26)

where P is the price and M is the molecular weight. The price ratios, the values of M, and the initial conditions are reported in Table 6.

Table 6. Parameters and Initial Conditions for the EO Hydration Reactor
parameter/initial condition    value(s)
[Tmin, Tmax]                   [350, 450]
Max_gen                        300
PEO/PEG/PDEG                   11.80:8.74:9.65
MEO/MEG/MDEG                   44.05:62.07:106.12
CH2O                           0.9560
CEO                            0.0435
CEG                            0.0005
CDEG, CTEG, CPEG               0
lf                             171 m

At n = 10, the HEA was used to solve this problem, with the same HEA parameters as shown in Table 1. The optimal concentration distributions of EO and EG are shown in Figure 9a, the optimal concentration distribution of DEG is shown in Figure 9b, and the optimal temperature distribution of the EO hydration reactor is shown in Figure 9c. The optimal objective function value obtained by the HEA was 23.65657. The optimal temperature distribution is characterized by two arcs, namely, a stage at approximately 373 K from the entrance of the reactor to 155 m along its length and a stage of sharply increasing temperature from 155 m to the outlet. At the outlet of the reactor, the EO should be completely converted into EG, DEG, TEG, and PEG, so the optimal solution clearly shows a peak at the end of the reaction. With this temperature distribution, the EO in the reactor reacted with almost all of the water and glycols to produce EG and DEG, thus yielding the maximum economic benefit. A sketch of the objective evaluation of eq 26 is given below.

Figure 9. Optimal solutions of the EO hydration reactor.
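A short sketch of eq 26, combined with the reactor-profile sketch above; the rejection value for the terminal constraint is a simplification (the HEA itself uses the eq 11 evaluation instead), and the names are hypothetical.

```python
P = {"EO": 11.80, "EG": 8.74, "DEG": 9.65}       # price ratios (Table 6)
M = {"EO": 44.05, "EG": 62.07, "DEG": 106.12}    # molecular weights

def case6_objective(C_lf):
    """eq 26: outlet-stream value; C_lf comes from eo_reactor_profile."""
    CEO, CEG, CDEG = C_lf[1], C_lf[2], C_lf[3]
    if CEO > 0.001:            # terminal constraint CEO(lf) <= 0.001
        return -1e9            # simple rejection for this sketch
    return (P["EO"] * M["EO"] * CEO + P["EG"] * M["EG"] * CEG
            + P["DEG"] * M["DEG"] * CDEG)
```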

7. CONCLUSIONS
From the case studies and the plant-wide benchmark discussed herein, the methods proposed in the current article are concluded to have the capability to solve dynamic optimization problems efficiently and accurately. The novel HEA effectively combines the properties of GA and PSO in solving dynamic optimization problems. The concept of search region reduction is used to improve the convergence rate according to the characteristics of dynamic optimization problems, and the CVP method in the HEA increases the accuracy of the results. With the new evaluation function, dynamic optimization problems both with and without constraints can be solved. The proposed HEA was compared with several algorithms on a set of dynamic optimization cases and was shown to solve the problems with satisfactory accuracy and relatively low computational cost. Finally, the HEA was used to solve the temperature distribution problem in an EO hydration reactor and subsequently obtained the optimal temperature distribution of the reactor, which could be used to guide industrial production and process design.

AUTHOR INFORMATION
Corresponding Author
*Tel.: 86-21-64252060 (F.Q.), 86-21-64251250 (W.Z.). Fax: 86-21-64252060 (F.Q.), 86-21-64251250 (W.Z.). E-mail: [email protected] (F.Q.), [email protected] (W.Z.).

Notes
The authors declare no competing financial interest.

ACKNOWLEDGMENTS
This work was supported by the Major State Basic Research Development Program of China (2012CB720500), the National Natural Science Foundation of China (Key Program U1162202), the National Science Fund for Outstanding Young Scholars (61222303), the National Natural Science Foundation of China (21276078, 61174118), the Fundamental Research Funds for the Central Universities, and the Shanghai Leading Academic Discipline Project (B504).

NOMENCLATURE
ci = acceleration constant in PSO
CX = concentration of X
D = search space dimension in PSO
d(x,Q) = distance between x and the feasible domain Q
F = feed flow
f = fitness of individual
g = algebraic inequality constraint
gbesti = best position among all particles
h = algebraic equality constraint
i = a number
J = performance index
k = current generation
ki = reaction rate constant
l = axial position along the reactor
M = molecular weight
Max_gen = maximum number of generations
n = predefined number of time intervals
P = price
Pc = crossover probability in GA
Pm = mutation probability in GA
pbesti = best position of particle i
Popsize = number of individuals in the population
Q = feasible domain
R = reactor radius
r = reduction ratio
ri = reaction rate of reaction i
rX = reaction rate of X
randi = random number between 0 and 1
si = position of particle i in PSO
T = temperature
t = time
tf = final time
t0 = initial time
u = control variable
uib = optimal control variable value obtained in the previous search
vi = velocity of particle i in PSO
w = search width
x = differential state variable
y = algebraic state variable
α = a constant
α0i−α3i = correction factors
Φ = penalty function
φ = differential equality constraint
ϕ = algebraic equation constraint
Ψ = weighting function for the state path and control sequence
ω = inertia weight in PSO

REFERENCES
(1) Srinivasan, B.; Palanki, S.; Bonvin, D. Dynamic optimization of batch processes I: Characterization of the nominal solution. Comput. Chem. Eng. 2003, 27, 1−26.
(2) Irizarry, R. A generalized framework for solving dynamic optimization problems using the artificial chemical process paradigm: Applications to particulate processes and discrete dynamic systems. Chem. Eng. Sci. 2005, 60, 5663−5681.
(3) Jose, A. E.; Eva, B. C.; Maria, G. G.; Julio, R. B. Dynamic optimization of nonlinear processes with an enhanced scatter search method. Ind. Eng. Chem. Res. 2009, 48, 4388−4401.
(4) Bellman, R. Dynamic Programming; Princeton University Press: Princeton, NJ, 1959.
(5) Luus, R. The application of iterative dynamic programming to singular optimal control problems. IEEE Trans. Autom. Control 1992, 37, 1802−1806.
(6) Pontryagin, L. S.; Boltyanskii, V. G.; Gamkrelidze, R. V.; Mishchenko, E. F. The Mathematical Theory of Optimal Processes; Interscience: New York, 1962.
(7) Bryson, A. E.; Ho, Y. C. Applied Optimal Control; Hemisphere Publishing Corp.: Washington, DC, 1975.
(8) Ray, W. H.; Szekely, J. Process Optimization; Wiley: New York, 1973.
(9) Bryson, A. E. Dynamic Optimization; Addison-Wesley: New York, 1999.
(10) Sarkar, D.; Modak, J. M. Optimization of fed-batch bioreactors using genetic algorithms. Chem. Eng. Sci. 2003, 58, 2283−2296.
(11) Sargent, R. W. H.; Sullivan, G. R. The development of an efficient optimal control package. Lect. Notes Control Inf. Sci. 1978, 7, 158−168.
(12) Biegler, L. T.; Cervantes, A. M.; Wachter, A. Advances in simultaneous strategies for dynamic process optimization. Chem. Eng. Sci. 2002, 57, 575−593.
(13) Biegler, L. T. An overview of simultaneous strategies for dynamic optimization. Chem. Eng. Process. 2007, 46, 1043−1053.
(14) Biegler, L. T.; Zavala, V. M. Large-scale nonlinear programming using IPOPT: An integrating framework for enterprise-wide dynamic optimization. Comput. Chem. Eng. 2009, 33, 575−582.
(15) Antonio, F. T.; Biegler, L. T. Simultaneous mixed-integer dynamic optimization for integrated design and control. Comput. Chem. Eng. 2007, 31, 588−600.
(16) Jorge, E. J.; Ines, M. S.; Isidoro, G. Optimization of biotechnological processes. The acetic acid fermentation. Part III: Dynamic optimization. Biochem. Eng. J. 2009, 45, 22−29.
(17) Rajesh, J.; Gupta, K.; Kusumakar, H. S.; Jayaraman, V. K.; Kulkarni, B. D. Dynamic optimization of chemical processes using ant colony framework. Comput. Chem. 2001, 25, 583−595.
(18) Zhang, B.; Chen, D. Z. Iterative genetic algorithm and its application to policies optimization for bioreactor. J. Chem. Ind. Eng. 2005, 56, 100−104.
(19) Zhang, B.; Chen, D. Z.; Zhao, W. X. Iterative ant-colony algorithm and its application to dynamic optimization of chemical process. Comput. Chem. Eng. 2005, 29, 2078−2086.
(20) Tfaili, W.; Siarry, P. A new charged ant colony algorithm for continuous dynamic optimization. Appl. Math. Comput. 2008, 197, 604−613.
(21) Faber, R.; Jockenhovel, T.; Tsatsaronis, G. Dynamic optimization with simulated annealing. Comput. Chem. Eng. 2005, 29, 273−290.
(22) Dadebo, S. A.; McAuley, K. B. Dynamic optimization of constrained chemical engineering problems using dynamic programming. Comput. Chem. Eng. 1995, 19, 513−525.
(23) Zhang, B.; Chen, D. Z.; Wu, X. H. Graded optimization strategy and its application to chemical dynamic optimization with fixed boundary. J. Chem. Ind. Eng. 2005, 56, 1276−1280.
(24) Lin, K. H.; He, Y. J.; Chen, D. Z. Improved clonal selection algorithm for constrained dynamic optimization problems. J. Chem. Eng. Chin. Univ. 2009, 23, 858−863.
(25) Liu, X. G.; Chen, L. Nested constraint-preferred optimization method for fixed boundary optimal control problem. J. Zhejiang Univ. Eng. Sci. 2010, 44, 1247−1250.
(26) Boeringer, D. W.; Werner, D. H. Particle swarm optimization versus genetic algorithms for phased array synthesis. IEEE Trans. Antennas Propag. 2004, 52, 771−779.
(27) Matekovits, L.; Mussetta, M.; Pirinoli, P.; Selleri, S.; Zich, R. E. Particle swarm optimization of microwave microstrip filters. Presented at the Antennas and Propagation Society International Symposium, Monterey, CA, Jun 20−25, 2004.
(28) Gandelli, A.; Grimaccia, F.; Mussetta, M.; Pirinoli, P.; Zich, R. E. Development and validation of different hybridization strategies between GA and PSO. IEEE Congr. Evol. Comput. 2007, 2782−2787.
(29) Kennedy, J.; Eberhart, R. C. Particle swarm optimization. Proc. IEEE Int. Conf. Neural Networks 1995, 1942−1948.
(30) Eberhart, R. C.; Kennedy, J. A new optimizer using particle swarm theory. Proc. Sixth Int. Symp. Micromach. Human Sci. 1995, 39−43.
(31) Shi, Y.; Eberhart, R. C. A modified particle swarm optimizer. In Proceedings of the Conference on Evolutionary Computation; IEEE Press: Piscataway, NJ, 1998; pp 69−73.
(32) Ratnaweera, A.; Halgamuge, S. K.; Watson, H. C. Self-organizing hierarchical particle swarm optimizer with time-varying acceleration coefficients. IEEE Trans. Evol. Comput. 2004, 8, 240−255.
(33) Holland, J. H. Adaptation in Natural and Artificial Systems; University of Michigan Press: Ann Arbor, MI, 1975.
(34) Schlegel, M.; Marquardt, W. Adaptive switching structure detection for the solution of dynamic optimization problems. Ind. Eng. Chem. Res. 2006, 45, 8083−8094.
(35) Radha, T.; Millie, P.; Ajith, A.; Pascal, B. Particle swarm optimization: Hybridization perspectives and experimental illustrations. Appl. Math. Comput. 2011, 217, 5208−5226.
(36) Trelea, I. C. The particle swarm optimization algorithm: Convergence analysis and parameter selection. Inf. Process. Lett. 2003, 85, 317−325.
(37) Ray, W. H. Advanced Process Control; McGraw-Hill: New York, 1981.
(38) Lee, J.; Ramirez, W. F. Optimal fed-batch control of induced foreign protein production by recombinant bacteria. AIChE J. 1994, 40, 899−907.
(39) Roubos, J. A.; Straten, G.; Boxtel, A. J. An evolutionary strategy for fed-batch bioreactor optimization: Concepts and performance. J. Biotechnol. 1999, 67, 173−187.
(40) Sarkar, D.; Modak, J. M. ANNSA: A hybrid artificial neural network/simulated annealing algorithm for optimal control problems. Chem. Eng. Sci. 2003, 58, 3131−3142.
(41) Lin, K. H.; He, Y. J.; Chen, D. Z. Hybrid optimal artificial immune network and its application to dynamic process optimization. J. Zhejiang Univ. Eng. Sci. 2008, 42, 2181−2186.
(42) Cuthrell, J. E.; Biegler, L. T. Simultaneous optimization and solution methods for batch reactor control profiles. Comput. Chem. Eng. 1989, 13, 49−62.
(43) Mekarapiruk, W.; Luus, R. Optimal control of inequality state constrained systems. Ind. Eng. Chem. Res. 1997, 36, 1686−1694.
(44) Luus, R. Piecewise linear continuous optimal control by iterative dynamic programming. Ind. Eng. Chem. Res. 1993, 32, 859−865.
(45) Luus, R.; Hennessy, D. Optimization of fed-batch reactors by the Luus−Jaakola optimization procedure. Ind. Eng. Chem. Res. 1999, 38, 1948−1955.
(46) Julio, R. B.; Eva, B. C.; Carmen, G. M.; Antonio, A. A. Dynamic optimization of bioprocesses: Efficient and robust numerical strategies. J. Biotechnol. 2005, 117, 407−419.
(47) Altiokka, M. R.; Akyalcin, S. Kinetics of the hydration of ethylene oxide in the presence of heterogeneous catalyst. Ind. Eng. Chem. Res. 2009, 48, 10840−10844.
(48) Sun, F.; Luo, N.; Qi, R. B.; Qian, F. Research on industrial modeling of ethylene oxide hydration reactor. Proc. Fourth Int. Symp. Adv. Control Ind. Processes 2011, 456−460.