
Ind. Eng. Chem. Res. 2002, 41, 6678-6686

PROCESS DESIGN AND CONTROL

Ants Foraging Mechanism in the Design of Multiproduct Batch Chemical Process

Wang Chunfeng* and Zhao Xin
Institute of Systems Engineering, Tianjin University, Tianjin 300072, P.R. China

In this paper, a novel evolutionary approach, the ants foraging mechanism (AFM), that effectively overcomes local optima is presented for the solution of the optimal design of multiproduct batch chemical processes. To demonstrate the effectiveness of AFM in solving the proposed problem, four examples adopted from the literature are presented, together with the computation results. Satisfactory results are obtained in comparison with the results of mathematical programming (MP), tabu search (TS), the genetic algorithm (GA), and the simulated annealing (SA) algorithm.

Introduction

Batch processes are widely used in the chemical process industry and are of increasing industrial importance because of the great emphasis on low-volume, high-value-added chemicals and the need for flexibility in a market-driven environment. If two or more products require similar processing steps and are to be produced in low volume, it is economical to use the same set of equipment to manufacture them all. Thus, the multiproduct batch chemical process is proposed, requiring that all of the products follow essentially the same path through the process and use the same equipment, that only one product be manufactured at a time, and that production take place in a series of production runs or campaigns for each product in turn.1 In the optimal design of a multiproduct batch chemical process, the production requirements of each product and the total production time available for all products are specified. The number and size of parallel equipment units in each stage, as well as the location and size of intermediate storage, are determined to minimize the investment costs. The common approach of previous research in solving the batch process design problem is to formulate it as a mixed integer nonlinear programming (MINLP) problem and then employ optimization techniques to solve it. Mathematical programming (MP) and heuristics1-4 are commonly used.
A design problem without scheduling considerations using a minimal capital cost design criterion was formulated by Robinson and Loonkar in 1970.5 In 1979 and 1982, Grossmann and Sargent,6 Knopf et al.,7 and Takamatsu et al.8 improved the MP methods and applied them to the design problem. Moreover, in 1989, Espuña and Puigjaner used an efficient optimization strategy based on gradient calculation, with the advantage of reduced computing time, to solve the problem.9 In 1994, Barbosa-Póvoa and Macchietto constructed a model that includes very general constraints and objective functions and permits both the capital costs of equipment units and pipework and the operating costs and revenues to be taken into account; a branch-and-bound method was adopted to solve this model successfully.10 Because of the NP-hard nature of the design problem of batch chemical processes, an impractically long computational time is induced by the use of MP when the design problem is somewhat complicated, and carefully chosen initial values for the optimization variables are also necessary. Moreover, as the size of the design problem increases, MP becomes impractical. Heuristics requires less computational time and does not demand such initial values for the optimization variables. However, it can end up at a local optimum because of its greedy nature. Also, heuristics is not a general method, because it requires special rules for particular problems. Patel et al.11 and Tricoire and Malone12 used simulated annealing (SA) to solve the design problem of multiproduct batch chemical processes. SA performs effectively and gives a solution within 0.5% of the global optimum. However, SA has the disadvantage of long searching times and, hence, requires more CPU time than heuristics. To accelerate the convergence of SA, Wang et al.13 combined SA with heuristics to solve the design problem of multiproduct batch chemical processes and obtained satisfactory results. Wang et al.14,15 also successfully applied genetic algorithm (GA) and tabu search (TS) approaches to the problem. Lin and Floudas put forward a model that accounts for the tradeoff among capital costs, revenues, and operational flexibility by considering design, synthesis, and scheduling simultaneously; they used a branch-and-bound method to solve the resulting mixed integer linear programming (MILP) and MINLP models and obtained satisfactory results.16 To solve the proposed problem more effectively, the ants foraging mechanism (AFM), a novel evolutionary approach, is presented in this paper, and satisfactory results are obtained.

* To whom correspondence should be addressed. E-mail: [email protected]. Fax: 86-22-27401658.
10.1021/ie010932r CCC: $22.00 © 2002 American Chemical Society. Published on Web 11/16/2002.

The rest of the paper is organized as follows: Section 2 presents a mathematical model for the problem of designing multiproduct batch chemical processes. The basic ideas of AFM are introduced in section 3. The

adaptation of AFM to the proposed optimization problem is described in section 4. To demonstrate the effectiveness of AFM in solving the proposed problem, four problems adopted from the literature, together with the computation results obtained using AFM, are presented in section 5. Comparisons with TS, MP, GA, and SA are given in section 6. Finally, section 7 provides the summary and conclusions.

Mathematical Model of MBCP

The optimal design of a multiproduct batch chemical process (MBCP) can be formulated as a MINLP model. This paper employs Modi's model as modified by Xu et al.4 It includes the following assumptions: (1) The processes operate in overlapping mode. (2) The devices in a given production line cannot be reused by the same product. (3) Long campaigns and single-product campaigns are considered. (4) The types and sizes of parallel items in- or out-of-phase are the same in any one batch stage. (5) All intermediate tanks are finite. (6) The operation between stages satisfies zero-wait or no-intermediate-tank conditions when there is no storage. (7) There is no limitation on the utility. (8) The cleaning time of a batch item can be neglected or included in the processing time. (9) The size of a device can change continuously in its own range.

Assume that (1) there are J batch stages, K semicontinuous stages, and I products to be manufactured; (2) there are moj out-of-phase groups of parallel units in each batch stage j, in which the sizes are all Vj; (3) there are nk parallel units in-phase in each semicontinuous stage k, the operating rates of which are all Rk; and (4) there are S - 1 intermediate tanks that divide the whole process into S subsystems. Also, let

Js = {j | batch stage j belongs to subprocess s}, s = 1, ..., S
Ts = {t | semicontinuous substrain t belongs to subprocess s}, s = 1, ..., S
Ut = {k | semicontinuous stage k belongs to semicontinuous substrain t}, t = 1, ..., T

Then, using the equipment investment as the criterion of optimization, which can be expressed as a power function of the characteristic dimension of each piece of equipment, the following mathematical model can be obtained

$$\min f(V,R) = \sum_{j=1}^{J} m_{oj} m_{pj} a_j V_j^{\alpha_j} + \sum_{k=1}^{K} n_k b_k R_k^{\beta_k} + \sum_{s=1}^{S-1} c_s (V_s^*)^{\gamma_s} \qquad (1)$$

subject to the following constraints:

(1) Dimensional Constraints. Each piece of equipment can be altered within its allowable range

$$V_j^{\min} \le V_j \le V_j^{\max}, \qquad j = 1, ..., J \qquad (2)$$

$$R_k^{\min} \le R_k \le R_k^{\max}, \qquad k = 1, ..., K \qquad (3)$$

(2) Time Constraint. The sum of the production times for all products is not more than the total time available for production

$$H \ge \sum_{i=1}^{I} H_i, \qquad H_i = \frac{Q_i}{P_i} \qquad (4)$$

where the corresponding variables for product i are defined as follows:

(a) productivity for product i

$$P_i = \frac{B_{is}}{T_{Lis}}, \qquad i = 1, ..., I; \; s = 1, ..., S \qquad (5)$$

(b) limiting cycle time for product i in subprocess s

$$T_{Lis} = \max_{j \in J_s,\, t \in T_s} \left[ T_{ij}, \theta_{it} \right], \qquad i = 1, ..., I; \; s = 1, ..., S \qquad (6)$$

(c) cycle time for product i in batch stage j

$$T_{ij} = \frac{\theta_{iu} + P_{ij} + \theta_{i(u+1)}}{m_{oj}}, \qquad i = 1, ..., I; \; j = 1, ..., J \qquad (7)$$

where u and u + 1 index the semicontinuous substrains immediately upstream and downstream of batch stage j

(d) processing time for product i in batch stage j

$$P_{ij} = P_{ij}^{0} + g_{ij} \left( \frac{B_{is}}{m_{pj}} \right)^{d_{ij}}, \qquad i = 1, ..., I; \; j = 1, ..., J; \; j \in J_s \qquad (8)$$

(e) operating time for product i in substrain t

$$\theta_{it} = \max_{k \in U_t} \left[ \frac{B_{is} D_{ik}}{R_k n_k} \right], \qquad i = 1, ..., I; \; t = 1, ..., T; \; t \in T_s \qquad (9)$$

(f) batch size for product i in subprocess s

$$B_{is} = \min_{j \in J_s} \left( \frac{m_{pj} V_j}{S_{ij}} \right), \qquad i = 1, ..., I; \; s = 1, ..., S \qquad (10)$$

(3) Product Quantity Constraints. A given product exhibits the same productivity in all subprocesses.

(4) Storage Size Constraints. The size of each intermediate storage tank is the maximum of what is needed by all products

$$V_s^* = \max_i \left[ P_i S_{is}^* (T_{Lis} - \theta_{iu} + T_{Li(s+1)} - \theta_{i(u+1)}) \right], \qquad s = 1, ..., S - 1 \qquad (11)$$

Using this mathematical model to optimize a design for a given product demand, the sizes and numbers of each type of equipment must be calculated so as to minimize the equipment investment.
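As an illustration of how a candidate design is scored against this model, the sketch below evaluates the investment objective of eq 1 and checks constraints 2-4. All function and variable names are illustrative, not from the paper, and the per-product productivities P_i are taken as given rather than derived through eqs 5-10.

```python
def investment_cost(mo, mp, a, alpha, V, n, b, beta, R, c, gamma, Vstar):
    """Eq 1: cost of batch units + semicontinuous units + intermediate storage."""
    batch = sum(mo[j] * mp[j] * a[j] * V[j] ** alpha[j] for j in range(len(V)))
    semi = sum(n[k] * b[k] * R[k] ** beta[k] for k in range(len(R)))
    storage = sum(c[s] * Vstar[s] ** gamma[s] for s in range(len(Vstar)))
    return batch + semi + storage

def feasible(V, Vmin, Vmax, R, Rmin, Rmax, Q, P, H):
    """Constraints 2-4: equipment size bounds plus the horizon limit
    sum_i Q_i / P_i <= H."""
    sizes_ok = all(Vmin[j] <= V[j] <= Vmax[j] for j in range(len(V)))
    rates_ok = all(Rmin[k] <= R[k] <= Rmax[k] for k in range(len(R)))
    time_ok = sum(q / p for q, p in zip(Q, P)) <= H
    return sizes_ok and rates_ok and time_ok
```

Any search method for this problem, AFM included, only needs these two evaluations per candidate: a cost to minimize and a yes/no feasibility check.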

Ants Foraging Mechanism

In this section, the ants foraging mechanism, a new evolutionary method, is proposed, based on the mechanism of army ant foraging and on ACO (ant colony optimization),17 to solve optimization problems in continuous space.

ACO. ACO, which simulates the foraging behavior of ants, was first proposed by Dorigo and colleagues as a multiagent approach for solving difficult combinatorial optimization problems such as the traveling salesman problem (TSP) and the quadratic assignment problem (QAP).18 Ants are social insects that live in colonies whose behavior is directed more toward the survival of the colony as a whole than toward that of a single individual of


the colony. An important and interesting behavior of ant colonies is their foraging behavior. While walking from food sources to the nest and vice versa, ants deposit a substance called pheromone on the ground, forming a pheromone trail. Ants can smell pheromone and, when choosing their way, tend to choose, with high probability, paths marked by strong pheromone concentrations (shorter paths). Other ants can also use pheromone to find the locations of food sources discovered by their nestmates. In effect, ACO simulates the optimization inherent in ant foraging behavior. Although it can be applied to discrete combinatorial optimization problems, ACO is not easily applied to continuous optimization problems, because the shortest path it finds is restricted to arcs linking discrete sites.

Basic Idea of AFM. To solve continuous problems, we put forward the ants foraging mechanism (AFM), which combines army ants' foraging behavior with ACO. A neighborhood-seeking mechanism, different from the tabu mechanism in ACO, is introduced in AFM to deal with continuous optimization. In addition, the foraging behavior of army ants, which is stronger than that of the common ants simulated in ACO, is simulated in our AFM. As a special race of ants, thousands of army ants will leave their nest to form large columns and swarms in a matter of hours to find food for the colony, and their raids can sweep out an area of 1000 m² in a single day. In fact, AFM, which simulates the strong foraging ability of army ants, is basically a kind of multiagent neighborhood-search method. From the initial solutions, it finds the best solution in the neighborhood of the given solutions. Then, taking the new solutions as initial solutions, AFM repeats this step as long as necessary. The core of AFM consists of an aspiration criterion, a movement probability, a transition probability, and diversification.

Aspiration Criterion.
It has been found that the pheromone trail ants leave, which can be observed by other ants, motivates the colony to follow the path; i.e., a randomly moving ant will follow the pheromone trail with high probability. This is how the trail is reinforced so that more and more ants follow it, and why a colony of ants can find large food resources where a single ant would probably fail, as ants are almost blind. To mimic this behavior, the aspiration criterion, which relates the quantity of trail to the optimum direction, is introduced into AFM to reduce random seeking and improve the optimization efficiency of AFM. According to this aspiration criterion, optimizing agents select the better direction with high probability, and the quantity of pheromone in this direction then increases. As a result, the searching process is self-reinforcing, which makes it possible to find the best solution of the optimization problem.

Movement Probability. From the kth ant's current location S1, we can generate three neighbor sites, S11, S12, and S13 (the use of three neighbor sites is just for convenience of description), that represent new feasible solutions. The movement probability is used to determine whether the ants should move or not. Because the quantity of pheromone on a neighbor site is related to the value of its objective function (i.e., the greater the value of the objective function, the more pheromone is left on the site), Richard et al. defined the movement probability of the kth ant as19

$$P_m = \frac{1}{2}\left[1 + \tanh\left(\frac{\varphi(S_{11}) + \varphi(S_{12}) + \varphi(S_{13})}{\varphi^*} - 1\right)\right] \qquad (12)$$

where $\varphi(S_{11}), \varphi(S_{12}), \varphi(S_{13}) \ge 0$ are the trail concentrations at lattice sites S11, S12, and S13, respectively. The parameter $\varphi^*$ represents the concentration of the pheromone trail for which the probability of moving is 0.5 per step.

Transition Probability. If movement is permitted, the transition probability must be calculated to determine which site should be selected as the next lattice site. The transition probability is used to select the move direction (optimum direction), to ensure the diversity of the solutions, and to accelerate the convergence of the algorithm. Because a path with higher pheromone content should be given a higher probability, the transition probability $P_{kj}$ can be formulated as

$$P_{kj} = \frac{[\tau_{kj}]^{\alpha}[\eta_{kj}]^{\beta}}{\sum_{l=1}^{3}[\tau_{kl}]^{\alpha}[\eta_{kl}]^{\beta}}, \qquad j = 1, 2, 3 \qquad (13)$$

where $\eta_{kj} = 1/d_{kj}$, j = 1, 2, 3, is called the visibility; $\tau_{kl} = Q_k/d_{kl}$, l = 1, 2, 3, represents the density of the trail on the three arcs; $Q_k$ is the total quantity of pheromone that the kth ant holds; $d_{kj} = 1/\Delta f_{kj}$ denotes the distance between the site $S_k$ and its neighbor $S_{kj}$; $\Delta f_{kj} = f_k - f_{kj}$ is the difference between the objective function values of the current solution and the new solution; and $\alpha$ and $\beta$ are parameters that control the relative importance of the trail versus the visibility.20-22

Diversification. An intelligent search technique should not only thoroughly explore a region that contains good solutions but should also keep a general view of the solution space and try to make sure that no distant region has been entirely neglected.23 To realize such diversification, AFM repeats the entire search procedure with a collection of randomly generated initial solutions. By controlling the size of the initial solution group, probabilistic arguments can be applied to establish convergence properties toward a global optimum.

Outline of AFM. We take a nonlinear programming (NLP) problem as an example to illustrate the details of the implementation of AFM. The problem is to minimize f(x)

$$\min f(x), \qquad x = (x_1, x_2, x_3) \qquad (14)$$

subject to the nonlinear constraints

$$1 \le x_1^2 + x_2^2 + x_3^2 \le 4 \qquad (15)$$

and bounds $x_1, x_2, x_3 > 0$.

The process of optimization for this example using AFM can be divided into three steps: initialization, iteration, and termination.

(1) Initialization. In this step, the initial solution group is generated according to the work of Wang et al.13,14 The size of the solution group is determined


by the user in terms of the complexity of the problem. Here, the size of the solution group is 10; i.e., 10 initialized solutions are generated in this step. To illustrate the next steps, one solution is given as (x1, x2, x3) = (0.81, 0.60, 1.29), with objective function value 0.1508. (2) Iteration. After the initialization process, the iteration process (i.e., the optimization process) begins. First, three neighboring feasible solutions are generated randomly for each initial solution in the group, and the movement probabilities of these new solutions are computed to determine whether these neighbor solutions can become candidate solutions. If the neighbors of a solution cannot be accepted, they are replaced by new ones. For (x1, x2, x3) = (0.81, 0.60, 1.29), the three neighbor solutions are (0.78, 0.66, 1.31), (0.82, 0.59, 1.47), and (0.84, 0.53, 1.33), and the movement probability of these neighbor solutions is 0.77 according to eq 12. Then, a random number, uniformly distributed on [0, 1], is generated. If this random number is smaller than the movement probability, the neighbor solutions become candidate solutions. For this solution, the random number is 0.6724, which is smaller than 0.77, so the neighbors are accepted. Next, for the accepted feasible neighbor solutions, the transition probabilities are calculated. For each current solution, a new solution is selected from its three candidate solutions according to the transition probabilities. If this new solution is better than the current one, the current one is replaced; otherwise, the current solution is retained. For (x1, x2, x3) = (0.81, 0.60, 1.29), the transition probabilities of (0.78, 0.66, 1.31), (0.82, 0.59, 1.47), and (0.84, 0.53, 1.33) are 0.72, 0.68, and 0.88, respectively, so the last candidate solution is selected. The objective function value of this candidate solution is 0.1535, which is better than that of the current solution. Thus, the current solution is replaced by this candidate solution.
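The iteration just described can be sketched for a single agent as follows. This is a simplified reading rather than the paper's code: pheromone is identified directly with the objective improvement Δf (since d_kj = 1/Δf_kj, both τ and η reduce to powers of Δf here), the per-ant pheromone budget Q_k is dropped, and `phi_star`, `alpha`, and `beta` are free parameters I have introduced for illustration.

```python
import math
import random

def movement_probability(phi, phi_star):
    """Eq 12: chance that the ant moves at all, from the trail
    concentrations phi at its three neighbor sites."""
    return 0.5 * (1.0 + math.tanh(sum(phi) / phi_star - 1.0))

def transition_probabilities(tau, eta, alpha=1.0, beta=1.0):
    """Eq 13: normalized preference for each neighbor site, weighting
    trail density tau against visibility eta."""
    scores = [(t ** alpha) * (e ** beta) for t, e in zip(tau, eta)]
    total = sum(scores)
    return [s / total for s in scores]

def afm_step(x, f, neighbors, phi_star, rng=random.random):
    """One cycle for one agent: movement test (eq 12), neighbor
    selection (eq 13), then greedy acceptance (minimization)."""
    # Pheromone ~ improvement Delta f = f(x) - f(neighbor); clamp at a
    # tiny positive value so worse neighbors carry almost no trail.
    phi = [max(f(x) - f(nb), 1e-12) for nb in neighbors]
    if rng() >= movement_probability(phi, phi_star):
        return x  # the ant stays put this cycle
    probs = transition_probabilities(phi, phi)
    best = neighbors[probs.index(max(probs))]
    return best if f(best) < f(x) else x
```

For instance, minimizing f(x) = x² from x = 2 with neighbors [1.5, 2.5, 1.0], the third neighbor carries the strongest trail and is accepted.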
When all current solutions have been updated, the best solution of the current cycle is stored, and another cycle begins. (3) Termination. The iteration process is repeated until the termination condition is reached. When the algorithm converges on one solution, the algorithm is terminated, and this solution is considered a satisfactory solution of the problem. (Alternatively, if the maximum number of generations is reached, the satisfactory solution is selected from the stored solutions.) For this example, the satisfactory solution is (x1, x2, x3) = (0.8450, 0.5120, 1.3201), with objective function value 0.1537.

Implementation

From the previous discussion, we know that the AFM method is, in fact, an intelligent search procedure that, in some sense, imitates army ant foraging behavior and applies rules based on artificial intelligence principles. In this section, we apply AFM to the optimal design of multiproduct batch chemical processes. Figure 1 shows the flowchart of the algorithm.

Neighborhood Structure. The neighborhood structure plays an important role in the implementation of AFM, influencing both the solution quality and the computing speed. A large neighborhood generally provides high-quality solutions but can result in longer CPU times. A small neighborhood can accelerate the convergence of the searching process, but it might reduce the quality of the optimization results; i.e., the algorithm might become trapped in a local minimum. A tradeoff between the computing speed and the solution quality

Figure 1. AFM implementation.

must be made. For this reason, the concept of natural neighborhood size is introduced in this paper. We vary the neighborhood size over two different ranges (small and large) as the search process proceeds. At the beginning, a smaller neighborhood size is preferred for rapid self-reinforcement of the pheromone, and later, a larger neighborhood size is preferred for a thorough search along the optimum directions identified from the distribution of pheromone trails at the end of the small-neighborhood search. Given the seeking ability of the ants, this mechanism is appropriate in AFM, and the computed examples also support this view.

Aspiration Criterion. If a search direction is accepted in the current step, it will be accepted again with higher probability in the next step. In this way, the self-reinforcement of pheromone occurs and improves the searching efficiency. The best search directions and solutions are associated with a high density of pheromone, which makes better solutions easier to accept. As a result, the good regions of the solution space, which are more attractive to the optimizing agents, are searched more thoroughly than other regions.

Step Size of Continuous Variables. For an MINLP problem, the step size of the continuous variables becomes an issue when a neighborhood-search method is adopted. The step size can be neither too large nor too


small. A large step size results in a local optimum, whereas a small step size requires a longer searching time. Two simple but effective dynamic methods are designed here to vary the step size of the continuous variables. (1) We simply let x = x[1 + (±R%)^(i-1)] (where x represents an optimization variable and i represents the number of times neighborhood solutions have been generated in a given iteration step). (2) The other method is to let x = x + (±R%)^(i-1)x′ (where x and x′ both represent optimization variables and i is as above). The larger R is, the larger the initial step size, and vice versa. The step size thus changes both with the value of the optimization variables and with i, so the step size of a continuous variable varies adaptively during the search procedure. (1) At the beginning, when the optimization variables are larger, a larger step size is adopted to establish the search procedure; at the end, when the optimization variables are smaller, a smaller step size is adopted to improve the precision of the search. (2) As i increases, the step size decreases, which leads to a more thorough search in each additional iteration step. (3) We initially use the first method to generate the neighborhood solutions because it helps to seek thoroughly among the neighborhood solutions of the current solution. Sometimes, good neighborhood solutions cannot be found; in such a case, the second method, which increases the diversification of the solutions, is used. Because the unit size of a batch stage exerts a much greater influence on the objective function than the semicontinuous variables, we adopted two different R values for these two kinds of variables: a smaller one for batch stages and a larger one for semicontinuous stages.

Termination Criterion.
Two termination criteria can be used by AFM to control the termination of the algorithm. One uses the single-solution ability of the ant colony: if all optimizing agents obtain the same solution, the algorithm is stopped. The other is the classical maximum-generation criterion: when the cycle number of the algorithm reaches a previously established maximum generation number, the algorithm is terminated. The first termination criterion can find global solutions, but it is suited only to simple problems, because its computation time for complex problems is too long. Although the second criterion can decrease the quality of the optimization solution, it cuts the computation time greatly. Thus, for complex problems, the second criterion is adopted.

Examples and Analysis

Examples were computed to demonstrate the effectiveness of the algorithm described in the preceding sections. Four examples, adopted from Wang et al.,15 are presented here. The examples selected are convenient for comparison with existing methods and for demonstrating the efficiency of AFM. Because the four examples present different levels of complexity, they help demonstrate the robustness of AFM with respect to problem complexity. The data for examples 1-4 are presented in Tables 1-4, and the results in Tables 5-8, respectively. The data for GA, which used a multiparameter crossed binary coding mechanism, come from Wang et al.14 The data for MP and TS are from Wang et al.,15 where TS represents the results of the standard tabu search.
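The two dynamic step-size rules given under "Step Size of Continuous Variables" can be sketched as below. The extraction's notation (±R%)^(i-1) is ambiguous, so this sketch reads it literally, with `R_pct` a fraction (e.g., 0.05 for R = 5) and `sign` chosen as ±1 per move; the function names and this literal reading are assumptions, not the paper's formulation.

```python
def step_rule_1(x, R_pct, i, sign=1):
    """Rule 1: x <- x[1 + (+/-R%)^(i-1)]; the perturbation shrinks as
    the neighbor-generation counter i grows within an iteration step."""
    return x * (1.0 + (sign * R_pct) ** (i - 1))

def step_rule_2(x, x_other, R_pct, i, sign=1):
    """Rule 2: x <- x + (+/-R%)^(i-1) x'; mixes in another variable x'
    to diversify the neighborhood when rule 1 finds no good neighbors."""
    return x + (sign * R_pct) ** (i - 1) * x_other
```

Following the paper's remark, a smaller `R_pct` would be used for batch-stage sizes and a larger one for the semicontinuous rates.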

Table 1. Data for Example 1a,b

                  SC1    B1     SC2    SC3    B2     SC4    T      SC5    SC6    B3
a, b, or c        370    592    250    210    582    250    334    250    200    1200
α, β, or γ        0.22   0.65   0.40   0.62   0.39   0.40   0.59   0.40   0.83   0.52
I = 1, S or D     1.2    1.2    1.2    1.2    1.5    1.2    1.1    1.4    1.4    1.1
       p0         -      35     -      -      1      -      -      -      -      4
       g          -      0.0    -      -      0.0    -      -      -      -      0.0
I = 2, S or D     1.5    1.4    1.5    1.5    1.2    1.5    1.1    1.5    1.5    1.2
       p0         -      40     -      -      1      -      -      -      -      8
       g          -      0.0    -      -      0.0    -      -      -      -      0.0
I = 3, S or D     1.1    1.0    1.1    1.1    1.0    1.1    1.1    1.2    1.2    1.0
       p0         -      30     -      -      2      -      -      -      -      4
       g          -      0.0    -      -      0.0    -      -      -      -      0.0

a H = 8000 h, J = 3, I = 3, Q = [100 000, 100 000, 50 000], 800 ≤ Vj ≤ 2400, 300 ≤ Rk ≤ 1800. b SC indicates semicontinuous stage, B indicates batch stage, and T indicates intermediate storage.

Table 2. Data for Example 2a,b

                  SC1    B1     SC2    SC3    T      SC4    B2     SC5    SC6    B3
a, b, or c        370    592    250    210    278    250    582    250    200    1200
α, β, or γ        0.22   0.65   0.40   0.62   0.49   0.40   0.39   0.40   0.83   0.52
I = 1, S or D     1.2    1.2    1.2    1.2    1.1    1.2    1.5    1.4    1.4    1.1
       p0         -      35     -      -      -      -      1      -      -      4
       g          -      0.0    -      -      -      -      0.0    -      -      0.0
I = 2, S or D     1.5    1.4    1.5    1.5    1.1    1.5    1.2    1.5    1.5    1.2
       p0         -      40     -      -      -      -      1      -      -      8
       g          -      0.0    -      -      -      -      0.0    -      -      0.0
I = 3, S or D     1.1    1.0    1.1    1.1    1.1    1.1    1.0    1.2    1.2    1.0
       p0         -      30     -      -      -      -      2      -      -      4
       g          -      0.0    -      -      -      -      0.0    -      -      0.0

a H = 8000 h, J = 3, I = 3, Q = [100 000, 100 000, 50 000], 800 ≤ Vj ≤ 2400, 300 ≤ Rk ≤ 1800. b SC indicates semicontinuous stage, B indicates batch stage, and T indicates intermediate storage.

Results and Comparison Analysis. From the results presented in Tables 5-8, we can see that, in example 1, AFM obtained better results than MP, TS, GA, and SA. In example 2, AFM found nearly the same result as MP but exhibited much faster convergence. Moreover, the result of example 2 using AFM is better than those obtained using TS, GA, and SA. We can also see that AFM yielded nearly the same results as TS and GA but a better result than MP and SA in the somewhat complicated example 3. For example 4, Patel et al.11 pointed out that this problem cannot be solved using any existing method other than SA because of the presence of intermediate storage, nonidentical units, and mixed modes of operation. However, AFM handles it successfully and does so in a computing time that is less than those of SA, GA, and TS. From the results of these examples, we found that AFM consistently obtained better results than GA and SA, indicating that AFM adopted a satisfactory criterion for optimization in this design problem. As is evident from the computation results of examples 1-4, AFM has advantages over MP, GA, and SA in terms of both solution quality and computational time. As an AI (artificial intelligence) method, AFM can efficiently escape from local optima to find more satisfactory solutions. The main drawback of MP, in fact, is that it very easily becomes trapped in a local optimum, and from the examples, we can see that AFM obtains better solutions than MP. AFM exploits certain forms of flexible memory (history information) to control the search process; i.e., AFM emphasizes scouting successive neighborhoods to identify moves of high quality through the pheromone aspiration criterion and the natural neighborhood size

Table 3. Data for Example 3a,b

                  SC1    B1     SC2    T      SC3    B2     SC4    B3     SC5    B4     SC6
a, b, or c        370    250    370    278    370    250    370    250    370    250    370
α, β, or γ        0.22   0.60   0.22   0.49   0.22   0.60   0.22   0.60   0.22   0.60   0.22
I = 1, S or D     1.0    8.28   1.0    1.0    1.0    6.57   1.0    2.95   1.0    9.7    1.0
       p0         -      1.15   -      -      -      1.20   -      5.28   -      9.86   -
       g          -      0.20   -      -      -      0.50   -      0.40   -      0.24   -
       d          -      0.40   -      -      -      0.20   -      0.30   -      0.33   -
I = 2, S or D     1.0    5.58   1.0    1.0    1.0    6.17   1.0    3.27   1.0    8.09   1.0
       p0         -      5.95   -      -      -      1.08   -      7.00   -      7.01   -
       g          -      0.15   -      -      -      0.42   -      0.70   -      0.35   -
       d          -      0.40   -      -      -      0.20   -      0.30   -      0.33   -
I = 3, S or D     1.0    2.34   1.0    1.0    1.0    5.98   1.0    5.70   1.0    10.3   1.0
       p0         -      3.96   -      -      -      0.66   -      5.13   -      6.01   -
       g          -      0.34   -      -      -      0.30   -      0.85   -      0.50   -
       d          -      0.40   -      -      -      0.20   -      0.30   -      0.33   -

a H = 6000 h, J = 4, I = 3, Q = [437 000, 324 000, 258 000], 250 ≤ Vj ≤ 10 000, 300 ≤ Rk ≤ 10 000. b SC indicates semicontinuous stage, B indicates batch stage, and T indicates intermediate storage.

Table 4. Data for Example 4a,b

             SC1    B1           SC2    SC3    B2           SC4    T      SC5    SC6    B3           SC7
a, b, or c   370    592          250    210    582          250    200    250    200    1200         600
α, β, or γ   0.22   0.65         0.40   0.62   0.39         0.40   0.39   0.40   0.85   0.52         0.40
g            -      0            -      -      0            -      -      -      -      0            -
             D      S      p0    D      D      S      p0    D      S      D      D      S      p0    D
I = 1        1.2    1.2    3.0   1.2    1.2    1.4    1.0   1.4    1.0    1.4    1.4    1.0    4.0   1.0
I = 2        1.5    1.5    6.0   1.5    1.5    0.0    0.0   0.0    1.0    1.5    1.5    1.0    8.0   1.0
I = 3        1.1    1.1    2.0   1.1    1.1    1.2    2.0   1.2    1.0    1.2    1.2    1.0    4.0   1.0
I = 4        1.5    1.5    2.0   1.5    1.5    1.8    1.5   1.8    1.0    1.8    1.8    1.0    3.0   1.0
I = 5        1.3    1.3    1.0   1.3    1.3    3.0    2.0   3.0    1.0    3.0    3.0    1.0    2.5   1.0
I = 6        1.4    1.4    2.0   1.4    1.4    2.1    2.5   2.1    1.0    2.1    2.1    1.0    5.0   1.0
I = 7        1.2    1.2    1.0   1.2    1.2    5.2    0.5   5.2    1.0    5.2    5.2    1.0    7.0   1.0
I = 8        1.1    1.1    4.0   1.1    1.1    2.1    3.5   2.1    1.0    2.1    2.1    1.0    3.0   1.0
I = 9        1.3    1.3    2.0   1.3    1.3    1.1    3.0   1.1    1.0    1.1    1.1    1.0    2.0   1.0
I = 10       1.4    1.4    2.5   1.4    1.4    1.5    2.5   1.5    1.0    1.5    1.5    1.0    4.0   1.0
I = 11       1.5    1.5    3.0   1.5    1.5    1.7    2.0   1.7    1.0    1.7    1.7    1.0    4.0   1.0
I = 12       1.2    1.2    3.5   1.2    1.2    1.9    4.5   1.9    1.0    1.9    1.9    1.0    6.5   1.0
I = 13       1.5    1.5    5.0   1.5    1.5    3.7    7.0   3.7    1.0    3.7    3.7    1.0    9.0   1.0
I = 14       1.8    1.8    4.5   1.8    1.8    2.2    3.0   2.2    1.0    2.2    2.2    1.0    4.0   1.0
I = 15       1.5    1.5    3.0   1.5    1.5    2.7    2.0   2.7    1.0    2.7    2.8    1.0    6.0   1.0

a Q (×1000) = [40, 30, 10, 35, 33, 27, 25, 22, 20, 19, 15, 12, 9, 7, 5], H = 8000 h, I = 15, 300 ≤ Vj ≤ 2400, 300 ≤ Rk ≤ 2400. b SC indicates semicontinuous stage, B indicates batch stage, and T indicates intermediate storage.

described in this paper. Also, AFM seems to be more robust than SA with respect to variations in the initial solution. When SA is employed, a "good" choice of the control parameters (temperature, annealing schedule, etc.) greatly influences the solution quality, as pointed out by Patel et al.11 and Wang et al.13 It was also found that AFM performs better than GA in terms of simplicity: GA is sensitive to the coding design, which is both the key to and the major difficulty of GA implementation, whereas AFM simply uses the objective function value to construct the solution. Moreover, the acceptance criterion of AFM is superior to those of TS, SA, and GA. AFM first uses movement probabilities to determine whether acceptable neighbor solutions of the current solution exist. If acceptable neighbor solutions exist, then the optimization direction is determined by the transition probabilities. This type of acceptance criterion not only allows AFM to reduce the computation time significantly but also can greatly improve the quality of the solutions. In addition, AFM employs intelligent strategies, i.e., the natural neighborhood size and the character of ant foraging behavior, to accelerate its convergence.

Computational Experience. In this section, some important aspects of our implementation of AFM and some problems arising in practice are discussed.

(a) Initial Solution. In these four examples, we start the search procedure from the largest possible values of all optimization variables; that is, we adopted the


Table 5. Results of Example 1

                      AFM         TS (standard TS)15   GA14        MP15
objective function    188 257.1   191 267.3            189 294.8   189 015.7

        moj  mpj  Vj       moj  mpj  Vj       moj  mpj  Vj       moj  mpj  Vj
j = 1   1    1    1557.8   1    1    1620.1   1    1    1575.5   1    1    1631.1
j = 2   1    1    2028.2   1    1    2010.3   1    1    2061.5   1    1    2039.2
j = 3   1    1    800.0    1    1    800.0    1    1    800.0    1    1    800.0

        nk   Rk            nk   Rk            nk   Rk            nk   Rk
k = 1   1    1800.0        1    1800.0        1    1800.0        1    1800.0
k = 2   1    435.7         1    460.5         1    455.4         1    435.4
k = 3   1    435.7         1    460.5         1    455.4         1    435.4
k = 4   1    300.0         1    300.0         1    300.0         1    300.0
k = 5   1    300.0         1    300.0         1    300.0         1    300.0
k = 6   1    300.0         1    300.0         1    300.0         1    300.0

V′s               1629.5            1720.1            1644.5            1751.1
CPU time (s)a     20.8              35.5              75.5              166.4

a On an Intel PS 400 586 computer.

Table 6. Results of Example 2

                       AFM         TS (standard TS)15   GA14        SA15
objective functiona    170 373.6   170 604.3            170 553.1   170 539.8

       moj  mpj  Vj       moj  mpj  Vj       moj  mpj  Vj       moj  mpj  Vj
j=1    1    1    1662.8   1    1    1665.1   1    1    1661.5   1    1    1677.5
j=2    1    1    800.0    1    1    800.0    1    1    800.0    1    1    800.0
j=3    1    1    800.0    1    1    800.0    1    1    800.0    1    1    800.0

       nk   Rk            nk   Rk            nk   Rk            nk   Rk
k=1    1    1800.0        1    800.0         1    1800.0        1    1800.0
k=2    1    306.1         1    325.1         1    316.4         1    300.1
k=3    1    306.1         1    325.1         1    316.4         1    300.1
k=4    1    300.0         1    300.0         1    300.1         1    300.1
k=5    1    300.0         1    300.0         1    300.1         1    300.1
k=6    1    300.0         1    300.0         1    300.1         1    300.1

V′s                   1722.7      1733                 1728.3      1742.2
CPU time (s)b         72          98                   209         158

a MP objective function = 170 357.0. b On an Intel PS 400 586 computer.

Table 7. Results of Example 3

                       AFM        TS (standard TS)15   GA14      SA11
objective functiona    36 885.8   368 131.4            362 130   368 88.3

       moj  mpj  Vj       moj  mpj  Vj       moj  mpj  Vj      moj  mpj  Vj
j=1    2    1    4291.2   1    1    7301.3   1    1    6907    2    1    4290.0
j=2    2    1    9923.2   2    1    9926.3   2    1    9918    2    1    9930.0
j=3    2    1    5533.7   2    1    5800.1   2    1    5724    2    1    5534.0
j=4    1    1    7633.0   1    1    9905.2   1    1    9466    1    1    7627.0

       nk   Rk            nk   Rk            nk   Rk           nk   Rk
k=1    1    9250.0        1    9006.2        2    7717         1    9252.0
k=2    1    10 000        1    4843.2        1    2189         1    10 000.0
k=3    1    9671.1        1    7101.4        1    6637         1    9675.0
k=4    1    10 000        1    9104.6        2    9466         1    10 000.0
k=5    1    8994.0        1    9860.6        1    9926         1    9000.0
k=6    1    380.1         1    3805.1        1    5212         1    390.0

V′sb                  1996.2     3103.6               2946      1997.0

a MP objective function = 369 728.11. b CPU time for AFM on an Intel PS 400 586 computer is 86 s.

largest possible value of each optimization variable as the initial solution. By investigating the influence of the initial solution on the search procedure, we found that different initial solutions have no obvious influence on the optimization results, but a better initial solution speeds convergence greatly. For these examples, a "good" initial solution results in at least a 20% reduction in computing time. This means that, although AFM has no special need for a particular initial solution, a better initial solution, as opposed to a random start, is beneficial to the eventual success of an AFM implementation (particularly for large problems, such as example 4 in this paper). A typical way of obtaining an appropriate initial solution is through the use of a heuristic procedure, such as that designed by Wang et al.13,14

(b) Natural Neighborhood. We also found that, of all of the strategies explained in section 4, the strategy of natural neighborhood provides the greatest contribu-

Table 8. Results of Example 4

                       AFM         TS (standard TS)15   GA14      SA11
objective functiona    414 563.8   433 329.7            425 676   450 983.0

       moj  mpj  Vj       moj  mpj  Vj       moj  mpj  Vj     moj  mpj  Vj
j=1    2    1    1082.8   2    1    1090.3   2    1    1148   2    1    1590, 1780
j=2    2    1    2397.2   2    1    2398.1   2    2    1585   2    2    2400, 896, 1934, 756
j=3    2    1    1580.5   2    1    1605.2   2    1    1644   2    1    1897, 1871

       nk   Rk            nk   Rk            nk   Rk          nk   Rk
k=1    2    2000.4        2    2010.2        2    1805        2    2050, 1645
k=2    1    2102.3        1    2303          1    2061        1    1512
k=3    1    1864.8        1    2100.2        2    2268        1    1512
k=4    2    2287.5        2    2313.1        2    2362        2    1564, 559
k=5    1    1894.2        1    2080.5        1    2266        2    918, 300
k=6    1    1432.2        1    1635.1        1    1248        1    1185
k=7    1    1356.9        1    1930.6        1    1735        1    2046

V′s                   3022.4      4321.2               2172      5131.0
CPU time (min)b       3           5.6                  4         102

a MP objective function = 369 728.11. b On a Sun Sparc workstation.
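The two-stage acceptance criterion discussed earlier (movement probabilities to decide whether an acceptable neighbor exists and whether to move, then transition probabilities to pick the optimization direction) can be sketched as follows. This is a minimal illustration, not the authors' code: the function name, the improving-neighbor notion of "acceptable", and the 1/(1 + cost) weight form are our assumptions.

```python
import random

def afm_step(current, neighbors, cost, p_move=0.9, rng=random):
    """One AFM-style acceptance step: a movement probability first decides
    whether to move at all (no move when no acceptable, i.e. improving,
    neighbor exists); transition probabilities then choose the direction,
    favoring neighbors with lower cost."""
    improving = [n for n in neighbors if cost(n) < cost(current)]
    # Stage 1: movement probability -- stay put when nothing acceptable
    # exists, or when the move itself is rejected.
    if not improving or rng.random() > p_move:
        return current
    # Stage 2: transition probabilities -- roulette-wheel selection with
    # weights favoring cheaper neighbors (the 1/(1 + cost) form is assumed).
    weights = [1.0 / (1.0 + cost(n)) for n in improving]
    pick, acc = rng.random() * sum(weights), 0.0
    for n, w in zip(improving, weights):
        acc += w
        if pick <= acc:
            return n
    return improving[-1]
```

For a minimization with nonnegative costs, `afm_step(5, [4, 6, 7, 3], abs)` either stays at 5 or moves to one of the improving neighbors 4 or 3; the key point, matching the discussion above, is that no time is spent "accepting" a neighbor when none is acceptable.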

tion to the acceleration of the search procedure. The simulation of army ant behavior shows its superiority over the method of simply adopting a smaller neighborhood size. The computational examples also revealed that this natural neighborhood reduces the searching time by at least 10%.

(c) Reinforcement of Character and Diversification. The reinforcement of character has a greater influence than diversification in improving the best solution. However, it was found that a global optimum could not be obtained without the adoption of diversification. This phenomenon can be explained as follows: the special self-reinforcing character of AFM attracts the agents to better solution spaces and explores these spaces more thoroughly, whereas diversification restarts the search procedure from a less attractive solution space. Without diversification, the algorithm becomes trapped in a local optimum because it ignores some solution spaces in which the global optimum might be located.

(d) Movement Probability. In other AI methods such as GA and SA, neighbor solutions are accepted as soon as they are randomly generated. When no acceptable neighbor solution exists, this mechanism wastes searching time. The use of movement probabilities, in contrast, helps AFM identify whether acceptable neighbor solutions of the current solution exist and avoids wasting time searching for acceptable neighbor solutions when none exist. In this respect, the procedure by which AFM seeks the optimization direction is more effective than the approaches used by other AI methods. For the four examples discussed, through the use of both movement probabilities and transition probabilities, the search time can be reduced by at least 5% compared with the time required by the GA and SA algorithms.

(e) Step Size of Continuous Variables. For this problem, Patel et al.11 designed a method to vary the step size of continuous variables dynamically. However, our computational experience showed that Patel's method induced unnecessary computation in the search procedure.

(f) Termination. We used the single-solution termination criterion in computing examples 1 and 2 so that very satisfactory solutions would be obtained. For examples 3 and 4, we adopted the maximum generation

criterion and suggest 150-250 generations as a good compromise for the maximum value.

Conclusion

In this paper, the ants foraging mechanism (AFM), a novel evolutionary approach, is presented for the solution of the optimal design of multiproduct batch chemical processes, and satisfactory results are obtained. AFM proved to be well suited to the proposed optimization problem and offers the following advantages in application: (1) AFM is more robust than GA and SA; it requires neither the temperature-control parameters of simulated annealing nor the coding that genetic algorithms must design for each special application. (2) AFM can find highly satisfactory solutions; in particular, it can find the global optimum for simple problems through its single-solution termination criterion. (3) AFM makes no special demand on the initial values of the optimization variables; rather, any feasible value of each optimization variable can be taken as the initial solution. (4) AFM makes no special demand on the form of the objective function. (5) As is evident from the computational results, AFM yields a highly satisfactory, near-global optimum. (6) AFM is simple in structure and convenient to implement.

Nomenclature

aj = cost coefficient for batch stage j
bk = cost coefficient for semicontinuous stage k
Bis = batch size for product i in subprocess s, kg
cs = cost coefficient for intermediate storage s
Dik = duty factor for product i in semicontinuous stage k
dij = power coefficient for processing time for product i in stage j
gij = coefficient of processing time for product i in stage j
H = horizon, h
Hi = production time of product i, h
i = index for products
I = total number of products
j = index for batch stages
J = total number of batch stages
k = index for semicontinuous stages


K = total number of semicontinuous stages
moj = number of out-of-phase groups in batch stage j
mpj = number of in-phase parallel units in each of the out-of-phase groups in batch stage j
nk = number of parallel units in semicontinuous stage k
Pi = rate of production of product i, kg/h
pij = processing time for product i in stage j, h
p0ij = constant in processing time equation for product i in stage j
Qi = demand for product i, kg
Rk = processing rate for semicontinuous unit k, kg/h
Rkmax = maximum feasible size of semicontinuous stage k
Rkmin = minimum feasible size of semicontinuous stage k
S = total number of subprocesses
Sij = size factor for product i in batch stage j
S*is = size factor for product i in storage s
t = index for subtrains
T = total number of subtrains
tij = recycling time for product i in batch stage j
tLis = limiting cycle time for product i in subprocess s
Vj = size of batch stage j, L
Vjmax = maximum feasible size of batch stage j, L
Vjmin = minimum feasible size of batch stage j, L
V*s = size of intermediate storage s, L
αj = cost coefficient for batch stage j
βk = cost coefficient for semicontinuous stage k
γs = cost coefficient for storage s
θit = operating time for product i in subtrain t
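The cost coefficients (aj, bk, cs) and exponents (αj, βk, γs) above enter the investment-cost objective that the design minimizes. A minimal sketch of the usual power-law cost model for this problem class follows; the exact grouping of moj and mpj in the batch-stage term is our assumption from the standard literature formulation, not taken verbatim from the paper.

```python
def investment_cost(moj, mpj, V, aj, alpha, nk, R, bk, beta, Vs, cs, gamma):
    """Power-law investment cost commonly used in batch/semicontinuous plant
    design (assumed form): batch stages + semicontinuous stages + storage.
    All arguments are parallel lists over batch stages j, semicontinuous
    stages k, and intermediate storages s, using the nomenclature above."""
    batch = sum(m_o * m_p * a * v ** al
                for m_o, m_p, a, v, al in zip(moj, mpj, aj, V, alpha))
    semi = sum(n * b * r ** be for n, b, r, be in zip(nk, bk, R, beta))
    storage = sum(c * v ** g for c, v, g in zip(cs, Vs, gamma))
    return batch + semi + storage
```

For instance, one batch stage with two in-phase units of 100 L at unit cost aj = 1.0 and αj = 1.0 contributes 200 to the objective under this model.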

Literature Cited

(1) Modi, A. K.; Karimi, I. A. Design of multiproduct batch processes with finite intermediate storage. Comput. Chem. Eng. 1989, 13, 127.
(2) Yeh, N. C.; Reklaitis, G. V. Synthesis and sizing of batch/semicontinuous process. Presented at the AIChE Annual Meeting, Chicago, IL, 1985; Paper No. 35a.
(3) Yeh, N. C.; Reklaitis, G. V. Synthesis and sizing of batch/semicontinuous processes: Single product plants. Comput. Chem. Eng. 1987, 11, 639.
(4) Xu, X.; Zheng, G.; Cheng, S. Optimized design of multiproduct batch chemical process: A heuristic approach. J. Chem. Eng. 1993, 44, 442 (in Chinese).
(5) Loonkar, Y. R.; Robinson, J. D. Minimization of capital investment for batch processes. Ind. Eng. Chem. Process Des. Dev. 1970, 9, 625.
(6) Grossmann, I. E.; Sargent, R. W. H. Optimal design of multipurpose chemical plants. Ind. Eng. Chem. Process Des. Dev. 1979, 18, 343.
(7) Knopf, F. C.; Okos, M. R.; Reklaitis, G. V. Optimum design of batch/semicontinuous processes. Ind. Eng. Chem. Process Des. Dev. 1982, 21, 79.
(8) Takamatsu, T.; Hashimoto, I.; Hasebe, S. Optimal design and operation of a batch process with intermediate storage tanks. Ind. Eng. Chem. Process Des. Dev. 1982, 21, 431.
(9) Espuña, A.; Lázaro, M.; Martínez, J. M.; Puigjaner, L. An efficient and simplified solution to the predesign problem of multiproduct plants. Comput. Chem. Eng. 1989, 13 (1/2), 163.
(10) Barbosa-Póvoa, A. P.; Macchietto, S. Detailed design of multipurpose batch plants. Comput. Chem. Eng. 1994, 18 (11/12), 1013.
(11) Patel, A. N.; Mah, R. S. H.; Karimi, I. A. Preliminary design of multiproduct noncontinuous plants using simulated annealing. Comput. Chem. Eng. 1991, 15, 451.
(12) Tricoire, B.; Malone, M. A new approach for the design of multiproduct batch processes. Presented at the AIChE Annual Meeting, Los Angeles, CA, 1991.
(13) Wang, C.; Quan, H.; Xu, X. Optimal design of multiproduct batch chemical process: Mixed simulated annealing. J. Chem. Eng. 1996, 47, 1844 (in Chinese).
(14) Wang, C.; Quan, H.; Xu, X. Optimal Design of Multiproduct Batch Chemical Process Using Genetic Algorithms. Ind. Eng. Chem. Res. 1996, 35, 3560.
(15) Wang, C.; Quan, H.; Xu, X. Optimal Design of Multiproduct Batch Chemical Process Using Tabu Search. Comput. Chem. Eng. 1999, 23, 427.
(16) Lin, X.; Floudas, C. A. Design, synthesis and scheduling of multipurpose batch plants via an effective continuous-time formulation. Comput. Chem. Eng. 2001, 25, 665.
(17) Dorigo, M.; Maniezzo, V.; Colorni, A. Positive Feedback as a Search Strategy; Technical Report 91-016; Dipartimento di Elettronica, Politecnico di Milano: Milan, Italy, 1991.
(18) Dorigo, M.; Di Caro, G. Ant Algorithms for Discrete Optimization. Artif. Life 1999, 5 (3), 137.
(19) Solé, R. V.; Bonabeau, E.; Delgado, J.; Fernández, P.; Marín, J. Pattern Formation and Optimization in Army Ant Raids. Proc. R. Soc. London B 2000; http://www.santafe.edu/sfi/publications/wpabstract/199910074.
(20) Dorigo, M.; Maniezzo, V.; Colorni, A. The ant system: Optimization by a colony of cooperating agents. IEEE Trans. Syst., Man, Cybern. B 1996, 26 (1), 29.
(21) Dorigo, M.; Gambardella, L. M. Ant colonies for the traveling salesman problem. BioSystems 1997, 43, 73.
(22) Dorigo, M.; Gambardella, L. M. Ant colony system: A cooperative learning approach to the traveling salesman problem. IEEE Trans. Evol. Comput. 1997, 1 (1), 53.
(23) de Werra, D.; Hertz, A. Tabu search techniques: A tutorial and an application to neural networks. OR Spektrum 1989, 11, 131.

Received for review November 19, 2001
Revised manuscript received September 23, 2002
Accepted October 10, 2002

IE010932R