Development of a Robust Multiobjective Simulated Annealing

ARTICLE pubs.acs.org/IECR

Development of a Robust Multiobjective Simulated Annealing Algorithm for Solving Multiobjective Optimization Problems

B. Sankararao and Chang Kyoo Yoo*
Centre for Environmental Studies, Department of Environmental Science and Engineering, Kyung Hee University, Seocheon-dong 1, Giheung-gu, Yongin-si, Gyeonggi-do, 446-701, Republic of Korea

ABSTRACT: This paper describes the development of a robust algorithm for multiobjective optimization, known as robust multiobjective simulated annealing (rMOSA). rMOSA is a simulated annealing based multiobjective optimization algorithm in which two new mechanisms are incorporated (1) to speed up convergence to the Pareto front (a set of nondominating solutions) and (2) to obtain uniform nondominating solutions along the final Pareto front. The first mechanism is a systematic procedure for calling the process of choosing a random point in the Archive for the perturbation step, which speeds up convergence to the final Pareto front; the second is a systematic procedure for calling the process of choosing the most uncrowded solution in the Archive for the perturbation step, which yields a well-crowded, uniform Pareto front. First, a Simple MOSA is developed using the concepts of an archiving procedure, a simple probability function (used to set new-pt as current-pt), single-parameter perturbation, and a simple annealing schedule. The two proposed mechanisms are then implemented on top of Simple MOSA to develop the robust algorithm for multiobjective optimization (MOO) known as rMOSA. The seven steps involved in the development of rMOSA are thoroughly explained while presenting the algorithm. Four computationally intensive benchmark problems and one simulation-intensive two-objective problem for an industrial FCCU are solved using the two newly developed algorithms (rMOSA and Simple MOSA) and two well-known existing MOO algorithms (NSGA-II-JG and NSGA-II).
The performances of the newly developed rMOSA and Simple MOSA (i.e., rMOSA without the two new mechanisms) are compared with those of NSGA-II-JG and NSGA-II using different metrics from the MOO literature. The newly developed rMOSA converges to the Pareto sets in fewer simulations, with well-crowded, uniform nondominating solutions, for all the problems considered in this study. Hence, rMOSA can be considered one of the best algorithms for solving computationally intensive and simulation-intensive MOO problems in chemical as well as other fields of engineering.

’ INTRODUCTION
In the last two decades, a large number of multiobjective optimization (MOO) algorithms have been developed based on simple genetic algorithms (GAs). These algorithms extend the concepts of single-objective GA to multiple objectives, coupled with the concepts of multiobjective optimization. A large number of applications using GA-based evolutionary MOO algorithms have also been reported in the literature of different streams of science and engineering. Simulated annealing (SA) is another popular algorithm for solving single-objective optimization problems. Unlike GA, very few SA-based MOO techniques and applications have been reported in the MOO literature. Emphasis is, therefore, given to developing SA-based multiobjective techniques, that is, extending the concepts of single-objective SA to multiple objectives. In this study, a robust algorithm for multiobjective simulated annealing (MOSA) is developed based on SA, henceforth called rMOSA in this paper. Many researchers have found that evolutionary algorithms based on GA are very promising for solving multiobjective optimization problems, as they can find most of the solutions of the Pareto set in one complete run. However, the SA algorithm has hardly been used for multiobjective optimization because SA was originally constructed to use only one searching agent instead of a set (or population) of points. This is known to be a critical weakness of SA, as it runs against the philosophy of

multiobjective optimization, that is, searching for all the nondominated solutions of the Pareto set instead of only one solution. As a result of this weakness, SA-based MOO algorithms have remained unpopular for solving multiobjective optimization problems. The main obstacle for SA in multiobjective optimization is its inability to find multiple solutions using only one searching agent. Obtaining a uniform Pareto front with multiple solutions is therefore a challenging task in the development of MOSA, and efforts are made in this direction here. The motivation of this study is to develop an efficient and robust algorithm for MOSA that overcomes the above-mentioned hurdles by incorporating two new mechanisms into Simple MOSA. In this study, a Simple MOSA is developed first by using the concepts of an archiving procedure, a simple probability function (used to set new-pt as current-pt), single-parameter perturbation, and a simple annealing schedule. Two new mechanisms are then implemented on top of Simple MOSA to develop a robust algorithm for MOSA (i.e., rMOSA). A systematic procedure to call the process of choosing a random point in the Archive for the perturbation step (in rMOSA) is chosen as the first mechanism to speed up the

Received: August 9, 2010
Accepted: April 19, 2011
Revised: March 26, 2011
Published: April 19, 2011

dx.doi.org/10.1021/ie1016859 | Ind. Eng. Chem. Res. 2011, 50, 6728–6742

process of convergence to attain a Pareto set, while the other mechanism is a systematic procedure to call the process of choosing the most uncrowded solution in the Archive for the perturbation step, to obtain a well-crowded, uniform Pareto set. The main steps involved in the development of rMOSA are (1) generating a user-specified number of initial feasible random solutions; (2) formation of the initial Archive from the initial feasible solutions generated in step 1, using the concepts of archiving and nondominance; (3) initializing the values of the rMOSA parameters; (4) choosing a solution for perturbation based on the two new mechanisms; (5) perturbation of a single variable/parameter of the current solution to form a new solution; (6) acceptance of a bad solution based on a probability function which accounts for the measure of the amount of domination between the two solutions (i.e., current and new); and (7) an annealing schedule to decrease the temperature parameter. These seven steps are thoroughly described while presenting the algorithm for rMOSA. The newly developed rMOSA and Simple MOSA are compared with the currently existing well-known MOO algorithms NSGA-II-JG (elitist nondominated sorting genetic algorithm with jumping genes) and NSGA-II, using different metrics available in the MOO literature, such as spacing (S) and the number of simulations required to converge to the Pareto front. Optimization of four computationally intensive benchmark problems (ZDT1, ZDT2, ZDT3, and ZDT4) and one simulation-intensive, real-life, two-objective problem for an industrial FCCU is studied. This paper is organized as follows: the first section reviews the literature on multiobjective optimization algorithms and their applications in the area of chemical engineering; the second section outlines the development of rMOSA with a detailed explanation of every step of the algorithm; the third section presents the MOO problem formulation of the industrial FCCU.
The fourth and fifth sections give the results and conclusions, respectively.

’ LITERATURE REVIEW
GA1−3 and SA3−5 are quite popular artificial-intelligence-based robust techniques that are widely used for solving optimization problems in different areas of science and engineering. These algorithms are superior to traditional optimization techniques in many ways. They are better than calculus-based methods (both direct and indirect), which generally seek out a local optimum and may miss the global optimum. Several of the older techniques require the derivatives of the objective functions, and quite often numerical approximations of the derivatives are used for optimization. In most real-life problems, the existence of derivatives is questionable, and the functions are often discontinuous, multimodal, and noisy. Because of these complications, GA and SA drew the attention of most researchers working in the field of optimization (or operations research) toward developing new techniques for solving single-objective as well as multiobjective optimization problems. Several stochastic and nonstochastic multiobjective optimization techniques based on Tabu search, Scatter search, Ant systems, Distributed reinforcement learning, Particle swarm optimization, Differential evolution, Artificial immune systems, Cooperative search, and Cultural algorithms have also been reported in the literature. The advantages and disadvantages of each technique are well reviewed by Coello Coello et al.6 The present study is limited to multiobjective versions of SA and GA; hence, the literature survey is restricted to multiobjective


versions of SA and GA, and is provided in the next few paragraphs.
A huge number of GA-based MOO techniques have been reported in the multiobjective optimization literature. Some of the notable recent MOO algorithms are SPEA, PAES, NSGA, VEGA, NPGA, FFGA, NSGA-II, NSGA-II-JG, and HLEA. These techniques are well reviewed by Coello Coello et al.6 Among the GA-based techniques, NSGA7 and NSGA-II8 appear to have received the most attention in the EA literature and have been extensively used in recent years to solve a variety of multiobjective optimization problems in chemical engineering, in the areas of polymer reaction engineering,9−15 catalytic reactors,16−18 membrane modules,19 cyclone separators,20 venturi scrubbers,21,22 petroleum operations,23−25 water treatment,26 etc.
Very few trials27−33 have been reported in the MOO literature to develop SA-based multiobjective optimization algorithms. A good review of several MOSA algorithms and a comparative analysis of their performance can be found in Suman and Kumar.34 In developing SA-based MOO algorithms, Sankararao and Gupta29 and Suppapitnarm et al.32 used one independent searching agent and a step-size perturbation mechanism based on the acceptance of the current solution. Convergence toward the final Pareto front was observed to be very slow with their perturbation mechanism, and only a small number of nondominating solutions was obtained in the final Pareto front. On the other hand, Nam and Park27 used 100 independent searching agents simultaneously in the development of their MOSA; that is, 100 agents search for the Pareto-optimal solutions without exchanging information among themselves as the annealing process progresses. They observed that a huge number of iterations is required to reach the final Pareto front for the MOO problem considered in their study. Moreover, the nondominating solutions in the Pareto front obtained by them are sparse and nonuniform.
Smith et al.28 and Bandyopadhyay et al.31 explored directions for obtaining a more crowded Pareto front in developing MOSA. Smith et al.28 used transversal and location scaling factors to update the step size of each (decision) variable, a surface-sampling method, and the concept of the relative dominance of a solution over the existing solutions in the archive as the system energy for optimization. Bandyopadhyay et al.31 used the concept of relative dominance as well as the amount of domination as the system energy, together with an elaborate and complex procedure for accepting a new solution. The results of Bandyopadhyay et al.31 are better than those of the previous workers27−29,32 in terms of the number of solutions in the Pareto front, but their31 algorithm undergoes a large number of iterations to obtain a crowded Pareto front. The nondominating solutions in the crowded Pareto front obtained with their algorithm are found to be nonuniform (i.e., some regions are crowded and some are not), as no special mechanism was adopted in their algorithm to obtain a uniformly crowded Pareto front; nor was any method adopted to increase the speed of convergence to the Pareto front. It is clear from the above discussion that much research is still needed to obtain a quick (i.e., within a small number of iterations), uniform, and crowded Pareto front. The study in this paper therefore focuses on these aspects to develop an efficient and robust algorithm for MOSA. In this study, a Simple MOSA is first developed by taking simple and effective concepts from different previous works on MOSA, such as the archiving



procedure,29 a simple probability function29 (used to set a new-pt as current-pt), single-parameter perturbation,31 and a simple annealing schedule.31 It is to be noted that the use of the simple probability function and archiving procedure of Sankararao and Gupta29 in Simple MOSA eliminates the need for the elaborate and complex procedure suggested by Bandyopadhyay et al.31 for accepting a new solution. The Simple MOSA developed in this paper is a completely new technique, different from the algorithms developed previously by MOO researchers,27−33 as it combines the best concepts of the different algorithms. Two new mechanisms are then implemented on top of Simple MOSA to develop a robust MOSA (i.e., rMOSA). These mechanisms help in obtaining a quick, uniform, and crowded Pareto front.
SA- and GA-based MOO techniques developed so far require large amounts of computational (CPU) time. This limitation restricts their use in online applications in the chemical and petroleum industries, such as process control and process optimization. Hence, new algorithms that can speed up convergence to the final Pareto front are required; Simple MOSA and rMOSA are two such algorithms developed on this theme.
Very few29,35−37 applications of MOSA can be found in the chemical engineering literature, as developments on MOSA to make the algorithm more robust are still ongoing. Sankararao and Gupta29 carried out the multiobjective optimization of industrial FCCUs using MOSA by maximizing the gasoline yield and minimizing the percentage of CO in the flue gas coming out of the regenerator. Sankararao and Gupta35 solved a MOO problem related to the dynamic operation of a steam reformer by minimizing the total (cumulative over t) deviations of the production of hydrogen and steam from their steady-state values, integrated over an appropriately long time span.
Sankararao and Gupta36 carried out MOO of pressure swing adsorption units by maximizing the purity and recovery of O2, considering several operating conditions as decision variables. Halim and Srinivasan37 optimized batch process industries by simultaneously minimizing the process waste, cleaning agent, and energy involved in the operation using MOSA.35 In this study, a new MOO problem related to an industrial FCCU is formulated and solved using the newly developed algorithm, rMOSA.

’ DEVELOPMENT OF RMOSA
It is assumed that the reader is familiar with the preliminaries of single-objective simulated annealing35 and MOO;38 hence, these are not repeated here for the sake of brevity. These concepts are quite helpful in understanding the proposed rMOSA.
Robust Multiobjective Simulated Annealing (rMOSA). A simplified flowchart and the algorithm for rMOSA are provided in Appendix I. All the steps involved in the development of rMOSA are thoroughly explained in this section. The detailed description of each step of the algorithm is as follows:
1. Generating a User-Specified Number of Initial Feasible Random Solutions. Generate a user-specified number (n = 400 in this study) of initial random points, one by one, using the following equation:

Xi = Xi,L + R(Xi,U − Xi,L);  0 ≤ R ≤ 1;  i = 1, 2, ..., Nd    (1)

where Nd is the number of decision variables; Xi,L and Xi,U are the lower and upper bounds of decision variable i, respectively; and R is a random number between 0 and 1. Check the constraints, if any, to determine the feasibility of each point. If the point is not feasible, discard it and use eq 1 to create another random point; this procedure is repeated until a feasible point is obtained.
2. Formation of the Initial Archive. Assign the first feasible random point (generated in step 1) as the first point of the Archive. Take the next feasible point as the new-pt and check whether to place it in the Archive by comparing it with every existing nondominating point in the Archive, using the following procedure:
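A minimal sketch of steps 1 and 2 follows: eq 1 sampling with resampling on infeasibility, and the nondominance comparison used to build the initial Archive. This is an illustrative Python sketch, not the authors' Fortran implementation; the helper names and the feasibility hook are illustrative.

```python
import random

def random_solution(lower, upper):
    # Eq 1: X_i = X_i,L + R(X_i,U - X_i,L), with R uniform on [0, 1]
    return [lo + random.random() * (hi - lo) for lo, hi in zip(lower, upper)]

def dominates(a, b):
    # a dominates b (minimization): no worse in every objective,
    # strictly better in at least one
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def update_archive(archive, new_pt, objectives):
    # Compare new_pt against every archive member: if any member dominates it
    # (case 3), leave the Archive unchanged; otherwise drop the members it
    # dominates and add it. archive is a list of (x, f) pairs.
    f_new = objectives(new_pt)
    kept = []
    for x, f in archive:
        if dominates(f, f_new):
            return archive          # case 3: new-pt is dominated, discard it
        if not dominates(f_new, f):
            kept.append((x, f))     # member survives the comparison
    kept.append((new_pt, f_new))
    return kept

def initial_archive(n, lower, upper, objectives, feasible=lambda x: True):
    # Step 1: generate n feasible random points (resample on infeasibility);
    # step 2: fold each one into the Archive of nondominating points.
    archive = []
    accepted = 0
    while accepted < n:
        x = random_solution(lower, upper)
        if not feasible(x):
            continue                # infeasible: discard and apply eq 1 again
        archive = update_archive(archive, x, objectives)
        accepted += 1
    return archive
```

On a toy trade-off problem with objectives (x, 1 − x), every sampled point is mutually nondominating, so the initial Archive retains all n points; with objectives (x, x), only the smallest sampled x survives.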

Follow the above process for all feasible points generated in step 1 to create the initial Archive. In the above process, NDSol_num is the counter for nondominating solutions in dup_archive, and As is the total number of nondominating solutions in dup_archive. If case 3 (i.e., new-pt is dominated by the NDSol_numth point) arises during the comparison, exit the for loop, do not update the contents of the Archive with the contents of dup_archive, and do not add new-pt to the Archive. If case 3 does not arise, the contents of the Archive are updated with the contents of dup_archive, and new-pt is added to it.
3. Initializing the Values of the rMOSA Parameters. The parameters that need to be set a priori are as follows:
Tmax: maximum (initial) temperature
Tmin: minimum (final) temperature
Max_iter: maximum number of iterations to be performed at each temperature
R: the cooling rate in SA
R1, β1: parameters used in the first mechanism
R2, β2: parameters used in the second mechanism
4. Choosing a Solution for Perturbation Based on the Two New Mechanisms. Two new mechanisms are used in the development


of the algorithm for rMOSA; these are explained in this step. Refer to the algorithm and flowchart given in Appendix I while going through the following mechanisms, for a better understanding.
Mechanism 1. Mechanism 1 deals with the systematic procedure for calling the process of choosing a random point in the Archive for the perturbation step (step 5 of the algorithm) of rMOSA; the process of selecting a random point in the Archive is henceforth referred to as "process 1" in this paper. It can be observed from the algorithm (see Appendix I) that Max_iter iterations are performed at each value of temperature. Process 1 is called at a regular interval (say "nn1") of iterations while the main program is undergoing Max_iter iterations at each value of temperature. This means that process 1 is called Max_iter/nn1 times at each value of temperature. Note that the value of nn1 varies only with the value of temperature; nn1 has a single value for a fixed temperature. The parameters of mechanism 1, R1 and β1, are chosen such that the value of nn1 is small at the start of the rMOSA run (i.e., at large values of temperature). The run starts with a small value of nn1, and this value increases as the temperature ("temp" in the algorithm) decreases; nn1 attains a large value at the end of the run (i.e., at a small value of temperature). The equation used in the algorithm, nn1 = int(β1 − R1 × temp), where the int operation ensures an integer value of nn1, gives small values of nn1 at large temperatures and large values of nn1 at small temperatures. The small values of nn1 in the initial stage of the run ensure a global search of the search space, which initiates the movement of the Archive toward the final Pareto front.
As the rMOSA run progresses, the temperature is reduced according to the annealing schedule (see step 7 for more details). At each value of temperature, the program undergoes Max_iter iterations together with Max_iter/nn1 calls to process 1. This mechanism helps the Archive reach the final (or near-final) Pareto front. Once the Archive has reached the Pareto front, no rigorous global search is needed; the large values of nn1 at small temperatures ensure that no rigorous global search takes place during the final phase of the rMOSA run. The parameters of the ZDT4 problem (one of the four test problems considered in our study) are taken to illustrate this mechanism. In solving this problem, the values taken for R1 and β1 are 0.1 and 30, and the values of Tmax, Tmin, R, and Max_iter are 200, 0.001, 0.8, and 300, respectively. The value of nn1 is 10 at temp = Tmax (=200) and 30 at temp = Tmin (=0.001). At the start of the rMOSA run, process 1 is called at a regular interval of 10 iterations while the main program undergoes 300 iterations at temp = Tmax. The interval at which process 1 is called slowly increases as the temperature decreases, reaching 30 at temp = Tmin. Therefore, process 1 is called at a regular interval of 30 iterations at the end of the rMOSA run.
Mechanism 2. Mechanism 2 deals with the systematic procedure for calling the process of choosing the most uncrowded solution in the Archive for the perturbation step of rMOSA. This mechanism is useful for generating points around uncrowded points in the final Pareto front and thus helps in the formation of a crowded and uniform Pareto front. The process of selecting the most uncrowded point in the Archive is henceforth referred to as "process 2" in this paper. Process 2 is called at a regular interval (say "nn2") of iterations while the main program is undergoing Max_iter iterations at each value of temperature.
This means that process 2 is called Max_iter/nn2 times at each value of temperature. Note that the value of nn2 varies only with the value of temperature; nn2 has a single value for a fixed temperature. The parameters of mechanism 2, R2 and β2, are chosen such that the value of nn2 is large at the start of the rMOSA run (i.e., at large values of temperature). The run starts with a large value of nn2, and this value decreases as the temperature decreases; nn2 attains a small value at the end of the run (i.e., at a small value of temperature). The equation used in the algorithm, nn2 = int(R2 × temp + β2), where the int operation ensures an integer value of nn2, gives large values of nn2 at large temperatures and small values of nn2 at small temperatures. As a rigorous search around the uncrowded points in the Archive is not needed in the initial phase of the rMOSA run, the algorithm is started with a large value of nn2. A rigorous search around uncrowded points is needed only during the final phase of the rMOSA run, i.e., only when the Archive has reached (or is near) the Pareto front. The small values of nn2 at small temperatures (during the final phase of the rMOSA run) ensure a rigorous search around the uncrowded points in the final Pareto front. The parameters of the ZDT4 problem are again taken to illustrate mechanism 2. In solving this problem, the values taken for R2 and β2 are 0.1 and 5, and the values of Tmax, Tmin, R, and Max_iter are 200, 0.001, 0.8, and 300, respectively. The value of nn2 is 25 at temp = Tmax (=200) and 5 at temp = Tmin (=0.001). At the start of the rMOSA run, process 2 is called at a regular interval of 25 iterations while the main program undergoes 300 iterations at temp = Tmax. The interval at which process 2 is called slowly decreases as the temperature decreases, reaching 5 at temp = Tmin.
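The calling pattern of the two mechanisms at a fixed temperature can be sketched as follows. The interval forms nn1 = int(β1 − R1 × temp) and nn2 = int(R2 × temp + β2) are assumed here because they reproduce the quoted ZDT4 endpoint values, and the tie-breaking priority between the two processes is an assumption of this sketch.

```python
def nn1(temp, R1=0.1, beta1=30):
    # Mechanism 1 interval: small at high temperature (process 1 called often,
    # rigorous global search), large at low temperature.
    # ZDT4 values: 10 at temp = 200, ~30 at temp = 0.001.
    return int(beta1 - R1 * temp)

def nn2(temp, R2=0.1, beta2=5):
    # Mechanism 2 interval: large at high temperature, small at low temperature
    # (frequent uncrowded-point search near convergence).
    # ZDT4 values: 25 at temp = 200, 5 at temp = 0.001.
    return int(R2 * temp + beta2)

def one_temperature_pass(temp, max_iter=300):
    # Skeleton of the Max_iter loop at one temperature: every nn2-th iteration
    # would perturb the most uncrowded Archive point (process 2), every nn1-th
    # a random Archive point (process 1); otherwise the current point is
    # perturbed. Here only the call counts are tallied.
    calls = {"process1": 0, "process2": 0, "current": 0}
    for it in range(1, max_iter + 1):
        if it % nn2(temp) == 0:
            calls["process2"] += 1
        elif it % nn1(temp) == 0:
            calls["process1"] += 1
        else:
            calls["current"] += 1
    return calls
```

At temp = Tmax = 200 with Max_iter = 300, this gives roughly 300/25 = 12 calls to process 2 and close to 300/10 calls to process 1, matching the Max_iter/nn frequencies described above.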
Therefore, process 2 is called at a regular interval of five iterations at the end of the rMOSA run.
Simultaneous Use of Mechanisms 1 and 2 in rMOSA. It can be observed from the algorithm of rMOSA (see Appendix I) that mechanisms 1 and 2 are used simultaneously to increase the robustness of the algorithm. In the initial phase of the rMOSA run (i.e., at large values of temperature), the subroutine for process 1 is called more frequently than that for process 2 (because of the small values of nn1 and large values of nn2 at large temperatures) to carry out a global search of the search space, which initiates the movement of the Archive toward the final Pareto front. In the middle phase of the rMOSA run, both processes are called with almost equal frequency, so both help in the quick movement of the Archive (in the middle of the search space) toward the final Pareto front. In the final phase of the rMOSA run, process 2 is called more frequently than process 1 (because of the small values of nn2 and large values of nn1 at small temperatures) to carry out a local search around the uncrowded points in the final Pareto front and create new points around them. The simultaneous use of mechanisms 1 and 2 in the rMOSA algorithm (see Figure 2 and the related description in the "Results and Discussion" section for more details) is thus expected to yield a quick, uniform, and crowded Pareto front. Note that the point chosen by mechanism 1 or 2 is used for perturbation in step 5 of the algorithm.
5. Perturbation of a Single Variable/Parameter of the Current Solution. A single variable is randomly chosen from the decision variables to perturb the current-pt and form a new-pt. This process is similar to those used by Smith et al.28 and Bandyopadhyay et al.31 The speed of convergence to obtain a



Table 1. Parameters Used in rMOSA and Simple MOSA for Solving the Benchmark and FCCU Problems

parameter             | ZDT1           | ZDT2           | ZDT3            | ZDT4           | FCCU problem
Tmax                  | 200            | 200            | 200             | 200            | 100
Tmin                  | 0.001          | 0.000001       | 0.00001         | 0.001          | 6
Max_iter              | 450b, 600c     | 500b, 700c     | 600b, 1600c     | 300b, 500c     | 300b, 900c
R                     | 0.8            | 0.8            | 0.8             | 0.8            | 0.8
R1a                   | 0.1            | 0.1            | 0.1             | 0.1            | 0.1
R2a                   | 0.1            | 0.1            | 0.1             | 0.1            | 0.1
β1a                   | 30             | 30             | 30              | 30             | 20
β2a                   | 20             | 20             | 20              | 5              | 10
Nseed                 | 0.58876        | 0.88876        | 0.88876         | 0.88876        | 0.88876
number of simulations | 24300b, 32400c | 42500b, 59500c | 45000b, 120000c | 16200b, 27000c | 3600b, 10800c

a Not required for Simple MOSA. b For rMOSA. c For Simple MOSA.

Pareto is observed29 to be low if all the variables are considered for perturbation; therefore, a single variable is chosen for perturbation.
6. Acceptance of a Bad Solution Based on a Probability Function. If case 3 (refer to the algorithm provided in Appendix I) arises while comparing new-pt with each of the nondominating solutions in dup_archive, then accept that point as current-pt with the probability calculated by the following expression:29

prob = ∏_{i=1}^{m} exp[−(f_i^{new-pt} − f_i^{current-pt})/temp]

where m is the total number of objectives, f is the value of the objective function, and temp is the value of the temperature at that iteration. Generate a random number between 0 and 1; if that value is less than prob, set new-pt as current-pt; otherwise, reject new-pt by not setting it as current-pt.
7. Annealing Schedule. The most frequently used temperature decrement rule, the geometric annealing schedule,31 is used in this article:

temp_{k+1} = R × temp_k

A typical value of the decrement (cooling) factor R is chosen in the range 0.5−0.99. This annealing schedule has the advantage of being very simple. The schedule should be chosen to strike a good balance between exploration and exploitation of the search space. The algorithm for Simple MOSA is obtained by simply removing step 4 (which contains mechanisms 1 and 2) from the rMOSA algorithm.

’ MOO PROBLEM FORMULATION OF AN INDUSTRIAL FCCU
The model equations, the solution scheme for solving them, the five-lump kinetic scheme used in their formulation, and the design specifications of the industrial FCCU are the same as in Kasat et al.;39 hence, they are not repeated here for the sake of brevity.
MOO Problem Formulation. Kasat et al.39 have presented solutions of four multiobjective optimization problems (three involving two objective functions and one with three objectives) for an industrial FCCU, using NSGA-II. In this study, another new problem involving two objective functions (referred to as the "FCCU problem" here) has been formulated to study the efficacy of the newly developed algorithm, rMOSA, as well as to provide some new results. The FCCU problem (eq 2) is:

Min f1(Tfeed, Tair, Fcat, Fair) = %CO in the flue gas    (2a)
Min f2(Tfeed, Tair, Fcat, Fair) = Fair                   (2b)
subject to (s.t.):
Constraints:
700 K ≤ Trgn ≤ 950 K                                     (2c)
Crgc ≤ 1%                                                (2d)
Model equations (eqs A1−A42 in Kasat et al.39)           (2e)
Bounds:
575 ≤ Tfeed ≤ 670 K                                      (2f)
450 ≤ Tair ≤ 525 K                                       (2g)
115 ≤ Fcat ≤ 290 kg/s                                    (2h)
11 ≤ Fair ≤ 46 kg/s                                      (2i)

The bounds and constraints in eq 2 are the same as in Kasat et al.,39 but the objective functions are different. The two objectives are the minimization of the air flow rate fed to the regenerator (for economic reasons) and of the %CO in the flue gas (for environmental reasons). Both objectives are equally important and conflict with each other; hence, they are considered as multiple objectives in this study. The decision variables are the feed preheat temperature (Tfeed), the air preheat temperature (Tair), the catalyst flow rate (Fcat), and the air flow rate (Fair). Constraints are placed on the temperature of the regenerated catalyst (Trgn) and the coke on the regenerated catalyst (Crgc).
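Steps 5−7 of the algorithm can be sketched as follows. This is an illustrative Python sketch, not the authors' Fortran implementation: the uniform resampling of the chosen variable stands in for the paper's step-size rule, and the acceptance test assumes the product-of-exponentials probability of step 6.

```python
import math
import random

def perturb_single_variable(x, lower, upper):
    # Step 5: randomly choose one decision variable and perturb it; uniform
    # resampling within its bounds stands in for the paper's step-size rule.
    i = random.randrange(len(x))
    y = list(x)
    y[i] = lower[i] + random.random() * (upper[i] - lower[i])
    return y

def acceptance_probability(f_new, f_cur, temp):
    # Step 6: prob = prod_i exp(-(f_i^new - f_i^cur)/temp). For a worse new
    # point (larger objectives under minimization) this is below 1, so bad
    # moves are accepted with a temperature-dependent probability.
    return math.exp(-sum(fn - fc for fn, fc in zip(f_new, f_cur)) / temp)

def accept_bad_point(f_new, f_cur, temp):
    # Accept when a uniform random number falls below the probability.
    return random.random() < acceptance_probability(f_new, f_cur, temp)

def anneal(temp, R=0.8):
    # Step 7: geometric annealing schedule, temp_{k+1} = R * temp_k,
    # with the cooling factor R typically chosen in 0.5-0.99.
    return R * temp
```

At high temperature the exponent is small in magnitude, so even a clearly worse new-pt is accepted with probability close to 1; as temp shrinks, worse points are accepted increasingly rarely.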

’ RESULTS AND DISCUSSION
A computer code for rMOSA is written in MS Fortran Power Station. In this section, four computationally intensive



Table 2. Parameters Used in NSGA-II-JG and NSGA-II for Solving the Benchmark and FCCU Problems

parameter             | ZDT1             | ZDT2             | ZDT3              | ZDT4            | FCCU problem
Np                    | 100              | 100              | 100               | 100             | 50
Pcross                | 0.9              | 0.9              | 0.9               | 0.9             | 0.95
Pmut                  | 0.0b, 0.05c      | 0.0b, 0.05c      | 0.0b, 0.05c       | 0.0b, 0.05c     | 0.05b, 0.05c
Pjumpa                | 0.5              | 0.5              | 0.5               | 0.5             | 0.5
Ngen,max              | 1200b, 5000c     | 2000b, 8000c     | 1600b, 12000c     | 800b, 6000c     | 175b, 200c
lchr                  | 900              | 900              | 900               | 300             | 40
Nseed                 | 0.88876          | 0.88876          | 0.88876           | 0.88876         | 0.88876
number of simulations | 120000b, 500000c | 200000b, 800000c | 160000b, 1200000c | 80000b, 600000c | 8750b, 10000c

a Not required for NSGA-II. b For NSGA-II-JG. c For NSGA-II.

benchmark problems (namely, ZDT1, ZDT2, ZDT3, and ZDT4) and one simulation-intensive two-objective problem for an industrial FCCU (namely, the FCCU problem) are solved using the four techniques: rMOSA, Simple MOSA, NSGA-II-JG,39 and NSGA-II. The details of and complexity involved in solving each benchmark problem are thoroughly described in Deb;38 hence, these are not repeated here for the sake of brevity.
The usual practice in the development of a new MOO algorithm is to compare its performance with that of currently existing, widely used algorithms to establish the advantage offered by the new algorithm. Currently, the multiobjective GA algorithms NSGA-II and NSGA-II-JG are widely used for solving MOO problems; hence, the performance of rMOSA and Simple MOSA is compared with that of NSGA-II and NSGA-II-JG. The best values of the several computational parameters (referred to as reference values) were obtained by trials for all the problems and for each of the algorithms; these are given in Tables 1 and 2. For the FCCU problem, the starting trial values of several of the parameters are from Kasat et al.39 The parameters finally used for the FCCU problem are also given in Tables 1 and 2.
Guidelines for setting the parameters R1 and β1 are as follows: nn1 should be small at the start and large at the end of the algorithm. This means that the number of times process 1 is called is large at the start and small at the end of the algorithm. Initially, the numbers of times process 1 is called at the start and at the end of the algorithm are taken as Max_iter/10 and Max_iter/30 (i.e., the initial values of nn1 at the start and end of the algorithm are 10 and 30, respectively). Initial values of R1 and β1 are chosen corresponding to these values of nn1 (10 and 30).
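Given start and end values of the interval, the two endpoint conditions in this guideline determine the mechanism parameters in closed form. A sketch under the interval forms nn1 = int(β1 − R1 × temp) and nn2 = int(R2 × temp + β2), which are assumed here because they reproduce the reference values in Table 1; the function names are illustrative.

```python
def mech1_params(nn_start, nn_end, t_max, t_min):
    # Mechanism 1: nn1 = int(beta1 - R1*temp), with nn1 = nn_start at t_max
    # and nn1 = nn_end at t_min. Solve the two linear endpoint conditions.
    R1 = (nn_end - nn_start) / (t_max - t_min)
    beta1 = nn_end + R1 * t_min
    return R1, beta1

def mech2_params(nn_start, nn_end, t_max, t_min):
    # Mechanism 2: nn2 = int(R2*temp + beta2), with nn2 = nn_start at t_max
    # and nn2 = nn_end at t_min.
    R2 = (nn_start - nn_end) / (t_max - t_min)
    beta2 = nn_end - R2 * t_min
    return R2, beta2
```

With the guideline's start/end intervals of 10 and 30 and the ZDT4 temperatures (Tmax = 200, Tmin = 0.001), this recovers R1 ≈ 0.1 and β1 ≈ 30, matching Table 1; the tuned ZDT4 intervals 25 and 5 likewise recover R2 ≈ 0.1 and β2 ≈ 5.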
Later, the optimal values of the number of times process 1 is called at the start and end of the algorithm are found by tuning α1 and β1 (using a trial and error procedure) to get the minimum number of total iterations required to obtain a Pareto set. The final values of α1 and β1 obtained for all five problems are presented in Table 1. Guidelines for setting the parameters α2 and β2 are as follows: nn2 should be large at the start and small at the end of the algorithm, which means that the number of times process 2 is called is small at the start and large at the end of the algorithm. Initially, the numbers of times process 2 is called at the start and end of the algorithm are taken as ‘Max_iter/45’ and ‘Max_iter/15’ (i.e., the initial values of nn2 at the start and end of the algorithm are 45 and 15, respectively). Initial values of α2 and β2 are chosen corresponding to the values of nn2 equal to 45 and 15. Later, the

Table 3. Number of Simulations Required by Different Algorithms to Converge to Pareto Sets

number of simulations required (Method 2)

test problem     rMOSA      Simple MOSA     NSGA-II-JG     NSGA-II
ZDT1             24 300     32 400          120 000        500 000
ZDT2             42 500     59 500          200 000        800 000
ZDT3             45 000     120 000         160 000        1 200 000
ZDT4             16 200     27 000          80 000         600 000
FCCU problem     3 600      10 800          8 750          10 000

optimal values of the number of times process 2 is called at the start and end of the algorithm are found by tuning α2 and β2 (using a trial and error procedure) to get the minimum number of total iterations required to obtain a Pareto set. The final values of α2 and β2 obtained for all five problems are presented in Table 1. No term equivalent to ‘iteration’ in MOSA exists in the NSGA-II algorithm; hence, the term ‘simulation’ is chosen29 for use in both contexts, rMOSA (or MOSA) and NSGA-II (or NSGA-II-JG), from now on. One simulation represents 2 function evaluations in the case of a 2-objective optimization problem and 3 function evaluations in the case of a 3-objective optimization problem, in both contexts. The ZDT4 problem is the most difficult of all the benchmark problems considered for this study; hence, it is used to explain the key advantages of the proposed new mechanisms. It is clear from Table 3 that the number of simulations required for rMOSA to converge to the Pareto set is 16200; hence, the Archives for Simple MOSA, Simple MOSA with mechanism 1, and Simple MOSA with mechanisms 1 and 2 (equivalent to rMOSA) are plotted in Figure 1 after 16200 simulations. It is clear from Figure 1a that the Archive of Simple MOSA is far away from the true Pareto front [extending from (0,1) to (1,0), as in Figure 1c]. Application of mechanism 1 on top of Simple MOSA (see Figure 1b) helped the Archive move toward the true Pareto front, although Figure 1b shows that this Archive has still not quite converged to the true Pareto front. Application of mechanisms 1 and 2 simultaneously on top of Simple MOSA (see Figure 1c) helped the Archive converge to the true Pareto front and obtain well-crowded uniform solutions along the Pareto set. From this explanation, it is clear that rMOSA performs better than Simple MOSA and Simple MOSA with mechanism 1.
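The guidelines above for nn1 and nn2 (a small call interval for process 1 at high temperature, a small call interval for process 2 at low temperature) can be sketched in Python. The linear interpolation of the intervals below is an assumption made for illustration only; in the paper the intervals are governed by the tuned parameters α1, β1, α2, and β2, whose functional form is not reproduced here.

```python
# Hedged sketch: how often processes 1 and 2 would fire during an rMOSA run,
# assuming (hypothetically) that nn1 and nn2 vary linearly with the iteration
# count. nn1 grows from 10 to 30; nn2 shrinks from 45 to 15, as in the text.

def nn_schedule(it, max_iter, start, end):
    """Linearly interpolate a call interval between its start and end values."""
    frac = it / max_iter
    return max(1, round(start + frac * (end - start)))

def call_counts(max_iter=16200, nn1_start=10, nn1_end=30,
                nn2_start=45, nn2_end=15):
    """Count iterations that would trigger process 1 (perturb a random
    Archive point) and process 2 (perturb the most uncrowded Archive point)."""
    p1 = p2 = 0
    for it in range(1, max_iter + 1):
        if it % nn_schedule(it, max_iter, nn1_start, nn1_end) == 0:
            p1 += 1  # process 1: global search from a random Archive point
        if it % nn_schedule(it, max_iter, nn2_start, nn2_end) == 0:
            p2 += 1  # process 2: local search around an uncrowded point
    return p1, p2
```

With these defaults, process 1 fires more often early in the run (global search) and process 2 more often late in the run (local search around uncrowded Archive points), matching the behavior described for rMOSA.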

dx.doi.org/10.1021/ie1016859 |Ind. Eng. Chem. Res. 2011, 50, 6728–6742

Industrial & Engineering Chemistry Research


Figure 1. Nondominating solutions obtained (after 16200 simulations) for the ZDT4 problem using (a) Simple MOSA [rMOSA without two mechanisms], (b) Simple MOSA with mechanism 1, and (c) Simple MOSA with mechanisms 1 and 2 (rMOSA).

Figure 2. Nondominating solutions obtained for the ZDT4 problem at different stages of rMOSA run.

The contribution of mechanisms 1 and 2 to the robustness of the algorithm at different stages of the rMOSA run is explained in this section. Plots of the Archives at different stages of the rMOSA run are shown in Figure 2: at the start of the run (Figure 2a); after 1/5th of the run (i.e., after execution of 3300 simulations, Figure 2b); after 2/5th (6600 simulations, Figure 2c); after 3/5th (9900 simulations, Figure 2d); after 4/5th (13200 simulations, Figure 2e); and at the end of the run (16200 simulations, Figure 2f). The Archive has moved from its initial location (Figure 2a) to the middle of the search space (Figure 2b) after completion of 1/5th of the run. During this stage, the subroutine for process 1 is called more frequently than that for process 2 (due to small values of nn1 and large values of nn2 at high temperatures) to carry out a global search in the


Figure 3. Comparison of the nondominating solutions of problems ZDT1–ZDT4 obtained by different algorithms. (a,b,c) ZDT1 (after 24300 simulations); (d,e,f) ZDT2 (after 42500 simulations); (g,h,i) ZDT3 (after 45000 simulations); (j,k,l) ZDT4 (after 16200 simulations).


Table 4. Computational Times Required by Different Algorithms to Converge to Pareto Sets

computational time required (Method 3), s

test problem     rMOSA     Simple MOSA     NSGA-II-JG     NSGA-II
ZDT1             28.1      1.5             23.8           95.4
ZDT2             83.9      3.0             39.9           150.7
ZDT3             51.3      11.0            32             200
ZDT4             37.3      0.8             5.5            40.6
FCCU problem     12960     38520           31208          35666

Table 5. Spacing (S) Values of Pareto Sets Obtained Using Different Algorithms

spacing (Method 4)

test problem     rMOSA         Simple MOSA     NSGA-II-JG     NSGA-II
ZDT1             0.0033672     0.0036982       0.012412       0.0072847
ZDT2             0.0039074     0.0032835       0.021695       0.0091671
ZDT3             0.0077006     0.0046694       0.038511       0.018219
ZDT4             0.0039743     0.003659        0.011444       0.0078539
FCCU problem     0.13258       0.1867          0.27337        0.48842

search space, which initiates the movement of the Archive toward the final Pareto front. Figure 2 panels c and d show the Archives after completion of 2/5th and 3/5th of the run, respectively. During these stages, processes 1 and 2 are called by the algorithm with almost equal frequency, so both processes help in the quick movement of the Archive (in the middle of the search space) toward the final Pareto front. Figure 2d shows that the Archive after completion of 3/5th of the run has converged to the true Pareto front; the formation of a large number of nondominating solutions in the Archive is also observed, due to the contribution of process 2. Figure 2 panels e and f show the Archives after completion of 4/5th and of the full run, respectively. During these stages, process 2 is called more frequently than process 1 (due to small values of nn2 and large values of nn1 at low temperatures) to carry out a local search around uncrowded points in the final Pareto front and create new points around them. The converged, well-crowded, uniform final Archive obtained at the end of the run is shown in Figure 2f. The following methods are used to compare the four techniques (rMOSA, Simple MOSA, NSGA-II-JG, and NSGA-II) and determine which algorithm is more efficient. 1. Visual Plots. The visual plots of the Archives obtained using Simple MOSA, NSGA-II-JG, and NSGA-II are compared with those of rMOSA to observe the strength of rMOSA over the other algorithms. These plots for problems ZDT1–ZDT4 are shown in Figure 3. It is observed from Table 3 that rMOSA took fewer simulations to converge to the final Pareto front than the other algorithms, for all problems considered for the study. Therefore, the number of simulations taken by rMOSA to converge to the final Pareto front is taken as the reference number for each problem studied.
The plots of the Archives of the other algorithms are obtained at this reference number and compared with rMOSA to observe visually the difference between the algorithms. It is clear from Figure 3 that the Archives obtained using Simple MOSA, NSGA-II-JG, and NSGA-II are far from the true Pareto fronts, while rMOSA has already converged to the final Pareto fronts. The strength of rMOSA over the other algorithms for all the problems studied is clearly observed in Figure 3. 2. Number of Simulations Required to Converge to a Final Pareto Front. The algorithm that takes fewer simulations to converge to a final Pareto front is considered superior to the other algorithms. The numbers of simulations taken by all algorithms for all problems considered for the study are reported in Table 3. It is clear from Table 3 that rMOSA took fewer simulations to converge to the final Pareto front than the other algorithms. The reason for the quicker convergence of rMOSA over the other algorithms can be

explained by comparing the performances of the different techniques, as follows. SA reveals its quality as a “quick starter”: it has the capability to obtain a good (optimal) solution in a short time, but it is not able to improve this solution significantly during the later stages of operation.40 On the other hand, GA is known to be a “slow starter” that is capable of improving the solutions significantly, although it spends more time doing so.40 Simple MOSA, which uses the concept of SA, reaches the Pareto front quickly when compared with NSGA-II (which makes use of the concept of GA); hence, Simple MOSA took fewer simulations to converge to the Pareto front than NSGA-II. Mechanism 1 used in rMOSA further enhances the speed of convergence to the Pareto front, while mechanism 2 improves the quality of the solutions (obtained during the global search) during the later stages of the run. These two mechanisms help rMOSA reach the Pareto front faster than Simple MOSA; hence, rMOSA took fewer simulations to converge to the Pareto front than Simple MOSA. Regarding the performance of NSGA-II-JG, the jumping genes (JG) concept in NSGA-II-JG generates more random solutions than NSGA-II and speeds up the search process, which in turn helps NSGA-II-JG reach the Pareto front in fewer simulations than NSGA-II. But the JG concept could not lift the efficiency of NSGA-II-JG to the level of Simple MOSA and rMOSA. Hence, from the above explanation, it can be concluded that the performance of rMOSA is better than that of Simple MOSA, NSGA-II, and NSGA-II-JG. 3. Computational Time Required to Converge to a Final Pareto Front. The algorithm that takes less computational time to obtain a final Pareto front is considered superior to the other algorithms. Computational time is the sum of the times taken for execution of the ‘algorithm’ part (i.e., the algorithm without simulation) and the ‘simulation’ part.
Table 4 reports the computational times taken by the different algorithms for all problems considered for the study. For problems ZDT1–ZDT4, for which one simulation takes less than a second, rMOSA took more computational time to converge to the final Pareto front than Simple MOSA and NSGA-II-JG, even though rMOSA took fewer simulations to converge to the Pareto front (Table 3). This is because of the additional computational load due to mechanisms 1 and 2 of rMOSA. For example, in the ZDT4 problem, the computational time required for a complete rMOSA run is 37.3 s, while that for the simulation part is just 0.125 s; the computational load added by mechanisms 1 and 2 is thus 37.175 (= 37.3 − 0.125) s. But this behavior is completely opposite in the case of the FCCU problem. It is clear from Table 4 that rMOSA took less computational time to converge to the final




Figure 4. Box plots of the two objective functions (f1 and f2) for benchmark and FCCU problems, using different algorithms (extra stars represent outliers). (a,b) ZDT1; (c,d) ZDT2; (e,f) ZDT3; (g,h) ZDT4; (i,j) FCCU problem.

Pareto front than the other algorithms for the FCCU problem. This is because the time taken for the simulation part dominates over that for the algorithm part: execution of one simulation takes more than a second in the case of the FCCU problem. Real-life problems in chemical engineering usually involve more time in the simulation part than in the algorithm part, as with the FCCU problem. For such problems, the computational advantage gained from the reduction in the number of simulations (to converge to the Pareto front) is huge compared with the computational load added by mechanisms 1 and 2 of rMOSA. Hence, rMOSA can be selected as a method of choice for solving such problems. 4. Spacing (S). The spacing is a measure of the relative distance between consecutive (nearest neighbor) solutions in the Archive. It is given by

S = \sqrt{ \frac{1}{Q} \sum_{i=1}^{Q} \left( d_i - \bar{d} \right)^{2} }   (3a)

where

d_i = \min_{k \in Q,\, k \neq i} \sum_{l=1}^{m} \left| f_{l}^{\,i} - f_{l}^{\,k} \right|   (3b)

and

\bar{d} = \frac{1}{Q} \sum_{i=1}^{Q} d_i   (3c)

In eq 3, m is the number of objective functions and Q is the number of nondominated solutions in the Archive. Thus, di is the ‘spacing’ (the sum of the coordinate distances in the f-space) between the ith point and its nearest neighbor, d̄ is the mean of these values, and S is the standard deviation of the different di. An algorithm whose Archive has a smaller value of S is superior to the other algorithms. Values of S for the Archives of the different algorithms for the benchmark and FCCU problems are reported in Table 5. The values of S are calculated for all Archives obtained at the numbers of simulations reported in Table 3. It is clear from Table 5 that the Archives of rMOSA have better (lower) S values for the ZDT1 and FCCU problems than the other algorithms. For the remaining problems, ZDT2, ZDT3, and ZDT4, the S values of rMOSA are close to those of Simple MOSA, while rMOSA takes fewer simulations to converge to the Pareto sets (Table 3) than Simple MOSA. Hence, rMOSA can be selected as a method of choice for solving benchmark and real-life MOO problems. 5. Box Plots. Another method to compare algorithms for MOO problems is the comparison of box plots of the objective functions of the different algorithms. The box plot of, say, f1 (Figure 4) for any technique indicates the entire range of f1 distributed over four quartiles, with 0–25% of the solutions (those having the lowest values of f1) indicated by the lower vertical line, the next 25–50% of the solutions by the lower box, 50–75% of the solutions by the upper part of the box, and the remaining 75–100% of the solutions (those having the highest values of f1) by the upper vertical line. A few outliers41 are shown by separate stars on these plots. Such plots thus depict the range and the distribution of the points graphically. Box plots for the Archives of the different algorithms for the benchmark and FCCU problems are shown in Figure 4. Box plots are plotted for all Archives obtained at the numbers of simulations reported in Table 3. It is clear from Figure 4 that the Archives of rMOSA have a better distribution of values along the four quartiles than the other algorithms for objective function 2 of ZDT1, both objective functions of ZDT2, both objective functions of ZDT4, and both objective functions of the FCCU problem. For the ZDT3 problem, the box plots of Simple MOSA are better than those of rMOSA. Since the majority of the box plots of rMOSA are better than those of the other algorithms, rMOSA can be selected as a method of choice for solving MOO problems. It is observed that a higher number of nondominating solutions are formed in the Archives of rMOSA and Simple MOSA than the population size of NSGA-II and NSGA-II-JG for all the problems taken for the study.
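The spacing metric of eq 3 is straightforward to compute from an Archive of objective vectors. A minimal Python sketch (the function name and the list-of-tuples representation of the front are illustrative choices):

```python
# Spacing metric S of eq 3: the standard deviation of the nearest-neighbour
# distances (L1 distance in objective space) over the Q Archive points.
import math

def spacing(front):
    """front: list of objective-vector tuples. Returns S as in eq 3a."""
    q = len(front)
    d = []
    for i, fi in enumerate(front):
        # d_i: L1 distance to the nearest other solution (eq 3b)
        d.append(min(sum(abs(a - b) for a, b in zip(fi, fk))
                     for k, fk in enumerate(front) if k != i))
    dbar = sum(d) / q                                        # eq 3c
    return math.sqrt(sum((di - dbar) ** 2 for di in d) / q)  # eq 3a
```

A perfectly evenly spaced front gives S = 0; larger values indicate a less uniform distribution, which is why smaller S is better in Table 5.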
A single linkage clustering algorithm42 (SLCA) is used to bring the number of nondominating solutions of rMOSA and Simple MOSA down to a number of solutions equal to the population size of NSGA-II and NSGA-II-JG. These Archives, with equal numbers of nondominating solutions, are used for the calculation of the spacing values and for plotting the box plots. We now discuss the results of the FCCU problem (eq 2). Figure 5 shows the results of the FCCU problem for rMOSA after 3600 simulations. Figure 5a shows the plot of the Pareto optimal solutions (f1 vs f2), and the associated decision variables are shown in Figure 5 panels b–d. Table 6 reports the


Figure 5. (a) Set of nondominated solutions and (b–d) decision variables obtained using rMOSA (after 3600 simulations) for the FCCU problem (eq 2).

Table 6. Decision Variables and Objective Functions for Points 1–3 in Figure 5a (FCCU Problem)

                                   Point 1      Point 2      Point 3
% CO in the flue gas (f1)          0.0035       4.7813       8.6305
air flow rate (kg/s) (f2)          21.2651      17.3993      11.738
feed preheat temperature (K)       582.7672     593.3046     597.5472
air preheat temperature (K)        490.3064     450.5993     495.1839
catalyst flow rate (kg/s)          201.2335     188.5691     119.8786

values of the decision variables for three nondominated solutions, points 1, 2, and 3, in Figure 5a. If we go from any one point (e.g., 1) to another (e.g., 2) in Figure 5a, we find that f1 (% CO in the flue gas) worsens (increases) whereas f2 (air flow rate) improves (decreases). This plot, therefore, represents a set of nondominated solutions, and a decision maker would have to decide on the “preferred” solution (operating point) from among these several solutions. Low values of the feed preheat temperature (in the initial portion of Figure 5b, i.e., below 0.2% CO) decrease the total conversion of the cracking reactions and thus the yields of the products. But the simultaneous use of high values of the catalyst flow rate (in the initial portion of Figure 5c, i.e., below 0.2% CO) compensates for the decrease caused by the low feed preheat temperatures and helps maintain high product yields. As a result, the yield of coke, which is one of the products of the cracking reactions and deactivates the catalyst, also increases; this coke must be burnt in the regenerator with high air flow rates (in the initial portion of Figure 5a, i.e., below 0.2% CO) and high air preheat temperatures (in the initial portion of Figure 5d, i.e., below 0.2% CO). High values of the feed preheat temperature (in the later portion of Figure 5b, i.e., above 0.2% CO) increase the total conversion and the product yields of the cracking reactions. But the simultaneous use of low values of the catalyst flow rate (in the later portion of Figure 5c, i.e., above 0.2% CO) compensates for the increase caused by the high feed preheat temperatures and leads to low product yields. This also leads to the formation of less coke, which can be burnt in the regenerator with low air flow rates (in the later portion of Figure 5a, i.e., above 0.2% CO) and low air preheat temperatures (in the later portion of Figure 5d, i.e., above 0.2% CO). So, it is clear from this discussion that the catalyst flow rate played


an important role in the cracking reactions when compared to the feed preheat temperature.
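The single linkage clustering step mentioned above, used to trim the oversized Archives of rMOSA and Simple MOSA down to the GA population size, can be sketched as follows. This is a minimal illustration of the idea in ref 42, not the exact implementation used in the paper; in particular, the choice of cluster representative (the point closest to the cluster mean) is an assumption.

```python
# Hedged sketch of single linkage clustering to trim an Archive of objective
# vectors down to `target` solutions (cf. ref 42; representative choice is
# an assumption).

def l2(a, b):
    """Euclidean distance between two points."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def trim_archive(points, target):
    """Merge the two clusters with the smallest single-linkage (minimum
    pairwise) distance until `target` clusters remain; keep one point each."""
    clusters = [[p] for p in points]
    while len(clusters) > target:
        best = None
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                d = min(l2(a, b) for a in clusters[i] for b in clusters[j])
                if best is None or d < best[0]:
                    best = (d, i, j)
        _, i, j = best
        clusters[i] += clusters.pop(j)   # merge the two closest clusters
    reps = []
    for c in clusters:
        mean = [sum(x) / len(c) for x in zip(*c)]
        reps.append(min(c, key=lambda p: l2(p, mean)))  # cluster representative
    return reps
```

Because each retained point is an actual Archive member, the trimmed set remains a valid subset of the nondominated solutions, which is what allows the spacing values and box plots above to be compared across algorithms with equal set sizes.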

’ CONCLUSIONS First, a Simple MOSA is developed by using the concepts of an archiving procedure, a simple probability function (to set new-pt as current-pt), single-parameter perturbation, and a simple annealing schedule. Then, the two proposed new mechanisms are implemented on top of Simple MOSA to develop rMOSA. The seven steps involved in the development of rMOSA are thoroughly explained. Four computationally intensive benchmark problems and one simulation-intensive two-objective problem for an industrial FCCU are solved using the two newly developed algorithms (rMOSA and Simple MOSA) and two well-known MOO

ARTICLE

algorithms (NSGA-II-JG and NSGA-II). The newly developed rMOSA and Simple MOSA are compared with NSGA-II-JG and NSGA-II using different metrics available in the MOO literature. The two proposed new mechanisms in rMOSA are shown to help the Archives converge to the final Pareto fronts in fewer simulations, with well-crowded uniform nondominating solutions, for all the problems considered for the study. Hence, rMOSA can be considered one of the best algorithms for solving computationally intensive and simulation-intensive MOO problems in chemical as well as other fields of engineering.

’ APPENDIX I Pseudo-code (or algorithm) for rMOSA (see the flowchart in Figure A1).

Figure A1. Flowchart of rMOSA. Refer to Appendix I for details.
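The overall flow of rMOSA described in the text (Archive, single-parameter perturbation, probabilistic acceptance, annealing schedule, and the two mechanisms) can be summarized in a Python skeleton. The acceptance rule, perturbation width, restart period, and temperature threshold below are illustrative assumptions, not the exact forms used in the paper; the authoritative description is the flowchart of Figure A1.

```python
# Hedged skeleton of the rMOSA loop (illustrative choices throughout).
import math, random

def dominates(fa, fb):
    """True if objective vector fa dominates fb (minimization)."""
    return all(a <= b for a, b in zip(fa, fb)) and any(a < b for a, b in zip(fa, fb))

def rmosa_sketch(f, lo, hi, n_var, max_iter=2000, t0=1.0, tf=1e-3, seed=0):
    rng = random.Random(seed)
    cur = [rng.uniform(lo, hi) for _ in range(n_var)]
    archive = [(cur[:], f(cur))]                      # nondominated solutions
    for it in range(1, max_iter + 1):
        temp = t0 * (tf / t0) ** (it / max_iter)      # geometric annealing schedule
        if it % 50 == 0:                              # periodic restart (assumed period)
            if temp > 0.05 or len(archive) < 3:
                # mechanism 1 (global phase): restart from a random Archive point
                cur = rng.choice(archive)[0][:]
            else:
                # mechanism 2 (local phase): restart from the most uncrowded
                # point (largest nearest-neighbour gap -- an assumption)
                def gap(i):
                    return min(sum(abs(a - b) for a, b in zip(archive[i][1], archive[k][1]))
                               for k in range(len(archive)) if k != i)
                cur = archive[max(range(len(archive)), key=gap)][0][:]
        new = cur[:]
        j = rng.randrange(n_var)                      # single-parameter perturbation
        new[j] = min(hi, max(lo, new[j] + rng.gauss(0.0, 0.1 * (hi - lo))))
        fn, fc = f(new), f(cur)
        if not dominates(fc, fn) or rng.random() < math.exp(-1.0 / temp):
            cur = new                                 # simple probabilistic acceptance
        if not any(dominates(fa, fn) for _, fa in archive):
            archive = [(x, fa) for x, fa in archive if not dominates(fn, fa)]
            archive.append((new[:], fn))              # archiving procedure
    return archive
```

Running this sketch on a simple two-objective test function, e.g. `f = lambda x: (x[0] ** 2, (x[0] - 2.0) ** 2)` on [-2, 4], yields a mutually nondominated Archive approximating the front between x = 0 and x = 2.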


’ AUTHOR INFORMATION
Corresponding Author

*E-mail: [email protected]. Tel.: 82-31-201 3824. Fax: 82-31-202 8854.

’ ACKNOWLEDGMENT This work is supported by the Brain Korea 21 project, by the Korea Science and Engineering Foundation (KOSEF) grant funded by the Korea government (MEST) (KRF-2009-0076129), and by the Seoul R&BD Program (CS070160).

’ REFERENCES

(1) Holland, J. H. Adaptation in Natural and Artificial Systems; University of Michigan Press: Ann Arbor, MI, 1975. (2) Goldberg, D. E. Genetic Algorithms in Search, Optimization and Machine Learning; Addison-Wesley: Reading, MA, 1989. (3) Deb, K. Optimization for Engineering Design: Algorithms and Examples; Prentice Hall of India: New Delhi, India, 1995. (4) Kirkpatrick, S.; Gelatt, C. D.; Vecchi, M. P. Optimization with Simulated Annealing. Science 1983, 220, 671. (5) Metropolis, N.; Rosenbluth, A.; Rosenbluth, M.; Teller, A.; Teller, E. Equation of State Calculations by Fast Computing Machines. J. Chem. Phys. 1953, 21, 1087. (6) Coello Coello, C. A.; Van Veldhuizen, D. A.; Lamont, G. B. Evolutionary Algorithms for Solving Multi-Objective Problems; Kluwer Academic Publishers: New York, 2002. (7) Srinivas, N.; Deb, K. Multiobjective Function Optimization using Nondominated Sorting Genetic Algorithms. Evol. Comp. 1995, 2, 221. (8) Deb, K.; Pratap, A.; Agarwal, S.; Meyarivan, T. A Fast and Elitist Multiobjective Genetic Algorithm: NSGA-II. IEEE. Trans. Evol. Comput. 2002, 6, 182. (9) Bhaskar, V.; Gupta, S. K.; Ray, A. K. Multi-objective Optimization of an Industrial Wiped Film Poly(ethylene terephthalate) Reactor. AIChE J. 2000, 46, 1046. (10) Garg, S.; Gupta, S. K. Multi-objective Optimization of a Free Radical Bulk Polymerization Reactor using Genetic Algorithm. Macromol. Theory Simul. 1999, 8, 46. (11) Gupta, R. R.; Gupta, S. K. Multi-objective Optimization of an Industrial Nylon 6 Semi-batch Reactor System Using Genetic Algorithm. J. Appl. Polym. Sci. 1999, 73, 729. (12) Mitra, K.; Deb, K.; Gupta, S. K. Multi-objective Dynamic Optimization of an Industrial Nylon 6 Semi-batch Reactor using Genetic Algorithm. J. Appl. Polym. Sci. 1998, 69, 69. (13) Nayak, A.; Gupta, S. K. Multi-objective Optimization of Semibatch Copolymerization Reactors using Adaptations of Genetic Algorithm (GA). Macromol. Theory Simul. 2004, 13, 73. (14) Tarafder, A.; Rangaiah, G. P.; Ray, A. 
K. Multi-objective Optimization of an Industrial Styrene Monomer Manufacturing Process. Chem. Eng. Sci. 2005, 60, 347. (15) Yee, A. K. Y.; Ray, A. K.; Rangaiah, G. P. Multi-objective Optimization of an Industrial Styrene Reactor. Comput. Chem. Eng. 2003, 27, 111. (16) Rajesh, J. K.; Gupta, S. K.; Rangaiah, G. P.; Ray, A. K. Multiobjective Optimization of Industrial Hydrogen Plants. Chem. Eng. Sci. 2001, 56, 999. (17) Oh, P. P.; Rangaiah, G. P.; Ray, A. K. Simulation and Multiobjective Optimization of an Industrial Hydrogen Plant Based on Refinery Off-Gas. Ind. Eng. Chem. Res. 2002, 41, 2248. (18) Nandasana, A.; Ray, A. K.; Gupta, S. K. Dynamic Model of an Industrial Steam Reformer and Its Use for Multi-objective Optimization. Ind. Eng. Chem. Res. 2003, 42, 4028. (19) Chan, C. Y.; Aatmeeyata; Gupta, S. K.; Ray, A. K. Multiobjective Optimization of Membrane Separation Modules. J. Membr. Sci. 2000, 176, 177. (20) Ravi, G.; Gupta, S. K.; Ray, M. B. Multi-objective Optimization of Cyclone Separators. Ind. Eng. Chem. Res. 2000, 39, 4272. (21) Ravi, G.; Gupta, S. K.; Viswanathan, S.; Ray, M. B. Optimization of Venturi Scrubbers Using Genetic Algorithm. Ind. Eng. Chem. Res. 2002, 41, 2988.


(22) Ravi, G.; Gupta, S. K.; Viswanathan, S.; Ray, M. B. Multiobjective Optimization of Venturi Scrubbers using a 3-D Model for Collection Efficiency. J. Chem. Tech. Biotech. 2003, 78, 308. (23) Inamdar, S. V.; Saraf, D. N.; Gupta, S. K. Multi-objective Optimization of an Industrial Crude Distillation Unit Using the Elitist Nondominating Sorting Genetic Algorithm. Chem. Eng. Res. Des. 2004, 82, 611. (24) Kasat, R. B.; Kunzru, D.; Saraf, D. N.; Gupta, S. K. Multi-objective Optimization of Industrial FCC Units Using Elitist Non-dominated Sorting Genetic Algorithm. Ind. Eng. Chem. Res. 2002, 27, 4765. (25) Khosla, D. K.; Gupta, S. K.; Saraf, D. N. Multi-objective Optimization of Fuel Oil Blending Using the Jumping Gene Adaptation of Genetic Algorithm. Fuel Process Technol. 2007, 88, 51. (26) Guria, C.; Bhattacharya, P. K.; Gupta, S. K. Multi-objective Optimization of Reverse Osmosis Desalination Units Using Different Adaptations of the Non-dominated Sorting Genetic Algorithm (NSGA). Comput. Chem. Eng. 2005, 29, 1977. (27) Nam, D.; Park, C. Multi-objective Simulated Annealing: A Comparative Study to Evolutionary Algorithms. Int. J. Fuzz. Syst. 2000, 2, 87. (28) Smith, K.; Everson, R.; Fieldsend, J. Dominance Measures for Multi-objective Simulated Annealing. In Proceedings of the 2004 IEEE Congress on Evolutionary Computation Conference; IEEE Press: Piscataway, NJ, 2004; Vol. 1, pp 23–30. (29) Sankararao, B.; Gupta, S. K. Multi-objective Optimization of an Industrial Fluidized-Bed Catalytic Cracking Unit (FCCU) Using Two Jumping Gene Adaptations of Simulated Annealing. Comput. Chem. Eng. 2007, 31, 1496. (30) Serafini, P. Simulated Annealing for Multiple Objective Optimization Problems. In Multiple Criteria Decision Making: Expand and Enrich the Domains of Thinking and Application; Springer Verlag: Berlin, 1994; pp 283–292. (31) Bandyopadhyay, S.; Saha, S.; Maulik, U.; Deb, K. A Simulated Annealing Based Multi-objective Optimization Algorithm: AMOSA. IEEE. Trans. Evol.
Comput. 2008, 12, 269. (32) Suppapitnarm, A.; Seffen, K. A.; Parks, G. T.; Clarkson, P. J. A Simulated Annealing Algorithm for Multi-objective Optimization. Eng. Optim. 2000, 33, 59. (33) Ulungu, E. L.; Teghem, J.; Fortemps, P.; Tuyttens, D. MOSA Method: A Tool for Solving Multi-objective Combinatorial Optimization Problems. J. Multi-Crit. Dec. Anal. 1999, 8, 221. (34) Suman, B.; Kumar, P. A Survey of Simulated Annealing as a Tool for Single and Multi-objective Optimization. J. Oper. Res. Soc. 2006, 57, 1143. (35) Sankararao, B.; Gupta, S. K. Multi-objective Optimization of the Dynamic Operation of an Industrial Steam Reformer using the Jumping Gene Adaptations of Simulated Annealing. Asia-Pac. J. Chem. Eng. 2006, 1, 21. (36) Sankararao, B.; Gupta, S. K. Multi-objective Optimization of Pressure Swing Adsorbers (PSAs) for Air Separation. Ind. Eng. Chem. Res. 2007, 46, 3751. (37) Halim, I.; Srinivasan, R. Design of Sustainable Batch Processes Through Simultaneous Minimization of Process Waste, Cleaning Agent and Energy. Comput.-Aided Chem. Eng. 2009, 27, 801. (38) Deb, K. Multi-objective Optimization Using Evolutionary Algorithms; Wiley: Chichester, UK, 2001. (39) Kasat, R. B.; Gupta, S. K. Multiobjective Optimization of an Industrial Fluidized-Bed Catalytic Cracking Unit (FCCU) Using Genetic Algorithm (GA) with the Jumping Genes Operator. Comput. Chem. Eng. 2003, 27, 1785. (40) Mori, B. D.; de Castro, H. F.; Cavalca, K. L. Development of Hybrid Algorithm Based on Simulated Annealing and Genetic Algorithm to Reliability Redundancy Optimization. Int. J. Quality and Rel. Manage. 2007, 24, 972. (41) Chambers, J. M.; Cleveland, W. S.; Kleiner, B.; Tukey, P. A. Graphical Methods for Data Analysis; Wadsworth: Belmont, CA, 1983. (42) Jain, A. K.; Dubes, R. C. Algorithms for Clustering Data; Prentice-Hall: Englewood Cliffs, NJ, 1988.
