Hybrid Optimization Method Based on Membrane Computing

Ind. Eng. Chem. Res. 2011, 50, 1691–1704


Jinhui Zhao and Ning Wang*
National Laboratory of Industrial Control Technology, Institute of Cyber-Systems and Control, Zhejiang University, Hangzhou 310027, China

A hybrid optimization method combining an improved bioinspired algorithm based on membrane computing (IBIAMC) with a sequential quadratic programming (SQP) algorithm, abbreviated HBS, is proposed to overcome difficulties in solving complex constrained problems. The netted membrane structure of HBS is based on the distributed parallel computational mode of membrane computing (MC) and is inspired by the shape, structure, and function of the Golgi apparatus of eukaryotic cells. The localized subalgorithms of the proposed hybrid method are translated from the local rules used in the different membranes. When the hybrid method is applied to an optimization problem, these subalgorithms operate in an orderly fashion, in accordance with their probabilities, on objects that each contain a tentative solution; simultaneously, a communication object comprising the best objects is transferred between membranes according to the communication rule. The search capacity of the proposed method is ensured by both the global search subalgorithm of the improved BIAMC and the local search subalgorithm of SQP. Eight benchmark constrained problems are used to test the performance of the hybrid method, and two simulation examples, a gasoline blending scheduling problem and the process design of the Williams-Otto flowsheet, are then used to validate the proposed algorithm.

1. Introduction

Optimization theory and techniques have been applied widely in science, engineering, and industry. Once the mathematical model of an optimization problem has been formulated, an algorithm can be used to find its solution (although, if the model is too complex, it may be very difficult to solve). Deterministic, stochastic, and hybrid optimization algorithms have been studied extensively for solving continuous and discontinuous, unconstrained and constrained optimization problems, and many monographs1-4,13 and literature reports5-12 have been published.
Generally, for each sort of optimization problem there is a particularly appropriate type of optimization algorithm that also meets other requirements such as reliability and cost effectiveness. The behavior of optimization algorithms in solving such problems, e.g., their robustness and accuracy, is then used to evaluate their performance. When proper mathematical models and good initial solutions are available, mathematical programming methods, such as linear programming (LP), nonlinear programming (NLP), mixed-integer linear programming (MILP), and mixed-integer nonlinear programming (MINLP), are probably the best choice. Moreover, deterministic optimization methods have been adopted in commercial software such as LINGO, CPLEX, and GAMS for their reliability and high efficiency. However, many optimization problems in industrial engineering are formulated as combinatorial or highly nonlinear constrained models whose solutions cannot be found by deterministic methods or MINLP techniques without considerable knowledge, skill, and persistence. They are typically NP-hard or NP-complete problems, such as job shop scheduling and production scheduling. For decades, dozens of heuristic algorithms have been proposed and applied widely to solve these complex optimization problems.13-17 Nevertheless, most heuristic algorithms have flaws in local search capacity and constraint handling, and the times required to find precise optimum

* To whom correspondence should be addressed. E-mail: [email protected].

solutions of different problems are indeterminate. Therefore, a simple heuristic optimization technique alone is rarely used in application software. Recently, some hybrid methods, which tend to integrate the advantages of heuristic and deterministic optimization methods, have been investigated for better usefulness, robustness, accuracy, and efficiency.18-20 The complexity of hybrid methods is high, especially the interactive running of a global search subalgorithm and a local search subalgorithm during each iteration cycle. Additionally, the performance of these methods has not been evaluated in a systematic way.

In this article, a hybrid algorithm combining an improved bioinspired algorithm based on membrane computing (IBIAMC) with the sequential quadratic programming (SQP) algorithm (abbreviated as HBS) is proposed for solving complex constrained problems. The integration of IBIAMC and SQP is serial, to preserve the computational mechanisms of IBIAMC to the greatest extent. The hybrid algorithm retains the algorithmic framework of BIAMC. Its membrane structure is constructed by embedding a membrane S into the membrane structure used in BIAMC;33 membrane S is filled with a sequential quadratic programming algorithm, called the SQP rule or SQP subalgorithm.

Membrane computing (MC), also known as P systems, was introduced in computer science by Gheorghe Păun in 2000;21 it aims to abstract computing ideas and models from the structure and functioning of living cells, as well as from the way that cells are organized in tissues or higher-order structures.22 The essential ingredients of MC are membrane structures, which follow a hierarchical arrangement of membranes in which various chemicals (objects) operate with certain probabilities (in some models) according to different local reaction rules.
In this computing model, all membranes evolve simultaneously; that is, the objects can evolve in a maximally parallel, nondeterministic manner. The objects can be described by symbols (multisets of objects) or by strings of symbols (multiset of language).23 The reaction rules mainly include creating and dissolving membranes, indicating targets, communicating, rewriting, cooperating and noncooperating multisets of objects, and so on.
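As a toy illustration of this computing model (not code from the paper), the sketch below performs one evolution step of symbolic multiset rewriting inside a single membrane: rules keep consuming the objects present at the start of the step until no left-hand side fits, and all products appear together at the end of the step. The rules, objects, and the deterministic rule scheduling are all invented for illustration; real P systems are nondeterministic.

```python
from collections import Counter

def apply_max_parallel(multiset, rules):
    """One maximally parallel evolution step of a toy P system membrane.
    `multiset` is a string of object symbols; each rule is a pair
    (lhs, rhs) meaning "consume the objects of lhs, produce rhs".
    Objects produced during the step are not consumed until the next step."""
    remaining = Counter(multiset)   # objects not yet consumed this step
    produced = Counter()            # objects created this step
    fired = True
    while fired:                    # fire rules until none is applicable
        fired = False
        for lhs, rhs in rules:
            need = Counter(lhs)
            if all(remaining[o] >= k for o, k in need.items()):
                remaining -= need
                produced += Counter(rhs)
                fired = True
    return remaining + produced

# two rules competing for the same objects: "ab -> c" and "a -> aa"
rules = [("ab", "c"), ("a", "aa")]
step = apply_max_parallel("aab", rules)
```

With the multiset "aab", the first rule consumes one a and the b, the second consumes the remaining a, and the step ends with the multiset {a, a, c}.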

10.1021/ie101002n © 2011 American Chemical Society. Published on Web 01/26/2011.


As a class of distributed and parallel computational models with the computational universality property, MC using some type of "time acceleration" goes beyond the power of universal Turing machines and can solve the halting problem in a bounded, known (external) time.24 MC techniques (P systems) are all theoretical models that have never been reduced to practice.25 As a framework for devising compartmentalized models, membrane computing has been applied to various problems,26-30 especially to solve NP-complete problems in polynomial time by creating exponentially many membranes (i.e., workspace). At the same time, a few optimization algorithms inspired by MC have been applied to optimization problems. Membrane algorithms have been used to solve the traveling salesman problem successfully.16 Single- and multiobjective optimization algorithms inspired by MC have been investigated for problems with large feasible spaces and many parameters,31,32 and the reported results demonstrated their good performance. Nevertheless, the problems solved were unconstrained. The bioinspired algorithm based on membrane computing (BIAMC) was proposed for solving both unconstrained and constrained problems;33 the experimental results show that this algorithm can find optimal or close-to-optimal solutions efficiently. However, the efficiency of BIAMC in solving constrained problems is inferior to that of deterministic optimization methods, especially for problems with equality constraints. Consequently, the hybrid method HBS is investigated herein.

We test the proposed hybrid method (HBS) with eight benchmark constrained problems before applying it to practical problems. Two simulation examples, the gasoline blending scheduling problem and the flowsheet design of the Williams-Otto process, are used to validate the practicability of the proposed algorithm.

2. Hybrid Optimization Method Based on Membrane Computing

2.1. Procedure.
Minimization problems, or problems revised into minimization form, are considered in this article. The procedure flow diagram of the proposed hybrid method is shown in Figure 1, which depicts the main steps of HBS graphically. The rules of rewriting, pairing, and selection reside in each parallel membrane; the selection rule works whenever the rewriting rule operates. The rules of target indication, transition, abstraction, and selection are distributed in membrane G. The numerical string operations of these rules are summarized in the following subsections.

2.2. Membrane Structure and Rules. The membrane structure and a schematic of the computing process of HBS are illustrated in Figure 2, where 1, ..., m represent the labels of the parallel identical membranes; G and S denote the quasi-Golgi and SQP membranes, respectively; asterisks (*) and points (•) denote objects and rules, respectively; and arrows (→) indicate the direction of the communication rule, which is symport (i.e., the communication object transfers between membranes in the given direction). The rules are distributed only in the membranes from which objects are sent out, and the sent objects are deleted after the rules are run. An object comprises a solution vector and other useful parameters, such as the values of the objective function and the constraint functions. In Figure 2, certain initial objects are generated randomly and assigned sequentially to each parallel membrane by the surface membrane 0, and then the rules of the parallel membranes start to operate on the objects. The first subalgorithm cycle of the IBIAMC subalgorithm is depicted schematically in subgraphs a-d; it consists of applying certain rules from membrane 1

Figure 1. Main loops of HBS.

to membrane m, with the quasi-Golgi membrane activated up to that time. The communication object transfers among membranes 0, 1, ..., m, and G during the IBIAMC computation. Subgraphs e and f describe the last subalgorithm cycle of the IBIAMC subalgorithm. Subgraph g shows the communication of objects between the quasi-Golgi membrane and membrane S after membrane S is activated; that is, the communication object is transferred to the SQP subalgorithm when the computation of the IBIAMC subalgorithm is finished. Subgraph h shows that the end objects are exported after the rules in membrane S operate. When the computation of the hybrid algorithm terminates, the end object (minimum or maximum) is exported by membrane 0.

The rules in the parallel membranes and in the quasi-Golgi membrane are summarized briefly as follows: In each parallel membrane, the rewriting rule includes crossover and mutation modes, introduced from the crossover and mutation operators of evolutionary computation (EC), to rewrite the solution vectors of objects and their elements; this is different from the approach reported by Zhao and Wang.33 The crossover mode has three steps: step 1, arithmetic crossover, and step 2, two-point crossover, are applied to randomly selected objects according to a probability, whereas step 3, arithmetic crossover, is applied to every two adjacently numbered objects. The mutation mode adds a random increment to an element of the solution vector according to a probability. The pairing rule, inspired by the difference


Figure 2. Membrane structure and schematic computational processes of HBS: (a-d) communication from membrane 1 to m in one algorithm cycle, (e,f) concise depiction of the rest of the algorithm cycles, (g,h) communication object transfer to membrane S and the output.

in membrane protein fluidity in cell membranes, such as immobility, highly directed movement, restricted movement, and random movement,34 limits the randomly rewritten elements to their variable ranges by folding them on the upper and lower bounds.

(1) Selection Rule. The objects with the minimal objective function value are selected from the candidate objects according to the selection rule.

(2) Rewriting Rule. (A) Crossover Mode. The crossover mode of the rewriting rule

Si --crossover mode--> S'i    (1)

is described by the following steps:

Step 1 (arithmetic crossover), if p ≤ pcm:

Si, Sk --arithmetic crossover--> S'i, S'k
S'i, Si, S'k, Sk --selection rule--> S'i, S'k
where S'i = ηSi + (1 - η)Sk, S'k = (1 - η)Si + ηSk
η ∈ [0, 1]; i, k ∈ [1, n]

Step 2 (two-point crossover), if p ≤ pcm:

Si, Sk --two-point crossover--> S'i, S'k
Si, S'i, Sk, S'k --selection rule--> S'i, S'k
where Si = (si1, si2, ..., si,l-1, sil), Sk = (sk1, sk2, ..., sk,l-1, skl)
S'i = (si1, ska, ..., skb, sil), S'k = (sk1, sia, ..., sib, skl)
i, k ∈ [1, n]; a, b ∈ [1, l]; a < b

Step 3 (arithmetic crossover on adjacent objects):

Si, Si+1 --arithmetic crossover--> S'i, S'i+1
S'i, Si, S'i+1, Si+1 --selection rule--> S'i, S'i+1
where S'i = ηSi + (1 - η)Si+1, S'i+1 = (1 - η)Si + ηSi+1
η ∈ [0, 1]; i = 1, ..., n - 1

Here, pcm is the probability of the crossover mode; p and η are uniformly distributed random numbers; i, k, a, and b are random integers; n is the number of objects sent into a parallel membrane; l is the number of variables in the solution vector of object Si; and sij is the jth element of the solution vector. When the numbered objects in a parallel membrane start to evolve by the crossover mode, first, n uniformly distributed random numbers p are generated. If and only if p is less than pcm, two objects Si and Sk are selected randomly and replicated, and the two copies are rewritten according to step 1. If the rewritten copies have smaller objective function values than the initial objects (either both of them or one), each better copy replaces the initial object it improves on; otherwise, the rewritten copies are deleted. The objects are then treated similarly in step 2: from generating n uniformly distributed random numbers p to selecting two random objects, the operation is the same as in step 1; after the selected objects are duplicated, two vector positions a and b are selected in the two copies, and the elements of the two vectors from a to b are interchanged, so that the copies are rewritten. The selection rule runs likewise. In step 3, every two copies of neighboring objects are rewritten as in step 1, without the control of the crossover-mode probability pcm.

(B) Mutation Mode. The rule is described by

Si --mutation mode--> S'i    (2)

where Si = (si1, ..., sil), S'i = (s'i1, ..., s'il)
s'ij = sij + β (if pij ≤ pmm)
β = widej × nrand/[1.2 × (c × ml)^1.1]
pij ∈ [0, 1]; i = 1, ..., n; j = 1, ..., l

Here, pmm is the probability of the mutation mode; β is a random increment for an element; widej is the variable range of element j and equals sjmax - sjmin; nrand is a pseudorandom variable drawn from the standard normal distribution; and c


and ml are the current values of the subalgorithm cycle and parallel membrane, respectively. When an object is rewritten, the mutation mode of the rewriting rule first generates l uniformly distributed random numbers p and allocates them to the elements one by one. If and only if p is less than pmm, the associated element is selected to be treated by the mutation mode. For example, the mutation mode generates a value β and adds it to element sij, so that the element is rewritten. If the objective function value mapped from the solution vector with the rewritten element s'ij is less than the former value, the mutated element s'ij replaces element sij; otherwise, element sij is kept.

(3) Pairing Rule. During the mutation mode of the rewriting rule, if a randomly rewritten element exceeds its variable range, the element is kept in its range by folding. The rule is formulated as follows

Si --pairing rule--> S'i    (3)

where Si = (si1, ..., sil), S'i = (s'i1, ..., s'il), s'ij = cj; i = 1, ..., n; j = 1, ..., l
cj = 2sijmax - sij (if sij > sijmax), then cj = 2sijmin - cj (if cj < sijmin)
cj = 2sijmin - sij (if sij < sijmin), then cj = 2sijmax - cj (if cj > sijmax)
cj = sjmin + widej × η (if cj > sijmax ∨ cj < sijmin)
cj = sij (if sijmin ≤ sij ≤ sijmax)
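As an illustrative sketch (in Python, with the toy objective, bounds, and parameter values all invented here), the mutation mode of eq 2, the boundary folding of the pairing rule in eq 3 (including the spare uniform redraw), and one arithmetic-crossover step might look as follows. The "keep the best" acceptance below is a simplified greedy stand-in for the paper's selection rule.

```python
import random

def fold(x, lo, hi):
    """Pairing rule (eq 3): reflect an out-of-range element back into
    [lo, hi]; after two reflections, fall back to a uniform redraw."""
    if lo <= x <= hi:
        return x
    c = 2 * hi - x if x > hi else 2 * lo - x
    if c < lo:
        c = 2 * lo - c
    elif c > hi:
        c = 2 * hi - c
    if not (lo <= c <= hi):                # rare "spare operation"
        c = lo + (hi - lo) * random.random()
    return c

def arithmetic_crossover(si, sk, f):
    """Step 1/3 of the crossover mode: blend two objects with a random
    weight and let the best two of parents and copies survive."""
    eta = random.random()
    ci = [eta * a + (1 - eta) * b for a, b in zip(si, sk)]
    ck = [(1 - eta) * a + eta * b for a, b in zip(si, sk)]
    best = sorted([si, sk, ci, ck], key=f)[:2]   # simplified selection
    return best[0], best[1]

def mutate(si, f, bounds, pmm, c, ml):
    """Mutation mode (eq 2): add a scaled normal increment per element
    with probability pmm; keep a change only if it improves f."""
    s = list(si)
    for j, (lo, hi) in enumerate(bounds):
        if random.random() <= pmm:
            beta = (hi - lo) * random.gauss(0, 1) / (1.2 * (c * ml) ** 1.1)
            trial = list(s)
            trial[j] = fold(s[j] + beta, lo, hi)
            if f(trial) < f(s):
                s = trial
    return s

# toy usage on a sphere objective
random.seed(1)
f = lambda v: sum(x * x for x in v)
bounds = [(-5.0, 5.0)] * 3
a, b = arithmetic_crossover([4.0, -3.0, 2.0], [-1.0, 2.0, 0.5], f)
m = mutate(a, f, bounds, pmm=0.9, c=1, ml=1)
```

By construction, the crossover survivor is at least as good as either parent, the mutated object is at least as good as its input, and folding keeps every element inside its bounds.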

When cj is larger than 3 times the variable range (which occurs rarely, depending on the generated pseudorandom numbers), the spare operation works after cj has been folded twice; that is, cj = sjmin + widej × η.

(4) Communication Rule. This rule sends a communication object from the current parallel membrane or the quasi-Golgi membrane into the quasi-Golgi membrane, the next parallel membrane, or both. In this article, the communication object contains the four objects with the minimum objective function values selected from the current membrane. The communication object sent into membrane S comprises the best value calculated by the IBIAMC subalgorithm. When the quasi-Golgi membrane is activated, the rules of target indication, transition, abstraction, selection, and communication start to work, and the objects sent into the quasi-Golgi membrane are modified, reconstructed, and recombined. Whether the quasi-Golgi membrane is activated or not, the directions in which the objects shift toward the optimum solution are calculated: First, the four ordered objects sent by the communication rule are recorded. Then, the direction of each element of the objects is calculated by comparison with the previous four objects sharing the same sequence numbers sent by the communication rule. The positive, negative, or zero direction of each element is picked up, and the direction of the target indication vector is calculated. After the directions of the four target indication vectors have been stored, the communication object is deleted (when the quasi-Golgi membrane is not activated). The activating condition of the quasi-Golgi membrane and the operation of the rules can be set flexibly according to the complexity of the optimization problem.

(5) Target Indication Rule. The target indication rule generates objects near the current objects sent by the communication rule. When the quasi-Golgi membrane is activated, the target indication vectors λ and their reverse vectors are generated. The direction of each target indication vector depends on the sum of the changes of the directions; i.e., the direction of each element of the target indication vector is given by summing the directions calculated before. The magnitude of each element of the target indication vector is set according to its variable range. The target indication objects are built by adding the target indication vectors to the current objects. The rule is described as follows

Si --target indication rule--> Si1, Si2    (4)

where Si = (si1, ..., sil), Si1 = Si + λi, Si2 = Si - λi
λi = (ai1, ..., ail), |aij| = η × widej

λi is the target indication vector for object i, and aij is element j of vector λi.

(6) Transition Rule. Inspired by the rotational movement of membrane proteins, the transition rule rotates the solution vector in an object by rearranging its elements randomly. The rule begins with the creation of a uniformly distributed random number p for each element. Then, from the first element to the last, if and only if pj for element sj is less than the probability of the transition rule, pt, and, simultaneously, the randomly selected element stj lies within the variable range of element sj and sj lies within the variable range of element stj, sj is exchanged with stj. The rule is described mathematically as

S --transition rule--> S'    (5)

where S = (s1, s2, ..., sl), S' = (st1, st2, ..., stl)
c = stj, stj = sj, sj = c [if (pj ≤ pt) ∧ (sjmin ≤ stj ≤ sjmax) ∧ (stjmin ≤ sj ≤ stjmax)]
tj = max(1, ⌊ηl⌋); pj ∈ [0, 1]; j = 1, ..., l

Here, ⌊·⌋ denotes rounding down to an integer (floor), and tj is a uniformly distributed random integer between 1 and l.

(7) Abstraction Rule. When the abstraction rule acts on two objects, the elements from object 2 that can minimize object 1 are abstracted. This rule begins with the generation of a uniformly distributed random number p for each element of the solution vector in object 1. Then, from element 1 to element l, if and only if p for an element such as s1j is smaller than pa, each element of object 2 within the variable range of element s1j replaces it in turn; the element of object 2 that minimizes object 1 is abstracted and replaces element s1j. Otherwise, the initial element s1j is kept. The rule is described as follows

S1, S2 --abstraction rule--> S', S2    (6)

where S1 = (s11, s12, ..., s1l), S2 = (s21, s22, ..., s2l), S' = (s1, s2, ..., sl)
sj ∈ (s21, s22, ..., s2l, s1j) [if pj < pa ∧ f(S') < f(S1)], j = 1, ..., l

2.3. Handling Constraints in the IBIAMC Subalgorithm. Here, the quadratic penalty method is employed because of its simplicity and intuitiveness, although it has some important disadvantages.2 The revised minimization objective function with penalty functions is

f'(x, µ) = f(x) + µ Σ(ik=p+1..m) {max[0, gik(x)]}² + µ Σ(ek=1..p) ({min[0, hek(x) + δ]}² + {max[0, hek(x) - δ]}²)    (7)

where min(x∈X) f(x), X = [x1, x2, ..., xn]
s.t. xlj ≤ xj ≤ xuj, j = 1, ..., n
hek(x) = 0; ek = 1, 2, ..., p
gik(x) ≤ 0; ik = p + 1, ..., m

In this expression, X ∈ Rn is the set of n decision variables of the solution; xlj and xuj are the lower and upper bounds, respectively, of variable xj; and f(x) is the objective function, which must satisfy m - p inequality constraints [gik(x) represents the ikth inequality constraint] and p equality constraints [hek(x) represents the ekth equality constraint]. µ > 0 is the penalty parameter, and δ is the allowed degree of violation for the equality constraints.

It has been demonstrated that, as the penalty parameter µ approaches infinity, the quadratic penalty function converges to the same global solution as the original constrained problem. In practice, however, µ cannot be set too large in heuristic algorithms; otherwise, feasible solutions cannot be obtained or are of poor quality. Because of the random computational mechanism of heuristic algorithms, the penalty parameters are more difficult to set than in deterministic algorithms. Nevertheless, there are some regular patterns, gained by trial and error, that can be followed. The penalty parameter balances the objective and penalty terms depending on the dominance of the penalty and objective functions: if the value of the objective function is very large, the penalty parameter µ can be set large, and vice versa. It is better for the orders of magnitude of the objective function and the sum of the penalty terms to be close when the search starts to explore infeasible regions. Likewise, if the feasible space of the problem is not too small compared to the search space, the penalty parameter can be set small, and vice versa.

2.4. Membrane of S. Membrane S contains an SQP rule (SQP subalgorithm) that, when activated, receives the object sent by the communication rule and sends out the object found by the SQP rule. Here, only the best solution in the communication object is sent to membrane S.

SQP methods represent the state of the art in nonlinear programming and are widely used for solving constrained optimization problems because of their superlinear convergence. The frameworks of SQP methods are similar in that a quadratic programming (QP) subproblem must be solved at each iteration of the SQP algorithm: the QP subproblem is generated from an approximation of the Hessian of the Lagrangian function obtained using a quasi-Newton updating method, and its solution is used to form a step and a search direction for a line search procedure. The problems to be solved can be nonlinear and nonconvex but must be differentiable. The SQP rule in membrane S applies the function fmincon from MATLAB's optimization toolbox. The active-set algorithm, one of the algorithms that fmincon uses, is a sequential quadratic programming method: it solves a QP subproblem at each iteration, updates an estimate of the Hessian of the Lagrangian at each iteration using the BFGS formula, and performs a line search using a merit function. The three main stages of the active-set optimization are discussed briefly in the following subsections.

(1) Updating the Hessian Matrix. The positive-definite approximation of the Hessian, H, is calculated using the BFGS method

Hk+1 = Hk + qkqkT/(qkTsk) - HkskskTHk/(skTHksk)    (8)

qk = [∇f(xk+1) + Σ(i=1..m) λi∇gi(xk+1)] - [∇f(xk) + Σ(i=1..m) λi∇gi(xk)], sk = xk+1 - xk

where λi, i = 1, ..., m, is an estimate of the Lagrange multipliers. A positive-definite Hessian is maintained provided that qkTsk is positive at each update and H is initialized with a positive-definite matrix. When qkTsk is not positive, qk is modified on an element-by-element basis so that qkTsk is greater than or equal to a small negative tolerance.

(2) Quadratic Programming Solution. At each major iteration of the SQP method, a QP problem of the following form is solved

min(d∈Rn) q(d) = (1/2)dkTHkdk + ∇f(xk)Tdk
s.t. [∇hi(xk)]Tdk + hi(xk) = 0, i = 1, ..., p    (9)
[∇gi(xk)]Tdk + gi(xk) ≤ 0, i = p + 1, ..., m

where xk is the current feasible solution. The solution dk is a search direction used to form the new iterate xk+1. The method used in this optimization toolbox is an active-set strategy. If a feasible solution is not found for this QP problem, the search direction dk for the main SQP routine is taken as the solution that minimizes a given linear programming problem.

(3) Line Search and Merit Function. The vector dk is used to form a new iterate

xk+1 = xk + akdk    (10)

ak is the step-length parameter, which is determined by a line search so as to produce a sufficient decrease in a merit function; the merit function is also used for setting the penalty parameter.

3. Simulation Study

Eight widely used constrained problems, comprising different numbers of decision variables and different characteristics (e.g., quadratic, nonlinear, polynomial), are selected for investigating the performance of HBS before the application study. The different properties of the objective and constraint functions (e.g., linear or nonlinear, unimodal or multimodal, differentiable or nondifferentiable) determine the complexity of the problems. The mathematical representations of these problems are listed in the Appendix. The objective functions of the eight constrained problems are revised by eq 7 before being solved.

3.1. Numerical Simulation. To analyze the merits and demerits of the performance of HBS, we evaluate the hidden results calculated by the IBIAMC subalgorithm in addition to the test results of HBS. We also execute the SQP method alone on these problems and evaluate its results. Simultaneously, the reported results from two well-known algorithms using specially designed constraint-handling methods are introduced, namely, Runarsson and Yao's stochastic ranking method based on evolutionary strategy35 (abbreviated as RY)


Table 1. Performance of HBS, IBIAMC Subalgorithm, and SQP^a

algorithm^a   best          middle         mean          worst          avg t^b (s)  avg FET^c  std dev^d

Function f1
HBS           7049.248021   7049.248021    7049.248021   7049.248021    0.535        6385       2.4 × 10^-9
IBIAMC        7003.157089   8968.397715    9255.318977   14118.087559   0.305        6078       1.5 × 10^3
SQP           7049.248021   7049.248021    7106.817793   10410.210061   0.339        461        3.9 × 10^2

Function f2
HBS           -1.000003     -1.000000      -1.000000     -0.999992      0.507        7995       1.4 × 10^-6
IBIAMC        -1.004864     -0.950021      -0.931006     -0.713023      0.282        7742       5.9 × 10^-2
SQP           -1.000001     -0.999999      -0.762498     0              0.303        530        4.2 × 10^-1

Function f3
HBS           -0.803619     -0.597744      -0.626091     -0.296018      1.221        16100      1.2 × 10^-1
IBIAMC        -0.790841     -0.571811      -0.571811     -0.437674      0.960        15561      9.7 × 10^-2
SQP           -0.361482     -0.202569      -0.199738     -0.105831      0.510        693        4.7 × 10^-2

Function f4
HBS           680.630057    680.630057     680.630057    680.630058     0.276        5939       1.5 × 10^-7
IBIAMC        681.863605    697.347615     702.080380    741.619633     0.132        5792       1.6 × 10^1
SQP           680.630057    680.630057     680.630057    680.630058     0.323        349        2.1 × 10^-7

Function f5
HBS           -1.000000     -1.000000      -1.000000     -0.999996      0.851        3684       7.0 × 10^-7
IBIAMC        -1.000000     -1.000000      -1.000000     -0.999996      0.799        3680       7.0 × 10^-7
SQP           -1.000000     -0.554016      -0.601824     -0.554016      0.109        58         1.2 × 10^-1

Function f6
HBS           0.053950      0.438851       0.337481      1.000000       0.484        4830       2.7 × 10^-1
IBIAMC        0.092778      0.831045       1.369279      11.046587      0.326        4747       1.9 × 10^0
SQP           0.053950      0.438851       0.446385      1.001271       0.126        145        3.0 × 10^-1

Function f7
HBS           -47.761091    -47.761090     -47.761090    -47.761087     0.566        8053       1.1 × 10^-6
IBIAMC        -45.948504    -43.390076     -43.147387    -38.518898     0.306        7741       1.4 × 10^0
SQP           -47.761091    -47.761090     -47.761090    -47.761085     0.222        390        9.5 × 10^-7

Function f8
HBS^e         -0.095825     -0.095825      -0.095825     -0.095825      0.206        3307       1.4 × 10^-7
IBIAMC^e      -0.095825     -0.095793      -0.095704     -0.094601      0.114        3292       2.2 × 10^-4
SQP^e         -0.095825     -5.97 × 10^-13 NaN^f         NaN^f          0.195        69         NaN^f
HBS           -0.105460     -0.105459      -0.105459     -0.105459      0.157        3306       1.1 × 10^-7
IBIAMC        -0.105458     -0.105393      -0.105308     -0.104362      0.094        3290       2.2 × 10^-4
SQP           -0.105460     7.1 × 10^-16   NaN^f         NaN^f          0.172        108        NaN^f

a Name of subalgorithm or algorithm. b Average CPU time. c Average number of function evaluations. d Standard deviation of test results. e Maximize. f NaN: not a number.

and Chootinan and Chen's GA with a gradient-based repair method36 (abbreviated as CC), as well as those from the heuristic method of BIAMC.33 The parameter settings of HBS are used in the IBIAMC and SQP subalgorithms. Those used in the IBIAMC subalgorithm are the number of repeated trials, Nr = 50 (which is used to compute a statistic of the rates of success in reaching the optimum solution); the number of parallel membranes, Nml = 4; the number of subalgorithm cycles for one trial, C = 5; the number of objects sent into each parallel membrane by the surface membrane, No = 20; the number of objects in a communication object, Nco = 4; pcm = 0.9; pmm = 0.9; pt = 0.5; and pa = 1. The increment used in the mutation mode is (nrand × widej)/[1.2 × (c × ml)^1.1]. The magnitude used for the target indication vectors is widej × 0.0618/(c × ml). The activating condition of the quasi-Golgi membrane is that the product of the current value of the subalgorithm cycle and the current value of the parallel membrane is exactly divisible by 2 after one subalgorithm cycle. Membrane S starts to run as soon as all of the cycles of the IBIAMC subalgorithm finish. All equality constraints of the functions are transformed into inequality constraints using the degree of violation δ = 0.001. The penalty parameters used for the eight problems are 20000, 10000, 10, 600, 0.00001, 1000000, 1000000, and 1000000, respectively. The main optional parameter settings of the SQP subalgorithm (function fmincon) are MaxIter, 100; MaxFunEvals, 1000; Display, iter; GradObj, off; Hessian, off; and GradConstr, off.

When the SQP method (function fmincon) runs alone, the optional parameter settings are the same as those used in HBS. The initial solutions of the SQP method are the same as the initial objects generated by HBS in the last run. Because 80 initial objects are generated by HBS in each run, the number of repeated trials for the SQP method differs from that for HBS. All tests were executed on a notebook PC with a 1.73 GHz processor and 768 MB of RAM.

The numerical results for the best, middle, mean, and worst values of the objective functions and the statistics of the standard deviation, average CPU time, and average number of function evaluations calculated by HBS and the SQP method are listed in Table 1, along with the hidden results calculated by the IBIAMC subalgorithm. The convergence curves of the best, middle, mean, and worst values calculated by the IBIAMC subalgorithm are shown in Table 2 to reveal visually the contribution of the IBIAMC subalgorithm to the integral capability of HBS. In Table 2, the curves of bestF/middleF/meanF/worstF delineate the progress of the revised objective function value with increasing values of the subalgorithm cycle and parallel membrane. The curves of log10|f| for bestF/middleF/meanF/worstF display the absolute values of their


Table 2. Calculating Process of the IBIAMC Subalgorithm

common logarithms. The results from RY, CC, and BIAMC, which were executed for 30 independent runs or 50 continuous runs on each test problem, are shown in Table 3. The other algorithmic parameter settings can be obtained from the literature.33,35,36 The results in Table 1 show that the integrated performance of HBS combines the merits of the two subalgorithms in terms of time, robustness, and computational accuracy in approaching the optimal solution. The graphs in Table 2 show that the heuristic subalgorithm of IBIAMC works efficiently. The curves of log10|f| for bestF/middleF/meanF/worstF reveal that the convergence rate differs from problem to problem according to the complexity of the problem, although the algorithm definitely converges. The comparison with the other three algorithms in Table 3 shows that the comprehensive performance of HBS is superior even when a gradient-based or other method is employed to handle constraints. Additionally, all of the success rates of HBS in approaching the optimal solutions are 1 except for f3 and f6. For f5, the solutions calculated by the IBIAMC subalgorithm are so good that the SQP subalgorithm does not need to refine them further. The optimal solution of f7 found by HBS is x* = (0.0406849236100804, 0.147780342511575, 0.783102914481044, 0.00140640086074316, 0.485243796211523, 0.000694218750979332, 0.0274117879652315, 0.0179543923233380, 0.0373382807883422, 0.0968543436537025), where f(x*) = -47.7610908102858. This objective function value is less than that of the best-known solution, whereas its degree of deviation from the bound of the equality constraints (1 × 10^-6) is far less than that of the best-known solution (1 × 10^-4). For f8, the reported optimum solution is the maximum solution, and this solution is found perfectly. Here, the minimum solution is also found, which is x* = (1.22719142641355, 3.74684893827851), with f(x*) = -0.105449212217375. Because the objective functions of f3 and f6 are multimodal, nondifferentiable, and highly nonlinear, their success rates are no more than 0.1 and 0.4, respectively. The SQP method alone fails to find the best-known solution of f3; moreover, when an initial solution offered by the IBIAMC subalgorithm is close to the nondifferentiable boundaries, the SQP subalgorithm can lose the initial minimal solution, as both the objective function value and the constraint violation increase along the indeterminate search direction. For f6, for example, it is hard to guarantee a solution in a definite time. Thus, for more complex problems, the IBIAMC subalgorithm should find and transfer higher-quality solutions to the SQP subalgorithm; an additional test on f3 was therefore performed to seek better performance of HBS. The test results and the parameter settings that differ from those above are listed in Table 4. The averages of the objective function value clearly increase; however, all success rates remain stable, and it cannot be avoided that objects sent to the SQP membrane are sometimes close to nondifferentiable solutions. More tests and investigations are needed on these complex problems to achieve a higher success rate.
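The hand-off that this section describes, a stochastic global phase whose best objects are communicated to an SQP membrane for polishing, can be illustrated in a few lines. The sketch below is not the authors' implementation: benchmark f8 stands in for the test set, a coarse feasible sampling stands in for the IBIAMC membranes, and SciPy's SLSQP routine stands in for MATLAB's fmincon.

```python
import numpy as np
from scipy.optimize import minimize

# Benchmark f8 (maximized): sin^3(2*pi*x1)*sin(2*pi*x2) / (x1^3*(x1+x2))
def f8(x):
    x1, x2 = x
    return np.sin(2*np.pi*x1)**3 * np.sin(2*np.pi*x2) / (x1**3 * (x1 + x2))

# Inequality constraints in SciPy convention (fun(x) >= 0 means feasible):
cons = [
    {"type": "ineq", "fun": lambda x: x[1] - x[0]**2 - 1},        # g1: x1^2 - x2 + 1 <= 0
    {"type": "ineq", "fun": lambda x: x[0] - 1 - (x[1] - 4)**2},  # g2: 1 - x1 + (x2-4)^2 <= 0
]

# --- "Global" phase (stand-in for the IBIAMC membranes): coarse feasible sampling.
cands = [(a, b) for a in np.linspace(1.05, 1.95, 10)
                for b in np.linspace(3.3, 4.7, 15)
         if all(c["fun"]((a, b)) >= 0 for c in cons)]
best0 = max(cands, key=f8)           # best sampled object, handed to SQP

# --- Local phase (stand-in for the SQP membrane): polish the communicated object.
res = minimize(lambda x: -f8(x), best0, method="SLSQP",
               bounds=[(0.1, 10), (0, 10)], constraints=cons)
x_opt, f_opt = res.x, f8(res.x)
print(x_opt, f_opt)   # near x* = (1.2279713, 4.2453733), f(x*) = 0.095825
```

The global phase only needs to land in the right basin of attraction; the local phase then recovers the reported optimum to high accuracy, which mirrors how HBS divides work between the IBIAMC and SQP subalgorithms.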


Table 3. Comparison of Performance between HBS, BIAMC, CC, and RY

Function f1, opt^e = 7049.331
A^a     best        mean        worst       std dev^b      avg FET^c   avg t^d (s)
HBS     7049.2480   7049.2480   7049.2480   2.4 × 10^-9    6385        0.535
BIAMC   -           -           -           -              -           -
CC      7049.2607   7049.5659   7051.6857   0.5699         572629      -
RY      7054.3161   7559.192    8835.655    5.3 × 10^2     350000      -

Function f2, opt^e = -1
HBS     -1.0000     -1.0000     -1.0000     1.4 × 10^-6    7995        0.507
BIAMC   -1          -1          -1          3.78 × 10^-1   9244        0.53
CC^g    0.9999      0.99998     0.99979     1.0 × 10^-4    399804      -
RY      -1          -1          -1          1.9 × 10^-4    350000      -

Function f3, opt^e = 0.803619
HBS     -0.803619   -0.626091   -0.296018   0.1191         16100       1.221
BIAMC   -0.801921   -0.594286   -0.459195   0.0748         87912       6.93
CC^g    0.801119    0.785746    0.745329    1.4 × 10^-2    331972      -
RY      -0.803515   -0.781975   -0.72629    2.0 × 10^-2    350000      -

Function f4, opt^e = 680.630
HBS     680.6301    680.6301    680.6301    1.5 × 10^-7    5939        0.276
BIAMC   -           -           -           -              -           -
CC      680.6303    680.6381    680.6530    6.6 × 10^-3    388453      -
RY      680.630     680.656     680.763     3.4 × 10^-2    350000      -

Function f5, opt^e = -1
HBS     -1.000000   -1.000000   -0.999996   7.0 × 10^-7    3684        0.851
BIAMC   -1.0063     -1.0060     -1.0041     0.0005         31111       1.05
CC      -           -           -           -              -           -
RY      -1.000000   -1.000000   -1.000000   0              35000       -

Function f6, opt^e = 0.053942
HBS     0.053950    0.337481    1.000000    2.7 × 10^-1    3680        0.799
BIAMC   -           -           -           -              -           -
CC      -           -           -           -              -           -
RY      0.053957    0.067543    0.216915    3.1 × 10^-2    350000      -

Function f7, opt^e = -47.764888
HBS     -47.761091  -47.761090  -47.761087  1.1 × 10^-6    8053        0.566
BIAMC   -           -           -           -              -           -
CC      -           -           -           -              -           -
RY      -           -           -           -              -           -

Function f8, opt^e = 0.095825
HBS     -0.095825   -0.095825   -0.095825   1.4 × 10^-7    3307        0.206
BIAMC   -           -           -           -              -           -
CC^g    0.095825    0.095825    0.095825    2.6 × 10^-17   6217        -
RY      -0.095825   -0.095825   -0.095825   2.7 × 10^-9    350000      -

a Name of subalgorithm or algorithm. b Standard deviation of test results. c Average number of function evaluations. d Average CPU time. e Optimum value or the best-known objective function value. f Does not exist (such entries are marked "-"). g Maximization problem.

Table 4. Performance of HBS Tested on f3

c    ml   algorithm   best       middle     mean       worst      avg t (s)   avg FET   std dev
5    10   HBS         -0.80362   -0.65448   -0.67442   -0.50289   2.252       32503     0.0778
10   4    HBS         -0.80362   -0.65294   -0.66032   -0.50741   2.731       39851     0.0814
10   10   HBS         -0.80362   -0.64622   -0.65892   -0.44049   5.306       80465     0.0750

3.2. Case Study on Gasoline Blending Scheduling. We construct a nonlinear optimization application to demonstrate the practicality of the proposed method. This case study is similar to those used in some other works.33,37-40 3.2.1. Problem Description. Gasoline blending is the process of mixing a number of feedstocks with different properties, together with small amounts of additives, according to a given specification.33,37-40 A schematic of the blending process is shown in Figure 3. The objectives of gasoline blending and scheduling are to meet the product quality specifications, the supply of feedstocks, and the product demand forecast; to minimize the inventories of products; and to maximize the profit from products. In this case, the profit objective function of gasoline blending and scheduling is summarized as follows33

Figure 3. Blending process.

maximize  Σ_{d=1}^{td} Σ_{n=1}^{Np} ( P_{d,pn} V_{d,pn} - Σ_{m=1}^{Ncn} P_{d,cn,m} V_{d,cn,m} )    (11)

Table 5. Feedstock Qualities and Economic Data

where td is the scheduling period and d denotes the time interval. Np and Ncn are the total numbers of products and components, respectively. Vd,pn is the volume of product n produced in interval d. Vd,cn,m is the volume of component m used for product n in interval d. Pd,cn,m is the price of component m in interval d for use in product n. Pd,pn is the price of product n in interval d. Specifications on gasoline quality mainly include octane number, volatility, sulfur content, aromatics content, Reid vapor pressure (RVP), and so on. The ethyl RT-70 models are widely adopted to represent the mixing rules for octane numbers and RVP in production control.33,37-40 The ethyl RT-70 models are as follows

RON_blending = r^T x + R1 [r^T diag(s) x - (r^T x)(s^T x)/(e^T x)] + R2 [oS^T x - (o^T x)^2/(e^T x)] + R3 [aS^T x - (a^T x)^2/(e^T x)]    (12)

MON_blending = m^T x + R4 [m^T diag(s) x - (m^T x)(s^T x)/(e^T x)] + R5 [oS^T x - (o^T x)^2/(e^T x)] + (R6/[10000 (e^T x)]) [aS^T x - (a^T x)^2/(e^T x)]^2    (13)

RVP_blending = [ Σ_{m=1}^{Ncn} u_m (RVP_m)^1.25 ]^0.8    (14)

where RONblending, MONblending, and RVPblending denote the blended RON, MON, and RVP, respectively; R1 = 0.03224, R2 = 0.00101, R3 = 0, R4 = 0.04450, R5 = 0.00081, and R6 = -0.00645. x denotes the vector of feedstock flow rates, r is the vector of RONs, m is the vector of MONs, s = r - m, e is a unit vector, o is the olefin content vector (% by volume), oS is the vector of squared olefin contents, a is the aromatic content vector (% by volume), and aS is the vector of squared aromatic contents. um is the volume fraction of component m, and Ncn is the total number of components. Here, eqs 12-14 are considered to represent the quality specifications of blending adequately. 3.2.2. Simulation Results. A typical group of feedstocks comprising paraffins (alkanes), naphthenes (cycloalkanes), and olefins (alkenes) is used in this case study. The main qualities of the components and reference economic data are listed in Table 5.41 The products are two grades of gasoline: regular and premium. The scheduling period is 8 days. It is assumed that the daily supply of components is constant and that the yield meets the daily product demand forecast so as to minimize the inventories of products. After the optimization recipes are calculated, they can be used to guide the purchasing of feedstocks to minimize the inventories of components. The qualities of the products and the market information are listed in Tables 6 and 7,38,41 respectively. Consequently, the data in Tables 5-7 define the bounds of the decision variables, the volumes of production, the material balance equations between products and components, and the constraints on product quality. Thus, there are 80 decision variables, 70 linear constraints (including 3 equality constraints), and 48 nonlinear constraints in this problem. Here, we adapt the two optimization schemes presented by Zhao and Wang33 for comparison and analysis of the algorithmic

index\feedstock     reformate   LSR naphtha   n-butane   catalytic gas   alkylate
RON                 94.1        70.7          93.8       92.9            95.0
MON                 80.5        68.7          90.0       80.8            91.7
olefin (%)          1.0         1.8           0          48.8            0
aromatics (%)       58.0        2.7           0          22.8            0
RVP (psi)           3.8         12.0          138.0      5.3             6.6
available (bbl/d)   12000       6500          3000       4500            7000
cost ($/bbl)        34.0        26.0          10.3       31.3            37.0

Table 6. Qualities and Prices of Products

production   value ($/bbl)   min RON   min MON   max RVP (psi)
regular      33.0            88.5      77.0      10.8
premium      37.0            91.5      80.0      10.8

Table 7. Production Requirements

             demand
time         regular (bbl)   premium (bbl)
day 1        7000-8000       10000
day 2        7000-8000       5000-6000
day 3        6000-7000       9000-10000
day 4        6000-7000       8000-9000
day 5        5500-6500       9000
day 6        7000            9000
day 7        8000            8500-9500
day 8        6000-7000       10000-11000

performance. The daily optimization scheme calculates the daily recipe in succession, which simplifies the computational complexity of this case study. The daily decision variables then number 10, with about 9 linear constraints (equality constraints included) and 6 nonlinear constraints. The unitary optimization scheme calculates one recipe for the whole scheduling period, for which there are 80 decision variables, 70 linear constraints, and 48 nonlinear constraints. The minimized objective function of the two optimization schemes is given by

minimize  f(V) = - Σ_{d=1}^{8} Σ_{n=1}^{2} ( P_{d,pn} V_{d,pn} - Σ_{m=1}^{5} P_{d,cn,m} V_{d,cn,m} )    (15)

where Pd,pn = [33.0, 37.0] and Pd,cn,m = [34.0, 26.0, 10.3, 31.3, 37.0]. The parameter settings for the daily model are the same as those used in the numerical simulations except that C = 3, No = 10, and Nco = 2. The activating condition is that the product of the current number of subalgorithm cycles and the current number of parallel membranes is exactly divisible by 3 after one algorithm cycle. The degree of violation is δ = 0.001. The penalty parameter µ is 120000. The parameter settings used for the unitary optimization scheme are the same as above except that Nml = 5, No = 20, MaxIter = 200, and MaxFunEvals = 15000. The results are shown in Table 8: whether the daily or the unitary optimization scheme is applied, the same recipe is calculated on each run, and the resulting quality indices and profits are listed along with the test statistics. The avg t and avg FET consumed in the SQP subalgorithm are 2.751 s and 2089, respectively. Correspondingly, avg t and avg FET are 172.4244 s and 75115, respectively, under the unitary optimization scheme, of which the SQP subalgorithm takes 21.8906 s and 9833, respectively. Comparing the results obtained under the daily and unitary schemes, the daily optimization scheme is better suited for application, as it has lower computational complexity and is easier to extend with more constraints and a


Table 8. Indexes and Profits of the Products

time    gasoline^a   RON    MON      RVP    profit ($)   daily profit ($)
day 1   R            88.5   77.888   10.8   16315.31     75862.04
        P            91.5   80       10.8   59546.73
day 2   R            88.5   78.252   10.8   20670.53     56580.26
        P            91.5   80       10.8   35909.73
day 3   R            88.5   77.890   10.8   14298.89     73866.2
        P            91.5   80       10.8   59567.31
day 4   R            88.5   77.972   10.8   15238.64     69103.24
        P            91.5   80       10.8   53864.6
day 5   R            88.5   77.981   10.8   14238.84     68103.44
        P            91.5   80       10.8   53864.6
day 6   R            88.5   77.972   10.8   15238.64     69103.24
        P            91.5   80       10.8   53864.6
day 7   R            88.5   77.913   10.8   16654.13     73485.97
        P            91.5   80       10.8   56831.84
day 8   R            88.5   77.861   10.8   13969.27     78599.75
        P            91.5   80       10.8   64630.48

gross^b ($): 564704.14   avg t (s): 7.622   avg FET: 18436

a R denotes regular gasoline, and P denotes premium gasoline. b Total of 8 days' profits.
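As a numerical cross-check of eqs 12-14, the blended qualities in Table 8 can be reproduced from the day-1 regular recipe found by HBS (Table 9) and the feedstock qualities of Table 5. The sketch below works in volume fractions (so e^T x = 1) and takes the R6 aromatics term of eq 13 as squared and scaled by 1/10000; these are this sketch's reading of the RT-70 form, not code from the paper.

```python
import numpy as np

# Feedstock qualities (Table 5): reformate, LSR naphtha, n-butane, catalytic, alkylate
r   = np.array([94.1, 70.7, 93.8, 92.9, 95.0])   # RON
m   = np.array([80.5, 68.7, 90.0, 80.8, 91.7])   # MON
o   = np.array([1.0, 1.8, 0.0, 48.8, 0.0])       # olefins, vol %
a   = np.array([58.0, 2.7, 0.0, 22.8, 0.0])      # aromatics, vol %
rvp = np.array([3.8, 12.0, 138.0, 5.3, 6.6])     # RVP, psi
s, oS, aS = r - m, o**2, a**2

# Day-1 regular recipe found by HBS (Table 9), in bbl
x = np.array([5172.368, 2550.864, 150.5295, 126.238, 0.0])
u = x / x.sum()                                   # volume fractions, e^T u = 1

R1, R2, R3 = 0.03224, 0.00101, 0.0
R4, R5, R6 = 0.04450, 0.00081, -0.00645

def interaction(q):                 # q^T diag(s) u - (q^T u)(s^T u)
    return q @ (s * u) - (q @ u) * (s @ u)

olef = oS @ u - (o @ u)**2          # olefin correction term
arom = aS @ u - (a @ u)**2          # aromatics correction term

ron = r @ u + R1*interaction(r) + R2*olef + R3*arom               # eq 12
mon = m @ u + R4*interaction(m) + R5*olef + (R6/10000)*arom**2    # eq 13
rvp_bl = (u @ rvp**1.25)**0.8                                     # eq 14
print(ron, mon, rvp_bl)
```

Evaluated on this recipe, the three models return RON ≈ 88.5, MON ≈ 77.888, and RVP ≈ 10.8, matching the day-1 regular row of Table 8 and confirming that the RON and RVP specifications are active at the optimum.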

Table 9. Recipes and Qualities for Day 1 Calculated by HBS, BIAMC, GA, and PSO

A^a     gasoline   reformate   naphtha    n-butane   catalytic   alkylate   RON    MON      RVP    profit ($)
HBS     R          5172.368    2550.864   150.5295   126.238     0          88.5   77.888   10.8   75862.0
        P          3817.376    1566.148   222.3424   4373.762    20.373     91.5   80       10.8
BIAMC   R          4119.915    2523.325   135.874    1022.833    128.976    88.5   78.197   10.6   73473.6
        P          4767.480    1433.063   225.583    3426.985    143.892    91.8   80.103   10.7
GA      R          4160.136    2267.804   125.878    1086.041    357.895    89.2   78.721   10.1   68377.2
        P          4042.932    1592.172   217.523    3147.374    1000.00    91.5   80.526   10.8
PSO     R          5913.19     1912.29    174.52     0           0          88.5   77.888   10.8   65987.9
        P          4216.09     883.19     244.94     4500        155.78     91.5   80       10.8

a R denotes regular gasoline, and P denotes premium gasoline.
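The day-1 profits of Table 8 follow directly from the Table 9 recipes, the feedstock costs of Table 5, and the product values of Table 6 (revenue minus component cost, i.e., eq 11 for a single day); a brief check for the HBS recipe:

```python
import numpy as np

cost = np.array([34.0, 26.0, 10.3, 31.3, 37.0])   # feedstock costs, $/bbl (Table 5)
price = {"regular": 33.0, "premium": 37.0}        # product values, $/bbl (Table 6)

# HBS day-1 recipes (Table 9), bbl of each feedstock
recipe = {
    "regular": np.array([5172.368, 2550.864, 150.5295, 126.238, 0.0]),
    "premium": np.array([3817.376, 1566.148, 222.3424, 4373.762, 20.373]),
}

# Profit per grade = product price * blended volume - feedstock cost
profit = {g: price[g] * v.sum() - cost @ v for g, v in recipe.items()}
total = sum(profit.values())
print(profit, total)
```

The computed grade profits and their total agree to within rounding with the 16315.31, 59546.73, and 75862.04 $ reported for day 1 in Table 8.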

Figure 4. Williams and Otto flow sheet.

long-term scheduling period. If possible, it is better to supply the gradient information of the problem to the SQP subalgorithm, in which case the avg t and avg FET under the two schemes will both decrease. The existing results for day 1 reported by Zhao and Rong40 and Zhao and Wang33 are listed in Table 9. The parameter settings reported by Zhao and Rong are as follows: number of particles, 20; dimension of particles, 8; and stopping condition, 500 iterations. The parameter settings reported by Zhao and Wang33 are Nr = 50, Nml = 10, No = 6, C = 10, Nco = 2, pmm = 0.9, and pcm = 0.9. The penalty parameter is 120000, pa = 1, and pt = 0.5. The increment used in the mutation mode is (nrand × widej)/[1.2 × (c × ml)^1.1]. Each element of the target indication vector is widej × 0.0618/(c × ml^1.2). The activating condition is the same as that of HBS. The main optional parameter settings of the ga

function are as follows: number of repeated trials, Nr = 50; PopulationSize, 60; Generations, 1000; MutationFcn, mutationadaptfeasible; CrossoverFcn, crossoverheuristic; CrossoverFraction, 0.9; MigrationFraction, 0.2; SelectionFcn, selectionstochunif; and Display, off. The comparison shows that HBS can find the optimal solutions with higher robustness, efficiency, and accuracy. 3.3. Optimal Design of the Williams-Otto Process for Single Economic Objectives. Williams-Otto (WO) process optimization has been used by many researchers as representative of numerous chemical processes.1,7,40-42 A diagram of the WO process is shown in Figure 4. In this typical process, reactants A and B, mixed with the recycle stream, enter a continuous-flow stirred-tank reactor, where the following reactions take place


A + B → C
C + B → P + E
P + C → G        (16)

Table 11. Optimal Solutions of the Flowsheet Obtained Using HBS

objective function   max NPW      min PBP     max TAC? see below
                     max NPW      min PBP     max PBT      min TAC
R                    0.109        0.0994      0.113        0.102
V (m^3)              3.747        0.872       6.814        7.899
T (K)                351.211      374.650     342.623      342.108
QA (kg/h)            5238.256     6121.996    4956.701     4807.496
QB (kg/h)            11789.293    13945.681   11111.374    10878.388
FCI ($)              3975050.26   925047.07   7228656.32   8379945.34
CF ($/year)          1997750.51   875965.76   2420653.11   2520942.80
NPW ($)              7312685.68   4024354.83  6448573.64   5863943.71
IRR (year^-1)        0.493        0.946       0.313        0.274
HBS avg t (s)        35.522       25.566      34.48        28.747
HBS avg FET          37177        17990       35727        25778

where C is the intermediate, P is the main product, E is the byproduct, and G is the waste product. G is entirely separated from the cooled reactor outlet in a decanter after a heat exchanger. Product P is collected in a distillation column. Some of the product (equivalent to 10 wt % of the mass flow rate of component E) is retained in the column bottoms because of the formation of an azeotrope.42 Part of the bottoms is purged to avoid accumulation of the byproduct, whereas most of it is recycled to the reactor. The purge stream can be sold on the market for its substantial fuel value. 3.3.1. Process Model. The mathematical model of the WO process, excluding the economic and objective functions, consists of 82 variables and 78 equality constraints. The variables are the rate constants of the three reactions (k1, k2, k3); the mass flow rates Qji of component j in stream i (i = 1, ..., 10; j = A, B, C, E, G, P); the mass flow rates Qi of stream i (i = 1, ..., 10); the reactor temperature (T); the reactor volume (V); the mass fractions w3j of component j in stream 3 (w3j = QA3-type flows divided by Q3, j = A, B, C, E, G, P); and the purge fraction (R). The 78 equality constraints arise from mass balances on the reactor, cooler, decanter, and column. Most of the equality constraints can be easily eliminated, leaving six nonlinear equations.43 In this model, we select the same 10 variables as used in NSGA-II-aJG:44 V, T, R, QB, QA, QA3, QB3, QC3, QE3, and QG3. The bounds of the decision variables and the constraints are listed in Table 10.7,42-44 3.3.2. Flowsheet Optimization with Different Profitability Measures. Four objectives presented by Pintaric and Kravanja are employed here (listed in Table 10): the net present worth (NPW), the payback period (PBP), the profit before taxes (PBT), and the total annual cost (TAC).
The total annual cost (TAC) is the sum of the operating cost and the depreciation cost (it does not consider the time value of money or taxes), where the depreciation cost is calculated by

dividing fixed capital investment (FCI) over the lifetime of the project (assumed to be 10 years). Fixed capital investment (FCI) is the capital necessary for the installed process equipment with all components needed for complete process operation. Profit before taxes (PBT) is the difference, without accounting for taxes, between the annual revenue and TAC including depreciation. The payback period (PBP) measures the period of time required to pay off the initial fixed capital investment (FCI) from the annual cash flows (CF). The annual cash flow (CF) is the sum of profit after taxes and depreciation, where the tax rate is taken to be 30% per year. The net present worth (NPW) is the present value of all investments and cash flows during the project lifetime, where the time value of money is counted at the expected rate of return (rd, year^-1). For the WO process, FCI depends on the volume of the reactor and the density of the material flow; CF is composed of the sale of product P, the sale of the purge stream, and the tax credit generated by the depreciation charges, minus the cost of the raw materials, the utility cost, the waste-removal cost, and the fixed cost. It is assumed that the depreciation and CF are the same every year during the project life of 10 years and that there is no working capital (i.e., TCI = FCI, where TCI is the

Table 10. Objectives, Constraints, and Decision Variables Used in the WO Process

Objectives:
max NPW; min PBP; max PBT; min TAC

Decision variables:
0.85 ≤ V ≤ 10 m^3
322 ≤ T ≤ 378 K
0 ≤ R ≤ 0.99
1000 ≤ QA ≤ 15000 kg/h
10000 ≤ QB ≤ 15000 kg/h
0 ≤ QA3 ≤ 70000 kg/h
0 ≤ QB3 ≤ 70000 kg/h
0 ≤ QC3 ≤ 70000 kg/h
0 ≤ QE3 ≤ 70000 kg/h
0 ≤ QG3 ≤ 70000 kg/h

Constraints (reactor mass balances):
QA3 = QA + QA3(1 - R) - k1 QA3 QB3 VF/Q3^2
QB3 = QB + QB3(1 - R) - (k1 QA3 + k2 QC3) QB3 VF/Q3^2
QC3 = QC3(1 - R) + [2 k1 QA3 QB3 - 2 k2 QB3 QC3 - k3 QC3 (2160 + 0.1 QE3)] VF/Q3^2
QE3 = QE3(1 - R) + 2 k2 QB3 QC3 VF/Q3^2
2160 + 0.1 QE3 = 0.1 QE3 (1 - R) + [k2 QB3 QC3 - 0.5 k3 QC3 (2160 + 0.1 QE3)] VF/Q3^2
QG3 = 1.5 k3 QC3 (2160 + 0.1 QE3) VF/Q3^2

where
k1 = (5.9755 × 10^9) exp(-12000/(1.8T))
k2 = (2.5962 × 10^12) exp(-15000/(1.8T))
k3 = (9.6283 × 10^15) exp(-20000/(1.8T))
Q3 = QA3 + QB3 + QC3 + 1.1 QE3 + QG3 + 2160
F = 801 kg/m^3, rd = 0.12
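The reactor balances in Table 10 can be encoded as a residual function and handed to an equality-constraint handler such as the SQP subalgorithm (or to a nonlinear solver). The balance forms used below, with stream-3 product flow QP3 = 2160 + 0.1 QE3 and the kinetic factor V*rho/Q3^2, are this sketch's interpretation of the table, not a verbatim transcription; the sample stream-3 values are illustrative guesses only.

```python
import numpy as np

def rate_constants(T):
    # Arrhenius expressions from Table 10 (T in K; the 1.8*T converts K to degR)
    k1 = 5.9755e9  * np.exp(-12000.0 / (1.8 * T))
    k2 = 2.5962e12 * np.exp(-15000.0 / (1.8 * T))
    k3 = 9.6283e15 * np.exp(-20000.0 / (1.8 * T))
    return k1, k2, k3

def wo_residuals(stream3, V, T, R, QA, QB, rho=801.0):
    """Residuals of the six reactor balances; all zero at a consistent steady state."""
    QA3, QB3, QC3, QE3, QG3 = stream3
    k1, k2, k3 = rate_constants(T)
    QP3 = 2160.0 + 0.1 * QE3                      # P in stream 3 (product + azeotrope)
    Q3 = QA3 + QB3 + QC3 + 1.1 * QE3 + QG3 + 2160.0
    c = V * rho / Q3**2                           # common kinetic factor V*rho/Q3^2
    return np.array([
        QA + (1 - R)*QA3 - QA3 - k1*QA3*QB3*c,                              # A balance
        QB + (1 - R)*QB3 - QB3 - (k1*QA3 + k2*QC3)*QB3*c,                   # B balance
        (1 - R)*QC3 - QC3 + (2*k1*QA3*QB3 - 2*k2*QB3*QC3 - k3*QC3*QP3)*c,   # C balance
        (1 - R)*QE3 - QE3 + 2*k2*QB3*QC3*c,                                 # E balance
        (1 - R)*0.1*QE3 - QP3 + (k2*QB3*QC3 - 0.5*k3*QC3*QP3)*c,            # P balance
        1.5*k3*QC3*QP3*c - QG3,                                             # G balance
    ])

# Max-NPW operating point from Table 11; the stream-3 guess is made up for illustration.
res = wo_residuals([12000.0, 30000.0, 2500.0, 40000.0, 2000.0],
                   V=3.747, T=351.211, R=0.109, QA=5238.256, QB=11789.293)
k1, k2, k3 = rate_constants(351.211)
print(k1, k2, k3, res)
```

In HBS, such residuals would enter the revised objective through the penalty term; alternatively, they can be passed to a root finder to recover the stream-3 flows for fixed decision variables.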


Table 12. Optimal Solutions of the Flowsheet Obtained Using NSGA-II-aJG (in Parentheses) and GAMS/CONOPT

profitability measure   max NPW         min PBP          max PBT          min TAC
R                       0.109 (0.110)   0.100 (0.100)    0.113 (0.113)    0.102 (0.102)
V (m^3)                 3.75 (3.71)     0.873 (0.871)    6.82 (6.80)      7.90 (7.89)
T (K)                   351 (351)       374 (375)        342 (343)        342 (342)
QA (kg/h)               5239 (5247)     6123 (6121)      4957 (4956)      4808 (4809)
QB (kg/h)               11792 (11809)   13956 (13965)    11113 (11118)    10880 (10882)
FCI (MUS $)             3.97 (3.94)     0.925 (0.924)    7.22 (7.21)      8.37 (8.37)
CF (MUS $/year)         2.00 (1.99)     0.876 (0.875)    2.42 (2.42)      2.52 (2.52)
NPW (MUS $)             7.30 (7.31)     4.02 (4.02)      6.44 (6.46)      5.86 (5.87)
IRR (year^-1)           0.493 (0.489)   0.945 (0.944)    0.313 (0.312)    0.274 (0.272)
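Equations 21-24 link the tabulated FCI and CF values to NPW, PBP, and IRR, so the max-NPW column of Table 11 can be checked directly. A sketch follows (project life of 10 years and rd = 0.12, as in Table 10; the IRR is found by bisection, which works because the annuity factor fPA is monotone in the rate):

```python
def f_pa(rate, n=10):
    # Present-worth annuity factor, eq 23: ((1+r)^n - 1) / (r (1+r)^n)
    return ((1 + rate)**n - 1) / (rate * (1 + rate)**n)

def irr(tci, cf, lo=1e-4, hi=5.0, tol=1e-8):
    # Solve f_pa(IRR) = TCI/CF (eq 24) by bisection; f_pa decreases with the rate.
    target = tci / cf
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if f_pa(mid) > target else (lo, mid)
    return 0.5 * (lo + hi)

# Max-NPW column of Table 11 (TCI = FCI, since there is no working capital)
fci, cf, rd = 3975050.26, 1997750.51, 0.12
npw = -fci + f_pa(rd) * cf        # eq 22
pbp = fci / cf                    # eq 21
print(npw, pbp, irr(fci, cf))
```

The computed NPW of about 7312685 $, PBP of about 1.99 years, and IRR of about 0.493 year^-1 reproduce the tabulated max-NPW entries to within rounding.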

total capital investment). FCI, CF, TAC, PBT, NPW (all in $), and PBP are given as follows:43,44

FCI = 600VF/0.453    (17)

CF = 0.7{(1/0.453)[2207Q7 + 50Q9 - 168QA - 252QB - 2.22(Q10 + QA + QB) - 84Q6] - 1041.6} + 0.3(60VF/0.453)    (18)

TAC = (1/0.453)[168QA + 252QB + 2.22(Q10 + QA + QB) + 84Q6 + 60VF] + 1041.6    (19)

PBT = (1/0.453)(2207Q7 + 50Q9) - TAC    (20)

PBP = FCI/CF    (21)

NPW = -TCI + fPA(rd)CF    (22)

fPA(rd) = [(1 + rd)^10 - 1]/[rd(1 + rd)^10]    (23)

fPA(IRR) = [(1 + IRR)^10 - 1]/[IRR(1 + IRR)^10] = TCI/CF    (24)

where fPA(rd) is the present-worth annuity factor corresponding to the discount rate rd, which brings future cash flows to the present time; for 10 years of identical cash flows, it is given by eq 23. The internal rate of return (IRR) is the special discount rate at which the NPW of a project equals zero.

3.3.3. Simulation Results. The parameter settings of the IBIAMC subalgorithm are as follows: number of repeated trials, Nr = 50; number of parallel membranes, Nml = 4; number of subalgorithm cycles for one trial, C = 10; number of objects sent into each parallel membrane, No = 20; number of objects in a communication object, Nco = 2; pmm = 0.9; pcm = 0.9; pt = 0.5; and pa = 1. The increment used in the mutation mode is (nrand × widej)/[1.2 × (c × ml)^1.1]. The magnitude of the target indication vector is widej × 0.0618/(c × ml). The quasi-Golgi membrane is activated when, after one algorithm cycle, the product of the current number of subalgorithm cycles and the current number of parallel membranes is exactly divisible by 3. The degree of violation is δ = 0.001. The penalty parameters used in the four revised objective functions are all 10. The main optional parameter settings of the SQP subalgorithm (function fmincon) are as follows: MaxIter, 2000; MaxFunEvals, 30000; Display, iter; GradObj, off; Hessian, off; and GradConstr, off. The optimal values solved by HBS are shown in Table 11. The previously reported results42,43 are listed in Table 12.

The results in Tables 11 and 12 show that HBS is effective and efficient in solving complex optimization problems. Of the avg FET counts, 23529, 4337, 21973, and 12131 evaluations, respectively, are consumed in the SQP subalgorithm for the four objective functions. The success rates of finding the optimal solutions are no more than 0.74 under three of the objectives; the exception is min TAC, for which the success rate is 1. Moreover, the success rate is no more than 0.1 for min PBP; because of the small objective function values of PBP, it would be better to use a different penalty parameter, such as 0.01. The test results show that the more complex an optimization problem is, the more difficult it is to maintain the computational efficiency, accuracy, and robustness of an optimization algorithm. The hybrid method HBS can help to analyze complex optimization problems while dispensing with much of the complexity of their mathematical models.

4. Conclusions

We propose a hybrid optimization method (HBS) that integrates the strong points of both heuristic algorithms (IBIAMC) and gradient-based algorithms (SQP) to overcome their respective disadvantages in handling complex constrained optimization problems. The performance of HBS has been investigated. The results of the numerical simulations and two case studies show that the proposed method can locate the optimum solution with steady performance in an acceptable time. Owing to HBS's algorithmic framework, more flexible activating conditions for the quasi-Golgi and S membranes can be investigated for higher efficiency.

Acknowledgment

This work was supported by the National Natural Science Foundation of China under Grants 60874072 and 60721062, and the National High Technology Research & Development Program of China under Grant 2008AA04Z209.

Appendix

All constrained functions used in our experiments are summarized here for completeness. The original sources are cited with each function.45-48

1. Minimize45

f1(x) = Σ_{i=1}^{3} xi

s.t.
g1(x) = 1 - 0.0025(x4 + x6) ≥ 0
g2(x) = 1 - 0.0025(x5 + x7) ≥ 0
g3(x) = 1 - 0.01(x8 - x5) ≥ 0
g4(x) = -x1x6 + 833.33252x4 + 100x1 ≤ 83333.333
g5(x) = -x2x7 + 1250x5 + x2x4 - 1250x4 ≤ 0
g6(x) = -x3x8 + x3x5 - 1250x5 ≤ -1250000

where 100 ≤ x1 ≤ 10000, 1000 ≤ xi ≤ 10000 (i = 2, 3), and 10 ≤ xi ≤ 1000 (i = 4, ..., 8). The optimum solution is at x* = (579.3167, 1359.943, 5110.071, 182.0174, 295.5985, 217.9799, 286.4162, 395.5979), where f(x*) = 7049.3307. Three constraints are active (g1, g2, and g3).

2. Maximize46

f2(x) = (√D)^D Π_{i=1}^{D} xi

s.t. h(x) = Σ_{i=1}^{D} xi^2 - 1 = 0

where D = 10 and 0 ≤ xi ≤ 1 (i = 1, ..., D). The global maximum is at xi* = 1/√D, where f(x*) = 1.

3. Maximize47

f3(x) = | [Σ_{i=1}^{D} cos^4(xi) - 2 Π_{i=1}^{D} cos^2(xi)] / √(Σ_{i=1}^{D} i·xi^2) |

s.t. g1(x) = Π_{i=1}^{D} xi ≥ 0.75,  g2(x) = Σ_{i=1}^{D} xi ≤ 7.5D

where D = 20 and 0 ≤ xi ≤ 10 (i = 1, ..., D). The global maximum is unknown; the best value we found is f(x*) = 0.803619. Constraint g1 is close to being active (g1 = -1 × 10^-8).

4. Minimize45

f4(x) = (x1 - 10)^2 + 5(x2 - 12)^2 + x3^4 + 3(x4 - 11)^2 + 10x5^6 + 7x6^2 + x7^4 - 4x6x7 - 10x6 - 8x7

s.t.
g1(x) = 2x1^2 + 3x2^4 + x3 + 4x4^2 + 5x5 - 127 ≤ 0
g2(x) = 7x1 + 3x2 + 10x3^2 + x4 - x5 - 282 ≤ 0
g3(x) = 23x1 + x2^2 + 6x6^2 - 8x7 - 196 ≤ 0
g4(x) = 4x1^2 + x2^2 - 3x1x2 + 2x3^2 + 5x6 - 11x7 ≤ 0

where -10 ≤ xi ≤ 10 (i = 1, ..., 7). The optimum solution is at x* = (2.330499, 1.951372, -0.4775414, 4.365726, -0.6244870, 1.038131, 1.594227), where f(x*) = 680.6300573. Two constraints are active (g1 and g4).

5. Maximize47

f5(x) = [100 - (x1 - 5)^2 - (x2 - 5)^2 - (x3 - 5)^2]/100

s.t. g(x) = (x1 - p)^2 + (x2 - q)^2 + (x3 - r)^2 - 0.0625 ≤ 0

where 0 ≤ xi ≤ 10 (i = 1, 2, 3) and p, q, r = 1, ..., 9. The feasible region of the search space consists of 9^3 disjoint spheres: a point (x1, x2, x3) is feasible if and only if there exist (p, q, r) such that the above inequality holds. The global optimum is located at x* = (5, 5, 5), where f(x*) = 1.

6. Minimize45

f6(x) = e^(x1 x2 x3 x4 x5)

s.t.
h1(x) = x1^2 + x2^2 + x3^2 + x4^2 + x5^2 - 10 = 0
h2(x) = x2x3 - 5x4x5 = 0
h3(x) = x1^3 + x2^3 + 1 = 0

where -2.3 ≤ x1, x2 ≤ 2.3 and -3.2 ≤ x3, x4, x5 ≤ 3.2. The optimum solution is x* = (-1.71714224003, 1.59572124049468, 1.8272502406271, -0.76365986736498), where f(x*) = 0.05394151404.

7. Minimize48

f7(x) = Σ_{i=1}^{10} xi [ci + ln( xi / Σ_{j=1}^{10} xj )]

s.t.
h1(x) = x1 + 2x2 + 2x3 + x6 + x10 - 2 = 0
h2(x) = x4 + 2x5 + x6 + x7 - 1 = 0
h3(x) = x3 + x7 + x8 + 2x9 + x10 - 1 = 0

where 0 ≤ xi ≤ 10 (i = 1, ..., 10) and c = (-6.089, -17.164, -34.054, -5.914, -24.721, -14.986, -24.1, -10.708, -26.662, -22.179). The best-known solution is x* = (0.0406684113216282, 0.147721240492452, 0.783205732104114, 0.00141433931889084, 0.485293636780388, 0.0006931383051556082, 0.0274052040687766, 0.0179509660214818, 0.0373268186859717, 0.0968844064336845), where f(x*) = -47.7648884594915. Three constraints are active (h1, h2, and h3).

8. Maximize47

f8(x) = sin^3(2πx1) sin(2πx2) / [x1^3 (x1 + x2)]

s.t.
g1(x) = x1^2 - x2 + 1 ≤ 0
g2(x) = 1 - x1 + (x2 - 4)^2 ≤ 0

where 0 ≤ x1, x2 ≤ 10. The optimum solution is at x* = (1.2279713, 4.2453733), where f(x*) = 0.095825. The solution lies within the feasible region.

Literature Cited

(1) Edgar, T. F.; Himmelblau, D. M.; Lasdon, L. S. Optimization of Chemical Processes, 2nd ed.; McGraw-Hill: New York, 2001.
(2) Nocedal, J.; Wright, S. J. Numerical Optimization; Springer-Verlag: New York, 1999.
(3) Horst, R.; Pardalos, P. M. Introduction to Global Optimization; Kluwer Academic Publishers: Dordrecht, The Netherlands, 1995.
(4) Dennis, J. E.; Schnabel, R. B. Numerical Methods for Unconstrained Optimization and Nonlinear Equations; Society for Industrial Mathematics: Philadelphia, PA, 1996.
(5) Biegler, L. T.; Grossmann, I. E. Retrospective on Optimization. Comput. Chem. Eng. 2004, 28 (8), 1169-1192.
(6) Grossmann, I. E.; Biegler, L. T. Part II: Future Perspective on Optimization. Comput. Chem. Eng. 2004, 28 (8), 1193-1218.
(7) Bella, C. W. D.; Stevens, W. F. Process Optimization by Nonlinear Programming. Ind. Eng. Chem. Process Des. Dev. 1965, 4 (1), 16-20.
(8) Gottfried, B. S.; Bruggink, P. B.; Harwood, E. R. Chemical Process Optimization Using Penalty Functions. Ind. Eng. Chem. Res. 1970, 9 (4), 581-588.
(9) Wendt, M.; Li, P.; Wozny, G. Nonlinear Chance-Constrained Process Optimization under Uncertainty. Ind. Eng. Chem. Res. 2002, 41 (15), 3621-3629.
(10) Zhang, Y.; Monder, D.; Forbes, J. F. Real-Time Optimization under Parametric Uncertainty: A Probability Constrained Approach. J. Process Control 2002, 12 (3), 373-389.
(11) Jia, Z. Y.; Ierapetritou, M. Mixed-Integer Linear Programming Model for Gasoline Blending and Distribution Scheduling. Comput. Chem. Eng. 2003, 42 (4), 825-835.
(12) Li, J.; Karimi, I. A.; Srinivasan, R. Recipe Determination and Scheduling of Gasoline Blending Operations. AIChE J. 2010, 56 (2), 441-465.
(13) Holland, J. H.
Adaptation in Natural and Artificial Systems: An Introductory Analysis with Applications to Biology, Control, and Artificial Intelligence; MIT Press: Cambridge, MA, 1992. (14) Adleman, L. M. Molecular Computation of Solutions to Combinatorial Problems. Science 1994, 266 (5187), 1021–1024.


(15) Mathur, M.; Karale, S. B.; Priye, S.; Jayaraman, K.; Kulkarni, B. D. Ant Colony Approach to Continuous Function Optimization. Ind. Eng. Chem. Res. 2000, 39 (10), 3814-3822. (16) Nishida, T. Y. Membrane Algorithms. In Membrane Computing, Lecture Notes in Computer Science; Springer-Verlag: New York, 2005; Vol. 3850, pp 55-66. (17) Fan, K.; Brabazon, A.; Sullivan, C.; O'Neill, M. Quantum-Inspired Evolutionary Algorithms for Financial Data Analysis. In Applications of Evolutionary Computing, Lecture Notes in Computer Science; Springer-Verlag: New York, 2008. (18) Salomon, R. Evolutionary Algorithms and Gradient Search: Similarities and Differences. IEEE Trans. Evolut. Comput. 1998, 2 (2), 45-55. (19) Mansoornejad, B.; Mostoufi, N.; Jalali-Farahani, F. A Hybrid GA-SQP Optimization Technique for Determination of Kinetic Parameters of Hydrogenation Reactions. Comput. Chem. Eng. 2008, 32 (7), 1447-1455. (20) Tao, J. L.; Wang, N. DNA Double Helix Based Hybrid Genetic Algorithm for the Gasoline Blending Recipe Optimization Problem. Chem. Eng. Technol. 2008, 31 (3), 440-451. (21) Paun, G. Computing with Membranes. J. Comput. Syst. Sci. 2000, 61 (1), 108-143. (22) Paun, G.; Rozenberg, G.; Salomaa, A. The Oxford Handbook of Membrane Computing; Oxford University Press: New York, 2010. (23) The P Systems Web Page, http://ppage.psystems.eu. (24) Calude, C. S.; Paun, G. Bio-steps beyond Turing. Biosystems 2004, 77 (1-3), 175-194. (25) Membrane Computing. http://en.wikipedia.org/wiki/Membrane_computing. (26) Paun, G. P Systems with Active Membranes: Attacking NP-Complete Problems. J. Autom., Lang. Combin. 2001, 6 (1), 75-90. (27) Krishna, S. N.; Paun, G. P Systems with Mobile Membranes. Nat. Comput. 2005, 4 (3), 255-274. (28) Rose, A.; Schraegle, S. J.; Stahlberg, E. A.; Meier, I. Coiled-Coil Protein Composition of 22 Proteomes: Differences and Common Themes in Subcellular Infrastructure and Traffic Control. BMC Evol. Biol. 2005, 5, 66. (29) Diaz-Pernil, D.; Gutierrez-Naranjo, M.
A.; Perez-Jimenez, M. J.; Riscos-Nunez, A. A linear-time tissue P system based solution for the 3-coloring problem. Electron. Notes Theor. Comput. Sci. 2007, 171 (2), 81–93. (30) Besozzi, D.; Cazzaniga, P.; Pescini, D.; Mauri, G. Modeling Metapopulations with Stochastic Membrane Systems. Biosystems 2008, 91 (3), 499–514. (31) Huang, L.; Wang, N. An Optimization Algorithm Inspired by Membrane Computing. AdV. Nat. Comput. 2 2006, 4222, 49–52. (32) Huang, L.; Wang, N.; Zhao, J. H. Multiobjective Optimization for Controller Design. Acta Autom. Sin. 2008, 34 (4), 472–477. (33) Zhao, J.; Wang, N. A Bio-inspired Algorithm Based on Membrane Computing and Its Application to Gasoline Blending Scheduling. Comput. Chem. Eng., published online Jan 22, 2010, http://dx.doi.org/10.1016/ j.compchemeng. 2010.01.008.

(34) Karp, G. Cell and Molecular Biology: Concepts and Experiments, 4th ed.; Wiley: New York, 2004.
(35) Runarsson, T. P.; Yao, X. Stochastic Ranking for Constrained Evolutionary Optimization. IEEE Trans. Evol. Comput. 2000, 4 (3), 284–294.
(36) Chootinan, P.; Chen, A. Constraint Handling in Genetic Algorithms Using a Gradient-based Repair Method. Comput. Oper. Res. 2006, 33 (8), 2263–2281.
(37) Zhang, Y.; Monder, D.; Forbes, J. F. Real-time Optimization under Parametric Uncertainty: A Probability Constrained Approach. J. Process Control 2002, 12 (3), 373–389.
(38) Singh, A.; Forbes, J. F.; Vermeer, P. J.; Woo, S. S. Model-based Real-time Optimization of Automotive Gasoline Blending Operations. J. Process Control 2000, 10 (1), 43–58.
(39) Mendez, C. A.; Grossmann, I. E.; Harjunkoski, I.; Kabore, P. A Simultaneous Optimization Approach for Off-line Blending and Scheduling of Oil-Refinery Operations. Comput. Chem. Eng. 2006, 30 (4), 614–634.
(40) Zhao, X. Q.; Rong, G. Blending Scheduling Based on Particle Swarm Optimization Algorithm. In Proceedings of the 2004 IEEE International Conference on Information Reuse and Integration (IRI-2004); IEEE Press: Piscataway, NJ, 2004; pp 618–622.
(41) Forbes, J. F.; Marlin, T. E. Model Accuracy for Economic Optimizing Controllers: The Bias Update Case. Ind. Eng. Chem. Res. 1994, 33, 1919–1929.
(42) Biegler, L. T.; Grossmann, I. E.; Westerberg, A. W. Systematic Methods of Chemical Process Design; Prentice Hall: Upper Saddle River, NJ, 1997.
(43) Pintaric, Z. N.; Kravanja, Z. Selection of the Economic Objective Function for the Optimization of Process Flow Sheets. Ind. Eng. Chem. Res. 2006, 45 (12), 4222–4232.
(44) Rangaiah, G. P. Multi-objective Optimization: Techniques and Applications in Chemical Engineering; World Scientific Publishing Company: Singapore, 2009.
(45) Hock, W.; Schittkowski, K. Test Examples for Nonlinear Programming Codes. In Lecture Notes in Economics and Mathematical Systems; Springer-Verlag: Heidelberg, Germany, 1981.
(46) Michalewicz, Z.; Nazhiyath, G.; Michalewicz, M. A Note on Usefulness of Geometrical Crossover for Numerical Optimization Problems. In Proceedings of the 5th Annual Conference on Evolutionary Programming; MIT Press: Cambridge, MA, 1996; pp 305–312.
(47) Koziel, S.; Michalewicz, Z. Evolutionary Algorithms, Homomorphous Mappings, and Constrained Parameter Optimization. Evol. Comput. 1999, 7 (1), 19–44.
(48) Himmelblau, D. M. Applied Nonlinear Programming; McGraw-Hill: New York, 1972.

Received for review October 11, 2009
Revised manuscript received November 28, 2010
Accepted December 2, 2010

IE101002N