Particle Swarm Optimization Algorithm for a Campaign Planning Problem in Process Industries

Ind. Eng. Chem. Res. 2008, 47, 8775–8784


Lixin Tang* and Ping Yan
The Logistics Institute, Northeastern University, Shenyang 110004, P.R. China

The campaign planning problem (CPP) is to determine the number and length of campaigns for different products over a planning horizon such that setup and inventory holding costs are minimized. This problem arises frequently in multiproduct batch processing plants in the process industries, such as the chemical and pharmaceutical industries. This paper investigates a typical CPP and proposes a hybrid approach of heuristic and particle swarm optimization (PSO) algorithms, in which the PSO solves the subproblem with binary variables while the heuristic solves the subproblem in the remaining variables obtained by fixing the binary variables. To evaluate particles, we take the whole objective function of the primal problem as the fitness function, which is calculated by solving the two subproblems. In implementing the PSO, we design a "product-to-period" representation for a discrete particle and redefine the particle position and velocity, which differ from those of the standard PSO. Furthermore, a new strategy is developed to move a particle to its new position. To escape from local minima, a disturbance strategy is also introduced during the iteration process of the PSO. Computational results show that the proposed PSO finds optimal or near-optimal solutions for 180 randomly generated instances within reasonable computational time.

1. Introduction

The campaign mode of operation is a dominant manufacturing mode in multiproduct and multipurpose plants when reliable long-term demand predictions are available for the various products, and it is prevalent across a wide spectrum of process industries. Such operations are encountered in many applications, including autoclaves and stirred-tank reactors in basic and specialty chemical processes, the pharmaceutical industry, and the food-processing industry (e.g., the manufacture of sorbitol, modified starches, and specialty sugars).
Campaign planning is a production-planning problem that often arises in process industries and plays an important role in batch plants. A campaign is defined as the production amount of a specific product type in one contiguous production run, which generally spans several planning periods.1 In the campaign mode of production, a contiguous production run generally has to be a multiple of a predefined batch size. A batch is the quantity of material that undergoes processing by a single chemical process using an individual equipment unit in a chemical batch plant.2 The batch size is often related to the reactor capacity and falls into two categories: fixed and variable.3 For instance, pharmaceutical plants usually handle fixed sizes whose integrity must be maintained (no mixing or splitting of batches), while solvent or polymer plants handle variable sizes that can be split and mixed. In addition, lower and upper bounds are imposed on a contiguous production run. Lower bounds on campaign lengths must be considered when a predefined production amount is needed to initiate a chemical reaction; upper bounds arise because cleaning of the equipment is required after a certain amount of product has been produced. In this paper, we consider a multiproduct campaign planning problem with fixed batch size. The problem is based on the well-known lot-sizing model referred to as the PLSP (proportional lot-sizing and scheduling problem),4 which is NP-hard. The PLSP is a time-bucket-oriented lot-sizing model based on the assumption that at most one product can be set up in each period, which means that at most two different products can be produced per period. We address the problem of determining the number and length of campaigns for different products so as to minimize setup and inventory holding costs over a planning horizon.

Campaign planning problems have been studied extensively in the literature; see, for instance, Suerie,1 Grunow et al.,2 Hong-Choon Oh et al.,5 and Rajaram et al.6 Numerous models and solution approaches have been reported; comprehensive literature reviews on this subject can be found in Brucker et al.,7 Kallrath,8 and Méndez et al.3,9 Many of the planning problems arising in process industries are modeled as mixed-integer linear programs (MILP) or mixed-integer nonlinear programs (MINLP). For MILP problems, most researchers solve the models directly with the help of MILP solvers. For MINLP problems, solution methodologies can be roughly classified into three kinds. First, some work has focused on algorithm design, such as equality relaxation algorithms,10-12 branch-and-bound techniques,13 and evolutionary strategies.14,15 Second, some MINLP models can be solved directly by solvers. Finally, some researchers transform MINLP models into linear programs and then solve them with an MILP solver such as CPLEX. Among these methods, neither the deterministic algorithms nor the software packages are widely applicable to practical projects: because of the large number of binary variables in the MINLP or MILP formulation, the time needed to obtain an optimal solution is far from practical.

* To whom correspondence should be addressed. E-mail: lixintang@mail.neu.edu.cn. Tel.: +86-24-83680169. Fax: +86-24-83680169.
To obtain fast solutions for this sort of problem, one may turn to intelligent optimization algorithms or heuristic approaches, which compute suboptimal solutions in a reasonable amount of time, especially for large engineering optimization problems with complex constraints. Recently, He and Hui16 presented a genetic algorithm for large-size multistage batch plant scheduling and greatly increased the search speed. In addition, Liu and Karimi17 developed several novel heuristic models based on the continuous-time formulation and found that even a heuristic based on an inferior model can surpass heuristics based on superior models. In this paper we pursue this idea and present a new solution approach to the campaign planning problem in process industries. The new approach for tackling the CPP is based on a PSO algorithm, a relatively recent evolutionary computation technique. By analyzing the characteristics of the CPP, particles are redesigned to represent the setup states of the facility over all periods, which determine the campaign numbers of the different products. A novel forward heuristic is then introduced into the PSO to make the campaign length decisions, including the batch number and production amount of the different campaigns in each period. In addition, the improved PSO adopts a disturbance strategy to further improve solution quality. Computational results and comparisons are based on 180 randomly generated instances; the experimental results demonstrate the effectiveness (in terms of solution quality) and efficiency (in terms of computation time) of the proposed PSO.

The rest of this paper is organized as follows. Details of the campaign planning problem are described in section 2. In section 3, we discuss some properties of the campaign planning model that guide the solution approach. Section 4 provides a novel hybrid approach of heuristic and PSO to solve the CPP. Computational results and comparisons with results obtained from the CPLEX solver are presented in section 5. Finally, we end the paper with conclusions and future work in section 6.

10.1021/ie800383y CCC: $40.75  2008 American Chemical Society. Published on Web 10/16/2008.

2. Problem Description

The campaign planning problem can be stated as follows. A multiproduct production facility (a machine, a processor, or a production line) produces J products over a given planning horizon T. The available capacity (time) of the facility in each period is limited. Production is planned as a series of single-product campaigns, where a campaign is an uninterrupted run producing a single product. We assume that the facility operates in batch mode; thus, campaign lengths must be multiples of the batch size. Different campaigns may last for different durations, and lower and upper bounds on campaign length are expressed in batch numbers. When switching from one campaign to the next, the facility must be set up for the next product. Changeover times and costs are sequence-independent. A product may be produced in one or more campaigns during the planning horizon, but no two successive campaigns may produce the same product. Therefore, whenever a product is produced in a campaign, it must be produced in sufficient quantity that demand can be met from inventory until it is produced again in the next campaign. To cover demand in the periods when a product is not being produced, inventory has to be held, which incurs inventory cost. The planning problem aims to determine campaign lengths and numbers for each product so as to minimize the sum of setup and inventory costs over the planning horizon T. Mathematically, the CPP can be formulated as follows, as presented by Suerie.1

Here X_jt = 1 if a setup for product j occurs in period t; Z_jt = 1 if the facility is set up for product j at the end of period t; Y_jt is the production amount and I_jt the inventory of product j in period t; M_jt is the quantity produced so far in the current campaign of product j; R_jt is a batch-number variable and S_jt a slack variable; d_jt is demand; sc_j, h_j, st_j, and p_j are the setup cost, holding cost, setup time, and per-unit production time of product j; C_t is the capacity of period t; BS is the batch size; and min C and max C are the minimum and maximum campaign sizes.

minimize  Σ_{j=1}^{J} Σ_{t=1}^{T} sc_j X_jt + Σ_{j=1}^{J} Σ_{t=1}^{T} h_j I_jt   (1)

subject to

I_{j,t-1} + Y_jt = d_jt + I_jt,  ∀ j = 1,...,J, t = 1,...,T   (2)

Σ_{j=1}^{J} p_j Y_jt + Σ_{j=1}^{J} st_j X_jt ≤ C_t,  ∀ t = 1,...,T   (3)

Σ_{j=1}^{J} Z_jt = 1,  ∀ t = 1,...,T   (4)

X_jt ≥ Z_jt − Z_{j,t-1},  ∀ j = 1,...,J, t = 1,...,T   (5)

Y_jt ≤ B·Z_{j,t-1} + B·Z_jt,  ∀ j = 1,...,J, t = 1,...,T (B a sufficiently large number)   (6)

X_jt ≤ Z_jt,  ∀ j = 1,...,J, t = 1,...,T   (7)

X_jt ≤ 1 − Z_{j,t-1},  ∀ j = 1,...,J, t = 1,...,T   (8)

M_jt ≤ M_{j,t-1} + Y_jt,  ∀ j = 1,...,J, t = 1,...,T   (9)

M_jt ≤ max C·(1 − X_{j,t+1}),  ∀ j = 1,...,J, t = 1,...,T−1   (10)

M_jt ≥ M_{j,t-1} + Y_jt − max C·X_{j,t+1},  ∀ j = 1,...,J, t = 1,...,T−1   (11)

M_j0 ≤ min C·(1 − X_j1),  ∀ j = 1,...,J   (12)

M_{j,t-1} + Y_jt ≥ min C·Σ_{r=1,r≠j}^{J} X_rt,  ∀ j = 1,...,J, t = 1,...,T   (13)

M_{j,t-1} + Y_jt ≤ max C,  ∀ j = 1,...,J, t = 2,...,T   (14)

M_{j,t-1} + Y_jt = BS·R_jt + S_jt,  ∀ j = 1,...,J, t = 2,...,T   (15)

S_jt ≤ BS·(1 − Σ_{r=1,r≠j}^{J} X_rt),  ∀ j = 1,...,J, t = 2,...,T   (16)

I_jt ≥ (BS·⌈(Σ_{m=1}^{t+n} d_jm)/BS⌉ − Σ_{m=1}^{t+n} d_jm)·(1 − Z_jt − Σ_{m=1}^{n} X_{j,t+m}),  ∀ j = 1,...,J, n = 0,...,T−1, t = 1,...,T−n   (17)

X_jt ∈ {0,1}, Z_jt ∈ {0,1} (Z_j0 = 0),  ∀ j = 1,...,J, t = 1,...,T   (18)

Y_jt ≥ 0, I_jt ≥ 0 (I_j0 = 0),  ∀ j = 1,...,J, t = 1,...,T   (19)

S_jt ≥ 0,  ∀ j = 1,...,J, t = 2,...,T   (20)

R_jt ≥ 0 and integer,  ∀ j = 1,...,J, t = 2,...,T   (21)

The objective function 1 is composed of two terms: setup costs and holding costs. Constraints 2 are ordinary inventory balance constraints, and constraints 3 ensure that production and setup operations are feasible with respect to the facility capacity. Constraints 4 ensure that the facility is always set up for exactly one product at the end of each period. Constraints 5 couple the state and setup variables Z_jt and X_jt. Constraints 6 couple the production decisions Y_jt and the setup states Z_jt. Constraints 7 describe the relationship between setup and state variables. Constraints 8 forbid two consecutive setup operations for the same product. Constraints 9-12 define the campaign variable M_jt. Constraints 13 ensure that the minimum production amount of product j in a campaign has been produced before another campaign starts. Constraints 14 guarantee that the production amount of each campaign of product j never exceeds max C. Constraints 15-16 ensure that the production amount per campaign is a multiple of the batch size. Constraints 17 couple the inventory variables I_jt with the state and setup variables.

Some assumptions are made about the beginning and end of the planning horizon. We assume that a plan starts with a new setup operation and that no facility state is fixed at the beginning of the planning horizon. These assumptions let campaigns be incorporated easily into the planning interval by providing the initial setup state Z_j0 as well as the initial campaign production quantity M_j0 prior to the planning horizon. With respect to the end of the planning horizon, campaign restrictions are not applied to the last campaign, because the last campaign may still fulfill them after the planning horizon.

Table 1 provides a simple example of the CPP. The optimal solution for this example, computed with CPLEX, is described in Table 2; the optimal objective function value is 105. Figure 1 shows the optimal solution as a Gantt chart.
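As a sanity check, the example of Tables 1 and 2 can be verified in a few lines. The demand, production plan, cost data, and optimal value 105 are from the example; the setup-state sequence Z below is our own reconstruction (one assignment consistent with Table 2, since the paper's tables do not list Z):

```python
# Verify the Table 1/2 example: J = 2 products, T = 4 periods.
d = [[0, 20, 20, 35], [15, 20, 30, 35]]   # demand d_jt
Y = [[0, 20, 40, 15], [15, 45, 5, 55]]    # optimal production amounts (Table 2)
Z = [[0, 1, 0, 1], [1, 0, 1, 0]]          # assumed setup state at end of each period
sc, h, st, p, Ct = 10, 1, 10, 1, 120
J, T = 2, 4

# Setup variables via X_jt = Z_jt (1 - Z_{j,t-1}) with Z_j0 = 0 (property a below)
X = [[Z[j][t] * (1 - (Z[j][t - 1] if t > 0 else 0)) for t in range(T)]
     for j in range(J)]

# Inventory balance (2): I_jt = I_{j,t-1} + Y_jt - d_jt, and I_jt must stay >= 0
I = [[0] * T for _ in range(J)]
for j in range(J):
    prev = 0
    for t in range(T):
        prev = prev + Y[j][t] - d[j][t]
        assert prev >= 0
        I[j][t] = prev

# Exactly one setup state per period (4) and capacity feasibility (3)
for t in range(T):
    assert sum(Z[j][t] for j in range(J)) == 1
    assert sum(p * Y[j][t] + st * X[j][t] for j in range(J)) <= Ct

cost = sum(sc * X[j][t] + h * I[j][t] for j in range(J) for t in range(T))
print(cost)  # objective (1): 4 setups (40) + holding (65) = 105
```

The four setups correspond to the campaigns j2 (t1-t2), j1 (t2-t3), j2 (t3-t4), and j1 (t4); each finished campaign totals 60 units, a multiple of BS = 20 within [min C, max C].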

3. Preliminary Properties

To solve this problem, we start with some preliminary properties that will be used in the PSO method below. The CPP model formulation contains 4JT continuous variables, 2JT binary variables, and J(T − 1) integer variables. As J and T grow in a large-scale instance, the numbers of variables and constraints become larger and larger, which inevitably increases the computational effort needed to solve the problem. Analyzing the CPP model in detail, we find that the model scale can be greatly reduced once the state variables Z_jt are given a priori. Properties 1 and 2 illustrate this theoretically: property 1 describes the relationships among the model variables, whereas property 2 gives a reduced model based on property 1.

Property 1. The following equations hold in any feasible solution to the CPP:

X_jt = Z_jt·(1 − Z_{j,t-1}),  ∀ j = 1,...,J, t = 1,...,T   (a)

I_jt = Σ_{n=1}^{t} Y_jn − Σ_{n=1}^{t} d_jn,  ∀ j = 1,...,J, t = 1,...,T   (b)

M_j0 = min C·(1 − X_j1),  ∀ j = 1,...,J   (c)

M_jt = (M_{j,t-1} + Y_jt)·(1 − X_{j,t+1})
     = Π_{m=2}^{t+1} (1 − X_jm)·M_j0 + Π_{m=2}^{t+1} (1 − X_jm)·Y_j1 + Π_{m=3}^{t+1} (1 − X_jm)·Y_j2 + ... + (1 − X_{j,t+1})·Y_jt
     = Π_{m=2}^{t+1} (1 − X_jm)·M_j0 + Σ_{n=1}^{t} Π_{m=n+1}^{t+1} (1 − X_jm)·Y_jn,  ∀ j = 1,...,J, t = 1,...,T−1   (d)

Proof. (a) From constraints 5, 7, and 8, the setup variables X_jt are closely related to Z_{j,t-1} and Z_jt. On the one hand, a setup for product j occurs in period t only when Z_{j,t-1} = 0 and Z_jt = 1. On the other hand, when X_jt = 0, exactly three cases can occur: Z_{j,t-1} = 1 and Z_jt = 1; Z_{j,t-1} = 1 and Z_jt = 0; Z_{j,t-1} = 0 and Z_jt = 0. All three cases follow from constraints 5. Thus, constraints 5, 7, and 8 are a linear formulation of eq a, which expresses the nonlinear relationship among the variables Z_{j,t-1}, Z_jt, and X_jt.

(b) The inventory of product j at the end of period t equals the cumulative production minus the cumulative demand up to period t. As the initial inventories of all products are zero, eq b holds.

(c) In the first period (t = 1), constraints 13 read

M_j0 + Y_j1 ≥ min C·Σ_{r=1,r≠j}^{J} X_r1,  ∀ j = 1,...,J   (13a)

Considering Z_j0 = 0 for all products together with property a, we conclude that X_j1 = Z_j1, so Σ_{j=1}^{J} X_j1 = 1 by constraints 4. Hence, if X_j1 = 0 for some j ∈ {1,...,J}, there must be some product r (r ≠ j) with X_r1 = 1, that is, Σ_{r=1,r≠j}^{J} X_r1 = 1. Moreover, Y_j1 = 0 when both Z_j0 and X_j1 are zero. Therefore, if X_j1 = 0, then M_j0 ≥ min C by constraints 13a; since constraints 12 provide the upper bound M_j0 ≤ min C, the equation M_j0 = min C holds when X_j1 = 0. On the other hand, if X_j1 = 1, then M_j0 = 0 by constraints 12. Together these give eq c.

(d) If X_{j,t+1} = 0, then M_jt = M_{j,t-1} + Y_jt (∀ j = 1,...,J, t = 1,...,T−1), which follows from constraints 9 and 11. If X_{j,t+1} = 1, then M_jt = 0, because of constraints 10. So M_jt = (M_{j,t-1} + Y_jt)·(1 − X_{j,t+1}), and unfolding this recursion down to t = 1 (with M_j0 given by eq c) yields the product form in eq d.

Property 2. Suppose that the state variables Z_jt are given. Then the CPP model formulation (constraints 1-21) can be reduced to the following model (eqs 22-28).

minimize  Σ_{j=1}^{J} Σ_{t=1}^{T} sc_j X_jt + Σ_{j=1}^{J} Σ_{t=1}^{T} h_j (T − t + 1) Y_jt − Σ_{j=1}^{J} Σ_{t=1}^{T} h_j (T − t + 1) d_jt   (22)

subject to

Σ_{j=1}^{J} p_j Y_jt ≤ C_t − Σ_{j=1}^{J} st_j X_jt,  ∀ t = 1,...,T   (23)

Σ_{m=1}^{t} Y_jm ≥ max_{n∈{0,...,T−t}} { Σ_{m=1}^{t+n} d_jm + (⌈(Σ_{m=1}^{t+n} d_jm)/BS⌉·BS − Σ_{m=1}^{t+n} d_jm)·(1 − Z_jt − Σ_{m=1}^{n} X_{j,t+m}) },  ∀ j = 1,...,J, t = 1,...,T   (24)

Σ_{n=1}^{t−1} Π_{m=n+1}^{t} (1 − X_jm)·Y_jn + Y_jt = BS·R_jt − Π_{m=2}^{t} (1 − X_jm)·M_j0,  ∀ (j, t) ∈ A,
  A = {(j, t) | Σ_{r=1,r≠j}^{J} X_rt = 1, j ∈ {1,...,J}, t ∈ {2,...,T}}   (25)

Σ_{n=1}^{t−1} Π_{m=n+1}^{t} (1 − X_jm)·Y_jn + Y_jt ≤ max C − Π_{m=2}^{t} (1 − X_jm)·M_j0,  ∀ (j, t) ∈ Ω \ A,
  Ω = {(j, t) | j ∈ {1,...,J}, t ∈ {2,...,T}}   (26)

0 ≤ Y_jt ≤ C_t,  ∀ j = 1,...,J, t = 1,...,T   (27)

⌈min C / BS⌉ ≤ R_jt ≤ ⌊max C / BS⌋ and integer,  ∀ (j, t) ∈ A   (28)
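The recursion in eq d and its unfolded product form can be checked numerically. The script below is our own illustration (the data are arbitrary, not from the paper); it compares the two expressions on a random 0/1 setup sequence:

```python
# Check property (d): the recursion M_jt = (M_{j,t-1} + Y_jt)(1 - X_{j,t+1})
# matches the closed form
#   M_jt = prod_{m=2}^{t+1}(1 - X_jm) M_j0 + sum_{n=1}^{t} prod_{m=n+1}^{t+1}(1 - X_jm) Y_jn.
import math
import random

def M_recursive(M0, Y, X):
    """X[t], Y[t] are 1-indexed (index 0 unused); returns [M_1, ..., M_{T-1}]."""
    T = len(Y) - 1
    out, prev = [], M0
    for t in range(1, T):                 # t = 1, ..., T-1 (X[t+1] is defined)
        prev = (prev + Y[t]) * (1 - X[t + 1])
        out.append(prev)
    return out

def M_closed(M0, Y, X):
    T = len(Y) - 1
    out = []
    for t in range(1, T):
        val = M0
        for m in range(2, t + 2):         # prod_{m=2}^{t+1}(1 - X_jm) * M_j0
            val *= (1 - X[m])
        for n in range(1, t + 1):         # sum over production terms
            pr = Y[n]
            for m in range(n + 1, t + 2):
                pr *= (1 - X[m])
            val += pr
        out.append(val)
    return out

random.seed(0)
T = 8
X = [0] + [random.randint(0, 1) for _ in range(T)]
Y = [0] + [random.uniform(0, 10) for _ in range(T)]
M0 = 5.0
assert all(math.isclose(a, b) for a, b in zip(M_recursive(M0, Y, X), M_closed(M0, Y, X)))
print("property (d) holds on this sample")
```

With all X equal to zero, the recursion degenerates to the running cumulative production M0 + Y_1 + ... + Y_t, as expected from a single uninterrupted campaign.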

Table 1. Example Data

            d_jt
        t = 1    2    3    4    p_j  st_j  sc_j  h_j
 j = 1:     0   20   20   35     1    10    10    1
 j = 2:    15   20   30   35     1    10    10    1

 min C = 50, max C = 60, BS = 20, C_t = 120

Table 2. Optimal Solution of the CPP to the Example of Table 1

        t = 1    2    3    4
 Y_1t:      0   20   40   15
 Y_2t:     15   45    5   55

Figure 1. Gantt chart of optimal solution to the example.

Proof. Clearly, once the state variables Z_jt are given, the variables X_jt, I_jt, and M_jt can be eliminated from the CPP model 1-21 by substitution based on property 1. The slack variables S_jt can also be omitted once the X_jt are known, for the following reasons. On the one hand, if Σ_{r=1,r≠j}^{J} X_rt = 1, then S_jt = 0, and the campaign size constraints 13-14 can be replaced by lower and upper bounds on the batch number variables R_jt. On the other hand, if Σ_{r=1,r≠j}^{J} X_rt = 0, then constraints 15-16 are implied by constraints 14. Therefore, the original CPP model is transformed into the new model 22-28.

4. PSO Algorithm for the Campaign Planning Problem

4.1. Standard PSO Algorithm. PSO is an evolutionary computation technique developed by Kennedy and Eberhart.18 It is based on the metaphor of social interaction and communication, such as bird flocking and fish schooling. In PSO, the system is initialized with a population of random solutions and searches for optima by updating generations. Each individual or potential solution, called a particle, flies through the λ-dimensional problem space with a velocity that is dynamically adjusted according to the flying experience of itself and its colleagues. The position of the ith particle is represented as a λ-dimensional vector X_i^k = (x_{i1}^k, x_{i2}^k,..., x_{iλ}^k), and the best particle in the swarm is denoted by the index g. The best position found by the ith particle up to iteration k is recorded as P_i^k = (p_{i1}^k, p_{i2}^k,..., p_{iλ}^k), and the velocity of the ith particle is V_i^k = (v_{i1}^k, v_{i2}^k,..., v_{iλ}^k). The particles are manipulated according to the following equations:19

v_{id}^{k+1} = w·v_{id}^k + c_1 r_1 (p_{id}^k − x_{id}^k) + c_2 r_2 (p_{gd}^k − x_{id}^k)   (29)

x_{id}^{k+1} = x_{id}^k + v_{id}^{k+1}   (30)

where i = 1, 2,..., Np; Np is the population size; k is the iteration counter; w is the inertia weight; c_1 is the cognitive learning factor; and c_2 is the social learning factor. Both c_1 and c_2 are positive constants, and r_1 and r_2 are random numbers uniformly distributed in [0, 1]. The iterations terminate when the maximum number of generations or a designated fitness value of P_g is reached. PSO has gained rapid popularity as an effective technique for solving complex and difficult optimization problems, such as constrained optimization,20-22 multiobjective optimization,23-25 and neural network training.26 Because only a few parameters need to be adjusted, PSO has been used across a wide range of applications, including power and voltage control,27 mass-spring systems,28 task assignment,29 and the traveling salesman problem.30 Generally speaking, PSO, like other evolutionary optimization algorithms, is applicable to most problems that can be cast as optimization problems.
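Eqs 29 and 30 can be sketched as a minimal continuous PSO. The sphere objective and the parameter values below are illustrative, not the paper's:

```python
# Minimal continuous PSO following eqs 29-30, minimizing f(x) = sum(x_d^2).
import random

def pso(f, dim=3, n_particles=15, iters=200, w=0.7, c1=1.5, c2=1.5, seed=1):
    rng = random.Random(seed)
    X = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    V = [[0.0] * dim for _ in range(n_particles)]
    P = [x[:] for x in X]                       # personal best positions
    pbest = [f(x) for x in X]                   # personal best values
    g = min(range(n_particles), key=lambda i: pbest[i])   # global best index
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                V[i][d] = (w * V[i][d]
                           + c1 * r1 * (P[i][d] - X[i][d])
                           + c2 * r2 * (P[g][d] - X[i][d]))   # eq 29
                X[i][d] += V[i][d]                            # eq 30
            fx = f(X[i])
            if fx < pbest[i]:
                pbest[i], P[i] = fx, X[i][:]
                if fx < pbest[g]:
                    g = i
    return P[g], pbest[g]

best_x, best_f = pso(lambda x: sum(v * v for v in x))
print(best_f)  # small positive value near 0
```

With w < 1 and moderate c_1, c_2, the velocities contract and the swarm concentrates around the best position found, which is why inertia-weight settings feature prominently in the parameter study of section 5.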

Figure 2. Solution representation of particle X̄_i^k.

In the following sections, we extend the discrete PSO algorithm of Kennedy and Eberhart31 to solve the campaign planning problem in process industries. Property 2 gives a simplified model of the CPP under the assumption that all state variables Z_jt are given. Therefore, only two sorts of decisions need to be made for the primal CPP: one is the setup states of the facility over all periods (the variables Z_jt), from which the campaign numbers of the different products are obtained; the other is the campaign length decisions, such as the batch number R_jt and production amount Y_jt of the different campaigns in each period. The proposed PSO consists of three parts: determination of campaign numbers, a forward heuristic for the generation of campaign lengths, and a disturbance strategy for better performance. Details are given as follows.

4.2. Determination of Campaign Numbers. 4.2.1. Particle Solution Representation. One of the most important issues in designing a PSO algorithm is its solution representation. In general, each particle in PSO is encoded as an array with λ dimensions or elements, representing a possible solution in a multidimensional parameter space. To construct a relationship between the problem domain and the PSO particles for the CPP, we design a "product-to-period" representation for the discrete particle, encoded on the facility state variables Z_jt. We define particle i at iteration k as X̄_i^k = (x_{i11}^k, x_{i12}^k,..., x_{iJT}^k), x_{ijt}^k ∈ {0, 1}, where x_{ijt}^k equals 1 if the facility is set up for product j at the end of period t in particle i, and 0 otherwise. For instance, a setup state matrix for a 3-product, 4-period problem may have x_{i11}^k = x_{i12}^k = x_{i24}^k = x_{i33}^k = 1 and all other x_{ijt}^k = 0 (see Figure 2).

4.2.2. Definition of Velocity. Our improved PSO algorithm designs the velocity to provide useful information that facilitates choosing preferred moves.
We define the velocity of particle i at iteration k as V̄_i^k = (v_{i11}^k, v_{i12}^k,..., v_{iJT}^k), v_{ijt}^k ∈ [−v_max, v_max], where v_{ijt}^k is the velocity value of product j in period t and v_max is a constant that limits the range of v_{ijt}^k. A higher value of v_{ijt}^k indicates a greater possibility of x_{ijt}^k = 1, while a lower value favors x_{ijt}^k = 0. The particle's new velocity is updated by

v_{ijt}^k = w·v_{ijt}^{k−1} + c_1 r_1 (p_{ijt}^k − x_{ijt}^k) + c_2 r_2 (p_{gjt}^k − x_{ijt}^k)   (31)

where P̄_i^k = (p_{i11}^k, p_{i12}^k,..., p_{iJT}^k), p_{ijt}^k ∈ {0, 1}, denotes the best solution obtained by particle i until iteration k, and P̄_g^k = (p_{g11}^k, p_{g12}^k,..., p_{gJT}^k), p_{gjt}^k ∈ {0, 1}, denotes the best solution in the swarm until iteration k.

We take an example to explain the meaning of the velocity further. Assume that only the cognitive part exists in eq 31 and that c_1 = r_1 = 1. The value of X̄_i^k is described in Figure 2; p_{i21}^k = p_{i12}^k = p_{i13}^k = p_{i34}^k = 1 and all other p_{ijt}^k = 0. Figure 3 gives the resulting values of v_{ijt}^k = p_{ijt}^k − x_{ijt}^k. A value of 1 intensifies the possibility that the facility is set up for product j at the end of period t, whereas −1 diminishes that possibility. In the computation, if v_{ijt}^k is greater than v_max, it is set to v_max; if v_{ijt}^k is smaller than −v_max, it is set to −v_max. In the discrete PSO, the velocity values are converted from real numbers to probabilities according to the following sigmoid function:

s(v_{ijt}^k) = 1 / (1 + exp(−v_{ijt}^k))   (32)

where s(v_{ijt}^k) denotes the probability of x_{ijt}^k taking the value 1. For example, s(v_{i11}^k) = 0.27, as shown in Figure 4, means that there is a 27% chance that x_{i11}^k equals 1.

Figure 3. The values of v_{ijt}^k = p_{ijt}^k − x_{ijt}^k.

Figure 4. The values of s(v_{ijt}^k).

Figure 5. The values of α_i^k(j, t).

4.2.3. Setup State Generation from the Particle-Represented Solution. In the proposed PSO algorithm, each particle constructs its new setup states from the probabilities s(v_{ijt}^k) derived from the velocity v_{ijt}^k. Since only one product can be in the setup state at the end of each period, the probabilities need to be normalized per period. Particle i puts product j in the setup state at the end of period t with probability

α_i^k(j, t) = s(v_{ijt}^k) / Σ_{j=1}^{J} s(v_{ijt}^k)   (33)

Figure 5 gives the normalized probabilities α_i^k(j, t). Following the roulette wheel approach, a random real number from [0, 1] is generated in each period to decide which product is selected, and this process continues until all periods in the horizon are finished. The setup states of the facility are thus obtained from the particle-represented solution, and the setup counts (i.e., campaign numbers) of the different products are then calculated via property 1a.
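The pipeline of eqs 32-33 plus roulette selection can be sketched as follows; the velocity values are illustrative:

```python
# Setup-state generation (section 4.2.3): velocities -> sigmoid probabilities
# (eq 32) -> per-period normalization (eq 33) -> roulette-wheel selection.
import math
import random

def generate_setup_states(v, rng):
    """v[j][t]: velocity values; returns x[j][t] with one product per period."""
    J, T = len(v), len(v[0])
    x = [[0] * T for _ in range(J)]
    for t in range(T):
        s = [1.0 / (1.0 + math.exp(-v[j][t])) for j in range(J)]   # eq 32
        total = sum(s)
        alpha = [sj / total for sj in s]                           # eq 33
        r, acc, pick = rng.random(), 0.0, J - 1
        for j in range(J):                                         # roulette wheel
            acc += alpha[j]
            if r < acc:
                pick = j
                break
        x[pick][t] = 1
    return x

rng = random.Random(42)
v = [[rng.uniform(-10, 10) for _ in range(4)] for _ in range(3)]   # 3 products, 4 periods
x = generate_setup_states(v, rng)
for t in range(4):
    assert sum(x[j][t] for j in range(3)) == 1   # exactly one setup state per period
print(x)
```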

4.2.4. Check the First Setup-State Period for Different Products. Suppose τ_j = min{t | d_jt > 0, t = 1,...,T}, σ_i^k(j) = min{t | x_{ijt}^k = 1, t = 1,...,T}, j ∈ {1,...,J}, and T′ = ∅. The following repair procedure amends the first setup-state period of each product so that the constraint σ_i^k(j) ≤ τ_j is met.

Step 1. Arrange the products in nondecreasing order of τ_j and keep the sorted result in the product set J′.

Step 2. For product j ∈ J′: if σ_i^k(j) ≤ τ_j, set T′ = T′ ∪ {σ_i^k(j)}. If σ_i^k(j) > τ_j, generate a random integer t from [1, τ_j] \ T′. Assume the facility is set up for product j′ at the end of period t, that is, x_{ij′t}^k = 1. Exchange the setup states of products j and j′ as follows: set x_{ijt}^k = 1, x_{ijσ_i^k(j)}^k = 0, x_{ij′σ_i^k(j)}^k = 1, and x_{ij′t}^k = 0. Repeat step 2 from the first product to the last in J′.

4.3. A Forward Heuristic for the Generation of Campaign Length. After the campaign numbers of the different products have been determined from the "product-to-period" particle representation, the next task is to compute the campaign lengths in order to evaluate the quality of each particle. As shown in property 2, the problem is now the construction of a production plan that minimizes inventory holding costs while satisfying the capacity, inventory, and campaign constraints. The proposed heuristic has two main parts: first, determine the batch numbers of the different campaigns; then, compute the production amount per period for each campaign. The procedure is summarized below.

Step 1. Determination of batch numbers for the different campaigns. Here, the batch number serves as the unit of campaign length, given the batch-size restriction involved:

(1.1) First, the actual demand of each campaign is calculated.
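A direct transcription of steps 1-2 above (0-indexed periods; it assumes every product already appears at least once in the setup matrix, and the demand and state data here are illustrative):

```python
# Repair procedure (section 4.2.4): ensure the first setup-state period of
# each product is no later than its first demand period.
import random

def repair(x, d, rng):
    """x[j][t]: 0/1 setup states (one product per period); d[j][t]: demands."""
    J, T = len(x), len(x[0])
    tau = {j: min(t for t in range(T) if d[j][t] > 0) for j in range(J)}
    sigma = lambda j: min(t for t in range(T) if x[j][t] == 1)
    used = set()                                        # T' in the paper
    for j in sorted(range(J), key=lambda j: tau[j]):    # step 1: sort by tau_j
        s = sigma(j)
        if s <= tau[j]:                                 # step 2, first case
            used.add(s)
            continue
        candidates = [t for t in range(tau[j] + 1) if t not in used]
        t = rng.choice(candidates)
        jp = next(r for r in range(J) if x[r][t] == 1)  # product set up at t
        x[j][t], x[j][s] = 1, 0                         # exchange setup states
        x[jp][s], x[jp][t] = 1, 0
        used.add(t)
    return x

rng = random.Random(0)
x = [[0, 0, 1, 0], [1, 1, 0, 1]]    # product 0 first set up at t = 2 ...
d = [[5, 0, 0, 5], [0, 5, 5, 5]]    # ... but its demand starts at t = 0
x = repair(x, d, rng)
assert min(t for t in range(4) if x[0][t]) == 0
print(x)
```

The `used` set protects periods already claimed by products with earlier first-demand periods, so an exchange for one product cannot undo a repair made for another.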
For a campaign of product j, the actual demand equals the shortfall between the current residual inventory of product j and the total demand over all periods before the next campaign of product j starts. We denote the actual demand of the pth campaign of product j by AD_jp.

(1.2) Check the feasibility of the particle. If AD_jp is greater than max C, the particle is deemed infeasible; go to step 3.

(1.3) Calculate campaign lengths for a feasible particle. If AD_jp is no more than max C, the batch number of the pth campaign of product j is computed as

R(j, p) = ⌈max{AD_jp, min C} / BS⌉   (34)
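Eq 34 as a one-liner; the example values use the Table 1 parameters min C = 50 and BS = 20:

```python
# Batch number of a campaign (eq 34): round the larger of the actual demand
# and the minimum campaign size up to a whole number of batches.
import math

def batch_number(AD, min_C, BS):
    return math.ceil(max(AD, min_C) / BS)

print(batch_number(60, 50, 20))  # 3 batches
print(batch_number(15, 50, 20))  # lifted to the minimum: ceil(50/20) = 3
```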

Note that we do not need to calculate the batch number of the last campaign as minimum, and batch-size restrictions are not applied to the last campaign. Actually, the length of the last campaign just equals its actual demand. Step 2. Determination of production amount per period for each campaign. Suppose that p is a campaign of product j and its campaign length is Qp which is also the production amount of campaign p: (2.1) Calculate the set Up of periods, in which the facility is on the setup state for campaign p. (2.2) Determine the first period in Up where the actual demand occurs. We denote the actual demand of product j in period t as ADjt′ which refers to the scarcity between the current residual inventory and the demand of product j in period t. Then, the first period of Up in which the actual demand occurs can be computed by min{t| AD′jt> 0, t ∈ Up}. (2.3) Assign Qp to the periods in Up. Assume that period t is the first period which has a positive actual demand in Up. First, allot ADjt′ to the production amount of product j in period t. Then, the unassigned production amount of campaign p, namely Qp - ADjt′ , is assigned orderly to the other periods followed by period t in accordance with capability

8780 Ind. Eng. Chem. Res., Vol. 47, No. 22, 2008 Table 3. Gap Values of Different Parameter Combinations When Np ) 10 (c, w, Vmax,µ) (1, (1, (1, (1, (1, (1, (1, (1, (1, (1, (1, (1, (1, (1, (1, (1, (1, (1, (1, (1, (1,

0.5, 10, 10) 0.5, 10, 20) 0.5, 10, 30) 0.5, 15, 10) 0.5, 15, 20) 0.5, 15, 30) 0.5, 20, 10) 0.5, 20, 20) 0.5, 20, 30) 1, 10, 10) 1, 10, 20) 1, 10, 30) 1, 15, 10) 1, 15, 20) 1, 15, 30) 1, 20, 10) 1, 20, 20) 1, 20, 30) 1.5, 10, 10) 1.5, 10, 20) 1.5, 10, 30)

gap (%)

(c, w, Vmax,µ)

gap (%)

(c, w, Vmax,µ)

gap (%)

5.2390 0.9146 0.9146 3.8608 1.8063 2.5312 0.2780 0.9146 1.3719 0 0 0 0 0.2780 0.2780 0 0 0 6.6875 2.1142 7.6228

(1, 1.5, 15, 10) (1, 1.5, 15, 20) (1, 1.5, 15, 30) (1, 1.5, 20, 10) (1, 1.5, 20, 20) (1, 1.5, 20, 30) (1.5, 0.5, 10, 10) (1.5, 0.5, 10, 20) (1.5, 0.5, 10, 30) (1.5, 0.5, 15, 10) (1.5, 0.5, 15, 20) (1.5, 0.5, 15, 30) (1.5, 0.5, 20, 10) (1.5, 0.5, 20, 20) (1.5, 0.5, 20, 30) (1.5, 1, 10, 10) (1.5, 1, 10, 20) (1.5, 1, 10, 30) (1.5, 1, 15, 10) (1.5, 1, 15, 20) (1.5, 1, 15, 30)

4.9800 2.5709 3.5758 5.6901 0.9146 9.4314 2.9921 0 2.0194 0.2780 3.7459 0 0.9146 0.9947 2.8148 5.1592 0 0.2780 0 0 0

(1.5, 1, 20, 10) (1.5, 1, 20, 20) (1.5, 1, 20, 30) (1.5, 1.5, 10, 10) (1.5, 1.5, 10, 20) (1.5, 1.5, 10, 30) (1.5, 1.5, 15, 10) (1.5, 1.5, 15, 20) (1.5, 1.5, 15, 30) (1.5, 1.5, 20, 10) (1.5, 1.5, 20, 20) (1.5, 1.5, 20, 30) (2, 0.5, 10, 10) (2, 0.5, 10, 20) (2, 0.5, 10, 30) (2, 0.5, 15, 10) (2, 0.5, 15, 20) (2, 0.5, 15, 30) (2, 0.5, 20, 10) (2, 0.5, 20, 20) (2, 0.5, 20, 30)

0.2780 0.2780 0 4.0194 0.9419 3.1553 3.9297 3.3749 3.5404 3.8148 1.3719 7.1930 6.9947 2.2390 6.2390 0 0 0.2390 1.3719 0.2780 2.2780

constraints. If Qp cannot be distributed completely in this assignment, shift the surplus of Qp to period t and to the periods before period t in accordance with the capability constraints.
(2.4) Repeat steps 2.1 to 2.3 from the last campaign to the first campaign for particle i. If this procedure finally conflicts with the capability constraints, the particle is infeasible; go to step 3.
Step 3. Computation of the fitness value. If the particle is infeasible, we set its fitness value equal to a sufficiently large number. Otherwise, the fitness value is computed by objective function (22).

4.4. Disturbance Strategy. To improve the search further, we introduce a disturbance strategy that adds population diversity, since particles tend to stagnate when their velocities approach zero. The disturbance strategy employs a random-immigrant technique, in which the position and velocity of a particle are simply reinitialized. Whenever no improvement in the best objective function value is seen for µ consecutive iterations, the disturbance strategy is applied to a randomly chosen particle in the swarm.

4.5. The Framework of PSO for CPP. The proposed PSO algorithm can be stated as follows:
Step 1. Initialize the parameters Np, c1, c2, w, Vmax, and µ; set V̄i0 = 0 and k = 0.
Step 2. Randomly generate the initial particles X̄i0, i = 1, ..., Np.
Step 3. Apply the repair procedure to the particles X̄ik, i = 1, ..., Np.
Step 4. Evaluate all particles in the population with the proposed forward heuristic procedure.
Step 5. Update the personal best particles as well as the global best particle.
Step 6. If the best objective function value found so far has not improved for µ consecutive iterations, apply the disturbance strategy to a random particle in the swarm.
Step 7. Update the velocities and positions of the particles in the swarm.
Step 8. Set k = k + 1.
Step 9. Go back to step 3 unless the termination condition is met.

5. Computational Experience

5.1. Problem Instance Generation. Following the instance generation method introduced by Suerie,1 we generate

Table 3 (continued):

(c, w, Vmax, µ)      gap (%)
(2, 1, 10, 10)       6.7863
(2, 1, 10, 20)       0
(2, 1, 10, 30)       0.2780
(2, 1, 15, 10)       0
(2, 1, 15, 20)       0.2780
(2, 1, 15, 30)       5.1592
(2, 1, 20, 10)       0
(2, 1, 20, 20)       0.2780
(2, 1, 20, 30)       0.2780
(2, 1.5, 10, 10)     1.3719
(2, 1.5, 10, 20)     0.2780
(2, 1.5, 10, 30)     2.2780
(2, 1.5, 15, 10)     1.9947
(2, 1.5, 15, 20)     1.9878
(2, 1.5, 15, 30)     4.0194
(2, 1.5, 20, 10)     1.9419
(2, 1.5, 20, 20)     5.8693
(2, 1.5, 20, 30)     5.9750

randomly 180 instances, enlarging the scale of some instances to test the performance of the proposed PSO algorithm for solving the CPP. The parameters of the problem instances are designed as follows. For each product, the average demand AVD is set to 100 per period, and the demand of each product in each period is drawn from the uniform distribution [50, 150]. We design problems in which capacity varies over the periods. Let U be the average utilization factor of capacity, 0 ≤ U ≤ 1. Capacity is then drawn from the uniform distribution [J × AVD/U - 100, J × AVD/U + 100] for each period. Setup time is product-dependent: for each product j, stj is selected randomly from the interval [1, 5]. The amount of capacity (time) used per unit of production is 1 in all test problems, namely pj = 1 (j = 1, ..., J). The inventory holding cost coefficient hj is an integer drawn randomly from [1, 5]. Setup cost is closely related to the inventory holding cost: given the mean demand of 100, the setup cost/holding cost ratio determines the campaign size, which is useful in establishing the time between campaigns. Here we set this ratio equal to 200. Campaign restrictions are imposed with a batch size of 200 units, a minimum production amount of 600 units, and a maximum production amount of 2000 units per campaign for all products. For each parametrization (J, T, U), 10 instances are generated randomly (see Tables 7 and 8).

5.2. Preliminary Experiments. As Np = 20 is often adopted in PSO algorithms, the chosen values of Np start at Np = 10 and increase to 30 in steps of 10 in the preliminary experiment. The following sets of PSO parameter values are adopted from the literature: Np ∈ {10, 20, 30}, c1 = c2 ∈ {1, 1.5, 2}, w ∈ {0.5, 1, 1.5}, Vmax ∈ {10, 15, 20}, µ ∈ {10, 20, 30}.
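The instance-generation scheme described above can be sketched as follows. This is a minimal illustration, not the authors' code: the function and field names are invented here, and integer demand draws are an assumption (the text only states a uniform distribution [50, 150]).

```python
import random

def generate_instance(J, T, U, avd=100, seed=None):
    """Randomly generate one CPP test instance as described in section 5.1.

    J: number of products, T: number of periods,
    U: average capacity utilization factor, avd: average demand per period.
    """
    rng = random.Random(seed)
    # Product demand: uniform on [50, 150] per period (mean AVD = 100).
    demand = [[rng.randint(50, 150) for _ in range(T)] for _ in range(J)]
    # Variable capacity: uniform on [J*AVD/U - 100, J*AVD/U + 100] per period.
    base = J * avd / U
    capacity = [rng.uniform(base - 100, base + 100) for _ in range(T)]
    # Product-dependent setup times from [1, 5]; unit processing time p_j = 1.
    setup_time = [rng.randint(1, 5) for _ in range(J)]
    proc_time = [1] * J
    # Integer holding costs from [1, 5]; setup cost = 200 * holding cost.
    holding_cost = [rng.randint(1, 5) for _ in range(J)]
    setup_cost = [200 * h for h in holding_cost]
    # Campaign restrictions common to all products.
    return {"demand": demand, "capacity": capacity,
            "setup_time": setup_time, "proc_time": proc_time,
            "holding_cost": holding_cost, "setup_cost": setup_cost,
            "batch_size": 200, "c_min": 600, "c_max": 2000}
```

For example, `generate_instance(3, 12, 0.65)` produces an instance of the same shape as the test set S001-S010.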
These sets of parameter values yield 3^5 = 243 possible combinations, and each combination is run for up to a maximum of 1000 iteration steps of the PSO with disturbance strategy. All preliminary experiments are carried out on a test instance from the test set S041-S050, and Tables 3-5 report the experimental results in detail. The statistical results on average gap values are summarized in Table 6. For this experiment, we denote parameters c1 and c2 as c in Tables 3-6 and define the optimality gap as

gap = |f(x) - f(x*)| / f(x*) × 100
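As a minimal sketch, the optimality gap reported in Tables 3-6 can be computed as follows (the function name is illustrative):

```python
def optimality_gap(f_x, f_opt):
    """Percentage optimality gap of a heuristic objective value f_x
    relative to the optimal objective value f_opt (assumed nonzero)."""
    return abs(f_x - f_opt) / f_opt * 100.0
```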

Ind. Eng. Chem. Res., Vol. 47, No. 22, 2008

Table 4. Gap Values of Different Parameter Combinations When Np = 20

(c, w, Vmax, µ)      gap (%)
(1, 0.5, 10, 10)     0.2390
(1, 0.5, 10, 20)     0.4823
(1, 0.5, 10, 30)     0.2780
(1, 0.5, 15, 10)     0.2390
(1, 0.5, 15, 20)     0.3719
(1, 0.5, 15, 30)     0.9146
(1, 0.5, 20, 10)     0.2390
(1, 0.5, 20, 20)     0.2390
(1, 0.5, 20, 30)     0.2390
(1, 1, 10, 10)       0
(1, 1, 10, 20)       0
(1, 1, 10, 30)       0
(1, 1, 15, 10)       0
(1, 1, 15, 20)       0
(1, 1, 15, 30)       0
(1, 1, 20, 10)       0
(1, 1, 20, 20)       0
(1, 1, 20, 30)       0
(1, 1.5, 10, 10)     3.9297
(1, 1.5, 10, 20)     1.3352
(1, 1.5, 10, 30)     3.1553
(1, 1.5, 15, 10)     1.8063
(1, 1.5, 15, 20)     1.7918
(1, 1.5, 15, 30)     1.9878
(1, 1.5, 20, 10)     0.2780
(1, 1.5, 20, 20)     0.7659
(1, 1.5, 20, 30)     1.3719
(1.5, 0.5, 10, 10)   0.2780
(1.5, 0.5, 10, 20)   0
(1.5, 0.5, 10, 30)   0
(1.5, 0.5, 15, 10)   0.2390
(1.5, 0.5, 15, 20)   1.9419
(1.5, 0.5, 15, 30)   0
(1.5, 0.5, 20, 10)   0.9146
(1.5, 0.5, 20, 20)   0.2780
(1.5, 0.5, 20, 30)   0.2780
(1.5, 1, 10, 10)     0.0194
(1.5, 1, 10, 20)     0
(1.5, 1, 10, 30)     0
(1.5, 1, 15, 10)     0
(1.5, 1, 15, 20)     0
(1.5, 1, 15, 30)     0
(1.5, 1, 20, 10)     0
(1.5, 1, 20, 20)     0.2780
(1.5, 1, 20, 30)     0
(1.5, 1.5, 10, 10)   1.8063
(1.5, 1.5, 10, 20)   0.2780
(1.5, 1.5, 10, 30)   1.9419
(1.5, 1.5, 15, 10)   1.9419
(1.5, 1.5, 15, 20)   2.9002
(1.5, 1.5, 15, 30)   2.1142
(1.5, 1.5, 20, 10)   0.9146
(1.5, 1.5, 20, 20)   0.9146
(1.5, 1.5, 20, 30)   3.7918
(2, 0.5, 10, 10)     0.2390
(2, 0.5, 10, 20)     0
(2, 0.5, 10, 30)     0
(2, 0.5, 15, 10)     0
(2, 0.5, 15, 20)     0
(2, 0.5, 15, 30)     0
(2, 0.5, 20, 10)     0.2390
(2, 0.5, 20, 20)     0
(2, 0.5, 20, 30)     0
(2, 1, 10, 10)       0
(2, 1, 10, 20)       0
(2, 1, 10, 30)       0
(2, 1, 15, 10)       0
(2, 1, 15, 20)       0
(2, 1, 15, 30)       0.2780
(2, 1, 20, 10)       0
(2, 1, 20, 20)       0
(2, 1, 20, 30)       0
(2, 1.5, 10, 10)     0
(2, 1.5, 10, 20)     0.2780
(2, 1.5, 10, 30)     0
(2, 1.5, 15, 10)     1.9419
(2, 1.5, 15, 20)     1.8063
(2, 1.5, 15, 30)     3.8148
(2, 1.5, 20, 10)     1.3719
(2, 1.5, 20, 20)     5.6901
(2, 1.5, 20, 30)     2.1142

Table 5. Gap Values of Different Parameter Combinations When Np = 30

(c, w, Vmax, µ)      gap (%)
(1, 0.5, 10, 10)     0
(1, 0.5, 10, 20)     0.2780
(1, 0.5, 10, 30)     0
(1, 0.5, 15, 10)     0.2390
(1, 0.5, 15, 20)     0
(1, 0.5, 15, 30)     0
(1, 0.5, 20, 10)     0
(1, 0.5, 20, 20)     0
(1, 0.5, 20, 30)     0
(1, 1, 10, 10)       0
(1, 1, 10, 20)       0
(1, 1, 10, 30)       0
(1, 1, 15, 10)       0
(1, 1, 15, 20)       0
(1, 1, 15, 30)       0
(1, 1, 20, 10)       0
(1, 1, 20, 20)       0
(1, 1, 20, 30)       0
(1, 1.5, 10, 10)     0.2780
(1, 1.5, 10, 20)     0.9146
(1, 1.5, 10, 30)     2.2268
(1, 1.5, 15, 10)     0.2390
(1, 1.5, 15, 20)     0.9146
(1, 1.5, 15, 30)     1.3719
(1, 1.5, 20, 10)     0
(1, 1.5, 20, 20)     0.3155
(1, 1.5, 20, 30)     1.0878
(1.5, 0.5, 10, 10)   0.2390
(1.5, 0.5, 10, 20)   0
(1.5, 0.5, 10, 30)   0
(1.5, 0.5, 15, 10)   0
(1.5, 0.5, 15, 20)   0
(1.5, 0.5, 15, 30)   0
(1.5, 0.5, 20, 10)   0.2390
(1.5, 0.5, 20, 20)   0
(1.5, 0.5, 20, 30)   0.2780
(1.5, 1, 10, 10)     0
(1.5, 1, 10, 20)     0
(1.5, 1, 10, 30)     0
(1.5, 1, 15, 10)     0
(1.5, 1, 15, 20)     0
(1.5, 1, 15, 30)     0
(1.5, 1, 20, 10)     0
(1.5, 1, 20, 20)     0
(1.5, 1, 20, 30)     0
(1.5, 1.5, 10, 10)   0
(1.5, 1.5, 10, 20)   0.2390
(1.5, 1.5, 10, 30)   0.9146
(1.5, 1.5, 15, 10)   1.8063
(1.5, 1.5, 15, 20)   0.2780
(1.5, 1.5, 15, 30)   0.2390
(1.5, 1.5, 20, 10)   0.9146
(1.5, 1.5, 20, 20)   0
(1.5, 1.5, 20, 30)   3.5758
(2, 0.5, 10, 10)     0
(2, 0.5, 10, 20)     0
(2, 0.5, 10, 30)     0
(2, 0.5, 15, 10)     0
(2, 0.5, 15, 20)     0
(2, 0.5, 15, 30)     0
(2, 0.5, 20, 10)     0
(2, 0.5, 20, 20)     0
(2, 0.5, 20, 30)     0

Table 6. Average Gap Values of All Possible Parameter Combinations

Np    gap (%)     c     gap (%)     w     gap (%)     Vmax   gap (%)     µ     gap (%)
10    2.1171      1     1.1050      0.5   0.7426      10     1.0848      10    1.1122
20    0.7192      1.5   0.9956      1     0.2491      15     0.9833      20    0.6774
30    0.2787      2     1.0145      1.5   2.1233      20     1.0469      30    1.3255

where x is a heuristic solution and x* is the optimal solution. The results of our preliminary experiment reveal that the best setting is Np = 30, c1 = c2 = 1.5, w = 1, Vmax = 15, µ = 20. This setting achieves the smallest average deviation from optimality (see Table 6). Note that a larger Np would improve solution quality further, but it would also prolong the computation times; there is a tradeoff between solution quality and computation time. Since satisfactory results are obtained by the PSO when Np equals 30, we did not increase Np any further in our experiments. On the basis of the preliminary experimental results, the parameters of the PSO are configured as follows: Np = 30, c1 = c2 = 1.5, w = 1, Vmax = 15, µ = 20. As an exception, we set Np = 50 for the large-scale test set, while the other parameters remain as in the small-scale test set. For each test instance, the proposed PSO has been run 30 times and average results are presented. A run is terminated if the maximum of 1000 iteration steps is

Table 5 (continued):

(c, w, Vmax, µ)      gap (%)
(2, 1, 10, 10)       0
(2, 1, 10, 20)       0
(2, 1, 10, 30)       0
(2, 1, 15, 10)       0
(2, 1, 15, 20)       0
(2, 1, 15, 30)       0
(2, 1, 20, 10)       0
(2, 1, 20, 20)       0
(2, 1, 20, 30)       0
(2, 1.5, 10, 10)     0
(2, 1.5, 10, 20)     0
(2, 1.5, 10, 30)     0
(2, 1.5, 15, 10)     0
(2, 1.5, 15, 20)     0.2390
(2, 1.5, 15, 30)     1.8063
(2, 1.5, 20, 10)     0.9146
(2, 1.5, 20, 20)     0.9146
(2, 1.5, 20, 30)     2.1142

reached. The proposed PSO algorithm was coded in Visual C++. For all the test instances, we compare our algorithm with the results of CPLEX 10.0. All the computational experiments reported in this paper are run on an Intel Pentium IV 3.0 GHz PC, and the average results are summarized in Tables 7 and 8.

5.3. Computational Results. Computational performance is judged in terms of solution speed and quality. Since the 10 instances naturally vary for each parametrization (J, T, U), the objective function values also vary in magnitude. To eliminate problem-to-problem variation, we use relative rather than absolute objective function values. Solution quality is measured by the solution proportion coefficient ρl, defined as

ρl = (optl / optbest) × 100, l = a, b, c

where optl represents the best solution found by method l, the letters a-c denote CPLEX, PSO without disturbance strategy, and the proposed PSO algorithm, respectively, and optbest is the best optl obtained over the three methods. Column "time" reports the average computation time in seconds in Tables 7 and 8.

Table 7 shows the average computational results over all 90 test instances of the small-scale test set. CPLEX obtains ρa = 1, i.e.,

Table 7. Average Computational Results of Small-Scale Instance Set

instance     J   T    U      ρa (%)   timea (s)   ρb (%)   timeb (s)   ρc (%)   timec (s)
S001-S010    3   12   0.65   1        1.134       1.0197   0.554       1.0035   0.584
S011-S020    3   12   0.7    1        1.189       1.0185   0.569       1.0029   0.787
S021-S030    3   12   0.75   1        1.346       1.0169   0.533       1.0019   0.568
S031-S040    3   15   0.65   1        5.829       1.0396   0.648       1.0030   0.733
S041-S050    3   15   0.7    1        7.677       1.0256   0.610       1.0107   0.676
S051-S060    3   15   0.75   1        6.895       1.0286   0.593       1.0034   0.707
S061-S070    3   18   0.65   1        13.251      1.0409   0.808       1.0080   0.823
S071-S080    3   18   0.7    1        15.156      1.0479   0.719       1.0086   0.804
S081-S090    3   18   0.75   1        11.974      1.0487   0.704       1.0046   0.804

a CPLEX. b PSO without disturbance strategy. c PSO with disturbance strategy.

Table 8. Average Computational Results of Large-Scale Instance Set

instance     J   T    U      ρa (%)   ρb (%)   timeb (s)   ρc (%)   timec (s)
S091-S100    4   28   0.65   1.0481   1.0508   1.7672      1        2.0424
S101-S110    4   28   0.7    1.0159   1.0516   1.8190      1        2.0065
S111-S120    4   28   0.75   1.0233   1.0722   1.8195      1.0017   1.9589
S121-S130    5   30   0.65   1.0255   1.0379   2.2222      1        2.7187
S131-S140    5   30   0.7    1.0090   1.0258   2.2348      1        2.6835
S141-S150    5   30   0.75   1.0286   1.0418   2.2552      1        2.7041
S151-S160    6   48   0.65   1.0082   1.0260   8.6388      1.0012   10.2213
S161-S170    6   48   0.7    1.0197   1.0360   8.7454      1        8.9806
S171-S180    6   48   0.75   1.0501   1.0250   8.3413      1.0020   9.2318

a CPLEX. b PSO without disturbance strategy. c PSO with disturbance strategy.
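As a minimal illustration of how the solution proportion coefficient ρl in Tables 7 and 8 is obtained: the tables report ρl as a ratio near 1 despite the "(%)" column label, so this sketch (with an invented function name) omits the factor of 100 and assumes a minimization objective, so that the smallest objective value over the three methods is optbest.

```python
def solution_proportion(objs):
    """objs: dict mapping a method label ('a', 'b', 'c') to the best
    objective value that method found on an instance.

    Returns rho_l = opt_l / opt_best for each method, where opt_best is
    the smallest (best) objective over all methods; rho_l = 1 marks the
    method that found the best solution.
    """
    opt_best = min(objs.values())
    return {label: value / opt_best for label, value in objs.items()}
```

For example, objective values of 1048.1, 1050.8, and 1000.0 for methods a, b, and c yield ρa = 1.0481, ρb = 1.0508, and ρc = 1, matching the pattern of row S091-S100 in Table 8.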

the optimal solutions for all the 90 instances in the small-scale test set. For the large-scale problem instances, average computational results are given in Table 8. Here, we limit the CPU time to 7200 s and compare, for both CPLEX and the proposed PSO algorithm, the best results found within this limit. While this limit may seem arbitrary and perhaps inadequate, we believe that quick solution is a critical consideration in a practical application; a limit of 7200 s seems a very reasonable wait for a planner in an industrial setting.

In comparison with the results of CPLEX, the improved PSO algorithm works quite well on the small-scale test set: the best solutions attain between 98.93% and 99.81% of optimality, and the computation times are extremely small (never exceeding 0.9 s). Similar results are found on the large-scale test instances, where the proposed PSO outperforms CPLEX within 11 s. In addition, the PSO without the disturbance strategy performs poorly in both the small-scale and the large-scale tests, which demonstrates, to some extent, the premature convergence of the basic PSO algorithm. From all the experimental values, we conclude that the proposed PSO algorithm performs best for most instances. The main reason that optimal or suboptimal solutions are approached rapidly during the PSO implementation lies in the reduction of the problem scale: as the properties developed above reduce the original CPP formulation, the computational scale for the PSO decreases greatly. In addition, the disturbance strategy improves solution quality further, although it may increase the computational effort to some extent. With such a disturbance strategy, the improved PSO can escape local minima effectively during the iteration process, since new particles immigrate into the swarm when the evolution stagnates.

6. Conclusions

Campaign planning problems arising in process industries have been studied extensively. Instead of solving the models with the help of software packages, as is commonly done, we present a hybrid approach of heuristic and PSO algorithms to solve the campaign planning problem in process industries. In the proposed approach, the PSO solves the subproblem that determines campaign numbers and corresponds to the binary variables of the CPP model, while the heuristic is applied to the other subproblem, which calculates campaign lengths with the remaining variables after fixing the binary variables. During the implementation of the PSO, we redefine the particle position and velocity and construct a moving strategy to update particle positions. Moreover, a disturbance strategy is employed to further improve solution quality. The performance of the proposed PSO algorithm is evaluated by comparing the experimental results with CPLEX. The computational results show the superiority of the proposed PSO in solution speed, and the solutions obtained by the PSO are near-optimal for most instances. In summary, the proposed PSO algorithm provides a novel approach to solving the CPP in process industries. For further research, some improvements and extensions are worth studying, including ways to accelerate convergence for problems of higher dimension and ways to extend the methodology to other practical applications.

Acknowledgment

The authors thank two anonymous referees for their helpful comments and suggestions. This research is partly supported by the National Natural Science Foundation for Distinguished Young Scholars of China (Grant No. 70425003), the National 863 High-Tech Research and Development Program of China (Approved No. 2006AA04Z174), and the National Natural Science Foundation of China (Grant No. 60674084).

Nomenclature

Abbreviations
PSO = particle swarm optimization
CPP = campaign planning problem

Indices
j, j′, r = products
t, m, n = periods

i, g = particles of the PSO algorithm
λ, d = dimensions of particles in the PSO algorithm
k = iterations of the PSO algorithm
p = campaigns

Sets
J′ = a set of products
T′ = a set of periods
Up = the set of periods in which the facility is in the setup state for campaign p

Model Parameters and Variables
(1) Model Parameters
pj = capacity (time) needed to produce one unit of product j
B = a big number
Ct = available capacity (time) of the facility in period t
djt = demand for product j in period t
hj = cost of holding one unit of product j in inventory for one period
scj = setup cost for product j
stj = setup time for product j
Cmax = maximum production amount of a campaign
Cmin = minimum production amount of a campaign
BS = batch size
J = number of products
T = number of periods
(2a) Binary and Integral Model Variables
Xjt = whether a setup for product j takes place in period t (Xjt = 1) or not (Xjt = 0)
Zjt = whether the facility is set up for product j at the end of period t (Zjt = 1) or not (Zjt = 0)
Rjt = number of full batches produced in the current campaign of product j up to period t
(2b) Continuous Model Variables
Yjt = quantity of product j to be produced in period t
Ijt = inventory of product j at the end of period t
Mjt = current campaign quantity of product j up to period t
Sjt = residual quantity of product j in period t

Algorithm Parameters and Variables
(1) Parameters
Np = population size
w = inertia weight
c1 = cognition learning factor
c2 = social learning factor
r1, r2 = random numbers uniformly distributed in [0, 1]
µ = number of consecutive iterations over which the best objective function value in the swarm shows no improvement
Vmax = maximum velocity of particles
(2) Variables
Xi^k = position of the ith particle at iteration k, Xi^k = (xi1^k, xi2^k, ..., xiλ^k)
Vi^k = velocity of the ith particle at iteration k, Vi^k = (vi1^k, vi2^k, ..., viλ^k)
Pi^k = best position of the ith particle up to iteration k, Pi^k = (pi1^k, pi2^k, ..., piλ^k)
X̄i^k = "product-to-period" position representation of the discrete particle i at iteration k, X̄i^k = (xi11^k, xi12^k, ..., xiJT^k), xijt^k ∈ {0, 1}; xijt^k = 1 if the facility is set up for product j at the end of period t for particle i, and xijt^k = 0 otherwise
V̄i^k = velocity of the discrete particle i at iteration k, V̄i^k = (vi11^k, vi12^k, ..., viJT^k), vijt^k ∈ [-Vmax, Vmax]
P̄i^k = best position of the discrete particle i up to iteration k, P̄i^k = (pi11^k, pi12^k, ..., piJT^k), pijt^k ∈ {0, 1}
P̄g^k = best position in the swarm up to iteration k, P̄g^k = (pg11^k, pg12^k, ..., pgJT^k), pgjt^k ∈ {0, 1}
s(vijt^k) = sigmoid function value of vijt^k
Ri^k(j, t) = probability of xijt^k taking the value 1
ADjp = actual demand of the pth campaign of product j
ADjt′ = actual demand of product j in period t
R(j, p) = batch number of the pth campaign of product j
Qp = production amount of campaign p

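Using the notation above, the discrete velocity and position dynamics can be sketched as follows. This is a simplified illustration, not the authors' exact moving strategy: it applies the standard binary-PSO rule, which the nomenclature's sigmoid s(vijt) and probability Ri(j, t) suggest as the underlying mechanism, together with the random-immigrant disturbance of section 4.4; all function names are invented here.

```python
import math
import random

def update_particle(x, v, p_best, g_best,
                    w=1.0, c1=1.5, c2=1.5, vmax=15.0, rng=random):
    """One velocity/position update for a binary 'product-to-period' particle.

    x, v, p_best, g_best are dicts keyed by (j, t); x, p_best, g_best take
    values in {0, 1}, and v is clamped to [-vmax, vmax].  The sigmoid of the
    velocity gives the probability that x[(j, t)] becomes 1.
    """
    for jt in x:
        r1, r2 = rng.random(), rng.random()
        # Standard velocity update with inertia, cognitive, and social terms.
        v[jt] = (w * v[jt]
                 + c1 * r1 * (p_best[jt] - x[jt])
                 + c2 * r2 * (g_best[jt] - x[jt]))
        v[jt] = max(-vmax, min(vmax, v[jt]))   # velocity clamping to [-Vmax, Vmax]
        s = 1.0 / (1.0 + math.exp(-v[jt]))     # sigmoid s(v_ijt)
        x[jt] = 1 if rng.random() < s else 0   # probabilistic binary position

def disturb(x, v, rng=random):
    """Random-immigrant disturbance: reinitialize position and velocity."""
    for jt in x:
        x[jt] = rng.randint(0, 1)
        v[jt] = 0.0
```

In a full run, `update_particle` would be called for every particle in step 7 of the framework, and `disturb` for one random particle whenever the swarm's best objective value stagnates for µ iterations.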
Literature Cited

(1) Suerie, C. Campaign Planning in Time-Indexed Model Formulations. Int. J. Prod. Res. 2005, 43, 49-66.
(2) Grunow, M.; Günther, H.; Lehmann, M. Campaign Planning for Multi-stage Batch Processes in the Chemical Industry. OR Spectrum 2002, 24, 281-314.
(3) Méndez, C. A.; Cerdá, J.; Grossmann, I. E.; Harjunkoski, I.; Fahl, M. State-of-the-Art Review of Optimization Methods for Short-Term Scheduling of Batch Processes. Comput. Chem. Eng. 2006, 30 (6-7), 913-946.
(4) Drexl, A.; Haase, K. Proportional Lotsizing and Scheduling. Int. J. Prod. Econ. 1995, 40, 73-87.
(5) Oh, H.-C.; Karimi, I. A. Planning Production on a Single Processor with Sequence-Dependent Setups. Part 1: Determination of Campaigns. Comput. Chem. Eng. 2001, 25, 1021-1030.
(6) Rajaram, K.; Karmarkar, U. S. Campaign Planning and Scheduling for Multiproduct Batch Operations with Applications to the Food-Processing Industry. Manuf. Serv. Oper. Manage. 2004, 6, 253-269.
(7) Brucker, P.; Hurink, J. Solving a Chemical Batch Scheduling Problem by Local Search. Ann. Oper. Res. 2000, 96, 17-38.
(8) Kallrath, J. Planning and Scheduling in the Process Industry. OR Spectrum 2002, 24, 219-250.
(9) Méndez, C. A.; Henning, G. P.; Cerdá, J. Optimal Scheduling of Batch Plants Satisfying Multiple Product Orders with Different Due-Dates. Comput. Chem. Eng. 2000, 24, 2223-2245.
(10) Kocis, G. R.; Grossmann, I. E. Relaxation Strategy for the Structural Optimization of Process Flow Sheets. Ind. Eng. Chem. Res. 1987, 26 (9), 1869-1880.
(11) Kocis, G. R.; Grossmann, I. E. Global Optimization of Nonconvex Mixed-Integer Nonlinear Programming (MINLP) Problems in Process Synthesis. Ind. Eng. Chem. Res. 1988, 27, 1407-1421.
(12) Kocis, G. R.; Grossmann, I. E. A Modeling and Decomposition Strategy for the MINLP Optimization of Process Flowsheets. Comput. Chem. Eng. 1989, 13, 797-819.
(13) Balasubramanian, J.; Grossmann, I. E. A Novel Branch and Bound Algorithm for Scheduling Flowshop Plants with Uncertain Processing Times. Comput. Chem. Eng. 2002, 26, 41-57.
(14) Costa, L.; Oliveira, P. Evolutionary Algorithms Approach to the Solution of Mixed Integer Non-Linear Programming Problems. Comput. Chem. Eng. 2001, 25, 257-266.
(15) Luo, Y. Q.; Yuan, X. G.; Liu, Y. J. An Improved PSO Algorithm for Solving Non-Convex NLP/MINLP Problems with Equality Constraints. Comput. Chem. Eng. 2007, 31, 153-162.
(16) He, Y.; Hui, C. W. Genetic Algorithm for Large-Size Multi-Stage Batch Plant Scheduling. Chem. Eng. Sci. 2007, 62, 1504-1523.
(17) Liu, Y.; Karimi, I. A. Scheduling Multistage, Multiproduct Batch Plants with Nonidentical Parallel Units and Unlimited Intermediate Storage. Chem. Eng. Sci. 2007, 62, 1549-1566.
(18) Kennedy, J.; Eberhart, R. Particle Swarm Optimization. In Proceedings of IEEE-ICNN, 1995; pp 1942-1948.
(19) Eberhart, R. C.; Shi, Y. H. Evolving Artificial Neural Networks. In Proceedings of ICNNB, 1998; pp 5-13.
(20) Parsopoulos, K.; Vrahatis, M. Particle Swarm Optimization Method for Constrained Optimization Problems. In Intelligent Technologies - Theory and Applications: New Trends in Intelligent Technologies; IOS Press: Amsterdam, The Netherlands, 2002; Vol. 76, pp 214-220.
(21) Hu, X.; Eberhart, R. C. Solving Constrained Nonlinear Optimization Problems with Particle Swarm Optimization. In Proceedings of the 6th WMSCI, 2002; pp 203-206.
(22) Paquet, U.; Engelbrecht, A. P. A New Particle Swarm Optimiser for Linearly Constrained Optimization. In Proceedings of CEC, 2003; Vol. 1, pp 227-233.

(23) Fieldsend, J. E.; Singh, S. A Multi-Objective Algorithm Based upon Particle Swarm Optimisation, an Efficient Data Structure and Turbulence. In Proceedings of the 2002 U.K. WCI; pp 37-44.
(24) Pulido, G. T.; Coello Coello, C. A. Using Clustering Techniques to Improve the Performance of a Multi-Objective Particle Swarm Optimizer. Proc. GECCO, LNCS 2004, 3102, 225-237.
(25) Li, X. D. Better Spread and Convergence: Particle Swarm Multiobjective Optimization Using the Maximin Fitness Function. Proc. GECCO, LNCS 2004, 3102, 117-128.
(26) van den Bergh, F.; Engelbrecht, A. P. Cooperative Learning in Neural Networks Using Particle Swarm Optimizers. S. Afr. Comput. J. 2000, 26, 84-90.
(27) Abido, M. A. Optimal Power Flow Using Particle Swarm Optimization. Electr. Power Energy Syst. 2002, 24, 563-571.
(28) Brandstätter, B.; Baumgartner, U. Particle Swarm Optimization: Mass-Spring System Analogon. IEEE Trans. Magn. 2002, 38, 997-1000.

(29) Salman, A.; Ahmad, I.; Al-Madani, S. Particle Swarm Optimization for Task Assignment Problem. Microprocess. Microsyst. 2003, 26, 363-371.
(30) Clerc, M. Discrete Particle Swarm Optimization, Illustrated by the Travelling Salesman Problem. In New Optimization Techniques in Engineering; Springer: Heidelberg, Germany, 2004; Vol. 219, p 239.
(31) Kennedy, J.; Eberhart, R. C. A Discrete Binary Version of the Particle Swarm Algorithm. In Proceedings of the WMSCI, 1997; pp 4104-4109.

Received for review March 08, 2008
Revised manuscript received July 27, 2008
Accepted September 08, 2008

IE800383Y