An Evaluation of Simulated Annealing for Batch Process Scheduling

Ind. Eng. Chem. Res. 1991, 30, 163-169


Hong-ming Ku and Iftekhar Karimi*

Department of Chemical Engineering, Northwestern University, Evanston, Illinois 60208-3120

In recent years, simulated annealing has been successfully used to solve several combinatorial optimization problems. Most scheduling problems in batch processing are also combinatorially complex. In this paper, we investigate the usefulness of simulated annealing for solving batch process scheduling problems. For this, we develop a simulated annealing methodology for minimizing the total time to produce a set of batches in the serial flowshop with unlimited storage. We compare the method with the best heuristic method and two other random strategies. Its generality, its simplicity, and the near-optimal nature of its solutions far outweigh its large computational requirements. In fact, it seems hard to devise a heuristic algorithm that can give solutions as good as those given by simulated annealing, even if we allow more computational effort. Simulated annealing appears to be a very versatile and powerful method for solving different forms of the batch process scheduling problem.

Introduction

In recent years, an optimization technique called simulated annealing (SA) (Press et al., 1986; Kirkpatrick et al., 1983; Aarts and Korst, 1989) has shown significant promise for solving combinatorial optimization problems that are NP-complete. The method is a simple randomization algorithm. It distinguishes itself from other heuristic algorithms in that it is not greedy. While the search for better solutions in the commonly used heuristics often terminates at a local minimum due to their greedy nature, SA can climb hills, i.e., accept moves that generate solutions of higher cost than the present one, and thus is able to dig itself out of a local minimum to search for better minima. Applications of this method to several well-known problems have been reported recently in the literature. For instance, Kirkpatrick et al. (1983) used it to solve the famous traveling salesman problem with as many as several thousand cities in some cases. Vecchi and Kirkpatrick (1983) and Sechen and Sangiovanni-Vincentelli (1984) used it to determine the layout of complex integrated circuits involving several hundred thousand elements on a tiny silicon substrate so as to minimize the interference among their connecting wires. In the chemical engineering literature, Dolan et al. (1989) applied it to the synthesis of heat-exchanger networks and obtained new, lower cost designs for problems in the published literature. Because of its generality, simplicity, and success in solving large-scale combinatorial problems, SA appears very attractive as a method for solving large-scale scheduling problems in batch processes.

Scheduling of process operations is very important in the batch chemical process industries (CPI). This problem in its most common form consists of determining the times and the order in which the products are to be processed in each of the processors within a production facility in order to optimize a suitable objective. Except for a few cases, it is a difficult combinatorial optimization problem. In fact, most scheduling problems have been shown (Garey et al., 1976) to be NP-complete, and no efficient algorithm is likely to exist for solving them optimally. In practice, scheduling problems involving more than 10 products are extremely difficult to solve optimally. For these reasons, various specialized but suboptimal scheduling algorithms have been developed (Ku and Karimi, 1986, 1988; Rajagopalan and Karimi, 1987a,b; Kuriyan and Reklaitis, 1989) to solve these problems efficiently. However, there are several drawbacks in designing and using such suboptimal algorithms. The first is that the quality of solutions deteriorates very rapidly with an increase in the problem size.

* Author to whom correspondence should be addressed.

The second drawback is that there is no simple way of evaluating the performance of such algorithms for very large-scale problems. The third and the major drawback is that there is very little carryover from one scheduling algorithm to another. That is to say, two scheduling problems that seem closely related must often be solved by different techniques. In fact, most scheduling algorithms are designed to exploit special characteristics inherent in a problem. A scheduling approach that would transcend the boundaries of the different scheduling problems and yield near-optimal solutions for large-scale problems would be extremely useful. The aim of this paper is to report a preliminary evaluation of the potential of SA for application in batch process scheduling. We consider the serial multiproduct batch process with unlimited intermediate storage as an example process and implement a SA methodology for determining a production sequence for a given set of products so as to minimize the total production time. Even this well-studied and relatively simple scheduling problem is known to be NP-complete. Specifically, we study how SA can be implemented for this scheduling problem, which parameters and aspects of the method have strong bearing upon its effectiveness, and how it compares with the best available scheduling algorithm in terms of the computational effort and the quality of solutions. We begin with a brief introduction to the basic concepts of SA.

Simulated Annealing

The central idea behind SA originates from an analogy with the way liquids freeze and crystallize, or the way metals cool and anneal. At high temperatures, the atoms of a liquid or metal can wander freely because of their high thermal mobility. As the temperature of the liquid or metal is lowered slowly, the atoms begin to align themselves into orderly repetitive arrays, and at a sufficiently low temperature, a pure crystal composed of completely ordered atoms is formed. This crystal corresponds to the state of minimum energy allowed for the system. On the other hand, if a liquid or metal is cooled quickly or quenched and the substance is allowed to deviate from equilibrium, then the state of minimum energy is not attained. Instead, the system ends up in a metastable, locally optimal structure or amorphous state having a somewhat higher energy. Hence, annealing involves cooling a substance slowly to allow ample time for the redistribution of atoms so that a minimum energy state is achieved. The greedy heuristic methods commonly used for solving combinatorial optimization problems are analogous to the process of rapid cooling or quenching.


In such methods, one begins from a starting point, always moves downhill toward better and better solutions, and in the process discards all worse solutions. For instance, one of the most common and effective optimization strategies for heuristics is that of iterative improvement. Such a strategy has been shown to be very effective in solving the batch process scheduling problem (see Ku et al. (1987)). In iterative improvement, one starts with an initial system configuration or solution with a cost function. An improvement procedure is applied to the initial configuration to find a rearranged configuration with an improved cost function. The rearranged configuration then becomes the new configuration of the system, and the process is continued until no further improvement is possible. Thus, one always tries to achieve as much reduction in the cost as possible at a given instance and never accepts any configuration with a higher cost. This iterative strategy is analogous to quenching, with the cost function and the improvement procedure playing the roles of energy and the process of rapid cooling, respectively. Hence, just as rapid quenching of a substance from high temperatures leads to metastable structures, iterative improvement often results in a suboptimal solution.

On the other hand, the process of annealing is based on a different procedure of minimization. Instead of being fixed in one energy state, a physical system in thermal equilibrium at temperature T has its energy stochastically distributed among all different energy states E. So even at low temperatures, there is a chance, however small, of a system being in a high-energy state. The probability distribution describing this is the Boltzmann distribution, P(E) proportional to exp[-E/kT], where k is the Boltzmann constant relating temperature to energy. The consequence of a system having such an energy distribution is that from a given state it can go both downhill and uphill, i.e., it can go to a lower energy state as well as a higher energy state. Occasional uphill steps prevent the system from being stuck in a local optimum and allow it to wander off in search of a better configuration. Can we simulate the physical process of annealing in solving optimization problems? Metropolis et al. (1953) were the first to do so.

The Metropolis Algorithm. The basic idea behind a simulated annealing algorithm is to consider the various solutions of a numerical optimization problem as different configurations of a physical system and the objective function values of these solutions as the energies of the analogous states. The algorithm starts with an initial solution. At every step of the algorithm, new candidate solutions are generated from the current solution by means of random perturbations. Let E1 be the objective function value of the current solution and E2 be that of the new solution. If E2 <= E1, the new solution is made the current solution and the process continues. The case of E2 > E1 is treated randomly, in which the probability of accepting the new solution is given by P(dE) = exp[-(E2 - E1)/kT]. A random number, uniformly distributed in the interval [0, 1], is generated and compared with P(dE). If it is less than P(dE), the new solution is made current; otherwise, it is discarded and another solution is generated from the current one. The algorithm continues until a certain termination criterion is met, such as a prespecified number of rearrangements. This scheme has come to be known as the Metropolis algorithm.
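For concreteness, the acceptance test at the heart of this scheme fits in a few lines. The sketch below is ours, not the authors' code; the names accept, e_cur, e_new, and kt are illustrative, and kT is treated as a single control parameter as in the text.

import math
import random

def accept(e_new, e_cur, kt):
    # Downhill moves (E2 <= E1) are always accepted.
    if e_new <= e_cur:
        return True
    # Uphill moves are accepted with probability P(dE) = exp[-(E2 - E1)/kT],
    # by comparing a uniform random number in [0, 1] with P(dE).
    return random.random() < math.exp(-(e_new - e_cur) / kt)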
Note that the algorithm always makes a downhill move when a solution with a lower objective value is found (E2 <= E1) but occasionally makes an uphill move, i.e., accepts a solution with a higher objective (E2 > E1).

The probability of accepting a solution with a higher objective depends upon the value of a control parameter (temperature) T. The higher the temperature, the more likely it is that an uphill move is made. Thus, to apply a SA algorithm to a problem, one must first resolve the following three purely problem-dependent issues: (1) an objective function (analogue of energy) to be minimized; (2) a characterization of possible solution configurations; and (3) an efficient method of generating random solutions from a current solution. Then, one must select the following two largely algorithmic elements: (1) a criterion for accepting or rejecting a new solution, i.e., an acceptance criterion; and (2) an annealing or cooling schedule specifying an initial value of the control parameter T, a method for decreasing T, and the number of solutions to be generated at each T before it is decreased. Based on the two algorithmic elements mentioned above, several forms of SA algorithms are possible (see Das et al. (1989)); the Metropolis algorithm is the simplest and has been used the most (Aarts and Korst, 1989). Although it is not necessarily the best for the problem under consideration, we use only the Metropolis algorithm for our preliminary investigation in this paper. A more detailed comparison of different algorithms has been given recently by Das et al. (1989). We implement the Metropolis algorithm for scheduling a serial multiproduct batch process with unlimited intermediate storage (UIS). The motivation for selecting this system (the serial UIS flowshop) for our evaluation is as follows. First of all, it has been studied very extensively; thus, several solution methods are available with which SA can be compared. Second, this is one of the simplest plant configurations, and hence, the scheduling considerations are not complicated. Before proceeding further, it must be mentioned that SA has some nice theoretical convergence properties (Aarts and Korst, 1989). Using the theory of Markov chains, it can be shown that SA is guaranteed to converge to the set of optimal solutions, given an infinite number of iterations. Practically, however, this asymptotic behavior can only be approximated in polynomial time. Although the guarantee of an optimal solution is then lost, polynomial-time approximation algorithms, such as the one used in this paper, are known to give near-optimal solutions for many problems of interest.

The Serial UIS Flowshop

A serial multiproduct process (or a flowshop) consists of a series of M batch units linked together by semicontinuous units. In such a process, all products pass through all the units in the same sequence; a product may skip some units. A set of N product batches is to be produced. The time required to process product i on batch unit j, tij, is specified for all products and units. If a product skips a batch unit, its processing time is considered zero on that unit. When a product batch finishes processing on a unit, it may be stored temporarily in an intermediate storage unit. In a UIS flowshop, at least N - 1 such storage units are available between every two adjacent batch units. Thus, a storage unit is always available whenever a product batch needs it; hence, the system is called an unlimited intermediate storage system. The scheduling problem involves the determination of a sequence in which the batches should be produced so as to maximize the productivity of the plant, i.e., to minimize the total time (makespan) required to produce all the batches. The following assumptions are commonly made by the algorithms available for scheduling the UIS flowshop.

(1) All products are produced in the same order on each processing unit. (2) The operation is nonpreemptive; i.e., once begun, an operation cannot be interrupted until it is completed. (3) A unit may not process more than one product at a time, nor may a product be processed by more than one unit simultaneously.

Purely for the sake of simplicity, we also assume that the times to transfer batches between units and storage are negligible compared to the batch processing times and are included in them. Similarly, times required to set up and clean units and storage when changing from one product to another are also negligible.

Existing Scheduling Algorithms. As indicated earlier, the UIS flowshop is the most studied (Ku et al., 1987). The problem of finding a sequence with minimum makespan has been shown to be NP-complete for M > 2 by Garey et al. (1976); thus, polynomial-time algorithms for solving the problem exactly are not likely to exist. So most research has been directed toward developing suboptimal heuristic algorithms. These algorithms generally divide the scheduling problem into two subproblems: the sequence selection and the completion time determination for a given sequence. Because of assumption 1, the possible sequences or solutions of the problem are simply characterized by permutations of N integers. A sequence k1-k2-...-kN means that product ki is produced ith in the production sequence. If we denote Cij as the time at which the ith product in the sequence actually finishes processing on batch unit j, then for the UIS flowshop the Cij's are given by (Rajagopalan and Karimi, 1989)

Cij = max [Ci(j-1), C(i-1)j] + t(ki)j    i = 1, ..., N; j = 1, ..., M    (2)

where Cij = 0 if i <= 0 or j <= 0.

A number of heuristic algorithms exist in the literature (see Ku et al. (1987) for a review) for finding good but suboptimal solutions for the UIS flowshop problem. Recently, Rajagopalan and Karimi (1987a,b) proposed a general scheduling algorithm, called Idle Matrix Search (IMS). It is applicable to general flowshops with mixed intermediate storage policies and nonzero transfer and set-up times. This algorithm is the most general and the best to date; it has been found to be more effective and more efficient (Rajagopalan and Karimi, 1987a,b) than all the existing algorithms for solving the flowshop problems. For small-size problems (N <= 8), it gives an average deviation of 1.3% from the optimal solutions, and it outperforms other scheduling algorithms by as much as 8% in makespan improvements for larger problems. Despite its general features, the IMS heuristic is still problem-specific in that it is designed for makespan flowshops only. However, we selected IMS for evaluating the SA algorithm, which we develop next for the UIS flowshop.
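For reference, the recursion in eq 2 can be evaluated in a single O(NM) sweep over the sequence. The sketch below is our illustration (the function and variable names are ours, not the authors'); it returns the makespan CMN that serves as the objective in what follows.

def makespan(seq, t):
    # seq: permutation of product indices 0..N-1, i.e., (k1, ..., kN)
    # t:   t[i][j] = processing time of product i on batch unit j
    m = len(t[0])             # number of batch units M
    done = [0.0] * m          # done[j] holds C(i-1)j during the sweep
    for k in seq:             # product ki, processed ith in the sequence
        prev = 0.0            # Ci(j-1): completion time on the previous unit
        for j in range(m):
            # eq 2: Cij = max [Ci(j-1), C(i-1)j] + t(ki)j
            prev = max(prev, done[j]) + t[k][j]
            done[j] = prev
    return done[-1]           # CMN: time at which the last batch finishes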

The SA Algorithm

Theoretical analyses and previous research (Mitra et al., 1986) have indicated that the quality of the final solution and the efficiency of SA are strongly affected by the following parameters: (1) the initial value of the control parameter T, (2) the number of solutions generated at each T, (3) the rate of decrease of T, and (4) the efficiency in generating solutions from a current solution. Drawing on our analogy with the physical annealing system, the system configuration for the UIS scheduling problem is given by a permutation of the integers 1 to N. The energy of a configuration is its makespan CMN. The only missing component in the complete analogy is the temperature, for which the SA methodology assigns an arbitrary control parameter.

Since we do not know when to stop, we set an upper limit on the number of sequences examined by the algorithm as a termination criterion. Because it is desirable to have computational requirements proportional to a small power of N, we terminate the algorithm after 3N^3 sequences have been generated (N >= 6). Note that, due to the hill-climbing feature of SA, we must save the best result as the algorithm generates new configurations. These features of the algorithm are relatively easy to fix; the more complex features involve schemes for the generation of sequences from a given current sequence and the annealing schedule for the control parameter T.

Sequence Rearrangement. The basic issue to be settled here is how the algorithm should move from one sequence to another. Typically the approach is to perturb the current system configuration slightly to generate a new one. Given a permutation of N integers, there are many ways in which a new permutation can be generated. For instance, one could pick a product at random and insert it randomly back into the sequence, or one could select two products randomly and switch their positions in the sequence. Since a large number of solutions will be generated in the SA algorithm, the procedure for generating the next sequence must be efficient and simple. Previous computational experience (Dannenbring, 1977; Ku and Karimi, 1986; Rajagopalan and Karimi, 1987a,b) on scheduling problems suggests that a local neighborhood search is a cheap and relatively simple method for modifying a given sequence. We experimented with a number of rearrangement strategies involving combinations of random product insertions, random product interchanges, and pairwise interchanges of adjacent products. Several of these strategies were almost equally effective; however, we noticed that no strategy worked well unless pairwise interchanges of adjacent products were a part of it. This led us to select the pairwise interchange as the preferred strategy. Of course, another reason behind our selection is its simplicity, as the strategy simply consists of switching a pair of adjacent products in the current sequence to get a new sequence. We keep a counter i of the pair to be switched. For the very first rearrangement, i = 1. Any time a given current sequence is to be rearranged, the ith product in the sequence is switched with the (i + 1)th product to generate a new sequence. Then i is set to mod(i, N - 1) + 1. This procedure is used throughout the SA algorithm, i.e., for 3N^3 sequences.

Annealing Schedule. The annealing schedule consists of an initial value of the parameter kT to be used in the energy distribution function and a strategy for decreasing kT, i.e., by how much kT should be reduced and after how many rearrangements. Several strategies for decreasing kT are possible (Das et al., 1989), and some trial and error experimentation is necessary for selecting the best. In this paper, we use the simple strategy of reducing kT by 5% in 20 discrete steps; i.e., the 3N^3 rearrangements are divided into 20 groups of int(0.15N^3) rearrangements each. kT remains constant for each group but decreases by 5% from one group to the next. Thus, for the first int(0.15N^3) rearrangements, the initial value of kT is used; for the next int(0.15N^3) rearrangements, the value of kT is decreased by 5%; and so on. This means that the final value of kT is (0.95)^20, or approximately 0.36 times, the initial kT value.
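Putting the pairwise interchange, the Metropolis acceptance test, and this cooling schedule together gives the overall algorithm. The following sketch is our reading of the text, not the authors' code (illustrative names; it reuses the makespan function sketched above, and the pair counter is kept 0-based).

import math
import random

def simulated_annealing(t, kt0):
    n = len(t)
    cur = list(range(n))
    random.shuffle(cur)                  # random initial sequence
    cur_cost = makespan(cur, t)
    best, best_cost = cur[:], cur_cost   # hill climbing: always save the best
    kt, i = kt0, 0                       # i = counter of the pair to switch
    group = max(1, int(0.15 * n ** 3))   # rearrangements per kT level
    for step in range(20 * group):       # about 3N^3 rearrangements in total
        new = cur[:]
        new[i], new[i + 1] = new[i + 1], new[i]   # adjacent pairwise interchange
        i = (i + 1) % (n - 1)                     # cycle the counter over the pairs
        new_cost = makespan(new, t)
        # Metropolis acceptance: downhill always, uphill with prob exp[-dE/kT]
        if (new_cost <= cur_cost or
                random.random() < math.exp(-(new_cost - cur_cost) / kt)):
            cur, cur_cost = new, new_cost
            if cur_cost < best_cost:
                best, best_cost = cur[:], cur_cost
        if (step + 1) % group == 0:
            kt *= 0.95                   # reduce kT by 5% between groups
    return best, best_cost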
To determine the initial value of kT, we take a random sequence for each combination of M and N and generate a number of random sequences, say 3000, by using the pairwise interchange strategy. We record the absolute differences in makespan that result due to every rearrangement. These differences give us an idea of the range of dE values that will be encountered when the SA algorithm moves from one sequence to the next. The initial value of kT is then taken as 1.5 times the average of these differences. Although this procedure for generating the initial kT values is not precise, the results are satisfactory. Aarts and Korst (1989) and Das et al. (1989) give precise procedures for generating reliable kT values.
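This initialization can be sketched as follows (again with our names, reusing the makespan function from above).

import random

def initial_kt(t, samples=3000):
    n = len(t)
    seq = list(range(n))
    random.shuffle(seq)                       # one random starting sequence
    cost = makespan(seq, t)
    total, i = 0.0, 0
    for _ in range(samples):
        seq[i], seq[i + 1] = seq[i + 1], seq[i]   # pairwise interchange
        i = (i + 1) % (n - 1)
        new_cost = makespan(seq, t)
        total += abs(new_cost - cost)             # record |dE| for this move
        cost = new_cost
    return 1.5 * total / samples      # 1.5 times the mean absolute difference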

A flow chart for the SA algorithm is given in Figure 1.

[Figure 1. Simulated annealing algorithm (flow chart). Legend: Y = sequence, E = makespan.]

Example. The best way to illustrate the main features of SA is by graphical means. As an example, consider an eight-product, three-unit UIS flowshop problem whose processing times are given in Table I.

Table I. Processing Times for the Example

                    processing unit
product no.        1      2      3
    1             61      2     56
    2             42     32     42
    3              1     31     71
    4             46     98     42
    5             11     14     20
    6             35     89     75
    7             66      8     25
    8             25     46     44
An arbitrary sequence 2-1-7-5-8-6-4-3 with a makespan of 551 time units is used as an initial configuration. Figure 2 shows the makespan as a function of the number of new configurations. Note that the spikes in the figure represent the uphill moves that SA makes in search of better solutions. An optimal sequence 3-8-5-1-6-7-4-2 with a makespan of 457 time units is found after 60 rearrangements have been generated. Of course, SA does not terminate until 1024 rearrangements have been generated.

[Figure 2. Performance of SA on the example: makespan (time units) vs rearrangement number.]
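Assuming the Table I values as reconstructed above, the quoted initial makespan is easy to verify with the makespan function sketched in the flowshop section:

# t[i][j] = processing time of product i+1 on batch unit j+1 (Table I)
t = [
    [61,  2, 56],   # product 1
    [42, 32, 42],   # product 2
    [ 1, 31, 71],   # product 3
    [46, 98, 42],   # product 4
    [11, 14, 20],   # product 5
    [35, 89, 75],   # product 6
    [66,  8, 25],   # product 7
    [25, 46, 44],   # product 8
]
initial = [2, 1, 7, 5, 8, 6, 4, 3]            # initial sequence from the text
print(makespan([p - 1 for p in initial], t))  # prints 551.0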

Numerical Evaluation

The SA methodology has been known to require extensive computational effort in solving other combinatorial problems such as the traveling salesman problem. For a fair evaluation, it is reasonable to ask the following question: if the SA algorithm gives better results, is it merely due to the fact that it evaluates a far greater number of sequences than a heuristic algorithm, or is there something about it that makes it possible to get better solutions? If the better solutions obtained from the SA algorithm were merely due to the sheer number of sequences evaluated, then we should be able to examine the same number of sequences randomly and very likely get as good a solution. Therefore, in addition to comparing SA with IMS, we must also compare it with a control algorithm, call it A1, which merely evaluates 3N^3 random sequences and selects the best. Another motivation behind using A1 is the fact that degeneracy, i.e., many sequences having the same makespan, is quite common in many scheduling problems. In such a case, if there are problems in which the proportion of optimal sequences in the N! possible sequences is large, then our chances of getting a near-optimal solution are high by evaluating a huge number of sequences. So if there is nothing special about the SA algorithm, A1 would perform as well.

Another question that we must ask ourselves is this: does the artificial control parameter T in the SA algorithm have any significance? This parameter controls the probability of uphill moves in the SA algorithm. As we indicated earlier, we have very good analogies in the SA algorithm for the energy and system configuration of a physical system, but such is not the case with the temperature. Clearly, we may ask whether we need an annealing schedule in the SA algorithm at all. Can we get better solutions just by allowing our heuristic methods to dig themselves out of a locally best solution by means of an uphill move? If the annealing schedule is not an important element of the SA algorithm and its better solutions are merely due to its uphill moves, then a heuristic algorithm that uses the usual greedy strategy to locate better and better solutions but, whenever it gets stuck in a local minimum, digs itself out by means of an uphill move may be as effective as the SA algorithm if it is also allowed to evaluate the same number of sequences.
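A1 itself is a few lines; a minimal sketch under the same assumptions (our names, with the makespan function from the earlier sketch):

import random

def control_a1(t):
    # Evaluate 3N^3 independent random sequences and keep the best one.
    n = len(t)
    best, best_cost = None, float("inf")
    for _ in range(3 * n ** 3):
        seq = list(range(n))
        random.shuffle(seq)
        cost = makespan(seq, t)
        if cost < best_cost:
            best, best_cost = seq, cost
    return best, best_cost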

Table II. Percentages of Solutions within k% of Optima^a

                  percentage of solutions within                  dev of worst
 N    M     optimal     1%      3%      5%      10%     worst     solution, %
 6    3      1.30      2.20    4.70    8.40    21.50    0.90        41.8
 6    5      0.70      1.20    3.00    6.00    18.40    0.40        41.5
 6    8      0.40      1.00    3.00    6.20    22.10    0.20        35.3
 6   10      0.60      1.20    3.00    6.50    25.10    0.20        32.4
 7    3      0.70      1.30    3.60    7.10    19.60    0.20        44.4
 7    5      0.30      0.50    1.20    3.00    12.60    0.10        45.1
 7    8      0.06      0.20    0.80    2.50    13.40    0.05        39.8
 7   10      0.08      0.30    1.30    3.50    16.90    0.03        36.6
 8    3      0.50      0.80    2.40    4.70    16.00    0.08        46.9
 8    5      0.10      0.30    0.90    2.10    10.70    0.02        47.8
 8    8      0.02      0.06    0.30    1.20     8.88    0.006       43.8
 8   10      0.03      0.10    0.60    1.60    10.40    0.005       39.6

^a Percentage of solutions = 100[no. of solutions within k% of optimal]/N!. % deviation of worst solution = 100[(worst makespan - optimal makespan)/optimal makespan].

To answer these questions, we devised another control algorithm, call it A2. In this control algorithm, we use the same pairwise interchange strategy as in the SA algorithm, but without the annealing schedule. The difference between SA and A2 is that, in contrast to SA, A2 makes a rearranged sequence the current sequence only if its makespan is strictly less than the makespan of the current sequence, and it makes an uphill move only when no better sequence is found after N - 1 pairwise interchanges. Thus, A2 is almost like a conventional heuristic algorithm except that it makes an uphill move when it gets stuck. In A2, the uphill move is made by randomly interchanging two adjacent products in the current sequence. A2 also evaluates 3N^3 sequences.
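A2 can be sketched along the same lines (our names; the makespan function as before):

import random

def control_a2(t):
    # Greedy pairwise interchanges; accept only strict improvements, and make
    # one random uphill interchange after N - 1 successive failures.
    n = len(t)
    cur = list(range(n))
    random.shuffle(cur)
    cur_cost = makespan(cur, t)
    best, best_cost = cur[:], cur_cost
    i = fails = 0
    for _ in range(3 * n ** 3):
        new = cur[:]
        new[i], new[i + 1] = new[i + 1], new[i]
        i = (i + 1) % (n - 1)
        new_cost = makespan(new, t)
        if new_cost < cur_cost:              # strictly better: accept
            cur, cur_cost, fails = new, new_cost, 0
            if cur_cost < best_cost:
                best, best_cost = cur[:], cur_cost
        else:
            fails += 1
            if fails >= n - 1:               # stuck in a local minimum
                j = random.randrange(n - 1)  # random uphill interchange
                cur[j], cur[j + 1] = cur[j + 1], cur[j]
                cur_cost = makespan(cur, t)
                fails = 0
    return best, best_cost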

Before evaluating SA, we first examine the degeneracy in and the inherent difficulty of solving the UIS flowshop problem. The scheduling problem in flowshops is similar to many other combinatorial optimization problems, such as the traveling salesman problem, in that the solution is characterized by a permutation of integers. However, despite this similarity, it is possible that the level of difficulty (measured as the proportion of optimal or near-optimal solutions) may not be the same for the same problem size (i.e., the number of products or the number of cities). While the number of cities largely determines the level of difficulty in the traveling salesman problem, that of the scheduling problem is affected by both the number of products and the number of batch units. Previous computational experience (Ku and Karimi, 1986; Rajagopalan and Karimi, 1987a,b) shows that for the same N the performance of all scheduling algorithms deteriorates markedly as M increases, suggesting that the number of optimal or near-optimal solutions decreases drastically as M increases. With this added complication, the success of SA in solving the traveling salesman problem does not therefore guarantee its success in solving the scheduling problem.

Despite the enormous amount of literature available on flowshop scheduling, it appears that there has been no attempt to study the proportions of optimal as well as worst-case solutions. Since the proportion of optimality is a measure of the difficulty of a given problem as far as heuristic algorithms are concerned, from the standpoint of optimization it would be useful to know the proportion of solutions that are within a certain percentage of the optimum. In addition, the performance of scheduling algorithms is often judged on the basis of average percentage deviations from the optimum or from the best solution available. But unless we know the deviation of the worst solution, the numbers on these average percentage deviations can be meaningless in judging the superiority of an algorithm. Of course, accurate proportions can be determined only for small-size problems (N <= 8) via complete enumeration. For larger problems, they must be estimated by computing a lower bound for the optimum and sampling a large number of solutions. Fortunately, the trends in the nature of the problem are quite clear from an analysis of small problems, so such an analysis for the large problems seems unnecessary. For N <= 8, we performed complete enumerations for 100 randomly simulated test problems for each of the N x M combinations in Table II. Processing times tij were generated from a uniform distribution over the range 0.1-20.0 h. From Table II, we see that the number of optimal as well as near-optimal solutions decreases markedly as M increases. In fact, for M = 10, more than 70 test problems have only 1 or 2 optimal solutions irrespective of N. This explains why all scheduling algorithms perform worse as M increases. Table II also shows the proportions of the worst solutions as well as their average deviations from the optima. The results show that the maximum percentage deviations fall between 32% and 50%, and they also seem less sensitive to the problem parameters M and N. As we see later, a remarkable virtue of SA is that its performance is not affected strongly by M. For evaluating SA, we divide the problems into two sizes: small-size problems (N <= 8) and large-size problems (N <= 30).

Small-Size Problems. For these problems, the optimal solutions can be obtained by complete enumerations; hence, the evaluation is exact. We use the deviation from the optimum and the proportion of optimal solutions obtained as the criteria for evaluation. The average deviations are computed only for the nonoptimal solutions. One hundred randomly simulated test problems were used for each combination of M and N, and processing times were again generated from a uniform distribution over the range 0.1-20.0 h. One initial value of kT was generated for each combination of M and N according to the method described before and was used for all 100 problems. These kT values are given in Table III. The test results (Table III) show that SA almost always has a higher than 90% probability of finding an optimal solution for small-size problems. This probability is unusually high, considering the fact that all other scheduling algorithms can locate no more than 30-40% optimal solutions for N >= 6 (Rajagopalan and Karimi, 1987a,b; Dannenbring, 1977). For the test problems in which SA failed to locate the optima, very close to optimal solutions were obtained, as shown by the average deviations (<=1.6%) in Table III. Note that the performance of SA, unlike the other scheduling algorithms, deteriorates only slightly as M increases.

Table III. Evaluation of SA for Small-Size Problems^a

              initial      proportion of optimal, %      mean dev from optimal, %
 N    M       kT, h         SA      A1      A2             SA      A1      A2
 6    3       1.30          99      95      85            0.7     1.5     3.0
 6    5       1.75          96      71      70            1.3     1.5     3.5
 6    8       2.35          97      71      66            1.4     1.3     3.6
 6   10       2.75          95      63      65            1.6     1.2     2.5
 7    3       1.50          98      78      81            0.2     1.0     1.6
 7    5       2.10         100      53      68            0.0     1.6     2.9
 7    8       3.00          96      27      64            0.8     1.4     3.0
 7   10       3.40          94      37      57            0.6     1.5     2.6
 8    3       1.90          98      54      64            1.2     1.4     3.0
 8    5       2.75          92      28      28            0.8     2.1     5.0
 8    8       3.15          90      15      15            0.3     2.3     5.2
 8   10       3.50          82      14      20            1.1     2.2     5.1

^a SA = simulated annealing algorithm. A1 = control algorithm with random sequences. A2 = control algorithm with uphill moves. Proportion optimal (%) = 100[no. of optimal solutions/no. of test problems]. Mean dev from optimal (%) = 100[(algorithm makespan - optimal makespan)/optimal makespan].

Table IV. Evaluation of SA for Large-Size Problems^a

              initial    proportion best, %       mean dev from best, %       mean CPU time,^b s
 N    M       kT, h      SA     IMS     A1         SA       IMS      A1         SA        IMS       A1
10    3       2.10      100      52     42        0.00      1.7     0.6          8.8       0.8     18.1
10    5       2.30       96      60     12        0.03      4.8     3.1         14.4       1.4     24.3
10    8       2.90       96      12      0        0.10      4.7     3.2         22.0       2.4     33.2
10   10       3.15       98       6      2        0.04      5.4     3.1         27.2       3.4     39.4
15    3       0.95      100       4     34        0.00      1.2     0.7         42.6       2.4    100.1
15    5       1.60       98      58     10        0.003     3.3     3.3         69.3       4.3    131.9
15    8       2.20      100      10      0        0.00      5.5     6.0        107.6       6.8    176.5
15   10       2.35      100       0      0        0.00      5.2     5.5        133.3       9.7    207.9
20    3       0.80       98       0     50        0.003     1.0     0.6        130.7       5.2    346.2
20    5       1.30      100      48      8        0.00      3.5     4.2        213.9       8.6    446.7
20    8       1.90      100       2      0        0.00      4.9     6.6        332.7      15.6    586.7
20   10       2.00      100       0      0        0.00      4.9     6.6        414.0      19.5    686.2
30    3       0.70      100       0               0.00      0.4               645.6      12.4
30    5       1.05      100       8               0.00      2.0              1037.8      25.2
30    8       1.35      100       4               0.00      4.5              1633.4      45.7
30   10       1.60      100       0               0.00      5.6              2450.5      60.8

^a SA = simulated annealing algorithm. IMS = idle time matrix search algorithm. A1 = control algorithm with random sequences (omitted for N = 30). Proportion best (%) = 100[no. of best solutions/no. of test problems]. Mean dev from best (%) = 100[(algorithm makespan - best makespan)/best makespan]. ^b MicroVAX II (Ultrix 32m).

Thus, in many respects, SA appears to be a superior method for solving the flowshop scheduling problem. The performance of the control algorithms is also shown in Table III. For both A1 and A2, the proportions of optimal solutions decrease drastically, while the average deviations increase manyfold. Surprisingly, A2 performs worse than even A1. These results indicate that the annealing schedule is indeed a crucial component of SA and that SA is not simply another random search strategy.

Large-Size Problems. For this set, we use IMS as the basis for comparison, since it is the best algorithm to date for UIS flowshops. We drop the control algorithm A2 on the basis of its poor performance for the small-size problems. We used 50 test problems for each combination of M and N in Table IV. The initial kT values were generated exactly as done for the small-size problems and are listed in Table IV. Computation times for SA do not include the effort required to obtain the initial kT values. Due to the intensive computational effort, we omit A1 from the evaluation for N = 30. Since the optima are not known, we use relative deviations and proportions of best solutions as the criteria for evaluation. The relative deviation/error is calculated with respect to the best of the solutions obtained from SA, IMS, and A1. The proportion of best solutions is obtained as the percentage of problems for which each algorithm gives the best solution. The results in Table IV clearly establish the superiority of SA over IMS and A1.

While as much as 5-6% deviations in makespan were obtained for IMS and A1, those for SA were zero or near-zero in all cases. In addition, SA has much higher proportions of best solutions. In fact, out of the 800 problems, SA failed to find the best solutions in only 8. Note also that the makespan deviations of IMS and A1 increase while the proportions of best solutions decrease significantly as M increases. This clearly indicates a decrease in the effectiveness of IMS with an increase in M, whereas that of SA is affected only slightly. With the computational work in SA proportional to 3N^3, SA requires, depending upon N, between 10 and 50 times more computation time than IMS (Table IV). This is the sole disadvantage of SA, but the trade-off is an improvement in makespan of up to 5%. With today's computing power, the computational requirements of SA are not prohibitive. If accuracy is the overriding consideration of the plant schedulers, then SA is the obvious choice. At the least, SA can serve as the basis for evaluating the performance of the conventional heuristic algorithms, since it almost always provides the best solutions. Our experience suggests that the initial kT values and the rearrangement scheme are the two most important factors determining the quality of final solutions in the scheduling problems. Since reliable methods for the former are now available (Das et al., 1989; Aarts and Korst, 1989), the latter seems to be the most important factor. Also note that we did not investigate the effects of different cooling schedules and acceptance criteria, which are discussed by Das et al. (1989). We also believe that the

CPU times of SA can be lowered significantly without compromising quality, if we determine the average number of rearrangements required to obtain the best solution as well as its variability or spread about the average. Our present methodology seems quite adequate for solving the makespan problems, and the method seems to have great potential as a versatile technique for solving a variety of scheduling problems, such as minimizing due date penalties. We are currently investigating the above issues and hope to report the results in the near future (Ku and Karimi, 1989).

Conclusions

A simulated annealing algorithm was developed for scheduling the multiunit UIS makespan flowshop. The main features of this algorithm were thoroughly evaluated by designing two special control algorithms and also using the conventional heuristic algorithms. Results indicate that the methodology promises to be an effective algorithm for solving a variety of scheduling problems. In contrast to the best conventional heuristic algorithms, it almost always gives the best solution, and in other cases, it gives nearly the best solution. The algorithm is simple and appears to be applicable to different types of scheduling problems. Its only drawback is the large amount of computational effort required, but considering the complexity of the scheduling problems and the advances in our computational capability, the simplicity of the algorithm and the near-optimal nature of its solutions far outweigh this drawback.

Acknowledgment

We acknowledge the financial support from IBM Corporation and the National Science Foundation (Grant CBT-8822173).

Nomenclature

English Letters

Cij = time at which the ith product in the sequence finishes processing on batch unit j
E = objective function, analogue of energy
k = Boltzmann constant
M = number of batch processing units
N = number of products
tij = processing time of product i on batch unit j
T = control parameter, analogue of temperature

Mathematical Functions

exp[ ] = exponential function
int [ ] = integer part of the quantity in the brackets
max [ ] = maximum of the quantities in the brackets

Abbreviations

FIS = finite intermediate storage
IMS = Idle Matrix Search heuristic
MIS = mixed intermediate storage
NIS = no intermediate storage
UIS = unlimited intermediate storage

Literature Cited

Aarts, E.; Korst, J. Simulated Annealing and Boltzmann Machines; John Wiley & Sons: New York, 1989.
Dannenbring, D. G. An Evaluation of Flowshop Sequencing Heuristics. Manage. Sci. 1977, 23, 1174.
Das, H.; Cummings, P. T.; LeVan, M. D. Applications of Simulated Annealing to Scheduling of Serial Multiproduct Batch Processes. Presented at the AIChE Annual Meeting, San Francisco, CA, 1989; Paper 23e.
Dolan, W. B.; Cummings, P. T.; LeVan, M. D. Heat Exchanger Network Design by Simulated Annealing. AIChE J. 1989, 35 (5), 725.
Garey, M. R.; Johnson, D. S.; Sethi, R. Complexity of Flowshop and Jobshop Scheduling. Math. Oper. Res. 1976, 1, 117.
Kirkpatrick, S.; Gelatt, C. D., Jr.; Vecchi, M. P. Optimization by Simulated Annealing. Science 1983, 220, 671.
Ku, H. M.; Karimi, I. A. Scheduling in Multistage Serial Batch Processes with Finite Intermediate Storage-Part II: Approximate Algorithms. Presented at the AIChE Annual Meeting, Miami, FL, 1986; Paper 72e.
Ku, H. M.; Karimi, I. A. Scheduling in Serial Multiproduct Batch Processes with Finite Interstage Storage: A Mixed Integer Linear Program Formulation. Ind. Eng. Chem. Res. 1988, 27 (10), 1840.
Ku, H. M.; Karimi, I. A. Heuristic Algorithms for Scheduling Serial Multiproduct Batch Processes with Due Date Penalties. Presented at the AIChE Annual Meeting, San Francisco, CA, 1989; Paper 23b.
Ku, H. M.; Rajagopalan, D.; Karimi, I. A. Scheduling in Batch Processes. Chem. Eng. Prog. 1987, 83 (8), 35.
Kuriyan, K.; Reklaitis, G. V. Scheduling Network Flowshops so as to Minimize Makespan. Comput. Chem. Eng. 1989, 13, 187.
Metropolis, N.; Rosenbluth, A.; Rosenbluth, M.; Teller, A.; Teller, E. Equation of State Calculations by Fast Computing Machines. J. Chem. Phys. 1953, 21, 1087.
Mitra, D.; Romeo, F.; Sangiovanni-Vincentelli, A. Convergence and Finite-Time Behavior of Simulated Annealing. Adv. Appl. Prob. 1986, 18, 747.
Press, W. H.; Flannery, B. P.; Teukolsky, S. A.; Vetterling, W. T. Numerical Recipes: The Art of Scientific Computing; Cambridge University Press: Cambridge, 1986.
Rajagopalan, D.; Karimi, I. A. Scheduling in Serial Mixed-Storage Multiproduct Processes with Transfer and Set-up Times. In Computer-Aided Process Operations; Reklaitis, G. V., Spriggs, H. D., Eds.; CACHE and Elsevier: New York, 1987a; p 679.
Rajagopalan, D.; Karimi, I. A. Scheduling in Serial Mixed-Storage Multiproduct Processes with Transfer and Set-up Times. TR No. 8703; Center for Manufacturing Engineering, Northwestern University: Evanston, IL, 1987b.
Rajagopalan, D.; Karimi, I. A. Completion Times in Serial Mixed-Storage Multiproduct Processes with Transfer and Set-up Times. Comput. Chem. Eng. 1989, 13 (1/2), 175.
Sechen, C.; Sangiovanni-Vincentelli, A. The TimberWolf Placement and Routing Package. Proc. 1984 Custom Integrated Circuits Conf., Rochester, 1984.
Vecchi, M. P.; Kirkpatrick, S. Global Wiring by Simulated Annealing. IEEE Trans. Comput.-Aided Des. 1983, CAD-2 (4), 215.


Received for review August 28, 1989
Revised manuscript received July 9, 1990
Accepted July 23, 1990