Ind. Eng. Chem. Res. 2000, 39, 4203-4214


PROCESS DESIGN AND CONTROL

Scheduling a Single-Product Reentrant Process with Uniform Processing Times

Nitin Lamba, Iftekhar A. Karimi,* and Akesh Bhalla

Department of Chemical and Environmental Engineering, 4 Engineering Drive 4, National University of Singapore, Singapore 117576

Most semiconductor manufacturing involves multiple stages of batch/semicontinuous physicochemical operations. Scheduling of such plants becomes complex because of a high degree of reentrancy in their process flows, as different lots/batches as well as different tasks of the same lot compete for time on the various processing units at each stage. Scheduling of a single-product reentrant process with uniform processing times is addressed in this paper. First, an analysis of the minimum possible cycle time in such a process is presented. Then, a one-pass heuristic algorithm using a novel priority-based resource-sharing policy is developed. The algorithm is also used to study the impact of the lot release interval on system performance. In comparison to a previous algorithm, the proposed algorithm is much more efficient and gives much better results; thus, it is well suited for a large-scale operation.

Introduction

Increases in the use of computers and high-tech gadgetry have boosted the semiconductor manufacturing industry tremendously over the past decade. At the same time, exponential rates of growth and innovation combined with cutthroat competition are putting tremendous pressure on the industry to improve manufacturing practices and reduce costs. As a result, semiconductor manufacturing operations have concerned many researchers in the recent past.

Wafer fabrication is the most important process in semiconductor manufacturing. In a wafer fabrication facility (or wafer fab), complex electronic circuitry is developed on a polished and clean wafer in a clean room. This is done in multiple passes, each of which creates a new layer (film) with a pattern on the wafer. Thus, a wafer has multiple layers embedding several patterns. For instance, a CMOS has 10 layers, whereas a DRAM1 may have 21 or even more layers.
Each pass in a wafer fabrication process involves some or all of six physicochemical2 operations (tasks) executed in the following sequence: (1) Deposit a layer by means of chemical vapor deposition in a batch furnace. (2) Apply a film of light-sensitive material, called photoresist, in a spinner-coater assembly. (3) Draw circuitry on a pattern mask, and then use an ultraviolet beam to transfer it to the photoresist. The photoresist changes its composition where it is exposed to the beam. This entire process (called masking) is carried out in a stepper. (4) Remove the exposed (positive resist) or unexposed (negative resist) photoresist in a solvent-filled developer bath. (5) Etch the pattern on the wafer layer in a plasma etch chamber using ionized gaseous plasma. Here, the exposed areas of the layer are etched away by the plasma.

* Corresponding author. E-mail: [email protected].

(6) Strip the entire remaining photoresist layer, if any, in an acid-filled wet-etch bath or an oxygen plasma chamber. Thus, the total number of tasks required to make a wafer can easily exceed a few hundred.

All processing units in the above six steps are batch units, except for the spinner-coater assembly and the stepper. Although these two operate in a semicontinuous mode, the wafers are generally processed in lots; hence, they can still be treated as batch units. A wafer fab may use multiple types of units for each step, and there are usually multiple units of each type. For instance, there may be several types of plasma etch chambers and several identical chambers of each type. A set of identical units is called a processing stage (station) or a work center. Because the machines involved are usually very expensive, economic considerations prohibit allocation of a dedicated station for each pass in a wafer recipe. In other words, a lot may revisit the same station several times during its recipe. This operational feature is called reentrancy, and a wafer fab can be classified as a multiproduct network flowshop with identical parallel units and reentrant flows.

The reentrancy feature creates complex competition for resources between similar tasks of different lots, which are at different stages in the recipe, and conflicts invariably arise. As we shall see later, the manner in which these conflicts are resolved can have a major impact on the overall process performance. In wafer fabs, plant operation is normally regulated (resource allocation or conflict resolution is achieved) by means of intelligent lot release and lot scheduling policies. A lot release policy specifies when new lots are introduced into the plant for production, whereas a lot scheduling policy assigns priorities to lots that are competing for the same processing stage or station. The complexity of a wafer fab is best illustrated by an example R&D fab3 for which data are shown in Table 1.
10.1021/ie000380x CCC: $19.00 © 2000 American Chemical Society Published on Web 11/06/2000

Table 1. Process and Recipe Data for an R&D Wafer Fab (FAB 1) Reported by Wein3

This fab performs 172 tasks on 24 stations with 1-4 identical parallel units in each stage and as many as 23 revisits to a station. In a large-scale production process (typical of most wafer fabs), the number of possible resource conflicts during a schedule is enormous, and thus wafer fabs, whether they are single-product (one wafer type) or multiproduct (several wafer types), present an extremely challenging scheduling problem.

In recent years, researchers have used mainly three approaches to solve the wafer fab scheduling problem: mathematical programming (LP or MIP) formulations, queuing models, and heuristic algorithms. Johri4 highlights the difficulties encountered in a real setting and describes various theoretical and practical problems. Kumar5 discusses the different problems that arise in these plants along with several scheduling policies. Scheduling research in this area has mainly used various system performance criteria as scheduling objectives, e.g., minimize makespan, minimize residence time, minimize queuing time, etc. Because the chances of contamination of a wafer increase with the time that it spends in the manufacturing process, mean residence time is an important criterion in wafer fab scheduling. Most existing literature assumes wafer fab processing to be prone to random failure/repair and to have stochastic task times. Furthermore, most studies have used discrete-event stochastic simulation to evaluate different rules, based on queuing network theory or Brownian network models, for handling queues at different stages in a wafer fab.

Kim and Leachman6 considered the work-in-process (WIP) projection problem that translates work-in-process inventory into a schedule of completed products, thereby giving net demand and resource capacities. They developed an LP formulation and some heuristic approaches for this problem and solved the resulting large LP by a decomposition method. Hung and Leachman7 presented a methodology for automated produc-

tion planning of semiconductor manufacturing based on iterative linear programming and discrete-event simulation. Their formulation incorporates epoch-dependent parameters for flow times from lot release up to each operation on each manufacturing route. Dessouky and Leachman8 presented two MILP formulations that account for reentrancy, similar lots, and similar machines in semiconductor plants. Their approaches are based on restricting the allowed domain of events for the start of lot processing. Wein,3 using a queuing network model, studied a large R&D wafer fab in detail and concluded that input regulation has a major impact on fab performance. He focused on bottleneck stations (machines) and developed policies involving sequencing rules for these stations. Lu et al.9 built on this approach and introduced fluctuation smoothing policies, which further reduce mean cycle time substantially over the baseline first-in-first-out (FIFO) processing policy. They performed extensive simulations and also studied an aggregate model characterizing a typical wafer production fab. Lee et al.10 attempted the scheduling of the above aggregate model of Lu et al.9 as an example for their new heuristic sequence branch algorithm (SBA). In contrast to other works, they assumed a deterministic environment and did not use discrete-event simulation. Graves et al.11 modeled a fab as a reentrant flowshop and minimized average throughput time subject to meeting a given production rate. They allowed only cyclic schedules, neglected setup times, and observed that, because a specified production rate must be achieved, jobs must be released into the line at that rate. They also discussed other issues such as machine breakdowns and expediting. Peyrol et al.12 applied simulated annealing (SA) to a semiconductor circuit fabrication plant. They assumed an unlimited intermediate storage (UIS) policy13 and determined an input order for a given set of products so as to minimize the average residence time in the plant. However, a discrete-event simulation program was used within SA to evaluate the scheduling objective.

Considerable literature exists14 on planning/scheduling in multiproduct batch plants addressing problems relevant to the chemical process industry. Kuriyan and Reklaitis15 considered scheduling of network flowshops with identical parallel units but without reentrant flows. They proposed a two-step heuristic strategy consisting of sequence generation and sequence evaluation steps and evaluated several local search procedures and bottleneck sequencing for the first step. They found local search procedures to be quite effective for the sequence generation step. However, except for the work of Lee et al.,10 no one has explicitly addressed network flowshops with reentrant flows.

In this paper, we address scheduling of a single-product multistage process with reentrant flows and uniform task times. Many wafer fabs fit this description. We present a simple priority-based scheduling algorithm that gives near-optimal schedules and is quite suited for a truly large-scale and complex process such as a wafer fab. Our scheduling strategy is novel in that it is based neither on discrete-event simulation and queuing rules nor on a branch-and-bound type of enumeration. It differs from the former in that it looks at the scheduling problem as a whole rather than as an event-based queuing problem at each station and resolves resource demand conflicts keeping in view even


Figure 1. Schematic diagram of a single-product, multistage process with identical parallel units in each stage and reentrant flows.

future demands and not just the present ones. We also develop a recipe representation that suits our algorithm and provides a framework for uniform treatment of resources (machines, utilities, operators, etc.) in this type of reentrant flowshop. Although developed with a deterministic process in mind, the algorithm is also used to study a stochastic process with little modification. In the following section, we first state the problem addressed in this paper and then present an analysis of minimum cycle time in order to derive two lower bounds on the makespan. Subsequently, we describe our algorithm and illustrate its application by means of two examples. Finally, we also study the effect of the lot release interval and stochastic processing times for one of the examples.

Problem Statement

Figure 1 shows a schematic diagram of a multistage process with reentrant flows. We assume that the process can be viewed as comprising S processing stages or stations (s = 1, ..., S), with stage s having ms identical, parallel, batch/semicontinuous processing units or machines. Although the facility may produce multiple products/items, in this work, we focus on scheduling just a single item or product. We assume that the item (e.g., wafer) is produced in lots/batches of a fixed lot size of B items/lot and assume that the items are transferred to the next stage only when the entire lot has finished processing. This allows us to treat all units as batch units with no loss of generality.

We represent the product recipe as a sequence of K tasks (Tk; k = 1, ..., K). Each task Tk in the recipe is performed by a unique stage sk. The tasks can be grouped into exactly S types, each group having one or more tasks. The tasks in a group are similar in nature (e.g., cleaning, vapor deposition, etc.), except for some minor differences. Furthermore, we assume that all tasks in a group use the same station/stage and require the same processing time.
In other words, the time and stage required for a group of tasks are the same across the recipe. This is what we mean by uniform processing times. This enables us to associate with each stage s a fixed processing time ts that is characteristic of that stage alone. In other words, that processing time is independent of the position of the task in the recipe. Because several tasks in the recipe may need the same stage, a lot may revisit a given stage several times

during its entire recipe. This means that the process has reentrant flows. Let vs denote the number of visits to stage s required during the recipe. We make the following additional assumptions: (1) Lots are identical, i.e., have the same recipe, the same lot size (B items/lot), and the same processing time for a given task. (2) Lots are available for performing the first task whenever needed. (3) Lots are independent, i.e., no precedence relations exist among them. (4) Unlimited intermediate storage (UIS)13 is available before and after every stage. Thus, an unlimited number of in-process lots can be stored at any stage. (5) Lot transfer times and setup times are negligible compared to the lot processing times or are included in the latter. (6) The process is free of uncertainties such as failures and repairs of units, and lot processing times are deterministic and known a priori.

With these assumptions, the scheduling problem can be stated as follows. Given process information such as the number of stages, the number of units in each stage, the product recipe, the lot size, and the processing time at each stage, determine a detailed schedule for producing N identical lots that minimizes the total time required for production (or makespan). A detailed schedule describes when (task start time), where (on which unit), and for how long (task end time) a given task of a given lot will be performed.

Lower Bounds on Makespan

Before we develop a methodology for scheduling a reentrant flowshop, it is worth analyzing the system to determine the highest productivity, or the minimum cycle time, that can possibly be achieved when the system is running nonstop in a steady mode. To this end, consider an ideal scenario in which an unlimited supply of unfinished lots is available before each stage in the system and each stage is operating at its fastest possible rate.
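The data that define a problem instance as described above can be sketched in Python (the container and field names are our own invention, not the paper's notation):

```python
from collections import Counter
from dataclasses import dataclass

@dataclass
class ReentrantFlowshop:
    """Hypothetical container for one problem instance."""
    recipe: list[int]        # stage s_k performing task T_k, for k = 1..K
    m: dict[int, int]        # m_s: number of identical parallel units in stage s
    t: dict[int, float]      # t_s: uniform processing time of stage s (h)

    @property
    def visits(self) -> Counter:
        """v_s: number of visits the recipe makes to each stage s."""
        return Counter(self.recipe)

# Example I from the text: recipe 1 -> 2 -> 3 -> 2 -> 4 -> 2 -> 3
fab = ReentrantFlowshop(
    recipe=[1, 2, 3, 2, 4, 2, 3],
    m={1: 1, 2: 2, 3: 1, 4: 1},
    t={1: 0.5, 2: 1.0, 3: 0.8, 4: 0.5},
)
print(dict(fab.visits))  # {1: 1, 2: 3, 3: 2, 4: 1}
```

The visit counts v_s fall out of the recipe directly, which is all that eqs 1-4 below require.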
Figure 2. Schematic diagram of a reentrant flowshop for Example I.

Although each stage performs one characteristic type of task, the actual task that it performs for a given lot will depend on the number of visits that the lot has already made to that stage. Because a lot makes vs visits to stage s during its entire recipe, there can be vs types of unfinished lots waiting before stage s. Let us assume that each lot type is available in unlimited supply before stage s. Because all tasks, irrespective of their lot types, require the same processing time ts, all lots can be considered essentially identical as far as the operation of stage s is concerned. In this ideal scenario, it is clear that the minimum time between the emergence of two successive lots from stage s is ts/ms. Because the total processing time of a lot on stage s is vsts, the minimum average time between the emergence of two successive lots of the same type from stage s, or the minimum average cycle time of stage s, is given by

t*s = vsts/ms    (1)

We define the above as the minimum stage cycle time. Its inverse gives the fastest processing rate (lots per unit time) that stage s can achieve under the ideal scenario defined earlier. As expected, for the special case of vs = 1 and ms = 1, eq 1 reduces to the lot processing time on stage s. Because a lot has to pass through S stages in series with different processing rates, the slowest stage will be the bottleneck or the rate-limiting stage. Thus, the minimum average cycle time for the S-stage serial process, or equivalently the minimum average process cycle time, is given by

t* = maxs [t*s]    (2)

Note that the term cycle time, as used in the semiconductor manufacturing parlance,9 refers to the residence time and not the cycle time as used here. Residence time, by itself, is also an important performance criterion, as it impacts yield losses.9 From the above analysis, it is clear that finished lots cannot be produced at an average rate greater than 1/t* when the fab is operating with unlimited intermediate storage levels as described above. To produce N lots, let us assume that each stage begins operation at time zero, operates at its fastest rate, and stops operation when it has processed all of the tasks that it needs to perform to produce exactly N lots. It is clear that the rate-limiting stage will finish last and at time Nt*. In a real situation (where unlimited supplies are not available), however, the stages will have to wait for their predecessors to finish tasks, and thus the time required will be no less than Nt*. The situation becomes more complex when we start with an empty system. It is clear that the makespan cannot be lower than v1t1 + v2t2 + v3t3 + ... + vStS, the minimum residence time for the first lot. On the basis of this discussion, it is very tempting to propose a lower bound LB1 on the makespan as

LB1 = v1t1 + v2t2 + v3t3 + ... + vStS + (N - 1)t*    (3)

In most cases, this is a very good lower bound. However, it can, indeed, be violated by the first few lots. Because the fab is empty initially, there are several free time slots on the machines, which can facilitate scheduling in such a way that the limiting stage/step is not the rate-controlling step. Let us illustrate this with an example.

Example I. Figure 2 shows a simple four-stage (S = 4), seven-task (K = 7) process with reentrant flows. The lot recipe, expressed as a sequence of stages, is 1 → 2 → 3 → 2 → 4 → 2 → 3. In other words, s1 = 1, s2 = 2, s3 = 3, s4 = 2, s5 = 4, s6 = 2, and s7 = 3. Furthermore, we have m1 = 1, m2 = 2, m3 = 1, m4 = 1, t1 = 0.5 h, t2 = 1.0 h, t3 = 0.8 h, and t4 = 0.5 h from Figure 2. For simplicity, let us say that four (N = 4) lots are to be scheduled. Let Cik denote the completion time of task Tik (task Tk of lot i). Thus, Ci0 denotes the release time of lot i into the process. Assume C10 = 0.0 h, C20 = 0.5 h, C30 = 1.0 h, and C40 = 1.5 h. For this system, the absolute minimum residence time is 5.6 h, and the minimum cycle time from eq 2 is 1.6 h.

Figure 3. Sample schedule for Example I.
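The quantities in eqs 1-3 can be checked numerically for Example I; the sketch below uses our own variable names and reproduces the figures quoted in the text (5.6 h, 1.6 h, and LB1 of 7.2 and 8.8 h for N = 2 and 3):

```python
# Example I data: recipe 1 -> 2 -> 3 -> 2 -> 4 -> 2 -> 3
recipe = [1, 2, 3, 2, 4, 2, 3]          # stage s_k for each task
m = {1: 1, 2: 2, 3: 1, 4: 1}            # parallel units per stage
t = {1: 0.5, 2: 1.0, 3: 0.8, 4: 0.5}    # processing time per stage (h)

v = {s: recipe.count(s) for s in m}              # visits v_s
t_star_s = {s: v[s] * t[s] / m[s] for s in m}    # eq 1: minimum stage cycle time
t_star = max(t_star_s.values())                  # eq 2: minimum process cycle time
residence = sum(v[s] * t[s] for s in m)          # minimum residence time of a lot

def LB1(N):
    """Eq 3: lower bound on the makespan for N lots."""
    return residence + (N - 1) * t_star

print(t_star_s)                                   # {1: 0.5, 2: 1.5, 3: 1.6, 4: 0.5}
print(round(residence, 9), round(t_star, 9))      # 5.6 1.6
print(round(LB1(2), 9), round(LB1(3), 9))         # 7.2 8.8
```

Stage 3 (two visits, one unit, 0.8 h per task) is the bottleneck, so t* = 1.6 h, consistent with the text.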


Figure 4. A comparison of the makespan (for the schedule in Figure 3) and lower bounds (LB1 and LB2) for Example I.

Figure 3 shows a simple schedule for producing the four lots. In this schedule, lot 2 is done at t = 6.8 h, and lot 3 at t = 8.4 h. However, from eq 3, the lower bounds LB1 for N = 2 and N = 3 are 7.2 and 8.8 h, respectively. In other words, LB1 is not a valid lower bound for low N. However, as we see later, as production continues and the system reaches a state of comparatively stabilized/busy operation, LB1 becomes a quite accurate lower bound. To address the above problem, we can develop another, more conservative, lower bound LB2 for the makespan. Because it is impossible to produce the first lot earlier than v1t1 + v2t2 + v3t3 + ... + vStS and because the process cannot produce faster than the last stage in the recipe, it is clear that the makespan cannot be lower than

LB2 = v1t1 + v2t2 + v3t3 + ... + vStS + (N - 1)t*last    (4)

where t*last is the stage cycle time of the last stage in the recipe. It is clear that LB1 is tighter than LB2. As work-in-process increases, the actual bottleneck stage comes into play, and the makespan becomes controlled by the first lower bound LB1. This is best illustrated by reconsidering Example I. The lot completion times for Example I, with the two lower bounds discussed above, are shown in Figure 4 for the first four lots. We can see that the makespan never violates LB2. It is accurate for N = 3, but then it becomes loose, and LB1 starts to control. Note that, even though a real system may have unlimited intermediate storage between stages, it is unlikely that there will be an unlimited supply of lots before each stage at all times. Hence, it may not be possible to achieve the above approximate lower bounds on the makespan in all cases. The algorithm that we present next, however, does a good job of approaching the tighter lower bound LB1 at least for low N.

Scheduling Strategy

It is clear from the problem statement that KN tasks are to be performed using S stages to produce the N lots of one item/product. Because a lot may use a stage several times during its production, it is natural to expect that tasks belonging to different lots and at different stages in the recipe may need units from the same stage at any given time. Unless the number of

units in that stage is unlimited, decisions must be made regarding the sequence in which the tasks should be performed on the units. In other words, the key question is how to allocate a stage of units to all tasks that may possibly compete for it at various times. Let us first reflect on how different approaches have attempted this so far in the literature.

One approach is to identify all tasks that will be processed on a stage and devise a suitable MIP formulation to take care of sequencing them on each unit in the stage. Existing work16,17 suggests that the problem size quickly explodes in such a formulation and the approach becomes practically useless for a large-scale process such as a wafer fab. Another approach has been to use a heuristic method, viz., the sequence branch algorithm proposed by Lee et al.10 This method views the problem as that of scheduling constituent tasks and evaluates promising partial sequences of these tasks. It is essentially a specialized branch-and-bound technique that also quickly becomes computationally very expensive. This is why most work in this area has resorted to the use of lot dispatch rules based on queuing models within a discrete-event simulation approach.

In the discrete simulation approach, the system is assumed to be empty initially, so processing units are readily available at first, and lots do not have to wait for them. However, as the simulation proceeds further in time, the number of lots in the system increases, and the time available on a stage has to be shared among more and more lots. Because the number of units in a stage is always limited, the lots are forced to wait in the in-process storage before a stage, and a dynamic queue of lots develops before each stage. The entire allocation problem now reduces to deciding which lot, from the queue present at any given time before a stage, must be selected for processing whenever a unit from the stage becomes free or available.
Most research studies have simply used one or more selection rules for this purpose. Two simple examples are the FIFO (first-in-first-out) and the SRPT (shortest remaining processing time) rules. In the FIFO policy, the queue before a stage is examined at every event when a unit in the stage becomes free or a lot finishes processing on the previous stage. From the lots present in the queue at that time, the lot that entered the queue the earliest is selected, and the free unit is allocated to that lot. This allocation strategy is triggered by time events, and the lot that demanded the stage (resource) first gets the priority. In the SRPT rule,3 the queue is examined in the same way, but now the free unit is assigned to a lot that has the shortest processing time remaining in its production. Here again, the allocation is triggered by time events, but the selection is slightly different. However, a feature common to all rules in the discrete-event simulation approach is that a lot must be present in the queue at a given time for it to be selected. The free unit cannot be “reserved” for a lot that may enter the queue at a later time and is absent from the queue at the time of allocation. Clearly, it is quite possible that such a decision of reserving a unit for the future may, in fact, be better from the standpoint of overall schedule, as it is not myopic in its vision. Ku and Karimi13 were the first to recognize this possibility, although it was for a different problem. They showed that scheduling products one at a time could


be a better strategy than scheduling events one at a time, as is done in a discrete-event approach. They addressed minimization of the makespan for N batches in a serial (one unit per stage) multistage batch process with nonzero transfer and setup times, no reentrancy, and various in-process storage modes (FIS, ZW, NIS, MIS). In this context, they studied the allocation of shared in-process storage to various batches at different times during the schedule. Through several illustrative examples and via extensive numerical simulations, they demonstrated that a discrete-event simulation approach using the FIFO strategy, i.e., an event-driven and demand-priority-based approach, almost always gave significantly greater makespans than the product-at-a-time strategy. The FIFO rule is inherently myopic because of its event-based strategy, whereas the product-at-a-time strategy is slightly futuristic and considers, during selection, batches that may join the queue at a future time. However, they13 recognized that, although very good and efficient, their strategy was not necessarily optimal, because their problem is quite hard. This strategy, in a sense, is similar to the SRPT rule described earlier, as it inherently gives priority to the task with the shortest remaining processing time. However, there is a very subtle and fundamental difference between the two: the SRPT rule, by virtue of being an event-based strategy, is myopic, whereas the product-at-a-time strategy is not.

In this paper, we modify the basic idea behind the above methodology and apply it to the present problem. The present problem is expected to be as hard as that of Ku and Karimi;13 hence, the methodology presented here is not guaranteed to be optimal. In their case, shared and mixed intermediate storage increased the complexity, whereas high reentrancy and a large number of orders primarily affect the problem at hand.
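The two event-driven selection rules discussed above can be sketched as follows; this is a minimal model of our own, and the QueuedLot fields are assumptions, not the paper's notation:

```python
from dataclasses import dataclass

@dataclass
class QueuedLot:
    """A lot waiting in the queue before one stage (hypothetical fields)."""
    lot_id: int
    arrival: float          # time the lot joined this stage's queue
    remaining_time: float   # total processing time left in its recipe (h)

def fifo(queue):
    """FIFO: serve the lot that entered the queue the earliest."""
    return min(queue, key=lambda lot: lot.arrival)

def srpt(queue):
    """SRPT: serve the lot with the shortest remaining processing time."""
    return min(queue, key=lambda lot: lot.remaining_time)

queue = [QueuedLot(1, 0.0, 4.1), QueuedLot(2, 0.5, 2.3), QueuedLot(3, 1.0, 3.0)]
print(fifo(queue).lot_id)  # 1
print(srpt(queue).lot_id)  # 2
```

Both rules select only from lots already present when a unit frees up, which is exactly the myopia that the lot-at-a-time strategy avoids.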
To modify their approach for the present problem, we visualize the process as one with constrained resources in the following manner.

Process Representation

Let us view the plant as a set of resources such as equipment, operators, utilities, raw materials, etc. In general, a task requires a set of resources, and various types of constraints may apply to their use. It is also possible that alternate resources may be suitable and available for a given task, and a selection may need to be made. In other words, given a suitable set of resources of different types for a task, we must select a subset that meets the requirements without violating any resource constraints and is in some sense good or optimal. This can be done in two steps. The first is to identify the maximal subset of resources that are available for the task at hand, and the second is to select the best minimal subset from that subset such that all the requirements are met.

Now, a plant resource can be of two types depending on how its usage is measured. Some resources such as equipment, operators, etc. have discrete usage levels, whereas others such as steam, additives, etc. have continuous usage levels. For instance, a single, unique processing unit is a resource with only two usage levels: 0 (not in use) and 1 (in use). Similarly, a stage s with ms identical, parallel units in this process is a resource with (ms + 1) discrete usage levels (0, 1, 2, 3, ..., ms). Furthermore, each resource can be assumed to have a known capacity profile. This is a time-dependent profile

giving the maximum amount of resource usage allowed at any time. We group all alternate resource entities with identical features into one single resource, whereas we treat those with different features as distinct resources. This applies to discrete as well as continuous resources. For instance, two different boilers can be considered as one resource with a capacity equal to the sum of their individual steam capacities, as long as they both produce steam with the same specification. In the process of scheduling tasks, we will allocate resources to tasks at various times, and hence, at the end, we will have a time-dependent actual usage profile for each resource.

In the manufacturing process under study, we assume that resources such as operators, utilities, etc. are not constrained and thus need not be considered. However, the most important resources in this process are the processing stages or stations, and they have discrete usage levels. As stated earlier, we group the identical units into one group; thus, we have S resources that are to be shared among NK tasks at various times. The maximum allowed usage level for stage s is ms at all times. In other words, our problem is essentially to use these limited-capacity resources to process the NK tasks in the minimum amount of time. For the present process, we have a unique stage (resource) for every task, so there is no need to select from alternate resources. The only thing that needs to be decided is when that unique resource should be used for a given lot. We now detail our algorithm, which assigns resources in a lot-by-lot manner, making sure that no constraints are violated.

Scheduling Algorithm

The basic idea behind our algorithm is to schedule one lot fully (i.e., allocate resources to all tasks in its recipe) at a time. This is in contrast to scheduling several dependent tasks of many lots simultaneously.
The immediate advantages are a huge reduction in problem size, an intrinsic look-ahead or futuristic property, no handling of queues, no queuing priorities, and no projection of a huge number of potential resource allocation conflicts. As we see later, this simplicity of our algorithm results in a high efficiency required for a large-scale problem but without sacrificing the quality of schedules. Because the plant resources have limited capacities, we first describe how we maintain a detailed account of usage for each resource. Resource Usage Profile. To keep track of resource usage over time, we maintain a chronologically ordered linked list for every resource. Each element of this list contains two pieces of data. The first piece is a time instance, and the second is a resource usage level beginning with that time. For instance, if the plant is empty at the start, then the linked list for stage s is empty. If at a later stage in the scheduling process, two tasks are assigned to stage s during intervals [0, 1.0] and [0.5, 1.5], then the linked list becomes [(0, 1), (0.5, 2), (1.0, 1), (1.5, 0)]. Now, whenever a new task is scheduled on stage s, this linked list is updated appropriately. Note that we first use one linked list for each stage s, treating it as one single resource. Linked lists for its constituent ms identical units can be derived later from that linked list. We label the lots as 1, 2, 3, ..., N. Because they are all identical anyway, this labeling is merely for identification and is immaterial. In our algorithm, this is the

Ind. Eng. Chem. Res., Vol. 39, No. 11, 2000 4209

order in which lots enter the process, i.e., are released into the process, and also the order in which they are scheduled one at a time. For each lot, tasks are also scheduled one at a time in the sequence T1, T2, T3, ..., TK. With this, our algorithm can be stated simply as follows:

Step 1: Initialize the plant state at time zero.
Schedule: Do i = 1, N
    Do k = 1, K
        Step 2: Schedule task Tik (defined as task Tk of lot i)
    End Do
End Do: Schedule
Step 3: Assign a specific processing unit to each task

We now describe the above three steps in detail.

Step 1: Initialize Plant State. Here, we create one linked list or usage profile for each stage. If the plant is empty at the start, then all linked lists are empty. If not, then we assume that a schedule for the unfinished lots is available, with which the linked lists can be initialized. If no such schedule is available either, then we first schedule all of the unfinished lots using the algorithm above to obtain the initial plant state.

Step 2: Schedule Task Tik. As mentioned earlier, the lot release policy is an important scheduling decision. Several release policies3 have been proposed. The simplest is a deterministic one, in which lots are released into the system at a fixed, predetermined time interval. For now, we assume that the lot release times Ci0 are known a priori and consider how to schedule task Tik. Because lot transfer times and setup times are negligible, it is clear that Ci(k-1), the completion time of Ti(k-1), is the earliest time at which task Tik can begin. To perform Tik, we need one free unit from stage sk for a duration pk, where pk denotes the time required to perform Tik. For this, we search the linked list for stage sk, beginning with time Ci(k-1), to locate the earliest time slot of length pk during which the usage level of sk is less than its capacity. We allocate stage sk to Tik for that slot and update the linked list of sk to reflect this allocation.
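The profile bookkeeping and the earliest-slot search in step 2 can be sketched as follows (illustrative Python, not the authors' implementation; a profile is the chronological list of (time, usage-from-that-time) pairs described above, with usage 0 before the first entry):

```python
def usage_at(profile, t):
    """Usage level of the stage at time t (0 before the first change point)."""
    level = 0
    for ti, ui in profile:
        if ti <= t:
            level = ui
        else:
            break
    return level

def earliest_slot(profile, t0, p, capacity):
    """Earliest start >= t0 of a slot of length p with usage < capacity throughout."""
    # A slot can only begin at t0 or at a usage change point after t0.
    for start in [t0] + [t for t, _ in profile if t > t0]:
        if usage_at(profile, start) >= capacity:
            continue  # stage is full at this instant
        # Capacity must be respected over the whole interval [start, start + p).
        if all(u < capacity for t, u in profile if start < t < start + p):
            return start
    return None  # cannot happen while the profile ends at usage 0 and capacity >= 1

def allocate(profile, start, end):
    """Return the profile with usage raised by one unit over [start, end)."""
    points = dict(profile)
    points.setdefault(start, usage_at(profile, start))
    points.setdefault(end, usage_at(profile, end))
    return [(t, u + 1 if start <= t < end else u) for t, u in sorted(points.items())]

# Reproduce the paper's example: two tasks on stage s during [0, 1.0] and [0.5, 1.5].
prof = allocate([], 0.0, 1.0)
prof = allocate(prof, 0.5, 1.5)
print(prof)  # [(0.0, 1), (0.5, 2), (1.0, 1), (1.5, 0)]
```

With a stage capacity of 2, `earliest_slot(prof, 0.0, 1.0, 2)` returns 1.0: a 1-h task cannot start at 0.0 because usage reaches the capacity at 0.5, but it fits in [1.0, 2.0].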
Thus, the resource usage profiles are continually updated as each task is scheduled, and an allocation, once made, is never revised later. We record the start time of this slot as the start time of Tik and its end time as the completion time Cik of Tik. From the above discussion, it is clear that, as far as resource allocation is concerned, our algorithm gives higher priority to lots earlier in the sequence than to those later in it. A resource, once assigned to a task for a given duration, cannot be reassigned to another task for that duration. Thus, every task is scheduled subject to the schedule and resource allocations of all previous tasks. Furthermore, the algorithm schedules every task so as to complete it as early as possible, subject to the available resources and the completion time of its precursor task in the recipe. Finally, by its very nature, it never makes any allocation that would violate a resource constraint.

Step 3: Assign Units to Tasks. From the schedule obtained so far, we know when a task will start processing on a particular stage and when it will end, but we do not know exactly which unit in that stage it will use. We now present an algorithm to assign a distinct unit to each task using the final linked lists for the entire schedule. In step 2, we treated each stage as a single resource with a certain capacity and made sure that its usage never exceeded that capacity. This means that at least one unit in a stage is surely free at every

instance between the start and end times of every task to be processed on that stage. However, how do we know that we can process that task on one and the same unit for its entire duration? Is it possible that one unit is free for part of the time, after which it is not free but another unit becomes free, so that the task must be switched to the second unit? We call this task splitting. Thus, task splitting may conceivably arise when we assign units to tasks based on the cumulative usage profile of a stage, which suggests that the assignment of units to tasks is not a trivial problem. For a given stage usage profile, many different assignments of units to tasks are possible, and an optimal assignment may be sought. Fortunately, in this study, because each stage consists of identical units, obtaining one feasible assignment of units to tasks without task splitting is straightforward. The key to avoiding task splitting is to assign units to tasks not by lot priority, as is done in the scheduling algorithm, but by a first-in-first-out (FIFO) policy. To this end, we process all tasks on a stage, one at a time, in increasing order of their start times and assign a free unit to each task while accounting for the usage of each individual unit in that stage. For a stage s, we proceed as follows:

(1) Take the final linked list for stage s and process its elements one by one. Recall that the elements in the linked list are ordered chronologically and that each element denotes a time instance and one or more events (i.e., the start and/or end of one or more tasks) that happen at that time. While processing the linked list, we maintain an end time for each unit in stage s. This is the time at which the unit becomes free after finishing the last task assigned to it. At the start, these end times are obtained from the initial state of the plant.
(2) Now, suppose that we have already processed the first (n - 1) elements of the linked list and wish to process the nth element. Let this element denote a time instance t in the schedule. If no task starts at time t, then nothing needs to be done, and we proceed to the next element. For each task that starts at time t, we assign a unit as follows.

(3) We compare the current end times of all units with the current time t. If the end time of a unit exceeds t, then that unit is not available. If the end time of a unit does not exceed t, then that unit is available for this task. Because multiple units may be available, we need a selection criterion to choose among them. In this study, we select the available unit whose current end time is nearest to t and assign it to the task. Knowing the task time, we then update the end time of this unit.

(4) Processing of the nth element is complete when all tasks that start at t have been assigned units. We then proceed to the next element and repeat steps (2) and (3). When all elements in the linked list have been processed, all tasks will have been assigned units.

The above procedure is guaranteed to give assignments without violating the stage capacity and without any task splitting, because every stage usage profile ensures that a free unit is always available at the start of every task on that stage and that the stage capacity is not violated at any time during the entire schedule.

Remarks. In this problem, there is a unique suitable resource for every task, and hence, the question of selecting from alternate resources does not arise. In a general problem, there can be several alternate resources for every task, and then one must use some


Table 2. Linked List Details after Selected Tasks in the Schedule Calculations for Example I

After T17:
  L1: [(0, 1), (0.5, 0)]
  L2: [(0.5, 1), (1.5, 0), (2.3, 1), (3.3, 0), (3.8, 1), (4.8, 0)]
  L3: [(1.5, 1), (2.3, 0), (4.8, 1), (5.6, 0)]
  L4: [(3.3, 1), (3.8, 0)]
  completion time: C17 = 5.6 h

After T27:
  L1: [(0.0, 1), (0.5, 1), (1.0, 0)]
  L2: [(0.5, 1), (1.0, 2), (1.5, 1), (2.0, 0), (2.3, 1), (3.1, 2), (3.3, 1), (3.8, 2), (4.1, 1), (4.6, 1), (4.8, 1), (5.6, 0)]
  L3: [(1.5, 1), (2.3, 1), (3.1, 0), (4.8, 1), (5.6, 1), (6.4, 0)]
  L4: [(3.3, 1), (3.8, 0), (4.1, 1), (4.6, 0)]
  completion time: C27 = 6.4 h

After T41:
  L1: [(0.0, 1), (0.5, 1), (1.0, 1), (1.5, 1), (2.0, 0)]
  L2: [(0.5, 1), (1.0, 2), (1.5, 2), (2.0, 1), (2.3, 2), (2.5, 1), (3.1, 2), (3.3, 1), (3.8, 2), (4.1, 1), (4.6, 2), (4.8, 2), (5.6, 1), (5.8, 0), (6.3, 1), (7.3, 0)]
  L3: [(1.5, 1), (2.3, 1), (3.1, 1), (3.9, 0), (4.8, 1), (5.6, 1), (6.4, 0), (7.3, 1), (8.1, 0)]
  L4: [(3.3, 1), (3.8, 0), (4.1, 1), (4.6, 0), (5.8, 1), (6.3, 0)]
  completion time: C41 = 2.0 h

After T42:
  L1: [(0.0, 1), (0.5, 1), (1.0, 1), (1.5, 1), (2.0, 0)]
  L2: [(0.5, 1), (1.0, 2), (1.5, 2), (2.0, 1), (2.3, 2), (2.5, 1), (3.1, 2), (3.3, 1), (3.8, 2), (4.1, 1), (4.6, 2), (4.8, 2), (5.6, 2), (5.8, 1), (6.3, 2), (6.6, 1), (7.3, 0)]
  L3: [(1.5, 1), (2.3, 1), (3.1, 1), (3.9, 0), (4.8, 1), (5.6, 1), (6.4, 0), (7.3, 1), (8.1, 0)]
  L4: [(3.3, 1), (3.8, 0), (4.1, 1), (4.6, 0), (5.8, 1), (6.3, 0)]
  completion time: C42 = 6.6 h

After T43:
  L1: [(0.0, 1), (0.5, 1), (1.0, 1), (1.5, 1), (2.0, 0)]
  L2: [(0.5, 1), (1.0, 2), (1.5, 2), (2.0, 1), (2.3, 2), (2.5, 1), (3.1, 2), (3.3, 1), (3.8, 2), (4.1, 1), (4.6, 2), (4.8, 2), (5.6, 2), (5.8, 1), (6.3, 2), (6.6, 1), (7.3, 0)]
  L3: [(1.5, 1), (2.3, 1), (3.1, 1), (3.9, 0), (4.8, 1), (5.6, 1), (6.4, 0), (7.3, 1), (8.1, 1), (8.9, 0)]
  L4: [(3.3, 1), (3.8, 0), (4.1, 1), (4.6, 0), (5.8, 1), (6.3, 0)]
  completion time: C43 = 8.9 h

selection criteria to pick one from the alternate available resources. This will normally involve examining the linked lists of all of the alternate resources to determine their availabilities, plus some other criteria for selecting the best. One simple criterion could be to select the earliest available resource first. In any case, the point is that we can easily extend this algorithm to accommodate multiple alternate resources. Next, we illustrate our algorithm via two examples. The first example (Example I, Figure 2, discussed earlier) illustrates the algorithmic steps in detail, whereas the second, more complex, example demonstrates and evaluates the full potential and effectiveness of our algorithm.

Example I (Revisited)

We first create four linked lists L1, L2, L3, and L4 for stages 1-4, respectively. Assuming the plant to be empty at the start, all lists are initially empty. This completes the plant initialization step of our algorithm. We now enter the DO loop. Several snapshots of the four linked lists, as they are updated after scheduling various tasks, are shown in Table 2. After scheduling the first three tasks of lot 1, we have two entries each in L1, L2, and L3, reflecting the allocations of stages 1-3 to tasks T11, T12, and T13.

For a better illustration of the allocation process, consider the scheduling of T42. From Table 2, C41 = 2 h implies that T42 cannot start before t = 2 h. T42 needs 1 h of processing on a unit in stage 2. To obtain the earliest free slot of 1 h, we examine the usage levels in L2 starting with t = 2 h in two repeated steps: (1) find a time at which T42 can possibly start, and (2) check whether a unit in stage 2 is available for 1 h from that start time. Because the usage level at t = 2 h is less than the maximum stage capacity, t = 2 h is a possible instance at which T42 can start. For a unit in stage 2 to be available during [2, 3] h without violating any resource constraint, we must check the next entry in L2, which is t = 2.3 h. Because no unit is free during [2.3, 2.5] h, T42 cannot start at t = 2.0 h. The next point at which the usage level falls below the maximum capacity is t = 2.5 h, so we take it as a possible start time and examine subsequent entries in L2. However, we again find an instance (t = 3.1 h) of maximum usage, which violates condition 2. Therefore, we also discard t = 2.5 h as a possible start time for T42 and look for another. Proceeding in this fashion, we find that [5.6, 6.6] h is the earliest free slot for T42 in L2, so we assign T42 to that slot and update L2 as shown in Table 2. After scheduling all tasks, we obtain a makespan of 12.2 h. Assigning units to tasks with the algorithm described in step 3, we obtain the final schedule in Figure 3.

Having illustrated our algorithm on a simple example, we now take a more complex process and use our algorithm in both deterministic and stochastic environments. Furthermore, we compare it with the SBA of Lee et al.10 and analyze the impact of the lot release time interval on various system performance criteria.

Example II

Let us consider an aggregate model (Figure 5) of a wafer fab, originally proposed by Lu et al.9 as a gross approximation of a production line. Table 3 gives the plant configuration, the recipe, and the mean task times. For now, we consider a deterministic production environment in which task times are fixed at their mean values. For this example, the absolute minimum residence time is 67.3 h, and the minimum cycle time is 1.8 h. First, we compare the performances of our algorithm and the SBA.10

Comparison with the SBA. For this comparison, we vary N from 5 to 100 and assume that all lots are available for release into an empty plant at time zero. Table 4 shows the makespans obtained from the two algorithms for different N and the corresponding LB1 values. Because our algorithm requires negligible CPU time even for N = 100, we have not shown its actual CPU times. The superiority of our algorithm for the present problem is evident in Table 4.
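The LB1 values in Table 4 are consistent with charging one minimum cycle time (1.8 h) for each lot beyond the first lot's minimum residence time (67.3 h). This is our reading of the bound from the reported numbers, not a formula quoted from the text:

```python
def lb1(n_lots, min_residence=67.3, min_cycle=1.8):
    """Lower bound on makespan (our reading): the first lot's minimum
    residence time plus one minimum cycle time per subsequent lot."""
    return min_residence + (n_lots - 1) * min_cycle

# Reproduces every LB1 entry in Table 4:
for n, expected in [(5, 74.5), (6, 76.3), (8, 79.9), (10, 83.5),
                    (20, 101.5), (50, 155.5), (75, 200.5), (100, 245.5)]:
    assert round(lb1(n), 1) == expected
```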
In contrast to the SBA, our algorithm easily attains LB1 for N ≤ 22, although it does show a departure from LB1 for larger


Figure 5. Schematic diagram of an aggregate model of a wafer production line for Example II.

N. However, this does not necessarily reflect poorly on our algorithm: it may simply not be possible to attain LB1 for large N, in which case the makespans from our algorithm may still be the best. The SBA demands large CPU times that increase exponentially with N because of its enumerative nature; in contrast, the computational effort of our algorithm should grow nearly linearly with N. Figure 6 shows how the start and end times of lots vary with N. Here, the start time is the time at which a lot starts processing on the first stage, while the end time is the time at which it finishes processing on the last stage. The difference between the two is the actual residence time of a lot. Clearly, the lot residence time

increases with the lot number. This is expected, as the queue lengths at the various stages grow when we flood the system with lots. Let us now study the impact of the lot release policy on various system performance measures in this example. Several different release policies have been proposed3,9 for fabs in the literature. We use the simplest, i.e., releasing lots into the process at a fixed time interval, which we call the lot release time interval.

Effect of Lot Release Interval. We fix N = 100 arbitrarily and consider the system to be deterministic. For this case, LB1 is 245.5 h. If the scheduling objective

Table 3. Process and Recipe Data for Example II

Figure 7. Effect of lot release interval on maximum queue sizes on station 6 in Example II.

Table 4. A Comparison of Makespans from and Computational Efforts of Our Algorithm with Those of the SBA (Lee et al.10) for Example II

                   makespan (h)a
lots N    LB1 (tight bound)    our algorithmb    SB algorithmc    SBA CPU time (s)c
   5           74.5                 74.5              73.9              298.1
   6           76.3                 76.3              77.9              779.5
   8           79.9                 79.9              83.8             1629.5
  10           83.5                 83.5              90.5             3750
  20          101.5                101.5                -                 -
  50          155.5                176.25               -                 -
  75          200.5                232.00               -                 -
 100          245.5                296.95               -                 -

a All lots are released at time zero. b CPU times for our algorithm are negligible compared to those for the SBA. c Results as reported by Lee et al.10

Figure 6. Entry (start of processing) and exit (end of processing) times of lots as predicted by our scheduling algorithm in Example II for the case of zero lot release times.

Table 5. Effect of Lot Release Time Interval on the Maximum Queue Sizes at Various Stages

lot release         maximum queue size at stage/station
interval (h)     1    2    3    4    5    6    7    8    9   10   11   12
    0.0         12    5   33    6    8   55    0    4   27   11    8    0
    1.0          7    5   14    5    3   30    0    3   28    9    8    0
    1.8          6    2    6    3    4    9    0    4   20    4    5    0
    2.0          5    3    4    5    4   11    0    4   12    2    5    0
    3.0          3    2    1    2    2    3    0    2    4    0    3    0
    3.2          2    2    0    3    1    2    0    3    1    0    1    0
    3.6          2    1    0    1    1    0    0    1    2    0    2    0
    3.8          0    0    0    0    0    0    0    0    0    0    0    0
    4.0          2    1    0    1    1    1    0    3    1    0    1    0

is to maximize productivity alone, then it makes sense to release all lots at zero time, as the fab is assumed to have unlimited intermediate storage. So we first look at the case in which all 100 lots are queued up at zero time at the first processing stage. Because input is not regulated, as production continues, lots arrive rapidly and thus have to queue up for processing at each station.
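Queue statistics like those in Table 5 can be extracted from a finished schedule by sweeping each stage's events in time order; a minimal sketch of our own bookkeeping (the event data below are hypothetical, not taken from the example):

```python
from itertools import groupby

def max_queue(arrivals, starts):
    """Peak number of lots waiting at a stage, given the times at which each
    visit becomes ready (arrival) and actually begins processing (start)."""
    events = sorted([(t, +1) for t in arrivals] + [(t, -1) for t in starts])
    peak = q = 0
    # Evaluate the queue only after all events at the same instant are applied,
    # so a lot that starts the moment it arrives is never counted as queued.
    for _, group in groupby(events, key=lambda e: e[0]):
        for _, delta in group:
            q += delta
        peak = max(peak, q)
    return peak

# Three lots ready at t = 0 on a single-machine stage that starts them
# at t = 0, 1, and 2: two lots are queued at t = 0.
print(max_queue([0, 0, 0], [0, 1, 2]))  # 2
```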

Queues build up because lots are released into the system faster than the system can process them. This can be avoided by releasing the lots intelligently. Table 5 lists the maximum queue size at each station for different release intervals. This information is useful in identifying critical stages. A critical stage is not necessarily the station with the fewest machines, the most revisits, or the most expensive equipment, nor even the most important one from a process viewpoint. Queue data such as those in Table 5 can be used to identify the stations needing higher capacity and/or more machines in a re-engineering effort. In this example, stage 6 appears to be the most critical, as it experiences the longest queues (Figure 7). Note that stations 7 and 12 have no queues at any point in time, even when all lots are released at time zero. Also, the queue at stage 1 does not include lots that have been released into the system but have not yet begun processing.

As the release interval increases, the maximum queue sizes decrease dramatically. Interestingly, the reduction is slower at stage 9 than at stage 6, even though the latter appeared to be more critical. The queues do not disappear even at an interval of 1.8 h, which is the minimum cycle time as defined earlier. This suggests that the effective cycle time in this system is, in fact, greater than the minimum cycle time. Even for intervals greater than 1.8 h, queues exist at some time during production. Needless to say, the reentrant nature of this process has a complex effect on queues, and it is not possible to predict the queue trends very precisely. The mean queuing time showed a behavior similar to that of the maximum queue size as the release interval was varied.

Figure 8a-c shows the effect of the release interval on the makespan, mean residence time, and mean cycle time. For small release intervals, the makespan increases negligibly with increasing release interval.
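These three measures can be computed directly from the lot release and completion times. The sketch below uses our own working definitions (makespan = last completion; residence time = completion minus release; mean cycle time = mean gap between successive exits), which are an assumption, not the paper's stated formulas; the numbers are hypothetical:

```python
def performance(release_times, end_times):
    """Makespan, mean residence time, and mean cycle time of a lot sequence."""
    n = len(end_times)
    makespan = max(end_times)
    mean_residence = sum(e - r for r, e in zip(release_times, end_times)) / n
    mean_cycle = (end_times[-1] - end_times[0]) / (n - 1)  # mean gap between exits
    return makespan, mean_residence, mean_cycle

# Five lots released every 4.0 h, each spending a hypothetical 67.5 h in the plant:
rel = [4.0 * i for i in range(5)]
ends = [r + 67.5 for r in rel]
print(performance(rel, ends))  # (83.5, 67.5, 4.0)
```

Note that the mean cycle time here equals the release interval, the conflict-free behavior described below for sufficiently large intervals.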
Late release causes little delay in the overall production time but helps to distribute the load on the stages, reducing the maximum queue sizes. For large release intervals, conflicts between competing lots are reduced, but delays arise because lots are not released fast enough. This also means, however, that a lot does not need to wait for stages, so its residence time decreases with increasing release interval. Note that the mean cycle time equals the release interval over a range of release intervals. Figure 9 shows the start/end times for one such case, in which the start and end time lines are parallel. This is a case of no conflict between lots for time on any stage: the queues vanish at all stations. This can be likened to a zero-wait (ZW) policy solution13 for this system. In such a case, the actual lot residence time is exactly equal to the minimum possible

Table 6. Effect of Stochastic Processing Times and Their Distributions on System Performance Measures

                           deterministic       normal distribution    log-normal distribution
performance measure       mean     std dev      mean      std dev       mean       std dev
cycle time (h)            2.34     0            2.18      0.04          2.29       0.03
residence time (h)        166.30   0            161.80    1.41          165.11     0.72
makespan (h)              296.95   0            283.46    3.95          294.17     3.11

residence time. However, this is achieved at the expense of an increased cycle time.

Figure 8. Effect of lot release interval on various system performance measures in Example II: (a) completion times of 100 lots, (b) mean lot residence time, and (c) mean cycle time.

Figure 9. Entry (start of processing) and exit (end of processing) times for the zero-wait case of Example II.

Random Task Times. Having studied the deterministic version of Example II, we now assume that task times are not constant but vary randomly according to either a normal or a log-normal distribution. We again use N = 100. Let us assume that the means and variances of the task times are known a priori. Let the mean task times be as listed in Table 3, i.e., mean µs = ts for stage s, and let us arbitrarily assume a standard deviation σs = 0.05ts. For reliable results, we generate 100 stochastic schedules for each distribution (normal and log-normal). For each schedule run, we generate random task times a priori using the appropriate distribution. To generate log-normally distributed task times, we use the three-parameter (θ, η, ξ) log-normal distribution,18 where θ is the minimum possible task time and η and ξ are the mean and standard deviation, respectively, of the normal random variable (NRV) that is equivalent to the log-normal task time. For the task time at stage s, we compute η and ξ for its equivalent NRV using µ = ts, σ = 0.05ts, and θ = 0.9ts as follows:

ξ² = ln[1 + σ²/(µ - θ)²]    (5a)

η = ln(µ - θ) - (1/2) ln[1 + σ²/(µ - θ)²]    (5b)
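Eqs 5a and 5b can be applied as follows (illustrative Python using the standard library; a hedged sketch, since the paper does not give its sampling code):

```python
import math
import random

def lognormal_task_time(ts, rng=random):
    """Sample a stage task time with mean ts, using the paper's choices
    mu = ts, sigma = 0.05*ts, and minimum task time theta = 0.9*ts."""
    mu, sigma, theta = ts, 0.05 * ts, 0.9 * ts
    xi2 = math.log(1.0 + sigma**2 / (mu - theta)**2)   # eq 5a
    eta = math.log(mu - theta) - 0.5 * xi2             # eq 5b
    # theta + exp(NRV) is the three-parameter log-normal variate.
    return theta + math.exp(rng.normalvariate(eta, math.sqrt(xi2)))

rng = random.Random(0)
samples = [lognormal_task_time(10.0, rng) for _ in range(20000)]
print(min(samples) > 9.0)                               # True: never below theta
print(abs(sum(samples) / len(samples) - 10.0) < 0.05)   # True: sample mean near ts
```

The first check illustrates exactly the property argued for below: unlike a normal variate, the log-normal task time can never fall below the minimum θ.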

In our opinion, the log-normal distribution should be preferred over the normal distribution for modeling task times, as the latter can produce arbitrarily low task times in some cases. This is unrealistic, as one cannot possibly complete processing at a stage in less than some minimum time. For each stochastic schedule, we compute various performance criteria, such as the makespan, average residence time per lot, and average cycle time. Then, we compute the means and variances of these three criteria over all 100 schedule runs. Table 6 shows the effect of randomness in task times on the performance measures. There is a difference between the stochastic and deterministic results, although the log-normal results are quite close to the deterministic ones. The fact that the makespan seems to decrease in the stochastic case is not at all obvious; it highlights the complex interplay between task times and resource conflicts. The normal distribution gives lower values than the log-normal distribution mainly because normal task times can be arbitrarily low. The log-normal distribution, on the other hand, is clearly better suited, because of its ability to prevent the random task times from falling below a certain minimum. This study shows that our algorithm can easily be applied in a stochastic environment and that even its deterministic implementation produces results comparable to a stochastic one.

So far in this paper, we have assumed a fixed lot size. It is expected, and we have verified, that an increase in the lot size merely scales up the various performance criteria, such as the makespan, residence time, and cycle time. We are currently working on an extension of our algorithm to the case of nonidentical product lots, i.e., a system with nonuniform processing times.
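The 100-replication experiment behind Table 6 can be organized as below. This is a sketch: `build_schedule` stands for one full run of the scheduling algorithm with freshly sampled task times and is assumed here, so we stub it with hypothetical random makespans:

```python
import random
import statistics

def monte_carlo(build_schedule, runs=100):
    """Mean and standard deviation of the makespan over repeated stochastic runs."""
    makespans = [build_schedule() for _ in range(runs)]
    return statistics.mean(makespans), statistics.stdev(makespans)

# Stub standing in for one stochastic scheduling run (hypothetical numbers):
rng = random.Random(1)
stub = lambda: rng.normalvariate(283.5, 4.0)
mean, std = monte_carlo(stub)
print(round(mean, 1), round(std, 1))
```

The same driver yields the residence-time and cycle-time rows of Table 6 when `build_schedule` returns those measures instead.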


Conclusion

A simple one-pass algorithm based on a novel, product-at-a-time strategy was presented for scheduling a single-product process with reentrant flows and uniform processing times. The algorithm is well suited for wafer fabs in semiconductor manufacturing, which demand complex scheduling considerations. Although developed with a deterministic, zero-breakdown environment in mind, it can easily be extended to a stochastic, failure-prone environment, as illustrated in this paper. Using the concept of a rate-limiting stage, two lower bounds on the time required to produce a given number of lots were also derived. Of these, the guaranteed bound is quite conservative and works well for a small number of lots, whereas the other, which is not guaranteed, is quite good for the scheduling of a large number of lots. Computationally, our algorithm performs far better than the existing partial-enumeration method of Lee et al.,10 and it also gives much better makespans in almost all of the instances tested. Reentrant processes pose very challenging scheduling problems, and further work is in progress to enhance and improve our algorithm and to compare it with other simulation-based strategies employing queue-processing rules.

Acknowledgment

This work was supported by a research grant (RP970642) from the National University of Singapore. We are indebted to Prof. M. P. Srinivasan for his invaluable discussions on the operation of wafer fabs. We also thank the reviewers for their constructive comments, which have led to a more complete paper. In particular, we thank the reviewer who commented on task splitting.

Notation

Cik = completion time of task Tik
K = number of tasks in a product recipe
ms = number of identical machines in stage s
N = number of lots to be produced
pk = processing time of task k in the recipe
sk = stage on which task Tk is performed
S = total number of processing stages/stations
ts = task processing time on stage s
t* = minimum cycle time of the system for a product
t*last = cycle time of the last stage in a product recipe
Tk = task k in a product recipe
Tik = task k of product lot i
vs = total number of visits made to stage s during a product recipe

Greek Letters

µ = mean value of task time
σ = standard deviation of task time
η = mean of the NRV equivalent to a log-normal RV
ξ = standard deviation of the NRV equivalent to a log-normal RV
θ = minimum positive value of a log-normal random variable

Abbreviations

SBA = sequence branch algorithm
LB = lower bound

Literature Cited

(1) Seow, B. Q. Career in a DRAM wafer fab. In Proceedings of the Interfaculty Seminar on Meeting the Needs of the Wafer Fabs: Teaching of Microelectronics at NUS; Singapore, January 20, 1996; p 33.
(2) Srinivasan, M. P. A chemical engineer in the microelectronics industry. In Proceedings of the Interfaculty Seminar on Meeting the Needs of the Wafer Fabs: Teaching of Microelectronics at NUS; Singapore, January 20, 1996; p 71.
(3) Wein, L. M. Scheduling semiconductor wafer fabrication. IEEE Trans. Semicond. Manuf. 1988, 1 (3), 115.
(4) Johri, P. K. Practical issues in scheduling and dispatching in semiconductor wafer fabrication. J. Manuf. Syst. 1993, 12 (6), 474.
(5) Kumar, P. R. Scheduling semiconductor manufacturing plants. IEEE Control Syst. Mag. 1994, 33.
(6) Kim, J. S.; Leachman, R. C. Decomposition method application to a large-scale linear programming WIP projection model. Eur. J. Oper. Res. 1994, 74, 152.
(7) Hung, Y. F.; Leachman, R. C. A production planning methodology for semiconductor manufacturing based on iterative simulation and linear programming calculations. IEEE Trans. Semicond. Manuf. 1996, 9 (2), 257.
(8) Dessouky, M. M.; Leachman, R. C. Dynamic models of production with multiple operations and general processing times. J. Oper. Res. Soc. 1997, 48, 647.
(9) Lu, S. C. H.; Ramaswamy, D.; Kumar, P. R. Efficient scheduling policies to reduce mean and variance of cycle-time in semiconductor manufacturing plants. IEEE Trans. Semicond. Manuf. 1994, 7 (3), 374.
(10) Lee, S.; Bok, J. K.; Park, S. A new algorithm for large-scale scheduling problems: Sequence branch algorithm. Ind. Eng. Chem. Res. 1998, 37, 4049.
(11) Graves, S. C.; Meal, H. C.; Stefek, D.; Zeghmi, A. H. Scheduling of re-entrant flow shops. J. Oper. Manage. 1983, 3 (4), 197.
(12) Peyrol, E.; Floquet, P.; Pibouleau, L.; Domenech, S. Scheduling and simulated annealing application to a semiconductor circuit fabrication plant. Comput. Chem. Eng. 1993, 17 (S), S39.
(13) Ku, H. M.; Karimi, I. A. Completion time algorithms for serial multiproduct batch processes with shared storage. Comput. Chem. Eng. 1990, 14 (1), 49.
(14) Applequist, G.; Samikoglu, O.; Pekny, J.; Reklaitis, G. V. Issues in the use, design and evolution of process scheduling and planning systems. ISA Trans. 1997, 36 (2), 81.
(15) Kuriyan, K.; Reklaitis, G. V. Scheduling in network flowshops. Comput. Chem. Eng. 1986, 36 (2), 81.
(16) Moon, S.; Park, S.; Lee, W. K. New MILP models for scheduling of multiproduct batch plants under zero-wait policy. Ind. Eng. Chem. Res. 1996, 35, 3458.
(17) Bhalla, A. Planning and scheduling of a thin film resistor process. M.Eng. Dissertation, National University of Singapore, Singapore, 2000.
(18) Johnson, N. L.; Kotz, S. Continuous Univariate Distributions 1; Houghton Mifflin: Boston, MA, 1975.

Received for review April 3, 2000 Revised manuscript received August 9, 2000 Accepted August 31, 2000 IE000380X