

Continuous Time Representation Approach to Batch and Continuous Process Scheduling. 2. Computational Issues

L. Mockus and G. V. Reklaitis*

School of Chemical Engineering, Purdue University, West Lafayette, Indiana 47907

* Author to whom all correspondence should be addressed. E-mail: [email protected]. Fax: (317) 494-0805.

The first part of this series presented a general mathematical framework for describing a wide variety of scheduling problems arising in multiproduct/multipurpose batch and continuous chemical plants. The problem was formulated as a large mixed integer nonlinear programming model (MINLP). A technique that exploits the characteristics of the problem in order to reduce the amount of required computation is reported. The technique is based on the Bayesian heuristic (BH) approach to discrete optimization. The BH approach allows incorporation of different heuristics into the solution process. A material requirement planning (MRP) heuristic is employed in the work reported herein. Computational examples are presented to illustrate the applicability of the method to the scheduling of multipurpose plants under a variety of operational conditions.

1. Introduction

The first part of this series (Mockus and Reklaitis (1999)) described a general approach to the short-term scheduling problem of multipurpose batch and continuous operations. A mathematical programming formulation that takes into account a number of the features of realistic industrial problems was presented. The formulation is based on a continuous time representation, in which the planning horizon is divided into a number of intervals of unequal and unknown duration, resulting in a large mixed integer nonlinear program. In principle, the optimal solution of this MINLP can be obtained by standard exact generalized Benders decomposition, outer approximation, or branch and bound techniques. However, in practice, it is known that such problem structures often result in exponential growth of solution times with problem size, thus rendering most industrially relevant problems intractable. Furthermore, these approaches do not guarantee attainment of global optimum solutions when the underlying nonlinear relaxations are nonconvex, as is the case with the NUDM formulation.

This work is an attempt to address the above problem. It was suggested in Mockus and Reklaitis (1996) that the Bayesian heuristic approach could be employed for the solution of such problems. The BH framework allows us to adapt any specific heuristic to a given class of problems. Furthermore, by a clever choice of heuristic, we can make the numerical algorithm more efficient. In Mockus and Reklaitis (1996) a general simulated annealing heuristic is used. In this work we employ a specialized material requirements planning heuristic tuned for a class of batch and continuous scheduling problems. Computational experiments suggest that the BH approach combined with nonuniform time discretization shows promise for the solution of batch and continuous scheduling problems.

Section 2 summarizes the key aspects of the BH approach. Section 4 describes how the MRP heuristic (Orlicky (1975)) is tuned to incorporate batch and continuous operations. Finally, section 9 illustrates the effectiveness of this technique vis-à-vis the standard branch and bound technique applied to the uniform time discretization approach (UDM).

2. Bayesian Heuristic Approach

Algorithms of exponential complexity are usually required to obtain the exact solution of global and discrete optimization problems. Even in cases when an approximate solution which lies within some tight error bounds is acceptable, the exponential complexity often remains. The desire to guarantee satisfactory results for the worst case is an important factor in forcing exponential complexity. Therefore, in practice the solution of many applied global and discrete optimization problems is often approached using heuristics.

Most decision processes consist of a number of steps. For example, in batch scheduling problems these steps are as follows: selecting a task, selecting a suitable equipment unit to process the task, and determining the amount of material to be processed by this task. In each step, an object is selected from some decision set (for example, select a task from a given set of tasks which produce the necessary product). A heuristic is a set of rules used to perform a step. For the classical knapsack problem, for example, this heuristic might be to select an object with maximal specific price (price over weight ratio). It may be helpful to think of this process as descending the decision tree along a path predefined by the heuristic rules. In Figure 1a, the descent in a decision tree using the rule of always choosing the leftmost node is illustrated.

The key idea of the BH approach is to randomize these heuristic rules. Instead of descending the decision tree only once, the descent is repeated many times along different paths. Each path is selected by applying the heuristic rule with some parameterized probability. Since the set of parameters for each descent in the decision tree is different, each set represents a different solution replicate, and thus the best of them can be retained. In Figure 1b, for each replicate the leftmost node is chosen with some probability that is the same within a specific replicate but differs between replicates.

The parameterization is another key feature of the BH approach.



Figure 1. Decision tree: (a) deterministic descent; (b) randomized descents.

If we know or expect that some heuristic “works” well, then we may increase the efficiency of the search by randomizing the parameters of the heuristic. Instead of solving a multidimensional discrete optimization problem directly, we tune the parameters of the randomized heuristic. This tuning process is a low dimensional continuous optimization problem. We solve this tuning problem using the Bayesian method of global optimization (Mockus (1989)).

The main advantage of the worst case rigorous approach is that it yields error bounds on the solution. The main disadvantage is the focus on the worst possible case for a given class of problems. If this class is large, then in order to obtain sufficiently tight bounds many iterations may be required. This is the natural “cost” of such a guarantee. The main advantage of the Bayesian approach is its focus on average case performance. An additional advantage of the BH approach is the possibility of including expert knowledge in a natural and convenient way. The potential ability to “learn” is also a positive feature of the BH approach. By learning, we mean that the decision parameters which are optimal for some problems of the given class may be “good enough” for the rest of the class. The main disadvantage is that it is in general not possible to obtain and maintain guaranteed bounds on the quality of the solution.

3. Constraint Handling

The feasible region of the MINLP model defined in the first part of this series (Mockus and Reklaitis (1999)) is quite complex since it is described as the intersection of a number of different linear and bilinear constraint families. By contrast, Bayesian global optimization methods are designed for feasible regions described by hyper-rectangles. Thus, to use these methods, some device is needed to transform the complex feasible region into a hyper-rectangle. There are two strategies for accomplishing this. The first is to handle the constraints implicitly, that is, to ensure that constraints are satisfied as an inherent part of the construction of the heuristics. For instance, each piece of equipment is only allowed to be assigned once at a given time by virtue of how the assignment heuristic is executed. The second way is to use the classical penalty function construction, that is, to drive constraint satisfaction by adding a term to the objective function which consists of the amount of constraint violation multiplied by a large constant. Constraint infeasibilities thus cause large bumps in the objective function value, which in turn lead the optimization method to search in other directions.
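As a rough illustration of this penalty construction, the following Python sketch adds capacity-violation penalties to the scheduling profit; the function and data-structure names are illustrative only and do not come from the original implementation:

def penalized_objective(profit, state_levels, capacity, penalty_coeff):
    """Subtract a penalty proportional to state-capacity violations from the profit.

    state_levels:  dict mapping state -> list of inventory levels over the horizon
    capacity:      dict mapping state -> (min_level, max_level)
    penalty_coeff: large constant, e.g. scaled to the problem's cost coefficients
    """
    violation = 0.0
    for state, levels in state_levels.items():
        lo, hi = capacity[state]
        for level in levels:
            # amount by which the inventory falls outside its allowed band
            violation += max(0.0, level - hi) + max(0.0, lo - level)
    # infeasible schedules receive a large "bump" that steers the search away
    return profit - penalty_coeff * violation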

There are two well-known difficulties with the penalty strategy. The first is that with the addition of many terms corresponding to the constraint violations, the objective function topology becomes quite complex, even multimodal, causing great difficulties for the optimization method. The second is that the values of the penalty parameters must be carefully chosen. If the values of the parameters are chosen too large, the search degenerates to a search for the “nearest” feasible solution. If the parameters are chosen too small, the constraints may be violated. Sometimes this difficulty is mitigated by some form of iterative parameter adjustment strategy in which the penalty parameter values are increased as the search progresses.

The approach employed in this study is a combination of the above two strategies: the randomized material requirements planning heuristic is constructed in such a way that the majority of the constraints, integer and continuous, are implicitly satisfied. The violations of the state capacity constraints (eq 8, Mockus and Reklaitis (1999)) and the violations of the hard demand due date constraints (eq 5, Mockus and Reklaitis (1999)) are treated using penalty terms. The values of the penalty parameters for the latter construction are scaled according to the magnitudes of the other scheduling objective function coefficients, such as storage costs, product values, etc. A general rule which has proven effective in this work is to set the penalty parameters to 100 times the value of the largest unit storage cost or product sales price. Although various penalty parameter adjustment strategies may well lead to some improved performance, we have not found it necessary to employ such devices.

4. Material Requirement Planning Heuristic

MRP is an inventory management and production planning technique which, given a delivery schedule for final products, determines the initiating times for all raw material orders and for the production runs needed to prepare all required intermediate products, as well as the starting times for the production of the final product itself. Given a delivery time for a product shipment, each branch of the product processing tree is traced from the product in question and each component requirement is calculated. If the inventory is inadequate, then a production order is issued for that component. The tracing of each branch of the processing tree continues until all raw material requirements have either been met or ordered. For scheduling problems relevant to the chemical industry, we have to extend this heuristic to handle unit and task assignment, batch size determination, and other features. For purposes of this paper we will employ simple and natural heuristic rules. Of course, these rules could be enhanced or augmented to give further improvements in the results. The following is the list of heuristic rules.

(1) Product Selection Rule. The production run of each product in a batch and continuous processing system has some due date. We begin with the product which has the earliest due date. The rationale for this is that knowledge of due dates close to the start of the scheduling horizon is more concrete. When production of this product is scheduled, we recursively also schedule the production of the intermediates required to produce it.


Suppose that we selected product (or intermediate) s with due date D_s. If the decision d is to select some product, then h(d) = D_s is the heuristic function value for this decision. For example, if the due date for product1 is 8 p.m. and the due date for product2 is 6 p.m. (see Figure 2a), then our decision will be to select product2.

(2) Equipment Item Selection Rule. To schedule production of a given product or intermediate, it is necessary to select a task and an equipment unit to process this task. When there are several tasks producing the same product, we randomly assign the quantity of the product that a given task has to produce. Then we select an equipment item which is available during the time interval closest to the due date (a unit may become unavailable for processing because it is processing another task, it is shut down for maintenance, etc.). Suppose that such an interval for unit j ends at time t_j. Mathematically this rule can be stated as the minimization of D_s - t_j over j. For example, assume that task2 produces product2 and that it can be performed on unit1 and unit2. Then we select unit1 as the equipment item to process task2 (see Figure 2b). The interval closest to the due date is chosen as the time interval during which the task is processed on the selected unit.

(3) Task Start Selection Rule. Usually the length of the unit availability interval differs from the task processing time. The rule starts a given task so that it ends exactly at the end of this interval. Suppose that task i finishes processing on unit j by time t_i; then mathematically this rule can be stated as the minimization of t_j - t_i over i. For example, assume that the task2 processing time on unit1 is 1 h and that the interval chosen in the previous step is from 3 to 5 p.m. Then task2 is started at 4 p.m. so that it ends at 5 p.m. (see Figure 2c).

We can express these rules in the following way. Let t^S_i be the start time of task i and τ_ij be the processing time of task i on unit j. Furthermore, assume that the list of unit j availability intervals is expressed as a set {(t^S_jl, t^E_jl) | l = 1, ..., J_j}, where t^S_jl is the start of the interval, t^E_jl is the end of the interval, and J_j is the number of such intervals. Then

$$ s = \arg\min_s \{ D_s \} \quad (1) $$

$$ (j, l) = \arg\min_{\{(j,l)\,:\,\min\{t^E_{jl},\,D_s\} - t^S_{jl} \,\geq\, \tau_{ij}\}} \left\{ D_s - \min\{t^E_{jl},\, D_s\} \right\} \quad (2) $$

$$ t^S_i = \min\{t^E_{jl},\, D_s\} - \tau_{ij} \quad (3) $$

Equation 1 implements the product selection rule, eq 2 implements the equipment item selection rule, and eq 3 implements the task start selection rule.

The schedule generation process is a recursive process of satisfying due dates for each final product p* (intermediates may be treated as products since they are produced from other intermediates or raw materials). The flowchart of this heuristic is presented in Figure 3. In the flowchart, t^S_i is the starting time of task i and S_i is the set of intermediate materials required by this task. We have to note that raw materials are not included in this set. For the purpose of this work we make the assumption that when a raw material is required it is readily available. However, this assumption can be easily removed by appropriate modification of the rules presented above (for example, purchase more material than is required by a given order).
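As a concrete illustration, equations 1-3 might be implemented along the following lines; the data structures (due-date and processing-time dictionaries, per-unit availability windows) are simplified and hypothetical, not the authors' code:

def select_product(due_dates):
    """Eq 1: pick the product or intermediate with the earliest due date D_s."""
    return min(due_dates, key=due_dates.get)

def select_unit_interval(D_s, task, intervals, proc_time):
    """Eq 2: among availability windows long enough to finish the task by its
    due date, choose the unit/window whose (clipped) end is closest to D_s.

    intervals: dict mapping unit j -> list of (t_start, t_end) windows
    proc_time: dict mapping (task, unit) -> processing time tau_ij
    """
    best, best_gap = None, float("inf")
    for j, windows in intervals.items():
        tau = proc_time.get((task, j))
        if tau is None:                      # unit j cannot perform this task
            continue
        for l, (t_start, t_end) in enumerate(windows):
            end = min(t_end, D_s)            # the task must finish by the due date
            if end - t_start >= tau:         # the task fits in this window
                gap = D_s - end
                if gap < best_gap:
                    best, best_gap = (j, l, end), gap
    return best                              # None if no feasible window exists

def task_start(end, tau):
    """Eq 3: start the task so that it finishes at the chosen interval end."""
    return end - tau

With the due dates of Figure 2a expressed in hours (product1 due at 20:00, product2 at 18:00), select_product returns product2, in agreement with the deterministic rule.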

Figure 2. MRP heuristics for batch scheduling problems: (a) product selection rule; (b) equipment item selection rule; (c) task start selection rule.

Figure 3. Flowchart of the MRP Heuristics.

The scheduler maintains a separate due date list L_p* for each final product. These lists are initially empty. Each element of a due date list is a pair (t, a), where t is the time when a given final product is due and a is the required amount of this product. Furthermore, there is one final product list which initially contains all final products (L = {p*}). The scheduler also maintains a separate equipment availability time list L_j = {(t^S_jl, t^E_jl, B_jl)} for each unit j, where (t^S_jl, t^E_jl) is the time interval when the unit is available, l is the index of this interval, and B_jl is the fraction of the volume of the unit used during this interval. In general this list initially contains one element (0, H, 0), which means that the unit is available from the start 0 until the end H of the scheduling horizon and that all of its capacity may be used for processing. However, for the case when equipment maintenance times are known, these times are excluded from the list.

We select a final product p* to be produced from the final product list L using the first rule. Then we schedule a task i which produces this final product p* using the equipment item selection rule.
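The bookkeeping just described, together with the interval update discussed in the next paragraph, might be sketched as follows (a simplified, hypothetical version; fractional capacity use is recorded but not otherwise enforced):

# Initial bookkeeping: empty due-date lists, all final products pending,
# and one availability entry [0, H, 0.0] per unit (free for the whole horizon).
H = 168.0                                        # hypothetical horizon length, h
due_dates = {"p1": [], "p2": []}                 # L_p*: (time due, amount) pairs
final_products = ["p1", "p2"]                    # L
avail = {"unit1": [[0.0, H, 0.0]],               # L_j: [t_start, t_end, used fraction]
         "unit2": [[0.0, H, 0.0]]}

def place_task(avail, j, l, start, end, batch_fraction):
    """Update unit j's availability list after a task occupies [start, end]
    inside window l; the window is removed only if its capacity is fully used."""
    t_start, t_end, used = avail[j].pop(l)
    new_used = used + batch_fraction
    pieces = []
    if t_start < start:                          # free time left before the task
        pieces.append([t_start, start, used])
    if new_used < 1.0:                           # spare capacity during the task
        pieces.append([start, end, new_used])
    if end < t_end:                              # free time left after the task
        pieces.append([end, t_end, used])
    avail[j][l:l] = pieces                       # splice the surviving pieces back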


When the equipment item j is selected, we start task i at the time defined by the last rule. The equipment availability time list L_j for the equipment item j is then updated. The time interval during which this task is processed is removed from the list L_j if the entire capacity of the corresponding equipment is used. If all of the capacity is not used, then the interval is left in the list with an indication of how much of the capacity of the unit is used. Execution of a task may require some intermediate materials, which are inserted into the due date list L_p* of products to be scheduled. In this case the task start time becomes the due date for those materials. In this fashion we proceed recursively to schedule all of the products in the due date list L_p*. If the execution of a task requires only raw materials, then the scheduling of that product is finished and another product from the due date list L_p* is selected. The schedule for the final product p* is considered to be generated when there are no more products to schedule, and this product is then removed from the final product list L. We then proceed to the scheduling of another final product from the final product list L.

Two additional rules, dealing with the size of a task and with sequence dependent changeovers, have to be mentioned. Since these rules are not randomized, they are presented separately. First, the batch size selection rule is not randomized basically because batch size is a continuous variable and, thus, we do not expect a significant profit increase associated with its randomization. Moreover, the batch size variable is optimized in the last step of the overall algorithm outlined in section 5. Second, the heuristic for dealing with changeovers is based on the realistic assumption that the changeover time is much less than the task processing time. However, this assumption can be easily removed. In principle, both of the above rules may be randomized at the cost of additional computing time.

The following strategy is adopted to decide the size of a batch task. Whenever the equipment item to process a task is selected, we try to process as much material as possible. The limiting factors in this case are the capacity of the equipment unit which is selected for processing and the required amount of material to be processed. If the capacity of the equipment is the lesser of the two, then a new demand for this product is generated. The size of the demand is set equal to the amount of the remaining unprocessed material. This new demand is satisfied using the recursive method described above. As an example, consider that it is necessary to produce 100 tons of material p* but the selected unit can produce only 60 tons. Thus we schedule a task on this unit and insert a demand for the remaining 40 tons of this product p* into the due date list L_p*. The due date for this demand is set to be the task start time.

For continuous tasks we assume that continuous units are capable of processing all required amounts of material V. In this case the task processing time is

$$ \tau_{ij} = \max\{\, \tau^{\min}_{ij},\; V / r^{\max}_{ij} \,\} \quad (4) $$

where τ^min_ij is the minimum processing time of task i in continuous unit j and r^max_ij is the maximum processing rate of continuous unit j when used for performing task i.
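The batch-size rule and the processing-time calculation of eq 4 amount to the following small sketch (hypothetical names; quantities in consistent units):

def batch_size_and_remainder(required, unit_capacity):
    """Process as much as the selected unit allows; the remainder becomes a new
    demand inserted into the due-date list and scheduled recursively."""
    processed = min(required, unit_capacity)
    remainder = required - processed        # e.g. 100 t demand, 60 t unit -> 40 t left
    return processed, remainder

def continuous_processing_time(V, tau_min, r_max):
    """Eq 4: run at least tau_min, or long enough to process V at the maximum rate."""
    return max(tau_min, V / r_max)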

The sequence dependent changeovers are accommodated in the following way. When scheduling a new task, we try to select an equipment unit with an availability interval greater than the sum of the task processing time and the required changeover times. For example, suppose that a light dye finishes processing on some unit at 6 p.m., a gray dye begins to be processed at 8 p.m., and we want to process a black dye in between. The processing time of the black dye is 1 h 45 min, and the cleaning required to change to gray dye processing is 30 min. Thus it is not possible to process the black dye in the interval from 6 to 8 p.m. If the cleaning time required to change from black to gray dye were 10 min, then we could schedule the black dye for processing, but it would have to end no later than 7:50 p.m., which means that it would have to start no later than 6:05 p.m.
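The feasibility check behind this dye example reduces to a one-line calculation; the helper below is hypothetical, with times given in hours from midnight:

def latest_feasible_start(free_start, free_end, proc_time, clean_before, clean_after):
    """Latest start time at which the task plus its changeovers still fits in the
    free window, or None if it does not fit at all."""
    if clean_before + proc_time + clean_after > free_end - free_start:
        return None
    return free_end - clean_after - proc_time

# Black dye between the light dye (ends 18:00) and the gray dye (starts 20:00):
print(latest_feasible_start(18.0, 20.0, 1.75, 0.0, 0.5))        # None: 2 h 15 min > 2 h
print(latest_feasible_start(18.0, 20.0, 1.75, 0.0, 10 / 60))    # 18.083..., i.e. 6:05 p.m.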

5. Global Optimization Algorithm (GOA)

The key component of the BH framework is the randomized heuristic function. This function corresponds to the objective function in the global optimization. Once the randomized heuristic function is defined, we may optimize its parameters using the Bayesian global optimization method (Mockus (1989)), and as a byproduct we obtain the optimum profit. For purposes of this paper we employ a parametric linear function of the material requirements planning (MRP) heuristic as the randomized heuristic. As noted in section 3, the MRP heuristic is constructed so that all constraints except those limiting the amount of material in a given state are satisfied. Violations of this family of constraints are penalized in the objective function. For the numerical cases reported in this paper, the penalty value 500 was used throughout. In using the MRP heuristic we try to schedule tasks as near to the due date as possible. In section 4, we described the modification of the heuristic to model batch and continuous tasks.

If d_i is some decision (select a product, a suitable equipment unit, or a task start), then h(d_i) is the heuristic function. The function r(x, h(d_i)) is a randomized heuristic function which gives the probability of the decision d_i; x is the randomization parameter. The function r(...) used in this paper has the form

$$ r(x, h(d_i)) = x a_0 + (1 - x)\, a_1 h(d_i) $$

where a_0 = 1/M, a_1 = 1/∑_{i=1}^{M} h(d_i), and M is the number of possible decisions. The heuristic rules used in this paper are defined in section 4.

Thus the GOA algorithm can be represented by the following simple steps.

Step 1. Fix parameter x using the global Bayesian method (Mockus (1989)). Any global optimization method which generates the vector x uniformly distributed in the unit cube forces the asymptotic convergence of the BH approach with probability one. The asymptotic convergence of the proposed algorithm can be easily understood from the fact that all possible decisions are enumerated asymptotically with probability one when the parameters are uniformly distributed. For a complete convergence proof, see Mockus et al. (1997). We have to note, however, that parameter x is fixed based on the value of the profit of the previously generated schedules. The feature of the global Bayesian method is that it fixes the next x to the value for which the probability of achieving the best profit is maximal, i.e., to a value for which it expects to get the maximal profit. Of course, it may be that this new x value does not give the best profit; thus we need to use additional iterations.


These two features were the key reasons for using the global Bayesian method. The second feature makes the search more efficient than a pure random Monte Carlo search, while the first enables a more thorough exploration of the decision space.

Step 2. Generate a schedule by using the randomized MRP heuristic (see sections 4 and 6).

Step 3. Evaluate the schedule for this parameter.

Step 4. If the schedule is feasible (there is no penalty) and the value of the best schedule so far did not increase for 10 iterations, then go to step 6.

Step 5. Go to step 1.

Step 6. Fix the binary variables W^S_ijo and W^F_ijo to the values given by the best schedule. These variables correspond to the sequencing and assignment of tasks. Substitute the values of these binary variables into the model to reduce the problem to a linear program. The solution of the linear program gives the exact starting times, batch sizes, and processing rates.

We see that the scheduler generates sequences and assignments while the LP model is used to produce the exact schedule. The LP model is derived from the NUDM model by substituting the sequencing and assignment variables which are fixed by the scheduler. This disaggregation of the scheduling problem allows us to solve the large scale MINLP problem by using a combination of a heuristic algorithm and an efficient LP solver. The key issue in using the BH approach is that together with the best schedule we also acquire the best randomization parameter x; in other words, we tailor the heuristic for a given class of problems. Initially we parameterize the heuristic, thus giving it a few extra degrees of freedom. By varying the parameters in an intelligent way, so that the expected outcome is maximized, we find the parameter values which yield the best profit.
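Steps 1-6 can be summarized by the loop below. The callables bayes_suggest, generate_schedule, evaluate, and solve_lp_with_fixed_binaries stand in for the Bayesian optimizer, the randomized MRP scheduler, the penalized objective evaluation, and the final LP step; all of these names are hypothetical placeholders rather than the authors' code:

def goa(bayes_suggest, generate_schedule, evaluate, solve_lp_with_fixed_binaries,
        patience=10, max_iters=200):
    """Sketch of the GOA: tune the randomization parameter x, keep the best
    schedule found, then fix its binaries and recover exact timings by LP."""
    best_profit, best_schedule, stalled = float("-inf"), None, 0
    history = []                                   # (x, profit) pairs observed so far
    for _ in range(max_iters):
        x = bayes_suggest(history)                 # step 1: next randomization parameter
        schedule = generate_schedule(x)            # step 2: randomized MRP descent
        profit, feasible = evaluate(schedule)      # step 3: profit minus penalties
        history.append((x, profit))
        if profit > best_profit:
            best_profit, best_schedule, stalled = profit, schedule, 0
        else:
            stalled += 1
        if feasible and stalled >= patience:       # step 4: stop criterion
            break                                  # otherwise step 5: repeat from step 1
    # step 6: fix sequencing/assignment binaries and solve the resulting LP
    return solve_lp_with_fixed_binaries(best_schedule)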

6. Schedule Generation

The process of schedule generation using the randomized heuristic is essentially the same as that described in section 4. The only difference is that instead of using the heuristic rules deterministically, we make our selections with some probability. Consider, for example, the product selection rule. Instead of selecting the product with the smallest due date, we select a product with some probability r which is a function of its due date. More specifically, the probability r may be expressed as

$$ r_s = x a_0 + (1 - x)\, a_1 \exp\{-D_s\} \quad (5) $$

where r_s is the probability of selecting product s. Here x is the parameter set by the Bayesian method in step 1, M is the number of products in the list L (or L_p*), and h(d_i) = exp{-D_s}. We selected an exponential function instead of a linear one so that the probability of selecting a product with a late due date is very small. As we see, the products with the smallest due dates are chosen with the highest probability; thus, we preserve the character of the MRP heuristic while making it random. In a similar manner we deal with the other rules.
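For completeness, a minimal Python sketch of the randomized product selection rule of eq 5 (the helper name is hypothetical; random.choices performs the weighted draw):

import math
import random

def pick_product_randomized(due_dates, x):
    """Randomized product selection (eq 5).

    due_dates: dict mapping product -> due date D_s
    x:         randomization parameter in [0, 1] set by the Bayesian method
    """
    products = list(due_dates)
    M = len(products)
    h = [math.exp(-due_dates[p]) for p in products]    # h(d) = exp(-D_s)
    a0, a1 = 1.0 / M, 1.0 / sum(h)
    weights = [x * a0 + (1.0 - x) * a1 * h_i for h_i in h]
    # x near 1 gives an almost uniform (pure random) draw; x near 0 follows the MRP rule
    return random.choices(products, weights=weights)[0]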

The generated schedule may be infeasible because of violations of the state capacity constraints. It is possible that the state capacity is exceeded or that a negative amount of material exists in a given state. These situations are handled by penalizing the deviations from the minimal and the maximal state capacity values and adding this penalty to the objective function. The objective function is readily evaluated by calculating the storage and utility costs and the profit gained by satisfying demands, less the raw material costs.

7. Differences of the Bayesian Heuristic Approach from Simulated Annealing and Genetic Algorithms

The Bayesian approach has a conceptual similarity with simulated annealing and genetic algorithms, since all three employ stochastic decision elements. However, there are several key differences. One key difference of the Bayesian approach from simulated annealing and genetic algorithms is learning. In simulated annealing or genetic algorithms, parameters such as the initial temperature or the mutation probability are found experimentally by studying a number of scheduling problems. In the Bayesian case, however, these parameters are tuned for each problem class and do not need to be fixed for all classes. Thus, in contrast to simulated annealing and genetic algorithms, the Bayesian approach allows learning to occur for the problem at hand. In practice, learning is achieved by self-tuning of the randomization parameter (x from eq 5 in our case). This parameter is automatically tuned by the global Bayesian method. The optimal value of the randomization parameter provides a measure of the degree of learning achieved and of the quality of the randomized heuristic (the MRP heuristic in our case). A value of the randomization parameter close to 1 indicates that the heuristic is not effective and that little learning was achieved: the algorithm effectively proceeds as a random search. On the other hand, a value close to 0 tells us that, as a result of learning, the search was strongly influenced by the heuristic.

A further drawback of simulated annealing and genetic algorithms is that when applied to discrete or mixed integer problems they generate a large number of infeasible solutions. Thus a large percentage of the computation time is wasted in the generation of these infeasible solutions. A similar situation was observed when trying to employ a simulated annealing heuristic within the Bayesian heuristic framework (Mockus and Reklaitis (1996)). Because of this, only small size problems could be solved. When using the MRP heuristic within the Bayesian framework, we overcame this problem, and typically 70% of the generated schedules were feasible. This can be explained by the fact that the MRP heuristic fits the given class of scheduling problems well and generates mostly realizable schedules.

8. Test Problems

To demonstrate the effectiveness of the proposed approach, we report test results for a series of nine test examples involving both batch and continuous operations. Some of the key characteristics and size parameters of these problems are summarized in Table 1. The complete description of the numerical parameters of these problems can be accessed by the reader electronically in the form of RCSPec language files ([email protected]).

Table 1. Summary of Key Parameters

problem    batch or continuous    changeovers    no. of tasks    no. of equipment units    horizon (h)    no. of products
batch1     batch                  no             3               3                         11             2
batch4     batch                  no             8               6                         8              4
batch3     batch                  no             6               4                         9              3
exIII      batch                  no             12              3                         270            4
cpctsp1    batch                  yes            3               1                         7              3
cpctsp2    batch                  yes            5               1                         11             5
cpctsp3    batch                  yes            7               1                         15             7
cipac2     continuous             yes            10              5                         16             4
cipac1     continuous             no             10              5                         16             4

Table 2. B&B and GOA Comparison for Examples of Batch and Continuous Processes GOA B&B problem batch1 batch4 batch3 exIII cpctsp1 cpctsp2 cpctsp3 cipac2 cipac1

profit ($)

time (s)

profit (%)

time no. of (s) replicates

3230 6 100 0.46 60534 1.7 99.99 2.90 105756 8.6 99.97 3.85 1400 66 100 3.65 4724 0.6 100 0.06 8122 188.5 100 0.20 12015 / 99.60 3.01 20879 / 100.54 31.8 22800 824.6 107.77 27.3

12 13 20 15 11 11 16 29 29

BH profit (%)

randomization param

77.91 93.79 99.18 100 100 100 99.6 100.39 107.77

0.11 0.43 0.03 0.02 0.01 0.04 0.09 0.15 0.06

RCSPec is a scheduling language, developed by Zentner et al. (1998), which can be used to represent all process scheduling problems by their task, unit, and resource structures. It also contains a wide variety of constructs within the task, unit, and resource descriptors with which to unambiguously express the information associated with any particular problem. The language provides a formulation independent description and was developed so as to allow researchers to easily compare solution methodologies on a well-defined set of test problems. This is especially important in the case of scheduling problems because the solutions of these types of NP-complete problems can be very sensitive to slight changes in the problem parameters or their interpretation.

9. Results

Solution of the above test problem set was undertaken using the proposed new algorithm and using a commercial MILP solver applied to the uniform discretization formulation. The latter involves the usual branch and bound enumeration strategy with solution of relaxed linear programming subproblems at the nodes of the enumeration tree. The results of the testing are summarized in Table 2. Computational experiments were performed on an HP 9000/755 workstation using the commercially available CPLEX solver. The column “BH profit” shows the profit value obtained before performing the LP solution phase, i.e., the profit value given only by the statistical part of the algorithm. As we see, the LP solution does not significantly increase the value of the profit. However, we cannot draw definite conclusions about the need for the LP solution step from this case study alone.

We have to note that while for batch processes (batch1, batch3, batch4, exIII, cpctsp1, cpctsp2, cpctsp3) GOA gives solutions slightly worse than the branch and bound approach, for continuous processes (cipac1, cipac2) GOA gives better results than UDM. This can be explained by the inherently discrete nature of the UDM, which is not suited for continuous tasks.

The time variables can have only discrete values in the UDM case, while the processing time of continuous tasks assumes continuous values and thus cannot be represented with discrete variables. Furthermore, NUDM also works much better than UDM for problems with sequence dependent tasks (cipac2, cpctsp1, cpctsp2, cpctsp3). This is due to the fact that NUDM handles sequence-dependent changeovers in a more efficient way than the conventional state-task network representation used in the version of the UDM formulation available for our work. This is in agreement with the analysis reported by Gooding (1994), who has shown that this form of the sequence dependent task representation, in general, results in a tighter formulation than the standard UDM representation. It is worth noting that with an increasing number of sequence dependent tasks (cpctsp1, cpctsp2, cpctsp3) the solution time of B&B grows exponentially while the corresponding time of GOA grows only polynomially. The asterisk in Table 2 means that the optimal solutions for the examples cpctsp3 and cipac2 were not reached by B&B since the solution time was unreasonably large; the B&B tree was terminated after 20 000 nodes. The computational time for the batch4 example is worse in the GOA case due to the fact that the MRP heuristic is not very well suited for such processes (the batch4 example contains zero wait states). It is possible to modify GOA to account for the zero wait states by aggregating the two tasks connected by a zero wait state.

Another interesting feature is the correlation between the quality of the heuristic and the randomization parameter, which we mentioned in section 7. The numerical data confirm that a heuristic is well suited for a given class of problems if the randomization parameter is close to 0 or, in other words, that self-tuning (learning) was effective.

10. Conclusions

Part 1 of this series (Mockus and Reklaitis (1999)) presents a general formulation of the short-term scheduling problem for complex multipurpose batch and continuous operations. However, the size of the resulting MINLP raised serious concerns regarding the practical applicability of the NUDM. In this paper, we provide an algorithm to overcome these limitations. As we mentioned above, this algorithm is based on the BH approach to discrete optimization, and a clever choice of heuristic is a key issue. If the heuristic is a very general one (such as simulated annealing in Mockus and Reklaitis (1996)), then the class of problems solved is large, as is the computational time. If the heuristic is a specialized one (such as MRP in our case), then the class of problems solved is smaller, as is the computational time. Of course, problems which do not fit a given heuristic are also solved, but the computational effort required can be unpredictably high (batch4 is a good example). Thus one of the directions for future work may well be the development of an expert system which recognizes the structure of the problem and suggests a heuristic to solve it.

Usually processing data are uncertain (task processing times fluctuate around their mean values due to the quality of the feed, due dates are not well known in advance, etc.). The BH approach is a statistical framework and thus theoretically accommodates stochastic data.


Thus another promising direction for future work could be the application of the BH approach to scheduling problems with uncertainty. NUDM allows one to model various uncertainties in time and size parameters without modification since time and size are continuous variables (for the UDM, uncertainty in processing times requires major modifications). The key problem of the B&B algorithm is good estimation of upper and lower bounds. The BH approach can be used to provide upper bounds, thus incorporating statistical and deterministic techniques together within one framework. Finally, the results suggest that the BH approach combined with the nonuniform time discretization formulation shows promise for the solution of batch and continuous scheduling problems.

Literature Cited

Gooding, W. B. Specially structured formulations and solution methods for optimization problems important to process scheduling. Ph.D. thesis, Purdue University, 1994.

Mockus, J. Bayesian approach to global optimization: theory and applications; Kluwer Academic Publishers: Dordrecht, The Netherlands, 1989.

Mockus, L.; Reklaitis, G. V. A new global optimization algorithm for batch process scheduling. In State of the Art in Global Optimization: Computational Methods and Applications; Floudas, C. A., Pardalos, P. M., Eds.; Nonconvex Optimization and Its Applications, Vol. 7; Kluwer Academic Publishers: Dordrecht, The Netherlands, 1996; pp 521-538.

Mockus, L.; Reklaitis, G. V. Continuous Time Representation Approach to Batch and Continuous Process Scheduling. 1. MINLP Formulation. Ind. Eng. Chem. Res. 1999, 38, 197-203.

Mockus, J.; Eddy, W.; Mockus, A.; Mockus, L.; Reklaitis, G. Bayesian heuristic approach to discrete optimization and global optimization; Kluwer Academic Publishers: Dordrecht, The Netherlands, 1997.

Orlicky, J. Material Requirement Planning; McGraw-Hill: New York, 1975.

Zentner, M. G.; Elkamel, A.; Pekny, J. F.; Reklaitis, G. V. A language for describing process scheduling problems. Comput. Chem. Eng. 1998, 22, 129-145.

Received for review May 5, 1997
Revised manuscript received October 16, 1998
Accepted October 19, 1998

IE970312J