Applied Mathematical Modelling 37 (2013) 2008–2015


Open shop scheduling problem to minimize makespan with release dates

Danyu Bai a, Lixin Tang b,*

a School of Economics & Management, Shenyang University of Chemical Technology, Shenyang 110142, PR China
b Liaoning Key Laboratory of Manufacturing System and Logistics, The Logistics Institute, Northeastern University, Shenyang 110819, PR China

Article info

Article history: Received 26 August 2009; Received in revised form 14 April 2012; Accepted 21 April 2012; Available online 11 May 2012

Keywords: Scheduling; Open shop problem; Makespan; Performance analysis of algorithm

Abstract

The scheduling problem of an open shop to minimize makespan with release dates is investigated in this paper. Unlike most existing studies, which aim to confirm the conjecture that the tight worst-case performance ratio of the Dense Schedule (DS) is 2 − 1/m, where m is the number of machines, this paper proves the asymptotic optimality of the DS as the problem scale tends to infinity. Furthermore, an on-line heuristic based on the DS, Dynamic Shortest Processing Time-Dense Schedule, is presented to deal with the off-line and on-line versions of this problem. At the end of the paper, an asymptotically optimal lower bound is provided, and the results of numerical experiments show the effectiveness of the heuristic. © 2012 Elsevier Inc. All rights reserved.

1. Introduction

In an open shop, a set of jobs has to be processed on m machines. Every job consists of m operations, each of which must be processed on a different machine for a given processing time. The operations of each job can be processed in any order. At any time, at most one operation can be processed on each machine, and at most one operation of each job can be processed. Moreover, every job has a release date, before which no operation of that job can be processed. No preemption or delay is allowed in the processing of any operation, and the jobs are independent. The objective is to find a schedule that minimizes the makespan Cmax, that is, the maximal completion time among the n jobs. For this objective, the decision maker must simultaneously consider the processing sequences of the jobs and the routes that the jobs take through the machines, which increases the complexity of the problem.

For convenience of study, most research focuses on the problem under the assumption that all jobs are available at time zero. With the standard scheduling notation of Graham et al. [1], this problem can be described as Om||Cmax, where m is the number of machines. For problem O2||Cmax, Pinedo [2] presented a priority rule, Longest Alternate Processing Time first (LAPT), with which an optimal schedule can be found in polynomial time. The NP-hardness of problem O3||Cmax in the ordinary sense was shown by Gonzalez and Sahni [3]. In 1993, Lawler et al. [4] proved that problem O||Cmax, with an arbitrary number of machines, is strongly NP-hard, which means that an optimal solution cannot be obtained in polynomial time unless P = NP. For small-scale problems, branch-and-bound algorithms have been proposed [5,6]. For large-scale problems, constructing heuristic algorithms is an effective way to obtain near-optimal solutions. Bárány and Fiala [7] showed that any Dense Schedule (DS) is at most twice the optimal makespan. Chen and Strusevich [8] conjectured that the tight worst-case performance ratio of the DS is 2 − 1/m for problem Om||Cmax, and proved the conjecture for m = 3.

Footnote: This research work was carried out while the first author was a doctoral candidate at Northeastern University; the revision of this paper was undertaken after the first author became a lecturer at Shenyang University of Chemical Technology.
* Corresponding author. E-mail address: [email protected] (L. Tang).



When jobs are pre-ordered, the DS is improved by Strusevich [9], and the resulting algorithm is at most 2 − 1/(m + 1) times the optimal solution for problem Om||Cmax. It has been pointed out that there is no polynomial-time approximation algorithm with a worst-case performance ratio strictly less than 5/4 unless P = NP [10]. A PTAS (polynomial time approximation scheme) was given by Sevastianov and Woeginger [11] for the problem. An overview of open shop scheduling problems can be found in Chen et al. [12] or Pinedo [2].

Since in practice a job is available only after its arrival, the open shop makespan problem with release dates is closer to practical production. If each job has a release date rj, the problem can be described as Om|rj|Cmax. In 1981, Lawler et al. [13] pointed out that problem O2|rj|Cmax is strongly NP-hard. Chen [14] proved that the worst-case performance ratio of the DS is 3/2 for problem O2|rj|Cmax, and conjectured that the ratio is bounded by 2 − 1/m when m is arbitrary. For problem O3|rj|Cmax, Chen et al. [15] showed that the worst-case performance ratio of the DS is bounded above by two, and proved that the ratio can reach 5/3 for some special cases. In the on-line version of problem Om|rj|Cmax, jobs arrive over time: the data of the jobs (such as the release dates, the processing times and the number of jobs to be scheduled) are unknown until the jobs arrive, and decisions are made only after their arrival. An algorithm that can work in such an on-line environment is called an on-line algorithm. For a survey of on-line scheduling problems, the reader may refer to Sgall [16]. Chen et al. [17] generalized the DS to the on-line version of problem Om|rj|Cmax, and proved the conjecture that the competitive ratio of the DS is bounded by 2 − 1/m for the two-machine case.

Traditionally, the performance of an on-line algorithm is evaluated by its worst-case competitive ratio, that is, the ratio of the objective value obtained by the on-line algorithm to the optimal value obtained by a hypothetical off-line algorithm that knows the entire input in advance. However, the worst-case behaviour of an algorithm rarely occurs, except in some small instances. In practical scheduling environments, for example industrial production, there are usually thousands of jobs to be processed. It is therefore more suitable to use an asymptotic competitive ratio to estimate the behaviour of an algorithm when the problem size is large. An algorithm with an asymptotic competitive ratio of one is asymptotically optimal; it works as well as the optimal schedule when the number of jobs goes to infinity. In this paper, we prove that the DS is asymptotically optimal for problem Om|rj|Cmax when the problem size is sufficiently large. Moreover, we present an on-line heuristic, Dynamic Shortest Processing Time-Dense Schedule (DSPT-DS), to deal with the off-line and on-line versions of the problem. At the end of the paper, we provide an asymptotically optimal lower bound and conduct numerical experiments to verify the effectiveness of the heuristic.

The remainder of the paper is organized as follows. The formulation of the problem is given in Section 2. The asymptotic analysis of the DS is provided in Section 3. The DSPT-DS heuristic and the computational results are presented in Sections 4 and 5, and the paper is closed with conclusions in Section 6.

2. Problem specification

In an open shop scheduling problem, each job j, j = 1, 2, ..., n, with a release date rj, which is the earliest time at which the job is available (and, in an on-line environment, also the time at which the job becomes known to the system), has to be processed on each of the m machines once. The order in which job j passes through the m machines is immaterial. The processing of job j on machine i, i = 1, 2, ..., m, is denoted as operation O(i, j) with processing time p(i, j). It is assumed that the processing times are bounded by Pmax and are i.i.d. (independent and identically distributed) random variables. At any given time, each machine can handle at most one job and each job can be processed on at most one machine. Preemption is forbidden, that is, any commenced operation has to be completed without interruption, and no job is delayed during its processing on a machine. The completion time of job j on machine i is denoted by C(i, j). The objective is to find a sequence of the jobs with the given processing times on each machine that minimizes the makespan, i.e., the maximum completion time. For convenience, we refer to Om|rj|Cmax as problem P.

3. Asymptotic optimality of DS for problem P

In a DS, if any machine is idle, then there is no operation available for it at that moment. The idea of the DS can easily be translated into a greedy algorithm: at time t, t ≥ 0, when machine i, 1 ≤ i ≤ m, becomes available, select an available operation, say O(i, j), 1 ≤ j ≤ n, to process on this machine, and prohibit the processing of O(i', j), 1 ≤ i' ≤ m and i' ≠ i, on machine i' during the time interval [t, t + p(i, j)]. In this section, we discuss the asymptotic optimality of the DS with release dates. To describe the DS when jobs arrive over time, some concepts and notations are introduced. Let R(i, j) and C(i, j) be the starting time and the finish time of operation O(i, j), respectively.

Definition [15]. An idle interval [b, e) on machine i, i = 1, 2, ..., m, for a given schedule S is called reasonable if one of the following conditions holds for job j, j = 1, 2, ..., n: (1) job j has been finished on machine i before time b, i.e., C(i, j) ≤ b; or (2) job j is being processed on a machine other than i at any time t in [b, e), i.e.,


$[b, e) \subseteq \bigcup_{i' \neq i} [R(i', j), C(i', j))$; or

(3) job j is released after time e, i.e., rj ≥ e.

A schedule is dense if all of its idle intervals are reasonable. It is assumed that no idle interval traverses a release date, i.e., if an idle interval [b, e) contains a release date rj with b < rj < e, then [b, e) is treated as two idle intervals, [b, rj) and [rj, e). Some properties of open shop DSes are introduced for later use.

Lemma 1 ([8]). For problem Om||Cmax, if machine i, 1 ≤ i ≤ m, is idle in the time interval [t1, t2], then after t2 the machine processes at most m − 1 jobs. □

Lemma 2 ([15]). For problem Om|rj|Cmax, let Midle(t) ≥ 1 be the number of machines that are idle at time t. Then at most m − Midle(t) jobs released before t will be processed on these idle machines after t. □

Let Ii(t1, t2) denote the total idle time on machine i, i = 1, 2, ..., m, in the time interval [t1, t2], t1 ≤ t2, and let Fi,j = {O(k, j): C(k, j) ≤ R(i, j), k ≠ i} be the set of operations of job j, j = 1, 2, ..., n, that have been finished before operation O(i, j).

Lemma 3 ([15]). For problem Om|rj|Cmax, if O(i, j) is processed in [R(i, j), C(i, j)), then:

$I_i(r_j, R(i, j)) \le \sum_{\{k:\, O(k, j) \in F_{i,j}\}} p(k, j), \quad i = 1, 2, \ldots, m.$ □

We index the jobs according to their arrival order, i.e., r1 ≤ r2 ≤ ⋯ ≤ rn. Therefore, the following lower bound for problem P can be easily obtained by observation:

$C_{LB} = \max\left\{ \max_{1 \le j \le n,\, 1 \le i \le m} \left\{ r_j + \sum_{g=j}^{n} p(i, g) \right\},\ \max_{1 \le j \le n} \left\{ r_j + \sum_{i=1}^{m} p(i, j) \right\} \right\}.$
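Since both terms of CLB are simple sums over the input data, the bound is cheap to evaluate. The following Python sketch is our own illustration of the formula (the function name and structure are not from the paper); it assumes the jobs are already indexed so that r[0] ≤ r[1] ≤ ⋯ ≤ r[n−1].

```python
def makespan_lower_bound(p, r):
    """C_LB for problem P (jobs must be indexed by nondecreasing release date).

    p[i][j] : processing time of job j on machine i
    r[j]    : release date of job j

    First term : jobs j, j+1, ..., n-1 are all released no earlier than r[j],
                 so machine i still has sum_{g >= j} p[i][g] work left after r[j].
    Second term: job j itself needs all of its m operations after r[j].
    """
    m, n = len(p), len(r)
    machine_tail = max(r[j] + sum(p[i][g] for g in range(j, n))
                       for i in range(m) for j in range(n))
    job_load = max(r[j] + sum(p[i][j] for i in range(m)) for j in range(n))
    return max(machine_tail, job_load)
```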

With the above preparations, the main theorem about the DS is stated below.

Theorem 1. Let the release dates rj, j = 1, 2, ..., n, be nonnegative random variables, and let the processing times p(i, j), j = 1, 2, ..., n, i = 1, 2, ..., m, be independent random variables with the same continuous distribution with nonzero bounded density φ(·). Then, for a sequence of randomly generated instances of problem P, with probability one, we have:

$\lim_{n \to \infty} \frac{C_{\max}(DS)}{n} = \lim_{n \to \infty} \frac{C_{\max}(S^*)}{n} = \lim_{n \to \infty} \frac{C_{LB}}{n}, \qquad (1)$

where Cmax(DS) and Cmax(S*) denote the objective values obtained by the DS and by the optimal schedule, respectively.

Proof. Without loss of generality, for the given DS instance, we assume that the schedule terminates on machine k, 1 ≤ k ≤ m, and that the last finished job is job n. Let O(k, j) be the operation on machine k immediately preceded by the last idle interval caused by waiting for arrivals, and let set Q denote the jobs that are processed after job j on machine k. First, if rj = R(k, j), the following two cases are considered.

Case 1.1. There is no idle time after operation O(k, j). With Lemma 2, we know that at most m − 1 jobs that arrived before time rj will be processed after job j on machine k. Therefore, we have:

$C(k, n) - C_{\max}(S^*) \le C(k, n) - C_{LB} \le \left( r_j + \sum_{g_1 \in Q} p(k, g_1) \right) - \left( r_j + \sum_{g_2=j}^{n} p(k, g_2) \right) = \sum_{g_1 \in Q} p(k, g_1) - \sum_{g_2=j}^{n} p(k, g_2) \le (m-1) P_{\max}, \qquad (2)$

where $P_{\max} = \max_{1 \le i \le m,\, 1 \le j \le n} p(i, j)$.

Case 1.2. Let Ik(R(k, j), C(k, n)) denote the total idle time after operation O(k, j). With Lemma 1 and inequality (2), we have:

$C(k, n) - C_{\max}(S^*) \le C(k, n) - C_{LB} \le r_j + \sum_{g_1 \in Q} p(k, g_1) + I_k(R(k, j), C(k, n)) - \left( r_j + \sum_{g_2=j}^{n} p(k, g_2) \right) = \sum_{g_1 \in Q} p(k, g_1) - \sum_{g_2=j}^{n} p(k, g_2) + I_k(R(k, j), C(k, n)) \le 2(m-1) P_{\max}. \qquad (3)$


Second, if rj < R(k, j), the following two cases are considered.

Case 2.1. There is no idle time after operation O(k, j). With Lemma 3 and inequality (2), we have:

$C(k, n) - C_{\max}(S^*) \le C(k, n) - C_{LB} \le \left( R(k, j) + \sum_{g_1 \in Q} p(k, g_1) \right) - \left( r_j + \sum_{g_2=j}^{n} p(k, g_2) \right) = (R(k, j) - r_j) + \sum_{g_1 \in Q} p(k, g_1) - \sum_{g_2=j}^{n} p(k, g_2) \le 2(m-1) P_{\max}. \qquad (4)$

Case 2.2. Let Ik(R(k, j), C(k, n)) denote the total idle time after operation O(k, j). With Lemma 1 and inequality (4), we have:

$C(k, n) - C_{\max}(S^*) \le C(k, n) - C_{LB} \le R(k, j) + \sum_{g_1 \in Q} p(k, g_1) + I_k(R(k, j), C(k, n)) - \left( r_j + \sum_{g_2=j}^{n} p(k, g_2) \right) = (R(k, j) - r_j) + \sum_{g_1 \in Q} p(k, g_1) - \sum_{g_2=j}^{n} p(k, g_2) + I_k(R(k, j), C(k, n)) \le 3(m-1) P_{\max}. \qquad (5)$

Combining (2)–(5), we have:

$C_{\max}(DS) - C_{\max}(S^*) \le C_{\max}(DS) - C_{LB} \le 3(m-1) P_{\max}. \qquad (6)$

Dividing both sides of inequality (6) by n and taking the limit, we have:

$0 \le \lim_{n \to \infty} \frac{C_{\max}(DS) - C_{\max}(S^*)}{n} \le \lim_{n \to \infty} \frac{C_{\max}(DS) - C_{LB}}{n} \le \lim_{n \to \infty} \frac{3(m-1) P_{\max}}{n} = 0. \qquad (7)$

By standard limit arguments, rearranging inequality (7) gives the result of the theorem. □

For the same instance, the greedy algorithm based on the DS may generate different schedules. An example is provided as follows.

Example 1. There are four jobs and three machines. The processing times of job Jj, j = 1, 2, 3, 4, on machine Mi, i = 1, 2, 3, and the release dates rj are given in the table below.

        J1    J2    J3    J4
M1      3     1     2     1
M2      2     4     4     4
M3      3     2     5     6
rj      5     1     3     8

According to the DS, we schedule the operations with the shortest processing time first and obtain a schedule S1, which is shown in Fig. 1. Another DS, S2, is obtained by scheduling the operations in a First Come, First Served manner, as shown in Fig. 2. Obviously, S1 and S2 are both DSes, so sometimes Cmax(S1) ≠ Cmax(S2) for the same instance, where Cmax(S1) and Cmax(S2) are the makespan values obtained by schedules S1 and S2, respectively. Does this phenomenon influence the asymptotic optimality of the DS? The question is answered in the following theorem.

Theorem 2. The sequence of operations in a DS does not influence the asymptotic optimality.

Proof. Consider two different DSes, S1 and S2, generated from the same instance. Without loss of generality, it is assumed that the makespans of the two schedules, Cmax(S1) and Cmax(S2), are not equal, that schedule S1 terminates on machine k1, and that schedule S2 terminates on machine k2, where k1 ≠ k2, 1 ≤ k1 ≤ m, 1 ≤ k2 ≤ m.

Fig. 1. Schedule S1 of the example.



Fig. 2. Schedule S2 of the example.

For schedule S1, let O(k1, j), 1 ≤ j ≤ n, be the operation on machine k1 immediately preceded by the last idle interval caused by waiting for arrivals, and let rj be the release date of job j. Let set Q1 denote the jobs that are processed after job j on machine k1. Obviously, the jobs to be processed on machine k1 after job j consist of two parts: jobs whose release dates are not earlier than that of job j, and jobs that are processed on other machines during the idle interval (otherwise, the idle interval would shorten or disappear). With Lemma 2, we know that at most m − 1 jobs that arrived before time rj will be processed after job j in set Q1 on machine k1. With inequality (5), we have:

$C_{\max}(S_1) = R(k_1, j) + \sum_{h_1 \in Q_1} p(k_1, h_1) + I_{k_1}(R(k_1, j), C(k_1, n)) \le r_j + \sum_{h_1=j}^{n} p(k_1, h_1) + 3(m-1) P_{\max}. \qquad (7)$

In schedule S2, let set Q2 denote the jobs that are processed after job j on machine k2. Obviously, the jobs whose release dates are not earlier than that of job j cannot be processed before time rj. Therefore, we have:

$C_{\max}(S_2) = R(k_2, j) + \sum_{h_2 \in Q_2} p(k_2, h_2) + I_{k_2}(R(k_2, j), C(k_2, n)) \ge r_j + \sum_{h_2 \in Q_2} p(k_1, h_2) \ge r_j + \sum_{h_2=j}^{n} p(k_1, h_2). \qquad (8)$

With the inequalities (7) and (8), we have:

$0 \le |C_{\max}(S_1) - C_{\max}(S_2)| \le 3(m-1) P_{\max}. \qquad (9)$

Dividing both sides of inequality (9) by n and taking the limit, we have:

$0 \le \lim_{n \to \infty} \frac{|C_{\max}(S_1) - C_{\max}(S_2)|}{n} \le \lim_{n \to \infty} \frac{3(m-1) P_{\max}}{n} = 0. \qquad (10)$

Rearranging inequality (10), we deduce:

$\lim_{n \to \infty} \frac{C_{\max}(S_1)}{n} = \lim_{n \to \infty} \frac{C_{\max}(S_2)}{n},$

which completes the proof. □

4. DSPT-DS heuristic for problem P

According to Lemma 3, the idle time in problem P is mainly caused by the arriving jobs. Since the sequence of operations in a DS does not influence the asymptotic optimality (Theorem 2), we may move available operations with short processing times forward to reduce the waiting time of subsequent arrivals. Based on this idea, an on-line heuristic, DSPT-DS, is constructed. Let matrix B = (O(i, j)), i = 1, ..., m, j = 1, ..., n, denote the operations that are available at time t, t ≥ 0, and let R(i, j) be the starting time of operation O(i, j). The heuristic is described as follows; a small simulation sketch is given after the steps.

4.1. DSPT-DS heuristic

Step 1. At time t, t ≥ 0, process the operation with the smallest processing time, say O(i1, j1), among all the available ones in matrix B. If several operations tie, give preference to the one with the smallest index. Update the starting times of the operations in the same column and row as O(i1, j1) to t + p(i1, j1) in matrix B, and remove operation O(i1, j1) from matrix B.
Step 2. If some jobs arrive, go to Step 3; if matrix B becomes empty, go to Step 4.
Step 3. Sort the operations of the arrivals into matrix B, and update the starting time of each new operation to the largest starting time of its row in matrix B. Then go to Step 1.
Step 4. Let the machines remain idle until a job arrives, and go to Step 3. If the scheduling is completed, terminate the program.
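The following Python sketch illustrates the rule behind Steps 1–4 under two simplifying assumptions of ours: all processing times and release dates are integers, and the matrix-B bookkeeping is replaced by direct availability checks at each unit time step, so it is a simulation of the same greedy idea rather than the authors' implementation. A priority argument lets the same routine build either the SPT-first dense schedule used by DSPT-DS or a First Come, First Served dense schedule, which is exactly the contrast of Example 1.

```python
def dense_schedule_makespan(p, r, priority):
    """Build one dense schedule greedily for integer data and return its makespan.

    p[i][j] : processing time of job j on machine i
    r[j]    : release date of job j
    priority(i, j, t) : sort key; the available operation with the smallest key
                        is started first (DSPT-DS uses processing time, then index)
    """
    m, n = len(p), len(r)
    todo = [[True] * n for _ in range(m)]   # True while O(i, j) is unscheduled
    machine_busy_until = [0] * m
    job_busy_until = [0] * n
    left, makespan, t = m * n, 0, 0
    while left:
        for i in range(m):
            if machine_busy_until[i] > t:
                continue                     # machine i is still processing
            avail = [j for j in range(n)
                     if todo[i][j] and r[j] <= t and job_busy_until[j] <= t]
            if not avail:
                continue                     # dense: idle only if nothing is available
            j = min(avail, key=lambda jj: priority(i, jj, t))
            todo[i][j] = False
            finish = t + p[i][j]
            machine_busy_until[i] = job_busy_until[j] = finish
            makespan = max(makespan, finish)
            left -= 1
        t += 1                               # all events fall on integer times
    return makespan


# Instance of Example 1 (3 machines, 4 jobs).
p = [[3, 1, 2, 1],   # machine M1
     [2, 4, 4, 4],   # machine M2
     [3, 2, 5, 6]]   # machine M3
r = [5, 1, 3, 8]

spt = lambda i, j, t: (p[i][j], j)   # DSPT-DS rule: shortest processing time first
fcfs = lambda i, j, t: (r[j], j)     # First Come, First Served variant
print("SPT-based DS makespan :", dense_schedule_makespan(p, r, spt))
print("FCFS-based DS makespan:", dense_schedule_makespan(p, r, fcfs))
```

The two priority rules may return different makespans for the same instance, mirroring schedules S1 and S2 of Example 1; Theorem 2 guarantees that such differences are bounded by 3(m − 1)Pmax and therefore vanish after dividing by n.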


5. Computational results

To study the effectiveness of the DSPT-DS heuristic, a series of computational experiments was designed. The main purpose of the experiments is to reveal the asymptotic optimality of the heuristic. Two trials are conducted: Trial one for n > m and Trial two for n ≤ m. In Trial one, the combinations of 5, 10 and 20 machines with 20, 50 and 100 jobs are tested. In Trial two, the combinations of 20 jobs with 20 machines, 20 and 50 jobs with 50 machines, and 20, 50 and 70 jobs with 70 machines are tested. The processing times are generated in two ways: first, from a discrete uniform distribution on [1, 10]; second, by rounding the values y^0.5, which we call discrete exponential, where y is a random variable on [1, 100]. For example, if y = 50, then [y^0.5] = [7.0711] = 7, where [·] denotes rounding. The release dates are drawn from a discrete uniform distribution on [1, Rt·n], where n is the number of jobs and Rt is a multiplier taking the values 1 and 5. For example, if n = 50 and Rt = 5, then the release dates are distributed on [1, 5 × 50] = [1, 250]. These parameters are designed to show whether the trend of the objective values is independent of the inputs. (A small sketch of this instance generator is given after Fig. 4.)

As problem P is strongly NP-hard, it is usual to use a lower bound as a substitute for the optimal schedule. To simplify the calculation and speed up the convergence of the ratios, we calculate the lower bound value C'LB as follows. On machine k, we select a job, say job j, 1 ≤ j ≤ n, whose operation on that machine is the last operation processed exactly at its release date, and let set Q denote the jobs that are processed after job j on machine k. Therefore, we have:

$C'_{LB} = \max\left\{ \max_{1 \le k \le m} \left\{ r_j + \sum_{g \in Q} p(k, g) \right\},\ \max_{1 \le j \le n} \left\{ r_j + \sum_{k=1}^{m} p(k, j) \right\} \right\}.$
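Because job j and set Q on each machine depend on the schedule that has actually been built, C'LB is evaluated after a heuristic run. The Python sketch below is our own reading of this definition, not code from the paper: it takes the start times R[i][j] of a finished schedule, picks on each machine the last operation started exactly at its release date, and simply skips machines on which no such operation exists (an assumption made here for robustness).

```python
def refined_lower_bound(p, r, R):
    """C'_LB of Section 5, computed from the start times R[i][j] of a schedule.

    On each machine k, pick the job j whose operation O(k, j) is the last one
    started exactly at its release date (R[k][j] == r[j]); Q is the set of jobs
    processed after job j on machine k.
    """
    m, n = len(p), len(r)
    first_term = 0
    for k in range(m):
        started_at_release = [j for j in range(n) if R[k][j] == r[j]]
        if not started_at_release:
            continue                                           # sketch assumption
        j = max(started_at_release, key=lambda jj: R[k][jj])   # last such operation
        Q = [g for g in range(n) if R[k][g] > R[k][j]]         # jobs after job j on k
        first_term = max(first_term, r[j] + sum(p[k][g] for g in Q))
    second_term = max(r[j] + sum(p[k][j] for k in range(m)) for j in range(n))
    return max(first_term, second_term)
```

Inequality (11) below bounds the gap C'LB − CLB by (m − 1)Pmax, which underlies the limit equality that follows.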

With Lemma 2, we know that machine k will be idle if job j is not scheduled at time rj, in which case at most m − 1 jobs released before rj are being processed on machines k', 1 ≤ k' ≤ m, k' ≠ k, and these jobs will be processed after time C(k, j) on machine k. Therefore, for a given operation sequence on machine k, the difference between C'LB and CLB satisfies:

$C'_{LB} - C_{LB} \le \max_{1 \le k \le m} \left\{ r_j + \sum_{g_1 \in Q} p(k, g_1) \right\} - \max_{1 \le j \le n,\, 1 \le k \le m} \left\{ r_j + \sum_{g_2=j}^{n} p(k, g_2) \right\} \le \sum_{g_1 \in Q} p(k, g_1) - \sum_{g_2=j}^{n} p(k, g_2) \le (m-1) P_{\max}. \qquad (11)$

Dividing both sides of the above inequality by n and taking the limit, we have:

$\lim_{n \to \infty} \frac{C'_{LB}}{n} = \lim_{n \to \infty} \frac{C_{LB}}{n}.$

Since the asymptotic optimality of CLB is shown by Theorem 1, C'LB is obviously also asymptotically optimal.

5.1. Trial one for n > m

In Table 1, we compare the objective values Cmax(DS') obtained by the DSPT-DS heuristic with C'LB for discrete uniform and discrete exponential processing times, respectively. The data given in the table are the ratios of the heuristic objective values to their lower bounds. Ten different random trials were performed for each combination (5, 10 and 20 machines combined with 20, 50 and 100 jobs), and the averages are shown in the table. The numerical results in Table 1 clearly show that, for a fixed number of machines, the ratios approach one as the number of jobs grows, which verifies that the objective and the lower bound both approach the optimal solution for sufficiently large problems and indirectly confirms the result of Theorem 1. An example with 20 machines and Rt = 1 can be found in Fig. 3. For a fixed number of jobs, the ratios become smaller as the number of machines decreases, which means that the asymptotic behaviour of the DS also depends on the number of machines: the larger the number of machines, the more idle time there is, which worsens the ratio of the objective to its lower bound. The example with n = 100 and Rt = 5 is given in Fig. 4.

Table 1
Computational results for n > m.

                  Discrete uniform                 Discrete exponential
                  m = 5     m = 10    m = 20       m = 5     m = 10    m = 20
n = 20   Rt = 1   1.04204   1.06328   1.09878      1.02270   1.03558   1.10842
         Rt = 5   1.07189   1.08079   1.15059      1.05582   1.09318   1.14288
n = 50   Rt = 1   1.00843   1.01474   1.02663      1.00509   1.01120   1.02849
         Rt = 5   1.02089   1.02181   1.03703      1.01171   1.01556   1.03232
n = 100  Rt = 1   1.00246   1.00551   1.00871      1.00115   1.00714   1.00829
         Rt = 5   1.01042   1.01156   1.01937      1.00947   1.01086   1.01956


Fig. 3. The example for m = 20 and Rt = 1 (n > m).

Fig. 4. The example for n = 100 and Rt = 5 (n > m).
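For readers who want to reproduce the experimental setup described at the beginning of this section, the sketch below implements the instance-generation scheme in Python; the function names are ours, and the discrete exponential draw treats y as a uniform integer on [1, 100] with standard rounding, which is one reading of the text.

```python
import random

def uniform_processing_times(m, n, rng=random):
    """Discrete uniform processing times on [1, 10]."""
    return [[rng.randint(1, 10) for _ in range(n)] for _ in range(m)]

def discrete_exponential_processing_times(m, n, rng=random):
    """'Discrete exponential' times: round(y**0.5) with y drawn from [1, 100]
    (taken here as a uniform integer, an assumption of this sketch);
    e.g. y = 50 gives round(7.0711) = 7, as in the paper's example."""
    return [[round(rng.randint(1, 100) ** 0.5) for _ in range(n)] for _ in range(m)]

def release_dates(n, Rt, rng=random):
    """Release dates drawn from a discrete uniform distribution on [1, Rt*n]."""
    return [rng.randint(1, Rt * n) for _ in range(n)]

# Example: one instance with n = 50 jobs, m = 10 machines and Rt = 5,
# i.e. release dates on [1, 250] as in the text.
p = uniform_processing_times(10, 50)
r = release_dates(50, Rt=5)
```

The resulting matrix p and vector r can be fed to the dense-schedule sketch of Section 4; before evaluating the lower bound of Section 3, the jobs should be re-indexed by nondecreasing release date.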

Table 2
Computational results for n ≤ m.

                  Discrete uniform                 Discrete exponential
                  m = 20    m = 50    m = 70       m = 20    m = 50    m = 70
n = 20   Rt = 1   1.10293   1.07337   1.06778      1.09655   1.07336   1.06743
         Rt = 5   1.15165   1.10841   1.08878      1.15386   1.10535   1.09681
n = 50   Rt = 1   –         1.10531   1.07409      –         1.08190   1.07817
         Rt = 5   –         1.12546   1.10801      –         1.12729   1.11199
n = 70   Rt = 1   –         –         1.09271      –         –         1.08577
         Rt = 5   –         –         1.11804      –         –         1.12872

Fig. 5. The example for n = 20 and Rt = 1 (n ≤ m).

5.2. Trial two for n ≤ m

In Table 2, we compare the objective values obtained by the DSPT-DS heuristic with their lower bound values for discrete uniform and discrete exponential processing times when n ≤ m. Ten different random trials for each combination (20 jobs with 20 machines, 20 and 50 jobs with 50 machines, and 20, 50 and 70 jobs with 70 machines) are conducted and the averages are shown in the table. The data reported in Table 2 show that, for a fixed number of jobs, the ratios improve as the number of machines increases. This is because the interaction between operations is greatest when n = m; as the number of jobs decreases relative to the number of machines, the interaction weakens and the gap between the objective and its lower bound shrinks, which improves the ratios.


An example with 20 jobs and Rt = 1 is shown in Fig. 5. Meanwhile, an inverse trend compared with Trial one is observed: for a fixed number of machines, the ratios keep increasing as the number of jobs rises. A possible explanation for this phenomenon is that fewer operations generate more idle time during job sequencing, and the influence of idle time is reduced once the number of jobs exceeds the number of machines.

6. Conclusions

In this paper, we discussed the general m-machine open shop makespan problem with release dates. It is proven that the DS is asymptotically optimal when the problem scale is large enough. To improve the performance of the DS, an on-line heuristic, DSPT-DS (which combines the SPT (Shortest Processing Time) rule with the DS), is provided. In numerical simulations, we tested different combinations of jobs and machines with discrete uniform and discrete exponential processing times. The computational results show that (1) for n > m, the DSPT-DS heuristic approaches the associated optimal schedule as the number of jobs goes to infinity; (2) for n ≤ m, the trend is inverse because of the influence of idle time; and (3) the two phenomena mentioned above are independent of the distribution of the processing times. For further research, the performance of the DSPT-DS heuristic will be investigated for open shop scheduling problems with additional restrictions, such as preemption, deadlines or no-wait jobs. Furthermore, we will also try to design new algorithms that provide better solutions for the problem when n ≤ m.

Acknowledgements

We are grateful to the graduate student Qian Fang for testing the data. The authors are grateful for the useful comments of the referee and the kind help of the editor-in-chief. This research is partly supported by the State Key Program of the National Natural Science Foundation of China (Grant No. 71032004).

References

[1] R.L. Graham, E.L. Lawler, J.K. Lenstra, A.H.G. Rinnooy Kan, Optimization and approximation in deterministic sequencing and scheduling: a survey, Ann. Discrete Math. 5 (1979) 287–326.
[2] M. Pinedo, Scheduling: Theory, Algorithms and Systems, second ed., Prentice-Hall, New Jersey, 2002.
[3] T. Gonzalez, S. Sahni, Open shop scheduling to minimize finish time, J. Assoc. Comput. Mach. 23 (1976) 665–679.
[4] E.L. Lawler, J.K. Lenstra, A.H.G. Rinnooy Kan, D.B. Shmoys, Sequencing and scheduling: algorithms and complexity, in: S.C. Graves, A.H.G. Rinnooy Kan, P.H. Zipkin (Eds.), Handbooks in Operations Research and Management Science, vol. 4: Logistics of Production and Inventory, North-Holland, Amsterdam, 1993, pp. 445–522.
[5] P. Brucker, J. Hurink, B. Jurisch, B. Wöstmann, A branch & bound algorithm for the open-shop problem, Discrete Appl. Math. 76 (1997) 43–59.
[6] U. Dorndorf, E. Pesch, T. Phan-Huy, Solving the open shop scheduling problem, J. Sched. 4 (2001) 157–174.
[7] I. Bárány, T. Fiala, Nearly optimum solution of multimachine scheduling problems, Szigma 15 (1982) 177–191 (in Hungarian).
[8] B. Chen, V.A. Strusevich, Approximation algorithms for three-machine open shop scheduling, ORSA J. Comput. 5 (1993) 321–326.
[9] V.A. Strusevich, A greedy open shop heuristic with job priorities, Ann. Oper. Res. 83 (1998) 253–270.
[10] D.P. Williamson, L.A. Hall, J.A. Hoogeveen, C.A.J. Hurkens, J.K. Lenstra, S.V. Sevast'janov, D.B. Shmoys, Short shop schedules, Oper. Res. 45 (1997) 288–294.
[11] S.V. Sevastianov, G.J. Woeginger, Makespan minimization in open shops: a polynomial time approximation scheme, Math. Program. 82 (1998) 191–198.
[12] B. Chen, C.N. Potts, G.J. Woeginger, A review of machine scheduling: complexity, algorithms and approximability, in: D.-Z. Du et al. (Eds.), Handbook of Combinatorial Optimization, Kluwer Academic Publishers, 1998, pp. 21–169.
[13] E.L. Lawler, J.K. Lenstra, A.H.G. Rinnooy Kan, Minimizing maximum lateness in a two-machine open shop, Math. Oper. Res. 6 (1981) 153–158.
[14] R. Chen, Dense schedules for open-shop with jobs release dates, OR Transactions 7 (2003) 73–77.
[15] R. Chen, W. Huang, G. Tang, Dense open-shop schedules with release times, Theor. Comput. Sci. 407 (2008) 389–399.
[16] J. Sgall, On-line scheduling, in: A. Fiat, G.J. Woeginger (Eds.), Online Algorithms: The State of the Art, Lecture Notes in Computer Science, vol. 1442, Springer, Berlin, 1998, pp. 196–231.
[17] B. Chen, A.P.A. Vestjens, G.J. Woeginger, On-line scheduling of two-machine open shops where jobs arrive over time, J. Comb. Optim. 1 (1998) 355–365.