
Scheduling1
Zhi Huang
College of Computer Science and Technology, Huazhong University of Science and Technology
Wuhan, P. R. China 430074
E-mail: huangzhi666@yahoo.com.cn
Abstract
In this paper, the job shop scheduling problem with the objective of minimizing the makespan is discussed. The theorem guaranteeing that the shifting bottleneck procedure (SB) with Schrage's one-machine sequencing algorithm always obtains a feasible solution for any instance is proven concisely. A counterexample is given to show that SB with Carlier's one-machine sequencing algorithm can produce an infeasible solution. We also point out a mistake in the theorem on which Carlier's algorithm is based, and correct it with some modifications. In order to overcome the infeasibility weakness of SB and to obtain better performance, a modified shifting bottleneck procedure (MSB) for job shop scheduling is proposed, and the theorem ensuring that MSB obtains a feasible solution for any instance of the problem is proven. MSB is tested on a group of benchmark instances. The computational experiments show that MSB obtains better solutions than SB.
Keywords: Job shop scheduling; Heuristic; Feasibility; Shifting Bottleneck
1 Introduction
The job shop scheduling problem is among the hardest combinatorial optimization problems and is strongly NP-complete [1]. Many heuristic algorithms have been developed, including dispatching priority rules [2], local search [3], the shifting bottleneck procedure [4], simulated annealing [5] and tabu search [7,8]. Research on the problem shows that it is hard to obtain good enough solutions with a single search scheme alone, so hybrid heuristic methods that combine several heuristic schemes have been proposed. The shifting bottleneck procedure and tabu search are widely used in hybrid heuristic methods [9-11].
Adams et al. proposed the shifting bottleneck procedure [4] and mentioned that it might obtain infeasible solutions, although they did not encounter such a case. However, they neither explained why the case might exist nor proved whether it does or does not occur. Storer et al. [12] mentioned that the simplified implementation of SB they used failed for two 10×50 instances, without knowing the reason. Strictly speaking, that failure might be caused by the weakness of SB [4], by the simplified implementation, or even by negligence in programming. Although some researchers [13] discussed the infeasibility problem of SB, they neither proved that SB with Carlier's one-machine sequencing algorithm [14] always obtains feasible solutions nor gave a counterexample showing that an infeasible solution can be reached by SB with Carlier's algorithm. They only proved that in SB, if the one-machine sequencing (OMS) problems are solved with Schrage's algorithm [14], no infeasibility can be reached [13]. So whether SB based on Carlier's OMS algorithm can produce infeasibility is still unknown.
In this paper, we construct a counterexample showing that SB using Carlier's OMS algorithm does obtain an infeasible solution. Before giving the counterexample, we present the theorem that in SB, if Schrage's algorithm is used to solve the OMS problems, feasible solutions are always obtained. We prove the theorem in a little more than one page, which is much more concise than the proof of more than five pages in the literature [13]. In addition, there is a mistake in the theorem used in Carlier's OMS algorithm [14]. Although the paper on Carlier's algorithm [14] has 22 citations (according to a search with the Google search engine), none of them has found the mistake. We construct a counterexample to expose the error and correct the theorem with some modifications.

1 Supported by the National Grand Fundamental Research 973 Program of China (G1998030600).
With the aim of overcoming the feasibility problem of the procedure and obtaining better solutions, a procedure named the modified shifting bottleneck procedure (MSB) is proposed. We also prove that MSB always obtains a feasible solution for any instance of the job shop scheduling problem. A group of benchmark instances is computed to test MSB. The computational results show that the performance of MSB is better than that of SB [4] and ISB [13]. Although MSB is not the best algorithm at present, it is a promising one.
2 The SB procedure
This section is derived from the literature [4,13,14] and is needed for a better comprehension of Sections 3 and 4.
2.1 The Problem
The job shop scheduling problem (JSSP) is described as follows [4]. Jobs are to be processed on machines with the objective of minimizing the makespan, i.e. the time needed for processing all jobs (maximum completion time), subject to the constraints that: (i) the sequence of machines for each job is prescribed; and (ii) each machine can process only one job at a time.
Let N = {0, 1, …, n} denote the set of operations (with 0 and n the dummy operations "start" and "finish"), M the set of machines, A the set of pairs of operations constrained by precedence relations representing condition (i) above, and E_k the set of pairs of operations to be performed on machine k and which therefore cannot overlap in time, as specified in (ii). Further, let d_i denote the (fixed) duration (processing time) and t_i the (variable) start time of operation i. The problem can then be stated as

$$\min\ t_n$$
$$t_j - t_i \ge d_i, \quad (i, j) \in A$$
$$t_i \ge 0, \quad i \in N$$
$$t_j - t_i \ge d_i \ \vee\ t_i - t_j \ge d_j, \quad (i, j) \in E_k,\ k \in M \qquad (P)$$
Any feasible solution to (P) is called a schedule. It is useful to represent this problem on a disjunctive graph G=(N, A, E ) [16], with node set N , ordinary (conjunctive) arc set A , and disjunctive arc set E . Conjunctive arcs represent precedence relations, and pairs of disjunctive arcs correspond to operations that must be processed on the same machine. Figure 1 illustrates the graph for a problem with 8 operations (on three jobs) and three machines.
A selection S_k in E_k contains exactly one member of each disjunctive arc pair of E_k. Determining an acyclic selection in E_k is equivalent to sequencing the jobs (operations) on machine k. A complete selection S consists of the union of the selections S_k, one in each E_k, k ∈ M. In the language of disjunctive graphs, the problem is to find an acyclic complete selection S ⊂ E that minimizes the length of a longest path in the directed graph D_S.
Figure 1: Disjunctive Graph of a Job Shop Problem
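As a concrete illustration (not part of the original formulation), the following minimal Python sketch evaluates a complete selection: it checks acyclicity by topological sorting and, if the selection is acyclic, returns the makespan as the length of a longest path from the dummy start node to the dummy finish node. The node names, durations and arc lists are assumed inputs chosen for the example.

```python
from collections import defaultdict, deque

def evaluate_selection(nodes, durations, conj_arcs, selected_arcs, sink="n"):
    """nodes: operation ids including dummy start 0 and dummy finish `sink`.
    durations: dict op -> processing time (0 for the dummies).
    conj_arcs: precedence arcs A; selected_arcs: one arc per disjunctive pair.
    Returns the makespan (longest 0 -> sink path) or None if the digraph has a cycle."""
    arcs = list(conj_arcs) + list(selected_arcs)
    succ = defaultdict(list)
    indeg = {v: 0 for v in nodes}
    for u, v in arcs:
        succ[u].append(v)
        indeg[v] += 1

    # Kahn's algorithm: a topological order exists iff the digraph is acyclic.
    queue = deque(v for v in nodes if indeg[v] == 0)
    start = {v: 0 for v in nodes}          # earliest start times t_i
    visited = 0
    while queue:
        u = queue.popleft()
        visited += 1
        for v in succ[u]:
            # arc (u, v) has length d_u: v cannot start before u finishes
            start[v] = max(start[v], start[u] + durations[u])
            indeg[v] -= 1
            if indeg[v] == 0:
                queue.append(v)
    if visited < len(nodes):
        return None                        # cycle: infeasible selection
    return start[sink]                     # t_n, the makespan
```

For an acyclic complete selection this returns t_n, the length of a longest 0→n path; for an infeasible selection such as the one constructed in Section 4 it returns None.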
2.2 The SB procedure
The machines are considered one by one: an OMS (one-machine sequencing) problem is solved for each, and the results are used both to rank the machines and to select a good sequence on the highest-ranked machine (the bottleneck machine). Every time a new machine is sequenced, all the already sequenced machines are locally reoptimized by solving an OMS problem for each of them, while keeping the sequences of the other machines fixed.
Let M_0 be the set of machines that have already been sequenced, by choosing selections S_p (p ∈ M_0). Let (P(k, M_0)) be the problem obtained by replacing each arc set E_p (p ∈ M_0) with the corresponding selection S_p, and deleting each arc set E_p (p ∈ M\M_0, p ≠ k). A bottleneck machine m ∈ M\M_0 is one such that v(m, M_0) = max{v(k, M_0): k ∈ M\M_0}, where v(k, M_0) is the value of a good solution to (P(k, M_0)). A brief statement of the shifting bottleneck procedure is as follows. Let M_0 be the set of already sequenced machines (M_0 = ∅ at the start).
Step 1. Identify a bottleneck machine m among the machines k ∈ M\M_0 and sequence it. Set M_0 ← M_0 ∪ {m}.
Step 2. Reoptimize the sequence of each critical machine k ∈ M_0 in turn, while keeping the other sequences fixed; i.e., set M'_0 := M_0 − {k} and solve (P(k, M'_0)). Then, if M_0 is equal to M, the algorithm stops; otherwise go to Step 1.
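The following Python-style sketch outlines the control flow of Steps 1-2. Here `solve_oms` (returning a machine's sequence and its value v(k, M_0)) and `reoptimize` are hypothetical placeholders for the one-machine solver and the local re-optimization described below, not routines from the original paper.

```python
def shifting_bottleneck(machines, solve_oms, reoptimize):
    """Skeleton of the SB loop: repeatedly sequence the bottleneck machine
    and locally re-optimize the machines sequenced so far."""
    M0 = set()                 # machines already sequenced
    selections = {}            # machine -> chosen sequence (selection S_k)
    while M0 != set(machines):
        # Step 1: rank the unsequenced machines by v(k, M0) and pick the bottleneck.
        best = {k: solve_oms(k, M0, selections) for k in machines if k not in M0}
        m = max(best, key=lambda k: best[k][1])   # best[k] = (sequence, v(k, M0))
        selections[m] = best[m][0]
        M0.add(m)
        # Step 2: re-optimize each already sequenced machine, keeping the others fixed.
        for k in list(M0):
            selections[k] = reoptimize(k, M0 - {k}, selections)
    return selections
```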
(P(k, M_0)) can be stated as:

$$\min\ t_n$$
$$t_j - t_i \ge d_i, \quad (i, j) \in \Big(\bigcup_{p \in M_0} S_p\Big) \cup A$$
$$t_i \ge 0, \quad i \in N$$
$$t_j - t_i \ge d_i \ \vee\ t_i - t_j \ge d_j, \quad (i, j) \in E_k \qquad (P(k, M_0))$$
The problem is solved by Carlier’s algorithm. The result is also used in step 2 of the procedure, which has been called the local re-optimization procedure. This procedure is described below.
Let k(1), …, k(p) be an ordering of M_0 (p = |M_0|). The local re-optimization cycle is: for i = 1, …, p, solve the problem (P(k(i), M_0\{k(i)})) and substitute the new selection S_k(i) for the old one. As long as |M_0| < |M|, the algorithm goes through at most three local re-optimization cycles for each set M_0. At the last step, when M_0 is equal to M, the algorithm continues the local reoptimization until there is no improvement for a full cycle.
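A sketch of one such re-optimization cycle, reusing the hypothetical `solve_oms` helper from the SB skeleton above, might look like this.

```python
def local_reoptimization_cycle(M0, selections, solve_oms):
    """One re-optimization cycle: re-solve each sequenced machine in turn,
    keeping the selections of the other machines in M0 fixed."""
    improved = False
    for k in list(M0):                                  # k(1), ..., k(p): current ordering of M0
        seq, value = solve_oms(k, M0 - {k}, selections)  # solve P(k, M0 \ {k})
        if selections.get(k) != seq:
            selections[k] = seq
            improved = True
    return improved
```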
Adams et al. [4] also pointed out that, upon completion of the local reoptimization procedure for a given M_0, it is useful to repeat the procedure after temporarily removing from the problem the last (according to the current ordering) α machines, where α is the minimum of |M_0|^{1/2} and the number of noncritical machines in M_0. At the end, the machines that had been removed are reintroduced one by one, successively, and the procedure for M_0 is completed. In this way, improvements are typically achieved. In the SB procedure, the OMS problem actually solved is (P*(k, M_0)) instead of (P(k, M_0)).
(P*(k, M_0)) can be stated as:

$$\min\ t_n$$
$$t_n - t_i \ge d_i + q_i, \quad t_i \ge r_i, \quad i \in N^*$$
$$t_j - t_i \ge d_i \ \vee\ t_i - t_j \ge d_j, \quad (i, j) \in E_k \qquad (P^*(k, M_0))$$

where N* is the set of operations to be processed on machine k, r_i = L(0, i) and q_i = L(i, n) − d_i (L(i, j) is the length of a longest path from i to j). Here r_i is the release date of operation i, and q_i is the time to be spent in the system after the end of operation i (the delivery time). (P*(k, M_0)) only uses part of the constraints of (P(k, M_0)): the constraints in (P*(k, M_0)) can be induced from those in (P(k, M_0)). In the SB procedure, the OMS problem (P*(k, M_0)) is solved by Carlier's algorithm [14].
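The heads r_i and tails q_i of (P*(k, M_0)) are longest-path lengths in the current digraph. A minimal sketch of their computation follows, assuming the digraph of fixed arcs is acyclic and already given in topological order; `topo_order`, `pred`, `succ` and `machine_ops` are assumed inputs, not symbols from the paper.

```python
def heads_and_tails(topo_order, pred, succ, durations, machine_ops):
    """r_i = L(0, i): longest path from the dummy start to i.
    q_i = L(i, n) - d_i: longest path from i to the dummy finish, minus d_i."""
    L0 = {v: 0 for v in topo_order}             # L(0, v)
    for v in topo_order:
        for u in pred[v]:
            L0[v] = max(L0[v], L0[u] + durations[u])
    Ln = {v: durations[v] for v in topo_order}  # L(v, n), counting d_v itself
    for v in reversed(topo_order):
        for w in succ[v]:
            Ln[v] = max(Ln[v], durations[v] + Ln[w])
    return ({i: L0[i] for i in machine_ops},                  # release dates r_i
            {i: Ln[i] - durations[i] for i in machine_ops})   # tails q_i
```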
2.3 Carlier's algorithm
In this section, we describe Schrage's and Carlier's algorithms in some detail, although they can be found in the literature [14], since they are background knowledge for Sections 3 and 4.
The problem is to sequence n independent jobs (operations) on a machine. Each job (operation) i has a release time r_i, a processing time d_i and a delivery time q_i (the amount of time to be spent after being processed on the machine). Carlier's algorithm is a branch and bound method in which each node is evaluated by applying Schrage's algorithm [14].
Schrage's algorithm. I is the set of all the operations processed on the machine, U is the set of operations already scheduled, Ū is the set of all other operations, and t is the current time.
(1) Set t = min_{i∈I} r_i; U = ∅.
(2) At time t, among the 'ready' jobs of Ū (those with r_i ≤ t), schedule the job j with the greatest q_j.
(3) Set U = U ∪ {j}; t_j = t; t = max(t_j + d_j, min_{i∈Ū} r_i). If U is equal to I, the algorithm is finished; otherwise, go to (2).
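A direct Python transcription of Schrage's rule may help; the operation data are assumed to be given as dicts r, d, q keyed by operation id (a representation chosen for the example, not taken from [14]).

```python
def schrage(ops, r, d, q):
    """Schrage's heuristic: at each decision point, among released operations
    pick the one with the largest tail q_i. Returns (sequence, start times, makespan)."""
    sequence, start = [], {}
    remaining = set(ops)
    t = min(r[i] for i in remaining)
    cmax = 0
    while remaining:
        ready = [i for i in remaining if r[i] <= t]
        j = max(ready, key=lambda i: q[i])        # greatest delivery time first
        start[j] = t
        cmax = max(cmax, t + d[j] + q[j])
        remaining.remove(j)
        sequence.append(j)
        if remaining:
            t = max(t + d[j], min(r[i] for i in remaining))
    return sequence, start, cmax
```

For the two-operation instance of Table 1 below, this routine yields start times 10 and 13 and makespan 28, matching the value quoted in the text.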
Carlier’s algorithm is based on Schrage’s algorithm and the following theorem.
Carlier's Theorem [14]. Let C_max be the makespan of Schrage's schedule.
(1) If the schedule is not optimal, there is a critical job c and a critical set J such that
$$h(J) = \min_{i \in J} r_i + \sum_{i \in J} d_i + \min_{i \in J} q_i > C_{\max} - d_c.$$
Consequently, the distance to the optimum from Schrage's schedule is less than d_c; moreover, in an optimal schedule, either c is processed before all jobs of J, or c is processed after all jobs of J.
(2) If the schedule is optimal, there exists J such that h(J) = C_max.
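To make the quantity h(J) concrete, here is a small helper (an illustration, not code from [14]) that evaluates h(J) for a given job set; it can be used to check the counterexample of Table 1 below.

```python
def h(J, r, d, q):
    """Carlier's lower bound for the job set J (r, d, q are dicts keyed by job)."""
    return min(r[i] for i in J) + sum(d[i] for i in J) + min(q[i] for i in J)

# Counterexample of Table 1: r = {1: 10, 2: 12}, d = {1: 3, 2: 3}, q = {1: 11, 2: 12}.
# h({1}) = 24, h({2}) = 27, h({1, 2}) = 27, all below the Schrage makespan of 28.
```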
See the proof in the literature [14]. Carlier's algorithm explores a search tree in which each node δ is associated with a lower bound f(δ); the upper bound f_0 is the value of the best solution known so far.

Branching. Consider the node of the tree with the lowest bound and apply Schrage's algorithm to the OMS problem associated with it. Let the critical path be [0, j(1), j(2), …, j(p), n]. If c exists, it is the greatest number such that q_{j(c)} < q_{j(p)}, and the critical set is J = {j(c+1), …, j(p)}. If c does not exist, the algorithm stops; otherwise, it considers two new problems: one in which c has to be processed before all the jobs of J, by setting
$$q_c = \max\Big(q_c,\ \sum_{k \in J} d_k + q_{j(p)}\Big),$$
and another in which c has to be processed after all the jobs of J, by setting
$$r_c = \max\Big(r_c,\ \min_{k \in J} r_k + \sum_{k \in J} d_k\Big).$$
The lower bound of each new node is the maximum of f(δ), h(J) and h(J ∪ {c}); a new node is added to the tree only if its lower bound is less than the upper bound f_0.

Upper bound. Every time the algorithm applies Schrage's algorithm, it compares the resulting makespan to f_0, and if the makespan is smaller, it sets f_0 = makespan.

The mistake in the theorem lies in part (2): the claim that if the schedule is optimal, there exists J such that h(J) = C_max. Table 1 gives a counterexample with two operations. The Schrage schedule obtained is shown in Figure 2; its makespan C_max equals 28. However, h({1}) = 24, h({2}) = 27, and h({1,2}) = 27, so no set J satisfying h(J) = C_max exists.

Table 1. The Counterexample of the Theorem in Carlier's Algorithm
Operation i | 1 | 2
r_i | 10 | 12
d_i | 3 | 3
q_i | 11 | 12

Figure 2: The Schrage Schedule Obtained for the Counterexample of the Theorem.

The modified theorem without this mistake is as follows.

Modified Carlier's Theorem. Let D′ be the conjunctive graph associated with the Schrage schedule. There might be more than one critical path; we consider a critical path µ of D′ passing through a maximal set of operations, and renumber the operations so that µ passes through 0, 1, …, p, n in that order.
(a) If this schedule is not optimal, there exists an integer c (1 ≤ c ≤ p), the maximal number satisfying q_c < min_{i∈J} q_i with J = {c+1, …, p}, and h(J) > C_max − d_c. Consequently, the distance to the optimum from the Schrage schedule is less than d_c; moreover, in an optimal schedule, either c is processed before all jobs of J, or c is processed after all jobs of J.
(b) If there exists no integer c (1 ≤ c ≤ p) satisfying the condition above, the schedule is optimal.

3 The theorem

Theorem. In SB, if the one-machine sequencing (OMS) problem (P*(k, M_0)) is solved with Schrage's algorithm, a feasible solution is surely obtained for any instance of the job shop scheduling problem.

Proof. In the language of disjunctive graphs, we only need to prove that the digraph corresponding to the solution obtained by the SB procedure with the Schrage OMS algorithm is acyclic for any instance. Let N* denote the operation set to be processed on machine k. Suppose the first cycle (or cycles) appears just when Schrage's algorithm is solving the one-machine scheduling problem (P*(k, M_0)), right after t_i, the start time of operation i (i ∈ N*), has been determined and before the next operation's start time is determined. Then the cycle must include operation i: if node i were not included in the cycle, then the new precedence relationships between i and the operations of N* whose start times were determined just before i would not contribute to the appearance of the cycle, so only the precedence relationships existing before t_i was determined could contribute to the cycle; but no cycle appears before the start time of operation i is determined. This is a contradiction.
Since the precedence relationships existing before solving (P*(k, M_0)) alone cannot form a cycle, and the new precedence relationships produced up to the moment t_i is determined during the solving of (P*(k, M_0)) alone cannot form a cycle either, both the former and the latter precedence relationships are needed for a cycle.

Figure 3: The Precedence Relationships Starting from Operation i.

We mark the former precedence relationships with solid arrows and the latter ones with broken arrows, as Figure 3 shows. Since the cycle includes i, there must be a directed path from i to other operations and back to i. We walk from i in the direction of the arrows. The first arrow must be a solid arrow, because none of the broken arrows has i as its tail (i is the last of the scheduled operations in N*). As we continue walking, sooner or later we meet the first broken arrow, since the precedence relationships existing before solving the problem cannot form a cycle on their own (the solid arrows alone cannot form a cycle). Let this broken arrow end at h, which means that h belongs to N* (the operation set to be processed on machine k) and that the start time t_h was determined before t_i when solving (P*(k, M_0)). Moreover, there is a sequence of one or more consecutive solid arrows from i to h, which means that i and h had a precedence relationship before solving (P*(k, M_0)) and that i must be processed before h.

Since i precedes h before solving (P*(k, M_0)), we have r_i < r_h and q_i > q_h in (P*(k, M_0)) (Lemma below). Since, in solving (P*(k, M_0)), the start time of h is determined earlier than that of i, we have t_h < t_i. According to Schrage's algorithm, t_h < t_i is only possible if r_h ≤ r_i or q_h ≥ q_i, which contradicts the deduction r_i < r_h and q_i > q_h. Hence the existence of a cycle during Schrage's algorithm always produces a contradiction, and so no cycle can be produced by Schrage's algorithm for the one-machine scheduling problem. □

Lemma. In the digraph before solving the one-machine sequencing problem (P*(k, M_0)), if operation i precedes operation h, then r_i < r_h and q_i > q_h.

Proof. r_h = L(0, h) is the length of a longest directed path from node 0 to node h in the digraph, which is greater than or equal to the length of a longest directed path from 0 to i plus that of a longest directed path from i to h, i.e. L(0, h) ≥ L(0, i) + L(i, h). Since operation i precedes operation h, L(i, h) ≥ d_i > 0. Additionally, r_i = L(0, i). So r_i < r_h. Similarly, q_i = L(i, n) − d_i, L(i, n) ≥ L(i, h) + L(h, n), and q_h = L(h, n) − d_h. Since operation i precedes operation h, L(i, h) ≥ d_i and d_h > 0. Therefore q_i = L(i, n) − d_i ≥ L(i, h) + L(h, n) − d_i ≥ d_i + L(h, n) − d_i = L(h, n) > L(h, n) − d_h = q_h. So q_i > q_h. □

4 Counterexample of the SB procedure with Carlier's OMS algorithm

In this section, we give a counterexample showing that the SB procedure using Carlier's OMS algorithm can obtain an infeasible solution. Figure 4 shows the counterexample with 11 operations (on three jobs) and eight machines. In job 1, operation a must be performed on machine I, operation 1 on machine II, operation b on machine III, operation 2 on machine II, and operation c on machine IV. In job 2, operation e is processed on machine V, operation 3 on machine II, and operation f on machine VI. In job 3, operation g is processed on machine VII, operation 4 on machine II, and operation h on machine VIII. The numbers on the arcs represent the processing times of the operations.
Figure 4: The Counterexample Job Shop Problem.

We try to solve this problem with the SB procedure. Only the operations on machine II need sequencing, since no other operations share a common machine. The OMS problem that Carlier's algorithm solves is given in Table 2.

Table 2. The Input of the OMS Problem
Operation i | 1 | 2 | 3 | 4
r_i | 20 | 27 | 22 | 30
q_i | 21 | 18 | 34 | 32

The solution obtained by Carlier's algorithm is shown in Figure 5.

Figure 5: The Sequencing Order Solved by Carlier's Algorithm.

This sequencing order is in fact the optimal solution of the OMS problem; any other order leads to a larger makespan. In the end, the SB procedure based on Carlier's algorithm obtains the solution shown in Figure 6. The digraph of this solution contains the cycle (1→b→2→1), so the solution is infeasible. As described above, for this counterexample the SB procedure obtains an infeasible solution if it simply applies Carlier's algorithm to the OMS problem.

Figure 6: The Digraph of the Solution of the Counterexample Job Shop Problem.

5 The modified SB procedure

The SB procedure uses Carlier's algorithm, in which operations are treated as independent, whereas dependences between operations on a machine may exist in the job shop scheduling problem. Neglecting these dependence relationships has negative effects [15,17]; what is more, a feasible solution may not be achieved, as discussed above. In order to overcome this drawback, we propose the modified SB procedure (MSB).

5.1 MSB procedure

The MSB procedure we implemented is described as follows.
Step 1: Set M_0 = ∅.
Step 2: Identify a bottleneck machine m among the machines k ∈ M\M_0 and sequence it by the modified Carlier's algorithm. Set M_0 ← M_0 ∪ {m}.
Step 3: If |M_0| < |M|, perform six local reoptimization cycles for M_0 and go to Step 2; otherwise perform nine local reoptimization cycles for M_0.
Step 4: Delete the α machines that were last added into M_0, where α is the minimum of |M_0|^{1/2} and the number of noncritical machines in M_0.
Step 5: Identify a bottleneck machine m among the machines k ∈ M\M_0 and sequence it by the modified Carlier's algorithm. Set M_0 ← M_0 ∪ {m}.
Step 6: If |M_0| < |M|, perform six local reoptimization cycles for M_0 and go to Step 2; otherwise perform nine local reoptimization cycles for M_0.
Step 7: If the number of jobs is equal to 15, perform Steps 4-6 again and stop; otherwise stop.

The modified SB procedure is based on the modified Carlier's algorithm, and the modified Carlier's algorithm is based on the modified Schrage's algorithm.

Modified Schrage's algorithm. I is the set of all operations processed on the machine, U is the list of operations already scheduled, Ū is the set of all other operations, and p_ij (i, j ∈ I) is the length of the longest path from i to j in the whole graph (p_ij = 0 if there is no path).
(1) Set v = 0; for all i ∈ I set r'_i = r_i; set t = min_{i∈I} r'_i and U = NULL.
(2) At time t, among the operations of Ū whose predecessors in I have all been scheduled (∀j ∈ I, if p_ji > 0 then j ∈ U), schedule the operation k with the greatest q_k.
(3) Set u_v = k and v = v + 1. For all i ∈ I, if p_ki > 0 and r'_i < t + d_k, set r'_i = t + p_ki. Set t_k = t and t = max(t_k + d_k, min_{i∈Ū} r'_i). If v is equal to |I|, the algorithm stops and returns U = (u_1, u_2, …, u_|I|); otherwise go to (2).

Let the critical path of the modified Schrage schedule be [0, j(1), j(2), …, j(p), n], and let c be the greatest number such that q_{j(c)} < min{q_i | i ∈ J}, where J = {j(c+1), …, j(p)}.
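A minimal Python sketch of the modified Schrage rule follows. It assumes the longest-path lengths are supplied as a dict `p` keyed by ordered pairs (p[(i, j)] > 0 meaning i must precede j), and it additionally applies the usual release-date test r'_i ≤ t when picking the next operation; all names are illustrative rather than taken from the paper.

```python
def modified_schrage(ops, r, d, q, p):
    """Modified Schrage rule: like Schrage, but an operation may be chosen only after
    all of its on-machine predecessors are scheduled, and the release dates of its
    successors are pushed back once it is started."""
    r_adj = dict(r)                        # adjusted release dates r'_i
    sequence, start = [], {}
    unscheduled = set(ops)
    t = min(r_adj[i] for i in unscheduled)
    while unscheduled:
        # operations whose on-machine predecessors are all scheduled
        free = [i for i in unscheduled
                if all(p.get((j, i), 0) == 0 for j in unscheduled if j != i)]
        ready = [i for i in free if r_adj[i] <= t]
        if not ready:                      # nothing released yet: advance the clock
            t = min(r_adj[i] for i in free)
            ready = [i for i in free if r_adj[i] <= t]
        k = max(ready, key=lambda i: q[i])
        start[k] = t
        sequence.append(k)
        unscheduled.remove(k)
        # push back successors of k that would otherwise start too early
        for i in unscheduled:
            if p.get((k, i), 0) > 0 and r_adj[i] < t + d[k]:
                r_adj[i] = t + p[(k, i)]
        if unscheduled:
            t = max(t + d[k], min(r_adj[i] for i in unscheduled))
    return sequence, start
```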
The modified Carlier algorithm is similar to Carlier's algorithm, except for its definition of h(J) and its use of the modified Schrage's algorithm instead of Schrage's algorithm.

5.2 MSB procedure always gets feasible solutions

Proposition. A feasible solution of any instance of the job shop scheduling problem is surely obtained by the MSB procedure.

Proof. MSB is based on the modified Carlier's algorithm for solving one-machine sequencing problems. The modified Carlier's algorithm is based on the modified Schrage's algorithm, and the solution obtained by the modified Carlier's algorithm is one of the solutions reached by the modified Schrage's algorithm (whether or not r_j or q_j has been changed for some operation j). So if every digraph corresponding to a solution obtained by the modified Schrage's algorithm is acyclic, then the modified Carlier's algorithm always finds a feasible solution, and hence the solution obtained by MSB is always feasible.

Next, we prove that the digraph corresponding to the solution of the modified Schrage's algorithm is acyclic. Suppose the first cycle (or cycles) appears just when the modified Schrage's algorithm is solving the one-machine scheduling problem (P(k, M_0)), right after t_i, the start time of operation i (i ∈ N*), has been determined and before the next operation's start time is determined. In a way similar to the proof of the theorem in Section 3, we can prove that the cycle must include operation i, and that a scheduled operation h (h ∈ N*) must exist such that the precedence relationship i → … → h exists before we solve the problem (P(k, M_0)) and the time t_h is determined before t_i in solving (P(k, M_0)). However, according to the modified Schrage's algorithm, t_i must be determined before t_h in solving (P(k, M_0)), since the precedence relationship i → … → h exists before solving (P(k, M_0)). This is a contradiction, and hence no cyclic solution can be produced in the digraph corresponding to the solution obtained by MSB. □

5.3 Results and discussions

MSB is implemented in the C language on a Pentium 4 2.8 GHz PC. Table 3 compares the makespans of 41 benchmark instances (la01-40 [18] and ft10 [19]) obtained by MSB with those obtained by SB, and gives the computational time spent by MSB. The acronym CI-CPU stands for computer-independent CPU time (s), based on the comparisons of Dongarra [7] as interpreted by Vaessens et al. [20]. For the 41 instances, MSB finds 17 solutions with better makespans than SB, 5 with worse makespans, and 19 with equal makespans; MSB's ratio of better to worse solutions is 17/5 (3.40). Dauzère-Pérès and Lasserre proposed a modified shifting bottleneck procedure, which we call MSB0 [15]. MSB0 was run on 20 instances and found 11 solutions better than SB, 7 worse, and 2 as good as SB; its ratio of better to worse solutions is 11/7 (1.57).
Table 3. Comparison of SB and MSB for Benchmark Instances

Problem | n | m | Opt | SB makespan | Compare | MSB makespan | MSB CI-CPU(s)
LA01 | 10 | 5 | 666 | 666 | = | 666 | 0.56
LA02 | 10 | 5 | 655 | 720 | > | 684 | 0.56
LA03 | 10 | 5 | 597 | 623 | < | 628 | 1.09
LA04 | 10 | 5 | 590 | 597 | > | 590 | 1.09
LA05 | 10 | 5 | 593 | 593 | = | 593 | 0.56
LA06 | 15 | 5 | 926 | 926 | = | 926 | 1.65
LA07 | 15 | 5 | 890 | 890 | = | 890 | 1.65
LA08 | 15 | 5 | 863 | 863 | = | 863 | 1.65
LA09 | 15 | 5 | 951 | 951 | = | 951 | 1.61
LA10 | 15 | 5 | 958 | 958 | = | 958 | 1.65
LA11 | 20 | 5 | 1222 | 1222 | = | 1222 | 2.73
LA12 | 20 | 5 | 1039 | 1039 | = | 1039 | 5.46
LA13 | 20 | 5 | 1150 | 1150 | = | 1150 | 4.90
LA14 | 20 | 5 | 1292 | 1292 | = | 1292 | 6.02
LA15 | 20 | 5 | 1207 | 1207 | = | 1207 | 4.38
LA16 | 10 | 10 | 945 | 1021 | > | 995 | 4.94
LA17 | 10 | 10 | 784 | 796 | < | 797 | 4.38
LA18 | 10 | 10 | 848 | — | | — | 4.94
LA19 | 10 | 10 | 842 | 875 | < | 855 | 4.94
LA20 | 10 | 10 | 902 | 924 | > | 913 | 4.90
LA21 | 15 | 10 | 1046 | 1172 | > | 1114 | 16.42
LA22 | 15 | 10 | 927 | 1040 | > | 963 | 14.77
LA23 | 15 | 10 | 1032 | 1060 | > | 1032 | 17.50
LA24 | 15 | 10 | 935 | 1000 | > | 972 | 16.42
LA25 | 15 | 10 | 977 | 1048 | > | 1003 | 16.42
LA26 | 20 | 10 | 1218 | 1304 | > | 1233 | 22.40
LA27 | 20 | 10 | 1235 | 1325 | > | 1277 | 22.96
LA28 | 20 | 10 | 1216 | 1256 | < | 1274 | 23.52
LA29 | 20 | 10 | 1153 | 1294 | > | 1267 | 22.96
LA30 | 20 | 10 | 1355 | 1403 | > | 1355 | 24.08
LA31 | 30 | 10 | 1784 | 1784 | = | 1784 | 34.44
LA32 | 30 | 10 | 1850 | 1850 | = | 1850 | 34.48
LA33 | 30 | 10 | 1719 | 1719 | = | 1719 | 33.78
LA34 | 30 | 10 | 1721 | 1721 | = | 1721 | 33.78
LA35 | 30 | 10 | 1888 | 1888 | = | 1888 | 35.00
LA36 | 15 | 15 | 1268 | 1351 | > | 1331 | 44.28
LA37 | 15 | 15 | 1397 | 1485 | > | 1431 | 46.48
LA38 | 15 | 15 | 1196 | 1280 | < | 1310 | 38.29
LA39 | 15 | 15 | 1233 | 1321 | > | 1287 | 47.57
LA40 | 15 | 15 | 1222 | 1326 | > | 1260 | 48.13
FT10 | 10 | 10 | 930 | 1016 | > | 991 | 6.55

Table 4 compares the computational results of SB [4], ISB [13] and MSB. Instances la06-15 and la31-35 are easy: all three algorithms solve them optimally in a short time, so these 15 instances are not listed. The remaining 25 instances are divided into five groups according to their sizes, and the mean relative error (MRE) is calculated for each group. The relative error is the percentage by which the solution obtained exceeds the optimum value (Opt), if it is known, or the best known lower bound (LB): 100·(makespan − Opt)/Opt, or 100·(makespan − LB)/LB. The mean relative error of MSB is 3.55, which is better than the 5.91 of SB [4] and a little better than the 3.88 of ISB [13]. However, MSB spends more time than SB and ISB as a whole.

Table 4. Mean Relative Error over Best Makespan and CI-CPU Time of SB, ISB and MSB

Problem | n | m | MRE SB | MRE ISB | MRE MSB | CI-CPU(s) SB | CI-CPU(s) ISB | CI-CPU(s) MSB
LA01-05 | 10 | 5 | 3.09 | 2.47 | 1.92 | 0.8 | 0.0 | 0.77
LA16-20 | 10 | 10 | 4.2 | 2.41 | 3.46 | 3.7 | 0.4 | 4.82
LA21-25 | 15 | 10 | 8.25 | 4.86 | 3.4 | 11.3 | 2.6 | 16.31
LA26-30 | 20 | 10 | 6.9 | 5.24 | 4.1 | 19.9 | 12.8 | 23.18
LA36-40 | 15 | 15 | 7.1 | 4.4 | 4.88 | 29.9 | 49.5 | 44.95
Total MRE | | | 5.91 | 3.88 | 3.55 | | |

In practice, the size of job shop scheduling instances keeps growing, and an algorithm without hybrid schemes seems unable to meet the practical demands for both high-quality solutions and moderate computing time. At present, SB-GLS [9] and TSSB [11], which make use of SB, are among the best algorithms for the job shop scheduling problem. In further studies, MSB could be combined with other strategies such as tabu search to create efficient and effective meta-heuristic algorithms for the job shop scheduling problem; MSB can also be applied to provide their initial solutions.

Acknowledgments

The author would like to thank doctoral candidate He Kun. The discussions with her on Carlier's algorithm inspired the author to find the mistake in Carlier's theorem.

References

[1] Lawler EL, Lenstra JK, Rinnooy Kan AHG. (1982) Recent developments in deterministic sequencing and scheduling: a survey. In: Dempster MAH, Lenstra JK, Rinnooy Kan AHG, editors. Deterministic and stochastic scheduling. Dordrecht, The Netherlands: Reidel, pp. 35-73.
[2] French S. (1982)
Sequencing and scheduling: an introduction to the mathematics of the job-shop. Chichester, UK: Harwood.
[3] Spachis AS, King JR. (1979) Job-shop scheduling heuristics with local neighborhood search. International Journal of Production Research 17(6):507-526.
[4] Adams J, Balas E, Zawack D. (1988) The shifting bottleneck procedure for job shop scheduling. Management Science 34:391-401.
[5] Van Laarhoven PJM, Aarts EHL, Lenstra JK. (1992) Job shop scheduling by simulated annealing. Operations Research 40(1):113-125.
[6] Dell'Amico M, Trubian M. (1993) Applying tabu-search to the job-shop scheduling problem. Annals of Operations Research 41(1):231-252.
[7] Dongarra JJ. (1993) Performance of various computers using standard linear equations software. Report CS--85, Computer Science Department, University of Tennessee, Knoxville, TN.
[8] Taillard E. (1993) Benchmarks for basic scheduling problems. European Journal of Operational Research:278-285.
[9] Balas E, Vazacopoulos A. (1998) Guided local search with shifting bottleneck for job shop scheduling. Management Science 44(2):262-275.
[10] Nowicki E, Smutnicki C. (1996) A fast taboo search algorithm for the job shop scheduling problem. Management Science 42(6):797-813.
[11] Pezzella F, Merelli E. (2000) A tabu search method guided by shifting bottleneck for the job shop scheduling problem. European Journal of Operational Research 120:297-310.
[12] Storer RH, Wu SD, Vaccari R. (1992) New search spaces for sequencing problems with application to job shop scheduling. Management Science 39(10):1495-1509.
[13] Wenqi H, Aihua Y. (2004) An improved shifting bottleneck procedure for the job shop scheduling problem. Computers & Operations Research 31:2093-2110.
[14] Carlier J. (1982) The one-machine sequencing problem. European Journal of Operational Research 11:42-47.
[15] Dauzère-Pérès S, Lasserre JB. (1993) A modified shifting bottleneck procedure for job-shop scheduling. International Journal of Production Research 31:923-932.
[16] Balas E. (1969) Machine sequencing via disjunctive graphs: an implicit enumeration algorithm. Operations Research 17:941-957.
[17] Balas E, Lenstra JK, Vazacopoulos A. (1995) The one-machine problem with delayed precedence constraints and its use in job shop scheduling. Management Science 41(1):94-109.
[18] Lawrence S. (1984) Supplement to Resource constrained project scheduling: an experimental investigation of heuristic scheduling techniques. GSIA, Carnegie Mellon University, Pittsburgh, PA.
[19] Fisher H, Thompson GL. (1963) Probabilistic learning combinations of local job shop scheduling rules. In: Muth JF, Thompson GL, editors. Industrial scheduling. Englewood Cliffs, NJ: Prentice-Hall.
[20] Vaessens RJM, Aarts EHL, Lenstra JK. (1994) Job shop scheduling by local search. Memorandum COSOR 94-5, Department of Mathematics and Computing Science, Eindhoven University of Technology, Eindhoven, The Netherlands.