

ARTICLE IN PRESS

0925-5273/$ - see front matter © 2004 Published by Elsevier B.V.
doi:10.1016/j.ijpe.2004.10.014
Corresponding author. Tel.: +1 919 515 7077; fax: +1 919 515 5281.
E-mail address: [email protected] (S.E. Elmaghraby).

Int. J. Production Economics 100 (2006) 44–58

www.elsevier.com/locate/ijpe

Sequencing precedence-related jobs on two machines to minimize the weighted completion time

Girish Ramachandra, Salah E. Elmaghraby*

North Carolina State University, Raleigh, NC 27695, USA

Received 1 January 2004; accepted 1 October 2004

Available online 2 February 2005

Abstract

We address the problem P2|prec|Σ w_j C_j. The problem is known to be NP-hard. We offer a binary integer program (BIP) and a dynamic program (DP); the latter is based on the concept of "initial subsets" of jobs and the optic of "weighted earliness–tardiness". Although the DP approach expands the size of problems that can be solved to optimality to almost twice that obtained by the BIP, it reaches its computational limit around 25 jobs with mean job processing time of 10. We then introduce a genetic algorithm (GA) procedure that is capable of solving any problem size, and further extends the domain of applicability to more than two machines in parallel (problem Pm|prec|Σ w_j C_j). The BIP is also used to establish a good lower bound against which the performance of the GA procedure is measured for larger size problems. Experimental investigation of the GA procedure demonstrates that it is capable of achieving the optimum in very few iterations (fewer than 20), thanks to the manner in which the initial population is generated, and that early abortion still yields an excellent approximation to the optimum, as judged by its proximity to the lower bound.

© 2004 Published by Elsevier B.V.

Keywords: Scheduling; Parallel machines; Dynamic programming; Genetic algorithm

1. Introduction

The problem treated in this paper may be described as follows. We are given a set N of n jobs that are ready "now", and two identical machines (m/c's) in parallel. Only in the application of the genetic algorithm (GA) do we extend the number of machines to three. Extension to more than three machines is straightforward, but is not attempted in this research. The jobs are related by arbitrary precedence in the job-on-node (JoN) mode of representation. Each job has a processing time p_i and a "weight" denoted by w_i, which may represent the cost of keeping it as "work in process" during its residence in the shop before completion. Denote the completion time of job i by C_i; the cost incurred at its completion is evaluated as w_i C_i. It is desired to sequence the jobs


on the two machines non-preemptively while respecting the precedence relations, so that the cost of completing all the jobs is minimized. The convention is to state this problem as P2|prec|Σ w_i C_i.

Parallel machine scheduling problems are being addressed by many researchers due to their wide and prevalent applications. The problem is important from both the theoretical and practical points of view. The criterion of minimizing the weighted completion times is appealing to many because it induces stable utilization of resources, high "throughput" in the shop, and minimum in-process inventory. Scheduling problems involving precedence constraints are among the most difficult problems in the area of machine scheduling.

Let m denote the number of (identical) machines in parallel. Garey and Johnson (1979) use the transformation from PARTITION to demonstrate that the decision problem without precedence constraints is NP-complete for m = 2, and is NP-complete in the strong sense for arbitrary m > 2. For arbitrary precedence constraints, the problem is NP-complete in the strong sense even if all weights are equal and the partial order is an "in-tree" or an "out-tree". The computational difficulty of the problem shall be manifested in our mathematical formulations below, and explains our focus on achieving solutions via compu-search procedures in general and GA in particular.

1.1. Literature review

Our ability to understand the structure of the problem and to develop near-optimal solutions remains limited, and progress on the weighted completion time objective on a single machine or identical parallel machines is quite recent.

For the one-m/c scheduling problem, minimizing the weighted completion time is NP-hard for arbitrary precedence constraints (Garey and Johnson, 1979; Horn, 1973). A polynomial-time algorithm is known when the precedence is a forest (Horn, 1973) or a series–parallel graph (s/p) (Adolphson and Hu, 1973; Lawler, 1973). The case of arbitrary precedence was treated by Sidney (1969), who introduced the concept of initial sets of jobs, and subsequently by Sidney and Steiner (1986). Elmaghraby (2001), in a recent report, has proposed an approach to solve the case of arbitrary precedence by first converting the non-s/p graph into an s/p one by adding artificial precedence relations; he then employs a branch-and-bound (BaB) procedure to arrive at the optimal sequence. Chekuri and Motwani (1999), Chudak and Hochbaum (1999) and Hall et al. (1997) provide 2-approximation algorithms for this problem. Chekuri's algorithm is based on solving the minimum cut, while the latter two contributions use modified linear programming relaxations.

For an arbitrary number of machines in parallel, the problem is NP-hard even without precedence constraints, unless all weights are equal. The special case Pm||Σ w_j C_j with unit weights is solvable in polynomial time by sorting (Conway et al., 1967). Alidaee (1993) modifies the dynamic programming algorithm of Hall and Posner (1991) for solving the single-m/c weighted earliness–tardiness (WET) problem to obtain a pseudo-polynomial algorithm for the 2-machine weighted mean flow time minimization problem. The problem is strongly NP-hard even when all the weights are identical and the graph is a union of chains. Dror et al. (1997) provide two results: P2|strong chain|Σ C_j is solvable in polynomial time, and P2|weak chain|Σ C_j is NP-hard in the strong sense.

"List scheduling" algorithms, first analyzed by Graham (1966), are among the simplest and most commonly used approximation methods for parallel machine scheduling problems. These algorithms use priority rules, or job rankings, which are often derived from solutions to relaxed versions of the problems. In his seminal paper, Graham (1966) showed that a simple list scheduling rule is a (2 − 1/m)-approximation algorithm for the problem P|prec|C_max. In contrast, simple counter-examples demonstrate that this may lead to an arbitrarily poor performance ratio for a weighted sum of completion times objective Σ_{j∈N} w_j C_j.

Thus, to obtain a bounded performance for the weighted sum of completion times objective, the no-idleness property needs to be relaxed.

An important contribution to this area of research is the paper by Rinnooy Kan et al. (1975), in which they propose a BaB algorithm for 1||Σ w_i C_i for a general nondecreasing cost function. They use four main dominance criteria to define the order in which jobs will appear in the optimal schedule. They use the assignment formulation at the root node and then compute lower bounds (l.b.) throughout the search tree using Dorhout's (1975) dual solution.

The necessity of having to deal with idleness adds a significant amount of complexity to the problem. Hall et al. (1997) partition jobs into groups that are individually scheduled according to the GLSA; these schedules are then concatenated to obtain a solution to the original problem. Chekuri et al. (1997) present a different variant of the GLSA by artificially introducing idle time whenever it appears that a delay of the next available job in the list (if it is not the first) can be afforded. Unlike the above algorithms, which are machine based, the approximation algorithm of Munier et al. (1998) (MQS) was the first within its context that was job based. They obtain the list from an LP relaxation which uses the job completion times C_j as the decision variables. Reference to this formulation is made in the next section.

The extreme complexity of the multiprocessor scheduling problem has triggered a great deal of effort in developing compu-search techniques such as tabu search and genetic algorithms (GA). Esquivel et al. (2001) address the parallel processor scheduling problem for the makespan minimization criterion using GA and show that their method is superior to the GLSA. Hou et al. (1994) propose a scheduling algorithm using genetic search in which each chromosome is a collection of lists, each of which represents the schedule on a distinct processor. Thus each chromosome is a two-dimensional structure of (processor index, task list). Although this solution representation facilitates easy crossover operations, it restricts the number of schedules that can be represented, and therefore reduces the probability of finding optimal solutions.

In this paper we present two mathematical formulations. The first is a binary integer program (BIP), discussed in Section 2, which proved useful in solving optimally problems with no more than about 10 jobs. In the same section we also present a l.b. based on a linear programming relaxation, which is later used to evaluate the performance of the GA approach. The second is a dynamic program (DP), presented in Section 3, which extends the capability of optimization models beyond the limits of the BIP models, albeit it remains unwieldy for problems with more than about 25 jobs. In Section 4, we present a GA formulation to find optimal or near-optimal solutions in a fast and consistent way. The solution representation in this formulation is a 'valid' scheduling list, and we alter this list using genetic operators to find a list which, if used as an input to the "scheduler", generates an optimal schedule. The l.b. developed in Section 2 is used to compare results for problems solved using the GA procedure. Section 5 presents our experimental results. Section 6 discusses the conclusions and delineates the areas of our current research.

2. A BIP model

Define x_jkt as 0–1 variables for 1 ≤ j ≤ n, k ∈ {1, 2}, and t discrete with 0 ≤ t ≤ T, where T denotes some upper bound on the starting time of the "last job(s)" (these are the jobs with no successors in the graph):

x_jkt = 1 if job j is started on machine k at time t, and 0 otherwise.

The BIP formulation for the problem is as follows:

min z = Σ_{j=1}^{n} w_j C_j    (1)

subject to the following constraints:

Σ_{k=1}^{2} Σ_{t=0}^{T} x_jkt = 1    ∀j,    (2)

Σ_{j=1}^{n} x_jkt ≤ 1    ∀k, t,    (3)

Σ_{k=1}^{2} Σ_{t=0}^{T} t x_jkt + p_j ≤ Σ_{k=1}^{2} Σ_{t=0}^{T} t x_j'kt    ∀(j, j') ∈ A,    (4)

t x_jkt + M(1 − x_jkt) ≥ Σ_{r=0}^{t−1} (p_i + r) x_ikr    ∀k, t, ∀(i, j) ∈ N × N,    (5)

C_j = Σ_{k=1}^{2} Σ_{t=0}^{T} t x_jkt + p_j    ∀j,    (6)

x_jkt ∈ {0, 1}    ∀j, k, t.    (7)

The objective function (1) seeks to minimize the sum of weighted completion times. Constraint (2) ensures that each job is started at some time on either m/c. Constraint (3) ensures that at any time at most one job may start processing on any machine. Constraint (4) ensures that if job j precedes job j', then the start time of j' is at least p_j time units after the start of j, where A denotes the set of arcs in the graph. Constraint (5) ensures that a job can start processing on an m/c only after all jobs previously assigned to that m/c are completed, where M is a large number. Constraint (6) defines the completion times of all the jobs. Lastly, constraint (7) imposes the binary integrality of the variables.
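To make the roles of constraints (2)–(6) concrete, the following sketch checks a candidate time-indexed assignment against them and evaluates objective (1); the 3-job instance at the bottom is hypothetical and not taken from the paper.

```python
def check_and_cost(x, p, w, arcs):
    """x[j] = (k, t): job j starts on machine k at time t.
    Verifies constraints (3)-(5) and returns objective (1),
    with C_j = t_j + p_j as in constraint (6)."""
    # (3): at most one job starts on a given machine at a given time
    assert len(set(x.values())) == len(x)
    C = {j: t + p[j] for j, (k, t) in x.items()}           # (6)
    for j, j2 in arcs:                                     # (4): precedence
        assert x[j2][1] >= x[j][1] + p[j]
    for j, (k, t) in x.items():                            # (5): no overlap on a machine
        for i, (k2, t2) in x.items():
            if i != j and k == k2 and t2 < t:
                assert t >= t2 + p[i]
    return sum(w[j] * C[j] for j in x)                     # (1)

# Hypothetical 3-job instance with one precedence arc 1 -> 3:
p = {1: 2, 2: 1, 3: 2}
w = {1: 1, 2: 1, 3: 1}
cost = check_and_cost({1: (0, 0), 2: (1, 0), 3: (0, 2)}, p, w, [(1, 3)])
```

Here constraint (2), that each job is started exactly once, is implicit in x being a function of j.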

To illustrate the proposed BIP model and its computational requirements, an example of 10 jobs is shown in Fig. 1; the legend [w_j, p_j] next to a job indicates its weight and processing time, respectively. (The figure also shows two "initial subsets", which are discussed below in Section 3.)

Fig. 1. A sample problem, some initial subsets, and the optimal schedule.

The BIP was modeled in LINGO, and it took 31 seconds to reach the optimal solution. It required 17,814 variables and 2392 constraints. The optimal sequence is as follows:

machine 1: jobs 1, 2, 6, 4, 8
machine 2: jobs 3, 5, 9, 7, 10

for a value of 111.

The size of this BIP model (as measured by the number of variables and the number of constraints) is strongly affected by the fact that one of the indices is a function of the processing times. We are particularly interested in the effect of the number of jobs n and the sum of processing times Σ_j p_j on the number of constraints generated by the model, because the LINGO version we used had a limitation of 32,000 variables and 16,000 constraints. Assuming the ratio of the number of arcs (representing precedence) to the number of jobs to be 1.5 (recall that we have adopted the job-on-node mode of representation, in which the nodes represent the jobs and the arcs represent the precedence among them), one arrives at the following expression:

Number of constraints = 3.5n + 2T(n² + 1),

where T is the scheduling time horizon. Since there are two identical processors available, we can limit T to between 0.5 Σ_j p_j and Σ_j p_j. Thus the number of constraints lies between 3.5n + Σ_j p_j (n² + 1) and 3.5n + 2 Σ_j p_j (n² + 1).
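These bounds are easy to tabulate; a small illustrative sketch (the instance size n = 10 with total processing time 50 is chosen for illustration only):

```python
def constraint_bounds(n, sum_p):
    """Bounds on the number of BIP constraints, 3.5n + 2T(n^2 + 1),
    with the horizon T ranging from 0.5 * sum_p to sum_p."""
    lower = 3.5 * n + sum_p * (n ** 2 + 1)        # T = 0.5 * sum_p
    upper = 3.5 * n + 2 * sum_p * (n ** 2 + 1)    # T = sum_p
    return lower, upper

lo, hi = constraint_bounds(10, 50)   # 10 jobs, total processing time 50
```

The quadratic growth in n, multiplied by the horizon T, is what makes the time-indexed formulation balloon so quickly.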

The table shown in Fig. 2 gives an idea of the numbers discussed. It is therefore inefficient and computationally expensive to attempt to solve the problem by the above integer programming formulation for n > 10 jobs with average processing time larger than 5.

Fig. 2. Growth in the number of constraints.

A total of 50 problems were solved to optimality using this model, ranging in size from 10 to 15 jobs with total processing time Σ p_j between 20 and 40. Fig. 5 in Section 5 tabulates a sample of 11 instances. The table also shows the results of applying the GA procedure described below to these problems. It can be seen that: (i) the LP relaxation is a reasonably good l.b. on the optimal value; (ii) the optimum could be achieved for problem sizes up to 15 jobs, but larger numbers of jobs imposed stricter constraints on the individual job processing times in order not to exceed the software capability; (iii) although not reported in the figure, the BIP solution time ranged between a few seconds and 6 hours on a Pentium III processor. Evidently, larger problems are beyond the software capability available to us, though other software can expand the size of solvable problems to 50 jobs of average duration 20.

2.1. Lower bounds

Lower bounds are often obtained by relaxing one or more constraints of the main problem. Some commonly used relaxations are: relax the precedence among the jobs, permit job preemption, permit processing jobs on different machines (job splitting), or solve the problem as a (continuous) LP. A notable exception to this common practice is the work of Rinnooy Kan et al. (1975) mentioned above.

For our problem, we propose to secure the l.b. by simply relaxing the binary integrality constraints in the BIP. For the 10-jobs problem presented in Fig. 1, the LP relaxation had an optimal value of 100.83, which is a reasonably strong l.b. on the optimum (of value 111). The bound secured from the LP relaxation is compared to the l.b. obtained by using the LP formulation provided by MQS (1998), which uses the job completion times C_j as the decision variables. Although our formulation does not explicitly account for the assignment of jobs to machines, such an assignment can be deduced from the result of the BIP. For the sake of completeness we briefly state the LP of MQS.

MQS-LP:

min B = Σ_{j=1}^{n} w_j C_j    (8)

subject to

Σ_{j∈F} p_j C_j ≥ (1/2m) (Σ_{j∈F} p_j)² + (1/2) Σ_{j∈F} p_j²    ∀F ⊆ N,    (9)

C_j ≥ p_j    ∀j,    (10)

C_j ≥ C_i + p_j    ∀ i ≺ j.    (11)

Constraint (9) is a relatively weak (because machine assignments are ignored) way of expressing the requirement that each machine can process at most one job at a time. Constraints (10) and (11) take care of the completion times and the precedence relations, respectively. Observe that here we are using the stronger form of Schulz (1996) rather than the weaker form

(1/2m) [ (Σ_{j∈F} p_j)² + Σ_{j∈F} p_j² ]    ∀F ⊆ N,

used by Hall et al. (1997). The fundamental idea behind the bound of (9) is that these inequalities define the convex hull of the polytope of the original problem, as demonstrated by Queyranne and Schulz (1994). Since none of the inequalities of (9) is redundant, all must be enumerated, which mars the applicability of this approach to large-scale problems. In fact, we could not exceed n = 13 jobs in our routine experimentation, from which we concluded that our l.b. based on the solution of the BIP as a regular LP is as good as, if not actually superior to, that of MQS; see the tables in Figs. 5 and 6 in Section 5. No further use of MQS-LP was made in the sequel.
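As a quick numerical illustration of inequality (9), consider a feasible two-machine schedule (hypothetical data, not from the paper) and verify the inequality for the full set F = N:

```python
m = 2
p = {1: 3, 2: 2, 3: 2}
C = {1: 3, 2: 2, 3: 4}   # machine 1: job 1; machine 2: jobs 2 then 3

# Left-hand side of (9) for F = N:
lhs = sum(p[j] * C[j] for j in p)
# Right-hand side of (9): (1/2m)(sum p_j)^2 + (1/2) sum p_j^2
sum_p = sum(p.values())
rhs = (sum_p ** 2) / (2 * m) + sum(pj ** 2 for pj in p.values()) / 2
```

Any feasible schedule satisfies (9) for every F; the slack here (21 versus 20.75) shows how loose the bound can be, since machine assignments are ignored.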


3. A DP model

The main motivation to pursue the DP approach stems from the following two contributions. (For other contributions to the single machine scheduling problem with various criteria, see Ibaraki and Nakamura (1994) and the references cited therein. See also Mondal and Sen (2001) for a treatment of the WET problem with a restrictive common due date.)

1. The work by Hall and Posner (1991), in which they developed a DP algorithm of pseudo-polynomial order to solve the WET problem involving a single m/c with unrelated jobs.

2. The extension of the above formulation by Alidaee (1993), again using DP, to solve the problem of minimizing the sum of weighted completion times of unrelated jobs on two identical parallel machines.

The keys to the success of these two DP formulations are twofold. The first is that the order in which the jobs are selected in the iterative procedure of either model is known (in particular, it is wspt, which stands for weighted shortest processing time). The second is that subsets of jobs, together with the "occupancy" of the "early" jobs, determine uniquely the completion time of the latest "tardy" job.
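The wspt ranking referred to above is Smith's rule: sort the jobs by the ratio p_j / w_j in nondecreasing order. A minimal sketch (the job data are hypothetical):

```python
def wspt_order(p, w):
    """Smith's rule: rank jobs by nondecreasing p_j / w_j."""
    return sorted(p, key=lambda j: p[j] / w[j])

# Three hypothetical jobs: ratios are 4/1, 2/1 and 3/3
order = wspt_order({1: 4, 2: 2, 3: 3}, {1: 1, 2: 1, 3: 3})
```

On a single machine without precedence constraints, this ordering minimizes Σ w_j C_j exactly; here it serves only as the known selection order that the two DP formulations exploit.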

Can the order in which the jobs are considered in the presence of precedence relations be defined in advance of the procedure for sequencing on two machines? A seemingly logical way to proceed is to seek the optimal sequence on one machine subject to the precedence constraints, very much in the same way that both contributions started with the order given by the wspt ranking, and then impose the WET logic on that order. Presumably, the one-processor schedule, taken as a list, provides some priority information on how the jobs are to be considered. In particular, the case of a single machine in which the jobs possess series–parallel (s/p) precedence, under the objective of minimizing weighted completion times, has been solved by an elegant procedure due to Lawler (1973). Surely, one may argue, such an optimal sequence on one machine should give a clue to the order in which the jobs are to be considered when scheduled on two machines. Unfortunately, this is not true! The order given by the one-machine optimal sequence secured from Lawler's procedure proved to be useless for our problem; one can easily construct counter-examples in which the optimal sequence on two machines violates the order established by the one-machine optimal solution. The reason is as follows.

Unlike with the makespan criterion, the completion time of every job is important for the weighted completion time criterion. When trying to convert the one-machine schedule into a parallel-machines one, precedence constraints prevent complete parallelization (of the jobs). Thus, we may need to execute jobs out of order from the list to benefit from parallelization, which impairs the utility of the one-machine schedule as a guide to the order in which jobs are considered for the multiple-machines case. If all the p_i's are identical, say 1, we can afford to use the list scheduling algorithm. But if the p_i's are different, that is no longer the case, since a job could keep a processor busy while delaying more profitable jobs that become available. Therefore, there seems to be no escape from devising a new approach to the determination of the allocation cum sequencing to the machines. However, we retain the WET optic of Hall and Posner (1991) and Alidaee (1993).

The Achilles' heel of the DP procedure proposed next is the need to enumerate all initial subsets of jobs (defined as sets of jobs with no predecessors outside the set) and to maintain a complete list of alternate optima at all intermediate stages, since subsequent expansion of a subset of jobs may favor one alternative over the others, despite their being of equal value at a particular stage of iteration.

The construction of the DP model proceeds as follows. Let I_k denote an unordered set of initial jobs of cardinality k, such as I_5 = {1, 2, 3, 4, 7} in Fig. 1. It defines the state in the DP model. The stages of the DP model are defined by the cardinality of the set I_k, k = 1, ..., n, where n is the number of jobs. The precedence relations may restrict the set of jobs eligible to be terminal jobs in the set I_k. Denote the set of eligible terminal jobs in I_k by L(I_k). For instance, in the set I_5 = {1, 2, 3, 4, 7}, it is clear that L(I_5) = {2, 3, 7}, since any one of these three jobs may be the last job in the sequence. However, in the same graph the set I_6 = {1, 2, 3, 5, 6, 9} has L(I_6) = {9}. Let l_k denote the last job in the sequence of I_k. In the set I_5, the terminal job l_5 may be any job of the subset L(I_5), but in the set I_6 only job 9 is eligible to be l_6. Let d be a fictitious "due date", with

d ≥ Σ_j p_j.
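The states of the DP (initial subsets) and the eligible terminal jobs L(I_k) can be sketched as follows; the diamond-shaped dag below is a hypothetical example, not the graph of Fig. 1. An initial set must contain all predecessors of each of its members, and a job is an eligible terminal if no other member of the set succeeds it.

```python
from itertools import combinations

def initial_sets(jobs, preds):
    """All subsets closed under predecessors (the DP states)."""
    out = []
    for r in range(1, len(jobs) + 1):
        for comb in combinations(sorted(jobs), r):
            s = set(comb)
            if all(preds.get(j, set()) <= s for j in s):
                out.append(s)
    return out

def eligible_terminals(I, preds):
    """L(I): members of I that precede no other member of I."""
    return {j for j in I if not any(j in preds.get(i, set()) for i in I)}

# Hypothetical diamond-shaped dag: 1 -> {2, 3} -> 4
preds = {2: {1}, 3: {1}, 4: {2, 3}}
states = initial_sets({1, 2, 3, 4}, preds)
```

This brute-force enumeration over all 2^n subsets is only for illustration; it is exactly this enumeration of initial subsets that limits the DP in practice.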

Since we are concerned with earliness and tardiness, the measure of time proceeds in both directions away from the due date d. Jobs that complete processing before d are termed "early" and denoted by the set E, and those that complete processing after d are termed "tardy" and denoted by the set T. Therefore, l_k ∈ L(I_k) would appear either first among the "early" jobs or last among the "tardy" jobs; see Fig. 3(B).

The total processing time of any subset I of jobs is denoted by P(I) = Σ_{j∈I} p_j, which is divided in some fashion between the two m/c's. Let C_j denote the completion time of job j measured from d. Let I_{k−1} = I_k − {l_k} be the subset of jobs in I_k exclusive of job l_k. Suppose we know the optimal sequence of the jobs in I_{k−1}, and denote the subset of "early" jobs by

I^E_{k−1} = {j ∈ I_{k−1} | C_j ≤ d}

Fig. 3. Illustration of earliness and tardiness.

and the subset of "tardy" jobs by

I^T_{k−1} = {j ∈ I_{k−1} | C_j > d}.

Let the set of jobs immediately preceding job l_k in the precedence graph be denoted by G⁻¹(l_k). Then job l_k, being the last job in the sequence, may be either the "first early" or the "last tardy" job. If it is the "first early", then its completion time is

C^E_{l_k} = max{ max_{j∈G⁻¹(l_k)} C_j, P(I^E_{k−1}) } + p_{l_k},

with cost equal to w_{l_k} C^E_{l_k}. On the other hand, if job l_k is the "last tardy", then its completion time is

C^T_{l_k} = max{ max_{j∈G⁻¹(l_k)} C_j, P(I^T_{k−1}) } + p_{l_k},

with cost given by w_{l_k} C^T_{l_k}. The reason for defining C^E_{l_k} (respectively C^T_{l_k}) by the above two expressions is the possibility of a job j ∈ G⁻¹(l_k) being in the set T (set E) in the optimal sequence of I_{k−1}.

We are now poised to formulate the DP extremal equation. Let f(I_k, l_k) denote the minimal cost of the jobs in I_k when job l_k is last. Then one may write the extremal equation as

f(I_k, l_k) = min_{l_k ∈ L(I_k)} { min{ w_{l_k} C^E_{l_k}, w_{l_k} C^T_{l_k} } + f(I_k − {l_k}, l_{k−1}) },    k = 1, ..., n.    (12)
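The two completion-time expressions and the inner minimization of (12) can be transcribed directly; this is an illustrative sketch only (the numbers are hypothetical, and the bookkeeping of early/tardy occupancies and alternate optima required by the full DP is omitted):

```python
def stage_cost(w_l, p_l, pred_completions, P_early, P_tardy):
    """min{ w_l * C^E, w_l * C^T } for a candidate last job l_k, given
    the completion times of its predecessors (measured from d) and the
    occupancies P(I^E_{k-1}) and P(I^T_{k-1})."""
    CE = max([P_early] + pred_completions) + p_l   # l_k as "first early"
    CT = max([P_tardy] + pred_completions) + p_l   # l_k as "last tardy"
    return min(w_l * CE, w_l * CT)

# One predecessor finishing at 3; early side occupied up to 5, tardy up to 2:
best = stage_cost(w_l=2, p_l=4, pred_completions=[3], P_early=5, P_tardy=2)
```

Here the tardy placement wins (cost 14 versus 18), precisely the kind of choice the inner min of (12) resolves at each stage.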

Observe that there are as many values of the optimum f(I_k − {l_k}, l_{k−1}) as there are elements in the set L(I_k − {l_k}).

To gain an idea of the performance of this DP approach, we solved the 10-jobs problem given in Fig. 1. We had to enumerate 44 initial subsets, and secured the same optimal sequence depicted in Fig. 1 and the same optimal value of 111.

The validity of the extremal equation (12) depends crucially on the separability of the objective function. We assert that the addition of job l_k to the set I_{k−1} does not 'disturb' the optimal sequence of the latter, in the sense of either switching the order of any two jobs in it or inserting itself between two jobs in I_{k−1}. This can be seen from the following three observations:

1. Job l_k cannot insert itself in the sequence of the jobs that precede it.

2. If job l_k should occupy an earlier position in the sequence, thus inserting itself somewhere in the set I_{k−1}, then some other job, say l'_k, would be last in the set I_k, which implies that l_k has been previously considered in the subset I_{k−1}. Using the same logic with I_{k−1}, we are led to subsets I_{k−2}, ..., I_1, in any of which l_k could have occupied a prior position. Therefore, considering job l_k at stage k does not interfere with its consideration in prior stages, if precedence permits it.

3. The presence of job l_k last cannot cause two jobs, say i and j, in the set I_{k−1} to exchange positions. Here we must consider two eventualities:
(a) The exchange of positions is between two "internal" jobs; i.e., neither i nor j is a "last" job in the sequence of the set I_{k−1}. This eventuality is impossible, since such an interchange would have to improve the cost of the set I_{k−1} taken by itself by permitting some job to finish earlier, which contradicts the optimality of its current sequence.
(b) The exchange of positions is between a "last" job in the set I_{k−1}, suppose it is job j, and an "internal" job, suppose it is job i. But then the same argument presented relative to job l_k can be repeated verbatim relative to job j, which would eventually lead to the situation described in arguments 1, 2, or 3(a).

Therefore the superiority or inferiority of job l_k being last shall become evident without disturbing the sequence of any job in the set I_{k−1}, as asserted. This establishes the independence of the decision on l_k from prior decisions, and the additivity of the cost of appending l_k to the sequence of the set I_{k−1}.

The only caveat in applying the extremal equation (12) is that at each stage of the iterations one must retain all alternate optima at each state with a specified "last" job. This is because alternate optima represent different positions of the jobs in the earliness–tardiness dichotomy, which may have an impact on the placement (and cost) of subsequent jobs that are appended to the partial sequence. This fact was amply evident in the example cited above, and it is the cause behind the computational difficulty in large-scale problems.

4. A GA approach

The fact that our problem is intractable to polynomial-time algorithms suggests the use of compu-search approaches (alternatively known as meta-heuristics) such as GA, simulated annealing, and tabu search, among many others. We propose to solve our problem using GA, and this section is devoted to a detailed description of its construction and implementation.

The distinctive feature of our algorithm, which sets it apart from other contributions using GA in scheduling problems, lies in (i) the structure of the chromosome representation, and (ii) the manner in which the initial population is generated.

Each chromosome represents a list of task priorities. Since task priorities are dependent on the input, which is a directed acyclic graph (dag), different sets of task priorities represent different problem instances. We use different priority rules to derive the first few chromosomes and then alter them slightly to construct the rest of the initial population. As shall presently be seen, this method ensures faster convergence compared to starting with a set of randomly ordered chromosomes. The fitness of a chromosome is determined from the value of the schedule secured from its list using the first available machine (FAM) rule.
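One simple way to "alter slightly" a priority list while keeping it precedence-feasible is a random adjacent swap that is rejected whenever it would place a job before one of its predecessors. The paper does not specify the exact perturbation used; the sketch below is one plausible realization under that assumption, with a hypothetical dag.

```python
import random

def perturb(order, preds):
    """Swap a random adjacent pair unless it violates precedence."""
    i = random.randrange(len(order) - 1)
    a, b = order[i], order[i + 1]
    if a in preds.get(b, set()):       # a must stay before b: reject swap
        return list(order)
    return order[:i] + [b, a] + order[i + 2:]

random.seed(1)
preds = {3: {1}, 4: {2}}               # hypothetical dag: 1 -> 3, 2 -> 4
pop = [perturb([1, 2, 3, 4], preds) for _ in range(5)]
```

Every chromosome produced this way remains a valid scheduling list, so the initial population is feasible by construction.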

4.1. The structure of the GA

GAs, introduced by Holland in the 1970s, are search techniques based on the concept of evolution (Davis, 1991; Goldberg, 1989). Given a well-defined search space in which each point is represented by a bit string, called a chromosome, a GA is applied with its three search operators (selection, crossover and mutation) to transform a population of chromosomes, with the objective of improving their "quality". Before the search starts, a set of chromosomes is chosen from the search space to form the initial population. The genetic search operators are then applied one after another to systematically obtain a new generation of chromosomes with a better overall quality. This process is repeated until the stopping criterion is met, and the best solution of the last generation is reported as the final solution.

For an efficient GA search, in addition to a proper solution structure, it is necessary that the initial population of schedules be a diverse representative of the search space. Furthermore, the solution encoding should permit:

- a large diversity in a small population,
- easy crossover and mutation operations, and
- an easy computation of the objective function.

All three desired characteristics are realized in our approach.

4.2. The proposed GA

Classical optimal scheduling algorithms, like Hu's (1961) and Coffman's (1976) algorithms, are based on the list scheduling approach, in which the nodes of the dag are first arranged as a list such that the ordering of the list preserves the precedence constraints.

It is proposed that a similar approach be used to calculate the objective function value, or fitness, of a chromosome in the GA; i.e., beginning from the first precedence-feasible job in the list, each job is removed and assigned to the machine that allows the earliest start time.
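As a concrete sketch of this decoding step (our illustration, not the authors' code; the job data and predecessor sets below are hypothetical), a job's start time is the earliest moment at which some machine is free and all of its predecessors are complete:

```python
# Sketch of fitness evaluation by the first-available-machine (FAM) rule:
# walk the precedence-feasible list and put each job on the machine that
# gives it the earliest start.  Job data below are hypothetical.

def fam_fitness(job_list, p, w, preds, machines=2):
    free = [0] * machines        # time at which each machine becomes free
    C = {}                       # completion times
    total = 0
    for j in job_list:
        ready = max((C[i] for i in preds.get(j, [])), default=0)
        k = min(range(machines), key=lambda m: max(free[m], ready))
        C[j] = max(free[k], ready) + p[j]
        free[k] = C[j]
        total += w[j] * C[j]     # accumulate weighted completion time
    return total

# 4 jobs, precedence 1->3 and 2->4, two machines
p = {1: 2, 2: 3, 3: 1, 4: 2}
w = {1: 1, 2: 2, 3: 1, 4: 3}
print(fam_fitness([1, 2, 3, 4], p, w, {3: [1], 4: [2]}))  # -> 26
```

The same routine evaluates any precedence-feasible permutation, which is all the GA needs from its decoder.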

It is evident that an optimal ordering of jobs in the list is needed to generate an optimal schedule using this approach. While optimal scheduling lists can be easily constructed for certain restricted cases (e.g., the unit-weight out-tree of Hu's algorithm, where an out-tree is a connected, acyclic graph in which each node has at most one parent), such lists cannot be determined for arbitrary dags. Also, there is an exponential number of candidate lists that can be used for scheduling, and an exhaustive search for an optimal list can be very time consuming indeed, if not impossible.


The likelihood of the existence of lists leading to optimal schedules under the FAM rule is very high, although there is no proof that such a list always exists. Since an optimal schedule is not unique, the list which leads to the optimal schedule is also not unique. A solution neighborhood can then be defined for the genetic search. For instance, we can start from an initial list from which we obtain an initial schedule. We then systematically modify the ordering within the list such that precedence is maintained. From the new list we construct a new schedule. If the schedule is better, we adopt it; else we test another modified list.

4.3. Chromosome representation

From the perspective of representation, two broad classifications exist, namely direct and indirect representations. In the former case a complete and feasible schedule is an individual of the evolving population. In the latter scenario, the population consists of encoded schedules. Because the representation does not directly provide a schedule, a schedule builder or decoder is necessary to transform the chromosome into a schedule and evaluate it. The decoder could be in the form of implementing the naïve list scheduling algorithm or similar priority rules.

Consider the dag shown in Fig. 1. A direct representation of a chromosome could be the following four-tuple:

⟨task_id, proc_id, init_time, end_time⟩,

where task_id identifies the task to be allocated, proc_id identifies the processor on which the task is to be processed, init_time is the start time of task_id on proc_id, and end_time denotes the completion time of the task.

The chromosome representation for the Gantt chart of Fig. 1 is shown in (13). An example of indirect representation for the same schedule is shown in (14). As seen, this representation just indicates the assignments of the jobs to the processors.

⟨1,1,0,2⟩ ⟨2,1,2,5⟩ ⟨3,2,2,3⟩ ⟨4,2,3,5⟩ ⟨5,1,5,8⟩ ⟨6,2,5,8⟩ ⟨7,1,8,9⟩ ⟨8,1,9,10⟩ (13)



Task:      1 2 3 4 5 6 7 8 9 10
Processor: 1 1 2 2 1 2 1 1 2 2   (14)

In our case, we use a variant of the indirect representation. A permutation which respects all precedence constraints is a chromosome. The fitness of a chromosome is defined as ∑ w_j C_j, where w_j represents the weight of job j and C_j is the completion time of job j in the schedule constructed by using the FAM rule.

4.3.1. Representation of the precedence constraints

The precedence constraints among the jobs determine the size of the search space and also add complexity to the problem. In our approach, precedence constraints are represented by the adjacency matrix A. It is a node–node adjacency matrix of 1's and 0's, where a 1 in location (i, j) implies that i precedes j, written i ≺ j. Using the concept of the transitive closure of graphs, we also define the earliest position (EP) and latest position (LP) for any given node. These positions determine the earliest and the latest feasible positions of a given job in a list. The EPs and LPs for all the jobs in Fig. 1 are shown in (15). This information is used for feasibility checks of chromosomes during mutation.

Job: 1 2 3 4 5 6 7 8 9 10
EP:  1 2 2 2 4 4 7 8 6 10
LP:  1 5 4 5 6 6 7 8 9 10   (15)
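To make the EP/LP construction concrete, here is one plausible implementation (ours, not the paper's): compute the transitive closure of A with Warshall's algorithm, then take EP(j) = 1 + (number of ancestors of j) and LP(j) = n − (number of descendants of j). The four-node dag used below is hypothetical; Fig. 1 is not reproduced here.

```python
# Sketch: earliest/latest feasible list positions from the adjacency
# matrix A via transitive closure (Warshall's algorithm).  A job cannot
# appear before all of its ancestors nor after any of its descendants.

def transitive_closure(A):
    n = len(A)
    R = [row[:] for row in A]
    for k in range(n):
        for i in range(n):
            if R[i][k]:
                for j in range(n):
                    R[i][j] = R[i][j] or R[k][j]
    return R

def ep_lp(A):
    n = len(A)
    R = transitive_closure(A)
    ep = [1 + sum(R[i][j] for i in range(n)) for j in range(n)]  # ancestors
    lp = [n - sum(R[j][i] for i in range(n)) for j in range(n)]  # descendants
    return ep, lp

# Hypothetical diamond dag: 0->1, 0->2, 1->3, 2->3
A = [[0, 1, 1, 0],
     [0, 0, 0, 1],
     [0, 0, 0, 1],
     [0, 0, 0, 0]]
print(ep_lp(A))  # -> ([1, 2, 2, 4], [1, 3, 3, 4])
```

Once R is stored, membership tests during the search are constant-time lookups.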

4.3.2. Generation of the initial population

The initial population is generated from a set of scheduling lists that are constructed using a set of priority rules, as described next.

1. Weighted subgraph rule: Ready jobs are assigned to the machines in topological order as and when a machine becomes available. In case of a contest, i.e., when there are more ready jobs than machines, the priority is decided as follows:
   (a) Case 1 (one machine available): For each job in contention, calculate ∑ w_j / ∑ p_j, in which the summation in both numerator and denominator is over the job under consideration and the jobs in the subgraph of its successors. The job with the highest value is scheduled first, with ties broken lexicographically (which essentially means arbitrarily).
   (b) Case 2 (both machines available): The same procedure is followed and the jobs are sorted in nonincreasing order of their weighted subgraph values, with ties broken lexicographically. The first two jobs in this list are scheduled.

2. WSPT rule: This method is similar to the previous one except that the priority rule is the WSPT rule.

3. Lawler's 1-machine optimal schedule: For the given dag, we first transform it into a series/parallel (s/p) graph by the addition of auxiliary precedence relations following the procedure of Elmaghraby (2001), then apply Lawler's (1973) algorithm to obtain the list. This list is then scheduled on the two machines using the FAM rule.

4. LP-relaxation list: This method is due to Munier, Queyranne and Schulz (MQS, 1998). The LP-relaxation of Eqs. (8)–(11) is solved to obtain the completion time vectors of the jobs. Let the LP completion time of job j be C_j^LP. Accordingly, define the LP-mid-point m_j^LP = C_j^LP − p_j/2. The LP-mid-point list is now defined by sorting the jobs in nondecreasing order of their mid-points m_j^LP.

These different ordering schemes not only provide the necessary diversity but also represent a population of better fitness than a set of totally random topological orderings. A whole population is then generated by random valid swapping of jobs in the lists.
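The weighted subgraph priority of rule 1 can be sketched as follows (our reading of the rule; the dag and data below are illustrative, not from the paper): for each candidate job, sum the weights and processing times over the job and all of its transitive successors, and rank by the ratio.

```python
# Sketch of the "weighted subgraph" priority: ratio of summed weights to
# summed processing times over a job plus its transitive successors.

def successors_closure(succ, j):
    """All jobs reachable from j (excluding j), by iterative DFS."""
    seen, stack = set(), list(succ.get(j, []))
    while stack:
        k = stack.pop()
        if k not in seen:
            seen.add(k)
            stack.extend(succ.get(k, []))
    return seen

def weighted_subgraph_priority(j, succ, p, w):
    sub = {j} | successors_closure(succ, j)
    return sum(w[i] for i in sub) / sum(p[i] for i in sub)

# Hypothetical dag 1->3, 2->3 with illustrative weights/times
succ = {1: [3], 2: [3], 3: []}
p = {1: 2, 2: 4, 3: 2}
w = {1: 3, 2: 1, 3: 2}
print(weighted_subgraph_priority(1, succ, p, w))  # job 1: (3+2)/(2+2) = 1.25
print(weighted_subgraph_priority(2, succ, p, w))  # job 2: (1+2)/(4+2) = 0.5
```

Here job 1 wins the contest: its subgraph carries more weight per unit of processing time.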

4.3.3. The genetic operators

Since we are dealing with the modification of ordered lists, standard crossover and mutation operators may violate the precedence constraints, thereby generating invalid lists. So we need to use other well-defined genetic operators. Three kinds of crossover operators, namely the order crossover (OCX), the partially mapped crossover (PMX), and the cycle crossover (CCX) operators, are viable candidates. By using simple counter-examples, one can show that the PMX and CCX operators may produce invalid lists. Therefore, in our algorithm, only the OCX operator is used. We also describe a mutation operator based on node-swapping.

The OCX operator. We consider a single-point order crossover operator. Given two parents, select a cutoff point (the same for both parents) randomly in the two chromosomes. We first pass the left segment from the first parent to the child. Then we construct the right segment of the child from the original genes of the other parent in the same order as they appear, deleting the genes that are already in the left segment and keeping the ones that are missing. An example of the crossover operator is shown in Fig. 4(a).

The chromosomes shown in the figure are valid topological orderings of the dag in Fig. 1. The cutoff point is randomly selected and falls after the gene in position 4. The left segment {1,3,2,4} of parent 1 is passed directly to the child. The original chromosome of parent 2 is {1, 2, 3, 5, 6, 9, 4, 7, 8, 10}, of which {1, 3, 2, 4} have already been accounted for in the left segment. This leaves the right segment containing the genes {5, 6, 7, 8, 9, 10} of parent 2 to be appended to the child in the order of their appearance in parent 2. This operator permits fast processing and is easy to implement. More importantly, it always produces valid offspring.
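The OCX step just described reduces to a few lines; this sketch (ours) reproduces the Fig. 4(a) example:

```python
# Single-point order crossover (OCX): copy the left segment of parent 1,
# then append parent 2's remaining genes in their parent-2 order.

def ocx(parent1, parent2, cut):
    left = parent1[:cut]
    right = [g for g in parent2 if g not in left]
    return left + right

p1 = [1, 3, 2, 4, 5, 6, 7, 8, 9, 10]
p2 = [1, 2, 3, 5, 6, 9, 4, 7, 8, 10]
print(ocx(p1, p2, 4))  # -> [1, 3, 2, 4, 5, 6, 9, 7, 8, 10]
```

Because the right segment inherits a relative order that was precedence-feasible in parent 2, and the left segment is a prefix of the feasible parent 1, the child is always a valid topological order.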

The mutation operator. A valid topological order can be transformed into another topological order by swapping some nodes. However, not every pair of nodes can be swapped without violating

parent 1: {1, 3, 2, 4, 5, 6, 7, 8, 9, 10}
parent 2: {1, 2, 3, 5, 6, 9, 4, 7, 8, 10}
child:    {1, 3, 2, 4, 5, 6, 9, 7, 8, 10}

mutation: {1, 3, 2, 4, 5, 6, 7, 8, 9, 10} → {1, 3, 2, 6, 5, 4, 7, 8, 9, 10}

Fig. 4. Illustration of (a) crossover and (b) mutation operators.

precedence constraints. Two nodes are interchangeable if they do not lie on the same path in the dag. Using the concept of the transitive closure of graphs, we can check (in constant time) whether or not two nodes are interchangeable during the search. This implies that we can efficiently test whether two randomly selected nodes are interchangeable and, if so, swap them to check the new objective function value. Such swapping defines the random search neighborhood, whose size is O(n²) since there is a maximum of C(n, 2) pairs of interchangeable nodes. It should be noted that such swapping may result in infeasibility if proper post-processing is not done. For example, in Fig. 4(b), jobs 2 and 8 are interchangeable since they do not lie on the same path in the dag. But if we consider the list {1, 3, 2, 4, 5, 6, 7, 8, 9, 10}, then simply swapping the nodes produces the list {1, 3, 8, 4, 5, 6, 7, 2, 9, 10}, which is clearly infeasible because job 8 cannot be processed before job 4. In order to overcome this occurrence, we use the earliest–latest positions table (15) to determine if indeed the swapped jobs' positions are permissible. Satisfaction of both conditions is sufficient to ensure feasible interchangeability.

We now define the swap mutation operator which swaps two interchangeable nodes in a given chromosome. An example of this operator is shown in Fig. 4(b). The mutation probability for smaller problem sizes is derived empirically, as explained in the next section. For problem sizes of 40 jobs and more it is calculated by the formula
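The two feasibility tests (interchangeability via the transitive closure R, and the EP/LP position check) can be sketched as follows. This is our illustration: R, ep and lp below belong to a hypothetical four-job diamond dag, and list positions are 1-based as in table (15).

```python
# Swap mutation with feasibility checks (sketch under assumed data).
# R is the transitive closure of the hypothetical dag 0->1, 0->2, 1->3, 2->3;
# ep/lp are its earliest/latest feasible positions (1-based).

def try_swap(chrom, i, j, R, ep, lp):
    """Return the mutated list, or None if the swap would be infeasible."""
    a, b = chrom[i], chrom[j]
    on_same_path = R[a][b] or R[b][a]                    # constant-time check
    in_window = ep[b] <= i + 1 <= lp[b] and ep[a] <= j + 1 <= lp[a]
    if on_same_path or not in_window:
        return None
    c = chrom[:]
    c[i], c[j] = b, a
    return c

R = [[0, 1, 1, 1],
     [0, 0, 0, 1],
     [0, 0, 0, 1],
     [0, 0, 0, 0]]
ep, lp = [1, 2, 2, 4], [1, 3, 3, 4]
print(try_swap([0, 1, 2, 3], 1, 2, R, ep, lp))  # -> [0, 2, 1, 3]
print(try_swap([0, 1, 2, 3], 0, 3, R, ep, lp))  # -> None (0 precedes 3)
```

A rejected swap simply triggers another random pair, keeping every chromosome in the population precedence-feasible.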

m_m = β (f′ − f_min) / (f_avg − f_min),   (16)

where f′ is the fitness of the chromosome, f_min the current best fitness, f_avg the mean fitness of the population, and β a number between 0 and 1.
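Eq. (16) translates directly to code; this is a sketch, and the guard against a zero denominator is our addition (the default β = 0.6 is the value the authors report fixing after experimentation):

```python
# Adaptive mutation probability of Eq. (16): chromosomes whose fitness is
# far above the current best (a minimization problem) mutate more often.

def adaptive_mutation_rate(f, f_min, f_avg, beta=0.6):
    if f_avg == f_min:           # degenerate population with no fitness spread
        return 0.0
    return beta * (f - f_min) / (f_avg - f_min)

# a chromosome halfway between the best and the average mutates at beta/2
print(adaptive_mutation_rate(105, 100, 110))  # -> 0.3
```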

4.3.4. Parameter setting

As discussed above, the genetic search method is guided by the 'tuning' of three parameters, namely: population size, crossover rate m_c, and mutation rate m_m. We chose these parameters empirically, within the ranges shown below:

- Population size: 25–75 chromosomes (step size = 5),
- crossover rate: 0–100% (step size = 10%), and
- mutation rate: 0–100% (step size = 10%) and adaptive.

Fig. 5. Computation results—two-machine scenario.

Fig. 6. Computation results—three-machine scenario.

The entire process was done in three rounds. Initially we fixed all parameters at specific values within the ranges above. We also fixed β of Eq. (16) at 0.5. We then varied the population size in steps and kept the best one, defined to be the smallest size that resulted in the lowest objective function value appearing in at least 5 generations. Next we varied the crossover rate while keeping the other two parameters fixed. We finished the first round by varying the mutation rate. The next two rounds were carried out by starting with varying the crossover rate and the mutation rate, respectively. This procedure was carried out for each of the problem sizes of 10, 20, 40 and 80 jobs. Results were very similar for crossover and mutation rates, independently of the population size. We found that for smaller problem sizes, a population size of 30 and a crossover rate between 0.4 and 0.6 gave the best results. Using mutation rates between 0.1 and 0.3 gave better results compared to the use of the adaptive mutation operator. After a few experiments with β ranging from 0.1 to 0.8, we decided to fix it at 0.6 for all problem sizes. With problem sizes of 40 and 80 jobs, we achieved better results when the population size was 50, the crossover rate between 0.5 and 0.7, and the mutation operator adaptive.

The stopping criterion used in all cases was the number of generations: 20 for problem sizes up to 20 jobs, and 30 for problem sizes greater than 20 jobs.
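Putting the pieces together, the overall search can be sketched as a generational loop with elitist replacement. This is our schematic, not the authors' exact procedure; the selection scheme and the toy fitness, crossover and mutation below are illustrative placeholders for the operators of Section 4.3.3.

```python
import random

# Schematic GA loop: fixed generation count as the stopping rule,
# crossover applied with a given probability, elitist replacement.

def ga(init_pop, fitness, crossover, mutate, generations=20,
       crossover_rate=0.5, rng=None):
    rng = rng or random.Random(0)
    pop = sorted(init_pop, key=fitness)
    size = len(pop)
    for _ in range(generations):
        children = []
        while len(children) < size:
            a, b = rng.sample(pop[:max(2, size // 2)], 2)  # favor fitter half
            child = crossover(a, b) if rng.random() < crossover_rate else a[:]
            children.append(mutate(child, rng))
        pop = sorted(pop + children, key=fitness)[:size]   # keep the best
    return pop[0]

# Toy instance: drive permutations of 0..5 toward the identity ordering.
fitness = lambda s: sum(abs(v - i) for i, v in enumerate(s))
def crossover(a, b):
    left = a[:len(a) // 2]
    return left + [g for g in b if g not in left]
def mutate(c, rng):
    c = c[:]
    i, j = rng.sample(range(len(c)), 2)
    c[i], c[j] = c[j], c[i]
    return c

rng = random.Random(1)
init = [rng.sample(range(6), 6) for _ in range(10)]
best = ga(init, fitness, crossover, mutate, generations=30, rng=rng)
```

Because replacement is elitist, the best solution found can never degrade from one generation to the next, which is why early abortion still yields a usable schedule.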

5. Computational results

In order to get a feel for the performance of the GA, 50 problems were constructed that were within LINGO's solving capability. The experiments were conducted on a Sun Ultra workstation, and all programs were written in MATLAB. The tables in Figs. 5 and 6 give the results for the two- and three-machine cases, respectively. The proposed GA reached the optimal value for all problems in 15 generations or less.

Problem instances for 20, 40, and 80 jobs were also generated. Since these are well beyond the computing capability of the LINGO solver available to us, the performance of the GA was evaluated against the l.b. generated from the LP solution (written as l.b. (LP)). Fig. 7 summarizes the performance in terms of percentage deviation from the l.b. (LP).

The average computation times for all the problem instances are presented in Fig. 9. Since the computation overhead in MATLAB is significantly higher than in C/C++, these times do not serve as a performance index of our GA in the true sense.

Additional problems were also generated using the DAG generator developed by Elmaghraby et


Fig. 7. Randomly generated instances—deviation of GA from LP.

Fig. 8. Randomly generated instances using DAGEN.


al. (1996), which uses the node reduction index (n.r.i.; Bein et al., 1992) as one of the metrics in randomizing the topology of the precedence graph. The n.r.i. is the number of nodes to ``reduce'', i.e., eliminate, in the job-on-arc mode of representation of the precedence relations in order to render the graph series/parallel; in some sense, it measures the degree of deviation of the given graph from being series/parallel. In all, 12 instances were generated, of which 8 had an n.r.i. of 6, and 4 an n.r.i. of 8. The weights and processing times were generated from a uniform distribution as shown in the table of Fig. 8. As evident from the table, the GA solution is no more than 1% from the l.b. (LP). This demonstrates the effectiveness of both the GA as a solution methodology and the LP relaxation as an effective l.b. (Fig. 9).

6. Summary and conclusions

We have investigated the problem Pm|prec|∑ w_j C_j under the assumption of m = 2 identical machines in the case of the BIP and the DP models, and m = 2 or 3 in the case of the GA procedure. The precedence relations are arbitrary.


Fig. 9. Average computation times.


We have developed two mathematical programming models: a binary integer program (BIP) that can accommodate up to approximately 10 jobs with an average job processing time of 5, and a dynamic program (DP) that extends the scope of solution to approximately 25 jobs with an average job processing time of 10. For larger problems we have developed a rather efficient GA procedure. In this study we have implemented it on two and three processors; it is easily extendable to more than three processors.

This research may be extended in at least two directions. The first is to permit fractional allocation of a ``machine'' to a job, if more gainful. This would emulate, for instance, the case of dividing a person's time equally between two jobs in the same interval of time. The second is to have different resources that are available in multiple units each. This would be the case of jobs demanding two (or more) different resources simultaneously, say a truck and a bulldozer, where there are several units of each resource. The resolution of these extensions would go a long way towards resolving the well-known resource-constrained project scheduling problem (RCPSP) in project planning and control.

References

Adolphson, D., Hu, T.C., 1973. Optimal linear ordering. SIAM Journal on Applied Mathematics 25, 403–423.

Alidaee, B., 1993. Schedule of n jobs on two identical machines to minimize weighted mean flow time. Computers & Industrial Engineering 24, 53–55.

Bein, W.W., Kamburowski, J., Stallmann, M.F.M., 1992. Optimal reduction of two-terminal directed acyclic graphs. SIAM Journal on Computing 21, 1112–1129.

Chekuri, C., Motwani, R., 1999. Precedence constrained scheduling to minimize sum of weighted completion times on a single machine. Discrete Applied Mathematics 98, 29–38.

Chekuri, C., Motwani, R., Natarajan, B., Stein, C., 1997. Approximation techniques for average completion time scheduling. Proceedings of the Eighth ACM-SIAM Symposium on Discrete Algorithms, pp. 609–618.

Chudak, F.A., Hochbaum, D.S., 1999. A half-integral linear programming relaxation for scheduling precedence-constrained jobs on a single machine. Operations Research Letters 25, 199–204.

Coffman, E.G., 1976. The Computer and Job-Shop Scheduling Theory. Wiley, New York.

Conway, R.W., Maxwell, W.L., Miller, L.W., 1967. Theory of Scheduling. Addison-Wesley, Reading, MA.

Davis, L.D., 1991. The Handbook of Genetic Algorithms. Van Nostrand Reinhold, New York.

Dorhout, B., 1975. Experiments with some algorithms for the linear assignment problem. Report BW30, Math. Centrum, Amsterdam, The Netherlands.

Dror, M., Kubiak, W., Dell'Olmo, P., 1997. Scheduling chains to minimize mean flow time. Information Processing Letters 61, 297–301.

Elmaghraby, S.E., 2001. On scheduling jobs related by precedence on one machine to minimize the weighted completion times. Paper presented at the IETM Conference, Quebec City, Canada, August 21–24, 2001.

Elmaghraby, S.E., Agrawal, M., Herroelen, W., 1996. DAGEN: A network generating algorithm. European Journal of Operational Research 90, 376–382.

Esquivel, S.C., Gatica, C.R., Gallard, R.H., 2001. Conventional and multirecombinative evolutionary algorithms for the parallel task scheduling problem. In: EvoWorkshops, Lecture Notes in Computer Science, vol. 2037. Springer, Berlin, pp. 223–232.

Garey, M.R., Johnson, D.S., 1979. Computers and Intractability: A Guide to the Theory of NP-Completeness. W.H. Freeman, New York.

Goldberg, D.E., 1989. Genetic Algorithms in Search, Optimization and Machine Learning. Addison-Wesley, Reading, MA.

Graham, R.L., 1966. Bounds for certain multiprocessing anomalies. Bell System Technical Journal 45, 1563–1581.

Hall, N.G., Posner, M.E., 1991. Earliness–tardiness scheduling problems I: Weighted deviation of completion times about a common due date. Operations Research 39, 836–846.

Hall, L.A., Schulz, A.S., Shmoys, D.B., Wein, J., 1997. Scheduling to minimize average completion time: Off-line and on-line approximation algorithms. Mathematics of Operations Research 22, 513–544.

Horn, W., 1973. Minimizing average flow time with parallel machines. Operations Research 21, 846–847.

Hou, E.S.H., Ansari, N., Ren, H., 1994. A genetic algorithm for multiprocessor scheduling. IEEE Transactions on Parallel and Distributed Systems 5, 113–120.

Hu, T.C., 1961. Parallel sequencing and assembly line problems. Operations Research 19, 841–848.

Ibaraki, T., Nakamura, Y., 1994. A dynamic programming method for single machine scheduling. European Journal of Operational Research 76, 72–82.

Lawler, E.L., 1973. Optimal sequencing of a single machine subject to precedence constraints. Management Science 19, 544–546.

Mondal, S.A., Sen, A.K., 2001. Single machine weighted earliness–tardiness penalty problem with a restrictive due date. Computers & Operations Research 28, 649–669.

Munier, A., Queyranne, M., Schulz, A.S., 1998. Approximation bounds for a general class of precedence constrained parallel machine scheduling problems. In: Proceedings of Integer Programming and Combinatorial Optimization (IPCO), Lecture Notes in Computer Science, vol. 1412. Springer, Berlin, pp. 367–382.

Queyranne, M., Schulz, A.S., 1994. Polyhedral approaches to machine scheduling. Technical Report 408/1994, Technical University of Berlin.

Rinnooy Kan, A.H.G., Lageweg, B.J., Lenstra, J.K., 1975. Minimizing total cost in one machine scheduling. Operations Research 23, 908–927.

Schulz, A.S., 1996. Polytopes and scheduling. Ph.D. Thesis, Technical University of Berlin, Germany.

Sidney, J.B., 1969. Decomposition algorithms for single-machine sequencing with precedence relations and deferral costs. Operations Research 22, 283–298.

Sidney, J.B., Steiner, G., 1986. Optimal sequencing by modular decomposition: polynomial algorithms. Operations Research 34, 606–612.