Scheduling Techniques to Optimise
Rail Operations
Mahmoud Masoud
(B.Sc., Dip.OR, MSc)
A thesis submitted in fulfilment of the requirements for the degree of
Doctor of Philosophy
2012
Principal Supervisor: Professor Erhan Kozan
Associate Supervisor: Associate Professor Geoff Kent
Mathematical Sciences Discipline Faculty of Science and Technology
Queensland University of Technology
Australia
© Copyright by Mahmoud Masoud 2011
All rights reserved
Statement of Original Authorship
The work contained in this dissertation has not been previously
submitted for a degree at any tertiary educational institution. To the best
of my knowledge and belief, this dissertation contains no material
previously published or written by another person except where due
reference is made.
Signed……………….
Date…………………
Acknowledgements
First and foremost, I sincerely thank Allah, my God, the Most Gracious, Most Merciful, for enabling me to complete my PhD successfully and for so often placing good people in my path.
I would like to express my sincere gratitude to my principal supervisor, Professor Erhan Kozan, for his continuous support of my PhD study and research, and for his patience, motivation, enthusiasm and immense knowledge. His guidance helped me throughout the research and writing of this thesis. I could not have imagined having a better advisor and mentor for my PhD study.
I would also like to express my most sincere gratitude to my associate supervisor, A/Professor Geoff Kent, for his comments and suggestions, which were a very important contribution.
Special thanks also go to all staff members of the Mathematical Sciences Discipline and the sugarcane research staff at Queensland University of Technology (QUT). I would like to acknowledge the funding support of the Sugar Research and Development Corporation (SRDC).
I am indebted to my mother and father, who gave me support and encouragement. I am greatly indebted to my brothers, Gamal and Ayman, who supported me during my study, and to their kids, Ahmed, Takwa, Sara, Esraa, Jana and Goudy. I would also like to thank my uncle Mohamed Mahgoub for his support during my study.
Finally, my wife Amira and my daughter Retaj have always been my
pillar and my guiding light, and I thank them.
Glossary
MIP Mixed Integer Programming
CP Constraint Programming
OPL Optimisation Programming Language
SMS Single Machine Scheduling
PMS Parallel Machine Scheduling
FSS Flow Shop Scheduling
JSS Job Shop Scheduling
OSS Open Shop Scheduling
GSS Group Shop Scheduling
BPMJSS Blocking Parallel-Machine Job Shop Scheduling
ULBJSS Unlimited Buffer Job Shop Scheduling
LBJSS Limited Buffer Job Shop Scheduling
BJSS Blocking Job Shop Scheduling
CJSS Classical Job Shop Scheduling
LBGSS Limited Buffer Group Shop Scheduling
BGSS Blocking Group Shop Scheduling
BFSS Blocking Flow Shop Scheduling
ULBFSS Unlimited Buffer Flow Shop Scheduling
LBFSS Limited Buffer Flow Shop Scheduling
BFS Best First Search
SBS Slice Based Search
LDS Limited Discrepancy Search
DDS Depth-bound Discrepancy Search
IDFS Interleaved Depth First Search
SSS Standard Search Strategy
DSS Dichotomic Search Strategy
TSCE Terminal Segment Conflict Elimination
ISCE Intermediate Segment Conflicts Elimination
SBD Segment Blocking Determination
RCE Rail Conflict Elimination
CA Computing Acceleration
Abstract
In Australia, railway systems play a vital role in transporting the sugarcane crop
from farms to mills. The sugarcane transport system is very complex and uses daily
schedules, consisting of a set of locomotive runs, to satisfy the requirements of the
mill and harvesters. The total cost of sugarcane transport operations is very high;
over 35% of the total cost of sugarcane production in Australia is incurred in cane
transport. Efficient schedules for sugarcane transport can reduce this cost and limit
the negative effects that the transport system can have on the raw sugar production
system. There are several benefits to formulating the train scheduling problem as a
blocking parallel-machine job shop scheduling (BPMJSS) problem: it prevents two
trains passing in one section at the same time; it keeps the train activities
(operations) in sequence during each run (trip) by applying precedence constraints;
it passes the trains on one section in the correct order (priorities of passing trains)
by applying disjunctive constraints; and it eases passing trains by resolving rail
conflicts through blocking constraints and parallel machine scheduling. Therefore, the
sugarcane rail operations are formulated as a BPMJSS problem.
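As a rough illustration of those constraint classes (the notation below is invented here for exposition and is not the thesis's own), let s_ik be the start time of train i on section k, p_ik its sectional running time, y_ijk a binary variable equal to 1 if train i precedes train j on section k, and M a sufficiently large constant:

```latex
% Disjunctive (ordering) constraints: at most one train occupies section k
% at a time, with y_{ijk} selecting the passing order.
s_{jk} \ge s_{ik} + p_{ik} - M\,(1 - y_{ijk}), \qquad
s_{ik} \ge s_{jk} + p_{jk} - M\,y_{ijk}
% Precedence constraints: the operations of a run are performed in sequence.
s_{i,k+1} \ge s_{ik} + p_{ik}
% Blocking: with no intermediate buffer, train i holds section k over the
% whole interval [s_{ik}, s_{i,k+1}], not merely [s_{ik}, s_{ik} + p_{ik}].
```

The blocking condition is what distinguishes BPMJSS from classical job shop scheduling: a completed operation does not free its machine (section) until the next machine is available.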
Mixed integer programming (MIP) and constraint programming (CP) approaches are used to
describe the BPMJSS problem. The model is solved by integrating constraint
programming, mixed integer programming and search techniques. Optimality
performance is tested with the Optimisation Programming Language (OPL) and CPLEX
software on small and large instances based on specific criteria. A real-life
problem is used to verify and validate the approach. Constructive heuristics and new
metaheuristics, including simulated annealing and tabu search, are proposed to solve
this complex and NP-hard scheduling problem and produce a more efficient
scheduling system. Innovative hybrid and hyper metaheuristic techniques are
developed and coded in C# to improve solution quality and CPU time. Hybrid
techniques integrate heuristic and metaheuristic techniques consecutively,
while hyper techniques are the complete integration of different
metaheuristic techniques, heuristic techniques, or both.
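To make the hybrid idea concrete, the sketch below chains simulated annealing into tabu search on a deliberately tiny single-machine sequencing problem (a stand-in for ordering train runs by total weighted completion time). This is an illustrative toy in Python rather than the thesis's C# implementation; all function names, parameters and instance data are invented here.

```python
import math
import random

def cost(seq, proc, weight):
    """Total weighted completion time of a job sequence."""
    t = total = 0
    for j in seq:
        t += proc[j]
        total += weight[j] * t
    return total

def simulated_annealing(seq, proc, weight, rng, temp=50.0, cool=0.95, steps=300):
    """SA phase: random pairwise interchanges; worse moves are accepted
    with probability exp(-delta / temp) under a geometric cooling schedule."""
    cur, best = seq[:], seq[:]
    for _ in range(steps):
        i, k = rng.sample(range(len(cur)), 2)
        cand = cur[:]
        cand[i], cand[k] = cand[k], cand[i]
        delta = cost(cand, proc, weight) - cost(cur, proc, weight)
        if delta <= 0 or rng.random() < math.exp(-delta / temp):
            cur = cand
            if cost(cur, proc, weight) < cost(best, proc, weight):
                best = cur[:]
        temp *= cool
    return best

def tabu_search(seq, proc, weight, tenure=5, steps=100):
    """TS phase: steepest adjacent-interchange moves with a short tabu list
    and an aspiration criterion."""
    cur, best = seq[:], seq[:]
    tabu = []  # positions swapped recently
    for _ in range(steps):
        move, nxt, nxt_cost = None, None, float("inf")
        for i in range(len(cur) - 1):
            cand = cur[:]
            cand[i], cand[i + 1] = cand[i + 1], cand[i]
            c = cost(cand, proc, weight)
            # A tabu move is allowed only if it beats the best found so far.
            if i in tabu and c >= cost(best, proc, weight):
                continue
            if c < nxt_cost:
                move, nxt, nxt_cost = i, cand, c
        if nxt is None:
            break
        cur = nxt
        tabu.append(move)
        if len(tabu) > tenure:
            tabu.pop(0)
        if nxt_cost < cost(best, proc, weight):
            best = cur[:]
    return best

def hybrid_sa_ts(seq, proc, weight, rng):
    """Hybrid technique: run the two metaheuristics consecutively,
    with SA's output seeding TS."""
    return tabu_search(simulated_annealing(seq, proc, weight, rng), proc, weight)

# Toy instance: five runs with made-up running times and priorities.
proc = [4, 2, 7, 3, 1]
weight = [1, 3, 2, 5, 4]
schedule = hybrid_sa_ts(list(range(5)), proc, weight, random.Random(1))
```

A hyper variant would instead merge the two inside a single loop, e.g. choosing between SA-style randomised moves and TS-style steepest moves at each iteration, rather than chaining completed phases.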
Publications Arising from this PhD Research
Masoud, M., Kozan, E., & Kent, G. (2011). A job-shop scheduling approach for
optimising sugarcane rail operations. Flexible Services and
Manufacturing Journal, 23(2), 181-196.
Masoud, M., Kozan, E., & Kent, G. (2010a). Scheduling techniques to optimise
sugarcane rail systems. ASOR Bulletin, 29, 25-34.
Masoud, M., Kozan, E., & Kent, G. (2010b). A constraint programming approach to
optimise sugarcane rail operations. Proceedings of the 11th Asia Pacific
Industrial Engineering and Management Systems Conference 2010, 147:1-7,
Malaysia. (Outstanding Student Paper Award)
Masoud, M., Kozan, E., & Kent, G. (2010c). A comprehensive approach for
scheduling single track railways. The Annual Conference on Statistics,
Computer Sciences and Operations Research, Cairo, Egypt, 45, 19-30.
Masoud, M., Kozan, E., & Kent, G. (under review). A new approach to automatically
producing schedules for cane railways. Australian Society of Sugar Cane
Technologists.
Masoud, M., Kozan, E., & Kent, G. (under review). Hybrid/hyper metaheuristic
techniques for optimising sugarcane rail operations. Computers & Operations
Research.
Contents
Statement of Original Authorship………………………..………………...………….....……I
Acknowledgements…………………………..……………..…………………………...… II
Glossary………………………………………..………………….……………....……..….III
Abstract……………………………………………………...…………………..………..…IV
Publications Arising from this PhD Research…………………………………………..........V
Contents………………………………………………………………...…………..…...........VI
List of Tables…………………………………….……………..………......….........XII
List of Figures……………………………………………...…………………….….…......XV
Chapter 1: Introduction and the Research Problems………………………..……….......1
1.1 Introduction........................................................................................................................................2
1.2 Research Problem .............................................................................................................................3
1.2.1 Background.......................................................................................................................3
1.2.2 Sugarcane Transportation Systems.....................................................................................6
1.2.2.1 The Sugarcane Rail Transport..............................................................................7
1.2.2.2 The Cane Transport and Harvesting Integration...................................................8
1.2.2.3 The Sugarcane Road Transport Systems............................................................11
1.2.3 Complexity of the Sugarcane Rail Transport Systems.....................................................13
1.2.3.1 Rail Complexity.............................................................................................13
1.2.3.2 Sugarcane Systems Complexity....................................................................16
1.2.4 Research Questions............................................................................................................21
1.3 Contribution and Significance of the Study.....................................................................................24
1.4 Outline of the Thesis........................................................................................................................26
Chapter 2: Scheduling Theory Review……………………………………………..….....28
2.1 Introduction .....................................................................................................................................29
2.2 Scheduling Classifications..............................................................................................................29
2.2.1 Job Characteristic.............................................................................................................30
2.2.2 Machine Environment....................................................................................................30
2.2.2.1 Single Machine Scheduling (SMS)..........................................................32
2.2.2.2 Parallel Machine Scheduling (PMS)........................................................33
2.2.2.3 Flow Shop Scheduling (FSS)...................................................................34
2.2.2.4 Job Shop Scheduling (JSS)......................................................................34
2.2.2.5 Open Shop Scheduling (OSS).................................................................36
2.2.2.6 Group Shop Scheduling (GSS)...............................................................36
2.2.3 Optimality Criteria...........................................................................................................37
2.2.4 Concluding Remarks........................................................................................................38
2.3 Job Shop Scheduling (JSS) Solution Techniques............................................................................39
2.3.1 Disjunctive Graph Technique...........................................................................................41
2.3.1.1 Heuristic Techniques................................................................................46
2.3.1.1.1 The Shifting Bottleneck Algorithm................................................46
2.3.1.1.2 Dispatching Rules...........................................................................46
2.3.2 Mixed Integer Programming Approach for JSS Problem.................................................47
2.3.2.1 A hyper Branch and Bound Technique........................................................48
2.3.3 Constraint Programming Approach for the JSS Problem..................................................53
2.4 Scheduling Theory and Railway Systems in Literature..................................................................56
2.5 Conclusion.......................................................................................................................................60
Chapter 3: Modelling the Sugarcane Rail Transport Problem……………….…….....62
3.1 Introduction....................................................................................................................................64
3.2 Segment and Section Blocking Types............................................................................................65
3.2.1 Blocking Segment Types..................................................................................................65
3.2.1.1 Blocking Terminal Segments............................................................................66
3.2.1.2 Blocking Intermediate Segments......................................................................66
3.2.2 Blocking Section Types.....................................................................................................68
3.3 Description of the Blocking Segment Models of Sugarcane Rail System......................................69
3.3.1 Blocking Segment MIP Model of the Sugarcane Rail System..........................................76
3.3.2 Blocking Segment CP Model of the Sugarcane Rail System............................................83
3.4 Description of the Blocking Section Models of Sugarcane Rail System.......................................89
3.4.1 Blocking Section MIP Model of Sugarcane Rail System..................................................90
3.4.2 Blocking Section CP Model of Sugarcane Rail System....................................................93
3.5 Inclusion of the Delivery and Collection Time Constraints to the Model......................................95
3.5.1 Delivering Delay Constraints..............................................................................................99
3.5.2 Collecting Delay Constraints..............................................................................................101
3.6 Sugarcane Rail System as a Dynamic System...............................................................................102
3.7 Conclusion.....................................................................................................................................104
Chapter 4: Solution Approach…………………………………………………….........105
4.1 Introduction....................................................................................................................................107
4.2 Constraint Satisfaction (CS) Techniques.......................................................................................109
4.2.1 Constraint Propagation.....................................................................................................109
4.2.1.1 Node Consistency..............................................................................................111
4.2.1.2 Arc Consistencies..............................................................................................112
4.2.1.3 Bounds Consistency...........................................................................................115
4.2.1.4 Path Consistency................................................................................................115
4.2.2 Search Process..................................................................................................................116
4.2.2.1 Variable and Value Ordering Heuristics.............................................................117
4.2.2.2 Search Techniques..............................................................................................119
4.2.2.3 Global Constraint................................................................................................124
4.3 Proposed Algorithms......................................................................................................................125
4.3.1 Collecting and Delivering Conflict Elimination...............................................................126
4.3.1.1 Terminal Segment Conflict Elimination..............................................................127
4.3.1.2 Intermediate Segment Conflict Elimination........................................................130
4.3.2 Algorithms for Solving Train Conflicts.............................................................................133
4.3.3 Computing Acceleration (CA) Algorithms.........................................................................137
4.4 Conclusion......................................................................................................................................139
Chapter 5: Computational Results of CP and MIP Models………….…...…....140
5.1 Introduction..............................................................................................................................143
5.2 A Case Study for Testing Blocking Segment CP and MIP Models.........................................143
5.2.1 Input Data...............................................................................................................144
5.2.2 Results of Makespan Minimisation Objective........................................................146
5.2.2.1 Constraint Programming Model Results..................................................146
5.2.2.1.1 Standard Constraint Programming Results...........................146
5.2.2.1.2 Results of Computing Acceleration (CA) Algorithms..........149
5.2.2.1.3 Train and Runs Scheduling for CP Model............................151
5.2.2.1.4 Solutions Analysis by Search Tree........................................152
5.2.2.2 MIP Model Results..................................................................................156
5.2.2.2.1 Result of Standard MIP Model................................................156
5.2.2.2.2 Results of Computing Acceleration (CA)...............................158
5.2.2.2.3 Train Scheduling Results.........................................................160
5.2.2.3 Comparisons of CP and MIP Models using Makespan Criterion............161
5.2.3 Results of Total Waiting Time Minimisation Objective ........................................163
5.2.3.1 Constraint Programming (CP) Model .......................................................163
5.2.3.1.1 Results of Standard CP.........................................................163
5.2.3.1.2 Results of Computing Acceleration (CA) Algorithms…….164
5.2.3.1.3 Train Scheduling Results......................................................165
5.2.3.2 Mixed Integer Programming (MIP) Model ................................................166
5.2.3.2.1 Result of Standard MIP Model.............................................166
5.2.3.2.2 Results of Computing Acceleration (CA) Algorithms…….167
5.2.3.2.3 Train Scheduling Results......................................................168
5.2.3.3 Comparisons of CP and MIP Models of Total Waiting Time Criterion….169
5.3 A large Scale Case Study for Testing Blocking Segment CP and MIP Models.......................171
5.3.1 Results of Makespan Minimisation Objective.........................................................175
5.3.2 Results of Total Waiting Time Minimisation Objective................................................177
5.4 Sensitivity Analysis of Blocking Segment MIP Sugarcane Rail Model.......................................178
5.4.1 Small Rail Systems......................................................................................................179
5.4.2 Large Rail Systems.......................................................................................................184
5.5 Blocking Section MIP and CP Results..........................................................................................186
5.5.1 The Comparison between the Blocking Segment and Section Models......................187
5.6 The Results of Inclusion of the Delivery and Collection Time Constraints .................................189
5.7 Conclusion.....................................................................................................................................193
Chapter 6: Metaheuristic Techniques……………………………………………....….194
6.1 Introduction...................................................................................................................................197
6.2 Neighbourhood Structure..............................................................................................................198
6.2.1 Adjacent Pairwise Interchange (API) ..........................................................................200
6.2.2 Non-Adjacent Pairwise Interchange (NAPI)................................................................201
6.2.3 Extraction and Forward Shifted Reinsertion (EFSR)……………………...................201
6.2.4 Extraction and Backward Shifted Reinsertion (EBSR)................................................202
6.3 Neighbourhood Structure in the Railway Scheduling Problem ...................................................202
6.4 Simulated Annealing (SA) Technique ..........................................................................................207
6.4.1 Simulated Annealing Technique for Solving Job Shop Scheduling Problem…….....208
6.4.2 New Simulated Annealing Algorithms for Sugarcane Rail Cases.............................210
6.5 Tabu Search (TS) Technique.........................................................................................................213
6.5.1 Tabu Search Technique for Solving Job Shop Scheduling Problem............................213
6.5.2 A new Tabu Search Technique for Sugarcane Rail Cases............................................215
6.6 Metaheuristic Results of Sugarcane Rail Systems.........................................................................216
6.6.1 TS and SA Results by Changing Number of Trains........................................................219
6.7 Hybrid Metaheuristic Techniques for Sugarcane Rail Systems....................................................221
6.7.1 Hybrid SA/TS Technique.................................................................................................222
6.7.2 Hybrid TS/SA Technique .................................................................................................226
6.7.3 Hybrid Techniques Result for Sugarcane Rail Systems.....................................................229
6.7.4 Analysis of Hybrid Techniques..........................................................................................232
6.8 Hyper Metaheuristic Techniques for Sugarcane Rail Transport Systems...............................234
6.8.1 Hyper SA/TS Technique..............................................................................................235
6.8.2 Hyper TS/SA Technique..............................................................................................238
6.8.3 Hyper Techniques Result for Sugarcane Rail Systems................................................241
6.8.4 Analysis of Hyper Techniques.....................................................................................244
6.9 Hybrid and Hyper Metaheuristic Techniques Test Cases........................................................246
6.10 Hyper and Hybrid Metaheuristic Technique (TS/SA) and MIP.............................................247
6.11 Study Analysis of Elements of Metaheuristic Techniques.....................................................249
6.12 Inclusion of Delivery and Collection Time Constraints for a Hyper TS/SA Results.............252
6.13 Conclusion..............................................................................................................................257
Chapter 7: Conclusions and Future Work……………………..………………..... 258
7.1 Introduction..............................................................................................................................259
7.2 Theoretical Contributions.........................................................................................................259
7.3 Practical Contributions.............................................................................................................262
7.4 Future Work..............................................................................................................................263
References ………………………………………………………………...………......265
Appendix A………………………………………………………………………….....276
Appendix B…………………………………………………………………………….316
Appendix C………………………………………………………………………….…320
List of Tables
Table 1.1: Decision matrix of the rail network shown in Figure 1.5………………………………...20
Table 2.1: 3 jobs and 3 machines problem…………………………………………………….……...43
Table 2.2: Solving n/1/Cmax problem for the three machines M1, M2, and M3 at the final level……...51
Table 5.1: The distances and the sectional running times of the sections…………………..........…144
Table 5.2: Siding capacity and siding allotments…………………………………………….......….145
Table 5.3: Train number, capacity and speed…………………………………………………..........145
Table 5.4: SSS and DSS results for standard CP model to optimise makespan…………......……...148
Table 5.5: SSS and DSS results for CA algorithms to optimise makespan………………......…….150
Table 5.6: Start and finish times of train run for CP model to optimise makespan…………......….151
Table 5.7: The standard CP model solutions before obtaining the optimal solution of makespan….……......153
Table 5.8: SSS and DSS results for standard MIP model to optimise makespan ………….....…..157
Table 5.9: SSS and DSS results for CA algorithms for MIP to optimise makespan ……….......…159
Table 5.10: Start and finish times of train runs for MIP to optimise makespan……………….......160
Table 5.11: Makespan results of standard MIP and CP and CA algorithms using SSS and DSS………................162
Table 5.12: SSS and DSS results for Standard CP to optimise the total waiting time……….....….163
Table 5.13: Solution of CA algorithms for CP to optimise the total waiting time……………........164
Table 5.14: Start and finish times of train runs to optimise the total waiting time…………….......165
Table 5.15: SSS and DSS results for standard MIP model to optimise the total waiting time.........166
Table 5.16: SSS and DSS results of CA algorithms for MIP to optimise the total waiting time………….....…....167
Table 5.17: Start and finish times of train runs of MIP to optimise the total waiting time….....…168
Table 5.18: Standard MIP and CP results using SSS and DSS to optimise the total waiting time………....……...170
Table 5.19: MIP and CP results using SSS and DSS for CA algorithms………….…………....…170
Table 5.20: The distance between sidings in the extension case study……………..……….....….173
Table 5.21: Sidings capacity and allotments in the extension case study…………………....……174
Table 5.22: Train number, capacity and speed in the extension case study…………………....….175
Table 5.23: SSS results using makespan for the larger case study………………….............…..........175
Table 5.24: DSS results using makespan for the larger case study……………..……......….…...…..176
Table 5.25: Start and finish times of train runs using the makespan objective function….................176
Table 5.26: SSS results using total waiting time for the larger case study…………….…….........…177
Table 5.27: DSS results using total waiting time for the larger case study.………….……............…177
Table 5.28: Start and finish times of train runs using the total waiting time objective……................178
Table 5.29: Results of sensitivity analysis of total waiting time (a small rail)…………….......….....180
Table 5.30: Results of sensitivity study of makespan (a small rail)………………………........…….183
Table 5.31: Sensitivity analysis of changing some variables in the system using total waiting time……...…........……185
Table 5.32: Sensitivity study for makespan criterion……………………………………........……...186
Table 5.33: Makespan and total waiting time results for the blocking section MIP model…...........187
Table 5.34: Makespan and total waiting time results for the blocking section CP model……..........187
Table 5.35: Train runs for seven harvester model……………………………………………............191
Table 6.1: Simulated annealing result for Example 2.1……………………………………...........…210
Table 6.2: TS technique result for Example 2.1…………………………………………..........….....214
Table 6.3: TS and SA results for sugarcane rail transport system under section blocking constraint……..............217
Table 6.4: Comparison of makespan of TS, SA and hybrid techniques…………………........…......230
Table 6.5: Metaheuristic and hyper techniques results of different cases………………….........…...242
Table 6.6: Metaheuristic, hybrid, hyper, and MIP results of different tested cases………….............248
Table 6.7: Train operations for delivering and collecting empty and full bins....................................254
Table A1: Solving n/1/Cmax problem for the three machines M1, M2, and M3 in level 1…………......277
Table A2: Solving n/1/Cmax problem for the three machines M1, M2, and M3 in level 2a...….......…..279
Table A3: Solving n/1/Cmax problem for the three machines M1, M2, and M3 in level 2b……............281
Table A4: Solving n/1/Cmax problem for the three machines M1, M2, and M3 in level 2c….........…...282
Table A5: Solving n/1/Cmax problem for the three machines M1, M2, and M3 in level 3a….….....….284
Table A6: Solving n/1/Cmax problem for the three machines M1, M2, and M3 in level 3bi……..........286
Table A7: Solving n/1/Cmax problem for the three machines M1, M2, and M3 in level 3bii….............287
Table A8: Solving n/1/Cmax problem for the three machines M1, M2, and M3 in level 3ci….........…..289
Table A9: Solving n/1/Cmax problem for the three machines M1, M2, and M3 in level 3cii……..........291
Table A10: Solving n/1/Cmax problem for the three machines M1, M2, and M3 in level 3ciii...............292
Table A11: Solving n/1/Cmax problem for the three machines M1, M2, and M3 in level 4ai….............294
Table A12: Solving n/1/Cmax problem for the three machines M1, M2, and M3 in level 4aii……........295
Table A13: Solving n/1/Cmax problem for the three machines M1, M2, and M3 in level 4bi…..............297
Table A14: Solving n/1/Cmax problem for the three machines M1, M2, and M3 in level 4biia…..........298
Table A15: Solving n/1/Cmax problem for the three machines M1, M2, and M3 in level 4biib…..........300
Table A16: Solving n/1/Cmax problem for the three machines M1, M2, and M3 in level 4ci…….........301
Table A17: Solving n/1/Cmax problem for the three machines M1, M2, and M3 in level 4cii…......…..302
Table A18: Solving n/1/Cmax problem for the three machines M1, M2, and M3 in level 5aia...............304
Table A19: Solving n/1/Cmax problem for the three machines M1, M2, and M3 in level 5aib…..........305
Table A20: Solving n/1/Cmax problem for the three machines M1, M2, and M3 in level 5aiia….........306
Table A21: Solving n/1/Cmax problem for the three machines M1, M2, and M3 in level 5aiib..….......307
Table A22: Solving n/1/Cmax problem for the three machines M1, M2, and M3 in level 5bia…...........308
Table A23: Solving n/1/Cmax problem for the three machines M1, M2, and M3 in level 5bib...............310
Table A24: Solving n/1/Cmax problem for the three machines M1, M2, and M3 in level 5biia........…..311
Table A25: Solving n/1/Cmax problem for the three machines M1, M2, and M3 in level 5cii…............312
Table A26: Solving n/1/Cmax problem for the three machines M1, M2, and M3 in level 6aii…............313
Table A27: Solving n/1/Cmax problem for the three machines M1, M2, and M3 in level 6bia…...........314
Table B1: Simulated annealing result for Example 2.1…………………………......………..............319
Table C1: Tabu search technique result for Example 2.1…………………………………….............329
List of Figures
Figure 1.1: Value chain linkages in the sugar industry (Milford, 2002)………………………........…..4
Figure 1.2: The length of train less than the length of section………………………………........…..14
Figure 1.3: The length of train greater than the length of section………………………………..........15
Figure 1.4: The segment length is greater than the train length………….………………………........15
Figure 1.5: Delivering and collecting point at the middle and the end of the branch (sugarcane)....... 16
Figure 1.6: Delivering and collecting point at the end of the branch (Coal mining)…………….........17
Figure 1.7: Research plan…………………………………………………………...…………........…27
Figure 2.1: Scheduling problem types…………………………………………………………........…31
Figure 2.2: single machine scheduling problem; n jobs processed on one machine………….….........32
Figure 2.3: Parallel machine scheduling problem; n jobs processed on m parallel machines…….......33
Figure 2.4: Flow shop problem; n jobs processed on m machines……………………………….........34
Figure 2.5: Job shop problem; n jobs processed on m machines………………………………...........35
Figure 2.6: Open shop problem; n jobs processed on m machines……….……………………...........36
Figure 2.7: Group shop scheduling problem; n jobs include different groups on m machines…..........37
Figure 2.8: Job shop solution techniques………………………………………………………........…40
Figure 2.9: Conjunctive arcs for 5 jobs and 5 machines….…………………………….………..........42
Figure 2.10: Disjunctives arcs for 5 jobs and 5 machines…………….……………………........…….42
Figure 2.11: The disjunctive graph for 3 jobs and 3 machines……………….………………….........43
Figure 2.12: The first feasible solution for the 3 jobs and 3 machines example…………….….…......44
Figure 2.13: The Gantt chart for the first feasible solution…………………………………............…45
Figure 2.14: The second feasible solution for 3 jobs and 3 machines example………………........….45
Figure 2.15: The Gantt chart for the second feasible solution…………………………………...........46
Figure 2.16: Disjunctive graph for the final solution…………………………………………….........51
Figure 2.17: Optimal solutions of M1, M2, and M3 at the final level for the n/1/Cmax problem……............51
Figure 2.18: The complete branching procedure of the optimal solution………………………..........52
Figure 3.1: MIP and CP approaches……………………………...………………………………........64
Figure 3.2: Two trains travelling in the same direction. One requires the blocking terminal segment………………………………………………………………………............…...66
Figure 3.3: Two trains travelling in the same direction. One requires the blocking intermediate segment……………………………………………........………………………...67
Figure 3.4: Two trains travelling in different directions. One requires the blocking intermediate segment………………………………………...……………….........…………………...68
Figure 3.5: Train requires the blocking section s……………………….……........…………...……...69
Figure 3.6: A simple cane rail network with three sidings for delivering and collecting….........…….70
Figure 3.7: A single cane rail network after applying model…………………………………........….72
Figure 3.8: Rail branch includes passing loop…………..………………………………...…........…..73
Figure 3.9: Rail siding includes passing point……………………………………………...........……74
Figure 3.10: MIP and CP models structure………………………………………………........………75
Figure 3.11: A single cane rail network with three sidings can allow passing trains………......…......90
Figure 3.12: Numerical case…………………………………………………………........……….….98
Figure 3.13: Visit times of siding A for one day………………………………………........…..……..98
Figure 3.14: Sugarcane rail system as a dynamic system………………………………........………103
Figure 4.1: The order of nodes that will be expanded by DFS strategy………………….........……..120
Figure 4.2: Limited discrepancy search…………………….....………………………….........…….122
Figure 4.3: Siding at the beginning of terminal segment…………………………………....….........127
Figure 4.4: Siding at the end of terminal segment…………………………………………........…...128
Figure 4.5: Siding at the middle side of terminal segment…………………………………..............128
Figure 4.6: Terminal segment conflict elimination algorithm……………………………….............129
Figure 4.7: Siding at the beginning of intermediate segment………………………………..............130
Figure 4.8: Siding at the end of intermediate segment………………………………...……..............131
Figure 4.9: Siding at the middle side of intermediate segment……………………………..…..........131
Figure 4.10: A rail conflict case…………………………………………………………............…..135
Figure 4.11: A section conflict case using a delayed train technique……….....................................136
Figure 4.12: A section conflict case using a slow train technique…………………….....….............137
Figure 5.1: A realistic test case study (small sector of rail network of Kalamia mill)……….............143
Figure 5.2: CPU time of SSS and DSS for the standard and CA of the CP using Makespan..............151
Figure 5.3:Scheduling 5 trains on 11 segments using CP with the makespan of 2664……................152
Figure 5.4: Solutions analysis by search tree………………………….…………………...…...........155
Figure 5.5: CPU time of standard and CA algorithms for MIP to optimise Makespan.......................160
Figure 5.6: Scheduling 5 trains on 11 segments using MIP with the makespan of 2664…….............161
Figure 5.7: CPU time of standard and CA algorithms for CP to optimise the total waiting time........165
Figure 5.8: CP for scheduling 5 trains on 11 segments with the total waiting time of 1984…….......166
Figure 5.9: CPU time of SSS and DSS for the standard and the CA algorithms of MIP to optimise the total waiting time……………………………………………………………………………..............168
Figure 5.10: MIP for scheduling 5 trains on 11 segments with the Total waiting time of 1984..........169
Figure 5.11: Larger case study: bigger part of rail network of Kalamia mill……........………...........172
Figure 5.12: SSS and DSS results of different cases using a small rail to optimise TWT……...........181
Figure 5.13: SSS and DSS results of different problems using a small rail to optimise makespan....184
Figure 5.14: Blocking segment and section results using makespan and total waiting time…...........188
Figure 5.15: Harvester usage for seven harvester model……………………………………........….189
Figure 5.16: Mill yard stock chart for the seven harvester model……………………..….........…….190
Figure 5.17: Train utilisation for the seven harvester model…………………………………...........190
Figure 6.1: API technique for 10 jobs………………………………………………………….........201
Figure 6.2: NAPI technique for 10 jobs………………………………………………………..........201
Figure 6.3: EFSR technique for 10 jobs………………………………………………………..........202
Figure 6.4: EBSR technique for 10 jobs…………………………......................................................202
Figure 6.5: A single cane rail network with 22 sections…………………………….………….........203
Figure 6.6: Transition matrix for a single railway includes 22 sections………………………..........204
Figure 6.7: Case of three trains require the same section……………………………….....................207
Figure 6.8: Disjunctive graph of final solution for the 3 jobs and 3 machine example………...........209
Figure 6.9: Disjunctive graph of final solution for the 3 jobs and 3 machines example…...…...........214
Figure 6.10: Metaheuristic techniques using makespan for different cases …………...……….........218
Figure 6.11: CPU time of TS and SA for different cases………….…………………….…......…….219
Figure 6.12a: Makespan of metaheuristics on 10 sections…………………......…………............….220
Figure 6.12b: Makespan of metaheuristics on 15 sections…………...........................….......……….220
Figure 6.12c: Makespan of metaheuristics on 20 sections…………..............................……........….221
Figure 6.13: Hybrid SA/TS technique…………..............................………………...…............…….223
Figure 6.14: The detailed hybrid SA/TS technique………………..........................………….......….225
Figure 6.15: Hybrid TS/SA technique…………..............................…………………...……….........226
Figure 6.16: The detailed hybrid TS /SA technique…………..............................…..………….........228
Figure 6.17: Comparison of TS, SA and hybrid techniques using makespan…………......................231
Figure 6.18: CPU time of TS, SA and hybrid techniques for different cases…………......................232
Figure 6.19a: Makespan of hybrid techniques on 15 sections…………………………....…….........233
Figure 6.19b: Makespan of hybrid techniques on 20 sections……………………...……............….233
Figure 6.19c: Makespan of hybrid techniques on 25 sections……………………………….............233
Figure 6.19d: Makespan of hybrid techniques on 30 sections……………………………….............233
Figure 6.20: Hyper SA/TS technique………………………………………………..………........….235
Figure 6.21: The detailed SA/TS hyper Technique…………………….……………………….........237
Figure 6.22: Hyper TS/SA technique…………………………………………………..……….........238
Figure 6.23: The detailed hyper TS/SA Technique………………….………………………….........240
Figure 6.24: Metaheuristic and hyper techniques results of different cases………………........….….243
Figure 6.25: Comparison of CPU time of TS and SA and hyper techniques………........…………...244
Figure 6.26a: Makespan of hyper techniques on 15 sections………………………………........…..245
Figure 6.26b: Makespan hyper techniques on 20 sections………………….……………........……..245
Figure 6.26c: Makespan of hyper techniques on 25 sections……………………….………........…..245
Figure 6.26d: Makespan of hyper techniques on 30 sections………………………….………..........245
Figure 6.27: Hybrid and hyper techniques results using makespan for some tested cases……..........246
Figure 6.28: CPU time of some tested cases using hybrid and hyper techniques………........………247
Figure 6.29: Effect of α value on the hyper SA/TS solution……………………………....................249
Figure 6.30: Effect of α value on the hyper SA/TS solution…………………………………............249
Figure 6.31: Effect of number of iterations on hyper TS/SA solution………………………….........250
Figure 6.32: Effect of number of iterations on CPU time of hyper TS/SA solution…………............250
Figure 6.33: The average of makespan of hyper TS/SA case study 30/12 with different runs........251
Figure 6.34: A sample train schedule of seven trains, 45 sections and 15 harvesters.........................253
Figure A1: Conjunctive graph for the 3/3/G/Cmax problem………………………….………….........276
Figure A2: Conjunctive graph for the 3/3/G/Cmax problem……………………….………….............277
Figure A3: Optimal solutions of M1, M2, and M3 in level 1 for n/1/Cmax problem……………........…277
Figure A4: The branching procedure at level 1……………………………………...........………….278
Figure A5: Conjunctive graph for the 3/3/G/Cmax problem…………………………........…………..279
Figure A6: Solving n/1/Cmax problem for the three machines M1, M2, and M3 in level 2a.........……..279
Figure A7: The branching procedure at level 2a……………………………………….....................280
Figure A8: Disjunctive graph for the level 2b……………………..………………….......................280
Figure A9: Solving n/1/Cmax problem for the three machines M1, M2, and M3 in level 2b……...........281
Figure A10: The branching procedure at level 2b…………………………………….…...................281
Figure A11: Disjunctive graph for the level 2c…………………………………….….......................282
Figure A12: Solving n/1/Cmax problem for the three machines M1, M2, and M3 in level 2c….............282
Figure A13: The branching procedure at level 2c…………………………………............................283
Figure A14: The complete branching procedure at level 2………………………………….........….283
Figure A15: Disjunctive graph for the level 3a…………………………………………....................284
Figure A16: Solving n/1/Cmax problem for the three machines M1, M2, and M3 in level 3a…............284
Figure A17: The branching procedure at level 3a…………………………………….………...........285
Figure A18: Disjunctive graph for the level 3bi…………………………………………...................285
Figure A19: Solving n/1/Cmax problem for the three machines M1, M2, and M3 in level 3bi…...........286
Figure A20: The branching procedure at level 3bi…………………………………………........…...286
Figure A21: Disjunctive graph for the level 3bii………………………...…………...…............…...287
Figure A22: Solving n/1/Cmax problem for the three machines M1, M2, and M3 in level 3bii.….........287
Figure A23: The branching procedure at level 3bii…………………………………………........….288
Figure A24: Disjunctive graph for the level 3ci………………………………………………...........288
Figure A25: Solving n/1/Cmax problem for the three machines M1, M2, and M3 in level 3ci…........…289
Figure A26: The branching procedure at level 3ci………………………………………...................290
Figure A27: Disjunctive graph for the level 3cii……………………………...…………...................290
Figure A28: Solving n/1/Cmax problem for the three machines M1, M2, and M3 in level 3cii…...........291
Figure A29: The branching procedure at level 3cii……………………………………............……..291
Figure A30: Disjunctive graph for the level 3ciii……………………………………….....................292
Figure A31: Solving n/1/Cmax problem for the three machines M1, M2, and M3 in level 3ciii…..........292
Figure A32: The complete branching procedure at level 4……………………………………...........293
Figure A33: Disjunctive graph for the level 4ai…………………………………............…………...293
Figure A34: Solving n/1/Cmax problem for the three machines M1, M2, and M3 in level 4ai…...........294
Figure A35: The complete branching procedure at level 4ai…………………………........................294
Figure A36: Disjunctive graph for the level 4aii………………………………………......................295
Figure A37: Solving n/1/Cmax problem for the three machines M1, M2, and M3 in level 4aii…...........295
Figure A38: The complete branching procedure at level 4aii……………………………...................296
Figure A39: Disjunctive graph for the level 4bi………………………………………………...........296
Figure A40: Solving n/1/Cmax problem for the three machines M1, M2, and M3 in level 4bi…............297
Figure A41: The complete branching procedure at level 4bi………………………….......................297
Figure A42: Disjunctive graph for the level 4biia………………………………….……...................298
Figure A43: Solving n/1/Cmax problem for the three machines M1, M2, and M3 in level 4biia.............298
Figure A44: The complete branching procedure at level 4biia…………………………....................299
Figure A45: Disjunctive graph for the level 4biib……………………………………...........….…...299
Figure A46: Solving n/1/Cmax problem for the three machines M1, M2, and M3 in level 4biib…........300
Figure A47: Disjunctive graph for the level 4ci…………………………………………...................300
Figure A48: Solving n/1/Cmax problem for the three machines M1, M2, and M3 in level 4ci…............301
Figure A49: Disjunctive graph for the level 4cii………………………………………...……...........301
Figure A50: Solving n/1/Cmax problem for the three machines M1, M2, and M3 in level 4cii…..........302
Figure A51: The complete branching procedure at level 4cii…………………………………..........303
Figure A52: Disjunctive graph for the level 5aia…………………………………….........................303
Figure A53: Solving n/1/Cmax problem for the three machines M1, M2, and M3 in level 5aia….........304
Figure A54: Disjunctive graph for the level 5aib…………………………………..……...................304
Figure A55: Solving n/1/Cmax problem for the three machines M1, M2, and M3 in level 5aib…..........305
Figure A56: Disjunctive graph for the level 5aiia………………………………………...................306
Figure A57: Solving n/1/Cmax problem for the three machines M1, M2, and M3 in level 5aiia….........306
Figure A58: Disjunctive graph for the level 5aiib………………………………………....................307
Figure A59: Solving n/1/Cmax problem for the three machines M1, M2, and M3 in level 5aiib….........307
Figure A60: Disjunctive graph for the level 5bia…………………………………………........…….308
Figure A61: Solving n/1/Cmax problem for the three machines M1, M2, and M3 in level 5bia…..........308
Figure A62: Disjunctive graph for the level 5bib………………………………………....…........….309
Figure A63: Solving n/1/Cmax problem for the three machines M1, M2, and M3 in level 5bib…..........310
Figure A64: Disjunctive graph for the level 5biia………………………………..…........….…….....310
Figure A65: Solving n/1/Cmax problem for the three machines M1, M2, and M3 in level 5biia….........311
Figure A66: Disjunctive graph for the level 5cii……………………………………...………...........311
Figure A67: Solving n/1/Cmax problem for the three machines M1, M2, and M3 in level 5cii…...........312
Figure A68: Disjunctive graph for the level 6aii……………..…………………………....................312
Figure A69: Solving n/1/Cmax problem for the three machines M1, M2, and M3 in level 6aii…...........313
Figure A70: Disjunctive graph for the level 6bia…………………………………………........…….313
Figure A71: Solving n/1/Cmax problem for the three machines M1, M2, and M3 in level 6bia.............314
Figure A72: The complete branching procedure at level 6bia……………………...…………...…...315
Figure B1: Disjunctive graph of an initial solution for the 3 jobs and 3 machines example…...........316
Figure B2. Disjunctive graph of an initial solution for the 3 jobs and 3 machines example…...........317
Figure B3: Disjunctive graph of an initial solution for the 3 jobs and 3 machines example…...........317
Figure B4: Disjunctive graph of an initial solution for the 3 jobs and 3 machines example...............318
Figure B5: Disjunctive graph of an initial solution for the 3 jobs and 3 machines example…......….319
Figure C1: Disjunctive graph of an initial solution for the 3 jobs and 3 machines example..….........320
Figure C2: Disjunctive graph of an initial solution for the 3 jobs and 3 machines example…...........320
Figure C3: Disjunctive graph of an initial solution for the 3 jobs and 3 machines example…...........322
Figure C4: Disjunctive graph of an initial solution for the 3 jobs and 3 machines example…...........322
Figure C5: Disjunctive graph of an initial solution for the 3 jobs and 3 machines example…...........323
Figure C6: Disjunctive graph of an initial solution for the 3 jobs and 3 machines example…...........324
Figure C7: Disjunctive graph of an initial solution for the 3 jobs and 3 machines example…...........325
Figure C8: Disjunctive graph of an initial solution for the 3 jobs and 3 machines example…...........325
Figure C9: Disjunctive graph of an initial solution for the 3 jobs and 3 machines example…...........326
Figure C10: Disjunctive graph of an initial solution for the 3 jobs and 3 machines example….........327
Figure C11: Disjunctive graph of an initial solution for the 3 jobs and 3 machines example….........328
Chapter 1
Introduction and the Research Problems
Chapter Outline
1.1 Introduction...............................................................................................................2
1.2 Research Problem .....................................................................................................3
1.2.1 Background.................................................................................................3
1.2.2 Sugarcane Transportation Systems...................................................................6
1.2.2.1 The Sugarcane Rail Transport...............................................................7
1.2.2.2 The Cane Transport and Harvesting Integration...................................8
1.2.2.3 The Sugarcane Road Transport Systems............................................11
1.2.3 Complexity of the Sugarcane Rail Transport Systems....................................13
1.2.3.1 Rail Complexity...........................................................................13
1.2.3.2 Sugarcane Systems Complexity......................................................16
1.2.4 Research Questions...........................................................................................21
1.3 Contribution and Significance of the Study......................................................................24
1.4 Outline of the Thesis.........................................................................................................26
1.1 Introduction
Railway systems in Australia play a vital role in transporting the sugarcane crop
between farms and mills. About 90% of the total Australian crop is transported in
this way. The sugarcane transport system is very complex and uses a daily schedule
of train runs to meet the needs of both the harvesters and the mill. An efficient
sugarcane transport schedule can reduce costs and minimise the negative effects on
the overall sugar production system.
The sugarcane rail transport system has a significant effect on the performance of the
sugar production process and its overall costs. Potential negative effects of a poor
rail transport system include: stopping the supply of cane to mills, causing
interruptions to the raw sugar production process; delaying the arrival of sugarcane
to the mill; allowing cane to deteriorate and lose sugar quality; delaying the arrival of
empty cane bins to harvesters causing the harvesters to waste time and money
waiting for empty bins; increasing cane production costs through inefficiencies in the
sugarcane transport system itself, and in the harvesting and raw sugar production
processes.
Many researchers have developed optimisation models to improve the efficiency of
the railway and sugarcane transport system. However, extensive work is still
required to develop new techniques of scheduling and achieve optimal solutions for
the sugarcane rail transport problem.
This research aims to produce effective techniques for producing efficient schedules
for the sugar rail transport system by developing new scheduling models based on
Mixed Integer Programming (MIP) and Constraint Programming (CP). This
research treats the train scheduling problem as a blocking parallel-machine job shop
scheduling (BPMJSS) problem and develops a novel job shop scheduling (JSS)
approach to solve it. This approach integrates mixed integer programming and
constraint programming to produce efficient schedules to solve the sugarcane rail
transport scheduling problem in a reasonable time. The new models include the
requirements of all major elements of the cane transport system to reduce the overall
cost and optimise the performance of the system. New metaheuristics including
simulated annealing and tabu search are proposed to solve this complex and NP-hard
scheduling problem. Hybrid and hyper techniques are developed using the
integration of the heuristic and metaheuristic techniques to improve the solution
quality and reduce the Central Processing Unit (CPU) time. Optimisation Programming
Language (OPL), CPLEX software and C# are used to develop the code of the
proposed models and the solution techniques.
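The core difficulty this approach addresses can be illustrated with a toy version of the blocking problem: several trains compete for one single-track section (one "machine" in the job shop analogy), and a train occupying the section blocks it until it leaves. The sketch below, which is an illustrative simplification rather than the thesis's actual BPMJSS formulation, enumerates all train orders on a single shared section and returns the one with the smallest makespan; the train names and times are hypothetical.

```python
from itertools import permutations

def best_order(trains):
    """Sequence trains through one single-track section (an n/1/Cmax toy).

    trains: dict mapping a train name to (release_time, traversal_time).
    Blocking rule: each train holds the section until it clears, so the
    next train cannot enter before the previous one finishes.
    Returns (makespan, order) for the best sequence found by enumeration.
    """
    best = None
    for order in permutations(trains):
        t = 0
        for name in order:
            release, dur = trains[name]
            # Start no earlier than the train's release time, and no
            # earlier than the moment the section is unblocked (t).
            t = max(t, release) + dur
        if best is None or t < best[0]:
            best = (t, order)
    return best

makespan, order = best_order({"T1": (0, 30), "T2": (10, 20), "T3": (5, 25)})
print(makespan, order)  # makespan 75 for this hypothetical instance
```

Exhaustive enumeration is only viable for a handful of trains; for realistic networks the thesis instead develops MIP and CP formulations with metaheuristic search.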
This chapter introduces the background to this research, the main research problem
and significance of the study. The main research question has been articulated
through investigative questions in Section 1.2, while the significance of the study is
highlighted in Section 1.3. A brief outline of the complete document is provided in
Section 1.4.
1.2 Research Problem
1.2.1 Background
The value chain of the Australian sugar industry is illustrated in Figure 1.1 and
consists of seven main stages: sugarcane growing; sugarcane harvesting; sugarcane
transport; raw sugar production (at the sugar mill); raw sugar storage; shipping and
refining.
The sugarcane transport system in Australia is an important element in the
production of raw sugar. Most of the sugarcane crop is transported from farms to
mills by rail. A cane railway network is mainly single track which includes many
branches, lines and sidings. The cane railway system performs two main tasks:
delivering empty bins from the mill to the harvesters at sidings; and collecting the
full bins of cane from harvesters and transporting them to the mill. From the
perspective of the transport system, the mill serves the function of converting full
bins into empty bins while the harvesters convert empty bins to full bins.
Figure 1.1: Value chain linkages in the sugar industry (Milford, 2002)
Sidings, where the empty and full bins are delivered and collected, have a finite
capacity that cannot be exceeded. However, each siding has a daily allotment of bins
(the number, determined by the mill, of bins to be filled by the harvesters) that often
exceeds the capacity of the siding, hence requiring multiple deliveries and
collections. Each train can haul a limited number of empty and full bins depending
on its capacity. Safety conditions require that empty and full bins are not mixed on
the train. As a result, all empty deliveries must take place before full collections.
This transport scenario significantly impacts on the overall cost of the sugarcane
production system and represents over 35% of the total cost of sugar production in
Australia.
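The interaction between siding capacity, the daily allotment and the train's haul limit determines how many visits each siding needs. A minimal sketch of that arithmetic follows; the function name and the example figures are hypothetical, chosen only to illustrate why an allotment exceeding siding capacity forces multiple deliveries and collections.

```python
import math

def visits_needed(daily_allotment, siding_capacity, train_haul_capacity):
    """Minimum number of delivery (and matching collection) visits for one siding.

    Each visit can drop at most min(siding_capacity, train_haul_capacity)
    empty bins: bins standing at the siding cannot exceed its capacity, and
    the train cannot haul more than its own limit. Illustrative only; a real
    cane rail schedule must also interleave visits across sidings and never
    mix empty and full bins on one run.
    """
    per_visit = min(siding_capacity, train_haul_capacity)
    return math.ceil(daily_allotment / per_visit)

# A siding allotted 120 bins/day, holding at most 40, served by 50-bin trains:
print(visits_needed(120, 40, 50))  # → 3
```

This simple count already shows how allotments above siding capacity multiply train movements, one of the cost drivers the scheduling models in later chapters seek to reduce.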
Sugarcane rail transport systems are very complicated and face many challenges
that directly and indirectly affect the overall cost and performance of the
sugarcane system. The challenges include:
Limited harvesting hours. In Australia, most harvesting operations occur
during daylight hours while the transport system and mills operate
continuously. As a result, the cane bins are used for temporary storage of the
harvested crop at sidings and at the mill. Long storage times reduce
sugarcane quality and crop profitability.
Minimising the number of locomotive runs to reduce operating costs.
Reducing the time between harvesting and crushing (cane age) to keep sugarcane quality high.
Scheduling passing of trains on the single rail track to improve performance
of the railway system.
Maintaining a non-interrupted supply of full bins to the mill and empty bins to the harvesters.
Many changing circumstances. A generic model needs to be designed to deal
with unexpected events such as ever-changing numbers of harvesters, bins
and trains.
An Automatic Cane Railway Scheduling System (ACRSS) was first developed at
James Cook University in the 1970s to produce daily schedules (Pinkney and Everitt,
1997). This software solved the railway scheduling problem by dividing the problem
into two parts: a routing problem that produced train runs, and a sequencing problem
to determine the train run times to satisfy harvester and mill requirements. Further
refinement of the solution was then undertaken by adding train runs to produce a
daily schedule to satisfy the objective function. This function considered issues such
as the number of cane bins, cane age, a non-interrupted supply of full bins to the
mill, and the capacity of the sidings.
ACRSS has since been upgraded to be more efficient and easy to use than it was
initially, but it still has many limitations. Firstly, this scheduler depends on iterative
techniques which produce feasible, but not optimal, solutions. ACRSS does not
include all sugarcane transport system constraints such as train passing constraints,
having two harvesters at one siding, and different speed and load limits on different
sections of track. The manual modifications required to the produced schedules take
so long that ACRSS cannot practically be used to prepare operating schedules on a
daily basis.
1.2.2 Sugarcane Transportation Systems
The sugarcane transport system is a complex system that includes a large number of
variables and elements. These elements work together to achieve the main system
objectives which aim to satisfy both mill and harvester requirements and improve the
efficiency of the system in terms of low overall costs. These costs include delay,
congestion, operating and maintenance costs.
Recent studies into reducing the overall cost of sugarcane transport systems can be
divided into three streams: the sugarcane rail transport systems; the integration of
cane transport and harvesting; and the road transport systems.
Integration of the transport and harvesting elements in the value chain of the
Australian sugar industry can increase the performance efficiency of the transport
system. For example, optimising the delivery and collection times throughout the
rail system requires information about harvester start times, harvesting rate, the
number of harvesters in the systems, and their locations. Harvester breakdown can
interrupt the transport system.
Cane is transported from the farms to the sidings using trucks. It is useful therefore
to study the earlier work on cane road transport systems to develop new models
which optimise the delivery and collection times between harvester locations and
sidings. This optimisation can further optimise the rail transport system between
sidings and the mill. Developing cane road system models and integrating them with
the rail models is beyond the scope of the current study; these issues can be
undertaken as future research.
1.2.2.1 The Sugarcane Rail Transport
The Australian sugarcane rail transport system is a unique system. Rail is uncommon
outside Australia for transporting sugarcane from farms to mills. Therefore, there are
very few studies that relate directly to sugarcane rail systems.
Everitt and Pinkney (1999) described an integrated set of tools to manage the
performance of a cane transport scheduling system. This work was designed to
achieve integration between the schedule simulation program, namely the animated
cane transport scheduling system (ACTSS), the schedule generating program,
namely the automatic cane railway scheduling system (ACRSS), and Traffic Officer
Tools (ToTools), (McWhinney & Penridge, 1991). ACRSS software used the data
of railway networks and harvesting patterns to produce efficient schedules for
reducing the number of locomotive movements. ACTSS software gave flexibility to
modify and develop the schedules. ToTools allowed the traffic officer to arrange and
manage all daily operations in the system, such as the distribution of harvesters.
Since all these software programs used the same data and similar methods of
processing, the integration process between them saved a lot of time in modelling
and improved the performance of the system. In addition, using the global
positioning system (GPS) in transport systems gave more data such as trip times,
speeds of trains, and shunt times at sidings. This data helped to better define system
parameters so that ACRSS and ACTSS could produce more effective schedules.
Martin et al. (2001) proposed using Constraint Logic Programming (CLP) to solve
cane railway scheduling problems. CLP is an artificial intelligence technique,
implemented here in the ECLiPSe language, for solving large-scale problems.
ECLiPSe is a Prolog-based language with extra features that suit the cane railway
system. This technique produced daily schedules that minimise the number of trains
and their runs while satisfying system constraints such as siding capacity and the
characteristics of bins on a train.
Higgins and Davies (2005) introduced a simulation model for the capacity of sugar
cane transport systems in Australia. This capacity was determined by estimating
some variables such as the number of trains used in the system and their movements,
number of bins, and the time spent waiting for the empty bins at farms. The
transportation modelling divides into two main types, the first of which is the static
model. In static models all variables are assessed and assumed beforehand. This
applies to variables such as the amount of time spent between cutting and crushing
cane, the demand for bins and trains, and the time spent waiting for empty bins at the
farms. This model has been previously used in research to develop a transport
system. The second type of transportation modelling is the dynamic model. Dynamic
models produce many schedules and are suitable for handling unexpected events in the
transport system. These models build on stochastic simulation, which makes them more
flexible and easier to use. Furthermore, they help identify the links between the
parameters of the sugarcane transport system, such as the average time between
cutting and crushing of cane, train movements per day, and time spent waiting for
bins at sidings.
1.2.2.2 The Cane Transport and Harvesting Integration
The integration between sugarcane value chain elements, particularly transport and
harvesting elements, was the main focus for the most recent studies on how to reduce
the overall cost of the system. Many techniques and methods were used to achieve
this purpose.
Higgins et al. (2007) concentrated on the sugar value chain elements, especially in
the harvesting and transport sectors. The integration between the value chain
elements improves the performance of the sugar production system and reduces the
overall cost. This research focused on logistical and non-logistical integration
between the value chain elements. Mathematical techniques were used to improve
the integration between value chain elements. This study included case studies from
many countries, such as Brazil, South Africa, Thailand and Australia, and presented
the techniques used in each sector.
Grimley and Horton (1997) proposed a mathematical model to reduce the total cost
of the harvest and transport systems using optimization techniques. They analysed
the whole system to understand all components such as transportation networks,
costs for all parts of the system, and the combination of groups of harvesters. In the
next step, they defined the interactions and relations between some sectors such as
transport, harvesting, crushing, inventory, and their effects on the total cost. Then
they used mathematical programming and operations research techniques to obtain
the best solutions based on the analysis of collected data. They developed two
models: first, a rotation model to determine the harvesters' schedules, transport
capacity, and the mill's needs to maintain a continuous supply; and second, a daily
model to process the rotation model output and produce optimal schedules of trains
and bin shifts. The two models were formulated as mixed integer programs using the
AMPL modelling language and CPLEX solvers. The model of Grimley and Horton
did not prove practical for use by sugar mills and is not in use.
Higgins et al. (2004a) developed a modelling framework to improve the sectors of
transport and harvesting efficiency via two case studies – one of Plane Creek and the
other of Mourilyan in North Queensland. The development of a modelling
framework depended on the integration between activities of the harvesting and
transport sectors in the sugarcane industry. The study found that the location of
farms affects the number of train movements and cane bin demand which has a huge
effect on the overall transport cost and on other components in the system.
Higgins (2004) concentrated on the reduction of costs of harvesting sugar cane and
transport by building a model for optimising siding use by harvesting groups. This
model was able to achieve best utilization of rail capacity, reducing the movement of
harvesters between the sidings, and achieving satisfaction and fairness for growers.
He used advanced methods, based on metaheuristic techniques such as tabu search,
which produced efficient siding schedules while keeping the mills operating without
interruption. Tabu search is more efficient than other local search heuristics because
of its flexibility in handling expected or unexpected changes to the system's data or
constraints.
Higgins et al. (2004b) designed models for three regions in Australia to produce
optimal sugarcane harvest schedules. These models depended on coordination
between growers, harvesters, and mills. They developed software that was used to
help the implementation process. These models achieved an increase in profits for all
three regions.
Higgins and Laredo (2006) developed a modelling framework to integrate the
transport and harvesting sectors to reduce the total cost of production. The two
sectors have many elements which can integrate inside the framework such as siding
and harvester rosters, and transport capacity planning. They used P-Median and
spatial clustering formulations to build these models. Neighbourhood search
techniques were used to solve their model. This work was applied to a sugar area
located in the north-east of Australia and achieved good results in reducing the
overall cost of the system. In addition, Higgins and Laredo could reduce the number
of sidings and harvesters by increasing the period of harvesting to 24 hours per day.
Iannoni et al. (2006) applied discrete simulation techniques to manage and analyse
the performance of the harvesting and transportation sectors in Brazil. Colin (2009)
used mathematical programming (quadratic programming) to develop a new model
for plantation planning and scheduling. The main aim of this model was to increase
the efficiency and competitiveness of Brazilian agro industries (integrating the
agricultural and industrial operations) such as sugarcane, orange and wood and
reduce the deviation from the constructed plan.
Salassi and Barker (2008) developed a framework to reduce harvest costs by
integrating the transport and harvesting sectors in Louisiana. They proposed a
mathematical model to minimise the total waiting time at the mill. This model
considered transport and harvesting variables and the delivery times of the crop.
Barker (2007) designed a linear programming model to obtain optimal harvest
schedules for groups of farms by combining the harvest units. Minimising the
waiting times at the mill can reduce the overall costs of the production system. The
coordination of harvest schedules through groups of farms has improved the
transport efficiency.
Kaewtrakulpong (2008) used multi objective optimisation techniques to improve the
efficiency of harvesting and transport systems. He used an integer programming
formulation to define the truck allocations and simulated the harvesting and transport
system. In this research he clarified the relation between mechanical harvesting and
the transportation processes in Thailand and optimised the harvesting groups’
allocations to reduce the total cost of the production. The old system in Thailand was
based on delivery directly from farms to mills.
Many papers have been presented to increase the efficiency and profitability of the
cane supply chain. These studies concentrated on the integration of farmers and
millers (Le Gal et al., 2004). Lejars et al. (2008) proposed a decision support
approach to improve the management of sugarcane supply through the mill area.
This approach depended on the MAGI simulation tool, a modelling tool for sugarcane
supply from field to sugar mill (Le Gal et al., 2003). In this paper, they suggested
some ways to increase efficiency such as rearrangement of mill areas or changing
cane delivery allocation rules to increase the total sugar production and the total
revenue. This approach has been implemented for two mills in Reunion and one mill
in South Africa. Integration of supply chain elements such as the mill crushing rate,
harvest mechanism, and transport capacities can improve sugarcane supply
management, increase the mill sugar production and achieve high quality sugarcane
in terms of the recoverable value of sugar as a percentage of the sugarcane mass (Le
Gal et al., 2009; Le Gal et al., 2008).
Grunow et al. (2007) designed optimisation models to keep a constant supply of
sugarcane while minimising the associated costs. They divided the sugarcane supply
problem into three levels: plantation planning, harvest scheduling, and harvester
groups and machines. They found the optimal decision for each level by using
Mixed Integer Linear Programming (MILP) to decrease the costs.
1.2.2.3 The Sugarcane Road Transport Systems

Road transport systems have very different constraints from rail transport systems.
Developing models for cane road systems could be useful for future projects to
optimise the delivery and collection time between harvester and siding. Models have
been developed in earlier work by a variety of researchers. The main limitation of
these models is that there is no integration between them and the rail transport
models to optimise the sugarcane transport systems.
A simulation model with a heuristic technique was developed by Hahn and Riberio
(1999) to improve the performance of the sugarcane road transport system. They
used a software package, the System Simulator for the Transport of Sugarcane
(SSTS), to solve this model, including a minimal allocation heuristic technique.
Díaz and Pérez
(2000) arranged and optimized sugar cane harvest and road transport processes using
a simulation optimization approach. This technique was based on the combination
of simulation modelling, response surfaces and optimization techniques to determine
the best solution. In this paper the response surface method was used to find average
trip times for sugar cane transport. They used the output of a simulation model as an
initial input for optimization methods to reach the near optimal solution.
Lopez et al. (2006) designed a mathematical model to reduce the daily cost of sugar
cane road transport from farms to the mill in Cuba. The model estimated the
capacity of the transport system which helped the supply of full bins to mills to be
continuous without interruptions. The mathematical formulation of this model
depended on a mixed integer linear programming method. The advantage of this
model was how it dealt with changes which happen from one day to the next. For
instance, the number of working hours and means of cutting and transportation may
all change so the model may need reformulation daily. Also, the number of farms,
harvesting machines and transportation means may change, so the decision variables
will change. Therefore, they used a tailor-made software package to solve these
problems with a high number of variables, thus making the daily update easier and
saving a lot of time. This paper also examined two different methods for the
transportation of cane: direct transportation between farms and mills, and
transportation of cane from farms to intermediate stores at stations to be cleaned and
subsequently carried to mills. The choice between these methods depended on the
location of farms and mills.
Arjona et al. (2001) developed a simulation model of sugarcane harvesting and road
transportation systems, covering processes from cutting the crop at farms to its
movement to mills. This model aimed to reduce the number of harvesters and all
transport means without increasing the end time of any stage of the system. They
considered all activities of the system independent and used SIMACT (Simulation of
Activities) software to design this model. SIMACT software was written in the C++
language. The results of this paper showed that this model made the system more
profitable and more efficient.
Higgins (2006) built a new model of a sugar cane road transport system. This model
was based on a mixed integer programming method. He solved this model using two
metaheuristic techniques: tabu search (TS) and variable neighbourhood search
(VNS). He produced transport schedules to reduce waiting times in the transport
system and at the mill. In this research, he proved that this model was more efficient
than manual methods used by a mill traffic officer when he applied this model in the
Maryborough region in Australia.
Chetthamrongchai et al. (2001) designed a new road transport scheduling system for
sugarcane in Thailand. Loading stations were constructed between farms and mills to
reduce the cost of cane transport, especially for small sugarcane growers. These
stations are located close to the growers in order to ease the supply of cane to the mills.
1.2.3 Complexity of the Sugarcane Rail Transport Systems

The current sugarcane rail transport system has two issues which make it complex:
the first is rail complexity and the second is the complexity of the sugarcane system
itself. Rail complexity refers to the large number of conflict points that exist
throughout the rail network and impede the safe passing of trains. The complexity of
the sugarcane system itself derives from the fact that numerous points throughout
the rail network serve dual roles as sidings for delivery and collection and as passing
loops. This complexity causes inefficiencies in system operations. Sections 1.2.3.1
and 1.2.3.2 examine these complexity issues in detail.
1.2.3.1 Rail Complexity

Single track railway systems use blocking constraints to achieve safe operations.
These blocking constraints prevent more than one train from occupying a track
section (a length of track between two key points in the track network, as explained
in Chapter 4) at the same time, and help resolve conflicts throughout the railway
network. These constraints work well for many types of railways, such as mining or
freight railways. However, blocking constraints are not always effective in a cane
railway network. Some sections are too short: the distance between two sidings can
be less than the length of the train. In this case, blocking a section is not sufficient to
satisfy safety requirements and may cause accidents. Additionally, cane railway
networks include many segments (branches or lines), some of which have no passing
loops; as a result, complete branches are sometimes used as substitute passing loops.
A segment blocking approach is considered in the proposed models in Chapter 4
(Modelling the Sugarcane Rail Transport Systems Problem) to deal with this type of
railway. All sections in a segment are blocked at the same time, preventing other
trains from using any of them simultaneously.
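As a minimal sketch (not the formulation developed in Chapter 4), the choice between section and segment blocking can be expressed as a simple feasibility check. The section lengths and train length below are hypothetical values for illustration only:

```python
# Hypothetical sketch of section vs segment blocking on a cane railway.
# If the train fits within the section it occupies, ordinary section
# blocking is safe; otherwise the whole enclosing segment is blocked.

def sections_to_block(segment, section_index, train_length):
    """Return indices of sections to block while the train occupies
    `section_index` of `segment` (a list of section lengths)."""
    if train_length <= segment[section_index]:
        return [section_index]          # section blocking suffices
    return list(range(len(segment)))    # block the entire segment

# Example: a segment of three short sections (metres) and two trains.
segment = [250, 180, 300]
print(sections_to_block(segment, 1, 400))   # train overhangs: [0, 1, 2]
print(sections_to_block(segment, 2, 120))   # train fits: [2]
```

The same check generalises directly to Figures 1.2 to 1.4: when every section is longer than the train, the function always returns a single section.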
The different situations throughout the rail network of the sugarcane system are
shown in Figures 1.2, 1.3 and 1.4. Figure 1.2 includes two sections, each longer than
the train; both work as sidings for delivery and collection. In this case, where the
length of the train is less than the length of the section, the section blocking
constraints can be applied correctly.
Figure 1.2: The length of the train is less than the length of the section
Figure 1.3 shows the case where the train length is greater than each section length.
Here, the section blocking constraints are not sufficient to meet the safety
requirements, so the segment blocking constraints are required to resolve any
conflicts and satisfy the safety requirements.
Figure 1.3: The length of the train is greater than the length of the section
Figure 1.4 shows that section 1, section 2 and section 3 are included in segment 1,
and the length of segment 1 is greater than the length of the train. As a result, the
segment blocking constraints can be applied successfully to resolve the conflict
shown in Figure 1.3.

Figure 1.4: The segment length is greater than the train length
Sections 1, 2 and 3 in Figure 1.4 cannot be joined into one long section if sections 1
and 3 contain two sidings where bins need to be delivered or collected. The two
sections must be treated as two separate sections within one segment, to which the
segment blocking constraints are applied. This issue is discussed in more detail in
the blocking segment models in Chapter 4.
1.2.3.2 Sugarcane Systems Complexity

The biggest difference between a sugarcane rail system and other rail systems, such
as coal mining, lies in the delivery and collection operations. Sugarcane systems
have many sidings in the middle of rail network branches which work as delivery
and collection points (Figure 1.5). These sidings are blocked during the delivery and
collection operations. Blocked sidings in the middle of the rail network increase the
number of conflict points and reduce passing opportunities.
Figure 1.5: Delivering and collecting point at the middle and the end of the branch (sugarcane)
The coal rail network on the other hand generally has sidings for delivering and
collecting at the end of the network only (Figure 1.6) and the train travels to the end
of the line and returns. While these sidings are also blocked during the delivering
and collecting operations, the blocked sidings do not increase the conflict points
because the blocked siding at the end does not work as a passing section for other
trains.
Figure 1.6: Delivering and collecting point at the end of the branch (coal mining)
Having sidings at the end of rail network branches reduces the number of options for
the train after delivering and collecting to one: the train must return, because the
branch has ended and has no further sections. As a result, the number of model
constraints can be reduced as well. On the other hand, having sidings in the middle
of rail network branches provides several options for the train.

The train may deliver:
- all empties to the siding and then return;
- all empties to the siding, but collect full bins from some sidings on the way back;
- some empties at the siding and deliver the remaining empty bins to other sidings
  further along the branch.

Other options include the train collecting:
- full bins until its capacity is reached;
- some full bins from one siding, with another siding on the same branch calling it
  back to collect full bins without it delivering any empty bins.
Additional issues such as arrival time, departure time, and waiting time to deliver
and collect at each siding make the decision making even more complex. The
combination of all these options, at all sidings, can produce a final set of operations
for each train during its run. The combination of all sets of operations for all runs for
all trains can then produce an optimal schedule for the entire system.
Increasing the number of sidings through each branch increases the number of
decision points (delivering and collecting points) which in turn increases the number
of conflict points throughout the rail network making the system even more
complicated. A good decision at each siding can optimise the efficiency of the entire
sugarcane system.
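The combinatorial growth described above can be illustrated with a toy enumeration. The sidings and per-siding option lists below are hypothetical, not taken from Figure 1.5:

```python
from itertools import product

# Hypothetical per-siding options on a short branch with three sidings.
options = {
    "s1": ["no activity", "deliver all", "deliver some", "collect full"],
    "s2": ["no activity", "deliver all", "collect full"],
    "s3": ["no activity", "collect full"],
}

# Each candidate set of operations for one run picks one option per siding,
# so the number of candidates is the product of the option counts.
candidate_runs = list(product(*options.values()))
print(len(candidate_runs))  # 4 * 3 * 2 = 24 candidate operation sets
```

Even this three-siding toy yields 24 candidate operation sets per run, which is why the full network's decision space grows so quickly.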
An optimal decision for one run can be produced by evaluating (D1×D2×…×DS)
decisions, where Ds is the average number of decisions at each section s and S is the
total number of rail sections, including the siding points. This optimal decision for
one run covering 13 sections and one mill, as illustrated in Figure 1.5, can be
produced by evaluating all combinations of the number of decisions and the number
of sections, including the mill. An objective function can be used to evaluate these
decisions. The total combinations of the decisions in the outbound and inbound
directions, which are explained in Table 1.1, over 14 sections (13 sections + 1 mill)
are calculated as follows:

(D1×D2×…×D14) = 10×8×11×8×12×8×7×12×7×8×8×12×8×8
              = 1.95×10^13 decisions per run.
The total number of decisions and their evaluated combinations, which produce an
optimal decision for the system including the number of trains and runs, is
calculated from:

Total combinations of the decisions = (D1×D2×…×DS) × K × R,    (1.1)

where K is the total number of trains and R is the total number of runs for each train.
Equation (1.1) indicates that increasing the number of sections or sidings in the
system can increase the number of decisions sharply. Additionally, the numbers of
trains and runs are significant factors in the complexity of the sugarcane rail system.
For instance, for a system of 2 trains with 5 runs each on the network shown in
Figure 1.5, the total number of decisions is:

1.95×10^13 × 2 × 5 = 1.95×10^14 decisions.
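The figures above can be reproduced in a few lines; the per-section decision counts are those listed in the worked example for Figure 1.5 and Table 1.1:

```python
from math import prod

# Per-section average decision counts D1..D14 (13 sections + 1 mill)
# taken from the worked example above.
decisions = [10, 8, 11, 8, 12, 8, 7, 12, 7, 8, 8, 12, 8, 8]

per_run = prod(decisions)   # decisions evaluated for a single run
total = per_run * 2 * 5     # Equation (1.1) with K = 2 trains, R = 5 runs

print(f"{per_run:.2e}")     # 1.95e+13
print(f"{total:.2e}")       # 1.95e+14
```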
Table 1.1 describes the decision matrix of the sugarcane system based on the
decisions of the train at each section in Figure 1.5. The two directions, outbound and
inbound, for each train are shown in Table 1.1, where the delivering and collecting
operations are in the outbound and inbound directions respectively, so as to reduce
the operating time of the trains. The decision of whether or not to visit other sidings
is taken at each siding but not in a single section. The decision to begin a new run is
made at the mill only.
This analysis of the complexity of the sugarcane rail transport system indicates that
the optimisation problem is strongly NP-hard, so substantial computational effort is
required to produce an optimal solution and an efficient schedule.
Table 1.1: Decision matrix of the rail network shown in Figure 1.5
(D: there is a decision; n/a: not applicable)

Columns, left to right: outbound direction (M, s1–s13), then inbound direction (M, s1–s13).

Delivering and collecting activities
  No activities:           D D D D D D D D D n/a D D D n/a | D D D D D D n/a D n/a n/a D D D n/a
  Only collecting:         D n/a n/a n/a n/a n/a n/a n/a n/a n/a n/a n/a n/a n/a | n/a n/a D n/a D n/a n/a D n/a D n/a D n/a D
  Only delivering:         n/a n/a D n/a D n/a n/a D n/a D n/a D n/a D | n/a n/a n/a n/a n/a n/a n/a n/a n/a n/a n/a n/a n/a n/a
  Delivering & collecting: n/a n/a n/a n/a n/a n/a n/a n/a n/a n/a n/a n/a n/a n/a | n/a n/a n/a n/a n/a n/a n/a n/a n/a n/a n/a n/a n/a n/a

Visit time
  Arrival time:            n/a D D D D D D D D D D D D D | D D D D D D D D D D D D D D
  Departure time:          D D D D D D D D D D D D D D | n/a D D D D D D D D D D D D D
  Waiting time:            D D D D D D D D D n/a D D D n/a | D D D D D D D D D D D D D D

Run decision
  Visit other sidings:     D n/a D n/a D n/a n/a D n/a n/a n/a D n/a n/a | n/a n/a n/a n/a D n/a n/a D n/a D n/a D n/a D
  New run:                 n/a n/a n/a n/a n/a n/a n/a n/a n/a n/a n/a n/a n/a n/a | D n/a n/a n/a n/a n/a n/a n/a n/a n/a n/a n/a n/a n/a
1.2.4 Research Questions
The complexity of the sugarcane rail transport system means there are many research
questions to be investigated in this thesis.
Question 1

Is a mathematical formulation of the sugarcane transport system necessary?

The sugarcane transport system is a complex system that involves many variables
and elements. Mathematical modelling provides many benefits in terms of accuracy
of results and the prediction of urgent and future problems, yielding solutions faster
than some other techniques.
The following questions are therefore investigated:
Why do we need a quantitative model?
How can the sugarcane transport problem be formulated as an application of
operations research?
How can the best solution be obtained for the sugarcane rail transport problem?
Question 2
What are the capabilities and accuracy of Constraint Programming (CP) and Mixed
Integer Programming (MIP) techniques to solve sugarcane transport system
problems under job shop scheduling fundamentals?
This study investigates the ability of CP and MIP techniques by designing new
models which have capabilities to produce efficient schedules, and so improve the
performance of the sugarcane transport system and reduce overall costs.
As such, the following questions are investigated:
What is the quality of the solution?
What is the CPU time of the solution?
Question 3
What is the impact of integrating constraint programming search techniques with
mixed integer programming and constraint programming models on the quality of
the solution of the sugarcane transport system problem?
This research investigates the impact of integrating CP search techniques such as
Best First Search Technique (BFS), Depth-First Search Strategy (DFS), Slice Based
Search (SBS), Limited Discrepancy Search (LDS), Depth-bound Discrepancy Search
(DDS), Interleaved Depth First Search (IDFS), Standard Search Strategy (SSS) and
Dichotomic Search Strategy (DSS) with mixed integer programming and constraint
programming models on the quality of problem solutions. The sugarcane system
includes an extensive number of variables. Therefore, constructive techniques are
required to deal with all variables and produce good solutions in a reasonable time.
The following questions are investigated:
What is the quality of the solution?
What is the CPU time of the solution?
What are the effects on the overall cost of the sugarcane rail transport
system when the new model is applied?
Are the necessary safety conditions achieved during transporting of bins
between farms and mill, without increasing the transport cost?
Question 4

How can the new models achieve flexibility and deal with new or urgent situations
in the sugarcane transport system?
The sugarcane transport system is a dynamic system because new and sometimes
urgent situations arise daily such as changes in the number of bins, trains and labour.
For this reason, flexible models are required which can deal with these changes and
achieve the objectives. For example, if the number of the trains is reduced, the need
to minimise the total completion time has a high priority, while if the number of
trains is large, the priority of the objective function will be to minimise the total
waiting time. These models can obtain solutions for cases where the objective
function changes between criteria such as minimising makespan, total waiting time,
and total completion time of the train run per day. Therefore the following are
investigated:
What are the effects of the changing objective function on selecting the
solution technique?
What are the impacts of changing of the objective function on the optimality
of solution or CPU time?
What is the impact of changing the objective function on the overall costs of
the system?
What are the impacts of changing the number of trains and harvesters on the
optimality of solution or CPU time?
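The priority rule described under Question 4 can be sketched as a simple objective selector. The threshold value below is an assumption for illustration only, not a parameter from the thesis models:

```python
# Hypothetical sketch of the objective-switching rule: with few trains,
# minimising total completion time has priority; with many trains,
# minimising total waiting time does. The threshold of 3 is assumed.

def choose_objective(num_trains, threshold=3):
    if num_trains <= threshold:
        return "total completion time"
    return "total waiting time"

print(choose_objective(2))   # total completion time
print(choose_objective(10))  # total waiting time
```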
Question 5

What is the impact of hybrid and hyper metaheuristic techniques on the solution for
the sugarcane transport system problem, in particular for large-scale problems?
Exact solution techniques cannot solve large-scale problems in a reasonable time.
Therefore, metaheuristic techniques such as simulated annealing (SA) and tabu
search (TS) are used to reduce the CPU time of large-scale problems. Hybrid and
hyper metaheuristic techniques are proposed in this research to improve the solutions
of SA and TS.
Therefore the following will be investigated:
What is the impact of the SA technique on the quality of the solution?
What is the impact of the SA technique on the CPU time of the solution?
What is the impact of the TS technique on the quality of the solution?
What is the impact of the TS technique on the CPU time of the solution?
What are the impacts of hybrid metaheuristic techniques, such as hybrid
SA/TS and hybrid TS/SA, on the quality and CPU time of the solution?
What are the impacts of hyper metaheuristic techniques such as hyper SA/TS
and hyper TS/SA on the quality and CPU time of the solution?
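As background for the SA questions above, the standard Metropolis acceptance criterion used in simulated annealing can be sketched as follows. This is the textbook rule, not the thesis's hybrid SA/TS implementation:

```python
import math
import random

def accept(delta, temperature, rng=random.random):
    """Standard simulated-annealing acceptance rule: always accept
    improving moves (delta <= 0); accept worsening moves with
    probability exp(-delta / temperature)."""
    if delta <= 0:
        return True
    return rng() < math.exp(-delta / temperature)

# At high temperature, worsening moves are often accepted, which lets
# the search escape local optima; as the temperature cools, behaviour
# approaches greedy local search.
```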
1.3 Contribution and Significance of the Study

As already identified, the sugarcane transport system has a significant effect on the
sugar production process and its overall cost. In Australia, the total cost of
sugarcane transport operations is high, so it is important for this study to develop
efficient schedules for the sugarcane transport system to optimise its performance.
The application of optimal schedules to the sugarcane rail transport system will:
- Ensure a continuous supply of full bins to the mill, and empty bins to the
  harvesters.
- Remove train scheduling conflicts.
- Minimise the time between sugarcane harvests at farms and the crushing
  operation at the mill (the cane age) by optimising delivery and collection times.
- Minimise the waiting time of trains at sidings so that trip time between farms
  and mills is decreased.
- Maximise the capacity of the railway system.
- Minimise the number of locomotive shifts and so reduce the number of crews.
New mathematical models are developed using the Mixed Integer Programming
(MIP) and Constraint Programming (CP) approaches. MIP and CP approaches
under Blocking Parallel Job Shop Scheduling (BPJSS) fundamentals will be used to
solve the sugarcane transport system problem. Other applications of these models
include systems such as the coal mining rail transport system. This study focuses on
many types of integration of MIP and CP search techniques. Heuristic and
metaheuristic techniques are developed to solve large scale problems. The
integration types and solution techniques followed in this project are outlined below:
i. Mixed Integer Programming (MIP): The MIP formulation as a model and
CP search techniques are integrated to combine the power of both, improving
the solution and the solution's CPU time, especially in large-scale problems.
MIP models are developed using two different blocking constraints, segment
and section blocking, and the parallel job shop scheduling approach. The OPL
solver will be used to obtain a solution for MIP to minimise makespan. The
CPLEX MIP solver will be used to minimise the total waiting time as an
objective function. This solver includes the CPLEX MIP algorithm and a
branch and bound algorithm which uses cooperation between a simplex
algorithm and constraint-based domain reduction.
ii. Constraint Programming (CP): The CP formulation as a model and CP search
techniques as solution techniques are used. CP models are developed under
segment and section blocking constraints. The OPL Solver and Scheduler Solver
will be used to obtain solutions.
iii. Standard Search Strategy (SSS) and Dichotomic Search Strategy (DSS) are
integrated with all CP search techniques such as BFS, DFS, SBS, LDS, DDS, and
IDFS with the MIP and CP models to reduce the CPU time.
iv. Algorithms are developed to solve the rail network issues such as algorithms for
collecting and delivering conflict elimination, algorithms for solving train
conflict, and algorithms for computing acceleration.
v. Metaheuristic techniques, SA and TS, are adapted to solve large-scale problems.
The C# language is used to develop the metaheuristic code.
vi. New neighbourhood techniques are developed based on the section and train
neighbourhoods.
vii. Hybrid and hyper metaheuristic techniques, hybrid SA/TS, hybrid TS/SA, hyper
SA/TS and hyper TS/SA, are developed to improve the quality and the CPU time
of the metaheuristic solutions. Two hyper techniques were developed using C#
language.
The new models have the flexibility of changing the objective function based on the
priority of the objectives in the sugarcane transport system. For example the
objective function in this research can minimise makespan, total waiting time or total
completion time based on the needs of the system.
1.4 Outline of the Thesis
The main research plan of this thesis is shown in Figure 1.7 and is explained in
detail in seven chapters. Chapter 1 describes the research problem. Chapter 2
presents the overview of the main scheduling approaches, in particular job shop
scheduling, and different solution techniques, such as disjunctive graph and hybrid
branch and bound technique, to obtain optimal solutions. Chapter 3 describes
Constraint Programming (CP) and Mixed Integer Programming (MIP) models for
the sugarcane rail transport system, including train scheduling and sugarcane system
constraints. The extensions of the mathematical models are presented in Chapter 3 to
optimise the real visit times at the sidings for real sugarcane rail transport systems,
avoiding any interruption in the supply of bins to harvesters. Chapter 4 shows the
different solution techniques and how they can be applied to solve the sugarcane rail
transport system under blocking constraints. Many new algorithms and heuristics are
developed to solve the proposed mathematical models. Chapter 5 presents the
computational results of the CP and MIP models for sugarcane rail transport systems
under blocking constraints. Optimization Programming Language (OPL) and
CPLEX are used to obtain results from the CP and MIP models. This chapter
develops a timetable for the sugarcane rail transport system. Chapter 6 reports the
metaheuristic techniques of simulated annealing (SA) and tabu search (TS) that are
used to solve the sugarcane rail transport scheduling problem. The hybrid and hyper
metaheuristic techniques developed to improve the quality and CPU time of the
solutions are also presented here. The Visual Studio C# language was used for coding
the search problem and obtaining the results. Finally, conclusions and
recommendations for future work are presented in Chapter 7.
This chapter has briefly outlined the background to the research problem and has
demonstrated the need for new models that optimise the performance of the
sugarcane system which in turn reduces the total cost of the transport process
between farms and mills. The set of investigative questions along with the main
research problem have been identified. The significance of the research has also been
highlighted.
Figure 1.7: Research plan
[Figure 1.7 diagram: the sugarcane rail transport system is modelled with a job shop scheduling approach. MIP and CP formulations (blocking section and blocking segment models, solved with CPLEX and OPL using search strategies such as standard, dichotomy, best first, depth first, slice based, depth bound discrepancy and interleaved depth first search) and the disjunctive graph with branch and bound are applicable for small-size problems. Simulated annealing (SA), tabu search (TS) and the hybrid and hyper SA/TS and TS/SA techniques, coded in C#, improve the solutions for large-size problems. The final model is verified, validated and justified for real-life problems; several components are linked to published papers and papers under review.]
Chapter 2
Scheduling Theory Review
Chapter Outline
2.1 Introduction
2.2 Scheduling Classifications
2.2.1 Job Characteristics
2.2.2 Machine Environment
2.2.2.1 Single Machine Scheduling (SMS)
2.2.2.2 Parallel Machine Scheduling (PMS)
2.2.2.3 Flow Shop Scheduling (FSS)
2.2.2.4 Job Shop Scheduling (JSS)
2.2.2.5 Open Shop Scheduling (OSS)
2.2.2.6 Group Shop Scheduling (GSS)
2.2.3 Optimality Criteria
2.2.4 Concluding Remarks
2.3 Job Shop Scheduling (JSS) Solution Techniques
2.3.1 Disjunctive Graph Technique
2.3.1.1 Heuristic Techniques
2.3.1.1.1 The Shifting Bottleneck Algorithm
2.3.1.1.2 Dispatching Rules
2.3.2 Mixed Integer Programming Approach for JSS Problem
2.3.2.1 A Hyper Branch and Bound Technique
2.3.3 Constraint Programming Approach for the JSS Problem
2.4 Scheduling Theory and Railway Systems in Literature
2.5 Conclusion
2.1 Introduction
In this research, scheduling theory, job shop scheduling in particular, is adapted to
scheduling the sugarcane rail system. Many problems in real life are classified as
scheduling problems; where scheduling is defined as the allocation of resources over
time to perform a collection of tasks (Baker, 1974). Each resource (machine) has its
own features and configurations while each task (job) can be described by its
processing time, resource requirement, start and finish time. Resources can be
equated to machines in a job shop scenario and tasks can be equated to jobs.
Scheduling makes decisions by optimising one or more objectives; these decisions
are called solutions. The simplest approach to obtaining the best solution is to evaluate all
solutions and then select the best. The main problem with this approach is that the
number of solutions to be tested grows exponentially with the size of the problem (in a
combinatorial optimisation problem). As a result, the computation time of this
approach increases rapidly as the problem grows, making it hard to solve even
moderately sized instances in reasonable time. Many heuristic and metaheuristic
techniques have been developed to solve large-scale problems. The core idea of
these techniques is to reduce the searched solution space so as to reach good solutions in a short
time. Many scheduling problems are hard to solve and are classified as NP-hard
(nondeterministic polynomial-time hard), meaning that no polynomial-time algorithm
is known for them (Lawler et al., 1993).
2.2 Scheduling Classifications
In the scheduling environment, there are many forms and classifications which
include three main elements: job characteristics; machine environment and
optimality criteria (Pinedo, 2008).
2.2.1 Job Characteristics
The first element in the scheduling problem is the job characteristics, where for each job j
the following features can be defined:
Processing time (gij): the time required for processing job j on machine i.
Ready time (rj): the arrival time of job j in the system for processing.
Due date (dj): the time by which job j should be completed.
Weight (wj): a factor determining the priority of execution of job j relative to the
other jobs in the system.
Additionally, each job may have a setup time, and pre-emption may or may not be
allowed. Pre-emption means that the processing of a job on a machine can be
interrupted at any time to allow another job to be processed on the same
machine. These job features can
affect the definitions of the scheduling problem types and the complexity of each
type. For example, jobs which depend on processing time only are easier to deal
with than those which depend on processing time and due date. That is, increasing
the number of features of any job can raise the complexity of the scheduling
problem. Furthermore, increasing the number of jobs inside the scheduling problem
can increase the complexity of the problem. For example, dealing with problems that
include a limited number of jobs is easier than dealing with problems that include an
unlimited number of jobs.
2.2.2 Machine Environment
The machine environment includes the machines in the system and the features of
these machines. Figure 2.1 illustrates the three main machine types.
Figure 2.1: Scheduling problem types
[Figure 2.1 diagram: machine environments divide into single machine scheduling (SMS), parallel machine scheduling (PMS) and multi machine scheduling (MMS). MMS subdivides into flow shop (FSS), job shop (JSS), open shop (OSS) and group shop (GSS) scheduling, each with limited-buffer, unlimited-buffer and blocking variants (LBFSS, ULBFSS, CJSS, BJSS, LBJSS, ULBJSS, BFSS, BGSS, LBGSS).]
• Single Machine Scheduling (SMS): includes one machine or single
production unit.
• Parallel Machine Scheduling (PMS): A parallel machine system includes
identical machines that are working in parallel. Each job can be processed by
any one of the free machines.
• Multi Machine Scheduling (MMS): includes many scheduling forms such as
flow shop scheduling, job shop scheduling (JSS), open shop scheduling
(OSS) and group shop scheduling (GSS).
Each type is shown as follows:
2.2.2.1 Single Machine Scheduling (SMS)
Single machine scheduling has n jobs, j = 1,…, n, to be processed on a single machine
m1 following the same route, as shown in Figure 2.2.
Figure 2.2: single machine scheduling problem; n jobs processed on one machine
The general formulation of the single machine scheduling problem for minimising
the total weighted completion time is:

Min Σ_{j=1}^{n} wj Cj

where Cj and wj are the completion time and weight respectively of the j-th job.
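To make the total weighted completion time concrete, the sketch below (Python, with made-up processing times and weights) evaluates Σ wjCj for a given sequence and checks by brute force that ordering the jobs by the ratio pj/wj, the WSPT rule discussed later in Section 2.3.1.1.2, minimises it on a single machine.

```python
from itertools import permutations

def total_weighted_completion_time(jobs):
    """jobs: list of (processing_time, weight); returns sum of w_j * C_j."""
    t, total = 0, 0
    for p, w in jobs:
        t += p          # completion time C_j on the single machine
        total += w * t
    return total

def wspt_order(jobs):
    """Weighted Shortest Processing Time: sort by p_j / w_j (Smith's rule)."""
    return sorted(jobs, key=lambda job: job[0] / job[1])

jobs = [(4, 1), (2, 3), (5, 2), (3, 1)]   # hypothetical (p_j, w_j) pairs
best = min(total_weighted_completion_time(list(perm)) for perm in permutations(jobs))
assert total_weighted_completion_time(wspt_order(jobs)) == best   # WSPT is optimal here
```

The brute-force minimum exists only for tiny instances; WSPT gives the same value in O(n log n) time.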
2.2.2.2 Parallel Machine Scheduling (PMS)
Parallel machine scheduling has n jobs, j = 1,…, n, to be processed on m parallel
machines, with processing time gj for each job j, as shown in Figure 2.3.
Figure 2.3: Parallel machine scheduling problem; n jobs processed on m parallel machines
The makespan (Cmax), the completion time of the last operation of the last job in the
system, is the objective function of the parallel machine scheduling problem
(P||Cmax), which is formulated as:

Σ_{j=1}^{n} yij gj ≤ Cmax, ∀ i = 1,…, m

where yij = 1 if job j is processed on machine i, and yij = 0 otherwise.
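A minimal sketch of how P||Cmax is attacked in practice: the longest processing time (LPT) list-scheduling heuristic below (Python; the job list is made up) assigns each job, longest first, to the currently least-loaded machine, and the resulting maximum load is the makespan. This is an illustrative heuristic, not the optimal solution of the formulation above.

```python
import heapq

def lpt_makespan(processing_times, m):
    """LPT list scheduling for P||Cmax: assign each job, longest first,
    to the currently least-loaded of the m parallel machines."""
    loads = [(0, i) for i in range(m)]   # (current load, machine index)
    heapq.heapify(loads)
    for g in sorted(processing_times, reverse=True):
        load, i = heapq.heappop(loads)   # least-loaded machine
        heapq.heappush(loads, (load + g, i))
    return max(load for load, _ in loads)

jobs = [7, 7, 6, 6, 5, 4, 4, 2]          # hypothetical processing times
assert lpt_makespan(jobs, 3) == 15       # total work is 41, so 3 machines need >= 14
```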
2.2.2.3 Flow Shop Scheduling (FSS)
All jobs in flow shop scheduling have to be processed on all machines in the same
sequence. n jobs are processed on m machines in a fixed order and as a result all jobs
have the same route as shown in Figure 2.4.
Figure 2.4: Flow shop problem; n jobs processed on m machines
There are two main types of flow shop scheduling problems that depend on the
buffer capacity between successive machines. The two types are: flow shop with
unlimited buffer capacity and flow shop with limited buffer capacity. A flow shop
with unlimited buffer capacity (Fm|| Cmax) is suitable for many practical applications
especially when the products are small. The flow shop is called permutation if the
order of jobs on the machines does not change.
A flow shop with limited buffer capacity (Fm|block|Cmax) means that the storage space
between two successive machines is limited. In this case a machine m might be
blocked when it finishes job j, because j cannot find empty buffer space while waiting
for processing on the next machine; j must then remain on m, which can prevent
other jobs waiting in the queue for m from being processed. The flow shop
scheduling problem is a special case of the job shop scheduling problem.
2.2.2.4 Job Shop Scheduling (JSS)
The job shop scheduling problem is NP hard and one of the combinatorial problems
in the scheduling field (Lenstra et al., 1979). The JSS problem assigns machines to
operations over time, where processing times (duration of tasks) are independent of
the schedule which optimises the given objective. In job shop scheduling problems,
each job has a predefined route which may be different from other job routes as
shown in Figure 2.5.
The classical JSS problem can be described as follows. There is a set of machines
M = {m1, m2, m3, m4, …, mm} and a set of jobs J = {j1, j2, j3, …, jn}. Each job ji ∈ J has a
set of k operations Oi = {oi1, oi2, oi3, …, oik} and is processed in the same order, in
non-interrupted time (non-preemptively), each operation on a unique machine. Each machine can
process at most one operation at a time. All operations follow the precedence
constraints, where the preceding operation has to finish before the
succeeding operation starts. Many objectives can be pursued in solving the job shop
scheduling problem, such as minimising total tardiness, mean tardiness, total flow
time and mean flow time, but minimising the completion time of the last operation,
the makespan, is the most popular. These objectives are described in Section
2.2.3 in detail.
Figure 2.5: Job shop problem; n jobs processed on m machines
The two main JSS problem types are blocking job shop problems and classical job
shop problems where the blocking constraints are not applied. Each machine is
blocked during the period of processing any operation, until this operation leaves
that machine. Limited buffers and unlimited buffers have to be considered while
applying the blocking constraints.
2.2.2.5 Open Shop Scheduling (OSS)
The open shop scheduling (OSS) problems are similar to those in job shop
scheduling, but there are no precedence constraints between the operations of the
same job (no order between operations). This means no fixed order of the job
operations as shown in Figure 2.6. Gonzales and Sahni (1976) have proved NP-
hardness for the OSS problem.
Figure 2.6: Open shop problem; n jobs processed on m machines
2.2.2.6 Group Shop Scheduling (GSS)
In group shop scheduling (GSS) problems, a set of operations that is processed on a
set of machines for each job is divided into a set of groups G1, G2…Gk, where k is
the number of groups. There are no restrictions on operations in the same group,
while the relations between operations of the different groups follow the priorities
and the constraints between the groups (see Figure 2.7). If the number of groups
equals one, then the GSS problem becomes an open shop scheduling problem, while
if each group includes one operation, the problem becomes a job shop scheduling
problem. As both JSS and OSS are special cases of GSS, GSS is an NP hard
problem. Blocking constraints can be applied with limited buffer capacity in group
shop problems.
Figure 2.7: Group shop scheduling problem; n jobs include different groups on m machines
2.2.3 Optimality Criteria
In scheduling problems, there are many different optimality criteria (objective
functions) that change from one application to another. These criteria depend on the
job features. Criteria are divided into:
Criteria that depend on the processing time
Completion time (Cj): the finish time of processing job j in the system.
Flow time (Fj): the time taken to complete job j; Fj = Cj − rj.
Maximum completion time (makespan): Cmax = maxj {Cj}.
Mean flow time (F̄) = (1/n) Σ_{j=1}^{n} Fj.
Maximum flow time (Fmax) = maxj {Fj}.
Criteria that depend on the processing time and on the due date
Lateness (Lj): the time by which the completion time exceeds the due date of job
j; Lj=Cj-dj
Tardiness (Tj): Tj=max{Cj-dj,0} where tardiness is never negative.
Maximum lateness (Lmax): Lmax = maxj {Lj}.
Total weighted completion time = Σ_{j=1}^{n} wj Cj.
Total weighted tardiness = Σ_{j=1}^{n} wj Tj.
Number of tardy jobs (NT): NT = Σ_{j=1}^{n} α(Tj), where α(Tj) = 0 if Tj = 0 and
α(Tj) = 1 if Tj > 0.
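A small worked example of these criteria (Python; the completion, due and release times are made up):

```python
completion = [5, 9, 12]      # C_j for three jobs (hypothetical)
due = [6, 8, 14]             # d_j
release = [0, 2, 3]          # r_j

flow = [c - r for c, r in zip(completion, release)]    # F_j = C_j - r_j
lateness = [c - d for c, d in zip(completion, due)]    # L_j = C_j - d_j
tardiness = [max(l, 0) for l in lateness]              # T_j = max(C_j - d_j, 0)

c_max = max(completion)                  # makespan
l_max = max(lateness)                    # maximum lateness
n_tardy = sum(1 for t in tardiness if t > 0)           # number of tardy jobs

assert (c_max, l_max, n_tardy) == (12, 1, 1)
assert flow == [5, 7, 9]
```

Note that lateness can be negative (an early job), while tardiness is never negative.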
2.2.4 Concluding Remarks
Scheduling theory can be applied to solve many problems in real applications by
identifying the three elements of job characteristics, machine environment and
optimality criteria. In this case it is applied to a single track railway scheduling
problem by identifying the job as a train run and the machine as a single section
track. The job characteristics apply in train scheduling as follows: each train has
a ready time before starting its trip, a sectional running time (processing time),
and a due time for delivering and collecting the empty and full bins to and from the
sidings. These characteristics are weighted (prioritised) to determine the relative
importance of train trips to the sidings.
In terms of the machine environment, the techniques used to solve a single track
railway scheduling problem need four main abilities: preventing two trains occupying
one section at a time; keeping the train activities (operations) in sequence during each
run (trip); passing the trains through a section in the correct order (priorities of
passing trains); and easing the passing of trains by resolving rail conflicts.
The SMS model uses one machine and multiple jobs. As the research problem
involves multiple machines (sections), SMS alone cannot solve the
research problem and is discarded. The nature of the FSS is different from the
research problem because in FSS the machines are sequenced and used in one
direction, while the machines (sections) in a single track railway scheduling problem
will be used in two directions (outbound and inbound). The OSS technique is
unsuitable for the research problem where there are no precedence constraints
between the operations of the same job (no order between operations). This means
the train activities (operations) will not be in a sequence during each trip (run). The
same problem arises in GSS where there are no restrictions on operations in the same
group.
A blocking parallel-machine job shop scheduling (BPMJSS) model will be used to
formulate the research problem, in which the blocking constraints, PMS and JSS are
integrated to solve the single track railway scheduling problem. The JSS prevents
two trains passing in one section at a time and the section availability for a passing
train can be taken into account by applying the blocking constraints. The JSS
includes the precedence and disjunctive constraints which keeps the train activities
(operations) in sequence during each run (trip) and allows for more than one train
pass on one section, in the correct order. PMS helps solve the rail conflicts by easing
passing trains.
Makespan and the total waiting time are used in this research as optimality criteria to
reduce the total cost of the rail transport system. The two criteria are suitable and
satisfy the aims of this research.
2.3 Job Shop Scheduling (JSS) Solution Techniques
The JSS problem is computationally difficult and many techniques have been
developed to solve it. Researchers have used many different approaches to
formulate the JSS problem such as integer programming (IP), mixed integer
programming (MIP), disjunctive graphs, dynamic programming (DP) and constraint
programming (CP) formulations. Many solution techniques have also been proposed
over the last few decades to solve the JSS problem. These techniques are divided
into exact solution techniques (branch and bound, dynamic programming) and
approximate solution techniques (heuristic, metaheuristic and search techniques)
(Brucker et al., 1994; Artigues et al., 2009).
The JSS problem is formulated in this research in three main ways: mixed
integer programming, constraint programming and disjunctive graph techniques.
These techniques can generate accurate solutions in reasonable
time using approximate and exact solution techniques (Puget &
Lustig, 2001; Foccaci et al., 2002). Additionally, these techniques are suitable for
the research problem. Figure 2.8 shows the job shop solution techniques including
the different mathematical formulations.
Figure 2.8 Job shop solution techniques
[Figure 2.8 diagram: job shop solution approaches comprise mathematical formulations (disjunctive graph, mixed integer programming, constraint programming, dynamic programming); branch and bound; heuristic techniques (the shifting bottleneck, and dispatching rules such as first input first output, shortest processing time, longest processing time, earliest due date, critical ratio and minimum slack); metaheuristic techniques (tabu search, simulated annealing); and search techniques (best first, depth first, slice based, limited discrepancy, depth bound discrepancy, interleaved depth first, standard and dichotomy search strategies).]
2.3.1 Disjunctive Graph Technique
This section defines the disjunctive graph model G= (O, A, E) where O is a set of
nodes, A is a set of directed arcs (conjunctions) and E is a set of undirected arcs
(disjunctions). The classical JSS problem can be formulated by the disjunctive graph
model to minimise the makespan as an objective function. In the disjunctive graph
model, the makespan value is determined by the critical path.
A critical path in a job shop schedule consists of a set of operations in which the first
operation starts at time 0 and the last operation finishes at the makespan. The
completion time of an operation on the critical path is equal to the starting time of
the next operation on the path.
To find the solution, the sequence of operations on the critical path is explored to
identify alternative solutions that have minimal impact on the makespan. A
disjunctive graph shows all feasible solutions and the critical path in JSS problems
and can be used to minimise the makespan. The approach can be illustrated as
follows:
The set of all job operations is defined by the set O of nodes. There are two dummy
nodes 0 and F where 0 is the start node and F is the end node. The maximum number
of operations is O = J×M, where J is the total number of jobs and M is the total
number of machines. The set of precedence constraints between consecutive
operations of the same job is represented by the set A of conjunction arcs (where the
order of operations must be preserved) shown as solid lines in Figure 2.9. In a
feasible solution (a schedule), a conjunction arc from operation k to operation j (k →
j) with processing time gk must satisfy the precedence constraint tk + gk ≤ tj where tk
is the start time of operation k, and tj is the start time of operation j.
Figure 2.9: Conjunctive arcs for 5 jobs and 5 machines
The sequence of job operations executed on the same machine is represented by the
set E of disjunctions, where E = ∪_{i=1}^{M} Ei and Ei is the subset of disjunctive pair-arcs
(where the order of operations can be swapped), shown as dotted lines in Figure 2.10,
corresponding to machine i (Adams et al., 1988). A disjunctive arc k—j, where k and j
are operations of two different jobs with processing times gk and gj, carries two
constraints, one of which must be satisfied: tk + gk ≤ tj or tj + gj ≤ tk.
Figure 2.10: Disjunctives arcs for 5 jobs and 5 machines
An acyclic selection, one that creates no directed cycle together with the conjunctive
arcs, is used for sequencing each machine. An acyclic selection Bi is a feasible
sequencing of machine i, and a complete selection B is the union of the selections Bi,
i ∈ M. The makespan is the length of the longest path from the start node to the end
node. A solution of the job shop scheduling problem is then a complete selection
B ⊂ E in the disjunctive graph that minimises the length of this longest path (the
critical path). Example 2.1 illustrates the disjunctive graph model using the
3/3/G/Cmax problem (3 jobs processed on 3 machines without interruption,
minimising the makespan) as a JSS problem.
Example 2.1
The machine sequence and the processing time for each job are
shown in Table 2.1 below.
Table 2.1: 3 jobs and 3 machines problem

Job | Operations | Machines | Processing times
J1  | 1, 2, 3    | 1, 2, 3  | 4, 3, 6
J2  | 4, 5, 6    | 1, 3, 2  | 4, 6, 2
J3  | 7, 8, 9    | 2, 1, 3  | 3, 6, 4

Figure 2.11 shows the disjunctive arc set E:
E = E1 ∪ E2 ∪ E3;
E1 = {(1,4), (4,1), (1,8), (8,1), (4,8), (8,4)}
E2 = {(2,6), (6,2), (2,7), (7,2), (6,7), (7,6)}
E3 = {(3,5), (5,3), (3,9), (9,3), (5,9), (9,5)}
Figure 2.11: The disjunctive graph for 3 jobs and 3 machines
This example uses two feasible solutions to explain the disjunctive graph. Each
feasible solution corresponds to a graph DS whose critical path, the longest path from
the dummy start operation 0 to the dummy finish operation 10, gives the makespan
Cmax. All operations on the critical path are critical operations. Figure 2.12 shows the
first feasible solution of Example 2.1, where the complete selection S is:
S = S1 ∪ S2 ∪ S3;
S1 = {(4, 1), (1, 8)}
S2 = {(7, 2), (2, 6)}
S3 = {(5, 9), (9, 3)}
Figure 2.12: The first feasible solution for the 3 jobs and 3 machines example
Makespan = 24
Figure 2.13 shows that the critical path is {0→4→1→8→9→3→10}, with length
(makespan Cmax) equal to 24.
Figure 2.13: The Gantt chart for the first feasible solution
Figure 2.14 shows the second feasible solution of Example 2.1, where the complete
selection S is:
S = S1 ∪ S2 ∪ S3;
S1 = {(4, 8), (8, 1)}
S2 = {(7, 2), (2, 6)}
S3 = {(5, 9), (9, 3)}
Figure 2.14: The second feasible solution for the 3 jobs and 3 machines example
Makespan = 23
In Figure 2.15, the critical path is {0→4→8→1→2→3→10}, with length (makespan Cmax) 23.
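The two selections of Example 2.1 can be checked mechanically. The sketch below (Python) rebuilds the conjunctive arcs from the job routes of Table 2.1, adds a complete selection of disjunctive arcs, and computes the longest path by repeated relaxation. The machine-1 arcs of the second selection are taken here as (4, 8), (8, 1), i.e. the order 4→8→1 on M1 implied by the critical path {0→4→8→1→2→3→10}.

```python
# Operation data from Table 2.1: id -> (job, machine, processing time)
OPS = {
    1: ("J1", "M1", 4), 2: ("J1", "M2", 3), 3: ("J1", "M3", 6),
    4: ("J2", "M1", 4), 5: ("J2", "M3", 6), 6: ("J2", "M2", 2),
    7: ("J3", "M2", 3), 8: ("J3", "M1", 6), 9: ("J3", "M3", 4),
}
ROUTES = {"J1": [1, 2, 3], "J2": [4, 5, 6], "J3": [7, 8, 9]}

def makespan(selection):
    """Length of the longest path over conjunctive arcs (job routes) plus
    the chosen disjunctive arcs; an arc k -> j carries weight g_k."""
    arcs = [a for route in ROUTES.values() for a in zip(route, route[1:])]
    arcs += list(selection)
    start = {o: 0 for o in OPS}          # earliest start times
    for _ in range(len(OPS)):            # enough sweeps for an acyclic graph
        for k, j in arcs:
            start[j] = max(start[j], start[k] + OPS[k][2])
    return max(start[o] + OPS[o][2] for o in OPS)

S_first = [(4, 1), (1, 8), (7, 2), (2, 6), (5, 9), (9, 3)]
S_second = [(4, 8), (8, 1), (7, 2), (2, 6), (5, 9), (9, 3)]
assert makespan(S_first) == 24
assert makespan(S_second) == 23
```

The assertions reproduce the two makespans read off the Gantt charts of Figures 2.13 and 2.15.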
Figure 2.15: The Gantt chart for the second feasible solution
The disjunctive graphs in Example 2.1 show that any operation i has two types of
predecessors and successors. Its job predecessor and successor are denoted JP[i] and
JS[i], and its machine predecessor and successor are denoted MP[i] and MS[i]. Job
predecessors and successors are conjunctive arcs of the graph DS, while machine
predecessors and successors are disjunctive arcs. The final solution of the JSS problem
depends on the two types of predecessors and successors of each operation.
2.3.1.1 Heuristic Techniques
2.3.1.1.1 The Shifting Bottleneck Algorithm
Adams et al. (1988) proposed the shifting bottleneck algorithm as an effective
technique for solving the job shop scheduling problem. The algorithm repeatedly
selects, from the machines not yet sequenced, the machine whose sequencing most
constrains the makespan; this machine is called the bottleneck machine, and the
algorithm produces the sequence of operations on it before identifying the next
bottleneck machine.
2.3.1.1.2 Dispatching Rules
Many dispatching rules have been proposed to solve the job shop scheduling
problem. These techniques are faster than many other heuristic techniques and
sometimes are very effective in solving job shop scheduling problems. In this section
some of the priority rules are summarized.
The first rule is the shortest processing time (SPT) rule. The core of this technique is
to order jobs in ascending order of processing time pi: p1 < p2 < p3 < … < pn, where n
is the total number of jobs. This technique performs well at minimising the number of
tardy jobs (Chang et al., 1996; Rajendran & Holthaus, 1999).
The second rule is the weighted shortest processing time (WSPT) rule. In this technique,
each job i has a weight wi, and jobs are ordered so that p1/w1 < p2/w2 < p3/w3 < … < pn/wn.
The next rule is the critical ratio (CR) rule. This technique orders jobs in ascending
order of the ratio CR = (di − current time)/ri, where ri is the remaining processing
time. This rule is efficient for many scheduling problems (Moser &
Engell, 1992; Pierreval & Mebarki, 1997; Rose, 2002; Abu-Suleiman et al., 2005).
Another rule is the longest processing time (LPT) rule, which orders jobs in
descending order of processing time. Additionally there are the first input first
output (FIFO) rule and the minimum slack (MS) rule.
The last rule is the earliest due date (EDD) rule, where jobs are ordered in ascending
order of their due dates di: d1 < d2 < d3 < … < dn. This rule performs well for
minimising the tardy rate (Chang et al., 1996) and the mean tardiness (Jeong & Kim,
1998).
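Two of these rules have classical single-machine optimality properties that can be checked by brute force: SPT minimises the total completion time and EDD minimises the maximum lateness on one machine. The sketch below (Python; the processing times and due dates are made up) verifies both on a small instance.

```python
from itertools import permutations

def schedule_metrics(jobs):
    """jobs: list of (p, d); returns (total completion time, max lateness)."""
    t, total, l_max = 0, 0, float("-inf")
    for p, d in jobs:
        t += p                     # completion time of this job
        total += t
        l_max = max(l_max, t - d)  # lateness L_j = C_j - d_j
    return total, l_max

jobs = [(3, 5), (1, 2), (4, 11), (2, 6)]       # hypothetical (p_i, d_i) pairs
spt = sorted(jobs)                             # shortest processing time first
edd = sorted(jobs, key=lambda job: job[1])     # earliest due date first

best_total = min(schedule_metrics(list(s))[0] for s in permutations(jobs))
best_lmax = min(schedule_metrics(list(s))[1] for s in permutations(jobs))
assert schedule_metrics(spt)[0] == best_total  # SPT minimises total completion time
assert schedule_metrics(edd)[1] == best_lmax   # EDD minimises maximum lateness
```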
2.3.2 Mixed Integer Programming Approach for JSS Problem
Manne (1960) proposed a mixed integer programming formulation to solve the job
shop scheduling problem. Formulating the job shop scheduling problem requires two
main constraint types to be included in the main model: precedence constraints
(Equation 2 below) and disjunctive constraints (Equations 3 and 4 below).
Precedence constraints ensure that operation o of job j∈J on machine i ∈M must
finish before operation o+1 of the same job starts on the same machine. Disjunctive
constraints ensure jobs j and k are processed on machine i in the correct order. Either
job j on machine i precedes job k on the same machine where job k cannot use this
machine before job j leaves it, or job k on machine i precedes job j on the same
machine where job j cannot use this machine before job k leaves it. The main model
for the job shop scheduling problem under a makespan Cmax optimisation criterion is:
Model Notations
tijo: start time of operation o of job j on machine i.
gijo : processing time of operation o of job j on machine i.
yijk =1: if job j precedes job k on machine i.
yijk =0 : otherwise
Z: a sufficiently large positive constant.
Cmax: makespan.
Model formulation
Minimise Cmax (2.1)
Subject to
tijo + gijo ≤ tij(o+1)  ∀ operation o of job j on machine i (2.2)
tijo ≥ tiko′ + giko′ − Z·yijk  ∀ o ∈ j and o′ ∈ k (2.3)
tiko′ ≥ tijo + gijo − Z·(1 − yijk)  ∀ o ∈ j and o′ ∈ k (2.4)
Cmax ≥ tijo + gijo  ∀ o ∈ j (2.5)
tijo ≥ 0 and gijo ≥ 0 (2.6)
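The role of the big constant Z in the disjunctive pair (2.3) and (2.4) can be sketched for one operation of each of two jobs j and k sharing machine i (Python; the processing times and Z are made up, and the comments follow the convention that yijk = 1 means job j precedes job k). Fixing yijk deactivates one inequality and activates the other, so the two operations never overlap.

```python
Z = 1000                      # a sufficiently large constant
g = {"j": 4, "k": 6}          # hypothetical processing times on machine i

def earliest_starts(y):
    """Earliest start times satisfying the disjunctive pair for one
    operation of job j and one of job k on the shared machine."""
    tj = tk = 0
    for _ in range(3):                             # relax until stable
        tj = max(tj, tk + g["k"] - Z * y)          # (2.3): binds when y = 0
        tk = max(tk, tj + g["j"] - Z * (1 - y))    # (2.4): binds when y = 1
    return tj, tk

for y in (0, 1):
    tj, tk = earliest_starts(y)
    # whichever order y selects, the operations do not overlap
    assert tj + g["j"] <= tk or tk + g["k"] <= tj
```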
2.3.2.1 A Hyper Branch and Bound Technique
While exact techniques give an optimal solution for some problems, they are very
time consuming even for small cases. In this section, a hyper technique that integrates
a branch and bound method with the n/1/Lmax algorithm is described for solving the
job shop scheduling problem (Pinedo, 2008). Applying dynamic programming to the
JSS problem is not within the scope of this study, because DP is difficult to apply to
large-scale problems.
To obtain an optimal solution for the job shop scheduling problem, branch and
bound techniques can be used. A branching method is used to generate all active
schedules and select the schedule with the minimum makespan (or the optimal
schedule by another criterion). A feasible schedule is called active if no operations
can be completed earlier and there are no delaying operations. This method is based
on complete enumeration, and as a result is very time consuming, especially with
large-scale problems. The branching method is described as following.
Notations α: the set of all operations for which all predecessors have already been scheduled.
sij : earliest possible starting time of operation(i,j) є α, where,
i is number of machine and j is number of job.
pij: processing time of job j on machine i.
α΄: a subset of set α.
Step 1: (initial conditions)
α: the first operation of each job.
sij = 0 for all (i,j) ∈ α.
Step 2: (machine selection)
Calculate t(α) for the current partial schedule (the schedule during the new branching process):
t(α) = min{sij + pij}; (i,j) ∈ α.
i*: the machine on which the minimum is achieved.
Step 3: (branching)
α΄: all operations (i*,j) on machine i* where si*j < t(α).
For each (i*,j) ∈ α΄:
Extend the partial schedule by scheduling (i*,j) as the next operation on machine i*;
Delete (i*,j) from α;
Add the job successor of (i*,j) to α;
Return to Step 2.
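The branching method above can be sketched in a few lines of Python. This is a minimal illustrative implementation, not the thesis code: the representation of jobs and the recursive routine are hypothetical, and the routine simply enumerates all active schedules following Steps 1-3 and returns the minimum makespan.

```python
def active_schedules(jobs):
    """Enumerate all active schedules (Steps 1-3) and return the minimum
    makespan. jobs[j] is an ordered list of (machine, processing_time)."""
    best = [float("inf")]

    def branch(next_op, job_ready, mach_ready):
        # alpha: jobs whose next unscheduled operation exists (predecessors done)
        alpha = {j: k for j, k in next_op.items() if k < len(jobs[j])}
        if not alpha:                       # complete schedule reached
            best[0] = min(best[0], max(job_ready.values()))
            return
        def est(j):                         # earliest start of job j's next op
            m, _ = jobs[j][alpha[j]]
            return max(job_ready[j], mach_ready.get(m, 0))
        # Step 2: earliest completion t(alpha) and the machine i* achieving it
        t = min(est(j) + jobs[j][alpha[j]][1] for j in alpha)
        i_star = next(jobs[j][alpha[j]][0] for j in alpha
                      if est(j) + jobs[j][alpha[j]][1] == t)
        # Step 3: branch on every operation on i* that could start before t
        for j in alpha:
            m, p = jobs[j][alpha[j]]
            if m == i_star and est(j) < t:
                s = est(j)
                branch({**next_op, j: alpha[j] + 1},
                       {**job_ready, j: s + p},
                       {**mach_ready, m: s + p})

    branch({j: 0 for j in range(len(jobs))},
           {j: 0 for j in range(len(jobs))}, {})
    return best[0]
```

For a hypothetical two-job, two-machine instance, `active_schedules([[(0, 3), (1, 2)], [(1, 4), (0, 1)]])` enumerates both active schedules and returns the optimal makespan of 6, illustrating how quickly the enumeration tree grows with problem size.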
Step 3 executes the branching scheme: the current partial schedule at any node is
used to produce new branches from that node. The partial schedule can be
determined by the disjunctive arcs produced during the branching process. The
number of branches is determined by the number of operations in α΄. A branch of
any node corresponds to the choice of an operation (i*,j) ∈ α΄ as the next
operation on machine i*. Nodes at the very bottom of the tree produce all the
active schedules. A lower bound on the makespan is given by the length of the
critical path in the disjunctive graph. A better lower bound can be calculated
after producing all disjunctive arcs. The n/1/Lmax problem (n jobs, one machine,
maximum-lateness objective) is solved to avoid processing more than one operation
on the same machine at the same time, where not all disjunctive arcs have yet
been produced, and to obtain a better lower bound. In this problem, the maximum
lateness, Lmax, is calculated for processing n jobs on one machine. The following
algorithm is used to solve the n/1/Lmax problem, which is classified as strongly
NP-hard.
The n/1/Lmax algorithm is integrated with the branch and bound method to solve
the job shop scheduling problem.
Step 1: calculate the lower bound (LB) as the length of the longest path from the source node to the sink node.
Step 2: calculate the earliest starting time sij for each operation (i,j) on machine i by calculating the longest path from the source to operation (i,j) in the disjunctive graph.
Step 3: calculate the longest path from (i,j) to the sink in the graph, Φij, and then calculate the due date dij = LB − Φij + pij.
Step 4: solve the single-machine problem with release dates and no pre-emption to minimise the lateness Li.
Step 5: repeat Steps 2–4 for all machines to calculate L1, L2, ..., Lm. A better lower bound can be obtained using LB* = LB + max Li, where i = 1 to m.
The largest value LB* is a lower bound at any node of the search tree. The
makespan can be obtained from the minimum lower bound.
Solving a job shop problem using the hyper branch and bound technique, as an
exact technique, is still very time consuming, especially for large-scale
problems. Example 2.1 is a 3/3/G/Cmax problem that is strongly NP-hard.
By applying the hyper branch and bound technique to Example 2.1, the final
solution is obtained at level 6. Figure 2.16 shows the complete disjunctive
graph for the final solution. All steps taken in Example 2.1 are shown in full
in Appendix A. The complete solution shows that these techniques are time
consuming even for small cases.
Figure 2.16: Disjunctive graph for the final solution
The lower bound LB = L{0, (1,4), (1,1), (2,2), (3,3), (3,9), 10} = 21
Table 2.2 and Figure 2.17 illustrate the better lower bound.
Table 2.2: Solving the n/1/Lmax problem for the three machines M1, M2 and M3 at the final level
M1: s11 = 4, s12 = 0, s13 = 8; Φ11 = 17, Φ12 = 21, Φ13 = 10; d11 = 8, d12 = 4, d13 = 14
M2: s21 = 8, s22 = 11, s23 = 0; Φ21 = 13, Φ22 = 2, Φ23 = 16; d21 = 11, d22 = 21, d23 = 8
M3: s31 = 11, s32 = 4, s33 = 17; Φ31 = 10, Φ32 = 16, Φ33 = 4; d31 = 17, d32 = 11, d33 = 21
Figure 2.17: Optimal solutions of M1, M2 and M3 at the final level for the n/1/Lmax problem, with optimal sequences J2, J1, J3 on M1; J3, J1, J2 on M2; and J2, J1, J3 on M3, each with maximum lateness L1 = L2 = L3 = 0
The new lower bound LB* = 21 + max{0, 0, 0} = 21.
The branching stops at node (3, 3). Figure 2.18 presents the complete search tree
where the lower bounds are represented in the boxes at each level.
Figure 2.18: The complete branching procedure of the optimal solution (levels 0–6 of the search tree)
Nodes at the very bottom of the tree correspond to all active schedules, where
the selection is complete. The solution shown in Figure 2.18 indicates that the
optimal solution lies at the lowest level, with makespan = 21.
As seen from Example 2.1, the exact techniques are time consuming. There are many
steps in obtaining the final solution. Therefore there is an urgent need to develop
techniques which provide good solutions in a reasonable time. Heuristic
techniques such as the shifting bottleneck algorithm and dispatching rules
obtain quick solutions for NP-hard problems and are described in Section 2.3.5.
Metaheuristic techniques such as simulated annealing and tabu search, which deal
with NP-hard problems and obtain good solutions in a reasonable time, are
examined in Chapter 6.
2.3.3 Constraint Programming Approach for the JSS Problem
The field of constraint programming (CP) is relatively new with the basic concepts
developed in the field of artificial intelligence in the 1970s and then further advanced
in computer science in the mid-1980s. CP was first discussed internationally in 1995
and since then has become an important technique that complements traditional
mathematical programming technologies to optimise many business activities (Sadeh
& Fox, 1996; Sadeh et al., 1995; El Sakkout & Wallace, 2000).
The formulation of any problem using CP can be defined by three elements: (S,
D, C), where S is a set of variables, {s1,s2,s3,...,sn }; D is a set of variable domains,
where D(si) = Di is the domain of variable si; and C is a set of constraints
restricting the values that the variables can take, where ci ∈ C is a constraint
over some variables and can be written as:
ci (si, sj, sk,... ) ⊆ Di × Dj × Dk ×…
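The (S, D, C) formulation can be illustrated with a minimal backtracking solver. The solver and the tiny one-machine example below are hypothetical sketches, not part of any CP library.

```python
def solve(variables, domains, constraints, assignment=None):
    """Backtracking search over (S, D, C): return one assignment of values
    from the domains D that satisfies every constraint in C."""
    assignment = assignment or {}
    if len(assignment) == len(variables):
        return assignment
    var = next(v for v in variables if v not in assignment)
    for value in domains[var]:
        assignment[var] = value
        if all(c(assignment) for c in constraints):
            result = solve(variables, domains, constraints, assignment)
            if result:
                return result
        del assignment[var]
    return None

# Illustrative: two start-time variables for tasks of durations 3 and 2 on one
# machine; the single (disjunctive) constraint forbids the tasks overlapping.
variables = ["s1", "s2"]
domains = {"s1": range(6), "s2": range(6)}
no_overlap = lambda a: ("s1" not in a or "s2" not in a
                        or a["s1"] + 3 <= a["s2"] or a["s2"] + 2 <= a["s1"])
solution = solve(variables, domains, [no_overlap])
```

On this toy instance the search returns the first consistent assignment, s1 = 0 and s2 = 3, showing how the constraint prunes partial assignments as soon as they become inconsistent.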
Formulating the job shop scheduling problem in constraint programming is relatively
simple. Each operation can be expressed as a variable where the start time of each
operation is defined by the determined domain. With these variables, the precedence
and disjunctive constraints can be defined easily (Equations 2.8 and 2.9 below
respectively). Makespan (Cmax) is introduced to restrict the completion times of all
operations. A schedule with minimal makespan will be obtained. A formulation of
the JSS problem using the CP tool of Optimisation Programming Language (OPL) is
as follows:
Minimise Cmax (2.7)
Subject to
task[i, j, o].start + Duration[i, j, o] ≤ task[i, j, o+1].start   ∀o∈j (2.8)
{task[i, j, o].start + Duration[i, j, o] ≤ task[i, k, o'].start} ∨
{task[i, k, o'].start + Duration[i, k, o'] ≤ task[i, j, o].start}   ∀o∈j and o'∈k (2.9)
Cmax ≥ task[i, j, o].start + Duration[i, j, o]   ∀o∈j (2.10)
task[i, j, o].start ≥ 0 and Duration[i, j, o] ≥ 0 (2.11)
Duration [i, j, o] represents the processing time of operation o of job j on machine i
and task [i, j, o].start means the start time of operation o of job j on machine i. The
JSS problem solution depends on solving a sequence of constraint satisfaction
problems related to different upper bounds for the makespan. The constraint
programming technique assumes there is an upper bound Y on the makespan and
finds the start time t for each operation o, where t lies in the initial domain
[0, Y − d(o)]; d(o) is the processing time of operation o; and the precedence
and resource constraints are
satisfied. Nuijten and Le Pape (1998) used the ILOG Scheduler to obtain a solution
by looking for the earliest start time, the latest start time, the earliest completion time
and the latest completion time inside the domain to minimise the makespan. The
main aim of using CP in the scheduling field is to reduce computation times and
reduce the variable domains using the constraints during the search procedure. This
process is called constraint propagation. Variable and value ordering heuristics and
backtracking techniques have been applied to capture the JSS problem search space
and reduce the search process time (Sadeh & Fox, 1996; Sadeh et al., 1995).
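Constraint propagation over start-time windows can be sketched as follows. The propagate function is a simplified, hypothetical illustration of bounds reduction over precedence arcs within the initial domains [0, Y − d(o)] described above, not the ILOG Scheduler's algorithm.

```python
# Each operation o keeps a start-time window [est[o], lst[o]] inside the
# initial domain [0, Y - d(o)]; a precedence arc (a, b) tightens est[b]
# and lst[a] until no further change occurs (a fixed point).

def propagate(prec, dur, Y):
    """prec: list of (a, b) meaning a precedes b; dur: processing times;
    Y: assumed upper bound on the makespan."""
    est = {o: 0 for o in dur}
    lst = {o: Y - dur[o] for o in dur}       # initial domain [0, Y - d(o)]
    changed = True
    while changed:
        changed = False
        for a, b in prec:
            if est[a] + dur[a] > est[b]:     # b cannot start before a ends
                est[b], changed = est[a] + dur[a], True
            if lst[b] - dur[a] < lst[a]:     # a must leave room for b
                lst[a], changed = lst[b] - dur[a], True
    return est, lst
```

For durations {a: 3, b: 4} with a preceding b and Y = 8, propagation shrinks b's earliest start to 3 and a's latest start to 1, cutting the search space before any branching takes place.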
Cheng and Smith (1997) applied a constraint programming approach to the JSS
problem, especially Precedence Constraint Posting (PCP) and multi-PCP, using
benchmark problems. An edge-guessing algorithm was used to achieve an optimal solution for
the operation sequences for small-scale JSS problems and a near-optimal solution for
large-scale problems (Belhadji & Isli, 1998). This algorithm integrated constraint
propagation with a problem decomposition approach to solve the JSS problems
easily in a short time (Dorndorf et al., 2002).
The efficiency of Constraint Logic Programming (CLP) as an approach for solving
JSS problems individually or integrated with other techniques is described in much
research. Trentesaux et al. (2001) presented two approaches: CLP and Distributed
Problem Solving (DPS) which are inherited from artificial intelligence (AI) and
distributed artificial intelligence (DAI) respectively to solve the JSS problem. The
results for JSS problems have been obtained using Constraint Handling in Prolog
(CHIP) for a CLP and Distributed Production Scheduling System (DPSS) for a DPS.
The optimality for DPS cannot be proved for all cases, while in CLP it can be proved
for small-scale problems and some large-scale problems. The integration of CLP and
DPSS improved the quality of the JSS problem solution.
Neural networks have also been constructed using constraint programming to
improve their performance while searching for JSS solutions. An adaptive neural
network reduced the computational time, especially for large-scale JSS problems,
and improved the quality of the solutions to near-optimal (Yang, 2006;
Yang et al., 2010).
Constraint Integer Programming (CIP) has been proposed as a new approach to solve
combinatorial problems, in particular the JSS problem (Achterberg, 2007a, 2007b;
Achterberg et al., 2008). Most solvers for CP, MIP and Satisfiability (SAT)
problems (mathematical logic problems) work by dividing the main problem into
smaller subproblems and then exploring the potential solutions. However, these
solvers differ in how they process or solve the subproblems. The CIP approach
integrates the three techniques CP, MIP and SAT, relying on a branch and bound
technique as the common core used in all three areas. Additionally, algorithms
such as Linear Programming (LP) relaxations and cutting-plane separators work
with branch and bound techniques in MIP, constraint-specific domain propagation
algorithms are used in CP and SAT, and conflict analysis is used as in modern
SAT solvers.
New models were proposed to solve JSS problems (Hentenryck & Michel, 2005;
Watson & Beck, 2008; Barba et al., 2009) that depended on the integration of
constraint satisfaction techniques and local search techniques to improve the quality
of the solution in large-scale problems. This research modelled the JSS problem as a
constraint satisfaction problem which was solved using local search techniques and
designing new neighbourhood structures. Dovier et al. (2009) designed empirical
comparisons between CLP and Answer Set Programming (ASP) using combinatorial
problems to select the most efficient approach for each problem. This
classification of solution approaches by their efficiency on different problems
is useful for future research and for applications related to CLP and ASP.
CLP Techniques are used for solving JSS problems in three ways. The first way uses
pure CLP as an individual approach and provides good results in reasonable time for
small cases. However, results for large-scale problems are not sufficiently stable to
obtain the optimal solution. The second way makes comparisons between the CLP
and other approaches to identify the advantages of CLP in solving many different
cases in job shop scheduling problems. It also allows the researcher to know which
approach is going to obtain the most efficient solution. The last way CLP
techniques are used for solving JSS problems is by integrating CLP with other
techniques, such as Integer Linear Programming (ILP), MIP and local search
techniques, to solve large-scale problems to optimality in reasonable time.
2.4 Scheduling Theory and Railway Systems in Literature
Railway systems are very complicated particularly if they are single track systems.
Many studies and researchers have analysed and solved major problems to improve
the efficiency of such systems. Most studies have concentrated on railway operation
development or optimisation of train schedules, and improving railway capacity.
These studies have analysed the single track rail system as an NP-hard problem, so
that finding a solution demands special techniques. These techniques and
mathematical models were developed using scheduling theory to optimise system
performance and solve NP-hard problems in the transport sector such as reducing
delays and solving conflicts.
Mixed integer programming and other mathematical programming models were
proposed to optimise the train scheduling system. Yang et al. (2010) developed a
mixed integer programming model for a single track rail to solve the conflicts on
tracks. A branch and bound technique was used to solve the model. Higgins and
Kozan (1997) introduced a mathematical model to study the impact of timetable
changes and railroad infrastructure changes so as to optimise train schedules on a
single track rail. Their approach was to change the number of trains and the number
of sidings, where the solution for 31 trains and 14 sidings was obtained in a
reasonable time. Higgins et al. (1996a, 1996b) designed a mathematical model to
optimise the number and location of sidings on a single track rail to reduce the
delays caused by congestion. They used a decomposition approach where the model
was divided into two sub models, one to optimise track segment lengths and arrival
and departure times, and the other to determine the optimal train schedule. This
approach was useful in that it reduced the calculation time especially for large-scale
problems. Jeong and Kim (2011) developed a model for sequencing the delivery
operations in two main train movement directions (inbound and outbound) to obtain
optimal solutions for transfer by the rail crane. Probability models were developed to
determine the knock-on delays of trains caused by rail conflicts (Marinova &
Viegasb, 2001; Yuan & Hansen, 2007; Murali et al., 2009). Meng and Zhou (2011)
suggested a new mathematical model and stochastic programming approach to
optimise single track train movement to reduce the total train delay time.
Branch and bound techniques were proposed with a local search technique to
improve the solutions of the rail system optimization problem. Corman et al. (2010)
developed some algorithms to improve a real-time traffic management system by
integrating rescheduling methods and local rerouting strategies in a tabu search
technique. The modified branch and bound method obtained a solution with a short
computation time. Neighbourhood structures for train rerouting were used in this
research, and the Dutch railway network served as a practical case. A discrete
event optimisation model and a branch and bound method were developed to solve a
single track train scheduling problem as a blocking job shop problem (Ariano et al.,
2007; Zhou & Zhong, 2007). Solving conflicts through a single track rail was the
main aim of these studies. A branch and bound method reduced the total time of
train trips and improved the accuracy of the solution to be near-optimal, as well as to
decrease computation times. The optimisation techniques increase the efficiency of
solutions but require long computation times for large-scale cases.
Constraint programming techniques were used to develop new models for the train
scheduling problem. Rodriguez (2007) developed a new model for the routing and
scheduling of trains by using constraint programming and based this model on a
simulator to calculate train run time. The main aim of Rodriguez’s model was to
solve conflicts and reduce delays at junctions by integrating a decision support
system and constraint programming. The model improved train system performance
for some real cases in a reasonable time. Abril et al. (2008) formulated the railway-
scheduling problem as constraint satisfaction problems which were defined as NP-
complete and solved by division into sub-problems. Search techniques used in this
research were the depth-first search technique and a tree partition method. Both
techniques depended on graph partitioning, where the domain of the main problem
was divided into subdomains, each with the same number of nodes.
Sato et al. (2007) suggested a shunting scheduling method for a railway depot
that is well suited to many dynamic systems. A flexible job shop scheduling
formulation, solved with constraint programming techniques, was used to achieve
good results in a reasonable time. Salido and Barber (2009)
developed new heuristic techniques to solve periodic train scheduling problems some
of which were based on mathematical programming methods such as linear
programming and local search, while other techniques were based on railway
topological characteristics and constraint programming techniques. Spanish railways
were used as a case study in this project which aimed to maximise the capacity of the
rail line. CPLEX software was used and obtained results which were close to optimal
solutions.
Disjunctive graph models were proposed and metaheuristic techniques used to obtain
near-optimal solutions. Burdett and Kozan (2008) proposed a novel hybrid job shop
approach to solve a train scheduling problem, designing a disjunctive graph model
of train operations and using metaheuristic techniques such as simulated annealing.
This approach achieved good results for many objective criteria and improved train
movements. Burdett and Kozan (2010) developed a new constructive technique, the
job shop approach, to solve train-scheduling problems. This solution depended on a
disjunctive graph model and constructive solution methods and achieved better
results than metaheuristic techniques for different criteria such as minimizing a
makespan objective. Mascis et al. (2002) studied the job shop scheduling problem
with blocking, where the operations of each job must be processed without any
interruption. They used a generalization of the disjunctive graph to formulate the
problem. Higgins and Kozan (1997) also proposed a mathematical model to
optimise a train schedule on a single track rail and solve train conflicts by
introducing metaheuristic techniques such as genetic algorithms, tabu search and
hybrid algorithms. CPU times were calculated to compare these techniques on
different problems. Liu and Kozan (2009, 2011) represented the train-scheduling problem as a
blocking parallel machine job shop scheduling problem. Here, they solved the
blocking problem for single and multiple railway tracks and extended the classical
disjunctive techniques and proposed a new heuristic method, named the feasibility
satisfaction procedure. The new technique was applicable to many complex real
situations in train crossing problems.
Heuristic techniques were used to increase the efficiency of railway performance.
Kozan and Burdett (2005) developed a new methodology to measure the capacity of
the railway system and increase the efficiency of railway performance. They
assessed factors which have a significant effect on railway capability, such as
the length and weight of trains, stopping rules for trains, and the distance
between two stations. They also proposed a new definition and estimation of
train travelling times between two points, which had a significant impact on
railway capacity. Burdett and Kozan (2006) extended their previous railway capacity
modelling to estimate the capacity for different railway systems. Firstly, they
developed techniques for railway lines to improve the level of performance.
Secondly, they developed more complex techniques to solve railway network
problems. The numerical results for the case study showed the positive impact of
these techniques on the performance of railway systems.
Mixed integer programming, disjunctive graphs and constraint programming
approaches have been used to develop new models for railway scheduling problems
particularly with respect to single track rails. Many solution techniques such as
branch and bound, heuristic and metaheuristic techniques were used to solve the
developed models.
2.5 Conclusion
This research uses job shop scheduling techniques to solve the sugarcane rail
transport problem. There are several benefits in treating the train scheduling
problem as a blocking parallel-machine job shop scheduling (BPMJSS) problem.
First, the BPMJSS technique prevents two trains operating on one section of
track at the same time, because only one train (a job in a conventional job
shop) can operate on a track section (a machine in a conventional job shop) at
any time. The availability of each section is taken into account, where some
sections or segments are unavailable for use by some trains during a specific
time; these are called blocking sections. Secondly, the precedence constraints,
where the preceding operation has to finish before the next operation starts,
are applied to train scheduling to keep the train activities (operations) in
sequence during each run (trip). Disjunctive constraints in JSS ensure that two
jobs processed on one machine are processed in the correct order. These
constraints can be applied to train scheduling where more than one train passes
over one section, enforcing the correct order. This serves to reduce waiting
times and resolve rail conflicts. PMS is integrated with JSS to ease passing
trains and help resolve rail conflicts as well. The JSS technique looks more
promising for finding better solutions, in reasonable computational times, than
alternative methods do.
Many solution techniques have been used to solve the job shop scheduling
problem. JSS was formulated using mixed integer programming, disjunctive graphs
and constraint programming. The hyper branch and bound technique is an exact
solution technique which obtains the optimal solution for the job shop
scheduling problem, but it is time consuming and requires a large number of
iterations to achieve optimality. Other techniques such as the shifting
bottleneck algorithm or dispatching rules obtain solutions quickly, but their
accuracy is not high. Metaheuristic techniques will be described in Chapter 6.
The integration of railway operations and the sugarcane system is the main
focus of this research. Designing new models which include all sugarcane system
constraints (train and siding capacity, delivery and collection constraints)
and the railway operations will have significant benefits in reducing the
overall cost and achieving safe conditions during the passing of trains.
Integrating the railway operations and the sugarcane system involves two main
sections, namely; rail operation constraints related to passing trains, and capacity
constraints. A blocking parallel-machine job shop scheduling approach is used for
the sugarcane rail problem since the fundamentals of the job shop technique are
suitable for train scheduling problems. The blocking sections and segments will be
used. Mixed integer programming and constraint programming are also used to
formulate and solve this large scale transport problem. This section provides an
insight into the proposed methodology, which will be described in detail in the
following chapter.
Chapter 3
Modelling the Sugarcane Rail Transport Problem
Chapter Outline
3.1 Introduction..................................................................................................64
3.2 Segment and Section Blocking Types..............................................................................65
3.2.1 Blocking Segment Types....................................................................................65
3.2.1.1 Blocking Terminal Segments..............................................................66
3.2.1.2 Blocking Intermediate Segments........................................................66
3.2.2 Blocking Section Types.......................................................................................68
3.3 Description of the Blocking Segment Models of Sugarcane Rail System........................69
3.3.1 Blocking Segment MIP Model of the Sugarcane Rail System............................76
3.3.2 Blocking Segment CP Model of the Sugarcane Rail System..............................83
3.4 Description of the Blocking Section Models of Sugarcane Rail System..........................89
3.4.1 Blocking Section MIP Model of Sugarcane Rail System....................................90
3.4.2 Blocking Section CP Model of Sugarcane Rail System......................................93
3.5 Inclusion of the Delivery and Collection Time Constraints to the Model........................95
3.5.1 Delivering Delay Constraints................................................................................99
3.5.2 Collecting Delay Constraints...............................................................................101
3.6 Sugarcane Rail System as a Dynamic System................................................................102
3.7 Conclusion......................................................................................................................104
Publications Arising from Chapter 3
Masoud, M., Kozan, E., & Kent, G. (2011). A job-shop scheduling approach for
optimising sugarcane rail operations. Flexible Services and Manufacturing
Journal; 23(2):181-196.
Masoud, M., Kozan, E., & Kent, G. (2010a). Scheduling techniques to optimise
sugarcane rail systems. ASOR Bulletin; 29:25-34.
3.1 Introduction
Job Shop Scheduling (JSS) techniques were used to develop new models for the
sugarcane rail transport system using two main approaches: Constraint Programming
(CP) and Mixed Integer Programming (MIP). MIP focuses on the objective function
to improve its value using linear relaxation techniques, which eliminate suboptimal
solutions. On the other hand, the CP approach focuses on satisfying the constraints
using consistency algorithms or filtering algorithms, as explained in Chapter 4, to
remove infeasible candidate solutions, as shown in Figure 3.1. Filtering
algorithms are useful for discovering infeasible solutions when solving JSS
problems, while some other approaches are time consuming because they keep
running without detecting infeasibility. Some problems are best handled by MIP, others by CP,
while the integration of the MIP and CP solution techniques can solve some harder
problems (Hentenryck, 2002).
Figure 3.1: MIP and CP approaches (MIP focuses on the objective function and uses linear relaxation to prune suboptimal solutions; CP focuses on the constraints and uses filtering algorithms to eliminate infeasible candidate solutions)
The train scheduling problem is formulated in this research as a blocking parallel-
machine job shop scheduling problem (BPMJSS). In this research, the rail track
sections are defined as machines and the train runs as jobs. Each job (train run)
contains a number of operations determined by the start time and processing time at
each machine (a single section of track). Some machines (sections) may be single
units (a single track) or have several alternative units (parallel track sections,
including sidings or passing loops). As discussed in Section 2.2.4, there are several
benefits in treating the train scheduling problem as a blocking parallel-machine job
shop scheduling (BPMJSS) problem, namely to prevent two trains passing in one
section at the same time; to keep the train activities (operations) in sequence during
each run (trip) by applying the precedence constraints; to pass the trains on one
section in the correct order (priorities of passing trains) by applying disjunctive
constraints; and, to ease passing trains by solving rail conflicts by applying blocking
constraints and PMS (Parallel Machine Scheduling).
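The mapping from rail elements to job shop elements can be sketched with simple data structures. All class and instance names below are hypothetical illustrations of the BPMJSS view, not part of the models developed later in this chapter.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Section:
    """A machine: one track section; parallel_units > 1 models parallel
    track sections such as sidings or passing loops (the PMS aspect)."""
    name: str
    parallel_units: int = 1

@dataclass
class Operation:
    """One step of a train run: occupying a section for some duration."""
    section: Section
    duration: float

@dataclass
class TrainRun:
    """A job: an ordered sequence of operations (the precedence constraints)."""
    name: str
    operations: List[Operation] = field(default_factory=list)

# Illustrative single-track layout with one passing loop at section B.
A, B, C = Section("A"), Section("B", parallel_units=2), Section("C")
run1 = TrainRun("outbound-1", [Operation(A, 5), Operation(B, 3), Operation(C, 4)])
```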
MIP and CP models are developed using blocking segment and blocking section
constraints. Blocking segment models in Section 3.3 are applicable for sugarcane rail
systems, while blocking section models in Section 3.4 can be applied to other
applications such as coal mining rail systems (Masoud et al., 2011). The blocking
section models can be applicable to sugarcane rail systems after adding assumptions
and modifications to the rail network infrastructure of the sugarcane system, and as
explained in Section 3.4.
In this research, blocking segment constraints are proposed to develop the
sugarcane rail transport models, because many branches in sugarcane rail
networks do not contain passing loops and because some section lengths are
shorter than the train length. Although these models include assumptions that
improve the performance of the blocking segment model by reducing the total
waiting time of each train run, this value is still high.
New models using blocking section constraints are presented in Section 3.4 to
reduce the waiting time and improve other objectives.
3.2 Segment and Section Blocking Types
3.2.1 Blocking Segment Types
A segment blocking approach is used to develop the sugarcane railway system
models. All sections in a segment are blocked at the same time, preventing
other trains from using them simultaneously. Two main segment types are
included in the rail network: a terminal segment (at the end of the network,
with no segment following it) and an intermediate segment (which has another
segment following it). Blocking segment types will be clarified using different
cases.
3.2.1.1 Blocking Terminal Segments
All operations are executed continuously in terminal segments without any
interruption. Trains that require the terminal segment take the outbound
direction. Developing an algorithm to solve the blocking for such a segment is
therefore quite straightforward. Figure 3.2 shows blocking terminal segment B
being used by train k1, with k2 waiting at segment A to use segment B. Terminal
segment C can be used by k1 or k2 to solve any conflicts between the two trains.
Figure 3.2: Two trains travelling in the same direction. One requires the blocking terminal segment
3.2.1.2 Blocking Intermediate Segments
The situation is more complicated at intermediate segments of the track. At any
intermediate segment, operations in the outbound and inbound directions cannot be
implemented continuously if the train used the intermediate segment to go further in
the outbound direction to another segment. Therefore, the train would complete the
operations of the inbound direction on the intermediate segment after visiting this
other segment. As a result, dealing with the intermediate segment as one segment
and blocking this segment would cause a significant loss in the use of this segment
and reduce the efficiency of the rail network because any intermediate segment
might be blocked for a long time. For that reason, any intermediate segment has
been expressed mathematically as two segments for the two different directions:
outbound and inbound, as detailed in Section 3.3. There are two scenarios for
blocking intermediate segments:
Scenario one
Two trains k1 and k2 travel in the same direction, inbound, so in this case one
(inbound) segment will be used by both trains. That is, the blocking constraints are
applied to one segment and are quite straightforward. In Figure 3.3, train k2 uses
segment A and train k1 requires the same segment in the same direction. As a result,
the blocking constraint can be applied easily for segment A, and train k1 has to wait
for segment A to be unblocked before continuing into it.
Figure 3.3: Two trains travelling in the same direction. One requires the blocking intermediate segment
Scenario two
The two trains k1 and k2 have different directions, inbound and outbound
respectively. Segment A is occupied by train k2 and is therefore a blocked segment,
while at the same time train k1 requires segment A, as shown in Figure 3.4.
Figure 3.4: Two trains travelling in different directions. One requires the blocking intermediate segment
All blocking segment types are modelled mathematically and explained in detail in
Section 3.3. New algorithms for solving train conflicts are proposed in Section 4.3.2.
3.2.2 Blocking Section Types
In the blocking section approach, two trains (k1 and k2) can use the same segment at a
time, but no more than one train can occupy any one section at a time. Blocking section
constraints can be applied correctly when the length of the train is less than the
length of the section, as explained in Section 1.2.3.1.
Figure 3.5 shows the two trains k1 and k2 travelling in two different directions, inbound
and outbound respectively. Section s is occupied by train k1 and is therefore a
blocked section, but at the same time, train k2 requires section s.
One operation is executed on section s at a time and the index for this operation is
the same in the two travelling directions, inbound and outbound, as explained in the
blocking section models in Section 3.4.
Because segments can consist of a number of sections, applying a blocking section
constraint is easier than applying a blocking segment constraint that entails the
blocking of all sections in that segment.
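The contrast between the two blocking approaches can be illustrated with a minimal sketch in Python. The section names and occupancy data below are illustrative assumptions, not taken from the case study:

```python
# Sketch contrasting segment blocking with section blocking for one segment
# made of three sections. Occupancy data is illustrative.
segment_sections = ["s1", "s2", "s3"]
occupied = {"s3": "k1"}  # train k1 currently holds section s3

# Segment blocking: k2 may not enter while ANY section of the segment is occupied.
segment_free = not occupied
# Section blocking: k2 only needs its target section (s1) to be free.
section_free = "s1" not in occupied

print(segment_free, section_free)  # False True
```

Under section blocking, k2 can enter s1 even though k1 still occupies s3 of the same segment, which is the efficiency gain described above.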
Figure 3.5: Train requires the blocking section s
3.3 Description of the Blocking Segment Models of the Sugarcane Rail System
Two main models are developed in this research for the rail transport systems using
the constraint programming technique (CP) and the mixed integer programming
technique (MIP). These models tackle many train scheduling problems
across different types of rail networks. Each model consists of two main parts:
the objective functions and the constraints. The models’ objective functions deal
with minimising the makespan objective and the total waiting time separately. The
models’ constraints include rail operation constraints and sugarcane system
constraints. Rail operation constraints are related to trains passing on the rail network
and include precedence, order of rail segments, train runs, passing priority, and
blocking constraints. The sugarcane system constraints include train capacity, siding
capacity, mill capacity, empty and full bin requirements, harvester rates, and
harvesting times.
The model constructs train runs that consist of a series of operations. These
operations involve activities such as traversing a track section and delivering and
collecting bins. The train runs are modelled as whole trips, generally involving
leaving the mill, delivering empty bins to sidings during the outbound part of the
run, collecting full bins from sidings during the inbound part of the run and returning
to the mill. As such, two operations will generally be conducted on each track
section during each run: one implemented in the outbound direction and
the other executed in the inbound direction.
The main elements of a railway network are shown in Figure 3.6. The sample rail
network in Figure 3.6 has seven sections, where sections S2, S5 and S7 contain
sidings (for storing empty and full bins for the harvesters). Three segments are
included in the figure. Two segments are defined as terminal while one segment is
defined as intermediate. In the terminal segments, the train can travel outbound and
then immediately travel inbound without any interruption. For intermediate
segments, however, the train may go to visit other segments like terminal segment 2
or terminal segment 3 after traversing intermediate segment 1 in the outbound
direction, and come back in the inbound direction to visit the intermediate segment
again. Operations can be conducted continuously in the two directions on terminal
segments while there can be an interruption between the operations of the outbound
direction and the inbound direction on intermediate segments.
Figure 3.6: A simple cane rail network with three sidings for delivering and collecting bins
If blocking constraints were simply applied to the intermediate segment as they can
be to a terminal segment, the segment would be blocked during the time period of
implementing the outbound and inbound operations on it. The blocked time period
includes the interruption period between the operations of the outbound and the
inbound directions when the train is in either segment 2 or segment 3. As a result,
the utilisation efficiency of this intermediate segment will be reduced because it will
be blocked for a longer time. Blocking a terminal segment does not affect utilisation
efficiency since the operations of these segments are conducted continuously.
Example 3.1 examines these issues.
Example 3.1
Assume that the train k1 run includes the path, Mill-S1-S2-S3-S6-S7-
S7-S6-S3-S2-S1-Mill, where outbound operations O1-O2-O3 are
implemented on the intermediate segment. Outbound operations O4-O5
and inbound operations O6-O7 are implemented continuously on terminal
segment 3. Inbound operations O8-O9-O10 are implemented on
the intermediate segment. Blocking the intermediate segment during
use of k1, to satisfy the safety conditions, means this segment is
blocked during the time period of implementing the operations O1-
O2-O3-O8-O9-O10. This period includes the time of operations O4-
O5-O6-O7 since the operations of each segment should be
implemented and conducted continuously. As a result, if the train k2
requires the intermediate segment, the waiting time of the train k2 to
use the intermediate segment equals the time period of implementing
the operations O1-O2-O3-O4-O5-O6-O7-O8-O9-O10 by train k1
which affects the utilisation efficiency of that segment.
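The waiting-time difference in Example 3.1 can be checked with a small illustrative calculation. The operation durations below are assumed values, not thesis data:

```python
# Hypothetical operation durations (minutes) for train k1's run in Example 3.1;
# values are illustrative only. Operations are O1..O10, 5 minutes each.
durations = {f"O{i}": 5 for i in range(1, 11)}

# Whole-segment blocking: the intermediate segment stays blocked for O1..O10,
# including the terminal-segment operations O4..O7 performed elsewhere.
wait_whole = sum(durations[f"O{i}"] for i in range(1, 11))

# Split-segment blocking: only the outbound operations O1..O3 block segment 1.
wait_split = sum(durations[f"O{i}"] for i in range(1, 4))

print(wait_whole, wait_split)  # 50 15
```

With these assumed durations, splitting the intermediate segment reduces k2's waiting time from 50 to 15 minutes, mirroring the argument of the example.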
The proposed model increases the utilisation efficiency of segments by assuming
that intermediate segments have been given separate segment numbers for the
outbound and inbound directions (segments 1 and 4 in Figure 3.7). Segment 1
includes the outbound operations O1-O2-O3 and segment 4 includes the inbound
operations O8-O9-O10.
72
Figure 3.7: A single cane rail network after applying the model
In the outbound and inbound directions of train k1, segment 1 and segment 4 are
blocked respectively. As a result, the waiting time of train k2 is reduced to equal the
time period of implementing the outbound operations O1-O2-O3 on segment 1 by
train k1. If k1 is in the inbound direction, the waiting time of k2 to use segment 4 is
equal to the time period of the inbound operations O8-O9-O10 of train k1. Blocking
segment 1 does not automatically mean the intermediate segment is blocked
completely, since segment 4 can be used by another train k2 while train k1 is using
segment 1. This conflict is addressed by the proposed algorithms of the solution
approach in Chapter 4 by distinguishing between the segment types to prevent using
one physical segment by more than one train at the same time.
The efficiency of the blocking segment constraints can be increased further in the
proposed models by considering the passing loops in each rail branch. Blocking
a whole segment that includes a passing point can increase the waiting time of the
system, and so increase the operating cost and decrease the efficiency of rail
section utilisation. Passing points can be passing loops (no activities: no delivering
and no collecting) or a big siding (delivering or collecting) with a passing loop, and can
allow more than one train to pass at the same time.
Case a: Passing loop
Each branch which includes a passing loop can be divided into two segments to
increase the utilisation of this branch and reduce the waiting time of any train at this
branch. As a result, the blocking segment is applied to each part that does not include
a passing loop. For example in Figure 3.8, branch B1 includes a passing loop, so two
segments (A1 and A2) are included in it where each segment includes some sections.
The passing loop has two parallel tracks (C1 and C2) without storage, where C1 and
C2 each consist of one section. Branch B2 has no passing loop so the whole branch is
considered as one segment.
Figure 3.8: Rail branch includes passing loop
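The division of branch B1 at the passing loop can be sketched as follows. The section identifiers and the loop position are illustrative, not the thesis network data:

```python
# Sketch: split a branch's ordered section list into two segments at the
# passing loop, as in Case a. Identifiers are illustrative.
branch_b1 = ["s1", "s2", "LOOP", "s3", "s4"]

loop = branch_b1.index("LOOP")
segment_a1 = branch_b1[:loop]       # segment A1: sections before the loop
segment_a2 = branch_b1[loop + 1:]   # segment A2: sections after the loop

print(segment_a1, segment_a2)  # ['s1', 's2'] ['s3', 's4']
```

Blocking then applies to A1 or A2 separately, so one train can hold A1 while another holds A2, with the loop itself acting as the meeting point.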
Case b: Siding as a passing loop
Siding types in the sugarcane rail system are divided into big sidings, which allow
more than one train to pass, and small sidings, which do not allow more than one
train to pass at the same time. A small siding can have a double track section, where
one track is used for cane storage and the other for a passing train (only one train at
a time). A big siding can have more than two tracks, where one is used for cane
storage and the other tracks allow more than one train to pass at a time (parallel machines).
Each track in such a siding can be considered as one segment that includes one section
as shown in Figure 3.9.
Figure 3.9: Rail siding includes passing point
Figure 3.9 shows a big siding (storage with passing loop) that has a triple section
where each section can be a segment and allow for passing trains. Each segment
includes only one section where Segments 3, 4 and 5 include sections S3, S4, S5
respectively. Blocking a segment in this case is equivalent to blocking its section.
Concluding Remarks
Sections 3.3.1 and 3.3.2 show the proposed mixed integer programming and
constraint programming models for the sugarcane rail transport system using
segment constraints. Figure 3.10 shows the main structure of the MIP and CP
models, where the objective functions and the rail operation and sugarcane system
constraints are used to solve the sugarcane rail problem.
Figure 3.10: MIP and CP models structure
[Figure 3.10 structure: MIP and CP models; objective functions (minimising makespan, minimising total waiting time); constraints, comprising rail operation constraints (ready time, precedence, segments order, runs order, passing priority, blocking) and sugarcane system constraints (train, siding capacity, bin allotment).]
3.3.1 Blocking Segment MIP Model of the Sugarcane Rail System
In this section mixed integer programming is used as a mathematical programming
technique to obtain the solution to the sugarcane rail transport problem. Two
objective functions are used separately in this model: minimising makespan and the
total waiting time. The sugarcane system is a dynamic system, so many changes can
occur daily, such as the number of trains in the system or the number of working
sidings. The mixed integer programming model is designed to be flexible, so that
the objective function can be changed easily to handle any new situation in the system.
Notation
K: maximum number of trains.
k, k': indices of trains; k = 1, 2, ..., K; k' = 1, 2, ..., K.
E: maximum number of segments.
e, e': indices of segments; e = 1, ..., E.
S: maximum number of sections.
s, s': indices of sections; s = 1, 2, ..., S.
O: maximum number of operations of each train.
o, o': indices of operations of each train; o = 1, 2, ..., O; o' = 1, 2, ..., O.
R: maximum number of train runs.
r, r': indices of runs of each train; r = 1, 2, ..., R; r' = 1, 2, ..., R.
$t_{kosre}$: start time of operation o of train k in run r on section s on segment e.
$\eta_k$: ready time of train k, where all trains are at the mill and ready to move.
$g_{kose}$: processing time of operation o of train k on section s on segment e.
V: a big positive number.
$B_{kosre}$: number of full bins collected from siding s by train k during operation o and run r on segment e.
$\alpha_{kosre}$: number of empty bins delivered to siding s by train k during operation o and run r on segment e.
$A_{se}$: total allotment of siding s per day.
$p_k$: capacity of train k for empty bins.
$f_k$: capacity of train k for full bins.
$C_{se}$: capacity of siding s.
$C_{\max}$: makespan.
$X_{ksre}$ = 1 if train k is assigned to section s on segment e during run r; 0 otherwise.
$Z_{kk'srr'e}$ = 1 if trains k and k' are processed on section s on segment e during runs r and r' respectively, and train k precedes train k'; 0 otherwise.
$\beta_{kss'ree'}$ = 1 if train k uses section s on segment e before segment e' during run r; 0 otherwise.
$\mu_{krr'}$ = 1 if run r is assigned to train k before run r'; 0 otherwise.
$\lambda_{kr}$ = 1 if run r is assigned to train k; 0 otherwise.
$q_{kosre}$ = 1 if operation o of train k requires section s on segment e during run r; 0 otherwise.
$b_{kk'osre}$ = 1 if train k' requires section s on segment e, but operation o of train k is scheduled at the same section on the same segment during run r; 0 otherwise.
Definition of a track section and segment
The single-track railway includes track sections and segments, where:
Track section: a length of track between two key points in the track network, such as:
- The end of one siding or passing loop to the start of the next.
- The start of a siding or passing loop to the end of that siding or passing loop, including the section of mainline parallel to it.
- From the start or end of a siding or passing loop to a junction.
Track segment: a length of each branch or line in the rail network.
The Objective Functions
Two criteria are used to optimise the new model: minimising the makespan (Cmax)
and minimising the total waiting time for the blocking segment (TWTBSG).
To minimise the makespan (Cmax), the completion time of the last operation is used
and the objective function is given by Equation (3.1):
$\min C_{\max} = \min \max_{k} \sum_{s=1}^{S} \left( t_{kOsRE} + g_{kOsE} \right) q_{kOsRE} \, \lambda_{kR}$   (3.1)
Equation 3.1 ensures the finish time of each operation for all trains during all runs
has to be less than the makespan.
To minimise the total waiting time for the blocking sections group (TWTBSG) for
all train runs in the system, Equation (3.2) is used.
Minimise: TWTBSG = IWT + OSWT + ISWT + RWT (3.2)
The components of this objective function are explained in detail below:
The initial waiting time (IWT) considers the waiting time before starting the first run
of each train, Equation (3.3).
IWT = $\sum_{k=1}^{K} \left( t_{k1s11} - \eta_k \right) \lambda_{k1} \quad \forall s$   (3.3)
The segment waiting time is the waiting time after finishing operations on one
segment and starting operations on a new one by each train. This part includes the
waiting time in both outbound and inbound directions (OSWT and ISWT,
respectively) (Equation (3.4) and Equation (3.5)).
OSWT = $\sum_{k=1}^{K} \sum_{r=1}^{R} \sum_{e=1}^{E} \sum_{e'=1}^{E} \sum_{s=1}^{S} \sum_{s'=1}^{S} \left\{ q_{k1s're'} \lambda_{kr} t_{k1s're'} - q_{kOsre} \lambda_{kr} \left( t_{kOsre} + g_{kOse} \right) \right\} \beta_{kss'ree'}$   (3.4)

ISWT = $\sum_{k=1}^{K} \sum_{r=1}^{R} \sum_{e=1}^{E} \sum_{e'=1}^{E} \sum_{s=1}^{S} \sum_{s'=1}^{S} \left\{ q_{k1s're'} \lambda_{kr} t_{k1s're'} - q_{kOsre} \lambda_{kr} \left( t_{kOsre} + g_{kOse} \right) \right\} \left( 1 - \beta_{kss'ree'} \right)$   (3.5)
The run waiting time (RWT) is the waiting time between finishing one run and
starting a new one as shown in Equation (3.6).
RWT = $\sum_{e=1}^{E} \sum_{s=1}^{S} \left\{ q_{k1s(r+1)e} \lambda_{k(r+1)} t_{k1s(r+1)e} - q_{kOsrE} \lambda_{kr} \left( t_{kOsrE} + g_{kOsE} \right) \right\}$   (3.6)
Constraints
Rail operation constraints
Ready time
Equation (3.7) ensures that the start time of the first operation of the first run of
train k on the first segment has to be greater than or equal to the ready
time.
$\left( t_{k1s11} - \eta_k \right) \lambda_{k1} \geq 0 \quad \forall k, s$   (3.7)
Precedence
Equation (3.8) ensures operation o+1 of train k cannot be processed before finishing
operation o for train k on section s on segment e.
$\left( t_{kosre} + g_{kose} \right) q_{kosre} \leq t_{k(o+1)sre} \, q_{k(o+1)sre} \quad \forall k, r, o, s, e$   (3.8)
where:
if $o \leq O/2$ for each terminal segment, then the train is moving in the outbound direction;
if $o > O/2$ for each terminal segment, then the train is moving in the inbound direction.
In the case of an intermediate segment, the direction of the train is determined by
whether the train is occupying an outbound or inbound segment.
Segments order
Equation (3.9) ensures segment e is processed before segment e'.
$\left( t_{kOsre} + g_{kOse} \right) q_{kOsre} \leq t_{ko's're'} \, q_{ko's're'} + V \left( 1 - \beta_{kss'ree'} \right) \quad \forall k, r, o', e, e', s, s'; \; e \neq e'$   (3.9)
Equation (3.10) ensures segment e' is processed before segment e.
$\left( t_{kOs're'} + g_{kOs'e'} \right) q_{kOs're'} \leq t_{kosre} \, q_{kosre} + V \beta_{kss'ree'} \quad \forall k, r, o, e, e', s, s'; \; e \neq e'$   (3.10)
Equation (3.11) states the logic relation between decision variables.
$q_{kosre} + q_{ko's're'} - 1 \leq \beta_{kss'ree'} + \beta_{ks'sre'e}$   (3.11)
Runs order
Equation (3.12) ensures that run r is assigned before run r+1 for train k.

$\left( t_{kOsrE} + g_{kOsE} \right) q_{kOsrE} \, \lambda_{kr} \leq t_{ko's'(r+1)e'} \, q_{ko's'(r+1)e'} \, \lambda_{k(r+1)} \quad \forall r \in 1, \ldots, R-1; \; o', e', s, s'$   (3.12)
Passing priority
Equations (3.13) and (3.14) ensure trains k and k' are processed on section s of
segment e in the correct order. Either train k precedes train k' where train k' cannot
use this segment before train k leaves it, or train k' precedes train k where train k
cannot use this segment before train k' leaves it.
$t_{k'1s'r'e} \geq \sum_{o=1}^{O} \sum_{s=1}^{S} \left( t_{kosre} + g_{kose} \right) + V \left( Z_{kk'srr'e} - 1 \right) \quad \forall k, k' \in K, k \neq k'; \; o' \in O; \; s' \in S; \; e \in E; \; r, r' \in R$   (3.13)

$t_{k1sre} \geq \sum_{o'=1}^{O} \sum_{s'=1}^{S} \left( t_{k'o's'r'e} + g_{k'o's'e} \right) - V Z_{kk'srr'e} \quad \forall k, k' \in K, k \neq k'; \; o \in O; \; s \in S; \; e \in E; \; r, r' \in R$   (3.14)
Equation (3.15) ensures each segment cannot process more than one train at the same
time.
$X_{ksre} + X_{k'sr'e} - 1 \leq Z_{kk'srr'e} + Z_{k'ksr're}$   (3.15)
Blocking
Equation (3.16) defines the blocking constraint at each segment, where, if the
segment is used by train k, all other trains have to wait until that segment is free
before they can use it.
$\sum_{k'=1}^{K} \sum_{e=1}^{E} \sum_{s=1}^{S} t_{k'o's're} \, b_{kk'osre} \, \lambda_{k'r'} \geq \sum_{e=1}^{E} \sum_{s=1}^{S} \left( t_{kosre} + g_{kose} \right) q_{kosre} \, \lambda_{kr} \quad \forall k, r, r', o, o'$   (3.16)
Equation (3.17) ensures non-negativity.
$t_{kosre} \geq 0, \quad g_{kose} \geq 0 \quad \forall k, o, r, s, e$   (3.17)
Train
Equations (3.18) and (3.19) are train capacity constraints. In equation (3.18), the
number of empty bins delivered for all sidings during run r has to be less than or
equal to the capacity of train k.
$\sum_{e=1}^{E} \sum_{o=1}^{O} \sum_{s=1}^{S} \alpha_{kosre} \, q_{kosre} \, \lambda_{kr} \leq p_k \quad \forall k, r$   (3.18)
In equation (3.19), the number of full bins collected from all sidings during run r has
to be less than or equal to the capacity of train k.
$\sum_{e=1}^{E} \sum_{o=1}^{O} \sum_{s=1}^{S} B_{kosre} \, q_{kosre} \, \lambda_{kr} \leq f_k \quad \forall k, r$   (3.19)
Equations (3.20) and (3.21) ensure empty bin delivery is in the outbound direction,
where o and o' are two operations at each siding in the two directions and o' precedes
o. Similarly, equations (3.22) and (3.23) ensure the collection of full bins is in the
inbound direction, but o precedes o'.
If o' ≤ o then
$\alpha_{ko's're} \leq p_k \quad \forall k, r, o', s', e$   (3.20)
$B_{ko's're} = 0 \quad \forall k, r, o', s', e$   (3.21)
Else
$B_{ko's're} \leq f_k \quad \forall k, r, o', s', e$   (3.22)
$\alpha_{ko's're} = 0$   (3.23)
End if
Siding capacity
Equation (3.24) ensures that the total number of empty and full bins at each siding is
less than the siding capacity during each operation o, where each operation is
addressed only for a delivery or collection but not both.
$\left( \alpha_{kosre} + B_{kosre} \right) q_{kosre} \, \lambda_{kr} \leq C_{se}$   (3.24)
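The siding capacity condition of Equation (3.24) can be sketched for a single siding and operation as follows. The bin counts and capacity are illustrative assumptions:

```python
# Sketch of the siding-capacity check in Equation (3.24): during an operation,
# the empty bins delivered plus the full bins present at the siding must not
# exceed the siding's capacity. Numbers are illustrative.
def siding_capacity_ok(empty_bins, full_bins, capacity):
    """Return True if the siding can hold the given bins."""
    return empty_bins + full_bins <= capacity

print(siding_capacity_ok(20, 15, 40))  # True
print(siding_capacity_ok(30, 15, 40))  # False
```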
Bin Allotment
Equations (3.25) and (3.26) show the relation between the total bin allotments for
each siding and the number of empty and full bins delivered to and collected from
each siding. The total number of empty bins delivered to siding s has to be equal to the
allotment of the harvester operating at this siding.

$\sum_{k=1}^{K} \sum_{r=1}^{R} \sum_{e=1}^{E} \sum_{o=1}^{O} \alpha_{kosre} \, q_{kosre} \, \lambda_{kr} = A_{se} \quad \forall s$   (3.25)
Equation (3.26) ensures that the total number of full bins collected from siding s has
to be equal to the allotment of the harvester operating at this siding.
$\sum_{k=1}^{K} \sum_{r=1}^{R} \sum_{e=1}^{E} \sum_{o=1}^{O} B_{kosre} \, q_{kosre} \, \lambda_{kr} = A_{se} \quad \forall s$   (3.26)
3.3.2 Blocking Segment CP Model of the Sugarcane Rail System
In this section, a CP approach is used to formulate the problem. CP has the ability to
model many different types of combinatorial optimisation problems. CP offers richer
modelling power than mathematical programming techniques alone, which have more
restrictive modelling constructs. CP can produce more than one solution for many scheduling
problems. CP techniques are well suited to feasibility problems and can
deal with conflicting objectives.
CP problem modelling depends on the CP software package used. There are many
different modelling languages that can be used in this field. ILOG’s OPL modelling
language has been used here to formulate the sugarcane rail transport problem
because OPL has a suitable set of constructs and statements for scheduling problems
to develop an effective CP model. OPL Unary resources can process only one
operation at a time and the segments and sections are modelled as Unary resources.
Each train run is a unary resource because a train can only conduct one run at a time.
Each operation is associated with a start time and duration. Additionally,
ActivityHasSelectResource is used in OPL to decide which unary resource will be
selected for processing operation o of train k and which run ε will be selected by
train k to start first. Also, each operation requires a unary resource to be processed
on it. This section is a summary of the sugarcane transport system model using CP in
the OPL language environment.
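The unary-resource idea underlying the CP model can be illustrated with a small feasibility check. The interval data are assumed, and this pure-Python sketch only mimics the no-overlap behaviour of an OPL Unary resource; it is not the OPL model itself:

```python
# Minimal sketch of a "unary resource": a segment can host at most one
# operation at a time, so the intervals booked on the same segment must not
# overlap. Interval data is illustrative.
def is_unary_feasible(intervals):
    """intervals: list of (start, end) times booked on one segment."""
    booked = sorted(intervals)
    return all(prev_end <= next_start
               for (_, prev_end), (next_start, _) in zip(booked, booked[1:]))

# Train k1 uses the segment over [0, 15); train k2 must wait until time 15.
print(is_unary_feasible([(0, 15), (15, 25)]))  # True
print(is_unary_feasible([(0, 15), (10, 25)]))  # False
```

In the CP model, this pairwise non-overlap condition is enforced automatically by declaring each segment, section, and train run as a Unary resource in OPL.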
Additional Notation for the CP model
w: index of segments; w = 1, 2, ..., E.
$\varepsilon$: index of runs; $\varepsilon$ = 1, 2, ..., R.
$B_{kosw\varepsilon}$: number of full bins collected from siding s by train k during operation o and run $\varepsilon$ on segment w.
$\alpha_{kosw\varepsilon}$: number of empty bins delivered to siding s by train k during operation o and run $\varepsilon$ on segment w.
$A_{sw}$: total allotment of siding s per day.
$C_{sw}$: capacity of siding s on segment w.
$t_{kosw\varepsilon}$: start time of operation o of train k in run $\varepsilon$ on section s on segment w.
$g_{kosw}$: processing time of operation o of train k on section s on segment w.
The Objective Functions
To minimise the makespan (Cmax), the completion time of the last operation is used
and the objective function is given by Equation (3.27):

$\min C_{\max} = \min \max_{k \in \text{trains}} \left( t_{kOsRE} + g_{kOsE} \right)$   (3.27)
To minimise the total waiting time for all train runs in the system, Equation (3.28) is
used:
Minimise: TWTBSG = IWT + OSWT + ISWT + RWT (3.28)
The components of this objective function are as follows:
The initial waiting time (IWT) is shown in Equation (3.29).
IWT = $\sum_{k=1}^{K} \left( t_{k1s11} - \eta_k \right)$   (3.29)
The waiting time in the outbound and inbound directions, OSWT and ISWT, is shown in
Equations (3.30) and (3.31) respectively.
OSWT = $\sum_{k=1}^{K} \sum_{\varepsilon=1}^{R} \sum_{w=1}^{E} \sum_{w'=1}^{E} \sum_{s=1}^{S} \left( t_{k1s'\varepsilon w'} - \left( t_{kOs\varepsilon w} + g_{kOsw} \right) \right)$   (3.30)

ISWT = $\sum_{k=1}^{K} \sum_{\varepsilon=1}^{R} \sum_{w=1}^{E} \sum_{w'=1}^{E} \sum_{s=1}^{S} \left( t_{k1s\varepsilon w} - \left( t_{kOs'\varepsilon w'} + g_{kOs'w'} \right) \right)$   (3.31)
The run waiting time (RWT) is shown in Equation (3.32).
RWT = $\sum_{w=1}^{E} \sum_{s=1}^{S} \left( t_{k1s(\varepsilon+1)1} - \left( t_{kOs\varepsilon E} + g_{kOsE} \right) \right)$   (3.32)
The Model Constraints
Ready time
Equation (3.33) ensures that the start time of the first operation of train k on the first
segment and the first run has to be greater than or equal to the ready time.
$\left( t_{k1s11} - \eta_k \right) \geq 0 \quad \forall k, s$   (3.33)
Precedence
Equation (3.34) ensures that operation o+1 of train k cannot be processed before
finishing operation o for train k on section s of segment w.
$\left( t_{kos\varepsilon w} + g_{kosw} \right)$ precedes $t_{k(o+1)s\varepsilon w} \quad \forall k, o, s, w, \varepsilon$   (3.34)
Segments Order
Equation (3.35) ensures segment w is processed before segment w' or, segment w'
before segment w.
$\left( t_{kOs\varepsilon w} + g_{kOsw} \leq t_{kos\varepsilon w'} \right) \vee \left( t_{kOs\varepsilon w'} + g_{kOsw'} \leq t_{kos\varepsilon w} \right) \quad \forall k; \; w, w', w \neq w'; \; o, s, \varepsilon$   (3.35)
Run selection
Equation (3.36) ensures that train k selects run ε (a unary resource) to be processed.

ActivityHasSelectResource(k, Trip, $\varepsilon$) $\Leftrightarrow k\varepsilon \quad \forall k \in K, \varepsilon \in R$   (3.36)
Runs Order
Equation (3.37) ensures that run ε is assigned to train k before run ε+1.
$t_{kOs\varepsilon E} + g_{kOsE} \leq t_{ko's'(\varepsilon+1)w} \quad \forall k \in K; \; s, s' \in S; \; o' \in O; \; w \in E; \; \varepsilon \in 1, \ldots, R-1$   (3.37)
Passing Priority
Equation (3.38) ensures that trains k and k' are processed on section s of segment w
in the correct order. Either train k precedes train k' where train k' cannot use this
segment before train k leaves it, or train k' precedes train k where train k cannot use
this segment before train k' leaves it.
$t_{k'o's'\varepsilon'w} \geq \sum_{o=1}^{O} \sum_{s=1}^{S} \left( t_{kos\varepsilon w} + g_{kosw} \right) \; \vee \; t_{kos\varepsilon w} \geq \sum_{o'=1}^{O} \sum_{s'=1}^{S} \left( t_{k'o's'\varepsilon'w} + g_{k'o's'w} \right) \quad \forall k, k' \in K, k \neq k'; \; s \in S; \; w \in E; \; \varepsilon, \varepsilon' \in R; \; o, o' \in O$   (3.38)
Blocking
Segments are modelled as Unary resources, where each segment w will be a unary
resource in OPL language. OPL Unary resources can process only one train at a
time.
Train
Equations (3.39) and (3.40) are train capacity constraints. In equation (3.39), the
number of empty bins delivered for all sidings during run ε has to be less than or
equal to the capacity of train k.
$\sum_{w=1}^{E} \sum_{o=1}^{O} \sum_{s=1}^{S} \alpha_{kosw\varepsilon} \leq p_k \quad \forall k, \varepsilon$   (3.39)
Equation (3.40) ensures that the number of full bins collected from all sidings during
run ε has to be less than or equal to the capacity of train k.
$\sum_{w=1}^{E} \sum_{o=1}^{O} \sum_{s=1}^{S} B_{kosw\varepsilon} \leq f_k \quad \forall k, \varepsilon$   (3.40)
Equations (3.41) and (3.42) ensure empty bin delivery is in the outbound direction
where o and o' are two operations at each siding in the two directions and o' precedes
o. Similarly, equations (3.43) and (3.44) ensure the collection of full bins is in the
inbound direction but o precedes o'.
If o' ≤ o then
$\alpha_{ko's'w\varepsilon} \leq p_k \quad \forall k, \varepsilon, o', s', w$   (3.41)
$B_{ko's'w\varepsilon} = 0 \quad \forall k, \varepsilon, o', s', w$   (3.42)
Else
$B_{ko's'w\varepsilon} \leq f_k \quad \forall k, \varepsilon, o', s', w$   (3.43)
$\alpha_{ko's'w\varepsilon} = 0$   (3.44)
End if
Bins Allotment
Equations (3.45) and (3.46) show the relation between the total bin allotments for
each siding and the number of empty and full bins delivered to and collected from
each siding.
The total number of empty bins delivered to siding s has to be equal to the allotment
of the harvester operating at this siding.
$\sum_{k=1}^{K} \sum_{\varepsilon=1}^{R} \sum_{w=1}^{E} \sum_{o=1}^{O} \alpha_{kosw\varepsilon} = A_{sw} \quad \forall s$   (3.45)
The total number of full bins collected from siding s has to be equal to the allotment
of the harvester operating at this siding.
$\sum_{k=1}^{K} \sum_{\varepsilon=1}^{R} \sum_{w=1}^{E} \sum_{o=1}^{O} B_{kosw\varepsilon} = A_{sw} \quad \forall s$   (3.46)
Siding Capacity
Equation (3.47) ensures that the total number of empty and full bins at each siding is
less than the siding capacity during each operation o.
$\alpha_{kosw\varepsilon} + B_{kosw\varepsilon} \leq C_{sw} \quad \forall k, \varepsilon, o, s, w$   (3.47)
Concluding Remarks
The CP model uses fewer constraints than the MIP model because CP can combine
several constraints into one using the OR statement "∨", as in the
passing priority constraints and runs order constraints. CP uses variable subscripts
instead of binary decision variables. For this reason, there are no binary decision
variables in CP, while the MIP model includes many of them, such as $q_{kosre}$. Two
variable subscripts are used: sw and kε, where sw represents selecting section s on the
selected segment w for train k, while kε represents the selected run ε of train k. For that
reason the notations of segment and run are changed in the CP model.
When implementing the CP code using OPL software, the following constructs can be
used under constraint programming fundamentals:
- The variable $g_{kosw}$ is defined as Duration[k, ε, w, o, s].
- Activity task[k, ε, w, o, s] is defined on Duration[k, ε, w, o, s].
- The variable $t_{kosw\varepsilon}$ is formulated in constraint programming as task[k, ε, w, o, s].start.
- In OPL code, precedes is usually used in the CP formulation instead of ≤.
3.4 Description of the Blocking Section Models of Sugarcane Rail System
Blocking section models are proposed to improve the efficiency of the performance
of the sugarcane rail system by reducing the total waiting time and other objectives
such as makespan. The main issue is that section blocking is not applicable to the
current real sugarcane rail system, so some assumptions are proposed to apply the
blocking section models to this system. This section investigates whether the
blocking section model can reduce the makespan and total waiting time for the
sugarcane rail system (Masoud et al., 2010a). Here proposals are put forward to
develop the blocking sections for sugarcane rail models. These proposals include
physical changes and modelling approaches to reduce delays as follows:
i. Physically, more passing loops may be constructed in the track segments of
the sugarcane rail network, or some sidings may be operated as passing loops.
Passing loops can ease the passage of passing trains and reduce the number
of conflict points throughout the rail network.
ii. In the developed models, the consecutive short sections may be combined to
form one section. This can help ensure that the length of the train is less than
the length of the section and so section blocking constraints can be applied.
iii. In the developed models, the sidings located in consecutive short sections
may be combined with their nearest siding to work as one siding. The
capacity of this siding would equal the combined capacities of the original
sidings and the total allotments of these sidings.
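Proposal (ii), combining consecutive short sections so that each combined section is at least as long as the train, can be sketched as follows. The greedy merging rule and the lengths used are illustrative assumptions, not the thesis procedure:

```python
def combine_short_sections(section_lengths, train_length):
    """Greedily merge consecutive sections until each combined section is
    at least as long as the train, so section-blocking constraints apply.
    Illustrative sketch; lengths are in metres."""
    combined, current = [], 0
    for length in section_lengths:
        current += length
        if current >= train_length:
            combined.append(current)
            current = 0
    if current:  # fold any short remainder into the last combined section
        if combined:
            combined[-1] += current
        else:
            combined.append(current)
    return combined

print(combine_short_sections([120, 80, 300, 90, 60, 150], 200))  # [200, 300, 300]
```

Every combined section is then at least as long as a 200 m train, which is the precondition for applying the blocking section constraints.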
MIP and CP approaches have been integrated with blocking parallel job shop
scheduling (BPJSS) techniques to produce MIP and CP models for the sugarcane rail
system. The segment parameter has been removed from the MIP and CP models. In
these models, parallel tracks are called units, and parallel track units are
defined as belonging to the same section. Some units are defined as sidings or passing loops.
Figure 3.11 shows a portion of a rail network where a siding and a siding with a passing
loop (multi-track section units) are included. Single sections in the rail network are
used only for one-train-at-a-time passing and they do not include any sidings for
delivery and collection.
Figure 3.11: A single cane rail network with three sidings that can allow passing trains
3.4.1 Blocking Section MIP Model of Sugarcane Rail System
The blocking segment MIP model in section 3.3.1 has been modified and updated to
develop the blocking section MIP model. Five equations have been updated and
some notations have been removed as follows:
1- Precedence constraint (Equation 3.8) was updated.
2- Segments order constraints (Equations 3.9, 3.10, and 3.11) were removed.
3- Passing priority constraints (Equations 3.13, 3.14, and 3.15) were updated.
4- Blocking constraints (Equation 3.16) were modified.
5- The siding capacity and bin allotment constraints (Equations 3.18 - 3.26) were
modified with section unit notation.
Additional Notation
U: maximum number of units of section s.
u: index of the u-th unit of section s; u = 1, 2, ..., U.
$\varphi_{kss'r}$ = 1 if section s is processed by train k before section s' during run r; 0 otherwise.
Objective function
Two criteria have been developed for the blocking section MIP model; minimising
the makespan (Cmax) and minimising the total waiting time.
To minimise the makespan (Cmax ), Equation (3.48) is developed as follows:
$$\min C_{\max} \;=\; \min\Big\{ \max_{\forall o,\; k = 1, \dots, K} \;\sum_{s=1}^{S}\sum_{u=1}^{U} \big(t_{k_R o s_u} + g_{k o s_u}\big)\, q_{k_R o s_u}\, \lambda_{k_R o s_u} \Big\} \qquad (3.48)$$
To minimise the total waiting time for the blocking section (TWTBS) for all train
runs in the system, Equation (3.49) is used.
TWTBS= IWT + OSWT + ISWT + RWT (3.49)
The initial waiting time (IWT) is the waiting time before starting the first run of each train as shown in Equation (3.50).
IWT $\;=\; \sum_{k=1}^{K} \big(t_{k_1 1 s_u} - \eta_k\big)\, q_{k_1 1 s_u}\, \lambda_{k_1 1 s_u}, \quad \forall\, s_u \qquad (3.50)$
The section waiting time (SWT) is the waiting time on each section while the next
section is blocked. It comprises the waiting times in the outbound and inbound
directions (OSWT and ISWT, Equations (3.51) and (3.52) respectively).
OSWT $\;=\; \sum_{k=1}^{K}\sum_{r=1}^{R}\sum_{s=1}^{S}\sum_{s'=1}^{S}\sum_{u=1}^{U} \Big\{ \lambda_{k_r (o+1) s'_u}\, q_{k_r (o+1) s'_u}\, t_{k_r (o+1) s'_u} - \lambda_{k_r o s_u} \big( q_{k_r o s_u}\, t_{k_r o s_u} + g_{k o s_u} \big) \Big\}\, \varphi_{k s s' r} \qquad (3.51)$
ISWT $\;=\; \sum_{k=1}^{K}\sum_{r=1}^{R}\sum_{s=1}^{S}\sum_{s'=1}^{S}\sum_{u=1}^{U} \Big\{ \lambda_{k_r o s_u}\, q_{k_r o s_u}\, t_{k_r o s_u} - \lambda_{k_r (o+1) s'_u} \big( q_{k_r (o+1) s'_u}\, t_{k_r (o+1) s'_u} + g_{k (o+1) s'_u} \big) \Big\}\, \big(1 - \varphi_{k s s' r}\big) \qquad (3.52)$
The run waiting time (RWT) is the waiting time between finishing one run and
starting a new one as shown in Equation (3.53).
RWT $\;=\; \sum_{s=1}^{S}\sum_{u=1}^{U} \max \Big\{ \lambda_{k_{r+1} 1 s_u}\, q_{k_{r+1} 1 s_u}\, t_{k_{r+1} 1 s_u} - \lambda_{k_r o s_u} \big( q_{k_r o s_u}\, t_{k_r o s_u} + g_{k o s_u} \big) \Big\} \qquad (3.53)$
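The decomposition in Equation (3.49) can be illustrated numerically. The sketch below computes the three waiting-time components for a single train from a toy schedule of (start, duration) operations; the schedule format is an assumption for illustration, not the model's data structure:

```python
# Numerical sketch of TWT = IWT + SWT + RWT (Equation 3.49) for one train:
# waiting is the idle time before the first operation, between consecutive
# operations within a run, and between consecutive runs.

def waiting_times(ready_time, runs):
    """runs: list of runs; each run is a list of (start, duration) operations."""
    iwt = runs[0][0][0] - ready_time                  # initial waiting time
    swt = 0.0                                         # section waiting time
    for run in runs:
        for (s0, d0), (s1, _d1) in zip(run, run[1:]):
            swt += s1 - (s0 + d0)                     # wait while next section is blocked
    rwt = 0.0                                         # run waiting time
    for prev, nxt in zip(runs, runs[1:]):
        last_start, last_dur = prev[-1]
        rwt += nxt[0][0] - (last_start + last_dur)
    return iwt, swt, rwt

iwt, swt, rwt = waiting_times(0.0, [[(1.0, 2.0), (4.0, 1.0)], [(6.0, 2.0)]])
```

Here the train waits 1 hour before starting, 1 hour between sections, and 1 hour between runs, so the total waiting time is 3 hours.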
Precedence
Equation (3.54) ensures that operation o+1 of locomotive k cannot start before
operation o of locomotive k has finished, for outbound locomotives.
$$\big(t_{k_r o s_u} + g_{k o s_u}\big)\, q_{k_r o s_u}\, \lambda_{k_r o s_u} \;\le\; t_{k_r (o+1) s'_u}\, q_{k_r (o+1) s'_u} + \big(1 - \varphi_{k s s' r}\big) V,$$
$$\forall\, k \in K,\; r \in R,\; o = 1, \dots, O-1,\; s \in S,\; u \in U \qquad (3.54)$$
Equation (3.55) ensures that operation o of locomotive k cannot start before
operation o+1 of locomotive k has finished, for inbound locomotives.
$$q_{k_r (o+1) s'_u} \big(t_{k_r (o+1) s'_u} + g_{k (o+1) s'_u}\big) \;\le\; q_{k_r o s_u}\, t_{k_r o s_u} + \big(1 - \varphi_{k s s' r}\big) V,$$
$$\forall\, k \in K,\; r \in R,\; o = 1, \dots, O-1,\; s \in S,\; u \in U \qquad (3.55)$$
Passing priority
In Equation (3.56), trains k and k' are processed on the uth unit of section s,
and locomotive k is processed after locomotive k'.
$$t_{k_r o s_u} \;\ge\; t_{k'_{r'} o' s_u} + g_{k' o' s_u} - \big(1 - Z_{k k' s_u}\big) V,$$
$$\forall\, k, k' \in K,\; k \ne k',\; s \in S,\; u \in U,\; r, r' \in R,\; o, o' \in O \qquad (3.56)$$
In Equation (3.57), trains k and k' are processed on the uth unit of section s,
and locomotive k' is processed after locomotive k.
$$t_{k'_{r'} o' s_u} \;\ge\; t_{k_r o s_u} + g_{k o s_u} - Z_{k k' s_u}\, V,$$
$$\forall\, k, k' \in K,\; k \ne k',\; s \in S,\; u \in U,\; r, r' \in R,\; o, o' \in O \qquad (3.57)$$
Blocking
Equation (3.58) is a blocking condition: operation o of locomotive k requires the
uth unit of section s, but operation o' of locomotive k' is scheduled on the same
unit of the same section.
$$t_{k_r o s_u} \;\ge\; \sum_{k'=1}^{K}\sum_{s'=1}^{S}\sum_{u'=1}^{U} \lambda_{k'_{r'} o' s_u}\, q_{k'_{r'} (o'+1) s'_{u'}}\, t_{k'_{r'} (o'+1) s'_{u'}}, \quad \forall\, k \ne k',\; r, r' \in R,\; o, o' \in O \qquad (3.58)$$
3.4.2 Blocking Section CP Model of Sugarcane Rail System
The units of each section are used as unary resources in the blocking section CP
model, meaning that no more than one train can use each unit of each section at a
time. The blocking segment CP model is modified and updated as follows:
In the blocking section CP model, unit u of section s is a Unary resource in the OPL language, and the activityHasSelectedResource predicate is used instead of binary variables.
As for the blocking section MIP model, the blocking section CP model requires
changes to the objective function, and precedence, passing priority and blocking
constraints from the blocking segment CP model.
The makespan and total waiting time are minimised as objective functions in the Blocking Section CP Model (Equations 3.59 and 3.60):
$$\min C_{\max} \;=\; \min\Big\{ \max_{\forall o,\; k = 1, \dots, K} \;\sum_{s=1}^{S}\sum_{u=1}^{U} \big(t_{k_R o s_u} + g_{k o s_u}\big) \Big\} \qquad (3.59)$$
TWTBS= IWT + OSWT + ISWT + RWT (3.60)
The initial waiting time (IWT) is the waiting time before starting the first run of each train as shown in Equation (3.61).
IWT $\;=\; \sum_{k=1}^{K} \big(t_{k_1 1 s_u} - \eta_k\big), \quad \forall\, s_u \qquad (3.61)$
The section waiting time (SWT) is the waiting time on each section while the next
section is blocked. It comprises the waiting times in the outbound and inbound
directions (OSWT and ISWT, Equations (3.62) and (3.63) respectively).
OSWT $\;=\; \sum_{k=1}^{K}\sum_{r=1}^{R}\sum_{s=1}^{S}\sum_{s'=1}^{S}\sum_{u=1}^{U} \Big\{ t_{k_r (o+1) s'_u} - \big( t_{k_r o s_u} + g_{k o s_u} \big) \Big\} \qquad (3.62)$
where u denotes the unit selected for each activity by its unary resource.
ISWT $\;=\; \sum_{k=1}^{K}\sum_{r=1}^{R}\sum_{s=1}^{S}\sum_{s'=1}^{S}\sum_{u=1}^{U} \Big\{ t_{k_r o s_u} - \big( t_{k_r (o+1) s'_u} + g_{k (o+1) s'_u} \big) \Big\} \qquad (3.63)$
The run waiting time (RWT) is the waiting time between finishing one run and
starting a new one as shown in Equation (3.64).
RWT $\;=\; \sum_{s=1}^{S}\sum_{u=1}^{U} \max \Big\{ t_{k_{r+1} 1 s_u} - \big( t_{k_r o s_u} + g_{k o s_u} \big) \Big\} \qquad (3.64)$
Precedence
Equation (3.65) ensures that, for outbound locomotives, operation o+1 of
locomotive k cannot start before operation o has finished, or, for inbound
locomotives, operation o cannot start before operation o+1 has finished.
$$t_{k o s_u} + g_{k o s_u} \le t_{k (o+1) s'_u} \;\;\vee\;\; t_{k (o+1) s'_u} + g_{k (o+1) s'_u} \le t_{k o s_u},$$
$$\forall\, k \in K,\; r \in R,\; o = 1, \dots, O-1,\; s \in S,\; u \in U \qquad (3.65)$$
Passing priority
In Equation (3.66), trains k and k' are processed on unit u of section s, and
either locomotive k is processed after locomotive k' or locomotive k' after
locomotive k.
$$t_{k o s_u} \ge t_{k' o' s_u} + g_{k' o' s_u} \;\;\vee\;\; t_{k' o' s_u} \ge t_{k o s_u} + g_{k o s_u},$$
$$\forall\, k, k' \in K,\; k \ne k',\; s \in S,\; u \in U,\; r, r' \in R,\; o, o' \in O \qquad (3.66)$$
Blocking
Sections and units are modelled as Unary resources. OPL Unary resources can
process only one operation at a time, so each unit u of section s is a unary
resource in the OPL language.
3.5 Inclusion of the Delivery and Collection Time Constraints to the Model
The models developed in Sections 3.3 and 3.4 address many issues of train
scheduling with blocking constraints: single-track railway conflicts; delivery and
collection conflicts in the outbound and inbound directions; and train and siding
capacities during the delivery and collection of bins. Results from these models are
described in Chapter 5. Constraints associated with delivery and collection times for
satisfying the empty and full bin requirements of each siding are not included in
these models.
The delay problems in satisfying the empty and full bin requirements of each siding
in the sugarcane rail system have been solved with additional constraints that
optimise the delivery and collection times. The timetable needs to satisfy the
harvesters' requirement for a continuous supply of empty bins so they can harvest
continuously, and the mill's requirement for a continuous supply of full bins so it
can crush continuously. In addition, the number of bins in the system has to be
minimised by spreading out the empty bin deliveries to match the mill's production
of empty bins, which results from emptying the full bins during crushing operations.
Optimising the collection time for each siding can reduce the time between
harvesting and crushing (cane age) and so keep sugarcane quality high.
A specific scenario, developed to illustrate the main aims of a sugarcane rail
transport system, is explained as follows:
This scenario assumes that the empty bins delivered during the last visit
to a siding each day are available for use the next day when the
harvester commences. The number of bins to deliver is determined by
considering the total allotment for each siding and the siding capacity.
The visits to each siding for one day are organised in the following steps.
Begin
Step1: First visit
Aim: The main aim of the first visit to a siding for the day is to deliver
empty bins which are to be used until the train’s next visit.
Additionally, during this first visit, the empty bins delivered the
previous day that have been filled are collected.
Time: The time of the first visit is optimised by considering the number
of empty bins delivered to the siding in the last visit of the previous
day, the harvester start time and the harvesting rate at the siding as
follows:
(first visit time – harvester start time)*harvester rate ≤ the number of
empty bins delivered in the last visit of the previous day.
Delivered empties: The number of delivered empties in the first visit
should be sufficient to satisfy the harvester’s needs without interruption
until the second visit to the siding. The maximum number of delivered
empty bins is limited by the siding capacity.
Step 2: Second and subsequent visits up to the last visit
Aim: the main aim of the second and subsequent visits before the last
visit is to deliver the empty bins to the siding that will be used until the
next visit. Additionally, the now-full bins at the siding will be collected.
Time: the time of the second and subsequent visits before the last visit is
optimised by considering the number of empty bins delivered to the
siding in the previous visit, the time of the previous visit and the
harvester rate at the siding as follows:
(second visit time – first visit time)*harvester rate ≤ the number of
empty bins delivered in the first visit of that day.
Delivered empties: the number of delivered empties in the second and
subsequent visits before the last visit should be sufficient to satisfy the
harvester’s needs until the next visit to the siding.
Repeat Step 2 until the siding allotment of empty bins has been
delivered
Step 3: Last visit
Aim: the main aim of the last visit is to deliver the empty bins to the
siding to be used the next day until the first visit and to collect the last
of the full bins (the siding’s allotment of full bins is completed) filled
today.
Time: the time of the last visit is after the completion of the harvest for
the day and is optimised by considering when the full bins are required
by the mill.
Delivered empties: the delivered empties in the last visit should be
sufficient to make the harvester work without interruption the next day
from harvester start time until the train’s first visit to the siding.
General Constraints
The siding capacity and the anticipated number of full bins at the siding
are taken into account during delivery of empty bins at the siding. The
allotment of the delivered empty bins equals the allotment of the
collected full bins each day.
End
A numerical example for the visit times and operations at siding A is shown in
Figure 3.12.
Figure 3.12: Numerical case for siding A (harvester start = 4 am; harvester rate = 10 bins/hour; allotment = 174 bins; siding capacity = 140 bins)
Figure 3.13 shows the results of the numerical case in Figure 3.12 using the specific
scenario to optimise the siding visit times. Given the siding capacity of 140 bins, no
more than 70 bins can be delivered at a time so that there is room for the next
delivery. The 34 empty bins delivered overnight will be consumed at 7:24, so that
is the time chosen for the first visit. The 70 empty bins delivered in the first
visit will be consumed at 14:24, so that is the time chosen for the second visit. The
70 empty bins delivered in the second visit will be consumed at 21:24, so that is
the time chosen for the last visit. The last visit can be later than 21:24 since
delivery of empty bins is not time critical.
Event               | harvester start | first visit | second visit | last visit
Time                | 4 am            | 7:24        | 14:24        | 21:24 (or later)
Delivered empties   | -               | 70          | 70           | 34
Collected fulls     | -               | 34          | 70           | 70

Figure 3.13: Visit times of siding A for one day
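The visit-time calculation can be sketched for the numerical case of Figure 3.12. The helper below assumes, following the text, that at most half the siding capacity is delivered per visit (so there is room for the next delivery) and that each visit is scheduled when the previous delivery runs out; the function and its arguments are illustrative, not part of the MIP model:

```python
# Sketch of the daily visit-time procedure for one siding
# (Figure 3.12: start 4 am, 10 bins/hour, allotment 174, capacity 140).

def plan_visits(start_hour, rate, allotment, capacity, overnight):
    per_visit = capacity // 2            # leave room for the next delivery
    deliveries, remaining = [], allotment
    while remaining > per_visit:
        deliveries.append(per_visit)
        remaining -= per_visit
    deliveries.append(remaining)         # the last visit delivers the rest
    times, t = [], start_hour + overnight / rate   # overnight bins run out here
    for d in deliveries[:-1]:
        times.append(t)
        t += d / rate                    # this delivery lasts d/rate hours
    times.append(t)                      # last visit (or later; not time critical)
    return times, deliveries

times, deliveries = plan_visits(4.0, 10.0, 174, 140, overnight=34)
# times -> [7.4, 14.4, 21.4] (up to rounding), i.e. 7:24, 14:24 and 21:24
```

This reproduces the deliveries of 70, 70 and 34 bins and the visit times in Figure 3.13.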
Optimising the delivery and collection time for the sugarcane rail transport system
requires extra equations. All steps to optimise the siding visit time in the sugarcane
transport system were modelled using MIP.
Additional Notations for the Delivery and Collection Time Constraints

$H^{rate}_{s_e}$   Harvester rate at siding s in segment e.

$H^{start}_{s_e}$   Harvester start time at siding s in segment e.

$N_{k_r o s_e} = \begin{cases} 1, & \text{if there is a delivery at siding } s \text{ in segment } e \text{ during run } r \text{ by train } k,\\ 0, & \text{otherwise.} \end{cases}$

$\alpha_{k_r o s_e}$   Number of empty bins delivered at siding s in segment e during run r by train k.

$B_{k_r o s_e}$   Number of full bins collected at siding s in segment e during run r by train k.
New constraints were developed to update the MIP model to optimise the visit
timetable for the sidings as follows:
3.5.1 Delivering Delay Constraints
- No Delays with the First Visit
To remove any delivering delay at each siding throughout the rail network, the
model has to distinguish between the following scenarios:
Sections used only for travelling through, with no delivering or collecting.
Sections containing a siding with no harvester; the siding is used only for
train travel.
Sections containing a siding with harvesters, where a train uses the section
only for travelling to reach another siding, so there is no delivering or
collecting.
Sections containing a siding with harvesters, where the train collects only
and does not deliver.
Sections containing a siding with harvesters, where the train makes
deliveries.
Equations (3.67) and (3.68) ensure there are no harvesting delays at the time of the
first visit. Equation (3.67) is important for achieving Equation (3.68): it establishes
that the section works as a siding and that there is a delivery to the siding (the last
scenario above), which activates Equation (3.68). Without Equation (3.67),
Equation (3.68) cannot be applied correctly.
$$\alpha_{k_R o s_e} \;\le\; M\,N_{k_R o s_e}, \quad \forall\, k \in K,\; o \in O,\; s \in S,\; e \in E \qquad (3.67)$$

$$t_{k_R o s_e} \;\le\; H^{start}_{s_e} + \frac{3600\,\alpha_{k_R o s_e}}{H^{rate}_{s_e}} + M\,\big(1 - N_{k_R o s_e}\big), \quad \forall\, k \in K,\; o \in O,\; s \in S,\; e \in E \qquad (3.68)$$
Equation (3.68) ensures the number of delivered empties in the last run for siding s
in segment e is greater than or equal to the siding’s requirements until the time of the
first visit. The last run is used for the next day to ensure there are sufficient empty
bins for the period between harvester start time and the first visit the next day.
The time of the first visit, the harvester start time, and the harvesting rate are
considered for each siding s. Equation (3.68) will be ignored if there is no delivery at
the siding.
- No Delays Between Subsequent Visits
Equations (3.69) and (3.70) ensure there are no harvesting delays, where the time of
the second visit for each siding depends on the number of empty bins delivered in
the first visit, and so on. Equations (3.69) and (3.70) ensure the delivered empty bins
in any run will be sufficient until the next visit. These equations work from the
second visit to the last visit for each siding.
$$\alpha_{k_r o s_e} \;\le\; M\,N_{k_r o s_e}, \quad \forall\, k \in K,\; r \in R,\; o \in O,\; s \in S,\; e \in E \qquad (3.69)$$
$$t_{k_{r+1} o s_e} \;\le\; t_{k_r o s_e} + \frac{3600\,\alpha_{k_r o s_e}}{H^{rate}_{s_e}} + M\,\big(1 - N_{k_r o s_e}\big), \quad \forall\, k \in K,\; r = 1, \dots, R-1,\; o \in O,\; s \in S,\; e \in E \qquad (3.70)$$
The time between visits is used by the harvester to fill the empty bins at the
siding, so the number of bins delivered at any visit has to be sufficient for the
harvester until the next visit. Equation (3.69) operates like Equation (3.67) to
check whether there is a delivery to be made at the siding.
3.5.2 Collecting Delay Constraints
These constraints ensure that no bins are collected before they are filled.
The number of full bins collected depends on the harvesting rate at each siding and
the visit times. Equations (3.71) and (3.72) ensure that the number of full bins
collected is no more than the number of full bins produced at each siding.
$$B_{k_1 o s_e} \;\le\; \frac{t_{k_1 o s_e} - H^{start}_{s_e}}{3600}\; H^{rate}_{s_e}, \quad \forall\, k \in K,\; o \in O,\; s \in S,\; e \in E \qquad (3.71)$$
$$B_{k_{r+1} o s_e} \;\le\; \frac{t_{k_{r+1} o s_e} - t_{k_r o s_e}}{3600}\; H^{rate}_{s_e}, \quad \forall\, k \in K,\; r = 1, \dots, R-1,\; o \in O,\; s \in S,\; e \in E \qquad (3.72)$$
The challenge, however, is to minimise the size of the bin fleet. It is easy to
simply add more empty and full bins at the mill at the start of the simulation to
make sure that the mill never runs out of empty bins to supply the harvesters or
full bins to crush. The challenge is to delay deliveries as long as possible to
reduce empty bin demand at the mill. To meet this challenge, the time of the
last visit to a siding after the completion of the harvest for the day (when time
constraints on deliveries are looser) is optimised by considering when the full
bins are required by the mill.
3.6 Sugarcane Rail System as a Dynamic System
The sugarcane transport system is a dynamic system because new and sometimes
urgent situations arise daily, such as changes in the number of trains, sections and
harvesters. The sugarcane rail system is modelled as a blocking parallel-machine job
shop scheduling (BPMJSS) problem. BPMJSS can be applied to dynamic systems by
considering several dynamic variables: unexpected section delays, unexpected
harvester delays, and the addition of new jobs to the system, including unexpected
train delays. The main reasons for unexpected delays are maintenance (a temporary
stoppage) or breakdown (a long-term stoppage).
Two options are available to resolve an unexpected delay: repair or remove the
affected element (harvester, loco or section) in the system. Three parameters have to
be constructed to identify a section or harvester unexpected delay: the index of the
delayed harvester or section; the delay time point (delay start time); and the delay
period (the time between the start and finish of the delay).
Adding new jobs to the system means adding new trains to the rail system, where
two parameters are identified: the index and arrival time of each train. The rail
system operations are rescheduled by updating three main lists (harvesters, sections
and trains) as shown in Figure 3.14. The solution techniques for the sugarcane rail
transport system can reschedule the rail operations in reasonable time.
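A minimal sketch of handling one delay event in this rescheduling loop; the dictionary-based schedule layout and the rule of pushing back every operation that starts at or after the delay are simplifying assumptions for illustration:

```python
# Sketch of delay-event handling: a delay carries a resource index, a start
# time and a period; operations on that resource starting at or after the
# delay are pushed back by the delay period.

def apply_delay(schedule, resource, delay_start, delay_period):
    """schedule: {resource: [(operation, start_time), ...]}."""
    schedule[resource] = [
        (op, start + delay_period) if start >= delay_start else (op, start)
        for op, start in schedule[resource]
    ]
    return schedule
```

In a full implementation the shifted operations would then be re-propagated through the harvester, section and train lists of Figure 3.14.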
Figure 3.14: Sugarcane rail system as a dynamic system
3.7 Conclusion
MIP and CP models have been developed to solve sugarcane rail transport system
problems by optimising the operational time throughout the rail system. The core of
the constraint programming approach is to satisfy all problem constraints, while MIP
focuses on improving the objective function. Both of these approaches are useful and
so the CP search techniques are integrated with the MIP model in the solution
approach in Chapter 4 to solve the sugarcane rail transport system problem. The CP
approach has the additional advantage that it requires fewer constraints than the
MIP model, since CP combines many constraints into one. The CP model therefore
uses less memory when the solution code is run, particularly for small-size
problems. Many solution techniques are described in Chapter 4 to solve the MIP and
CP models in reasonable time.
The models have been constructed in two parts. The first part, containing the rail
operation constraints and the basic bin delivery and collection constraints, is
extensively used in this thesis. The second part extends the models to optimise the
visit times at each siding. Optimising visit times removes any interruption to
delivering and collecting the empty and full bins to and from each siding.
Chapter 4
Solution Approach
Chapter Outline
4.1 Introduction.....................................................................................................................107
4.2 Constraint Satisfaction (CS) Techniques........................................................................109
4.2.1 Constraint Propagation.......................................................................................109
4.2.1.1 Node Consistency...............................................................................111
4.2.1.2 Arc Consistencies................................................................................112
4.2.1.3 Bounds Consistency............................................................................115
4.2.1.4 Path Consistency.................................................................................115
4.2.2 Search Process...................................................................................................116
4.2.2.1 Variable and Value Ordering Heuristics..............................................117
4.2.2.2 Search Techniques...............................................................................119
4.2.2.3 Global Constraint.................................................................................124
4.3 Proposed Algorithms......................................................................................................125
4.3.1 Collecting and Delivering Conflict Elimination................................................126
4.3.1.1 Terminal Segment Conflict Elimination...............................................127
4.3.1.2 Intermediate Segment Conflict Elimination.........................................130
4.3.2 Algorithms for Solving Train Conflicts..............................................................133
4.3.3 Computing Acceleration (CA) Algorithms..........................................................137
4.4 Conclusion......................................................................................................................139
Publications Arising from Chapter 4
Masoud M., Kozan E., & Kent, G. (2010b). A constraint programming approach to
optimise sugarcane rail operations, Proceedings of the 11th Asia Pacific
Industrial Engineering and Management Systems Conference 2010, 147:1-7,
Malaysia. (Outstanding Student Paper Award)
4.1 Introduction
In this research, mixed integer programming (MIP) and constraint programming
(CP) formulations of the sugarcane rail transport systems are solved using the
integration of job shop scheduling (JSS) techniques and constraints satisfaction
techniques. Constraints satisfaction has been used previously as a solution technique
for constraint programming problems. It has been applied in this research with mixed
integer programming, in particular for search techniques. Linear relaxation
techniques in Optimization Programming Language OPL and CPLEX software are
integrated with constraint satisfaction techniques during problem solving for the CP
and MIP models respectively. The linear relaxation technique focuses on the
objective function and improves its value by removing suboptimal solutions, while
constraint stratification techniques such as constraint propagation and search
techniques focus on the problem constraints and remove the infeasible values from
the constraints domain.
MIP uses an LP (linear programming) solver on the continuous relaxation of the
problem to detect infeasibility or obtain a lower bound on the objective function. CP,
on the other hand, uses constraint propagation to detect infeasibility or obtain bounds
on the constraint variables. The search algorithms of the two techniques are only
slightly different: the algorithms in CP are similar to the branch and bound
algorithms used in MIP (Haralick & Elliott, 1980; Bockmayr & Kasper, 1998). A
search tree is used in most of the search algorithms, where each node represents a
decision variable and each branch represents a value assigned to that variable. The
integration of the MIP formulation with the constraint satisfaction search techniques
of CP has been used in this research.
Integrating CP and MIP or integer linear programming (ILP) approaches has
received attention from many researchers since the late 1990s as a way to combine
the advantages of both and avoid their individual weaknesses (Puget & Lustig, 2001;
Focacci et al., 2002; Mouret et al., 2009). Identifying their commonalities and
differences underpins the integration of these techniques. The CP and ILP techniques
have been compared on three criteria: modelling ability, node processing and search
algorithm. The two approaches share fundamental modelling concepts such as
decision variables, constraints on variables, and the search algorithm.
Integration of integer programming and CP has been used to solve combinatorial
problems, especially scheduling problems, in preference to using either approach
individually (Milano, 2004). Artigues et al. (2009) designed a new mathematical
model to optimise employee timetabling by integrating ILP and CP. Their CP
approach combined the LP relaxation into a global constraint and used reduced-
cost-based filtering techniques. The Optimization Programming Language (OPL)
integrates an integer programming approach and CP to solve scheduling problems.
Van Hentenryck (1999, 2002) designed CP and integer programming models and
showed how OPL can integrate the two approaches to solve combinatorial
optimization problems easily and in reasonable time.
A hybrid method for solving planning and scheduling problems (Hooker, 2005; Beck
& Refalo, 2003) integrates mixed integer linear programming (MILP) and CP
through logic-based Benders decomposition to take advantage of both techniques in
minimising cost and/or makespan objectives. MILP solved the assignment problem
using CPLEX software and CP solved the scheduling problem using ILOG
Scheduler software, obtaining optimal solutions for large-scale problems where the
Benders decomposition algorithm gave a feasible solution. El Khayat et al. (2006)
designed a mathematical model and a CP model for solving scheduling problems,
especially job shop scheduling problems, where machines and vehicles were both
considered as constraining resources. Mathematical models are time consuming for
some problems, especially large-scale ones, while CP can provide solutions in
reasonable time. Integrating the two approaches produced optimal schedules for
many test case problems using ILOG OPL Studio.
In Chapter 3 of this thesis, four models of sugarcane rail transport systems were
formulated as blocking parallel-machine job shop scheduling (BPMJSS) problems
using two approaches: MIP and CP. Each model included two main parts, an
objective function and constraints, where all problem constraints have to be satisfied
while improving the objective function. In this chapter, many solution techniques
are proposed, including constraint satisfaction techniques integrated with developed
algorithms, to solve many issues in the sugarcane rail transport system models, such
as train conflicts and the delivery and collection of empty and full bins (Masoud et
al., 2010b).
Metaheuristics with hybrid (hybrid SA/TS and hybrid TS/SA) and hyper (hyper
SA/TS and hyper TS/SA) techniques are presented in Chapter 6 to solve large-scale
problems in reasonable time.
4.2 Constraint Satisfaction (CS) Techniques
Constraint satisfaction (CS), which deals with problems defined over a finite set of
possible values for each variable, is the main technology used for solving the
mathematical formulation problems. This 'finite set' is often called the domain of the
variable. Most constraints in industrial applications use finite domains. CS depends
on two main techniques, constraint propagation and the search process, which are
integrated to solve the mathematical formulation problems.
The following sections show some of the aspects of CS including the techniques and
methods used to solve optimisation problems, especially sugarcane rail transport
system problems.
4.2.1 Constraint Propagation
Constraint propagation eliminates values from a variable’s domain that do not satisfy
the constraints. Propagation is often referred to as consistency maintenance and
includes node consistency (NC), arc consistency (AC) and path consistency. Many
techniques have been developed to achieve AC and NC (Tsang, 1993). The
consistency techniques are deterministic in nature, while the search process is
non-deterministic. Example 4.1 illustrates the main concept of constraint
propagation and how it works on the problem domain.
Example 4.1
Assume x, y, and z are three variables with integer values in the
interval [1, 6], with the constraint y < z. The decision variable y can take a
value in [1, 6], so y has to be at least 1. Since z > y, z = 1 cannot satisfy
the constraint. Therefore 1 is removed from the domain of z: z ∈ [2, 6].
By the same method, the domain of y becomes [1, 5], and there is no
change in the domain of x, because no constraint includes x at this
point. A new constraint can be added to the problem, such as x = y×z.
The minimal possible value for y is 1, and for z is 2, so x has to be at
least 2. As a result, the domain of x is reduced to [2, 6].
Moreover, the maximum value of x is 6 and the minimum value of z is
2, so y, which equals x/z, can be at most 3.
In Example 4.1, constraint propagation reduced the domains of the variables to
x ∈ [2, 6], y ∈ [1, 3] and z ∈ [2, 6]. As a result, the new domains of x, y, and z are
consistent with all the problem constraints.
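The bounds reasoning of Example 4.1 can be reproduced with a small fixed-point loop over interval domains. This is a generic bounds-propagation sketch written for this example, not the thesis's implementation:

```python
# Bounds propagation for Example 4.1: x, y, z in [1, 6] with y < z and
# x = y*z. Bounds are tightened repeatedly until nothing changes.
import math

def propagate(x, y, z):
    """Tighten (lo, hi) bounds for y < z and x = y*z over positive integers."""
    changed = True
    while changed:
        changed = False
        # y < z: y is at most z.hi - 1, z is at least y.lo + 1
        new_y = (y[0], min(y[1], z[1] - 1))
        new_z = (max(z[0], y[0] + 1), z[1])
        # x = y*z with positive bounds
        new_x = (max(x[0], new_y[0] * new_z[0]), min(x[1], new_y[1] * new_z[1]))
        new_y = (max(new_y[0], math.ceil(new_x[0] / new_z[1])),
                 min(new_y[1], new_x[1] // new_z[0]))
        new_z = (max(new_z[0], math.ceil(new_x[0] / new_y[1])),
                 min(new_z[1], new_x[1] // new_y[0]))
        if (new_x, new_y, new_z) != (x, y, z):
            x, y, z = new_x, new_y, new_z
            changed = True
    return x, y, z

domains = propagate((1, 6), (1, 6), (1, 6))
# domains -> ((2, 6), (1, 3), (2, 6)), as derived in the text
```

The loop reaches the same reduced domains as the hand derivation in Example 4.1.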
Consistency techniques were introduced by Montanari (1974) and Waltz (1975) in
the artificial intelligence area to improve the efficiency of picture recognition
programs. Picture recognition includes labelling all lines in a picture in a consistent
manner. A significant number of possible combinations can be reduced to consistent
combinations only. Consistency techniques effectively remove many inconsistent
assignments at an early stage and are applicable for complex problems. Example 4.2
describes how these techniques work and their application.
Example 4.2
Let x < y be a constraint on the variables x and y, where x has the
domain Dx = [4, 8] and y has the domain Dy = [3, 6]. For some values
in Dx there is no value in Dy that satisfies the constraint x < y, and
vice versa. As a result, some values can be removed from the
respective domains without the loss of any solution, giving
Dx = [4, 5] and Dy = [5, 6].
In Example 4.2, the reduction of the domains of x and y does not necessarily
remove all inconsistent pairs: x = 5, y = 5 is still in the domains of x and y.
Nevertheless, for each value in the domain of x there is a consistent value in
the domain of y, and vice versa.
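The reduction in Example 4.2 corresponds to one "revise" step per direction of the arc; a minimal sketch:

```python
# Arc-consistency "revise" for Example 4.2: keep only values of one domain
# that have at least one supporting value in the other domain under x < y.

def revise(domain_a, domain_b, holds):
    """Keep only the values of domain_a with support in domain_b."""
    return {a for a in domain_a if any(holds(a, b) for b in domain_b)}

dx, dy = set(range(4, 9)), set(range(3, 7))   # Dx = [4, 8], Dy = [3, 6]
dx = revise(dx, dy, lambda x, y: x < y)       # drop x-values without support
dy = revise(dy, dx, lambda y, x: x < y)       # then drop unsupported y-values
# dx -> {4, 5}, dy -> {5, 6}, matching the domains derived in the text
```

Note that revising Dy against the already reduced Dx is what removes 3 and 4 from Dy, mirroring the text's observation that each direction of the arc is treated separately.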
Many consistency techniques have been proposed for constraint graphs in binary CS
problems to prune the search space. A consistency-enforcing algorithm makes any
partial solution of a small subnetwork extensible to some surrounding network.
Thus, potential inconsistency is detected as soon as possible. In the following
techniques, a binary CS problem is represented as a constraint graph: each node is
labelled by a variable, and an edge between two nodes corresponds to the binary
constraint binding the variables that label the nodes it connects. A unary
constraint can be represented by a loop edge that starts and terminates at the same
node.
4.2.1.1 Node Consistency
Node consistency (NC) is the simplest consistency technique. The node representing
the variable y is node consistent if and only if every value z in the current domain
of y satisfies each unary constraint on y. A CS problem is node consistent if
and only if all its variables are node consistent, i.e., every value in each variable's
domain satisfies the unary constraints on that variable. The next paragraph
illustrates how this technique works.
If the domain Dy of the variable y includes a value b that does not satisfy a
unary constraint on y, then the instantiation of y to b will always result in
immediate failure. So node inconsistency can be removed by eliminating from the
domain Dy of variable y those values which cannot satisfy the unary constraints
on y. The main steps of NC are as follows:
Node Consistency Algorithm
For each y in nodes(G) Do
    For each z in the domain Dy of y Do
        If any unary constraint on y is inconsistent with z Then
            Delete z from Dy
        End If
    End For
End For
End NC
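The NC loop above can be sketched in Python; the dictionary-based representation of domains and unary constraints is an illustrative choice:

```python
# Node consistency: filter each variable's domain against its own
# unary constraints, exactly as in the NC algorithm above.

def node_consistency(domains, unary):
    """domains: {var: set of values}; unary: {var: list of predicates}."""
    for var, dom in domains.items():
        domains[var] = {v for v in dom if all(c(v) for c in unary.get(var, []))}
    return domains

doms = node_consistency({"y": set(range(1, 10))}, {"y": [lambda v: v % 2 == 1]})
# doms -> {"y": {1, 3, 5, 7, 9}}: even values are deleted from Dy
```

This mirrors the "y is an odd integer" example used in the arc-consistency discussion below: node consistency alone removes all even values from Dy.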
4.2.1.2 Arc Consistencies
This strategy considers a constraint as a link between variables including the
bounded domains of these variables. To illustrate this method, consider the domains
of the variables x, y are Dx=Dy = {2, 3, 4, 5}, and with the simple constraint, x-1=y,
on variables x and y. So the constraint Cxy will be represented as the set {(3, 2), (4,
3), (5, 4)}, which is all the combinations of x and y that satisfy this constraint. The
constraints can be classified depending on the number of variables, where unary
constraints affect one variable. For example, if y is an odd integer variable, node
consistency will remove all even numbers from y's domain; in other words, ∀ a ∈ Dy,
the unary constraint cy(a) ≡ odd(a) is satisfied. Another type of constraint is a
binary constraint, which affects two variables. Generally, n-ary constraints have an effect on n variables. A
graph can be used to represent binary CS problems. Here, nodes represent the
variables, and an edge joins two nodes whenever a constraint binds the
corresponding variables.
Definition: The directed arc between x and y, arc(x, y), is arc consistent if
∀ u ∈ Dx ∃ v ∈ Dy such that u and v satisfy the constraint Cxy. A CS problem is arc
consistent if and only if every arc(x, y) in its constraint graph is arc consistent.
From the constraint x-1=y, the value x=2 cannot satisfy the constraint, because there
is no value in Dy such that 2-1=y. As a result, any value u ∈ Dx that cannot satisfy
the constraint Cxy, can be removed from Dx. By the same reasoning, making
arc(y, x) consistent removes any unsupported values from y's domain.
In the literature, many algorithms have been proposed to solve a binary CS problem
using arc consistent principles. The complexity for general constraints is O (d2c),
where d is the maximum domain size and c is the number of binary constraints in the
problem. It is clear that an arc, arc(x, y), can be made consistent by simply deleting
those values from the domain of x, Dx, for which there does not exist a
corresponding value in the domain of Dy such that the binary constraint between x
and y is satisfied (note that deleting of such values does not eliminate any solution of
the original CS problem). The algorithm of arc consistency includes the next main
steps as follows:
Arc Consistency Algorithm
For each x in Dx Do
If there is no such y in Dy such that (x, y) is consistent, Then
Delete x from Dx
End If
End For
End
Executing the previous technique for each arc just once is insufficient to make every
arc of the constraint graph consistent. Implementing an arc consistency algorithm
once reduces the domain of some variables xi, then each previously revised arc(xj, xi)
has to be revised again, since some of the values of the domain of xj may not be
consistent with any remaining values of the revised domain of xi.
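This repeated-revision loop can be sketched as a worklist algorithm in the style of AC-3. The sketch below uses the x - 1 = y example from this section; the dictionary-of-predicates representation of constraints is an assumption of the sketch.

```python
from collections import deque

def revise(domains, x, y, constraint):
    # Remove values of x that have no supporting value in y's domain.
    removed = False
    for u in set(domains[x]):
        if not any(constraint(u, v) for v in domains[y]):
            domains[x].discard(u)
            removed = True
    return removed

def ac3(domains, constraints):
    # constraints: {(x, y): predicate taking (value of x, value of y)}.
    queue = deque(constraints)
    while queue:
        x, y = queue.popleft()
        if revise(domains, x, y, constraints[(x, y)]):
            # x's domain shrank, so every arc pointing into x is re-revised.
            queue.extend(arc for arc in constraints if arc[1] == x)
    return domains

# The x - 1 = y example from the text, with Dx = Dy = {2, 3, 4, 5}.
domains = {"x": {2, 3, 4, 5}, "y": {2, 3, 4, 5}}
constraints = {("x", "y"): lambda u, v: u - 1 == v,
               ("y", "x"): lambda v, u: u - 1 == v}
print(ac3(domains, constraints))  # x loses 2, y loses 5
```

The final domains correspond exactly to the support set {(3, 2), (4, 3), (5, 4)} given earlier.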
In addition to the basic arc consistency algorithm, a family of algorithms known as
Arc Consistency-n (AC-1 to AC-7) has been developed for binary constraints. Mohr
and Henderson (1986) developed AC-4, which introduced the important idea that a
value in one variable's domain may be supported by values in another variable's
domain. For example, with the binary constraint x-1=y between variables x
and y, (presented before in this section), 3 in Dy is supporting value 4 in Dx, 4 in Dy
is supporting value 5 in Dx, and so on. But there is no value supporting the value 2 in
Dx and for that reason, value 2 can be removed from Dx. The same procedure can be
applied to the directed arc(y, x).
For any value through a variable’s domain, AC-4 compiles and preserves a list of
supporting values from the domain of the other variables. If there are no supporting
values or all supporting values are removed from the domains of their variables, then
the supported value should also be deleted from the domain of its variable. AC-4
executes with a worst case time complexity of O (d2c), where d is the maximum
domain size and c is the number of binary constraints (Mohr & Henderson, 1986;
Deville & Hentenryck, 1991). Hentenryck et al. (1992) proposed the AC-5
algorithm which is executed in the worst case time complexity O (d2c). However for
functional, anti-functional and monotonic classes of constraints, it provides a
specialized arc consistency algorithm of time complexity O (dc). Moreover, this
algorithm provides the user with an interface for applying consistency checks for
many types of constraints. This interface helps identify the difference between the
current variable domain and the previous domain of this variable, after the previous
activation of the constraint (Puget 1995). The AC-5 method was implemented by
ILOG Solver.
Bessiere et al. (1993) proposed AC-6 to improve the AC-4 algorithm. AC-6 is
executed in the worst case time complexity O (d2c); however, its space complexity is
decreased to O (dc). This algorithm keeps just one supporting value at a time, for
each value to be supported for each constraint. Another value can replace the
original, if this supporting value is removed. If no other supporting value remains,
the supported value can be removed. Bessiere et al. (1994) developed the AC-7
algorithm to improve the supporting value idea of AC-4 and AC-6. The proposed
AC-7 algorithm works in two directions and uses the constraint meta knowledge to
reduce the arc consistency computation. For instance, a value u ∈Dx which is
supported by a value v ∈ Dy also supports the value v.
Arc-consistency algorithms are efficient for general binary constraints. However, for
general n-ary constraints, these algorithms are time consuming. Mohr and Henderson
(1986) introduced the General Arc Consistency (GAC-4) algorithm which depends
on supporting values as in AC-4. The worst case time complexity is O (dnc), where n
is the maximum arity of the constraints. Puget (1998) and Regin (1994) proposed
specialised and faster algorithms for particular types of non-binary constraints, in
particular the all-different constraint. These algorithms exploit the semantics of
the constraint, so more pruning can be achieved than with a decomposition into
binary constraints. Many consistency techniques have been applied to general CP problems.
However these techniques are very time consuming when applied to problems
including n-ary constraints, even if a simple consistency technique such as arc-
consistency is used (Lhomme,1993). Bound propagation is an alternative technique
to the n-ary constraints and is used for solving numeric CS problems. This bound
propagation is explained in the next section.
4.2.1.3 Bounds Consistency
The bounds consistency method was proposed because arc-consistency is very time
consuming if used as a complete method. Bounds consistency propagates only the
bounds of the variable’s domain through other primitive constraints. If any of the
bounds has no support value through the other variable domains, it must be removed.
As a result, a constraint programming problem becomes bound consistent if each
constraint ci∈C is bound consistent as described in Example 4.3.
Example 4.3
Assume the constraint C includes some variables s1, s2, s3,...,sn.
Hence, C(s1, s2, s3,..., sn) is bound consistent if and only if, for each
variable si and for ui ∈ {min(Di), max(Di)}, there exists, for every j ≠ i, a
value uj ∈ [min(Dj), max(Dj)] such that C(u1, u2, u3,…, un) is
satisfied.
Generally, a constraint satisfaction problem is bounds consistent if each constraint is
bounds consistent. The bounds consistency method is implemented in time O (dc),
and is useful for a binary constraint.
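Bounds propagation over a single primitive constraint can be sketched as follows. The constraint x = y + z, the interval domains, and the fixpoint loop are illustrative assumptions of this sketch, not the thesis models.

```python
# Bounds consistency for the primitive constraint x = y + z: only the
# endpoints of each interval domain are propagated, never the interior.
def propagate_sum(dom):
    # dom: {var: [lo, hi]} under the constraint x = y + z.
    changed = True
    while changed:
        changed = False
        xl, xh = dom["x"]; yl, yh = dom["y"]; zl, zh = dom["z"]
        new = {
            "x": [max(xl, yl + zl), min(xh, yh + zh)],
            "y": [max(yl, xl - zh), min(yh, xh - zl)],
            "z": [max(zl, xl - yh), min(zh, xh - yl)],
        }
        if new != dom:            # repeat until a fixpoint is reached
            dom.update(new)
            changed = True
    return dom

# Hypothetical interval domains; only the bounds are tightened.
dom = {"x": [0, 4], "y": [2, 5], "z": [1, 5]}
print(propagate_sum(dom))  # {'x': [3, 4], 'y': [2, 3], 'z': [1, 2]}
```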
4.2.1.4 Path Consistency
Arc and bounds consistency work on each constraint in turn. The path consistency
method tries to involve more than one constraint in turn as described in Example 4.4.
Example 4.4
Consider a path (x, y, z), where x, y, z are variables, with binary
constraints cxz, czy, and cxy between (x, z), (z, y), and (x, y)
respectively. The path (x, y, z) is path consistent if and only if, for
every u ∈ Dx and v ∈ Dy that satisfy the constraint cxy, there exists
a value w ∈ Dz such that (u, w) ∈ cxz and (w, v) ∈ czy. Constraints
are represented as sets of value pairs; values of z that cannot
complete such a pair are discarded from its domain.
This type of consistency method is very time-consuming, with a worst case time
complexity of O (d3c3).
4.2.2 Search Process
Example 4.5 indicates there are many values still possible for each decision variable
and as a result, constraint propagation alone does not necessarily solve the problem
completely without other techniques. A search tree provides a solution with each
node representing a decision variable x, y, or z, and each branch expressing a
possible value for that variable.
Example 4.5
If the search starts with variable x, many branches can be created
from this node, each one taking one of the values in the domain of x,
[2, 6]. Assume that the first branch assigns 2 to x. At this point,
constraints are automatically used to further reduce the variable
domains, if needed. Since the domain of x has changed, the constraint
x = yz will be used with y ∈ [1, 3] and z ∈ [2, 6]. Propagation of
this constraint will reduce the domain of y to 1 and the domain of z
to 2. Now, all decision variables have been assigned a value, and the
problem is solved, so the search stops.
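The interplay of branching and propagation in Example 4.5 can be sketched with a toy solver. This is not the thesis implementation: the propagation below is naive generate-and-filter rather than a real CP engine, and the sorted value order is an assumption.

```python
# Depth-first search with propagation for Example 4.5: x = y * z with
# x in [2, 6], y in [1, 3], z in [2, 6], all domains as integer sets.
def propagate(dom):
    # Keep only values that participate in some tuple satisfying x = y * z.
    dom["x"] = {x for x in dom["x"] if any(y * z == x for y in dom["y"] for z in dom["z"])}
    dom["y"] = {y for y in dom["y"] if any(y * z == x for x in dom["x"] for z in dom["z"])}
    dom["z"] = {z for z in dom["z"] if any(y * z == x for x in dom["x"] for y in dom["y"])}
    return all(dom.values())  # False if any domain was wiped out

def search(dom):
    if not propagate(dom):
        return None                       # dead end: backtrack
    if all(len(d) == 1 for d in dom.values()):
        return {v: next(iter(d)) for v, d in dom.items()}
    var = next(v for v, d in dom.items() if len(d) > 1)
    for val in sorted(dom[var]):          # one branch per remaining value
        sub = {v: set(d) for v, d in dom.items()}
        sub[var] = {val}
        result = search(sub)
        if result is not None:
            return result
    return None

dom = {"x": set(range(2, 7)), "y": {1, 2, 3}, "z": set(range(2, 7))}
print(search(dom))  # {'x': 2, 'y': 1, 'z': 2}, as in Example 4.5
```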
In some cases, a chosen node is inconsistent, which means one or more decision
variables have an empty domain. In this case no solution is possible and another
choice must be made. This means returning to the last choice made and removing all
consequences of the choice and then choosing another node. This process involves
backtracking through the tree to follow another branch. This process continues until
there is a solution or the entire tree is exhausted.
The search process can be adapted to solve optimisation problems. The objective
value of the first feasible solution is stored as the incumbent. Each time a
solution is found, the value of its objective function is stored, and a new
constraint is added to the problem: the new value of the objective function must be
strictly better than the one
just found. This process involves looking only for solutions that improve the best
solution already found. The order of choosing nodes or variables is considered in
advance. This order between variables is important to improve search performance.
For example, the order may be defined according to which variable or node has the
least number of possible values in its domain. Section 4.2.2.1 describes the order of
the variables and value techniques used in the search process.
4.2.2.1 Variable and Value Ordering Heuristics
The order of the variables and the values assigned to the variables in backtracking
have important roles in improving the efficiency of a search algorithm.
Variables Ordering
The choice of which variable to instantiate next has a significant impact on the
number of backtracks, and a larger number of backtracks reduces the efficiency of
constraint satisfaction. Variable ordering can be static or dynamic. Static ordering means
order of the variables is determined before the search begins, and is not changed
thereafter. Dynamic ordering means selecting the next variable (during the search)
based on the current state of the search. Dynamic ordering is not feasible for all
search algorithms, because it needs extra information on the variable domains. For
example, there is no extra information available during the search for a simple
backtrack algorithm (BT) to make a different ordering choice from the initial
ordering. However, in the forward checking (FC) algorithm, the current state
includes the domains of the variables as they have been pruned by the current set of
instantiations, so choosing the next variable is possible with this information.
Many heuristics have been developed for dynamic variable ordering. Haralick and
Elliot (1980) developed the Fail-First (FF) principle for dynamic variable ordering
which consists of selecting the next variable so as to minimise the expected length
of each branch and thus the search cost. Smith and Grant (1998) showed, however,
that minimising the length of branches does not necessarily minimise the size of
the search tree; consequently, the usual justification for the FF principle is unsound.
A common heuristic, often described as an implementation of the FF principle, is
choosing the next variable which has the fewest remaining alternatives for
instantiation. In other words, choosing the next variable equates to choosing that
which has the smallest current domain. The smallest domain heuristic works well for
many problems. Another heuristic used when all variables have the same number of
values, is choosing the variable that participates in most constraints. One heuristic
for the static ordering of variables is suitable for simple backtracking. This heuristic
depends on choosing the next variable that has the largest number of constraints with
the past variables.
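The smallest-domain heuristic, with the most-constrained variable as a tie-breaker, can be sketched as follows. The variable names, domains, and degree counts below are hypothetical.

```python
# Dynamic variable ordering: pick the unassigned variable with the
# smallest current domain; break ties by the number of constraints the
# variable participates in (higher degree first).
def select_variable(domains, assigned, degree):
    unassigned = [v for v in domains if v not in assigned]
    return min(unassigned, key=lambda v: (len(domains[v]), -degree[v]))

domains = {"a": {1, 2, 3}, "b": {1, 2}, "c": {1, 2}}
degree = {"a": 1, "b": 3, "c": 2}   # constraints each variable appears in
print(select_variable(domains, set(), degree))  # 'b': small domain, high degree
```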
Value Ordering
Once a variable is selected for instantiation, many values become available. The
order of these values has to be considered because it significantly affects the time to
find the first solution. However, if all solutions are required, or there are no
solutions, value ordering makes no difference. The branches starting from each node of the search tree
will be rearranged by value ordering. As a result, the branch which leads to a
solution is searched earlier than other branches. That is, if the CS problem has a
solution and a correct value is assigned to each variable, then the solution can be
found without any backtracking. Therefore, once a variable is selected for
instantiation, the first value tried for that variable matters. When the aim is to
find a solution, a value expected to succeed without conflict is chosen first. This
Succeed-First (SF) strategy can be applied to value ordering.
Another heuristic is to prefer those values that maximise the number of options
available. The AC-4 algorithm works well with this heuristic as it counts the
supporting values. It is possible to estimate the “promise” of each value by knowing
the domain sizes of the future variables after choosing the value. That is, the AC-4
algorithm can predict the upper bound on the number of possible solutions resulting
from the assignment of values to the variables. The value that has the highest number
of solutions has to be selected. Additionally, AC-4 can calculate the percentage of
values in the domains of the future variables that are no longer usable; the best
value is the one with the lowest such cost. Another heuristic selects the value that
is most likely to lead to a CS problem solution. This strategy requires estimating
the difficulty of solving a CS problem.
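The maximise-options idea above can be sketched by counting, for each candidate value, how many supports it leaves in another variable's domain. The "promise" score and the use of the x - 1 = y constraint from earlier are assumptions of this sketch.

```python
# Value ordering by "promise": prefer the value for var that leaves the
# most supported values in the other variable's domain.
def order_values(var, other, domains, constraint):
    def promise(u):
        # Number of values of the other variable still supported by u.
        return sum(1 for v in domains[other] if constraint(u, v))
    return sorted(sorted(domains[var]), key=promise, reverse=True)

domains = {"x": {2, 3, 4, 5}, "y": {2, 3, 4, 5}}
ordered = order_values("x", "y", domains, lambda u, v: u - 1 == v)
print(ordered)  # the unsupported value 2 is tried last
```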
4.2.2.2 Search Techniques
Many strategies are followed to find the solution through the search tree. Most are
complete methods that use a branch and bound technique to solve combinatorial
optimisation problems. These methods obtain the optimal solution for many NP-hard
problems. However, some techniques used individually are time consuming and need
huge memory capacities to solve large-scale problems. Different search techniques
and strategies are applied to solve the two sugarcane rail transport system
formulation models, MIP and CP. The different search techniques such as Best First
Search Technique (BFS), Depth-First Search Strategy (DFS), Slice Based Search
(SBS), Limited Discrepancy Search (LDS), Depth-bound Discrepancy Search
(DDS), and Interleaved Depth First Search (IDFS) are integrated with the two main
search strategies, Standard Search Strategy (SSS) and Dichotomic Search Strategy
(DSS), to improve the accuracy and CPU time for the techniques’ solutions.
BFS- Best First Search Technique
Best First Search is a well-known technique for optimization problems and consists
of choosing the node with the best value (the relaxation) of an expression that is,
typically, the objective function. That is, the Best First Search maintains all the
nodes still needing to be explored and chosen. The next node to expand is that which
has the best lower bound. The selected node is then expanded into another set of
nodes that are inserted in the set of unexplored nodes in replacement of the selected
node. For the Best First Search to be effective, it is important to have a precise lower
bound, where the distance between the lower bound of a node and the best solution
obtained from that node, should be as small as possible.
The initial upper-bound on the makespan, in the job shop scheduling problem, is the
sum of the duration of all activities. If a solution is found with a lower makespan, the
new bound is added at the leaf node (terminated node). The search then continues as
if the solution state was actually a failure. Sierra and Varela (2008) used Best First
Search to reduce the search domain in solving the job shop scheduling problem with
total flow time. The evaluation function in these techniques, namely f(s), is used
to estimate the makespan of any generated solution path that extends the current
path to a node s.
The best first search technique starts with the root node, which is placed in the
open list (the list of nodes that have been generated but not yet expanded). It
then chooses the node with the best (minimum) makespan value in the open list for
expansion. This node is removed from the open list, its children are generated, and
the parent is placed in the closed list. A new child is then chosen for expansion.
The closed and open lists are checked and updated to add the best node and remove
the worst, depending on the makespan value.
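The open-list mechanism can be sketched with a priority queue keyed by the lower bound. The toy tree and the bound values below are hypothetical; a real lower bound would come from a relaxation of the scheduling problem.

```python
import heapq

# Best first search over a toy tree: always expand the open node with
# the smallest lower bound on the makespan.
def best_first_search(root, children, bound, is_leaf):
    open_list = [(bound(root), root)]
    while open_list:
        b, node = heapq.heappop(open_list)   # best (minimum) bound first
        if is_leaf(node):
            return node, b
        for child in children(node):
            heapq.heappush(open_list, (bound(child), child))
    return None

# Hypothetical search tree with lower bounds on the makespan.
tree = {"root": ["a", "b"], "a": ["a1", "a2"], "b": ["b1"]}
bounds = {"root": 0, "a": 5, "b": 4, "a1": 7, "a2": 6, "b1": 8}
leaf, makespan = best_first_search("root",
                                   lambda n: tree.get(n, []),
                                   lambda n: bounds[n],
                                   lambda n: n not in tree)
print(leaf, makespan)  # a2 6
```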
DFS-Depth-First Search Strategy
The strategy followed for the depth-first search is, as its name implies, to search
“deeper” in the graph whenever possible. Depth-first search (DFS) starts at the root
node and proceeds by descending to its first descendant, and continues this process
to reach a leaf. The process requires backtracking to the parent of the leaf and
descending to its next descendant. This process continues until the root node is
revisited after all nodes or its descendants have been visited (Tsin, 2002).
Figure 4.1 shows that the order of visited nodes starts from the root A and visits in
order B, C, and then leaf D. Then DFS comes back to the parents C and B to visit E
and F consecutively. It then comes back to the root A to visit G and H. From node H,
it moves to visit I and J, and returns to node I to visit K and then node H to visit L.
Figure 4.1: The order of nodes that will be expanded by DFS strategy
The time and space analysis of DFS differs according to its application area. In
theoretical computer science, DFS is typically used to traverse an entire graph, and
takes time O (|V| + |E|), linear in the size of the graph. DFS may experience non-
termination when the length of a path in the search tree is infinite. Therefore, the
search is only performed to a limited depth, and due to limited memory availability,
one typically does not use data structures that keep track of the set of all previously
visited vertices. In this case, the time is still linear in the number of expanded
vertices and edges (Her and Ramakrishna 2007).
DFS lends itself efficiently to heuristic methods of choosing a likely-looking branch.
When an appropriate depth limit is not known, a priori, an iterative deepening depth-
first search applies DFS repeatedly with a sequence of increasing limits. In the
artificial intelligence mode of analysis, with a branching factor greater than one,
iterative deepening increases the running time by only a constant factor over the case
in which the correct depth limit is known. This increase is due to the geometric
growth of the number of nodes per level.
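Iterative deepening can be sketched as repeated depth-limited DFS. The tree below is assembled from the traversal described for Figure 4.1 (an assumption, since the figure itself is not reproduced here).

```python
# Iterative deepening DFS: apply depth-limited DFS with increasing
# limits until a goal node is found.
def depth_limited(node, children, is_goal, limit):
    if is_goal(node):
        return node
    if limit == 0:
        return None
    for child in children(node):
        found = depth_limited(child, children, is_goal, limit - 1)
        if found is not None:
            return found
    return None

def iddfs(root, children, is_goal, max_depth=10):
    for limit in range(max_depth + 1):
        found = depth_limited(root, children, is_goal, limit)
        if found is not None:
            return found, limit
    return None

# Tree following the Figure 4.1 walkthrough: A-B-C-D-E-F-G-H-I-J-K-L.
tree = {"A": ["B", "G", "H"], "B": ["C", "F"], "C": ["D", "E"],
        "H": ["I", "L"], "I": ["J", "K"]}
found, depth = iddfs("A", lambda n: tree.get(n, []), lambda n: n == "K")
print(found, depth)  # K 3
```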
SBS-Slice Based Search
The SBS technique is effective for job shop problems and depends on the existence
of a good heuristic to obtain a solution. The method starts the search with a small
number of allowed discrepancies (choices during the search process that do not
follow the heuristic) and increases that number during the exploration of the
search tree until a solution is found or the whole search tree has been explored.
LDS-Limited Discrepancy Search
The LDS is one of the complete search techniques that can provide an optimal
solution for many problems. This technique depends on building the search tree
with good heuristics (Harvey & Ginsberg, 1995). This technique supposes that the
first leaf visited is likely to be a solution. If not, then it is likely that the number of
mistakes along the path from the root to this leaf is small. The next leaf nodes that
have paths from the root and differ only in one choice from the initial path, will then
be visited. This process continues by visiting the leaves that have a higher
discrepancy than other leaves from the initial path as described in Example 4.6.
Example 4.6
Suppose that v0 is a node with ordered descendants v1, v2, v3, …, vk. The
discrepancy of vj is the discrepancy of v0 plus j−1 for j = 1, 2, …, k, and the
discrepancy of the root node is 0. The search process in the LDS technique
starts at the root node, where the level of discrepancy is d = 0, and proceeds to
its first descendant v1 whose discrepancy is not higher than d. This process
continues until a leaf is reached. The search then backtracks to the parent of the
leaf and descends to its next existing descendant whose discrepancy is not higher
than d. This process continues back to the root node when all descendants with
discrepancy not higher than d have been visited. Then set d = d + 1 and repeat
this process until the search is back at the root node and all descendants
have been visited.
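The iteration over discrepancy levels can be sketched as follows. The binary tree and goal leaf are hypothetical, and the probe counts exact discrepancies (so each leaf is visited only once across iterations, the refinement known as ILDS).

```python
# Limited discrepancy search: at iteration d, visit the leaves reachable
# with exactly d deviations from the heuristic's first choice.
def lds_probe(node, children, is_goal, discrepancies):
    kids = children(node)
    if not kids:
        return node if (is_goal(node) and discrepancies == 0) else None
    # First child = heuristic choice (0 discrepancies); later children cost 1.
    for i, child in enumerate(kids):
        cost = 0 if i == 0 else 1
        if discrepancies - cost >= 0:
            found = lds_probe(child, children, is_goal, discrepancies - cost)
            if found is not None:
                return found
    return None

def lds(root, children, is_goal, max_d=5):
    for d in range(max_d + 1):
        found = lds_probe(root, children, is_goal, d)
        if found is not None:
            return found, d
    return None

tree = {"r": ["l", "m"], "l": ["l1", "l2"], "m": ["m1", "m2"]}
found, d = lds("r", lambda n: tree.get(n, []), lambda n: n == "m1")
print(found, d)  # m1 1: reached with one discrepancy at the root
```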
Figure 4.2 shows limited discrepancy search and the order of the visited nodes
(Hoeve, 2005). The black node of the search tree indicates the root; visited
nodes are red nodes; and red arcs indicate the parts of the tree that are newly
traversed.
a. discrepancy 0 b. discrepancy 1
c. discrepancy 2 d. discrepancy 3
Figure 4.2: Limited discrepancy search
This strategy implies that, for a discrepancy d > 0, LDS also visits leaves with a
discrepancy less than d. Leaves therefore are visited many times. Hence, if the
discrepancy of a node v is d, and the length of a path from v to a leaf is l, then the
descendants with a discrepancy between d-l and d will be considered. This revisiting
can be avoided by keeping track of the remaining depth to be searched. This
technique is called improved limited discrepancy search (ILDS). Note that DFS and
ILDS are complete, non-redundant search techniques: they visit each path from the
root to a leaf exactly once. The LDS technique is outside the scope of this
research because it is time consuming.
DDS-Depth-bound Discrepancy Search
The DDS technique supposes that the heuristic used to build the search tree is more
likely to make a mistake near the root node (Walsh, 1997; Beck & Perron, 2000).
DDS therefore allows discrepancies only above a certain depth-bound. At iteration
i, the depth-bound is i−1; below this depth, the search may only descend to
descendants that do not increase the discrepancy.
IDFS-Interleaved Depth First Search
The interleaved depth first search strategy is based on DFS but prevents DFS from
committing to early mistakes. DFS sometimes visits bad nodes and, searching
sequentially from left to right, cannot reach other promising nodes. IDFS instead
searches in parallel among several subtrees rooted at a tree level called active.
It searches each subtree down to a leaf. If this leaf is the goal, the search
process terminates. Otherwise, the state of the current subtree is recorded and
added to a list of active subtrees, to be resumed later at the suspended point.
IDFS selects another subtree and repeats the previous process. Parallel DFS on
active subtrees is simulated by IDFS using a single processor. IDFS avoids
completely falling into mistakes by distributing the search among the subtrees,
which are expected to include some good subtrees. For optimisation problems,
particularly minimisation, constraint satisfaction problems handle the associated
objective function using the two strategies of Standard Strategy (SSS) and
Dichotomic Search Strategy (DSS).
SSS-Standard Search Strategy
Standard Strategy (SSS) is uncomplicated and solves the optimisation problem as a
constraint satisfaction problem. The main steps of this strategy are firstly to ignore
the objective function and look for a feasible solution. The objective function is
evaluated at the first feasible solution point, and an upper bound is obtained.
The second step is to add a constraint to the original constraints so that the next
feasible solution obtained is better than (or, if multiple solutions are wanted, at
least equal to) the objective value of the current solution.
DSS-Dichotomic Search Strategy
The DSS strategy is useful if a lower bound (W) of the problem is known in
advance. The main steps are firstly to evaluate the objective function value V at a
feasible solution and compute the mid value Z given by (W + V)/2. The set of
constraints of the original problem is then augmented with the constraint
f(x1, x2, …, xn) < Z, where f(x1, x2, …, xn) is the objective function, and the CP
problem is solved as a satisfaction problem, ignoring the objective. Any feasible
solution found is then better than the current one, so the value of V is updated
and the strategy continues with a new mid value Z. If the problem is infeasible,
the lower bound W (and hence the mid value Z) is updated, and the search strategy
is executed as above.
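The DSS loop can be sketched as a binary search on the objective value. Here `solve_with_cap` stands in for a real CP solve under the extra constraint f(x) < Z, and the toy feasible set is hypothetical; the round-up of Z is an implementation detail of the sketch that guarantees the interval always shrinks.

```python
# Dichotomic search on the objective: repeatedly solve the problem as a
# satisfaction problem under f(x) < Z, with Z midway between the lower
# bound W and the best known value V.
def dichotomic_search(solve_with_cap, W, V):
    best = V
    while W < best:
        Z = (W + best + 1) // 2        # round up so the interval always shrinks
        solution = solve_with_cap(Z)   # feasible solution with objective < Z?
        if solution is not None:
            best = solution            # tighter upper bound V
        else:
            W = Z                      # infeasible: the optimum is at least Z
    return best

# Toy "solver": returns the best feasible objective value strictly below Z.
def solve_with_cap(Z):
    feasible = [21, 17, 13]            # objective values of feasible solutions
    below = [v for v in feasible if v < Z]
    return min(below) if below else None

print(dichotomic_search(solve_with_cap, 0, 30))  # converges to the optimum, 13
```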
4.2.2.3 Global Constraint
The ability to express and reason with sophisticated constraints is a very important
feature of constraint programming. Example 4.7 shows the logic behind the global
constraint.
Example 4.7
Z1 ∈ [1, 3] (1), Z2 ∈ [1, 3] (2), Z3 ∈ [1, 3] (3),
Z4 ∈ [1, 3] (4), and Zi ≠ Zj for i ≠ j (5)
This problem has no solution, since there are only three values for
four variables. However, the domains are not reduced by constraint
propagation on each constraint separately. In fact, for a given
constraint Zi ≠ Zj and each value of Zi (say 1), a value always exists
in the domain of the other variable Zj (say 3) that satisfies the
constraint.
The infeasibility is missed when each elementary constraint Zi ≠ Zj is treated
separately; a global analysis, by contrast, immediately shows that only three
values are available for four variables. This motivates the notion of a global
constraint, in which reduction techniques are applied to n-ary constraints as a
whole. The set of inequality constraints in Example 4.7 can be replaced by a
global constraint of the form alldiff(Z1, Z2, Z3, Z4). Many techniques have been
proposed to handle global constraints, such as a graph-theoretical approach that
computes the arc consistency domain reductions for the alldiff constraint in
O (n2.5). A bounds consistency algorithm proposed by Puget (1998) runs in
O (n log n) and is very important in scheduling problems, especially job shop and
flow shop problems.
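The global reasoning that pairwise propagation misses in Example 4.7 can be sketched with a Hall-condition check: if some set of k variables shares a combined domain of fewer than k values, no solution exists. The exhaustive subset enumeration below is an illustrative assumption; practical alldiff propagators use bipartite matching instead of enumerating subsets.

```python
from itertools import combinations

# Global alldiff reasoning via Hall's condition: if some set of k
# variables has a combined domain with fewer than k values, the alldiff
# constraint is infeasible. Pairwise Zi != Zj propagation cannot see this.
def alldiff_feasible(domains):
    variables = list(domains)
    for k in range(1, len(variables) + 1):
        for subset in combinations(variables, k):
            union = set().union(*(domains[v] for v in subset))
            if len(union) < len(subset):
                return False           # Hall condition violated
    return True

# Example 4.7: four variables over the three values {1, 2, 3}.
domains = {"Z1": {1, 2, 3}, "Z2": {1, 2, 3},
           "Z3": {1, 2, 3}, "Z4": {1, 2, 3}}
print(alldiff_feasible(domains))  # False: four variables, three values
```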
The complementary algorithms in Section 4.3 are developed to integrate with the
solution techniques in Section 4.2 and to resolve issues that arise in implementing
the two models, CP and MIP, of the sugarcane rail transport system. Some of these
algorithms are developed using the MIP formulation but can easily be applied to
the CP model.
4.3 Proposed Algorithms
The proposed algorithms are developed to solve many issues in the sugarcane rail
transport system models. Collecting and Delivering Conflict Elimination algorithms
are developed in this research to optimise the collecting and delivering operations.
This first type of proposed algorithm includes the Terminal Segment Conflict
Elimination (TSCE) and Intermediate Segment Conflict Elimination (ISCE)
algorithms. The second type of proposed algorithm is for resolving railway
conflicts and includes the Segment Blocking Determination (SBD) and Rail Conflict
Elimination (RCE) algorithms. The last type consists of Computing Acceleration
(CA) algorithms for reducing computing time.
Although the proposed algorithms are developed for the segment blocking models of
the sugarcane rail system, all of them can be adapted easily to the blocking
section models.
4.3.1 Collecting and Delivering Conflict Elimination
The main purpose of the sugarcane rail transport system is to satisfy the sidings’ and
mill’s requirements regarding empty and full bins respectively. So, optimising the
delivering and collecting operations is important for achieving this purpose at lowest
cost. Some assumptions are made when considering the sugarcane rail models and
the proposed algorithms.
The sugarcane rail transport system models assume that there are two operations, o
and o´, to be implemented on each section during train run r in the outbound and the
inbound directions respectively. The train outbound operations are assigned for
delivering empty bins to the sidings without any collecting, while the train inbound
operations are assigned for collecting without any delivering. No delivering or
collecting occurs if the section is a passing section that does not work as a
siding. This manner of operating can reduce the total train operating cost and
keep the quality of the cane high by reducing the cane age (the time between cane
harvesting at the sidings and crop crushing at the mill). Additionally, for safety
reasons, trains are not allowed to have a mix of empty and full bins in tow.
Optimising collection and delivery of bins depends on the location of each siding in
the railway network, where some sidings are located in terminal segments and others
in intermediate segments. Distinguishing between the outbound direction and
inbound direction on each siding can help in assigning the delivery of empty bins to
the outbound operation on each siding and the collection of full bins to the inbound
operation at the same siding. So, some proposed algorithms can establish relations
between the outbound and inbound operations on each siding to enforce each train to
either deliver or collect (or to do nothing in the case of passing sections). In terminal
segments, the outbound operation o´ at segment e is less than the inbound operation
o at segment e (o´ < o), while, in the intermediate segment, the scenario is different
because the outbound operation o´ and the inbound operation o will be in different
segments. As a result, we cannot ensure the operation o´ will be less than the
operation o or obtain direct relations between them because this depends on the
location of the siding in the segment. The terminal segment conflict and
intermediate segment conflict are solved using two proposed algorithms through
studying all conflict cases of delivering and collecting operations.
4.3.1.1 Terminal Segment Conflict Elimination
The main advantage of the terminal segment is all train operations can be executed
continuously without any interruption. This means, depending on the model
assumptions, there is one segment to implement the outbound and inbound
operations. The different possible locations of the siding in a terminal segment are
studied using the next three cases.
Case 1: The Siding at the Beginning of Terminal Segment
Figure 4.3 shows that segment A is a terminal segment that includes 3 sections,
s1, s2 and s3, on which 6 operations will be executed continuously. Section s1
works as a siding and is located at the beginning of segment A. The outbound
operation is o´ = 1 and the inbound operation is o = 6, and 1 < 6. As a result, the
location of the siding at the beginning of the segment does not affect the
inequality o´ < o, where the outbound operation is less than the inbound operation.
Figure 4.3: Siding at the beginning of terminal segment
Case 2: The Siding at the End of Terminal Segment
Figure 4.4 shows that section s3 works as a siding and is located at the end of the
terminal segment A. The outbound operation is o´ = 3 and the inbound operation is
o= 4, and 3< 4. As a result, the inequality o´ < o is still satisfied.
Figure 4.4: Siding at the end of terminal segment
Case 3: The Siding in the Middle of Terminal Segment
Section s2 works as a siding and is located in the middle of the terminal segment A (as shown in Figure 4.5). The outbound operation is o′ = 2 and the inbound operation is o = 5, so 2 < 5. As a result, the inequality o′ < o is still satisfied.
Figure 4.5: Siding in the middle of terminal segment
Cases 1, 2 and 3 show that changing the siding location within a terminal segment does not affect the relation between the outbound and inbound operations: the inequality o′ < o is always satisfied. The Terminal Segment Conflict Elimination (TSCE) algorithm is therefore proposed to solve the delivering and collecting conflicts at sidings in terminal segments, as illustrated in Figure 4.6.
Terminal Segment Conflict Elimination (TSCE) Algorithm
Select train k; k ∈ K
Select segment e ∈ E                           // step 2
Select section s; s ∈ S
If s is a siding then
    o′ and o are the two operations to be executed at siding s; o′ ≠ o
    If o′ < o then                             // outbound direction
        α^{kr}_{o′se} ≤ p_k
        B^{kr}_{o′se} = 0
        α^{kr} = α^{kr} − α^{kr}_{o′se}
        go to step 2
    Else                                       // inbound direction
        B^{kr}_{ose} ≤ f_k
        α^{kr}_{ose} = 0
        B^{kr} = B^{kr} − B^{kr}_{ose}
        go to step 2
    End if
Else
    α^{kr}_{o′se} = 0
    B^{kr}_{o′se} = 0
    go to step 2
End if;
End.
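The operation numbering that underlies the TSCE rule can be sketched in a few lines of ordinary code. This is an illustrative sketch only, not the thesis implementation; the function name and data layout are our assumptions. It numbers the operations of a terminal segment consecutively (outbound, then inbound in reverse order) and, for each siding, classifies the smaller-indexed operation as the delivery and the larger as the collection:

```python
# Illustrative sketch of the TSCE rule (hypothetical data layout, not the
# thesis code): in a terminal segment the outbound operation index o' at a
# siding is always less than the inbound index o, so the smaller index is
# the delivery and the larger the collection.

def tsce_classify(sections, sidings):
    """sections: ordered section ids of one terminal segment.
    sidings: set of section ids that act as sidings.
    Returns {siding: {'deliver_op': o', 'collect_op': o}}."""
    n = len(sections)
    # Operations 1..n run outbound; operations n+1..2n run inbound,
    # traversing the sections in reverse order.
    outbound = {s: i + 1 for i, s in enumerate(sections)}
    inbound = {s: 2 * n - i for i, s in enumerate(sections)}
    result = {}
    for s in sections:
        if s in sidings:
            o_prime, o = outbound[s], inbound[s]
            assert o_prime < o          # always holds in a terminal segment
            result[s] = {"deliver_op": o_prime, "collect_op": o}
    return result

# Case 1 of the text: segment A = [s1, s2, s3], siding s1 -> operations (1, 6).
print(tsce_classify(["s1", "s2", "s3"], {"s1"}))
# -> {'s1': {'deliver_op': 1, 'collect_op': 6}}
```

Running the same sketch with s3 or s2 as the siding reproduces Cases 2 and 3 above, operations (3, 4) and (2, 5) respectively.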
4.3.1.2 Intermediate Segment Conflict Elimination
Optimising the delivering and collecting operations at each siding located on an intermediate segment requires establishing a relationship between the two operations o and o′. This relation depends on two scenarios. In the first, the train visits the intermediate segment and then returns to the mill after delivering or collecting; the intermediate segment then behaves like a terminal segment and the TSCE algorithm can be used to solve any conflict. In the second, the train visits the intermediate segment and then visits other segments; in this case a new algorithm is required to solve the delivering and collecting conflicts. To develop a solution for intermediate segment conflicts, the different siding locations in the intermediate segment are studied using the next three cases, which show that the inequality o′ < o is not satisfied in all intermediate segment cases.
Case 1: The Siding at the Beginning of an Intermediate Segment
Figure 4.7 shows an intermediate segment comprising two segments, A and B, for the outbound and inbound directions respectively. Section s1 works as a siding at the beginning of segment A and at the end of segment B. The outbound operation o′ on segment A is o′ = 1, while the inbound operation o on segment B is o = 3. As a result, o′ < o is satisfied.
Figure 4.6: Terminal segment conflict elimination algorithm
Figure 4.7: Siding at the beginning of intermediate segment
Case 2: The Siding at the End of an Intermediate Segment
Figure 4.8 shows that the siding is located at the end of segment A and the beginning of segment B. The outbound operation o′ on segment A is o′ = 3, while the inbound operation o on segment B is o = 1. As a result, o′ > o and the TSCE algorithm cannot be applied to this case; another algorithm is required to solve conflicts on this type of intermediate segment.
Figure 4.8: Siding at the end of intermediate segment
Case 3: The Siding in the Middle of an Intermediate Segment
In this case, section s2 works as a siding and o′ = o, with o′ = 2 and o = 2. As a result, we cannot easily distinguish which operation is the delivering one and which is the collecting one (as shown in Figure 4.9).
Figure 4.9: Siding in the middle of intermediate segment
The previous discussion shows that a new algorithm is needed to cover all possible cases for the intermediate segment; it can be integrated with the TSCE algorithm to handle all delivering and collecting cases.
Intermediate Segment Conflicts Elimination (ISCE) Algorithm
In all three intermediate segment cases above, the outbound segment index is less than or equal to the inbound segment index, because the outbound segment A is visited before the inbound segment B. The ISCE algorithm therefore distinguishes between outbound and inbound operations using the index of the segment rather than the index of the operation.
The ISCE algorithm supposes that e′ and e are the two segments, in the outbound and inbound directions respectively, that represent an intermediate segment, where e′ ≤ e. The algorithm assigns the operation on segment e′ to delivering and the operation on segment e to collecting. The ISCE algorithm is described below:
Select train k; k ∈ K                          // step 1
Select section s; s ∈ S                        // step 2
If s is a siding located in two different segments e′ and e
(from the assumptions of the model) then
    If e′ ≤ e then                             // outbound direction
        α^{kr}_{o′se′} ≤ p_k
        B^{kr}_{o′se′} = 0
        α^{kr} = α^{kr} − α^{kr}_{o′se′}
        go to step 2
    Else                                       // inbound direction
        B^{kr}_{ose} ≤ f_k
        α^{kr}_{ose} = 0
        B^{kr} = B^{kr} − B^{kr}_{ose}
        go to step 2
    End if
Else
    α^{kr}_{o′se′} = 0
    B^{kr}_{o′se′} = 0
    go to step 2
End if;
End.
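The ISCE classification can also be sketched in ordinary code. As with the TSCE sketch, the function name and data layout are our assumptions, not the thesis implementation; the point is that the segment index, not the operation index, decides which operation delivers:

```python
# Illustrative sketch of the ISCE rule (hypothetical data layout): a siding
# on an intermediate segment belongs to two segment indices, e' (outbound)
# and e (inbound) with e' <= e. The operation on e' is the delivery and the
# operation on e is the collection, regardless of the operation numbers.

def isce_classify(siding_segments):
    """siding_segments: {siding: (e_outbound, e_inbound)}.
    Returns {siding: {'deliver_on': e', 'collect_on': e}}."""
    result = {}
    for siding, (e_out, e_in) in siding_segments.items():
        assert e_out <= e_in, "outbound segment index must not exceed inbound"
        result[siding] = {"deliver_on": e_out, "collect_on": e_in}
    return result

# Case 2 of the text: siding s3 is shared by segments A (index 1) and
# B (index 2). The operation numbers alone (o' = 3 > o = 1) would
# misclassify it; the segment indices do not.
print(isce_classify({"s3": (1, 2)}))
# -> {'s3': {'deliver_on': 1, 'collect_on': 2}}
```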
4.3.2 Algorithms for Solving Train Conflicts
As mentioned previously, the rail network includes two main types of segments,
terminal segments and intermediate segments. For solving any conflicts through the
rail networks, blocking segment cases are taken into consideration using two
algorithms. The two developed algorithms are integrated to solve all types of segment conflicts throughout the rail network by applying blocking constraints and heuristics such as the Shortest Processing Time (SPT) rule to improve the solutions. The first algorithm, the Segment Blocking Determination (SBD) algorithm, recognises the segments that require blocking; the second, the Rail Conflicts Elimination (RCE) algorithm, applies the blocking constraints and resolves the passing of trains. Blocking segment types are described using some cases and scenarios in Chapter 3.
The two algorithms are integrated to solve all types of segment conflicts and to apply the blocking constraints correctly: the SBD algorithm selects the segments that need to be blocked, and the RCE algorithm applies the blocking constraints and solves the segment conflicts.
Segment Blocking Determination (SBD) Algorithm
This algorithm distinguishes the segment types and identifies which segments require the blocking constraints. Mathematically, each intermediate segment is modelled as two different segments, so the SBD algorithm is used to distinguish intermediate segments from the other segments in the rail network. SBD uses the section indices of a segment for this purpose: if a section is included in two different segments, those segments form one intermediate segment, and the blocking constraints can be applied to both its outbound and inbound parts.
Select trains k1 and k2; k1, k2 ∈ K
Select two segments e′ and e, where k1 uses e′ and k2 requires e
Select section s; s ∈ S
If e′ ≠ e then
    If s ∈ e′ (s is the last section in e′) then
        If s ∈ e (s is the first section in e) then
            e′ and e form one intermediate segment; they are physically the same
            Apply the Rail Conflict Elimination (RCE) algorithm
        Else
            e′ and e are physically different
            Do not apply the RCE algorithm
        End if
    End if
Else
    There is one segment, a terminal segment;
    Apply the blocking constraints and the RCE algorithm
End if;
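The SBD test reduces to a small predicate. The sketch below is illustrative only (the function name and data layout are assumptions, not the thesis code); it checks whether two segment identifiers describe one physical intermediate segment by looking for a shared boundary section:

```python
# Illustrative sketch of the SBD test (hypothetical data layout): two
# segment ids e' and e are the same physical intermediate segment when the
# last section of e' is the first section of e; only then must the blocking
# constraints of the RCE algorithm be applied to the pair.

def needs_blocking(e_prime_sections, e_sections, e_prime_id, e_id):
    """Each *_sections argument is the ordered section list of a segment."""
    if e_prime_id == e_id:
        return True                     # one terminal segment: always block
    # Intermediate segment: the two directions share a boundary section.
    return e_prime_sections[-1] == e_sections[0]

# Outbound leg [s1, s2, s3] and inbound leg [s3, s2, s1] share section s3,
# so they form one intermediate segment and RCE must be applied.
print(needs_blocking(["s1", "s2", "s3"], ["s3", "s2", "s1"], 1, 2))
# -> True
```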
The SBD algorithm is integrated with the Rail Conflict Elimination (RCE) algorithm to solve any railway conflicts throughout the rail network.
Rail Conflict Elimination (RCE) Algorithm
The RCE algorithm is developed to solve a track segment conflict throughout the rail network. Each segment has many sections but no passing loops inside the segment, so the RCE algorithm works on solving any conflicts within that segment. A heuristic technique, the Shortest Processing Time (SPT) rule, is used inside this algorithm to reduce the waiting time of trains before they enter that segment. Oliveira (2001) used the SPT technique to resolve a section conflict. In Figure 4.10, k and k′ ∈ {1, ..., K} are two trains, where k travels in the outbound direction and k′ travels in the inbound direction; e ∈ {1, ..., E} is the segment that includes the conflict point, and {s1, s2, s3, s4, s5} are sections of segment e. In Figure 4.10, the start time of train k′ is delayed to resolve the conflict on segment e.
Figure 4.10: A rail conflict case
Let o and o′ be two operations of trains k and k′ on section s of segment e. Let g^{kr}_{ose} and g^{k′r}_{o′se} be the processing times, and t^{kr}_{ose} and t^{k′r}_{o′se} the start times, of o and o′ respectively.
Begin
If there is a conflict (e, d), where d ∈ [0, Total system time] then
    Select the segment e for which the conflict (e, d) occurs earliest.
    Calculate Σ_{o=1}^{O} Σ_{s=1}^{S} g^{kr}_{ose} q^{kr}_{ose} and
    Σ_{o=1}^{O} Σ_{s=1}^{S} g^{k′r}_{o′se} q^{k′r}_{o′se}, the total processing
    times of k and k′ respectively on segment e.
    Apply the shortest processing time rule to all trains on each conflicted segment:
    If t^{kr}_{ose} q^{kr}_{ose} + Σ_{o=1}^{O} Σ_{s=1}^{S} g^{kr}_{ose} q^{kr}_{ose} − t^{k′r}_{o′se} q^{k′r}_{o′se}
       < t^{k′r}_{o′se} q^{k′r}_{o′se} + Σ_{o=1}^{O} Σ_{s=1}^{S} g^{k′r}_{o′se} q^{k′r}_{o′se} − t^{kr}_{ose} q^{kr}_{ose} then
        Operation o of train k is scheduled first.
    Else
        Operation o′ of train k′ is scheduled first.
    End if.
    If the selected operation cannot be executed in the correct order then
        Backtrack and choose another operation.
    End if
End if
End.
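The SPT tie-break at the heart of RCE can be isolated as a small function. This is a sketch under our own naming and data-layout assumptions, not the thesis code; it compares the delay each ordering imposes on the other train and picks the smaller:

```python
# Illustrative sketch of the SPT tie-break inside RCE (hypothetical data
# layout, not the thesis code): when trains k and k' conflict on segment e,
# schedule first the train whose choice delays the other one less, i.e.
# compare (completion of k - start of k') with (completion of k' - start of k).

def spt_first(start_k, proc_k, start_k2, proc_k2):
    """start_*: earliest start times on the segment; proc_*: total
    processing times over the segment's operations. Returns 'k' or 'k2'."""
    delay_if_k_first = start_k + proc_k - start_k2    # wait imposed on k'
    delay_if_k2_first = start_k2 + proc_k2 - start_k  # wait imposed on k
    return "k" if delay_if_k_first < delay_if_k2_first else "k2"

# Train k could start at t=0 and needs 300 s on the segment; train k' could
# start at t=100 and needs 500 s: letting k go first imposes less delay.
print(spt_first(0, 300, 100, 500))   # -> 'k'
```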
The RCE algorithm can be adapted to solve a section conflict in the blocking section models, as shown in Figure 4.11, where train k′ is delayed to solve the conflict on s1.
Figure 4.11: A section conflict case using a delayed train technique
The conflict point can also be resolved using another technique, shown in Figure 4.12, where the speed of train k is reduced.
Figure 4.12: A section conflict case using a slow train technique
4.3.3 Computing Acceleration (CA) Algorithms
Two algorithms, the Segment Elimination and Section Elimination algorithms, are discussed in this section. The main objective of the first algorithm is to reduce the total number of segments of the rail network; the second reduces the total number of sections. These algorithms work together to simplify the problem and reduce the computing time.
Segment Elimination Algorithm
In this algorithm, the segments not in use during the day's operations are removed from the complete list of segments. Lk is the list of segments visited by train k; L′k is the list of non-visited segments.
Select train k; k ∈ K                          // step 1
Lk = Ø
L′k = E                                        // initially, all segments are non-visited for every train
Select segment e ∈ E                           // step 2
If q^{kr}_{ose} = 1 then
    Lk = Lk + 1                                // increase the number of visited segments
    L′k = L′k − 1                              // decrease the number of non-visited segments
    go to step 2
Else
    go to step 2
End if
Set E^k_new = E − L′k
go to step 1
End.
Section Elimination Algorithm
The Section Elimination Algorithm reduces the number of sections in the model. Each eliminated segment includes some sections that are eliminated with it, but some unused sections remain within the used segments. This algorithm eliminates those unused sections to further simplify the model. The reduced segment list from the Segment Elimination Algorithm is the input to the Section Elimination Algorithm. By changing the domains of segments and sections in the model, the total calculation time is substantially reduced. Vk is the list of sections visited by train k; V′k is the list of unused sections.
Select train k; k ∈ K                          // step 1
Vk = Ø
V′k = S^k_new                                  // all sections of the reduced segment list
                                               // from the Segment Elimination Algorithm
Select section s ∈ S on segment e, where e ∈ E^k_new   // step 2
If q^{kr}_{ose} = 1 then
    Vk = Vk + 1                                // increase the number of visited sections
    V′k = V′k − 1                              // decrease the number of non-visited sections
    go to step 2
Else
    go to step 2
End if
Vk = S^k_new − V′k
go to step 1
End.
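The combined effect of the two Computing Acceleration reductions can be sketched in a few lines. The sketch is ours (hypothetical function name and data layout, not the thesis code): it first drops the segments a train never visits, then drops the unused sections inside the remaining segments, which is what shrinks the variable domains of the CP and MIP models:

```python
# Illustrative sketch of the two Computing Acceleration reductions
# (hypothetical data layout): drop the segments a train never visits
# (segment elimination), then drop the unused sections of the remaining
# segments (section elimination), so the model only ranges over what the
# day's runs actually use.

def reduce_domains(all_segments, visited):
    """all_segments: {segment: [sections]}.
    visited: {segment: set(sections actually used)} for one train.
    Returns the reduced {segment: [sections]} domain."""
    reduced = {}
    for e, sections in all_segments.items():
        used = visited.get(e, set())
        if not used:
            continue                    # segment elimination: never visited
        # Section elimination: keep only the sections used within e.
        reduced[e] = [s for s in sections if s in used]
    return reduced

network = {"A": ["s1", "s2", "s3"], "B": ["s4", "s5"], "C": ["s6"]}
runs = {"A": {"s1", "s2"}}              # the train only visits part of A
print(reduce_domains(network, runs))    # -> {'A': ['s1', 's2']}
```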
4.4 Conclusion
This chapter described several techniques to solve the sugarcane rail transport problem. The search techniques used to solve the MIP and CP models integrate the MIP model with constraint programming search techniques and can produce good solutions in reasonable time. The algorithms proposed in this chapter are used to solve all types of conflicts throughout the rail network and to reduce the CPU time of the CP and MIP model code. The heuristic and metaheuristic techniques applied to sugarcane rail transport systems, which solve large-scale problems in reasonable time, are described in Chapter 6.
Chapter 5
Computational Results of CP and MIP Models
Chapter Outline
5.1 Introduction.................................................................................................................... 143
5.2 A Case Study for Testing Blocking Segment CP and MIP Models...............................143
5.2.1 Input Data........................................................................................................144
5.2.2 Results of Makespan Minimisation Objective................................................146
5.2.2.1 Constraint Programming Model Results......................................... 146
5.2.2.1.1 Standard Constraint Programming Results.....................146
5.2.2.1.2 Results of Computing Acceleration (CA) Algorithms. ...149
5.2.2.1.3 Train and Runs Scheduling for CP Model.......................151
5.2.2.1.4 Solutions Analysis by Search Tree..................................152
5.2.2.2 MIP Model Results...........................................................................156
5.2.2.2.1 Result of Standard MIP Model......................................156
5.2.2.2.2 Results of Computing Acceleration (CA)......................158
5.2.2.2.3 Train Scheduling Results...............................................160
5.2.2.3 Comparisons of CP and MIP Models using Makespan Criterion...161
5.2.3 Results of Total Waiting Time Minimisation Objective ................................163
5.2.3.1 Constraint Programming (CP) Model .............................................163
5.2.3.1.1 Results of Standard CP...................................................163
5.2.3.1.2 Results of Computing Acceleration (CA) Algorithms...164
5.2.3.1.3 Train Scheduling Results................................................165
5.2.3.2 Mixed Integer Programming (MIP) Model .....................................166
5.2.3.2.1 Result of Standard MIP Model.......................................166
5.2.3.2.2 Results of Computing Acceleration (CA) Algorithms...167
5.2.3.2.3 Train Scheduling Results................................................168
5.2.3.3 Comparisons of CP and MIP Models of Total Waiting Time
Criterion.............................................................................................169
5.3 A large Scale Case Study for Testing Blocking Segment
CP and MIP Models…………………………………………………….…….……….171
5.3.1 Results of Makespan Minimisation Objective...............................................175
5.3.2 Results of Total Waiting Time Minimisation Objective..................................177
5.4 Sensitivity Analysis of Blocking Segment MIP Sugarcane Rail Model.........................178
5.4.1 Small Rail Systems.........................................................................................179
5.4.2 Large Rail Systems.........................................................................................184
5.5 Blocking Section MIP and CP Results...........................................................................186
5.5.1 The comparisons between the Blocking Segment and Sections Models........187
5.6 The Results of Inclusion of the Delivery and Collection Time Constraints ………......189
5.7 Conclusion......................................................................................................................193
Publications Arising from Chapter 5
Masoud, M., Kozan, E., & Kent, G. (2010c). A comprehensive approach for
scheduling single track railways. The Annual Conference on Statistics,
Computer Sciences and Operations Research, Egypt, Cairo, 45,19-30.
Masoud, M., Kozan, E., & Kent, G. (under review). A new approach to
automatically producing schedules for cane railways. Australian Society of
Sugar Cane Technologies.
5.1 Introduction
The CP and MIP models with blocking segment and section constraints are solved using OPL and CPLEX software respectively, where the OPL solver uses constraint programming search techniques together with the ILOG Scheduler integrated in OPL. SSS and DSS were integrated with the different search techniques to obtain solutions for the two models. The algorithms proposed in Chapter 4 are included and used to solve the models. Results were obtained before and after applying the Computing Acceleration (CA) algorithms, which simplify the models. Small and large instances of the Kalamia Mill system are used as case studies for investigating the different solution techniques. Makespan and total waiting time are used as objective functions. The results of including the delivery and collection time constraints are also presented in this chapter.
5.2 A Case Study for Testing Blocking Segment CP and MIP Models
A case study is examined to demonstrate and validate the CP and MIP models developed in this research. A sector of the transport system at Kalamia Mill, near Ayr, south of Townsville, was used to obtain the optimal completion time of all train runs per day. Figure 5.1 shows the distances between sidings, the siding capacities and the siding allotments.
Figure 5.1: A realistic test case study (small sector of rail network of Kalamia Mill)
The case study involves 26 sections, 15 sidings, 11 segments and 5 trains. More
details are given in Tables 5.1, 5.2 and 5.3 for this case. All algorithms are included
and used to solve the models.
5.2.1 Input Data
The three types of data used to identify the parameters of the sugarcane rail transport system are rail data, harvest data and train data.
Rail Data
Table 5.1 provides the distances between sidings and the sectional running times in seconds, which are used as input data. In this table, each section between two sidings, two passing loops, a siding and a junction point, or two junction points is given an index number.

Table 5.1: The distances and the sectional running times of the sections
Section  From           To              Distance (km)  Running time (s)
1        Mill           JN-35           0              0
2        JN-35          JN-40           0.48           58
3        JN-40          HEA             0.32           39
4        JN-40          JN-39           1.59           192
5        JN-39          JN-41           4.22           508
6        JN-41          Gainsford 2     0.27           33
7        JN-41          Gainsford 4     2.03           245
8        JN-39          Kal Plains      0.98           118
9        Kal Plains     JN-38           1.82           219
10       JN-38          Brandon 1       0.19           23
11       JN-38          Brandon 3       0.94           113
12       Brandon 3      Brandon 4       1.60           193
13       JN-35          Chiverston 2    1.68           202
14       Chiverston 2   Chiv Terminus   2.40           289
15       Mill           Town PTS J2     0.71           86
16       Town PTS J2    JN-37           0.66           80
17       JN-37          Lillesmers      0.36           43
18       JN-37          Town 3          1.53           184
19       Town 3         Town Terminus   3.13           377
20       Town PTS J2    Mainline 1A     1.11           134
21       Mainline 1A    Mainline 1      0.44           53
22       Mainline 1     Mainline 3      2.19           264
23       Mainline 3     JN-33           1.07           129
24       JN-33          Mainline 4A     0.32           39
25       Mainline 4A    Mainline 4B     0.01           1
26       JN-33          Central PTS J8  0.13           16
Harvest Data
The harvest data, such as siding capacities and siding allotments, are shown in Table 5.2. Each bin has a capacity of 6 tonnes. Most sidings have a capacity less than their allotment, which means that those sidings need more than one run per day, by one train or by more than one train.
Table 5.2: Siding capacity and siding allotments
Siding section  Siding          Capacity (bins)  Allotment (tonnes)  Allotment (bins)
3               Hea             120              -                   -
6               Gainsford 2     140              1044                174
7               Gainsford 4     160              -                   -
8               Kal Plains      162              -                   -
10              Brandon 1       136              -                   -
11              Brandon 3       252              -                   -
12              Brandon 4       110              -                   -
13              Chiverston 2    128              -                   -
14              Chiv Terminus   114              676                 114
17              Lillesmers      132              -                   -
18              Town 3          110              -                   -
19              Town Terminus   120              -                   -
20              Mainline 1A     130              540                 90
21              Main_Line 1     62               -                   -
22              Main_Line 3     148              780                 130
24              Mainline 4A     100              -                   -
Total                                            3040                506

Train Data
The number of trains, the train names, the maximum train capacities and the speeds are shown in Table 5.3. The train capacity comprises the maximum number of empty bins and the maximum number of full bins carried by the train.

Table 5.3: Train number, capacity and speed
Train index  Train        Max empties (bins)  Max fulls (bins)  Average speed (km/h)
1            Barratta     120                 90                30
2            Rita_Island  120                 124               30
3            Jarvisield   120                 124               30
4            Norham       120                 110               30
5            Kilrie       120                 124               30
* 1 bin = 6 tonnes; km/h: kilometres per hour
5.2.2 Results of Makespan Minimisation Objective
Minimising the makespan in the sugarcane rail transport system reduces the total
operating cost and helps implement all train runs in a specific time. The makespan
minimisation objective is achieved using the constraint programming and mixed
integer programming models. The makespan results of the two models are compared
in Section 5.2.2.3.
5.2.2.1 Constraint Programming Model Results
The main objective of the constraint programming model is to minimise the
makespan. The two cases of the constraint programming model are the standard
constraint programming and the integration of constraint programming model and
Computing Acceleration Algorithms.
5.2.2.1.1 Standard Constraint Programming Results
Standard Constraint Programming means using the original constraint programming
without using the computing acceleration algorithms. Table 5.4 shows results of SSS
for the CP model before applying the algorithms of Computing Acceleration (CA).
The number of variables and the number of constraints are reduced because the CP
formulation combines some constraints in one constraint as seen in the CP
formulation model. Table 5.4 shows the standard CP model results obtained by integrating the SSS strategy with the other search techniques before applying the algorithms. Makespan and CPU time are given in seconds. The numbers of variables and constraints in the tested case are not small, so obtaining solutions is not easy. The choice points column shows the number of explored points that can be branched to search for solutions, while the failure points column shows the number of points that stop and cannot be extended in the search tree. The solver memory, in kilobytes (kb), shows the PC memory that the solution code needs in order to solve the problem. Table 5.4 illustrates that SSS works well with all search techniques except BFS in the CP formulation. The DFS technique gives results in a shorter CPU time than the other search techniques.
Table 5.4 also shows the results of integrating DSS with the other search techniques for the CP model before applying the algorithms. All techniques using the DSS strategy gave reasonable results in a shorter time than with the SSS strategy. The number of choice points under DSS is 9.8% lower than under SSS, while the number of failure points drops sharply, by 99.9% relative to SSS. DFS achieved a good CPU time with both strategies, while best first search gave a shorter time with DSS but did not give any result with SSS. As a result, the DFS technique is more suitable than the other techniques for solving the CP formulation model.
Table 5.4: SSS and DSS results for standard CP model to optimise makespan
(Variables: 30253; Constraints: 380150; makespan and CPU time in seconds; solver memory in kb)

Search   SSS: Choice  Failure  Makespan  CPU     Memory   DSS: Choice  Failure  Makespan  CPU    Memory
DFS      14021        14018    2664      100.68  136098   12657        10       2664      17.81  135716
SBS      14021        14018    2664      101.34  136508   12657        10       2664      18.06  136122
DDS      14021        14018    2664      102.61  136508   12657        10       2664      19.31  136122
BFS      n/a          n/a      n/a       n/a     n/a      12681        12       2664      11.41  136098
IDFS     13657        13655    2664      106.64  136613   12657        10       2664      18.16  136122
SSS requires a longer time than DSS with each of the search techniques. Additionally, the BFS search technique does not work with the SSS strategy, while all search techniques work well with DSS. DFS provides better results in a reasonable time with both strategies than the other techniques.
5.2.2.1.2 Results of Computing Acceleration (CA) Algorithms
Computing Acceleration (CA) algorithms are applied to the CP model to improve the CPU time of the solutions. The two algorithms reduced the number of variables of the tested case by 92.5% relative to the standard model, which in turn reduced the number of constraints by 96.1%. As a result, the number of choice points is reduced by more than 91%, and the solver memory by around 96%. Table 5.5 shows the results of integrating SSS with all search techniques for solving the CP model after applying the CA algorithms. All search techniques give an optimal solution in good CPU time, except the BFS technique, which does not work with SSS.
The DSS results with the search techniques are also shown in Table 5.5. The DSS strategy can be applied with all search techniques and obtains the same solution in approximately the same CPU time. The Table 5.5 results show that the number of failure points under DSS is around 99% lower than under SSS, while the number of choice points is about 8% lower.
The two strategies reach the same solution and are close to each other in CPU time. The performance of some techniques is promising, while others may either require extensive time or be unsuitable for this model with SSS, such as the BFS technique. Figure 5.2 compares SSS and DSS before and after applying the CA algorithms. These algorithms have a significant effect in reducing the CPU time. Additionally, the DSS strategy works better than the SSS strategy in most cases across the different search techniques.
Table 5.5: SSS and DSS results for CA algorithms to optimise makespan
(Variables: 2253; Constraints: 14706; makespan and CPU time in seconds; solver memory in kb)

Search   SSS: Choice  Failure  Makespan  CPU   Memory  DSS: Choice  Failure  Makespan  CPU   Memory
DFS      2229         2226     2664      0.30  4433    2035         10       2664      0.13  4401
SBS      2229         2226     2664      0.28  4466    2035         10       2664      0.14  4433
DDS      2229         2226     2664      0.30  4466    2035         10       2664      0.13  4433
BFS      n/a          n/a      n/a       n/a   n/a     1021         12       2664      0.11  4433
IDFS     2135         2133     2664      0.53  4502    2035         10       2664      0.14  4433
Figure 5.2: CPU time of SSS and DSS for the standard and CA of the CP using makespan
5.2.2.1.3 Train and Runs Scheduling for CP Model
Table 5.6 shows the exact start and finish times of the train runs and the assignment of runs to trains. Trains 1, 3 and 4 start at time 0 and are scheduled first, second and third respectively, although all three have the same priority to start. Train 5 starts fourth and train 2 starts fifth.
Table 5.6: Start and finish times of train runs for CP model to optimise makespan
Train  Run  Siding  Visit time (s)  Empties  Fulls  Run time (s)  Start (s)  Finish (s)
1      1    6       791             120      120    2414          0          1582
2      5    6       1873            54       54     1582          250        2664
3      2    14      491             112      112    982           0          982
4      3    20      220             90       90     1890          0          1976
            22      537             10       20
5      4    22      1439            120      110    1074          86         2240
Total  5    -       -               506      506    7942          0          2664
All trains have to deliver the full allotment of bins to each siding and collect the
same number of full bins from each siding. Table 5.6 also shows the actual visit
times for all trains at each siding, the number of sidings to be visited, the number of
empty and full bins to be delivered to or collected from each siding and train run
times. Train run time is defined as the time a train spends between starting the run at
the mill and returning to the mill after visiting the sidings.
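The bookkeeping behind Table 5.6 can be cross-checked in a few lines. The values below are copied from the table (train 4's two siding visits are combined into one row); the variable names are ours:

```python
# Cross-checking Table 5.6 (values taken from the table; names are ours):
# the makespan is the latest finish time over all runs, and the totals of
# empty and full bins must equal the 506 bins of the siding allotments.

runs = [  # (train, empties, fulls, run_time, start, finish)
    (1, 120, 120, 2414, 0, 1582),
    (2, 54, 54, 1582, 250, 2664),
    (3, 112, 112, 982, 0, 982),
    (4, 100, 110, 1890, 0, 1976),   # 90+10 empties, 90+20 fulls over two sidings
    (5, 120, 110, 1074, 86, 2240),
]
makespan = max(finish for *_, finish in runs)
total_empties = sum(r[1] for r in runs)
total_fulls = sum(r[2] for r in runs)
print(makespan, total_empties, total_fulls)   # -> 2664 506 506
```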
Figure 5.3 shows the waiting time of trains while implementing their runs. A distinction has been made between intermediate segments and terminal segments (defined graphically in Figure 5.1). Intermediate segments include operations in both the inbound and outbound directions, but the inbound operations may be implemented after visiting other segments; as a result, two segment numbers have been assigned to these segments. On terminal segments there can be no operations in other segments between the outbound and inbound operations, so only one segment number is assigned.
Figure 5.3: Scheduling 5 trains on 11 segments using CP with the makespan of 2664
The visit times in Table 5.6 are not optimised as real visit times because many variables, such as harvester start times and harvester rates, have not yet been considered in the CP model. For that reason, Section 5.6 introduces the optimisation of delivery and collection operations in the sugarcane rail transport system including these variables.
5.2.2.1.4 Solutions Analysis by Search Tree
Having applied the CP search techniques to the MIP and CP models, this section explains how OPL uses the search tree to obtain the case study solutions and how the tree is built by the branching techniques. The case study was tested using the CP model, and four solutions were obtained using DFS with SSS, the fourth being optimal. Table 5.7 shows all solutions found before applying the CA algorithms and the CPU time spent to obtain each. The first solution was obtained in 9.48 seconds, while the optimal solution was obtained after about 100 seconds.
Table 5.7: The standard CP model solutions before obtaining the optimal solution of makespan
Solution      Variables  Constraints  Choice points  Failure points  CPU time (s)  Makespan (s)  Solver memory (kb)
First sol.    30253      380150       4304           3731            9.48          3356          135531
Second sol.   30253      380150       8334           7260            11.02         3106          135575
Third sol.    30253      380150       11388          11359           98.7          2914          135985
Optimal sol.  30253      380150       14021          14018           100.68        2664          136098
The search tree uses coloured nodes to express the node types inside the OPL software: red nodes are failures, green nodes are solutions, blue nodes are explored choice points, white nodes are created internally by ILOG Solver and are still unexplored, and black nodes are pruned points that appear only when using the LDS method. Figure 5.4 shows the stages of obtaining all solutions using the search tree and the SSS strategy. Each stage includes four sectors of the search tree of each solution.
Stage 1: Discovering the first solution: the makespan of the first solution (3356) is found within 9.48 seconds, with 4304 choice points and 3731 failure points.
a. Started discovering the nodes of tree b. More discovered nodes to reach first solution
c. Sector of search tree of first solution d. First solution is discovered
154
Stage 2: Discovering the second solution: the makespan of the second solution (3106) is obtained after discovering around 8334 nodes, of which 7260 are failure nodes, in 11.02 seconds.
e. Started discovering the second solution f. More discovered nodes to reach second solution
g. Sector of search tree of second solution h. Second solution is discovered
Stage 3: Discovering the third solution: the makespan decreases to 2914 in 98.7 seconds, after 11388 nodes are checked, of which 11359 were failure points.
i. Started discovering the third solution j. More discovered nodes to reach third solution
k. Sector of search tree of third solution l. Third solution is discovered
Final stage: The fourth solution: The fourth solution is optimal, with a makespan of
2664 found in a CPU time of 100.68 seconds, after 14021 discovered nodes of which
14018 were failure points.
m. Started discovering the fourth solution n. More discovered nodes to reach fourth solution
o. Sector of search tree of fourth solution p. Fourth solution is discovered
Figure 5.4: Solutions analysis by search tree
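The improving-solution behaviour recorded in Table 5.7 is essentially depth-first branch and bound: each feasible schedule found becomes the new incumbent, and any branch that cannot beat it becomes a failure point. A schematic Python sketch of this idea (not the OPL implementation; the toy tree and its one non-improving leaf value, 3400, are illustrative, while the improving leaves reproduce the makespan sequence of Table 5.7):

```python
def dfs_with_incumbent(root, children, makespan):
    """Depth-first search keeping the best (smallest) makespan found so far."""
    incumbent = float("inf")
    improving = []                 # the 'green' solution nodes, in order found
    stack = [root]
    while stack:
        node = stack.pop()
        kids = children(node)
        if not kids:               # leaf: a complete candidate schedule
            m = makespan(node)
            if m < incumbent:      # an improving solution becomes the new bound
                incumbent = m
                improving.append(m)
        else:                      # choice point: push children, leftmost first
            stack.extend(reversed(kids))
    return improving

# Toy tree whose leaf makespans mimic the solver's visits in Table 5.7.
tree = {"root": [3356, 3400, 3106, 2914, 2664]}
children = lambda n: tree.get(n, [])
makespan = lambda leaf: leaf
print(dfs_with_incumbent("root", children, makespan))  # [3356, 3106, 2914, 2664]
```

The real solver additionally prunes whole subtrees against the incumbent bound, which is why the failure-point counts in Table 5.7 grow with each stage.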
Conclusion on CP
The sugarcane railway operations are very complex and involve a large number of
variables. The proposed scheduling model is complicated but needs to be solved
in a reasonable time because of the dynamic nature of the system. The CP model
solves the sugarcane rail transport problem well using the integration of the
search techniques and the two search strategies. The BFS technique does not give a
solution with the standard search strategy. The integration of DSS with the
different search techniques requires a shorter CPU time than SSS with the same
techniques; DSS is therefore better than SSS for solving the CP model.
5.2.2.2 MIP Model Results
5.2.2.2.1 Result of Standard MIP Model
The case study using the standard MIP model contains 26 sections, 11 segments, a
maximum of 10 operations for each segment, and 15 sidings. The total number of
trains is five and each train has one run to execute the tasks of delivering and
collecting the bins. Before applying the Computing Acceleration algorithms, the
number of variables is 32231 and the total number of constraints is 412407.
Table 5.8 shows the SSS strategy results with different search techniques to solve the
standard MIP model. The DFS technique works well with the SSS strategy and can
provide a solution in a reasonable time. The IDFS on the other hand is time
consuming and takes up to 9.5 hours (34422.09 seconds) to find a solution. SBS and
DDS techniques do not work with the SSS strategy and require a larger memory
capacity than DFS and BFS.
The DSS strategy is integrated with other search techniques in Table 5.8. All search
techniques work well with DSS and reach the optimal solution in a reasonable time.
The number of choice points and failure points is reduced, which makes the CPU
time shorter than SSS. Many nodes are ignored in the search tree, which means that
these nodes cannot provide better solutions than the current solutions.
The DFS results are more stable than other search techniques with SSS or DSS
strategies, where DFS can solve the proposed model and provide solutions in a
reasonable time. IDFS is time-consuming using SSS, even though it provides a
solution in a reasonable time using DSS.
Table 5.8: SSS and DSS results for standard MIP model to optimise makespan
(All rows: 32231 variables, 412407 constraints.)

                SSS                                                       DSS
Search    Choice    Failure    Makespan  CPU       Solver        Choice   Failure  Makespan  CPU       Solver
technique points    points     (s)       time (s)  memory (kb)   points   points   (s)       time (s)  memory (kb)
DFS       59033     59030      2664      43.86     147734        30645    14       2664      26.09     147162
SBS       n/a       n/a        n/a       n/a       n/a           30645    14       2664      26.58     147653
DDS       n/a       n/a        n/a       n/a       n/a           30645    14       2664      26.86     147653
BFS       19429     31718      2664      116.81    151240        19423    14       2664      68.45     151212
IDFS      1033363   1055888    2664      34422.09  369480        30645    14       2664      25.67     147653
5.2.2.2.2 Results of Computing Acceleration (CA)
After applying Computing Acceleration (CA) algorithms (Segment and Section
Elimination Algorithms) to the model, the total number of sections is reduced from
26 to 11, the total number of segments from 11 to 3, the total number of operations
from 10 to 6 and the total number of sidings from 15 to 6. The number of trains and
runs has not changed. After applying Computing Acceleration (CA) Algorithms, the
total number of variables reduced from 32231 to 2611, where the reduction ratio is
around 91.8%. The total number of constraints reduced from 412407 to 15239,
where the reduction ratio is around 96%.
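The reduction ratios quoted above follow directly from the variable and constraint counts; a quick check in Python:

```python
# Reduction achieved by the CA algorithms on the standard MIP model.
variables_before, variables_after = 32231, 2611
constraints_before, constraints_after = 412407, 15239

var_reduction = 1 - variables_after / variables_before        # ~0.919
con_reduction = 1 - constraints_after / constraints_before    # ~0.963

print(f"variables:   {var_reduction:.1%}")    # close to the ~91.8% quoted
print(f"constraints: {con_reduction:.1%}")    # close to the ~96% quoted
```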
Table 5.9 shows the integration results of the search techniques and the SSS strategy
after applying Computing Acceleration (CA). SSS works well in the small cases
(after applying the CA algorithms), where it provides a faster solution with DFS and BFS
than before the algorithms were applied. The solution time using IDFS reduced
sharply, from 9.5 hours in the standard model to 0.52 seconds after applying the
computing acceleration algorithms. The SBS and DDS techniques still do not work with the SSS strategy.
The solver memory capacity reduced further after applying the CA algorithms compared with
the standard model.
All search techniques work well with DSS to solve the proposed model as shown in
Table 5.9. CA algorithms with DSS reduce the number of choice and failure points
more than SSS. The solutions using DSS require the Solver memory capacity to be
less than SSS. CPU time of DSS and SSS with all search techniques is less than one
second. Applying BFS using SSS or DSS minimises the number of choice and
failure points more than by applying other search techniques. As a result, BFS CPU
time is shorter than other search techniques with the two strategies.
Table 5.9: SSS and DSS results for CA algorithms for MIP to optimise makespan
(All rows: 2611 variables, 15239 constraints.)

                SSS                                              DSS
Search    Choice  Failure  Makespan  CPU       Solver       Choice  Failure  Makespan  CPU       Solver
technique points  points   (s)       time (s)  memory (kb)  points  points   (s)       time (s)  memory (kb)
DFS       4020    4018     2664      0.16      5057         2950    15       2664      0.19      4988
SBS       n/a     n/a      n/a       n/a       n/a          2950    15       2664      0.17      5037
DDS       n/a     n/a      n/a       n/a       n/a          2950    15       2664      0.19      5037
BFS       1494    1542     2664      0.11      5057         1488    14       2664      0.14      5049
IDFS      4614    4668     2664      0.52      5291         2950    15       2664      0.17      5037
Figure 5.5 shows the CPU time of the search techniques before and after applying
CA algorithms using SSS and DSS strategies. These graphs clearly identify that the
CPU time is reduced sharply after applying the algorithms and that SBS and DDS
are not applicable for the SSS strategies. IDFS results of standard MIP require
excessive time. The BFS technique requires the longest CPU time before the
algorithms are applied using DSS strategy.
Figure 5.5: CPU time of standard and CA algorithms for MIP to optimise makespan
5.2.2.2.3 Train Scheduling Results
Table 5.10 shows the start and finish times of train runs and the trains to which they
were assigned. Trains 2 and 3 start at time zero. Trains 1 and 5 are scheduled third
and fourth and start at time 250. Train 4 is the last to start and commences its run at
time 336. Figure 5.1 highlights three different rail routes from the mill, which means
up to three trains can start simultaneously.
Table 5.10: Start and finish times of train runs for MIP to optimise makespan
Train  Run    Siding  Visit  Empty  Full  Run    Run start  Run finish
index  index  number  time   bins   bins  time   time (s)   time (s)
1      3      6       1873   54     54    2414   250        2664
2      1      6       791    120    120   1582   0          1582
3      2      14      491    112    112   982    0          982
4      5      22      1783   100    96    2250   336        2586
5      4      20      470    90     90    2250   250        2500
              22      787    30     34
Total  5      -       -      506    506   9478              2664
All trains have to deliver the full allotment of empty bins to each siding and collect
the same number of full bins from each siding. Table 5.10 shows the sidings visited
during each run, the visit time, the number of empty bins to be delivered and the
number of full bins to be collected from each siding as well as train run times. Train
run time is defined as the time that a train spends between starting the run at the mill
and returning to the mill after visiting the sidings. Figure 5.6 shows the detail of the
train run schedule as shown in Table 5.10.
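The figures in Table 5.10 are internally consistent: each run time equals the run finish time minus the run start time, the empty and full bin columns both sum to the 506 bins reported in the Total row, and the makespan is the latest finish time. A short Python sketch checking this with the table's numbers:

```python
# Consistency check on Table 5.10.
runs = [  # (train, run, run_time, start, finish)
    (1, 3, 2414, 250, 2664),
    (2, 1, 1582, 0, 1582),
    (3, 2, 982, 0, 982),
    (4, 5, 2250, 336, 2586),
    (5, 4, 2250, 250, 2500),
]
for train, run, run_time, start, finish in runs:
    assert finish - start == run_time       # run time = finish - start

empty_bins = [54, 120, 112, 100, 90, 30]    # per siding visit, Table 5.10
full_bins  = [54, 120, 112, 96, 90, 34]
makespan = max(finish for *_, finish in runs)
print(sum(empty_bins), sum(full_bins), makespan)  # 506 506 2664
```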
Figure 5.6: Scheduling 5 trains on 11 segments using MIP with the makespan of 2664
5.2.2.3 Comparisons of CP and MIP Models using Makespan Criterion
Makespan results illustrate that DSS with the search techniques is faster than the
standard search strategy for both the MIP and CP models. Additionally, applying the
CA algorithms helped to obtain faster solutions while retaining the optimal model
solution. This section compares the MIP and CP models with the SSS and DSS
strategies before and after applying the CA algorithms. Table 5.11 illustrates the
results of SSS and DSS for the standard and CA-algorithm versions of the MIP and
CP models.
With DSS, all search techniques solve the standard CP model in a shorter time than
the standard MIP model. With SSS, the standard CP model requires a longer time
than the standard MIP model. After applying the CA algorithms, the CP model
requires a shorter time than the MIP model with all search techniques integrated
with DSS, while the MIP model requires a shorter time than the CP model using
SSS; however, MIP does not work with two search techniques, SBS and DDS.
Table 5.11: Makespan results of standard MIP and CP and CA algorithms using SSS and DSS
(The makespan is 2664 s for every applicable combination; entries are CPU times in seconds.)

                 Standard MIP        Standard CP         CA algorithms (MIP)  CA algorithms (CP)
Search technique SSS       DSS       SSS       DSS       SSS       DSS        SSS       DSS
DFS              43.86     26.09     100.68    17.81     0.16      0.19       0.30      0.13
SBS              n/a       26.58     101.34    18.06     n/a       0.17       0.28      0.14
DDS              n/a       26.86     102.61    19.31     n/a       0.19       0.30      0.13
BFS              116.81    68.45     n/a       11.41     0.11      0.14       n/a       0.11
IDFS             34422.09  25.67     106.64    18.16     0.52      0.17       0.53      0.14
5.2.3 Results of Total Waiting Time Minimisation Objective
The two mathematical models (CP and MIP) are solved using the total waiting time
as an objective function. The solution techniques include the branch and bound
algorithm, simplex algorithm and constraint-based domain reduction. These
techniques are integrated to obtain good solutions using OPL and CPLEX software.
Additionally, the integration of mixed integer programming and constraint
programming search techniques can occur in OPL using Minimize with linear
relaxation. The case study involves 26 sections, 15 sidings, 11 segments and 5 trains.
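The two objectives optimised in this chapter differ in what they penalise: the makespan is the latest run finish time, whereas the total waiting time sums the delay each run accumulates beyond its unimpeded running time. A minimal sketch with hypothetical numbers (illustrative data, not from the case study):

```python
# Illustrative comparison of the two objectives on a toy two-run schedule.
# Each run has an unimpeded running time plus actual start/finish times;
# waiting time is the slack between actual duration and running time.
runs = [
    {"running_time": 900, "start": 0,   "finish": 1000},   # waited 100 s
    {"running_time": 800, "start": 200, "finish": 1100},   # waited 100 s
]
makespan = max(r["finish"] for r in runs)
total_waiting = sum((r["finish"] - r["start"]) - r["running_time"] for r in runs)
print(makespan, total_waiting)  # 1100 200
```

Minimising one objective need not minimise the other, which is why the train schedules in Sections 5.2.2 and 5.2.3 differ.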
5.2.3.1 Constraint Programming CP Model
5.2.3.1.1 Results of Standard CP
The standard CP model under the total waiting time criterion includes 30254
variables and 381539 constraints. This section examines the integration of the
different search techniques with the two search strategies, SSS and DSS, in solving
the proposed model before applying the CA algorithms. Table 5.12 shows that SSS
and DSS give the same value of the total waiting time with all search techniques
except BFS, which is not applicable with either strategy and needs too much time
even to solve a small case.
Table 5.12: SSS and DSS results for Standard CP to optimise the total waiting time
                SSS                                                DSS
Search    Choice  Failure  Waiting   CPU       Solver        Choice  Failure  Waiting   CPU       Solver
technique points  points   time (s)  time (s)  memory (kb)   points  points   time (s)  time (s)  memory (kb)
DFS       12657   0        1984      707       979848        12657   0        1984      699       979848
SBS       12657   0        1984      717       980266        12657   0        1984      700       980179
DDS       12657   0        1984      698       980274        12657   0        1984      699       980270
BFS       n/a     n/a      n/a       n/a       n/a           n/a     n/a      n/a       n/a       n/a
IDFS      12657   0        1984      715       980242        12657   0        1984      699       980258
Generally, SSS with the search techniques spends longer than DSS to produce the
solutions, except for DDS, which requires slightly longer with DSS than with SSS.
5.2.3.1.2 Results of Computing Acceleration (CA) Algorithms
After applying Computing Acceleration (CA) algorithms to the CP model, the total
number of sections reduced from 26 to 11, the total number of segments from 11 to
3, the total number of operations from 10 to 6 and the total number of sidings from
15 to 6. The number of trains and runs was not changed. After applying the CA
algorithms, the total number of variables reduced from 30253 to 2254 and the total
number of constraints from 380150 to 14823.
Table 5.13 shows a comparison of results for all search techniques and the two
search strategies after applying the CA algorithms. This table shows that SSS and
DSS work well in the small cases with all search techniques except BFS, which is
not applicable and requires too much time. After applying the CA algorithms, SSS
with the search techniques requires a slightly shorter CPU time than DSS; however,
the CPU times of the two strategies are close to each other. Both SSS and DSS
obtain a good solution in a reasonable time after applying the CA algorithms.
Table 5.13: Solution of CA algorithms for CP to optimise the total waiting time
                SSS                                                     DSS
Search    Choice  Failure  Total waiting  CPU       Solver        Choice  Failure  Total waiting  CPU       Solver
technique points  points   time (s)       time (s)  memory (kb)   points  points   time (s)       time (s)  memory (kb)
DFS       997     0        1984           1.17      15032         997     0        1984           1.20      15028
SBS       997     0        1984           1.19      15127         997     0        1984           1.20      15060
DDS       997     0        1984           1.22      15131         997     0        1984           1.19      15060
BFS       n/a     n/a      n/a            n/a       n/a           n/a     n/a      n/a            n/a       n/a
IDFS      997     0        1984           1.17      15064         997     0        1984           1.19      15064
Comparisons of the standard model and the CA algorithms using the two strategies
to optimise the total waiting time are shown in Figure 5.7. The CPU time and solver
memory capacity are reduced sharply after applying the algorithms for both
strategies. The BFS search technique is not applicable with either strategy. The
choice points of the CA-algorithm model are fewer than in the standard model, and
the failure points are zero, which affected the search time positively.
Figure 5.7: CPU time of standard and CA algorithms for CP to optimise the total waiting time
5.2.3.1.3 Train Scheduling Results
Table 5.14 shows the start and finish times of train runs and the trains to which they
were assigned. Trains 2, 3 and 5 start at time zero. Train 4 is scheduled to be the
fourth and starts at time 86. Train 1 is the last to start and commences its run at time
250. Figure 5.8 highlights that because there are three different rail routes from the
mill, three trains can start at the same time, zero.
Table 5.14: Start and finish times of train runs to optimise the total waiting time
Train  Run    Siding  Visit  Empty  Full  Total run  Run start  Run finish
index  index  number  time   bins   bins  time       time (s)   time (s)
1      3      6       1840   120    90    2414       250        2664
2      1      6       758    54     84    1582       0          1582
3      2      14      202    112    112   982        0          982
4      5      20      988    90     -     1890       86         1976
              22      1175   10     110
5      4      20      854    -      90    1074       0          1074
              22      273    120    20
Total  5      -       -      506    506   9478                  2664

* s: second; kb: kilobyte

All trains have to deliver the full allotment of empty bins to each siding and collect the
same number of full bins from each siding. Table 5.14 shows the sidings visited
during each run, the visit time, the number of empty bins to be delivered and the
number of full bins to be collected from each siding, as well as train run times.
Figure 5.8 shows the detail of the train run schedule shown in Table 5.14.
Figure 5.8: CP for scheduling 5 trains on 11 segments with the total waiting time of 1984
5.2.3.2 Mixed Integer Programming (MIP) Model
5.2.3.2.1 Result of Standard MIP Model
The standard MIP model using the total waiting time criterion includes 32231
variables and 412407 constraints. Table 5.15 shows a comparison of results for all
search techniques and search strategies in solving the standard MIP model. SSS and
DSS provide the same value of the total waiting time with all search techniques
except SBS and DDS, which require too much time to solve even a small case using
either strategy. The CPU time results show that SSS requires a longer time than DSS
with DFS and IDFS, while SSS with BFS requires a shorter time than DSS with
BFS. The BFS technique works well with both search strategies and provides a
solution in a shorter time than the other search techniques. IDFS requires more than
one hour to obtain a solution. The solver memory used to obtain the solutions is
shown in Table 5.15.
Table 5.15: SSS and DSS results for standard MIP model to optimise the total waiting time
                SSS                                                DSS
Search    Choice  Failure  Waiting   CPU       Solver        Choice  Failure  Waiting   CPU       Solver
technique points  points   time (s)  time (s)  memory (kb)   points  points   time (s)  time (s)  memory (kb)
DFS       36017   33010    1984      1928      211414        36017   33010    1984      1919      211426
SBS       n/a     n/a      n/a       n/a       n/a           n/a     n/a      n/a       n/a       n/a
DDS       n/a     n/a      n/a       n/a       n/a           n/a     n/a      n/a       n/a       n/a
BFS       3028    3060     1984      1740      211237        3028    3060     1984      1744      211245
IDFS      12789   13273    1984      4502      211526        12789   13273    1984      4482      211534
5.2.3.2.2 Results of Computing Acceleration (CA) Algorithms
Computing Acceleration (CA) algorithms are applied to the MIP model to reduce the
total number of sections from 26 to 11, the total number of segments from 11 to 3,
the total number of operations from 10 to 6, and the total number of sidings from 15
to 6. Consequently, the total number of variables is reduced from 32231 to 2611 and
the total number of constraints from 412407 to 15239. The number of trains and
runs is not changed. The solver memory used to obtain the solutions is reduced from
690468 kb to 8070 kb.
Table 5.16 compares the results for all search techniques and the two search
strategies of CA algorithms. SSS and DSS work well with all search techniques in
the small cases, where both provide the same solution in nearly the same time.
Table 5.16: SSS and DSS results of CA algorithms for MIP to optimise the total waiting time
                SSS                                               DSS
Search    Choice  Failure  Waiting   CPU       Solver        Choice  Failure  Waiting   CPU       Solver
technique points  points   time (s)  time (s)  memory (kb)   points  points   time (s)  time (s)  memory (kb)
DFS       2093    2090     1984      2.72      8026          2093    2090     1984      2.70      8030
SBS       4200    4204     1984      3.45      8066          4200    4204     1984      3.45      8066
DDS       2176    2198     1984      3.66      8597          2176    2198     1984      3.63      8525
BFS       528     536      1984      2.44      8002          528     536      1984      2.42      8018
IDFS      579     616      1984      3.77      8018          579     616      1984      3.77      8022
The effect of applying the CA algorithms on CPU time for the two strategies is
shown in Figure 5.9, where the CPU time is reduced sharply. The SBS and DDS
techniques are not applicable for the standard model, whereas all techniques work
well after applying the CA algorithms.
Figure 5.9: CPU time of SSS and DSS for the standard and the CA algorithms of MIP to optimise the total waiting time
5.2.3.2.3 Train Scheduling Results
This section shows the train scheduling and the visit times of all sidings working on
that day. The train scheduling using the total waiting time objective is the same in
the MIP and CP models; therefore, only the MIP model results are presented in this
section. Table 5.17 shows that trains 2, 3 and 5 start at time zero. Train 4 is
scheduled to be the fourth to start, at time 86. Train 1 is the last to start and
commences its run at time 250.
Table 5.17: Start and finish times of train runs of MIP to optimise the total waiting time

Train  Run    Siding  Visit    Empty  Full  Total run  Run start  Run finish
index  index  number  time(s)  bins   bins  time(s)    time (s)   time (s)
1      3      6       1840     120    90    2414       250        2664
2      1      6       758      54     84    1582       0          1582
3      2      14      202      112    112   982        0          982
4      5      20      988      90     -     1890       86         1976
              22      1175     10     110
5      4      20      854      -      90    1074       0          1074
              22      273      120    20
Total  5      -       -        506    506   9478                  2664

* s: second
Figure 5.10 shows the segment index, which includes sections to be used during each
train run. The waiting time for each train is illustrated during each run. The two
directions of the trains are implemented on the terminal segments without any
interruption.
Figure 5.10: MIP for scheduling 5 trains on 11 segments with the total waiting time of 1984
Optimising the time table for the sugarcane rail transport system is introduced in
Section 5.5 and includes all variables to solve many problems.
5.2.3.3 Comparisons of CP and MIP Models of Total Waiting Time Criterion
The CP and MIP model results are compared in this section. Generally, using the
total waiting time as an objective function, the standard CP and standard MIP
models gave the same total waiting time value, while the standard CP model is
faster than the standard MIP model. There was no significant difference between
the CPU times of SSS and DSS for the CP and MIP models; both strategies, with
the different search techniques, run in nearly the same CPU time, as shown in
Table 5.18.
Two search techniques, SBS and DDS, are not applicable to the standard MIP
model, while for the standard CP model only the BFS search technique is not
applicable. IDFS for the standard MIP model takes more time than the other
techniques.
Table 5.18: Standard MIP and CP results using SSS and DSS to optimise the total waiting time

(The total waiting time is 1984 s for every applicable combination; entries are CPU times in seconds.)

                 MIP                 CP
Search technique SSS       DSS       SSS      DSS
DFS              1928      1918      707      699
SBS              n/a       n/a       717      700
DDS              n/a       n/a       698      699
BFS              1740      1744      n/a      n/a
IDFS             4502      4482      715      699

After applying the CA algorithms, the CPU time of the MIP model reduced sharply
to be very close to the CPU time of the CP model. Additionally, all search
techniques work well in the MIP model, while BFS still does not work in the CP
model. This is illustrated in Table 5.19.

Table 5.19: MIP and CP results using SSS and DSS for CA algorithms

(The total waiting time is 1984 s for every applicable combination; entries are CPU times in seconds.)

                 MIP                 CP
Search technique SSS       DSS       SSS      DSS
DFS              2.72      2.70      1.17     1.20
SBS              3.45      3.45      1.19     1.20
DDS              3.66      3.63      1.22     1.19
BFS              2.44      2.42      n/a      n/a
IDFS             3.77      3.77      1.17     1.19
5.3 A Large-Scale Case Study for Testing Blocking Segment CP and MIP Models
To further test the model, the previous case study was extended to include a larger
part of the Kalamia Mill rail network. The extended case study includes 51 sections,
42 sidings, 21 segments, 5 trains, 10 runs and 6217 tonnes as a total allotment for all
sidings. Figure 5.11 and Table 5.20 show the distances between sidings, the
allotment of each siding, and the siding capacities.
The previous results indicated that the CP model works well with small cases.
However, the MIP model has provided good results, especially with the DFS search
technique, in solving the makespan problem. The DFS search technique has
provided stable results with SSS and DSS. Integrating the MIP model and the
different CP search techniques obtains good solutions for this case. Computing
Acceleration (CA) algorithms are applied directly to reduce the CPU time: the total
number of sections is reduced to 33, the total number of sidings to 22, and the
number of segments to 10. The two strategies SSS and DSS are tested with the
different search techniques to solve the current case study. ILOG OPL software is
used to solve this case study of the sugarcane rail transport system.
Figure 5.11: Larger case study: bigger part of the rail network of Kalamia mill (annotated with siding capacities, distances between sidings, and siding allotments)
Rail Data
Table 5.20 gives more details about this case: the distances between the sidings and
the sectional running times. Siding capacities and harvest data are shown in Table 5.21.
Table 5.20: The distance between sidings in the extension case study
Section number  From              To                 Distance (km)  Sectional running time (s)
1               Mill              JN-35              0              0
2               JN-35             JN-40              0.48           58
3               JN-40             HEA                0.32           39
4               JN-40             JN-39              1.59           192
5               JN-39             JN-41              4.22           508
6               JN-41             Gainsford 2        0.27           33
7               JN-41             Gainsford 4        2.03           245
8               JN-39             Kal Plains         0.98           118
9               Kal Plains        JN-38              1.82           219
10              JN-38             Brandon1           0.19           23
11              JN-38             Brandon3           0.94           113
12              Brandon3          Brandon4           1.60           193
13              JN-35             Chiverston 2       1.68           202
14              Chiverston 2      Chiv terminus      2.40           289
15              Mill              Town Pts. J2       0.71           86
16              Town Pts. J2      JN37               0.66           80
17              JN37              Lillesmers         0.36           43
18              JN37              Town 3             1.53           184
19              Town 3            Town Terminus      3.13           377
20              Town Pts. J2      Mainline 1A        1.11           134
21              Mainline 1A       Mainline 1         0.44           53
22              Mainline 1        Mainline 3         2.19           264
23              Mainline 3        JN33               1.07           129
24              JN33              Mainline 4A        0.32           39
25              Mainline 4A       Mainline 4B        0.01           1
26              JN33              Central PTS J8     0.13           16
27              Central PTS J8    Central 1A         0.33           40
28              Central 1A        Central 1          1.40           169
29              Central 1         Central 2          1.06           128
30              Central 2         Central 3          1.01           122
31              Central PTS J8    Creek points       2.00           201
32              Creek points      Jarvisfield 2A     0.76           92
33              Jarvisfield 2A    Jarvisfield 2B     0.14           17
34              Jarvisfield 2B    Jarvisfield 3      1.75           211
35              Jarvisfield 3     Jarvisfield 6      1.39           167
36              Jarvisfield 6     JN_27              0.73           88
37              JN_27             Jarvisfield 8A     1.46           176
38              Jarvisfield 8A    JN-29              0.15           18
39              JN-29             Jarvisfield 8B     0.39           47
40              JN-29             Jarvisfield 8C     0.37           45
41              JN_27             J/Field term B     1.50           181
42              J/Field term B    J/Field term A     0.43           52
43              Creek points      Ivanhoe points     1.90           229
44              Ivanhoe points    Ivanhoe 2          1.25           151
45              Ivanhoe 2         Ivanhoe 3          1.24           149
46              Ivanhoe 3         Ivanhoe Terminus   0.99           119
47              Ivanhoe points    Norham 3           0.19           23
48              Norham 3          Norham 4           1.22           147
49              Norham 4          Norham Depot       1.94           234
50              Norham Depot      Rita Island 4      0.68           82
51              Rita Island 4     Rita Island PTS    0.67           81
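The sectional running times in Table 5.20 are consistent with the 30 km/h average train speed listed in Table 5.22, i.e. roughly 120 seconds per kilometre. This relationship is inferred from the data rather than stated in the text; the one-to-two-second differences suggest rounding up. A spot check of a few rows:

```python
# Sectional running time ~= distance [km] / (30 km/h) = distance * 120 s/km.
sections = [  # (distance_km, tabulated_seconds) from Table 5.20
    (0.48, 58), (0.32, 39), (1.59, 192), (4.22, 508), (2.03, 245),
]
for distance, tabulated in sections:
    estimate = distance * 3600 / 30      # seconds at 30 km/h
    # tabulated values exceed the estimate by at most ~2 s (rounding up)
    assert abs(estimate - tabulated) <= 2, (distance, tabulated, estimate)
```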
Harvest Data
The harvest data includes the siding capacity and the siding allotment. Other
parameters, such as the harvester rates and the harvester start times, are presented in
Section 5.5. Table 5.21 below details this data.

Table 5.21: Sidings capacity and allotments in the extension case study

Siding section  Siding           Capacity  Allotment  Allotment
number                           (bins)    (tonnes)   (bins)
3               Hea              120       -          -
6               Gainsford 2      140       1044       174
7               Gainsford 4      160       -          -
8               Kal Plains       162       -          -
10              Brandon1         136       -          -
11              Brandon3         252       -          -
12              Brandon4         110       -          -
13              Chiverston 2     128       -          -
14              Chiv terminus    114       676        114
17              Lillesmers       132       -          -
18              Town 3           110       -          -
19              Town Terminus    120       -          -
20              Mainline 1A      130       540        90
21              Main_Line 1      62        -          -
22              Main_Line 3      148       780        130
24              Mainline 4A      100       -          -
25              Mainline 4B      96        -          -
26              Central 1A       68        -          -
27              Central 1        150       -          -
28              Central 2        136       -          -
29              Central 3        154       585        98
30              Jarvisfield 2A   118       0          -
31              Jarvisfield 2B   118       0          -
32              Jarvisfield 3    146       0          -
33              Jarvisfield 6    124       0          -
34              Jarvisfield 8A   118       0          -
35              Jarvisfield 8B   124       300        50
36              Jarvisfield 8C   136       0          -
37              J/Field Term B   100       564        94
38              J/Field Term A   152       1050       175
39              Norham 3         126       0          -
40              Norham 4         124       0          -
41              Norham Depot     120       0          -
42              Rita Island 4    140       678        113
Total           -                -         6228       1038
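Summing the non-zero allotment rows of Table 5.21 gives 6217 tonnes in 1038 bins, which matches the 6217-tonne total quoted at the start of Section 5.3 and implies a nominal load of about six tonnes per bin (the Total row's 6228 differs slightly from the itemised sum in the source). A sketch of the check:

```python
# Allotment rows of Table 5.21 with non-zero tonnage: (tonnes, bins).
allotments = [
    (1044, 174), (676, 114), (540, 90), (780, 130), (585, 98),
    (300, 50), (564, 94), (1050, 175), (678, 113),
]
total_tonnes = sum(t for t, _ in allotments)
total_bins = sum(b for _, b in allotments)
print(total_tonnes, total_bins, total_tonnes / total_bins)  # 6217 1038, ~6 t/bin
```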
Train Data
Train data includes the train index, name and speed. The train capacity comprises
the maximum number of empty bins and the maximum number of full bins that the
train can haul. Table 5.22 details the train data.
Table 5.22: Train number, capacity and speed in the extension case study

Train  Train        Max empty  Max fulls  Average speed
index  name         (bins)     (bins)     (km/h)
1      Rita-Island  120        124        30
2      Jarvisfield  120        124        30
3      Kilrie       120        124        30

5.3.1 Results of Makespan Minimisation Objective
Table 5.23 shows the results of the DFS search technique and the SSS search
strategy. The makespan of the current case is 25205, obtained after 870.54 seconds,
and no better solution was reached after 24 hours. The other search techniques did
not provide any solutions during the first 24 hours. The number of variables in the
current case study is high even though the CA algorithms were applied. This means
that the sugarcane system is an expansive system that includes a large number of
variables used in the 829754 constraints of the MIP model.

Table 5.23: SSS results using makespan for the larger case study

Search     Nb. of     Nb. of       Nb. of choice  Nb. of failure  Makespan  CPU       Solver
technique  variables  constraints  points         points          (s)       time (s)  memory (kb)
DFS        65161      829754       26335          4186            25205     870.54    364797

Table 5.24 shows the results of the integration of DSS and the different search
techniques. An optimal solution was found in a reasonable time, with a makespan of
15414. DSS works efficiently with all search techniques even for the larger case
study. A comparison of the two search strategies identifies that DSS was more
efficient than SSS for this case. The number of choice points and failure points was
reduced by the DSS strategy to 22254 and 14 respectively for the different search
techniques.

Table 5.24: DSS results using makespan for the larger case study

Search     Makespan  CPU time (s)  Solver memory (kb)
technique
DFS        15414     75.70         311516
SBS        15414     75.84         312231
DDS        15414     77.98         312231
BFS        15414     75.86         312231
IDFS       15414     75.53         312231

Train Scheduling
In the larger case study, five trains performed 10 runs to deliver and collect 6228
tonnes of cane. Table 5.25 shows the scheduling of the train runs, the start and
finish times for each run, and the number of bins delivered to and collected from
each siding.

Table 5.25: Start and finish times of train runs using the makespan objective function

Train  Run    Siding  Visit  Empty  Full  Run       Run start  Run finish
index  index  number  time   bins   bins  time (s)  time (s)   time (s)
1      1      6       758    120    90    1582      0          1582
       5      6       2340   54     84    1582      1582       3164
2      2      14      202    114    114   982       0          982
       4      22      13405  10     26    2282      13132      15414
              30      14151  98     98
3      7      41      5871   94     -     3382      4413       7795
              42      6052   26     31
              22      7258   -      93
       8      22      8068   120    -     3741      7795       11536
              41      9719   -      94
4      9      42      10475  120    -     3817      8401       12218
              20      12132  -      90
       10     22      12492  -      11    3196      12218      15414
              50      13734  113    113
5      3      39      1652   49     50    3817      0          3817
              42      2126   -      24
       6      20      7414   90     -     4017      3817       7834
              39      5469   1      -
              42      5891   29     120
Total  10     -       -      1038         28398
Note:
The visit time in Table 5.25 is the arrival time of each train at each siding. The
delivering and collecting visit times are the same (delivering and collecting
operations are conducted continuously) when the train returns from the siding
directly to the mill or when the siding is located at the end of the rail network. If
the siding is in the middle of the train's path, the delivering visit time (outbound
direction) differs from the collecting visit time (inbound direction).
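The distinction in the note can be sketched directly: a siding in the middle of the route is passed twice, once outbound (delivery) and once inbound (collection), while the terminus siding is visited once. A minimal illustration with hypothetical sidings and arrival times:

```python
# Hypothetical out-and-back run over sidings A -> B -> C (C is the terminus).
# Outbound arrivals deliver empty bins; inbound arrivals collect full bins.
outbound = {"A": 100, "B": 250, "C": 400}   # delivery visit times (s)
inbound  = {"C": 400, "B": 550, "A": 700}   # collection visit times (s)

for siding in ("A", "B", "C"):
    same = outbound[siding] == inbound[siding]
    print(siding, "single visit time" if same else "two visit times")
# A and B get two visit times; C, at the end of the line, gets one.
```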
5.3.2 Results of Total Waiting Time Minimisation Objective
This case uses minimisation of the total waiting time as the objective function to
investigate the MIP model. Five trains were used to satisfy the siding and mill
requirements. Table 5.26 shows the results for the DFS search technique and the
SSS search strategy. The total waiting time of 18495 was obtained after 3220
seconds.
Table 5.26: SSS results using total waiting time for the larger case study

Search     Nb. of     Nb. of       Nb. of choice  Nb. of failure  Total waiting  CPU       Solver
technique  variables  constraints  points         points          time           time (s)  memory (kb)
DFS        54400      671106       2881           2228            18495          3220.72   958730

Table 5.27 shows the results of the integration of DSS and the search techniques.
An optimal solution was found in a reasonable time, with a total waiting time of
18495. DSS works efficiently with DFS as a search technique even for the larger
case study. A comparison of the two search strategies identifies that DSS was more
efficient than SSS using the total waiting time objective. The number of choice and
failure points was reduced by the SSS and DSS strategies to 2881 and 2228
respectively during the search process.

Table 5.27: DSS results using total waiting time for the larger case study

Search     Nb. of     Nb. of       Nb. of choice  Nb. of failure  Total waiting  CPU       Solver
technique  variables  constraints  points         points          time           time (s)  memory (kb)
DFS        54400      671106       2881           2228            18495          3210      958742
Train scheduling
In the larger case study, 5 trains performed 10 runs to deliver and collect 6217 tonnes of cane. Table 5.28 shows the schedule of the train runs, the start and finish times of each run, and the number of bins delivered to and collected from each siding.
Table 5.28: Start and finish times of train runs using the total waiting time objective
5.4 Sensitivity Analysis of Blocking Segment MIP Sugarcane Rail Model
This section shows how sensitivity analysis is used to investigate a new approach to the sugarcane rail problem. The integration of the MIP model and CP search techniques is used to solve different problems. Small and large rail networks are used in the sensitivity study. Different numbers of trains and multiple runs are used to examine the relationship between the number of trains, the number of runs and the CPU time required to solve the sugarcane rail problem. Makespan and total waiting time are used as objective functions. DFS and the two strategies provide the rail problem solution.
(Table 5.28 columns: Train index, Run index, Siding number, Visit time, Empty bins, Full bins, Run time (s), Run start time (s), Run finish time (s). The individual data rows are garbled in this extraction; the totals row reads: 10 runs, 1038 empty bins, 1038 full bins, 31100 s total run time.)
5.4.1 Small Rail Systems
Total Waiting Time (TWT)
This section investigates the effect that the number of trains and runs has on minimising the total waiting time as an objective function. In these cases, the size of the rail network is limited to 26 sections, 11 segments and 15 sidings. The SSS and DSS strategies are integrated with the DFS search technique of the Computing Acceleration (CA) algorithms to solve the different test cases. A comparison of the results of the alternative cases, each totalling 40 runs, is shown in Table 5.29.
Table 5.29: Results of sensitivity analysis of total waiting time (a small rail)
Tested cases | Trains | Runs/Train | SSS: Variables | Constraints | Choice points | Failure points | Total waiting time (s) | CPU (s) | DSS: Variables | Constraints | Choice points | Failure points | Total waiting time (s) | CPU (s)
C1 | 10 | 4 | 18090 | 258664 | 26294 | 25080 | 111430 | 2818 | 18090 | 258664 | 26294 | 25080 | 111430 | 2832
C2 | 8 | 5 | 17736 | 253030 | 13318 | 12351 | 79746 | 1759 | 17736 | 253030 | 13318 | 12351 | 79746 | 1760
C3 | 5 | 8 | 17250 | 247534 | 1112 | 1112 | 71960 | 1206 | 17250 | 247534 | 1112 | 1112 | 71960 | 1220
C4 | 4 | 10 | 17100 | 236052 | 559 | 559 | 53748 | 661 | 17100 | 236052 | 559 | 559 | 53748 | 671
C5 | 2 | 20 | 16818 | 196180 | 198 | 198 | 21546 | 464 | 16818 | 196180 | 198 | 198 | 21546 | 464
The SSS and DSS results are nearly identical, and both strategies perform well across the different problems, as shown in Table 5.29 and Figure 5.12. There is a very strong link between the total waiting time and the number of trains: the total waiting time increases when trains are added and the number of runs is reduced. Conversely, the total waiting time can be minimised by increasing the number of runs and removing some trains from the system. Table 5.29 also shows that, for both the SSS and DSS strategies, CPU time increases with any increase in the number of trains in the system.
Figure 5.12: SSS and DSS results of different cases using a small rail to optimise TWT
The Percentage Improvement of the Total Waiting Time (PITWT) is calculated for the different cases as follows:
(PITWT)ij = ((total waiting time of case i - total waiting time of case j) / (total waiting time of case i)) * 100, where i and j are case indices and i < j.
(PITWT)12 = ((111430-79746)/111430)*100 = 28.4%; improvement from C1 to C2.
(PITWT)23 = ((79746-71960)/79746)*100 = 9.7%; improvement from C2 to C3.
(PITWT)34 = ((71960-53748)/71960)*100 = 25.3%; improvement from C3 to C4.
(PITWT)45 = ((53748-21546)/53748)*100 = 59.9%; improvement from C4 to C5.
(PITWT)15 = ((111430-21546)/111430)*100 = 80.6%; improvement from C1 to C5.
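The percentage-improvement calculations can be reproduced with a short script; the function name and the case dictionary are illustrative, with the total waiting times taken from Table 5.29:

```python
def percentage_improvement(value_i, value_j):
    """((value of case i - value of case j) / value of case i) * 100."""
    return (value_i - value_j) / value_i * 100

# Total waiting times (s) for cases C1..C5, from Table 5.29
twt = {"C1": 111430, "C2": 79746, "C3": 71960, "C4": 53748, "C5": 21546}

for i, j in [("C1", "C2"), ("C2", "C3"), ("C3", "C4"), ("C4", "C5"), ("C1", "C5")]:
    print(f"(PITWT){i[1]}{j[1]} = {percentage_improvement(twt[i], twt[j]):.1f}%")
```

The same function computes the makespan improvements (PIM) by passing makespan values instead of total waiting times.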
[Figure 5.12 plots time in seconds against cases C1 to C5; series: SSS Solution/10, SSS CPU, DSS Solution/10, DSS CPU]
Makespan
Table 5.30 shows the results of applying the makespan criterion as the objective function of the MIP model. Increasing the number of trains while decreasing the number of runs means the makespan also decreases; that is, using a large number of trains at the same time to deliver or collect the bins reduces the makespan value. The CPU time is large when 20 trains are used at the same time, while the CPU time with a small number of trains is reasonable.
The Percentage Improvement of the Makespan (PIM) is calculated for the different cases using the small rail as follows:
(PIM)ij = ((makespan of case i - makespan of case j) / (makespan of case i)) * 100, where i and j are case indices and i < j.
(PIM)12 = ((37524-26960)/37524)*100 = 28.1%; improvement from C1 to C2.
(PIM)23 = ((26960-24812)/26960)*100 = 7.96%; improvement from C2 to C3.
(PIM)34 = ((24812-22730)/24812)*100 = 8.39%; improvement from C3 to C4.
(PIM)45 = ((22730-21812)/22730)*100 = 4.0%; improvement from C4 to C5.
(PIM)56 = ((21812-21312)/21812)*100 = 2.3%; improvement from C5 to C6.
(PIM)16 = ((37524-21312)/37524)*100 = 43.2%; improvement from C1 to C6.
Table 5.30: Results of sensitivity study of makespan (a small rail)
Tested cases | Trains | Runs/Train | SSS: Variables | Constraints | Choice points | Failure points | Makespan (s) | CPU (s) | DSS: Variables | Constraints | Choice points | Failure points | Makespan (s) | CPU (s)
C1 | 2 | 20 | 16819 | 1642664 | 437 | 438 | 37524 | 337 | 16819 | 1642664 | 438 | 437 | 37524 | 330
C2 | 4 | 10 | 17101 | 1581140 | 852 | 851 | 26960 | 302 | 17101 | 1581140 | 852 | 851 | 26960 | 301
C3 | 5 | 8 | 17251 | 1537592 | 628 | 628 | 24812 | 275 | 17251 | 1537592 | 628 | 628 | 24812 | 275
C4 | 8 | 5 | 17737 | 236738 | 1089 | 1089 | 22730 | 599 | 17737 | 236738 | 1089 | 1089 | 22730 | 603
C5 | 10 | 4 | 17191 | 1402920 | 0 | 0 | 21812 | 229 | 17191 | 1402920 | 0 | 0 | 21812 | 229
C6 | 20 | 2 | 20221 | 1025604 | 0 | 0 | 21312 | 2244 | 20221 | 1025604 | 0 | 0 | 21312 | 2240
The SSS and DSS strategies perform nearly identically when a large number of runs and trains is used, as shown in Figure 5.13.
Figure 5.13: SSS and DSS results of different problems using a small rail to optimise makespan
5.4.2 Large Rail Systems
Total Waiting Time
The previous case study used 5 trains with two runs for each train. In this section a sensitivity analysis of the total waiting time is undertaken by changing the number of trains from 5 to 3 and 2, and the number of runs from 2 to 4 and 5. Table 5.31 indicates that decreasing the number of trains can improve the total waiting time.
The Percentage Improvement of the Total Waiting Time (PITWT) for the different
cases using a larger rail is shown as follows:
(PITWT)12 = ((18495-13324)/18495)*100 = 27.9%; improvement from C1 to C2.
(PITWT)23 = ((13324-7024)/13324)*100 = 47.28%; improvement from C2 to C3.
(PITWT)13 = ((18495-7024)/18495)*100 = 62%; improvement from C1 to C3.
These total waiting time results show that using 2 trains and 5 runs (C3) is better than using 5 trains and 2 runs (C1), with an improvement of 62% from C1 to C3.
[Figure 5.13 plots time in seconds against cases C1 to C6; series: Solution of SSS/10, CPU of SSS, Solution of DSS/10, CPU of DSS]
Table 5.31: Sensitivity analysis of changing some variables in the system using total waiting time
Tested cases | Trains | Runs/Train | SSS: Variables | Constraints | Choice points | Failure points | Total waiting time (s) | CPU (s) | DSS: Variables | Constraints | Choice points | Failure points | Total waiting time (s) | CPU (s)
C1 | 5 | 2 | 54400 | 671106 | 2881 | 2228 | 18495 | 3220 | 54400 | 671106 | 2881 | 2228 | 18495 | 3210
C2 | 3 | 4 | 64800 | 829034 | 1392 | 1015 | 13324 | 4566 | 64800 | 829034 | 1392 | 1015 | 13324 | 4562
C3 | 2 | 5 | 53860 | 646490 | 222 | 0 | 7024 | 2867 | 53860 | 646490 | 222 | 0 | 7024 | 2844
Makespan
This section presents the sensitivity analysis applied to the makespan using different problems. Table 5.32 shows the effect of changing the number of trains and runs on the makespan value: the makespan decreases as the number of trains increases and the number of runs decreases.
Table 5.32: Sensitivity study for makespan criterion
The Percentage Improvement of the Makespan (PIM) in the case of increasing the
number trains and decreasing the number of runs is described as follows:
(PIM)12 = ((27839 - 21411)/ 27839)*100 = 23.08%; (PIM)12 improvement from C1 to C2.
(PIM)23 = ((21411 - 15414)/ 21411)*100 = 28%; (PIM)23 improvement from C2 to C3.
(PIM)13= ((27839 - 15414)/ 27839)*100 = 44.63%; (PIM)13 improvement from C1 to C3.
5.5 Blocking Section MIP and CP Results
The DFS technique is integrated with the DSS strategy to produce the solutions of the blocking section MIP and CP models for the case study in Figure 5.1. DFS was a stable technique in most of the previous case studies and gave good results in reasonable time, particularly with the DSS strategy. Two objectives are investigated for further analysis of the section blocking models: minimising the makespan and minimising the total waiting time (Masoud et al. 2010c). MIP and CP give nearly the same results and CPU times. Table 5.33 shows the DSS results for the blocking section MIP model under the makespan and total waiting time objectives. Table 5.33 also shows that the number of constraints and the CPU time of the MIP model increase when the objective function is changed from makespan to total waiting time.
Tested cases | Trains | Runs/Train | Variables | Constraints | Makespan (s) | Solver memory (kb)
C1 | 1 | 10 | 53701 | 607714 | 27839 | 321665
C2 | 2 | 5 | 53981 | 646966 | 21411 | 333370
C3 | 5 | 2 | 55901 | 670706 | 15414 | 342173
Table 5.33: Makespan and total waiting time results for the blocking section MIP model
The results of the blocking section CP model are obtained using the makespan and the total waiting time objectives. Table 5.34 shows that the number of constraints in CP under these objectives is lower than in MIP by ((17802-13760)/17802)*100 = 22.7%, because CP combines many constraints into a single constraint.
Table 5.34: Makespan and total waiting time results for the blocking section CP model
5.5.1 Comparison between the Blocking Segment and Blocking Section Models
The main aim of developing the blocking section models is to reduce the values of the makespan and total waiting time objectives. The small and large cases in Figures 5.1 and 5.11 are solved with both the blocking segment and blocking section models. The Percentage Improvements of the makespan and the total waiting time (PIM and PITWT) are calculated from the blocking segment model to the blocking section model.
(PIM) segment, section = ((makespan of blocking segment – makespan of blocking section) / makespan of blocking segment)*100
Table 5.33 (blocking section MIP, DSS):
Search technique | Objective function | Variables | Constraints | Choice points | Failure points | CPU time (s) | Solver memory (kb) | Solution value (s)
DFS | Makespan | 1786 | 17802 | 491 | 491 | 1.17 | 6255 | 2432
DFS | Waiting time | 1786 | 17852 | 510 | 510 | 2.36 | 7797 | 1610

Table 5.34 (blocking section CP, DSS):
Search technique | Objective function | Variables | Constraints | Choice points | Failure points | CPU time (s) | Solver memory (kb) | Solution value (s)
DFS | Makespan | 1808 | 13760 | 1156 | 1156 | 0.91 | 6235 | 2432
DFS | Waiting time | 1808 | 13760 | 1152 | 576 | 0.55 | 6786 | 1610
(PITWT)segment,section = ((TWT of blocking segment - TWT of blocking section) / TWT of blocking segment)*100
Small case:
(PIM)segment,section = ((2664-2432)/2664)*100 = 8.7%
(PITWT)segment,section = ((1984-1610)/1984)*100 = 18.9%
Large case:
(PIM)segment,section = ((15414-15175)/15414)*100 = 1.5%
(PITWT)segment,section = ((18495-16065)/18495)*100 = 13.1%
Figure 5.14 compares the blocking section and blocking segment models on the large and small cases; the improvement in total waiting time is more significant than the improvement in makespan.
Figure 5.14: Blocking segment and section results using makespan and total waiting time
[Figure 5.14 comprises two charts, for the small and large case studies, plotting time in seconds for the makespan and waiting time objectives under the blocking section and blocking segment models]
5.6 The Results of Inclusion of the Delivery and Collection Time Constraints
The delivery and collection time constraints are investigated in this section using Kalamia Mill. This case includes 7 harvesters, 4 trains and the delivery and collection of 875 bins under time constraints. The ACTSS schedule checker and simulator (McWhinney and Penridge, 1991; Pinkney and Everitt, 1997) was used to examine the schedule produced by the MIP model. An optimal (or near optimal) number of trains is determined under the system's constraints (see Chapter 3 for details). The best makespan solution found is 24.63 hr, and the total operating time is 18.46 hr with 4 trains and 12 runs (trips).
Figure 5.15 shows a graph of harvester utilisation. The bars are shaded during the periods when a harvester is operating; gaps in the graph indicate periods when the harvester is waiting for bins. Although there are some small gaps in the harvester utilisation shown in Figure 5.15, these gaps were caused by differences in the allocation of shunting time between the model and the ACTSS simulator, and not by a weakness in the model. Hence the model achieves the objective of maintaining a continuous supply
Figure 5.15: Harvester usage for seven harvester model
Note: In ACTSS, the shunting time is 15 min for the total number of bins delivered or collected during each visit to each harvester, while in the model the shunting time depends on the number of delivered or collected bins, with each bin taking 15 seconds.
Maintaining a continuous supply of full bins to the mill is demonstrated by the mill yard stock chart shown in Figure 5.16: since the stock of full bins never falls to zero, a continuous supply of full bins to the mill is maintained. It is easy, however, to achieve a continuous supply of full bins to the mill in a model simply by increasing the number of full bins at the mill at the start of the simulation. The real challenge is to minimise the size of the bin fleet, and any attempt to overcome deficiencies in the model by increasing the number of full bins at the mill can be identified by considering the required size of the bin fleet. In this model, a bin fleet of 753 bins is required to achieve the objective of filling 875 bins, indicating that 16% of the bin fleet is filled twice during the day, which is a reasonably small bin fleet for the harvesting task.
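The 16% figure can be checked with two lines of arithmetic (the variable names are illustrative):

```python
bins_to_fill = 875   # bin-loads required over the day
bin_fleet = 753      # physical bins in the fleet

refilled = bins_to_fill - bin_fleet   # bins that must be filled a second time
share = refilled / bin_fleet * 100    # share of the fleet filled twice

print(refilled, round(share))         # 122 bins, about 16% of the fleet
```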
Figure 5.16: Mill yard stock chart for the seven harvester model
Figure 5.17 shows the train utilisation and the automatically generated train run
details are presented in Table 5.35.
Figure 5.17: Train utilisation for the seven harvester model
Table 5.35: Train runs for the seven harvester model
Train | Run | Location | Start time | Activity | Delivered empty bins | Collected full bins
Jarvisfield 1 Kalamia 7:02:00 Start_run
Mainline_1a 7:05:39 43 0
Mainline_3 7:25:56 35 0
Central_3 7:50:56 42 0
Central_3 8:05:56 0 31
Mainline_1a 8:21:14 0 28
Kalamia 8:39:53 End_run
2 Kalamia 10:25:00 Start_run
Mainline_1a 10:28:39 33 0
Mainline_3 10:48:56 35 0
Central_3 11:13:56 10 0
Central_3 11:28:56 0 34
Mainline_3 11:38:57 0 30
Mainline_1a 11:59:14 0 50
Kalamia 12:18:53 End_run
3 Kalamia 23:01:00 Start_run
Mainline_1a 23:04:39 14 0
Mainline_3 23:24:56 60 0
Central_3 23:49:56 46 0
Central_3 24:04:56 0 33
Mainline_3 24:14:57 0 10
Kalamia 24:38:53 End_run
Kilrie 1 Kalamia 1:40:00 Start_run
Gainsford_2 2:09:04 70 0
Gainsford_2 2:24:04 0 70
Kalamia 2:53:09 End_run
2 Kalamia 11:23:00 Start_run
Gainsford_2 11:52:04 70 0
Gainsford_2 12:07:04 0 70
Kalamia 12:36:09 End_run
3 Kalamia 16:59:00 Start_run
Gainsford_2 17:28:04 34 0
Gainsford_2 17:43:04 0 34
Kalamia 18:12:09 End_run
Norham 1 Kalamia 8:08:00 Start_run
J/Field_Term_B 8:35:54 47 0
J/ Field_Term _A 8:51:46 69 0
J/ Field_Term _A 9:06:46 0 48
J/ Field_Term _B 9:08:39 0 50
Mainline_3 9:42:37 0 26
Kalamia 10:06:33 End_run
2 Kalamia 12:41:00 Start_run
J/ Field_Term _A 13:09:46 62 0
J/ Field_Term _A 13:24:46 0 76
J/ Field_Term _B 13:59:39 0 12
Mainline_3 14:33:37 0 36
Kalamia 14:57:33 End_run
3 Kalamia 19:34:00 Start_run
J/ Field_Term _B 20:01:54 47 0
J/ Field_Term _A 20:17:46 44 0
J/ Field_Term _A 20:32:46 0 51
J/ Field_Term _B 20:33:39 0 32
Mainline_3 21:07:37 0 28
Mainline_1a 21:27:54 0 12
Kalamia 21:46:33 End_run
Rita Island 1 Kalamia 11:08:00 Start_run
Chiv_terminus 11:32:31 52 0
Chiv_terminus 11:47:31 0 52
Kalamia 12:12:02 End_run
2 Kalamia 16:20:00 Start_run
Chiv_terminus 16:44:31 10 0
Chiv_terminus 16:59:31 0 52
Kalamia 17:24:02 End_run
3 Kalamia 18:02:00 Start_run
Chiv_terminus 18:26:31 52 0
Chiv_terminus 18:41:31 0 10
Kalamia 19:06:02 End_run
Total 875 875
5.7 Conclusion
Sugarcane railway operations are complicated and involve a large number of variables. As a result, the proposed scheduling model is complicated and, because of the dynamic nature of the system, needs to be solved in a reasonable time. The combination of mixed integer programming and constraint programming search techniques is used as an integrated approach to obtain appropriate solutions in a reasonable time. Some heuristics are proposed to resolve rail conflicts and to simplify the system by removing unused segments and sections from the model. A case study tests many search techniques and evaluates the performance of each. The dichotomic search strategy, combined with any of the search techniques, gives high quality solutions. The comparison of the results of the blocking section and blocking segment models, and the sensitivity analysis of railway utilisation, are described in detail in this chapter.
The results of including the delivery and collection time constraints are presented for a real-life case, in which the siding visit times are optimised to remove delays in delivering or collecting bins throughout the rail network. Metaheuristic techniques are adapted in Chapter 6 for larger numbers of harvesters and trains throughout the rail network, to reduce the CPU time of executing the model codes.
Chapter 6
Metaheuristic Techniques
Chapter Outline
6.1 Introduction....................................................................................................................197
6.2 Neighbourhood Structure................................................................................................198
6.2.1 Adjacent Pairwise Interchange (API) …………………...............................200
6.2.2 Non-Adjacent Pairwise Interchange (NAPI).................................................201
6.2.3 Extraction and Forward Shifted Reinsertion (EFSR)……………...…….....201
6.2.4 Extraction and Backward Shifted Reinsertion (EBSR).................................202
6.3 Neighbourhood Structure in the Railway Scheduling Problem ....................................202
6.4 Simulated Annealing (SA) Technique............................................................................207
6.4.1 Simulated Annealing Technique for Solving Job Shop Scheduling
Problem…………………………………...………………………………...208
6.4.2 New Simulated Annealing Algorithms for Sugarcane Rail Cases……….....210
6.5 Tabu Search (TS) Technique..........................................................................................213
6.5.1 Tabu Search Technique for Solving Job Shop Scheduling Problem.……...213
6.5.2 A new Tabu Search Technique for Sugarcane Rail Cases...............................215
6.6 Metaheuristic Results of Sugarcane Rail Systems..........................................................216
6.6.1 TS and SA Results by Changing Number of Trains.........................................219
6.7 Hybrid Metaheuristic Techniques for Sugarcane Rail Systems.....................................221
6.7.1 Hybrid SA/TS Technique..................................................................................222
6.7.2 Hybrid TS/SA Technique ..................................................................................226
6.7.3 Hybrid Techniques Result for Sugarcane Rail Systems......................................229
6.7.4 Analysis of Hybrid Techniques...........................................................................232
6.8 Hyper Metaheuristic Techniques for Sugarcane Rail Transport Systems......................234
6.8.1 Hyper SA/TS Technique.......................................................................................235
6.8.2 Hyper TS/SA Technique.......................................................................................238
6.8.3 Hyper Techniques Result for Sugarcane Rail Systems.........................................241
6.8.4 Analysis of Hyper Techniques.............................................................................244
6.9 Hybrid and Hyper Metaheuristic Techniques Test Cases...............................................246
6.10 Hyper and Hybrid Metaheuristic Technique (TS/SA) and MIP...................................247
6.11 Study Analysis of Elements of Metaheuristic Techniques...........................................249
6.12 Inclusion of Delivery and Collection Time Constraints for a Hyper TS/SA
Results...........................................................................................................................252
6.13 Conclusion....................................................................................................................257
Publications Arising from Chapter 6
Masoud, M., Kozan, E., & Kent, G. (2011c). Hybrid/hyper metaheuristic techniques for optimising sugarcane rail operations. Computers & Operations Research (submitted).
6.1 Introduction
Optimising the sugarcane rail transport system is an NP-hard problem, and finding an optimal solution in reasonable time is difficult. As a result, there is a need to develop techniques that find good solutions in reasonable time. This chapter describes some metaheuristic techniques to optimise the sugarcane rail transport system. Metaheuristic techniques are a type of local search technique which usually starts from a feasible solution, possibly selected at random, and then obtains better solutions by manipulating the current solution. These techniques give a near optimal or optimal solution, but optimality is not guaranteed.
The main advantage of metaheuristic techniques is that they obtain near-optimal solutions in a reasonable time for many large problems, especially scheduling problems, where a solution is represented by a complete schedule from which a better schedule is then derived. A schedule can be specified by a simple sequence of n jobs, as in a non-preemptive single machine schedule, or by a sequence of k operations on a specific machine of m machines, as in a non-preemptive job shop schedule. Start times and completion times are included in the schedule. The design of the schedule representation is more complicated in the preemptive case.
Before the metaheuristic techniques are identified, a basic local search is described
first. According to Pinedo (2008), there are four main criteria to be investigated
when comparing local search techniques, namely:
- solution representation;
- neighbourhood structure;
- search method within neighbourhood; and
- acceptance-rejection criterion.
The basic local search is usually called iterative improvement, since each iteration or step is only performed if the next solution is better than the current solution. The stopping criterion of the algorithm depends on reaching the target value of the final solution. The main procedure of local search techniques is shown in the following generic algorithm.
A generic local search algorithm:
Begin
  Select an initial solution s Є S; where S is the search space of all solutions.
  Set best solution s* = s;
  While the stop criterion is not met do
    Determine a neighbour s' Є N(s); where N(s) is the neighbourhood of s.
    Set s = s'
    If C(s') < C(s*) then
      Set s* = s'
    End if
  End while
End.
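This generic algorithm can be sketched in Python; the cost function, neighbourhood generator and iteration budget are placeholders to be supplied by the concrete technique (SA, TS, and so on):

```python
import random

def local_search(initial, cost, neighbours, max_iter=1000):
    """Generic local search: wander the neighbourhood N(s), remember the best s* seen."""
    s = initial                          # current solution
    best = initial                       # best solution s* so far
    for _ in range(max_iter):            # stop criterion: iteration budget
        candidates = neighbours(s)       # N(s), the neighbourhood of s
        if not candidates:
            break
        s = random.choice(candidates)    # move to a neighbour s'
        if cost(s) < cost(best):         # C(s') < C(s*)
            best = s                     # s* = s'
    return best

# Toy usage: minimise x**2 over the integers, stepping +/-1 from x = 10
best = local_search(10, cost=lambda x: x * x,
                    neighbours=lambda x: [x - 1, x + 1], max_iter=500)
```

In a scheduling setting, `neighbours` would return the schedules reachable by moves such as the pairwise interchanges of Section 6.2, and `cost` would evaluate the makespan or total waiting time.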
This research investigates two metaheuristic techniques: simulated annealing (SA) and tabu search (TS). These techniques solve many cases, particularly large-scale ones. Classical SA and TS are first used to solve a job shop scheduling problem, as a small example of how these techniques work, before being applied to the sugarcane rail transport systems. The job shop scheduling problem is formulated in this chapter using a disjunctive graph approach to obtain feasible solutions that serve as initial solutions for the SA and TS techniques. The overall acceptance or rejection criterion used to stop a technique differs from one technique to another: in SA it is based on a probabilistic process, while TS uses a deterministic process.
6.2 Neighbourhood Structure
The neighbourhood structure is the core of many local search techniques such as SA and TS. A neighbourhood structure is applied to obtain new feasible solutions by applying small perturbations to a given feasible solution. In job shop scheduling, re-ordering the sequence of operations on a critical path is used to produce small perturbations, with the aim of obtaining a neighbour with a better makespan than the current one. A critical path in a job shop schedule is a set of operations in which the first starts at time 0 and the last finishes at time t = Cmax. The completion time of each operation on the critical path equals the starting time of the next operation on that path. The sequence of operations on the critical path is changed to analyse the effects on the makespan. These changes, applied to different job operations on the same machine, are explained under the different types of neighbourhood structure below.
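The critical path itself can be computed as the longest chain of processing times through the precedence graph. The following sketch uses a small hypothetical five-operation graph, not the thesis case study:

```python
def critical_path(durations, succ):
    """Longest path through an acyclic operation graph.

    durations: processing time of each operation
    succ: successor lists defining the precedence arcs
    Returns (Cmax, operations on the critical path in order).
    """
    # Post-order DFS: successors are appended before their predecessors
    order, seen = [], set()
    def visit(v):
        if v not in seen:
            seen.add(v)
            for w in succ.get(v, []):
                visit(w)
            order.append(v)
    for v in durations:
        visit(v)

    # tail[v] = longest total duration of a chain starting at v
    tail, nxt = {}, {}
    for v in order:                      # successors already processed
        best, arg = 0, None
        for w in succ.get(v, []):
            if tail[w] > best:
                best, arg = tail[w], w
        tail[v] = durations[v] + best
        nxt[v] = arg

    start = max(tail, key=tail.get)      # operation that begins the critical path
    path, v = [], start
    while v is not None:
        path.append(v)
        v = nxt[v]
    return tail[start], path

# Hypothetical 5-operation precedence graph
dur = {"o1": 3, "o2": 2, "o3": 4, "o4": 2, "o5": 1}
succ = {"o1": ["o2", "o3"], "o2": ["o4"], "o3": ["o4"], "o4": ["o5"]}
cmax, path = critical_path(dur, succ)
# Critical path o1 -> o3 -> o4 -> o5 with Cmax = 3 + 4 + 2 + 1 = 10
```

Re-ordering operations on `path` is exactly the perturbation the neighbourhood structures below apply.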
Definition 1: if N: S → Z(S) is a neighbourhood structure, where S is the search space of an optimisation or decision/feasibility problem, then:
- N is called connected if any solution s Є S can be obtained from any other solution s' Є S by a specific number of steps according to N.
- In an optimisation problem, N is called opt-connected if, from any solution s Є S, an optimal solution s* Є S (if one exists) can be obtained after applying a specific number of steps according to N.
- In a decision problem, N is called feasible-connected if, from any solution s Є S, a feasible solution sf Є S (if one exists) can be obtained after applying a specific number of steps according to N.
For classical job shop scheduling, let S and S' be two complete selections for a given alternative graph G.
- If S and S' are consistent, P is a critical path in G(S) and Cmax(S') ≤ Cmax(S), then at least one alternative arc of P does not belong to S'.
- If S is not consistent and C is a positive cycle in G(S), then at least one arc of C does not belong to a complete consistent selection S'.
The proof of this result is given in Brucker (2007).
Definition 2: a generic neighbourhood structure. Let P be a critical path of a given alternative graph G(S), where S is a complete selection. The neighbourhood N1(S) is the set of all consistent selections that can be obtained by replacing one or more alternative arcs of an arbitrary critical path P by their alternatives. An alternative arc i → j of an arbitrary critical path P is replaced by its alternative j → i, or by h → k if the swap is not allowed. If the new selection is infeasible or inconsistent, it can be repaired by replacing further alternative arcs.
The search within a neighbourhood can be conducted in many ways. One way is to select schedules in the neighbourhood at random and evaluate them using a performance criterion to decide which schedule is accepted to produce new solutions. Another way is to select a schedule that gives good results and to try to swap the jobs that have a significant effect on the objective function.
Many techniques are used to design a neighbourhood structure. Some are random and others are specific, such as the pairwise interchange (PI) techniques. This section describes some PI techniques.
6.2.1 Adjacent Pairwise Interchange (API)
The Adjacent Pairwise Interchange (API) technique swaps two adjacent jobs according to the following scheme:
Initial parameters:
  The maximum number of iterations: max_iter
  The iteration index: iter
  The schedule s is an initial schedule
Step 1: Select i, j Є s, where i and j are two adjacent jobs.
  Swap i and j
  Store the new schedule s'
  Evaluate the new schedule using the makespan criterion:
  If f(s') < f(s) then
    Select s' as the new schedule
  Else
    Select s as the new schedule
  End if
  Set iter = iter + 1
  If iter < max_iter then
    Go to Step 1
  End if
End
This technique is best explained by an example: 10 jobs are sequenced as 1-2-3-4-5-6-7-8-9-10. Figure 6.1 shows jobs 5 and 6, two adjacent jobs, being swapped to obtain the new sequence 1-2-3-4-6-5-7-8-9-10.
Figure 6.1: API technique for 10 jobs
6.2.2 Non-Adjacent Pairwise Interchange (NAPI)
Non-Adjacent Pairwise Interchange (NAPI) swaps two non-adjacent jobs. All the API steps apply in NAPI except Step 1, which becomes:
Select i, j Є s, where i and j are two non-adjacent jobs.
Suppose the sequence 1-2-3-4-5-6-7-8-9-10 is given. Using NAPI, jobs 4 and 8, two non-adjacent jobs, are swapped to obtain the new sequence 1-2-3-8-5-6-7-4-9-10, as shown in Figure 6.2.
Figure 6.2: NAPI technique for 10 jobs
6.2.3 Extraction and Forward Shifted Reinsertion (EFSR)
The Extraction and Forward Shifted Reinsertion (EFSR) technique follows these steps:
Select i, j Є s, where i and j are two non-adjacent jobs and the position of i is before the position of j.
Extract job i from its position and reinsert it directly after job j.
For example, in the sequence 1-2-3-4-5-6-7-8-9-10, job 4 is extracted from its position and reinserted directly after job 8. The resulting sequence is 1-2-3-5-6-7-8-4-9-10, as shown in Figure 6.3.
Figure 6.3: EFSR technique for 10 jobs
6.2.4 Extraction and Backward Shifted Reinsertion (EBSR)
Extraction and Backward Shifted Reinsertion (EBSR) includes two main steps:
Select i, j Є s, where i and j are two non-adjacent jobs and the position of i is before the position of j.
Extract job j from its position and reinsert it directly before job i.
For example, in the sequence 1-2-3-4-5-6-7-8-9-10, job 8 is extracted from its position and reinserted directly before job 4. The resulting sequence is 1-2-3-8-4-5-6-7-9-10, as shown in Figure 6.4.
Figure 6.4: EBSR technique for 10 jobs
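The four moves can be expressed as small list operations on a job sequence; the positions here are 0-based, the function names are shorthand for the techniques above, and the example calls reproduce the sequences of Figures 6.1 to 6.4:

```python
def api(seq, p):
    """Adjacent Pairwise Interchange: swap the jobs at positions p and p+1."""
    s = list(seq)
    s[p], s[p + 1] = s[p + 1], s[p]
    return s

def napi(seq, p, q):
    """Non-Adjacent Pairwise Interchange: swap the jobs at positions p and q."""
    s = list(seq)
    s[p], s[q] = s[q], s[p]
    return s

def efsr(seq, p, q):
    """EFSR: extract the job at p and reinsert it directly after the job at q (p < q)."""
    s = list(seq)
    job = s.pop(p)
    s.insert(q, job)   # after popping p < q, index q is the slot just after the old q
    return s

def ebsr(seq, p, q):
    """EBSR: extract the job at q and reinsert it directly before the job at p (p < q)."""
    s = list(seq)
    job = s.pop(q)
    s.insert(p, job)
    return s

jobs = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
api(jobs, 4)      # -> [1, 2, 3, 4, 6, 5, 7, 8, 9, 10]  (Figure 6.1)
napi(jobs, 3, 7)  # -> [1, 2, 3, 8, 5, 6, 7, 4, 9, 10]  (Figure 6.2)
efsr(jobs, 3, 7)  # -> [1, 2, 3, 5, 6, 7, 8, 4, 9, 10]  (Figure 6.3)
ebsr(jobs, 3, 7)  # -> [1, 2, 3, 8, 4, 5, 6, 7, 9, 10]  (Figure 6.4)
```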
6.3 Neighbourhood Structure in the Railway Scheduling Problem
Neighbourhood structures are used in developing new metaheuristic techniques to solve railway scheduling problems. A neighbourhood can be produced at any section, particularly at the intersection points. A single cane rail network with 22 sections is shown in Figure 6.5, where a and b are intersection points. Point a has three neighbouring sections (s5, s6 and s11) and point b also has three neighbouring sections (s13, s14 and s16).
Figure 6.5: A single cane rail network with 22 sections
A transition matrix is used to represent any railway network. This matrix includes all connections of the railway network between sections. The main idea of the transition matrix is to use a binary representation, with entries 1 and 0, for any transport network: an entry of 1 means there is a connection or link between two sections, or between a section and a passing loop or the mill; an entry of 0 means there is no connection between them. Figure 6.5 shows a small railway network and Figure 6.6 shows how it can be represented using a transition matrix.
Figure 6.6: Transition matrix for a single railway with 22 sections
The transition matrix in Figure 6.6 includes all connections between the
22 sections of the single railway network. This matrix helps to implement the two
main types of neighbourhood structures: section neighbourhood and train
neighbourhood. The feasibility analysis of the solutions is built into the transition
matrix: a 1 indicates a feasible move while a 0 indicates an infeasible one, so
many infeasible solutions related to the section neighbourhood can be avoided.
Two main neighbourhood structure techniques are proposed in this research and
explained in detail below.
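The transition matrix and the neighbourhoods it generates can be sketched as follows. This is a minimal illustration on an invented five-section fragment, not the full 22-section network of Figure 6.5; the link set and the function name are ours:

```python
# Invented five-section fragment around an intersection (illustrative only).
sections = ["s4", "s5", "s6", "s11", "s12"]
links = {("s4", "s5"), ("s5", "s6"), ("s5", "s11"),
         ("s6", "s11"), ("s11", "s12")}

# Transition matrix: 1 = link exists (candidate neighbourhood move),
# 0 = no link (the move would be infeasible and is avoided).
T = [[1 if (a, b) in links or (b, a) in links else 0
      for b in sections] for a in sections]

def section_neighbourhoods(section):
    """Return the sections reachable from `section` according to T."""
    i = sections.index(section)
    return [sections[j] for j in range(len(sections)) if T[i][j] == 1]
```

Under this invented link set, `section_neighbourhoods("s5")` yields s4, s6 and s11, mirroring the three neighbourhoods available at an intersection point.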
Section Neighbourhood
In the section neighbourhood technique, a decision has to be taken at each
junction or intersection point. This decision relates to selecting the section to be
used. The direction of the arc represents the direction of the train. Figure 6.5 shows
two intersection points, a and b.
At point a
There are three sections (s5, s6 and s11) linked with point a. The neighbourhoods
that can be produced at point a are as follows:
s5→s11 outbound direction
s5→s6 outbound direction
s6→s5 inbound direction
s6→s11 outbound direction
s11→s6 inbound direction
s11→s5 inbound direction
At point b
Three sections (s13, s14 and s16) are linked to point b. All neighbourhoods that can be
produced at this point are shown as follows:
s13→s14 outbound direction
s13→s16 outbound direction
s14→s13 inbound direction
s14→s16 inbound direction
s16→s13 inbound direction
s16→s14 outbound direction
The makespan values are evaluated for each neighbourhood. All neighbourhoods are
permitted subject to the priorities for visiting the sidings, so as to satisfy the siding
and mill requirements and the other constraints of the models.
In the transition matrix, a 1 means there is a link between one section and another.
This link represents a neighbourhood for that section which can be used to improve the
solution, while a 0 means a new neighbourhood cannot be constructed for the current
solution.
Train Neighbourhood
Some trains require a specific section for different operations. The sequence of
operations on such a section can produce new neighbourhoods by changing the
positions of these operations. Figure 6.7 presents three trains ka, kb and kc, where
ka requires section s to implement operation oa, kb requires section s to implement
operation ob, and kc requires the same section to implement operation oc. As a result,
many neighbourhoods of operations can be produced at section s, as follows:
N1: oa→ob, where train ka precedes train kb
The sequences of operations on s under N1 are oa→ob→oc or oc→oa→ob.
oa→ob→oc means train ka precedes train kb and train kb precedes train kc;
oc→oa→ob means train kc precedes train ka and train ka precedes train kb.
N2: oa→oc, where train ka precedes train kc
The sequences of operations on s under N2 are oa→oc→ob or ob→oa→oc.
oa→oc→ob means train ka precedes train kc and train kc precedes train kb;
ob→oa→oc means train kb precedes train ka and train ka precedes train kc.
N3: ob→oc, where train kb precedes train kc
The sequences of operations on s under N3 are ob→oc→oa or oa→ob→oc.
ob→oc→oa means train kb precedes train kc and train kc precedes train ka;
oa→ob→oc means train ka precedes train kb and train kb precedes train kc.
N4: ob→oa, where train kb precedes train ka
The sequences of operations on s under N4 are ob→oa→oc or oc→ob→oa.
ob→oa→oc means train kb precedes train ka and train ka precedes train kc;
oc→ob→oa means train kc precedes train kb and train kb precedes train ka.
N5: oc→oa, where train kc precedes train ka
The sequences of operations on s under N5 are ob→oc→oa or oc→oa→ob.
ob→oc→oa means train kb precedes train kc and train kc precedes train ka;
oc→oa→ob means train kc precedes train ka and train ka precedes train kb.
N6: oc→ob, where train kc precedes train kb
The sequences of operations on s under N6 are oc→ob→oa or oa→oc→ob.
oc→ob→oa means train kc precedes train kb and train kb precedes train ka;
oa→oc→ob means train ka precedes train kc and train kc precedes train kb.
Figure 6.7: Case of three trains requiring the same section
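The six neighbourhoods N1 to N6 can be enumerated mechanically: each Ni fixes one operation immediately before another, and the compatible orderings follow. A minimal sketch (the function name is ours):

```python
from itertools import permutations

ops = ["oa", "ob", "oc"]  # operations of trains ka, kb, kc on section s

def sequences_under(first, second):
    """Orderings of the three operations in which `first` is scheduled
    immediately before `second`, i.e. the two sequences listed for each Ni."""
    return [p for p in permutations(ops)
            if p.index(second) == p.index(first) + 1]
```

For example, `sequences_under("oa", "ob")` returns the two N1 sequences oa→ob→oc and oc→oa→ob.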
Neighbourhood structure techniques are used to develop new metaheuristic
techniques such as SA and TS. Sections 6.4 and 6.5 explain the application of the SA
and TS techniques, respectively, to the job shop scheduling problem (refer to Example
3.1 for the details) and to the sugarcane rail transport system.
6.4 Simulated Annealing (SA) Technique
SA was first introduced by Kirkpatrick et al. (1983) and Cerny (1985). The SA
technique is a metaheuristic technique using local search techniques to solve
combinatorial optimization problems. SA searches different possible solutions to
avoid being stuck in local optima. The SA procedure is as follows:
SA starts with an initial solution as the current solution, currSolution,
and obtains a new solution, newSolution, from the neighbourhood of
the current solution. If the objective function value of the current
solution is f(currSolution) and the objective function value of the new
solution is f(newSolution), the difference
∆ = f(newSolution) - f(currSolution) determines whether the new solution is
acceptable. The new solution is accepted (for minimisation problems)
if ∆ is less than zero or exp(-∆/T) > ε, where ε is a small number and T is
the temperature; otherwise the new solution is rejected. In the SA
technique, T decreases through the iterations using the decreasing
parameter α.
6.4.1 Simulated Annealing Technique for Solving Job Shop Scheduling Problem
This section demonstrates how the SA technique applies to job shop scheduling
problems using makespan minimisation as an objective function. The SA procedure
is explained step by step below.
Select the initial simulated annealing parameters
Define a large value of the temperature T=T0
Select the value of the parameter α between 0 and 1; 0<α<1
Set an initial feasible schedule using the disjunctive graph
Evaluate the initial value of objective function (makespan)
While (T is in cooling range)
Develop a heuristic technique to design a neighbourhood for the
current solution
Evaluate the new makespan; ∆=new Makespan – currMakespan
If ∆<0 then
Accept the new state
Else
If Pr(accepted) = exp(-∆/T) > ε then
Accept the new state
Else
Reject the solution
209
Return to the previous solution
End if
End if
Update T according to T= αT
End while
Stop criteria
∆ is not significant or there are no improvements in the makespan for
small change in T
The maximum number of iterations has been completed
End
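The SA procedure above can be sketched as a generic minimisation loop. This is an illustrative sketch, not the thesis implementation: the problem-specific `makespan` and `neighbour` functions are stand-ins supplied by the caller.

```python
import math
import random

def simulated_annealing(initial, makespan, neighbour,
                        T0=100.0, alpha=0.95, T_min=1.0):
    """Generic SA loop: accept improving neighbours outright, accept
    worsening ones with Boltzmann probability, cool geometrically."""
    curr = best = initial
    T = T0
    while T > T_min:
        cand = neighbour(curr)
        delta = makespan(cand) - makespan(curr)
        # Metropolis acceptance: always if better, probabilistically if worse
        if delta < 0 or math.exp(-delta / T) > random.random():
            curr = cand
        if makespan(curr) < makespan(best):
            best = curr
        T *= alpha  # update T according to T = alpha * T
    return best
```

The caller supplies, for instance, a permutation of jobs as `initial`, a schedule evaluator as `makespan`, and one of the neighbourhood moves of Section 6.2 or 6.3 as `neighbour`.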
An Example of a Simulated Annealing Application
Example 2.1 is used to explain how SA can solve a job shop problem. The final
solution is shown graphically in Figure 6.8, where the critical path (red arrows) is
obtained. All SA iterations are shown in Table 6.1. The best solution is obtained
after four iterations. The complete SA solution for the numerical example is given in
Appendix A.
Figure 6.8: Disjunctive graph of final solution for the 3 jobs and 3 machines example (makespan = 21)
Table 6.1 shows the results of all iterations for solving Example 2.1.
Table 6.1: Simulated annealing result for Example 2.1

Step | Makespan (Cmax) | ∆ | Temperature (T) | Boltzmann probability | Decision | Best value
0    | -               | - | 100             | -                     | -        | 21
1    | -               | - | 95              | -                     | accepted | -
2    | -               | - | 90.25           | 0.988                 | accepted | -
3    | -               | - | 85.73           | -                     | accepted | -
4    | -               | - | 81.44           | 0.961                 | rejected | -
The Percentage Improvement of SA (PISA) can be calculated as:
PISA = ((initial Cmax - SA Cmax)/initial Cmax)*100
PISA = ((29-21)/29)*100 = 27.59%.
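The same percentage-improvement calculation applies to any solution technique; as a one-line helper (the function name is ours):

```python
def percentage_improvement(initial_cmax, improved_cmax):
    """PI = ((initial Cmax - improved Cmax) / initial Cmax) * 100."""
    return (initial_cmax - improved_cmax) / initial_cmax * 100
```

For the SA example, `percentage_improvement(29, 21)` gives 27.59% to two decimal places.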
The SA principles for the JSS problem have been adapted to sugarcane rail transport systems,
as described in Section 6.4.2, using the SPT solution as the initial solution.
6.4.2 New Simulated Annealing Algorithms for Sugarcane Rail Cases
Two interactive SA algorithms are developed to solve the sugarcane rail transport
system. Firstly, initial feasible solutions are obtained, and then the quality of the
solutions is improved. These algorithms depend on the two neighbourhood
structure techniques shown in Section 6.3. Algorithm 1 uses the train
neighbourhood technique, while Algorithm 2 uses the section neighbourhood
technique. The integration of the two algorithms obtains SA results where the
section neighbourhood and train neighbourhood are used at the same time. The
procedures of the two algorithms are as follows:
Algorithm 1:
Select the initial simulated annealing parameters
Define a large value for the temperature T=T0
Select the value of the parameter α; 0<α<1
Construct an initial solution using a disjunctive graph model
Define initial scheduling for all trains on all sections
Define initial sequencing of segments
Calculate initial makespan (currMakespan)
While (T is in cooling range)
Construct a new neighbourhood
Apply train neighbourhood technique
Apply Adjacent pairwise techniques
Swap any two random trains
Require a specific segment
Require a specific section
Check the feasibility
Time window, section transition matrix, trains and sidings capacity
Keep the sequencing of sections without changing
Produce a new schedule and a new solution (new Makespan)
Evaluate the new makespan where ∆=new Makespan – currMakespan
If ∆<0 then
Accept the new state
Else
If Pr(accepted) = exp(-∆/T) > ε then
Accept the new state
Else
Reject the solution and return to the previous solution
End if
End if
Update T according to T= αT
End while
Stop criteria
Until ∆ is not significant or there are no improvements in the
makespan for small change in T or,
Complete the maximum number of iterations, known in advance
End.
Algorithm 2
Produce an initial solution using a disjunctive graph model
Define initial scheduling for all trains on all sections
Define initial sequencing of sections
Calculate initial makespan (currMakespan)
While (T is in cooling range)
Construct a new neighbourhood
Apply section neighbourhood technique
Apply adjacent pair wise techniques
Swap any two random sections or segments
Apply blocking section constraints or
Apply blocking segment constraints
Check the feasibility
Time window, section transition matrix, trains and sidings capacity
Keep the sequencing of trains without changing
Produce a new schedule and a new solution (new Makespan)
Evaluate the new makespan where ∆=new Makespan – currMakespan
If ∆<0 then
Accept the new state
Else
If Pr(accepted) = exp(-∆/T) > ε then
Accept the new state
Else
Reject the solution and return to the previous solution.
End if
End if
Update T according to T= αT
End while
Stop criteria
Until ∆ is not significant or there are no improvements in the
makespan for small change in T or,
Complete the maximum number of iterations, known in advance
End.
6.5 Tabu Search (TS) Technique
The TS technique was introduced by Glover (1989 and 1990) and starts with an
initial solution which it considers as the current seed. The objective function of the
current seed is then calculated. Using the neighbourhood structure, the
neighbourhoods of the current seed are determined. The objective function is
calculated for all neighbourhoods of the current seed. The best is selected and added
to the tabu list. If the new solution is better than the current solution, it is stored as
the new best solution and used as a new seed. These steps are repeated until the stop
criteria are satisfied.
6.5.1 Tabu Search Technique for Solving Job Shop Scheduling Problem
The TS Technique for the job shop scheduling problems follows the steps below:
Generate initial feasible scheduling as a current solution and the best solution
Set the tabu search parameters and assume that the tabu list is empty
Set the stop criterion
Define a specific number of iterations
Find no improvements of the solution
Obtain specific value of the objective function
Generate neighbourhoods of the current solution using the neighbourhood structure
Select a neighbourhood which is not tabu or satisfies a given aspiration criterion
Move this neighbourhood to a new solution
Update the tabu list
Store the new solution as the best solution
The objective function value is better for the new solution
Repeat the steps until a stop criterion is satisfied
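The TS steps above can be sketched generically. This is an illustrative sketch, not the thesis implementation: `neighbours` is a caller-supplied stand-in that returns (move, solution) pairs for the current solution.

```python
def tabu_search(initial, makespan, neighbours, max_iter=100, tabu_size=6):
    """Generic TS loop: evaluate all neighbours, move to the best one that
    is not tabu (or satisfies the aspiration criterion of beating the best
    solution so far), and keep a fixed-length tabu list of recent moves."""
    curr = best = initial
    tabu = []
    for _ in range(max_iter):
        candidates = [(m, s) for m, s in neighbours(curr)
                      if m not in tabu or makespan(s) < makespan(best)]
        if not candidates:
            break
        move, curr = min(candidates, key=lambda ms: makespan(ms[1]))
        tabu.append(move)
        if len(tabu) > tabu_size:
            tabu.pop(0)  # expire the oldest tabu move
        if makespan(curr) < makespan(best):
            best = curr
    return best
```

A move can be, for instance, a swapped pair of operations, matching the operation-swap neighbourhoods of Section 6.3.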
A numerical example is solved to explain the tabu search procedure in detail.
Tabu Search Numerical Example
The numerical Example 2.1 in Chapter 2 shows how TS works to solve scheduling
problems. An initial solution is assumed, with makespan minimisation as the objective
function. The neighbourhood structure is used to generate new solutions. Figure 6.9
shows that, after implementing 6 iterations, the best solution is makespan = 21.
Figure 6.9: Disjunctive graph of final solution for the 3 jobs and 3 machines example (makespan = 21)
Table 6.2 shows the results of all iterations for the previous example. Appendix B
shows the complete solution for the numerical example using TS.
Table 6.2: TS technique result for Example 2.1

Iteration | Cmax | Tabu list                                        | Best value
0         | -    | { }                                              | 21
1         | -    | {o8→o4}                                          | -
2         | -    | {o8→o4, o8→o1}                                   | -
3         | -    | {o8→o4, o8→o1, o9→o3}                            | -
4         | -    | {o8→o4, o8→o1, o9→o3, o4→o1}                     | -
5         | -    | {o8→o4, o8→o1, o9→o3, o4→o1, o5→o3}              | -
6         | -    | {o8→o4, o8→o1, o9→o3, o4→o1, o5→o3, o5→o9}       | -
* Cmax: Makespan
The Percentage Improvement of Tabu Search (PITS) is calculated as follows:
PITS = ((initial Cmax - TS Cmax)/initial Cmax)*100
PITS = ((29-21)/29)*100 = 27.59%.
A new TS algorithm has been developed for the sugarcane rail operations to solve
large-scale problems and improve the quality of the solutions, as shown in Section
6.5.2. This algorithm has two main parts: the first obtains the initial solution
using the shortest processing time (SPT) heuristic, and the second includes the tabu
search steps to improve the current solution.
6.5.2 A New Tabu Search Technique for Sugarcane Rail Cases
A new algorithm is used to solve the sugarcane rail transport system problem.
This algorithm includes two main parts.
Algorithm notations
k: index of trains; k = 1...K
s: index of sections; s = 1...S
o: index of operations; o = 1...O
rk: ready time of train k
pkso: processing time of train k on section s during operation o
dkso: start time of train k on section s during operation o
iter: index of iterations
max_iter: maximum number of iterations
Part1
Set s=1, o=1
For k=1 to K
Select min (rk +pkso)
Next k
For k=1 to K
For s=2 to S
For o=1 to O
Select min (dkso + pkso)
Next o
Next s
Next k
Obtain the schedule as initial solution using makespan (currMakespan)
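Part 1 is essentially a shortest-completion-time dispatching rule. A minimal sketch of the first-section selection, with invented ready and processing times (the data and the function name are ours):

```python
# Invented data for three trains on the first section (s = 1, o = 1).
ready = {"k1": 0, "k2": 3, "k3": 1}  # r_k: ready time of train k
proc = {"k1": 4, "k2": 2, "k3": 6}   # p_k11: processing time on section 1

def spt_order(ready, proc):
    """Dispatch trains in ascending order of r_k + p_k (ties by name),
    mirroring the min(r_k + p_kso) selection in Part 1."""
    return sorted(ready, key=lambda k: (ready[k] + proc[k], k))
```

With this data, `spt_order(ready, proc)` dispatches k1 first (completion 4), then k2 (5), then k3 (7).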
Part2:
If (iter ≤ max_iter) then
Building the neighbourhood structure for the initial solution in part1
Check the feasibility
Time window, section transition matrix, trains and sidings
capacity
Let (k, k') be two adjacent trains which require the same section s
for operations o and o' respectively
Swap (k, k') → (k', k)
Obtain new schedules
Calculate new makespan for all schedules (new Makespan)
Select the best solution
If newMakespan ≤ currMakespan then
Store the new solution as the best solution
Update the tabu list
Else
End if
End if
End
6.6 Metaheuristic Results of Sugarcane Rail Systems
The metaheuristic techniques, SA and TS, were investigated by comparing their
results to the initial solution, SPT, to optimise the efficiency of the sugarcane rail
transport system. The investigation took the form of a numerical experiment where
solutions were obtained for railway networks where the number of sections was
varied (10, 15 and 20) and the number of trains was varied (from 3 to 13) in a 1×10
and 2×13 factorial experiment as shown in Table 6.3. The experiment was
conducted using a PC with a Core 2 CPU chip at 2.39GHz speed and 3.50 GB of
RAM. Two parameters were examined to assess the techniques: CPU time and the
Percentage Improvement (PI) of the makespan of the TS and SA compared to the
makespan of the initial solution. The yellow colour highlights the best CPU time of
the metaheuristic techniques, while the green colour represents the best solution
quality.
Table 6.3 also shows that the TS results are better than the SA results for most of the
tested cases, and that the CPU time when applying the TS technique is shorter than for
SA. As a result, TS obtains its makespan faster than SA. The Percentage
Improvements of TS and SA (PITS and PISA) are calculated for all tested cases to
show exactly the metaheuristic effects on the initial solution. The best PITS and
PISA values are 13.61% and 13.33% respectively, using 20 sections and 8 trains.
Table 6.3: TS and SA results for sugarcane rail transport system under section blocking constraint

Case     | SPT   | TS    | PITS(%) | TS CPU(s) | SA    | PISA(%) | SA CPU(s)
(10/3)   | 9112  | 9029  | 0.91    | -         | 9029  | 0.91    | -
(10/4)   | 10146 | 9830  | 3.11    | -         | 9830  | 3.11    | -
(10/5)   | 13356 | -     | 3.96    | -         | -     | -       | -
(10/6)   | 14836 | -     | 5.79    | -         | -     | 4.82    | -
(10/7)   | 15806 | -     | 8.48    | -         | -     | 7.32    | -
(10/8)   | 16840 | -     | 8.72    | 158       | -     | 7.01    | 155
(10/9)   | 20050 | -     | 6.05    | 169       | -     | 4.41    | 200
(10/10)  | 21530 | -     | 3.36    | 180       | -     | 3.25    | 212
(15/3)   | 11683 | 11600 | 0.59    | 7         | 11600 | 0.59    | 75
(15/4)   | 12563 | 12154 | 3.25    | 33        | 12154 | 3.25    | 124
(15/5)   | 16464 | 14376 | 4.34    | 74        | 14376 | 3.88    | 159
(15/6)   | 17685 | 16230 | 8.22    | 143       | 16472 | 6.86    | -
(15/7)   | 18377 | -     | 13.32   | 163       | -     | 11.18   | 226
(15/8)   | 19257 | 17021 | 11.61   | 188       | -     | 10.62   | 239
(15/9)   | 23158 | -     | 8.83    | 230       | -     | 6.70    | 277
(15/10)  | 24379 | -     | 7.59    | 290       | -     | 6.885   | 319
(15/11)  | 25071 | -     | 5.72    | 298       | -     | 3.63    | 359
(15/12)  | 25951 | 24326 | 6.26    | 323       | -     | 6.16    | 385
(15/13)  | 29852 | 27968 | 6.31    | 337       | -     | 6.08    | 396
(20/3)   | 13579 | -     | 0.49    | 12        | -     | 0.49    | 125
(20/4)   | 14288 | 13926 | 2.53    | 42        | -     | 2.53    | 152
(20/5)   | 18684 | -     | 3.49    | 117       | -     | 3.49    | 202
(20/6)   | 19720 | -     | 8.67    | 177       | -     | 7.58    | 230
(20/7)   | 20273 | -     | 12.57   | 211       | -     | 12.04   | 260
(20/8)   | 20982 | 18126 | 13.61   | 268       | 18185 | 13.33   | 345
(20/9)   | 25378 | -     | 12.05   | 332       | -     | 6.72    | 380
(20/10)  | 26414 | -     | 7.29    | 363       | -     | 7.29    | 418
(20/11)  | 26967 | -     | 5.76    | 402       | -     | 5.06    | 464
(20/12)  | 27676 | 25951 | 6.23    | 428       | -     | 5.92    | 507
(20/13)  | 32072 | -     | 8.19    | 499       | -     | 7.71    | 543
The worst PITS and PISA values occurred when using a small number of trains and
a large number of sections, such as the 3 train and 20 section case. There is a
strong relationship between the size of the case and the improvement in solution
quality achieved by the metaheuristic techniques: as the number of trains increases,
the improvement over the initial solution increases until it reaches 7 trains in the
15 section group, and 8 trains in the 10 and 20 section groups; beyond this point the
improvements start to decrease. For example, in the 15 section group, PITS and
PISA reached 13.32% and 11.18% respectively with 7 trains, and then decreased.
Figure 6.10 summarises the makespan results for the SA and TS solution techniques
for the numerical experiment. The solution quality (compared to the initial solution)
was generally better for fewer sections and more trains.
Figure 6.10: Metaheuristic techniques using makespan for different cases
Figure 6.11 shows that the CPU time increased relatively consistently with the
number of sections and the number of trains. The CPU time of TS is shorter than the
CPU time of SA in all tested cases.
Figure 6.11: CPU time of TS and SA for different cases
6.6.1 TS and SA Results by Changing Number of Trains
The effects of changing the number of trains, for a given number of sections, on TS
and SA are investigated in this section. The results for 10, 15 and 20 sections with
varying numbers of trains are shown in Figures 6.12a, 6.12b and 6.12c.
Generally, TS and SA work well with all section groups and different numbers of
trains, particularly in the cases with 7 and 8 trains. With a small number of
trains, there is no significant difference between the results of the SPT technique and
those of the TS and SA techniques. SPT works well with a small number of trains on the rail
network because the total number of conflict points decreases, and the total
waiting time at any section therefore decreases as well.
Figure 6.12a shows the 10 section group results with different numbers of trains. In
this group, TS and SA have the same solution in 2 out of 8 cases, while TS is better
than SA in 6 out of 8 cases; that is, TS and SA agree in 25% of the tested cases in
this group and TS is better than SA in the remaining 75%. The best PITS value is in
case (10/8), where 8 trains work with 10 sections.
The 15 section group results with different numbers of trains are shown in Figure
6.12b. One case out of 11 has the same TS and SA results, while TS works better
than SA in 10 out of 11 cases. As a result, TS outperforms SA in about 91% of the
tested cases in this group, while SA and TS produce the same results in the remaining
9% of these cases. The case (15/7), with 7 trains and 15 sections, has the best PITS
and PISA values.
Figure 6.12a: Makespan of metaheuristics on 10 sections
Figure 6.12b: Makespan of metaheuristics on 15 sections
Figure 6.12c shows that, for the 20 section group, three out of 11 cases
have the same TS and SA results, while TS is better than SA in 8 out of 11 cases. As a
result, TS is better than SA in about 73% of the tested cases in this group, while
around 27% of them have the same results. The best PITS and PISA values in
this group occur in case (20/8), where the number of trains is 8 with 20
sections.
Figure 6.12c: Makespan of metaheuristics on 20 sections
6.7 Hybrid Metaheuristic Techniques for Sugarcane Rail Systems
Hybrid techniques integrate heuristic and metaheuristic techniques
consecutively to produce good solutions in a reasonable time. Generally, hybrid
metaheuristic techniques work better than metaheuristic techniques such as SA and
TS individually. The TS technique uses a tabu list, which includes recently visited
solutions, to avoid local optima. TS has a deterministic nature, and for that reason
cannot always avoid cycling. On the other hand, SA is a stochastic search technique that
takes a long time but can escape cycling. Therefore, TS and SA can be integrated to
improve the quality of the solutions and reduce the CPU time of implementing the
case studies.
Many researchers have introduced hybrid techniques based on TS and SA to solve real
scheduling problems, such as group scheduling and machining speed selection
problems (Zolfaghari & Liang, 1999), modelling machine loading problems
(Swarnkar & Tiwari, 2004), and packing circles into a larger container circle (Zhang &
Deng, 2005), all using the integration of TS and SA for minimisation problems.
This section presents two hybrid techniques, TS/SA and SA/TS, to
solve the rail system problem with the specific sugarcane system constraints. These
techniques depend on the integration of the two metaheuristic techniques, SA and TS,
with SPT as a heuristic technique. The TS/SA and SA/TS sensitivity analysis is
introduced in this chapter.
The main advantages of these techniques are easy implementation and good quality
solutions that are better than the individual metaheuristic techniques. The main
disadvantage of these techniques is that CPU time is larger than for the individual
metaheuristic techniques.
6.7.1 Hybrid SA/TS Technique
The SA/TS hybrid technique uses the SA solution as an initial solution for the
TS technique. The SA solution is stored for comparison with the TS solution at each
iteration so as to obtain better solutions. The final SA solution is therefore integrated with
all individual TS iterations to improve the SA solution, as shown in Figure 6.13. The
three main steps are explained as follows:
Step 1 is to obtain an initial solution using the SPT heuristic.
Step 2 is to obtain the SA solution using the initial solution in step 1.
Step 3 is to use the final solution in step 2 as an initial solution to implement TS as a
hybrid technique. The main aim of step 3 is to improve the quality of the SA
solution.
The stopping point in this technique is based on the maximum number of iterations in TS;
however, a good hybrid technique solution can be achieved without completing all TS
iterations (terminating point), as shown in Figure 6.13.
Figure 6.13: Hybrid SA/TS technique
This SA/TS hybrid technique procedure is detailed as follows:
Select the initial simulated annealing parameters
Define a large value of the temperature T=T0
Select the value of the decreasing parameter α; 0<α<1
Set an initial feasible scheduling using SPT technique
Evaluate the initial value of objective function (makespan)
Apply simulated annealing technique
Obtain MakespanSA
Select initial tabu search parameters
Tabu list ={ }.
Maximum number of iterations, max_iter
Set the simulated annealing solution as an initial feasible schedule
While (iter <=max_iter)
Generate a new neighbourhood for new solution
224
Check the feasibility
Time window, sections transition matrix, trains and sidings capacity
Evaluate the new Makespan SA/TS
∆=new Makespan SA/ TS – Makespan SA
If ∆ ≤ 0 then
Accept the new solution
Store the new solution
Else
Reject the new solution
Update tabu list
End if
iter =iter+1
End while
End.
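Both hybrids are compositions of two search stages, the second starting from the first's best solution. A minimal sketch (the function name is ours; `stage1` and `stage2` stand in for SA and TS, in either order):

```python
def hybrid(initial, makespan, stage1, stage2):
    """Chain two search stages: stage2 starts from stage1's best solution,
    and the better of the two results is kept. With stage1 = SA and
    stage2 = TS this is the SA/TS scheme; swapping them gives TS/SA."""
    s1 = stage1(initial)   # e.g. the SA solution (MakespanSA)
    s2 = stage2(s1)        # e.g. TS started from the SA solution
    return min(s1, s2, key=makespan)
```

For instance, with toy numeric stages, `hybrid(10, abs, lambda x: x // 2, lambda x: x - 1)` runs 10 → 5 → 4 and keeps 4.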
Figure 6.14 details the main steps of the hybrid SA/TS technique.
Figure 6.14: The detailed hybrid SA/TS technique
6.7.2 Hybrid TS/SA Technique
The TS/SA hybrid technique uses the TS solution as an initial solution and applies
SA to improve the quality of the final solution. The main idea of this technique is to
compare the TS solution, as the initial solution, with all SA iterations, as shown in
Figure 6.15. The three main steps of the TS/SA hybrid technique are as follows:
Step 1 is to obtain an initial solution using the SPT heuristic.
Step 2 is to obtain the solution of TS using the initial solution in step 1.
Step 3 is to use the final solution in step 2 as an initial solution to implement SA as a
hybrid technique. The main aim of step 3 is to improve the quality of the TS
solution.
The stopping point in this technique is based on the maximum number of iterations in
SA; however, a good hybrid technique solution can be achieved without completing all
SA iterations (terminating point), as shown in Figure 6.15.
Figure 6.15: Hybrid TS/SA technique
The hybrid TS/SA technique procedure is detailed as follows:
Select initial tabu search parameters
Set Tabu list ={ }
Set number of iterations as a maximum number, max_iter
Set an initial feasible scheduling by using SPT technique
Evaluate the initial value of objective function (makespan)
Apply tabu search technique
Obtain MakespanTS
Set the tabu search solution as an initial feasible schedule
Select initial simulated annealing parameters
Define a large value for the temperature T=T0
Select the value of the decreasing parameter α; 0<α<1
While (T is in cooling range)
Design a neighbourhood for the current solution
Evaluate the new Makespan TS/SA
∆=new Makespan TS/SA – MakespanTS
If ∆<0 then
Accept the hybrid TS/SA solution
Else
If Pr(accepted) = exp(-∆/T) > ε then
Accept the hybrid TS/SA solution
Store the new solution
Else
Reject the hybrid TS/SA solution
End if
End if
Update T according to T= αT
End while
End.
Figure 6.16 shows the main steps of the hybrid TS/SA technique. The main loops of
SPT, TS and SA that produce the hybrid solution are shown below.
Figure 6.16: The detailed hybrid TS/SA technique
6.7.3 Hybrid Techniques Result for Sugarcane Rail Systems
The hybrid techniques, hybrid SA/TS and hybrid TS/SA, were investigated by
comparing their results to those using individual metaheuristic techniques, SA and
TS and also to the initial solution.
The investigation took the form of a numerical experiment where solutions were
obtained for railway networks where the number of sections was varied (15, 20, 25
and 30) and the number of trains was varied (4, 8 and 12) in a 4×3 factorial
experiment. The experiment was conducted using a PC with a Core 2 CPU chip at
2.39GHz speed and 3.50 GB of RAM. Two parameters were examined to assess the
techniques: CPU time and the Percentage Improvement of the makespan of the
hybrid SA/TS and hybrid TS/SA compared to the makespan of the initial solution
(PISA/TS and PITS/SA). The initial solution was obtained using the SPT, and this
technique was used to obtain the TS and SA results.
Table 6.4 shows that the hybrid techniques work better than the individual
metaheuristic techniques, although the CPU time in all cases is higher than for the
individual metaheuristic techniques (SA or TS). The hybrid PISA/TS and PITS/SA
values indicate the improvement in makespan for each solution technique, and reveal
which technique is more effective in producing the results.
Table 6.4 shows that the hybrid SA/TS technique works better than hybrid TS/SA
for cases (15/8), (20/8), (20/12), (25/12), (30/8) and (30/12), while hybrid TS/SA
works better only for case (15/12). The different metaheuristic techniques have the
same PISA/TS and PITS/SA values when using 4 trains in each group, where the
improvement value is reduced as the number of sections increases. The best solution
improvement was in case (20/8) for all solution techniques. All section groups with 8
trains worked well and had better improvements than the other cases in the same
group. Generally, the improvements were reduced as the number of sections increased
to 30. The solution quality of the TS technique is better than that of the SA technique,
and the CPU time for TS is still shorter than for SA. The yellow colour indicates the
best CPU time among the different metaheuristic techniques, while the green colour
indicates the best percentage improvement of makespan.
Table 6.4: Comparison of makespan of TS, SA and hybrid techniques

Case (sections/trains) | Initial solution (SPT) | TS: Makespan / PI (%) / CPU (s) | SA: Makespan / PI (%) / CPU (s) | Hybrid SA/TS: Makespan / PI (%) / CPU (s) | Hybrid TS/SA: Makespan / PI (%) / CPU (s)
(15/4) | 12563 | 12154 / 3.25 / 33 | 12154 / 3.25 / 124 | 12154 / 3.25 / 141 | 12154 / 3.25 / 139
(15/8) | 19257 | 17021 / 11.61 / 188 | — / 10.62 / 239 | 16809 / 12.71 / 448 | 17097 / 11.215 / 447
(15/12) | 25951 | 24326 / 6.26 / 323 | — / 6.16 / 385 | — / 7.61 / 692 | 23963 / 7.66 / 675
(20/4) | 14288 | 13926 / 2.53 / 42 | — / 2.53 / 152 | 13926 / 2.53 / 228 | 13926 / 2.53 / 224
(20/8) | 20982 | 18126 / 13.61 / 268 | 18185 / 13.33 / 345 | 17620 / 16.02 / 624 | 17717 / 15.56 / 614
(20/12) | 27676 | 25951 / 6.23 / 448 | — / 5.92 / 507 | 25326 / 8.49 / 1010 | 25542 / 7.71 / 983
(25/4) | 18234 | 17818 / 2.28 / 47 | 17818 / 2.28 / 196 | 17818 / 2.28 / 268 | 17818 / 2.28 / 261
(25/8) | 24828 | 23879 / 3.82 / 347 | 23940 / 3.58 / 398 | 23393 / 5.78 / 736 | 23393 / 5.78 / 731
(25/12) | 31522 | 30243 / 4.1 / 547 | 30383 / 3.61 / 647 | 29977 / 4.9 / 1240 | 30356 / 3.699 / 1221
(30/4) | 23221 | 22812 / 1.76 / 63 | 22812 / 1.76 / 246 | 22812 / 1.76 / 320 | 22812 / 1.76 / 324
(30/8) | 29915 | 28590 / 4.43 / 437 | 28590 / 4.43 / 496 | 28580 / 4.461 / 905 | 28589 / 4.43 / 927
(30/12) | 36609 | 35524 / 2.96 / 690 | 35546 / 2.9 / 769 | 34929 / 4.59 / 1451 | 35525 / 2.96 / 1453
(— indicates a makespan value not given in the original table.)
Figure 6.17 summarises the makespan results of the hybrid techniques (hybrid
SA/TS and hybrid TS/SA) compared to heuristic and metaheuristic techniques. The
solution quality (compared to the initial solution, SA and TS) was generally better
for the two hybrid techniques.
Figure 6.17: Comparison of TS, SA and hybrid techniques using makespan
As stated above, the main disadvantage of the hybrid TS/SA and SA/TS techniques is that they are time consuming: their CPU times are higher than those of the individual metaheuristic techniques, TS and SA. Figure 6.18 shows the CPU time for all cases and techniques. In general, hybrid SA/TS consumes more time than hybrid TS/SA in all section groups except group 30, where the hybrid SA/TS CPU time is lower than that of hybrid TS/SA. In all cases, the CPU time of TS is shorter than that of SA.
Figure 6.18: CPU time of TS, SA and hybrid techniques for different cases
6.7.4 Analysis of Hybrid Techniques
This section examines the results of each section group with different numbers of trains, as a sensitivity analysis of the hybrid techniques: the makespan is obtained for different numbers of trains with the same number of sections. Figure 6.19(a) shows that, in the 15 section group, the two hybrid techniques give identical results in 33.33% of the cases, particularly with a small number of trains (4). Hybrid SA/TS is better than hybrid TS/SA in 33.33% of the cases, especially with 8 trains, and hybrid TS/SA is better than hybrid SA/TS in the remaining 33.33%, particularly with 12 trains. Overall, then, the two hybrid techniques perform comparably on the 15 section group.
Figure 6.19(b) focuses on the hybrid technique results for the 20 section group with different numbers of trains. Both hybrid techniques achieve the same percentage improvement in 33.33% of the cases in this group, while SA/TS is better than TS/SA in the remaining 66.66%.
Figure 6.19a: Makespan of hybrid techniques on 15 sections    Figure 6.19b: Makespan of hybrid techniques on 20 sections
The hybrid techniques in the 25 section group are close to each other: 66.66% of the cases give the same result for both techniques, while hybrid SA/TS is better than TS/SA in the remaining 33.33%, as shown in Figure 6.19(c).
Figure 6.19(d) shows the results for the 30 section group. Hybrid SA/TS performs better than TS/SA in 66.66% of the cases in this group, while the two give the same result in the remaining 33.33%, particularly with a small number of trains.
Figure 6.19c: Makespan of hybrid techniques on 25 sections    Figure 6.19d: Makespan of hybrid techniques on 30 sections
6.8 Hyper Metaheuristic Techniques for Sugarcane Rail Transport Systems
Hyper techniques are a complete integration of different metaheuristic techniques, heuristic techniques, or both, intended to improve both the solution quality and the CPU time. In this research, SPT (a heuristic technique) and SA and TS (metaheuristic techniques) are integrated interactively, with an interaction at every iteration of TS and SA, to obtain good solutions in a reasonable time.
The hybrid approach applies the solution techniques consecutively, whereas the hyper approach applies them interactively. The proposed hyper approach therefore has a greater chance of finding the global optimal solution in a reasonable time than the hybrid approach, because at each hyper iteration local or suboptimal solutions are discarded, pushing the search toward global optimality. In addition, hybrid approaches consume more CPU time than hyper approaches when solving large-scale problems.
Many researchers have examined and applied hyper techniques, from different perspectives, to a variety of scheduling problems: Burke et al. (2007) developed hyper techniques for timetabling problems; Cuesta et al. (2005) used ant colony techniques to develop a hyper approach for the 2D bin packing problem; and Cowling et al. (2003) developed a hyper-heuristic approach for scheduling a sales summit, combining local search techniques with several heuristics. Hyper techniques have two main steps: heuristic selection and move acceptance. Most hyper techniques integrate deterministic and non-deterministic heuristic techniques.
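The two-step structure of heuristic selection and move acceptance can be sketched generically. In the illustrative Python skeleton below (all names are hypothetical, not from the thesis), a random choice plays the role of heuristic selection and an accept-if-not-worse rule plays the role of move acceptance:

```python
import random

def hyper_heuristic(solution, cost, low_level_heuristics, accept, steps=100, seed=0):
    """Generic hyper-heuristic loop: (1) select a low-level heuristic,
    (2) apply it to the incumbent, (3) decide acceptance of the result."""
    rng = random.Random(seed)
    best, best_cost = solution, cost(solution)
    for _ in range(steps):
        h = rng.choice(low_level_heuristics)    # step 1: heuristic selection
        candidate = h(best, rng)                # step 2: apply the move
        if accept(cost(candidate), best_cost):  # step 3: move acceptance
            best, best_cost = candidate, cost(candidate)
    return best, best_cost

# Toy use: order jobs to minimise the number of inversions
def swap_random(p, rng):
    i, j = rng.sample(range(len(p)), 2)
    q = list(p); q[i], q[j] = q[j], q[i]
    return q

inversions = lambda p: sum(a > b for k, a in enumerate(p) for b in p[k+1:])
perm, inv = hyper_heuristic([4, 1, 3, 0, 2], inversions,
                            [swap_random], lambda new, cur: new <= cur)
```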
This research uses two hyper techniques, SA/TS and TS/SA, which combine the TS technique, deterministic in nature, with SA, a stochastic search technique. The two are used interactively and consecutively, with complete interaction between them; the sequencing of the metaheuristics distinguishes the two variants, namely TS followed by SA, or SA followed by TS.
6.8.1 Hyper SA/TS Technique
The hyper SA/TS technique aims to improve the SA results by integrating a TS iteration with the SA solution at every iteration: after each SA iteration, one TS iteration is entered to improve the current solution. Figure 6.20 shows that the first SA solution (after the first iteration) is combined with the first TS solution to produce SA solution two; SA solution two is combined with TS solution two to produce SA solution three; and so on. The stopping point of the hyper SA/TS algorithm depends on the number of SA iterations (the temperature T and the parameter α).
Figure 6.20: Hyper SA/TS technique
A good hyper SA/TS solution can be achieved without completing all iterations, unlike the hybrid SA/TS technique, where a good solution requires running SA to completion. As a result, the hyper SA/TS technique is less time consuming than the hybrid SA/TS technique. The hyper SA/TS algorithm is detailed as follows:
Select the initial simulated annealing parameters:
    A large value for the temperature, T = T0
    The value of the decreasing parameter α; 0 < α < 1
    Tabu list = { }
    Set the maximum number of iterations, max_iter
Set an initial feasible schedule using the SPT technique
Evaluate the initial value of the objective function (makespan)
While (T is in the cooling range or iter <= max_iter)
    Design a neighbourhood for the current solution
    Check feasibility:
        time windows, section transition matrix, train and siding capacities
    Evaluate the new MakespanSA, where ∆ = MakespanSA − currMakespan
    If ∆ < 0 then
        Accept the new solution
        Generate a new neighbourhood for the new solution    (TS step)
        Evaluate the new MakespanSA/TS
        ∆1 = MakespanSA/TS − MakespanSA
        If ∆1 < 0 then
            Store the new solution
        Else
            Reject the new solution
        End if
    Else
        If Pr(accept) = exp(−∆/T) > ε then
            Accept the new solution
        Else
            Reject the solution
        End if
    End if
    Set iter = iter + 1
    Update T according to T = αT
    Update the tabu list (includes the best solution, to prevent repetition of that solution)
End while
End

Figure 6.21 shows the main loops and the integration of the SPT, SA and TS techniques to produce the hyper SA/TS solution.
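The algorithm above can also be sketched in runnable form. The following Python fragment is one plausible reading of the hyper SA/TS loop on a generic permutation problem, not the thesis implementation: the toy objective (total completion time of jobs on one machine), the swap neighbourhood, and all parameter values are illustrative assumptions, and the standard acceptance probability exp(−∆/T) is used.

```python
import math, random

def hyper_sa_ts(objective, initial, neighbour, t0=100.0, alpha=0.9,
                max_iter=300, tabu_size=20, seed=0):
    """Hyper SA/TS sketch: each accepted SA move is immediately refined by
    one TS-style step, and a tabu list stores best solutions to avoid repeats."""
    rng = random.Random(seed)
    current, curr_cost = list(initial), objective(initial)
    best, best_cost = list(current), curr_cost
    tabu, t = [], t0
    for _ in range(max_iter):
        cand = neighbour(current, rng)
        delta = objective(cand) - curr_cost
        if delta < 0:                                   # SA: improving move
            current, curr_cost = cand, curr_cost + delta
            refined = neighbour(current, rng)           # TS: one extra step
            if objective(refined) < curr_cost and tuple(refined) not in tabu:
                current, curr_cost = refined, objective(refined)
        elif math.exp(-delta / t) > rng.random():       # SA: uphill escape
            current, curr_cost = cand, curr_cost + delta
        if curr_cost < best_cost:
            best, best_cost = list(current), curr_cost
            tabu = (tabu + [tuple(best)])[-tabu_size:]  # forbid revisiting
        t *= alpha                                      # cooling: T = αT
    return best, best_cost

# Toy objective: total completion time of jobs on one machine
times = [7, 2, 9, 4, 5, 1]
def total_completion(perm):
    c = total = 0
    for j in perm:
        c += times[j]; total += c
    return total

def swap(p, rng):
    i, j = rng.sample(range(len(p)), 2)
    q = list(p); q[i], q[j] = q[j], q[i]
    return q

best, cost = hyper_sa_ts(total_completion, range(len(times)), swap)
```

The SPT order of this toy instance gives the optimum (70), so the returned cost lies between 70 and the initial cost of 111.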
Figure 6.21: The detailed hyper SA/TS technique
6.8.2 Hyper TS/SA Technique
The hyper TS/SA technique integrates TS and SA, with the initial solution for TS obtained by SPT. TS is used at the beginning of each iteration and the solution is checked at the end of each iteration by applying an SA iteration. Figure 6.22 shows that the first TS solution (after the first iteration) is combined with the first SA solution to produce TS solution two; TS solution two is combined with SA solution two to produce TS solution three. The stopping point of the hyper TS/SA algorithm depends on the number of TS iterations. A good hyper TS/SA solution can be achieved without completing all iterations, unlike the hybrid TS/SA technique, where a good solution requires running TS to completion. As a result, the hyper TS/SA technique is less time consuming than the hybrid TS/SA technique.
Figure 6.22: Hyper TS/SA technique
The main steps of the hyper TS/SA algorithm are explained as follows:
Set the initial tabu search and simulated annealing parameters:
    A large value for the temperature, T = T0
    The value of the decreasing parameter α; 0 < α < 1
    Tabu list = { }
    Maximum number of iterations, max_iter
Set an initial feasible schedule using the SPT technique
Evaluate the initial value of the objective function (makespan)
While (iter <= max_iter or T is in the cooling range)
    Design a neighbourhood for the current solution
    Check feasibility:
        time windows, section transition matrix, train and siding capacities
    Evaluate the new MakespanTS, where ∆ = MakespanTS − currMakespan
    If ∆ < 0 then
        Accept the new state
    Else
        Reject the solution
        Apply the simulated annealing technique:    (SA step)
        If Pr(accept) = exp(−∆/T) > ε then
            Accept the new state
        Else
            Reject the solution
        End if
    End if
    Update T according to T = αT
    Update the tabu list (includes the best solution, to prevent repetition of that solution)
End while
Stopping criteria:
    ∆ is not significant, or there is no improvement in the makespan for a small change in T
    The total number of iterations has been completed
End
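Symmetrically to the SA/TS sketch, the TS/SA loop can be rendered as follows; again this is an illustrative reading rather than the thesis implementation, with TS accepting improving non-tabu moves and a rejected move getting one SA-style chance of acceptance with probability exp(−∆/T). The toy objective and all parameters are assumptions.

```python
import math, random

def hyper_ts_sa(objective, initial, neighbour, t0=100.0, alpha=0.9,
                max_iter=300, tabu_size=20, seed=0):
    """Hyper TS/SA sketch: TS drives the search; moves TS rejects get one
    SA-style probabilistic acceptance; a tabu list blocks repeated solutions."""
    rng = random.Random(seed)
    current, curr_cost = list(initial), objective(initial)
    best, best_cost = list(current), curr_cost
    tabu, t = [], t0
    for _ in range(max_iter):
        cand = neighbour(current, rng)
        if tuple(cand) in tabu:
            continue                                    # tabu move: skip
        delta = objective(cand) - curr_cost
        if delta < 0:                                   # TS: accept improving
            current, curr_cost = cand, curr_cost + delta
        elif math.exp(-delta / t) > rng.random():       # SA: uphill chance
            current, curr_cost = cand, curr_cost + delta
        if curr_cost < best_cost:
            best, best_cost = list(current), curr_cost
            tabu = (tabu + [tuple(best)])[-tabu_size:]
        t *= alpha                                      # cooling: T = αT
    return best, best_cost

# Toy objective: total completion time of four jobs on one machine
times = [7, 2, 9, 4]
def cost(p):
    c = s = 0
    for j in p:
        c += times[j]; s += c
    return s

def swap(p, rng):
    i, j = rng.sample(range(len(p)), 2)
    q = list(p); q[i], q[j] = q[j], q[i]
    return q

best, val = hyper_ts_sa(cost, range(4), swap)
```

For this instance, the SPT order gives the optimum (43) and the initial order costs 56, so the returned value lies between the two.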
Figure 6.23 shows the main loops and the integration of SPT, TS, and SA techniques
to produce the hyper TS/SA solution.
Figure 6.23: The detailed hyper TS/SA technique
6.8.3 Hyper Techniques Results for Sugarcane Rail Systems
The hyper techniques were investigated by comparing their results to those of the individual metaheuristic techniques, SA and TS, and to the initial solution. The same cases used to investigate the hybrid techniques in Section 6.7.3 are used to investigate the hyper techniques in this section. The investigation took the form of a numerical experiment in which solutions were obtained for railway networks with varying numbers of sections (15, 20, 25 and 30) and trains (4, 8 and 12) in a 4×3 factorial experiment, as shown in Table 6.5. The experiment was conducted on a PC with a Core 2 CPU at 2.39 GHz and 3.50 GB of RAM. Two measures were examined to assess the techniques: CPU time, and the Percentage Improvement (PI) of the makespan of hyper SA/TS and hyper TS/SA compared to the makespan of the initial solution.
Generally, Table 6.5 shows that hyper TS/SA works better than hyper SA/TS in most cases, improving the quality of the makespan solutions. Cases (20/8), (20/12), (25/8) and (30/8) improved more through hyper TS/SA than through hyper SA/TS, while case (25/12) improved more through hyper SA/TS. Both hyper techniques give the same results for cases (15/4), (15/8), (15/12), (20/4), (25/4), (30/4) and (30/12). All section groups with 8 trains worked well and displayed the greatest improvements, especially section group 20. With 4 trains, both techniques gave the same results and there was no improvement over the individual metaheuristic techniques. In Table 6.5, yellow shading marks the best CPU time among the metaheuristic techniques, while green shading marks the best percentage improvement of makespan.
Table 6.5: Metaheuristic and hyper techniques results for different cases
Case (sections/trains) | Initial solution (SPT) | TS: Makespan / PI (%) / CPU (s) | SA: Makespan / PI (%) / CPU (s) | Hyper SA/TS: Makespan / PI (%) / CPU (s) | Hyper TS/SA: Makespan / PI (%) / CPU (s)
(15/4) | 12563 | 12154 / 3.25 / 33 | 12154 / 3.25 / 124 | 12154 / 3.25 / 66 | 12154 / 3.25 / 51
(15/8) | 19257 | 17021 / 11.61 / 188 | — / 10.62 / 239 | 16917 / 12.15 / 170 | 16917 / 12.15 / 164
(15/12) | 25951 | 24326 / 6.26 / 323 | — / 6.16 / 385 | 24326 / 6.26 / 292 | 24326 / 6.26 / 251
(20/4) | 14288 | 13926 / 2.53 / 42 | — / 2.53 / 152 | 13927 / 2.53 / 100 | 13927 / 2.53 / 83
(20/8) | 20982 | 18126 / 13.61 / 268 | 18185 / 13.33 / 345 | 17656 / 15.85 / 258 | 17652 / 15.87 / 231
(20/12) | 27676 | 25951 / 6.23 / 448 | — / 5.92 / 507 | 25512 / 7.82 / 454 | 25326 / 8.49 / 402
(25/4) | 18234 | 17818 / 2.28 / 47 | 17818 / 2.28 / 196 | 17818 / 2.28 / 118 | 17818 / 2.28 / 96
(25/8) | 24828 | 23879 / 3.82 / 347 | 23940 / 3.58 / 398 | 23763 / 4.29 / 323 | 23592 / 4.98 / 282
(25/12) | 31522 | 30243 / 4.1 / 547 | 30383 / 3.61 / 647 | 29637 / 5.98 / 538 | 30146 / 4.365 / 449
(30/4) | 23221 | 22812 / 1.76 / 63 | 22812 / 1.76 / 246 | 22812 / 1.76 / 147 | 22812 / 1.76 / 122
(30/8) | 29915 | 28590 / 4.43 / 437 | 28590 / 4.43 / 496 | 28590 / 4.43 / 384 | 28569 / 4.5 / 332
(30/12) | 36609 | 35524 / 2.96 / 690 | 35546 / 2.9 / 769 | 35192 / 3.87 / 628 | 35192 / 3.87 / 544
(— indicates a makespan value not given in the original table.)
Figure 6.24 summarises the makespan results for the two hyper techniques compared with the heuristic and metaheuristic techniques. The solution quality (compared to the initial solution, SA and TS) was generally better for the two hyper techniques.
Figure 6.24: Metaheuristic and hyper techniques results for different cases
Figure 6.25 shows the CPU time for all cases using the hyper techniques; both hyper techniques improve on the metaheuristic CPU times in all cases. The hyper TS/SA CPU time is better than that of hyper SA/TS, and increasing the number of sections or trains increases the CPU time.
Figure 6.25: Comparison of CPU time of TS, SA and hyper techniques
6.8.4 Analysis of Hyper Techniques
This section presents the sensitivity analysis of the two hyper techniques, based on changing the number of trains while keeping the number of sections fixed. Figure 6.26(a) shows the results for the 15 section group with different numbers of trains: both hyper techniques give the same results in all cases of this group and show good improvement in the 8-train case.
In Figure 6.26(b), hyper TS/SA performs better than hyper SA/TS in 66.66% of the cases in the 20 section group, while the two give the same results in the remaining 33.33%, particularly with a small number of trains (4). The 8-train, 20-section case gave the best result for both hyper techniques across all cases in all section groups, with the highest PISA/TS and PITS/SA values.
Figure 6.26a: Makespan of hyper techniques on 15 sections    Figure 6.26b: Makespan of hyper techniques on 20 sections
In the 25 section group (Figure 6.26(c)), the two hyper techniques have the same percentage improvement in 33.33% of the cases, notably the 4-train case. Hyper TS/SA found a better solution for the 8-train case, while hyper SA/TS was better for the 12-train case. The two hyper techniques have nearly the same efficiency in the 30 section group, with the same solution quality except in case (30/8), where the percentage improvement of hyper TS/SA is higher by a small margin of 0.07%, as shown in Figure 6.26(d).
Figure 6.26c: Makespan of hyper techniques on 25 sections    Figure 6.26d: Makespan of hyper techniques on 30 sections
6.9 Hybrid and Hyper Metaheuristic Techniques Test Cases
Hybrid and hyper techniques were used in this research to solve many cases of the sugarcane rail transport system. In general, the solution quality of the hybrid techniques in tested cases such as (15/8), (15/12), (20/8), (20/12), (30/8) and (30/12) is better than that of the hyper techniques, while for case (25/12) the percentage improvement obtained by the hyper SA/TS technique exceeds that of the hybrid techniques. Cases (15/4), (20/4), (25/4) and (30/4) have the same percentage improvement for all techniques, as shown in Figure 6.27.
Figure 6.27: Hybrid and hyper techniques results using makespan for some tested cases
While the hybrid techniques give good results in many cases, they remain time consuming. The hyper techniques produce the same results in many cases, and results close to those of the hybrid techniques in the others, in a much shorter time. Figure 6.28 shows the CPU time of the hybrid and hyper techniques for the tested cases; the hyper techniques consistently require less CPU time than the hybrid techniques to produce a solution.
Figure 6.28: CPU time of some tested cases using hybrid and hyper techniques
6.10 Hyper and Hybrid Metaheuristic Technique (TS/SA) and MIP
The hybrid and hyper techniques were investigated by comparing their results to those of the individual metaheuristic techniques, SA and TS, as well as to the initial solution, the lower bound solution and the solution obtained using constraint programming techniques.
Table 6.6 compares the hybrid and hyper metaheuristic techniques with the optimal mixed integer programming (MIP) solution obtained using CPLEX software. The hybrid and hyper techniques handle all problems, while MIP works well only for small problems: CPLEX becomes time consuming for large-scale problems, with the CPU time increasing sharply from 0.66 to 774 seconds as the number of trains grows from 4 to 8 on 15 sections. MIP is not applicable to cases with large numbers of trains and sections, such as (15/12), (20/8), (20/12), (25/8), (25/12), (30/8) and (30/12), whereas the hybrid and hyper techniques apply to all cases. The best improvement occurs in case (20/8), where hybrid SA/TS achieves 16.02% and hyper TS/SA 15.87%. Generally, the improvements shrink as the number of sections increases to 30. In Table 6.6, yellow shading marks the best CPU time among the metaheuristic techniques, while green shading marks the best percentage improvement of makespan.
Table 6.6: Metaheuristic, hybrid, hyper, and MIP results of different tested cases
Case (Sections/Trains) | Variables | Constraints | Initial solution (SPT) | MIP-CPLEX optimal: PI (%) / CPU (s) | TS: PI (%) / CPU (s) | SA: PI (%) / CPU (s) | Hyper SA/TS: PI (%) / CPU (s) | Hyper TS/SA: PI (%) / CPU (s) | Hybrid SA/TS: PI (%) / CPU (s) | Hybrid TS/SA: PI (%) / CPU (s)
(15/4) | 4625 | 54464 | 12563 | 9 / 0.66 | 3.25 / 33 | 3.25 / 124 | 3.25 / 66 | 3.25 / 51 | 3.25 / 141 | 3.25 / 139
(15/8) | 10689 | 166528 | 19257 | 15.3 / 774 | 11.61 / 188 | 10.62 / 239 | 12.15 / 170 | 12.15 / 164 | 12.71 / 448 | 11.215 / 447
(15/12) | 18193 | 344112 | 25951 | n/a / n/a | 6.26 / 323 | 6.16 / 385 | 6.26 / 292 | 6.26 / 251 | 7.61 / 692 | 7.66 / 675
(20/4) | 7765 | 96624 | 14288 | 14.1 / 1.67 | 2.53 / 42 | 2.53 / 152 | 2.53 / 100 | 2.53 / 83 | 2.53 / 220 | 2.53 / 224
(20/8) | 17449 | 295648 | 20982 | n/a / n/a | 13.61 / 268 | 13.33 / 345 | 15.85 / 258 | 15.87 / 231 | 16.02 / 624 | 15.56 / 614
(20/12) | 29053 | 607632 | 27676 | n/a / n/a | 6.23 / 448 | 5.92 / 507 | 7.82 / 454 | 8.49 / 402 | 8.49 / 1010 | 7.71 / 983
(25/4) | 11705 | 150784 | 18234 | 5.99 / 2.11 | 2.28 / 47 | 2.28 / 196 | 2.82 / 118 | 2.82 / 96 | 2.28 / 268 | 2.28 / 261
(25/8) | 25809 | 467168 | 24828 | n/a / n/a | 3.82 / 347 | 3.58 / 398 | 4.29 / 323 | 4.98 / 282 | 5.78 / 736 | 5.78 / 731
(25/12) | 42313 | 945552 | 31522 | n/a / n/a | 4.1 / 547 | 3.61 / 647 | 5.98 / 538 | 4.365 / 449 | 4.9 / 1240 | 4.5 / 1221
(30/4) | 16445 | 216944 | 23221 | 4.9 / 3.39 | 1.76 / 63 | 1.76 / 246 | 1.76 / 147 | 1.76 / 122 | 1.76 / 320 | 1.76 / 324
(30/8) | 35769 | 671008 | 29915 | n/a / n/a | 4.43 / 437 | 4.43 / 496 | 4.43 / 384 | 4.5 / 332 | 4.461 / 905 | 4.43 / 927
(30/12) | 57973 | 1357872 | 36609 | n/a / n/a | 2.96 / 690 | 2.9 / 769 | 3.87 / 628 | 3.87 / 544 | 4.59 / 1451 | 2.96 / 1453
6.11 Study Analysis of Elements of Metaheuristic Techniques
The cooling parameter α in SA and the maximum number of iterations in TS have a significant effect on the final results of any solution technique. Typical values of the cooling parameter α in simulated annealing lie between 0.8 and 0.99 (Aarts et al., 2005). Figure 6.29 shows the results for α values of 0.88, 0.90, 0.92 and 0.95: the makespan obtained by hyper SA/TS with α = 0.90 is better than that obtained with the other α values in most cases.
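The choice of α also fixes how many cooling steps the SA component performs before the temperature leaves the cooling range, since T is multiplied by α at every iteration. As a rough illustration (the T0 and Tmin values below are arbitrary, not from the thesis), the step count grows quickly as α approaches 1:

```python
import math

def sa_iterations(t0, t_min, alpha):
    """Number of cooling steps k until T = t0 * alpha**k drops below t_min."""
    return math.ceil(math.log(t_min / t0) / math.log(alpha))

# Illustrative schedule: T0 = 1000, Tmin = 0.001
for alpha in (0.88, 0.90, 0.92, 0.95):
    print(alpha, sa_iterations(1000.0, 0.001, alpha))  # larger α, more steps
```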
Figure 6.29: Effect of α value on the hyper SA/TS solution
CPU time for all cases using different α values is described in Figure 6.30.
Figure 6.30: Effect of α value on the CPU time of the hyper SA/TS solution
Figure 6.31 shows no improvement in the hyper TS/SA solution after 250 iterations; it also shows the percentage improvements for the different cases, the best being 15.87% for case (20/8). Increasing the number of iterations further only increases the CPU time sharply, for no benefit.
Figure 6.31: Effect of number of iterations on hyper TS/SA solution
Figure 6.32 shows the CPU time of all cases using the hyper TS/SA technique with different numbers of iterations; the CPU time of every case grows sharply as the number of iterations increases.
Figure 6.32: Effect of number of iterations on CPU time of hyper TS/SA solution
Figure 6.33 shows the running average of the makespan obtained by the hyper TS/SA technique over repeated runs for case (30/12), the largest problem in Table 6.6, as an example of how the reported makespan is calculated for the other cases.
Figure 6.33: The average of makespan of hyper TS/SA case study 30/12 with different runs
Over 165 runs of case (30/12), the minimum makespan is 34930 and the maximum is 36334. The reported makespan is the average value, 35192, at which the running average stabilises after about 145 runs, as shown in Figure 6.33.
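The stabilisation of the running average can be illustrated with a short sketch; the simulated makespans below are drawn uniformly from the reported range [34930, 36334] purely for illustration, not from the thesis data:

```python
import random

def running_average(values):
    """Cumulative mean after each run, to check when the estimate stabilises."""
    total, means = 0.0, []
    for k, v in enumerate(values, start=1):
        total += v
        means.append(total / k)
    return means

# Illustrative makespans drawn uniformly from the reported min/max range
rng = random.Random(1)
runs = [rng.uniform(34930, 36334) for _ in range(165)]
means = running_average(runs)
# Early means fluctuate widely; later means change little run to run
```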
6.12 Inclusion of Delivery and Collection Time Constraints in the Hyper TS/SA Results
The hyper TS/SA technique is used to solve a complicated real-life example and produce a high-quality solution in a reasonable time. The case includes 30 single sections, 15 double sections, 15 harvesters, and the delivery and collection of 1552 bins under time constraints. Empty bins are delivered in the outbound direction, while full bins are collected in the inbound direction. Delivery and collection can be executed at the same time when the siding is located at the end of a rail branch. The delivery on the last visit serves as inventory for the next day; in the sugarcane rail system, delaying the last visit time of each harvester can therefore increase the mill's stock of bins.
An optimal (or near optimal) number of trains is determined under the system's constraints (see Chapter 3 for details). The best makespan found is 26.3 h, with seven trains and 21 runs (trips); a graphical presentation of the solution is shown in Figure 6.34. The solutions for 4, 5 and 6 trains are infeasible because some system constraints are violated: for example, four trains delivered only 1434 bins and six trains only 1417 bins in 24 hours, so these cases cannot satisfy the constraint of delivering and collecting 1552 bins. Eight trains with 22 runs satisfy the system's constraints and yield a feasible solution, but the total operating time and makespan are larger than for seven trains with 21 runs. More than nine trains create congestion on the network, so the maximum number of trains for this case study is determined as nine.
Train operations for delivering and collecting the 1552 bins of the practical example are shown in Table 6.7. As seen in the table, six runs are assigned to the first train (T1), five to the second (T2), three to the third (T3), three to the fourth (T4), two to the fifth (T5), one to the sixth (T6) and one to the seventh (T7).
Figure 6.34: A sample train schedule of seven trains, 45 sections and 15 harvesters
Table 6.7: Train operations for delivering and collecting empty and full bins
Train | Run | Siding | Start time | Activity | Delivered empty bins | Collected full bins
T1 1 Mill 05:10:18 Start run S2 05:11:38 49 0 S6 05:28:02 49 0 S8 05:44:24 22 0 S6 06:01:48 0 49 S4 06:18:10 0 24 S2 06:38:24 0 27
Mill 06:40:48 End run 2 Mill 06:40:48 Start run
S2 06:58:48 33 0 S4 07:19:12 49 0 S6 07:35:24 33 0 S8 07:51:36 5 0
S10 08:22:12 0 29 S8 08:37:48 0 48 S6 08:54:36 0 23
Mill 09:33:36 End run 3 Mill 09:33:36 Start run
S4 09:56:24 16 0 S8 10:14:24 47 0
S10 10:30:12 49 0 S10 10:45:29 0 23 S8 11:00:36 0 9 S6 11:16:48 0 23 S4 11:33:45 0 45
Mill 11:55:48 End run 4 Mill 11:55:48 Start run
S2 12:13:48 14 0 S6 12:35:24 14 0
S10 14:09:36 49 0 S10 14:24:36 0 45 S8 14:40:12 0 38 S6 14:56:24 0 1 S4 15:12:36 0 16
Mill 15:35:24 End run 5 Mill 15:35:24 Start run
S4 16:13:48 31 0 S8 17:48:36 22 0
S10 18:04:12 27 0 S10 18:19:12 0 49 S8 18:34:48 0 1 S4 18:52:48 0 11 S2 19:12:36 0 39
Mill 19:15:10 End run 6 Mill 19:15:10 Start run
S10 20:57:36 49 0 S10 21:12:36 0 28 S2 21:36:36 0 30
Mill 21:39:10 End run
T2 1 Mill 01:13:48 Start run S12 01:30:36 49 0 Mill 01:40:48 End run
2 Mill 01:50:24 Start run S20 02:15:20 51 0 S30 02:43:12 49 0 S30 03:01:12 0 32 S20 03:29:24 0 20 Mill 03:38:24 End run
3 Mill 04:59:24 Start run S16 05:16:12 49 0 S18 06:19:48 45 0 S20 06:40:48 26 0 S24 07:37:12 0 38 S22 07:59:24 0 24 S20 08:19:12 0 38 Mill 08:58:12 End run
4 Mill 08:58:12 Start run S12 09:14:24 26 0 S14 09:34:12 39 0 S14 09:49:48 0 36 S12 10:09:36 0 49 Mill 10:10:12 End run
5 Mill 20:06:00 Start run S16 20:17:45 33 0 S26 21:31:12 11 0 S28 21:49:48 9 0 S30 22:10:12 33 0 S30 22:28:12 0 15 S28 22:49:12 0 37 S26 23:07:48 0 48 Mill 23:35:24 End run
T3 1 Mill 08:27:25 Start run S18 08:59:24 20 0 S20 09:19:48 34 0 S22 09:39:36 45 0 S24 10:01:48 21 0 S24 10:16:48 0 4 S22 10:38:24 0 24 S20 10:58:12 0 39 S18 11:19:12 0 33 Mill 11:37:12 End run
2 Mill 11:37:12 Start run S12 11:54:30 21 0 S14 12:13:48 39 0 S14 12:28:48 0 27 S12 12:48:36 0 47 Mill 12:49:48 End run
3 Mill 23:04:48 Start run S26 24:16:12 36 0 S28 24:34:48 36 0 S28 25:19:12 0 8 S26 25:37:48 0 42 S16 26:04:48 0 30 Mill 26:19:48 End run
T4 1 Mill 11:06:39 Start run S22 11:48:36 9 0 S24 12:10:48 38 0 S24 12:25:48 0 21 S22 12:48:15 0 27 S20 13:07:12 0 23 S18 13:28:12 0 29 Mill 13:46:48 End run
2 Mill 13:46:48 Start run S14 14:22:48 0 19 Mill 14:28:48 End run
3 Mill 17:16:12 Start run S14 17:52:48 34 0 S14 18:07:48 0 30 Mill 18:13:48 End run
T5 1 Mill 13:15:55 Start run S18 13:47:24 25 0 S20 14:08:24 19 0 S22 14:28:12 36 0 S24 14:49:48 0 30 S22 15:12:35 0 15 S20 15:31:48 0 10 S18 15:52:12 0 28 S16 16:09:36 0 17 Mill 16:10:48 End run
2 Mill 17:15:30 Start run S16 17:33:10 14 0 Mill 17:48:10 End run
T6 1 Mill 15:40:10 Start run S24 17:14:24 39 0 S24 17:29:24 0 5 S16 18:04:12 0 49 Mill 18:24:10 End run
T7 1 Mill 17:37:25 Start run S26 18:45:36 49 0 S28 19:04:12 45 0 S30 19:25:12 14 0 S30 19:43:12 0 49 S28 20:03:36 0 45 S26 20:22:12 0 6 Mill 20:50:24 End run
Total allotment
1552 1552
6.13 Conclusion
Metaheuristic techniques were adapted to the sugarcane rail transport system to obtain
near-optimal solutions in a reasonable time. Hybrid and hyper techniques improve
solution quality and decrease CPU time. The two proposed hybrid techniques, SA/TS
and TS/SA, both depend on integrating the two metaheuristic techniques. The hybrid
techniques improved solution quality, but their CPU times remained high; as a result,
the two hyper techniques SA/TS and TS/SA were developed to reduce the CPU time.
The results indicated that hyper TS/SA produces better solutions than hyper SA/TS
with a shorter CPU time. The earlier results using the mixed integer programming
technique showed that an optimal solution can easily be produced for small-scale
cases in a short time. Mixed integer programming, however, is too time-consuming
for large-scale cases. The hybrid metaheuristic TS/SA, on the other hand, can solve
large-scale cases in a reasonable time.
The hyper TS/SA technique was also applied to solve a more complicated real-life
example, which includes the delivery and collection time constraints, and a
high-quality solution was obtained in a reasonable time.
Chapter 7
Conclusions and Future Work
Chapter Outline
7.1 Introduction.........................................................................................................259
7.2 Theoretical Contributions...................................................................................259
7.3 Practical Contributions.......................................................................................262
7.4 Future Work.......................................................................................................263
7.1 Introduction
Mathematical models, CP and MIP, have been developed to optimise the sugarcane
rail transport system using different solution techniques. Mathematical modelling
provides many benefits in terms of accuracy of results and prediction of urgent and
future problems, and it yields solutions faster than some other techniques. The
integration of the different solution techniques can improve the efficiency of these
techniques in producing high-quality solutions and reducing CPU time. This research
has investigated the impact on solution quality of integrating CP search techniques,
such as the Best First Search (BFS), Depth-First Search (DFS), Slice Based Search
(SBS), Limited Discrepancy Search (LDS), Depth-bounded Discrepancy Search
(DDS), Interleaved Depth First Search (IDFS), Standard Search Strategy (SSS) and
Dichotomic Search Strategy (DSS), with the MIP and CP models. Linear relaxation
is integrated with constraint satisfaction techniques, such as constraint propagation,
and with search techniques to solve the CP and MIP models. Exact solution
techniques cannot solve large-scale problems in a reasonable time; therefore,
metaheuristic techniques such as simulated annealing (SA) and tabu search (TS) are
used to reduce the CPU time of large-scale problems. Hybrid and hyper
metaheuristic techniques are proposed in this research to improve the solutions of
SA and TS. Real-time constraints of the sugarcane rail transport system were
developed to optimise the efficiency of the system. The theoretical and practical
contributions of this thesis and future work are summarised in this chapter.
7.2 Theoretical Contributions
Generic mathematical models and solution techniques were developed to optimise
the efficiency of the sugarcane rail transport system. The theoretical contributions
are:
A blocking parallel-machine job shop scheduling (BPMJSS) technique was
developed to model and solve the sugarcane rail transport problem.
CP and MIP models were developed to address different types of sugarcane
train scheduling problems. Each model includes rail operation scheduling
constraints and sugarcane system constraints.
• Blocking Segment MIP Model of the Sugarcane Rail Problem
• Blocking Segment CP Model of the Sugarcane Rail Problem
• Blocking Section CP Model of Sugarcane Rail System
• Blocking Section MIP Model of Sugarcane Rail System
• Inclusion of the Delivery and Collection Time Constraints to the Models.
The results of the standard and CA (computing acceleration) algorithms were examined in both CP and MIP.
Different objective functions were developed and investigated in this
research, namely minimising the makespan and minimising the total waiting
time. These objective functions were applied to different-sized sugarcane rail
problems so as to promote the efficiency of the CP and MIP models.
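For illustration, the two objective functions can be evaluated on a completed schedule as follows. This is a minimal Python sketch; the operation tuples and values are hypothetical, and the thesis implementation is in C# over the full rail model.

```python
# Each operation: (train, start, finish, ready_time); times in minutes.
# A hypothetical three-operation schedule, for illustration only.
schedule = [
    ("T1", 0, 20, 0),
    ("T2", 5, 40, 0),
    ("T1", 45, 60, 20),  # waited 25 min after becoming ready at t=20
]

def makespan(ops):
    """Completion time of the last operation in the schedule."""
    return max(finish for _, _, finish, _ in ops)

def total_waiting_time(ops):
    """Sum over operations of the delay between ready time and start."""
    return sum(start - ready for _, start, _, ready in ops)

print(makespan(schedule))            # 60
print(total_waiting_time(schedule))  # 0 + 5 + 25 = 30
```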
Constraint propagation and search techniques such as DFS, SBS, DDS, BFS
and IDFS were applied to the CP and MIP models. The performance of each
technique was examined within the CP and MIP models. The integration of
these techniques as one solver improved the quality of the CP and MIP
solutions and reduced the CPU time.
The integration of the different search techniques DFS, SBS, DDS, BFS
and IDFS with the two search strategies SSS and DSS was implemented in
the Optimization Programming Language (OPL).
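As a concrete illustration of the kind of domain filtering that constraint propagation performs inside these solvers, consider a precedence constraint between two activities, a + dur_a <= b, with start-time domains given as [lo, hi] bounds. The Python sketch below is illustrative only; it is not the OPL/CPLEX propagator used in the thesis, and the function and variable names are hypothetical.

```python
def propagate_precedence(dom_a, dom_b, dur_a):
    """Tighten [lo, hi] start-time bounds so that activity a (of
    duration dur_a) must finish before activity b starts: a + dur_a <= b."""
    lo_a, hi_a = dom_a
    lo_b, hi_b = dom_b
    lo_b = max(lo_b, lo_a + dur_a)   # b cannot start before a's earliest end
    hi_a = min(hi_a, hi_b - dur_a)   # a must leave room for b's latest start
    return (lo_a, hi_a), (lo_b, hi_b)

# E.g. a train must clear a section (20 min) before the next train enters it.
a, b = propagate_precedence((0, 100), (10, 60), 20)
print(a, b)  # (0, 40) (20, 60)
```

Repeating such filtering over all constraints until no domain shrinks further is the fixed-point computation at the heart of CP solving.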
Different sizes of the sugarcane rail problem were tested and compared with
different solution techniques to investigate the new CP and MIP models and
the efficiency of the solution techniques.
Algorithms were developed to solve rail network issues such as delivery
and collection conflicts, and train conflicts.
• Collecting and Delivering Conflict Elimination
• Terminal Segment Conflict Elimination
• Intermediate Segment Conflict Elimination
• Algorithms for Solving Train Conflict
• Segment Blocking Determination (SBD)
• Rail Conflicts Elimination (RCE)
• Section Conflict Elimination
• Computing Acceleration (CA) Algorithms
• Segment Elimination Algorithm
• Section Elimination Algorithm
Blocking constraints for train scheduling were developed for the case
study to satisfy safety conditions and resolve train conflicts using:
• Blocking Terminal Segments
• Blocking Intermediate Segments
New neighbourhood techniques were developed based on the section and
train neighbourhoods and were used in the metaheuristic techniques.
The new metaheuristic techniques SA and TS were adapted to solve large-
scale problems in the sugarcane rail transport system. These techniques
were applied to the sugarcane rail system and included all of its
constraints.
Hybrid metaheuristic techniques were developed to solve the large-scale
problems. Hybrid SA/TS and hybrid TS/SA were developed to improve the
quality of the solutions, although their CPU time is higher than that of SA
or TS alone. Both hybrid techniques use the shortest processing time (SPT)
heuristic to obtain an initial solution. The final solution can then be obtained
in one of two ways, by improving the initial solution with:
• SA, and then applying TS to the SA solution.
• TS, and then applying SA to the TS solution.
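The two hybrid orderings above amount to a sequential composition: build an SPT initial solution, run one metaheuristic to completion, then hand its best solution to the other. The following Python sketch illustrates that control flow on a toy single-machine sequencing objective; the `cost` and `neighbour` functions are simplified stand-ins for the thesis's section and train neighbourhoods, and the thesis implementation itself is in C#.

```python
import math
import random

def spt_initial(jobs):
    """SPT heuristic: sequence jobs by shortest processing time first."""
    return sorted(jobs)

def cost(seq):
    """Toy objective: total completion time of the sequence."""
    t = total = 0
    for p in seq:
        t += p
        total += t
    return total

def neighbour(seq):
    """Random pairwise swap (stand-in for the rail neighbourhoods)."""
    s = seq[:]
    i, j = random.sample(range(len(s)), 2)
    s[i], s[j] = s[j], s[i]
    return s

def run_sa(seq, iters=200, temp=10.0, alpha=0.95):
    """Simulated annealing pass, returning the best solution seen."""
    best = cur = seq
    for _ in range(iters):
        cand = neighbour(cur)
        delta = cost(cand) - cost(cur)
        if delta < 0 or random.random() < math.exp(-delta / temp):
            cur = cand
            if cost(cur) < cost(best):
                best = cur
        temp *= alpha
    return best

def run_tabu(seq, iters=200, tenure=5, sample=10):
    """Tabu search pass: best non-tabu candidate from a sampled set."""
    best = cur = seq
    tabu = []
    for _ in range(iters):
        cands = [neighbour(cur) for _ in range(sample)]
        cands = [c for c in cands if c not in tabu] or cands
        cur = min(cands, key=cost)
        tabu = (tabu + [cur])[-tenure:]
        if cost(cur) < cost(best):
            best = cur
    return best

jobs = [7, 3, 9, 2, 5, 8, 1]
init = spt_initial(jobs)
hybrid_sa_ts = run_tabu(run_sa(init))  # improve by SA, then TS on the result
hybrid_ts_sa = run_sa(run_tabu(init))  # improve by TS, then SA on the result
```

Because each pass returns the best solution it has seen, neither hybrid can end worse than the SPT initial solution.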
Hyper metaheuristic techniques were developed to improve both the quality of
the solution and the CPU time. That is, SPT as a heuristic technique, and SA
and TS as metaheuristic techniques, were integrated interactively, where the
interaction occurs between each iteration of TS and each iteration of SA. The
two hyper techniques developed in this research are:
• The hyper SA/TS technique, which improves the solution of each SA
iteration by integrating an iteration of the TS technique.
• The hyper TS/SA technique, which improves the solution of each TS
iteration by integrating an iteration of the SA technique.
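The hyper integration differs from the hybrid one in granularity: instead of running one metaheuristic to completion and then the other, a single loop alternates one TS step and one SA step on a shared incumbent solution. A schematic Python sketch of the hyper TS/SA loop, under simplifying assumptions (toy cost function and random swap neighbourhood; the thesis codes these techniques in C#):

```python
import math
import random

def cost(seq):
    # Toy objective: total completion time of the sequence.
    t = total = 0
    for p in seq:
        t += p
        total += t
    return total

def swap(seq):
    # Random pairwise swap neighbourhood (illustrative only).
    s = seq[:]
    i, j = random.sample(range(len(s)), 2)
    s[i], s[j] = s[j], s[i]
    return s

def hyper_ts_sa(seq, iters=300, temp=10.0, alpha=0.97, tenure=5):
    """One tabu-search move per iteration, immediately followed by
    one simulated-annealing move on the shared incumbent."""
    best, cur, tabu = seq, seq, []
    for _ in range(iters):
        # One TS step: best non-tabu candidate from a small sample.
        cands = [swap(cur) for _ in range(8)]
        cands = [c for c in cands if c not in tabu] or cands
        cur = min(cands, key=cost)
        tabu = (tabu + [cur])[-tenure:]
        # One SA step on the TS result (Metropolis acceptance).
        cand = swap(cur)
        delta = cost(cand) - cost(cur)
        if delta < 0 or random.random() < math.exp(-delta / temp):
            cur = cand
        temp *= alpha
        if cost(cur) < cost(best):
            best = cur
    return best

start = [9, 2, 7, 1, 8, 3, 5]
result = hyper_ts_sa(start)
```

Interleaving at the iteration level lets the SA step perturb the TS trajectory (and vice versa in hyper SA/TS) without the cost of a full second run, which is why the hyper variants reduce CPU time relative to the hybrids.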
These techniques were compared with the exact solutions to investigate their
accuracy. The techniques were coded in the C# language.
The blocking segment and blocking section constraints were tested and
compared to analyse the sensitivity of rail utilisation across different
problem sizes.
7.3 Practical Contributions
The practical contributions are related to the real sugarcane transport system and the
real problems of train conflicts and of collection and delivery times at the railway
sidings. The main practical contributions are:
The Kalamia Mill was used as a real case study of a complex sugarcane rail
transport system. The results show that the developed solution techniques can
find near-optimal solutions to the sugarcane rail transport problem in a
reasonable time.
The ACTSS schedule checker and simulator was used to investigate the MIP
model in real situations (Kalamia Mill).
A real timetable was developed for the sugarcane rail transport system using
the MIP model in Chapter 5. Real-life constraints were developed to optimise
the visit times for each siding by each train. A specific scenario was established
to include the constraints of harvester start time and harvester rate at each
siding, and to satisfy the siding allotment.
Scheduling software with a graphical interface was developed to optimise the
operation time of the sugarcane rail transport system based on the
metaheuristic techniques. Sugarcane rail transport system codes were
developed using:
• OPL and CPLEX codes to examine the CP and MIP models with small
cases.
• C# language programming, including the section blocking constraints, to
examine the metaheuristic (SA and TS), hybrid (SA/TS and TS/SA) and
hyper (SA/TS and TS/SA) techniques.
• A practical example was solved involving 15 harvesters across 45 rail
sections (30 single sections and 15 double sections) and 7 trains. Train
operations were optimised at each siding to satisfy the mill and harvester
requirements.
7.4 Future Work
The proposed methodology has great potential to be applied to other rail systems.
Railways play a vital role in transporting coal from mines to ports, where the
majority of the coal is moved by rail. The trains used are among the longest in the
world, with lengths of more than 2 kilometres, and each train can carry between
2100 and 8600 net tonnes of coal. The coal mining railway system collects the full
wagons of coal from the mines and transports them to the port. The transport sector
has a significant impact on the overall cost of a coal mining production system.
All sections in a coal mining rail network are standard, which means that the length
of each section is sufficient for the length of the trains, so blocking constraints can
be applied. Section blocking is not useful if the length of the train is greater than the
length of the section. The blocking section MIP model in Chapter 4 can therefore be
applied to a coal mining rail system.
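The applicability condition stated here, that section blocking is only meaningful when every section can hold the whole train, reduces to a simple pre-check. A hypothetical Python sketch (the function name and units are illustrative, not part of the thesis models):

```python
def blocking_section_applicable(section_lengths_m, train_length_m):
    """Section blocking is usable only if every section can hold the
    full train; otherwise a segment-level model would be needed."""
    return all(length >= train_length_m for length in section_lengths_m)

# Coal trains can exceed 2 km, so every section must be at least that long.
sections = [2500, 3100, 2750]
print(blocking_section_applicable(sections, 2200))  # True
print(blocking_section_applicable(sections, 2600))  # False
```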
The blocking section constraints are important to ensure that trains pass without
accidents or conflicts. The mine's requirements for empty wagons must be met
without interrupting coal production operations. The main outputs of the model are
efficient schedules for coal rail transport systems that optimise the performance of
the system.
The coal mining rail problem will be a part of future work. A real case study will be
solved using real visit time constraints.
As a summary, this research has the potential to be developed further by:
Extending the metaheuristic techniques to solve dynamic train scheduling
problems using dynamic variables, such as unexpected events or accidents,
in real-life train scheduling problems.
Investigating further techniques to manage the large-scale problems of rail
transport systems.
Developing cane road system models for transporting the empty and full bins
between harvester locations and sidings by truck. Integrating the rail models
and the road models will help to optimise the cane transport system between
the harvesters and the mill.
Applying the blocking section MIP model to solve the coal mining rail problem
as another application, using a real large-scale problem.
Testing the MIP model with the coal mining rail problem.
References
Aarts, E., Korst, J., & Michiels, W. (2005). Simulated annealing. In E. K. Burke & G. Kendall (Eds.), Search methodologies: Introductory tutorials in optimization and decision support techniques (pp. 187-210). New York, NY: Springer.
Abril, M., Salida, M. A., & Barber, F. (2008). Distributed search in railway scheduling problems. Engineering Applications of Artificial Intelligence, 21(5), 744-755.doi:10.1016/j.engappai.2008.03.008.
Abu-Suleiman, A., Pratt, D. B., & Boardman, B. (2005). The modified critical ratio: Towards sequencing with a continuous decision domain. International Journal of Production Research , 43(15),3287–3296.
Achterberg, T. (2007a). Constraint integer programming (Doctoral dissertation). Retrieved from http://opus.kobv.de/tuberlin/volltexte/2007/1611
Achterberg, T. (2007b). Conflict analysis in mixed integer programming. Discrete Optimization, 4(1), 4-20.
Achterberg, T., Berthold, T., Koch, T., Wolter, K. (2008). Constraint integer
programming: A new approach to integrate CP and MIP. In: Perron, L., Trick, M.A. (Eds.) Integration of AI and OR techniques in constraint programming for combinatorial optimization problems, 5th international conference, CPAIOR 2008. Lecture Notes in Computer Science, 5015, 6-20. Springer, Heidelberg.
Adams, J., Balas, E., & Zawack, D. (1988). The shifting bottleneck procedure for job shop scheduling. Management Science, 34, 391-401.
Arjona, E., Bueno, G., & Salazar, L. (2001). An activity simulation model for the analysis of the harvesting and transportation systems of a sugarcane plantation. Computers and Electronics in Agriculture, 32, 247-264.
D'Ariano, A., Pacciarelli, D., & Pranzo, M. (2007). A branch and bound algorithm for scheduling trains in a railway network. European Journal of Operational Research, 183, 643-657.
Artigues, C., Gendreau, M., Rousseau, L. M., & Vergnaud, A. (2009). Solving an integrated employee timetabling and job-shop scheduling problem via hybrid branch-and-bound. Computers & Operations Research, 36, 2330-2340.
Baker, K. R. (1974). Introduction to sequencing and scheduling. New York, NY: John Wiley.
Barba, I., Valle, C. D., & Borrego, D. (2009). A constraint-based job-shop scheduling model for software development planning. Actas de los Talleres de las Jornadas de Ingeniería del Software y Bases de Datos,3(1),1-12.
Barker, F. G. ( 2007). An economic evaluation of sugarcane combine harvester costs and optimal harvest schedules for Louisiana. Master thesis. The Graduate Faculty of the Louisiana State University and Agricultural and Mechanical College.
Bockmayr, A., & Kasper, T. (1998). Branch-and-Infer: A unifying framework for integer and finite domain constraint programming. INFORMS Journal on Computing, 10, 287-300.
Beck, J. C., & Perron, L. (2000). Discrepancy-bounded depth first search. In
Proceedings Of the Second International Workshop on Integration of AI and OR Techniques in Constraint Programming for Combinatorial Optimization Problems (CP-AI-OR 2000), 21- 22.
Beck, J. C., & Refalo, P. (2003). A hybrid approach to scheduling with earliness and tardiness costs. Annals of Operations Research, 118, 49-71.
Beck, J. C., & Smith, B. M. (2009). Introduction to the special volume on constraint programming, artificial intelligence, and operations research. Annals of Operations Research, 171, 1-2.
Belhadji, S., & Isli, A. (1998). Temporal constraint satisfaction techniques in job
shop scheduling problem solving. Constraints: An International Journal, 3,
203-211.
Bessière, C. (1994). Arc-consistency and arc consistency again. Artificial Intelligence, 65(1), 179-190.
Bessière, C., & Cordier, M. O. (1993). Arc-consistency and arc-consistency again.
In: Proceedings AAAI,93, 108-113.
Brucker, P., Jurisch, B., & Sievers, B. (1994). A branch and bound algorithm for job shop scheduling problem. Discrete Applied Mathematics, 49, 105-127.
Brucker, P. (2007). Scheduling algorithms. Berlin: Springer.
Burdett, R. L. & Kozan, E. (2006). Techniques for absolute capacity determination in railways. Transport Research Part B, 40, 616-632.
Burdett, R. L., & Kozan, E. (2008). A sequencing approach for creating new train timetables. OR Spectrum, DOI 10.1007/s00291-008-0143-6.
Burdett, R. L., & Kozan, E. (2010). A disjunctive graph model and framework for constructing new train schedules. European Journal of Operational Research, 200, 85-98.
Burke, E. K., Meisels, A., Petrovic, S., & Qu, R. (2007). A graph-based hyper heuristic for timetabling problems, European Journal of Operational Research, 176,177-192.
Cerny, V. (1985). Thermodynamical approach to the travelling salesman problem: An efficient simulation algorithm. Journal of Optimization Theory and Applications, 45, 41-51.
Chang, Y. L., Sueyoshi, T., & Sullivan, R. S. (1996). Ranking dispatching rules by data envelopment analysis in a job shop environment. IIE Transactions, 28(8), 631- 642.
Cheng, C. C., & Smith, S. F. (1997). Applying constraint satisfaction techniques to
job shop scheduling. Annals of Operations Research, 70, 327- 357. Chetthamrongchai, P., Auansakul, A., & Supawan, D. (2001). Assessing the
transportation problems of the sugar cane industry in Thailand. Transport and Communications Bulletin for Asia and the Pacific, 70, 31- 40.
Colin, E.C. (2009). Innovative applications of O. R.: Mathematical programming accelerates implementation of agro-industrial sugarcane complex. European Journal of Operational Research, 199, 232-235.
Corman, F., D’Ariano, A., Pacciarelli, D., & Pranzo, M. (2010). A tabu search
algorithm for rerouting trains during rail operations. Transportation Research, Part B, 44, 175-192.
Cowling, P., & Chakhlevitch, K. (2003). Hyper heuristics for managing a large collection of low level heuristics to schedule personnel, In Proceedings of the Congress on Evolutionary Computation, 2, 1214-1221.
Cuesta, A., Garrido, L., & Marín, H. T. (2005). Building hyper-heuristics through
ant colony optimization for the 2D bin packing problem, LNCS 3684, 654-660.
Díaz, J. A., & Pérez, I. G. (2000). Simulation and optimization of sugar cane transportation in harvest season. In J. A. Joines, R. R. Barton, K. Kang, & P. A. Fishwick (Eds.), Proceedings of the 2000 Winter Simulation Conference. San Diego, CA, USA.
Deville, Y. , & Hentenryck. P. V. (1991). An efficient arc consistency algorithm for a class of CSP problems. In Proceedings of the Twelfth International Joint Conference on Artificial Intelligence, 325-330. Sydney, Australia.
Dorndorf, U., Pesch, E., & Huy, T. P. (2002). Constraint propagation and problem decomposition: A pre-processing procedure for the job shop problem. Annals of Operations Research, 115,125-145.
Dovier, A., Formisano, A., & Pontelli, E. (2009). An empirical study of constraint
logic programming and answer set programming solutions of combinatorial problems. Journal of Experimental & Theoretical Artificial Intelligence, 21(2), 79-121.
El Khayat, G., Langevin, A., & Riopel, D. (2006). Integrated production and
material handling scheduling using mathematical programming and constraint programming. European Journal of Operational Research, 175, 1818-1832.
El Sakkout, H., & Wallace, M. G. (2000). Probe backtrack search for minimal perturbation in dynamic scheduling, in constraints. Special Issue on Industrial Constraint-Directed Scheduling, 5 (4), 359-388.
Everitt, P. G., & Pinkney, A. J. (1999). Cane transport scheduling: An integrated system. International Sugar Journal, 101(1204), 208-210.
Focacci, F., Lodi, A., & Milano, M. (2002). Mathematical programming techniques in constraint programming: A short overview. Journal of Heuristics, 8, 7-17.
Glover, F. (1989). Tabu search - Part I. ORSA Journal on Computing, 1(3), 190-206.
Glover, F. (1990). Tabu search - Part II. ORSA Journal on Computing, 2(1), 4-32.
Gonzalez, T., & Sahni, S. (1976). Open shop scheduling to minimize finish time. Journal of the Association for Computing Machinery, 23(4), 665-679.
Grimley, S., & Horton, J. (1997). Cost and service improvements in harvest/transport through optimisation modelling. Proceedings of the 19th Australian Society of Sugar Cane Technologists, 6-13.
Grunow, M., Gunther, H., & Westinner, R. (2007). Supply optimisation for production of raw sugar. International Journal of Production Economics, 110, 224-239.
Hahn, M., & Ribeiro, R. (1999). Heuristic guided simulator for the operational planning of the transport of sugar cane. Journal of the Operational Research Society, 50, 451-459.
Haralick, R. M., & Elliott, G. L. (1980). Increasing tree search efficiency for constraint satisfaction problems. Artificial Intelligence, 14, 263-313.
Harvey, W. D., & Ginsberg, M. L. (1995). Limited discrepancy search. In Proceedings of the Fourteenth International Joint Conference on Artificial Intelligence (IJCAI-95), 1, 607-615.
Hentenryck, P. V. (1999). The OPL optimization programming language. MIT Press. ISBN 0-262-72030-2.
Hentenryck, P. V. (2002). Constraint and integer programming in OPL. INFORMS Journal on Computing, 14(4), 345-372.
Hentenryck, P. V., Deville, Y., & Teng, C. M. (1992). A generic arc-consistency algorithm and its specifications. Artificial Intelligence, 57, 291-321.
Hentenryck, P. V., & Michel, L. (2005). Constraint-based local search. The MIT Press.
Her, J. H., & Ramakrishna, R. S. (2007). An external-memory depth-first search algorithm for general grid graphs. Theoretical Computer Science, 374, 170-180.
Higgins, A. (2006). Scheduling of road vehicles in sugarcane transport: A case study at an Australian sugar mill. European Journal of Operational Research, 170, 987-1000.
Higgins, A., Antony, G., Sandell, G., Davies, I., Prestwidge, D., & Andrew, B. (2004a). A framework for integrating a complex harvesting and transport system for sugar production. Agricultural Systems, 82, 99-115.
Higgins, A. (2004). Australian sugar mills optimise siding rosters to increase profitability. Annals of Operations Research, 128, 235-249.
Higgins, A., & Davies, I. (2005). A simulation model for capacity planning in sugarcane transport. Computer and Electronics in Agriculture, 47, 85-102.
Higgins, A., Haynes, M., Muchow, R., & Prestwidge, D. (2004b). Developing and implementing optimised sugarcane harvest schedules through participatory research. Australian Journal of Agriculture Research, 55, 297-306.
Higgins, A., & Kozan, E. (1997). Heuristic techniques for single line train scheduling. Journal of Heuristics, 3, 43-62.
Higgins, A., Kozan, E., & Ferreira, L. (1996a). Modelling the number and location of sidings on a single line railway. Computers & Operations Research, 24(3), 209-220.
Higgins, A., Kozan, E., & Ferreira, L. (1996b). Optimal scheduling of trains on a
single line track. Transportation Research B, 30, 147-161.
Higgins, A., & Laredo, L. (2006). Improving harvesting and transport planning within a sugar value chain. Journal of Operational Research Society, 57, 367-376.
Higgins, A., Thorburn, P., Archer, A., & Jakku, E. (2007). Review opportunities for value chain research in sugar industries. Agricultural Systems, 94, 611-621.
Hoeve,W. J. (2005). Operations research techniques in constraint programming.
ILLC Dissertation Series DS-2005-02 . The Centrum voor Wiskunde en Informatica. ISBN 90-6196-529-2.
Hooker, J. N. (2005). A hybrid method for planning and scheduling. Constraints, 10,
385- 401. Iannoni, A. P., & Morabito, R. (2006). A discrete simulation analysis of a logistics
supply system, Transport Research, Part E, 42, 191-210. Jeong, K. C., & Kim. Y. D. (1998). A real-time scheduling mechanism for a flexible
manufacturing system: Using simulation and dispatching rules, International Journal of Production Research, 36, 2609-2626.
Jeong, B. J., & Kim, K. H. (2011). Scheduling operations of a rail crane and
container deliveries between rail and port terminals. Engineering Optimization, 43(6), 597- 613.
Kaewtrakulpong, K. (2008). Multi-objective optimisation for cost reduction of
mechanical sugarcane harvesting and transportation in thailand. PhD Thesis, the Graduate School of Life and Environment Sciences, the University of Tsukuba.
Kirkpatrick, S., Gelatt, C. D., & Vecchi, M. P. (1983). Optimization by simulated annealing. Science, 220, 671-680.
Kozan, E., & Burdett, R. (2005). A railway capacity determination model and rail access charging methodologies. Transportation Planning and Technology, 28(1), 27-45.
Lawler, E. L., Lenstra, J. K., Kan, A. H. G., & Shmoys, D. B. (1993). Sequencing and scheduling: Algorithms and complexity, in logistics of production and inventory, Handbooks Opertion Research Management Science, 4, S. C. Graves et al., eds., Elsevier, New York, 445–522.
Le Gal, P. Y., Lejars, C., & Auzoux, S. (2003). MAGI: A Simulation tool to address
cane supply chain management. Proceedings of the South African Sugar Technologists' Association, 7, 555-565.
Le Gal, P. Y., Le Masson, J., Bezuidenhout, C.N., & Lagrange, L.F. (2009). Coupled modelling of sugarcane supply planning and logistics as a management tool. Computers and Electronics in Agriculture, 68(2), 168-177.
Le Gal, P. Y., Lyne, P. W. L., Meyer, E., & Soler, L. (2008). Impact of sugarcane supply scheduling on mill sugar production: A South African case study. Agricultural Systems, 96, 64-74.
Le Gal, P. Y., Meyer, E., Lyne, P. W. L., & Calvinho, O. (2004). Value and
feasibility of alternative cane supply scheduling for a South-African mill supply area. Proceedings of the South African Sugar Technologists' Association, 78, 81-94.
Lejars, C., Yves Le Gal, P., & Auzoux, S. (2008). A decision support approach for
cane supply management within a sugar mill area. Computer and Electronics in Agriculture, 60, 239-249.
Lenstra, J. K., & Rinnooy Kan, A. H. G. (1979). Computational complexity of discrete optimization problems. Annals of Discrete Mathematics, 4, 121-140.
Lhomme, O. (1993). Consistency techniques for numeric CSPs. Proceedings of the International Joint Conference on Artificial Intelligence (IJCAI-93), 232-238.
Liu, S. Q., & Kozan, E. (2009). Scheduling trains as a blocking parallel-machine job shop scheduling problem. Computers & Operations Research, 36(10), 2840-2852.
Liu, S. Q., & Kozan, E. (2011). Scheduling trains with priorities: A no-wait blocking parallel-machine job-shop scheduling model. Transportation Science, 45(2), 175-198.
Lopez, E., Miquel, S., & Pla, L. M. (2006). Sugar cane transportation in Cuba, A case study. European Journal of Operational Research, 174, 374-386.
Manne, A. S. (1960). On the job shop scheduling problem, Operations Research, 8, 219-223
Marinov, M., & Viegas, J. (2011). A mesoscopic simulation modelling methodology for analyzing and evaluating freight train operations in a rail network. Simulation Modelling Practice and Theory, 19(1), 516-539.
Martin, F., Pinkney, A., & Yu, X. (2001). Cane railway scheduling via constraint logic programming: Labelling order and constraints in a real-life application. Annals of Operations Research, 108, 193-209.
Mascis, A., & Pacciarelli, D. (2002). Job-shop scheduling with blocking and no-wait constraints. European Journal of Operational Research, 143, 498-517.
Masoud, M., Kozan, E., & Kent, G. (2010a). Scheduling techniques to optimise sugarcane rail systems. ASOR Bulletin, 29, 25-34.
Masoud, M., Kozan, E., & Kent, G. (2010b). A constraint programming approach to optimise sugarcane rail operations. Proceedings of the 11th Asia Pacific Industrial Engineering and Management Systems Conference 2010, 147:1-7, Malaysia.
Masoud, M., Kozan, E., & Kent, G. (2010c). A comprehensive approach for scheduling single track railways. The Annual Conference on Statistics, Computer Sciences and Operations Research, Cairo, Egypt, 45, 19-30.
Masoud, M., Kozan, E., & Kent, G. (2011). A job-shop scheduling approach for optimising sugarcane rail operations. Flexible Services and Manufacturing Journal, 23(2), 181-196.
McWhinney, W., & Penridge, L. K. (1991). ACTSS – Animated cane transport
scheduling system. ACADS/AITPM Seminar on Transport Simulation Systems, 8pp.
Meng, L., & Zhou, X. (2011). Robust Single-Track Train Dispatching model under a dynamic and stochastic environment: A scenario-based rolling horizon solution approach. Transportation Research Part B, doi:10.1016/j.trb.2011.05.001.
Milano, M. (Ed.). (2004). Constraint and integer programming: Toward a unified methodology. Kluwer.
Milford, B. (2002). The State of value Chains in the Australian Sugar Industry, CRC Sugar Occasional Publication, Australia, Townsville: 1-22.
Mohr, R., & Henderson, T. C. (1986). Arc and path consistency. Artificial
Intelligence, 28, 225- 233. Montanari, U. (1974). Networks of constraints: Fundamental properties and
applications to picture processing. Information Sciences, 7, 95-132. Moser, M., & Engell, S. (1992). Avoiding scheduling errors by partial simulation of
the future, Proceedings of the 31st Conference on Decision and Control, Tucson, Arizona, 411-412.
Mouret, S., Grossmann, I. E., & Pestiaux, P. (2009). Tightening the linear relaxation of a mixed integer nonlinear program using constraint programming. CPAIOR 2009, LNCS 5547, 208-222.
Murali, P., Dessouky,M. , Ordóñez, F., & Palmer, K. (2009). A delay estimation technique for single and double-track railroads. Transportation Research Part E, Logistics and Transportation Review, doi:10.1016/j.tre.2009.04.016.
Nuijten, W., & Le Pape, C. (1998). Constraint-based job shop scheduling with ILOG SCHEDULER. Journal of Heuristics, 3, 271-286.
Oliveira, S. (2001). Solving single-track railway scheduling problem using constraint programming. PhD thesis, University of Leeds, School of Computing.
Pierreval, H., & Mebarki, N. (1997). Dynamic selection of dispatching rules for manufacturing system scheduling. International Journal of Production Research, 35, 1575-1591.
Pinedo, M. (2008). Scheduling: Theory, algorithms, and systems. Springer Science+Business Media, LLC. doi:10.1007/978-0-387-78935-4.
Pinkney, A. J., & Everitt, P.G. (1997). Towards an integrated cane transport scheduling system. Proceedings of the Australian Society of Sugar Cane Technologists 19, 420-425.
Puget, J. F. (1995). A comparison between constraint programming and integer programming. In: Conference on Applied Mathematical Programming and Modelling (APMOD95), Brunel University.
Puget, J. F. (1998). A fast algorithm for the bound consistency of alldiff constraints. Proceedings of 15th National Conference on Artificial Intelligence (AAAI), 359 -366.
Puget, J. F., & Lustig, I. (2001). Constraint programming and maths programming.
The Knowledge Engineering Review, 16, 1, 5-23. Rajendran, C., & Holthaus, O. (1999). A comparative study of dispatching rules in
dynamic flow shops and job shops. European Journal of Operational Research, 116(1), 156-170.
Regin, J. (1994). A filtering algorithm for constraints of difference in CSPs. In
Proceedings of the National Conference on Artificial Intelligence(AAAI), AAAI Press, 362 -367.
Rodriguez, J. (2007). A constraint programming model for real-time train scheduling at junctions. Transportation Research Part B, 41, 231-245.
Rose, O. (2002). Some issues of the critical ratio dispatch rule in semiconductor manufacturing. In Proceedings of the 2002 Winter Simulation Conference, 1401-1405.
Sadeh, N., & Fox, M. S. (1996). Variable and value ordering heuristics for the job
shop scheduling constraint satisfaction problem. Artificial Intelligence 86, 1-41.
Sadeh, N., Sycara, K., & Xiong, Y. (1995). Backtracking techniques for the job
shop scheduling constraint satisfaction problem. Artificial Intelligence 76, 455-480.
Salassi, M. E., & Barker, F. G. (2008). Reducing harvest costs through coordinated
sugarcane harvest and transport operations in Louisiana. Journal Association Sugar Cane Technologists, 28, 32-41.
Salido, M.A., & Barber, F. (2009). Mathematical solutions for solving periodic railway transportation. Mathematical Problems in Engineering, doi:10.1155/2009/728916.
Sato, T., Kakumoto, Y., & Murata, T. (2007). Shunting scheduling method in a railway depot for dealing with changes in operational conditions. IEEJ Transactions on Electronics, Information and Systems, 127(2), 274-283.
Sierra, M. R., & Varela, R. (2008). Pruning by dominance in best-first search for the job shop scheduling problem with total flow time. Journal of Intelligent Manufacturing, 21(1), 111-119.
Smith, B. M., & Grant, S. A. (1998). Trying harder to fail first. In Proceedings of the Thirteenth European Conference on Artificial Intelligence (ECAI), Wiley, 249-253.
Swarnkar, R., & Tiwari, M. K. (2004). Modeling machine loading problem of FMSs
and its solution methodology using a hybrid tabu search and simulated annealing-based heuristic approach. Robotics and Computer-Integrated Manufacturing, 20 (3), 199-209.
Trentesaux, D., Pesin, P., & Tahon, C. (2001). Comparison of constraint logic programming and distributed problem solving: A case study for interactive, efficient and practicable job-shop scheduling. Computers & Industrial Engineering, 39, 187-211.
Tsang, E. (1993). Foundations of constraint satisfaction. Academic Press, London.
Tsin, Y. H. (2002). Some remarks on distributed depth-first search. Information Processing Letters, 82, 173-178.
Walsh, T. (1997). Depth-bounded discrepancy search. In Proceedings of the Fifteenth International Joint Conference on Artificial Intelligence, 1388-1395. San Francisco, CA, USA.
Waltz, D. (1975). Understanding line drawings of scenes with shadows. In Winston, P. H., ed., The Psychology of Computer Vision, McGraw-Hill, 19-91.
Watson, J. P., & Beck, J. C. (2008). A hybrid constraint programming / local search
approach to the job-shop scheduling problem. L. Perron and M. Trick (Eds.), CPAIOR, LNCS 5015, 263-277. Springer, Berlin.
Yang, L., Gao, Z., & Li, K. (2010). Passenger train scheduling on a single-track or partially double-track railway with stochastic information. Engineering Optimization, 42(11), 1003-1022.
Yang, S. (2006). Job-shop scheduling with an adaptive neural network and local
search hybrid approach. In Proceedings of the 2006 IEEE international joint conference on neural networks, 2720-2727. http://hdl.handle.net/2381/8505
Yang, S., Wang, D., Chai, T., & Kendall, G. (2010). An improved constraint
satisfaction adaptive neural network for job-shop scheduling. Journal of Scheduling, 13(1), 17-38.
Yuan, J., & Hansen, I. A. (2007). Optimizing capacity utilization of stations by estimating knock-on train delays. Transportation Research Part B, 41, 202-217.
Zhang, D., & Deng, A. (2005). An effective hybrid algorithm for the problem of packing circles into a larger containing circle. Computers and Operations Research, 32(8), 1941-1951.
Zhou, X., & Zhong, M. (2007). Single-track train timetable with guaranteed optimality: Branch-and-bound algorithms with enhanced lower bounds. Transportation Research Part B, 41, 320-341.
Zolfaghari, S., & Liang, M. (1999). Jointly solving the group scheduling and machining speed selection problems: A hybrid tabu search and simulated annealing approach. International Journal of Production Research, 37 (10), 2377-2397.
Appendix A
A Hyper Branch and Bound Technique for Job Shop Scheduling Problems

While exact techniques give an optimal solution, they are very time consuming even for small cases. Here the hyper branch and bound technique is used to solve a job shop problem and to show that a branch and bound technique, as an exact technique, is time consuming even for a small problem such as Example 3.1 in Chapter 3. This example is a 3/3/G/Cmax job shop problem, which is strongly NP-hard. The solution steps of the hyper branch and bound technique to solve Example 3.1 are described as follows:
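The overall control flow of such a branch and bound search can be sketched as follows. This is a minimal best-first skeleton, not the thesis implementation; the problem-specific functions (`lower_bound`, `branch`, `is_leaf`, `makespan`) are assumptions supplied by the caller.

```python
import heapq

def branch_and_bound(root, lower_bound, branch, is_leaf, makespan):
    """Best-first branch and bound skeleton: always expand the node with
    the smallest lower bound, and fathom any node whose bound cannot
    improve on the incumbent solution."""
    best_value, best_leaf = float('inf'), None
    counter = 0                              # tie-breaker for the heap
    frontier = [(lower_bound(root), counter, root)]
    while frontier:
        lb, _, node = heapq.heappop(frontier)
        if lb >= best_value:
            continue                         # fathomed: bound meets incumbent
        if is_leaf(node):
            value = makespan(node)
            if value < best_value:
                best_value, best_leaf = value, node
        else:
            for child in branch(node):
                child_lb = lower_bound(child)
                if child_lb < best_value:    # keep only promising children
                    counter += 1
                    heapq.heappush(frontier, (child_lb, counter, child))
    return best_value, best_leaf
```

In the levels below, `branch` would generate one child per operation in the conflict set α', and `lower_bound` would be evaluated as in each level's LB* calculation.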
Level 0
The problem is represented by a conjunctive graph as shown in Figure A1.
Figure A1: Conjunctive graph for the 3/3/G/Cmax problem
The initial parameters are calculated as follows:
α = {(1, 1), (1, 4), (2, 7)},
t(α) = min{0+4, 0+4, 0+3} = 3,
i* = 2,
α' = {(2, 7)}.
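The computation of t(α), i* and α' at a node can be written out explicitly. The sketch below is illustrative only; the level-0 operation data (machine, start, duration) are reconstructed from the worked values in this example and should be checked against Example 3.1.

```python
def branching_step(alpha):
    """alpha: candidate operations, one per job, as (machine, start, duration).
    Returns t(alpha) (earliest completion among candidates), the machine i*
    on which that earliest completion occurs, and the conflict set alpha'
    of candidates on i* that start before t(alpha)."""
    t = min(s + p for (m, s, p) in alpha)
    i_star = next(m for (m, s, p) in alpha if s + p == t)
    conflict = [(m, s, p) for (m, s, p) in alpha if m == i_star and s < t]
    return t, i_star, conflict

# Level 0: the first operation of each job, all available at time 0
# (reconstructed data): J1 -> (M1, 0, 4), J2 -> (M1, 0, 4), J3 -> (M2, 0, 3).
alpha0 = [(1, 0, 4), (1, 0, 4), (2, 0, 3)]
print(branching_step(alpha0))   # (3, 2, [(2, 0, 3)])
```

This reproduces t(α) = 3, i* = 2 and α' = {(2, 7)} above, with operation (2, 7) represented by its (machine, start, duration) triple.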
Level 1
Operation (2, 7) is scheduled first on machine M2, and two disjunctive arcs are drawn from this operation, (2, 7) to (2, 2) and (2, 7) to (2, 6), as shown in Figure A2.
Figure A2: Conjunctive graph for the 3/3/G/Cmax problem
The lower bound is LB = L{0, (2, 7), (1, 8), (3, 9), 10} = 13.
The n/1/Cmax problem for the three machines M1, M2 and M3 is solved in Table A1. To find a better lower bound, the data on each machine are collected for all jobs as shown in Table A1 and Figure A3.
Table A1: Solving the n/1/Cmax problem for the three machines M1, M2 and M3 in level 1
M1: S11=0, S12=0, S13=3; Φ11=13, Φ12=12, Φ13=10; d11=4, d12=5, d13=9
M2: S21=4, S22=10, S23=0; Φ21=9, Φ22=2, Φ23=13; d21=7, d22=13, d23=3
M3: S31=7, S32=0, S33=9; Φ31=6, Φ32=12, Φ33=4; d31=13, d32=5, d33=13
Optimal sequence for M1: J1, J2, J3 (times 4, 8, 14)
Optimal sequence for M2: J3, J1, J2 (times 3, 4, 7, 10, 12)
Optimal sequence for M3: J2, J3, J1 (times 4, 10, 14, 20)
L1=8, L2=0, L3=8
Figure A3: Optimal solutions of M1, M2 and M3 in level 1 for the n/1/Cmax problem
As a result, the new lower bound can be calculated: LB* = 13 + max{8, 0, 8} = 21.
To find the next operation to be scheduled, operation (2, 7) is deleted from α and the next operation (1, 8) of the same job (job 3) is added.
Update α:
α = {(1, 1), (1, 4), (1, 8)},
t(α) = min{0+4, 0+4, 3+6} = 4,
i* = 1, and
α' = {(1, 1), (1, 4), (1, 8)}, where si*j < t(α).
As a result, level 1 is branched into three parts: in one branch operation (1, 1) is scheduled first, in another operation (1, 4) is scheduled first, and in the third operation (1, 8) is scheduled first. Hence, scheduling operation (1, 1) first is denoted level 2(a), operation (1, 4) first level 2(b), and operation (1, 8) first level 2(c), as shown in Figure A4:
Figure A4: The branching procedure at level 1
Level 2
Level 2a
Operation (1, 1) is scheduled first on machine M1; therefore two disjunctive arcs are drawn, (1, 1) to (1, 4) and (1, 1) to (1, 8), as shown in Figure A5.
Figure A5: Conjunctive graph for the 3/3/G/Cmax problem
The lower bound LB = L{0, (1, 1), (1, 4), (3, 5), (2, 6), 10} = 16.
The n/1/Cmax problem is solved to obtain a better lower bound, as shown in Table A2 and Figure A6.
Table A2: Solving the n/1/Cmax problem for the three machines M1, M2 and M3 in level 2a
M1: S11=0, S12=4, S13=4; Φ11=16, Φ12=12, Φ13=10; d11=4, d12=8, d13=12
M2: S21=4, S22=14, S23=0; Φ21=9, Φ22=2, Φ23=13; d21=10, d22=16, d23=6
M3: S31=7, S32=8, S33=10; Φ31=6, Φ32=8, Φ33=4; d31=16, d32=14, d33=16
Optimal sequence for M1: J1, J2, J3 (times 4, 8, 14)
Optimal sequence for M2: J3, J1, J2 (times 3, 4, 7, 14, 16)
Optimal sequence for M3: J2, J3, J1 (times 7, 13, 17, 23)
L1=2, L2=0, L3=10
Figure A6: Solving the n/1/Cmax problem for the three machines M1, M2 and M3 in level 2a
The new lower bound is LB* = LB + max{Li} for i = 1 to m:
LB* = 16 + max{2, 0, 10} = 26.
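Each machine's n/1/Cmax subproblem supplies the correction term max{Li} added to LB. One standard way to bound such a one-machine problem with release dates (heads) and remaining work (tails) is the preemptive longest-tail-first rule; the sketch below is a generic version of that per-machine bound, not the thesis's exact tabular procedure, and the sample data are illustrative.

```python
import heapq

def one_machine_bound(ops):
    """Preemptive longest-tail-first rule on one machine.
    ops: list of (release r, processing p, tail q) triples.
    Returns the optimal preemptive value of max_j (C_j + q_j),
    a valid lower bound on the machine's contribution."""
    ops = sorted(ops)                        # order by release date
    heap, t, i, best = [], 0, 0, 0
    while i < len(ops) or heap:
        if not heap and t < ops[i][0]:
            t = ops[i][0]                    # machine idles until next release
        while i < len(ops) and ops[i][0] <= t:
            r, p, q = ops[i]
            heapq.heappush(heap, (-q, p))    # longest tail has highest priority
            i += 1
        neg_q, p = heap[0]
        nxt = ops[i][0] if i < len(ops) else float('inf')
        run = min(p, nxt - t)                # run until finish or next release
        t += run
        if run == p:
            heapq.heappop(heap)
            best = max(best, t - neg_q)      # completion time plus tail
        else:
            heapq.heapreplace(heap, (neg_q, p - run))
    return best

print(one_machine_bound([(0, 4, 6), (2, 2, 10)]))   # 14
```

Taking the maximum such bound over the machines, minus the length already accounted for in LB, plays the same role as max{L1, L2, L3} in the calculation above.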
To find the next operation to be scheduled after operation (1, 1), operation (1, 1) is deleted from α and operation (2, 2), the second operation of job 1, is added to α.
α = {(2, 2), (1, 4), (1, 8)},
t(α) = min{4+3, 4+4, 4+6} = 7, then i* = 2 and α' = {(2, 2)}.
So in the next branching, operation (2, 2) is scheduled on M2 at level 3(a), as shown in Figure A7.
Figure A7: The branching procedure at level 2a
Level 2b
In this level, operation (1, 4) is scheduled first on M1 after operation (2, 7); therefore two disjunctive arcs are drawn, (1, 4) to (1, 1) and (1, 4) to (1, 8), as shown in Figure A8.
Figure A8: Disjunctive graph for level 2b
LB = L{0, (1, 4), (1, 1), (2, 2), (3, 3), 10} = 17.
Table A3 and Figure A9 are used to obtain a better lower bound.
Table A3: Solving the n/1/Cmax problem for the three machines M1, M2 and M3 in level 2b
M1: S11=4, S12=0, S13=4; Φ11=13, Φ12=17, Φ13=10; d11=8, d12=4, d13=13
M2: S21=8, S22=10, S23=0; Φ21=9, Φ22=2, Φ23=13; d21=11, d22=17, d23=7
M3: S31=11, S32=4, S33=10; Φ31=6, Φ32=8, Φ33=4; d31=17, d32=15, d33=17
Optimal sequences: M1: J2, J1, J3 (times 4, 8, 14); M2: J3, J1, J2 (times 3, 8, 11, 13); M3: J2, J3, J1 (times 4, 10, 14, 20)
L1=1, L2=0, L3=3
Figure A9: Solving the n/1/Cmax problem for the three machines M1, M2 and M3 in level 2b
The new lower bound is LB* = 17 + max{1, 0, 3} = 20.
To find the next operation to be scheduled after operation (1, 4), operation (1, 4) is deleted from α and operation (3, 5) is added to α.
α = {(1, 1), (3, 5), (1, 8)}, t(α) = min{4+4, 4+6, 4+6} = 8, then i* = 1 and α' = {(1, 1), (1, 8)}.
So level 2(b) is branched into two parts: in one branch operation (1, 1) is scheduled first on M1, and in the other operation (1, 8) is scheduled first on M1. Hence operation (1, 1) scheduled first is denoted level 3bi and operation (1, 8) scheduled first level 3bii, as shown in Figure A10.
Figure A10: The branching procedure at level 2b
Level 2c
In this level, operation (1, 8) is scheduled first on M1 after operation (2, 7); therefore two disjunctive arcs are drawn, (1, 8) to (1, 1) and (1, 8) to (1, 4), as shown in Figure A11.
Figure A11: Disjunctive graph for level 2c
The lower bound is LB = L{0, (2, 7), (1, 8), (1, 1), (2, 2), (3, 3), 10} = 22.
Table A4 and Figure A12 are used to obtain a better lower bound.
Table A4: Solving the n/1/Cmax problem for the three machines M1, M2 and M3 in level 2c
M1: S11=9, S12=9, S13=3; Φ11=13, Φ12=12, Φ13=19; d11=13, d12=14, d13=9
M2: S21=13, S22=19, S23=0; Φ21=9, Φ22=2, Φ23=22; d21=16, d22=22, d23=3
M3: S31=16, S32=13, S33=9; Φ31=6, Φ32=8, Φ33=4; d31=22, d32=20, d33=22
Optimal sequences: M1: J3, J1, J2 (times 3, 9, 13, 17); M2: J3, J1, J2 (times 3, 13, 16, 19, 21); M3: J3, J2, J1 (times 9, 13, 19, 25)
L1=3, L2=0, L3=3
Figure A12: Solving the n/1/Cmax problem for the three machines M1, M2 and M3 in level 2c
The new lower bound is LB* = 22 + max{3, 0, 3} = 25.
To find the next operation to be scheduled after operation (1, 8), operation (1, 8) is deleted from α and operation (3, 9) is added to α.
As a result, α = {(1, 1), (1, 4), (3, 9)}, t(α) = min{9+4, 9+4, 9+4} = 13,
then i* = 1, 3 and α' = {(1, 1), (1, 4), (3, 9)}.
So level 2(c) is branched into three parts: operation (1, 1) scheduled first on M1, operation (1, 4) scheduled first on M1, and operation (3, 9) scheduled first on M3. Hence operation (1, 1) scheduled first is denoted level 3ci, operation (1, 4) scheduled first level 3cii, and operation (3, 9) scheduled first level 3ciii, as shown in Figure A13.
Figure A13: The branching procedure at level 2c
To start level 3, Figure A14 shows the tree at level 2 and all the branches which are evaluated at level 3.
Figure A14: The complete branching procedure at level 2
Level 3
Level 3a
Operation (2, 2) is scheduled on M2 after operation (1, 1); therefore a disjunctive arc is drawn from (2, 2) to (2, 6), as shown in Figure A15.
Figure A15: Disjunctive graph for level 3a
The lower bound is LB = L{0, (1, 1), (1, 4), (3, 5), (2, 6), 10} = 16.
Table A5 and Figure A16 are used to obtain a better lower bound.
Table A5: Solving the n/1/Cmax problem for the three machines M1, M2 and M3 in level 3a
M1: S11=0, S12=4, S13=4; Φ11=16, Φ12=12, Φ13=10; d11=4, d12=8, d13=12
M2: S21=4, S22=14, S23=0; Φ21=9, Φ22=2, Φ23=13; d21=10, d22=16, d23=6
M3: S31=7, S32=8, S33=10; Φ31=6, Φ32=8, Φ33=4; d31=16, d32=14, d33=16
Optimal sequences: M1: J1, J2, J3 (times 4, 8, 14); M2: J3, J1, J2 (times 3, 4, 7, 14, 16); M3: J3, J2, J1 (times 7, 13, 17, 23)
L1=2, L2=0, L3=10
Figure A16: Solving the n/1/Cmax problem for the three machines M1, M2 and M3 in level 3a
The new lower bound LB* = 16 + max{2, 0, 10} = 26.
To find the next operation to be scheduled, operation (2, 2) is deleted from α and operation (3, 3) is added to α.
As a result, α is updated as follows: α = {(3, 3), (1, 4), (1, 8)}, t(α) = min{7+6, 4+4, 4+6} = 8, then i* = 1 and α' = {(1, 4), (1, 8)}.
So level 3(a) is branched into two parts: in one branch operation (1, 4) is scheduled first on M1, and in the other operation (1, 8) is scheduled first on M1. Hence operation (1, 4) scheduled first is denoted level 4ai and operation (1, 8) scheduled first level 4aii, as shown in Figure A17.
Figure A17: The branching procedure at level 3a
Level 3bi
In this level, operation (1, 1) is scheduled on M1 after operation (1, 4); therefore there is one disjunctive arc, from (1, 1) to (1, 8), as shown in Figure A18.
Figure A18: Disjunctive graph for level 3bi
The lower bound LB = L{0, (1, 4), (1, 1), (1, 8), (3, 9), 10} = 18.
Table A6 and Figure A19 are used to obtain a better lower bound.
Table A6: Solving the n/1/Cmax problem for the three machines M1, M2 and M3 in level 3bi
M1: S11=4, S12=0, S13=8; Φ11=14, Φ12=18, Φ13=10; d11=8, d12=4, d13=14
M2: S21=8, S22=10, S23=0; Φ21=9, Φ22=2, Φ23=13; d21=12, d22=18, d23=8
M3: S31=11, S32=4, S33=14; Φ31=6, Φ32=8, Φ33=4; d31=18, d32=16, d33=18
Optimal sequences: M1: J2, J1, J3 (times 4, 8, 14); M2: J3, J1, J2 (times 3, 8, 11, 13); M3: J2, J1, J3 (times 4, 10, 11, 17, 21)
L1=0, L2=0, L3=3
Figure A19: Solving the n/1/Cmax problem for the three machines M1, M2 and M3 in level 3bi
The new lower bound LB* = 18 + max{0, 0, 3} = 21.
To find the next operation to be scheduled after operation (1, 1), operation (1, 1) is deleted from α and operation (2, 2) is added to α.
As a result, α = {(2, 2), (3, 5), (1, 8)}, t(α) = min{8+3, 4+6, 8+6} = 10,
then i* = 1 and α' = {(3, 5)}.
So in the next branching, operation (3, 5) is scheduled at level 4bi after operation (1, 1), as shown in Figure A20.
Figure A20: The branching procedure at level 3bi
Level 3bii
Operation (1, 8) is scheduled first on M1 after operation (1, 4); therefore there is one disjunctive arc, (1, 8) to (1, 1), as shown in Figure A21.
Figure A21: Disjunctive graph for level 3bii
The lower bound LB = L{0, (1, 4), (1, 8), (1, 1), (2, 2), (3, 3), 10} = 23.
Table A7 and Figure A22 are used to obtain a better lower bound.
Table A7: Solving the n/1/Cmax problem for the three machines M1, M2 and M3 in level 3bii
M1: S11=10, S12=0, S13=4; Φ11=13, Φ12=23, Φ13=19
M2: S21=14, S22=10, S23=0; Φ21=9, Φ22=2, Φ23=22
M3: S31=17, S32=4, S33=10; Φ31=6, Φ32=8, Φ33=4
d11=14, d12=4, d13=10; d21=17, d22=23, d23=4; d31=23, d32=21, d33=23
Optimal sequences: M1: J2, J3, J1 (times 4, 10, 14); M2: J3, J1, J2 (times 4, 14, 17, 19); M3: J2, J3, J1 (times 4, 10, 14, 17, 23)
L1=0, L2=0, L3=0
Figure A22: Solving the n/1/Cmax problem for the three machines M1, M2 and M3 in level 3bii
The new lower bound LB* = 23 + max{0, 0, 0} = 23.
To find the next operation to be scheduled after operation (1, 8), operation (1, 8) is deleted from α and operation (3, 9) is added to α.
As a result, α = {(1, 1), (3, 5), (3, 9)} and t(α) = min{10+4, 4+6, 9+4} = 10,
then i* = 3 and α' = {(3, 5), (3, 9)}.
So level 3bii is branched into two parts: in one branch operation (3, 5) is scheduled first on M3, and in the other operation (3, 9) is scheduled first on M3. Hence operation (3, 5) scheduled first is denoted level 4(bii)a and operation (3, 9) scheduled first level 4(bii)b, as shown in Figure A23.
Figure A23: The branching procedure at level 3bii
Level 3ci
Operation (1, 1) is scheduled first on M1 after operation (1, 8); therefore there is one disjunctive arc, (1, 1) to (1, 4), as shown in Figure A24.
Figure A24: Disjunctive graph for level 3ci
The lower bound LB = L{0, (2, 7), (1, 8), (1, 1), (1, 4), (3, 5), (2, 6), 10} = 25.
Table A8 and Figure A25 are used to obtain a better lower bound.
Table A8: Solving the n/1/Cmax problem for the three machines M1, M2 and M3 in level 3ci
M1: S11=9, S12=13, S13=3; Φ11=16, Φ12=12, Φ13=22; d11=13, d12=17, d13=9
M2: S21=13, S22=23, S23=0; Φ21=9, Φ22=2, Φ23=25; d21=19, d22=25, d23=3
M3: S31=16, S32=17, S33=9; Φ31=6, Φ32=8, Φ33=4; d31=25, d32=23, d33=25
Optimal sequences: M1: J3, J1, J2 (times 3, 9, 13, 17); M2: J3, J1, J2 (times 4, 13, 16, 23, 25); M3: J3, J1, J2 (times 9, 13, 17, 23, 29)
L1=0, L2=0, L3=4
Figure A25: Solving the n/1/Cmax problem for the three machines M1, M2 and M3 in level 3ci
The new lower bound is LB* = 25 + max{0, 0, 4} = 29.
To find the next operation to be scheduled after operation (1, 1), operation (1, 1) is deleted from α and operation (2, 2) is added to α.
As a result,
α = {(2, 2), (1, 4), (3, 9)},
t(α) = min{13+3, 13+4, 9+4} = 13, then
i* = 3 and α' = {(3, 9)}.
So in the next branching, operation (3, 9) is scheduled on M3 at level 4(ci), as shown in Figure A26.
Figure A26: The branching procedure at level 3ci
Level 3cii
Operation (1, 4) is scheduled first on M1 after operation (1, 8); therefore there is one disjunctive arc, (1, 4) to (1, 1), as shown in Figure A27.
Figure A27: Disjunctive graph for level 3cii
LB = L{0, (2, 7), (1, 8), (1, 4), (1, 1), (2, 2), (3, 3), 10} = 26.
Table A9 and Figure A28 are used to obtain a better lower bound.
Table A9: Solving the n/1/Cmax problem for the three machines M1, M2 and M3 in level 3cii
M1: S11=13, S12=9, S13=3; Φ11=13, Φ12=17, Φ13=23; d11=17, d12=13, d13=9
M2: S21=17, S22=19, S23=0; Φ21=9, Φ22=2, Φ23=26; d21=20, d22=26, d23=3
M3: S31=20, S32=13, S33=9; Φ31=6, Φ32=8, Φ33=4; d31=26, d32=24, d33=26
Optimal sequences: M1: J3, J2, J1 (times 3, 9, 13, 17); M2: J3, J1, J2 (times 3, 17, 20, 22); M3: J3, J2, J1 (times 9, 13, 19, 20, 26)
L1=0, L2=0, L3=0
Figure A28: Solving the n/1/Cmax problem for the three machines M1, M2 and M3 in level 3cii
The new lower bound is LB* = 26 + max{0, 0, 0} = 26.
To find the next operation to be scheduled after operation (1, 4), operation (1, 4) is deleted from α and operation (3, 5) is added to α.
As a result, α = {(1, 1), (3, 5), (3, 9)} and t(α) = min{13+4, 13+6, 9+4} = 13,
then i* = 3 and α' = {(3, 5)}.
So in the next branching, operation (3, 5) is scheduled on M3 at level 4(cii), as shown in Figure A29.
Figure A29: The branching procedure at level 3cii
Level 3ciii
In this level, operation (3, 9) is scheduled first after operation (1, 8); therefore two disjunctive arcs are drawn, as shown in Figure A30.
Figure A30: Disjunctive graph for level 3ciii
LB = L{0, (2, 7), (1, 8), (1, 1), (2, 2), (3, 3), 10} = 22.
Table A10 and Figure A31 are used to obtain a better lower bound.
Table A10: Solving the n/1/Cmax problem for the three machines M1, M2 and M3 in level 3ciii
M1: S11=9, S12=9, S13=3
M2: S21=13, S22=19, S23=0
M3: S31=16, S32=13, S33=0
Φ11=13, Φ12=12, Φ13=19; Φ21=9, Φ22=2, Φ23=22; Φ31=6, Φ32=8, Φ33=22
d11=13, d12=14, d13=9; d21=16, d22=22, d23=3; d31=22, d32=20, d33=3
Optimal sequences: M1: J3, J2, J1 (times 3, 9, 13, 17); M2: J3, J1, J2 (times 3, 13, 16, 19, 21); M3: J3, J2, J1 (times 9, 13, 19, 25)
L1=3, L2=0, L3=3
Figure A31: Solving the n/1/Cmax problem for the three machines M1, M2 and M3 in level 3ciii
The new lower bound LB* = 22 + max{3, 0, 3} = 25.
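At this point every level-3 node carries a lower bound (3a: 26, 3bi: 21, 3bii: 23, 3ci: 29, 3cii: 26, 3ciii: 25). The node-selection and fathoming step of branch and bound can be illustrated with these values; the incumbent value of 25 below is purely hypothetical, used only to show how nodes would be discarded once a feasible schedule of that length is known.

```python
# Lower bounds of the active level-3 nodes, taken from the text above.
node_lb = {'3a': 26, '3bi': 21, '3bii': 23, '3ci': 29, '3cii': 26, '3ciii': 25}

incumbent = 25   # hypothetical upper bound from some feasible schedule

# Fathom every node whose bound meets or exceeds the incumbent,
# then branch next on the node with the smallest remaining bound.
alive = {name: lb for name, lb in node_lb.items() if lb < incumbent}
next_node = min(alive, key=alive.get)
print(alive)       # {'3bi': 21, '3bii': 23}
print(next_node)   # '3bi'
```

With no incumbent yet, all six nodes stay alive, which is why the search below must expand every branch into level 4.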
In the same way, level 4 is completed. Figure A32 shows the tree at level 4 and all the branches created from level 3.
Level 0
Level 1
2a  2b  2c
3a  3bi  3bii  3ci  3cii  3ciii
4ai  4aii  4bi  (4bii)a  (4bii)b  4ci  4cii
Figure A32: The complete branching procedure at level 4
Level 4
Level 4ai
Operation (1, 4) is scheduled on machine M1 after operation (2, 2); therefore it fixes a disjunctive arc, (1, 4) to (1, 8), as shown in Figure A33.
Figure A33: Disjunctive graph for level 4ai
The lower bound LB = L{0, (1, 1), (1, 4), (1, 8), (3, 9), 10} = 18.
Table A11 and Figure A34 are used to obtain a better lower bound.
Table A11: Solving the n/1/Cmax problem for the three machines M1, M2 and M3 in level 4ai
M1: S11=0, S12=4, S13=8; Φ11=18, Φ12=14, Φ13=10; d11=4, d12=8, d13=14
M2: S21=4, S22=14, S23=0; Φ21=9, Φ22=2, Φ23=13; d21=12, d22=18, d23=8
M3: S31=7, S32=8, S33=14; Φ31=6, Φ32=8, Φ33=4; d31=18, d32=16, d33=18
Optimal sequences: M1: J1, J2, J3 (times 4, 8, 14); M2: J3, J1, J2 (times 3, 4, 7, 14, 16); M3: J2, J3, J1 (times 8, 14, 18, 24)
L1=0, L2=0, L3=6
Figure A34: Solving the n/1/Cmax problem for the three machines M1, M2 and M3 in level 4ai
The new lower bound LB* = 18 + max{0, 0, 6} = 24.
To find the next operation to be scheduled, operation (1, 4) is deleted and operation (3, 5) is added to α.
α = {(3, 3), (3, 5), (1, 8)} and t(α) = min{7+6, 8+6, 8+6} = 13,
then i* = 3 and α' = {(3, 3), (3, 5)}.
So in the next branching, operations (3, 3) and (3, 5) are scheduled on M3 at levels 5aia and 5aib. The branching at level 4ai is shown in Figure A35.
Figure A35: The complete branching procedure at level 4ai
Level 4aii
Operation (1, 8) is scheduled on machine M1 after operation (2, 2); therefore it fixes a disjunctive arc, (1, 8) to (1, 4), as shown in Figure A36.
Figure A36: Disjunctive graph for level 4aii
The lower bound LB = L{0, (1, 1), (1, 8), (1, 4), (3, 5), (2, 6), 10} = 22.
Table A12 and Figure A37 are used to obtain a better lower bound.
Table A12: Solving the n/1/Cmax problem for the three machines M1, M2 and M3 in level 4aii
M1: S11=0, S12=10, S13=4; Φ11=22, Φ12=12, Φ13=18; d11=4, d12=14, d13=10
M2: S21=4, S22=20, S23=0; Φ21=9, Φ22=2, Φ23=21; d21=16, d22=22, d23=4
M3: S31=7, S32=14, S33=10; Φ31=6, Φ32=8, Φ33=4; d31=22, d32=20, d33=22
Optimal sequences: M1: J1, J3, J2 (times 4, 10, 14); M2: J3, J1, J2 (times 3, 4, 7, 20, 22); M3: J1, J2, J3 (times 7, 13, 14, 20, 24)
L1=0, L2=0, L3=2
Figure A37: Solving the n/1/Cmax problem for the three machines M1, M2 and M3 in level 4aii
The new lower bound LB* = 22 + max{0, 0, 2} = 24.
To find the next operation to be scheduled, operation (1, 8) is deleted and operation (3, 9) is added to α.
α = {(3, 3), (1, 4), (3, 9)} and t(α) = min{7+6, 9+12, 10+4} = 13, then i* = 3 and
α' = {(3, 3), (3, 9)}.
So in the next branching, operations (3, 3) and (3, 9) are scheduled on M3 at levels 5aiia and 5aiib. The branching at level 4aii is shown in Figure A38.
Figure A38: The complete branching procedure at level 4aii
Level 4bi
In this level, operation (3, 5) is scheduled on M3 after operation (1, 1); therefore it fixes two arcs, (3, 5) to (3, 3) and (3, 5) to (3, 9), as shown in Figure A39.
Figure A39: Disjunctive graph for level 4bi
The lower bound is LB = L{0, (1, 4), (1, 1), (1, 8), (3, 9), 10} = 18.
Table A13 and Figure A40 are used to obtain a better lower bound.
Table A13: Solving the n/1/Cmax problem for the three machines M1, M2 and M3 in level 4bi
M1: S11=4, S12=0, S13=8; Φ11=14, Φ12=18, Φ13=10; d11=8, d12=4, d13=14
M2: S21=8, S22=10, S23=0; Φ21=9, Φ22=2, Φ23=13; d21=12, d22=18, d23=8
M3: S31=11, S32=4, S33=14; Φ31=6, Φ32=12, Φ33=4; d31=18, d32=12, d33=18
Optimal sequences: M1: J2, J1, J3 (times 4, 8, 14); M2: J3, J1, J2 (times 3, 8, 11, 13); M3: J2, J1, J3 (times 4, 10, 11, 17, 21)
L1=0, L2=0, L3=3
Figure A40: Solving the n/1/Cmax problem for the three machines M1, M2 and M3 in level 4bi
The new lower bound LB* = 18 + max{0, 0, 3} = 21.
To find the next operation to be scheduled, operation (3, 5) is deleted and operation (2, 6) is added to α.
α = {(2, 2), (2, 6), (1, 8)} and t(α) = min{8+3, 10+2, 8+6} = 11,
then i* = 2 and α' = {(2, 2), (2, 6)}.
So in the next branching, operations (2, 2) and (2, 6) are scheduled on M2 at levels 5bia and 5bib. The branching at level 4bi is shown in Figure A41.
Figure A41: The complete branching procedure at level 4bi
Level 4biia
In this level, operation (3, 5) is scheduled first after operation (1, 8), and two arcs are fixed as shown in Figure A42:
Figure A42: Disjunctive graph for level 4biia
The lower bound LB = L{0, (1, 4), (1, 8), (1, 1), (2, 2), (3, 3), 10} = 23.
Table A14 and Figure A43 are used to obtain a better lower bound.
Table A14: Solving the n/1/Cmax problem for the three machines M1, M2 and M3 in level 4biia
M1: S11=10, S12=0, S13=4; Φ11=13, Φ12=23, Φ13=19; d11=14, d12=4, d13=10
M2: S21=14, S22=10, S23=0; Φ21=9, Φ22=2, Φ23=22; d21=17, d22=23, d23=4
M3: S31=17, S32=4, S33=10; Φ31=6, Φ32=12, Φ33=4; d31=23, d32=17, d33=23
Optimal sequences: M1: J2, J3, J1 (times 4, 10, 14); M2: J3, J1, J2 (times 3, 14, 17, 19); M3: J2, J3, J1 (times 4, 10, 14, 17, 23)
L1=0, L2=0, L3=0
Figure A43: Solving the n/1/Cmax problem for the three machines M1, M2 and M3 in level 4biia
The new lower bound LB* = 23 + max{0, 0, 0} = 23.
To find the next operation to be scheduled, operation (3, 5) is deleted and operation (2, 6) is added to α.
α = {(1, 1), (2, 6), (3, 9)}, t(α) = min{0+4, 10+2, 10+4} = 12, then i* = 2 and
α' = {(2, 6)}. So in the next branching, operation (2, 6) is scheduled on M2 at level 5biia after operation (3, 5). The branching at level 4biia is shown in Figure A44.
Figure A44: The complete branching procedure at level 4biia
Level 4biib
In this level, operation (3, 9) is scheduled after operation (1, 8), and two arcs are fixed, (3, 9) to (3, 5) and (3, 9) to (3, 3), as shown in Figure A45.
Figure A45: Disjunctive graph for level 4biib
The lower bound LB = L{0, (1, 4), (1, 8), (1, 1), (2, 2), (3, 3), 10} = 23.
Table A15 and Figure A46 are used to obtain a better lower bound.
Table A15: Solving the n/1/Cmax problem for the three machines M1, M2 and M3 in level 4biib
M1: S11=10, S12=0, S13=4; Φ11=13, Φ12=23, Φ13=19; d11=14, d12=4, d13=10
M2: S21=14, S22=20, S23=0; Φ21=9, Φ22=2, Φ23=22; d21=17, d22=23, d23=4
M3: S31=17, S32=14, S33=10; Φ31=6, Φ32=8, Φ33=12; d31=23, d32=21, d33=15
Optimal sequences: M1: J2, J3, J1 (times 4, 10, 14); M2: J3, J1, J2 (times 3, 14, 17, 19); M3: J2, J3, J1 (times 10, 14, 20, 26)
L1=0, L2=0, L3=3
Figure A46: Solving the n/1/Cmax problem for the three machines M1, M2 and M3 in level 4biib
The new lower bound LB* = 23 + max{0, 0, 3} = 26.
The branching terminates here, as there is no operation after (3, 9).
Level 4ci
In this level, operation (3, 9) is scheduled after operation (1, 1), and two arcs are fixed, (3, 9) to (3, 5) and (3, 9) to (3, 3), as shown in Figure A47.
Figure A47: Disjunctive graph for level 4ci
The lower bound LB = L{0, (2, 7), (1, 8), (1, 1), (1, 4), (3, 5), (2, 6), 10} = 25.
Table A16 and Figure A48 are used to obtain a better lower bound.
Table A16: Solving the n/1/Cmax problem for the three machines M1, M2 and M3 in level 4ci
M1: S11=9, S12=13, S13=3; Φ11=16, Φ12=12, Φ13=19; d11=13, d12=17, d13=12
M2: S21=13, S22=23, S23=0; Φ21=9, Φ22=2, Φ23=25; d21=19, d22=25, d23=3
M3: S31=16, S32=17, S33=9; Φ31=6, Φ32=8, Φ33=12; d31=25, d32=23, d33=17
Optimal sequences: M1: J3, J1, J2 (times 3, 9, 13, 17); M2: J3, J1, J2 (times 3, 13, 16, 23, 25); M3: J3, J2, J1 (times 9, 13, 17, 23, 29)
L1=0, L2=0, L3=4
Figure A48: Solving the n/1/Cmax problem for the three machines M1, M2 and M3 in level 4ci
The new lower bound LB* = 25 + max{0, 0, 4} = 29.
The branching terminates here, as there is no operation after (3, 9).
Level 4cii
Operation (3, 5) is scheduled after (1, 4). It fixes two arcs, (3, 5) to (3, 3) and (3, 5) to (3, 9), as shown in Figure A49.
Figure A49: Disjunctive graph for level 4cii
The lower bound is LB = L{0, (2, 7), (1, 8), (1, 4), (1, 1), (2, 2), (3, 3), 10} = 26.
Table A17 and Figure A50 are used to obtain a better lower bound.
Table A17: Solving the n/1/Cmax problem for the three machines M1, M2 and M3 in level 4cii
M1: S11=13, S12=9, S13=3; Φ11=13, Φ12=17, Φ13=23; d11=17, d12=13, d13=9
M2: S21=17, S22=19, S23=0; Φ21=9, Φ22=2, Φ23=26; d21=20, d22=26, d23=3
M3: S31=20, S32=13, S33=19; Φ31=6, Φ32=12, Φ33=4; d31=26, d32=20, d33=26
Optimal sequences: M1: J3, J1, J2 (times 3, 9, 13, 17); M2: J3, J1, J2 (times 3, 17, 20, 22); M3: J2, J3, J1 (times 13, 19, 23, 29)
L1=0 L2=0 L3=3
Figure A50: Solving n/1/Cmax problem for the three machines M1, M2, and M3 in level 4cii
The new lower bound LB*=26+max {0, 0, 3} =29.
To obtain the next operation after operation (3, 5), operation (3, 5) is deleted from α
and the next one, (2, 6), is added, where:
α = {(1, 1),(2, 6), (3, 9)} and t(α)=min{13+4, 19+2, 19+4}=17,
then i*=1 and α'={(1, 1)}.
So in the next branching, the operation (1, 1) has to be scheduled on M1 at level
5(cii), but as seen in Figure A35, this operation has no disjunctive arc to fix.
Hence, the next operation (2, 6) in t(α) is selected, because its value is less than that of (3, 9).
So, operation (2, 6) is scheduled after operation (3, 5) and α'= {(2, 6)} at level 5(cii)
as shown in Figure A51.
Figure A51: The complete branching procedure at level 4cii
Level 5
Level 5aia
In Figure A52, operation (3, 3) is scheduled on M3 after operation (1, 4). It fixes two
arcs:
(3, 3) to (3, 5) and (3, 3) to (3, 9)
Figure A52: Disjunctive graph for the level 5aia
LB = L{0, (1, 1), (2, 2), (3, 3), (3, 5), (2, 6), 10} = 21
Table A18 and Figure A53 are used to obtain a better makespan.
Table A18: Solving n/1/Cmax problem for the three machines M1, M2, and M3 in level 5aia
M1                          M2                          M3
J1      J2      J3          J1      J2      J3          J1      J2      J3
S11=0   S12=4   S13=8       S21=4   S22=19  S23=0       S31=7   S32=13  S33=14
Φ11=21  Φ12=14  Φ13=10      Φ21=17  Φ22=2   Φ23=20      Φ31=14  Φ32=8   Φ33=4
d11=4   d12=11  d13=17      d21=7   d22=21  d23=4       d31=13  d32=19  d33=21
Optimal solution for M1     Optimal solution for M2     Optimal solution for M3
J1 J2 J3                    J3 J1 J2                    J2 J3 J1
4 8 14 time                 3 4 7 19 21 time            7 13 19 23 time
L1=0  L2=0  L3=2
Figure A53: Solving n/1/Cmax problem for the three machines M1, M2, and M3 in level 5aia
The new lower bound LB*=21+max {0, 0, 2} =23.
Stop here, where there is no further branching.
Level 5aib
Operation (3, 5) is scheduled on M3 after operation (1, 4). It fixes two disjunctive arcs:
(3, 5) to (3, 3) and (3, 5) to (3, 9), as shown in Figure A54.
Figure A54: Disjunctive graph for the level 5aib
LB = L{0, (1, 1), (1, 4), (3, 5), (3, 3), 10} = 20
Table A19 and Figure A55 are used to find a better lower bound.
Table A19: Solving n/1/Cmax problem for the three machines M1, M2, and M3 in level 5aib
M1                          M2                          M3
J1      J2      J3          J1      J2      J3          J1      J2      J3
S11=0   S12=4   S13=4       S21=4   S22=14  S23=0       S31=7   S32=8   S33=14
Φ11=20  Φ12=16  Φ13=10      Φ21=9   Φ22=2   Φ23=13      Φ31=6   Φ32=12  Φ33=4
d11=4   d12=8   d13=16      d21=14  d22=20  d23=10      d31=20  d32=14  d33=20
Optimal solution for M1     Optimal solution for M2     Optimal solution for M3
J1 J2 J3                    J3 J1 J2                    J2 J3 J1
4 8 14 time                 3 4 7 14 16 time            8 14 18 24 time
L1=0  L2=0  L3=4
Figure A55: Solving n/1/Cmax problem for the three machines M1, M2, and M3 in level 5aib
The new lower bound LB*=20+max {0, 0, 4} =24.
To obtain the next operation after operation (3, 5), (3, 5) is deleted from α and
the next one, (2, 6), is added, where α = {(3, 3), (2, 6), (1, 8)} and
t(α)=min{10+6, 10+2, 4+6}=10.
Then i*=1, but Figure A36 shows that operations (1, 8) and (2, 6) have no new
disjunctive arcs to branch on. Therefore operation (3, 3) is selected to be scheduled
after (3, 5), with α'={(3, 3)}. So in the next branching, operation (3, 3) will be
scheduled on M1 after (3, 5) at level 6(aii).
Level 5aiia
The operation (3, 3) will be scheduled first after operation (1, 8). It fixes two
disjunctive arcs: (3, 3) to (3, 5) and (3, 3) to (3, 9) as shown in Figure A56.
Figure A56: Disjunctive graph for the level 5aiia
The lower bound LB= L {0, (1, 1), (1, 8), (1, 4), (3, 5), (2, 6), 10} =22
Table A20 and Figure A57 are used to obtain a better lower bound.
Table A20: Solving n/1/Cmax problem for the three machines M1, M2, and M3 in level 5aiia
M1                          M2                          M3
J1      J2      J3          J1      J2      J3          J1      J2      J3
S11=0   S12=10  S13=4       S21=4   S22=20  S23=0       S31=7   S32=14  S33=13
Φ11=22  Φ12=12  Φ13=18      Φ21=17  Φ22=2   Φ23=21      Φ31=14  Φ32=8   Φ33=4
d11=4   d12=14  d13=10      d21=8   d22=22  d23=4       d31=14  d32=20  d33=22
Optimal solution for M1     Optimal solution for M2     Optimal solution for M3
J1 J3 J2                    J3 J1 J2                    J1 J2 J3
4 10 14 time                3 4 7 20 22 time            7 13 14 20 24 time
L1=0  L2=0  L3=2
Figure A57: Solving n/1/Cmax problem for the three machines M1, M2, and M3 in level 5aiia
The new lower bound LB*=22+max {0, 0, 2} =24.
Stop at node (3, 3).
Level 5aiib
The operation (3, 9) will be scheduled after operation (1, 8). It fixes two disjunctive
arcs: (3, 9) to (3, 3) and (3, 9) to (3, 5) as shown in Figure A58.
Figure A58: Disjunctive graph for the level 5aiib
The lower bound LB= L {0, (1, 1), (1, 8), (3, 9), (3, 5), (2, 6), 10} =22
Table A21 and Figure A59 are used to obtain the better lower bound.
Table A21: Solving n/1/Cmax problem for the three machines M1, M2, and M3 in level 5aiib
M1                          M2                          M3
J1      J2      J3          J1      J2      J3          J1      J2      J3
S11=0   S12=10  S13=4       S21=4   S22=20  S23=0       S31=14  S32=14  S33=10
Φ11=22  Φ12=12  Φ13=18      Φ21=9   Φ22=2   Φ23=21      Φ31=6   Φ32=8   Φ33=12
d11=4   d12=14  d13=10      d21=16  d22=22  d23=4       d31=20  d32=20  d33=14
Optimal solution for M1     Optimal solution for M2     Optimal solution for M3
J1 J3 J2                    J3 J1 J2                    J3 J2 J1
4 10 14 time                3 4 7 20 22 time            10 14 20 26 time
L1=0  L2=0  L3=6
Figure A59: Solving n/1/Cmax problem for the three machines M1, M2, and M3 in level 5aiib
The new lower bound LB*=22+max {0, 0, 6} =28.
Stop at node (3, 9).
Level 5bia
The operation (2, 2) will be scheduled first on machine M2 after operation (3, 5).
Therefore it fixes one disjunctive arc (2, 2) to (2, 6) as shown in Figure A60:
Figure A60: Disjunctive graph for the level 5bia
The lower bound LB = L{0, (1, 4), (1, 1), (1, 8), (3, 9), 10} = 18
Table A22 and Figure A61 are used to obtain a better lower bound.
Table A22: Solving n/1/Cmax problem for the three machines M1, M2, and M3 in level 5bia
M1                          M2                          M3
J1      J2      J3          J1      J2      J3          J1      J2      J3
S11=4   S12=0   S13=8       S21=8   S22=11  S23=0       S31=11  S32=4   S33=14
Φ11=14  Φ12=18  Φ13=10      Φ21=9   Φ22=2   Φ23=13      Φ31=6   Φ32=12  Φ33=4
d11=8   d12=4   d13=14      d21=12  d22=18  d23=8       d31=18  d32=12  d33=18
Optimal solution for M1     Optimal solution for M2     Optimal solution for M3
J1 J2 J3                    J3 J1 J2                    J2 J1 J3
4 8 14 time                 3 8 11 13 time              4 10 11 17 21 time
L1=0  L2=0  L3=3
Figure A61: Solving n/1/Cmax problem for the three machines M1, M2, and M3 in level 5bia
The new lower bound LB*=18+max {0, 0, 3} =21.
To obtain the next operation after operation (2, 2), (2, 2) is deleted from α and
the next one, (3, 3), is added, where
α = {(3, 3), (2, 6), (1, 8)},
t(α)=min{11+6, 11+2, 8+6}=13, then i*=2 and
α'={(2, 6)}. So in the next branching, operation (2, 6) has to be scheduled on M2
after operation (2, 2) at level 6(bia).
However, as we have seen, operations (1, 8) and (2, 6) have already exhausted all
their branches. This means there are no new branchings or disjunctive arcs; as a
result, operation (3, 3) will be scheduled on machine M3 after operation (2, 2) at
level 6bia instead of (2, 6).
i*=3,
α'={(3, 3)}.
Level 5bib
The operation (2, 6) will be scheduled first after operation (3, 5), there it fixes one
adjunctive arc: (2, 6) to (2, 2) as shown in Figure A62.
Figure A62: Disjunctive graph for the level 5bib
The lower bound LB = L{0, (1, 4), (3, 5), (2, 6), (2, 2), (3, 3), 10} = 21
Table A23 and Figure A63 are used to obtain a better lower bound.
Table A23: Solving n/1/Cmax problem for the three machines M1, M2, and M3 in level 5bib
M1                          M2                          M3
J1      J2      J3          J1      J2      J3          J1      J2      J3
S11=4   S12=0   S13=8       S21=12  S22=10  S23=0       S31=15  S32=4   S33=14
Φ11=14  Φ12=18  Φ13=10      Φ21=9   Φ22=2   Φ23=14      Φ31=6   Φ32=17  Φ33=4
d11=11  d12=7   d13=17      d21=15  d22=21  d23=10      d31=21  d32=10  d33=21
Optimal solution for M1     Optimal solution for M2     Optimal solution for M3
J2 J1 J3                    J3 J1 J2                    J2 J3 J1
4 8 14 time                 3 12 15 17 time             4 10 14 18 24 time
L1=0  L2=0  L3=3
Figure A63: Solving n/1/Cmax problem for the three machines M1, M2, and M3 in level 5bib
The new lower bound LB*=21+max {0, 0, 3} =24.
Stop the branching here.
Level 5biia
The operation (2, 6) will be scheduled after operation (3, 5), it fixes one disjunctive
arc: (2, 6) to (2, 2) as shown in Figure A64.
Figure A64: Disjunctive graph for the level 5biia
The lower bound LB = L{0, (1, 4), (1, 8), (1, 1), (2, 2), (3, 3), 10} = 23.
Table A24 and Figure A65 are used to obtain the better lower bound.
Table A24: Solving n/1/Cmax problem for the three machines M1, M2, and M3 in level 5biia
M1                          M2                          M3
J1      J2      J3          J1      J2      J3          J1      J2      J3
S11=10  S12=0   S13=4       S21=14  S22=10  S23=0       S31=17  S32=4   S33=10
Φ11=13  Φ12=23  Φ13=19      Φ21=9   Φ22=11  Φ23=22      Φ31=6   Φ32=17  Φ33=4
d11=14  d12=4   d13=10      d21=17  d22=14  d23=4       d31=23  d32=12  d33=23
Optimal solution for M1     Optimal solution for M2     Optimal solution for M3
J2 J3 J1                    J3 J2 J1                    J2 J3 J1
4 10 14 time                3 10 12 14 17 time          4 10 14 17 23 time
L1=0  L2=0  L3=0
Figure A65: Solving n/1/Cmax problem for the three machines M1, M2, and M3 in level 5biia
The new lower bound LB*=23+max {0, 0, 0} =23.
The branching terminates here; there is no further branching.
Level 5cii
The operation (2, 6) will be scheduled after operation (3, 5). It fixes one disjunctive
arc: (2, 6) to (2, 2), as shown in Figure A66.
Figure A66: Disjunctive graph for the level 5cii
The lower bound is LB = L{0, (2, 7), (1, 8), (1, 4), (3, 5), (2, 6), (2, 2), (3, 3), 10} = 30.
Table A25 and Figure A67 are used to obtain the better lower bound.
Table A25: Solving n/1/Cmax problem for the three machines M1, M2, and M3 in level 5cii
M1                          M2                          M3
J1      J2      J3          J1      J2      J3          J1      J2      J3
S11=13  S12=9   S13=3       S21=21  S22=19  S23=0       S31=24  S32=13  S33=19
Φ11=13  Φ12=21  Φ13=27      Φ21=9   Φ22=2   Φ23=30      Φ31=6   Φ32=17  Φ33=4
d11=21  d12=13  d13=9       d21=24  d22=30  d23=3       d31=30  d32=19  d33=30
Optimal solution for M1     Optimal solution for M2     Optimal solution for M3
J3 J1 J2                    J3 J1 J2                    J2 J3 J1
3 9 13 17 time              3 21 24 26 time             13 19 23 24 30 time
L1=0  L2=0  L3=0
Figure A67: Solving n/1/Cmax problem for the three machines M1, M2, and M3 in level 5cii
The new lower bound LB*=30+max {0, 0, 0} =30.
After operation (2, 6), the branching terminates.
Level 6aii
The operation (3, 3) is scheduled after operation (3, 5). It fixes one arc: (3, 3) to (3,
9) as shown in Figure A68.
Figure A68: Disjunctive graph for the level 6aii
The lower bound LB = L{0, (1, 1), (1, 4), (3, 5), (3, 3), (3, 9), 10} = 24
Table A26 and Figure A69 are used to obtain the better lower bound.
Table A26: Solving n/1/Cmax problem for the three machines M1, M2, and M3 in level 6aii
M1                          M2                          M3
J1      J2      J3          J1      J2      J3          J1      J2      J3
S11=0   S12=4   S13=4       S21=4   S22=14  S23=0       S31=14  S32=8   S33=20
Φ11=24  Φ12=20  Φ13=10      Φ21=13  Φ22=2   Φ23=16      Φ31=10  Φ32=16  Φ33=4
d11=4   d12=8   d13=20      d21=14  d22=24  d23=11      d31=20  d32=14  d33=24
Optimal solution for M1     Optimal solution for M2     Optimal solution for M3
J1 J2 J3                    J3 J1 J2                    J2 J1 J3
4 8 14 time                 3 4 7 14 16 time            8 14 20 24 time
L1=0  L2=0  L3=0
Figure A69: Solving n/1/Cmax problem for the three machines M1, M2, and M3 in level 6aii
The new lower bound LB*=24+max {0, 0, 0} =24.
Stop the branching here, as there are no more disjunctive arcs to fix.
Level 6bia
The operation (3, 3) will be scheduled after operation (2, 2). It fixes one disjunctive
arc (3, 3) to (3, 9), as shown in Figure A70.
Figure A70: Disjunctive graph for the level 6bia
The lower bound LB = L{0, (1, 4), (1, 1), (2, 2), (3, 3), (3, 9), 10} = 21
Table A27 and Figure A71 are used to obtain the better lower bound.
Table A27: Solving n/1/Cmax problem for the three machines M1, M2, and M3 in level 6bia
M1                          M2                          M3
J1      J2      J3          J1      J2      J3          J1      J2      J3
S11=4   S12=0   S13=8       S21=8   S22=11  S23=0       S31=11  S32=4   S33=17
Φ11=17  Φ12=21  Φ13=10      Φ21=13  Φ22=2   Φ23=16      Φ31=10  Φ32=16  Φ33=4
d11=8   d12=4   d13=14      d21=11  d22=21  d23=8       d31=17  d32=11  d33=21
Optimal solution for M1     Optimal solution for M2     Optimal solution for M3
J2 J1 J3                    J3 J1 J2                    J2 J1 J3
4 8 14 time                 3 8 11 13 time              4 10 11 17 21 time
L1=0  L2=0  L3=0
Figure A71: Solving n/1/Cmax problem for the three machines M1, M2, and M3 in level 6bia
The new lower bound LB*=21+max {0, 0, 0} =21.
The branching terminated at node (3, 3).
After building the complete search tree (Figure A72), the nodes at the very bottom of
the tree correspond to all the active schedules, since a complete selection is reached
at the lowest level of the tree.
Figure A72: The complete branching procedure at level 6bia
From the previous levels of the solution of Example 3.1, the optimal solution is
makespan=21 under critical path: {0, (1, 4), (1, 1), (2, 2), (3, 3), (3, 9), 10}.
Appendix B
Simulated Annealing for Job Shop Scheduling Problems
Example 2.1 is solved using simulated annealing to explain how this technique can
be applied to a job shop problem. The main steps are described as follows:
Step 0
Figure B1 shows that the initial schedule is {0→7→8→4→5→9→6→10} with the
initial makespan 29. The initial conditions of the simulated annealing algorithm are T=100,
α=.95 and ε=.98. The simulated annealing solution proceeds through the following steps:
Figure B1: Disjunctive graph of an initial solution for the 3 jobs and 3 machines example
Makespan=29
Cmax(0)=29 and T=100
Step 1
In this step, the new solution is generated using the neighbourhood structure. The
disjunctive graph is changed from (1, 8)→(1, 4) to (1, 4) →(1, 8) in the critical path.
The new disjunctive graph is shown in Figure B2. The new temperature, new
makespan and ∆ are calculated as follows:
Figure B2: Disjunctive graph of an initial solution for the 3 jobs and 3 machines example
Makespan=23
T=αT = .95*100 = 95, Cmax(1)=23
Δ = Cmax(1) - Cmax(0) = 23-29 = -6 < 0, then the new schedule is accepted.
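Each neighbourhood move above reverses one disjunctive arc between two adjacent operations processed on the same machine along the critical path. A minimal sketch of such a swap (illustrative, not the thesis implementation):

```python
def reverse_arc(sequence, a, b):
    # Reverse the disjunctive arc a -> b by swapping the two adjacent
    # operations a and b in one machine's processing sequence.
    i, j = sequence.index(a), sequence.index(b)
    assert abs(i - j) == 1, "only adjacent operations may be swapped"
    sequence[i], sequence[j] = sequence[j], sequence[i]
    return sequence

# Step 1 above turns (1, 8) -> (1, 4) into (1, 4) -> (1, 8): the order
# of operations 8 and 4 on M1 changes from [8, 4, 1] to [4, 8, 1].
print(reverse_arc([8, 4, 1], 8, 4))  # → [4, 8, 1]
```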
Step 2
The new solution is generated using the neighbourhood structure. The disjunctive
graph is changed from (1, 8)→(1, 1) to (1, 1) →(1, 8) in the critical path. The new
disjunctive graph is shown in Figure B3. The new temperature, new makespan and ∆
are calculated as follows:
Figure B3: Disjunctive graph of an initial solution for the 3 jobs and 3 machines example
Makespan=24
T=αT = .95*95= 90.25
Cmax(2)=24
Δ= Cmax(2)- Cmax(1)=24-23=1>0 then
Pr=exp(-∆∕T)= exp(-1∕90.25)=0.988>ε
The new schedule is accepted.
Step 3
The new solution is generated in Step 3 as well by changing the neighbourhood on the critical path as shown in Figure B4.
(3, 9)→(3, 3) to (3, 3) →(3, 9)
Figure B4: Disjunctive graph of an initial solution for the 3 jobs and 3 machines example
Makespan=21
The new temperature, new makespan and ∆ are calculated as follows:
T=αT = .95*90.25 = 85.73
Cmax(3)=21
Δ = Cmax(3) - Cmax(2) = 21-24 = -3 < 0, then
the new schedule is accepted.
Step 4
Figure B5: Disjunctive graph of an initial solution for the 3 jobs and 3 machines example
Makespan=24
The new solution is generated by changing the neighbourhood on the critical path as shown in Figure B5.
(1, 4)→(1,1) to (1, 1) →(1, 4)
T=αT = .95*85.73 = 81.44
Cmax(4)=24
Δ = Cmax(4) - Cmax(3) = 24-21 = 3 > 0, then Pr=exp(-∆∕T) = exp(-3∕81.44) = 0.961 < ε.
The new schedule is rejected, and the solution from Step 3 is used again to produce a
new solution. Figure B5 shows that all disjunctive arcs on the critical path have
already been used in the previous steps. As a result, we stop at this stage because no
more changes can be implemented. Table B1 is deduced from the previous steps.
Table B1: Simulated annealing result for Example 2.1
Step  Makespan Cmax  ∆    Temperature T  Boltzmann probability  Decision  Best value found
0     29             -    100            -                      -         21
1     23             -6   95             -                      accepted
2     24             1    90.25          0.988                  accepted
3     21             -3   85.73          -                      accepted
4     24             3    81.44          0.961                  rejected
The Percentage Improvement of SA (PISA) can be calculated as:
PISA = ((initial solution of Cmax - SA value)/initial solution of Cmax)*100
PISA = ((29-21)/29)*100 = 27.58%
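The acceptance rule traced in Steps 1 to 4 can be sketched as follows. This is a sketch of the rule as used in this example, where the Boltzmann probability is compared with the fixed threshold ε; many SA implementations instead compare it with a uniform random number:

```python
import math

def sa_accept(delta, T, eps=0.98):
    # An improving move (delta <= 0) is always accepted; a worsening
    # move is accepted only if exp(-delta/T) exceeds eps.
    if delta <= 0:
        return True
    return math.exp(-delta / T) > eps

# Trace of the steps above: makespans 29 -> 23 -> 24 -> 21 -> 24 with
# geometric cooling T = 0.95*T starting from T = 100.
T, history = 100.0, [29, 23, 24, 21, 24]
for step in range(1, len(history)):
    T *= 0.95
    delta = history[step] - history[step - 1]
    print(step, delta, sa_accept(delta, T))
```

Running this reproduces the accepted/accepted/accepted/rejected decisions recorded in Table B1.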
Appendix C
Tabu Search for Job Shop Scheduling Problems
The numerical Example 2.1 in Chapter 2 is solved by tabu search to describe how
tabu search can be applied to a job shop scheduling problem.
An initial solution is assumed, with minimisation of the makespan as the objective
function. A neighbourhood structure is used to generate new solutions.
Iteration 0
According to the feasible solution in Figure C1, the sequence of operations on the
machines is as follows:
Figure C1: Disjunctive graph of an initial solution for the 3 jobs and 3 machines example
Makespan=29
M1 O8→O4→O1
M2 O7→O2→O6
M3 O5→O9→O3
where the makespan Cmax=29 and the critical path for this solution is:
O0→O7→O8→O4→O5→O9→O3→O10, where O0 and O10 are dummy operations.
Tabu list = { }
Iteration 1
Three neighbourhoods can be constructed: (O4→O8), (O9→O5), (O3→O9).
At neighbourhood (O4→O8), the sequence of operations on the machines is:
M1 O4→O8→ O1
M2 O7→O2→ O6
M3 O5→ O9→O3
Figure C2 shows that the makespan is 23.
Figure C2: Disjunctive graph of an initial solution for the 3 jobs and 3 machines example
Makespan=23
At neighbourhood (O9→O5):
M1 O8→O4→O1
M2 O7→O2→O6
M3 O9→O5→O3
Figure C3 shows that makespan=26.
Figure C3: Disjunctive graph of an initial solution for the 3 jobs and 3 machines example
Makespan=26
At neighbourhood (O3→O9):
M1 O8→O4→O1
M2 O7→O2→O6
M3 O5→O3→O9
Figure C4 shows that the makespan is 30.
Figure C4: Disjunctive graph of an initial solution for the 3 jobs and 3 machines example
Makespan=30
By comparing the three previous makespan values, and since the makespan is to be
minimised, we select makespan = 23 as a seed for the new solutions.
The critical path for this solution is: O0→O4→O8→O1→O2→O3→O10
As a result, the tabu list is updated to:
Tabu list = {O8→O4}
Iteration 2
The critical path is:
O0→O4→O8→O1→O2→ O3→O10
Neighbourhoods
O8→O4, O1→O8
At O8→O4
M1 O8→O4→ O1
M2 O7→O2→ O6
M3 O5→ O9→O3
Figure C5 shows that makespan =29.
Figure C5: Disjunctive graph of an initial solution for the 3 jobs and 3 machines example
Makespan=29
At O1→O8
M1 O4→O1→ O8
M2 O7→O2→ O6
M3 O5→ O9→O3
Figure C6: Disjunctive graph of an initial solution for the 3 jobs and 3 machines example
Makespan=24
Figure C6 shows that makespan=24.
By comparing the two previous makespan values, and since the makespan is to be
minimised, we select makespan = 24 with critical path
O0→O4→O1→O8→O9→O3→O10 as a seed for the new solutions.
As a result, the tabu list is updated to:
Tabu list = {O8→O4, O8→O1}
Iteration 3
The critical path is O0→O4→O1→O8→O9→O3→O10.
The neighbourhoods: O1→O4, O3→O9
At neighbourhood O1→O4:
M1 O1→O4→O8
M2 O7→O2→O6
M3 O5→O9→O3
Figure C7: Disjunctive graph of an initial solution for the 3 jobs and 3 machines example
Makespan=24
The makespan=24 as shown in Figure C7.
At O3→O9:
M1 O4→O1→O8
M2 O7→O2→O6
M3 O5→O3→O9
Figure C8 shows that makespan = 21.
Figure C8: Disjunctive graph of an initial solution for the 3 jobs and 3 machines example
Makespan=21
By comparing the two previous makespan values, and since the makespan is to be
minimised, makespan=21 with critical path O0→O4→O1→O2→O3→O9→O10 is
selected as a seed for the new solutions.
Tabu list = {O8→O4, O8→O1, O9→O3}
From the critical path, the new neighbourhoods are: O1→O4 and O9→O3.
The neighbourhood O9→O3 is on the tabu list, so it is ignored, and the one
remaining neighbourhood, O1→O4, is used.
Iteration4
O1→O4
M1 O1→O4→ O8
M2 O7→O2→ O6
M3 O5→ O3→O9
The new makespan will be 24 with critical path: O0→O1→O4→O5→O3→
O9→O10 as shown in Figure C9.
Figure C9: Disjunctive graph of an initial solution for the 3 jobs and 3 machines example
Makespan=24
The new Tabu list = {O8→O4, O8→O1, O9→O3, O4→O1}
The new neighbourhoods are: O4→O1, O3→O5, O9→O3.
The neighbourhoods O4→O1 and O9→O3 are ignored because they are on the tabu
list, and O3→O5 is used to generate a new solution.
Iteration 5
At O3→O5:
M1 O1→O4→O8
M2 O7→O2→O6
M3 O3→O5→O9
The new makespan = 23 under critical path O0→O1→O2→O3→O5→ O9→O10 as
shown in Figure C10.
Figure C10: Disjunctive graph of an initial solution for the 3 jobs and 3 machines example
Makespan=23
The new tabu list = {O8→O4, O8→O1, O9→O3, O4→O1, O5→O3}
The new neighbourhoods for the critical path O0→O1→O2→O3→O5→O9→O10
are: O5→O3, O9→O5.
The O5→O3 will be ignored and O9→O5 will be used in the next iteration.
Iteration 6
At O9→O5:
M1 O1→O4→O8
M2 O7→O2→O6
M3 O3→O9→O5
Figure C11: Disjunctive graph of an initial solution for the 3 jobs and 3 machines example
Makespan=26
The new makespan is 26 under critical path O0→O1→O4→O8→O9→O5→O6→O10
as shown in Figure C11.
The new tabu list = {O8→O4, O8→O1, O9→O3, O4→O1, O5→O3, O5→O9}
The new neighbourhoods are: O8→O4, O4→O1, O5→O9.
We note that all neighbourhoods are on the tabu list, so we can stop.
The best solution found so far is makespan=21, shown in Figure C8, with the
sequence of operations on the machines:
M1 O4→O1→O8
M2 O7→O2→O6
M3 O5→O3→O9
Table C1 can be deduced from the previous results:
Table C1: Tabu search technique result for Example 2.1
Iteration  Makespan Cmax  Tabu list                                         Best value
0          29             { }                                               21
1          23             {O8→O4}
2          24             {O8→O4, O8→O1}
3          21             {O8→O4, O8→O1, O9→O3}
4          24             {O8→O4, O8→O1, O9→O3, O4→O1}
5          23             {O8→O4, O8→O1, O9→O3, O4→O1, O5→O3}
6          26             {O8→O4, O8→O1, O9→O3, O4→O1, O5→O3, O5→O9}
The Percentage Improvement of Tabu Search (PITS) is calculated as follows:
PITS = ((initial solution of Cmax - TS value)/initial solution of Cmax)*100
PITS = ((29-21)/29)*100 = 27.58%.
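The move selection used in the iterations above can be sketched as follows. This is an illustrative sketch, not the thesis code: `evaluate` is a hypothetical callback returning the makespan obtained by reversing a given arc, and each move is keyed by the arc direction it creates.

```python
def tabu_step(candidates, evaluate, tabu_list):
    # One tabu-search iteration: evaluate every candidate arc reversal
    # not on the tabu list, keep the one with the smallest makespan,
    # and forbid undoing it by adding the reverse arc to the tabu list.
    allowed = [move for move in candidates if move not in tabu_list]
    if not allowed:
        return None  # every move is tabu: stop, as in Iteration 6
    best = min(allowed, key=evaluate)
    tabu_list.append((best[1], best[0]))
    return best

# Iteration 1 above: candidate reversals and their resulting makespans.
makespans = {("O4", "O8"): 23, ("O9", "O5"): 26, ("O3", "O9"): 30}
tabu = []
print(tabu_step(list(makespans), makespans.get, tabu))  # → ('O4', 'O8')
print(tabu)  # → [('O8', 'O4')]
```

The selected move (O4→O8) and the updated tabu list {O8→O4} match Iteration 1 of the example.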