
Journal of Computational and Applied Mathematics 235 (2011) 3063–3078


Genetic algorithm for asymmetric traveling salesman problem with imprecise travel times

J. Majumdar a, A.K. Bhunia b,∗

a Department of Mathematics, Durgapur Government College, Durgapur-713214, India
b Department of Mathematics, The University of Burdwan, Burdwan-713104, India

Article info

Article history:
Received 11 November 2008
Received in revised form 28 December 2010

Keywords:
Asymmetric traveling salesman problem
Combinatorial optimization
Interval order relation
Genetic algorithm

Abstract

This paper presents a variant of the asymmetric traveling salesman problem (ATSP) in which the traveling time between each pair of cities is represented by an interval of values (wherein the actual travel time is expected to lie) instead of a fixed (deterministic) value as in the classical ATSP. Here the ATSP (with interval objective) is formulated using the usual interval arithmetic. To solve the interval ATSP (I-ATSP), a genetic algorithm with interval valued fitness function is proposed. For this purpose, the existing revised definition of order relations between interval numbers for the case of pessimistic decision making is used. The proposed algorithm is based on a previously published work and includes some new features of the basic genetic operators. To analyze the performance and effectiveness of the proposed algorithm and different genetic operators, computational studies of the proposed algorithm on some randomly generated test problems are reported.

© 2011 Elsevier B.V. All rights reserved.

1. Introduction

The Traveling Salesman Problem (TSP) is one of the well-studied NP-hard combinatorial optimization problems [1,2] which determines the closed route of the shortest length or of the minimum cost (or time) passing through a given set of cities where each city is visited exactly once. Practical applications of the TSP include many problems in science and engineering, like vehicle routing, wiring, scheduling, flexible manufacturing, VLSI layout, etc.

In the existing literature, a large number of approaches both by exact and heuristic methods (including genetic algorithms (GAs)) have been developed by several researchers for solving symmetric TSPs (STSPs). Exact methods include cutting plane, LP relaxation [3], branch and bound (B&B) [4], B&B based on assignment problem (AP) relaxation [5–7], branch and cut (B&C) [8–10] and dynamic programming [11,12]. However, only very small problems can be solved by exact methods. On the other hand, large problems have been solved using heuristic and probabilistic methods like 2-opt [1,13,14], Markov chain [15], metaheuristic algorithms like Tabu Search [16–18], neural networks [19], simulated annealing [20–26] and genetic algorithms (GAs) [27–33].

For further improvement of GAs for STSPs, many approaches have been suggested. Among these approaches, the works of Yang and Stacey [34], Ray et al. [35], Baraglia et al. [36], Gang et al. [37], Tsai et al. [38], Liu et al. [39] and Al-Dulaimi and Ali [40] are worth mentioning. A comprehensive review of the methods developed for STSPs can be found in [2,41–43].

On the other hand, according to the existing literature, ATSP has not been well researched and many heuristics that are successful for STSPs cannot be applied efficiently to ATSPs. An ATSP with n cities can be transformed to an STSP with 2n cities [44]. However, as most of the TSP applications are of asymmetric nature, further research is necessary for developing

∗ Corresponding author. E-mail addresses: [email protected] (J. Majumdar), [email protected] (A.K. Bhunia).

0377-0427/$ – see front matter © 2011 Elsevier B.V. All rights reserved. doi:10.1016/j.cam.2010.12.027


Fig. 1. Pseudocode of SGA.

good heuristic algorithms for ATSPs. Among them, one may refer to the works of Smith et al. [45], Carpaneto and Toth [46], Cirasella et al. [47], Burke et al. [48], Johnson et al. [49] and Choi et al. [50].

To the best of our knowledge, the distance/cost parameters used in the TSPs of the aforesaid works were generally specified precisely by fixed real numbers. However, in the competitive market situation of the present day, consideration of traveling times between two cities is more appropriate than distance/cost parameters. Moreover, in real life situations of third-world countries, where the transport system is poor, the traveling times from one city to another are imprecise rather than precise (fixed) numbers due to several diverse situations arising from traffic jams, bad condition of road/railway track, rainy/foggy weather, etc. Thus, in real life considerations, the time parameters are flexible in nature and their values lie within intervals, and so an I-ATSP can be framed. To solve this type of I-ATSP, order relations between interval numbers are essential. To the best of our knowledge, very few researchers have defined order relations between interval valued numbers. Among them, one may refer to the works of Ishibuchi and Tanaka [51] and Chanas and Kuchta [52]. However, their definitions are not complete. Sengupta and Pal [53] proposed two different approaches (one deterministic and the other fuzzy) to compare any two interval numbers with respect to the decision makers' (optimistic and pessimistic) point of view. However, in some cases, both of their approaches fail to find the order relation between two interval numbers (see [54] for details). Again, Majumdar and Bhunia [54] studied and solved the assignment problem with interval cost(s)/time(s) by an elitist GA, where they proposed revised definitions of order relations of interval numbers with respect to the optimistic as well as pessimistic decision maker's point of view. But, for some particular cases, their definition also fails to find the order relation.

Genetic algorithms (GAs) are relatively new robust metaheuristics based on the ideas borrowed from natural selection and natural genetics. These are known to be a promising tool for solving a wide variety of real-world combinatorial optimization problems. Unlike traditional heuristics and some metaheuristics like Tabu search, GAs work with a population of feasible solutions (known as chromosomes) iteratively by successively applying three basic genetic operators, viz. selection, crossover and mutation in each iteration (called generation) until a termination criterion is satisfied. The basic structure of a simple GA (SGA) is shown in Fig. 1.

Recently, Sengupta and Pal [55] developed an algorithm to solve the interval valued TSP (ITSP) based on the traditional assignment technique and interval ordering, considering the additional constraint of a cyclic tour. However, their methodology cannot be applied to problems with an increased number of cities due to its enumerative nature, with large computations required for finding cyclic solution(s) from assignment solutions. Again in this area, Montemanni et al. [56] solved TSPs with interval data from the graph-theoretic viewpoint, for which they used some exact algorithms like branch and bound, branch and cut, Benders' decomposition and some heuristic and approximation algorithms like 2-opt, 3-opt, etc.

As an alternative, in this paper, an ATSP with interval valued time parameters has been proposed and solved by a genetic algorithm (GA) with the help of interval arithmetic and modified interval order relations from the point of view of the pessimistic decision makers' preference due to Mahato and Bhunia [57]. To solve this I-ATSP, a GA approach has been proposed that is based on the work of Gang et al. [37], in which there is a composition of two SGAs—one is a global GA (GGA) and the other is a local GA (LGA). GGA has been applied to the main tours for searching the global optima while LGA has been


applied to the randomly selected sub-tours obtained from the main tour of GGA for searching the local optimal solution. Our proposed algorithm incorporates some new features on initialization, a modified selection and replacement scheme, modified forms of two existing crossovers and a new crossover, a heuristic mutation and a tour improvement heuristic. Finally, the results of computational experiments on some randomly generated I-ATSPs as well as comparative analyses of different GA variants and genetic operators have been reported.

2. Interval arithmetic and order relations between intervals

Generally, an interval is defined by its lower and upper limits as

A = [aL, aR] = {x : aL ≤ x ≤ aR, x ∈ R}

where aL and aR are the lower and upper limits respectively and R, the set of all real numbers. The interval A is also defined by its centre and radius as

A = ⟨ac, aw⟩ = {x : ac − aw ≤ x ≤ ac + aw, x ∈ R}

where ac = (aL + aR)/2 and aw = (aR − aL)/2 are respectively the centre and radius of the interval A. Now, we shall discuss some interval arithmetic.

Definition 2.1. Let ∗ ∈ {+, −, ·, ÷} be a binary operation on the set of real numbers. If A and B are two closed intervals, then

A ∗ B = {a ∗ b : a ∈ A, b ∈ B}

defines a binary operation on the set of closed intervals. In the case of division, it is assumed that 0 ∉ B.

The operations on intervals used in this paper may be explicitly calculated (for two interval numbers A = [aL, aR] = ⟨ac, aw⟩ and B = [bL, bR] = ⟨bc, bw⟩) from Definition 2.1 as:

A + B = [aL, aR] + [bL, bR] = [aL + bL, aR + bR], (1)
A + B = ⟨ac, aw⟩ + ⟨bc, bw⟩ = ⟨ac + bc, aw + bw⟩, (2)
kA = k[aL, aR] = [kaL, kaR] for k ≥ 0, (3a)
kA = k[aL, aR] = [kaR, kaL] for k < 0, (3b)
kA = k⟨ac, aw⟩ = ⟨kac, |k|aw⟩ (4)

where k is a real number. In our paper, we shall use only Eqs. (1) and (3a).

Next, we shall discuss the order relations for finding the decision maker's preference between interval times of minimization problems. We shall restrict ourselves to pessimistic decision making for our TSP, as this will be very much beneficial for traveling salesmen.

Let the uncertain times from two alternatives be represented by two closed intervals A = [aL, aR] = ⟨ac, aw⟩ and B = [bL, bR] = ⟨bc, bw⟩ respectively. It is also assumed that the time of each alternative lies in the corresponding interval. These two intervals A and B may be of the following three types:

Type I: Both the intervals are disjoint.
Type II: The intervals are partially overlapping.
Type III: One interval is contained in the other.
These three types of intervals are shown in Fig. 2 for different situations.

Optimistic decision making

For optimistic decision making, the decision maker expects the lowest time/cost, ignoring the uncertainty.

According to Mahato and Bhunia [57], the order relations of interval numbers for minimization problems in the case of optimistic decision making are as follows:

Definition 2.2. Let us define the order relation ≤omin between A = [aL, aR] and B = [bL, bR] as

A ≤omin B ⇔ aL ≤ bL
A <omin B ⇔ A ≤omin B ∧ A ≠ B.

Pessimistic decision making

For pessimistic decision making, the decision maker expects the minimum cost/time for minimization problems according to the principle ''Less uncertainty is better than more uncertainty''.

According to Mahato and Bhunia [57], the order relations of interval numbers for minimization problems in the case of pessimistic decision making are as follows:

Definition 2.3. Let us define the order relation ≤pmin between A = [aL, aR] = ⟨ac, aw⟩ and B = [bL, bR] = ⟨bc, bw⟩ as

A <pmin B ⇔ ac < bc for Type I and Type II intervals
A <pmin B ⇔ (ac ≤ bc) ∧ (aw < bw) for Type III intervals.


Fig. 2. Three types of intervals for different situations: (a) Type I intervals, (b) Type II intervals, (c) Type III intervals.

However, for Type III intervals with (ac < bc) ∧ (aw > bw), the pessimistic decision cannot be taken. Here, the optimistic decision is to be considered.
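To make the above definitions concrete, the following is a minimal C++ sketch (not taken from the paper; the names Interval and lessPessimistic are illustrative) of interval addition, Eq. (1), and the pessimistic order relation of Definition 2.3, with the optimistic rule used as a fallback for the undecidable Type III case mentioned above:

#include <iostream>

// An interval [lo, hi]; centre and radius are derived as in Section 2.
struct Interval {
    double lo, hi;
    double centre() const { return (lo + hi) / 2.0; }
    double radius() const { return (hi - lo) / 2.0; }
};

// Eq. (1): interval addition.
Interval operator+(const Interval& a, const Interval& b) {
    return { a.lo + b.lo, a.hi + b.hi };
}

// Definition 2.3 (A <pmin B); the optimistic rule of Definition 2.2 is used
// for the Type III case (ac < bc) ∧ (aw > bw), as stated in the text above.
bool lessPessimistic(const Interval& a, const Interval& b) {
    bool typeIII = (a.lo >= b.lo && a.hi <= b.hi) || (b.lo >= a.lo && b.hi <= a.hi);
    if (!typeIII)                       // Type I or Type II intervals
        return a.centre() < b.centre();
    if (a.centre() <= b.centre() && a.radius() < b.radius()) return true;
    if (b.centre() <= a.centre() && b.radius() < a.radius()) return false;
    return a.lo < b.lo;                 // undecidable Type III case: optimistic rule
}

int main() {
    Interval A{2.0, 6.0}, B{3.0, 5.0};  // Type III: B is contained in A
    Interval C = A + B;                 // Eq. (1): C = [5.0, 11.0]
    std::cout << "C = [" << C.lo << ", " << C.hi << "]\n";
    std::cout << (lessPessimistic(A, B) ? "A preferred" : "B preferred") << "\n";
}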

3. Problem formulation

Consider an ATSP in which there are n cities c1, c2, c3, . . . , cn to be visited by a salesman with inter-city times t(ci, cj). Solving the TSP means finding a permutation π of the cities that minimizes the sum of the times

Σ_{i=1}^{n−1} t(cπ(i), cπ(i+1)) + t(cπ(n), cπ(1)).

In our problem, t(ci, cj) = [tLij, tRij], an interval, and in general t(ci, cj) ≠ t(cj, ci). Then, our proposed I-ATSP can be formulated as follows:

Minimize Z = Σ_{i=1}^{n} Σ_{j=1}^{n} [tLij, tRij] xij (5)

subject to

Σ_{i=1}^{n} xij = 1, j = 1, 2, . . . , n (6)

and

Σ_{j=1}^{n} xij = 1, i = 1, 2, . . . , n (7)

where xij = 1 if the salesman travels from ci to cj, and xij = 0 otherwise.

4. Proposed GA approach for solving I-ATSP

The objective of our proposed GA approach is to provide the best found solution of the I-ATSP quickly as compared to the existing GA approaches, which are more time consuming and mostly rely on the employment of local improvement techniques, including the approach suggested in [37]. As in [37], our algorithm consists of two parts—GGA and LGA (shown in Figs. 3 and 4). The LGA acts as a local search operator in the GGA to reorder the sequence of cities for the improvement of the best found tour (whereas Gang et al. [37] considered a randomly chosen tour from the population) obtained from the GGA. Moreover, instead of choosing a sub-tour of successive cities (as in [37]), any set of (≥4) cities is randomly chosen from the best found main tour of GGA to form the sub-tour for LGA. Note that the sub-tour is an open tour, which means that the terminal cities of the sub-tour are not connected. Another important point to be noted is that all the chromosomes of the population of LGA have the same terminal cities in order to ensure that either a better or at least the same main tour will be produced when the best sub-tour from LGA is fed into it. This LGA will run in every generation of the GGA. As soon as the


Fig. 3. Pseudocode of GGA.

best individual of the LGA is found, it is fed back to the main tour of the GGA in order to replace the original part. Details of the components of our two GAs are described in the following sections.
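The mechanics of cutting a sub-tour out of the GGA's best main tour and feeding the improved sub-tour back into the same positions can be sketched as follows (a minimal illustration, not the paper's code; the std::shuffle of the interior cities merely stands in for the LGA, and the variable names are hypothetical):

#include <algorithm>
#include <numeric>
#include <random>
#include <vector>

int main() {
    std::mt19937 rng(42);
    int n = 10;
    std::vector<int> mainTour(n);
    std::iota(mainTour.begin(), mainTour.end(), 0);   // a main tour 0, 1, ..., 9

    // Choose k (>= 4) random, distinct positions of the main tour.
    int k = 5;
    std::vector<int> pos(n);
    std::iota(pos.begin(), pos.end(), 0);
    std::shuffle(pos.begin(), pos.end(), rng);
    pos.resize(k);
    std::sort(pos.begin(), pos.end());                // keep the original order of positions

    // Extract the open sub-tour at those positions.
    std::vector<int> subTour;
    for (int p : pos) subTour.push_back(mainTour[p]);

    // Placeholder for the LGA: the terminal cities stay fixed and only the
    // interior of the open sub-tour is reordered.
    std::shuffle(subTour.begin() + 1, subTour.end() - 1, rng);

    // Feed the (possibly improved) sub-tour back into the same positions,
    // which guarantees that the result is again a valid main tour.
    for (int i = 0; i < k; ++i) mainTour[pos[i]] = subTour[i];
}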

4.1. Chromosome representation and initialization

In our GA, the usual path representation is used to represent chromosomes (solutions), where the cities are listed in the order in which they are visited. For example, assume an ATSP with six cities {1, 2, 3, 4, 5, 6}; then the chromosome (solution) (2 1 3 5 6 4) represents the tour 2 → 1 → 3 → 5 → 6 → 4 → 2. This representation is not only unique up to the direction of traversal (clockwise or counter-clockwise) and the originating city but also requires less storage space (an array of n integers for a chromosome).

For GGA, to create the initial population of different tours, a combination of two heuristics is used alternately. In the first heuristic, each gene of the chromosome is assigned a random number drawn from {1, 2, . . . , n}. The second one is the well-known random insertion heuristic, where a position in the chromosome is selected and then a random number drawn from {1, 2, . . . , n} is inserted. In both heuristics, the last gene of the chromosome is replaced by the first one.

On the other hand, for LGA, the initial population is created with the same individuals (chromosomes), considering the randomly selected sub-tour cut from the best found main tour of the previous generation of GGA.
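The paper's two initialization heuristics are given only in outline; as a simplification, the following sketch seeds a GGA population with random permutation tours (a common substitute, assumed here rather than taken from the paper; the names Tour and initPopulation are illustrative):

#include <algorithm>
#include <numeric>
#include <random>
#include <vector>

using Tour = std::vector<int>;   // path representation: cities in visiting order

// Build psize random tours over cities 1..n (a simplified initializer).
std::vector<Tour> initPopulation(int psize, int n, std::mt19937& rng) {
    std::vector<Tour> pop;
    Tour base(n);
    std::iota(base.begin(), base.end(), 1);
    for (int i = 0; i < psize; ++i) {
        Tour t = base;
        std::shuffle(t.begin(), t.end(), rng);
        pop.push_back(t);
    }
    return pop;
}

int main() {
    std::mt19937 rng(7);
    auto pop = initPopulation(20, 10, rng);   // psize = 20 for I-ATSP(10), cf. Table 2
    return pop.size() == 20 ? 0 : 1;
}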

4.2. Evaluation of fitness function

In our algorithm, each chromosome has a fitness value which is the sum of the travel times in the tour represented by the chromosome; since the travel times are intervals, the fitness value is itself an interval number. In the I-ATSP, the smaller the fitness value is, the better the tour. The best tour is the one which is a minimizer of (5) subject to (6) and (7).
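A minimal sketch of this interval valued fitness evaluation (illustrative only; the interval time matrices tL/tR and the function tourFitness are assumed names, and an interval is represented here simply by its two endpoints):

#include <utility>
#include <vector>

using Interval = std::pair<double, double>;      // [lower, upper]
using Matrix = std::vector<std::vector<double>>;

// Sum of interval travel times along the closed tour, using Eq. (1):
// lower limits and upper limits are added separately.
Interval tourFitness(const std::vector<int>& tour, const Matrix& tL, const Matrix& tR) {
    double lo = 0.0, hi = 0.0;
    int n = static_cast<int>(tour.size());
    for (int i = 0; i < n; ++i) {
        int from = tour[i];
        int to = tour[(i + 1) % n];              // wrap around to close the tour
        lo += tL[from][to];
        hi += tR[from][to];
    }
    return {lo, hi};
}

int main() {
    Matrix tL = {{0, 1.25, 2.50}, {1.25, 0, 1.25}, {2.50, 2.50, 0}};
    Matrix tR = {{0, 1.50, 3.00}, {1.75, 0, 1.50}, {2.75, 3.25, 0}};
    std::vector<int> tour = {0, 1, 2};           // tour 0 -> 1 -> 2 -> 0
    Interval f = tourFitness(tour, tL, tR);      // expected [5.00, 5.75]
    return (f.first == 5.0 && f.second == 5.75) ? 0 : 1;
}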

4.3. Selection

In the selection process, the better chromosomes are selected according to their fitness values so that they can take part in genetic operations (viz. crossover and mutation). Unlike Gang et al. [37], in our case, at first the current population


Fig. 4. Pseudocode of LGA.

Fig. 5. Transitional process between two consecutive generations.

is sorted from best to worst in terms of interval valued fitness following Definition 2.3 for comparing interval numbers, and then a part of the better individuals (called TOP, cf. Fig. 5) is copied from the current generation to the next. This strategy is a form of elitism (with size TOP). Its main advantage is that the best solution is monotonically improved in subsequent generations and good quality chromosomes are added for mating. However, it can rapidly lead the population to converge to a local minimum, which again can be avoided by using high mutation rates.
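As an illustration (not the paper's code), the sorting and the elitist copy can be written as below; the comparator is a simplified reading of Definition 2.3 on the (centre, radius) form, and the names Individual, selectTop and TOP_FRACTION are assumptions:

#include <algorithm>
#include <vector>

struct Individual {
    std::vector<int> tour;
    double lo = 0.0, hi = 0.0;                       // interval fitness [lo, hi]
    double centre() const { return (lo + hi) / 2.0; }
    double radius() const { return (hi - lo) / 2.0; }
};

// Simplified Definition 2.3: the smaller centre wins; a smaller radius breaks
// a tie on the centre (relevant for nested, i.e. Type III, intervals).
bool betterPessimistic(const Individual& a, const Individual& b) {
    if (a.centre() != b.centre()) return a.centre() < b.centre();
    return a.radius() < b.radius();
}

// Copy the TOP fraction of the sorted population into the next generation.
std::vector<Individual> selectTop(std::vector<Individual> pop, double TOP_FRACTION) {
    std::sort(pop.begin(), pop.end(), betterPessimistic);   // best to worst
    std::size_t keep = static_cast<std::size_t>(TOP_FRACTION * pop.size());
    return std::vector<Individual>(pop.begin(), pop.begin() + keep);
}

int main() {
    std::vector<Individual> pop(20);
    auto elites = selectTop(pop, 0.2);               // TOP = 0.2 for GGA, cf. Table 2
    return elites.size() == 4 ? 0 : 1;
}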


4.4. Crossover

For the crossover operation, two parent chromosomes are randomly chosen from the current population to produce offspring. One of the parents is chosen among the best individuals of the population (TOP in Fig. 5), while the other is randomly chosen from the whole population (including TOP). In contrast to the traditional one-point crossover suggested in [37], here in GGA, three crossover operators are implemented, viz. modified order crossover (MOX), modified cycle crossover (MCX) and the newly created sequence crossover (SX), while in LGA only SX is implemented. These operators are discussed in detail below.

(i) Modified Order Crossover (MOX)

The MOX used here is a modified version of the order crossover (OX) as suggested in [58] that generates offspring by choosing a sub-string from one parent and preserving the relative order of cities from the other (see the pseudocode in Fig. 6).

For example, consider a pair of parents given below:

Parent 1: (1 | 5 4 3 | 2 6)
Parent 2: (2 | 3 5 6 | 4 1).

First, two crossover sites, marked by '|', are randomly chosen. Next, keeping the middle sub-string between the sites unchanged, the remaining positions are filled, from the beginning of the string, with the cities of the other parent in their original order (skipping cities already present). The resulting offspring are as follows:

Offspring 1: (2 5 4 3 6 1)
Offspring 2: (1 3 5 6 4 2).
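A minimal C++ sketch of this MOX step, written so as to reproduce the worked example above (it is our reading of the operator rather than the pseudocode of Fig. 6; the function name mox is illustrative):

#include <vector>

// The offspring keeps parent a's middle segment [cut1, cut2]; the remaining
// positions are filled, from the beginning, with parent b's cities in their
// original order, skipping cities already present in the kept segment.
std::vector<int> mox(const std::vector<int>& a, const std::vector<int>& b,
                     int cut1, int cut2) {
    int n = static_cast<int>(a.size());
    std::vector<int> child(n, -1);
    std::vector<bool> used(n + 1, false);             // cities are numbered 1..n
    for (int i = cut1; i <= cut2; ++i) { child[i] = a[i]; used[a[i]] = true; }
    int pos = 0;
    for (int city : b) {
        if (used[city]) continue;
        while (pos >= cut1 && pos <= cut2) ++pos;      // skip the kept segment
        child[pos++] = city;
    }
    return child;
}

int main() {
    std::vector<int> p1 = {1, 5, 4, 3, 2, 6};
    std::vector<int> p2 = {2, 3, 5, 6, 4, 1};
    std::vector<int> o1 = mox(p1, p2, 1, 3);           // expected (2 5 4 3 6 1)
    std::vector<int> o2 = mox(p2, p1, 1, 3);           // expected (1 3 5 6 4 2)
    return (o1 == std::vector<int>{2, 5, 4, 3, 6, 1} &&
            o2 == std::vector<int>{1, 3, 5, 6, 4, 2}) ? 0 : 1;
}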

(ii) Modified Cycle Crossover (MCX)

The MCX proposed in our algorithm is also a modification of the cycle crossover (CX) as suggested in [58] that generates offspring by choosing a sub-string from one parent and preserving the cyclic order of cities from the other parent (see the pseudocode in Fig. 7).

For example, consider the two parents given below:

Parent 1: (1 | 3 5 4 | 2 6)
Parent 2: (2 | 4 3 1 | 6 5).

At first, two crossover sites are randomly chosen and marked with '|' on each parent. Next, the cities from the other parent are replaced in the same cyclic order as in the middle sub-string (between the sites) from the beginning of the string. The resulting offspring are as follows:

Offspring 1: (4 3 5 1 2 6)
Offspring 2: (2 3 5 1 6 4).

(iii) Sequence Crossover (SX)

The steps for SX are as follows:

Step 1: Randomly choose one as yet unselected city from each parent, say p1i from Parent 1 and p2j from Parent 2.
Step 2: If p1i ≠ p2j, then fill the next two free positions of Offspring 1 in the sequence p1i, p2j and, for Offspring 2, fill them in the reverse sequence p2j, p1i. Otherwise, repeat Step 1.
Step 3: Repeat Steps 1–2 until all the cities of both the parents are selected.

It is to be noted that when the number of cities is odd, after Step 3 the remaining unselected city of the respective parents is to be copied at the last position of the respective offspring.

For example, consider a pair of parents given below:

Parent 1: (1 5 2 6 4 3)
Parent 2: (2 6 5 1 4 3).

Suppose city 6 from Parent 1 and city 5 from Parent 2 are selected randomly the first time. Similarly, the second time, city 1 from Parent 1 and city 3 from Parent 2 are selected, and the third time, city 2 from Parent 1 and city 4 from Parent 2 are selected. The resulting offspring are then

Offspring 1: (6 5 1 3 2 4)
Offspring 2: (5 6 3 1 4 2).
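A minimal sketch of SX as we read it from the steps above (illustrative, not the paper's code; it assumes the two drawn cities must differ, that each selected pair fills the next two free positions, and that cities are numbered 1..n):

#include <random>
#include <utility>
#include <vector>

// Sequence crossover: repeatedly draw one yet unselected city per parent
// (they must differ), append them to offspring 1 in that order and to
// offspring 2 in reverse order; an odd leftover city is copied last.
std::pair<std::vector<int>, std::vector<int>>
sx(const std::vector<int>& p1, const std::vector<int>& p2, std::mt19937& rng) {
    int n = static_cast<int>(p1.size());
    std::vector<bool> used(n + 1, false);
    std::vector<int> o1, o2;
    auto draw = [&](const std::vector<int>& p) {          // random unselected city of p
        std::uniform_int_distribution<int> d(0, n - 1);
        int c;
        do { c = p[d(rng)]; } while (used[c]);
        return c;
    };
    while (static_cast<int>(o1.size()) + 1 < n) {
        int a = draw(p1), b = draw(p2);
        if (a == b) continue;                              // redraw when the cities coincide
        used[a] = used[b] = true;
        o1.push_back(a); o1.push_back(b);
        o2.push_back(b); o2.push_back(a);
    }
    if (static_cast<int>(o1.size()) < n)                   // odd n: copy the leftover city
        for (int c = 1; c <= n; ++c)
            if (!used[c]) { o1.push_back(c); o2.push_back(c); }
    return {o1, o2};
}

int main() {
    std::mt19937 rng(1);
    std::vector<int> p1 = {1, 5, 2, 6, 4, 3}, p2 = {2, 6, 5, 1, 4, 3};
    auto [o1, o2] = sx(p1, p2, rng);
    return (o1.size() == 6 && o2.size() == 6) ? 0 : 1;
}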


Fig. 6. Pseudocode of MOX.

4.5. Mutation

In our implementation, two mutation operators are proposed, viz. exchange mutation (EM) (as in [37]) and the newly created replacement mutation (RM), which are discussed in detail below.

(i) Exchange Mutation (EM)

The EM is nothing but a pairwise interchange of any two randomly selected or adjacent (left/right) cities for a randomly chosen tour.

For example, the result of EM on the tour (1 3 4 2 5 6)

(a) for randomly selected second and fifth cities will be (1 5 4 2 3 6)
(b) for right adjacent city of 2 will be (1 3 4 5 2 6)


Fig. 7. Pseudocode of MCX.

(c) for left adjacent city of 2 will be (1 3 2 4 5 6).

Here the EM is used as a deterministic hill climber in the sense that the result of the swap is accepted only if it results in an improvement; otherwise, the mutated tours are discarded. Consequently, a good tour never gets lost and high diversity is maintained in each generation.
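A minimal sketch of this improve-or-discard swap (illustrative; the scalar proxy used here compares interval tour times by their centres, a simplification of Definition 2.3, and the names tourCentre and tryExchange are assumptions):

#include <random>
#include <utility>
#include <vector>

using Matrix = std::vector<std::vector<double>>;

// Centre of the interval tour time: (sum of lower limits + sum of upper limits) / 2.
double tourCentre(const std::vector<int>& t, const Matrix& tL, const Matrix& tR) {
    double lo = 0.0, hi = 0.0;
    int n = static_cast<int>(t.size());
    for (int i = 0; i < n; ++i) {
        lo += tL[t[i]][t[(i + 1) % n]];
        hi += tR[t[i]][t[(i + 1) % n]];
    }
    return (lo + hi) / 2.0;
}

// Swap two randomly selected cities; keep the swap only if the tour improves.
bool tryExchange(std::vector<int>& tour, const Matrix& tL, const Matrix& tR,
                 std::mt19937& rng) {
    std::uniform_int_distribution<int> d(0, static_cast<int>(tour.size()) - 1);
    int i = d(rng), j = d(rng);
    double before = tourCentre(tour, tL, tR);
    std::swap(tour[i], tour[j]);
    if (tourCentre(tour, tL, tR) < before) return true;   // improvement: accept
    std::swap(tour[i], tour[j]);                           // otherwise discard the swap
    return false;
}

int main() {
    Matrix tL = {{0, 1.25, 2.50}, {1.25, 0, 1.25}, {2.50, 2.50, 0}};
    Matrix tR = {{0, 1.50, 3.00}, {1.75, 0, 1.50}, {2.75, 3.25, 0}};
    std::vector<int> tour = {0, 2, 1};
    std::mt19937 rng(3);
    tryExchange(tour, tL, tR, rng);
}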

(ii) Replacement Mutation (RM)

RM is used here in a broader sense than usual and its aim is to prevent premature convergence of the population. Instead of performing gene-by-gene mutation with small probability in each generation, here some new randomly generated individuals (mutants), created using the same heuristic as the original (initial) population, are introduced into the next generation (BOT in Fig. 5). In this way, no genetic material of the current population is brought in and thus premature convergence is unlikely to occur. Fig. 5 depicts the transitional process between two consecutive generations.

4.6. Termination criteria

In [37], the termination conditions have been set to the total numbers of generations in the GGA and the LGA. But our algorithm terminates as soon as either of the following two conditions is satisfied:

(a) the best solution does not improve within 10 consecutive generations (for both GGA and LGA);
(b) the number of generations reaches the user-defined maximum (for both GGA and LGA).
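For illustration, the stagnation test of condition (a) can be tracked with a simple counter (a sketch; the names stagnant and maxGen are assumptions, with maxGen playing the role of mgen in Table 2):

#include <iostream>

int main() {
    int maxGen = 50, stagnant = 0;
    double best = 1e9;                       // e.g. centre of the best interval fitness so far
    for (int gen = 1; gen <= maxGen; ++gen) {
        double genBest = best;               // ... evaluate the new generation here ...
        if (genBest < best) { best = genBest; stagnant = 0; }
        else if (++stagnant >= 10) {         // condition (a): 10 generations without improvement
            std::cout << "stopped at generation " << gen << "\n";
            break;
        }
    }
}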

5. Tour improvement heuristic

In our algorithm, a tour improvement heuristic is applied to each member of the offspring population (see the pseudocode in Fig. 8).


Fig. 8. Pseudocode of tour improvement heuristic.

Table 1
Different GA variants.

Crossover   Mutation   Variant name
MOX         EM         GA-1
MOX         RM         GA-2
MCX         EM         GA-3
MCX         RM         GA-4
SX          EM         GA-5
SX          RM         GA-6

Table 2
Parameters of GGA and LGA.

Problem       psize       mgen        pc          pm          TOP             BOT
              GGA  LGA    GGA  LGA    GGA  LGA    GGA  LGA    GGA(%) LGA(%)   GGA(%) LGA(%)
I-ATSP(10)a   20   10     50   25     0.6  0.7    0.2  0.3    0.2    0.2      0.2    0.3
I-ATSP(15)a   30   10     50   25     0.6  0.7    0.2  0.3    0.2    0.2      0.2    0.3
I-ATSP(20)a   40   10     50   25     0.6  0.7    0.2  0.3    0.1    0.2      0.2    0.3
I-ATSP(30)a   60   10     50   25     0.6  0.7    0.2  0.3    0.05   0.2      0.2    0.3

psize—population size; mgen—maximum number of generations allowed; pc—probability of crossover; pm—probability of mutation.
a The numbers within brackets of each problem indicate the number of cities for the respective problem.

6. Computational results with discussion

In this section, the computational results of our proposed algorithm on four randomly generated I-ATSPs of 10 cities, 15 cities, 20 cities and 30 cities are presented. Here, the tLij and tRij values (for i ≠ j) are taken as follows:

tLij = 1.25 (1 + random integer on the interval [0, 3])
tRij = tLij + 0.25 (1 + random integer on the interval [0, 3]).

For the second problem, with 15 cities, the data for the matrices tLij and tRij separately, together with the best found as well as the global optimal solutions, are listed in the Appendix.
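Read literally, the generation scheme above can be sketched as follows (an illustration under that literal reading; makeInstance is an assumed name and the diagonal entries are simply left at zero):

#include <random>
#include <utility>
#include <vector>

using Matrix = std::vector<std::vector<double>>;

// Random interval time matrices [tL, tR] for an n-city I-ATSP, following the
// formulas of Section 6 read literally (diagonal entries are unused).
std::pair<Matrix, Matrix> makeInstance(int n, std::mt19937& rng) {
    std::uniform_int_distribution<int> r(0, 3);
    Matrix tL(n, std::vector<double>(n, 0.0)), tR = tL;
    for (int i = 0; i < n; ++i)
        for (int j = 0; j < n; ++j)
            if (i != j) {
                tL[i][j] = 1.25 * (1 + r(rng));
                tR[i][j] = tL[i][j] + 0.25 * (1 + r(rng));
            }
    return {tL, tR};
}

int main() {
    std::mt19937 rng(2011);
    auto [tL, tR] = makeInstance(15, rng);   // an I-ATSP(15)-sized instance
    return (tL.size() == 15 && tR[0][1] >= tL[0][1]) ? 0 : 1;
}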

The proposed algorithm is coded in C/C++ and tested on a PC with a Pentium IV 3.0 GHz processor and 1 GB RAM under a Linux environment. To solve the problems, six variants of our developed GA are proposed based on the different combinations of crossover and mutation shown in Table 1. Because of the stochastic nature of GAs, all the results reported in this paper are obtained using 20 independent runs per problem, and also for each GA variant, with different sets of random numbers.

The best found objective values (Best), the success rate (SR) of the best found solution, the minimum, maximum and average number of generations until the final solution is first seen, the minimum, maximum and average CPU times (in seconds) required to obtain the best solution, and also the minimum, maximum and average numbers of objective function evaluations (Fn-count) for each problem and for each GA variant are reported in Table 3.

6.1. Parameter selection

In the experiments, the parameters of Table 2 are used.


Table 3
Computational results of different GA variants for four different I-ATSPs.

Variant  Measure                     I-ATSP(10)        I-ATSP(15)       I-ATSP(20)          I-ATSP(30)

GA-1     Best                        [16.50, 23.25]    [22.50, 30.25]   [29.50, 39.75]      [43.50, 59.50]
         SR                          100               50               60                  40
         Gen (Min/Max/Avg)           1/1/1             1/7/1.55         1/21/3.55           1/19/4.3
         CPU time(s) (Min/Max/Avg)   0.09/0.24/0.133   0.38/8.74/4.270  2.75/58.39/29.159   63.79/281.99/122.893
         Fn-count (Min/Max/Avg)      40/40/40          60/540/286.5     120/1280/562        720/1800/918

GA-2     Best                        [16.50, 23.25]    [22.50, 30.25]   [29.50, 39.75]      [43.25, 58.25]
         SR                          100               50               40                  50
         Gen (Min/Max/Avg)           1/1/1             1/13/4.3         1/19/7.6            1/17/6.95
         CPU time(s) (Min/Max/Avg)   0.09/0.33/0.150   0.41/7.91/3.473  1.21/58.31/33.42    84.03/235.50/150.985
         Fn-count (Min/Max/Avg)      40/40/40          60/540/234       80/1200/704         720/1680/1077

GA-3     Best                        [16.50, 23.25]    [22.50, 30.25]   [29.50, 39.75]      [43.25, 58.75]
         SR                          100               50               50                  40
         Gen (Min/Max/Avg)           1/1/1             1/10/1.85        1/9/2.3             1/9/2.85
         CPU time(s) (Min/Max/Avg)   0.08/0.13/0.097   0.41/6.78/3.795  14.87/45.33/25.089  66.54/178.34/112.947
         Fn-count (Min/Max/Avg)      40/40/40          60/420/265.5     480/880/532         720/1200/831

GA-4     Best                        [16.50, 23.25]    [22.50, 30.25]   [29.50, 39.75]      [42.75, 57.25]
         SR                          100               60               50                  60
         Gen (Min/Max/Avg)           1/1/1             1/16/6.45        1/21/6.4            1/27/11.55
         CPU time(s) (Min/Max/Avg)   0.09/0.17/0.120   1.66/10.80/5.351 1.19/70.38/28.696   102.32/398.84/185.939
         Fn-count (Min/Max/Avg)      40/40/40          60/600/348       480/1280/616        720/2280/1353

GA-5     Best                        [16.50, 23.25]    [22.50, 30.25]   [29.25, 40.25]      [44.50, 58.50]
         SR                          100               65               40                  45
         Gen (Min/Max/Avg)           1/1/1             1/3/1.1          1/11/2.3            1/17/2.4
         CPU time(s) (Min/Max/Avg)   0.10/0.13/0.113   0.40/6.68/4.076  15.46/55.54/25.387  63.33/225.68/102.049
         Fn-count (Min/Max/Avg)      40/40/40          60/420/288       480/880/532         720/1320/804

GA-6     Best                        [16.50, 23.25]    [22.50, 30.25]   [29.50, 39.75]      [42.25, 57.75]
         SR                          100               65               60                  70
         Gen (Min/Max/Avg)           1/1/1             1/2/1.23         1/7/2.1             1/17/1.55
         CPU time(s) (Min/Max/Avg)   0.07/0.11/0.09    0.36/5.83/3.604  0.49/55.90/24.137   55.57/103.07/96.454
         Fn-count (Min/Max/Avg)      40/40/40          60/360/230       200/800/444         720/1180/893

Global optimum                       [16.50, 23.25]    [22.50, 30.25]   [29.50, 39.75]      [42.75, 57.25]

Best—best found objective value; SR—success rate of the best found solutions; Gen—number of generations until the final solution is first seen; CPU time(s)—execution time required for the best solution; Fn-count—number of objective function evaluations.

6.2. Comparisons and discussions

From the results of the proposed algorithm shown in Table 3, the following remarks may be noted:

1. The best found objective values for the first three I-ATSPs as obtained from the different GAs remain the same except for the case when I-ATSP(20) is solved using GA-5. But for the fourth problem, viz. I-ATSP(30), the best found objective values are different for different GAs and the optimal value (i.e., [42.75, 57.25]) is obtained for a single instance, when the problem is solved using GA-4.

2. The best found solution is often found, as the success rate is 50% or more for the majority (19 out of 24) of the cases. This rate is 100% for I-ATSP(10) for all the GAs and 60% and above for GA-6 for all the four problems.

3. A minimum of one generation is required until the best found final result is seen for the first time, in all the cases. On the other hand, a maximum of fewer than 28 generations and an average of fewer than 12 generations are needed for the same.

4. Our GAs take a minimum, a maximum and an average CPU time of less than 15.47 s, 70.39 s and 25.1 s respectively for the first three problems, while for the fourth problem these are seen to be 102.4 s, 399 s and 176.5 s, respectively.

5. The average number of objective function evaluations per run lies between 40 and 1353.
6. Although GA-1 and GA-4 find better solutions of I-ATSP(20) and I-ATSP(30), respectively, GA-6 (i.e., the combination of SX and RM) proves to be comparatively the best in all respects.

The generation-by-generation convergence of I-ATSP(15) using the best GA variant, viz. GA-6, is presented in Table 4, which clearly illustrates our proposed methodology.

7. Comments and conclusions

For the first time, this paper deals with a realistic asymmetric traveling salesman problem based on the concept of time minimization, where the inter-city times are considered as intervals, and solves the same using a genetic algorithm (GA) with the help of interval arithmetic and the existing interval order relations developed recently from the pessimistic decision makers' point of view only. In order to keep the population diversity in LGA, the values of pc and pm are set higher than those in the GGA. During experiments, it is seen that the process converges rapidly with the increase of the number of cities in the sub-tour and appears stable when n/4 ≤ ns ≤ n/2, ns being the number of cities in a sub-tour. From the computational point of view, though running a big LGA (i.e., an LGA with a large number of cities in the sub-tour) in every generation of GGA is expensive, the LGA can be an effective local search operator in the GGA to improve the individuals of the main population, which is observed in our experiments.

Our analysis and experiments indicate that the combination of GGA with LGA, including new features in GA initialization, selection, crossover, mutation, and the tour improvement heuristic, allows the genetic search to maintain high diversity within reasonable time.

Experiments with the six new variants of our proposed GA on four interval valued test problems verify that our alternative approach is well competitive with the traditional GAs, including the GA suggested in [37] for solving deterministic problems.


Table 4
Evolution of generations of I-ATSP(15) using GA-6.

Trial  Gen (GGA)  Best Obj. Val. (LGA)  Obj. Val. (GGA)
1      1          [5.00, 7.25]          [22.00, 31.00]
       2          [5.00, 6.50]          [22.50, 30.25]
2      1          [12.00, 15.50]        [22.50, 30.25]
3      1          [9.25, 12.25]         [22.50, 30.25]
4      1          [11.75, 15.50]        [22.50, 30.25]
5      1          [8.75, 12.00]         [22.00, 31.00]
       2          [5.75, 8.00]          [22.00, 31.00]
       3          [14.75, 20.75]        [22.00, 31.00]
       4          [10.25, 14.50]        [22.00, 31.00]
       5          [7.25, 10.50]         [22.00, 31.00]
       6          [7.50, 10.50]         [22.00, 31.00]
       7          [8.50, 12.75]         [22.00, 31.00]
       8          [16.25, 22.75]        [22.00, 31.00]
       9          [13.50, 18.50]        [22.00, 31.00]
       10         [11.75, 16.25]        [22.00, 31.00]
6      1          [11.75, 17.25]        [22.00, 31.00]
       2          [6.75, 10.00]         [22.00, 31.00]
       3          [8.75, 12.50]         [22.00, 31.00]
       4          [16.00, 23.00]        [22.00, 31.00]
       5          [14.50, 20.75]        [22.00, 31.00]
       6          [4.25, 6.00]          [22.00, 31.00]
       7          [7.00, 9.50]          [22.00, 31.00]
       8          [8.75, 12.00]         [22.00, 31.00]
       9          [4.25, 6.00]          [22.00, 31.00]
       10         [16.25, 23.25]        [22.00, 31.00]
7      1          [9.00, 12.00]         [22.50, 30.25]
8      1          [4.00, 5.75]          [22.50, 30.25]
9      1          [10.50, 13.75]        [22.50, 30.25]
10     1          [8.00, 10.50]         [22.50, 30.25]
11     1          [9.25, 12.75]         [22.00, 31.00]
       2          [12.00, 16.75]        [22.00, 31.00]
       3          [8.50, 12.75]         [22.00, 31.00]
       4          [9.00, 12.50]         [22.00, 31.00]
       5          [7.75, 10.75]         [22.00, 31.00]
       6          [7.50, 10.25]         [22.00, 31.00]
       7          [11.50, 17.25]        [22.00, 31.00]
       8          [6.50, 9.00]          [22.00, 31.00]
       9          [19.25, 27.50]        [22.00, 31.00]
       10         [9.00, 13.00]         [22.00, 31.00]
12     1          [14.75, 20.25]        [22.00, 31.00]
       2          [5.75, 7.00]          [22.50, 30.25]
13     1          [7.75, 10.25]         [22.00, 31.00]
       2          [10.25, 14.50]        [22.00, 31.00]
       3          [4.75, 5.75]          [22.00, 31.00]
       4          [4.25, 6.00]          [22.00, 31.00]
       5          [4.25, 6.00]          [22.00, 31.00]
       6          [4.75, 6.50]          [22.00, 31.00]
       7          [8.50, 12.50]         [22.00, 31.00]
       8          [11.50, 17.25]        [22.00, 31.00]
       9          [5.75, 8.25]          [22.00, 31.00]
       10         [8.50, 12.50]         [22.00, 31.00]
14     1          [4.25, 6.00]          [22.00, 31.00]
       2          [6.00, 8.50]          [22.00, 31.00]
       3          [4.75, 6.50]          [22.00, 31.00]
       4          [4.25, 6.50]          [22.00, 31.00]
       5          [5.75, 7.75]          [22.00, 31.00]
       6          [15.00, 21.50]        [22.00, 31.00]
       7          [8.50, 12.50]         [22.00, 31.00]
       8          [12.00, 17.00]        [22.00, 31.00]
       9          [10.50, 14.75]        [22.00, 31.00]
       10         [4.50, 6.75]          [22.00, 31.00]
15     1          [11.75, 17.25]        [22.00, 31.00]
       2          [4.25, 6.00]          [22.00, 31.00]
       3          [5.75, 8.25]          [22.00, 31.00]
       4          [4.75, 5.75]          [22.00, 31.00]
       5          [10.50, 15.25]        [22.00, 31.00]
       6          [11.75, 16.25]        [22.00, 31.00]
       7          [4.25, 6.00]          [22.00, 31.00]
       8          [5.75, 8.25]          [22.00, 31.00]
       9          [7.75, 10.75]         [22.00, 31.00]
       10         [4.25, 6.00]          [22.00, 31.00]
16     1          [6.75, 10.00]         [22.00, 31.00]
       2          [4.75, 6.75]          [22.50, 30.25]
17     1          [10.75, 14.00]        [22.50, 30.25]
18     1          [18.75, 26.25]        [22.00, 31.00]
       2          [13.50, 18.25]        [22.00, 31.00]
       3          [9.75, 14.00]         [22.00, 31.00]
       4          [4.25, 5.75]          [22.00, 31.00]
       5          [4.00, 5.75]          [22.00, 31.00]
       6          [5.50, 8.00]          [22.00, 31.00]
       7          [6.25, 8.00]          [22.00, 31.00]
       8          [11.50, 15.75]        [22.00, 31.00]
       9          [12.75, 18.25]        [22.00, 31.00]
       10         [13.00, 18.50]        [22.00, 31.00]
19     1          [4.25, 5.50]          [22.50, 30.25]
20     1          [10.75, 13.75]        [22.50, 30.25]

Gen (GGA)—generation number of GGA; Best Obj. Val. (LGA)—best found objective value from LGA; Obj. Val. (GGA)—objective value found from GGA.

However, we have solved some ATSPs with 34 cities (viz. ''ftv33''), 36 cities (viz. ''ftv35''), 43 cities (viz. ''p43''), 48 cities (viz. ''ry48p''), 53 cities (viz. ''ft53''), 56 cities (viz. ''ftv55''), 91 cities (viz. ''ftv90'') and 101 cities (viz. ''ftv100'') obtained from TSPLIB (URL: ftp://ftp.zib.de/pub/Packages/mp-testdata/tsplib/atsp/index.html) instances by our proposed GA, and in each case the best found solution coincides with the global solution given in TSPLIB. Those solutions are not mentioned in the revised manuscript.

Thus, we believe that our alternative GA approach will be suitable for solving other optimization problems due to its rapid convergence and high diversity features. For future research, one may develop an even more efficient GA to solve the single- as well as the multi-objective I-ATSP.

Acknowledgement

We are grateful to the honourable Reviewer for his helpful comments and constructive suggestions which significantly improved this paper.

Appendix. Matrix ([tLij, tRij])15×15 used for I-ATSP(15)

(tLij)15×15 =

−    4.50 3.25 2.75 4.00 3.25 1.25 2.00 3.75 5.00 2.75 3.50 2.25 3.50 3.00
2.75 −    4.00 1.50 3.25 2.50 4.25 3.50 2.25 3.50 4.75 4.75 1.25 1.25 4.75
1.25 2.00 −    5.00 3.75 2.50 2.25 1.50 3.50 2.25 1.50 2.50 3.00 5.00 2.50
2.75 4.25 4.25 −    1.25 1.50 4.75 4.00 5.00 1.25 3.50 3.25 4.00 5.00 3.00
2.00 2.00 3.50 4.50 −    2.00 1.50 2.00 2.75 1.25 1.25 2.50 2.00 4.75 1.50
3.50 3.75 3.75 3.00 1.25 −    4.75 1.75 3.50 2.75 2.25 1.75 2.25 3.75 2.75
1.25 5.00 2.00 4.50 2.75 5.00 −    2.50 4.00 3.25 3.25 3.25 2.25 4.00 1.25
2.25 4.00 2.50 3.00 4.50 5.00 4.50 −    3.25 3.50 2.75 1.25 3.50 4.00 3.75
3.75 3.50 4.25 4.75 4.50 1.75 1.75 1.50 −    2.50 4.00 3.75 1.50 1.25 1.25
3.25 2.25 2.50 3.75 3.25 1.75 2.50 4.50 2.50 −    2.00 2.50 1.75 1.25 4.75
1.75 3.00 4.75 4.25 4.75 5.00 4.25 4.25 2.25 3.00 −    4.50 4.25 5.00 2.75
1.50 3.25 3.25 4.50 2.75 2.00 3.75 1.50 1.50 1.75 3.75 −    2.75 4.75 2.00
3.00 3.75 1.75 4.75 2.75 2.25 4.25 3.75 1.50 1.50 3.00 3.50 −    2.50 4.75
4.00 2.00 3.50 3.50 2.50 4.00 4.50 3.75 5.00 3.75 4.50 2.50 5.00 −    1.75
3.50 1.25 5.00 1.75 3.50 3.25 2.50 1.25 5.00 2.00 4.75 1.75 4.50 4.50 −


(tRij)15×15 =

−    5.50 3.75 3.75 5.00 4.00 2.25 2.50 4.50 6.00 3.50 4.50 3.25 4.00 3.25
3.00 −    4.25 2.25 3.75 2.75 4.75 4.00 3.25 4.25 5.00 5.75 2.00 2.25 5.75
1.50 2.25 −    5.75 4.75 3.25 2.50 1.75 4.00 3.25 2.50 3.50 3.50 5.25 3.25
3.75 4.50 4.75 −    2.25 1.75 5.25 4.50 5.25 1.75 3.75 3.50 4.25 6.00 3.25
2.50 3.00 4.25 5.00 −    3.00 1.75 3.00 3.50 2.00 1.75 2.75 2.75 5.75 1.75
4.50 4.25 4.00 4.00 2.25 −    5.00 2.75 4.00 3.50 2.50 2.50 2.50 4.00 3.00
1.50 5.75 2.50 5.25 3.00 5.50 −    2.75 4.75 3.50 3.50 4.25 3.25 4.25 2.25
3.00 4.75 2.75 4.00 4.75 5.25 4.75 −    3.50 3.75 3.00 1.50 4.00 4.25 4.25
4.00 4.00 4.50 5.00 5.00 2.75 2.75 2.00 −    3.25 4.25 4.50 2.25 1.75 2.00
3.50 3.25 2.75 4.75 3.75 2.00 3.25 5.50 2.75 −    2.25 2.75 2.25 1.50 5.00
2.50 4.00 5.00 4.75 5.75 5.75 5.25 4.75 2.75 4.00 −    5.00 4.50 5.25 3.50
2.00 3.75 3.75 5.00 3.25 2.50 4.50 2.25 2.25 2.00 4.50 −    3.50 5.75 2.50
3.25 4.75 2.00 5.25 3.75 2.75 5.00 4.00 2.25 2.00 3.50 4.50 −    3.50 5.25
4.50 2.75 3.75 4.00 3.25 4.50 5.50 4.25 5.75 4.00 5.50 3.25 6.00 −    2.50
4.00 2.00 5.25 2.00 4.25 4.25 2.75 2.25 5.50 3.00 5.00 2.25 4.75 5.25 −

Best found solution = [22.50, 30.25] Global optimal solution = [22.50, 30.25].

References

[1] S. Lin, B.W. Kernighan, An effective heuristic algorithm for the traveling salesman problem, Oper. Res. 21 (1973) 498–516.
[2] E.L. Lawler, J.K. Lenstra, A.H.G. Rinnooy Kan, D.B. Shmoys, The Traveling Salesman Problem: A Guided Tour of Combinatorial Optimization, Wiley and Sons, New York, 1985.
[3] G.B. Dantzig, D.R. Fulkerson, S.M. Johnson, Solution of a large-scale traveling-salesman problem, Oper. Res. 2 (1954) 393–410.
[4] M. Padberg, G. Rinaldi, Optimization of a 532-city symmetric traveling salesman problem by branch and cut, Oper. Res. 6 (1987) 1–7.
[5] W.L. Eastman, Linear programming with pattern constraints, Ph.D. Thesis, Harvard University, Cambridge, MA, 1958.
[6] M. Held, R.M. Karp, The traveling salesman problem and minimum spanning trees: part II, Math. Program. 1 (1971) 6–25.
[7] E. Balas, N. Christofides, A restricted Lagrangian approach to the traveling salesman problem, Math. Program. 21 (1981) 19–46.
[8] H. Crowder, M.W. Padberg, Solving large-scale symmetric traveling salesman problems to optimality, Manage. Sci. 26 (1980) 495–509.
[9] M.W. Padberg, S. Hong, On the symmetric traveling salesman problem: a computational study, Math. Prog. Stud. 12 (1980) 78–107.
[10] M. Grötschel, O. Holland, Solution of large-scale symmetric traveling salesman problems, Math. Program. 51 (1991) 141–202.
[11] R.E. Bellman, Dynamic programming treatment of the traveling salesman problem, J. Assoc. Comput. Mach. 9 (1963).
[12] Ö. Ergun, J.B. Orlin, A dynamic programming methodology in very large scale neighbourhood search applied to the traveling salesman problem, Discrete Optim. 3 (2006) 78–85.
[13] D.S. Johnson, More approaches to the traveling salesman guide, Nature 330 (1987) 525.
[14] G.A. Croes, A method for solving traveling salesman problems, Oper. Res. 6 (1958) 791–812.
[15] O. Martin, S.W. Otto, E.W. Felten, Large step Markov chains for the traveling salesman problem, Complex Systems 2 (1991) 299–326.
[16] J. Knox, The application of Tabu search to the symmetric traveling salesman problem, Ph.D. Dissertation, University of Colorado, 1989.
[17] C.-N. Fiechter, A parallel Tabu search algorithm for large scale traveling salesman problems, Working Paper 90/1, Département de Mathématiques, École Polytechnique Fédérale de Lausanne, 1990.
[18] F. Glover, Artificial intelligence, heuristic frameworks and Tabu search, Manag. Decis. Econom. 11 (1990).
[19] J.J. Hopfield, D.W. Tank, Neural computation of decisions in optimization problems, Biol. Cybernet. 52 (1985).
[20] S. Kirkpatrick, C.D. Gelatt, M.P. Vecchi, Optimization by simulated annealing, Science 220 (1983) 671–680.
[21] E. Bonomi, J.-L. Lutton, The N-city traveling salesman problem: statistical mechanics and the Metropolis algorithm, SIAM Rev. 26 (1984) 551–568.
[22] S. Kirkpatrick, G. Toulouse, Configuration space analysis of traveling salesman problems, J. Phys. 46 (1985) 1277–1292.
[23] B.L. Golden, C.C. Skiscim, Using simulated annealing to solve routing and location problems, Naval Res. Logist. Quart. 33 (1986) 261–280.
[24] J. Lam, An efficient simulated annealing schedule, Ph.D. Dissertation, Department of Computer Science, Yale University, 1988.
[25] S. Nahar, S. Sahni, E. Shragowitz, Simulated annealing and combinatorial optimization, Int. J. Comp. Aided VLSI Des. 1 (1989) 1–23.
[26] C.C. Lo, C.C. Hsu, An annealing framework with learning memory, IEEE Trans. Syst. Man Cybern. A 28 (1998) 1–13.
[27] J.J. Grefenstette, R. Gopal, B. Rosimaita, D. Van Gucht, Genetic algorithms for the traveling salesman problem, in: Proceedings of the International Conference on Genetic Algorithms and their Applications, 1985, pp. 160–168.
[28] H.D. Nguyen, I. Yoshihara, K. Yamamori, M. Yasunaga, Implementation of an effective hybrid GA for large-scale traveling salesman problems, IEEE Trans. Syst. Man Cybern. B 37 (2007) 92–99.
[29] D.E. Goldberg, Genetic Algorithms in Search, Optimization and Machine Learning, Addison-Wesley, New York, 1989.
[30] P. Jog, J.Y. Suh, D. Van Gucht, The effect of population size, heuristic crossover and local improvement on a genetic algorithm for the traveling salesman problem, in: Proceedings of the Third International Conference on Genetic Algorithms and their Applications, 1989, pp. 110–115.
[31] D. Whitley, T. Starkweather, D'Ann Fuquay, Scheduling problems and traveling salesman: the genetic edge recombination operator, in: Proceedings of the Third International Conference on Genetic Algorithms and their Applications, 1989, pp. 133–140.
[32] H.C. Braun, On solving traveling salesman problems by genetic algorithm, in: H.P. Schwefel, R. Manner (Eds.), Parallel Problem Solving from Nature, in: Lecture Notes in Computer Science, vol. 496, 1991, pp. 129–133.
[33] P. Larranaga, C. Kuijpers, R. Murga, I. Inza, S. Dizdarevic, Genetic algorithms for the traveling salesman problem: a review of representations and operators, Artif. Intell. Rev. 13 (1999) 129–170.
[34] L. Yang, D.A. Stacey, Solving the Traveling Salesman Problem Using the Enhanced Genetic Algorithm, Springer-Verlag, 2001, pp. 307–316.
[35] S.S. Ray, S. Bandyopadhyay, S.K. Pal, New operators of genetic algorithms for traveling salesman problem, in: Proceedings of ICPR-04, Cambridge, UK, vol. 2, 2004, pp. 497–500.
[36] R. Baraglia, J.I. Hidalgo, R. Perego, A Parallel Hybrid Heuristic for the TSP, Springer-Verlag, 2001, pp. 193–202.
[37] P. Gang, I. Iimura, H. Tsurusawa, S. Nakayama, Genetic Local Search Based on Genetic Recombination: A Case for Traveling Salesman Problem, Springer-Verlag, 2004, pp. 202–212.
[38] H.K. Tsai, J.M. Yang, Y.F. Tsai, C.Y. Kao, Some issues of designing genetic algorithms for traveling salesman problems, Soft Comput. 8 (2004) 689–697, Springer-Verlag.
[39] S.B. Liu, K.M. Ng, H.L. Ong, A new heuristic algorithm for the classical symmetric traveling salesman problem, Int. J. Math. Sci. 1 (4) (2007) 234–238.
[40] B.F. Al-Dulaimi, H.A. Ali, Enhanced traveling salesman problem solving by genetic algorithm technique (TSPGA), in: Proceedings of World Academy of Science, Engineering and Technology, vol. 28, April 2008, pp. 296–302.
[41] G. Laporte, The traveling salesman problem: an overview of exact and approximate algorithms, European J. Oper. Res. 59 (1992) 231–247.
[42] D.S. Johnson, L.A. McGeoch, The traveling salesman problem: a case study, in: E. Aarts, J.K. Lenstra (Eds.), Local Search in Combinatorial Optimization, John Wiley & Sons, New York, 1997, pp. 215–310.
[43] D.L. Applegate, R.E. Bixby, V. Chvatal, W.J. Cook, The Traveling Salesman Problem: A Computational Study, Princeton University Press, Princeton, 2006.
[44] C.H. Papadimitriou, K. Steiglitz, The complexity of local search for the traveling salesman problem, SIAM J. Comput. 6 (1977) 78–83.
[45] T.H.C. Smith, V. Srinivasan, G.L. Thompson, Computational performance of three subtour elimination algorithms for solving asymmetric traveling salesman problems, Ann. Discrete Math. 1 (1977) 495–506.
[46] G. Carpaneto, P. Toth, Some new branching and bounding criteria for the asymmetric traveling salesman problem, Manage. Sci. 26 (1980) 736–743.
[47] J. Cirasella, D.S. Johnson, L.A. McGeoch, W. Zhang, The asymmetric traveling salesman problem: algorithms, instance generators, and tests, in: ALENEX Proceedings, in: Springer Lecture Notes in Computer Science, vol. 2153, 2001, pp. 32–59.
[48] E.K. Burke, P.I. Cowling, R. Keuthen, Effective local and guided variable neighbourhood search methods for the asymmetric travelling salesman problem, in: E.J.W. Boers, et al. (Eds.), EvoWorkshop, in: Springer Lecture Notes in Computer Science, vol. 2037, 2001, pp. 203–212.
[49] D.S. Johnson, G. Gutin, L.A. McGeoch, A. Yeo, W. Zhang, A. Zverovitch, Experimental analysis of heuristics for the ATSP, in: Gutin, Punnen (Eds.), The Traveling Salesman Problem and its Variations, Kluwer Academic Publisher, 2002, pp. 445–487.
[50] I.C. Choi, S.I. Kim, H.S. Kim, A genetic algorithm with a mixed region search for the asymmetric traveling salesman problem, Comput. Oper. Res. 30 (2003) 773–786.
[51] H. Ishibuchi, H. Tanaka, Multiobjective programming in optimization of the interval objective function, European J. Oper. Res. 48 (1990) 219–225.
[52] S. Chanas, D. Kuchta, Multiobjective programming in optimization of interval objective functions—a generalized approach, European J. Oper. Res. 94 (1996) 594–598.
[53] A. Sengupta, T.K. Pal, Theory and methodology on comparing interval numbers, European J. Oper. Res. 127 (2000) 28–43.
[54] J. Majumdar, A.K. Bhunia, Elitist genetic algorithm for assignment problem with imprecise goal, European J. Oper. Res. 177 (2007) 684–692.
[55] A. Sengupta, T.K. Pal, Travelling salesman problem with interval cost constraints, in: Fuzzy Preference Ordering of Interval Numbers in Decision Problems, Springer, 2009, pp. 111–119.
[56] R. Montemanni, J. Barta, M. Mastrolilli, L.M. Gambardella, The robust traveling salesman problem with interval data, Transp. Sci. 41 (3) (2007) 366–381.
[57] S.K. Mahato, A.K. Bhunia, Interval-arithmetic-oriented interval computing technique for global optimization, AMRX Appl. Math. Res. Exp. 2006 (2006) 1–19.
[58] Z. Michalewicz, Genetic Algorithms + Data Structures = Evolution Programs, Springer-Verlag, Berlin, 1999.