
Annals of Operations Research 27 (1990) 77-96

PARAMETRIC METHODS IN INTEGER LINEAR PROGRAMMING

Larry JENKINS

Department of Engineering Management, Royal Military College of Canada, Kingston, Ontario, Canada, K7K 5L0

Abstract

In contrast to methods of parametric linear programming which were developed soon after the invention of the simplex algorithm and are easily included as an extension of that method, techniques for parametric analysis on integer programs are not well known and require considerable effort to append them to an integer programming solution algorithm.

The paper reviews some of the theory employed in parametric integer programming, then discusses algorithmic work in this area over the last 15 years when integer programs are solved by different methods. A summary of applications is included and the article concludes that parametric integer programming is a valuable tool of analysis awaiting further popularization.

Keywords: Parametric integer programming.

1. Introduction

As with all mathematical programming, the need for sensitivity analysis in integer programs arises mostly from recognized uncertainty in the data, and a desire to analyse the effects of deviations from the best-estimate point-values. In many managerial contexts there may also be a related desire to identify "robust solutions" - solutions that are optimal or near-optimal over considerable variation of model parameters, or the wish to find several near-optimal solutions from which one may be selected according to criteria not included in the model, such as general acceptance or ease of implementation. In certain circumstances where an optimization problem is solved repeatedly there may be systematic variation of just a few parameters so it is efficient to solve all the problems together as a "family" rather than performing individual computations.

In complete contrast to linear programs (LPs) with continuous variables, where nowadays problems with hundreds or even thousands of variables and constraints are routinely solved by off-the-shelf software based on the simplex algorithm, there is as yet no general algorithm for integer problems that even approaches the efficiency of the simplex method. Furthermore, while parametric analysis can be added to an LP solution procedure almost as an afterthought, and requires just a few more simplex iterations, for parametric analysis to be performed on a pure integer program (IP) or a mixed integer program (MIP) requires careful planning before any optimization is begun, and is likely to require many times the resources needed to solve the problem at just point-values. In other words, not only is the solution of IPs and MIPs generally an exercise radically more difficult than solution of LPs, it is also a major step from solving IPs and MIPs at just point-values to performing parametric analysis on them.

© J.C. Baltzer A.G. Scientific Publishing Company

In the simplex algorithm the dual multipliers provide the information for moving toward the optimum and afterwards provide post-optimal sensitivity measures and allow parametric analysis by further simplex iterations. Unfortunately, a considerable body of research on duality in IPs and MIPs demonstrates that any kind of pricing involves a more complex function than the LP equivalent, and so far the dual functions of IPs and MIPs have not led to any efficient solution algorithm and offer incomplete sensitivity information. The fundamental difficulty derives from the lack of continuity in IP and MIP solutions as a function of parameter changes, a topic explored thoroughly in the Ph.D. thesis of Radke [42] and more recently in the monograph of Bank and Mandel [4]. IP and MIP duality will not be reviewed in this paper, but for an entry to the subject, the interested reader is referred to work by Blair and Jeroslow [5-9], Wolsey [54,55], such recent papers as Cook et al. [13], Hiller et al. [22], Kim and Cho [31], Schrage and Wolsey [47], and the text by Schrijver [48].

In the discussions which follow it will be convenient to give restricted meaning to two terms. "Sensitivity analysis" will refer to post-optimal analysis that defines a range of parameter variation for which the identified solution remains optimal. This includes determining if the current solution remains optimal when variables are added to or deleted from the formulation. "Parametric analysis" will refer to identifying optimal solutions for all values of one or more parameters that may vary over an arbitrarily wide range. Many parametric algorithms consist firstly of a sensitivity analysis to determine the range of optimality of the parameter of interest, and secondly of pushing that parameter to an epsilon-value beyond the range and re-optimizing, repeating the process as necessary to cover the complete parametric interval required. It follows that an exact ranging by the sensitivity analysis is best, and a lower bound might be acceptable, though computationally wasteful. A parametric method based on a sensitivity analysis that might overestimate the range of optimality runs the risk of giving incorrect information over part of the parametric interval considered.

Geoffrion and Nauss [19] prepared an excellent review of parametric and post-optimality analysis in integer linear programming back in 1977 and helped to inspire subsequent work in the area. This earlier review is recommended to the reader, though for the sake of completeness many of the ideas mentioned there will be repeated herein.

Due to the absence of a single effective general solution method for IPs, we will need to review sensitivity and parametric analysis within the context of different solution procedures. Another aspect of the lack of an efficient general solution algorithm for IPs is that certain problems that could easily be formulated as IPs or MIPs but could not be solved easily are instead formulated and solved by other approaches. Perhaps the best examples are various scheduling problems that are solved by specialized optimizing algorithms of combinatorics. We limit this review to problems that are cast in the classic IP or MIP formulation.

The next sections give the standard formulations for parametric analysis on IPs and MIPs and some theoretical results that are often used in the sensitivity and parametric analyses. We then review the research according to whether the parametric method is an extension of a method for solving point-value IPs or MIPs by enumeration, branch-and-bound, or cutting planes, or is independent of the solution algorithm. This choice of categories is somewhat arbitrary, particularly since some authors combine techniques; it is used simply to organize the material. Reported applications are summarized in section 8.

2. Formulation

We take as starting point the mixed integer linear program:

Maximize z = cx subject to Ax ≤ b, x ≥ 0, x̄ integer, (P)

where c and x are vectors having elements cj and xj, j = 1…n, A is a matrix with elements aij, i = 1…m, j = 1…n, and b is a vector with elements bi, i = 1…m. x̄ is a sub-vector of x containing those elements that must be integer-valued. P will include equally the alternate formulations of minimizing z and/or having the constraint set Ax ≥ b or Ax = b.

We use MIP to denote the mixed integer linear program where x̄ has at least one element but is strictly a sub-vector of x, and IP to denote a pure IP where x̄ = x. It is very common for x̄ to have elements limited to the values 0 and 1. In this case we will talk of a binary MIP or a binary IP, as appropriate. P is understood to include binary and general integer IPs and MIPs. In all discussions that follow we assume that P has a bounded, feasible solution.

The IP and MIP with parametric objective will be written:

Maximize z(φ) = (c + φc')x subject to Ax ≤ b, x ≥ 0, x̄ integer, (Pφ)

where φ is a scalar and c' is a change vector of dimension n which may be chosen such that, without loss of generality, 0 ≤ φ ≤ 1. "Solving" Pφ means determining an optimal solution, denoted x*(φ), and its value, denoted z*(φ), for all 0 ≤ φ ≤ 1.

Similarly, parameterization of the right hand side (RHS) and A-matrix respectively are defined:

Maximize z(θ) = cx subject to Ax ≤ (b + θb'), x ≥ 0, x̄ integer, (Pθ)

and

Maximize z(λ) = cx subject to (A + λA')x ≤ b, x ≥ 0, x̄ integer. (Pλ)


Most work has been done on changes in the b or c vectors. Some authors consider a family of problems with different point-values for the b or c vectors, which cases may or may not be subsumed by the above parametric formulations.

3. General theoretical results

In this section we introduce some concepts that can play an important role in solving P and its parametric variations.

A relaxation of any mathematical program is derived by loosening or removing some of the constraints of the original problem. The feasible solution space is enlarged so that, for a maximization,

z*(original problem) ≤ z*(relaxed problem).

It follows that if the x* of the relaxed problem happens to satisfy the constraints of the original problem, then that solution is optimal for the original problem. A common step toward solving P is to ignore the requirement that x be integer. This is called the linear programming relaxation which we denote by LPR(P). The converse of a relaxation is a restriction which has tighter or more constraints than the original problem.
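The bound is easy to see in miniature. The following sketch uses an invented 0-1 knapsack instance, chosen because its LP relaxation LPR(P) has a closed-form greedy optimum (Dantzig's rule), so no LP code is needed; the brute-force loop plays the role of solving P itself.

```python
# Sketch: LP-relaxation bound z*(P) <= z*(LPR(P)) on an invented 0-1 knapsack.
import itertools

def knapsack_lpr(c, a, b):
    """Dantzig's greedy optimum of max cx s.t. ax <= b, 0 <= x <= 1."""
    order = sorted(range(len(c)), key=lambda j: c[j] / a[j], reverse=True)
    z, cap = 0.0, b
    for j in order:
        take = min(1.0, cap / a[j])   # fill by best value/weight ratio
        z += c[j] * take
        cap -= a[j] * take
        if cap <= 0:
            break
    return z

def knapsack_ip(c, a, b):
    """Brute force over the binary lattice points inside the polytope."""
    n = len(c)
    return max(sum(c[j] * x[j] for j in range(n))
               for x in itertools.product((0, 1), repeat=n)
               if sum(a[j] * x[j] for j in range(n)) <= b)

c, a, b = [2.0, 3.0], [1.0, 2.0], 2.0
print(knapsack_ip(c, a, b), knapsack_lpr(c, a, b))  # 3.0 3.5
```

Here the relaxed optimum 3.5 bounds the integer optimum 3 from above, and since the greedy LP solution is fractional it does not happen to solve P directly.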

The linear constraint set of P defines a polytope in the non-negative orthant of n-space. The requirement of integrality of at least some of the xj defines a set of lattice points, and the set of feasible solutions of P are those lattice points within the polytope. Figure 1 illustrates this in 2-space. The optimal solution to P will be one of these points on the integer lattice near the perimeter of the polytope.

Fig. 1. The convex hull inside the LP polytope.

The convex hull, written conv(P), is the smallest polytope that includes all the feasible solutions of P. Theoretically it can be defined by adding linear constraints to LPR(P). The solution to P is an extreme point (or possibly an edge or face if there are alternate solutions) of conv(P), so if conv(P) can be identified, then solving P reduces to solving an LP over the feasible space conv(P). With our present state of knowledge, conv(P) can be identified easily for only a few problems with special structure such as network flow and matching problems, but nevertheless its definition yields some valuable theoretical results.

Fig. 2. Determining z*(φ) by concavity.

In problem Pφ there is no change in the feasible space and Pφ is equivalent to an LP parametric analysis over conv(P). Then in a minimization z*(φ) can be characterized as a piece-wise linear concave function of φ [23]. Correspondingly a maximized z*(φ) would be convex. The changes in slope of z*(φ) correspond to the optimum moving from one vertex of conv(P) to an adjacent one as φ varies.

The concavity of z*(φ) can be exploited to provide a simple method for solving Pφ. Figure 2 helps to demonstrate the technique for an IP. First solve Pφ at φ = 0, denoted P0, giving solution x*(0). Now determine the linear function (c + φc')x*(0), shown as line AB in fig. 2 (left). Next solve P1. If x*(0) ≠ x*(1) (if they were equal the procedure would be completed), determine function (c + φc')x*(1), illustrated as line BC. By concavity AC is a lower bound on z*(φ) for 0 ≤ φ ≤ 1 and envelope ABC is an upper bound. Now solve Pφ1, where φ1 is determined by (c + φ1c')x*(0) = (c + φ1c')x*(1) = z(φ1). If z*(φ1) = z(φ1) then Pφ is solved. If not, as demonstrated in fig. 2 (right), (c + φc')x*(φ1) gives two new points of intersection φ2 and φ3 at which to solve point-value IPs. The procedure continues until LB(φ) = UB(φ) for all 0 ≤ φ ≤ 1.

The method requires solving 2p - 1 point-value IPs [28], where p is the number of linear segments in z*(φ) for 0 ≤ φ ≤ 1, each of which coincides with a different x*(·), and point-value IPs have been solved at the {φk} that correspond to the break-points in z*(φ). It follows that the method is wasteful to the extent that p - 1 of the point-value IPs solved will find an x*(·) already identified. Possible ways to reduce this wastage are discussed in sections 6 and 7.
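The subdivision scheme of fig. 2 can be sketched as follows, written for the maximization form of Pφ, where z*(φ) is the convex upper envelope of the lines (c + φc')x. The brute-force binary point solver is a hypothetical stand-in for whatever IP algorithm one actually uses, and the scheme assumes the two lines being intersected have distinct slopes.

```python
# Sketch of the interval-subdivision method for P(phi): solve the endpoints,
# intersect the two value lines, solve at the intersection, and recurse until
# the piecewise-linear envelope is confirmed.
import itertools

def solve_point(c, cp, phi, A, b):
    """Hypothetical point-value solver: argmax (c + phi*cp)x s.t. Ax <= b, x binary."""
    n = len(c)
    best, best_x = None, None
    for x in itertools.product((0, 1), repeat=n):
        if all(sum(A[i][j] * x[j] for j in range(n)) <= b[i] for i in range(len(b))):
            z = sum((c[j] + phi * cp[j]) * x[j] for j in range(n))
            if best is None or z > best:
                best, best_x = z, x
    return best_x

def value(x, c, cp, phi):
    return sum((c[j] + phi * cp[j]) * x[j] for j in range(len(x)))

def parametric(c, cp, A, b, lo=0.0, hi=1.0, eps=1e-9):
    """Return [(phi_from, x*)] pieces of z*(phi) covering [lo, hi]."""
    xl, xh = solve_point(c, cp, lo, A, b), solve_point(c, cp, hi, A, b)
    return [(lo, xl)] + _refine(c, cp, A, b, lo, xl, hi, xh, eps)

def _refine(c, cp, A, b, lo, xl, hi, xh, eps):
    if xl == xh:
        return []
    # phi1 where the two value lines (intercept c.x, slope cp.x) intersect;
    # assumes the slopes differ, which holds when xl and xh are distinct optima.
    num = value(xl, c, cp, 0) - value(xh, c, cp, 0)
    den = (value(xh, c, cp, 1) - value(xh, c, cp, 0)) - \
          (value(xl, c, cp, 1) - value(xl, c, cp, 0))
    phi1 = num / den
    xm = solve_point(c, cp, phi1, A, b)
    if abs(value(xm, c, cp, phi1) - value(xl, c, cp, phi1)) < eps:
        return [(phi1, xh)]          # envelope confirmed; breakpoint at phi1
    return (_refine(c, cp, A, b, lo, xl, phi1, xm, eps)
            + _refine(c, cp, A, b, phi1, xm, hi, xh, eps))
```

On the invented instance c = (2, 3), c' = (3, -2), single constraint x1 + 2x2 ≤ 2, the method returns x* = (0, 1) on [0, 0.2) and x* = (1, 0) from the breakpoint φ = 0.2 onward.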

In this decade a number of researchers have considered the computational complexity of parametric combinatorial algorithms. In 1983 Carstensen [11] devised a class of 0-1 programming problems where the number of break-points in z*(φ) is exponential in the number of variables in the problem. About the same time Gusfield [21] published a paper explaining how, by carrying φ along as a variable in a branching algorithm, the complexity of such an algorithm to determine the value of φ for a breakpoint would be no more than q times the complexity of the algorithm for solving P, q being the number of branching points in the basic algorithm. He showed how this could be extended to two parameters, using as example a small problem solvable by a minimum-cut algorithm.

Fig. 3. z*(θ) is a step-function for an IP.

For problem Pθ, when all elements of the change vector b' are positive or zero, if θk < θk+1 then Pθk+1 is a relaxation of Pθk. For an IP, x*(θk) will remain optimal for some range θk ≤ θ < θk+1, but at θk+1 a new lattice point becomes optimal and z*(θ) will suddenly have an upward jump discontinuity (for a maximization). This will continue with a series of upward jumps in z*(θ) until θ = 1 (fig. 3).

A simple method of solving Pθ for an IP is therefore to solve Pθ first at θ = 1, giving x*(1), and then ascertain the value θ1 at which x*(1) suddenly becomes infeasible. Solving a point-value IP at θ1 - ε gives a new solution (unless no feasible solution exists) and this new solution x*(θ1 - ε) will be optimal down to some value θ2. x*(θ2 - ε) is found and the process repeated until Pθ has been solved over the complete range down to θ = 0. The whole process requires solving just p point-value IPs, where p is the number of different x*(·) found in the range 0 ≤ θ ≤ 1.
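The stepping scheme can be sketched for a binary IP as follows; brute force again stands in for the point-value IP solver, and a small eps plays the role of the step to just below each critical θ.

```python
# Sketch of the P(theta) stepping method with b' >= 0 on a binary IP:
# solve at theta = 1, find where x*(1) turns infeasible, re-solve just
# below that value, and repeat down to theta = 0.
import itertools

def solve_point(c, A, b, bp, theta):
    """Hypothetical point solver: max cx s.t. Ax <= b + theta*b', x binary."""
    n, m = len(c), len(b)
    best = None
    for x in itertools.product((0, 1), repeat=n):
        if all(sum(A[i][j] * x[j] for j in range(n)) <= b[i] + theta * bp[i]
               for i in range(m)):
            z = sum(c[j] * x[j] for j in range(n))
            if best is None or z > best[0]:
                best = (z, x)
    return best  # None if infeasible at this theta

def theta_min(x, A, b, bp):
    """Smallest theta for which x stays feasible (assumes b' >= 0)."""
    lo = 0.0
    for i in range(len(b)):
        lhs = sum(A[i][j] * x[j] for j in range(len(x)))
        if bp[i] > 0:
            lo = max(lo, (lhs - b[i]) / bp[i])
    return lo

def parametric_rhs(c, A, b, bp, eps=1e-7):
    """Return [(theta_from, z, x)] pieces of z*(theta), from 1 down to 0."""
    pieces, theta = [], 1.0
    while theta > 0:
        sol = solve_point(c, A, b, bp, theta)
        if sol is None:
            break
        z, x = sol
        lo = theta_min(x, A, b, bp)
        pieces.append((max(lo, 0.0), z, x))
        theta = lo - eps          # step just below the critical value
    return pieces
```

On the invented instance max 2x1 + 3x2 s.t. x1 + 2x2 ≤ 0 + 3θ, the pieces found have values 5, 3, 2, 0 as θ decreases, showing the upward jumps of fig. 3 in reverse.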

For an MIP it is possible that an increase in θ may permit an increase in z*(θ) by varying some of the continuous variables without permitting a jump to a different lattice point. In this case z*(θ) is made up of a mixture of piece-wise linear convex portions from the LP parameter variations and then changes, possibly with a jump discontinuity, as the integer variables change (fig. 4).

Fig. 4. z*(θ) for an MIP.

Finally, we point out after Noltemeier [38] that for an IP with all integer coefficients aij (arrived at if necessary by multiplying each row i by the lowest common denominator of the aij coefficients in that row), resolving Pθ over the continuum 0 ≤ θ ≤ 1 can be reduced to solving a family of problems at point-values θk, θk+1, …. For consider some constraint Σj aij xj ≤ bi + θbi'. If all aij and xj are integer, then the effect of the constraint cannot change except when the integer portion of bi + θbi' changes, or in other words, does not change until θ has enlarged enough for bi + θbi' to increase by 1. Then determining the family {Pθk} to solve Pθ amounts to identifying every value of θ for which bi + θbi' takes an integer value for at least one row i = 1…m.
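The breakpoint computation in Noltemeier's reduction needs only the RHS data; a short sketch (the data in the usage line are invented):

```python
# Sketch of Noltemeier's reduction: with integer A and integer x, a constraint
# sum_j a_ij x_j <= b_i + theta*b'_i changes effect only when b_i + theta*b'_i
# crosses an integer, so P(theta) on [0, 1] reduces to point problems at those
# theta values.
import math

def noltemeier_breakpoints(b, bp):
    """All theta in [0, 1] where b_i + theta*b'_i hits an integer, any row i."""
    thetas = {0.0, 1.0}
    for bi, bpi in zip(b, bp):
        if bpi == 0:
            continue
        lo, hi = sorted((bi, bi + bpi))   # range swept by b_i + theta*b'_i
        k = math.ceil(lo)
        while k <= hi:
            theta = (k - bi) / bpi
            if 0.0 <= theta <= 1.0:
                thetas.add(theta)
            k += 1
    return sorted(thetas)

print(noltemeier_breakpoints([2.5], [2]))  # [0.0, 0.25, 0.75, 1.0]
```

For b1 = 2.5, b1' = 2 the quantity 2.5 + 2θ crosses the integers 3 and 4 at θ = 0.25 and θ = 0.75, so four point problems cover the whole interval.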

4. Implicit enumeration methods

Historically the first algorithmic method for sensitivity and parametric analysis on binary IPs was developed as an extension of the additive algorithm of Balas [3] and its improvements. This section discusses such methods; then the closely related method of branch-and-bound is discussed in the next section.

The fundamental idea of both methods is that of all the possible solutions that could be enumerated (2^n for an IP with n binary variables) many of them can be ruled out because they would be (1) infeasible or (2) inferior in value to some feasible solution already identified. This feasible solution, called the incumbent, has value z̄ and places a bound on the value of the optimal solution. Usually the incumbent is updated each time a better feasible solution is found.

Balas' algorithm can be defined in terms of a binary tree where the 2^n terminal nodes represent the possible solutions and the 2^n - 1 intermediate nodes represent partial solutions. Specifically a partial solution S is defined to be an assignment of values 0 or 1 to a subset of the variables xj, j = 1…n. A partial solution can be fathomed (not require further exploration of its possible completions) if it can be shown that all its completions are infeasible or have a z not better than z̄.
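A simplified stand-in for this scheme is sketched below: depth-first enumeration over partial 0/1 assignments with two fathoming tests, an optimistic completion bound against the incumbent z̄ and a cheapest-possible-LHS feasibility test. This is not Balas' additive algorithm itself, only an illustration of the fathoming idea on an invented maximization instance.

```python
# Sketch: implicit enumeration of partial 0/1 assignments with fathoming by
# bound and by infeasibility of all completions.
def implicit_enumeration(c, A, b):
    n, m = len(c), len(b)
    incumbent = {"z": None, "x": None}

    def bound(fixed):
        # Optimistic completion: every free variable with c_j > 0 set to 1.
        z = sum(c[j] * fixed[j] for j in range(len(fixed)))
        return z + sum(cj for cj in c[len(fixed):] if cj > 0)

    def feasible_completion_exists(fixed):
        # Each row at its smallest achievable LHS (free a_ij < 0 set to 1).
        for i in range(m):
            lhs = sum(A[i][j] * fixed[j] for j in range(len(fixed)))
            lhs += sum(a for a in A[i][len(fixed):] if a < 0)
            if lhs > b[i]:
                return False
        return True

    def explore(fixed):
        if incumbent["z"] is not None and bound(fixed) <= incumbent["z"]:
            return                       # fathomed by value of the incumbent
        if not feasible_completion_exists(fixed):
            return                       # fathomed by infeasibility
        if len(fixed) == n:              # feasible leaf, better than incumbent
            incumbent["z"] = sum(c[j] * fixed[j] for j in range(n))
            incumbent["x"] = tuple(fixed)
            return
        explore(fixed + [1])
        explore(fixed + [0])

    explore([])
    return incumbent["z"], incumbent["x"]

print(implicit_enumeration([2, 3], [[1, 2]], [2]))  # (3, (0, 1))
```

Note that at a full assignment the feasibility test is exact and the bound equals the true value, so any leaf reached strictly improves the incumbent.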

In a 1972 paper, Roodman [43], using the fathoming tests defined by Balas, stored all fathomed nodes as well as which particular constraint caused that node to be fathomed. Roodman added to the constraint set Σj aij xj ≤ bi, i = 1…m a 0-row where a0j = cj and b0 = z̄, thus including fathoming by the value z̄ of the incumbent. After completing the optimization, all partial solutions are sorted into disjoint subsets Ωi, i = 0…m according to which of the constraints caused the fathoming. Then ranging analysis can be performed on an element of interest cj, aij or bi by examining the set Ωi for whichever of the i = 0…m constraints the element appeared in and identifying the smallest parameter variation at which one of the partial solutions in Ωi would now fail a fathoming test.

Parametric analysis is achieved by pushing the parameter of interest just beyond this critical value, further exploring the partial solution which is no longer fathomed, updating all sets Ωi, i = 0…m, and repeating this procedure until the complete range of parameter variation is examined.

Piper and Zoltners [40] improved on Roodman's method in a number of ways, using sharper fathoming tests and more efficient schemes for storing and updating the sets Ωi of fathomed solutions. In a related paper [41] they used the developed techniques for identifying the best solutions with z within a specified tolerance of the z* of the optimal solution, and also to determine whether one of these other solutions would be optimal under certain specified perturbations of the parameters.

The method pioneered by Roodman was further explored by Loukakis and Muhlemann in 1984 [33]. They improved further on the method for computing the sets Ωi, i = 0…m and presented their algorithmic method in detail. They considered a wide variation in a single cj value (this simply amounts to identifying the value between -∞ and +∞ at which the optimum changes from not including to including the binary variable xj) and wide changes in a value of bi (from the lowest value of bi that permits a feasible solution to a value beyond which the constraint has no effect on the solution). They did extensive computational testing and present results for problems up to size of 11 constraints and 25 binary variables. For this size of problem they often found a need to store more than 13,000 partial solutions.

Recently, Wilson and Jain [53] have applied the approach of Piper and Zoltners to the case of goal-programming on a binary IP. They generate a set of k-best solutions, then determine the range over which one of these solutions is still optimal for changes in individual cj, bi, or the weighting of different goals - equivalent to a Pφ sensitivity analysis in our notation. They report computational tests on small problems.

5. Branch-and-bound methods

In 1974 Roodman [44] extended his earlier work to MIPs and for this examined partial solutions by solving their LP relaxation. For some partial solution S the LP relaxation has the following features:

(1) If LPR(S) is infeasible, then S has no feasible completions.


(2) Because the value of LPR(S) gives an upper bound (when maximizing) on the value of z for all completions leading from that node, if z*[LPR(S)] < z̄ then the node is fathomed.

(3) If x*[LPR(S)] satisfies the integer requirements for the variables, this solution is the best possible completion of S.
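Rules (1) and (2) can be illustrated on the special case of a 0-1 knapsack, where the LP relaxation of a partial solution has a closed-form greedy value, so no LP code is needed (rule (3) is omitted for brevity, and the problem data are invented):

```python
# Sketch: branch-and-bound on a 0-1 knapsack (c, a > 0), branching in
# value/weight order, fathoming by infeasibility and by the greedy LP bound.
def knapsack_bb(c, a, b):
    n = len(c)
    order = sorted(range(n), key=lambda j: c[j] / a[j], reverse=True)
    best = {"z": 0.0, "x": [0] * n}   # x = 0 is always feasible here

    def ub(k, cap):
        """Greedy LPR value over the variables still free at depth k."""
        z = 0.0
        for j in order[k:]:
            take = min(1.0, cap / a[j])
            z += c[j] * take
            cap -= a[j] * take
            if cap <= 0:
                break
        return z

    def explore(k, cap, z, x):
        if cap < 0:
            return                            # rule (1): no feasible completion
        if z > best["z"]:
            best["z"], best["x"] = z, x[:]    # completions padded with zeros
        if k == n or z + ub(k, cap) <= best["z"]:
            return                            # leaf, or rule (2): bound fathoms
        j = order[k]
        x[j] = 1
        explore(k + 1, cap - a[j], z + c[j], x)
        x[j] = 0
        explore(k + 1, cap, z, x)

    explore(0, b, 0.0, [0] * n)
    return best["z"], tuple(best["x"])

print(knapsack_bb([2, 3], [1, 2], 2))  # (3.0, (0, 1))
```

The greedy bound here is exactly z*[LPR(S)] for this single-constraint special case; in a general MIP it would come from an LP solver.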

In the same style as his earlier work on IPs, Roodman solves the initial problem P and saves all the fathomed partial solutions storing for each S a record of those variables at 0 or 1, the value z*[LPR(S)] and some sensitivity information easily obtainable from the LP solution of LPR(S). He then considers a change in a single coefficient cj or, in a separate analysis, a change in some bi.

A change in cj does not alter the feasible space and the current MIP solution remains optimal until either there is a change in the values of the integer variables and/or a change in the LP basis for the continuous variables. Roodman uses information from LP ranging analysis to provide a conservative estimate for the range of cj for which the current x* is optimal. Outside this range the partial solution that now fails the fathoming test is explored further.

Since varying some bi does alter the feasible solution space, it is possible for some continuous variables to change continuously as bi changes, but this is easily predictable until there is a change in the LP basis accompanied perhaps by a change in the value of the integer variables. Roodman determines the critical value of bi + δbi by LP ranging analysis, then performs re-optimization outside this range to explore partial solutions that now fail the fathoming tests.

In his Ph.D. thesis [37] Nauss brought together, clarified and sharpened a number of earlier theoretical results in parametric integer programming. Much of that organisation of earlier thoughts is further refined and summarized in the excellent paper by Geoffrion and Nauss [19] already cited. After this summarization in the first few chapters of the thesis, Nauss devotes the rest of the thesis to proposing, implementing and testing a number of algorithms for the problems Pφ and Pθ for IP and MIP problems with special structure. All the algorithms are elaborations of the branch-and-bound algorithm used for solving the corresponding problem without parametric analysis.

The first problem is a binary IP with a single constraint, known as the 0-1 knapsack problem. Nauss first reduces Pθ to a family of problems with different values of b using the transformation of Noltemeier mentioned at the end of section 3. He then solves one member of the family and explores other members as sequential modifications to each previous problem, using sophisticated rules for limiting the re-optimization required. A similar approach is used with a finite number of objective functions using techniques reminiscent of those of Roodman and Piper and Zoltners. The case of Pφ with a continuously changing objective follows the style of solving a number of problems Pφk, with φk determined according to the concavity of z*(φ) mentioned in section 3. Nauss gives extensive computational results.

The approach is repeated but with more complex solution algorithms as necessary to solve a family {Pθk} and {Pφk} for the generalized assignment problem and the capacitated facility location problem, with extensive computational testing in all cases.

A 1977 paper by Marsten and Morin [35] considers Pθ with b' ≥ 0 for a binary IP. They develop an extended branch-and-bound algorithm in which the point-value upper limit on z* usually employed in bounding nodes is replaced by a bound that varies as a function of θ.

Based on the step-function form of z*(θ) as illustrated in fig. 3, they use z*(0) and some other heuristically-solved problems at arbitrary θ1, θ2, θ3, … to provide a lower bound on z*(θ), 0 ≤ θ ≤ 1. An upper bound on z*(θ) is obtained and updated by LP parametric analysis on the various solved Pθ: Pθ1, Pθ2, Pθ3, …. The extended branch-and-bound algorithm then uses these bounds, and tighter ones obtained by continual updating, to prove that all z*(θ) and, by implication, all x*(θ) have been identified in the range 0 ≤ θ ≤ 1.

The method is efficient in saving computer time over what would be needed to solve a set of {Pθk} individually, but the authors recognize that the computational burden of storing intermediate information may be considerable with a large range of parameter variation.

The method of Marsten and Morin was extended to binary MIPs in a 1985 paper by Ohtake and Nishida [39]. They used LP parameter analysis to determine a lower bound on z*(0) as well as an upper bound. They report computational testing on a small problem.

6. Cutting plane methods

The method of solving IPs or MIPs by cutting planes starts by solving LPR(P) and iteratively adds constraints (cuts) that cut off portions of the LP feasible region that are outside conv(P) until the solution to the restricted LP happens to be integer, at which time P is solved. At each iteration the method adds one or more constraints that cut off the current LP solution, then re-optimizes by using a dual-based LP method. In the past the cuts used most commonly were derived following the theory of Gomory, but recently tighter cuts called knapsack facets have been used with great success. See Jenkins [29] for a comparison of computational performance between different Gomory cuts and knapsack facets.
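The basic Gomory fractional cut is simple to state: if an optimal tableau row reads xB + Σj āj xj = b̄ with b̄ fractional, every integer solution satisfies Σj frac(āj) xj ≥ frac(b̄). A minimal sketch, with an assumed (invented) tableau row:

```python
# Sketch: coefficients of a Gomory fractional cut from one tableau row.
# frac(v) = v - floor(v) is taken componentwise, so negative coefficients
# also yield fractional parts in [0, 1).
import math

def gomory_cut(abar, bbar):
    frac = lambda v: v - math.floor(v)
    return [frac(a) for a in abar], frac(bbar)

coeffs, rhs = gomory_cut([0.75, -0.25], 3.5)
print(coeffs, rhs)  # [0.75, 0.75] 0.5  i.e. 0.75*x1 + 0.75*x2 >= 0.5
```

Generating the tableau rows themselves, and re-optimizing dually after each cut, is the substantial part of a real implementation and is not shown.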

Changes in the objective do not alter the solution space and in a parametric analysis the current solution remains optimal until c changes sufficiently for an adjacent extreme point of conv(P) to become optimal. Working with LPR(P) plus some added cuts unfortunately often moves the optimum to a non-integer extreme point of the solution space outside conv(P). (See fig. 5. If the coefficient of x1 is increased, the optimum will move to x1 = 3.5, x2 = 0 though the integer solution x1 = 3, x2 = 1 is optimal for -1 < c1 < ∞.) If that happens, adding further cuts in the region of the new LP optimum becomes necessary. In some cases the cuts added previously may speed convergence to the new optimum. In other cases their only effect is to enlarge the constraint set and thereby slow the computation involved.

Fig. 5. Changing the objective coefficients may give a non-integer optimum.
Max 2x1 + x2
s.t. x1 + x2 ≤ 5
-x1 + x2 ≤ 0
6x1 + 2x2 ≤ 21
1st cut: 2x1 + x2 ≤ 7
2nd cut: x2 ≤ 2
3rd cut: x1 + x2 ≤ 4.

Changes in the RHS may be insufficient to alter conv(P) or may involve a relaxation or restriction that has no effect on the current optimum. Sensitivity analysis aims to identify these cases. Larger changes or changes in a critical direction may change the integer optimum.

In his thesis [37] Nauss extended results of Frank [17] and pointed out that the equations of the Gomory cuts do not change under a change in b, but the first paper to explore this idea in detail was by Holm and Klein [24]. They considered a single discrete change in the RHS, gave details of how to re-compute the shifted cuts with a new RHS of b + Δ and also re-computed a "shifted" x*(b + Δ), which is simply the solution calculated from the optimal LP basis with RHS b using a new RHS (b + Δ). If x*(b + Δ) is feasible, meaning in this circumstance integer and non-negative, it is the new optimum. If not, further Gomory cuts are added to LPR(P) with added cuts from the original problem that had a RHS of b. In computational tests, in 10 of 12 small problems the re-computed x*(b + Δ) was optimal, while in the remaining 2 cases further cuts were necessary to find a feasible optimum.
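The shifted-solution check can be sketched as follows; a toy Gaussian elimination stands in for keeping the optimal basis factorization, and B, b and Δ in the usage lines are invented.

```python
# Sketch of the Holm-Klein shifted solution: with optimal LP basis matrix B
# for RHS b, the candidate for RHS b + delta is x_B = B^{-1}(b + delta); if
# it is integer and non-negative it is already the new optimum.
def solve_lin(B, rhs):
    """Tiny Gaussian elimination with partial pivoting: return B^{-1} rhs."""
    n = len(B)
    M = [row[:] + [rhs[i]] for i, row in enumerate(B)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(n):
            if r != col:
                f = M[r][col] / M[col][col]
                M[r] = [a - f * p for a, p in zip(M[r], M[col])]
    return [M[i][n] / M[i][i] for i in range(n)]

def shifted_solution(B, b, delta, tol=1e-9):
    xb = solve_lin(B, [bi + di for bi, di in zip(b, delta)])
    ok = all(v >= -tol and abs(v - round(v)) < tol for v in xb)
    return xb, ok   # ok: integer and non-negative, hence already optimal

print(shifted_solution([[2, 0], [0, 1]], [4, 2], [2, 0]))  # integral: ok
print(shifted_solution([[2, 0], [0, 1]], [4, 2], [1, 0]))  # fractional: not ok
```

When ok is False the method falls back to adding further (shifted) Gomory cuts, which is the expensive part not shown here.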

In a related paper Klein and Holm [32] specify conditions for range of optimality of the current solution of an IP or MIP in terms of changes to b, changes to c and introduction of a new variable to the formulation, thus specifying whether or not the new variable will enter the solution.

A 1984 paper [25] by the same authors compares their method with the method of Roodman, discussed in our section 4, and of Marsten and Morin, discussed in our section 5, for post-optimal analysis on IPs and gives computational results on a small problem for a Pθ family.

A 1980 paper by Bailey and Gillett [2] considers solving Pθ for an IP by cutting planes and starts with the case where b′ ≥ 0, so that Pθk is a restriction of Pθk+1, where θk+1 > θk. The algorithm starts at θ = 1 and solves P1 to completion using Gomory cuts. Then each constraint of the original problem is examined to find the smallest diminution in θ that renders the current solution infeasible. θ is set to this value and the problem re-optimized. The process is repeated until θ = 0. The paper also discusses a modification of the algorithm for the situation where not every element of b′ is non-negative.
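A single contraction step of this kind reduces to a ratio test over the constraints. The sketch below is a hedged illustration — the data and function names are invented — computing the smallest θ at which a current solution x remains feasible for Ax ≤ b + θb′ when b′ ≥ 0:

```python
from fractions import Fraction

def contraction_breakpoint(A, b, b_prime, x):
    """Smallest theta >= 0 at which x stays feasible for A x <= b + theta * b',
    assuming b' >= 0 so that decreasing theta only tightens the constraints."""
    theta_min = Fraction(0)
    for a_i, b_i, bp_i in zip(A, b, b_prime):
        lhs = sum(Fraction(a_ij) * x_j for a_ij, x_j in zip(a_i, x))
        if bp_i > 0:
            # a_i.x <= b_i + theta*bp_i  holds iff  theta >= (a_i.x - b_i)/bp_i
            theta_min = max(theta_min, (lhs - b_i) / bp_i)
        # rows with bp_i == 0 do not move with theta
    return theta_min

# Toy data: x = (2, 1) is feasible at theta = 1 but at no smaller theta
A = [[2, 1], [1, 3]]
b, b_prime = [3, 2], [2, 4]
print(contraction_breakpoint(A, b, b_prime, [2, 1]))  # 1
```

Below the returned value the current solution becomes infeasible and the algorithm re-optimizes; down to that value, no re-optimization is needed.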

In a subsequent paper Rountree and Gillett [45] use the transformation of Noltemeier to convert Pθ for an IP into a family of problems {Pθk}. Then, with θ at 0, problem LPR(P0) is solved and problem P0 by branch-and-bound. Subsequent to this, other members of the family are solved, using wherever possible lower and upper bounds on z*(θk) from previously solved members of the family. The authors consider the option of employing Gomory cuts in solving each Pθk, possibly followed by branch-and-bound, and of using cuts from one problem of the family in other problems as appropriate. They provide computational results for some small problems.

In a 1987 paper [30] Jenkins solves Pφ using the approach explained in section 3, where the concavity of z*(φ) is used to reduce the problem to solving 2p − 1 point-value IPs, where p is the number of different x*(φ) in the range 0 ≤ φ ≤ 1. In some experiments the solution method used was Gomory cutting planes; in other experiments knapsack facets were employed. The aim was to examine (a) the usefulness of the tests of Holm and Klein to ascertain whether some optimal x*(φk) is also optimal at an adjacent value φk+1 (a positive test result is conclusive, a negative result gives no information); and (b) the usefulness of solving the family of problems leaving in cuts from earlier members of the family, versus solving each problem completely separately. Extensive computational testing on small to medium size problems (20 variables, 30 constraints) indicated that the ranging tests of Holm and Klein were usually inconclusive, and therefore of little value, but that cumulative addition of cuts can be advantageous.
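The concavity argument behind the 2p − 1 count translates into a short recursion: solve at the two endpoints, intersect the two supporting lines of the concave z*(φ), solve at the intersection, and recurse only when the new point falls below those lines. A minimal sketch under invented data, with brute-force enumeration over a tiny explicit feasible set standing in for a real IP solver:

```python
from fractions import Fraction

def val(coeff, x):
    """Exact dot product."""
    return sum(Fraction(c_i) * x_i for c_i, x_i in zip(coeff, x))

def make_solver(c, c_prime, X):
    """Point-value 'IP solver': minimize (c + phi*c').x over an explicit finite
    feasible set X (brute force stands in for a real IP code)."""
    def solve(phi):
        return min(X, key=lambda x: val(c, x) + phi * val(c_prime, x))
    return solve

def breakpoints(solve, c, c_prime, x_lo, x_hi, out):
    """Recursion exploiting the concavity of z*(phi) on [phi_lo, phi_hi]."""
    if x_lo == x_hi:
        return
    # intersection of the supporting lines through the endpoint solutions
    den = val(c_prime, x_lo) - val(c_prime, x_hi)
    if den == 0:
        return  # parallel lines: no interior breakpoint
    phi_m = (val(c, x_hi) - val(c, x_lo)) / den
    x_m = solve(phi_m)
    z_m = val(c, x_m) + phi_m * val(c_prime, x_m)
    if z_m == val(c, x_lo) + phi_m * val(c_prime, x_lo):
        out.append(phi_m)  # lines meet on z*: a single breakpoint here
    else:
        breakpoints(solve, c, c_prime, x_lo, x_m, out)
        breakpoints(solve, c, c_prime, x_m, x_hi, out)

# Toy family (invented): minimize (c + phi*c').x over three feasible points
c, c_prime = [4, 0], [0, 5]
X = [(1, 0), (0, 1), (1, 1)]
solve = make_solver(c, c_prime, X)
bps = []
breakpoints(solve, c, c_prime, solve(Fraction(0)), solve(Fraction(1)), bps)
print(bps)  # [Fraction(4, 5)]: p = 2 pieces, found with 2p - 1 = 3 solves
```

Each recursion level either certifies an interval with one extra solve or splits it, which is where the 2p − 1 point-value solves come from.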

7. Parametric methods independent of the IP solution method

All sensitivity and parametric methods described so far have treated the analysis as an extension of a point-value solution method and have, in the earlier work, considered the value of φ or θ beyond some critical value and then re-optimized, and in later, more advanced work, solved a family of problems simultaneously. All the methods founder on the difficulty of storing large quantities of intermediate information in the solution process, this quantity increasing greatly with the size of the problem and the range of parameter variation considered.

An alternative is to solve point-value IPs or MIPs independently of one another, but to use the results explained earlier in section 3 to decide the values of φ or θ at which to solve the various point-value IPs, in order to completely solve Pφ or Pθ for the required range. The method for Pθ does have the limitation that for it to be applicable all elements of b′ must be non-negative (or alternatively all non-positive), but several authors have reported applied studies where the required analyses easily fitted this mold.

In a 1982 paper [27] Jenkins was faced with the need to perform wide-ranging parametric analysis on c and b of an MIP model with 29 binary variables, 1736 continuous variables and 152 constraints. For this size of problem, solving even point-value MIPs can be a challenge, and for the particular application it was judged best to use proven commercial software to solve the point-value MIPs.

For an MIP, z*(φ) is a piecewise linear concave function of φ, but some of the changes in slope may result from a change in the LP basis while the integer variables stay the same. To save on computation Jenkins introduced the heuristic of assuming that if the integer variables have the same value in the solutions at φ1 and φ2, then this will be the optimum value for all φ1 ≤ φ ≤ φ2. The heuristic can be tempered by placing a limit on the size of φ2 − φ1 and/or a limit on max{UB(φ) − LB(φ)} over the interval. If the heuristic is applied in its simple form, then the procedure requires solving 2p − 1 point-value MIPs, where p is the number of different optimal values of the integer variables found in the interval 0 ≤ φ ≤ 1, whereas an investigation requiring that UB(φ) = LB(φ) for the whole interval would require solving 2q − 1 point-value MIPs, where q represents the number of linear segments in z*(φ) and may be considerably greater than p.

To solve Pθ Jenkins did not require that b′ be non-negative, but did assume that Pθ had some feasible solution for all 0 ≤ θ ≤ 1 (this was true with the data examined). He then used a procedure comparable to that used for solving Pφ, and took advantage of the standard feature of most MIP software that allows LP parametric postoptimal analysis when, subsequent to solving a point-value MIP, the integer variables are fixed at that solution's optimal value. First, Pθ was solved at θ = 0; then, with pegged integer variables, an LP parametric analysis was performed. This gives a function z(θ, x̂(0)), which is the value of the objective with the integer variables fixed at the value which is optimal at θ = 0, designated x̂(0), the continuous variables free, and θ varied from 0 to 1. Next, P1 is solved and z(θ, x̂(1)) mapped out. The intersection of z(θ, x̂(0)) and z(θ, x̂(1)) defines a value θ1 at which a new MIP will be solved. The intersection of z(θ, x̂(0)) and z(θ, x̂(θ1)) defines a value θ2, and so on. If x̂(0) = x̂(θ2) or x̂(θ2) = x̂(θ1), then the heuristic is used that x̂(0) is assumed optimal for all 0 ≤ θ ≤ θ2 and x̂(θ1) is assumed optimal for all θ2 ≤ θ ≤ θ1. By this heuristic, 2p − 1 point-value MIPs are required to completely solve Pθ, where p is the number of different optimal x̂(·) found in 0 ≤ θ ≤ 1.

In the same study Jenkins experimented on the advantage or disadvantage of using the "scratch tree" from one MIP as input to guide solving the MIP for a similar problem with a different value of φ or θ. The "scratch tree" is the branching followed in solving an IP or MIP from scratch - without the software being instructed to follow any branching other than that which it would derive by its own evaluation rules. In his thesis work [37] Nauss had conducted such experiments and concluded that using a scratch tree from a closely related problem to guide a branch-and-bound search could be valuable. Jenkins saved scratch trees using standard features of the software he was using (MPSX/370 with the integer module [26], and APEX-III [12]). Used in this way, he concluded that inputting the scratch tree from a related problem more often hindered than helped the software in solving the point-value MIPs involved.

Jenkins used the techniques of section 3 in a later paper [28] for an IP with both binary and general integer variables, using the software LINDO [46] to solve point-value IPs. He also analysed variations in elements of the A-matrix, problem Pλ. If all the elements in the change matrix A′ are non-negative, then for a minimization, z*(λ2) ≤ z*(λ1) where λ2 > λ1. Using this result permits Pλ to be solved by the same approach as outlined in section 3 for solving Pθ.
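The monotonicity result is easy to see on a toy instance. The sketch below (data invented) brute-forces a two-variable minimization with constraint matrix A + λA′ and shows that increasing λ, with A′ ≥ 0, can only relax the problem and lower the optimum:

```python
from itertools import product

def z_star(lmbda, A, A_prime, b, c, ub=4):
    """Brute-force min of c.x over non-negative integer x (each bounded by ub)
    subject to (A + lmbda*A') x >= b."""
    best = None
    n = len(c)
    for x in product(range(ub + 1), repeat=n):
        if all(sum((A[i][j] + lmbda * A_prime[i][j]) * x[j] for j in range(n)) >= b[i]
               for i in range(len(b))):
            v = sum(c_j * x_j for c_j, x_j in zip(c, x))
            if best is None or v < best:
                best = v
    return best

# A' >= 0, so lambda = 1 gives a relaxation of lambda = 0
A, A_prime, b, c = [[1, 2]], [[1, 0]], [4], [1, 3]
print(z_star(0, A, A_prime, b, c), z_star(1, A, A_prime, b, c))  # 4 2
```

Because z*(λ) is monotone non-increasing, the same endpoint-and-intersection scheme used for Pθ applies to Pλ.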

This is a convenient point at which to mention two papers on parametric integer programming that include non-linear functions. In a 1980 paper [36] McBride and Yormark considered parametric analysis on the RHS of a quadratic pure binary program defined thus:

Minimize x′Qx subject to Ax ≥ (b + θb′), x integer (0, 1).

The notation for the constraints is that used in this paper. Q is a symmetric matrix of dimension n, and x′ is the transpose row-vector of the column vector x. The authors consider the case where b′ ≥ 0, so that for θ1 < θ2, Pθ1 is a relaxation of Pθ2; then z*(θ1) ≤ z*(θ2). The authors use the technique described in our section 3 to solve Pθ, solving each point-value problem by a branch-and-bound search. In their custom-built algorithm they experimented with using the scratch tree and other computed information from adjacent problems, and thus achieved considerable computational saving.
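The nesting of the {Pθk} can be reproduced in miniature. A hedged sketch — Q, A, b and b′ are invented — brute-forces the binary quadratic program at two θ values and confirms that with b′ ≥ 0 the larger θ is a restriction, so its optimum is no smaller:

```python
from itertools import product

def qip_min(Q, A, b, b_prime, theta):
    """Brute-force min of x'Qx over binary x subject to A x >= b + theta * b'."""
    n = len(Q)
    best = None
    for x in product((0, 1), repeat=n):
        if all(sum(A[i][j] * x[j] for j in range(n)) >= b[i] + theta * b_prime[i]
               for i in range(len(b))):
            v = sum(Q[i][j] * x[i] * x[j] for i in range(n) for j in range(n))
            if best is None or v < best:
                best = v
    return best

Q = [[2, 1], [1, 3]]                  # symmetric, as the formulation requires
A, b, b_prime = [[1, 1]], [1], [1]
print(qip_min(Q, A, b, b_prime, 0), qip_min(Q, A, b, b_prime, 1))  # 2 7
```

At θ = 0 the constraint x1 + x2 ≥ 1 admits the cheap point (1, 0); at θ = 1 only (1, 1) survives, so z*(0) ≤ z*(1) as the relaxation ordering predicts.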

In a 1981 paper [14] Cooper considers a RHS parametric analysis on a general IP problem with a general non-linear objective and a general non-linear constraint set, but once again the parametric analysis has a RHS of the form b + θb′ with the elements of b′ all non-negative, so that the problems are successive relaxations of one another. Cooper examines a family of problems {Pθk}, with each problem having θ augmented by a constant amount from the previous value of θ. She uses a dynamic programming algorithm to solve each point-value member of the family, often being able to use information from the previous problem to save computation in the current problem.

8. Applications of parametric integer programming

Early papers [10,17,20] indicate that the importance of sensitivity analysis on IP and MIP models was recognized soon after the first attempts to solve them, but there is still a dearth of published reports on using parametric methods. In view of the theoretical and practical difficulties this is perhaps not surprising, and until this decade usually the best that one saw was an analysis of a number of different scenarios, omitting the crucial managerial information of the range of validity of each scenario's solution, and of how solutions with parameters on the continuum between different scenarios might be similar to or different from the solutions at the extremes of the continuum.

Probably the first published work that reported integer programming parametric analyses without calling it that was by Manne [34] in 1967. The problem was a binary MIP where each binary variable represented a particular sequence of building plants to supply fertilizer in India, and the continuous variables represented the quantities supplied. The author analysed the effect of a systematic variation in one of the objective function coefficients and solved a number of point-value cases. However, this information was then translated into a continuous range defining, in the notation of this paper, z*(φ) and x*(φ) for 0 ≤ φ ≤ 1. The graphed results were the first illustration of the concave form of z*(φ), though neither this nor its theoretical foundation was explicitly mentioned in the paper.

In a 1974 paper [18] Geoffrion and Graves consider a large binary MIP of distributing a number of foodstuffs from plants and warehouses to a number of customers. The model was extremely large - 11,854 rows, 727 binary variables and 25,513 continuous variables - and was solved successfully using Benders decomposition. Much of the paper discusses details of the computer implementation, but the authors also mention the need for several computer runs for sensitivity analysis and continuity analysis. This was achieved by separate runs with small changes in the data, the authors seeking symptoms of major discontinuities in the results that might arise from RHS changes. It happened that with the data of the particular case discussed, they found no sudden changes in the optimal choice of facilities.

Eisner and Severance [15] considered storing different segments of database records on fast and slow access devices, with a trade-off between cost of storage and speed of access. Because of the difficulty of evaluating a trade-off ratio, φ in our notation, they considered all values of φ over a wide range. They used the techniques explained in section 3, exploiting the concavity of z*(φ) to solve Pφ, but by a neat formulation of the problem were able to solve the point-value problems Pφk by a minimum-cut labelling algorithm.

A 1978 conference paper [16] by Falk, working for the International Paper Company, describes Pθ analyses on models of production planning and distribution that resemble the previously mentioned model of Geoffrion and Graves, though much smaller in size. The Pθ analysis is performed using a crude version of the methodology described in section 3. The author used MPSX/370 [26] to solve point-value MIPs in the analyses.

A 1981 conference paper [1] reported work on an MIP model of selecting equipment, specifically metal-working presses, for a plant with a prescribed level of activity in manufacturing agricultural machinery. The authors performed a Pθ analysis to see how the solution would change as the prescribed level of activity changed. For the data considered b′ > 0, and the parametric analysis was performed using the method in section 3, involving some data-dependent reasoning that made the methodology immediately applicable to the MIP.

The first of the papers of Jenkins [27] mentioned in the previous section was once again a binary MIP for a plant location problem, in that case for plants reclaiming resources from municipal solid waste. Because of the overall uncertainty in market demand for the recovered products, as well as uncertainty in capital and operating costs of the proposed plants, a number of parametric analyses on both objective function and RHS coefficients were judged essential. After working through many different analyses, Jenkins was able to make the valuable observation that in different parametric analyses the same plants kept appearing: in most of the solutions with just one plant, the plant was almost always the same, and for solutions with two plants it was commonly the same two plants, one of which was the same as in the single-plant solutions. This had the delightful result of allowing a number of diverse parametric analyses to be distilled into an identification of "robust" plants - if plants were to be built at all, it was obvious which plant should be built first, which plant should be built second, and so on.

In the later paper [28] of parametric analyses on an IP model of aeroplane acquisition for a fleet, Jenkins observed a similar occurrence of robust solutions. In that analysis the solution found with best-estimate point-value data was definitely not robust, for slight perturbations in the data made radically different solutions optimal, and after a variety of parametric analyses five different fleet mixes appeared attractive. Nevertheless, close examination of the different solutions indicated a core of common plane acquisitions, so that purchase of these planes could be recommended as a first phase of implementation, to be completed afterwards by further analyses with updated data.

In a 1985 paper [49] Tayi formulates a problem of choosing one of four methods of liquefaction of coal in order to minimize the total sulphur dioxide emission arising in the conversion process and the combustion of the final products. The resulting formulation is one of minimizing a continuous concave function over a continuous linear constraint set. The continuous objective function is approximated as piecewise linear and thereby converted to a composite of linear functions and binary variables, of which only one, pertaining to one of the linear portions of the objective, can be positive. The author uses a custom-built algorithm and solves a number of problems with different RHS, taking advantage of the {Pθk} being relaxations of one another.

A 1987 paper [51] by Turgeon considers Pθ analysis on a binary MIP for selecting among possible hydro-electric developments on a river in northern Quebec. A binary variable represents whether or not a dam will be built at a particular site, or whether or not a power plant will be built at a particular site. The continuous variables represent different sizes for the dams or plants. The model contains a number of technical constraints which are approximated by linear inequalities. The author wrote his own branch-and-bound master routine using MPSX/370 as a subroutine for solving individual LP problems. Working with a model with 7 binary variables, 5580 continuous variables and 2608 constraints, the author performed a number of continuous RHS parametric analyses to examine different electricity demands. The technique used for solving Pθ imitated that used by Jenkins, explained here in section 7, employing the heuristic mentioned there that if two adjacent Pθk and Pθk+1 for an MIP have the same optimal values for the binary variables, these values are assumed optimal for all θk ≤ θ ≤ θk+1.

9. Conclusions

Before attempting to draw any conclusions about the current state and possible trends in parametric integer programming, it is appropriate to delineate again the material that this paper has reviewed. The emphasis has been on wide-ranging parametric analysis and algorithmic methods to that end. Because of this bias the area of integer programming duality theory has been omitted. The single paper that attempts to derive practical algorithmic methods from duality theory was published in 1985 by Schrage and Wolsey [47], though they tackle only rather specialized sensitivity questions and the limited scope of their results suggests that progress in this area will be slow. Nevertheless, this author believes that incorporating integer programming duality theory into practical algorithms for solving point-value or parametric integer programs would be a worthwhile direction for research. Also this paper has not mentioned the special techniques of Lagrangian relaxation for solving IP and MIP problems. The technique "dualizes" some of the constraints into the objective function and the resulting Lagrange multipliers computed in the solution do indeed yield some sensitivity information. Once again, however, this information is of little value in leading to a wide-ranging parametric analysis.

A recent bibliography [52] lists papers that consider parametric combinatorial algorithms in a wider sense than this paper, including linear programming and also integer programs with special structure that permits their solution by simple (usually labelling) algorithms.

Another area that has not been discussed and has some relation to parametric integer programming is that of multi-objective integer programming. For a recent review see [50]. Currently there are many more publications in that area than in parametric integer programming. Interestingly the researchers in that area have not availed themselves of techniques from parametric integer programming, though the techniques used in multi-objective linear programming often derive from parametric linear programming.

So what is the current state of parametric integer programming? On the theoretical front there are no major developments beyond those well summarized in the paper of Geoffrion and Nauss [19], written over 10 years ago. The development that has taken place is a popularization among researchers of the fundamental results, and employment of these in algorithmic methods and in specific applications. Slowly it is being recognized that parametric integer programming is a valuable technique that can be applied and, especially with the plummeting cost of computing, is a cost-effective tool of analysis.

Two basic ideas from section 3, namely (a) solving Pφ by exploiting the concavity of z*(φ), and (b) solving Pθ or Pλ in those cases where b′ and A′ have all non-negative (or all non-positive) elements by solving a family of point-value IPs, wait to be included in the material of standard textbooks on mathematical programming and operations research.

References

[1] D.W. Ashley and R.L. Brunner, Capital equipment selection with mixed integer programming, presented at CORS/TIMS/ORSA Joint Meeting, Toronto (1981).

[2] M.G. Bailey and B.E. Gillett, Parametric integer programming analysis: a contraction approach, J. Oper. Res. Soc. 31 (1980) 257-262.

[3] E. Balas, An additive algorithm for solving linear programs with zero-one variables, Oper. Res. 13 (1965) 517-546.

[4] B. Bank and R. Mandel, Parametric integer optimization, Math. Res. (Berlin) 39 (1988).

[5] C.E. Blair and R.G. Jeroslow, The value function of a mixed integer program: I, Discr. Math. 19 (1977) 121-138.

[6] C.E. Blair and R.G. Jeroslow, The value function of a mixed integer program: II, Discr. Math. 25 (1979) 7-19.

[7] C.E. Blair and R.G. Jeroslow, The value function of an integer program, Math. Programming 23 (1982) 237-273.

[8] C.E. Blair and R.G. Jeroslow, Constructive characterizations of the value-function of a mixed-integer program I, Discr. Appl. Math. 9 (1984) 217-233.

[9] C.E. Blair and R.G. Jeroslow, Constructive characterizations of the value function of a mixed-integer program II, Discr. Appl. Math. 10 (1985) 227-240.

[10] V.J. Bowman, Jr., Sensitivity analysis in linear integer programming, AIIE Technical Papers (1972).

[11] P.J. Carstensen, Complexity of some parametric integer and network programming problems, Math. Programming 26 (1983) 64-75.

[12] Control Data Corporation, APEX-III Reference Manual, CDC Data Services Publications, Minneapolis, MN (1975).

[13] W. Cook, A.M.H. Gerards, A. Schrijver and E. Tardos, Sensitivity theorems in integer linear programming, Math. Programming 34 (1986) 251-264.

[14] M.W. Cooper, Postoptimality analysis in nonlinear integer programming: the right-hand side case, Naval Res. Logist. Quarterly 28 (1981) 301-307.

[15] M.J. Eisner and D.J. Severance, Mathematical techniques for efficient record segmentation in large shared databases, J. ACM 23 (1976) 619-635.

[16] P.G. Falk, Experiments in mixed integer linear programming concurrent to a manufacturing hierarchy, presented at TIMS/ORSA Joint Meeting, New York (1978).

[17] C.R. Frank, Jr., Parametric programming in integers, Oper. Res. Verfahren 3 (1967) 167-180.

[18] A.M. Geoffrion and G.W. Graves, Multicommodity distribution system design by Benders decomposition, Management Sci. 20 (1974) 822-844.

[19] A.M. Geoffrion and R. Nauss, Parametric and postoptimality analysis in integer linear programming, Management Sci. 23 (1977) 453-466.

[20] R.E. Gomory and W.J. Baumol, Integer programming and pricing, Econometrica 28 (1960) 521-550.

[21] D. Gusfield, Parametric combinatorial computing and a problem of program module distribution, J. ACM 30 (1983) 551-563.

[22] R.S. Hiller, C.A. Holmes, T.M. Magee and J.F. Shapiro, Constructive duality for mixed integer programming: Part I - Theory, Working Paper No. OR 147-86, Massachusetts Institute of Technology (1987).

[23] F.S. Hillier and G.J. Lieberman, Introduction to Operations Research, 3rd ed. (Holden-Day, San Francisco, CA, 1980).

[24] S. Holm and D. Klein, Discrete right hand side parameterization for linear integer programs, Europ. J. Oper. Res. 2 (1978) 50-53.

[25] S. Holm and D. Klein, Three methods for postoptimal analysis in integer linear programming, Math. Programming Study 21 (1984) 97-109.

[26] International Business Machines Corporation, Mathematical Programming System Extended/370 (MPSX/370), Basic Reference Manual, IBM Technical Publications Department, White Plains, NY (1976).

[27] L. Jenkins, Parametric mixed integer programming: an application to solid waste management, Management Sci. 28 (1982) 1270-1284.

[28] L. Jenkins, Using parametric integer programming to plan the mix of an air transport fleet, INFOR 25 (1987) 117-135.

[29] L. Jenkins and D. Peters, A computational comparison of Gomory and knapsack cuts, Comput. Oper. Res. 14 (1987) 449-456.

[30] L. Jenkins, Parametric-objective integer programming using knapsack facets and Gomory cutting planes, Europ. J. Oper. Res. 31 (1987) 102-109.

[31] S. Kim and S. Cho, A shadow price in integer programming for management decision, Europ. J. Oper. Res. 37 (1988) 328-335.

[32] D. Klein and S. Holm, Integer programming post-optimal analysis with cutting planes, Management Sci. 25 (1979) 64-72.

[33] E. Loukakis and A.P. Muhlemann, Parameterisation algorithms for the integer linear programs in binary variables, Europ. J. Oper. Res. 17 (1984) 104-115.

[34] A.S. Manne, Two producing areas - constant cycle time policies, in: Investments for Capacity Expansion, ed. A.S. Manne (George Allen and Unwin Ltd., London, 1967).

[35] R.E. Marsten and T.L. Morin, Parametric integer programming: the right-hand side case, Ann. Discr. Math. 1 (1977) 375-390.

[36] R.D. McBride and J.S. Yormark, Finding all solutions for a class of parametric quadratic integer programming problems, Management Sci. 26 (1980) 784-795.

[37] R.M. Nauss, Parametric integer programming, Ph.D. Dissertation, University of California at Los Angeles (1974); subsequently published by University of Missouri Press, Columbia, MO (1979).

[38] H. Noltemeier, Sensitivitätsanalyse bei diskreten linearen Optimierungsproblemen, Lecture Notes in Operations Research and Mathematical Systems, vol. 30 (Springer, Berlin and New York, 1970).

[39] Y. Ohtake and N. Nishida, A branch-and-bound algorithm for 0-1 parametric mixed integer programming, Oper. Res. Lett. 4 (1985) 41-45.

[40] C.J. Piper and A.A. Zoltners, Implicit enumeration based algorithms for postoptimizing zero-one programs, Naval Res. Logist. Quarterly 22 (1975) 791-809.

[41] C.J. Piper and A.A. Zoltners, Some easy postoptimality analysis for zero-one programming, Management Sci. 22 (1976) 759-765.

[42] M.A. Radke, Sensitivity analysis in discrete optimization, Ph.D. Dissertation, available as Working Paper No. 240, Western Management Science Institute, UCLA (1975).

[43] G.M. Roodman, Postoptimality analysis in zero-one programming by implicit enumeration, Naval Res. Logist. Quarterly 19 (1972) 435-447.

[44] G.M. Roodman, Postoptimality analysis in integer programming by implicit enumeration: the mixed integer case, Naval Res. Logist. Quarterly 21 (1974) 595-607.

[45] S.L.K. Rountree and B.E. Gillett, Parametric integer linear programming: a synthesis of branch and bound with cutting planes, Europ. J. Oper. Res. 10 (1982) 183-189.

[46] L. Schrage, User's Manual for LINDO (Scientific Press, Palo Alto, CA, 1981).

[47] L. Schrage and L. Wolsey, Sensitivity analysis for branch and bound integer programming, Oper. Res. 33 (1985) 1008-1023.

[48] A. Schrijver, Theory of Linear and Integer Programming (Wiley, New York, 1986).

[49] G.K. Tayi, Sensitivity analysis of mixed integer programs: an application to environmental policy making, Europ. J. Oper. Res. 22 (1985) 224-233.

[50] J. Teghem Jr., A survey of techniques for finding efficient solutions to multi-objective integer linear programming, Asia-Pacific J. Oper. Res. 3 (1986) 95-108.

[51] A. Turgeon, An application of parametric mixed-integer linear programming to hydropower development, Water Resources Res. 23 (1987) 399-407.

[52] C.P.M. Van Hoesel, A.W.J. Kolen, A.H.G. Rinnooy Kan and A.P.M. Wagelmans, Sensitivity analysis in combinatorial optimization: a bibliography, Report 8944/A, Erasmus University, Rotterdam (1989).

[53] G.R. Wilson and H.K. Jain, An approach to postoptimality and sensitivity analysis of zero-one goal programs, Naval Res. Logist. Quarterly 35 (1988) 73-84.

[54] L.A. Wolsey, Integer programming duality: price functions and sensitivity analysis, Math. Programming 20 (1981) 173-195.

[55] L.A. Wolsey, The b-hull of an integer program, Discr. Appl. Math. 3 (1981) 193-201.