
Combinatorial Optimization

Lecture notes, WS 2010/11, TU Munich

Prof. Dr. Raymond Hemmecke

Version of February 9, 2011


Contents

1 The knapsack problem
1.1 Complete enumeration
1.2 Dynamic programming approach
1.3 FPTAS for the knapsack problem
1.4 Related problems

2 Main solution ideas for ILPs
2.1 General remarks
2.2 Cutting plane methods
2.3 Branch-and-bound algorithm
2.4 Branch-and-cut algorithm
2.5 Primal methods
2.6 Goals for the rest of the course

3 Methods to strengthen an IP formulation
3.1 Repetition: Concepts from polyhedral geometry
3.2 Methods to generate valid inequalities
3.2.1 Rounding
3.2.2 Superadditivity
3.2.3 Modular arithmetic
3.2.4 Disjunctions
3.3 Methods to generate facet defining inequalities
3.3.1 Using the definition
3.3.2 Lifting a valid inequality
3.4 Lift-and-Project


4 Ellipsoid Method and Separation
4.1 Ellipsoid Method
4.2 Separation

5 Branching strategies
5.1 Selection of a node
5.2 Branching rules

6 Column generation
6.1 The Cutting Stock Problem (CSP)
6.1.1 A first IP formulation
6.1.2 Second IP formulation
6.2 Column generation
6.3 Column generation for CSP

7 Benders decomposition
7.1 Formal derivation
7.2 Solution via Benders decomposition
7.3 Stochastic Programming

8 Lagrange relaxation
8.1 Solving the Lagrange dual


Chapter 1

The knapsack problem

We are given n items having positive weights a1, . . . , an and positive values c1, . . . , cn. Moreover,

we are given a knapsack and a bound b on the weight that can be carried in the knapsack. We wish

to put into the knapsack a selection of items of maximum total value. To avoid trivialities, we will

assume that ai ≤ b for all i, as otherwise the i-th item could not be carried in the knapsack and

we could discard it from the problem.

To model the knapsack problem, we introduce binary variables xi, i = 1, . . . , n. xi shall be equal to

1 if item i is chosen, and 0 otherwise. Then our knapsack problem can be modeled as the following

integer program:

max { ∑_{i=1}^n c_i x_i : ∑_{i=1}^n a_i x_i ≤ b, x_i ∈ {0, 1} ∀ i }.

This is already an NP-hard optimization problem! Therefore, unless P=NP, we can at most hope

for a good approximation algorithm. Below we will encounter a best possible such approximation algorithm, a so-called FPTAS (short-hand for “fully polynomial-time approximation

scheme”). First, let us deal with exact solution methods that find an optimal selection of items.

1.1 Complete enumeration

Probably the simplest, but most naive, method is to enumerate all 2^n possible selections of items,

check for each whether it fits into the knapsack and compute the largest total value among those

selections that fit into the knapsack.

Clearly, from a complexity point of view, this complete enumeration is not a good idea and should

be avoided.
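To make the 2^n enumeration concrete, here is a small Python sketch (our own illustration, not part of the original notes); the instance data are made up.

```python
from itertools import product

def knapsack_enumeration(a, c, b):
    """Try all 2^n item selections and keep the best one that fits."""
    n = len(a)
    best_value, best_x = 0, [0] * n
    for x in product([0, 1], repeat=n):
        weight = sum(a[i] * x[i] for i in range(n))
        value = sum(c[i] * x[i] for i in range(n))
        if weight <= b and value > best_value:
            best_value, best_x = value, list(x)
    return best_value, best_x

# small made-up instance: weights a, values c, capacity b
print(knapsack_enumeration([3, 4, 5, 2], [4, 5, 6, 3], 9))
```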


1.2 Dynamic programming approach

Let us explain the idea behind the dynamic programming approach by solving our knapsack problem

max { ∑_{i=1}^n c_i x_i : ∑_{i=1}^n a_i x_i ≤ b, x_i ∈ {0, 1} ∀ i }.

Instead of choosing all items at once, we assume that we choose the items one at a time, starting

with item 1, then item 2, and so on. After j decisions, we have determined the values of x1, . . . , xj .

At this point, the accumulated value is ∑_{i=1}^j c_i x_i and the accumulated weight is ∑_{i=1}^j a_i x_i.

Let Cj(u) be the least possible weight that has to be accumulated in order to attain a total value

of exactly u using only (some or all of) the first j items. If it is not possible to attain a total value

of u using the first j items, we put Cj(u) = +∞. Moreover, we put C0(0) = 0 and C0(u) = +∞ for u ≠ 0. The crucial observation now is that we have the following simple recursion formula

Cj+1(u) = min {Cj(u), Cj(u− cj+1) + aj+1} .

In words: if we wish to accumulate a total value of u with least weight using only items from

{1, . . . , j + 1}, we either do not choose item j + 1 and have smallest possible weight Cj(u) or we

do choose item j + 1 and have smallest possible weight Cj(u− cj+1) + aj+1. For both subcases we

already use known best solutions. It is this step where we gain a big advantage in running time

over the complete enumeration!

Note: Our knapsack problem is equivalent to finding the largest u for which Cn(u) ≤ b.

What is the largest u for which we have to compute Cn(u)? Clearly, if cmax := max_{i=1,…,n} c_i, then ∑_{i=1}^n c_i x_i ≤ n·cmax. Thus, we only have to compute the values Cn(u) for u = 0, 1, . . . , n·cmax to solve our knapsack problem.

In order to do so, we first compute C1(0), . . . , Cn(0) (well, they are all 0), then C1(1), . . . , Cn(1),

then C1(2), . . . , Cn(2), . . . , and finally C1(ncmax), . . . , Cn(ncmax). These values can be arranged

into a two-dimensional n × ncmax array. In order to obtain the actual values x1, . . . , xn of an

optimal solution, we only have to back-track the computation of Cn(u) for the optimal value u.

Let us have a look at the running time of this dynamic programming approach. If amax := max_{i=1,…,n} a_i, then the computation of Cj+1(u) from Cj(u) and Cj(u − cj+1) + aj+1 takes O(n(〈amax〉 + 〈cmax〉)) binary operations. Thus, in total, we have a running time of O(n³cmax(〈amax〉 + 〈cmax〉)) for this

dynamic programming approach. Although cmax is typically exponential in the encoding length of

the input data (Exercise: What is the encoding length of a knapsack instance?), we can clearly see

that the dynamic programming approach outperforms the complete enumeration of 2^n potential

solutions.
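A minimal Python sketch of this dynamic program (our own illustration; the table entries are exactly the values Cj(u) defined above, and the instance data are made up):

```python
import math

def knapsack_dp(a, c, b):
    """Dynamic program over accumulated values: C[j][u] = least weight
    needed to reach total value exactly u using items 1..j."""
    n = len(a)
    U = n * max(c)                      # largest value that ever needs to be considered
    INF = math.inf
    C = [[INF] * (U + 1) for _ in range(n + 1)]
    C[0][0] = 0
    for j in range(1, n + 1):
        for u in range(U + 1):
            C[j][u] = C[j - 1][u]       # do not take item j
            if u >= c[j - 1] and C[j - 1][u - c[j - 1]] + a[j - 1] < C[j][u]:
                C[j][u] = C[j - 1][u - c[j - 1]] + a[j - 1]   # take item j
    # knapsack optimum = largest u with C[n][u] <= b
    return max(u for u in range(U + 1) if C[n][u] <= b)

print(knapsack_dp([3, 4, 5, 2], [4, 5, 6, 3], 9))   # same made-up instance as before
```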

Exercise: A somewhat more natural dynamic programming algorithm for the solution of the

knapsack problem could be obtained by computing values Vj(w), encoding the maximum possible

accumulated value using some of the first j items subject to the constraint that their accumulated

weight is exactly w.


1.3 FPTAS for the knapsack problem

Definition 1.3.1 Consider an optimization problem max {f(x) : x ∈ X} and denote by x∗ an

optimal feasible solution.

(a) An algorithm A is an ε-approximation algorithm if it computes a feasible solution x0 ∈ X with f(x0) ≥ (1 − ε)f(x∗).

(b) A family of algorithms Aε is a polynomial time approximation scheme (PTAS) if for

every error parameter ε > 0, Aε is an ε-approximation algorithm and its running time is

polynomial in the encoding length of the instance for fixed ε.

(c) If Aε is an approximation scheme and its running time is polynomial in the encoding length

of the instance and in 1/ε, then it is a fully polynomial time approximation scheme

(FPTAS).

Let us now state an FPTAS for the solution of the knapsack problem. Be reminded that the

running time of the dynamic programming approach above is O(n³cmax(〈amax〉 + 〈cmax〉)). Thus, if

we reduce cmax, the resulting instance is faster to solve. For example, instead of solving a knapsack

problem with objective function 94x1+72x2+104x3 we could slightly change the objective function

to 90x1 + 70x2 + 100x3 and hope that the optimal cost does not change too much. The latter

objective, however, is equivalent to 9x1 + 7x2 + 10x3, leading to a knapsack that can be solved 10

times faster!

FPTAS for the knapsack problem

Input: vectors a, c ∈ Zn+, a scalar b ∈ Z+, an error parameter ε > 0

Output: a solution x0 to max{cᵀx : aᵀx ≤ b,x ∈ {0, 1}n} with cᵀx0 ≥ (1− ε)cᵀx∗

1. If n/cmax > ε, then solve the knapsack problem exactly using the dynamic programming

algorithm above.

2. (Rounding) If n/cmax ≤ ε, let

t = ⌊log₁₀(ε·cmax/n)⌋,   c̄i = ⌊ci/10^t⌋, i = 1, . . . , n.

Solve the knapsack problem max{c̄ᵀx : aᵀx ≤ b, x ∈ {0, 1}^n} exactly using the dynamic programming algorithm above. Let x0 be the optimal solution found.

3. Return x0.
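The following Python sketch mirrors the scheme above (our own illustration; for brevity the exact solver is a brute-force routine standing in for the dynamic program of Section 1.2, and the instance data are made up):

```python
import math
from itertools import product

def exact_knapsack(a, c, b):
    """Exact solver returning an optimal 0/1 vector (brute force for brevity;
    the notes would use the dynamic program of Section 1.2 here)."""
    n = len(a)
    return max((x for x in product([0, 1], repeat=n)
                if sum(a[i] * x[i] for i in range(n)) <= b),
               key=lambda x: sum(c[i] * x[i] for i in range(n)))

def knapsack_fptas(a, c, b, eps):
    """Sketch of the FPTAS: round profits down to multiples of 10^t, then solve exactly."""
    n, cmax = len(c), max(c)
    if n / cmax > eps:                        # step 1: solve the original instance exactly
        return exact_knapsack(a, c, b)
    t = math.floor(math.log10(eps * cmax / n))
    c_bar = [ci // 10 ** t for ci in c]       # step 2: rounded profits
    return exact_knapsack(a, c_bar, b)        # optimal for the rounded objective

x0 = knapsack_fptas([30, 42, 51, 20], [94, 72, 104, 30], 90, eps=0.5)
print(x0, sum(ci * xi for ci, xi in zip([94, 72, 104, 30], x0)))
```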

The following theorem shows that this is indeed an FPTAS for the knapsack problem.


Theorem 1.3.2 This algorithm finds a feasible solution x0 to max{cᵀx : aᵀx ≤ b, x ∈ {0, 1}^n} with cᵀx0 ≥ (1 − ε)cᵀx∗ in O(n⁴(〈amax〉 + 〈cmax〉)/ε) steps.

Proof. If n/cmax > ε, we solve the problem exactly, that is, the approximation error in this case

is 0. The running time is O(n³cmax(〈amax〉 + 〈cmax〉)) = O(n⁴(〈amax〉 + 〈cmax〉)/ε).

If n/cmax ≤ ε, then t satisfies

ε/10 < n·10^t/cmax ≤ ε. (1.1)

Thus, we get

c̄max ≤ cmax/10^t ≤ 10n/ε,

and the running time is O(n³c̄max(〈amax〉 + 〈cmax〉)) = O(n⁴(〈amax〉 + 〈cmax〉)/ε).

Let us now compute the approximation error. Let S (respectively S̄) be a set of items chosen by some optimal solution for max{cᵀx : aᵀx ≤ b, x ∈ {0, 1}^n} (respectively for max{c̄ᵀx : aᵀx ≤ b, x ∈ {0, 1}^n}). Then we have

∑_{i∈S} ci ≥ ∑_{i∈S̄} ci ≥ 10^t ∑_{i∈S̄} c̄i ≥ 10^t ∑_{i∈S} c̄i ≥ ∑_{i∈S} (ci − 10^t) ≥ ∑_{i∈S} ci − n·10^t. (1.2)

The first inequality holds because of optimality of S for the original problem. The second inequality holds, since 10^t·c̄i ≤ ci for all i. The third inequality holds because of optimality of S̄ for the modified problem. The fourth inequality holds, since c̄i = ⌊ci/10^t⌋ ≥ ci/10^t − 1 and thus 10^t·c̄i ≥ ci − 10^t. Finally, the last inequality holds, since |S| ≤ n.

Thus, we have

(cᵀx∗ − cᵀx0)/cᵀx∗ = (∑_{i∈S} ci − ∑_{i∈S̄} ci) / ∑_{i∈S} ci ≤ n·10^t/cmax ≤ ε. (1.3)

The first inequality follows from Equation (1.2) and from the fact that ai ≤ b for all i and hence ∑_{i∈S} ci ≥ cmax. The second inequality follows from Equation (1.1).

Rewriting Equation (1.3), we get

cᵀx0 ≥ (1− ε)cᵀx∗,

and the approximation error is sufficiently small. □

1.4 Related problems

The knapsack problem is the simplest discrete optimization problem (optimization over lattice points in a simplex), but it is already NP-hard. It relates to other interesting problems that we at least wish to mention here.


The Frobenius problem

Frobenius problem. Let a1, . . . , an ∈ Z+ with gcd(a1, . . . , an) = 1. What is the largest number

b = f(a1, . . . , an) ∈ Z+ such that the equation

a1x1 + . . .+ anxn = b

has no solution x1, . . . , xn ∈ Z+? This number f(a1, . . . , an) is called the Frobenius number of

a1, . . . , an.

The Frobenius problem is also known as the Coin exchange problem: If a1, . . . , an ∈ Z+ denote the

values of coins, what is the largest amount that cannot be changed with these coins?

Exercise. Why is f(a1, . . . , an) <∞ whenever gcd(a1, . . . , an) = 1?

The Frobenius number is NP-hard to compute. However, one can again design a dynamic programming framework to compute it. From a theoretical point of view, it is interesting to find explicit

formulas for the Frobenius number. For n = 2, there exists such a closed-form expression:

Lemma 1.4.1 (Nijenhuis and Wilf, 1972) For a1, a2 ∈ Z+ with gcd(a1, a2) = 1 we have

f(a1, a2) = (a1 − 1)(a2 − 1)− 1 = a1a2 − (a1 + a2).

There is no known closed-form solution for n = 3, although a semi-explicit solution is known which

allows values to be computed quickly.
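As an illustration of the dynamic-programming remark (our own sketch, not part of the notes), the Frobenius number can be computed by a shortest-path computation on the residues modulo the smallest ai; for n = 2 this reproduces the formula of Lemma 1.4.1.

```python
import heapq
from math import gcd
from functools import reduce

def frobenius_number(a):
    """Frobenius number via shortest paths on residues modulo the smallest a_i:
    dist[r] = smallest representable amount congruent to r (mod a1);
    then f = max_r dist[r] - a1.  Assumes gcd(a_1,...,a_n) = 1."""
    a = sorted(a)
    assert reduce(gcd, a) == 1
    a1 = a[0]
    dist = [float('inf')] * a1
    dist[0] = 0
    heap = [(0, 0)]
    while heap:
        d, r = heapq.heappop(heap)
        if d > dist[r]:
            continue
        for ai in a[1:]:
            nd, nr = d + ai, (r + ai) % a1
            if nd < dist[nr]:
                dist[nr] = nd
                heapq.heappush(heap, (nd, nr))
    return max(dist) - a1

print(frobenius_number([17, 15]))      # Lemma 1.4.1: 17*15 - 17 - 15 = 223
print(frobenius_number([6, 10, 15]))   # no closed formula for n = 3; the DP gives 29
```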

The feasibility problem

One fundamental problem in integer programming is the question whether the set of feasible

solutions {x ∈ Zn : Ax ≤ b} or {x ∈ Zn : Ax = b,x ≥ 0} is empty or not.

Exercise. In fact, integer linear programs could be solved by polynomially many calls to a feasibility oracle, which tells you for any given polytope whether this polytope contains a lattice point or

not. How? (“Polynomially many”: polynomially bounded from above in the encoding length of the

input data A and b.)

As the knapsack problem is the simplest (non-trivial) discrete optimization problem, it is pretty

well studied. To study the general situation, we see that {x ∈ Zn : Ax = b,x ≥ 0} is not

empty if and only if b ∈ monoid(a1, . . . ,an), where ai denotes the i-th column of A. A necessary

condition for b ∈ monoid(a1, . . . ,an) is b ∈ cone(a1, . . . ,an)∩Zn, which can be checked via linear

programming.

One interesting problem is the question whether monoid(a1, . . . ,an) = cone(a1, . . . ,an)∩Zn, that

is, whether a1, . . . ,an form a Hilbert basis for the cone they generate. In other words: is there some

b ∈ Zn that can be represented as a nonnegative real linear combination but not as a nonnegative

integer linear combination of a1, . . . , an? Clearly, this is an NP-hard problem. From a practical

perspective, however, it would be great to model this problem as some feasibility or linear/nonlinear

optimization problem (that may have good practical algorithms and implementations for their

solution).


Chapter 2

Main solution ideas for ILPs

2.1 General remarks

For any polytope P we denote by PI the convex hull of all lattice points in P , that is,

PI = conv(P ∩ Zn).

If we knew a linear description of PI , we could apply any method from linear programming to

solve the IP

max{cᵀx : x ∈ P ∩ Zn} = max{cᵀx : x ∈ PI}.

The better P approximates PI , the better the LP relaxation max{cᵀx : x ∈ P} approximates the

true IP optimum. Thus, we will be interested in strengthening the inequalities of P (so that we

get a smaller polytope still containing PI) and in adding more inequalities (cutting off some parts

of P that do not belong to PI).

2.2 Cutting plane methods

The main idea behind cutting plane methods is the following:

1. Relax the problem and solve the LP relaxation max{cᵀx : x ∈ P}. → x∗ optimal solution

2. If LP relaxation is infeasible, STOP: IP is infeasible.

3. If x∗ ∈ Zn, we have found an optimal IP solution. → STOP: optimal IP solution found

4. If x∗ ∉ Zn, find an inequality aᵀx ≤ α that cuts off x∗. Let P := P ∩ {x : aᵀx ≤ α} and go

to 1.


[Figure: LP optima x0, x1, x2 produced by the simplex algorithm in successive cutting plane iterations, starting from xstart.]
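In pseudocode-style Python, the loop reads as follows (a sketch; solve_lp and find_cut are placeholder names for an LP solver and a cut-separation routine, not routines defined in the notes):

```python
def cutting_plane(solve_lp, find_cut, constraints, c):
    """Generic cutting plane loop (sketch).  solve_lp returns an optimal vertex
    of max{c^T x : x satisfies constraints} or None if infeasible; find_cut
    returns a valid inequality (a, alpha) violated by x_star, or None if x_star is integral."""
    while True:
        x_star = solve_lp(constraints, c)
        if x_star is None:
            return None                      # LP relaxation infeasible -> IP infeasible
        cut = find_cut(x_star, constraints)
        if cut is None:
            return x_star                    # x_star integral: optimal IP solution found
        constraints.append(cut)              # add the cutting plane and resolve
```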

2.3 Branch-and-bound algorithm

The main ideas behind the branch-and-bound algorithm are the following.

Branching

In a branch-and-bound framework, the original IP is iteratively split into smaller and smaller subproblems in some way. For example, one may consider a relaxed problem by dropping integrality

or nonnegativity constraints. Then, an optimal solution to this relaxed problem is used to split the

given IP into two or more smaller IPs which are then solved individually. In the worst case, this

iterative construction leads to a complete enumeration of all lattice points in the original polyhedron. Another way to split the original problem, even without considering any relaxed problem, is

to split it according to xi = 0, 1, . . . , ui.


Bounding

To prune this tree of IP subproblems, one can use the following observations.

• If a subproblem is infeasible, it can be dropped.

• If the optimal solution of a subproblem is integer (or nonnegative or ...), we have found a

feasible solution to our original problem. The subproblem need not be split any further and

can be removed, as it has been solved to optimality.


• If the optimal value of (the relaxation of) a subproblem is equal to or worse than the currently

best objective value of a feasible solution found so far, this subproblem can be removed, since

it cannot contain a better feasible solution than the currently best known solution.
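A compact sketch of the resulting loop for a maximization IP (our own illustration; solve_relaxation and branch are placeholder interfaces, not part of the notes):

```python
import math

def branch_and_bound(solve_relaxation, branch, root):
    """Sketch of branch-and-bound for a maximization IP.  solve_relaxation(p)
    returns (bound, x, is_integral) or None if p is infeasible; branch(p, x)
    returns subproblems whose feasible sets cover those of p."""
    best_value, best_x = -math.inf, None
    open_problems = [root]
    while open_problems:
        p = open_problems.pop()
        result = solve_relaxation(p)
        if result is None:
            continue                       # subproblem infeasible: drop it
        bound, x, is_integral = result
        if bound <= best_value:
            continue                       # cannot contain a better solution: prune
        if is_integral:
            best_value, best_x = bound, x  # feasible solution found, node solved to optimality
            continue
        open_problems.extend(branch(p, x)) # split into smaller subproblems
    return best_value, best_x
```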

2.4 Branch-and-cut algorithm

Branch-and-cut combines the ideas of branch-and-bound and cutting plane methods. In some or all

nodes of the branch-and-bound tree, the IP formulation is strengthened by cuts. This strengthening

is very expensive computationally. However, far fewer nodes have to be considered. Currently,

branch-and-cut is, in practice, the best approach for solving an ILP.

2.5 Primal methods

Primal methods start with a feasible solution and iteratively augment it to better and better

feasible solutions until an optimal solution is found. A prominent example is the Ford-Fulkerson algorithm for finding a maximum flow in a network.

Heuristics that try to find an augmentation step for a given feasible solution may also help any

branch-and-bound or branch-and-cut algorithm, since better feasible solutions help to prune the

search tree even further.


2.6 Goals for the rest of the course

Among others, we will deal with several aspects of the above algorithms:

• strengthening of inequalities

• finding new valid inequalities and facets for PI

• separating over exponentially many facets/cuts

• primal and dual heuristics to find good feasible solutions or bounds

• branching rules


Chapter 3

Methods to strengthen an IP formulation

3.1 Repetition: Concepts from polyhedral geometry

In this section we repeat some notions from polyhedral geometry that will be needed for the

remainder of the course.

Definition 3.1.1

• A polyhedron is the solution set to a system of finitely many linear inequalities Ax ≤ b.

• A polyhedron is called rational if it is the solution set to a system of finitely many linear

inequalities Ax ≤ b for some rational matrix A and some rational vector b.

• A polyhedral cone is the solution set to a (homogeneous) system of finitely many linear

inequalities Ax ≤ 0.

• A (polyhedral) cone is called rational if it is a rational polyhedron.

• A polytope is the convex hull of finitely many points.

Lemma 3.1.2 A set P is a polytope if and only if P is a bounded polyhedron.

Lemma 3.1.3 (Minkowski-Weyl) A set P is a rational polyhedron if and only if there exist

finite sets V ⊆ Qn and E ⊆ Qn such that P = conv(V ) + cone(E).

Definition 3.1.4 Let P = {x ∈ Rn : Ax ≤ b} be a polyhedron.

• A valid inequality for P is an inequality cᵀx ≤ γ that is satisfied by all x ∈ P .


• If cᵀx ≤ γ is valid for P , then P ∩ {x : cᵀx = γ} is called a face of P .

• For a face F of P , the set eq(F ) denotes the set of indices of inequalities Ai.x ≤ bi that are satisfied with equality on F , that is, F = P ∩ {x ∈ Rn : Aeq(F)x = beq(F)}.

• The dimension of a face F is the dimension of aff(F ) = {x ∈ Rn : Aeq(F)x = beq(F)}, that is, dim(F ) = n − rank(Aeq(F)).

• Faces of dimensions 0, 1, and dim(P )−1 are called vertices, edges, and facets, respectively.

3.2 Methods to generate valid inequalities

Let P = {x ∈ Rn+ : Ax ≤ b} with A ∈ Zd×n. We wish to generate new valid inequalities for

PI = conv(x : x ∈ P ∩ Zn). Ideally, these inequalities define facets of PI .

3.2.1 Rounding

We use the following two simple ideas:

• If f(x) ∈ Z and f(x) ≤ α, then we can strengthen the inequality to f(x) ≤ ⌊α⌋.

• If the inequalities Ax ≤ b are valid for PI and if u ≥ 0, then (uᵀA)x ≤ uᵀb is valid for PI .

With this we can produce new valid inequalities for PI as follows:

1. Choose u ∈ Rd+. We obtain the valid inequality (uᵀA)x = ∑_{j=1}^n (uᵀA.j)xj ≤ uᵀb for PI .

2. Since x ≥ 0, we can conclude that also ∑_{j=1}^n ⌊uᵀA.j⌋xj ≤ uᵀb is valid for PI .

3. Since x ∈ Zn, the inequality ∑_{j=1}^n ⌊uᵀA.j⌋xj ≤ ⌊uᵀb⌋ is valid for PI .

The valid inequality ∑_{j=1}^n ⌊uᵀA.j⌋xj ≤ ⌊uᵀb⌋ can be added to Ax ≤ b to strengthen the IP formulation. Collecting all such inequalities for varying u ≥ 0, we obtain the first Chvátal closure P1 of P :

P1 := { x ∈ Rn+ : Ax ≤ b, ∑_{j=1}^n ⌊uᵀA.j⌋xj ≤ ⌊uᵀb⌋ ∀ u ∈ Rd+ }.

It can be shown that P1 is again a polyhedron, that is, finitely many inequalities suffice to define

it. In fact, if Ax ≤ b is a TDI description for P , then Ax ≤ ⌊b⌋ is a finite description for P1. If we

recursively define

Pt+1 = (Pt)1,

one can show that there is some finite number k, the so-called Chvátal rank of P , such that

Pk = Pk+1 = PI , that is, the sequence P1, P2, . . ., always stabilizes at PI after finitely many

closure operations.
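As a tiny numerical illustration (our own example, not from the notes), the following snippet carries out the three steps above for one fixed multiplier u:

```python
import math

# System Ax <= b with x >= 0 integer (made-up example):
#   2x1 + 3x2 <= 8
#   4x1 +  x2 <= 10
A = [[2, 3], [4, 1]]
b = [8, 10]
u = [0.5, 0.25]                                    # step 1: choose u >= 0

combined = [sum(u[i] * A[i][j] for i in range(2)) for j in range(2)]
rhs = sum(u[i] * b[i] for i in range(2))
cut = [math.floor(coef) for coef in combined]      # step 2: round coefficients down
cut_rhs = math.floor(rhs)                          # step 3: round right-hand side down

print(combined, rhs)    # (u^T A) x <= u^T b : 2.0 x1 + 1.75 x2 <= 6.5
print(cut, cut_rhs)     # Chvátal-Gomory cut:  2 x1 + 1 x2   <= 6
```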


3.2.2 Superadditivity

The valid inequality ∑_{j=1}^n ⌊uᵀA.j⌋xj ≤ ⌊uᵀb⌋ can also be written in the form ∑_{j=1}^n F(A.j)xj ≤ F(b) for F(a) = ⌊uᵀa⌋. This function is a special case of the more general class of nondecreasing

superadditive functions.

Definition 3.2.1 A function F : D ⊂ Rd → R is superadditive if F (a1) + F (a2) ≤ F (a1 + a2)

for all a1,a2 ∈ D such that a1 + a2 ∈ D.

It is nondecreasing if F (a1) ≤ F (a2) whenever a1 ≤ a2, for all a1,a2 ∈ D.

Nondecreasing superadditive functions yield valid inequalities in general.

Theorem 3.2.2 If F : Rd → R is superadditive and nondecreasing with F (0) = 0, then the inequality

∑_{j=1}^n F(A.j)xj ≤ F(b)

is valid for PI , where P = {x ∈ Rn+ : Ax ≤ b}.

Proof. Let x ∈ PI . We first show by induction on xj that F (A.j)xj ≤ F (A.jxj). For xj = 0 the

inequality is clearly true. Assume that it is true for xj = k − 1. Then we obtain

F(A.j)·k = F(A.j) + F(A.j)·(k − 1) ≤ F(A.j) + F(A.j(k − 1)) ≤ F(A.j + A.j(k − 1)) = F(A.j·k),

where the first inequality uses the induction hypothesis and the second one superadditivity; this completes the induction. Thus, we have

∑_{j=1}^n F(A.j)xj ≤ ∑_{j=1}^n F(A.jxj).

By superadditivity we get

∑_{j=1}^n F(A.jxj) ≤ F(∑_{j=1}^n A.jxj) = F(Ax).

Since Ax ≤ b and since F is nondecreasing, we get

F(Ax) ≤ F(b).

Putting these inequalities together, we obtain

∑_{j=1}^n F(A.j)xj ≤ F(b). □


3.2.3 Modular arithmetic

Let us consider the polyhedron

P = { x ∈ Rn+ : ∑_{j=1}^n ajxj = a0 }

for given aj ∈ Z, j = 0, 1, . . . , n. Moreover, let k ∈ Z+. We write aj = bj + ujk, where bj (0 ≤ bj < k, bj ∈ Z) is the remainder when aj is divided by k. Then all points in PI satisfy

∑_{j=1}^n bjxj = b0 + rk for some r ∈ Z.

Since ∑_{j=1}^n bjxj ≥ 0 and b0 < k, we get r ≥ 0. Thus, the inequality

∑_{j=1}^n bjxj ≥ b0

is valid for PI .

Example. Let P = { x ∈ R4+ : 27x1 + 17x2 − 64x3 + x4 = 203 } and choose k = 13. Then we obtain the valid inequality x1 + 4x2 + x3 + x4 ≥ 8 for PI . □

An important family of inequalities is obtained when k = 1 and when the aj are not all integers. In this case, since x ≥ 0, we obtain

∑_{j=1}^n ⌊aj⌋xj ≤ a0.

As x ∈ Zn, we get

∑_{j=1}^n ⌊aj⌋xj ≤ ⌊a0⌋,

and consequently,

∑_{j=1}^n (aj − ⌊aj⌋)xj ≥ a0 − ⌊a0⌋.

These inequalities are called Gomory cuts and form the basis of the Gomory cutting plane algorithm.
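For a single equation with fractional data (our own toy numbers, not from the notes), the Gomory cut can be read off directly:

```python
import math

# Equation sum_j a_j x_j = a_0 with x >= 0 integer (made-up numbers):
a = [2.75, 1.0, 0.5]      # a_1, a_2, a_3
a0 = 7.25

fractional_parts = [aj - math.floor(aj) for aj in a]
f0 = a0 - math.floor(a0)

# Gomory cut: sum_j (a_j - floor(a_j)) x_j >= a_0 - floor(a_0)
print(fractional_parts, ">=", f0)   # 0.75 x1 + 0.0 x2 + 0.5 x3 >= 0.25
```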

3.2.4 Disjunctions

The idea behind the following construction is to partition PI into two sets P1 and P2, obtain valid

inequalities for P1 and P2 and combine them to get valid inequalities for PI = P1 ∪ P2.


Lemma 3.2.3 If the inequality ∑_{j=1}^n ajxj ≤ b is valid for P1 ⊆ Rn+, and the inequality ∑_{j=1}^n cjxj ≤ d is valid for P2 ⊆ Rn+, then the inequality

∑_{j=1}^n min(aj , cj)xj ≤ max(b, d)

is valid for P1 ∪ P2.

Proof. Let x ∈ P1 ∪ P2. Without loss of generality, we may assume that x ∈ P1, and therefore ∑_{j=1}^n ajxj ≤ b. As x ≥ 0, this implies

∑_{j=1}^n min(aj , cj)xj ≤ ∑_{j=1}^n ajxj ≤ b ≤ max(b, d).

We now use this lemma to derive valid inequalities for PI , where P = {x ∈ Rn+ : Ax ≤ b}. For

this, let α ∈ Z+.

Lemma 3.2.4 If the inequality ∑_{j=1}^n ajxj − s(xk − α) ≤ a is valid for PI for some s ≥ 0, and the inequality ∑_{j=1}^n ajxj + t(xk − α − 1) ≤ a is valid for PI for some t ≥ 0, then the inequality

∑_{j=1}^n ajxj ≤ a

is valid for PI .

Proof. For xk ≤ α, we have

∑_{j=1}^n ajxj ≤ ∑_{j=1}^n ajxj − s(xk − α) ≤ a.

Hence, ∑_{j=1}^n ajxj ≤ a is valid for P1 := PI ∩ {x ∈ Rn+ : xk ≤ α}. Similarly, ∑_{j=1}^n ajxj ≤ a is valid for P2 := PI ∩ {x ∈ Rn+ : xk ≥ α + 1}. By the lemma above, ∑_{j=1}^n ajxj ≤ a is valid for PI = P1 ∪ P2. □

Example. Consider the two inequalities −x1 + 2x2 ≤ 4 and −x1 ≤ −1 and rewrite them as

(−x1 + x2) + (x2 − 3) ≤ 1 and (−x1 + x2)− (x2 − 2) ≤ 1.

Applying the previous lemma with α = 2, we obtain that the inequality −x1 + x2 ≤ 1 is valid. □


3.3 Methods to generate facet defining inequalities

3.3.1 Using the definition

Assume that we have found a valid inequality aᵀx ≤ α for PI . We would like to show that this

inequality is in fact facet defining for PI . First we compute the dimension k of PI (this is the

maximum number k such that there exist k+ 1 affinely independent points in PI). Then we try to

find k affinely independent points in PI that satisfy aᵀx = α. To show the latter, it is sufficient to show that dim({x ∈ PI : aᵀx = α}) = k − 1. Let us consider an example.

Example. (Stable set problem) Let G = (V,E) be an undirected graph with weights wi for

all i ∈ V . Then the weighted stable set problem is the problem of finding a stable set S in G of

maximum total weight. (Stable set S: no two nodes in S are connected by an edge in G.)

Let n = |V | and introduce decision variables xi with

xi = 1 if node i is selected, and xi = 0 otherwise.

The problem can then be modeled as follows:

max { ∑_{i∈V} wixi : xi + xj ≤ 1 ∀ {i, j} ∈ E, xi ∈ {0, 1} ∀ i ∈ V }.

A clique of G is a set U ⊆ V such that {i, j} ∈ E for all i, j ∈ U . Clearly, if U is a clique, the

following gives a valid inequality:

∑_{i∈U} xi ≤ 1.

We now try to find conditions under which this inequality is facet defining. Since 0 ∈ PI and since

all n unit vectors ei ∈ PI , there are n + 1 affinely independent vectors in PI . This implies that

dim(PI) = n.

A clique U is maximal if for all i ∈ V \ U , the set U ∪ {i} is not a clique. We show that ∑_{i∈U} xi ≤ 1 is facet defining if and only if U is a maximal clique.

W.l.o.g., assume that U = [k]. Then, clearly, e1, . . . , ek satisfy ∑_{i∈U} xi ≤ 1 with equality. Assume that U is a maximal clique. That is, for each i ∉ U , there is some node r(i) ∈ U such that {i, r(i)} ∉ E. Therefore, the vector x^i with the i-th and the r(i)-th coordinates equal to 1 and the rest equal to 0 is in PI and satisfies ∑_{i∈U} xi ≤ 1 with equality. The vectors e1, . . . , ek, x^{k+1}, . . . , x^n are linearly independent and thus, they are also affinely independent. Therefore, if U is a maximal clique, ∑_{i∈U} xi ≤ 1 is facet defining.

Conversely, if U is not maximal, ∑_{i∈U} xi ≤ 1 is not facet defining: Since U is not maximal, there is a node i ∉ U such that U ∪ {i} is a clique, and thus ∑_{j∈U∪{i}} xj ≤ 1 is valid for PI . Since ∑_{j∈U} xj ≤ 1 is the sum of −xi ≤ 0 and ∑_{j∈U∪{i}} xj ≤ 1, the inequality ∑_{j∈U} xj ≤ 1 cannot be facet defining. □


Remark. Similarly, consider a cycle C ⊆ E and denote by V (C) the set of nodes incident to an edge in C. For any such cycle of odd cardinality, there cannot be more than (|V (C)| − 1)/2 nodes of V (C) in a stable set. (Otherwise, two nodes in the stable set would be adjacent.) Therefore, the inequality

∑_{i∈V (C)} xi ≤ (|V (C)| − 1)/2

is valid for any cycle C of odd cardinality. □

3.3.2 Lifting a valid inequality

Geometrically, lifting a valid inequality means to derive a higher-dimensional face from a lower-dimensional face. Let us demonstrate the idea first on an example.

Example. (Stable set problem, continued) Consider the maximal clique inequalities on the

following graph G (the 5-wheel: node 1 is adjacent to every node of the 5-cycle on the nodes 2, 3, 4, 5, 6).

x1 + x2 + x3 ≤ 1

x1 + x3 + x4 ≤ 1

x1 + x4 + x5 ≤ 1

x1 + x5 + x6 ≤ 1

x1 + x2 + x6 ≤ 1

The LP maximizing ∑_{i=1}^6 xi over the polytope defined by these inequalities and by xi ≥ 0, i = 1, . . . , 6, has as its unique optimal solution x0 = (1/2)·(0, 1, 1, 1, 1, 1)ᵀ. Thus, the max-clique inequalities are not enough to describe the convex hull of all stable sets.

Note that x0 does not satisfy the odd-cycle inequality for V (C) = {2, 3, 4, 5, 6}:

x2 + x3 + x4 + x5 + x6 ≤ 2. (3.1)

Let F be the set of feasible solutions to the stable set problem on G. Inequality (3.1) is satisfied

with equality by five linearly independent vectors corresponding to the stable sets {2, 4}, {2, 5},


{3, 5}, {3, 6}, {4, 6}, but it is not facet-defining, as there are no other stable sets that satisfy (3.1)

with equality.

However, (3.1) is facet-defining for

conv(F ∩ {x ∈ {0, 1}6 : x1 = 0}).

We now wish to lift inequality (3.1) so that it also defines a facet for conv(F). For this, we consider

the inequality

αx1 + x2 + x3 + x4 + x5 + x6 ≤ 2 (3.2)

for some α > 0. We now try to find a maximal number α such that (3.2) is still valid for all x ∈ F ,

and such that it defines a facet for conv(F).

For x1 = 0, inequality (3.2) is valid for all α. If x1 = 1, we get α ≤ 2 − x2 − x3 − x4 − x5 − x6.

However, since x1 = 1 implies x2 = x3 = x4 = x5 = x6 = 0, we get α ≤ 2. Thus, inequality (3.2)

is valid for 0 ≤ α ≤ 2. Moreover, for α = 2, the five vectors above satisfy (3.2) with equality. In

addition, also the solution corresponding to the stable set {1} satisfies (3.2) with equality. Since

these six vectors are linearly independent, the inequality

2x1 + x2 + x3 + x4 + x5 + x6 ≤ 2

is valid and defines a facet of conv(F). □

The general lifting procedure is as follows.

Theorem 3.3.1 Suppose that F ⊆ {0, 1}^n, Fi := F ∩ {x ∈ {0, 1}^n : x1 = i}, i = 0, 1, and that the inequality

∑_{j=2}^n ajxj ≤ a0

is valid for F0.

(a) If F1 = ∅, then x1 ≤ 0 is valid for F .

(b) If F1 ≠ ∅, then the inequality

a1x1 + ∑_{j=2}^n ajxj ≤ a0

is valid for F for any a1 ≤ a0 − Z, where Z is the optimal value of max { ∑_{j=2}^n ajxj : x ∈ F1 }.

(c) If a1 = a0 − Z and if the inequality ∑_{j=2}^n ajxj ≤ a0 defines a face of dimension k of conv(F0), then the inequality a1x1 + ∑_{j=2}^n ajxj ≤ a0 defines a face of conv(F) of dimension k + 1.


Proof.

(a) Trivial.

(b) If F1 ≠ ∅, let x ∈ F0. Then

a1x1 + ∑_{j=2}^n ajxj = ∑_{j=2}^n ajxj ≤ a0

holds, since x1 = 0. If x ∈ F1, then

a1x1 + ∑_{j=2}^n ajxj ≤ a1 + Z ≤ a0.

Therefore, the inequality a1x1 + ∑_{j=2}^n ajxj ≤ a0 is valid for F = F0 ∪ F1.

(c) If ∑_{j=2}^n ajxj ≤ a0 defines a face of conv(F0) of dimension k, there are k + 1 affinely independent points x^1, . . . , x^{k+1} in F0 satisfying ∑_{j=2}^n ajxj ≤ a0 with equality. Let x∗ be an optimal solution of the problem of maximizing ∑_{j=2}^n ajxj over x ∈ F1. For a1 = a0 − Z, we have

a1x∗1 + ∑_{j=2}^n ajx∗j = a1 + Z = a0.

Thus, the points x∗, x^1, . . . , x^{k+1} satisfy a1x1 + ∑_{j=2}^n ajxj ≤ a0 with equality. They are also affinely independent, since x∗ ∈ F1 and x^1, . . . , x^{k+1} ∈ F0. Consequently, the inequality a1x1 + ∑_{j=2}^n ajxj ≤ a0 defines a face of conv(F) of dimension k + 1. □

Theorem 3.3.1 is meant to be used sequentially. Given N1 ⊆ [n] and an inequality ∑_{j∈N1} ajxj ≤ a0 that is valid for F ∩ {x ∈ {0, 1}^n : xi = 0, i ∉ N1}, we lift one variable at a time to obtain a valid inequality of the form

∑_{j∉N1} πjxj + ∑_{j∈N1} ajxj ≤ a0.

Note that the coefficients πj depend on the order in which the variables are lifted.

Example. Consider the knapsack problem

F = {x ∈ {0, 1}6 : 5x1 + 5x2 + 5x3 + 5x4 + 3x5 + 8x6 ≤ 17}.

The inequality x1 + x2 + x3 + x4 ≤ 3 is valid for conv(F ∩ {x ∈ {0, 1}6 : x5 = x6 = 0}). Applying

lifting first on x5 and then on x6 leads to the inequality x1 +x2 +x3 +x4 +x5 +x6 ≤ 3. If we first

lift x6 and then x5, we obtain x1 + x2 + x3 + x4 + 2x6 ≤ 3 instead. □
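The two lifting orders in this example can be reproduced by brute force; the following Python sketch (our own illustration) computes Z by enumeration in each lifting step:

```python
from itertools import product

weights, capacity = [5, 5, 5, 5, 3, 8], 17        # items x1,...,x6 of the example

def lift_sequentially(base_coeffs, rhs, order):
    """Sequential lifting by brute force: base_coeffs maps item index -> coefficient
    of the starting inequality; the variables in `order` are lifted one at a time."""
    coeffs = dict(base_coeffs)
    for k in order:
        not_yet_lifted = [j for j in order if j not in coeffs and j != k]
        free = [j for j in range(len(weights)) if j != k and j not in not_yet_lifted]
        Z = 0
        for choice in product([0, 1], repeat=len(free)):
            point = dict(zip(free, choice))
            point[k] = 1
            for j in not_yet_lifted:
                point[j] = 0
            if sum(weights[j] * point[j] for j in point) <= capacity:
                Z = max(Z, sum(coeffs.get(j, 0) * point[j] for j in point))
        coeffs[k] = rhs - Z                        # lifting coefficient a_k = a_0 - Z
    return coeffs

base = {0: 1, 1: 1, 2: 1, 3: 1}                    # x1 + x2 + x3 + x4 <= 3
print(lift_sequentially(base, 3, [4, 5]))          # lift x5, then x6: all coefficients 1
print(lift_sequentially(base, 3, [5, 4]))          # lift x6, then x5: coefficient 2 for x6, 0 for x5
```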

Clearly, the IP that is to be solved in the lifting phase could be as hard as the original problem.

In some cases, however, it is polynomial-time solvable. Even if this IP is hard, we can still use relaxations to obtain upper bounds on Z, which still yield valid (if possibly weaker) lifting coefficients. After all, we are still strengthening a known inequality.


3.4 Lift-and-Project

In this section, we describe a systematic way to derive an integral formulation for binary optimization problems. The idea of the method is to consider the integer optimization problem in a higher

dimensional space (the lifting phase), and then to project back to the original space inequalities

found in the higher dimensional space resulting in a tighter formulation (the projection phase).

For a polyhedron P = {(x,y) : Dx + By ≤ d}, we define the projection of P onto the set of

x-variables to be

Px := {x : ∃ y with (x,y) ∈ P}.

A description of Px can be obtained as follows.

Lemma 3.4.1 Let C = {u : uᵀB = 0,u ≥ 0} and E be the set of extreme rays of C. Then

Px = {x : (uᵀD)x ≤ uᵀd, for all u ∈ E}. (3.3)

Proof. Clearly, C is a polyhedral cone and it suffices to take in (3.3) only vectors u that are

extreme rays of C, since C = cone(E). (The “non-extreme ray inequalities” are nonnegative linear

combinations of the “extreme ray inequalities”.)

Let S = {x : (uᵀD)x ≤ uᵀd, for all u ∈ E}. If x ∈ Px, then there exists some y such that

(x,y) ∈ P . Hence for all u ∈ E, we have (uᵀD)x ≤ uᵀd, that is, x ∈ S. Conversely, if x ∈ S, we

have (uᵀD)x ≤ uᵀd, for all u ∈ C. By the Farkas Lemma, By ≤ (d −Dx) has a solution if and

only if for every u ∈ C we have that uᵀ(d − Dx) ≥ 0. This latter condition holds. Thus, there

exists some y such that (x,y) ∈ P . □

Remark. Consequently, the problem of obtaining an explicit description of Px reduces to the

problem of determining the extreme rays of C. □

Let

F := {x ∈ {0, 1}n : Ax ≤ b}

with A ∈ Zd×n and b ∈ Zd. Our goal is to derive an explicit description of conv(F). W.l.o.g. we

assume that the inequalities 0 ≤ xi ≤ 1, i = 1, . . . , n, are contained in Ax ≤ b. Let

P = {x : Ax ≤ b}.

The lift-and-project method works as follows.

Algorithm

Input: a matrix A ∈ Zd×n, a vector b ∈ Zd, and an index j ∈ [n]

Output: a polyhedron Pj

(1) (Lift) Multiply Ax ≤ b by xj and by (1− xj):

(Ax)xj ≤ bxj ,

(Ax)(1− xj) ≤ b(1− xj),


and substitute yij := xixj for i = 1, . . . , n, i ≠ j, and xj := xj². Let Lj(P ) be the resulting

polyhedron in (x,y)-space.

(2) (Project) Project Lj(P ) back to the original space of the x-variables by eliminating all

y-variables. Let Pj = (Lj(P ))x be the resulting polyhedron.

We now show that the j-th component of each vertex of Pj is either 0 or 1.

Theorem 3.4.2

Pj = conv(P ∩ {x ∈ Rn : xj ∈ {0, 1}}).

Proof. Let x ∈ P ∩ {x ∈ Rn : xj ∈ {0, 1}} and yij = xixj for i ≠ j. Since xj = (xj)² and Ax ≤ b,

we have (x, y) ∈ Lj(P ) and thus x ∈ Pj . Consequently,

conv(P ∩ {x ∈ Rn : xj ∈ {0, 1}}) ⊆ Pj .

If P ∩{x ∈ Rn : xj = 0} = ∅, then from the Farkas-Lemma there exists u ≥ 0 such that uᵀA = −ej

and uᵀb = −ε with ε > 0. Thus, for all x satisfying

(Ax)xj ≤ bxj ,

(Ax)(1− xj) ≤ b(1− xj),

we have

uᵀAx(1− xj) ≤ uᵀb(1− xj).

Hence, for all x ∈ Pj , we get

−ejᵀx(1 − xj) = −xj(1 − xj) ≤ −ε(1 − xj).

Replacing xj² by xj , we see that −xj(1 − xj) = −xj + xj² = −xj + xj = 0, and thus xj ≥ 1 is valid for Pj . Since in addition Pj ⊆ P (Exercise: Why does this inclusion hold?), we obtain

Pj ⊆ P ∩ {x ∈ Rn : xj = 1} = conv(P ∩ {x ∈ Rn : xj ∈ {0, 1}}).

Similarly, one shows that if P ∩ {x ∈ Rn : xj = 1} = ∅, then

Pj ⊆ conv(P ∩ {x ∈ Rn : xj ∈ {0, 1}}).

Suppose now that both P ∩ {x ∈ Rn : xj = 0} and P ∩ {x ∈ Rn : xj = 1} are not empty.

To show that Pj ⊆ conv(P ∩ {x ∈ Rn : xj ∈ {0, 1}}), we prove that all inequalities valid for

conv(P ∩ {x ∈ Rn : xj ∈ {0, 1}}) are also valid for Pj .

Let aᵀx ≤ α be valid for conv(P ∩ {x ∈ Rn : xj ∈ {0, 1}}) and let x ∈ P .

If xj = 0, then for all λ ∈ R, we have

aᵀx + λxj = aᵀx ≤ α,

since aᵀx ≤ α is valid for P ∩ {x ∈ Rn : xj = 0}.


If xj > 0, then there exists λ ≤ 0, such that for all x ∈ P we have

aᵀx + λxj ≤ α.

Analogously, since aᵀx ≤ α is valid for P ∩ {x ∈ Rn : xj = 1}, there exists some ν ≤ 0, such that

for all x ∈ P , we have

aᵀx + ν(1− xj) ≤ α.

Thus, for all x satisfying

(Ax)xj ≤ bxj ,

(Ax)(1− xj) ≤ b(1− xj),

we have

(1− xj)(aᵀx + λxj) ≤ (1− xj)α,

xj(aᵀx + ν(1− xj)) ≤ xjα,

and thus, by adding these two inequalities together:

aᵀx + (λ + ν)(xj − xj²) ≤ α.

After setting xj² = xj , we obtain that for all x ∈ Pj the inequality aᵀx ≤ α holds. Consequently,

Pj ⊆ conv(P ∩ {x ∈ Rn : xj ∈ {0, 1}}).

Example. Let

P = {(x1, x2)ᵀ : 2x1 − x2 ≥ 0, 2x1 + x2 ≤ 2, 0 ≤ x1 ≤ 1, 0 ≤ x2 ≤ 1}.

Choose j = 1 and apply the algorithm above to obtain P1:

2x1² − x1x2 ≥ 0
2x1(1 − x1) − x2(1 − x1) ≥ 0
2x1² + x1x2 ≤ 2x1
2x1(1 − x1) + x2(1 − x1) ≤ 2(1 − x1)
0 ≤ x1² ≤ x1
0 ≤ x1(1 − x1) ≤ 1 − x1
0 ≤ x1x2 ≤ x1
0 ≤ x2(1 − x1) ≤ 1 − x1


Setting y = x1x2 and replacing x1² by x1, we get:

2x1 − y ≥ 0

−x2 + y ≥ 0

y ≤ 0

x2 − y ≤ 2− 2x1

0 ≤ x1 ≤ x1

0 ≤ 0 ≤ 1− x1

0 ≤ y ≤ x1

0 ≤ x2 − y ≤ 1− x1

This implies that y = 0, which leads to the inequalities:

x1 ≥ 0

−x2 ≥ 0

x2 ≤ 2− 2x1

0 ≤ x1 ≤ 1

0 ≤ x2 ≤ 1− x1

Thus, we get

P1 = {(x1, x2)ᵀ : 0 ≤ x1 ≤ 1, x2 = 0} = conv(P ∩ {(x1, x2)ᵀ : x1 ∈ {0, 1}}).

The algorithm above can be applied sequentially for every variable xj . If we apply it to xi1 , . . . , xit (in this order), we obtain a polyhedron

Pi1,...,it = ((Pi1)i2 . . .)it .

Theorem 3.4.3 The polyhedron Pi1,...,it satisfies:

Pi1,...,it = conv(P ∩ {x ∈ Rn : xj ∈ {0, 1} : j ∈ {i1, . . . , it}}).

Proof. We prove this statement by induction on t. For t = 1, the statement follows from Theorem

3.4.2. Let t ≥ 2 and assume that the statement is true for any sequence of size less than t. Consider

some sequence i1, . . . , it of size t and let

Q = {x ∈ Rn : xiq ∈ {0, 1} : q = 1, . . . , t− 1}.


We have

Pi1,...,it = (Pi1,...,it−1)it
= conv(P ∩ Q)it   (by the induction hypothesis)
= conv(conv(P ∩ Q) ∩ {x : xit ∈ {0, 1}})   (by Theorem 3.4.2, the case t = 1)
= conv((conv(P ∩ Q) ∩ {x : xit = 0}) ∪ (conv(P ∩ Q) ∩ {x : xit = 1}))
= conv(conv(P ∩ Q ∩ {x : xit = 0}) ∪ conv(P ∩ Q ∩ {x : xit = 1}))
= conv((P ∩ Q ∩ {x : xit = 0}) ∪ (P ∩ Q ∩ {x : xit = 1}))
= conv(P ∩ {x : xiq ∈ {0, 1}, q = 1, . . . , t}),

where the last equation holds because

conv(conv(S) ∪ conv(T )) = conv(S ∪ T )

is true for arbitrary sets S, T ⊆ Rn. □

This theorem implies that if we apply the lift-and-project algorithm above to all n variables, we

obtain the convex hull of solutions conv(F), that is, P1,...,n = conv(F). Moreover, this theorem

implies that the resulting polyhedron does not depend on the order of the variables that are used.

Thus, we may write P{i1,...,it} instead of Pi1,...,it . Finally, it should be noted that the method of

lift and project only applies to binary optimization problems.


Chapter 4

Ellipsoid Method and Separation

4.1 Ellipsoid Method

Notation. For a positive definite matrix D let

Ell(D, z) = {x : (x − z)ᵀD⁻¹(x − z) ≤ 1}

be the ellipsoid described by D.

Problem. Given a polyhedron P = {x : Ax ≤ b}, find a point in P .

This problem can be solved via the following algorithm:

The Ellipsoid Algorithm.

1. Let γ be the maximum number of bits required to describe a vertex of P .

2. Set r = 2^γ.

3. Start with the ellipsoid E0 = Ell(r²In, 0).

4. At the i-th iteration, check whether the center zi of Ei = Ell(Di, zi) lies in P .

• Yes: Output zi as feasible solution.

• No: Find a constraint akᵀx ≤ bk valid for P which is violated by zi. Let Ei+1 be the minimum volume ellipsoid containing Ei ∩ {x : akᵀx ≤ akᵀzi}, and recurse with Ei+1.


The algorithm stops when a point in P has been found. It has to stop, since P is contained in Ei

and in each iteration the volume of Ei+1 has dropped by a certain factor. For some value i, the

volume of Ei will be smaller than the volume of P . Clearly, the algorithm must halt before this

situation is reached. The ellipsoid method can be implemented in polynomial time!

Lemma 4.1.1 The minimum volume ellipsoid containing Ell(D, z) ∩ {x : aᵀx ≤ aᵀz} is exactly E′ = Ell(D′, z′), where

z′ = z − (1/(n + 1)) · Da/√(aᵀDa)

and

D′ = (n²/(n² − 1)) · ( D − (2/(n + 1)) · DaaᵀD/(aᵀDa) ),

and

vol(E′)/vol(E) ≤ e^(−1/(2n+2)).
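A sketch of one update step in Python/NumPy (our own illustration; D is the positive definite matrix describing the current ellipsoid as above):

```python
import numpy as np

def ellipsoid_update(D, z, a):
    """One step of the ellipsoid method: smallest ellipsoid Ell(D', z')
    containing Ell(D, z) ∩ {x : a^T x <= a^T z}  (formulas of Lemma 4.1.1)."""
    n = len(z)
    Da = D @ a
    denom = float(a @ Da)                  # a^T D a  (> 0 for positive definite D)
    z_new = z - Da / ((n + 1) * np.sqrt(denom))
    D_new = (n**2 / (n**2 - 1.0)) * (D - (2.0 / (n + 1)) * np.outer(Da, Da) / denom)
    return D_new, z_new

# toy example: unit ball around the origin, cut x1 <= 0
D0, z0 = np.eye(2), np.zeros(2)
D1, z1 = ellipsoid_update(D0, z0, np.array([1.0, 0.0]))
print(z1)   # center moves to (-1/3, 0)
```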

4.2 Separation

A separation procedure is a procedure that decides for a given point z either that it lies in P or it

finds a valid inequality aᵀx ≤ b for P that is violated by z.

Polynomiality of the ellipsoid method means that we can even solve integer programs in polynomial

time if we can separate over the facet defining inequalities of PI . Even if we cannot separate over

the full facet-description of PI , it may help a lot in cutting-plane methods if we can at least separate

over exponentially many facets of PI .

Example. Let us consider the maximum stable set problem in a graph G = (V,E). This problem

can be modeled (and solved) via the following IP:

max { ∑_i xi : xi + xj ≤ 1 ∀ {i, j} ∈ E, xi ∈ {0, 1} ∀ i }


The LP relaxation

max { ∑_i xi : xi + xj ≤ 1 ∀ {i, j} ∈ E, 0 ≤ xi ≤ 1 ∀ i }

can be pretty bad. For example, for the complete graph, the optimal IP solution has value 1, while the LP relaxation has optimal value n/2 with optimal solution x1 = . . . = xn = 1/2.

Thus, let us add the odd-cycle inequalities to the problem. The new LP is as follows:

max { ∑_i xi : xi + xj ≤ 1 ∀ {i, j} ∈ E, ∑_{i∈C} xi ≤ (|C| − 1)/2 ∀ odd cycles C, 0 ≤ xi ≤ 1 ∀ i }

Note that the optimal values of this LP and of the IP can still be different.

Let us find a separation oracle for these constraints. As the inequalities xi + xj ≤ 1 ∀ {i, j} ∈ E and 0 ≤ xi ≤ 1 can easily be checked, we only need to find a separation oracle for the odd-cycle inequalities. For this, define yij = 1 − xi − xj for each edge {i, j} ∈ E. With this notation, the odd-cycle inequalities are equivalent to

∑_{{i,j}∈E(C)} yij ≥ 1.

Therefore, in order to find a violated odd-cycle inequality, we need to find an odd cycle C in G with minimum weight ∑_{{i,j}∈E(C)} yij . If ∑_{{i,j}∈E(C)} yij < 1, we have found a violated odd-cycle inequality. If ∑_{{i,j}∈E(C)} yij ≥ 1, none of the odd-cycle inequalities is violated.

Next we show that such a minimum-weight odd cycle in G can be found in polynomial time. For this, we construct a graph G′ = (V1 ∪ V2, E′) such that there is a node i1 ∈ V1 and a node i2 ∈ V2 for every node i ∈ V . For each edge {i, j} ∈ E, we add the two edges {i1, j2} and {j1, i2} to E′, both with weight yij . Odd cycles in G through node i now correspond to paths from i1 to i2 in G′. Thus, finding a minimum-weight odd cycle in G is equivalent to finding, for each i ∈ V , a shortest path from i1 to i2 in G′.

Thus, we have constructed a polynomial-time separation procedure. □
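A self-contained sketch of this separation routine (our own illustration; Dijkstra's algorithm is written out to avoid external dependencies, and edge weights are assumed nonnegative because the inequalities xi + xj ≤ 1 have already been checked):

```python
import heapq

def dijkstra(adj, source):
    """Standard Dijkstra on an adjacency dict {node: [(neighbor, weight), ...]}."""
    dist = {source: 0.0}
    heap = [(0.0, source)]
    while heap:
        d, v = heapq.heappop(heap)
        if d > dist.get(v, float('inf')):
            continue
        for w, length in adj.get(v, []):
            if d + length < dist.get(w, float('inf')):
                dist[w] = d + length
                heapq.heappush(heap, (d + length, w))
    return dist

def separate_odd_cycle(nodes, edges, x):
    """Return True if some odd-cycle inequality is violated by the LP point x.
    edges: list of pairs (i, j); x: dict node -> LP value."""
    adj = {}
    for (i, j) in edges:
        y = 1.0 - x[i] - x[j]                      # weight y_ij = 1 - x_i - x_j (>= 0 here)
        for u, v in [(i, j), (j, i)]:
            adj.setdefault((u, 1), []).append(((v, 2), y))   # edge between copy 1 and copy 2
            adj.setdefault((u, 2), []).append(((v, 1), y))
    best = float('inf')
    for i in nodes:
        dist = dijkstra(adj, (i, 1))
        best = min(best, dist.get((i, 2), float('inf')))     # odd closed walk through i
    return best < 1.0                                        # violated iff minimum weight < 1

# 5-cycle with all x_i = 1/2: sum x_i = 5/2 > 2, so a violation is reported
C5 = [(1, 2), (2, 3), (3, 4), (4, 5), (5, 1)]
print(separate_odd_cycle([1, 2, 3, 4, 5], C5, {i: 0.5 for i in range(1, 6)}))
```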


Chapter 5

Branching strategies

An important component of branch-and-bound (or branch-and-cut) algorithms is the branching

step. The chosen strategy often heavily affects the total running time. The branching strategy

includes two types of decisions:

1. How do we decompose a problem into subproblems?

2. Which open subproblem is considered next?

Goals of a branching strategy:

1. strong improvement of dual bound (for a better quality certificate)

2. quick construction of good feasible (primal) solutions (early pruning of branches)

The quality of a chosen strategy can only be checked experimentally and depends heavily on the

structure of the problem instances.

In the following, we always consider minimization problems. Let p be a (sub)problem.

• xi(p) = last LP value of xi in p

• fi⁻(p) = xi(p) − ⌊xi(p)⌋,  fi⁺(p) = ⌈xi(p)⌉ − xi(p)

• fi(p) = min{fi⁻(p), fi⁺(p)}

• z(p) = dual bound in p (last LP optimum)


5.1 Selection of a node

Problem: Given a set of open subproblems, which subproblem should be considered next?

Breadth-first search

Choose node in branching tree with smallest depth in the tree.

Advantage: None.

Depth-first search

Choose node with greatest depth in the branching tree.

Advantage: Finds primal solutions quickly, as many things get fixed.

Disadvantage: Dual bound is nearly left unchanged.

Best-first search

Choose node p with worst (= smallest) dual bound z(p).

Advantage: Dual bound potentially changes a lot.

Disadvantage: Rarely finds feasible primal solutions.

Best-projection search

Idea: Search for good primal solutions. One guesses that these solutions can be found in

those subproblems, for which the dual bound remains bad.

Choose node p with minimal value

z(p) + (z̄ − z(root))/s(root) · s(p),

where s(p) = ∑_i fi(p) and where z̄ denotes the value of the currently best feasible primal solution. Moreover, (z̄ − z(root))/s(root) is a measure of the expected improvement of the bound for decreasing fractionality.


Final conclusions about selection of next node

• There are many more strategies.

• Experiments show:

– Depth-first search is best to find good feasible primal solutions.

– Best-first and Best-projection yield the best dual bounds (depending on the type of

instances).

5.2 Branching rules

Problem: Given a subproblem p, divide it into several new subproblems, where the set of feasible

solutions of p is the union of the feasible solutions of the new subproblems.

There are many possibilities:

• divide into two or more subproblems

• divide with respect to variables or linear constraints

...

Let us concentrate here on the simplest way of branching: Branching along variables

Idea. Choose an integer variable xi with fractional xi(p) and generate two new subproblems by

adding the constraints

xi ≤ ⌊xi(p)⌋ or xi ≥ ⌈xi(p)⌉

In case of binary variables:

xi = 0 or xi = 1

Problem: Which variable should be chosen for branching? → again many strategies

Most infeasible

Choose variable with maximal fi(p) → the least determined variable

Goal: Fixing has hopefully a large impact.

Experiments: Not much better than random choice of branching variable.


Pseudo-cost branching

Let p−i and p+i be the subproblems obtained from p by rounding xi down and up, respectively.

Let ∆+i (p) := z(p+

i )− z(p),∆−i (p) := z(p−i )− z(p). Then ∆−i (p)/fi(p) measures the gain by

rounding xi down in p.

Downwards pseudo-costs: pk⁻(xi) = average of ∆i⁻(p)/fi⁻(p) over all p in which pi⁻ has been considered. (If there is none, pk⁻(xi) is undefined/uninitialized.)

Upwards pseudo-costs: pk⁺(xi), analogously.

Pseudo-costs: pk(xi) = score(pk⁻(xi), pk⁺(xi)), where

score(a, b) = (1 − µ)·min(a, b) + µ·max(a, b),  µ ∈ [0, 1].

Typically, µ = 1/6 is chosen.

Choose fractional variable xi with largest value pk(xi) for branching.

Advantage: good branching decisions, fast computation

Disadvantage: initially pseudo-costs are uninitialized or contain nearly no information
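A minimal sketch of pseudo-cost branching as described above (our own illustration; the data layout is made up):

```python
MU = 1.0 / 6.0                                  # weighting factor, as suggested above

def score(a, b, mu=MU):
    return (1 - mu) * min(a, b) + mu * max(a, b)

def choose_branching_variable(lp_values, pk_down, pk_up, tol=1e-6):
    """Among the fractional variables, pick the one with the largest
    pk(x_i) = score(pk-(x_i), pk+(x_i)).  (Solvers often additionally scale
    the two arguments by the current fractionalities f_i^-(p), f_i^+(p).)"""
    candidates = [i for i, xi in lp_values.items()
                  if min(xi - int(xi), int(xi) + 1 - xi) > tol]     # fractional variables only
    return max(candidates, key=lambda i: score(pk_down[i], pk_up[i]), default=None)

# made-up LP values and pseudo-costs for three variables
print(choose_branching_variable({1: 0.5, 2: 2.3, 3: 4.0},
                                {1: 1.0, 2: 4.0, 3: 2.0},
                                {1: 2.0, 2: 0.5, 3: 2.0}))
```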

Strong branching

Idea: Look ahead!

Bound ∆i⁻(p) and ∆i⁺(p) by temporarily adding xi ≤ ⌊xi(p)⌋ and xi ≥ ⌈xi(p)⌉, respectively, and by solving both LPs.

Choose the variable with the largest value of

score( ∆i⁺(p)/fi⁺(p), ∆i⁻(p)/fi⁻(p) ).

Advantage: good estimation of success of branching using xi

Disadvantage: Computation takes a lot of time! Two LPs per candidate variable!

In practice:

– Test only a selection of variables.

– Perform only a few simplex iterations.

– Still very expensive!


Mixed approaches

Goal: Combine the advantages of pseudo-cost and strong branching. First perform strong branching, then use pseudo-cost branching once the information from former branchings becomes more reliable.

There are a variety of more or less complex approaches:

1. Up to a predetermined depth in the search tree, use strong branching and otherwise

pseudo-cost branching.

2. If pseudo-costs (in one direction) are uninitialized, replace them by estimations from

strong branching.

3. If the pseudo-costs (in one direction) rely on fewer than ν previous branchings, use strong branching. This strategy is called Reliability branching and works pretty well in practice.

ν = 0: pseudo-cost branching
ν = ∞: strong branching


Chapter 6

Column generation

Assume we wish to solve an LP

min{cᵀx : Ax = b,x ≥ 0},

where x ∈ Rn with n very large. Then checking the reduced costs in each iteration of the simplex

method is very expensive. The idea of “column generation” is to

• start with a small subset of variables,

• solve the small LP,

• generate one or more missing columns with negative reduced costs,

• add columns to master problem and iterate.
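In generic pseudocode-style Python (a sketch; solve_restricted_lp and price are placeholder names for the restricted master LP solver and the pricing routine, not routines defined in the notes):

```python
def column_generation(solve_restricted_lp, price, initial_columns):
    """Generic column generation loop (sketch).  solve_restricted_lp(columns)
    returns (x, duals) for the restricted master problem over the given columns;
    price(duals) returns a new column with negative reduced cost, or None."""
    columns = list(initial_columns)
    while True:
        x, duals = solve_restricted_lp(columns)        # solve the small LP
        new_column = price(duals)                      # pricing / subproblem
        if new_column is None:
            return x, columns                          # no improving column: LP optimal
        columns.append(new_column)                     # add the column and iterate
```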

Let us have a look at an example.

6.1 The Cutting Stock Problem (CSP)

Consider a paper company that has a supply of large rolls of paper, of width W . (We assume that

W is a positive integer.) However, customer demand is for smaller widths of paper; in particular

di rolls of width wi, i = 1, 2, . . . , n, need to be produced. We assume that wi ≤ W for each i, and

that each wi is an integer. Smaller rolls are obtained by slicing a large roll in a certain way, called

a cutting pattern. For example, a large roll of width 70 can be cut into three rolls of width w1 = 17

and one roll of width w2 = 15, with a waste of 4. The goal of the company is to minimize the

number of large rolls used while satisfying customer demand.

• CSP is NP-hard.

• if all di = 1: binary CSP


6.1.1 A first IP formulation

In general, a cutting pattern is a vector q ∈ Zn+, which encodes the cutting of a large roll of paper

into q1 rolls of width w1, q2 rolls of width w2, . . ., qn rolls of width wn, plus potentially some waste.

Clearly, every feasible cutting pattern satisfies ∑_{i=1}^n qi wi ≤ W. Moreover, there is a trivial upper bound for the number of large rolls needed:

K := ∑_{i=1}^n di.

(Simply cut each small roll out of a separate large roll.)

Thus, define binary variables y1, . . . , yK with yk = 1 if and only if roll k is used. Hence, we wish to minimize ∑_{k=1}^K yk.

To each roll k, we assign a valid cutting pattern (x_1^k, . . . , x_n^k)ᵀ. Validity is encoded in

∑_{i=1}^n wi x_i^k ≤ W yk.

As we also need to satisfy demand, we get the constraints

∑_{k=1}^K x_i^k ≥ di, ∀ i = 1, . . . , n.

Summing up, we get:

zIP1 = min { ∑_{k=1}^K yk : ∑_{k=1}^K x_i^k ≥ di ∀ i = 1, . . . , n,
             ∑_{i=1}^n wi x_i^k ≤ W yk ∀ k = 1, . . . , K,
             yk ∈ {0, 1} ∀ k = 1, . . . , K,
             x_i^k ∈ Z+ ∀ k = 1, . . . , K, i = 1, . . . , n }.

This formulation contains only polynomially many rows and columns, which is “good” for Branch-

and-Cut. HOWEVER:

Theorem 6.1.1 The optimal value of the LP relaxation of zIP1 is (∑_{i=1}^n wi di)/W.

Proof. Exercise! □

This bound is typically pretty bad:

Example. Consider the problem instance with n = 1 and w1 = ⌊W/2⌋ + 1. Then we need zIP1 = d1 large rolls, since no two small rolls fit into one large roll. However, the optimal LP solution has value

d1 w1 / W = d1 (⌊W/2⌋ + 1) / W ≈ d1/2

for large W.

Consequently:


• the LP solution gives a bad bound

• in a branch-and-bound tree, only few subproblems can be pruned using this bound

• typically one essentially has to do a complete enumeration of all solutions

Moreover, the large symmetry in the problem causes even further problems for a branch-and-bound

approach.

6.1.2 Second IP formulation

Let Q be the set of valid cutting patterns x^j. For j = 1, . . . , |Q| introduce variables µj ∈ Z+ that count how often cutting pattern x^j ∈ Q is used.

Goal: Minimize the number of used rolls: min ∑_{j=1}^{|Q|} µj.

Master problem:

(IP2) : min { ∑_{j=1}^{|Q|} µj : ∑_{j=1}^{|Q|} x_i^j µj ≥ di ∀ i = 1, . . . , n, µj ∈ Z+ }.

Subproblem: The x^j are the feasible solutions of

∑_{i=1}^n wi x_i^j ≤ W, x^j ∈ Zn+.

Problem: exponentially many (= |Q|) variables

Solution: column generation/pricing

• general principle for the solution of an LP

• particularly useful for LPs with exponentially many variables

6.2 Column generation

Assume we have to solve a (feasible) LP:

(LP) : max{cᵀx : Ax ≤ b,x ≥ 0}

Let A′ consist of a subset of n′ columns of A, but with the same set of rows as A. W.l.o.g., A′

consists of the first n′ columns. We consider

(LP′) : max{c′ᵀx′ : A′x′ ≤ b,x′ ≥ 0},


where we assume that A′ is chosen in such a way that (LP′) is feasible.

Let x∗′ be an optimal solution of (LP′) with basis (A′_B, A′_N). Then (x∗′_B, 0) is feasible for (LP) and defines a feasible basis (A′_B, A_N) (possibly not optimal) for (LP).

A column in A_N with positive reduced cost is a candidate to enter the basis for (LP). Such a column is found by solving the so-called pricing problem.

Reduced costs: c̄ᵀ = cNᵀ − cBᵀ A′_B^{-1} A_N.

Let yBᵀ := cBᵀ A′_B^{-1} and consider the problem

(Pri) : max{ ca − yBᵀa : a is a column of A }.

An optimal solution of (Pri) gives a column of A_N with largest reduced cost c̄a = ca − yBᵀa (without necessarily going through all columns of A_N one by one).

If c̄a ≤ 0, we have solved (LP) optimally. Otherwise, we add a as a column to A′ and iterate until an optimal solution has been found.

6.3 Column generation for CSP

Start with (IP2) with a subset of all columns (cutting patterns). Via pricing, we generate additional

columns if necessary. The pricing problem searches for the cutting pattern with smallest reduced

costs. (CSP is a minimization problem!)

(Pri) : min { 1 − yᵀx^j : ∑_{i=1}^n wi x_i^j ≤ W, x^j ∈ Zn+ }.

Herein, yᵀ = cBᵀ A′_B^{-1}, where A′_B is an optimal basis of the current restricted master problem.

The pricing problem for CSP is simply a knapsack problem! This is also NP-hard, but can be

solved exactly for pretty large n. (Remember that n is much smaller than |Q|.)
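The whole loop fits in a few lines. The following Python sketch (assuming numpy and scipy are available; all names are mine, not from the lecture) solves the dual of the current restricted master LP with scipy's linprog, so that the prices y are obtained directly as the dual LP's solution, and solves the knapsack pricing problem by dynamic programming:

import numpy as np
from scipy.optimize import linprog

def solve_csp_lp(W, w, d, tol=1e-9):
    """LP relaxation of (IP2) by column generation (a sketch; W and w are integers)."""
    n = len(w)
    # start with the trivial patterns: pattern j packs only rolls of width w_j
    patterns = [np.eye(n, dtype=int)[j] * (W // w[j]) for j in range(n)]
    while True:
        A = np.array(patterns).T                      # A[i, j] = x_i^j
        # dual of the restricted master min{1'mu : A mu >= d, mu >= 0}:
        #   max{d'y : A^T y <= 1, y >= 0}
        res = linprog(-np.asarray(d, dtype=float), A_ub=A.T,
                      b_ub=np.ones(len(patterns)), bounds=(0, None), method="highs")
        y = res.x                                     # prices y_1, ..., y_n
        # pricing = unbounded knapsack max{y'x : w'x <= W, x in Z_+^n} via DP
        best = [0.0] * (W + 1)
        take = [-1] * (W + 1)
        for cap in range(1, W + 1):
            best[cap] = best[cap - 1]
            for i in range(n):
                if w[i] <= cap and best[cap - w[i]] + y[i] > best[cap]:
                    best[cap] = best[cap - w[i]] + y[i]
                    take[cap] = i
        if best[W] <= 1 + tol:            # smallest reduced cost 1 - y'x is >= 0
            return -res.fun, patterns     # LP bound d'y and the generated patterns
        # reconstruct the improving pattern and add it as a new column
        pattern, cap = np.zeros(n, dtype=int), W
        while cap > 0:
            if take[cap] == -1:
                cap -= 1
            else:
                pattern[take[cap]] += 1
                cap -= w[take[cap]]
        patterns.append(pattern)

Starting from the trivial patterns that pack only one width each, the loop adds an improving pattern as long as the knapsack value exceeds 1, i.e., as long as some pattern has negative reduced cost; at termination dᵀy equals the optimal value of the LP relaxation of (IP2). A final integer solve over the generated patterns then typically gives a very good cutting plan. (For example, solve_csp_lp(70, [17, 15], [20, 30]) uses the widths of the example above with made-up demands.)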

Remark. Pricing can also be used within a Branch-and-Cut framework. → Branch-Cut&Price


Chapter 7

Benders decomposition

7.1 Formal derivation

Consider the following problem:

min {cᵀx + fᵀy : Ax +By = b,x ≥ 0,y ∈ Y } , (7.1)

where x and y are vectors of continuous variables having dimensions p and q, respectively, Y is a

polyhedron, A,B are matrices, and b, c, f are vectors having appropriate dimensions. Suppose that

the y-variables are “complicating variables” in the sense that the problem becomes significantly

easier to solve if the y-variables are fixed, e.g., due to a special structure inherent in the matrix A.

Benders decomposition partitions problem (7.1) into two problems:

• a master problem that contains the y-variables, and

• a subproblem that contains the x-variables.

Note that problem (7.1) can be rewritten as

min {fᵀy + q(y) : y ∈ Y } , (7.2)

where q(y) is defined to be the optimal value of

min {cᵀx : Ax = b−By,x ≥ 0} . (7.3)

Observations:

• Problem (7.3) is a linear program for any given value y ∈ Y .

• If (7.3) is unbounded for some y ∈ Y , then also problem (7.2) is unbounded, and thus, also

our original problem (7.1).


• Assuming boundedness of (7.3), we can calculate q(y) by solving the dual problem of (7.3).

Let us associate dual variables α for the constraints Ax = b−By, then the dual problem of (7.3)

is

max {αᵀ(b−By) : Aᵀα ≤ c} . (7.4)

Key observation: The feasible region of the dual formulation does not depend on the value of y,

which only affects the objective function.

If the feasible region of the dual problem (7.4) is empty, then either

• the primal problem (7.3) is unbounded for some y ∈ Y (and thus also the original problem

(7.1) is unbounded), or

• the feasible region of the primal problem (7.3) is also empty for all y ∈ Y (and thus also the

original problem (7.1) is infeasible).

Thus, assume that the feasible region of the dual problem (7.4) is not empty. Enumerate

• all extreme points α_p^1, . . . , α_p^I and

• all extreme rays α_r^1, . . . , α_r^J.

Then, for a given vector y, the dual problem can be solved by

• checking whether (α_r^j)ᵀ(b − By) > 0 for some extreme ray α_r^j, in which case the dual formulation is unbounded and the primal formulation is infeasible, and by

• finding an extreme point α_p^i that maximizes the value of the objective function (α_p^i)ᵀ(b − By), in which case both primal and dual formulations have finite optimal solutions.

Based on this, the dual problem (7.4) can be reformulated as follows:

min { q : (α_r^j)ᵀ(b − By) ≤ 0 ∀ j = 1, . . . , J,
          (α_p^i)ᵀ(b − By) ≤ q ∀ i = 1, . . . , I,
          q ∈ R }.

We can use this to replace q(y) in (7.2) and obtain a reformulation of the original problem (7.1):

min { fᵀy + q : (α_r^j)ᵀ(b − By) ≤ 0 ∀ j = 1, . . . , J,
                (α_p^i)ᵀ(b − By) ≤ q ∀ i = 1, . . . , I,
                y ∈ Y, q ∈ R }. (7.5)


7.2 Solution via Benders decomposition

• Since there is typically an exponential number of extreme points and extreme rays of the

dual formulation (7.4), generating all constraints of (7.5) is not practical.

• Instead, Benders decomposition starts with a subset of these constraints, and solves a relaxed

master problem, which yields a candidate optimal solution (y∗, q∗).

• It then solves the dual subproblem (7.4) to calculate q(y∗).

• If the subproblem has an optimal solution having q(y∗) = q∗, then the algorithm stops.

• Otherwise, if the dual subproblem is unbounded, then a constraint (α_r^j)ᵀ(b − By) ≤ 0 is

generated and added to the relaxed master problem, which is then re-solved.

→ Benders feasibility cuts (they enforce necessary conditions for feasibility of the primal

subproblem (7.3))

• Similarly, if the subproblem has an optimal solution having q(y∗) > q∗, then a constraint

(α_p^i)ᵀ(b − By) ≤ q is added to the relaxed master problem, and the relaxed master problem

is re-solved.

→ Benders optimality cuts (they are based on optimality conditions of the subproblem)

• Since I and J are finite, and new feasibility or optimality cuts are generated in each iteration,

this method converges to an optimal solution in a finite number of iterations.
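A compact sketch of this loop, under the simplifying assumptions that Y = {y : Gy ≤ h}, that the master problem stays bounded, and that the primal subproblem (7.3) is feasible and bounded for every y ∈ Y (so that only optimality cuts are needed), could look as follows; the lower bound q_lb on q merely keeps the first master solve bounded, and all names are illustrative:

import numpy as np
from scipy.optimize import linprog

def benders(c, f, A, B, b, G, h, q_lb=-1e6, tol=1e-6, max_iter=100):
    """Benders loop with optimality cuts only (all arguments are numpy arrays)."""
    ny = len(f)
    cut_rows, cut_rhs = [], []                  # optimality cuts in (y, q)-space
    for _ in range(max_iter):
        # relaxed master: min f'y + q  s.t.  G y <= h,  cuts,  q >= q_lb
        A_ub = np.hstack([G, np.zeros((G.shape[0], 1))])
        b_ub = np.asarray(h, dtype=float)
        if cut_rows:
            A_ub = np.vstack([A_ub, np.array(cut_rows)])
            b_ub = np.concatenate([b_ub, np.array(cut_rhs)])
        master = linprog(np.concatenate([f, [1.0]]), A_ub=A_ub, b_ub=b_ub,
                         bounds=[(None, None)] * ny + [(q_lb, None)], method="highs")
        y_star, q_star = master.x[:ny], master.x[ny]
        # dual subproblem (7.4): max alpha'(b - B y*)  s.t.  A' alpha <= c
        rhs = b - B @ y_star
        sub = linprog(-rhs, A_ub=A.T, b_ub=c, bounds=(None, None), method="highs")
        alpha, q_y = sub.x, -sub.fun
        if q_y <= q_star + tol:                 # q(y*) = q*: candidate is optimal
            return y_star, q_star
        # add the optimality cut  alpha'(b - B y) <= q
        cut_rows.append(np.concatenate([-(B.T @ alpha), [-1.0]]))
        cut_rhs.append(-alpha @ b)
    return y_star, q_star

Feasibility cuts would be generated analogously from extreme rays of the dual whenever the dual subproblem is unbounded; scipy's LP interface does not return such rays, which is one reason this sketch assumes them away.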

Remarks.

• The master problem need not be a linear program. It could also be an integer linear program,

a nonlinear program, or a constraint program.

• The subproblem need not be a linear program, either. It could be a convex program, since

dual multipliers satisfying strong duality conditions can be calculated for such problems.

7.3 Stochastic Programming

Let us now consider a problem type where Benders decomposition can be applied. In 2-stage

stochastic programming problems, some action needs to be taken in a first stage, which is followed

by the occurrence of an uncertain event. After the realization of the uncertain event has been

observed, a second-stage decision, a so-called recourse decision, can be made. For example, one

has to decide which way to drive from A to B without knowing the current and future traffic (and

the possible delay due to heavy traffic).

Clearly, if we do not know anything about the uncertain event, it is impossible to make a good first-

stage decision. (For example: How would you define “good” in the first place?) So, we assume that


we know a probability distribution for this uncertain event. Such knowledge can often be generated

from data from the past. Or it has to be guessed/approximated by experts. Then we could ask

to minimize the costs for the first-stage decision x plus the expected costs for the second-stage

recourse decision (expectation model). Nowadays, one even models risk, that is, one wishes to

minimize first-stage costs plus expected costs for the recourse decision plus the risk that the overall

costs exceed a certain threshold.

The expectation model for the 2-stage stochastic programming problem can be written as follows:

min{cᵀx +Q(x) : x ∈ X},

where

Q(x) = Eω( min{dωᵀy : Wy ≥ bω − Tx, y ∈ Rm+} ).

Theoretically, the expected value in Q(x) can be computed via symbolic integration. However, this

becomes practically intractable already for very few variables. Hence, one typically approximates

the probability distribution via N scenarios (Problem: How should we choose them well?), which

simplifies the integrals to sums. We obtain the (integer) linear programming equivalent to our

stochastic program:

min { cᵀx + ∑_{i=1}^N pi diᵀyi : Tx + Wyi ≥ bi ∀ i = 1, . . . , N, x ∈ X, y1, . . . , yN ∈ Rm+ },

in which pi denotes the probability of scenario i; the constraint matrix has block-angular form, with a column of T-blocks and a block diagonal of W-blocks. Typically, one has X = {x ∈ Rn : Ax ≥ b0}. Moreover, one often also puts integrality constraints onto some or all x- or yi-variables. Note that

the large LP (or IP) decomposes into N independent smaller LPs (or IPs) once the first-stage

decision x has been fixed.

If we have no uncertainty in d, that is d1 = . . . = dN , and if all second-stage variables y are con-

tinuous, then we can apply Benders decomposition again. Even if there are integrality constraints

on x! In this case, one has one master problem in x and N subproblems in y1, . . . ,yN .
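For concreteness, the block structure of this scenario formulation can be assembled directly; the following numpy sketch (the dimensions and data in the commented example are made up) builds the objective vector (cᵀ, p1 d1ᵀ, . . . , pN dNᵀ) and the stacked constraint matrix with a column of T-blocks and a block diagonal of W-blocks:

import numpy as np

def deterministic_equivalent(c, T, W, b_list, d_list, p_list):
    """Assemble min obj'z s.t. A z >= rhs over z = (x, y_1, ..., y_N); a sketch."""
    N = len(b_list)
    n, m = T.shape[1], W.shape[1]                 # first- and second-stage dimensions
    obj = np.concatenate([c] + [p_list[i] * d_list[i] for i in range(N)])
    rows = []
    for i in range(N):                            # scenario i:  T x + W y_i >= b_i
        block = np.zeros((T.shape[0], n + N * m))
        block[:, :n] = T
        block[:, n + i * m : n + (i + 1) * m] = W
        rows.append(block)
    return obj, np.vstack(rows), np.concatenate(b_list)

# Made-up example with N = 2 scenarios:
# obj, A, rhs = deterministic_equivalent(
#     c=np.array([1.0, 2.0]), T=np.ones((3, 2)), W=np.eye(3),
#     b_list=[np.ones(3), 2 * np.ones(3)],
#     d_list=[np.ones(3), np.ones(3)], p_list=[0.5, 0.5])

The assembled matrix makes the decomposable structure visible: fixing x leaves N completely independent blocks, one per scenario.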


Chapter 8

Lagrange relaxation

Let us consider an integer program

zIP = max{cᵀx : x ∈ S},

where |S| <∞. Let us assume that we can rewrite the problem as follows:

IP(Q) : max{cᵀx : A1x ≤ b1 (the "hard" constraints), x ∈ Q (the "easy" constraints)},

where Q = {x ∈ Zn : A2x ≤ b2, x ≥ 0}. Let S = {x ∈ Zn : Ax ≤ b, x ≥ 0}, that is, A is obtained by stacking A1 on top of A2, and b by stacking b1 on top of b2.

Assume that IP(Q) is much easier to solve if the constraints A1x ≤ b1 are dropped. Many problems

can be formulated in such a way, for example also two-stage stochastic integer programs.

Example. (2-stage stochastic integer programming) Let us introduce copies x1, . . . ,xN of

the first-stage variables x and put them all equal: x1 = . . . = xN . Thus, after rearranging variables,

the two-stage stochastic IP can be written as

max { ∑_{i=1}^N pi(cᵀxi + diᵀyi) : Txi + Wyi ≥ bi ∀ i = 1, . . . , N, H(x1, y1, . . . , xN, yN)ᵀ = 0, xi ∈ X, yi ∈ Rm+ },

in which H(x1, y1, . . . , xN, yN)ᵀ = 0 models the constraints x1 = . . . = xN. Relaxing these coupling constraints, the problem decomposes into N smaller IPs. After solving them, one faces the problem that the first-stage decisions x1, . . . , xN may all be different. Still, the optimal value of the relaxed IP gives an upper bound for the 2-stage stochastic IP. If x1 = . . . = xN in the relaxed problem, we have found an optimal solution to our stochastic IP. □


Let us assume that A1x ≤ b1 contains m1 inequalities and let λ ∈ Rm1+ . We consider the Lagrange

relaxation LR(λ) of IP(Q):

LR(λ) : zLR(λ) = max{z(λ,x) : x ∈ Q}, where z(λ,x) = cᵀx + λᵀ(b1 − A1x).

Since λ ≥ 0: if x ∈ Rn violates the i-th inequality A1_{i·}x ≤ b1_i, we have A1_{i·}x > b1_i, that is, b1_i − A1_{i·}x < 0. Thus, for sufficiently large λ_i the inequality A1_{i·}x ≤ b1_i is satisfied in an optimal solution.

Lemma 8.0.1 For all λ ∈ Rm1+, LR(λ) is a relaxation of IP(Q), that is, zIP ≤ zLR(λ).

Proof. Let x be feasible for IP(Q). Then x ∈ Q, so x is feasible for LR(λ) for any λ. Moreover, since λ ≥ 0 and b1 − A1x ≥ 0, we have

z(λ, x) = cᵀx + λᵀ(b1 − A1x) ≥ cᵀx.

Maximizing over all x ∈ Q thus gives zLR(λ) ≥ zIP. □

Each vector λ ∈ Rm1+ defines a relaxation. We are interested in the best (= smallest) upper bound,

that is, we are looking for some λ∗, such that zLR(λ∗) is minimal. Let λ∗ be an optimal solution

to

LD : zLD = min{zLR(λ) : λ ∈ Rm1+ }.

LD is called the Lagrange dual of IP(Q) with respect to A1x ≤ b1.

Example. Consider the IP:

max{7x1 + 2x2 : −x1 + 2x2 ≤ 4, x ∈ Q},

where

Q = { x ∈ Z2+ : 5x1 + x2 ≤ 20, −2x1 − 2x2 ≤ −7, −x1 ≤ −2, x2 ≤ 4 }.

The Lagrange relaxation is:

zLR(λ) = max{7x1 + 2x2 + λ(4 + x1 − 2x2) : x ∈ Q}
       = max{(7 + λ)x1 + (2 − 2λ)x2 + 4λ : x ∈ Q},

where Q = {(2, 2), (2, 3), (2, 4), (3, 1), (3, 2), (3, 3), (3, 4), (4, 0)}. Let us first consider z(λ,x) for fixed λ:

z(λ,x) = (cᵀ − λA1)x + λb1,

where both the coefficient vector cᵀ − λA1 and the constant λb1 are fixed, and see it as an affine function in x.


Thus, we may determine zLR(λ) from

zLR(λ) = max{z(λ,x) : x ∈ conv(Q)}.

On the other hand, we could fix x ∈ Q: for fixed x ∈ Q,

z(λ, x) = cᵀx + λᵀ(b1 − A1x)

is an affine function in λ (both cᵀx and b1 − A1x are constants). For the Lagrange dual this gives

zLD = min_{λ≥0} max_{x∈Q} (cᵀx + λᵀ(b1 − A1x)) = min{ω : ω ≥ z(λ, x) ∀ x ∈ Q, λ ≥ 0}.

One can easily show that zLR(λ) is a piecewise linear and convex function. □
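Since Q is a finite set here, zLR(λ) can be evaluated by plain enumeration. The following small Python snippet (using exactly the data of the example) does this for a few values of λ and thereby illustrates the piecewise linear, convex shape of λ ↦ zLR(λ):

# z_LR(lambda) for the example, by enumerating the eight points of Q.
Q = [(2, 2), (2, 3), (2, 4), (3, 1), (3, 2), (3, 3), (3, 4), (4, 0)]

def z_lr(lam):
    # z(lambda, x) = 7 x1 + 2 x2 + lambda * (4 + x1 - 2 x2)
    return max(7 * x1 + 2 * x2 + lam * (4 + x1 - 2 * x2) for (x1, x2) in Q)

for lam in [0.0, 0.1, 0.2, 0.5, 1.0, 2.0]:
    print(lam, z_lr(lam))   # the minimum over lambda >= 0 is the value z_LD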

Theorem 8.0.2 The optimal value zLD of the Lagrange dual of max{cᵀx : A1x ≤ b1,x ∈ Q} is

equal to max{cᵀx : A1x ≤ b1,x ∈ conv(Q)}.

Remark. This means that the Lagrange dual has the same value as maximizing cᵀx over those vectors from the convex hull of Q that also satisfy the complicated inequalities A1x ≤ b1. □

Proof. We start by rewriting zLD:

zLD = min{zLR(λ) : λ ≥ 0}

    = min_{λ≥0} max_{x^i∈Q} [cᵀx^i + λᵀ(b1 − A1x^i)]

    = min_{λ≥0} {γ : γ ≥ cᵀx^i + λᵀ(b1 − A1x^i) ∀ x^i ∈ Q}

    = min{γ : γ + λᵀ(A1x^i − b1) ≥ cᵀx^i ∀ x^i ∈ Q, λ ≥ 0, γ ∈ R}.


Dualizing this last minimization problem, we get

zLD = max{ ∑_{x^i∈Q} αi cᵀx^i : ∑_{x^i∈Q} αi = 1, ∑_{x^i∈Q} αi(A1x^i − b1) ≤ 0, α ≥ 0 }

    = max{ cᵀ( ∑_{x^i∈Q} αi x^i ) : α ≥ 0, ∑_{x^i∈Q} αi = 1, A1( ∑_{x^i∈Q} αi x^i ) ≤ b1 }

    = max{ cᵀy : y ∈ conv(Q), A1y ≤ b1 }. □

In general, we have

conv(S) ⊆ conv(Q) ∩ {x : A1x ≤ b1} ⊆ {x : Ax ≤ b}.

Consequently, we get

zIP ≤ zLD ≤ zLP.

8.1 Solving the Lagrange dual

We wish to solve the following problem:

zLD = min{zLR(λ) : λ ≥ 0}.

Herein, f(λ) := zLR(λ) is a piecewise linear convex function. If it were differentiable, we could easily apply a steepest descent method: choose a starting point λ1 and iteratively move along the descent direction −∇f(λk). However, since in our case f(λ) is not differentiable, we need to generalize the notion of a gradient.

Definition 8.1.1 A vector s is called a subgradient of f at λ∗ if

f(λ) ≥ f(λ∗) + sᵀ(λ− λ∗)

holds for all λ ∈ Rn. The set of all subgradients of f at λ∗ is denoted by ∂f(λ∗) and is called the

subdifferential of f at λ∗.

Lemma 8.1.2 A function f : Rn → R is convex if and only if for any λ∗ there exists a subgradient

of f at λ∗.

If f is differentiable, one can show that ∂f(λ∗) = {∇f(λ∗)}.

Lemma 8.1.3 Let f : Rn → R be a convex function. A vector λ∗ minimizes f over Rn if and only

if 0 ∈ ∂f(λ∗).


Thus, we get the following subgradient algorithm to find the minimum of a convex function.

(1) Let λ1 ∈ Rm1+ be arbitrary. Choose an infinite sequence {s1, s2, . . .} of positive step-lengths. Set k := 1.

(2) Compute a subgradient s ∈ ∂f(λk). If s = 0, then stop with optimal solution λk.

(3) Use step-length sk.

(4) Set λk+1 := λk − sk · s/‖s‖, set k := k + 1, and go to step (2).

One can show that this approach converges if the step lengths s1, s2, . . . are chosen in such a way that:

• lim_{i→∞} si = 0 and

• ∑_{i=1}^∞ si = ∞

hold. In general, a good selection of sk is important for the practical convergence. Applying the

subgradient approach to the Lagrange dual, we get the following algorithm:

(1) Let λ1 ∈ Rm1+ be arbitrary.

(2) In step k let λk be given.

(3) Compute a vector xk that maximizes cᵀx + λkᵀ(b1 − A1x) over Q. Note that b1 − A1xk is a subgradient of zLR at λk.

(4) Set λk+1 := max{0, λk − sk(b1 − A1xk)} (componentwise).

The step lengths sk have to be chosen carefully. Moreover, if λk = 0, stop.

Finally, it can make sense in practice to fix a maximum number of iterations after which the algorithm stops.
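Applied to the example of this chapter (only one dualized constraint, so λ is a scalar), the subgradient iteration can again use plain enumeration of Q as the oracle for the easy problem; the step lengths sk = 1/k satisfy the two conditions stated above. This is only an illustrative sketch, not a tuned implementation:

# Subgradient method for the Lagrange dual of the example (dualized constraint
# -x1 + 2 x2 <= 4, i.e. A1 = (-1, 2), b1 = 4).
Q = [(2, 2), (2, 3), (2, 4), (3, 1), (3, 2), (3, 3), (3, 4), (4, 0)]

def solve_lagrange_dual(num_iter=200):
    lam = 0.0                                   # lambda_1, arbitrary starting point
    best = float("inf")                         # best (smallest) upper bound so far
    for k in range(1, num_iter + 1):
        # easy problem: maximize c'x + lambda * (b1 - A1 x) over Q
        xk = max(Q, key=lambda x: 7 * x[0] + 2 * x[1] + lam * (4 + x[0] - 2 * x[1]))
        zk = 7 * xk[0] + 2 * xk[1] + lam * (4 + xk[0] - 2 * xk[1])
        best = min(best, zk)
        subgrad = 4 + xk[0] - 2 * xk[1]         # b1 - A1 xk
        lam = max(0.0, lam - (1.0 / k) * subgrad)   # step s_k = 1/k, projected onto lambda >= 0
    return best, lam

# For this instance, best approaches z_LD = 260/9 (about 28.9), while z_IP = 28.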