MINLP Solver Technology
Stefan Vigerske
10 March 2015

Page 1: MINLP Solver Technology

MINLP Solver Technology

Stefan Vigerske

10 March 2015

Page 2: MINLP Solver Technology

Outline

Solvers

Linear Relaxation of Non-Convex terms

More Relaxations for Quadratic programs

Even More Cuts ...

Reformulation / Presolving

Bound Tightening

Branching

Primal Heuristics

2 / 65


Page 4: MINLP Solver Technology

Deterministic Global Optimization Solvers for MINLP

ANTIGONE (Algorithms for coNTinuous / Integer GlobalOptimization of Nonlinear Equations)

I by R. Misener (Imperial) and C.A. Floudas (Princeton)I originating from a solver for pooling problemsI available as commercial solver in GAMSI Misener and Floudas [2012a,b, 2014], Misener [2012]

BARON (Branch And Reduce Optimization Navigator)I by N. Sahinidis (CMU) and M. Tawarmalani (Purdue)I one of the first general purpose codesI available as commercial solver in AIMMS and GAMSI Tawarmalani and Sahinidis [2002, 2004, 2005]

Solvers 4 / 65


Page 6: MINLP Solver Technology

Deterministic Global Optimization Solvers for MINLP

Couenne (Convex Over and Under ENvelopes for Nonlinear Estimation)
- by P. Belotti (CMU, Clemson, now FICO)
- COIN-OR open-source solver based on Bonmin (which builds on CBC and Ipopt)
- also supports trigonometric functions (sin, cos)
- available for AMPL and in GAMS and OS
- Belotti, Lee, Liberti, Margot, and Wächter [2009]

LindoAPI
- by Y. Lin and L. Schrage (LINDO Systems, Inc.)
- supports many functions, incl. trigonometric (sin, cos)
- available as commercial solver within LINDO and GAMS
- Lin and Schrage [2009]

Solvers 5 / 65


Page 8: MINLP Solver Technology

Deterministic Global Optimization Solvers for MINLP

SCIP (Solving Constraint Integer Programs)
- by Zuse Institute Berlin, TU Darmstadt, ...
- part of a constraint integer programming framework
- free for academic use, available for AMPL and in GAMS
- Berthold, Gleixner, Heinz, and Vigerske [2012], Vigerske [2013]

Techniques integrated from several paradigms:
- MIP: LP relaxation, cutting planes, column generation
- MIP, GO, CP, and SAT: branch-and-bound
- GO: spatial branching
- CP: domain propagation
- SAT: conflict analysis, periodic restarts

[Diagram: SCIP plugin architecture, with plugin classes and their default plugins:]
- Primal heuristics: actconsdiving, coefdiving, crossover, dins, feaspump, fixandinfer, fracdiving, guideddiving, intdiving, intshifting, linesearchdiving, localbranching, mutation, subnlp, objpscostdiving, octane, oneopt, pscostdiving, rens, rins, rootsoldiving, rounding, shifting, shift&prop, simplerounding, trivial, trysol, twoopt, undercover, veclendiving, zi round
- Branching rules: allfullstrong, fullstrong, inference, leastinf, mostinf, pscost, random, relpscost
- Constraint handlers: and, bounddisjunction, countsols, cumulative, indicator, integral, knapsack, linear, linking, logicor, or, orbitope, quadratic, setppc, soc, sos1, sos2, varbound, xor
- Separators: clique, cmir, flowcover, gomory, impliedbounds, intobj, mcf, oddcycle, rapidlearn, redcost, strongcg, zerohalf
- Presolvers: boundshift, dualfix, implics, inttobinary, probing, trivial
- Propagators: pseudoobj, rootredcost, vbound
- Node selectors: bfs, dfs, estimate, hybridestim, restartdfs, ...
- Readers: ccg, cip, cnf, fix, lp, mps, opb, ppm, rlp, sol, sos, zpl
- LP interfaces: clp, cpx, msk, none, qso, spx, xprs
- Further components: Variable, Event, Conflict, Cutpool, Dialog, Display, Implications, Tree, Pricer, Relax
Solvers 6 / 65

Page 9: MINLP Solver Technology

Upcoming Deterministic Global Solvers for MINLP

COCONUT (COntinuous CONstraints – Updating the Technology)
- by A. Neumaier, H. Schichl, E. Monfroy (Vienna), et al.
- rigorous calculations via interval arithmetic, thus avoiding floating-point roundoff errors
- still in development, no stable release so far
- Neumaier [2004], Bliek et al. [2001]

MINOTAUR (Mixed-Integer Nonconvex Optimization Toolbox – Algorithms, Underestimators, Relaxations)
- by A. Mahajan, S. Leyffer, J. Linderoth, J. Luedtke, T. Munson, et al. (Argonne, Wisconsin-Madison, IIT Bombay)
- open source with AMPL interface
- branch-and-bound with NLP relaxation (or its QP approximation); facilities to handle and manipulate algebraic expressions are in place
- Mahajan and Munson [2010], Mahajan et al. [2012]

Solvers 7 / 65


Page 11: MINLP Solver Technology

Further MINLP Solvers

Mixed-Integer Quadratic / Second-Order Cone:
- CPLEX (IBM): AIMMS, AMPL, GAMS
- GUROBI: AIMMS, AMPL, GAMS
- MOSEK: AIMMS, AMPL, GAMS
- XPRESS (FICO): AIMMS, AMPL, GAMS

Convex MINLP:
- AlphaECP (Westerlund et al., Åbo Akademi University, Finland): GAMS
- AOA (Paragon Decision Technology): AIMMS
- Bonmin (Bonami et al., COIN-OR): AMPL, GAMS
- DICOPT (Grossmann et al., CMU): GAMS
- FilMINT (Leyffer et al., Argonne; Linderoth et al., Lehigh): AMPL
- Knitro (Ziena Optimization): AIMMS, AMPL, GAMS
- SBB (ARKI Consulting): GAMS
- XPRESS-SLP (FICO)

Stochastic Search:
- LocalSolver (Innovation 24): GAMS
- OQNLP (OptTek Systems, Optimal Methods): GAMS

MINLP Solver Survey: Bussieck and Vigerske [2010]

Solvers 8 / 65

Page 12: MINLP Solver Technology

Outline

Solvers

Linear Relaxation of Non-Convex terms

More Relaxations for Quadratic programs

Even More Cuts ...

Reformulation / Presolving

Bound Tightening

Branching

Primal Heuristics

Linear Relaxation of Non-Convex terms 9 / 65

Page 13: MINLP Solver Technology

LP Relaxation of (MI)NLP

Convex constraints $g(x) \le 0$:
- Take some point $x^*$ and linearize at $x^*$:

  $g(x^*) + \nabla g(x^*)^T (x - x^*) \le 0$

- may not work if $g_j(\cdot)$ is nonconvex!

[Figure: two plots on $[-1,1]^2$ illustrating linearization of a convex and a nonconvex constraint]

Linear Relaxation of Non-Convex terms 10 / 65
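The gradient cut above is easy to generate mechanically. A minimal sketch (not from the slides; the constraint $g(x) = x_1^2 + x_2^2 - 1 \le 0$ and all names are illustrative):

```python
# Gradient cut for a convex constraint g(x) <= 0: linearizing at x* yields
#   g(x*) + grad g(x*)^T (x - x*) <= 0,
# i.e. a linear cut a^T x <= b. Valid whenever g is convex.

def g(x):
    return x[0]**2 + x[1]**2 - 1.0          # example: unit disk

def grad_g(x):
    return [2.0 * x[0], 2.0 * x[1]]

def gradient_cut(xstar):
    """Return (a, b) with a^T x <= b, the linearization of g at xstar."""
    a = grad_g(xstar)
    b = sum(ai * xi for ai, xi in zip(a, xstar)) - g(xstar)
    return a, b

a, b = gradient_cut([1.0, 0.0])             # cut: 2*x1 <= 2, i.e. x1 <= 1
```

Every point of the disk satisfies the generated cut, which is the reason outer-approximation schemes can accumulate such cuts safely for convex constraints.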


Page 15: MINLP Solver Technology

Convex Underestimators

Inequalities $g(x^*) + \nabla g(x^*)^T(x - x^*) \le 0$ may not be valid!
- use a convex underestimator: convex and below $g(x)$ for all $x \in [\underline{x}, \overline{x}]$
- introduces a convexification gap

[Figure: $g(x)$ and two convex underestimators $g_c(x)$ on $[-6, 4]$]

- convex envelopes (the largest convex function that underestimates some $g(x)$) are difficult to find in general
- but are known for several simple cases: concave functions, $x \mapsto x^k$ for $k \in 2\mathbb{Z}+1$, $x \cdot y$, ...

Linear Relaxation of Non-Convex terms 11 / 65
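For the simplest of these cases, a univariate concave function, the convex envelope over an interval is just the secant through the endpoints. A small sketch (`math.log` and the interval are illustrative choices):

```python
import math

# Convex envelope of a univariate concave function on [lo, hi]:
# the secant through (lo, g(lo)) and (hi, g(hi)) underestimates g on [lo, hi]
# and is tight at both endpoints.

def secant_underestimator(g, lo, hi):
    slope = (g(hi) - g(lo)) / (hi - lo)
    return lambda x: g(lo) + slope * (x - lo)

env = secant_underestimator(math.log, 1.0, 2.0)
# env(x) <= log(x) on [1, 2], with equality at x = 1 and x = 2
```

The gap between `env` and `g` in the interior is exactly the convexification gap mentioned above; it shrinks as the interval is tightened.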


Page 18: MINLP Solver Technology

Reformulation of factorable functions
- for general factorable functions (recursive sums of products of univariate functions), reformulate into simple cases by introducing new variables and equations

Example: $g(x) = \sqrt{\exp(x_1^2)\,\ln(x_2)}$, $x_1 \in [0, 2]$, $x_2 \in [1, 2]$

Reformulation:
- $g = \sqrt{y_1}$
- $y_1 = y_2 y_3$, $y_1 \in [0, \ln(2)e^4]$
- $y_2 = \exp(y_4)$, $y_2 \in [1, e^4]$
- $y_3 = \ln(x_2)$, $y_3 \in [0, \ln(2)]$
- $y_4 = x_1^2$, $y_4 \in [0, 4]$

[Figure: surface plot of $g$ over $[0,2] \times [1,2]$]

Convex relaxation:

$\sqrt{\underline{y}_1} + \frac{y_1 - \underline{y}_1}{\sqrt{\underline{y}_1} + \sqrt{\overline{y}_1}} \le g \le \sqrt{y_1}; \qquad \ln\underline{x}_2 + (x_2 - \underline{x}_2)\,\frac{\ln\overline{x}_2 - \ln\underline{x}_2}{\overline{x}_2 - \underline{x}_2} \le y_3 \le \ln(x_2)$

$\max\left\{\overline{y}_2 y_3 + \overline{y}_3 y_2 - \overline{y}_2\overline{y}_3,\ \underline{y}_2 y_3 + \underline{y}_3 y_2 - \underline{y}_2\underline{y}_3\right\} \le y_1 \le \min\left\{\underline{y}_2 y_3 + \overline{y}_3 y_2 - \underline{y}_2\overline{y}_3,\ \overline{y}_2 y_3 + \underline{y}_3 y_2 - \overline{y}_2\underline{y}_3\right\} \quad \cdots$

Linear Relaxation of Non-Convex terms 12 / 65
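The interval bounds attached to the new variables are obtained by propagating the bounds on $x_1, x_2$ through the reformulated expression graph. A pure-Python sketch of that propagation for this example:

```python
import math

# Propagate x1 in [0,2], x2 in [1,2] through the reformulated expression
# graph to recover the intervals shown above for y1, ..., y4.

def mul(a, b):
    """Interval product [a] * [b]."""
    prods = [a[0]*b[0], a[0]*b[1], a[1]*b[0], a[1]*b[1]]
    return (min(prods), max(prods))

x1, x2 = (0.0, 2.0), (1.0, 2.0)

y4 = (x1[0]**2, x1[1]**2)                  # x1^2, monotone here since x1 >= 0
y2 = (math.exp(y4[0]), math.exp(y4[1]))    # exp is increasing
y3 = (math.log(x2[0]), math.log(x2[1]))    # ln is increasing
y1 = mul(y2, y3)                           # -> (0, ln(2) * e^4)
```

These derived intervals are what the secant and McCormick formulas in the relaxation above are built from.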


Page 22: MINLP Solver Technology

Basic Terms

Which algebraic expressions are not broken up further (i.e., for which expressions convex and concave estimators are known) depends on the solver.

Example: Classification of terms in ANTIGONE:

[Misener and Floudas, 2014]

Linear Relaxation of Non-Convex terms 13 / 65

Page 23: MINLP Solver Technology

Implications of Expression Analysis Approach

- Deterministic global optimization algorithms need to know the algebraic expressions that the equations consist of, so they know how to convexify.
- Therefore, no deterministic global solver supports all function types, e.g.,
  - ANTIGONE, BARON, and SCIP do not support trigonometric functions.
  - Couenne does not support max or min.
  - No deterministic global solver supports external functions that are given only by routines for point-wise evaluation of function values and derivatives.

Linear Relaxation of Non-Convex terms 14 / 65


Page 25: MINLP Solver Technology

Bound your Variables!
- To construct convex underestimators, variable bounds are typically required. Otherwise, the solver may "guess" some bounds (ANTIGONE, BARON) or is not guaranteed to terminate in finite time (Couenne, SCIP).
- Example: min x · y

    BARON version 12.3.3. Built: LNX-64 Fri Jun 14 08:14:38 EDT 2013
    Preprocessing found feasible solution with value -.732842950994E+15
    [...]
    User did not provide appropriate variable bounds.
    Some model expressions are unbounded.
    We may not be able to guarantee globality.
    Number of missing variable or expression bounds = 4
    Number of variable or expression bounds autoset = 4
    [...]
    *** Normal Completion ***
    *** User did not provide appropriate variable bounds ***
    *** Globality is therefore not guaranteed ***
    **** SOLVER STATUS 1 Normal Completion
    **** MODEL STATUS 2 Locally Optimal
    **** OBJECTIVE VALUE -732842956409000.0000

- tighter variable bounds ⇒ tighter relaxations

Linear Relaxation of Non-Convex terms 15 / 65
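The effect can be illustrated directly on min x · y: the best lower bound a box-based relaxation of x · y can certify is the minimal corner product, which deteriorates quadratically as the (guessed) box grows. A sketch (this is not BARON's actual bound-guessing logic):

```python
# Lower bound on x*y over a box [xl, xu] x [yl, yu]: the bilinear envelope
# attains its minimum at a box corner, so the certified bound is the minimal
# corner product -- it blows up as the box grows.

def product_lower_bound(xl, xu, yl, yu):
    return min(xl*yl, xl*yu, xu*yl, xu*yu)

bounds = {b: product_lower_bound(-b, b, -b, b) for b in (1.0, 10.0, 100.0)}
# {1.0: -1.0, 10.0: -100.0, 100.0: -10000.0}
```

With no finite bounds at all, no finite lower bound exists, which is why BARON can only report a "locally optimal" solution above.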

Page 26: MINLP Solver Technology

Fractional Terms

Convex underestimator of $x/y$ for $x, y \ge 0$ by Zamora and Grossmann [1998]:

$\frac{x}{y} \ge \frac{1}{y}\left(\frac{x + \sqrt{\underline{x}\,\overline{x}}}{\sqrt{\underline{x}} + \sqrt{\overline{x}}}\right)^2,$

(convex envelope if $\underline{y} = 0$, $\overline{y} = \infty$ [Tawarmalani and Sahinidis, 2001])

For $\overline{y} < \infty$, convex and concave envelopes are known in closed form [Zamora and Grossmann, 1999, Tawarmalani and Sahinidis, 2002, Jach et al., 2008]. The convex envelope is a piecewise expression in $\underline{x}, \overline{x}, \underline{y}, \overline{y}$ (not reproduced here); the concave envelope is

$\frac{x}{y} \le \frac{1}{\underline{y}\,\overline{y}} \min\left\{\overline{y}x - \underline{x}y + \underline{x}\,\underline{y},\ \underline{y}x - \overline{x}y + \overline{x}\,\overline{y}\right\}.$

More formulas for $x/y$ with $\underline{x} < 0 < \overline{x}$, for $\frac{ax+by}{cx+dy}$, and for $\frac{f(x)}{y}$ with $f$ univariate concave, ... [Tawarmalani and Sahinidis, 2001, 2002].

Linear Relaxation of Non-Convex terms 16 / 65
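The Zamora–Grossmann underestimator is straightforward to evaluate; a spot-check of its validity and tightness on a sample box (the box [1,4] × [1,2] is an illustrative choice):

```python
import math

# Zamora-Grossmann convex underestimator of x/y on [xl,xu] x [yl,yu], x,y >= 0:
#   x/y >= (1/y) * ((x + sqrt(xl*xu)) / (sqrt(xl) + sqrt(xu)))**2

xl, xu, yl, yu = 1.0, 4.0, 1.0, 2.0

def underestimator(x, y):
    return ((x + math.sqrt(xl * xu)) / (math.sqrt(xl) + math.sqrt(xu)))**2 / y

# validity on a sample grid; tight at x = xl and x = xu
valid = all(underestimator(x, y) <= x / y + 1e-9
            for x in (1.0, 2.0, 3.0, 4.0) for y in (1.0, 1.5, 2.0))
```

On this box the check reduces to $((x+2)/3)^2 \le x$, i.e. $(x-1)(x-4) \le 0$, so the estimator is valid on [1, 4] and coincides with $x/y$ at both $x$-endpoints.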


Page 29: MINLP Solver Technology

Multilinear Terms

Bilinear $x \cdot y$:

$\max\left\{\underline{x}y + \underline{y}x - \underline{x}\,\underline{y},\ \overline{x}y + \overline{y}x - \overline{x}\,\overline{y}\right\} \le x \cdot y \le \min\left\{\underline{x}y + \overline{y}x - \underline{x}\,\overline{y},\ \overline{x}y + \underline{y}x - \overline{x}\,\underline{y}\right\}$

[McCormick, 1976, Al-Khayyal and Falk, 1983]

Trilinear $x \cdot y \cdot z$:
- similar formulas by recursion, considering $(x \cdot y) \cdot z$, $x \cdot (y \cdot z)$, and $(x \cdot z) \cdot y$ ⇒ 18 inequalities for the convex underestimator [Meyer and Floudas, 2004]
- if mixed signs, e.g., $\underline{x} < 0 < \overline{x}$, recursion may not provide the convex envelope
- Meyer and Floudas [2004] derive the facets of the envelopes: for the convex envelope, distinguish 9 cases, each giving 5-6 linear inequalities
- implemented in Couenne

Quadrilinear $u \cdot v \cdot w \cdot x$:
- Cafieri et al. [2010]: apply the bilinear and trilinear formulas to the groupings $((u \cdot v) \cdot w) \cdot x$, $(u \cdot v) \cdot (w \cdot x)$, $(u \cdot v \cdot w) \cdot x$, $(u \cdot v) \cdot w \cdot x$ and compare strength numerically

Linear Relaxation of Non-Convex terms 17 / 65
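The McCormick estimators and one recursive grouping for a trilinear term can be sketched in a few lines (the sample box is an illustrative choice; the recursion needs interval bounds for the intermediate product x·y, and the composition used here is valid because z ≥ 0 on this box):

```python
# McCormick under-/overestimators for w = x*y on [xl,xu] x [yl,yu],
# applied recursively to x*y*z via the grouping (x*y)*z.

def mc_under(x, y, xl, xu, yl, yu):
    return max(xl*y + yl*x - xl*yl, xu*y + yu*x - xu*yu)

def mc_over(x, y, xl, xu, yl, yu):
    return min(xl*y + yu*x - xl*yu, xu*y + yl*x - xu*yl)

def interval_mul(xl, xu, yl, yu):
    p = [xl*yl, xl*yu, xu*yl, xu*yu]
    return min(p), max(p)

# trilinear x*y*z on [-1,2] x [0,3] x [1,2], grouped as (x*y)*z
xl, xu, yl, yu, zl, zu = -1.0, 2.0, 0.0, 3.0, 1.0, 2.0
wl, wu = interval_mul(xl, xu, yl, yu)       # bounds for w = x*y

def tri_under(x, y, z):
    w = mc_under(x, y, xl, xu, yl, yu)      # underestimate w = x*y ...
    return mc_under(w, z, wl, wu, zl, zu)   # ... then underestimate w*z
```

As the slide notes, such recursive bounds are generally weaker than the 18-inequality trilinear underestimator, and with mixed signs they need not give the envelope at all.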


Page 33: MINLP Solver Technology

Multilinear Terms

Let $f(x) = \sum_{I \in \mathcal{I}} a_I \prod_{i \in I} x_i$ ($I \subseteq \{1, \ldots, n\}$) with bounds $x \in [\underline{x}, \overline{x}]$.

- Luedtke et al. [2012]: compare the strength of the relaxation from recursive application of McCormick with the convex envelope
- Rikun [1997]: the convex envelope is vertex polyhedral and given by

$\min_{\lambda \in \mathbb{R}^{2^n}} \Big\{ \sum_p \lambda_p f(v_p) : x = \sum_p \lambda_p v_p,\ \sum_p \lambda_p = 1,\ \lambda \ge 0 \Big\} \qquad (C)$

$= \max_{a \in \mathbb{R}^n, b \in \mathbb{R}} \big\{ a^T x + b : a^T v_p + b \le f(v_p)\ \forall p \big\}, \qquad (D)$

where $\{v_p : p = 1, \ldots, 2^n\} = \operatorname{vert}([\underline{x}, \overline{x}])$ are the vertices of the box $[\underline{x}, \overline{x}]$.
- (C) and (D) allow to compute facets of the convex envelope
- naively: given $x^*$, take any $n+1$ vertices ($\binom{2^n}{n+1}$ choices!), check whether the induced hyperplane underestimates $f(v_p)$ for every $p$, take one with highest value at $x^*$
- Bao et al. [2009] (BARON): for quadratic multilinear functions, solve (C) by column generation

Linear Relaxation of Non-Convex terms 18 / 65
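The naive enumeration behind (C) can be written out directly for small n: the optimum of (C) is attained using at most n+1 vertices, so one can try every (n+1)-subset, solve for the convex multipliers, and keep the best feasible combination. A pure-Python sketch (brute force, only sensible for tiny n; f(x, y) = x·y on [0,1]² is an illustrative instance):

```python
from itertools import combinations, product

def solve_multipliers(pts, xstar):
    """Solve sum_p lam_p * v_p = xstar, sum_p lam_p = 1 (Gauss-Jordan)."""
    n = len(xstar)
    A = [[pts[p][i] for p in range(n + 1)] for i in range(n)]
    A.append([1.0] * (n + 1))
    b = list(xstar) + [1.0]
    for col in range(n + 1):
        piv = max(range(col, n + 1), key=lambda r: abs(A[r][col]))
        if abs(A[piv][col]) < 1e-12:
            return None                      # affinely dependent subset
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(n + 1):
            if r != col:
                m = A[r][col] / A[col][col]
                A[r] = [ar - m * ac for ar, ac in zip(A[r], A[col])]
                b[r] -= m * b[col]
    return [b[r] / A[r][r] for r in range(n + 1)]

def envelope_value(f, lo, hi, xstar):
    """Value of the vertex-polyhedral convex envelope (problem (C)) at xstar."""
    n = len(lo)
    verts = list(product(*zip(lo, hi)))      # the 2^n box vertices
    best = float("inf")
    for subset in combinations(verts, n + 1):
        lam = solve_multipliers(subset, xstar)
        if lam is not None and all(l >= -1e-9 for l in lam):
            best = min(best, sum(l * f(*v) for l, v in zip(lam, subset)))
    return best

# bilinear sanity check: the envelope of x*y on [0,1]^2 is the McCormick bound
val = envelope_value(lambda x, y: x * y, (0.0, 0.0), (1.0, 1.0), (0.75, 0.75))
```

For n = 2 and f(x, y) = x·y this reproduces the McCormick lower bound, e.g. 0.5 at (0.75, 0.75); the combinatorial blow-up of the subset enumeration is exactly the $\binom{2^n}{n+1}$ cost noted on the slide.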


Page 37: MINLP Solver Technology

Edge-Concave Functions

Definition [Tardella, 2004]: A function $f : \mathbb{R}^n \to \mathbb{R}$ is edge-concave on the box $[\underline{x}, \overline{x}]$ if it is concave on all segments that are parallel to an edge of $[\underline{x}, \overline{x}]$.

Tardella [1988/89]: If $f$ is twice continuously differentiable, then it is edge-concave $\Leftrightarrow \frac{\partial^2 f}{\partial x_i^2} \le 0$ for all $i = 1, \ldots, n$.

Tardella [1988/89]: Edge-concave functions admit a vertex-polyhedral envelope.

A quadratic function $x^T Q x = \sum_{i \le j} Q_{i,j} x_i x_j$ is edge-concave if $Q_{i,i} \le 0$ for all $i$.

Linear Relaxation of Non-Convex terms 19 / 65
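By the twice-differentiable criterion above, testing edge-concavity of a quadratic reduces to inspecting the diagonal of Q; a trivial sketch (the coefficient matrix is illustrative):

```python
# Edge-concavity of x^T Q x (Q given by its upper-triangular coefficients):
# the second derivative w.r.t. x_i is 2*Q[i][i], so the quadratic is
# edge-concave iff every diagonal entry is <= 0 -- the off-diagonal
# (bilinear) coefficients may be arbitrary.

def is_edge_concave_quadratic(Q):
    return all(Q[i][i] <= 0.0 for i in range(len(Q)))

Q = [[-1.0, 3.0, 2.0],
     [ 0.0, 0.0, -4.0],
     [ 0.0, 0.0, -0.5]]
# is_edge_concave_quadratic(Q) is True despite the large bilinear terms
```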


Page 40: MINLP Solver Technology

Cuts from Edge-Concave Functions (n = 3)

Meyer and Floudas [2005], Tardella [2008]: Some facets of the convex envelope of

$\underbrace{Q_{1,1}}_{\le 0} x_1^2 + \underbrace{Q_{2,2}}_{\le 0} x_2^2 + \underbrace{Q_{3,3}}_{\le 0} x_3^2 + \underbrace{Q_{1,2}}_{\ne 0} x_1 x_2 + \underbrace{Q_{1,3}}_{\ne 0} x_1 x_3 + \underbrace{Q_{2,3}}_{\ne 0} x_2 x_3$

dominate cuts from a term-wise relaxation (McCormick).

GloMIQO 1.0 (predecessor of ANTIGONE) [Misener and Floudas, 2012a]:
- group quadratic terms in constraints into sums of three-dimensional edge-concave and convex functions
- for edge-concave terms with $Q_{1,2} Q_{1,3} Q_{2,3} \ne 0$, compute facets of the convex envelope (naive approach, 70 candidates to check)

GloMIQO 2.0 / ANTIGONE [Misener, 2012]:
- reduced-complexity approach based on Meyer and Floudas [2005] (exploiting dominance relations)
- complexity scales with $n!$ instead of $\binom{2^n}{n+1}$

Linear Relaxation of Non-Convex terms 20 / 65


Page 43: MINLP Solver Technology

[Figure-only slide]

Linear Relaxation of Non-Convex terms 21 / 65

Page 44: MINLP Solver Technology

Bivariate Terms

Given $f(x, y) \in C^2(\mathbb{R}^2, \mathbb{R})$, $x \in [\underline{x}, \overline{x}]$, $y \in [\underline{y}, \overline{y}]$, with fixed convexity behaviour ($\operatorname{sign} \nabla^2_{x,x} f$, $\operatorname{sign} \nabla^2_{y,y} f$, $\operatorname{sign} \nabla^2_{x,y} f$, $\operatorname{sign} \det \nabla^2 f$ are constant on $[\underline{x}, \overline{x}] \times [\underline{y}, \overline{y}]$).

Convexity behaviour of f                              Convex envelope
jointly convex                                        f itself
concave in x, concave in y                            vertex-polyhedral, thus McCormick [1976]
convex in x, concave in y (or the other way around)   Tawarmalani and Sahinidis [2001]; Locatelli and Schoen [2014], Locatelli [2010]
convex in x, convex in y, indefinite                  Jach, Michaels, and Weismantel [2008]; Locatelli and Schoen [2014], Locatelli [2010]

[Figure: example surfaces $f(x,y) = x^2/y$, $f(x,y) = xy$, $f(x,y) = -\sqrt{x}\,y^2$, $f(x,y) = x^2 y^2$]

Linear Relaxation of Non-Convex terms 22 / 65


Page 46: MINLP Solver Technology

f(x, y) convex in x and concave in y

Convex envelope at $(x^0, y^0) \in \operatorname{int}([\underline{x}, \overline{x}] \times [\underline{y}, \overline{y}])$ given by

$\mathrm{vex}[f](x^0, y^0) := \min\ t f(r, \underline{y}) + (1-t) f(s, \overline{y})$
$\text{s.t.}\ \begin{pmatrix} x^0 \\ y^0 \end{pmatrix} = t \begin{pmatrix} r \\ \underline{y} \end{pmatrix} + (1-t) \begin{pmatrix} s \\ \overline{y} \end{pmatrix},\quad t \in [0, 1],\ r, s \in [\underline{x}, \overline{x}]$

[Tawarmalani and Sahinidis, 2001, Jach et al., 2008]

Using $t = \frac{\overline{y} - y^0}{\overline{y} - \underline{y}}$, $r(s) = \frac{\overline{y} - \underline{y}}{\overline{y} - y^0} x^0 - \frac{y^0 - \underline{y}}{\overline{y} - y^0} s$, this simplifies to

$\mathrm{vex}[f](x^0, y^0) = \min_{s \in [\underline{s}, \overline{s}]} \frac{\overline{y} - y^0}{\overline{y} - \underline{y}}\, f\!\left(\frac{\overline{y} - \underline{y}}{\overline{y} - y^0} x^0 - \frac{y^0 - \underline{y}}{\overline{y} - y^0} s,\ \underline{y}\right) + \frac{y^0 - \underline{y}}{\overline{y} - \underline{y}}\, f(s, \overline{y})$

⇒ a univariate convex optimization problem

A maximally touching hyperplane on the graph of $\mathrm{vex}[f]$ at $(x^0, y^0)$ is given by

$\begin{pmatrix} x^0 \\ y^0 \\ \mathrm{vex}[f](x^0, y^0) \end{pmatrix} + \mathbb{R} \begin{pmatrix} s^* - r^* \\ \overline{y} - \underline{y} \\ f(s^*, \overline{y}) - f(r^*, \underline{y}) \end{pmatrix} + \mathbb{R} \begin{pmatrix} 1 \\ 0 \\ \frac{\partial f}{\partial x}(x, y) \end{pmatrix}, \quad (x, y) = (r^*, \underline{y}) \text{ or } (s^*, \overline{y}).$

Ballerstein et al. [2013]: implementation in SCIP (so far, classification of the convexity behaviour only for $f(x, y) = x^p y^q$, $x, y \ge 0$)

Linear Relaxation of Non-Convex terms 23 / 65
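The univariate reduction can be exploited numerically: for fixed $(x^0, y^0)$, minimize the one-dimensional convex function over s (ternary search suffices). A sketch with the illustrative choice $f(x, y) = x^2\sqrt{y}$, which is convex in x and concave in y on [0,2] × [1,4]; for simplicity this ignores the restriction of s that keeps r(s) inside $[\underline{x}, \overline{x}]$:

```python
import math

xl, xu, yl, yu = 0.0, 2.0, 1.0, 4.0
f = lambda x, y: x * x * math.sqrt(y)       # convex in x, concave in y

def vex(x0, y0, iters=200):
    """Evaluate the convex envelope at (x0, y0) via the 1-D reduction."""
    w = (y0 - yl) / (yu - yl)               # weight of the y = yu piece
    def phi(s):
        r = (yu - yl) / (yu - y0) * x0 - (y0 - yl) / (yu - y0) * s
        return (1.0 - w) * f(r, yl) + w * f(s, yu)
    lo, hi = xl, xu
    for _ in range(iters):                  # ternary search: phi is convex
        m1, m2 = lo + (hi - lo) / 3, hi - (hi - lo) / 3
        if phi(m1) < phi(m2):
            hi = m2
        else:
            lo = m1
    return phi(0.5 * (lo + hi))

v = vex(1.0, 2.0)
```

For this instance phi is a simple quadratic in s and the search converges to the envelope value 1.2, strictly below $f(1, 2) = \sqrt{2}$ -- the convexification gap at an interior point.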



α-Underestimators [Androulakis et al., 1995, Adjiman and Floudas, 1996]

Consider a function $x^{\mathsf{T}}Ax + b^{\mathsf{T}}x$ with $A \not\succeq 0$.

Let $\alpha \in \mathbb{R}^n$ be such that $A - \operatorname{diag}(\alpha) \succeq 0$. Then

$$x^{\mathsf{T}}Ax + b^{\mathsf{T}}x + (x - \underline{x})^{\mathsf{T}} \operatorname{diag}(\alpha) (\overline{x} - x)$$

is a convex underestimator of $x^{\mathsf{T}}Ax + b^{\mathsf{T}}x$ w.r.t. the box $[\underline{x}, \overline{x}]$.

A simple choice for $\alpha$ is $\alpha_i = \lambda_1(A)$ (minimal eigenvalue of $A$), $i = 1, \ldots, n$.

- available in ANTIGONE (in a modified form, see Misener [2012]) and Couenne (off by default)
- can be generalized to twice continuously differentiable functions $g(x)$ by bounding the minimal eigenvalue of the Hessian $\nabla^2 g(x)$ for $x \in [\underline{x}, \overline{x}]$ [Adjiman et al., 1998]
- the underestimator is exact for $x_i \in \{\underline{x}_i, \overline{x}_i\}$
- thus, if $x$ is a vector of binary variables ($x_i^2 = x_i$), then
$$x^{\mathsf{T}}Ax + b^{\mathsf{T}}x = x^{\mathsf{T}}(A - \operatorname{diag}(\alpha))x + (b + \alpha)^{\mathsf{T}}x$$
for $x \in \{0,1\}^n$ and $A - \operatorname{diag}(\alpha) \succeq 0$. ⇒ used in CPLEX

More Relaxations for Quadratic programs 25 / 65
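The construction is easy to check numerically. A sketch with toy 2×2 data (not from the slides), assuming numpy is available:

```python
import numpy as np

A = np.array([[1.0, 3.0], [3.0, 1.0]])   # eigenvalues -2 and 4: indefinite
b = np.array([1.0, -1.0])
lo, hi = np.zeros(2), np.ones(2)          # box [0, 1]^2

alpha = np.full(2, np.linalg.eigvalsh(A)[0])   # simple choice: alpha_i = min eigenvalue

f = lambda x: x @ A @ x + b @ x
# alpha-underestimator: f(x) + (x - lo)^T diag(alpha) (hi - x)
under = lambda x: f(x) + (x - lo) @ (alpha * (hi - x))

# A - diag(alpha) is positive semidefinite, so `under` is convex ...
assert np.linalg.eigvalsh(A - np.diag(alpha))[0] >= -1e-9
# ... it matches f at box vertices and underestimates f inside the box
corner, mid = np.array([1.0, 0.0]), np.array([0.5, 0.5])
assert abs(under(corner) - f(corner)) < 1e-12
assert under(mid) <= f(mid)
```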


Eigenvalue Reformulation

Consider a function $x^{\mathsf{T}}Ax + b^{\mathsf{T}}x$ with $A \not\succeq 0$.

- Let $\lambda_1, \ldots, \lambda_n$ be the eigenvalues of $A$ and $v_1, \ldots, v_n$ be corresponding eigenvectors. Then
$$x^{\mathsf{T}}Ax + b^{\mathsf{T}}x + c = \sum_{i=1}^n \lambda_i (v_i^{\mathsf{T}} x)^2 + b^{\mathsf{T}}x + c. \qquad (E)$$
- introducing auxiliary variables $z_i = v_i^{\mathsf{T}} x$, the function becomes separable:
$$\sum_{i=1}^n \lambda_i z_i^2 + b^{\mathsf{T}}x + c$$
- underestimate the concave functions $z_i \mapsto \lambda_i z_i^2$, $\lambda_i < 0$, as known
- one of the methods for nonconvex QP in CPLEX (keeps the convex $\lambda_i z_i^2$ in the objective and solves the relaxation by QP simplex)
- ANTIGONE can make use of representation (E) to compute cuts in the spirit of Saxena et al. [2011] [Misener and Floudas, 2012b]

More Relaxations for Quadratic programs 26 / 65
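Identity (E) can be verified directly with a spectral decomposition. A sketch with toy data (numpy assumed available; any symmetric indefinite matrix works):

```python
import numpy as np

# Spectral check of (E): x^T A x = sum_i lambda_i (v_i^T x)^2
A = np.array([[0.0, 2.0], [2.0, 3.0]])   # eigenvalues -1 and 4: indefinite
lam, V = np.linalg.eigh(A)                # columns of V: orthonormal eigenvectors

x = np.array([1.5, -0.7])
z = V.T @ x                               # auxiliary variables z_i = v_i^T x
assert np.isclose(x @ A @ x, np.sum(lam * z**2))

# the negative-eigenvalue terms lambda_i z_i^2 are the concave ones to underestimate
assert lam[0] < 0 < lam[1]                # eigh sorts eigenvalues ascending
```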


Reformulation Linearization Technique (RLT)

Consider the QCQP

$$\begin{aligned}
\min\;& x^{\mathsf{T}}Q_0 x + b_0^{\mathsf{T}}x && \text{(quadratic)}\\
\text{s.t.}\;& x^{\mathsf{T}}Q_k x + b_k^{\mathsf{T}}x \leq c_k,\; k = 1, \ldots, q && \text{(quadratic)}\\
& Ax \leq b && \text{(linear)}\\
& \underline{x} \leq x \leq \overline{x} && \text{(linear)}
\end{aligned}$$

Introduce new variables $X_{i,j} = x_i x_j$:

$$\begin{aligned}
\min\;& \langle Q_0, X \rangle + b_0^{\mathsf{T}}x && \text{(linear)}\\
\text{s.t.}\;& \langle Q_k, X \rangle + b_k^{\mathsf{T}}x \leq c_k,\; k = 1, \ldots, q && \text{(linear)}\\
& Ax \leq b && \text{(linear)}\\
& \underline{x} \leq x \leq \overline{x} && \text{(linear)}\\
& X = xx^{\mathsf{T}} && \text{(quadratic)}
\end{aligned}$$

Adams and Sherali [1986], Sherali and Alameddine [1992], Sherali and Adams [1999]:
- relax $X = xx^{\mathsf{T}}$ by linear inequalities that are derived from multiplications of pairs of linear constraints

More Relaxations for Quadratic programs 27 / 65


RLT: Multiplying Bound Constraints

Multiplying bounds $\underline{x}_i \leq x_i \leq \overline{x}_i$ and $\underline{x}_j \leq x_j \leq \overline{x}_j$ and using $X_{i,j} = x_i x_j$ yields

$$(x_i - \underline{x}_i)(x_j - \underline{x}_j) \geq 0 \;\Rightarrow\; X_{i,j} \geq \underline{x}_i x_j + \underline{x}_j x_i - \underline{x}_i \underline{x}_j$$
$$(x_i - \overline{x}_i)(x_j - \overline{x}_j) \geq 0 \;\Rightarrow\; X_{i,j} \geq \overline{x}_i x_j + \overline{x}_j x_i - \overline{x}_i \overline{x}_j$$
$$(x_i - \underline{x}_i)(x_j - \overline{x}_j) \leq 0 \;\Rightarrow\; X_{i,j} \leq \underline{x}_i x_j + \overline{x}_j x_i - \underline{x}_i \overline{x}_j$$
$$(x_i - \overline{x}_i)(x_j - \underline{x}_j) \leq 0 \;\Rightarrow\; X_{i,j} \leq \overline{x}_i x_j + \underline{x}_j x_i - \overline{x}_i \underline{x}_j$$

- these are exactly the McCormick inequalities that we have seen earlier
- the resulting linear relaxation is

$$\begin{aligned}
\min\;& \langle Q_0, X \rangle + b_0^{\mathsf{T}}x\\
\text{s.t.}\;& \langle Q_k, X \rangle + b_k^{\mathsf{T}}x \leq c_k,\; k = 1, \ldots, q\\
& Ax \leq b, \quad \underline{x} \leq x \leq \overline{x}\\
& X_{i,j} \geq \underline{x}_i x_j + \underline{x}_j x_i - \underline{x}_i \underline{x}_j,\; i, j = 1, \ldots, n,\; i \leq j\\
& X_{i,j} \geq \overline{x}_i x_j + \overline{x}_j x_i - \overline{x}_i \overline{x}_j,\; i, j = 1, \ldots, n,\; i \leq j\\
& X_{i,j} \leq \underline{x}_i x_j + \overline{x}_j x_i - \underline{x}_i \overline{x}_j,\; i, j = 1, \ldots, n\\
& X = X^{\mathsf{T}}
\end{aligned}$$

- these inequalities are used by all solvers
- not every solver introduces $X_{i,j}$ variables explicitly

More Relaxations for Quadratic programs 28 / 65
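The four inequalities above can be sketched as bounding functions of $(x_i, x_j)$ that sandwich the product everywhere on the box (the helper name is illustrative):

```python
def mccormick_bounds(xi_lo, xi_hi, xj_lo, xj_hi):
    """The four bound-product inequalities for X_ij = x_i * x_j, returned as
    lower- and upper-bounding functions of (x_i, x_j) (McCormick envelope)."""
    lower = [lambda xi, xj, a=xi_lo, c=xj_lo: a * xj + c * xi - a * c,
             lambda xi, xj, a=xi_hi, c=xj_hi: a * xj + c * xi - a * c]
    upper = [lambda xi, xj, a=xi_lo, c=xj_hi: a * xj + c * xi - a * c,
             lambda xi, xj, a=xi_hi, c=xj_lo: a * xj + c * xi - a * c]
    return lower, upper

# the envelope sandwiches the product everywhere on the box [-1,2] x [0,3]
lower, upper = mccormick_bounds(-1.0, 2.0, 0.0, 3.0)
for xi in (-1.0, 0.5, 2.0):
    for xj in (0.0, 1.5, 3.0):
        assert max(l(xi, xj) for l in lower) <= xi * xj <= min(u(xi, xj) for u in upper)
```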


RLT: Multiplying Bounds and Inequalities

Additional inequalities are derived by multiplying pairs of linear equations and bound constraints:

$$(A_\ell^{\mathsf{T}} x - b_\ell)(x_j - \underline{x}_j) \geq 0 \;\Rightarrow\; \sum_{i=1}^n A_{\ell,i} (X_{i,j} - x_i \underline{x}_j) - b_\ell (x_j - \underline{x}_j) \geq 0$$
$$(A_\ell^{\mathsf{T}} x - b_\ell)(A_{\ell'}^{\mathsf{T}} x - b_{\ell'}) \geq 0 \;\Rightarrow\; A_\ell^{\mathsf{T}} X A_{\ell'} - (b_\ell A_{\ell'} + b_{\ell'} A_\ell)^{\mathsf{T}} x + b_\ell b_{\ell'} \geq 0$$

ANTIGONE [Misener and Floudas, 2012b]:
- linear inequality ($A_\ell^{\mathsf{T}} x - b_\ell \leq 0$) × variable bound ($x_j - \underline{x}_j \geq 0$) ⇒ consider for cut generation and bound tightening
- linear (in)equality ($A_\ell^{\mathsf{T}} x - b_\ell \leq / = 0$) × linear (in)equality ($A_{\ell'}^{\mathsf{T}} x - b_{\ell'} \leq / = 0$) ⇒ consider for cut generation and bound tightening
- linear equality ($A_\ell^{\mathsf{T}} x - b_\ell = 0$) × variable ($x_j$) ⇒ add to the model (strong cut in the context of QAP, pooling, ...) (close to the bilinear term elimination of Liberti and Pantelides [2006])
- in all cases, consider only products that do not add new nonlinear terms (avoid $X_{i,j}$ without corresponding $x_i x_j$)
- learn useful RLT cuts in the first levels of branch-and-bound

More Relaxations for Quadratic programs 29 / 65
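The first product above can be sketched as a function evaluating the linearized left-hand side at a relaxation point $(x, X)$; `rlt_row_times_bound` is a hypothetical helper for illustration:

```python
def rlt_row_times_bound(a, b, j, xj_lo, x, X):
    """LHS of the RLT inequality obtained from (a^T x - b)(x_j - xj_lo) >= 0,
    with every product x_i * x_j replaced by the lifted variable X[i][j]."""
    n = len(a)
    return sum(a[i] * (X[i][j] - x[i] * xj_lo) for i in range(n)) - b * (x[j] - xj_lo)

# with X = x x^T the LHS reduces to the original product (a^T x - b)(x_j - xj_lo)
x = [2.0, 1.0]
X = [[xi * xj for xj in x] for xi in x]
lhs = rlt_row_times_bound([1.0, 1.0], 1.0, 0, 0.0, x, X)
assert abs(lhs - (2.0 + 1.0 - 1.0) * (2.0 - 0.0)) < 1e-12
```

At a relaxation point where $X \neq xx^{\mathsf{T}}$, a negative LHS signals a violated RLT cut.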


RLT: Give your solver a hand an equation.

Misener and Floudas [2012b]:

"As a final observation with respect to generating RLT equations, notice that a modeler will significantly improve the performance of GloMIQO by writing linear constraints that can be multiplied together without increasing the number of nonlinear terms."

⇒ RLT is an example where adding redundant constraints can help (recall Jeff's slides on the pooling problem).

More Relaxations for Quadratic programs 30 / 65

Semidefinite Programming (SDP) Relaxation

$$\begin{aligned}
\min\;& x^{\mathsf{T}}Q_0 x + b_0^{\mathsf{T}}x\\
\text{s.t.}\;& x^{\mathsf{T}}Q_k x + b_k^{\mathsf{T}}x \leq c_k\\
& Ax \leq b\\
& \underline{x} \leq x \leq \overline{x}
\end{aligned}
\quad\Leftrightarrow\quad
\begin{aligned}
\min\;& \langle Q_0, X \rangle + b_0^{\mathsf{T}}x\\
\text{s.t.}\;& \langle Q_k, X \rangle + b_k^{\mathsf{T}}x \leq c_k\\
& Ax \leq b\\
& \underline{x} \leq x \leq \overline{x}\\
& X = xx^{\mathsf{T}}
\end{aligned}$$

- relaxing $X - xx^{\mathsf{T}} = 0$ to $X - xx^{\mathsf{T}} \succeq 0$, which is equivalent to
$$\hat{X} := \begin{pmatrix} 1 & x^{\mathsf{T}} \\ x & X \end{pmatrix} \succeq 0,$$
yields a semidefinite programming relaxation
- SDP is computationally demanding, so approximate by linear inequalities. Sherali and Fraticelli [2002]: for $\hat{X}^* \not\succeq 0$ compute an eigenvector $v$ with eigenvalue $\lambda < 0$; then
$$\langle v, \hat{X} v \rangle \geq 0$$
is a valid cut that cuts off $\hat{X}^*$
- available in Couenne and Lindo API (non-default)
- Qualizza et al. [2012] (Couenne): sparsify the cut by setting entries of $v$ to 0
- Saxena et al. [2011]: project into the $x$-variable space (no $X_{i,j}$ variables needed)

More Relaxations for Quadratic programs 31 / 65
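The eigenvector-cut separation step can be sketched in a few lines (numpy assumed; the helper name and the toy violated point are illustrative):

```python
import numpy as np

def sdp_eigen_cut(x_star, X_star, tol=1e-9):
    """Sherali-Fraticelli-style cut: if Xhat = (1, x^T; x, X) at the current
    relaxation point is not PSD, return the eigenvector v of its most negative
    eigenvalue; the linear cut <v, Xhat v> >= 0 then cuts off the point."""
    n = len(x_star)
    Xhat = np.empty((n + 1, n + 1))
    Xhat[0, 0] = 1.0
    Xhat[0, 1:] = Xhat[1:, 0] = x_star
    Xhat[1:, 1:] = X_star
    lam, V = np.linalg.eigh(Xhat)
    return None if lam[0] >= -tol else (V[:, 0], Xhat)

# toy violated point: X* = 0 while x* = 1 makes Xhat indefinite
v, Xhat = sdp_eigen_cut(np.array([1.0]), np.array([[0.0]]))
assert v @ Xhat @ v < 0           # the current point violates the new cut
```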


SDP vs RLT vs α-BB

Anstreicher [2009]:
- the SDP relaxation does not dominate the RLT relaxation
- the RLT relaxation does not dominate the SDP relaxation
- combining both relaxations can produce substantially better bounds

Anstreicher [2012]:
- the SDP relaxation dominates the α-BB underestimators

More Relaxations for Quadratic programs 32 / 65


More Cutting Planes for MINLP

MIP ⊂ MINLP, so don't forget about the MIP cuts:
- Gomory
- Mixed-Integer Rounding
- Flow Cover
- ...

Available in ANTIGONE (via CPLEX), BARON, Couenne (off by default), SCIP, and Lindo API.

Disjunctive Programming Cuts:
- given the (tightened) LP relaxations after branching on a variable, compute a cut that is valid for the union of both relaxations
- apply during strong branching
- available in FilMINT and Couenne [Kilinç et al., 2010, Belotti, 2012]
- see also Bonami, Linderoth, and Lodi [2012] for a review

Even More Cuts ... 34 / 65


Convexity Detection

Analyze the Hessian:

$$f(x) \text{ convex on } [\underline{x}, \overline{x}] \;\Leftrightarrow\; \nabla^2 f(x) \succeq 0 \;\forall x \in [\underline{x}, \overline{x}]$$

- $f(x)$ is quadratic ⇒ $\nabla^2 f(x)$ constant ⇒ compute the spectrum numerically
- done by ANTIGONE, Couenne (off by default), BARON, SCIP
- general $f \in C^2$ ⇒ estimate eigenvalues of the interval Hessian [Nenov et al., 2004]

Analyze the Algebraic Expression:

$$f(x) \text{ convex} \Rightarrow a \cdot f(x) \begin{cases} \text{convex}, & a \geq 0\\ \text{concave}, & a \leq 0 \end{cases}$$
$$f(x), g(x) \text{ convex} \Rightarrow f(x) + g(x) \text{ convex}$$
$$f(x) \text{ concave} \Rightarrow \log(f(x)) \text{ concave}$$
$$f(x) = \prod_i x_i^{e_i},\; x_i \geq 0 \Rightarrow f(x) \begin{cases} \text{convex}, & e_i \leq 0 \;\forall i\\ \text{convex}, & \exists j: e_i \leq 0 \;\forall i \neq j,\; \sum_i e_i \geq 1\\ \text{concave}, & e_i \geq 0 \;\forall i,\; \sum_i e_i \leq 1 \end{cases}$$

Available in ANTIGONE, BARON, and SCIP. [Maranas and Floudas, 1995, Bao, 2007, Fourer et al., 2009, Vigerske, 2013]

Reformulation / Presolving 36 / 65
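For the quadratic case the Hessian test above reduces to one numerical eigenvalue computation. A sketch (numpy assumed; the function name is illustrative):

```python
import numpy as np

def quadratic_is_convex(Q, tol=1e-9):
    """x^T Q x + b^T x has constant Hessian Q + Q^T, so convexity on any box
    reduces to a (numerical) PSD check of the symmetrized matrix."""
    return np.linalg.eigvalsh(0.5 * (Q + Q.T))[0] >= -tol

assert quadratic_is_convex(np.array([[2.0, 0.0], [0.0, 1.0]]))
assert not quadratic_is_convex(np.array([[1.0, 3.0], [3.0, 1.0]]))  # eigenvalues -2, 4
```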


Second Order Cones (SOC)

Consider a constraint $x^{\mathsf{T}}Ax + b^{\mathsf{T}}x \leq c$.

- if $A$ has only one negative eigenvalue, it may be reformulated as a second-order cone constraint [Mahajan and Munson, 2010], e.g.,
$$\sum_{k=1}^N x_k^2 - x_{N+1}^2 \leq 0,\; x_{N+1} \geq 0 \;\Leftrightarrow\; \sqrt{\sum_{k=1}^N x_k^2} \leq x_{N+1}$$
- $\sqrt{\sum_{k=1}^N x_k^2}$ is a convex term that can easily be linearized
- BARON and SCIP recognize "obvious" SOCs $\sum_{k=1}^N (\alpha_k x_k)^2 - (\alpha_{N+1} x_{N+1})^2 \leq 0$

Example: $x^2 + y^2 - z^2 \leq 0$ in $[-1,1] \times [-1,1] \times [0,1]$

(figure: feasible region; initial relaxation without recognizing the SOC; initial relaxation recognizing the SOC)

Reformulation / Presolving 37 / 65
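One way to linearize the convex reformulation (a gradient, or supporting-hyperplane, cut — a standard technique, sketched here as an assumption about how a solver might do it):

```python
import math

def soc_gradient_cut(x_bar):
    """Gradient cut for g(x) = ||x_{1..N}|| - x_{N+1} <= 0, linearized at x_bar
    (first block nonzero): returns (coefficients, rhs) of sum_i grad_i*x_i <= rhs.
    Valid because g is convex, so g(x) >= g(x_bar) + grad g(x_bar)^T (x - x_bar)."""
    head, tail = x_bar[:-1], x_bar[-1]
    norm = math.sqrt(sum(v * v for v in head))
    grad = [v / norm for v in head] + [-1.0]
    gval = norm - tail
    rhs = sum(g * v for g, v in zip(grad, x_bar)) - gval   # = 0: the cone is homogeneous
    return grad, rhs

grad, rhs = soc_gradient_cut([1.0, 1.0, 2.0])
assert abs(rhs) < 1e-12
# a cone point satisfies the cut, a point outside the cone is cut off
assert sum(g * v for g, v in zip(grad, [0.0, 0.0, 1.0])) <= rhs + 1e-12
assert sum(g * v for g, v in zip(grad, [1.0, 1.0, 0.0])) > rhs
```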


More Presolve

- Products with binary variables can be linearized:
$$w = x \cdot \sum_{k=1}^N a_k y_k \text{ with } x \in \{0,1\} \;\Leftrightarrow\; \begin{cases} M^L x \leq w \leq M^U x,\\ \sum_{k=1}^N a_k y_k - M^U (1-x) \leq w \leq \sum_{k=1}^N a_k y_k - M^L (1-x), \end{cases}$$
where $M^L$ and $M^U$ are bounds on $\sum_{k=1}^N a_k y_k$.
Implemented by Lindo API and SCIP.
⇒ Don't rewrite $x \cdot y$ as Big-M! – the solver may do it for you :-)
- Liberti [2012], Liberti and Ostrowski [2014]: Automated symmetry detection and breaking for Couenne ("orbital branching", off by default)
- MINOTAUR: Coefficient tightening for certain nonlinear constraints:
$$c(x) \leq b + M(1-y), \quad y \in \{0,1\},\; 0 \leq x_i \leq \overline{x}_i y \;\Rightarrow\; c(x) + (c(0) - b) y \leq c(0)$$
[Belotti, Kirches, Leyffer, Linderoth, Luedtke, and Mahajan, 2013]

Reformulation / Presolving 38 / 65
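That the four linear constraints pin $w$ to the product exactly for binary $x$ can be checked by brute force. A sketch with toy coefficients (assuming $y_k \in [0,1]$ for the bounds $M^L$, $M^U$):

```python
from itertools import product

# Exactness check of the binary-product linearization: for x in {0, 1} the four
# linear constraints force w = x * s, where s = sum(a_k y_k)
a = [2.0, -1.0]
ML = sum(min(ak, 0.0) for ak in a)     # lower bound on s (y_k in [0, 1])
MU = sum(max(ak, 0.0) for ak in a)     # upper bound on s

def w_interval(x, s):
    """Interval of w allowed by the four linearization constraints."""
    lo = max(ML * x, s - MU * (1 - x))
    hi = min(MU * x, s - ML * (1 - x))
    return lo, hi

for x in (0, 1):
    for y in product((0.0, 0.5, 1.0), repeat=len(a)):
        s = sum(ak * yk for ak, yk in zip(a, y))
        lo, hi = w_interval(x, s)
        assert abs(lo - x * s) < 1e-12 and abs(hi - x * s) < 1e-12
```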


Bounds Tightening (Domain Propagation)

Tighten variable bounds $[\underline{x}, \overline{x}]$ such that
- the optimal value of the problem is not changed, or
- the set of optimal solutions is not changed, or
- the set of feasible solutions is not changed

(figure: feasible region and tightened variable bounds on $[-1,1] \times [-1,1]$)

Bound tightening allows to
- find infeasible subproblems (when bound tightening gives inconsistent bounds)
- reduce possible integral values of discrete variables ⇒ less branching
- very important: improve underestimators without branching ⇒ tighter lower bounds

Some techniques [Belotti et al., 2009]:
- Reduced Cost Bounds Tightening (cheap), same as in MIP
- FBBT: Feasibility-Based Bounds Tightening (cheap)
- ABT: Aggressive Feasibility-Based Bounds Tightening (expensive)
- OBBT: Optimality/Optimization-Based Bounds Tightening (expensive)

Bound Tightening 40 / 65
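The flavor of FBBT can be sketched for a single linear constraint: each variable's bound is tightened against the interval minimum of the remaining terms (a minimal one-pass illustration, not any solver's actual implementation):

```python
def fbbt_linear(coefs, bounds, rhs):
    """One FBBT pass over sum_i c_i x_i <= rhs: for each variable, subtract the
    interval minimum of the remaining terms and divide by its coefficient."""
    new = list(bounds)
    for i, ci in enumerate(coefs):
        rest_min = sum(min(c * lo, c * hi)
                       for k, (c, (lo, hi)) in enumerate(zip(coefs, bounds)) if k != i)
        lo, hi = new[i]
        if ci > 0:
            hi = min(hi, (rhs - rest_min) / ci)
        elif ci < 0:
            lo = max(lo, (rhs - rest_min) / ci)
        new[i] = (lo, hi)
    return new

# x + 2y <= 4 with x in [0, 10], y in [1, 5] tightens to x in [0, 2], y in [1, 2]
assert fbbt_linear([1.0, 2.0], [(0.0, 10.0), (1.0, 5.0)], 4.0) == [(0.0, 2.0), (1.0, 2.0)]
```

In a solver, passes like this are iterated over all constraints until a fixed point (or an iteration limit) is reached; inconsistent bounds prove the subproblem infeasible.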


Page 93: MINLP Solver Technology

FBBT for Linear Constraints

Feasibility-Based Bound Tightening for a linear constraint:

b̲ ≤ ∑_{i: a_i>0} a_i x_i + ∑_{i: a_i<0} a_i x_i ≤ b̄,   x_i ∈ [x̲_i, x̄_i],

⇒  x_j ≤ (1/a_j) ( b̄ − ∑_{i: a_i>0, i≠j} a_i x̲_i − ∑_{i: a_i<0} a_i x̄_i ),   if a_j > 0

   x_j ≤ (1/a_j) ( b̲ − ∑_{i: a_i>0} a_i x̄_i − ∑_{i: a_i<0, i≠j} a_i x̲_i ),   if a_j < 0

   x_j ≥ (1/a_j) ( b̲ − ∑_{i: a_i>0, i≠j} a_i x̄_i − ∑_{i: a_i<0} a_i x̲_i ),   if a_j > 0

   x_j ≥ (1/a_j) ( b̄ − ∑_{i: a_i>0} a_i x̲_i − ∑_{i: a_i<0, i≠j} a_i x̄_i ),   if a_j < 0
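These four bound formulas translate directly into code. A minimal sketch (illustrative, not taken from any solver) for one constraint lb ≤ aᵀx ≤ ub over a box:

```python
# Feasibility-based bound tightening (FBBT) for one linear constraint:
# for each variable, bound its value using the extreme activity of the
# remaining terms over the current box.
def fbbt_linear(a, lb, ub, lo, up):
    """Return tightened copies of the variable bound lists lo, up."""
    lo, up = list(lo), list(up)
    for j, aj in enumerate(a):
        if aj == 0:
            continue
        # minimal/maximal activity of the other terms over the box
        act_min = sum(ai * (lo[i] if ai > 0 else up[i])
                      for i, ai in enumerate(a) if i != j)
        act_max = sum(ai * (up[i] if ai > 0 else lo[i])
                      for i, ai in enumerate(a) if i != j)
        if aj > 0:
            up[j] = min(up[j], (ub - act_min) / aj)
            lo[j] = max(lo[j], (lb - act_max) / aj)
        else:
            up[j] = min(up[j], (lb - act_max) / aj)
            lo[j] = max(lo[j], (ub - act_min) / aj)
    return lo, up

# 2x + 3y <= 6 with x, y in [0, 5]: FBBT tightens to x <= 3 and y <= 2
lo, up = fbbt_linear([2, 3], float("-inf"), 6, [0, 0], [5, 5])
```

In a solver this single-constraint step is iterated over all constraints until no bound changes (or a work limit is hit).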

Bound Tightening 41 / 65


Page 95: MINLP Solver Technology

FBBT for Nonlinear Constraints
[Schichl and Neumaier, 2005, Vu et al., 2009]

Represent algebraic structure of problem in one directed acyclic graph:
I nodes: variables, operations, constraints
I arcs: flow of computation

Example:
√x + 2√(xy) + 2√y ∈ [−∞, 7]
x²√y − 2xy + 3√y ∈ [0, 2]
x, y ∈ [1, 16]

Forward propagation:
I compute bounds on intermediate nodes (top-down)

Backward propagation:
I reduce bounds using reverse operations (bottom-up)

[Figure: expression DAG with variable nodes x, y ∈ [1, 16], operation nodes √·, ·², ∗, +, coefficients −2, 2, 3, 2, and the constraint intervals [−∞, 7] and [0, 2]]
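The forward pass is plain interval arithmetic over the DAG. A toy sketch for the first constraint of the example (the interval routines below are minimal illustrations, not a solver's interval library):

```python
# Forward interval propagation for sqrt(x) + 2*sqrt(x*y) + 2*sqrt(y)
# with x, y in [1, 16].
import math

def imul(a, b):
    ps = [a[0] * b[0], a[0] * b[1], a[1] * b[0], a[1] * b[1]]
    return (min(ps), max(ps))

def iadd(a, b):
    return (a[0] + b[0], a[1] + b[1])

def iscale(c, a):  # scale by a nonnegative constant
    return (c * a[0], c * a[1])

def isqrt(a):  # assumes a nonnegative interval
    return (math.sqrt(a[0]), math.sqrt(a[1]))

x = y = (1.0, 16.0)
expr = iadd(isqrt(x), iadd(iscale(2, isqrt(imul(x, y))), iscale(2, isqrt(y))))
```

Here expr evaluates to (5.0, 44.0); intersecting this with the constraint interval [−∞, 7] tightens the root node to [5, 7], which backward propagation then pushes down to shrink the variable domains.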

Bound Tightening 42 / 65



Page 99: MINLP Solver Technology

Aggressive Bound Tightening (Probing)
[Tawarmalani and Sahinidis, 2004, Belotti et al., 2009]

I assume a local optimum x is known
I can the search be restricted to an area around x?
I to find out, tentatively reduce bounds of a nonlinear variable, e.g.,

    x_i ≥ x̲_i + ½ (x̄_i − x̲_i),

  and apply FBBT
I if infeasibility is detected, exclude this region by tightening the bound, i.e.,

    x̄_i := x̲_i + ½ (x̄_i − x̲_i)

I Nannicini et al. [2011]:
  I instead of applying FBBT, do a limited branch-and-bound search on the reduced problem
  I use success of FBBT and a predictor to decide for which variables the method should be employed
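The probe-and-tighten step can be sketched in a few lines. Here the "infeasibility proof" is a simple interval test for the hypothetical constraint x² ≤ 4; a real solver would run FBBT on the whole problem instead:

```python
# Probing sketch: tentatively restrict a variable to the upper half of its
# domain; if that half is proved infeasible, tighten the upper bound to the
# midpoint and repeat.
def provably_infeasible(lo, up):
    # x**2 <= 4 has no solution on [lo, up] iff the minimum of x**2 there is > 4
    mn = 0.0 if lo <= 0.0 <= up else min(lo * lo, up * up)
    return mn > 4.0

lo, up = 0.0, 10.0
for _ in range(50):
    mid = lo + 0.5 * (up - lo)
    if provably_infeasible(mid, up):
        up = mid   # upper half proved empty: exclude it
    else:
        break      # probe inconclusive; a solver would move to the next variable
```

Starting from [0, 10], two successful probes tighten the upper bound to 2.5 without any branching.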

Bound Tightening 43 / 65


Page 103: MINLP Solver Technology

Optimization-Based Bound Tightening (OBBT)
[Quesada and Grossmann, 1993, Maranas and Floudas, 1997, Smith and Pantelides, 1999, ...]

Given LP relaxation

    min { cᵀx : Ax ≤ b, x ∈ [x̲, x̄] },

solve for some variables x_k:

    min / max { x_k : Ax ≤ b, cᵀx ≤ cᵀx*, x ∈ [x̲, x̄] },

where x* is the current incumbent solution.
I computationally intensive (solving up to 2n LPs)
I Couenne: at nodes of depth ≤ 10, for deeper nodes with probability 2^(10−depth)
I ANTIGONE: on nonlinear and binary variables as long as effective;
  if not effective for a node, disable for all child nodes
I SCIP: efficient implementation by Gleixner and Weltge [2013]:
  I bound filtering (exclude bounds with guaranteed fail)
  I bound grouping (heuristically search groups of bounds with likely success)
  I solve OBBT LPs only at root node, but learn new linear inequalities
    x_k ≥ rᵀx + µ〈c, x*〉 + λᵀb from the dual solution of the LP
    ⇒ approximate OBBT during tree search
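Each OBBT subproblem is just an LP that minimizes or maximizes one variable over the relaxation. As a self-contained toy (illustrative data; a real implementation would call an LP solver rather than enumerate vertices), the new upper bound of x over a small 2-D polytope:

```python
# OBBT sketch: new_ub_x = max { x : Ax <= b, x in box }, computed here by
# brute-force enumeration of the vertices of a 2-D feasible region.
from itertools import combinations

# constraints a1*x + a2*y <= rhs, including the box 0 <= x, y <= 10
cons = [(1, 1, 4), (1, -1, 1),
        (1, 0, 10), (-1, 0, 0), (0, 1, 10), (0, -1, 0)]

def vertices(cons):
    vs = []
    for (a1, a2, r1), (b1, b2, r2) in combinations(cons, 2):
        det = a1 * b2 - a2 * b1
        if abs(det) < 1e-12:
            continue  # parallel pair, no intersection point
        x = (r1 * b2 - r2 * a2) / det
        y = (a1 * r2 - b1 * r1) / det
        if all(c1 * x + c2 * y <= r + 1e-9 for c1, c2, r in cons):
            vs.append((x, y))
    return vs

new_ub_x = max(v[0] for v in vertices(cons))  # OBBT upper bound for x
```

With x + y ≤ 4 and x − y ≤ 1 the box bound x ≤ 10 tightens to x ≤ 2.5, attained at the vertex (2.5, 1.5).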

Bound Tightening 44 / 65


Page 105: MINLP Solver Technology

Outline

Solvers

Linear Relaxation of Non-Convex terms

More Relaxations for Quadratic programs

Even More Cuts ...

Reformulation / Presolving

Bound Tightening

Branching

Primal Heuristics

Branching 45 / 65

Page 106: MINLP Solver Technology

Spatial Branching

Branching on a nonlinear variable in a nonconvex term allows for tighter relaxations:

[Figure: linear relaxation of a nonconvex univariate function over [−1.0, 1.0], before and after branching]

How to select the branching variable from a set of variable candidates?

Belotti, Lee, Liberti, Margot, and Wächter [2009]: adapt branching rules for discrete variables in MIP (most fractional, strong branching, pseudo costs, reliability branching) to continuous variables

Branching 46 / 65


Page 108: MINLP Solver Technology

Branching Rule: Most violated

“Most fractional” rule for integer variables:
I branch on variable with maximal fractional value in solution of LP relaxation
  (argmax_{i∈I} min{x_i − ⌊x_i⌋, ⌈x_i⌉ − x_i})

“Most violated” rule for nonlinear variables [Belotti et al., 2009]:
I for each variable, collect the “convexification gaps” for all nonconvex terms that involve this variable

[Figure: convexification gap between a nonconvex function and its linear relaxation over [−1.0, 1.0]]

I aggregate collected values for each variable to compute a score
I branch on variable with highest score
I available in Couenne and SCIP

“Violation transfer” [Tawarmalani and Sahinidis, 2004]:
I available in BARON
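The collect-aggregate-select scheme fits in a few lines. The gap values and the aggregation by sum below are hypothetical illustrations (solvers also use max or weighted combinations):

```python
# "Most violated" selection sketch: each variable gets the convexification
# gaps of the nonconvex terms it appears in; branch on the highest score.
def most_violated(gaps_by_var, aggregate=sum):
    scores = {v: aggregate(gaps) for v, gaps in gaps_by_var.items()}
    return max(scores, key=scores.get)

# hypothetical gaps: x appears in two violated terms, y in one
branch_var = most_violated({"x": [0.3, 0.4], "y": [0.5]})
```

Here x wins with aggregated score 0.7 against y's 0.5.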

Branching 47 / 65


Page 111: MINLP Solver Technology

Branching Rule: Pseudo Costs

Integer variables x_i:
I when branching, memorize resulting change in lower bound relative to change in variable value due to branching (x_i − ⌊x_i⌋, ⌈x_i⌉ − x_i)
I average bound improvements over all down- and up-branches of x_i so far
I use to predict resulting change in lower bound for branching candidates

Continuous variables:
I branching does not result in an immediate change of the variable’s value in the LP
  ⇒ cannot estimate bound changes w.r.t. x_i − ⌊x_i⌋ or ⌈x_i⌉ − x_i
I Belotti et al. [2009] proposed several alternatives for weighting the lower bound change:
  I variable infeasibility (analog to fractionality)
  I domain width after branching
I available in ANTIGONE and Couenne and used in SCIP
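The bookkeeping behind pseudo costs is simple: record past per-unit bound gains, average them, and use the average to predict the gain of a candidate branching. A minimal sketch (the numbers are illustrative, and real solvers keep separate up/down statistics):

```python
# Pseudo-cost sketch: average observed (bound gain / branching distance),
# then predict the gain of a new branching from its distance.
class PseudoCost:
    def __init__(self):
        self.gain_sum = 0.0  # accumulated per-unit bound improvements
        self.count = 0

    def record(self, bound_gain, distance):
        self.gain_sum += bound_gain / distance
        self.count += 1

    def predict(self, distance):
        avg = self.gain_sum / self.count if self.count else 0.0
        return avg * distance

pc = PseudoCost()
pc.record(bound_gain=2.0, distance=0.5)   # bound moved by 2 over distance 0.5
pc.record(bound_gain=1.0, distance=0.25)
estimate = pc.predict(0.4)
```

Both observations give a per-unit gain of 4, so a branching of distance 0.4 is predicted to improve the bound by 1.6. For continuous variables, "distance" is replaced by one of the weightings listed above.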

Branching 48 / 65


Page 114: MINLP Solver Technology

Branching Rules: Strong and Reliability Branching

Strong Branching:
I at each node, compute lower bound for all (or many) possible branchings (i.e., construct two branches, update and solve relaxation) and choose the one with best bound improvement
I expensive, but best reduction in number of nodes
I available in ANTIGONE and Couenne

Reliability branching:
I compute exact bound improvement only for variables with a low number of branchings so far
I otherwise, assume pseudo costs are reliable and use them to evaluate potential bound improvement
  ⇒ initialization of pseudo costs by strong branching
I used in ANTIGONE, BARON, and Couenne
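The switch between the two regimes can be sketched as follows; the reliability threshold and the stand-in evaluation functions are hypothetical:

```python
# Reliability branching skeleton: strong-branch a variable until its pseudo
# costs rest on enough observations, then trust the cheap estimate.
RELIABILITY = 4  # hypothetical threshold on past branchings per variable

def score_candidate(var, history, strong_branch, pseudo_cost):
    if len(history.get(var, [])) < RELIABILITY:
        gain = strong_branch(var)        # expensive: solve both child LPs
        history.setdefault(var, []).append(gain)
        return gain
    return pseudo_cost(var)              # cheap: averaged past gains

# toy stand-ins for the expensive/cheap evaluations
hist = {"x": [1.0, 1.2, 0.9, 1.1], "y": []}
sb = lambda v: 2.0
pcost = lambda v: sum(hist[v]) / len(hist[v])
scores = {v: score_candidate(v, hist, sb, pcost) for v in ("x", "y")}
```

Variable x already has four observations and is scored by its pseudo cost (1.05), while y triggers a strong-branching evaluation whose result also seeds its pseudo-cost history.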

Branching 49 / 65


Page 116: MINLP Solver Technology

Outline

Solvers

Linear Relaxation of Non-Convex terms

More Relaxations for Quadratic programs

Even More Cuts ...

Reformulation / Presolving

Bound Tightening

Branching

Primal Heuristics

Primal Heuristics 50 / 65

Page 117: MINLP Solver Technology

Sub-NLP Heuristics

Given a solution satisfying all integrality constraints,
I fix all integer variables in the MINLP
I call an NLP solver to find a local solution to the remaining NLP
I available in all solvers
  I ANTIGONE calls CONOPT (default) or SNOPT
  I BARON decides automatically whether to call CONOPT, IPOPT, MINOS, or SNOPT
  I Couenne and SCIP call IPOPT
  I Lindo API calls CONOPT
I variable fixings given by integer-feasible solution to LP relaxation
I additionally, SCIP runs its MIP heuristics on MIP relaxation (rounding, diving, feas. pump, LNS, ...)

NLP-Diving:
I solve NLP relaxation, restrict bound on fractional variable, resolve NLP
I available in SCIP; QP-diving variant in MINOTAUR [Mahajan et al., 2012]
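The fix-and-resolve idea can be sketched end to end. The crude coordinate search below is a toy stand-in for a real NLP solver such as IPOPT, and the objective and fixing are illustrative:

```python
# Sub-NLP heuristic sketch: fix the integer variables at an integral point
# and locally improve the remaining continuous variables.
def local_nlp(f, x0, steps=200, h=0.1):
    """Toy coordinate descent; stands in for an NLP solver."""
    x = list(x0)
    for _ in range(steps):
        improved = False
        for i in range(len(x)):
            for d in (+h, -h):
                trial = x[:]
                trial[i] += d
                if f(trial) < f(x):
                    x, improved = trial, True
        if not improved:
            h /= 2  # refine the step size once no move helps
    return x

# objective in (y, x): y integer, x continuous (illustrative MINLP objective)
obj = lambda z: (z[0] - 1.3) ** 2 + (z[1] - 0.7) ** 2
y_fixed = 1  # integer part taken from an integer-feasible relaxation solution
x_opt = local_nlp(lambda z: obj([y_fixed] + z), [0.0])
```

With y fixed to 1, the continuous subproblem is smooth and the local search settles near x = 0.7, yielding a feasible MINLP point that can update the incumbent.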

Primal Heuristics 51 / 65


Page 121: MINLP Solver Technology

Sub-MIP / Sub-MINLP Heuristics

Berthold and Gleixner [2014]: “Undercover” (SCIP):
I fix nonlinear variables, so problem becomes MIP (pass to SCIP)
I not always necessary to fix all nonlinear variables, e.g., consider x · y
I find a minimal set of variables to fix by solving a Set Covering Problem

Berthold et al. [2011]: Large Neighborhood Search heuristics extended from MIP/CP to MINLP (SCIP):
I RENS [Berthold, 2014a]: fix integer variables with integral value in LP relaxation
I RINS, DINS, Crossover, Local Branching
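The covering idea behind Undercover: a term like x · y becomes linear as soon as one of its variables is fixed, so the fixed variables must cover every nonlinear term. A greedy stand-in for the exact set-covering MIP mentioned on the slide (illustrative only):

```python
# Greedy cover of nonlinear terms: repeatedly fix the variable that occurs
# in the most still-nonlinear terms until every term is covered.
def greedy_cover(terms):
    cover, uncovered = set(), [set(t) for t in terms]
    while uncovered:
        counts = {}
        for t in uncovered:
            for v in t:
                counts[v] = counts.get(v, 0) + 1
        best = max(counts, key=counts.get)  # most frequent variable
        cover.add(best)
        uncovered = [t for t in uncovered if best not in t]
    return cover

# bilinear terms x*y, y*z, x*z: fixing any two variables linearizes all three
fix = greedy_cover([("x", "y"), ("y", "z"), ("x", "z")])
```

For these three terms every cover needs two variables, so the greedy choice is optimal here; in general the exact set-covering model can find strictly smaller covers.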

Primal Heuristics 52 / 65


Page 123: MINLP Solver Technology

Rounding Heuristics

Nannicini and Belotti [2012]: Couenne Iterative Rounding Heuristic (off by default):
1. find a local optimal solution to the NLP relaxation
2. find the nearest integer feasible solution to the MIP relaxation
3. fix integer variables in MINLP and solve remaining sub-NLP locally
4. forbid found integer variable values in MIP relaxation (no-good-cuts) and reiterate

Berthold [2014b]: Couenne Feasibility Pump (off by default)
I alternately find feasible solutions to MIP and NLP relaxations
I solution of NLP relaxation is “rounded” to solution of MIP relaxation (by various methods trading solution quality with computational effort)
I solution of MIP relaxation is projected onto NLP relaxation (local search)
I various choices for objective functions and accuracy of MIP relaxation
I D’Ambrosio et al. [2010, 2012]: previous work on Feasibility Pump for nonconvex MINLP
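The alternation at the heart of a feasibility pump can be sketched in one dimension. The "projection" below is a trivial clamp onto a hypothetical continuous relaxation; real pumps project by solving an NLP and add safeguards against cycling:

```python
# Toy feasibility-pump iteration: alternate rounding (MIP side) and
# projection onto the continuous relaxation (NLP side) until both agree.
def project(z):
    # stand-in projection onto the relaxation {x : 1.6 <= x <= 3.8}
    return min(max(z, 1.6), 3.8)

x = 0.3  # start from a relaxation solution
x_int = None
for _ in range(10):
    x_int = round(x)          # "round" to the integer side
    x_proj = project(x_int)   # project back to the continuous side
    if x_proj == x_int:       # both sides agree: integer-feasible point found
        break
    x = x_proj
```

Starting at 0.3, the first rounding (to 0) is infeasible and gets projected to 1.6, whose rounding (to 2) lies in the relaxation, so the pump stops with an integer-feasible point.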

Primal Heuristics 53 / 65


Page 125: MINLP Solver Technology

End.

Thank you for your attention!

Consider contributing your MINLP instances to MINLPLib!

Some recent MINLP reviews:
I Burer and Letchford [2012]
I Belotti, Kirches, Leyffer, Linderoth, Luedtke, and Mahajan [2013]

Some recent books:
I Lee and Leyffer [2012]
I Locatelli and Schoen [2013]

54 / 65

Page 126: MINLP Solver Technology

Literature I

Warren P. Adams and Hanif D. Sherali. A tight linearization and an algorithm for zero-one quadratic programming problems. Management Science, 32(10):1274–1290, 1986. doi:10.1287/mnsc.32.10.1274.

Claire S. Adjiman and Christodoulos A. Floudas. Rigorous convex underestimators for general twice-differentiable problems. Journal of Global Optimization, 9(1):23–40, 1996. doi:10.1007/BF00121749.

Claire S. Adjiman, S. Dallwig, Christodoulos A. Floudas, and A. Neumaier. A global optimization method, αBB, for general twice-differentiable constrained NLPs – I. Theoretical advances. Computers & Chemical Engineering, 22:1137–1158, 1998.

Faiz A. Al-Khayyal and James E. Falk. Jointly constrained biconvex programming. Mathematics of Operations Research, 8(2):273–286, 1983. doi:10.1287/moor.8.2.273.

Ioannis P. Androulakis, Costas D. Maranas, and Christodoulos A. Floudas. αBB: A global optimization method for general constrained nonconvex problems. Journal of Global Optimization, 7(4):337–363, 1995. doi:10.1007/BF01099647.

Kurt Anstreicher. Semidefinite programming versus the reformulation-linearization technique for nonconvex quadratically constrained quadratic programming. Journal of Global Optimization, 43(2):471–484, 2009. doi:10.1007/s10898-008-9372-0.

Kurt Anstreicher. On convex relaxations for quadratically constrained quadratic programming. Mathematical Programming, 136(2):233–251, 2012. doi:10.1007/s10107-012-0602-3.

55 / 65

Page 127: MINLP Solver Technology

Literature II

Martin Ballerstein, Dennis Michaels, and Stefan Vigerske. Linear underestimators for bivariate functions with a fixed convexity behavior. ZIB-Report 13-02, Zuse Institute Berlin, 2013. urn:nbn:de:0297-zib-17641.

X. Bao. Automatic convexity detection for global optimization. Master’s thesis, University of Illinois at Urbana-Champaign, 2007.

X. Bao, N. V. Sahinidis, and M. Tawarmalani. Multiterm polyhedral relaxations for nonconvex, quadratically-constrained quadratic programs. Optimization Methods and Software, 24(4-5):485–504, 2009. doi:10.1080/10556780902883184.

Pietro Belotti. Disjunctive cuts for non-convex MINLP. In Lee and Leyffer [2012], pages 117–144. doi:10.1007/978-1-4614-1927-3_5.

Pietro Belotti, Jon Lee, Leo Liberti, F. Margot, and Andreas Wächter. Branching and bounds tightening techniques for non-convex MINLP. Optimization Methods and Software, 24(4-5):597–634, 2009. doi:10.1080/10556780903087124.

Pietro Belotti, Christian Kirches, Sven Leyffer, Jeff Linderoth, Jim Luedtke, and Ashutosh Mahajan. Mixed-integer nonlinear optimization. Acta Numerica, 22:1–131, 2013. doi:10.1017/S0962492913000032.

Timo Berthold. RENS – the optimal rounding. Mathematical Programming Computation, 6(1):33–54, 2014a. doi:10.1007/s12532-013-0060-9.

Timo Berthold. Heuristic algorithms in global MINLP solvers. PhD thesis, TU Berlin, 2014b.

56 / 65

Page 128: MINLP Solver Technology

Literature III

Timo Berthold and Ambros M. Gleixner. Undercover: a primal MINLP heuristic exploring a largest sub-MIP. Mathematical Programming, 144(1–2):315–346, 2014. doi:10.1007/s10107-013-0635-2.

Timo Berthold, Stefan Heinz, Marc E. Pfetsch, and Stefan Vigerske. Large neighborhood search beyond MIP. In Luca Di Gaspero, Andrea Schaerf, and Thomas Stützle, editors, Proceedings of the 9th Metaheuristics International Conference (MIC 2011), pages 51–60, 2011. urn:nbn:de:0297-zib-12989.

Timo Berthold, Ambros M. Gleixner, Stefan Heinz, and Stefan Vigerske. Analyzing the computational impact of MIQCP solver components. Numerical Algebra, Control and Optimization, 2(4):739–748, 2012. doi:10.3934/naco.2012.2.739.

Christian Bliek, Peter Spellucci, Luis N. Vicente, Arnold Neumaier, Laurent Granvilliers, Eric Monfroy, Frederic Benhamou, Etienne Huens, Pascal Van Hentenryck, Djamila Sam-Haroud, and Boi Faltings. Algorithms for solving nonlinear constrained and optimization problems: The state of the art. Technical report, Universität Wien, 2001. URL http://www.mat.univie.ac.at/~neum/glopt/coconut/StArt.html.

Pierre Bonami, Jeff Linderoth, and Andrea Lodi. Disjunctive cuts for mixed integer nonlinear programming problems. In A. Ridha Mahjoub, editor, Progress in Combinatorial Optimization, chapter 18, pages 521–541. ISTe-Wiley, 2012.

Samuel Burer and Adam N. Letchford. Non-convex mixed-integer nonlinear programming: A survey. Surveys in Operations Research and Management Science, 17(2):97–106, 2012. doi:10.1016/j.sorms.2012.08.001.

57 / 65

Page 129: MINLP Solver Technology

Literature IV

Michael R. Bussieck and S. Vigerske. MINLP solver software. In J. J. Cochran et al., editor, Wiley Encyclopedia of Operations Research and Management Science. Wiley & Sons, Inc., 2010. doi:10.1002/9780470400531.eorms0527.

Sonia Cafieri, Jon Lee, and Leo Liberti. On convex relaxations of quadrilinear terms. Journal of Global Optimization, 47(4):661–685, 2010. doi:10.1007/s10898-009-9484-1.

Claudia D’Ambrosio, Antonio Frangioni, Leo Liberti, and Andrea Lodi. Experiments with a feasibility pump approach for non-convex MINLPs. In Paola Festa, editor, Proceedings of 9th International Symposium on Experimental Algorithms, SEA 2010, volume 6049 of Lecture Notes in Computer Science, pages 350–360. Springer, 2010. doi:10.1007/978-3-642-13193-6_30.

Claudia D’Ambrosio, Antonio Frangioni, Leo Liberti, and Andrea Lodi. A storm of feasibility pumps for nonconvex MINLP. Mathematical Programming, 136(2):375–402, 2012. doi:10.1007/s10107-012-0608-x.

Robert Fourer, Chandrakant Maheshwari, Arnold Neumaier, Dominique Orban, and Hermann Schichl. Convexity and concavity detection in computational graphs: Tree walks for convexity assessment. INFORMS Journal on Computing, 22(1):26–43, 2009. doi:10.1287/ijoc.1090.0321.

Ambros M. Gleixner and Stefan Weltge. Learning and propagating Lagrangian variable bounds for mixed-integer nonlinear programming. In Carla Gomes and Meinolf Sellmann, editors, Integration of AI and OR Techniques in Constraint Programming for Combinatorial Optimization Problems, volume 7874 of Lecture Notes in Computer Science, pages 355–361. Springer, 2013. doi:10.1007/978-3-642-38171-3_26.

58 / 65

Page 130: MINLP Solver Technology

Literature V

Matthias Jach, Dennis Michaels, and Robert Weismantel. The convex envelope of (n–1)-convex functions. SIAM Journal on Optimization, 19(3):1451–1466, 2008. doi:10.1137/07069359X.

Mustafa Kilinç, Jeff Linderoth, and Jim Luedtke. Effective separation of disjunctive cuts for convex mixed integer nonlinear programs. Technical Report 1681, University of Wisconsin-Madison, Computer Sciences Department, 2010. URL http://digital.library.wisc.edu/1793/60720.

Jon Lee and Sven Leyffer, editors. Mixed Integer Nonlinear Programming, volume 154 of The IMA Volumes in Mathematics and its Applications. Springer, 2012. doi:10.1007/978-1-4614-1927-3.

Leo Liberti. Reformulations in mathematical programming: automatic symmetry detection and exploitation. Mathematical Programming, 131(1-2):273–304, 2012. doi:10.1007/s10107-010-0351-0.

Leo Liberti and James Ostrowski. Stabilizer-based symmetry breaking constraints for mathematical programs. Journal of Global Optimization, 60(2):183–194, 2014. doi:10.1007/s10898-013-0106-6.

Leo Liberti and Constantinos Pantelides. An exact reformulation algorithm for large nonconvex NLPs involving bilinear terms. Journal of Global Optimization, 36:161–189, 2006.

Youdong Lin and Linus Schrage. The global solver in the LINDO API. Optimization Methods & Software, 24(4–5):657–668, 2009. doi:10.1080/10556780902753221.

Marco Locatelli. Convex envelopes for quadratic and polynomial functions over polytopes. Technical report, Optimization Online, 2010. URL http://www.optimization-online.org/DB_HTML/2010/11/2788.html.


Literature VI

Marco Locatelli and Fabio Schoen. Global Optimization: Theory, Algorithms, and Applications. Number 15 in MOS-SIAM Series on Optimization. SIAM, 2013.

Marco Locatelli and Fabio Schoen. On convex envelopes for bivariate functions over polytopes. Mathematical Programming, 144(1-2):65–91, 2014. doi:10.1007/s10107-012-0616-x.

James Luedtke, Mahdi Namazifar, and Jeff Linderoth. Some results on the strength of relaxations of multilinear functions. Mathematical Programming, 136(2):325–351, 2012. doi:10.1007/s10107-012-0606-z.

Ashutosh Mahajan and Todd Munson. Exploiting second-order cone structure for global optimization. Technical Report ANL/MCS-P1801-1010, Argonne National Laboratory, 2010. URL http://www.optimization-online.org/DB_HTML/2010/10/2780.html.

Ashutosh Mahajan, Sven Leyffer, and Christian Kirches. Solving mixed-integer nonlinear programs by QP-diving. Preprint ANL/MCS-P2071-0312, Argonne National Laboratory, 2012. URL http://www.optimization-online.org/DB_HTML/2012/03/3409.html.

Costas D. Maranas and Christodoulos A. Floudas. Finding all solutions of nonlinearly constrained systems of equations. Journal of Global Optimization, 7(2):143–182, 1995. doi:10.1007/BF01097059.

Costas D. Maranas and Christodoulos A. Floudas. Global optimization in generalized geometric programming. Computers & Chemical Engineering, 21(4):351–369, 1997. doi:10.1016/S0098-1354(96)00282-7.

Garth P. McCormick. Computability of global solutions to factorable nonconvex programs: Part I – convex underestimating problems. Mathematical Programming, 10(1):147–175, 1976. doi:10.1007/BF01580665.


Literature VII

Clifford A. Meyer and Christodoulos A. Floudas. Trilinear monomials with mixed sign domains: Facets of the convex and concave envelopes. Journal of Global Optimization, 29(2):125–155, 2004. doi:10.1023/B:JOGO.0000042112.72379.e6.

Clifford A. Meyer and Christodoulos A. Floudas. Convex envelopes for edge-concave functions. Mathematical Programming, 103(2):207–224, 2005. doi:10.1007/s10107-005-0580-9.

Ruth Misener. Novel Global Optimization Methods: Theoretical and Computational Studies on Pooling Problems with Environmental Constraints. PhD thesis, Princeton University, 2012. URL http://arks.princeton.edu/ark:/88435/dsp015q47rn787.

Ruth Misener and Christodoulos A. Floudas. Global optimization of mixed-integer quadratically-constrained quadratic programs (MIQCQP) through piecewise-linear and edge-concave relaxations. Mathematical Programming, 136(1):155–182, 2012a. doi:10.1007/s10107-012-0555-6.

Ruth Misener and Christodoulos A. Floudas. GloMIQO: Global mixed-integer quadratic optimizer. Journal of Global Optimization, 57(1):3–50, 2012b. doi:10.1007/s10898-012-9874-7.

Ruth Misener and Christodoulos A. Floudas. ANTIGONE: Algorithms for coNTinuous / Integer Global Optimization of Nonlinear Equations. Journal of Global Optimization, 59(2-3):503–526, 2014. doi:10.1007/s10898-014-0166-2.

Giacomo Nannicini and Pietro Belotti. Rounding-based heuristics for nonconvex MINLPs. Mathematical Programming Computation, 4(1):1–31, 2012. doi:10.1007/s12532-011-0032-x.


Literature VIII

Giacomo Nannicini, Pietro Belotti, Jon Lee, Jeff Linderoth, François Margot, and Andreas Wächter. A probing algorithm for MINLP with failure prediction by SVM. In Tobias Achterberg and J. Christopher Beck, editors, Integration of AI and OR Techniques in Constraint Programming for Combinatorial Optimization Problems, volume 6697 of Lecture Notes in Computer Science, pages 154–169. Springer, 2011. doi:10.1007/978-3-642-21311-3_15.

Ivo P. Nenov, Daniel H. Fylstra, and Lubomir V. Kolev. Convexity determination in the Microsoft Excel solver using automatic differentiation techniques. Extended abstract, Frontline Systems Inc., 2004. URL http://www.autodiff.org/ad04/abstracts/Nenov.pdf.

Arnold Neumaier. Complete search in continuous global optimization and constraint satisfaction. In Acta Numerica, volume 13, chapter 4, pages 271–369. Cambridge University Press, 2004. doi:10.1017/S0962492904000194.

Andrea Qualizza, Pietro Belotti, and François Margot. Linear programming relaxations of quadratically constrained quadratic programs. In Lee and Leyffer [2012], pages 407–426. doi:10.1007/978-1-4614-1927-3_14.

Ignacio Quesada and Ignacio E. Grossmann. Global optimization algorithm for heat exchanger networks. Industrial & Engineering Chemistry Research, 32(3):487–499, 1993. doi:10.1021/ie00015a012.

Anatoliy D. Rikun. A convex envelope formula for multilinear functions. Journal of Global Optimization, 10(4):425–437, 1997. doi:10.1023/A:1008217604285.

Anureet Saxena, Pierre Bonami, and Jon Lee. Convex relaxations of non-convex mixed integer quadratically constrained programs: projected formulations. Mathematical Programming, 130(2):359–413, 2011. doi:10.1007/s10107-010-0340-3.


Literature IX

Hermann Schichl and Arnold Neumaier. Interval analysis on directed acyclic graphs for global optimization. Journal of Global Optimization, 33(4):541–562, 2005. doi:10.1007/s10898-005-0937-x.

Hanif D. Sherali and Warren P. Adams. A Reformulation-Linearization Technique for Solving Discrete and Continuous Nonconvex Problems, volume 31 of Nonconvex Optimization and Its Applications. Kluwer Academic Publishers, 1999.

Hanif D. Sherali and Amine Alameddine. A new reformulation-linearization technique for bilinear programming problems. Journal of Global Optimization, 2(4):379–410, 1992. doi:10.1007/BF00122429.

Hanif D. Sherali and Barbara M. P. Fraticelli. Enhancing RLT relaxations via a new class of semidefinite cuts. Journal of Global Optimization, 22(1):233–261, 2002. doi:10.1023/A:1013819515732.

Edward M. B. Smith and Constantinos C. Pantelides. A symbolic reformulation/spatial branch-and-bound algorithm for the global optimization of nonconvex MINLPs. Computers & Chemical Engineering, 23(4-5):457–478, 1999. doi:10.1016/S0098-1354(98)00286-5.

Fabio Tardella. On a class of functions attaining their maximum at the vertices of a polyhedron. Discrete Applied Mathematics, 22(2):191–195, 1988/89. doi:10.1016/0166-218X(88)90093-5.

Fabio Tardella. On the existence of polyhedral convex envelopes. In Christodoulos A. Floudas and Panos Pardalos, editors, Frontiers in Global Optimization, volume 74 of Nonconvex Optimization and Its Applications, pages 563–573. Springer, 2004. doi:10.1007/978-1-4613-0251-3_30.


Literature X

Fabio Tardella. Existence and sum decomposition of vertex polyhedral convex envelopes. Optimization Letters, 2(3):363–375, 2008. doi:10.1007/s11590-007-0065-2.

Mohit Tawarmalani and Nikolaos V. Sahinidis. Semidefinite relaxations of fractional programs via novel convexification techniques. Journal of Global Optimization, 20(2):133–154, 2001. doi:10.1023/A:1011233805045.

Mohit Tawarmalani and Nikolaos V. Sahinidis. Convexification and Global Optimization in Continuous and Mixed-Integer Nonlinear Programming: Theory, Algorithms, Software, and Applications, volume 65 of Nonconvex Optimization and Its Applications. Kluwer Academic Publishers, 2002.

Mohit Tawarmalani and Nikolaos V. Sahinidis. Global optimization of mixed-integer nonlinear programs: A theoretical and computational study. Mathematical Programming, 99(3):563–591, 2004. doi:10.1007/s10107-003-0467-6.

Mohit Tawarmalani and Nikolaos V. Sahinidis. A polyhedral branch-and-cut approach to global optimization. Mathematical Programming, 103(2):225–249, 2005. doi:10.1007/s10107-005-0581-8.

Stefan Vigerske. Decomposition in Multistage Stochastic Programming and a Constraint Integer Programming Approach to Mixed-Integer Nonlinear Programming. PhD thesis, Humboldt-Universität zu Berlin, 2013. urn:nbn:de:kobv:11-100208240.

Xuan-Ha Vu, Hermann Schichl, and Djamila Sam-Haroud. Interval propagation and search on directed acyclic graphs for numerical constraint solving. Journal of Global Optimization, 45(4):499–531, 2009. doi:10.1007/s10898-008-9386-7.


Literature XI

Juan M. Zamora and Ignacio E. Grossmann. A global MINLP optimization algorithm for the synthesis of heat exchanger networks with no stream splits. Computers & Chemical Engineering, 22(3):367–384, 1998. doi:10.1016/S0098-1354(96)00346-8.

Juan M. Zamora and Ignacio E. Grossmann. A branch and contract algorithm for problems with concave univariate, bilinear and linear fractional terms. Journal of Global Optimization, 14(3):217–249, 1999. doi:10.1023/A:1008312714792.
