[1997 bertsimas, tsitsiklis] introduction to linear optimization (ch1-5)



Introduction to Linear Optimization


ATHENA SCIENTIFIC SERIES IN OPTIMIZATION AND NEURAL COMPUTATION

1. Dynamic Programming and Optimal Control, Vols. I and II, by Dimitri P. Bertsekas, 1995.
2. Nonlinear Programming, by Dimitri P. Bertsekas, 1995.
3. Neuro-Dynamic Programming, by Dimitri P. Bertsekas and John N. Tsitsiklis, 1996.
4. Constrained Optimization and Lagrange Multiplier Methods, by Dimitri P. Bertsekas, 1996.
5. Stochastic Optimal Control: The Discrete-Time Case, by Dimitri P. Bertsekas and Steven E. Shreve, 1996.
6. Introduction to Linear Optimization, by Dimitris Bertsimas and John N. Tsitsiklis, 1997.


Introduction to Linear Optimization

Dimitris Bertsimas
John N. Tsitsiklis
Massachusetts Institute of Technology

Athena Scientific, Belmont, Massachusetts


Athena Scientific
Post Office Box 391
Belmont, Mass. 02178-9998
U.S.A.
Email: [email protected]
Information and orders: http://world.std.com/~athenasc/

Cover Design: Ann Gallager

© 1997 Dimitris Bertsimas and John N. Tsitsiklis. All rights reserved. No part of this book may be reproduced in any form by any electronic or mechanical means (including photocopying, recording, or information storage and retrieval) without permission in writing from the publisher.

Publisher's Cataloging-in-Publication Data

Bertsimas, Dimitris, Tsitsiklis, John N.
Introduction to Linear Optimization
Includes bibliographical references and index
1. Linear programming. 2. Mathematical optimization. 3. Integer programming. I. Title.
T57.74.B465 1997 519.7 96-78786

ISBN 1-886529-19-1


To Georgia,
and to George Michael, who left us so early.
To Alexandra and Melina.


Contents

Preface

1 Introduction
1.1. Variants of the linear programming problem
1.2. Examples of linear programming problems
1.3. Piecewise linear convex objective functions
1.4. Graphical representation and solution
1.5. Linear algebra background and notation
1.6. Algorithms and operation counts
1.7. Exercises
1.8. History, notes, and sources

2 The geometry of linear programming
2.1. Polyhedra and convex sets
2.2. Extreme points, vertices, and basic feasible solutions
2.3. Polyhedra in standard form
2.4. Degeneracy
2.5. Existence of extreme points
2.6. Optimality of extreme points
2.7. Representation of bounded polyhedra*
2.8. Projections of polyhedra: Fourier-Motzkin elimination*
2.9. Summary
2.10. Exercises
2.11. Notes and sources

3 The simplex method
3.1. Optimality conditions
3.2. Development of the simplex method
3.3. Implementations of the simplex method




3.4. Anticycling: lexicography and Bland's rule
3.5. Finding an initial basic feasible solution
3.6. Column geometry and the simplex method
3.7. Computational efficiency of the simplex method
3.8. Summary
3.9. Exercises
3.10. Notes and sources

4 Duality theory
4.1. Motivation
4.2. The dual problem
4.3. The duality theorem
4.4. Optimal dual variables as marginal costs
4.5. Standard form problems and the dual simplex method
4.6. Farkas' lemma and linear inequalities
4.7. From separating hyperplanes to duality*
4.8. Cones and extreme rays
4.9. Representation of polyhedra
4.10. General linear programming duality*
4.11. Summary
4.12. Exercises
4.13. Notes and sources

5 Sensitivity analysis
5.1. Local sensitivity analysis
5.2. Global dependence on the right-hand side vector
5.3. The set of all dual optimal solutions*
5.4. Global dependence on the cost vector
5.5. Parametric programming
5.6. Summary
5.7. Exercises
5.8. Notes and sources

6 Large scale optimization
6.1. Delayed column generation
6.2. The cutting stock problem
6.3. Cutting plane methods
6.4. Dantzig-Wolfe decomposition
6.5. Stochastic programming and Benders decomposition
6.6. Summary
6.7. Exercises
6.8. Notes and sources




7 Network flow problems
7.1. Graphs
7.2. Formulation of the network flow problem
7.3. The network simplex algorithm
7.4. The negative cost cycle algorithm
7.5. The maximum flow problem
7.6. Duality in network flow problems
7.7. Dual ascent methods*
7.8. The assignment problem and the auction algorithm
7.9. The shortest path problem
7.10. The minimum spanning tree problem
7.11. Summary
7.12. Exercises
7.13. Notes and sources

8 Complexity of linear programming and the ellipsoid method
8.1. Efficient algorithms and computational complexity
8.2. The key geometric result behind the ellipsoid method
8.3. The ellipsoid method for the feasibility problem
8.4. The ellipsoid method for optimization
8.5. Problems with exponentially many constraints*
8.6. Summary
8.7. Exercises
8.8. Notes and sources

9 Interior point methods
9.1. The affine scaling algorithm
9.2. Convergence of affine scaling*
9.3. The potential reduction algorithm
9.4. The primal path following algorithm
9.5. The primal-dual path following algorithm
9.6. An overview
9.7. Exercises
9.8. Notes and sources

10 Integer programming formulations
10.1. Modeling techniques
10.2. Guidelines for strong formulations
10.3. Modeling with exponentially many constraints
10.4. Summary
10.5. Exercises



10.6. Notes and sources

11 Integer programming methods
11.1. Cutting plane methods
11.2. Branch and bound
11.3. Dynamic programming
11.4. Integer programming duality
11.5. Approximation algorithms
11.6. Local search
11.7. Simulated annealing
11.8. Complexity theory
11.9. Summary
11.10. Exercises
11.11. Notes and sources

12 The art in linear optimization
12.1. Modeling languages for linear optimization
12.2. Linear optimization libraries and general observations
12.3. The fleet assignment problem
12.4. The air traffic flow management problem
12.5. The job shop scheduling problem
12.6. Summary
12.7. Exercises
12.8. Notes and sources

References

Index


Preface

The purpose of this book is to provide a unified, insightful, and modern treatment of linear optimization, that is, linear programming, network flow problems, and discrete linear optimization. We discuss both classical topics, as well as the state of the art. We give special attention to theory, but also cover applications and present case studies. Our main objective is to help the reader become a sophisticated practitioner of (linear) optimization, or a researcher. More specifically, we wish to develop the ability to formulate fairly complex optimization problems, provide an appreciation of the main classes of problems that are practically solvable, describe the available solution methods, and build an understanding of the qualitative properties of the solutions they provide.

Our general philosophy is that insight matters most. For the subject matter of this book, this necessarily requires a geometric view. On the other hand, problems are solved by algorithms, and these can only be described algebraically. Hence, our focus is on the beneficial interplay between algebra and geometry. We build understanding using figures and geometric arguments, and then translate ideas into algebraic formulas and algorithms. Given enough time, we expect that the reader will develop the ability to pass from one domain to the other without much effort.

Another of our objectives is to be comprehensive, but economical. We have made an effort to cover and highlight all of the principal ideas. However, we have not tried to be encyclopedic, or to discuss every possible detail relevant to a particular algorithm. Our premise is that once mature understanding of the basic principles is in place, further details can be acquired by the reader with little additional effort.

Our last objective is to bring the reader up to date with respect to the state of the art. This is especially true for our treatment of interior point methods, large scale optimization, and the presentation of case studies that stretch the limits of currently available algorithms and computers.

The success of any optimization methodology is judged by its ability to deal with large and important problems. In that sense, the last chapter, on the art in linear optimization, is a critical part of this book. It will, we hope, convince the reader that progress on challenging problems requires both problem specific insight, as well as a deeper understanding of the underlying theory.


In any book dealing with linear programming, there are some important choices to be made regarding the treatment of the simplex method. Traditionally, the simplex method is developed in terms of the full simplex tableau, which tends to become the central topic. We have found that the full simplex tableau is a useful device for working out numerical examples. But other than that, we have tried not to overemphasize its importance.

Let us also mention another departure from many other textbooks. Introductory treatments often focus on standard form problems, which is sufficient for the purposes of the simplex method. On the other hand, this approach often leaves the reader wondering whether certain properties are generally true, and can hinder the deeper understanding of the subject. We depart from this tradition: we consider the general form of linear programming problems and define key concepts (e.g., extreme points) within this context. Of course, when it comes to algorithms, we often have to specialize to the standard form. In the same spirit, we separate the structural understanding of linear programming from the particulars of the simplex method. For example, we include a derivation of duality theory that does not rely on the simplex method.

Finally, this book contains a treatment of several important topics that are not commonly covered. These include a discussion of the column geometry and of the insights it provides into the efficiency of the simplex method, the connection between duality and the pricing of financial assets, a unified view of delayed column generation and cutting plane methods, stochastic programming and Benders decomposition, the auction algorithm for the assignment problem, certain theoretical implications of the ellipsoid algorithm, a thorough treatment of interior point methods, and a whole chapter on the practice of linear optimization. There are also several noteworthy topics that are covered in the exercises, such as Leontief systems, strict complementarity, options pricing, von Neumann's algorithm, submodular function minimization, and bounds for a number of integer programming problems.

Here is a chapter by chapter description of the book.

Chapter 1: Introduces the linear programming problem, together with a number of examples, and provides some background material on linear algebra.

Chapter 2: Deals with the basic geometric properties of polyhedra, focusing on the definition and the existence of extreme points, and emphasizing the interplay between the geometric and the algebraic viewpoints.

Chapter 3: Contains more or less the classical material associated with the simplex method, as well as a discussion of the column geometry. It starts with a high-level and geometrically motivated derivation of the simplex method. It then introduces the revised simplex method, and concludes with the simplex tableau. The usual topics of Phase I and anticycling are


also covered.

Chapter 4: It is a comprehensive treatment of linear programming duality. The duality theorem is first obtained as a corollary of the simplex method. A more abstract derivation is also provided, based on the separating hyperplane theorem, which is developed from first principles. It ends with a deeper look into the geometry of polyhedra.

Chapter 5: Discusses sensitivity analysis, that is, the dependence of solutions and the optimal cost on the problem data, including parametric programming. It also develops a characterization of dual optimal solutions as subgradients of a suitably defined optimal cost function.

Chapter 6: Presents the complementary ideas of delayed column generation and cutting planes. These methods are first developed at a high level, and are then made concrete by discussing the cutting stock problem, Dantzig-Wolfe decomposition, stochastic programming, and Benders decomposition.

Chapter 7: Provides a comprehensive review of the principal results and methods for the different variants of the network flow problem. It contains representatives from all major types of algorithms: primal descent (the simplex method), dual ascent (the primal-dual method), and approximate dual ascent (the auction algorithm). The focus is on the major algorithmic ideas, rather than on the refinements that can lead to better complexity estimates.

Chapter 8: Includes a discussion of complexity, a development of the ellipsoid method, and a proof of the polynomiality of linear programming. It also discusses the equivalence of separation and optimization, and provides examples where the ellipsoid algorithm can be used to derive polynomial time results for problems involving an exponential number of constraints.

Chapter 9: Contains an overview of all major classes of interior point methods, including affine scaling, potential reduction, and path following (both primal and primal-dual) methods. It includes a discussion of the underlying geometric ideas and computational issues, as well as convergence proofs and complexity analysis.

Chapter 10: Introduces integer programming formulations of discrete optimization problems. It provides a number of examples, as well as some intuition as to what constitutes a "strong" formulation.

Chapter 11: Covers the major classes of integer programming algorithms, including exact methods (branch and bound, cutting planes, dynamic programming), approximation algorithms, and heuristic methods (local search and simulated annealing). It also introduces a duality theory for integer programming.

Chapter 12: Deals with the art in linear optimization, i.e., the process


of modeling, exploiting problem structure, and fine tuning of optimization algorithms. We discuss the relative performance of interior point methods and different variants of the simplex method, in a realistic large scale setting. We also give some indication of the size of problems that can be currently solved.

An important theme that runs through several chapters is the modeling, complexity, and algorithms for problems with an exponential number of constraints. We discuss modeling in Section 10.3, complexity in Section 8.5, algorithmic approaches in Chapters 6 and 8, and we conclude with a case study in Section 12.5.

There is a fair number of exercises that are given at the end of each chapter. Most of them are intended to deepen the understanding of the subject, or to explore extensions of the theory in the text, as opposed to routine drills. However, several numerical exercises are also included. Starred exercises are supposed to be fairly hard. A solutions manual for qualified instructors can be obtained from the authors.

We have made a special effort to keep the text as modular as possible, allowing the reader to omit certain topics without loss of continuity. For example, much of the material in Chapters 5 and 6 is rarely used in the rest of the book. Furthermore, in Chapter 7 (on network flow problems), a reader who has gone through the problem formulation (Sections 7.1-7.2) can immediately move to any later section in that chapter. Also, the interior point algorithms of Chapter 9 are not used later, with the exception of some of the applications in Chapter 12. Even within the core chapters, there are many sections that can be skipped during a first reading. Some sections have been marked with a star indicating that they contain somewhat more advanced material that is not usually covered in an introductory course.

The book was developed while we took turns teaching a first-year graduate course at MIT, for students in engineering and operations research. The only prerequisite is a working knowledge of linear algebra. In fact, it is only a small subset of linear algebra that is needed (e.g., the concepts of subspaces, linear independence, and the rank of a matrix). However, these elementary tools are sometimes used in subtle ways, and some mathematical maturity on the part of the reader can lead to a better appreciation of the subject.

The book can be used to teach several different types of courses. The first two suggestions below are one-semester variants that we have tried at MIT, but there are also other meaningful alternatives, depending on the students' background and the course's objectives.

(a) Cover most of Chapters 1-7, and if time permits, cover a small number of topics from Chapters 9-12.

(b) An alternative could be the same as above, except that interior point


algorithms (Chapter 9) are fully covered, replacing network flow problems (Chapter 7).

(c) A broad overview course can be constructed by concentrating on the easier material in most of the chapters. The core of such a course could consist of Chapter 1, selected sections from Chapters 2-10, some of the easier material in Chapter 11, and an application from Chapter 12.

(d) Finally, the book is also suitable for a half-course on integer programming, based on parts of Chapters 1 and 8, as well as Chapters 10-11.

There is a truly large literature on linear optimization, and we make no attempt to provide a comprehensive bibliography. To a great extent, the sources that we cite are either original references of historical interest, or recent texts where additional information can be found. For those topics, however, that touch upon current research, we also provide pointers to recent journal articles.

We would like to express our thanks to a number of individuals. We are grateful to our colleagues Dimitri Bertsekas and Rob Freund, for many discussions on the subjects in this book, as well as for reading parts of the manuscript. Several of our students, colleagues, and friends have contributed by reading parts of the manuscript, providing critical comments, and working on the exercises: Jim Christodouleas, Thalia Chryssikou, Austin Frakt, David Gamarnik, Leon Hsu, Spyros Kontogiorgis, Peter Marbach, Gina Mourtzinou, Yannis Paschalidis, Georgia Perakis, Lakis Polymenakos, Jay Sethuraman, Sarah Stock, Paul Tseng, and Ben Van Roy. But mostly, we are grateful to our families for their patience, love, and support in the course of this long project.

Dimitris Bertsimas
John N. Tsitsiklis
Cambridge, January 1997


Chapter 1

Introduction

Contents

1.1. Variants of the linear programming problem
1.2. Examples of linear programming problems
1.3. Piecewise linear convex objective functions
1.4. Graphical representation and solution
1.5. Linear algebra background and notation
1.6. Algorithms and operation counts
1.7. Exercises
1.8. History, notes, and sources


In this chapter, we introduce linear programming, the problem of minimizing a linear cost function subject to linear equality and inequality constraints. We consider a few equivalent forms and then present a number of examples that illustrate the applicability of linear programming to a wide variety of contexts. We also solve a few simple examples and obtain some basic geometric intuition on the nature of the problem. The chapter ends with a review of linear algebra and of the conventions used in describing the computational requirements (operation count) of algorithms.

1.1 Variants of the linear programming problem

In this section, we pose the linear programming problem, discuss a few special forms that it takes, and establish some standard notation that we will be using. Rather than starting abstractly, we first state a concrete example, which is meant to facilitate understanding of the formal definitions that follow. The example we give is devoid of any interpretation. Later on, in Section 1.2, we will have ample opportunity to develop examples that arise in practical settings.

Example 1.1 The following is a linear programming problem:

    minimize    2x1 − x2 + 4x3
    subject to  x1 + x2 + x4 ≤ 2
                3x2 − x3 = 5
                x3 + x4 ≥ 3
                x1 ≥ 0
                x3 ≤ 0.

Here x1, x2, x3, and x4 are variables whose values are to be chosen to minimize the linear cost function 2x1 − x2 + 4x3, subject to a set of linear equality and inequality constraints. Some of these constraints, such as x1 ≥ 0 and x3 ≤ 0, amount to simple restrictions on the sign of certain variables. The remaining constraints are of the form a'x ≤ b, a'x = b, or a'x ≥ b, where a = (a1, a2, a3, a4) is a given vector, x = (x1, x2, x3, x4) is the vector of decision variables, a'x is their inner product Σi ai xi, and b is a given scalar. For example, in the first constraint, we have a = (1, 1, 0, 1) and b = 2.

We now generalize. In a general linear programming problem, we are given a cost vector c = (c1, . . . , cn), and we seek to minimize a linear cost function c'x = Σi ci xi over all n-dimensional vectors x = (x1, . . . , xn),

(As discussed further in Section 1.5, all vectors are assumed to be column vectors and are treated as such in matrix-vector products. Row vectors are indicated as transposes of column vectors. However, whenever we refer to a vector x inside the text, we use the more economical notation x = (x1, . . . , xn), even though x is a column vector. The reader who is unfamiliar with our notation may wish to consult Section 1.5 before continuing.)


subject to a set of linear equality and inequality constraints. In particular, let M1, M2, and M3 be some finite index sets, and suppose that for every i in any one of these sets, we are given an n-dimensional vector ai and a scalar bi that will be used to form the ith constraint. Let also N1 and N2 be subsets of {1, . . . , n} that indicate which variables xj are constrained to be nonnegative or nonpositive, respectively. We then consider the problem

    minimize    c'x
    subject to  ai'x ≥ bi,   i ∈ M1,
                ai'x ≤ bi,   i ∈ M2,
                ai'x = bi,   i ∈ M3,                    (1.1)
                xj ≥ 0,      j ∈ N1,
                xj ≤ 0,      j ∈ N2.

The variables x1, . . . , xn are called decision variables, and a vector x satisfying all of the constraints is called a feasible solution or feasible vector. The set of all feasible solutions is called the feasible set or feasible region. If j is in neither N1 nor N2, there are no restrictions on the sign of xj, in which case we say that xj is a free or unrestricted variable. The function c'x is called the objective function or cost function. A feasible solution x* that minimizes the objective function (that is, c'x* ≤ c'x for all feasible x) is called an optimal feasible solution or, simply, an optimal solution. The value of c'x* is then called the optimal cost. On the other hand, if for every real number K we can find a feasible solution x whose cost is less than K, we say that the optimal cost is −∞ or that the cost is unbounded below. (Sometimes, we will abuse terminology and say that the problem is unbounded.) We finally note that there is no need to study maximization problems separately, because maximizing c'x is equivalent to minimizing the linear cost function −c'x.

An equality constraint ai'x = bi is equivalent to the two constraints ai'x ≤ bi and ai'x ≥ bi. In addition, any constraint of the form ai'x ≤ bi can be rewritten as (−ai)'x ≥ −bi. Finally, constraints of the form xj ≥ 0 or xj ≤ 0 are special cases of constraints of the form ai'x ≥ bi, where ai is a unit vector and bi = 0. We conclude that the feasible set in a general linear programming problem can be expressed exclusively in terms of inequality constraints of the form ai'x ≥ bi. Suppose that there is a total of m such constraints, indexed by i = 1, . . . , m, let b = (b1, . . . , bm), and let A be the m × n matrix whose rows are the row vectors a1', . . . , am', that is,

        [ a1' ]
    A = [  :  ]
        [ am' ]

Then, the constraints ai'x ≥ bi, i = 1, . . . , m, can be expressed compactly in the form Ax ≥ b, and the linear programming problem can be written


as

    minimize    c'x
    subject to  Ax ≥ b.                                 (1.2)

Inequalities such as Ax ≥ b will always be interpreted componentwise; that is, for every i, the ith component of the vector Ax, which is ai'x, is greater than or equal to the ith component bi of the vector b.

Example 1.2 The linear programming problem in Example 1.1 can be written as

    minimize    2x1 − x2 + 4x3
    subject to  −x1 − x2 − x4 ≥ −2
                 3x2 − x3 ≥ 5
                −3x2 + x3 ≥ −5
                 x3 + x4 ≥ 3
                 x1 ≥ 0
                −x3 ≥ 0,

which is of the same form as the problem (1.2), with c = (2, −1, 4, 0),

        [ −1  −1   0  −1 ]
        [  0   3  −1   0 ]
    A = [  0  −3   1   0 ]
        [  0   0   1   1 ]
        [  1   0   0   0 ]
        [  0   0  −1   0 ]

and b = (−2, 5, −5, 3, 0, 0).
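
The rewriting in Example 1.2 is purely mechanical: negate each ≤ constraint, split each equality into a ≥/≤ pair, and append the sign constraints as rows built from unit vectors. As a sketch (the helper name to_geq_form and the list encoding are ours, not the book's), the conversion can be automated and checked against the A and b above:

```python
def to_geq_form(constraints):
    """Rewrite (a, sense, b) constraints as rows of an equivalent A x >= b system.

    An inequality a'x <= b becomes (-a)'x >= -b; an equality a'x = b
    becomes the pair a'x >= b and (-a)'x >= -b.
    """
    A, b = [], []
    for a, sense, rhs in constraints:
        if sense in ('>=', '='):
            A.append(list(a))
            b.append(rhs)
        if sense in ('<=', '='):
            A.append([-coeff for coeff in a])
            b.append(-rhs)
    return A, b

# The constraints of Example 1.1, with the sign constraints
# x1 >= 0 and x3 <= 0 written using unit vectors.
example_1_1 = [
    ((1, 1, 0, 1), '<=', 2),
    ((0, 3, -1, 0), '=', 5),
    ((0, 0, 1, 1), '>=', 3),
    ((1, 0, 0, 0), '>=', 0),
    ((0, 0, 1, 0), '<=', 0),
]
A, b = to_geq_form(example_1_1)
```

Applied to Example 1.1, this reproduces exactly the six rows of A and the vector b = (−2, 5, −5, 3, 0, 0) displayed above.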

Standard form problems

A linear programming problem of the form

    minimize    c'x
    subject to  Ax = b                                  (1.3)
                 x ≥ 0

is said to be in standard form. We provide an interpretation of problems in standard form. Suppose that x has dimension n and let A1, . . . , An be the columns of A. Then, the constraint Ax = b can be written in the form

    A1 x1 + · · · + An xn = b.

Intuitively, there are n available resource vectors A1, . . . , An, and a target vector b. We wish to "synthesize" the target vector b by using a nonnegative amount xi of each resource vector Ai, while minimizing the cost Σi ci xi, where ci is the unit cost of the ith resource. The following is a more concrete example.
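
The two readings of Ax = b — inner products of x with the rows, versus a nonnegative combination of the columns Ai — always agree. A quick check, with a small made-up matrix and vector (our numbers, chosen only for illustration):

```python
# A 2x3 example: the columns of A are the "resource vectors" A1, A2, A3.
A = [[1, 0, 2],
     [0, 3, 1]]
x = [4, 1, 2]  # nonnegative amounts of each resource
m, n = 2, 3

# Row view: component i of Ax is the inner product of row i with x.
row_view = [sum(A[i][j] * x[j] for j in range(n)) for i in range(m)]

# Column view: Ax = x1*A1 + x2*A2 + x3*A3, a combination of the columns.
col_view = [0] * m
for j in range(n):
    for i in range(m):
        col_view[i] += x[j] * A[i][j]
```

Both views produce the same vector b = (8, 5); the column view is the "synthesis" interpretation used in the diet problem below.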


Example 1.3 (The diet problem) Suppose that there are n different foods and m different nutrients, and that we are given the following table with the nutritional content of a unit of each food:

                 food 1   ...   food n
    nutrient 1    a11     ...    a1n
       ...        ...     ...    ...
    nutrient m    am1     ...    amn

Let A be the m × n matrix with entries aij. Note that the jth column Aj of this matrix represents the nutritional content of the jth food. Let b be a vector with the requirements of an ideal diet or, equivalently, a specification of the nutritional content of an "ideal food." We then interpret the standard form problem as the problem of mixing nonnegative quantities xj of the available foods, to synthesize the ideal food at minimal cost. In a variant of this problem, the vector b specifies the minimal requirements of an adequate diet; in that case, the constraints Ax = b are replaced by Ax ≥ b, and the problem is not in standard form.

Reduction to standard form

As argued earlier, any linear programming problem, including the standard form problem (1.3), is a special case of the general form (1.1). We now argue that the converse is also true and that a general linear programming problem can be transformed into an equivalent problem in standard form.

Here, when we say that the two problems are equivalent, we mean that given a feasible solution to one problem, we can construct a feasible solution to the other, with the same cost. In particular, the two problems have the same optimal cost, and given an optimal solution to one problem, we can construct an optimal solution to the other. The problem transformation we have in mind involves two steps:

(a) Elimination of free variables: Given an unrestricted variable xj in a problem in general form, we replace it by xj+ − xj−, where xj+ and xj− are new variables on which we impose the sign constraints xj+ ≥ 0 and xj− ≥ 0. The underlying idea is that any real number can be written as the difference of two nonnegative numbers.

(b) Elimination of inequality constraints: Given an inequality constraint of the form

    Σj aij xj ≤ bi,


we introduce a new variable si and the standard form constraints

    Σj aij xj + si = bi,        si ≥ 0.

Such a variable si is called a slack variable. Similarly, an inequality constraint Σj aij xj ≥ bi can be put in standard form by introducing a surplus variable si and the constraints Σj aij xj − si = bi, si ≥ 0.

We conclude that a general problem can be brought into standard form and, therefore, we only need to develop methods that are capable of solving standard form problems.

saa fom pobemsxap 1 h pol

iniiz X + 4Xut to X + X 3

3X + 2X 14X 0

i quivalnt to th tandad fo pol

iniiz X + x x;ut to X + x x X 33X + x x; 14X X X X

Fo xapl, givn th fail olution (X X ) = ( 2) to th oigina pol, w otain th fai olution (X X x, X) = ( 0 2 1 ) to th tandadfo pol, whih ha th a ot Convly, givn th fail olution(X x, x, X) = ( 1 0 ) to th tandad fo pol, w otain th fail

olution (X X )

=( -5) to th oiginal pol with th a ot.

I e seqe e oe se e geea fom Ax ; o eeope eoy of ea pogammg Hoee e comes o agomsa especay e smpe a eo po meos e be focsgo e saa fom Ax , x ; 0, c s compaoay moecoee
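
The equivalence claimed in Example 1.4 can be verified numerically. The sketch below (helper names are ours) encodes both problems and the two transformation steps — splitting the free variable and adding a surplus variable — and checks that the solutions of the example map to each other with equal costs:

```python
def general_cost(x1, x2):
    return 2 * x1 + 4 * x2  # cost of the original problem

def general_feasible(x1, x2, tol=1e-9):
    return (x1 + x2 >= 3 - tol
            and abs(3 * x1 + 2 * x2 - 14) <= tol
            and x1 >= -tol)

def standard_cost(x1, x2p, x2m, x3):
    return 2 * x1 + 4 * x2p - 4 * x2m  # cost of the standard form problem

def standard_feasible(x1, x2p, x2m, x3, tol=1e-9):
    return (abs(x1 + x2p - x2m - x3 - 3) <= tol
            and abs(3 * x1 + 2 * x2p - 2 * x2m - 14) <= tol
            and min(x1, x2p, x2m, x3) >= -tol)

def to_standard(x1, x2):
    """Split the free variable x2 = x2+ - x2-; add the surplus x3 = x1 + x2 - 3."""
    return (x1, max(x2, 0), max(-x2, 0), x1 + x2 - 3)

def to_general(x1, x2p, x2m, x3):
    """Recombine x2 = x2+ - x2-; the surplus variable is simply dropped."""
    return (x1, x2p - x2m)
```

For instance, to_standard(6, -2) returns (6, 0, 2, 1) and to_general(8, 1, 6, 0) returns (8, -5), with costs 4 and −4 respectively, matching the example.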

1.2 Examples of linear programming problems

In this section, we discuss a number of examples of linear programming problems. One of our purposes is to indicate the vast range of situations to which linear programming can be applied. Another purpose is to develop some familiarity with the art of constructing mathematical formulations of loosely defined optimization problems.


A production problem

A firm produces n different goods using m different raw materials. Let bi, i = 1, . . . , m, be the available amount of the ith raw material. The jth good, j = 1, . . . , n, requires aij units of the ith material and results in a revenue of cj per unit produced. The firm faces the problem of deciding how much of each good to produce in order to maximize its total revenue.

In this example, the choice of the decision variables is simple. Let xj, j = 1, . . . , n, be the amount of the jth good. Then, the problem facing the firm can be formulated as follows:

    maximize    c1 x1 + · · · + cn xn
    subject to  ai1 x1 + · · · + ain xn ≤ bi,   i = 1, . . . , m,
                xj ≥ 0,                          j = 1, . . . , n.

Production planning by a computer manufacturer

The example that we consider here is a problem that Digital Equipment Corporation (DEC) faced in the fourth quarter of 1988. It illustrates the complexities and uncertainties of real world applications, as well as the usefulness of mathematical modeling for making important strategic decisions.

In the second quarter of 1988, DEC introduced a new family of (single CPU) computer systems and workstations: GP-1, GP-2, and GP-3, which are general purpose computer systems with different memory, disk storage, and expansion capabilities, as well as WS-1 and WS-2, which are workstations. In Table 1.1, we list the models, the list prices, the average disk usage per system, and the memory usage. For example, GP-1 uses four 256K memory boards, and 3 out of every 10 units are produced with a disk drive.

    System   Price     # disk drives   # 256K boards
    GP-1     $60,000        0.3              4
    GP-2     $40,000        1.7              2
    GP-3     $30,000        0                2
    WS-1     $30,000        1.4              2
    WS-2     $15,000        0                1

Table 1.1: Features of the five different DEC systems.

Shipments of this new family of products started in the third quarter and were ramped slowly during the fourth quarter. The following difficulties were anticipated for the next quarter:



(a) The in-house supplier of CPUs could provide at most 7,000 units, due to debugging problems.

(b) The supply of disk drives was uncertain and was estimated by the manufacturer to be in the range of 3,000 to 7,000 units.

(c) The supply of 256K memory boards was also limited in the range of 8,000 to 16,000 units.

On the demand side, the marketing department established that the maximum demand for the first quarter of 1989 would be 1,800 for GP-1 systems, 300 for GP-3 systems, 3,800 systems for the whole GP family, and 3,200 systems for the WS family. Included in these projections were 500 orders for GP-2, 500 orders for WS-1, and 400 orders for WS-2 that had already been received and had to be fulfilled in the next quarter.

In the previous quarters, in order to address the disk drive shortage, DEC had produced GP-1, GP-3, and WS-2 with no disk drive (although 3 out of 10 customers for GP-1 systems wanted a disk drive), and GP-2, WS-1 with one disk drive. We refer to this way of configuring the systems as the constrained mode of production.

In addition, DEC could address the shortage of 256K memory boards by using two alternative boards, instead of four 256K memory boards, in the GP-1 system. DEC could provide 4,000 alternative boards for the next quarter.

It was clear to the manufacturing staff that the problem had become complex, as revenue, profitability, and customer satisfaction were at risk. The following decisions needed to be made:

(a) The production plan for the first quarter of 1989.

(b) Concerning disk drive usage, should DEC continue to manufacture products in the constrained mode, or should it plan to satisfy customer preferences?

(c) Concerning memory boards, should DEC use alternative memory boards for its GP-1 systems?

(d) A final decision that had to be made was related to tradeoffs between shortages of disk drives and of 256K memory boards. The manufacturing staff would like to concentrate their efforts on either decreasing the shortage of disks or decreasing the shortage of 256K memory boards. Hence, they would like to know which alternative would have a larger effect on revenue.

In order to model the problem that DEC faced, we introduce variables x1, x2, x3, x4, x5, that represent the number (in thousands) of GP-1, GP-2, GP-3, WS-1, and WS-2 systems, respectively, to be produced in the next quarter. Strictly speaking, since 1000xi stands for number of units, it must be an integer. This can be accomplished by truncating each xi after the third decimal point; given the size of the demand and the size of the


variables xi, this has a negligible effect, and the integrality constraint on 1000xi can be ignored.

DEC had to make two distinct decisions: whether to use the constrained mode of production regarding disk drive usage, and whether to use alternative memory boards for the GP-1 system. As a result, there are four different combinations of possible choices.

We first develop a model for the case where alternative memory boards are not used and the constrained mode of production of disk drives is selected. The problem can be formulated as follows:

    maximize    60x1 + 40x2 + 30x3 + 30x4 + 15x5       (total revenue)

subject to the following constraints:

    x1 + x2 + x3 + x4 + x5 ≤ 7          (CPU availability)
    4x1 + 2x2 + 2x3 + 2x4 + x5 ≤ 8      (256K availability)
    x2 + x4 ≤ 3                         (disk drive availability)
    x1 ≤ 1.8                            (max demand for GP-1)
    x3 ≤ 0.3                            (max demand for GP-3)
    x1 + x2 + x3 ≤ 3.8                  (max demand for GP family)
    x4 + x5 ≤ 3.2                       (max demand for WS family)
    x2 ≥ 0.5                            (min demand for GP-2)
    x4 ≥ 0.5                            (min demand for WS-1)
    x5 ≥ 0.4                            (min demand for WS-2)
    x1, x2, x3, x4, x5 ≥ 0.

Notice that the objective function is in millions of dollars. In some respects, this is a pessimistic formulation, because the 256K memory and disk drive availability were set to 8 and 3, respectively, which is the lowest value in the range that was estimated. It is actually of interest to determine the solution to this problem as the 256K memory availability ranges from 8 to 16, and the disk drive availability ranges from 3 to 7, because this provides valuable information on the sensitivity of the optimal solution on availability. In another respect, the formulation is optimistic because, for example, it assumes that the revenue from GP-1 systems is 60x1 for any x1 ≤ 1.8, even though a demand for 1,800 GP-1 systems is not guaranteed.
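
The formulation is easy to encode and probe. In the sketch below, the candidate plan is our own hypothetical point (not DEC's decision, and not claimed optimal); the code only checks feasibility against the constrained-mode constraints and evaluates the revenue, in millions of dollars:

```python
def revenue(x):
    # list prices (in thousands of $) times volumes (in thousands of units)
    return 60 * x[0] + 40 * x[1] + 30 * x[2] + 30 * x[3] + 15 * x[4]

# (coefficients, bound) pairs for the a'x <= b constraints of the model
upper = [
    ([1, 1, 1, 1, 1], 7.0),   # CPU availability
    ([4, 2, 2, 2, 1], 8.0),   # 256K board availability
    ([0, 1, 0, 1, 0], 3.0),   # disk drive availability (constrained mode)
    ([1, 0, 0, 0, 0], 1.8),   # max demand for GP-1
    ([0, 0, 1, 0, 0], 0.3),   # max demand for GP-3
    ([1, 1, 1, 0, 0], 3.8),   # max demand for the GP family
    ([0, 0, 0, 1, 1], 3.2),   # max demand for the WS family
]
lower = [0.0, 0.5, 0.0, 0.5, 0.4]  # committed orders give the minimum demands

def feasible(x, tol=1e-9):
    rows_ok = all(sum(a[j] * x[j] for j in range(5)) <= b + tol for a, b in upper)
    return rows_ok and all(x[j] >= lo - tol for j, lo in enumerate(lower))

plan = [1.2, 0.5, 0.3, 0.5, 0.4]  # hypothetical plan: 1,200 GP-1 systems, etc.
```

This plan is feasible (the 256K constraint is nearly tight, at 7.8 of 8) and yields a revenue of 122, i.e., $122 million; finding the best such plan is exactly the linear programming problem above.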

In order to accommodate the other three choices that D had, someof the probem constraints have to be modied, as foows If we use theunconstrained mode of production for disk drives , the constraint X2+X : is repaced by

rthermore, if we wish to use aternative memory boards in G systems,we repace the constraint 4X + X2 + X3 + X + X : by the two

constraints

    2x1 ≤ 4,
    2x1 + 2x2 + 2x3 + x4 + x5 ≤ 14.

The four combinations of choices lead to four different linear programming problems, each of which needs to be solved for a variety of parameter values because, as discussed earlier, the right-hand side of some of the constraints is only known to lie within a certain range. Methods for solving linear programming problems, when certain parameters are allowed to vary, will be studied in Chapter 5, where this case study is revisited.
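As a rough check, the base-case formulation above can be handed to an off-the-shelf LP solver. The sketch below uses SciPy's linprog (an assumption; any LP solver would do), with the coefficient values as reconstructed here; the min/max demands on individual products are encoded as variable bounds.

```python
# A sketch: solving the base-case production-planning LP with SciPy's linprog.
# linprog minimizes, so the revenue vector is negated in order to maximize.
from scipy.optimize import linprog

c = [-60, -40, -30, -30, -15]       # revenue per thousand units, negated
A_ub = [
    [1, 1, 1, 1, 1],                # CPU availability <= 7
    [4, 2, 2, 1, 1],                # 256K memory board availability <= 14
    [0, 1, 0, 0, 1],                # disk drive availability <= 3
    [1, 1, 1, 0, 0],                # max demand for GP systems <= 3.8
    [0, 0, 0, 1, 1],                # max demand for WS systems <= 3.2
]
b_ub = [7, 14, 3, 3.8, 3.2]
# Per-variable bounds encode the max/min demands on single products.
bounds = [(0, 1.8), (0.5, None), (0, 0.3), (0.5, None), (0.4, None)]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
assert res.status == 0              # an optimal solution was found
print(-res.fun)                     # optimal revenue, in millions of dollars
```

Re-solving with the modified right-hand sides (memory from 14 to 16, disk drives from 3 to 7) is then a matter of editing `b_ub`, which is exactly the kind of sensitivity question Chapter 5 addresses.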

Planning of electric power capacity

A state wants to plan its electricity capacity for the next T years. The state has a forecast of d_t megawatts, presumed accurate, of the demand for electricity during year t. The existing capacity, which is in oil-fired plants, that will not be retired and will be available during year t, is e_t. There are two alternatives for expanding electric capacity: coal-fired or nuclear power plants. There is a capital cost of c_t per megawatt of coal-fired capacity that becomes operational at the beginning of year t. The corresponding capital cost for nuclear power plants is n_t. For various political and safety reasons, it has been decided that no more than 20% of the total capacity should ever be nuclear. Coal plants last for 20 years, while nuclear plants last for 15 years. A least cost capacity expansion plan is desired.

The first step in formulating this problem as a linear programming problem is to define the decision variables. Let x_t and y_t be the amount of coal (respectively, nuclear) capacity brought on line at the beginning of year t. Let w_t and z_t be the total coal (respectively, nuclear) capacity available in year t. The cost of a capacity expansion plan is therefore

    Σ_{t=1}^{T} (c_t x_t + n_t y_t).

Since coal-fired plants last for 20 years, we have

    w_t = Σ_{s=max{1,t−19}}^{t} x_s,    t = 1, ..., T.

Similarly, for nuclear power plants,

    z_t = Σ_{s=max{1,t−14}}^{t} y_s,    t = 1, ..., T.


Since the available capacity must meet the forecasted demand, we require

    w_t + z_t + e_t ≥ d_t,    t = 1, ..., T.

Finally, the requirement that no more than 20% of the total capacity should ever be nuclear can be written as

    z_t / (w_t + z_t + e_t) ≤ 0.2,    t = 1, ..., T,

which is equivalent to the linear constraint 0.8z_t − 0.2w_t ≤ 0.2e_t.

Summarizing, the capacity expansion problem is as follows:

minimize  Σ_{t=1}^{T} (c_t x_t + n_t y_t)

subject to
    w_t − Σ_{s=max{1,t−19}}^{t} x_s = 0,    t = 1, ..., T,
    z_t − Σ_{s=max{1,t−14}}^{t} y_s = 0,    t = 1, ..., T,
    w_t + z_t + e_t ≥ d_t,                  t = 1, ..., T,
    0.8z_t − 0.2w_t ≤ 0.2e_t,               t = 1, ..., T,
    x_t, y_t, w_t, z_t ≥ 0,                 t = 1, ..., T.

We note that this formulation assumes that the demand d_t of future years is accurately forecast. However, this will rarely be the case in practice.
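The lifetime bookkeeping in the equations above can be sketched in a few lines of code; the yearly additions below are made-up numbers, not data from the text. A plant added in year s contributes to capacity in years s, s+1, ..., s+lifetime−1.

```python
# Compute total available capacity w_t from yearly additions x_s, mirroring
# w_t = sum of x_s over s = max{1, t - lifetime + 1}, ..., t (0-based here).
def available_capacity(additions, lifetime):
    T = len(additions)
    return [sum(additions[max(0, t - lifetime + 1): t + 1]) for t in range(T)]

x = [3, 0, 0, 2, 0]                 # illustrative coal additions (megawatts)
print(available_capacity(x, 20))    # nothing retires within 5 years: [3, 3, 3, 5, 5]
print(available_capacity(x, 2))     # a 2-year lifetime forces retirements: [3, 3, 0, 2, 2]
```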

A scheduling problem

In the previous examples, the choice of the decision variables was fairly straightforward. We now discuss an example where this choice is less obvious.

A hospital wants to make a weekly night shift (12 p.m.–8 a.m.) schedule for its nurses. The demand for nurses for the night shift on day j is an integer d_j, j = 1, ..., 7. Every nurse works 5 days in a row on the night shift. The problem is to find the minimal number of nurses the hospital needs to hire.

One could try to set up the problem with a decision variable y_j equal to the number of nurses that work on day j. With such variables, however, it is impossible to capture the constraint that every nurse works 5 consecutive days. For this reason, we use instead the decision variables x_j, defined as


the number of nurses starting their week on day j. (For example, a nurse whose week starts on day 2 works days 2, 3, 4, 5, 6.) We then have the following problem formulation:

minimize  x1 + x2 + x3 + x4 + x5 + x6 + x7
subject to  x1 + x4 + x5 + x6 + x7 ≥ d1,
            x1 + x2 + x5 + x6 + x7 ≥ d2,
            x1 + x2 + x3 + x6 + x7 ≥ d3,
            x1 + x2 + x3 + x4 + x7 ≥ d4,
            x1 + x2 + x3 + x4 + x5 ≥ d5,
            x2 + x3 + x4 + x5 + x6 ≥ d6,
            x3 + x4 + x5 + x6 + x7 ≥ d7,
            x_j ≥ 0,  x_j integer.

This would be a linear programming problem, except for the constraint that each x_j must be an integer, and we actually have a linear integer programming problem. One way of dealing with this issue is to ignore ("relax") the integrality constraints and obtain the so-called linear programming relaxation of the original problem. Because the linear programming relaxation has fewer constraints, and therefore more options, the optimal cost will be less than or equal to the optimal cost of the original problem. If the optimal solution to the linear programming relaxation happens to be integer, then it is also an optimal solution to the original problem. If it is not integer, we can round each x_j upwards, and thus obtain a feasible, but not necessarily optimal, solution to the original problem. It turns out that for this particular problem, an optimal solution can be found without too much effort. However, this is the exception rather than the rule: finding optimal solutions to general integer programming problems is typically difficult; some methods will be discussed in Chapter 11.
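The relax-and-round idea can be sketched as follows, again with SciPy's linprog (an assumption) and made-up demand numbers.

```python
# Solve the LP relaxation of the nurse scheduling problem, then round each
# x_j upwards to obtain a feasible (not necessarily optimal) integer schedule.
from math import ceil
from scipy.optimize import linprog

d = [5, 8, 8, 6, 7, 4, 3]                         # hypothetical demands d_1..d_7
# Day j is covered by nurses starting on days j, j-1, ..., j-4 (cyclically),
# i.e. by start day i whenever (j - i) mod 7 < 5.
A_ub = [[-1 if (j - i) % 7 < 5 else 0 for i in range(7)] for j in range(7)]
b_ub = [-dj for dj in d]                          # encodes coverage(j) >= d_j

lp = linprog([1] * 7, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 7,
             method="highs")
rounded = [ceil(v - 1e-9) for v in lp.x]
cover = [sum(rounded[i] for i in range(7) if (j - i) % 7 < 5) for j in range(7)]
assert all(cover[j] >= d[j] for j in range(7))    # rounded schedule is feasible
print(lp.fun, sum(rounded))                       # LP cost <= integer cost
```

The LP optimum is a lower bound on the number of nurses, and the rounded schedule an upper bound; for this problem the two are typically close.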

Choosing paths in a communication network

Consider a communication network consisting of n nodes. Nodes are connected by communication links. A link allowing one-way transmission from node i to node j is described by an ordered pair (i, j). Let A be the set of all links. We assume that each link (i, j) ∈ A can carry up to u_ij bits per second. There is a positive charge c_ij per bit transmitted along that link. Each node k generates data, at the rate of b^{kℓ} bits per second, that have to be transmitted to node ℓ, either through a direct link (k, ℓ) or by tracing a sequence of links. The problem is to choose paths along which all data reach their intended destinations, while minimizing the total cost. We allow the data with the same origin and destination to be split and be transmitted along different paths.

In order to formulate this problem as a linear programming problem, we introduce variables x_ij^{kℓ} indicating the amount of data with origin k and


destination ℓ that traverse link (i, j). Let

    b_i^{kℓ} = b^{kℓ},   if i = k,
    b_i^{kℓ} = −b^{kℓ},  if i = ℓ,
    b_i^{kℓ} = 0,        otherwise.

Thus, b_i^{kℓ} is the net inflow at node i, from outside the network, of data with origin k and destination ℓ. We then have the following formulation:

minimize  Σ_{(i,j)∈A} Σ_{k=1}^{n} Σ_{ℓ=1}^{n} c_ij x_ij^{kℓ}

subject to
    Σ_{j | (i,j)∈A} x_ij^{kℓ} − Σ_{j | (j,i)∈A} x_ji^{kℓ} = b_i^{kℓ},   i, k, ℓ = 1, ..., n,
    Σ_{k=1}^{n} Σ_{ℓ=1}^{n} x_ij^{kℓ} ≤ u_ij,   (i, j) ∈ A,
    x_ij^{kℓ} ≥ 0,   (i, j) ∈ A,  k, ℓ = 1, ..., n.

The first constraint is a flow conservation constraint at node i for data with origin k and destination ℓ. The expression

    Σ_{j | (i,j)∈A} x_ij^{kℓ}

represents the amount of data with origin and destination k and ℓ, respectively, that leave node i along some link. The expression

    Σ_{j | (j,i)∈A} x_ji^{kℓ}

represents the amount of data with the same origin and destination that enter node i through some link. Finally, b_i^{kℓ} is the net amount of such data that enter the network from outside, at node i. The second constraint expresses the requirement that the total traffic through a link (i, j) cannot exceed the link's capacity.

This problem is known as the multicommodity flow problem, with the traffic corresponding to each origin-destination pair viewed as a different commodity. A mathematically similar problem arises when we consider a transportation company that wishes to transport several commodities from their origins to their destinations through a network. There is a version of this problem, known as the minimum cost network flow problem, in which we do not distinguish between different commodities. Instead, we are given the amounts of external supply and demand at each node, and the objective is to transport material from the supply nodes to the demand nodes, at minimum cost. The network flow problem, which is the subject of Chapter 7, contains as special cases some important problems such as the shortest path problem, the maximum flow problem, and the assignment problem.


Pattern classification

We are given m examples of objects and for each one, say the ith one, a description of its features in terms of a vector a_i. Objects belong to one of two classes, and for each example we are told the class that it belongs to. (In a medical context, for instance, each object could be an image of a tumor, the components of a_i — the features — could be numerical descriptors of the image, and the two classes could correspond to benign and malignant tumors.) Our objective is to use the available labeled examples in order to design a classifier which, given a new object (other than the already labeled examples), will decide the class to which it belongs.

A linear classifier is defined in terms of n + 1 scalars x_1, ..., x_n, x_{n+1}, and operates as follows: given an object with feature vector a, it is declared to be an object of the first class if

    a'x ≥ x_{n+1},

and of the second class if

    a'x < x_{n+1}.

There may be many ways of choosing such a classifier, but a natural starting point is the requirement that the classifier makes no mistakes on the available examples. Let S and T be the sets of examples of the first and of the second class, respectively. We are looking for scalars x_1, ..., x_n, x_{n+1} such that

    a_i'x ≥ x_{n+1},   if i ∈ S,
    a_i'x < x_{n+1},   if i ∈ T.

Note that the second set of constraints involves strict inequalities, which are not of the form allowed in linear programming problems. This difficulty can be bypassed by observing that if some choice of x_1, ..., x_n, x_{n+1} satisfies all of the above constraints, then there exists some other choice (obtained by multiplying x_1, ..., x_n, x_{n+1} by a suitably large positive scalar) that satisfies

    a_i'x ≥ x_{n+1},        if i ∈ S,
    a_i'x ≤ x_{n+1} − 1,    if i ∈ T.

We conclude that the problem of finding a classifier consistent with the available labeled examples is a problem of finding a feasible solution to a linear programming problem.
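The feasibility problem above can be handed to an LP solver with a zero objective; a small sketch with SciPy's linprog (an assumption) and a made-up two-dimensional data set:

```python
# Find (x1, x2, x3) with a'x >= x3 on the first class and a'x <= x3 - 1 on
# the second class, written in the A_ub v <= b_ub form that linprog expects.
from scipy.optimize import linprog

first = [(2.0, 2.0), (3.0, 1.0)]    # examples of the first class
second = [(0.0, 0.0), (-1.0, 1.0)]  # examples of the second class

A_ub = ([[-a1, -a2, 1] for a1, a2 in first] +     # -a'x + x3 <= 0
        [[a1, a2, -1] for a1, a2 in second])      #  a'x - x3 <= -1
b_ub = [0.0] * len(first) + [-1.0] * len(second)

res = linprog([0, 0, 0], A_ub=A_ub, b_ub=b_ub,
              bounds=[(None, None)] * 3, method="highs")
assert res.status == 0              # a consistent linear classifier exists
x1, x2, x3 = res.x
assert all(a1 * x1 + a2 * x2 >= x3 - 1e-6 for a1, a2 in first)
assert all(a1 * x1 + a2 * x2 <= x3 - 1 + 1e-6 for a1, a2 in second)
```

If the two point sets cannot be separated by a hyperplane, the solver reports infeasibility instead.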


1.3 Piecewise linear convex objective functions

All of the examples in the preceding section involved a linear objective function. However, there is an important class of optimization problems with a nonlinear objective function that can be cast as linear programming problems; these are examined next.

We first need some definitions.

Definition 1.1
(a) A function f : ℝ^n → ℝ is called convex if for every x, y ∈ ℝ^n, and every λ ∈ [0, 1], we have

    f(λx + (1 − λ)y) ≤ λf(x) + (1 − λ)f(y).

(b) A function f : ℝ^n → ℝ is called concave if for every x, y ∈ ℝ^n, and every λ ∈ [0, 1], we have

    f(λx + (1 − λ)y) ≥ λf(x) + (1 − λ)f(y).

Note that if x and y are vectors in ℝ^n and if λ ranges in [0, 1], then the points of the form λx + (1 − λ)y belong to the line segment joining x and y. The definition of a convex function refers to the values of f, as its argument traces this segment. If f were linear, the inequality in part (a) of the definition would hold with equality. The inequality therefore means that when we restrict attention to such a segment, the graph of the function lies no higher than the graph of a corresponding linear function; see Figure 1.1(a).

It is easily seen that a function f is convex if and only if the function −f is concave. Note that a function of the form f(x) = a_0 + Σ_{i=1}^{n} a_i x_i, where a_0, ..., a_n are scalars, called an affine function, is both convex and concave. (It turns out that affine functions are the only functions that are both convex and concave.) Convex (as well as concave) functions play a central role in optimization.

We say that a vector x is a local minimum of f if f(x) ≤ f(y) for all y in the vicinity of x. We also say that x is a global minimum if f(x) ≤ f(y) for all y. A convex function cannot have local minima that fail to be global minima (see Figure 1.1), and this property is of great help in designing efficient optimization algorithms.

Let c_1, ..., c_m be vectors in ℝ^n, let d_1, ..., d_m be scalars, and consider the function f : ℝ^n → ℝ defined by

    f(x) = max_{i=1,...,m} (c_i'x + d_i)

Figure 1.1: (a) Illustration of the definition of a convex function. (b) A concave function. (c) A function that is neither convex nor concave; note that A is a local, but not global, minimum.

[see Figure 1.2(a)]. Such a function is convex, as a consequence of the following result.

Theorem 1.1 Let f_1, ..., f_m : ℝ^n → ℝ be convex functions. Then, the function f defined by f(x) = max_{i=1,...,m} f_i(x) is also convex.

Proof. Let x, y ∈ ℝ^n and let λ ∈ [0, 1]. We have

    f(λx + (1 − λ)y) = max_{i=1,...,m} f_i(λx + (1 − λ)y)
                     ≤ max_{i=1,...,m} ( λf_i(x) + (1 − λ)f_i(y) )
                     ≤ max_{i=1,...,m} λf_i(x) + max_{i=1,...,m} (1 − λ)f_i(y)
                     = λf(x) + (1 − λ)f(y).

A function of the form max_{i=1,...,m} (c_i'x + d_i) is called a piecewise linear convex function. A simple example is the absolute value function, defined by f(x) = |x| = max{x, −x}. As illustrated in Figure 1.2(b), a piecewise linear convex function can be used to approximate a general convex function.
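Theorem 1.1 is easy to probe numerically; a sketch in one dimension with made-up coefficients, checking the convexity inequality at randomly sampled points:

```python
# A numerical spot check of convexity for f(x) = max_i (c_i * x + d_i):
# the inequality f(lam*x + (1-lam)*y) <= lam*f(x) + (1-lam)*f(y) should hold.
import random

cs, ds = [1.0, -2.0, 0.5], [0.0, 1.0, -3.0]   # illustrative coefficients

def f(x):
    return max(c * x + d for c, d in zip(cs, ds))

random.seed(0)
for _ in range(1000):
    x, y, lam = random.uniform(-10, 10), random.uniform(-10, 10), random.random()
    assert f(lam * x + (1 - lam) * y) <= lam * f(x) + (1 - lam) * f(y) + 1e-9
print("convexity inequality verified at 1000 sampled points")
```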

We now consider a generalization of linear programming, where the objective function is piecewise linear and convex rather than linear:

minimize  max_{i=1,...,m} (c_i'x + d_i)
subject to  Ax ≥ b.

Figure 1.2: (a) A piecewise linear convex function of a single variable. (b) An approximation of a convex function by a piecewise linear convex function.

Note that max_{i=1,...,m} (c_i'x + d_i) is equal to the smallest number z that satisfies z ≥ c_i'x + d_i for all i. For this reason, the optimization problem under consideration is equivalent to the linear programming problem

minimize  z
subject to  z ≥ c_i'x + d_i,   i = 1, ..., m,
            Ax ≥ b,

where the decision variables are z and x.

To summarize, linear programming can be used to solve problems with piecewise linear convex cost functions, and the latter class of functions can be used as an approximation of more general convex cost functions. On the other hand, such a piecewise linear approximation is not always a good idea because it can turn a smooth function into a nonsmooth one (piecewise linear functions have discontinuous derivatives).

We finally note that if we are given a constraint of the form f(x) ≤ h, where f is the piecewise linear convex function f(x) = max_{i=1,...,m} (f_i'x + g_i), such a constraint can be rewritten as

    f_i'x + g_i ≤ h,   i = 1, ..., m,

and linear programming is again applicable.
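The epigraph trick with the auxiliary variable z can be sketched on a hypothetical one-dimensional instance, f(x) = max(−x, x − 2), whose minimum is −1 at x = 1; the solver is SciPy's linprog (an assumption).

```python
# Minimize f(x) = max(-x, x - 2) by introducing z with z >= -x and z >= x - 2.
from scipy.optimize import linprog

# Variables (x, z); minimize z.  In A_ub v <= b_ub form:
#   z >= -x     ->  -x - z <= 0
#   z >= x - 2  ->   x - z <= 2
res = linprog([0, 1], A_ub=[[-1, -1], [1, -1]], b_ub=[0, 2],
              bounds=[(None, None), (None, None)], method="highs")
print(res.x, res.fun)   # optimum at x = 1 with value -1
```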

Problems involving absolute values

Consider a problem of the form

minimize  Σ_{i=1}^{n} c_i |x_i|
subject to  Ax ≥ b,


where x = (x_1, ..., x_n), and where the cost coefficients c_i are assumed to be nonnegative. The cost criterion, being the sum of the piecewise linear convex functions c_i|x_i|, is easily shown to be piecewise linear and convex (Exercise 1.2). However, expressing this cost criterion in the form max_j (c_j'x + d_j) is a bit involved, and a more direct route is preferable. We observe that |x_i| is the smallest number z_i that satisfies x_i ≤ z_i and −x_i ≤ z_i, and we obtain the linear programming formulation

minimize  Σ_{i=1}^{n} c_i z_i
subject to  Ax ≥ b,
            x_i ≤ z_i,    i = 1, ..., n,
           −x_i ≤ z_i,    i = 1, ..., n.

An alternative method for dealing with absolute values is to introduce new variables x_i^+, x_i^-, constrained to be nonnegative, and let x_i = x_i^+ − x_i^-. (Our intention is to have x_i = x_i^+ or x_i = −x_i^-, depending on whether x_i is positive or negative.) We then replace every occurrence of |x_i| with x_i^+ + x_i^- and obtain the alternative formulation

minimize  Σ_{i=1}^{n} c_i (x_i^+ + x_i^-)
subject to  Ax^+ − Ax^- ≥ b,
            x^+, x^- ≥ 0,

where x^+ = (x_1^+, ..., x_n^+) and x^- = (x_1^-, ..., x_n^-).

The relations x_i = x_i^+ − x_i^-, x_i^+ ≥ 0, x_i^- ≥ 0, are not enough to guarantee that |x_i| = x_i^+ + x_i^-, and the validity of this reformulation may not be entirely obvious. Let us assume for simplicity that c_i > 0 for all i. At an optimal solution to the reformulated problem, and for each i, we must have either x_i^+ = 0 or x_i^- = 0, because otherwise we could reduce both x_i^+ and x_i^- by the same amount and preserve feasibility, while reducing the cost, in contradiction of optimality. Having guaranteed that either x_i^+ = 0 or x_i^- = 0, the desired relation |x_i| = x_i^+ + x_i^- now follows.

The formal correctness of the two reformulations that have been presented here, and in a somewhat more general setting, is the subject of Exercise 1.5. We also note that the nonnegativity assumption on the cost coefficients c_i is crucial because, otherwise, the cost criterion is nonconvex.

Example 1.5 Consider the problem

minimize  2|x_1| + x_2
subject to  x_1 + x_2 ≥ 4.


Our first reformulation yields

minimize  2z_1 + x_2
subject to  x_1 + x_2 ≥ 4,
            x_1 ≤ z_1,
           −x_1 ≤ z_1,

while the second yields

minimize  2x_1^+ + 2x_1^- + x_2
subject to  x_1^+ − x_1^- + x_2 ≥ 4,
            x_1^+, x_1^- ≥ 0.
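The second reformulation of Example 1.5 can be solved directly; a sketch with SciPy's linprog (an assumption). At the optimum both x_1^+ and x_1^- vanish, consistent with the argument above.

```python
# Example 1.5 via the splitting x1 = x1p - x1m, with x1p, x1m >= 0.
from scipy.optimize import linprog

# Variables (x1p, x1m, x2): minimize 2*x1p + 2*x1m + x2
# subject to x1p - x1m + x2 >= 4, i.e. -x1p + x1m - x2 <= -4.
res = linprog([2, 2, 1], A_ub=[[-1, 1, -1]], b_ub=[-4],
              bounds=[(0, None), (0, None), (None, None)], method="highs")
x1 = res.x[0] - res.x[1]
print(x1, res.fun)   # optimal: x1 = 0, cost 4 (with x2 = 4)
```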

We now continue with some applications involving piecewise linear convex objective functions.

Data fitting

We are given m data points of the form (a_i, b_i), i = 1, ..., m, where a_i ∈ ℝ^n and b_i ∈ ℝ, and wish to build a model that predicts the value of the variable b from knowledge of the vector a. In such a situation, one often uses a linear model of the form b = a'x, where x is a parameter vector to be determined. Given a particular parameter vector x, the residual, or prediction error, at the ith data point is defined as |b_i − a_i'x|. Given a choice between alternative models, one should choose a model that "explains" the available data as best as possible, i.e., a model that results in small residuals.

One possibility is to minimize the largest residual. This is the problem of minimizing

    max_i |b_i − a_i'x|,

with respect to x, subject to no constraints. Note that we are dealing here with a piecewise linear convex cost criterion. The following is an equivalent linear programming formulation:

minimize  z
subject to  b_i − a_i'x ≤ z,    i = 1, ..., m,
           −b_i + a_i'x ≤ z,    i = 1, ..., m,

the decision variables being z and x.

In an alternative formulation, we could adopt the cost criterion

    Σ_{i=1}^{m} |b_i − a_i'x|.


Since |b_i − a_i'x| is the smallest number z_i that satisfies b_i − a_i'x ≤ z_i and −b_i + a_i'x ≤ z_i, we obtain the formulation

minimize  z_1 + ··· + z_m
subject to  b_i − a_i'x ≤ z_i,    i = 1, ..., m,
           −b_i + a_i'x ≤ z_i,    i = 1, ..., m.

In practice, one may wish to use the quadratic cost criterion Σ_{i=1}^{m} (b_i − a_i'x)², in order to obtain a "least squares fit." This is a problem which is easier than linear programming; it can be solved using calculus methods, but its discussion is outside the scope of this book.

Optimal control of linear systems

Consider a dynamical system that evolves according to a model of the form

    x(t + 1) = Ax(t) + Bu(t),
    y(t) = c'x(t).

Here x(t) is the state of the system at time t, y(t) is the system output, assumed scalar, and u(t) is a control vector that we are free to choose subject to linear constraints of the form Du(t) ≤ d [these might include saturation constraints, i.e., hard bounds on the magnitude of each component of u(t)]. To mention some possible applications, this could be a model of an airplane, an engine, an electrical circuit, a mechanical system, a manufacturing system, or even a model of economic growth. We are also given the initial state x(0). In one possible problem, we are to choose the values of the control variables u(0), ..., u(T − 1) to drive the state x(T) to a target state, assumed for simplicity to be zero. In addition to zeroing the state, it is often desirable to keep the magnitude of the output small at all intermediate times, and we may wish to minimize

    max_{t=1,...,T−1} |y(t)|.

We then obtain the following linear programming problem:

minimize  z
subject to  −z ≤ y(t) ≤ z,          t = 1, ..., T − 1,
            x(t + 1) = Ax(t) + Bu(t),  t = 0, ..., T − 1,
            y(t) = c'x(t),           t = 1, ..., T − 1,
            Du(t) ≤ d,               t = 0, ..., T − 1,
            x(T) = 0,
            x(0) = given.


Figure 1.3: Graphical solution of the problem in Example 1.6.

In particular, increasing z corresponds to moving the line z = c'x along the direction of the vector c. Since we are interested in minimizing z, we would like to move the line as much as possible in the direction of −c, as long as we do not leave the feasible region. The best we can do is z = −2 (see Figure 1.3), and the vector x = (1, 1) is an optimal solution. Note that this is a corner of the feasible set (the concept of a "corner" will be defined formally in Chapter 2).

For a problem in three dimensions, the same approach can be used except that the set of points with the same value of c'x is a plane, instead of a line. This plane is again perpendicular to the vector c, and the objective is to slide this plane as much as possible in the direction of −c, as long as we do not leave the feasible set.

Example 1.7 Suppose that the feasible set is the unit cube, described by the constraints 0 ≤ x_i ≤ 1, i = 1, 2, 3, and that c = (−1, −1, −1). Then, the vector x = (1, 1, 1) is an optimal solution. Once more, the optimal solution happens to be a corner of the feasible set (Figure 1.4).
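The corner observation in Example 1.7 can be checked by brute force: for a linear objective over the unit cube, the minimum over the eight corners attains the optimum.

```python
# Enumerate the corners of the unit cube and pick the one minimizing c'x.
from itertools import product

c = (-1, -1, -1)
corners = list(product([0, 1], repeat=3))
best = min(corners, key=lambda x: sum(ci * xi for ci, xi in zip(c, x)))
print(best)   # (1, 1, 1), with cost -3
```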


Figure 1.4: The three-dimensional linear programming problem in Example 1.7.

In both of the preceding examples, the feasible set is bounded (does not extend to infinity), and the problem has a unique optimal solution. This is not always the case and we have some additional possibilities that are illustrated by the example that follows.

Example 1.8 Consider the feasible set in ℝ² defined by the constraints

    −x_1 + x_2 ≤ 1,
     x_1 ≥ 0,
     x_2 ≥ 0,

which is shown in Figure 1.5.

(a) For the cost vector c = (1, 1), it is clear that x = (0, 0) is the unique optimal solution.

(b) For the cost vector c = (1, 0), there are multiple optimal solutions, namely, every vector of the form x = (0, x_2), with 0 ≤ x_2 ≤ 1, is optimal. Note that the set of optimal solutions is bounded.

(c) For the cost vector c = (0, 1), there are multiple optimal solutions, namely, every vector of the form x = (x_1, 0), with x_1 ≥ 0, is optimal. In this case, the set of optimal solutions is unbounded (contains vectors of arbitrarily large magnitude).

(d) Consider the cost vector c = (−1, −1). For any feasible solution (x_1, x_2), we can always produce another feasible solution with less cost, by increasing the value of x_1. Therefore, no feasible solution is optimal. Furthermore, by considering vectors (x_1, x_2) with ever increasing values of x_1 and x_2, we can obtain a sequence of feasible solutions whose cost converges to −∞. We therefore say that the optimal cost is −∞.

(e) If we impose an additional constraint of the form x_1 + x_2 ≤ −2, it is evident that no feasible solution exists.


Figure 1.5: The feasible set in Example 1.8. For each choice of c, an optimal solution is obtained by moving as much as possible in the direction of −c.

To summarize the insights obtained from Example 1.8, we have the following possibilities:

(a) There exists a unique optimal solution.

(b) There exist multiple optimal solutions; in this case, the set of optimal solutions can be either bounded or unbounded.

(c) The optimal cost is −∞, and no feasible solution is optimal.

(d) The feasible set is empty.

In principle, there is an additional possibility: an optimal solution does not exist even though the problem is feasible and the optimal cost is not −∞; this is the case, for example, in the problem of minimizing 1/x subject to x ≥ 1 (for every feasible solution, there exists another with less cost, but the optimal cost is not −∞). We will see later in this book that this possibility never arises in linear programming.

In the examples that we have considered, if the problem has at least one optimal solution, then an optimal solution can be found among the corners of the feasible set. In Chapter 2, we will show that this is a general feature of linear programming problems, as long as the feasible set has at least one corner.


Visualizing standard form problems

We now discuss a method that allows us to visualize standard form problems even if the dimension n of the vector x is greater than three. The reason for wishing to do so is that when n ≤ 3, the feasible set of a standard form problem does not have much variety and does not provide enough insight into the general case. (In contrast, if the feasible set is described by constraints of the form Ax ≥ b, enough variety is obtained even if x has dimension three.)

Suppose that we have a standard form problem, and that the matrix A has dimensions m × n. In particular, the decision vector x is of dimension n and we have m equality constraints. We assume that m ≤ n and that the constraints Ax = b force x to lie on an (n − m)-dimensional set. (Intuitively, each constraint removes one of the "degrees of freedom" of x.) If we "stand" on that (n − m)-dimensional set and ignore the m dimensions orthogonal to it, the feasible set is only constrained by the linear inequality constraints x_i ≥ 0, i = 1, ..., n. In particular, if n − m = 2, the feasible set can be drawn as a two-dimensional set defined by n linear inequality constraints.

To illustrate this approach, consider the feasible set in ℝ³ defined by the constraints x_1 + x_2 + x_3 = 1 and x_1, x_2, x_3 ≥ 0 [Figure 1.6(a)], and note that n = 3 and m = 1. If we stand on the plane defined by the constraint x_1 + x_2 + x_3 = 1, then the feasible set has the appearance of a triangle in two-dimensional space. Furthermore, each edge of the triangle corresponds to one of the constraints x_1, x_2, x_3 ≥ 0; see Figure 1.6(b).

Figure 1.6: (a) An n-dimensional view of the feasible set. (b) An (n − m)-dimensional view of the same set.


1.5 Linear algebra background and notation

This section provides a summary of the main notational conventions that we will be employing. It also contains a brief review of those results from linear algebra that are used in the sequel.

Sets

If S is a set and x is an element of S, we write x ∈ S. A set can be specified in the form S = {x | x satisfies P}, as the set of all elements having property P. The cardinality of a finite set S is denoted by |S|. The union of two sets S and T is denoted by S ∪ T, and their intersection by S ∩ T. We use S \ T to denote the set of all elements of S that do not belong to T. The notation S ⊂ T means that S is a subset of T, i.e., every element of S is also an element of T; in particular, S could be equal to T. If, in addition, S ≠ T, we say that S is a proper subset of T. We use ∅ to denote the empty set. The symbols ∃ and ∀ have the meanings "there exists" and "for all," respectively.

We use ℝ to denote the set of real numbers. For any real numbers a and b, we define the closed and open intervals [a, b] and (a, b), respectively, by

    [a, b] = {x ∈ ℝ | a ≤ x ≤ b},

and

    (a, b) = {x ∈ ℝ | a < x < b}.

Vectors and matrices

A matrix of dimensions m × n is an array of real numbers a_ij:

    A = [ a_11  a_12  ...  a_1n
          a_21  a_22  ...  a_2n
           .                .
          a_m1  a_m2  ...  a_mn ].

Matrices will always be denoted by upper case boldface characters. If A is a matrix, we use the notation a_ij or [A]_ij to refer to its (i, j)th entry. A row vector is a matrix with m = 1 and a column vector is a matrix with n = 1. The word vector will always mean column vector unless the contrary is explicitly stated. Vectors will usually be denoted by lower case boldface characters. We use the notation ℝ^n to indicate the set of all n-dimensional vectors. For any vector x ∈ ℝ^n, we use x_1, x_2, ..., x_n to


indicate its components. Thus,

    x = (x_1, x_2, ..., x_n)',

a column vector; the more economical notation x = (x_1, x_2, ..., x_n) will also be used even when we are referring to column vectors. We use 0 to denote the vector with all components equal to zero. The ith unit vector e_i is the vector with all components equal to zero except for the ith component, which is equal to one.

The transpose A' of an m × n matrix A is the n × m matrix whose entries are given by [A']_ij = [A]_ji. Similarly, if x is a vector in ℝ^n, its transpose x' is the row vector with the same entries.

If x and y are two vectors in ℝ^n, then

    x'y = y'x = Σ_{i=1}^{n} x_i y_i.

This quantity is called the inner product of x and y. Two vectors are called orthogonal if their inner product is zero. Note that x'x ≥ 0 for every vector x, with equality holding if and only if x = 0. The expression √(x'x) is the Euclidean norm of x and is denoted by ||x||. The Schwarz inequality asserts that for any two vectors of the same dimension, we have

    |x'y| ≤ ||x|| · ||y||,

with equality holding if and only if one of the two vectors is a scalar multiple of the other.

If A is an m × n matrix, we use A_j to denote its jth column, that is, A_j = (a_1j, a_2j, ..., a_mj). (This is our only exception to the rule of using lower case characters to represent vectors.) We also use a_i to denote the vector formed by the entries of the ith row, that is, a_i = (a_i1, a_i2, ..., a_in). Thus, A can be viewed either as the array [A_1 A_2 ... A_n] of its columns, or as the stack of its rows a_1', ..., a_m'.


Given two matrices A, B of dimensions m × n and n × k, respectively, their product AB is a matrix of dimensions m × k whose entries are given by

    [AB]_ij = Σ_{ℓ=1}^{n} [A]_iℓ [B]_ℓj = a_i'B_j,

where a_i' is the ith row of A, and B_j is the jth column of B. Matrix multiplication is associative, i.e., (AB)C = A(BC), but, in general, it is not commutative, that is, the equality AB = BA is not always true. We also have (AB)' = B'A'.

Let A be an m × n matrix with columns A_i. We then have Ae_i = A_i. Any vector x ∈ ℝ^n can be written in the form x = Σ_{i=1}^{n} x_i e_i, which leads to

    Ax = A Σ_{i=1}^{n} x_i e_i = Σ_{i=1}^{n} x_i Ae_i = Σ_{i=1}^{n} x_i A_i.

A different representation of the matrix-vector product Ax is provided by the formula

    Ax = (a_1'x, a_2'x, ..., a_m'x),

where a_1', ..., a_m' are the rows of A.

A matrix A is called square if the number m of its rows is equal to the number n of its columns. We use I to denote the identity matrix, which is a square matrix whose diagonal entries are equal to one and whose off-diagonal entries are equal to zero. The identity matrix satisfies IA = A and BI = B for any matrices A, B of dimensions compatible with those of I.

If x is a vector, the notation x ≥ 0 (respectively, x > 0) means that every component of x is nonnegative (respectively, positive). If A is a matrix, the inequalities A ≥ 0 and A > 0 have a similar meaning.

Matrix inversion

Let A be a square matrix. If there exists a square matrix B of the same dimensions satisfying AB = BA = I, we say that A is invertible or nonsingular. Such a matrix B, called the inverse of A, is unique and is denoted by A⁻¹. We note that (A')⁻¹ = (A⁻¹)'. Also, if A and B are invertible matrices of the same dimensions, then AB is also invertible and (AB)⁻¹ = B⁻¹A⁻¹.

Given a finite collection of vectors x¹, ..., x^K ∈ ℝ^n, we say that they are linearly dependent if there exist real numbers a_1, ..., a_K, not all of them zero, such that Σ_{k=1}^{K} a_k x^k = 0; otherwise, they are called linearly independent. An equivalent definition of linear independence requires that


none of the vectors x¹, ..., x^K is a linear combination of the remaining vectors (Exercise 1.18). We have the following result.

Theorem 1.2 Let A be a square matrix. Then, the following statements are equivalent:

(a) The matrix A is invertible.

(b) The matrix A' is invertible.

(c) The determinant of A is nonzero.

(d) The rows of A are linearly independent.

(e) The columns of A are linearly independent.

(f) For every vector b, the linear system Ax = b has a unique solution.

(g) There exists some vector b such that the linear system Ax = b has a unique solution.

Assuming that A is an invertible square matrix, an explicit formula for the solution x = A⁻¹b of the system Ax = b is given by Cramer's rule. Specifically, the ith component of x is given by

    x_i = det(A_i) / det(A),

where A_i is the same matrix as A, except that its ith column is replaced by b. Here, as well as later, the notation det(A) is used to denote the determinant of a square matrix A.

Subspaces and bases

A nonempty subset S of ℝ^n is called a subspace of ℝ^n if ax + by ∈ S for every x, y ∈ S and every a, b ∈ ℝ. If, in addition, S ≠ ℝ^n, we say that S is a proper subspace. Note that every subspace must contain the zero vector.

The span of a finite number of vectors x¹, ..., x^K in ℝ^n is the subspace of ℝ^n defined as the set of all vectors y of the form y = Σ_{k=1}^{K} a_k x^k, where each a_k is a real number. Any such vector y is called a linear combination of x¹, ..., x^K.

Given a subspace S of ℝ^n, with S ≠ {0}, a basis of S is a collection of vectors that are linearly independent and whose span is equal to S. Every basis of a given subspace has the same number of vectors and this number is called the dimension of the subspace. In particular, the dimension of ℝ^n is equal to n, and every proper subspace of ℝ^n has dimension smaller than n. Note that one-dimensional subspaces are lines through the origin, and two-dimensional subspaces are planes through the origin. Finally, the set {0} is a subspace and its dimension is defined to be zero.


If S is a proper subspace of ℝ^n, then there exists a nonzero vector a which is orthogonal to S, that is, a'x = 0 for every x ∈ S. More generally, if S has dimension m < n, there exist n − m linearly independent vectors that are orthogonal to S.

The result that follows provides some important facts regarding bases and linear independence.

Theorem 1.3 Suppose that the span S of the vectors x¹, ..., x^K has dimension m. Then:

(a) There exists a basis of S consisting of m of the vectors x¹, ..., x^K.

(b) If k ≤ m and x¹, ..., x^k are linearly independent, we can form a basis of S by starting with x¹, ..., x^k, and choosing m − k of the vectors x^{k+1}, ..., x^K.

Proof. We only prove part (b), because (a) is the special case of part (b) with k = 0. If every vector x^{k+1}, ..., x^K can be expressed as a linear combination of x¹, ..., x^k, then every vector in the span of x¹, ..., x^K is also a linear combination of x¹, ..., x^k, and the latter vectors form a basis. (In particular, m = k.) Otherwise, at least one of the vectors x^{k+1}, ..., x^K is linearly independent from x¹, ..., x^k. By picking one such vector, we now have k + 1 of the vectors x¹, ..., x^K that are linearly independent. By repeating this process m − k times, we end up with the desired basis of S.

Let A be a matrix of dimensions m × n. The column space of A is the subspace of ℝ^m spanned by the columns of A. The row space of A is the subspace of ℝ^n spanned by the rows of A. The dimension of the column space is always equal to the dimension of the row space, and this number is called the rank of A. Clearly, rank(A) ≤ min{m, n}. The matrix A is said to have full rank if rank(A) = min{m, n}. Finally, the set {x ∈ ℝ^n | Ax = 0} is called the nullspace of A; it is a subspace of ℝ^n and its dimension is equal to n − rank(A).

Affine subspaces

Let S₀ be a subspace of ℝ^n and let x⁰ be some vector. If we add x⁰ to every element of S₀, this amounts to translating S₀ by x⁰. The resulting set S can be defined formally by

    S = S₀ + x⁰ = {x + x⁰ | x ∈ S₀}.

In general, S is not a subspace, because it does not necessarily contain the zero vector, and it is called an affine subspace. The dimension of S is defined to be equal to the dimension of the underlying subspace S₀.


As an example, let x⁰, x¹, ..., x^k be some vectors in ℝ^n, and consider the set S of all vectors of the form

    x⁰ + λ₁x¹ + ··· + λ_k x^k,

where λ₁, ..., λ_k are arbitrary scalars. For this case, S₀ can be identified with the span of the vectors x¹, ..., x^k, and S is an affine subspace. If the vectors x¹, ..., x^k are linearly independent, their span has dimension k, and the affine subspace S also has dimension k.

For a second example, we are given an m × n matrix A and a vector b ∈ ℝ^m, and we consider the set

    S = {x ∈ ℝ^n | Ax = b},

which we assume to be nonempty. Let us fix some x⁰ such that Ax⁰ = b. An arbitrary vector x belongs to S if and only if Ax = b = Ax⁰, or A(x − x⁰) = 0. Hence, x ∈ S if and only if x − x⁰ belongs to the subspace S₀ = {y | Ay = 0}. We conclude that S = {y + x⁰ | y ∈ S₀}, and S is an affine subspace of ℝ^n. If A has m linearly independent rows, its nullspace S₀ has dimension n − m. Hence, the affine subspace S also has dimension n − m. Intuitively, if a_i' are the rows of A, each one of the constraints a_i'x = b_i removes one degree of freedom from x, thus reducing the dimension from n to n − m; see Figure 1.7 for an illustration.

Figure 1.7: Consider a set S in ℝ³ defined by a single equality constraint a'x = b. Let x⁰ be an element of S. The vector a is perpendicular to S. If y¹ and y² are linearly independent vectors that are orthogonal to a, then every x ∈ S is of the form x = x⁰ + λ₁y¹ + λ₂y². In particular, S is a two-dimensional affine subspace.


1.6 Algorithms and operation counts

Optimization problems such as linear programming and, more generally, all computational problems are solved by algorithms. Loosely speaking, an algorithm is a finite set of instructions of the type used in common programming languages (arithmetic operations, conditional statements, read and write statements, etc.). Although the running time of an algorithm may depend substantially on clever programming or on the computer hardware available, we are interested in comparing algorithms without having to examine the details of a particular implementation. As a first approximation, this can be accomplished by counting the number of arithmetic operations (additions, multiplications, divisions, comparisons) required by an algorithm. This approach is often adequate even though it ignores the fact that adding or multiplying large integers or high-precision floating point numbers is more demanding than adding or multiplying single-digit integers. A more refined approach will be discussed briefly in Chapter 8.

Example 1.9
(a) Let a and b be vectors in ℝⁿ. The natural algorithm for computing a'b requires n multiplications and n − 1 additions, for a total of 2n − 1 arithmetic operations.

(b) Let A and B be matrices of dimensions n × n. The traditional way of computing AB forms the inner product of a row of A and a column of B to obtain an entry of AB. Since there are n² entries to be evaluated, a total of (2n − 1)n² arithmetic operations are involved.
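The counts in Example 1.9 can be verified by instrumenting the two textbook algorithms directly. The sketch below (an illustration, not from the book) tallies the arithmetic operations performed by the inner product and by row-by-column matrix multiplication.

```python
# Count arithmetic operations in the inner product and in
# row-by-column matrix multiplication (cf. Example 1.9).

def inner_product(a, b):
    """Return (a'b, number of arithmetic operations performed)."""
    total, ops = 0.0, 0
    for ai, bi in zip(a, b):
        total += ai * bi      # one multiplication ...
        ops += 2              # ... and one addition
    return total, ops - 1     # the initial addition (0 + x) is not counted

def matrix_multiply(A, B):
    """Return (AB, operation count) for n x n matrices given as lists of lists."""
    n = len(A)
    ops = 0
    C = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            entry, k = inner_product(A[i], [B[r][j] for r in range(n)])
            C[i][j] = entry
            ops += k
    return C, ops

n = 5
a = list(range(1, n + 1))
_, ops = inner_product(a, a)
assert ops == 2 * n - 1            # 2n - 1 operations for the inner product

A = [[1.0] * n for _ in range(n)]
_, ops = matrix_multiply(A, A)
assert ops == (2 * n - 1) * n**2   # (2n - 1)n^2 operations in total
```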

In Example 1.9, an exact operation count was possible. However, for more complicated problems and algorithms, an exact count is usually very difficult. For this reason, we will settle for an estimate of the rate of growth of the number of arithmetic operations, as a function of the problem parameters. Thus, in Example 1.9, we might be content to say that the number of operations in the computation of an inner product increases linearly with n, and the number of operations in matrix multiplication increases cubically with n. This leads us to the order of magnitude notation that we define next.

Definition 1.2 Let f and g be functions that map positive numbers to positive numbers.

(a) We write f(n) = O(g(n)) if there exist positive numbers n₀ and c such that f(n) ≤ cg(n) for all n ≥ n₀.

(b) We write f(n) = Ω(g(n)) if there exist positive numbers n₀ and c such that f(n) ≥ cg(n) for all n ≥ n₀.

(c) We write f(n) = Θ(g(n)) if both f(n) = O(g(n)) and f(n) = Ω(g(n)) hold.


For example, we have 3n³ + n² = O(n³), n log n = O(n²), and n log n = Ω(n).

While the running time of the algorithms considered in Example 1.9 is predictable, the running time of more complicated algorithms often depends on the numerical values of the input data. In such cases, instead of trying to estimate the running time for each possible choice of the input, it is customary to estimate the running time for the worst possible input data of a given "size." For example, if we have an algorithm for linear programming, we might be interested in estimating its worst-case running time over all problems with a given number of variables and constraints. This emphasis on the worst case is somewhat conservative and, in practice, the "average" running time of an algorithm might be more relevant. However, the average running time is much more difficult to estimate, or even to define, and for this reason, the worst-case approach is widely used.

Example 1.10 (Operation count of linear system solvers and matrix inversion) Consider the problem of solving a system of n linear equations in n unknowns. The classical method that eliminates one variable at a time (Gaussian elimination) is known to require O(n³) arithmetic operations in order to either compute a solution or to decide that no solution exists. Practical methods for matrix inversion also require O(n³) arithmetic operations. These facts will be of use later on.
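The O(n³) behavior comes directly from the triple loop in the elimination. The sketch below (written for illustration, with partial pivoting for numerical stability) is one such elimination routine.

```python
def gaussian_elimination(A, b):
    """Solve Ax = b by elimination with partial pivoting.
    A is a list of n rows; the triple loop costs roughly (2/3)n^3 operations."""
    n = len(A)
    # Work on an augmented copy [A | b].
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for k in range(n):
        # Partial pivoting: bring the largest available pivot into row k.
        p = max(range(k, n), key=lambda i: abs(M[i][k]))
        if abs(M[p][k]) < 1e-12:
            raise ValueError("matrix is singular (no unique solution)")
        M[k], M[p] = M[p], M[k]
        # Eliminate x_k from all rows below row k.
        for i in range(k + 1, n):
            factor = M[i][k] / M[k][k]
            for j in range(k, n + 1):
                M[i][j] -= factor * M[k][j]
    # Back substitution.
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = sum(M[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (M[i][n] - s) / M[i][i]
    return x

# 2x + y = 5 and x + 3y = 10 give x = 1, y = 3.
print(gaussian_elimination([[2.0, 1.0], [1.0, 3.0]], [5.0, 10.0]))  # [1.0, 3.0]
```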

Is the O(n³) running time of Gaussian elimination good or bad? Some perspective into this question is provided by the following observation: each time that technological advances lead to computer hardware that is faster by a factor of 8 (presumably every few years), we can solve problems of twice the size that was earlier possible. A similar argument applies to algorithms whose running time is O(nᵏ) for some positive integer k. Such algorithms are said to run in polynomial time.

Algorithms also exist whose running time is Ω(2^(cn)), where n is a parameter representing problem size and c is a constant; these are said to take at least exponential time. For such algorithms, each time that computer hardware becomes faster by a factor of 2, we can increase the value of n that we can handle only by 1/c. It is then reasonable to expect that no matter how much technology improves, problems with truly large values of n will always be difficult to handle.

Example 1.11 Suppose that we have a choice of two algorithms. The running time of the first is 10^(n/100) (exponential) and the running time of the second is n³ (polynomial). For very small n, e.g., for n = 3, the exponential time algorithm is preferable. To gain some perspective as to what happens for larger n, suppose that we have access to a workstation that can execute 10⁹ arithmetic operations per second and that we are willing to let it run for 1000 seconds. Let us figure out what size problems can each algorithm handle within this time frame. The equation 10^(n/100) = 10⁹ × 1000 yields n = 1200, whereas the equation n³ = 10⁹ × 1000 yields n = 10,000, indicating that the polynomial time algorithm allows us to solve much larger problems.
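The two equations in Example 1.11 are easy to check numerically; the computation below reproduces the problem sizes, assuming the same budget of 10⁹ operations per second for 1000 seconds.

```python
import math

budget = 1e9 * 1000          # 10^12 arithmetic operations in total

# Exponential algorithm: 10^(n/100) = budget  =>  n = 100 * log10(budget).
n_exp = 100 * math.log10(budget)

# Polynomial algorithm: n^3 = budget  =>  n = budget^(1/3).
n_poly = budget ** (1 / 3)

print(round(n_exp), round(n_poly))   # 1200 10000
```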

The point of view emerging from the above discussion is that, as a first cut, it is useful to juxtapose polynomial and exponential time algorithms, the former being viewed as relatively fast and efficient, and the latter as relatively slow. This point of view is justified in many but not all contexts and we will be returning to it later in this book.

1.7 Exercises

Exercise 1.1 Suppose that a function f : ℝⁿ → ℝ is both concave and convex. Prove that f is an affine function.

Exercise 1.2 Suppose that f₁, ..., fₘ are convex functions from ℝⁿ into ℝ and let f(x) = Σᵢ₌₁ᵐ fᵢ(x).

(a) Show that if each fᵢ is convex, so is f.
(b) Show that if each fᵢ is piecewise linear and convex, so is f.

Exercise 1.3 Consider the problem of minimizing a cost function of the form c'x + f(d'x), subject to the linear constraints Ax ≥ b. Here, d is a given vector and the function f : ℝ → ℝ is as specified in Figure 1.8. Provide a linear programming formulation of this problem.

Figure 1.8: The function f of Exercise 1.3.

Exercise 1.4 Consider the problem

minimize 2x₁ + 3|x₂ − 10|
subject to |x₁ + 2| + |x₂| ≤ 5,

and reformulate it as a linear programming problem.


Exercise 1.5 Consider a linear optimization problem, with absolute values, of the following form:

minimize c'x + d'y
subject to Ax + By ≤ b
yᵢ = |xᵢ|, for all i.

Assume that all entries of B and d are nonnegative.

(a) Provide two different linear programming formulations, along the lines discussed in Section 1.3.

(b) Show that the original problem and the two reformulations are equivalent in the sense that either all three are infeasible, or all three have the same optimal cost.

(c) Provide an example to show that if B has negative entries, the problem may have a local minimum that is not a global minimum. (It will be seen in Chapter 2 that this is never the case in linear programming problems. Hence, in the presence of such negative entries, a linear programming reformulation is implausible.)

Exercise 1.6 Provide linear programming formulations of the two variants of the rocket control problem discussed at the end of Section 1.3.

Exercise 1.7 (The moment problem) Suppose that Z is a random variable taking values in the set {0, 1, ..., K}, with probabilities p₀, p₁, ..., p_K, respectively. We are given the values of the first two moments E[Z] = Σₖ₌₀ᴷ k pₖ and E[Z²] = Σₖ₌₀ᴷ k² pₖ of Z, and we would like to obtain upper and lower bounds on the value of the fourth moment E[Z⁴] = Σₖ₌₀ᴷ k⁴ pₖ of Z. Show how linear programming can be used to approach this problem.
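One natural way to cast Exercise 1.7 as a pair of linear programs is to treat the probabilities p₀, ..., p_K as the decision variables: maximize (or minimize) Σ k⁴ pₖ subject to Σ pₖ = 1, Σ k pₖ = E[Z], Σ k² pₖ = E[Z²], and pₖ ≥ 0. The sketch below (an illustration, not the book's solution; an LP solver would then do the optimization) just assembles this constraint data and checks it against a known distribution.

```python
def moment_lp_data(K, m1, m2):
    """Build LP data for bounding E[Z^4] given E[Z] = m1 and E[Z^2] = m2.
    Variables: p_0, ..., p_K, with p >= 0 implicit.  Maximizing and
    minimizing c'p over this feasible set bounds the fourth moment."""
    c = [k**4 for k in range(K + 1)]          # objective: E[Z^4]
    A_eq = [
        [1 for _ in range(K + 1)],            # probabilities sum to 1
        [k for k in range(K + 1)],            # first moment constraint
        [k**2 for k in range(K + 1)],         # second moment constraint
    ]
    b_eq = [1, m1, m2]
    return c, A_eq, b_eq

# Sanity check against a fair coin on {0, 1}: E[Z] = E[Z^2] = 1/2.
c, A_eq, b_eq = moment_lp_data(K=1, m1=0.5, m2=0.5)
p = [0.5, 0.5]
assert all(abs(sum(a * x for a, x in zip(row, p)) - b) < 1e-9
           for row, b in zip(A_eq, b_eq))
print(sum(ci * pi for ci, pi in zip(c, p)))   # E[Z^4] = 0.5 for this p
```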

Exercise 1.8 (Road lighting) Consider a road divided into n segments that is illuminated by m lamps. Let pⱼ be the power of the jth lamp. The illumination Iᵢ of the ith segment is assumed to be Σⱼ₌₁ᵐ a_ij pⱼ, where a_ij are known coefficients. Let Iᵢ* be the desired illumination of road i.

We are interested in choosing the lamp powers pⱼ so that the illuminations Iᵢ are close to the desired illuminations Iᵢ*. Provide a reasonable linear programming formulation of this problem. Note that the wording of the problem is loose and there is more than one possible formulation.

Exercise 1.9 Consider a school district with I neighborhoods, J schools, and G grades at each school. Each school j has a capacity of C_jg for grade g. In each neighborhood i, the student population of grade g is S_ig. Finally, the distance of school j from neighborhood i is d_ij. Formulate a linear programming problem whose objective is to assign all students to schools, while minimizing the total distance traveled by all students. (You may ignore the fact that numbers of students must be integer.)

Exercise 1.10 (Production and inventory planning) A company must deliver dᵢ units of its product at the end of the ith month. Material produced during a month can be delivered either at the end of the same month or can be stored as inventory and delivered at the end of a subsequent month; however, there is a storage cost of c₁ dollars per month for each unit of product held in inventory. The year begins with zero inventory. If the company produces xᵢ units in month i and x_{i+1} units in month i + 1, it incurs a cost of c₂|x_{i+1} − xᵢ| dollars, reflecting the cost of switching to a new production level. Formulate a linear programming problem whose objective is to minimize the total cost of the production and inventory schedule over a period of twelve months. Assume that inventory left at the end of the year has no value and does not incur any storage costs.

Exercise 1.11 (Optimal currency conversion) Suppose that there are N available currencies, and assume that one unit of currency i can be exchanged for r_ij units of currency j. (Naturally, we assume that r_ij > 0.) There also are certain regulations that impose a limit uᵢ on the total amount of currency i that can be exchanged on any given day. Suppose that we start with B units of currency 1 and that we would like to maximize the number of units of currency N that we end up with at the end of the day, through a sequence of currency transactions. Provide a linear programming formulation of this problem. Assume that for any sequence i₁, ..., iₖ of currencies, we have r_{i₁i₂} r_{i₂i₃} ··· r_{i_{k−1}iₖ} r_{iₖi₁} ≤ 1, which means that wealth cannot be multiplied by going through a cycle of currencies.

Exercise 1.12 (Chebychev center) Consider a set P described by linear inequality constraints, that is, P = {x ∈ ℝⁿ | aᵢ'x ≤ bᵢ, i = 1, ..., m}. A ball with center y and radius r is defined as the set of all points within (Euclidean) distance r from y. We are interested in finding a ball with the largest possible radius, which is entirely contained within the set P. (The center of such a ball is called the Chebychev center of P.) Provide a linear programming formulation of this problem.
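A standard way to formulate Exercise 1.12 (sketched here as an illustration, not the book's solution) introduces variables (y, r) and requires aᵢ'y + ‖aᵢ‖r ≤ bᵢ for each constraint, while maximizing r. The snippet below builds these constraint rows for the unit square and verifies them at the square's known Chebychev center.

```python
import math

def chebyshev_lp(A, b):
    """Given P = {x | a_i'x <= b_i}, return the constraint rows of the LP
    max r  s.t.  a_i'y + ||a_i|| r <= b_i, over the variables (y_1..y_n, r)."""
    rows = []
    for a, bi in zip(A, b):
        norm = math.sqrt(sum(ai**2 for ai in a))
        rows.append((list(a) + [norm], bi))   # coefficients of (y, r)
    return rows

# Unit square 0 <= x1, x2 <= 1, written as a_i'x <= b_i.
A = [[1, 0], [-1, 0], [0, 1], [0, -1]]
b = [1, 0, 1, 0]
rows = chebyshev_lp(A, b)

# The Chebyshev center of the square is (1/2, 1/2), with radius 1/2:
y, r = [0.5, 0.5], 0.5
assert all(sum(c * v for c, v in zip(coeffs, y + [r])) <= bi + 1e-9
           for coeffs, bi in rows)
```

Maximizing r subject to these rows (with any LP solver) recovers that center and radius.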

Exercise 1.13 (Linear fractional programming) Consider the problem

minimize (c'x + d) / (f'x + g)
subject to Ax ≤ b
f'x + g > 0.

Suppose that we have some prior knowledge that the optimal cost belongs to an interval [K, L]. Provide a procedure, that uses linear programming as a subroutine, and that allows us to compute the optimal cost within any desired accuracy. Hint: Consider the problem of deciding whether the optimal cost is less than or equal to a certain number.
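The hint leads to bisection on the optimal cost t: the cost is ≤ t if and only if the LP-type feasibility problem "find x with Ax ≤ b and (c'x + d) − t(f'x + g) ≤ 0" has a solution. The sketch below (an illustrative toy one-dimensional instance, where the feasibility test reduces to checking the endpoints of an interval; in general each query is a linear program) shows the procedure.

```python
def fractional_min_by_bisection(feasible_below, lo, hi, tol=1e-9):
    """Bisect on the optimal cost t.  feasible_below(t) must report whether
    some feasible x achieves cost <= t (one LP per query, in general)."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if feasible_below(mid):
            hi = mid         # optimal cost is at most mid
        else:
            lo = mid         # optimal cost exceeds mid
    return hi

# Toy instance: minimize (x + 2)/(x + 1) subject to 0 <= x <= 4.
# Cost <= t iff (x + 2) - t(x + 1) <= 0 for some x in [0, 4]; since the
# expression is linear in x, checking the two endpoints suffices.
def feasible_below(t):
    g = lambda x: (x + 2) - t * (x + 1)
    return min(g(0), g(4)) <= 0

opt = fractional_min_by_bisection(feasible_below, lo=1.0, hi=3.0)
print(round(opt, 6))   # 1.2, attained at x = 4
```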

Exercise 1.14 A company produces and sells two different products. The demand for each product is unlimited, but the company is constrained by cash availability and machine capacity.

Each unit of the first and second product requires 3 and 4 machine hours, respectively. There are 20,000 machine hours available in the current production period. The production costs are $3 and $2 per unit of the first and second product, respectively. The selling prices of the first and second product are $6 and $5.40 per unit, respectively. The available cash is $4,000; furthermore, 45% of the sales revenues from the first product and 30% of the sales revenues from the second product will be made available to finance operations during the current period.

(a) Formulate a linear programming problem that aims at maximizing net income subject to the cash availability and machine capacity limitations.

(b) Solve the problem graphically to obtain an optimal solution.

(c) Suppose that the company could increase its available machine hours by 2,000, after spending $400 for certain repairs. Should the investment be made?

Exercise 1.15 A company produces two kinds of products. A product of the first type requires 1/4 hour of assembly labor, 1/8 hour of testing, and $1.2 worth of raw materials. A product of the second type requires 1/3 hour of assembly, 1/3 hour of testing, and $0.9 worth of raw materials. Given the current personnel of the company, there can be at most 90 hours of assembly labor and 80 hours of testing, each day. Products of the first and second type have a market value of $9 and $8, respectively.

(a) Formulate a linear programming problem that can be used to maximize the daily profit of the company.

(b) Consider the following two modifications to the original problem:
(i) Suppose that up to 50 hours of overtime assembly labor can be scheduled, at a cost of $7 per hour.
(ii) Suppose that the raw material supplier provides a 10% discount if the daily bill is above $300.

Which of the above two elements can be easily incorporated into the linear programming formulation and how? If one or both are not easy to incorporate, indicate how you might nevertheless solve the problem.

Exercise 1.16 A manager of an oil refinery has 8 million barrels of crude oil A and 5 million barrels of crude oil B allocated for production during the coming month. These resources can be used to make either gasoline, which sells for $38 per barrel, or home heating oil, which sells for $33 per barrel. There are three production processes with the following characteristics:

                     Process 1   Process 2   Process 3
Input crude A            3           1           5
Input crude B            5           1           3
Output gasoline          4           1           3
Output heating oil       3           1           4
Cost                    $51         $11         $40

All quantities are in barrels. For example, with the first process, 3 barrels of crude A and 5 barrels of crude B are used to produce 4 barrels of gasoline and 3 barrels of heating oil. The costs in this table refer to variable and allocated overhead costs, and there are no separate cost items for the cost of the crudes. Formulate a linear programming problem that would help the manager maximize net revenue over the next month.

Exercise 1.17 (Investment under taxation) An investor has a portfolio of n different stocks. He has bought sᵢ shares of stock i at price pᵢ, i = 1, ..., n. The current price of one share of stock i is qᵢ. The investor expects that the price of one share of stock i in one year will be rᵢ. If he sells shares, the investor pays transaction costs at the rate of 1% of the amount transacted. In addition, the investor pays taxes at the rate of 30% on capital gains. For example, suppose that the investor sells 1,000 shares of a stock at $50 per share. He has bought these shares at $30 per share. He receives $50,000. However, he owes 0.30 × (50,000 − 30,000) = $6,000 on capital gain taxes and 0.01 × (50,000) = $500 on transaction costs. So, by selling 1,000 shares of this stock he nets 50,000 − 6,000 − 500 = $43,500. Formulate the problem of selecting how many shares the investor needs to sell in order to raise an amount of money K, net of capital gains and transaction costs, while maximizing the expected value of his portfolio next year.

Exercise 1.18 Show that the vectors in a given finite collection are linearly independent if and only if none of the vectors can be expressed as a linear combination of the others.

Exercise 1.19 Suppose that we are given a set of vectors in ℝⁿ that form a basis, and let y be an arbitrary vector in ℝⁿ. We wish to express y as a linear combination of the basis vectors. How can this be accomplished?
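Exercise 1.19 amounts to solving a linear system: if the basis vectors are the columns of a matrix B, the coefficient vector λ satisfies Bλ = y. A minimal numpy illustration:

```python
import numpy as np

# Basis vectors of R^2 as the columns of B, and a target vector y.
B = np.array([[1.0, 1.0],
              [0.0, 1.0]])
y = np.array([3.0, 2.0])

lam = np.linalg.solve(B, y)       # coefficients of y in this basis
assert np.allclose(B @ lam, y)    # y = lam[0]*b1 + lam[1]*b2
print(lam)                        # [1. 2.]
```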

Exercise 1.20
(a) Let S = {Ax | x ∈ ℝⁿ}, where A is a given matrix. Show that S is a subspace of ℝᵐ.

(b) Assume that S is a proper subspace of ℝⁿ. Show that there exists a matrix B such that S = {y ∈ ℝⁿ | By = 0}. Hint: Use vectors that are orthogonal to S to form the matrix B.

(c) Suppose that V is an m-dimensional affine subspace of ℝⁿ, with m < n. Show that there exist linearly independent vectors a₁, ..., a_{n−m}, and scalars b₁, ..., b_{n−m}, such that

V = {y | aᵢ'y = bᵢ, i = 1, ..., n − m}.

1.8 History, notes, and sources

The word "programming" has been used traditionally by planners to describe the process of operations planning and resource allocation. In the 1940s, it was realized that this process could often be aided by solving optimization problems involving linear constraints and linear objectives. The term "linear programming" then emerged. The initial impetus came in the aftermath of World War II, within the context of military planning problems. In 1947, Dantzig proposed an algorithm, the simplex method, which


made the solution of linear programming problems practical. There followed a period of intense activity during which many important problems in transportation, economics, military operations, scheduling, etc., were cast in this framework. Since then, computer technology has advanced rapidly, the range of applications has expanded, new powerful methods have been discovered, and the underlying mathematical understanding has become deeper and more comprehensive. Today, linear programming is a routinely used tool that can be found in some spreadsheet software packages.

Dantzig's development of the simplex method has been a defining moment in the history of the field, because it came at a time of growing practical needs and of advances in computing technology. But, as is the case with most "scientific revolutions," the history of the field is much richer. Early work goes back to Fourier, who in 1824 developed an algorithm for solving systems of linear inequalities. Fourier's method is far less efficient than the simplex method, but this issue was not relevant at the time. In 1911, de la Vallée Poussin developed a method, similar to the simplex method, for minimizing maxᵢ |bᵢ − aᵢ'x|, a problem that we discussed in Section 1.3.

In the late 1930s, the Soviet mathematician Kantorovich became interested in problems of optimal resource allocation in a centrally planned economy, for which he gave linear programming formulations. He also provided a solution method, but his work did not become widely known at the time. Around the same time, several models arising in classical, Walrasian, economics were studied and refined, and led to formulations closely related to linear programming. Koopmans, an economist, played an important role and eventually (in 1975) shared the Nobel Prize in economic science with Kantorovich.

On the theoretical front, the mathematical structures that underlie linear programming were independently studied, in the period 1870-1930, by many prominent mathematicians, such as Farkas, Minkowski, Carathéodory, and others. Also, in 1928, von Neumann developed an important result in game theory that would later prove to have strong connections with the deeper structure of linear programming.

Subsequent to Dantzig's work, there has been much and important research in areas such as large scale optimization, network optimization, interior point methods, integer programming, and complexity theory. We defer the discussion of this research to the notes and sources sections of later chapters. For a more detailed account of the history of linear programming, the reader is referred to Schrijver (1986), Orden (1993), and the volume edited by Lenstra, Rinnooy Kan, and Schrijver (1991) (see especially the article by Dantzig in that volume).

There are several texts that cover the general subject of linear programming, starting with a comprehensive one by Dantzig (1963). Some more recent texts are Papadimitriou and Steiglitz (1982), Chvátal (1983), Murty (1983), Luenberger (1984), and Bazaraa, Jarvis, and Sherali (1990). Finally, Schrijver (1986) is a comprehensive, but more advanced reference on the subject.

1.1. The formulation of the diet problem is due to Stigler (1945).

1.2. The case study on DEC's production planning was developed by Freund and Shannahan (1992). Methods for dealing with the nurse scheduling and other cyclic problems are studied by Bartholdi, Orlin, and Ratliff (1980). More information on pattern classification can be found in Duda and Hart (1973), or Haykin (1994).

1.3. A deep and comprehensive treatment of convex functions and their properties is provided by Rockafellar (1970). Linear programming arises in control problems, in ways that are more sophisticated than what is described here; see, e.g., Dahleh and Diaz-Bobillo (1995).

1.5. For an introduction to linear algebra, see Strang (1988).

1.6. For a more detailed treatment of algorithms and their computational requirements, see Lewis and Papadimitriou (1981), Papadimitriou and Steiglitz (1982), or Cormen, Leiserson, and Rivest (1990).

1.7. Exercise 1.8 is adapted from Boyd and Vandenberghe (1995). Exercises 1.9 and 1.16 are adapted from Bradley, Hax, and Magnanti (1977). Exercise 1.11 is adapted from Ahuja, Magnanti, and Orlin (1993).


Chapter 2

The geometry of linear programming

Contents
2.1. Polyhedra and convex sets
2.2. Extreme points, vertices, and basic feasible solutions
2.3. Polyhedra in standard form
2.4. Degeneracy
2.5. Existence of extreme points
2.6. Optimality of extreme points
2.7. Representation of bounded polyhedra
2.8. Projections of polyhedra: Fourier-Motzkin elimination
2.9. Summary
2.10. Exercises
2.11. Notes and sources


In this chapter, we define a polyhedron as a set described by a finite number of linear equality and inequality constraints. In particular, the feasible set in a linear programming problem is a polyhedron. We study the basic geometric properties of polyhedra in some detail, with emphasis on their "corner points" (vertices). As it turns out, common geometric intuition derived from the familiar three-dimensional polyhedra is essentially correct when applied to higher-dimensional polyhedra. Another interesting aspect of the development in this chapter is that certain concepts (e.g., the concept of a vertex) can be defined either geometrically or algebraically. While the geometric view may be more natural, the algebraic approach is essential for carrying out computations. Much of the richness of the subject lies in the interplay between the geometric and the algebraic points of view.

Our development starts with a characterization of the corner points of feasible sets in the general form {x | Ax ≥ b}. Later on, we focus on the case where the feasible set is in the standard form {x | Ax = b, x ≥ 0}, and we derive a simple algebraic characterization of the corner points. The latter characterization will play a central role in the development of the simplex method in Chapter 3.

The main results of this chapter state that a nonempty polyhedron has at least one corner point if and only if it does not contain a line, and if this is the case, the search for optimal solutions to linear programming problems can be restricted to corner points. These results are proved for the most general case of linear programming problems using geometric arguments. The same results will also be proved in the next chapter, for the case of standard form problems, as a corollary of our development of the simplex method. Thus, the reader who wishes to focus on standard form problems may skip the proofs in Sections 2.5 and 2.6. Finally, Sections 2.7 and 2.8 can also be skipped during a first reading; any results in these sections that are needed later on will be rederived in Chapter 4, using different techniques.

2.1 Polyhedra and convex sets

In this section, we introduce some important concepts that will be used to study the geometry of linear programming, including a discussion of convexity.

Hyperplanes, halfspaces, and polyhedra

We start with the formal definition of a polyhedron.

Definition 2.1 A polyhedron is a set that can be described in the form {x ∈ ℝⁿ | Ax ≥ b}, where A is an m × n matrix and b is a vector in ℝᵐ.


As discussed in Section 1.1, the feasible set of any linear programming problem can be described by inequality constraints of the form Ax ≥ b, and is therefore a polyhedron. In particular, a set of the form {x ∈ ℝⁿ | Ax = b, x ≥ 0} is also a polyhedron and will be referred to as a polyhedron in standard form.

A polyhedron can either "extend to infinity," or can be confined in a finite region. The definition that follows refers to this distinction.

Definition 2.2 A set S ⊂ ℝⁿ is bounded if there exists a constant K such that the absolute value of every component of every element of S is less than or equal to K.

The next definition deals with polyhedra determined by a single linear constraint.

Definition 2.3 Let a be a nonzero vector in ℝⁿ and let b be a scalar.

(a) The set {x ∈ ℝⁿ | a'x = b} is called a hyperplane.

(b) The set {x ∈ ℝⁿ | a'x ≥ b} is called a halfspace.

Note that a hyperplane is the boundary of a corresponding halfspace. In addition, the vector a in the definition of the hyperplane is perpendicular to the hyperplane itself. [To see this, note that if x and y belong to the same hyperplane, then a'x = a'y. Hence, a'(x − y) = 0 and therefore a is orthogonal to any direction vector confined to the hyperplane.] Finally, note that a polyhedron is equal to the intersection of a finite number of halfspaces; see Figure 2.1.

Convex sets

We now define the important notion of a convex set.

Definition 2.4 A set S ⊂ ℝⁿ is convex if for any x, y ∈ S, and any λ ∈ [0, 1], we have λx + (1 − λ)y ∈ S.

Note that if λ ∈ [0, 1], then λx + (1 − λ)y is a weighted average of the vectors x, y, and therefore belongs to the line segment joining x and y. Thus, a set is convex if the segment joining any two of its elements is contained in the set; see Figure 2.2.
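The definition is easy to test numerically. As a small illustration (not from the book), the snippet below forms convex combinations of two points of a halfspace and checks that every point on the segment joining them stays in the halfspace.

```python
import numpy as np

a, b = np.array([1.0, 2.0]), 4.0          # halfspace {x | a'x >= b}
x = np.array([4.0, 0.0])                  # a'x = 4 >= b
y = np.array([0.0, 3.0])                  # a'y = 6 >= b

for lam in np.linspace(0.0, 1.0, 11):
    z = lam * x + (1 - lam) * y           # convex combination of x and y
    assert a @ z >= b - 1e-12             # z stays in the halfspace
```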

Our next definition refers to weighted averages of a finite number of vectors; see Figure 2.3.


Figure 2.1: (a) A hyperplane and two halfspaces. (b) The polyhedron {x | aᵢ'x ≥ bᵢ, i = 1, ..., 5} is the intersection of five halfspaces. Note that each vector aᵢ is perpendicular to the hyperplane {x | aᵢ'x = bᵢ}.

Definition 2.5 Let x¹, ..., xᵏ be vectors in ℝⁿ and let λ₁, ..., λₖ be nonnegative scalars whose sum is unity.

(a) The vector Σᵢ₌₁ᵏ λᵢxⁱ is said to be a convex combination of the vectors x¹, ..., xᵏ.

(b) The convex hull of the vectors x¹, ..., xᵏ is the set of all convex combinations of these vectors.

The result that follows establishes some important facts related to convexity.

Theorem 2.1

(a) The intersection of convex sets is convex.

(b) Every polyhedron is a convex set.

(c) A convex combination of a finite number of elements of a convex set also belongs to that set.

(d) The convex hull of a finite number of vectors is a convex set.


Figure 2.2: The set S is convex, but the set Q is not, because the segment joining x and y is not contained in Q.

Figure 2.3: The convex hull of seven points in ℝ².

Proof.

(a) Let Sᵢ, i ∈ I, be convex sets, where I is some index set, and suppose that x and y belong to the intersection ∩ᵢ∈I Sᵢ. Let λ ∈ [0, 1]. Since each Sᵢ is convex and contains x, y, we have λx + (1 − λ)y ∈ Sᵢ, which proves that λx + (1 − λ)y also belongs to the intersection of the sets Sᵢ. Therefore, ∩ᵢ∈I Sᵢ is convex.

(b) Let a be a vector and let b be a scalar. Suppose that x and y satisfy a'x ≥ b and a'y ≥ b, respectively, and therefore belong to the same halfspace. Let λ ∈ [0, 1]. Then, a'(λx + (1 − λ)y) ≥ λb + (1 − λ)b = b, which proves that λx + (1 − λ)y also belongs to the same halfspace. Therefore, a halfspace is convex. Since a polyhedron is the intersection of a finite number of halfspaces, the result follows from part (a).

(c) A convex combination of two elements of a convex set lies in that set, by the definition of convexity. Let us assume, as an induction hypothesis, that a convex combination of k elements of a convex set belongs to that set. Consider k + 1 elements x¹, ..., x^{k+1} of a convex set S and let λ₁, ..., λ_{k+1} be nonnegative scalars that sum to 1. We assume, without loss of generality, that λ_{k+1} ≠ 1. We then have

Σᵢ₌₁^{k+1} λᵢxⁱ = λ_{k+1}x^{k+1} + (1 − λ_{k+1}) Σᵢ₌₁ᵏ (λᵢ / (1 − λ_{k+1})) xⁱ.    (2.1)

The coefficients λᵢ/(1 − λ_{k+1}), i = 1, ..., k, are nonnegative and sum to unity; using the induction hypothesis, Σᵢ₌₁ᵏ (λᵢ/(1 − λ_{k+1})) xⁱ ∈ S. Then, the fact that S is convex and Eq. (2.1) imply that Σᵢ₌₁^{k+1} λᵢxⁱ ∈ S, and the induction step is complete.

(d) Let S be the convex hull of the vectors x¹, ..., xᵏ and let y = Σᵢ₌₁ᵏ ζᵢxⁱ, z = Σᵢ₌₁ᵏ θᵢxⁱ be two elements of S, where ζᵢ ≥ 0, θᵢ ≥ 0, and Σᵢ₌₁ᵏ ζᵢ = Σᵢ₌₁ᵏ θᵢ = 1. Let λ ∈ [0, 1]. Then,

λy + (1 − λ)z = λ Σᵢ₌₁ᵏ ζᵢxⁱ + (1 − λ) Σᵢ₌₁ᵏ θᵢxⁱ = Σᵢ₌₁ᵏ (λζᵢ + (1 − λ)θᵢ) xⁱ.

We note that the coefficients λζᵢ + (1 − λ)θᵢ, i = 1, ..., k, are nonnegative and sum to unity. This shows that λy + (1 − λ)z is a convex combination of x¹, ..., xᵏ and, therefore, belongs to S. This establishes the convexity of S.

2.2 Extreme points, vertices, and basic feasible solutions

We observed in Section 1.4 that an optimal solution to a linear programming problem tends to occur at a "corner" of the polyhedron over which we are optimizing. In this section, we suggest three different ways of defining the concept of a "corner" and then show that all three definitions are equivalent.

Our first definition defines an extreme point of a polyhedron as a point that cannot be expressed as a convex combination of two other elements of the polyhedron, and is illustrated in Figure 2.4. Notice that this definition is entirely geometric and does not refer to a specific representation of a polyhedron in terms of linear constraints.

Definition 2.6 Let P be a polyhedron. A vector x ∈ P is an extreme point of P if we cannot find two vectors y, z ∈ P, both different from x, and a scalar λ ∈ [0, 1], such that x = λy + (1 − λ)z.


Figure 2.4: The vector w is not an extreme point because it is a convex combination of v and u. The vector x is an extreme point: if x = λy + (1 − λ)z and λ ∈ [0, 1], then either y ∉ P, or z ∉ P, or x = y, or x = z.


An alternative geometric definition defines a vertex of a polyhedron P as the unique optimal solution to some linear programming problem with feasible set P.

Definition 2.7 Let P be a polyhedron. A vector x ∈ P is a vertex of P if there exists some c such that c'x < c'y for all y satisfying y ∈ P and y ≠ x.

In other words, x is a vertex of P if and only if P is on one side of a hyperplane (the hyperplane {y | c'y = c'x}) which meets P only at the point x; see Figure 2.5.

The two geometric definitions that we have given so far are not easy to work with from an algorithmic point of view. We would like to have a definition that relies on a representation of a polyhedron in terms of linear constraints and which reduces to an algebraic test. In order to provide such a definition, we need some more terminology.

Consider a polyhedron P ⊂ ℝⁿ defined in terms of the linear equality and inequality constraints

aᵢ'x ≥ bᵢ, i ∈ M₁,
aᵢ'x ≤ bᵢ, i ∈ M₂,
aᵢ'x = bᵢ, i ∈ M₃,

where M₁, M₂, and M₃ are finite index sets, each aᵢ is a vector in ℝⁿ, and


Figure 2.5: The line at the bottom touches P at a single point, which is a vertex. On the other hand, w is not a vertex because there is no hyperplane that meets P only at w.

each bᵢ is a scalar. The definition that follows is illustrated in Figure 2.6.

Definition 2.8 If a vector x* satisfies aᵢ'x* = bᵢ for some i in M₁ ∪ M₂ ∪ M₃, we say that the corresponding constraint is active or binding at x*.

If there are n constraints that are active at a vector x*, then x* satisfies a certain system of n linear equations in n unknowns. This system has a unique solution if and only if these n equations are "linearly independent." The result that follows gives a precise meaning to this statement, together with a slight generalization.

Theorem 2.2 Let x* be an element of ℝⁿ and let I = {i | aᵢ'x* = bᵢ} be the set of indices of constraints that are active at x*. Then, the following are equivalent:

(a) There exist n vectors in the set {aᵢ | i ∈ I}, which are linearly independent.

(b) The span of the vectors aᵢ, i ∈ I, is all of ℝⁿ, that is, every element of ℝⁿ can be expressed as a linear combination of the vectors aᵢ, i ∈ I.

(c) The system of equations aᵢ'x = bᵢ, i ∈ I, has a unique solution.

Proof. Suppose that the vectors aᵢ, i ∈ I, span ℝⁿ. Then, the span of these vectors has dimension n. By Theorem 1.3 in Section 1.5, n of


Figure 2.6: Let P = {(x₁, x₂, x₃) | x₁ + x₂ + x₃ = 1, x₁, x₂, x₃ ≥ 0}. There are three constraints that are active at each one of the points A, B, C, and D. There are only two constraints that are active at point E, namely x₁ + x₂ + x₃ = 1 and x₂ = 0.

9

By Theorem 1.3 in Section 1.5, n of these vectors form a basis of R^n, and are therefore linearly independent. Conversely, suppose that n of the vectors a_i, i ∈ I, are linearly independent. Then, the subspace spanned by these n vectors is n-dimensional and must be equal to R^n. Hence, every element of R^n is a linear combination of the vectors a_i, i ∈ I. This establishes the equivalence of (a) and (b).

If the system of equations a_i'x = b_i, i ∈ I, has multiple solutions, say x1 and x2, then the nonzero vector d = x1 − x2 satisfies a_i'd = 0 for all i ∈ I. Since d is orthogonal to every vector a_i, i ∈ I, d is not a linear combination of these vectors, and it follows that the vectors a_i, i ∈ I, do not span R^n. Conversely, if the vectors a_i, i ∈ I, do not span R^n, choose a nonzero vector d which is orthogonal to the subspace spanned by these vectors. If x satisfies a_i'x = b_i for all i ∈ I, we also have a_i'(x + d) = b_i for all i ∈ I, thus obtaining multiple solutions. We have therefore established that (b) and (c) are equivalent. □

With a slight abuse of language, we will often say that certain constraints are linearly independent, meaning that the corresponding vectors a_i are linearly independent. With this terminology, statement (a) in Theorem 2.2 requires that there exist n linearly independent constraints that are active at x*.
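As a concrete illustration (our own sketch, not part of the book's text), the test in Theorem 2.2 can be carried out numerically. The fragment below checks condition (a) for the polyhedron of Figure 2.6; the helper names `rank` and `active_rows` are ours.

```python
# Sketch (not from the book): checking Theorem 2.2 numerically for the
# polyhedron of Figure 2.6, P = {x | x1 + x2 + x3 = 1, x >= 0}.

def rank(rows, tol=1e-9):
    """Rank of a list of row vectors, by Gaussian elimination."""
    m = [list(map(float, r)) for r in rows]
    r = 0
    for c in range(len(m[0]) if m else 0):
        piv = max(range(r, len(m)), key=lambda i: abs(m[i][c]), default=None)
        if piv is None or abs(m[piv][c]) < tol:
            continue
        m[r], m[piv] = m[piv], m[r]
        for i in range(len(m)):
            if i != r and abs(m[i][c]) > tol:
                f = m[i][c] / m[r][c]
                m[i] = [a - f * b for a, b in zip(m[i], m[r])]
        r += 1
    return r

# Constraints stored as (a_i, b_i); the first one is the equality constraint.
constraints = [((1, 1, 1), 1), ((1, 0, 0), 0), ((0, 1, 0), 0), ((0, 0, 1), 0)]

def active_rows(x, tol=1e-9):
    """The vectors a_i of the constraints that are active at x."""
    return [a for a, b in constraints
            if abs(sum(ai * xi for ai, xi in zip(a, x)) - b) < tol]

print(rank(active_rows((0, 0, 1))))      # 3: n linearly independent active constraints
print(rank(active_rows((0, 0.5, 0.5))))  # 2: only two constraints are active
```

At the corner (0, 0, 1) three linearly independent constraints are active, so the active system has a unique solution; at the non-corner point (0, 1/2, 1/2) only two constraints are active.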

We are now ready to provide an algebraic definition of a corner point, as a feasible solution at which there are n linearly independent active constraints. Note that since we are interested in a feasible solution, all equality


constraints must be active. This suggests the following way of looking for corner points: first impose the equality constraints and then require that enough additional constraints be active, so that we get a total of n linearly independent active constraints. Once we have n linearly independent active constraints, a unique vector x* is determined (Theorem 2.2). However, this procedure has no guarantee of leading to a feasible vector x*, because some of the inactive constraints could be violated; in the latter case we say that we have a basic, but not basic feasible, solution.

Definition 2.9  Consider a polyhedron P defined by linear equality and inequality constraints, and let x* be an element of R^n.

(a) The vector x* is a basic solution if:
    (i) All equality constraints are active;
    (ii) Out of the constraints that are active at x*, there are n of them that are linearly independent.

(b) If x* is a basic solution that satisfies all of the constraints, we say that it is a basic feasible solution.

In reference to Figure 2.6, we note that points A, B, and C are basic feasible solutions. Point D is not a basic solution because it fails to satisfy the equality constraint. Point E is feasible, but not basic. If the equality constraint x1 + x2 + x3 = 1 were to be replaced by the constraints x1 + x2 + x3 ≤ 1 and x1 + x2 + x3 ≥ 1, then D would be a basic solution, according to our definition. This shows that whether a point is a basic solution or not may depend on the way that a polyhedron is represented. Definition 2.9 is also illustrated in Figure 2.7.

Note that if the number m of constraints used to define a polyhedron P ⊂ R^n is less than n, the number of active constraints at any given point must also be less than n, and there are no basic or basic feasible solutions.

We have given so far three different definitions that are meant to capture the same concept; two of them are geometric (extreme point, vertex) and the third is algebraic (basic feasible solution). Fortunately, all three definitions are equivalent as we prove next and, for this reason, the three terms can be used interchangeably.

Theorem 2.3  Let P be a nonempty polyhedron and let x* ∈ P. Then, the following are equivalent:

(a) x* is a vertex;

(b) x* is an extreme point;

(c) x* is a basic feasible solution.


Figure 2.7: The points A, B, C, D, E, F are all basic solutions because, at each one of them, there are two linearly independent constraints that are active. Points C, D, E, F are basic feasible solutions.

Proof. For the purposes of this proof and without loss of generality, we assume that P is represented in terms of constraints of the form a_i'x ≥ b_i and a_i'x = b_i.

Vertex ⇒ Extreme point
Suppose that x* ∈ P is a vertex. Then, by Definition 2.7, there exists some c ∈ R^n such that c'x* < c'y for every y satisfying y ∈ P and y ≠ x*. If y ∈ P, z ∈ P, y ≠ x*, z ≠ x*, and 0 ≤ λ ≤ 1, then c'x* < c'y and c'x* < c'z, which implies that c'x* < c'(λy + (1 − λ)z) and, therefore, x* ≠ λy + (1 − λ)z. Thus, x* cannot be expressed as a convex combination of two other elements of P and is, therefore, an extreme point

(cf. Definition 2.6).

Extreme point ⇒ Basic feasible solution
Suppose that x* ∈ P is not a basic feasible solution. We will show that x* is not an extreme point of P. Let I = {i | a_i'x* = b_i}. Since x* is not a basic feasible solution, there do not exist n linearly independent vectors in the family a_i, i ∈ I. Thus, the vectors a_i, i ∈ I, lie in a proper subspace of R^n, and there exists some nonzero vector d ∈ R^n such that a_i'd = 0, for all i ∈ I. Let ε be a small positive number, and consider the vectors y = x* + εd and z = x* − εd. Notice that a_i'y = a_i'x* = b_i, for i ∈ I. Furthermore, for i ∉ I, we have a_i'x* > b_i and, provided that ε is small, we will also have a_i'y > b_i. (It suffices to choose ε so that ε|a_i'd| < a_i'x* − b_i for all i ∉ I.) Thus, when ε is small enough, y ∈ P and, by a similar argument, z ∈ P. We finally notice that x* = (y + z)/2, which implies that x* is not an extreme point.


Basic feasible solution ⇒ Vertex
Let x* be a basic feasible solution and let I = {i | a_i'x* = b_i}. Let c = Σ_{i∈I} a_i. We then have

    c'x* = Σ_{i∈I} a_i'x* = Σ_{i∈I} b_i.

Furthermore, for any x ∈ P and any i, we have a_i'x ≥ b_i, and

    c'x = Σ_{i∈I} a_i'x ≥ Σ_{i∈I} b_i.        (2.2)

This shows that x* is an optimal solution to the problem of minimizing c'x over the set P. Furthermore, equality holds in (2.2) if and only if a_i'x = b_i for all i ∈ I. Since x* is a basic feasible solution, there are n linearly independent constraints that are active at x*, and x* is the unique solution to the system of equations a_i'x = b_i, i ∈ I (Theorem 2.2). It follows that x* is the unique minimizer of c'x over the set P and, therefore, x* is a vertex of P. □

Since a vector is a basic feasible solution if and only if it is an extreme point, and since the definition of an extreme point does not refer to any particular representation of a polyhedron, we conclude that the property of being a basic feasible solution is also independent of the representation used. (This is in contrast to the definition of a basic solution, which is representation dependent, as pointed out in the discussion that followed Definition 2.9.)

We finally note the following important fact.

Corollary 2.1  Given a finite number of linear inequality constraints, there can only be a finite number of basic or basic feasible solutions.

Proof. Consider a system of m linear inequality constraints imposed on a vector x ∈ R^n. At any basic solution, there are n linearly independent active constraints. Since any n linearly independent active constraints define a unique point, it follows that different basic solutions correspond to different sets of n linearly independent active constraints. Therefore, the number of basic solutions is bounded above by the number of ways that we can choose n constraints out of a total of m, which is finite. □

Although the number of basic and, therefore, basic feasible solutions is guaranteed to be finite, it can be very large. For example, the unit cube {x ∈ R^n | 0 ≤ x_i ≤ 1, i = 1, . . . , n} is defined in terms of 2n constraints, but has 2^n basic feasible solutions.
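The counting argument in the proof is easy to verify by brute force. The sketch below (ours, not the book's) enumerates all choices of n out of the 2n constraints of the cube for n = 3, keeps those that determine a unique point, and checks feasibility; the helper `solve` is a plain Gaussian-elimination solver.

```python
# Sketch (not from the book): enumerating the basic and basic feasible
# solutions of the unit cube in R^3, defined by 2n = 6 constraints.
from itertools import combinations

n = 3
# Constraints a'x >= b:  x_i >= 0,  and  -x_i >= -1  (i.e. x_i <= 1).
constraints = (
    [(tuple(1 if j == i else 0 for j in range(n)), 0) for i in range(n)]
    + [(tuple(-1 if j == i else 0 for j in range(n)), -1) for i in range(n)])

def solve(rows, rhs):
    """Solve a square linear system; return None if it is singular."""
    m = [list(map(float, r)) + [float(b)] for r, b in zip(rows, rhs)]
    for c in range(n):
        piv = next((i for i in range(c, n) if abs(m[i][c]) > 1e-9), None)
        if piv is None:
            return None
        m[c], m[piv] = m[piv], m[c]
        for i in range(n):
            if i != c:
                f = m[i][c] / m[c][c]
                m[i] = [a - f * b for a, b in zip(m[i], m[c])]
    return tuple(round(m[i][n] / m[i][i], 9) for i in range(n))

basic, feasible = set(), set()
for combo in combinations(constraints, n):        # choose n of the 2n constraints
    x = solve([a for a, _ in combo], [b for _, b in combo])
    if x is None:
        continue                                  # not linearly independent
    basic.add(x)
    if all(sum(a * xi for a, xi in zip(ai, x)) >= b - 1e-9
           for ai, b in constraints):
        feasible.add(x)

print(len(basic), len(feasible))   # 8 8: every basic solution of the cube is feasible
```

The number of distinct basic solutions (8 = 2^3) is indeed bounded by the number of ways of choosing n constraints out of 2n, here C(6, 3) = 20.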


Adjacent basic solutions

Two distinct basic solutions to a set of linear constraints in R^n are said to be adjacent if we can find n − 1 linearly independent constraints that are active at both of them. In reference to Figure 2.7, D and E are adjacent to B; also, A and C are adjacent to D. If two adjacent basic solutions are also feasible, then the line segment that joins them is called an edge of the feasible set (see also Exercise 2.15).

2.3 Polyhedra in standard form

The definition of a basic solution (Definition 2.9) refers to general polyhedra. We will now specialize to polyhedra in standard form. The definitions and the results in this section are central to the development of the simplex method in the next chapter.

Let P = {x ∈ R^n | Ax = b, x ≥ 0} be a polyhedron in standard form, and let the dimensions of A be m × n, where m is the number of equality constraints. In most of our discussion of standard form problems, we will make the assumption that the m rows of the matrix A are linearly independent. (Since the rows are n-dimensional, this requires that m ≤ n.) At the end of this section, we show that when P is nonempty, linearly dependent rows of A correspond to redundant constraints that can be discarded; therefore, our linear independence assumption can be made without loss of generality.

Recall that at any basic solution, there must be n linearly independent constraints that are active. Furthermore, every basic solution must satisfy the equality constraints Ax = b, which provides us with m active constraints; these are linearly independent because of our assumption on the rows of A. In order to obtain a total of n active constraints, we need to choose n − m of the variables x_i and set them to zero, which makes the corresponding nonnegativity constraints x_i ≥ 0 active. However, for the resulting set of n active constraints to be linearly independent, the choice of these n − m variables is not entirely arbitrary, as shown by the following result.

Theorem 2.4  Consider the constraints Ax = b and x ≥ 0 and assume that the m × n matrix A has linearly independent rows. A vector x ∈ R^n is a basic solution if and only if we have Ax = b, and there exist indices B(1), . . . , B(m) such that:

(a) The columns A_B(1), . . . , A_B(m) are linearly independent;

(b) If i ≠ B(1), . . . , B(m), then x_i = 0.

Proof. Consider some x ∈ R^n and suppose that there are indices B(1), . . . ,


B(m) that satisfy (a) and (b) in the statement of the theorem. The active constraints x_i = 0, i ≠ B(1), . . . , B(m), and Ax = b imply that

    Σ_{i=1}^{m} A_B(i) x_B(i) = Σ_{i=1}^{n} A_i x_i = b.

Since the columns A_B(i), i = 1, . . . , m, are linearly independent, x_B(1), . . . , x_B(m) are uniquely determined. Thus, the system of equations formed by the active constraints has a unique solution. By Theorem 2.2, there are n linearly independent active constraints, and this implies that x is a basic solution.

For the converse, we assume that x is a basic solution and we will show that conditions (a) and (b) in the statement of the theorem are satisfied. Let x_B(1), . . . , x_B(k) be the components of x that are nonzero. Since x is a basic solution, the system of equations formed by the active constraints Σ_{i=1}^{n} A_i x_i = b and x_i = 0, i ≠ B(1), . . . , B(k), has a unique solution (cf. Theorem 2.2); equivalently, the equation Σ_{i=1}^{k} A_B(i) x_B(i) = b has a unique solution. It follows that the columns A_B(1), . . . , A_B(k) are linearly independent. [If they were not, we could find scalars λ1, . . . , λk, not all of them zero, such that Σ_{i=1}^{k} λ_i A_B(i) = 0. This would imply that Σ_{i=1}^{k} A_B(i) (x_B(i) + λ_i) = b, contradicting the uniqueness of the solution.]

We have shown that the columns A_B(1), . . . , A_B(k) are linearly independent and this implies that k ≤ m. Since A has m linearly independent rows, it also has m linearly independent columns, which span R^m. It follows [cf. Theorem 1.3(b) in Section 1.5] that we can find m − k additional columns A_B(k+1), . . . , A_B(m) so that the columns A_B(i), i = 1, . . . , m, are linearly independent. In addition, if i ≠ B(1), . . . , B(m), then i ≠ B(1), . . . , B(k) (because k ≤ m), and x_i = 0. Therefore, both conditions (a) and (b) in

the statement of the theorem are satisfied. □

In view of Theorem 2.4, all basic solutions to a standard form polyhedron can be constructed according to the following procedure.

Procedure for constructing basic solutions
1. Choose m linearly independent columns A_B(1), . . . , A_B(m).
2. Let x_i = 0 for all i ≠ B(1), . . . , B(m).
3. Solve the system of m equations Ax = b for the unknowns x_B(1), . . . , x_B(m).

If a basic solution constructed according to this procedure is nonnegative, then it is feasible, and it is a basic feasible solution. Conversely, since every basic feasible solution is a basic solution, it can be obtained from this procedure. If x is a basic solution, the variables x_B(1), . . . , x_B(m) are called


basic variables; the remaining variables are called nonbasic. The columns A_B(1), . . . , A_B(m) are called the basic columns and, since they are linearly independent, they form a basis of R^m. We will sometimes talk about two bases being distinct or different; our convention is that distinct bases involve different sets {B(1), . . . , B(m)} of basic indices; if two bases involve the same set of indices in a different order, they will be viewed as one and the same basis.

By arranging the m basic columns next to each other, we obtain an m × m matrix B, called a basis matrix. (Note that this matrix is invertible because the basic columns are required to be linearly independent.) We can similarly define a vector x_B with the values of the basic variables. Thus,

    B = [ A_B(1)  A_B(2)  · · ·  A_B(m) ],        x_B = ( x_B(1), . . . , x_B(m) ).

The basic variables are determined by solving the equation B x_B = b, whose unique solution is given by

    x_B = B^{-1} b.

Example 2.1  Let the constraints Ax = b be of the form

    [ 1  1  2  1  0  0  0 ]        [  8 ]
    [ 0  1  6  0  1  0  0 ]  x  =  [ 12 ]
    [ 1  0  0  0  0  1  0 ]        [  4 ]
    [ 0  1  0  0  0  0  1 ]        [  6 ].

Let us choose A4, A5, A6, A7 as our basic columns. Note that they are linearly independent and the corresponding basis matrix is the identity. We then obtain the basic solution x = (0, 0, 0, 8, 12, 4, 6), which is nonnegative and, therefore, is a basic feasible solution. Another basis is obtained by choosing the columns A3, A5, A6, A7 (note that they are linearly independent). The corresponding basic solution is x = (0, 0, 4, 0, −12, 4, 6), which is not feasible because x5 = −12 < 0.

Suppose now that there was an eighth column A8, identical to A7. Then, the two sets of columns {A3, A5, A6, A7} and {A3, A5, A6, A8} coincide. On the other hand, the corresponding sets of basic indices, which are {3, 5, 6, 7} and {3, 5, 6, 8}, are different and we have two different bases, according to our convention.
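The three-step procedure lends itself to a direct implementation. The following sketch (ours, not the book's) applies it to the data of Example 2.1, using exact rational arithmetic; note that Python indices are 0-based, so the basis {A4, A5, A6, A7} is written [3, 4, 5, 6].

```python
# Sketch (not from the book): the procedure for constructing basic
# solutions, applied to the data of Example 2.1.
from fractions import Fraction

A = [[1, 1, 2, 1, 0, 0, 0],
     [0, 1, 6, 0, 1, 0, 0],
     [1, 0, 0, 0, 0, 1, 0],
     [0, 1, 0, 0, 0, 0, 1]]
b = [8, 12, 4, 6]
m, n = 4, 7

def basic_solution(basis):
    """Set the nonbasic variables to zero and solve B x_B = b."""
    # Augmented basis matrix [B | b], assuming the chosen columns
    # are linearly independent.
    M = [[Fraction(A[i][j]) for j in basis] + [Fraction(b[i])] for i in range(m)]
    for c in range(m):                              # Gaussian elimination
        piv = next(i for i in range(c, m) if M[i][c] != 0)
        M[c], M[piv] = M[piv], M[c]
        for i in range(m):
            if i != c:
                f = M[i][c] / M[c][c]
                M[i] = [a - f * e for a, e in zip(M[i], M[c])]
    x = [Fraction(0)] * n
    for c, j in enumerate(basis):
        x[j] = M[c][m] / M[c][c]
    return x

print(basic_solution([3, 4, 5, 6]))   # basic feasible solution (0,0,0,8,12,4,6)
print(basic_solution([2, 4, 5, 6]))   # infeasible basic solution (0,0,4,0,-12,4,6)
```

The first basis reproduces the basic feasible solution of Example 2.1; the second reproduces the infeasible basic solution with x5 = −12.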

For an intuitive view of basic solutions, recall our interpretation of the constraint Ax = b, or Σ_{i=1}^{n} A_i x_i = b, as a requirement to synthesize the vector b ∈ R^m using the resource vectors A_i (Section 1.1). In a basic solution, we use only m of the resource vectors, those associated with the basic variables. Furthermore, in a basic feasible solution, this is accomplished using a nonnegative amount of each basic vector; see Figure 2.8 for an illustration.


Figure 2.8: Consider a standard form problem with n = 4 and m = 2, and let the vectors b, A1, . . . , A4 be as shown. One pair of the vectors A_i forms a basis for which the corresponding basic solution is infeasible, because a negative value of one of the basic variables is needed to synthesize b. Another pair forms a basis whose basic solution is feasible. Finally, a third pair does not form a basis, because the two vectors are linearly dependent.

Correspondence of bases and basic solutions

We now elaborate on the correspondence between basic solutions and bases. Different basic solutions must correspond to different bases, because a basis uniquely determines a basic solution. However, two different bases may lead to the same basic solution. (For an extreme example, if we have b = 0, then every basis matrix leads to the same basic solution, namely, the zero vector.) This phenomenon has some important algorithmic implications, and is closely related to degeneracy, which is the subject of the next section.

Adjacent basic solutions and adjacent bases

Recall that two distinct basic solutions are said to be adjacent if there are n − 1 linearly independent constraints that are active at both of them. For standard form problems, we also say that two bases are adjacent if they share all but one basic column. Then, it is not hard to check that adjacent basic solutions can always be obtained from two adjacent bases. Conversely, if two adjacent bases lead to distinct basic solutions, then the latter are adjacent.

Example 2.2  In reference to Example 2.1, the bases {A4, A5, A6, A7} and {A3, A5, A6, A7} are adjacent, because all but one column are the same. The corresponding basic solutions x = (0, 0, 0, 8, 12, 4, 6) and x = (0, 0, 4, 0, −12, 4, 6) are adjacent.


Example 2.3  Consider the (nonempty) polyhedron defined by the constraints

    2x1 + x2 + x3 = 2
     x1 + x2      = 1
     x1      + x3 = 1
     x1, x2, x3 ≥ 0.

The corresponding matrix A has rank two. This is because the last two rows, (1, 1, 0) and (1, 0, 1), are linearly independent, but the first row is equal to the sum of the other two. Thus, the first constraint is redundant and, after it is eliminated, we still have the same polyhedron.
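The redundancy claim can be checked in one line (our own check, not part of the text):

```python
# A quick check (not from the book) that the first row of the constraint
# matrix in Example 2.3 is the sum of the other two, so the matrix has
# rank two and the first constraint is redundant.
rows = [(2, 1, 1), (1, 1, 0), (1, 0, 1)]
print(tuple(x + y for x, y in zip(rows[1], rows[2])) == rows[0])   # True
```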

2.4 Degeneracy

According to our definition, at a basic solution, we must have n linearly independent active constraints. This allows for the possibility that the number of active constraints is greater than n. (Of course, in n dimensions, no more than n of them can be linearly independent.) In this case, we say that we have a degenerate basic solution. In other words, at a degenerate basic solution, the number of active constraints is greater than the minimum necessary.

Definition 2.10  A basic solution x ∈ R^n is said to be degenerate if more than n of the constraints are active at x.

In two dimensions, a degenerate basic solution is at the intersection of three or more lines; in three dimensions, a degenerate basic solution is at the intersection of four or more planes; see Figure 2.9 for an illustration. It turns out that the presence of degeneracy can strongly affect the behavior of linear programming algorithms and, for this reason, we will now develop some more intuition.

Example 2.4  Consider the polyhedron P defined by the constraints

    x1 +  x2 + 2x3 ≤ 8
          x2 + 6x3 ≤ 12
    x1             ≤ 4
          x2       ≤ 6
    x1, x2, x3 ≥ 0.

The vector x = (2, 6, 0) is a nondegenerate basic feasible solution, because there are exactly three active and linearly independent constraints, namely, x1 + x2 + 2x3 ≤ 8, x2 ≤ 6, and x3 ≥ 0. The vector x = (4, 0, 2) is a degenerate basic feasible solution, because there are four active constraints, three of them linearly independent, namely, x1 + x2 + 2x3 ≤ 8, x2 + 6x3 ≤ 12, x1 ≤ 4, and x2 ≥ 0.
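Degeneracy, as defined here, is easy to test by counting active constraints. The sketch below (ours, not the book's) does this for the two points of Example 2.4, writing every constraint uniformly as a'x ≤ b:

```python
# Sketch (not from the book): counting active constraints at the two
# points of Example 2.4.  A basic solution in R^3 is degenerate when
# more than 3 constraints are active.

# All seven constraints written uniformly as a'x <= b
# (a nonnegativity constraint x_i >= 0 becomes -x_i <= 0).
constraints = [((1, 1, 2), 8),
               ((0, 1, 6), 12),
               ((1, 0, 0), 4),
               ((0, 1, 0), 6),
               ((-1, 0, 0), 0),
               ((0, -1, 0), 0),
               ((0, 0, -1), 0)]

def n_active(x):
    """Number of constraints satisfied with equality at x."""
    return sum(sum(a * xi for a, xi in zip(ai, x)) == b
               for ai, b in constraints)

print(n_active((2, 6, 0)))   # 3 -> nondegenerate basic feasible solution
print(n_active((4, 0, 2)))   # 4 -> degenerate basic feasible solution
```

Since all the data are integers, the equality tests are exact here; for general data one would compare against a small tolerance instead.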


Figure 2.9: The points A and C are degenerate basic feasible solutions. The points B and E are nondegenerate basic feasible solutions. The point D is a degenerate basic solution.

Degeneracy in standard form polyhedra

At a basic solution of a polyhedron in standard form, the m equality constraints are always active. Therefore, having more than n active constraints is the same as having more than n − m variables at zero level. This leads us to the next definition, which is a special case of Definition 2.10.

Definition 2.11  Consider the standard form polyhedron P = {x ∈ R^n | Ax = b, x ≥ 0} and let x be a basic solution. Let m be the number of rows of A. The vector x is a degenerate basic solution if more than n − m of the components of x are zero.

Example 2.5  Consider once more the polyhedron of Example 2.4. By introducing the slack variables x4, . . . , x7, we can transform it into the standard form P = {x = (x1, . . . , x7) | Ax = b, x ≥ 0}, where

    A = [ 1  1  2  1  0  0  0 ]
        [ 0  1  6  0  1  0  0 ]
        [ 1  0  0  0  0  1  0 ]
        [ 0  1  0  0  0  0  1 ],        b = (8, 12, 4, 6).

Consider the basis consisting of the linearly independent columns A1, A2, A3, A7. To calculate the corresponding basic solution, we first set the nonbasic variables x4, x5, and x6 to zero, and then solve the system Ax = b for the remaining variables, to obtain x = (4, 0, 2, 0, 0, 0, 6). This is a degenerate basic feasible solution, because we have a total of four variables that are zero, whereas


n − m = 7 − 4 = 3. Thus, while we initially set only the three nonbasic variables to zero, the solution to the system Ax = b turned out to satisfy one more of the constraints (namely, the constraint x2 ≥ 0) with equality. Consider now the basis consisting of the linearly independent columns A1, A3, A4, and A7. The corresponding basic feasible solution is again x = (4, 0, 2, 0, 0, 0, 6).

The preceding example suggests that we can think of degeneracy in the following terms. We pick a basic solution by picking n linearly independent constraints to be satisfied with equality, and we realize that certain other constraints are also satisfied with equality. If the entries of A or b were chosen at random, this would almost never happen. Also, Figure 2.10 illustrates that if the coefficients of the active constraints are slightly perturbed, degeneracy can disappear (cf. Exercise 2.18). In practical problems, however, the entries of A and b often have a special (nonrandom) structure, and degeneracy is more common than the preceding argument would seem to suggest.

Figure 2.10: Small changes in the constraining inequalities can remove degeneracy.

In order to visualize degeneracy in standard form polyhedra, we assume that n − m = 2 and we draw the feasible set as a subset of the two-dimensional set defined by the equality constraints Ax = b (see Figure 2.11). At a nondegenerate basic solution, exactly n − m of the constraints x_i ≥ 0 are active; the corresponding variables are nonbasic. In the case of a degenerate basic solution, more than n − m of the constraints x_i ≥ 0 are active, and there are usually several ways of choosing which n − m variables to call nonbasic; in that case, there are several bases corresponding to that same basic solution. (This discussion refers to the typical case. However, there are examples of degenerate basic solutions to which there corresponds only one basis.)

Degeneracy is not a purely geometric property

We close this section by pointing out that degeneracy of basic feasible solutions is not, in general, a geometric (representation independent) property,


Figure 2.11: An (n − m)-dimensional illustration of degeneracy. Here, n = 6 and m = 4. The basic feasible solution A is nondegenerate and the basic variables are x1, x2, x3, x6. The basic feasible solution B is degenerate. We can choose x1, x6 as the nonbasic variables. Other possibilities are to choose x1, x5, or to choose x5, x6. Thus, there are three possible bases, for the same basic feasible solution B.

but rather it may depend on the particular representation of a polyhedron. To illustrate this point, consider the standard form polyhedron (cf. Figure 2.12)

    P = {(x1, x2, x3) | x1 − x2 = 0, x1 + x2 + 2x3 = 2, x1, x2, x3 ≥ 0}.

We have n = 3, m = 2, and n − m = 1. The vector (1, 1, 0) is a nondegenerate basic feasible solution, because only one variable is zero. The vector (0, 0, 1) is degenerate, because two variables are zero. However, the same polyhedron can also be described by a different set of constraints.

Figure 2.12: An example of degeneracy in a standard form problem.


2.5 Existence of extreme points

It turns out that the existence of an extreme point depends on whether a polyhedron contains an infinite line or not; see Figure 2.13. We need the following definition.

Definition 2.12  A polyhedron P ⊂ R^n contains a line if there exists a vector x ∈ P and a nonzero vector d ∈ R^n such that x + λd ∈ P for all scalars λ.

We then have the following result.

Theorem 2.6  Suppose that the polyhedron P = {x ∈ R^n | a_i'x ≥ b_i, i = 1, . . . , m} is nonempty. Then, the following are equivalent:

(a) The polyhedron P has at least one extreme point.

(b) The polyhedron P does not contain a line.

(c) There exist n vectors out of the family a1, . . . , a_m, which are linearly independent.

Proof. (b) ⇒ (a)  We first prove that if P does not contain a line, then it has a basic feasible solution and, therefore, an extreme point. A geometric interpretation of this proof is provided in Figure 2.14.

Let x be an element of P and let I = {i | a_i'x = b_i}. If n of the vectors a_i, i ∈ I, corresponding to the active constraints are linearly independent, then x is, by definition, a basic feasible solution and, therefore, a basic feasible solution exists. If this is not the case, then all of the vectors a_i, i ∈ I, lie in a proper subspace of R^n, and there exists a nonzero vector d ∈ R^n such that a_i'd = 0, for every i ∈ I. Let us consider the line consisting of all points of the form y = x + λd, where λ is an arbitrary scalar. For i ∈ I, we have a_i'y = a_i'x + λa_i'd = a_i'x = b_i. Thus, those constraints that were active at x remain active at all points on the line. However, since the polyhedron is assumed to contain no lines, it follows that as we vary λ, some constraint will be eventually violated. At the point where some constraint is about to be violated, a new constraint must become active, and we conclude that there exists some λ* and some j ∉ I such that a_j'(x + λ*d) = b_j.

We claim that a_j is not a linear combination of the vectors a_i, i ∈ I. Indeed, we have a_j'x ≠ b_j (because j ∉ I) and a_j'(x + λ*d) = b_j (by the definition of λ*). Thus, a_j'd ≠ 0. On the other hand, a_i'd = 0 for every i ∈ I (by the definition of d) and, therefore, d is orthogonal to any linear combination of the vectors a_i, i ∈ I. Since d is not orthogonal to a_j, we


Figure 2.14: Starting from an arbitrary point of a polyhedron, we choose a direction along which all currently active constraints remain active. We then move along that direction until a new constraint is about to be violated. At that point, the number of linearly independent active constraints has increased by at least one. We repeat this procedure until we end up with n linearly independent active constraints, at which point we have a basic feasible solution.

conclude that a_j is not a linear combination of the vectors a_i, i ∈ I. Thus, by moving from x to x + λ*d, the number of linearly independent active constraints has been increased by at least one. By repeating the same argument, as many times as needed, we eventually end up with a point at which there are n linearly independent active constraints. Such a point is, by definition, a basic solution; it is also feasible, since we have stayed within the feasible set.

(a) ⇒ (c)  If P has an extreme point x*, then x* is also a basic feasible solution (cf. Theorem 2.3), and there exist n constraints that are active at x*, with the corresponding vectors a_i being linearly independent.

(c) ⇒ (b)  Suppose that n of the vectors a_i are linearly independent and, without loss of generality, let us assume that a1, . . . , a_n are linearly independent. Suppose that P contains a line x + λd, where d is a nonzero vector. We then have a_i'(x + λd) ≥ b_i for all i and all λ. We conclude that a_i'd = 0 for all i. (If a_i'd < 0, we can violate the constraint by picking λ very large; a symmetric argument applies if a_i'd > 0.) Since the vectors a_i, i = 1, . . . , n, are linearly independent, this implies that d = 0. This is a contradiction and establishes that P does not contain a line. □
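One step of the construction in the proof can be sketched in code (our own illustration; the function name and the tiny example polyhedron are not from the book): starting from a feasible x and a direction d that keeps the currently active constraints active, a ratio test finds the step λ* at which the first new constraint becomes active.

```python
# Sketch (not from the book): one step of the construction in the proof.
# Given x in P and a direction d with a_i'd = 0 for all active i, find
# the step lam* at which the first new constraint becomes active.

def step_to_new_active(x, d, constraints, tol=1e-9):
    """Return (lam*, j) for the first constraint a_j'x >= b_j hit along x + lam*d."""
    best = None
    for j, (a, b) in enumerate(constraints):
        ad = sum(ai * di for ai, di in zip(a, d))
        ax = sum(ai * xi for ai, xi in zip(a, x))
        if ax - b > tol and ad < -tol:     # inactive now, decreasing along d
            lam = (b - ax) / ad            # positive, since both terms flip sign
            if best is None or lam < best[0]:
                best = (lam, j)
    return best

# P = {x in R^2 : x1 >= 0, x2 >= 0, -x1 - x2 >= -4}; at x = (1, 1) no
# constraint is active, so any direction d qualifies.
constraints = [((1, 0), 0), ((0, 1), 0), ((-1, -1), -4)]
print(step_to_new_active((1, 1), (1, 0), constraints))   # (2.0, 2): hits x1 + x2 <= 4
```

Repeating this step, each time restricting d to be orthogonal to the enlarged active family, is exactly the process depicted in Figure 2.14.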

Notice that a bounded polyhedron does not contain a line. Similarly,


the positive orthant {x | x ≥ 0} does not contain a line. Since a polyhedron in standard form is contained in the positive orthant, it does not contain a line either. These observations establish the following important corollary of Theorem 2.6.

Corollary 2.2  Every nonempty bounded polyhedron and every nonempty polyhedron in standard form has at least one basic feasible solution.

2.6 Optimality of extreme points

Having established the conditions for the existence of extreme points, we will now confirm the intuition developed in Chapter 1: as long as a linear programming problem has an optimal solution and as long as the feasible set has at least one extreme point, we can always find an optimal solution within the set of extreme points of the feasible set. Later in this section, we prove a somewhat stronger result, at the expense of a more complicated proof.

Theorem 2.7  Consider the linear programming problem of minimizing c'x over a polyhedron P. Suppose that P has at least one extreme point and that there exists an optimal solution. Then, there exists an optimal solution which is an extreme point of P.

Proof. (See Figure 2.15 for an illustration.) Let Q be the set of all optimal solutions, which we have assumed to be nonempty. Let P be of the form P = {x ∈ R^n | Ax ≥ b}, and let v be the optimal value of the cost c'x. Then, Q = {x ∈ R^n | Ax ≥ b, c'x = v}, which is also a polyhedron. Since

Figure 2.15: Illustration of the proof of Theorem 2.7. Here, Q is the set of optimal solutions and an extreme point of Q is also an extreme point of P.


Q ⊂ P, and since P contains no lines (cf. Theorem 2.6), Q contains no lines either. Therefore, Q has an extreme point.

Let x* be an extreme point of Q. We will show that x* is also an extreme point of P. Suppose, in order to derive a contradiction, that x* is not an extreme point of P. Then, there exist y ∈ P, z ∈ P, such that y ≠ x*, z ≠ x*, and some λ ∈ [0, 1] such that x* = λy + (1 − λ)z. It follows that v = c'x* = λc'y + (1 − λ)c'z. Furthermore, since v is the optimal cost, c'y ≥ v and c'z ≥ v. This implies that c'y = c'z = v and, therefore, z ∈ Q and y ∈ Q. But this contradicts the fact that x* is an extreme point of Q. The contradiction establishes that x* is an extreme point of P. In addition, since x* belongs to Q, it is optimal. □

The above theorem applies to polyhedra in standard form, as well as to bounded polyhedra, since they do not contain a line.

Our next result is stronger than Theorem 2.7. It shows that the existence of an optimal solution can be taken for granted, as long as the optimal cost is finite.

Theorem 2.8  Consider the linear programming problem of minimizing c'x over a polyhedron P. Suppose that P has at least one extreme point. Then, either the optimal cost is equal to −∞, or there exists an extreme point which is optimal.

Proof. The proof is essentially a repetition of the proof of Theorem 2.6. The difference is that as we move towards a basic feasible solution, we will also make sure that the costs do not increase. We will use the following terminology: an element x of P has rank k if we can find k, but not more than k, linearly independent constraints that are active at x.

Let us assume that the optimal cost is finite. Let P = {x ∈ R^n | Ax ≥ b}, and consider some x ∈ P of rank k < n. We will show that there exists some y ∈ P which has greater rank and satisfies c'y ≤ c'x. Let I = {i | a_i'x = b_i}, where a_i' is the ith row of A. Since k < n, the vectors a_i, i ∈ I, lie in a proper subspace of R^n, and we can choose some nonzero d ∈ R^n orthogonal to every a_i, i ∈ I. Furthermore, by possibly taking the negative of d, we can assume that c'd ≤ 0.

Suppose that c'd < 0. Let us consider the half-line y = x + λd, where λ is a positive scalar. As in the proof of Theorem 2.6, all points on this half-line satisfy the relations a_i'y = b_i, i ∈ I. If the entire half-line were contained in P, the optimal cost would be −∞, which we have assumed not to be the case. Therefore, the half-line eventually exits P. When this is about to happen, we have some λ* > 0 and some j ∉ I such that a_j'(x + λ*d) = b_j. We let y = x + λ*d and note that c'y < c'x. As in the proof of Theorem 2.6, a_j is linearly independent from a_i, i ∈ I, and the rank of y is at least k + 1.


Suppose now that c'd = 0. We consider the line y = x + λd, where λ is an arbitrary scalar. Since P contains no lines, the line must eventually exit P, and when that is about to happen, we are again at a vector y of rank greater than that of x. Furthermore, since c'd = 0, we have c'y = c'x.

In either case, we have found a new point y such that c'y ≤ c'x and whose rank is greater than that of x. By repeating this process as many times as needed, we end up with a vector w of rank n (thus, w is a basic feasible solution) such that c'w ≤ c'x.

Let w^1, . . . , w^r be the basic feasible solutions in P and let w* be a basic feasible solution such that c'w* ≤ c'w^i for all i. We have already shown that for every x there exists some i such that c'w^i ≤ c'x. It follows that c'w* ≤ c'x for all x ∈ P, and the basic feasible solution w* is optimal.
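The proof of Theorem 2.8 also suggests a brute-force algorithm: when the optimal cost is known to be finite, it suffices to compare the costs of the finitely many basic feasible solutions. The following sketch is our own illustration (not from the text); it enumerates all bases of a standard form polyhedron, using exact rational arithmetic to avoid round-off.

```python
from fractions import Fraction
from itertools import combinations

def solve_square(M, rhs):
    """Solve M y = rhs by Gauss-Jordan elimination over the rationals.
    Returns None if M is singular."""
    n = len(M)
    aug = [[Fraction(M[i][j]) for j in range(n)] + [Fraction(rhs[i])]
           for i in range(n)]
    for col in range(n):
        pivot = next((r for r in range(col, n) if aug[r][col] != 0), None)
        if pivot is None:
            return None
        aug[col], aug[pivot] = aug[pivot], aug[col]
        piv = aug[col][col]
        aug[col] = [v / piv for v in aug[col]]
        for r in range(n):
            if r != col and aug[r][col] != 0:
                f = aug[r][col]
                aug[r] = [a - f * b for a, b in zip(aug[r], aug[col])]
    return [row[n] for row in aug]

def best_basic_feasible_solution(A, b, c):
    """Enumerate every basis of {x | Ax = b, x >= 0} (rows of A assumed
    independent) and return a cheapest basic feasible solution together
    with its cost; by Theorem 2.8, this solution is optimal whenever the
    optimal cost is finite."""
    m, n = len(A), len(A[0])
    best = None
    for cols in combinations(range(n), m):
        B = [[A[i][j] for j in cols] for i in range(m)]
        xB = solve_square(B, b)
        if xB is None or any(v < 0 for v in xB):
            continue  # singular basis, or basic solution infeasible
        x = [Fraction(0)] * n
        for j, v in zip(cols, xB):
            x[j] = v
        cost = sum(Fraction(cj) * xj for cj, xj in zip(c, x))
        if best is None or cost < best[1]:
            best = (x, cost)
    return best
```

For example, best_basic_feasible_solution([[1, 1, 1]], [1], [2, 1, 3]) returns the basic feasible solution (0, 1, 0) with cost 1. The number of bases grows combinatorially with the problem size, which is exactly why Chapter 3 develops the simplex method instead of such enumeration.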

For a general linear programming problem, if the feasible set has no extreme points, then Theorem 2.8 does not apply directly. On the other hand, any linear programming problem can be transformed into an equivalent problem in standard form, to which Theorem 2.8 does apply. This establishes the following corollary.

Corollary 2.3 Consider the linear programming problem of minimizing c'x over a nonempty polyhedron. Then, either the optimal cost is equal to −∞ or there exists an optimal solution.

The result in Corollary 2.3 should be contrasted with what may happen in optimization problems with a nonlinear cost function. For example, in the problem of minimizing 1/x subject to x ≥ 1, the optimal cost is not −∞, but an optimal solution does not exist.

2.7 Representation of bounded polyhedra*

So far, we have been representing polyhedra in terms of their defining inequalities. In this section, we provide an alternative, by showing that a bounded polyhedron can also be represented as the convex hull of its extreme points. The proof that we give here is elementary and constructive, and its main idea is summarized in Figure 2.16. There is a similar representation of unbounded polyhedra involving extreme points and "extreme rays" (edges that extend to infinity). This representation can be developed using the tools that we already have, at the expense of a more complicated proof. A more elegant argument, based on duality theory, will be presented in Section 4.9 and will also result in an alternative proof of Theorem 2.9 below.

Figure 2.16: Given the vector x, we express it as a convex combination of y and u. The vector u belongs to the polyhedron Q, whose dimension is lower than that of P. Using induction on dimension, we can express the vector u as a convex combination of extreme points of Q. These are also extreme points of P.

Theorem 2.9 A nonempty and bounded polyhedron is the convex hull of its extreme points.

Proof. Every convex combination of extreme points is an element of the polyhedron, since polyhedra are convex sets. Thus, we only need to prove the converse result and show that every element of a bounded polyhedron can be represented as a convex combination of extreme points.

We define the dimension of a polyhedron P as the smallest integer k such that P is contained in some k-dimensional affine subspace of R^n. (Recall from Section 1.5, that a k-dimensional affine subspace is a translation of a k-dimensional subspace.) Our proof proceeds by induction on the dimension of the polyhedron P. If P is zero-dimensional, it consists of a single point. This point is an extreme point of P, and the result is true.

Let us assume that the result is true for all polyhedra of dimension less than k. Let P = {x ∈ R^n | a_i'x ≥ b_i, i = 1, . . . , m} be a nonempty bounded k-dimensional polyhedron. Then, P is contained in a k-dimensional affine subspace S of R^n, which can be assumed to be of the form

   S = { x^0 + λ1 x^1 + · · · + λk x^k | λ1, . . . , λk ∈ R },

where x^1, . . . , x^k are some vectors in R^n. Let f_1, . . . , f_{n−k} be n − k linearly independent vectors that are orthogonal to x^1, . . . , x^k. Let … = f_i'x, for

Therefore,

   x = (1/(1 + λ*)) u + (λ*/(1 + λ*)) y,

which shows that x is a convex combination of the extreme points of P.

Example 2.6 Consider the polyhedron

   P = { (x1, x2, x3) | x1 + x2 + x3 ≤ 1, x1, x2, x3 ≥ 0 }.

It has four extreme points, namely, w^1 = (1, 0, 0), w^2 = (0, 1, 0), w^3 = (0, 0, 1), and w^4 = (0, 0, 0). The vector x = (1/4, 1/4, 1/4) belongs to P. It can be represented as

   x = (1/4) w^1 + (1/4) w^2 + (1/4) w^3 + (1/4) w^4.
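Representations of this kind can be checked numerically. The sketch below is our own illustration (the point (1/4, 1/4, 1/4) and the equal weights are chosen only for simplicity): it forms a convex combination of the four extreme points of the polyhedron of Example 2.6 and checks that the result lies in P.

```python
from fractions import Fraction

def convex_combination(weights, points):
    """Return sum_i weights[i] * points[i]; the weights must be
    nonnegative and sum to one."""
    assert all(w >= 0 for w in weights) and sum(weights) == 1
    dim = len(points[0])
    return tuple(sum(w * Fraction(p[i]) for w, p in zip(weights, points))
                 for i in range(dim))

extreme = [(1, 0, 0), (0, 1, 0), (0, 0, 1), (0, 0, 0)]
x = convex_combination([Fraction(1, 4)] * 4, extreme)

# Membership in P: x1 + x2 + x3 <= 1 and x >= 0.
assert sum(x) <= 1 and all(v >= 0 for v in x)
```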

There is a converse to Theorem 2.9, asserting that the convex hull of a finite number of points is a polyhedron. This result is proved in the next section and again in Section 4.9.

2.8 Projections of polyhedra: Fourier-Motzkin elimination*

In this section, we present perhaps the oldest method for solving linear programming problems. This method is not practical, because it requires a very large number of steps, but it has some interesting theoretical corollaries.

The key to this method is the concept of a projection, defined as follows: if x = (x1, . . . , xn) is a vector in R^n and k ≤ n, the projection mapping π_k : R^n → R^k projects x onto its first k coordinates:

   π_k(x) = π_k(x1, . . . , xn) = (x1, . . . , xk).

We also define the projection Π_k(S) of a set S ⊂ R^n by letting

   Π_k(S) = { π_k(x) | x ∈ S };

see Figure 2.17 for an illustration. Note that S is nonempty if and only if Π_k(S) is nonempty. An equivalent definition is

   Π_k(S) = { (x1, . . . , xk) | there exist x_{k+1}, . . . , x_n such that (x1, . . . , xn) ∈ S }.
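For a finite set of points, the two projection operations above can be written directly in code (a small sketch of our own; the names pi and Pi are ours):

```python
def pi(k, x):
    """The projection mapping: keep the first k coordinates of x."""
    return tuple(x[:k])

def Pi(k, S):
    """The projection of a set S, applied elementwise."""
    return {pi(k, x) for x in S}

S = {(1, 2, 3), (1, 2, 4), (5, 6, 7)}
# S is nonempty if and only if Pi(k, S) is nonempty.
assert bool(S) == bool(Pi(2, S))
```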

Suppose now that we wish to decide whether a given polyhedron P ⊂ R^n is nonempty. If we can somehow eliminate the variable x_n and construct the set Π_{n−1}(P) ⊂ R^{n−1}, we can instead consider the presumably easier problem of deciding whether Π_{n−1}(P) is nonempty. If we keep eliminating variables one by one, we eventually arrive at the set Π_1(P) that

Figure 2.17: The projections Π1(S) and Π2(S) of a rotated three-dimensional cube.

involves a single variable, and whose emptiness is easy to check. The main disadvantage of this method is that while each step reduces the dimension by one, a large number of constraints is usually added. Exercise 2.20 deals with a family of examples in which the number of constraints increases exponentially with the problem dimension.

We now describe the elimination method. We are given a polyhedron P in terms of linear inequality constraints of the form

   Σ_{j=1}^n a_ij x_j ≥ b_i,    i = 1, . . . , m.

We wish to eliminate x_n and construct the projection Π_{n−1}(P).

Elimination algorithm

1. Rewrite each constraint Σ_{j=1}^n a_ij x_j ≥ b_i in the form

      a_in x_n ≥ − Σ_{j=1}^{n−1} a_ij x_j + b_i,    i = 1, . . . , m;

   if a_in ≠ 0, divide both sides by a_in. By letting x̄ = (x1, . . . , x_{n−1}), we obtain an equivalent representation of P involving the following constraints:

      x_n ≥ d_i + f_i'x̄,    if a_in > 0,    (2.4)
      d_j + f_j'x̄ ≥ x_n,    if a_jn < 0,    (2.5)
      0 ≥ d_k + f_k'x̄,    if a_kn = 0.    (2.6)

   Here, each d_i, d_j, d_k is a scalar, and each f_i, f_j, f_k is a vector in R^{n−1}.

2. Let Q be the polyhedron in R^{n−1} defined by the constraints

      d_j + f_j'x̄ ≥ d_i + f_i'x̄,    if a_in > 0 and a_jn < 0,    (2.7)
      0 ≥ d_k + f_k'x̄,    if a_kn = 0.    (2.8)
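One pass of the elimination algorithm can be sketched as follows. This is our own encoding: each constraint is a pair (a, b) read as a·x ≥ b, and instead of first dividing by a_in as in step 1, each positive/negative pair of coefficients is combined with positive integer multipliers, which yields the same polyhedron Q up to a rescaling of the constraints.

```python
from itertools import product

def eliminate(constraints, k):
    """One step of Fourier-Motzkin elimination.

    Each constraint is a pair (a, b) encoding a[0]*x_1 + ... + a[n-1]*x_n >= b.
    Returns constraints describing the projection that forgets the variable
    with index k, following steps 1 and 2 of the elimination algorithm.
    """
    pos = [c for c in constraints if c[0][k] > 0]    # lower bounds on x_{k+1}
    neg = [c for c in constraints if c[0][k] < 0]    # upper bounds on x_{k+1}
    zero = [c for c in constraints if c[0][k] == 0]  # variable does not appear
    projected = [(a[:], b) for a, b in zero]
    for (ai, bi), (aj, bj) in product(pos, neg):
        # Scale so the x_{k+1} terms cancel, then add the two constraints.
        wi, wj = -aj[k], ai[k]  # both multipliers are positive
        a = [wi * p + wj * q for p, q in zip(ai, aj)]
        projected.append((a, wi * bi + wj * bj))
    return projected

# The system of Example 2.7 below, written as a.x >= b over (x1, x2, x3).
P = [([1, 1, 0], 1),
     ([1, 1, 2], 2),
     ([1, 0, 3], 3),
     ([1, 0, -4], 4),
     ([-1, 1, -1], 5)]
Q = eliminate(P, 2)  # eliminate x3
```

Applied to the five constraints of Example 2.7 below, one elimination step produces 1 + 2·2 = 5 constraints in (x1, x2), matching the description of the set Q in the text.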

Example 2.7 Consider the polyhedron defined by the constraints

   x1 + x2 ≥ 1
   x1 + x2 + 2x3 ≥ 2
   x1 + 3x3 ≥ 3
   x1 − 4x3 ≥ 4
   −x1 + x2 − x3 ≥ 5.

We rewrite these constraints in the form

   0 ≥ 1 − x1 − x2
   x3 ≥ 1 − (x1/2) − (x2/2)
   x3 ≥ 1 − (x1/3)
   −1 + (x1/4) ≥ x3
   −5 − x1 + x2 ≥ x3.

Then, the set Q is defined by the constraints

   0 ≥ 1 − x1 − x2
   −1 + (x1/4) ≥ 1 − (x1/2) − (x2/2)

   −1 + (x1/4) ≥ 1 − (x1/3)
   −5 − x1 + x2 ≥ 1 − (x1/2) − (x2/2)
   −5 − x1 + x2 ≥ 1 − (x1/3).

Theorem 2.10 The polyhedron Q constructed by the elimination algorithm is equal to the projection Π_{n−1}(P) of P.

Proof. If x̄ ∈ Π_{n−1}(P), there exists some x_n such that (x̄, x_n) ∈ P. In particular, the vector (x̄, x_n) satisfies Eqs. (2.4)-(2.6), from which it follows immediately that x̄ satisfies Eqs. (2.7)-(2.8), and x̄ ∈ Q. This shows that Π_{n−1}(P) ⊂ Q.

We will now prove that Q ⊂ Π_{n−1}(P). Let x̄ ∈ Q. It follows from Eq. (2.7) that

   min_{ {j | a_jn < 0} } (d_j + f_j'x̄) ≥ max_{ {i | a_in > 0} } (d_i + f_i'x̄).

Let x_n be any number between the two sides of the above inequality. It then follows that (x̄, x_n) satisfies Eqs. (2.4)-(2.6) and, therefore, belongs to the polyhedron P.

Notice that for any vector x = (x1, . . . , xn), we have

   π_{n−2}(x) = π_{n−2}( π_{n−1}(x) ).

Accordingly, for any polyhedron P, we also have

   Π_{n−2}(P) = Π_{n−2}( Π_{n−1}(P) ).

By generalizing this observation, we see that if we apply the elimination algorithm k times, we end up with the set Π_{n−k}(P); if we apply it n − 1 times, we end up with Π_1(P). Unfortunately, each application of the elimination algorithm can increase the number of constraints substantially, leading to a polyhedron Π_1(P) described by a very large number of constraints. Of course, since Π_1(P) is one-dimensional, almost all of these constraints will be redundant, but this is of no help: in order to decide which ones are redundant, we must, in general, enumerate them.

The elimination algorithm has an important theoretical consequence: since the projection Π_k(P) can be generated by repeated application of the elimination algorithm, and since the elimination algorithm always produces a polyhedron, it follows that a projection Π_k(P) of a polyhedron is also a polyhedron. This fact might be considered obvious, but a proof simpler than the one we gave is not apparent. We now restate it in somewhat different language.

2.9 Summary

We summarize our main conclusions so far regarding the solutions to linear programming problems.

(a) If the feasible set is nonempty and bounded, there exists an optimal solution. Furthermore, there exists an optimal solution which is an extreme point.

(b) If the feasible set is unbounded, there are the following possibilities:

   (i) There exists an optimal solution which is an extreme point.

   (ii) There exists an optimal solution, but no optimal solution is an extreme point. (This can only happen if the feasible set has no extreme points; it never happens when the problem is in standard form.)

   (iii) The optimal cost is −∞.

Suppose now that the optimal cost is finite and that the feasible set contains at least one extreme point. Since there are only finitely many extreme points, the problem can be solved in a finite number of steps, by enumerating all extreme points and evaluating the cost of each one. This is hardly a practical algorithm, because the number of extreme points can increase exponentially with the number of variables and constraints. In the next chapter, we will exploit the geometry of the feasible set and develop the simplex method, a systematic procedure that moves from one extreme point to another, without having to enumerate all extreme points.

An interesting aspect of the material in this chapter is the distinction between geometric (representation independent) properties of a polyhedron, and those properties that depend on a particular representation. In that respect, we have established the following:

(a) Whether or not a point is an extreme point (equivalently, vertex, or basic feasible solution) is a geometric property.

(b) Whether or not a point is a basic solution may depend on the way that a polyhedron is represented.

(c) Whether or not a basic or basic feasible solution is degenerate may depend on the way that a polyhedron is represented.

2.10 Exercises

Exercise 2.1 For each one of the following sets, determine whether it is a polyhedron.

(a) The set of all (x, y) ∈ R^2 satisfying the constraints

   x cos θ + y sin θ ≤ 1,    for all θ ∈ [0, π/2],
   x ≥ 0,
   y ≥ 0.

(b) The set of all x ∈ R satisfying the constraint x^2 − 8x + 15 ≤ 0.

(c) The empty set.

Exercise 2.2 Let f : R^n → R be a convex function and let c be some constant. Show that the set S = { x ∈ R^n | f(x) ≤ c } is convex.

Exercise 2.3 (Basic feasible solutions in standard form polyhedra with upper bounds) Consider a polyhedron defined by the constraints Ax = b and 0 ≤ x ≤ u, and assume that the matrix A has linearly independent rows. Provide a procedure analogous to the one in Section 2.3 for constructing basic solutions, and prove an analog of Theorem 2.4.

Exercise 2.4 We know that every linear programming problem can be converted to an equivalent problem in standard form. We also know that nonempty polyhedra in standard form have at least one extreme point. We are then tempted to conclude that every nonempty polyhedron has at least one extreme point. Explain what is wrong with this argument.

Exercise 2.5 (Extreme points of isomorphic polyhedra) A mapping f is called affine if it is of the form f(x) = Ax + b, where A is a matrix and b is a vector. Let P and Q be polyhedra in R^n and R^m, respectively. We say that P and Q are isomorphic if there exist affine mappings f : P → Q and g : Q → P such that g(f(x)) = x for all x ∈ P, and f(g(y)) = y for all y ∈ Q. (Intuitively, isomorphic polyhedra have the same shape.)

(a) If P and Q are isomorphic, show that there exists a one-to-one correspondence between their extreme points. In particular, if f and g are as above, show that x is an extreme point of P if and only if f(x) is an extreme point of Q.

(b) (Introducing slack variables leads to an isomorphic polyhedron) Let P = { x ∈ R^n | Ax ≥ b, x ≥ 0 }, where A is a matrix of dimensions k × n. Let Q = { (x, z) ∈ R^{n+k} | Ax − z = b, x ≥ 0, z ≥ 0 }. Show that P and Q are isomorphic.

Exercise 2.6 (Carathéodory's theorem) Let A1, . . . , An be a collection of vectors in R^m.

(a) Let

   C = { Σ_{i=1}^n λ_i A_i | λ1, . . . , λn ≥ 0 }.

Show that any element of C can be expressed in the form Σ_{i=1}^n λ_i A_i, with λ_i ≥ 0, and with at most m of the coefficients λ_i being nonzero. Hint: Consider the polyhedron

   { (λ1, . . . , λn) | Σ_{i=1}^n λ_i A_i = y, λ1, . . . , λn ≥ 0 }.

(b) Let P be the convex hull of the vectors A_i.

Show that any element of P can be expressed in the form Σ_{i=1}^n λ_i A_i, where Σ_{i=1}^n λ_i = 1 and λ_i ≥ 0 for all i, with at most m + 1 of the coefficients λ_i being nonzero.

Exercise 2.7 Suppose that { x ∈ R^n | a_i'x ≥ b_i, i = 1, . . . , m } and { x ∈ R^n | g_i'x ≥ h_i, i = 1, . . . , k } are two representations of the same nonempty polyhedron. Suppose that the vectors a_1, . . . , a_m span R^n. Show that the same must be true for the vectors g_1, . . . , g_k.

Exercise 2.8 Consider the standard form polyhedron { x | Ax = b, x ≥ 0 }, and assume that the rows of the matrix A are linearly independent. Let x be a basic solution, and let J = { i | x_i ≠ 0 }. Show that a basis is associated with the basic solution x if and only if every column A_i, i ∈ J, is in the basis.

Exercise 2.9 Consider the standard form polyhedron { x | Ax = b, x ≥ 0 }, and assume that the rows of the matrix A are linearly independent.

(a) Suppose that two different bases lead to the same basic solution. Show that the basic solution is degenerate.

(b) Consider a degenerate basic solution. Is it true that it corresponds to two or more distinct bases? Prove or give a counterexample.

(c) Suppose that a basic solution is degenerate. Is it true that there exists an adjacent basic solution which is degenerate? Prove or give a counterexample.

Exercise 2.10 Consider the standard form polyhedron P = { x | Ax = b, x ≥ 0 }. Suppose that the matrix A has dimensions m × n and that its rows are linearly independent. For each one of the following statements, state whether it is true or false. If true, provide a proof; else, provide a counterexample.

(a) If n = m + 1, then P has at most two basic feasible solutions.

(b) The set of all optimal solutions is bounded.

(c) At every optimal solution, no more than m variables can be positive.

(d) If there is more than one optimal solution, then there are uncountably many optimal solutions.

(e) If there are several optimal solutions, then there exist at least two basic feasible solutions that are optimal.

(f) Consider the problem of minimizing max{c'x, d'x} over the set P. If this problem has an optimal solution, it must have an optimal solution which is an extreme point of P.

Exercise 2.11 Let P = { x ∈ R^n | Ax ≥ b }. Suppose that at a particular basic feasible solution, there are k active constraints, with k > n. Is it true that there exist exactly (k choose n) bases that lead to this basic feasible solution? Here, (k choose n) = k!/(n!(k − n)!) is the number of ways that we can choose n out of k given items.

Exercise 2.12 Consider a nonempty polyhedron P and suppose that for each variable x_i we have either the constraint x_i ≥ 0 or the constraint x_i ≤ 0. Is it true that P has at least one basic feasible solution?

Exercise 2.13 Consider the standard form polyhedron P = { x | Ax = b, x ≥ 0 }. Suppose that the matrix A, of dimensions m × n, has linearly independent rows, and that all basic feasible solutions are nondegenerate. Let x be an element of P that has exactly m positive components.

(a) Show that x is a basic feasible solution.

(b) Show that the result of part (a) is false if the nondegeneracy assumption is removed.

Exercise 2.14 Let P be a bounded polyhedron in R^n, let a be a vector in R^n, and let b be some scalar. We define

   Q = { x ∈ P | a'x = b }.

Show that every extreme point of Q is either an extreme point of P or a convex combination of two adjacent extreme points of P.

Exercise 2.15 (Edges joining adjacent vertices) Consider the polyhedron P = { x ∈ R^n | a_i'x ≥ b_i, i = 1, . . . , m }. Suppose that u and v are distinct basic feasible solutions that satisfy a_i'u = a_i'v = b_i, i = 1, . . . , n − 1, and that the vectors a_1, . . . , a_{n−1} are linearly independent. (In particular, u and v are adjacent.) Let L = { λu + (1 − λ)v | 0 ≤ λ ≤ 1 } be the segment that joins u and v. Prove that L = { z ∈ P | a_i'z = b_i, i = 1, . . . , n − 1 }.

Exercise 2.16 Consider the set { x ∈ R^n | x1 = x2 = · · · = x_{n−1} = 0, 0 ≤ x_n ≤ 1 }. Could this be the feasible set of a problem in standard form?

Exercise 2.17 Consider the polyhedron { x ∈ R^n | Ax ≤ b, x ≥ 0 } and a nondegenerate basic feasible solution x*. We introduce slack variables z and construct a corresponding polyhedron { (x, z) | Ax + z = b, x ≥ 0, z ≥ 0 } in standard form. Show that (x*, b − Ax*) is a nondegenerate basic feasible solution for the new polyhedron.

Exercise 2.18 Consider a polyhedron P = { x | Ax ≥ b }. Given any ε > 0, show that there exists some b̃ with the following two properties:

(a) The absolute value of every component of b − b̃ is bounded by ε.

(b) Every basic feasible solution in the polyhedron P̃ = { x | Ax ≥ b̃ } is nondegenerate.

Exercise 2.19 Let P ⊂ R^n be a polyhedron in standard form whose definition involves m linearly independent equality constraints. Its dimension is defined as the smallest integer k such that P is contained in some k-dimensional affine subspace of R^n.

(a) Explain why the dimension of P is at most n − m.

(b) Suppose that P has a nondegenerate basic feasible solution. Show that the dimension of P is equal to n − m.

(c) Suppose that x is a degenerate basic feasible solution. Show that x is degenerate under every standard form representation of the same polyhedron (in the same space R^n). Hint: Using parts (a) and (b), compare the number of equality constraints in two representations of P under which x is degenerate and nondegenerate, respectively. Then, count active constraints.

Exercise 2.20 Consider the Fourier-Motzkin elimination algorithm.

(a) Suppose that the number m of constraints defining a polyhedron P is even. Show, by means of an example, that the elimination algorithm may produce a description of the polyhedron Π_{n−1}(P) involving as many as m^2/4 linear constraints, but no more than that.

(b) Show that the elimination algorithm produces a description of the one-dimensional polyhedron Π_1(P) involving no more than m^{2^{n−1}}/2^{2^n − 2} constraints.

(c) Let n = 2^p + p + 2, where p is a nonnegative integer. Consider a polyhedron in R^n defined by the 8·(n choose 3) constraints

   ±x_i ± x_j ± x_k ≤ 1,    1 ≤ i < j < k ≤ n,

where all possible sign combinations are present. Show that after p eliminations, we have at least 2^{2^p} constraints. (Note that this number increases exponentially with n.)

Exercise 2.21 Suppose that Fourier-Motzkin elimination is used in the manner described at the end of Section 2.8 to find the optimal cost in a linear programming problem. Show how this approach can be augmented to obtain an optimal solution as well.

Exercise 2.22 Let P and Q be polyhedra in R^n. Let P + Q = { x + y | x ∈ P, y ∈ Q }.

(a) Show that P + Q is a polyhedron.

(b) Show that every extreme point of P + Q is the sum of an extreme point of P and an extreme point of Q.

2.11 Notes and sources

The relation between algebra and geometry goes far back in the history of mathematics, but was limited to two and three-dimensional spaces. The insight that the same relation goes through in higher dimensions only came in the middle of the nineteenth century.

2.2. Our algebraic definition of basic feasible solutions for general polyhedra, in terms of the number of linearly independent active constraints, is not common. Nevertheless, we consider it to be quite central, because it provides the main bridge between the algebraic and geometric viewpoints, it allows for a unified treatment, and shows that there is not much that is special about standard form problems.

2.8. Fourier-Motzkin elimination is due to Fourier (1827), Dines (1918), and Motzkin (1936).

Chapter 3

The simplex method

Contents

Optimality conditions
Development of the simplex method
Implementations of the simplex method
Anticycling: lexicography and Bland's rule
Finding an initial basic feasible solution
Column geometry and the simplex method
Computational efficiency of the simplex method
Summary
Exercises
Notes and sources

We saw in Chapter 2, that if a linear programming problem in standard form has an optimal solution, then there exists a basic feasible solution that is optimal. The simplex method is based on this fact and searches for an optimal solution by moving from one basic feasible solution to another, along the edges of the feasible set, always in a cost reducing direction. Eventually, a basic feasible solution is reached at which none of the available edges leads to a cost reduction; such a basic feasible solution is optimal and the algorithm terminates. In this chapter, we provide a detailed development of the simplex method and discuss a few different implementations, including the simplex tableau and the revised simplex method. We also address some difficulties that may arise in the presence of degeneracy. We provide an interpretation of the simplex method in terms of column geometry, and we conclude with a discussion of its running time, as a function of the dimension of the problem being solved.

Throughout this chapter, we consider the standard form problem

   minimize c'x
   subject to Ax = b
              x ≥ 0,

and we let P be the corresponding feasible set. We assume that the dimensions of the matrix A are m × n and that its rows are linearly independent. We continue using our previous notation: A_i is the ith column of the matrix A, and a_i' is its ith row.

3.1 Optimality conditions

Many optimization algorithms are structured as follows: given a feasible solution, we search its neighborhood to find a nearby feasible solution with lower cost. If no nearby feasible solution leads to a cost improvement, the algorithm terminates and we have a locally optimal solution. For general optimization problems, a locally optimal solution need not be globally optimal. Fortunately, in linear programming, local optimality implies global optimality; this is because we are minimizing a convex function over a convex set (cf. Exercise 3.1). In this section, we concentrate on the problem of searching for a direction of cost decrease in a neighborhood of a given basic feasible solution, and on the associated optimality conditions.

Suppose that we are at a point x ∈ P and that we contemplate moving away from x, in the direction of a vector d ∈ R^n. Clearly, we should only consider those choices of d that do not immediately take us outside the feasible set. This leads to the following definition, illustrated in Figure 3.1.

Figure 3.1: Feasible directions at different points of a polyhedron.

Definition 3.1 Let x be an element of a polyhedron P. A vector d ∈ R^n is said to be a feasible direction at x, if there exists a positive scalar θ for which x + θd ∈ P.

Let x be a basic feasible solution to the standard form problem, let B(1), . . . , B(m) be the indices of the basic variables, and let B = [A_B(1) · · · A_B(m)] be the corresponding basis matrix. In particular, we have x_i = 0 for every nonbasic variable, while the vector x_B = (x_B(1), . . . , x_B(m)) of basic variables is given by

   x_B = B^{−1}b.

We consider the possibility of moving away from x, to a new vector x + θd, by selecting a nonbasic variable x_j (which is initially at zero level), and increasing it to a positive value θ, while keeping the remaining nonbasic variables at zero. Algebraically, d_j = 1, and d_i = 0 for every nonbasic index i other than j. At the same time, the vector x_B of basic variables changes to x_B + θd_B, where d_B = (d_B(1), d_B(2), . . . , d_B(m)) is the vector with those components of d that correspond to the basic variables.

Given that we are only interested in feasible solutions, we require A(x + θd) = b, and since x is feasible, we also have Ax = b. Thus, for the equality constraints to be satisfied for θ > 0, we need Ad = 0. Recall now that d_j = 1, and that d_i = 0 for all other nonbasic indices. Then,

   0 = Ad = Σ_{i=1}^n A_i d_i = Σ_{i=1}^m A_B(i) d_B(i) + A_j = Bd_B + A_j.

Since the basis matrix B is invertible, we obtain

   d_B = −B^{−1}A_j.    (3.1)

The direction vector d that we have just constructed will be referred to as the jth basic direction. We have so far guaranteed that the equality constraints are respected as we move away from x along the basic direction d. How about the nonnegativity constraints? We recall that the variable x_j is increased, and all other nonbasic variables stay at zero level. Thus, we need only worry about the basic variables. We distinguish two cases:

(a) Suppose that x is a nondegenerate basic feasible solution. Then, x_B > 0, from which it follows that x_B + θd_B ≥ 0, and feasibility is maintained, when θ is sufficiently small. In particular, d is a feasible direction.

(b) Suppose now that x is degenerate. Then, d is not always a feasible direction. Indeed, it is possible that a basic variable x_B(i) is zero, while the corresponding component d_B(i) of d_B = −B^{−1}A_j is negative. In that case, if we follow the jth basic direction, the nonnegativity constraint for x_B(i) is immediately violated, and we are led to infeasible solutions; see Figure 3.2.

We now study the effects on the cost function if we move along a basic direction. If d is the jth basic direction, then the rate c'd of cost change along the direction d is given by c_B'd_B + c_j, where c_B = (c_B(1), . . . , c_B(m)). Using Eq. (3.1), this is the same as c_j − c_B'B^{−1}A_j. This quantity is important enough to warrant a definition. For an intuitive interpretation, c_j is the cost per unit increase in the variable x_j, and the term c_B'B^{−1}A_j is the cost of the compensating change in the basic variables necessitated by the constraint Ax = b.

Definition 3.2 Let x be a basic solution, let B be an associated basis matrix, and let c_B be the vector of costs of the basic variables. For each j, we define the reduced cost c̄_j of the variable x_j according to the formula

   c̄_j = c_j − c_B'B^{−1}A_j.

Example 3.1 Consider the linear programming problem

   minimize c1x1 + c2x2 + c3x3 + c4x4
   subject to x1 + x2 + x3 + x4 = 2
              2x1 + 3x3 + 4x4 = 2
              x1, x2, x3, x4 ≥ 0.

The first two columns of the matrix A are A1 = (1, 2) and A2 = (1, 0). Since they are linearly independent, we can choose x1 and x2 as our basic variables. The corresponding basis matrix is

   B = [ 1  1
         2  0 ].

Figure 3.2: Let n = 5, n − m = 2. As discussed in Section 1.4, we can visualize the feasible set by standing on the two-dimensional set defined by the constraint Ax = b, in which case the edges of the feasible set are associated with the nonnegativity constraints x_i ≥ 0. At the nondegenerate basic feasible solution E, the variables x1 and x3 are at zero level (nonbasic) and x2, x4, x5 are positive basic variables. The first basic direction is obtained by increasing x1, while keeping the other nonbasic variable x3 at zero level. This is the direction corresponding to the edge EF. Consider now the degenerate basic feasible solution F and let x3, x5 be the nonbasic variables. Note that x4 is a basic variable at zero level. A basic direction is obtained by increasing x3, while keeping the other nonbasic variable x5 at zero level. This is the direction corresponding to the line FG, and it takes us outside the feasible set. Thus, this basic direction is not a feasible direction.

We set x3 = x4 = 0, and solve for x1, x2, to obtain x1 = 1 and x2 = 1. We have thus obtained a nondegenerate basic feasible solution.

A basic direction corresponding to an increase in the nonbasic variable x3, is constructed as follows. We have d3 = 1 and d4 = 0. The direction of change of the basic variables is obtained using Eq. (3.1):

   d_B = (d_B(1), d_B(2)) = −B^{−1}A3 = −(3/2, −1/2) = (−3/2, 1/2).

The cost of moving along this basic direction is c'd = −3c1/2 + c2/2 + c3. This is the same as the reduced cost of the variable x3.
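The numbers in this example can be checked mechanically. In the sketch below (our own check, not from the text; B^{-1} is entered by hand for this 2×2 basis, and the sample cost vector is an arbitrary choice), we verify the basic variable values, the third basic direction, and the reduced cost formula.

```python
from fractions import Fraction

F = Fraction
# Data of Example 3.1: constraint matrix A and right-hand side b.
A = [[1, 1, 1, 1],
     [2, 0, 3, 4]]
b = [2, 2]

# Basis B = [A1 A2] = [[1, 1], [2, 0]]; its inverse, computed by hand.
Binv = [[F(0), F(1, 2)],
        [F(1), F(-1, 2)]]

# Basic variables: x_B = B^{-1} b, which should give x1 = 1, x2 = 1.
xB = [sum(Binv[i][j] * b[j] for j in range(2)) for i in range(2)]

# Third basic direction: d_B = -B^{-1} A3, with A3 = (1, 3).
A3 = [A[0][2], A[1][2]]
dB = [-sum(Binv[i][j] * A3[j] for j in range(2)) for i in range(2)]

# Reduced cost of x3 is c3 - cB' B^{-1} A3 = -3c1/2 + c2/2 + c3;
# check the identity for one arbitrary choice of costs.
c = [F(2), F(4), F(5), F(0)]
red3 = c[2] - sum(c[i] * sum(Binv[i][j] * A3[j] for j in range(2))
                  for i in range(2))
assert red3 == -F(3, 2) * c[0] + c[1] / 2 + c[2]
```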

Consider now Definition 3.2 for the case of a basic variable. Since B is the matrix [A_B(1) · · · A_B(m)], we have B^{−1}[A_B(1) · · · A_B(m)] = I, where



I is the m × m identity matrix. In particular, B^{-1}A_B(i) is the ith column of the identity matrix, which is the ith unit vector e_i. Therefore, for every basic variable x_B(i), we have

    c̄_B(i) = c_B(i) − c'_B B^{-1}A_B(i) = c_B(i) − c'_B e_i = c_B(i) − c_B(i) = 0,

and we see that the reduced cost of every basic variable is zero.
Our next result provides us with optimality conditions. Given our interpretation of the reduced costs as rates of cost change along certain directions, this result is intuitive.

Theorem 3.1 Consider a basic feasible solution x associated with a basis matrix B, and let c̄ be the corresponding vector of reduced costs.
(a) If c̄ ≥ 0, then x is optimal.
(b) If x is optimal and nondegenerate, then c̄ ≥ 0.

Proof. (a) We assume that c̄ ≥ 0, we let y be an arbitrary feasible solution, and we define d = y − x. Feasibility implies that Ax = Ay = b and, therefore, Ad = 0. The latter equality can be rewritten in the form

    B d_B + Σ_{i∈N} A_i d_i = 0,

where N is the set of indices corresponding to the nonbasic variables under the given basis. Since B is invertible, we obtain

    d_B = − Σ_{i∈N} B^{-1}A_i d_i,

and

    c'd = c'_B d_B + Σ_{i∈N} c_i d_i = Σ_{i∈N} (c_i − c'_B B^{-1}A_i) d_i = Σ_{i∈N} c̄_i d_i.

For any nonbasic index i ∈ N, we must have x_i = 0 and, since y is feasible, y_i ≥ 0. Thus, d_i ≥ 0 and c̄_i d_i ≥ 0, for all i ∈ N. We conclude that c'(y − x) = c'd ≥ 0, and since y was an arbitrary feasible solution, x is optimal.
(b) Suppose that x is a nondegenerate basic feasible solution and that c̄_j < 0 for some j. Since the reduced cost of a basic variable is always zero, x_j must be a nonbasic variable and c̄_j is the rate of cost change along the jth basic direction. Since x is nondegenerate, the jth basic direction is a feasible direction of cost decrease, as discussed earlier. By moving in that direction, we obtain feasible solutions whose cost is less than that of x, and x is not optimal.


Note that Theorem 3.1 allows the possibility that x is a (degenerate) optimal basic feasible solution, but that c̄_j < 0 for some nonbasic index j. There is an analog of Theorem 3.1 that provides conditions under which a basic feasible solution x is a unique optimal solution; see the exercises. A related view of the optimality conditions is also developed in the exercises.

According to Theorem 3.1, in order to decide whether a nondegenerate basic feasible solution is optimal, we need only check whether all reduced costs are nonnegative, which is the same as examining the basic directions. If x is a degenerate basic feasible solution, an equally simple computational test for determining whether x is optimal is not available (see the exercises). Fortunately, the simplex method, as developed in subsequent sections, manages to get around this difficulty in an effective manner.

Note that in order to use Theorem 3.1 and assert that a certain basic solution is optimal, we need to satisfy two conditions: feasibility, and nonnegativity of the reduced costs. This leads us to the following definition.

Definition 3.3 A basis matrix B is said to be optimal if:
(a) B^{-1}b ≥ 0, and
(b) c̄' = c' − c'_B B^{-1}A ≥ 0'.

Clearly, if an optimal basis is found, the corresponding basic solution is feasible, satisfies the optimality conditions, and is therefore optimal. On the other hand, in the degenerate case, having an optimal basic feasible solution does not necessarily mean that the reduced costs are nonnegative.
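Definition 3.3 translates directly into a small computational test. The sketch below is our own illustration (NumPy assumed; the function name is invented), checked on the two-constraint example from this chapter with the cost vector c = (2, 0, 0, 0) used in Example 3.2:

```python
import numpy as np

def is_optimal_basis(A, b, c, basic):
    """Definition 3.3: the basis is optimal if B^{-1}b >= 0 (feasibility)
    and every reduced cost c_j - c_B' B^{-1} A_j is nonnegative."""
    B = A[:, basic]
    x_B = np.linalg.solve(B, b)                 # B^{-1} b
    p = np.linalg.solve(B.T, c[basic])          # simplex multipliers p' = c_B' B^{-1}
    reduced = c - A.T @ p                       # c' - c_B' B^{-1} A
    return bool(np.all(x_B >= -1e-9) and np.all(reduced >= -1e-9))

A = np.array([[1.0, 1.0, 1.0, 1.0],
              [2.0, 0.0, 3.0, 4.0]])
b = np.array([2.0, 2.0])
c = np.array([2.0, 0.0, 0.0, 0.0])
print(is_optimal_basis(A, b, c, [0, 1]))    # -> False (reduced cost of x3 is -3)
print(is_optimal_basis(A, b, c, [1, 2]))    # -> True
```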

3.2 Development of the simplex method

We will now complete the development of the simplex method. Our main task is to work out the details of how to move to a better basic feasible solution, whenever a profitable basic direction is discovered.

Let us assume that every basic feasible solution is nondegenerate. This assumption will remain in effect until it is explicitly relaxed later in this section. Suppose that we are at a basic feasible solution x and that we have computed the reduced costs c̄_j of the nonbasic variables. If all of them are nonnegative, Theorem 3.1 shows that we have an optimal solution, and we stop. If, on the other hand, the reduced cost c̄_j of a nonbasic variable x_j is negative, the jth basic direction d is a feasible direction of cost decrease. [This is the direction obtained by letting d_j = 1, d_i = 0 for every nonbasic index i ≠ j, and d_B = −B^{-1}A_j.] While moving along this direction, the nonbasic variable x_j becomes positive and all other nonbasic


variables remain at zero. We describe this situation by saying that x_j (or A_j) enters or is brought into the basis.

Once we start moving away from x along the direction d, we are tracing points of the form x + θd, where θ ≥ 0. Since costs decrease along the direction d, it is desirable to move as far as possible. This takes us to the point x + θ*d, where

    θ* = max{θ ≥ 0 | x + θd ∈ P}.

The resulting cost change is θ*c'd, which is the same as θ*c̄_j.
We now derive a formula for θ*. Given that Ad = 0, we have A(x + θd) = Ax = b for all θ, and the equality constraints will never be violated. Thus, x + θd can become infeasible only if one of its components becomes negative. We distinguish two cases:
(a) If d ≥ 0, then x + θd ≥ 0 for all θ ≥ 0, the vector x + θd never becomes infeasible, and we let θ* = ∞.
(b) If d_i < 0 for some i, the constraint x_i + θd_i ≥ 0 becomes θ ≤ −x_i/d_i. This constraint on θ must be satisfied for every i with d_i < 0. Thus, the largest possible value of θ is

    θ* = min_{{i | d_i < 0}} (−x_i / d_i).

Recall that if x_i is a nonbasic variable, then either x_i is the entering variable and d_i = 1, or else d_i = 0. In either case, d_i is nonnegative. Thus, we only need to consider the basic variables, and we have the equivalent formula

    θ* = min_{{i=1,...,m | d_B(i) < 0}} (−x_B(i) / d_B(i)).        (3.3)

Note that θ* > 0, because x_B(i) > 0 for all i, as a consequence of nondegeneracy.
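Eq. (3.3) is a one-line ratio test in code. A minimal sketch of ours (not the book's; NumPy assumed), checked on the data of Example 3.2:

```python
import numpy as np

def step_size(x, d, basic):
    """Ratio test of Eq. (3.3): theta* = min over {i : d_B(i) < 0} of
    -x_B(i)/d_B(i); if no basic component of d is negative, theta* = inf."""
    ratios = [-x[i] / d[i] for i in basic if d[i] < 0]
    return min(ratios) if ratios else np.inf

# Data of Example 3.2: x = (1, 1, 0, 0), d = (-3/2, 1/2, 1, 0), basic = {x1, x2}.
x = np.array([1.0, 1.0, 0.0, 0.0])
d = np.array([-1.5, 0.5, 1.0, 0.0])
theta_star = step_size(x, d, [0, 1])
print(theta_star)    # -> 0.6666666666666666  (= 2/3)
```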

Example 3.2 This is a continuation of Example 3.1 from the previous section, dealing with the linear programming problem

    minimize    c1x1 + c2x2 + c3x3 + c4x4
    subject to   x1 +  x2 +  x3 +  x4  =  2
                2x1       + 3x3 + 4x4  =  2
                 x1, x2, x3, x4 ≥ 0.

Let us again consider the basic feasible solution x = (1, 1, 0, 0) and recall that the reduced cost c̄3 of the nonbasic variable x3 was found to be −3c1/2 + c2/2 + c3. Suppose that c = (2, 0, 0, 0), in which case, we have c̄3 = −3. Since c̄3 is negative, we form the corresponding basic direction, which is d = (−3/2, 1/2, 1, 0), and consider vectors of the form x + θd, with θ ≥ 0. As θ increases, the only component of x that decreases is the first one (because d1 < 0). The largest possible value


of θ is given by θ* = −x1/d1 = 2/3. This takes us to the point y = x + 2d/3 = (0, 4/3, 2/3, 0). Note that the columns A2 and A3 corresponding to the nonzero variables at the new vector y are (1, 0) and (1, 3), respectively, and are linearly independent. Therefore, they form a basis and the vector y is a new basic feasible solution. In particular, the variable x3 has entered the basis and the variable x1 has exited the basis.
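The numbers in this example can be checked mechanically (an illustrative sketch of ours, NumPy assumed):

```python
import numpy as np

# x = (1, 1, 0, 0), d = (-3/2, 1/2, 1, 0), theta* = 2/3, as in the example.
x = np.array([1.0, 1.0, 0.0, 0.0])
d = np.array([-1.5, 0.5, 1.0, 0.0])
y = x + (2.0 / 3.0) * d
print(y)    # approximately (0, 4/3, 2/3, 0)

# The columns A2 = (1, 0) and A3 = (1, 3) of the new basis are independent:
new_B = np.array([[1.0, 1.0],
                  [0.0, 3.0]])
print(np.linalg.matrix_rank(new_B))    # -> 2
```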

Once θ* is chosen, and assuming it is finite, we move to the new feasible solution y = x + θ*d. Since x_j = 0 and d_j = 1, we have y_j = θ* > 0. Let ℓ be a minimizing index in Eq. (3.3), that is,

    −x_B(ℓ)/d_B(ℓ) = min_{{i=1,...,m | d_B(i) < 0}} (−x_B(i)/d_B(i)) = θ*;

in particular,

    d_B(ℓ) < 0,

and

    x_B(ℓ) + θ*d_B(ℓ) = 0.

We observe that the basic variable x_B(ℓ) has become zero, whereas the nonbasic variable x_j has now become positive, which suggests that x_j should replace x_B(ℓ) in the basis. Accordingly, we take the old basis matrix B and replace A_B(ℓ) with A_j, thus obtaining the matrix

    B̄ = [A_B(1) · · · A_B(ℓ−1)  A_j  A_B(ℓ+1) · · · A_B(m)].        (3.4)

Equivalently, we are replacing the set {B(1), . . . , B(m)} of basic indices by a new set {B̄(1), . . . , B̄(m)} of indices given by

    B̄(i) = B(i),  i ≠ ℓ,        B̄(ℓ) = j.        (3.5)

Theorem 3.2
(a) The columns A_B(i), i ≠ ℓ, and A_j are linearly independent and, therefore, B̄ is a basis matrix.
(b) The vector y = x + θ*d is a basic feasible solution associated with the basis matrix B̄.

Proof. (a) If the vectors A_B̄(i), i = 1, . . . , m, are linearly dependent, then there exist coefficients λ1, . . . , λm, not all of them zero, such that

    Σ_{i=1}^{m} λ_i A_B̄(i) = 0,


4. If some component of u is positive, let

    θ* = min_{{i | u_i > 0}} (x_B(i) / u_i).

5. Let ℓ be such that θ* = x_B(ℓ)/u_ℓ. Form a new basis by replacing A_B(ℓ) with A_j. If y is the new basic feasible solution, the values of the new basic variables are y_j = θ* and y_B(i) = x_B(i) − θ*u_i, i ≠ ℓ.

The simplex method is initialized with an arbitrary basic feasible solution, which, for feasible standard form problems, is guaranteed to exist. The following theorem states that, in the nondegenerate case, the simplex method works correctly and terminates after a finite number of iterations.

Theorem 3.3 Assume that the feasible set is nonempty and that every basic feasible solution is nondegenerate. Then, the simplex method terminates after a finite number of iterations. At termination, there are the following two possibilities:
(a) We have an optimal basis B and an associated basic feasible solution which is optimal.
(b) We have found a vector d satisfying Ad = 0, d ≥ 0, and c'd < 0, and the optimal cost is −∞.

Proof. If the algorithm terminates due to the stopping criterion in Step 2, then the optimality conditions in Theorem 3.1 have been met, B is an optimal basis, and the current basic feasible solution is optimal.
If the algorithm terminates because the criterion in Step 3 has been met, then we are at a basic feasible solution x and we have discovered a nonbasic variable x_j such that c̄_j < 0 and such that the corresponding basic direction d satisfies Ad = 0 and d ≥ 0. In particular, x + θd ∈ P for all θ > 0. Since c'd = c̄_j < 0, by taking θ arbitrarily large, the cost can be made arbitrarily negative, and the optimal cost is −∞.
At each iteration, the algorithm moves by a positive amount θ* along a direction d that satisfies c'd < 0. Therefore, the cost of every successive basic feasible solution visited by the algorithm is strictly less than the cost of the previous one, and no basic feasible solution can be visited twice. Since there is a finite number of basic feasible solutions, the algorithm must eventually terminate.

Theorem 3.3 provides an independent proof of some of the results of Chapter 2 for nondegenerate standard form problems. In particular, it shows that for feasible and nondegenerate problems, either the optimal cost is −∞, or there exists a basic feasible solution which is optimal (cf. Theorem 2.8 in Section 2.6). While the proof given here might appear more elementary, its extension to the degenerate case is not as simple.

The simplex method for degenerate problems

We have been working so far under the assumption that all basic feasible solutions are nondegenerate. Suppose now that the exact same algorithm is used in the presence of degeneracy. Then, the following new possibilities may be encountered in the course of the algorithm.
(a) If the current basic feasible solution x is degenerate, θ* can be equal to zero, in which case, the new basic feasible solution y is the same as x. This happens if some basic variable x_B(ℓ) is equal to zero and the corresponding component d_B(ℓ) of the direction vector d is negative. Nevertheless, we can still define a new basis B̄, by replacing A_B(ℓ) with A_j [cf. Eqs. (3.4)-(3.5)], and Theorem 3.2 is still valid.
(b) Even if θ* is positive, it may happen that more than one of the original basic variables becomes zero at the new point x + θ*d. Since only one of them exits the basis, the others remain in the basis at zero level, and the new basic feasible solution is degenerate.
Basis changes while staying at the same basic feasible solution are not in vain. As illustrated in Figure 3.3, a sequence of such basis changes may lead to the eventual discovery of a cost reducing feasible direction. On the other hand, a sequence of basis changes might lead back to the initial basis, in which case the algorithm may loop indefinitely. This undesirable phenomenon is called cycling. An example of cycling is given in Section 3.3, after we develop some bookkeeping tools for carrying out the mechanics of the algorithm. It is sometimes maintained that cycling is an exceptionally rare phenomenon. However, for many highly structured linear programming problems, most basic feasible solutions are degenerate, and cycling is a real possibility. Cycling can be avoided by judiciously choosing the variables that will enter or exit the basis (see Section 3.4). We now discuss the freedom available in this respect.

Pivot selection

The simplex algorithm, as we described it, has certain degrees of freedom: in Step 2, we are free to choose any j whose reduced cost c̄_j is negative; also, in Step 4, there may be several indices ℓ that attain the minimum in the definition of θ*, and we are free to choose any one of them. Rules for making such choices are called pivoting rules.
Regarding the choice of the entering column, the following rules are some natural candidates:


used, such as the smallest subscript rule, which chooses the smallest j for which c̄_j is negative. Under this rule, once a negative reduced cost is discovered, there is no reason to compute the remaining reduced costs. Other criteria that have been found to improve the overall running time are the Devex rule (Harris, 1973) and the steepest edge rule (Goldfarb and Reid, 1977). Finally, there are methods based on candidate lists, whereby one examines the reduced costs of nonbasic variables by picking them one at a time from a prioritized list. There are different ways of maintaining such prioritized lists, depending on the rule used for adding, removing, or reordering elements of the list.
Regarding the choice of the exiting column, the simplest option is again the smallest subscript rule: out of all variables eligible to exit the basis, choose one with the smallest subscript. It turns out that by following the smallest subscript rule for both the entering and the exiting column, cycling can be avoided (cf. Section 3.4).

3.3 Implementations of the simplex method

In this section, we discuss some ways of carrying out the mechanics of the simplex method. It should be clear from the statement of the algorithm that the vectors B^{-1}A_j play a key role. If these vectors are available, the reduced costs, the direction of motion, and the stepsize are easily computed. Thus, the main difference between alternative implementations lies in the way that the vectors B^{-1}A_j are computed and on the amount of related information that is carried from one iteration to the next.
When comparing different implementations, it is important to keep the following facts in mind (cf. Section 1.6). If B is a given m × m matrix and b ∈ R^m is a given vector, computing the inverse of B or solving a linear system of the form Bx = b takes O(m^3) arithmetic operations. Computing a matrix-vector product Bb takes O(m^2) operations. Finally, computing an inner product p'b of two m-dimensional vectors takes O(m) arithmetic operations.

Naive implementation

We start by describing the most straightforward implementation, in which no auxiliary information is carried from one iteration to the next. At the beginning of a typical iteration, we have the indices B(1), . . . , B(m) of the current basic variables. We form the basis matrix B and compute p' = c'_B B^{-1}, by solving the linear system p'B = c'_B for the unknown vector p. (This vector p is called the vector of simplex multipliers associated with the basis B.) The reduced cost c̄_j of any variable x_j is then obtained according to the formula

    c̄_j = c_j − p'A_j.


Depending on the pivoting rule employed, we may have to compute all of the reduced costs, or we may compute them one at a time until a variable with a negative reduced cost is encountered. Once a column A_j is selected to enter the basis, we solve the linear system Bu = A_j in order to determine the vector u = B^{-1}A_j. At this point, we can form the direction along which we will be moving away from the current basic feasible solution. We finally determine θ* and the variable that will exit the basis, and construct the new basic feasible solution.

We note that we need O(m^3) arithmetic operations to solve the systems p'B = c'_B and Bu = A_j. In addition, computing the reduced costs of all variables requires O(mn) arithmetic operations, because we need to form the inner product of the vector p with each one of the nonbasic columns A_j. Thus, the total computational effort per iteration is O(m^3 + mn). We will see shortly that alternative implementations require only O(m^2 + mn) arithmetic operations. Therefore, the implementation described here is rather inefficient, in general. On the other hand, for certain problems with a special structure, the linear systems p'B = c'_B and Bu = A_j can be solved very fast, in which case this implementation can be of practical interest. We will revisit this point in Chapter 7, when we apply the simplex method to network flow problems.
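The naive implementation can be sketched compactly. The code below is our own illustration, not the book's: it solves the two linear systems from scratch at every iteration and applies the ratio test, using the data of Example 3.5 from later in this section; the 1e-9 tolerances are a floating-point implementation detail.

```python
import numpy as np

def naive_simplex_iteration(A, b, c, basic):
    """One iteration of the naive implementation: form B, solve p'B = c_B'
    and Bu = A_j from scratch (O(m^3) work), then apply the ratio test.
    Returns a status string and the (possibly updated) basic index list."""
    B = A[:, basic]
    x_B = np.linalg.solve(B, b)                 # values of the basic variables
    p = np.linalg.solve(B.T, c[basic])          # simplex multipliers, p' = c_B' B^{-1}
    reduced = c - A.T @ p                       # reduced costs c_j - p'A_j
    negative = [j for j in range(A.shape[1]) if reduced[j] < -1e-9]
    if not negative:
        return "optimal", basic
    j = negative[0]                             # entering variable (smallest subscript)
    u = np.linalg.solve(B, A[:, j])             # u = B^{-1} A_j
    if np.all(u <= 1e-9):
        return "unbounded", basic
    ratios = [(x_B[i] / u[i], i) for i in range(len(basic)) if u[i] > 1e-9]
    theta_star, ell = min(ratios)
    new_basic = list(basic)
    new_basic[ell] = j                          # A_j replaces A_B(ell)
    return "step", new_basic

A = np.array([[1.0, 2, 2, 1, 0, 0],
              [2.0, 1, 2, 0, 1, 0],
              [2.0, 2, 1, 0, 0, 1]])
b = np.array([20.0, 20.0, 20.0])
c = np.array([-10.0, -12.0, -12.0, 0.0, 0.0, 0.0])
status, new_basic = naive_simplex_iteration(A, b, c, [3, 4, 5])
print(status, new_basic)    # -> step [3, 0, 5]   (x1 enters, x5 exits)
```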

Revised simplex method

Much of the computational burden in the naive implementation is due to the need for solving two linear systems of equations. In an alternative implementation, the matrix B^{-1} is made available at the beginning of each iteration, and the vectors c'_B B^{-1} and B^{-1}A_j are computed by a matrix-vector multiplication. For this approach to be practical, we need an efficient method for updating the matrix B^{-1} each time that we effect a change of basis. This is discussed next.
Let

    B = [A_B(1) · · · A_B(m)]

be the basis matrix at the beginning of an iteration and let

    B̄ = [A_B(1) · · · A_B(ℓ−1)  A_j  A_B(ℓ+1) · · · A_B(m)]

be the basis matrix at the beginning of the next iteration. These two basis matrices have the same columns except that the ℓth column A_B(ℓ) (the one that exits the basis) has been replaced by A_j. It is then reasonable to expect that B^{-1} contains information that can be exploited in the computation of B̄^{-1}. After we develop some needed tools and terminology, we will see that this is indeed the case. An alternative explanation and line of development is outlined in the exercises.


Definition 3.4 Given a matrix, not necessarily square, the operation of adding a constant multiple of one row to the same row or to another row is called an elementary row operation.

The example that follows indicates that performing an elementary row operation on a matrix C is equivalent to forming the matrix QC, where Q is a suitably constructed square matrix.

Example 3.3 Let

    Q = [ 1  0  2 ]        C = [ 1  2 ]
        [ 0  1  0 ],           [ 3  4 ]
        [ 0  0  1 ]            [ 5  6 ].

Then,

    QC = [ 11  14 ]
         [  3   4 ]
         [  5   6 ].

In particular, multiplication from the left by the matrix Q has the effect of multiplying the third row of C by two and adding it to the first row.

Generalizing Example 3.3, we see that multiplying the jth row by β and adding it to the ith row (for i ≠ j) is the same as left-multiplying by the matrix Q = I + βD_ij, where D_ij is a matrix with all entries equal to zero, except for the (i, j)th entry, which is equal to one. The determinant of such a matrix Q is equal to 1 and, therefore, Q is invertible.
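This observation is easy to demonstrate numerically (an illustrative sketch of ours; the matrix C below is our own choice, NumPy assumed):

```python
import numpy as np

def row_op_matrix(m, i, j, beta):
    """Q = I + beta * D_ij (with i != j): left-multiplying by Q adds beta
    times row j to row i, and det(Q) = 1, so Q is invertible."""
    Q = np.eye(m)
    Q[i, j] += beta
    return Q

C = np.array([[1.0, 2.0],
              [3.0, 4.0],
              [5.0, 6.0]])
Q = row_op_matrix(3, 0, 2, 2.0)     # add 2 x (third row) to the first row
print(Q @ C)                        # first row becomes (11, 14)
print(round(np.linalg.det(Q)))      # -> 1
```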

Suppose now that we apply a sequence of K elementary row operations and that the kth such operation corresponds to left-multiplication by a certain invertible matrix Q_k. Then, the sequence of these elementary row operations is the same as left-multiplication by the invertible matrix Q_K Q_{K−1} · · · Q_2 Q_1. We conclude that performing a sequence of elementary row operations on a given matrix is equivalent to left-multiplying that matrix by a certain invertible matrix.

Since B^{-1}B = I, we see that B^{-1}A_B(i) is the ith unit vector e_i. Using this observation, we have

    B^{-1}B̄ = [ e_1 · · · e_{ℓ−1}  u  e_{ℓ+1} · · · e_m ],


where u = B^{-1}A_j. Let us apply a sequence of elementary row operations that will change the above matrix to the identity matrix. In particular, consider the following sequence of elementary row operations:
(a) For each i ≠ ℓ, we add the ℓth row times −u_i/u_ℓ to the ith row. (Recall that u_ℓ > 0.) This replaces u_i by zero.
(b) We divide the ℓth row by u_ℓ. This replaces u_ℓ by one.
In words, we are adding to each row a multiple of the ℓth row in order to replace the ℓth column u by the ℓth unit vector e_ℓ. This sequence of elementary row operations is equivalent to left-multiplying B^{-1}B̄ by a certain invertible matrix Q. Since the result is the identity, we have QB^{-1}B̄ = I, which yields QB^{-1} = B̄^{-1}. The last equation shows that if we apply the same sequence of row operations to the matrix B^{-1} (equivalently, left-multiply by Q), we obtain B̄^{-1}. We conclude that all it takes to generate B̄^{-1} is to start with B^{-1} and apply the sequence of elementary row operations described above.
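The update just described takes only O(m^2) work. A sketch of ours (NumPy assumed) that checks the row-operation update against a direct inversion on illustrative data:

```python
import numpy as np

def update_inverse(B_inv, u, ell):
    """Form Bbar^{-1} from B^{-1} by the row operations in the text:
    for each i != ell, add -(u_i / u_ell) times row ell to row i,
    then divide row ell by u_ell.  Here u = B^{-1} A_j."""
    out = np.array(B_inv, dtype=float)
    for i in range(out.shape[0]):
        if i != ell:
            out[i] -= (u[i] / u[ell]) * out[ell]
    out[ell] /= u[ell]
    return out

B = np.array([[1.0, 1.0],
              [2.0, 0.0]])
a_j = np.array([1.0, 3.0])          # entering column A_j
u = np.linalg.solve(B, a_j)         # pivot column, u = (1.5, -0.5)
Bbar = B.copy()
Bbar[:, 0] = a_j                    # A_j replaces the exiting column, ell = 0
print(np.allclose(update_inverse(np.linalg.inv(B), u, 0), np.linalg.inv(Bbar)))  # -> True
```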

Example 3.4 Let

    B^{-1} = [  1  2  3 ]
             [ -2  3  1 ]
             [  2  2 -2 ],

and suppose that ℓ = 3 and u = (−4, 2, 2). Thus, our objective is to transform the column u into the unit vector e_3 = (0, 0, 1). We multiply the third row by 2 and add it to the first row. We subtract the third row from the second row. Finally, we divide the third row by 2. We obtain

    B̄^{-1} = [  5  6 -1 ]
              [ -4  1  3 ]
              [  1  1 -1 ].

When the matrix B^{-1} is updated in the manner we have described, we obtain an implementation of the simplex method known as the revised simplex method, which we summarize below.

An iteration of the revised simplex method
1. In a typical iteration, we start with a basis consisting of the basic columns A_B(1), . . . , A_B(m), an associated basic feasible solution x, and the inverse B^{-1} of the basis matrix.
2. Compute the row vector p' = c'_B B^{-1} and then compute the reduced costs c̄_j = c_j − p'A_j. If they are all nonnegative, the current basic feasible solution is optimal, and the algorithm terminates; else, choose some j for which c̄_j < 0.
3. Compute u = B^{-1}A_j. If no component of u is positive, the optimal cost is −∞, and the algorithm terminates.


4. If some component of u is positive, let

    θ* = min_{{i | u_i > 0}} (x_B(i) / u_i).

5. Let ℓ be such that θ* = x_B(ℓ)/u_ℓ. Form a new basis by replacing A_B(ℓ) with A_j. If y is the new basic feasible solution, the values of the new basic variables are y_j = θ* and y_B(i) = x_B(i) − θ*u_i, i ≠ ℓ.
6. Form the m × (m + 1) matrix [B^{-1} | u]. Add to each one of its rows a multiple of the ℓth row to make the last column equal to the unit vector e_ℓ. The first m columns of the result form the matrix B̄^{-1}.
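Putting the pieces together, a complete revised simplex routine might look as follows. This is our own illustrative sketch, not the book's code: it uses the smallest-subscript rule for both the entering and the exiting variable, assumes the initial basis is feasible, and is tried on the problem of Example 3.5 below.

```python
import numpy as np

def revised_simplex(A, b, c, basic):
    """Sketch of the revised simplex method: keep B^{-1} explicitly and
    update it with elementary row operations after each change of basis."""
    m, n = A.shape
    basic = list(basic)
    B_inv = np.linalg.inv(A[:, basic])
    while True:
        x_B = B_inv @ b
        p = c[basic] @ B_inv                    # p' = c_B' B^{-1}
        reduced = c - p @ A                     # reduced costs c_j - p'A_j
        negative = [j for j in range(n) if reduced[j] < -1e-9]
        if not negative:                        # all reduced costs >= 0: optimal
            x = np.zeros(n)
            x[basic] = x_B
            return x, float(c @ x)
        j = min(negative)                       # entering variable
        u = B_inv @ A[:, j]                     # pivot column
        if np.all(u <= 1e-9):
            return None, -np.inf                # optimal cost is -infinity
        ratios = [(x_B[i] / u[i], basic[i], i) for i in range(m) if u[i] > 1e-9]
        theta, _, ell = min(ratios)             # ties: smallest basic subscript
        for i in range(m):                      # row operations: column u -> e_ell
            if i != ell:
                B_inv[i] -= (u[i] / u[ell]) * B_inv[ell]
        B_inv[ell] /= u[ell]
        basic[ell] = j

A = np.array([[1.0, 2, 2, 1, 0, 0],
              [2.0, 1, 2, 0, 1, 0],
              [2.0, 2, 1, 0, 0, 1]])
b = np.array([20.0, 20.0, 20.0])
c = np.array([-10.0, -12.0, -12.0, 0.0, 0.0, 0.0])
x, cost = revised_simplex(A, b, c, [3, 4, 5])
print(np.round(x[:3], 6), round(cost, 6))       # -> [4. 4. 4.] -136.0
```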

The full tableau implementation

We finally describe the implementation of the simplex method in terms of the so-called full tableau. Here, instead of maintaining and updating the matrix B^{-1}, we maintain and update the m × (n + 1) matrix

    [ B^{-1}b | B^{-1}A ],

with columns B^{-1}b and B^{-1}A_1, . . . , B^{-1}A_n. This matrix is called the simplex tableau. Note that the column B^{-1}b, called the zeroth column, contains the values of the basic variables. The column B^{-1}A_i is called the ith column of the tableau. The column u = B^{-1}A_j corresponding to the variable that enters the basis is called the pivot column. If the ℓth basic variable exits the basis, the ℓth row of the tableau is called the pivot row. Finally, the element belonging to both the pivot row and the pivot column is called the pivot element. Note that the pivot element is u_ℓ and is always positive (unless u ≤ 0, in which case the algorithm has met the termination condition in Step 3).
The information contained in the rows of the tableau admits the following interpretation. The equality constraints are initially given to us in the form b = Ax. Given the current basis matrix B, these equality constraints can also be expressed in the equivalent form

    B^{-1}b = B^{-1}Ax,

which is precisely the information in the tableau. In other words, the rows of the tableau provide us with the coefficients of the equality constraints B^{-1}b = B^{-1}Ax.

At the end of each iteration, we need to update the tableau B^{-1}[b | A] and compute B̄^{-1}[b | A]. This can be accomplished by left-multiplying the


simplex tableau with a matrix Q satisfying QB^{-1} = B̄^{-1}. As explained earlier, this is the same as performing those elementary row operations that turn B^{-1} into B̄^{-1}; that is, we add to each row a multiple of the pivot row to

set all entries of the pivot column to zero, with the exception of the pivot element, which is set to one.
Regarding the determination of the exiting column A_B(ℓ) and the stepsize θ*, Steps 4 and 5 in the summary of the simplex method amount to the following: x_B(i)/u_i is the ratio of the ith entry in the zeroth column of the tableau to the ith entry in the pivot column of the tableau. We only consider those i for which u_i is positive. The smallest ratio is equal to θ* and determines ℓ.
It is customary to augment the simplex tableau by including a top row, to be referred to as the zeroth row. The entry at the top left corner contains the value −c'_B x_B, which is the negative of the current cost. (The reason for the minus sign is that it allows for a simple update rule, as will be seen shortly.) The rest of the zeroth row is the row vector of reduced costs, that is, the vector c̄' = c' − c'_B B^{-1}A. Thus, the structure of the tableau is:

    −c'_B B^{-1}b  |  c' − c'_B B^{-1}A
    ---------------+--------------------
        B^{-1}b    |      B^{-1}A

or, in more detail,

    −c'_B x_B  |  c̄_1  · · ·  c̄_n
    -----------+----------------------
     x_B(1)    |
       ...     |  B^{-1}A_1 · · · B^{-1}A_n
     x_B(m)    |

The rule for updating the zeroth row turns out to be identical to the rule used for the other rows of the tableau: add a multiple of the pivot row to the zeroth row to set the reduced cost of the entering variable to zero. We will now verify that this update rule produces the correct results for the zeroth row.

At the beginning of a typical iteration, the zeroth row is of the form

    [0 | c'] − g'[b | A],

where g' = c'_B B^{-1}. Hence, the zeroth row is equal to [0 | c'] plus a linear combination of the rows of [b | A]. Let column j be the pivot column, and row ℓ be the pivot row. Note that the pivot row is of the form h'[b | A], where the vector h' is the ℓth row of B^{-1}. Hence, after a multiple of the


pivot row is added to the zeroth row, that row is again equal to [0 | c'] plus a (different) linear combination of the rows of [b | A], and is of the form

    [0 | c'] − p'[b | A],

for some vector p. Recall that our update rule is such that the pivot column entry of the zeroth row becomes zero, that is,

    c_j − p'A_j = 0.

Consider now the B(i)th column, for i ≠ ℓ. This is a column corresponding to a basic variable that stays in the basis. The zeroth row entry of that column is zero before the change of basis, since it is the reduced cost of a basic variable. Because B^{-1}A_B(i) is the ith unit vector and i ≠ ℓ, the entry in the pivot row for that column is also equal to zero. Hence, adding a multiple of the pivot row to the zeroth row of the tableau does not affect the zeroth row entry of that column, which is left at zero. We conclude that the vector p satisfies c_B̄(i) − p'A_B̄(i) = 0 for every column A_B̄(i) in the new basis. This implies that c'_B̄ − p'B̄ = 0' and p' = c'_B̄ B̄^{-1}. Hence, with our update rule, the updated zeroth row of the tableau is equal to

    [0 | c'] − c'_B̄ B̄^{-1}[b | A],

as desired.
We can now summarize the mechanics of the full tableau implementation.

An iteration of the full tableau implementation
1. A typical iteration starts with the tableau associated with a basis matrix B and the corresponding basic feasible solution x.
2. Examine the reduced costs in the zeroth row of the tableau. If they are all nonnegative, the current basic feasible solution is optimal, and the algorithm terminates; else, choose some j for which c̄_j < 0.
3. Consider the vector u = B^{-1}A_j, which is the jth column (the pivot column) of the tableau. If no component of u is positive, the optimal cost is −∞, and the algorithm terminates.
4. For each i for which u_i is positive, compute the ratio x_B(i)/u_i. Let ℓ be the index of a row that corresponds to the smallest ratio. The column A_B(ℓ) exits the basis and the column A_j enters the basis.
5. Add to each row of the tableau a constant multiple of the ℓth row (the pivot row) so that u_ℓ (the pivot element) becomes one and all other entries of the pivot column become zero.
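Step 5 above is a single row-operation sweep over the tableau. A minimal sketch of ours (NumPy assumed), where T holds the zeroth row on top and the zeroth column on the left:

```python
import numpy as np

def tableau_pivot(T, row, col):
    """Step 5 of the iteration: divide the pivot row by the pivot element,
    then subtract multiples of it from every other row (including the
    zeroth row) so that the pivot column becomes a unit vector."""
    T = np.array(T, dtype=float)
    T[row] /= T[row, col]
    for i in range(T.shape[0]):
        if i != row:
            T[i] -= T[i, col] * T[row]
    return T

# Tiny illustration: pivot on the element 2 (row 1, column 1).
T = np.array([[0.0, -2.0, 0.0],
              [4.0,  2.0, 1.0]])
result = tableau_pivot(T, row=1, col=1)
print(result)   # rows become (4, 0, 1) and (2, 1, 0.5)
```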



Figure 3.4: The feasible set in Example 3.5. Note that we have five extreme points. These are A = (0, 0, 0) with cost 0, B = (0, 0, 10) with cost −120, C = (0, 10, 0) with cost −120, D = (10, 0, 0) with cost −100, and E = (4, 4, 4) with cost −136. In particular, E is the unique optimal solution.

Example 3.5 Consider the problem

    minimize    −10x1 − 12x2 − 12x3
    subject to    x1 +  2x2 +  2x3  ≤  20
                 2x1 +   x2 +  2x3  ≤  20
                 2x1 +  2x2 +   x3  ≤  20
                  x1, x2, x3 ≥ 0.

The feasible set is shown in Figure 3.4.


After introducing slack variables, we obtain the following standard form problem:

    minimize    −10x1 − 12x2 − 12x3
    subject to    x1 +  2x2 +  2x3 + x4            =  20
                 2x1 +   x2 +  2x3      + x5       =  20
                 2x1 +  2x2 +   x3           + x6  =  20
                  x1, . . . , x6 ≥ 0.

Note that x = (0, 0, 0, 20, 20, 20) is a basic feasible solution and can be used to start the algorithm. Let, accordingly, B(1) = 4, B(2) = 5, and B(3) = 6. The


corresponding basis matrix is the identity matrix. To obtain the zeroth row of the initial tableau, we note that c_B = 0 and, therefore, c'_B x_B = 0 and c̄ = c. Hence, we have the following initial tableau:

              x1    x2    x3    x4    x5    x6
         0 | -10   -12   -12     0     0     0
    x4 = 20 |   1     2     2     1     0     0
    x5 = 20 |   2*    1     2     0     1     0
    x6 = 20 |   2     2     1     0     0     1

We note a few conventions in the format of the above tableau: the label x_i on top of the ith column indicates the variable associated with that column. The labels "x_i =" to the left of the tableau tell us which are the basic variables and in what order. For example, the first basic variable x_B(1) is x4, and its value is 20. Similarly, x_B(2) = x5 = 20, and x_B(3) = x6 = 20. Strictly speaking, these labels are not quite necessary. We know that the column in the tableau associated with the first basic variable must be the first unit vector. Once we observe that the column associated with the variable x4 is the first unit vector, it follows that x4 is the first basic variable.

We continue with our example. The reduced cost of x1 is negative and we let that variable enter the basis. The pivot column is u = (1, 2, 2). We form the ratios x_B(i)/u_i, i = 1, 2, 3; the smallest ratio corresponds to i = 2 and i = 3. We break this tie by choosing l = 2. This determines the pivot element, which we indicate by an asterisk. The second basic variable x_B(2), which is x5, exits the basis. The new basis is given by B(1) = 4, B(2) = 1, and B(3) = 6. We multiply the pivot row by 5 and add it to the zeroth row. We multiply the pivot row by 1/2 and subtract it from the first row. We subtract the pivot row from the third row. Finally, we divide the pivot row by 2. This leads us to the new tableau:

            x1     x2     x3     x4     x5     x6
    100 |    0     -7     -2      0      5      0
x4 = 10 |    0     1.5     1*     1    -0.5     0
x1 = 10 |    1     0.5     1      0     0.5     0
x6 =  0 |    0      1     -1      0     -1      1

The corresponding basic feasible solution is x = (10, 0, 0, 10, 0, 0). In terms of the original variables x1, x2, x3, we have moved to point D = (10, 0, 0) in Figure 3.4. Note that this is a degenerate basic feasible solution, because the basic variable x6 is equal to zero. This agrees with Figure 3.4, where we observe that there are four active constraints at point D.

We have mentioned earlier that the rows of the tableau, other than the zeroth row, amount to a representation of the equality constraints B^-1 Ax = B^-1 b, which are equivalent to the original constraints Ax = b. In our current


example, the tableau indicates that the equality constraints can be written in the equivalent form:

10 = 1.5x2 + x3 + x4 - 0.5x5,
10 = x1 + 0.5x2 + x3 + 0.5x5,
 0 = x2 - x3 - x5 + x6.

We now return to the simplex method. With the current tableau, the variables x2 and x3 have negative reduced costs. Let us choose x3 to be the one that enters the basis. The pivot column is u = (1, 1, -1). Since u3 < 0, we only form the ratios x_B(i)/u_i, for i = 1, 2. There is again a tie, which we break by letting l = 1, and the first basic variable, x4, exits the basis. The pivot element is again indicated by an asterisk. After carrying out the necessary elementary row operations, we obtain the following new tableau:

            x1     x2     x3     x4     x5     x6
    120 |    0     -4      0      2      4      0
x3 = 10 |    0     1.5     1      1    -0.5     0
x1 =  0 |    1     -1      0     -1      1      0
x6 = 10 |    0     2.5*    0      1    -1.5     1

In terms of Figure 3.4, we have moved to point B = (0, 0, 10), and the cost has been reduced to -120. At this point, x2 is the only variable with negative reduced cost. We bring x2 into the basis, x6 exits, and the resulting tableau is:

            x1     x2     x3     x4     x5     x6
    136 |    0      0      0     3.6    1.6    1.6
x3 =  4 |    0      0      1     0.4    0.4   -0.6
x1 =  4 |    1      0      0    -0.6    0.4    0.4
x2 =  4 |    0      1      0     0.4   -0.6    0.4

We have now reached point E in Figure 3.4. Its optimality is confirmed by observing that all reduced costs are nonnegative.

In this example, the simplex method took three changes of basis to reach the optimal solution, and it traced the path A-D-B-E in Figure 3.4. With different pivoting rules, a different path would have been traced. Could the simplex method have solved the problem by tracing the path A-D-E, which involves only two edges, with only two iterations? The answer is no. The initial and final bases differ in three columns, and therefore at least three basis changes are required. In particular, if the method were to trace the path A-D-E, there would be a degenerate change of basis at point D (with no edge being traversed), which would again bring the total to three.
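The run above can be reproduced mechanically. The following sketch (not from the text) implements a bare-bones full tableau simplex method with exact rational arithmetic; it uses Bland's smallest-subscript rule throughout, so it may visit the extreme points in a different order than the path traced above, but it must end at the same unique optimum E = (4, 4, 4):

```python
from fractions import Fraction as F

def full_tableau_simplex(c, A, b, basis):
    """Bare-bones full tableau simplex for min c'x s.t. Ax = b, x >= 0.
    Assumes `basis` indexes identity columns of A with zero cost
    (e.g. slack variables), so the initial tableau is simply [b | A]."""
    m, n = len(A), len(c)
    T = [[F(b[i])] + [F(v) for v in A[i]] for i in range(m)]
    red = [F(v) for v in c]              # reduced costs (c_B = 0 initially)
    cost = F(0)
    while any(r < 0 for r in red):
        j = min(k for k in range(n) if red[k] < 0)       # entering variable
        rows = [i for i in range(m) if T[i][1 + j] > 0]
        if not rows:
            raise ValueError("optimal cost is -infinity")
        # ratio test; ties broken by smallest basic-variable subscript
        l = min(rows, key=lambda i: (T[i][0] / T[i][1 + j], basis[i]))
        T[l] = [v / T[l][1 + j] for v in T[l]]           # normalize pivot row
        for i in range(m):
            if i != l and T[i][1 + j] != 0:
                f = T[i][1 + j]
                T[i] = [a - f * p for a, p in zip(T[i], T[l])]
        cost += red[j] * T[l][0]         # cost changes by (reduced cost) * theta*
        red = [r - red[j] * v for r, v in zip(red, T[l][1:])]
        basis[l] = j
    x = [F(0)] * n
    for i in range(m):
        x[basis[i]] = T[i][0]
    return cost, x

# Example 3.5 in standard form; x4, x5, x6 are the slack variables.
c = [-10, -12, -12, 0, 0, 0]
A = [[1, 2, 2, 1, 0, 0],
     [2, 1, 2, 0, 1, 0],
     [2, 2, 1, 0, 0, 1]]
b = [20, 20, 20]
cost, x = full_tableau_simplex(c, A, b, [3, 4, 5])
print(cost, x[:3])          # optimal cost -136 at E = (4, 4, 4)
```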


Example 3.6 This example shows that the simplex method can indeed cycle. We consider a problem described in terms of the following initial tableau:

           x1     x2     x3     x4    x5   x6   x7
    3 |  -3/4     20    -1/2     6     0    0    0
x5 = 0 |   1/4*    -8     -1      9     1    0    0
x6 = 0 |   1/2    -12    -1/2     3     0    1    0
x7 = 1 |    0       0      1      0     0    0    1

We use the foowing pivoting rues

a We seect a nonbasic variabe with the most negative reduced cost j to bethe one that enters the basis

b ut of a basic variables that are eigibe to eit the basis, we seect theone with the smaest subscript

We then obtain the following sequence of tableaux (the pivot element is indicated by an asterisk):

           x1     x2     x3     x4    x5   x6   x7
    3 |    0      -4    -7/2    33     3    0    0
x1 = 0 |    1     -32     -4     36     4    0    0
x6 = 0 |    0       4*    3/2   -15    -2    1    0
x7 = 1 |    0       0      1      0     0    0    1

           x1     x2     x3     x4     x5    x6   x7
    3 |    0       0     -2     18      1     1    0
x1 = 0 |    1       0      8*   -84    -12     8    0
x2 = 0 |    0       1     3/8  -15/4   -1/2   1/4   0
x7 = 1 |    0       0      1      0      0     0    1

           x1     x2     x3     x4      x5     x6   x7
    3 |   1/4     0      0     -3      -2      3    0
x3 = 0 |   1/8     0      1   -21/2    -3/2     1    0
x2 = 0 |  -3/64    1      0    3/16*    1/16  -1/8   0
x7 = 1 |  -1/8     0      0    21/2     3/2   -1     1


           x1     x2     x3     x4    x5    x6    x7
    3 |  -1/2     16      0      0    -1     1     0
x3 = 0 |  -5/2     56      1      0     2*   -6     0
x4 = 0 |  -1/4    16/3     0      1    1/3  -2/3    0
x7 = 1 |   5/2    -56      0      0    -2     6     1

           x1     x2     x3     x4    x5   x6     x7
    3 |  -7/4     44     1/2     0     0   -2      0
x5 = 0 |  -5/4     28     1/2     0     1   -3      0
x4 = 0 |   1/6     -4    -1/6     1     0    1/3*   0
x7 = 1 |    0       0      1      0     0    0      1

           x1     x2     x3     x4    x5   x6   x7
    3 |  -3/4     20    -1/2     6     0    0    0
x5 = 0 |   1/4     -8     -1      9     1    0    0
x6 = 0 |   1/2    -12    -1/2     3     0    1    0
x7 = 1 |    0       0      1      0     0    0    1

After six pivots, we have the same basis and the same tableau that we started with. At each basis change, we had θ* = 0. In particular, for each intermediate tableau, we had the same feasible solution and the same cost. The same sequence of pivots can be repeated over and over, and the simplex method never terminates.
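The six pivots can be verified mechanically. The sketch below (our own, not from the text) encodes the initial tableau with exact fractions and applies pivoting rules (a) and (b); after six pivots, the basis and the entire tableau are exactly what we started with:

```python
from fractions import Fraction as F
import copy

def step(z, T, basis):
    """One simplex pivot: the most negative reduced cost enters (rule a);
    among rows tied in the ratio test, the basic variable with the
    smallest subscript exits (rule b). Returns False at optimality."""
    n = len(z) - 1
    j = min(range(n), key=lambda k: z[1 + k])
    if z[1 + j] >= 0:
        return False
    rows = [i for i in range(len(T)) if T[i][1 + j] > 0]
    theta = min(T[i][0] / T[i][1 + j] for i in rows)
    l = min((i for i in rows if T[i][0] / T[i][1 + j] == theta),
            key=lambda i: basis[i])
    T[l] = [v / T[l][1 + j] for v in T[l]]
    for i in range(len(T)):
        if i != l and T[i][1 + j]:
            f = T[i][1 + j]
            T[i] = [a - f * p for a, p in zip(T[i], T[l])]
    f = z[1 + j]
    z[:] = [a - f * p for a, p in zip(z, T[l])]
    basis[l] = j
    return True

# Initial tableau of Example 3.6: zeroth row z0, rows T0, basis (x5, x6, x7).
z0 = [F(3), F(-3, 4), F(20), F(-1, 2), F(6), F(0), F(0), F(0)]
T0 = [[F(0), F(1, 4), F(-8), F(-1), F(9), F(1), F(0), F(0)],
      [F(0), F(1, 2), F(-12), F(-1, 2), F(3), F(0), F(1), F(0)],
      [F(1), F(0), F(0), F(1), F(0), F(0), F(0), F(1)]]
basis0 = [4, 5, 6]

z, T, basis = copy.deepcopy((z0, T0, basis0))
for _ in range(6):
    step(z, T, basis)
print(basis == basis0 and T == T0 and z == z0)   # True: the tableau cycles
```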

Comparison of the full tableau and the revised simplex methods

Let us pretend that the problem is changed to

minimize    c'x + 0'y
subject to  Ax + Iy = b
            x, y >= 0.

We implement the simplex method on this new problem, except that we never allow any of the components of the vector y to become basic. Then, the simplex method performs basis changes as if the vector y were entirely


absent. Note also that the vector of reduced costs in the augmented problem is

[ c' - c'_B B^-1 A  |  -c'_B B^-1 ].

Thus, the simplex tableau for the augmented problem takes the form

 -c'_B B^-1 b  |  c' - c'_B B^-1 A  |  -c'_B B^-1
---------------+--------------------+-------------
     B^-1 b    |       B^-1 A       |     B^-1

In particular, by following the mechanics of the full tableau method on the above tableau, the inverse basis matrix B^-1 is made available at each iteration. We can now think of the revised simplex method as being essentially the same as the full tableau method applied to the above augmented problem, except that the part of the tableau containing B^-1 A is never formed explicitly; instead, once the entering variable x_j is chosen, the pivot column B^-1 A_j is computed on the fly. Thus, the revised simplex method is just a variant of the full tableau method, with more efficient bookkeeping. If the revised simplex method also updates the zeroth row entries that lie on top of B^-1 (by the usual elementary operations), the simplex multipliers p' = c'_B B^-1 become available, thus eliminating the need for solving the linear system p'B = c'_B at each iteration.

We now discuss the relative merits of the two methods. The full tableau method requires a constant (and small) number of arithmetic operations for updating each entry of the tableau. Thus, the amount of computation per iteration is proportional to the size of the tableau, which is O(mn). The revised simplex method uses similar computations to update B^-1 and c'_B B^-1, and since only O(m^2) entries are updated, the computational requirements per iteration are O(m^2). In addition, the reduced cost of each variable x_j can be computed by forming the inner product p'A_j, which requires O(m) operations. In the worst case, the reduced cost of every variable is computed, for a total of O(mn) computations per iteration. Since m <= n, the worst-case computational effort per iteration is O(mn + m^2) = O(mn), under either implementation. On the other hand, if we consider a pivoting rule that evaluates one reduced cost at a time, until a negative reduced cost is found, a typical iteration of the revised simplex method might require a lot less work. In the best case, if the first reduced cost computed is negative, and the corresponding variable is chosen to enter the basis, the total computational effort is only O(m^2). The conclusion is that the revised simplex method cannot be slower than the full tableau method, and could be much faster during most iterations.

Another important element in favor of the revised simplex method is that memory requirements are reduced from O(mn) to O(m^2). As n is often much larger than m, this effect can be quite significant. It could be counterargued that the memory requirements of the revised simplex method


are also O(mn) because of the need to store the matrix A. However, in most large scale problems that arise in applications, the matrix A is very sparse (has many zero entries) and can be stored compactly. Note that the sparsity of A does not usually help in the storage of the full simplex tableau, because even if A and b are sparse, B^-1 A is not sparse, in general.

We summarize this discussion in the following table:

                    Full tableau    Revised simplex

Memory                 O(mn)            O(m^2)
Worst-case time        O(mn)            O(mn)
Best-case time         O(mn)            O(m^2)

Table 3.1: Comparison of the full tableau method and the revised simplex method. The time requirements refer to a single iteration.
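To make the comparison concrete, here is a sketch (not from the text) of the revised method in which only the m x m matrix B^-1 is maintained: the simplex multipliers p' = c'_B B^-1 are formed, the reduced costs c_j - p'A_j are scanned one at a time with the first negative one entering (the favorable case discussed above), and the pivot column B^-1 A_j is computed on the fly. The starting basis is assumed to consist of identity columns of A with zero cost, as with slack variables:

```python
from fractions import Fraction as F

def revised_simplex(c, A, b, basis):
    """Revised simplex sketch: B^-1 is the only large object stored;
    B^-1 A is never formed as a whole. Assumes the starting basis
    consists of identity columns of A with zero cost."""
    m, n = len(A), len(c)
    B_inv = [[F(i == k) for k in range(m)] for i in range(m)]
    while True:
        x_B = [sum(B_inv[i][k] * b[k] for k in range(m)) for i in range(m)]
        # simplex multipliers p' = c_B' B^-1
        p = [sum(F(c[basis[i]]) * B_inv[i][k] for i in range(m)) for k in range(m)]
        # price out reduced costs c_j - p'A_j one at a time
        j = next((k for k in range(n)
                  if F(c[k]) - sum(p[i] * A[i][k] for i in range(m)) < 0), None)
        if j is None:
            cost = sum(F(c[basis[i]]) * x_B[i] for i in range(m))
            return cost, basis, x_B
        # pivot column computed on the fly: u = B^-1 A_j
        u = [sum(B_inv[i][k] * A[k][j] for k in range(m)) for i in range(m)]
        rows = [i for i in range(m) if u[i] > 0]
        l = min(rows, key=lambda i: (x_B[i] / u[i], basis[i]))
        # elementary row operations turn u into the l-th unit vector
        B_inv[l] = [v / u[l] for v in B_inv[l]]
        for i in range(m):
            if i != l:
                B_inv[i] = [a - u[i] * q for a, q in zip(B_inv[i], B_inv[l])]
        basis[l] = j

# Example 3.5 again: the same optimum is reached with O(m^2) storage.
c = [-10, -12, -12, 0, 0, 0]
A = [[1, 2, 2, 1, 0, 0], [2, 1, 2, 0, 1, 0], [2, 2, 1, 0, 0, 1]]
b = [20, 20, 20]
cost, basis, x_B = revised_simplex(c, A, b, [3, 4, 5])
print(cost)   # -136
```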

Practical performance enhancements

Practical implementations of the simplex method aimed at solving problems of moderate or large size incorporate a number of additional ideas from numerical linear algebra, which we briefly mention.

The first idea is related to reinversion. Recall that at each iteration of the revised simplex method, the inverse basis matrix B^-1 is updated according to certain rules. Each such iteration may introduce roundoff or truncation errors which accumulate and may eventually lead to highly inaccurate results. For this reason, it is customary to recompute the matrix

B^-1 from scratch once in a while. The efficiency of such reinversions can be greatly enhanced by using suitable data structures and certain techniques from computational linear algebra.

Another set of ideas is related to the way that the inverse basis matrix B^-1 is represented. Suppose that a reinversion has just been carried out and B^-1 is available. Subsequent to the current iteration of the revised simplex method, we have the option of generating explicitly and storing the new inverse basis matrix. An alternative that carries the same information is to store a matrix Q such that Q B^-1 equals the new inverse basis matrix. Note that Q basically prescribes which elementary row operations need to be applied to B^-1 in order to produce the new inverse. It is not a full matrix, and can be completely specified in terms of m coefficients: for each row, we need to know what multiple of the pivot row must be added to it.

Suppose now that we wish to solve the system Bu = A_j for u, where A_j is the entering column, as is required by the revised simplex method. With B the new basis matrix, we have u = Q B^-1 A_j, which shows that we can first compute


B^-1 A_j and then premultiply by Q (equivalently, apply a sequence of elementary row operations) to produce u. The same idea can also be used to represent the inverse basis matrix after several simplex iterations, as a product of the initial inverse basis matrix and several sparse matrices like Q.
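A minimal sketch of this product-form idea: each pivot is recorded as a pair (pivot row index, pivot column u), which is exactly the m coefficients needed to describe Q, and B^-1 A_j is obtained by running A_j through the recorded factors in order, without ever forming B^-1 explicitly. The data below are the first pivot of Example 3.5 (row l = 2, column u = (1, 2, 2)); the pair representation of a factor is our own simplification for illustration:

```python
from fractions import Fraction as F

def apply_eta(eta, v):
    """Apply one stored pivot (one 'Q' factor) to a vector: divide
    entry l by col[l], then subtract col[i] times the new entry l
    from every other entry i."""
    l, col = eta
    w = list(v)
    w[l] = w[l] / col[l]
    for i in range(len(w)):
        if i != l:
            w[i] = w[i] - col[i] * w[l]
    return w

etas = []   # the recorded factors, oldest first

def solve(A_col):
    """Compute B^-1 A_j as a product of the stored factors applied
    to A_j; the initial B^-1 is the identity."""
    v = [F(x) for x in A_col]
    for eta in etas:
        v = apply_eta(eta, v)
    return v

# Record the first pivot of Example 3.5: row l = 1 (0-based), u = (1, 2, 2).
etas.append((1, [F(1), F(2), F(2)]))
print(solve([2, 1, 2]))   # B^-1 A_2 = (3/2, 1/2, 1), as in the tableau at D
```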

The last idea we mention is the following. Subsequent to a "reinversion," one does not usually compute B^-1 explicitly; instead, B^-1 is represented in terms of sparse triangular matrices with a special structure.

The methods discussed in this subsection are designed to accomplish two objectives: improve numerical stability (minimize the effect of roundoff errors) and exploit sparsity in the problem data to improve both running time and memory requirements. These methods have a critical effect in practice. Besides having a better chance of producing numerically trustworthy results, they can also speed up considerably the running time of the simplex method. These techniques lie much closer to the subject of numerical linear algebra, as opposed to optimization, and for this reason we do not pursue them in any greater depth.

3.4 Anticycling: lexicography and Bland's rule

In this section, we discuss anticycling rules under which the simplex method is guaranteed to terminate, thus extending Theorem 3.3 to degenerate problems. As an important corollary, we conclude that if the optimal cost is finite, then there exists an optimal basis, that is, a basis satisfying B^-1 b >= 0 and c̄' = c' - c'_B B^-1 A >= 0'.

Lexicography

We present here the lexicographic pivoting rule and prove that it prevents the simplex method from cycling. Historically, this pivoting rule was derived by analyzing the behavior of the simplex method on a nondegenerate problem obtained by means of a small perturbation of the right-hand side vector b. This connection is pursued in Exercise 3.15.

We start with a definition.

Definition 3.5 A vector u in R^n is said to be lexicographically larger (or smaller) than another vector v in R^n if u != v and the first nonzero component of u - v is positive (or negative, respectively). Symbolically, we write

u >_L v    or    u <_L v.


For example,

(0, 2, 3, 0) >_L (0, 2, 1, 4),
(0, 4, 5, 0) <_L (1, 2, 1, 2).
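Definition 3.5 translates directly into code; a small sketch:

```python
def lex_larger(u, v):
    """True if u is lexicographically larger than v, i.e. u != v and
    the first nonzero component of u - v is positive (Definition 3.5)."""
    for a, b in zip(u, v):
        if a != b:
            return a > b
    return False            # u == v: neither larger nor smaller

print(lex_larger((0, 2, 3, 0), (0, 2, 1, 4)))   # True
print(lex_larger((0, 4, 5, 0), (1, 2, 1, 2)))   # False
```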

Lexicographic pivoting rule
1. Choose an entering column A_j arbitrarily, as long as its reduced cost c̄_j is negative. Let u = B^-1 A_j be the jth column of the tableau.
2. For each i with u_i > 0, divide the ith row of the tableau (including the entry in the zeroth column) by u_i and choose the lexicographically smallest row. If row l is lexicographically smallest, then the lth basic variable x_B(l) exits the basis.

Example 3.7 Consider the following tableau (the zeroth row is omitted), and suppose that the pivot column is the third one (j = 3):

1 |  0   5   3
2 |  4   6  -1
3 |  0   7   9

Note that there is a tie in trying to determine the exiting variable: x_B(1)/u_1 = 1/3 and x_B(3)/u_3 = 3/9 = 1/3. We divide the first and third rows of the tableau by u_1 = 3 and u_3 = 9, respectively, to obtain:

1/3 |  0   5/3   1
 *  |  *    *    *
1/3 |  0   7/9   1

The tie between the first and third rows is resolved by performing a lexicographic comparison. Since 7/9 < 5/3, the third row is chosen to be the pivot row, and the variable x_B(3) exits the basis.
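Step 2 of the rule is straightforward to implement, since list comparison in many languages is already lexicographic. The sketch below reproduces the choice made in Example 3.7:

```python
from fractions import Fraction as F

def lex_exiting_row(T, u):
    """Lexicographic pivoting rule, step 2: among rows with u_i > 0,
    divide row i (zeroth column included) by u_i and pick the
    lexicographically smallest scaled row."""
    candidates = [i for i in range(len(T)) if u[i] > 0]
    scaled = {i: [v / u[i] for v in T[i]] for i in candidates}
    # Python compares lists lexicographically, which is exactly what we need.
    return min(candidates, key=lambda i: scaled[i])

# The tableau of Example 3.7 (zeroth column first), pivot column j = 3:
T = [[F(1), F(0), F(5), F(3)],
     [F(2), F(4), F(6), F(-1)],
     [F(3), F(0), F(7), F(9)]]
u = [row[3] for row in T]        # u = (3, -1, 9)
print(lex_exiting_row(T, u))     # 2: the third row wins, since 7/9 < 5/3
```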

We note that the lexicographic pivoting rule always leads to a unique choice for the exiting variable. Indeed, if this were not the case, two of the rows in the tableau would have to be proportional. But if two rows of the matrix B^-1 A are proportional, the matrix B^-1 A has rank smaller than m and, therefore, A also has rank less than m, which contradicts our standing assumption that A has linearly independent rows.


Theorem 3.4 Suppose that the simplex algorithm starts with all the rows of the simplex tableau, other than the zeroth row, lexicographically positive. Suppose that the lexicographic pivoting rule is followed. Then:

(a) Every row of the simplex tableau, other than the zeroth row, remains lexicographically positive throughout the algorithm.

(b) The zeroth row strictly increases lexicographically at each iteration.

(c) The simplex method terminates after a finite number of iterations.

Proof.

(a) Suppose that all rows of the simplex tableau, other than the zeroth row, are lexicographically positive at the beginning of a simplex iteration. Suppose that x_j enters the basis and that the pivot row is the lth row. According to the lexicographic pivoting rule, we have u_l > 0 and

(lth row)/u_l  <_L  (ith row)/u_i,    if i != l and u_i > 0.        (3.5)

To determine the new tableau, the lth row is divided by the positive pivot element u_l and, therefore, remains lexicographically positive. Consider the ith row and suppose that u_i <= 0. In order to zero the (i, j)th entry of the tableau, we need to add a nonnegative multiple of the pivot row to the ith row. Due to the lexicographic positivity of both rows, the ith row will remain lexicographically positive after this addition. Finally, consider the ith row for the case where u_i > 0 and i != l. We have

(new ith row) = (old ith row) - (u_i/u_l)(old lth row).

Because of the lexicographic inequality (3.5), which is satisfied by the old rows, the new ith row is also lexicographically positive.

(b) At the beginning of an iteration, the reduced cost in the pivot column is negative. In order to make it zero, we need to add a positive multiple of the pivot row. Since the latter row is lexicographically positive, the zeroth row increases lexicographically.

(c) Since the zeroth row increases lexicographically at each iteration, it never returns to a previous value. Since the zeroth row is determined completely by the current basis, no basis can be repeated twice, and the simplex method must terminate after a finite number of iterations.

The lexicographic pivoting rule is straightforward to use if the simplex method is implemented in terms of the full tableau. It can also be used


in conjunction with the revised simplex method, provided that the inverse basis matrix B^-1 is formed explicitly (see Exercise 3.16). On the other hand, in sophisticated implementations of the revised simplex method, the matrix B^-1 is never computed explicitly, and the lexicographic rule is not really suitable.

We finally note that in order to apply the lexicographic pivoting rule, an initial tableau with lexicographically positive rows is required. Let us assume that an initial tableau is available (methods for obtaining an initial tableau are discussed in the next section). We can then rename the variables so that the basic variables are the first m ones. This is equivalent to rearranging the tableau so that the first m columns of B^-1 A are the m unit vectors. The resulting tableau has lexicographically positive rows, as desired.

Bland's rule

The smallest subscript pivoting rule, also known as Bland's rule, is as follows:

Smallest subscript pivoting rule
1. Find the smallest j for which the reduced cost c̄_j is negative, and have the column A_j enter the basis.
2. Out of all variables x_i that are tied in the test for choosing an exiting variable, select the one with the smallest value of i.

This pivoting rule is compatible with an implementation of the revised simplex method in which the reduced costs of the nonbasic variables are computed one at a time, in the natural order, until a negative one is discovered. Under this pivoting rule, it is known that cycling never occurs and the simplex method is guaranteed to terminate after a finite number of iterations.
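Bland's rule can be tested directly on the cycling tableau of Example 3.6. In the sketch below (our own), the only change from the earlier rules is the choice of entering variable: the smallest index with negative reduced cost, rather than the most negative one. By our count the method now stops after six pivots with all reduced costs nonnegative, instead of cycling forever:

```python
from fractions import Fraction as F

def bland_step(z, T, basis):
    """One simplex pivot under Bland's rule: the smallest-index variable
    with negative reduced cost enters; ties in the ratio test are broken
    by smallest basic-variable subscript. Returns False at optimality."""
    n = len(z) - 1
    j = next((k for k in range(n) if z[1 + k] < 0), None)
    if j is None:
        return False
    rows = [i for i in range(len(T)) if T[i][1 + j] > 0]
    theta = min(T[i][0] / T[i][1 + j] for i in rows)
    l = min((i for i in rows if T[i][0] / T[i][1 + j] == theta),
            key=lambda i: basis[i])
    T[l] = [v / T[l][1 + j] for v in T[l]]
    for i in range(len(T)):
        if i != l and T[i][1 + j]:
            f = T[i][1 + j]
            T[i] = [a - f * p for a, p in zip(T[i], T[l])]
    f = z[1 + j]
    z[:] = [a - f * p for a, p in zip(z, T[l])]
    basis[l] = j
    return True

# The initial (cycling) tableau of Example 3.6:
z = [F(3), F(-3, 4), F(20), F(-1, 2), F(6), F(0), F(0), F(0)]
T = [[F(0), F(1, 4), F(-8), F(-1), F(9), F(1), F(0), F(0)],
     [F(0), F(1, 2), F(-12), F(-1, 2), F(3), F(0), F(1), F(0)],
     [F(1), F(0), F(0), F(1), F(0), F(0), F(0), F(1)]]
basis = [4, 5, 6]
pivots = 0
while bland_step(z, T, basis):
    pivots += 1
print(pivots)   # 6: the method terminates instead of cycling
```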

3.5 Finding an initial basic feasible solution

In order to start the simplex method, we need to find an initial basic feasible solution. Sometimes this is straightforward. For example, suppose that we are dealing with a problem involving constraints of the form Ax <= b, where b >= 0. We can then introduce nonnegative slack variables s and rewrite the constraints in the form Ax + s = b. The vector (x, s) defined by x = 0 and s = b is a basic feasible solution and the corresponding basis matrix is the identity. In general, however, finding an initial basic feasible solution is not easy and requires the solution of an auxiliary linear programming problem, as will be seen shortly.


Consider the problem

minimize    c'x
subject to  Ax = b
            x >= 0.

By possibly multiplying some of the equality constraints by -1, we can assume, without loss of generality, that b >= 0. We now introduce a vector y in R^m of artificial variables and use the simplex method to solve the auxiliary problem

minimize    y1 + y2 + ... + ym
subject to  Ax + y = b
            x >= 0, y >= 0.

Initialization is easy for the auxiliary problem: by letting x = 0 and y = b, we have a basic feasible solution and the corresponding basis matrix is the identity.

If x is a feasible solution to the original problem, this choice of x, together with y = 0, yields a zero cost solution to the auxiliary problem. Therefore, if the optimal cost in the auxiliary problem is nonzero, we conclude that the original problem is infeasible. If, on the other hand, we obtain a zero cost solution to the auxiliary problem, it must satisfy y = 0, and x is a feasible solution to the original problem.

At this point, we have accomplished our objectives only partially. We have a method that either detects infeasibility or finds a feasible solution to the original problem. However, in order to initialize the simplex method for the original problem, we need a basic feasible solution, an associated basis matrix B, and (depending on the implementation) the corresponding tableau. All this is straightforward if the simplex method, applied to the auxiliary problem, terminates with a basis matrix B consisting exclusively of columns of A. We can simply drop the columns that correspond to the artificial variables and continue with the simplex method on the original problem, using B as the starting basis matrix.
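The construction of the auxiliary problem is mechanical; the sketch below (our own) builds it for the constraint data of Example 3.8, which follows, and checks that x = 0, y = b is indeed feasible. The Phase I simplex run itself is omitted:

```python
def auxiliary_problem(A, b):
    """Build the Phase I auxiliary problem for min c'x, Ax = b, x >= 0:
    flip rows so that b >= 0, append one artificial variable per
    constraint, and minimize the sum of the artificials. Then
    (x, y) = (0, b) is a basic feasible solution, and the corresponding
    basis matrix is the identity."""
    m, n = len(A), len(A[0])
    A2, b2 = [], []
    for row, bi in zip(A, b):
        sign = -1 if bi < 0 else 1
        b2.append(sign * bi)
        A2.append([sign * v for v in row] + [0] * m)
    for i in range(m):
        A2[i][n + i] = 1                 # column of artificial y_i
    aux_cost = [0] * n + [1] * m         # minimize y_1 + ... + y_m
    basis = list(range(n, n + m))        # start with y basic, y = b2
    return A2, b2, aux_cost, basis

# Constraint data of Example 3.8:
A = [[1, 2, 3, 0], [-1, 2, 6, 0], [0, 4, 9, 0], [0, 0, 3, 1]]
b = [3, 2, 5, 1]
A2, b2, aux_cost, basis = auxiliary_problem(A, b)
# the starting point x = 0, y = b satisfies all the equality constraints:
point = [0] * 4 + b2
print(all(sum(a * v for a, v in zip(row, point)) == bi
          for row, bi in zip(A2, b2)))          # True
```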

Driving artificial variables out of the basis

The situation is more complex if the original problem is feasible, the simplex method applied to the auxiliary problem terminates with a feasible solution x* to the original problem, but some of the artificial variables are in the final basis. Since the final value of the artificial variables is zero, this implies that we have a degenerate basic feasible solution to the auxiliary problem. Let k be the number of columns of A that belong to the final basis (k < m) and, without loss of generality, assume that these are the columns A_B(1), ..., A_B(k). In particular, x_B(1), ..., x_B(k) are the only variables


that can be at nonzero level. Note that the columns A_B(1), ..., A_B(k) must be linearly independent since they are part of a basis. Under our standard assumption that the matrix A has full rank, the columns of A span R^m, and we can choose m - k additional columns A_B(k+1), ..., A_B(m) of A, to obtain a set of m linearly independent columns, that is, a basis consisting exclusively of columns of A. With this basis, all nonbasic components of x* are at zero level, and it follows that x* is the basic feasible solution associated with this new basis as well. At this point, the artificial variables and the corresponding columns of the tableau can be dropped.

The procedure we have just described is called driving the artificial variables out of the basis, and depends crucially on the assumption that the matrix A has rank m. After all, if A has rank less than m, constructing a basis for R^m using the columns of A is impossible, and there exist redundant equality constraints that must be eliminated, as described by Theorem 2.5 in Section 2.3. All of the above can be carried out mechanically, in terms of the simplex tableau, in the following manner.

Suppose that the lth basic variable is an artificial variable, which is in the basis at zero level. We examine the lth row of the tableau and find some j such that the lth entry of B^-1 A_j is nonzero. We claim that A_j is linearly independent from the columns A_B(1), ..., A_B(k). To see this, note that B^-1 A_B(i) = e_i, i = 1, ..., k, and since l > k, the lth entry of these vectors is zero. It follows that the lth entry of any linear combination of the vectors B^-1 A_B(1), ..., B^-1 A_B(k) is also equal to zero. Since the lth entry of B^-1 A_j is nonzero, this vector is not a linear combination of the vectors B^-1 A_B(1), ..., B^-1 A_B(k). Equivalently, A_j is not a linear combination of the vectors A_B(1), ..., A_B(k), which proves our claim. We now bring A_j into the basis and have the lth basic variable exit the basis. This is accomplished in the usual manner: perform those elementary row operations that replace B^-1 A_j by the lth unit vector. The only difference from the usual mechanics of the simplex method is that the pivot element (the lth entry of B^-1 A_j) could be negative. Because the lth basic variable was zero, adding a multiple of the lth row to the other rows does not change the values of the basic variables. This means that after the change of basis, we are still at the same basic feasible solution to the auxiliary problem, but we have reduced the number of basic artificial variables by one. We repeat this procedure as many times as needed until all artificial variables are driven out of the basis.

Let us now assume that the lth row of B^-1 A is zero, in which case the above described procedure fails. Note that the lth row of B^-1 A is equal to g'A, where g' is the lth row of B^-1. Hence, g'A = 0' for some nonzero vector g, and the matrix A has linearly dependent rows. Since we are dealing with a feasible problem, we must also have g'b = 0. Thus, the constraint g'Ax = g'b is redundant and can be eliminated (cf. Theorem 2.5 in Section 2.3). Since this constraint is the information provided by the lth row of the tableau, we can eliminate that row and continue from there.


Example 3.8 Consider the linear programming problem

minimize    x1 + x2 + x3
subject to  x1 + 2x2 + 3x3      = 3
           -x1 + 2x2 + 6x3      = 2
                 4x2 + 9x3      = 5
                       3x3 + x4 = 1
            x1, ..., x4 >= 0.

In order to find a feasible solution, we form the auxiliary problem

minimize    x5 + x6 + x7 + x8
subject to  x1 + 2x2 + 3x3      + x5 = 3
           -x1 + 2x2 + 6x3      + x6 = 2
                 4x2 + 9x3      + x7 = 5
                       3x3 + x4 + x8 = 1
            x1, ..., x8 >= 0.

A basic feasible solution to the auxiliary problem is obtained by letting (x5, x6, x7, x8) = b = (3, 2, 5, 1). The corresponding basis matrix is the identity. Furthermore, we have c_B = (1, 1, 1, 1). We evaluate the reduced cost of each one of the original variables x_i, which is -c'_B A_i, and form the initial tableau:

          x1   x2    x3   x4   x5   x6   x7   x8
  -11 |    0   -8   -21   -1    0    0    0    0
x5 = 3 |    1    2     3    0    1    0    0    0
x6 = 2 |   -1    2     6    0    0    1    0    0
x7 = 5 |    0    4     9    0    0    0    1    0
x8 = 1 |    0    0     3    1    0    0    0    1

We bring x4 into the basis and have x8 exit the basis. The basis matrix B is still the identity, and only the zeroth row of the tableau changes. We obtain:

          x1   x2    x3   x4   x5   x6   x7   x8
  -10 |    0   -8   -18    0    0    0    0    1
x5 = 3 |    1    2     3    0    1    0    0    0
x6 = 2 |   -1    2     6    0    0    1    0    0
x7 = 5 |    0    4     9    0    0    0    1    0
x4 = 1 |    0    0     3*   1    0    0    0    1


We now bring x3 into the basis and have x4 exit the basis. The new tableau is:

            x1   x2   x3   x4    x5   x6   x7   x8
    -4 |     0   -8    0    6     0    0    0    7
x5 = 2 |     1    2    0   -1     1    0    0   -1
x6 = 0 |    -1    2*   0   -2     0    1    0   -2
x7 = 2 |     0    4    0   -3     0    0    1   -3
x3 = 1/3 |   0    0    1   1/3    0    0    0   1/3

We now bring x2 into the basis and x6 exits. Note that this is a degenerate pivot, with θ* = 0. The new tableau is:

            x1    x2   x3   x4    x5   x6    x7   x8
    -4 |    -4     0    0   -2     0    4     0   -1
x5 = 2 |     2*    0    0    1     1   -1     0    1
x2 = 0 |   -1/2    1    0   -1     0   1/2    0   -1
x7 = 2 |     2     0    0    1     0   -2     1    1
x3 = 1/3 |   0     0    1   1/3    0    0     0   1/3

We now have x1 enter the basis and x5 exit the basis. We obtain the following tableau:

            x1   x2   x3    x4     x5     x6    x7    x8
     0 |     0    0    0     0      2      2     0     1
x1 = 1 |     1    0    0    1/2    1/2   -1/2    0    1/2
x2 = 1/2 |   0    1    0   -3/4    1/4    1/4    0   -3/4
x7 = 0 |     0    0    0     0     -1     -1     1     0
x3 = 1/3 |   0    0    1    1/3     0      0     0    1/3

Note that the cost in the auxiliary problem has dropped to zero, indicating that we have a feasible solution to the original problem. However, the artificial variable x7 is still in the basis, at zero level. In order to obtain a basic feasible solution to the original problem, we need to drive x7 out of the basis. Note that x7 is the third basic variable and that the third entry of the columns B^-1 A_j, j = 1, ..., 4, associated with the original variables, is zero. This indicates that the matrix A has linearly dependent rows. At this point, we remove the third row of the tableau, because it corresponds to a redundant constraint, and also remove all of the artificial variables. This leaves us with the following initial tableau for the


original problem:

            x1   x2   x3    x4
     * |     *    *    *     *
x1 = 1 |     1    0    0    1/2
x2 = 1/2 |   0    1    0   -3/4
x3 = 1/3 |   0    0    1    1/3

We may now compute the reduced costs of the original variables, fill in the zeroth row of the tableau, and start executing the simplex method on the original problem.

We observe that in this example, the artificial variable x8 was unnecessary. Instead of starting with x8 = 1, we could have started with x4 = 1, thus eliminating the need for the first pivot. More generally, whenever there is a variable that appears in a single constraint and with a positive coefficient (slack variables being the typical example), we can always let that variable be in the initial basis, and we do not have to associate an artificial variable with that constraint.

The two-phase simplex method

We can now summarize a complete algorithm for linear programming problems in standard form.

Phase I:
1. By multiplying some of the constraints by -1, change the problem so that b >= 0.
2. Introduce artificial variables y1, ..., ym, if necessary, and apply the simplex method to the auxiliary problem with cost y1 + ... + ym.
3. If the optimal cost in the auxiliary problem is positive, the original problem is infeasible and the algorithm terminates.
4. If the optimal cost in the auxiliary problem is zero, a feasible solution to the original problem has been found. If no artificial variable is in the final basis, the artificial variables and the corresponding columns are eliminated, and a feasible basis for the original problem is available.
5. If the lth basic variable is an artificial one, examine the lth entry of the columns B^-1 A_j, j = 1, ..., n. If all of these entries are zero, the lth row represents a redundant constraint and is eliminated. Otherwise, if the lth entry of the jth column is nonzero, apply a change of basis (with this entry serving as the pivot


element): the lth basic variable exits and x_j enters the basis. Repeat this operation until all artificial variables are driven out of the basis.

Phase II:
1. Let the final basis and tableau obtained from Phase I be the initial basis and tableau for Phase II.
2. Compute the reduced costs of all variables for this initial basis, using the cost coefficients of the original problem.
3. Apply the simplex method to the original problem.

The above two-phase algorithm is a complete method, in the sense that it can handle all possible outcomes. As long as cycling is avoided (due to either nondegeneracy, an anticycling rule, or luck), one of the following possibilities will materialize:

(a) If the problem is infeasible, this is detected at the end of Phase I.

(b) If the problem is feasible but the rows of A are linearly dependent, this is detected and corrected at the end of Phase I, by eliminating redundant equality constraints.

(c) If the optimal cost is equal to -infinity, this is detected while running Phase II.

(d) Else, Phase II terminates with an optimal solution.

The big-M method

We close by mentioning an alternative approach, the big-M method, that combines the two phases into a single one. The idea is to introduce a cost function of the form

c1x1 + ... + cnxn + M(y1 + ... + ym),

where M is a large positive constant, and where the yi are the same artificial variables as in Phase I simplex. For a sufficiently large choice of M, if the original problem is feasible and its optimal cost is finite, all of the artificial variables are eventually driven to zero (see the exercises), which takes us back to the minimization of the original cost function. In fact, there is no reason for fixing a numerical value for M. We can leave M as an undetermined parameter and let the reduced costs be functions of M. Whenever M is compared to another number (in order to determine whether a reduced cost is negative), M will always be treated as being larger.
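This bookkeeping can be captured by storing each reduced cost as a pair (coefficient of M, constant term) and comparing pairs lexicographically, since the M-coefficient dominates for an arbitrarily large M. A small sketch, using the reduced costs that appear in the initial tableau of Example 3.9 below:

```python
def is_negative(coef_M, const):
    """Sign of coef_M * M + const for an arbitrarily large positive M:
    the coefficient of M dominates, so compare the pair
    (coef_M, const) lexicographically with (0, 0)."""
    return (coef_M, const) < (0, 0)

# Reduced costs of x1, ..., x4 in the initial tableau of Example 3.9,
# kept as (coefficient of M, constant) pairs instead of fixing M:
reduced = {1: (0, 1), 2: (-8, 1), 3: (-18, 1), 4: (0, 0)}
negative = [j for j, (a, b) in reduced.items() if is_negative(a, b)]
print(negative)   # [2, 3]: x2 and x3 have negative reduced cost for large M
```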


118 Chap. 3 The simplex method

Example 3.9 We consider the same linear programming problem as in Example 3.8:

    minimize     x1 +  x2 +  x3
    subject to   x1 + 2x2 + 3x3           =  3
                -x1 + 2x2 + 6x3           =  2
                      4x2 + 9x3           =  5
                            3x3 + x4      =  1
                 x1, ..., x4 ≥ 0.

We use the big-M method in conjunction with the following auxiliary problem, in which the unnecessary artificial variable x8 is omitted:

    minimize     x1 + x2 + x3 + Mx5 + Mx6 + Mx7
    subject to   x1 + 2x2 + 3x3      + x5             =  3
                -x1 + 2x2 + 6x3           + x6        =  2
                      4x2 + 9x3                + x7   =  5
                            3x3 + x4                  =  1
                 x1, ..., x7 ≥ 0.

A basic feasible solution to the auxiliary problem is obtained by letting (x5, x6, x7, x4) = b = (3, 2, 5, 1). The corresponding basis matrix is the identity. Furthermore, we have c_B = (M, M, M, 0). We evaluate the reduced cost of each one of the original variables x_i, which is c_i - c_B'B^{-1}A_i, and form the initial tableau:

               x1     x2       x3      x4   x5   x6   x7

   -10M    |    1   -8M+1   -18M+1     0    0    0    0

   x5 = 3  |    1     2        3       0    1    0    0

   x6 = 2  |   -1     2        6       0    0    1    0

   x7 = 5  |    0     4        9       0    0    0    1

   x4 = 1  |    0     0        3*      1    0    0    0

The reduced cost of x3 is negative when M is large enough. We therefore bring x3 into the basis and have x4 exit. Note that in order to set the reduced cost of x3 to zero, we need to multiply the pivot row by 6M - 1/3 and add it to the zeroth row. The new tableau is:

                 x1     x2     x3      x4       x5   x6   x7

   -4M - 1/3 |    1   -8M+1    0    6M - 1/3    0    0    0

   x5 = 2    |    1     2      0       -1       1    0    0

   x6 = 0    |   -1     2*     0       -2       0    1    0

   x7 = 2    |    0     4      0       -3       0    0    1

   x3 = 1/3  |    0     0      1       1/3      0    0    0

The reduced cost of x2 is negative when M is large enough. We therefore bring x2 into the basis and x6 exits. Note that this is a degenerate pivot with θ* = 0.


Sec. 3.6 Column geometry and the simplex method 119

The new tableau is:

                   x1       x2   x3      x4       x5      x6      x7

   -4M - 1/3 |  -4M + 3/2    0    0   -2M + 2/3    0   4M - 1/2    0

   x5 = 2    |      2        0    0       1        1      -1       0

   x2 = 0    |    -1/2       1    0      -1        0      1/2      0

   x7 = 2    |      2        0    0       1        0      -2       1

   x3 = 1/3  |      0        0    1      1/3       0       0       0

We now have x1 enter and x5 exit the basis. We obtain the following tableau:

                 x1   x2   x3     x4       x5         x6       x7

   -11/6     |    0    0    0   -1/12   2M - 3/4   2M + 1/4     0

   x1 = 1    |    1    0    0    1/2      1/2        -1/2       0

   x2 = 1/2  |    0    1    0   -3/4      1/4         1/4       0

   x7 = 0    |    0    0    0     0       -1          -1        1

   x3 = 1/3  |    0    0    1    1/3       0           0        0

We now bring x4 into the basis and x3 exits. The new tableau is:

                 x1   x2    x3    x4      x5         x6       x7

   -7/4      |    0    0   1/4    0    2M - 3/4   2M + 1/4     0

   x1 = 1/2  |    1    0  -3/2    0      1/2        -1/2       0

   x2 = 5/4  |    0    1   9/4    0      1/4         1/4       0

   x7 = 0    |    0    0    0     0      -1          -1        1

   x4 = 1    |    0    0    3     1       0           0        0

With M large enough, all of the reduced costs are nonnegative and we have an optimal solution to the auxiliary problem. In addition, all of the artificial variables have been driven to zero, and we have an optimal solution to the original problem.
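The run above can be reproduced mechanically. The following toy implementation (ours, not the book's) keeps every cost as a pair (a, b) standing for aM + b, works in exact rational arithmetic, and assumes b ≥ 0, a feasible problem, and a finite optimal cost; it uses a Bland-style entering rule, so its pivot sequence may differ from the example's while reaching the same optimum.

```python
from fractions import Fraction as F

def big_m_simplex(c, A, b):
    """Toy big-M method for min c'x s.t. Ax = b, x >= 0 (assumes b >= 0,
    a feasible problem, and a finite optimal cost).  Every cost is a pair
    (m_part, const) standing for m_part*M + const, with M symbolic."""
    m, n = len(A), len(A[0])
    # Append one artificial variable per row; the initial basis is the
    # set of artificials, so the initial basis matrix is the identity.
    T = [[F(v) for v in row] + [F(int(i == j)) for j in range(m)] + [F(b[i])]
         for i, row in enumerate(A)]
    basis = list(range(n, n + m))
    cost = [(F(0), F(v)) for v in c] + [(F(1), F(0))] * m
    def reduced(j):
        rm, rk = cost[j]
        for i in range(m):
            bm, bk = cost[basis[i]]
            rm -= bm * T[i][j]
            rk -= bk * T[i][j]
        return (rm, rk)                     # reduced cost rm*M + rk
    while True:
        entering = [j for j in range(n + m) if reduced(j) < (0, 0)]
        if not entering:
            break                           # optimal for all large enough M
        j = entering[0]                     # Bland-style: smallest index
        l = min((i for i in range(m) if T[i][j] > 0),
                key=lambda i: (T[i][-1] / T[i][j], basis[i]))
        piv = T[l][j]
        T[l] = [v / piv for v in T[l]]
        for i in range(m):
            if i != l and T[i][j] != 0:
                f = T[i][j]
                T[i] = [v - f * w for v, w in zip(T[i], T[l])]
        basis[l] = j
    x = [F(0)] * n
    for i in range(m):
        if basis[i] < n:                    # artificials stay at zero
            x[basis[i]] = T[i][-1]
    return x, sum(F(c[j]) * x[j] for j in range(n))

# Example 3.9: the optimum is x = (1/2, 5/4, 0, 1) with cost 7/4.
x, obj = big_m_simplex([1, 1, 1, 0],
                       [[1, 2, 3, 0], [-1, 2, 6, 0],
                        [0, 4, 9, 0], [0, 0, 3, 1]],
                       [3, 2, 5, 1])
assert obj == F(7, 4) and x == [F(1, 2), F(5, 4), F(0), F(1)]
```

On the problem of Example 3.9 this returns the optimal solution x = (1/2, 5/4, 0, 1) with cost 7/4, matching the final tableau.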

3.6 Column geometry and the simplex method

In this section, we introduce an alternative way of visualizing the workings of the simplex method. This approach provides some insights into why the



simplex method appears to be efficient in practice.

We consider the problem

    minimize     c'x
    subject to   Ax = b
                 e'x = 1                         (3.6)
                 x ≥ 0,

where A is an m × n matrix and e is the n-dimensional vector with all components equal to one. Although this might appear to be a special type of a linear programming problem, it turns out that every problem with a bounded feasible set can be brought into this form (Exercise 3.28). The constraint e'x = 1 is called the convexity constraint. We also introduce an auxiliary variable z defined by z = c'x. If A1, A2, ..., An are the columns of A, we are dealing with the problem of minimizing z subject to the nonnegativity constraints x ≥ 0, the convexity constraint x1 + ... + xn = 1, and the constraint

    x1 (A1, c1) + x2 (A2, c2) + ... + xn (An, cn) = (b, z).

In order to capture this problem geometrically, we view the horizontal plane as an m-dimensional space containing the columns of A, and we view the vertical axis as the one-dimensional space associated with the cost components. Then, each point in the resulting (m+1)-dimensional space corresponds to a point (A_i, c_i); see Figure 3.5.

In this geometry, our objective is to construct a vector (b, z), which is a convex combination of the vectors (A_i, c_i), such that z is as small as possible. Note that the vectors of the form (b, z) lie on a vertical line, which we call the requirement line, and which intersects the horizontal plane at b.

If the requirement line does not intersect the convex hull of the points (A_i, c_i), the problem is infeasible. If it does intersect it, the problem is feasible, and an optimal solution corresponds to the lowest point in the intersection of the convex hull and the requirement line; the height of that point is the optimal cost.

We now need some terminology.

Definition 3.6

(a) A collection of vectors y1, ..., y_{k+1} in R^n is said to be affinely independent if the vectors y1 - y_{k+1}, y2 - y_{k+1}, ..., yk - y_{k+1} are linearly independent. (Note that we must have k ≤ n.)

(b) The convex hull of k+1 affinely independent vectors in R^n is called a k-dimensional simplex.



Figure 3.5: The column geometry.

Thus, three points are either collinear, or they are affinely independent and determine a two-dimensional simplex (a triangle). Similarly, four points either lie on the same plane, or they are affinely independent and determine a three-dimensional simplex (a pyramid).
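Definition 3.6(a) reduces affine independence to a rank computation on the difference vectors. A small illustrative sketch (our helper, not the book's), using exact Gaussian elimination:

```python
from fractions import Fraction as F

def affinely_independent(points):
    """y1, ..., y_{k+1} are affinely independent iff the differences
    y1 - y_{k+1}, ..., yk - y_{k+1} are linearly independent; here the
    rank of those differences is computed by Gaussian elimination."""
    *ys, last = [[F(c) for c in p] for p in points]
    rows = [[a - b for a, b in zip(y, last)] for y in ys]
    rank = 0
    cols = len(rows[0]) if rows else 0
    for c in range(cols):
        # Find a pivot row for column c among the not-yet-used rows.
        piv = next((r for r in range(rank, len(rows)) if rows[r][c] != 0), None)
        if piv is None:
            continue
        rows[rank], rows[piv] = rows[piv], rows[rank]
        for r in range(len(rows)):
            if r != rank and rows[r][c] != 0:
                f = rows[r][c] / rows[rank][c]
                rows[r] = [x - f * y for x, y in zip(rows[r], rows[rank])]
        rank += 1
    return rank == len(rows)

# Three collinear points are NOT affinely independent; a proper triangle is.
assert not affinely_independent([(0, 0), (1, 1), (2, 2)])
assert affinely_independent([(0, 0), (1, 0), (0, 1)])
```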

Let us now give an interpretation of basic feasible solutions to problem (3.6) in this geometry. Since we have added the convexity constraint, we have a total of m+1 equality constraints. Thus, a basic feasible solution is associated with a collection of m+1 linearly independent columns (A_i, 1) of the linear programming problem (3.6). These are in turn associated with m+1 of the points (A_i, c_i), which we call basic points; the remaining points (A_i, c_i) are called the nonbasic points. It is not hard to show that the m+1 basic points are affinely independent (Exercise 3.29) and, therefore, their convex hull is an m-dimensional simplex, which we call the basic simplex. Let the requirement line intersect the m-dimensional basic simplex at some point (b, z). The vector of weights x_i used in expressing (b, z) as a convex combination of the basic points is the current basic feasible solution, and z represents its cost. For example, in Figure 3.6, the shaded triangle CDF is the basic simplex, and the point H corresponds to a basic feasible solution associated with the basic points C, D, and F.

Let us now interpret a change of basis geometrically. In a change of basis, a new point (A_j, c_j) becomes basic, and one of the currently basic points is to become nonbasic. For example, in Figure 3.6, if C, D, F are the current basic points, we could make point B basic, replacing F



Figure 3.6: Feasibility and optimality in the column geometry.

(even though this turns out not to be profitable). The new basic simplex would be the convex hull of C, D, and B, and the new basic feasible solution would correspond to point I. Alternatively, we could make point B basic, replacing C, and the new basic feasible solution would now correspond to point G. After a change of basis, the intercept of the requirement line with the new basic simplex is lower, and hence the cost decreases, if and only if the new basic point is below the plane that passes through the old basic points; we refer to the latter plane as the dual plane. For example, point B is below the dual plane and having it enter the basis is profitable; this is not the case for point E. In fact, the vertical distance from the dual plane to a point (A_i, c_i) is equal to the reduced cost of the associated variable x_i (Exercise 3.30); requiring the new basic point to be below the dual plane is therefore equivalent to requiring the entering column to have negative reduced cost.

We discuss next the selection of the basic point that will exit the basis. Each possible choice of the exiting point leads to a different basic simplex. These basic simplices, together with the original basic simplex before the change of basis, form the boundary (the faces) of an (m+1)-dimensional simplex. The requirement line exits this (m+1)-dimensional simplex through its top face and must therefore enter it by crossing some other face. This determines which one of the potential basic simplices will be obtained after the change of basis. In reference to Figure 3.6, the basic



points C, D, F determine a two-dimensional basic simplex. If point B is to become basic, we obtain a three-dimensional simplex (a pyramid) with vertices B, C, D, F. The requirement line exits this pyramid through its top face, with vertices C, D, F, and must enter it through one of the other faces; the face through which it enters determines the new basic simplex.

We can also visualize pivoting through the following physical analogy. Think of the original basic simplex, with vertices C, D, F, as a solid object anchored at its vertices. Grasp the corner of the simplex at the vertex leaving the basis, and pull it down to the new basic point while keeping the other vertices anchored. In the course of this motion, the simplex will hinge, or pivot, on its anchor and stretch down to the lower position. The somewhat peculiar terms (e.g., "simplex," "pivot") associated with the simplex method have their roots in this column geometry.

Example 3.10 Consider the problem illustrated in Figure 3.7, in which m = 1, and the following pivoting rule: choose a point (A_i, c_i) below the dual plane to become basic, whose vertical distance from the dual plane is largest. According to Exercise 3.30, this is identical to the pivoting rule that selects an entering variable with the most negative reduced cost. Starting from the initial basic simplex consisting of the points (A1, c1), (A6, c6), the next basic simplex is determined by the points (A1, c1), (A5, c5), and the next one by the points (A5, c5), (A8, c8). In particular, the simplex method only takes two pivots in this case. This example indicates why the simplex method may require a rather small number of pivots, even when the number of underlying variables is large.

Figure 3.7: The simplex method finds the optimal basis after two iterations. Here, the point indicated by a number i corresponds to the vector (A_i, c_i).



3.7 Computational efficiency of the simplex method

The computational efficiency of the simplex method is determined by two factors:

(a) the computational effort at each iteration;

(b) the number of iterations.

The computational requirements of each iteration have already been discussed in Section 3.3. For example, the full tableau implementation needs O(mn) arithmetic operations per iteration; the same is true for the revised simplex method in the worst case. We now turn to a discussion of the number of iterations.

The number of iterations in the worst case

Although the number of extreme points of the feasible set can increase exponentially with the number of variables and constraints, it has been observed in practice that the simplex method typically takes only O(m) pivots to find an optimal solution. Unfortunately, however, this practical observation is not true for every linear programming problem. We will describe shortly a family of problems for which an exponential number of pivots may be required.

Recall that for nondegenerate problems, the simplex method always moves from one vertex to an adjacent one, each time improving the value of the cost function. We will now describe a polyhedron that has an exponential number of vertices, along with a path that visits all vertices, by taking steps from one vertex to an adjacent one that has lower cost. Once such a polyhedron is available, the simplex method, under a pivoting rule that traces this path, needs an exponential number of pivots.

Consider the unit cube in R^n, defined by the constraints

    0 ≤ x_i ≤ 1,    i = 1, ..., n.

The unit cube has 2^n vertices: for each i, we may let either one of the two constraints 0 ≤ x_i or x_i ≤ 1 become active. Furthermore, there exist paths that travel along the edges of the cube and which visit each vertex exactly once; we call such a path a spanning path. It can be constructed according to the procedure illustrated in Figure 3.8.

Let us now introduce the cost function -x_n. Half of the vertices of the cube have zero cost, and the other half have a cost of -1. Thus, the cost cannot decrease strictly with each move along the spanning path, and we do not yet have the desired example. However, if we choose some ε ∈ (0, 1/2) and consider the perturbation of the unit cube defined by the constraints

    ε ≤ x1 ≤ 1,                                     (3.7)

    ε x_{i-1} ≤ x_i ≤ 1 - ε x_{i-1},   i = 2, ..., n,    (3.8)


Sec. 3.7 Computational efficiency of the simplex method 125

(a)                                        (b)

Figure 3.8: (a) A spanning path in the two-dimensional cube. (b) A spanning path in the three-dimensional cube. Notice that this path is obtained by splitting the three-dimensional cube into two two-dimensional cubes, following the two-dimensional spanning path in one of them, moving to the other cube, and following the same path in the reverse order. This construction generalizes and provides a recursive definition of a spanning path for the general n-dimensional cube.
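The recursive construction of Figure 3.8 can be sketched directly (the helper name is ours; the construction coincides with the binary reflected Gray code):

```python
def spanning_path(n):
    """Spanning path of the n-cube as in Figure 3.8: follow the path in the
    face x_n = 0, step across to x_n = 1, then retrace the path backwards.
    (This is the binary reflected Gray code.)"""
    if n == 0:
        return [()]
    p = spanning_path(n - 1)
    return [v + (0,) for v in p] + [v + (1,) for v in reversed(p)]

path = spanning_path(3)
assert len(path) == 8 and len(set(path)) == 8       # all 2^3 vertices, once each
assert all(sum(a != b for a, b in zip(u, v)) == 1    # consecutive vertices adjacent
           for u, v in zip(path, path[1:]))
assert sum(a != b for a, b in zip(path[0], path[-1])) == 1  # ends adjacent too
```

The last assertion checks the property used below: the first and last vertex of the spanning path are themselves adjacent.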


then it can be verified that the cost function decreases strictly with each move along a suitably chosen spanning path. If we start the simplex method at the first vertex on that spanning path, and if our pivoting rule is to always move to the next vertex on that path, then the simplex method will require 2^n - 1 pivots. We summarize this discussion in the following theorem, whose proof is left as an exercise (Exercise 3.32).

Theorem 3.5 Consider the linear programming problem of minimizing -x_n subject to the constraints (3.7)-(3.8). Then:

(a) The feasible set has 2^n vertices.

(b) The vertices can be ordered so that each one is adjacent to and has lower cost than the previous one.

(c) There exists a pivoting rule under which the simplex method requires 2^n - 1 changes of basis before it terminates.

We observe in Figure 3.8 that the first and the last vertex in the spanning path are adjacent. This property persists in the perturbed polyhedron as well. Thus, with a different pivoting rule, the simplex method could terminate with a single pivot. We are thus led to the following question: is it true that for every pivoting rule there are examples where the simplex



method takes an exponential number of iterations? For several popular pivoting rules, such examples have been constructed. However, these examples do not exclude the possibility that some other pivoting rule might fare better. This is one of the most important open problems in the theory of linear programming. In the next subsection, we address a closely related issue.

The diameter of polyhedra and the Hirsch conjecture

The preceding discussion leads us to the notion of the diameter of a polyhedron, which is defined as follows. Suppose that from a vertex of a polyhedron, we are only allowed to jump to an adjacent vertex. We then define the distance d(x, y) between two vertices x and y as the minimum number of such jumps required to reach y starting from x. The diameter D(P) of the polyhedron P is then defined as the maximum of d(x, y) over all pairs (x, y) of vertices. Finally, we define Δ(n, m) as the maximum of D(P) over all bounded polyhedra in R^n that are represented in terms of m inequality constraints; the quantity Δ_u(n, m) is defined similarly, except that general, possibly unbounded, polyhedra are allowed. For example, we have

    Δ(2, m) = ⌊m/2⌋,

    Δ_u(2, m) = m - 2;

see Figure 3.9.

(a)                                        (b)

Figure 3.9: Let n = 2 and m = 8. (a) A bounded polyhedron with diameter ⌊m/2⌋ = 4. (b) An unbounded polyhedron with diameter m - 2 = 6.

Suppose that the feasible set of a linear programming problem has diameter d, and that the distance between some pair of vertices x and y is equal to d. If



the simplex method (or any other method that proceeds from one vertex to an adjacent vertex) is initialized at x, and if y happens to be the unique optimal solution, then at least d steps will be required. Now, if Δ(n, m) or Δ_u(n, m) increases exponentially with n and m, this implies that there exist examples for which the simplex method takes an exponentially large number of steps, no matter which pivoting rule is used. Thus, in order to have any hope of developing pivoting rules under which the simplex method requires a polynomial number of iterations, we must first establish that Δ(n, m) or Δ_u(n, m) grows with n and m at the rate of some polynomial. The practical success of the simplex method has led to the conjecture that indeed Δ(n, m) and Δ_u(n, m) do not grow exponentially fast; in fact, the following, much stronger, conjecture has been advanced:

    Hirsch Conjecture: Δ(n, m) ≤ m - n.

Despite the significance of Δ(n, m) and Δ_u(n, m), we are far from establishing the Hirsch conjecture, or even from establishing that these quantities exhibit polynomial growth. It is known (Klee and Walkup, 1967) that the Hirsch conjecture is false for unbounded polyhedra and, in particular, that

    Δ_u(n, m) ≥ m - n + ⌊n/5⌋.

Unfortunately, this is the best lower bound known; even though it disproves the Hirsch conjecture for unbounded polyhedra, it does not provide any insight as to whether the growth of Δ_u(n, m) is polynomial or exponential.

Regarding upper bounds, it has been established (Kalai and Kleitman, 1993) that the worst-case diameter grows slower than exponentially, but the available upper bound grows faster than any polynomial. In particular, the following bounds are available:

    Δ(n, m) ≤ Δ_u(n, m) < m^{1 + log2 n} = (2n)^{log2 m}.

Average case behavior of the simplex method

Our discussion so far has focused on the worst-case behavior of the simplex method, but this is only part of the story. Even if every pivoting rule requires an exponential number of iterations in the worst case, this could be
irrelevant to the practical behavior of the simplex method. For this reason, there has been a fair amount of research aiming at an understanding of the typical or average behavior of the simplex method, and an explanation of its observed behavior.

The main difficulty in studying the average behavior of any algorithm lies in defining the meaning of the term "average." Basically, one needs to define a probability distribution over the set of all problems of a given size, and then take the mathematical expectation of the number of iterations



required by the algorithm, when applied to a random problem drawn according to the postulated probability distribution. Unfortunately, there is no natural probability distribution over the set of linear programming problems. Nevertheless, a fair number of positive results have been obtained for a few different types of probability distributions. In one such result, a set of vectors c, a1, ..., am ∈ R^n and scalars b1, ..., bm is given. For i = 1, ..., m, we introduce either the constraint a_i'x ≤ b_i or a_i'x ≥ b_i, with equal probability. We then have 2^m possible linear programming problems, and suppose that L of them are feasible. Haimovich (1983) has established that, under a rather special pivoting rule, the simplex method requires no more than n/2 iterations, on the average over those L feasible problems. This linear dependence on the size of the problem agrees with observed behavior; some empirical evidence is discussed in a later chapter.

3.8 Suary

This chapter was centered on the development of the smplex method whchs a complete algorithm for solvng lnear programmng problems n standard form. The cornerstones of the smplex method are:

a) the optmalit condtions nonnegatvt of the reduced costs) thatallow us to test whether the current bass is optimal;

b) a sstematic method for performing basis changes whenever the optmalt condtons are violated.

At a high level the simplex method smpl moves from one extremepoint of the feasible set to another each tme reducng the cost untl anoptimal soluton s reached . However the lower level details of the smplexmethod relatng to the organizaton of the requred computations and theassocated bookkeeping pla an mportant role. We have descrbed threedierent mplementatons: the nave one the revsed smplex method andthe full tableau mplementation. Abstractl the are all equvalent butther mechancs are qute derent ractical implementatons of the smplex method follow our general descrpton of the revsed smplex methodbut the detals are derent because an explcit computaton of the nversebasis matrx s usuall avoded.

We have seen that degenerac can cause substantial dculties includng the possbilt of nontermnatng behavor cclng) . Ths s because

in the presence of degenerac a change of bass ma keep us at the samebasc feasible solution with no cost mprovement resultng . Cclng canbe avoded if suitable rules for choosing the enterng and exting varablespivoting rules) are applied e.g. Bland's rule or the lexcographic pvotngrule)

Starting the smplex method requires an ntal basc feasble solutonand an assocated tableau. These are provded b the hase smplexalgorthm whch s nothng but the smplex method appled to an auiliar


Sec. 3.9 Exercises 129

problem. We saw that the changeover from Phase I to Phase II involves some delicate steps whenever some artificial variables are in the final basis constructed by the Phase I algorithm.

The simplex method is a rather efficient algorithm and is incorporated in most of the commercial codes for linear programming. While the number of pivots can be an exponential function of the number of variables and constraints in the worst case, its observed behavior is a lot better, hence the practical usefulness of the method.

3.9 Exercises

Exercise 3.1 (Local minima of convex functions) Let f : R^n → R be a convex function and let S ⊂ R^n be a convex set. Let x* be an element of S. Suppose that x* is a local optimum for the problem of minimizing f(x) over S; that is, there exists some ε > 0 such that f(x*) ≤ f(x) for all x ∈ S for which ||x - x*|| ≤ ε. Prove that x* is globally optimal; that is, f(x*) ≤ f(x) for all x ∈ S.

Exercise 3.2 (Optimality conditions) Consider the problem of minimizing c'x over a polyhedron P. Prove the following:

(a) A feasible solution x is optimal if and only if c'd ≥ 0 for every feasible direction d at x.

(b) A feasible solution x is the unique optimal solution if and only if c'd > 0 for every nonzero feasible direction d at x.

Exercise 3.3 Let x be an element of the standard form polyhedron P = {x ∈ R^n | Ax = b, x ≥ 0}. Prove that a vector d ∈ R^n is a feasible direction at x if and only if Ad = 0 and d_i ≥ 0 for every i such that x_i = 0.

Exercise 3.4 Consider the problem of minimizing c'x over the set P = {x ∈ R^n | Ax = b, Dx ≤ f, Ex ≤ g}. Let x* be an element of P that satisfies Dx* = f, Ex* < g. Show that the set of feasible directions at the point x* is the set

    {d ∈ R^n | Ad = 0, Dd ≤ 0}.

Exercise 3.5 Let P = {x ∈ R^3 | x1 + x2 + x3 = 1, x ≥ 0} and consider the vector x = (0, 0, 1). Find the set of feasible directions at x.
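The characterization in Exercise 3.3 can be checked mechanically at the vector of Exercise 3.5. A small sketch (our hypothetical helper, exact integer arithmetic assumed):

```python
def is_feasible_direction(A, x, d):
    """Exercise 3.3: at x in {Ax = b, x >= 0}, d is a feasible direction iff
    Ad = 0 and d_i >= 0 whenever x_i = 0."""
    Ad = [sum(a * di for a, di in zip(row, d)) for row in A]
    return (all(v == 0 for v in Ad)
            and all(di >= 0 for xi, di in zip(x, d) if xi == 0))

A = [[1, 1, 1]]              # P = {x in R^3 : x1 + x2 + x3 = 1, x >= 0}
x = [0, 0, 1]                # the vector of Exercise 3.5
assert is_feasible_direction(A, x, [1, 0, -1])      # move toward the vertex e1
assert not is_feasible_direction(A, x, [-1, 0, 1])  # would drive x1 negative
```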

Exercise 3.6 (Conditions for a unique optimum) Let x be a basic feasible solution associated with some basis matrix B. Prove the following:

(a) If the reduced cost of every nonbasic variable is positive, then x is the unique optimal solution.

(b) If x is the unique optimal solution and is nondegenerate, then the reduced cost of every nonbasic variable is positive.



Exercise 3.7 (Optimality conditions) Consider a feasible solution x to a standard form problem, and let Z = {i | x_i = 0}. Show that x is an optimal solution if and only if the linear programming problem

    minimize     c'd
    subject to   Ad = 0
                 d_i ≥ 0,    i ∈ Z,

has an optimal cost of zero. (In this sense, deciding optimality is equivalent to solving a new linear programming problem.)

Exercise 3.8 This exercise deals with the problem of deciding whether a given degenerate basic feasible solution is optimal, and shows that this is essentially as hard as solving a general linear programming problem.

Consider the linear programming problem of minimizing c'x over all x ∈ P, where P is a given bounded polyhedron. Let

    Q = {(tx, t) | x ∈ P, t ∈ [0, 1]}.

(a) Show that Q is a polyhedron.

(b) Give an example of P and Q, with n = 2, for which the zero vector (in R^{n+1}) is a degenerate basic feasible solution in Q; show the example in a figure.

(c) Show that the zero vector (in R^{n+1}) minimizes (c, 0)'y over all y ∈ Q if and only if the optimal cost in the original linear programming problem is greater than or equal to zero.

Exercise 3.9 (Necessary and sufficient conditions for a unique optimum) Consider a linear programming problem in standard form and suppose that x* is an optimal basic feasible solution. Consider an optimal basis associated with x*. Let B and N be the sets of basic and nonbasic indices, respectively. Let I be the set of nonbasic indices i for which the corresponding reduced costs c̄_i are zero.

(a) Show that if I is empty, then x* is the only optimal solution.

(b) Show that x* is the unique optimal solution if and only if the following linear programming problem has an optimal value of zero:

    maximize     Σ_{i ∈ I} x_i
    subject to   Ax = b
                 x_i = 0,    i ∈ N \ I,
                 x_i ≥ 0,    i ∈ B ∪ I.

Exercise 3. ow tat if = 2 ten te sipex etod wi otye no atte wi piotig ue is used.

Exercise 3.11 Construct an example with n - m = 3 and a pivoting rule under which the simplex method will cycle.



Exercise 3.12 Consider the problem

    minimize    -x1 + x2
    subject to   x1 + x2 ≤ 2
                2x1 + x2 ≤ 6
                 x1, x2 ≥ 0.

(a) Convert the problem into standard form and construct a basic feasible solution at which (x1, x2) = (0, 0).

(b) Carry out the full tableau implementation of the simplex method, starting with the basic feasible solution of part (a).

(c) Draw a graphical representation of the problem in terms of the original variables x1, x2, and indicate the path taken by the simplex algorithm.

Exercise 3.13 This exercise shows that our efficient procedures for updating a tableau can be derived from a useful fact in numerical linear algebra.

(a) (Matrix inversion lemma) Let B be an m × m invertible matrix and let u, v be vectors in R^m. Show that

    (B + uv')^{-1} = B^{-1} - (B^{-1}u)(v'B^{-1}) / (1 + v'B^{-1}u).

(Note that uv' is an m × m matrix.) Hint: Multiply both sides by (B + uv').

(b) Assuming that B^{-1} is available, explain how to obtain (B + uv')^{-1} using only O(m^2) arithmetic operations.

(c) Let B and B̄ be basis matrices before and after an iteration of the simplex method. Let A_{B(ℓ)}, A_j be the exiting and entering columns, respectively. Show that

    B̄ = B + (A_j - A_{B(ℓ)}) e_ℓ',

where e_ℓ is the ℓth unit vector.

(d) Note that e_ℓ'B^{-1} is the ℓth row of B^{-1} and that e_ℓ'B̄^{-1} is the pivot row. Show that

    B̄^{-1} = B^{-1} + Σ_{i=1}^{m} g_i e_i e_ℓ' B^{-1},

for suitable scalars g_i. Provide a formula for g_i. Interpret the above equation in terms of the mechanics for pivoting in the revised simplex method.

(e) Multiply both sides of the equation in part (d) by [b | A] and obtain an interpretation of the mechanics for pivoting in the tableau implementation.
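The identity in part (a) (often called the Sherman-Morrison formula) is easy to check numerically. A sketch with exact arithmetic and our own small matrix helpers (all names hypothetical):

```python
from fractions import Fraction as F

def mat_mul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def sherman_morrison(Binv, u, v):
    """(B + u v')^{-1} = B^{-1} - (B^{-1}u)(v'B^{-1}) / (1 + v'B^{-1}u)."""
    m = len(u)
    Bu = [sum(Binv[i][j] * u[j] for j in range(m)) for i in range(m)]
    vB = [sum(v[i] * Binv[i][j] for i in range(m)) for j in range(m)]
    denom = 1 + sum(v[i] * Bu[i] for i in range(m))
    return [[Binv[i][j] - Bu[i] * vB[j] / denom for j in range(m)]
            for i in range(m)]

# Check (B + uv') times the formula's output equals the identity.
B = [[F(2), F(0)], [F(0), F(2)]]
Binv = [[F(1, 2), F(0)], [F(0), F(1, 2)]]
u, v = [F(1), F(2)], [F(3), F(1)]
B_uv = [[B[i][j] + u[i] * v[j] for j in range(2)] for i in range(2)]
I = mat_mul(B_uv, sherman_morrison(Binv, u, v))
assert I == [[F(1), F(0)], [F(0), F(1)]]
```

Since the correction term is a rank-one outer product, the whole update costs O(m^2) operations, which is the point of part (b).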

Exercise 3.14 Suppose that a feasible tableau is available. Show how to obtain a tableau with lexicographically positive rows. Hint: Permute the columns.

Exercise 3.15 (Perturbation approach to lexicography) Consider a standard form problem, under the usual assumption that the rows of A are linearly independent. Let ε be a scalar and define

    b(ε) = b + (ε, ε², ..., ε^m)'.



For every ε > 0, we define the perturbed problem to be the linear programming problem obtained by replacing b with b(ε).

(a) Given a basis matrix B, show that the corresponding basic solution x_B(ε) in the perturbed problem is equal to

    x_B(ε) = x_B + B^{-1}(ε, ε², ..., ε^m)'.

(b) Show that there exists some ε* > 0 such that all basic solutions to the perturbed problem are nondegenerate, for 0 < ε < ε*.

(c) Suppose that all rows of [B^{-1}b | B^{-1}] are lexicographically positive. Show that x_B(ε) is a basic feasible solution to the perturbed problem for ε positive and sufficiently small.

(d) Consider a feasible basis for the original problem, and assume that all rows of [B^{-1}b | B^{-1}] are lexicographically positive. Let some nonbasic variable x_j enter the basis, and define u = B^{-1}A_j. Let the exiting variable be determined as follows: for every row i such that u_i is positive, divide the ith row of [B^{-1}b | B^{-1}] by u_i, compare the results lexicographically, and choose the exiting variable to be the one corresponding to the lexicographically smallest row. Show that this is the same choice of exiting variable as in the original simplex method applied to the perturbed problem, when ε is sufficiently small.

(e) Explain why the revised simplex method with the lexicographic rule described in part (d) is guaranteed to terminate even in the face of degeneracy.

Exercise 3.16 (Lexicography and the revised simplex method) Suppose that we have a basic feasible solution and an associated basis matrix B such that every row of [B^{-1}b | B^{-1}] is lexicographically positive. Consider a pivoting rule that chooses the entering variable x_j arbitrarily (as long as c̄_j < 0) and the exiting variable as follows. Let u = B^{-1}A_j. For each i with u_i > 0, divide the ith row of [B^{-1}b | B^{-1}] by u_i and choose the row which is lexicographically smallest. If row ℓ was lexicographically smallest, then the ℓth basic variable x_{B(ℓ)} exits the basis. Prove the following:

(a) The row vector (-c_B'B^{-1}b, -c_B'B^{-1}) increases lexicographically at each iteration.

(b) Every row of [B^{-1}b | B^{-1}] is lexicographically positive throughout the algorithm.

(c) The revised simplex method terminates after a finite number of steps.
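The exit rule above compares scaled rows lexicographically, which in Python is just tuple comparison. A minimal sketch (helper name is ours):

```python
from fractions import Fraction as F

def lex_smallest_row(rows, u):
    """Exit rule of Exercise 3.16: among rows i with u_i > 0, divide row i
    by u_i and return the index of the lexicographically smallest result."""
    candidates = [(tuple(x / u[i] for x in rows[i]), i)
                  for i in range(len(u)) if u[i] > 0]
    return min(candidates)[1]   # tuples compare lexicographically

rows = [[F(2), F(1), F(0)],
        [F(4), F(0), F(1)]]
# Scaled rows are (2, 1, 0) and (2, 0, 1/2); the tie on the first entry is
# broken by the second, so row 1 is lexicographically smallest.
assert lex_smallest_row(rows, [F(1), F(2)]) == 1
```

Note that when several rows tie on the first entry (the ratio b̄_i/u_i of the usual ratio test), the remaining entries break the tie, which is exactly how lexicography resolves degeneracy.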

Exercise 3.17 Solve completely (i.e., both Phase I and Phase II) via the simplex method the following problem:

    minimize    2x1 + 3x2 + 3x3 +  x4 - 2x5
    subject to   x1 + 3x2       + 4x4 +  x5 = 2
                 x1 + 2x2       - 3x4 +  x5 = 2
                -x1 - 4x2 + 3x3             = 1
                 x1, ..., x5 ≥ 0.



Exercise 3.18 Consider the simplex method applied to a standard form problem, and assume that the rows of the matrix A are linearly independent. For each of the statements that follow, give either a proof or a counterexample.

(a) An iteration of the simplex method may move the feasible solution by a positive distance while leaving the cost unchanged.

(b) A variable that has just left the basis cannot reenter in the very next iteration.

(c) A variable that has just entered the basis cannot leave in the very next iteration.

(d) If there is a nondegenerate optimal basis, then there exists a unique optimal basis.

(e) If x is an optimal solution, no more than m of its components can be positive, where m is the number of equality constraints.

Exercise 3.19 While solving a standard form problem, we arrive at the following tableau, with x3, x4, and x5 being the basic variables:

          x1   x2   x3   x4   x5

   -10  |  δ   -2    0    0    0

    4   | -1    η    1    0    0

    1   |  α   -4    0    1    0

    β   |  γ    3    0    0    1

The entries α, β, γ, δ, η in the tableau are unknown parameters. For each one of the following statements, find some parameter values that will make the statement true.

(a) The current solution is optimal and there are multiple optimal solutions.

(b) The optimal cost is -∞.

(c) The current solution is feasible but not optimal.

Exercise 3.20 Consider a linear programming problem in standard form, described in terms of the following initial tableau:

         x1   x2   x3   x4   x5   x6   x7

    0  |  0    0    0    δ    3    γ    ξ

    β  |  0    1    0    α    1    0    3

    2  |  0    0    1   -2    2    η    1

    3  |  1    0    0    0   -1    2    1

The entries α, β, γ, δ, η, ξ in the tableau are unknown parameters. Furthermore, let B be the basis matrix corresponding to having x2, x3, and x1 (in that order) be the basic variables. For each one of the following statements, find the ranges of values of the various parameters that will make the statement true.

(a) Phase II of the simplex method can be applied using this as an initial tableau.



(b) The first row in the present tableau indicates that the problem is infeasible.

(c) The corresponding basic solution is feasible, but we do not have an optimal basis.

(d) The corresponding basic solution is feasible and the first simplex iteration indicates that the optimal cost is -∞.

(e) The corresponding basic solution is feasible, x6 is a candidate for entering the basis, and when x6 is the entering variable, x3 leaves the basis.

(f) The corresponding basic solution is feasible, x7 is a candidate for entering the basis, but if it does, the solution and the objective value remain unchanged.

Exercise 3.21 Consider the oil refinery problem from the exercises of Chapter 1.

(a) Use the simplex method to find an optimal solution.

(b) Suppose that the selling price of heating oil is sure to remain fixed over the next month, but the selling price of gasoline may rise. How high can it go without causing the optimal solution to change?

(c) The refinery manager can buy crude oil B on the spot market at $40/barrel, in unlimited quantities. How much should be bought?

Exercise 3.22 Consider the following linear programming problem with a single constraint:

    minimize     Σ_{i=1}^{n} c_i x_i
    subject to   Σ_{i=1}^{n} a_i x_i = b
                 x_i ≥ 0,    i = 1, ..., n.

(a) Derive a simple test for checking the feasibility of this problem.

(b) Assuming that the optimal cost is finite, develop a simple method for obtaining an optimal solution directly.
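One way to see what part (b) is after: a basic solution of a single-constraint problem has at most one nonzero component, so the candidates are the points (b/a_i)e_i. A hedged sketch, under the extra assumptions b > 0 and a finite optimal cost (helper name is ours):

```python
from fractions import Fraction as F

def single_constraint_lp(c, a, b):
    """Sketch for Exercise 3.22(b), assuming b > 0 and a finite optimal
    cost: basic solutions have a single nonzero component, so an optimum
    is x = (b/a_i) e_i for an index i with a_i > 0 minimizing c_i/a_i."""
    candidates = [i for i in range(len(a)) if a[i] > 0]
    if not candidates:
        return None                     # infeasible when b > 0
    i = min(candidates, key=lambda i: F(c[i], a[i]))
    x = [F(0)] * len(a)
    x[i] = F(b, a[i])
    return x

# Ratios c_i/a_i are 3, 1/2, 2/5; the smallest is at i = 2.
x = single_constraint_lp([3, 1, 2], [1, 2, 5], 4)
assert x == [F(0), F(0), F(4, 5)]
```

For b < 0 the symmetric rule applies with the a_i < 0 indices, and for b = 0 the zero vector is feasible; checking when the cost is actually finite is part of the exercise.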

Exercise 3.23 While solving a linear programming problem by the simplex method, the following tableau is obtained at some iteration:

[tableau with basic variables x1, ..., xm and reduced costs c̄_{m+1}, ..., c̄_n in the zeroth row]

Assume that in this tableau we have c̄_j ≥ 0 for j = m+1, ..., n-1, and c̄_n < 0. In particular, x_n is the only candidate for entering the basis.

(a) Suppose that x_n indeed enters the basis and that this is a nondegenerate pivot (that is, θ* ≠ 0). Prove that x_n will remain basic in all subsequent


iterations of the algorithm, and that x_s is a basic variable in any optimal basis.

(b) Suppose that x_s indeed enters the basis and that this is a degenerate pivot (that is, θ* = 0). Show that x_s need not be basic in an optimal basic feasible solution.

Exercise 3.24 Show that in Phase I of the simplex method, if an artificial variable becomes nonbasic, it need never again become basic. Thus, once an artificial variable becomes nonbasic, its column can be eliminated from the tableau.

Exercise 3.25 (The simplex method with upper bound constraints) Consider a problem of the form

minimize    c'x
subject to  Ax = b
            0 ≤ x ≤ u,

where A has linearly independent rows and dimensions m × n. Assume that u_i > 0 for all i.

(a) Let A_B(1), …, A_B(m) be m linearly independent columns of A (the "basic" columns). We partition the set of all indices i ≠ B(1), …, B(m) into two disjoint subsets L and U. We set x_i = 0 for all i ∈ L, and x_i = u_i for all i ∈ U. We then solve the equation Ax = b for the basic variables x_B(1), …, x_B(m). Show that the resulting vector x is a basic solution. Also, show that it is nondegenerate if and only if 0 < x_i < u_i for every basic variable x_i.

(b) For this part and the next, assume that the basic solution constructed in part (a) is feasible. We form the simplex tableau and compute the reduced costs as usual. Let x_j be some nonbasic variable such that x_j = 0 and c̄_j < 0. As in Section 3.2, we increase x_j by θ, and adjust the basic variables from x_B to x_B − θB⁻¹A_j. Given that we wish to preserve feasibility, what is the largest possible value of θ? How are the new basic columns determined?

(c) Let x_j be some nonbasic variable such that x_j = u_j and c̄_j > 0. We decrease x_j by θ, and adjust the basic variables from x_B to x_B + θB⁻¹A_j. Given that we wish to preserve feasibility, what is the largest possible value of θ? How are the new basic columns determined?

(d) Assuming that every basic feasible solution is nondegenerate, show that the cost strictly decreases with each iteration and the method terminates.
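For parts (b) and (c), the largest step θ comes from a ratio test that respects both the lower and the upper bounds of the basic variables. The sketch below handles the part-(b) case of increasing a nonbasic x_j from 0; the helper is hypothetical, written only to illustrate the bound logic:

```python
def max_step_increase(x_B, u_B, d, u_j):
    """Part (b): nonbasic x_j rises from 0 to theta; the basic variables move
    from x_B to x_B - theta*d, where d = B^{-1} A_j.  Feasibility requires
    0 <= x_B - theta*d <= u_B componentwise, and theta <= u_j."""
    theta = u_j  # x_j itself may not exceed its own upper bound
    for x_i, u_i, d_i in zip(x_B, u_B, d):
        if d_i > 0:
            # this basic variable decreases: its lower bound 0 may bind
            theta = min(theta, x_i / d_i)
        elif d_i < 0:
            # this basic variable increases: its upper bound may bind
            theta = min(theta, (u_i - x_i) / (-d_i))
    return theta
```

If θ = u_j, the variable x_j simply moves from the set L to the set U and the basis is unchanged; otherwise, the basic variable whose bound becomes active leaves the basis.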

Exercise 3.26 (The big-M method) Consider the variant of the big-M method in which M is treated as an undetermined large parameter. Prove the following.

(a) If the simplex method terminates with a solution (x, y) for which y = 0, then x is an optimal solution to the original problem.

(b) If the simplex method terminates with a solution (x, y) for which y ≠ 0, then the original problem is infeasible.

(c) If the simplex method terminates with an indication that the optimal cost in the auxiliary problem is −∞, show that the original problem is either


infeasible or its optimal cost is −∞. Hint: When the simplex method terminates, it has discovered a feasible direction d = (d_x, d_y) of cost decrease. Show that d_y = 0.

(d) Provide examples to show that both alternatives in part (c) are possible.

Exercise 3.27 (a) Suppose that we wish to find a vector x ∈ ℝⁿ that satisfies Ax = 0 and x ≥ 0, and such that the number of positive components of x is maximized. Show that this can be accomplished by solving the linear programming problem

maximize    ∑_{i=1}^n y_i
subject to  Az = 0,
            y_i ≤ 1,   for all i,
            z ≥ y ≥ 0.

(b) Suppose that we wish to find a vector x ∈ ℝⁿ that satisfies Ax = b and x ≥ 0, and such that the number of positive components of x is maximized. Show how this can be accomplished by solving a single linear programming problem.
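The reduction in part (a) can be checked numerically. The sketch below uses scipy.optimize.linprog; the matrix A and the restatement as a minimization are illustrative assumptions. With A chosen so that Az = 0 forces z1 = z2 = z3, the auxiliary LP finds y = (1, 1, 1), i.e., a vector x = z with three positive components:

```python
import numpy as np
from scipy.optimize import linprog

# Az = 0 with this A forces z1 = z2 = z3, so x = (1, 1, 1) is attainable.
A = np.array([[1.0, -1.0, 0.0],
              [0.0, 1.0, -1.0]])
n = A.shape[1]

# Variables: (z, y) in R^{2n}.  maximize sum(y)  <=>  minimize -sum(y).
c = np.concatenate([np.zeros(n), -np.ones(n)])
# Equality constraints: A z = 0.
A_eq = np.hstack([A, np.zeros_like(A)])
b_eq = np.zeros(A.shape[0])
# Inequalities: y - z <= 0, i.e., z >= y.
A_ub = np.hstack([-np.eye(n), np.eye(n)])
b_ub = np.zeros(n)
# Bounds: z >= 0 (unbounded above), 0 <= y <= 1.
bounds = [(0, None)] * n + [(0, 1)] * n

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
max_positive_components = round(-res.fun)
```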

Exercise 3.28 Consider a linear programming problem in standard form with a bounded feasible set. Furthermore, suppose that we know the value of a scalar U such that any feasible solution satisfies x_i ≤ U, for all i. Show that the problem can be transformed into an equivalent one that contains the constraint ∑_{i=1}^{n+1} x_i = 1.

Exercise 3.29 Consider the simplex method viewed in terms of column geometry. Show that the m + 1 basic points (A_B(i), c_B(i)), as defined in Section 3.6, are affinely independent.

Exercise 3.30 Consider the simplex method viewed in terms of column geometry. In the terminology of Section 3.6, show that the vertical distance from the dual plane to a point (A_j, c_j) is equal to the reduced cost of the variable x_j.

Exercise 3.31 Consider the linear programming problem:

minimize Xl 3X2 X X4

subjet to Xl 3X2 X X4 X X2 X 3X4 2X X2 X X4 1X X 4 0

where β1 and β2 are free parameters. Let P(β1, β2) be the feasible set. Use the column geometry of linear programming to answer the following questions.

(a) Characterize explicitly (preferably with a picture) the set of all (β1, β2) for which P(β1, β2) is nonempty.


(b) Characterize explicitly (preferably with a picture) the set of all (β1, β2) for which some basic feasible solution is degenerate.

(c) There are four bases in this problem; in the ith basis, all variables except for x_i are basic. For every (β1, β2) for which there exists a degenerate basic feasible solution, enumerate all bases that correspond to each degenerate basic feasible solution.

(d) For i = 1, …, 4, let S_i = {(β1, β2) | the ith basis is optimal}. Identify, preferably with a picture, the sets S_1, …, S_4.

(e) For which values of (β1, β2) is the optimal solution degenerate?

(f) Let β1 = /5 and β2 = 7/5. Suppose that we start the simplex method with x2, x3, x4 as the basic variables. Which path will the simplex method follow?

Exercise 3.32 Prove Theorem 3.5.

Exercise 3.33 Consider a polyhedron in standard form, and let x and y be two different basic feasible solutions. If we are allowed to move from any basic feasible solution to an adjacent one in a single step, show that we can go from x to y in a finite number of steps.

3.10 Notes and sources

3.2 The simplex method was developed by Dantzig (1947), who later wrote a comprehensive text on the subject (Dantzig, 1963).

3.3 For more discussion of practical implementations of the simplex method based on products of sparse matrices, instead of B⁻¹, see the books by Gill, Murray, and Wright (1981), Chvátal (1983), Murty (1983), and Luenberger (1984). An excellent introduction to numerical linear algebra is the text by Golub and Van Loan (1983). Example 3.6, on the possibility of cycling, is due to Beale (1955).

If we have upper bounds for all or some of the variables, instead of converting the problem to standard form, we can use a suitable adaptation of the simplex method. This is developed in Exercise 3.25 and in the textbooks that we mentioned earlier.

3.4 The lexicographic anticycling rule is due to Dantzig, Orden, and Wolfe (1955). It can be viewed as an outgrowth of a perturbation method developed by Orden and also by Charnes (1952). For an exposition of the perturbation method, see Chvátal (1983) and Murty (1983), as well as Exercise 3.15. The smallest subscript rule is due to Bland (1977). A proof that Bland's rule avoids cycling can also be found in Papadimitriou and Steiglitz (1982), Chvátal (1983), or Murty (1983).

3.6 The column geometry interpretation of the simplex method is due to Dantzig (1963). For further discussion, see Stone and Tovey (1991).


3.7 The example showing that the simplex method can take an exponential number of iterations is due to Klee and Minty (1972). The Hirsch conjecture was made by Hirsch in 1957. The first results on the average case behavior of the simplex method were obtained by Borgwardt (1982) and Smale (1983). Schrijver (1986) contains an overview of the early research in this area, as well as a proof of the n/2 bound on the number of pivots due to Haimovich (1983).

The results in Exercises 3.10 and 3.11, dealing with the smallest examples of cycling, are due to Marshall and Suurballe (1969). The matrix inversion lemma [Exercise 3.13(a)] is also known as the Sherman-Morrison formula.


Chapter 4

Duality theory

Contents

4.1. Motivation
4.2. The dual problem
4.3. The duality theorem
4.4. Optimal dual variables as marginal costs
4.5. Standard form problems and the dual simplex method
4.6. Farkas' lemma and linear inequalities
4.7. From separating hyperplanes to duality*
4.8. Cones and extreme rays
4.9. Representation of polyhedra
4.10. General linear programming duality*
4.11. Summary
4.12. Exercises
4.13. Notes and sources


In this chapter, we start with a linear programming problem, called the primal, and introduce another linear programming problem, called the dual. Duality theory deals with the relation between these two problems and uncovers the deeper structure of linear programming. It is a powerful theoretical tool that has numerous applications, provides new geometric insights, and leads to another algorithm for linear programming (the dual simplex method).

4.1 Motivation

Duality theory can be motivated as an outgrowth of the Lagrange multiplier method, often used in calculus to minimize a function subject to equality constraints. For example, in order to solve the problem

minimize    x² + y²
subject to  x + y = 1,

we introduce a Lagrange multiplier p and form the Lagrangean L(x, y, p) defined by

L(x, y, p) = x² + y² + p(1 − x − y).

While keeping p fixed, we minimize the Lagrangean over all x and y, subject to no constraints, which can be done by setting ∂L/∂x and ∂L/∂y to zero. The optimal solution to this unconstrained problem is

x = y = p/2,

and depends on p. The constraint x + y = 1 gives us the additional relation p = 1, and the optimal solution to the original problem is x = y = 1/2.

The main idea in the above example is the following. Instead of enforcing the hard constraint x + y = 1, we allow it to be violated and associate a Lagrange multiplier, or price, p with the amount 1 − x − y by which it is violated. This leads to the unconstrained minimization of x² + y² + p(1 − x − y). When the price is properly chosen (p = 1, in our example), the optimal solution to the constrained problem is also optimal for the unconstrained problem. In particular, under that specific value of p, the presence or absence of the hard constraint does not affect the optimal cost.

The situation in linear programming is similar: we associate a price variable with each constraint and start searching for prices under which the presence or absence of the constraints does not affect the optimal cost. It turns out that the right prices can be found by solving a new linear programming problem, called the dual of the original. We now motivate the form of the dual problem.
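The Lagrangean example above can be checked numerically. The sketch below (plain Python; the grid search is only for illustration) computes g(p) using the unconstrained minimizer x = y = p/2 and confirms that g is maximized at p = 1 with value 1/2, the optimal cost of the constrained problem:

```python
def g(p):
    # For fixed p, the unconstrained minimum of x^2 + y^2 + p(1 - x - y)
    # is attained at x = y = p/2 (set the partial derivatives to zero).
    x = y = p / 2.0
    return x ** 2 + y ** 2 + p * (1.0 - x - y)

# g(p) = p - p^2/2, a concave function of p, maximized at p = 1.
values = [(g(k / 1000.0), k / 1000.0) for k in range(3001)]
best_value, best_p = max(values)
```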


Consider the standard form problem

minimize    c'x
subject to  Ax = b
            x ≥ 0,

which we call the primal problem, and let x* be an optimal solution, assumed to exist. We introduce a relaxed problem in which the constraint Ax = b is replaced by a penalty p'(b − Ax), where p is a price vector of the same dimension as b. We are then faced with the problem

minimize    c'x + p'(b − Ax)
subject to  x ≥ 0.

Let g(p) be the optimal cost for the relaxed problem, as a function of the price vector p. The relaxed problem allows for more options than those present in the primal problem, and we expect g(p) to be no larger than the optimal primal cost. Indeed,

g(p) = min_{x≥0} [c'x + p'(b − Ax)] ≤ c'x* + p'(b − Ax*) = c'x*,

where the last inequality follows from the fact that x* is a feasible solution to the primal problem, and satisfies Ax* = b. Thus, each p leads to a lower bound g(p) for the optimal cost c'x*. The problem

maximize    g(p)
subject to  no constraints

can then be interpreted as a search for the tightest possible lower bound of this type, and is known as the dual problem. The main result in duality theory asserts that the optimal cost in the dual problem is equal to the optimal cost in the primal. In other words, when the prices are chosen according to an optimal solution for the dual problem, the option of violating the constraints Ax = b is of no value.

Using the definition of g(p), we have

g(p) = min_{x≥0} [c'x + p'(b − Ax)] = p'b + min_{x≥0} (c' − p'A)x.

Note that

min_{x≥0} (c' − p'A)x = { 0,     if c' − p'A ≥ 0',
                          −∞,    otherwise.

In maximizing g(p), we only need to consider those values of p for which g(p) is not equal to −∞. We therefore conclude that the dual problem is the same as the linear programming problem

maximize    p'b
subject to  p'A ≤ c'.


In the preceding example, we had a problem with equality constraints Ax = b, and there were no constraints on the sign of the price vector p. If the primal problem had instead inequality constraints of the form Ax ≥ b, they could be replaced by Ax − s = b, s ≥ 0, and the equality constraints can be written in the form [A  −I](x, s) = b. This leads to the dual constraints

p'[A  −I] ≤ [c'  0'],

or, equivalently,

p'A ≤ c',     p ≥ 0.

Also, if the vector x is free rather than sign-constrained, we use the fact

min_x (c' − p'A)x = { 0,     if c' − p'A = 0',
                      −∞,    otherwise,

and we end up with the dual constraint p'A = c'. These considerations motivate the general form of the dual problem which we introduce in the next section. In summary, the construction of the dual of a primal minimization problem can be viewed as follows: we have a vector of parameters (dual variables) p, and for every p we have a method for obtaining a lower bound on the optimal primal cost. The dual problem is a maximization problem that looks for the tightest such lower bound. For some vectors p, the corresponding lower bound is equal to −∞, and does not carry any useful information. Thus, we only need to maximize over those p that lead to nontrivial lower bounds, and this is what gives rise to the dual constraints.

4.2 The dual problem

Let A be a matrix with rows a_i' and columns A_j. Given a primal problem with the structure shown on the left, its dual is defined to be the maximization problem shown on the right:

minimize    c'x                        maximize    p'b
subject to  a_i'x ≥ b_i,   i ∈ M1,     subject to  p_i ≥ 0,       i ∈ M1,
            a_i'x ≤ b_i,   i ∈ M2,                 p_i ≤ 0,       i ∈ M2,
            a_i'x = b_i,   i ∈ M3,                 p_i free,      i ∈ M3,
            x_j ≥ 0,       j ∈ N1,                 p'A_j ≤ c_j,   j ∈ N1,
            x_j ≤ 0,       j ∈ N2,                 p'A_j ≥ c_j,   j ∈ N2,
            x_j free,      j ∈ N3,                 p'A_j = c_j,   j ∈ N3.


Notice that for each constraint in the primal (other than the sign constraints), we introduce a variable in the dual problem; for each variable in the primal, we introduce a constraint in the dual. Depending on whether the primal constraint is an equality or inequality constraint, the corresponding dual variable is either free or sign-constrained, respectively. In addition, depending on whether a variable in the primal problem is free or sign-constrained, we have an equality or inequality constraint, respectively, in the dual problem. We summarize these relations in Table 4.1.

PRIMAL         minimize       maximize       DUAL

constraints    a_i'x ≥ b_i    p_i ≥ 0        variables
               a_i'x ≤ b_i    p_i ≤ 0
               a_i'x = b_i    p_i free

variables      x_j ≥ 0        p'A_j ≤ c_j    constraints
               x_j ≤ 0        p'A_j ≥ c_j
               x_j free       p'A_j = c_j

Table 4.1: Relation between primal and dual variables and constraints.

If we start with a maximization problem, we can always convert it into an equivalent minimization problem, and then form its dual according to the rules we have described. However, to avoid confusion, we will adhere to the convention that the primal is a minimization problem, and its dual is a maximization problem. Finally, we will keep referring to the objective function in the dual problem as a "cost" that is being maximized.

A problem and its dual can be stated more compactly, in matrix notation, if a particular form is assumed for the primal. We have, for example, the following pairs of primal and dual problems:

minimize    c'x          maximize    p'b
subject to  Ax ≥ b       subject to  p ≥ 0
            x ≥ 0                    p'A ≤ c',

and

minimize    c'x          maximize    p'b
subject to  Ax ≥ b       subject to  p ≥ 0
                                     p'A = c'.

Example 4.1 Consider the primal problem shown on the left and its dual, shown


on the right:

minimize    x1 + 2x2 + 3x3           maximize    5p1 + 6p2 + 4p3
subject to  −x1 + 3x2      = 5       subject to  p1 free
            2x1 − x2 + 3x3 ≥ 6                   p2 ≥ 0
            x3             ≤ 4                   p3 ≤ 0
            x1 ≥ 0                               −p1 + 2p2 ≤ 1
            x2 ≤ 0                               3p1 − p2  ≥ 2
            x3 free,                             3p2 + p3  = 3.

We transform the dual into an equivalent minimization problem, rename the variables from p1, p2, p3 to x1, x2, x3, and multiply the three last constraints by −1. The resulting problem is shown on the left. Then, on the right, we show its dual:

minimize    −5x1 − 6x2 − 4x3         maximize    −p1 − 2p2 − 3p3
subject to  x1 free                  subject to  p1 − 3p2        = −5
            x2 ≥ 0                               −2p1 + p2 − 3p3 ≤ −6
            x3 ≤ 0                               −p3             ≥ −4
            x1 − 2x2  ≥ −1                       p1 ≥ 0
            −3x1 + x2 ≤ −2                       p2 ≤ 0
            −3x2 − x3 = −3,                      p3 free.

We observe that the latter problem is equivalent to the primal problem we started with. (The first three constraints in the latter problem are the same as the first three constraints in the original problem, multiplied by −1. Also, if the maximization in the latter problem is changed to a minimization, by multiplying the objective function by −1, we obtain the cost function in the original problem.)

The first primal problem considered in Example 4.1 had all of the ingredients of a general linear programming problem. This suggests that the conclusion reached at the end of the example should hold in general. Indeed, we have the following result. Its proof needs nothing more than the steps followed in Example 4.1, with abstract symbols replacing specific numbers, and will therefore be omitted.

Theorem 4.1 If we transform the dual into an equivalent minimization problem and then form its dual, we obtain a problem equivalent to the original problem.

A compact statement that is often used to describe Theorem 4.1 is that "the dual of the dual is the primal."

Any linear programming problem can be manipulated into one of several equivalent forms, for example, by introducing slack variables or by using the difference of two nonnegative variables to replace a single free variable. Each equivalent form leads to a somewhat different form for the dual problem. Nevertheless, the examples that follow indicate that the duals of equivalent problems are equivalent.


Example 4.2 Consider the primal problem shown on the left and its dual, shown on the right:

minimize    c'x         maximize    p'b
subject to  Ax ≥ b      subject to  p ≥ 0
            x free                  p'A = c'.

We transform the primal problem by introducing surplus variables and then obtain its dual:

minimize    c'x + 0's        maximize    p'b
subject to  Ax − s = b       subject to  p free
            x free                       p'A = c'
            s ≥ 0                        −p ≤ 0.

Alternatively, if we take the original primal problem and replace x by sign-constrained variables, we obtain the following pair of problems:

minimize    c'x⁺ − c'x⁻          maximize    p'b
subject to  Ax⁺ − Ax⁻ ≥ b        subject to  p ≥ 0
            x⁺ ≥ 0                           p'A ≤ c'
            x⁻ ≥ 0                           −p'A ≤ −c'.

Note that we have three equivalent forms of the primal. We observe that the constraint p ≥ 0 is equivalent to the constraint −p ≤ 0. Furthermore, the constraint p'A = c' is equivalent to the two constraints p'A ≤ c' and −p'A ≤ −c'. Thus, the duals of the three variants of the primal problem are also equivalent.

The next example is in the same spirit and examines the effect of removing redundant equality constraints in a standard form problem.

Example 4.3 Consider a standard form problem, assumed feasible, and its dual:

minimize    c'x         maximize    p'b
subject to  Ax = b      subject to  p'A ≤ c'.
            x ≥ 0

Let a_1', …, a_m' be the rows of A and suppose that a_m' = ∑_{i=1}^{m−1} γ_i a_i', for some scalars γ_1, …, γ_{m−1}. In particular, the last equality constraint is redundant and can be eliminated. By considering an arbitrary feasible solution x, we obtain

b_m = a_m'x = ∑_{i=1}^{m−1} γ_i a_i'x = ∑_{i=1}^{m−1} γ_i b_i.     (4.1)

Note that the dual constraints are of the form ∑_{i=1}^m p_i a_i' ≤ c', and can be rewritten as

∑_{i=1}^{m−1} (p_i + γ_i p_m) a_i' ≤ c'.

Furthermore, using Eq. (4.1), the dual cost ∑_{i=1}^m p_i b_i is equal to

∑_{i=1}^{m−1} (p_i + γ_i p_m) b_i.


If we now let q_i = p_i + γ_i p_m, we see that the dual problem is equivalent to

maximize    ∑_{i=1}^{m−1} q_i b_i
subject to  ∑_{i=1}^{m−1} q_i a_i' ≤ c'.

We observe that this is the exact same dual that we would have obtained if we had eliminated the last (and redundant) constraint in the primal problem, before forming the dual.

The conclusions of the preceding two examples are summarized and generalized by the following result.

Theorem 4.2 Suppose that we have transformed a linear programming problem Π1 to another linear programming problem Π2, by a sequence of transformations of the following types:

(a) Replace a free variable with the difference of two nonnegative variables.

(b) Replace an inequality constraint by an equality constraint involving a nonnegative slack variable.

(c) If some row of the matrix A in a feasible standard form problem is a linear combination of the other rows, eliminate the corresponding equality constraint.

Then, the duals of Π1 and Π2 are equivalent, i.e., they are either both infeasible or they have the same optimal cost.

The proof of Theorem 4.2 involves a combination of the various steps in Examples 4.2 and 4.3, and is left to the reader.

4.3 The duality theorem

We saw in Section 4.1 that, for problems in standard form, the cost g(p) of any dual solution provides a lower bound for the optimal cost. We now show that this property is true in general.

Theorem 4.3 (Weak duality) If x is a feasible solution to the primal problem and p is a feasible solution to the dual problem, then

p'b ≤ c'x.


Proof. For any vectors x and p, we define

u_i = p_i(a_i'x − b_i),     v_j = (c_j − p'A_j)x_j.

Suppose that x and p are primal and dual feasible, respectively. The definition of the dual problem requires the sign of p_i to be the same as the sign of a_i'x − b_i, and the sign of c_j − p'A_j to be the same as the sign of x_j. Thus, primal and dual feasibility imply that

u_i ≥ 0,   for all i,        and        v_j ≥ 0,   for all j.

Note that

∑_i u_i = p'Ax − p'b,

and

∑_j v_j = c'x − p'Ax.

We add these two equalities and use the nonnegativity of u_i, v_j, to obtain

0 ≤ ∑_i u_i + ∑_j v_j = c'x − p'b.

The weak duality theorem is not a deep result, yet it does provide some useful information about the relation between the primal and the dual. We have, for example, the following corollary.
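Weak duality is easy to check on a concrete standard form pair; the data below are invented for illustration. With A = [1 1], b = 1, c = (2, 3), the point x = (1, 0) is primal feasible and p = 1.5 is dual feasible, so p'b can never exceed c'x:

```python
import numpy as np

A = np.array([[1.0, 1.0]])
b = np.array([1.0])
c = np.array([2.0, 3.0])

x = np.array([1.0, 0.0])   # primal feasible: Ax = b and x >= 0
p = np.array([1.5])        # dual feasible: p'A = (1.5, 1.5) <= c'

assert np.allclose(A @ x, b) and np.all(x >= 0)
assert np.all(p @ A <= c + 1e-12)

weak_duality_gap = c @ x - p @ b   # c'x - p'b = 2.0 - 1.5, nonnegative
```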

Corollary 4.1

(a) If the optimal cost in the primal is −∞, then the dual problem must be infeasible.

(b) If the optimal cost in the dual is +∞, then the primal problem must be infeasible.

Proof. Suppose that the optimal cost in the primal problem is −∞ and that the dual problem has a feasible solution p. By weak duality, p satisfies p'b ≤ c'x for every primal feasible x. Taking the minimum over all primal feasible x, we conclude that p'b ≤ −∞. This is impossible and shows that the dual cannot have a feasible solution, thus establishing part (a). Part (b) follows by a symmetrical argument.

Another important corollary of the weak duality theorem is the following.


Corollary 4.2 Let x and p be feasible solutions to the primal and the dual, respectively, and suppose that p'b = c'x. Then, x and p are optimal solutions to the primal and the dual, respectively.

Proof. Let x and p be as in the statement of the corollary. For every primal feasible solution y, the weak duality theorem yields c'x = p'b ≤ c'y, which proves that x is optimal. The proof of optimality of p is similar.

The next theorem is the central result on linear programming duality.

Theorem 4.4 (Strong duality) If a linear programming problem has an optimal solution, so does its dual, and the respective optimal costs are equal.

Proof. Consider the standard form problem

minimize    c'x
subject to  Ax = b
            x ≥ 0.

Let us assume temporarily that the rows of A are linearly independent and that there exists an optimal solution. Let us apply the simplex method to this problem. As long as cycling is avoided, e.g., by using the lexicographic pivoting rule, the simplex method terminates with an optimal solution x and an optimal basis B. Let x_B = B⁻¹b be the corresponding vector of basic variables. When the simplex method terminates, the reduced costs must be nonnegative and we obtain

c' − c_B'B⁻¹A ≥ 0',

where c_B is the vector with the costs of the basic variables. Let us define a vector p by letting p' = c_B'B⁻¹. We then have p'A ≤ c', which shows that p is a feasible solution to the dual problem

maximize    p'b
subject to  p'A ≤ c'.

In addition,

p'b = c_B'B⁻¹b = c_B'x_B = c'x.

It follows that p is an optimal solution to the dual (cf. Corollary 4.2), and the optimal dual cost is equal to the optimal primal cost.

If we are dealing with a general linear programming problem Π1 that has an optimal solution, we first transform it to an equivalent standard


form problem Π2, with the same optimal cost, and in which the rows of the matrix A are linearly independent. Let D1 and D2 be the duals of Π1 and Π2, respectively. By Theorem 4.2, the dual problems D1 and D2 have the same optimal cost. We have already proved that Π2 and D2 have the same optimal cost. It follows that Π1 and D1 have the same optimal cost (see Figure 4.1).

[Figure: the original problem is transformed to an equivalent standard form problem; duality for standard form problems applies to the latter; the duals of equivalent problems are equivalent.]

Figure 4.1: Proof of the duality theorem for general linear programming problems.
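Strong duality can be observed numerically by solving a primal and its dual separately and comparing optimal costs. The sketch below uses scipy.optimize.linprog on an invented standard form pair (min c'x s.t. Ax = b, x ≥ 0, and max p'b s.t. p'A ≤ c'):

```python
import numpy as np
from scipy.optimize import linprog

A = np.array([[1.0, 1.0]])
b = np.array([1.0])
c = np.array([2.0, 3.0])

# Primal: minimize c'x subject to Ax = b, x >= 0.
primal = linprog(c, A_eq=A, b_eq=b, bounds=[(0, None)] * 2)

# Dual: maximize p'b subject to p'A <= c' (p free), posed as a minimization.
dual = linprog(-b, A_ub=A.T, b_ub=c, bounds=[(None, None)])

primal_cost = primal.fun
dual_cost = -dual.fun   # undo the sign flip used to pose the dual as a min
```

Both solves should report the same optimal cost, here 2 (x = (1, 0) and p = 2).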

The preceding proof shows that an optimal solution to the dual problem is obtained as a byproduct of the simplex method as applied to a primal problem in standard form. It is based on the fact that the simplex method is guaranteed to terminate and this, in turn, depends on the existence of pivoting rules that prevent cycling. There is an alternative derivation of the duality theorem, which provides a geometric, algorithm-independent view of the subject, and which is developed in Section 4.7. At this point, we provide an illustration that conveys most of the content of the geometric proof.

Example 4.4 Consider a solid ball constrained to lie in a polyhedron defined by inequality constraints of the form a_i'x ≥ b_i. If left under the influence of gravity, this ball reaches equilibrium at the lowest corner x* of the polyhedron; see Figure 4.2. This corner is an optimal solution to the problem

minimize    c'x
subject to  a_i'x ≥ b_i,   for all i,

where c is a vertical vector pointing upwards. At equilibrium, gravity is counterbalanced by the forces exerted on the ball by the "walls" of the polyhedron. The latter forces are normal to the walls, that is, they are aligned with the vectors a_i. We conclude that c = ∑_i p_i a_i, for some nonnegative coefficients p_i; in particular,


the vector p is a feasible solution to the dual problem

maximize    p'b
subject to  p'A = c'
            p ≥ 0.

Given that forces can only be exerted by the walls that touch the ball, we must have p_i = 0 whenever a_i'x* > b_i. Consequently, p_i(b_i − a_i'x*) = 0 for all i. We therefore have p'b = ∑_i p_i b_i = ∑_i p_i a_i'x* = c'x*. It follows (Corollary 4.2) that p is an optimal solution to the dual, and the optimal dual cost is equal to the optimal primal cost.

Figure 4.2: A mechanical analogy of the duality theorem.

Recall that in a linear programming problem, exactly one of the following three possibilities will occur:

(a) There is an optimal solution.

(b) The problem is "unbounded"; that is, the optimal cost is −∞ (for minimization problems), or +∞ (for maximization problems).

(c) The problem is infeasible.

This leads to nine possible combinations for the primal and the dual, which are shown in Table 4.2. By the strong duality theorem, if one problem has an optimal solution, so does the other. Furthermore, as discussed earlier, the weak duality theorem implies that if one problem is unbounded, the other must be infeasible. This allows us to mark some of the entries in Table 4.2 as impossible.


                  Finite optimum   Unbounded    Infeasible

Finite optimum    Possible         Impossible   Impossible
Unbounded         Impossible       Impossible   Possible
Infeasible        Impossible       Possible     Possible

Table 4.2: The different possibilities for the primal and the dual.

The case where both problems are infeasible can indeed occur, as shown by the following example.

Example 4.5 Consider the infeasible primal

minimize    x1 + 2x2
subject to  x1 + x2   = 1
            2x1 + 2x2 = 3.

Its dual is

maximize    p1 + 3p2
subject to  p1 + 2p2 = 1
            p1 + 2p2 = 2,

which is also infeasible.

There is another interesting relation between the primal and the dual which is known as Clark's theorem (Clark, 1961). It asserts that unless both problems are infeasible, at least one of them must have an unbounded feasible set (Exercise 4.21).

Complementary slackness

An important relation between primal and dual optimal solutions is provided by the complementary slackness conditions, which we present next.

Theorem 4.5 (Complementary slackness) Let x and p be feasible solutions to the primal and the dual problem, respectively. The vectors x and p are optimal solutions for the two respective problems if and only if:

p_i(a_i'x − b_i) = 0,   for all i,
(c_j − p'A_j)x_j = 0,   for all j.

Proof. In the proof of Theorem 4.3, we defined u_i = p_i(a_i'x − b_i) and v_j = (c_j − p'A_j)x_j, and noted that for x primal feasible and p dual feasible,


we have u_i ≥ 0 and v_j ≥ 0 for all i and j. In addition, we noted that

c'x − p'b = ∑_i u_i + ∑_j v_j.

By the strong duality theorem, if x and p are optimal, then c'x = p'b, which implies that u_i = v_j = 0 for all i, j. Conversely, if u_i = v_j = 0 for all i, j, then c'x = p'b, and Corollary 4.2 implies that x and p are optimal.

The first complementary slackness condition is automatically satisfied by every feasible solution to a problem in standard form. If the primal problem is not in standard form and has a constraint like a_i'x ≥ b_i, the corresponding complementary slackness condition asserts that the dual variable p_i is zero unless the constraint is active. An intuitive explanation is that a constraint which is not active at an optimal solution can be removed from the problem without affecting the optimal cost, and there is no point in associating a nonzero price with such a constraint. Note also the analogy with Example 4.4, where "forces" were only exerted by the active constraints.

If the primal problem is in standard form and a nondegenerate optimal basic feasible solution is known, the complementary slackness conditions determine a unique solution to the dual problem. We illustrate this fact in the next example.

Example 4.6 Consider a problem in standard form and its dual:

minimize    13x1 + 10x2 + 6x3          maximize    8p1 + 3p2
subject to  5x1 + x2 + 3x3 = 8         subject to  5p1 + 3p2 ≤ 13
            3x1 + x2       = 3                     p1 + p2   ≤ 10
            x1, x2, x3 ≥ 0,                        3p1       ≤ 6.

As will be verified shortly, the vector x* = (1, 0, 1) is a nondegenerate optimal solution to the primal problem. Assuming this to be the case, we use the complementary slackness conditions to construct the optimal solution to the dual. The condition p_i(a_i'x* − b_i) = 0 is automatically satisfied for each i, since the primal is in standard form. The condition (c_j − p'A_j)x_j* = 0 is clearly satisfied for j = 2, because x2* = 0. However, since x1* > 0 and x3* > 0, we obtain

5p1 + 3p2 = 13   and   3p1 = 6,

which we can solve to obtain p1 = 2 and p2 = 1. Note that this is a dual feasible solution whose cost is equal to 19, which is the same as the cost of x*. This verifies that x* is indeed an optimal solution, as claimed earlier.
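The computation in Example 4.6 can be replayed in code: this sketch solves the primal with scipy.optimize.linprog, recovers p from the two complementary slackness equations, and checks dual feasibility and equality of costs:

```python
import numpy as np
from scipy.optimize import linprog

c = np.array([13.0, 10.0, 6.0])
A = np.array([[5.0, 1.0, 3.0],
              [3.0, 1.0, 0.0]])
b = np.array([8.0, 3.0])

# Primal: minimize c'x subject to Ax = b, x >= 0.
res = linprog(c, A_eq=A, b_eq=b, bounds=[(0, None)] * 3)
x = res.x   # the optimal solution (1, 0, 1), with cost 19

# Complementary slackness for x1 > 0 and x3 > 0:
#   5 p1 + 3 p2 = 13   and   3 p1 = 6.
p = np.linalg.solve(np.array([[5.0, 3.0],
                              [3.0, 0.0]]), np.array([13.0, 6.0]))

dual_feasible = bool(np.all(p @ A <= c + 1e-9))   # p = (2, 1) satisfies p'A <= c'
```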

We now generalize the above example. Suppose that x_j is a basic variable in a nondegenerate optimal basic feasible solution to a primal


problem in standard form. Then, the complementary slackness condition (c_j − p'A_j)x_j = 0 yields p'A_j = c_j for every such j. Since the basic columns A_j are linearly independent, we obtain a system of equations for p which has a unique solution, namely p' = c_B'B⁻¹. A similar conclusion can also be drawn for problems not in standard form (Exercise 4.12). On the other hand, if we are given a degenerate optimal basic feasible solution to the primal, complementary slackness may be of very little help in determining an optimal solution to the dual problem (Exercise 4.17).

We also mention that if the primal constraints are of the form Ax ≥ b, x ≥ 0, and the primal problem has an optimal solution, then there exist optimal solutions to the primal and the dual which satisfy strict complementary slackness; that is, a variable in one problem is nonzero if and only if the corresponding constraint in the other problem is active (Exercise 4.20). This result has some interesting applications in discrete optimization, but these lie outside the scope of this book.

A geometric view

We now develop a geometric view that allows us to visualize pairs of primal and dual vectors without having to draw the dual feasible set.

We consider the primal problem

minimize    c'x
subject to  a_i'x ≥ b_i,   i = 1, …, m,

where the dimension of x is equal to n. We assume that the vectors a_i span ℝⁿ. The corresponding dual problem is

maximize    p'b
subject to  ∑_{i=1}^m p_i a_i = c,
            p ≥ 0.

Let I be a subset of {1, …, m} of cardinality n, such that the vectors a_i, i ∈ I, are linearly independent. The system a_i'x = b_i, i ∈ I, has a unique solution, denoted by x^I, which is a basic solution to the primal problem (cf. Definition 2.9 in Section 2.2). We assume that x^I is nondegenerate, that is, a_i'x^I ≠ b_i for i ∉ I.

Let p ∈ ℝᵐ be a dual vector (not necessarily dual feasible), and let us consider what is required for x^I and p to be optimal solutions to the primal and the dual problem, respectively. We need:

(a) a_i'x^I ≥ b_i,   for all i,        (primal feasibility),
(b) p_i = 0,         for all i ∉ I,    (complementary slackness),
(c) ∑_{i=1}^m p_i a_i = c,             (dual feasibility),
(d) p ≥ 0,                             (dual feasibility).


Figure 4.3: Consider a primal problem with two variables and five inequality constraints (m = 5), and suppose that no two of the vectors a_i are collinear. Every two-element subset I of {1, 2, 3, 4, 5} determines basic solutions x^I and p^I of the primal and the dual, respectively.
If I = {1, 2}, x^I is primal infeasible (point A) and p^I is dual infeasible, because c cannot be expressed as a nonnegative linear combination of the vectors a_1 and a_2.
If I = {1, 3}, x^I is primal feasible (point B) and p^I is dual infeasible.
If I = {1, 4}, x^I is primal feasible (point C) and p^I is dual feasible, because c can be expressed as a nonnegative linear combination of the vectors a_1 and a_4. In particular, x^I and p^I are optimal.
If I = {1, 5}, x^I is primal infeasible (point D) and p^I is dual feasible.

Given the complementary slackness condition (b), condition (c) becomes

∑_{i∈I} p_i a_i = c.

Since the vectors a_i, i ∈ I, are linearly independent, this system of equations has a unique solution, which we denote by p^I. In fact, it is readily seen that the vectors a_i, i ∈ I, form a basis for the dual problem (which is in standard form) and p^I is the associated basic solution. For the vector p^I to be dual feasible, we also need it to be nonnegative. We conclude that once the complementary slackness condition (b) is enforced, feasibility of


Figure 4.4: The vector x* is a degenerate basic feasible solution of the primal. If we choose I = {1, 2}, the corresponding dual basic solution p^I is infeasible, because c is not a nonnegative linear combination of a_1, a_2. On the other hand, if we choose I = {1, 3} or I = {2, 3}, the resulting dual basic solution p^I is feasible and, therefore, optimal.

the resulting dual vector p^I is equivalent to c being a nonnegative linear combination of the vectors a_i, i ∈ I, associated with the active primal constraints. This allows us to visualize dual feasibility without having to draw the dual feasible set; see Figure 4.3.

If x* is a degenerate basic solution to the primal, there can be several subsets I such that x^I = x*. Using different choices for I, and by solving the system

∑_{i∈I} p_i a_i = c,

we may obtain several dual basic solutions p^I. It may then well be the case that some of them are dual feasible and some are not; see Figure 4.4. Still, if p^I is dual feasible (i.e., all p_i are nonnegative) and if x* is primal feasible, then x* and p^I are both optimal, because we have been enforcing complementary slackness and Theorem 4.5 applies.

4.4 Optimal dual variables as marginal costs

In this section, we elaborate on the interpretation of the dual variables as prices. This theme will be revisited, in more depth, in Chapter 5. Consider the standard form problem

minimize    c'x
subject to  Ax = b
            x ≥ 0.

We assume that the rows of A are linearly independent and that there is



a nondegenerate basic feasible solution x* which is optimal. Let B be the corresponding basis matrix and let x_B = B^{-1}b be the vector of basic variables, which is positive, by nondegeneracy. Let us now replace b by b + d, where d is a small perturbation vector. Since B^{-1}b > 0, we also have B^{-1}(b + d) > 0, as long as d is small. This implies that the same basis leads to a basic feasible solution of the perturbed problem as well. Perturbing the right-hand side vector b has no effect on the reduced costs associated with this basis. By the optimality of x* in the original problem, the vector of reduced costs c' − c_B'B^{-1}A is nonnegative, and this establishes that the same basis is optimal for the perturbed problem as well. Thus, the optimal cost in the perturbed problem is

c_B'B^{-1}(b + d) = p'(b + d),

where p' = c_B'B^{-1} is an optimal solution to the dual problem. Therefore, a small change of d in the right-hand side vector b results in a change of p'd in the optimal cost. We conclude that each component p_i of the optimal dual vector can be interpreted as the marginal cost (or shadow price) per unit increase of the ith requirement b_i.
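The marginal cost interpretation can be checked numerically. The following sketch (in Python with NumPy; the problem data and the choice of optimal basis are illustrative assumptions, not taken from the text) computes p' = c_B'B^{-1} for a small standard form problem and verifies that perturbing b by d changes the optimal cost by exactly p'd:

```python
import numpy as np

# Illustrative standard form data: minimize c'x subject to Ax = b, x >= 0.
c = np.array([1.0, 1.0, 0.0, 0.0])
A = np.array([[1.0, 2.0, -1.0, 0.0],
              [1.0, 0.0, 0.0, -1.0]])
b = np.array([2.0, 1.0])

basic = [0, 1]                      # an optimal basis (columns A_1, A_2)
B = A[:, basic]
x_B = np.linalg.solve(B, b)         # B^{-1} b = (1, 0.5) > 0: nondegenerate
p = c[basic] @ np.linalg.inv(B)     # dual vector p' = c_B' B^{-1}

# The basis is indeed optimal: all reduced costs c' - p'A are nonnegative.
assert np.all(c - p @ A >= -1e-12)

# Perturb b by a small d: B^{-1}(b + d) stays positive, so the same basis
# remains optimal, and the optimal cost changes by exactly p'd.
d = np.array([0.1, -0.05])
cost      = c[basic] @ np.linalg.solve(B, b)
cost_pert = c[basic] @ np.linalg.solve(B, b + d)
print(p)                            # the shadow prices, here (0.5, 0.5)
print(cost_pert - cost, p @ d)      # equal, up to roundoff
```

Here both printed quantities are 0.025: increasing b_1 by 0.1 costs 0.1·p_1, while decreasing b_2 by 0.05 saves 0.05·p_2.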

We conclude with yet another interpretation of duality, for standard form problems. In order to develop some concrete intuition, we phrase our discussion in terms of the diet problem (Example 1.3 in Section 1.1). We interpret each vector A_j as the nutritional content of the jth available food, and view b as the nutritional content of an ideal food that we wish to synthesize. Let us interpret p_i as the "fair" price per unit of the ith nutrient. A unit of the jth food has a value of c_j at the food market, but it also has a value of p'A_j if priced at the nutrient market. Complementary slackness asserts that every food which is used (at a nonzero level) to synthesize the ideal food, should be consistently priced at the two markets. Thus, duality is concerned with two alternative ways of cost accounting. The value of the ideal food, as computed in the food market, is c'x*, where x* is an optimal solution to the primal problem; the value of the ideal food, as computed in the nutrient market, is p'b. The duality relation c'x* = p'b states that when prices are chosen appropriately, the two accounting methods should give the same result.

4.5 Standard form problems and the dual simplex method

In this section, we concentrate on the case where the primal problem is in standard form. We develop the dual simplex method, which is an alternative to the simplex method of Chapter 3. We also comment on the relation between the basic feasible solutions to the primal and the dual, including a discussion of dual degeneracy.



In the proof of the strong duality theorem, we considered the simplex method applied to a primal problem in standard form and defined a dual vector p by letting p' = c_B'B^{-1}. We then noted that the primal optimality condition c' − c_B'B^{-1}A ≥ 0' is the same as the dual feasibility condition p'A ≤ c'. We can thus think of the simplex method as an algorithm that maintains primal feasibility and works towards dual feasibility. A method with this property is generally called a primal algorithm. An alternative is to start with a dual feasible solution and work towards primal feasibility. A method of this type is called a dual algorithm. In this section, we present a dual simplex method, implemented in terms of the full tableau. We argue that it does indeed solve the dual problem, and we show that it moves from one basic feasible solution of the dual problem to another. An alternative implementation that only keeps track of the matrix B^{-1}, instead of the entire tableau, is called a revised dual simplex method (Exercise 4.23).

The dual simplex method

Let us consider a problem in standard form, under the usual assumption that the rows of the matrix A are linearly independent. Let B be a basis matrix, consisting of m linearly independent columns of A, and consider the corresponding tableau

  −c_B'B^{-1}b | c' − c_B'B^{-1}A
    B^{-1}b    |     B^{-1}A

or, in more detail,

  −c_B'B^{-1}b | c̄_1  ...  c̄_n
    x_B(1)     |
      ...      | B^{-1}A_1 ... B^{-1}A_n
    x_B(m)     |

We do not require B^{-1}b to be nonnegative, which means that we have a basic, but not necessarily feasible, solution to the primal problem. However, we assume that c̄ ≥ 0'; equivalently, the vector p' = c_B'B^{-1} satisfies p'A ≤ c', and we have a feasible solution to the dual problem. The cost of this dual feasible solution is p'b = c_B'B^{-1}b = c_B'x_B, which is the negative of the entry at the upper left corner of the tableau. If the inequality B^{-1}b ≥ 0 happens to hold, we also have a primal feasible solution with the same cost, and optimal solutions to both problems have been found. If the inequality B^{-1}b ≥ 0 fails to hold, we perform a change of basis, in a manner we describe next.



We find some ℓ such that x_B(ℓ) < 0 and consider the ℓth row of the tableau, called the pivot row; this row is of the form (x_B(ℓ), v_1, ..., v_n), where v_i is the ℓth component of B^{-1}A_i. For each i with v_i < 0 (if such i exist), we form the ratio c̄_i / |v_i|, and let j be an index for which this ratio is smallest; that is, v_j < 0 and

c̄_j / |v_j| = min_{ i : v_i < 0 } c̄_i / |v_i|.    (4.2)

(We call the corresponding entry v_j the pivot element. Note that x_j must be a nonbasic variable, since the jth column in the tableau contains the negative element v_j.) We then perform a change of basis: column A_j enters the basis and column A_B(ℓ) exits. This change of basis (or pivot) is effected exactly as in the primal simplex method: we add to each row of the tableau a multiple of the pivot row so that all entries in the pivot column are set to zero, with the exception of the pivot element, which is set to 1. In particular, in order to set the reduced cost in the pivot column to zero, we multiply the pivot row by c̄_j/|v_j| and add it to the zeroth row. For every i, the new value of c̄_i is equal to

c̄_i + v_i c̄_j / |v_j|,

which is nonnegative because of the way that j was selected [cf. Eq. (4.2)]. We conclude that the reduced costs in the new tableau will also be nonnegative, and dual feasibility has been maintained.

Example 4.7 Consider the tableau

         x1    x2    x3    x4    x5
   0  |   2     6    10     0     0
   2  |  -2     4     1     1     0
  -1  |   4    -2*   -3     0     1

Since x_B(2) < 0, we choose the second row to be the pivot row. Negative entries of the pivot row are found in the second and third columns. We compare the corresponding ratios 6/2 and 10/3. The smallest ratio is 6/2 and, therefore, the second column enters the basis. The pivot element is indicated by an asterisk. We multiply the pivot row by 3 and add it to the zeroth row. We multiply the pivot row by 2 and add it to the first row. We then divide the pivot row by −2. The new tableau is

         x1    x2    x3    x4    x5
  -3  |  14     0     1     0     3
   0  |   6     0    -5     1     2
 1/2  |  -2     1    3/2    0   -1/2



The cost has increased to 3. Furthermore, we now have B^{-1}b ≥ 0, and an optimal solution has been found.

Note that the pivot element v_j is always chosen to be negative, whereas the corresponding reduced cost c̄_j is nonnegative. Let us temporarily assume that c̄_j is in fact positive. Then, in order to replace c̄_j by zero, we need to add a positive multiple of the pivot row to the zeroth row. Since x_B(ℓ) is negative, this has the effect of adding a negative quantity to the upper left corner. Equivalently, the dual cost increases. Thus, as long as the reduced cost of every nonbasic variable is nonzero, the dual cost increases with each basis change, and no basis will ever be repeated in the course of the algorithm. It follows that the algorithm must eventually terminate, and this can happen in one of two ways:

(a) We have B^{-1}b ≥ 0 and an optimal solution.

(b) All of the entries v_1, ..., v_n in the pivot row are nonnegative and we are therefore unable to locate a pivot element. In analogy with the primal simplex method, this implies that the optimal dual cost is equal to +∞ and the primal problem is infeasible (the proof is left as an exercise; see Exercise 4.22).

We now provide a summary of the algorithm.

An iteration of the dual simplex method

1. A typical iteration starts with the tableau associated with a basis matrix B and with all reduced costs nonnegative.

2. Examine the components of the vector B^{-1}b in the zeroth column of the tableau. If they are all nonnegative, we have an optimal basic feasible solution and the algorithm terminates; else, choose some ℓ such that x_B(ℓ) < 0.

3. Consider the ℓth row of the tableau, with elements x_B(ℓ), v_1, ..., v_n (the pivot row). If v_i ≥ 0 for all i, then the optimal dual cost is +∞ and the algorithm terminates.

4. For each i such that v_i < 0, compute the ratio c̄_i/|v_i| and let j be an index of a column that corresponds to the smallest ratio. The column A_B(ℓ) exits the basis and the column A_j takes its place.

5. Add to each row of the tableau a multiple of the ℓth row (the pivot row) so that v_j (the pivot element) becomes 1 and all other entries of the pivot column become 0.
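The steps above can be sketched in code. The following Python/NumPy implementation of the full-tableau dual simplex method is a minimal sketch (the numerical tolerance and the choice of the first negative row as pivot row are our own assumptions); run on the tableau of Example 4.7, it reproduces the pivot carried out there:

```python
import numpy as np

def dual_simplex(T, basis, tol=1e-9):
    """Full-tableau dual simplex. T[0] = [-cost, reduced costs c_1..c_n];
    T[i] = [x_B(i), ith row of B^{-1}A] for i = 1..m; basis holds the
    0-based indices of the basic columns and is updated in place."""
    m = T.shape[0] - 1
    while True:
        # Step 2: find a pivot row with x_B(ell) < 0 (here: the first one).
        rows = [i for i in range(1, m + 1) if T[i, 0] < -tol]
        if not rows:
            return "optimal"                 # B^{-1}b >= 0
        ell = rows[0]
        v = T[ell, 1:]
        # Step 4: ratio test over the negative entries of the pivot row.
        cand = [j for j in range(len(v)) if v[j] < -tol]
        if not cand:
            return "infeasible"              # optimal dual cost is +infinity
        j = min(cand, key=lambda k: T[0, k + 1] / abs(v[k]))
        # Step 5: pivot so that column j+1 becomes a unit vector.
        T[ell] /= T[ell, j + 1]
        for i in range(m + 1):
            if i != ell:
                T[i] -= T[i, j + 1] * T[ell]
        basis[ell - 1] = j

# The tableau of Example 4.7 (basic variables x4, x5):
T = np.array([[ 0.,  2.,  6., 10.,  0.,  0.],
              [ 2., -2.,  4.,  1.,  1.,  0.],
              [-1.,  4., -2., -3.,  0.,  1.]])
basis = [3, 4]
status = dual_simplex(T, basis)
print(status, -T[0, 0])    # optimal 3.0  (x2 has replaced x5 in the basis)
```

A single iteration suffices on this tableau: x_B becomes (0, 1/2) ≥ 0 and the method stops with cost 3, matching the hand computation.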

Let us now consider the possibility that the reduced cost c̄_j in the pivot column is zero. In this case, the zeroth row of the tableau does not change and the dual cost remains the same. The proof of termination given earlier does not apply and the algorithm can cycle. This can be avoided by employing a suitable anticycling rule, such as the following.

Lexicographic pivoting rule for the dual simplex method

1. Choose any row ℓ such that x_B(ℓ) < 0, to be the pivot row.

2. Determine the index j of the entering column as follows: for each column with v_i < 0, divide all entries by |v_i|, and then choose the lexicographically smallest column. If there is a tie between several lexicographically smallest columns, choose the one with the smallest index.

If the dual simplex method is initialized so that every column of the tableau [that is, each vector (c̄_j, B^{-1}A_j)] is lexicographically positive, and if the above lexicographic pivoting rule is used, the method terminates in a finite number of steps. The proof is similar to the proof of the corresponding result for the primal simplex method (Theorem 3.4) and is left as an exercise (Exercise 4.24).

When should we use the dual simplex method?

At this point, it is natural to ask when the dual simplex method should be used. One such case arises when a basic feasible solution of the dual problem is readily available. Suppose, for example, that we already have an optimal basis for some linear programming problem, and that we wish to solve the same problem for a different choice of the right-hand side vector b. The optimal basis for the original problem may be primal infeasible under the new value of b. On the other hand, a change in b does not affect the reduced costs, and we still have a dual feasible solution. Thus, instead of solving the new problem from scratch, it may be preferable to apply the dual simplex algorithm starting from the optimal basis for the original problem. This idea will be considered in more detail in Chapter 5.

The geometry of the dual simplex method

Our development of the dual simplex method was based entirely on tableau manipulations and algebraic arguments. We now present an alternative viewpoint based on geometric considerations.

We continue assuming that we are dealing with a problem in standard form and that the matrix A has linearly independent rows. Let B be a basis matrix with columns A_B(1), ..., A_B(m). This basis matrix determines a basic solution to the primal problem with x_B = B^{-1}b. The same basis can also be used to determine a dual vector p by means of the equations

p'A_B(i) = c_B(i),    i = 1, ..., m.



These are m equations in m unknowns; since the columns A_B(1), ..., A_B(m) are linearly independent, there is a unique solution p. For such a vector p, the number of linearly independent active dual constraints is equal to the dimension of the dual vector, and it follows that we have a basic solution to the dual problem. In matrix notation, the dual basic solution p satisfies p'B = c_B', or p' = c_B'B^{-1}, which was referred to as the vector of simplex multipliers in Chapter 3. If p is also dual feasible, that is, if p'A ≤ c', then p is a basic feasible solution of the dual problem.

To summarize, a basis matrix B is associated with a basic solution to the primal problem and also with a basic solution to the dual. A basic solution to the primal (respectively, dual), which is primal (respectively, dual) feasible, is a basic feasible solution to the primal (respectively, dual).

We now have a geometric interpretation of the dual simplex method: at every iteration, we have a basic feasible solution to the dual problem. The basic feasible solutions obtained at any two consecutive iterations have m − 1 linearly independent active constraints in common (the reduced costs of the m − 1 variables that are common to both bases are zero); thus, consecutive basic feasible solutions are either adjacent or they coincide.

Example 4.8 Consider the following standard form problem and its dual:

minimize    x1 + x2
subject to  x1 + 2x2 − x3 = 2
            x1 − x4 = 1
            x1, x2, x3, x4 ≥ 0,

maximize    2p1 + p2
subject to  p1 + p2 ≤ 1
            2p1 ≤ 1
            p1, p2 ≥ 0.

The feasible set of the primal problem is 4-dimensional. If we eliminate the variables x3 and x4, we obtain the equivalent problem

minimize    x1 + x2
subject to  x1 + 2x2 ≥ 2
            x1 ≥ 1
            x1, x2 ≥ 0.

The feasible sets of the equivalent primal problem and of the dual are shown in Figures 4.5(a) and 4.5(b), respectively.

There is a total of five different bases in the standard form primal problem, and five different basic solutions. These correspond to the points A, B, C, D, and E in Figure 4.5(a). The same five bases also lead to five basic solutions to the dual problem, which are points A, B, C, D, and E in Figure 4.5(b).

For example, if we choose the columns A3 and A4 to be the basic columns, we have the infeasible primal basic solution x = (0, 0, −2, −1) (point A). The corresponding dual basic solution is obtained by letting p'A3 = c3 = 0 and p'A4 = c4 = 0, which yields p = (0, 0). This is a basic feasible solution of the dual problem and can be used to start the dual simplex method. The associated initial tableau is



Figure 4.5: The feasible sets in Example 4.8.

        x1    x2    x3    x4
   0  |  1     1     0     0
  -2  | -1    -2*    1     0
  -1  | -1     0     0     1

We carry out two iterations of the dual simplex method to obtain the following two tableaux:

        x1    x2    x3    x4
  -1  | 1/2    0    1/2    0
   1  | 1/2    1   -1/2    0
  -1  | -1*    0     0     1

        x1    x2    x3    x4
-3/2  |  0     0    1/2   1/2
 1/2  |  0     1   -1/2   1/2
   1  |  1     0     0    -1

This sequence of tableaux corresponds to the path A → B → C in either figure. In the primal space, the path traces a sequence of infeasible basic solutions until, at



optimality, it becomes feasible. In the dual space, the algorithm behaves exactly like the primal simplex method: it moves through a sequence of (dual) basic feasible solutions, while at each step improving the cost function.

Having observed that the dual simplex method moves from one basic feasible solution of the dual to an adjacent one, it may be tempting to say that the dual simplex method is simply the primal simplex method applied to the dual. This is a somewhat ambiguous statement, however, because the dual problem is not in standard form. If we were to convert it to standard form and then apply the primal simplex method, the resulting method is not necessarily identical to the dual simplex method (Exercise 4.25). A more accurate statement is to simply say that the dual simplex method is a variant of the simplex method tailored to problems defined exclusively in terms of linear inequality constraints.

Duality and degeneracy

Let us keep assuming that we are dealing with a standard form problem in which the rows of the matrix A are linearly independent. A basis matrix B leads to an associated dual basic solution given by p' = c_B'B^{-1}. At this basic solution, the dual constraint p'A_i ≤ c_i is active if and only if the reduced cost c̄_i is zero. Since p is m-dimensional, dual degeneracy amounts to having more than m reduced costs that are zero. Given that the reduced costs of the m basic variables must be zero, dual degeneracy is obtained whenever there exists a nonbasic variable whose reduced cost is zero.

The example that follows deals with the relation between basic solutions to the primal and the dual in the face of degeneracy.

Example 4.9 Consider the following standard form problem and its dual:

minimize    3x1 + x2
subject to  x1 + x2 − x3 = 2
            x1 − x2 − x4 = 0
            x1, x2, x3, x4 ≥ 0,

maximize    2p1
subject to  p1 + p2 ≤ 3
            p1 − p2 ≤ 1
            p1, p2 ≥ 0.

We eliminate x3 and x4 to obtain the equivalent primal problem

minimize    3x1 + x2
subject to  x1 + x2 ≥ 2
            x1 − x2 ≥ 0
            x1, x2 ≥ 0.


Th fasibl st of th quivalnt rimal and of th dual is shown in Figurs 4 (a)and 4(b) rsctivly

Thr is a total of six dirnt bs in th tandard form rimal roblmbut only four dirnt basic olutions oints A , C in Figur 4 (a) In thdul robm howvr th i b d to i ditinct ic oluto ointA A A" C Fur 6



Figure 4.6: The feasible sets in Example 4.9.

For example, if we let columns A3 and A4 be basic, the primal basic solution has x1 = x2 = 0, and the corresponding dual basic solution is (p1, p2) = (0, 0). Note that this is a basic feasible solution of the dual problem. If we let columns A1 and A3 be basic, the primal basic solution has again x1 = x2 = 0. For the dual problem, however, the equations p'A1 = c1 and p'A3 = c3 yield (p1, p2) = (0, 3), which is a basic feasible solution of the dual, namely, point A' in Figure 4.6(b). Finally, if we let columns A2 and A3 be basic, we still have the same primal solution. For the dual problem, the equations p'A2 = c2 and p'A3 = c3 yield (p1, p2) = (0, −1), which is an infeasible basic solution to the dual, namely, point A'' in Figure 4.6(b).
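The three bases just discussed can be checked numerically. In the Python/NumPy sketch below, we solve p'B = c_B' for each of the three bases of Example 4.9 and recover the three distinct dual basic solutions associated with the single (degenerate) primal basic solution:

```python
import numpy as np

# Example 4.9 in standard form: minimize c'x subject to Ax = b, x >= 0.
c = np.array([3.0, 1.0, 0.0, 0.0])
A = np.array([[1.0,  1.0, -1.0,  0.0],
              [1.0, -1.0,  0.0, -1.0]])

# Bases {A3,A4}, {A1,A3}, {A2,A3} (0-based column indices) all yield the
# same primal basic solution x = (0, 0, -2, 0), but different duals.
duals = []
for basic in ([2, 3], [0, 2], [1, 2]):
    B = A[:, basic]
    p = np.linalg.solve(B.T, c[basic])   # solves B'p = c_B, i.e. p'B = c_B'
    duals.append(p)
    print(basic, p)
# The duals are (0,0) and (0,3), both feasible, and (0,-1), infeasible.
```

This reproduces points A, A', and A'' of the example: one degenerate primal basic solution, three dual basic solutions, not all of them feasible.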

Example 4.9 has established that different bases may lead to the same basic solution for the primal problem, but to different basic solutions for the dual. Furthermore, out of the different basic solutions to the dual problem, it may be that some are feasible and some are infeasible.

We conclude with a summary of some properties of bases and basic solutions for standard form problems that were discussed in this section.

(a) Every basis determines a basic solution to the primal, but also a corresponding basic solution to the dual, namely, p' = c_B'B^{-1}.

(b) This dual basic solution is feasible if and only if all of the reduced costs are nonnegative.

(c) Under this dual basic solution, the reduced costs that are equal to zero correspond to active constraints in the dual problem.

(d) This dual basic solution is degenerate if and only if some nonbasic variable has zero reduced cost.



4.6 Farkas' lemma and linear inequalities

Suppose that we wish to determine whether a given system of linear inequalities is infeasible. In this section, we approach this question using duality theory, and we show that infeasibility of a given system of linear inequalities is equivalent to the feasibility of another, related, system of linear inequalities. Intuitively, the latter system of linear inequalities can be interpreted as a search for a certificate of infeasibility for the former system.

To be more specific, consider a set of standard form constraints Ax = b and x ≥ 0. Suppose that there exists some vector p such that p'A ≥ 0' and p'b < 0. Then, for any x ≥ 0, we have p'Ax ≥ 0 and, since p'b < 0, it follows that p'Ax ≠ p'b. We conclude that Ax ≠ b, for all x ≥ 0. This argument shows that if we can find a vector p satisfying p'A ≥ 0' and p'b < 0, the standard form constraints cannot have any feasible solution, and such a vector p is a certificate of infeasibility. Farkas' lemma, given below, states that whenever a standard form problem is infeasible, such a certificate of infeasibility is guaranteed to exist.

Theorem 4.6 (Farkas' lemma) Let A be a matrix of dimensions m × n and let b be a vector in R^m. Then, exactly one of the following two alternatives holds:

(a) There exists some x ≥ 0 such that Ax = b.

(b) There exists some vector p such that p'A ≥ 0' and p'b < 0.

Proof. One direction is easy. If there exists some x ≥ 0 satisfying Ax = b, and if p'A ≥ 0', then p'b = p'Ax ≥ 0, which shows that the second alternative cannot hold.

Let us now assume that there exists no vector x ≥ 0 satisfying Ax = b. Consider the pair of problems

maximize    0'x          minimize    p'b
subject to  Ax = b       subject to  p'A ≥ 0',
            x ≥ 0,

and note that the first is the dual of the second. The maximization problem is infeasible, which implies that the minimization problem is either unbounded (the optimal cost is −∞) or infeasible. Since p = 0 is a feasible solution to the minimization problem, it follows that the minimization problem is unbounded. Therefore, there exists some p which is feasible, that is, p'A ≥ 0', and whose cost is negative, that is, p'b < 0.
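The easy direction of the proof is simple enough to illustrate in a few lines of Python/NumPy. The system and its certificate below are illustrative assumptions (found by inspection); in general, a certificate can be computed by solving the minimization problem from the proof:

```python
import numpy as np

# A standard form system Ax = b, x >= 0 with no solution: its first
# equation reads x1 + 2 x2 = -1, which is impossible when x >= 0.
A = np.array([[1.0, 2.0],
              [0.0, 1.0]])
b = np.array([-1.0, 1.0])

# A Farkas certificate of infeasibility (found by inspection here).
p = np.array([1.0, 0.0])
assert np.all(p @ A >= 0)       # p'A >= 0'
assert p @ b < 0                # p'b < 0

# Easy direction: for every x >= 0, p'Ax >= 0 > p'b, so Ax = b must fail.
for x in np.random.default_rng(0).uniform(0, 10, size=(1000, 2)):
    assert p @ (A @ x) >= 0 and not np.allclose(A @ x, b)
print("certificate verified")
```

The point of the lemma is the converse: whenever no such x exists, some certificate p of this kind is guaranteed to exist.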

We now provide a geometric illustration of Farkas' lemma (see Figure 4.7). Let A_1, ..., A_n be the columns of the matrix A and note that Ax = Σ_{i=1}^n A_i x_i. Therefore, the existence of a vector x ≥ 0 satisfying



Ax = b is the same as requiring that b lies in the set of all nonnegative linear combinations of the vectors A_1, ..., A_n, which is the shaded region in Figure 4.7. If b does not belong to the shaded region (in which case the first alternative in Farkas' lemma does not hold), we expect intuitively that we can find a vector p and an associated hyperplane {z | p'z = 0} such that b lies on one side of the hyperplane while the shaded region lies on the other side. We then have p'b < 0 and p'A_i ≥ 0 for all i, or, equivalently, p'A ≥ 0', and the second alternative holds.

Farkas' lemma predates the development of linear programming, but duality theory leads to a simple proof. A different proof, based on the geometric argument we just gave, is provided in the next section. Finally, there is an equivalent statement of Farkas' lemma, which is sometimes more convenient.

Corollary 4.3 Let A_1, ..., A_n and b be given vectors and suppose that any vector p that satisfies p'A_i ≥ 0, i = 1, ..., n, must also satisfy p'b ≥ 0. Then, b can be expressed as a nonnegative linear combination of the vectors A_1, ..., A_n.

Our next result is of a similar character.

Theorem 4.7 Suppose that the system of linear inequalities Ax ≤ b has at least one solution, and let d be some scalar. Then, the following are equivalent:

(a) Every feasible solution to the system Ax ≤ b satisfies c'x ≤ d.

(b) There exists some p ≥ 0 such that p'A = c' and p'b ≤ d.

Proof. Consider the following pair of problems

maximize    c'x          minimize    p'b
subject to  Ax ≤ b,      subject to  p'A = c'
                                     p ≥ 0,

and note that the first is the dual of the second. If the system Ax ≤ b has a feasible solution and if every feasible solution satisfies c'x ≤ d, then the first problem has an optimal solution and the optimal cost is bounded above by d. By the strong duality theorem, the second problem also has an optimal solution p whose cost is bounded above by d. This optimal solution satisfies p'A = c', p ≥ 0, and p'b ≤ d.

Conversely, if some p satisfies p'A = c', p ≥ 0, and p'b ≤ d, then the weak duality theorem asserts that every feasible solution to the first problem must also satisfy c'x ≤ d.

Results such as Theorems 4.6 and 4.7 are often called theorems of the



Figure 4.7: If the vector b does not belong to the set of all nonnegative linear combinations of A_1, ..., A_n, then we can find a hyperplane {z | p'z = 0} that separates it from that set.


alternative. There are several more results of this type; see, for example, Exercises 4.26, 4.27, and 4.28.

Applications of Farkas' lemma to asset pricing

Consider a market that operates for a single period, and in which n different assets are traded. Depending on the events during that single period, there are m possible states of nature at the end of the period. If we invest one dollar in some asset i and the state of nature turns out to be s, we receive a payoff of r_si. Thus, each asset i is described by a payoff vector (r_1i, ..., r_mi). The following m × n payoff matrix gives the payoffs of each of the n assets for each of the m states of nature:

    R = [ r_11  ...  r_1n ]
        [  ...        ... ]
        [ r_m1  ...  r_mn ]

Let x_i be the amount held of asset i. A portfolio of assets is then a vector x = (x_1, ..., x_n). The components of a portfolio x can be either positive or negative. A positive value of x_i indicates that one has bought x_i units of asset i and is thus entitled to receive r_si x_i if state s materializes. A negative value of x_i indicates a "short" position in asset i: this amounts to selling |x_i| units of asset i at the beginning of the period, with a promise to buy them back at the end. Hence, one must pay out r_si |x_i| if state s occurs, which is the same as receiving a payoff of r_si x_i.



The wealth in state s that results from a portfolio x is given by

w_s = Σ_{i=1}^n r_si x_i.

We introduce the vector w = (w_1, ..., w_m), and we obtain

w = Rx.

Let p_i be the price of asset i in the beginning of the period, and let p = (p_1, ..., p_n) be the vector of asset prices. Then, the cost of acquiring a portfolio x is given by p'x.

The central problem in asset pricing is to determine what the prices p_i should be. In order to address this question, we introduce the absence of arbitrage condition, which underlies much of finance theory: asset prices should always be such that no investor can get a guaranteed nonnegative payoff out of a negative investment. In other words, any portfolio that pays off nonnegative amounts in every state of nature must be valuable to investors, so it must have nonnegative cost. Mathematically, the absence of arbitrage condition can be expressed as follows:

if Rx ≥ 0, then we must have p'x ≥ 0.

Given a particular set of assets, as described by the payoff matrix R, only certain prices p are consistent with the absence of arbitrage. What characterizes such prices? What restrictions does the assumption of no arbitrage impose on asset prices? The answer is provided by Farkas' lemma.

Theorem 4.8 The absence of arbitrage condition holds if and only if there exists a nonnegative vector q = (q_1, ..., q_m), such that the price of each asset i is given by

p_i = Σ_{s=1}^m q_s r_si.

Proof. The absence of arbitrage condition states that there exists no vector x such that x'R' ≥ 0' and x'p < 0. This is of the same form as condition (b) in the statement of Farkas' lemma (Theorem 4.6). (Note that here p plays the role of b, and R' plays the role of A.) Therefore, by Farkas' lemma, the absence of arbitrage condition holds if and only if there exists some nonnegative vector q such that R'q = p, which is the same as the condition in the theorem's statement.
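Theorem 4.8 is easy to explore numerically. In the Python/NumPy sketch below, the payoff matrix and state prices are illustrative assumptions; pricing each asset by p = R'q then makes every portfolio with nonnegative payoffs have nonnegative cost:

```python
import numpy as np

# Illustrative market: m = 2 states of nature, n = 3 assets.
# R[s, i] is the payoff of one dollar invested in asset i if state s occurs.
R = np.array([[1.0, 2.0, 0.5],
              [1.0, 0.0, 1.5]])

# Nonnegative state prices q, and prices p_i = sum_s q_s r_si, i.e. p = R'q.
q = np.array([0.5, 0.4])
p = R.T @ q

# Absence of arbitrage: cost p'x = q'(Rx), so if the payoff vector Rx is
# nonnegative in every state, the cost of the portfolio is nonnegative.
rng = np.random.default_rng(1)
for _ in range(1000):
    x = rng.normal(size=3)           # a random portfolio (shorts allowed)
    if np.all(R @ x >= 0):
        assert p @ x >= -1e-12
print(p)                             # prices consistent with no arbitrage
```

The check is one line of algebra made concrete: p'x = (R'q)'x = q'(Rx) ≥ 0 whenever Rx ≥ 0 and q ≥ 0.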

Theorem 4.8 asserts that whenever the market works efficiently enough to eliminate the possibility of arbitrage, there must exist "state prices" q_s



that can be used to value the existing assets. Intuitively, it establishes a nonnegative price q_s for an elementary asset that pays one dollar if the state of nature is s, and nothing otherwise. It then requires that every asset must be consistently priced, its total value being the sum of the values of the elementary assets from which it is composed. There is an alternative interpretation of the variables q_s as being (unnormalized) probabilities of the different states s, which, however, we will not pursue. In general, the state price vector q will not be unique, unless the number of assets equals or exceeds the number of states.

The no arbitrage condition is very simple, and yet very powerful. It is the key element behind many important results in financial economics, but these lie beyond the scope of this text. (See, however, Exercise 4.33 for an application in options pricing.)

4.7 From separating hyperplanes to duality*

Let us review the path followed in our development of duality theory. We started from the fact that the simplex method, in conjunction with an anticycling rule, is guaranteed to terminate. We then exploited the termination conditions of the simplex method to derive the strong duality theorem. We finally used the duality theorem to derive Farkas' lemma, which we interpreted in terms of a hyperplane that separates b from the columns of A. In this section, we show that the reverse line of argument is also possible. We start from first principles and prove a general result on separating hyperplanes. We then establish Farkas' lemma, and conclude by showing that the duality theorem follows from Farkas' lemma. This line of argument is more elegant and fundamental, because instead of relying on the rather complicated development of the simplex method, it only involves a small number of basic geometric concepts. Furthermore, it can be naturally generalized to nonlinear optimization problems.

Closed sets and Weierstrass' theorem

Before we proceed any further, we need to develop some background material. A set S ⊂ R^n is called closed if it has the following property: if x^1, x^2, ... is a sequence of elements of S that converges to some x ∈ R^n, then x ∈ S. In other words, S contains the limit of any converging sequence of elements of S. Intuitively, the set S contains its boundary.

Theorem 4.9 Every polyhedron is closed.

Proof. Consider the polyhedron P = {x ∈ R^n | Ax ≥ b}. Suppose that x^1, x^2, ... is a sequence of elements of P that converges to some x*. We have



to show that x* ∈ P. For each k, we have x^k ∈ P and, therefore, Ax^k ≥ b. Taking the limit, we obtain Ax* = A(lim_{k→∞} x^k) = lim_{k→∞} Ax^k ≥ b, and x* belongs to P.

The following is a fundamental result from real analysis that provides us with conditions for the existence of an optimal solution to an optimization problem. The proof lies beyond the scope of this book and is omitted.

Theorem 4.10 (Weierstrass' theorem) If f : R^n → R is a continuous function, and if S is a nonempty, closed, and bounded subset of R^n, then there exists some x* ∈ S such that f(x*) ≤ f(x) for all x ∈ S. Similarly, there exists some y* ∈ S such that f(y*) ≥ f(x) for all x ∈ S.

Weierstrass' theorem is not valid if the set S is not closed. Consider, for example, the set S = {x ∈ R | x > 0}. This set is not closed because we can form a sequence of elements of S that converge to zero, but x = 0 does not belong to S. We then observe that the cost function f(x) = x is not minimized at any point in S; for every x > 0, there exists another positive number with smaller cost, and no feasible x can be optimal. Ultimately, the reason that Weierstrass' theorem fails here is that the feasible set was defined by means of strict inequalities. The definition of polyhedra and linear programming problems does not allow for strict inequalities, in order to avoid situations of this type.

The separating hyperplane theorem

The result that follows is "geometrically obvious" but nevertheless extremely important in the study of convex sets and functions. It states that if we are given a closed and nonempty convex set S and a point x* ∉ S, then we can find a hyperplane, called a separating hyperplane, such that S and x* lie in different halfspaces (Figure 4.8).

Theorem 4.11 (Separating hyperplane theorem) Let S be a nonempty closed convex subset of R^n and let x* ∈ R^n be a vector that does not belong to S. Then, there exists some vector c ∈ R^n such that

c'x* < c'x,    for all x ∈ S.

Proof. Let ||·|| be the Euclidean norm defined by ||x|| = (x'x)^{1/2}. Let w be some element of S, and let

B = {x | ||x − x*|| ≤ ||w − x*||},

and D = S ∩ B [Figure 4.9(a)]. The set D is nonempty, because w ∈ D.



Figure 4.8: A hyperplane that separates the point x* from the convex set S.


Furthermore, D is the intersection of the closed set S with the closed set B and is also closed. Finally, D is a bounded set, because B is bounded. Consider the quantity ||x − x*||, where x ranges over the set D. This is a continuous function of x. Since D is nonempty, closed, and bounded, Weierstrass' theorem implies that there exists some y ∈ D such that

||y − x*|| ≤ ||x − x*||,    for all x ∈ D.

For any x ∈ S that does not belong to D, we have ||x − x*|| > ||w − x*|| ≥ ||y − x*||. We conclude that y minimizes ||x − x*|| over all x ∈ S.

We have so far established that there exists an element y of S which is closest to x*. We now show that the vector c = y − x* has the desired property [see Figure 4.9(b)].

Let x ∈ S. For any λ satisfying 0 < λ ≤ 1, we have y + λ(x − y) ∈ S, because S is convex. Since y minimizes ||x − x*|| over all x ∈ S, we obtain

||y − x*||² ≤ ||y + λ(x − y) − x*||² = ||y − x*||² + 2λ(y − x*)'(x − y) + λ²||x − y||²,

which yields

2λ(y − x*)'(x − y) + λ²||x − y||² ≥ 0.

We divide by λ and then take the limit as λ decreases to zero. We obtain

(y − x*)'(x − y) ≥ 0.

This inequality states that the angle θ in Figure 4.9(b) is no larger than 90 degrees. Thus,

(y − x*)'x ≥ (y − x*)'y


172 Chap 4 Duaty theory

x

Figure 4.9: Illustration of the proof of the separating hyperplane theorem.

= (y − x*)'x* + (y − x*)'(y − x*)
> (y − x*)'x*.

Setting c = y − x* proves the theorem.
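The projection argument in this proof is easy to check numerically. The following sketch (in Python, with an illustrative set and point of our own choosing, not from the text) projects x* onto the box S = [0, 1] × [0, 1], where the closest point is obtained by componentwise clipping, and verifies that c = y − x* separates x* from S.

```python
import numpy as np

# Project x* onto the closed convex set S = [0,1] x [0,1]; for a box the
# closest point y is a componentwise clip of x*.
x_star = np.array([2.0, 0.5])
y = np.clip(x_star, 0.0, 1.0)   # closest point of S to x*
c = y - x_star                  # candidate separating vector

# Since S is a box, checking c'x* < c'x at the four vertices suffices
# (every point of S is a convex combination of the vertices).
vertices = [np.array(v, dtype=float)
            for v in [(0, 0), (0, 1), (1, 0), (1, 1)]]
assert all(c @ x_star < c @ v for v in vertices)
print("separating vector c =", c)
```

In this instance c = (−1, 0), and any hyperplane {x | c'x = β} with −2 < β < −1 separates x* from S.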

Farkas' lemma revisited

We now show that Farkas' lemma is a consequence of the separating hyperplane theorem.

We will only be concerned with the difficult half of Farkas' lemma. In particular, we will prove that if the system Ax = b, x ≥ 0, does not have a solution, then there exists a vector p such that p'A ≥ 0' and p'b < 0.

Let

S = {Ax | x ≥ 0} = {y | there exists x such that y = Ax, x ≥ 0},

and suppose that the vector b does not belong to S. The set S is clearly convex; it is also nonempty, because 0 ∈ S. Finally, the set S is closed; this may seem obvious, but is not easy to prove. For one possible proof, note that S is the projection of the polyhedron {(x, y) | y = Ax, x ≥ 0} onto the y coordinates, is itself a polyhedron (see Section 2.8), and is therefore closed. An alternative proof is outlined in Exercise 4.37.

We now invoke the separating hyperplane theorem to separate b from S, and conclude that there exists a vector p such that p'b < p'y for every


y ∈ S. Since 0 ∈ S, we must have p'b < 0. Furthermore, for every column Aᵢ of A and every λ > 0, we have λAᵢ ∈ S and p'b < λp'Aᵢ. We divide both sides of the latter inequality by λ and then take the limit as λ tends to infinity, to conclude that p'Aᵢ ≥ 0. Since this is true for every i, we obtain p'A ≥ 0', and the proof is complete.
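This half of Farkas' lemma can be illustrated with a tiny numerical instance (the numbers below are ours, not the book's): the system Ax = b, x ≥ 0 is infeasible because the first equation forces x₁ = −1, and a vector p with p'A ≥ 0' and p'b < 0 certifies this.

```python
import numpy as np

A = np.array([[1.0, 0.0],
              [0.0, 1.0]])
b = np.array([-1.0, 1.0])   # Ax = b forces x1 = -1 < 0

p = np.array([1.0, 0.0])    # Farkas certificate
assert np.all(p @ A >= 0)   # p'A >= 0'
assert p @ b < 0            # p'b < 0

# sanity check: p'(Ax) >= 0 for any x >= 0, so Ax = b with x >= 0 would
# give the contradiction p'b >= 0
print("certificate p =", p, "with p'b =", p @ b)
```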

The duality theorem revisited

We will now derive the duality theorem as a corollary of Farkas' lemma. We only provide the proof for the case where the primal constraints are of the form Ax ≥ b. The proof for the general case can be constructed along the same lines at the expense of more notation (Exercise 4.38). We also note that the proof given here is very similar to the line of argument used in the heuristic explanation of the duality theorem in Example 4.4.

We consider the following pair of primal and dual problems:

minimize c'x          maximize p'b
subject to Ax ≥ b,    subject to p'A = c',
                                 p ≥ 0,

and we assume that the primal has an optimal solution x*. We will show that the dual problem also has a feasible solution with the same cost. Once this is done, the strong duality theorem follows from weak duality (cf. Corollary 4.2).

Let I = {i | aᵢ'x* = bᵢ} be the set of indices of the constraints that are active at x*. We will first show that any vector d that satisfies aᵢ'd ≥ 0 for every i ∈ I must also satisfy c'd ≥ 0. Consider such a vector d and let ε be a positive scalar. We then have aᵢ'(x* + εd) ≥ aᵢ'x* = bᵢ for all i ∈ I. In addition, if i ∉ I and if ε is sufficiently small, the inequality aᵢ'x* > bᵢ implies that aᵢ'(x* + εd) > bᵢ. We conclude that when ε is sufficiently small, x* + εd is a feasible solution. By the optimality of x*, we obtain εc'd ≥ 0, which establishes our claim. By Farkas' lemma (cf. Corollary 4.3), c can be expressed as a nonnegative linear combination of the vectors aᵢ, i ∈ I, and there exist nonnegative scalars pᵢ, i ∈ I, such that

c = Σ_{i∈I} pᵢ aᵢ.     (4.3)

For i ∉ I, we define pᵢ = 0. We then have p ≥ 0, and Eq. (4.3) shows that the vector p satisfies the dual constraint p'A = c'. In addition,

p'b = Σ_{i∈I} pᵢ bᵢ = Σ_{i∈I} pᵢ aᵢ'x* = c'x*,

which shows that the cost of this dual feasible solution is the same as the optimal primal cost. The duality theorem now follows from Corollary 4.2.
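The construction used in this proof can be carried out numerically. The sketch below (Python, on a small instance of our own, not from the text) identifies the constraints active at a known optimum x* and solves Eq. (4.3) for the multipliers pᵢ, recovering a dual feasible vector with the same cost.

```python
import numpy as np

# minimize c'x subject to Ax >= b; the optimum is x* = (1, 0).
A = np.array([[1.0, 0.0],    # x1 >= 0
              [0.0, 1.0],    # x2 >= 0
              [1.0, 1.0]])   # x1 + x2 >= 1
b = np.array([0.0, 0.0, 1.0])
c = np.array([1.0, 2.0])
x_star = np.array([1.0, 0.0])

# indices of constraints active at x* (the set I in the text)
active = [i for i in range(3) if np.isclose(A[i] @ x_star, b[i])]
# solve c = sum_{i in I} p_i a_i for the active multipliers
p = np.zeros(3)
p[active] = np.linalg.solve(A[active].T, c)

assert np.all(p >= 0) and np.allclose(p @ A, c)   # dual feasibility
assert np.isclose(p @ b, c @ x_star)              # equal costs
print("active set:", active, "dual solution p =", p)
```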


Figure 4.10: Examples of cones.

In conclusion, we have accomplished the goals that were set out in the beginning of this section. We proved the separating hyperplane theorem, which is a very intuitive and seemingly simple result, but with many important ramifications in optimization and other areas in mathematics. We used the separating hyperplane theorem to establish Farkas' lemma, and finally showed that the strong duality theorem is essentially a consequence of Farkas' lemma.

4.8 Cones and extreme rays

We have seen in Chapter 2 that if the optimal cost in a linear programming problem is finite, then our search for an optimal solution can be restricted to finitely many points, namely, the basic feasible solutions, assuming one exists. In this section, we wish to develop a similar result for the case where the optimal cost is −∞. In particular, we will show that the optimal cost is −∞ if and only if there exists a cost reducing direction along which we can move without ever leaving the feasible set. Furthermore, our search for such a direction can be restricted to a finite set of suitably defined "extreme rays."

Cones

The first step in our development is to introduce the concept of a cone.

Definition 4.1 A set C ⊂ ℝⁿ is a cone if λx ∈ C for every λ ≥ 0 and every x ∈ C.


Notice that if C is a nonempty cone, then 0 ∈ C. To see this, consider an arbitrary element x of C and set λ = 0 in the definition of a cone; see also Figure 4.10. A polyhedron of the form C = {x ∈ ℝⁿ | Ax ≥ 0} is easily seen to be a nonempty cone and is called a polyhedral cone.

Let x be a nonzero element of a polyhedral cone C. We then have x/2 ∈ C and 3x/2 ∈ C. Since x is the average of x/2 and 3x/2, it is not an extreme point, and, therefore, the only possible extreme point is the zero vector. If the zero vector is indeed an extreme point, we say that the cone is pointed. Whether this will be the case or not is determined by the criteria provided by our next result.

Theorem 4.12 Let C ⊂ ℝⁿ be the polyhedral cone defined by the constraints aᵢ'x ≥ 0, i = 1, …, m. Then, the following are equivalent:

(a) The zero vector is an extreme point of C.
(b) The cone C does not contain a line.
(c) There exist n vectors out of the family a₁, …, a_m, which are linearly independent.

Proof. This result is a special case of Theorem 2.6 in Section 2.5.

Rays and recession cones

Consider a nonempty polyhedron

P = { x ∈ ℝⁿ | Ax ≥ b },

and let us fix some y ∈ P. We define the recession cone at y as the set of all directions d along which we can move indefinitely away from y, without leaving the set P. More formally, the recession cone at y is defined as the set

{ d ∈ ℝⁿ | A(y + λd) ≥ b, for all λ ≥ 0 }.

It is easily seen that this set is the same as

{ d ∈ ℝⁿ | Ad ≥ 0 },

and is a polyhedral cone. This shows that the recession cone is independent of the starting point y; see Figure 4.11. The nonzero elements of the recession cone are called the rays of the polyhedron P.

For the case of a nonempty polyhedron P = {x ∈ ℝⁿ | Ax = b, x ≥ 0} in standard form, the recession cone is seen to be the set of all vectors d that satisfy

Ad = 0, d ≥ 0.


Figure 4.11: The recession cone at different elements of a polyhedron.

Extreme rays

We now define the extreme rays of a polyhedron. Intuitively, these are the directions associated with "edges" of the polyhedron that extend to infinity; see Figure 4.12 for an illustration.

Definition 4.2

(a) A nonzero element x of a polyhedral cone C ⊂ ℝⁿ is called an extreme ray if there are n − 1 linearly independent constraints that are active at x.

(b) An extreme ray of the recession cone associated with a nonempty polyhedron P is also called an extreme ray of P.

Note that a positive multiple of an extreme ray is also an extreme ray. We say that two extreme rays are equivalent if one is a positive multiple of the other. Note that for this to happen, they must correspond to the same n − 1 linearly independent active constraints. Any n − 1 linearly independent constraints define a line and can lead to at most two nonequivalent extreme rays (one being the negative of the other). Given that there is a finite number of ways that we can choose n − 1 constraints to become active, and as long as we do not distinguish between equivalent extreme rays, we conclude that the number of extreme rays of a polyhedron is finite. A finite collection of extreme rays will be said to be a complete set of extreme rays if it contains exactly one representative from each equivalence class.


Figure 4.12: Extreme rays of polyhedral cones. (a) The vector y is an extreme ray because n = 2 and the constraint a₁'x = 0 is active at y. (b) A polyhedral cone defined by three linearly independent constraints of the form aᵢ'x ≥ 0. The vector z is an extreme ray because n = 3 and the two linearly independent constraints a₂'x = 0 and a₃'x = 0 are active at z.

The definition of extreme rays mimics the definition of basic feasible solutions. An alternative and equivalent definition, resembling the definition of extreme points of polyhedra, is explored in Exercise 4.39.

Characterization of unbounded linear programming problems

We now derive conditions under which the optimal cost in a linear programming problem is equal to −∞, first for the case where the feasible set is a cone, and then for the general case.

Theorem 4.13 Consider the problem of minimizing c'x over a pointed polyhedral cone C = { x ∈ ℝⁿ | aᵢ'x ≥ 0, i = 1, …, m }. The optimal cost is equal to −∞ if and only if some extreme ray d of C satisfies

c'd < 0.

Proof. One direction of the result is trivial, because if some extreme ray has negative cost, then the cost becomes arbitrarily negative by moving along this ray.

For the converse, suppose that the optimal cost is −∞. In particular, there exists some x ∈ C whose cost is negative and, by suitably scaling x, we can assume that c'x = −1. In particular, the polyhedron

P = { x ∈ ℝⁿ | aᵢ'x ≥ 0, i = 1, …, m, c'x = −1 }

is nonempty. Since the cone C is pointed, the vectors a₁, …, a_m span ℝⁿ, and this implies that P has at least one extreme point; let d be one of them. At d, we have n linearly independent active constraints, which means that n − 1 linearly independent constraints of the form aᵢ'x ≥ 0 must be active. It follows that d is an extreme ray of C, and it satisfies c'd = −1 < 0.

By exploiting duality, Theorem 4.13 leads to a criterion for unboundedness in general linear programming problems. Interestingly, this criterion does not depend on the right-hand side vector b.

Theorem 4.14 Consider the problem of minimizing c'x subject to Ax ≥ b, and assume that the feasible set has at least one extreme point. The optimal cost is equal to −∞ if and only if some extreme ray d of the feasible set satisfies c'd < 0.

Proof. One direction of the result is trivial, because if an extreme ray has negative cost, then the cost becomes arbitrarily negative by starting at a feasible solution and moving along the direction of this ray.

For the proof of the converse direction, we consider the dual problem:

maximize p'b
subject to p'A = c',
p ≥ 0.

If the primal problem is unbounded, the dual problem is infeasible. Then, the related problem

maximize p'0
subject to p'A = c',
p ≥ 0,

is also infeasible. This implies that the associated primal problem

minimize c'x
subject to Ax ≥ 0,

is either unbounded or infeasible. Since x = 0 is one feasible solution, it must be unbounded. Since the primal feasible set has at least one extreme point, the rows of A span ℝⁿ, and the recession cone {x | Ax ≥ 0} is pointed. By Theorem 4.13, there exists an extreme ray d of the recession cone satisfying c'd < 0. By the definition of extreme rays of a polyhedron, d is also an extreme ray of the feasible set.

We end this section by pointing out that if we have a standard form problem in which the optimal cost is −∞, the simplex method provides us at termination with an extreme ray.

Indeed, consider what happens when the simplex method terminates with an indication that the optimal cost is −∞. At that point, we have a basis matrix B, a nonbasic variable x_j with negative reduced cost, and the jth column B⁻¹A_j of the tableau has no positive elements. Consider the jth basic direction d, which is the vector that satisfies d_B = −B⁻¹A_j, d_j = 1, and d_i = 0 for every nonbasic index i other than j. Then, the vector d satisfies Ad = 0 and d ≥ 0, and belongs to the recession cone. It is also a direction of cost decrease, since the reduced cost of the entering variable is negative.

Out of the constraints defining the recession cone, the jth basic direction d satisfies n − 1 linearly independent such constraints with equality: these are the constraints Ad = 0 (m of them) and the constraints d_i = 0 for i nonbasic and different than j (n − m − 1 of them). We conclude that d is an extreme ray.
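As a concrete check (on a small standard form instance of our own, not from the text), the direction just described can be verified to lie in the recession cone, to decrease the cost, and to satisfy n − 1 linearly independent active constraints:

```python
import numpy as np

# minimize c'x subject to Ax = b, x >= 0; the cost is unbounded below.
A = np.array([[1.0, -1.0]])
b = np.array([1.0])
c = np.array([-1.0, 0.0])

x0 = np.array([1.0, 0.0])   # a basic feasible solution
d = np.array([1.0, 1.0])    # basic direction returned at termination

# d is in the recession cone and reduces the cost
assert np.allclose(A @ d, 0) and np.all(d >= 0) and c @ d < 0
# with n = 2, the single constraint Ad = 0 active at d makes it an extreme ray

# moving along d stays feasible while the cost tends to -infinity
for t in [1.0, 10.0, 100.0]:
    x = x0 + t * d
    assert np.allclose(A @ x, b) and np.all(x >= 0)
print("cost at t = 100:", c @ (x0 + 100 * d))   # -> -101.0
```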

4.9 Representation of polyhedra

In this section, we establish one of the fundamental results of linear programming theory. In particular, we show that any element of a polyhedron that has at least one extreme point can be represented as a convex combination of extreme points plus a nonnegative linear combination of extreme rays. A precise statement is given by our next result. A generalization to the case of general polyhedra is developed in Exercise 4.47.

Theorem 4.15 (Resolution theorem) Let

P = { x ∈ ℝⁿ | Ax ≥ b }

be a nonempty polyhedron with at least one extreme point. Let x¹, …, x^k be the extreme points, and let w¹, …, w^r be a complete set of extreme rays of P. Let

Q = { Σ_{i=1}^{k} λᵢx^i + Σ_{j=1}^{r} θⱼw^j  |  λᵢ ≥ 0, θⱼ ≥ 0, Σ_{i=1}^{k} λᵢ = 1 }.

Then, Q = P.


Proof. We first prove that Q ⊂ P. Let

x = Σ_{i=1}^{k} λᵢx^i + Σ_{j=1}^{r} θⱼw^j

be an element of Q, where the coefficients λᵢ and θⱼ are nonnegative, and Σ_{i=1}^{k} λᵢ = 1. The vector y = Σ_{i=1}^{k} λᵢx^i is a convex combination of elements of P. It therefore belongs to P and satisfies Ay ≥ b. We also have Aw^j ≥ 0 for every j, which implies that the vector z = Σ_{j=1}^{r} θⱼw^j satisfies Az ≥ 0. It then follows that the vector x = y + z satisfies Ax ≥ b and belongs to P.

For the reverse inclusion, we assume that P is not a subset of Q, and we will derive a contradiction. Let z be an element of P that does not belong to Q. Consider the linear programming problem

maximize 0
subject to Σ_{i=1}^{k} λᵢx^i + Σ_{j=1}^{r} θⱼw^j = z,
Σ_{i=1}^{k} λᵢ = 1,     (4.4)
λᵢ ≥ 0, i = 1, …, k,
θⱼ ≥ 0, j = 1, …, r,

which is infeasible because z ∉ Q. This problem is the dual of the problem

minimize p'z + q
subject to p'x^i + q ≥ 0, i = 1, …, k,     (4.5)
p'w^j ≥ 0, j = 1, …, r.

Because the latter problem has a feasible solution, namely, p = 0 and q = 0, the optimal cost is −∞, and there exists a feasible solution (p, q) whose cost p'z + q is negative. On the other hand, p'x^i + q ≥ 0 for all i, and this implies that p'z < p'x^i for all i. We also have p'w^j ≥ 0 for all j.¹

Having fixed p as above, we now consider the linear programming problem

minimize p'x
subject to Ax ≥ b.

If the optimal cost is finite, there exists an extreme point x^i which is optimal. Since z is a feasible solution, we obtain p'x^i ≤ p'z, which is a

¹For an intuitive view of this proof, the purpose of this paragraph was to construct a hyperplane that separates z from Q.


contradiction. If the optimal cost is −∞, Theorem 4.14 implies that there exists an extreme ray w^j such that p'w^j < 0, which is again a contradiction.

Example 4.10 Consider the unbounded polyhedron defined by the constraints

x₁ − x₂ ≥ −2
x₁ + x₂ ≥ 1
x₁, x₂ ≥ 0;

see Figure 4.13. This polyhedron has three extreme points, namely, x¹ = (0, 2), x² = (0, 1), and x³ = (1, 0). The recession cone C is described by the inequalities d₁ − d₂ ≥ 0, d₁ + d₂ ≥ 0, and d₁, d₂ ≥ 0. We conclude that

C = { (d₁, d₂) | 0 ≤ d₂ ≤ d₁ }.

This cone has two extreme rays, namely, w¹ = (1, 1) and w² = (1, 0). The vector y = (2, 2) is an element of the polyhedron and can be represented as

y = x¹ + 2w².

However, this representation is not unique; for example, we also have

y = (1/2)x² + (1/2)x³ + (3/2)w¹.

Figure 4.13: The polyhedron of Example 4.10.
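The representations in Example 4.10 can be verified with a few lines of arithmetic; the sketch below checks them numerically (the weights shown are one valid choice in each case):

```python
import numpy as np

# data of Example 4.10
x1, x2, x3 = np.array([0., 2.]), np.array([0., 1.]), np.array([1., 0.])
w1, w2 = np.array([1., 1.]), np.array([1., 0.])
y = np.array([2., 2.])

rep_a = x1 + 2 * w2                        # one extreme point plus a ray
rep_b = 0.5 * x2 + 0.5 * x3 + 1.5 * w1     # a different representation
assert np.allclose(rep_a, y) and np.allclose(rep_b, y)

# y satisfies the constraints of the polyhedron
assert y[0] - y[1] >= -2 and y[0] + y[1] >= 1 and np.all(y >= 0)
print("both representations give", rep_a)
```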

We note that the set Q in Theorem 4.15 is the image of the polyhedron

H = { (λ₁, …, λ_k, θ₁, …, θ_r)  |  Σ_{i=1}^{k} λᵢ = 1, λᵢ ≥ 0, θⱼ ≥ 0 },


under the linear mapping

(λ₁, …, λ_k, θ₁, …, θ_r) ↦ Σ_{i=1}^{k} λᵢx^i + Σ_{j=1}^{r} θⱼw^j.

Thus, one corollary of the resolution theorem is that every polyhedron is the image, under a linear mapping, of a polyhedron H with this particular structure.

We now specialize Theorem 4.15 to the case of bounded polyhedra, to recover a result that was also proved in Section 2.7, using a different line of argument.

Corollary 4.4 A nonempty bounded polyhedron is the convex hull of its extreme points.

Proof. Let P = {x | Ax ≥ b} be a nonempty bounded polyhedron. If w is a nonzero element of the cone C = {x | Ax ≥ 0} and x is an element of P, we have x + λw ∈ P for all λ ≥ 0, contradicting the boundedness of P. We conclude that C consists of only the zero vector and that P does not have any extreme rays. The result then follows from Theorem 4.15.
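Corollary 4.4 can be checked on a simple bounded polyhedron. In the sketch below (our own construction, not from the text), every point of the unit square is expressed as a convex combination of its four extreme points, using product-form weights as one explicit choice.

```python
import numpy as np

# vertices (extreme points) of the unit square [0,1] x [0,1]
V = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])

rng = np.random.default_rng(0)
for _ in range(5):
    x = rng.random(2)    # a random point of the square
    a, b = x
    # product-form convex weights: one valid choice for this polyhedron
    lam = np.array([(1 - a) * (1 - b), (1 - a) * b, a * (1 - b), a * b])
    assert np.all(lam >= 0) and np.isclose(lam.sum(), 1.0)
    assert np.allclose(lam @ V, x)   # x is a convex combination of vertices
print("convex hull check passed")
```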

There is another corollary of Theorem 4.15 that deals with cones, and which is proved by noting that a cone can have no extreme points other than the zero vector.

Corollary 4.5 Assume that the cone C = {x | Ax ≥ 0} is pointed. Then, every element of C can be expressed as a nonnegative linear combination of the extreme rays of C.

Converse to the resolution theorem

Let us say that a set Q is finitely generated if it is specified in the form

Q = { Σ_{i=1}^{k} λᵢx^i + Σ_{j=1}^{r} θⱼw^j  |  λᵢ ≥ 0, θⱼ ≥ 0, Σ_{i=1}^{k} λᵢ = 1 },     (4.6)

where x¹, …, x^k and w¹, …, w^r are some given elements of ℝⁿ. The resolution theorem states that a polyhedron with at least one extreme point is a finitely generated set (this is also true for general polyhedra; see Exercise 4.47). We now discuss a converse result, which states that every finitely generated set is a polyhedron.


A finitely generated set Q of the form (4.6) is the image of the polyhedron

H = { (λ₁, …, λ_k, θ₁, …, θ_r)  |  Σ_{i=1}^{k} λᵢ = 1, λᵢ ≥ 0, θⱼ ≥ 0 },

under a linear mapping. Thus, the results of Section 2.8 imply that Q is a polyhedron. We present here an alternative proof, which is based on duality.

Theorem 4.16 A finitely generated set is a polyhedron. In particular, the convex hull of finitely many vectors is a (bounded) polyhedron.

Proof. Consider the linear programming problem (4.4) that was formed in the proof of Theorem 4.15. A vector z belongs to a finitely generated set Q of the form (4.6) if and only if the problem (4.4) is feasible. Using duality, this is the case if and only if the problem (4.5) has finite optimal cost. We convert the problem (4.5) to standard form by introducing nonnegative variables p⁺, p⁻, q⁺, q⁻, such that p = p⁺ − p⁻ and q = q⁺ − q⁻, as well as surplus variables. Since the feasible set of the resulting standard form problem is a pointed cone, Theorem 4.13 shows that its optimal cost is finite if and only if

(p⁺)'z − (p⁻)'z + q⁺ − q⁻ ≥ 0,

for every element (p⁺, p⁻, q⁺, q⁻) of a finite complete set of extreme rays. Hence, z belongs to Q if and only if z satisfies a finite collection of linear inequalities. This shows that Q is a polyhedron.

In conclusion, we have two ways of describing a polyhedron:
(a) in terms of a finite set of linear constraints;
(b) as a finitely generated set, in terms of its extreme points and extreme rays.

These two descriptions are mathematically equivalent, but can be quite different from a practical viewpoint. For example, we may be able to describe a polyhedron in terms of a small number of linear constraints. If, on the other hand, this polyhedron has many extreme points, a description in terms of extreme points and extreme rays can be much more complicated. Furthermore, passing from one type of description to the other is, in general, a complicated computational task.

4.10 General linear programming duality*

In the definition of the dual problem (Section 4.2), a dual variable was associated with each constraint of the form aᵢ'x ≥ bᵢ, aᵢ'x ≤ bᵢ, or aᵢ'x = bᵢ.


However, no dual variables were associated with constraints of the form xᵢ ≥ 0 or xᵢ ≤ 0. In the same spirit, and in a more general approach to linear programming duality, we can choose arbitrarily which constraints will be associated with price variables and which ones will not. In this section, we develop a general duality theorem that covers such a situation.

Consider the primal problem

minimize c'x
subject to Ax ≥ b,
x ∈ P,

where P is the polyhedron

P = { x | Dx ≥ d }.

We associate a dual vector p with the constraint Ax ≥ b. The constraint x ∈ P is a generalization of constraints of the form xᵢ ≥ 0 or xᵢ ≤ 0, and dual variables are not associated with it.

As in Section 4.1, we define the dual objective g(p) by

g(p) = min_{x∈P} [ c'x + p'(b − Ax) ].     (4.7)

The dual problem is then defined as

maximize g(p)
subject to p ≥ 0.

We first provide a generalization of the weak duality theorem.

Theorem 4.17 (Weak duality) If x is primal feasible (that is, Ax ≥ b and x ∈ P), and p is dual feasible (that is, p ≥ 0), then g(p) ≤ c'x.

Proof. If x and p are primal and dual feasible, respectively, then p'(b − Ax) ≤ 0, which implies that

g(p) = min_{y∈P} [ c'y + p'(b − Ay) ]
≤ c'x + p'(b − Ax)
≤ c'x.

We also have the following generalization of the strong duality theorem.

Theorem 4.18 (Strong duality) If the primal problem has an optimal solution, so does the dual, and the respective optimal costs are equal.


Proof. Since P = {x | Dx ≥ d}, the primal problem is of the form

minimize c'x
subject to Ax ≥ b,
Dx ≥ d,

and we assume that it has an optimal solution. Its dual, which is

maximize p'b + q'd
subject to p'A + q'D = c',     (4.8)
p ≥ 0, q ≥ 0,

must then have the same optimal cost. For any fixed p, the vector q should be chosen optimally in the problem (4.8). Thus, the dual problem (4.8) can also be written as

maximize p'b + f(p)
subject to p ≥ 0,

where f(p) is the optimal cost in the problem

maximize q'd
subject to q'D = c' − p'A,     (4.9)
q ≥ 0.

[If the latter problem is infeasible, we set f(p) = −∞.] Using the strong duality theorem for problem (4.9), we obtain

f(p) = min_{Dx≥d} (c'x − p'Ax).

We conclude that the dual problem (4.8) has the same optimal cost as the problem

maximize p'b + min_{Dx≥d} (c'x − p'Ax)
subject to p ≥ 0.

By comparing with Eq. (4.7), we see that this is the same as maximizing g(p) over all p ≥ 0.

The idea of selectively assigning dual variables to some of the constraints is often used in order to treat "simpler" constraints differently than more "complex" ones, and has numerous applications in large scale optimization. (Applications to integer programming are discussed in Section 11.4.) Finally, let us point out that the approach in this section extends to certain nonlinear optimization problems. For example, if we replace the


linear cost function c'x by a general convex cost function c(x), and the polyhedron P by a general convex set, we can again define the dual objective according to the formula

g(p) = min_{x∈P} [ c(x) + p'(b − Ax) ].

It turns out that a result analogous to Theorem 4.18 remains valid, under suitable technical conditions, but this lies beyond the scope of this book.

4.11 Summary

We summarize here the main ideas that have been developed in this chapter.

Given a (primal) linear programming problem, we can associate with it another (dual) linear programming problem, by following a set of mechanical rules. The definition of the dual problem is consistent, in the sense that the duals of equivalent problems are themselves equivalent.

Each dual variable is associated with a particular primal constraint and can be viewed as a penalty for violating that constraint. By replacing the primal constraints with penalty terms, we increase the set of available options, and this allows us to construct primal solutions whose cost is less than the optimal cost. In particular, every dual feasible vector leads to a lower bound on the optimal cost of the primal problem (this is the essence of the weak duality theorem). The maximization in the dual problem is then a search for the tightest such lower bound. The strong duality theorem asserts that the tightest such lower bound is equal to the optimal primal cost.

An optimal dual variable can also be interpreted as a marginal cost, that is, as the rate of change of the optimal primal cost when we perform a small perturbation of the right-hand side vector b, assuming nondegeneracy.

A key relation between optimal primal and dual solutions is provided by the complementary slackness conditions. Intuitively, these conditions require that any constraint that is inactive at an optimal solution carries a zero price, which is compatible with the interpretation of prices as marginal costs.

We saw that every basis matrix in a standard form problem determines not only a primal basic solution, but also a basic dual solution. This observation is at the heart of the dual simplex method. This method is similar to the primal simplex method in that it generates a sequence of primal basic solutions, together with an associated sequence of dual basic solutions. It is different, however, in that the dual basic solutions are dual feasible, with ever improving costs, while the primal basic solutions are infeasible (except for the last one). We developed the dual simplex method by describing its mechanics and by providing an algebraic justification.


Nevertheless, the dual simplex method also has a geometric interpretation. It keeps moving from one dual basic feasible solution to an adjacent one and, in this respect, it is similar to the primal simplex method applied to the dual problem.

All of duality theory can be developed by exploiting the termination conditions of the simplex method, and this was our initial approach to the subject. We also pursued an alternative line of development that proceeded from first principles and used geometric arguments. This is a more direct and more general approach, but requires more abstract reasoning.

Duality theory provided us with some powerful tools based on which we were able to enhance our geometric understanding of polyhedra. We derived a few theorems of the alternative (like Farkas' lemma), which are surprisingly powerful and have applications in a wide variety of contexts. In fact, Farkas' lemma can be viewed as the core of linear programming duality theory. Another major result that we derived is the resolution theorem, which allows us to express any element of a nonempty polyhedron with at least one extreme point as a convex combination of its extreme points plus a nonnegative linear combination of its extreme rays; in other words, every such polyhedron is "finitely generated." The converse is also true, and every finitely generated set is a polyhedron (that is, it can be represented in terms of linear inequality constraints). Results of this type play a key role in confirming our intuitive geometric understanding of polyhedra and linear programming. They allow us to develop alternative views of certain situations and lead to deeper understanding. Many such results have an "obvious" geometric content and are often taken for granted. Nevertheless, as we have seen, rigorous proofs can be quite elaborate.

4.12 Exercises

Exercise 4.1 Consider the linear programming problem:

minimize x₁ − x₂
subject to 2x₁ + 3x₂ − x₃ + x₄ ≤ 0
3x₁ + x₂ + 4x₃ − 2x₄ ≥ 3
−x₁ − x₂ + 2x₃ + x₄ = 6
x₁ ≤ 0
x₂, x₃ ≥ 0.

Write down the corresponding dual problem.

Exercise 4.2 Consider the primal problem

minimize c'x
subject to Ax ≥ b,
x ≥ 0.

Form the dual problem and convert it into an equivalent minimization problem. Derive a set of conditions on the matrix A and the vectors b, c, under which the

Page 205: [1997 Bertsimas, Tsitsiklis] Introduction to Linear Optimization (Ch1-5)

7/22/2019 [1997 Bertsimas, Tsitsiklis] Introduction to Linear Optimization (Ch1-5)

http://slidepdf.com/reader/full/1997-bertsimas-tsitsiklis-introduction-to-linear-optimization-ch1-5 205/267

188 p. 4 Duty teoy

dual is identical to the primal, and construct an example in which these conditions are satisfied.

Exercise 4.3 The purpose of this exercise is to show that solving linear programming problems is no harder than solving systems of linear inequalities.

Suppose that we are given a subroutine which, given a system of linear inequality constraints, either produces a solution or decides that no solution exists. Construct a simple algorithm that uses a single call to this subroutine and which finds an optimal solution to any linear programming problem that has an optimal solution.

Exercise 4.4 Let A be a symmetric square matrix. Consider the linear programming problem

minimize c'x
subject to Ax ≥ c,
x ≥ 0.

Prove that if x* satisfies Ax* = c and x* ≥ 0, then x* is an optimal solution.

Exercise 4.5 Consider a linear programming problem in standard form, and assume that the rows of A are linearly independent. For each one of the following statements, provide either a proof or a counterexample.

(a) Let x* be a basic feasible solution. Suppose that for every basis corresponding to x*, the associated basic solution to the dual is infeasible. Then, the optimal cost must be strictly less than c'x*.

(b) The dual of the auxiliary primal problem considered in Phase I of the simplex method is always feasible.

(c) Let pᵢ be the dual variable associated with the ith equality constraint in the primal. Eliminating the ith primal equality constraint is equivalent to introducing the additional constraint pᵢ = 0 in the dual problem.

(d) If the unboundedness criterion in the primal simplex algorithm is satisfied,

then the dual problem is infeasible.

Exercise 4.6 (Duality in Chebyshev approximation) Let A be an m × n matrix and let b be a vector in ℝᵐ. We consider the problem of minimizing ‖Ax − b‖∞ over all x ∈ ℝⁿ. Here, ‖·‖∞ is the vector norm defined by ‖y‖∞ = maxᵢ |yᵢ|. Let v be the value of the optimal cost.

(a) Let p be any vector in ℝᵐ that satisfies Σ_{i=1}^{m} |pᵢ| = 1 and p'A = 0'. Show that p'b ≤ v.

(b) In order to obtain the best possible lower bound of the form considered in part (a), we form the linear programming problem

maximize p'b
subject to p'A = 0',
Σ_{i=1}^{m} |pᵢ| ≤ 1.

Show that the optimal cost in this problem is equal to v.
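The minimax problem of this exercise can itself be solved as a linear program through the standard epigraph reformulation, minimize t subject to −t ≤ aᵢ'x − bᵢ ≤ t. The sketch below sets this up on a small instance of our own (assuming scipy is available; nothing here comes from the exercise itself).

```python
import numpy as np
from scipy.optimize import linprog

A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])
b = np.array([1.0, 1.0, 0.0])
m, n = A.shape

# variables (x, t); constraints  A x - t e <= b  and  -A x - t e <= -b
A_ub = np.block([[A, -np.ones((m, 1))],
                 [-A, -np.ones((m, 1))]])
b_ub = np.concatenate([b, -b])
cost = np.concatenate([np.zeros(n), [1.0]])   # minimize t

res = linprog(cost, A_ub=A_ub, b_ub=b_ub,
              bounds=[(None, None)] * n + [(0, None)], method="highs")
assert res.success
print("optimal infinity-norm residual:", res.fun)
```

For this instance the optimum is attained at x = (1/3, 1/3), where all three residuals equal 2/3.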


Exercise 4.7 (Duality in piecewise linear convex optimization) Consider the problem of minimizing max_{i=1,…,m} (aᵢ'x − bᵢ) over all x ∈ ℝⁿ. Let v be the value of the optimal cost, assumed finite. Let A be the matrix with rows a₁, …, a_m, and let b be the vector with components b₁, …, b_m.

(a) Consider any vector p ∈ ℝᵐ that satisfies p'A = 0', p ≥ 0, and Σ_{i=1}^{m} pᵢ = 1. Show that −p'b ≤ v.

(b) In order to obtain the best possible lower bound of the form considered in part (a), we form the linear programming problem

maximize −p'b
subject to p'A = 0',
p'e = 1,
p ≥ 0,

where e is the vector with all components equal to 1. Show that the optimal cost in this problem is equal to v.

Exercise 4.8 Consider the linear programming problem of minimizing c'x subject to Ax = b, x ≥ 0. Let x* be an optimal solution, assumed to exist, and let p* be an optimal solution to the dual.

(a) Let x̄ be an optimal solution to the primal, when c is replaced by some c̄. Show that (c̄ − c)'(x̄ − x*) ≤ 0.

(b) Let the cost vector be fixed at c, but suppose that we now change b to b̄, and let x̄ be a corresponding optimal solution to the primal. Prove that

(p*)'(b̄ − b) ≤ c'x̄ − c'x*.

Exercise 4.9 (Backpropagation of dual variables in a multiperiod problem) A company makes a product that can be either sold or stored to meet future demand. Let t = 1, …, T denote the periods of the planning horizon. Let bₜ be the production volume during period t, which is assumed to be known in advance. During each period t, a quantity xₜ of the product is sold, at a unit price of dₜ. Furthermore, a quantity yₜ can be sent to long-term storage, at a unit transportation cost of c. Alternatively, a quantity wₜ can be retrieved from storage, at zero cost. We assume that when the product is prepared for long-term storage, it is partly damaged, and only a fraction f of the total survives. Demand is assumed to be unlimited. The main question is whether it is profitable to store some of the production, in anticipation of higher prices in the future. This leads us to the following problem, where zₜ stands for the amount kept in long-term storage, at the end of period t:

maximize Σ_{t=1}^{T} αᵗ(dₜxₜ − cyₜ) + α^{T+1}d_{T+1}z_T
subject to xₜ + yₜ − wₜ = bₜ, t = 1, …, T,
zₜ + wₜ − zₜ₋₁ − f yₜ = 0, t = 1, …, T,
z₀ = 0,
xₜ, yₜ, wₜ, zₜ ≥ 0.

Here, d_{T+1} is the salvage price for whatever inventory is left at the end of period T. Furthermore, α is a discount factor, with 0 < α < 1, reflecting the fact that future revenues are valued less than current ones.


(a) Let pₜ and qₜ be dual variables associated with the first and second equality constraints, respectively. Write down the dual problem.

(b) Assume that 0 < f < 1, bₜ ≥ 0, and c ≥ 0. Show that the following formulae provide an optimal solution to the dual problem:

q_T = max{ α^{T+1} d_{T+1}, f q_T − α^T c },
qₜ = max{ qₜ₊₁, αᵗ dₜ },  t = 1, …, T − 1,
pₜ = max{ αᵗ dₜ, f qₜ − αᵗ c },  t = 1, …, T.

(c) Explain how the result in part (b) can be used to compute an optimal solution to the original problem. Primal and dual nondegeneracy can be assumed.

Exercise 4.10 (Saddle points of the Lagrangean) Consider the standard form problem of minimizing c'x subject to Ax = b and x ≥ 0. We define the Lagrangean by

L(x, p) = c'x + p'(b − Ax).

Consider the following game: player 1 chooses some x ≥ 0, and player 2 chooses some p; then, player 1 pays to player 2 the amount L(x, p). Player 1 would like to minimize L(x, p), while player 2 would like to maximize it.

A pair (x*, p*), with x* ≥ 0, is called an equilibrium point (or a saddle point, or a Nash equilibrium) if

L(x*, p) ≤ L(x*, p*) ≤ L(x, p*), for all x ≥ 0, and all p.

(Thus, we have an equilibrium if no player is able to improve her performance by unilaterally modifying her choice.)

Show that a pair (x*, p*) is an equilibrium if and only if x* and p* are optimal solutions to the standard form problem under consideration and its dual, respectively.

Exercise 4.11 Consider a linear programming problem in standard form which is infeasible, but which becomes feasible and has finite optimal cost when the last equality constraint is omitted. Show that the dual of the original (infeasible) problem is feasible and the optimal cost is infinite.

Exercise 4.12 (Degeneracy and uniqueness I) Consider a general linear programming problem and suppose that we have a nondegenerate basic feasible solution to the primal. Show that the complementary slackness conditions lead to a system of equations for the dual vector that has a unique solution.

Exercise 4.13 (Degeneracy and uniqueness II) Consider the following pair of problems that are duals of each other:

minimize c'x          maximize p'b
subject to Ax = b,    subject to p'A ≤ c'.
x ≥ 0,


(a) Prove that if one problem has a nondegenerate and unique optimal solution, so does the other.

(b) Suppose that we have a nondegenerate optimal basis for the primal, and that the reduced cost of one of the basic variables is zero. What does the result of part (a) imply? Is it true that there must exist another optimal basis?

Exercise 4.14 (Degeneracy and uniqueness III) Give an example in which the primal problem has a degenerate optimal basic feasible solution, but the dual has a unique optimal solution. (The example need not be in standard form.)

Exercise 4.15 (Degeneracy and uniqueness IV) Consider the problem

minimize x₂
subject to x₂ = 0,
x ≥ 0.

Write down its dual. For both the primal and the dual problem, determine whether they have unique optimal solutions and whether they have nondegenerate optimal solutions. Is this example in agreement with the statement that nondegeneracy of an optimal basic feasible solution in one problem implies uniqueness of optimal solutions for the other? Explain.

Exercise 4.16 Give an example of a pair (primal and dual) of linear programming problems, both of which have multiple optimal solutions.

Exercise 4.17 This exercise is meant to demonstrate that knowledge of a primal optimal solution does not necessarily contain information that can be exploited to determine a dual optimal solution. In particular, determining an optimal solution to the dual is as hard as solving a system of linear inequalities, even if an optimal solution to the primal is available.

Consider the problem of minimizing c'x subject to Ax ≥ 0, and suppose that we are told that the zero vector is optimal. Let the dimensions of A be m × n, and suppose that we have an algorithm that determines a dual optimal solution and whose running time is O((m + n)^k), for some constant k. (Note that if x = 0 is not an optimal primal solution, the dual has no feasible solution, and we assume that in this case, our algorithm exits with an error message.) Assuming the availability of the above algorithm, construct a new algorithm that takes as input a system of m linear inequalities in n variables, runs for O((m + n)^k) time, and either finds a feasible solution or determines that no feasible solution exists.

and ithr nds a asibl solution or dtrmins that no asibl solution xistsExcis 4 8 Considr a problm in sandard or Supos that th matrixA has dimnsions and its rows ar linarly indpndnt Suos thatall basic solutions to th rimal and to th dual ar nondgnrat Lt x b aasibl solution to th primal and lt p b a dual vctor not ncssarily asibl ,such that th air x, p satiss comlmntary slacknss

(a) Sow that thr xist colums o A tat ar inaly indpndnt andsuch that th corrsonding componnts o x ar all ositiv


192 Chap. 4 Duality theory

(b) Show that x and p are basic solutions to the primal and the dual, respectively.

(c) Show that the result of part (a) is false if the nondegeneracy assumption is removed.

Exercise 4.19 Let P = {x in R^n | Ax = b, x >= 0} be a nonempty polyhedron, and let m be the dimension of the vector b. We call x_j a null variable if x_j = 0 whenever x in P.
(a) Suppose that there exists some p in R^m for which p'A >= 0', p'b = 0, and such that the jth component of p'A is positive. Prove that x_j is a null variable.
(b) Prove the converse of (a): if x_j is a null variable, then there exists some p in R^m with the properties stated in part (a).
(c) If x_j is not a null variable, then, by definition, there exists some y in P for which y_j > 0. Use the results in parts (a) and (b) to prove that there exist x in P and p in R^m such that

p'A >= 0',   p'b = 0,   x + A'p > 0.

Exercise 4.20 (Strict complementary slackness)
(a) Consider the following linear programming problem and its dual:

minimize    c'x                 maximize    p'b
subject to  Ax = b              subject to  p'A <= c',
            x >= 0,

and assume that both problems have an optimal solution. Fix some j. Suppose that every optimal solution to the primal satisfies x_j = 0. Show that there exists an optimal solution p to the dual such that p'A_j < c_j. (Here, A_j is the jth column of A.) Hint: Let d be the optimal cost. Consider the problem of minimizing -x_j subject to Ax = b, x >= 0, and -c'x >= -d, and form its dual.
(b) Show that there exist optimal solutions x and p to the primal and to the dual, respectively, such that for every j we have either x_j > 0 or p'A_j < c_j. Hint: Use part (a) for each j, and then take the average of the vectors obtained.
(c) Consider now the following linear programming problem and its dual:

minimize    c'x                 maximize    p'b
subject to  Ax >= b             subject to  p'A <= c',
            x >= 0,                         p >= 0.

Assume that both problems have an optimal solution. Show that there exist optimal solutions to the primal and to the dual, respectively, that satisfy strict complementary slackness, that is:
(i) For every j, we have either x_j > 0 or p'A_j < c_j.
(ii) For every i, we have either a_i'x > b_i or p_i > 0. (Here, a_i' is the ith row of A.) Hint: Convert the primal to standard form and apply part (b).


(d) Consider the linear programming problem

minimize    5x1 + 5x2
subject to  x1 + 2x2 >= 1
            2x1 + x2 >= 1
            x1, x2 >= 0.

Does the optimal primal solution (1/3, 1/3), together with the corresponding dual optimal solution, satisfy strict complementary slackness? Determine all primal and dual optimal solutions, and identify the set of all strictly complementary pairs.

Exercise 4.21 (Clark's theorem) Consider the following pair of linear programming problems:

minimize    c'x                 maximize    p'b
subject to  Ax >= b             subject to  p'A <= c'
            x >= 0,                         p >= 0.

Suppose that at least one of these two problems has a feasible solution. Prove that the set of feasible solutions to at least one of the two problems is unbounded. Hint: Interpret boundedness of a set in terms of the finiteness of the optimal cost of some linear programming problem.

Exercise 4.22 Consider the dual simplex method applied to a standard form problem with linearly independent rows. Suppose that we have a basis which is primal infeasible, but dual feasible, and let i be such that x_{B(i)} < 0. Suppose that all entries in the ith row in the tableau (other than x_{B(i)}) are nonnegative. Show that the optimal dual cost is +infinity.

Exercise 4.23 Describe in detail the mechanics of a revised dual simplex method that works in terms of the inverse basis matrix B^{-1}, instead of the full simplex tableau.

Exercise 4.24 Consider the lexicographic pivoting rule for the dual simplex method, and suppose that the algorithm is initialized with each column of the tableau being lexicographically positive. Prove that the dual simplex method does not cycle.

Exercise 4.25 This exercise shows that if we bring the dual problem into standard form and then apply the primal simplex method, the resulting algorithm is not identical to the dual simplex method.
Consider the following standard form problem and its dual:

minimize    x1 + x2             maximize    p1 + p2
subject to  x1 = 1              subject to  p1 <= 1
            x2 = 1                          p2 <= 1.
            x1, x2 >= 0,

Here, there is only one possible basis, and the dual simplex method must terminate immediately. Show that if the dual problem is converted into standard form and the primal simplex method is applied to it, one or more changes of basis may be required.


Exercise 4.26 Let A be a given matrix. Show that exactly one of the following alternatives must hold.
(a) There exists some x != 0 such that Ax = 0, x >= 0.
(b) There exists some p such that p'A > 0'.

Exercise 4.27 Let A be a given matrix. Show that the following two statements are equivalent.
(a) Every vector x such that Ax >= 0 and x >= 0 must satisfy x1 = 0.
(b) There exists some p such that p'A <= 0', p >= 0, and p'A_1 < 0, where A_1 is the first column of A.

Exercise 4.28 Let a and a_1, ..., a_m be given vectors in R^n. Prove that the following two statements are equivalent:
(a) For all x >= 0, we have a'x <= max_i a_i'x.
(b) There exist nonnegative coefficients lambda_i that sum to 1, and such that a <= sum_{i=1}^m lambda_i a_i.

Exercise 4.29 (Inconsistent systems of linear inequalities) Let a_1, ..., a_m be some vectors in R^n, with m > n + 1. Suppose that the system of inequalities

a_i'x >= b_i,   i = 1, ..., m,

does not have any solutions. Show that we can choose n + 1 of these inequalities, so that the resulting system of inequalities has no solutions.

Exercise 4.30 (Helly's theorem)
(a) Let F be a finite family of polyhedra in R^n such that every n + 1 polyhedra in F have a point in common. Prove that all polyhedra in F have a point in common. Hint: Use the result in Exercise 4.29.
(b) For n = 2, part (a) asserts that polyhedra P_1, P_2, ..., P_K (K >= 3) in the plane have a point in common if and only if every three of them have a point in common. Is the result still true with "three" replaced by "two"?

Exercise 4.31 (Unit eigenvectors of stochastic matrices) We say that an n x n matrix P, with entries p_ij, is stochastic if all of its entries are nonnegative and

sum_{j=1}^n p_ij = 1,   for all i,

that is, the sum of the entries of each row is equal to 1.
Use duality to show that if P is a stochastic matrix, then the system of equations

p'P = p',   p >= 0,

has a nonzero solution. (Note that the vector p can be normalized so that its components sum to one. Then, the result in this exercise establishes that every finite state Markov chain has an invariant probability distribution.)
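As a concrete illustration of the result in this exercise, the sketch below (not part of the original text; the matrix entries are made-up example data) computes the normalized invariant distribution of a two-state stochastic matrix, where the system p'P = p' collapses to a single balance equation.

```python
def invariant_distribution_2state(P):
    """Invariant distribution p (with p'P = p') of a 2x2 stochastic matrix.

    For two states, p'P = p' reduces to p[0]*P[0][1] == p[1]*P[1][0],
    so (P[1][0], P[0][1]) is an unnormalized solution; we normalize it
    to sum to one, as suggested in the exercise.
    """
    a, b = P[0][1], P[1][0]
    if a + b == 0:
        return [1.0, 0.0]  # P is the identity; every distribution is invariant
    return [b / (a + b), a / (a + b)]

P = [[0.9, 0.1],
     [0.2, 0.8]]
p = invariant_distribution_2state(P)
# Verify p'P = p' componentwise.
pP = [p[0] * P[0][0] + p[1] * P[1][0],
      p[0] * P[0][1] + p[1] * P[1][1]]
```

For larger chains one would solve the same linear system (or, as the exercise suggests, argue existence via LP duality); the two-state case just admits this closed form.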


Exercise 4.32 (Leontief systems and Samuelson's substitution theorem) A Leontief matrix is an m x n matrix A in which every column has at most one positive element. For an interpretation, each column A_j corresponds to a production process. If a_ij is negative, |a_ij| represents the amount of goods of type i consumed by the process. If a_ij is positive, it represents the amount of goods of type i produced by the process. If x_j is the intensity with which process j is used, then Ax represents the net output of the different goods. The matrix A is called productive if there exists some x >= 0 such that Ax > 0.
(a) Let A be a square productive Leontief matrix (m = n). Show that every vector z that satisfies Az >= 0 must be nonnegative. Hint: If z satisfies Az >= 0 but has a negative component, consider the smallest nonnegative theta such that some component of x + theta z becomes zero, and derive a contradiction.
(b) Show that every square productive Leontief matrix is invertible and that all entries of the inverse matrix are nonnegative. Hint: Use the result in part (a).
(c) We now consider the general case where n >= m, and we introduce a constraint of the form e'x <= 1, where e = (1, 1, ..., 1). (Such a constraint could capture, for example, a bottleneck due to the finiteness of the labor force.) An "output vector" y is said to be achievable if y >= 0 and there exists some x >= 0 such that Ax = y and e'x <= 1. An achievable vector y is said to be efficient if there exists no achievable vector z such that z >= y and z != y. (Intuitively, an output vector y which is not efficient can be improved upon and is therefore uninteresting.) Suppose that A is productive. Show that there exists a positive efficient vector y. Hint: Given a positive achievable vector y*, consider maximizing sum_i y_i over all achievable vectors y that are larger than y*.
(d) Suppose that A is productive. Show that there exists a set of m production processes that are capable of generating all possible efficient output vectors y. That is, there exist indices B(1), ..., B(m), such that every efficient output vector y can be expressed in the form y = sum_{i=1}^m A_{B(i)} x_{B(i)}, for some nonnegative coefficients x_{B(i)} whose sum is bounded by 1. Hint: Consider the problem of minimizing e'x subject to Ax = y, x >= 0, and show that we can use the same optimal basis for all efficient vectors y.

Exercise 4.33 (Options pricing) Consider a market that operates for a single period, and which involves three assets: a stock, a bond, and an option. Let S be the price of the stock, in the beginning of the period. Its price S̄ at the end of the period is random and is assumed to be equal to either Su, with probability beta, or Sd, with probability 1 - beta. Here, u and d are scalars that satisfy d < 1 < u.
Bonds are assumed riskless. Investing one dollar in a bond results in a payoff of r, at the end of the period. (Here, r is a scalar greater than 1.) Finally, the option gives us the right to purchase, at the end of the period, one stock at a fixed price of K. If the realized price S̄ of the stock is greater than K, we exercise the option and then immediately sell the stock in the stock market, for a payoff of S̄ - K. If, on the other hand, we have S̄ < K, there is no advantage in exercising the option, and we receive zero payoff. Thus, the value of the option at the end of the period is equal to max{0, S̄ - K}. Since the option is itself an asset, it


should have a value in the beginning of the time period. Show that under the absence of arbitrage condition, the value of the option must be equal to

y_u max{0, Su - K} + y_d max{0, Sd - K},

where y_u and y_d are a solution to the following system of linear equations:

u y_u + d y_d = 1,
y_u + y_d = 1/r.

Hint: Write down the payoff matrix and use Theorem 4.
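As a numerical sanity check of this pricing recipe, the sketch below (added here; the parameter values S, K, u, d, r are illustrative, not from the book) solves the 2x2 system for (y_u, y_d) in closed form and evaluates the option value.

```python
def price_option(S, K, u, d, r):
    """Solve u*yu + d*yd = 1 and yu + yd = 1/r, then price the option."""
    yu = (1 - d / r) / (u - d)   # equals (r - d) / (r * (u - d))
    yd = 1 / r - yu              # equals (u - r) / (r * (u - d))
    value = yu * max(0, S * u - K) + yd * max(0, S * d - K)
    return yu, yd, value

# Illustrative data with d < 1 < u and r > 1, as the exercise requires.
yu, yd, value = price_option(S=100, K=100, u=1.2, d=0.8, r=1.05)
```

Note that the probability beta plays no role in the price, which is the point of the no-arbitrage argument: only the state prices y_u, y_d matter.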

Exercise 4.34 (Finding separating hyperplanes) Consider a polyhedron P that has at least one extreme point.
(a) Suppose that we are given the extreme points and a complete set of extreme rays of P. Create a linear programming problem whose solution provides us with a separating hyperplane that separates P from the origin, or allows us to conclude that none exists.
(b) Suppose now that P is given to us in the form P = {x | a_i'x >= b_i, i = 1, ..., m}. Suppose that 0 is not in P. Explain how a separating hyperplane can be found.

Exercise 4.35 (Separation of disjoint polyhedra) Consider two nonempty polyhedra P = {x in R^n | Ax <= b} and Q = {x in R^n | Dx <= d}. We are interested in finding out whether the two polyhedra have a point in common.
(a) Devise a linear programming problem such that: if P ∩ Q is nonempty, it returns a point in P ∩ Q; if P ∩ Q is empty, the linear programming problem is infeasible.
(b) Suppose that P ∩ Q is empty. Use the dual of the problem you have constructed in part (a) to show that there exists a vector c such that c'x < c'y for all x in P and y in Q.

Exercise 4.36 (Containment of polyhedra)
(a) Let P and Q be two polyhedra in R^n described in terms of linear inequality constraints. Devise an algorithm that decides whether P is a subset of Q.
(b) Repeat part (a) if the polyhedra are described in terms of their extreme points and extreme rays.

Exercise 4.37 (Closedness of finitely generated cones) Let A_1, ..., A_n be given vectors in R^m. Consider the cone C = {sum_{i=1}^n A_i x_i | x_1, ..., x_n >= 0} and let y^k, k = 1, 2, ..., be a sequence of elements of C that converges to some y. Show that y is in C (and hence C is closed), using the following argument. With y fixed as above, consider the problem of minimizing ||y - sum_{i=1}^n A_i x_i||_inf, subject to the constraints x_1, ..., x_n >= 0. Here, ||.||_inf stands for the maximum norm, defined by ||x||_inf = max_i |x_i|. Explain why the above minimization problem has an optimal solution, find the value of the optimal cost, and prove that y is in C.


Exercise 4.38 (From Farkas' lemma to duality) Use Farkas' lemma to prove the duality theorem for a linear programming problem involving constraints of the form a_i'x = b_i, a_i'x >= b_i, and nonnegativity constraints for some of the variables x_j. Hint: Start by deriving the form of the set of feasible directions at an optimal solution.

Exercise 4.39 (Extreme rays of cones) Let us define a nonzero element d of a pointed polyhedral cone C to be an extreme ray if it has the following property: if there exist vectors f in C and g in C and some lambda in (0, 1) satisfying d = lambda f + (1 - lambda)g, then both f and g are scalar multiples of d. Prove that this definition of extreme rays is equivalent to Definition 4.2.

Exercise 4.40 (Extreme rays of a cone are extreme points of its sections) Consider the cone C = {x in R^n | a_i'x >= 0, i = 1, ..., m}, and assume that the first n constraint vectors a_1, ..., a_n are linearly independent. For any nonnegative scalar r, we define the polyhedron P_r by

P_r = {x in R^n | a_i'x >= 0, i = 1, ..., m, sum_{i=1}^n a_i'x = r}.

(a) Show that the polyhedron P_r is bounded for every r >= 0.
(b) Let r > 0. Show that a vector x in P_r is an extreme point of P_r if and only if x is an extreme ray of the cone C.

Exercise 4.41 (Caratheodory's theorem) Show that every element x of a bounded polyhedron P in R^n can be expressed as a convex combination of at most n + 1 extreme points of P. Hint: Consider an extreme point of the set of all possible representations of x.

Exercise 4.42 (Problems with side constraints) Consider the linear programming problem of minimizing c'x over a bounded polyhedron P in R^n, subject to the additional constraints a_i'x = b_i, i = 1, ..., L. Assume that the problem has a feasible solution. Show that there exists an optimal solution which is a convex combination of L + 1 extreme points of P. Hint: Use the resolution theorem to represent P.

Exercise 4.43
(a) Consider the minimization of c1 x1 + c2 x2 subject to the constraints

Find necessary and sufficient conditions on (c1, c2) for the optimal cost to be finite.
(b) For a general feasible linear programming problem, consider the set of all cost vectors for which the optimal cost is finite. Is it a polyhedron? Prove your answer.


Exercise 4.44
(a) Let P = {(x1, x2) | x1 - x2 = 0, x1 + x2 = 0}. What are the extreme points and the extreme rays of P?
(b) Let P = {(x1, x2) | x1 + x2 >= 8, x1 + x2 >= 8}. What are the extreme points and the extreme rays of P?
(c) For the polyhedron of part (b), is it possible to express each one of its elements as a convex combination of its extreme points plus a nonnegative linear combination of its extreme rays? Is this compatible with the resolution theorem?

Exercise 4.45 Let P be a polyhedron with at least one extreme point. Is it possible to express an arbitrary element of P as a convex combination of its extreme points plus a nonnegative multiple of a single extreme ray?

Exercise 4.46 (Resolution theorem for polyhedral cones) Let C be a nonempty polyhedral cone.
(a) Show that C can be expressed as the union of a finite number C^1, ..., C^k of pointed polyhedral cones. Hint: Intersect C with orthants.
(b) Show that an extreme ray of C must be an extreme ray of one of the cones C^1, ..., C^k.
(c) Show that there exist a finite number of elements w^1, ..., w^r of C such that

C = {sum_{i=1}^r theta_i w^i | theta_1, ..., theta_r >= 0}.

Exercise 4.47 (Resolution theorem for general polyhedra) Let P be a polyhedron. Show that there exist vectors x^1, ..., x^k and w^1, ..., w^r such that

P = {sum_{i=1}^k lambda_i x^i + sum_{j=1}^r theta_j w^j | lambda_i >= 0, theta_j >= 0, sum_{i=1}^k lambda_i = 1}.

Hint: Generalize the steps in the preceding exercise.

Exercise 4.48 (Polars of finitely generated and polyhedral cones) For any cone C, we define its polar C⊥ by

C⊥ = {p | p'x <= 0, for all x in C}.

(a) Let F be a finitely generated cone, of the form

F = {sum_{i=1}^r theta_i w^i | theta_1, ..., theta_r >= 0}.

Show that F⊥ = {p | p'w^i <= 0, i = 1, ..., r}, which is a polyhedral cone.
(b) Show that the polar of F⊥ is F, and conclude that the polar of a polyhedral cone is finitely generated. Hint: Use Farkas' lemma.


(c) Show that a finitely generated pointed cone F is a polyhedron. Hint: Consider the polar of the polar.
(d) (Polar cone theorem) Let C be a closed, nonempty, and convex cone. Show that (C⊥)⊥ = C. Hint: Mimic the derivation of Farkas' lemma using the separating hyperplane theorem (Section 4.7).
(e) Is the polar cone theorem true when C is the empty set?

Exercise 4.49 Consider a polyhedron, and let x and y be two basic feasible solutions. If we are only allowed to make moves from any basic feasible solution to an adjacent one, show that we can go from x to y in a finite number of steps. Hint: Generalize the simplex method to nonstandard form problems: starting from a nonoptimal basic feasible solution, move along an extreme ray of the cone of feasible directions.

Exercise 4.50 We are interested in the problem of deciding whether a polyhedron

Q = {x in R^n | Ax <= b, Dx >= d, x >= 0}

is nonempty. We assume that the polyhedron P = {x in R^n | Ax <= b, x >= 0} is nonempty and bounded. For any vector p, of the same dimension as d, we define

g(p) = -p'd + max_{x in P} p'Dx.

(a) Show that if Q is nonempty, then g(p) >= 0 for all p >= 0.
(b) Show that if Q is empty, then there exists some p >= 0 such that g(p) < 0.
(c) If Q is empty, what is the minimum of g(p) over all p >= 0?

4.13 Notes and sources

4.3 The duality theorem is due to von Neumann (1947), and Gale, Kuhn, and Tucker (1951).

4.6 Farkas' lemma is due to Farkas (1894) and Minkowski (1896). See Schrijver (1986) for a comprehensive presentation of related results. The connection between duality theory and arbitrage was developed by Ross (1976, 1978).

4.7 Weierstrass' Theorem and its proof can be found in most texts on real analysis; see, for example, Rudin (1976). While the simplex method is only relevant to linear programming problems with a finite number of variables, the approach based on the separating hyperplane theorem leads to a generalization of duality theory that covers more general convex optimization problems, as well as infinite-dimensional linear programming problems, that is, linear programming problems with infinitely many variables and constraints; see, e.g., Luenberger (1969) and Rockafellar (1970).

4.9 The resolution theorem and its converse are usually attributed to Farkas, Minkowski, and Weyl.


4.10 For extensions of duality theory to problems involving general convex functions and constraint sets, see Rockafellar (1970) and Bertsekas (1995b).

4.12 Exercises 4.6 and 4.7 are adapted from Boyd and Vandenberghe (1995). The result on strict complementary slackness (Exercise 4.20) was proved by Tucker (1956). The result in Exercise 4.21 is due to Clark (1961). The result in Exercise 4.30 is due to Helly (1923). Input-output macroeconomic models of the form considered in Exercise 4.32 have been introduced by Leontief, who was awarded the 1973 Nobel Prize in economics. The result in Exercise 4.41 is due to Caratheodory (1907).


Chapter 5

Sensitivity analysis

Contents
5.1. Local sensitivity analysis
5.2. Global dependence on the right-hand side vector
5.3. The set of all dual optimal solutions
5.4. Global dependence on the cost vector
5.5. Parametric programming
5.6. Summary
5.7. Exercises
5.8. Notes and sources


Chap. 5 Sensitivity analysis

Consider the linear programming problem

minimize    c'x
subject to  Ax = b
            x >= 0,

and its dual

maximize    p'b
subject to  p'A <= c'.

In this chapter, we study the dependence of the optimal cost and the optimal solution on the coefficient matrix A, the requirement vector b, and the cost vector c. This is an important issue in practice, because we often have incomplete knowledge of the problem data and we wish to predict the effects of certain parameter changes.
In the first section of this chapter, we develop conditions under which the optimal basis remains the same despite a small change in the problem, and we examine the consequences on the optimal cost. We also discuss how to obtain an optimal solution if we change the problem by adding a new variable or a new constraint. In subsequent sections, we allow larger changes in the problem, resulting in a new optimal basis, and we develop a global perspective of the dependence of the optimal cost on the vector b or the vector c. The chapter ends with a brief discussion of parametric programming, which is an extension of the simplex method tailored to the case where there is a single scalar unknown parameter.
Most of the results in this chapter can be extended to cover general linear programming problems. Nevertheless, and in order to simplify the presentation, our standing assumption throughout this chapter will be that we are dealing with a standard form problem in which the rows of the matrix A are linearly independent.

5.1 Local sensitivity analysis

In this section, we develop a methodology for performing sensitivity analysis. We consider a linear programming problem, and we assume that we already have an optimal basis B and the associated optimal solution x*. We then assume that some entry of A, b, or c has been changed, or that a new constraint is added, or that a new variable is added. We first look for conditions under which the current basis is still optimal. If these conditions are violated, we look for an algorithm that finds a new optimal solution without having to solve the new problem from scratch. We will see that the simplex method can be quite useful in this respect.
Having assumed that B is an optimal basis for the original problem, the following two conditions are satisfied:

B^{-1}b >= 0,   (feasibility)


c' - c_B'B^{-1}A >= 0'.   (optimality)

When the problem is changed, we check to see how these conditions are affected. By insisting that both conditions (feasibility and optimality) hold for the modified problem, we obtain the conditions under which the basis matrix B remains optimal for the modified problem. In what follows, we apply this approach to several examples.

A new variable is added

Suppose that we introduce a new variable x_{n+1}, together with a corresponding column A_{n+1}, and obtain the new problem

minimize    c'x + c_{n+1}x_{n+1}
subject to  Ax + A_{n+1}x_{n+1} = b
            x >= 0, x_{n+1} >= 0.

We wish to determine whether the current basis B is still optimal. We note that (x, x_{n+1}) = (x*, 0) is a basic feasible solution to the new problem associated with the basis B, and we need to examine the optimality conditions. For the basis B to remain optimal, it is necessary and sufficient that the reduced cost of x_{n+1} be nonnegative, that is,

c̄_{n+1} = c_{n+1} - c_B'B^{-1}A_{n+1} >= 0.

If this condition holds, (x*, 0) is an optimal solution to the new problem. If, however, c̄_{n+1} < 0, then (x*, 0) is not necessarily optimal. In order to find an optimal solution, we add a column to the simplex tableau, associated with the new variable, and apply the primal simplex algorithm starting from the current basis B. Typically, an optimal solution to the new problem is obtained with a small number of iterations, and this approach is usually much faster than solving the new problem from scratch.

Example 5.1 Consider the problem

minimize    -5x1 - x2 + 12x3
subject to   3x1 + 2x2 + x3      = 10
             5x1 + 3x2      + x4 = 16
             x1, ..., x4 >= 0.

An optimal solution to this problem is given by x = (2, 2, 0, 0), and the corresponding simplex tableau is given by

             x1    x2    x3    x4
     12  |    0     0     2     7
x1 =  2  |    1     0    -3     2
x2 =  2  |    0     1     5    -3


Note that B^{-1} is given by the last two columns of the tableau.
Let us now introduce a variable x5 and consider the new problem

minimize    -5x1 - x2 + 12x3 - x5
subject to   3x1 + 2x2 + x3      + x5 = 10
             5x1 + 3x2      + x4 + x5 = 16
             x1, ..., x5 >= 0.

We have c5 = -1, A5 = (1, 1), and

c̄5 = c5 - c_B'B^{-1}A5 = -1 - [-5 -1] [ -3   2 ] [ 1 ]  = -4.
                                       [  5  -3 ] [ 1 ]

Since c̄5 is negative, introducing the new variable to the basis can be beneficial. We observe that B^{-1}A5 = (-1, 2), and augment the tableau by introducing a column associated with x5:

             x1    x2    x3    x4    x5
     12  |    0     0     2     7    -4
x1 =  2  |    1     0    -3     2    -1
x2 =  2  |    0     1     5    -3     2

We then bring x5 into the basis; x2 exits, and we obtain the following tableau, which happens to be optimal:

             x1    x2    x3    x4    x5
     16  |    0     2    12     1     0
x1 =  3  |    1    0.5  -0.5   0.5    0
x5 =  1  |    0    0.5   2.5  -1.5    1

An optimal solution is given by x = (3, 0, 0, 0, 1).
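The reduced-cost computation in this example can be replayed numerically; the short sketch below (added here, using the data of Example 5.1) evaluates c̄5 = c5 - c_B'B^{-1}A5 and the new tableau column B^{-1}A5 with plain list arithmetic.

```python
# Data of Example 5.1: basic variables (x1, x2); B^{-1} is read off the
# last two columns of the optimal tableau.
c_B = [-5, -1]
B_inv = [[-3, 2],
         [5, -3]]
c5 = -1          # cost coefficient of the new variable x5
A5 = [1, 1]      # column of the new variable

# p' = c_B' B^{-1}
p = [sum(c_B[i] * B_inv[i][j] for i in range(2)) for j in range(2)]
# Reduced cost of the new variable: c5 - p'A5.
reduced_cost = c5 - sum(p[j] * A5[j] for j in range(2))
# Tableau column for x5: B^{-1} A5.
col = [sum(B_inv[i][j] * A5[j] for j in range(2)) for i in range(2)]
```

Since the reduced cost comes out negative (-4), bringing x5 into the basis improves the cost, which is exactly the pivot performed in the example.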

A new inequality constraint is added

Let us now introduce a new constraint a_{m+1}'x >= b_{m+1}, where a_{m+1} and b_{m+1} are given. If the optimal solution x* to the original problem satisfies this constraint, then x* is an optimal solution to the new problem as well. If the new constraint is violated, we introduce a nonnegative slack variable x_{n+1}, and rewrite the new constraint in the form a_{m+1}'x - x_{n+1} = b_{m+1}, x_{n+1} >= 0. We obtain a problem in standard form, in which the matrix A is replaced by

[ A          0 ]
[ a_{m+1}'  -1 ].


Let B be an optimal basis for the original problem. We form a basis for the new problem by selecting the original basic variables, together with x_{n+1}. The new basis matrix B̄ is of the form

B̄ = [ B    0 ]
    [ a'  -1 ],

where the row vector a' contains those components of a_{m+1} associated with the original basic columns. (The determinant of this matrix is the negative of the determinant of B, hence nonzero, and we therefore have a true basis matrix.) The basic solution associated with this basis is (x*, a_{m+1}'x* - b_{m+1}), and it is infeasible, because of our assumption that x* violates the new constraint. Note that the new inverse basis matrix is readily available, because

B̄^{-1} = [ B^{-1}     0 ]
         [ a'B^{-1}  -1 ].

(To see this, note that the product B̄^{-1}B̄ is equal to the identity matrix.)
Let c_B be the m-dimensional vector with the costs of the basic variables in the original problem. Then, the vector of reduced costs associated with the basis B̄ for the new problem, is given by

[c' 0] - [c_B' 0] [ B^{-1}     0 ] [ A          0 ]  = [c' - c_B'B^{-1}A   0],
                  [ a'B^{-1}  -1 ] [ a_{m+1}'  -1 ]

and it is nonnegative, due to the optimality of B for the original problem. Hence, B̄ is a dual feasible basis, and we are in a position to apply the dual simplex method to the new problem. Note that an initial simplex tableau for the new problem is readily constructed. For example, we have

B̄^{-1} [ A          0 ]  = [ B^{-1}A                 0 ]
       [ a_{m+1}'  -1 ]    [ a'B^{-1}A - a_{m+1}'    1 ],

where B^{-1}A is available from the final simplex tableau for the original problem.

Example 5.2 Consider again the problem in Example 5.1:

minimize    -5x1 - x2 + 12x3
subject to   3x1 + 2x2 + x3      = 10
             5x1 + 3x2      + x4 = 16
             x1, ..., x4 >= 0,

and recall the optimal simplex tableau:

             x1    x2    x3    x4
     12  |    0     0     2     7
x1 =  2  |    1     0    -3     2
x2 =  2  |    0     1     5    -3



We introduce the additional constraint x1 + x2 >= 5, which is violated by the optimal solution x* = (2, 2, 0, 0). We have a_{m+1} = (1, 1, 0, 0), b_{m+1} = 5, and a_{m+1}'x* < b_{m+1}. We form the standard form problem

minimize    -5x1 - x2 + 12x3
subject to   3x1 + 2x2 + x3           = 10
             5x1 + 3x2      + x4      = 16
              x1 +  x2           - x5 =  5
             x1, ..., x5 >= 0.

Let a_B consist of the components of a_{m+1} associated with the basic variables. We then have a_B' = (1, 1), and

a_B'B^{-1}A - a_{m+1}' = [1 1] [ 1   0  -3   2 ]  - [1 1 0 0] = [0 0 2 -1].
                               [ 0   1   5  -3 ]

The tableau for the new problem is of the form

             x1    x2    x3    x4    x5
     12  |    0     0     2     7     0
x1 =  2  |    1     0    -3     2     0
x2 =  2  |    0     1     5    -3     0
x5 = -1  |    0     0     2    -1     1

We now have all the information necessary to apply the dual simplex method to the new problem.
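The extra tableau row a_B'B^{-1}A - a_{m+1}' and the value of the new basic variable can be checked by direct multiplication; the sketch below (added here, with the data of Example 5.2) reproduces both.

```python
# Data of Example 5.2: B^{-1}A from the final tableau (columns x1..x4),
# and the new constraint x1 + x2 >= 5, written as a'x - x5 = 5.
B_inv_A = [[1, 0, -3, 2],
           [0, 1, 5, -3]]
a_B = [1, 1]        # components of a_{m+1} on the basic variables x1, x2
a = [1, 1, 0, 0]    # full constraint vector a_{m+1}

# Extra tableau row: a_B'(B^{-1}A) - a'.
row = [sum(a_B[i] * B_inv_A[i][j] for i in range(2)) - a[j] for j in range(4)]

# Value of the new basic variable: x5 = a'x* - b_{m+1}, with x* = (2, 2, 0, 0).
x_star, b_new = [2, 2, 0, 0], 5
x5 = sum(aj * xj for aj, xj in zip(a, x_star)) - b_new
```

The negative value of x5 is what makes the basis primal infeasible but dual feasible, so the dual simplex method is the natural way to continue.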

Our discussion has been focused on the case where an inequality constraint is added to the primal problem. Suppose now that we introduce a new constraint p'A_{n+1} <= c_{n+1} in the dual. This is equivalent to introducing a new variable in the primal, and we are back to the case that was considered in the preceding subsection.

A new equality constraint is added

We now consider the case where the new constraint is of the form a_{m+1}'x = b_{m+1}, and we assume that this new constraint is violated by the optimal solution x* to the original problem. The dual of the new problem is

maximize    p'b + p_{m+1}b_{m+1}
subject to  [p' p_{m+1}] [ A        ]  <= c',
                         [ a_{m+1}' ]

where p_{m+1} is a dual variable associated with the new constraint. Let p* be an optimal basic feasible solution to the original dual problem. Then, (p*, 0) is a feasible solution to the new dual problem.


Let m be the dimension of p*, which is the same as the number of constraints in the original problem. Since p* is a basic feasible solution to the original dual problem, m of the constraints in (p*)'A <= c' are linearly independent and active. However, there is no guarantee that at (p*, 0) there are m + 1 linearly independent active constraints of the new dual problem. In particular, (p*, 0) need not be a basic feasible solution to the new dual problem and may not provide a convenient starting point for the dual simplex method. While it may be possible to obtain a dual basic feasible solution by setting p_{m+1} to a suitably chosen nonzero value, we present here an alternative approach.
Let us assume, without loss of generality, that a_{m+1}'x* > b_{m+1}. We introduce the auxiliary primal problem

minimize    c'x + Mx_{n+1}
subject to  Ax = b
            a_{m+1}'x - x_{n+1} = b_{m+1}
            x >= 0, x_{n+1} >= 0,

where M is a large positive constant. A primal feasible basis for the auxiliary problem is obtained by picking the basic variables of the optimal solution to the original problem, together with the variable x_{n+1}. The resulting basis matrix is the same as the matrix B̄ of the preceding subsection. There is a difference, however. In the preceding subsection, B̄ was a dual feasible basis, whereas here B̄ is a primal feasible basis. For this reason, the primal simplex method can be used to solve the auxiliary problem to optimality.
Suppose that an optimal solution to the auxiliary problem satisfies x_{n+1} = 0; this will be the case if the new problem is feasible and the coefficient M is large enough. Then, the additional constraint a_{m+1}'x = b_{m+1} has been satisfied, and we have an optimal solution to the new problem.

Changes in the requirement vector b

Suppose that some component b_i of the requirement vector b is changed to b_i + delta. Equivalently, the vector b is changed to b + delta e_i, where e_i is the ith unit vector. We wish to determine the range of values of delta under which the current basis remains optimal. Note that the optimality conditions are not affected by the change in b. We therefore need to examine the feasibility condition

B^{-1}(b + delta e_i) >= 0.   (5.1)

Let g = (beta_{1i}, beta_{2i}, ..., beta_{mi}) be the ith column of B^{-1}. Equation (5.1) becomes

x_B + delta g >= 0,

that is,

x_{B(j)} + delta beta_{ji} >= 0,   j = 1, ..., m.


Equivalently,

max_{{j : beta_{ji} > 0}} ( -x_{B(j)} / beta_{ji} ) <= delta <= min_{{j : beta_{ji} < 0}} ( -x_{B(j)} / beta_{ji} ).

For delta in this range, the optimal cost, as a function of delta, is given by c_B'B^{-1}(b + delta e_i) = p'b + delta p_i, where p' = c_B'B^{-1} is the (optimal) dual solution associated with the current basis.
If delta is outside the allowed range, the current solution satisfies the optimality (or dual feasibility) conditions, but it is primal infeasible. In that case, we can apply the dual simplex algorithm starting from the current basis.

Example 5.3 Consider the optimal tableau

             x1    x2    x3    x4
     12  |    0     0     2     7
x1 =  2  |    1     0    -3     2
x2 =  2  |    0     1     5    -3

from Example 5.1.
Let us contemplate adding delta to b1. We look at the first column of B^{-1}, which is (-3, 5). The basic variables under the same basis are x1 = 2 - 3 delta and x2 = 2 + 5 delta. This basis will remain feasible as long as 2 - 3 delta >= 0 and 2 + 5 delta >= 0, that is, if -2/5 <= delta <= 2/3. The rate of change of the optimal cost per unit change of delta is given by c_B'B^{-1}e_1 = (-5)(-3) + (-1)(5) = 10.
If delta is increased beyond 2/3, then x1 becomes negative. At this point, we can perform an iteration of the dual simplex method to remove x1 from the basis, and x3 enters the basis.
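The interval for delta obtained in this example follows the general ratio formula stated above; the sketch below (added here, with exact rational arithmetic and the Example 5.3 data) computes the allowed range and the cost-change rate.

```python
from fractions import Fraction as F

def delta_range(x_B, g):
    """Interval of delta such that x_B + delta*g >= 0.

    x_B: current basic variable values; g: the i-th column of B^{-1}.
    Returns (lo, hi); None on a side means unbounded in that direction.
    """
    lo = max((-x / gi for x, gi in zip(x_B, g) if gi > 0), default=None)
    hi = min((-x / gi for x, gi in zip(x_B, g) if gi < 0), default=None)
    return lo, hi

x_B = [F(2), F(2)]     # x1 = x2 = 2
g = [F(-3), F(5)]      # first column of B^{-1}
lo, hi = delta_range(x_B, g)

c_B = [F(-5), F(-1)]
rate = sum(c * gi for c, gi in zip(c_B, g))  # d(cost)/d(delta) = c_B'B^{-1}e_1
```

Using Fraction avoids rounding in the ratio test, so the endpoints -2/5 and 2/3 come out exactly.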

Changes in the cost vector c

Suppose now that some cost coefficient c_j becomes c_j + delta. The primal feasibility condition is not affected. We therefore need to focus on the optimality condition

c_B'B^{-1}A <= c'.

If c_j is the cost coefficient of a nonbasic variable x_j, then c_B does not change, and the only inequality that is affected is the one for the reduced cost of x_j; we need

c_j + delta - c_B'B^{-1}A_j >= 0,

or

delta >= -c̄_j.


If this condition holds, the current basis remains optimal; otherwise, we can apply the primal simplex method, starting from the current basic feasible solution.

If c_j is the cost coefficient of the ℓth basic variable, that is, if j = B(ℓ), then c_B becomes c_B + δe_ℓ, and all of the optimality conditions will be affected. The optimality conditions for the new problem are

    (c_B + δe_ℓ)' B^{-1} A_i ≤ c_i,    for all i ≠ j.

(Since x_j is a basic variable, its reduced cost stays at zero and need not be examined.) Equivalently,

    δ q_ℓi ≤ c̄_i,    for all i ≠ j,

where q_ℓi is the ℓth entry of B^{-1}A_i, which can be obtained from the simplex tableau. These inequalities determine the range of δ for which the same basis remains optimal.

Example 5.4 We consider once more the problem in Example 5.1 and determine the range of changes δ_j of c_j under which the same basis remains optimal. Since x3 and x4 are nonbasic variables, we obtain the conditions

    δ3 ≥ -c̄3 = -2,
    δ4 ≥ -c̄4 = -7.

Consider now adding δ1 to c1. From the simplex tableau, we obtain q12 = 0, q13 = -3, q14 = 2, and we are led to the conditions

    -2/3 ≤ δ1 ≤ 7/2.
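The same interval can be recovered mechanically from the conditions δ1 q_1i ≤ c̄_i: negative entries q_1i yield lower bounds and positive entries yield upper bounds. A small sketch, with the values read off the tableau of Example 5.1:

```python
from fractions import Fraction as F

# Nonbasic reduced costs and first-row tableau entries q_{1i} from Example 5.1.
# (x2 is basic; q_{12} = 0 imposes no bound.)
cbar = {'x3': F(2), 'x4': F(7)}
q1   = {'x3': F(-3), 'x4': F(2)}

# delta * q <= cbar: negative q gives a lower bound, positive q an upper bound.
lower = max(cbar[i] / q1[i] for i in q1 if q1[i] < 0)   # -2/3
upper = min(cbar[i] / q1[i] for i in q1 if q1[i] > 0)   #  7/2
```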

Changes in a nonbasic column of A

Suppose that some entry a_ij in the jth column A_j of the matrix A is changed to a_ij + δ. We wish to determine the range of values of δ for which the old optimal basis remains optimal.

If the column A_j is nonbasic, the basis matrix B does not change, and the primal feasibility condition is unaffected. Furthermore, only the reduced cost of the jth column is affected, leading to the condition

    c_j - p'(A_j + δe_i) ≥ 0,

or

    c̄_j - δp_i ≥ 0,

where p' = c_B' B^{-1}. If this condition is violated, the nonbasic column A_j can be brought into the basis, and we can continue with the primal simplex method.


Changes in a basic column of A

If one of the entries of a basic column A_j changes, then both the feasibility and optimality conditions are affected. This case is more complicated, and we leave the full development for the exercises. As it turns out, the range of values of δ for which the same basis is optimal is again an interval (Exercise 5.3).

Suppose that the basic column A_j is changed to A_j + δe_i, where e_i is the ith unit vector. Assume that both the original problem and its dual have unique and nondegenerate optimal solutions x* and p, respectively. Let x*(δ) be an optimal solution to the modified problem, as a function of δ. It can be shown (Exercise 5.2) that for small δ, we have

    c'x*(δ) = c'x* - δ p_i x*_j + O(δ²).

For an intuitive interpretation of this equation, let us consider the diet problem and recall that a_ij corresponds to the amount of the ith nutrient in the jth food. Given an optimal solution x* to the original problem, an increase of a_ij by δ means that we are getting for free an additional amount δx*_j of the ith nutrient. Since the dual variable p_i is the marginal cost per unit of the ith nutrient, we are getting for free something that is normally worth δp_i x*_j, and this allows us to reduce our costs by that same amount.
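This first-order formula is easy to test numerically on Example 5.1: change a11 from 3 to 3 + δ, re-solve for the basis {x1, x2} (which stays optimal for small δ), and compare with c'x* - δp1x1* = -12 - 20δ, using p1 = 10 and x1* = 2. A sketch:

```python
delta = 0.001

# Perturbed basis matrix: a11 = 3 + delta (data of Example 5.1).
B = [[3.0 + delta, 2.0], [5.0, 3.0]]
b = [10.0, 16.0]
c_B = [-5.0, -1.0]

# Solve B x_B = b by Cramer's rule; for small delta this basis remains optimal.
det = B[0][0] * B[1][1] - B[0][1] * B[1][0]
x1 = (b[0] * B[1][1] - B[0][1] * b[1]) / det
x2 = (B[0][0] * b[1] - b[0] * B[1][0]) / det

cost = c_B[0] * x1 + c_B[1] * x2

# First-order prediction: c'x* - delta * p1 * x1*, with p1 = 10 and x1* = 2.
predicted = -12.0 - delta * 10.0 * 2.0
```

The discrepancy between `cost` and `predicted` is of order δ², as the formula asserts.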

Production planning revisited

In Section 1.2, we introduced a production planning problem that DEC had faced in the end of 1988. In this section, we answer some of the questions that we posed. Recall that there were two important choices: whether to use the constrained or the unconstrained mode of production for disk drives, and whether to use alternative memory boards. As discussed in Section 1.2, these four combinations of choices led to four different linear programming problems. We report the solution to these problems, as obtained from a linear programming package, in Table 5.1.

Table 5.1 indicates that revenues can substantially increase by using alternative memory boards, and the company should definitely do so. The decision of whether to use the constrained or the unconstrained mode of production for disk drives is less clear. In the constrained mode, the revenue is 248 million versus 213 million in the unconstrained mode. However, customer satisfaction, and therefore future revenues, might be affected, since in the constrained mode some customers will get a product different than the desired one. Moreover, these results are obtained assuming that the numbers of available 256K memory boards and disk drives were 8,000 and 3,000, respectively, which is the lowest value in the range that was estimated. We should therefore examine the sensitivity of the solution as the number of available 256K memory boards and disk drives increases.


Alt. boards   Mode         Revenue    x1      x2       x3     x4     x5
no            constr.      145        0       2.5      0      0.5    2
yes           constr.      248        1.8     2        0      1      2
no            unconstr.    133        0.22    1.30     0.3    0.5    2
yes           unconstr.    213        1.8     1.035    0.3    0.5    2

Table 5.1: Optimal solutions to the four variants of the production planning problem. Revenue is in millions of dollars, and the quantities x_i are in thousands.

With most linear programming packages, the output includes the values of the dual variables, as well as the range of parameter variations under which local sensitivity analysis is valid. Table 5.2 presents the values of the dual variables associated with the constraints on available disk drives and 256K memory boards. In addition, it provides the range of allowed changes on the number of disk drives and memory boards that would leave the dual variables unchanged. This information is provided for the two linear programming problems corresponding to the constrained and the unconstrained mode of production for disk drives, respectively, under the assumption that alternative memory boards will be used.

Mode                             Constrained      Unconstrained

Revenue                          248              213

Dual variable for boards         15               0

Range for boards                 [-1.5, 0.2]      [-1.62, ∞)

Dual variable for disk drives    0                23.52

Range for disk drives            [-0.2, 0.5]      [-0.91, 1.13]

Table 5.2: Dual prices and ranges for the constraints corresponding to the availability of the number of 256K memory boards and disk drives.


In the constrained mode, increasing the number of available 256K boards by 0.2 thousand (the largest number in the allowed range) results in a revenue increase of 15 × 0.2 = 3 million. In the unconstrained mode, increasing the number of available 256K boards has no effect on revenues, because the dual variable is zero and the range extends upwards to infinity. In the constrained mode, increasing the number of available disk drives by up to 0.5 thousand (the largest number in the allowed range) has no effect on revenue. Finally, in the unconstrained mode, increasing the number of available disk drives by 1.13 thousand results in a revenue increase of 23.52 × 1.13 ≈ 26.6 million.

In conclusion, in the constrained mode of production, it is important to aim at an increase of the number of available 256K memory boards, while in the unconstrained mode, increasing the number of disk drives is more important.

This example demonstrates that even a small linear programming problem (with five variables, in this case) can have an impact on a company's planning process. Moreover, the information provided by linear programming solvers (dual variables, ranges, etc.) can offer significant insights and can be a very useful aid to decision makers.
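The arithmetic behind these conclusions is just "dual price × allowed change"; a minimal sketch with the numbers of Table 5.2:

```python
# Dual prices (millions per thousand units) and largest allowed increases,
# as reported in Table 5.2.
board_price, board_range_up = 15.0, 0.2      # constrained mode, 256K boards
drive_price, drive_range_up = 23.52, 1.13    # unconstrained mode, disk drives

boards_gain = board_price * board_range_up   # 3.0 million
drives_gain = drive_price * drive_range_up   # about 26.6 million

new_constrained_revenue = 248 + boards_gain      # 251 million
new_unconstrained_revenue = 213 + drives_gain    # about 239.6 million
```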

5.2 Global dependence on the right-hand side vector

In this section, we take a global view of the dependence of the optimal cost on the requirement vector b.

Let

    P(b) = {x | Ax = b, x ≥ 0}

be the feasible set, and note that our notation makes the dependence on b explicit. Let

    S = {b | P(b) is nonempty},

and observe that

    S = {Ax | x ≥ 0};

in particular, S is a convex set. For any b ∈ S, we define

    F(b) = min_{x ∈ P(b)} c'x,

which is the optimal cost as a function of b.

Throughout this section, we assume that the dual feasible set {p | p'A ≤ c'} is nonempty. Then, duality theory implies that the optimal primal cost F(b) is finite for every b ∈ S. Our goal is to understand the structure of the function F(b), for b ∈ S.


Let us fix a particular element b* of S. Suppose that there exists a nondegenerate primal optimal basic feasible solution, and let B be the corresponding optimal basis matrix. The vector x_B of basic variables at that optimal solution is given by

    x_B = B^{-1}b*,

and is positive, by nondegeneracy. In addition, the vector of reduced costs is nonnegative. If we change b* to b, and if the difference b - b* is sufficiently small, B^{-1}b remains positive and we still have a basic feasible solution. The reduced costs are not affected by the change from b* to b and remain nonnegative. Therefore, B is an optimal basis for the new problem as well. The optimal cost F(b) for the new problem is given by

    F(b) = c_B' B^{-1} b = p'b,    for b close to b*,

where p' = c_B' B^{-1} is the optimal solution to the dual problem. This establishes that in the vicinity of b*, F(b) is a linear function of b and its gradient is given by p.

We now turn to the global properties of F(b).

Theorem 5.1 The optimal cost F(b) is a convex function of b on the set S.

Proof. Let b¹ and b² be two elements of S. For i = 1, 2, let xⁱ be an optimal solution to the problem of minimizing c'x subject to x ≥ 0 and Ax = bⁱ. Thus, F(b¹) = c'x¹ and F(b²) = c'x². Fix a scalar λ ∈ [0, 1], and note that the vector y = λx¹ + (1 - λ)x² is nonnegative and satisfies Ay = λb¹ + (1 - λ)b². In particular, y is a feasible solution to the linear programming problem obtained when the requirement vector b is set to λb¹ + (1 - λ)b². Therefore,

    F(λb¹ + (1 - λ)b²) ≤ c'y = λc'x¹ + (1 - λ)c'x² = λF(b¹) + (1 - λ)F(b²),

establishing the convexity of F.

We now corroborate Theorem 5.1 by taking a different approach, involving the dual problem

    maximize   p'b
    subject to p'A ≤ c',

which has been assumed feasible. For any b ∈ S, F(b) is finite and, by strong duality, is equal to the optimal value of the dual objective. Let p¹, p², ..., p^N be the extreme points of the dual feasible set. (Our standing assumption is that the matrix A has linearly independent rows; hence, its columns span R^m. Equivalently, the rows of A' span R^m, and Theorem 2.6 in Section 2.5 implies that the dual feasible set must have at least one


Figure 5.1: The optimal cost when the vector b is a function of a scalar parameter θ. Each linear piece is of the form (pⁱ)'(b* + θd), where pⁱ is the ith extreme point of the dual feasible set. In each one of the intervals θ < θ1, θ1 < θ < θ2, and θ > θ2, we have different dual optimal solutions, namely p¹, p², and p³, respectively. For θ = θ1 or θ = θ2, the dual problem has multiple optimal solutions.

extreme point.) Since the optimum of the dual must be attained at an extreme point, we obtain

    F(b) = max_{i=1,...,N} (pⁱ)'b,    b ∈ S.    (5.2)

In particular, F is equal to the maximum of a finite collection of linear functions. It is therefore a piecewise linear convex function, and we have a new proof of Theorem 5.1. In addition, within a region where F is linear, we have F(b) = (pⁱ)'b, where pⁱ is a corresponding dual optimal solution, in agreement with our earlier discussion.

For those values of b for which F is not differentiable, that is, at the junction of two or more linear pieces, the dual problem does not have a unique optimal solution, and this implies that every optimal basic feasible solution to the primal is degenerate. (This is because, as shown earlier in this section, the existence of a nondegenerate optimal basic feasible solution to the primal implies that F is locally linear.)

We now restrict attention to changes in b of a particular type, namely, b = b* + θd, where b* and d are fixed vectors and θ is a scalar. Let f(θ) = F(b* + θd) be the optimal cost as a function of the scalar parameter θ. Using Eq. (5.2), we obtain

    f(θ) = max_{i=1,...,N} (pⁱ)'(b* + θd),    b* + θd ∈ S.
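For a problem as small as Example 5.1, F(b) can be evaluated by brute force (enumerate all bases, keep the feasible ones, take the best cost), which makes the convexity of Theorem 5.1 easy to observe along a line b* + θd. A sketch, with the direction d = (1, 0) as an arbitrary choice:

```python
from itertools import combinations

# Example 5.1 in standard form: minimize c'x subject to Ax = b, x >= 0.
A = [[3.0, 2.0, 1.0, 0.0],
     [5.0, 3.0, 0.0, 1.0]]
c = [-5.0, -1.0, 12.0, 0.0]

def F(b):
    """Optimal cost, by enumerating all basic solutions and keeping feasible ones."""
    best = None
    for i, j in combinations(range(4), 2):
        det = A[0][i] * A[1][j] - A[0][j] * A[1][i]
        if abs(det) < 1e-9:
            continue
        xi = (b[0] * A[1][j] - A[0][j] * b[1]) / det    # Cramer's rule
        xj = (A[0][i] * b[1] - b[0] * A[1][i]) / det
        if xi >= -1e-9 and xj >= -1e-9:
            cost = c[i] * xi + c[j] * xj
            if best is None or cost < best:
                best = cost
    return best

b_star, d = [10.0, 16.0], [1.0, 0.0]

def f(theta):
    return F([b_star[0] + theta * d[0], b_star[1] + theta * d[1]])

# Convexity along the line: f((s + t)/2) <= (f(s) + f(t))/2 for sampled s, t.
ts = [-1.0, -0.5, 0.0, 0.5, 1.0]
convex_ok = all(f((s + t) / 2) <= (f(s) + f(t)) / 2 + 1e-9 for s in ts for t in ts)
```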


Figure 5.2: Illustration of subgradients of a function F at a point b*. A subgradient p is the gradient of a linear function F(b*) + p'(b - b*) that lies below the function F(b) and agrees with it for b = b*.

This is essentially a "section" of the function F; it is again a piecewise linear convex function; see Figure 5.1. Once more, at the breakpoints of this function, every optimal basic feasible solution to the primal must be degenerate.

5.3 The set of all dual optimal solutions*

We have seen that if the function F is defined, finite, and linear in the vicinity of a certain vector b*, then there is a unique optimal dual solution, equal to the gradient of F at that point, which leads to the interpretation of dual optimal solutions as marginal costs. We would like to extend this interpretation so that it remains valid at the breakpoints of F. This is indeed possible: we will show shortly that any dual optimal solution can be viewed as a "generalized gradient" of F. We first need the following definition, which is illustrated in Figure 5.2.

Definition 5.1 Let F be a convex function defined on a convex set S. Let b* be an element of S. We say that a vector p is a subgradient of F at b* if

    F(b*) + p'(b - b*) ≤ F(b),    for all b ∈ S.

Note that if b* is a breakpoint of the function F, then there are several subgradients. On the other hand, if F is linear near b*, there is a unique subgradient, equal to the gradient of F.


Theorem 5.2 Suppose that the linear programming problem of minimizing c'x subject to Ax = b* and x ≥ 0 is feasible, and that the optimal cost is finite. Then, a vector p is an optimal solution to the dual problem if and only if it is a subgradient of the optimal cost function F at the point b*.

Proof. Recall that the function F is defined on the set S, which is the set of vectors b for which the set P(b) of feasible solutions to the primal problem is nonempty. Suppose that p is an optimal solution to the dual problem. Then, strong duality implies that p'b* = F(b*). Consider now some arbitrary b ∈ S. For any feasible solution x ∈ P(b), weak duality yields p'b ≤ c'x. Taking the minimum over all x ∈ P(b), we obtain p'b ≤ F(b). Hence, p'b - p'b* ≤ F(b) - F(b*), and we conclude that p is a subgradient of F at b*.

We now prove the converse. Let p be a subgradient of F at b*; that is,

    F(b*) + p'(b - b*) ≤ F(b),    for all b ∈ S.    (5.3)

Pick some x ≥ 0, let b = Ax, and note that x ∈ P(b). In particular, F(b) ≤ c'x. Using Eq. (5.3), we obtain

    p'Ax = p'b ≤ F(b) - F(b*) + p'b* ≤ c'x - F(b*) + p'b*.

Since this is true for all x ≥ 0, we must have p'A ≤ c', which shows that p is a dual feasible solution. Also, by letting x = 0, we obtain F(b*) ≤ p'b*. Using weak duality, every dual feasible solution q must satisfy q'b* ≤ F(b*) ≤ p'b*, which shows that p is a dual optimal solution.
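Theorem 5.2 can be illustrated on Example 5.1, whose optimal dual solution is p = (10, -7): the subgradient inequality F(b*) + p'(b - b*) ≤ F(b) should hold at every b ∈ S we sample, with equality wherever F is linear through b*. A sketch (the sample points are arbitrary choices), reusing a brute-force evaluation of F:

```python
from itertools import combinations

# Example 5.1 in standard form.
A = [[3.0, 2.0, 1.0, 0.0],
     [5.0, 3.0, 0.0, 1.0]]
c = [-5.0, -1.0, 12.0, 0.0]

def F(b):
    """Optimal cost, by enumerating basic solutions and keeping feasible ones."""
    best = None
    for i, j in combinations(range(4), 2):
        det = A[0][i] * A[1][j] - A[0][j] * A[1][i]
        if abs(det) < 1e-9:
            continue
        xi = (b[0] * A[1][j] - A[0][j] * b[1]) / det
        xj = (A[0][i] * b[1] - b[0] * A[1][i]) / det
        if xi >= -1e-9 and xj >= -1e-9:
            cost = c[i] * xi + c[j] * xj
            if best is None or cost < best:
                best = cost
    return best

b_star = [10.0, 16.0]
p = [10.0, -7.0]            # dual optimal solution for Example 5.1
F_star = F(b_star)          # -12, equal to p'b* by strong duality

# Check the subgradient inequality at a few points of S (all have b >= 0,
# so the slack basis keeps P(b) nonempty).
samples = [[11.0, 16.0], [10.0, 17.0], [9.0, 15.0], [12.0, 18.0], [10.0, 16.0]]
subgrad_ok = all(
    F_star + p[0] * (b[0] - b_star[0]) + p[1] * (b[1] - b_star[1]) <= F(b) + 1e-9
    for b in samples
)
```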

5.4 Global dependence on the cost vector

In the last two sections, we fixed the matrix A and the vector c, and we considered the effect of changing the vector b. The key to our development was the fact that the set of dual feasible solutions remains the same as b varies. In this section, we study the case where A and b are fixed, but the vector c varies. In this case, the primal feasible set remains unaffected; our standing assumption will be that it is nonempty.

We define the dual feasible set

    Q(c) = {p | p'A ≤ c'},

and let

    T = {c | Q(c) is nonempty}.

If c¹ ∈ T and c² ∈ T, then there exist p¹ and p² such that (p¹)'A ≤ (c¹)' and (p²)'A ≤ (c²)'. For any scalar λ ∈ [0, 1], we have

    (λp¹ + (1 - λ)p²)'A ≤ λ(c¹)' + (1 - λ)(c²)',


and this establishes that λc¹ + (1 - λ)c² ∈ T. We have therefore shown that T is a convex set.

If c ∉ T, the infeasibility of the dual problem implies that the optimal primal cost is -∞. On the other hand, if c ∈ T, the optimal primal cost must be finite. Thus, the optimal primal cost, which we will denote by G(c), is finite if and only if c ∈ T.

Let x¹, x², ..., x^N be the basic feasible solutions in the primal feasible set; clearly, these do not depend on c. Since an optimal solution to a standard form problem can always be found at an extreme point, we have

    G(c) = min_{i=1,...,N} c'xⁱ.

Thus, G(c) is the minimum of a finite collection of linear functions and is a piecewise linear concave function. If for some value c* of c, the primal problem has a unique optimal solution xⁱ, we have (c*)'xⁱ < (c*)'x^j, for all j ≠ i. For c very close to c*, the inequalities c'xⁱ < c'x^j, j ≠ i, continue to hold, implying that xⁱ is still a unique primal optimal solution with cost c'xⁱ. We conclude that, locally, G(c) = c'xⁱ. On the other hand, at those values of c that lead to multiple primal optimal solutions, the function G has a breakpoint.

We summarize the main points of the preceding discussion.

Theorem 5.3 Consider a feasible linear programming problem in standard form.

(a) The set T of all c for which the optimal cost is finite, is convex.

(b) The optimal cost G(c) is a concave function of c on the set T.

(c) If for some value of c the primal problem has a unique optimal solution x*, then G is linear in the vicinity of c and its gradient is equal to x*.
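Theorem 5.3(b) can also be observed numerically: enumerate the basic feasible solutions of Example 5.1 once (they do not depend on c), and check that G(c + θd) = min_i (c + θd)'xⁱ is concave in θ. A sketch, with an arbitrary direction d:

```python
from itertools import combinations

# Example 5.1 in standard form; the feasible set does not depend on c.
A = [[3.0, 2.0, 1.0, 0.0],
     [5.0, 3.0, 0.0, 1.0]]
b = [10.0, 16.0]

# Enumerate the basic feasible solutions (extreme points) once.
bfs = []
for i, j in combinations(range(4), 2):
    det = A[0][i] * A[1][j] - A[0][j] * A[1][i]
    if abs(det) < 1e-9:
        continue
    xi = (b[0] * A[1][j] - A[0][j] * b[1]) / det
    xj = (A[0][i] * b[1] - b[0] * A[1][i]) / det
    if xi >= -1e-9 and xj >= -1e-9:
        x = [0.0] * 4
        x[i], x[j] = xi, xj
        bfs.append(x)

c, d = [-5.0, -1.0, 12.0, 0.0], [1.0, 1.0, 0.0, 0.0]

def G(theta):
    """G(c + theta d) = minimum of (c + theta d)'x over the extreme points."""
    return min(sum((c[k] + theta * d[k]) * x[k] for k in range(4)) for x in bfs)

# Concavity: G at a midpoint is at least the average of the endpoint values.
ts = [-2.0, -1.0, 0.0, 1.0, 2.0]
concave_ok = all(G((s + t) / 2) >= (G(s) + G(t)) / 2 - 1e-9 for s in ts for t in ts)
```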

5.5 Parametric programming

Let us fix A, b, c, and a vector d of the same dimension as c. For any scalar θ, we consider the problem

    minimize   (c + θd)'x
    subject to Ax = b
               x ≥ 0,

and let g(θ) be the optimal cost as a function of θ. Naturally, we assume that the feasible set is nonempty. For those values of θ for which the optimal cost is finite, we have

    g(θ) = min_{i=1,...,N} (c + θd)'xⁱ,


where x¹, ..., x^N are the extreme points of the feasible set; see Figure 5.3. In particular, g(θ) is a piecewise linear and concave function of the parameter θ. In this section, we discuss a systematic procedure, based on the simplex method, for obtaining g(θ) for all values of θ. We start with an example.

Figure 5.3: The optimal cost g(θ) as a function of θ. Each linear piece is of the form (c + θd)'xⁱ.

Example 5.5 Consider the problem

    minimize   (-3 + 2θ)x1 + (3 - θ)x2 + x3
    subject to  x1 + 2x2 - 3x3 ≤ 5
               2x1 +  x2 - 4x3 ≤ 7
                x1, x2, x3 ≥ 0.

We introduce slack variables in order to bring the problem into standard form, and then let the slack variables be the basic variables. This determines a basic feasible solution and leads to the following tableau:

                  x1         x2       x3    x4    x5
      0     |  -3 + 2θ     3 - θ       1     0     0
  x4 = 5    |      1          2       -3     1     0
  x5 = 7    |      2          1       -4     0     1

If -3 + 2θ ≥ 0 and 3 - θ ≥ 0, all reduced costs are nonnegative and we have an optimal basic feasible solution. In particular,

    g(θ) = 0,    if 3/2 ≤ θ ≤ 3.


If θ is increased slightly above 3, the reduced cost of x2 becomes negative and we no longer have an optimal basic feasible solution. We let x2 enter the basis, x4 exits, and we obtain the new tableau:

                       x1            x2        x3            x4         x5
  -7.5 + 2.5θ  |  -4.5 + 2.5θ        0     5.5 - 1.5θ   -1.5 + 0.5θ     0
   x2 = 2.5    |      0.5            1        -1.5           0.5        0
   x5 = 4.5    |      1.5            0        -2.5          -0.5        1

We note that all reduced costs are nonnegative if and only if 3 ≤ θ ≤ 5.5/1.5. The optimal cost for that range of values of θ is

    g(θ) = 7.5 - 2.5θ,    if 3 ≤ θ ≤ 5.5/1.5.

If θ is increased beyond 5.5/1.5, the reduced cost of x3 becomes negative. If we attempt to bring x3 into the basis, we cannot find a positive pivot element in the third column of the tableau, and the problem is unbounded, with g(θ) = -∞.

Let us now go back to the original tableau and suppose that θ is decreased to a value slightly below 3/2. Then, the reduced cost of x1 becomes negative; we let x1 enter the basis, and x5 exits. The new tableau is:

                    x1        x2         x3       x4      x5
  10.5 - 7θ  |       0     4.5 - 2θ   -5 + 4θ     0     1.5 - θ
   x4 = 1.5  |       0       1.5        -1        1      -0.5
   x1 = 3.5  |       1       0.5        -2        0       0.5

We note that all of the reduced costs are nonnegative if and only if 5/4 ≤ θ ≤ 3/2. For these values of θ, we have an optimal solution, with an optimal cost of

    g(θ) = -10.5 + 7θ,    if 5/4 ≤ θ ≤ 3/2.

Finally, for θ < 5/4, the reduced cost of x3 is negative, but the optimal cost is equal to -∞, because all entries in the third column of the tableau are negative. We plot the optimal cost in Figure 5.4.
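The three closed-form pieces of g(θ) derived in this example can be cross-checked by brute force over the basic feasible solutions of the standard form problem (valid for θ in the bounded range [5/4, 11/3]). A sketch:

```python
from itertools import combinations

# Example 5.5 in standard form: two constraints, five variables (x4, x5 slacks).
A = [[1.0, 2.0, -3.0, 1.0, 0.0],
     [2.0, 1.0, -4.0, 0.0, 1.0]]
b = [5.0, 7.0]

def g(theta):
    """Optimal cost for the given theta, by enumerating basic feasible solutions."""
    c = [-3.0 + 2.0 * theta, 3.0 - theta, 1.0, 0.0, 0.0]
    best = None
    for i, j in combinations(range(5), 2):
        det = A[0][i] * A[1][j] - A[0][j] * A[1][i]
        if abs(det) < 1e-9:
            continue
        xi = (b[0] * A[1][j] - A[0][j] * b[1]) / det    # Cramer's rule
        xj = (A[0][i] * b[1] - b[0] * A[1][i]) / det
        if xi >= -1e-9 and xj >= -1e-9:
            cost = c[i] * xi + c[j] * xj
            if best is None or cost < best:
                best = cost
    return best

# Compare with the closed-form pieces derived above.
checks = [
    (2.0, 0.0),                   # 3/2 <= theta <= 3:    g = 0
    (3.4, 7.5 - 2.5 * 3.4),       # 3 <= theta <= 11/3:   g = 7.5 - 2.5 theta
    (1.3, -10.5 + 7.0 * 1.3),     # 5/4 <= theta <= 3/2:  g = -10.5 + 7 theta
]
pieces_ok = all(abs(g(t) - val) < 1e-9 for t, val in checks)
```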

We now generalize the steps in the preceding example, in order to obtain a broader methodology. The key observation is that once a basis is fixed, the reduced costs are affine (linear plus a constant) functions of θ. Then, if we require all reduced costs to be nonnegative, we force θ to belong to some interval. (The interval could be empty, but if it is nonempty, its endpoints are also included.) We conclude that for any given basis, the set of θ for which this basis is optimal is a closed interval.


Figure 5.4: The optimal cost g(θ) as a function of θ, in Example 5.5. For θ outside the interval [5/4, 11/3], g(θ) is equal to -∞.

Let us now assume that we have chosen a basic feasible solution and an associated basis matrix B, and suppose that this basis is optimal for θ satisfying θ1 ≤ θ ≤ θ2. Let x_j be a variable whose reduced cost becomes negative for θ > θ2. Since this reduced cost is nonnegative for θ1 ≤ θ ≤ θ2, it must be equal to zero when θ = θ2. We now attempt to bring x_j into the basis and consider separately the different cases that may arise.

Suppose that no entry of the jth column B^{-1}A_j of the simplex tableau is positive. For θ > θ2, the reduced cost of x_j is negative, and this implies that the optimal cost is -∞ in that range.

If the jth column of the tableau has at least one positive element, we carry out a change of basis and obtain a new basis matrix. For θ = θ2, the reduced cost of the entering variable is zero and, therefore, the cost associated with the new basis is the same as the cost associated with the old basis. Since the old basis was optimal for θ = θ2, the same must be true for the new basis. On the other hand, for θ < θ2, the entering variable x_j had a positive reduced cost. According to the pivoting mechanics, and for θ < θ2, a negative multiple of the pivot row is added to the zeroth row, and this makes the reduced cost of the exiting variable negative. This implies that the new basis cannot be optimal for θ < θ2. We conclude that the range of values of θ for which the new basis is optimal is of the form θ2 ≤ θ ≤ θ3, for some θ3. By continuing similarly, we obtain a sequence of bases, with the ith basis being optimal for θi ≤ θ ≤ θi+1.

Note that a basis which is optimal for θ ∈ [θi, θi+1] cannot be optimal for values of θ greater than θi+1. Thus, if θi+1 > θi for all i, the same basis cannot be encountered more than once, and the entire range of values of θ will be traced in a finite number of iterations, with each iteration leading to a new breakpoint of the optimal cost function g(θ). (The number of breakpoints may increase exponentially with the dimension of the problem.)


The situation is more complicated if for some basis we have θi = θi+1. In this case, it is possible that the algorithm keeps cycling between a finite number of different bases, all of which are optimal only for θ = θi. Such cycling can only happen in the presence of degeneracy in the primal problem (Exercise 5.1), but can be avoided if an appropriate anticycling rule is followed. In conclusion, the procedure we have outlined, together with an anticycling rule, partitions the range of possible values of θ into consecutive intervals and, for each interval, provides us with an optimal basis and the optimal cost function as a function of θ.

There is another variant of parametric programming that can be used when c is kept fixed but b is replaced by b + θd, where d is a given vector and θ is a scalar. In this case, the zeroth column of the tableau depends on θ. Whenever θ reaches a value at which some basic variable becomes negative, we apply the dual simplex method in order to recover primal feasibility.

5.6 Summary

In this chapter, we have studied the dependence of optimal solutions and of the optimal cost on the problem data, that is, on the entries of A, b, and c. For many of the cases that we have examined, a common methodology was used. Subsequent to a change in the problem data, we first examine its effects on the feasibility and optimality conditions. If we wish the same basis to remain optimal, this leads us to certain limitations on the magnitude of the changes in the problem data. For larger changes, we no longer have an optimal basis and some remedial action (involving the primal or dual simplex method) is typically needed.

We close with a summary of our main results.

(a) If a new variable is added, we check its reduced cost and, if it is negative, we add a new column to the tableau and proceed from there.

(b) If a new constraint is added, we check whether it is violated and, if so, we form an auxiliary problem and its tableau, and proceed from there.

(c) If an entry of b or c is changed by δ, we obtain an interval of values of δ for which the same basis remains optimal.

(d) If an entry of A is changed by δ, a similar analysis is possible. However, this case is somewhat complicated if the change affects an entry of a basic column.

(e) Assuming that the dual problem is feasible, the optimal cost is a piecewise linear convex function of the vector b (for those b for which the primal is feasible). Furthermore, subgradients of the optimal cost function correspond to optimal solutions to the dual problem.


(f) Assuming that the primal problem is feasible, the optimal cost is a piecewise linear concave function of the vector c (for those c for which the primal has finite cost).

(g) If the cost vector is an affine function of a scalar parameter θ, there is a systematic procedure (parametric programming) for solving the problem for all values of θ. A similar procedure is possible if the vector b is an affine function of a scalar parameter.

5.7 Exercises

Exercise 5.1 Consider the same problem as in Example 5.1, for which we already have an optimal basis. Let us introduce the additional constraint x1 + x2 = 3. Form the auxiliary problem described in the text, and solve it using the primal simplex method. Whenever the "large" constant M is compared to another number, M should be treated as being the larger one.

Exercise 5.2 (Sensitivity with respect to changes in a basic column of A) In this problem (and the next two), we study the change in the value of the optimal cost when an entry of the matrix A is perturbed by a small amount. We consider a linear programming problem in standard form, under the usual assumption that A has linearly independent rows. Suppose that we have an optimal basis B that leads to a nondegenerate optimal solution x*, and a nondegenerate dual optimal solution p. We assume that the first column is basic. We will now change the first entry of A from a11 to a11 + δ, where δ is a small scalar. Let E be a matrix of dimensions m × m (where m is the number of rows of A), whose entries are all zero except for the top left entry, which is equal to 1.

(a) Show that if δ is small enough, B + δE is a basis matrix for the new problem.

(b) Show that under the basis B + δE, the vector x_B(δ) of basic variables in the new problem is equal to (B + δE)^{-1}b.

(c) Show that if δ is sufficiently small, B + δE is an optimal basis for the new problem.

(d) We use the symbol ≈ to denote equality when second order terms in δ are ignored. The following approximation is known to be true:

    (B + δE)^{-1} ≈ B^{-1} - δB^{-1}EB^{-1}.

Using this approximation, show that

    c_B' x_B(δ) ≈ c'x* - δ p1 x1*,

where x1* (respectively, p1) is the first component of the optimal solution to the original primal (respectively, dual) problem, and x_B(δ) has been defined in part (b).

Exercise 5.3 (Sensitivity with respect to changes in a basic column of A) Consider a linear programming problem in standard form, under the usual assumption that the rows of the matrix A are linearly independent. Suppose that the columns A1, ..., Am form an optimal basis. Let Ā be some vector and suppose that we change A1 to A1 + δĀ. Consider the matrix B(δ), consisting of


the columns A1 + δĀ, A2, ..., Am. Let [δ1, δ2] be a closed interval of values of δ that contains zero and in which the determinant of B(δ) is nonzero. Show that the subset of [δ1, δ2] for which B(δ) is an optimal basis is also a closed interval.

Exercise 5.4 Consider the problem in Example 5.1, with a11 changed from 3 to 3 + δ. Let us keep x1 and x2 as the basic variables, and let B(δ) be the corresponding basis matrix, as a function of δ.

(a) Compute B(δ)^{-1}b. For which values of δ is B(δ) a feasible basis?

(b) Compute c_B' B(δ)^{-1}. For which values of δ is B(δ) an optimal basis?

(c) Determine the optimal cost, as a function of δ, when δ is restricted to those values for which B(δ) is an optimal basis matrix.

Exercise 5.5 While solving a standard form linear programming problem using the simplex method, we arrive at the following tableau:

l 3

0 0 C3 0 C

1 0 1 -1 0

2 0 0 2 1 3 1 0 4 0

Suppose also that the last three columns of the matrix A form an identity matrix.

(a) Give necessary and sufficient conditions for the basis described by this tableau to be optimal (in terms of the coefficients in the tableau).

(b) Assume that this basis is optimal and that c̄3 = 0. Find an optimal basic feasible solution, other than the one described by this tableau.

(c) Suppose that ___ ≥ 0. Show that there exists an optimal basic feasible solution, regardless of the values of c̄3 and c̄5.

(d) Assume that the basis associated with this tableau is optimal. Suppose also that b1 in the original problem is replaced by b1 + ε. Give upper and lower bounds on ε so that this basis remains optimal.

(e) Assume that the basis associated with this tableau is optimal. Suppose also that c1 in the original problem is replaced by c1 + ε. Give upper and lower bounds on ε so that this basis remains optimal.

Exercise 5.6 Company A has agreed to supply the following quantities of special lamps to Company B during the next 4 months:

Month    January   February   March   April
Units    150       160        225     180

Company A can produce a maximum of 160 lamps per month at a cost of $35 per unit. Additional lamps can be purchased from Company C at a cost of $50


per lamp. Company A incurs an inventory holding cost of $5 per month for each lamp held in inventory.

(a) Formulate the problem that Company A is facing as a linear programming problem.

(b) Solve the problem using a linear programming package.

(c) Company A is considering some preventive maintenance during one of the first three months. If maintenance is scheduled for January, the company can manufacture only 151 units (instead of 160); similarly, the maximum possible production if maintenance is scheduled for February or March is 153 and 155 units, respectively. What maintenance schedule would you recommend and why?

(d) Company D has offered to supply up to 50 lamps (total) to Company A during either January, February, or March. Company D charges $45 per lamp. Should Company A buy lamps from Company D? If yes, when and how many lamps should Company A purchase, and what is the impact of this decision on the total cost?

(e) Company C has offered to lower the price of units supplied to Company A during February. What is the maximum decrease that would make this offer attractive to Company A?

(f) Because of anticipated increases in interest rates, the holding cost per lamp is expected to increase to $8 per unit in February. How does this change affect the total cost and the optimal solution?

(g) Company B has just informed Company A that it requires only 90 units in January (instead of the 150 requested previously). Calculate upper and lower bounds on the impact of this order on the optimal cost, using information from the optimal solution to the original problem.

Exercse 57 A paper company manufactures tree basic products: pads ofpaper 5packs of paper and 20packs of paper Te pad of paper consists of a

single pad of 25 seets of lined paper Te 5pack consists of 5 pads of papertogeter wit a small notebook Te 20pack of paper consists of 20 pads ofpaper togeter wit a large notebook Te small and large notebooks are notsold separately

Production of each pad of paper requires 1 minute of paper-machine time, 1 minute of supervisory time, and $10 in direct costs. Production of each small notebook takes 2 minutes of paper-machine time, 45 seconds of supervisory time, and $20 in direct cost. Production of each large notebook takes 3 minutes of paper-machine time, 30 seconds of supervisory time, and $30 in direct costs. To package the 5-pack takes 1 minute of packager's time and 1 minute of supervisory time. To package the 20-pack takes 3 minutes of packager's time and 2 minutes of supervisory time. The amounts of available paper-machine time, supervisory time, and packager's time are constants t1, t2, t3, respectively. Any of the three products can be sold to retailers in any quantity at the prices $30, $10, and $00, respectively.

Provide a linear programming formulation of the problem of determining an optimal mix of the three products. (You may ignore the constraint that only integer quantities can be produced.) Try to formulate the problem in such a way that the following questions can be answered by looking at a single dual variable or reduced cost in the final tableau. Also, for each question, give a brief explanation of why it can be answered by looking at just one dual price or reduced cost.

(a) What is the marginal value of an extra unit of supervisory time?

(b) What is the lowest price at which it is worthwhile to produce single pads of paper for sale?

(c) Suppose that part-time supervisors can be hired at $ per hour. Is it worthwhile to hire any?

(d) Suppose that the direct cost of producing pads of paper increases from $10 to $12. What is the profit decrease?
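For readers who want to experiment with a formulation, the product mix above can be sketched as a linear program in Python. This is an editorial sketch, not part of the original text: the resource limits t1, t2, t3 and all dollar figures below are hypothetical placeholders (the scan garbled the actual prices and costs), so only the structure of the formulation should be taken from it.

```python
from scipy.optimize import linprog

# Decision variables: x = (single pads sold, 5-packs, 20-packs).
# All dollar figures and time limits below are HYPOTHETICAL placeholders.
price = [1.0, 6.0, 20.0]                  # assumed selling prices
pad_cost, small_cost, large_cost = 0.10, 0.20, 0.30   # assumed direct costs
t1, t2, t3 = 2400.0, 2400.0, 600.0        # machine, supervisory, packager minutes

# Pads needed: 1 per single pad, 5 per 5-pack, 20 per 20-pack;
# each 5-pack also uses one small notebook, each 20-pack one large notebook.
# Paper-machine minutes: 1 per pad, 2 per small, 3 per large notebook.
machine = [1, 5 * 1 + 2, 20 * 1 + 3]
# Supervisory minutes: 1 per pad, 0.75 per small, 0.5 per large notebook,
# plus packaging supervision (1 min per 5-pack, 2 min per 20-pack).
supervis = [1, 5 * 1 + 0.75 + 1, 20 * 1 + 0.5 + 2]
# Packager's minutes: 1 per 5-pack, 3 per 20-pack.
packager = [0, 1, 3]
# Direct cost per unit of each product.
cost = [pad_cost, 5 * pad_cost + small_cost, 20 * pad_cost + large_cost]

# Maximize net profit = sum of (price - cost) * x; negate for linprog.
c = [-(p - k) for p, k in zip(price, cost)]
res = linprog(c, A_ub=[machine, supervis, packager], b_ub=[t1, t2, t3])
print(res.x, -res.fun)
```

In a formulation of this shape, the marginal value of supervisory time in question (a) is read off as the dual variable of the supervisory-time constraint.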

Exercise 5.8 A pottery manufacturer can make four different types of dining-room service sets: English, Currier, Primrose, and Bluetail. Furthermore, Primrose can be made by two different methods. Each set uses clay, enamel, dry-room time, and kiln time, and results in a profit shown in Table 5.3. (Here, lbs is the abbreviation for pounds.)

Resources          E     C     P1    P2    B     Total

Clay (lbs)         10    15    10    10    20    130
Enamel (lbs)        1     2     2     1     1     13
Dry room (hours)    3     1     6     6     3     45
Kiln (hours)        2     4     2     5     3     23

Profit             51   102    66    66    89

Table 5.3: The rightmost column in the table gives the manufacturer's resource availability for the remainder of the week. Notice that Primrose can be made by two different methods. They both use the same amount of clay (10 lbs) and dry room time (6 hours), but the second method uses one pound less of enamel and three more hours in the kiln.

The manufacturer is currently committed to making the same amount of Primrose using methods 1 and 2. The formulation of the profit maximization problem is given below. The decision variables E, C, P1, P2, B are the number of sets of type English, Currier, Primrose Method 1, Primrose Method 2, and Bluetail, respectively. We assume, for the purposes of this problem, that the number of sets of each type can be fractional.


maximize    51E + 102C + 66P1 + 66P2 + 89B
subject to  10E +  15C + 10P1 + 10P2 + 20B ≤ 130
              E +   2C +  2P1 +   P2 +   B ≤ 13
             3E +    C +  6P1 +  6P2 +  3B ≤ 45
             2E +   4C +  2P1 +  5P2 +  3B ≤ 23
             P1 - P2 = 0
             E, C, P1, P2, B ≥ 0.

The optimal solution to the primal and the dual, respectively, together with sensitivity information, is given in Tables 5.4 and 5.5. Use this information to answer the questions that follow.

       Optimal   Reduced   Objective     Allowable   Allowable
       Value     Cost      Coefficient   Increase    Decrease

E      0         -3.57     51            3.57        ∞
C      2         0         102           16.67       12.5
P1     0         0         66            37.57       ∞
P2     0         -37.57    66            37.57       ∞
B      5         0         89            47          12.5

Table 5.4: The optimal primal solution and its sensitivity with respect to changes in coefficients of the objective function. The last two columns describe the allowed changes in these coefficients for which the same solution remains optimal.

(a) What is the optimal quantity of each service set, and what is the total profit?

(b) Give an economic (not mathematical) interpretation of the optimal dual variables appearing in the sensitivity report, for each of the five constraints.

(c) Should the manufacturer buy an additional 0 lbs of clay at $11 per pound?

(d) Suppose that the number of hours available in the dry room decreases by 0. Give a bound for the decrease in the total profit.

(e) In the current model, the number of Primrose produced using method 1 was required to be the same as the number of Primrose produced by method 2. Consider a revision of the model in which this constraint is replaced by the constraint P1 - P2 ≥ 0. In the reformulated problem, would the amount of Primrose made by method 1 be positive?
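As a cross-check of the data in Table 5.3 and the formulation above, the problem can be fed to an LP solver. This is an editorial sketch, not part of the original text, and it assumes the coefficients reconstructed from the scan are correct (scipy is used here for illustration):

```python
from scipy.optimize import linprog

# Profits for E, C, P1, P2, B; scipy minimizes, so negate.
c = [-51, -102, -66, -66, -89]

# Resource rows: clay, enamel, dry room, kiln (all "<=" constraints).
A_ub = [[10, 15, 10, 10, 20],   # clay,     <= 130
        [ 1,  2,  2,  1,  1],   # enamel,   <= 13
        [ 3,  1,  6,  6,  3],   # dry room, <= 45
        [ 2,  4,  2,  5,  3]]   # kiln,     <= 23
b_ub = [130, 13, 45, 23]

# Equal use of the two Primrose methods: P1 - P2 = 0.
res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=[[0, 0, 1, -1, 0]], b_eq=[0])
print(res.x)      # -> [0, 2, 0, 0, 5]: two Currier and five Bluetail sets
print(-res.fun)   # -> 649.0, the total profit
```

The reported quantities agree with the "Optimal Value" column of Table 5.4, and the binding constraints (clay and kiln) match the zero slacks in Table 5.5.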

Exercise 5.9 Using the notation of Section 5.2, show that for any positive scalar λ and any b, we have F(λb) = λF(b). Assume that the dual feasible set is nonempty, so that F(b) is finite.
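One route to this result can be sketched through the dual representation of the optimal cost — an editorial note, assuming F(b) denotes the optimal cost of min c'x subject to Ax = b, x ≥ 0:

```latex
% Assume the dual feasible set  P = \{ p : p'A \le c' \}  is nonempty
% and F(b) is finite.  By strong duality,
F(b) = \max_{p \in P} \; p' b .
% Since P does not depend on b, for any scalar \lambda > 0,
F(\lambda b) = \max_{p \in P} \; p'(\lambda b)
             = \lambda \max_{p \in P} \; p' b
             = \lambda F(b).
```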


           Slack    Dual       Constr.   Allowable   Allowable
           Value    Variable   RHS       Increase    Decrease

Clay       0        1.43       130       23.33       43.75
Enamel     4        0          13        ∞           4
Dry Rm.    28       0          45        ∞           28
Kiln       0        20.14      23        5.60        3.50
Prim.      0        11.43      0         3.50        0

Table 5.5: The optimal dual solution and its sensitivity. The column labeled "slack value" gives us the optimal values of the slack variables associated with each of the primal constraints. The third column simply repeats the right-hand side vector b, while the last two columns describe the allowed changes in the components of b for which the optimal dual solution remains the same.

Exercise 5.10 Consider the linear programming problem

minimize    x1 - x2
subject to  x1 + x2 ≤ θ,
            x1, x2 ≥ 0.

(a) Find (by inspection) an optimal solution, as a function of θ.

(b) Draw a graph showing the optimal cost as a function of θ.

(c) Use the picture in part (b) to obtain the set of all dual optimal solutions, for every value of θ.

Exercise 5.11 Consider the function g(θ), as defined in the beginning of Section 5.5. Suppose that g(θ) is linear for θ ∈ [θ1, θ2]. Is it true that there exists a unique optimal solution when θ1 < θ < θ2? Prove or provide a counterexample.

Exercise 5.12 Consider the parametric programming problem discussed in Section 5.5.

(a) Suppose that, for some value of θ, there are exactly two distinct basic feasible solutions that are optimal. Show that they must be adjacent.

(b) Let θ* be a breakpoint of the function g(θ). Let x1, x2, x3 be basic feasible solutions, all of which are optimal for θ = θ*. Suppose that x1 is a unique optimal solution for θ < θ*, x3 is a unique optimal solution for θ > θ*, and x1, x2, x3 are the only optimal basic feasible solutions for θ = θ*. Provide an example to show that x1 and x3 need not be adjacent.


Exercise 5.13 Consider the following linear programming problem:

minimize    4x1 + 5x3
subject to  2x1 + x2 - 5x3 = 1
            -3x1 + 4x3 + x4 = 2
            x1, x2, x3, x4 ≥ 0.

(a) Write down a simplex tableau and find an optimal solution. Is it unique?

(b) Write down the dual problem and find an optimal solution. Is it unique?

(c) Suppose now that we change the vector b from b = (1, 2) to b = (1 - 2θ, 2 - 3θ), where θ is a scalar parameter. Find an optimal solution and the value of the optimal cost, as a function of θ. (For all θ, both positive and negative.)
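A quick numerical check of the data as reconstructed above — an editorial sketch; if the scanned coefficients differ from this reconstruction, the numbers change:

```python
from scipy.optimize import linprog

# minimize 4*x1 + 5*x3; x2 and x4 have zero cost and identity columns.
c = [4, 0, 5, 0]
A_eq = [[ 2, 1, -5, 0],    # 2x1 + x2 - 5x3       = 1
        [-3, 0,  4, 1]]    # -3x1      + 4x3 + x4 = 2
b_eq = [1, 2]

res = linprog(c, A_eq=A_eq, b_eq=b_eq)
print(res.x, res.fun)   # x = (0, 1, 0, 2), optimal cost 0
```

The columns of x2 and x4 form an identity matrix, so the tableau of part (a) needs no artificial variables; since the costs of the basic variables x2, x4 are zero, the reduced costs equal c ≥ 0 and the starting basis is already optimal.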

Exercise 5.14 Consider the problem

minimize    (c + θd)'x
subject to  Ax = b + θf
            x ≥ 0,

where A is an m × n matrix with linearly independent rows. We assume that the problem is feasible and the optimal cost g(θ) is finite for all values of θ in some interval [θ1, θ2].

(a) Suppose that a certain basis is optimal for θ = -10 and for θ = 10. Prove that the same basis is optimal for θ = 5.

(b) Show that g(θ) is a piecewise quadratic function of θ. Give an upper bound on the number of "pieces."

(c) Let b = 0 and c = 0. Suppose that a certain basis is optimal for θ = 1. For what other nonnegative values of θ is that same basis optimal?

(d) Is g(θ) convex, concave, or neither?
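The key computation behind part (b) can be sketched as follows — an editorial note, assuming θ enters the cost as c + θd and the right-hand side as b + θf, as reconstructed above:

```latex
% For a basis matrix B that remains feasible and optimal on some
% interval of \theta, the basic solution and its cost are
x_B(\theta) = B^{-1}(b + \theta f), \qquad
g(\theta) = (c_B + \theta d_B)' B^{-1} (b + \theta f),
% a quadratic polynomial in \theta.  There are finitely many bases,
% each optimal on an interval, so g consists of finitely many
% quadratic pieces -- at most one per basis.
```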

Exercise 5.15 Consider the problem

minimize    c'x
subject to  Ax = b + θd
            x ≥ 0,

and let g(θ) be the optimal cost, as a function of θ.

(a) Let S(θ) be the set of all optimal solutions, for a given value of θ. For any nonnegative scalar t, define S(0, t) to be the union of the sets S(θ), 0 ≤ θ ≤ t. Is S(0, t) a convex set? Provide a proof or a counterexample.

(b) Suppose that we remove the nonnegativity constraints x ≥ 0 from the problem under consideration. Is S(0, t) a convex set? Provide a proof or a counterexample.

(c) Suppose that x and y belong to S(0, t). Show that there is a continuous path from x to y that is contained within S(0, t). That is, there exists a continuous function φ such that φ(0) = x, φ(1) = y, and φ(λ) ∈ S(0, t) for all λ ∈ [0, 1].


Exercise 5.16 Consider the parametric programming problem of Section 5.5. Suppose that some basic feasible solution is optimal if and only if θ is equal to some θ*.

(a) Suppose that the feasible set is bounded. Is it true that there exist at least three distinct basic feasible solutions that are optimal when θ = θ*?

(b) Answer the question in part (a) for the case where the feasible set is unbounded.

Exercise 5.17 Consider the parametric programming problem. Suppose that every basic solution encountered by the algorithm is nondegenerate. Prove that the algorithm does not cycle.

5.8 Notes and sources

The material in this chapter, with the exception of Section 5.3, is standard, and can be found in any text on linear programming.

5.1 A more detailed discussion of the results of the production planning case study can be found in Freund and Shannahan (1992).

5.3 The results in this section have beautiful generalizations to the case of nonlinear convex optimization; see, e.g., Rockafellar (1970).

5.5 Anticycling rules for parametric programming can be found in Murty (1983).


References


AHUJA, R. K., T. L. MAGNANTI, and J. B. ORLIN. 1993. Network Flows, Prentice Hall, Englewood Cliffs, NJ.
ANDERSEN, E., J. GONDZIO, C. MESZAROS, and X. XU. 1996. Implementation of interior point methods for large scale linear programming, in Interior Point Methods in Mathematical Programming, T. Terlaky (ed.), Kluwer Academic Publishers, Boston, MA.
APPLEGATE, D., and W. COOK. 1991. A computational study of the job shop scheduling problem, ORSA Journal on Computing, 3, 149-156.
BALAS, E., S. CERIA, G. CORNUEJOLS, and N. NATRAJ. 1995. Gomory cuts revisited, working paper, Carnegie Mellon University, Pittsburgh, PA.
BALAS, E., S. CERIA, and G. CORNUEJOLS. 1995. Mixed 0-1 programming by lift-and-project in a branch and cut environment, working paper, Carnegie Mellon University, Pittsburgh, PA.
BARAHONA, F., and E. TARDOS. 1989. Note on Weintraub's minimum cost circulation algorithm, SIAM Journal on Computing, 18, 579-583.
BARNES, E. R. 1986. A variation on Karmarkar's algorithm for solving linear programming problems, Mathematical Programming, 36, 174-182.
BARR, R., F. GLOVER, and D. KLINGMAN. 1977. The alternating path basis algorithm for the assignment problem, Mathematical Programming, 13, 1-13.
BARTHOLDI, J. J., J. B. ORLIN, and H. D. RATLIFF. 1980. Cyclic scheduling via integer programs with circular ones, Operations Research, 28, 1074-1085.
BAZARAA, M., J. J. JARVIS, and H. D. SHERALI. 1990. Linear Programming and Network Flows, 2nd edition, Wiley, New York, NY.
BEALE, E. M. L. 1955. Cycling in the dual simplex algorithm, Naval Research Logistics Quarterly, 2, 269-275.
BELLMAN, R. E. 1958. On a routing problem, Quarterly of Applied Mathematics, 16, 87-90.
BENDERS, J. F. 1962. Partitioning procedures for solving mixed-variables programming problems, Numerische Mathematik, 4, 238-252.
BERTSEKAS, D. P. 1979. A distributed algorithm for the assignment problem, working paper, Laboratory for Information and Decision Systems, MIT, Cambridge, MA.
BERTSEKAS, D. P. 1981. A new algorithm for the assignment problem, Mathematical Programming, 21, 152-171.
BERTSEKAS, D. P. 1991. Linear Network Optimization, MIT Press, Cambridge, MA.
BERTSEKAS, D. P. 1995a. Dynamic Programming and Optimal Control, Athena Scientific, Belmont, MA.
BERTSEKAS, D. P. 1995b. Nonlinear Programming, Athena Scientific, Belmont, MA.
BERTSEKAS, D. P., and J. N. TSITSIKLIS. 1989. Parallel and Distributed Computation: Numerical Methods, Prentice Hall, Englewood Cliffs, NJ.
BERTSIMAS, D., and L. A branch and cut algorithm for the job shop scheduling problem, working paper, Operations Research Center, MIT, Cambridge, MA.
BERTSIMAS, D., and X. LUO. On the worst case complexity of potential reduction algorithms for linear programming, Mathematical Programming, to appear.


BERTSIMAS, D., and S. STOCK. 1997. The air traffic flow management problem with enroute capacities, Operations Research, to appear.
BLAND, R. G. 1977. New finite pivoting rules for the simplex method, Mathematics of Operations Research, 2, 103-107.
BLAND, R. G., D. GOLDFARB, and M. J. TODD. 1981. The ellipsoid method: a survey, Operations Research, 29, 1039-1091.
BORGWARDT, K. H. 1982. The average number of pivot steps required by the simplex-method is polynomial, Zeitschrift für Operations Research, 26, 157-177.
BOYD, S., and L. VANDENBERGHE. 1995. Introduction to convex optimization with engineering applications, lecture notes, Stanford University, Stanford, CA.
BRADLEY, S. P., A. C. HAX, and T. L. MAGNANTI. 1977. Applied Mathematical Programming, Addison-Wesley, Reading, MA.
CARATHEODORY, C. 1907. Über den Variabilitätsbereich der Koeffizienten von Potenzreihen, die gegebene Werte nicht annehmen, Mathematische Annalen, 64, 95-115.
CERIA, S., C. CORDIER, H. MARCHAND, and L. A. WOLSEY. 1995. Cutting planes for integer programs with general integer variables, working paper, Columbia University, New York, NY.
CHARNES, A. 1952. Optimality and degeneracy in linear programming, Econometrica, 20, 160-170.
CHRISTOFIDES, N. 1975. Worst-case analysis of a new heuristic for the traveling salesman problem, report 3, Graduate School of Industrial Administration, Carnegie Mellon University, Pittsburgh, PA.
CHVATAL, V. 1983. Linear Programming, W. H. Freeman, New York, NY.
CLARK, F. E. 1961. Remark on the constraint sets in linear programming, American Mathematical Monthly, 68, 351-352.
COBHAM, A. 1965. The intrinsic computational difficulty of functions, in Logic, Methodology and Philosophy of Science, Y. Bar-Hillel (ed.), North-Holland, Amsterdam, The Netherlands, 24-30.
COOK, S. A. 1971. The complexity of theorem proving procedures, in Proceedings of the 3rd ACM Symposium on the Theory of Computing, 151-158.
CORMEN, T. H., C. E. LEISERSON, and R. L. RIVEST. 1990. Introduction to Algorithms, McGraw-Hill, New York, NY.
CUNNINGHAM, W. H. 1976. A network simplex method, Mathematical Programming, 11, 105-116.
DAHLEH, M. A., and I. DIAZ-BOBILLO. 1995. Control of Uncertain Systems: a Linear Programming Approach, Prentice Hall, Englewood Cliffs, NJ.
DANTZIG, G. B. 1951. Application of the simplex method to a transportation problem, in Activity Analysis of Production and Allocation, T. C. Koopmans (ed.), Wiley, New York, NY, 359-373.
DANTZIG, G. B. 1963. Linear Programming and Extensions, Princeton University Press, Princeton, NJ.
DANTZIG, G. B. 1992. An ε-precise feasible solution to a linear program with a convexity constraint in 1/ε² iterations independent of problem size, working paper, Stanford University, Stanford, CA.
DANTZIG, G. B., A. ORDEN, and P. WOLFE. 1955. The generalized simplex method for minimizing a linear form under linear inequality restraints, Pacific Journal of Mathematics, 5, 183-195.


DANTZIG, G. B., and P. WOLFE. 1960. The decomposition principle for linear programs, Operations Research, 8, 101-111.
DIJKSTRA, E. 1959. A note on two problems in connexion with graphs, Numerische Mathematik, 1, 269-271.
DIKIN, I. I. 1967. Iterative solutions of problems of linear and quadratic programming, Soviet Mathematics Doklady, 8, 674-675.
DIKIN, I. I. 1974. On the convergence of an iterative process, Upravlyaemye Sistemi, 12, 54-60. (In Russian.)
DINES, L. L. 1918. Systems of linear inequalities, Annals of Mathematics, 20, 191-199.
DUDA, R., and P. HART. 1973. Pattern Classification and Scene Analysis, Wiley, New York, NY.
EDMONDS, J. 1965a. Paths, trees, and flowers, Canadian Journal of Mathematics, 17, 449-467.
EDMONDS, J. 1965b. Maximum matching and a polyhedron with 0-1 vertices, Journal of Research of the National Bureau of Standards, 69B, 125-130.
EDMONDS, J. 1971. Matroids and the greedy algorithm, Mathematical Programming, 1, 127-136.
EDMONDS, J., and R. M. KARP. 1972. Theoretical improvements in algorithmic efficiency for network flow problems, Journal of the ACM, 19, 248-264.
ELIAS, P., A. FEINSTEIN, and C. E. SHANNON. 1956. Note on maximum flow through a network, IRE Transactions on Information Theory, 2, 117-119.
FARKAS, G. 1894. On the applications of the mechanical principle of Fourier, Mathematikai és Természettudományi Értesitő, 12, 457-472. (In Hungarian.)
FIACCO, A. V., and G. P. MCCORMICK. 1968. Nonlinear Programming: Sequential Unconstrained Minimization Techniques, Wiley, New York, NY.
FEDERGRUEN, A., and H. GROENEVELT. 1986. Preemptive scheduling of uniform machines by ordinary network flow techniques, Management Science, 32, 341-349.
FISHER, H., and G. L. THOMPSON. 1963. Probabilistic learning combinations of local job shop scheduling rules, in Industrial Scheduling, J. Muth and G. L. Thompson (eds.), Prentice Hall, Englewood Cliffs, NJ, 225-251.
FLOYD, R. W. 1962. Algorithm 97: shortest path, Communications of the ACM, 5, 345.
FORD, L. R. 1956. Network flow theory, report P-923, Rand Corp., Santa Monica, CA.
FORD, L. R., and D. R. FULKERSON. 1956a. Maximal flow through a network, Canadian Journal of Mathematics, 8, 399-404.
FORD, L. R., and D. R. FULKERSON. 1956b. Solving the transportation problem, Management Science, 3, 24-32.
FORD, L. R., and D. R. FULKERSON. 1962. Flows in Networks, Princeton University Press, Princeton, NJ.
FOURIER, J. B. J. 1827. Analyse des travaux de l'Académie Royale des Sciences, pendant l'année 1824, Partie mathématique, Histoire de l'Académie Royale des Sciences de l'Institut de France, 7, xlvii-lv.
FREUND, R. M. 1991. Polynomial-time algorithms for linear programming based only on primal scaling and projected gradients of a potential function, Mathematical Programming, 51, 203-222.


FREUND, R. M., and B. SHANNAHAN. 1992. Short-run manufacturing problems at DEC, report, Sloan School of Management, MIT, Cambridge, MA.
FRISCH, K. R. 1956. La résolution des problèmes de programme linéaire par la méthode du potentiel logarithmique, Cahiers du Séminaire d'Econométrie, 4, 7-20.
FULKERSON, D. R., and G. B. DANTZIG. 1955. Computation of maximal flows in networks, Naval Research Logistics Quarterly, 2, 277-283.
GALE, D., H. W. KUHN, and A. W. TUCKER. 1951. Linear programming and the theory of games, in Activity Analysis of Production and Allocation, T. C. Koopmans (ed.), Wiley, New York, NY, 317-329.
GALE, D., and L. S. SHAPLEY. 1962. College admissions and the stability of marriage, American Mathematical Monthly, 69, 9-15.
GAREY, M. R., and D. S. JOHNSON. 1979. Computers and Intractability: a Guide to the Theory of NP-Completeness, W. H. Freeman, New York, NY.
GEMAN, S., and D. GEMAN. 1984. Stochastic relaxation, Gibbs distributions, and the Bayesian restoration of images, IEEE Transactions on Pattern Analysis and Machine Intelligence, 6, 721-741.
GILL, P. E., W. MURRAY, and M. H. WRIGHT. 1981. Practical Optimization, Academic Press, New York, NY.
GILMORE, P. C., and R. E. GOMORY. 1961. A linear programming approach to the cutting stock problem, Operations Research, 9, 849-859.
GILMORE, P. C., and R. E. GOMORY. 1963. A linear programming approach to the cutting stock problem - part II, Operations Research, 11, 863-888.
GOEMANS, M., and D. BERTSIMAS. 1993. Survivable networks, linear programming relaxations and the parsimonious property, Mathematical Programming, 60, 145-166.
GOEMANS, M., and D. WILLIAMSON. 1993. A new 3/4-approximation algorithm for MAX SAT, in Proceedings of the 3rd International Conference in Integer Programming and Combinatorial Optimization.
GOLDBERG, A. V., and R. E. TARJAN. 1988. A new approach to the maximum flow problem, Journal of the ACM, 35, 921-940.
GOLDBERG, A. V., and R. E. TARJAN. 1989. Finding minimum-cost circulations by canceling negative cycles, Journal of the ACM, 36, 873-886.
GOLDFARB, D., and J. K. REID. 1977. A practicable steepest-edge simplex algorithm, Mathematical Programming, 12, 361-371.
GOLUB, G. H., and C. F. VAN LOAN. 1983. Matrix Computations, The Johns Hopkins University Press, Baltimore, MD.
GOMORY, R. E. 1958. Outline of an algorithm for integer solutions to linear programs, Bulletin of the American Mathematical Society, 64, 275-278.
GONZAGA, C. 1989. An algorithm for solving linear programming problems in O(n³L) operations, in Progress in Mathematical Programming, N. Megiddo (ed.), Springer-Verlag, New York, NY, 1-28.
GONZAGA, C. 1990. Polynomial affine algorithms for linear programming, Mathematical Programming, 49, 7-21.
GROTSCHEL, M., L. LOVASZ, and A. SCHRIJVER. 1981. The ellipsoid method and its consequences in combinatorial optimization, Combinatorica, 1, 169-197.
GROTSCHEL, M., L. LOVASZ, and A. SCHRIJVER. 1988. Geometric Algorithms and Combinatorial Optimization, Springer-Verlag, New York, NY.
HAIMOVICH, M. 1983. The simplex method is very good! On the expected number of pivot steps and related properties of random linear programs, preprint.


HAJEK, B. 1988. Cooling schedules for optimal annealing, Mathematics of Operations Research, 13, 311-329.
HALL, L. A., and R. J. VANDERBEI. 1993. Two-thirds is sharp for affine scaling, Operations Research Letters, 13, 197-201.
HALL, L. A., A. S. SCHULZ, D. B. SHMOYS, and J. WEIN. 1996. Scheduling to minimize average completion time: off-line and on-line approximation algorithms, working paper, Johns Hopkins University, Baltimore, MD.
HANE, C. A., C. BARNHART, E. L. JOHNSON, R. E. MARSTEN, G. L. NEMHAUSER, and G. SIGISMONDI. 1995. The fleet assignment problem: solving a large-scale integer program, Mathematical Programming, 70, 211-232.
HARRIS, P. M. J. 1973. Pivot selection methods of the Devex LP code, Mathematical Programming, 5, 1-28.
HAYKIN, S. 1994. Neural Networks: A Comprehensive Foundation, Macmillan, New York, NY.
HELD, M., and R. M. KARP. 1962. A dynamic programming approach to sequencing problems, SIAM Journal on Applied Mathematics, 10, 196-210.
HELD, M., and R. M. KARP. 1970. The traveling salesman problem and minimum spanning trees, Operations Research, 18, 1138-1162.
HELD, M., and R. M. KARP. 1971. The traveling salesman problem and minimum spanning trees: part II, Mathematical Programming, 1, 6-25.
HELLY, E. 1923. Über Mengen konvexer Körper mit gemeinschaftlichen Punkten, Jahresbericht Deutsche Mathematische Vereinigungen, 32, 175-176.
HOCHBAUM, D. (ed.). 1996. Approximation Algorithms for NP-hard Problems, Kluwer Academic Publishers, Boston, MA.
HU, T. C. 1969. Integer Programming and Network Flows, Addison-Wesley, Reading, MA.
IBARRA, O. H., and C. E. KIM. 1975. Fast approximation algorithms for the knapsack and sum of subset problems, Journal of the ACM, 22, 463-468.
INFANGER, G. 1993. Planning under Uncertainty: Solving Large-Scale Stochastic Linear Programs, Boyd & Fraser, Danvers, MA.
JOHNSON, D. S., C. R. ARAGON, L. A. MCGEOCH, and C. SCHEVON. 1989. Optimization by simulated annealing: an experimental evaluation; part I, graph partitioning, Operations Research, 37, 865-892.
JOHNSON, D. S., C. R. ARAGON, L. A. MCGEOCH, and C. SCHEVON. 1991. Optimization by simulated annealing: an experimental evaluation; part II, graph coloring and number partitioning, Operations Research, 39, 378-406.
KALAI, G., and D. KLEITMAN. 1992. A quasi-polynomial bound for the diameter of graphs of polyhedra, Bulletin of the American Mathematical Society, 26, 315-316.
KALL, P., and S. W. WALLACE. 1994. Stochastic Programming, Wiley, New York, NY.
KARMARKAR, N. 1984. A new polynomial-time algorithm for linear programming, Combinatorica, 4, 373-395.
KARP, R. M. 1972. Reducibility among combinatorial problems, in Complexity of Computer Computations, R. E. Miller and J. W. Thatcher (eds.), Plenum Press, New York, NY, 85-103.
KARP, R. M. 1978. A characterization of the minimum cycle mean in a digraph, Discrete Mathematics, 23, 309-311.


KARP, R. M., and C. H. PAPADIMITRIOU. 1982. On linear characterizations of combinatorial optimization problems, SIAM Journal on Computing, 11, 620-632.
KHACHIAN, L. G. 1979. A polynomial algorithm in linear programming, Soviet Mathematics Doklady, 20, 191-194.
KIRKPATRICK, S., C. D. GELATT, JR., and M. P. VECCHI. 1983. Optimization by simulated annealing, Science, 220, 671-680.
KLEE, V., and G. J. MINTY. 1972. How good is the simplex algorithm?, in Inequalities - III, O. Shisha (ed.), Academic Press, New York, NY, 159-175.
KLEE, V., and D. W. WALKUP. 1967. The d-step conjecture for polyhedra of dimension d < 6, Acta Mathematica, 117, 53-78.
KLEIN, M. 1967. A primal method for minimal cost flows with applications to the assignment and transportation problems, Management Science, 14, 205-220.
KOJIMA, M., S. MIZUNO, and A. YOSHISE. 1989. A primal-dual interior point algorithm for linear programming, in Progress in Mathematical Programming, N. Megiddo (ed.), Springer-Verlag, New York, NY, 29-47.
KUHN, H. W. 1955. The Hungarian method for the assignment problem, Naval Research Logistics Quarterly, 2, 83-97.
LAWLER, E. 1976. Combinatorial Optimization: Networks and Matroids, Holt, Rinehart and Winston, New York, NY.
LAWLER, E., J. K. LENSTRA, A. H. G. RINNOOY KAN, and D. B. SHMOYS (eds.). 1985. The Traveling Salesman Problem: a Guided Tour of Combinatorial Optimization, Wiley, New York, NY.
LENSTRA, J. K., A. H. G. RINNOOY KAN, and A. SCHRIJVER (eds.). 1991. History of Mathematical Programming: A Collection of Personal Reminiscences, Elsevier, Amsterdam, The Netherlands.
LEVIN, A. Y. 1965. On an algorithm for the minimization of convex functions, Soviet Mathematics Doklady, 6, 286-290.
LEVIN, L. 1973. Universal sorting problems, Problemy Peredachi Informatsii, 9, 265-266. (In Russian.)
LEWIS, H. R., and C. H. PAPADIMITRIOU. 1981. Elements of the Theory of Computation, Prentice Hall, Englewood Cliffs, NJ.
LUENBERGER, D. 1969. Optimization by Vector Space Methods, Wiley, New York, NY.
LUENBERGER, D. 1984. Linear and Nonlinear Programming, 2nd edition, Addison-Wesley, Reading, MA.
LUSTIG, I. J., R. E. MARSTEN, and D. F. SHANNO. 1994. Interior point methods: computational state of the art, ORSA Journal on Computing, 6, 1-14.
MAGNANTI, T. L., and L. A. WOLSEY. 1995. Optimal trees, in Handbook of Operations Research and Management Science, Volume 6: Network Models, M. O. Ball, C. L. Monma, T. L. Magnanti, and G. L. Nemhauser (eds.), North-Holland, Amsterdam, The Netherlands, 503-615.
MARSHALL, K. T., and J. W. SUURBALLE. 1969. A note on cycling in the simplex method, Naval Research Logistics Quarterly, 16, 121-137.
MARTIN, P., and D. B. SHMOYS. 1996. A new approach to computing optimal schedules for the job-shop scheduling problem, in Proceedings of the 5th International Conference in Integer Programming and Combinatorial Optimization, 389-403.


MCSHANE, K. A., C. L. MONMA, and D. SHANNO. 1989. An implementation of a primal-dual interior point method for linear programming, ORSA Journal on Computing, 1, 70-83.
MEGIDDO, N. 1989. Pathways to the optimal set in linear programming, in Progress in Mathematical Programming, N. Megiddo (ed.), Springer-Verlag, New York, NY, 131-158.
MEGIDDO, N., and M. SHUB. 1989. Boundary behavior of interior point algorithms in linear programming, Mathematics of Operations Research, 14, 97-146.
MINKOWSKI, H. 1896. Geometrie der Zahlen, Teubner, Leipzig, Germany.
MIZUNO, S. 1996. Infeasible-interior-point algorithms, in Interior Point Methods in Mathematical Programming, T. Terlaky (ed.), Kluwer Academic Publishers, Boston, MA.
MONTEIRO, R. D. C., and I. ADLER. 1989a. Interior path following primal-dual algorithms; part I: linear programming, Mathematical Programming, 44, 27-41.
MONTEIRO, R. D. C., and I. ADLER. 1989b. Interior path following primal-dual algorithms; part II: convex quadratic programming, Mathematical Programming, 44, 43-66.
MOTZKIN, T. S. 1936. Beiträge zur Theorie der linearen Ungleichungen, Inaugural Dissertation, Azriel, Jerusalem.
MURTY, K. G. 1983. Linear Programming, Wiley, New York, NY.
NEMHAUSER, G. L., and L. A. WOLSEY. 1988. Integer and Combinatorial Optimization, Wiley, New York, NY.
NESTEROV, Y., and A. NEMIROVSKII. 1994. Interior Point Polynomial Algorithms in Convex Programming, SIAM, Studies in Applied Mathematics, Philadelphia, PA.
VON NEUMANN, J. 1947. Discussion of a maximum problem, unpublished working paper, Institute for Advanced Studies, Princeton, NJ.
VON NEUMANN, J. 1953. A certain zero-sum two-person game equivalent to the optimal assignment problem, in Contributions to the Theory of Games, II, H. W. Kuhn and A. W. Tucker (eds.), Annals of Mathematics Studies, 28, Princeton University Press, Princeton, NJ, 5-12.
ORDEN, A. 1993. LP from the '40s to the '90s, Interfaces, 23, 2-12.
ORLIN, J. B. 1984. Genuinely polynomial simplex and non-simplex algorithms for the minimum cost flow problem, technical report, Sloan School of Management, MIT, Cambridge, MA.
PADBERG, M. W., and M. R. RAO. 1980. The Russian method and integer programming, working paper, New York University, New York, NY.
PAPADIMITRIOU, C. H. 1994. Computational Complexity, Addison-Wesley, Reading, MA.
PAPADIMITRIOU, C. H., and K. STEIGLITZ. 1982. Combinatorial Optimization: Algorithms and Complexity, Prentice Hall, Englewood Cliffs, NJ.
PLOTKIN, S., and E. TARDOS. 1990. Improved dual network simplex, in Proceedings of the First ACM-SIAM Symposium on Discrete Algorithms, 367-376.
POLYAK, B. T. 1987. Introduction to Optimization, Optimization Software Inc., New York, NY.
PRIM, R. C. 1957. Shortest connection networks and some generalizations, Bell System Technical Journal, 36, 1389-1401.


QUEYRANNE, M. 1993. Structure of a simple scheduling polyhedron, Mathematical Programming, 58, 263-285.
RECSKI, A. 1989. Matroid Theory and its Applications in Electric Network Theory and in Statics, Springer-Verlag, New York, NY.
RENEGAR, J. 1988. A polynomial-time algorithm, based on Newton's method, for linear programming, Mathematical Programming, 40, 59-93.
ROCKAFELLAR, R. T. 1970. Convex Analysis, Princeton University Press, Princeton, NJ.
ROCKAFELLAR, R. T. 1984. Network Flows and Monotropic Optimization, Wiley, New York, NY.
ROSS, S. 1976. Risk, return and arbitrage, in Risk and Return in Finance, I. Friend and J. Bicksler (eds.), Ballinger, Cambridge, England.
ROSS, S. 1978. A simple approach to the valuation of risky streams, Journal of Business, 51, 453-475.
RUDIN, W. 1976. Real Analysis, McGraw-Hill, New York, NY.
RUSHMEIER, R. A., and S. A. KONTOGIORGIS. 1997. Advances in the optimization of airline fleet assignment, Transportation Science, to appear.
SCHRIJVER, A. 1986. Theory of Linear and Integer Programming, Wiley, New York, NY.
SCHULZ, A. S. 1996. Scheduling to minimize total weighted completion time: performance guarantees of LP-based heuristics and lower bounds, in Proceedings of the 5th International Conference in Integer Programming and Combinatorial Optimization, 301-315.
SHOR, N. Z. 1970. Utilization of the operation of space dilation in the minimization of convex functions, Cybernetics, 6, 7-15.
SMALE, S. 1983. On the average number of steps in the simplex method of linear programming, Mathematical Programming, 27, 241-262.
SMITH, W. E. 1956. Various optimizers for single-stage production, Naval Research Logistics Quarterly, 3, 59-66.
STIGLER, G. 1945. The cost of subsistence, Journal of Farm Economics, 27, 303-314.
STOCK, S. 1996. Allocation of NSF graduate fellowships, report, Sloan School of Management, MIT, Cambridge, MA.
STONE, R. E., and C. A. TOVEY. 1991. The simplex and projective scaling algorithms as iteratively reweighted least squares methods, SIAM Review, 33, 220-237.
STRANG, G. 1988. Linear Algebra and its Applications, 3rd edition, Academic Press, New York, NY.
TARDOS, E. 1985. A strongly polynomial minimum cost circulation algorithm, Combinatorica, 5, 247-255.
TEO, C. 1996. Constructing approximation algorithms via linear programming relaxations: primal dual and randomized rounding techniques, Ph.D. thesis, Operations Research Center, MIT, Cambridge, MA.
TSENG, P. 1989. A simple complexity proof for a polynomial-time linear programming algorithm, Operations Research Letters, 8, 155-159.
TSENG, P., and Z.-Q. LUO. 1992. On the convergence of the affine-scaling algorithm, Mathematical Programming, 56, 301-319.


TSUCHIYA, T. 1991. Global convergence of the affine scaling methods for degenerate linear programming problems, Mathematical Programming, 52, 377-404.
TSUCHIYA, T., and M. MURAMATSU. 1995. Global convergence of a long-step affine scaling algorithm for degenerate linear programming problems, SIAM Journal on Optimization, 5, 525-551.
TUCKER, A. W. 1956. Dual systems of homogeneous linear relations, in Linear Inequalities and Related Systems, H. W. Kuhn and A. W. Tucker (eds.), Princeton University Press, Princeton, NJ, 3-18.
VANDERBEI, R. J., M. S. MEKETON, and B. A. FREEDMAN. 1986. A modification of Karmarkar's linear programming algorithm, Algorithmica, 1, 395-407.
VANDERBEI, R. J., and J. C. LAGARIAS. 1990. I. I. Dikin's convergence result for the affine-scaling algorithm, in Mathematical Developments Arising from Linear Programming, J. C. Lagarias and M. J. Todd (eds.), American Mathematical Society, Providence, RI, Contemporary Mathematics, 114, 109-119.
VRANAS, P. 1996. Optimal slot allocation for European air traffic flow management, working paper, German aerospace research establishment, Berlin, Germany.
WAGNER, H. M. 1959. On a class of capacitated transportation problems, Management Science, 5, 304-318.
WARSHALL, S. 1962. A theorem on Boolean matrices, Journal of the ACM, 9, 11-12.
WEBER, R. 1995. Personal communication.
WEINTRAUB, A. 1974. A primal algorithm to solve network flow problems with convex costs, Management Science, 21, 87-97.
WILLIAMS, H. P. 1990. Model Building in Mathematical Programming, Wiley, New York, NY.
WILLIAMSON, D. 1993. On the design of approximation algorithms for a class of graph problems, Ph.D. thesis, Department of EECS, MIT, Cambridge, MA.
YE, Y. 1991. An O(n³L) potential reduction algorithm for linear programming, Mathematical Programming, 50, 239-258.
YE, Y., M. J. TODD, and S. MIZUNO. 1994. An O(√n L)-iteration homogeneous and self-dual linear programming algorithm, Mathematics of Operations Research, 19, 53-67.
YUDIN, D. B., and A. NEMIROVSKII. 1976. Informational complexity and efficient methods for the solution of convex extremal problems, Matekon, 13, 25-45.
ZHANG, Y., and R. A. TAPIA. 1993. A superlinearly convergent polynomial primal-dual interior point algorithm for linear programming, SIAM Journal on Optimization, 3, 118-133.


Index

A

Absolute values, problems with, 7-9, 35
Active constraint, 48
Adjacent
    bases, 56
    basic solutions, 53, 56
    vertices, 78
Affine
    function, 5, 34
    independence, 0
    subspace, 30-3
    transformation, 364
Affine scaling algorithm, 394, 395-409, 440-44, 448, 449
    initialization, 403
    long-step, 40, 40-403, 440, 44
    performance, 403-404
    short-step, 40, 404-409, 440
Air traffic flow management, 54455, 567
Algorithm, 3-34, 40, 36
    complexity of, see running time
    efficient, 363
    polynomial time, 36, 5 5
Analytic center, 4
Anticycling
    in dual simplex, 60
    in network simplex, 357
    in parametric programming, 9
    in primal simplex, 08-
Approximation algorithms, 480, 5075, 58530, 558
Arbitrage, 68, 99
Arc
    backward, 69
    balanced, 36
    directed, 68
    endpoint of, 67
    forward, 69
    in directed graphs, 68
    in undirected graphs, 67
    incident, 67, 68
    incoming, 68
    outgoing, 68
Arithmetic model of computation, 36
Artificial variables
    elimination of, -3
Asset pricing, 67-69
Assignment problem, 74, 30, 33, 3533
    with side constraints, 56-57
Auction algorithm, 70, 3533, 354, 358
Augmenting path, 304
Average computational complexity, 78, 38

B

Ball, 364
Barrier function, 419
Barrier problem, 420, 421, 423
Basic column, 55
Basic direction, 84
Basic feasible solution, 50, 52
  existence, 62-65
  existence of an optimum, 65-67
  finite number of, 52
  initial, see initialization
  magnitude bounds, 373
  to bounded variable LP, 76
  to general LP, 50
Basic solution, 50, 52
  to network flow problems, 280-284
  to standard form LP, 53-54
  to the dual, 154, 161-164
Basic indices, 55
Basic variable, 55
Basis, 55
  adjacent, 56
  degenerate, 59
  optimal, 87
  relation to spanning trees, 280-284
Basis matrix, 55, 87
Basis of a subspace, 29, 30
Bellman equation, 332, 336, 354
Bellman-Ford algorithm, 336-339, 354-355, 358
Benders decomposition, 254-260, 263, 264
Big-M method, 117-119, 135-136
Big-O notation, 32
Binary search, 372
Binding constraint, 48
Bipartite matching problem, 326, 353, 358
Birkhoff-von Neumann theorem, 353
Bit model of computation, 362
Bland's rule, see smallest subscript rule
Bounded polyhedra, representation of, 67-70
Bounded set, 43
Branch and bound, 485-490, 524, 530, 542-544, 560-562
Branch and cut, 489-490, 530
Bring into the basis, 88

C

Candidate list, 94
Capacity
  of an arc, 272
  of a cut, 309
  of a node, 275
Caratheodory's theorem, 76, 197
Cardinality, 26
Caterer problem, 347


Central path, 420, 422, 444
Certificate of infeasibility, 165
Changes in data, see sensitivity analysis
Chebychev approximation, 188
Chebychev center, 36
Cholesky factor, 440, 537
Circuits, 315
Circulation, 278
  decomposition of, 350
  simple, 278
Circulation problem, 275
Clark's theorem, 151, 193
Classifier, 14
Clique, 484
Closed set, 169
Closedness of finitely generated cones, 172, 196
Column
  of a matrix, notation, 27
  zeroth, 98
Column generation, 236-238
Column geometry, 119-123, 137
Column space, 30
Column vector, 26
Combination
  convex, 44
  linear, 29
Communication network, 12-13
Complementary slackness, 151-155, 191
  economic interpretation, 329
  in assignment problem, 326-327
  in network flow problems, 314
  strict, 153, 192, 437
Complexity theory, 514-523
Computer manufacturing, 7-10
Concave function, 15
  characterization, 503, 525
Cone, 174
  containing a line, 175
  pointed, 175
  polyhedral, 175
Connected graph
  directed, 268
  undirected, 267
Connectivity, 352
Convex combination, 44
Convex function, 15, 34, 40
Convex hull, 44, 68, 74, 183
  of integer solutions, 464
Convex polyhedron, see polyhedron
Convex set, 43
Convexity constraint, 120
Corner point, see extreme point
Cost function, 3
Cramer's rule, 29
Crossover problem, 541-542
Currency conversion, 36
Cut, 309
  capacity of, 309
  minimum, 309-310, 390
Cutset, 467
Cutset formulation
  of minimum spanning tree problem, 467
  of traveling salesman problem, 470
Cutting plane method
  for integer programming, 480-484, 530
  for linear programming, 236-239
  for mixed integer programming, 524
Cutting stock problem, 234-236, 260, 263
Cycle
  cost of, 278
  directed, 269
  in directed graphs, 269
  in undirected graphs, 267
  negative cost, 291
  unsaturated, 301
Cyclic problems, 40
Cycling, 92
  in primal simplex, 104-105, 130, 138
  see also anticycling

D

DNA sequencing, 525
Dantzig-Wolfe decomposition, 239-254, 261-263, 264
Data fitting, 19-20
Decision variables, 3
Deep cuts, 380, 388
Degeneracy, 58-62, 536, 541
  and interior point methods, 439
  and uniqueness, 190-191
  in assignment problems, 350
  in dual, 163-164
  in standard form, 59-60, 62
  in transportation problems, 349
Degenerate basic solution, 58
Degree, 267
Delayed column generation, 236-238
Delayed constraint generation, 236, 263
Demand, 272
Determinant, 29
Devex rule, 94, 540
Diameter of a polyhedron, 126
Diet problem, 5, 40, 156, 260-261
Dijkstra's algorithm, 340-342, 343, 358
Dimension, 29, 30
  of a polyhedron, 68
Disjunctive constraints, 454, 472-473
Dual algorithm, 157
Dual ascent
  approximate, 266


  in network flow problems, 266, 316-325, 357
  steepest, 354
  termination, 320
Dual plane, 121
Dual problem, 141, 142, 142-146
  optimal solutions, 215-216
Dual simplex method, 156-164, 536-537, 540-544
  for network flow problems, 266, 323-325, 354, 358
  geometry, 160
  revised, 157
Dual variables
  in network flow problems, 285
  interpretation, 155-156
Duality for general LP, 183-187
Duality gap, 399
Duality in integer programming, 494-507
Duality in network flow problems, 312-316
Duality theorem, 146-155, 173, 184, 197, 199
Dynamic programming, 490-493, 530
  integer knapsack problem, 236
  traveling salesman problem, 490
  zero-one knapsack problem, 491-493

E

Edge of a polyhedron, 53, 178
Edge of an undirected graph, 267
Efficient algorithm, see algorithm
Electric power, 10-11, 255-256, 564
Elementary direction, 316
Elementary row operation, 96
Ellipsoid, 364, 396
Ellipsoid method, 363-392
  complexity, 377
  for full-dimensional bounded polyhedra, 371
  for optimization, 378-380
  practical performance, 380
  sliding objective, 379, 389
Enter the basis, 88
Epsilon-relaxation method, 266, 358
Euclidean norm, 27
Evaluation problem, 517
Exponential number of constraints, 380-387, 465-472, 551-556
Exponential time, 33
Extreme point, 46, 50
  see also basic feasible solution
Extreme ray, 167, 176-177, 197, 525

F

Facility location problem, 453-454, 462-464, 476, 518, 565
Farkas' lemma, 165, 172, 197, 199
Feasible direction, 83, 129
Feasible set, 3
Feasible solution, 3
Finitely generated
  cone, 196, 198
  set, 182
Fixed charge network design problem, 476, 566
Fleet assignment problem, 537-544, 567
Flow, 272
  feasible, 272
Flow augmentation, 304
Flow conservation, 272
Flow decomposition theorem, 298-300, 351
  for circulations, 350
Floyd-Warshall algorithm, 355-356
Forcing constraints, 453
Ford-Fulkerson algorithm, 305-312, 357
Fourier-Motzkin elimination, 70-74, 179
Fractional programming, 36
Free variable, 3
  elimination of, 5
Full-dimensional polyhedron, see polyhedron
Full rank, 30, 57
Full tableau, 98

G

Gaussian elimination, 33, 363
Global minimum, 15
Gomory cutting plane algorithm, 482-484
Graph, 267-272
  connected, 267, 268
  directed, 268
  undirected, 267
Graph coloring problem, 566-567
Graphical solution, 21-25
Greedy algorithm for minimum spanning trees, 344, 356
Groundholding, 545

H

Halfspace, 43
Hamilton circuit, 521
Held-Karp lower bound, 502
Helly's theorem, 194
Heuristic algorithms, 480
Hirsch conjecture, 126-127
Hungarian method, 266, 320, 323, 358
Hyperplane, 43

I

Identity matrix, 28


Incidence matrix, 277, 457
  truncated, 280
Independent set problem, 484
Initialization
  affine scaling algorithm, 403
  Dantzig-Wolfe decomposition, 250-251
  negative cost cycle algorithm, 294
  network flow problems, 352
  network simplex algorithm, 286
  potential reduction algorithm, 416-418
  primal path following algorithm, 429-431
  primal-dual path following algorithm, 435-437
  primal simplex method, 111-119
Inner product, 27
Instance of a problem, 360-361
  size, 361
Integer programming, 12, 452
  mixed, 452, 524
  zero-one, 452, 517, 518
Interior, 395
Interior point methods, 393-449, 537
  computational aspects, 439-440, 536-537, 540-544
Intree, 333
Inverse matrix, 28
Invertible matrix, 28

J

Job shop scheduling problem, 476, 551-563, 565, 567

K

Karush-Kuhn-Tucker conditions, 421
Knapsack problem
  approximation algorithms, 507-509, 530
  complexity, 518, 522
  dynamic programming, 491-493, 530
  integer, 236
  zero-one, 453
Konig-Egervary theorem, 352

L

Label correcting methods, 339-340
Labeling algorithm, 307-309, 357
Lagrange multiplier, 140, 494
Lagrangean, 140, 190
Lagrangean decomposition, 527-528
Lagrangean dual, 495
  solution to, 502-507
Lagrangean relaxation, 496, 530
Leaf, 269
Length of cycle, path, walk, 333
Leontief systems, 195, 200
Lexicographic pivoting rule, 108-111, 131-132, 137
  in dual simplex, 160
  in revised simplex, 132
Libraries, see optimization libraries
Line, 63
Linear algebra, 26-31, 40, 137
Linear combination, 29
Linear inequalities, 165
  inconsistent, 194
Linear programming, 2, 38
  examples, 6-14
Linear programming relaxation, 12, 462
Linearly dependent vectors, 28
Linearly independent constraints, 49
Linearly independent vectors, 28
Local minimum, 15, 82, 131
Local search, 511-512, 530
Lot sizing problem, 475, 524

M

Marginal cost, 155-156
Marriage problem, 352
Matching problem, 470-471, 477-478
  see also bipartite matching, stable matching
Matrix, 26
  identity, 28
  incidence, 277
  inverse, 28
  inversion, 363
  invertible, 28
  nonsingular, 28
  positive definite, 364
  rotation, 368, 388
  square, 28
Matrix inversion lemma, 131, 138
Max-flow min-cut theorem, 310-311, 351, 357
Maximization problems, 3
Maximum flow problem, 273, 303
Maximum satisfiability, 529-530
Mean cycle cost minimization, 355, 358
Min-cut problem, see cut
Minimum spanning tree problem, 343-345, 356, 358, 466, 477
  multicut formulation, 476
Modeling languages, 534-535, 567
Moment problem, 35
Multicommodity flow problem, 13
Multiperiod problems, 10-12, 189

N

NP, 518, 531


NP-complete, 519, 531
NP-hard, 518, 531, 556
NSF fellowships, 459-461, 477
Nash equilibrium, 190
Negative cost cycle algorithm, 291-301, 357
  largest improvement rule, 301, 351, 357
  mean cost rule, 301, 357
Network, 272
Network flow problem, 13, 551
  capacitated, 273, 291
  circulation, 275
  complementary slackness, 314
  dual, 312-313, 357
  formulation, 272-278
  integrality of optimal solutions, 289-290, 300
  sensitivity, 313-314
  shortest paths, relation to, 334
  single source, 275
  uncapacitated, 273, 286
  with lower bounds, 276, 277
  with piecewise linear convex costs, 347
  see also primal-dual method
Network simplex algorithm, 278-291, 356-357, 536
  anticycling, 357, 358
  dual, 323-325, 354
Newton
  direction, 424, 432
  method, 432-433, 449
  step, 422
Node, 267, 268
  labeled, 307
  scanned, 307
  sink, 272
  source, 272
Node-arc incidence matrix, 277
  truncated, 280
Nonbasic variable, 55
Nonsingular matrix, 28
Null variable, 192
Nullspace, 30
Nurse scheduling, 11-12, 40

O

Objective function, 3
One-tree, 501
Operation count, 32-34
Optimal control, 20-21, 40
Optimal cost, 3
Optimal solution, 3
  to dual, 215-216
Optimality conditions
  for LP problems, 82-87, 129, 130
  for maximum flow problems, 310
  for network flow problems, 298-300
  Karush-Kuhn-Tucker, 421
Optimization libraries, 535-537, 567
Optimization problem, 517
Options pricing, 195
Order of magnitude, 32
Orthant, 65
Orthogonal vectors, 27

P

P, 515
Parametric programming, 217-221, 227-229
Path
  augmenting, 304
  directed, 269
  in directed graphs, 269
  in undirected graphs, 267
  shortest, 333
  unsaturated, 307
  walk, 333
Path following algorithm, primal, 419-431
  complexity, 431
  initialization, 429-431
Path following algorithm, primal-dual, 431-438
  complexity, 435
  infeasible, 435-436
  performance, 437-438
  quadratic programming, 445-446
  self-dual, 436-437
Path following algorithms, 395-396, 449, 542
Pattern classification, 14, 40
Perfect matching, 326, 353
  see also matching problem
Perturbation of constraints and degeneracy, 60, 131-132, 541
Piecewise linear convex optimization, 16-17, 189, 347
Piecewise linear function, 15, 455
Pivot, 90, 158
Pivot column, 98
Pivot element, 98, 158
Pivot row, 98, 158
Pivot selection, 92-94
Pivoting rules, 92, 108-111
Polar cone, 198
Polar cone theorem, 198-199
Polyhedron, 42
  containing a line, 63
  full-dimensional, 365, 370, 375-377, 389
  in standard form, 43, 53-58
  isomorphic, 76
  see also representation
Polynomial time, 33, 362, 515


Potential function, 409, 448
Potential reduction algorithm, 394, 409-419, 445, 448
  complexity, 418, 442
  initialization, 416-418
  performance, 419
  with line searches, 419, 443-444
Preemptive scheduling, 302, 357
Preflow-push methods, 266, 358
Preprocessing, 540
Price variable, 140
Primal algorithm, 157, 266
Primal problem, 141, 142
Primal-dual method, 266, 320, 321-323, 353, 357
Primal-dual path following method, see path following algorithm
Probability consistency problem, 384-386
Problem, 360
Product of matrices, 28
Production and distribution problem, 475
Production planning, 7, 10, 35, 40, 210-212, 229
Project management, 335-336
Projections of polyhedra, 70-74
Proper subset, 26
Pushing flow, 278

Q

Quadratic programming, 445-446

R

Rank, 30
Ray, 172
  see also extreme ray
Recession cone, 175
Recognition problem, 515, 517
Reduced cost, 84
  in network flow problems, 285
Reduction (of a problem to another), 515
Redundant constraints, 57-58
Reinversion, 107
Relaxation, see linear programming relaxation
Relaxation algorithm, 266, 321, 358
Relaxed dual problem, 237
Representation
  of bounded polyhedra, 67
  of cones, 182, 198
  of polyhedra, 179-183, 198
Requirement line, 122
Residual network, 295-297
Resolution theorem, 179, 198, 199
Restricted problem, 233
Revised dual simplex method, 157
Revised simplex method, 95-98, 105-107
  lexicographic rule, 132
Rocket control, 21
Row
  space, 30
  vector, 26
  zeroth, 99
Running time, 32, 362

S

Saddle point of Lagrangean, 190
Samuelson's substitution theorem, 195
Scaling
  in auction algorithm, 332
  in maximum flow problem, 352
  in network flow problems, 358
Scanning a node, 307
Scheduling, 11-12, 302, 357, 551-563, 567
  with precedence constraints, 556
Schwartz inequality, 27
Self-arc, 267
Sensitivity analysis, 201-215, 216-217
  adding a new equality constraint, 206-207
  adding a new inequality constraint, 204-206
  adding a new variable, 203-204
  changes in a basic column, 210, 222-223
  changes in a nonbasic column, 209
  changes in b, 207-208, 212-215
  changes in c, 208-209, 216-217
  in network flow problems, 313-314
Separating hyperplane, 170
  between disjoint polyhedra, 196
  finding, 196
Separating hyperplane theorem, 170
Separation problem, 237, 382, 392, 555
Sequencing with setup times, 457-459, 518
Set covering problem, 456-457, 518
Set packing problem, 456-457, 518
Set partitioning problem, 456-457, 518
Setup times, 457-459, 518
Shadow price, 156
Shortest path problem, 273, 332-343
  all-pairs, 333, 342-343, 355-356, 358
  all-to-one, 333
  relation to network flow problem, 333
Side constraints, 197, 526-527
Simplex, 120, 137
Simplex method, 90-91
  average case behavior, 127-128, 138
  column geometry, 119-123
  computational efficiency, 124-128
  dual, see dual simplex method


  for degenerate problems, 92
  for networks, see network simplex
  full tableau implementation, 98-105, 105-107
  history, 38
  implementations, 94-108
  initialization, 111-119
  naive implementation, 94-95
  performance, 536-537, 540-541
  revised, see revised simplex method
  termination, 91, 110
  two-phase, 116-117
  unbounded problems, 179
  with upper bound constraints, 135
Simplex multipliers, 94, 161
Simplex tableau, 98
Simulated annealing, 512-514, 531
Size of an instance, 361
Slack variable, 6, 76
Sliding objective ellipsoid method, 379, 389
Smallest subscript rule, 94, 111, 137
Span of a set of vectors, 29
Spanning path, 124
Spanning tree, 271-272
  see also minimum spanning trees
Sparsity, 107, 108, 440, 536, 537
Square matrix, 28
Stable matching problem, 563, 567
Standard form, 4-5
  reduction to, 5-6
  visualization, 25
Steepest edge rule, 94, 540-543
Steiner tree problem, 391
Stochastic matrices, 194
Stochastic programming, 254-260, 264, 564
Strong duality, 148, 184
Strong formulations, 461-465
Strongly polynomial, 357
Subdifferential, 503
Subgradient, 215, 503, 504, 526
Subgradient algorithm, 505-506, 530
Submodular function minimization, 391-392
Subspace, 29
Subtour elimination
  in the minimum spanning tree problem, 466
  in the traveling salesman problem, 470
Supply, 272
Surplus variable, 6
Survivable network design problem, 391, 528-529

T

Theorems of the alternative, 166, 194
Total unimodularity, 357
Tour, 383, 469
Tournament problem, 347
Transformation (of a problem to another), 516
Transportation problem, 273, 274-275, 358
  degeneracy, 349
Transpose, 27
Transshipment problem, 266
Traveling salesman problem, directed, 478, 518
  branch and bound, 488-489
  dynamic programming, 490
  integer programming formulation, 477
Traveling salesman problem, undirected, 478, 518, 526, 565
  approximation algorithm, 509-510, 528
  integer programming formulation, 469-470, 476
  local search, 511-512, 530
  lower bound, 383-384, 501-502
  with triangle inequality, 509-510, 521, 528
Tree, 269
  of shortest paths, 333
  see also spanning tree
Tree solution, 280
  feasible, 280
Typography, 524

U

Unbounded cost, 3
Unbounded problem, 3
  characterization, 177-179
Unique solution, 129, 130
  to dual, 152, 190-191
Unit vector, 27
Unrestricted variable, see free variable

V

Vector, 26
Vehicle routing problem, 475
Vertex, 47, 50
  see also basic feasible solution
Volume, 364
  of a simplex, 390
von Neumann algorithm, 446-448, 449
Vulnerability, 352

W

Walk
  directed, 269
  in directed graphs, 268
  in undirected graphs, 267
Weak duality, 146, 184, 495


Weierstrass' theorem, 170, 199
Worst-case running time, 362

Z

Zeroth column, 98
Zeroth row, 99
