A Linear Program
TRANSCRIPT
8/3/2019
A linear program (LP) is a mathematical formulation of a problem. We define a set of decision variables which fully describe the decisions we wish to make. We then use these variables to define an objective function which we wish to minimize or maximize, and a set of constraints which restrict the decision options open to us. In a linear program, the variables must be continuous and the objective function and constraints must be linear expressions. An expression is linear if it can be expressed in the form c1x1 + c2x2 + ... + cnxn for some constants c1, c2, ..., cn. For example, 2x1 + 7x2 is a linear expression; x^2 and sin x are not.
We can think of an LP as having three sections: the objective function, the constraints, and the sign restrictions on the decision variables.
1. The objective function is a mathematical expression for what we are trying to accomplish. This expression will always start with either "maximize" or "minimize" and be followed by a linear expression in terms of the variables. The objective function takes the following form:

Maximize (or Minimize) Coef. Var. + Coef. Var. + ... + Coef. Var.
where each coefficient (Coef.) is a real number and it is followed by the variable name (Var.). For example, let's say we are producing three products, call them p1, p2, and p3, and they sell for $2, $3, and $8 respectively. If our goal is to maximize our revenues, then our objective function would be:

Maximize 2 p1 + 3 p2 + 8 p3
2. In all systems that we model, we will be faced with constraints which restrict the values of our variables. We can have many constraints. Each constraint will be a linear expression of our variables, with any appropriate coefficients, followed by the type of restriction and the value of the right hand side. We can think of a constraint as looking like this:

Coef. Var. + Coef. Var. + ... + Coef. Var.  (Constraint Sense)  (Right Hand Side)

The "Constraint Sense" will be either less than or equal to (<=), greater than or equal to (>=), or equal to (=). Note that we cannot have strict inequalities; i.e., we cannot use greater than (>) or less than (<).
For example, we may have a budget for producing our three products. Let's assume that each product has a per unit production cost of $1, $2, and $5, respectively, and we have a budget of $300. An appropriate constraint would be:

Budget: 1 p1 + 2 p2 + 5 p3 <= 300
Or maybe we have demand for the products that requires that we produce a combination of at least 50 units of p1 and p2. Then we would add the following constraint:

Demand: p1 + p2 >= 50
Note that not all variables need to appear in all constraints. We could also think of the above constraint as 1 p1 + 1 p2 + 0 p3 >= 50.
Let's also assume that we have exactly 400 hours of available production time and each unit requires 2, 4, and 5 hours of production time, respectively. Then we would write a constraint to say:

Time: 2 p1 + 4 p2 + 5 p3 = 400
3. We can think of the sign restrictions as additional constraints on our variables, but we think of them as a separate section because they are important and often forgotten. In this section we specify whether our variables are non-negative (>= 0), non-positive (<= 0), or free. The sign restrictions will take the following form:

Var. >= 0   or   Var. <= 0
In most practical cases, we will want our variables to be non-negative. In our example, we cannot produce negative quantities of a product. Therefore our Sign Restriction section will include these three expressions:

p1 >= 0, p2 >= 0, p3 >= 0
In the case where we do not want to impose sign restrictions on a variable, we will not include an expression for that variable in this section, implying that the variable is free.
So, putting these pieces together, we have the complete linear program for our example:

Maximize 2 p1 + 3 p2 + 8 p3
Subject to:
Budget: 1 p1 + 2 p2 + 5 p3 <= 300
Demand: 1 p1 + 1 p2 >= 50
Time: 2 p1 + 4 p2 + 5 p3 = 400
p1 >= 0, p2 >= 0, p3 >= 0
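As a check on the formulation, this small example can be handed to an off-the-shelf LP solver. The sketch below uses SciPy's `linprog`, which minimizes by default, so the revenue coefficients are negated and the demand constraint is flipped into "<=" form.

```python
from scipy.optimize import linprog

# Maximize 2*p1 + 3*p2 + 8*p3  ->  minimize the negated coefficients.
c = [-2, -3, -8]

# "<=" rows: the budget, and the demand p1 + p2 >= 50 rewritten as -p1 - p2 <= -50.
A_ub = [[1, 2, 5],
        [-1, -1, 0]]
b_ub = [300, -50]

# "=" row: the production time constraint.
A_eq = [[2, 4, 5]]
b_eq = [400]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=[(0, None)] * 3)
print(res.x, -res.fun)  # optimal plan and maximum revenue
```

Running this reproduces the optimal solution quoted later in the text: p1 = 100, p2 = 0, p3 = 40, with revenue $520.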
Solutions to Linear Programs
The constraints of a linear program restrict the values of our decision variables. A solution to an LP is a set of values for our decision variables. Every solution has an objective value, which is the value of the objective function when it is computed for the variable values given by the solution. We call a solution feasible if it satisfies all our constraints, and infeasible if it violates one or more of them. For instance, p1 = 40, p2 = 50, and p3 = 24 is a feasible solution to our example problem because it satisfies all our constraints. The objective value of this solution is $422 ($2*40 + $3*50 + $8*24 = $422).
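Checking feasibility of a candidate solution is just arithmetic: plug the values into the budget, demand, and time constraints of the example and compare against each right hand side. A quick sketch for p1 = 40, p2 = 50, p3 = 24:

```python
p1, p2, p3 = 40, 50, 24

# Evaluate each constraint of the production example.
budget_ok = 1*p1 + 2*p2 + 5*p3 <= 300   # 260 <= 300
demand_ok = 1*p1 + 1*p2 >= 50           # 90 >= 50
time_ok   = 2*p1 + 4*p2 + 5*p3 == 400   # 400 == 400

objective = 2*p1 + 3*p2 + 8*p3          # revenue for this plan

print(budget_ok and demand_ok and time_ok, objective)  # True 422
```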
The task of solving a linear program is to find the feasible solution to the problem that optimizes our objective function. If it is a maximization problem, this will be the solution with the largest objective value. If it is a minimization problem, then we want the solution with the smallest objective value. This solution is called
the optimal solution. For our sample problem, the optimal solution is p1 = 100, p2 = 0, and p3 = 40, which has an objective value of $520.
In writing the mathematical formulation of a linear program, we need to define three things: the variables, the objective function, and the constraints. Sometimes the formulation will be obvious and other times constructing the model will be more challenging. Let's start with an example.
The Problem
ChemCo produces two fertilizers, FastGro and ReallyFastGro. Each is made of a mix of two growth agents. FastGro is a 50/50 mix of the two, and it sells for $13 per pound. ReallyFastGro is a 25/75 mix (i.e., it's 25% Agent 1 and 75% Agent 2), and sells for $15/pound. Agent 1 costs $2/pound and Agent 2 costs $3/pound. ChemCo can purchase up to 250 pounds of Agent 1 and up to 350 pounds of Agent 2 each day. What is ChemCo's optimal production strategy? I.e., how much of each product should it produce in order to maximize its profits?
We start by defining the variables.
In defining the variables, we need to ask ourselves what it is that we wish the model to determine. In this case, we need to know how much of each product to produce. We will use the units that are specified; namely, pounds and days. These observations lead us to believe the following might be appropriate variable definitions:

FG = number of pounds of FastGro to produce each day.
RFG = number of pounds of ReallyFastGro to produce each day.
We need to define our objective function in terms of the variables that we defined for this problem. We are interested in maximizing our profits, so our objective function should express our total profits in terms of our variables. We need to include profit from sales and the expense of the raw materials. We can write our profits as:
Profit = $13 FG + $15 RFG - $2 (0.5 FG + 0.25 RFG) - $3 (0.5 FG + 0.75 RFG)
We can simplify this expression to: Profit = $10.5 FG + $12.25 RFG
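The simplification is worth verifying: each product's profit per pound is its selling price minus the cost of its agent mix.

```python
# Profit per pound = price - (cost of Agent 1 share) - (cost of Agent 2 share)
fg_profit  = 13 - 2*0.50 - 3*0.50   # FastGro: 50/50 mix
rfg_profit = 15 - 2*0.25 - 3*0.75   # ReallyFastGro: 25/75 mix

print(fg_profit, rfg_profit)  # 10.5 12.25
```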
Therefore our objective function is:

Maximize 10.5 FG + 12.25 RFG
Our only two constraints in this problem arise from the supply limitations for the two agents. We can express these restrictions with the following inequalities:
Agent 1: 0.5 FG + 0.25 RFG <= 250
Agent 2: 0.5 FG + 0.75 RFG <= 350
And finally, we need to determine if it is necessary to impose sign restrictions on our variables. In this example, both our variables must be non-negative. Therefore we impose the following non-negativity conditions:

FG >= 0, RFG >= 0
Putting all the pieces together, we have the following formulation:

Maximize 10.5 FG + 12.25 RFG
Subject to:
Agent 1: 0.5 FG + 0.25 RFG <= 250
Agent 2: 0.5 FG + 0.75 RFG <= 350
FG >= 0, RFG >= 0
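As with the first example, this small LP can be handed to SciPy's `linprog` to confirm the formulation (a sketch; `linprog` minimizes, so the profit coefficients are negated):

```python
from scipy.optimize import linprog

# Maximize 10.5 FG + 12.25 RFG  ->  minimize the negation.
c = [-10.5, -12.25]

# Daily supply limits on the two growth agents (pounds).
A_ub = [[0.50, 0.25],   # Agent 1: 0.5 FG + 0.25 RFG <= 250
        [0.50, 0.75]]   # Agent 2: 0.5 FG + 0.75 RFG <= 350
b_ub = [250, 350]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 2)
print(res.x, -res.fun)  # production plan in pounds, daily profit
```

This gives FG = 400 and RFG = 200 pounds per day, for a daily profit of $6650.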
We are done. Now you can either go through a finance example, a blending example, another production planning example, view other examples, or you can try your hand at formulating a problem.
Let's look now at an example from the world of finance.
The Problem
We are given $10 million to invest in a diversified portfolio. We can choose among 5 bond funds with the following properties:
Bond Name   Bond Type   Moody's Quality   Bank's Quality   Years to Maturity   After-tax Yield
A           Municipal   Aa                2                9                   4.3%
B           Agency      Aa                2                15                  2.7%
C           Govt.       Aaa               1                4                   2.5%
D           Govt.       Aaa               1                3                   2.2%
E           Municipal   Ba                5                2                   4.5%
In addition, we wish our portfolio to include at least $4 million invested in government and agency bonds, an average quality rating of no more than 1.4 on the bank's scale, and an average maturity which doesn't exceed 5 years.
Once again, we start by defining the variables.
In defining the variables, we need to ask ourselves what it is that we wish the model to determine. In this case, we need to know how much money to invest in each bond. Therefore, we will use the following variable definitions:
A = amount to be invested in Bond A
B = amount to be invested in Bond B
C = amount to be invested in Bond C
D = amount to be invested in Bond D
E = amount to be invested in Bond E
We need to define our objective function in terms of the variables that we defined for this problem. We are interested in maximizing our after-tax earnings, so our objective function should express our total earnings in terms of our variables. The following accomplishes this:
Total Return = 0.043 A + 0.027 B + 0.025 C + 0.022 D+ 0.045 E
Therefore our objective function is:

Maximize 0.043 A + 0.027 B + 0.025 C + 0.022 D + 0.045 E
In this problem, we have several constraints. The first is the amount of money we have. Namely, we can only invest $10 million. This constraint can be expressed with the following expression:

Cash: A + B + C + D + E <= 10,000,000
Our next constraint states that we must invest at least $4 million in government and agency bonds. As a constraint, this is written:

G&A: B + C + D >= 4,000,000
Next we state that our average quality rating must be no more than 1.4 on the bank's scale. To compute the average rating, we will use the following expression:

(2 A + 2 B + 1 C + 1 D + 5 E) / (A + B + C + D + E) <= 1.4

Multiplying both sides by the (positive) total investment, this can be rewritten as:

Quality: 0.6 A + 0.6 B - 0.4 C - 0.4 D + 3.6 E <= 0
And finally, we need to limit the average maturity to no more than 5 years. We will compute the average maturity as we did the average quality:

(9 A + 15 B + 4 C + 3 D + 2 E) / (A + B + C + D + E) <= 5

Similarly, this can be rewritten as:

Maturity: 4 A + 10 B - 1 C - 2 D - 3 E <= 0
And as a final step, we need to determine if it is necessary to impose sign restrictions on our variables. In this example, our variables must be non-negative. Therefore we impose the following non-negativity conditions:

A >= 0, B >= 0, C >= 0, D >= 0, E >= 0
Putting all the pieces together, we have the following formulation:

Maximize 0.043 A + 0.027 B + 0.025 C + 0.022 D + 0.045 E
Subject to:
Cash: A + B + C + D + E <= 10,000,000
G&A: B + C + D >= 4,000,000
Quality: 0.6 A + 0.6 B - 0.4 C - 0.4 D + 3.6 E <= 0
Maturity: 4 A + 10 B - 1 C - 2 D - 3 E <= 0
A, B, C, D, E >= 0
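Once the ratio constraints are rewritten in linear form, the whole portfolio model fits `linprog` directly. A sketch (amounts in dollars; the ">=" row is negated into "<=" form):

```python
from scipy.optimize import linprog

# Maximize after-tax return -> minimize the negation. Variables: A, B, C, D, E.
c = [-0.043, -0.027, -0.025, -0.022, -0.045]

A_ub = [
    [1, 1, 1, 1, 1],               # Cash: total invested <= $10 million
    [0, -1, -1, -1, 0],            # G&A: B + C + D >= $4 million (negated)
    [0.6, 0.6, -0.4, -0.4, 3.6],   # Quality: average bank rating <= 1.4
    [4, 10, -1, -2, -3],           # Maturity: average maturity <= 5 years
]
b_ub = [10_000_000, -4_000_000, 0, 0]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 5)
print(res.x, -res.fun)  # dollar allocation per bond and total after-tax return
```

The returned allocation satisfies every constraint above, which is a useful sanity check on the rewritten Quality and Maturity rows.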
We are done. Now you can either go through a blending example, another production planning example, view other examples, or you can try your hand at formulating a problem.
OK, now let's look at a blending example.
The Problem *
We need to blend iron ore from 4 mines in order to produce tire tread. To assure proper quality, minimum requirements of aluminum, boron, and carbon must be present in the final blend. In particular, there must be 5, 100, and 30 pounds, respectively, in each ton of ore. These elements exist at different levels in the mines. In addition, the cost of mining differs for each mine. The data is in the following tables. How much should be mined from each mine in order to attain a proper blend at minimum cost?
Mine                 1     2     3     4
Aluminum (lbs/ton)   10    3     8     2
Boron (lbs/ton)      90    150   75    175
Carbon (lbs/ton)     45    25    20    37
Cost ($/ton)         800   400   600   500
Now that we have defined the problem, we can start writing a formulation.
* This problem was adapted from "Introductory Management Science" by Eppen, Gould, Schmidt, Moore and Weatherford.
Once again, we start by defining the variables.
In defining the variables, we need to ask ourselves what it is that we wish the model to determine. In this case, we need to know what fraction of a ton of the final blend should come from each mine. Therefore, we will use the following variable definitions:

M1 = fraction of a ton to be mined from Mine 1
M2 = fraction of a ton to be mined from Mine 2
M3 = fraction of a ton to be mined from Mine 3
M4 = fraction of a ton to be mined from Mine 4
Now that we have variable definitions, we can think about how to write the objective function.
We need to define our objective function in terms of the variables that we defined for this problem. In this problem, we are interested in minimizing our costs. For each ton we take from a particular mine, we
incur the mining cost. For each fraction of a ton we take from that mine, we incur that fraction of the cost. Therefore, the following objective function captures our total per ton cost:

Minimize 800 M1 + 400 M2 + 600 M3 + 500 M4
We will now concentrate on our constraints. First, we have our constraints for the minimum requirements for each of the three elements. These constraints can be expressed with the following expressions:

Aluminum: 10 M1 + 3 M2 + 8 M3 + 2 M4 >= 5
Boron: 90 M1 + 150 M2 + 75 M3 + 175 M4 >= 100
Carbon: 45 M1 + 25 M2 + 20 M3 + 37 M4 >= 30
In addition to these, we have an implied constraint on our variables that we must include in the formulation. We have defined our variables as fractions of a ton and we need them to sum to one. This constraint is easy to miss since it isn't part of our problem definition, but rather a by-product of our variable definition. As a constraint, this is written:
Blend: M1 + M2 + M3 + M4 = 1
And as a final step, we need to determine if it is necessary to impose sign restrictions on our variables. In this example, our variables must be non-negative. Therefore we impose the following non-negativity conditions. Note that these conditions, together with the blend constraint, assure that our variables will be between 0 and 1, which is what we want.

M1 >= 0, M2 >= 0, M3 >= 0, M4 >= 0
Putting all the pieces together, we have the following formulation:

Minimize 800 M1 + 400 M2 + 600 M3 + 500 M4
Subject to:
Aluminum: 10 M1 + 3 M2 + 8 M3 + 2 M4 >= 5
Boron: 90 M1 + 150 M2 + 75 M3 + 175 M4 >= 100
Carbon: 45 M1 + 25 M2 + 20 M3 + 37 M4 >= 30
Blend: M1 + M2 + M3 + M4 = 1
M1, M2, M3, M4 >= 0
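A hedged sketch checking the blend model with SciPy's `linprog`, using one variable per mine for the fraction of a ton taken from it (the ">=" rows are negated into "<=" form, and the blend constraint is an equality):

```python
from scipy.optimize import linprog

# Minimize mining cost per ton of blend. Variables: fractions from Mines 1-4.
c = [800, 400, 600, 500]

# Minimum element requirements, written as "<=" rows by negating ">=".
A_ub = [
    [-10, -3, -8, -2],          # Aluminum >= 5 lbs per ton
    [-90, -150, -75, -175],     # Boron    >= 100 lbs per ton
    [-45, -25, -20, -37],       # Carbon   >= 30 lbs per ton
]
b_ub = [-5, -100, -30]

# The fractions must sum to exactly one ton.
A_eq = [[1, 1, 1, 1]]
b_eq = [1]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=[(0, 1)] * 4)
print(res.x, res.fun)  # fraction from each mine and cost per ton
```

Since the blend is a convex combination of the four mines, the optimal cost must land between the cheapest ($400) and most expensive ($800) per-ton mining costs.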
We will look at another production planning example.
The Problem

A manufacturer produces 4 wire cloth products: industrial wire cloth, insect screen, roofing mesh, and snow fence. For each product, aluminum wire is stretched to an appropriate thickness and then the wire is woven together to form a mesh. Production requirements and profit margins vary by product according to the chart below. There are 600 hours available on the wire-drawing machine and 1000 hours on the loom. In addition, 15 cwt of aluminum wire is available to be used as raw material. How much of each product should be produced so as to maximize profit? Assume that the company can sell everything it produces.
Data per 1000 sq. ft. of each product:

Product            Aluminum wire (cwt)   Wire drawing (100s of hrs.)   Weaving (100s of hrs.)   Profit Margin ($100s)
Industrial Cloth   1                     1                             2                        6
Insect Screen      3                     1                             1                        5
Roofing Mesh       3                     2                             1.5                      3.8
Snow Fence         2.5                   1.5                           2                        4
Given a clear problem description, the first step in writing the formulation is to define the variables.

In defining the variables, we need to ask ourselves what it is that we wish the model to determine. In this case, we need to know how much of each of the 4 products to produce. Therefore, we will use the following variable definitions:

IC = 1000s sq. ft. of industrial cloth to produce
IS = 1000s sq. ft. of insect screen to produce
RM = 1000s sq. ft. of roofing mesh to produce
SF = 1000s sq. ft. of snow fence to produce
Next we define the objective function.
We need to define our objective function in terms of the variables that we defined for this problem. In this problem, we are interested in maximizing profits. For each product, we have an associated profit margin. Note that we need to be sure that our units are correct. In this case, our variables and our profits are both given in terms of 1000s sq. ft. Therefore, the following objective function captures our total profit (in $100s):

Maximize 6 IC + 5 IS + 3.8 RM + 4 SF
We will now concentrate on our constraints. We have three constraints. The first is the capacity on the wire-drawing machine. Our total production must not consume more than 600 hours (6 hundred hours) on the wire-drawing machine. This constraint can be written as follows:
Wire-drawing: 1 IC + 1 IS + 2 RM + 1.5 SF <= 6
We have a similar capacity constraint on the loom:
Loom: 2 IC + 1 IS + 1.5 RM + 2 SF <= 10
And our third constraint is on the availability of raw
materials. We only have 15 cwt of aluminum wire available. In constraint form, this translates into:

Wire: 1 IC + 3 IS + 3 RM + 2.5 SF <= 15
And as a final step, we need to determine if it is necessary to impose sign restrictions on our variables. In this example, our variables must be non-negative. Therefore we impose the following non-negativity conditions:

IC >= 0, IS >= 0, RM >= 0, SF >= 0
Putting all the pieces together, we have the following formulation:

Maximize 6 IC + 5 IS + 3.8 RM + 4 SF
Subject to:
Wire-drawing: 1 IC + 1 IS + 2 RM + 1.5 SF <= 6
Loom: 2 IC + 1 IS + 1.5 RM + 2 SF <= 10
Wire: 1 IC + 3 IS + 3 RM + 2.5 SF <= 15
IC, IS, RM, SF >= 0
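This model can also be checked with `linprog` (a sketch; the profit coefficients are negated because `linprog` minimizes, and all quantities keep the units from the chart: 100s of hours, cwt, and $100s):

```python
from scipy.optimize import linprog

# Maximize profit (in $100s) -> minimize the negation. Variables: IC, IS, RM, SF.
c = [-6, -5, -3.8, -4]

A_ub = [
    [1, 1, 2, 1.5],    # wire-drawing machine: <= 600 hrs (6 hundred hrs)
    [2, 1, 1.5, 2],    # loom: <= 1000 hrs (10 hundred hrs)
    [1, 3, 3, 2.5],    # aluminum wire: <= 15 cwt
]
b_ub = [6, 10, 15]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 4)
print(res.x, -res.fun)  # 1000s of sq. ft. per product and profit in $100s
```

This yields IC = 4, IS = 2, RM = SF = 0, for a profit of 34 (i.e., $3400).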
The Problem

We wish to develop a beverage that meets specified nutritional needs for protein, calcium, and vitamin C, while not exceeding a caloric and volume restriction. We have three available ingredients: yogurt, bananas, and strawberries. We know the following information:
Food         Quantity   Volume (oz)   Cost ($)   Calories   Protein (g)   Calcium (%RDA)   Vitamin C (%RDA)
Yogurt       1 ounce    1             0.1        18         1.5           5                0.5
Banana       1          2             0.2        105        1             1                17
Strawberry   1          0.2           0.1        4          0             0.2              7
Our beverage must contain no more than 300 calories, at least 6 grams of protein, at least 15% of our daily calcium recommendation, and at least 30% of our vitamin C recommendation. In addition, we wish our beverage to fit in an 8-ounce container (but it can be less than 8 ounces). We would like to find a solution that will cost us the least amount to produce.
The Variables
We will define our variables as follows:
Y = number of ounces of yogurt to include
B = number of bananas to include
S = number of strawberries to include
The Formulation
See if you can fill in the blanks in order to provide a complete formulation for this problem:

Objective: Minimize ___ Y + ___ B + ___ S
Subject to:
Calories: ___ Y + ___ B + ___ S <= ___
Protein: ___ Y + ___ B + ___ S >= ___
Calcium: ___ Y + ___ B + ___ S >= ___
Vitamin C: ___ Y + ___ B + ___ S >= ___
Volume: ___ Y + ___ B + ___ S <= ___
Signs of Vars:
Y >= 0, B >= 0, S >= 0
Once you feel that you have the correctcoefficients for all the variables, submit your
answer.
The formulation is:
Minimize 0.1 Y + 0.2 B + 0.1 S
Subject to
Calories: 18 Y + 105 B + 4 S <= 300
Protein: 1.5 Y + 1 B + 0 S >= 6
Calcium: 5 Y + 1 B + 0.2 S >= 15
Vitamin C: 0.5 Y + 17 B + 7 S >= 30
Volume: 1 Y + 2 B + 0.2 S <= 8
Y >= 0, B >= 0, S >= 0
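As with the earlier examples, the completed formulation can be checked with a solver. A sketch with SciPy's `linprog` (minimization is the default, so the cost vector is used as-is; the ">=" rows are negated into "<=" form):

```python
from scipy.optimize import linprog

# Minimize cost. Variables: Y (oz of yogurt), B (bananas), S (strawberries).
c = [0.1, 0.2, 0.1]

A_ub = [
    [18, 105, 4],       # Calories  <= 300
    [-1.5, -1, 0],      # Protein   >= 6 g (negated)
    [-5, -1, -0.2],     # Calcium   >= 15 %RDA (negated)
    [-0.5, -17, -7],    # Vitamin C >= 30 %RDA (negated)
    [1, 2, 0.2],        # Volume    <= 8 oz
]
b_ub = [300, -6, -15, -30, 8]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 3)
print(res.x, res.fun)  # ounces/counts of each ingredient and minimum cost
```

The returned recipe satisfies every nutritional and volume constraint, which confirms the coefficients above.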
We can take a problem with only two variables and graph it. From the graph we can then determine the optimal solution. While it may be rare that we would need to solve a 2 variable problem in the "real world," understanding the geometry can lead us to better intuition about LPs and how we can solve them.
Let's try this with the following example:
Maximize x + y
Subject to:
x + 2 y >= 2
x <= 3
y <= 4
x >= 0, y >= 0
We first look at the non-negativity constraints. If x and y are greater than or equal to zero, we only need to concern ourselves with the positive quadrant of our graph. Therefore, we will start our graphing exercise with the graph on the right.

We will now look at our first constraint: x + 2 y >= 2. We want to find all the points which satisfy this constraint. We therefore will graph x + 2 y = 2, which will provide the border for the region of points that satisfy it.

We can see that the point (0,0) does not satisfy this constraint; therefore it's the points above this line that are feasible for this constraint.
Our second constraint says that x <= 3; the points on or to the left of the vertical line x = 3 satisfy it. Likewise, the third constraint, y <= 4, is satisfied by the points on or below the horizontal line y = 4.
We now have all our constraints graphed and we can see the region where all the constraints are satisfied. This is called the feasible region. Any point in this region satisfies all the constraints, and is called feasible. Any point outside of this region violates one or more of them, and is called infeasible.

Our optimal solution must be feasible, therefore it must lie in the green region. For each point in this region we can calculate the objective value. For instance, the point (1,2) has an objective value of 1 + 2 = 3. Our task is to find the point which gives the maximum objective value. Clearly, (1,2) is not optimal since the point (1,3) has a larger objective value of 4. But which point has the largest value?

Next we will look at how we find this solution geometrically.
We can now look at our objective function. If we plot the line x + y = c for some constant c, all points on this line will have an objective value of c. For instance, the blue line is x + y = 4. The points along this line have an objective value of 4.

Can we do better than this?

We want to maximize our objective function, so if we move the blue line in the direction of the arrow, we will be improving our objective value. Here we have moved it to x + y = 5. All points on this line have an objective value of 5. Those points on the line that are within the green region are feasible solutions with an objective value of 5.

This is better than our previous objective value of 4, but can we do even better?
This time we will try x + y = 6. Now we have feasible solutions with an objective value of 6. The point (2,4) is one such point.

We are improving our objective value, but can we still do better? What happens if we try x + y = 7?

Indeed we still have a feasible solution that lies on our line. Namely, (3,4) has an objective value of 7.

But can we do better than this?

Notice that if we push the objective function any further in the direction of the arrow, the line will lie entirely outside of our feasible region. This means that we can't improve any further and we have found our optimal solution.
The point (3,4) is the feasible solution that optimizes our objective function, therefore we call it the optimal solution.

Notice that the optimal solution lies on two of the constraints. We call these active or binding constraints. The constraint x + 2y >= 2 is non-binding. In fact, at our optimal solution, x + 2y = 3 + 2(4) = 11. The slack of this constraint then is 11 - 2 = 9.
So, we just took a 2 variable LP and solved it graphically. We first plotted all the constraints in order to find the feasible region. We then pushed the objective function as far as we could before leaving the feasible region. This showed us where our optimal solution was. We will now look at another example.
Let's try solving this example geometrically:
Minimize x - y
Subject to:
1/3 x + y <= 4
-2 x + 2 y <= 4
x <= 3
x >= 0, y >= 0
Again, because of the non-negativity constraints, we only need to concern ourselves with the positive quadrant of our graph.

Our first constraint is: 1/3 x + y <= 4. The points on or below the line 1/3 x + y = 4 satisfy it.
Our second constraint says that -2 x + 2 y <= 4; the points on or below the line -2 x + 2 y = 4 satisfy it.
We now have all our constraints graphed and we can see the region where all the constraints are satisfied. Again, this shows us the "feasible region". Any point in this region satisfies all the constraints. Any point outside of this region violates one or more of them.

We should also note that sometimes our constraints are inconsistent and our feasible region is "empty." Suppose, for instance, we had the two constraints x + y >= 4 and x + y <= 2. No point can satisfy both, so there would be no feasible solutions at all.
We can now look at our objective function: Minimize x - y. We'll start by plotting x - y = 1. The points along this blue line have an objective value of 1.

But we can do better than this. We want to minimize our objective function, so we would like to find solutions with an objective value of less than 1. This corresponds to moving our objective function in the direction of the arrow.

We could look for solutions with an objective value of 0, but we'll be ambitious and look for solutions with an objective value of -1.

The blue line here plots x - y = -1 and shows us feasible solutions with an objective value of -1, for instance (1,2). Can you see what is going to happen as we try to do even better?
This time we will try x - y = -2. Now we have feasible solutions with an objective value of -2.

Do you think we can do better than this?

Notice that if we push the objective function any further, the line will lie entirely outside of our feasible region. This means that we can't improve any further.

But also notice that our objective function coincides with one of our constraints. This means that any point along the intersection of this blue line and our feasible region is feasible and has the same optimal objective value.
In this case, all the points between the two blue circles (on the blue line) are optimal. They all have an objective value of -2.

Just as in the first example, we took a 2 variable LP and solved it graphically. We first plotted all the constraints in order to find the feasible region. We then pushed the objective function as far as we could before leaving the feasible region. In this case we found that there were multiple optimal solutions. We will now look at one more example.
Let's look at one more example and then we'll draw some conclusions.
Minimize x + 1/3 y
Subject to:
x + y >= 20
-2 x + 5 y <= 150
x >= 5
x >= 0, y >= 0
This time around we will do things more quickly. To the right you have a graph of the non-negative quadrant (representing the non-negativity constraints) and the three constraints.

What does the feasible region look like in this case?

Our feasible region goes on forever off to the right. This region is called "unbounded" because we can go in one direction forever. For instance, the point (x, 10) is feasible for any x greater than 10 - no matter how large!
We will next look at our objective function. Here we have graphed x + 1/3 y = 30.

Now we would like to minimize our objective function, so we want to move this line so as to find solutions with lower objective values, i.e. in the direction of the arrow.

And rather than take incremental steps this time, let's just move it as far as we can in that direction in one bold move!

And there we have it. We have moved our objective function as far as we can. If we move it any further we will leave the feasible region. Therefore we have found our optimal solution at (5,15) with an objective value of 10.
Let's look again at our objective function. What would have happened if we had wanted to maximize it? Now our arrow would point the other way and we would be encouraged to move the objective function as far as we could in that direction.

Well, we could keep pushing out the objective function forever. It would never leave the feasible region because our feasible region is unbounded in that direction. We call this an "unbounded LP" and there is no optimal solution.

So we solved another 2 variable LP. This time we had an unbounded feasible region. Depending on the objective function, this may cause no problems in solving the LP and we can find an optimal solution. But in other cases it will lead to an unbounded LP and there will be no optimal solution.
We have now seen how to solve 2 variable LPs graphically. In essence, we have taken math and turned it into geometry.

We have seen examples with a single optimal solution and one with multiple optimal solutions. We have seen bounded feasible regions and unbounded feasible regions. In addition, we have looked at what can happen if a feasible region is unbounded: it can lead to an unbounded LP in which case there is no optimal solution. We also mentioned another case when you would not have an optimal solution
is if your feasible region is empty.
Let's now look again at the results from these three examples and see if we can draw any conclusions.

Do you notice anything about these 3 examples? Do they appear to have anything in common?

How about this: they all have an optimal solution which lies on a corner of the feasible region.

This turns out to be a fundamental truth of linear programming. If an optimal solution exists, you can always find an optimal solution which is at a corner of the feasible region. These corner points are also called "extreme points" or "basic feasible solutions". Let's see if we can convince ourselves that this is true in 3 dimensions as well.
As we just saw in 3 examples, the constraints of a 2 variable problem define a 2 dimensional feasible region. We
determined the bounds of this region by graphing each constraint as a line and dividing the 2-dimensional space into 2 halves - a feasible half and an infeasible half. We then found where these half-spaces intersected, and that defined the feasible region.
If we extend that to a 3 variable problem, we would graph each constraint as a plane and divide the 3-dimensional space into a feasible half and an infeasible half. Our feasible region would similarly be defined by where these half-spaces intersect. The result would be a three dimensional space like the one below. Each face of this region corresponds to a constraint.

Now imagine adding an objective function to this. Our objective function would also be in 3 variables so it would look like a plane cutting through this feasible region. We would then want to move this plane as far as we could in one direction. We would again push it until it was just about to leave the feasible region. Can you see that this would again give us a corner point optimal solution?
This is the case for any feasible region and any objectivefunction - we will always get a corner point optimal solution(as long as the feasible region isn't empty or the LPunbounded).
Can we extend this to a 4 variable problem? Well, unlessyou can see in 4-dimensions, it is tough to showgeometrically, but in fact there are formal proofs that show
that indeed it is true for all problems - big or small.
Lets review what we have just learned. First, the constraintsof an LP give rise to a geometrical shape which defines thefeasible region. We call this shape a polyhedron. For thepurpose of building intuition, we can assume that thedimension of the polyhedron will be equal to the number ofvariables in the problem
Second, as long as the feasible region is non-empty and the LP is bounded, the objective function is always optimized over this feasible region at a corner of this polyhedron. Therefore, there will always be an optimal solution that is a corner point. There may be other optimal solutions - but at least one will be at a corner.
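To see this concretely, here is a minimal sketch that solves a small LP with a solver and checks that the answer lands on a corner. The LP itself is a made-up two-variable example (not one from this tutorial), and we assume the scipy library is available:

```python
# Hypothetical example LP, solved with scipy (assumed available):
# maximize 2x + 3y  subject to  x + y <= 4,  x <= 3,  x >= 0,  y >= 0.
from scipy.optimize import linprog

# linprog minimizes, so we negate the objective coefficients to maximize.
res = linprog(c=[-2, -3],
              A_ub=[[1, 1], [1, 0]],
              b_ub=[4, 3],
              bounds=[(0, None), (0, None)])

print(res.x)     # optimal point (0, 4): the corner where x = 0 meets x + y = 4
print(-res.fun)  # optimal objective value: 12
```

The solver reports the point (0, 4), which is exactly the corner where the constraint x + y <= 4 meets the non-negativity constraint x >= 0 - a corner point optimal solution, just as the geometry predicts.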
In the section on the Simplex Method, you will see how thisknowledge can be used to think about solving large LPs.
In the previous section, we looked at the geometry of linear programming. Without going into an elaborate proof, we provided some intuition for why "normal" linear programs have optimal solutions at corners. We say "normal" because we need to exclude infeasible or unbounded LPs, which don't have optimal solutions at all.
This leads us to believe that if we could list all the corner points of our feasible region, we could calculate the objective value at each and take the best one. Given that there is always a corner point optimal solution, we would be assured that this process would give us an optimal solution. Note that there may be others (as we saw in the second example), but we would have found one optimal solution.
This raises two questions: 1) How do we find the corners? and 2) What if there are a lot of corners?
Well, like many things, these questions have both short answers and long ones. We will provide the short answers here, but there is a wealth of insight to be gained by understanding the longer, more formal answers.
First, how do we find the corner points? You should notice that corners occur where constraints intersect. In 2 dimensions, they occur where 2 constraints intersect. In 3 dimensions, they occur where 3 constraints intersect. And, you guessed it, in n dimensions, they occur where n constraints intersect.
Finding where n constraints intersect is a matter of solving a system of n equations, which we can do efficiently using Gaussian elimination. Therefore, we can find our corner points by solving a series of systems of n equations.
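This brute-force idea can be sketched in a few lines for a 2-variable problem: take every pair of constraint boundaries, solve the resulting 2x2 linear system, keep the intersection points that satisfy all the constraints, and pick the best one. This is illustrative only (the problem data is the same hypothetical example as above, and numpy is assumed available):

```python
# Brute-force corner enumeration for a 2-variable LP (illustrative sketch).
from itertools import combinations
import numpy as np

# maximize 2x + 3y  s.t.  x + y <= 4,  x <= 3,  x >= 0,  y >= 0
# Every constraint, including non-negativity, written as a @ [x, y] <= b.
A = np.array([[1.0, 1.0], [1.0, 0.0], [-1.0, 0.0], [0.0, -1.0]])
b = np.array([4.0, 3.0, 0.0, 0.0])
c = np.array([2.0, 3.0])

corners = []
for i, j in combinations(range(len(A)), 2):
    try:
        p = np.linalg.solve(A[[i, j]], b[[i, j]])  # where 2 boundaries intersect
    except np.linalg.LinAlgError:
        continue                                   # parallel boundaries never meet
    if np.all(A @ p <= b + 1e-9):                  # keep only feasible corners
        corners.append(p)

best = max(corners, key=lambda p: c @ p)
print(best, c @ best)  # [0. 4.] 12.0
```

The sketch finds the four feasible corners (0,0), (3,0), (3,1), and (0,4), evaluates the objective at each, and returns (0,4) with value 12 - the same answer a solver gives.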
But what if there are a lot of corners? You can imagine even in 3 dimensions a polyhedron with hundreds or even thousands of corner points. Surely we don't need to find them all. We would like to be intelligent about which corners we consider.
This brings us to the Simplex Method. This is an efficientapproach for solving linear programs and is currently themethod used in most commercial solvers.
In essence, the simplex method intelligently moves fromcorner to corner until it can be proven that it has found theoptimal solution. Each corner that it visits is animprovement over the previous one. Once it can't find abetter corner, it knows that it has the optimal solution.
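The corner-to-corner movement can be sketched as a bare-bones textbook tableau simplex. This is a sketch for intuition only, under strong assumptions (maximization with constraints Ax <= b, b >= 0, no handling of degeneracy cycling or unbounded problems) - it is nothing like the tuned implementations inside commercial solvers:

```python
# A minimal textbook tableau simplex (sketch; assumes b >= 0, ignores
# degeneracy and unboundedness). numpy is assumed available.
import numpy as np

def simplex(c, A, b):
    """Maximize c @ x subject to A @ x <= b, x >= 0, assuming b >= 0."""
    m, n = A.shape
    T = np.zeros((m + 1, n + m + 1))
    T[:m, :n] = A
    T[:m, n:n + m] = np.eye(m)       # slack variables give a starting corner
    T[:m, -1] = b
    T[-1, :n] = -c                   # bottom row tracks reduced costs
    basis = list(range(n, n + m))
    while True:
        col = int(np.argmin(T[-1, :-1]))
        if T[-1, col] >= -1e-9:      # no entering variable improves: optimal
            break
        ratios = [T[i, -1] / T[i, col] if T[i, col] > 1e-9 else np.inf
                  for i in range(m)]
        row = int(np.argmin(ratios)) # minimum-ratio test keeps us feasible
        T[row] /= T[row, col]        # pivot: step to the adjacent, better corner
        for i in range(m + 1):
            if i != row:
                T[i] -= T[i, col] * T[row]
        basis[row] = col
    x = np.zeros(n + m)
    for i, bi in enumerate(basis):
        x[bi] = T[i, -1]
    return x[:n], T[-1, -1]

# Same hypothetical example: maximize 2x + 3y s.t. x + y <= 4, x <= 3.
x, z = simplex(np.array([2.0, 3.0]),
               np.array([[1.0, 1.0], [1.0, 0.0]]),
               np.array([4.0, 3.0]))
print(x, z)  # [0. 4.] 12.0
```

Each pass through the loop is one move along an edge of the polyhedron to a neighboring corner with a better objective value; the loop stops as soon as no neighboring corner is better.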
Needless to say, this is a quick description and there aremany interesting features of the algorithm, but we will leaveit at this. For the purpose of solving most LPs withcommercial solvers, this basic understanding of thealgorithm will suffice.
Once we have solved an LP, we may find ourselves interested in more than just the solution. We may be interested in knowing what happens to our solution as we change one of the coefficients. In other words, we may wish to know how sensitive our solution is to changes in the data. We call this sensitivity analysis.

One way to learn this information is to change one of the coefficients and re-solve the problem. Fortunately, there is a better way. The output of the simplex method gives us information about what happens as we change the right hand side of a constraint, or as we change the objective function coefficient on a variable.

In order to motivate this analysis, we will return to a 2-dimensional example and look at the graphic effects of changing coefficients. We will first consider changing the right hand side of a constraint.
We will first look at what happens as we change the right hand side (RHS) of a constraint. Let's look at our third example from the section on graphing 2-dimensional LPs.
Minimize x + 1/3 y
Subject to:
x + y ≥ 20
-2x + 5y ≤ 150
x ≥ 5
x ≥ 0
y ≥ 0
To the right you have the graphical solution that we developed previously. The optimal solution to this problem was x = 5, y = 15, which has an objective value of 10.
Now let's look at what happens if we decrease the right hand side of the second constraint, x + y ≥ 20, from 20 to 10.
To the right we have re-graphed the constraint. The dotted line represents its old location, while the solid line is its new one. Notice that decreasing the right hand side translates into moving the constraint out, while keeping the same slope.

Any time we move a constraint, our feasible region changes. In this case, our feasible region enlarges to include the darker green area.

With this new feasible region, is our old solution (the blue dot) still optimal?

Notice that now that our feasible region is larger, we can push our objective function (the blue line) even further than we could before. In fact, we can move it from the dotted line to the solid line. This gives us a new optimal solution with a new objective value.
Notice that when we decrease the right hand side of this constraint, we find a solution with a lower objective value. This is because a decrease in the right hand side makes this constraint less restrictive; therefore the size of our feasible region increases and we can expect that we might be able to find a "better" solution.
To the right we have the graphical solution when the RHS = 10. The optimal solution to this problem is x = 5, y = 5, which has an objective value of 6 2/3.
Now let's look at what happens if we further decrease the right hand side of the second constraint from 10 to 5.
To the right we have re-graphed the constraint.Again, the dotted linerepresents its old location,while the solid line is its newone. We have again movedthis constraint out and addedthe darker green area to ourfeasible region.
With this new feasible region,is our old solution (the bluedot) still optimal?
As before, we can push our objective function from the dotted blue line to the solid blue line. This gives us a new optimal solution with a new objective value.
We are going to try this one more time. To the right we have the graphical solution when the RHS = 5. The optimal solution to this problem is x = 5, y = 0, which has an objective value of 5.
Now let's look at what happens if we decrease the right hand side of the second constraint yet again, this time from 5 to 0.
To the right we have re-graphed the constraint. As before, the change in the right hand side translates into a shift of the constraint.
Notice that this time however,our feasible region does notchange. Do you think ouroptimal solution will change?
Because our feasible region does not change (the constraints x ≥ 5 and y ≥ 0 already guarantee x + y ≥ 5, so any right hand side of 5 or less is satisfied automatically), we can't move our objective function any further. Therefore we are left with the same optimal solution as before.
So we have now decreased the right hand side of the second constraint 3 times and looked at how each change affected our feasible region and our optimal solution.
Let's look now at what happens as we INCREASE the right hand side of this same constraint.
Let's first check our intuition. Which direction do you expect the objective value to go? Do you think we will find a solution with an objective value higher than 10 or lower than 10? (10 is the optimal objective value when the RHS = 20.)
We will go back to our original formulation with the original right hand side value of 20. To the right we have the graphical representation of the optimal solution.
Now let's consider increasing the right hand side of the second constraint from 20 to 30.

We will do two steps in one this time. As before, the change in the right hand side translates into a shift of the constraint; this time the constraint shifts in the opposite direction as when we were decreasing the right hand side.

This change in the constraint shrinks the feasible region to the dark green area. Now our previous solution is no longer feasible.

We therefore have to slide the objective function in a bit. This gives us a new optimal solution.
Now let's increase the right hand side again - this time from 30 to 37. 37 may seem arbitrary, but you'll see why we chose that value soon.
Once again, we will shift the constraint in, which shrinks the feasible region. We then have to shift the objective function in so that our optimal solution is feasible. And again this gives us a new optimal solution with a new objective value.
OK, we are going to do this one last time, this time with a change from 37 to 40.
Once again, we will shift the constraint in, which shrinks the feasible region. We then have to shift the objective function in so that our optimal solution is feasible. And again this gives us a new optimal solution with a new objective value.
Now we will graph the results of all this work. Below you see a plot of the right hand side value versus the optimal objective value. Notice that the result is piecewise linear, and the slope of the segment that contains the original right hand side value of 20 is equal to 1/3. This implies that as we change the value of the right hand side of this constraint by one unit, within this range, the optimal objective value changes by 1/3. This value, 1/3, is called the SHADOW PRICE of this constraint. Here is a more formal definition:
SHADOW PRICE: the change in the objective value per unitincrease in the right hand side, given all other data remain thesame.
Notice that the definition refers to the change in the objective value resulting from an increase in the right hand side. If we decrease the right hand side, we can expect the same change (as long as we are within the range) but in the opposite direction. In other words, if the shadow price is positive, then the objective value will increase by the amount of the shadow price for each unit increase in the right hand side, and it will decrease by the same amount for each unit decrease in the right hand side. Similarly, if the shadow price is negative, then the objective value will decrease by the magnitude of the shadow price for each unit increase in the right hand side, and it will increase by the same amount for each unit decrease in the right hand side.
That is a lot to digest, but you can always fall back on your logic and intuition: If you are making the constraint less restrictive, then you are increasing the size of the feasible region and you might be able to find a "better" solution (if you are minimizing, a better solution will have a smaller objective value, and if you are maximizing, it will have a higher objective value). If you are making the constraint more restrictive, then you are decreasing the size of the feasible region and you might have to settle for a "worse" solution.
Here are some important facts about shadow prices.
Associated with each constraint is a shadow price.
The shadow price is the change in the objective valueper unit change in the right hand side, given all otherdata remain the same.
Associated with each shadow price is a range overwhich this shadow price holds.
Most solvers provide shadow prices and ranges as partof the solution information.
Shadow prices are also called dual values.
The shadow price on a non-binding constraint is zero. If we have not used all of a resource available to us (the constraint is non-binding), then small changes in the right hand side do not affect the optimal solution. (Think about the 2-dimensional graph to see why this is so.)
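We can check this last fact numerically on the example LP from this section (minimize x + 1/3 y with x + y ≥ 20, -2x + 5y ≤ 150, x ≥ 5, y ≥ 0). At the optimum (5, 15), the constraint -2x + 5y ≤ 150 is non-binding (-2·5 + 5·15 = 65 < 150), so nudging its right hand side should leave the objective untouched. A sketch assuming the scipy library is available:

```python
# Shadow price of a non-binding constraint is zero: a numerical sketch
# using scipy (assumed available).
from scipy.optimize import linprog

def solve(rhs):
    # minimize x + (1/3)y  s.t.  x + y >= 20,  -2x + 5y <= rhs,  x >= 5,  y >= 0
    res = linprog(c=[1, 1/3],
                  A_ub=[[-1, -1], [-2, 5]],  # x + y >= 20 rewritten as -x - y <= -20
                  b_ub=[-20, rhs],
                  bounds=[(5, None), (0, None)])
    return res.fun

print(solve(150), solve(151))  # both 10.0: the objective does not move
```

Since the optimum sits well inside the constraint -2x + 5y ≤ 150, relaxing it buys us nothing; its shadow price is zero.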
With the information we are given from most solvers, we can only state the exact effects of changes to the right hand side if they are within the specified range. Changing the right hand side of a constraint to values outside the range of the shadow price will affect the objective value, but we cannot state by how much, at least not without re-solving the problem.
In the case of the example we did graphically, we know theeffects of changing the right hand side to 40 because wegraphed it, but a solver would only tell us that within therange [5,37] the shadow price is 1/3.
Note also that although we can state with certainty the new optimal objective value as we change the right hand side within the allowable range, we cannot make any statements about what the new optimal solution is.
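The "re-solve the problem" approach above can itself be used to verify the shadow price of 1/3 on the constraint x + y ≥ 20 within its range. A sketch, again assuming scipy is available:

```python
# Sweep the right hand side b of x + y >= b and watch the optimal objective:
# within the range [5, 37], each unit of b is worth the shadow price, 1/3.
# (scipy assumed available.)
from scipy.optimize import linprog

def solve(b):
    # minimize x + (1/3)y  s.t.  x + y >= b,  -2x + 5y <= 150,  x >= 5,  y >= 0
    res = linprog(c=[1, 1/3],
                  A_ub=[[-1, -1], [-2, 5]],  # x + y >= b rewritten as -x - y <= -b
                  b_ub=[-b, 150],
                  bounds=[(5, None), (0, None)])
    return res.fun

for b in (10, 20, 30):
    print(b, solve(b))           # objective values climb by 1/3 per unit of b

print(solve(21) - solve(20))     # 1/3, the shadow price
```

Within the range, the differences between successive objective values are exactly the shadow price; outside the range (try b = 40) the slope changes, matching the piecewise-linear plot described above.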
If you feel comfortable with the concept of shadow prices and would like to move on to the next topic, click on "Reduced Costs." If you would like to look at an example and quiz yourself on shadow prices, click on "Shadow Price Quiz".
We can think of non-negativity constraints as additional constraints on our system. How then would we interpret the shadow prices on these constraints? First, we will give them a different name. We call this new value the reduced cost, and we will associate it with the variable that is non-negative. In non-negativity constraints, our right hand side is zero. Using a shadow price interpretation, then, the reduced cost of a variable is the change in the objective function if we require that variable to be greater than or equal to one (instead of zero), assuming a feasible solution still exists. In fact, the reduced cost is the rate of change of the objective function per unit increase in the right hand side of this non-negativity constraint.
It should be clear that if, at an optimal solution, a variablehas a positive value, the reduced cost for that variable willbe zero. It is optimal to give this variable a value greaterthan zero, so forcing the variable to be greater than zeroshould not change the optimal solution nor the objectivevalue.
If, however, at an optimal solution a variable has a value of zero, then forcing this variable to be greater than zero will change the optimal solution and most likely the optimal objective value (if there are multiple optimal solutions, it may be that the optimal solution changes but the optimal objective value does not). In this case of a variable with a zero optimal value, we can interpret the reduced cost as the amount by which the objective value changes if we increase the value of this variable to one (or at least the rate of change).
Alternatively, the reduced cost of a variable with a zero optimal value is the amount by which the objective coefficient would have to decrease in order for that variable to have a positive value in an optimal solution.
So to summarize, we make the following points about reduced costs:

If, at an optimal solution, a variable has a positive value, the reduced cost for that variable will be 0.

If, at an optimal solution, a variable has a value of 0, the reduced cost for that variable can be interpreted as the amount by which the objective value will change if we increase the value of this variable to one, assuming a feasible solution still exists.

If, at an optimal solution, a variable has a value of 0, the reduced cost for that variable can also be interpreted as the amount by which the objective coefficient would have to decrease in order to have a positive value for that variable in an optimal solution.
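The first interpretation above can be demonstrated numerically: solve an LP where a variable is zero at the optimum, then raise that variable's lower bound from 0 to 1 and measure the change in the objective. The LP here is a made-up illustration (not from this tutorial), and scipy is assumed available:

```python
# Reduced cost of a zero-valued variable, measured by brute force
# (scipy assumed available; the LP is a hypothetical illustration).
from scipy.optimize import linprog

# minimize 2x + y  s.t.  x + y >= 10,  x, y >= 0   ->  optimum x = 0, y = 10
base = linprog(c=[2, 1], A_ub=[[-1, -1]], b_ub=[-10],
               bounds=[(0, None), (0, None)])

# Same LP, but with the lower bound on x raised from 0 to 1.
forced = linprog(c=[2, 1], A_ub=[[-1, -1]], b_ub=[-10],
                 bounds=[(1, None), (0, None)])

print(base.fun)               # 10.0  (x = 0, y = 10)
print(forced.fun - base.fun)  # 1.0 = the reduced cost of x
```

Forcing one unit of x into the solution displaces one unit of y, costing 2 - 1 = 1 extra; that 1 is the reduced cost of x. Consistent with the second interpretation, lowering the objective coefficient of x from 2 to below 1 would make x positive in the optimal solution.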
Now let's make a quick check of your intuition. We already stated that a variable with a positive optimal value will have a reduced cost of zero. But what will the sign be on reduced costs for variables that have a zero optimal value? If we are minimizing our objective function, and in the optimal solution the value of x equals zero, will the reduced cost for x be positive or negative?
You should be able to see why the sign will change if we areinstead maximizing our objective function.
If you feel comfortable with the concept of reduced costsand would like to move on to the next topic, click on"Objective Coefficients." If you would like to look at anexample and quiz yourself on reduced costs, click on"Reduced Cost Quiz".