

DECISION ANALYSIS – PART 1

    Topics Outline

    Elements of Decision Analysis

    Decision Criteria

    Risk Profiles

Sensitivity Analysis

Decision Trees

    Bayes’ Rule

    Value of Information

    Elements of Decision Analysis

Many examples of decision making under uncertainty exist in the business world, including the following:

– Bidding competitions. The trade-off is between bidding low to win the bid and bidding high to make a larger profit.

– Introducing a new product into the market. If the product generates high customer demand, the company will make a large profit. But if demand is low, the company could fail to recoup its development costs.

– Capacity expansions of manufacturing companies. If they don’t expand and demand for their products is higher than expected, they will lose revenue because of insufficient capacity. If they do expand and demand for their products is lower than expected, they will be stuck with expensive underutilized capacity.

Although decision making under uncertainty occurs in a wide variety of contexts, all problems have three common elements:

– set of decisions (or strategies) available to the decision maker
– set of possible outcomes and the probabilities of these outcomes
– a value model that prescribes monetary values for the various decision-outcome combinations.

Once these elements are known, the decision maker can find an optimal decision, depending on the optimality criterion chosen.

Decision Criteria

Example 1

A decision maker must choose among three decisions, labeled D1, D2, and D3. Each of these decisions has three possible outcomes, labeled O1, O2, and O3.

A payoff table lists the payoff for each decision-outcome pair. Positive values correspond to rewards (or gains) and negative values correspond to costs (or losses). Here is the payoff table for our decision problem.

Decision      Outcome
              O1     O2     O3
D1            10     10     10
D2           –10     20     30
D3           –30     30     80

This table shows that the decision maker can play it safe by choosing decision D1. This provides a sure $10 payoff. With decision D2, rewards of $20 or $30 are possible, but a loss of $10 is also possible. Decision D3 is even riskier; the possible loss is greater, and the maximum gain is also greater.

Which decision would you choose? Would your choice change if the values in the payoff table were measured in thousands of dollars?
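To make the later criteria easy to compute, here is a minimal Python sketch of this payoff table as a nested dictionary. The variable name payoffs and the dictionary layout are illustrative choices, not part of the original notes; the later sketches build on it.

```python
# Payoff table for Example 1: payoffs[decision][outcome], in dollars.
payoffs = {
    "D1": {"O1": 10,  "O2": 10, "O3": 10},
    "D2": {"O1": -10, "O2": 20, "O3": 30},
    "D3": {"O1": -30, "O2": 30, "O3": 80},
}
```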


    There are several possible criteria for making decisions. Some criteria involve the assignment of probabilities to each event, but others do not. We will explore two criteria that do not use probabilities and one that does.

    Maximin Criterion

The maximin criterion finds the worst payoff in each row of the payoff table and chooses the decision corresponding to the best of these.

This criterion is appropriate for a very conservative (or pessimistic) decision maker. Such a criterion tends to avoid large losses, but it fails to even consider large rewards.

Example 1 (continued)
Determine the best decision according to the maximin criterion.

Decision      Outcome                  Minimum profit
              O1     O2     O3
D1            10     10     10           10
D2           –10     20     30          –10
D3           –30     30     80          –30

    Because the maximum of the minimum profits is $10, the decision maker chooses decision D1.
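A short sketch of the maximin rule applied to the payoffs dictionary introduced earlier; the helper name maximin is ours, chosen only for illustration.

```python
def maximin(payoffs):
    """Worst payoff per decision, then the decision whose worst case is best."""
    worst = {d: min(row.values()) for d, row in payoffs.items()}
    return max(worst, key=worst.get), worst

best, worst = maximin(payoffs)
print(best, worst)   # D1 {'D1': 10, 'D2': -10, 'D3': -30}
```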

    Maximax Criterion

The maximax criterion finds the best payoff in each row of the payoff table and chooses the decision corresponding to the best of these.

This criterion is appropriate for a risk taker (or optimist). It looks tempting because it focuses on large gains, but its very serious downside is that it ignores possible losses.

Example 1 (continued)
Determine the best decision according to the maximax criterion.

Decision      Outcome                  Maximum profit
              O1     O2     O3
D1            10     10     10           10
D2           –10     20     30           30
D3           –30     30     80           80

Because the maximum of the maximum profits is $80, the decision maker chooses decision D3.

The maximin and maximax criteria make no reference to how likely the various outcomes are. However, decision makers typically have at least some idea of these likelihoods, and they ought to use this information in the decision-making process. After all, if outcome O1 in our problem is extremely unlikely, then the pessimist who uses maximin is being overly conservative. Similarly, if outcome O3 is quite unlikely, then the optimist who uses maximax is taking an unnecessary risk. The following criterion uses probabilities and is generally regarded as the preferred criterion in most decision problems.
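For completeness, here is the maximax counterpart to the earlier maximin sketch, again over the same illustrative payoffs dictionary.

```python
def maximax(payoffs):
    """Best payoff per decision, then the decision whose best case is best."""
    best = {d: max(row.values()) for d, row in payoffs.items()}
    return max(best, key=best.get), best

choice, best = maximax(payoffs)
print(choice, best)   # D3 {'D1': 10, 'D2': 30, 'D3': 80}
```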


    Expected Monetary Value (EMV) Criterion

The expected monetary value (EMV) for a decision is the expected payoff from this decision. That is, the EMV for any decision is a weighted average of the possible payoffs for this decision, weighted by the probabilities of the outcomes.

$$\mathrm{EMV}(D) = \sum_{j=1}^{N} X_j \, P(O_j)$$

where

Xj – payoff for decision D when outcome Oj occurs
P(Oj) – probability of outcome Oj if decision D is made
N – number of possible outcomes

Where do the probabilities come from? The probabilities assigned are based on information available from historical data, expert opinions, government forecasts, or knowledge about the probability distribution that the event may follow.

    Using the expected monetary value (EMV) criterion, you do the following:

1. Calculate the EMV for each decision.
2. Choose the decision with the largest EMV.

    Example 1 (continued)

    The decision maker assesses the probabilities of the three outcomes as

0.3, 0.5, 0.2 if decision D2 is made, and
0.5, 0.2, 0.3 if decision D3 is made.

    Determine the best decision according to the EMV criterion.

    The EMV for each decision is:

    EMV(D1) = 10 (a sure thing)

EMV(D2) = –10(0.3) + 20(0.5) + 30(0.2) = 13

EMV(D3) = –30(0.5) + 30(0.2) + 80(0.3) = 15

    The optimal decision is D3 because it has the largest EMV.
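A minimal sketch of the EMV computation, reusing the payoffs dictionary from earlier. The probs dictionary encodes the assessed probabilities (for D1 any distribution gives the sure 10), and the function name emv is our own choice, not from the notes.

```python
probs = {
    "D1": {"O1": 0.3, "O2": 0.5, "O3": 0.2},   # D1 pays 10 regardless, so any distribution works
    "D2": {"O1": 0.3, "O2": 0.5, "O3": 0.2},
    "D3": {"O1": 0.5, "O2": 0.2, "O3": 0.3},
}

def emv(payoffs, probs, decision):
    """Probability-weighted average payoff for one decision."""
    return sum(payoffs[decision][o] * probs[decision][o] for o in payoffs[decision])

emvs = {d: emv(payoffs, probs, d) for d in payoffs}
print(emvs)                      # roughly {'D1': 10, 'D2': 13, 'D3': 15}
print(max(emvs, key=emvs.get))   # D3, the EMV-maximizing decision
```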

Interpretation of EMV

The EMV of 15 for decision D3 does not mean that you expect to gain $15 from this decision. It is true only on average. That is, if this situation can occur many times and decision D3 is used each time, then on average, you will make a gain of about $15. About 50% of the time you will lose $30, about 20% of the time you will gain $30, and about 30% of the time you will gain $80. These average to $15. For this reason, using the EMV criterion is sometimes referred to as “playing the averages.”
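The “playing the averages” interpretation can be checked with a tiny simulation; this is an illustrative sketch, and the sample size and seed are arbitrary.

```python
import random

# Simulate decision D3 many times; the long-run average payoff settles near its EMV of 15.
random.seed(1)
values, weights = [-30, 30, 80], [0.5, 0.2, 0.3]
draws = random.choices(values, weights=weights, k=100_000)
print(sum(draws) / len(draws))   # close to 15
```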


    Risk Profiles

The risk profile for a decision is a “spike” chart that represents the probability distribution of monetary outcomes for this decision. The spikes are located at the possible monetary values, and the heights of the spikes correspond to the probabilities.

The EMV for any decision is a summary measure of the complete risk profile – it is the mean of the corresponding probability distribution. Therefore, when you use the EMV criterion for making decisions, you are not using all of the information in the risk profiles; you are comparing only their means.

Risk profiles can be useful as extra information for making decisions. For example, a manager who sees too much risk in the risk profile of the EMV-maximizing decision might choose to override this decision and instead choose a somewhat less risky alternative.

    Example 1 (continued)

Construct the risk profile for the EMV-maximizing decision D3.
Here is the probability distribution for decision D3.

Monetary value    –30     30     80
Probability        0.5    0.2    0.3

    Here is the risk profile.

It shows that a loss of $30 has probability 0.5, a gain of $30 has probability 0.2, and a gain of $80 has probability 0.3.

The risk profile for decision D2 is similar, except that its spikes are above the values –10, 20, and 30.

    The risk profile for decision D1 is a single spike of height 1 over the value 10.
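A risk profile is just the probability distribution of monetary values, so it can be sketched directly from the payoffs and probs dictionaries used above; the helper name risk_profile is ours.

```python
def risk_profile(payoffs, probs, decision):
    """Map each distinct monetary value to its total probability."""
    profile = {}
    for outcome, p in probs[decision].items():
        value = payoffs[decision][outcome]
        profile[value] = profile.get(value, 0.0) + p
    return profile

print(risk_profile(payoffs, probs, "D3"))   # {-30: 0.5, 30: 0.2, 80: 0.3}
print(risk_profile(payoffs, probs, "D1"))   # {10: 1.0} -- a single spike at 10
```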


    Sensitivity Analysis

Some of the quantities in a decision analysis, particularly the probabilities, are often intelligent guesses at best. Therefore, it is important, especially in real-world business problems, to assess the sensitivity of the conclusions. Here we systematically vary inputs to the problem to see how the outputs change.

Usually the most important information from a sensitivity analysis is whether the optimal decision continues to be optimal as one or more inputs change. If the optimal decision does not change, you can be more confident in it. But if small changes in the inputs result in large differences in the outputs, you should take great care and not rely too heavily on your model.

    Example 1 (continued)

Recall the probability distributions and EMVs for our example: D1 is a sure 10 (EMV = 10); D2 has probabilities 0.3, 0.5, 0.2 over the payoffs –10, 20, 30 (EMV = 13); and D3 has probabilities 0.5, 0.2, 0.3 over the payoffs –30, 30, 80 (EMV = 15).

How sensitive is the optimal decision D3 to small changes in the inputs, e.g., if the probabilities for D3 change only slightly to 0.6, 0.2, and 0.2?

EMV(D3) = –30(0.6) + 30(0.2) + 80(0.2) = 4

The EMV for D3 changes to 4. Now D3 is the worst decision and D2 is the best. So it appears that the optimal decision is quite sensitive to the assessed probabilities.

Is D3 still the best decision if the probabilities remain the same but the last payoff for D2 changes from 30 to 45?

EMV(D2) = –10(0.3) + 20(0.5) + 45(0.2) = 16

    The EMV for D2 changes to 16, and D2 becomes the best decision.
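These perturbations can also be checked programmatically by recomputing the EMVs with modified inputs, continuing the earlier sketch; the altered dictionaries below are illustrative.

```python
# Scenario 1: change D3's probabilities from (0.5, 0.2, 0.3) to (0.6, 0.2, 0.2).
probs_alt = {**probs, "D3": {"O1": 0.6, "O2": 0.2, "O3": 0.2}}
emvs_alt = {d: emv(payoffs, probs_alt, d) for d in payoffs}
print(max(emvs_alt, key=emvs_alt.get), emvs_alt)     # D2 is now best; EMV(D3) drops to about 4

# Scenario 2: keep the original probabilities but raise D2's payoff under O3 from 30 to 45.
payoffs_alt = {**payoffs, "D2": {"O1": -10, "O2": 20, "O3": 45}}
emvs_alt2 = {d: emv(payoffs_alt, probs, d) for d in payoffs_alt}
print(max(emvs_alt2, key=emvs_alt2.get), emvs_alt2)  # D2 becomes best, with EMV about 16
```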


    Decision Trees

Using decision trees is a way of representing the events (decisions and outcomes) graphically. They are composed of nodes and branches and show the sequence of events, as well as probabilities and monetary values. There are three types of nodes:

     –  a decision node (a square) represents a time when the decision maker makes a decision;

– a probability node (a circle) represents a time when the result of an uncertain outcome becomes known;

– an end node (a triangle) indicates that the problem is completed – all decisions have been made, all uncertainty has been resolved, and all payoffs and costs have been incurred.

The nodes represent points in time. Time proceeds from left to right. This means that any branches leading into a node (from the left) have already occurred. Any branches leading out of a node (to the right) have not yet occurred. The optimal decision branch(es) are usually marked in some way, for example with a notch.

Probabilities are listed on probability branches. These probabilities are conditional on the events that have already been observed (those to the left).

     Note that the probabilities on branches leading out of any probability node must sum to 1.

Monetary values are shown to the right of the end nodes. (Some monetary values are also placed under the branches where they occur in time.)

EMVs are shown above the various nodes. They are calculated using the so-called folding-back procedure:

Starting from the right of the decision tree and working back to the left:
1. At each probability node, calculate an EMV (a sum of products of monetary values and probabilities).
2. At each decision node, take a maximum of EMVs to identify the optimal decision.

    Example 1 (continued)

The decision node comes first (to the left) because the decision maker must make a decision before observing the uncertain outcome. The probability nodes then follow the decision branches, and the probabilities appear above their branches. (There is no need for a probability node after the D1 branch because its monetary value is a sure 10.)

The ultimate payoffs appear next to the end nodes, to the right of the probability branches. The EMVs above the probability nodes are for the various decisions. For example, using the folding-back procedure, the EMV for the D2 branch is

EMV(D2) = –10(0.3) + 20(0.5) + 30(0.2) = 13

The maximum of the EMVs is for the D3 branch and is written above the decision node. Because it corresponds to D3, we put a notch on the D3 branch to indicate that this decision is optimal.
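The folding-back procedure can be expressed as a small recursive function. The tree encoding below (dictionaries with a "kind" field) is purely an illustrative way to hold the Example 1 tree, not a standard structure from the notes.

```python
def fold_back(node):
    """Chance nodes average their children; decision nodes take the maximum."""
    kind = node["kind"]
    if kind == "end":
        return node["value"]
    if kind == "chance":
        return sum(p * fold_back(child) for p, child in node["children"])
    if kind == "decision":
        values = {name: fold_back(child) for name, child in node["children"]}
        node["best"] = max(values, key=values.get)   # mark the optimal branch (the "notch")
        return values[node["best"]]

tree = {"kind": "decision", "children": [
    ("D1", {"kind": "end", "value": 10}),
    ("D2", {"kind": "chance", "children": [
        (0.3, {"kind": "end", "value": -10}),
        (0.5, {"kind": "end", "value": 20}),
        (0.2, {"kind": "end", "value": 30})]}),
    ("D3", {"kind": "chance", "children": [
        (0.5, {"kind": "end", "value": -30}),
        (0.2, {"kind": "end", "value": 30}),
        (0.3, {"kind": "end", "value": 80})]}),
]}
print(fold_back(tree), tree["best"])   # about 15, 'D3'
```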


    Bayes’ Rule 

The purpose of Bayes’ rule is to revise probabilities as new information becomes available.

Prior probabilities P(A1), P(A2), . . . , P(An)  →  New information B with likelihoods P(B|A1), P(B|A2), . . . , P(B|An)  →  Application of Bayes’ rule  →  Posterior probabilities P(A1|B), P(A2|B), . . . , P(An|B)

Often we begin a decision analysis process by estimating initial or prior probabilities P(A1), P(A2), . . . , P(An) for specific events of interest A1, A2, . . . , An. Then, from sources such as a sample, a special report, or a product test, we obtain additional information B about the events together with the conditional probabilities P(B|A1), P(B|A2), . . . , P(B|An), called likelihoods.

Given this new information, we update the prior probability values by calculating revised probabilities P(A1|B), P(A2|B), . . . , P(An|B), referred to as posterior probabilities. Bayes’ rule provides a means for making these probability calculations.

    Example 2

A manufacturing firm receives shipments of parts from two different suppliers. Let A1 and A2 denote the events that a part is from supplier 1 and supplier 2, respectively. Currently, 65% of the parts purchased by the firm are from supplier 1 and the remaining 35% are from supplier 2. Historical data suggest that the two suppliers provide 2% and 5% bad parts, respectively.

    (a) What are the prior probabilities of events A1 and A2?

    If a part is selected at random, we would assign the prior probabilities

    P(A1) = 0.65 and P(A2) = 0.35.

    (b) Let B denote the event that a part is bad. What are the likelihoods P(B|A1) and P(B|A2)?

    The likelihoods that a part is bad are given by the conditional probabilities

    P(B|A1) = 0.02 and P(B|A2) = 0.05


Let A1, A2, . . . , An be mutually exclusive events whose union is the entire sample space and P(A1) ≠ 0, P(A2) ≠ 0, . . . , P(An) ≠ 0.

    Law of Total Probability

$$P(B) = P(B \mid A_1)P(A_1) + \cdots + P(B \mid A_n)P(A_n)$$

That is, the probability of event B is the sum of likelihoods times priors. For B to occur, it must occur along with one of the As. The Law of Total Probability simply decomposes the probability of B into all of these possibilities.

    Bayes’ Rule

$$P(A_i \mid B) = \frac{P(B \mid A_i)\,P(A_i)}{P(B)}$$

In words, the posterior probability of event Ai is the likelihood times the prior, divided by the probability P(B).

Example 2 (continued)

(c) What is the probability that a randomly selected part is bad?

Using the Law of Total Probability,

$$P(B) = P(B \mid A_1)P(A_1) + P(B \mid A_2)P(A_2) = 0.02(0.65) + 0.05(0.35) = 0.013 + 0.0175 = 0.0305$$

There is about a 3% chance that a randomly selected part is bad.

(d) The parts from the two suppliers are used in the firm’s manufacturing process and a machine breaks down because it attempts to process a bad part. What is the probability that the bad part came from supplier 1 and from supplier 2, respectively?

We are looking for the posterior probabilities P(A1|B) and P(A2|B).

    Bayes’ rule can be used to find them. 

$$P(A_1 \mid B) = \frac{P(B \mid A_1)\,P(A_1)}{P(B)} = \frac{0.02(0.65)}{0.0305} = 0.4262$$

$$P(A_2 \mid B) = \frac{P(B \mid A_2)\,P(A_2)}{P(B)} = \frac{0.05(0.35)}{0.0305} = 0.5738$$

Note that we started with a probability of 0.65 that a part selected at random was from supplier 1. However, given the information that the part is bad, the probability that the part is from supplier 1 drops to 0.4262. In fact, if the part is bad, it has better than a 50/50 chance that it came from supplier 2. That is, P(A2|B) = 0.5738.
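The whole calculation for Example 2 fits in a few lines of Python; the variable names are ours and the sketch simply mirrors the formulas above.

```python
priors      = {"A1": 0.65, "A2": 0.35}   # part comes from supplier 1 / supplier 2
likelihoods = {"A1": 0.02, "A2": 0.05}   # P(bad part | supplier)

# Law of Total Probability, then Bayes' rule for each supplier.
p_bad = sum(likelihoods[a] * priors[a] for a in priors)
posteriors = {a: likelihoods[a] * priors[a] / p_bad for a in priors}

print(p_bad)        # 0.0305
print(posteriors)   # roughly {'A1': 0.4262, 'A2': 0.5738}
```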


    Value of Information

Many decision problems are solved in stages. The first-stage decision is whether to obtain some information that could be useful. If you decide not to obtain the information, you make a decision right away, based on prior probabilities. If you do decide to obtain the information, then you first observe the information and then make the second-stage decision, based on posterior probabilities. However, information usually comes at a price.

    How much does the information cost? Is the information worth its price?

    Example 3

    Pittsburgh Development Corporation (PDC)

Pittsburgh Development Corporation (PDC) purchased land that will be the site of a new luxury condominium complex. PDC commissioned preliminary architectural drawings for three different-sized projects: small, medium, and large with 30, 60, and 90 condominiums, respectively.

PDC is optimistic about the potential for the luxury condominium complex and has an initial subjective probability assessment of 0.8 that demand will be strong and a corresponding probability of 0.2 that demand will be weak. The payoff table with profits expressed in millions of dollars is shown below.

Decision                Outcome
                        Strong demand (O1)    Weak demand (O2)
Small complex (D1)        8                     7
Medium complex (D2)      14                     5
Large complex (D3)       20                    –9

(a) Select the size of the condominium complex that will lead to the largest profit.

    The expected monetary value for each of the three decisions follows:

EMV(D1) = 0.8(8) + 0.2(7) = 7.8
EMV(D2) = 0.8(14) + 0.2(5) = 12.2
EMV(D3) = 0.8(20) + 0.2(–9) = 14.2

Thus, using the EMV criterion, we find that the large condominium complex, with an expected monetary value of $14.2 million, is the recommended decision.


Perfect information is information that will indicate with certainty which ultimate outcome will occur. The Expected Value of Perfect Information (EVPI) is the maximum amount you might be willing to pay for perfect information about the outcome.

    EVPI = EMV with (free) perfect information –  EMV without information

That is, the amount you should be willing to spend for information is the expected increase in EMV you can obtain from having the information. If the actual price of the information is less than or equal to this amount, you should purchase it; otherwise, the information is not worth its price. In addition, information that never affects your decision is worthless, and it should not be purchased at any price.

    Example 3 (continued)

(b) Suppose that PDC could determine with certainty, prior to making a decision, which of the two outcomes – strong demand (O1) or weak demand (O2) – is going to occur. What is PDC’s optimal strategy if this perfect information becomes available?

We can state PDC’s optimal decision strategy if the perfect information becomes available as follows:
If O1 occurs, select D3 and receive a payoff of $20 million.
If O2 occurs, select D1 and receive a payoff of $7 million.

    (c) What is the expected monetary value for this decision strategy?

There is a 0.8 probability that the perfect information will indicate outcome O1 and the resulting decision alternative D3 will provide a $20 million profit. Similarly, with a 0.2 probability for outcome O2, the optimal decision D1 will provide a $7 million profit. Thus, the expected monetary value of the decision strategy based on perfect information is

    EMV with perfect information = 0.8(20) + 0.2(7) = 17.4

    (d) What is the expected value of the perfect information (EVPI)?

    EVPI = EMV with perfect information –  EMV without information

In (a) we showed that the recommended decision is D3 with an EMV of $14.2 million. Because this decision recommendation and EMV computation were made without the benefit of perfect information, $14.2 million is the expected monetary value without information.

In (c) we showed that the EMV with perfect information is $17.4 million. Therefore, the expected value of the perfect information is

    EVPI = $17.4 –  $14.2 = $3.2

    (e) Interpret the EVPI.

The EVPI of $3.2 million represents the additional expected monetary value that can be obtained if perfect information were available. Generally speaking, a market research study will not provide perfect information; however, if the market research study is a good one, the information gathered might be worth a sizable portion of the $3.2 million. Given the EVPI of $3.2 million, PDC might seriously consider a market survey as a way to obtain more information about the possible outcomes.
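A compact sketch of the EVPI calculation for the PDC example (profits in millions of dollars); the dictionary names pdc_payoffs and pdc_probs are illustrative.

```python
pdc_payoffs = {"D1": {"O1": 8,  "O2": 7},
               "D2": {"O1": 14, "O2": 5},
               "D3": {"O1": 20, "O2": -9}}
pdc_probs = {"O1": 0.8, "O2": 0.2}

# Best EMV acting now (without information): choose the single best decision.
emv_without = max(sum(pdc_payoffs[d][o] * pdc_probs[o] for o in pdc_probs)
                  for d in pdc_payoffs)                                        # 14.2 (decision D3)

# EMV with free perfect information: pick the best decision separately for each outcome.
emv_with_pi = sum(pdc_probs[o] * max(pdc_payoffs[d][o] for d in pdc_payoffs)
                  for o in pdc_probs)                                          # 0.8*20 + 0.2*7 = 17.4

print(emv_with_pi - emv_without)   # EVPI, about 3.2 (million dollars)
```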


Admittedly, perfect information is almost never available at any price, but finding its value is still useful because it provides an upper bound on the value of any information. In other words, the value of any information can never be greater than the value of perfect information that would eliminate all uncertainty.

Most often, additional (imperfect) information is obtained through experiments designed to provide sample information. Raw material sampling, product testing, and market research studies are examples of experiments (or studies) that may enable management to revise or update the probabilities of the possible outcomes. The Expected Value of Sample Information (EVSI) is the most you would be willing to pay for the sample information.

    EVSI = EMV with (free) sample information –  EMV without information

    Example 3 (continued)

(f) The PDC management is considering a six-month market research study with one of the following two results:

    1. Favorable report: A significant number of the individuals contacted express interest in purchasing a PDC condominium.

2. Unfavorable report: Very few of the individuals contacted express interest in purchasing a PDC condominium.

Suppose that the decision analysis (after constructing the corresponding decision tree and finding the respective EMVs) shows that the optimal decision for PDC is to conduct the market research study and has an EMV of $15.93 million. What is the expected value of the sample information (EVSI)?

The market research study is the sample information used to determine the optimal decision strategy. The EMV associated with the market research study is $15.93 million.

In (a), we showed that the best expected monetary value if the market research study is not undertaken is $14.20 million. Thus, we can conclude that

    EVSI = EMV with sample information –  EMV without information

    = $15.93 –  $14.20 = $1.73

In other words, conducting the market research study adds $1.73 million to the PDC expected monetary value.
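The EVSI step itself is a simple subtraction; this closing sketch only restates the EMVs quoted in the notes.

```python
# EMVs from the decision-tree analysis in the notes, in millions of dollars.
emv_with_sample, emv_without = 15.93, 14.20
print(emv_with_sample - emv_without)   # EVSI, about 1.73 million (below the EVPI of 3.2)
```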