
Page 1: Ch08

Decision Theory

Professor Ahmadi

Page 2: Ch08

Learning Objectives

Structuring the decision problem and decision trees

Types of decision-making environments:
• Decision making under uncertainty, when probabilities are not known
• Decision making under risk, when probabilities are known

Expected Value of Perfect Information

Decision Analysis with Sample Information

Developing a Decision Strategy

Expected Value of Sample Information

Page 3: Ch08

Types of Decision Making Environments

Type 1: Decision Making under Certainty. The decision maker knows for sure (that is, with certainty) the outcome or consequence of every decision alternative.

Type 2: Decision Making under Uncertainty. The decision maker has no information at all about the various outcomes or states of nature.

Type 3: Decision Making under Risk. The decision maker has some knowledge regarding the probability of occurrence of each outcome or state of nature.

Page 4: Ch08

Decision Trees

A decision tree is a chronological representation of the decision problem.

Each decision tree has two types of nodes: round nodes correspond to the states of nature, while square nodes correspond to the decision alternatives.

The branches leaving each round node represent the different states of nature, while the branches leaving each square node represent the different decision alternatives.

At the end of each limb of the tree are the payoffs attained from the series of branches making up that limb.
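A small illustration (ours, not part of the original slides): such a tree can be written down in Python as nested dictionaries, with the square (decision) node mapping alternatives to round (chance) nodes, and each chance node mapping states of nature to the payoff at the end of that limb. The numbers below are taken from the marketing-strategy example that appears later in the deck (profits in $1000); the variable names are ours.

# Square (decision) node -> alternatives; each alternative leads to a
# round (chance) node -> states of nature -> payoff at the end of the limb.
decision_node = {
    "d1": {"s1: market receptive": 20, "s2: market unfavorable": 6},
    "d2": {"s1: market receptive": 25, "s2: market unfavorable": 3},
}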

Page 5: Ch08

Decision Making Under Uncertainty

If the decision maker does not know with certainty which state of nature will occur, then he/she is said to be making a decision under uncertainty.

The five commonly used criteria for decision making under uncertainty are:

1. the optimistic approach (Maximax)

2. the conservative approach (Maximin)

3. the minimax regret approach (Minimax regret)

4. equally likely (Laplace criterion)

5. criterion of realism with α (Hurwicz criterion)

Page 6: Ch08

Optimistic Approach

The optimistic approach would be used by an optimistic decision maker.

The decision with the largest possible payoff is chosen.

If the payoff table was in terms of costs, the decision with the lowest cost would be chosen.

Page 7: Ch08

Conservative Approach

The conservative approach would be used by a conservative decision maker.

For each decision the minimum payoff is listed, and then the decision corresponding to the maximum of these minimum payoffs is selected. (Hence, the minimum possible payoff is maximized.)

If the payoff was in terms of costs, the maximum cost would be determined for each decision, and then the decision corresponding to the minimum of these maximum costs is selected. (Hence, the maximum possible cost is minimized.)

Page 8: Ch08

Minimax Regret Approach

The minimax regret approach requires the construction of a regret table or an opportunity loss table.

This is done by calculating, for each state of nature, the difference between each payoff and the largest payoff for that state of nature.

Then, using this regret table, the maximum regret for each possible decision is listed.

The decision chosen is the one corresponding to the minimum of the maximum regrets.

Page 9: Ch08

Example: Marketing Strategy

Consider the following problem with two decision alternatives (d1 and d2) and two states of nature, s1 (Market Receptive) and s2 (Market Unfavorable), with the following payoff table representing profits (in $1000):

                        States of Nature
                        s1          s2
  Decisions    d1       20           6
               d2       25           3

Page 10: Ch08

Example: Optimistic Approach

An optimistic decision maker would use the optimistic approach. All we really need to do is to choose the decision that has the largest single value in the payoff table. This largest value is 25, and hence the optimal decision is d2.

                 Maximum
    Decision     Payoff
    d1           20
    d2           25    <- maximum; choose d2

Page 11: Ch08

Example: Conservative Approach

A conservative decision maker would use the conservative approach. List the minimum payoff for each decision. Choose the decision with the maximum of these minimum payoffs.

                 Minimum
    Decision     Payoff
    d1           6     <- maximum; choose d1
    d2           3

Page 12: Ch08

Example: Minimax Regret Approach

For the minimax regret approach, first compute a regret table by subtracting each payoff in a column from the largest payoff in that column. The resulting regret table is:

          s1     s2     Maximum
    d1     5      0     5
    d2     0      3     3    <- minimum

Then, select the decision with the minimum of the maximum regrets: d2.

Page 13: Ch08

Example: Equally Likely (Laplace) Criterion

The equally likely, or Laplace, criterion finds the decision alternative with the highest average payoff.

• First, calculate the average payoff for every alternative.

• Then pick the alternative with the maximum average payoff.

Average for d1 = (20 + 6)/2 = 13
Average for d2 = (25 + 3)/2 = 14    Thus, d2 is selected.

Page 14: Ch08

Example: Criterion of Realism (Hurwicz)

Often called the weighted average, the criterion of realism (or Hurwicz) decision criterion is a compromise between an optimistic and a pessimistic decision.

• First, select a coefficient of realism, α, with a value between 0 and 1. When α is close to 1, the decision maker is optimistic about the future; when α is close to 0, the decision maker is pessimistic about the future.

• Payoff = α × (maximum payoff) + (1 − α) × (minimum payoff)

In our example, let α = 0.8:

Payoff for d1 = 0.8(20) + 0.2(6) = 17.2
Payoff for d2 = 0.8(25) + 0.2(3) = 20.6    Thus, select d2.
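As a numerical check (ours, not from the slides), the short Python sketch below reproduces all five criteria for the example payoff table; payoffs are in $1000 and alpha = 0.8, as above, and the variable names are ours.

payoffs = {"d1": [20, 6], "d2": [25, 3]}   # columns: s1 (receptive), s2 (unfavorable)
alpha = 0.8

maximax = max(payoffs, key=lambda d: max(payoffs[d]))                        # optimistic -> d2
maximin = max(payoffs, key=lambda d: min(payoffs[d]))                        # conservative -> d1
best_per_state = [max(p[j] for p in payoffs.values()) for j in range(2)]
minimax_regret = min(payoffs,
                     key=lambda d: max(best_per_state[j] - payoffs[d][j] for j in range(2)))  # -> d2
laplace = max(payoffs, key=lambda d: sum(payoffs[d]) / len(payoffs[d]))      # equally likely -> d2
hurwicz = max(payoffs, key=lambda d: alpha * max(payoffs[d]) + (1 - alpha) * min(payoffs[d]))  # -> d2

print(maximax, maximin, minimax_regret, laplace, hurwicz)   # d2 d1 d2 d2 d2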

Page 15: Ch08

Decision Making with Probabilities

Expected Value Approach
• If probabilistic information regarding the states of nature is available, one may use the expected monetary value (EMV) approach (also known as Expected Value or EV).
• Here the expected return for each decision is calculated by summing the products of the payoff under each state of nature and the probability of the respective state of nature occurring.
• The decision yielding the best expected return is chosen.

Page 16: Ch08

Expected Value of a Decision Alternative

The expected value of a decision alternative is the sum of the weighted payoffs for the decision alternative.

The expected value (EV) of decision alternative di is defined as:

    EV(di) = Σ (j = 1 to N) P(sj) · Vij

where:  N      = the number of states of nature
        P(sj)  = the probability of state of nature sj
        Vij    = the payoff corresponding to decision alternative di and state of nature sj

Page 17: Ch08

Example: Marketing Strategy

Expected Value Approach

Refer to the previous problem. Assume the probability of the market being receptive is known to be 0.75. Use the expected monetary value criterion to determine the optimal decision.
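The slide leaves the computation to the reader; working the numbers with P(s1) = 0.75 and P(s2) = 0.25 (payoffs in $1000):

EV(d1) = 0.75(20) + 0.25(6) = 16.5
EV(d2) = 0.75(25) + 0.25(3) = 19.5

The best expected return is 19.5, so d2 is the optimal decision (an expected profit of $19,500).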

Page 18: Ch08

Expected Value of Perfect Information

Frequently, information is available that can improve the probability estimates for the states of nature.

The expected value of perfect information (EVPI) is the increase in the expected profit that would result if one knew with certainty which state of nature would occur.

The EVPI provides an upper bound on the expected value of any sample or survey information.

Page 19: Ch08

Expected Value of Perfect Information

EVPI Calculation
• Step 1: Determine the optimal return corresponding to each state of nature.
• Step 2: Compute the expected value of these optimal returns.
• Step 3: Subtract the EV of the optimal decision from the amount determined in Step 2.
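A minimal Python sketch of these three steps (ours, not from the slides), applied to the example with payoffs in $1000 and P(s1) = 0.75; variable names are ours.

payoffs = {"d1": [20, 6], "d2": [25, 3]}
probs = [0.75, 0.25]

# Step 1: optimal return under each state of nature
best_per_state = [max(p[j] for p in payoffs.values()) for j in range(2)]             # [25, 6]
# Step 2: expected value of these optimal returns (EV with perfect information)
ev_with_pi = sum(prob * best for prob, best in zip(probs, best_per_state))           # 20.25
# Step 3: subtract the EV of the optimal decision made without perfect information
ev_best = max(sum(prob * v for prob, v in zip(probs, payoffs[d])) for d in payoffs)  # 19.5
evpi = ev_with_pi - ev_best                                                          # 0.75, i.e. $750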

Page 20: Ch08

Example: Marketing Strategy

Expected Value of Perfect Information

Calculate the expected value of the best action for each state of nature and subtract the EV of the optimal decision:

EVPI = 0.75(25,000) + 0.25(6,000) − 19,500 = $750

Page 21: Ch08

Decision Analysis with Sample Information

Knowledge of sample or survey information can be used to revise the probability estimates for the states of nature.

Prior to obtaining this information, the probability estimates for the states of nature are called prior probabilities.

With knowledge of conditional probabilities for the outcomes or indicators of the sample or survey information, these prior probabilities can be revised by employing Bayes' Theorem.

The outcomes of this analysis are called posterior probabilities.

Page 22: Ch08

Posterior Probabilities

Posterior Probabilities Calculation
• Step 1: For each state of nature, multiply the prior probability by its conditional probability for the indicator. This gives the joint probabilities for the states and the indicator.
• Step 2: Sum these joint probabilities over all states. This gives the marginal probability for the indicator.
• Step 3: For each state, divide its joint probability by the marginal probability for the indicator. This gives the posterior probability distribution.
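A minimal Python sketch of this three-step procedure (ours, not from the slides); the function is generic and the argument names are ours.

def posteriors(priors, conditionals):
    # priors[s]: prior probability of state of nature s
    # conditionals[s]: P(indicator | state s) for one particular indicator
    joint = {s: priors[s] * conditionals[s] for s in priors}    # Step 1: joint probabilities
    marginal = sum(joint.values())                               # Step 2: marginal probability of the indicator
    return {s: joint[s] / marginal for s in joint}               # Step 3: posterior distribution

Calling this once for each indicator (for example, once for a "market receptive" prediction and once for a "market unfavorable" prediction) gives the revised probabilities used in the rest of the analysis.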

Page 23: Ch08

Expected Value of Sample Information

The expected value of sample information (EVSI) is the additional expected profit possible through knowledge of the sample or survey information.

EVSI Calculation
• Step 1: Determine the optimal decision and its expected return for the possible outcomes of the sample, using the posterior probabilities for the states of nature.
• Step 2: Compute the expected value of these optimal returns.
• Step 3: Subtract the EV of the optimal decision obtained without using the sample information from the amount determined in Step 2.
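A minimal Python sketch of these steps (ours, not from the slides), reusing the posteriors() function sketched earlier; the argument names are illustrative.

def evsi(payoffs, priors, conditionals_by_outcome):
    # payoffs[d][s]: payoff for decision d under state of nature s
    # conditionals_by_outcome[o][s]: P(sample outcome o | state s)
    ev_with_sample = 0.0
    for cond in conditionals_by_outcome.values():
        p_outcome = sum(priors[s] * cond[s] for s in priors)                     # marginal probability of this outcome
        post = posteriors(priors, cond)                                          # revised state probabilities
        best = max(sum(post[s] * payoffs[d][s] for s in post) for d in payoffs)  # Step 1: best EV given this outcome
        ev_with_sample += p_outcome * best                                       # Step 2: weight by outcome probability
    ev_without = max(sum(priors[s] * payoffs[d][s] for s in priors) for d in payoffs)
    return ev_with_sample - ev_without                                           # Step 3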

Page 24: Ch08

Efficiency of Sample Information

Efficiency of sample information is the ratio of EVSI to EVPI.

As the EVPI provides an upper bound for the EVSI, efficiency is always a number between 0 and 1.

Page 25: Ch08

Refer to the Marketing Strategy Example

It is known from past experience that of all the cases when the market was receptive, a research company predicted it in 90 percent of the cases. (In the other 10 percent, they predicted an unfavorable market.) Also, of all the cases when the market proved to be unfavorable, the research company predicted it correctly in 85 percent of the cases. (In the other 15 percent of the cases, they predicted it incorrectly.)

Answer the following questions based on the above information.

Page 26: Ch08

Example: Marketing Strategy

1. Draw a complete probability tree.

2. Find the posterior probabilities of all states of nature.

3. Using the posterior probabilities, which plan would you recommend?

4. How much should one be willing to pay (maximum) for the research survey? That is, compute the expected value of sample information (EVSI).