
  • Decision Analysis-1

    IndE 411 Stochastic Models and Decision Analysis

    UW Industrial and Systems Engineering

    Instructor: Prof. Zelda Zabinsky

  • Decision Analysis-2

    What is Operations Research?

    •  Operations Research (OR) is a discipline that deals with the application of advanced analytical methods to help make better decisions

    •  OR uses techniques such as mathematical modeling, statistical analysis, and mathematical optimization to arrive at optimal or near-optimal solutions to complex decision-making problems

    •  Operations research overlaps with other disciplines, notably industrial engineering, operations management, and predictive analytics

  • Decision Analysis-3

    Operations Research Modeling Toolset (IndE 410, 411, 412)

    •  Linear Programming
    •  Network Programming
    •  Dynamic Programming
    •  Integer Programming
    •  Nonlinear Programming
    •  Game Theory
    •  Decision Analysis
    •  Markov Chains
    •  Queueing Theory
    •  Inventory Theory
    •  Forecasting
    •  Markov Decision Processes
    •  Simulation
    •  Stochastic Programming

  • Decision Analysis-4

    IndE 411

    •  Decision analysis
       –  Decision making without experimentation
       –  Decision making with experimentation
       –  Decision trees
       –  Utility theory

    •  Markov chains
       –  Modeling
       –  Chapman-Kolmogorov equations
       –  Classification of states
       –  Long-run properties
       –  First passage times
       –  Absorbing states

    •  Queueing theory
       –  Basic structure and modeling
       –  Exponential distribution
       –  Birth-and-death processes
       –  Models based on birth-and-death
       –  Models with non-exponential distributions

    •  Applications of queueing theory
       –  Waiting cost functions
       –  Decision models
       –  Jackson networks

  • Decision Analysis-5

    Decision Analysis

    Chapter 15

  • Decision Analysis-6

    Why Study Decision Analysis?

    •  In IndE 410, you studied decision making when the consequences of alternative decisions were assumed to be known with certainty

    •  However, decisions often need to be made in uncertain environments
       –  In 2007, Apple introduced the revolutionary iPhone amid uncertainty about the reaction of potential customers
          •  How many iPhones should be produced?
          •  Should iPhones be available with multiple service providers or just one? Which one?
          •  Should iPhones be first test marketed in a small region before a full-scale release? Which region?
       –  Other examples?
          •  Financial decisions regarding portfolio investment
          •  Medical decisions regarding treatment, tests, diagnosis
          •  Agriculture, military, transportation, manufacturing

  • Decision Analysis-7

    Decision Analysis

    •  Decision making without experimentation
       –  Decision making criteria

    •  Decision making with experimentation
       –  Expected value of experimentation
       –  Decision trees

    •  Utility theory

  • Decision Analysis-8

    Decision Making without Experimentation

  • Decision Analysis-9

    Goferbroke Example

    •  Goferbroke Company owns a tract of land that may contain oil
    •  Consulting geologist: “1 chance in 4 of oil”
    •  Offer for purchase from another company: $90k
    •  Can also hold the land and drill for oil at a cost of $100k
    •  If oil, expected revenue $800k; if not, nothing

                              Payoff
    Action, or Alternative    Oil              Dry
    Drill for oil
    Sell the land
    Chance                    1 in 4           3 in 4

  • Decision Analysis-10

    Goferbroke Example

    •  Goferbroke Company owns a tract of land that may contain oil
    •  Consulting geologist: “1 chance in 4 of oil”
    •  Offer for purchase from another company: $90k
    •  Can also hold the land and drill for oil at a cost of $100k
    •  If oil, expected revenue $800k; if not, nothing

                              Payoff
    Action, or Alternative    Oil              Dry
    Drill for oil             800 - 100 = 700  -100
    Sell the land             90               90
    Chance                    1 in 4 (0.25)    3 in 4 (0.75)

  • Decision Analysis-11

    Notation and Terminology

    •  Actions: {a1, a2, …}
       –  The set of actions the decision maker must choose from
       –  Example:

    •  States of nature: {θ1, θ2, ...}
       –  Possible outcomes of the uncertain event
       –  Example:

  • Decision Analysis-12

    Notation and Terminology

    •  Actions: {a1, a2, …}
       –  The set of actions the decision maker must choose from
       –  Example: Drill for oil; Manufacture iPhones

    •  States of nature: {θ1, θ2, ...}
       –  Possible outcomes of the uncertain event
       –  Example: Oil or dry; Number of customers for iPhones

  • Decision Analysis-13

    Notation and Terminology

    •  Payoff/Loss Function: L(ai, θk)
       –  The payoff/loss incurred by taking action ai when state θk occurs
       –  Example: Profit, L(action = drill, state = oil) = 700

    •  Prior distribution:
       –  Distribution representing the relative likelihood of the possible states of nature

    •  Prior probabilities: P(Θ = θk)
       –  Probabilities (provided by the prior distribution) for the various states of nature
       –  Example: P(oil) = 0.25, P(dry) = 0.75

  • Decision Analysis-14

    Decision Making Criteria

    Can “optimize” the decision with respect to several criteria:
    •  Maximin payoff (minimax regret)
    •  Maximum likelihood
    •  Bayes’ decision rule (expected value)

  • Decision Analysis-15

    Maximin Payoff Criterion

    •  For each action, find the minimum payoff over all states of nature
    •  Then choose the action with the maximum of these minimum payoffs
    •  Minimax regret is similar to maximin payoff
    •  This criterion is very conservative: choose the action with the best payoff in the worst state of nature

                        State of Nature
    Action              Oil        Dry        Min Payoff
    Drill for oil       700        -100
    Sell the land       90         90

  • Decision Analysis-16

    Maximin Payoff Criterion

    •  For each action, find the minimum payoff over all states of nature
    •  Then choose the action with the maximum of these minimum payoffs
    •  Minimax regret is similar to maximin payoff
    •  This criterion is very conservative: choose the action with the best payoff in the worst state of nature

                        State of Nature
    Action              Oil        Dry        Min Payoff
    Drill for oil       700        -100       -100
    Sell the land       90         90         90
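    To make the criterion concrete, here is a minimal Python sketch (not part of the original slides; the dictionary layout and names are my own) that applies maximin to the payoff table above.

```python
# Illustrative sketch (not from the slides): apply the maximin payoff
# criterion to the Goferbroke table above (payoffs in $1000s).

payoffs = {
    "Drill for oil": {"Oil": 700, "Dry": -100},
    "Sell the land": {"Oil": 90, "Dry": 90},
}

# Worst-case payoff of each action over all states of nature
min_payoff = {action: min(row.values()) for action, row in payoffs.items()}

# Choose the action whose worst case is best
maximin_action = max(min_payoff, key=min_payoff.get)

print(min_payoff)       # {'Drill for oil': -100, 'Sell the land': 90}
print(maximin_action)   # Sell the land
```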

  • Decision Analysis-17

    Maximum Likelihood Criterion

    •  Identify the most likely state of nature
    •  Then choose the action with the maximum payoff under that state of nature
    •  Does not depend on the actual numerical likelihood values, but only on identifying the most likely one
    •  Completely ignores much relevant information; the criterion does not consider taking a chance on a low-probability, high-payoff event

                        State of Nature
    Action              Oil        Dry
    Drill for oil       700        -100
    Sell the land       90         90
    Prior probability   0.25       0.75

  • Decision Analysis-18

    Maximum Likelihood Criterion

    •  Identify the most likely state of nature
    •  Then choose the action with the maximum payoff under that state of nature
    •  Does not depend on the actual numerical likelihood values, but only on identifying the most likely one
    •  Completely ignores much relevant information; the criterion does not consider taking a chance on a low-probability, high-payoff event

                        State of Nature
    Action              Oil        Dry
    Drill for oil       700        -100
    Sell the land       90         90
    Prior probability   0.25       0.75

    Here the most likely state is Dry (0.75), so the criterion chooses Sell the land (payoff 90).
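    A similarly small illustrative sketch (again not from the deck, with my own variable names) applies the maximum likelihood criterion to the same table and priors.

```python
# Illustrative sketch (not from the slides): the maximum likelihood criterion
# on the same payoff table and prior probabilities.

payoffs = {
    "Drill for oil": {"Oil": 700, "Dry": -100},
    "Sell the land": {"Oil": 90, "Dry": 90},
}
prior = {"Oil": 0.25, "Dry": 0.75}

# Step 1: identify the most likely state of nature
most_likely = max(prior, key=prior.get)                        # 'Dry'

# Step 2: choose the action with the best payoff under that state only
best_action = max(payoffs, key=lambda a: payoffs[a][most_likely])

print(most_likely, best_action)   # Dry Sell the land
```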

  • Decision Analysis-19

    Bayes’ Decision Rule (Expected Value Criterion)

    •  For each action, find the expected payoff over all states of nature
    •  Then choose the action with the maximum of these expected payoffs
    •  Incorporates detailed information; however, the decision is sensitive to the actual numerical probability values, which are difficult to estimate

                        State of Nature
    Action              Oil        Dry        Expected Payoff
    Drill for oil       700        -100
    Sell the land       90         90
    Prior probability   0.25       0.75

  • Decision Analysis-20

    Bayes’ Decision Rule (Expected Value Criterion)

    •  For each action, find the expected payoff over all states of nature
    •  Then choose the action with the maximum of these expected payoffs
    •  Incorporates detailed information; however, the decision is sensitive to the actual numerical probability values, which are difficult to estimate

                        State of Nature
    Action              Oil        Dry        Expected Payoff
    Drill for oil       700        -100       700(0.25) - 100(0.75) = 100
    Sell the land       90         90         90(0.25) + 90(0.75) = 90
    Prior probability   0.25       0.75
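    For comparison, a short illustrative sketch of Bayes' decision rule on the same data; it reproduces the expected payoffs 100 and 90 shown in the table. The code and names are my own, not from the slides.

```python
# Illustrative sketch (not from the slides): Bayes' decision rule on the
# Goferbroke data; it reproduces the expected payoffs 100 and 90 above.

payoffs = {
    "Drill for oil": {"Oil": 700, "Dry": -100},
    "Sell the land": {"Oil": 90, "Dry": 90},
}
prior = {"Oil": 0.25, "Dry": 0.75}

def expected_payoff(action):
    # E[payoff | action] = sum over states of (payoff x prior probability)
    return sum(payoffs[action][state] * prior[state] for state in prior)

for action in payoffs:
    print(action, expected_payoff(action))   # 100.0 and 90.0

print(max(payoffs, key=expected_payoff))     # Drill for oil
```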

  • Decision Analysis-21

    Sensitivity Analysis with Bayes’ Decision Rule

    •  How is the decision affected by the probability values?
    •  What is the minimum probability of oil such that we choose to drill the land under Bayes’ decision rule?

                        State of Nature
    Action              Oil        Dry        Expected Payoff
    Drill for oil       700        -100
    Sell the land       90         90
    Prior probability   p          1-p

  • Decision Analysis-22

    Sensitivity Analysis with Bayes’ Decision Rule

    •  How is the decision affected by the probability values?
    •  What is the minimum probability of oil such that we choose to drill the land under Bayes’ decision rule?

                        State of Nature
    Action              Oil        Dry        Expected Payoff
    Drill for oil       700        -100       700p - 100(1-p) = 800p - 100
    Sell the land       90         90         90p + 90(1-p) = 90
    Prior probability   p          1-p

    When 800p - 100 = 90, i.e., p = 190/800 = 0.2375, the two actions have equal expected payoff; for p > 0.2375, drilling has the higher expected payoff.
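    A small illustrative sketch (my own code, not from the deck) sweeps p and confirms the breakeven value 0.2375 computed above.

```python
# Illustrative sketch (not from the slides): sensitivity of the Bayes'
# decision to the prior probability of oil, p.

def ev_drill(p):
    return 700 * p - 100 * (1 - p)    # = 800p - 100

def ev_sell(p):
    return 90 * p + 90 * (1 - p)      # = 90 for every p

breakeven = 190 / 800                 # solve 800p - 100 = 90
print(breakeven)                      # 0.2375

for p in (0.15, 0.25, 0.35):
    choice = "Drill" if ev_drill(p) > ev_sell(p) else "Sell"
    print(p, round(ev_drill(p), 1), ev_sell(p), choice)
```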

  • Decision Analysis-23

    Expected Payoff as a Function of Probability p

    [Plot of the expected payoff of each action as a function of p; p is likely to be in the range [0.15, 0.35]]

  • Decision Analysis-24

    Decision Making with Experimentation

  • Decision Analysis-25

    Goferbroke Example (cont’d)

    •  Option available to conduct a detailed seismic survey to obtain a better estimate of the probability of oil
    •  Costs $30k
    •  Possible findings:
       –  Unfavorable seismic soundings (USS): oil is fairly unlikely
       –  Favorable seismic soundings (FSS): oil is fairly likely
    •  Which policy is optimal?

                        State of Nature
    Action              Oil        Dry
    Drill for oil       700        -100
    Sell the land       90         90
    Prior probability   0.25       0.75

  • Decision Analysis-26

    Posterior Probabilities

    •  Perform experiments to get better information and improve estimates for the probabilities of states of nature. These improved estimates are called posterior probabilities

    •  Experimental Outcomes or Findings: {x1, x2, …} Example: USS or FSS

    •  Cost of experiment: Δ Example: $30k

    •  Posterior Distribution: P(Θ = θk | X = xj) Example: P(oil|FSS), P(dry|FSS), P(oil|USS), P(dry|USS)

  • Decision Analysis-27

    Posterior Probabilities

    •  Typically, we get P(experimental finding | state of nature), P(X = xj| Θ = θk ), from historical data

    •  However, we want P(state of nature | experimental finding), P(Θ = θk | X = xj)

    •  We can calculate the posterior probabilities from the prior probabilities and the historical data using Bayes’ Theorem

  • Decision Analysis-28

    Goferbroke Example (cont’d)

    •  Based on past data and experience, we are given:

       If there is oil, then
       –  the probability that the seismic survey finding is USS = 0.4 = P(USS | oil)
       –  the probability that the seismic survey finding is FSS = 0.6 = P(FSS | oil)

       If there is no oil, then
       –  the probability that the seismic survey finding is USS = 0.8 = P(USS | dry)
       –  the probability that the seismic survey finding is FSS = 0.2 = P(FSS | dry)

  • Decision Analysis-29

    Bayes’ Theorem

    •  Calculate posterior probabilities using Bayes’ theorem: given P(X = xj | Θ = θk), find P(Θ = θk | X = xj)

       P(\Theta = \theta_k \mid X = x_j) = \frac{P(\Theta = \theta_k)\, P(X = x_j \mid \Theta = \theta_k)}{\sum_i P(\Theta = \theta_i)\, P(X = x_j \mid \Theta = \theta_i)}

  • Decision Analysis-30

    Review of Conditional Probabilities

    •  Consider two events, A and B
    •  The conditional probability of A given B is:

       P(A \mid B) = \frac{P(A \cap B)}{P(B)}

    •  There is also the law of total probability. Suppose the sample space is partitioned into pairwise disjoint events {A1, …, Am}. Then

       P(B) = \sum_{i=1}^{m} P(B \cap A_i) = \sum_{i=1}^{m} P(B \mid A_i)\, P(A_i)

  • Decision Analysis-31

    Conditional Probabilities (continued)

    •  We want to switch the conditional probabilities around
    •  We have P(B | A) and we want P(A | B)

       P(A \mid B) = \frac{P(A \cap B)}{P(B)}

       P(A \mid B)\, P(B) = P(A \cap B) = P(B \mid A)\, P(A)

       \Rightarrow\; P(A \mid B) = \frac{P(B \mid A)\, P(A)}{P(B)}

    •  What is P(B)?

       P(B) = P(B \mid A_1)\, P(A_1) + P(B \mid A_2)\, P(A_2) + \cdots + P(B \mid A_m)\, P(A_m)

  • Decision Analysis-32

    Example 1 of Conditional Probabilities

    •  The king comes from a family of 2 children. What is the probability that the other child is his sister?

  • Decision Analysis-33

    Example 2 of Conditional Probabilities

    •  52% of the students at a certain college are females. 5% of the students in this college are majoring in computer science. 2% of the students are women majoring in computer science. If a student is selected at random, find the conditional probability that

       1)  this student is female, given that the student is majoring in computer science
       2)  this student is majoring in computer science, given that the student is female
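    A quick illustrative sketch of the arithmetic (the slide leaves the answers for class; the variable names are mine), applying P(A | B) = P(A ∩ B) / P(B) with the percentages above.

```python
# Illustrative sketch of the conditional-probability arithmetic above.

p_female = 0.52
p_cs = 0.05
p_female_and_cs = 0.02

print(p_female_and_cs / p_cs)      # 1) P(female | CS major) = 0.4
print(p_female_and_cs / p_female)  # 2) P(CS major | female) ~= 0.0385
```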

  • Decision Analysis-34

    Example 1 of Bayes’ Theorem

    •  Suppose that an insurance company classifies people into one of three classes – good risks, average risks, and bad risks. Their records indicate that the probabilities that good, average, and bad risk persons will be involved in an accident over a 1-year span are, respectively, 0.05, 0.15, and 0.30. If 20% of the population are “good risks”, 50% are “average risks”, and 30% are “bad risks”, what proportion of people have accidents in a fixed year? If policy holder A had no accidents in 1987, what is the probability that he or she is a good risk?

  • Decision Analysis-35

    Example 2 of Bayes’ Theorem

    •  Suppose that there was a cancer diagnostic test that was 95% accurate both on those that do and those that do not have the disease. If 0.4% of the population have a cancer, compute the probability that a tested person has cancer, given that his or her test result indicates so.
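    A short illustrative sketch of the Bayes' theorem arithmetic for this example; it assumes "95% accurate" means both the true positive and true negative rates are 0.95, which is how the slide's phrasing is usually read.

```python
# Illustrative sketch of the diagnostic-test arithmetic (assumes sensitivity
# and specificity are both 0.95, as suggested by "95% accurate").

p_cancer = 0.004
p_pos_given_cancer = 0.95            # sensitivity
p_pos_given_no_cancer = 0.05         # 1 - specificity

# Law of total probability: overall chance of a positive test
p_pos = p_pos_given_cancer * p_cancer + p_pos_given_no_cancer * (1 - p_cancer)

# Bayes' theorem: P(cancer | positive test)
print(p_pos_given_cancer * p_cancer / p_pos)   # about 0.071
```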

  • Decision Analysis-36

    Goferbroke Example (cont’d)

    •  We have:
       P(USS | oil) = 0.4    P(FSS | oil) = 0.6    P(oil) = 0.25
       P(USS | dry) = 0.8    P(FSS | dry) = 0.2    P(dry) = 0.75

    •  P(oil | USS) =

    •  P(oil | FSS) =

    •  P(dry | USS) =

    •  P(dry | FSS) =

    Do calculations on the board in class

  • Decision Analysis-37

    Goferbroke Example (cont’d)

    •  We have:
       P(USS | oil) = 0.4    P(FSS | oil) = 0.6    P(oil) = 0.25
       P(USS | dry) = 0.8    P(FSS | dry) = 0.2    P(dry) = 0.75

    •  P(oil | USS) = 1/7

    •  P(oil | FSS) = 1/2

    •  P(dry | USS) = 6/7

    •  P(dry | FSS) = 1/2
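    The posterior values above can be reproduced with a small Bayes' theorem helper; this sketch is illustrative only, and the function and dictionary names are my own.

```python
# Illustrative sketch: Bayes' theorem applied to the Goferbroke priors and
# survey likelihoods, reproducing the posterior probabilities above.

prior = {"oil": 0.25, "dry": 0.75}
likelihood = {                        # P(finding | state of nature)
    "oil": {"USS": 0.4, "FSS": 0.6},
    "dry": {"USS": 0.8, "FSS": 0.2},
}

def posterior(state, finding):
    # P(state | finding) = P(state) P(finding | state) / sum_i P(state_i) P(finding | state_i)
    numerator = prior[state] * likelihood[state][finding]
    denominator = sum(prior[s] * likelihood[s][finding] for s in prior)
    return numerator / denominator

for finding in ("USS", "FSS"):
    for state in ("oil", "dry"):
        print(f"P({state} | {finding}) = {posterior(state, finding):.4f}")
# P(oil | USS) = 0.1429 (1/7), P(dry | USS) = 0.8571 (6/7)
# P(oil | FSS) = 0.5,          P(dry | FSS) = 0.5
```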

  • Decision Analysis-38

    Probability Tree

    Draw on the board in class

  • Decision Analysis-39

    Probability Tree

  • Decision Analysis-40

    Goferbroke Example (cont’d)

    Optimal policies with experimentation (cost of 30)

    •  If the finding is USS:

                                  State of Nature
       Action                     Oil          Dry          Expected Payoff
       Drill for oil              700 - 30     -100 - 30
       Sell the land              90 - 30      90 - 30
       Posterior probability

    •  If the finding is FSS:

                                  State of Nature
       Action                     Oil          Dry          Expected Payoff
       Drill for oil              700 - 30     -100 - 30
       Sell the land              90 - 30      90 - 30
       Posterior probability

  • Decision Analysis-41

    Goferbroke Example (cont’d)

    Optimal policies with experimentation (cost of 30)

    •  If the finding is USS:

                                  State of Nature
       Action                     Oil          Dry          Expected Payoff
       Drill for oil              700 - 30     -100 - 30    -15.7
       Sell the land              90 - 30      90 - 30      60
       Posterior probability      1/7          6/7

    •  If the finding is FSS:

                                  State of Nature
       Action                     Oil          Dry          Expected Payoff
       Drill for oil              700 - 30     -100 - 30    270
       Sell the land              90 - 30      90 - 30      60
       Posterior probability      1/2          1/2
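    The expected payoffs in these two tables can be reproduced with a few lines of illustrative Python (my own encoding of the tables, not course code).

```python
# Illustrative sketch: expected payoffs for each survey finding, net of the
# $30k survey cost, using the posterior probabilities above.

payoffs = {"Drill for oil": {"oil": 700, "dry": -100},
           "Sell the land": {"oil": 90, "dry": 90}}
survey_cost = 30
posterior = {"USS": {"oil": 1/7, "dry": 6/7},
             "FSS": {"oil": 1/2, "dry": 1/2}}

for finding, probs in posterior.items():
    for action, row in payoffs.items():
        # expected payoff including the cost of experimentation
        ev = sum((row[state] - survey_cost) * probs[state] for state in probs)
        print(finding, action, round(ev, 1))
# USS: Drill -15.7, Sell 60.0;  FSS: Drill 270.0, Sell 60.0
```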

  • Decision Analysis-42

    Optimal Policy Table

    Experimental    Optimal           Expected payoff excluding       Expected payoff including
    finding         decision          the cost of experimentation     the cost of experimentation
    USS             Sell the land     90                              60
    FSS             Drill for oil     300                             270


  • Decision Analysis-43

    Decision Trees

  • Decision Analysis-44

    Decision Tree

    •  A tool to display a decision problem and the relevant computations
    •  Nodes on a decision tree are called forks
    •  Arcs on a decision tree are called branches
    •  Decision forks are represented by a square: a decision needs to be made
    •  Chance forks (or random event forks) are represented by a circle: a random event occurs
    •  Payoff values and probabilities are included on the tree
    •  The optimal expected payoff is determined by both decisions and random events

  • Decision Analysis-45

    Goferbroke Example (cont’d) Decision Tree


  • Decision Analysis-46

    Goferbroke Example (cont’d) Decision Tree with Payoffs and Probabilities


  • Decision Analysis-47

    Analysis Using Decision Trees

    1.  Start at the right side of the tree and move left one column at a time. For each column, if it is a chance fork, go to (2); if it is a decision fork, go to (3).

    2.  At each chance fork, calculate its expected value. Record this value in bold next to the fork. This value is also the expected value for the branch leading into that fork.

    3.  At each decision fork, compare the expected values and choose the alternative on the branch with the best value. Record the choice by putting slash marks through each rejected branch.

    •  Comments:
       –  This is a backward induction procedure.
       –  For any decision tree, this procedure always leads to an optimal solution.
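    As a rough illustration of this backward induction procedure, the sketch below rolls back a nested-tuple encoding of the Goferbroke tree, including the survey option (payoffs in $1000s, with the $30k survey cost already netted out of the end payoffs on the survey branch). The tree encoding and function names are my own; the payoffs and probabilities come from the earlier slides.

```python
# Illustrative sketch of backward induction (rollback) on the Goferbroke tree.

def rollback(node):
    """Return (expected payoff, chosen branch or None) for a tree node."""
    kind = node[0]
    if kind == "payoff":                         # terminal branch
        return node[1], None
    if kind == "chance":                         # step 2: expected value
        return sum(p * rollback(child)[0] for p, child in node[1]), None
    # decision fork                              # step 3: keep the best branch
    values = {name: rollback(child)[0] for name, child in node[1].items()}
    best = max(values, key=values.get)
    return values[best], best

def drill_outcome(p_oil, oil_payoff, dry_payoff):
    return ("chance", [(p_oil, ("payoff", oil_payoff)),
                       (1 - p_oil, ("payoff", dry_payoff))])

# P(USS) = 0.4(0.25) + 0.8(0.75) = 0.7 and P(FSS) = 0.3 by total probability
tree = ("decision", {
    "No survey": ("decision", {"Drill": drill_outcome(0.25, 700, -100),
                               "Sell": ("payoff", 90)}),
    "Do survey": ("chance", [
        (0.7, ("decision", {"Drill": drill_outcome(1/7, 670, -130),   # USS
                            "Sell": ("payoff", 60)})),
        (0.3, ("decision", {"Drill": drill_outcome(1/2, 670, -130),   # FSS
                            "Sell": ("payoff", 60)})),
    ]),
})

value, first_choice = rollback(tree)
print(first_choice, round(value, 1))   # Do survey 123.0
```

    With these numbers the rollback chooses to conduct the survey (expected payoff 123), and the per-finding decisions agree with the optimal policy table: sell after USS, drill after FSS.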

  • Decision Analysis-48

    Goferbroke Example (cont’d) Decision Tree with Analysis
