
Page 1: Chapter 13 Decision Making Under Uncertainty to accompany Operations Research: Applications and Algorithms 4th edition by Wayne L. Winston Copyright (c)

Chapter 13

Decision Making Under Uncertainty

to accompany

Operations Research: Applications and Algorithms

4th edition

by Wayne L. Winston

Copyright (c) 2004 Brooks/Cole, a division of Thomson Learning, Inc.

Page 2

Description

We have all had to make important decisions where we were uncertain about factors that were relevant to the decisions.

In this chapter, we study situations in which decisions are made in an uncertain environment.

The chapter presents the basic theory of decision making under uncertainty: the widely used Von Neumann-Morgenstern utility model and the use of decision trees for making decisions at different points in time. We close by looking at decision making with multiple objectives.

Page 3

13.1 Decision Criteria

Dominated Actions Definition: An action ai is dominated by an action ai′ if

for all sj ∈ S, rij ≤ ri′j, and for some state sj′, rij′ < ri′j′.

The Maximin Criterion For each action, determine the worst outcome

(smallest reward). The maximin criterion chooses the action with the “best” worst outcome.

Page 4

Definition: The maximin criterion chooses the action ai with the largest value of min over sj ∈ S of rij.

The Maximax Criterion For each action, determine the best outcome (largest

reward). The maximax criterion chooses the action with the “best” best outcome.

Definition: The maximax criterion chooses the action ai with the largest value of max over sj ∈ S of rij.

Page 5

Minimax Regret The minimax regret criterion (developed by L. J. Savage)

uses the concept of opportunity cost to arrive at a decision.

For each possible state of the world sj, find an action i* (j) that maximizes rij.

i*(j) is the best possible action to choose if the state of the world is actually sj.

For any action ai and state sj, the opportunity loss or regret for ai in sj is ri*(j),j – rij. The minimax regret criterion then chooses the action whose maximum regret is smallest.

The Expected Value Criterion Chooses the action that yields the largest expected

reward.
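The four criteria above can be sketched in a few lines of Python. The 3×3 reward matrix and state probabilities below are invented for illustration (they are not one of the book's examples); note that the criteria can disagree about which action is best.

```python
# Hypothetical 3-action x 3-state reward matrix (rows = actions, cols = states);
# the numbers are illustrative, not from the book's examples.
rewards = [
    [30, 10, -10],   # a1
    [20, 15,   5],   # a2
    [50, -20, -30],  # a3
]
probs = [0.3, 0.5, 0.2]  # assumed state probabilities for the expected value criterion

# Maximin: choose the action with the best worst-case reward.
maximin = max(range(3), key=lambda i: min(rewards[i]))

# Maximax: choose the action with the best best-case reward.
maximax = max(range(3), key=lambda i: max(rewards[i]))

# Minimax regret: regret is r_{i*(j),j} - r_{ij}; pick the action whose
# largest regret is smallest.
best_per_state = [max(rewards[i][j] for i in range(3)) for j in range(3)]
regret = [[best_per_state[j] - rewards[i][j] for j in range(3)] for i in range(3)]
minimax_regret = min(range(3), key=lambda i: max(regret[i]))

# Expected value: choose the action with the largest probability-weighted reward.
ev = [sum(p * r for p, r in zip(probs, row)) for row in rewards]
best_ev = max(range(3), key=lambda i: ev[i])
```

Here maximin picks a2 (its worst outcome, 5, is the best worst outcome), maximax picks a3, minimax regret picks a1, and the expected value criterion picks a2.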

Page 6

13.2 Utility Theory

We now show how the Von Neumann-Morgenstern concept of a utility function can be used as an aid to decision making under uncertainty.

Consider a situation in which a person will receive, for i = 1,2,…,n, a reward ri with probability pi.

This is denoted as the lottery (p1, r1; p2, r2; …; pn, rn)

Page 7

A lottery is often represented by a tree in which each branch stands for a possible outcome of the lottery

The number on each branch represents the probability that the outcome will occur

Our goal is to determine a method that a person can use to choose between lotteries.

Page 8

Suppose he or she must choose to play L1 or L2 but not both. We write L1pL2 if the person prefers L1.

We write L1iL2 if he or she is indifferent between choosing L1 and L2.

If L1iL2, we say that L1 and L2 are equivalent lotteries.

More formally, a lottery L is a compound lottery if for some i, there is a probability pi that the decision maker’s reward is to play another lottery L′.

Page 9

If a lottery is not a compound lottery, it is a simple lottery.

The utility of the reward ri, written u(ri), is the number qi such that the decision maker is indifferent between the following two lotteries: (1) a lottery yielding ri with certainty, and (2) a lottery yielding the most favorable outcome with probability qi and the least favorable outcome with probability 1 - qi.

Page 10

The specification of u(ri) for all rewards ri is called the decision maker’s utility function.

For a given lottery L = (p1, r1; p2, r2; …; pn, rn), define the expected utility of the lottery L, written E(U for L), by

E(U for L) = Σ (i = 1 to n) pi u(ri)
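The expected utility formula is easy to sketch in code. The lottery and the square-root utility function below are assumptions for illustration (not from the text); square root is a convenient risk-averse utility over nonnegative rewards.

```python
import math

# Lottery (p1, r1; ...; pn, rn) as (probability, reward) pairs; numbers invented.
lottery = [(0.5, 0), (0.5, 10000)]
u = lambda r: math.sqrt(r)   # assumed risk-averse utility

expected_utility = sum(p * u(r) for p, r in lottery)  # E(U for L) = sum pi*u(ri)
expected_value = sum(p * r for p, r in lottery)
```

The lottery's expected utility is 50, the same as the utility of a sure 2500 (u(2500) = 50), even though its expected value is 5000; this gap foreshadows the risk premium defined later in the chapter.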

Page 11

Von Neumann-Morgenstern Axioms

Axiom 1: Complete Ordering Axiom For any two rewards r1 and r2, one of the following

must be true: the decision maker (1) prefers r1 to r2, (2) prefers r2 to r1, or (3) is indifferent between r1 and r2. Also, if the person prefers r1 to r2 and r2 to r3, then he or she must prefer r1 to r3 (transitivity of preferences).

Axiom 2: Continuity Axiom If the decision maker prefers r1 to r2 and r2 to r3, then

for some c (0 < c < 1), L1 i L2, where L1 is the lottery yielding r2 with certainty, and L2 is the lottery yielding r1 with probability c and r3 with probability 1 - c.

Page 12

Axiom 3: Independence Axiom Suppose the decision maker is indifferent between

rewards r1 and r2. Let r3 be any other reward. Then for any c (0 < c < 1), L1iL2, where

L1 and L2 differ only in that L1 has a probability c of yielding a reward r1, whereas L2 has the probability c of yielding a reward r2.

Here L1 yields r1 with probability c and r3 with probability 1 - c, while L2 yields r2 with probability c and r3 with probability 1 - c.

Page 13

Thus the Independence Axiom implies that the decision maker views a chance c at r1 and a chance c at r2 as being of identical value, and this view holds for all values of c and r3.

Axiom 4: Unequal Probability Axiom Suppose the decision maker prefers reward r1 to

reward r2. If two lotteries have only r1 and r2 as their possible outcomes, he or she will prefer the lottery with the higher probability of obtaining r1.

Axiom 5: Compound Lottery Axiom Suppose that when all possible outcomes are

considered, a compound lottery L yields a probability pi of receiving reward ri. Then the decision maker is indifferent between L and the simple lottery (p1, r1; p2, r2; …; pn, rn).

Page 14

Why We May Assume u (Worst Outcome)=0 and u (Best Outcome)=1

Up to now, we have assumed that u (least favorable outcome)=0 and u (most favorable outcome)=1.

Even if a decision maker’s utility function does not have these values, we can transform his or her utility function into an equivalent utility function having u (least favorable outcome)=0 and u (most favorable outcome)=1.

Lemma 1 – Given a utility function u(x), define for any a>0 and any b the function v(x) = au(x)+b. Given any two lotteries L1 and L2 it will be the case that

Page 15

1. A decision maker using u(x) as his or her utility function will have L1pL2 if and only if a decision maker using v(x) as his or her utility function will have L1pL2.

2. A decision maker using u(x) as his or her utility function will have L1iL2 if and only if a decision maker using v(x) as his or her utility function will have L1iL2.

Page 16

Using Lemma 1, we can show that without changing how an individual ranks lotteries, we can transform the decision maker’s utility function into one having u (least favorable outcome)=0 and u (most favorable outcome)=1.
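The normalization can be sketched numerically. The rewards and utility values below are invented for illustration; the point is that v(x) = a·u(x) + b with a > 0 sends the worst outcome to 0 and the best to 1 while leaving lottery rankings unchanged.

```python
# Invented utilities over three rewards (not from the book's examples).
u = {-10000: 2.0, 0: 4.0, 30000: 8.0}
worst, best = -10000, 30000

# Choose a > 0 and b so that v(worst) = 0 and v(best) = 1.
a = 1.0 / (u[best] - u[worst])
b = -a * u[worst]
v = {x: a * ux + b for x, ux in u.items()}

# Compare two lotteries under each utility function.
L1 = [(0.5, -10000), (0.5, 30000)]
L2 = [(1.0, 0)]
eu = lambda util, L: sum(p * util[r] for p, r in L)

# Lemma 1: the ranking is the same under u and under v.
same_ranking = (eu(u, L1) > eu(u, L2)) == (eu(v, L1) > eu(v, L2))
```

Here both utility functions rank L1 above L2, so `same_ranking` is True.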

Page 17

Estimating an Individual’s Utility Function

How might we estimate an individual's utility function?

We begin by assuming that the least favorable outcome (say - $10,000) has a utility of 0 and that the most favorable outcome (say, $30,000) has a utility of 1.

Next we define a number x1/2 having u(x1/2) = ½.

Eventually the utility function can be approximated by drawing a curve (smooth, we hope) joining the points.

Page 18

Unfortunately, if a decision maker’s preferences violate any of the preceding axioms (such as transitivity), this procedure may not yield a smooth curve.

If it does not yield a relatively smooth curve, more sophisticated procedures for assessing utility functions must be used.

Page 19

Relation Between an Individual’s Utility Function and His or Her Attitude Toward Risk

Definition: The certainty equivalent of a lottery L, written CE(L), is the number CE(L) such that the decision maker is indifferent between the lottery L and receiving a certain payoff of CE(L).

Definition: The risk premium of a lottery L, written RP(L), is given by RP(L) = EV(L)-CE(L), where EV(L) is the expected value of the lottery’s outcomes.
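The two definitions can be sketched for a concrete lottery. The lottery and the logarithmic utility u(x) = ln(x) below are assumptions for illustration (log utility is a classic risk-averse choice, not one mandated by the text).

```python
import math

# Invented lottery: 50-50 chance at 100 or 400, with assumed utility u(x) = ln(x).
lottery = [(0.5, 100.0), (0.5, 400.0)]

eu = sum(p * math.log(x) for p, x in lottery)  # expected utility of L
ce = math.exp(eu)                              # CE(L): the x with u(x) = E(U for L)
ev = sum(p * x for p, x in lottery)            # EV(L)
rp = ev - ce                                   # RP(L) = EV(L) - CE(L)
```

The certainty equivalent comes out to 200 while the expected value is 250, so the risk premium is 50: a positive premium, as expected for a risk-averse (concave) utility function.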

Page 20

Let a nondegenerate lottery be any lottery in which more than one outcome can occur.

With respect to attitude toward risk, a decision maker is

1. Risk-averse if and only if for any nondegenerate lottery L, RP(L) > 0

2. Risk-neutral if and only if for any nondegenerate lottery L, RP(L) = 0

3. Risk-seeking if and only if for any nondegenerate lottery L, RP(L) < 0

Page 21

Definition: A function u(x) is said to be strictly concave (or strictly convex) if for any two points on the curve y=u(x), the line segment joining those two points lies entirely (with the exception of its endpoints) below (or above) the curve y= u(x).

In reality, many people exhibit both risk-seeking behavior and risk-averse behavior.

A person whose utility function contains both convex and concave segments may exhibit both risk-averse and risk-seeking behavior.

Page 22

Exponential Utility

One important class is called exponential utility and has been used in many financial investment analyses.

An exponential utility function has only one adjustable numerical parameter, and there are straightforward ways to discover the most appropriate value of this parameter for a particular individual or company.

An exponential utility function has the following form: U(x) = 1 - e^(-x/R)

Page 23

Here x is a monetary value (a payoff if positive, a cost if negative), U(x) is the utility of this value, and R>0 is an adjustable parameter called the risk tolerance.

To assess a person’s (or company’s) exponential utility function, we need only to assess the value of R.

It’s been shown that the risk tolerance is approximately equal to that dollar amount R such that the decision maker is indifferent between the following two options.

Page 24

Option 1 – Obtain no payoff at all.

Option 2 – Obtain a payoff of R dollars or a loss of R/2 dollars, depending on the flip of a fair coin.
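The indifference between the two options can be checked numerically. The sketch below assumes a risk tolerance of R = 1000 (an invented value) and verifies that the coin flip's expected exponential utility is close to U(0) = 0, the utility of Option 1.

```python
import math

# Assumed risk tolerance (illustrative value).
R = 1000.0
U = lambda x: 1.0 - math.exp(-x / R)   # exponential utility U(x) = 1 - e^(-x/R)

# Option 2: fair coin flip paying +R or -R/2.
eu_flip = 0.5 * U(R) + 0.5 * U(-R / 2)
# eu_flip is within about 0.01 of U(0) = 0, so the indifference is approximate,
# which is why R is only "approximately equal to" the indifference amount.
```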

A second tip for finding R is based on empirical evidence found by Ronald Howard, a prominent decision analyst.

He discovered tentative relationships between risk tolerance and several financial variables – net sales, net income, and equity.

Page 25

13.3 Flaws in Expected Utility Maximization: Prospect Theory and Framing Effects

The axioms underlying expected maximization of utility (EMU) seem reasonable, but in practice people’s decisions often deviate from the predictions of EMU.

Psychologists Kahneman and Tversky (1981) developed prospect theory and the idea of framing effects to try to explain why people deviate from the predictions of EMU.

Page 26

The shape of the π(p) function in the figure implies that individuals are more sensitive to changes in probability when the probability of an event is small (near 0) or large (near 1).

Page 27

Framing

Kahneman and Tversky’s idea of framing is based on the fact that people often set their utility functions from the standpoint of a frame or status quo from which they view the current situation.

Most people’s utility functions treat a loss of a given value as being more serious than a gain of an identical value.

Page 28

13.4 Decision Trees

Often, people must make a series of decisions at different points in time.

Then decision trees can often be used to determine optimal decisions.

A decision tree enables a decision maker to decompose a large complex decision problem into several smaller problems.

A decision fork represents a point in time when Colaco (the company in Example 3 in the book) has to make a decision.

Page 29

An event fork is drawn when outside forces determine which of several random events will occur.

Each branch of an event fork represents a possible outcome, and the number on each branch represents the probability that the event will occur.

A branch of a decision tree is a terminal branch if no forks emanate from the branch.

Page 30

Incorporating Risk Aversion into Decision Tree Analysis

In the Colaco example, the optimal strategy yields a .45 chance that the company will end up with a relatively small final asset position of $50,000.

On the other hand, the strategy of test marketing and acting optimally on the results of the test market study yields only a .09 chance that Colaco’s asset position will be below $100,000.

Thus if Colaco is a risk-averse decision maker, the strategy of immediately marketing nationally may not reflect the company’s preference.

Page 31

Expected Value of Sample Information

Decision trees can be used to measure the value of sample or test-market information.

We begin by determining Colaco’s expected final asset position if the company acts optimally and the test market study is costless.

We call this expected final asset position Colaco’s expected value with sample information (EVWSI).

Page 32

We next determine the largest expected final asset position that Colaco would obtain if the test market study were not available.

We call this the expected value with original information (EVWOI).

Now the expected value of the test market information, referred to as expected value of sample information (EVSI), is defined to be EVSI = EVWSI – EVWOI.

Page 33

Expected Value of Perfect Information

We can modify the analysis used to determine EVSI to find the value of perfect information.

By perfect information we mean that all uncertain events that can affect Colaco’s final asset position still occur with the given probabilities.

Expected value with perfect information (EVWPI) is found by drawing a decision tree in which the decision maker has perfect information about which state has occurred before making a decision.

Page 34

Then the expected value of perfect information (EVPI) is given by EVPI=EVWPI – EVWOI.
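The EVWOI/EVWPI/EVPI calculation can be sketched directly. The two-action, two-state numbers below are invented for illustration (they are not the book's Colaco figures); the structure of the computation is what matters.

```python
# Invented two-action, two-state problem (not the book's Colaco example).
probs = {"success": 0.55, "failure": 0.45}
rewards = {                      # rewards[action][state]
    "national":   {"success": 300, "failure": 50},
    "stay_local": {"success": 150, "failure": 150},
}

# EVWOI: commit to the single action with the best expected reward.
evwoi = max(sum(probs[s] * r[s] for s in probs) for r in rewards.values())

# EVWPI: learn the state first, then pick the best action for that state.
evwpi = sum(probs[s] * max(r[s] for r in rewards.values()) for s in probs)

evpi = evwpi - evwoi   # EVPI = EVWPI - EVWOI
```

With these numbers EVWOI = 187.5 (market nationally), EVWPI = 232.5, and EVPI = 45: the most the decision maker should pay for a perfect forecast of the state.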

Page 35

13.5 Bayes’ Rule and Decision Trees

We are also given estimates of the probabilities of each state of the world.

These are called prior probabilities.

In different states of the world, different decisions may be optimal.

It may be desirable to purchase information that gives the decision maker more foreknowledge about the state of the world.

This may enable the decision maker to make better decisions.

Page 36

Given knowledge of the outcome of the experiment, these probabilities give new values for the probability of each state of the world.

The probabilities p(si|oj) are called posterior probabilities.

In many situations, however, we may be given the prior probabilities p(si) for each state of the world, and instead of being given the posterior probabilities p(si|oj), we might be given the likelihoods.

Page 37

With the help of Bayes’ rule, we can use the prior probabilities and likelihoods to determine the needed posterior probabilities.

To begin the computation of the posterior probabilities, we need to determine the joint probabilities of each state of the world and experimental outcome.

We obtain the joint probabilities by using the definition of conditional probabilities.

Page 38

Next we compute the probability of each possible experimental outcome, p(LS) and p(LF).

Now Bayes’ rule can be applied to obtain the desired posterior probabilities.

p(NS | LS) = p(NS ∩ LS) / p(LS)

p(NF | LS) = p(NF ∩ LS) / p(LS)

p(NS | LF) = p(NS ∩ LF) / p(LF)

p(NF | LF) = p(NF ∩ LF) / p(LF)

Page 39

In summary, to find posterior probabilities, we go through the following three-step process:

Step 1 – Determine the joint probabilities of the form p(si∩oj) by multiplying the prior probability p(si) times the likelihood p(oj|si).

Step 2 – Determine the probabilities of each experimental outcome p(oj) by summing up all joint probabilities of the form p(sk∩oj).

Step 3 – Determine each posterior probability p(si|oj) by dividing the joint probability p(si∩oj) by the probability of the experimental outcome oj, p(oj).
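The three-step process is mechanical enough to sketch in code. The priors and likelihoods below are invented (the state labels NS/NF and outcome labels LS/LF echo the notation above but not the book's actual numbers).

```python
# Invented priors and likelihoods; states NS/NF, test outcomes LS/LF.
prior = {"NS": 0.4, "NF": 0.6}
likelihood = {                 # likelihood[state][outcome] = p(outcome | state)
    "NS": {"LS": 0.9, "LF": 0.1},
    "NF": {"LS": 0.3, "LF": 0.7},
}
outcomes = ["LS", "LF"]

# Step 1: joint probabilities p(s ∩ o) = p(s) * p(o | s).
joint = {(s, o): prior[s] * likelihood[s][o] for s in prior for o in outcomes}

# Step 2: outcome probabilities p(o) = sum over states of p(s ∩ o).
p_outcome = {o: sum(joint[(s, o)] for s in prior) for o in outcomes}

# Step 3: posteriors p(s | o) = p(s ∩ o) / p(o).
posterior = {(s, o): joint[(s, o)] / p_outcome[o] for s in prior for o in outcomes}
```

With these numbers p(LS) = 0.54 and p(NS | LS) = 0.36/0.54 = 2/3: a favorable test result raises the probability of NS from the prior 0.4 to about 0.67.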

Page 40

Using LINGO to Compute Posterior Probabilities

LINGO can be used to compute posterior probabilities.

An example program can be found in the book.

Page 41

13.6 Decision Making with Multiple Objectives

Suppose a woman believes that there are n attributes that will determine her decision.

Let xi(a) be the value of the ith attribute associated with an alternative a.

She associates a value v(x1(a), x2(a),…, xn(a)) with the alternative a.

The function v(x1, x2,…, xn) is the decision maker’s value function.

Alternatively, the decision maker can associate a cost c(x1(a), x2(a),…, xn(a)) with the alternative a.

Page 42

The function c(x1, x2,…, xn) is her cost function.

Definition: A value function v(x1, x2,…, xn) is an additive value function if there exist n functions v1(x1), v2(x2),…, vn(xn) satisfying

v(x1, x2,…, xn) = Σ (i = 1 to n) vi(xi)

Definition: A cost function c(x1, x2,…, xn) is an additive cost function if there exist n functions c1(x1), c2(x2),…, cn(xn) satisfying

c(x1, x2,…, xn) = Σ (i = 1 to n) ci(xi)
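An additive value function is just a sum of single-attribute value functions. The sketch below uses three invented single-attribute functions and an invented alternative (none of this is from the book) to show how an alternative's overall value is computed.

```python
# Invented single-attribute value functions for n = 3 attributes.
v_parts = [
    lambda x: x / 100.0,   # v1: attribute 1 measured on a 0-100 scale
    lambda x: x ** 2,      # v2: attribute 2 already in [0, 1]
    lambda x: 1.0 - x,     # v3: attribute 3 where less is better
]

def additive_value(xs):
    """v(x1, ..., xn) = v1(x1) + ... + vn(xn)."""
    return sum(v(x) for v, x in zip(v_parts, xs))

# Score an invented alternative with attribute levels (80, 0.5, 0.2).
score = additive_value([80.0, 0.5, 0.2])   # 0.8 + 0.25 + 0.8
```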

Page 43

Definition: An attribute (call it attribute 1) is preferentially independent (pi) of another attribute (attribute 2) if preferences for values of attribute 1 do not depend on the value of attribute 2.

Definition: If attribute 1 is pi of attribute 2, and attribute 2 is pi of attribute 1, then attribute 1 is mutually preferentially independent (mpi) of attribute 2.

Page 44

Definition: A set of attributes S is mutually preferentially independent (mpi) of a set of attributes S′ if (1) the values of the attributes in S′ do not affect preferences for the values of attributes in S, and (2) the values of attributes in S do not affect preferences for the values of attributes in S′.

Definition: A set of attributes 1,2,…,n is mutually preferentially independent (mpi) if for all subsets S of {1,2,…,n}, S is mpi of its complement.

Page 45

Theorem 1: If the set of attributes 1,2,…,n is mpi, the decision maker’s preferences can be represented by an additive value (or cost) function.

When more than one attribute affects a decision maker’s preferences, the person’s utility function is called a multiattribute utility function.

We restrict ourselves here to explaining how to assess and use multiattribute utility functions when only two attributes are operative.

Page 46

Properties of Multiattribute Utility Functions

Definition: Attribute 1 is utility independent (ui) of attribute 2 if preferences for lotteries involving different levels of attribute 1 do not depend on the level of attribute 2.

Definition: If attribute 1 is ui of attribute 2, and attribute 2 is ui of attribute 1, then attributes 1 and 2 are mutually utility independent (mui).

If attributes 1 and 2 are mui, it can be shown that the decision maker’s utility function u(x1, x2) must be of the following form:

u(x1, x2) = k1u1(x1) + k2u2(x2) + k3u1(x1)u2(x2)

Page 47

This equation is often called the multilinear utility function.

Theorem 2: Attributes 1 and 2 are mui if and only if the decision maker’s utility function u(x1, x2) is a multilinear function of the form

u(x1, x2) = k1u1(x1) + k2u2(x2) + k3u1(x1)u2(x2)

The determination of a decision maker's utility function u(x1, x2) can be further simplified if it exhibits additive independence.
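The multilinear form is easy to sketch. The single-attribute utilities and constants below are invented for illustration; the constants are chosen so that u is 0 at the worst levels and 1 at the best levels of both attributes.

```python
# Invented single-attribute utilities, each scaled so u_i(worst)=0, u_i(best)=1.
u1 = lambda x1: x1           # attribute 1 assumed already scaled to [0, 1]
u2 = lambda x2: x2 ** 0.5    # assumed risk-averse utility for attribute 2 in [0, 1]

# Invented constants with k1 + k2 + k3 = 1, so u(best, best) = 1.
k1, k2, k3 = 0.4, 0.5, 0.1

def u(x1, x2):
    """Multilinear form: u(x1,x2) = k1*u1(x1) + k2*u2(x2) + k3*u1(x1)*u2(x2)."""
    return k1 * u1(x1) + k2 * u2(x2) + k3 * u1(x1) * u2(x2)

# Under additive independence, k3 = 0 and the form reduces to k1*u1 + k2*u2.
```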

Page 48

Definition: A decision maker’s utility function exhibits additive independence if the decision maker is indifferent between a lottery yielding (x1(best), x2(worst)) with probability ½ and (x1(worst), x2(best)) with probability ½, and a lottery yielding (x1(best), x2(best)) with probability ½ and (x1(worst), x2(worst)) with probability ½.

Page 49

Essentially, additive independence of attributes 1 and 2 implies that preferences over lotteries involving only attribute 1 depend only on the marginal distribution for possible values of attribute 1 and do not depend on the joint distribution of the possible values of attributes 1 and 2.

Page 50

Assessment of Multiattribute Utility Functions

If attributes 1 and 2 are mui, how can we determine u1(x1), u2(x2), k1, k2, and k3?

The procedure to be used in assessing a multiattribute utility function may be summarized as follows: Step 1: Check whether attributes 1 and 2 are mui. If

they are, go to Step 2. If the attributes are not mui, the assessment of the multiattribute utility function is beyond the scope of our discussion.

Step 2: Check for additive independence.

Step 3: Assess u1(x1) and u2(x2).

Page 51

Step 4: Determine k1, k2, and (if there is no additive independence) k3.

Step 5: Check to see whether the assessed utility function is really consistent with the decision maker’s preferences. To do this, set up several lotteries and use the expected utility of each to rank the lotteries from most to least favorable.

Page 52

Use of Multiattribute Utility Functions

To illustrate how a multiattribute utility function might be used, suppose that a company must determine whether to mount a small or large advertising campaign during the coming year.

To find the best option the company must determine which of the lotteries has a larger expected utility.

Page 53

13.7 The Analytic Hierarchy Process

We have discussed situations in which a decision maker chooses between alternatives on the basis of how well the alternatives meet various objectives.

When multiple objectives are important to a decision maker, it may be difficult to choose between alternatives.

Thomas Saaty’s analytic hierarchy process (AHP) provides a powerful tool that can be used to make decisions in situations involving multiple objectives.

Page 54

Obtaining Weights for Each Objective

Suppose there are n objectives.

We begin by writing down an n x n matrix (known as the pairwise comparison matrix) A.

The entry in row i and column j of A indicates how much more important objective i is than objective j.

Page 55

Checking for Consistency

We can now use the following four-step procedure to check the consistency of the decision maker’s comparisons. (w denotes our estimate of the decision maker’s weights.)

Step 1: Compute Aw^T.

Step 2: Compute

(1/n) Σ (i = 1 to n) [ith entry in Aw^T] / [ith entry in w^T]

Step 3: Compute the consistency index (CI) as follows:

CI = (Step 2 result - n) / (n - 1)

Step 4: Compare CI to the random index (RI) for the appropriate value of n.
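The four steps above can be sketched for a small example. The 3×3 pairwise comparison matrix below is invented, and the weights are estimated by the common column-normalization/row-average approximation to the principal eigenvector (one standard way to estimate w, though not the only one).

```python
# Invented 3-objective pairwise comparison matrix (reciprocal, nearly consistent).
n = 3
A = [
    [1.0, 3.0, 5.0],
    [1/3, 1.0, 2.0],
    [1/5, 1/2, 1.0],
]

# Estimate weights w: normalize each column, then average across each row.
col_sums = [sum(A[i][j] for i in range(n)) for j in range(n)]
w = [sum(A[i][j] / col_sums[j] for j in range(n)) / n for i in range(n)]

# Step 1: compute A w^T.
Aw = [sum(A[i][j] * w[j] for j in range(n)) for i in range(n)]

# Step 2: average the ratios (Aw)_i / w_i.
avg = sum(Aw[i] / w[i] for i in range(n)) / n

# Step 3: CI = (Step 2 result - n) / (n - 1).
CI = (avg - n) / (n - 1)
# Step 4 would compare CI with the tabulated random index RI for n = 3.
```

For this nearly consistent matrix the ratios in Step 2 all come out close to n = 3, so CI is close to 0.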

Page 56

For a perfectly consistent decision maker, the ith entry in Aw^T = n × (ith entry of w^T).

If CI is sufficiently small, the decision maker’s comparisons are probably consistent enough to give useful estimates of the weights for his or her objective function.

Page 57

Finding the Score of an Alternative for an Objective

We now determine how well each job “satisfies” or “scores” on each objective.

To determine these scores, we construct for each objective a pairwise comparison matrix in which the rows and columns are possible decisions.

As described earlier, we can now “synthesize” the objective weights with the scores of each job on each objective to obtain an overall score for each alternative.
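The synthesis step is a weighted sum: each alternative's overall score is its per-objective scores weighted by the objective weights. All numbers below are invented for illustration.

```python
# Invented objective weights (sum to 1) and per-objective scores for two jobs.
weights = [0.6, 0.3, 0.1]
scores = {                     # scores[alternative] = [score on each objective]
    "job_A": [0.5, 0.3, 0.8],
    "job_B": [0.3, 0.6, 0.1],
}

# Overall score: weight-weighted sum of each alternative's objective scores.
overall = {
    alt: sum(wt * s for wt, s in zip(weights, sc)) for alt, sc in scores.items()
}
```

Here job_A scores 0.47 and job_B scores 0.37, so job_A would be preferred.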

Page 58

AHP has been applied by decision makers in countless areas, including accounting, finance, marketing, energy resource planning, microcomputer selection, sociology, architecture, and political science.

Page 59

Implementing AHP on a Spreadsheet

Figure 5 in the book illustrates how easy it is to implement AHP on a spreadsheet.

The work is completed in the AHP.xls file.

To compute the consistency index for a pairwise comparison matrix for objectives, the Excel matrix multiplication function MMULT is used, computing Aw^T.

The MMULT function is easily used to multiply matrices.