
    Introduction to Decision Making Theory

    YASUDA, Yosuke

    Osaka University, Department of Economics

    [email protected]

    September, 2015

    Last updated: September 22


    Readings

    Main Textbook

Rubinstein, A. (2012). Lecture Notes in Microeconomic Theory: The Economic Agent, 2nd ed. Lectures 1-3 and 7 are closely related. Can be downloaded FOR FREE from the author's website: http://gametheory.tau.ac.il/arielDocs/

Other Related Books

Binmore, K. (2008). Rational Decisions. Chapters 1 and 3 are related.

Gilboa, I. (2009). Theory of Decision under Uncertainty. Advanced.

Kreps, D. (1988). Notes on the Theory of Choice. Classic and popular.

Mas-Colell, A., Whinston, M. D., and Green, J. R. (1995). Microeconomic Theory. Standard introduction for economics students.

My lecture website at Osaka U. (for extensive references):

    https://sites.google.com/site/yosukeyasuda2/home/lecture/decision14


    Lecture Outline

    1st. Decision Making over Certain Outcomes (Consequences)

    Preference, Choice, and Utility

    What Is Rationality?

When Does an Agent Look As If Rational?

    2nd. Decision Making over Uncertain Outcomes

    Expected Value and Expected Utility (EU)

    Axiomatic Approach to EU Theory

    When Does EU Look Unrealistic?


    Preferences

The notion of preferences plays a central role in economic theory; it specifies the form of consistency or inconsistency in a person's choices. A preference is the mental attitude of an individual toward alternatives, independent of any actual choice.

Preferences require only that the individual make binary comparisons: the individual examines only two alternatives in the choice set X at a time and makes a decision regarding those two. The description of preferences should provide an answer to the question of how the agent compares the two alternatives.

Considering questionnaires P and R, we formulate the consistency requirements necessary for the responses to qualify as preferences.


    Questionnaire P

P(x, y): For all distinct x and y in the set X, how do you compare x and y? Tick one and only one of the following three options.

1. I prefer x to y, or x is strictly preferred to y: $x \succ y$
2. I prefer y to x, or y is strictly preferred to x: $y \succ x$
3. I am indifferent, or x is indifferent to y: $x \sim y$

A legal answer to the questionnaire P can be formulated as a function f which assigns to any pair (x, y) of distinct elements in X exactly one of the three values $x \succ y$, $y \succ x$, or $x \sim y$:

$f(x, y) \in \{x \succ y,\; y \succ x,\; x \sim y\}.$

The lexicographic preference relation on $X = [0,1] \times [0,1]$, defined by $(a_1, a_2) \succsim (b_1, b_2)$ if $a_1 > b_1$ or both $a_1 = b_1$ and $a_2 \geq b_2$, does not have a utility representation.
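As an illustration (not part of the slides), here is a minimal Python sketch of the lexicographic comparison just stated; the function name lex_weakly_prefers is hypothetical. The binary comparison itself is easy to compute even though no utility function can represent the ordering.

```python
def lex_weakly_prefers(a, b):
    """Lexicographic weak preference on [0,1] x [0,1]:
    (a1, a2) is weakly preferred to (b1, b2) iff
    a1 > b1, or a1 == b1 and a2 >= b2."""
    a1, a2 = a
    b1, b2 = b
    return a1 > b1 or (a1 == b1 and a2 >= b2)

# The first coordinate always dominates:
print(lex_weakly_prefers((0.5, 0.0), (0.4, 1.0)))  # True
# Ties in the first coordinate are broken by the second:
print(lex_weakly_prefers((0.5, 0.3), (0.5, 0.7)))  # False
```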


    Choice Function

We consider an agent's behavior as a hypothetical response to the following questionnaire, one for each $A \subseteq X$:

Q(A): Assume you must choose from a set of alternatives A. Which alternative do you choose?

A choice function C assigns to each set $A \subseteq X$ a unique element of A. C(A) is the chosen element from the set A.

Here are a couple of remarks on choice functions.

1. We assume that the agent selects a unique element in A for every question Q(A). cf. choice correspondence
2. The choice function C need not be observable.
3. The agent behaving in accordance with C will choose C(A) if she has to make a choice from a set A.


    Rational Choice

When the agent has in mind a preference relation $\succsim$ on X, given any choice problem Q(A) for $A \subseteq X$, she chooses an element in A which is $\succsim$-optimal.

Definition 6

An induced choice function $C_{\succsim}$ is the function that assigns to every nonempty set $A \subseteq X$ the $\succsim$-best element of A. A choice function C can be rationalized if there is a preference relation $\succsim$ on X so that $C = C_{\succsim}$.

Q: Under what conditions can a choice function be presented as if derived from some preference relation?

Definition 7

Choice function C satisfies (Sen's) condition $\alpha$ if for any $A \subseteq B$, $C(B) \in A$ implies $C(A) = C(B)$.

Fg: Figure 3.1 in Rubinstein (p. 25).
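As an aside (not part of the slides), here is a hedged Python sketch that brute-force checks Sen's condition α for a choice function specified on all nonempty subsets of a small finite set; the helper name satisfies_alpha is hypothetical.

```python
from itertools import combinations

def satisfies_alpha(X, C):
    """Check Sen's condition alpha: for all A ⊆ B with C(B) ∈ A,
    it must hold that C(A) = C(B)."""
    subsets = [frozenset(s) for r in range(1, len(X) + 1)
               for s in combinations(sorted(X), r)]
    return all(C[A] == C[B]
               for B in subsets for A in subsets
               if A <= B and C[B] in A)

# Example: the choice function induced by the ranking x ≻ y ≻ z.
ranking = ["x", "y", "z"]
X = set(ranking)
C = {frozenset(s): min(s, key=ranking.index)
     for r in range(1, len(X) + 1)
     for s in combinations(sorted(X), r)}

print(satisfies_alpha(X, C))  # True: an induced choice function satisfies alpha
```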


Choice ⇒ Preference (⇒ Utility)

Theorem 4

Assume C is a choice function with a domain containing at least all subsets of X of size 2 or 3. If C satisfies condition $\alpha$, then there is a preference relation $\succsim$ on X so that $C = C_{\succsim}$.

Proof.

Define $\succsim$ by $x \succsim y$ if $x = C(\{x, y\})$. Let us first show that $\succsim$ satisfies completeness and transitivity.
Completeness: Follows from the fact that $C(\{x, y\})$ is well-defined.
Transitivity: If $x \succsim y$ and $y \succsim z$, then by definition of $\succsim$ we have $C(\{x, y\}) = x$ and $C(\{y, z\}) = y$. If $C(\{x, z\}) = z$, then, by condition $\alpha$, $C(\{x, y, z\}) \neq x$. Similarly, by $C(\{x, y\}) = x$ and condition $\alpha$, $C(\{x, y, z\}) \neq y$, and by $C(\{y, z\}) = y$ and condition $\alpha$, $C(\{x, y, z\}) \neq z$. A contradiction to $C(\{x, y, z\}) \in \{x, y, z\}$.
Next we show that $C(A) = C_{\succsim}(A)$ for all $A \subseteq X$. Suppose to the contrary that $C(A) \neq C_{\succsim}(A)$, that is, $C(A) = x$ and $C_{\succsim}(A) = y$ ($\neq x$). By $y \succsim x$, this means $C(\{x, y\}) = y$, contradicting condition $\alpha$.
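To make the construction in the proof concrete, here is a small Python sketch (not from the slides): it defines the pairwise relation x ≿ y iff x = C({x, y}), builds the induced choice function, and verifies that it reproduces C on every set; the names pairwise_better and induced_choice are hypothetical.

```python
from itertools import combinations

ranking = ["x", "y", "z"]          # underlying (hidden) ranking: x ≻ y ≻ z
X = set(ranking)
subsets = [frozenset(s) for r in range(1, len(X) + 1)
           for s in combinations(sorted(X), r)]

# An observed choice function C on all nonempty subsets of X.
C = {A: min(A, key=ranking.index) for A in subsets}

def pairwise_better(a, b):
    """a ≿ b  iff  a = C({a, b}), the definition used in the proof."""
    return a == b or C[frozenset({a, b})] == a

def induced_choice(A):
    """Return the ≿-best element of A, where ≿ is built from pairwise choices."""
    return next(a for a in A if all(pairwise_better(a, b) for b in A))

# C is rationalized: the induced choice function reproduces C on every set.
print(all(induced_choice(A) == C[A] for A in subsets))  # True
```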


    As If Rational Preferences?

Rm: Any induced choice function satisfies condition $\alpha$, and Theorem 4 establishes the converse: condition $\alpha$ $\Leftrightarrow$ $C = C_{\succsim}$ for some $\succsim$.

Can each of the following procedures be rationalized?

1. Choose-the-worst procedure
2. Second-best procedure
3. Satisficing procedure (by Herbert Simon)
4. Satisficing using two orderings

The satisficing procedure seems unrelated to the maximization of a preference relation or utility function. Nevertheless, it can be rationalized, i.e., described as if the decision maker (DM) maximizes a preference relation.


    Decision under Uncertainty

We have so far not distinguished between individuals' actions and consequences,

    but many choices made by agents take place under conditions of uncertainty.

    We introduce an environment in which the correspondence between actions and

    consequences is not deterministic but stochastic.

    The domain of choice functions should be extended.

    The choice of an action is viewed as choosing a lottery where the prizes

    are the consequences.

    The DM is assumed not to care about the nature of the random factors

    but only about the distribution of consequences.


    Lotteries

We consider preferences and choices over the set of lotteries.

Let S be a set of consequences or prizes. We assume that S is a finite set and denote the number of its elements by |S|.

A lottery p is a function that assigns a nonnegative number to each prize s, where $\sum_{s \in S} p(s) = 1$. p(s) is the probability of obtaining the prize s given the lottery p.

Let $\alpha x \oplus (1 - \alpha) y$ denote the lottery in which the prize x is realized with probability $\alpha$ and the prize y with probability $1 - \alpha$.

Let L(S) be the (infinite) space containing all lotteries with prizes in S, i.e., $L(S) = \{x \in \mathbb{R}^S_+ \mid \sum_s x_s = 1\}$.

We will discuss preferences over L(S).
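As an illustration (not part of the lecture), a lottery can be stored as a dictionary mapping prizes to probabilities; the hypothetical helper mix below forms the compound lottery $\alpha x \oplus (1 - \alpha) y$ and immediately reduces it to net probabilities over prizes.

```python
def mix(alpha, p, q):
    """Return the lottery alpha * p ⊕ (1 - alpha) * q, reduced to a single
    distribution over prizes (net probabilities)."""
    prizes = set(p) | set(q)
    return {s: alpha * p.get(s, 0.0) + (1 - alpha) * q.get(s, 0.0)
            for s in prizes}

# Degenerate lotteries [x] and [y], and the mixture 0.5 x ⊕ 0.5 y.
p = {"x": 1.0}
q = {"y": 1.0}
r = mix(0.5, p, q)
print(r)                                    # {'x': 0.5, 'y': 0.5} (order may vary)
print(abs(sum(r.values()) - 1.0) < 1e-12)   # probabilities still sum to 1
```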


    St Petersburg Paradox (1)

The most primitive way to evaluate a lottery is to calculate its mathematical expectation, i.e., $E[p] = \sum_{s \in S} p(s)\, s$.

Daniel Bernoulli first doubted this approach in the 18th century when he examined the St Petersburg paradox.

Ex: St Petersburg Paradox. A fair coin is tossed until it shows heads for the first time. If the first head appears on the k-th trial, the player wins $2^k$ dollars.

Q: How much are you willing to pay to participate in this gamble?

Rm: The expected value of the lottery is infinite:
$\frac{2}{2} + \frac{2^2}{2^2} + \frac{2^3}{2^3} + \cdots = 1 + 1 + 1 + \cdots = \infty.$


    St Petersburg Paradox (2)

The St Petersburg paradox shows that maximizing your dollar expectation may not always be a good idea; an agent in a risky situation might want to maximize the expectation of some utility function with decreasing marginal utility:

$E[u(x)] = u(2)\tfrac{1}{2} + u(4)\tfrac{1}{4} + u(8)\tfrac{1}{8} + \cdots,$

which can be a finite number.
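A short Python check (illustrative only): the truncated expected dollar value of the St Petersburg gamble grows without bound in the number of terms, while the expected utility under a concave function such as u(x) = log2(x) converges.

```python
from math import log2

def truncated_values(n_terms):
    """Partial sums of E[prize] and E[log2(prize)] for the St Petersburg gamble:
    the k-th toss pays 2**k with probability 2**(-k)."""
    ev = eu = 0.0
    for k in range(1, n_terms + 1):
        prob, prize = 0.5 ** k, 2.0 ** k
        ev += prob * prize          # adds 1 each term: diverges
        eu += prob * log2(prize)    # adds k / 2**k: converges (to 2)
    return ev, eu

for n in (10, 20, 40):
    print(n, truncated_values(n))
# The first component grows linearly in n; the second approaches 2.
```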

Q: Under what kinds of conditions can a DM be described as if she maximizes the expectation of some utility function?

Rm: We know that for any preference relation defined on the space of lotteries that satisfies continuity, there is a utility representation $U : L(S) \to \mathbb{R}$, continuous in the probabilities, such that $p \succsim q$ if and only if $U(p) \geq U(q)$.


    Properties of Lotteries

We impose the following three assumptions on the lotteries.

1. $1 x \oplus (1 - 1) y \sim x$. Getting a prize with probability 1 is the same as getting the prize for certain.
2. $\alpha x \oplus (1 - \alpha) y \sim (1 - \alpha) y \oplus \alpha x$. The DM does not care about the order in which the lottery is described.
3. $\alpha (\beta x \oplus (1 - \beta) y) \oplus (1 - \alpha) y \sim (\alpha\beta) x \oplus (1 - \alpha\beta) y$. A DM's perception of a lottery depends only on the net probabilities of receiving the various prizes.

The first two assumptions appear to be innocuous. The third assumption, sometimes called reduction of compound lotteries, is somewhat suspect: there is some evidence to suggest that DMs treat compound lotteries differently from one-shot lotteries.


    Expected Utility Theory (1)

We will use the following two axioms to isolate a family of preference relations which have a representation by a more structured utility function.

Independence Axiom (I): For any $p, q, r \in L(S)$ and any $\alpha \in (0, 1)$,
$p \succsim q \;\Leftrightarrow\; \alpha p \oplus (1 - \alpha) r \succsim \alpha q \oplus (1 - \alpha) r.$

Continuity Axiom (C): If $p \succ q \succ r$, then there exists $\alpha \in (0, 1)$ such that
$q \sim [\alpha p \oplus (1 - \alpha) r].$

Theorem 5

Let $\succsim$ be a preference relation over L(S) satisfying I and C. There are numbers $\{v(s)\}_{s \in S}$ such that
$p \succsim q \;\Leftrightarrow\; U(p) = \sum_{s \in S} p(s) v(s) \geq U(q) = \sum_{s \in S} q(s) v(s).$


    Expected Utility Theory (2)

Sketch of the proof.

Let M and m be a best and a worst certain lottery in L(S). When $M \sim m$, choosing $v(s) = 0$ for all s we have $\sum_{s \in S} p(s)v(s) = 0$ for all $p \in L(S)$.
Consider the case that $M \succ m$. By I and C, there must be a single number $v(s) \in [0, 1]$ such that
$v(s) M \oplus (1 - v(s)) m \sim [s],$
where $[s]$ is the certain lottery with prize s, i.e., $[s] = 1 \cdot s$. In particular, $v(M) = 1$ and $v(m) = 0$. I implies that
$p \sim \Big(\sum_{s \in S} p(s)v(s)\Big) M \oplus \Big(1 - \sum_{s \in S} p(s)v(s)\Big) m.$
Since $M \succ m$, we can show that
$p \succsim q \;\Leftrightarrow\; \sum_{s \in S} p(s)v(s) \geq \sum_{s \in S} q(s)v(s).$


    vNM Utility Function (1)

Note the function U is a utility function representing the preferences on L(S), while v is a utility function defined over S: the building block for U(p).

v is called a vNM (von Neumann-Morgenstern) utility function.

Q: How can we construct the vNM utility function?
Let $s_i (\in S)$, $i = 1, \ldots, K$, be the consequences, and let $s_1$ and $s_K$ be the best and the worst consequences. That is, for any i,
$[s_1] \succsim [s_i] \succsim [s_K].$
Then, construct a function $v : S \to [0, 1]$ in the following way: $v(s_1) = 1$ and $v(s_K) = 0$, and
$[s_j] \sim v(s_j) [s_1] \oplus (1 - v(s_j)) [s_K]$ for all j.
By the continuity axiom, we can find a unique value of $v(s_j) \in [0, 1]$.
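As a sketch of how the calibrated numbers are then used (not from the slides, and with made-up calibration values): once the indifference probabilities v(s_j) are elicited, the utility of any lottery is simply the probability-weighted sum of those numbers; the function name expected_utility is hypothetical.

```python
# Hypothetical calibration: prizes ordered from best (s1) to worst (s4),
# with v(s1) = 1, v(s4) = 0, and elicited indifference probabilities in between.
v = {"s1": 1.0, "s2": 0.7, "s3": 0.4, "s4": 0.0}

def expected_utility(p, v):
    """U(p) = sum_s p(s) v(s), using the calibrated vNM utilities v."""
    return sum(prob * v[s] for s, prob in p.items())

p = {"s1": 0.2, "s2": 0.5, "s4": 0.3}   # a lottery over the prizes
q = {"s2": 0.6, "s3": 0.4}

print(expected_utility(p, v))  # 0.55
print(expected_utility(q, v))  # 0.58 -> q is preferred to p under these utilities
```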


    vNM Utility Function (2)

Q: To what extent is the vNM utility function unique?
The vNM utilities are unique up to positive affine transformation, i.e., multiplication by a positive number and adding any scalar. They are not invariant to arbitrary monotonic transformations.

Theorem 6

Let $\succsim$ be a preference relation defined over L(S), $v(s)$ be the vNM utilities representing the preference relation, and $w(s) = \alpha v(s) + \beta$ for all s (where $\alpha > 0$). Then, the utility function $W(p) = \sum_{s \in S} p(s) w(s)$ also represents $\succsim$.

Note that vNM utility functions do NOT (directly) attach numerical values to lotteries.

$v(s)$ is NOT a cardinal utility function, but a numerical function which is (intermediately) used to construct a utility representation U over L(S).


    vNM Utility Function (3)

Proof.

For any lotteries $p, q \in L(S)$, $p \succsim q$ if and only if
$\sum_{s \in S} p(s)v(s) \geq \sum_{s \in S} q(s)v(s).$
Now, the following hold:
$\sum_{s \in S} p(s)w(s) = \sum_{s \in S} p(s)(\alpha v(s) + \beta) = \alpha \sum_{s \in S} p(s)v(s) + \beta,$
$\sum_{s \in S} q(s)w(s) = \sum_{s \in S} q(s)(\alpha v(s) + \beta) = \alpha \sum_{s \in S} q(s)v(s) + \beta.$
Thus,
$\sum_{s \in S} p(s)v(s) \geq \sum_{s \in S} q(s)v(s)$
holds if and only if
$\sum_{s \in S} p(s)w(s) \geq \sum_{s \in S} q(s)w(s)$ (for $\alpha > 0$).
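A quick numerical check of Theorem 6 (illustrative only; the utilities and lotteries below are made up): a positive affine transformation w = αv + β changes the utility numbers but not the ranking of lotteries.

```python
def eu(p, u):
    """Expected utility sum_s p(s) u(s)."""
    return sum(prob * u[s] for s, prob in p.items())

v = {"a": 3.0, "b": 1.0, "c": 0.0}
alpha, beta = 2.5, -4.0
w = {s: alpha * v[s] + beta for s in v}   # positive affine transformation

p = {"a": 0.1, "b": 0.9}
q = {"a": 0.3, "c": 0.7}

# The comparison is unchanged under w; only the utility numbers differ.
print(eu(p, v) >= eu(q, v))   # True  (1.2 vs 0.9)
print(eu(p, w) >= eu(q, w))   # True  (-1.0 vs -1.75): same ranking
```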


    Allais Paradox (1)

Many experiments reveal systematic deviations from the vNM assumptions. The most famous one is the Allais paradox.

Ex: Allais paradox. First choose between
$L_1 = [3000]$ and $L_2 = 0.8 [4000] \oplus 0.2 [0],$
and then choose between
$L_3 = 0.5 [3000] \oplus 0.5 [0]$ and $L_4 = 0.4 [4000] \oplus 0.6 [0].$

Note that $L_3 = 0.5 L_1 \oplus 0.5 [0]$ and $L_4 = 0.5 L_2 \oplus 0.5 [0]$.

Axiom I requires that the preference between $L_1$ and $L_2$ be the same as that between $L_3$ and $L_4$. However, a majority of people express the preferences $L_1 \succ L_2$ and $L_3 \prec L_4$, violating the axiom.
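A small Python check (illustrative, not from the slides): because L3 and L4 are 0.5-mixtures of L1 and L2 with the same lottery [0], the expected-utility difference U(L3) − U(L4) is exactly half of U(L1) − U(L2) for any choice of prize utilities, so no expected-utility maximizer can display the modal pattern L1 ≻ L2 together with L4 ≻ L3.

```python
def eu(p, v):
    """Expected utility of lottery p (dict prize -> probability)."""
    return sum(prob * v[s] for s, prob in p.items())

L1 = {3000: 1.0}
L2 = {4000: 0.8, 0: 0.2}
L3 = {3000: 0.5, 0: 0.5}
L4 = {4000: 0.4, 0: 0.6}

# Try a few (arbitrary) utility assignments to the three prizes.
for v in ({0: 0, 3000: 1, 4000: 1.1},
          {0: 0, 3000: 5, 4000: 9},
          {0: -1, 3000: 2, 4000: 2.2}):
    d12 = eu(L1, v) - eu(L2, v)
    d34 = eu(L3, v) - eu(L4, v)
    print(d12, d34, (d12 > 0) == (d34 > 0))
# d34 is always half of d12, so the two comparisons always have the same sign.
```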


Allais Paradox (2)

Assume $L_1 \succ L_2$ but $\alpha L \oplus (1 - \alpha) L_1 \prec \alpha L \oplus (1 - \alpha) L_2$. (In our example of the Allais paradox, $\alpha = 0.5$ and $L = [0]$.)

Then, we can perform the following trick on the DM:

1. Take $\alpha L \oplus (1 - \alpha) L_1$.
2. Take instead $\alpha L \oplus (1 - \alpha) L_2$, which you prefer (and you pay me something...).
3. Let us agree to replace $L_2$ with $L_1$ in case $L_2$ is realized (and you pay me something now...).
4. Note that you now hold $\alpha L \oplus (1 - \alpha) L_1$.
5. Let us start from the beginning...

This argument may make the independence axiom look somewhat reasonable (and the Allais paradox unreasonable).


Zeckhauser's Paradox (1)

The following paradox also shows that many people do not necessarily follow expected utility maximization.

Ex: Zeckhauser's paradox. Some bullets are loaded into a revolver with six chambers. The cylinder is then spun and the gun pointed at your head.

Would you be prepared to pay more to get one bullet removed when only one bullet was loaded, or when four bullets were loaded?

Q: People usually say they would pay more in the first case, because they would then be buying their lives for certain. Is this decision reasonable?

Rm: Note that you cannot use your money once you die...


Zeckhauser's Paradox (2)

Suppose $X (resp. $Y) is the most that you are willing to pay to get one bullet removed from a gun containing one (resp. four) bullets.

Let L mean death, and W mean being alive after paying nothing.
Let C mean being alive after paying $X, and D being alive after paying $Y.

Note that L is the worst and W is the best consequence, and
$u(C) > u(D) \;\Leftrightarrow\; C \succ D \;\Leftrightarrow\; X < Y.$

Let $u(L) = 0$ and $u(W) = 1$. Then, $u(C)$ and $u(D)$ can be calculated from
$u(C) = \tfrac{1}{6} u(L) + \tfrac{5}{6} u(W) = \tfrac{5}{6},$ and
$\tfrac{1}{2} u(L) + \tfrac{1}{2} u(D) = \tfrac{2}{3} u(L) + \tfrac{1}{3} u(W) \;\Rightarrow\; u(D) = \tfrac{2}{3}.$

Since $u(C) > u(D)$, you must be ready to pay less to get one bullet removed when only one bullet was loaded than when four bullets were loaded.
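A tiny Python computation reproducing the two indifference conditions above (illustrative only): with u(L) = 0 and u(W) = 1, the maximal willingness to pay gives u(C) = 5/6 and u(D) = 2/3, hence X < Y.

```python
from fractions import Fraction as F

u_L, u_W = F(0), F(1)          # death, and staying alive for free

# One bullet: paying to remove it trades a 1/6 death risk for certainty.
u_C = F(1, 6) * u_L + F(5, 6) * u_W

# Four bullets: paying reduces the death risk from 4/6 to 3/6.
# Indifference: (1/2) u_L + (1/2) u_D = (2/3) u_L + (1/3) u_W.
u_D = 2 * (F(2, 3) * u_L + F(1, 3) * u_W - F(1, 2) * u_L)

print(u_C, u_D, u_C > u_D)     # 5/6 2/3 True -> X < Y: pay less in the one-bullet case
```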
