01 Static Decision Making



    1. Static Decision Making

    Objectives:

    Understanding of

- Individual decision making under uncertainty
- Individual evaluation of goods with uncertain future values
- Behavior towards risk
- The concept of the risk premium
- Preferences on assets with normally distributed returns

Concepts:

- Expected Utility Hypothesis
- Risk Aversion, Risk Neutrality, Risk Loving
- Markowitz Risk Premium and Certainty Equivalent
- Arrow-Pratt Approximation of the Risk Premium
- Mean-Variance Preferences

    Contents:

    1.1 Contingent Goods

    1.2 Structure of the Model

    1.3 Expected Utility Hypothesis

    1.4 Behavior towards Risk

    1.5 Risk Premium

    1.6 Mean Variance Criterion


    1.1 Contingent Goods

The concept of contingent goods developed by Kenneth Arrow is very useful to characterize the future value of goods in an uncertain world. In the following we apply it only to financial assets.

- Under certainty goods are characterized by their physical quality and by the time and the location of their availability.
- Under uncertainty we characterize goods additionally by the incidence of future events (events that influence the future value of goods).
- It is convenient to declare the incidence of a certain event as a state of the world; the future value of the good is thus contingent upon this state of the world.
- In this case the future value of goods is a random variable. The random variable may have finitely or infinitely many, but countable, realizations.

Event Tree

In an inter-temporal context the concept of states of the world can be presented with an event tree. The event tree represents the complete sequence of the realizations of a discrete random variable from the present to a future date.

Figure 1.1: 3-period event tree

In this graph we observe a simple example of an economy that lasts three periods and can take three randomly determined states. The random generator may influence the level of business activities at each node (yellow point) of the tree. The level of activity may be high in the left (blue) branch, medium in the middle (red) branch and low in the right (green) branch. In this example we have 3^3 = 27 endpoints of the tree.

- In general, if an economy lasts t = 1,…,T periods and the random influence generates ω = 1,…,Ω possible outcomes at each point of realization, the tree has Ω^T different endpoints (see the small sketch after this list).
- Each endpoint is associated with a complete history of random realizations from the present to a certain future date.
- Each endpoint of the tree represents a state of the world at date T. There are s = 1,…,S of them with S = Ω^T.
- The complete history of these realizations is of economic relevance. Think of a sequence of good or bad economic developments.
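A small sketch in Python (not part of the original notes) illustrates the counting argument: with Ω possible outcomes per node and T periods, enumerating all complete histories yields Ω^T endpoints. The outcome labels are the ones used in Figure 1.1.

    from itertools import product

    T = 3                                   # number of periods
    outcomes = ["high", "medium", "low"]    # Omega = 3 outcomes per node

    histories = list(product(outcomes, repeat=T))   # every complete history
    print(len(histories))    # 27 = 3**3 endpoints / states of the world at date T
    print(histories[0])      # e.g. ('high', 'high', 'high')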



    One Period Model

We assume that in the 1-period future the economy assumes s = 1,…,S states of the world.

    Figure 1.2: 1-period event tree

    Information

We do not discuss any information problems. We can imagine different types of information problems, such as insider information or lower costs of publicly or privately available information. These are very interesting topics; however, they are beyond the scope of Financial Modeling I.

    Value of Assets

When we discuss assets we think of financial assets in most cases. However, most of the insights also apply to real assets; because of depreciation and storage costs some minor modifications and extensions are necessary.

Moreover, we restrict our analysis to a one-period analysis. In this case we can characterize assets by their current and future values including (intra-period) payoffs, or even more conveniently by their returns. Therefore we use the following conventions:

- The value of an asset X at the beginning of the period is X_0.
- The random future value of an asset, at the end of the current (or the beginning of the future) period, is denoted as X̂_1 with the probability distribution g(X̂_1). (The circumflex is attached to random variables.)
- In discrete presentations it is convenient to express the random future values of assets and their probability of occurrence as functions of the state of the world s. In this case X_1(s) denotes the future value of the asset in the state s that occurs with probability g(X_1(s)).
- The random generator governing the future values of assets (associated with certain states of the world) may follow any probability distribution. However, many models assume that future values of assets are lognormally distributed. Lognormal, since the minimal value of many assets (stocks, plain vanilla bonds, etc.) is zero.

    Return on Assets

According to convenience the returns of assets are defined as arithmetic or geometric rates of return. The future (gross) return R̂^X_1 of the asset X is defined as



\hat R^X_1 = \frac{\hat X_1}{X_0}, \quad \text{or} \quad R^X_1(s) = \frac{X_1(s)}{X_0}   (1.1)

    In some models we use the (net) rates of return, which are defined as

\hat r^X_1 \equiv \hat R^X_1 - 1 \approx \ln \hat R^X_1, \quad \text{or} \quad r^X_1(s) \equiv R^X_1(s) - 1 \approx \ln R^X_1(s)   (1.2)

As the future value of the asset is random, the return is, of course, also a random variable.

We express the probability distribution as f(r̂^X_1), or f(r^X_1(s)), respectively. The probability functions f(r̂^X_1) and f(r^X_1(s)) are approximately normally distributed if the probability functions g(X̂_1) and g(X_1(s)) are lognormally distributed. (For the definition of normal or lognormal probability distributions see Appendix A.2.)

    Simplification of the Notation

Utility functions can be defined on total wealth, on contingent goods, or on contingent values of assets or portfolios of assets. In the second and third sections we show that these insights apply to all objects of individuals' preferences. Thus we omit the indication of specific goods or assets in these sections.

Since we restrict our analysis to the one-period case during these lectures, we simplify notation by omitting the time indices. We denote the present value of the asset X as X, its random future value as X̂, or X(s), and its returns as R̂^X and R^X(s), or r̂^X and r^X(s), respectively.

    1.3 Expected Utility Hypothesis

    John von Neumann and Oskar Morgenstern proved that under some specific assumptions (see

    Appendix A.1.3) individual preferences concerning lotteries (games) can be presented as the

    expected utility of the payoffs of the lottery.

    1.3.1 Definitions and Concepts

- A twice continuously differentiable (concave) utility function u(·) is defined over the (future) value of the asset.
- The future value of the asset can be grasped by a lottery (game) that is defined by the random future payoffs X̂ and the probability function g(X̂), or, in a state-dependent presentation, by X(s) = [X(1),…,X(S)] and g_X = [g_X(1), g_X(2),…,g_X(S)]. This presentation can be applied to total wealth and to future values of assets or portfolios of assets as well.
- We write a lottery (game) as G[X̂; g(X̂)] or G[X(s); g_X(s)]. For simplicity we omit the specification of the probability distribution.


- The utility of a lottery (game) presents the individual preferences over the lottery (game), i.e. U(G[X̂; g]), or U(G[X(s); g]).
- The expected utility of the lottery (game) is defined as the expected value of the utility of the payoffs of the lottery (game), i.e. U(G[X̂; g]) = E{U(X̂); g}, or

U\big(G[X(s); g]\big) = g(1)\,u[X(1)] + \dots + g(S)\,u[X(S)].   (1.3)

- The expected utility of a lottery (game) has to be distinguished from the utility of the expected value (of the payoffs) of the lottery (game). The latter is defined as U(E{G[X̂; g]}) = u(E(X̂)), or

U\big(E\{G[X(s); g]\}\big) = u\big[g(1)X(1) + \dots + g(S)X(S)\big].   (1.4)

In order to present the expected utility hypothesis in a graphical exposition and a small numerical example it is recommended to introduce a binomial world with two states of the world, s = 1, 2, with the payoffs X(1) and X(2) which occur with probabilities g(1) and g(2) = 1 − g(1).

In order to simplify notation we express the expected value of X̂ as μ_X, the variance as σ_XX, and the standard deviation as σ_X.

The expected value and the variance of these payoffs are

\mu_X = g(1)\,X(1) + g(2)\,X(2),   (1.5)

and

\sigma_{XX} = g(1)\,\big[X(1) - \mu_X\big]^2 + g(2)\,\big[X(2) - \mu_X\big]^2.   (1.6)

The expected utility of this game is

E\big\{U[X(s); g]\big\} = g(1)\,u[X(1)] + g(2)\,u[X(2)],   (1.7)

and the utility of the expected value is

U\big(E\{X(s); g\}\big) = u\big[g(1)X(1) + g(2)X(2)\big] = u(\mu_X).   (1.8)

In the subsequent graphic the expected utility is presented as a convex combination of the utilities of the payoffs, located somewhere along the red line (the chord) depending on the probabilities of the payoffs. The utilities of the single payoffs, and moreover the utility of the expected value of the game (lottery), are located along the utility function. If the utility function is concave the utility of the expected value must exceed the expected utility of the game (lottery).


    Figure 1.3: Expected Utility

The expected utility of the game presented in Figure 1.3 can be illustrated by a simple numerical example. We assume that the utility function is a square root, u[X(s)] = \sqrt{X(s)}. Moreover we specify the payoffs of the lottery as X(1) = 100 and X(2) = 10000, and the binomial probabilities as g(1) = 3/4, and therefore g(2) = 1 − g(1) = 1/4. The expected value of the game is thus

\mu_X = \tfrac{3}{4}\cdot 100 + \tfrac{1}{4}\cdot 10000 = 2575.

The expected utility of the game is thus defined as

E\big\{U[X(s); g]\big\} = \tfrac{3}{4}\sqrt{100} + \tfrac{1}{4}\sqrt{10000} = 7.5 + 25 = 32.5.

The utility of the expected value of the game is

U\big(E\{X(s); g\}\big) = \sqrt{\tfrac{3}{4}\cdot 100 + \tfrac{1}{4}\cdot 10000} = \sqrt{75 + 2500} = \sqrt{2575} \approx 50.74.

Obviously the utility from the game with uncertain outcomes (the expected utility) is significantly smaller than the utility from the deterministic expected value of the game.
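The example can be reproduced in a few lines of Python (an added sketch, not part of the notes); the utility function and the payoff and probability values are exactly the ones assumed above.

    import math

    payoffs = [100.0, 10000.0]     # X(1), X(2)
    probs   = [0.75, 0.25]         # g(1), g(2)

    u = math.sqrt                  # square-root utility u(X) = sqrt(X)

    mean_payoff      = sum(g * x for g, x in zip(probs, payoffs))      # mu_X = 2575
    expected_utility = sum(g * u(x) for g, x in zip(probs, payoffs))   # E{u(X)} = 32.5
    utility_of_mean  = u(mean_payoff)                                  # u(mu_X) ~ 50.74

    print(mean_payoff, expected_utility, utility_of_mean)
    # For a concave u, u(mu_X) > E{u(X)} (Jensen's inequality), as stated in the text.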

    1.3.2 An Alternative Presentation of the Expected Utility Hypothesis

Sometimes it is convenient to decompose the random variable X̂ into its expected value E(X̂) and a lottery ε̂:

\hat X = E(\hat X) + \hat\varepsilon.   (1.9)

Obviously, ε̂ is a fair lottery with an expected value of zero and a constant variance equal to the variance of X̂:



E(\hat\varepsilon) = E(\hat X - \mu_X) = 0   (1.10)

\mathrm{Var}(\hat\varepsilon) = E\Big\{\big[\hat X - \mu_X - E(\hat X - \mu_X)\big]^2\Big\} = E\Big\{\big[\hat X - E(\hat X)\big]^2\Big\} = \sigma_{XX}.   (1.11)

In the binomial case with two states, s = 1, 2, and the probabilities g(s), the values of the asset deviate from its mean by ε(s). The asset value can be presented as the sum of the expected value of the initial game plus the game of the stochastic deviations. The expected value and the variance of the initial game can be expressed as

E\{\mu_X + \hat\varepsilon\} = \mu_X + g(1)\,\varepsilon(1) + g(2)\,\varepsilon(2) = \mu_X,   (1.12)

and

\mathrm{Var}(\mu_X + \hat\varepsilon) = g(1)\,\big[\mu_X + \varepsilon(1) - \mu_X\big]^2 + g(2)\,\big[\mu_X + \varepsilon(2) - \mu_X\big]^2 = g(1)\,\varepsilon(1)^2 + g(2)\,\varepsilon(2)^2.   (1.13)

Thus the expected utility of the game can be expressed as

E\Big\{u\big(\mu_X + G[\varepsilon(1), \varepsilon(2); g]\big)\Big\} = g(1)\,u\big[\mu_X + \varepsilon(1)\big] + g(2)\,u\big[\mu_X + \varepsilon(2)\big].   (1.14)

Also this version of the expected utility can be illustrated graphically:

    Figure 1.4: Expected Utility Fair Game

    In the numerical example of the previous section the stochastic deviations of the payoffs from

    the mean are defined as



\varepsilon(1) = X(1) - \mu_X = 100 - 2575 = -2475,

and

\varepsilon(2) = X(2) - \mu_X = 10000 - 2575 = 7425.

Thus the mean of the deviations equals

\mu_\varepsilon = g(1)\,\varepsilon(1) + g(2)\,\varepsilon(2) = \tfrac{3}{4}\cdot(-2475) + \tfrac{1}{4}\cdot 7425 = 0.
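A brief check of the fair-game decomposition in Python (an added sketch, not from the notes), again using the payoffs and probabilities assumed above:

    payoffs = [100.0, 10000.0]
    probs   = [0.75, 0.25]

    mu_X = sum(g * x for g, x in zip(probs, payoffs))
    eps  = [x - mu_X for x in payoffs]                        # [-2475.0, 7425.0]

    mean_eps = sum(g * e for g, e in zip(probs, eps))         # 0.0: a fair lottery
    var_X    = sum(g * (x - mu_X) ** 2 for g, x in zip(probs, payoffs))
    var_eps  = sum(g * e ** 2 for g, e in zip(probs, eps))    # equals var_X

    print(eps, mean_eps, var_X, var_eps)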

    1.3.3 The Existence of a von Neumann Morgenstern Utility Function

If preferences are defined over the entire range of lotteries and if these preferences satisfy some specific assumptions (see Appendix A.1.3), a utility function with the following property can be defined over the range of lotteries:

U\big(G[X(s); g]\big) = E\big\{u[X(s)];\, g\big\} \equiv \sum_{s=1}^{S} g(s)\,u\big(X(s)\big)   (1.15)

Under the von Neumann–Morgenstern axiomatic, the preferences over lotteries can be represented by the expected utility of the outcomes of the lottery.

    1.4 Behavior towards Risk

Without explanation we used a concave utility function, which implies risk-averse investors. In this section we will discuss this topic explicitly. We will classify individuals' preferences, their attitudes towards risk, and the shape of the utility function.

    1.4.1 Characterization of the Behavior towards Risk

An individual's behavior towards risk can be classified into three categories:

- Risk-averse behavior
- Risk-loving behavior
- Risk-neutral behavior

    Risk Averse Behavior

    The individual is risk-averse if it prefers the expected value of the payoffs of a lottery (game)

    rather than the lottery itself. In terms of utilities this means the utility of the expected value of

    the payoffs of a lottery is greater than the utility of the lottery (game). This condition is

    obviously fulfilled in case of a concave utility function.

u(\mu_X) = u\Big(\sum_{s=1}^{S} g(s)\,X(s)\Big) > \sum_{s=1}^{S} g(s)\,u\big[\mu_X + \varepsilon(s)\big] = E\big\{u(\mu_X + \hat\varepsilon)\big\}.   (1.16)

    Depending on the probabilities the expected utilities are along the red lines in the graphs

    below whereas the utility of the expected value is somewhere along the utility function.


    Figure 1.5: Utility functions of risk averse, risk loving, and risk neutral individuals

    Risk Loving Behavior

    If the individual prefers the lottery (game) rather than the expected value of the payoffs of the

    lottery (game) it is a risk lover. In terms of utilities this means the utility of the expected

    value of the payoffs of a lottery is smaller than the utility of the lottery (game). This condition

    is satisfied if the utility function is convex.

u(\mu_X) = u\Big(\sum_{s=1}^{S} g(s)\,X(s)\Big) < \sum_{s=1}^{S} g(s)\,u\big[\mu_X + \varepsilon(s)\big] = E\big\{u(\mu_X + \hat\varepsilon)\big\}   (1.17)

    Risk Neutral Behavior

The individual is risk neutral if it is indifferent between the expected value of the payoffs of a lottery and the lottery itself, or, if the utility of the expected value of the payoffs of a lottery equals the utility of the lottery (game). This is the case if the utility function is linear.

u(\mu_X) = u\Big(\sum_{s=1}^{S} g(s)\,X(s)\Big) = \sum_{s=1}^{S} g(s)\,u\big[\mu_X + \varepsilon(s)\big] = E\big\{u(\mu_X + \hat\varepsilon)\big\}   (1.18)

    1.4.2 Local measures of risk behavior

In the economic analysis we use two measures of behavior towards risk:

- Absolute Risk Aversion (ARA)
- Relative Risk Aversion (RRA)

As both ARA and RRA are local measures we should apply them only for small changes of wealth. (In this section we omit the indices of time and state of the world because risk aversion is a time-independent property of the utility function rather than of the random realization of future wealth.)

The absolute risk aversion is defined as:

ARA \equiv -\frac{u''(X)}{u'(X)}   (1.19)

ARA is a measure of the curvature of the utility function. It decreases with the slope of the utility function and increases with the change of the slope.



For many economic problems it is of great importance how ARA changes with increasing wealth. The answer can be seen from the derivative of ARA with respect to wealth:

\frac{dARA}{dX} = -\frac{d\big[u''(X)/u'(X)\big]}{dX} = -\frac{u'''(X)\,u'(X) - \big[u''(X)\big]^2}{\big[u'(X)\big]^2}   (1.20)

As the denominator is positive the sign of the change of ARA is determined by the following condition:

\frac{dARA}{dX} \gtrless 0 \;\Longleftrightarrow\; u'''(X)\,u'(X) \lessgtr \big[u''(X)\big]^2   (1.21)

Since u''(X) < 0 we can rewrite the condition as follows:

\frac{dARA}{dX} \gtrless 0 \;\Longleftrightarrow\; -\frac{u'''(X)}{u''(X)} \lessgtr -\frac{u''(X)}{u'(X)}   (1.21')

Therefore absolute risk aversion decreases with increasing wealth if the second derivative is less elastic than the first derivative of the utility function.

The relative risk aversion is defined as the elasticity of the marginal utility, i.e.

RRA \equiv -\frac{u''(X)}{u'(X)}\,X = ARA \cdot X   (1.22)

In order to investigate how the relative risk aversion changes with increasing wealth we calculate the respective derivative:

\frac{dRRA}{dX} = \frac{d\big[-u''(X)\,X/u'(X)\big]}{dX} = \frac{d\,(ARA \cdot X)}{dX} = ARA + X\,\frac{dARA}{dX}   (1.23)
= -\frac{u''(X)\,u'(X) + u'''(X)\,u'(X)\,X - \big[u''(X)\big]^2\,X}{\big[u'(X)\big]^2}

\frac{dRRA}{dX} \gtrless 0 \;\Longleftrightarrow\; 1 + \frac{u'''(X)}{u''(X)}\,X \gtrless \frac{u''(X)}{u'(X)}\,X   (1.24)

    1.4.3 Two Useful Examples

A Constant Absolute Risk Aversion (CARA) utility function

u(x) = -e^{-bx}, \quad u'(x) = b\,e^{-bx} > 0, \quad u''(x) = -b^2\,e^{-bx} < 0,   (1.25)

ARA(x) = \frac{b^2\,e^{-bx}}{b\,e^{-bx}} = b > 0, \qquad \frac{dARA(x)}{dx} = 0   (1.26)


Obviously, this utility function exhibits CARA (constant ARA).

RRA(x) = \frac{b^2\,e^{-bx}}{b\,e^{-bx}}\,x = b\,x > 0, \qquad \frac{dRRA(x)}{dx} = b > 0   (1.27)

Obviously, RRA increases with increasing outcome (goods).

    A Constant Relative Risk Aversion (CRRA) utility function

In many models economists use a utility function with constant relative risk aversion.

u(x) = \frac{x^{1-\gamma}}{1-\gamma}, \quad \text{with} \quad u'(x) = x^{-\gamma} > 0, \quad u''(x) = -\gamma\,x^{-\gamma-1} < 0.   (1.28)

The absolute risk aversion of this utility function is therefore decreasing in x:

ARA = \frac{\gamma\,x^{-\gamma-1}}{x^{-\gamma}} = \frac{\gamma}{x} > 0, \qquad \frac{dARA}{dx} = -\frac{\gamma}{x^2} < 0   (1.29)

The relative risk aversion is obviously a constant:

RRA = \frac{\gamma\,x^{-\gamma-1}}{x^{-\gamma}}\,x = \gamma > 0   (1.30)
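As a numerical sanity check (an added sketch, not part of the notes), the local measures ARA and RRA of the two example utility functions can be approximated by finite differences; the parameter values b, gamma and the evaluation point x are arbitrary illustrations.

    import math

    def ara(u, x, h=1e-3):
        """Absolute risk aversion -u''(x)/u'(x) via central finite differences."""
        u1 = (u(x + h) - u(x - h)) / (2 * h)              # u'(x)
        u2 = (u(x + h) - 2 * u(x) + u(x - h)) / h ** 2    # u''(x)
        return -u2 / u1

    b, gamma, x = 0.01, 2.0, 50.0                         # illustrative parameters

    u_cara = lambda v: -math.exp(-b * v)                  # CARA: ARA = b, RRA = b*x
    u_crra = lambda v: v ** (1 - gamma) / (1 - gamma)     # CRRA: ARA = gamma/x, RRA = gamma

    print(ara(u_cara, x), b)             # both ~ 0.01
    print(ara(u_crra, x) * x, gamma)     # RRA = ARA * x ~ 2.0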

1.5 Risk Premium

The concept of the risk premium has been developed by Harry M. Markowitz. He defines the risk premium as the difference between the expected wealth of a lottery (game) and the certainty equivalent of the lottery (game), i.e. the certain amount of wealth (money) for which an individual is indifferent between that amount and the lottery. Therefore we can view the risk premium as the cost of a lottery (game).

    1.5.1 Definition of the Markowitz Risk Premium

Following this idea the risk premium π can be defined by

u\big(\mu_X - \pi\big) = E\big\{u(\mu_X + \hat\varepsilon)\big\},   (1.31a)

or

u\Big(\sum_{s=1}^{S} g(s)\,X(s) - \pi\big[\varepsilon(s), g(s)\big]\Big) = \sum_{s=1}^{S} g(s)\,u\big[\mu_X + \varepsilon(s)\big]   (1.31b)

    The risk premium is defined as the amount of money that can be deducted from the expected

    value in order to equalize the utility of the residual with the expected utility of the game. The

    risk premium has to equalize the utility of the certainty equivalent, i.e. the difference of the

    expected payoffs and the risk premium, and the expected utility of the game (lottery).

For the numerical example and the graphic presentation we use again the binomial simplification in the fair-game version with the payoffs X(1) = μ_X + ε(1) and X(2) = μ_X + ε(2). In this model the certainty equivalent is defined as


CE = g(1)\,\big[\mu_X + \varepsilon(1)\big] + g(2)\,\big[\mu_X + \varepsilon(2)\big] - \pi\big[\varepsilon(s), g(s)\big] = \mu_X - \pi\big[\varepsilon(s), g(s)\big],   (1.32)

and the risk premium is implicitly determined by the equation

u(CE) = g(1)\,u\big[\mu_X + \varepsilon(1)\big] + g(2)\,u\big[\mu_X + \varepsilon(2)\big].   (1.33)

The risk premium can thus be expressed as

\pi\big[\varepsilon(s), g(s)\big] = \mu_X - u^{-1}\Big(g(1)\,u\big[\mu_X + \varepsilon(1)\big] + g(2)\,u\big[\mu_X + \varepsilon(2)\big]\Big).   (1.34)

From this presentation it is obvious that the size of the risk premium depends on the mean of the game, the stochastic deviations of the payoffs from the mean, and the probabilities of these stochastic deviations.

    In the subsequent graph the risk premium is displayed by the green line.

    Figure 1.6: Markowitz risk premium

    The graphic representation shows that the height of the risk premium depends on four factors:

- The concavity of the utility function
- The location of the expected value
- The stochastic deviations of the payoffs from the mean
- The probabilities of the stochastic payoffs

    Of course we can determine the risk premium in the numerical example. It is determined by

    the equation



\sqrt{\tfrac{3}{4}\cdot 100 + \tfrac{1}{4}\cdot 10000 - \pi} = \tfrac{3}{4}\sqrt{100} + \tfrac{1}{4}\sqrt{10000},

or

\sqrt{2575 - \pi} = 32.5.

If we square the equation and solve for the risk premium we receive

\pi = 2575 - 1056.25 = 1518.75.

The size of the risk premium is due to the strong concavity of the root function, the big stochastic deviations from the mean, and the high probability of the low payoff in state 1.
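The same computation can be written out in Python (an illustrative sketch, not from the notes); for the square-root utility the inverse u^{-1} is simply squaring, so the certainty equivalent and the Markowitz risk premium follow directly.

    import math

    payoffs = [100.0, 10000.0]
    probs   = [0.75, 0.25]

    u     = math.sqrt            # u(X) = sqrt(X)
    u_inv = lambda v: v ** 2     # inverse utility u^{-1}(v) = v^2

    mu_X = sum(g * x for g, x in zip(probs, payoffs))        # 2575.0
    eu   = sum(g * u(x) for g, x in zip(probs, payoffs))     # 32.5

    ce = u_inv(eu)               # certainty equivalent = 1056.25
    rp = mu_X - ce               # Markowitz risk premium = 1518.75

    print(ce, rp)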

    1.5.2 A local approximation of the Markowitz risk premium

Kenneth Arrow and John W. Pratt developed a useful and simple measure of this risk premium. The measure can be derived by a Taylor approximation of π(ε̂) from the equation

u\big(\mu_X - \pi\big) = E\big\{u(\mu_X + \hat\varepsilon)\big\}.   (1.31a)

We expand the left side in order to receive

u\big(\mu_X - \pi(\hat\varepsilon)\big) \approx u(\mu_X) - \pi(\hat\varepsilon)\,u'(\mu_X).   (1.35)

Expanding the right-hand side around E(ε̂) = 0 gives

E\big\{u(\mu_X + \hat\varepsilon)\big\} \approx E\Big\{u(\mu_X) + \hat\varepsilon\,u'(\mu_X) + \tfrac{1}{2}\hat\varepsilon^2\,u''(\mu_X)\Big\}
= u(\mu_X) + E(\hat\varepsilon)\,u'(\mu_X) + \tfrac{1}{2}\mathrm{Var}(\hat\varepsilon)\,u''(\mu_X)   (1.36)
= u(\mu_X) + \tfrac{1}{2}\mathrm{Var}(\hat\varepsilon)\,u''(\mu_X)

The second-order term is necessary since the middle term vanishes because of E(ε̂) = 0.

If we insert (1.35) and (1.36) into (1.31a) and solve for π(ε̂) we receive the so-called Arrow-Pratt approximation of the risk premium:

\pi(\hat\varepsilon) \approx -\frac{u''(\mu_X)}{u'(\mu_X)}\,\frac{1}{2}\,\mathrm{Var}(\hat\varepsilon) = ARA \cdot \frac{1}{2}\,\mathrm{Var}(\hat\varepsilon).   (1.37)

Of course it is only a local measure of the risk premium around μ_X. From this Arrow-Pratt approximation we acknowledge the central determinants of a risk premium:

- The level of wealth
- The (absolute) risk aversion of the individual
- The volatility of the lottery (game)
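A small comparison in Python (an added sketch, not part of the notes) of the exact Markowitz premium from the numerical example with the Arrow-Pratt approximation evaluated at μ_X; for such a large, strongly skewed lottery the local approximation is naturally rough.

    payoffs = [100.0, 10000.0]
    probs   = [0.75, 0.25]

    mu_X  = sum(g * x for g, x in zip(probs, payoffs))
    var_e = sum(g * (x - mu_X) ** 2 for g, x in zip(probs, payoffs))

    # For u(X) = sqrt(X): u'(X) = 0.5*X**-0.5, u''(X) = -0.25*X**-1.5, so ARA = 1/(2X)
    ara_at_mean = 1.0 / (2.0 * mu_X)

    rp_arrow_pratt = 0.5 * ara_at_mean * var_e                                   # ~ 1784.2
    rp_exact = mu_X - sum(g * x ** 0.5 for g, x in zip(probs, payoffs)) ** 2     # 1518.75

    print(rp_arrow_pratt, rp_exact)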


The Markowitz risk premium of the CRRA utility function

The risk premium of the CRRA utility function is approximately:

\pi(\hat\varepsilon) \approx \frac{\gamma}{\mu_X}\,\frac{1}{2}\,\mathrm{Var}(\hat\varepsilon)   (1.38)

    1.6 Mean Variance Criterion

Many financial models are based on the assumption that the future values of assets are lognormally distributed and/or their returns are normally distributed. In these cases the von Neumann–Morgenstern utility function can be redefined on the mean and the variance of a lottery.

    1.6.1 Return and Risk of Assets

As the rate of return of an asset is defined as r̂_X = X̂/X − 1, the (expected) return μ_r equals

\mu_r \equiv E\Big(\frac{\hat X}{X} - 1\Big) = \frac{E(\hat X)}{X} - 1 = \frac{\mu_X}{X} - 1,   (1.39)

and the variance, denoted as σ_rr, is

\sigma_{rr} = E\bigg\{\Big[\frac{\hat X}{X} - 1 - E\Big(\frac{\hat X}{X} - 1\Big)\Big]^2\bigg\} = \frac{1}{X^2}\,E\Big\{\big[\hat X - E(\hat X)\big]^2\Big\} = \frac{1}{X^2}\,\mathrm{Var}(\hat X) \equiv \frac{1}{X^2}\,\sigma_{XX},   (1.40)

where σ_XX is the variance of the value of the asset.
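A small illustration in Python (an added sketch, not from the notes) of how the return moments in (1.39) and (1.40) follow from the moments of the asset value; the current price X and the payoff distribution are made-up numbers.

    X       = 100.0                     # current price (illustrative)
    payoffs = [90.0, 105.0, 120.0]      # possible future values X(s) (illustrative)
    probs   = [0.25, 0.50, 0.25]

    mu_X     = sum(g * x for g, x in zip(probs, payoffs))
    sigma_XX = sum(g * (x - mu_X) ** 2 for g, x in zip(probs, payoffs))

    mu_r     = mu_X / X - 1             # eq. (1.39)
    sigma_rr = sigma_XX / X ** 2        # eq. (1.40)

    # The same moments computed directly from the state-wise returns r(s) = X(s)/X - 1
    returns = [x / X - 1 for x in payoffs]
    print(mu_r, sum(g * r for g, r in zip(probs, returns)))
    print(sigma_rr, sum(g * (r - mu_r) ** 2 for g, r in zip(probs, returns)))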

    1.6.2 The EUH in terms of the Rate of Return

For some applications it is useful to define the utility over the rate of return rather than over wealth. If we denote the utility of the rate of return as u(r) and the probability function of the normally distributed rate of return as f(r; μ_r, σ_rr), with the mean return μ_r and the variance σ_rr (or the standard deviation σ_r), the expected utility of the rate of return can be expressed as

E\big\{u(\hat r)\big\} = \int u(r)\,f(r; \mu_r, \sigma_{rr})\,dr.   (1.41)

It is useful to introduce the standard normally distributed random variable

\hat\varepsilon = \frac{\hat r - \mu_r}{\sigma_r}.   (1.42)

The expected value of ε̂ is zero and the variance is equal to one:

\mu_\varepsilon = E\Big\{\frac{\hat r - \mu_r}{\sigma_r}\Big\} = \frac{E(\hat r) - \mu_r}{\sigma_r} = 0   (1.43)


\sigma_{\varepsilon\varepsilon} = E\bigg\{\Big[\frac{\hat r - \mu_r}{\sigma_r} - \underbrace{E\Big(\frac{\hat r - \mu_r}{\sigma_r}\Big)}_{=0}\Big]^2\bigg\} = \frac{E\big\{(\hat r - \mu_r)^2\big\}}{\sigma_{rr}} = 1   (1.44)

Having in mind the relation r = μ_r + σ_r ε we can write dr = σ_r dε, and taking into account

f(r; \mu_r, \sigma_{rr}) = \frac{f(\varepsilon; 0, 1)}{\sigma_r},   (1.45)

we can rewrite the expected utility of the rate of return in terms of the standard normal variable ε:

E\big\{u(\hat r)\big\} = \int u(\mu_r + \sigma_r\varepsilon)\,\frac{f(\varepsilon; 0, 1)}{\sigma_r}\,\sigma_r\,d\varepsilon = \int u(\mu_r + \sigma_r\varepsilon)\,f(\varepsilon; 0, 1)\,d\varepsilon   (1.46)
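Equation (1.46) can be evaluated numerically; the sketch below (not from the notes) uses Gauss-Hermite quadrature from NumPy and an exponential (CARA) utility over returns as an arbitrary illustration.

    import numpy as np

    def expected_utility(u, mu_r, sigma_r, n=40):
        """E{u(mu_r + sigma_r*eps)} for standard-normal eps via Gauss-Hermite quadrature."""
        x, w = np.polynomial.hermite.hermgauss(n)   # nodes/weights for weight exp(-x^2)
        eps = np.sqrt(2.0) * x                      # change of variables eps = sqrt(2)*x
        return np.sum(w * u(mu_r + sigma_r * eps)) / np.sqrt(np.pi)

    b = 2.0                                         # illustrative risk-aversion parameter
    u = lambda r: -np.exp(-b * r)                   # CARA utility over the rate of return

    mu_r, sigma_r = 0.05, 0.20
    print(expected_utility(u, mu_r, sigma_r))
    # For CARA + normal returns the closed form is -exp(-b*mu_r + 0.5*b**2*sigma_r**2):
    print(-np.exp(-b * mu_r + 0.5 * b ** 2 * sigma_r ** 2))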

    1.6.3 Indifference curves of a Risk-Averse Investor

In order to calculate the (μ_r, σ_r)-indifference curves it is recommended to understand the expected rate of return as an implicit function of the standard deviation of the rate of return. Therefore, we differentiate the expected utility with respect to the standard deviation to receive

\frac{dE\{u(\hat r)\}}{d\sigma_r} = \int u'(\mu_r + \sigma_r\varepsilon)\,\Big[\frac{d\mu_r}{d\sigma_r} + \varepsilon\Big]\,f(\varepsilon; 0, 1)\,d\varepsilon = 0.   (1.47)

It is convenient to decompose this condition into two terms:

\frac{d\mu_r}{d\sigma_r}\int u'(\mu_r + \sigma_r\varepsilon)\,f(\varepsilon; 0, 1)\,d\varepsilon + \int u'(\mu_r + \sigma_r\varepsilon)\,\varepsilon\,f(\varepsilon; 0, 1)\,d\varepsilon = 0.   (1.48)

    Figure 1.7: Mean-Variance Preferences

If we denote positive values of ε as ε⁺ and negative values as ε⁻, the concavity of the utility function implies for ε⁺ = −ε⁻



u'(\mu_r + \sigma_r\varepsilon_1^{+})\,f(\varepsilon_1; 0, 1) < u'(\mu_r + \sigma_r\varepsilon_1^{-})\,f(\varepsilon_1; 0, 1).   (1.49)

From (1.48) we derive the slope of the indifference curve as

\frac{d\mu_r}{d\sigma_r}\bigg|_{dE(U)=0} = -\frac{\int u'(\mu_r + \sigma_r\varepsilon)\,\varepsilon\,f(\varepsilon; 0, 1)\,d\varepsilon}{\int u'(\mu_r + \sigma_r\varepsilon)\,f(\varepsilon; 0, 1)\,d\varepsilon} = -\frac{A}{B} > 0.   (1.50)

(1.49) implies A < 0. The denominator B is positive since each element of the integrand is positive. Thus, the slope of the indifference curve is positive.

The slope of a (μ_r, σ_r)-indifference curve is increasing iff (if and only if) the individual is risk averse. This can be seen if we differentiate (1.50) with respect to σ_r. Applying the quotient rule we receive

\frac{d^2\mu_r}{d\sigma_r^2} = -\frac{B\displaystyle\int u''(\cdot)\Big[\frac{d\mu_r}{d\sigma_r} + \varepsilon\Big]\varepsilon\,f(\varepsilon; 0, 1)\,d\varepsilon \;-\; A\displaystyle\int u''(\cdot)\Big[\frac{d\mu_r}{d\sigma_r} + \varepsilon\Big]f(\varepsilon; 0, 1)\,d\varepsilon}{B^2} > 0.   (1.51)

Equation (1.51) can be rewritten as

\frac{d^2\mu_r}{d\sigma_r^2} = -\frac{1}{B}\int u''(\cdot)\Big[\frac{d\mu_r}{d\sigma_r}\,\varepsilon + \varepsilon^2 - \frac{A}{B}\,\frac{d\mu_r}{d\sigma_r} - \frac{A}{B}\,\varepsilon\Big]\,f(\varepsilon; 0, 1)\,d\varepsilon.   (1.52)

Using equation (1.50) we can express (1.52) as

\frac{d^2\mu_r}{d\sigma_r^2} = -\frac{1}{B}\int u''(\cdot)\Big[-\frac{A}{B}\,\varepsilon + \varepsilon^2 + \Big(\frac{A}{B}\Big)^2 - \frac{A}{B}\,\varepsilon\Big]\,f(\varepsilon; 0, 1)\,d\varepsilon.   (1.53)

Applying the binomial formula we can simplify (1.53) to

\frac{d^2\mu_r}{d\sigma_r^2} = -\frac{1}{B}\int u''(\mu_r + \sigma_r\varepsilon)\,\Big[\varepsilon - \frac{A}{B}\Big]^2\,f(\varepsilon; 0, 1)\,d\varepsilon > 0,

and, since σ_rr = σ_r², d²σ_rr/dσ_r² = 2 > 0.
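To close, a numerical illustration (an added sketch, not part of the notes) of the slope formula (1.50): with Gauss-Hermite quadrature one can evaluate A and B for a given concave utility and check that the implied slope −A/B is positive and grows with σ_r; the CARA utility and the parameter values are arbitrary assumptions.

    import numpy as np

    b = 2.0                                         # illustrative CARA parameter
    u_prime = lambda r: b * np.exp(-b * r)          # u'(r) for u(r) = -exp(-b*r)

    x, w = np.polynomial.hermite.hermgauss(60)      # Gauss-Hermite nodes/weights
    eps = np.sqrt(2.0) * x                          # standard-normal abscissae

    def slope(mu_r, sigma_r):
        """Slope d(mu_r)/d(sigma_r) = -A/B of the indifference curve, eq. (1.50)."""
        up = u_prime(mu_r + sigma_r * eps)
        A = np.sum(w * up * eps) / np.sqrt(np.pi)   # integral of u' * eps * f
        B = np.sum(w * up) / np.sqrt(np.pi)         # integral of u' * f
        return -A / B

    for sigma_r in (0.1, 0.2, 0.3):
        print(sigma_r, slope(mu_r=0.05, sigma_r=sigma_r))
    # The slope is positive and increases with sigma_r: the indifference curve of a
    # risk-averse investor is upward sloping and convex, as shown in (1.50)-(1.53).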