Jump Notes OCT20

Upload: nicholaspalko

Post on 10-Apr-2018


8/8/2019 Jump Notes OCT20

We could equally well study the indicator function

$$N_t(\omega) \equiv 1_{\{\tau(\omega) \le t\}} = \begin{cases} 0 & \tau(\omega) > t \\ 1 & \tau(\omega) \le t \end{cases}$$

$N_0 = 0$ and $N_t \in \{0,1\}$ for $t > 0$. $N$ is right continuous with left limits (RCLL, cadlag).

For fixed $\omega$, the map $t \mapsto N_t(\omega)$ is increasing, so $N$ is a submartingale and therefore admits a Doob-Meyer decomposition.

How should the increments of $N$, i.e. $N_t - N_s$ for $t > s$, be distributed? Independently? Stationary?

Assumptions and notation

For all time points $T$ we will assume that there is a default-free zero-coupon bond paying off 1 at time $T$; $B(t,T)$ denotes the price at time $t$ of this zero-coupon bond.

$r$ denotes the risk-free interest rate, say continuously compounded.

To make things simpler, we will assume that the defaultable security has zero recovery in the case of default:

Payoff of the defaultable zero-coupon bond $= 1_{\{\tau > T\}}$ at time $T$ (1 if $\tau > T$, 0 if $\tau \le T$).

We denote by $\bar B(t,T)$ the price at time $t$ of the defaultable bond maturing at time $T$.


No Arbitrage Observations

The time value of money yields, for $T > S$: $B(t,T) \le B(t,S)$ and $\bar B(t,T) \le \bar B(t,S)$.

Defaultable bonds carry a spread, so $0 \le \bar B(t,T) \le B(t,T)$.

From term structure theory we know that no arbitrage yields

$$B(t,T) = E^Q\left[\exp\left(-\int_t^T r_s\, ds\right) \Big|\, \mathcal F_t\right]$$

If r and τ are independent

Under the pricing measure $Q$, let us assume for a second that the interest rate $r$ and the default time $\tau$ are independent. This assumption has nothing to do with no-arbitrage. Does it seem realistic?

It is OK if $r$ is constant, as in the Black-Scholes-Merton world. The fundamental relationship for $t = 0$:

$$\bar B(0,T) = E^Q[\text{Discounted Payoff}] = E^Q\left[\exp\left(-\int_0^T r_s\,ds\right) 1_{\{\tau>T\}}\right] = E^Q\left[\exp\left(-\int_0^T r_s\,ds\right)\right] E^Q[1_{\{\tau>T\}}] = B(0,T)\,P(0,T)$$

where we have defined the survival probability until $T$ by

$$P(0,T) \equiv Q(\tau > T) = E^Q[1_{\{\tau>T\}}]$$
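The factorization $\bar B(0,T) = B(0,T)\,P(0,T)$ under independence can be checked by simulation. The sketch below (not from the notes, with arbitrary toy parameters) draws the integrated short rate as a Gaussian, independent of an exponential default time $\tau$, and compares the Monte Carlo defaultable price with the product of the two factors.

```python
import math
import random

random.seed(0)
r_bar, sigma, lam, T, n = 0.03, 0.01, 0.02, 5.0, 200_000

acc = disc = 0.0
for _ in range(n):
    # Integrated short rate over [0,T], drawn as a Gaussian (toy model),
    # independent of the exponential default time tau.
    R = random.gauss(r_bar * T, sigma * math.sqrt(T))
    tau = random.expovariate(lam)
    D = math.exp(-R)             # discount factor exp(-int_0^T r_s ds)
    disc += D
    acc += D * (tau > T)

lhs = acc / n                               # Monte Carlo defaultable price
rhs = (disc / n) * math.exp(-lam * T)       # B(0,T) * P(0,T)
print(lhs, rhs)
```

The two numbers agree to within Monte Carlo error, as the factorization predicts.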


We need the following ingredients to compute $\bar B(0,T)$: the spot rate $r_t$ determines $B(0,T)$, and the distribution of $\tau$ determines $P(0,T)$.

These objects are in turn given by a measure $Q$. Changing $Q$ means changing $B(0,T)$ and $P(0,T)$.

A simple example is to take

$$B(0,T) \equiv \exp(-rT), \quad P(0,T) \equiv \exp(-\lambda T)$$

where $r$ and $\lambda$ are positive constants. In this case $\lambda$ will be the spread:

$$\bar B(0,T) = \exp(-(r+\lambda)T)$$

Survival Probabilities

In general, the survival probability from time $t$ to time $T$ is

$$P(t,T) \equiv Q(\tau > T \mid \mathcal F_t) = E^Q[1_{\{\tau>T\}} \mid \mathcal F_t]$$

There are two arguments in $P(t,T)$:

Maturity: fix $t = 0$ and look at the relationship $(0,\infty) \ni T \mapsto P(0,T)$. This is a right-continuous function with left limits, decreasing with values in $[0,1]$, with $P(0,0) = 1$. Is $P(0,\infty) = 0$?

Price process: fix $T > 0$ and look at the relationship $[0,T] \ni t \mapsto P(t,T)$.


This is a stochastic process with values in $[0,1]$: $t \mapsto P(t,T)$ is a martingale, starting at $E^Q[1_{\{\tau>T\}}]$ and ending up at $1_{\{\tau>T\}}$ at time $T$.

We assume that $(\mathcal F_s)_{s \ge 0}$ satisfies the usual conditions. In particular, we assume that $\mathcal F_s$ is right continuous. Therefore $t \mapsto P(t,T)$ is an RCLL process on $[0,T]$. News is reflected in $\mathcal F_t$, hence in $P(t,T)$.

It is discontinuous at time $t_0$ if $\mathcal F_{t_0-} \ne \mathcal F_{t_0}$, i.e., we cannot peek into the future.

The distribution function of $\tau$ is given by $Q(\tau \le T) = 1 - P(0,T)$.

If we assume $T \mapsto P(0,T)$ is smooth, $\tau$ has the density $p(0,T)$:

$$\frac{\partial}{\partial T} Q(\tau \le T) = -\frac{\partial}{\partial T} P(0,T) \equiv p(0,T)$$

The Poisson Example

Let $\lambda$ be a positive constant and define

$$P(0,T) \equiv \exp\left(-\int_0^T \lambda\, ds\right) = \exp(-\lambda T)$$

Then $\tau$ is exponentially distributed under $Q$. If instead $s \mapsto \lambda(s)$ is a deterministic continuous function we call $\tau$ a curtailed Poisson model.

Continuous Hazards - A quick look

$$h(t,T) \equiv \lim_{\Delta \downarrow 0} \frac{P(t,T) - P(t,T+\Delta)}{\Delta\, P(t,T+\Delta)} = -\frac{1}{P(t,T)} \frac{\partial P}{\partial T}(t,T)$$
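The hazard limit above can be checked numerically for the exponential survival curve: a finite difference should recover the constant $\lambda$ at every maturity. A small illustration (not part of the notes, parameters arbitrary):

```python
import math

lam = 0.04
P = lambda T: math.exp(-lam * T)    # exponential survival curve P(0,T)

def hazard(T, d=1e-6):
    # finite-difference version of the hazard limit
    return (P(T) - P(T + d)) / (d * P(T + d))

print(hazard(1.0), hazard(7.0))
```

Both values come out at (numerically) $\lambda = 0.04$, independent of $T$, as they must for an exponential curve.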


Often the terms hazard rate and intensity are used interchangeably in the literature. The formal definition of hazards from general analysis is as follows.

Definition: Given a measure $\nu$ on $(0,\infty)$ we define the corresponding hazard measure $\mu$ by the Radon-Nikodym derivative

$$\frac{d\mu}{d\nu}(t) \equiv \frac{1}{\nu([t,\infty))}, \quad t \in (0,\infty)$$

Theorem: $\nu$'s hazard measure completely characterizes $\nu$.

We will often face the following situation ($dm$ is the Lebesgue measure): a measure $\mu$ on $(0,\infty)$ is given with Radon-Nikodym derivative $u$:

$$\frac{d\mu}{dm}(t) = u(t)$$

where $u$ is a non-negative continuous function. We then define the measure $\nu$ by its survival function

$$\nu([t,\infty)) \equiv \exp\left(-\int_0^t u(s)\, ds\right)$$

and we will call $u$ the hazard function corresponding to $\nu$.

Theorem: $\mu$ is $\nu$'s hazard measure.

Proof: Since $u$ is continuous we have

$$\frac{d\nu}{dm}(t) = \nu([t,\infty))\, u(t)$$

and hence we get

$$\frac{d\mu}{d\nu}(t) = \frac{\frac{d\mu}{dm}(t)}{\frac{d\nu}{dm}(t)} = \frac{u(t)}{\nu([t,\infty))\, u(t)} = \frac{1}{\nu([t,\infty))}$$

Back to the Survival Probabilities

We will often assume that, for a continuous $h(\cdot) \ge 0$,

$$P(0,T) = \exp\left(-\int_0^T h(s)\, ds\right)$$


More generally: if for fixed $t$ the function $s \mapsto h(t,s)$ is non-negative and continuous, then $P$ is taken to be

$$P(t,T) = \exp\left(-\int_t^T h(t,s)\, ds\right)$$

For fixed $t$ we see that $P(t,\cdot)$ is differentiable:

$$\frac{\partial P}{\partial T}(t,T) = -P(t,T)\, h(t,T)$$

A Heuristical Viewpoint

We let $\Delta$ be a small time step and $T \ge t$:

$$Q(\tau \in [T, T+\Delta] \mid \mathcal F_t) = Q(\tau \le T+\Delta \mid \mathcal F_t) - Q(\tau \le T \mid \mathcal F_t) = P(t,T) - P(t,T+\Delta) \approx -\Delta\, \frac{\partial P}{\partial T}(t,T) = \Delta\, P(t,T)\, h(t,T)$$

For $t = T = 0$ and $h(\cdot,\cdot) = \lambda$ we see that $Q(\tau \le \Delta) \approx \lambda \Delta$.

The hazard/intensity controls default within the next second.

An Equivalent Approach

Default can be described by the default indicator

$$N_t \equiv \begin{cases} 0 & \text{the firm survived until } t \\ 1 & \text{the firm defaulted before time } t \end{cases}$$

It is right continuous with left limits (RCLL), not predictable, increasing with values in $\{0,1\}$, and $N_0 = 0$.


The time of default is given by $\tau \equiv \inf\{t > 0 \mid N_t = 1\}$.

On the other hand: given $\tau$ we can define the simple point process $N_t \equiv 1_{\{\tau \le t\}}$.

The Flow of Information

We let $\mathcal F_t$ denote the market information at time $t$. It is right continuous (the usual condition): $\mathcal F_t = \mathcal F_{t+}$. It might not be left continuous: $\mathcal F_{t-} \ne \mathcal F_t$. We cannot look, not even infinitesimally, into the future.

Examples of filtrations include: $\mathcal F_t \equiv \sigma(B_s)_{s\in[0,t]}$ where $B$ is a Brownian motion; $\mathcal F_t \equiv \sigma(X_s)_{s\in[0,t]}$ where $X$ is a diffusion process; $\mathcal F_t \equiv \sigma(N_s)_{s\in[0,t]}$ where $N$ is a default indicator.

Stopping Times

The formal definition of a stopping time is as a positive random variable $\tau: \Omega \to [0,\infty]$ such that at each time $t$ we can tell whether or not default has occurred:

$$\{\tau \le t\} \in \mathcal F_t \quad \forall t$$

Since $\mathcal F$ is right-continuous it is enough that $\{\tau < t\} \in \mathcal F_t$ for all $t$.


We think of $\tau = \infty$ as the case of no default. Knowledge up to time $\tau$ is given by $\mathcal F_\tau$.

Examples of stopping times:

Every positive constant $T$ is a stopping time.

The first time a Brownian motion hits some set $A \subseteq \mathbb R$: $\tau \equiv \inf\{t > 0 \mid B_t \in A\}$. See the end of this note for more on passage times.

The optimal time to exercise an American option.

Doob-Meyer's Decomposition

To model the distribution of $\tau$ (the time of the credit event) we will model the simple point process $N$. $N$ is increasing, hence $N$ is a submartingale.

The Decomposition Theorem of submartingales by Doob-Meyer yields a unique process $A$ such that: $A$ is of finite variation; $A$ is predictable (left continuous); $A$ is increasing with $A_0 = 0$; and the following defines an RCLL martingale:

$$M_t \equiv N_t - A_t$$

Example (I)

Let $B$ be a Brownian motion and let $\mathcal F_t$ be its natural filtration: $\mathcal F_t \equiv \sigma(B_s)_{s\in[0,t]}$.


By Jensen's inequality $B_t^2$ is a submartingale, hence we can find $A$ such that $B_t^2 - A_t$ is a martingale. What is $A$?

Example (II)

A process $N$ is a Poisson process if $N$ satisfies: it starts at zero, $N_0 = 0$, and it only has positive jumps of size one, $N_t \in \{0,1,2,3,...\}$. Such a process is by nature a submartingale (it is increasing), and the Poisson process is characterized by $N_t - \lambda t$ being a martingale for a positive constant $\lambda$. The compensator is given by $A_t = \lambda t$ and we call $\lambda$ the intensity.

Similarly to the Brownian case, we can find an increasing predictable process $A$ such that $(N_t - \lambda t)^2 - A_t$ is a martingale. What is $A$?
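The detail section at the end of these notes shows that the answer is again $A_t = \lambda t$, i.e. $E[N_t] = \lambda t$ and $E[(N_t - \lambda t)^2] = \lambda t$. A quick Monte Carlo check (a sketch with arbitrary parameters, not part of the notes):

```python
import random

random.seed(1)
lam, t, n = 2.0, 3.0, 100_000

def poisson_count(mean):
    # count unit-rate exponential inter-arrivals falling before `mean`
    k, s = 0, random.expovariate(1.0)
    while s <= mean:
        k += 1
        s += random.expovariate(1.0)
    return k

samples = [poisson_count(lam * t) for _ in range(n)]
m1 = sum(samples) / n                                # ~ E[N_t] = lam*t
m2 = sum((x - lam * t) ** 2 for x in samples) / n    # ~ E[(N_t - lam*t)^2] = lam*t
print(m1, m2)
```

Both estimates come out near $\lambda t = 6$.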

The Crucial Property of Compensators

When modeling credit events the compensator will be very handy. To see this let $N$ be a default indicator. Since both $M$ and $A$ are 0 at time 0 we get

$$E[N_t] = E[A_t] \quad \forall t$$

Using the connection with $\tau$ we get

$$E[A_t] = E[N_t] = P(\tau \le t) \quad \forall t$$

To model the distribution of $\tau$ we only need to model the compensator.

A Few Warnings

We will face situations where:


If we change the measure, i.e., change the distribution of $\tau$, the Doob-Meyer decomposition changes. If we change the information set, i.e., change $\mathcal F_t$, the Doob-Meyer decomposition changes.

Brownian Measure Shifts

Let us review the Brownian case and take $B$ to be a $P$-Brownian motion. What happens if we switch to an equivalent measure $Q$? For a process $\theta$, we define the stochastic exponential

$$\mathcal E\left(\int \theta\, dB\right)_t \equiv \exp\left(\int_0^t \theta_u\, dB_u - \frac12 \int_0^t \theta_u^2\, du\right)$$

From Stochastic Calculus II we know that if the above defines a genuine martingale we can switch measure on $\mathcal F_T$ by defining

$$\frac{dQ}{dP} \equiv \exp\left(\int_0^T \theta_u\, dB_u - \frac12 \int_0^T \theta_u^2\, du\right)$$

Under this measure $Q$, the process $B$ is no longer a Brownian motion but will have drift:

$$B^Q_t \equiv B_t - \int_0^t \theta_u\, du$$

is a $Q$-Brownian motion.

There are many ways to ensure that $\mathcal E(\cdot)$ is a martingale (e.g., Novikov's condition). To be on safe ground we will typically assume that $\theta$ is bounded.

The importance of $\mathcal E(\cdot)$ being a martingale is seen from

$$Q(\Omega) = \int_\Omega \frac{dQ}{dP}\, dP = E\left[\frac{dQ}{dP}\right] = E\left[\mathcal E\left(\int \theta\, dB\right)_T\right] = 1$$

One can construct examples of (unbounded) $\theta$ for which this property breaks (remember the cookie question in Stoch II; see also the very end of this note).
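For a bounded (here constant) $\theta$ the stochastic exponential is an honest martingale with unit mean, and reweighting by it shifts the mean of $B_T$ to $\theta T$. A simulation sketch (constant $\theta$, illustrative parameters, not from the notes):

```python
import math
import random

random.seed(2)
theta, T, n = 0.5, 1.0, 200_000

z_sum = zb_sum = 0.0
for _ in range(n):
    BT = random.gauss(0.0, math.sqrt(T))
    Z = math.exp(theta * BT - 0.5 * theta ** 2 * T)  # E(int theta dB)_T, constant theta
    z_sum += Z
    zb_sum += Z * BT                                 # E^Q[B_T] via reweighting

mean_Z, mean_B = z_sum / n, zb_sum / n
print(mean_Z, mean_B)    # expect ~1 and ~theta*T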


Brownian Filter Shift

Let $B$ be a Brownian motion and let $\mathcal F$ denote the natural filtration generated by $B$. Given that we know $B_s$, our best estimate of $B_t$ is $B_s$: $B$ is an $\mathcal F$-martingale, $E[B_t \mid \mathcal F_s] = B_s$.

Let us try to add some future information about $B$:

$$\mathcal G_t \equiv \mathcal F_t \vee \sigma(B_1) = \sigma(B_s, B_1)_{s\in[0,t]}$$

Knowing where $B$ is going to end at time 1 changes how we view the distribution of $B_t$ at time $s$. Allowing additional information can create arbitrage possibilities in the model (think of $B$ as modeling stock prices). We interpolate $B_s$ and $B_1$ to get the estimate of $B_t$:

$$E[B_t \mid \mathcal G_s] = B_s + \frac{B_1 - B_s}{1-s}(t-s),$$

so $B$ is not a $\mathcal G$-martingale.

Define the continuous process

$$\tilde B_t \equiv B_t - \int_0^t \frac{B_1 - B_u}{1-u}\, du$$

We can prove that $\tilde B$ is a Brownian motion for $\mathcal G$: $\tilde B$ is a $\mathcal G$-martingale with quadratic variation $[\tilde B]_t = [B]_t = t$.

$B$ has no $\mathcal F$-drift; however, it does have a $\mathcal G$-drift:

$$B_t = \tilde B_t + \int_0^t \frac{B_1 - B_u}{1-u}\, du$$

See the end of these notes for more details.
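The interpolation formula can be checked in simulation: the residual $B_t - B_s - \frac{t-s}{1-s}(B_1 - B_s)$ should have mean zero and be uncorrelated with $B_1 - B_s$, which is exactly what makes the conditional-expectation formula work. An illustrative sketch (parameters arbitrary):

```python
import math
import random

random.seed(3)
s, t, n = 0.3, 0.6, 200_000
beta = (t - s) / (1.0 - s)

r_sum = c_sum = 0.0
for _ in range(n):
    Bs = random.gauss(0.0, math.sqrt(s))
    Bt = Bs + random.gauss(0.0, math.sqrt(t - s))
    B1 = Bt + random.gauss(0.0, math.sqrt(1.0 - t))
    R = (Bt - Bs) - beta * (B1 - Bs)   # interpolation residual
    r_sum += R
    c_sum += R * (B1 - Bs)

mR, cR = r_sum / n, c_sum / n
print(mR, cR)    # both should be near 0
```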


A General Simple Point Process

The modeling of the default time $\tau$ is often done via the first jump time of a counting process $N$: $N$ is right continuous with left limits; $N_0 = 0$; $N$ is increasing with $N_t \in \{0,1,...\}$ for all $t$; and there are only finitely many jumps in finite time.

The Poisson process is our main example: for $s > t$ the increments of $N$ are stationary and independently distributed:

$$N_s - N_t \sim \text{Poisson with parameter } \lambda(s-t)$$

General Intensities

$\lambda$ is called $N$'s intensity if the compensator $A$ from the Doob-Meyer decomposition of $N$, $M_t = N_t - A_t$, can be written as (like in the previous examples)

$$A_t = \int_0^t \lambda_u\, du$$

For all $(u,\omega)$ we have $\lambda_u(\omega) \ge 0$, and $\lambda$ is predictable (left continuous).

Aven's result: let $N_t \equiv 1_{\{\tau \le t\}}$ for some stopping time $\tau$. The default intensity for the firm is given by

$$\lambda_t = -\frac{\partial}{\partial T} Q(\tau > T \mid \mathcal F_t)\Big|_{T=t}.$$

Sometimes $A$ is called the predictable quadratic variation of $N$, since yet another martingale is given by

$$[N]_t - \int_0^t \lambda_u\, du$$


Predictability

Can we take $A_t \equiv N_t$ in the Doob-Meyer decomposition? No: $N$ is not predictable, we cannot foretell the value of $N$. Observing $N$ on the interval $[0, t_0)$ does not imply that we can predict $N_{t_0}$.

Often we are given an adapted RCLL process $X$, and by taking left limits

$$X_{t-} \equiv \lim_{s \uparrow t} X_s$$

we obtain a predictable process.

For a Brownian motion $B$ we can define the stochastic integral $F_t \equiv \int_0^t f_u\, dB_u$ for an adapted process $f_t(\omega)$ (say bounded, to avoid regularity problems).

We would like $F$ to inherit $B$'s properties: $B$ is a martingale, so is $F$; $B$ is continuous, so is $F$. In order to preserve the martingale property $f$ has to be adapted (Stratonovich integrals do not preserve it).

To illustrate what can go wrong, let $N$ be a Poisson process with jump times $\tau_n$ and define

$$f_t \equiv \begin{cases} 1 & t < \tau_1 \\ 0 & t \ge \tau_1 \end{cases}$$

$f_t$ is adapted since $\tau_1$ is a stopping time, and $t \mapsto f_t$ is right continuous.


Define the integrated process

$$F_t \equiv \int_0^t f_u\, dM_u$$

Is $F$ a right-continuous martingale? Computation of $F$ gives

$$F_t = \int_0^t f_u\, dN_u - \lambda \int_0^t f_u\, du = \sum_{n=1}^{N_t} f_{\tau_n} - \lambda \int_0^t f_u\, du = 0 - \lambda \int_0^t f_u\, du = -\lambda\,(t \wedge \tau_1)$$

which is decreasing in $t$ ($\min(x,y) = x \wedge y$).

Right continuous integrands do not lead to martingales.

Theorem: Assume that $g$ is predictable (left-continuous and adapted) and, say, bounded. Then the process

$$G_t \equiv \int_0^t g_u\, dM_u$$

is a martingale.

The theorem is stated for bounded processes but is often valid for unbounded integrands as well. To exemplify, let us define

$$g_s \equiv N_{s-}, \quad h_s \equiv N_s$$

and compute, for the Poisson case $M_t \equiv N_t - \lambda t$,

$$E\left[\int_0^t g_s\, dM_s\right] \quad \text{and} \quad E\left[\int_0^t h_s\, dM_s\right]$$

See the end of this note for all the calcs.
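The calcs at the end of the notes give $E[\int_0^t N_{s-}\,dM_s] = 0$ and $E[\int_0^t N_s\,dM_s] = \lambda t$. Both can be anticipated by simulation, using the pathwise identities $\int_0^t N_u\,dN_u = \sum_{n\le N_t} n$, $\int_0^t N_{u-}\,dN_u = \sum_{n\le N_t}(n-1)$, and $\int_0^t N_u\,du = \sum_i (t - \tau_i)$ (a sketch, arbitrary parameters):

```python
import random

random.seed(4)
lam, t, n = 1.5, 2.0, 100_000

acc_pred = acc_rc = 0.0
for _ in range(n):
    jumps, s = [], random.expovariate(lam)
    while s <= t:                      # Poisson jump times on [0, t]
        jumps.append(s)
        s += random.expovariate(lam)
    k = len(jumps)
    leb = sum(t - u for u in jumps)    # pathwise int_0^t N_u du
    acc_pred += k * (k - 1) / 2 - lam * leb   # int_0^t N_{u-} dM_u
    acc_rc   += k * (k + 1) / 2 - lam * leb   # int_0^t N_u   dM_u

print(acc_pred / n, acc_rc / n)    # expect ~0 and ~lam*t
```

The predictable integrand gives (numerically) zero mean; the right-continuous one gives $\lambda t = 3$, so it cannot be a martingale.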


Main points related to predictability:

Martingales $M$ are right continuous with left limits. The integrated process $F_t \equiv \int_0^t f_u\, dM_u$ is right-continuous. $F$ is a martingale if $f$ is predictable (i.e., adapted and left continuous). $F$ can still be well-defined if $f$ fails to be predictable. For fixed $\omega$ the processes $f_t$ and $f_{t-}$ only differ at finitely many points, but that can cause the martingale property of $F$ to fail.

Change of Measure

The stochastic exponential $\mathcal E(X)$ is defined as the solution of the SDE

$$dY_t = Y_{t-}\, dX_t,$$

for some process $X$ (with or without jumps). Typically, $X$ will be a martingale and $Y$ will be used to change measure.

By using an appropriate version of Ito's formula we see

$$\mathcal E(X)_t = \exp\left(X^c_t - \frac12 [X^c]_t\right) \prod_{0 < s \le t} (1 + \Delta X_s)$$

where $X^c$ is the continuous part of $X$:

$$X^c_t \equiv X_t - \sum_{0 < s \le t} \Delta X_s$$

In the Brownian case we have $X_t \equiv \int_0^t \theta_u\, dB_u$, hence $X = X^c$ and so

$$\mathcal E(X)_t = \exp\left(\int_0^t \theta_u\, dB_u - \frac12 \int_0^t (\theta_u)^2\, du\right)$$


In the jump case $N$, with jump times $\tau_1, \tau_2, ...$, we have

$$X_t \equiv \int_0^t \varphi_u\,(dN_u - \lambda_u\, du) = \sum_{n=1}^{N_t} \varphi_{\tau_n} - \int_0^t \varphi_u \lambda_u\, du$$

So the jumps in $X$ are given by $\Delta X_{\tau_n} = \varphi_{\tau_n}$, and the continuous part is therefore

$$X^c_t = \int_0^t \varphi_u\,(dN_u - \lambda_u\, du) - \sum_{n=1}^{N_t} \varphi_{\tau_n} = -\int_0^t \varphi_u \lambda_u\, du$$

So $[X^c]_t = 0$ and we get

$$\mathcal E(X)_t = \exp\left(-\int_0^t \varphi_u \lambda_u\, du\right) \prod_{n=1}^{N_t} (1 + \varphi_{\tau_n})$$

For the Poisson case ($\lambda_u = \lambda$), let us take a constant $\varphi_t \equiv k$. The stochastic exponential then equals

$$\mathcal E(X)_t = \exp\left(-\int_0^t \varphi_u \lambda_u\, du\right) \prod_{n=1}^{N_t} (1 + \varphi_{\tau_n}) = \exp(-\lambda k t) \prod_{n=1}^{N_t} (1 + k) = \exp(-\lambda k t)\,(1+k)^{N_t}$$
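For this Poisson case the martingale property $E[\mathcal E(X)_t] = 1$ follows from the Poisson generating function, $E[(1+k)^{N_t}] = e^{\lambda k t}$, and can be checked directly by simulation (illustrative parameters, not from the notes):

```python
import math
import random

random.seed(5)
lam, k, t, n = 1.0, 0.7, 2.0, 200_000

tot = 0.0
for _ in range(n):
    m, s = 0, random.expovariate(lam)   # draw N_t by counting arrivals
    while s <= t:
        m += 1
        s += random.expovariate(lam)
    tot += math.exp(-lam * k * t) * (1.0 + k) ** m   # E(X)_t on this path

mean_E = tot / n
print(mean_E)    # expect ~1
```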

Instead of specifying $\varphi$ we often specify $\psi \equiv \varphi + 1$. $\mathcal E(X)$ is a martingale if $\varphi$ is bounded.

Girsanov's theorem

We will construct $Q$ by specifying $X$ like

$$dX_t \equiv \theta_t\, dB_t + (\psi_t - 1)(dN_t - \lambda_t\, dt) \equiv dX^B_t + dX^N_t$$

and then define the measure $Q$ by

$$\frac{dQ}{dP} \equiv \mathcal E(X)_T$$


For $Q$ to be an equivalent probability measure we need $\mathcal E(X)$ to be a genuine strictly positive martingale. E.g., if $\theta$ and $\psi$ are bounded and the intensity $\lambda$ is deterministic we are safe. Note that $\theta$ can depend on $N$ and $\psi$ can depend on $B$.

The general case can now be computed as follows by using Yor's relation together with $[N,B]_t = 0$:

$$\mathcal E(X^B)_t\, \mathcal E(X^N)_t = \mathcal E(X^B + X^N + [X^B, X^N])_t = \mathcal E(X)_t$$

Inserting the expressions from the previous slides shows that $\mathcal E(X)_t$ equals

$$\exp\left(\int_0^t \theta_u\, dB_u - \frac12 \int_0^t (\theta_u)^2\, du - \int_0^t (\psi_u - 1)\lambda_u\, du\right) \prod_{n=1}^{N_t} \psi_{\tau_n}$$

This can be generalized to multiple Brownian motions and marked point processes. There is no standard as to whether $\varphi$ or $\varphi + 1$ should be used.

Theorem: Given that $\mathcal E(X)$ defines a true measure $Q$, then the following is a $Q$-Brownian motion:

$$B^Q_t \equiv B_t - \int_0^t \theta_u\, du$$

and $N$ has intensity $\lambda^Q_t \equiv \psi_t \lambda_t$ under $Q$. In other words, the following is a $Q$-martingale:

$$M^Q_t \equiv N_t - \int_0^t \lambda^Q_u\, du$$

Application (I)

Say that we are given a Poisson process $N$ with intensity $\lambda$, meaning that $N_t - \lambda t$ is a $P$-martingale.


We want to construct a measure $Q$ such that $N$ is a Poisson process with intensity $\mu$ under $Q$. So we are looking for a measure $Q$ such that $N_t - \mu t$ is a $Q$-martingale.

Let us define the constant $\psi \equiv \mu/\lambda$. From the previous slides the stochastic exponential is

$$\mathcal E(X^N)_t = \exp(-(\psi - 1)\lambda t)\, \psi^{N_t} = \exp((\lambda - \mu)t)\left(\frac{\mu}{\lambda}\right)^{N_t}$$

$\mathcal E(X^N)$ is a true martingale since $\psi$ is bounded.

Given that $\mathcal E(X^N)$ is a genuine strictly positive martingale we can define the measure

$$\frac{dQ}{dP} \equiv \mathcal E(X^N)_T$$

Under this measure $N$ has intensity $\lambda^Q = \psi\lambda = \mu$, meaning that $N_t - \mu t$ is a $Q$-martingale.
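Reweighting paths of the $\lambda$-Poisson process by $\mathcal E(X^N)_t$ should make it look like a $\mu$-Poisson process: the weights average to 1 and the weighted mean of $N_t$ is $\mu t$. An illustrative check ($\lambda = 1$, $\mu = 2$ chosen arbitrarily):

```python
import math
import random

random.seed(6)
lam, mu, t, n = 1.0, 2.0, 1.5, 200_000
psi = mu / lam

w_sum = wN_sum = 0.0
for _ in range(n):
    m, s = 0, random.expovariate(lam)   # N_t under P (intensity lam)
    while s <= t:
        m += 1
        s += random.expovariate(lam)
    w = math.exp((lam - mu) * t) * psi ** m   # E(X^N)_t, the density on F_t
    w_sum += w
    wN_sum += w * m

print(w_sum / n, wN_sum / n)    # expect ~1 and ~mu*t
```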


Application (II)

Assume that we have the financial market:

The risk-free asset: $\dfrac{db_t}{b_t} = r_t\, dt$

The risky asset: $\dfrac{dS_t}{S_t} = \mu_t\, dt + \sigma_t\, dB_t + \delta_t\,(dN_t - \lambda_t\, dt)$

The dynamics of the deflated market $\tilde S_t \equiv S_t / b_t$ are then

$$\frac{d\tilde S_t}{\tilde S_t} = (\mu_t - r_t)\, dt + \sigma_t\, dB_t + \delta_t\,(dN_t - \lambda_t\, dt)$$

If we change measure from $P$ to $Q$ using the $\mathcal E(X)$ determined by $(\theta, \psi)$ we get

$$\frac{d\tilde S_t}{\tilde S_t} = (\mu_t - r_t + \sigma_t \theta_t - \delta_t \lambda_t + \delta_t \psi_t \lambda_t)\, dt + \sigma_t\,(dB_t - \theta_t\, dt) + \delta_t\,(dN_t - \psi_t \lambda_t\, dt)$$

In order for $\theta$ and $\psi$ to define a $Q$ we need

$$\mu_t - r_t + \sigma_t \theta_t - \delta_t \lambda_t + \delta_t \psi_t \lambda_t = 0$$

for all $t$ and $\omega$, so that $\tilde S$ has no $Q$-drift.

A family of processes $(\theta, \psi)$ will satisfy this drift equation, meaning that the market is incomplete.


Cox processes

The Classical Poisson Process

Let $N$ be a counting process. According to Doob-Meyer, $N_t - A_t$ is an RCLL martingale.

If $A_t = \lambda t$ we call $N$ a Poisson process, and we know that the increments

$$N_{t_1} - N_{t_0},\ ...,\ N_{t_n} - N_{t_{n-1}}$$

for $0 \le t_0 < t_1 < ... < t_n$ are independent and stationary (only the time difference matters).

Inhomogeneous Poisson Process

Let $N$ be a counting process with independent increments, i.e., $N_t - N_s \perp N_v - N_u$ for all $t \ge s \ge v \ge u$. We define the deterministic function $t \mapsto \Lambda(t)$ by

$$\Lambda(t) \equiv E[N_t]$$

Some immediate properties of $\Lambda$: $\Lambda(0) = 0$; $\Lambda$ is increasing; $\Lambda$ is RCLL but predictable (it is deterministic).

One can show, using the independence of increments, that the martingale property also holds for

$$M_t \equiv N_t - \Lambda(t)$$

meaning that $\Lambda$ indeed is $N$'s compensator.


We want to construct $N$ with a given intensity. Let $u \mapsto \lambda(u) \ge 0$ be a deterministic function and define

$$\Lambda(t) \equiv \int_0^t \lambda(u)\, du$$

The construction of $N$ can be carried out as follows. Let $N^{(1)}$ be a standard Poisson process, meaning $N^{(1)}_t - t$ is a martingale, and define the time-changed process by

$$N_t \equiv N^{(1)}(\Lambda(t))$$

Then $N_t - \Lambda(t)$ is a martingale, meaning that $\Lambda$ compensates $N$. Given that $N^{(1)}$ is Poisson, one can show that for $t > s$

$$N_t - N_s \sim \text{Poisson}\left(\int_s^t \lambda(u)\, du\right)$$

meaning that for $k = 0, 1, 2, ...$

$$Q(N_t - N_s = k) = \frac{1}{k!}\left(\int_s^t \lambda(u)\, du\right)^k \exp\left(-\int_s^t \lambda(u)\, du\right)$$

In particular, we see that

$$Q(N_T = 0) = Q(N_T - N_0 = 0) = \exp\left(-\int_0^T \lambda(u)\, du\right)$$

Let $B$ be a Brownian motion. Are $B$ and $N^{(1)}$ always independent? What about $B$ and $N$?

Pricing with Inhomogeneous Poisson

We let $\tau$ be the first jump time of $N$. Then we have

$$P(0,T) = Q(\tau > T) = Q(N_T = 0) = \exp\left(-\int_0^T \lambda(u)\, du\right)$$
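The time-change construction and the survival formula can be exercised numerically. With $\lambda(u) = a + bu$ (an arbitrary choice), $N_T = N^{(1)}(\Lambda(T)) = 0$ exactly when the first jump of the standard Poisson $N^{(1)}$ lands beyond $\Lambda(T)$ (a sketch, not from the notes):

```python
import math
import random

random.seed(7)
a, b, T, n = 0.1, 0.3, 2.0, 200_000
Lam = a * T + 0.5 * b * T * T    # Lambda(T) = int_0^T (a + b*u) du

survived = 0
for _ in range(n):
    # N_T = N^(1)(Lambda(T)) = 0  iff  the first unit-rate jump exceeds Lambda(T)
    survived += random.expovariate(1.0) > Lam

frac = survived / n
print(frac, math.exp(-Lam))
```

The empirical survival fraction matches $\exp(-\Lambda(T))$.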


The risky payoffs we need to value typically depend in some way on the state process $X$.

The Cox Process

Conditional on knowing everything in the economy up to some time $T$ (knowing the state process), the integral

$$\int_0^T \lambda(X_u)\, du$$

can be seen as a constant.

If, given $\mathcal G_T$, $N$ is an inhomogeneous Poisson process, we call $N$ a Cox process. This excludes $N_T$ being $\mathcal G_T$-measurable: by knowing the entire path of $X$ we cannot tell whether default has happened or not (we only know the probability that it will happen in the next second).

Construction of a Cox Process

We could stochastically time change $N^{(1)}$ to create a Cox process; however, the following construction works well with MC-simulation (HW3).

Let $U$ be uniformly distributed on $[0,1]$, independent of $X$: $Q(U \le u) = u$.

We define the countdown process $Y$ by

$$Y_s \equiv \exp\left(-\int_0^s \lambda(X_u)\, du\right)$$

Since $\lambda \ge 0$, $Y$ is a decreasing process with $Y_0 = 1$. In the Cox framework, the default time is defined as

$$\tau \equiv \inf\{s : Y_s \le U\} = \inf\left\{s : \int_0^s \lambda(X_u)\, du \ge -\log(U)\right\},$$

and we note that $-\log(U)$ is exponentially distributed with parameter one.
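This recipe is easy to simulate. In the toy sketch below the "state process" is degenerate: each path draws a constant intensity level $L$ uniform on $[0, 0.2]$ (an assumption made purely to get a closed-form benchmark), so $\tau = -\log(U)/L$ and $P(0,T) = E[e^{-LT}]$:

```python
import math
import random

random.seed(8)
T, n = 3.0, 200_000

survived = 0
for _ in range(n):
    L = random.uniform(0.0, 0.2)   # per-path constant intensity level (toy state)
    U = random.random()
    # tau = inf{s : L*s >= -log(U)}
    tau = -math.log(U) / L if L > 0 else float("inf")
    survived += tau > T

mc = survived / n
exact = (1.0 - math.exp(-0.2 * T)) / (0.2 * T)   # E[exp(-L*T)] for L ~ U(0, 0.2)
print(mc, exact)
```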


The Independence Lemma

Assume that we are given two independent random variables $V$ and $W$ and we need to calculate $E^Q[f(V,W) \mid \sigma(V)]$.

Let us first assume that $f(v,w) \equiv f_1(v) f_2(w)$ and compute

$$E^Q[f(V,W) \mid \sigma(V)] = f_1(V)\, E^Q[f_2(W) \mid \sigma(V)] = f_1(V)\, E^Q[f_2(W)] = E^Q[f(v,W)]\big|_{v=V}$$

The general result follows from an approximation argument.

τ's Distribution

We assume that $X$ and $U$ are independent under $Q$. Therefore $Y$ and $U$ are also independent under $Q$. The independence lemma gives us

$$Q(\tau > T \mid \mathcal G_T) = Q(Y_T > U \mid \mathcal G_T) = E^Q[1_{\{Y_T > U\}} \mid \mathcal G_T] = E^Q[1_{\{v > U\}}]\big|_{v = Y_T} = Q(U < v)\big|_{v = Y_T} = Y_T = \exp\left(-\int_0^T \lambda(X_u)\, du\right)$$

The Information Sets

As usual we define the default process

$$N_t \equiv \begin{cases} 1 & \tau \le t \\ 0 & 0 \le t < \tau \end{cases}$$

and the corresponding filtration

$$\mathcal F^N_t \equiv \sigma(N_u)_{u\in[0,t]}$$


$\tau$ is a stopping time for the filtration $\mathcal F^N$.

The market has access to both $\mathcal G$ and $\mathcal F^N$, meaning that the market observes

$$\mathcal F_t \equiv \mathcal G_t \vee \mathcal F^N_t \equiv \sigma(N_u, X_u)_{u\in[0,t]}$$

Iterated Conditional Expectations

Remember that for any random variable $Y$ we have $E^Q[Y] = E^Q[E^Q[Y \mid \mathcal G_T]]$. So, e.g., we get that

$$Q(\tau > T) = E[Q(\tau > T \mid \mathcal G_T)] = E\left[\exp\left(-\int_0^T \lambda(X_u)\, du\right)\right]$$

We need to price payoffs like $f(X_T)\, 1_{\{\tau>T\}}$: we are promised $f(X_T)$ but will only receive it in the case of no default.

Cox Pricing

We consider a defaultable zero-coupon bond with payoff (no recovery) $1_{\{\tau > T\}}$ at time $T$. In the Cox framework the price is then given by

$$\bar B(0,T) = E^Q\left[\exp\left(-\int_0^T r(X_s)\, ds\right) 1_{\{\tau>T\}}\right] = E^Q\left[E^Q\left[\exp\left(-\int_0^T r(X_s)\, ds\right) 1_{\{\tau>T\}} \,\Big|\, \mathcal G_T\right]\right] = E^Q\left[\exp\left(-\int_0^T r(X_s)\, ds\right) E^Q[1_{\{\tau>T\}} \mid \mathcal G_T]\right] = E^Q\left[\exp\left(-\int_0^T r(X_s)\, ds\right) Q(\tau > T \mid \mathcal G_T)\right]$$


Using the form of the survival probability yields

$$\bar B(0,T) = E^Q\left[\exp\left(-\int_0^T r(X_s)\, ds\right) Q(\tau > T \mid \mathcal G_T)\right] = E^Q\left[\exp\left(-\int_0^T r(X_s)\, ds\right) \exp\left(-\int_0^T \lambda(X_s)\, ds\right)\right] = E^Q\left[\exp\left(-\int_0^T \big(r(X_s) + \lambda(X_s)\big)\, ds\right)\right]$$
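The two representations of the defaultable bond price, discounting the survival indicator versus discounting at $r + \lambda$, can be compared on a toy Cox model (constant $r$, per-path constant random intensity; parameters arbitrary, not from the notes):

```python
import math
import random

random.seed(9)
r, T, n = 0.03, 5.0, 200_000

acc_ind = acc_int = 0.0
for _ in range(n):
    L = random.uniform(0.0, 0.1)   # per-path constant random intensity (toy state)
    tau = random.expovariate(L) if L > 0 else float("inf")
    acc_ind += math.exp(-r * T) * (tau > T)   # discount the survival indicator
    acc_int += math.exp(-(r + L) * T)         # discount at r + lambda instead

price_ind, price_int = acc_ind / n, acc_int / n
print(price_ind, price_int)
```

The two estimators agree to within Monte Carlo error, with the intensity-based one noticeably less noisy, which is one practical reason to prefer it.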

Instead of $r$ we discount by $r + \lambda$, so we can interpret the intensity as a stochastic spread. No independence assumption is needed for this result.

More Building Blocks

More general payoff at maturity: $f(X_T)\, 1_{\{\tau>T\}}$.

Payoffs until default (per unit of time): $g(X_u)\, 1_{\{\tau>u\}}$ (a risky annuity).

Payoff at default: $h(X_\tau)\, 1_{\{\tau \le T\}}$. We allow the recovery to depend on the state process.

Cox Pricing - Rates

The contract has payment dates $t_1 < t_2 < ... < t_N$. Say that a payment of $s$ is made per time unit. If the sequence $t_1 < t_2 < ... < t_N$ is equally spaced, $t_n - t_{n-1} = \Delta$, then a payment of $s(\Delta) \equiv s\Delta$ is made at each $t_n$ if no default has occurred.


The time-zero value of such a payment at time $t_n$ is

$$E^Q\left[\exp\left(-\int_0^{t_n} r(X_u)\, du\right) s(\Delta)\, 1_{\{\tau > t_n\}}\right] = s(\Delta)\, E^Q\left[\exp\left(-\int_0^{t_n} \big(r(X_u) + \lambda(X_u)\big)\, du\right)\right]$$

The overall value of the fee leg is then

$$\sum_{n=1}^N s(\Delta)\, E^Q\left[\exp\left(-\int_0^{t_n} \big(r(X_u) + \lambda(X_u)\big)\, du\right)\right]$$

By letting $\Delta \to 0$ we obtain the continuous analogue

$$s\, E^Q\left[\int_0^T \exp\left(-\int_0^t \big(r(X_u) + \lambda(X_u)\big)\, du\right) dt\right]$$

Instead of a constant rate we could allow for $g(X_s)\, 1_{\{\tau>s\}}$ and would get

$$E^Q\left[\int_0^T \exp\left(-\int_0^s r(X_u)\, du\right) g(X_s)\, 1_{\{\tau>s\}}\, ds\right] = E^Q\left[\int_0^T \exp\left(-\int_0^s \big(r(X_u) + \lambda(X_u)\big)\, du\right) g(X_s)\, ds\right]$$

Cox Pricing - Default Payments

We need to compute

$$E^Q\left[\exp\left(-\int_0^\tau r(X_u)\, du\right) h(X_\tau)\, 1_{\{\tau \le T\}}\right] = E^Q\left[E^Q\left[\exp\left(-\int_0^\tau r(X_u)\, du\right) h(X_\tau)\, 1_{\{\tau \le T\}} \,\Big|\, \mathcal G_T\right]\right]$$

To compute the inner expectation we need $\tau$'s conditional density:

$$p(t \mid \mathcal G_t) \equiv -\frac{d}{dt}\, Q(\tau > t \mid \mathcal G_t) = \lambda(X_t) \exp\left(-\int_0^t \lambda(X_u)\, du\right)$$


Using this conditional density yields

$$E^Q\left[\exp\left(-\int_0^\tau r(X_u)\, du\right) h(X_\tau)\, 1_{\{\tau \le T\}} \,\Big|\, \mathcal G_T\right] = \int_0^\infty \exp\left(-\int_0^t r(X_u)\, du\right) h(X_t)\, 1_{\{t \le T\}}\, p(t \mid \mathcal G_t)\, dt = \int_0^T \exp\left(-\int_0^t r(X_u)\, du\right) h(X_t)\, p(t \mid \mathcal G_t)\, dt = \int_0^T \exp\left(-\int_0^t \big(r(X_u) + \lambda(X_u)\big)\, du\right) h(X_t)\, \lambda(X_t)\, dt$$

So the price of recovery is

$$E^Q\left[\int_0^T \exp\left(-\int_0^t \big(r(X_u) + \lambda(X_u)\big)\, du\right) h(X_t)\, \lambda(X_t)\, dt\right]$$
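With constant $r$ and $\lambda$ and unit recovery $h \equiv 1$, the recovery-leg formula reduces to $\int_0^T e^{-(r+\lambda)t}\lambda\, dt = \frac{\lambda}{r+\lambda}\left(1 - e^{-(r+\lambda)T}\right)$, which a direct simulation of $E[e^{-r\tau} 1_{\{\tau \le T\}}]$ should reproduce (illustrative parameters, not from the notes):

```python
import math
import random

random.seed(10)
r, lam, T, n = 0.03, 0.02, 5.0, 200_000

acc = 0.0
for _ in range(n):
    tau = random.expovariate(lam)
    if tau <= T:
        acc += math.exp(-r * tau)   # unit payment at default, discounted

mc = acc / n
closed = lam / (r + lam) * (1.0 - math.exp(-(r + lam) * T))
print(mc, closed)
```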

Dynamic Versions

We need to incorporate the arrival of new information. Let us try to use the conditioning trick on a ZCB:

$$\bar B(t,T) = E^Q\left[\exp\left(-\int_t^T r(X_s)\, ds\right) 1_{\{\tau>T\}} \,\Big|\, \mathcal F_t\right] = E^Q\left[E^Q\left[\exp\left(-\int_t^T r(X_s)\, ds\right) 1_{\{\tau>T\}} \,\Big|\, \mathcal G_T \vee \mathcal F_t\right] \Big|\, \mathcal F_t\right] = E^Q\left[\exp\left(-\int_t^T r(X_s)\, ds\right) E^Q[1_{\{\tau>T\}} \mid \mathcal G_T \vee \mathcal F_t] \,\Big|\, \mathcal F_t\right] = E^Q\left[\exp\left(-\int_t^T r(X_s)\, ds\right) Q(\tau > T \mid \mathcal G_T \vee \mathcal F_t) \,\Big|\, \mathcal F_t\right]$$

For a Cox-driven default time we need to compute $Q(\tau > T \mid \mathcal G_T \vee \mathcal F_t)$: the probability of survival beyond maturity given $\mathcal G_T$, that is, how the economy evolves until time $T$, and


$\mathcal F_t$, that is, we know whether or not default has already occurred.

Assume the firm survived till time $t$ (otherwise, what do we get?). We then know that $U < Y_t$, hence we would no longer view $U$ as uniform on $[0,1]$ but as uniform on $[0, Y_t]$:

$$Q(\tau > T \mid \mathcal F_t \vee \mathcal G_T) = \frac{Y_T}{Y_t} = \exp\left(-\int_t^T \lambda(X_u)\, du\right)$$

For more details see the end of these notes.

This gives the ZCB dynamics:

$$\bar B(t,T) = E^Q\left[\exp\left(-\int_t^T r(X_s)\, ds\right) 1_{\{\tau>T\}} \,\Big|\, \mathcal F_t\right] = E^Q\left[\exp\left(-\int_t^T r(X_s)\, ds\right) Q(\tau > T \mid \mathcal G_T \vee \mathcal F_t) \,\Big|\, \mathcal F_t\right] = 1_{\{\tau>t\}}\, E^Q\left[\exp\left(-\int_t^T \big(r(X_s) + \lambda(X_s)\big)\, ds\right) \Big|\, \mathcal F_t\right]$$

The other building blocks can be treated similarly. Can we replace $\mathcal F_t$ with $\mathcal G_t$?


Details on the Doob-Meyer examples

Let $B$ be a Brownian motion and let us use Ito with $f(x) \equiv x^2$. We get

$$dB_t^2 = 2B_t\, dB_t + dt$$

or equivalently we have that

$$B_t^2 - t = 2\int_0^t B_s\, dB_s$$

showing that $A_t \equiv t$ ensures that $B_t^2 - A_t$ is a martingale. To get the general picture, let us take $X$ and $Y$ to be two martingales (which may have jumps). Ito's product rule says that

$$d(X_t Y_t) = Y_{t-}\, dX_t + X_{t-}\, dY_t + d[X,Y]_t$$

and remembering that if the integrands are predictable, the stochastic integrals are actually martingales (or at least local martingales), this gives that

$$X_t Y_t - [X,Y]_t$$

is a martingale, agreeing with the Brownian example above (where $X \equiv Y \equiv B$). However, the quadratic covariation is not predictable, so we need to do a little more work before we can reach the compensator. Let us try this with the compensated Poisson:

$$M_t \equiv N_t - \lambda t$$

Using the above, we know that $M_t^2 - [M,M]_t$ is a martingale. Let us compute the quadratic variation:

$$[N_t - \lambda t,\, N_t - \lambda t] = [N]_t = \lim \sum_k \left(N_{t_k} - N_{t_{k-1}}\right)^2 = \sum_{n=1}^{N_t} (\Delta N_{\tau_n})^2 = N_t$$

Therefore we have that $M_t^2 - N_t$ is a martingale, and by adding $M$ we get another martingale

$$M_t^2 - \lambda t$$


showing that $A_t \equiv \lambda t$ is the compensator for $M_t^2$.

Let us compute the integrals

$$E\left[\int_0^t N_u\,(dN_u - \lambda\, du)\right], \quad E\left[\int_0^t N_{u-}\,(dN_u - \lambda\, du)\right].$$

By switching the order of integration we get

$$E\left[\int_0^t N_u\, du\right] = \int_0^t E[N_u]\, du = \lambda \int_0^t u\, du = \frac12 \lambda t^2. \quad (0.1)$$

Since the Lebesgue measure does not put mass on single points we can replace $N_u$ by $N_{u-}$ without changing this integral.

Remember that $1 + 2 + 3 + ... + n = \frac{n(n+1)}{2}$ and we get

$$E\left[\int_0^t N_u\, dN_u\right] = E\left[\sum_{n=1}^{N_t} N_{\tau_n}\right] = E\left[\sum_{n=1}^{N_t} n\right] \quad (0.2)$$

$$= E\left[\frac{N_t(N_t+1)}{2}\right] = \frac12\left(E[N_t^2] + E[N_t]\right) = \frac12 \lambda^2 t^2 + \lambda t \quad (0.3)$$

meaning that the first integral has value $\lambda t$, hence it is not a martingale. By the way, how did we get the second moment, $E[N_t^2] = \lambda^2 t^2 + \lambda t$? One outlet is via the Poisson distribution

$$P(N_t = k) = \frac{(\lambda t)^k}{k!}\, e^{-\lambda t}, \quad k = 0, 1, 2, ...,$$

but this is tedious. Instead, use the Doob-Meyer decomposition: $(N_t - \lambda t)^2 - \lambda t$ is a martingale, and therefore solve for $E[N_t^2]$ in the equation $E[(N_t - \lambda t)^2 - \lambda t] = 0$.

To get the second integral, one can either use the martingale property (which holds because of predictability) or make a similar calculation as above using that

$$\sum_{n=1}^{N_t} N_{\tau_n -} = \sum_{n=1}^{N_t} (n-1) = \frac{(N_t - 1)N_t}{2}.$$

Either way the resulting expectation is zero.


Details on the dynamic Cox process

In the Cox setting, we claim that for $t \in [0,T]$ we have

$$Q(\tau > T \mid \mathcal F_t \vee \mathcal G_T) = \frac{Y_T}{Y_t}\, 1_{\{N_t = 0\}},$$

where $Y$ is the countdown process. To see why this relation is really true, let us first assume that the state process $X$ is a deterministic function of time, in which case we have $\mathcal F_t = \mathcal F^N_t$. The claim is then equivalent to (by definition)

$$E^Q[1_{\{\tau>T\}}\, 1_A] = E^Q\left[\frac{Y_T}{Y_t}\, 1_{\{N_t=0\}}\, 1_A\right], \quad A \in \mathcal F^N_t. \quad (0.4)$$

Inserting $A \equiv \{N_t = 0\} = \{Y_t > U\}$ and using that $U$ is uniformly distributed shows that (0.4) holds.

The general case is almost the same. We note that $\mathcal F_t \vee \mathcal G_T = \mathcal F^N_t \vee \mathcal G_T$. We then need to verify (0.4) for $A \equiv \{Y_t > U\} \cap \{Y_s \le \alpha\}$ where $s \in [0,T]$ and $\alpha \in [0,1]$. The independence lemma can be used to show that (0.4) holds for such sets; indeed, both sides of (0.4) equal $E^Q[1_{\{Y_s \le \alpha\}}\, Y_T]$.


Details on the Brownian bridge

For a Brownian motion $B$, we define the filtration

$$\mathcal G_t \equiv \mathcal F_t \vee \sigma(B_1) = \sigma(B_s, B_1)_{s\in[0,t]}.$$

We claim that

$$E[B_t - B_s \mid \mathcal G_s] = \frac{B_1 - B_s}{1-s}(t-s).$$

To see this is true, we need some preparation. First we need:

Lemma 0.1. Let $A$ and $B$ be two independent random variables and let $X$ be a random variable such that the pair $(A, X)$ is independent of $B$. Then we have

$$E[X \mid \sigma(A,B)] = E[X \mid \sigma(A)]$$

Proof: By the definition of conditional expectation, we need to show that for any sets $D$ and $E$

$$\int_{A^{-1}(D) \cap B^{-1}(E)} X\, dP = \int_{A^{-1}(D) \cap B^{-1}(E)} E[X \mid \sigma(A)]\, dP$$

Evaluating the left hand side gives us

$$E[1_{A^{-1}(D)}\, 1_{B^{-1}(E)}\, X] = E[1_{B^{-1}(E)}]\, E[1_{A^{-1}(D)}\, X].$$

The right hand side equals

$$E[1_{A^{-1}(D)}\, 1_{B^{-1}(E)}\, E[X \mid \sigma(A)]] = E[1_{B^{-1}(E)}\, E[1_{A^{-1}(D)}\, X \mid \sigma(A)]] = E[1_{B^{-1}(E)}]\, E[E[1_{A^{-1}(D)}\, X \mid \sigma(A)]] = E[1_{B^{-1}(E)}]\, E[1_{A^{-1}(D)}\, X],$$

where for the second equality we use that $E[1_{A^{-1}(D)}\, X \mid \sigma(A)]$ is a function of $A$ and hence is independent of $B$.

By means of this lemma, we can then write for $1 > t > s > s_1 > ...$

$$E[B_t - B_s \mid \mathcal G_s] = E[B_t - B_s \mid \sigma(B_1 - B_s) \vee \sigma(B_s - B_{s_1}) \vee ...] = E[B_t - B_s \mid \sigma(B_1 - B_s)].$$


To compute this conditional expectation we can use that $X \equiv B_t - B_s$ and $Y \equiv B_1 - B_s$ are jointly normally distributed. Define the quantities

$$\beta \equiv \frac{t-s}{1-s}, \quad Y^\perp \equiv X - \beta Y.$$

The random variable $Y^\perp$ is independent of $Y$ since (by joint normality)

$$\mathrm{Cov}(Y^\perp, Y) = \mathrm{Cov}(X - \beta Y,\, Y) = (t-s) - \beta(1-s) = 0.$$

We can then finally compute the needed conditional expectation to be

$$E[B_t - B_s \mid \sigma(B_1 - B_s)] = E[X \mid Y] = E[Y^\perp + \beta Y \mid Y] = E[Y^\perp] + \beta Y = 0 + \frac{t-s}{1-s}(B_1 - B_s).$$

We also claim that the continuous process

$$\tilde B_t \equiv B_t - \int_0^t \frac{B_1 - B_u}{1-u}\, du$$

is a $\mathcal G$-martingale. To see this we compute the conditional expectation using the above result. For $t > s$ we get, by Fubini's theorem for conditional expectations,

$$E[\tilde B_t - \tilde B_s \mid \mathcal G_s] = E\left[B_t - B_s - \int_s^t \frac{B_1 - B_u}{1-u}\, du \,\Big|\, \mathcal G_s\right] = \frac{B_1 - B_s}{1-s}(t-s) - \int_s^t \frac{B_1 - E[B_u \mid \mathcal G_s]}{1-u}\, du = \frac{B_1 - B_s}{1-s}(t-s) - \int_s^t \frac{B_1 - B_s - \frac{B_1 - B_s}{1-s}(u-s)}{1-u}\, du = \frac{B_1 - B_s}{1-s}(t-s) - \int_s^t \frac{B_1 - B_s}{1-s}\, du = 0.$$


    Hitting times

The central object will be the following stopping time

    τ(ω) ≜ inf{t : B_t(ω) = 1},

where B is a Brownian motion starting at 0, B_0 = 0, generating the filtration F_t ≜ σ((B_s)_{s∈[0,t]}). You can take as given that P(τ < ∞) = 1. Since τ is positive, the Laplace transform L is well-defined:

    L(θ) ≜ E[exp(−θτ)],  θ ≥ 0.

1. Sketch the graph of θ ↦ L(θ). Your picture should just give a rough idea about some of L's quantitative properties.

2. Argue that for any θ ≥ 0 the following process is an F_t-martingale:

    M_t ≜ exp(√(2θ) B_t − θt).

We denote by M_{t∧τ} the martingale M stopped at the random time τ; see Shreve, pp. 342-343, for an illustration and discussion of a stopped process.

3. Give bounds for M_{t∧τ}, i.e. find constants a and b such that for all t and ω we have

    M_{t∧τ}(ω) ∈ [a, b].

4. Use the Optional Sampling Theorem (Theorem 8.2.4 in Shreve) to show that for all t

    E[M_{t∧τ}] = 1.

5. By letting t approach infinity and using the Dominated Convergence Theorem (Theorem 1.4.9 in Shreve), show that

    L(θ) = exp(−√(2θ)).

Does your graph in the first question fit?


6. Use the previous result to find τ's density function, i.e., find a density function f such that

    exp(−√(2θ)) = E[exp(−θτ)] = ∫_0^∞ exp(−θt) f(t) dt.

This is a tedious question, so feel free to skip it!

Now take a positive drift μ and define the stopping time

    τ_μ ≜ inf{t : μt + B_t = 1}.

7. This question can be solved without computing f explicitly from the previous question. Explain how to use Girsanov's Theorem to prove the relation, for all T,

    P(τ_μ ≤ T) = ∫_0^T exp(μ − ½μ²s) f(s) ds.

Solution:

1. Starts at one and decreases to 0.

2. The stochastic exponential E(√(2θ) B) is a martingale since √(2θ) is a constant and hence is trivially bounded.

3. One could choose a ≜ 0 and b ≜ exp(√(2θ)), since B_{t∧τ} ≤ 1.

4. The Optional Sampling Theorem gives that M_{t∧τ} is a martingale and hence E[M_{t∧τ}] = M_0 = 1.

5. Dominated convergence gives us that

    1 = lim_{t→∞} E[M_{t∧τ}] = E[lim_{t→∞} M_{t∧τ}] = E[M_τ] = E[exp(√(2θ) − θτ)],

and dividing through by exp(√(2θ)) gives the result.

6. Try with the function

    f(t) ≜ 1/√(2πt³) exp(−1/(2t)).
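The guessed density can at least be checked numerically. The sketch below (my addition; it assumes numpy and scipy are available) integrates exp(−θt) f(t) over (0, ∞) and compares the result with exp(−√(2θ)) for a few values of θ.

```python
import numpy as np
from scipy.integrate import quad

# Check numerically that f(t) = exp(-1/(2t)) / sqrt(2*pi*t^3) satisfies
#   int_0^inf exp(-theta*t) f(t) dt = exp(-sqrt(2*theta)).
def f(t):
    return np.exp(-1.0 / (2.0 * t)) / np.sqrt(2.0 * np.pi * t**3)

for theta in (0.5, 1.0, 2.0):
    lhs, _ = quad(lambda t: np.exp(-theta * t) * f(t), 0.0, np.inf)
    rhs = np.exp(-np.sqrt(2.0 * theta))
    print(theta, round(lhs, 6), round(rhs, 6))
```

The two columns should agree to the quadrature tolerance for every θ.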


One way to come up with this guess is by applying the reflection principle for the Brownian motion.

7. Girsanov tells you that under the measure Q given by

    dQ/dP = exp(−μB_T − ½μ²T)  on F_T,

we have that μt + B_t is a Brownian motion, and hence we can apply all the previous results under Q. We get

    P(τ_μ ≤ T) = E^P[1_{τ_μ ≤ T}] = E^Q[(dP/dQ) 1_{τ_μ ≤ T}]
               = E^Q[exp(μB_{τ_μ} + ½μ²τ_μ) 1_{τ_μ ≤ T}]
               = E^Q[exp(μ(B_{τ_μ} + μτ_μ)) exp(−½μ²τ_μ) 1_{τ_μ ≤ T}]
               = E^Q[exp(μ) exp(−½μ²τ_μ) 1_{τ_μ ≤ T}],

where the last step uses that μτ_μ + B_{τ_μ} = 1 at the stopping time, and by using that τ_μ's Q-distribution equals τ's P-distribution the result follows.
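A consistency check (my addition, not part of the notes): since P(τ_μ < ∞) = 1 for μ > 0, letting T → ∞ in the relation just proved should give total mass one. The sketch below verifies this numerically for a couple of drifts, using the density f from question 6; it assumes numpy and scipy are available.

```python
import numpy as np
from scipy.integrate import quad

# The integrand exp(mu - mu^2 s/2) f(s) is the density of tau_mu,
# so it should integrate to one over (0, infinity).
def f(t):
    return np.exp(-1.0 / (2.0 * t)) / np.sqrt(2.0 * np.pi * t**3)

for mu in (0.25, 1.0):
    mass, _ = quad(lambda s: np.exp(mu - 0.5 * mu**2 * s) * f(s), 0.0, np.inf)
    print(mu, round(mass, 6))
```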


    Local Martingales

This note is only intended as extra information and is beyond our scope. It contains a brief discussion of local martingales and how they naturally appear in finance.

An adapted RCLL process M is called a local martingale if there exists an increasing sequence of stopping times {τ_n}_{n=1}^∞ (τ_1 ≤ τ_2 ≤ ...) converging to, say, ∞ and such that for any n the stopped process

    Y_t ≜ M_{t∧τ_n}

is a true martingale.

To see what is going on, let us look at the Brownian case. The Brownian integral

    (H·B)_t ≜ ∫_0^t H_u dB_u

is well-defined for t ∈ [0, T] if ∫_0^T H_u²(ω) du < ∞ for all ω. The stochastic integral (H·B) is a true martingale if the following stronger condition is satisfied:

    E[∫_0^T H_u² du] < ∞.

To see that (H·B) is always a local martingale we need to construct stopping times {τ_n} such that for all n the stopped process Y_t ≜ (H·B)_{t∧τ_n} is a true martingale. One way to do this is as follows:

    τ_n ≜ inf{t : ∫_0^t H_s² ds ≥ n} ∧ n,

which is an increasing sequence. In this case we get that

    Y_t = ∫_0^{t∧τ_n} H_u dB_u = ∫_0^t H_u 1_{u≤τ_n} dB_u,

but now by the construction of τ_n we have the needed integrability:

    E[∫_0^T H_u² 1_{u≤τ_n} du] = E[∫_0^{T∧τ_n} H_u² du] ≤ n < ∞.

The crucial lacking feature of local martingales is that they need not have constant expectations, i.e., if M is only a local martingale it can easily be


that E[M_T] ≠ M_0. To see how this property fits into mathematical finance, let us note that if M is a non-negative local martingale (like the stochastic exponential E(X) we use as a density between measures), Fatou's Lemma gives us that for t > s:

    liminf_n E[M_{t∧τ_n} | F_s] ≥ E[liminf_n M_{t∧τ_n} | F_s].

By the martingale property and the fact that τ_n → ∞ it follows that

    M_s ≥ E[M_t | F_s],

meaning that non-negative local martingales are in general supermartingales.

A classical example in this respect can be constructed using the Bessel process: let R_t be defined as ‖B_t‖, where ‖·‖ is the standard norm in R³ and B is a three-dimensional Brownian motion started away from the origin. The following adapted RCLL process is an example of a local martingale that is not a true martingale:

    M_t ≜ 1/R_t.

If you want more info about this, try to Google "Bessel process and local martingale" and go back to HW1 in Stoch II (including its solution).

Try to perform this analysis for the jump case, and you will see that things are much more difficult. Indeed, it is not true in general that the stochastic integral of a predictable process is even a local martingale if the integrator has jumps; it will only be what is called a sigma-martingale.
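Returning to the Bessel example, a simulation can make the failure of the martingale property visible. The sketch below (my addition; the starting point, times and sample size are arbitrary choices) starts the three-dimensional Brownian motion at a point of norm one, so that M_0 = 1, and estimates E[1/‖B_t‖] at a few times; for a true martingale these estimates would all be 1.

```python
import numpy as np

# Estimate E[1/||B_t||] for 3-d Brownian motion started at (1, 0, 0).
# M_t = 1/||B_t|| is a local martingale with M_0 = 1, yet its expectation
# drifts strictly below 1: it is a strict supermartingale.
rng = np.random.default_rng(1)
x0 = np.array([1.0, 0.0, 0.0])
n = 400_000

means = []
for t in (0.5, 1.0, 4.0):
    B_t = x0 + np.sqrt(t) * rng.standard_normal((n, 3))
    means.append(np.mean(1.0 / np.linalg.norm(B_t, axis=1)))

print([round(m, 3) for m in means])
```

The estimates decrease strictly below 1 as t grows, which a true martingale's expectation could not do.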


    The Feller condition

In this short note we explain how to derive the Feller condition for the square-root process. Define X by

    dX_t = κ(θ − X_t) dt + σ√(X_t) dB_t,  X_0 > 0,

where κ, θ and σ are positive constants and B is a Brownian motion. Then we define the stopping times

    τ(a) ≜ inf{t : X_t = a},  a ≥ 0.

The claim is that if 2κθ ≥ σ² (Feller's condition) we have P(τ(0) = ∞) = 1, meaning that X never reaches zero. To justify the claim, we define the family of stopping times

    τ(a,b) ≜ inf{t : X_t ∈ {a, b}},  0 < a < X_0 < b < ∞,

and we note that lim_{a↓0, b↑∞} τ(a,b) = τ(0) almost surely. Let us find a function f such that M_t ≜ f(X_t) has no drift. Itô's lemma gives us

    dM_t = f'(X_t) dX_t + ½ f''(X_t) d⟨X⟩_t
         = [f'(X_t) κ(θ − X_t) + ½ f''(X_t) σ² X_t] dt + f'(X_t) σ√(X_t) dB_t.

We are therefore looking for a function f satisfying

    0 = f'(x) κ(θ − x) + ½ f''(x) σ² x,  x > 0.

The solution to this ODE can be found via Mathematica (which reports it in terms of the incomplete Gamma function) to be

    f(x) = A + B ∫_1^x e^{(2κ/σ²)t} t^{−2κθ/σ²} dt,

for some constants A and B depending on f's boundary conditions. The crucial observation is that

    lim_{x↓0} |f(x)| = +∞,


if and only if Feller's condition holds, i.e., if and only if we have 2κθ/σ² ≥ 1. We can now proceed and finish the proof. Using optional sampling together with the martingale M gives us

    f(X_0) = E[M_{t∧τ(a,b)}] = E[f(X_{t∧τ(a,b)})].

Since the stopped process is bounded between a and b, we can use dominated convergence (letting t → ∞) to see

    f(X_0) = E[f(X_{τ(a,b)})] = f(a) P(τ(a) < τ(b)) + f(b) P(τ(b) < τ(a)).

Passing a to zero shows us that

    f(X_0) = f(0) P(τ(0) < τ(b)) + f(b) P(τ(b) < τ(0)).

Since f explodes at zero, we must have P(τ(0) < τ(b)) = 0 for any b. Passing b to infinity then shows that P(τ(0) < ∞) = 0 under Feller's condition, and the claim follows.
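A crude simulation experiment can illustrate the dichotomy. The sketch below (my addition; all parameter values are arbitrary) runs a full-truncation Euler scheme for the square-root process with the Feller condition badly violated and then comfortably satisfied, and records how often discretized paths touch zero. Euler discretization can touch zero even when the SDE cannot, so this is only suggestive, not a proof.

```python
import numpy as np

# dX = kappa*(theta - X) dt + sigma*sqrt(X) dB, full-truncation Euler
rng = np.random.default_rng(2)

def frac_hitting_zero(kappa, theta, sigma, x0=0.5, T=5.0,
                      n_steps=5000, n_paths=2000):
    dt = T / n_steps
    x = np.full(n_paths, x0)
    hit = np.zeros(n_paths, dtype=bool)
    for _ in range(n_steps):
        dB = np.sqrt(dt) * rng.standard_normal(n_paths)
        x = x + kappa * (theta - x) * dt \
              + sigma * np.sqrt(np.maximum(x, 0.0)) * dB
        hit |= x <= 0.0
        x = np.maximum(x, 0.0)  # truncation keeps sqrt well-defined
    return hit.mean()

p_violated = frac_hitting_zero(kappa=1.0, theta=0.02, sigma=1.0)   # 2*k*th = 0.04 < 1
p_satisfied = frac_hitting_zero(kappa=1.0, theta=0.2, sigma=0.2)   # 0.4 >= 0.04
print(round(p_violated, 3), round(p_satisfied, 3))
```

With the condition violated, essentially every path reaches zero; with it satisfied, hardly any do.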


    Conditional independence

In this short note we work out some of the details related to the Longstaff-Schwartz two-factor term structure model, but we will face this problem many other times too. Here we have two CIR processes:

    dX_t^(i) = κ_i(θ_i − X_t^(i)) dt + σ_i √(X_t^(i)) dB_t^(i),  i = 1, 2,

for positive constants κ_i, θ_i and σ_i, where B^(1) and B^(2) are independent Brownian motions. The question is why we have

    E[exp(−∫_0^T (X_u^(1) + X_u^(2)) du) | F_t]
        = E[exp(−∫_0^T X_u^(1) du) | F_t] · E[exp(−∫_0^T X_u^(2) du) | F_t]?

It does not matter if we have a t in the lower limit of integration, and it does not matter which measure we are under. Showing the above claim is not trivial: as we know, independence alone does not imply conditional independence; see, e.g., Problem 0 on the additional exercise sheets. We will need the following lemma:

Lemma 0.2. Let A and B be two independent random variables and let X be a random variable such that the pair (A, X) is independent of B. Then we have

    E[X | σ(A, B)] = E[X | σ(A)].

Proof: By the definition of conditional expectation, we need to show that for any sets D and E

    ∫_{A^{-1}(D) ∩ B^{-1}(E)} X dP = ∫_{A^{-1}(D) ∩ B^{-1}(E)} E[X | σ(A)] dP.

Evaluating the left-hand side gives us

    E[1_{A^{-1}(D)} 1_{B^{-1}(E)} X] = E[1_{B^{-1}(E)}] E[1_{A^{-1}(D)} X].

The right-hand side equals

    E[1_{A^{-1}(D)} 1_{B^{-1}(E)} E[X | σ(A)]] = E[1_{B^{-1}(E)} E[1_{A^{-1}(D)} X | σ(A)]]
                                               = E[1_{B^{-1}(E)}] E[E[1_{A^{-1}(D)} X | σ(A)]]
                                               = E[1_{B^{-1}(E)}] E[1_{A^{-1}(D)} X],


where for the second equality we use that E[1_{A^{-1}(D)} X | σ(A)] is a function of A and hence is independent of B.

Armed with this lemma, we can then turn to the statement related to the term structure model by Longstaff and Schwartz. The iterated expectation rule yields

    E[exp(−∫_0^T (X_u^(1) + X_u^(2)) du) | F_t]
        = E[exp(−∫_0^T X_u^(1) du) exp(−∫_0^T X_u^(2) du) | F_t]
        = E[ E[exp(−∫_0^T X_u^(1) du) exp(−∫_0^T X_u^(2) du) | σ((B_s^(1), B_u^(2))_{s∈[0,t], u∈[0,T]})] | F_t ]
        = E[ exp(−∫_0^T X_u^(2) du) E[exp(−∫_0^T X_u^(1) du) | σ((B_s^(1), B_u^(2))_{s∈[0,t], u∈[0,T]})] | F_t ].

By the above lemma we have

    E[exp(−∫_0^T X_u^(1) du) | σ((B_s^(1), B_u^(2))_{s∈[0,t], u∈[0,T]})]
        = E[exp(−∫_0^T X_u^(1) du) | σ((B_s^(1))_{s∈[0,t]})].

Since this term is F_t-measurable we get

    E[exp(−∫_0^T (X_u^(1) + X_u^(2)) du) | F_t]
        = E[ exp(−∫_0^T X_u^(2) du) E[exp(−∫_0^T X_u^(1) du) | σ((B_s^(1))_{s∈[0,t]})] | F_t ]
        = E[exp(−∫_0^T X_u^(2) du) | F_t] · E[exp(−∫_0^T X_u^(1) du) | σ((B_s^(1))_{s∈[0,t]})].

Finally, we can re-apply the lemma to replace σ((B_s^(1))_{s∈[0,t]}) with F_t in the last line, which finishes the argument.
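To make the factorization concrete, here is a rough Monte Carlo check for the case t = 0 (my addition; the parameter values are arbitrary and the Euler scheme is only approximate): with independent driving Brownian motions, the joint expectation and the product of marginal expectations should agree up to simulation error.

```python
import numpy as np

# Two independent CIR processes; check
#   E[exp(-int (X1+X2))] = E[exp(-int X1)] * E[exp(-int X2)]  at t = 0.
rng = np.random.default_rng(3)

def cir_integral(kappa, theta, sigma, x0, T=1.0, n_steps=500, n_paths=50_000):
    """Full-truncation Euler scheme; approximates int_0^T X_u du per path."""
    dt = T / n_steps
    x = np.full(n_paths, x0)
    integral = np.zeros(n_paths)
    for _ in range(n_steps):
        integral += x * dt
        dB = np.sqrt(dt) * rng.standard_normal(n_paths)
        x = np.maximum(x + kappa * (theta - x) * dt
                       + sigma * np.sqrt(np.maximum(x, 0.0)) * dB, 0.0)
    return integral

I1 = cir_integral(1.0, 0.05, 0.2, 0.04)
I2 = cir_integral(0.5, 0.03, 0.1, 0.02)  # separate draws -> independent processes

joint = np.mean(np.exp(-(I1 + I2)))
product = np.mean(np.exp(-I1)) * np.mean(np.exp(-I2))
print(round(joint, 4), round(product, 4))
```

The two numbers coincide to well within Monte Carlo noise, as the lemma predicts.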