
UNIVERSITÀ COMMERCIALE LUIGI BOCCONI
MASTER OF SCIENCE FINANCE – QUANTITATIVE FINANCE

A new Tree-Pricing approach to overcome log normality

Giulio Laudani 1256809

14/02/2013

Anna Battauz – Supervisor
Fabrizio Iozzi – Co-examiner


Acknowledgments:

When writing acknowledgments, one always has an eye on the future, on the moment when, seized by nostalgia for these fantastic days, those past memories will be relived. This is one of the few moments that crystallize in our memory as a break between two phases: from young man to mature man, ready to put to use all the knowledge gained (at the cost of sacrifices and pain, too). It is with this spirit that I write, therefore, not only for the people dear to me, but also for my "future self".

After much effort and many sacrifices, these two years have finally come to an end. This formative experience has brought me closer to the working world, ready for new challenges. These years have not lacked moments of tension, but I have been fortunate never to have been alone.

As for many of my colleagues, university has also been a time of personal growth: the first time living alone, the first time having to manage oneself and take on one's first responsibilities. I remember how everything, even the smallest thing, seemed to be an event, and with how much enthusiasm I lived it. I believe that having always had clear ideas about my future, and precise goals, helped me in all of this. This way of living has also strained some of my relationships, since I am little inclined to compromise and tend to be an idealist. Fortunately, the environment and the university experience gave me that touch of flexibility combined with shrewdness, or perhaps cynicism, needed to improve.

Today I would like to thank all the people with whom I have shared these moments and those who supported and helped me in the most difficult times. To my parents I dedicate a heartfelt thank you, for having supported me and for having always given me their availability and trust. To the many friends I have met; for some of them, our friendship began precisely with these five years of university. A bit like soldiers in arms, our friendship has become almost a brotherhood, tempered by shared difficulties and joys.

A few words also for Bocconi. I believe I can say that few love their university as much as I do. For me Bocconi has been a bit like a second mother; it is the place where I became what I will probably be for the foreseeable future. I will always fondly remember the hours of study spent in the corridors, the endless conversations and the fear of receiving the fateful grades by SMS, a true test for our hearts.

Milan also plays an important role: many of my happiest memories will remain tied to this city, which by now feels like home. This city has not only been a place of study, but also of work, where my first professional experiences took shape. I also owe thanks to my colleagues: it is also thanks to their experience and teaching that I have been able to write this thesis.

LAUDANI, Giulio


Section I:
1.a. Market Data Analysis
1.b. The Derivatives chosen
Section II:
1.a. The Methodologies
1.b. The Barrier results
1.c. Stress and Non Arbitrage test
1.d. The empirical Comparison
Section III:
1.a. Strength & Weakness of the model
1.b. Possible future evolution
Conclusions
Appendix:
1.a. GARCH and JB test insight
1.b. MatLab Code insight
Works cited


Abstract:

We study a new lattice method specifically tailored to overcome the log-normality hypothesis on the underlying financial asset. Our work analyzes the performance of this new method in both the pricing and the hedging of different derivatives, finding higher accuracy at every level; we therefore suggest it as a future "good" market practice. The novelty of our model is the fitting of the pricing algorithm up to the fourth moment, computed from historical data. The chosen derivatives belong to the barrier option family and the underlying is the S&P 500 index, since it is a very liquid and actively traded security. We have used adaptive mesh techniques to improve the accuracy of the pricing/hedging approximation. We have also run specific optimization procedures to properly choose the time window used for fitting the historical data and to assess the model's robustness under different market conditions.


Section I:

This paper aims to introduce a new pricing algorithm that overcomes the log-normality hypothesis by using a tree path-evolution model initially presented in the seminal paper "A relaxed lattice option pricing model: implied skewness and kurtosis" (Brorsen & Dasheng, 2009) and later developed by the same authors in "A recombining lattice option pricing model that relaxes the assumption of lognormality" (Brorsen & Dasheng, 2010). We want to show i) the superiority of this approach compared to the most widely used market techniques and ii) the hedging strategy based on this approach.

We have chosen the S&P 500 index since it is a well-recognized index, whose components are all characterized by high liquidity and are widely used as underlyings in the derivatives market1. To support this statement we provide in [Tab.1] the average 30-day volume, the last price and other security information.

In [Tab.2] we provide some information on the benchmark European call and put options written on the S&P 500 index vs. Apple US options.

1 The Chicago Board Options Exchange (CBOE) now offers Mini-SPX options, based on the

Standard & Poor's 500 Stock Index.

Tab. 1
SPX Index [Bloomberg Ticker]
Average Last 30 Days (M.):  555.87
Last Price:                 1,507.84
Last Date:                  29/01/2013
Source: Bloomberg


The chosen derivative shows higher open interest than the Apple one, although its Bid-Ask ratio2 is lower. The chart below provides a historical comparison from 25/10/2012 to 29/01/2013. This constant difference is related to the higher complexity of replicating the SPX security (500 components).

The chart shows, on a logarithmic scale3, the total number of contracts written4 (calls and puts) on the SPX Index and on the AAPL US Equity option.

2 We present only the call Bid-Ask ratio, because the put ratio shows a more erratic behavior, although its interpretation is not in contrast with the one based on the call ratio.
3 We have chosen to present data on a logarithmic scale to reduce the scaling effect, which might reduce the chart's readability.

Tab. 2
                 SPX US 03/16/13 C1400 Index    SPX US 03/16/13 P1400 Index
Open Interest    118,579                        161,206
Last Price       109.05                         4.85
Last Date        29/01/2013                     29/01/2013
Bid-Ask Ratio    97.91%                         90.20%

                 AAPL US 03/16/13 C450 Equity   AAPL US 03/16/13 P450 Equity
Open Interest    3,936                          15,707
Last Price       20.3                           14.175
Bid-Ask Ratio    98.05%                         97.56%

Source: Bloomberg

[Chart: Bid-Ask Ratio Comparison — SPX Call Ratio vs. Apple Call Ratio, Oct 2012 – Jan 2013]


It is clear that the SPX option volume is by far higher than that of any of its single components, signaling its higher liquidity (price meaningfulness). The upward trend is related to the usual option market behavior: the closer the option gets to maturity, the more liquid it becomes (liquidity dries down in the delivery month) (Donna, 2001).

On top of that, the VIX index5 is a widely used security for tracking market volatility.

4 Open interest provides useful information that should be considered when entering an option position. Unlike stock trading, in which there is a fixed number of shares to be traded, option trading can involve the creation of a new option contract when a trade is placed. Open interest tells you the total number of option contracts that are currently open, in other words contracts that have been traded but not yet liquidated by either an offsetting trade or an exercise or assignment. Open interest also gives key information regarding the liquidity of an option. If there is no open interest for an option, there is no secondary market for that option. When options have large open interest, they have a large number of buyers and sellers, and an active secondary market increases the odds of getting option orders filled at good prices. So, all other things being equal, the bigger the open interest, the easier it will be to trade that option at a reasonable spread between the bid and ask. Declining open interest means that the market is liquidating and implies that the prevailing price trend is coming to an end. Knowledge of open interest can prove useful toward the end of major market moves.
5 VIX is a trademarked ticker symbol for the Chicago Board Options Exchange Market Volatility Index, a popular measure of the implied volatility of S&P 500 index options. It represents one measure of the market's expectation of stock market volatility over the next 30-day period. Prof. Menachem Brenner and Prof. Dan Galai (Brenner & Galai, 1986) first developed the idea of a volatility index, and of financial instruments based on such an index. In 1992, the CBOE retained Prof. Robert Whaley to create a stock market volatility index based on index option prices, which was launched in January 1993. Subsequently, the CBOE has computed VIX on a real-time basis. The market convention on VIX is: percentage points, which translate, roughly, into the expected movement in the S&P 500 index over the upcoming 30-day period, which is then annualized.

[Chart: Total Open Interest (logarithmic scale) — Apple vs. SPX, Oct 2012 – Jan 2013]


The popularity of this instrument allows us to assume that the S&P 500 index, together with the VIX quote, has significant, informative prices and that both are less sensitive to market rumors, which typically affect single components. Moreover, the SPX index is widely recognized as a market proxy; hence, whenever new information related to macroeconomic data or new data releases is disseminated in the market, market players prefer to correct/reallocate their exposure by trading this security, rather than correcting single-security exposures in their portfolios. This market behavior allows us to assume that all significant information is promptly incorporated into the security price.

[Chart: Price Time Series — SPX Index and VIX Index, January 2000 – January 2013]

The chart presented above provides the SPX Index and VIX Index time series (the VIX scale is on the left) between January 2000 and January 2013. We want to point out the strong relationship between downtrends and increased market implied volatility. Moreover, peaks in variation/level correspond, as we would expect, to the IT bubble, the subprime bubble and the sovereign debt crisis.



The paper is divided into three sections. In Section I, we outline why the new pricing approach is worth using, by analyzing the underlying historical time series, and what is peculiar about barrier options: basically, we address i) their main features and ii) their sensitivity to the underlying distribution assumptions. Section II is focused on the core of our dissertation, where we explain the methodology, the pricing results, and the stress tests performed as well as the robustness tests. Later, we analyze the evolution of market conditions, compare with other pricing techniques, and develop our hedging procedure. In Section III, we conclude by summing up the strengths of the proposed model and its drawbacks; we also dedicate a subsection to possible future developments.

1.a. Market Data Analysis:

Given the S&P 500's nature, we might expect this time series to behave close to the Gaussian hypothesis; however, this paper shows that although an index should show a less extreme and more stable time evolution, it significantly diverges from the normality assumption.

Recent market evolution has shown the inadequacy of the Gaussian hypothesis on the probability distribution. The 2008 financial crisis has pushed researchers to develop a proper modeling framework to describe data behavior. On top of that, derivative pricing has changed to take into account other risk sources (such as counterparty risk, liquidity risk and so on). Those distributions cannot be described anymore by a parametric function with just two parameters (mean and variance). Academia has started to focus on analyzing the impact of excess kurtosis and skewness. These two features have a significant impact on risk management and on option pricing (where the problem gets more severe the more exotic the derivative's payoff is).

Returns have been computed as price ratios according to the logarithmic standard, since logarithmic returns are best suited for single-stock returns over time: in this case the cumulated return depends only on the first and last elements of the time series,

$$r_t = \ln\!\left(\frac{P_t}{P_{t-1}}\right), \qquad r_{0,T} = \ln\!\left(\frac{P_T}{P_0}\right) = \ln\!\left(\prod_{t=1}^{T}\frac{P_t}{P_{t-1}}\right) = \sum_{t=1}^{T} r_t .$$

We recall that the relationship between logarithmic and arithmetic returns can be better understood by using the Taylor expansion of the log return, which coincides with the linear return if truncated at the first term:

$$\ln(1+x) = x - \frac{x^{2}}{2} + \frac{x^{3}}{3} - \dots$$

This formula shows that the difference between the linear and the log return (for price ratios far away from 1) is always greater than zero, since $x - \ln(1+x) \ge 0$.

The normal distribution assumption fails to capture the excess kurtosis as well as the asymmetry of the empirical data. We have chosen the time period between January 2000 and September 2012 to show the failure of the Gaussian hypothesis for the S&P 500, by using QQ plots, by reproducing the Jarque-Bera test (Jarque & Bera, 1980) results and by plotting the time dynamics of mean, variance, skewness and kurtosis.

In the chart above, we present the three QQ plots.
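Purely as an illustration of the data analysis just described (the variable names are hypothetical and are not those of the thesis code), the log returns, their sample moments, the Jarque-Bera test and a Gaussian kernel density can be computed in MatLab as follows:

% prices: column vector of daily S&P 500 closing prices (assumed already loaded)
ret = diff(log(prices));              % logarithmic returns

m  = mean(ret);                       % first moment
s2 = var(ret);                        % variance
sk = skewness(ret);                   % third standardized moment
ku = kurtosis(ret);                   % fourth standardized moment

[h, pValue] = jbtest(ret);            % Jarque-Bera normality test (Statistics Toolbox)

z = (ret - m) ./ std(ret);            % standardized returns
[f, xi] = ksdensity(z);               % Gaussian kernel density estimate
plot(xi, f); hold on;
plot(xi, normpdf(xi, 0, 1), '--');    % compare with the standard normal density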


In the chart above we show the standardized distributions via kernel6 plots. We have chosen to represent the distributions in the first row using kernel estimates instead of the usual histograms, since the former is a generalization of the latter. We have chosen the Gaussian kernel function

$$K(u) = \frac{1}{\sqrt{2\pi}}\,e^{-u^{2}/2}, \qquad \int K(u)\,du = 1 .$$

The first figure represents the data standardized by the sample variance, $\hat\sigma^{2} = \frac{1}{N-1}\sum_{t}\left(r_t - \bar r\right)^{2}$; as we can see, the distribution has a strong excess kurtosis, despite its quite symmetric behavior.

According to the article "Value at Risk when daily changes in market variables are not normally distributed" (Hull & White, 1998), this behavior can be dramatically reduced by assuming heteroskedasticity7. That is exactly what we show in the second figure.

6 A kernel density estimator is an empirical density "smoother" (from a discrete to a continuous distribution) based on the choice of two objects: the kernel function itself, K(u), and the bandwidth parameter "h", which is chosen according to the most used convention (the equation reported above), which is also the standard one used in MatLab.
7 Heteroskedasticity means that the error variance is correlated with the values of the independent variables in the regression. The Breusch-Pagan (1979) test is widely used in finance research because of its generality; we did not run this test.


Those figures are based on data standardized with the variance computed via a GARCH8(1,1) model. Here in the following we provide the equations used to fit the GARCH model; the MatLab printout is provided in the annex:

$$r_t = \mu + \varepsilon_t, \qquad \varepsilon_t = \sigma_t z_t, \qquad \sigma_t^{2} = \omega + \alpha\,\varepsilon_{t-1}^{2} + \beta\,\sigma_{t-1}^{2},$$

where $\omega > 0$, $\alpha, \beta \ge 0$ and $\alpha + \beta < 1$.

Although there is an improvement in the QQ plot, we notice that the mismatch on the left tail is still significant, signaling asymmetry as well as excess kurtosis. Hence the time series standardized by GARCH(1,1) also fails to respect the Gaussian assumption.
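For completeness, a minimal sketch of how the GARCH(1,1) standardization could be performed with the MatLab Econometrics Toolbox is given below (the toolbox functions garch, estimate and infer are assumed to be available; the printout reported in the annex may have been produced with different calls):

% ret: column vector of daily log returns (assumed already computed)
Mdl    = garch(1, 1);                  % GARCH(1,1) specification, Gaussian innovations
EstMdl = estimate(Mdl, ret);           % quasi-maximum likelihood estimation
v      = infer(EstMdl, ret);           % conditional variances sigma_t^2
zGarch = ret ./ sqrt(v);               % GARCH-standardized returns

[hG, pG] = jbtest(zGarch);             % re-run the Jarque-Bera test on standardized data
qqplot(zGarch);                        % QQ plot against the normal distribution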

We also ran a GARCH(1,1) assuming a Student-t underlying distribution; the results are shown in the third figure. As can be noted, there is no significant improvement: excess kurtosis and asymmetry are still there.

Besides the graphical examination, here in the following we present our results using the univariate Jarque-Bera test (Jarque & Bera, 1980):

$$JB = \frac{n}{6}\left(\hat S^{\,2} + \frac{\left(\hat K - 3\right)^{2}}{4}\right) \xrightarrow{d} \chi^{2}_{2},$$

where we use the asymptotic distribution of the sample estimators of skewness and kurtosis:

$$\sqrt{n}\,\hat S \xrightarrow{d} N(0,6), \qquad \sqrt{n}\,\left(\hat K - 3\right) \xrightarrow{d} N(0,24).$$

We have run the test on both the empirical data and the standardized ones. As already suggested by the QQ plots, the data fail the JB test (details provided in the Annex), providing further evidence that the Gaussian assumption misrepresents the data distribution.

Here in the following we provide four charts representing the mean, volatility, skewness and kurtosis of the chosen underlying between January 2000 and September 2012.

8 Generalized Autoregressive Conditional Heteroskedasticity (Bollerslev, 1986), of order (p, q), where p is the order of the GARCH terms and q is the order of the ARCH terms.


The blue line has been drawn with a rolling time window of 233 days9, while the red one represents the corresponding whole-sample centered moments10.

The above charts provide us with two important pieces of information: i) the time dependence of the data moments and ii) the strong divergence from the "long term" level at the 2001 IT bubble and the 2008 subprime bubble. By analyzing those figures we can infer i) the correlation with crises/financial crashes and ii) the divergence from Gaussian assumptions. Given the material effect on pricing and hedging, developing models which fit these features has become a crucial task.

Here in the following we propose the same analysis based on the price ratio $Y_t = P_t/P_{t-1}$ rather than on returns. We present the charts below since the relaxed model is designed to use this ratio instead of returns.

9 We did not directly choose 233 as the time window; it is obtained by dividing the total sample size by 20.
10 We used the standard MatLab functions. We have used those values as proxies for long-term/natural levels.


Those charts represent the un-centered (raw) moments $m_k = E\!\left(Y^{k}\right)$, from which we may obtain the equivalent mean, variance, skewness and kurtosis, given their definitions in terms of moments:

$$\text{mean} = m_1, \qquad \text{variance} = m_2 - m_1^{2},$$
$$\text{skewness} = \frac{m_3 - 3 m_1 m_2 + 2 m_1^{3}}{\left(m_2 - m_1^{2}\right)^{3/2}}, \qquad \text{kurtosis} = \frac{m_4 - 4 m_1 m_3 + 6 m_1^{2} m_2 - 3 m_1^{4}}{\left(m_2 - m_1^{2}\right)^{2}}.$$
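A small MatLab helper illustrating this conversion from raw moments to the four descriptive statistics might look as follows (a sketch under the assumption that m is a vector of the first four raw moments; the function name is hypothetical):

function [mu, v, sk, ku] = moments_from_raw(m)
% m: vector [m1 m2 m3 m4] of raw (un-centered) moments of Y
mu = m(1);                                        % mean
v  = m(2) - m(1)^2;                               % variance
sk = (m(3) - 3*m(1)*m(2) + 2*m(1)^3) / v^(3/2);   % skewness
ku = (m(4) - 4*m(1)*m(3) + 6*m(1)^2*m(2) - 3*m(1)^4) / v^2;  % kurtosis
end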

Market players have historically recognized this potential risk; however, there had not been any significant case/event that might justify a change of framework. Market participants have tracked this problem by systematically overpricing deep in/out of the money options. That is the reason behind the constantly higher implied volatility of put options compared to at-the-money options in the equity market (the so-called "crash phobia" represented by the volatility skew11).

We would like to propose an interesting parallel between the 1987 crash and the 2008 financial crash.

11 When implied volatility is plotted against strike price, the resulting graph is typically

downward sloping for equity markets, or valley-shaped for currency markets. For markets

where the graph is downward sloping, such as for equity options, the term "volatility skew"

is often used. For other markets, such as FX options or equity index options, where the

typical graph turns up at either end, the more familiar term "volatility smile" is used.


Just as before 1987 there was no evidence of volatility skewness, today we find ourselves in the same situation, where suddenly the models in use have proved themselves to be wrong. We cannot count any more on demand-supply driven volatility pricing and Gaussian simplifications (i.e. closed formulas).

Thanks to IT development12, simulation/numerical approximation models no longer suffer from the usual time constraints and bias problems. Our work tries to move one step closer to a new market standard by exploiting the flexibility granted by lattice methods.

1.b. The Derivatives chosen:

In the late 1980s and early 1990s, exotic options became more visible and popular in the over-the-counter (OTC) market. Corporations, financial institutions, fund managers and private bankers are the typical users. The most popular group of exotic options is path-dependent options, and barrier options belong to this group. The most appreciated feature of a barrier option is that it offers cheaper protection compared to a vanilla option. As an example, if we take a down-and-out barrier call option, a trader with a bullish view on the market may regard the condition of the barrier being reached as quite unlikely and be more interested in it than in the regular option. As another example, we might consider the case of a hedger buying a barrier contract to hedge a position with a natural barrier, e.g. the foreign currency exposure on a deal that will take place only if the exchange rate remains above a certain level.

12 A multi-core processor is a single computing component with two or more independent

actual central processing units (called "cores"), which are the units that read and execute

program instructions. The instructions are ordinary CPU instructions such as add, move

data, and branch, but the multiple cores can run multiple instructions at the same time,

increasing overall speed for programs amenable to parallel computing. In the best case, so-

called embarrassingly parallel problems may realize speedup factors near the number of

cores, or even more if the problem is split up enough to fit within each core's cache(s),

avoiding use of much slower main system memory


Barrier options (also called trigger options) are essentially conditional options, dependent on whether some barriers ("H") or triggers are breached within the lives of the options. Hence, the main feature is that a specific event is activated (becoming a regular option or paying a rebate), depending on whether the barrier is breached or not during the life of the option. According to the relative position of H and S, there are four typical kinds of barrier, which are outlined below.

Down and Out: knock-out options with H < S.

Down and In: knock-in options with H < S.

Up and Out: knock-out options with H > S.

Up and In: knock-in options with H > S.

Barriers may be monitored continuously or discretely. Continuous monitoring

of the barrier means that the barrier is active at any time prior to maturity.

Discrete monitoring implies that the barrier is active only at discrete times,

such as daily or weekly intervals.

The importance of barrier options is growing together with the evolution and increasing sophistication of market operators. Most models for pricing barrier options assume continuous monitoring of the barrier; we have developed our model according to this assumption, which simplifies the model setup. The "Complex Barrier Options" paper (Cheuk & Vorst, 1996) shows that even hourly versus continuous monitoring can make a significant difference in option value. For discrete barrier options, convergence may be slow and erratic, producing large errors even with thousands of time steps and millions of node calculations. The reason is that the payoff of a barrier option is very sensitive to the position of the barrier in the lattice.


The chart above provides a graphical example of the typical jump around the strike (100) for an ATM option. The dashed line represents the true probability density, while the light dashed line is the discrete approximation made by the model, where each node's contribution is given by its value times its probability. The bias arises from the difference between the hard and the light dashed lines; this problem gets more severe the less linear the option payoff is around the critical area.

Barrier options can be priced with i) "closed forms" expressed in terms of multivariate normal probabilities, or ii) approximation methods, such as standard lattice techniques or Monte Carlo simulation. More attention has been paid to analytic solutions than to lattice methods, which obtain the correct valuation only asymptotically. However, the convergence to the closed-form solutions is slow, and the results tend to have a large bias when the asset price is close to the barrier.


We have used as market proxy the Bloomberg function "OV" (Option Valuation); in the following we provide some screenshots of the Bloomberg function:


The model was used more as a directional control rather than as an exact benchmark for fitting our model. The Bloomberg valuation platform allows the user to choose among several options; note that by default Bloomberg proposes a Heston model for the volatility, while our work assumes a deterministic and flat volatility.


Section II:

The need to capture these empirical features has pushed researchers to develop new pricing approaches; among those, we want to show the superiority of the lattice model proposed in "A relaxed lattice option pricing model: implied skewness and kurtosis" (Brorsen & Dasheng, 2009) and "A recombining lattice option pricing model that relaxes the assumption of lognormality" (Brorsen & Dasheng, 2010). The idea is similar to a Taylor expansion or to the nonparametric approach using the Cornish-Fisher expansion (Cornish & Fisher, 1937), where the third and fourth moments are added as new parameters to better fit the empirical distribution.

The paper “Option pricing: A simplified approach” (Cox, Ross, & Rubinstein,

1979) (“CRR”) pioneered the lattice approach. They developed a discrete-

time, binomial approach to option pricing. The essence of their approach is

the construction of a binomial lattice of stock prices where the risk neutral

valuation rule is maintained. With a particular selection of binomial

parameters including probabilities and jumps, they showed that the CRR

binomial model converges to the Black-Scholes model (Black & Scholes,

1973). The CRR methodology has been extended subsequently by various

researchers. The article “Option valuation using a tree Jump process” (Boyle

P. P., 1986) took the CRR methodology one step further and proposed a

trinomial option pricing model, a three-jump model, where the stock price

can either move upwards, downwards, or stay unchanged in a given time

period. Most recent studies have been developed to handle non-normal

distribution by varying the parameters within the lattice, such as “Implied

binomial tree” paper (Rubinstein, 1994). “A modified Lattice approach to

option pricing” paper (Tian, 1993) defines binomial and trinomial models

where it has been relaxed the symmetry restriction, but it has still retained

recombination and lognormal underlying distribution assumption. (Brorsen &


The (Brorsen & Dasheng, 2009) and (Brorsen & Dasheng, 2010) papers have further relaxed the restrictions, maintaining only the recombination constraint. Their algorithms were tested on the commodity market (options written on wheat); we propose their algorithm for equity derivatives and extend it to the barrier option case.

Although lattice models provide powerful, intuitive and asymptotically exact approximations, there are essentially two related but distinct kinds of approximation errors in any lattice pricing technique, which we refer to as distribution error and nonlinearity error:

Distribution error: the true asset price distribution is unknown, hence when we develop a pricing model we need to assume an underlying distribution. Usually market players have used a lognormal density, approximated by a finite set of nodes with probabilities. Even though the mean and variance of the continuous distribution are matched by the discrete distribution of the lattice model, the model itself may be biased by a wrong initial assumption on the distribution, and hence the option price does not converge to the correct value.

Nonlinearity error: the finite set of nodes with probabilities used by a lattice model can be thought of as a set of probability-weighted average option prices over a range of the continuous price space around each node. If the option payoff function is highly nonlinear, evaluating the nonlinear region with only one or a few nodes gives a poor approximation to the average value over the whole interval.

We will minimize/solve the latter with the adaptive mesh model, at a slight computational cost, while the first issue will be solved via the intuition initially proposed by (Brorsen & Dasheng, 2009).


1.a. The Methodologies:

Let $S_t$ and $S_{t+1}$ be the asset price at the current time and one period later. We define the price ratio as $Y = S_{t+1}/S_t$ and we assume that it takes value "u" with probability "q" and "d" with probability "1 - q" in a binomial tree. In a trinomial tree, the price ratio has three possible values "u", "m" and "d", with corresponding probabilities $q_u$, $q_m$ and $q_d$ respectively. There are n + 1 nodes at the final step of an n-step binomial tree, while there are 2n + 1 in the n-step trinomial tree.

We used a backward13 induction algorithm to run our initial pricing procedure on European and American options.

In “Bumping up against Barrier option with the Binomial Method” (Boyle &

Lau, 1994) the authors show that for continuous barrier options there is a

potentially large pricing errors, resulting from a lattice, even with a large

number of steps. “Pricing Barrier option with adaptive mesh model” article

(Ahn, Figlewske, & Gao, 1999) provides an approach that greatly increases

the efficiency in lattice models, they proposed an “adaptive mesh” method

(“AMM”), to deal with ”barrier-too-close” problem in continuous barrier

options. The AMM model is very powerful in both efficiency and flexibility as

many papers have testified.

Firstly, we have developed our model by testing its performance in the plain vanilla case. Our test has been performed using both algorithms proposed in "A recombining lattice option pricing model that relaxes the assumption of lognormality" (Brorsen & Dasheng, 2010): i) use historical moments, and ii) bootstrap the implied moments by fitting the model parameters to the market quotations. Secondly, we have used the historical moments to price our barrier options.

13 The option values at the nodes of the final step of the lattice are calculated first. Then, the

values at the step before the final one are calculated. This procedure is continued until the

initial step, i.e. the current time.


1. We have chosen the "SPX US 03/16/13 C1400 Index" and "SPX US 03/16/13 P1400 Index"14 options, since they were at-the-money ("ATM") options at inception [strike at 1400; SPX index at 1409 on 19/03/2012].

Here we provide the two Bloomberg screenshots of the call and put European options used as reference derivatives in developing our model.

14 Note that the date convention shown in the Bloomberg ticker is the American one.


2. Rather than directly using the Libor15 quote (with tenor chosen according to the option maturity), we have bootstrapped from the put-call parity equation the implied rate used in the market, as shown below. This method allows us to avoid producing an interest rate model.
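As an illustration (assuming no dividends and a continuously compounded rate, which are assumptions of ours rather than statements from the thesis), the implied rate follows from rearranging the put-call parity relation:

$$c - p = S_0 - K\,e^{-rT} \;\;\Longrightarrow\;\; r = -\frac{1}{T}\,\ln\!\left(\frac{S_0 - (c - p)}{K}\right),$$

where $c$ and $p$ are the quoted call and put prices with common strike $K$ and maturity $T$.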

3. We have bootstrapped16 the implied volatility surface via Black-Scholes, using put and call options with strikes 1200, 1300, 1400, 1500 and 1600, between March 2012 and January 2013, with maturity on 16/03/2013. In the above chart we present the volatility surface; at the top there is the implied call option volatility, while below are the results from the put options.

15 Libor is a trimmed average of reported funding rates for unsecured borrowing deals from a panel of large banks with the highest credit quality. It is used as the risk-free rate, rather than Treasury Bills, since that market is highly influenced by central banks and currency flotation (Duffee, 1996). Although Libor is not a deterministic function, the source of risk coming from the equity underlying is far higher than the one coming from the money market rate, hence assuming deterministic dynamics will not cause big errors/biases in pricing.
16 We have minimized the difference between the market price and the theoretical price by fixing all parameters besides volatility.


We didn’t use any smoother function and we have taken the closing options

ASK quote, hence the time evolution shows some abrupt/erratic change,

however the smile shape is clearly represented. The sharp increase in the

implied call volatility in the last days, for deep in the money options, is

caused i) the options considered were getting closer to maturity (intrinsic

value component is predominant in option price) and ii) January ending

month improvements in all markets given the higher confidence.
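A minimal sketch of the strike-by-strike bootstrap described in point 3 (assuming the Financial Toolbox function blsprice is available; the variable names are illustrative and differ from the routine optimp reported in the appendix):

% S0: spot, K: strike, r: implied rate, T: time to maturity (years), mktPrice: market ask quote
bsError = @(sigma) (blsprice(S0, K, r, T, max(sigma, 1e-6)) - mktPrice)^2;
sigma0  = 0.2;                          % starting guess
implVol = fminsearch(bsError, sigma0);  % minimize the squared pricing error in sigma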

4. We have bootstrapped17 the implied third and fourth moments using

the relaxed lattice (Brorsen & Dasheng, 2010) trinomial model.

Plain vanilla option case:

In the n-step binomial tree case, at the n + 1 nodes of the final step we have the following payoffs (we denote with "c" the call and with "p" the put):

$$c_{n,j} = \max\!\left(S_0\,u^{\,j} d^{\,n-j} - K,\; 0\right), \qquad p_{n,j} = \max\!\left(K - S_0\,u^{\,j} d^{\,n-j},\; 0\right), \qquad j = 0,\dots,n.$$

17 Differently from the BS case, where we have separately bootstrapped the implied volatility (call and put, strike by strike), in this case we have minimized the absolute difference considering both calls and puts as well as all 5 strikes altogether. We have forced the model to accept as input for the first and second moments:


At the nodes of step n - 1, the European option values are obtained by discounting the expected value at step n:

$$c_{n-1,j} = e^{-r\Delta t}\big[\,q\,c_{n,j+1} + (1-q)\,c_{n,j}\,\big], \qquad p_{n-1,j} = e^{-r\Delta t}\big[\,q\,p_{n,j+1} + (1-q)\,p_{n,j}\,\big].$$

The American option payoff is defined as the maximum between the intrinsic value and the continuation value:

$$C_{n-1,j} = \max\!\Big(S_0\,u^{\,j} d^{\,n-1-j} - K,\; e^{-r\Delta t}\big[\,q\,C_{n,j+1} + (1-q)\,C_{n,j}\,\big]\Big),$$
$$P_{n-1,j} = \max\!\Big(K - S_0\,u^{\,j} d^{\,n-1-j},\; e^{-r\Delta t}\big[\,q\,P_{n,j+1} + (1-q)\,P_{n,j}\,\big]\Big).$$

In the n-step trinomial case the above formulas become

$$c_{n-1,j} = e^{-r\Delta t}\big[\,q_u\,c_{n,j+2} + q_m\,c_{n,j+1} + q_d\,c_{n,j}\,\big], \qquad p_{n-1,j} = e^{-r\Delta t}\big[\,q_u\,p_{n,j+2} + q_m\,p_{n,j+1} + q_d\,p_{n,j}\,\big],$$

with the terminal payoffs evaluated at the 2n + 1 final nodes and, in the American case, the same comparison with the intrinsic value performed at each node.
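A compact MatLab sketch of this backward induction for the relaxed binomial tree (a sketch under the stated assumptions of a flat rate and no dividends; the function name and signature are illustrative, not the thesis code):

function price = relaxed_binomial_call(S0, K, r, dt, n, u, d, q)
% Backward induction for a European call on an n-step binomial tree
% with relaxed move sizes u, d and up-probability q (matched to historical moments).
j  = (0:n)';                              % number of up moves at the final step
ST = S0 .* u.^j .* d.^(n - j);            % terminal asset prices
V  = max(ST - K, 0);                      % terminal call payoffs
disc = exp(-r * dt);
for step = n:-1:1
    V = disc * (q * V(2:end) + (1 - q) * V(1:end-1));   % discounted expectation
end
price = V;
end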

The relaxed tree models do not assume a specific parametric distribution for the ratio Y, which is unknown. Instead, they are based on matching the moments of the underlying distribution. In practice, the moments can be estimated directly from empirical data or taken from an estimated parametric distribution. The relaxed binomial model is constructed such that the first 3 moments are matched via a discrete approximation to the continuous distribution through a Gaussian quadrature approach18 (DeVuyst & Preckel, 2007):

$$\sum_{i} p_i\, x_i^{\,k} \;=\; E\!\left(Y^{k}\right) \;=\; \int y^{k} f(y)\,dy, \qquad k = 0, 1, 2, \dots$$

Our unknown parameters are "u", "d" and "q" for the n-step binomial model.

18 k nodes allow matching the first 2k - 1 moments; therefore we can match up to the third moment for the binomial and up to the fourth moment for the trinomial tree.


We compute them by solving the approximation equations corresponding to the moments up to the third, as suggested by the article "Discrete approximations of probability distributions" (Miller & Rice, 1983):

$$q\,u^{\,i} + (1-q)\,d^{\,i} = E\!\left(Y^{\,i}\right) = m_i, \qquad i = 1, 2, 3.$$

The binomial case allows for a closed-form solution. We define a polynomial $P(y) = (y-u)(y-d) = y^{2} - \beta y + \gamma$, where $\beta = u + d$ and $\gamma = u\,d$ are then expressed as functions of the moments. We take the approximation equations and multiply the first by $\gamma$, the second by $-\beta$ and the third by one (the case i = 0 is redundant since it only imposes that the probabilities sum to one); summing them gives $m_3 - \beta m_2 + \gamma m_1 = 0$ and, repeating the argument one order lower, $m_2 - \beta m_1 + \gamma = 0$. The solutions are:

$$\beta = \frac{m_3 - m_1 m_2}{m_2 - m_1^{2}}, \qquad \gamma = \frac{m_1 m_3 - m_2^{2}}{m_2 - m_1^{2}},$$
$$u = \frac{\beta + \sqrt{\beta^{2} - 4\gamma}}{2}, \qquad d = \frac{\beta - \sqrt{\beta^{2} - 4\gamma}}{2}, \qquad q = \frac{m_1 - d}{u - d}.$$
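As a sketch, the relaxed binomial parameters can be obtained from the first three raw moments of Y with a helper like the following (hypothetical function name; it simply implements the closed-form expressions above):

function [u, d, q] = relaxed_binomial_params(m1, m2, m3)
% Closed-form relaxed binomial parameters matching the first three raw moments of Y
beta  = (m3 - m1*m2) / (m2 - m1^2);      % sum of the roots: u + d
gamma = (m1*m3 - m2^2) / (m2 - m1^2);    % product of the roots: u*d
disc  = sqrt(beta^2 - 4*gamma);          % discriminant of y^2 - beta*y + gamma = 0
u = (beta + disc) / 2;
d = (beta - disc) / 2;
q = (m1 - d) / (u - d);                  % up-move probability
end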

The only requirement on Y is that it has finite moments and comes from a valid probability distribution: note that the Gaussian quadrature procedure cannot be applied directly to subjective estimates of continuous distributions, because neither the form nor the moments of these distributions are known. Typically, the probability assessment process produces only a graph of the cumulative distribution. However, a two-step procedure can be used to determine a discrete approximation based solely on the information contained in the graph. The first step uses Gaussian quadrature to determine the moments of a continuous distribution, and the second step uses Gaussian quadrature again to determine a discrete approximation with these moments. Note that the time represented by each step in the lattice is the


time interval for the moments. For example, to represent a daily step we need to use daily data19. The paper "A recombining lattice option pricing model that relaxes the assumption of lognormality" (Brorsen & Dasheng, 2010) shows that the lognormal binomial model is a special case of this framework, which generalizes the previous one.

The n-step trinomial case does not allow a closed-form solution, since the system of equations is

$$q_u\,u^{\,k} + q_m\,m^{\,k} + q_d\,d^{\,k} = E\!\left(Y^{k}\right), \qquad k = 1, \dots, 4.$$

The system20 is over-determined and the recombination condition is not linear; thus we need to solve the system simultaneously by minimizing the squared differences between the two sides of the equations,

$$\min_{q_u,\,q_m,\,u,\,m,\,d}\; \sum_{k=1}^{4}\Big(q_u\,u^{\,k} + q_m\,m^{\,k} + q_d\,d^{\,k} - E\!\left(Y^{k}\right)\Big)^{2},$$

subject to the constraints $0 \le q_u, q_m \le 1$, $q_d = 1 - q_u - q_m$ and $u\,d = m^{2}$.
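This constrained least-squares problem is solved in the thesis code with fmincon (the target and constraint functions myfun and mycons are reported in the appendix); a minimal sketch of the call, with an illustrative starting point and an adapted signature for myfun, could be:

% new: vector of target moments E(Y^k), k = 0..4, as used by myfun in the appendix
x0 = [1/3, 1/3, 1.01, 1.00, 0.99];     % [q_u, q_m, u, m, d] starting guess (illustrative)
options = optimset('Algorithm', 'interior-point', 'Display', 'off');
x = fmincon(@(x) myfun(x, new), x0, [], [], [], [], [], [], @mycons, options);
% x(1:2): probabilities q_u, q_m (q_d = 1 - q_u - q_m); x(3:5): move sizes u, m, d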

Alternatively, we can imply the parameters by minimizing the sum of squared pricing errors, as suggested in the paper "Valuation of American call options on dividend-paying stocks" (Whaley, 1982):

19 In cases where the central limit theorem holds, the higher moments will change nonlinearly as the time step changes. Carr and Wu (2003) find that the volatility smile does not flatten out as maturity increases, which implies that the higher moments are stable under addition. Wu (2006), however, finds that this is not always the case. Regardless, users of the relaxed lattice approach must always be careful to match moments with the size of the time step.
20 Note that the trinomial model allows fitting up to the fifth moment; however, we do not pursue this possibility, since there is no literature on the behavior of the fifth moment.


$$\min\; \sum_{i}\Big(C_i - \hat C_i\Big)^{2} + \sum_{i}\Big(P_i - \hat P_i\Big)^{2},$$

where $C_i$ is the observed option premium for the i-th call contract, $P_i$ is the observed option premium for the i-th put contract, and $\hat C_i$, $\hat P_i$ are the corresponding model prices. We have taken 5 contracts per option type: one ATM, two in the money ("ITM") and two out of the money ("OTM").

In a binomial model, three parameters, i.e. "u", "d" and "q", need to be implied. In a trinomial model, although there are six parameters, only four of them need to be implied, because the probability condition, $q_u + q_m + q_d = 1$, and the recombining condition, $u\,d = m^{2}$, have to be satisfied. Since the lognormal distribution constraint is relaxed, the move sizes and transition probabilities capture the information about the underlying distribution that is contained in the option prices.

A by-product of implying the parameters with the relaxed lattice models is that the parameters can be used directly to estimate the moments of the unknown underlying distribution. With the binomial, the asset price ratio is assumed to take value u with probability q and d with probability 1 - q, so its first moment is $E(Y) = q\,u + (1-q)\,d$. Similarly, we can approximate the k-th moment of the asset price ratio based on the continuous dynamic process by the k-th moment based on the binomial framework. The highest moment that can be implied by a binomial model is skewness. The relaxed trinomial model yields both skewness and kurtosis. The approximation equations based on the trinomial framework are:

$$E\!\left(Y^{k}\right) \approx q_u\,u^{\,k} + q_m\,m^{\,k} + q_d\,d^{\,k}, \qquad k = 1, 2, 3, 4.$$

"Y" represents the ratio of the underlying asset price at the next step to the price at the current step. The absence of riskless arbitrage and market efficiency were assumed, and the additional restriction of a zero mean was imposed.


Note that the implied moments depend on the chosen number of steps, unless the underlying distribution is of the stable Paretian class21.

Now we provide some numerical examples. The chart provides the binomial (red line) and trinomial (green line) relaxed call prices computed with historical moments over a rolling window of 260 days,

$$\hat m_k = \frac{1}{260}\sum_{t=1}^{260} Y_t^{\,k}, \qquad k = 1, \dots, 4.$$

Here in the following we show the graph of the American put option (yellow line) and of the European put option (blue line).

21 In probability theory, a random variable is said to be stable (or to have a stable

distribution) if it has the property that a linear combination of two independent copies of the

variable has the same distribution, up to location and scale parameters. “Let X1 and X2 be

independent copies of a random variable X. Then X is said to be stable if for any constants a

> 0 and b > 0 the random variable aX1 + bX2 has the same distribution as cX + d for some

constants c > 0 and d. The distribution is said to be strictly stable if this holds with d = 0”

(Nolan, 2009)


It is evident that the model correctly incorporates the difference between the European (lower premium) and the American option; note that we have increased the risk-free rate up to 9% to magnify the difference. Here in the following we provide the same figure with the market rate (around 0.5%):

The early exercise option is not sizable at all, given the negligible time value, and it gets smaller as maturity approaches.

There is no difference between the call option premiums since we do not assume any dividend yield/dividend payment.


1.b. The Barrier results

The mean is assumed to be equal to the risk-free rate, consistent with the no-arbitrage argument, while the variance values are taken from the 90-day22 implied volatility of traded options. The skewness and kurtosis fitting is based on historical data with a time range of 260 days (roughly one year of daily observations).

As far as our study is concerned, we do not model a specific view on how to assess the proper moments/input parameters. We have just tested whether the mean over the chosen time period (90 days) is statistically different from zero and, as expected, the t-test does not allow us to reject the zero-mean assumption; hence we have forced the model to factor in a zero mean.

Remember that historical and implied volatility, although correlated, are different; moreover, several implied volatilities exist in the market, based on different future time horizons. Here in the following we show the historical vs. the implied 30-day volatility.

22 ATM options. 90 days is the tenor chosen for our options.


[Chart: 30-day implied vs. historical volatility]

As stated earlier, there is no significant difference between the two time series; furthermore, there is no clear direction, even if we may point out the relative lag between implied and historical volatility, where the latter seems to be less reactive to market changes.

Here in the following we show the results of our model, with and without the adaptive mesh model, for a knock-in call option written on the SPX index. As expected, the adaptive mesh model reduces the erratic behavior and speeds up the convergence of the model.



1.c. Stress and Non Arbitrage test:

Our paper is supported by several stress and robustness tests; moreover, we have always tracked its relative performance against other pricing approaches. We have tested no-arbitrage by checking the put-call parity23 condition, running a 500-step simulation:

$$c_i - p_i - \left(S_i - K\,e^{-r\,(T - t_i)}\right) = 0,$$

where "K" is the strike and the subscript "i" represents time. The model respects the condition across the whole period considered, as the chart above shows: there is a small divergence from the value "0" by a non-significant amount, mainly due to simulation approximation error; note that this difference is largely within the bid-ask spread of SPX index options.

23 put–call parity defines a relationship between the price of a European call option and

European put option, both with the identical strike price and expiry, namely that a portfolio

of long a call option and short a put option is equivalent to (and hence has the same value

as) a single forward contract at this strike price and expiry. This is because if the price at

expiry is above the strike price, the call will be exercised, while if it is below, the put will be

exercised, and thus in either case one unit of the asset will be purchased for the strike price,

exactly as in a forward contract.
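A minimal sketch of this check (illustrative variable names; callPrice and putPrice denote the lattice prices of the European pair at the same date i):

% Put-call parity residual at date i; should be close to zero for a European pair
tau       = T - t(i);                                           % time to maturity in years
parityGap = callPrice(i) - putPrice(i) - (S(i) - K * exp(-r(i) * tau));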


Here in the following we present the price convergence (plain vanilla case) of the lattice model as a function of the increasing number of steps, controlling for the time elapsed.

We have tested only the trinomial model, since it is more demanding than the binomial one. Time as a function of steps is an exponential function; however, we point out that a 5500-step trinomial tree required around 2 seconds.

The above figure shows that with more than 1000 steps the lattice model has a small error vs. the theoretical market price (117.775).


A 1000-step lattice model requires less than half a second, hence the model allows a good approximation within a suitable time window.

Now we describe the hedging procedure: we have computed the option delta by approximating the first derivative with respect to the underlying price change with the Central Difference Estimator24 ("CDE"). The delta is defined as

$$\Delta = \frac{\partial V}{\partial S} = \lim_{h \to 0}\frac{V(S_0 + h) - V(S_0 - h)}{2h},$$

and given this definition we proxy this quantity with

$$\hat\Delta_{CDE} = \frac{\hat V(S_0 + h) - \hat V(S_0 - h)}{2h},$$

where $\hat V(\cdot)$ denotes the model price (for the Monte Carlo benchmark, the average of the simulated discounted payoffs). The CDE is biased: $E\big(\hat\Delta_{CDE}\big) - \Delta = O(h^{2})$.

24 Note that this approach is superior to the forward difference estimator ("FDE"), since its bias goes to zero faster: the FDE bias is O(h).
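A minimal sketch of the CDE delta for the lattice price (relaxed_binomial_call is the illustrative helper sketched earlier; the bump size h is an assumption of ours):

h     = 0.01 * S0;                                            % bump: 1% of spot (illustrative)
Vup   = relaxed_binomial_call(S0 + h, K, r, dt, n, u, d, q);  % price with bumped-up spot
Vdown = relaxed_binomial_call(S0 - h, K, r, dt, n, u, d, q);  % price with bumped-down spot
delta = (Vup - Vdown) / (2 * h);                              % central difference estimator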


The two charts above show the evolution of the Delta Greek letter25 as the number of steps increases.

1.d. The empirical Comparison:

Here in this section we compare the results of our model against the standard binomial and trinomial lognormal lattice models, together with the classic Black and Scholes formula (European case) and a Monte Carlo simulation approach.

Since this paper is mainly oriented to market participants, we have specifically addressed the computational time burden of our new algorithms compared to a Monte Carlo simulation with control variate26 and antithetic27 variate variance-reduction techniques, under a parametric Gaussian distribution. We have set up a specific procedure to stress our model under time constraints. This procedure consists of running a Monte Carlo with 100,000, 300,000 and 500,000 simulations respectively and saving the time they require. Then we have optimized the number of nodes in our model to respect the three time constraints.

25 The Greeks are the quantities representing the sensitivities of the price of derivatives such as options to a change in the underlying parameters on which the value of an instrument or portfolio of financial instruments depends.
26 The control variate technique is based on using the error in the estimate of a known quantity to reduce the error in the estimate of the unknown one. We use the combination of the known variable X and the unknown one Y, $\hat Y(b) = \hat Y - b\,(\hat X - E[X])$, as the estimator. This estimator is unbiased for $E(Y)$, since $E\big(\hat Y(b)\big) = E(\hat Y) - b\,\big(E(\hat X) - E[X]\big) = E(Y)$. So we need to choose the parameter "b" that minimizes the variance of the new estimator, to ensure $\mathrm{Var}\big(\hat Y(b)\big) \le \mathrm{Var}(\hat Y)$. This method reduces the variance if the control variate is correlated with the unknown quantity; the sign does not matter, only the size (the higher, the better). If we jointly estimate b and Y we introduce a bias, since those variables are correlated; to solve this issue we need to run two independent simulations, the first regressing Y on X to obtain "b" (it converges to the correct value), and the second running the simulation for the estimator itself. The optimal "b" comes from minimizing $\mathrm{Var}\big(\hat Y(b)\big) = \mathrm{Var}(\hat Y) - 2b\,\mathrm{Cov}(\hat X,\hat Y) + b^{2}\,\mathrm{Var}(\hat X)$; we can compute the first-order condition, or just notice that it is a parabola whose vertex is the minimum, obtaining $b^{*} = \mathrm{Cov}(\hat X,\hat Y)/\mathrm{Var}(\hat X)$.

27 The antithetic variate technique consists of using, for each simulation, a given percentile and its opposite, so that the two draws have the same distribution but are not independent, being negatively correlated: $\hat\theta = \tfrac{1}{2}\big(\hat\theta(U) + \hat\theta(1-U)\big)$. It allows doubling the sample size without doubling the time burden.
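A minimal sketch of the Gaussian Monte Carlo benchmark with antithetic variates for a European call (illustrative parameter names; the control variate layer described in footnote 26 is omitted for brevity):

function price = mc_call_antithetic(S0, K, r, sigma, T, nSim)
% European call by Monte Carlo under GBM, using antithetic normal draws
Z      = randn(nSim, 1);
drift  = (r - 0.5 * sigma^2) * T;
ST1    = S0 * exp(drift + sigma * sqrt(T) .* Z);     % paths from Z
ST2    = S0 * exp(drift - sigma * sqrt(T) .* Z);     % antithetic paths from -Z
payoff = 0.5 * (max(ST1 - K, 0) + max(ST2 - K, 0));  % pairwise average of payoffs
price  = exp(-r * T) * mean(payoff);
end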


Here in the following we test our model without specifying any model for computing the input moments; we have used the historical ones as previously defined. The red line represents the relaxed model, the yellow one is the standard model together with the Black and Scholes formula, and the dashed blue line is the market price of the SPX US C1400 Index call option.

The relaxed model diverges from the lognormal assumptions; however, they share the same time evolution. As we expect, the skewness and the excess kurtosis do play an important role. In the case of the call written on the S&P 500, we see that our model diverges from the Black and Scholes and binomial/trinomial models according to the direction of the departure of the underlying empirical distribution from normality. Moreover, in relation to the historical market price, we can see that the relaxed model performs better. Note that we have used the historical moments as previously defined, without specifying any assumptions on them.

In the plots above we provide a graphical representation of the difference between the historical moments used to run the above charts and the implied moments bootstrapped from the market price time series, according to


$$\min\; \sum_{i}\Big(C_i - \hat C_i\Big)^{2} + \sum_{i}\Big(P_i - \hat P_i\Big)^{2}.$$

The first chart provides the implied parameters; in order from the red line to the blue line we have the implied moments of order three down to one. As we can note, there is a significant difference between the two plots.


Section III:

1.a. Strength & Weakness of the model:

The below chart allows us to graphically understand the difference between the standard model (red 'X' points) (Cox, Ross, & Rubinstein, 1979) and the relaxed model (blue 'X' points) (Brorsen & Dasheng, 2010). We report in Tab. 3 the moments used as input to define the model parameters.

This graph (based on the binomial case) provides a clear example of how, by fitting up to the skewness, the relaxed lattice tree better fits the empirical distribution behavior.


The chart above, based on the binomial case, provides another graphical intuition of the effect of positive skewness; the blue points describe the dynamics of the relaxed model. Tab. 4 summarizes the moments used to run the above chart.


The chart above shows the tree plot based on the trinomial tree (blue points represent the relaxed model, while red ones represent the standard lattice). To run this plot we used the moments in Tab. 3; as we can see, the skewness effect is predominant over the kurtosis. Note that the value range of the relaxed model is larger than that of the standard one.

Our thesis aims to overcome the usual implied volatility surface modeling by properly assessing asymmetry and the occurrence of extreme values. This relaxed lattice model is one step closer to defining a new generalized framework, by exploiting recent IT developments which allow exploiting the flexibility of simulation techniques without suffering from the usual computational time burden.

1.b. Possible future evolution:

Possible future research could adapt this relaxed model to fit stochastic volatility behavior, such as the Heston model (Heston, 1993), where mean and variance are allowed to change. Note that this lattice extension would cause the recombination condition to fail.

On top of this theoretical improvement, there are more practical issues that can be further analyzed. We did not propose a specific econometric framework to


compute the proper moments to be used as input in our model. There are lots of papers on how to properly forecast variance; however, there is no such agreement on the higher moments. A possible evolution could be to set up an econometric framework to properly incorporate market expectations and to implement a cross-strike option strategy, trading on the volatility surface, to see if the relaxed model grants an excess return. Note that there is strong evidence of non-stationarity in the time series; this fact may seem like a mathematical subtlety, but it is not. If we use ordinary least squares to estimate a regression whose independent variable is a lagged value of the dependent variable, our statistical inference may be invalid. To conduct valid statistical inference, we must make a key stationarity assumption in time-series analysis.

Furthermore, there is still room to improve the model to handle double or more exotic barrier option payoffs. In the double barrier option case there are two areas that generate nonlinearity error at every barrier monitoring date. The error is cumulative, and the inaccuracy mounts when the barrier condition is checked more frequently. On top of that, the usual approach of halving the price step quadruples the number of time steps and makes the node value calculation 16 times more expensive. It is notorious how this programming structure becomes unfriendly due to overlapping nodes.


Conclusions:

This thesis does not only implement the new relaxed lattice approach (Brorsen & Dasheng, 2010) for European and American equity options, but also extends the model to price barrier options. Besides, we also comprehensively implement other competitive methods, with detailed numerical results to compare against our relaxed lattice, such as the standard trinomial/binomial tree, Monte Carlo simulation and the American option via finite differences. We have also implemented, for the barrier option case, an Adaptive Mesh Model to reduce the nonlinearity error. From our research data, we numerically show i) the efficiency of the relaxed lattice in better tailoring the empirical distribution, ii) the closer mimicking of market option prices and iii) the parsimonious time burden.


Appendix:

1.a. GARCH and JB test insight

The MatLab printout shown above refers to the Gaussian GARCH(1,1). As we can see, all the model parameters are statistically different from zero (the t-tests are far above the z-percentile at 95%). The parameters respect the condition stated in the paper to prevent the variance from exploding.

The same considerations hold for the t-GARCH(1,1): as we can see, all variables are statistically different from zero (at 95%). Note that we have forced the model to fit the degrees-of-freedom parameter as computed according to the following MatLab algorithm:


df_init = 4;                                                        % initial guess for the degrees of freedom
[df, qmle] = fminsearch('logL1', df_init, [], n_ldayly(1:end), sigma);

The target equation is:

function [sumloglik] = logL1(df, ret, sigma)
% Negative log-likelihood of standardized Student-t innovations,
% to be minimized over the degrees-of-freedom parameter df
d = df;
y = ret;
s = sigma;
logL = gammaln((d+1)/2) - gammaln(d/2) - 0.5*log(pi) - 0.5*log(d-2) ...
       - 0.5*(1+d)*log(1 + (y./s).^2./(d-2));
sumloglik = -sum(logL);

The JB tests, performed on the empirical distribution as well as on the standardized ones, have p-values smaller than 0.0001, signaling a strong divergence from the Gaussian assumption.

1.b. MatLab Code insight:

We have chosen MatLab since it is a flexible platform with a huge programming community, with lots of solutions and pre-defined functions, which has sped up our coding. Our general aim is to offer efficient and readable code; however, we did not focus our work on delivering a user-friendly interface.

The naming convention used is to name the variables with combinations of words as clear as possible, separated by upper-case letters; in some routines there are service variables, which are usually given fuzzy or dummy names. Matrix variables are named with the prefix "m_" while vectors use "v_".

We did not define any error handler, since the aim of this code was to test our statements rather than to produce re-usable/commercial code.

The code is structured into three levels:


The main routine "OptionControl", where all the other sub-routines and functions are managed. There are some control points where the user is asked to provide parameters or on/off switches. The code is divided into 8 sections, as shown below:

disp(' Steps: ')
disp('1. Data Loading ---------------------------------')
disp('2. Re-elaborate Data ----------------------------')
disp('3. Compute Implied Moments ----------------------')
disp('4. Compute Historical Moments and Gaussian Test -')
disp('5. Pricing Plain Vanilla and Check on Data ------')
disp('6. Relative Difference plain vanilla case -------')
disp('7. Pricing American case ------------------------')
disp('8. Pricing Barrier case -------------------------')

In the sub-routine "dtAnalysis" we perform all the data analysis and the tests of the Gaussian assumption.

In the sub-routine "SenAnalysis" we perform the sensitivity analyses and the specific algorithms used to test computational burden and efficiency.

Among the many user-defined functions we will mention:

The optimization function that fits the parameters of the relaxed trinomial model uses the MatLab function "fmincon", which finds a constrained minimum of a function of several variables. It attempts to solve problems of the form:

min f(x) over x, subject to:
A*x <= b,   Aeq*x = beq     (linear constraints)
c(x) <= 0,  ceq(x) = 0      (nonlinear constraints)
lb <= x <= ub               (bounds)

“myfun” is the objective function used:

f = 0;
for i = 0:4
    % least-squares fit of the lattice raw moments of order 0 to 4 (probabilities
    % x(1), x(2), 1-x(1)-x(2) on the three movement factors x(3), x(4), x(5))
    % to the target moments stored in "new"
    f = f + (x(1)*(x(3)^i) + x(2)*(x(4)^i) + (1-x(1)-x(2))*(x(5)^i) - new(i+1))^2;
end


where “x” is the vector of parameters to be optimized under the nonlinear constraints defined in “mycons”:

c   = [-x(1); -x(2); x(1)-1; x(2)-1];     % keep the two probabilities x(1), x(2) in [0,1]
ceq = [(x(3)*x(5)) - (x(4)^2)];           % recombination condition on the movement factors: u*d = m^2
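To show how these two pieces fit together, the following is a hedged sketch of the fmincon call; the starting point x0, the empty bounds and the options are illustrative assumptions rather than the exact values used in our code, and we assume "mycons" returns the pair [c, ceq]:

% illustrative starting point: two probabilities plus three movement factors
x0 = [1/3, 1/3, 1.05, 1.00, 0.95];
options = optimset('Algorithm', 'interior-point', 'Display', 'off');
x = fmincon(@(x) myfun(x, new), x0, [], [], [], [], [], [], ...
            @(x) mycons(x), options);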

The bootstrapping function used to compute the implied moments:

options = optimset('Algorithm', 'interior-point', 'Display', 'notify');
for t = 1:2                           % third dimension of m_option
    for j = 1:5                       % strikes
        for i = 1:tot_days            % observation dates
            x = fmincon(@(x) optimp(x, m_option(i,j,t), ts_rate(i), ...
                ts_underlying(i), v_tenor(i), v_strike(j), t), ...
                x0, -1, 0, [], [], [], [], [], options);   % A = -1, b = 0 enforces x >= 0
            m_vol(i,j,t) = x;         % store the implied volatility
            clear x
        end
    end
end

We have used the interior-point approach28 to constrained minimization, which solves a sequence of approximate minimization problems. The method relies on a self-concordant barrier function used to encode the convex set.

The function “optimp” takes as its target the pricing equation in which the implied volatility is the only unknown; that implied volatility is the parameter we are solving for (an illustrative sketch follows below).
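Purely as an illustration of what such an objective can look like (this is not the thesis' optimp; the helper name, the argument list and the use of blsprice from the Financial Toolbox are our assumptions), one could minimize the squared gap between the Black-Scholes price at volatility x and the observed market quote:

function err = optimp_sketch(x, mktPrice, rate, spot, tenor, strike, isCall)
% hypothetical objective: squared difference between the Black-Scholes price
% at volatility x and the quoted market price
[callP, putP] = blsprice(spot, strike, rate, tenor, x);
if isCall == 1
    err = (callP - mktPrice)^2;
else
    err = (putP - mktPrice)^2;
end
end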

The tree plots provided in section 3 have been drawn with the user-defined functions “plottree” and “plottree_tri”:

St = St * Up^max((i-j),0) * Down^esimo;                          % binomial node ("plottree")
St = AssetP * Up^max(esimo-nSteps,0) * Mid^max((i-abs(j-i)),0) ...
     * Down^max(i-j,0);                                          % trinomial node ("plottree_tri")

where each tree node is computed one by one and saved into a user-defined matrix, which is then plotted.
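A minimal sketch of this node-by-node construction for a recombining binomial lattice; the factor values and the final plot call are illustrative assumptions, not the thesis' plotting routine:

nSteps = 4; AssetP = 100; Up = 1.1; Down = 1/Up;    % illustrative CRR-style factors
m_nodes = NaN(nSteps+1, nSteps+1);                  % rows: down moves, columns: time steps
for i = 0:nSteps
    for j = 0:i
        m_nodes(j+1, i+1) = AssetP * Up^(i-j) * Down^j;   % node after i steps and j down moves
    end
end
plot(0:nSteps, m_nodes', '.-');                     % one curve per down-move level; NaNs are skipped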

28 “The interior point method was invented by John von Neumann, who suggested a new

method of linear programming, using the homogeneous linear system of Gordan (1873)

which was later popularized by Karmarkar's algorithm in 1984 for linear programming.”


The AMM model has been set up in the following way:

We find the minimum number of up or down steps needed to breach the barrier, using the function “fminsearch” to target the absolute difference between the underlying dynamics and the barrier level (a sketch follows below).

At the critical nodes, instead of running the usual “n”-step lattice with step size “h”, we plug in our Adaptive Mesh model with a finer step size until the end (“n”).

The plug-in is implemented with a user-defined function “adaptivemesh”, to which we pass the parameters defined in the calling lattice model. The parameters are then resized to fit the new step size and to ensure that the isomorphism condition is respected; a simplifying relation is used in this resizing.
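A hedged sketch of the first step, mirroring the description above; S0, Down and Barrier are illustrative names and values, not the ones used in the thesis code:

S0 = 100; Down = 0.97; Barrier = 90;                        % illustrative values
% continuous number of down moves that puts the lattice on the barrier level
kStar = fminsearch(@(k) abs(S0 * Down.^k - Barrier), 1);
kMin  = ceil(kStar);                                        % minimum integer number of down steps that breaches it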


Works cited:

Ahn, D., Figlewski, S., & Gao, B. (1999). Pricing Discrete Barrier Options with an Adaptive Mesh Model. Journal of Derivatives, 6, pp. 33-43.

Black, F., & Scholes, M. (1973). The Pricing of Options and Corporate Liabilities. Journal of Political Economy, 81, pp. 637-659.

Bollerslev, T. (1986). Generalized Autoregressive Conditional

Heteroskedasticity. Journal of Econometrics, 31, pp. 307-327.

Boyle, P. (1986). Option Valuation Using a Three-jump Process. International

Options Journal, 3, pp. 7-12.

Boyle, P., & Lau, S. (1994). Bumping Up Against the Barrier with the Binomial Method. Journal of Derivatives, 4, pp. 6-14.

Brenner, M., & Galai, D. (1986). New Financial Instruments for Hedging Changes in Volatility. Financial Analysts Journal, July/August.

Brorsen, B. W., & Dasheng, J. (2010). A recombining lattice option pricing model that relaxes the assumption of lognormality. Springer Science+Business Media, LLC, pp. 349-367.

Brorsen, B. W., & Dasheng, J. (2009). A relaxed lattice option pricing model: implied skewness and kurtosis. Agricultural Finance Review, 69(3), pp. 268-283.

Cheuk, T., & Vorst, T. (1996). Complex Barrier Options. Journal of Derivatives, 4, pp. 8-22.

Cornish, E. A., & Fisher, R. A. (1937). Moments and Cumulants in the Specification of Distributions. Revue de l'Institut International de Statistique, 4, pp. 1-14.


Cox, J. C., Ross, S., & Rubinstein, M. (1979). Option Pricing: A Simplified Approach. Journal of Financial Economics, 7, pp. 229-264.


DeVuyst, E. A., & Preckel, P. V. (2007). Gaussian cubature: A practitioner's guide. Mathematical and Computer Modelling, 45, pp. 787-797.

Donna, K. (2001). Fundamentals of the Futures Market. McGraw-Hill Professional, pp. 142-143.

Duffee, G. R. (1996). Idiosyncratic Variation of Treasury Bill Yields. Journal of

Finance, pp. 527-551.

Heston, S. L. (1993). A closed-form solution for options with stochastic volatility with applications to bond and currency options. The Review of Financial Studies, 6, pp. 327-343.

Hull, J., & White, A. (1998). Value at Risk when daily changes in market variables are not normally distributed. Journal of Derivatives, 5, pp. 9-19.

Jarque, C. M., & Bera, A. K. (1980). Efficient tests for normality, homoscedasticity and serial independence of regression residuals. Economics Letters, 6.

Miller, A. C., & Rice, T. R. (1983). Discrete approximations of probability distributions. Management Science, 29, pp. 352-362.

Nolan, J. P. (2009). Stable Distributions: Models for Heavy Tailed Data.

Rubinstein, M. (1994). Implied binomial trees. Journal of Finance, 49(3), pp. 771-818.


Tian, Y. (1993). A modified lattice approach to option pricing. Journal of

Futures Markets, 13, pp. 563–577.

Whaley, R. (1982). Valuation of American call options on dividend-paying stocks. Journal of Financial Economics, 10, pp. 29-58.