GSoC iid-finance - R Project
INTRODUCTION:
It is a well-known and accepted fact that financial data are not independent and identically distributed (IID) and exhibit extraordinary levels of autocorrelation. This autocorrelation diminishes the apparent risk of such asset classes, as the true return/risk profile is easily camouflaged within a haze of illiquidity, stale prices, averaged price quotes and smoothed return reporting.
Such discrepancies lead to misleading performance statistics such as volatility, Sharpe ratio, correlation, market beta and other investment indicators that rest on the assumption of normality/IID data. Our aim is to implement the approaches to addressing autocorrelation in financial data that have recently been discussed in research journals, and to include the resulting functions in PerformanceAnalytics, an R package that provides a collection of econometric functions for performance and risk analysis.
CODING PLAN & METHODS:
Main steps: I propose to implement the following list of algorithms that address the non-IID nature of financial time series data.
1. Calculate Unsmoothed Returns: Okunev and White Model
Methodology: To remove the autocorrelation effect in the series, the model describes an iterative approach that is repeated until the first m autocorrelations are sufficiently close to zero.
In general, to remove the order-m autocorrelation from a given return series we make the following transformation to the return series:
r_(m,t) = ( r_(m−1,t) − c_m · r_(m−1,t−m) ) / ( 1 − c_m )
Once we have found the solution for c_m to create r_(m,t), we will need to iterate back to remove the first (m − 1) autocorrelations again. As mentioned, the process is repeated until all the autocorrelation values are sufficiently close to zero.
Implementation: Since the closed-form solution for r_(m,t) differs from the one obtained from the lme/gls/acf functions in package 'nlme', I plan to implement a function that checks this condition at every step and repeats it until the autocorrelations are sufficiently close to zero.
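The iteration above can be sketched in a few lines of base R. This is a hypothetical helper (not the final PerformanceAnalytics API), and it assumes that autocorrelation beyond lag m is negligible, so that the lag-m autocorrelation a is zeroed by the root of c² − c/a + 1 = 0 with |c| < 1:

```r
# Sketch of the Okunev-White iteration for a single lag m (illustrative only).
ow_unsmooth <- function(r, m = 1, tol = 0.01, maxiter = 50) {
  for (i in seq_len(maxiter)) {
    a <- acf(r, lag.max = m, plot = FALSE)$acf[m + 1]
    if (abs(a) < tol) break                   # autocorrelation close enough to zero
    if (abs(a) >= 0.5) stop("lag-", m, " autocorrelation too large for this approximation")
    c_m <- (1 - sqrt(1 - 4 * a^2)) / (2 * a)  # smoothing coefficient for this pass
    n <- length(r)
    # apply r[m, t] = (r[m-1, t] - c_m * r[m-1, t-m]) / (1 - c_m)
    r <- (r[(m + 1):n] - c_m * r[1:(n - m)]) / (1 - c_m)
  }
  r
}
```

Applied to a smoothed series, the returned series should show a lag-m autocorrelation near zero.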
GLM Model:
Methodology: The GLM model rests on a particular assumption about the structure of reported returns, which are supposed to be a weighted average of past "true" unobserved returns. This structure is supposed to be the following:

r^o_t = θ_0 · r_t + θ_1 · r_{t−1} + … + θ_k · r_{t−k},  with θ_j ∈ [0, 1] and θ_0 + θ_1 + … + θ_k = 1

where r^o_t is the observed return at time t, r_t is the "true" unobserved return at time t, and each θ_j is interpreted as a "smoothing parameter" that must be estimated.
Implementation: The estimation method is based on maximum likelihood for a moving-average process, using the algorithm proposed by Brockwell and Davis (1991). I plan to use the package 'itsmr', which provides a subset of the functionality of the textbook "Introduction to Time Series and Forecasting" by Peter J. Brockwell and Richard A. Davis. Its function ma.inf would be used to calculate the unsmoothed returns.
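As an illustration of the estimation step, the MA(k) fit can also be sketched with base R's arima() as a stand-in for the itsmr-based version (glm_theta is a hypothetical name):

```r
# Estimate the GLM smoothing weights theta_0..theta_k by fitting an MA(k)
# model to the observed returns, then normalize so the weights sum to one
# (the GLM identification condition). Illustrative only.
glm_theta <- function(r, k = 2) {
  fit <- arima(r, order = c(0, 0, k), method = "ML")
  th  <- c(1, unname(coef(fit)[paste0("ma", 1:k)]))  # MA coefficients (1, b1, ..., bk)
  th / sum(th)
}
```

On a series smoothed with known weights, the normalized estimates should recover those weights approximately.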
Geltner Model: skipped, as it has already been implemented in PerformanceAnalytics.
2. Comparative table for assessing normality of data after the GLM and OW models have been applied to the time series
Smoothing has an impact on the third and fourth moments of the returns distribution too. Skewness is a measure of symmetry, or more precisely the lack of symmetry, whereas kurtosis is a measure of whether the data are peaked or flat relative to a normal distribution. For the MA(k) smoothing structure with weights θ_0, ..., θ_k, the "true" skewness may be computed from the observed one using the formula:

S = S^o · ( ∑_{j=0}^{k} θ_j² )^{3/2} / ∑_{j=0}^{k} θ_j³

The same principle can be applied to the (excess) kurtosis:

K − 3 = ( K^o − 3 ) · ( ∑_{j=0}^{k} θ_j² )² / ∑_{j=0}^{k} θ_j⁴

where the observed central moments are computed on X_t = r^o_t − r̄ (the demeaned observed return).
Implementation: I plan to use the 'moments' package in R, whose skewness and kurtosis functions would be used to print a table comparing the smoothed and unsmoothed returns.
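The table itself is straightforward; a minimal sketch with base-R stand-ins for moments::skewness and moments::kurtosis (population-moment versions, so small-sample scaling differs slightly from other conventions):

```r
# Population skewness and (raw) kurtosis, as in the 'moments' package.
skew <- function(x) mean((x - mean(x))^3) / (mean((x - mean(x))^2))^1.5
kurt <- function(x) mean((x - mean(x))^4) / (mean((x - mean(x))^2))^2

# Comparative table for a smoothed series and its unsmoothed counterpart.
moment_table <- function(smoothed, unsmoothed) {
  data.frame(series   = c("smoothed", "unsmoothed"),
             skewness = c(skew(smoothed), skew(unsmoothed)),
             kurtosis = c(kurt(smoothed), kurt(unsmoothed)))
}
```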
3. Calculate normalized drawdown for given volatility and track record length
Methodology: The paper's analysis shows that the three most important determinants of drawdowns are (i) length of track record, (ii) mean return and (iii) volatility of returns. According to the paper, the simulated drawdown distributions explain the drawdown patterns that CTAs (commodity trading advisors) have exhibited over the past 10 years.
Implementation: The drawdowns obtained in the paper are the result of adding together sequences of returns. As a result, even though the distribution from which any given return is drawn may be highly skewed or exhibit fat tails, the sum of the returns produces a random variable that tends to be more normally distributed.
Hence, given the approximate normality of drawdowns, I plan to generate a Monte Carlo simulation of drawdowns (n = 1000, as in the paper) using the rmaxdd function of the 'fBasics' package, which takes as input the mean, standard deviation and track length of the distribution to generate random drawdowns. A deleverage limit (which some managers would apply in case of drawdown, and which would be supplied by the user) would be conditioned upon in the simulation process.
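As a base-R sketch of this simulation step (an explicit loop standing in for the fBasics::rmaxdd call; the deleverage-limit conditioning is left out, and sim_maxdd is a hypothetical name):

```r
# Monte Carlo simulation of maximum drawdowns for n simulated track records
# of length `len`, with per-period mean and standard deviation as given.
sim_maxdd <- function(n = 1000, mean = 0.01, sd = 0.05, len = 120) {
  replicate(n, {
    w <- cumsum(rnorm(len, mean, sd))  # cumulative (uncompounded) return path
    max(cummax(w) - w)                 # maximum drawdown of the path
  })
}
```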
4. Calculate expected maximum drawdown under Brownian motion assumptions
Theory: The maximum drawdown is commonly used in finance as a measure of risk for a stock that follows a particular random process.
Under the stochastic framework, the maximum drawdown of the process X over the horizon [0, T] is defined as:

D = sup_{0 ≤ s ≤ t ≤ T} [ X(s) − X(t) ]

The distribution of D is described through G_D(h) = P[D ≥ h]. The paper derives the corresponding pdf as an infinite series whose terms depend on quantities θ_n that satisfy an associated transcendental equation.

Using the identity E[D] = ∫_0^∞ G_D(h) dh and defining α = μ · √( T / (2σ²) ), the expected maximum drawdown reduces to a function Q of α², which the paper tabulates.
Implementation: The paper goes on to describe the closed-form solution, whose proof is beyond the scope of this proposal. It proposes a numerical solution based on the asymptotic behaviour of the integral function, with a lookup table (given in Appendix B). The code submitted with this proposal uses this table along with the various integral conditions (depending on ratios such as μ ≥ σ²/h) to compute the expected drawdown.
5. Calculate expected maximum drawdown using moving block bootstrap
Theory: Continuing from the previous function, the paper discusses an approach that divides managers into one of three groups: low volatility (0% to 12.5%), medium volatility (12.5% to 25%) and high volatility (25% to 50%).
Implementation: Using the group returns and group volatilities, three maximum drawdown distributions are simulated. Then, using the number of managers in each of the three groups, a composite distribution is produced as a weighted average of the three separate distributions.
I plan to implement a wrapper that uses the normalized drawdown function implemented in step 4 and takes the user's input for the number of managers in each group to generate the weighted distribution.
6. Calculate time under water, penance, and closed-form expected maximum drawdown
Theory: The framework described in the research paper provides investment management information to portfolio managers. The paper accommodates return series with first-order serially-correlated investment outcomes, and derives closed-form solutions for the three parameters described below:
Maximum Drawdown: For a significance level α ≤ 1/2, the associated maximum drawdown is defined as the maximum probabilistic drawdown, or worst drawdown regardless of time horizon.

Maximum Time under Water (TuW) is the time elapsed until the occurrence of that maximum drawdown.
Triple Penance Rule: An investment manager at the bottom of his performance has to be prepared for the possibility of remaining under water for an additional period of triple the time elapsed since the previous high-water mark, within the same confidence interval.
The Triple Penance Rule, however, assumes IID cash flows (πΔt) over time.
Implementation: Snippets of Python code for the analytical solution are provided in the paper.
Maximum Drawdown: The snippet uses a standard normal distribution, for a given confidence level, time horizon, mean and standard deviation, to generate a Monte Carlo simulation (1 million paths) with a first-order autoregressive coefficient.
I plan to use the qnorm(p, mean, sd) function in R, which returns the specified quantile loss at a given confidence level, together with rnorm() to generate random normal draws, and the acf() function (from the 'stats' package) to accommodate the AR(1) structure when generating drawdowns at the chosen confidence level.
Similarly, a golden-section algorithm has been used to compute the maximum drawdown of the process. This is a technique for finding the extremum (minimum or maximum) of a strictly unimodal function by successively narrowing the range of values inside which the extremum is known to exist.
Time under Water (TuW): The analytical solution in the snippet has been implemented using the golden-section algorithm over a given search range (the drawdown quantile would be pre-specified by the user).
The Triple Penance Rule: The value of this parameter is obtained as three times the period elapsed to the drawdown bottom, per the rule above.
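For the IID normal case, my reading of the closed forms can be sketched as follows (illustrative; the final functions will follow the paper's Python snippets, and triple_penance is a hypothetical name). The drawdown quantile path μt + Z_α·σ·√t bottoms out at t* = (Z_α·σ/(2μ))², giving MaxDD = Z_α²σ²/(4μ) and TuW = (Z_α·σ/μ)² = 4t*, hence the "triple penance" of 3t* after the bottom:

```r
# Closed-form MaxDD, TuW and penance under IID normal returns with mu > 0.
triple_penance <- function(mu, sigma, alpha = 0.05) {
  z <- qnorm(alpha)                         # lower-tail quantile, e.g. -1.645 at 95%
  maxdd <- z^2 * sigma^2 / (4 * mu)         # worst drawdown at confidence alpha
  tuw   <- (z * sigma / mu)^2               # maximum time under water
  c(MaxDD = maxdd, TuW = tuw, Penance = 3)  # bottom at tuw/4: triple penance
}
```

Note the exact relation TuW · μ = 4 · MaxDD implied by these formulas.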
7. Add an autocorrelation adjustment method to StdDev, skewness and kurtosis
Theory: In finance, the translation of volatility from day to day, week to week, month to month, or to an annual basis is done under the IID assumption, as shown in the example below.
σ_n = √n · σ_1
To account for autocorrelation, the formula proposed is:

σ_n = σ_1 · √( n + 2 · ∑_{i=1}^{n−1} ( n − i ) · ρ_i )

where ρ_i represents the autocorrelation coefficient at the i-th lag.
Implementation: I plan to use the acf() function to compute the i-th lag coefficients of the series. Given that the condition n > k is satisfied, the translated volatility is calculated. The user would input the frequency of the data series matrix and the desired translated frequency.
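A minimal sketch of the adjusted translation (scale_vol is a hypothetical helper name):

```r
# Autocorrelation-adjusted volatility scaling:
#   sigma_n = sigma_1 * sqrt( n + 2 * sum_{i=1}^{n-1} (n - i) * rho_i )
scale_vol <- function(r, n) {
  rho <- acf(r, lag.max = n - 1, plot = FALSE)$acf[-1]  # rho_1 .. rho_{n-1}
  sd(r) * sqrt(n + 2 * sum((n - seq_len(n - 1)) * rho))
}
```

For an IID series the correction term is near zero and the result collapses back to √n · σ_1.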
8. Add an AC adjusted 'Lo' method to SharpeRatio
Consider the following q-period return R_t(q):

R_t(q) ≡ R_t + R_{t−1} + … + R_{t−q+1}

Under IID returns its variance is simply q·σ²; with serial correlation it is determined by the following equation:

Var[ R_t(q) ] = ∑_{i=0}^{q−1} ∑_{j=0}^{q−1} Cov( R_{t−i}, R_{t−j} ) = q·σ² + 2·σ² · ∑_{k=1}^{q−1} ( q − k ) · ρ_k

where ρ_k = Cov( R_t, R_{t−k} ) / Var( R_t ) is the k-th order autocorrelation of the time series.
As in the "Non-IID Returns" section, we can use GMM to estimate these autocorrelations as well as their asymptotic joint distribution, which can then be used to derive the following limiting distribution of SR̂(q):
√T · ( SR̂(q) − SR(q) ) → N( 0, V_GMM(q) )

where V_GMM(q) = (∂g/∂Θ) · Σ · (∂g/∂Θ)′, and g, Θ and Σ are defined under the stochastic framework.

The standard error of the Sharpe ratio, used to construct the confidence interval, is then:

SE( SR̂(q) ) = √( V_GMM(q) / T )
Implementation: The distribution of the Sharpe ratio can be computed by first finding the first k autocorrelations (k specified by the user), then defining the matrices g, Θ and Σ to output an estimated SR (upper tail) at the specified confidence level. The standard error of the estimated Sharpe ratio can also be obtained from the variance and the confidence time interval.
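The core correction can be sketched via Lo's annualization factor, which replaces the naive √q scaling (the GMM standard errors are a further step; lo_factor is a hypothetical name):

```r
# Lo's autocorrelation-adjusted factor for the q-period Sharpe ratio:
#   SR(q) = lo_factor(r, q) * SR(1),
# where the factor is q / sqrt( q + 2 * sum_{k=1}^{q-1} (q - k) * rho_k ).
lo_factor <- function(r, q = 12) {
  rho <- acf(r, lag.max = q - 1, plot = FALSE)$acf[-1]
  q / sqrt(q + 2 * sum((q - seq_len(q - 1)) * rho))
}
```

For IID returns the factor reduces to √q; positive autocorrelation shrinks it, lowering the annualized Sharpe ratio.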
9. Add an AC adjusted 'probabilistic' method to SharpeRatio
Methodology: The paper derives a new, practical approach to estimating the Sharpe ratio in a probabilistic format.
Implementation: The code generates a new Sharpe ratio based on a mixture of normal distributions that are correlated among themselves. I plan to replicate the Python code given in the paper, based on the equations derived there.
10. Finish GLM's smoothing index
Methodology: The GLM model rests on a particular assumption about the structure of reported returns, which are supposed to be a weighted average of past "true" unobserved returns:

r^o_t = θ_0 · r_t + θ_1 · r_{t−1} + … + θ_k · r_{t−k}

where r^o_t is the observed return at time t, r_t is the "true" unobserved return at time t, and each θ_j is interpreted as a "smoothing parameter" that must be estimated.
In the context of smoothed returns, a lower value of ξ implies more smoothing, and the upper bound of 1 implies no smoothing; hence we shall refer to ξ as a "smoothing index". The concentration of weights can be measured as:

ξ ≡ ∑_{j=0}^{k} θ_j²
Implementation: This would be a continuation of the GLM function: it computes the xts vector of the various θ_i's and takes the sum of squares to arrive at the "smoothing index".
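Given estimated weights (normalized to sum to one, as in the GLM setup), the index is a one-liner:

```r
# Smoothing index: sum of squared GLM weights. Equals 1 with no smoothing
# (theta = (1, 0, ..., 0)) and 1/(k+1) for equal weights over k+1 lags.
smoothing_index <- function(theta) sum(theta^2)
```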
11. Plot of maximum drawdown distribution
Methodology: The computational part has already been implemented in the previous sections for Burghardt and Liu (2012).
Implementation: R, Sweave and LaTeX would be used for chart publication. In particular, the xtable function in the 'xtable' package would be used to generate a comparative table of the non-AC and AC maximum drawdown distributions.
12. Plot expected max DD versus realized max DD
Implementation: Plot expected versus realized maximum drawdowns for the original set of data, this time using the respective autocorrelation estimates when calculating the expected maximum drawdown for each method (Brownian motion, normalized, Magdon-Ismail, Bailey and Lopez de Prado; the computational portions are discussed in the sections above), in a composite plot (expected max DD vs realized max DD) that also contains the 'y = x' line for comparison.
13. Stacked bar plot of lags for AC
Implementation: Use the acf() function together with the 'ggplot2' and 'RColorBrewer' packages to create a stacked bar plot of autocorrelation lags (the number of lags depending on the user).
14. Plot maximum drawdown/annualized volatility as a function of SR
Methodology: Suppose that the rate of return of the asset follows a normal law with mean μ and standard deviation σ, N(μ, σ²). In that case the expected maximum loss can be calculated in closed form (the equation is given in the paper), where Φ is the cumulative standard normal distribution and N is the number of days. The equation expresses maximum drawdown/volatility as a function of return/volatility.
Implementation: Proceed with Monte Carlo simulations (1 million paths; the paper carries 6000 for a 36-month time period) by simulating returns over a period of daily returns (frequency to be input by the user), measuring the maximum drawdown for levels of annualized return divided by volatility varying over [−a, a] (specified by the user) in steps of 0.01, and plotting the data for several lower-tail quantiles (referred to as threshold points in the paper: 1%, 5% and 10%) under the standard normal distribution. The function would also take a user-supplied xts return series whose expected maximum loss would be calculated and benchmarked against the Monte Carlo simulated 'MaxDD/vol vs return/vol' plot for comparative insight.
15. Provide an alternate version of Conditional Drawdown
The portfolio's Conditional Value at Risk (CVaR) using the methods in "Comparative Analysis of Linear Portfolio Rebalancing Strategies: An Application to Hedge Funds" by Krokhmal, P., S. Uryasev and G. Zrazhevsky has been implemented in the function min.cdar.portfolio, which has been published on the R-bloggers website.
Chekhlov, Uryasev and Zabarankin (2003) propose an alternative version of the optimized calculation to reduce the computation time.
Given a time series of the instrument's drawdowns ε = (ε_1, …, ε_n), corresponding to time moments (t_1, …, t_n), the CDD functional is presented by CVaR_α(ε), whose computation is reduced to a linear programming procedure leading to a single optimal value of y equal to ζ(α) if π_ε(ζ(α)) > α, and to a closed interval of optimal y with left endpoint ζ(α) if π_ε(ζ(α)) = α.
The computational improvement mentioned here transforms the objective function into a knapsack problem.
The knapsack problem is dual to linear programming problem mentioned above. Based onduality theory, optimal values of both the objective functions coincide.
Hence, the problem can be solved by the standard greedy algorithm in O(n log2 n) time.
I plan to use the 'subselect' package, which provides three functions with different search algorithms: anneal, a simulated annealing-type search algorithm; genetic, a genetic algorithm; and improve, a modified local improvement algorithm. They perform better than greedy-type algorithms such as the standard stepwise selection algorithms widely used in linear regression. All four search functions invoke code written in either C++ (eleaps) or Fortran (anneal, genetic and improve) to speed up computation times.
16. Calculate Conditional Drawdown at Risk (CDaR)
The algorithm discretizes the time interval into N parts along with the returns, which are used to compute the average, maximum and conditional drawdowns under different constraints and objective functions. The problem is then solved by maximizing the objective function using linear programming tools.
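An empirical sketch of the discretized computation, without the LP machinery: CDaR at level α is simply the average of the worst (1 − α) fraction of the drawdown series (cdar is a hypothetical name, and uncompounded cumulative returns are assumed):

```r
# Empirical Conditional Drawdown at Risk on a discretized return series.
cdar <- function(r, alpha = 0.95) {
  w  <- cumsum(r)        # cumulative (uncompounded) returns
  dd <- cummax(w) - w    # drawdown at each time step
  worst <- ceiling((1 - alpha) * length(dd))
  mean(sort(dd, decreasing = TRUE)[1:worst])
}
```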
17. Calculate Rolling Economic Drawdown(REDD)
Methodology: The paper proposes an alternative to the anchored time window (since inception) for drawdown calculation: a constant rolling time window. Define a Rolling Economic Max (REM) at time t, looking back at the portfolio wealth history W for a rolling window of length H:

REM(t, H) = max_{t−H ≤ s ≤ t} [ e^{r_f (t − s)} · W(s) ]

From the REM, the Rolling Economic Drawdown (REDD) is consequently defined as:

REDD(t, H) = 1 − W(t) / REM(t, H)
Implementation: Using the return time series, the risk-free rate (rf) and the rolling window horizon (H) as input, generate portfolio wealth as a function of time (W_t). The next step creates the lookback window (H) and, upon the conditions mentioned, generates an xts vector of REDD at all time intervals.
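A sketch of the REDD step on a plain wealth series (risk-free compounding inside the window omitted for brevity; redd is a hypothetical name):

```r
# Rolling Economic Drawdown: 1 - W(t) / max of W over the last H observations.
redd <- function(w, H) {
  sapply(seq_along(w), function(t) {
    rem <- max(w[max(1, t - H + 1):t])  # rolling economic max over window H
    1 - w[t] / rem
  })
}
```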
18. Write a wrapper to produce standard error estimates for a given function in PerformanceAnalytics
Implementation: The wrapper would support various methods: one based on the variance of the input data, the bootstrap, the block bootstrap (to preserve some autocorrelation structure), and different sets of distribution functions.
19. Support HC and HAC methods within regression functions
Implementation: Using the 'sandwich' package in R, HC and HAC covariance matrices can be extracted from fitted models using vcovHC and vcovHAC. These matrices can be used for partial t or z tests of regression coefficients via the functions available in the 'lmtest' package, and for testing and dating structural changes in the presence of heteroskedasticity and autocorrelation (e.g. an OLS-based CUSUM test using the quadratic spectral kernel HAC estimator of Andrews (1991)).
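To illustrate what the HAC correction does, a Newey-West (Bartlett-kernel) variance for a sample mean can be written in a few lines of base R (vcovHAC generalizes this idea to regression coefficients; nw_var_mean is a hypothetical name):

```r
# HAC (Newey-West) variance of mean(x) with Bartlett weights and L lags.
nw_var_mean <- function(x, L = 5) {
  n  <- length(x)
  u  <- x - mean(x)
  g0 <- sum(u^2) / n                                          # lag-0 autocovariance
  gj <- sapply(1:L, function(j) sum(u[(j + 1):n] * u[1:(n - j)]) / n)
  (g0 + 2 * sum((1 - (1:L) / (L + 1)) * gj)) / n              # long-run variance / n
}
```

For an IID series this is close to var(x)/n; under positive autocorrelation it is larger, widening the resulting confidence intervals.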
Timeline
Community period: April 24 to May 27.
Familiarize myself with the R functionality useful for the project and interact with the mentors about the exact requirements. I will remain active on IRC and the mailing lists to discuss and finalize the design and structure of the new algorithms to be implemented. Implement the OW model (Okunev and White) and the GLM model (Getmansky, Lo and Makarov) for the calculation of unsmoothed returns. The functions can be tested on the datasets researched in the papers.
Coding period :
May 27 to Sept 27.
week 1 : Add an AC-adjusted 'Lo' method to SharpeRatio; finish GLM's smoothing index.
week 2 : Implement the Burghardt, Duncan and Liu (2003) normalized drawdown method and expected maximum drawdown using the moving block bootstrap; testing and debugging.
week 3 : Implementation of the Burghardt and Liu (2012) functions for autocorrelation adjustment, plot of expected drawdown vs maximum drawdown and the stacked graph plot; testing and debugging.
week 4 : Implement Acar and James (1997) functions.
week 5 : Chekhlov, Uryasev and Zabarankin (2000) conditional drawdown implementation
week 6 : Chekhlov, Uryasev and Zabarankin (2003) alternative versions (methods).
week 7 : Bailey and Lopez de Prado (2012) AC adjusted 'probabilistic' method to Sharpe Ratio
week 8 : Time under water, penance, and closed-form expected maximum drawdown functions, Bailey and Lopez de Prado (2013).
week 9 : Calculate Rolling Economic Drawdown function as described in Yang and Zhong (2012).
week 10 : Support HC and HAC methods within regression functions Zeileis (2004)
week 11 : Backtesting HC and HAC methods on various data samples; comparing results with equivalent Matlab functions.
week 12 : Wrapper function for standard error estimates.
week 13 : Creating example datasets and demos, writing documentation; finalizing the package, checking its reusability and compatibility with other packages.
August 26:
Submission.
Some future proposals (to be implemented if time permits, as I expect to complete the project early due to the joining date of my job):
Researching faster methods of evaluating the programs; using Rcpp to speed up execution time.
MANAGEMENT OF CODING PROJECT
What is the communication plan with mentors?
I will report to them daily, or at least once every two days, through email, and will contact them through Skype or Gtalk in case of any confusion.
How do you propose to ensure code is submitted / tested?
I will try my best to stick to the schedule, work every day and keep close contact with the mentors. I will be using Subversion for tracking changes to the programs. With regular work and sincerity, I think it will not be a problem for me to finish my work.
What is your contingency plan for things not going to schedule?
If for some unforeseen reason the work does not go according to schedule, I will try to compensate for the lost time by working extra hours. I will finish the most important programs first, to provide the basic functionality, then move on to the more sophisticated programs.
EDUCATIONAL QUALIFICATION AND PREVIOUS EXPERIENCE :
About me :
I am a Masters in Financial Engineering student in the collaborative Nanyang Technological University-Carnegie Mellon University program.
Why is this project important for me ?:
I wish to pursue a doctoral degree in Applied Mathematics after working in the industry as a finance professional for a few years. I will be starting my job in fall 2012. I think this project will provide me an excellent opportunity to learn about the latest developments in the investment analytics industry and to understand returns from various illiquid investment assets whose behaviour diverges from normality.
Time commitments :
I have 4 subjects to pursue during my last trimester at Carnegie Mellon University. Apart from this I do not have any commitments. I will be able to devote at least 30 hours per week to GSoC.
Why do I feel eligible for this project ? :
As a postgraduate student of Financial Engineering, I have done all the required coursework for this project. I had courses like Regression, Time Series and Forecasting, and Asset Pricing in my curriculum. I have also worked with C++, Python and R in all of my previous projects and have done lab courses with them, so I feel very comfortable working with them. I believe this experience is quite relevant and valuable to my GSoC proposal.
References :
Acar, E., and James, S.: Maximum Loss and Maximum Drawdown in Financial Markets,
unpublished manuscript, 1997.
http://www.intelligenthedgefundinvesting.com/pubs/easj.pdf
Bailey, David H. and Lopez de Prado, Marcos, The Sharpe Ratio Efficient Frontier (July
1, 2012). Journal of Risk, Vol. 15, No. 2, Winter 2012/13. Available at SSRN:
http://ssrn.com/abstract=1821643 or http://dx.doi.org/10.2139/ssrn.1821643
Bailey, David H. and Lopez de Prado, Marcos, Drawdown-Based Stop-Outs and the
‘Triple Penance’ Rule (January 1, 2013). Available at SSRN:
http://ssrn.com/abstract=2201302
Bailey, David H. and Lopez de Prado, Marcos, The Strategy Approval Decision: A
Sharpe Ratio Indifference Curve Approach (January 2013). Algorithmic Finance, Vol. 2,
No. 1 (2013). Available at SSRN: http://ssrn.com/abstract=2003638
or http://dx.doi.org/10.2139/ssrn.2003638
Burghardt, G., Duncan, R. and L. Liu, Deciphering drawdown. Risk magazine, Risk
management for investors, September, S16-S20, 2003.
http://www.risk.net/data/risk/pdf/investor/0903_risk.pdf
Kat, Harry M. and Brooks, Chris, The Statistical Properties of Hedge Fund Index
Returns and Their Implications for Investors (October 31, 2001). Cass Business School
Research Paper. Available at SSRN: http://ssrn.com/abstract=289299
or http://dx.doi.org/10.2139/ssrn.289299
Burghardt, G., and L. Liu, It's the Autocorrelation, Stupid (November 2012). Newedge
working paper. http://www.amfmblog.com/assets/Newedge-Autocorrelation.pdf
Cavenaile, Laurent, Coen, Alain and Hubner, Georges, The Impact of Illiquidity and
Higher Moments of Hedge Fund Returns on Their Risk-Adjusted Performance and
Diversification Potential (October 30, 2009). Journal of Alternative Investments,
Forthcoming. Available at SSRN: http://ssrn.com/abstract=1502698. Working paper is
at http://www.hec.ulg.ac.be/sites/default/files/workingpapers/WP_HECULg_20091001_
Cavenaile_Coen_Hubner.pdf
Chekhlov, Alexei, Uryasev, Stanislav P. and Zabarankin, Michael, Portfolio Optimization
with Drawdown Constraints (April 8, 2000). Research Report #2000-5. Available at
SSRN: http://ssrn.com/abstract=223323 or http://dx.doi.org/10.2139/ssrn.223323
Chekhlov, Alexei, Uryasev, Stanislav P. and Zabarankin, Michael, Drawdown Measure in
Portfolio Optimization (June 25, 2003). Available at SSRN:
http://ssrn.com/abstract=544742 or http://dx.doi.org/10.2139/ssrn.544742
Getmansky, Mila, Lo, Andrew W. and Makarov, Igor, An Econometric Model of Serial
Correlation and Illiquidity in Hedge Fund Returns (March 1, 2003). MIT Sloan Working
Paper No. 4288-03; MIT Laboratory for Financial Engineering Working Paper No.
LFE-1041A-03; EFMA 2003 Helsinki Meetings. Available at SSRN:
http://ssrn.com/abstract=384700 or http://dx.doi.org/10.2139/ssrn.384700
Grossman, S. and Z. Zhou (1993): “Optimal Investment Strategies for controlling
drawdowns.” Mathematical Finance, Vol. 3, pp.
241-276.http://www.intelligenthedgefundinvesting.com/pubs/rb-zzsg.pdf
Harding D, G Nakou and A Nejjar, 2003. The pros and cons of drawdown as a
statistical measure for risk in investments. AIMA Journal, April 2003, pages 16–17.
http://www.intelligenthedgefundinvesting.com/pubs/rb-hnn.pdf or as working
paper:http://www.turtletrader.com/drawdown.pdf
Hayes, Brian T.: Maximum Drawdowns of Hedge Funds with Serial Correlation, Journal
of Alternative Investments 8, pp. 26-38, 2006 (N/A online?)
Lo, Andrew W., The Statistics of Sharpe Ratios. Financial Analysts Journal, Vol. 58,
No. 4, July/August 2002. Available at SSRN:http://ssrn.com/abstract=377260
Lopez de Prado, Marcos and Foreman, Matthew, Markowitz meets Darwin: Portfolio
Oversight and Evolutionary Divergence (July 15, 2012). Johnson School Research
Paper Series No. 39-2011. Available at SSRN: http://ssrn.com/abstract=1931734
orhttp://dx.doi.org/10.2139/ssrn.1931734
Lopez de Prado, Marcos and Peijan, Achim, Measuring Loss Potential of Hedge Fund
Strategies. Journal of Alternative Investments, Vol. 7, No. 1, pp. 7-31, Summer 2004.
Available at SSRN: http://ssrn.com/abstract=641702
Magdon-Ismail, M. and Amir Atiya, Maximum drawdown. Risk Magazine, 01 Oct 2004.
http://www.risk.net/data/Pay_per_view/risk/technical/2004/1004_tech_atiya.pdf
or http://alumnus.caltech.edu/~amir/mdd-risk.pdf. Matlab code at
http://www.mathworks.com/help/finance/emaxdrawdown.html
Magdon-Ismail, M., Atiya, A., Pratap, A., and Yaser S. Abu-Mostafa: On the Maximum
Drawdown of a Brownian Motion, Journal of Applied Probability 41, pp. 147-161, 2004
http://alumnus.caltech.edu/~amir/drawdown-jrnl.pdf
Mendes, Beatriz V.M. and Leal, Ricardo P.C., Maximum Drawdown: Models and
Applications (November 2003). Coppead Working Paper Series No. 359. Available at
SSRN: http://ssrn.com/abstract=477322 or http://dx.doi.org/10.2139/ssrn.477322
Meucci, Attilio, Review of Dynamic Allocation Strategies: Utility Maximization, Option
Replication, Insurance, Drawdown Control, Convex/Concave Management (July 7,
2010). Available at SSRN: http://ssrn.com/abstract=1635982
or http://dx.doi.org/10.2139/ssrn.1635982
Okunev, John and White, Derek R., Hedge Fund Risk Factors and Value at Risk of
Credit Trading Strategies (October 2003). Available at SSRN:
http://ssrn.com/abstract=460641 or http://dx.doi.org/10.2139/ssrn.460641
Skeggs, J., Do the autocorrelations hold with daily and weekly data? (November 2012).
Newedge working paper.
http://www.newedge.com/content/dam/newedgecom/documents/brokerage-services/alternativeedge-snapshots/Newedge_Snapshot_Do_the_autocorrelations_hold_with_daily_data.pdf
Yang, Z. George and Zhong, Liang, Optimal Portfolio Strategy to Control Maximum
Drawdown - The Case of Risk Based Dynamic Asset Allocation (February 25, 2012).
Available at SSRN: http://ssrn.com/abstract=2053854 or
http://dx.doi.org/10.2139/ssrn.2053854
Zabarankin, M., Pavlikov, K., and S. Uryasev. Capital Asset Pricing Model (CAPM) with
Drawdown Measure. Research Report 2012-9, ISE Dept., University of Florida,
September 2012
Code for Calculating Expected Maximum Drawdown under Brownian Motion:
Reference: Magdon-Ismail et al. (2003 and 2004)
# emaxdrawdown calculates the expected value E[D] of the maximum drawdown
# of a Brownian motion for a given drift (mu), standard deviation (sig)
# and time horizon (T), using the lookup tables of Magdon-Ismail et al.
# mu  : drift of the Brownian motion
# sig : standard deviation of the Brownian motion
# T   : time horizon of the Brownian motion
emaxdrawdown <- function(mu, sig, T) {
  gamma <- sqrt(pi / 8)
  if (mu == 0) {
    Ed <- 2 * gamma * sig * sqrt(T)
  } else {
    alpha <- mu * sqrt(T / (2 * sig^2))
    x <- alpha^2
    if (mu > 0) {
      # tabulated Qp(x) values from Appendix B (columns: x, Qp)
      mQp <- matrix(c(
        0.0005, 0.0010, 0.0015, 0.0020, 0.0025, 0.0050, 0.0075, 0.0100,
        0.0125, 0.0150, 0.0175, 0.0200, 0.0225, 0.0250, 0.0275, 0.0300,
        0.0325, 0.0350, 0.0375, 0.0400, 0.0425, 0.0450, 0.0500, 0.0600,
        0.0700, 0.0800, 0.0900, 0.1000, 0.2000, 0.3000, 0.4000, 0.5000,
        1.5000, 2.5000, 3.5000, 4.5000, 10, 20, 30, 40, 50, 150, 250,
        350, 450, 1000, 2000, 3000, 4000, 5000,
        0.019690, 0.027694, 0.033789, 0.038896, 0.043372, 0.060721,
        0.073808, 0.084693, 0.094171, 0.102651, 0.110375, 0.117503,
        0.124142, 0.130374, 0.136259, 0.141842, 0.147162, 0.152249,
        0.157127, 0.161817, 0.166337, 0.170702, 0.179015, 0.194248,
        0.207999, 0.220581, 0.232212, 0.243050, 0.325071, 0.382016,
        0.426452, 0.463159, 0.668992, 0.775976, 0.849298, 0.905305,
        1.088998, 1.253794, 1.351794, 1.421860, 1.476457, 1.747485,
        1.874323, 1.958037, 2.020630, 2.219765, 2.392826, 2.494109,
        2.565985, 2.621743), ncol = 2)
      if (x < 0.0005) {
        Qp <- gamma * sqrt(2 * x)
      } else if (x <= 5000) {
        # interpolate the tabulated values on a log scale in x
        Qp <- spline(log(mQp[, 1]), mQp[, 2], xout = log(x))$y
      } else {
        Qp <- 0.25 * log(x) + 0.49088   # asymptotic expansion for large x
      }
      Ed <- (2 * sig^2 / mu) * Qp
    } else {
      # tabulated Qn(x) values from Appendix B (columns: x, Qn)
      mQn <- matrix(c(
        0.0005, 0.0010, 0.0015, 0.0020, 0.0025, 0.0050, 0.0075, 0.0100,
        0.0125, 0.0150, 0.0175, 0.0200, 0.0225, 0.0250, 0.0275, 0.0300,
        0.0325, 0.0350, 0.0375, 0.0400, 0.0425, 0.0450, 0.0475, 0.0500,
        0.0550, 0.0600, 0.0650, 0.0700, 0.0750, 0.0800, 0.0850, 0.0900,
        0.0950, 0.1000, 0.1500, 0.2000, 0.2500, 0.3000, 0.3500, 0.4000,
        0.5000, 1.0000, 1.5000, 2.0000, 2.5000, 3.0000, 3.5000, 4.0000,
        4.5000, 5.0000,
        0.019965, 0.028394, 0.034874, 0.040369, 0.045256, 0.064633,
        0.079746, 0.092708, 0.104259, 0.114814, 0.124608, 0.133772,
        0.142429, 0.150739, 0.158565, 0.166229, 0.173756, 0.180793,
        0.187739, 0.194489, 0.201094, 0.207572, 0.213877, 0.220056,
        0.231797, 0.243374, 0.254585, 0.265472, 0.276070, 0.286406,
        0.296507, 0.306393, 0.316066, 0.325586, 0.413136, 0.491599,
        0.564333, 0.633007, 0.698849, 0.762455, 0.884593, 1.445520,
        1.970740, 2.483960, 2.990940, 3.492520, 3.995190, 4.492380,
        4.990430, 5.498820), ncol = 2)
      if (x < 0.0005) {
        Qn <- gamma * sqrt(2 * x)
      } else if (x <= 5) {              # the Qn table covers x in [0.0005, 5]
        Qn <- spline(mQn[, 1], mQn[, 2], xout = x)$y
      } else {
        Qn <- x + 0.5                   # asymptotic expansion for large x
      }
      Ed <- -(2 * sig^2 / mu) * Qn      # mu < 0, so E[D] is positive
    }
  }
  Ed
}