
The Journal of Operational Risk

Volume 13 Number 2 June 2018

PEFC Certified: this book has been produced entirely from sustainable papers that are accredited as PEFC compliant (www.pefc.org).

■ Operational risk measurement beyond the loss distribution approach: an exposure-based methodology Michael Einemann, Joerg Fritscher and Michael Kalkbrener

■ Distortion risk measures for nonnegative multivariate risks Montserrat Guillen, José María Sarabia, Jaume Belles-Sampera and Faustino Prieto

■ An operational risk capital model based on the loss distribution approach Ruben D. Cohen

■ Modeling very large losses Henryk Gzyl

Trial Copy

For all subscription queries, please call:
UK/Europe: +44 (0) 207 316 9300
USA: +1 646 736 1850
ROW: +852 3411 4828


Risk.net in numbers

140,000 users (each month)
370,000 page views (each month)
21,000+ page views on Derivatives
19,600+ on Risk Management
19,400+ on Regulation
6,900+ on Commodities
6,500+ on Asset Management
58,000+ articles stretching back 20 years
200+ new articles & technical papers (each month)

Visit the world’s leading source of exclusive in-depth news & analysis on risk management, derivatives and complex finance now. See what you’re missing.

To subscribe to a Risk Journal visit subscriptions.risk.net/journals or email [email protected]


The Journal of Operational Risk

EDITORIAL BOARD

Editor-in-Chief: Marcelo Cruz

Associate Editors
Stephen Brown, NYU Stern
Ariane Chapelle, University College London
Anna Chernobai, Syracuse University
Rodney Coleman, Imperial College
Eric Cope, Credit Suisse
Michel Crouhy, IXIS Corporate Investment Bank
Patrick de Fontnouvelle, Federal Reserve Bank of Boston
Thomas Kaiser, Goethe University Frankfurt
Mark Laycock, ML Risk Partners Ltd
Marco Moscadelli, Bank of Italy
Michael Pinedo, New York University
Jeremy Quick, Guernsey Financial Services Commission
Svetlozar Rachev, Stony Brook University
David Rowe, David M. Rowe Risk Advisory
Anthony Saunders, New York University
Sergio Scandizzo, European Investment Bank
Evan Sekeris, Oliver Wyman
Pavel Shevchenko, Macquarie University
Peter Tufano, Harvard Business School

SUBSCRIPTIONS

The Journal of Operational Risk (Print ISSN 1744-6740 | Online ISSN 1755-2710) is published quarterly by Infopro Digital, Haymarket House, 28–29 Haymarket, London SW1Y 4RX, UK.

Subscriptions to The Journal of Operational Risk, and Risk.net Journals, are available on an annual basis. To find out about the different options, including our exclusive academic rates, which start from £100, visit subscriptions.risk.net/journals-print or contact [email protected] (EU/US) or [email protected] (ROW).

All subscription orders, single/back issue orders and changes of address should be sent to:

UK & Europe Office: Infopro Digital, Haymarket House, 28–29 Haymarket, London SW1Y 4RX, UK. Tel: +44 (0) 207 316 9300

US & Canada Office: Infopro Digital, 55 Broad Street, Floor 22, New York, NY 10005, USA. Tel: +1 646 736 1850

Asia & Pacific Office: Infopro Digital, Unit 1704-05 Berkshire House, Taikoo Place, 25 Westlands Road, Hong Kong. Tel: +852 3411 4888

Website: www.risk.net/journals
E-mail: [email protected]


The Journal of Operational Risk

GENERAL SUBMISSION GUIDELINES

The Journal of Operational Risk welcomes submissions from practitioners as well as academics. Manuscripts and research papers submitted for consideration must be original work that is not simultaneously under review for publication in another journal or other publication outlets. All papers submitted for consideration should follow strict academic standards in both theoretical content and empirical results. Papers should be of interest to a broad audience of sophisticated practitioners and academics.

Submitted papers should follow Webster’s New Collegiate Dictionary for spelling, and The Chicago Manual of Style for punctuation and other points of style. Papers should be submitted electronically via our online submissions site:

https://editorialexpress.com/cgi-bin/e-editor/e-submit_v15.cgi?dbase=risk

Please clearly indicate which journal you are submitting to. Papers should be submitted as either a LaTeX file or a Word file (“source file”). The source file must be accompanied by a PDF file created from the version of the source file that is submitted. LaTeX files need to have an explicitly coded bibliography included or be sent with a BBL file. All files must be clearly named and saved by author name and date of submission.

A concise and factual abstract of between 150 and 200 words is required, and it should be included in the main document. Four to six keywords should be included after the abstract. Submitted papers must also include an Acknowledgements section and a Declaration of Interest section. Authors should declare any funding for the paper or conflicts of interest. In-text citations should follow the author–date system as outlined in The Chicago Manual of Style. Reference lists should be formatted in APA style.

The number of figures and tables included in a paper should be kept to a minimum. Figures and tables must be included in the main PDF document and also submitted as clearly numbered editable files (please see the online submission guidelines for guidance on editable figure files). Figures will appear in color online but will be printed in black and white. Footnotes should be used sparingly. If footnotes are necessary, then these should be included at the end of the page and should be no more than two sentences. Appendixes will be published online as supplementary material.

Before submitting a paper, authors should consult the full author guidelines at:

http://www.risk.net/static/risk-journals-submission-guidelines

Queries may also be sent to:

The Journal of Operational Risk, Infopro Digital, Haymarket House, 28–29 Haymarket, London SW1Y 4RX, UK
Tel: +44 1858 438 800 (UK/EU), +1 212 776 8075 (USA), +852 3411 4828 (Asia)
E-mail: [email protected]


The Journal of Operational Risk

The journal

The Basel Committee’s 2014 revision of its operational risk capital framework, along with the multi-billion-dollar settlements that financial institutions had to make with financial authorities, made operational risk the key focus of risk management. The Journal of Operational Risk stimulates active discussion of practical approaches to quantifying, modeling and managing this risk, as well as discussing current issues in the discipline. It is essential reading for practitioners and academics who want to stay informed about the latest research in operational risk theory and practice.

The Journal of Operational Risk considers submissions in the form of research papers and forum papers on, but not limited to, the following topics.

• The modeling and management of operational risk.

• Recent advances in techniques used to model operational risk, eg, copulas, correlation, aggregate loss distributions, Bayesian methods and extreme value theory.

• The pricing and hedging of operational risk and/or any risk transfer techniques.

• Data modeling: external loss data, business control factors and scenario analysis.

• Models used to aggregate different types of data.

• Causal models that link key risk indicators and macroeconomic factors to operational losses.

• Regulatory issues, such as Basel II or any other local regulatory issue.

• Enterprise risk management.

• Cyber risk.

• Big data.


The Journal of Operational Risk Volume 13/Number 2

CONTENTS

Letter from the Editor-in-Chief vii

RESEARCH PAPERS

Operational risk measurement beyond the loss distribution approach: an exposure-based methodology 1
Michael Einemann, Joerg Fritscher and Michael Kalkbrener

Distortion risk measures for nonnegative multivariate risks 35
Montserrat Guillen, José María Sarabia, Jaume Belles-Sampera and Faustino Prieto

An operational risk capital model based on the loss distribution approach 59
Ruben D. Cohen

Modeling very large losses 83
Henryk Gzyl

Editor-in-Chief: Marcelo Cruz
Publisher: Nick Carver
Journals Manager: Sarah Campbell
Editorial Assistant: Ciara Smith
Subscription Sales Manager: Aaraa Javed
Global Key Account Sales Director: Michelle Godwin
Composition and copyediting: T&T Productions Ltd
Printed in UK by Printondemand-Worldwide

© Infopro Digital Risk (IP) Limited, 2018. All rights reserved. No parts of this publication may be reproduced, stored in or introduced into any retrieval system, or transmitted, in any form or by any means, electronic, mechanical, photocopying, recording or otherwise, without the prior written permission of the copyright owners.


LETTER FROM THE EDITOR-IN-CHIEF

Marcelo Cruz

Welcome to the second issue of Volume 13 of The Journal of Operational Risk.

The most anticipated update in operational risk regulation has finally arrived: December 2017 saw the publication by the Basel Committee on Banking Supervision (BCBS) of the final version of its standardized measurement approach (SMA) methodology, which will replace the approaches set out in Basel II (ie, the simpler standardized approaches and the advanced measurement approach (AMA), which permitted the use of internal models) starting from January 1, 2022. The SMA is part of a new set of rules under Basel III, which also significantly changed the face of credit risk measurement and heaped many other requirements on top of financial institutions. Basel III has discretionary transitional arrangements until 2027 in case the SMA has an impact equal to or higher than 25% of a firm’s current risk-weighted assets (RWAs): ie, if the SMA reduces a firm’s RWAs by more than 25%, a floor will be imposed. In most cases, this should not affect the revised operational risk framework’s implementation date of 2022, as it is a standardized approach.

I am sure that all of our subscribers and readers have already comprehensively and carefully digested the new rules, so this editor’s letter may not be the most appropriate place for dwelling on them. I understand that most practitioners are currently undergoing regulatory impact analysis, which, depending on where your firm is based, can be positive or negative. More comprehensive analysis will certainly come with time. What I can say is that, independent of the Basel III rules, in order to manage and mitigate your risks, you need to be able to measure them. The operational risk industry needs to keep that in mind. While the purpose of the now-defunct AMA was to measure a firm’s regulatory capital level in order to protect the firm against operational risks, we can and should still use models to estimate operational risk economic capital, precisely because, without them, the task of managing and mitigating capital would be incredibly difficult. These internal models are now unshackled from regulatory requirements and can be optimized for managing the daily risks to which financial institutions are exposed. In addition, operational risk models can and should be used for stress tests and the Comprehensive Capital Analysis and Review (CCAR). From my conversations with practitioners, most firms are having their operational risk analytics teams focus on this.

One positive note regarding the Basel III rules is that the requirement for firms to provide greater disclosure of their losses is likely to bring operational risk up to the standard of credit risk in terms of data requirements and analysis. The regulatory focus under Basel III will be on the comprehensiveness of bank data collection and reporting.


However, if we look closely, we notice that the key regulatory tasks for operational risk can be performed by finance or controller functions. The calculation of operational risk capital can be performed using a spreadsheet, and the key data with which we will fill that spreadsheet, ie, the operational risk losses, can be taken straight from a firm’s balance sheet. Therefore, the focus of the operational risk manager should be diverted from these menial tasks toward actual loss avoidance and management. This reinforces the point I made earlier that, given the complexity of financial institutions, managing and mitigating operational risk without models would be impossible.

Operational risk managers and quantitative analysts will now have more time to focus on the emerging operational risks that are almost constantly generating headlines. Despite much progress having been made over recent years, significant operational risks remain poorly understood, and they deserve more attention and better measurement frameworks. A notable example is cybersecurity: an area where banks struggle to make the right cost–benefit trade-offs between the investments needed to improve controls and their risk exposures.

Given these changes in the regulatory landscape and their impact on the industry, it is fair to say that The Journal of Operational Risk, which is the most important publication in the industry and features most of the top-level thinking in operational risk, also needs to adapt. We are always talking to practitioners and academics in order to understand their needs and concerns so we can be useful to them. Over the next few months, we will increase these conversations and listen to suggestions on how we can change what we do to be more useful to our subscribers and the industry as a whole.

From now on, we will be expecting more papers on cyber and IT risks, not only on their quantification but also on better ways to manage those kinds of risks. We would also like to publish more papers on important subjects such as enterprise risk management and everything included in this broad subject: establishing risk policies and procedures, implementing firm-wide controls, risk aggregation, revamping risk organization, etc. As I said before, we still anticipate receiving analytical papers on operational risk measurement, but these will now have a focus on stress testing and actually managing these risks.

These are certainly exciting times! The Journal of Operational Risk, as the leading publication in this area, aims to be at the forefront of these discussions, and we welcome any paper that will shed light on them.

PAPERS

In this issue, we have four technical papers. It is interesting to note that these papers are already the result of research focused on operational risk beyond the AMA.


RESEARCH PAPERS

In our first paper, “Operational risk measurement beyond the loss distribution approach: an exposure-based methodology”, Michael Einemann, Joerg Fritscher and Michael Kalkbrener present an alternative operational risk quantification technique called the exposure-based operational risk (EBOR) model. EBOR aims to replace historic severity curves by measuring current exposures, as well as using event frequencies based on actual exposures instead of historic loss counts. The authors introduce a general mathematical framework for exposure-based modeling that is applicable to a large number of operational risk types. As a numerical example, an EBOR model for litigation risk is presented. In addition, the authors discuss the integration of EBOR and loss distribution approach models into hybrid frameworks facilitating the migration of operational risk subtypes from a classical to an exposure-based treatment. The implementation of EBOR models is a challenging task, since new types of data and a higher degree of expert involvement are required. In return, EBOR models provide a transparent quantitative framework for combining forward-looking expert assessments, point-in-time data (eg, current portfolios) and historical loss experience. Individual loss events can be modeled in a granular way, which facilitates the reflection of loss-generating mechanisms and provides more reliable signals for risk management.

“Distortion risk measures for nonnegative multivariate risks” is this issue’s second paper. In it, Montserrat Guillen, José María Sarabia, Jaume Belles-Sampera and Faustino Prieto apply distortion functions to bivariate survival functions for nonnegative random variables. This leads to a natural extension of univariate distortion risk measures to the multivariate setting. Certain families of multivariate distributions lead to a straightforward risk measure. The authors show that an exact analytical expression can be obtained in some cases, which makes the life of the operational risk analyst much easier. As an illustration, the authors provide a case study that considers two distributions: the bivariate Pareto distribution and the bivariate exponential distribution. In this case study, the authors consider two loss events with a single risk value and monitor the two events over four different periods. They conclude that the dual power transform gives more weight to the observations of extreme losses, but that the distortion parameter can modulate this influence in all cases.

In “An operational risk capital model based on the loss distribution approach”, our third paper, Ruben D. Cohen constructs an economic capital model for operational risk based on the observation that operational losses can, under a certain dimensional transformation, converge into a single, universal distribution. Derivation of the model is then accomplished by directly applying the loss distribution approach to the transformed data, yielding an expression for risk capital that can be calibrated. The expression, however, is applicable only to nonconduct losses, because it incorporates empirical behaviors that are specific to them. For loss data that falls under the conduct category, this approach may not be applicable, so one may have to resort to a different type of modeling technique.

Henryk Gzyl presents a simple probabilistic model for aggregating very large losses to a loss database in our fourth paper, “Modeling very large losses”. Most operational risk databases are composed of small to moderate losses that, in aggregate, amount to high values. However, as very large losses occur very rarely, they may sometimes be discarded due to a lack of fit with the overall distribution composed of these smaller loss events. Despite this, large losses cannot be discarded for regulatory (and economic) reasons. In this paper, the author develops a simple modeling procedure that allows us to include very large losses in a loss distribution fitted with smaller-sized losses, with little impact on the overall results.


Journal of Operational Risk 13(2), 1–33
DOI: 10.21314/JOP.2018.208

Research Paper

Operational risk measurement beyond the loss distribution approach: an exposure-based methodology

Michael Einemann, Joerg Fritscher and Michael Kalkbrener

Deutsche Bank AG, Taunusanlage 12, 60325 Frankfurt, Germany; emails: [email protected], [email protected], [email protected]

(Received November 19, 2017; accepted January 23, 2018)

ABSTRACT

The loss distribution approach (LDA) has evolved as the industry standard for operational risk models despite a number of known weaknesses. In particular, the LDA’s traditional focus on historical loss data often neglects expert knowledge that is available for operational risk types of a more predictable nature. In this paper, we present an alternative quantification technique, so-called exposure-based operational risk (EBOR) models, which aim to replace historical severity curves by measures of current exposures and use event frequencies based on actual exposures instead of historical loss counts. We introduce a general mathematical framework for exposure-based modeling that is applicable to a large number of operational risk types. As an example, an EBOR model for litigation risk is presented. Further, we discuss the integration of EBOR and LDA models into hybrid frameworks facilitating the migration of operational risk subtypes from a classical to an exposure-based treatment. The implementation of EBOR models is a challenging task, since new types of data and a higher degree of expert involvement are required. In return, EBOR models provide a transparent quantitative framework for combining forward-looking expert assessments, point-in-time data (eg, current portfolios) and historical loss experience. Individual loss events can be modeled in a granular way, which facilitates the reflection of loss-generating mechanisms and provides more reliable signals to risk management.

Corresponding author: M. Einemann
Print ISSN 1744-6740 | Online ISSN 1755-2710
© 2018 Infopro Digital Risk (IP) Limited

Keywords: operational risk; loss distribution approach (LDA); exposure; factor model; litigation risk.

1 INTRODUCTION

The Basel Committee on Banking Supervision defines operational risk (OR) as “the risk of loss resulting from inadequate or failed internal processes, people and systems or from external events” (Basel Committee on Banking Supervision 2004). This definition “includes legal risk, but excludes strategic and reputational risk”.

By its very nature, the operational risk of a typical bank is dominated by low-frequency, high-severity events, ie, single extreme losses (McNeil et al 2005). This poses a key challenge for OR modeling, since it requires the accurate reflection of heavy tails of loss distributions. Two main approaches have emerged in the industry to tackle this problem. The scenario-based approach mainly relies on the knowledge of experts in the subject matter, who specify loss scenarios for the relevant unit of measure, eg, a cell within a business-line/event-type matrix. Due to the inherent subjectivity of human judgment, this approach leads to high model uncertainty, particularly if scenarios are constructed for extreme and unlikely loss events. In many financial institutions, historical loss data has therefore been considered the most reliable source for modeling OR tail distributions. This methodology, called the loss distribution approach (LDA), has evolved as an industry standard for OR models satisfying the requirements of the advanced measurement approach (AMA) under Basel II regulations (Basel Committee on Banking Supervision 2009).

The fundamental premise behind the LDA is that each firm’s operational losses are a reflection of its underlying operational risk exposure. In fact, for those operational risks for which historical loss distributions are assumed to be the best predictor of future losses, LDA models perform reasonably well. However, strong reliance on the historical loss experience has a number of serious disadvantages. Models based on historical loss data have an inherently backward-looking character and do not fully reflect the loss-generating process and control environment. In the wider AMA framework, business environment and internal control factors are required by Basel II regulations. However, this element is usually still insufficient to fully capture exposures or forward-looking aspects. As a consequence, capital estimates are too slow in adapting to changes in the risk profile, eg, due to the introduction of new products or changes in the business mix of the bank (eg, divestments), and do not provide sufficient incentives for OR management to mitigate risk.


The industry has also observed LDA model instability, and thus capital volatility, due to nonrobust model parameters with respect to the inclusion of data or methodology changes necessitated by the data. No standards for consistent usage of input data or LDA model specification have emerged, which makes it difficult to compare LDA-based capital estimates calculated at different financial institutions.

The high uncertainty in OR capital estimates has triggered a regulatory debate about replacing the AMA with a rather simplistic standardized measurement approach (SMA) in a future regulatory regime (Basel Committee on Banking Supervision 2016). Not surprisingly, recent studies have shown that the SMA suffers from a number of deficiencies, which make it unsuitable for a realistic quantification of operational risks and limit its applicability in OR management (Chapelle et al 2016; Cope et al 2016).

Given the weaknesses of the current OR model landscape, it is a main priority for OR management to investigate alternative modeling techniques. The objective is not only to overcome some of the deficiencies for regulatory capital calculation listed above but also to develop models that satisfy emerging requirements of risk measurement such as the annual Comprehensive Capital Analysis and Review (CCAR) performed by the US Federal Reserve System (Board of Governors of the Federal Reserve System 2012) and the EU-wide stress tests (European Banking Authority 2011).

A promising approach in this respect is the so-called exposure-based operational risk (EBOR) methodology, which aims to replace unbounded historical severity curves by measures of current exposures (maximum possible loss), as well as using event frequencies based on actual exposures (maximum number of events) instead of historical loss counts. The objective of this paper is to introduce a general mathematical framework for exposure-based modeling that is applicable to a large number of operational risk types. As our prime example, we use an EBOR model to quantify operational risk for a portfolio of pending litigations. In many ways, this risk is particularly well suited for an exposure-based treatment within the OR framework. First, the pending litigations clearly specify the potential loss events that have to be captured in the model. Second, for each case there exists an estimate for the exposure, and subject matter experts (SMEs) in the bank typically have an educated view on the probability and range of outflow, which can then be translated into the model parameters “event probability” and the stochastic “loss given event” (LGE) ratio. Techniques from credit portfolio modeling are used to specify the loss distribution of the portfolio of pending litigations, taking dependencies between individual litigations into account.

The development, calibration and validation of EBOR models is typically a challenging task, since new types of data and a higher degree of expert involvement across the institution are required. In return, EBOR models provide a transparent quantitative framework for combining forward-looking assessments of the SMEs, point-in-time data (eg, current portfolios) and historical loss experience. Individual events can be modeled in a more granular and comprehensive way than in LDA models, which facilitates a better reflection of loss-generating mechanisms as well as risk mitigants. The increased model granularity combined with forward-looking expert assessment leads to a more realistic dynamic of capital estimates, providing more reliable signals to risk management and fostering dialogue between quants, risk management and business experts.

OR quantification techniques using exposure- or factor-based concepts, eg, modeling for rogue trading, sales practices and other risks, are investigated in the finance industry. Capital demand, stress testing (including CCAR) and risk appetite are highlighted as (potential) applications (Baruh 2016). We also refer the reader to Yan and Wood (2017), on a structural model associated with the mis-selling of retail banking products.

This paper starts with a short review of LDA models in Section 2. We then introduce the general concept behind EBOR models and provide arguments for exposure-based models being a promising approach to remediate some of the LDA shortcomings. The different components of an EBOR model are presented in more detail in Section 3. Section 4 deals with the application of EBOR techniques to a portfolio of pending litigations. The integration of EBOR models into an LDA framework is analyzed in Section 5. Section 6 concludes.

2 AN EXPOSURE-BASED APPROACH TO OPERATIONAL RISK MODELING

2.1 Shortcomings of the LDA

The starting point of our EBOR development is a short review of LDA models, which is motivated by the fact that the LDA, as the industry standard, is a natural benchmark for new models. A complete replacement of LDA models by exposure-based techniques is not realistic in the short term and may not be feasible at all for some risks. It is therefore important to develop techniques for a meaningful integration of both concepts (see Section 5).

The LDA is based on actuarial techniques; see McNeil et al (2005) for a comparison of different modeling approaches. The basic idea is to partition OR loss data into sufficiently homogeneous sets, typically corresponding to $n$ combinations of OR event types (ETs) and business lines (BLs), and to calibrate a frequency and a severity distribution for each BL/ET combination. These distributions specify the loss variable $X_j$ for the $j$th BL/ET combination through a compound sum, ie,

$$X_j = \sum_{k=1}^{N_j} S_{jk}, \qquad X = \sum_{j=1}^{n} X_j, \tag{2.1}$$


where the frequency variable $N_j$ and the severity variables $S_{j1}, S_{j2}, \dots$ follow the respective frequency and severity distributions. The loss variable $X$ at bank level is then obtained by aggregating the $X_j$ based on the dependence structure of the underlying variables (see Aue and Kalkbrener (2006) for a comprehensive presentation of an LDA model implemented in a bank).
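A minimal Monte Carlo sketch of the compound sum (2.1), added here purely for illustration (it is not the authors' implementation): it assumes Poisson frequencies and lognormal severities with hypothetical parameters, and treats the BL/ET cells as independent, whereas the paper aggregates the $X_j$ via an explicit dependence structure.

```python
import numpy as np

rng = np.random.default_rng(seed=1)

def simulate_lda(cells, n_sims=20_000):
    """Simulate the bank-level annual loss X = sum_j X_j of equation (2.1).

    Each cell j is a (lam, mu, sigma) triple: the Poisson frequency
    intensity and the lognormal severity parameters of one BL/ET
    combination. Cells are treated as independent in this sketch.
    """
    total = np.zeros(n_sims)
    for lam, mu, sigma in cells:
        n_events = rng.poisson(lam, size=n_sims)            # frequency N_j
        # Compound sum: add N_j lognormal severities S_jk in each scenario.
        total += np.array([rng.lognormal(mu, sigma, size=n).sum()
                           for n in n_events])
    return total

# Two illustrative BL/ET cells with hypothetical parameters.
X = simulate_lda([(5.0, 10.0, 2.0), (0.5, 12.0, 2.5)])
print("99.9% quantile (capital proxy):", np.quantile(X, 0.999))
```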

As stated in the industry position paper by AMA Group (2013), the LDA provides a rigorous approach for modeling past loss distributions. It has become the standard practice for modeling those operational risks for which historical loss distributions are assumed to be the best predictor of future losses. For other risk types that have more predictable characteristics, at least for the next twelve months, it is problematic to rely almost exclusively on the historical loss experience. AMA Group provides a spot-on analysis of the fundamental concerns related to LDA models if applied to “predictable” risk types such as litigations.

Certain operational risk types have emerged as quite material over recent years and have proven to be problematical for LDA from a representation-of-risk standpoint.

(1) Litigation events linked to credit or market risk losses emerged during the recent crisis as material sources of operational risk. Many of these events are related to representations and warranties on sold mortgages that defaulted during the crisis. The risk exposure for these events is defined by the credit or market risk exposure, in contrast to standard operational risk types with an undefined exposure, and although operational in nature the losses can be driven by credit events such as defaults. In addition, predictive factors for the operational risks are not captured in LDA, but could be assigned using a combination of statistical modeling and expert judgment, allowing for factor-based quantification of capital requirements.…

(3) The use of LDA for these “predictable” risk types has been observed to undercapitalize known risks before they occur, and overcapitalize for risk after the losses materialize, creating inappropriate capital estimates, including:

(a) Underestimation at the time of manifestation of loss due to lag in “realizing” losses for events that are already known to have been triggered. This is particularly relevant for large litigation losses for which there is a significant lag between the event trigger and the initial reserve....

(b) Overestimation of capital estimates in a time lag after the manifestation of loss due to extrapolation to the 99.9th percentile and an overstretched distribution; and

(c) The large data gap is a modeling issue/challenge, and capital results are questionable.

(4) An unintended consequence of this timing paradox is that it results in disincentives to taking strong risk management steps to mitigate risk. Where LDA models drive capital and risk management, capital will at times increase in tandem with risk mitigation steps, a counterintuitive phenomenon, thus being at odds with strong risk management....


The industry has observed LDA model instability, and thus capital volatility, due to the nonrobust model parameters relative to inclusion of the data or in combination with a methodology change of some type necessitated by the data (eg, distribution change, fitting routine change). The implications can be:

(1) A disconnect with Market and Credit Risk practices, due to not using generally accepted risk measurement techniques.

(2) Loss of credibility with Senior Management due to the lack of transparency and inability to apply their intuition to understand the model results.

(3) Reaction to public disclosure (Pillar 3) – industry and regulators alike must be sensitive to analyst review and reporting. Important for industry, of course, to deal with undesired market effects on stakeholders, and also important for regulators to deal with undesired consequences....

In addition to the shortcomings listed above, LDA models are typically not well suited for stress testing or loss projection under macroeconomic or idiosyncratic scenarios as required by European Banking Authority stress tests or the Federal Reserve System’s CCAR. The main obstacle for implementing stress scenarios is the missing (direct) link between macro or market drivers and the components of the LDA model, eg, frequency and severity distributions. The application of LDA models in legal entity risk management is an even more challenging task due to increasing data scarcity issues.

2.2 Exposure-based OR modeling

The concerns raised in the previous section motivate the development of exposure-based modeling techniques to complement and (where possible) substitute existing LDA models.

In the following, a formal presentation of the basic structure of those EBOR models will be provided. We consider $n$ potential loss events, where $n$ can be considered as frequency exposure since it specifies the maximum number of loss events. Their interpretation depends on the scope of the specific EBOR model, ie, on the OR subtype to be covered. For each of these events there exists a Bernoulli variable $I_j$, $j = 1, \dots, n$, such that $\{I_j = 1\}$ represents the occurrence of the $j$th event in the period $[0, T]$, where $T$ is typically set to one year. The probability $p_j := P(I_j = 1)$ is the corresponding event (or loss) probability. We assume that for each loss event a maximum positive loss amount $E_j$ can be specified, which is called the severity exposure of the $j$th event. Since only a fraction of the exposure is typically lost, a random variable $L_j$ is specified, which describes the loss ratio or LGE as a percentage of exposure.


The aggregate event loss variable of the exposure-based operational risk model is now given by

$$Y := \sum_{j=1}^{n} Y_j \tag{2.2}$$

with individual losses

$$Y_j := I_j L_j E_j, \qquad j \in \{1, \dots, n\}. \tag{2.3}$$

In order to reflect dependencies between the occurrence of different loss events, the event indicators $I_j$ are typically modeled as dependent variables. A frequently used approach is the specification of dependencies between events through the introduction of risk factors. This concept is formalized in the definition of a Bernoulli mixture model (see McNeil et al 2005).

Given a vector of real-valued random variables (the risk factors) $\Psi = (\Psi_1, \dots, \Psi_m)$, the random vector $(I_1, \dots, I_n)$ follows a Bernoulli mixture model with factor vector $\Psi$ if $m < n$ and there exist conditional probability functions $\mathrm{cp}_j : \mathbb{R}^m \to [0, 1]$, $1 \leq j \leq n$, such that, conditional on $\Psi$, the random vector $(I_1, \dots, I_n)$ is a vector of independent Bernoulli random variables satisfying $P(I_j = 1 \mid \Psi = \psi) = \mathrm{cp}_j(\psi)$ for $\psi \in \mathbb{R}^m$. Hence, for any $y = (y_1, \dots, y_n) \in \{0, 1\}^n$ and $\psi \in \mathbb{R}^m$, the conditional event probabilities are given by

$$P(I_1 = y_1, \dots, I_n = y_n \mid \Psi = \psi) = \prod_{j=1}^{n} \mathrm{cp}_j(\psi)^{y_j} (1 - \mathrm{cp}_j(\psi))^{1 - y_j}. \tag{2.4}$$

Further risk factors $\Psi_{m+1}, \dots, \Psi_l$ might be required to specify dependencies between the loss ratios $L_1, \dots, L_n$. The precise functional form between the $\Psi_i$ and the $L_j$ depends on the specific application of the EBOR model.

Risk factors not only introduce dependencies between the basic variables $I_j$ and $L_j$ of an EBOR model but, as long as they have an (economic) interpretation, can also be used to implement stress scenarios in EBOR models. More precisely, if a stress scenario is specified for some of the factors $\Psi_1, \dots, \Psi_l$, then its impact can be quantified by performing an EBOR calculation conditional on the stressed values or stressed distributions of $\Psi$.

Note that the formal EBOR definition is similar to credit portfolio models based on default thresholds. In these models, eg, CreditMetrics (Bhatia et al 1997) and Moody’s KMV Portfolio Manager (Bohn and Crosbie 2002), the portfolio loss of a credit portfolio due to defaults is also specified by the loss variables (2.2) and (2.3). Only the economic interpretation of the variables is adjusted to a credit risk setting: $I_j$ now denotes the default indicator of the $j$th counterparty, $E_j$ is the (deterministic) credit exposure and $L_j$ is the (stochastic) loss given default. Analogously to (2.4), joint default probabilities are specified through a Bernoulli mixture model. The systematic factors $\Psi_1, \dots, \Psi_m$ frequently represent countries (or geographical regions) and industries, and are chosen to reflect credit concentrations in the underlying portfolio.

2.3 EBOR application: scope and examples

The general EBOR concept and its relation to the frequencies $I_j$ and severity ratios $L_j$, as well as the risk factors $\Psi$, the frequency exposure $n$ and the severity exposures $E_j$, is depicted in Figure 1.

The model framework can be applied to rather different OR subrisks: from very inhomogeneous portfolios of infrastructure or litigation risks with specific characteristics of individual events, to portfolios of rogue trading risks with common characteristics for the events to be simulated. An EBOR model can be used even for situations with potentially very large – but not infinite – exposures. A few examples are listed below.

Natural disaster: EBOR models for natural disaster impact on buildings could be based on the number of buildings as frequency exposure and the value of the buildings as severity exposure, using building-specific characteristics such as earthquake protection, region-specific earthquake indicators and typical earthquake severities as risk factors.

Rogue trading: a rogue trading EBOR model could take into account a specific group of traders, with a homogeneous probability of going rogue, as frequency exposure, and then model, for each rogue trading event, the severity based on the size of a hidden position as severity exposure, with time to detection or market movement as severity risk factors.

Litigation: in Section 4, we employ the general EBOR framework to develop a model for pending litigations, ie, the event triggering the filing of the litigation has already happened and only the final outcome of the court case has to be modeled. Conceptually, this model can be extended to include potential future litigations, eg, based on credit properties of an underlying issuance portfolio (see Rosa 2012).

2.4 EBOR modeling: challenges and rewards

From a formal point of view, the basic EBOR model introduced in the previous subsection can be considered as a special case of the LDA model specified in (2.1): the frequency variables $N_j$ have to attain values in $\{0, 1\}$ and the severity variables have to be bounded. In this special case, the $n$ different components of the LDA model no longer correspond to BL/ET combinations but rather to individual loss events. Their event probabilities are defined by $P(N_j = 1)$.


FIGURE 1 General framework for EBOR models highlighting the frequency, severity and risk factor components.

[Figure: the frequency component comprises the loss event indicators $I_j$ with $P(I_j = 1 \mid \Psi = \psi)$, $j = 1, \dots, n$, and the frequency exposure $n$ (maximum number of events, eg, number of clients, traders or issuances). The severity component comprises the loss ratio random variables $L_j$ with $P(L_j = x \mid \Psi = \psi)$, $x \in [0, 1]$, and the severity exposures $E_j$ (maximum possible loss for event $j$, eg, amount claimed, value of a building, size of a hidden position). Risk factors $\Psi = (\psi_1, \dots, \psi_l)$ (eg, macro/market conditions, controls, time to patch systems, time to detect hidden positions, compliance issues, credit cycle, earthquake indicators) drive both components. An OR loss event occurs according to $I_j$ with size $L_j \times E_j$; the resulting aggregate loss distribution yields expected loss projections, capital and any other metric derived from a loss distribution. Note: Monte Carlo simulation can be applied to generate the loss distribution if drivers (factors and probabilities) are stochastic, ie, specified by a distribution.]


Despite these formal similarities, the calibration and application of LDA and EBOR models are different in many respects. In general, the development, calibration and validation of EBOR models is a challenging task, since new types of data and a higher degree of expert involvement across the institution are required. In return, EBOR models provide a transparent quantitative framework for combining forward-looking assessments of subject matter experts, historical loss experience and point-in-time data (eg, current portfolios) instead of relying mainly on historical loss data. Consequently, EBOR models have a number of advantages that resolve many of the issues listed in Section 2.1. Individual events can be modeled in a more granular and comprehensive way than in LDA models, which facilitates a better reflection of loss-generating mechanisms as well as risk mitigants. The increased model granularity combined with forward-looking expert assessment leads to a more realistic dynamic of capital estimates, ie, EBOR modeling typically reduces the problem of undercapitalization of known risks at an early stage and the subsequent overcapitalization after the losses materialize (see Section 4.4 for a specific example). This feature incentivizes risk management to take appropriate risk-mitigating actions.

The transparency of EBOR models extends their scope beyond the quantification of risk capital, eg, supporting risk-based decision processes by the capability to price individual transactions or to expand into new market segments. Further, EBOR models facilitate communication between quants, risk managers and business experts (including, for example, legal and compliance departments) and ensure that discussions aim to identify the underlying risk drivers (and their potential management) instead of solely debating historical losses. In our experience, nonquant experts are more willing to share expertise and data from their day-to-day business as input into EBOR models than to accept statistical relationships under the LDA if they have difficulties in understanding the link to the actual (perceived) risk exposure (implausible model sensitivities).

The development of EBOR models requires specific knowledge about potential loss events and the underlying loss-generating mechanisms. Expert knowledge plays a central role in the parametrization of the model, because granular information, in particular, is required for modeling individual events, eg, event probabilities and exposures. Historical data is mainly used to supplement and validate expert assessment.

A sensible route for a (partial) transition from LDA to EBOR could be based on the following steps.

(1) Identification and classification: identify risk types (OR taxonomy) and order by materiality.

(2) Model development: build EBOR models step-by-step for those risks where sufficient information is available (starting with material risks).


(3) Completeness check: use LDA for the remaining risks.

(4) Aggregation: perform a meaningful aggregation of risks across EBOR and LDA models.

Even if carefully designed EBOR models are applied, capital numbers at extreme quantiles still suffer from a high level of uncertainty. The models themselves, however, are more transparent and more closely related to loss-generating mechanisms. They also show plausible sensitivities. As a consequence, EBOR models facilitate a better alignment to risk management and extend the scope of applicability beyond the quantification of risk capital.

3 EXPOSURE-BASED OPERATIONAL RISK MODEL: DETAILS

The basic concept of an EBOR model was introduced in the previous section (see (2.2) and (2.3)). The main parameters of the model are the exposure $E_j$, the loss ratio $L_j$ and the event indicator $I_j$, whose product equals the loss variable $Y_j$. In this section, we discuss in more detail how to model these parameters. In particular, we propose techniques to reflect the dependencies between event indicators as well as loss ratios (see Sections 3.1.1 and 3.1.2).

Not all losses in an EBOR model are necessarily triggered by event indicators; eg, legal fees might be independent of the outcome of a litigation. In order to capture this aspect, in Section 3.2 we present a model extension that specifies “nonevent” losses.

Since techniques for risk mitigation such as third-party insurance or indemnification play a major role in operational risk management, in Section 3.3 we investigate the implementation of risk mitigants in EBOR models. The deduction of existing provisions to reduce the risk of future payments is also possible under EBOR and is described in Section 3.4. Further, we provide information on the calculation of risk measures and capital allocation in EBOR models. Section 3.5 contains a related detailed step-by-step procedure for the simulation of the aggregate loss distribution under an EBOR model.

3.1 EBOR event losses

3.1.1 Modeling correlated EBOR events

In order to reflect dependencies between the occurrences of different loss events, the event indicators $I_j$ are typically modeled as dependent variables. A powerful dependence concept is that of Bernoulli mixture models (see Section 2.2). For model specification, risk factors $\Psi_1, \dots, \Psi_m$ must be identified that introduce dependencies between the event indicators $I_1, \dots, I_n$ with individual event probabilities $p_1, \dots, p_n$.


Subject matter experts typically play an important role in specifying dependencies, particularly since historical data is often too scarce to allow a purely statistical approach. In order to facilitate the use of expert knowledge, we apply a degenerate version of a Bernoulli mixture model such that, conditionally on $\Psi = \psi$, the event indicators $I_1, \dots, I_n$ become deterministic. The definition of the risk factors is based on partitioning events into independent clusters, ie, the factor $\Psi_j$ represents the state of the $j$th cluster and the factors $\Psi_1, \dots, \Psi_m$ are independent. Typically, this kind of information can be more easily provided by SMEs. The clustering approach leads to a particularly simple distribution function for the event frequency variable $\sum_{j=1}^{n} I_j$, which also facilitates the integration of an EBOR into an LDA model (see Section 5).

More precisely, we assume that the $n$ loss events are split into clusters $C_1, \dots, C_m$. Formally, $C_1, \dots, C_m$ is a partition of $\{1, \dots, n\}$, ie,

$$\bigcup_{i=1,\dots,m} C_i = \{1, \dots, n\}, \qquad C_i \cap C_j = \emptyset \text{ if } i \neq j.$$

We make the following two assumptions.

Maximum dependence within a cluster: for each cluster $C_i$, $1 \leq i \leq m$, a continuous random variable $\Psi_i$ taking values in $[0, 1]$ with distribution function $F_{\Psi_i}$ is specified. We require that for each $j \in C_i$ the conditional probabilities $\mathrm{cp}_j$ can be written as

$$\mathrm{cp}_j(\psi_1, \dots, \psi_m) = \begin{cases} 0, & \psi_i > F_{\Psi_i}^{-1}(p_j), \\ 1, & \psi_i \leq F_{\Psi_i}^{-1}(p_j). \end{cases}$$

Independence of clusters: we assume that $\Psi_1, \dots, \Psi_m$ are independent of each other. This assumption can be easily relaxed if required. It mainly reflects the fact that clusters of events, eg, litigations on unrelated issues or natural disasters in different regions, are typically uncorrelated. The assumption also leads to a simple closed form of the total event distribution function (see (3.4b)), and thus to the smooth integration of EBOR modeling into an LDA model.

Under the first assumption, the occurrence of the j th loss event depends only onthe random variable �i of the embedding cluster Ci , j 2 Ci , and the event-specificthreshold F �1

�i.pj /: Ij D 1 if and only if �i 6 F �1

�i.pj / is satisfied. Hence, the

randomness of events is fully captured by the random variables �1; : : : ; �m, and theconditional event probabilities in (2.4) are either 0 or 1.
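To make the mechanics concrete, the following minimal sketch draws one joint realization of the event indicators under the clustering assumptions above. It assumes, for simplicity, that each cluster factor $\Psi_i$ is uniform on $[0,1]$ (so that $F_{\Psi_i}^{-1}(p_j) = p_j$); the function and variable names are illustrative, not part of the model specification.

```python
import numpy as np

def simulate_event_indicators(clusters, p, rng):
    """One Monte Carlo draw of the event indicators I_1,...,I_n.

    clusters: list of index arrays C_1,...,C_m partitioning {0,...,n-1}
    p:        array of event probabilities p_1,...,p_n
    Assumes each cluster factor Psi_i is uniform on [0,1], so that
    F_{Psi_i}^{-1}(p_j) = p_j and I_j = 1 iff Psi_i <= p_j.
    """
    indicators = np.zeros(len(p), dtype=int)
    for cluster in clusters:
        psi = rng.uniform()          # one shared factor per cluster
        indicators[cluster] = (psi <= p[cluster]).astype(int)
    return indicators

rng = np.random.default_rng(1)
clusters = [np.array([0, 1]), np.array([2])]   # two clusters, three events
p = np.array([0.30, 0.10, 0.05])
print(simulate_event_indicators(clusters, p, rng))
```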

In the following, we will formalize the consequences of our model assumptions for the likelihood of joint events. For any pair $j_1 \in C_{i_1}$ and $j_2 \in C_{i_2}$, the joint event probability is
$$P(I_{j_1} = 1, I_{j_2} = 1) = \begin{cases} \min\{p_{j_1}, p_{j_2}\}, & i_1 = i_2, \\ p_{j_1} p_{j_2}, & i_1 \neq i_2, \end{cases} \qquad (3.1)$$


and hence the implicit (linear) event correlation is equal to
$$\mathrm{Corr}(I_{j_1}, I_{j_2}) = \begin{cases} \sqrt{\dfrac{p_{j_1}(1 - p_{j_2})}{p_{j_2}(1 - p_{j_1})}}, & i_1 = i_2, \\[2mm] 0, & i_1 \neq i_2, \end{cases} \qquad (3.2)$$
where we have assumed, without loss of generality, that $p_{j_1} \le p_{j_2}$. This implies that loss events in the same cluster that have identical event probabilities are perfectly correlated. If their event probabilities are different, the occurrence of these events still has the maximum degree of correlation that can be achieved. Events in different clusters are uncorrelated.

Next we derive the distribution function for the event frequency variable $I_P := \sum_{j=1}^{n} I_j$. For each cluster $i \in \{1,\dots,m\}$, we define the corresponding event frequency variable and its (discrete) density function:
$$I_{C_i} := \sum_{j \in C_i} I_j, \qquad f_i(k) := P(I_{C_i} = k), \quad k = 0,\dots,n.$$
For each $i \in \{1,\dots,m\}$, we order the elements $j_1,\dots,j_{|C_i|}$ of cluster $C_i$ with decreasing event probabilities
$$p_{j_1} \ge p_{j_2} \ge \dots \ge p_{j_{|C_i|}},$$
and obtain, from the assumption on maximal dependence within a cluster,
$$f_i(k) = \begin{cases} 1 - p_{j_1}, & k = 0, \\ p_{j_k} - p_{j_{k+1}}, & k = 1,\dots,|C_i| - 1, \\ p_{j_k}, & k = |C_i|. \end{cases} \qquad (3.3)$$

The cumulative density function $g_i$ covering the first $i$ clusters is defined by
$$g_i(k) := P\left( \sum_{j=1}^{i} I_{C_j} = k \right), \quad k = 0,\dots,n.$$
Note that $g_m$ is the density of $I_P$, ie, $g_m(k) = P(I_P = k)$. There exists a simple recursion formula for the cumulative density $g_i$. By definition, $g_1 = f_1$. Since clusters are supposed to be independent, we have
$$g_i(k) = \sum_{j=0}^{k} g_{i-1}(j) f_i(k - j) \qquad (3.4a)$$
$$\phantom{g_i(k)} = \sum_{(k_1,\dots,k_i) \in K(i,k)} \; \prod_{j=1}^{i} f_j(k_j), \qquad (3.4b)$$


where
$$K(i,k) := \left\{ (k_1,\dots,k_i) \in \{0,1,\dots,n\}^i \;\middle|\; \sum_{j=1}^{i} k_j = k \right\}.$$
Recursion (3.4a) will be used for a joint LDA and EBOR simulation (see Section 5.2).
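The densities in (3.3) and the recursion (3.4a) are straightforward to implement. The sketch below (with illustrative names) builds each cluster density and convolves the densities to obtain $g_m$, the distribution of the total event count $I_P$; the recursion (3.4a) is exactly a discrete convolution, which is why np.convolve applies.

```python
import numpy as np

def cluster_density(p_cluster):
    """Discrete density f_i of one cluster's event count, formula (3.3).

    p_cluster: event probabilities of one cluster, in any order.
    Returns an array f with f[k] = P(I_{C_i} = k).
    """
    p = np.sort(np.asarray(p_cluster))[::-1]   # decreasing probabilities
    f = np.empty(len(p) + 1)
    f[0] = 1.0 - p[0]
    f[1:-1] = p[:-1] - p[1:]                   # p_{j_k} - p_{j_{k+1}}
    f[-1] = p[-1]
    return f

def total_frequency_density(clusters_p):
    """Density g_m of I_P via the convolution recursion (3.4a)."""
    g = np.array([1.0])                        # empty portfolio: 0 events
    for p_cluster in clusters_p:
        g = np.convolve(g, cluster_density(p_cluster))
    return g

# Example: the two clusters used in the previous sketch
g = total_frequency_density([[0.30, 0.10], [0.05]])
print(g, g.sum())                              # the density sums to 1
```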

3.1.2 Modeling correlated EBOR losses

Conditional on the occurrence of an event, ie, $I_j = 1$, we determine the amount of outflow as the product of the exposure $E_j$ and the LGE ratio $L_j$. While the exposure is deterministic and represents the maximal loss, the variables $(L_1,\dots,L_n)$ follow a multivariate distribution with individual marginal distributions, eg, specified by expected value and volatility, and an overall dependence structure.

EBOR models are rather flexible in terms of specifying the dependence structure of the LGE ratios $L_1,\dots,L_n$. Analogously to dependent events, risk factors as well as copula functions can be used for this task (see McNeil et al 2005). If dependencies are specified through risk factors with an economic interpretation, the behavior of LGE ratios can potentially be linked to event indicators. Whether or not the identification of adequate risk factors is feasible depends on the specific application of the EBOR model.

We have proposed a simple clustering approach for modeling dependent events (see Section 3.1.1). Assuming that LGE ratios within a cluster show rather homogeneous behavior, these clusters could also be utilized to specify a dependence structure for LGE ratios. One option is to estimate intra- and intercluster correlations and to define either risk factors or copulas to implement the correlation structure in the model.

As for their dependence structure, EBOR models do not restrict the choice of (marginal) distribution functions for LGE ratios. In the literature, (truncated) beta or lognormal distributions are frequently used (see Memmel et al (2012) and Kaposty et al (2017) for stochastic loss given default models in the context of credit risk). Examples in the context of EBOR are provided in Section 4.3.
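As one possible concrete implementation of the copula option mentioned above, the following sketch draws dependent LGE ratios from arbitrary marginals through a Gaussian copula. The block correlation matrix and the lognormal marginals are purely illustrative assumptions, not calibrated values.

```python
import numpy as np
from scipy.stats import norm, lognorm

def sample_lge_ratios(sigma_corr, marginals, n_sims, rng):
    """Draw dependent LGE ratios via a Gaussian copula (a sketch).

    sigma_corr: n x n correlation matrix (eg, block structure by cluster)
    marginals:  list of frozen scipy distributions, one per event
    """
    chol = np.linalg.cholesky(sigma_corr)
    z = rng.standard_normal((n_sims, len(marginals))) @ chol.T
    u = norm.cdf(z)                              # map to copula scale
    return np.column_stack([m.ppf(u[:, j]) for j, m in enumerate(marginals)])

rng = np.random.default_rng(2)
corr = np.array([[1.0, 0.5, 0.0],
                 [0.5, 1.0, 0.0],
                 [0.0, 0.0, 1.0]])               # events 1 and 2 share a cluster
marginals = [lognorm(s=0.3, scale=0.2)] * 3     # illustrative lognormal LGEs
L = sample_lge_ratios(corr, marginals, 10000, rng)
```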

3.2 EBOR nonevent losses

For some risk types there are loss components that, although they are not conditional on the occurrence of an event, are indirectly linked to it. Legal fees, which are an essential part of litigation risk, are an important example. These fees are assumed to be paid on a regular basis regardless of whether or not a litigation has been settled. To capture losses of this kind, we extend the loss definitions of the EBOR model as follows: for each portfolio constituent $j \in \{1,\dots,n\}$ we extend the loss variable defined in (2.3) by adding a second component that does not depend on the event indicator, ie, we define the extended loss variable
$$Z_j^{(0)} = Y_j + Y_j^{(\mathrm{nE})}, \qquad (3.5)$$
where $Y_j^{(\mathrm{nE})}$ specifies a nonevent loss. More precisely, $Y_j^{(\mathrm{nE})}$ covers the aggregated amount of losses that are expected to occur within the relevant time horizon and can be considered to be closely related to (although not triggered by) the $j$th loss event. For instance, $Y_j$ and $Y_j^{(\mathrm{nE})}$ might have the same economic root cause or legal case (see Section 4.3.2 for details).

The aggregate event loss variable $Y$ of the exposure-based operational risk model is extended to give
$$Z^{(0)} := \sum_{j=1}^{n} Z_j^{(0)} = \sum_{j=1}^{n} \bigl( Y_j + Y_j^{(\mathrm{nE})} \bigr).$$

3.3 Risk mitigation

The standard risk-mitigating tools for OR are designed to transfer the risk – or parts of it – to a third party. This could, for instance, be achieved through insuring certain OR losses or by indemnification of losses by liable third parties. Any such activity clearly has to account for the uncertainty of the third party making its own payment.

Under the LDA the set of options is, in practice, limited to insurance contracts. The key constraint is that the model simulates the total number of events but does not provide specific information on their nature. Hence, it is, for instance, impossible to reflect the mitigating effect of event-specific indemnification (or, conversely, the risk that the indemnitor cannot meet its liabilities). In contrast, the standard LDA provides the required granularity for reflecting insurance contracts (see Aue and Kalkbrener 2006), since insurance categories can be mapped to OR loss categories in an LDA model, eg, to OR event types, business lines or a combination of both. If the losses of one operational risk category are covered by several insurance policies, the percentage of losses that fall into a specific insurance category needs to be determined as well. For example, 70% of execution losses might be covered by general liability, 20% by professional liability, and 10% might not be insured.

Under EBOR, individual loss events are modeled. It is not necessary to specify coverage ratios for specific OR loss categories in insurance modeling, and event-specific indemnification can be incorporated. Usually, risk mitigants are applied in the following order: the loss amount before risk mitigation, denoted by $Z_j^{(0)} = Y_j + Y_j^{(\mathrm{nE})}$ for $j \in \{1,\dots,n\}$, is adjusted to $Z_j^{(\mathrm{ind})}$ by applying case-specific indemnification of losses. Subsequently, $Z_j^{(\mathrm{ind})}$ is further reduced to $Z_j^{(\mathrm{ins})}$ if insurance exists for the $j$th loss event. Note that loss indemnification and insurance are assumed to cover all expenses, including nonevent losses.

Finally, we point out that, unlike the LDA, the EBOR model allows the deduction of existing provisions from the loss $Z_j^{(\mathrm{ins})}$. The related methodology will be described in Section 3.4.

3.3.1 Third-party indemnification

We formalize the (partial) indemnification of event losses by a third party, a so-called indemnitor. Formally, the indemnitors $\mathrm{Ind}_1,\dots,\mathrm{Ind}_l$ form a partition of $\{1,\dots,n\}$ such that all events $j \in \mathrm{Ind}_k$ are covered by the $k$th indemnitor. The default probability of the $k$th indemnitor is denoted by $p_k^{(\mathrm{ind})}$, and $I_k^{(\mathrm{ind})}$ is the corresponding Bernoulli variable, defined by $p_k^{(\mathrm{ind})} := P(I_k^{(\mathrm{ind})} = 0)$. Finally, $\omega_j^{(\mathrm{ind})} \in [0,1]$ is the indemnified proportion of the $j$th event. It is set to zero if no indemnification is available. Hence, the $j$th event loss after indemnification equals
$$Z_j^{(\mathrm{ind})} = \bigl( 1 - I_k^{(\mathrm{ind})} \omega_j^{(\mathrm{ind})} \bigr) Z_j^{(0)},$$
where $j \in \mathrm{Ind}_k$. In other words, indemnification will be applied to all events $j \in \mathrm{Ind}_k$ if the $k$th indemnitor has not defaulted.

In the current model, we assume that the indemnification indicators $I_1^{(\mathrm{ind})},\dots,I_l^{(\mathrm{ind})}$ are independent of each other and of the event losses $Z_1^{(0)},\dots,Z_n^{(0)}$. This assumption could be relaxed by specifying a dependence structure for indemnitor defaults, eg, by utilizing techniques from credit portfolio modeling.

3.3.2 Insurance

Since EBOR models reflect the specific characteristics of potential future loss events, event-specific contracts can be incorporated. In particular, it is not necessary to specify mappings between operational risk and insurance categories, which are required in LDA models due to their lower granularity, eg, modeling BL/ET combinations instead of individual events.

Insurance contracts are characterized by certain specifications regarding the compensation of losses.

(1) A deductible $d$ is defined to be the amount the bank has to cover by itself.

(2) The single limit $l$ of an insurance policy determines the maximum amount of a single loss that is compensated by the insurer. In addition, there usually exists an aggregate limit $l_{\mathrm{agg}}$.


(3) The methodology for recognizing insurance also has to take into account uncertainties related to insurance payments, such as the certainty and the speed of payment, insurance through captives or affiliates, and the rating of the insurer. We translate the rating into a default probability $p^{(\mathrm{ins})}$ and model defaults of the insurer by a Bernoulli variable defined by $p^{(\mathrm{ins})} := P(I^{(\mathrm{ins})} = 0)$. The other limitations are transferred by expert judgment into a haircut $H$, which is applied to the insurance payout.

In mathematical terms, the amount paid by the insurer is defined by
$$\mathrm{Ins} := I^{(\mathrm{ins})} \max(\min(l, y) - d, 0) H,$$
as a function of the loss amount $y$. The actual specification of insurance parameters for individual payouts $\mathrm{Ins}_j$, $j = 1,\dots,n$, including haircuts, has to be based on insurance data and expert knowledge and is beyond the scope of this paper.
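The payout formula translates directly into code. The following sketch, with hypothetical parameter values, evaluates it for a single loss amount; aggregate limits, which are discussed next, are deliberately left out at the single-event level.

```python
def insurance_payout(y, deductible, limit, haircut, insurer_alive):
    """Insurer payment for a single loss y (sketch of the payout formula).

    insurer_alive: realization of I^(ins); 0 if the insurer has defaulted.
    Aggregate limits are not handled here; they are enforced across events
    during the simulation (see Section 3.5.1).
    """
    return insurer_alive * max(min(limit, y) - deductible, 0.0) * haircut

print(insurance_payout(y=5.0, deductible=1.0, limit=4.0, haircut=0.9,
                       insurer_alive=1))   # 0.9 * (4 - 1) = 2.7
```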

As for third-party indemnification, we define a separate insurance indicator $I_k^{(\mathrm{ins})}$ for each insurer and assume that different insurance indicators are independent of each other. In addition, the calculation of the payouts $\mathrm{Ins}_1,\dots,\mathrm{Ins}_n$ has to reflect aggregate limits. This aspect is covered in the simulation process (see Section 3.5.1).

Finally, the loss amount of the $j$th event after indemnification and insurance is defined by
$$Z_j^{(\mathrm{ins})} = Z_j^{(\mathrm{ind})} - \mathrm{Ins}_j.$$

3.4 Deduction of provisions

Provisions are used in accounting to proactively reflect the expected payouts of certain events. The risk of significant future losses (in excess of the provision) is thereby reduced. Unlike the LDA, the event-specific modeling of the EBOR approach allows for a better reflection of future cashflows (total losses minus provisions), making the forecasting more time-congruent.

More formally, we deduct the provision $\Pi_j$ for the $j$th event from its loss after insurance $Z_j^{(\mathrm{ins})}$ and arrive at the following loss definition:
$$Z_j := Z_j^{(\mathrm{ins})} - I_j \Pi_j.$$

We explicitly allow for gains if established provisions exceed the simulated loss.

3.5 Risk measures and capital allocation

One of the key benefits of the EBOR model is the possibility of determining risk contributions for each of the potential $n$ loss events, ie, the portfolio constituents. This feature of the model is due to its granularity, ie, individual loss events are modeled and therefore all the information required to allocate risk capital down to event level is available.


FIGURE 2 Process flow for generating one Monte Carlo sample of the aggregated loss distribution. [The figure depicts one Monte Carlo cycle: (1) generate risk factor sample $(\psi_1,\dots,\psi_m)$; (2) generate event indicator sample $(i_1,\dots,i_n)$ conditional on $(\psi_1,\dots,\psi_m)$; (3) for all loss events, simulate a sample of the joint distribution of loss ratios $L_i$; (4) calculate the value of the loss vector $(Y_1,\dots,Y_n)$; (5) simulate and add nonevent losses $Y_j^{(\mathrm{nE})}$; (6) deduct indemnification, insurance and provision; (7) aggregate to the total Monte Carlo (MC) scenario loss $z$ and start the next MC cycle. A "Correlation" label marks the dependence introduced through the risk factors.]

The allocation of risk capital to loss events facilitates the identification of key risk drivers (or risk concentrations) and supports management actions to mitigate the risk.

The allocation of risk capital to individual events is derived from the risk capital calculated at the aggregate level, which is based on the Monte Carlo simulation of the loss distribution of the EBOR model.

3.5.1 Simulation of the aggregate loss distribution

The aggregate loss distribution of an EBOR model cannot be represented in analytic form. We therefore apply Monte Carlo simulation to generate samples of the aggregate loss distribution $Z$, which are subsequently used for deriving risk measures at aggregate and event level:
$$Z = \sum_{j=1}^{n} Z_j = \sum_{j=1}^{n} \Bigl[ \bigl( 1 - I_k^{(\mathrm{ind})} \omega_j^{(\mathrm{ind})} \bigr) \bigl( Y_j^{(\mathrm{nE})} + I_j L_j E_j \bigr) - \mathrm{Ins}_j - I_j \Pi_j \Bigr].$$


These are the main steps in generating one Monte Carlo sample (the numbers in parentheses correspond to Figure 2, which depicts the simulation process flow).

Joint simulation of event indicators and loss ratios

(1) Generate a sample $(\psi_1,\dots,\psi_m)$ of the $m$-dimensional distribution of risk factors $\Psi_1,\dots,\Psi_m$ that specify the Bernoulli mixture model for the event indicators $I_1,\dots,I_n$.

(2) Conditionally on $(\psi_1,\dots,\psi_m)$, generate a sample $(i_1,\dots,i_n)$ of $I_1,\dots,I_n$.

(3) For those events that trigger losses, ie, events specified by the index set $J := \{ j \in \{1,\dots,n\} \mid i_j = 1 \}$, simulate a sample of the joint distribution of loss ratios $(L_j)_{j \in J}$.

(4) Based on the simulated values of $I_1,\dots,I_n$ and $(L_j)_{j \in J}$, calculate the value of the loss vector $(Y_1,\dots,Y_n)$.

Simulation of nonevent losses

(5) For each $j = 1,\dots,n$, add the nonevent loss by simulating the corresponding variable $Y_j^{(\mathrm{nE})}$ to give a value of the loss variable $Y_j + Y_j^{(\mathrm{nE})}$ before risk mitigation, denoted by $Z_j^{(0)}$.

Incorporation of risk mitigation and deduction of provisions

(6) Simulate indemnification and insurance indicators $I^{(\mathrm{ind})}$ and $I^{(\mathrm{ins})}$. Conditionally on the simulated values, calculate third-party indemnification and insurance:

- for each $j = 1,\dots,n$, compute the loss after indemnification $Z_j^{(\mathrm{ind})}$;

- apply the respective insurance contract to each event and order all events randomly to take aggregate limits into account; then calculate the loss after insurance $Z_j^{(\mathrm{ins})}$ for all $j = 1,\dots,n$;

- for all loss events $j \in J$, reduce $Z_j^{(\mathrm{ins})}$ by the provision of the $j$th loss event, resulting in a sample value $z_j$ of the loss variable $Z_j$ after risk mitigation.

Sample of aggregate loss distribution

(7) The sum of the losses $z := \sum_{j=1}^{n} z_j$ is a sample of the aggregate loss distribution $Z$.


3.5.2 Risk measures and allocation techniques

In the last twenty years, a well-founded mathematical theory has been developed for measuring and allocating risk capital. The most important cornerstone is the formalization of the properties of a coherent risk measure given in Artzner et al (1999). Their axiomatization provides an appropriate framework for the analysis and development of risk measures, eg, expected shortfall (ES) in Rockafellar and Uryasev (2000) and Acerbi and Tasche (2002). General principles for capital allocation can be found in a number of papers (see, for example, Kalkbrener 2005).¹

The general theory of measuring and allocating risk is independent of specific risk types. In particular, these techniques can be applied in EBOR models; eg, standard risk measures such as value-at-risk (VaR) or ES can be derived from the Monte Carlo samples representing the aggregate loss distribution. More precisely, the formulas
$$\mathrm{VaR}(\alpha) := \inf\{ z \in \mathbb{R} \mid P(Z \le z) \ge \alpha \}$$
and
$$\mathrm{ES}(\alpha) := E(Z \mid Z \ge \mathrm{VaR}(\alpha))$$
can be numerically evaluated if a sample list of the aggregate loss distribution $Z$ has been calculated.²

EBOR models allow the allocation of risk capital to individual exposures. This is because each sample $z$ of $Z$ can be split into contributions for the $n$ loss events, ie, $z = \sum_{j=1}^{n} z_j$, which provides the necessary information for calculating allocation formulas such as the ES allocation:
$$\mathrm{ES}_j(\alpha) := E(Z_j \mid Z \ge \mathrm{VaR}(\alpha)).$$
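Given Monte Carlo output, both risk measures and the ES allocation reduce to a few lines. The sketch below assumes the per-event losses have been stored alongside the aggregate samples; the empirical tail average is the approximation of ES discussed in footnote 2.

```python
import numpy as np

def var_es_allocation(Z, Zj, alpha=0.995):
    """Empirical VaR, ES and ES allocation from Monte Carlo samples (sketch).

    Z:  array of aggregate losses, shape (n_sims,)
    Zj: per-event losses, shape (n_sims, n_events), with Zj.sum(axis=1) == Z
    """
    var = np.quantile(Z, alpha)          # VaR(alpha)
    tail = Z >= var                      # tail scenarios
    es = Z[tail].mean()                  # ES(alpha) = E[Z | Z >= VaR]
    es_j = Zj[tail].mean(axis=0)         # ES_j(alpha) = E[Z_j | Z >= VaR]
    return var, es, es_j                 # es_j sums (approximately) to es
```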

Tail-focused allocation techniques such as ES based on a high quantile are designed to highlight risk concentrations. If the underlying portfolio is of limited granularity, risk capital is allocated to a small number of portfolio constituents. It depends on risk management strategies whether this feature of tail-focused allocation is desirable or whether alternative techniques, eg, allocation techniques that give more weight to the body of the underlying distributions, are preferred. These considerations are particularly relevant for EBOR models, since these are typically designed to quantify specific aspects of operational risk, eg, litigation risk, which may have rather concentrated risk profiles.

¹ For more recent research on capital allocation techniques, we refer the reader to Centrone and Gianin (2017) and the papers cited therein.
² Note that the formula for ES($\alpha$) is an approximation of the coherent risk measure expected shortfall, which is defined by $(1-\alpha)^{-1} \int_{\alpha}^{1} \mathrm{VaR}(u)\,\mathrm{d}u$.


4 EXPOSURE-BASED OPERATIONAL RISK MODEL FOR LITIGATION RISK

In this section, we illustrate the application of the general EBOR concept to a portfolio of pending litigations.

Over the last decade, litigation risk has become a major driver of banks' OR. A recent report by the Boston Consulting Group (Grasshoff et al 2017, p. 16) shows that the fifty largest US and European banks have paid "cumulative financial penalties of about US$321 billion assessed since the 2007–8 financial crisis through [to] the end of 2016". The reflection of this risk under a classical LDA, however, suffers from a number of the issues described in Section 2.1. In summary, risk is often underestimated at the beginning, since there is no capital impact until the initial provision is established, and overcapitalized after the loss has materialized. However, the bank often has in-depth knowledge about the underlying risk, eg, the likelihood and amount of a payment, as illustrated by the following examples. Event probabilities $p_j$ can be parameterized from categories of "probability of outflow" into which all (material) litigations are classified under existing accounting rules. The (noninflated) claimed amount may provide an estimate for the maximum loss, ie, the exposure $E_j$. Legal fees are projected during the budgeting process. Finally, potential risk mitigants, eg, eligibility for third-party indemnification or insurance or existing provisions, are known explicitly for each litigation.

In conclusion, (pending) litigations form a prime example for an exposure-based treatment within the OR framework that also follows the set of criteria developed in Section 2.4. In the following subsections, we will specify an EBOR model for a "litigation portfolio", starting, in Section 4.1, with its description and the definition of a loss event in the context of litigations. We continue with data requirements (Section 4.2) and the corresponding model specification (Section 4.3). Finally, in Section 4.4 we illustrate the model behavior for a typical litigation over the hypothetical states in its life cycle.

4.1 Basic portfolio definition and model variables

Throughout this section, we consider a "portfolio" of $n$ pending litigations. A loss event is defined as the occurrence of a payment due to settlement or negative court ruling within a one-year capital horizon. For the $j$th litigation it is indicated by the Bernoulli variable $I_j$. The random quantity to be paid in the case of an event, the outflow, is denoted by $O_j$; this equals the product of the deterministic exposure $E_j$ and a stochastic LGE ratio $L_j$.³ In addition to such potential event losses, we model nonevent losses $Y_j^{(\mathrm{nE})}$ in terms of legal fees, denoted by $\mathrm{LegalFee}_j$, $j = 1,\dots,n$. Fees and event losses are subject to risk mitigation through indemnification, insurance or provisions. In this section, we focus on modeling (correlated) event losses and independent legal fees for litigations, namely the components of
$$Z_j^{(0)} = \mathrm{LegalFee}_j + Y_j = \mathrm{LegalFee}_j + I_j L_j E_j.$$
The specification of risk mitigants from the relevant data is typically rather straightforward, and is therefore not covered here.

³ Note that in our model the total (final) loss for a litigation is simulated, rather than incremental profit and loss hits per year as in the establishment of (additional) provisions.

4.2 Data requirements

In the following, we outline which type of data is required to parameterize the EBOR model for the litigation portfolio. Data requirements cover case-specific information for each litigation, as well as the identification of dependencies (contagion effects) across the portfolio. In the general case, the required set of data should be readily available, since it is likely to be part of existing data processes, such as the case classification required for accounting or the specification of outflow estimates in the provisioning process. It should thus primarily be provided by the SMEs in the bank. If it cannot be sourced in this way, it would need to be calibrated from historical internal and external recordings or using classical operational loss data. The existence of such a database also serves as a way to challenge and validate the estimates set by the SMEs.

4.2.1 Case-specific information

For each pending litigation we require a number of parameter specifications, which are described in detail below. These complement readily available figures such as existing provisions $\Pi_j$ or information as to whether a case is eligible for indemnification or insurance (with corresponding parameterizations).

Probability of outflow. The probability of outflow $p_j^{(\mathrm{total})}$ describes the probability that a loss event for the $j$th litigation will happen (without timing restrictions). As a first classification, we propose to use the categories set under the accounting rules. For instance, International Accounting Standard 37 (International Accounting Standards Board 1998) requires a classification of pending legal cases with respect to their likelihood, ie, whether associated losses are "probable", "more than remote, less than probable" or "remote". The IAS does not explicitly specify probabilities for these categories, and such an assignment is generally considered to be a difficult task. However, a reasonable choice could be to assume that the probability of outflow is greater than 50% for probable cases, while its upper bound for remote cases is in the range of 5–15%. We can then either set $p_j^{(\mathrm{total})}$ to a common value by category or vary the estimate by case, eg, by including additional information about the likelihood of a loss event.


Remaining lifetime. The remaining lifetime $\hat{T}_j$ of the litigation estimates the time until closure of the case. It often depends on the age of the litigation and is required for the annualization of the probability of outflow.

Outflow estimates. We need estimates of the amount of outflow, $O_j$, for a loss event. The following list provides some examples:

(1) maximum amount of outflow (eg, claimed amount);

(2) expected amount of outflow (eg, amount used for provisions);

(3) quantile estimate, ie, amount $H_j$ that the outflow will not exceed with a likelihood of $h_j \in [0,1]$.

In setting these estimates, a number of factors are considered, eg, the type of claim, the plaintiff, the status of the case, rulings on dispositive motions, prior settlement discussions, other relevant rulings by courts or arbitration panels, relevant factual and legal developments, relevant settlements by others, the schedule for the litigation or arbitration, and opposing counsel. Clearly, the estimates are subject to significant judgment, and encompass a variety of assumptions, variables and known and unknown uncertainties.

The number of required outflow estimates depends on the distributional form of the outflow variable or, equivalently, the loss ratio. As we show in the next subsection, our model specification requires at least two estimates. Any additional information on the shape of the distribution improves the reflection of risk.

Legal fee estimates. Legal fees are, in general, paid on a regular basis, regardless of whether or not a loss event has occurred. They are usually part of the budgeting process. Estimates of a similar kind to those for the outflow estimates above are available. Within our model specification we require at least two estimates.

4.2.2 Identification of dependencies between litigations

Usually, litigations can be grouped into clusters in the following sense: any court ruling for one litigation impacts the likelihood of a payment for other litigations in the same cluster, but does not influence the outcome of litigations outside the cluster; eg, the same line of argument might be applicable to litigations in the same cluster but not to the rest. This setup nicely ties in with the clustering approach described in Section 3.1.1. The SME is asked to specify a partition $C_1,\dots,C_m$ of $\{1,\dots,n\}$, splitting the portfolio into $m$ litigation clusters. Maximum dependence between events within each cluster is assumed (see Section 3.1.1), while litigations in different clusters are supposed to be independent.


Clustering is also used to parameterize the correlation matrix $\Sigma_L \in \mathbb{R}^{n \times n}$ for loss ratio modeling. For each cluster $C_i$, $i = 1,\dots,m$, we specify a correlation parameter $\varrho_i$ such that $\mathrm{Corr}(L_{j_1}, L_{j_2}) = \varrho_i$ for all $j_1 \neq j_2 \in C_i$. In practice, this estimate is expected to be difficult to calibrate from historical observations, and should be set by the SMEs.

4.3 Model specification

4.3.1 Correlated litigation event losses

As introduced in Section 4.1, litigation event losses are modeled in terms of the event indicators $I_j$ and the amount of outflow $O_j$. We next introduce the model specification for these model variables.

For any event indicator, we need to estimate its expected value: in other words, the event probability $p_j$. This is derived by annualizing the related (multiyear) probability of outflow $p_j^{(\mathrm{total})}$. We first translate the estimated remaining lifetime of the litigation, $\hat{T}_j$, into the time-adjustment factor $\mathrm{TAF}_j \in [0,1]$ and then set the event probability to be the product
$$p_j := \mathrm{TAF}_j \, p_j^{(\mathrm{total})}, \quad j \in \{1,\dots,n\}. \qquad (4.1)$$

Note that the high values for the probability of outflow buckets constitute a fundamental difference to standard credit risk modeling, where typical default probabilities are significantly lower. This feature should be kept in mind when comparing characteristics of credit portfolio models and EBOR models for litigation risk.

The correlation between different litigation events is captured through the clustering of litigations (see Section 4.2.2). Model details are provided in Section 3.1.1.

Given a loss event for the $j$th litigation, represented by $I_j = 1$, we need to model the uncertainty in the final payment, or amount of outflow $O_j$. In line with the general EBOR model concept, we have defined this random quantity as the product of a deterministic exposure $E_j$ and the stochastic loss ratio $L_j$. While we have (in most cases) an expert opinion on the expected value as well as the upper and/or lower bound of $O_j$, we still need to model the uncertainties around these estimates, eg, the volatility of $L_j$. In order to specify the distribution functions $F_{L_j}$, the stochastic loss ratios $L_j$ are fitted to internal and external recordings of loss ratios from litigation events as listed in Boettrich and Starykh (2017). We found that lognormal distributions often provide a good fit:
$$L_j \sim \mathrm{LN}(\mu_j^{(\mathrm{LGE})}, \sigma_j^{(\mathrm{LGE})}).$$

This implies that the amount of outflow $O_j$ also follows a lognormal distribution, with
$$E[O_j] = E_j \mu_j^{(\mathrm{LGE})}, \qquad \sigma(O_j) = E_j \sigma_j^{(\mathrm{LGE})}, \qquad (4.2)$$
and we calibrate the mean $\mu_j^{(\mathrm{LGE})}$ and volatility $\sigma_j^{(\mathrm{LGE})}$ as well as the exposure $E_j$ to the set of available estimates on $O_j$.

We give two concrete examples. First, we suppose that there is an expert assessment of the claimed amount and of the expected amount of outflow $O_j^{\mathrm{e}}$ (eg, from provisioning). In this case, we set the exposure $E_j$ to the claimed amount and calibrate the volatility $\sigma^{(1)} = \sigma(O_j)$ such that the condition
$$O_j \le E_j \quad \text{almost surely}$$
is fulfilled for a lognormal distribution with mean $O_j^{\mathrm{e}}$.⁴ From these conditions, we finally derive $\mu_j^{(\mathrm{LGE})}$ and $\sigma_j^{(\mathrm{LGE})}$ (see (4.2)). Note that the exposure could also include some buffer if there is a chance that the claimed amount may be exceeded.

⁴ Practically, we impose the conditions $E[O_j] = O_j^{\mathrm{e}}$ and $P[O_j \le E_j] = 1 - \varepsilon$ for some small $\varepsilon > 0$.

In a variation, we assume that, in addition, the SME estimates that the final payment will not exceed some upper bound $O_j^{\mathrm{u}}$ with a probability of $q(O_j^{\mathrm{u}}) \in (0,1)$:
$$P[O_j \le O_j^{\mathrm{u}}] = q(O_j^{\mathrm{u}}). \qquad (4.3)$$
The quantile $q(O_j^{\mathrm{u}})$ might vary with litigation type or the expected remaining lifetime of the litigation. For instance, loss estimation for regulatory matters is sometimes subject to higher uncertainty. We next calibrate a second volatility, $\sigma^{(2)} = \sigma(O_j)$, such that the conditions
$$E[O_j] = O_j^{\mathrm{e}}, \qquad P[O_j \le O_j^{\mathrm{u}}] = q(O_j^{\mathrm{u}})$$
are satisfied. We then need to decide which volatility estimate, $\sigma^{(1)}$ or $\sigma^{(2)}$, is better for modeling the uncertainty in outflow estimates.

Finally, the joint simulation of the ratios is based on a Gaussian copula:
$$C_L(x_1,\dots,x_n) = C_{\Sigma_L}^{(\mathrm{Gauss})}\bigl(F_{L_1}(x_1),\dots,F_{L_n}(x_n)\bigr),$$
with the correlation matrix $\Sigma_L \in \mathbb{R}^{n \times n}$ specified in Section 4.2.2.

4.3.2 Nonevent losses: legal fees

In our model, nonevent losses $Y_j^{(\mathrm{nE})}$ are represented by legal fees. Unlike litigation payments, legal fee payments are supposed to occur on a regular, predetermined basis and independently of each other. Loss distributions arising from legal fees are generally supposed to be less heavy in the tail than those reflecting the uncertainty in final litigation payments, for which we utilized a lognormal distribution. Therefore, we assume that realizations of next year's legal fees follow a normal distribution:
$$\mathrm{LegalFee}_j \sim \mathcal{N}(\mu_j^{(\mathrm{LegalFee})}, \sigma_j^{(\mathrm{LegalFee})}), \quad j \in \{1,\dots,n\}.$$
Similarly to the parameter calibration for loss ratios $L_j$, we require at least two loss estimates for legal fees. Their expected value and volatility are then derived from these estimates. In addition, we floor all realizations of fee losses at zero.

4.4 Model illustration

In order to demonstrate its favorable properties, we illustrate the output of a (standalone) EBOR model for a hypothetical litigation over different states in its life cycle. Note that the life cycle parameters have been artificially chosen to emphasize certain model sensitivities and capital effects, and have not been derived from real litigations.

We divide the life cycle into five different phases, from initial filing (phase 1) to final payment and closure of the matter (phase 5). Since these phases typically do not coincide in length, in this illustration we ignore the annualization of event probabilities with respect to the estimated remaining lifetime of the litigation. We also ignore legal fees as well as indemnification and insurance, focusing only on potential event losses plus the effect of provisioning. The loss variable is hence given by
$$Z = Y - I\Pi = I(LE - \Pi) = I(O - \Pi).$$

The estimated exposure of the hypothetical litigation is US$1 billion. Since little is known about the case at the time of the initial filing, it is classified as "remote" and the event probability is estimated as 5%. The expected loss ratio is calibrated to historical data for cases of a similar kind. Let us assume a value of 20%. Correspondingly, the expected amount of outflow equals US$200 million. In phase 2, eg, after one month, the classification changes. The case has been reviewed and is now considered "more than remote, less than probable", with the event probability increasing to 30%. No specific provision is made, but the estimated expected loss is disclosed (in an aggregated way) as a so-called contingent liability (International Accounting Standards Board 1998). It is assumed by the SME that the amount of outflow will be less than US$400 million with a likelihood of 90%. Afterward, in phase 3, the estimated range of outflow narrows as more detailed information becomes available. The expected amount of outflow increases from US$200 million to US$300 million and the quantile estimate for US$400 million changes to 95%. In the fourth phase, the SME classifies the case as "probable" and a provision $\Pi$ is established that equals the current expected amount of outflow (US$300 million). The event probability increases to 75%. Further, the likelihood that the amount of outflow will be above US$400 million is now considered to be 1%. We finally assume the event is settled for US$350 million in phase 5.

FIGURE 3 Cumulative distribution function of the loss ratio over the life cycle of the hypothetical litigation. [The figure plots the cumulative distribution function against the loss ratio for each phase, from phase 1 (filing) to phase 5 (final payment).]

We now illustrate how the different phases of the life cycle of a litigation are reflected in an EBOR model. For the entire life of the litigation we set its exposure $E$ to US$1 billion. The loss ratio is modeled using the technique presented in Section 4.3.1. Figure 3 shows the cumulative distribution function of the loss ratio over the different phases. We observe that the estimate of the final payment stabilizes over time. For instance, the volatility is highest (18%) in phase 2, due to a comparatively high uncertainty of estimates, and falls to 3.8% in phase 4, when the case approaches closure.

For the first three phases, the distribution function of the loss variable $Z$ resembles that of the loss ratio. In phase 4, however, we are allowed to deduct the provision of US$300 million. This corresponds to a shift to the left in the distribution function. Table 1 lists the different key risk measures of the model over the different phases and compares them to actual profit and loss (P&L) hits. In particular, the VaR plus accumulated P&L (in absolute terms) converges to the final settlement amount of US$350 million. Note that, once settled in phase 5, the litigation no longer impacts the risk calculated for any remaining litigation.

TABLE 1 Model output (EL and VaR) versus accumulated P&L impact due to provisioning (phase 4) and final payment (phase 5).

(a) Model input parameters

                          Phase 1   Phase 2   Phase 3   Phase 4   Phase 5
p (%)                           5        30        30        75       100
E (US$m)                     1000      1000      1000      1000      1000
O^e (US$m)                    200       200       300       300       350
O^u (US$m)/q(O^u) (%)           —    400/90    400/95    400/99         —
Π (US$m)                        0         0         0       300         0

(b) Derived model parameters

                          Phase 1   Phase 2   Phase 3   Phase 4   Phase 5
μ(L) (%)                       20        20        30        30        35
σ(L) (%)                      5.3      18.2       5.6       3.8         0

(c) Model output (US$m)

                          Phase 1   Phase 2   Phase 3   Phase 4   Phase 5
Accumulated P&L                 0         0         0      −300      −350
EL                             10        60        90         0         0
VaR (99.5%)                   269       771       437       108         0

These characteristics are different from the treatment under the LDA, where historical cases determine the frequency and severity variables specified for litigation risk. In addition, the litigation would have been ignored until phase 4 under a traditional LDA, ie, until the establishment of a provision, leading to undercapitalization at the beginning of the litigation life cycle and overcapitalization after the loss materializes. In fact, it is counterintuitive that provisioning, which reduces risk capital in a realistic model, leads to an increase in LDA models. These features illustrate the LDA shortcomings discussed in Section 2.1.


5 INTEGRATION OF THE LOSS DISTRIBUTION APPROACH AND THE EXPOSURE-BASED METHODOLOGY

5.1 Integrated simulation of OR events under LDA and EBOR

A sound OR capital calculation has to be conducted in a fully integrated and diversified way. As a consequence, the integration of LDA and EBOR models becomes an important component of OR quantification if both approaches are applied, eg, if EBOR models are used for "predictable" risk types and LDA models cover risks that are reflected well by historical events.

In the LDA model, the dependence structure is specified for the units of measure, eg, at the BL/ET level, which we refer to as cells. The precise definition of the LDA dependence structure has a significant impact on the integration strategy. In this paper, we focus on the LDA model specified in (2.1) and assume that the frequency variables $N_j$, $j = 1,\dots,n$, are correlated through a copula, whereas the severity variables $S_{jk}$ are independent (Aue and Kalkbrener 2006). In this setup, we integrate the EBOR model by specifying the dependence of the EBOR frequency variable and LDA frequency variables through an additional dimension of the copula, ie, the dimension of the copula is increased to $n + 1$. In other words, the EBOR model is considered as an additional cell, analogously to the BL/ET combinations in a classical LDA model.⁵

Note that any integration concept would also trigger changes in the LDA model or its input data, eg, to avoid double counting of loss potential. Here, we assume that this clear separation of LDA and EBOR events has been carried out beforehand, leaving us with the task of specifying the integration model. For simplicity, risk mitigation effects are also ignored.

In our setup, we assume that we have $r$ cells remaining under the LDA ("residual LDA cells") and an $(r+1)$th cell treated under the EBOR approach ("EBOR cell"). The restriction to one EBOR cell is for illustrative purposes only: the concept can naturally be extended to an arbitrary number of EBOR cells. With respect to frequencies, we suppose that the EBOR cell is exposed to a maximum of $n$ events. The event frequencies of all cells are linked via a copula:
$$C_{\Sigma^{(\mathrm{freq})}}^{(\mathrm{freq})}\bigl(F_1^{(\mathrm{freq})}(x_1),\dots,F_{r+1}^{(\mathrm{freq})}(x_{r+1})\bigr),$$
with $F_l^{(\mathrm{freq})}$ denoting the event frequency distribution of the $l$th cell, and $\Sigma^{(\mathrm{freq})}$ being the correlation matrix.

⁵ Other frequently used LDA models are based on dependence structures applied to aggregated cell loss distributions, which combine frequency and severity information. The integration of an EBOR model into such a framework would be simpler, since the most complex task for integration, the recursive simulation of EBOR events starting from a total event frequency, can be omitted.


For each scenario $\ell$ of our Monte Carlo simulation, we obtain a sample
$$(u_1(\ell),\dots,u_r(\ell),u_{r+1}(\ell))$$
of correlated uniform random variables. They are cellwise translated into a number of loss events by applying the inverse of the cell frequency distribution function:
$$n_l(\ell) := \bigl(F_l^{(\mathrm{freq})}\bigr)^{-1}(u_l(\ell)), \quad l = 1,\dots,r+1.$$
Each loss event has a certain severity, ie, $S_l(1),\dots,S_l(n_l(\ell))$ for the $l$th cell, whose determination can now differ. For the first $r$ cells treated under the LDA, losses will still be sampled from a dedicated severity distribution:
$$S_l(\ell) = \sum_{j=1}^{n_l(\ell)} S_l(j).$$
For the $(r+1)$th cell, we combine losses from EBOR events:
$$S_{r+1}(\ell) = Z(\ell) = \sum_{j=1}^{n} Y_j^{(\mathrm{nE})}(\ell) + I_j(\ell) L_j(\ell) E_j.$$
Note that the realizations of $I_1(\ell),\dots,I_n(\ell)$ amount to a total of $n_{r+1}(\ell)$. Finally, cell losses are aggregated, yielding the portfolio scenario loss:
$$S(\ell) := \sum_{l=1}^{r+1} S_l(\ell).$$
The realizations $S(\ell)$ form a basis for estimating the loss distribution, from which we can read off risk measures such as VaR.

Most of the procedures above are standard and straightforward to apply. The only missing piece is the translation of the total number of EBOR events, $n_{r+1}(\ell)$, into realizations of $I_1(\ell),\dots,I_n(\ell)$. This will be shown in the next section.
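The frequency step can be sketched as follows: a Gaussian copula draw followed by cellwise inversion. The Gaussian copula, the Poisson frequency for the LDA cell, and the reuse of total_frequency_density from the Section 3.1.1 sketch are illustrative assumptions; any frequency distributions could be plugged in.

```python
import numpy as np
from scipy.stats import norm, poisson

def integrated_frequencies(corr, freq_ppfs, n_sims, rng):
    """Correlated cell event counts via a Gaussian frequency copula (sketch).

    corr:      (r+1) x (r+1) copula correlation matrix Sigma^(freq)
    freq_ppfs: inverse frequency cdfs; the first r entries could be Poisson
               ppfs, the last one the EBOR inverse cdf built from g_m.
    Returns an (n_sims, r+1) array of event counts n_l(l).
    """
    chol = np.linalg.cholesky(corr)
    u = norm.cdf(rng.standard_normal((n_sims, corr.shape[0])) @ chol.T)
    return np.column_stack([ppf(u[:, l]) for l, ppf in enumerate(freq_ppfs)])

rng = np.random.default_rng(3)
corr = np.array([[1.0, 0.3],
                 [0.3, 1.0]])                 # one residual LDA cell + EBOR cell
g = total_frequency_density([[0.30, 0.10], [0.05]])   # g_m, Section 3.1.1 sketch
ebor_ppf = lambda u: np.minimum(np.searchsorted(np.cumsum(g), u), len(g) - 1)
counts = integrated_frequencies(
    corr, [lambda u: poisson.ppf(u, mu=4.0), ebor_ppf], 10000, rng)
```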

5.2 Recursive simulation of EBOR events

We describe a simple algorithm for a recursive simulation of EBOR events. The simulation of EBOR loss events is based on the cluster densities $f_1,\dots,f_m$ and cumulative cluster densities $g_1,\dots,g_m$ defined in Section 3.1.1. Formula (3.3) explicitly specifies the densities $f_1,\dots,f_m$, whereas the recursion formula (3.4a) can easily be used to calculate the cumulative densities $g_1,\dots,g_m$. Note that the frequency distribution function $F_{r+1}^{(\mathrm{freq})}$ is specified by the cumulative density $g_m$ via
$$F_{r+1}^{(\mathrm{freq})}(k) = \sum_{j=0}^{k} g_m(j), \quad k = 0,\dots,n.$$


As shown in the previous subsection, $F_{r+1}^{(\mathrm{freq})}$ is used to perform a joint simulation of EBOR and LDA frequency distributions, which are linked through a copula function. The output is the total number of EBOR events, $n_{r+1}(\ell)$, for a specific scenario. It remains to simulate the corresponding EBOR events, ie, to determine a joint state for the event indicator variables $(I_1,\dots,I_n)$ such that
$$I_P(\ell) = \sum_{j=1}^{n} I_j(\ell) = n_{r+1}(\ell).$$

The simulation algorithm is based on an iterative application of recursion formula (3.4a), starting with the calculation of events in cluster $C_m$, then proceeding with $C_{m-1}$, and so on.

We first set $k_m := n_{r+1}(\ell)$. Conditional on $I_P = k_m$, the frequency variable $I_{C_m}$ has the form
$$P(I_{C_m} = j \mid I_P = k_m) = \frac{g_{m-1}(k_m - j) f_m(j)}{g_m(k_m)},$$
where $j$ is an element of $\{0,\dots,|C_m|\}$. We sample this random variable and obtain $l_m \in \{0,\dots,|C_m|\}$, which denotes the number of losses in cluster $C_m$. According to the dependence structure in $C_m$, losses have occurred for the $l_m$ cluster elements with the highest event probabilities.

The number of losses in the remaining $m - 1$ clusters is $k_{m-1} := k_m - l_m$. We apply recursion formula (3.4a) to cluster $C_{m-1}$, ie, we sample $I_{C_{m-1}}$ conditional on $I_P = k_m$ and $I_{C_m} = l_m$, and obtain the loss events in cluster $C_{m-1}$. After repeating the procedure for the remaining clusters, the full set of EBOR loss events has been specified.

6 CONCLUSION

In this paper, we introduced a general framework for an exposure-based approach in the context of OR quantification. We described a general methodological framework, highlighting in particular the benefits of the new approach over the commonly used LDA. We detailed the application of EBOR to a portfolio of pending litigations. This risk type is particularly well suited to an exposure-based approach due to the better usage of existing information and more plausible model behavior over the litigation life cycle. Finally, we discussed a strategy to integrate EBOR and LDA models by building hybrid frameworks, which facilitates the migration of OR subtypes from a classical to an exposure-based treatment.

We highlighted a particular advantage of EBOR: the wide scope of its applicability beyond capital calculation and its potential to evolve into an important OR management tool, accepted by risk managers, business experts and quants. One aim of this paper was to contribute to a conceptual/terminological basis for future development across the industry, helping to establish a common language. We encourage further advances in EBOR modeling and point out the importance of joint industry efforts (eg, through industry forums) to prove the successful applicability of EBOR, and to convince competent authorities that EBOR should be considered as a valuable instrument for future OR measurement and management.

DECLARATION OF INTEREST

The authors report no conflicts of interest. The authors alone are responsible for the content and writing of the paper. The views and opinions expressed in this paper are those of the authors in a personal capacity and do not necessarily reflect those of Deutsche Bank. Examples of analysis performed within this paper do not concern actual data, processes or procedures. Consequent insights made within the analysis are not reflective of positions at Deutsche Bank. Any litigation data or examples provided are purely illustrative and completely unrelated to any real matters, events or processes.

ACKNOWLEDGEMENTS

We thank Robert Huebner for his contributions and previous support: he has been advocating exposure-based OR quantification for many years, and initiated the EBOR project at Deutsche Bank.

REFERENCES

Acerbi, C., and Tasche, D. (2002). On the coherence of expected shortfall. Journal of Banking and Finance 26(7), 1487–1503 (https://doi.org/10.1016/S0378-4266(02)00283-2).
AMA Group (2013). AMA quantification challenges: AMAG range of practice and observations on "the thorny LDA topics". Industry Position Paper, The Risk Management Association, Philadelphia, PA.
Artzner, P., Delbaen, F., Eber, J. M., and Heath, D. (1999). Coherent measures of risk. Mathematical Finance 9(3), 203–228 (https://doi.org/10.1111/1467-9965.00068).
Aue, F., and Kalkbrener, M. (2006). LDA at work: Deutsche Bank's approach to quantifying operational risk. The Journal of Operational Risk 1(4), 49–93 (https://doi.org/10.21314/JOP.2007.020).
Baruh, N. (2016). Bridging the gap between operational risk measurement and management. Presentation, RiskMinds International Conference, Amsterdam, December 6–9.
Basel Committee on Banking Supervision (2004). International convergence of capital measurement and capital standards: a revised framework. Technical Report, Bank for International Settlements.
Basel Committee on Banking Supervision (2009). Observed range of practice in key elements of advanced measurement approaches (AMA). Technical Report, Bank for International Settlements.
Basel Committee on Banking Supervision (2016). Standardised measurement approach for operational risk. Technical Report, Bank for International Settlements.
Bhatia, M., Finger, C., and Gupton, G. (1997). CreditMetrics: the benchmark for understanding credit risk. Technical Report, JP Morgan, Inc.
Board of Governors of the Federal Reserve System (2012). Comprehensive Capital Analysis and Review 2012: methodology and results for stress scenario projections. Technical Report, Federal Reserve Board.
Boettrich, S., and Starykh, S. (2017). Recent trends in securities class action litigation: 2016 full-year review. Technical Report, NERA Economic Consulting, White Plains, NY.
Bohn, J. R., and Crosbie, P. (2002). Modeling default risk. Technical Report, KMV, LLC.
Centrone, F., and Gianin, E. (2017). Capital allocation à la Aumann–Shapley for nondifferentiable risk measures. Working Paper, Social Science Research Network.
Chapelle, A., Hassani, B., Peters, G. W., and Shevchenko, P. V. (2016). Should the advanced measurement approach be replaced with the standardized measurement approach for operational risk? The Journal of Operational Risk 11(3), 1–49 (https://doi.org/10.21314/JOP.2016.177).
Cope, E., Mignola, G., and Ugoccioni, R. (2016). Comments on the Basel Committee on Banking Supervision proposal for a new standardized approach for operational risk. The Journal of Operational Risk 11(3), 51–69 (https://doi.org/10.21314/JOP.2016.184).
European Banking Authority (2011). 2011 EU-wide stress test aggregate report. Technical Report, EBA.
Grasshoff, G., Mogul, Z., Pfuhler, T., Gittfried, N., Wiegand, C., Bohn, A., and Vonhoff, V. (2017). Global risk 2017: staying the course in banking. Technical Report, Boston Consulting Group.
International Accounting Standards Board (1998). International Accounting Standard 37: provisions, contingent liabilities and contingent assets. Technical Report, IASB.
Kalkbrener, M. (2005). An axiomatic approach to capital allocation. Mathematical Finance 15(3), 425–437 (https://doi.org/10.1111/j.1467-9965.2005.00227.x).
Kaposty, F., Loederbusch, M., and Maciag, J. (2017). Stochastic loss given default and exposure at default in a structural model of portfolio credit risk. The Journal of Credit Risk 13(1), 99–123 (https://doi.org/10.21314/JCR.2017.221).
McNeil, A., Frey, R., and Embrechts, P. (2005). Quantitative Risk Management. Princeton Series in Finance. Princeton University Press.
Memmel, C., Sachs, A., and Stein, I. (2012). Contagion at the interbank market with stochastic LGD. International Journal of Central Banking 8(3), 177–206.
Rockafellar, R. T., and Uryasev, S. (2000). Optimization of conditional value-at-risk. The Journal of Risk 2, 21–41 (https://doi.org/10.21314/JOR.2000.038).
Rosa, P. (2012). Credit risk structural models in op risk. Operational Risk and Regulation 13(10), 34–35.
Yan, H., and Wood, R. M. (2017). A structural model for estimating losses associated with the mis-selling of retail banking products. The Journal of Operational Risk 12(1), 1–19 (https://doi.org/10.21314/JOP.2017.186).


Journal of Operational Risk 13(2), 35–57. DOI: 10.21314/JOP.2018.206

Research Paper

Distortion risk measures for nonnegative multivariate risks

Montserrat Guillen,¹ José María Sarabia,² Jaume Belles-Sampera¹ and Faustino Prieto²

¹Department of Econometrics, Riskcenter-IREA, University of Barcelona, Avinguda Diagonal 690, 08034 Barcelona, Spain; emails: [email protected], [email protected]
²Department of Economics, University of Cantabria, Avenida de los Castros S/N, 39005 Santander, Spain; emails: [email protected], [email protected]

(Received May 9, 2017; revised July 24, 2017; accepted September 1, 2017)

ABSTRACT

We apply distortion functions to bivariate survival functions for nonnegative random variables. This leads to a natural extension of univariate distortion risk measures to the multivariate setting. For Gini's principle, the proportional hazard transform distortion and the dual power transform distortion, certain families of multivariate distributions lead to a straightforward risk measure. We show that an exact analytical expression can be obtained in some cases. We consider the independence case, the bivariate Pareto distribution and the bivariate exponential distribution. An illustration of the estimation procedure and the interpretation is also included. In the case study, we consider two loss events with a single risk value and monitor the two events together over four different periods. We conclude that the dual power transform gives more weight to the observations of extreme losses, but that the distortion parameter can modulate this influence in all cases. In our example, multivariate risk clearly diminishes over time.

Keywords: distortion functions; multivariate risk; multiperiod risk assessment; dependence; risk aggregation; multivariate loss.

Corresponding author: M. Guillen. Print ISSN 1744-6740 | Online ISSN 1755-2710. © 2018 Infopro Digital Risk (IP) Limited


1 INTRODUCTION

Classical risk measures are defined on univariate risks, ie, on unidimensional random variables, and not in a multivariate setting. However, risk evaluation problems in real life are rarely one dimensional. In many practical applications, it is usual to deal with multidimensionality by transforming multivariate risks into a unidimensional risk using some aggregation procedure, for instance the sum of risks. Once the multiple dimensions of the risk problem have been reduced to one dimension, classical risk measures can be used to quantify the risk.

This paper takes a different perspective in proposing a set of risk measures for nonnegative multivariate risks. Our approach to multivariate risk assessment problems differs from the traditional procedure in the way aggregation is performed: instead of transforming the multivariate random variable first and then quantifying the risk in the univariate setting, we concentrate on the whole multidimensional distribution and define a one-dimensional risk measurement value for the distribution. We follow the definition given by Rüschendorf (2013, p. 180), which we present in Section 2.

Risk management often requires multivariate risk measures that capture the interdependence between many risk factors. When considering all the dimensions, it is natural to take the joint multivariate distribution function of the risks as the starting point. For instance, the quantile of the joint distribution leads to the analysis of critical layers (as defined by Salvadori et al (2011) and discussed later by Di Bernardino and Palacios-Rodríguez (2017)), which are multidimensional by definition. Our approach is totally different: we aim to obtain a single value that summarizes the risk of a multivariate random vector, but we apply a distortion to the joint multidimensional survival function and then carry out a multiple integration in order to obtain a summary value. The main advantage is that we do not work with vectors of risk measures. Moreover, we show that, for some special multivariate distributions, this approach provides simple analytical expressions. A potential drawback is that the distortion of the multivariate survival and the multiple integral, even if it is an elegant generalization, is a summary measure that combines all dimensions in one and may be difficult to interpret.

As stated in Embrechts and Puccetti (2006), in the risk management and finance literature random vectors are referred to as portfolios, and individual random subvectors as risks. Usually, portfolios of identically distributed, nonnegative risks are considered. Note that even if financial returns can be positive or negative, the risk manager looks at losses, so one of the two axes is of particular interest. According to Sun et al (2017), portfolio risk management measures the distribution of losses in a portfolio over a fixed horizon, but the dependence between risk factors complicates the computation. The dependence structure is then assumed from a joint multivariate distribution that has a fixed dependence over time, or a multivariate copula function


that could include some time-varying dependence. Alternatively, in order to analyze each dimension separately, we must take the marginal distribution or the componentwise measures. Cousin and Di Bernardino (2013) dealt with multidimensionality by analyzing vector-valued measures with the same dimension as the underlying risk variables; this approach is also referred to as set-valued risk measures. From the vector of risk measures, Cousin and Di Bernardino define the lower-orthant value-at-risk (VaR), which is constructed from level sets of multivariate distribution functions, and the upper-orthant VaR, which is constructed from level sets of multivariate survival functions.

We should note that an application of multivariate risk measures is found in the risk management of financial institutions, since Basel III requires a minimum capital that is derived from the analysis of risk on an aggregated basis. Traditional univariate risk measures cannot address portfolio risk management as a whole.

The set of risk measures we propose can be called distortion risk measures for nonnegative multivariate risks. As explained in the following sections, there is a natural parallelism between the unidimensional distortion risk measures introduced by Wang (1995a,b) and the risk measures introduced in this paper.

In the insurance setting, and in operational risk in particular, risk managers generally look at losses only, and these are positive values. If these results were to be extended to the analysis of returns, which can be either positive or negative, then the same principle of distortion as for the joint survival could be used. Belles-Sampera et al (2013) indicated that distortion risk measures can be interpreted as aggregation operators for finite random variables that do not necessarily have to be positive.

We show in the illustrations that our proposal provides a good method to monitor multivariate risks, which can be especially interesting in the context of operational risk analysis.

2 DISTORTION RISK MEASURES FOR THE NONNEGATIVE UNIVARIATE CASE

Let us assume a probability space $(\Omega, \mathcal{A}, P)$ with sample space $\Omega$, a $\sigma$-algebra $\mathcal{A}$ and a probability $P$ from $\mathcal{A}$ to $[0,1]$, and the set of all random variables defined on this space. Consider a nonnegative random variable $X$ defined on this probability space and its survival function $S(x) = P(X > x)$. A distortion risk measure applied to $X$, which we denote by $\varrho[g:S]$, is defined by

$$\varrho[g:S] = \int_0^{+\infty} g(S(x))\,\mathrm{d}x, \qquad (2.1)$$

where $g$ is the associated distortion function, which is a function from $[0,1]$ to $[0,1]$, and it must be increasing (not necessarily strictly increasing) and such that $g(0) = 0$


TABLE 1 Some examples of distortion functions for distortion risk measures.

Risk measure                    Parameters          Distortion function
Gini's principle                $0 < \theta < 1$    $g_\theta(t) = (1+\theta)t - \theta t^2$
Proportional hazard transform   $m > 1$             $g_m(t) = t^{1/m}$
Dual power transform            $m > 1$             $g_m(t) = 1 - (1-t)^m$

and $g(1) = 1$. Two main examples of distortion risk measures broadly used in financial and insurance applications are VaR and tail VaR (TVaR) at a fixed confidence level $\alpha \in (0,1)$, whose distortion functions are

$$\psi_\alpha(t) = \mathbf{1}_{[1-\alpha,1]}(t) \quad\text{and}\quad \gamma_\alpha(t) = \frac{t}{1-\alpha}\,\mathbf{1}_{[0,1-\alpha)}(t) + \mathbf{1}_{[1-\alpha,1]}(t),$$

respectively, where $\mathbf{1}_{[a,b]}(t)$ equals 1 if $a \leq t \leq b$, and 0 otherwise. Three classes of distortion risk measures that will be used in the rest of the paper and their associated distortion functions are given in Table 1, namely the risk measure based on Gini's principle, the proportional hazard transform and the dual power transform. Similar procedures can be applied to Denneberg's absolute deviation principle (Denneberg 1990), which is defined through the distortion function $g_\alpha(t) = t(1+\alpha)\,\mathbf{1}_{[0,0.5)}(t) + (\alpha + (1-\alpha)t)\,\mathbf{1}_{[0.5,1]}(t)$, and to the GlueVaR risk measures introduced by Belles-Sampera et al (2014), which generalize range VaR and follow from the distortion functions

$$g^{h_1,h_2}_{\beta,\alpha}(t) = \frac{h_1}{1-\beta}\,t\,\mathbf{1}_{[0,1-\beta)}(t) + \left[h_1 + \frac{h_2-h_1}{\beta-\alpha}\,(t-(1-\beta))\right]\mathbf{1}_{[1-\beta,1-\alpha)}(t) + \mathbf{1}_{[1-\alpha,1]}(t),$$

with $\alpha \leq \beta < 1$, $0 < h_1 < 1$ and $h_1 \leq h_2 < 1$. Note that when $\alpha = \beta$ neither $(\beta-\alpha)^{-1}$ nor $\mathbf{1}_{[1-\beta,1-\alpha)}(t)$ for $t \in (0,1)$ is well defined and, in addition, $\alpha = \beta$ implies that $h_1 = h_2$, because $h_1$ and $h_2$ represent the distorted survival probability associated with $1-\beta$ and $1-\alpha$, respectively. So, in those cases, the distortion function reduces to

$$g^{h_1}_{\alpha}(t) = \frac{h_1}{1-\alpha}\,t\,\mathbf{1}_{[0,1-\alpha)}(t) + \mathbf{1}_{[1-\alpha,1]}(t).$$

Expression (2.1) can be understood as the Choquet integral of $X$ with respect to the set function $g \circ P$, where $P$ is the probability function associated with the probability space in which $X$ is defined (Choquet 1954; Denneberg 1994). Henceforth, only nonnegative random variables are considered.
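For readers who want to experiment with (2.1), the following minimal sketch (our illustration, not part of the original paper) evaluates the three distortion risk measures of Table 1 by numerical quadrature for an exponentially distributed loss; the exponential choice, the rate and the parameter values are illustrative assumptions only.

```python
# Numerical evaluation of the univariate measure (2.1) for the three
# distortions of Table 1; a sketch assuming X ~ Exponential(rate lam).
import numpy as np
from scipy.integrate import quad

def rho(g, S):
    """varrho[g:S] = integral over [0, inf) of g(S(x))."""
    value, _ = quad(lambda x: g(S(x)), 0.0, np.inf)
    return value

lam = 0.5
S = lambda x: np.exp(-lam * x)                  # survival function of X

g_gini = lambda t: 1.5 * t - 0.5 * t**2         # Gini's principle, theta = 0.5
g_ph   = lambda t: t ** (1.0 / 1.5)             # proportional hazard, m = 1.5
g_dp   = lambda t: 1.0 - (1.0 - t) ** 5         # dual power, m = 5

# For g_ph, g(S(x)) = exp(-lam*x/m), so the exact answer is m/lam = 3,
# which provides a check on the quadrature.
print(rho(g_gini, S), rho(g_ph, S), rho(g_dp, S))
```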

The specific preference for a distortion function is difficult to determine. However, the transformation of the survival function reflects in some way the emphasis on the extremes.


FIGURE 1 Distortion functions corresponding to (a) Gini's principle ($\theta$ = 0, 0.5, 1.0), (b) the proportional hazard transform ($m$ = 1, 1.5, 2.0) and (c) the dual power transform ($m$ = 1, 5, 10). [Each panel plots the distortion function $g_\theta(t)$ or $g_m(t)$ against $t \in [0,1]$.]

Belles-Sampera et al (2016) examined how risk attitudes can be represented in the selection of a given distortion. They showed that the analysis of the distortion function offers a local description of the agent's stance on risk in relation to the occurrence of accumulated losses. Here, the concepts of absolute risk attitude and local risk attitude arise naturally. For example, the area under the distortion reveals the global risk attitude, whereas the ratio of the distortion to the identity function provides local information about the relative risk behavior associated with the risk measure at any point in the range of values.

A plot of the three distortion functions (see Figure 1) shows that the Gini principle risk measure weights the right tail less heavily than the other measures because its distortion function is flatter than the others for low values. When the proportional hazard transform is used, the importance of the large losses is moderate, but when the dual power transform is selected with parameter equal to 5 or 10 we observe a high curve for low values, which means that the right tail of the positive losses will have more importance in the calculation of the risk measure. Therefore, extreme losses are weighted more than in other cases for the dual power transform with $m = 10$, because the distortion function is closer to 1 for low values of $t$.

3 DISTORTION RISK MEASURES FOR THE NONNEGATIVE BIVARIATE CASE

Let $(X_1, X_2)^{\mathrm{T}}$ be a nonnegative bivariate random variable with joint survival function $S_{12}(x_1,x_2)$ and marginal survival functions $S_1(x_1)$ and $S_2(x_2)$.


The idea is to introduce distortion risk measures defined on $(X_1, X_2)^{\mathrm{T}}$ that are congruent with the unidimensional distortion risk measures defined on the associated marginal distributions.

In a first step, we consider a distortion function $g(\cdot)$ and define a distorted bivariate survival function as follows:

$$T_{12}(x_1,x_2) = g[S_{12}(x_1,x_2)], \qquad (3.1)$$

where the function $g(\cdot)$ is chosen in order to define a genuine bivariate survival function in (3.1). Note that the marginal survival functions in (3.1), corresponding to distorted transformations of the joint survival function $S_{12}(x_1,x_2)$, are

$$T_1(x_1) = T_{12}(x_1, 0) = g[S_1(x_1)] \qquad (3.2)$$

and

$$T_2(x_2) = T_{12}(0, x_2) = g[S_2(x_2)]. \qquad (3.3)$$

Once a suitable distortion function $g(\cdot)$ has been selected, a possible distortion risk measure associated with (3.1) is simply

$$\varrho_{12}[g:S_{12}] = \int_0^{\infty}\!\!\int_0^{\infty} T_{12}(x_1,x_2)\,\mathrm{d}x_1\,\mathrm{d}x_2 = \int_0^{\infty}\!\!\int_0^{\infty} g[S_{12}(x_1,x_2)]\,\mathrm{d}x_1\,\mathrm{d}x_2. \qquad (3.4)$$

Note that the corresponding distortion risk measures associated with (3.2) and (3.3) are

$$\varrho_1[g:S_1] = \int_0^{\infty} g[S_1(x_1)]\,\mathrm{d}x_1 \qquad (3.5)$$

and

$$\varrho_2[g:S_2] = \int_0^{\infty} g[S_2(x_2)]\,\mathrm{d}x_2. \qquad (3.6)$$

So, there is a natural parallelism between the multivariate setting (3.4), the marginals in (3.5) and (3.6), and the univariate case. This approach is the one proposed in Rüschendorf (2006, Section 3) and Rüschendorf (2013, p. 180). However, it is not the only possible way to address risk measures for bivariate risks (see, for instance, Embrechts et al (2009), which shows how multivariate extreme value theory yields the ideal modeling environment). Different extensions to multivariate risk measurement using VaR and TVaR can be found in Cousin and Di Bernardino (2013, 2014), where vector-valued measures are proposed with the same dimension as the underlying risk portfolio, and the lower-orthant (upper-orthant) risk measure is constructed from level sets of multivariate distribution functions (multivariate survival functions). Unlike allocation measures or systemic risk measures, these measures are suitable for multivariate risk problems where risks are heterogeneous and cannot be aggregated together before calculating the risk measure.
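As a sanity check on (3.4), the distorted joint survival can also be integrated numerically. The sketch below is our illustration (not the authors' code); it assumes independent exponential marginals, for which the closed form (4.3) derived later gives the exact value, and replaces the infinite limits with a large finite one.

```python
# Direct numerical double integration of (3.4); a sketch with independent
# exponential marginals and the dual power distortion with m = 2.
import numpy as np
from scipy.integrate import dblquad

def rho12(g, S12, upper=60.0):
    """varrho_12[g:S_12] over [0, upper]^2; upper must cover the tails."""
    value, _ = dblquad(lambda x2, x1: g(S12(x1, x2)), 0.0, upper, 0.0, upper)
    return value

S12 = lambda x1, x2: np.exp(-0.5 * x1 - 1.0 * x2)   # independence case
g_dp = lambda t: 1.0 - (1.0 - t) ** 2               # dual power, m = 2

# Exact value from the closed form (4.3): 2*mu1*mu2 - mu_{1:2}^(1)*mu_{1:2}^(2)
#   = 2*2*1 - 1*0.5 = 3.5
print(rho12(g_dp, S12))
```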


4 SOME BIVARIATE DISTORTION RISK MEASURES WITH A CLOSED-FORM EXPRESSION

Before generalizing this definition to higher dimensions, we explore some expressions for $\varrho_{12}[g:S_{12}]$ where the $g$ function has been restricted to belong to the set of distortion functions associated with the univariate risk measures presented in Table 1. Some of the cases considered here have the advantage of providing a straightforward analytical expression. The main reason why having closed-form expressions is interesting is that these risk measures can then be implemented in spreadsheet calculations and simulation procedures very easily.

Let us begin with a bivariate random variable $(X_1, X_2)^{\mathrm{T}}$ with independent marginals, and then assume a dependence structure between the marginals driven by copulas in the Farlie–Gumbel–Morgenstern (FGM) family. In the case of independence, we do not assume any particular marginal distribution, but this situation is not the main focus of the paper, because what we really want to analyze is cases where we assume a dependence structure. The bivariate Pareto distribution is a clear example of the type of two-dimensional distribution that a risk manager would use to analyze losses coming from two lines of business, or two types of risk. For example, in operational risk, we can assume that losses can be of two types, with each severity represented by one of the two dimensions. Similarly, the bivariate exponential distribution or the FGM distribution could reflect the monthly size of losses in, for instance, internal and external fraud.

A bivariate Pareto distribution is a standard choice for finance/insurance losses. For instance, Embrechts and Puccetti (2006) calculate the bounds of a sum of two Pareto and lognormal bivariate risks, and provide a new definition of multivariate VaR.

4.1 Risk measures for the bivariate case assuming independence

Let $(X_1, X_2)^{\mathrm{T}}$ be a bivariate risk with joint survival function $S_{12}(x_1,x_2)$. In this section, we obtain bivariate risk measures assuming independence between the marginal risks $X_1$ and $X_2$, that is, assuming that $S_{12}(x_1,x_2) = S_1(x_1)S_2(x_2)$. We consider three different distortion risk measures.

4.1.1 Risk measures based on Gini’s principle

Let us consider the distortion function given by Gini's principle, $g_\theta(t) = (1+\theta)t - \theta t^2$, with $0 < \theta < 1$. Using (3.4), we obtain the multivariate measure

$$\varrho_{12}[g_\theta:S_{12}] = (1+\theta)\,\mu_1\mu_2 - \theta\,\mu^{(1)}_{1:2}\,\mu^{(2)}_{1:2}, \qquad (4.1)$$

where $\mu_i = E(X_i)$, $i = 1,2$, and $\mu^{(i)}_{1:2}$, $i = 1,2$, represent the mathematical expectations of the minimum of two copies of the random variable $X_i$, with $i = 1,2$.


4.1.2 Risk measures based on the proportional hazard transform

Let us consider the proportional hazard transform principle given by the distortion function $g_m(t) = t^{1/m}$, $m > 1$. In this case, using the notation $F_i(x_i) = 1 - S_i(x_i)$, $i = 1,2$, the multivariate risk measure can be written as

$$\varrho_{12}[g_m:S_{12}] = \prod_{i=1}^{2} E\big[F_i^{-1}\big(\mathrm{Be}(1, 1/m)\big)\big], \qquad (4.2)$$

where $\mathrm{Be}(a,b)$ represents a classical beta distribution. Note that the terms in the product correspond to the mathematical expectation of the generalized beta distribution (see Alexander et al 2012; Jones et al 2004) with baseline cumulative distribution function $F_i$ and parameters $(1, 1/m)$.

4.1.3 Risk measures based on the dual power transform

The following bivariate risk measure is based on the dual power transform principle, $g_m(t) = 1-(1-t)^m$ with $m > 1$. The corresponding multivariate risk measure is given by

$$\varrho_{12}[g_m:S_{12}] = \sum_{k=1}^{m} (-1)^{k+1}\binom{m}{k}\,\mu^{(1)}_{1:k}\,\mu^{(2)}_{1:k}, \qquad (4.3)$$

where $\mu^{(i)}_{1:k}$, $i = 1,2$, represent the mathematical expectations of the minimum of $k$ independent and identically distributed (iid) copies of the random variable $X_i$, with $i = 1,2$. Note that $\mu^{(i)}_{1:1} = \mu_i$ for all $i$.
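When the expectations of minima in (4.1)–(4.3) are not available in closed form, they can be estimated by simulation. The sketch below is our illustration under assumed lognormal marginals (not a specification from the paper): each $\mu^{(i)}_{1:k}$ is estimated as the sample mean of minima of $k$ iid draws and plugged into (4.3).

```python
# Monte Carlo evaluation of (4.3) under independence; marginals are assumed
# lognormal purely for illustration.
import numpy as np
from math import comb

rng = np.random.default_rng(0)

def mu_min(draw, k, n=200_000):
    """Estimate of mu_{1:k} = E[min of k iid copies of X]."""
    return draw((n, k)).min(axis=1).mean()

draw1 = lambda size: rng.lognormal(mean=0.0, sigma=0.5, size=size)
draw2 = lambda size: rng.lognormal(mean=0.5, sigma=0.3, size=size)

m = 3  # dual power parameter
rho = sum((-1) ** (k + 1) * comb(m, k) * mu_min(draw1, k) * mu_min(draw2, k)
          for k in range(1, m + 1))
print(rho)
```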

4.2 Risk measures for the bivariate Pareto distribution

The examples of bivariate risk measures with a closed-form expression presented in Section 4.1 are based on the hypothesis of independence of the two risks. However, this assumption is often unrealistic in practice because losses from different sources may occur simultaneously. We therefore work with different classes of dependent risks.

In this section we consider the expressions that several bivariate distortion risk measures take when they are applied to the bivariate dependent Pareto distribution proposed by Mardia (1962) (see also Arnold 1983), which is sometimes also called the bivariate Lomax distribution. The bivariate Pareto distribution is defined in terms of the following bivariate survival function:

$$S_{12}(x_1,x_2) = \left(1 + \frac{x_1}{\sigma_1} + \frac{x_2}{\sigma_2}\right)^{-a}, \quad x_1, x_2 > 0, \qquad (4.4)$$

where $\sigma_1, \sigma_2 > 0$ are scale parameters and $a > 0$ is a shape parameter. Note that both marginal distributions are Pareto distributions with survival functions equal to $S_i(x_i) = (1 + x_i/\sigma_i)^{-a}$, with $x_i > 0$, $i = 1,2$.


To compute the different bivariate risk measures, we use the result of Lemma 4.1.

Lemma 4.1 If $\sigma_1, \sigma_2 > 0$ and $a > 2$, then

$$\int_0^{\infty}\!\!\int_0^{\infty} \left(1 + \frac{x_1}{\sigma_1} + \frac{x_2}{\sigma_2}\right)^{-a} \mathrm{d}x_1\,\mathrm{d}x_2 = \frac{\sigma_1\sigma_2}{(a-1)(a-2)}. \qquad (4.5)$$

Proof The result is direct, taking into account that if $a > 1$, then

$$\int_0^{\infty} \left(1 + \frac{x_1}{\sigma_1} + \frac{x_2}{\sigma_2}\right)^{-a} \mathrm{d}x_1 = \frac{\sigma_1}{a-1}\left(1 + \frac{x_2}{\sigma_2}\right)^{-(a-1)}. \qquad\square$$

4.2.1 Risk measures based on Gini’s principle

If we consider the distortion function given by Gini's principle, $g_\theta(t) = (1+\theta)t - \theta t^2$, with $0 < \theta < 1$, using (3.4) and (4.5) we obtain

$$\varrho_{12}[g_\theta:S_{12}] = (1+\theta)\,\frac{\sigma_1\sigma_2}{(a-1)(a-2)} - \theta\,\frac{\sigma_1\sigma_2}{(2a-1)(2a-2)},$$

which can be written as

$$\varrho_{12}[g_\theta:S_{12}] = \frac{(3a\theta + 4a - 2)\,\sigma_1\sigma_2}{2(a-1)(a-2)(2a-1)} \qquad (4.6)$$

and is valid for $a > 2$.

4.2.2 Risk measures based on the proportional hazard transform

Now, we choose the proportional hazard transform principle represented by the distortion function $g_m(t) = t^{1/m}$ with $m > 1$. The associated risk measure is

$$\varrho_{12}[g_m:S_{12}] = \frac{m\,\sigma_1\sigma_2}{(a-m)(a-2m)} \qquad (4.7)$$

if $a > 2m$.

4.2.3 Risk measures based on the dual power transform

For the dual power transform principle with distortion function $g_m(t) = 1-(1-t)^m$, $m > 1$, the corresponding bivariate risk measure is given by

$$\varrho_{12}[g_m:S_{12}] = \sum_{k=1}^{m} (-1)^{k+1}\binom{m}{k}\,\frac{\sigma_1\sigma_2}{(ak-1)(ak-2)}, \qquad (4.8)$$

with $a > 2$.
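These three bivariate Pareto formulas translate directly into code. The sketch below states (4.6)–(4.8) as printed, including their validity conditions; the function and argument names are ours, not the authors'. Evaluated at the parameter estimates of Table 3 in Section 6, they reproduce the entries of Tables 4–6.

```python
# Closed forms (4.6)-(4.8) for the bivariate Pareto distribution (4.4).
from math import comb

def gini_pareto(theta, a, s1, s2):
    """Equation (4.6); requires a > 2 and 0 < theta < 1."""
    return (3*a*theta + 4*a - 2) * s1 * s2 / (2 * (a - 1) * (a - 2) * (2*a - 1))

def ph_pareto(m, a, s1, s2):
    """Equation (4.7), as printed in the text; requires a > 2m and m > 1."""
    return m * s1 * s2 / ((a - m) * (a - 2*m))

def dp_pareto(m, a, s1, s2):
    """Equation (4.8); requires a > 2 and integer m > 1."""
    return sum((-1) ** (k + 1) * comb(m, k) * s1 * s2 / ((a*k - 1) * (a*k - 2))
               for k in range(1, m + 1))
```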


4.3 Risk measures for the bivariate exponential distribution

Another dependence structure to be investigated is the bivariate exponential distribution given by

$$S_{12}(x_1,x_2) = \exp(-a_1 x_1 - a_2 x_2 - \theta a_1 a_2 x_1 x_2), \quad x_1, x_2 > 0, \qquad (4.9)$$

where $a_1, a_2 > 0$ and $0 \leq \theta \leq 1$. This joint survival function corresponds to the Gumbel type-I bivariate exponential distribution considered by Gumbel (1960).

The following lemma is useful for the computation of the different risk measures when they are applied to this distribution.

Lemma 4.2 If $S_{12}(x_1,x_2)$ denotes the bivariate survival function defined in (4.9), we have

$$\int_0^{\infty}\!\!\int_0^{\infty} S_{12}(x_1,x_2)\,\mathrm{d}x_1\,\mathrm{d}x_2 = \frac{1}{\theta a_1 a_2}\,\exp\!\left(\frac{1}{\theta}\right)\left[-\mathrm{Ei}\!\left(-\frac{1}{\theta}\right)\right], \qquad (4.10)$$

where

$$-\mathrm{Ei}(-z) = \int_z^{\infty} \frac{\mathrm{e}^{-t}}{t}\,\mathrm{d}t \qquad (4.11)$$

represents the exponential integral function.

Proof Integrating (4.9) with respect to $x_1$, we have

$$\int_0^{\infty} S_{12}(x_1,x_2)\,\mathrm{d}x_1 = \frac{\mathrm{e}^{-a_2 x_2}}{a_1(1 + a_2\theta x_2)},$$

and integrating again with respect to $x_2$ we obtain (4.10), using definition (4.11). $\square$

4.3.1 Risk measures based on Gini’s principle

For Gini’s principle, we have that the risk measure expression for the bivariateexponential distribution is

12Œg� WS12� D 1

�a1a2

�.1C �/ exp

�1

����Ei

�1

��

� 1

2�a1a2

�� exp

�2

����Ei

�2

��: (4.12)

4.3.2 Risk measures based on the proportional hazard transform

In the case of the proportional hazard transform, we obtain that the bivariate risk measure can be expressed as

$$\varrho_{12}[g_m:S_{12}] = \frac{m}{\theta a_1 a_2}\,\exp\!\left(\frac{1}{m\theta}\right)\left[-\mathrm{Ei}\!\left(-\frac{1}{m\theta}\right)\right]. \qquad (4.13)$$


4.3.3 Risk measures based on the dual power transform principle

For the dual power transform principle we obtain the following closed-form expression for the risk measure applied to a bivariate exponential distribution:

$$\varrho_{12}[g_m:S_{12}] = \sum_{k=1}^{m} (-1)^{k+1}\binom{m}{k}\,\frac{1}{k\theta a_1 a_2}\,\exp\!\left(\frac{k}{\theta}\right)\left[-\mathrm{Ei}\!\left(-\frac{k}{\theta}\right)\right]. \qquad (4.14)$$

4.4 A dependent model based on the FGM distributions

Now, we consider the Farlie–Gumbel–Morgenstern distribution (Farlie 1960; Gumbel 1960; Morgenstern 1956) with joint survival function

$$S_{12}(x_1,x_2;\delta) = S_1(x_1)S_2(x_2)\,[1 + \delta(1-S_1(x_1))(1-S_2(x_2))], \qquad (4.15)$$

where $\delta \in [-1,1]$ is the dependence parameter, and $\delta = 0$ corresponds to the independent case.

To obtain the different bivariate risk measures, we need the following lemma.

Lemma 4.3 Let $X_{i:n}$ be the $i$th order statistic in a sample of size $n$, and let the $i$th spacing be

$$S_{i:n} = X_{i+1:n} - X_{i:n}. \qquad (4.16)$$

The fundamental formulas for moments of order statistics in terms of integrals involving the distribution function only are given by

$$E(S_{i:n}) = \binom{n}{i}\int_{-\infty}^{\infty} F(x)^i\,[1-F(x)]^{n-i}\,\mathrm{d}x. \qquad (4.17)$$

Proof See Pearson (1902), and also Jones and Balakrishnan (2002). $\square$

Using Lemma 4.3, if $X$ is a positive random variable, we have

$$\int_0^{\infty} F(x)[1-F(x)]\,\mathrm{d}x = \frac{E(S_{1:2})}{2} = \frac{E(X_{2:2}-X_{1:2})}{2}, \qquad (4.18)$$

$$\int_0^{\infty} F(x)[1-F(x)]^2\,\mathrm{d}x = \frac{E(S_{1:3})}{3} = \frac{E(X_{2:3}-X_{1:3})}{3}, \qquad (4.19)$$

$$\int_0^{\infty} F(x)^2[1-F(x)]^2\,\mathrm{d}x = \frac{E(S_{2:4})}{6} = \frac{E(X_{3:4}-X_{2:4})}{6}. \qquad (4.20)$$

4.4.1 Risk measures based on Gini’s principle

We consider the distortion function based on Gini's principle, given by $g_\theta(t) = (1+\theta)t - \theta t^2$. We have the following theorem.


Theorem 4.4 Let $(X_1,X_2)^{\mathrm{T}}$ be a bivariate random variable with bivariate survival function given by (4.15). Then, we have

$$\int_0^{\infty}\!\!\int_0^{\infty} S_{12}(x_1,x_2)\,\mathrm{d}x_1\,\mathrm{d}x_2 = \mu_1\mu_2 + \tfrac{1}{4}\,\delta\,E(S^{(1)}_{1:2})\,E(S^{(2)}_{1:2}) \qquad (4.21)$$

and

$$\int_0^{\infty}\!\!\int_0^{\infty} S_{12}(x_1,x_2)^2\,\mathrm{d}x_1\,\mathrm{d}x_2 = \mu^{(1)}_{1:2}\mu^{(2)}_{1:2} + \tfrac{2}{9}\,\delta\,E(S^{(1)}_{1:3})\,E(S^{(2)}_{1:3}) + \tfrac{1}{36}\,\delta^2\,E(S^{(1)}_{2:4})\,E(S^{(2)}_{2:4}), \qquad (4.22)$$

where $S^{(k)}_{i:n}$, $k = 1,2$, is defined as in (4.16), the superscript corresponds to the marginal $X_k$, $k = 1,2$, and $\mu^{(i)}_{1:2}$, $i = 1,2$, are defined as in Section 4.1.1.

Proof The proof is direct using the expression for the survival FGM copula, ie, (4.15), and (4.18)–(4.20). $\square$

Using the above result, the corresponding bivariate risk measure is

$$\varrho_{12}[g_\theta:S_{12}] = (1+\theta)\{\mu_1\mu_2 + \tfrac{1}{4}\delta\,E(S^{(1)}_{1:2})E(S^{(2)}_{1:2})\} - \theta\{\mu^{(1)}_{1:2}\mu^{(2)}_{1:2} + \tfrac{2}{9}\delta\,E(S^{(1)}_{1:3})E(S^{(2)}_{1:3}) + \tfrac{1}{36}\delta^2\,E(S^{(1)}_{2:4})E(S^{(2)}_{2:4})\}. \qquad (4.23)$$

4.4.2 Risk measures based on the dual power transform

If we take the distortion function $g_m(t) = 1-(1-t)^m$ with $m = 2$, we obtain

$$\varrho_{12}[g_2:S_{12}:\delta] = 2\mu_1\mu_2 - \mu^{(1)}_{1:2}\mu^{(2)}_{1:2} + \tfrac{2}{4}\,\delta\,(\mu^{(1)}_{2:2}-\mu^{(1)}_{1:2})(\mu^{(2)}_{2:2}-\mu^{(2)}_{1:2}) - \tfrac{2}{9}\,\delta\,(\mu^{(1)}_{2:3}-\mu^{(1)}_{1:3})(\mu^{(2)}_{2:3}-\mu^{(2)}_{1:3}) - \tfrac{1}{36}\,\delta^2\,(\mu^{(1)}_{3:4}-\mu^{(1)}_{2:4})(\mu^{(2)}_{3:4}-\mu^{(2)}_{2:4}), \qquad (4.24)$$

where

$$\mu^{(k)}_{i:j} = E[X^{(k)}_{i:j}], \quad k = 1,2,$$

and $X^{(k)}_{i:j}$, $k = 1,2$, denotes the $i$th order statistic in a sample of size $j$ corresponding to the random variables $X_1$ and $X_2$. If we set $\delta = 0$ in (4.24), we obtain (4.3) with $m = 2$, taking into account that $\mu^{(i)}_{1:1} = \mu_i$ for all $i = 1,2$.
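The order-statistic expectations in (4.23) and (4.24) are easy to estimate by simulation when they are not available in closed form. Below is a simulation sketch of (4.24); the exponential marginals are an assumption made purely for illustration, and $\mu^{(k)}_{i:j}$ is estimated by sorting iid draws.

```python
# Simulation of the FGM dual power measure (4.24), m = 2; marginals are
# assumed exponential purely for illustration.
import numpy as np

rng = np.random.default_rng(1)

def mu_order(draw, i, j, n=200_000):
    """Estimate of mu_{i:j} = E[i-th order statistic (1-indexed) of j iid copies]."""
    return np.sort(draw((n, j)), axis=1)[:, i - 1].mean()

d1 = lambda size: rng.exponential(scale=2.0, size=size)   # X1
d2 = lambda size: rng.exponential(scale=1.0, size=size)   # X2
mu1 = lambda i, j: mu_order(d1, i, j)
mu2 = lambda i, j: mu_order(d2, i, j)

delta = 0.5
rho = (2 * mu1(1, 1) * mu2(1, 1) - mu1(1, 2) * mu2(1, 2)
       + (2/4) * delta * (mu1(2, 2) - mu1(1, 2)) * (mu2(2, 2) - mu2(1, 2))
       - (2/9) * delta * (mu1(2, 3) - mu1(1, 3)) * (mu2(2, 3) - mu2(1, 3))
       - (1/36) * delta**2 * (mu1(3, 4) - mu1(2, 4)) * (mu2(3, 4) - mu2(2, 4)))
print(rho)
```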


5 EXTENSION TO THE MULTIVARIATE CASE

Let us consider a $p$-dimensional nonnegative random variable $(X_1, X_2, \ldots, X_p)^{\mathrm{T}}$ and a distortion function $g$. Analogously to the definition of distortion risk measures for the nonnegative bivariate case given in (3.4), the distortion risk measure for multivariate risks associated with $g$ may be defined as follows.

Definition 5.1 A distortion risk measure for multivariate nonnegative risks can be defined as

$$\varrho_{12\cdots p}[g:S_{12\cdots p}] = \int_0^{\infty}\!\!\cdots\!\int_0^{\infty} g[S_{12\cdots p}(x_1,\ldots,x_p)]\,\mathrm{d}x_1\cdots\mathrm{d}x_p, \qquad (5.1)$$

where $S_{12\cdots p}$ is the multivariate survival function of the $p$-dimensional nonnegative random variable $(X_1, X_2, \ldots, X_p)^{\mathrm{T}}$, and $g$ is a distortion function.

This definition corresponds to that given in Rüschendorf (2006, Section 3). Definition 5.1 may not be particularly appropriate for some purposes. For instance, if an insurance company needs to determine solvency capital for a three-year window, it is necessary that the risk value preserves the scale, so it should correspond to monetary units and not, for instance, to "monetary units to the power of three". If $X_s$ is the random loss from period $s-1$ to period $s$, $s = 1,2,3$, then an insurance company interested in a risk measure for the vector $(X_1,X_2,X_3)^{\mathrm{T}}$ may find that $\varrho_{123}[g:S_{123}]$ is too large. Our proposal is to consider $(\varrho_{123}[g:S_{123}])^{1/3}$ to overcome such an inconvenience.

Definition 5.2 A rescaled distortion risk measure for multivariate nonnegative risks can be defined as

$$\tilde{\varrho}_{12\cdots p}[g:S_{12\cdots p}] = (\varrho_{12\cdots p}[g:S_{12\cdots p}])^{1/p}, \qquad (5.2)$$

where $\varrho_{12\cdots p}[g:S_{12\cdots p}]$ comes from Definition 5.1, $S_{12\cdots p}$ is the multivariate survival function of the $p$-dimensional nonnegative random variable $(X_1, X_2, \ldots, X_p)^{\mathrm{T}}$ and $g$ is a distortion function.

Note that, once a distortion function $g$ has been selected, Definitions 5.1 and 5.2 are both consistent with the definition of a distortion risk measure for the unidimensional case, because $\tilde{\varrho}_1[g:S_1] = \varrho_1[g:S_1]$ by (5.2), and they also match (2.1).

Standardized data could be used when the different units of measurement are a concern. In many cases the dimensions use different units of measurement. For instance, in the financial services industry, some risks are price based (such as the betas), whereas others are calculated as an index (a composite indicator of systemic stress) or are balance-sheet based (the ratio of nonperforming loans to total loans).


In this section we compute only the multivariate risk measure (5.1), assuming that the components of the random vector $(X_1,\ldots,X_p)^{\mathrm{T}}$ are independent, because in this case the expressions are straightforward.

For the distortion function given by Gini's principle, $g_\theta(t) = (1+\theta)t - \theta t^2$, $0 < \theta < 1$, (5.1) turns into

$$\varrho_{12\cdots p}[g_\theta:S_{12\cdots p}] = (1+\theta)\prod_{i=1}^{p}\mu_i - \theta\prod_{i=1}^{p}\mu^{(i)}_{1:2}, \qquad (5.3)$$

where $\mu_i = E(X_i)$ and $\mu^{(i)}_{1:2}$, $i = 1,2,\ldots,p$, represent the mathematical expectations of the minimum of two copies of the random variable $X_i$.

For the proportional hazard transform, $g_m(t) = t^{1/m}$ with $m > 1$, the closed-form expression of the multivariate risk measure for independent risks is

$$\varrho_{12\cdots p}[g_m:S_{12\cdots p}] = \prod_{i=1}^{p} E\big[F_i^{-1}\big(\mathrm{Be}(1,1/m)\big)\big], \qquad (5.4)$$

where $\mathrm{Be}(a,b)$ represents a classical beta distribution.

For the dual power transform principle, $g_m(t) = 1-(1-t)^m$ with $m > 1$, the expression for the multivariate risk measure is given by

$$\varrho_{12\cdots p}[g_m:S_{12\cdots p}] = \sum_{k=1}^{m} (-1)^{k+1}\binom{m}{k}\prod_{i=1}^{p}\mu^{(i)}_{1:k}, \qquad (5.5)$$

where $\mu^{(i)}_{1:k}$, $i = 1,2,\ldots,p$, represent the mathematical expectation of the minimum of $k$ iid copies of the random variable $X_i$, with $i = 1,2,\ldots,p$.
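For independent exponential marginals, the minimum of $k$ iid copies is again exponential, so $\mu^{(i)}_{1:k} = 1/(k r_i)$ and both (5.5) and the rescaled measure (5.2) can be evaluated exactly. A sketch for $p = 3$, with rates chosen arbitrarily for illustration:

```python
# Evaluation of (5.5) and the rescaled measure (5.2) for p = 3 independent
# exponential risks; for Exp(rate r), the min of k iid copies is Exp(rate k*r).
from math import comb, prod

rates = [0.5, 1.0, 2.0]    # illustrative rates of the p = 3 marginals
m = 5                      # dual power parameter
p = len(rates)

rho = sum((-1) ** (k + 1) * comb(m, k) * prod(1.0 / (k * r) for r in rates)
          for k in range(1, m + 1))
rho_rescaled = rho ** (1.0 / p)   # Definition 5.2: back to the original scale
print(rho, rho_rescaled)
```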

6 A NUMERICAL EXAMPLE FOR BIVARIATE NONNEGATIVE RISKS

We considered an example where the occurrence of two phenomena is observed over time. Our objective was to provide a multivariate risk measure in order to monitor the evolution of the risk of these two magnitudes using a single synthesized value. Therefore, one multivariate risk measure is better than using two different risk measures for each dimension separately.

This application shows that it is possible to analyze multivariate operational risk from many sources, for instance, when the risk manager has to monitor the occurrence of operational events by looking at the number or severity of events by class, ie, several dimensions, and wants to have only one risk value instead of a different risk measure for every type of event.


6.1 Data and methodology

For illustrative purposes, we obtained accidental (unintentional injury) death data from the Spanish national statistics institute (Instituto Nacional de Estadistica; www.ine.es). In this data set, causes of accidental death in Spain are classified as follows:

(1) traffic accidents of motor vehicles;

(2) other transport accidents;

(3) accidental falls;

(4) accidental drowning, immersion or suffocation;

(5) accidents by fire, smoke or hot substances;

(6) accidental poisoning by psychoactive drugs or abuse of drugs;

(7) other accidental poisoning;

(8) other accidents.

For this work, we grouped these into two classes: deaths due to crashes (causes (1)–(2)) and deaths due to other accidental causes (causes (3)–(8)). Then, we analyzed the following bivariate variable: the number of fatalities due to crashes ($X_1$) and the number of deaths due to other accidental causes ($X_2$) in a province or autonomous city (according to the province of residence of the deceased) per year; there are fifty provinces and two autonomous cities in Spain. For this, we selected the years 2000, 2004, 2008 and 2012. Table 2 shows the data set considered, and Figure 2 shows the corresponding three-dimensional histograms.

Given that the observed number of occurrences is large, we did not fit a discrete distribution, but instead fitted the bivariate Pareto distribution described in Section 4.2 by maximum likelihood. For this model, the probability density function is

$$f_{12}(x_1,x_2; a, \sigma_1, \sigma_2) = \frac{\partial^2 S_{12}(x_1,x_2)}{\partial x_1\,\partial x_2} = \frac{a(a+1)}{\sigma_1\sigma_2}\left(1 + \frac{x_1}{\sigma_1} + \frac{x_2}{\sigma_2}\right)^{-(a+2)},$$

where $(a, \sigma_1, \sigma_2)$ is the unknown three-parameter vector of the model, $S_{12}(x_1,x_2)$ is the corresponding bivariate survival function (see (4.4)), and the loglikelihood


TABLE 2 Accidental deaths in Spain, due to crashes (X1) or other accidental causes (X2), in a province (or autonomous city).

                           2000       2004       2008       2012
Province/city            X1    X2   X1    X2   X1    X2   X1    X2
Albacete                 64    44   44    48   30    69   17    94
Alicante/Alacant        219   165  198   209  133   255   65   226
Almería                 129    75  114    81   56    96   31   103
Araba/Alava              43    27   37    39   19    43   16    53
Asturias                188   181  151   245   74   241   66   297
Avila                    19    17   27    37   11    32   15    45
Badajoz                 107    57  100    66   64    54   40    99
Balears, Illes          147   124  118   118   93   137   67   164
Barcelona               646   902  414   940  253  1192  226  1101
Bizkaia                 168   148  106   168   61   195   51   200
Burgos                   92    42   42    87   44    79   31    85
Cáceres                  54    45   80    54   33    32   16    58
Cádiz                   123    87  129   131   61   133   38   140
Cantabria                69    83   43   101   31   120   24   178
Castellón/Castelló      100    68   87    75   41    74   31    89
Ciudad Real              88    47   60    63   56    73   31   114
Córdoba                  91    84   82    98   57   112   47   119
Coruña, A               251   160  169   211  105   229   65   163
Cuenca                   33    26   36    32   18    49   18    65
Gipuzkoa                110   109   67   113   49   120   22   136
Girona                   94   106   71   119   51   130   48   132
Granada                 119   115  112   125   80   134   49   122
Guadalajara              25    18   28    29   15    31   12    44
Huelva                   46    42   51    54   44    44   25    62
Huesca                   47    27   43    50   19    44   19    49
Jaén                     84    72   68    98   75    66   20    80
León                    121    72   78    96   69    99   48   121
Lleida                  100    61   86    76   56    94   43    75
Lugo                    100    73   79    76   50    78   37    85
Madrid                  488   665  357   782  253   767   86   696
Málaga                  150   135  142   170  113   189   63   240
Murcia                  222   132  218   174  118   178   80   172
Navarra                 108    68   94   106   52   115   40   114
Ourense                  76    69   64   117   40    90   21    94
Palencia                 34    19   24    37   17    38    6    44
Palmas, Las             132   142   63   192  111   204   31   101
Pontevedra              186   133  136   211   94   170   70   157
Rioja, La                60    42   55    62   36    53   14    49
Salamanca                44    34   61    58   36    64   19    51
Santa Cruz de Tenerife   89   109   73   147   59   164   48   170
Segovia                  32    15   21    17   11    26   10    18
Sevilla                 204   180  216   213  121   213   72   196
Soria                    24    13   32    17   12    34    3    17
Tarragona               143   116  130   120   72   157   48   137
Teruel                   33    22   23    38   16    40   10    21
Toledo                   81    55   79    77   66    74   32   147
Valencia/València       311   274  286   300  183   307  101   360
Valladolid               70    57   66    87   44    71   23    72
Zamora                   37    31   28    35   17    31    7    35
Zaragoza                150    86  145   114   85   150   54   117
Ceuta                     4     4    9     4    2     6    2     9
Melilla                   4     5    3     9    2     8    0     7

Source: Instituto Nacional de Estadistica (2014).

function is given by

$$\log\ell(a,\sigma_1,\sigma_2) = \sum_{i=1}^{n} \log f(x_{1i}, x_{2i}; a, \sigma_1, \sigma_2) = n\log[a(a+1)] - n\log(\sigma_1) - n\log(\sigma_2) - (a+2)\sum_{i=1}^{n}\log\left(1 + \frac{x_{1i}}{\sigma_1} + \frac{x_{2i}}{\sigma_2}\right),$$

where $(x_{1i}, x_{2i})$, $i = 1,\ldots,n$, is the sample of bivariate data, and the maximum likelihood estimate of the parameter vector, $(\hat{a}, \hat{\sigma}_1, \hat{\sigma}_2)$, is that which maximizes the loglikelihood function $\log\ell(a,\sigma_1,\sigma_2)$.
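The maximization can be carried out with any general-purpose optimizer. The following sketch is our illustration (the paper does not specify its numerical routine): it minimizes the negative loglikelihood with scipy, parameterizing on the log scale to keep $a$, $\sigma_1$ and $\sigma_2$ positive. The three data points shown are just the first rows of Table 2 for 2000, truncated for brevity.

```python
# Maximum likelihood fitting of the bivariate Pareto loglikelihood above.
import numpy as np
from scipy.optimize import minimize

def negloglik(log_params, x1, x2):
    a, s1, s2 = np.exp(log_params)     # log-parameterization enforces positivity
    n = len(x1)
    u = 1.0 + x1 / s1 + x2 / s2
    ll = (n * np.log(a * (a + 1.0)) - n * np.log(s1) - n * np.log(s2)
          - (a + 2.0) * np.sum(np.log(u)))
    return -ll

# Illustrative data only: the first three provinces of Table 2, year 2000.
x1 = np.array([64.0, 219.0, 129.0])
x2 = np.array([44.0, 165.0, 75.0])

res = minimize(negloglik, x0=np.log([5.0, 300.0, 300.0]), args=(x1, x2),
               method="Nelder-Mead")
a_hat, s1_hat, s2_hat = np.exp(res.x)
print(a_hat, s1_hat, s2_hat)
```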

Finally, we obtained the risk measures for the bivariate Pareto distribution based on Gini's principle, on the proportional hazard transform and on the dual power transform, as described in Section 4.2, by using (4.6)–(4.8), respectively.

6.2 Results

Table 3 shows the parameter estimates for the bivariate Pareto model (the $a$, $\sigma_1$ and $\sigma_2$ parameters) fitted by maximum likelihood to the number of fatalities due to crashes and the number of deaths due to other accidental causes in a Spanish province or autonomous city per year, for the four years selected: 2000, 2004, 2008 and 2012.


FIGURE 2 Three-dimensional histograms of fatalities due to crashes ($X_1$) and deaths due to other accidental causes ($X_2$) in a Spanish province in a year. [Four panels, one for each of 2000, 2004, 2008 and 2012.]

TABLE 3 Parameter estimates from the bivariate Pareto model for the accidental deaths data set by maximum likelihood.

                   2000      2004      2008      2012
$\hat{a}$         4.2406    5.7511    5.1390    6.1394
$\hat{\sigma}_1$  395.43    471.63    271.84    206.63
$\hat{\sigma}_2$  321.96    590.56    544.90    699.24


Tables 4–6 show the risk measures for the bivariate Pareto distribution based on Gini's principle, on the proportional hazard transform and on the dual power transform, respectively.


TABLE 4 Risk measures for the bivariate Pareto distribution based on Gini's principle for accidental death bivariate data.

θ        2000       2004       2008       2012
0.0    17 534.7   15 628.0   11 400.8    6 791.6
0.1    19 025.6   16 911.7   12 348.0    7 346.2
0.2    20 516.5   18 195.4   13 295.2    7 900.7
0.3    22 007.4   19 479.1   14 242.4    8 455.3
0.4    23 498.3   20 762.8   15 189.7    9 009.8
0.5    24 989.2   22 046.5   16 136.9    9 564.3
0.6    26 480.1   23 330.2   17 084.1   10 118.9
0.7    27 971.0   24 613.9   18 031.3   10 673.4
0.8    29 461.8   25 897.6   18 978.5   11 227.9
0.9    30 952.7   27 181.3   19 925.8   11 782.5
1.0    32 443.6   28 465.0   20 873.0   12 337.0

TABLE 5 Risk measures for the bivariate Pareto distribution based on the proportional hazard transform for accidental death bivariate data.

m         2000       2004       2008       2012
1.0     17 534.7   15 628.0   11 400.8    6 791.6
1.1     21 853.0   18 549.4   13 725.9    8 005.9
1.2     27 299.4   21 914.5   16 474.9    9 387.1
1.3     34 308.1   25 814.7   19 755.2   10 966.0
1.4     43 558.0   30 366.8   23 711.6   12 780.9
1.5     56 170.7   35 722.1   28 544.1   14 880.2
1.6     74 136.6   42 080.6   34 536.5   17 325.6
1.7    101 351.0   49 711.5   42 104.9   20 197.3
1.8    146 589.0   58 985.5   51 883.7   23 601.3
1.9    234 589.0   70 427.1   64 889.3   27 680.2
2.0    472 423.0   84 802.8   82 855.8   32 630.6

It can be seen that increasing the value of $\theta$ (Table 4) or the value of $m$ (Tables 5 and 6) results in an increase in the corresponding risk measure value. In addition, in this example, it can be seen that the risk measures decrease year-over-year in most cases when $\theta$ or $m$ is held constant.

The conclusion for this illustration is that there is evidence of a decrease in the risk for the number of deaths from the two different causes from 2000 to 2012.

This application shows that our proposed method to quantify multivariate operational risk is straightforward and useful for monitoring multivariate risks.


TABLE 6 Risk measures for the bivariate Pareto distribution based on the dual power transform for accidental death bivariate data.

m         2000       2004       2008       2012
1       17 534.7   15 628.0   11 400.8    6 791.6
2       32 443.6   28 465.0   20 873.0   12 337.0
3       45 739.8   39 634.5   29 182.3   17 141.3
4       57 903.2   49 657.3   36 686.4   21 437.9
5       69 208.8   58 826.7   43 587.7   25 357.8
6       79 832.9   67 327.8   50 014.7   28 983.4
7       89 896.8   75 286.5   56 055.1   32 370.7
8       99 488.7   82 793.2   61 772.2   35 559.8
9      108 675.0   89 915.4   67 213.4   38 580.6
10     117 508.0   96 705.4   72 415.4   41 456.1
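As a consistency check, the closed forms of Section 4.2 reproduce these tables directly from the Table 3 estimates. A sketch follows, repeating the two formulas for self-containment; the printed output should agree (up to rounding of the estimates) with the $\theta = 0.5$ row of Table 4 and the $m = 5$ row of Table 6.

```python
# Recomputing rows of Tables 4 and 6 from the Table 3 estimates via (4.6), (4.8).
from math import comb

params = {2000: (4.2406, 395.43, 321.96),
          2004: (5.7511, 471.63, 590.56),
          2008: (5.1390, 271.84, 544.90),
          2012: (6.1394, 206.63, 699.24)}

def gini_pareto(theta, a, s1, s2):                  # equation (4.6)
    return (3*a*theta + 4*a - 2) * s1 * s2 / (2 * (a - 1) * (a - 2) * (2*a - 1))

def dp_pareto(m, a, s1, s2):                        # equation (4.8)
    return sum((-1) ** (k + 1) * comb(m, k) * s1 * s2 / ((a*k - 1) * (a*k - 2))
               for k in range(1, m + 1))

for year, (a, s1, s2) in sorted(params.items()):
    print(year,
          round(gini_pareto(0.5, a, s1, s2), 1),    # Table 4, theta = 0.5
          round(dp_pareto(5, a, s1, s2), 1))        # Table 6, m = 5
```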

FIGURE 3 Trend in risk values based on (a) $\theta = 0.5$ for Gini's principle, (b) $m = 1.5$ for the proportional hazard transform and (c) $m = 5$ for the dual power transform. [Each panel plots the risk measure against the years 2000, 2004, 2008 and 2012.]

When looking at the plots of the distortion functions presented in Figure 1, we clearly see that, for all of them, the larger the parameter, the closer the distortion function is to 1 for low values of $t$. In the distortion, low values of $t$ correspond exactly to large values of the loss variables. Therefore, we expect to obtain risk measures that increase when the distortion parameter increases. In Tables 4–6, we see that the larger the value of $\theta$ and $m$, the larger the resulting risk value. This happens for all the years (columns) and all the risk measures. The reason is that the weight of the right tail of the loss distribution in the computation of the risk summary value increases with $\theta$ and $m$.


When we look at the risk values by row, we always obtain a decreasing trend. This would not occur if we were using a concave transform, as it would not weight the large values of losses so much.

In Figure 3, we plot the trend in risk values based on $\theta = 0.5$ for Gini's principle, $m = 1.5$ for the proportional hazard transform and $m = 5$ for the dual power transform. These correspond to the middle rows of Tables 4–6, respectively. Such values are powerful indicators, able to capture the multivariate structure of risks and to represent it in a single value per year. When looking at the trend presented in Figure 3, we conclude that there is a clearly decreasing risk over the time period when the two dimensions of losses are taken into consideration.

We have shown that the multivariate risk measure analysis provides a simple tool to monitor the evolution of risk when we take into account the two dimensions considered in this example: the number of victims by event type. We liked this particular example because it is common to have several types of operational risk events that need to be monitored both over time and simultaneously.

7 CONCLUSIONS

We have presented a way to address multivariate distortion risk measures and have given some examples of distortion functions and distributions for which the final expression has a closed form.

We believe that this methodological approach, although restricted to nonnegative cases, can be useful in many risk management applications.

The main advantage of our method is that there is no need to use vector-valued risk measures; instead, for some distributions that are typical in the operational risk context, such as the bivariate Pareto, we can obtain analytical expressions for multivariate distortion risk measures. The main drawback of our method is the difficulty of interpreting the summarizing measure in the scale and units of the original components of the vector of losses.

The main limitation regarding interpretation, as in many other aggregation methods, is that distortion functions combine and rescale the original units of measurement. In the multivariate case, when we use distorted multivariate survival functions to obtain a distortion risk measure for a multivariate risk, the units of measurement are also distorted.

DECLARATION OF INTEREST

The authors report no conflicts of interest. The authors alone are responsible for the content and writing of the paper.


ACKNOWLEDGEMENTS

The authors acknowledge the support received from the Spanish Ministry of Science/FEDER grants ECO2016-76203-C2-1-P and ECO2016-76203-C2-2-P. J. M. Sarabia and F. Prieto acknowledge the support received from the Santander Financial Institute (SANFI) of the Fundación UCEIF through the University of Cantabria, funded by sponsorship from Banco Santander.

REFERENCES

Alexander, C., Cordeiro, M. G., Ortega, E. M. M., and Sarabia, J. M. (2012). Generalized beta-generated distributions. Computational Statistics and Data Analysis 56(6), 1880–1897 (https://doi.org/10.1016/j.csda.2011.11.015).

Arnold, B. C. (1983). Pareto Distributions. International Co-operative Publishing House, Fairland, MD.

Belles-Sampera, J., Merigó, J. M., Guillen, M., and Santolino, M. (2013). The connection between distortion risk measures and ordered weighted averaging operators. Insurance: Mathematics and Economics 52(2), 411–420 (https://doi.org/10.1016/j.insmatheco.2013.02.008).

Belles-Sampera, J., Guillen, M., and Santolino, M. (2014). Beyond value-at-risk: GlueVaR distortion risk measures. Risk Analysis 34, 121–134 (https://doi.org/10.1111/risa.12080).

Belles-Sampera, J., Guillen, M., and Santolino, M. (2016). What attitudes to risk underlie distortion risk measure choices? Insurance: Mathematics and Economics 68, 101–109 (https://doi.org/10.1016/j.insmatheco.2016.02.005).

Choquet, G. (1954). Theory of capacities. Annales de l'Institut Fourier 5, 131–295. URL: http://eudml.org/doc/73714.

Cousin, A., and Di Bernardino, E. (2013). On multivariate extensions of value-at-risk. Journal of Multivariate Analysis 119, 32–46 (https://doi.org/10.1016/j.jmva.2013.03.016).

Cousin, A., and Di Bernardino, E. (2014). On multivariate extensions of conditional-tail-expectation. Insurance: Mathematics and Economics 55, 272–282 (https://doi.org/10.1016/j.insmatheco.2014.01.013).

Denneberg, D. (1990). Distorted probabilities and insurance premiums. Methods of Operations Research 63(3), 3–5.

Denneberg, D. (1994). Non-Additive Measure and Integral. Theory and Decision Library, Volume 27. Springer (https://doi.org/10.1007/978-94-017-2434-0).

Di Bernardino, E., and Palacios-Rodríguez, F. (2017). Estimation of extreme component-wise excess design realization: a hydrological application. Stochastic Environmental Research and Risk Assessment 31(10), 2675–2689 (https://doi.org/10.1007/s00477-017-1387-y).

Embrechts, P., and Puccetti, G. (2006). Bounds for functions of multivariate risks. Journal of Multivariate Analysis 97(2), 526–547 (https://doi.org/10.1016/j.jmva.2005.04.001).

Embrechts, P., Lambrigger, D. D., and Wüthrich, M. V. (2009). Multivariate extremes and the aggregation of dependent risks: examples and counter-examples. Extremes 12, 107–127 (https://doi.org/10.1007/s10687-008-0071-5).


Farlie, D. J. G. (1960). The performance of some correlation coefficients for a general bivariate distribution. Biometrika 47, 307–323 (https://doi.org/10.1093/biomet/47.3-4.307).

Gumbel, E. J. (1960). Bivariate exponential distributions. Journal of the American Statistical Association 55(292), 698–707 (https://doi.org/10.1080/01621459.1960.10483368).

Jones, M. C., and Balakrishnan, N. (2002). How are moments and moments of spacings related to distribution functions? Journal of Statistical Planning and Inference 103(1), 377–390 (https://doi.org/10.1016/S0378-3758(01)00232-4).

Jones, M. C., Arnold, B. C., David, H. A., Kent, J. T., Nagaraja, H. N., Ferreira, J. T. A. S., and Steel, M. F. J. (2004). Families of distributions arising from distributions of order statistics. Test 13, 1–43 (https://doi.org/10.1007/BF02602999).

Mardia, K. V. (1962). Multivariate Pareto distributions. Annals of Mathematical Statistics 33, 1008–1015 (https://doi.org/10.1214/aoms/1177704468).

Morgenstern, D. (1956). Einfache Beispiele zweidimensionaler Verteilungen. Mitteilungsblatt für Mathematische Statistik 8, 234–235.

Pearson, K. (1902). Note on Francis Galton's problem. Biometrika 1, 390–399 (https://doi.org/10.2307/2331627).

Rüschendorf, L. (2006). Law invariant convex risk measures for portfolio vectors. Statistics & Decisions 24, 97–108 (https://doi.org/10.1524/stnd.2006.24.1.97).

Rüschendorf, L. (2013). Mathematical Risk Analysis: Dependence, Risk Bounds, Optimal Allocations and Portfolios. Springer (https://doi.org/10.1007/978-3-642-33590-7).

Salvadori, G., De Michele, C., and Durante, F. (2011). On the return period and design in a multivariate framework. Hydrology and Earth System Sciences 15, 3293–3305 (https://doi.org/10.5194/hess-15-3293-2011).

Sun, E. W., Wang, Y. J., and Yu, M. T. (2017). Integrated portfolio risk measure: estimation and asymptotics of multivariate geometric quantiles. Computational Economics (https://doi.org/10.1007/s10614-017-9708-2).

Wang, S. (1995a). Insurance pricing and increased limits ratemaking by proportional hazards transforms. Insurance: Mathematics and Economics 17(1), 43–54 (https://doi.org/10.1016/0167-6687(95)00010-P).

Wang, S. (1995b). Premium calculation by transforming the layer premium density. ASTIN Bulletin 26, 71–92 (https://doi.org/10.2143/AST.26.1.563234).


Journal of Operational Risk 13(2), 59–81. DOI: 10.21314/JOP.2018.207

Research Paper

An operational risk capital model based on the loss distribution approach

Ruben D. Cohen

Independent Consultant, London, UK; email: [email protected]

(Received June 23, 2017; revised October 26, 2017; accepted January 4, 2018)

ABSTRACT

In this paper, we construct a capital model for operational risk based on the observation that operational losses can, under a certain dimensional transformation, converge into a single, universal distribution, as previously established by Cohen in a 2016 paper. Derivation of the model is accomplished by directly applying the loss distribution approach to the transformed data, yielding a calibratable expression for risk capital. The expression, however, is applicable only to nonconduct losses because it incorporates empirical behaviors that are specific to them. For loss data that falls under the conduct category, this approach may not be valid; in such cases, one may have to resort to a different type of modeling technique.

Keywords: operational risk; capital model; standardized measurement approach (SMA); advanced measurement approach (AMA); loss distribution approach (LDA); method of dimensional analysis and similitude.

1 INTRODUCTION

Operational risk capital modeling was put in limbo after the publication of a consultative paper by the Basel Committee on Banking Supervision (BCBS), outlining a new methodology for capital estimation. In that paper (Basel Committee on Banking

Print ISSN 1744-6740 | Online ISSN 1755-2710. © 2018 Infopro Digital Risk (IP) Limited


Supervision 2016), the BCBS proposed a standardized measurement approach (SMA) to replace the prevailing advanced measurement approach (AMA). A key reason for this proposal was that the AMA had become so irretrievably convoluted, it was felt that a simpler, more transparent capital framework had to be instituted in its place.

A thorough understanding of operational risk, deep enough to warrant a theory that is both sound and widely accepted, must, like the progress of knowledge in science and engineering, evolve through a process that is disciplined, focused and, very importantly, stepwise: each step must be consistent with and constructed on the ones before. Without a clear grasp of the underlying principles, no single entity should simply be given free rein to dictate how a model should be formulated and then demand that each participating enterprise either adopt it (as with the BCBS's SMA) or develop it to its own liking and then run it in any way it pleases (as with the AMA). This is a recipe for disaster, as exemplified by both the AMA and the SMA. Now, in light of the BCBS's recalibrated version of the SMA, we cannot but expect a repeat of the same situation.

To create a robust model for operational risk capital in the absence of any universally recognized theory, one must follow an empirical process. Two things are vital here. The first is to start with simplicity and transparency. One should not complicate a process when there is little knowledge about its inner workings. The AMA, once again, serves as a classic example. Second, it is important to scrutinize the existing literature, together with any available and relevant data. From this, one must try to seek out the fundamental behaviors that have, time after time, demonstrated empirical consistency and repeatability across the spectrum of research. These behaviors should, in essence, constitute the characteristic features that must be satisfied by any model before it can progress to the next level. The above steps are crucial for the development of a model with limited know-how, and it is the objective of this paper to highlight as many of these features as possible. These features will then be joined together analytically into a single framework, before being incorporated into capital modeling.


2 FEATURES SPECIFIC TO OPERATIONAL LOSSES

Operational risk was conceptualized about twenty years ago; since then, a sizable amount of research has been carried out to analyze the field and to help us understand it better. Although one could say there is still no clear understanding of operational risk deep enough to deliver a universally accepted theory, the research that has been conducted so far has led to some important findings, one being that operational losses possess certain empirical attributes that characterize them in a unique way. Below is a list of these attributes, which, over time, have exhibited a high degree of repeatability in the literature. The list may not be exhaustive, but it does contain the most common and well-known characteristics that dominate the behavior of operational loss data.

(a) Operational losses are fat tailed and it is very often difficult to predict them or to set a limit on their severities.

(b) In terms of actual, observed behavior, operational losses seem to fall into two major categories. These are denoted in the literature as "conduct" (or "misconduct") and "nonconduct". Classification of operational risk into two categories is in sharp contrast with the seven event types initially proposed by the BCBS.

(c) Among all the distributions that are available for fitting, there are two that stand out in their ability to epitomize the most prominent features of operational losses. These are the generalized Pareto distribution (GPD) and the lognormal distribution. While the GPD depicts a constant slope in the log–log plane (namely, a Pareto distribution) in the tail, extending all the way to the end, the lognormal distribution exhibits an ever-declining trend.

(d) When losses are plotted in their cumulative form, one may sometimes, but not always, see a rapid drop in the distribution at the far end of the tail.

(e) Correlations of operational loss frequencies and severities – whether taken separately or in aggregate, or measured against macroeconomic factors, across event types, business lines or another formal/informal unit of measure – are largely found to be either weak or dominated by statistical ambiguity. As a result, they tend to be of limited or no use in their contribution to capital modeling and/or the determination of diversification impacts (Nešlehová et al 2006).

In the section that follows, we shall discuss these empirical findings, relating each to some supporting literature, and then explain their roles in the model that we propose to build here.


3 MORE ON THE COMMON EMPIRICAL FEATURES

The pervasiveness of the five empirical properties of operational losses mentioned above, all frequently cited across independent sources, is a sign that operational risk is, in effect, a manifestation of these behaviors. A more thorough discussion and analysis of each of these behaviors is presented below.

3.1 Behavior (a)

One of the most dominant and distinctive features of operational losses, which has been verified in the literature time after time (see, for example, Cruz et al 2015), is that they behave very much like random selections from fat-tailed distributions. This feature is now considered to be so standard that one of the first steps to simulating operational losses in any investigation is to select some representative distribution and then pick points from it at random to portray the losses (see, for instance, Mignola et al (2016) and Peters et al (2016), who use classical simulation techniques to quantify operational risk).

Further to the above, the messages gathered from the literature pertaining to the predictability of operational losses are not so clear-cut. For example, it has been claimed that the magnitude of revenue and assets (that is, the most direct representations of an entity's size and scale) do not display any robust correlation with operational losses (Moosa and Li 2013). However, the SMA proposal document (Basel Committee on Banking Supervision 2016) points to a relationship between a bank's profit and loss, and its loss averages, characterized by the business indicator component (BIC) and the loss component (LC), respectively. The connection between the two has been subsequently quantified by Mignola et al (2016) and presented as a regression relationship for use in simulations.

Thus, the general consensus on operational losses is that they are fat tailed and extremely difficult to predict or to place a limit on. Together, these characteristics have rendered the use of extreme value theory for operational risk analysis an industry standard.

3.2 Behavior (b)

Even though operational risk is formally segregated into seven event types (European Banking Authority 2006), there has been a move lately, especially by banking regulators and supervisors, to regroup them into two categories, namely "conduct" (or "misconduct") and "nonconduct" (European Banking Authority 2016; Prudential Regulation Authority 2017; Bank of England 2017).1 While the conduct category relates exclusively to CPBP plus some special IF loss cases, nonconduct is an amalgamation of all the remaining losses. The reason for this segregation – apart from the fact that the general shapes of the distributions characterizing the two categories can be vastly different – is that conduct-related losses are far more difficult to grasp than nonconduct ones.

1 The seven event types, which shall be denoted by their acronyms throughout this paper, are internal fraud (IF); external fraud (EF); employment practices and workplace safety (EPWS); clients, products and business practice (CPBP); damage to physical assets (DPA); business disruption and systems failures (BDSF); and execution, delivery and process management (EDPM).

In a paper by Cohen (2016), it was demonstrated that expressing the loss distribution of an event type (or, similarly, any specific unit of measure) in dimensionless form as

$$\frac{\bar{w}}{N_0}\frac{\Delta N}{\Delta w} \quad \text{versus} \quad \frac{w}{\bar{w}}$$

and plotting several of these dimensionless distributions on the same graph can help identify the fundamental similarities and differences between their risk profiles. Here, $w$ is the loss severity, $\bar{w}$ the average of the losses in the population of $N_0$ losses, $\Delta w$ the severity bucket size and $\Delta N$ the number of losses that falls inside the bucket $\Delta w$.
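For concreteness, the transformation can be sketched in a few lines of code. The snippet below is an illustration of ours rather than code from Cohen (2016); the logarithmic bucketing scheme and the function name are assumptions, since the paper does not prescribe a bucketing method.

```python
import numpy as np

def dimensionless_distribution(losses, n_buckets=30):
    """Map a vector of positive losses onto the coordinates
    (w_bar/N0)*(Delta-N/Delta-w) versus w/w_bar."""
    losses = np.asarray(losses, dtype=float)
    n0, w_bar = len(losses), losses.mean()
    # Logarithmic severity buckets suit a fat-tailed sample (an assumption).
    edges = np.logspace(np.log10(losses.min()), np.log10(losses.max()),
                        n_buckets + 1)
    counts, _ = np.histogram(losses, bins=edges)  # Delta-N per bucket
    widths = np.diff(edges)                       # Delta-w per bucket
    mids = np.sqrt(edges[:-1] * edges[1:])        # geometric bucket midpoints
    y = (w_bar / n0) * counts / widths            # (w_bar/N0)*(Delta-N/Delta-w)
    x = mids / w_bar                              # w / w_bar
    keep = counts > 0                             # drop empty buckets before log-log plotting
    return x[keep], y[keep]
```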

Leaving the mathematical details aside (because they appear in Cohen (2016)), we note that the paper highlights a number of expected and unexpected results related to operational loss data. One of these is that the notion of a two-category segregation of operational losses, consistent with conduct and nonconduct, is fully supported.

A further empirical outcome of the aforementioned paper (Cohen 2016), which is graphically duplicated here in Figure 1, is that the loss distribution of the merged data can, in general, be expressed by the following relationship:2

$$\frac{\bar{w}}{N_0}\frac{\Delta N}{\Delta w} = \mathrm{fn}\left(\frac{w}{\bar{w}}\right), \qquad (3.1)$$

where $\mathrm{fn}(\cdot)$ represents an as-yet-undetermined function. Cohen (2016) also found that, when placed on the same graph, all the data points, with the exception of CPBP, merge together to produce a tail characterized by a slope of $-2$; this translates into a tail parameter of 1 when plotted as a cumulative distribution. The CPBP data, however, follows a different trend, especially at the tail end.

Excluding the CPBP losses (or conduct losses in general), we may rewrite (3.1) as

$$\frac{\bar{w}}{N_0}\frac{dN}{dw} = -\alpha\left(\frac{w}{\bar{w}}\right)^{-2}, \qquad (3.2)$$

based on empirical observation. Here, $\alpha$ is a constant along the curve and $dN/dw$ depicts $\Delta N/\Delta w$ in the limit of differential calculus. Therefore, (3.2) portrays the distribution curve of the nonconduct losses in dimensionless coordinates.

With regard to SAS industry (or external) data, Figure 2 – where the losses are plotted in the dimensionless frame of reference – conveys a different story.3 Here, in contrast with the internal data displayed in Figure 1, there seems to be no clear distinction between the conduct and nonconduct losses: in this case, CPBP and non-CPBP, respectively. At the same time, while the data displays a downward concavity at the beginning, all results converge onto a single tail, whose behavior is consistent with the functional form given by (3.2). The concavity is presumably caused by the loss disclosure bias (de Fontnouvelle et al 2003).

2 A description of the data in Figure 1 can be found in Cohen (2016).
3 A more detailed description of the data in Figure 2 can be found in Cohen (2016).

FIGURE 1 Dimensional analysis supports the concept of combining the seven event types into two, CPBP and non-CPBP, for this particular global bank. [Log–log plot of $(\bar{w}/N_0)(\Delta N/\Delta w)$ against $w/\bar{w}$ for the event types BDSF, CPBP, DPA, EPWS, EDPM, and EF and IF.] This is consistent with the "conduct" and "nonconduct" event types lately proposed by some regulators. The dashed line has a slope of $-2$ in log–log coordinates; this is equivalent to a tail parameter of 1 when plotted in cumulative form. Graph obtained from Cohen (2016).

3.3 Behavior (c)

Plotting the nonconduct data of Figures 1 and 2 in unison leads to the distributions displayed in Figure 3. It is noted here that, even though the data sets originate from different and totally independent sources, a strong similarity, and even a tight convergence in the dimensionless frame of reference, emerges between the two distributions. This observation is comparable with those in the literature in two ways. First, the distributions of nonconduct losses follow a similar pattern, in line with the consensus view that a narrow range of theoretical curves can support operational loss modeling. Second, the tail behaviors of these distributions could, in many instances, be characterized as Pareto, in this case with a tail parameter that is very close to 1. This is consistent with the findings reported in Nagafuji et al (2011).4

4 The tail parameter of a loss distribution in operational risk characterizes the slope of the tail of the cumulative distribution curve plotted in log–log coordinates.

FIGURE 2 Application of the method of dimensional analysis to SAS industry data. [Log–log plot of $(\bar{w}/N_0)(\Delta N/\Delta w)$ against $w/\bar{w}$ for the event types BDSF, CPBP, DPA, EPWS, EDPM, EF and IF.] The results do not appear to support combining the seven event types into two, as it seems that all loss behaviors converge into a single type. The dashed line has a slope of $-2$ in log–log coordinates, which is equivalent to a tail parameter of 1 when plotted in cumulative form. Graph obtained from Cohen (2016).

FIGURE 3 Combined event types excluding CPBP in the internal data shown in Figure 1 compared with the combined excluding CPBP in the industry data shown in Figure 2. [Log–log plot of the two dimensionless distributions.] The results display a stark similarity in the tail behaviors. The dashed line has a slope of $-2$ in log–log coordinates, which is equivalent to a tail parameter of 1 when plotted in cumulative form. Graph obtained from Cohen (2016).

3.4 Behavior (d)

When plotted as a cumulative distribution, operational losses occasionally exhibit an almost sudden drop as one approaches the tip of the tail. Visual examples of these may be noted in both de Fontnouvelle et al (2007, Figures 10.1 and 10.2) and Nagafuji et al (2011, Figure 1). Such observations have led researchers and practitioners to apply the lognormal distribution to fit the data, because – owing to its nature – this distribution has the ability to capture the drop at the tail end. We show below that this deviation of the tail from Pareto behavior could be caused simply by imposing a maximum limit on the loss severity.

To demonstrate, we begin by rewriting (3.2) as

$$dN = -\alpha\frac{N_0}{\bar{w}}\left(\frac{w}{\bar{w}}\right)^{-2}dw, \qquad (3.3)$$

and then integrate between two arbitrary loss limits of $w_1$ and $w_2$, that is,

$$\int_{N_1}^{N_2} dN = -\alpha N_0\bar{w}\int_{w_1}^{w_2} w^{-2}\,dw. \qquad (3.4)$$

This reduces to

$$N_1 - N_2 = \alpha N_0\bar{w}\left(\frac{1}{w_1} - \frac{1}{w_2}\right), \qquad (3.5)$$

where $N_0$ is the number of losses equal to or above the minimum loss threshold, and $N_1$ and $N_2$ denote the number of losses of size equal to or above $w_1$ and $w_2$, respectively.

For a distribution that is limited by a largest single loss of size, say, $w_{\max}$, the number of losses equal to or above $w_{\max}$ would, by definition, be equal to 1, implying that there is a single loss at $w_{\max}$ with nothing else above it. Inserting this into (3.5) and simplifying yields

$$\frac{N}{N_0} - \frac{1}{N_0} = \alpha\left(\frac{\bar{w}}{w} - \frac{\bar{w}}{w_{\max}}\right). \qquad (3.6)$$

The maximum loss, $w_{\max}$, introduced here is in fact very real, as it manifests itself in each and every actual distribution. While theoretical distributions may extend to infinity, real-world distributions are always bounded by a largest loss. Even in the case of the AMA, the 99.9% confidence level implies the existence of a largest loss. Thus, the inclusion of $w_{\max}$ not only serves a practical purpose, but also helps avoid the consequences of the infinite mean when dealing with tail parameters of 1, as this model does.

We now return to (3.3). By multiplying both sides of (3.3) by $w$, incorporating the negative sign and integrating between the limits of the loss threshold, $w_{th}$, and the maximum loss, $w_{\max}$, we obtain

$$\int_0^{N_0} w\,dN = \alpha N_0\bar{w}\int_{w_{th}}^{w_{\max}} w^{-1}\,dw = \alpha N_0\bar{w}\,\ln w\,\Big|_{w_{th}}^{w_{\max}}. \qquad (3.7)$$

Recognizing that the left-hand side of (3.7) is the total sum of the losses in the population, that is, $N_0\bar{w}$, we can solve the above and simplify to

$$1 = \alpha\ln\left(\frac{w_{\max}}{w_{th}}\right). \qquad (3.8)$$

Finally, combining (3.6) and (3.8) to eliminate the unknown constant $\alpha$ yields the expression for the cumulative distribution, which is characterized by tail parameter 1 and bounded by $w_{th}$ and $w_{\max}$. This is given below in terms of $N$ versus $w$:

$$N = 1 + N_0\,\frac{(\bar{w}/w) - (\bar{w}/w_{\max})}{\ln[w_{\max}/w_{th}]}. \qquad (3.9)$$

Parts (a) and (b) of Figure 4 are provided to illustrate some of the relevant features associated with (3.9). Figure 4(a) depicts the probability distribution, while Figure 4(b) shows the cumulative distribution of a simulated set of losses. The dashed lines in Figure 4 are not fitted curves. They illustrate the results of (3.2) and (3.9) for both the probability and cumulative distributions, respectively; the triangles represent the simulation outputs. The simulation consists of $N_0 = 1000$ random selections from a Pareto distribution with tail parameter 1, a loss threshold $w_{th} = 1$ and subjected to a maximum loss limit of $w_{\max} = 100$.
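A simulation of this kind is easy to reproduce. The sketch below is our own illustration, not the paper's code; since the sampling scheme is not spelled out, inverse-transform sampling with simple truncation at $w_{\max}$ is an assumption.

```python
import numpy as np

rng = np.random.default_rng(0)
n0, w_th, w_max = 1000, 1.0, 100.0

# Pareto with tail parameter 1: P(W >= w) = w_th/w. Inverse-transform
# sampling gives w = w_th/u; restricting u to [w_th/w_max, 1) truncates
# the draws to the interval [w_th, w_max].
u = rng.uniform(w_th / w_max, 1.0, size=n0)
losses = w_th / u

# Empirical cumulative count: N(w) = number of losses of size >= w.
w_grid = np.logspace(0.0, 2.0, 50)
n_empirical = np.array([(losses >= w).sum() for w in w_grid])

# Model curve of equation (3.9), with w_bar estimated from the sample.
w_bar = losses.mean()
n_model = 1.0 + n0 * (w_bar / w_grid - w_bar / w_max) / np.log(w_max / w_th)
```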

Two important features stand out from Figure 4. First, both (3.2) and (3.9) can accurately represent the simulation outputs without even a single fitting parameter.


FIGURE 4 Probability and cumulative functions shown together with simulation results. [Two log–log panels: (a) the dimensionless probability distribution; (b) the cumulative count $N$ versus $w$.] (a) The probability function representing a Pareto distribution with tail parameter 1. The dashed line going through the points, which has a slope of $-2$, signifies the behavior of (3.2); the triangles are the result of a simulation. (b) The cumulative function representing a Pareto distribution with a tail parameter of 1 and capped by a maximum loss, $w_{\max}$, of 100. The dashed line is not a fitted curve. It portrays the behavior of (3.9); the triangles indicate the results of the simulation. The drop at the end is caused by the limit imposed on the maximum loss.

Second, what appears to be a deviation from a constant tail parameter into a rather sudden drop at the tail end of the cumulative distribution curve in Figure 4(b) – frequently observed when plotting actual operational loss data in cumulative distributions – turns out not to be an indication of a change in distribution from Pareto to lognormal at some point along the curve.5 This feature is caused by placing an upper bound on the losses. The distribution remains Pareto all the way, as confirmed in Figure 4(a) by a probability distribution that displays a constant slope throughout.

5 The concave-down feature in Figure 4(b) is caused by setting $w_{\max} < w_{th} n_0$. The downward concavity could be reversed to upward by simply setting $w_{\max} > w_{th} n_0$ and still maintain the Pareto probability distribution.

In reference to similar empirical observations in the operational risk literature, this could be a sign that something is acting to limit the maximum loss severity. The search for the source of such a limitation constitutes one of the key challenges in operational risk capital modeling, and finding out its root cause could potentially be a starting point toward developing a theory of operational risk.

3.5 Behavior (e)

Investigations into correlations of both loss frequency and severity in operational risk abound in the literature. These generally fit into two distinct categories: correlations between losses and macroeconomic factors, and loss correlations across different units of measure (UoMs). Within a single institution, these UoMs might consist of event types, business lines or any combination thereof. Thus far, the overall outcome in nearly all of these studies has remained inconclusive, which, in almost all circumstances, has been attributed to insufficient data.

The application of correlations between operational losses and macroeconomic factors has found a special niche in stress testing, where it is generally carried out via regressions (Abdymomunov 2014). Criticism of such implementations, however, has been raised, dwelling on issues such as data scarcity, truncation of loss databases, diverse loss dates, resolution times and more (Cruz et al 2015). The lack of a robust relationship between operational losses and the macroeconomy has also been noted in Curti et al (2016).

Similarly, operational loss correlations across different UoMs have generally been found to be either weak or inconclusive. Examples of these can be seen in Cope and Antonini (2008) and Groenewald (2014), with the latter emphasizing the "pragmatic" approach of setting the cross-UoM correlations artificially at specified levels, such as 0, 0.25, 0.50, 0.75 and 1.00. This practice has, in effect, been informally endorsed by regulators to help overcome the difficulties with implementing correlations in AMA models.

In view of the above – and regardless of whether the lack of correlations in operational risk is caused by data scarcity, plain random behavior or both – the final verdict is easy to pin down: that is, although correlations may have a theoretical role in operational risk modeling, there is not much one can do with them when it comes to practical application.

Since a major focus of this work is on practicality and objectivity, it is preferable to leave out the effects of loss correlations from here. Bringing in correlations extraneously would mean adding unknown, indeterminable parameters in their dozens, or even hundreds or thousands, all based on hand-waving argument and speculation. As this has already proven fatal to the AMA, we find it best to outright exclude them from this paper.

Further, the empirical observation that operational losses converge to a tail parameter of 1 can have the effect of making capital calculations immune to correlation, as will be shown later. Thus, by its very nature, the model that is developed in the next section will have no need for correlation to be entered exogenously.

4 AN OPERATIONAL RISK CAPITAL MODEL

Capital modeling in operational risk has always presented significant and overwhelming challenges. These challenges emerge from numerous sources and include issues such as inappropriate capital volatilities and sensitivities as well as inconclusive messages in the literature, all thanks to data scarcity and the lack of a deep understanding of operational risk itself. Such mixed results create confusion, making it particularly difficult for the practitioner to decide which findings to believe and which not to believe, as evidenced by the subjectivity and the numerous off-the-cuff curve fits that have typically made their way into previous AMA models. However, empirical findings on loss behaviors that are objective and repeatedly observed across the literature add credibility; hence, their presence in a model is crucial.

The operational loss model developed here and summarized by (3.9) displays many of the common but important empirical attributes frequently noted in the literature. These attributes include, for instance, tails that are both fat and Pareto-like in shape, together with the ability to capture the frequently observed decline at the tail end of a cumulative loss distribution, a behavior that could easily be mistaken for lognormal. Also, as will be shown later, the model renders the role of correlation irrelevant by eliminating the need for it to be entered exogenously. Now, with the loss distribution model rooted in (3.9), we move on to develop the capital model itself.

Development of the capital model will follow the methodology and conclusions outlined in the previous section. We begin by writing (3.5) in the following form:

$$\frac{N_1 - N_2}{(1/w_1) - (1/w_2)} = \alpha N_0\bar{w}. \qquad (4.1)$$

This is valid between any two arbitrary points, $w_1$ and $w_2$, along the distribution, with its right-hand side (ie, $\alpha N_0\bar{w}$) remaining constant along the entire curve. Since $w_1$ and $w_2$ are arbitrary, we can eliminate the right-hand side of the equation in the following way. Select three points on the curve, that is, $N(w_{th}) = N_0$, $N(w_{\max}) = 1$ and $N(w \to \infty) = 0$, and then substitute them into (4.1) and simplify. This gives

$$\frac{N_0 - 1}{(1/w_{th}) - (1/w_{\max})} = w_{\max}, \qquad (4.2)$$

from which the single largest loss, $w_{\max}$, may be obtained as

$$w_{\max} = w_{th} N_0. \qquad (4.3)$$

The procedure outlined in (4.1)–(4.3) can be repeated for any arbitrary loss along the curve, that is, $w_1, w_2, \ldots,$ thus leading to

$$w_1 N_1 = w_2 N_2 = \cdots = w_{th} N_0 = w_{\max}. \qquad (4.4)$$

The linearity of (4.4) comes with the important implication that the single largest loss, $w_{\max}$, within a combination of UoMs is independent of the choice of threshold. Although this might be considered obvious due to the nature of the Pareto distribution with tail parameter 1, it still has a profound impact on the development of the risk capital model that follows next.
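A quick numerical check of (4.4) is given below; it is an illustration of ours, with arbitrary parameters. Along a Pareto sample with tail parameter 1, the product $w\,N(w)$ hovers around $w_{th} N_0$, the model's $w_{\max}$ from (4.3), whatever threshold $w$ is chosen.

```python
import numpy as np

rng = np.random.default_rng(2)
n_pop, w_th = 1000, 1.0
losses = w_th / rng.uniform(size=n_pop)  # Pareto sample, tail parameter 1

# w * N(w) stays roughly constant, near w_th * N0 = 1000, which is the
# model's w_max per equation (4.3); fluctuations grow as N(w) shrinks.
for w in (2.0, 5.0, 20.0, 50.0):
    n_w = (losses >= w).sum()
    print(f"w = {w:5.1f}:  w * N(w) = {w * n_w:7.1f}")
```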

It is important to note that, up to this point, all derivations have been based on a population of operational loss data. This population is limited by a minimum loss threshold of $w_{th}$ and has a total count, $N_0$, of losses equal to or above $w_{th}$, with $N_0$ not necessarily representing an annual loss count.

Following conventional capital modeling protocols, we revert to the annual loss count (or "frequency", as it is known in operational risk lingo) and introduce $n_0$ as the frequency of losses with severities equal to or above some loss threshold, $w_{th}$. With the frequency $n_0$ defined, all the equations derived so far (from (3.1) to (4.4)) will apply in their exact original forms, except that we need to substitute the annual counts ($n_0$, $n_1$, $n_2$, etc) for their population counterparts ($N_0$, $N_1$, $N_2$, etc), where $n_0$, $n_1$, $n_2$, etc, represent the frequency of losses equal to or above their respective loss thresholds of $w_{th}$, $w_1$, $w_2$, etc.6 It therefore turns out that the total event count in 1000 years becomes simply $1000 n_0$. Hence, with the help of (4.3), the largest single loss in 1000 years becomes

$$w_{\max} = 1000\,w_{th} n_0. \qquad (4.5)$$

A generalization of (4.5) would be to incorporate the confidence level of the distribution, denoted here by $c_L$. For the case of (4.5), therefore, one obtains

$$w_{\max} = \frac{w_{th} n_0}{1 - c_L}, \qquad (4.6)$$

which is consistent with the literature (see Nagafuji et al (2011) and Degen (2010) for general representations of the single largest loss). With the reference $c_L$ being set at 99.9% (although this number has always been a subject of debate in AMA implementation), one could then recover (4.5).

A crucial feature of (4.6) is that, by virtue of (4.4), the single loss approximation at $c_L$ becomes independent of the arbitrarily selected loss threshold. The implication of this result for risk capital modeling is significant because it eliminates the problem of capital sensitivity to the loss threshold, a recognized and well-documented issue in the more practical aspects of capital modeling.

The insensitivity of capital to the loss threshold is a consequence of applying the Pareto distribution with tail parameter 1 across the entire length of the curve, from $w_{th}$ to $w_{\max}$. In relation to capital modeling, this means that only data that falls in the tail portion of the loss distribution probability – that is, beyond the point where the probability curve assumes a constant slope in the log–log coordinate system – can be used. An example of this may be seen in Figures 2 and 3, where, for the external data, the constant tail parameter of 1 comes into effect at $w/\bar{w} \simeq 2$. Thus, only the loss data that falls above and beyond this point may be used; any data that falls below must be discarded, since its inclusion would violate the validity of (4.6). The advantage of this process is the elimination of capital sensitivity to the loss threshold, while the disadvantage is the loss of valuable data.

6 The annual frequency, $n_0$, also contains uncertainties that are generally modeled to follow a certain distribution, such as the Poisson or the negative binomial. Dealing with this, however, is out of the scope of this work and will not therefore be covered here.

Equation (4.6) is the single largest loss approximation, which represents a reasonably close estimate of the capital, but not all of it. A more suitable depiction of capital is through the total sum of the losses incurred by a bank in a single year (ie, the annual loss aggregate), which we denote here by $s$. In practice, $s$ is generally determined by simulation procedures, where a number of losses equal to the average annual count, $n_0$, are selected randomly from an underlying loss distribution and then summed up. Repeating this process many times leads to a distribution of the annual loss aggregate, whose value at the confidence level, $c_L$, should reflect a more accurate capital estimate than the single loss approximation.
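That simulation procedure is straightforward to sketch. The minimal example below is ours, with an assumed truncated Pareto severity and a fixed annual count matching the description above; the parameter values are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(3)

def annual_aggregate_quantile(n0, w_th, w_max, c_l, trials=100_000):
    # Each trial: draw n0 losses from a Pareto(tail=1) severity truncated
    # at w_max, and sum them into one annual aggregate s.
    u = rng.uniform(w_th / w_max, 1.0, size=(trials, n0))
    totals = (w_th / u).sum(axis=1)
    # Capital candidate: the c_L quantile of the aggregate distribution.
    return np.quantile(totals, c_l)

# Compare with the single loss approximation w_th * n0 / (1 - c_L) = 10,000.
print(annual_aggregate_quantile(n0=10, w_th=1.0, w_max=100_000.0, c_l=0.999))
```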

However, for a Pareto distribution with tail parameter 1, it is known that the largest loss in a population would still provide an accurate representation of the largest total sum of, say, $n_0$ annual losses sampled from within the population. To demonstrate, we took $n_0$ random picks from a population of 10 000 points that describe a Pareto distribution of tail parameter 1 and threshold $w_{th} = 1$. We then summed them up to obtain the total annual loss. After repeating this procedure several thousand times, the largest sum of the $n_0$ random picks, denoted here by $s_{\max}$, was singled out and compared with the largest loss in the population of 10 000.
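A sketch of that experiment follows; it is our reconstruction, and the exact number of repetitions and the resampling scheme are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
population = 1.0 / rng.uniform(size=10_000)  # Pareto, tail parameter 1, w_th = 1
w_max = population.max()                     # largest single loss in the population

def ratio_wmax_to_smax(n0, trials=50_000):
    # s_max: the largest of many simulated annual aggregates of n0 picks.
    sums = rng.choice(population, size=(trials, n0)).sum(axis=1)
    return w_max / sums.max()

for n0 in (1, 5, 10, 20):
    print(n0, round(ratio_wmax_to_smax(n0), 3))  # each ratio should land near 1
```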

The comparison is portrayed in Figure 5 as the ratio $w_{\max}/s_{\max}$ versus the annual frequency, $n_0$. The nearness of this ratio to unity, which is seen in the graph to lie within a couple or so percentage points at most, is testament to the proximity of $s_{\max}$ and $w_{\max}$ to one another. This result is well known in the context of operational risk capital modeling and has been thoroughly explored by a number of investigators (see, for example, Degen (2010) and Griffiths and Mnif (2017), and the references contained therein).

FIGURE 5 The ratio $w_{\max}/s_{\max}$ plotted as a function of sample size $n_0$. [The ratio stays between roughly 0.9 and 1.1 for $n_0$ from 0 to 20.] A graph illustrating how accurately the largest loss, $w_{\max}$, in a population of 10 000 data points characterized by a Pareto distribution with tail parameter 1, could represent the largest total sum of the losses in a sample of size $n_0$, picked randomly from the population.

The outcome displayed in Figure 5 effectively shows that the capital, $X$, of an entity (which is, by definition, $s_{\max}$ evaluated at confidence level $c_L$) may be represented simply by (4.6). This therefore gives

$$X = \frac{w_{th} n_0}{1 - c_L} \qquad (4.7)$$

as the capital for an entity that generates a frequency, $n_0$, of losses, each equal to or above the selected loss threshold of $w_{th}$.
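Equation (4.7) reduces capital estimation to one line of arithmetic. A worked illustration with made-up numbers:

```python
def lda_capital(w_th, n0, c_l):
    """Capital X = w_th * n0 / (1 - c_L), per equation (4.7)."""
    return w_th * n0 / (1.0 - c_l)

# Hypothetical bank: 12 losses per year at or above a EUR 1 million
# threshold, evaluated at the 99.9% confidence level.
print(lda_capital(w_th=1.0, n0=12, c_l=0.999))  # 12 / 0.001 = EUR 12,000 million
```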

The fact that (4.7) is linear in $w_{th} n_0$ suggests further that the capital is linearly additive across all the UoMs, that is, with the exception of conduct losses, where a deviation from tail parameter 1 may be observed, as indicated by the data shown in Figure 1. Thus, with a tail parameter of 1, the capital, $X$, which may be apportioned to nonconduct risk, becomes expressible as

$$X = X_A + X_B + X_C + \cdots, \qquad (4.8)$$

where $X_A$, $X_B$, $X_C$, etc, are the capital amounts allocated to units of measure $A$, $B$, $C$, etc, respectively. This is because

$$w_{th} n_0 = w_{th} n_{0A} + w_{th} n_{0B} + w_{th} n_{0C} + \cdots, \qquad (4.9)$$

from which the total annual loss count at the entity level, $n_0$, can be written as

$$n_0 = n_{0A} + n_{0B} + n_{0C} + \cdots, \qquad (4.10)$$

with $n_{0A}$, $n_{0B}$, $n_{0C}$, etc, representing the annual loss counts generated by, or allocated to, the units $A$, $B$, $C$, etc, respectively, and measured from the same threshold. Note that the linear additivity of capital across the UoMs, as portrayed in (4.8), is valid even when the thresholds are not the same, based on (4.4) and the arguments that follow it.
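The additivity chain (4.8)–(4.10) amounts to the check below, run on hypothetical unit frequencies: bottom-up and top-down capital agree exactly.

```python
w_th, c_l = 1.0, 0.999
unit_frequencies = {"A": 3.0, "B": 5.0, "C": 4.0}  # hypothetical annual counts

# Bottom up: capital per unit of measure via (4.7), then summed as in (4.8).
bottom_up = sum(w_th * n / (1.0 - c_l) for n in unit_frequencies.values())

# Top down: entity-level frequency via (4.10), then (4.7) applied once.
top_down = w_th * sum(unit_frequencies.values()) / (1.0 - c_l)

assert abs(bottom_up - top_down) < 1e-6  # linear additivity, no diversification term
```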

Linear additivity of capital across the different UoMs, exclusive of conduct losses, means capital estimation and allocation may be performed top down as well as bottom up, with both leading to the same answer. It also means that there is no superadditivity to cause the conceptual issues that plague the SMA (Peters et al 2016). Likewise, diversification effects disappear because the capital is linearly additive across the UoMs, implying that frequency correlations are intrinsically equal to 1; thus, there is no need to bring them in extraneously. This, of course, greatly facilitates the capital estimation process for nonconduct risk, as it enables one to steer clear of correlation, along with the baggage that comes with it. Last but not least, since a correlation of one is regarded as being the most conservative, it would be difficult to argue against incorporating it into any capital model.

In summary, (4.7) comes with the advantage that it satisfies all the empirical characteristics mentioned in Section 2, as each has been embedded in its derivation. Moreover, as reasoned earlier, (4.7) comes with several added benefits, rendering capital independent of the selected threshold and linearly additive across the different UoMs, with the latter eliminating the need for correlations to be brought in from the outside.

5 CALIBRATION AND BENCHMARKING

The capital model derived here and represented by (4.7) has embedded in it the empirical attributes of operational losses listed in Section 2. Moreover, the model is not burdened with the same issues as the AMA and the SMA: namely, the complexities of the former and the undue capital volatilities and lack of sensitivity of the latter. Its calibration is also straightforward, as it involves only the confidence level $c_L$.

The model could be benchmarked and calibrated against either actual bank capital numbers or another capital model; ideally, both would be used. However, since actual bank capital numbers are not available to us at the time of writing, we aim to benchmark against the SMA, granted not the revised (Basel Committee on Banking Supervision 2017a,b) but its predecessor (Basel Committee on Banking Supervision 2016). The reason for this is that a practical simulation methodology is readily available for the 2016 model, which can be implemented with (a) the assumption that the two versions of the SMA would lead to more or less consistent trends due to the similar nature of their underlying equations, and (b) the recognition that benchmarking against the new standardized approach may be carried out once a simulation procedure is formulated for it – something that currently lies beyond the scope of this work. Another useful exercise is to benchmark the model against the capital of some hypothetical bank to help illustrate a potential calibration and implementation approach.

Next, we commence benchmarking against the original SMA by following the simulation procedure described in Mignola et al (2016) and utilizing their BIC-versus-LC relationship. Benchmarking against the AMA is ruled out for the simple reason that it is not possible (Cohen 2017).

As regards the currency, we note that the SMA simulation methodology is pegged to the euro, owing to the nonlinear relationship between the BIC and the LC. Hence, the comparison conducted here against the SMA is also based on the euro. Extending this to another currency should not pose an issue as long as the coefficients in the BIC–LC relationship devised by Mignola et al (2016), or any other simulation procedure designed for the new SMA, are adjusted to accommodate the new currency (this is not within the scope of the present paper).

FIGURE 6 Simulated SMA capital output plotted against the annual frequency of losses greater than or equal to €1 million. [Log–log scatter of SMA capital (€ million) versus annual loss frequency (≥ €1 million), with diagonal lines for $c_L$ = 99.00%, 99.75% and 99.90%.] Simulations are based on a Pareto distribution with tail parameter 1 with no maximum loss limit imposed.

Capital figures were generated from simulations and are plotted in Figure 6 as SMA capital versus annual loss frequency greater than or equal to €1 million. The graph contains over 300 simulation points, each based on a Pareto loss distribution with tail parameter 1, consistent with empirical observation and subjected to a loss collection threshold of €0.01 million with no upper loss limit imposed. Together with the SMA simulations, three diagonal lines are shown, depicting capital based on the loss distribution approach (LDA) and calculated via (4.7). The top line represents $c_L = 99.90\%$, the middle 99.75% and the bottom 99.00%.

The frequency range in Figure 6 is from 0.1 to slightly over 100, covering small banks (with 0.1 to 1 annual loss counts greater than or equal to €1 million), medium banks (with 1 to 10 annual loss counts greater than or equal to €1 million) and large banks (with 10 to 100 annual loss counts greater than or equal to €1 million). The characterization of size by annual loss count is introduced here to help differentiate between banks according to their operational loss frequencies.

A considerable amount of scatter in capital relative to loss frequency is noted in Figure 6. This is related to the volatility issue that is known to be associated with the SMA (see Cohen (2017) and the references therein). In addition, the capital-versus-frequency data appears in this graph as a band of points lying parallel to the lines representing (4.7). This parallel feature is believed to be due to the Pareto distribution with tail parameter 1 underlying all simulations.

Visually and without relying on detailed statistics, three observations are worth noting. First, the width of the band of data points, which is roughly one order of magnitude at its widest, appears to narrow down with increasing frequency. This suggests that the SMA capital's volatility is higher for smaller banks. Second, the distribution of the scattered points varies vertically, with the bottom being denser than the top. This is a reflection of how data density typically varies along a fat-tailed distribution; that is, there are more data points between the severity intervals of, say, US$100 000 and US$200 000 than between US$1 100 000 and US$1 200 000. Third, the vast majority of data points lie above the 99% confidence level of the capital determined by (4.7). This means that, subject to the simulation conditions applied here, the use of an LDA-based capital model with anything equal to or less than a 99% confidence level would be disastrous, resulting in most of the SMA capital simulations breaching the LDA capital.

Along the same line of reasoning, the above approach could also be used to quantify the notion of risk appetite. Risk appetite depends on certain properties of an institution, presumably size, lines of business and culture, among others. Although this concept is applied widely in risk management to portray a bank's "comfort zone", there seems to be no formal definition for it; hence, it is determined subjectively. The proposed method of adjusting the confidence level in (4.7) to control the number of breaches (as shown in Figure 6) could help to add an objective angle to the assessment of risk appetite.

Figure 7 is added to illustrate the impact of confidence level on the percentage of breaches. The sensitivity of the latter to the former is shown to be quite strong, with about 80.00% of the SMA data points breaching the $c_L = 99.00\%$ line. This number falls rapidly to 5.50% at the 99.75% level and less than 0.50% at the 99.90% level.7 The percent breaches reported here are, of course, subject to statistical noise, but this is expected to decrease with increasing sample size. Nevertheless, the numbers shown here provide a rough idea of the magnitudes involved and the methodology could, as indicated above, suggest a new way for us to measure risk appetite.

7 The differences between the percentage breaches by capital amount and by the number of data points are quite small.
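Computing such breach rates is a short exercise once the simulated points are at hand. The sketch below is hedged: the array names and inputs are hypothetical, and the "by capital" measure is one plausible reading of footnote 7.

```python
import numpy as np

def breach_rates(sma_capital, freqs, w_th, c_l):
    """Fraction of simulated banks whose SMA capital exceeds the LDA line (4.7)."""
    sma_capital = np.asarray(sma_capital, dtype=float)
    freqs = np.asarray(freqs, dtype=float)
    lda_line = w_th * freqs / (1.0 - c_l)      # LDA capital per simulated bank
    breached = sma_capital > lda_line
    by_points = breached.mean()                # breach rate by number of data points
    by_capital = sma_capital[breached].sum() / sma_capital.sum()  # by capital amount
    return by_points, by_capital
```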


FIGURE 7 Percentage of simulated SMA capital breaching different confidence levels. [Percent breach (%) versus confidence level (%), shown both by data points and by capital.] Breaches are shown in terms of the percentage of the capital amount and the percentage of the number of data points, both lying close to one another.

Implementation of the model is also straightforward, as it involves only one adjustable parameter: the confidence level. Figure 8 provides an illustrative example of this, where the capital of a hypothetical bank is plotted by event type (though this could also be done by UoM) against the US$1 million frequency of operational losses. Along with the capital numbers, the graph contains three lines representing (4.7), with $c_L$ equal to 99.50%, 99.75% and 99.90%. This example demonstrates that the nonconduct portion (all losses excluding CPBP, in this case) of this bank's operational risk capital, individually and collectively, is sufficiently covered by a 99.50% confidence level, while the conduct portion, CPBP, breaches the 99.90% level. Also, the entirety of the bank's capital, including both conduct and nonconduct, is shown to be safely covered by a 99.75% confidence level.

In the same manner, a universal calibration of the model may be achieved by placing in Figure 8 the actual capital numbers of a sample of banks, distinguishing between conduct and nonconduct for each, and then adjusting the confidence level accordingly to confine the number of breaches to some acceptable level. Unlike the AMA's 99.90%, which comes with no prior justification, the confidence level derived in this way will at least have the empirical support to substantiate its case.

FIGURE 8 Capital of a hypothetical bank plotted by event type against the loss frequency. [Log–log plot of capital (US$ million) versus loss frequency (≥ US$1 million) for the event types BDSF, DPA, EPWS, EDPM, EF + IF and CPBP, together with "all bank" and "all bank ex-CPBP" aggregates and lines for $c_L$ = 99.50%, 99.75% and 99.90%.]

We now return to external data, the second of the four elements of the AMA. Anyone with any involvement in the practical implementation of the AMA can testify to the extreme difficulties encountered when trying to incorporate external data in this model. The reason for this is that no specific instructions were laid out in the beginning on how external data should be incorporated – again due to the lack of a proper understanding of operational risk – and, thus, in the name of "flexibility", it was all left to the model developers to decide how best to approach this, and for the regulators to review and either reject or approve their attempts.

Subsequently, external data was added to the AMA in a number of different ways. While some model developers simply mixed both internal and external data, and ran their curve fitting exercises, others scaled the data before combining and then performing the curve fits. Others, alternatively, curve fitted both sets of data separately and, based on certain subjective/statistical arguments, incorporated a weighted-average combination of the two distributions in the capital calculation. Faced with a growing list of different approaches, the question of how to incorporate external data in the framework became a prime unanswered puzzle among users of the AMA.

As a possible solution to integrating external data into operational risk capital modeling, we refer to Figure 3, which demonstrates that external data may also be characterized by a Pareto distribution with tail parameter 1. The presence of these properties in the capital model proposed in (4.7) thereby suggests that external data acquires a role in it as well. In addition, the determination of the confidence level, as described in reference to Figure 8, utilizes capital data belonging to different banks. One could say then that (4.7), together with its empirically determined confidence level, employs both internal and external data, and thus embodies the two data elements of the AMA.

6 CONCLUSIONS

There is no doubt that operational risk is a highly complex area, proven to be very difficult to understand and to model. The strongest testament to this is the fact that, with over ten years of AMA modeling, backed up by massive investment and fully dedicated resources provided by many major global institutions, there is still so little to show in the way of a robust theoretical foundation. A further testament is the return of the BCBS after twenty months of deliberation, with a model not in the form of a novel and less controversial approach, but merely in a brushed-up and recalibrated format of the original SMA.

Nevertheless, what the related literature collectively reveals is that operational loss data tends to display certain empirical features, making it unique among all other types of risk. These features, all of which are supported by repeated observations, include fat tails, a lack of robust correlations and the potential for classification into two differentiable categories.

This work builds on these properties by incorporating a further empirical observation that nondimensionalized operational loss data belonging to nonconduct events converges along a universal tail behavior with tail parameter 1. The construction of a capital model for nonconduct operational risk using the LDA then follows naturally and comprises the main focus of this work.

The capital model that emerges here is specific to nonconduct losses. It is dependent on loss counts rather than severities, and independent of the loss threshold. Besides, since the effect of correlation (of one) is readily built into the model, there is no need to bring it in from the outside. Therefore, the only adjustable parameter that remains is the confidence level, which may be used for calibration purposes as well as the quantification of risk appetite.

Another important feature of the model is that the entity's capital for nonconduct risk may be evaluated in either direction, top down or bottom up, with both delivering the same answer. Further, the model displays neither superadditive nor subadditive properties, that is, properties that are frequently associated with conceptual and practical issues.

As regards external data, we note that it too is included in the model, as its features are embedded in both the underlying distribution as well as the empirically determined confidence level. The model is also accessible to smaller banks, which generally struggle with issues related to data scarcity. In such cases, the banks would require only estimates of their annual loss frequencies to compute their operational risk capital.


Finally, with no theory yet available (nor any in sight) to support operational risk, it is best to begin with a simple and transparent model for capital estimation. Simplicity aside, the model should at least be able to manifest many of the empirical features that operational loss data tends to display continually and consistently. The model proposed here achieves this for nonconduct operational risk. However, capital modeling for conduct-related operational risk, which is believed to present a much bigger challenge, is expected to be governed by a completely different approach.

DECLARATION OF INTEREST

The author reports no conflicts of interest. The author alone is responsible for the content and writing of the paper.

REFERENCES

Abdymomunov, A. (2014). Banking sector operational losses and macroeconomic environment. Report, March, Federal Reserve Bank of Richmond.

Bank of England (2017). Stress testing the UK banking system: 2017 guidance for participating banks and building societies. Report, March, Bank of England.

Basel Committee on Banking Supervision (2016). Standardised measurement approach for operational risk. Consultative Document, March, Bank for International Settlements.

Basel Committee on Banking Supervision (2017a). Basel III: finalising post-crisis reforms. Report, December, Bank for International Settlements.

Basel Committee on Banking Supervision (2017b). High-level summary of Basel III reforms. Report, December, Bank for International Settlements.

Cohen, R. D. (2016). An assessment of operational loss data and its implications for risk capital modeling. The Journal of Operational Risk 11(3), 71–95 (https://doi.org/10.21314/JOP.2016.178).

Cohen, R. D. (2017). The issues with the standard measurement approach and a potential future direction for operational risk capital modeling. The Journal of Operational Risk 12(3), 1–12 (https://doi.org/10.21314/JOP.2017.203).

Cope, E., and Antonini, G. (2008). Observed correlations and dependencies among operational losses in the ORX consortium database. The Journal of Operational Risk 3(4), 47–74 (https://doi.org/10.21314/JOP.2008.052).

Cruz, M. G., Peters, G. W., and Shevchenko, P. V. (2015). Fundamental aspects of operational risk and insurance analytics. In Handbook of Operational Risk. Wiley.

Curti, F., Ergen, I., Le, M., Migues, M., and Stewart, R. (2016). Benchmarking operational risk models. Working Paper, Social Science Research Network (https://doi.org/10.2139/ssrn.2741179).

de Fontnouvelle, P., DeJesus-Rueff, V., Jordan, J., and Rosengren, E. (2003). Using loss data to quantify operational risk. Technical Report, Federal Reserve Bank of Boston (https://doi.org/10.2139/ssrn.395083).

de Fontnouvelle, P., Rosengren, E. S., and Jordan, J. S. (2007). Implications of alternative operational risk modeling techniques. In Risks of Financial Institutions, Carey, M., and Stulz, R. M. (eds), Chapter 10, pp. 475–511. University of Chicago Press.

Degen, M. (2010). The calculation of minimum regulatory capital using single-loss approximations. The Journal of Operational Risk 5(4), 3–17 (https://doi.org/10.21314/JOP.2010.084).

European Banking Authority (2006). Article 324: capital requirements regulation. In Interactive Single Rule Book, Chapter 4. EBA, London.

European Banking Authority (2016). EU-wide stress test: methodological note. Report, February, EBA.

Griffiths, R., and Mnif, W. (2017). Various approximations of the total aggregate loss quantile function with application to operational risk. The Journal of Operational Risk 12(2), 23–46 (https://doi.org/10.21314/JOP.2017.191).

Groenewald, A. P. (2014). Practical methods of modelling operational risk. Lecture, Actuarial Society of South Africa's 2014 Convention, October 22–23, Cape Town International Convention Centre.

McConnell, P. (2017). Standardized measurement approach: is comparability attainable? The Journal of Operational Risk 18(1), 71–110 (https://doi.org/10.21314/JOP.2017.194).

Mignola, G., Ugoccioni, R., and Cope, E. (2016). Comments on the BCBS proposal for a new standardized approach for operational risk. The Journal of Operational Risk 11(3), 51–69 (https://doi.org/10.21314/JOP.2016.184).

Moosa, I., and Li, L. (2013). An operational risk profile: the experience of British firms. Applied Economics 45, 2491–2500 (https://doi.org/10.1080/00036846.2012.667556).

Nagafuji, N., Nakata, T., and Kanzaki, Y. (2011). A simple formula for operational risk capital: a proposal based on the similarity of loss severity distributions observed among eighteen Japanese banks. Report, May, Japan Financial Services Agency.

Nešlehová, J., Embrechts, P., and Chavez-Demoulin, V. (2006). Infinite-mean models and the LDA for operational risk. The Journal of Operational Risk 1(1), 3–25 (https://doi.org/10.21314/JOP.2006.001).

Peters, G. W., Shevchenko, P. V., Hassani, B., and Chapelle, A. (2016). Should the advanced measurement approach be replaced with the standardized measurement approach for operational risk? The Journal of Operational Risk 11(3), 1–49 (https://doi.org/10.21314/JOP.2016.177).

Prudential Regulation Authority (2017). Statement of policy: the PRA methodologies for setting Pillar 2 capital. Report, February, Bank of England–Prudential Regulation Authority.

www.risk.net/journals Journal of Operational Risk

To subscribe to a Risk Journal visit subscriptions.risk.net/journals or email [email protected]

Page 94: Operational Risksubscriptions.risk.net/wp-content/uploads/2019/02/JOP_13_2_TRIAL_WEB... · The Journal of Operational Risk considers submissions in the form of research papers and

To subscribe to a Risk Journal visit subscriptions.risk.net/journals or email [email protected]

Page 95: Operational Risksubscriptions.risk.net/wp-content/uploads/2019/02/JOP_13_2_TRIAL_WEB... · The Journal of Operational Risk considers submissions in the form of research papers and

Journal of Operational Risk 13(2), 83–91. DOI: 10.21314/JOP.2018.209

Research Paper

Modeling very large losses

Henryk Gzyl

Centro de Finanzas, IESA, Avenue IESA, San Bernardino, Caracas 1010, Venezuela; email: [email protected]

(Received September 18, 2017; revised January 18, 2018; accepted January 24, 2018)

ABSTRACT

In this paper, we present a simple probabilistic model for aggregating very large losses into a loss collection. This supposes that "standard" losses come in various possible sizes – small, moderate and large – which, fortunately, seem to occur with decreasing frequency. Standard modeling allows us to infer a probability distribution describing their occurrence. From the historical record, we know that very large losses do occur, albeit very rarely, yet they are not usually included in the available data sets. Such losses should be made part of the distribution for computation purposes. For example, to a bank they may be helpful in the computation of economic or regulatory capital, while to an insurance company they may be useful in the computation of premiums for losses due to catastrophic events. We develop a simple modeling procedure that allows us to include very large losses in a loss distribution obtained from moderately sized loss data. We say that a loss is large when it is larger than the value-at-risk (VaR) at a high confidence level. The original and extended distributions will have the same VaR but quite different values of tail VaR (TVaR).

Keywords: modeling very large losses; loss distribution; value-at-risk (VaR); expected shortfall (ES).

Print ISSN 1744-6740 | Online ISSN 1755-2710. © 2018 Infopro Digital Risk (IP) Limited


1 PRELIMINARIES

To paraphrase the abstract, an increasingly common problem in operational risk management at banks and in the insurance industry can be described as follows. One observes a large collection of small or moderate losses, which occur with high frequency in every standard unit of time (for instance, a year). There exists a large body of literature explaining how to model such losses. For a description of these research efforts, and as a guide to the enormous amount of literature on this subject, see, for example, Panjer (2006), Klugman et al (2012) (updated in Klugman et al (2013)), McNeil et al (2005) or Shevchenko (2011).

There is a line of work, seemingly relatively new, that consists of obtaining the distribution of losses from the empirical Laplace transform. The problem of inverting the Laplace transform of a density on $[0,\infty)$ can be recast as a fractional moment problem on $[0,1]$, which can then be solved efficiently using the maximum entropy method. Such methodology allows for aggregation using copulas (see Gomes-Gonçalves et al 2016a), and it is flexible enough to study the dependence on the sample size (see Gomes-Gonçalves et al 2016b). In fact, the maxentropic technique allows us to splice a preassigned tail onto a given density in such a way that the resulting distribution has very large moments (see Carrillo et al (2008) for details).

It is the aim of this paper to examine a problem recently dealt with in Cirillo and Taleb (2016) and Geman et al (2016). These authors consider the problem of determining the tail behavior of the loss distribution for the computation of regulatory capital. Consider Geman et al (2016) in particular, in which the authors use the maximum entropy technique, coupled with a change of variables and an application of extreme value methodology, to obtain the tail behavior of a loss distribution.

In Section 2, we develop a simple model that includes a finite number of very large losses in order to obtain a better description of the total loss distribution. For that, we write the total loss $L$ as the sum $L_1 + L_2$, where $L_1$ describes standard losses collected over many time periods. These losses may range from small to large, and we denote their probability density by $f$. By $L_2$ we denote very large and rare losses: here, "very large" means several orders of magnitude larger than $\mathrm{VaR}_{0.99}(L_1)$, while "rare" means occurring with a small probability (say, less than 0.001). An example would be an insurance company covering damage to property that has to pay for losses due to earthquakes, or a bank that has to cover the mismanagement of assets once every so many years.

In Section 2, we suppose that $L_1$ and $L_2$ are independent, whereas in Section 3 we briefly repeat these arguments under the assumption that the very large losses may not be independent of the ordinary losses. We devote Section 4 to some examples, which are simple enough for the computations to be carried out analytically. We conclude in Section 5 with a few remarks.


2 LARGE LOSS MODELING: INDEPENDENT CASE

To begin, suppose that, after a number of periods of observation, we have assembled a model for the losses described by two positive random variables $L_1$ and $L_2$ such that the following hold (a short simulation sketch illustrating these assumptions follows the list).

(1) The random variable $L_1$ may itself be the result of a process of aggregation, and it may include losses that can be called small, medium or large. We suppose that it is continuous and that we have determined its probability density $f(x)$.

(2) We suppose that $L_2$ is a discrete random variable, taking values $\ell_1 < \ell_2 < \cdots < \ell_K$ with probabilities $\{p_k : k = 0, \dots, K\}$, $K$ being a "small" number (a one-digit figure, say).

(3) We use $\ell_0 = 0$ to denote the fact that no large loss occurs during an observation period, and $p_0 = 1 - \sum_{k=1}^{K} p_k$ is very close to 1.

(4) We suppose, for the time being, that $L_1$ and $L_2$ are independent.
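As a concrete illustration of this setup, here is a minimal simulation sketch of a loss variable satisfying assumptions (1)–(4); the lognormal severity for $L_1$ and the levels $\ell_k$ and probabilities $p_k$ are illustrative choices, not values taken from the paper.

    # Minimal sketch of assumptions (1)-(4); all numbers are illustrative.
    import numpy as np

    rng = np.random.default_rng(0)

    def draw_l1(n):
        # (1) L1: a continuous "standard" aggregate loss (illustrative lognormal).
        return rng.lognormal(mean=2.0, sigma=0.8, size=n)

    ell = np.array([0.0, 5.0e3, 2.0e4])      # (2)-(3) levels: ell_0 = 0 < ell_1 < ell_2
    p = np.array([0.9985, 0.001, 0.0005])    # p_0 = 1 - p_1 - p_2, very close to 1

    n = 1_000_000
    l1 = draw_l1(n)
    l2 = rng.choice(ell, size=n, p=p)        # (4) L2 drawn independently of L1
    total = l1 + l2                          # L = L1 + L2

    print("P(L2 > 0):", (l2 > 0).mean())
    print("E[L1], E[L]:", l1.mean(), total.mean())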

The proof of the following assertion is simple.

Lemma 2.1 Using the notation introduced above, and under the four assumptions listed, if $h$ is any positive measurable function and $L = L_1 + L_2$, then
$$E[h(L)] = E\big[E[h(L_1 + L_2) \mid L_2]\big] = \sum_{k=0}^{K} p_k \int_{\ell_k}^{\infty} h(x)\, f(x - \ell_k)\, dx. \qquad (2.1)$$

It will be rather convenient to split this sum explicitly:
$$E[h(L)] = p_0 \int_{0}^{\infty} h(x) f(x)\, dx + \sum_{k=1}^{K} p_k \int_{\ell_k}^{\infty} h(x)\, f(x - \ell_k)\, dx. \qquad (2.2)$$

This has a straightforward interpretation: the total loss is described by a mixture of densities that are simple translations of the density $f(x)$ by the amounts $\ell_k$. If we were to plot this density, we would see the graph of $f(x)$ plus a sequence of blips at $\ell_k$ for $k = 1, \dots, K$; the blips consist of translated copies of the graph of $f(x)$, scaled down by the $p_k$, $k = 1, \dots, K$. Thus, the tail behavior of $L$ is determined by that of $L_1$ together with the scaled-down copies of $f$. In order to compute quantiles invoking (2.2), it is convenient to take $h(x) = I_{[0,V]}(x)$, where $I_A$ is standard notation for the indicator function of the set $A$. In this case, (2.2) becomes
$$P(L \le V) = p_0\, P(L_1 \le V) + \sum_{\ell_k \le V} p_k\, P(L_1 \le V - \ell_k). \qquad (2.3)$$

Recall that, for a positive random variable $X$ with a positive density, $\mathrm{VaR}_\alpha(X)$ is defined as the solution in $V$ of $\alpha = P(X \le V)$. With this, the proof of the following is clear.
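Equation (2.3) also lends itself to direct numerical evaluation. The sketch below computes the mixture distribution function and finds $\mathrm{VaR}_\alpha(L)$ by root finding, assuming (purely for illustration) an exponential density with mean $m$ for $L_1$ and the same levels and probabilities as above; it also checks the quantile identity of the lemma that follows.

    # Sketch of (2.3) and the VaR search; the exponential L1 is an assumption.
    import numpy as np
    from scipy.optimize import brentq

    m = 100.0                                # illustrative mean of exponential L1
    ell = np.array([0.0, 5.0e3, 2.0e4])
    p = np.array([0.9985, 0.001, 0.0005])

    def cdf_l1(x):
        return np.where(x > 0.0, 1.0 - np.exp(-x / m), 0.0)

    def cdf_total(v):
        # P(L <= V) = sum_k p_k P(L1 <= V - ell_k); terms with ell_k > V vanish.
        return float(np.sum(p * cdf_l1(v - ell)))

    alpha = 0.99
    # Bisection on the increasing function V -> P(L <= V) - alpha.
    v = brentq(lambda t: cdf_total(t) - alpha, 0.0, ell[-1] + 100.0 * m)
    print("VaR_alpha(L)       :", v)
    # Lemma 2.2: when VaR_{alpha/p0}(L1) < ell_1, the two quantiles coincide.
    print("VaR_{alpha/p0}(L1) :", m * np.log(p[0] / (p[0] - alpha)))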


Lemma 2.2 Suppose that $\mathrm{VaR}_{\alpha/p_0}(L_1) < \ell_1$. Then $\mathrm{VaR}_\alpha(L) = \mathrm{VaR}_{\alpha/p_0}(L_1)$.

The proof of the assertion is clear: both sides of (2.3) are increasing in $V$, and the level $\alpha$ is reached by the first term on the right-hand side before the second term becomes larger than 0. Even though we might exclude this case as part of the definition of very large losses, we can still ask what happens when $\mathrm{VaR}_{\alpha/p_0}(L_1) > \ell_1$. In the right-hand side of (2.3), replace $V$ by $\ell_1 + v$ and note the following.

Lemma 2.3 Suppose there is a $v_1$ with $\ell_1 + v_1 < \ell_2$ such that
$$\alpha = p_0\, P(L_1 \le \ell_1 + v_1) + p_1\, P(L_1 \le v_1);$$
then $\mathrm{VaR}_\alpha(L) = \ell_1 + v_1$.

Clearly, this procedure can be carried out recursively. However, let us suppose that we are operating in the first case, under the definition of very large losses. For ease of notation, let us now suppose that $V$ is such that $\alpha = P(L \le V)$, and consider $E[L \mid L > V]$. According to (2.2),
$$E[L \mid L > V] = \frac{1}{1-\alpha}\bigg[ p_0 \int_{V}^{\infty} x f(x)\, dx + \sum_{k=1}^{K} p_k \int_{V}^{\infty} x\, f(x - \ell_k)\, dx \bigg],$$

where we suppose that $f(x - \ell_k) = 0$ whenever $x < \ell_k$. For the next step, we need the following observation:
$$I_{[V,\infty)}\, I_{[\ell,\infty)} = \begin{cases} I_{[V,\infty)}, & \ell \le V, \\ I_{[\ell,\infty)}, & \ell > V. \end{cases}$$

With this in mind, let us compute the two possible cases for the right-hand side of $E[L \mid L > V]$. Consider first the case $\ell \le V$. Changing variables $x - \ell \to y$, we get
$$\int_{V}^{\infty} x\, f(x - \ell)\, dx = \int_{V-\ell}^{\infty} (y + \ell) f(y)\, dy = P(L_1 > V - \ell)\, E[L_1 \mid L_1 > V - \ell] + \ell\, P(L_1 > V - \ell).$$

Consider now $V < \ell$. In this case, as similar computations show,
$$\int_{\ell}^{\infty} x\, f(x - \ell)\, dx = \int_{0}^{\infty} (y + \ell) f(y)\, dy = E[L_1] + \ell.$$

We collect these remarks in the following proposition.


Proposition 2.4 Using the notation introduced above, and with $V = \mathrm{VaR}_\alpha(L)$, we have
$$E[L \mid L > V] = \frac{1}{1-\alpha}\bigg[ p_0\, P(L_1 > V)\, E[L_1 \mid L_1 > V] + \sum_{\ell_k < V} p_k \big( P(L_1 > V - \ell_k)\, E[L_1 \mid L_1 > V - \ell_k] + \ell_k\, P(L_1 > V - \ell_k) \big) + \sum_{\ell_k > V} p_k \big( E[L_1] + \ell_k \big) \bigg]. \qquad (2.4)$$

The interesting consequence of this is shown in the corollary below.

Corollary 2.5 Suppose that $\ell_k > V$ for $k = 1, \dots, K$. Then, reorganizing (2.4) a bit, we get
$$E[L \mid L > V] = p_0\, \frac{P(L_1 > V)}{P(L > V)}\, E[L_1 \mid L_1 > V] + \frac{1}{1-\alpha} \sum_{k=1}^{K} p_k \big( E[L_1] + \ell_k \big).$$

Keep in mind that, in this case, $\mathrm{VaR}_\alpha(L) = \mathrm{VaR}_{\alpha/p_0}(L_1)$, which makes the interpretation of the result easier. For this, note that
$$p_0\, \frac{P(L_1 > \mathrm{VaR}_\alpha(L))}{P(L > \mathrm{VaR}_\alpha(L))} = p_0\, \frac{1 - \alpha/p_0}{1 - \alpha} = \frac{p_0 - \alpha}{1 - \alpha} = 1 - \frac{1}{1-\alpha} \sum_{k=1}^{K} p_k.$$

With this, the identity in Corollary 2.5 can be rewritten as
$$E[L \mid L > V] = E[L_1 \mid L_1 > V] + \frac{1}{1-\alpha} \sum_{k=1}^{K} p_k \big( \ell_k + E[L_1] - E[L_1 \mid L_1 > V] \big). \qquad (2.5)$$
That is, the expected shortfall (ES) of the total loss is the sum of the ES of the common losses plus a corrected contribution due to the very large losses.
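Identity (2.5) is easy to cross-check by Monte Carlo. The sketch below assumes an exponential $L_1$ with mean $m$ and a single very large loss level $\ell_1$ (all figures illustrative) and compares the empirical tail mean of $L$ with the right-hand side of (2.5).

    # Monte Carlo check of (2.5) with K = 1; all parameters are illustrative.
    import numpy as np

    rng = np.random.default_rng(1)
    m, alpha = 100.0, 0.99
    ell1, p1 = 5.0e4, 0.002
    p0 = 1.0 - p1

    n = 4_000_000
    l1 = rng.exponential(scale=m, size=n)
    l2 = np.where(rng.random(n) < p1, ell1, 0.0)
    total = l1 + l2

    v = np.quantile(total, alpha)            # VaR_alpha(L)
    es_mc = total[total > v].mean()          # empirical E[L | L > V]

    # For the exponential, E[L1 | L1 > V] = V + m and E[L1] = m, so (2.5) reads:
    es_formula = (v + m) + (p1 / (1.0 - alpha)) * (ell1 + m - (v + m))
    print("ES by simulation:", es_mc)
    print("ES by (2.5)     :", es_formula)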

3 LARGE LOSS MODELING: THE DEPENDENT CASE

Although we have supposed that the losses described by $L_1$ and $L_2$ have causes that allow us to regard them as statistically independent, sometimes it may be reasonable to suppose that they are not, eg, if we suspect some common causal agent behind their occurrence. For instance, an earthquake may cause losses of differing severities, a few of them qualifying as very large. In order to develop a methodology similar to that of the previous section, the assumptions made there are replaced as follows.


(1) $L_1$ is a continuous random variable with marginal probability density $f(x)$.

(2) $L_2$ is a discrete random variable, taking values $\{\ell_k : k = 0, \dots, K\}$ with marginal probabilities $\{p_k : k = 1, \dots, K\}$, where $K$ is as above.

(3) We use $\ell_0 = 0$ to denote the fact that no large loss occurs in the period, and $p_0 = 1 - \sum_{k=1}^{K} p_k$ is very close to 1.

(4) Denote by $F(x,y) = P(L_1 \le x,\, L_2 \le y)$ the joint distribution function of $(L_1, L_2)$. For $k = 0, \dots, K$, let us put
$$g_k(x) = \frac{\partial}{\partial x}\big( F(x, \ell_k) - F(x, \ell_k^-) \big), \qquad (3.1)$$
where $F(x, \ell_k^-) = \lim_{y \uparrow \ell_k} F(x, y)$, so that $F(x, \ell_k) - F(x, \ell_k^-)$ is the jump of $F(x, y)$ at $y = \ell_k$. If the random variables were independent, this quantity would be $f(x)\, p_k$ in the notation of the previous section.

Certainly, it may be unrealistic to suppose that a joint distribution of $L_1$ and $L_2$ is known at the outset. Nevertheless, we may suppose that $F(x,y) = C(F_{L_1}(x), F_{L_2}(y))$ for some appropriate copula $C$, and carry on without further ado. With the notation just introduced, and keeping in mind that the weights $p_k$ are already built into the $g_k$, the analogue of (2.2) is
$$E[h(L)] = \int_{0}^{\infty} h(x)\, g_0(x)\, dx + \sum_{k=1}^{K} \int_{\ell_k}^{\infty} h(x)\, g_k(x - \ell_k)\, dx. \qquad (3.2)$$

Again, if we agree to set $g_k(x) = 0$ for $x < 0$, we may suppose that the total loss has the density
$$f_L(x) = \sum_{k=0}^{K} g_k(x - \ell_k).$$

Just to make sure, observe that
$$\int_{0}^{\infty} g_k(x)\, dx = F(\infty, \ell_k) - F(\infty, \ell_k^-) = p_k.$$
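To make (3.1) concrete, the sketch below computes the $g_k$ under a Gaussian copula assumption, using the standard conditional form $\partial C(u,v)/\partial u = \Phi\big((\Phi^{-1}(v) - \rho\,\Phi^{-1}(u))/\sqrt{1-\rho^2}\big)$; the correlation $\rho$, the exponential marginal for $L_1$ and the atoms of $L_2$ are all illustrative choices.

    # Sketch of g_k in (3.1) under an assumed Gaussian copula for (L1, L2).
    import numpy as np
    from scipy.stats import norm, expon
    from scipy.integrate import trapezoid

    rho, m = 0.5, 100.0
    ell = np.array([0.0, 5.0e3, 2.0e4])
    p = np.array([0.9985, 0.001, 0.0005])
    F2 = np.cumsum(p)                # F2(ell_k); F2(ell_k-) is F2[k-1] (0 for k = 0)

    def dC_du(u, v):
        # Gaussian copula: dC/du(u, v) = Phi((Phi^-1(v) - rho Phi^-1(u)) / sqrt(1 - rho^2)).
        return norm.cdf((norm.ppf(v) - rho * norm.ppf(u)) / np.sqrt(1.0 - rho**2))

    def g(k, x):
        # g_k(x) = f(x) [dC/du(F1(x), F2(ell_k)) - dC/du(F1(x), F2(ell_k-))].
        u = expon.cdf(x, scale=m)    # F1, here an illustrative exponential marginal
        lo = dC_du(u, F2[k - 1]) if k > 0 else 0.0
        return expon.pdf(x, scale=m) * (dC_du(u, F2[k]) - lo)

    # Each g_k integrates to p_k, as checked in the text.
    x = np.linspace(1e-9, 30.0 * m, 60_001)
    for k in range(len(ell)):
        print(k, trapezoid(g(k, x), x))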

It is not hard to verify that, with some minor changes of notation, the contents of Lemmas 2.2 and 2.3, Proposition 2.4 and Corollary 2.5 are applicable to this case as well. We shall not pursue the matter any further, and instead consider some examples.

4 EXAMPLES

As mentioned above, we now consider some examples that illustrate our different results. These are simple enough for the computations to be carried out analytically.


Here, we suppose that all large losses are grouped together into a single large loss of size $\ell$, which occurs with probability $p_1$; as before, $p_0 = 1 - p_1$ denotes the probability of not observing the large loss.

4.1 Exponential density

Here, we suppose that $L_1 \sim \exp(1/m)$. Then $E[L_1] = m$, and for any $V > 0$ and any positive function $h$ we have
$$E[h(L_1) \mid L_1 > V] = \int_{0}^{\infty} h(mx + V)\, e^{-x}\, dx.$$

If we consider $\alpha < p_0$ (otherwise things will not make sense) and suppose that the solution to $P(L_1 \le V) = \alpha/p_0$ is less than $\ell$, then, according to Lemma 2.2, we have
$$\mathrm{VaR}_\alpha(L) = \mathrm{VaR}_{\alpha/p_0}(L_1) = m \ln\bigg( \frac{p_0}{p_0 - \alpha} \bigg).$$

In this case, according to (2.5), the tail VaR (TVaR), or ES, of the total loss can be readily computed using the remark above. After rearranging things a bit, we obtain
$$E[L \mid L > \mathrm{VaR}] = \mathrm{VaR} + m + \frac{p_1}{1-\alpha}\, (\ell - \mathrm{VaR}).$$
Here, the first two terms make up the shortfall of $L_1$, while the last term is the contribution of the big losses to the shortfall of $L$. Clearly, if $\ell$ is very large, say several orders of magnitude larger than VaR, its contribution to the total loss can be quite important.
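For concreteness, these closed forms can be evaluated directly; the figures below are the same illustrative ones used in the Monte Carlo sketch after (2.5), so the two outputs should agree.

    # Closed forms of Section 4.1 with the illustrative numbers used above.
    import numpy as np

    m, alpha = 100.0, 0.99
    ell, p1 = 5.0e4, 0.002
    p0 = 1.0 - p1

    var_l = m * np.log(p0 / (p0 - alpha))                      # VaR_alpha(L)
    tvar_l = var_l + m + (p1 / (1.0 - alpha)) * (ell - var_l)  # E[L | L > VaR]
    print("VaR :", var_l)
    print("TVaR:", tvar_l)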

4.2 Generalized Pareto density

Let us now suppose that the common losses follow a generalized Pareto density $f(x) = k(1+x)^{-(k+1)}$, with $k$ large enough for $L_1$ to have as many moments as we need. Then the survival function is $\bar F(x) = (1+x)^{-k}$ and $E[L_1] = (k-1)^{-1}$. In addition, for $0 < \alpha < 1$, the solution to $P(L_1 \le V) = \alpha$ is given by $V = (1-\alpha)^{-1/k} - 1$. Now, if we suppose that $\mathrm{VaR}_{\alpha/p_0}(L_1) < \ell$, according to Lemma 2.2 we have $\mathrm{VaR}_{\alpha/p_0}(L_1) = \mathrm{VaR}_\alpha(L)$, and $E[L_1 \mid L_1 > V] = (1 + kV)/(k-1)$. Therefore, according to (2.5),
$$E[L \mid L > V] = E[L_1 \mid L_1 > V] + \frac{p_1}{1-\alpha}\bigg( \ell - \frac{kV}{k-1} \bigg).$$
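A quick numerical cross-check of these Pareto formulas (the parameter values are illustrative; the conditional mean is also compared against direct integration):

    # Cross-check of the generalized Pareto formulas in Section 4.2.
    import numpy as np
    from scipy.integrate import quad

    k, alpha, p1, ell = 4.0, 0.99, 0.002, 1.0e3   # illustrative values
    p0 = 1.0 - p1

    f = lambda x: k * (1.0 + x) ** (-(k + 1.0))   # density of L1
    sf = lambda x: (1.0 + x) ** (-k)              # survival function of L1

    v = (1.0 - alpha / p0) ** (-1.0 / k) - 1.0    # VaR_{alpha/p0}(L1) = VaR_alpha(L)
    cond = quad(lambda x: x * f(x), v, np.inf)[0] / sf(v)
    print("E[L1 | L1 > V], integral:", cond)
    print("E[L1 | L1 > V], formula :", (1.0 + k * v) / (k - 1.0))

    es = (1.0 + k * v) / (k - 1.0) + (p1 / (1.0 - alpha)) * (ell - k * v / (k - 1.0))
    print("E[L | L > V] by (2.5)   :", es)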

4.3 Lognormal density

Let us suppose that $L_1 = L_0 e^{m + \sigma Z}$, where $L_0$, $m$ and $\sigma$ are constants and $Z \sim N(0,1)$. Now, for $0 < \alpha < 1$, $P(L_1 \le V) = \alpha$ can be solved in $V$ to yield $V = L_0 \exp(m + \sigma z_\alpha)$, where $z_\alpha$ is the $\alpha$-quantile of the standard normal random variable. This time,
$$E[L_1] = L_0\, e^{m + \sigma^2/2} \quad \text{and} \quad E[L_1 \mid L_1 > V] = E[L_1]\, \frac{1 - \Phi(z_\alpha - \sigma)}{P(L_1 > V)},$$
where $\Phi(x)$ denotes the cumulative distribution function of the $N(0,1)$ random variable. Again, when $V = \mathrm{VaR}_{\alpha/p_0}(L_1) = \mathrm{VaR}_\alpha(L) < \ell$, then, according to (2.5), we have
$$E[L \mid L > V] = E[L_1 \mid L_1 > V] + \frac{p_1}{1-\alpha}\big( \ell + E[L_1] - E[L_1 \mid L_1 > V] \big).$$
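And a corresponding numerical cross-check for the lognormal case, again with illustrative parameter values:

    # Cross-check of the lognormal formulas in Section 4.3.
    import numpy as np
    from scipy.stats import norm
    from scipy.integrate import quad

    L0, m, s = 1.0, 2.0, 0.8          # L1 = L0 exp(m + s Z), Z ~ N(0,1); illustrative
    alpha, p1, ell = 0.99, 0.002, 1.0e4
    p0 = 1.0 - p1

    z = norm.ppf(alpha / p0)          # z_{alpha/p0}
    v = L0 * np.exp(m + s * z)        # VaR_{alpha/p0}(L1) = VaR_alpha(L), here < ell

    mean_l1 = L0 * np.exp(m + 0.5 * s**2)
    cond = mean_l1 * (1.0 - norm.cdf(z - s)) / (1.0 - alpha / p0)

    # Check the conditional mean by direct integration in z-space.
    num = quad(lambda t: L0 * np.exp(m + s * t) * norm.pdf(t), z, np.inf)[0]
    print("E[L1 | L1 > V], formula :", cond)
    print("E[L1 | L1 > V], integral:", num / (1.0 - alpha / p0))

    es = cond + (p1 / (1.0 - alpha)) * (ell + mean_l1 - cond)
    print("E[L | L > V] by (2.5)   :", es)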

5 CONCLUDING REMARKS

The modeling of large losses proposed above requires only basic probability. Perhaps that is why it has not yet been examined in treatises that rely on more advanced mathematics, which is surely necessary to determine the distribution of the common (small, moderate or large) losses.

Despite its conceptual simplicity, the approach presented above does allow for the systematic determination of VaR and TVaR, taking large losses into account. It shifts the focus onto the determination of the distribution of the common (small, moderate or large) losses: a problem for which many tools have been developed.

DECLARATION OF INTEREST

The author reports no conflicts of interest. The author alone is responsible for the content and writing of the paper.

ACKNOWLEDGEMENTS

I wish to thank the editors and the reviewers for their comments on the first draft of this paper. They contributed to making it much clearer.

REFERENCES

Carrillo, S., Gzyl, H., and Tagliani, A. (2008). Reconstructing heavy-tailed distributions by splicing with maximum entropy in the mean. The Journal of Operational Risk 7(2), 3–15 (https://doi.org/10.21314/JOP.2012.108).

Cirillo, P., and Taleb, N. N. (2016). Expected shortfall estimation for apparently infinite-mean models of operational risk. Quantitative Finance 16, 1485–1494 (https://doi.org/10.1080/14697688.2016.1162908).


Geman, D., Geman, H., and Taleb, N. (2016). Tail risk constraints and maximum entropy. Entropy 17, 3724–3737 (https://doi.org/10.3390/e17063724).

Gomes-Gonçalves, E., Gzyl, H., and Mayoral, S. (2016a). A maximum entropy approach to the loss data aggregation problem. The Journal of Operational Risk 11(1), 49–70 (https://doi.org/10.21314/JOP.2016.170).

Gomes-Gonçalves, E., Gzyl, H., and Mayoral, S. (2016b). Loss data analysis: analysis of the sample dependence in density reconstruction. Insurance: Mathematics and Economics 16, 3257–3272.

Klugman, S. A., Panjer, H. H., and Willmot, G. E. (2012). Loss Models: From Data to Decisions, 3rd edn. Wiley.

Klugman, S. A., Panjer, H. H., and Willmot, G. E. (2013). Loss Models: Further Topics. Wiley (https://doi.org/10.1002/9781118787106).

McNeil, A. J., Frey, R., and Embrechts, P. (2005). Quantitative Risk Management. PrincetonUniversity Press.

Panjer, H. (2006). Operational Risks: Modeling Analytics. Wiley (https://doi.org/10.1002/0470051310).

Shevchenko, P. (2011). Modeling Operational Risk Using Bayesian Inference. Springer(https://doi.org/10.1007/978-3-642-15923-7).
